merge-upstream/v5.10.42 from branch/tag: upstream/v5.10.42 into branch: cos-5.10
Changelog:
-------------------------------------------------------------
A. Cody Schuffelen (1):
virt_wifi: Return micros for BSS TSF values
Abhijeet Rao (1):
xhci-pci: Allow host runtime PM as default for Intel Alder Lake xHCI
Abhinav Kumar (1):
drm/msm/dp: Fix incorrect NULL check kbot warnings in DP driver
Adam Ford (1):
clk: imx: Fix reparenting of UARTs not associated with stdout
Adrian Hunter (8):
perf inject: Fix repipe usage
mmc: sdhci-pci: Fix initialization of some SD cards for Intel BYT-based controllers
mmc: sdhci-pci: Add PCI IDs for Intel LKF
perf intel-pt: Fix sample instruction bytes
perf intel-pt: Fix transaction abort handling
perf scripts python: exported-sql-viewer.py: Fix copy to clipboard from Top Calls by elapsed Time report
perf scripts python: exported-sql-viewer.py: Fix Array TypeError
perf scripts python: exported-sql-viewer.py: Fix warning display
Ahmed S. Darwish (1):
net: xfrm: Localize sequence counter per network namespace
Ajish Koshy (1):
scsi: pm80xx: Fix drives missing during rmmod/insmod loop
Al Cooper (1):
mmc: sdhci-brcmstb: Remove CQE quirk
Al Viro (2):
LOOKUP_MOUNTPOINT: we are cleaning "jumped" flag too late
hostfs: fix memory handling in follow_link()
Alaa Emad (2):
media: dvb: Add check on sp8870_readreg return
media: gspca: mt9m111: Check write_bridge for timeout
Alain Volmat (1):
spi: stm32: Fix use-after-free on unbind
Alan Stern (1):
USB: usbfs: Don't WARN about excessively large memory allocations
Alban Bedel (1):
platform/x86: intel-hid: Support Lenovo ThinkPad X1 Tablet Gen 2
Aleksander Jan Bajkowski (1):
net: lantiq: fix memory corruption in RX ring
Alex Elder (2):
net: ipa: fix init header command validation
net: ipa: memory region array is variable size
Alexander Aring (23):
net: ieee802154: nl-mac: fix check on panid
net: ieee802154: fix nl802154 del llsec key
net: ieee802154: fix nl802154 del llsec dev
net: ieee802154: fix nl802154 add llsec key
net: ieee802154: fix nl802154 del llsec devkey
net: ieee802154: forbid monitor for set llsec params
net: ieee802154: forbid monitor for del llsec seclevel
net: ieee802154: stop dump llsec params for monitors
net: ieee802154: stop dump llsec keys for monitors
net: ieee802154: forbid monitor for add llsec key
net: ieee802154: forbid monitor for del llsec key
net: ieee802154: stop dump llsec devs for monitors
net: ieee802154: forbid monitor for add llsec dev
net: ieee802154: forbid monitor for del llsec dev
net: ieee802154: stop dump llsec devkeys for monitors
net: ieee802154: forbid monitor for add llsec devkey
net: ieee802154: forbid monitor for del llsec devkey
net: ieee802154: stop dump llsec seclevels for monitors
net: ieee802154: forbid monitor for add llsec seclevel
fs: dlm: fix debugfs dump
fs: dlm: add errno handling to check callback
fs: dlm: check on minimum msglen size
fs: dlm: flush swork on shutdown
Alexander Gordeev (1):
s390/cpcmd: fix inline assembly register clobbering
Alexander Lobakin (3):
mtd: spinand: core: add missing MODULE_DEVICE_TABLE()
xsk: Respect device's headroom and tailroom on generic xmit path
gro: fix napi_gro_frags() Fast GRO breakage due to IP alignment check
Alexander Shishkin (2):
intel_th: pci: Add Rocket Lake CPU support
intel_th: pci: Add Alder Lake-M support
Alexander Shiyan (1):
ASoC: fsl_esai: Fix TDM slot setup for I2S mode
Alexander Usyskin (1):
mei: request autosuspend after sending rx flow control
Alexandru Ardelean (3):
iio: adc: Kconfig: make AD9467 depend on ADI_AXI_ADC symbol
iio: hid-sensors: select IIO_TRIGGERED_BUFFER under HID_SENSOR_IIO_TRIGGER
iio: adc: ad7192: handle regulator voltage error first
Alexandru Elisei (1):
KVM: arm64: Initialize VCPU mdcr_el2 before loading it
Alexey Kardashevskiy (2):
powerpc/iommu: Annotate nested lock for lockdep
powerpc: Fix early setup to make early_ioremap() work
Ali Saidi (1):
locking/qrwlock: Fix ordering in queued_write_lock_slowpath()
Alison Schofield (1):
x86, sched: Treat Intel SNC topology as default, COD as exception
Amir Goldstein (1):
ovl: invalidate readdir cache on changes to dir with origin
Amit (1):
nvmet: remove unused ctrl->cqs
Anastasia Kovaleva (1):
scsi: qla2xxx: Prevent PRLI in target mode
Andre Edich (1):
net: phy: lan87xx: fix access to wrong register of LAN87xx
Andre Przywara (5):
kselftest/arm64: sve: Do not use non-canonical FFR register value
arm64: dts: allwinner: Fix SD card CD GPIO for SOPine systems
arm64: dts: allwinner: Revert SD card CD GPIO for Pine64-LTS
kselftest/arm64: mte: Fix compilation with native compiler
kselftest/arm64: mte: Fix MTE feature detection
Andrei Matei (1):
bpf: Allow variable-offset stack access
Andrew Jeffery (1):
serial: 8250: Add UART_BUG_TXRACE workaround for Aspeed VUART
Andrew Price (1):
gfs2: Flag a withdraw if init_threads() fails
Andrew Scull (1):
bug: Remove redundant condition check in report_bug
Andrii Nakryiko (7):
libbpf: Add explicit padding to bpf_xdp_set_link_opts
bpftool: Fix maybe-uninitialized warnings
selftests/bpf: Re-generate vmlinux.h and BPF skeletons if bpftool changed
selftests/bpf: Fix BPF_CORE_READ_BITFIELD() macro
selftests/bpf: Fix field existence CO-RE reloc tests
selftests/bpf: Fix core_reloc test runner
bpf: Prevent writable memory-mapping of read-only ringbuf pages
Andy Gospodarek (1):
bnxt_en: Include new P5 HV definition in VF check.
Andy Shevchenko (16):
i2c: designware: Adjust bus_freq_hz when refuse high speed mode set
gpiolib: Read "gpio-line-names" from a firmware node
dmaengine: dw: Make it dependent to HAS_IOMEM
pinctrl: core: Show pin numbers for the controllers with base = 0
usb: gadget: pch_udc: Revert d3cb25a12138 completely
usb: gadget: pch_udc: Replace cpu_to_le32() by lower_32_bits()
usb: gadget: pch_udc: Check if driver is present before calling ->setup()
usb: gadget: pch_udc: Check for DMA mapping error
usb: gadget: pch_udc: Initialize device pointer before use
usb: gadget: pch_udc: Provide a GPIO line used on Intel Minnowboard (v1)
driver core: platform: Declare early_platform_cleanup() prototype
usb: typec: ucsi: Put fwnode in any case during ->probe()
scripts: switch explicitly to Python 3
iio: dac: ad5770r: Put fwnode in error case during ->probe()
platform/x86: intel_punit_ipc: Append MODULE_DEVICE_TABLE for ACPI
spi: Assume GPIO CS active high in ACPI case
Aniruddha Tvs Rao (1):
mmc: sdhci-tegra: Add required callbacks to set/clear CQE_EN bit
Anirudh Rayabharam (8):
net: hso: fix null-ptr-deref during tty device unregistration
usb: gadget: dummy_hcd: fix gpf in gadget_setup
rapidio: handle create_workqueue() failure
net: stmicro: handle clk_prepare() failure during init
video: hgafb: correctly handle card detect failure during probe
net: fujitsu: fix potential null-ptr-deref
net/smc: properly handle workqueue allocation failure
ath6kl: return error code in ath6kl_wmi_set_roam_lrssi_cmd()
Anirudh Venkataramanan (2):
ice: Continue probe on link/PHY errors
ice: Use port number instead of PF ID for WoL
Anna Schumaker (1):
NFSv4: Fix a NULL pointer dereference in pnfs_mark_matching_lsegs_return()
Annaliese McDermond (3):
ASoC: tlv320aic32x4: Register clocks before registering component
ASoC: tlv320aic32x4: Increase maximum register in regmap
sc16is7xx: Defer probe if device read fails
Anson Jacob (3):
drm/amdkfd: Fix UBSAN shift-out-of-bounds warning
drm/amd/display: Fix UBSAN warning for not a valid value for type '_Bool'
drm/amd/display: Fix UBSAN: shift-out-of-bounds warning
Anthony Wang (1):
drm/amd/display: Force vsync flip when reconfiguring MPCC
Antoine Tenart (2):
vxlan: do not modify the shared tunnel info when PMTU triggers an ICMP reply
geneve: do not modify the shared tunnel info when PMTU triggers an ICMP reply
Antonio Borneo (1):
spi: stm32: drop devres version of spi_register_master
Anup Patel (1):
RISC-V: Fix error code returned by riscv_hartid_to_cpuid()
Archie Pusaka (3):
Bluetooth: verify AMP hci_chan before amp_destroy
Bluetooth: Set CONF_NOT_COMPLETE as l2cap_chan default
Bluetooth: check for zapped sk before connecting
Ard Biesheuvel (6):
ARM: 9056/1: decompressor: fix BSS size calculation for LLVM ld.lld
crypto: api - check for ERR pointers in crypto_destroy_tfm()
ARM: 9011/1: centralize phys-to-virt conversion of DT/ATAGS address
ARM: 9012/1: move device tree mapping out of linear region
ARM: 9020/1: mm: use correct section size macro to describe the FDT virtual address
ARM: 9027/1: head.S: explicitly map DT even if it lives in the first physical section
Aric Cyr (2):
drm/amd/display: Don't optimize bandwidth before disabling planes
drm/amd/display: DCHUB underflow counter increasing in some scenarios
Ariel Levkovich (2):
net/mlx5e: Fix mapping of ct_label zero
net/mlx5: Set term table as an unmanaged flow table
Arkadiusz Kubalewski (4):
i40e: Fix sparse warning: missing error code 'err'
i40e: Fix sparse error: 'vsi->netdev' could be null
i40e: Fix sparse error: uninitialized symbol 'ring'
i40e: Fix sparse errors in i40e_txrx.c
Arnaldo Carvalho de Melo (2):
perf map: Tighten snprintf() string precision to pass gcc check on some 32-bit arches
perf symbols: Fix dso__fprintf_symbols_by_name() to return the number of printed chars
Arnd Bergmann (25):
x86/build: Turn off -fcf-protection for realmode targets
remoteproc: qcom: pil_info: avoid 64-bit division
soc/fsl: qbman: fix conflicting alignment attributes
lockdep: Address clang -Wformat warning printing for %hd
drm/imx: imx-ldb: fix out of bounds array access warning
ARM: keystone: fix integer overflow warning
ARM: omap1: fix building with clang IAS
Input: i8042 - fix Pegatron C15B ID entry
kasan: fix hwasan build for gcc
amdgpu: avoid incorrect %hu format string
security: commoncap: fix -Wstringop-overread warning
spi: rockchip: avoid objtool warning
crypto: poly1305 - fix poly1305_core_setkey() declaration
irqchip/gic-v3: Fix OF_BAD_ADDR error handling
wlcore: fix overlapping snprintf arguments in debugfs
net: enetc: fix link error again
smp: Fix smp_call_function_single_async prototype
ext4: fix debug format string warning
x86/msr: Fix wr/rdmsr_safe_regs_on_cpu() prototypes
airo: work around stack usage warning
kgdb: fix gcc-11 warning on indentation
usb: sl811-hcd: improve misleading indentation
isdn: capi: fix mismatched prototypes
PCI: thunder: Fix compile testing
kcsan: Fix debugfs initcall return type
Artur Petrosyan (3):
usb: dwc2: Fix session request interrupt handler
usb: dwc2: Fix host mode hibernation exit with remote wakeup flow.
usb: dwc2: Fix hibernation between host and device modes.
Arun Easi (2):
scsi: qla2xxx: Fix crash in qla2xxx_mqueuecommand()
PCI: Allow VPD access for QLogic ISP2722
Athira Rajeev (2):
powerpc/perf: Fix PMU constraint check for EBB events
powerpc/perf: Fix the threshold event selection for memory events in power10
Atul Gopinathan (2):
cdrom: gdrom: deallocate struct gdrom_unit fields in remove_gdrom
serial: max310x: unregister uart driver in case of failure and abort
Aurelien Aptel (2):
smb2: fix use-after-free in smb2_ioctl_query_info()
cifs: set server->cipher_type to AES-128-CCM for SMB3.0
Avri Altman (2):
mmc: block: Update ext_csd.cache_ctrl if it was written
mmc: block: Issue a cache flush only when it's enabled
Axel Rasmussen (1):
userfaultfd: release page in error path to avoid BUG_ON
Aya Levin (5):
net/mlx5e: Fix ethtool indication of connector type
net/mlx5: Fix PPLM register mapping
net/mlx5: Fix PBMC register mapping
net/mlx5e: Fix setting of RS FEC mode
net/mlx5e: Fix error path of updating netdev queues
Ayush Garg (1):
Bluetooth: Fix incorrect status handling in LE PHY UPDATE event
Ayush Sawal (2):
crypto: chelsio - Read rxchannel-id from firmware
cxgb4/ch_ktls: Clear resources when pf4 device is removed
Badhri Jagan Sridharan (5):
usb: typec: tcpm: Address incorrect values of tcpm psy for fixed supply
usb: typec: tcpm: Address incorrect values of tcpm psy for pps supply
usb: typec: tcpm: update power supply once partner accepts
usb: typec: tcpci: Check ROLE_CONTROL while interpreting CC_STATUS
usb: typec: tcpm: Fix error while calculating PPS out values
Baptiste Lepers (1):
sunrpc: Fix misplaced barrier in call_decode
Bart Van Assche (4):
scsi: qla2xxx: Always check the return value of qla24xx_get_isp_stats()
scsi: libfc: Fix a format specifier
blk-mq: Swap two calls in blk_mq_exit_queue()
scsi: ufs: core: Increase the usable queue depth
Bas Nieuwenhuizen (1):
tools/power/turbostat: Fix turbostat for AMD Zen CPUs
Bastian Germann (1):
ASoC: sunxi: sun4i-codec: fill ASoC card owner
Ben Gardon (5):
KVM: x86/mmu: change TDP MMU yield function returns to match cond_resched
KVM: x86/mmu: Merge flush and non-flush tdp_mmu_iter_cond_resched
KVM: x86/mmu: Rename goal_gfn to next_last_level_gfn
KVM: x86/mmu: Ensure forward progress when yielding in TDP MMU iter
KVM: x86/mmu: Yield in TDP MMU iter even if no SPTES changed
Ben Greear (1):
mac80211: fix time-is-after bug in mlme
Bence Csókás (1):
i2c: Add I2C_AQ_NO_REP_START adapter quirk
Benjamin Block (1):
dm rq: fix double free of blk_mq_tag_set in dev remove after table load fails
Benjamin Segall (1):
kvm: exit halt polling on need_resched() as well
Bhaumik Bhatt (3):
bus: mhi: core: Clear configuration from channel context during reset
bus: mhi: core: Destroy SBL devices when moving to mission mode
bus: mhi: core: Clear context for stopped channels from remove()
Bill Wendling (1):
arm64/vdso: Discard .note.gnu.property sections in vDSO
Bixuan Cui (3):
usb: musb: fix PM reference leak in musb_irq_work()
usb: core: hub: Fix PM reference leak in usb_port_resume()
iommu/virtio: Add missing MODULE_DEVICE_TABLE
Bjorn Andersson (5):
net: qrtr: Avoid potential use after free in MHI send
soc: qcom: mdt_loader: Validate that p_filesz < p_memsz
soc: qcom: mdt_loader: Detect truncated read of segments
remoteproc: qcom_q6v5_mss: Validate p_filesz in ELF loader
usb: typec: mux: Fix matching with typec_altmode_desc
Bob Pearson (1):
RDMA/rxe: Fix a bug in rxe_fill_ip_info()
Bob Peterson (1):
gfs2: report "already frozen/thawed" errors
Bodo Stroesser (1):
scsi: target: tcmu: Return from tcmu_handle_completions() if cmd_id not found
Boris Brezillon (2):
drm/panfrost: Clear MMU irqs before handling the fault
drm/panfrost: Don't try to map pages that are already mapped
Boris Burkov (1):
btrfs: return whole extents in fiemap
Brendan Jackman (1):
libbpf: Fix signed overflow in ringbuf_process_ring
Brian King (1):
scsi: ibmvfc: Fix invalid state machine BUG_ON()
Bruce Allan (1):
ice: fix memory allocation call
Caleb Connolly (1):
Input: s6sy761 - fix coordinate read bit shift
Calvin Walton (1):
tools/power turbostat: Fix offset overflow issue in index converting
Can Guo (5):
scsi: ufs: core: Fix task management request completion timeout
scsi: ufs: core: Fix wrong Task Tag used in task management request UPIUs
scsi: ufs: core: Do not put UFS power into LPM if link is broken
scsi: ufs: core: Cancel rpm_dev_flush_recheck_work during system suspend
scsi: ufs: core: Narrow down fast path in system suspend path
Carl Philipp Klemm (1):
power: supply: cpcap-charger: Add usleep to cpcap charger to avoid usb plug bounce
Carlos Leija (1):
ARM: OMAP4: PM: update ROM return address for OSWR and OFF
Carsten Haitzler (1):
drm/komeda: Fix bit check to import to value of proper type
Catalin Marinas (3):
arm64: mte: Ensure TIF_MTE_ASYNC_FAULT is set atomically
arm64: Remove arm64_dma32_phys_limit and its uses
arm64: Fix race condition on PG_dcache_clean in __sync_icache_dcache()
Catherine Sullivan (2):
gve: Check TX QPL was actually assigned
gve: Upgrade memory barrier in poll routine
Cezary Rojewski (1):
ASoC: Intel: Skylake: Compile when any configuration is selected
Chaitanya Kulkarni (3):
scsi: target: pscsi: Fix warning in pscsi_complete_cmd()
nvmet: add lba to sect conversion helpers
nvmet: fix inline bio check for bdev-ns
Changfeng (1):
drm/amdgpu: disable 3DCGCG on picasso/raven1 to avoid compute hang
Chao Yu (14):
f2fs: fix to avoid out-of-bounds memory access
f2fs: move ioctl interface definitions to separated file
f2fs: fix compat F2FS_IOC_{MOVE,GARBAGE_COLLECT}_RANGE
f2fs: fix to allow migrating fully valid segment
f2fs: fix panic during f2fs_resize_fs()
f2fs: fix to align to section for fallocate() on pinned file
f2fs: fix to update last i_size if fallocate partially succeeds
f2fs: fix to avoid touching checkpointed data in get_victim()
f2fs: fix to cover __allocate_new_section() with curseg_lock
f2fs: fix to avoid accessing invalid fio in f2fs_allocate_data_block()
f2fs: avoid unneeded data copy in f2fs_ioc_move_range()
f2fs: compress: fix to free compress page correctly
f2fs: compress: fix race condition of overwrite vs truncate
f2fs: compress: fix to assign cc.cluster_idx correctly
Charan Teja Reddy (1):
sched,psi: Handle potential task count underflow bugs more gracefully
Chen Huang (1):
powerpc: Fix HAVE_HARDLOCKUP_DETECTOR_ARCH build configuration
Chen Hui (2):
clk: qcom: a53-pll: Add missing MODULE_DEVICE_TABLE
clk: qcom: apss-ipq-pll: Add missing MODULE_DEVICE_TABLE
Chen Jun (1):
posix-timers: Preserve return value in clock_adjtime32()
Chinh T Cao (2):
ice: Refactor DCB related variables out of the ice_port_info struct
ice: Recognize 860 as iSCSI port in CEE mode
Chinmay Agarwal (1):
neighbour: Prevent race condition in neighbour subsystem
Chris Chiu (2):
block: clear GD_NEED_PART_SCAN later in bdev_disk_changed
USB: Add reset-resume quirk for WD19's Realtek Hub
Chris Dion (1):
SUNRPC: Handle major timeout in xprt_adjust_timeout()
Chris Park (1):
drm/amd/display: Disconnect non-DP with no EDID
Chris von Recklinghausen (1):
PM: hibernate: x86: Use crc32 instead of md5 for hibernation e820 integrity check
Christian A. Ehrhardt (1):
vfio/pci: Add missing range check in vfio_pci_mmap
Christian Gmeiner (1):
serial: 8250_pci: handle FL_NOIRQ board flag
Christian König (2):
drm/amdgpu: fix concurrent VM flushes on Vega/Navi v2
drm/amdgpu: stop touching sched.ready in the backend
Christoph Hellwig (5):
block: return -EBUSY when there are open partitions in blkdev_reread_part
md: split mddev_find
md: factor out a mddev_find_locked helper from mddev_find
nvme: do not try to reconfigure APST when the controller is not live
nvme-multipath: fix double initialization of ANA state
Christophe JAILLET (16):
net: davicom: Fix regulator not turned off on failed probe
mmc: uniphier-sd: Fix an error handling path in uniphier_sd_probe()
mmc: uniphier-sd: Fix a resource leak in the remove function
usb: gadget: s3c: Fix incorrect resources releasing
usb: gadget: s3c: Fix the error handling path in 's3c2410_udc_probe()'
media: venus: core: Fix some resource leaks in the error path of 'venus_probe()'
usb: fotg210-hcd: Fix an error message
usb: musb: Fix an error message
ACPI: scan: Fix a memory leak in an error handling path
xhci: Do not use GFP_KERNEL in (potentially) atomic context
openrisc: Fix a memory leak
uio_hv_generic: Fix a memory leak in error handling paths
spi: spi-fsl-dspi: Fix a resource leak in an error handling path
net: netcp: Fix an error message
net: mdio: thunder: Fix a double free issue in the .remove function
net: mdio: octeon: Fix some double free issues
Christophe Kerello (1):
spi: stm32-qspi: fix pm_runtime usage_count counter
Christophe Leroy (5):
mm: ptdump: fix build failure
powerpc/32: Fix boot failure with CONFIG_STACKPROTECTOR
powerpc/64: Fix the definition of the fixmap area
powerpc/52xx: Fix an invalid ASM expression ('addi' used instead of 'add')
powerpc/32: Statically initialise first emergency context
Chuck Lever (6):
NFSD: Fix sparse warning in nfs4proc.c
SUNRPC: Move fault injection call sites
SUNRPC: Remove trace_xprt_transmit_queued
xprtrdma: Avoid Receive Queue wrapping
xprtrdma: Fix cwnd update ordering
xprtrdma: rpcrdma_mr_pop() already does list_del_init()
Chunfeng Yun (6):
arm64: dts: mt8173: fix property typo of 'phys' in dsi node
usb: xhci-mtk: support quirk to disable usb2 lpm
usb: xhci-mtk: remove or operator for setting schedule parameters
usb: xhci-mtk: improve bandwidth scheduling with TT
usb: core: hub: fix race condition about TRSMRCY of resume
usb: core: reduce power-on-good delay time of root hub
Ciara Loftus (4):
libbpf: Ensure umem pointer is non-NULL before dereferencing
libbpf: Restore umem state after socket create failure
libbpf: Only create rx and tx XDP rings when necessary
libbpf: Fix potential NULL pointer dereference
Claire Chang (1):
swiotlb: Fix the type of index
Claudio Imbrenda (5):
KVM: s390: VSIE: correctly handle MVPG when in VSIE
KVM: s390: split kvm_s390_logical_to_effective
KVM: s390: VSIE: fix MVPG handling for prefixing and MSO
KVM: s390: split kvm_s390_real_to_abs
KVM: s390: extend kvm_s390_shadow_fault to return entry pointer
Claudiu Beznea (2):
net: macb: restore cmp registers on resume path
net: macb: fix the restore of cmp registers
Claudiu Manoil (1):
gianfar: Handle error code at MAC address change
Colin Ian King (30):
ice: Fix potential infinite loop when using u8 loop counter
clk: socfpga: arria10: Fix memory leak of socfpga_clk on error return
drm/radeon: fix copy of uninitialized variable back to userspace
memory: gpmc: fix out of bounds read and dereference on gpmc_cs[]
crypto: sun8i-ss - Fix memory leak of object d when dma_iv fails to map
staging: rtl8192u: Fix potential infinite loop
crypto: sun8i-ss - Fix memory leak of pad
crypto: sa2ul - Fix memory leak of rxd
usb: gadget: r8a66597: Add missing null check on return from platform_get_resource
media: vivid: fix assignment of dev->fbuf_out_flags
media: [next] staging: media: atomisp: fix memory leak of object flash
media: m88rs6000t: avoid potential out-of-bounds reads on arrays
clk: uniphier: Fix potential infinite loop
scsi: pm80xx: Fix potential infinite loop
ASoC: Intel: boards: sof-wm8804: add check for PLL setting
liquidio: Fix unintended sign extension of a left shift of a u16
xfs: fix return of uninitialized value in variable error
mt7601u: fix always true expression
cxgb4: Fix unintentional sign extension issues
net: thunderx: Fix unintentional sign extension issue
net/mlx5: Fix bit-wise and with zero
ALSA: usb: midi: don't return -ENOMEM when usb_urb_ep_type_check fails
net: davinci_emac: Fix incorrect masking of tx and rx error channel
wlcore: Fix buffer overrun by snprintf due to incorrect buffer size
KEYS: trusted: Fix memory leak on object td
f2fs: fix a redundant call to f2fs_balance_fs if an error occurs
dmaengine: idxd: Fix potential null dereference on pointer status
fs/proc/generic.c: fix incorrect pde_is_permanent check
iio: tsl2583: Fix division by a zero lux_val
serial: tegra: Fix a mask operation that is always true
Colin Xu (2):
drm/i915/gvt: Fix virtual display setup for BXT/APL
drm/i915/gvt: Fix vfio_edid issue for BXT/APL
Cong Wang (1):
smc: disallow TCP_ULP in smc_setsockopt()
Corentin Labbe (2):
crypto: sun8i-ss - fix result memory leak on error path
crypto: allwinner - add missing CRYPTO_ prefix
Cédric Le Goater (2):
powerpc/xive: Drop check on irq_data in xive_core_debug_show()
powerpc/xive: Fix xmon command "dxi"
DENG Qingfang (1):
net: dsa: mt7530: fix VLAN traffic leaks
Dafna Hirschfeld (1):
media: rkisp1: rsz: crash fix when setting src format
Damien Le Moal (1):
null_blk: fix command timeout completion handling
Dan Carpenter (30):
thunderbolt: Fix a leak in tb_retimer_add()
thunderbolt: Fix off by one in tb_port_find_retimer()
dmaengine: plx_dma: add a missing put_device() on error path
ipw2x00: potential buffer overflow in libipw_wx_set_encodeext()
ovl: fix missing revert_creds() on error path
mtd: rawnand: fsmc: Fix error code in fsmc_nand_probe()
regulator: bd9576: Fix return from bd957x_probe()
node: fix device cleanups in error handling code
Drivers: hv: vmbus: Use after free in __vmbus_open()
soc: aspeed: fix a ternary sign expansion bug
drm/amd/display: Fix off by one in hdmi_14_process_transaction()
media: atomisp: Fix use after free in atomisp_alloc_css_stat_bufs()
drm: xlnx: zynqmp: fix a memset in zynqmp_dp_train()
HSI: core: fix resource leaks in hsi_add_client_from_dt()
nfc: pn533: prevent potential memory corruption
rtw88: Fix an error code in rtw_debugfs_set_rsvd_page()
drm/i915/gvt: Fix error code in intel_gvt_init_device()
drm/amd/pm: fix error code in smu_set_power_limit()
bnxt_en: fix ternary sign extension bug in bnxt_show_temp()
kfifo: fix ternary sign extension bugs
SUNRPC: fix ternary sign expansion bug in tracing
firmware: arm_scpi: Prevent the ternary sign expansion bug
RDMA/uverbs: Fix a NULL vs IS_ERR() bug
NFS: fix an incorrect limit in filelayout_decode_layout()
net: dsa: fix a crash if ->get_sset_count() fails
chelsio/chtls: unlock on error in chtls_pt_recvmsg()
net: hso: check for allocation failure in hso_create_bulk_serial_device()
staging: emxx_udc: fix loop in _nbu2ss_nuke()
ASoC: cs35l33: fix an error code in probe()
scsi: libsas: Use _safe() loop in sas_resume_port()
Dan Williams (1):
xen/unpopulated-alloc: consolidate pgmap manipulation
Daniel Beer (1):
mmc: sdhci-pci-gli: increase 1.8V regulator wait
Daniel Borkmann (15):
bpf: Use correct permission flag for mixed signed bounds arithmetic
bpf: Ensure off_reg has no mixed signed bounds for all types
bpf: Move off_reg into sanitize_ptr_alu
bpf: Rework ptr_limit into alu_limit and add common error path
bpf: Improve verifier error messages for users
bpf: Move sanitize_val_alu out of op switch
bpf: Refactor and streamline bounds check into helper
bpf: Tighten speculative pointer arithmetic mask
bpf: Fix masking negation logic upon negative dst register
bpf: Fix leakage of uninitialized bpf stack under speculation
bpf: Fix propagation of 32 bit unsigned bounds from 64 bit bounds
bpf: Fix alu32 const subreg bound tracking on bitwise operations
bpf: Wrap aux data inside bpf_sanitize_info container
bpf: Fix mask direction swap upon off reg sign change
bpf: No need to simulate speculative domain for immediates
Daniel Cordova A (1):
ALSA: hda: fixup headset for ASUS GU502 laptop
Daniel Gomez (2):
drm/amdgpu/ttm: Fix memory leak userptr pages
drm/radeon/ttm: Fix memory leak userptr pages
Daniel Jurgens (1):
net/mlx5: Don't request more than supported EQs
Daniel Niv (1):
media: media/saa7164: fix saa7164_encoder_register() memory leak bugs
Daniel Phan (1):
mac80211: Check crypto_aead_encrypt for errors
Daniel Wagner (1):
nvmet: reset ns->file when open fails
Daniele Palmas (1):
USB: serial: option: add Telit LE910-S1 compositions 0x7010, 0x7011
Danielle Ratson (1):
selftests: mlxsw: Remove a redundant if statement in tc_flower_scale test
Dario Binacchi (2):
serial: omap: don't disable rs485 if rts gpio is missing
serial: omap: fix rs485 half-duplex filtering
Darren Powell (1):
amdgpu/pm: Prevent force of DCEFCLK on NAVI10 and SIENNA_CICHLID
Darrick J. Wong (1):
ics932s401: fix broken handling of errors when word reading fails
Dave Ertman (1):
ice: remove DCBNL_DEVRESET bit from PF state
Dave Jiang (7):
dmaengine: idxd: Fix clobbering of SWERR overflow bit on writeback
dmaengine: idxd: fix delta_rec and crc size field for completion record
dmaengine: idxd: fix opcap sysfs attribute output
dmaengine: idxd: fix wq size store permission state
dmaengine: idxd: fix wq cleanup of WQCFG registers
dmaengine: idxd: fix dma device lifetime
dmaengine: idxd: fix cdev setup and free device lifetime issues
Dave Marchevsky (1):
bpf: Refcount task stack in bpf_get_task_stack
Dave Rigby (1):
perf unwind: Set userdata for all __report_module() paths
David Awogbemila (3):
gve: Update mgmt_msix_idx if num_ntfy changes
gve: Add NULL pointer checks when freeing irqs.
gve: Correct SKB queue index validation.
David Bauer (5):
spi: ath79: always call chipselect function
spi: ath79: remove spi-master setup and cleanup assignment
spi: sync up initial chipselect state
mtd: don't lock when recursively deleting partitions
mt76: mt76x0: disable GTK offloading
David E. Box (2):
platform/x86: intel_pmc_core: Ignore GBE LTR on Tiger Lake platforms
platform/x86: intel_pmc_core: Don't use global pmcdev in quirks
David Edmondson (1):
KVM: x86: dump_vmcs should not assume GUEST_IA32_EFER is valid
David Gow (1):
kunit: tool: Fix a python tuple typing error
David Hildenbrand (3):
s390: fix detection of vector enhancements facility 1 vs. vector packed decimal facility
kernel/resource: make walk_system_ram_res() find all busy IORESOURCE_SYSTEM_RAM resources
kernel/resource: make walk_mem_res() find all busy IORESOURCE_MEM resources
David Howells (3):
afs: Fix updating of i_mode due to 3rd party change
afs: Fix speculative status fetches
afs: Fix the nlink handling of dir-over-dir rename
David Matlack (1):
kvm: Cap halt polling at kvm->max_halt_poll_ns
David S. Miller (1):
math: Export mul_u64_u64_div_u64
David Ward (3):
ASoC: rt286: Generalize support for ALC3263 codec
ASoC: rt286: Make RT286_SET_GPIO_* readable and writable
drm/amd/display: Initialize attribute for hdcp_srm sysfs file
Davide Caratti (3):
openvswitch: fix stack OOB read while fragmenting IPv4 packets
net/sched: fq_pie: re-factor fix for fq_pie endless loop
net/sched: fq_pie: fix OOB access in the traffic path
Davidlohr Bueso (1):
fs/epoll: restore waking from ep_done_scan()
Dean Anderson (1):
usb: gadget/function/f_fs string table fix for multiple languages
Dejin Zheng (1):
PCI: xgene: Fix cfg resource mapping
Dima Chumak (2):
net/mlx5e: Fix multipath lag activation
net/mlx5e: Fix nullptr in add_vlan_push_action()
Dingchen (David) Zhang (1):
drm/amd/display: add handling for hdcp2 rx id list validation
Dinghao Liu (7):
dmaengine: tegra20: Fix runtime PM imbalance on error
media: platform: sti: Fix runtime PM imbalance in regs_show
media: sun8i-di: Fix runtime PM imbalance in deinterlace_start_streaming
mfd: arizona: Fix runtime PM imbalance on error
iio: light: gp2ap002: Fix runtime PM imbalance on error
iio: proximity: pulsedlight: Fix runtime PM imbalance on error
PCI: tegra: Fix runtime PM imbalance in pex_ep_event_pex_rst_deassert()
Dmitry Baryshkov (2):
drm/msm/dsi_pll_7nm: Fix variable usage for pll_lockdet_rate
PCI: Release OF node in pci_scan_device()'s error path
Dmitry Osipenko (6):
drm/tegra: dc: Don't set PLL clock to 0Hz
cpuidle: tegra: Fix C7 idling state on Tegra114
ARM: tegra: acer-a500: Rename avdd to vdda of touchscreen node
soc/tegra: pmc: Fix completion of power-gate toggling
soc/tegra: regulators: Fix locking up when voltage-spread is out of range
iio: gyro: mpu3050: Fix reported temperature value
Dmitry Safonov (1):
xfrm/compat: Cleanup WARN()s that can be user-triggered
Dmitry Vyukov (1):
drm/vkms: fix misuse of WARN_ON
Dmytro Laktyushkin (1):
drm/amd/display: fix dml prefetch validation
Dom Cobley (1):
drm/vc4: crtc: Reduce PV fifo threshold on hvs4
Dominik Andreas Schorpp (1):
USB: serial: ftdi_sio: add IDs for IDS GmbH Products
Don Brace (1):
scsi: smartpqi: Use host-wide tag space
Dong Aisheng (1):
PM / devfreq: Use more accurate returned new_freq as resume_freq
Dongliang Mu (2):
NFC: nci: fix memory leak in nci_allocate_device
misc/uss720: fix memory leak in uss720_probe
DooHyun Hwang (1):
mmc: core: Do a power cycle when the CMD11 fails
Douglas Gilbert (1):
HID cp2112: fix support for multiple gpiochips
Du Cheng (4):
cfg80211: remove WARN_ON() in cfg80211_sme_connect
net: sched: taprio: prevent cycle_time == 0 in parse_taprio_schedule
ethernet: sun: niu: fix missing checks of niu_pci_eeprom_read()
net: caif: remove BUG_ON(dev == NULL) in caif_xmit
Eckhart Mohr (1):
ALSA: hda/realtek: Add quirk for Intel Clevo PCx0Dx
Eddie James (2):
ARM: dts: aspeed: Rainier: Fix humidity sensor bus address
hwmon: (occ) Fix poll rate limiting
Edward Cree (3):
sfc: farch: fix TX queue lookup in TX flush done handling
sfc: farch: fix TX queue lookup in TX event handling
sfc: ef10: fix TX queue lookup in TX event handling
Elad Grupi (1):
nvmet-tcp: fix a segmentation fault during io parsing error
Eli Cohen (4):
vdpa/mlx5: Fix suspend/resume index restoration
vdpa/mlx5: Fix wrong use of bit numbers
vdpa/mlx5: Set err = -ENOMEM in case dma_map_sg_attrs fails
{net,vdpa}/mlx5: Configure interface MAC into mpfs L2 table
Elia Devito (1):
ALSA: hda/realtek: Add fixup for HP Spectre x360 15-df0xxx
Emily Deng (1):
drm/amdgpu: Fix some unload driver issues
Emmanuel Grumbach (1):
mac80211: clear the beacon's CRC after channel switch
Eric Auger (2):
KVM: arm/arm64: Fix KVM_VGIC_V3_ADDR_TYPE_REDIST read
KVM: arm64: Fix KVM_VGIC_V3_ADDR_TYPE_REDIST_REGION read
Eric Biggers (3):
random: initialize ChaCha20 constants with correct endianness
f2fs: fix error handling in f2fs_end_enable_verity()
crypto: rng - fix crypto_rng_reset() refcounting when !CRYPTO_STATS
Eric Dumazet (13):
net: ensure mac header is set in virtio_net_hdr_to_skb()
sch_red: fix off-by-one checks in red_check_params()
netfilter: nft_limit: avoid possible divide error in nft_limit_init
gro: ensure frag0 meets IP header alignment
inet: use bigger hash table for IP ID generation
net/packet: remove data races in fanout operations
ip6_vti: proper dev_{hold|put} in ndo_[un]init methods
netfilter: nftables: avoid overflows in nft_hash_buckets()
virtio_net: Do not pull payload in skb->head
ip6_gre: proper dev_{hold|put} in ndo_[un]init methods
sit: proper dev_{hold|put} in ndo_[un]init methods
ip6_tunnel: sit: proper dev_{hold|put} in ndo_[un]init methods
ipv6: remove extra dev_hold() for fallback tunnels
Eric Farman (1):
vfio-ccw: Check initialized flag in cp_init()
Erwan Le Ray (14):
serial: stm32: fix code cleaning warnings and checks
serial: stm32: add "_usart" prefix in functions name
serial: stm32: fix probe and remove order for dma
serial: stm32: fix startup by enabling usart for reception
serial: stm32: fix incorrect characters on console
serial: stm32: fix TX and RX FIFO thresholds
serial: stm32: fix a deadlock condition with wakeup event
serial: stm32: fix wake-up flag handling
serial: stm32: fix a deadlock in set_termios
serial: stm32: fix tx dma completion, release channel
serial: stm32: call stm32_transmit_chars locked
serial: stm32: fix FIFO flush in startup and set_termios
serial: stm32: add FIFO flush when port is closed
serial: stm32: fix tx_empty condition
Eryk Brol (1):
drm/amd/display: Check for DSC support instead of ASIC revision
Eryk Rybak (2):
i40e: Fix kernel oops when i40e driver removes VF's
i40e: Fix display statistics for veb_tc
Esteve Varela Colominas (1):
platform/x86: thinkpad_acpi: Allow the FnLock LED to change state
Eugene Korenevsky (1):
cifs: fix out-of-bound memory access when calling smb3_notify() at mount point
Evan Nimmo (1):
xfrm: Use actual socket sk instead of skb socket for xfrm_output_resume
Evan Quan (1):
drm/amd/pm: correct MGpuFanBoost setting
Ewan D. Milne (1):
scsi: scsi_dh_alua: Remove check for ASC 24h in alua_rtpg()
Eyal Birger (1):
xfrm: interface: fix ipv4 pmtu check to honor ip header df
Fabian Vogt (7):
Input: nspire-keypad - enable interrupts only when opened
fotg210-udc: Fix DMA on EP0 for length > max packet size
fotg210-udc: Fix EP0 IN requests bigger than two packets
fotg210-udc: Remove a dubious condition leading to fotg210_done
fotg210-udc: Mask GRP2 interrupts we don't handle
fotg210-udc: Don't DMA more than the buffer can take
fotg210-udc: Complete OUT requests on short packets
Fabien Parent (1):
arm64: dts: mediatek: fix reset GPIO level on pumpkin
Fabio Estevam (1):
media: rkvdec: Remove of_match_ptr()
Fabio Pricoco (1):
ice: Increase control queue timeout
Fabrice Gasnier (1):
mfd: stm32-timers: Avoid clearing auto reload register
Fangzhi Zuo (1):
drm/amd/display: Fix debugfs link_settings entry
Feilong Lin (1):
ACPI / hotplug / PCI: Fix reference count leak in enable_slot()
Felix Fietkau (6):
mt76: fix potential DMA mapping leak
mt76: mt7615: fix tx skb dma unmap
mt76: mt7915: fix tx skb dma unmap
mt76: mt7615: fix entering driver-own state on mt7663
net: ethernet: mtk_eth_soc: fix RX VLAN offload
perf jevents: Fix getting maximum number of fds
Felix Kuehling (1):
drm/amdkfd: fix build error with AMD_IOMMU_V2=m
Fenghua Yu (8):
selftests/resctrl: Enable gcc checks to detect buffer overflows
selftests/resctrl: Fix compilation issues for global variables
selftests/resctrl: Fix compilation issues for other global variables
selftests/resctrl: Clean up resctrl features check
selftests/resctrl: Fix missing options "-n" and "-p"
selftests/resctrl: Use resctrl/info for feature detection
selftests/resctrl: Fix incorrect parsing of iMC counters
selftests/resctrl: Fix checking for < 0 for unsigned values
Fengnan Chang (1):
ext4: fix error code in ext4_commit_super
Fernando Fernandez Mancera (1):
ethtool: fix missing NLM_F_MULTI flag when dumping
Ferry Toth (1):
usb: dwc3: pci: Enable usb2-gadget-lpm-disable for Intel Merrifield
Filipe Manana (5):
btrfs: fix metadata extent leak after failure to create subvolume
btrfs: fix race between transaction aborts and fsyncs leading to use-after-free
btrfs: fix race when picking most recent mod log operation for an old root
btrfs: fix race leading to unpersisted data and metadata on fsync
btrfs: release path before starting transaction when cloning inline extent
Finn Behrens (1):
treewide: Fix most Shebang lines
Finn Thain (1):
m68k: mvme147,mvme16x: Don't wipe PCC timer config bits
Florent Revest (1):
libbpf: Initialize the bpf_seq_printf parameters array field by field
Florian Fainelli (1):
net: phy: broadcom: Only advertise EEE for supported modes
Florian Westphal (3):
netfilter: x_tables: fix compat match/target pad out-of-bound write
netfilter: bridge: add pre_exit hooks for ebtable unregistration
netfilter: arp_tables: add pre_exit hook for table unregister
Francesco Ruggeri (1):
ipv6: record frag_max_size in atomic fragments in input path
Francois Gervais (1):
rtc: pcf85063: fallback to parent of_node
Fredrik Strupe (1):
ARM: 9071/1: uprobes: Don't hook on thumb instructions
Frieder Schrempf (1):
can: mcp251x: fix resume from sleep before interface was brought up
Fugang Duan (1):
net: fec: fix the potential memory leak in fec_enet_init()
Gao Xiang (2):
parisc: avoid a warning on u8 cast for cmpxchg on u8 pointers
erofs: add unsupported inode i_format check
Geert Uytterhoeven (5):
regulator: bd9571mwv: Fix AVS and DVFS voltage range
phy: marvell: ARMADA375_USBCLUSTER_PHY should not default to y, unconditionally
dt-bindings: media: renesas,vin: Make resets optional on R-Car Gen1
serial: sh-sci: Fix off-by-one error in FIFO threshold register setting
i2c: sh_mobile: Use new clock calculation formulas for RZ/G2E
Geoffrey D. Bennett (2):
ALSA: usb-audio: scarlett2: Fix device hang with ehci-pci
ALSA: usb-audio: scarlett2: Improve driver startup messages
George McCollister (1):
net: hsr: fix mac_len checks
Gerd Hoffmann (3):
drm/qxl: release shadow on shutdown
drm/qxl: use ttm bo priorities
Revert "drm/qxl: do not run release if qxl failed to init"
Gioh Kim (2):
block/rnbd-clt: Fix missing a memory free when unloading the module
RDMA/rtrs-clt: destroy sysfs after removing session from active list
Giovanni Cabiddu (1):
crypto: qat - fix error path in adf_isr_resource_alloc()
Greg Kroah-Hartman (62):
Linux 5.10.29
Linux 5.10.30
Linux 5.10.31
Linux 5.10.32
Linux 5.10.33
Linux 5.10.34
Linux 5.10.35
Linux 5.10.36
Linux 5.10.37
Revert "iommu/vt-d: Remove WO permissions on second-level paging entries"
Revert "iommu/vt-d: Preset Access/Dirty bits for IOVA over FL"
kobject_uevent: remove warning in init_uevent_argv()
Linux 5.10.38
Linux 5.10.39
Revert "ALSA: sb8: add a check for request_region"
Revert "rapidio: fix a NULL pointer dereference when create_workqueue() fails"
Revert "serial: mvebu-uart: Fix to avoid a potential NULL pointer dereference"
Revert "video: hgafb: fix potential NULL pointer dereference"
Revert "net: stmicro: fix a missing check of clk_prepare"
Revert "leds: lp5523: fix a missing check of return value of lp55xx_read"
Revert "hwmon: (lm80) fix a missing check of bus read in lm80 probe"
Revert "video: imsttfb: fix potential NULL pointer dereferences"
Revert "ecryptfs: replace BUG_ON with error handling code"
Revert "scsi: ufs: fix a missing check of devm_reset_control_get"
Revert "gdrom: fix a memory leak bug"
cdrom: gdrom: initialize global variable at init time
Revert "media: rcar_drif: fix a memory disclosure"
Revert "rtlwifi: fix a potential NULL pointer dereference"
Revert "qlcnic: Avoid potential NULL pointer dereference"
Revert "niu: fix missing checks of niu_pci_eeprom_read"
net: rtlwifi: properly check for alloc_workqueue() failure
Linux 5.10.40
Linux 5.10.41
kgdb: fix gcc-11 warnings harder
Revert "crypto: cavium/nitrox - add an error message to explain the failure of pci_request_mem_regions"
Revert "media: usb: gspca: add a missed check for goto_low_power"
Revert "ALSA: sb: fix a missing check of snd_ctl_add"
Revert "serial: max310x: pass return value of spi_register_driver"
Revert "net: fujitsu: fix a potential NULL pointer dereference"
Revert "net/smc: fix a NULL pointer dereference"
Revert "net: caif: replace BUG_ON with recovery code"
Revert "char: hpet: fix a missing check of ioremap"
Revert "ALSA: gus: add a check of the status of snd_ctl_add"
Revert "ALSA: usx2y: Fix potential NULL pointer dereference"
Revert "isdn: mISDNinfineon: fix potential NULL pointer dereference"
Revert "ath6kl: return error code in ath6kl_wmi_set_roam_lrssi_cmd()"
Revert "isdn: mISDN: Fix potential NULL pointer dereference of kzalloc"
Revert "dmaengine: qcom_hidma: Check for driver register failure"
Revert "libertas: add checks for the return value of sysfs_create_group"
libertas: register sysfs groups properly
Revert "ASoC: cs43130: fix a NULL pointer dereference"
ASoC: cs43130: handle errors in cs43130_probe() properly
Revert "media: dvb: Add check on sp8870_readreg"
Revert "media: gspca: mt9m111: Check write_bridge for timeout"
Revert "media: gspca: Check the return value of write_bridge for timeout"
media: gspca: properly check for errors in po1030_probe()
Revert "net: liquidio: fix a NULL pointer dereference"
Revert "brcmfmac: add a check for the status of usb_register"
brcmfmac: properly check for bus register errors
i915: fix build warning in intel_dp_get_link_status()
Revert "Revert "ALSA: usx2y: Fix potential NULL pointer dereference""
Linux 5.10.42
Grzegorz Siwik (1):
i40e: Fix parameters in aq_get_phy_register()
Guangbin Huang (2):
net: hns3: clear VF down state bit before request link status
net: hns3: remediate a potential overflow risk of bd_num_list
Guangqing Zhu (2):
power: supply: cpcap-battery: fix invalid usage of list cursor
thermal/drivers/tsens: Fix missing put_device error
Guchun Chen (3):
drm/amdgpu: fix NULL pointer dereference
drm/amdgpu: update gc golden setting for Navi12
drm/amdgpu: update sdma golden setting for Navi12
Guennadi Liakhovetski (1):
ASoC: SOF: Intel: HDA: fix core status verification
Guenter Roeck (1):
pcnet32: Use pci_resource_len to validate PCI resource
Gulam Mohamed (1):
block: fix a race between del_gendisk and BLKRRPART
Guochun Mao (1):
ubifs: Only check replay with inode type to judge if inode linked
Gustavo A. R. Silva (6):
mtd: physmap: physmap-bt1-rom: Fix unintentional stack access
sctp: Fix out-of-bounds warning in sctp_process_asconf_param()
flow_dissector: Fix out-of-bounds warning in __skb_flow_bpf_to_target()
ethtool: ioctl: Fix out-of-bounds warning in store_link_ksettings_for_user()
wl3501_cs: Fix out-of-bounds warnings in wl3501_send_pkt
wl3501_cs: Fix out-of-bounds warnings in wl3501_mgmt_join
Gustavo Pimentel (1):
dmaengine: dw-edma: Fix crash on loading/unloading driver
Hanna Hawa (2):
pinctrl: pinctrl-single: remove unused parameter
pinctrl: pinctrl-single: fix pcs_pin_dbg_show() when bits_per_mux is not zero
Hannes Reinecke (1):
nvme: retrigger ANA log update if group descriptor isn't found
Hans Verkuil (5):
media: gspca/sq905.c: fix uninitialized variable
media: vivid: update EDID
media: gspca/stv06xx: fix memory leak
media: v4l2-ctrls: fix reference to freed memory
media: v4l2-ctrls.c: fix race condition in hdl->requests list
Hans de Goede (18):
ASoC: intel: atom: Stop advertising non working S24LE support
extcon: arizona: Fix some issues when HPDET IRQ fires after the jack has been unplugged
extcon: arizona: Fix various races on driver unbind
usb: roles: Call try_module_get() from usb_role_switch_find_by_fwnode()
misc: lis3lv02d: Fix false-positive WARN on various HP models
HID: lenovo: Use brightness_set_blocking callback for setting LEDs brightness
HID: lenovo: Fix lenovo_led_set_tp10ubkbd() error handling
HID: lenovo: Check hid_get_drvdata() returns non NULL in lenovo_event()
HID: lenovo: Map mic-mute button to KEY_F20 instead of KEY_MICMUTE
ASoC: Intel: bytcr_rt5640: Enable jack-detect support on Asus T100TAF
ASoC: Intel: bytcr_rt5640: Add quirk for the Chuwi Hi8 tablet
ASoC: rt5670: Add a quirk for the Dell Venue 10 Pro 5055
Input: elants_i2c - do not bind to i2c-hid compatible ACPI instantiated devices
Input: silead - add workaround for x86 BIOS-es which bring the chip up in a stuck state
gpiolib: acpi: Add quirk to ignore EC wakeups on Dell Venue 10 Pro 5055
platform/x86: intel_int0002_vgpio: Only call enable_irq_wake() when using s2idle
platform/x86: dell-smbios-wmi: Fix oops on rmmod dell_smbios
platform/x86: touchscreen_dmi: Add info for the Chuwi Hi10 Pro (CWI529) tablet
Hansem Ro (1):
Input: ili210x - add missing negation for touch indication on ili210x
Hao Chen (1):
net: hns3: fix for vxlan gpe tx checksum bug
Harald Freudenberger (2):
s390/zcrypt: fix zcard and zqueue hot-unplug memleak
s390/archrandom: add parameter check for s390_arch_random_generate
Harry Wentland (1):
drm/amd/display: Reject non-zero src_y and src_x for video planes
Hauke Mehrtens (1):
mtd: rawnand: mtk: Fix WAITRDY break condition and timeout
He Ying (3):
irqchip/gic-v3: Do not enable irqs when handling spurious interrupts
cpuidle: Fix ARM_QCOM_SPM_CPUIDLE configuration
firmware: qcom-scm: Fix QCOM_SCM configuration
Heiko Carstens (2):
init/Kconfig: make COMPILE_TEST depend on !S390
KVM: s390: fix guarded storage control register handling
Heiner Kallweit (2):
r8169: tweak max read request size for newer chips also in jumbo mtu mode
r8169: don't advertise pause in jumbo mode
Heinz Mauelshagen (1):
dm raid: fix inconclusive reshape layout on fast raid4/5/6 table reload sequences
Helge Deller (1):
parisc: parisc-agp requires SBA IOMMU driver
Hemant Kumar (1):
usb: gadget: Fix double free of device descriptor pointers
Heming Zhao (1):
md-cluster: fix use-after-free issue when removing rdev
Hillf Danton (1):
tty: n_gsm: check error while registering tty devices
Hoang Le (2):
tipc: convert dest node's address to network order
Revert "net:tipc: Fix a double free in tipc_sk_mcast_rcv"
Hou Pu (3):
nvmet: return proper error code from discovery ctrl
nvmet: use new ana_log_size instead the old one
nvmet-tcp: fix inline data size comparison in nvmet_tcp_queue_response
Hristo Venev (2):
net: sit: Unregister catch-all devices
net: ip6_tunnel: Unregister catch-all devices
Hsin-Yi Wang (1):
misc: eeprom: at24: check suspend status before disable regulator
Huang Pei (2):
MIPS: fix local_irq_{disable,enable} in asmmacro.h
MIPS: loongson64: fix bug when PAGE_SIZE > 16KB
Hubert Streidl (1):
mfd: da9063: Support SMBus and I2C mode
Hui Tang (1):
crypto: qat - fix unmap invalid dma address
Hui Wang (4):
ALSA: hda: generic: change the DAC ctl name for LO+SPK or LO+HP
ALSA: hda/realtek: reset eapd coeff to default value for alc287
ALSA: hda/realtek: the bass speaker can't output sound on Yoga 9i
ALSA: hda/realtek: Headphone volume is controlled by Front mixer
Hyeongseok Kim (1):
exfat: fix erroneous discard when clear cluster bit
Håkon Bugge (1):
RDMA/core: Fix corrupted SL on passive side
Ian Abbott (1):
staging: comedi: tests: ni_routes_test: Fix compilation error
Ido Schimmel (2):
mlxsw: spectrum: Fix ECN marking in tunnel decapsulation
mlxsw: spectrum_mr: Update egress RIF list before route's action
Igor Matheus Andrade Torrente (1):
video: hgafb: fix potential NULL pointer dereference
Igor Pylypiv (1):
scsi: pm80xx: Increase timeout for pm80xx mpi_uninit_check()
Ilya Leoshkevich (1):
selftests: fix prepending $(OUTPUT) to $(TEST_PROGS)
Ilya Lipnitskiy (4):
of: property: fw_devlink: do not link ".*,nr-gpios"
MIPS: pci-mt7620: fix PLL lock check
MIPS: pci-rt2880: fix slot 0 configuration
MIPS: pci-legacy: stop using of_pci_range_to_resource
Ilya Maximets (1):
openvswitch: fix send of uninitialized stack memory in ct limit reply
Ingo Molnar (1):
x86/platform/uv: Fix !KEXEC build failure
J. Bruce Fields (1):
nfsd: ensure new clients break delegations
Jacek Bułatek (1):
ice: Fix for dereference of NULL pointer
Jack Pham (3):
usb: dwc3: gadget: Free gadget structure only after freeing endpoints
usb: dwc3: gadget: Enable suspend events
usb: typec: ucsi: Retrieve all the PDOs instead of just the first 4
Jack Qiu (1):
fs: direct-io: fix missing sdio->boundary
Jacob Pan (1):
iommu/vt-d: Reject unsupported page request modes
Jae Hyun Yoo (2):
Revert "i3c master: fix missing destroy_workqueue() on error in i3c_master_register"
media: aspeed: fix clock handling logic
Jaegeuk Kim (1):
dm verity fec: fix misaligned RS roots IO
Jakub Kicinski (1):
ethtool: pause: make sure we init driver stats
James Bottomley (2):
KEYS: trusted: Fix TPM reservation for seal/unseal
security: keys: trusted: fix TPM2 authorizations
James Smart (7):
scsi: lpfc: Fix incorrect dbde assignment when building target abts wqe
scsi: lpfc: Fix pt2pt connection does not recover after LOGO
scsi: lpfc: Fix crash when a REG_RPI mailbox fails triggering a LOGO response
scsi: lpfc: Fix error handling for mailboxes completed in MBX_POLL mode
scsi: lpfc: Remove unsupported mbox PORT_CAPABILITIES logic
scsi: lpfc: Fix illegal memory access on Abort IOCBs
nvme-fc: clear q_live at beginning of association teardown
James Zhu (4):
drm/amdgpu/vcn1: add cancel_delayed_work_sync before power gate
drm/amdgpu/vcn2.0: add cancel_delayed_work_sync before power gate
drm/amdgpu/vcn2.5: add cancel_delayed_work_sync before power gate
drm/amdgpu/jpeg2.0: add cancel_delayed_work_sync before power gate
Jan Beulich (3):
xen-pciback: redo VF placement in the virtual topology
xen-pciback: reconfigure also from backend watch handler
x86/Xen: swap NX determination and GDT setup on BSP
Jan Glauber (1):
md: Fix missing unused status line of /proc/mdstat
Jan Kara (3):
ext4: annotate data race in start_this_handle()
ext4: annotate data race in jbd2_journal_dirty_metadata()
ext4: Fix occasional generic/418 failure
Jan Kratochvil (1):
perf unwind: Fix separate debug info files when using elfutils' libdw's unwinder
Jane Chu (1):
mm/memory-failure: unnecessary amount of unmapping
Jann Horn (1):
tty: Remove dead termiox code
Jared Baldridge (1):
drm: Added orientation quirk for OneGX1 Pro
Jarkko Sakkinen (2):
tpm, tpm_tis: Extend locality handling to TPM2 in tpm_tis_gen_interrupt()
tpm, tpm_tis: Reserve locality in tpm_tis_resume()
Jaroslaw Gawin (1):
i40e: fix the restart auto-negotiation after FEC modified
Jason Gunthorpe (5):
vfio: Depend on MMU
vfio/fsl-mc: Re-order vfio_fsl_mc_probe()
vfio/pci: Move VGA and VF initialization to functions
vfio/pci: Re-order vfio_pci_probe()
vfio/mdev: Do not allow a mdev_type to have a NULL parent pointer
Jason Wang (1):
vhost-vdpa: fix vm_flags for virtqueue doorbell mapping
Jason Xing (1):
i40e: fix the panic when running bpf in xdpdrv mode
Javed Hasan (1):
scsi: qedf: Add pointer checks in qedf_update_link_speed()
Jean Delvare (1):
i2c: i801: Don't generate an interrupt on bus reset
Jeff Layton (4):
ceph: fix inode leak on getattr error in __fh_to_dentry
ceph: fix fscache invalidation
ceph: don't clobber i_snap_caps on non-I_NEW inode
ceph: don't allow access to MDS-private inodes
Jeffrey Hugo (2):
bus: mhi: core: Fix check for syserr at power_up
bus: mhi: core: Sanity check values from remote device before use
Jeffrey Mitchell (1):
ecryptfs: fix kernel panic with null dev_name
Jens Axboe (1):
io_uring: don't mark S_ISBLK async work as unbounded
Jeremy Szu (4):
ALSA: hda/realtek: fix mute/micmute LEDs for HP 855 G8
ALSA: hda/realtek: fix mute/micmute LEDs and speaker for HP Zbook G8
ALSA: hda/realtek: fix mute/micmute LEDs and speaker for HP Zbook Fury 15 G8
ALSA: hda/realtek: fix mute/micmute LEDs and speaker for HP Zbook Fury 17 G8
Jernej Skrabec (2):
arm64: dts: allwinner: h6: beelink-gs1: Remove ext. 32 kHz osc reference
media: cedrus: Fix H265 status definitions
Jerome Forissier (1):
tee: optee: do not check memref size on return from Secure World
Jesse Brandeburg (1):
ixgbe: fix large MTU request from VF
Jia Zhou (1):
ALSA: core: remove redundant spin_lock pair in snd_card_disconnect
Jia-Ju Bai (7):
interconnect: core: fix error return code of icc_link_destroy()
HID: alps: fix error return code in alps_input_configured()
mtd: maps: fix error return code of physmap_flash_remove()
media: platform: sunxi: sun6i-csi: fix error return code of sun6i_video_start_streaming()
thermal: thermal_of: Fix error return code of thermal_of_populate_bind_params()
rpmsg: qcom_glink_native: fix error return code of qcom_glink_rx_data()
kernel: kexec_file: fix error return code of kexec_calculate_store_digests()
Jian Shen (2):
net: hns3: add check for HNS3_NIC_STATE_INITED in hns3_reset_notify_up_enet()
net: hns3: put off calling register_netdev() until client initialize complete
Jianbo Liu (1):
net/mlx5: Set reformat action when needed for termination rules
Jianxiong Gao (9):
driver core: add a min_align_mask field to struct device_dma_parameters
swiotlb: add a IO_TLB_SIZE define
swiotlb: factor out an io_tlb_offset helper
swiotlb: factor out a nr_slots helper
swiotlb: clean up swiotlb_tbl_unmap_single
swiotlb: refactor swiotlb_tbl_map_single
swiotlb: don't modify orig_addr in swiotlb_tbl_sync_single
swiotlb: respect min_align_mask
nvme-pci: set min_align_mask
Jiapeng Zhong (1):
HID: wacom: Assign boolean values to a bool variable
Jiaran Zhang (1):
net: hns3: fix incorrect resp_msg issue
Jim Ma (1):
tls splice: check SPLICE_F_NONBLOCK instead of MSG_DONTWAIT
Jim Mattson (1):
perf/x86/kvm: Fix Broadwell Xeon stepping in isolation_ucodes[]
Jin Yao (1):
perf report: Fix wrong LBR block sorting
Jingwen Chen (1):
drm/amd/amdgpu: fix refcount leak
Jinzhou Su (1):
drm/amdgpu: Add mem sync flag for IB allocated by SA
Jiri Kosina (3):
iwlwifi: Fix softirq/hardirq disabling in iwl_pcie_enqueue_hcmd()
iwlwifi: Fix softirq/hardirq disabling in iwl_pcie_gen2_enqueue_hcmd()
Bluetooth: avoid deadlock between hci_dev->lock and socket lock
Jiri Olsa (6):
tools/resolve_btfids: Build libbpf and libsubcmd in separate directories
tools/resolve_btfids: Check objects before removing
tools/resolve_btfids: Set srctree variable unconditionally
kbuild: Add resolve_btfids clean to root clean target
kbuild: Do not clean resolve_btfids if the output does not exist
perf tools: Fix dynamic libbpf link
Jisheng Zhang (1):
arm64: kprobes: Restore local irqflag if kprobes is cancelled
Joakim Zhang (1):
net: stmmac: Fix MAC WoL not working if PHY does not support WoL
Joe Thornber (2):
dm persistent data: packed struct should have an aligned() attribute too
dm space map common: fix division bug in sm_ll_find_free_block()
Joel Stanley (1):
jffs2: Hook up splice_write callback
Joerg Roedel (5):
x86/sev: Do not require Hypervisor CPUID bit for SEV guests
x86/sev-es: Don't return NULL from sev_es_get_ghcb()
x86/sev-es: Use __put_user()/__get_user() for data accesses
x86/sev-es: Forward page-faults which happen during emulation
x86/boot/compressed/64: Check SEV encryption in the 32-bit boot-path
Johan Hovold (22):
net: hso: fix NULL-deref on disconnect regression
Revert "USB: cdc-acm: fix rounding error in TIOCSSERIAL"
tty: moxa: fix TIOCSSERIAL jiffies conversions
tty: amiserial: fix TIOCSSERIAL permission check
USB: serial: usb_wwan: fix TIOCSSERIAL jiffies conversions
staging: greybus: uart: fix TIOCSSERIAL jiffies conversions
USB: serial: ti_usb_3410_5052: fix TIOCSSERIAL permission check
staging: fwserial: fix TIOCSSERIAL jiffies conversions
tty: moxa: fix TIOCSSERIAL permission check
staging: fwserial: fix TIOCSSERIAL permission check
staging: fwserial: fix TIOCSSERIAL implementation
staging: fwserial: fix TIOCGSERIAL implementation
staging: greybus: uart: fix unprivileged TIOCCSERIAL
USB: cdc-acm: fix unprivileged TIOCCSERIAL
USB: cdc-acm: fix TIOCGSERIAL implementation
tty: actually undefine superseded ASYNC flags
tty: fix return value for unsupported ioctls
tty: fix return value for unsupported termiox ioctls
serial: core: return early on unsupported ioctls
net: hso: fix control-request directions
USB: trancevibrator: fix control-request direction
net: hso: bail out on interrupt URB allocation failure
Johannes Berg (15):
iwlwifi: pcie: properly set LTR workarounds on 22000 devices
nl80211: fix beacon head validation
nl80211: fix potential leak of ACL params
cfg80211: check S1G beacon compat element length
mac80211: fix TXQ AC confusion
cfg80211: scan: drop entry from hidden_list on overflow
mac80211: bail out if cipher schemes are invalid
iwlwifi: pcie: make cfg vs. trans_cfg more robust
um: Mark all kernel symbols as local
um: Disable CONFIG_GCOV with MODULES
mac80211: drop A-MSDUs on old ciphers
mac80211: add fragment cache to sta_info
mac80211: check defrag PN against current frame
mac80211: prevent attacks on TKIP/WEP as well
mac80211: do not accept/forward invalid EAPOL frames
John Fastabend (2):
bpf, sockmap: Fix sk->prot unhash op reset
bpf, sockmap: Fix incorrect fwd_alloc accounting
John Millikin (1):
x86/build: Propagate $(CLANG_FLAGS) to $(REALMODE_FLAGS)
John Paul Adrian Glaubitz (2):
ia64: tools: remove inclusion of ia64-specific version of errno.h header
ia64: tools: remove duplicate definition of ia64_mf() on ia64
Jolly Shah (1):
scsi: libsas: Reset num_scatter if libata marks qc as NODATA
Jonas Holmberg (1):
ALSA: aloop: Fix initialization of controls
Jonas Witschel (1):
ALSA: hda/realtek: fix mute/micmute LEDs for HP ProBook 445 G7
Jonathan Cameron (7):
iio:accel:adis16201: Fix wrong axis assignment that prevents loading
iio:adc:ad7476: Fix remove handling
iio: adc: ad7768-1: Fix too small buffer passed to iio_push_to_buffers_with_timestamp()
iio: adc: ad7124: Fix misbalanced regulator enable / disable on error.
iio: adc: ad7124: Fix potential overflow due to non sequential channel numbers
iio: adc: ad7923: Fix undersized rx buffer.
iio: adc: ad7192: Avoid disabling a clock that was never enabled.
Jonathan Kim (1):
drm/amdgpu: mask the xgmi number of hops reported from psp to kfd
Jonathan McDowell (1):
net: stmmac: Set FIFO sizes for ipq806x
Jonathon Reinhart (3):
net: Make tcp_allowed_congestion_control readonly in non-init netns
netfilter: conntrack: Make global sysctls readonly in non-init netns
net: Only allow init netns to set default tcp cong to a restricted algo
Jordan Niethe (1):
powerpc/64s: Fix pte update for kernel memory on radix
Josef Bacik (5):
btrfs: do proper error handling in create_reloc_root
btrfs: do proper error handling in btrfs_update_reloc_root
btrfs: convert logic BUG_ON()'s in replace_path to ASSERT()'s
btrfs: avoid RCU stalls while running delayed iputs
btrfs: do not BUG_ON in link_to_fixup_dir
Jouni Roivas (1):
hfsplus: prevent corruption in shrinking truncate
Juergen Gross (2):
xen/events: fix setting irq affinity
xen/gntdev: fix gntdev_mmap() error exit path
Julian Braha (2):
lib: fix kconfig dependency on ARCH_WANT_FRAME_POINTERS
media: drivers: media: pci: sta2x11: fix Kconfig dependency on GPIOLIB
Julian Wiedmann (1):
net/smc: remove device from smcd_dev_list after failed device_add()
Jussi Maki (1):
bpf: Set mac_len in bpf_skb_change_head
KP Singh (1):
libbpf: Add explicit padding to btf_dump_emit_type_decl_opts
Kai Stuhlemmer (ebee Engineering) (1):
mtd: rawnand: atmel: Update ecc_stats.corrected counter
Kai Vehmanen (1):
ALSA: hda/hdmi: fix race in handling acomp ELD notification at resume
Kai-Heng Feng (3):
USB: Add LPM quirk for Lenovo ThinkPad USB-C Dock Gen2 Ethernet
drm/radeon/dpm: Disable sclk switching on Oland when two 4K 60Hz monitors are connected
platform/x86: hp_accel: Avoid invoking _INI to speed up resume
Kailang Yang (1):
ALSA: hda/realtek - Headset Mic issue on HP platform
Kaixu Xia (1):
cxgb4: Fix the -Wmisleading-indentation warning
Kalyan Thota (1):
drm/msm/disp/dpu1: icc path needs to be set before dpu runtime resume
Kamal Heib (1):
RDMA/qedr: Fix kernel panic when trying to access recv_cq
Kan Liang (1):
perf/x86/intel/uncore: Remove uncore extra PCI dev HSWEP_PCI_PCU_3
Karthikeyan Kathirvel (1):
mac80211: choose first enabled channel for monitor
Kees Cook (4):
drm/radeon: Fix off-by-one power_state index heap overwrite
drm/radeon: Avoid power table parsing memory leaks
debugfs: Make debugfs_allow RO after init
proc: Check /proc/$pid/attr/ writes against file opener
Kefeng Wang (1):
riscv: Fix spelling mistake "SPARSEMEM" to "SPARSMEM"
Keith Busch (2):
nvmet: remove unsupported command noise
nvme-tcp: rerun io_work if req_list is not empty
Kenneth Feng (1):
drm/amd/pm: fix workload mismatch on vega10
Kenta.Tada@sony.com (1):
seccomp: Fix CONFIG tests for Seccomp_filters
Kevin Barnett (1):
scsi: smartpqi: Add new PCI IDs
Kevin Wang (1):
drm/amdkfd: correct sienna_cichlid SDMA RLC register offset error
Kiran Gunda (1):
backlight: qcom-wled: Fix FSC update issue for WLED5
Kishon Vijay Abraham I (7):
PCI: keystone: Let AM65 use the pci_ops defined in pcie-designware-host.c
phy: cadence: Sierra: Fix PHY power_on sequence
phy: ti: j721e-wiz: Invoke wiz_init() before of_platform_device_create()
phy: ti: j721e-wiz: Delete "clk_div_sel" clk provider during cleanup
PCI: endpoint: Make *_get_first_free_bar() take into account 64 bit BAR
PCI: endpoint: Add helper API to get the 'next' unreserved BAR
PCI: endpoint: Make *_free_bar() to return error codes on failure
Konrad Dybcio (1):
drm/msm/adreno: a5xx_power: Don't apply A540 lm_setup to other GPUs
Krzysztof Goreczny (1):
ice: prevent ice_open and ice_stop during reset
Krzysztof Kozlowski (14):
clk: socfpga: fix iomem pointer cast on 64-bit
ARM: dts: exynos: correct fuel gauge interrupt trigger level on GT-I9100
ARM: dts: exynos: correct fuel gauge interrupt trigger level on Midas family
ARM: dts: exynos: correct MUIC interrupt trigger level on Midas family
ARM: dts: exynos: correct PMIC interrupt trigger level on Midas family
ARM: dts: exynos: correct PMIC interrupt trigger level on Odroid X/U3 family
ARM: dts: exynos: correct PMIC interrupt trigger level on SMDK5250
ARM: dts: exynos: correct PMIC interrupt trigger level on Snow
ARM: dts: s5pv210: correct fuel gauge interrupt trigger level on Fascinate family
memory: renesas-rpc-if: fix possible NULL pointer dereference of resource
memory: samsung: exynos5422-dmc: handle clk_set_parent() failure
ASoC: simple-card: fix possible uninitialized single_cpu local variable
pinctrl: samsung: use 'int' for register masks in Exynos
i2c: s3c2410: fix possible NULL pointer deref on read message after write
Kumar Kartikeya Dwivedi (1):
net: sched: bump refcount for new action in ACT replace mode
Kunihiko Hayashi (2):
ARM: dts: uniphier: Change phy-mode to RGMII-ID to enable delay pins for RTL8211E
arm64: dts: uniphier: Change phy-mode to RGMII-ID to enable delay pins for RTL8211E
Kuninori Morimoto (2):
ASoC: rsnd: call rsnd_ssi_master_clk_start() from rsnd_ssi_init()
ASoC: rsnd: check all BUSIF status when error
Kuogee Hsieh (1):
drm/msm/dp: initialize audio_comp when audio starts
Kurt Kanzenbach (1):
net: hsr: Reset MAC header for Tx path
Lad Prabhakar (2):
media: i2c: imx219: Move out locking/unlocking of vflip and hflip controls from imx219_set_stream
media: i2c: imx219: Balance runtime PM use-count
Lai Jiangshan (1):
KVM/VMX: Invoke NMI non-IST entry instead of IST entry
Lang Yu (1):
drm/amd/amdgpu: fix a potential deadlock in gpu reset
Lars-Peter Clausen (1):
iio: inv_mpu6050: Fully validate gyro and accel scale writes
Laurent Pinchart (3):
dmaengine: xilinx: dpdma: Fix descriptor issuing on video group
dmaengine: xilinx: dpdma: Fix race condition in done IRQ
media: imx: capture: Return -EPIPE from __capture_legacy_try_fmt()
Lee Gibson (1):
qtnfmac: Fix possible buffer overflow in qtnf_event_handle_external_auth
Lee Jones (1):
drm/amd/display/dc/dce/dce_aux: Remove duplicate line causing 'field overwritten' issue
Len Brown (1):
Revert "tools/power turbostat: adjust for temperature offset"
Leo Yan (5):
perf auxtrace: Fix potential NULL pointer dereference
perf tools: Change fields type in perf_record_time_conv
perf jit: Let convert_timestamp() to be backwards-compatible
perf session: Add swap operation for event TIME_CONV
locking/lockdep: Correct calling tracepoints
Leon Romanovsky (5):
RDMA/addr: Be strict with gid size
RDMA/siw: Properly check send and receive CQ pointers
RDMA/siw: Release xarray entry
RDMA/core: Prevent divide-by-zero error triggered by the user
RDMA/rxe: Clear all QP fields if creation failed
Li Huafei (1):
ima: Fix the error code for restoring the PCR value
Liam Howlett (1):
m68k: Add missing mmap_read_lock() to sys_cacheflush()
Lijun Pan (3):
ibmvnic: avoid calling napi_disable() twice
ibmvnic: remove duplicate napi_schedule call in do_reset function
ibmvnic: remove duplicate napi_schedule call in open function
Like Xu (1):
perf/x86: Avoid touching LBR_TOS MSR for Arch LBR
Liming Sun (1):
platform/mellanox: mlxbf-tmfifo: Fix a memory barrier issue
Lin Ma (1):
bluetooth: eliminate the potential race condition when removing the HCI controller
Lingutla Chandrasekhar (1):
sched/fair: Ignore percpu threads for imbalance pulls
Linus Lüssing (1):
net: bridge: mcast: fix broken length + header check for MRDv6 Adv.
Linus Torvalds (3):
readdir: make sure to verify directory entry for legacy interfaces too
Fix misc new gcc warnings
drm/i915/display: fix compiler warning about array overrun
Linus Walleij (3):
ARM: dts: ux500: Fix up TVK R3 sensors
drm/mcde/panel: Inverse misunderstood flag
net: ethernet: ixp4xx: Set the DMA masks explicitly
Liu Jian (1):
bpftool: Add sock_release help info for cgroup attach/prog load command
Liu Ying (1):
media: docs: Fix data organization of MEDIA_BUS_FMT_RGB101010_1X30
Loic Poulain (1):
net: qrtr: Fix memory leak on qrtr_tx_wait failure
Longfang Liu (1):
crypto: hisilicon/sec - fixes a printing error
Lorenz Bauer (1):
bpf: link: Refuse non-O_RDWR flags in BPF_OBJ_GET
Lorenzo Bianconi (2):
mt76: mt7915: fix aggr len debugfs node
mt76: mt7615: fix mib stats counter reporting to mac80211
Lu Baolu (9):
iommu/vt-d: Don't set then clear private data in prq_event_thread()
iommu/vt-d: Report right snoop capability when using FL for IOVA
iommu/vt-d: Report the right page fault address
iommu/vt-d: Preset Access/Dirty bits for IOVA over FL
iommu/vt-d: Remove WO permissions on second-level paging entries
iommu/vt-d: Invalidate PASID cache when root/context entry changed
iommu/vt-d: Preset Access/Dirty bits for IOVA over FL
iommu/vt-d: Remove WO permissions on second-level paging entries
iommu/vt-d: Use user privilege for RID2PASID translation
Luca Ceresoli (1):
fpga: fpga-mgr: xilinx-spi: fix error messages on -EPROBE_DEFER
Luca Coelho (1):
iwlwifi: fix 11ax disabled bit in the regulatory capability flags
Luca Fancellu (1):
xen/evtchn: Change irq_info lock to raw_spinlock_t
Lucas Stankus (1):
staging: iio: cdc: ad7746: avoid overwrite of num_channels
Ludovic Desroches (1):
ARM: dts: at91: change the key code of the gpio key
Ludovic Senecaux (1):
netfilter: conntrack: Fix gre tunneling over ipv6
Luis Henriques (1):
virtiofs: fix memory leak in virtio_fs_probe()
Luiz Augusto von Dentz (1):
Bluetooth: SMP: Fail if remote and local public keys are identical
Lukasz Bartosik (2):
clk: fix invalid usage of list cursor in register
clk: fix invalid usage of list cursor in unregister
Lukasz Luba (2):
thermal/core/fair share: Lock the thermal zone while looping over instances
PM / devfreq: Unlock mutex and free devfreq struct in error path
Lukasz Majczak (1):
ASoC: Intel: kbl_da7219_max98927: Fix kabylake_ssp_fixup function
Luke D Jones (1):
ALSA: hda/realtek: GA503 use same quirks as GA401
Lv Yunlong (21):
ethernet/netronome/nfp: Fix a use after free in nfp_bpf_ctrl_msg_rx
drivers/net/wan/hdlc_fr: Fix a double free in pvc_xmit
ethernet: myri10ge: Fix a use after free in myri10ge_sw_tso
net:tipc: Fix a double free in tipc_sk_mcast_rcv
net/rds: Fix a use after free in rds_message_map_pages
dmaengine: Fix a double free in dma_async_device_register
gpu/xen: Fix a use after free in xen_drm_drv_init
ALSA: emu8000: Fix a use after free in snd_emu8000_create_mixer
ALSA: sb: Fix two use after free in snd_sb_qsound_build
mtd: rawnand: gpmi: Fix a double free in gpmi_nand_init
crypto: qat - Fix a double free in adf_create_ring
drivers/block/null_blk/main: Fix a double free in null_init.
IB/isert: Fix a use after free in isert_connect_request
mwl8k: Fix a double Free in mwl8k_probe_hw
ath10k: Fix a use after free in ath10k_htc_send_bundle
net:emac/emac-mac: Fix a use after free in emac_mac_tx_buf_send
RDMA/siw: Fix a use after free in siw_alloc_mr
RDMA/bnxt_re: Fix a double free in bnxt_qplib_alloc_res
net:nfc:digital: Fix a double free in digital_tg_recv_dep_req
ethernet:enic: Fix a use after free bug in enic_hard_start_xmit
drm/i915/gt: Fix a double free in gen8_preallocate_top_level_pdp
Maciej W. Rozycki (9):
x86/build: Disable HIGHMEM64G selection for M486SX
FDDI: defxx: Bail out gracefully with unassigned PCI resource for CSR
FDDI: defxx: Make MMIO the configuration default except for EISA
MIPS: Reinstate platform `__div64_32' handler
MIPS: Avoid DIVU in `__div64_32' if result would be zero
MIPS: Avoid handcoded DIVU in `__div64_32' altogether
vgacon: Record video mode changes with VT_RESIZEX
vt_ioctl: Revert VT_RESIZEX parameter handling removal
vt: Fix character height handling with VT_RESIZEX
Maciej Żenczykowski (2):
net-ipv6: bugfix - raw & sctp - switch to ipv6_can_nonlocal_bind()
net: fix nla_strcmp to handle more then one trailing null character
Magnus Karlsson (2):
i40e: fix broken XDP support
samples/bpf: Consider frame size in tx_only of xdpsock sample
Mahesh Salgaonkar (1):
powerpc/eeh: Fix EEH handling for hugepages in ioremap space.
Manivannan Sadhasivam (3):
mtd: Handle possible -EPROBE_DEFER from parse_mtd_partitions()
mtd: rawnand: qcom: Return actual error code instead of -ENODEV
ARM: 9075/1: kernel: Fix interrupted SMC calls
Mans Rullgard (1):
ARM: dts: am33xx: add aliases for mmc interfaces
Maor Gottlieb (3):
RDMA/mlx5: Fix drop packet rule in egress table
RDMA/mlx5: Recover from fatal event in dual port mode
RDMA/mlx5: Fix query DCT via DEVX
Marc Kleine-Budde (3):
can: mcp251x: fix support for half duplex SPI host controllers
can: mcp251xfd: mcp251xfd_probe(): add missing can_rx_offload_del() in error path
can: m_can: m_can_tx_work_queue(): fix tx_skb race condition
Marc Zyngier (4):
ACPI: GTDT: Don't corrupt interrupt mappings on watchdog probe failure
KVM: arm64: Fully zero the vcpu state on reset
arm64: entry: factor irq triage logic into macros
KVM: arm64: Prevent mixed-width VM creation
Marcel Hamer (1):
usb: dwc3: omap: improve extcon initialization
Marco Elver (1):
kcsan, debugfs: Move debugfs file creation out of early init
Marcus Folkesson (1):
wilc1000: write value to WILC_INTR2_ENABLE register
Marek Behún (4):
ARM: dts: turris-omnia: configure LED[2]/INTn pin as interrupt pin
arm64: dts: marvell: armada-37xx: add syscon compatible to NB clk node
cpufreq: armada-37xx: Fix setting TBG parent for load levels
clk: mvebu: armada-37xx-periph: remove .set_parent method for CPU PM clock
Marek Vasut (2):
rsi: Use resume_noirq for SDIO
drm/stm: Fix bus_flags handling
Marijn Suijten (2):
drm/msm/mdp5: Configure PP_SYNC_HEIGHT to double the vtotal
drm/msm/mdp5: Do not multiply vclk line count by 100
Mark Langsdorf (2):
ACPI: custom_method: fix potential use-after-free issue
ACPI: custom_method: fix a possible memory leak
Mark Pearson (1):
platform/x86: thinkpad_acpi: Correct thermal sensor allocation
Mark Rutland (1):
arm64: entry: always set GIC_PRIO_PSR_I_SET during entry
Mark Zhang (1):
RDMA/mlx5: Fix mlx5 rates to IB rates map
Martin Blumenstingl (3):
net: dsa: lantiq_gswip: Let GSWIP automatically set the xMII clock
net: dsa: lantiq_gswip: Don't use PHY auto polling
net: dsa: lantiq_gswip: Configure all remaining GSWIP_MII_CFG bits
Martin Leung (1):
drm/amd/display: changing sr exit latency
Martin Schiller (1):
net: phy: intel-xway: enable integrated led functions
Martin Wilck (2):
scsi: target: pscsi: Clean up after failure in pscsi_map_sg()
scsi: scsi_transport_srp: Don't block target in SRP_PORT_LOST state
Masahiro Yamada (4):
init/Kconfig: make COMPILE_TEST depend on HAS_IOMEM
kbuild: update config_data.gz only when the content of .config is changed
kbuild: generate Module.symvers only when vmlinux exists
scripts/clang-tools: switch explicitly to Python 3
Masami Hiramatsu (1):
x86/kprobes: Fix to check non boostable prefixes correctly
Mateusz Palczewski (2):
i40e: Added Asym_Pause to supported link modes
i40e: Fix PHY type identifiers for 2.5G and 5G adapters
Mathias Krause (1):
nitro_enclaves: Fix stale file descriptors on failed usercopy
Mathias Nyman (5):
xhci: check port array allocation was successful before dereferencing it
xhci: check control context is valid before dereferencing it.
xhci: fix potential array out of bounds with several interrupters
thunderbolt: usb4: Fix NVM read buffer bounds and offset issue
thunderbolt: dma_port: Fix NVM read buffer bounds and offset issue
Mathy Vanhoef (4):
mac80211: assure all fragments are encrypted
mac80211: prevent mixed key and fragment cache attacks
mac80211: properly handle A-MSDUs that start with an RFC 1042 header
cfg80211: mitigate A-MSDU aggregation attacks
Matt Chen (1):
iwlwifi: add support for Qu with AX201 device
Matt Wang (1):
scsi: BusLogic: Fix 64-bit system enumeration error for Buslogic
Matthew Wilcox (Oracle) (5):
XArray: Fix splitting to non-zero orders
radix tree test suite: Register the main thread with the RCU library
idr test suite: Take RCU read lock in idr_find_test_1
idr test suite: Create anchor before launching throbber
mm: fix struct page layout on 32-bit systems
Matthias Schiffer (1):
power: supply: bq27xxx: fix power_avg for newer ICs
Matti Vaittinen (1):
gpio: sysfs: Obey valid_mask
Mauro Carvalho Chehab (1):
atomisp: don't let it go past pipes array
Maxim Kochetkov (3):
net: dsa: Fix type was not set for devlink port
net: phy: marvell: fix m88e1011_set_downshift
net: phy: marvell: fix m88e1111_set_downshift
Maxim Mikityanskiy (2):
HID: plantronics: Workaround for double volume key presses
net/mlx5e: Use net_prefetchw instead of prefetchw in MPWQE TX datapath
Maximilian Luz (2):
usb: xhci: Increase timeout for HC halt
serial: 8250_dw: Add device HID for new AMD UART controller
Md Haris Iqbal (3):
RDMA/rtrs-clt: Close rtrs client conn before destroying rtrs clt session files
block/rnbd-clt: Change queue_depth type in rnbd_clt_session to size_t
block/rnbd-clt: Check the return value of the function rtrs_clt_query
Meng Li (1):
regmap: set debugfs_name to NULL after it is freed
Miaohe Lin (4):
khugepaged: fix wrong result value for trace_mm_collapse_huge_page_isolate()
mm/hugeltb: handle the error case in hugetlb_fix_reserve_counts()
mm/migrate.c: fix potential indeterminate pte entry in migrate_vma_insert_page()
ksm: fix potential missing rmap_item for stable_node
Michael Brown (1):
xen-netback: Check for hotplug-status existence before watching
Michael Chan (3):
bnxt_en: Fix RX consumer index logic in the error path.
bnxt_en: Add PCI IDs for Hyper-V VF devices.
bnxt_en: Fix context memory setup for 64K page size.
Michael Ellerman (7):
powerpc/pseries: Only register vio drivers if vio bus exists
powerpc/pseries: Stop calling printk in rtas_stop_self()
powerpc/64s: Fix crashes when toggling stf barrier
powerpc/64s: Fix crashes when toggling entry flush barrier
selftests/gpio: Use TEST_GEN_PROGS_EXTENDED
selftests/gpio: Move include of lib.mk up
selftests/gpio: Fix build when source tree is read only
Michael Kelley (1):
Drivers: hv: vmbus: Increase wait time for VMbus unload
Michael Walle (2):
mtd: require write permissions for locking and badblock ioctls
rtc: fsl-ftm-alarm: add MODULE_TABLE()
Michal Kalderon (1):
nvmet-rdma: Fix NULL deref when SEND is completed with error
Michal Simek (1):
firmware: xilinx: Add a blank line after function declaration
Mickaël Salaün (1):
ovl: fix leaked dentry
Mihai Moldovan (1):
kconfig: nconf: stop endless search loops
Mike Galbraith (1):
x86/crash: Fix crash_setup_memmap_entries() out-of-bounds access
Mike Marciniszyn (3):
IB/hfi1: Fix probe time panic when AIP is enabled with a buggy BIOS
IB/hfi1: Use kzalloc() for mmu_rb_handler allocation
IB/hfi1: Correct oversized ring allocation
Mike Rapoport (2):
nds32: flush_dcache_page: use page_mapping_file to avoid races with swapoff
openrisc: mm/init.c: remove unused memblock_region variable in map_ram()
Mike Travis (1):
x86/platform/uv: Set section block size for hubless architectures
Mikhail Durnev (1):
ASoC: rsnd: core: Check convert rate in rsnd_hw_params
Mikko Perttunen (1):
gpu: host1x: Use different lock classes for each client
Miklos Szeredi (3):
ovl: allow upperdir inside lowerdir
virtiofs: fix userns
cuse: prevent clone
Mikulas Patocka (2):
dm snapshot: fix crash with transient storage and zero chunk size
dm snapshot: properly fix a crash when an origin has no snapshots
Milton Miller (1):
net/ncsi: Avoid channel_monitor hrtimer deadlock
Ming Lei (1):
blk-mq: plug request for shared sbitmap
Muchun Song (1):
mm: memcontrol: slab: fix obtain a reference to a freeing memcg
Muhammad Usama Anjum (2):
net: ipv6: check for validity before dereferencing cfg->fc_nlinfo.nlh
media: em28xx: fix memory leak
Murthy Bhat (1):
scsi: smartpqi: Correct request leakage during reset operations
Nathan Chancellor (13):
arm64: alternatives: Move length validation in alternative_{insn, endif}
x86/boot: Add $(CLANG_FLAGS) to compressed KBUILD_CFLAGS
efi/libstub: Add $(CLANG_FLAGS) to x86 flags
Makefile: Move -Wno-unused-but-set-variable out of GCC only block
crypto: arm/curve25519 - Move '.fpu' after '.arch'
ACPI: CPPC: Replace cppc_attr with kobj_attribute
x86/events/amd/iommu: Fix sysfs type mismatch
perf/amd/uncore: Fix sysfs type mismatch
powerpc/fadump: Mark fadump_calculate_reserve_size as __init
powerpc/prom: Mark identical_pvr_fixup as __init
riscv: Use $(LD) instead of $(CC) to link vDSO
scripts/recordmcount.pl: Fix RISC-V regex for clang
riscv: Workaround mcount name prior to clang-13
Neil Armstrong (1):
drm/meson: fix shutdown crash when component not probed
NeilBrown (1):
SUNRPC in case of backlog, hand free slots directly to waiting task
Nicholas Piggin (5):
powerpc/powernv: Enable HAIL (HV AIL) for ISA v3.1 processors
KVM: PPC: Book3S HV P9: Restore host CTRL SPR after guest exit
powerpc/pseries: Fix hcall tracing recursion in pv queued spinlocks
powerpc/64s/syscall: Use pt_regs.trap to distinguish syscall ABI difference between sc and scv syscalls
powerpc/64s/syscall: Fix ptrace syscall info with scv syscalls
Nick Desaulniers (1):
gcov: re-fix clang-11+ support
Nick Lowe (1):
igb: Enable RSS for Intel I211 Ethernet Controller
Niklas Cassel (1):
nvme-pci: don't simple map sgl when sgls are disabled
Nikola Livic (1):
pNFS/flexfiles: fix incorrect size check in decode_nfs_fh()
Nikolay Aleksandrov (1):
net: bridge: when suppression is enabled exclude RARP packets
Nikolay Borisov (1):
mm/sl?b.c: remove ctor argument from kmem_cache_flags
Nirmoy Das (1):
drm/amd/display: use GFP_ATOMIC in dcn20_resource_construct
Nobuhiro Iwamatsu (2):
firmware: xilinx: Remove zynqmp_pm_get_eemi_ops() in IS_REACHABLE(CONFIG_ZYNQMP_FIRMWARE)
rtc: ds1307: Fix wday settings for rx8130
Noralf Trønnes (1):
drm/probe-helper: Check epoch counter in output_poll_execute()
Norbert Ciosek (1):
virtchnl: Fix layout of RSS structures
Norman Maurer (1):
net: udp: Add support for getsockopt(..., ..., UDP_GRO, ..., ...);
Obeida Shamoun (1):
backlight: qcom-wled: Use sink_addr for sync toggle
Odin Ugedal (1):
sched/fair: Fix unfairness caused by missing load decay
Oleg Nesterov (1):
ptrace: make ptrace() fail if the tracee changed its pid unexpectedly
Olga Kornievskaia (2):
NFSv4.2: fix copy stateid copying for the async copy
NFSv4.2 fix handling of sr_eof in SEEK's reply
Oliver Hartkopp (2):
can: bcm/raw: fix msg_namelen values depending on CAN_REQUIRED_SIZE
can: isotp: fix msg_namelen values depending on CAN_REQUIRED_SIZE
Oliver Neukum (2):
USB: CDC-ACM: fix poison/unpoison imbalance
cdc-wdm: untangle a circular dependency between callback and softint
Oliver Stäbler (1):
arm64: dts: imx8mm/q: Fix pad control of SD1_DATA0
Omar Sandoval (1):
kyber: fix out of bounds access when preempted
Ondrej Mosnacek (5):
selinux: make nslot handling in avtab more robust
selinux: fix cond_list corruption when changing booleans
selinux: fix race between old and new sidtab
perf/core: Fix unconditional security_locked_down() call
serial: core: fix suspicious security_locked_down() call
Ong Boon Leong (2):
xdp: fix xdp_return_frame() kernel BUG throw for page_pool memory model
net: stmmac: fix TSO and TBS feature enabling during driver open
Or Cohen (2):
net/sctp: fix race condition in sctp_destroy_sock
net/nfc: fix use-after-free llcp_sock_bind/connect
Orson Zhai (1):
mailbox: sprd: Introduce refcnt when clients requests/free channels
Otavio Pontes (1):
x86/microcode: Check for offline CPUs before requesting new microcode
Pablo Neira Ayuso (9):
netfilter: nftables: skip hook overlap logic if flowtable is stale
netfilter: flowtable: fix NAT IPv6 offload mangling
netfilter: conntrack: do not print icmpv6 as unknown via /proc
netfilter: nft_payload: fix C-VLAN offload support
netfilter: nftables_offload: VLAN id needs host byteorder in flow dissector
netfilter: nftables_offload: special ethertype handling for VLAN
netfilter: xt_SECMARK: add new revision to fix structure layout
netfilter: nfnetlink_osf: Fix a missing skb_header_pointer() NULL check
netfilter: nftables: Fix a memleak from userdata error path in new objects
Pali Rohár (7):
net: phy: marvell: fix detection of PHY on Topaz switches
cpufreq: armada-37xx: Fix the AVS value for load L1
clk: mvebu: armada-37xx-periph: Fix switching CPU freq from 250 Mhz to 1 GHz
clk: mvebu: armada-37xx-periph: Fix workaround for switching from L1 to L0
cpufreq: armada-37xx: Fix driver cleanup when registration failed
cpufreq: armada-37xx: Fix determining base CPU frequency
PCI: iproc: Fix return value of iproc_msi_irq_domain_alloc()
Pan Bian (1):
bus: qcom: Put child node before return
Paolo Abeni (8):
net: let skb_orphan_partial wake-up waiters.
mptcp: forbid mcast-related sockopt on MPTCP sockets
udp: never accept GSO_FRAGLIST packets
mptcp: fix splat when closing unaccepted socket
mptcp: avoid error message on infinite mapping
mptcp: drop unconditional pr_warn on bad opt
mptcp: fix data stream corruption
net: really orphan skbs tied to closing sk
Paolo Bonzini (1):
KVM: x86/mmu: preserve pending TLB flush across calls to kvm_tdp_mmu_zap_sp
Paul Aurich (1):
cifs: Return correct error code from smb2_get_enc_key
Paul Cercueil (1):
drm: bridge/panel: Cleanup connector on bridge detach
Paul Clements (1):
md/raid1: properly indicate failure when ending a failed write request
Paul Durrant (1):
xen-blkback: fix compatibility bug with single page rings
Paul Fertser (1):
hwmon: (pmbus/pxe1610) don't bail out when not all pages are active
Paul M Stillwell Jr (1):
ice: handle increasing Tx or Rx ring sizes
Paul Menzel (2):
iommu/amd: Put newline after closing bracket in warning
Revert "iommu/amd: Fix performance counter initialization"
Paul Moore (1):
selinux: add proper NULL termination to the secclass_map permissions
Pavel Andrianov (1):
net: pxa168_eth: Fix a potential data race in pxa168_eth_remove
Pavel Begunkov (3):
io_uring: fix timeout cancel return code
block: don't ignore REQ_NOWAIT for direct IO
io_uring: fix overflows checks in provide buffers
Pavel Machek (1):
intel_th: Consistency and off-by-one fix
Pavel Skripkin (6):
drivers: net: fix memory leak in atusb_probe
drivers: net: fix memory leak in peak_usb_create_dev
net: mac802154: Fix general protection fault
media: dvb-usb: fix memory leak in dvb_usb_adapter_init
tty: fix memory leak in vc_deallocate
net: usb: fix memory leak in smsc75xx_bind
Pavel Tatashin (3):
mm/gup: check every subpage of a compound page during isolation
mm/gup: return an error on migration failure
mm/gup: check for isolation errors
Pavel Tikhomirov (1):
net: sched: sch_teql: fix null-pointer dereference
Pawel Laszczak (2):
usb: gadget: uvc: add bInterval checking for HS mode
usb: webcam: Invalid size of Processing Unit Descriptor
Paweł Chmiel (1):
clk: exynos7: Mark aclk_fsys1_200 as critical
Pedro Tammela (1):
libbpf: Fix bail out from 'ringbuf_process_ring()' on error
PeiSen Hou (1):
ALSA: hda/realtek: Add some CLOVE SSIDs of ALC293
Peilin Ye (1):
media: dvbdev: Fix memory leak in dvb_media_device_free()
Peng Fan (1):
mmc: sdhci-esdhc-imx: validate pinctrl before use it
Peng Li (1):
net: hns3: use netif_tx_disable to stop the transmit queue
Peter Collingbourne (3):
arm64: fix inline asm in load_unaligned_zeropad()
kasan: fix unit tests with CONFIG_UBSAN_LOCAL_BOUNDS enabled
arm64: mte: initialize RGSR_EL1.SEED in __cpu_setup
Peter Ujfalusi (1):
ALSA: hda/realtek: Chain in pop reduction fixup for ThinkStation P340
Peter Wang (1):
scsi: ufs: ufs-mediatek: Fix power down spec violation
Peter Xu (1):
mm/hugetlb: fix F_SEAL_FUTURE_WRITE
Peter Zijlstra (3):
perf: Rework perf_event_exit_event()
sched,fair: Alternative sched_slice()
openrisc: Define memory barrier mb
Petr Machata (3):
selftests: net: mirror_gre_vlan_bridge_1q: Make an FDB entry static
selftests: mlxsw: Increase the tolerance of backlog buildup
selftests: mlxsw: Fix mausezahn invocation in ERSPAN scale test
Petr Mladek (4):
watchdog: rename __touch_watchdog() to a better descriptive name
watchdog: explicitly update timestamp when reporting softlockup
watchdog/softlockup: remove logic that tried to prevent repeated reports
watchdog: fix barriers when printing backtraces from all CPUs
Phil Calvin (1):
ALSA: hda/realtek: fix mic boost on Intel NUC 8
Phil Elwell (1):
usb: dwc2: Fix gadget DMA unmap direction
Phillip Lougher (1):
squashfs: fix divide error in calculate_skip()
Phillip Potter (11):
net: tun: set tun->dev->addr_len during TUNSETLINK processing
net: geneve: check skb is large enough for IPv4/IPv6 header
net: usb: ax88179_178a: initialize local variables before use
fbdev: zero-fill colormap in fbcmap.c
net: geneve: modify IP header check in geneve6_xmit_skb and geneve_xmit_skb
net: hsr: check skb can contain struct hsr_ethhdr in fill_frame_info
scsi: ufs: handle cleanup correctly on devm_reset_control_get error
leds: lp5523: check return value of lp5xx_read and jump to cleanup code
isdn: mISDNinfineon: check/cleanup ioremap failure correctly in setup_io
isdn: mISDN: correctly handle ph_info allocation failure in hfcsusb_ph_info
dmaengine: qcom_hidma: comment platform_driver_register call
Pierre-Louis Bossart (2):
soundwire: cadence: only prepare attached devices on clock stop
ASoC: samsung: tm2_wm5110: check of of_parse return value
Ping Cheng (1):
HID: wacom: set EV_KEY and EV_ABS only for non-HID_GENERIC type of devices
Ping-Ke Shih (2):
rtw88: Fix array overrun in rtw_get_tx_power_params()
rtlwifi: 8821ae: upgrade PHY and RF parameters
Piotr Krysiuk (2):
bpf, x86: Validate computation of branch displacements for x86-64
bpf, x86: Validate computation of branch displacements for x86-32
Po-Hao Huang (1):
rtw88: 8822c: add LC calibration for RTL8822C
Potnuri Bharat Teja (2):
RDMA/cxgb4: check for ipv6 address properly while destroying listener
RDMA/cxgb4: add missing qpid increment
Pradeep Kumar Chitrapu (1):
ath11k: fix thermal temperature read
Pradeep P V K (1):
mmc: sdhci: Check for reset prior to DMA address unmap
Prashant Malani (1):
platform/chrome: cros_ec_typec: Add DP mode check
Qii Wang (3):
i2c: mediatek: Fix wrong dma sync flag
i2c: mediatek: Fix send master code at more than 1MHz
i2c: mediatek: Disable i2c start_en and clear intr_stat before reset
Qinglang Miao (9):
soc: qcom: pdr: Fix error return code in pdr_register_listener
i2c: cadence: fix reference leak when pm_runtime_get_sync fails
i2c: img-scb: fix reference leak when pm_runtime_get_sync fails
i2c: imx-lpi2c: fix reference leak when pm_runtime_get_sync fails
i2c: imx: fix reference leak when pm_runtime_get_sync fails
i2c: omap: fix reference leak when pm_runtime_get_sync fails
i2c: sprd: fix reference leak when pm_runtime_get_sync fails
i2c: stm32f7: fix reference leak when pm_runtime_get_sync fails
i2c: xiic: fix reference leak when pm_runtime_get_sync fails
Qu Huang (1):
drm/amdkfd: Fix cat debugfs hang_hws file causes system crash bug
Qu Wenruo (1):
btrfs: handle remount to no compress during compression
Quanyang Wang (11):
spi: spi-zynqmp-gqspi: use wait_for_completion_timeout to make zynqmp_qspi_exec_op not interruptible
spi: spi-zynqmp-gqspi: add mutex locking for exec_op
spi: spi-zynqmp-gqspi: transmit dummy circles by using the controller's internal functionality
spi: spi-zynqmp-gqspi: fix incorrect operating mode in zynqmp_qspi_read_op
spi: spi-zynqmp-gqspi: fix clk_enable/disable imbalance issue
spi: spi-zynqmp-gqspi: fix hang issue when suspend/resume
spi: spi-zynqmp-gqspi: fix use-after-free in zynqmp_qspi_exec_op
spi: spi-zynqmp-gqspi: return -ENOMEM if dma_map_single fails
drm/tilcdc: send vblank event when disabling crtc
clk: zynqmp: move zynqmp_pll_set_mode out of round_rate callback
clk: zynqmp: pll: add set_pll_mode to check condition in zynqmp_pll_enable
Quentin Perret (1):
sched: Fix out-of-bound access in uclamp
Quinn Tran (1):
scsi: qla2xxx: Fix use after free in bsg
Raed Salem (1):
net/mlx5: Fix placement of log_max_flow_counter
Rafael J. Wysocki (4):
ACPI: x86: Call acpi_boot_table_init() after acpi_table_upgrade()
PCI: PM: Do not read power state in pci_enable_device_flags()
cpufreq: intel_pstate: Use HWP if enabled by platform firmware
drivers: base: Fix device link removal
Rafał Miłecki (2):
dt-bindings: net: ethernet-controller: fix typo in NVMEM
ARM: dts: BCM5301X: fix "reg" formatting in /memory node
Rahul Lakkireddy (1):
cxgb4: avoid collecting SGE_QBASE regs during traffic
Raju Rangoju (1):
cxgb4: avoid accessing registers when clearing filters
Ramesh Babu B (1):
net: stmmac: Clear receive all(RA) bit when promiscuous mode is off
Rander Wang (1):
soundwire: stream: fix memory leak in stream config error path
Randy Dunlap (8):
ia64: remove duplicate entries in generic_defconfig
csky: change a Kconfig symbol name to fix e1000 build error
ia64: fix discontig.c section mismatches
NFS: fs_context: validate UDP retrans to prevent shift out-of-bounds
drm: bridge: fix LONTIUM use of mipi_dsi_() functions
powerpc: iommu: fix build when neither PCI or IBMVIO is set
MIPS: alchemy: xxs1500: add gpio-au1000.h header file
MIPS: ralink: export rt_sysc_membase for rt2880_wdt.c
Randy Wright (1):
serial: 8250_pci: Add support for new HPE serial device
Rasmus Villemoes (2):
lib/vsprintf.c: remove leftover 'f' and 'F' cases from bstr_printf()
devtmpfs: fix placement of complete() call
Ravi Kumar Bokka (1):
drivers: nvmem: Fix voltage settings for QTI qfprom-efuse
Reiji Watanabe (1):
KVM: VMX: Don't use vcpu->run->internal.ndata as an array index
Ricardo Ribalda (3):
media: staging/intel-ipu3: Fix memory leak in imu_fmt
media: staging/intel-ipu3: Fix set_fmt error handling
media: staging/intel-ipu3: Fix race condition during set_fmt
Ricardo Rivera-Matos (1):
power: supply: bq25980: Move props from battery node
Richard Fitzgerald (1):
ASoC: cs42l42: Regmap must use_single_read/write
Richard Sanger (1):
net: packetmmap: fix only tx timestamp on request
Rijo Thomas (2):
crypto: ccp - fix command queuing to TEE ring buffer
tee: amdtee: unload TA only when its refcount becomes 0
Rikard Falkeborn (1):
linux/bits.h: fix compilation error with GENMASK
Rob Clark (2):
drm/msm: Ratelimit invalid-fence message
drm/msm: Fix a5xx/a6xx timestamps
Robert Malz (1):
ice: Cleanup fltr list in case of allocation issues
Robin Murphy (2):
perf/arm_pmu_platform: Use dev_err_probe() for IRQ errors
perf/arm_pmu_platform: Fix error handling
Robin Singh (1):
drm/amd/display: fixed divide by zero kernel crash during dsc enablement
Rodrigo Siqueira (1):
drm/amd/display: Fix two cursor duplication when using overlay
Roi Dayan (2):
net/mlx5e: Fix null deref accessing lag dev
netfilter: flowtable: Remove redundant hw refresh bit
Rolf Eike Beer (1):
iommu/vt-d: Fix sysfs leak in alloc_iommu()
Romain Naour (1):
mips: Do not include hi and lo in clobber list for R6
Roman Bolshakov (1):
scsi: target: iscsi: Fix zero tag inside a trace event
Roman Gushchin (1):
percpu: make pcpu_nr_empty_pop_pages per chunk type
Rong Chen (1):
selftests/vm: fix out-of-tree build
Ronnie Sahlberg (2):
cifs: revalidate mapping when we open files for SMB1 POSIX
cifs: fix memory leak in smb2_copychunk_range
Rui Miguel Silva (1):
iio: gyro: fxas21002c: balance runtime power in error path
Ruslan Bilovol (2):
usb: gadget: f_uac2: validate input parameters
usb: gadget: f_uac1: validate input parameters
Russ Weight (1):
fpga: dfl: pci: add DID for D5005 PAC cards
Russell Currey (1):
selftests/powerpc: Fix L1D flushing tests for Power10
Russell King (3):
net: sfp: relax bitrate-derived mode check
net: sfp: cope with SFPs that set both LOS normal and LOS inverted
ARM: footbridge: fix PCI interrupt mapping
Ryan Lee (2):
ASoC: max98373: Changed amp shutdown register as volatile
ASoC: max98373: Added 30ms turn on/off time delay
Ryder Lee (3):
mt76: mt7615: use ieee80211_free_txskb() in mt7615_tx_token_put()
mt76: mt7915: fix mib stats counter reporting to mac80211
mt76: mt7615: fix memleak when mt7615_unregister_device()
Saeed Mahameed (1):
net/mlx5e: reset XPS on error flow if netdev isn't registered yet
Sagi Grimberg (3):
nvme-tcp: block BH in sk state_change sk callback
nvmet-tcp: fix incorrect locking in state_change sk callback
nvme-tcp: fix possible use-after-completion
Sai Prakash Ranjan (2):
arm64: dts: qcom: sm8250: Fix level triggered PMU interrupt polarity
arm64: dts: qcom: sm8250: Fix timer interrupt to specify EL2 physical timer
Salil Mehta (1):
net: hns3: Limiting the scope of vector_ring_chain variable
Sami Loone (2):
ALSA: hda/realtek: fix static noise on ALC285 Lenovo laptops
ALSA: hda/realtek: ALC285 Thinkpad jack pin quirk is unreachable
Sandeep Singh (1):
xhci: Add reset resume quirk for AMD xhci controller.
Sander Vanheule (1):
mt76: mt7615: support loading EEPROM for MT7613BE
Saravana Kannan (1):
driver core: Fix locking bug in deferred_probe_timeout_work_func()
Sargun Dhillon (2):
Documentation: seccomp: Fix user notification documentation
seccomp: Refactor notification handler to prepare for new semantics
Sean Christopherson (26):
KVM: x86/mmu: Ensure TLBs are flushed when yielding during GFN range zap
KVM: x86/mmu: Ensure TLBs are flushed for TDP MMU during NX zapping
KVM: x86/mmu: Don't allow TDP MMU to yield when recovering NX pages
KVM: VMX: Convert vcpu_vmx.exit_reason to a union
x86/cpu: Initialize MSR_TSC_AUX if RDTSCP *or* RDPID is supported
KVM: x86: Defer the MMU unload to the normal path on an global INVPCID
KVM: x86/mmu: Alloc page for PDPTEs when shadowing 32-bit NPT with 64-bit
KVM: x86: Remove emulator's broken checks on CR0/CR3/CR4 loads
KVM: nSVM: Set the shadow root level to the TDP level for nested NPT
KVM: SVM: Don't strip the C-bit from CR2 on #PF interception
KVM: SVM: Do not allow SEV/SEV-ES initialization after vCPUs are created
KVM: SVM: Inject #GP on guest MSR_TSC_AUX accesses if RDTSCP unsupported
KVM: nVMX: Defer the MMU reload to the normal path on an EPTP switch
KVM: nVMX: Truncate bits 63:32 of VMCS field on nested check in !64-bit
KVM: nVMX: Truncate base/index GPR value on address calc in !64-bit
KVM: Destroy I/O bus devices on unregister failure _after_ sync'ing SRCU
KVM: Stop looking for coalesced MMIO zones if the bus is destroyed
KVM: x86/mmu: Retry page faults that hit an invalid memslot
crypto: ccp: Detect and reject "invalid" addresses destined for PSP
KVM: VMX: Intercept FS/GS_BASE MSR accesses for 32-bit KVM
KVM: x86/mmu: Remove the defunct update_pte() paging hook
crypto: ccp: Free SEV device if SEV init fails
KVM: x86: Emulate RDPID only if RDTSCP is supported
KVM: x86: Move RDPID emulation intercept to its own enum
KVM: VMX: Do not advertise RDPID if ENABLE_RDTSCP control is unsupported
KVM: VMX: Disable preemption when probing user return MSRs
Sean MacLennan (1):
USB: serial: ti_usb_3410_5052: add startech.com device id
Sean Wang (2):
mt76: mt7663s: make all of packets 4-bytes aligned in sdio tx aggregation
mt76: mt7663s: fix the possible device hang in high traffic
Sean Young (1):
media: ite-cir: check for receive overflow
Sebastian Krzyszkowiak (1):
arm64: dts: imx8mq-librem5-r3: Mark buck3 as always on
Seevalamuthu Mariappan (1):
mac80211: clear sta->fast_rx when STA removed from 4-addr VLAN
Serge E. Hallyn (1):
capabilities: require CAP_SETFCAP to map uid 0
Sergei Trofimovich (5):
ia64: mca: allocate early mca with GFP_ATOMIC
ia64: fix format strings for err_inject
ia64: fix user_stack_pointer() for ptrace()
ia64: fix EFI_DEBUG build
ia64: module: fix symbolizer crash on fdescr
Sergey Shtylyov (16):
pata_arasan_cf: fix IRQ check
pata_ipx4xx_cf: fix IRQ check
sata_mv: add IRQ checks
ata: libahci_platform: fix IRQ check
scsi: ufs: ufshcd-pltfrm: Fix deferred probing
scsi: hisi_sas: Fix IRQ checks
scsi: jazz_esp: Add IRQ check
scsi: sun3x_esp: Add IRQ check
scsi: sni_53c710: Add IRQ check
i2c: cadence: add IRQ check
i2c: emev2: add IRQ check
i2c: jz4780: add IRQ check
i2c: mlxbf: add IRQ check
i2c: rcar: add IRQ check
i2c: sh7760: add IRQ check
i2c: sh7760: fix IRQ error path
Seunghui Lee (1):
mmc: core: Set read only for SD cards with permanent write protect bit
Shameer Kolothum (1):
iommu: Check dev->iommu in iommu_dev_xxx functions
Shawn Guo (4):
soc: qcom: geni: shield geni_icc_get() for ACPI boot
arm64: dts: qcom: sdm845: fix number of pins in 'gpio-ranges'
arm64: dts: qcom: sm8150: fix number of pins in 'gpio-ranges'
arm64: dts: qcom: sm8250: fix number of pins in 'gpio-ranges'
Shay Drory (2):
RDMA/core: Add CM to restrack after successful attachment to a device
RDMA/core: Don't access cm_id after its destruction
Shayne Chen (1):
mt76: mt7915: fix txpower init for TSSI off chips
Shengjiu Wang (3):
ASoC: wm8960: Fix wrong bclk and lrclk with pll enabled for some chips
ASoC: wm8960: Remove bitclk relax condition in wm8960_configure_sysclk
ASoC: ak5558: correct reset polarity
Shixin Liu (6):
crypto: sun8i-ss - Fix PM reference leak when pm_runtime_get_sync() fails
crypto: sun8i-ce - Fix PM reference leak in sun8i_ce_probe()
crypto: stm32/hash - Fix PM reference leak on stm32-hash.c
crypto: stm32/cryp - Fix PM reference leak on stm32-cryp.c
crypto: sa2ul - Fix PM reference leak in sa_ul_probe()
crypto: omap-aes - Fix PM reference leak on omap-aes.c
Shou-Chieh Hsu (1):
HID: google: add don USB id
Shradha Todi (1):
PCI: endpoint: Fix NULL pointer dereference for ->get_features()
Shuah Khan (5):
usbip: add sysfs_lock to synchronize sysfs code paths
usbip: stub-dev synchronize sysfs code paths
usbip: vudc synchronize sysfs code paths
usbip: synchronize event handler with sysfs code paths
ath10k: Fix ath10k_wmi_tlv_op_pull_peer_stats_info() unlock without lock
Shuo Chen (1):
dyndbg: fix parsing file query without a line-range suffix
Shyam Prasad N (1):
cifs: detect dead connections only when echoes are enabled.
Shyam Sundar S K (2):
amd-xgbe: Update DMA coherency values
platform/x86: hp-wireless: add AMD's hardware id to the supported list
Si-Wei Liu (1):
vdpa/mlx5: should exclude header length and fcs from mtu
Sibi Sankar (1):
remoteproc: qcom_q6v5_mss: Replace ioremap with memremap
Simon Rettberg (1):
drm/i915/gt: Disable HiZ Raw Stall Optimization on broken gen7
Sindhu Devale (1):
RDMA/i40iw: Fix error unwinding when i40iw_hmc_sd_one fails
Smita Koralahalli (1):
perf vendor events amd: Fix broken L2 Cache Hits from L2 HWPF metric
Souptick Joarder (1):
media: atomisp: Fixed error handling path
Sourabh Jain (1):
powerpc/kexec_file: Use current CPU info while setting up FDT
Sreekanth Reddy (1):
scsi: mpt3sas: Block PCI config access from userspace during reset
Srikar Dronamraju (2):
powerpc/smp: Reintroduce cpu_core_mask
powerpc/smp: Set numa node before updating mask
Srinivas Kandagatla (2):
arm64: dts: qcom: db845c: fix correct powerdown pin for WSA881x
soundwire: bus: Fix device found flag correctly
Srinivas Pandruvada (3):
tools/power/x86/intel-speed-select: Increase string size
platform/x86: ISST: Account for increased timeout in some cases
thermal/drivers/intel: Initialize RW trip to THERMAL_TEMP_INVALID
Sriram R (2):
ath10k: Validate first subframe of A-MSDU before processing the list
ath11k: Clear the fragment cache during key install
Stanimir Varbanov (1):
media: venus: hfi_parser: Don't initialize parser on v1
Stanislav Fomichev (1):
tools/resolve_btfids: Add /libbpf to .gitignore
Stefan Assmann (1):
iavf: remove duplicate free resources calls
Stefan Berger (3):
tpm: acpi: Check eventlog signature before using it
tpm: efi: Use local variable for calculating final log size
tpm: vtpm_proxy: Avoid reading host log when using a virtual device
Stefan Chulski (1):
net: mvpp2: add buffer header handling in RX
Stefan Raspl (1):
tools/kvm_stat: Add restart delay
Stefan Riedmueller (1):
ARM: dts: imx6: pbab01: Set vmmc supply for both SD interfaces
Stefan Roese (1):
net: ethernet: mtk_eth_soc: Fix packet statistics support for MT7628/88
Stefano Brivio (1):
netfilter: nft_set_pipapo_avx2: Add irq_fpu_usable() check, fallback to non-AVX2 version
Stefano Garzarella (2):
vsock/vmci: log once the failed queue pair allocation
vsock/virtio: free queued packets when closing socket
Steffen Dirkwinkel (1):
platform/x86: pmc_atom: Match all Beckhoff Automation baytrail boards with critclk_systems DMI table
Steffen Klassert (2):
xfrm: Fix NULL pointer dereference on policy lookup
xfrm: Provide private skb extensions for segmented and hw offloaded ESP packets
Stephen Boyd (6):
drm/msm: Set drvdata to NULL when msm_drm_init() fails
serial: stm32: Use of_device_get_match_data()
firmware: qcom_scm: Make __qcom_scm_is_call_available() return bool
firmware: qcom_scm: Reduce locking section for __get_convention()
firmware: qcom_scm: Workaround lack of "is available" call on SC7180
ASoC: qcom: lpass-cpu: Use optional clk APIs
Steve French (3):
smb3: when mounting with multichannel include it in requested capabilities
smb3: do not attempt multichannel to server which does not support it
SMB3: incorrect file id in requests compounded with open
Steven Rostedt (VMware) (4):
ftrace: Check if pages were allocated before calling free_pages()
ftrace: Handle commands when closing set_ftrace_filter file
tracing: Map all PIDs to command lines
tracing: Restructure trace_clock_global() to never block
Stéphane Marchesin (1):
drm/i915: Fix crash in auto_retire
Subbaraman Narayanamurthy (1):
interconnect: qcom: bcm-voter: add a missing of_node_put()
Sudhakar Panneerselvam (1):
md/bitmap: wait for external bitmap writes to complete during tear down
Sumeet Pawnikar (1):
ACPI: PM: Add ACPI ID of Alder Lake Fan
Sun Ke (1):
nbd: Fix NULL pointer in flush_workqueue
Suravee Suthikulpanit (1):
iommu/amd: Remove performance counter pre-initialization test
Suzuki K Poulose (3):
KVM: arm64: Hide system instruction access to Trace registers
KVM: arm64: Disable guest access to trace filter controls
coresight: Do not scan for graph if none is present
Taehee Yoo (2):
mld: fix panic in mld_newpack()
sch_dsmark: fix a NULL deref in qdisc_reset()
Takashi Iwai (28):
ALSA: hda/realtek: Fix speaker amp setup on Acer Aspire E1
ALSA: hda/conexant: Apply quirk for another HP ZBook G5 model
drm/i915: Fix invalid access to ACPI _DSM objects
ALSA: usb-audio: Add MIDI quirk for Vox ToneLab EX
ALSA: hda/conexant: Re-order CX5066 quirk table entries
ALSA: usb-audio: Explicitly set up the clock selector
media: dvb-usb: Fix use-after-free access
media: dvb-usb: Fix memory leak at error in dvb_usb_device_init()
ALSA: hda/realtek: Re-order ALC882 Acer quirk table entries
ALSA: hda/realtek: Re-order ALC882 Sony quirk table entries
ALSA: hda/realtek: Re-order ALC882 Clevo quirk table entries
ALSA: hda/realtek: Re-order ALC269 HP quirk table entries
ALSA: hda/realtek: Re-order ALC269 Acer quirk table entries
ALSA: hda/realtek: Re-order ALC269 Dell quirk table entries
ALSA: hda/realtek: Re-order ALC269 ASUS quirk table entries
ALSA: hda/realtek: Re-order ALC269 Sony quirk table entries
ALSA: hda/realtek: Re-order ALC269 Lenovo quirk table entries
ALSA: hda/realtek: Re-order remaining ALC269 quirk table entries
ALSA: hda/realtek: Re-order ALC662 quirk table entries
ALSA: hda/realtek: Remove redundant entry for ALC861 Haier/Uniwill devices
ALSA: hda/realtek: Fix speaker amp on HP Envy AiO 32
ALSA: usb-audio: Add error checks for usb_driver_claim_interface() calls
ALSA: hda/realtek: Add quirk for Lenovo Ideapad S740
ALSA: intel8x0: Don't update period unless prepared
ALSA: line6: Fix racy initialization of LINE6 MIDI
ALSA: usb-audio: Validate MS endpoint descriptors
ALSA: hda/realtek: Fix silent headphone output on ASUS UX430UA
ALSA: hda/realtek: Add fixup for HP OMEN laptop
Takashi Sakamoto (7):
ALSA: bebob: enable to deliver MIDI messages for multiple ports
ALSA: dice: fix stream format for TC Electronic Konnekt Live at high sampling transfer frequency
ALSA: firewire-lib: fix amdtp_packet tracepoints event for packet_index field
ALSA: dice: fix stream format at middle sampling rate for Alesis iO 26
ALSA: firewire-lib: fix calculation for size of IR context payload
ALSA: bebob/oxfw: fix Kconfig entry for Mackie d.2 Pro
ALSA: firewire-lib: fix check for the size of isochronous packet payload
Tanner Love (1):
net/packet: make packet_fanout.arr size configurable up to 64K
Tao Liu (1):
openvswitch: meter: fix race when getting now_ms.
Tao Ren (1):
usb: gadget: aspeed: fix dma map failure
Tariq Toukan (1):
net/mlx5e: Enforce minimum value check for ICOSQ size
Tasos Sahanidis (2):
media: saa7134: use sg_dma_len when building pgtable
media: saa7146: use sg_dma_len when building pgtable
Teava Radu (1):
platform/x86: touchscreen_dmi: Add info for the Mediacom Winpad 7.0 W700 tablet
Tejas Patel (1):
firmware: xilinx: Fix dereferencing freed memory
Tejun Heo (1):
blk-iocost: fix weight updates of inner active iocgs
Tetsuo Handa (7):
batman-adv: initialize "struct batadv_tvlv_tt_vlan_data"->reserved field
lockdep: Add a missing initialization hint to the "INFO: Trying to register non-static key" message
misc: vmw_vmci: explicitly initialize vmci_notify_bm_set_msg struct
misc: vmw_vmci: explicitly initialize vmci_datagram payload
ttyprintk: Add TTY hangup callback.
Bluetooth: initialize skb_queue_head at l2cap_chan_create()
tty: vt: always invoke vc->vc_sw->con_resize callback
Thadeu Lima de Souza Cascardo (3):
io_uring: truncate lengths larger than MAX_RW_COUNT on provide buffers
bpf, ringbuf: Deny reserve of buffers larger than ringbuf
Bluetooth: cmtp: fix file refcount when cmtp_attach_device fails
Theodore Ts'o (2):
fs: fix reporting supported extra file attributes for statx()
ext4: allow the dax flag to be set and cleared on inline directories
Thinh Nguyen (5):
usb: xhci: Fix port minor revision
usb: dwc3: gadget: Check for disabled LPM quirk
usb: dwc3: gadget: Remove FS bInterval_m1 limitation
usb: dwc3: gadget: Fix START_TRANSFER link state check
usb: dwc3: gadget: Properly track pending and queued SG
Thomas Gleixner (4):
Revert 337f13046ff0 ("futex: Allow FUTEX_CLOCK_REALTIME with FUTEX_WAIT op")
futex: Do not apply time namespace adjustment on FUTEX_LOCK_PI
KVM: x86: Cancel pvclock_gtod_work on module removal
KVM: x86: Prevent deadlock against tk_core.seq
Thomas Richter (1):
perf ftrace: Fix access to pid in array when setting a pid filter
Thomas Zimmermann (1):
drm/ast: Fix invalid usage of AST_MAX_HWC_WIDTH in cursor atomic_check
Tian Tao (1):
dm integrity: fix missing goto in bitmap_flush_interval error handling
Tiezhu Yang (2):
MIPS/bpf: Enable bpf_probe_read{, str}() on MIPS again
MIPS: Loongson64: Use _CACHE_UNCACHED instead of _CACHE_UNCACHED_ACCELERATED
Timo Gurr (1):
ALSA: usb-audio: Add dB range mapping for Sennheiser Communications Headset PC 8
Toke Høiland-Jørgensen (2):
bpf: Enforce that struct_ops programs be GPL-only
ath9k: Fix error check in ath9k_hw_read_revisions() for PCI devices
Tom Lendacky (2):
x86/sev-es: Move sev_es_put_ghcb() in prep for follow on patch
x86/sev-es: Invalidate the GHCB after completing VMGEXIT
Tom Seewald (3):
qlcnic: Add null check after calling netdev_alloc_skb
char: hpet: add checks after calling ioremap
net: liquidio: Add missing null pointer checks
Tomas Winkler (1):
mei: me: add Alder Lake P device id.
Tong Zhang (8):
mISDN: fix crash in fritzpci
drm/qxl: do not run release if qxl failed to init
drm/ast: fix memory leak when unload the driver
crypto: qat - don't release uninitialized resources
crypto: qat - ADF_STATUS_PF_RUNNING should be set after adf_dev_init
ALSA: hdsp: don't disable if not enabled
ALSA: hdspm: don't disable if not enabled
ALSA: rme9652: don't disable if not enabled
Tong Zhu (1):
neighbour: Disregard DEAD dst in neigh_update
Tony Ambardar (1):
powerpc: fix EDEADLOCK redefinition error in uapi/asm/errno.h
Tony Lindgren (14):
bus: ti-sysc: Fix warning on unbind if reset is not deasserted
ARM: OMAP4: Fix PMIC voltage domains for bionic
ARM: dts: Drop duplicate sha2md5_fck to fix clk_disable race
ARM: dts: Fix moving mmc devices with aliases for omap4 & 5
ARM: OMAP2+: Fix warning for omap_init_time_of()
ARM: OMAP2+: Fix uninitialized sr_inst
gpio: omap: Save and restore sysconfig
ARM: dts: Fix swapped mmc order for omap3
bus: ti-sysc: Probe for l4_wkup and l4_cfg interconnect devices first
clocksource/drivers/timer-ti-dm: Fix posted mode status check order
clocksource/drivers/timer-ti-dm: Add missing set_state_oneshot_stopped
PM: runtime: Fix unpaired parent child_count for force_resume
clocksource/drivers/timer-ti-dm: Prepare to handle dra7 timer wrap issue
clocksource/drivers/timer-ti-dm: Handle dra7 timer wrap errata i940
Trond Myklebust (11):
NFS: Don't discard pNFS layout segments that are marked for return
NFSv4: Don't discard segments marked for return in _pnfs_return_layout()
NFS: nfs4_bitmask_adjust() must not change the server global bitmasks
NFS: Fix attribute bitmask in _nfs42_proc_fallocate()
NFSv4.2: Always flush out writes in nfs42_proc_fallocate()
NFS: Deal correctly with attribute generation counter overflow
NFSv4.x: Don't return NFS4ERR_NOMATCHING_LAYOUT if we're unmounting
NFS: NFS_INO_REVAL_PAGECACHE should mark the change attribute invalid
NFS: Fix an Oopsable condition in __nfs_pageio_add_request()
NFS: Don't corrupt the value of pg_bytes_written in nfs_do_recoalesce()
SUNRPC: More fixes for backlog congestion
Tudor Ambarus (2):
Revert "mtd: spi-nor: macronix: Add support for mx25l51245g"
spi: spi-ti-qspi: Free DMA resources
Tvrtko Ursulin (1):
drm/i915/overlay: Fix active retire callback alignment
Tyrel Datwyler (1):
powerpc/pseries: extract host bridge from pci_bus prior to bus removal
Uladzislau Rezki (Sony) (1):
kvfree_rcu: Use same set of GFP flags as does single-argument
Ulf Hansson (1):
mmc: core: Fix hanging on I/O during system suspend for removable cards
Uwe Kleine-König (1):
pwm: atmel: Fix duty cycle calculation in .get_state()
Vadym Kochan (1):
net: marvell: prestera: fix port event handling on init
Vaibhav Jain (2):
libnvdimm/region: Fix nvdimm_has_flush() to handle ND_REGION_ASYNC
powerpc/mm: Add cond_resched() while removing hpte mappings
Valentin CARON - foss (1):
ARM: dts: stm32: fix usart 2 & 3 pinconf to wake up with flow control
Valentin Schneider (1):
sched/fair: Fix shift-out-of-bounds in load_balance()
Vamshi Krishna Gopal (1):
ASoC: Intel: sof_sdw: add quirk for new ADL-P Rvp
Varad Gautam (1):
ipc/mqueue, msg, sem: avoid relying on a stack reference past its expiry
Vasily Averin (1):
tools/cgroup/slabinfo.py: updated to work on current kernel
Vasily Gorbik (2):
s390/entry: save the caller of psw_idle
s390/disassembler: increase ebpf disasm buffer size
Ville Syrjälä (2):
drm/i915: Avoid div-by-zero on gen2
drm/i915: Read C0DRB3/C1DRB3 as 16 bits again
Vinay Kumar Yadav (4):
ch_ktls: Fix kernel panic
ch_ktls: fix device connection close
ch_ktls: tcb close causes tls connection failure
ch_ktls: do not send snd_una update to TCB in middle
Vincent Donnefort (1):
sched/pelt: Fix task util_est update filtering
Vincent Whitchurch (1):
cifs: Silently ignore unknown oplock break handle
Vineet Gupta (1):
ARC: entry: fix off-by-one error in syscall number validation
Viswas G (1):
scsi: pm80xx: Fix chip initialization failure
Vitaly Chikunov (1):
perf beauty: Fix fsconfig generator
Vitaly Kuznetsov (3):
ACPI: processor: Fix build when CONFIG_ACPI_PROCESSOR=m
genirq/matrix: Prevent allocation counter corruption
KVM: nVMX: Always make an attempt to map eVMCS after migration
Vivek Goyal (5):
fuse: fix write deadlock
fuse: invalidate attrs when page writeback completes
dax: Add an enum for specifying dax wakup mode
dax: Add a wakeup mode parameter to put_unlocked_entry()
dax: Wake up all waiters after invalidating dax entry
Vlad Buslov (4):
net: sched: fix action overwrite reference counting
net: sched: fix err handler in tcf_action_init()
Revert "net: sched: bump refcount for new action in ACT replace mode"
net: zero-initialize tc skb extension on allocation
Vladimir Barinov (1):
arm64: dts: renesas: r8a77980: Fix vin4-7 endpoint binding
Vladimir Isaev (2):
ARC: mm: PAE: use 40-bit physical page mask
ARC: mm: Use max_high_pfn as a HIGHMEM zone border
Vladimir Murzin (1):
ARM: 9069/1: NOMMU: Fix conversion for_each_membock() to for_each_mem_range()
Vladimir Oltean (8):
net/sched: cls_flower: use ntohs for struct flow_dissector_key_ports
net: dsa: sja1105: update existing VLANs from the bridge VLAN list
net: dsa: sja1105: use 4095 as the private VLAN for untagged traffic
net: dsa: sja1105: error out on unsupported PHY mode
net: dsa: sja1105: add error handling in sja1105_setup()
net: dsa: sja1105: call dsa_unregister_switch when allocating memory fails
net: dsa: sja1105: fix VL lookup command packing for P/Q/R/S
net: dsa: fix error code getting shifted with 4 in dsa_slave_get_sset_count
Vladyslav Tarasiuk (1):
net/mlx4: Fix EEPROM dump support
Waiman Long (1):
sched/debug: Fix cgroup_path[] serialization
Wan Jiabing (1):
cavium/liquidio: Fix duplicate argument
Wang Li (2):
spi: qup: fix PM reference leak in spi_qup_remove()
spi: fsl-lpspi: Fix PM reference leak in lpspi_prepare_xfer_hardware()
Wang Qing (1):
arc: kernel: Return -EFAULT if copy_to_user() fails
Wang Wensheng (5):
RDMA/qedr: Fix error return code in qedr_iw_connect()
IB/hfi1: Fix error return code in parse_platform_config()
RDMA/bnxt_re: Fix error return code in bnxt_qplib_cq_process_terminal()
RDMA/srpt: Fix error return code in srpt_cm_req_recv()
mm/sparse: add the missing sparse_buffer_fini() in error branch
Wanpeng Li (5):
KVM: LAPIC: Accurately guarantee busy wait for timer to expire when using hv_timer
context_tracking: Move guest exit context tracking to separate helpers
context_tracking: Move guest exit vtime accounting to separate helpers
KVM: x86: Defer vtime accounting 'til after IRQ handling
KVM: X86: Fix vCPU preempted state from guest's point of view
Wayne Lin (2):
drm/dp_mst: Revise broadcast msg lct & lcr
drm/dp_mst: Set CLEAR_PAYLOAD_ID_TABLE as broadcast
Wei Yongjun (7):
spi: dln2: Fix reference leak to master
spi: omap-100k: Fix reference leak to master
usb: typec: tps6598x: Fix return value check in tps6598x_probe()
usb: typec: stusb160x: fix return value check in stusb160x_probe()
clocksource/drivers/ingenic_ost: Fix return value check in ingenic_ost_probe()
spi: spi-zynqmp-gqspi: Fix missing unlock on error in zynqmp_qspi_exec_op()
media: m88ds3103: fix return value check in m88ds3103_probe()
Wen Gong (6):
mac80211: extend protection against mixed key and fragment cache attacks
ath10k: add CCMP PN replay protection for fragmented frames for PCIe
ath10k: drop fragments with multicast DA for PCIe
ath10k: drop fragments with multicast DA for SDIO
ath10k: drop MPDU which has discard flag set by firmware for SDIO
ath10k: Fix TKIP Michael MIC verification for PCIe
Wengang Wang (1):
ocfs2: fix deadlock between setattr and dio_end_io_write
Werner Sembach (1):
drm/amd/display: Try YCbCr420 color when YCbCr444 fails
Wesley Cheng (2):
usb: dwc3: gadget: Ignore EP queue requests during bus reset
usb: dwc3: gadget: Return success always for kick transfer in ep queue
William A. Kennington III (1):
spi: Fix use-after-free with devm_spi_alloc_*
William Roche (1):
RAS/CEC: Correct ce_add_elem()'s returned values
Wolfram Sang (4):
i2c: turn recovery error on init to debug
i2c: rcar: make sure irq is not threaded on Gen2 and earlier
i2c: rcar: protect against supurious interrupts on V3U
i2c: bail out early when RDWR parameters are wrong
Wong Vee Khee (1):
ethtool: fix incorrect datatype in set_eee ops
Wu Bo (2):
nvmet: fix memory leak in nvmet_alloc_ctrl()
nvme-loop: fix memory leak in nvme_loop_create_ctrl()
Xiang Chen (2):
mtd: spi-nor: core: Fix an issue of releasing resources during read/write
iommu: Fix a boundary issue to avoid performance drop
Xiao Ni (1):
async_xor: increase src_offs when dropping destination page
Xiaogang Chen (1):
drm/amdgpu/display: buffer INTERRUPT_LOW_IRQ_CONTEXT interrupt work
Xiaoming Ni (4):
nfc: fix refcount leak in llcp_sock_bind()
nfc: fix refcount leak in llcp_sock_connect()
nfc: fix memory leak in llcp_sock_connect()
nfc: Avoid endless loops caused by repeated llcp_sock_connect()
Xie He (2):
Revert "drivers/net/wan/hdlc_fr: Fix a double free in pvc_xmit"
net: lapbether: Prevent racing when checking whether the netif is running
Xie Yongji (1):
vhost-vdpa: protect concurrent access to vhost device iotlb
Xin Long (9):
esp: delete NETIF_F_SCTP_CRC bit from features for esp offload
tipc: increment the tmp aead refcnt before attaching it
xfrm: BEET mode doesn't support fragments for inner packets
Revert "net/sctp: fix race condition in sctp_destroy_sock"
sctp: delay auto_asconf init until binding the first addr
sctp: do asoc update earlier in sctp_sf_do_dupcook_a
sctp: fix a SCTP_MIB_CURRESTAB leak in sctp_sf_do_dupcook_b
tipc: wait and exit until all work queues are done
tipc: skb_linearize the head skb when reassembling msgs
Xingui Yang (1):
ata: ahci: Disable SXS for Hisilicon Kunpeng920
Xu Yihang (1):
ext4: fix error return code in ext4_fc_perform_commit()
Xu Yilun (1):
mfd: intel-m10-bmc: Fix the register access range
Xuan Zhuo (1):
xsk: Fix for xp_aligned_validate_desc() when len == chunk_size
Yang Yang (1):
jffs2: check the validity of dstlen in jffs2_zlib_compress()
Yang Yingliang (15):
usb: gadget: tegra-xudc: Fix possible use-after-free in tegra_xudc_remove()
phy: phy-twl4030-usb: Fix possible use-after-free in twl4030_usb_remove()
power: supply: generic-adc-battery: fix possible use-after-free in gab_remove()
power: supply: s3c_adc_battery: fix possible use-after-free in s3c_adc_bat_remove()
media: tc358743: fix possible use-after-free in tc358743_remove()
media: adv7604: fix possible use-after-free in adv76xx_remove()
media: i2c: adv7511-v4l2: fix possible use-after-free in adv7511_remove()
media: i2c: tda1997: Fix possible use-after-free in tda1997x_remove()
media: i2c: adv7842: fix possible use-after-free in adv7842_remove()
USB: gadget: udc: fix wrong pointer passed to IS_ERR() and PTR_ERR()
spi: fsl: add missing iounmap() on error in of_fsl_spi_probe()
media: omap4iss: return error code when omap4iss_get() failed
net/tipc: fix missing destroy_workqueue() on error in tipc_crypto_start()
PCI: endpoint: Fix missing destroy_workqueue()
tools/testing/selftests/exec: fix link error
Yangbo Lu (1):
ptp_qoriq: fix overflow in ptp_qoriq_adjfine() u64 calcalation
Yannick Vignon (1):
net: stmmac: Do not enable RX FIFO overflow interrupts
Yaqi Chen (1):
samples/bpf: Fix broken tracex1 due to kprobe argument change
Ye Bin (2):
ext4: fix ext4_error_err save negative errno into superblock
usbip: vudc: fix missing unlock on error in usbip_sockfd_store()
Yi Li (1):
drm/amdgpu: Fix GPU TLB update error when PAGE_SIZE > AMDGPU_PAGE_SIZE
Yi Zhuang (1):
f2fs: Fix a hungtask problem in atomic write
Yingjie Wang (1):
drm/radeon: Fix a missing check bug in radeon_dp_mst_detect()
Yinjun Zhang (3):
nfp: flower: ignore duplicate merge hints from FW
nfp: devlink: initialize the devlink port attribute "lanes"
bpf, offload: Reorder offload callback 'prepare' in verifier
Yonghong Song (3):
bpf, x86: Use kvmalloc_array instead kmalloc_array in bpf_jit_comp
bpf: Permits pointers on stack for helper calls
selftests: Set CC to clang in lib.mk if LLVM is set
Yongxin Liu (2):
ice: fix memory leak of aRFS after resuming from suspend
ixgbe: fix unbalanced device enable/disable in suspend/resume
Yoshihiro Shimoda (5):
ARM: dts: renesas: Add mmc aliases into R-Car Gen2 board dts files
arm64: dts: renesas: Add mmc aliases into board dts files
arm64: dts: renesas: r8a779a0: Fix PMU interrupt
net: renesas: ravb: Fix a stuck issue when a lot of frames are received
usb: gadget: udc: renesas_usb3: Fix a race in usb3_start_pipen()
Yu Chen (1):
usb: dwc3: core: Do core softreset when switch mode
Yuanyuan Zhong (1):
pinctrl: lewisburg: Update number of pins in community
YueHaibing (2):
PM: runtime: Replace inline function pm_runtime_callbacks_present()
iio: adc: ad7793: Add missing error code in ad7793_setup()
Yufen Yu (1):
block: only update parent bi_status when bio fail
Yufeng Mo (3):
net: hns3: fix incorrect configuration for igu_egu_hw_err
net: hns3: initialize the message content in hclge_get_link_mode()
net: hns3: disable phy loopback setting in hclge_mac_start_phy
Yunjian Wang (2):
net: cls_api: Fix uninitialised struct field bo->unlocked_driver_cb
i40e: Fix use-after-free in i40e_client_subtask()
Yunsheng Lin (5):
net: hns3: add handling for xmit skb with recursive fraglist
net: sched: fix packet stuck problem for lockless qdisc
net: sched: fix tx action rescheduling issue during deactivation
net: sched: fix tx action reschedule issue with stopped queue
net: hns3: check the return of skb_checksum_help()
Zhang Xiaoxu (1):
NFSv4: Fix v4.0/v4.1 SEEK_DATA return -ENOTSUPP when set NFS_V4_2 config
Zhang Yi (2):
ext4: fix check to prevent false positive report of incorrect used inodes
ext4: do not set SB_ACTIVE in ext4_orphan_cleanup()
Zhang Zhengming (1):
bridge: Fix possible races between assigning rx_handler_data and setting IFF_BRIDGE_PORT bit
Zhao Heming (1):
md: md_open returns -EBUSY when entering racing area
Zhen Lei (9):
perf map: Fix error return code in maps__clone()
perf data: Fix error return code in perf_data__create_dir()
iommu/arm-smmu-v3: add bit field SFM into GERROR_ERR_MASK
tpm: fix error return code in tpm2_get_cc_attrs_tbl()
ARM: 9064/1: hw_breakpoint: Do not directly check the event's overflow_handler hook
xen/unpopulated-alloc: fix error return code in fill_list()
dt-bindings: serial: 8250: Remove duplicated compatible strings
scsi: qla2xxx: Fix error return code in qla82xx_write_flash_dword()
net: bnx2: Fix error return code in bnx2_init_board()
Zheng Yongjun (1):
net: openvswitch: conntrack: simplify the return expression of ovs_ct_limit_get_default_limit()
Zheyu Ma (1):
serial: rp2: use 'request_firmware' instead of 'request_firmware_nowait'
Zhouyi Zhou (1):
rcu: Remove spurious instrumentation_end() in rcu_nmi_enter()
Zhu Lingshan (1):
Revert "irqbypass: do not start cons/prod when failed connect"
Zihao Yu (1):
riscv,entry: fix misaligned base for excp_vect_table
Zolton Jheng (1):
USB: serial: pl2303: add device id for ADLINK ND-6530 GC
Zou Wei (2):
gpio: cadence: Add missing MODULE_DEVICE_TABLE
interconnect: qcom: Add missing MODULE_DEVICE_TABLE
Zqiang (3):
workqueue: Move the position of debug_work_activate() in __queue_work()
lib: stackdepot: turn depot_lock spinlock to raw_spinlock
locking/mutex: clear MUTEX_FLAGS if wait_list is empty due to signal
brian-sy yang (1):
thermal/drivers/cpufreq_cooling: Fix slab OOB issue
dillon min (1):
dt-bindings: serial: stm32: Use 'type: object' instead of false for 'additionalProperties'
dongjian (1):
power: supply: Use IRQF_ONESHOT
gexueyuan (1):
memory: pl353: fix mask of ECC page_size config register
karthik alapati (1):
staging: wimax/i2400m: fix byte-order issue
kernel test robot (2):
of: overlay: fix for_each_child.cocci warnings
ALSA: usb-audio: scarlett2: snd_scarlett_gen2_controls_create() can be static
lizhe (1):
jffs2: Fix kasan slab-out-of-bounds problem
louis.wang (1):
ARM: 9066/1: ftrace: pause/unpause function graph tracer in cpu_suspend()
mark-yw.chen (1):
Bluetooth: btusb: Enable quirk boolean flag for Mediatek Chip.
shaoyunl (1):
drm/amdgpu : Fix asic reset regression issue introduce by 8f211fe8ac7c4f
wenxu (1):
net/mlx5e: fix ingress_ifindex check in mlx5e_flower_parse_meta
xinhui pan (1):
drm/amdgpu: Fix a use-after-free
yangerkun (1):
block: reexpand iov_iter after read/write
zhouchuangao (1):
fs/nfs: Use fatal_signal_pending instead of signal_pending
Álvaro Fernández Rojas (3):
mtd: rawnand: brcmnand: fix OOB R/W with Hamming ECC
gpio: guard gpiochip_irqchip_add_domain() with GPIOLIB_IRQCHIP
mips: bmips: fix syscon-reboot nodes
Íñigo Huguet (1):
net:CXGB4: fix leak if sk_buff is not used
周琰杰 (Zhou Yanjie) (1):
I2C: JZ4780: Fix bug for Ingenic X1000.
BUG=None
TEST=tryjob, validation and K8s e2e
RELEASE_NOTE=Updated the Linux kernel to v5.10.42.
Signed-off-by: Oleksandr Tymoshenko <ovt@google.com>
Change-Id: I9b9de1782f69d84d170525465fd19af43a7a9160
diff --git a/.gitignore b/.gitignore
index d01cda8..67d2f35 100644
--- a/.gitignore
+++ b/.gitignore
@@ -55,6 +55,7 @@
/tags
/TAGS
/linux
+/modules-only.symvers
/vmlinux
/vmlinux.32
/vmlinux.symvers
diff --git a/Documentation/arm/memory.rst b/Documentation/arm/memory.rst
index 0521b4c..34bb23c 100644
--- a/Documentation/arm/memory.rst
+++ b/Documentation/arm/memory.rst
@@ -45,9 +45,14 @@
fffe0000 fffe7fff ITCM mapping area for platforms with
ITCM mounted inside the CPU.
-ffc00000 ffefffff Fixmap mapping region. Addresses provided
+ffc80000 ffefffff Fixmap mapping region. Addresses provided
by fix_to_virt() will be located here.
+ffc00000 ffc7ffff Guard region
+
+ff800000 ffbfffff Permanent, fixed read-only mapping of the
+ firmware provided DT blob
+
fee00000 feffffff Mapping of PCI I/O space. This is a static
mapping within the vmalloc space.
diff --git a/Documentation/devicetree/bindings/media/renesas,vin.yaml b/Documentation/devicetree/bindings/media/renesas,vin.yaml
index ad2fe66..c69cf8d 100644
--- a/Documentation/devicetree/bindings/media/renesas,vin.yaml
+++ b/Documentation/devicetree/bindings/media/renesas,vin.yaml
@@ -278,23 +278,35 @@
- interrupts
- clocks
- power-domains
- - resets
-if:
- properties:
- compatible:
- contains:
- enum:
- - renesas,vin-r8a7778
- - renesas,vin-r8a7779
- - renesas,rcar-gen2-vin
-then:
- required:
- - port
-else:
- required:
- - renesas,id
- - ports
+allOf:
+ - if:
+ not:
+ properties:
+ compatible:
+ contains:
+ enum:
+ - renesas,vin-r8a7778
+ - renesas,vin-r8a7779
+ then:
+ required:
+ - resets
+
+ - if:
+ properties:
+ compatible:
+ contains:
+ enum:
+ - renesas,vin-r8a7778
+ - renesas,vin-r8a7779
+ - renesas,rcar-gen2-vin
+ then:
+ required:
+ - port
+ else:
+ required:
+ - renesas,id
+ - ports
additionalProperties: false
diff --git a/Documentation/devicetree/bindings/net/ethernet-controller.yaml b/Documentation/devicetree/bindings/net/ethernet-controller.yaml
index 39147d3..9e41dad 100644
--- a/Documentation/devicetree/bindings/net/ethernet-controller.yaml
+++ b/Documentation/devicetree/bindings/net/ethernet-controller.yaml
@@ -49,7 +49,7 @@
description:
Reference to an nvmem node for the MAC address
- nvmem-cells-names:
+ nvmem-cell-names:
const: mac-address
phy-connection-type:
diff --git a/Documentation/devicetree/bindings/serial/8250.yaml b/Documentation/devicetree/bindings/serial/8250.yaml
index c1d4c19..460cb54 100644
--- a/Documentation/devicetree/bindings/serial/8250.yaml
+++ b/Documentation/devicetree/bindings/serial/8250.yaml
@@ -94,11 +94,6 @@
- mediatek,mt7623-btif
- const: mediatek,mtk-btif
- items:
- - enum:
- - mediatek,mt7622-btif
- - mediatek,mt7623-btif
- - const: mediatek,mtk-btif
- - items:
- const: mrvl,mmp-uart
- const: intel,xscale-uart
- items:
diff --git a/Documentation/devicetree/bindings/serial/st,stm32-uart.yaml b/Documentation/devicetree/bindings/serial/st,stm32-uart.yaml
index 06d5f25..51f390e 100644
--- a/Documentation/devicetree/bindings/serial/st,stm32-uart.yaml
+++ b/Documentation/devicetree/bindings/serial/st,stm32-uart.yaml
@@ -77,7 +77,8 @@
- interrupts
- clocks
-additionalProperties: false
+additionalProperties:
+ type: object
examples:
- |
diff --git a/Documentation/dontdiff b/Documentation/dontdiff
index e361fc9..82e3eee 100644
--- a/Documentation/dontdiff
+++ b/Documentation/dontdiff
@@ -178,6 +178,7 @@
mktree
mkutf8data
modpost
+modules-only.symvers
modules.builtin
modules.builtin.modinfo
modules.nsdeps
diff --git a/Documentation/driver-api/xilinx/eemi.rst b/Documentation/driver-api/xilinx/eemi.rst
index 9dcbc6f..c1bc47b 100644
--- a/Documentation/driver-api/xilinx/eemi.rst
+++ b/Documentation/driver-api/xilinx/eemi.rst
@@ -16,35 +16,8 @@
device to communicate with a power management controller (PMC) on a
device to issue or respond to power management requests.
-EEMI ops is a structure containing all eemi APIs supported by Zynq MPSoC.
-The zynqmp-firmware driver maintain all EEMI APIs in zynqmp_eemi_ops
-structure. Any driver who want to communicate with PMC using EEMI APIs
-can call zynqmp_pm_get_eemi_ops().
-
-Example of EEMI ops::
-
- /* zynqmp-firmware driver maintain all EEMI APIs */
- struct zynqmp_eemi_ops {
- int (*get_api_version)(u32 *version);
- int (*query_data)(struct zynqmp_pm_query_data qdata, u32 *out);
- };
-
- static const struct zynqmp_eemi_ops eemi_ops = {
- .get_api_version = zynqmp_pm_get_api_version,
- .query_data = zynqmp_pm_query_data,
- };
-
-Example of EEMI ops usage::
-
- static const struct zynqmp_eemi_ops *eemi_ops;
- u32 ret_payload[PAYLOAD_ARG_CNT];
- int ret;
-
- eemi_ops = zynqmp_pm_get_eemi_ops();
- if (IS_ERR(eemi_ops))
- return PTR_ERR(eemi_ops);
-
- ret = eemi_ops->query_data(qdata, ret_payload);
+Any driver who wants to communicate with PMC using EEMI APIs use the
+functions provided for each function.
IOCTL
------
diff --git a/Documentation/powerpc/syscall64-abi.rst b/Documentation/powerpc/syscall64-abi.rst
index cf9b285..d824204 100644
--- a/Documentation/powerpc/syscall64-abi.rst
+++ b/Documentation/powerpc/syscall64-abi.rst
@@ -96,6 +96,16 @@
scv 0 syscalls will always behave as PPC_FEATURE2_HTM_NOSC.
+ptrace
+------
+When ptracing system calls (PTRACE_SYSCALL), the pt_regs.trap value contains
+the system call type that can be used to distinguish between sc and scv 0
+system calls, and the different register conventions can be accounted for.
+
+If the value of (pt_regs.trap & 0xfff0) is 0xc00 then the system call was
+performed with the sc instruction, if it is 0x3000 then the system call was
+performed with the scv 0 instruction.
+
vsyscall
========
diff --git a/Documentation/sphinx/parse-headers.pl b/Documentation/sphinx/parse-headers.pl
index 1910079..b063f2f 100755
--- a/Documentation/sphinx/parse-headers.pl
+++ b/Documentation/sphinx/parse-headers.pl
@@ -1,4 +1,4 @@
-#!/usr/bin/perl
+#!/usr/bin/env perl
use strict;
use Text::Tabs;
use Getopt::Long;
diff --git a/Documentation/target/tcm_mod_builder.py b/Documentation/target/tcm_mod_builder.py
index 1548d84..54492aa 100755
--- a/Documentation/target/tcm_mod_builder.py
+++ b/Documentation/target/tcm_mod_builder.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
# The TCM v4 multi-protocol fabric module generation script for drivers/target/$NEW_MOD
#
# Copyright (c) 2010 Rising Tide Systems
diff --git a/Documentation/trace/postprocess/decode_msr.py b/Documentation/trace/postprocess/decode_msr.py
index 0ab40e0..aa9cc7a 100644
--- a/Documentation/trace/postprocess/decode_msr.py
+++ b/Documentation/trace/postprocess/decode_msr.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
# add symbolic names to read_msr / write_msr in trace
# decode_msr msr-index.h < trace
import sys
diff --git a/Documentation/trace/postprocess/trace-pagealloc-postprocess.pl b/Documentation/trace/postprocess/trace-pagealloc-postprocess.pl
index 0a120aa..b9b7d80 100644
--- a/Documentation/trace/postprocess/trace-pagealloc-postprocess.pl
+++ b/Documentation/trace/postprocess/trace-pagealloc-postprocess.pl
@@ -1,4 +1,4 @@
-#!/usr/bin/perl
+#!/usr/bin/env perl
# This is a POC (proof of concept or piece of crap, take your pick) for reading the
# text representation of trace output related to page allocation. It makes an attempt
# to extract some high-level information on what is going on. The accuracy of the parser
diff --git a/Documentation/trace/postprocess/trace-vmscan-postprocess.pl b/Documentation/trace/postprocess/trace-vmscan-postprocess.pl
index 995da15..2f4e398 100644
--- a/Documentation/trace/postprocess/trace-vmscan-postprocess.pl
+++ b/Documentation/trace/postprocess/trace-vmscan-postprocess.pl
@@ -1,4 +1,4 @@
-#!/usr/bin/perl
+#!/usr/bin/env perl
# This is a POC for reading the text representation of trace output related to
# page reclaim. It makes an attempt to extract some high-level information on
# what is going on. The accuracy of the parser may vary
diff --git a/Documentation/userspace-api/media/v4l/subdev-formats.rst b/Documentation/userspace-api/media/v4l/subdev-formats.rst
index c9b7bb3..eff6727 100644
--- a/Documentation/userspace-api/media/v4l/subdev-formats.rst
+++ b/Documentation/userspace-api/media/v4l/subdev-formats.rst
@@ -1567,8 +1567,8 @@
- MEDIA_BUS_FMT_RGB101010_1X30
- 0x1018
-
- - 0
- - 0
+ -
+ -
- r\ :sub:`9`
- r\ :sub:`8`
- r\ :sub:`7`
diff --git a/Documentation/userspace-api/seccomp_filter.rst b/Documentation/userspace-api/seccomp_filter.rst
index bd91652..6efb41c 100644
--- a/Documentation/userspace-api/seccomp_filter.rst
+++ b/Documentation/userspace-api/seccomp_filter.rst
@@ -250,14 +250,14 @@
seccomp notification fd to receive a ``struct seccomp_notif``, which contains
five members: the input length of the structure, a unique-per-filter ``id``,
the ``pid`` of the task which triggered this request (which may be 0 if the
-task is in a pid ns not visible from the listener's pid namespace), a ``flags``
-member which for now only has ``SECCOMP_NOTIF_FLAG_SIGNALED``, representing
-whether or not the notification is a result of a non-fatal signal, and the
-``data`` passed to seccomp. Userspace can then make a decision based on this
-information about what to do, and ``ioctl(SECCOMP_IOCTL_NOTIF_SEND)`` a
-response, indicating what should be returned to userspace. The ``id`` member of
-``struct seccomp_notif_resp`` should be the same ``id`` as in ``struct
-seccomp_notif``.
+task is in a pid ns not visible from the listener's pid namespace). The
+notification also contains the ``data`` passed to seccomp, and a filters flag.
+The structure should be zeroed out prior to calling the ioctl.
+
+Userspace can then make a decision based on this information about what to do,
+and ``ioctl(SECCOMP_IOCTL_NOTIF_SEND)`` a response, indicating what should be
+returned to userspace. The ``id`` member of ``struct seccomp_notif_resp`` should
+be the same ``id`` as in ``struct seccomp_notif``.
It is worth noting that ``struct seccomp_data`` contains the values of register
arguments to the syscall, but does not contain pointers to memory. The task's
diff --git a/MAINTAINERS b/MAINTAINERS
index 24cdfcf..4fef10d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -6694,6 +6694,7 @@
F: fs/f2fs/
F: include/linux/f2fs_fs.h
F: include/trace/events/f2fs.h
+F: include/uapi/linux/f2fs.h
F71805F HARDWARE MONITORING DRIVER
M: Jean Delvare <jdelvare@suse.com>
diff --git a/Makefile b/Makefile
index cb76f64..290903d 100644
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 5
PATCHLEVEL = 10
-SUBLEVEL = 28
+SUBLEVEL = 42
EXTRAVERSION =
NAME = Dare mighty things
@@ -775,16 +775,16 @@
KBUILD_CFLAGS += -mno-global-merge
else
-# These warnings generated too much noise in a regular build.
-# Use make W=1 to enable them (see scripts/Makefile.extrawarn)
-KBUILD_CFLAGS += -Wno-unused-but-set-variable
-
# Warn about unmarked fall-throughs in switch statement.
# Disabled for clang while comment to attribute conversion happens and
# https://github.com/ClangBuiltLinux/linux/issues/636 is discussed.
KBUILD_CFLAGS += $(call cc-option,-Wimplicit-fallthrough,)
endif
+# These warnings generated too much noise in a regular build.
+# Use make W=1 to enable them (see scripts/Makefile.extrawarn)
+KBUILD_CFLAGS += $(call cc-disable-warning, unused-but-set-variable)
+
KBUILD_CFLAGS += $(call cc-disable-warning, unused-const-variable)
ifdef CONFIG_FRAME_POINTER
KBUILD_CFLAGS += -fno-omit-frame-pointer -fno-optimize-sibling-calls
@@ -1083,6 +1083,17 @@
endif
endif
+PHONY += resolve_btfids_clean
+
+resolve_btfids_O = $(abspath $(objtree))/tools/bpf/resolve_btfids
+
+# tools/bpf/resolve_btfids directory might not exist
+# in output directory, skip its clean in that case
+resolve_btfids_clean:
+ifneq ($(wildcard $(resolve_btfids_O)),)
+ $(Q)$(MAKE) -sC $(srctree)/tools/bpf/resolve_btfids O=$(resolve_btfids_O) clean
+endif
+
ifdef CONFIG_BPF
ifdef CONFIG_DEBUG_INFO_BTF
ifeq ($(has_libelf),1)
@@ -1472,7 +1483,7 @@
# make distclean Remove editor backup files, patch leftover files and the like
# Directories & files removed with 'make clean'
-CLEAN_FILES += include/ksym vmlinux.symvers \
+CLEAN_FILES += include/ksym vmlinux.symvers modules-only.symvers \
modules.builtin modules.builtin.modinfo modules.nsdeps \
compile_commands.json
@@ -1500,7 +1511,7 @@
$(Q)$(CONFIG_SHELL) $(srctree)/scripts/link-vmlinux.sh clean
$(Q)$(if $(ARCH_POSTLINK), $(MAKE) -f $(ARCH_POSTLINK) clean)
-clean: archclean vmlinuxclean
+clean: archclean vmlinuxclean resolve_btfids_clean
# mrproper - Delete all generated files, including .config
#
diff --git a/arch/arc/include/asm/page.h b/arch/arc/include/asm/page.h
index d9c264d..9926cd5 100644
--- a/arch/arc/include/asm/page.h
+++ b/arch/arc/include/asm/page.h
@@ -7,6 +7,18 @@
#include <uapi/asm/page.h>
+#ifdef CONFIG_ARC_HAS_PAE40
+
+#define MAX_POSSIBLE_PHYSMEM_BITS 40
+#define PAGE_MASK_PHYS (0xff00000000ull | PAGE_MASK)
+
+#else /* CONFIG_ARC_HAS_PAE40 */
+
+#define MAX_POSSIBLE_PHYSMEM_BITS 32
+#define PAGE_MASK_PHYS PAGE_MASK
+
+#endif /* CONFIG_ARC_HAS_PAE40 */
+
#ifndef __ASSEMBLY__
#define clear_page(paddr) memset((paddr), 0, PAGE_SIZE)
diff --git a/arch/arc/include/asm/pgtable.h b/arch/arc/include/asm/pgtable.h
index 1636417..5878846 100644
--- a/arch/arc/include/asm/pgtable.h
+++ b/arch/arc/include/asm/pgtable.h
@@ -107,8 +107,8 @@
#define ___DEF (_PAGE_PRESENT | _PAGE_CACHEABLE)
/* Set of bits not changed in pte_modify */
-#define _PAGE_CHG_MASK (PAGE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_SPECIAL)
-
+#define _PAGE_CHG_MASK (PAGE_MASK_PHYS | _PAGE_ACCESSED | _PAGE_DIRTY | \
+ _PAGE_SPECIAL)
/* More Abbrevaited helpers */
#define PAGE_U_NONE __pgprot(___DEF)
#define PAGE_U_R __pgprot(___DEF | _PAGE_READ)
@@ -132,13 +132,7 @@
#define PTE_BITS_IN_PD0 (_PAGE_GLOBAL | _PAGE_PRESENT | _PAGE_HW_SZ)
#define PTE_BITS_RWX (_PAGE_EXECUTE | _PAGE_WRITE | _PAGE_READ)
-#ifdef CONFIG_ARC_HAS_PAE40
-#define PTE_BITS_NON_RWX_IN_PD1 (0xff00000000 | PAGE_MASK | _PAGE_CACHEABLE)
-#define MAX_POSSIBLE_PHYSMEM_BITS 40
-#else
-#define PTE_BITS_NON_RWX_IN_PD1 (PAGE_MASK | _PAGE_CACHEABLE)
-#define MAX_POSSIBLE_PHYSMEM_BITS 32
-#endif
+#define PTE_BITS_NON_RWX_IN_PD1 (PAGE_MASK_PHYS | _PAGE_CACHEABLE)
/**************************************************************************
* Mapping of vm_flags (Generic VM) to PTE flags (arch specific)
diff --git a/arch/arc/include/uapi/asm/page.h b/arch/arc/include/uapi/asm/page.h
index 2a97e27..2a4ad61 100644
--- a/arch/arc/include/uapi/asm/page.h
+++ b/arch/arc/include/uapi/asm/page.h
@@ -33,5 +33,4 @@
#define PAGE_MASK (~(PAGE_SIZE-1))
-
#endif /* _UAPI__ASM_ARC_PAGE_H */
diff --git a/arch/arc/kernel/entry.S b/arch/arc/kernel/entry.S
index ea00c8a..ae656bf 100644
--- a/arch/arc/kernel/entry.S
+++ b/arch/arc/kernel/entry.S
@@ -177,7 +177,7 @@
; Do the Sys Call as we normally would.
; Validate the Sys Call number
- cmp r8, NR_syscalls
+ cmp r8, NR_syscalls - 1
mov.hi r0, -ENOSYS
bhi tracesys_exit
@@ -255,7 +255,7 @@
;============ Normal syscall case
; syscall num shd not exceed the total system calls avail
- cmp r8, NR_syscalls
+ cmp r8, NR_syscalls - 1
mov.hi r0, -ENOSYS
bhi .Lret_from_system_call
diff --git a/arch/arc/kernel/signal.c b/arch/arc/kernel/signal.c
index 2be55fb..98e575d 100644
--- a/arch/arc/kernel/signal.c
+++ b/arch/arc/kernel/signal.c
@@ -96,7 +96,7 @@ stash_usr_regs(struct rt_sigframe __user *sf, struct pt_regs *regs,
sizeof(sf->uc.uc_mcontext.regs.scratch));
err |= __copy_to_user(&sf->uc.uc_sigmask, set, sizeof(sigset_t));
- return err;
+ return err ? -EFAULT : 0;
}
static int restore_usr_regs(struct pt_regs *regs, struct rt_sigframe __user *sf)
@@ -110,7 +110,7 @@ static int restore_usr_regs(struct pt_regs *regs, struct rt_sigframe __user *sf)
&(sf->uc.uc_mcontext.regs.scratch),
sizeof(sf->uc.uc_mcontext.regs.scratch));
if (err)
- return err;
+ return -EFAULT;
set_current_blocked(&set);
regs->bta = uregs.scratch.bta;
diff --git a/arch/arc/mm/init.c b/arch/arc/mm/init.c
index 3a35b82..da543fd 100644
--- a/arch/arc/mm/init.c
+++ b/arch/arc/mm/init.c
@@ -158,7 +158,16 @@ void __init setup_arch_memory(void)
min_high_pfn = PFN_DOWN(high_mem_start);
max_high_pfn = PFN_DOWN(high_mem_start + high_mem_sz);
- max_zone_pfn[ZONE_HIGHMEM] = min_low_pfn;
+ /*
+ * max_high_pfn should be ok here for both HIGHMEM and HIGHMEM+PAE.
+ * For HIGHMEM without PAE max_high_pfn should be less than
+ * min_low_pfn to guarantee that these two regions don't overlap.
+ * For PAE case highmem is greater than lowmem, so it is natural
+ * to use max_high_pfn.
+ *
+ * In both cases, holes should be handled by pfn_valid().
+ */
+ max_zone_pfn[ZONE_HIGHMEM] = max_high_pfn;
high_memory = (void *)(min_high_pfn << PAGE_SHIFT);
kmap_init();
diff --git a/arch/arc/mm/ioremap.c b/arch/arc/mm/ioremap.c
index fac4adc..95c649f 100644
--- a/arch/arc/mm/ioremap.c
+++ b/arch/arc/mm/ioremap.c
@@ -53,9 +53,10 @@ EXPORT_SYMBOL(ioremap);
void __iomem *ioremap_prot(phys_addr_t paddr, unsigned long size,
unsigned long flags)
{
+ unsigned int off;
unsigned long vaddr;
struct vm_struct *area;
- phys_addr_t off, end;
+ phys_addr_t end;
pgprot_t prot = __pgprot(flags);
/* Don't allow wraparound, zero size */
@@ -72,7 +73,7 @@ void __iomem *ioremap_prot(phys_addr_t paddr, unsigned long size,
/* Mappings have to be page-aligned */
off = paddr & ~PAGE_MASK;
- paddr &= PAGE_MASK;
+ paddr &= PAGE_MASK_PHYS;
size = PAGE_ALIGN(end + 1) - paddr;
/*
diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c
index 9bb3c24..9c7c682 100644
--- a/arch/arc/mm/tlb.c
+++ b/arch/arc/mm/tlb.c
@@ -576,7 +576,7 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long vaddr_unaligned,
pte_t *ptep)
{
unsigned long vaddr = vaddr_unaligned & PAGE_MASK;
- phys_addr_t paddr = pte_val(*ptep) & PAGE_MASK;
+ phys_addr_t paddr = pte_val(*ptep) & PAGE_MASK_PHYS;
struct page *page = pfn_to_page(pte_pfn(*ptep));
create_tlb(vma, vaddr, ptep);
diff --git a/arch/arm/boot/compressed/Makefile b/arch/arm/boot/compressed/Makefile
index e156741..0d6ee56 100644
--- a/arch/arm/boot/compressed/Makefile
+++ b/arch/arm/boot/compressed/Makefile
@@ -114,8 +114,8 @@
# Supply kernel BSS size to the decompressor via a linker symbol.
KBSS_SZ = $(shell echo $$(($$($(NM) $(obj)/../../../../vmlinux | \
- sed -n -e 's/^\([^ ]*\) [AB] __bss_start$$/-0x\1/p' \
- -e 's/^\([^ ]*\) [AB] __bss_stop$$/+0x\1/p') )) )
+ sed -n -e 's/^\([^ ]*\) [ABD] __bss_start$$/-0x\1/p' \
+ -e 's/^\([^ ]*\) [ABD] __bss_stop$$/+0x\1/p') )) )
LDFLAGS_vmlinux = --defsym _kernel_bss_size=$(KBSS_SZ)
# Supply ZRELADDR to the decompressor via a linker symbol.
ifneq ($(CONFIG_AUTO_ZRELADDR),y)
diff --git a/arch/arm/boot/dts/am33xx.dtsi b/arch/arm/boot/dts/am33xx.dtsi
index 4c22980..f09a61c 100644
--- a/arch/arm/boot/dts/am33xx.dtsi
+++ b/arch/arm/boot/dts/am33xx.dtsi
@@ -40,6 +40,9 @@ aliases {
ethernet1 = &cpsw_emac1;
spi0 = &spi0;
spi1 = &spi1;
+ mmc0 = &mmc1;
+ mmc1 = &mmc2;
+ mmc2 = &mmc3;
};
cpus {
diff --git a/arch/arm/boot/dts/armada-385-turris-omnia.dts b/arch/arm/boot/dts/armada-385-turris-omnia.dts
index 768b6c5..fde4c30 100644
--- a/arch/arm/boot/dts/armada-385-turris-omnia.dts
+++ b/arch/arm/boot/dts/armada-385-turris-omnia.dts
@@ -236,6 +236,7 @@ phy1: phy@1 {
status = "okay";
compatible = "ethernet-phy-id0141.0DD1", "ethernet-phy-ieee802.3-c22";
reg = <1>;
+ marvell,reg-init = <3 18 0 0x4985>;
/* irq is connected to &pcawan pin 7 */
};
diff --git a/arch/arm/boot/dts/aspeed-bmc-ibm-rainier.dts b/arch/arm/boot/dts/aspeed-bmc-ibm-rainier.dts
index 21ae880..c76b004 100644
--- a/arch/arm/boot/dts/aspeed-bmc-ibm-rainier.dts
+++ b/arch/arm/boot/dts/aspeed-bmc-ibm-rainier.dts
@@ -707,9 +707,9 @@ &i2c7 {
multi-master;
status = "okay";
- si7021-a20@20 {
+ si7021-a20@40 {
compatible = "silabs,si7020";
- reg = <0x20>;
+ reg = <0x40>;
};
tmp275@48 {
diff --git a/arch/arm/boot/dts/at91-sam9x60ek.dts b/arch/arm/boot/dts/at91-sam9x60ek.dts
index 775ceb3a..edca66c 100644
--- a/arch/arm/boot/dts/at91-sam9x60ek.dts
+++ b/arch/arm/boot/dts/at91-sam9x60ek.dts
@@ -8,6 +8,7 @@
*/
/dts-v1/;
#include "sam9x60.dtsi"
+#include <dt-bindings/input/input.h>
/ {
model = "Microchip SAM9X60-EK";
@@ -84,7 +85,7 @@ gpio_keys {
sw1 {
label = "SW1";
gpios = <&pioD 18 GPIO_ACTIVE_LOW>;
- linux,code=<0x104>;
+ linux,code=<KEY_PROG1>;
wakeup-source;
};
};
diff --git a/arch/arm/boot/dts/at91-sama5d27_som1_ek.dts b/arch/arm/boot/dts/at91-sama5d27_som1_ek.dts
index 0e159f8..d3cd244 100644
--- a/arch/arm/boot/dts/at91-sama5d27_som1_ek.dts
+++ b/arch/arm/boot/dts/at91-sama5d27_som1_ek.dts
@@ -11,6 +11,7 @@
#include "at91-sama5d27_som1.dtsi"
#include <dt-bindings/mfd/atmel-flexcom.h>
#include <dt-bindings/gpio/gpio.h>
+#include <dt-bindings/input/input.h>
/ {
model = "Atmel SAMA5D27 SOM1 EK";
@@ -467,7 +468,7 @@ gpio_keys {
pb4 {
label = "USER";
gpios = <&pioA PIN_PA29 GPIO_ACTIVE_LOW>;
- linux,code = <0x104>;
+ linux,code = <KEY_PROG1>;
wakeup-source;
};
};
diff --git a/arch/arm/boot/dts/at91-sama5d27_wlsom1_ek.dts b/arch/arm/boot/dts/at91-sama5d27_wlsom1_ek.dts
index 6b38fa3..4883b84 100644
--- a/arch/arm/boot/dts/at91-sama5d27_wlsom1_ek.dts
+++ b/arch/arm/boot/dts/at91-sama5d27_wlsom1_ek.dts
@@ -8,6 +8,7 @@
*/
/dts-v1/;
#include "at91-sama5d27_wlsom1.dtsi"
+#include <dt-bindings/input/input.h>
/ {
model = "Microchip SAMA5D27 WLSOM1 EK";
@@ -35,7 +36,7 @@ gpio_keys {
sw4 {
label = "USER BUTTON";
gpios = <&pioA PIN_PB2 GPIO_ACTIVE_LOW>;
- linux,code = <0x104>;
+ linux,code = <KEY_PROG1>;
wakeup-source;
};
};
diff --git a/arch/arm/boot/dts/at91-sama5d2_icp.dts b/arch/arm/boot/dts/at91-sama5d2_icp.dts
index 6783cf1..19bb50f 100644
--- a/arch/arm/boot/dts/at91-sama5d2_icp.dts
+++ b/arch/arm/boot/dts/at91-sama5d2_icp.dts
@@ -12,6 +12,7 @@
#include "sama5d2.dtsi"
#include "sama5d2-pinfunc.h"
#include <dt-bindings/gpio/gpio.h>
+#include <dt-bindings/input/input.h>
#include <dt-bindings/mfd/atmel-flexcom.h>
/ {
@@ -51,7 +52,7 @@ gpio_keys {
sw4 {
label = "USER_PB1";
gpios = <&pioA PIN_PD0 GPIO_ACTIVE_LOW>;
- linux,code = <0x104>;
+ linux,code = <KEY_PROG1>;
wakeup-source;
};
};
diff --git a/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts b/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts
index c894c7c..1c6361b 100644
--- a/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts
+++ b/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts
@@ -11,6 +11,7 @@
#include "sama5d2-pinfunc.h"
#include <dt-bindings/mfd/atmel-flexcom.h>
#include <dt-bindings/gpio/gpio.h>
+#include <dt-bindings/input/input.h>
#include <dt-bindings/pinctrl/at91.h>
/ {
@@ -403,7 +404,7 @@ gpio_keys {
bp1 {
label = "PB_USER";
gpios = <&pioA PIN_PA10 GPIO_ACTIVE_LOW>;
- linux,code = <0x104>;
+ linux,code = <KEY_PROG1>;
wakeup-source;
};
};
diff --git a/arch/arm/boot/dts/at91-sama5d2_xplained.dts b/arch/arm/boot/dts/at91-sama5d2_xplained.dts
index 058fae1..d767968 100644
--- a/arch/arm/boot/dts/at91-sama5d2_xplained.dts
+++ b/arch/arm/boot/dts/at91-sama5d2_xplained.dts
@@ -10,6 +10,7 @@
#include "sama5d2-pinfunc.h"
#include <dt-bindings/mfd/atmel-flexcom.h>
#include <dt-bindings/gpio/gpio.h>
+#include <dt-bindings/input/input.h>
#include <dt-bindings/regulator/active-semi,8945a-regulator.h>
/ {
@@ -713,7 +714,7 @@ gpio_keys {
bp1 {
label = "PB_USER";
gpios = <&pioA PIN_PB9 GPIO_ACTIVE_LOW>;
- linux,code = <0x104>;
+ linux,code = <KEY_PROG1>;
wakeup-source;
};
};
diff --git a/arch/arm/boot/dts/at91-sama5d3_xplained.dts b/arch/arm/boot/dts/at91-sama5d3_xplained.dts
index 5179258..9c55a92 100644
--- a/arch/arm/boot/dts/at91-sama5d3_xplained.dts
+++ b/arch/arm/boot/dts/at91-sama5d3_xplained.dts
@@ -7,6 +7,7 @@
*/
/dts-v1/;
#include "sama5d36.dtsi"
+#include <dt-bindings/input/input.h>
/ {
model = "SAMA5D3 Xplained";
@@ -354,7 +355,7 @@ gpio_keys {
bp3 {
label = "PB_USER";
gpios = <&pioE 29 GPIO_ACTIVE_LOW>;
- linux,code = <0x104>;
+ linux,code = <KEY_PROG1>;
wakeup-source;
};
};
diff --git a/arch/arm/boot/dts/at91sam9260ek.dts b/arch/arm/boot/dts/at91sam9260ek.dts
index d3446e4..ce96345 100644
--- a/arch/arm/boot/dts/at91sam9260ek.dts
+++ b/arch/arm/boot/dts/at91sam9260ek.dts
@@ -7,6 +7,7 @@
*/
/dts-v1/;
#include "at91sam9260.dtsi"
+#include <dt-bindings/input/input.h>
/ {
model = "Atmel at91sam9260ek";
@@ -156,7 +157,7 @@ btn3 {
btn4 {
label = "Button 4";
gpios = <&pioA 31 GPIO_ACTIVE_LOW>;
- linux,code = <0x104>;
+ linux,code = <KEY_PROG1>;
wakeup-source;
};
};
diff --git a/arch/arm/boot/dts/at91sam9g20ek_common.dtsi b/arch/arm/boot/dts/at91sam9g20ek_common.dtsi
index 6e6e672..87bb390 100644
--- a/arch/arm/boot/dts/at91sam9g20ek_common.dtsi
+++ b/arch/arm/boot/dts/at91sam9g20ek_common.dtsi
@@ -5,6 +5,7 @@
* Copyright (C) 2012 Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com>
*/
#include "at91sam9g20.dtsi"
+#include <dt-bindings/input/input.h>
/ {
@@ -234,7 +235,7 @@ btn3 {
btn4 {
label = "Button 4";
gpios = <&pioA 31 GPIO_ACTIVE_LOW>;
- linux,code = <0x104>;
+ linux,code = <KEY_PROG1>;
wakeup-source;
};
};
diff --git a/arch/arm/boot/dts/bcm4708-asus-rt-ac56u.dts b/arch/arm/boot/dts/bcm4708-asus-rt-ac56u.dts
index 6a96655..8ed4037 100644
--- a/arch/arm/boot/dts/bcm4708-asus-rt-ac56u.dts
+++ b/arch/arm/boot/dts/bcm4708-asus-rt-ac56u.dts
@@ -21,8 +21,8 @@ chosen {
memory@0 {
device_type = "memory";
- reg = <0x00000000 0x08000000
- 0x88000000 0x08000000>;
+ reg = <0x00000000 0x08000000>,
+ <0x88000000 0x08000000>;
};
leds {
diff --git a/arch/arm/boot/dts/bcm4708-asus-rt-ac68u.dts b/arch/arm/boot/dts/bcm4708-asus-rt-ac68u.dts
index 3b0029e..667b118 100644
--- a/arch/arm/boot/dts/bcm4708-asus-rt-ac68u.dts
+++ b/arch/arm/boot/dts/bcm4708-asus-rt-ac68u.dts
@@ -21,8 +21,8 @@ chosen {
memory@0 {
device_type = "memory";
- reg = <0x00000000 0x08000000
- 0x88000000 0x08000000>;
+ reg = <0x00000000 0x08000000>,
+ <0x88000000 0x08000000>;
};
leds {
diff --git a/arch/arm/boot/dts/bcm4708-buffalo-wzr-1750dhp.dts b/arch/arm/boot/dts/bcm4708-buffalo-wzr-1750dhp.dts
index 90f57ba..ff31ce4 100644
--- a/arch/arm/boot/dts/bcm4708-buffalo-wzr-1750dhp.dts
+++ b/arch/arm/boot/dts/bcm4708-buffalo-wzr-1750dhp.dts
@@ -21,8 +21,8 @@ chosen {
memory@0 {
device_type = "memory";
- reg = <0x00000000 0x08000000
- 0x88000000 0x18000000>;
+ reg = <0x00000000 0x08000000>,
+ <0x88000000 0x18000000>;
};
spi {
diff --git a/arch/arm/boot/dts/bcm4708-netgear-r6250.dts b/arch/arm/boot/dts/bcm4708-netgear-r6250.dts
index fed75e6..61c7b137 100644
--- a/arch/arm/boot/dts/bcm4708-netgear-r6250.dts
+++ b/arch/arm/boot/dts/bcm4708-netgear-r6250.dts
@@ -22,8 +22,8 @@ chosen {
memory {
device_type = "memory";
- reg = <0x00000000 0x08000000
- 0x88000000 0x08000000>;
+ reg = <0x00000000 0x08000000>,
+ <0x88000000 0x08000000>;
};
leds {
diff --git a/arch/arm/boot/dts/bcm4708-netgear-r6300-v2.dts b/arch/arm/boot/dts/bcm4708-netgear-r6300-v2.dts
index 79542e1..4c60eda 100644
--- a/arch/arm/boot/dts/bcm4708-netgear-r6300-v2.dts
+++ b/arch/arm/boot/dts/bcm4708-netgear-r6300-v2.dts
@@ -21,8 +21,8 @@ chosen {
memory@0 {
device_type = "memory";
- reg = <0x00000000 0x08000000
- 0x88000000 0x08000000>;
+ reg = <0x00000000 0x08000000>,
+ <0x88000000 0x08000000>;
};
leds {
diff --git a/arch/arm/boot/dts/bcm4708-smartrg-sr400ac.dts b/arch/arm/boot/dts/bcm4708-smartrg-sr400ac.dts
index abd35a51..7d46561 100644
--- a/arch/arm/boot/dts/bcm4708-smartrg-sr400ac.dts
+++ b/arch/arm/boot/dts/bcm4708-smartrg-sr400ac.dts
@@ -21,8 +21,8 @@ chosen {
memory@0 {
device_type = "memory";
- reg = <0x00000000 0x08000000
- 0x88000000 0x08000000>;
+ reg = <0x00000000 0x08000000>,
+ <0x88000000 0x08000000>;
};
leds {
diff --git a/arch/arm/boot/dts/bcm47081-asus-rt-n18u.dts b/arch/arm/boot/dts/bcm47081-asus-rt-n18u.dts
index c29950b..0e273c5 100644
--- a/arch/arm/boot/dts/bcm47081-asus-rt-n18u.dts
+++ b/arch/arm/boot/dts/bcm47081-asus-rt-n18u.dts
@@ -21,8 +21,8 @@ chosen {
memory@0 {
device_type = "memory";
- reg = <0x00000000 0x08000000
- 0x88000000 0x08000000>;
+ reg = <0x00000000 0x08000000>,
+ <0x88000000 0x08000000>;
};
leds {
diff --git a/arch/arm/boot/dts/bcm47081-buffalo-wzr-600dhp2.dts b/arch/arm/boot/dts/bcm47081-buffalo-wzr-600dhp2.dts
index 4dcec68..083ec40 100644
--- a/arch/arm/boot/dts/bcm47081-buffalo-wzr-600dhp2.dts
+++ b/arch/arm/boot/dts/bcm47081-buffalo-wzr-600dhp2.dts
@@ -21,8 +21,8 @@ chosen {
memory@0 {
device_type = "memory";
- reg = <0x00000000 0x08000000
- 0x88000000 0x08000000>;
+ reg = <0x00000000 0x08000000>,
+ <0x88000000 0x08000000>;
};
spi {
diff --git a/arch/arm/boot/dts/bcm47081-buffalo-wzr-900dhp.dts b/arch/arm/boot/dts/bcm47081-buffalo-wzr-900dhp.dts
index 0e349e3..8b1a05a 100644
--- a/arch/arm/boot/dts/bcm47081-buffalo-wzr-900dhp.dts
+++ b/arch/arm/boot/dts/bcm47081-buffalo-wzr-900dhp.dts
@@ -21,8 +21,8 @@ chosen {
memory@0 {
device_type = "memory";
- reg = <0x00000000 0x08000000
- 0x88000000 0x08000000>;
+ reg = <0x00000000 0x08000000>,
+ <0x88000000 0x08000000>;
};
spi {
diff --git a/arch/arm/boot/dts/bcm4709-asus-rt-ac87u.dts b/arch/arm/boot/dts/bcm4709-asus-rt-ac87u.dts
index 8f1e565..6c6bb7b 100644
--- a/arch/arm/boot/dts/bcm4709-asus-rt-ac87u.dts
+++ b/arch/arm/boot/dts/bcm4709-asus-rt-ac87u.dts
@@ -21,8 +21,8 @@ chosen {
memory {
device_type = "memory";
- reg = <0x00000000 0x08000000
- 0x88000000 0x08000000>;
+ reg = <0x00000000 0x08000000>,
+ <0x88000000 0x08000000>;
};
leds {
diff --git a/arch/arm/boot/dts/bcm4709-buffalo-wxr-1900dhp.dts b/arch/arm/boot/dts/bcm4709-buffalo-wxr-1900dhp.dts
index ce888b1..d29e7f8 100644
--- a/arch/arm/boot/dts/bcm4709-buffalo-wxr-1900dhp.dts
+++ b/arch/arm/boot/dts/bcm4709-buffalo-wxr-1900dhp.dts
@@ -21,8 +21,8 @@ chosen {
memory {
device_type = "memory";
- reg = <0x00000000 0x08000000
- 0x88000000 0x18000000>;
+ reg = <0x00000000 0x08000000>,
+ <0x88000000 0x18000000>;
};
leds {
diff --git a/arch/arm/boot/dts/bcm4709-linksys-ea9200.dts b/arch/arm/boot/dts/bcm4709-linksys-ea9200.dts
index ed8619b..38fbefd 100644
--- a/arch/arm/boot/dts/bcm4709-linksys-ea9200.dts
+++ b/arch/arm/boot/dts/bcm4709-linksys-ea9200.dts
@@ -18,8 +18,8 @@ chosen {
memory {
device_type = "memory";
- reg = <0x00000000 0x08000000
- 0x88000000 0x08000000>;
+ reg = <0x00000000 0x08000000>,
+ <0x88000000 0x08000000>;
};
gpio-keys {
diff --git a/arch/arm/boot/dts/bcm4709-netgear-r7000.dts b/arch/arm/boot/dts/bcm4709-netgear-r7000.dts
index 1f87993..7989a53 100644
--- a/arch/arm/boot/dts/bcm4709-netgear-r7000.dts
+++ b/arch/arm/boot/dts/bcm4709-netgear-r7000.dts
@@ -21,8 +21,8 @@ chosen {
memory {
device_type = "memory";
- reg = <0x00000000 0x08000000
- 0x88000000 0x08000000>;
+ reg = <0x00000000 0x08000000>,
+ <0x88000000 0x08000000>;
};
leds {
diff --git a/arch/arm/boot/dts/bcm4709-netgear-r8000.dts b/arch/arm/boot/dts/bcm4709-netgear-r8000.dts
index 6c6199a..87b655be 100644
--- a/arch/arm/boot/dts/bcm4709-netgear-r8000.dts
+++ b/arch/arm/boot/dts/bcm4709-netgear-r8000.dts
@@ -32,8 +32,8 @@ chosen {
memory {
device_type = "memory";
- reg = <0x00000000 0x08000000
- 0x88000000 0x08000000>;
+ reg = <0x00000000 0x08000000>,
+ <0x88000000 0x08000000>;
};
leds {
diff --git a/arch/arm/boot/dts/bcm47094-dlink-dir-885l.dts b/arch/arm/boot/dts/bcm47094-dlink-dir-885l.dts
index 911c65fb..e635a15 100644
--- a/arch/arm/boot/dts/bcm47094-dlink-dir-885l.dts
+++ b/arch/arm/boot/dts/bcm47094-dlink-dir-885l.dts
@@ -21,8 +21,8 @@ chosen {
memory@0 {
device_type = "memory";
- reg = <0x00000000 0x08000000
- 0x88000000 0x08000000>;
+ reg = <0x00000000 0x08000000>,
+ <0x88000000 0x08000000>;
};
nand: nand@18028000 {
diff --git a/arch/arm/boot/dts/bcm47094-linksys-panamera.dts b/arch/arm/boot/dts/bcm47094-linksys-panamera.dts
index 0faae89..36d63be 100644
--- a/arch/arm/boot/dts/bcm47094-linksys-panamera.dts
+++ b/arch/arm/boot/dts/bcm47094-linksys-panamera.dts
@@ -18,8 +18,8 @@ chosen {
memory@0 {
device_type = "memory";
- reg = <0x00000000 0x08000000
- 0x88000000 0x08000000>;
+ reg = <0x00000000 0x08000000>,
+ <0x88000000 0x08000000>;
};
gpio-keys {
diff --git a/arch/arm/boot/dts/bcm47094-luxul-abr-4500.dts b/arch/arm/boot/dts/bcm47094-luxul-abr-4500.dts
index 50f7cd0..a6dc999 100644
--- a/arch/arm/boot/dts/bcm47094-luxul-abr-4500.dts
+++ b/arch/arm/boot/dts/bcm47094-luxul-abr-4500.dts
@@ -18,8 +18,8 @@ chosen {
memory@0 {
device_type = "memory";
- reg = <0x00000000 0x08000000
- 0x88000000 0x18000000>;
+ reg = <0x00000000 0x08000000>,
+ <0x88000000 0x18000000>;
};
leds {
diff --git a/arch/arm/boot/dts/bcm47094-luxul-xbr-4500.dts b/arch/arm/boot/dts/bcm47094-luxul-xbr-4500.dts
index bcc420f..ff98837 100644
--- a/arch/arm/boot/dts/bcm47094-luxul-xbr-4500.dts
+++ b/arch/arm/boot/dts/bcm47094-luxul-xbr-4500.dts
@@ -18,8 +18,8 @@ chosen {
memory@0 {
device_type = "memory";
- reg = <0x00000000 0x08000000
- 0x88000000 0x18000000>;
+ reg = <0x00000000 0x08000000>,
+ <0x88000000 0x18000000>;
};
leds {
diff --git a/arch/arm/boot/dts/bcm47094-luxul-xwc-2000.dts b/arch/arm/boot/dts/bcm47094-luxul-xwc-2000.dts
index 9ae815d..2666195 100644
--- a/arch/arm/boot/dts/bcm47094-luxul-xwc-2000.dts
+++ b/arch/arm/boot/dts/bcm47094-luxul-xwc-2000.dts
@@ -18,8 +18,8 @@ chosen {
memory {
device_type = "memory";
- reg = <0x00000000 0x08000000
- 0x88000000 0x18000000>;
+ reg = <0x00000000 0x08000000>,
+ <0x88000000 0x18000000>;
};
leds {
diff --git a/arch/arm/boot/dts/bcm47094-luxul-xwr-3100.dts b/arch/arm/boot/dts/bcm47094-luxul-xwr-3100.dts
index a21b2d1..9f79802 100644
--- a/arch/arm/boot/dts/bcm47094-luxul-xwr-3100.dts
+++ b/arch/arm/boot/dts/bcm47094-luxul-xwr-3100.dts
@@ -18,8 +18,8 @@ chosen {
memory@0 {
device_type = "memory";
- reg = <0x00000000 0x08000000
- 0x88000000 0x08000000>;
+ reg = <0x00000000 0x08000000>,
+ <0x88000000 0x08000000>;
};
leds {
diff --git a/arch/arm/boot/dts/bcm47094-luxul-xwr-3150-v1.dts b/arch/arm/boot/dts/bcm47094-luxul-xwr-3150-v1.dts
index 4d5c5aa..c8dfa4c 100644
--- a/arch/arm/boot/dts/bcm47094-luxul-xwr-3150-v1.dts
+++ b/arch/arm/boot/dts/bcm47094-luxul-xwr-3150-v1.dts
@@ -18,8 +18,8 @@ chosen {
memory@0 {
device_type = "memory";
- reg = <0x00000000 0x08000000
- 0x88000000 0x18000000>;
+ reg = <0x00000000 0x08000000>,
+ <0x88000000 0x18000000>;
};
leds {
diff --git a/arch/arm/boot/dts/bcm47094-netgear-r8500.dts b/arch/arm/boot/dts/bcm47094-netgear-r8500.dts
index f42a170..42097a4 100644
--- a/arch/arm/boot/dts/bcm47094-netgear-r8500.dts
+++ b/arch/arm/boot/dts/bcm47094-netgear-r8500.dts
@@ -18,8 +18,8 @@ chosen {
memory@0 {
device_type = "memory";
- reg = <0x00000000 0x08000000
- 0x88000000 0x18000000>;
+ reg = <0x00000000 0x08000000>,
+ <0x88000000 0x18000000>;
};
leds {
diff --git a/arch/arm/boot/dts/bcm47094-phicomm-k3.dts b/arch/arm/boot/dts/bcm47094-phicomm-k3.dts
index ac3a448..a2566ad 100644
--- a/arch/arm/boot/dts/bcm47094-phicomm-k3.dts
+++ b/arch/arm/boot/dts/bcm47094-phicomm-k3.dts
@@ -15,8 +15,8 @@ / {
memory@0 {
device_type = "memory";
- reg = <0x00000000 0x08000000
- 0x88000000 0x18000000>;
+ reg = <0x00000000 0x08000000>,
+ <0x88000000 0x18000000>;
};
gpio-keys {
diff --git a/arch/arm/boot/dts/dra7-l4.dtsi b/arch/arm/boot/dts/dra7-l4.dtsi
index 3bf90d9..a294a02 100644
--- a/arch/arm/boot/dts/dra7-l4.dtsi
+++ b/arch/arm/boot/dts/dra7-l4.dtsi
@@ -1168,7 +1168,7 @@ timer2: timer@0 {
};
};
- target-module@34000 { /* 0x48034000, ap 7 46.0 */
+ timer3_target: target-module@34000 { /* 0x48034000, ap 7 46.0 */
compatible = "ti,sysc-omap4-timer", "ti,sysc";
reg = <0x34000 0x4>,
<0x34010 0x4>;
@@ -1195,7 +1195,7 @@ timer3: timer@0 {
};
};
- target-module@36000 { /* 0x48036000, ap 9 4e.0 */
+ timer4_target: target-module@36000 { /* 0x48036000, ap 9 4e.0 */
compatible = "ti,sysc-omap4-timer", "ti,sysc";
reg = <0x36000 0x4>,
<0x36010 0x4>;
diff --git a/arch/arm/boot/dts/dra7.dtsi b/arch/arm/boot/dts/dra7.dtsi
index 4e1bbc0..7ecf8f8 100644
--- a/arch/arm/boot/dts/dra7.dtsi
+++ b/arch/arm/boot/dts/dra7.dtsi
@@ -46,6 +46,7 @@ aliases {
timer {
compatible = "arm,armv7-timer";
+ status = "disabled"; /* See ARM architected timer wrap erratum i940 */
interrupts = <GIC_PPI 13 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
<GIC_PPI 14 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
<GIC_PPI 11 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
@@ -1090,3 +1091,22 @@ timer@0 {
assigned-clock-parents = <&sys_32k_ck>;
};
};
+
+/* Local timers, see ARM architected timer wrap erratum i940 */
+&timer3_target {
+ ti,no-reset-on-init;
+ ti,no-idle;
+ timer@0 {
+ assigned-clocks = <&l4per_clkctrl DRA7_L4PER_TIMER3_CLKCTRL 24>;
+ assigned-clock-parents = <&timer_sys_clk_div>;
+ };
+};
+
+&timer4_target {
+ ti,no-reset-on-init;
+ ti,no-idle;
+ timer@0 {
+ assigned-clocks = <&l4per_clkctrl DRA7_L4PER_TIMER4_CLKCTRL 24>;
+ assigned-clock-parents = <&timer_sys_clk_div>;
+ };
+};
diff --git a/arch/arm/boot/dts/exynos4210-i9100.dts b/arch/arm/boot/dts/exynos4210-i9100.dts
index 5370ee4..7777bf5 100644
--- a/arch/arm/boot/dts/exynos4210-i9100.dts
+++ b/arch/arm/boot/dts/exynos4210-i9100.dts
@@ -136,7 +136,7 @@ battery@36 {
compatible = "maxim,max17042";
interrupt-parent = <&gpx2>;
- interrupts = <3 IRQ_TYPE_EDGE_FALLING>;
+ interrupts = <3 IRQ_TYPE_LEVEL_LOW>;
pinctrl-0 = <&max17042_fuel_irq>;
pinctrl-names = "default";
diff --git a/arch/arm/boot/dts/exynos4412-midas.dtsi b/arch/arm/boot/dts/exynos4412-midas.dtsi
index 7e7c243..0645006 100644
--- a/arch/arm/boot/dts/exynos4412-midas.dtsi
+++ b/arch/arm/boot/dts/exynos4412-midas.dtsi
@@ -174,7 +174,7 @@ i2c_max77693: i2c-gpio-1 {
max77693@66 {
compatible = "maxim,max77693";
interrupt-parent = <&gpx1>;
- interrupts = <5 IRQ_TYPE_EDGE_FALLING>;
+ interrupts = <5 IRQ_TYPE_LEVEL_LOW>;
pinctrl-names = "default";
pinctrl-0 = <&max77693_irq>;
reg = <0x66>;
@@ -223,7 +223,7 @@ i2c_max77693_fuel: i2c-gpio-3 {
max77693-fuel-gauge@36 {
compatible = "maxim,max17047";
interrupt-parent = <&gpx2>;
- interrupts = <3 IRQ_TYPE_EDGE_FALLING>;
+ interrupts = <3 IRQ_TYPE_LEVEL_LOW>;
pinctrl-names = "default";
pinctrl-0 = <&max77693_fuel_irq>;
reg = <0x36>;
@@ -668,7 +668,7 @@ &i2c_7 {
max77686: max77686_pmic@9 {
compatible = "maxim,max77686";
interrupt-parent = <&gpx0>;
- interrupts = <7 IRQ_TYPE_NONE>;
+ interrupts = <7 IRQ_TYPE_LEVEL_LOW>;
pinctrl-0 = <&max77686_irq>;
pinctrl-names = "default";
reg = <0x09>;
diff --git a/arch/arm/boot/dts/exynos4412-odroid-common.dtsi b/arch/arm/boot/dts/exynos4412-odroid-common.dtsi
index 2983e91..869d80b 100644
--- a/arch/arm/boot/dts/exynos4412-odroid-common.dtsi
+++ b/arch/arm/boot/dts/exynos4412-odroid-common.dtsi
@@ -279,7 +279,7 @@ usb3503: usb3503@8 {
max77686: pmic@9 {
compatible = "maxim,max77686";
interrupt-parent = <&gpx3>;
- interrupts = <2 IRQ_TYPE_NONE>;
+ interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
pinctrl-names = "default";
pinctrl-0 = <&max77686_irq>;
reg = <0x09>;
diff --git a/arch/arm/boot/dts/exynos5250-smdk5250.dts b/arch/arm/boot/dts/exynos5250-smdk5250.dts
index 186790f..d0e48c1 100644
--- a/arch/arm/boot/dts/exynos5250-smdk5250.dts
+++ b/arch/arm/boot/dts/exynos5250-smdk5250.dts
@@ -134,7 +134,7 @@ max77686: pmic@9 {
compatible = "maxim,max77686";
reg = <0x09>;
interrupt-parent = <&gpx3>;
- interrupts = <2 IRQ_TYPE_NONE>;
+ interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
pinctrl-names = "default";
pinctrl-0 = <&max77686_irq>;
#clock-cells = <1>;
diff --git a/arch/arm/boot/dts/exynos5250-snow-common.dtsi b/arch/arm/boot/dts/exynos5250-snow-common.dtsi
index c952a61..737f0e2 100644
--- a/arch/arm/boot/dts/exynos5250-snow-common.dtsi
+++ b/arch/arm/boot/dts/exynos5250-snow-common.dtsi
@@ -292,7 +292,7 @@ &i2c_0 {
max77686: max77686@9 {
compatible = "maxim,max77686";
interrupt-parent = <&gpx3>;
- interrupts = <2 IRQ_TYPE_NONE>;
+ interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
pinctrl-names = "default";
pinctrl-0 = <&max77686_irq>;
wakeup-source;
diff --git a/arch/arm/boot/dts/imx6qdl-phytec-pfla02.dtsi b/arch/arm/boot/dts/imx6qdl-phytec-pfla02.dtsi
index e361df2..5f84e9f 100644
--- a/arch/arm/boot/dts/imx6qdl-phytec-pfla02.dtsi
+++ b/arch/arm/boot/dts/imx6qdl-phytec-pfla02.dtsi
@@ -432,6 +432,7 @@ &usdhc2 {
pinctrl-0 = <&pinctrl_usdhc2>;
cd-gpios = <&gpio1 4 GPIO_ACTIVE_LOW>;
wp-gpios = <&gpio1 2 GPIO_ACTIVE_HIGH>;
+ vmmc-supply = <&vdd_sd1_reg>;
status = "disabled";
};
@@ -441,5 +442,6 @@ &usdhc3 {
&pinctrl_usdhc3_cdwp>;
cd-gpios = <&gpio1 27 GPIO_ACTIVE_LOW>;
wp-gpios = <&gpio1 29 GPIO_ACTIVE_HIGH>;
+ vmmc-supply = <&vdd_sd0_reg>;
status = "disabled";
};
diff --git a/arch/arm/boot/dts/omap3.dtsi b/arch/arm/boot/dts/omap3.dtsi
index 9dcae1f..c5b9da0d 100644
--- a/arch/arm/boot/dts/omap3.dtsi
+++ b/arch/arm/boot/dts/omap3.dtsi
@@ -24,6 +24,9 @@ aliases {
i2c0 = &i2c1;
i2c1 = &i2c2;
i2c2 = &i2c3;
+ mmc0 = &mmc1;
+ mmc1 = &mmc2;
+ mmc2 = &mmc3;
serial0 = &uart1;
serial1 = &uart2;
serial2 = &uart3;
diff --git a/arch/arm/boot/dts/omap4.dtsi b/arch/arm/boot/dts/omap4.dtsi
index d6475cc..0491740 100644
--- a/arch/arm/boot/dts/omap4.dtsi
+++ b/arch/arm/boot/dts/omap4.dtsi
@@ -22,6 +22,11 @@ aliases {
i2c1 = &i2c2;
i2c2 = &i2c3;
i2c3 = &i2c4;
+ mmc0 = &mmc1;
+ mmc1 = &mmc2;
+ mmc2 = &mmc3;
+ mmc3 = &mmc4;
+ mmc4 = &mmc5;
serial0 = &uart1;
serial1 = &uart2;
serial2 = &uart3;
diff --git a/arch/arm/boot/dts/omap44xx-clocks.dtsi b/arch/arm/boot/dts/omap44xx-clocks.dtsi
index 5328685..1f1c04d 100644
--- a/arch/arm/boot/dts/omap44xx-clocks.dtsi
+++ b/arch/arm/boot/dts/omap44xx-clocks.dtsi
@@ -770,14 +770,6 @@ per_abe_nc_fclk: per_abe_nc_fclk@108 {
ti,max-div = <2>;
};
- sha2md5_fck: sha2md5_fck@15c8 {
- #clock-cells = <0>;
- compatible = "ti,gate-clock";
- clocks = <&l3_div_ck>;
- ti,bit-shift = <1>;
- reg = <0x15c8>;
- };
-
usb_phy_cm_clk32k: usb_phy_cm_clk32k@640 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
diff --git a/arch/arm/boot/dts/omap5.dtsi b/arch/arm/boot/dts/omap5.dtsi
index 2bf2e58..530210d 100644
--- a/arch/arm/boot/dts/omap5.dtsi
+++ b/arch/arm/boot/dts/omap5.dtsi
@@ -25,6 +25,11 @@ aliases {
i2c2 = &i2c3;
i2c3 = &i2c4;
i2c4 = &i2c5;
+ mmc0 = &mmc1;
+ mmc1 = &mmc2;
+ mmc2 = &mmc3;
+ mmc3 = &mmc4;
+ mmc4 = &mmc5;
serial0 = &uart1;
serial1 = &uart2;
serial2 = &uart3;
diff --git a/arch/arm/boot/dts/r8a7790-lager.dts b/arch/arm/boot/dts/r8a7790-lager.dts
index 09a152b..1d6f0c5 100644
--- a/arch/arm/boot/dts/r8a7790-lager.dts
+++ b/arch/arm/boot/dts/r8a7790-lager.dts
@@ -53,6 +53,9 @@ aliases {
i2c11 = &i2cexio1;
i2c12 = &i2chdmi;
i2c13 = &i2cpwr;
+ mmc0 = &mmcif1;
+ mmc1 = &sdhi0;
+ mmc2 = &sdhi2;
};
chosen {
diff --git a/arch/arm/boot/dts/r8a7791-koelsch.dts b/arch/arm/boot/dts/r8a7791-koelsch.dts
index f603cba..6af1727 100644
--- a/arch/arm/boot/dts/r8a7791-koelsch.dts
+++ b/arch/arm/boot/dts/r8a7791-koelsch.dts
@@ -53,6 +53,9 @@ aliases {
i2c12 = &i2cexio1;
i2c13 = &i2chdmi;
i2c14 = &i2cexio4;
+ mmc0 = &sdhi0;
+ mmc1 = &sdhi1;
+ mmc2 = &sdhi2;
};
chosen {
diff --git a/arch/arm/boot/dts/r8a7791-porter.dts b/arch/arm/boot/dts/r8a7791-porter.dts
index c6d563f..bf51e29c 100644
--- a/arch/arm/boot/dts/r8a7791-porter.dts
+++ b/arch/arm/boot/dts/r8a7791-porter.dts
@@ -28,6 +28,8 @@ aliases {
serial0 = &scif0;
i2c9 = &gpioi2c2;
i2c10 = &i2chdmi;
+ mmc0 = &sdhi0;
+ mmc1 = &sdhi2;
};
chosen {
diff --git a/arch/arm/boot/dts/r8a7793-gose.dts b/arch/arm/boot/dts/r8a7793-gose.dts
index abf487e..2b59a04 100644
--- a/arch/arm/boot/dts/r8a7793-gose.dts
+++ b/arch/arm/boot/dts/r8a7793-gose.dts
@@ -49,6 +49,9 @@ aliases {
i2c10 = &gpioi2c4;
i2c11 = &i2chdmi;
i2c12 = &i2cexio4;
+ mmc0 = &sdhi0;
+ mmc1 = &sdhi1;
+ mmc2 = &sdhi2;
};
chosen {
diff --git a/arch/arm/boot/dts/r8a7794-alt.dts b/arch/arm/boot/dts/r8a7794-alt.dts
index 3f1cc5b..3202598 100644
--- a/arch/arm/boot/dts/r8a7794-alt.dts
+++ b/arch/arm/boot/dts/r8a7794-alt.dts
@@ -19,6 +19,9 @@ aliases {
i2c10 = &gpioi2c4;
i2c11 = &i2chdmi;
i2c12 = &i2cexio4;
+ mmc0 = &mmcif0;
+ mmc1 = &sdhi0;
+ mmc2 = &sdhi1;
};
chosen {
diff --git a/arch/arm/boot/dts/r8a7794-silk.dts b/arch/arm/boot/dts/r8a7794-silk.dts
index 677596f..af066ee 100644
--- a/arch/arm/boot/dts/r8a7794-silk.dts
+++ b/arch/arm/boot/dts/r8a7794-silk.dts
@@ -31,6 +31,8 @@ aliases {
serial0 = &scif2;
i2c9 = &gpioi2c1;
i2c10 = &i2chdmi;
+ mmc0 = &mmcif0;
+ mmc1 = &sdhi1;
};
chosen {
diff --git a/arch/arm/boot/dts/s5pv210-fascinate4g.dts b/arch/arm/boot/dts/s5pv210-fascinate4g.dts
index ca06435..b47d830 100644
--- a/arch/arm/boot/dts/s5pv210-fascinate4g.dts
+++ b/arch/arm/boot/dts/s5pv210-fascinate4g.dts
@@ -115,7 +115,7 @@ &fg {
compatible = "maxim,max77836-battery";
interrupt-parent = <&gph3>;
- interrupts = <3 IRQ_TYPE_EDGE_FALLING>;
+ interrupts = <3 IRQ_TYPE_LEVEL_LOW>;
pinctrl-names = "default";
pinctrl-0 = <&fg_irq>;
diff --git a/arch/arm/boot/dts/ste-href-tvk1281618-r3.dtsi b/arch/arm/boot/dts/ste-href-tvk1281618-r3.dtsi
index 9f285c7..c0de133 100644
--- a/arch/arm/boot/dts/ste-href-tvk1281618-r3.dtsi
+++ b/arch/arm/boot/dts/ste-href-tvk1281618-r3.dtsi
@@ -8,37 +8,43 @@
/ {
soc {
i2c@80128000 {
- /* Marked:
- * 129
- * M35
- * L3GD20
- */
- l3gd20@6a {
- /* Gyroscope */
- compatible = "st,l3gd20";
- status = "disabled";
+ accelerometer@19 {
+ compatible = "st,lsm303dlhc-accel";
st,drdy-int-pin = <1>;
- drive-open-drain;
- reg = <0x6a>; // 0x6a or 0x6b
+ reg = <0x19>;
+ vdd-supply = <&ab8500_ldo_aux1_reg>;
+ vddio-supply = <&db8500_vsmps2_reg>;
+ interrupt-parent = <&gpio2>;
+ interrupts = <18 IRQ_TYPE_EDGE_RISING>,
+ <19 IRQ_TYPE_EDGE_RISING>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&accel_tvk_mode>;
+ };
+ magnetometer@1e {
+ compatible = "st,lsm303dlm-magn";
+ st,drdy-int-pin = <1>;
+ reg = <0x1e>;
+ vdd-supply = <&ab8500_ldo_aux1_reg>;
+ vddio-supply = <&db8500_vsmps2_reg>;
+ // This interrupt is not properly working with the driver
+ // interrupt-parent = <&gpio1>;
+ // interrupts = <0 IRQ_TYPE_EDGE_RISING>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&magn_tvk_mode>;
+ };
+ gyroscope@68 {
+ /* Gyroscope */
+ compatible = "st,l3g4200d-gyro";
+ reg = <0x68>;
vdd-supply = <&ab8500_ldo_aux1_reg>;
vddio-supply = <&db8500_vsmps2_reg>;
};
- /*
- * Marked:
- * 2122
- * C3H
- * DQEEE
- * LIS3DH?
- */
- lis3dh@18 {
- /* Accelerometer */
- compatible = "st,lis3dh-accel";
- st,drdy-int-pin = <1>;
- reg = <0x18>;
+ pressure@5c {
+ /* Barometer/pressure sensor */
+ compatible = "st,lps001wp-press";
+ reg = <0x5c>;
vdd-supply = <&ab8500_ldo_aux1_reg>;
vddio-supply = <&db8500_vsmps2_reg>;
- pinctrl-names = "default";
- pinctrl-0 = <&accel_tvk_mode>;
};
};
@@ -54,5 +60,26 @@ panel {
};
};
};
+
+ pinctrl {
+ accelerometer {
+ accel_tvk_mode: accel_tvk {
+ /* Accelerometer interrupt lines 1 & 2 */
+ tvk_cfg {
+ pins = "GPIO82_C1", "GPIO83_D3";
+ ste,config = <&gpio_in_pd>;
+ };
+ };
+ };
+ magnetometer {
+ magn_tvk_mode: magn_tvk {
+ /* GPIO 32 used for DRDY, pull this down */
+ tvk_cfg {
+ pins = "GPIO32_V2";
+ ste,config = <&gpio_in_pd>;
+ };
+ };
+ };
+ };
};
};
diff --git a/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi b/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi
index d84686e..dee4d32 100644
--- a/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi
+++ b/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi
@@ -1806,10 +1806,15 @@ pins2 {
usart2_idle_pins_c: usart2-idle-2 {
pins1 {
pinmux = <STM32_PINMUX('D', 5, ANALOG)>, /* USART2_TX */
- <STM32_PINMUX('D', 4, ANALOG)>, /* USART2_RTS */
<STM32_PINMUX('D', 3, ANALOG)>; /* USART2_CTS_NSS */
};
pins2 {
+ pinmux = <STM32_PINMUX('D', 4, AF7)>; /* USART2_RTS */
+ bias-disable;
+ drive-push-pull;
+ slew-rate = <3>;
+ };
+ pins3 {
pinmux = <STM32_PINMUX('D', 6, AF7)>; /* USART2_RX */
bias-disable;
};
@@ -1855,10 +1860,15 @@ pins2 {
usart3_idle_pins_b: usart3-idle-1 {
pins1 {
pinmux = <STM32_PINMUX('B', 10, ANALOG)>, /* USART3_TX */
- <STM32_PINMUX('G', 8, ANALOG)>, /* USART3_RTS */
<STM32_PINMUX('I', 10, ANALOG)>; /* USART3_CTS_NSS */
};
pins2 {
+ pinmux = <STM32_PINMUX('G', 8, AF8)>; /* USART3_RTS */
+ bias-disable;
+ drive-push-pull;
+ slew-rate = <0>;
+ };
+ pins3 {
pinmux = <STM32_PINMUX('B', 12, AF8)>; /* USART3_RX */
bias-disable;
};
@@ -1891,10 +1901,15 @@ pins2 {
usart3_idle_pins_c: usart3-idle-2 {
pins1 {
pinmux = <STM32_PINMUX('B', 10, ANALOG)>, /* USART3_TX */
- <STM32_PINMUX('G', 8, ANALOG)>, /* USART3_RTS */
<STM32_PINMUX('B', 13, ANALOG)>; /* USART3_CTS_NSS */
};
pins2 {
+ pinmux = <STM32_PINMUX('G', 8, AF8)>; /* USART3_RTS */
+ bias-disable;
+ drive-push-pull;
+ slew-rate = <0>;
+ };
+ pins3 {
pinmux = <STM32_PINMUX('B', 12, AF8)>; /* USART3_RX */
bias-disable;
};
diff --git a/arch/arm/boot/dts/tegra20-acer-a500-picasso.dts b/arch/arm/boot/dts/tegra20-acer-a500-picasso.dts
index a0b8297..068aabcf 100644
--- a/arch/arm/boot/dts/tegra20-acer-a500-picasso.dts
+++ b/arch/arm/boot/dts/tegra20-acer-a500-picasso.dts
@@ -448,7 +448,7 @@ touchscreen@4c {
reset-gpios = <&gpio TEGRA_GPIO(Q, 7) GPIO_ACTIVE_HIGH>;
- avdd-supply = <&vdd_3v3_sys>;
+ vdda-supply = <&vdd_3v3_sys>;
vdd-supply = <&vdd_3v3_sys>;
};
diff --git a/arch/arm/boot/dts/uniphier-pxs2.dtsi b/arch/arm/boot/dts/uniphier-pxs2.dtsi
index b0b15c9..e81e593 100644
--- a/arch/arm/boot/dts/uniphier-pxs2.dtsi
+++ b/arch/arm/boot/dts/uniphier-pxs2.dtsi
@@ -583,7 +583,7 @@ eth: ethernet@65000000 {
clocks = <&sys_clk 6>;
reset-names = "ether";
resets = <&sys_rst 6>;
- phy-mode = "rgmii";
+ phy-mode = "rgmii-id";
local-mac-address = [00 00 00 00 00 00];
socionext,syscon-phy-mode = <&soc_glue 0>;
diff --git a/arch/arm/crypto/curve25519-core.S b/arch/arm/crypto/curve25519-core.S
index be18af5..b697fa5 100644
--- a/arch/arm/crypto/curve25519-core.S
+++ b/arch/arm/crypto/curve25519-core.S
@@ -10,8 +10,8 @@
#include <linux/linkage.h>
.text
-.fpu neon
.arch armv7-a
+.fpu neon
.align 4
ENTRY(curve25519_neon)
diff --git a/arch/arm/crypto/poly1305-glue.c b/arch/arm/crypto/poly1305-glue.c
index 3023c1a..c31bd8f 100644
--- a/arch/arm/crypto/poly1305-glue.c
+++ b/arch/arm/crypto/poly1305-glue.c
@@ -29,7 +29,7 @@ void __weak poly1305_blocks_neon(void *state, const u8 *src, u32 len, u32 hibit)
static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_neon);
-void poly1305_init_arch(struct poly1305_desc_ctx *dctx, const u8 *key)
+void poly1305_init_arch(struct poly1305_desc_ctx *dctx, const u8 key[POLY1305_KEY_SIZE])
{
poly1305_init_arm(&dctx->h, key);
dctx->s[0] = get_unaligned_le32(key + 16);
diff --git a/arch/arm/include/asm/fixmap.h b/arch/arm/include/asm/fixmap.h
index fc56fc3..9575b40 100644
--- a/arch/arm/include/asm/fixmap.h
+++ b/arch/arm/include/asm/fixmap.h
@@ -2,7 +2,7 @@
#ifndef _ASM_FIXMAP_H
#define _ASM_FIXMAP_H
-#define FIXADDR_START 0xffc00000UL
+#define FIXADDR_START 0xffc80000UL
#define FIXADDR_END 0xfff00000UL
#define FIXADDR_TOP (FIXADDR_END - PAGE_SIZE)
diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index 99035b5..f717d71 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -67,6 +67,10 @@
*/
#define XIP_VIRT_ADDR(physaddr) (MODULES_VADDR + ((physaddr) & 0x000fffff))
+#define FDT_FIXED_BASE UL(0xff800000)
+#define FDT_FIXED_SIZE (2 * SECTION_SIZE)
+#define FDT_VIRT_BASE(physbase) ((void *)(FDT_FIXED_BASE | (physbase) % SECTION_SIZE))
+
#if !defined(CONFIG_SMP) && !defined(CONFIG_ARM_LPAE)
/*
* Allow 16MB-aligned ioremap pages
@@ -107,6 +111,7 @@ extern unsigned long vectors_base;
#define MODULES_VADDR PAGE_OFFSET
#define XIP_VIRT_ADDR(physaddr) (physaddr)
+#define FDT_VIRT_BASE(physbase) ((void *)(physbase))
#endif /* !CONFIG_MMU */
diff --git a/arch/arm/include/asm/prom.h b/arch/arm/include/asm/prom.h
index 1e36c40..402e3f3 100644
--- a/arch/arm/include/asm/prom.h
+++ b/arch/arm/include/asm/prom.h
@@ -9,12 +9,12 @@
#ifdef CONFIG_OF
-extern const struct machine_desc *setup_machine_fdt(unsigned int dt_phys);
+extern const struct machine_desc *setup_machine_fdt(void *dt_virt);
extern void __init arm_dt_init_cpu_maps(void);
#else /* CONFIG_OF */
-static inline const struct machine_desc *setup_machine_fdt(unsigned int dt_phys)
+static inline const struct machine_desc *setup_machine_fdt(void *dt_virt)
{
return NULL;
}
diff --git a/arch/arm/kernel/asm-offsets.c b/arch/arm/kernel/asm-offsets.c
index be8050b..70993af 100644
--- a/arch/arm/kernel/asm-offsets.c
+++ b/arch/arm/kernel/asm-offsets.c
@@ -24,6 +24,7 @@
#include <asm/vdso_datapage.h>
#include <asm/hardware/cache-l2x0.h>
#include <linux/kbuild.h>
+#include <linux/arm-smccc.h>
#include "signal.h"
/*
@@ -148,6 +149,8 @@ int main(void)
DEFINE(SLEEP_SAVE_SP_PHYS, offsetof(struct sleep_save_sp, save_ptr_stash_phys));
DEFINE(SLEEP_SAVE_SP_VIRT, offsetof(struct sleep_save_sp, save_ptr_stash));
#endif
+ DEFINE(ARM_SMCCC_QUIRK_ID_OFFS, offsetof(struct arm_smccc_quirk, id));
+ DEFINE(ARM_SMCCC_QUIRK_STATE_OFFS, offsetof(struct arm_smccc_quirk, state));
BLANK();
DEFINE(DMA_BIDIRECTIONAL, DMA_BIDIRECTIONAL);
DEFINE(DMA_TO_DEVICE, DMA_TO_DEVICE);
diff --git a/arch/arm/kernel/atags.h b/arch/arm/kernel/atags.h
index 067e12e..f2819c2 100644
--- a/arch/arm/kernel/atags.h
+++ b/arch/arm/kernel/atags.h
@@ -2,11 +2,11 @@
void convert_to_tag_list(struct tag *tags);
#ifdef CONFIG_ATAGS
-const struct machine_desc *setup_machine_tags(phys_addr_t __atags_pointer,
+const struct machine_desc *setup_machine_tags(void *__atags_vaddr,
unsigned int machine_nr);
#else
static inline const struct machine_desc * __init __noreturn
-setup_machine_tags(phys_addr_t __atags_pointer, unsigned int machine_nr)
+setup_machine_tags(void *__atags_vaddr, unsigned int machine_nr)
{
early_print("no ATAGS support: can't continue\n");
while (true);
diff --git a/arch/arm/kernel/atags_parse.c b/arch/arm/kernel/atags_parse.c
index 6c12d9f..373b61f 100644
--- a/arch/arm/kernel/atags_parse.c
+++ b/arch/arm/kernel/atags_parse.c
@@ -174,7 +174,7 @@ static void __init squash_mem_tags(struct tag *tag)
}
const struct machine_desc * __init
-setup_machine_tags(phys_addr_t __atags_pointer, unsigned int machine_nr)
+setup_machine_tags(void *atags_vaddr, unsigned int machine_nr)
{
struct tag *tags = (struct tag *)&default_tags;
const struct machine_desc *mdesc = NULL, *p;
@@ -195,8 +195,8 @@ setup_machine_tags(phys_addr_t __atags_pointer, unsigned int machine_nr)
if (!mdesc)
return NULL;
- if (__atags_pointer)
- tags = phys_to_virt(__atags_pointer);
+ if (atags_vaddr)
+ tags = atags_vaddr;
else if (mdesc->atag_offset)
tags = (void *)(PAGE_OFFSET + mdesc->atag_offset);
diff --git a/arch/arm/kernel/devtree.c b/arch/arm/kernel/devtree.c
index 7f0745a..28311dd 100644
--- a/arch/arm/kernel/devtree.c
+++ b/arch/arm/kernel/devtree.c
@@ -203,12 +203,12 @@ static const void * __init arch_get_next_mach(const char *const **match)
/**
* setup_machine_fdt - Machine setup when a dtb was passed to the kernel
- * @dt_phys: physical address of dt blob
+ * @dt_virt: virtual address of dt blob
*
* If a dtb was passed to the kernel in r2, then use it to choose the
* correct machine_desc and to setup the system.
*/
-const struct machine_desc * __init setup_machine_fdt(unsigned int dt_phys)
+const struct machine_desc * __init setup_machine_fdt(void *dt_virt)
{
const struct machine_desc *mdesc, *mdesc_best = NULL;
@@ -221,7 +221,7 @@ const struct machine_desc * __init setup_machine_fdt(unsigned int dt_phys)
mdesc_best = &__mach_desc_GENERIC_DT;
#endif
- if (!dt_phys || !early_init_dt_verify(phys_to_virt(dt_phys)))
+ if (!dt_virt || !early_init_dt_verify(dt_virt))
return NULL;
mdesc = of_flat_dt_match_machine(mdesc_best, arch_get_next_mach);
diff --git a/arch/arm/kernel/head.S b/arch/arm/kernel/head.S
index 98c1e68..4af5c76 100644
--- a/arch/arm/kernel/head.S
+++ b/arch/arm/kernel/head.S
@@ -274,11 +274,10 @@
* We map 2 sections in case the ATAGs/DTB crosses a section boundary.
*/
mov r0, r2, lsr #SECTION_SHIFT
- movs r0, r0, lsl #SECTION_SHIFT
- subne r3, r0, r8
- addne r3, r3, #PAGE_OFFSET
- addne r3, r4, r3, lsr #(SECTION_SHIFT - PMD_ORDER)
- orrne r6, r7, r0
+ cmp r2, #0
+ ldrne r3, =FDT_FIXED_BASE >> (SECTION_SHIFT - PMD_ORDER)
+ addne r3, r3, r4
+ orrne r6, r7, r0, lsl #SECTION_SHIFT
strne r6, [r3], #1 << PMD_ORDER
addne r6, r6, #1 << SECTION_SHIFT
strne r6, [r3]
diff --git a/arch/arm/kernel/hw_breakpoint.c b/arch/arm/kernel/hw_breakpoint.c
index 08660ae..b1423fb 100644
--- a/arch/arm/kernel/hw_breakpoint.c
+++ b/arch/arm/kernel/hw_breakpoint.c
@@ -886,7 +886,7 @@ static void breakpoint_handler(unsigned long unknown, struct pt_regs *regs)
info->trigger = addr;
pr_debug("breakpoint fired: address = 0x%x\n", addr);
perf_bp_event(bp, regs);
- if (!bp->overflow_handler)
+ if (is_default_overflow_handler(bp))
enable_single_step(bp, addr);
goto unlock;
}
diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index 3f65d0a..f90479d 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -18,6 +18,7 @@
#include <linux/of_platform.h>
#include <linux/init.h>
#include <linux/kexec.h>
+#include <linux/libfdt.h>
#include <linux/of_fdt.h>
#include <linux/cpu.h>
#include <linux/interrupt.h>
@@ -1081,19 +1082,27 @@ void __init hyp_mode_check(void)
void __init setup_arch(char **cmdline_p)
{
- const struct machine_desc *mdesc;
+ const struct machine_desc *mdesc = NULL;
+ void *atags_vaddr = NULL;
+
+ if (__atags_pointer)
+ atags_vaddr = FDT_VIRT_BASE(__atags_pointer);
setup_processor();
- mdesc = setup_machine_fdt(__atags_pointer);
+ if (atags_vaddr) {
+ mdesc = setup_machine_fdt(atags_vaddr);
+ if (mdesc)
+ memblock_reserve(__atags_pointer,
+ fdt_totalsize(atags_vaddr));
+ }
if (!mdesc)
- mdesc = setup_machine_tags(__atags_pointer, __machine_arch_type);
+ mdesc = setup_machine_tags(atags_vaddr, __machine_arch_type);
if (!mdesc) {
early_print("\nError: invalid dtb and unrecognized/unsupported machine ID\n");
early_print(" r1=0x%08x, r2=0x%08x\n", __machine_arch_type,
__atags_pointer);
if (__atags_pointer)
- early_print(" r2[]=%*ph\n", 16,
- phys_to_virt(__atags_pointer));
+ early_print(" r2[]=%*ph\n", 16, atags_vaddr);
dump_machine_table();
}
diff --git a/arch/arm/kernel/smccc-call.S b/arch/arm/kernel/smccc-call.S
index 00664c7..931df62 100644
--- a/arch/arm/kernel/smccc-call.S
+++ b/arch/arm/kernel/smccc-call.S
@@ -3,7 +3,9 @@
* Copyright (c) 2015, Linaro Limited
*/
#include <linux/linkage.h>
+#include <linux/arm-smccc.h>
+#include <asm/asm-offsets.h>
#include <asm/opcodes-sec.h>
#include <asm/opcodes-virt.h>
#include <asm/unwind.h>
@@ -27,7 +29,14 @@
UNWIND( .save {r4-r7})
ldm r12, {r4-r7}
\instr
- pop {r4-r7}
+ ldr r4, [sp, #36]
+ cmp r4, #0
+ beq 1f // No quirk structure
+ ldr r5, [r4, #ARM_SMCCC_QUIRK_ID_OFFS]
+ cmp r5, #ARM_SMCCC_QUIRK_QCOM_A6
+ bne 1f // No quirk present
+ str r6, [r4, #ARM_SMCCC_QUIRK_STATE_OFFS]
+1: pop {r4-r7}
ldr r12, [sp, #(4 * 4)]
stm r12, {r0-r3}
bx lr
diff --git a/arch/arm/kernel/suspend.c b/arch/arm/kernel/suspend.c
index 24bd205..43f0a3e 100644
--- a/arch/arm/kernel/suspend.c
+++ b/arch/arm/kernel/suspend.c
@@ -1,4 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
+#include <linux/ftrace.h>
#include <linux/init.h>
#include <linux/slab.h>
#include <linux/mm_types.h>
@@ -26,12 +27,22 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
return -EINVAL;
/*
+ * Function graph tracer state gets inconsistent when the kernel
+ * calls functions that never return (aka suspend finishers) hence
+ * disable graph tracing during their execution.
+ */
+ pause_graph_tracing();
+
+ /*
* Provide a temporary page table with an identity mapping for
* the MMU-enable code, required for resuming. On successful
* resume (indicated by a zero return code), we need to switch
* back to the correct page tables.
*/
ret = __cpu_suspend(arg, fn, __mpidr);
+
+ unpause_graph_tracing();
+
if (ret == 0) {
cpu_switch_mm(mm->pgd, mm);
local_flush_bp_all();
@@ -45,7 +56,13 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
{
u32 __mpidr = cpu_logical_map(smp_processor_id());
- return __cpu_suspend(arg, fn, __mpidr);
+ int ret;
+
+ pause_graph_tracing();
+ ret = __cpu_suspend(arg, fn, __mpidr);
+ unpause_graph_tracing();
+
+ return ret;
}
#define idmap_pgd NULL
#endif
diff --git a/arch/arm/mach-footbridge/cats-pci.c b/arch/arm/mach-footbridge/cats-pci.c
index 0b2fd7e2..90b1e9b 100644
--- a/arch/arm/mach-footbridge/cats-pci.c
+++ b/arch/arm/mach-footbridge/cats-pci.c
@@ -15,14 +15,14 @@
#include <asm/mach-types.h>
/* cats host-specific stuff */
-static int irqmap_cats[] __initdata = { IRQ_PCI, IRQ_IN0, IRQ_IN1, IRQ_IN3 };
+static int irqmap_cats[] = { IRQ_PCI, IRQ_IN0, IRQ_IN1, IRQ_IN3 };
static u8 cats_no_swizzle(struct pci_dev *dev, u8 *pin)
{
return 0;
}
-static int __init cats_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+static int cats_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
if (dev->irq >= 255)
return -1; /* not a valid interrupt. */
diff --git a/arch/arm/mach-footbridge/ebsa285-pci.c b/arch/arm/mach-footbridge/ebsa285-pci.c
index 6f28aaa..c3f280d 100644
--- a/arch/arm/mach-footbridge/ebsa285-pci.c
+++ b/arch/arm/mach-footbridge/ebsa285-pci.c
@@ -14,9 +14,9 @@
#include <asm/mach/pci.h>
#include <asm/mach-types.h>
-static int irqmap_ebsa285[] __initdata = { IRQ_IN3, IRQ_IN1, IRQ_IN0, IRQ_PCI };
+static int irqmap_ebsa285[] = { IRQ_IN3, IRQ_IN1, IRQ_IN0, IRQ_PCI };
-static int __init ebsa285_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+static int ebsa285_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
if (dev->vendor == PCI_VENDOR_ID_CONTAQ &&
dev->device == PCI_DEVICE_ID_CONTAQ_82C693)
diff --git a/arch/arm/mach-footbridge/netwinder-pci.c b/arch/arm/mach-footbridge/netwinder-pci.c
index 9473aa0..e830439 100644
--- a/arch/arm/mach-footbridge/netwinder-pci.c
+++ b/arch/arm/mach-footbridge/netwinder-pci.c
@@ -18,7 +18,7 @@
* We now use the slot ID instead of the device identifiers to select
* which interrupt is routed where.
*/
-static int __init netwinder_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+static int netwinder_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
switch (slot) {
case 0: /* host bridge */
diff --git a/arch/arm/mach-footbridge/personal-pci.c b/arch/arm/mach-footbridge/personal-pci.c
index 4391e43..9d19aa9 100644
--- a/arch/arm/mach-footbridge/personal-pci.c
+++ b/arch/arm/mach-footbridge/personal-pci.c
@@ -14,13 +14,12 @@
#include <asm/mach/pci.h>
#include <asm/mach-types.h>
-static int irqmap_personal_server[] __initdata = {
+static int irqmap_personal_server[] = {
IRQ_IN0, IRQ_IN1, IRQ_IN2, IRQ_IN3, 0, 0, 0,
IRQ_DOORBELLHOST, IRQ_DMA1, IRQ_DMA2, IRQ_PCI
};
-static int __init personal_server_map_irq(const struct pci_dev *dev, u8 slot,
- u8 pin)
+static int personal_server_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
unsigned char line;
diff --git a/arch/arm/mach-keystone/keystone.c b/arch/arm/mach-keystone/keystone.c
index 09a65c2..b8fa01f 100644
--- a/arch/arm/mach-keystone/keystone.c
+++ b/arch/arm/mach-keystone/keystone.c
@@ -65,7 +65,7 @@ static void __init keystone_init(void)
static long long __init keystone_pv_fixup(void)
{
long long offset;
- phys_addr_t mem_start, mem_end;
+ u64 mem_start, mem_end;
mem_start = memblock_start_of_DRAM();
mem_end = memblock_end_of_DRAM();
@@ -78,7 +78,7 @@ static long long __init keystone_pv_fixup(void)
if (mem_start < KEYSTONE_HIGH_PHYS_START ||
mem_end > KEYSTONE_HIGH_PHYS_END) {
pr_crit("Invalid address space for memory (%08llx-%08llx)\n",
- (u64)mem_start, (u64)mem_end);
+ mem_start, mem_end);
return 0;
}
diff --git a/arch/arm/mach-omap1/ams-delta-fiq-handler.S b/arch/arm/mach-omap1/ams-delta-fiq-handler.S
index 14a6c3e..f745a65 100644
--- a/arch/arm/mach-omap1/ams-delta-fiq-handler.S
+++ b/arch/arm/mach-omap1/ams-delta-fiq-handler.S
@@ -15,6 +15,7 @@
#include <linux/platform_data/gpio-omap.h>
#include <asm/assembler.h>
+#include <asm/irq.h>
#include "ams-delta-fiq.h"
#include "board-ams-delta.h"
diff --git a/arch/arm/mach-omap2/board-generic.c b/arch/arm/mach-omap2/board-generic.c
index 7290f03..1610c56 100644
--- a/arch/arm/mach-omap2/board-generic.c
+++ b/arch/arm/mach-omap2/board-generic.c
@@ -33,7 +33,7 @@ static void __init __maybe_unused omap_generic_init(void)
}
/* Clocks are needed early, see drivers/clocksource for the rest */
-void __init __maybe_unused omap_init_time_of(void)
+static void __init __maybe_unused omap_init_time_of(void)
{
omap_clk_init();
timer_probe();
diff --git a/arch/arm/mach-omap2/omap-secure.c b/arch/arm/mach-omap2/omap-secure.c
index f70d561..0659ab4 100644
--- a/arch/arm/mach-omap2/omap-secure.c
+++ b/arch/arm/mach-omap2/omap-secure.c
@@ -9,6 +9,7 @@
*/
#include <linux/arm-smccc.h>
+#include <linux/cpu_pm.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/io.h>
@@ -20,6 +21,7 @@
#include "common.h"
#include "omap-secure.h"
+#include "soc.h"
static phys_addr_t omap_secure_memblock_base;
@@ -213,3 +215,40 @@ void __init omap_secure_init(void)
{
omap_optee_init_check();
}
+
+/*
+ * Dummy dispatcher call after core OSWR and MPU off. Updates the ROM return
+ * address after MMU has been re-enabled after CPU1 has been woken up again.
+ * Otherwise the ROM code will attempt to use the earlier physical return
+ * address that got set with MMU off when waking up CPU1. Only used on secure
+ * devices.
+ */
+static int cpu_notifier(struct notifier_block *nb, unsigned long cmd, void *v)
+{
+ switch (cmd) {
+ case CPU_CLUSTER_PM_EXIT:
+ omap_secure_dispatcher(OMAP4_PPA_SERVICE_0,
+ FLAG_START_CRITICAL,
+ 0, 0, 0, 0, 0);
+ break;
+ default:
+ break;
+ }
+
+ return NOTIFY_OK;
+}
+
+static struct notifier_block secure_notifier_block = {
+ .notifier_call = cpu_notifier,
+};
+
+static int __init secure_pm_init(void)
+{
+ if (omap_type() == OMAP2_DEVICE_TYPE_GP || !soc_is_omap44xx())
+ return 0;
+
+ cpu_pm_register_notifier(&secure_notifier_block);
+
+ return 0;
+}
+omap_arch_initcall(secure_pm_init);
diff --git a/arch/arm/mach-omap2/omap-secure.h b/arch/arm/mach-omap2/omap-secure.h
index 4aaa957..172069f 100644
--- a/arch/arm/mach-omap2/omap-secure.h
+++ b/arch/arm/mach-omap2/omap-secure.h
@@ -50,6 +50,7 @@
#define OMAP5_DRA7_MON_SET_ACR_INDEX 0x107
/* Secure PPA(Primary Protected Application) APIs */
+#define OMAP4_PPA_SERVICE_0 0x21
#define OMAP4_PPA_L2_POR_INDEX 0x23
#define OMAP4_PPA_CPU_ACTRL_SMP_INDEX 0x25
diff --git a/arch/arm/mach-omap2/pmic-cpcap.c b/arch/arm/mach-omap2/pmic-cpcap.c
index 09076ad..668dc84 100644
--- a/arch/arm/mach-omap2/pmic-cpcap.c
+++ b/arch/arm/mach-omap2/pmic-cpcap.c
@@ -246,10 +246,10 @@ int __init omap4_cpcap_init(void)
omap_voltage_register_pmic(voltdm, &omap443x_max8952_mpu);
if (of_machine_is_compatible("motorola,droid-bionic")) {
- voltdm = voltdm_lookup("mpu");
+ voltdm = voltdm_lookup("core");
omap_voltage_register_pmic(voltdm, &omap_cpcap_core);
- voltdm = voltdm_lookup("mpu");
+ voltdm = voltdm_lookup("iva");
omap_voltage_register_pmic(voltdm, &omap_cpcap_iva);
} else {
voltdm = voltdm_lookup("core");
diff --git a/arch/arm/mach-omap2/sr_device.c b/arch/arm/mach-omap2/sr_device.c
index 17b66f0..6059256 100644
--- a/arch/arm/mach-omap2/sr_device.c
+++ b/arch/arm/mach-omap2/sr_device.c
@@ -188,7 +188,7 @@ static const char * const dra7_sr_instances[] = {
int __init omap_devinit_smartreflex(void)
{
- const char * const *sr_inst;
+ const char * const *sr_inst = NULL;
int i, nr_sr = 0;
if (soc_is_omap44xx()) {
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index c23dbf8..d54d69c 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -223,7 +223,6 @@ void __init arm_memblock_init(const struct machine_desc *mdesc)
if (mdesc->reserve)
mdesc->reserve();
- early_init_fdt_reserve_self();
early_init_fdt_scan_reserved_mem();
/* reserve memory for DMA contiguous allocations */
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index ab69250..fa25982 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -39,6 +39,8 @@
#include "mm.h"
#include "tcm.h"
+extern unsigned long __atags_pointer;
+
/*
* empty_zero_page is a special page that is used for
* zero-initialized data and COW.
@@ -946,7 +948,7 @@ static void __init create_mapping(struct map_desc *md)
return;
}
- if ((md->type == MT_DEVICE || md->type == MT_ROM) &&
+ if (md->type == MT_DEVICE &&
md->virtual >= PAGE_OFFSET && md->virtual < FIXADDR_START &&
(md->virtual < VMALLOC_START || md->virtual >= VMALLOC_END)) {
pr_warn("BUG: mapping for 0x%08llx at 0x%08lx out of vmalloc space\n",
@@ -1333,6 +1335,15 @@ static void __init devicemaps_init(const struct machine_desc *mdesc)
for (addr = VMALLOC_START; addr < (FIXADDR_TOP & PMD_MASK); addr += PMD_SIZE)
pmd_clear(pmd_off_k(addr));
+ if (__atags_pointer) {
+ /* create a read-only mapping of the device tree */
+ map.pfn = __phys_to_pfn(__atags_pointer & SECTION_MASK);
+ map.virtual = FDT_FIXED_BASE;
+ map.length = FDT_FIXED_SIZE;
+ map.type = MT_ROM;
+ create_mapping(&map);
+ }
+
/*
* Map the kernel if it is XIP.
* It is always first in the modulearea.
@@ -1489,8 +1500,7 @@ static void __init map_lowmem(void)
}
#ifdef CONFIG_ARM_PV_FIXUP
-extern unsigned long __atags_pointer;
-typedef void pgtables_remap(long long offset, unsigned long pgd, void *bdata);
+typedef void pgtables_remap(long long offset, unsigned long pgd);
pgtables_remap lpae_pgtables_remap_asm;
/*
@@ -1503,7 +1513,6 @@ static void __init early_paging_init(const struct machine_desc *mdesc)
unsigned long pa_pgd;
unsigned int cr, ttbcr;
long long offset;
- void *boot_data;
if (!mdesc->pv_fixup)
return;
@@ -1520,7 +1529,6 @@ static void __init early_paging_init(const struct machine_desc *mdesc)
*/
lpae_pgtables_remap = (pgtables_remap *)(unsigned long)__pa(lpae_pgtables_remap_asm);
pa_pgd = __pa(swapper_pg_dir);
- boot_data = __va(__atags_pointer);
barrier();
pr_info("Switching physical address space to 0x%08llx\n",
@@ -1556,7 +1564,7 @@ static void __init early_paging_init(const struct machine_desc *mdesc)
* needs to be assembly. It's fairly simple, as we're using the
* temporary tables setup by the initial assembly code.
*/
- lpae_pgtables_remap(offset, pa_pgd, boot_data);
+ lpae_pgtables_remap(offset, pa_pgd);
/* Re-enable the caches and cacheable TLB walks */
asm volatile("mcr p15, 0, %0, c2, c0, 2" : : "r" (ttbcr));
diff --git a/arch/arm/mm/pmsa-v7.c b/arch/arm/mm/pmsa-v7.c
index 88950e4..59d916ccd 100644
--- a/arch/arm/mm/pmsa-v7.c
+++ b/arch/arm/mm/pmsa-v7.c
@@ -235,6 +235,7 @@ void __init pmsav7_adjust_lowmem_bounds(void)
phys_addr_t mem_end;
phys_addr_t reg_start, reg_end;
unsigned int mem_max_regions;
+ bool first = true;
int num;
u64 i;
@@ -263,7 +264,7 @@ void __init pmsav7_adjust_lowmem_bounds(void)
#endif
for_each_mem_range(i, &reg_start, &reg_end) {
- if (i == 0) {
+ if (first) {
phys_addr_t phys_offset = PHYS_OFFSET;
/*
@@ -275,6 +276,7 @@ void __init pmsav7_adjust_lowmem_bounds(void)
mem_start = reg_start;
mem_end = reg_end;
specified_mem_size = mem_end - mem_start;
+ first = false;
} else {
/*
* memblock auto merges contiguous blocks, remove
diff --git a/arch/arm/mm/pmsa-v8.c b/arch/arm/mm/pmsa-v8.c
index 2de019f..8359748 100644
--- a/arch/arm/mm/pmsa-v8.c
+++ b/arch/arm/mm/pmsa-v8.c
@@ -95,10 +95,11 @@ void __init pmsav8_adjust_lowmem_bounds(void)
{
phys_addr_t mem_end;
phys_addr_t reg_start, reg_end;
+ bool first = true;
u64 i;
for_each_mem_range(i, &reg_start, &reg_end) {
- if (i == 0) {
+ if (first) {
phys_addr_t phys_offset = PHYS_OFFSET;
/*
@@ -107,6 +108,7 @@ void __init pmsav8_adjust_lowmem_bounds(void)
if (reg_start != phys_offset)
panic("First memory bank must be contiguous from PHYS_OFFSET");
mem_end = reg_end;
+ first = false;
} else {
/*
* memblock auto merges contiguous blocks, remove
diff --git a/arch/arm/mm/pv-fixup-asm.S b/arch/arm/mm/pv-fixup-asm.S
index 8eade04..5c5e195 100644
--- a/arch/arm/mm/pv-fixup-asm.S
+++ b/arch/arm/mm/pv-fixup-asm.S
@@ -39,8 +39,8 @@
/* Update level 2 entries for the boot data */
add r7, r2, #0x1000
- add r7, r7, r3, lsr #SECTION_SHIFT - L2_ORDER
- bic r7, r7, #(1 << L2_ORDER) - 1
+ movw r3, #FDT_FIXED_BASE >> (SECTION_SHIFT - L2_ORDER)
+ add r7, r7, r3
ldrd r4, r5, [r7]
adds r4, r4, r0
adc r5, r5, r1
diff --git a/arch/arm/probes/uprobes/core.c b/arch/arm/probes/uprobes/core.c
index c4b49b3..f5f790c6 100644
--- a/arch/arm/probes/uprobes/core.c
+++ b/arch/arm/probes/uprobes/core.c
@@ -204,7 +204,7 @@ unsigned long uprobe_get_swbp_addr(struct pt_regs *regs)
static struct undef_hook uprobes_arm_break_hook = {
.instr_mask = 0x0fffffff,
.instr_val = (UPROBE_SWBP_ARM_INSN & 0x0fffffff),
- .cpsr_mask = MODE_MASK,
+ .cpsr_mask = (PSR_T_BIT | MODE_MASK),
.cpsr_val = USR_MODE,
.fn = uprobe_trap_handler,
};
@@ -212,7 +212,7 @@ static struct undef_hook uprobes_arm_break_hook = {
static struct undef_hook uprobes_arm_ss_hook = {
.instr_mask = 0x0fffffff,
.instr_val = (UPROBE_SS_ARM_INSN & 0x0fffffff),
- .cpsr_mask = MODE_MASK,
+ .cpsr_mask = (PSR_T_BIT | MODE_MASK),
.cpsr_val = USR_MODE,
.fn = uprobe_trap_handler,
};
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index c1be642..5e5cf3af 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1390,10 +1390,13 @@
The feature is detected at runtime, and will remain as a 'nop'
instruction if the cpu does not implement the feature.
+config AS_HAS_LSE_ATOMICS
+ def_bool $(as-instr,.arch_extension lse)
+
config ARM64_LSE_ATOMICS
bool
default ARM64_USE_LSE_ATOMICS
- depends on $(as-instr,.arch_extension lse)
+ depends on AS_HAS_LSE_ATOMICS
config ARM64_USE_LSE_ATOMICS
bool "Atomic instructions"
@@ -1667,6 +1670,7 @@
bool "Memory Tagging Extension support"
default y
depends on ARM64_AS_HAS_MTE && ARM64_TAGGED_ADDR_ABI
+ depends on AS_HAS_LSE_ATOMICS
select ARCH_USES_HIGH_VMA_FLAGS
help
Memory Tagging (part of the ARMv8.5 Extensions) provides
diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64-pine64-lts.dts b/arch/arm64/boot/dts/allwinner/sun50i-a64-pine64-lts.dts
index 302e24b..358df6d 100644
--- a/arch/arm64/boot/dts/allwinner/sun50i-a64-pine64-lts.dts
+++ b/arch/arm64/boot/dts/allwinner/sun50i-a64-pine64-lts.dts
@@ -8,3 +8,7 @@ / {
compatible = "pine64,pine64-lts", "allwinner,sun50i-r18",
"allwinner,sun50i-a64";
};
+
+&mmc0 {
+ broken-cd; /* card detect is broken on *some* boards */
+};
diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64-sopine.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-a64-sopine.dtsi
index 3402cec..df62044 100644
--- a/arch/arm64/boot/dts/allwinner/sun50i-a64-sopine.dtsi
+++ b/arch/arm64/boot/dts/allwinner/sun50i-a64-sopine.dtsi
@@ -34,7 +34,7 @@ &mmc0 {
vmmc-supply = <&reg_dcdc1>;
disable-wp;
bus-width = <4>;
- cd-gpios = <&pio 5 6 GPIO_ACTIVE_LOW>; /* PF6 */
+ cd-gpios = <&pio 5 6 GPIO_ACTIVE_HIGH>; /* PF6 push-pull switch */
status = "okay";
};
diff --git a/arch/arm64/boot/dts/allwinner/sun50i-h6-beelink-gs1.dts b/arch/arm64/boot/dts/allwinner/sun50i-h6-beelink-gs1.dts
index 7c9dbde..e8163c5 100644
--- a/arch/arm64/boot/dts/allwinner/sun50i-h6-beelink-gs1.dts
+++ b/arch/arm64/boot/dts/allwinner/sun50i-h6-beelink-gs1.dts
@@ -289,10 +289,6 @@ &r_pio {
vcc-pm-supply = <&reg_aldo1>;
};
-&rtc {
- clocks = <&ext_osc32k>;
-};
-
&spdif {
status = "okay";
};
diff --git a/arch/arm64/boot/dts/freescale/imx8mm-pinfunc.h b/arch/arm64/boot/dts/freescale/imx8mm-pinfunc.h
index 5ccc4cc..a003e6a 100644
--- a/arch/arm64/boot/dts/freescale/imx8mm-pinfunc.h
+++ b/arch/arm64/boot/dts/freescale/imx8mm-pinfunc.h
@@ -124,7 +124,7 @@
#define MX8MM_IOMUXC_SD1_CMD_USDHC1_CMD 0x0A4 0x30C 0x000 0x0 0x0
#define MX8MM_IOMUXC_SD1_CMD_GPIO2_IO1 0x0A4 0x30C 0x000 0x5 0x0
#define MX8MM_IOMUXC_SD1_DATA0_USDHC1_DATA0 0x0A8 0x310 0x000 0x0 0x0
-#define MX8MM_IOMUXC_SD1_DATA0_GPIO2_IO2 0x0A8 0x31 0x000 0x5 0x0
+#define MX8MM_IOMUXC_SD1_DATA0_GPIO2_IO2 0x0A8 0x310 0x000 0x5 0x0
#define MX8MM_IOMUXC_SD1_DATA1_USDHC1_DATA1 0x0AC 0x314 0x000 0x0 0x0
#define MX8MM_IOMUXC_SD1_DATA1_GPIO2_IO3 0x0AC 0x314 0x000 0x5 0x0
#define MX8MM_IOMUXC_SD1_DATA2_USDHC1_DATA2 0x0B0 0x318 0x000 0x0 0x0
diff --git a/arch/arm64/boot/dts/freescale/imx8mq-librem5-r3.dts b/arch/arm64/boot/dts/freescale/imx8mq-librem5-r3.dts
index 6704ea2..cc29223 100644
--- a/arch/arm64/boot/dts/freescale/imx8mq-librem5-r3.dts
+++ b/arch/arm64/boot/dts/freescale/imx8mq-librem5-r3.dts
@@ -22,6 +22,10 @@ &bq25895 {
ti,termination-current = <144000>; /* uA */
};
+&buck3_reg {
+ regulator-always-on;
+};
+
&proximity {
proximity-near-level = <25>;
};
diff --git a/arch/arm64/boot/dts/freescale/imx8mq-pinfunc.h b/arch/arm64/boot/dts/freescale/imx8mq-pinfunc.h
index b94b020..68e8fa1 100644
--- a/arch/arm64/boot/dts/freescale/imx8mq-pinfunc.h
+++ b/arch/arm64/boot/dts/freescale/imx8mq-pinfunc.h
@@ -130,7 +130,7 @@
#define MX8MQ_IOMUXC_SD1_CMD_USDHC1_CMD 0x0A4 0x30C 0x000 0x0 0x0
#define MX8MQ_IOMUXC_SD1_CMD_GPIO2_IO1 0x0A4 0x30C 0x000 0x5 0x0
#define MX8MQ_IOMUXC_SD1_DATA0_USDHC1_DATA0 0x0A8 0x310 0x000 0x0 0x0
-#define MX8MQ_IOMUXC_SD1_DATA0_GPIO2_IO2 0x0A8 0x31 0x000 0x5 0x0
+#define MX8MQ_IOMUXC_SD1_DATA0_GPIO2_IO2 0x0A8 0x310 0x000 0x5 0x0
#define MX8MQ_IOMUXC_SD1_DATA1_USDHC1_DATA1 0x0AC 0x314 0x000 0x0 0x0
#define MX8MQ_IOMUXC_SD1_DATA1_GPIO2_IO3 0x0AC 0x314 0x000 0x5 0x0
#define MX8MQ_IOMUXC_SD1_DATA2_USDHC1_DATA2 0x0B0 0x318 0x000 0x0 0x0
diff --git a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
index d5b6c0a..a89e47d 100644
--- a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+++ b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
@@ -156,7 +156,8 @@ uart1: serial@12200 {
};
nb_periph_clk: nb-periph-clk@13000 {
- compatible = "marvell,armada-3700-periph-clock-nb";
+ compatible = "marvell,armada-3700-periph-clock-nb",
+ "syscon";
reg = <0x13000 0x100>;
clocks = <&tbg 0>, <&tbg 1>, <&tbg 2>,
<&tbg 3>, <&xtalclk>;
diff --git a/arch/arm64/boot/dts/mediatek/mt8173.dtsi b/arch/arm64/boot/dts/mediatek/mt8173.dtsi
index 5e046f9..592c6bc 100644
--- a/arch/arm64/boot/dts/mediatek/mt8173.dtsi
+++ b/arch/arm64/boot/dts/mediatek/mt8173.dtsi
@@ -1169,7 +1169,7 @@ dsi1: dsi@1401c000 {
<&mmsys CLK_MM_DSI1_DIGITAL>,
<&mipi_tx1>;
clock-names = "engine", "digital", "hs";
- phy = <&mipi_tx1>;
+ phys = <&mipi_tx1>;
phy-names = "dphy";
status = "disabled";
};
diff --git a/arch/arm64/boot/dts/mediatek/pumpkin-common.dtsi b/arch/arm64/boot/dts/mediatek/pumpkin-common.dtsi
index 29d8cf6..99c2d6f 100644
--- a/arch/arm64/boot/dts/mediatek/pumpkin-common.dtsi
+++ b/arch/arm64/boot/dts/mediatek/pumpkin-common.dtsi
@@ -56,7 +56,7 @@ &i2c0 {
tca6416: gpio@20 {
compatible = "ti,tca6416";
reg = <0x20>;
- reset-gpios = <&pio 65 GPIO_ACTIVE_HIGH>;
+ reset-gpios = <&pio 65 GPIO_ACTIVE_LOW>;
pinctrl-names = "default";
pinctrl-0 = <&tca6416_pins>;
diff --git a/arch/arm64/boot/dts/qcom/sdm845-db845c.dts b/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
index c4ac6f5..96d36b3 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
+++ b/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
@@ -1015,7 +1015,7 @@ swm: swm@c85 {
left_spkr: wsa8810-left{
compatible = "sdw10217201000";
reg = <0 1>;
- powerdown-gpios = <&wcdgpio 2 GPIO_ACTIVE_HIGH>;
+ powerdown-gpios = <&wcdgpio 1 GPIO_ACTIVE_HIGH>;
#thermal-sensor-cells = <0>;
sound-name-prefix = "SpkrLeft";
#sound-dai-cells = <0>;
@@ -1023,7 +1023,7 @@ left_spkr: wsa8810-left{
right_spkr: wsa8810-right{
compatible = "sdw10217201000";
- powerdown-gpios = <&wcdgpio 2 GPIO_ACTIVE_HIGH>;
+ powerdown-gpios = <&wcdgpio 1 GPIO_ACTIVE_HIGH>;
reg = <0 2>;
#thermal-sensor-cells = <0>;
sound-name-prefix = "SpkrRight";
diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
index f97f354..ea6e3a1 100644
--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
@@ -2192,7 +2192,7 @@ tlmm: pinctrl@3400000 {
#gpio-cells = <2>;
interrupt-controller;
#interrupt-cells = <2>;
- gpio-ranges = <&tlmm 0 0 150>;
+ gpio-ranges = <&tlmm 0 0 151>;
wakeup-parent = <&pdc_intc>;
cci0_default: cci0-default {
diff --git a/arch/arm64/boot/dts/qcom/sm8150.dtsi b/arch/arm64/boot/dts/qcom/sm8150.dtsi
index f0a872e..1aec545 100644
--- a/arch/arm64/boot/dts/qcom/sm8150.dtsi
+++ b/arch/arm64/boot/dts/qcom/sm8150.dtsi
@@ -748,7 +748,7 @@ tlmm: pinctrl@3100000 {
<0x0 0x03D00000 0x0 0x300000>;
reg-names = "west", "east", "north", "south";
interrupts = <GIC_SPI 208 IRQ_TYPE_LEVEL_HIGH>;
- gpio-ranges = <&tlmm 0 0 175>;
+ gpio-ranges = <&tlmm 0 0 176>;
gpio-controller;
#gpio-cells = <2>;
interrupt-controller;
diff --git a/arch/arm64/boot/dts/qcom/sm8250.dtsi b/arch/arm64/boot/dts/qcom/sm8250.dtsi
index d057d85..d4547a1 100644
--- a/arch/arm64/boot/dts/qcom/sm8250.dtsi
+++ b/arch/arm64/boot/dts/qcom/sm8250.dtsi
@@ -216,7 +216,7 @@ memory@80000000 {
pmu {
compatible = "arm,armv8-pmuv3";
- interrupts = <GIC_PPI 7 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_PPI 7 IRQ_TYPE_LEVEL_LOW>;
};
psci {
@@ -1555,7 +1555,7 @@ tlmm: pinctrl@f100000 {
#gpio-cells = <2>;
interrupt-controller;
#interrupt-cells = <2>;
- gpio-ranges = <&tlmm 0 0 180>;
+ gpio-ranges = <&tlmm 0 0 181>;
wakeup-parent = <&pdc>;
qup_i2c0_default: qup-i2c0-default {
@@ -2379,7 +2379,7 @@ timer {
(GIC_CPU_MASK_SIMPLE(8) | IRQ_TYPE_LEVEL_LOW)>,
<GIC_PPI 11
(GIC_CPU_MASK_SIMPLE(8) | IRQ_TYPE_LEVEL_LOW)>,
- <GIC_PPI 12
+ <GIC_PPI 10
(GIC_CPU_MASK_SIMPLE(8) | IRQ_TYPE_LEVEL_LOW)>;
};
diff --git a/arch/arm64/boot/dts/renesas/hihope-common.dtsi b/arch/arm64/boot/dts/renesas/hihope-common.dtsi
index 2eda9f6..e8bf6f0 100644
--- a/arch/arm64/boot/dts/renesas/hihope-common.dtsi
+++ b/arch/arm64/boot/dts/renesas/hihope-common.dtsi
@@ -12,6 +12,9 @@ / {
aliases {
serial0 = &scif2;
serial1 = &hscif0;
+ mmc0 = &sdhi3;
+ mmc1 = &sdhi0;
+ mmc2 = &sdhi2;
};
chosen {
diff --git a/arch/arm64/boot/dts/renesas/r8a774a1-beacon-rzg2m-kit.dts b/arch/arm64/boot/dts/renesas/r8a774a1-beacon-rzg2m-kit.dts
index 2c5b057..ad26f5b 100644
--- a/arch/arm64/boot/dts/renesas/r8a774a1-beacon-rzg2m-kit.dts
+++ b/arch/arm64/boot/dts/renesas/r8a774a1-beacon-rzg2m-kit.dts
@@ -21,6 +21,9 @@ aliases {
serial4 = &hscif2;
serial5 = &scif5;
ethernet0 = &avb;
+ mmc0 = &sdhi3;
+ mmc1 = &sdhi0;
+ mmc2 = &sdhi2;
};
chosen {
diff --git a/arch/arm64/boot/dts/renesas/r8a774c0-cat874.dts b/arch/arm64/boot/dts/renesas/r8a774c0-cat874.dts
index 26aee00..c4b50a5 100644
--- a/arch/arm64/boot/dts/renesas/r8a774c0-cat874.dts
+++ b/arch/arm64/boot/dts/renesas/r8a774c0-cat874.dts
@@ -17,6 +17,8 @@ / {
aliases {
serial0 = &scif2;
serial1 = &hscif2;
+ mmc0 = &sdhi0;
+ mmc1 = &sdhi3;
};
chosen {
diff --git a/arch/arm64/boot/dts/renesas/r8a77980.dtsi b/arch/arm64/boot/dts/renesas/r8a77980.dtsi
index d6cae90..e6ef837 100644
--- a/arch/arm64/boot/dts/renesas/r8a77980.dtsi
+++ b/arch/arm64/boot/dts/renesas/r8a77980.dtsi
@@ -990,8 +990,8 @@ port@1 {
reg = <1>;
- vin4csi41: endpoint@2 {
- reg = <2>;
+ vin4csi41: endpoint@3 {
+ reg = <3>;
remote-endpoint = <&csi41vin4>;
};
};
@@ -1018,8 +1018,8 @@ port@1 {
reg = <1>;
- vin5csi41: endpoint@2 {
- reg = <2>;
+ vin5csi41: endpoint@3 {
+ reg = <3>;
remote-endpoint = <&csi41vin5>;
};
};
@@ -1046,8 +1046,8 @@ port@1 {
reg = <1>;
- vin6csi41: endpoint@2 {
- reg = <2>;
+ vin6csi41: endpoint@3 {
+ reg = <3>;
remote-endpoint = <&csi41vin6>;
};
};
@@ -1074,8 +1074,8 @@ port@1 {
reg = <1>;
- vin7csi41: endpoint@2 {
- reg = <2>;
+ vin7csi41: endpoint@3 {
+ reg = <3>;
remote-endpoint = <&csi41vin7>;
};
};
diff --git a/arch/arm64/boot/dts/renesas/r8a77990-ebisu.dts b/arch/arm64/boot/dts/renesas/r8a77990-ebisu.dts
index e0ccca2..b9e3b67 100644
--- a/arch/arm64/boot/dts/renesas/r8a77990-ebisu.dts
+++ b/arch/arm64/boot/dts/renesas/r8a77990-ebisu.dts
@@ -16,6 +16,9 @@ / {
aliases {
serial0 = &scif2;
ethernet0 = &avb;
+ mmc0 = &sdhi3;
+ mmc1 = &sdhi0;
+ mmc2 = &sdhi1;
};
chosen {
diff --git a/arch/arm64/boot/dts/renesas/r8a779a0.dtsi b/arch/arm64/boot/dts/renesas/r8a779a0.dtsi
index 6cf77ce..86ec32a 100644
--- a/arch/arm64/boot/dts/renesas/r8a779a0.dtsi
+++ b/arch/arm64/boot/dts/renesas/r8a779a0.dtsi
@@ -50,10 +50,7 @@ extalr_clk: extalr {
pmu_a76 {
compatible = "arm,cortex-a76-pmu";
- interrupts-extended = <&gic GIC_SPI 139 IRQ_TYPE_LEVEL_HIGH>,
- <&gic GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>,
- <&gic GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>,
- <&gic GIC_SPI 142 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts-extended = <&gic GIC_PPI 7 IRQ_TYPE_LEVEL_LOW>;
};
/* External SCIF clock - to be overridden by boards that provide it */
diff --git a/arch/arm64/boot/dts/renesas/salvator-common.dtsi b/arch/arm64/boot/dts/renesas/salvator-common.dtsi
index 1bf7795..08b8525 100644
--- a/arch/arm64/boot/dts/renesas/salvator-common.dtsi
+++ b/arch/arm64/boot/dts/renesas/salvator-common.dtsi
@@ -36,6 +36,9 @@ aliases {
serial0 = &scif2;
serial1 = &hscif1;
ethernet0 = &avb;
+ mmc0 = &sdhi2;
+ mmc1 = &sdhi0;
+ mmc2 = &sdhi3;
};
chosen {
diff --git a/arch/arm64/boot/dts/renesas/ulcb-kf.dtsi b/arch/arm64/boot/dts/renesas/ulcb-kf.dtsi
index 2021777..05e64bf 100644
--- a/arch/arm64/boot/dts/renesas/ulcb-kf.dtsi
+++ b/arch/arm64/boot/dts/renesas/ulcb-kf.dtsi
@@ -16,6 +16,7 @@ / {
aliases {
serial1 = &hscif0;
serial2 = &scif1;
+ mmc2 = &sdhi3;
};
clksndsel: clksndsel {
diff --git a/arch/arm64/boot/dts/renesas/ulcb.dtsi b/arch/arm64/boot/dts/renesas/ulcb.dtsi
index a2e085d..e11521b 100644
--- a/arch/arm64/boot/dts/renesas/ulcb.dtsi
+++ b/arch/arm64/boot/dts/renesas/ulcb.dtsi
@@ -23,6 +23,8 @@ / {
aliases {
serial0 = &scif2;
ethernet0 = &avb;
+ mmc0 = &sdhi2;
+ mmc1 = &sdhi0;
};
chosen {
diff --git a/arch/arm64/boot/dts/socionext/uniphier-ld20.dtsi b/arch/arm64/boot/dts/socionext/uniphier-ld20.dtsi
index a87b8a6..8f2c1c1 100644
--- a/arch/arm64/boot/dts/socionext/uniphier-ld20.dtsi
+++ b/arch/arm64/boot/dts/socionext/uniphier-ld20.dtsi
@@ -734,7 +734,7 @@ eth: ethernet@65000000 {
clocks = <&sys_clk 6>;
reset-names = "ether";
resets = <&sys_rst 6>;
- phy-mode = "rgmii";
+ phy-mode = "rgmii-id";
local-mac-address = [00 00 00 00 00 00];
socionext,syscon-phy-mode = <&soc_glue 0>;
diff --git a/arch/arm64/boot/dts/socionext/uniphier-pxs3.dtsi b/arch/arm64/boot/dts/socionext/uniphier-pxs3.dtsi
index 0e52dadf..be97da13 100644
--- a/arch/arm64/boot/dts/socionext/uniphier-pxs3.dtsi
+++ b/arch/arm64/boot/dts/socionext/uniphier-pxs3.dtsi
@@ -564,7 +564,7 @@ eth0: ethernet@65000000 {
clocks = <&sys_clk 6>;
reset-names = "ether";
resets = <&sys_rst 6>;
- phy-mode = "rgmii";
+ phy-mode = "rgmii-id";
local-mac-address = [00 00 00 00 00 00];
socionext,syscon-phy-mode = <&soc_glue 0>;
@@ -585,7 +585,7 @@ eth1: ethernet@65200000 {
clocks = <&sys_clk 7>;
reset-names = "ether";
resets = <&sys_rst 7>;
- phy-mode = "rgmii";
+ phy-mode = "rgmii-id";
local-mac-address = [00 00 00 00 00 00];
socionext,syscon-phy-mode = <&soc_glue 1>;
diff --git a/arch/arm64/crypto/poly1305-glue.c b/arch/arm64/crypto/poly1305-glue.c
index f33ada7..01e22fe 100644
--- a/arch/arm64/crypto/poly1305-glue.c
+++ b/arch/arm64/crypto/poly1305-glue.c
@@ -25,7 +25,7 @@ asmlinkage void poly1305_emit(void *state, u8 *digest, const u32 *nonce);
static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_neon);
-void poly1305_init_arch(struct poly1305_desc_ctx *dctx, const u8 *key)
+void poly1305_init_arch(struct poly1305_desc_ctx *dctx, const u8 key[POLY1305_KEY_SIZE])
{
poly1305_init_arm64(&dctx->h, key);
dctx->s[0] = get_unaligned_le32(key + 16);
diff --git a/arch/arm64/include/asm/alternative.h b/arch/arm64/include/asm/alternative.h
index 619db9b..3cb3c4a 100644
--- a/arch/arm64/include/asm/alternative.h
+++ b/arch/arm64/include/asm/alternative.h
@@ -119,9 +119,9 @@ static inline void apply_alternatives_module(void *start, size_t length) { }
.popsection
.subsection 1
663: \insn2
-664: .previous
- .org . - (664b-663b) + (662b-661b)
+664: .org . - (664b-663b) + (662b-661b)
.org . - (662b-661b) + (664b-663b)
+ .previous
.endif
.endm
@@ -191,11 +191,11 @@ static inline void apply_alternatives_module(void *start, size_t length) { }
*/
.macro alternative_endif
664:
+ .org . - (664b-663b) + (662b-661b)
+ .org . - (662b-661b) + (664b-663b)
.if .Lasm_alt_mode==0
.previous
.endif
- .org . - (664b-663b) + (662b-661b)
- .org . - (662b-661b) + (664b-663b)
.endm
/*
diff --git a/arch/arm64/include/asm/daifflags.h b/arch/arm64/include/asm/daifflags.h
index 1c26d7b..cfdde3a 100644
--- a/arch/arm64/include/asm/daifflags.h
+++ b/arch/arm64/include/asm/daifflags.h
@@ -131,6 +131,9 @@ static inline void local_daif_inherit(struct pt_regs *regs)
if (interrupts_enabled(regs))
trace_hardirqs_on();
+ if (system_uses_irq_prio_masking())
+ gic_write_pmr(regs->pmr_save);
+
/*
* We can't use local_daif_restore(regs->pstate) here as
* system_has_prio_mask_debugging() won't restore the I bit if it can
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 64ce293..395cb22 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -277,6 +277,7 @@
#define CPTR_EL2_DEFAULT CPTR_EL2_RES1
/* Hyp Debug Configuration Register bits */
+#define MDCR_EL2_TTRF (1 << 19)
#define MDCR_EL2_TPMS (1 << 14)
#define MDCR_EL2_E2PB_MASK (UL(0x3))
#define MDCR_EL2_E2PB_SHIFT (UL(12))
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 00bc6f1..472122d7 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -505,4 +505,9 @@ static __always_inline void __kvm_skip_instr(struct kvm_vcpu *vcpu)
write_sysreg_el2(*vcpu_pc(vcpu), SYS_ELR);
}
+static inline bool vcpu_has_feature(struct kvm_vcpu *vcpu, int feature)
+{
+ return test_bit(feature, vcpu->arch.features);
+}
+
#endif /* __ARM64_KVM_EMULATE_H__ */
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index cc060c4..912b83e 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -601,6 +601,7 @@ static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
void kvm_arm_init_debug(void);
+void kvm_arm_vcpu_init_debug(struct kvm_vcpu *vcpu);
void kvm_arm_setup_debug(struct kvm_vcpu *vcpu);
void kvm_arm_clear_debug(struct kvm_vcpu *vcpu);
void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index a884d77..fce8cbec 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -96,8 +96,7 @@
#endif /* CONFIG_ARM64_FORCE_52BIT */
extern phys_addr_t arm64_dma_phys_limit;
-extern phys_addr_t arm64_dma32_phys_limit;
-#define ARCH_LOW_ADDRESS_LIMIT ((arm64_dma_phys_limit ? : arm64_dma32_phys_limit) - 1)
+#define ARCH_LOW_ADDRESS_LIMIT (arm64_dma_phys_limit - 1)
struct debug_info {
#ifdef CONFIG_HAVE_HW_BREAKPOINT
diff --git a/arch/arm64/include/asm/word-at-a-time.h b/arch/arm64/include/asm/word-at-a-time.h
index 3333950..ea48721 100644
--- a/arch/arm64/include/asm/word-at-a-time.h
+++ b/arch/arm64/include/asm/word-at-a-time.h
@@ -53,7 +53,7 @@ static inline unsigned long find_zero(unsigned long mask)
*/
static inline unsigned long load_unaligned_zeropad(const void *addr)
{
- unsigned long ret, offset;
+ unsigned long ret, tmp;
/* Load word from unaligned pointer addr */
asm(
@@ -61,9 +61,9 @@ static inline unsigned long load_unaligned_zeropad(const void *addr)
"2:\n"
" .pushsection .fixup,\"ax\"\n"
" .align 2\n"
- "3: and %1, %2, #0x7\n"
- " bic %2, %2, #0x7\n"
- " ldr %0, [%2]\n"
+ "3: bic %1, %2, #0x7\n"
+ " ldr %0, [%1]\n"
+ " and %1, %2, #0x7\n"
" lsl %1, %1, #0x3\n"
#ifndef __AARCH64EB__
" lsr %0, %0, %1\n"
@@ -73,7 +73,7 @@ static inline unsigned long load_unaligned_zeropad(const void *addr)
" b 2b\n"
" .popsection\n"
_ASM_EXTABLE(1b, 3b)
- : "=&r" (ret), "=&r" (offset)
+ : "=&r" (ret), "=&r" (tmp)
: "r" (addr), "Q" (*(unsigned long *)addr));
return ret;
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 7da9a7c..5001c43 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -382,7 +382,6 @@ static const struct arm64_ftr_bits ftr_id_aa64dfr0[] = {
* of support.
*/
S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64DFR0_PMUVER_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64DFR0_TRACEVER_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64DFR0_DEBUGVER_SHIFT, 4, 0x6),
ARM64_FTR_END,
};
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 70e0a75..ec120ed 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -178,14 +178,6 @@ static void noinstr el1_dbg(struct pt_regs *regs, unsigned long esr)
{
unsigned long far = read_sysreg(far_el1);
- /*
- * The CPU masked interrupts, and we are leaving them masked during
- * do_debug_exception(). Update PMR as if we had called
- * local_daif_mask().
- */
- if (system_uses_irq_prio_masking())
- gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
-
arm64_enter_el1_dbg(regs);
do_debug_exception(far, esr, regs);
arm64_exit_el1_dbg(regs);
@@ -350,9 +342,6 @@ static void noinstr el0_dbg(struct pt_regs *regs, unsigned long esr)
/* Only watchpoints write FAR_EL1, otherwise its UNKNOWN */
unsigned long far = read_sysreg(far_el1);
- if (system_uses_irq_prio_masking())
- gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
-
enter_from_user_mode();
do_debug_exception(far, esr, regs);
local_daif_restore(DAIF_PROCCTX_NOIRQ);
@@ -360,9 +349,6 @@ static void noinstr el0_dbg(struct pt_regs *regs, unsigned long esr)
static void noinstr el0_svc(struct pt_regs *regs)
{
- if (system_uses_irq_prio_masking())
- gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
-
enter_from_user_mode();
do_el0_svc(regs);
}
@@ -437,9 +423,6 @@ static void noinstr el0_cp15(struct pt_regs *regs, unsigned long esr)
static void noinstr el0_svc_compat(struct pt_regs *regs)
{
- if (system_uses_irq_prio_masking())
- gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
-
enter_from_user_mode();
do_el0_svc_compat(regs);
}
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index d72c818..60d3991 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -148,16 +148,18 @@
.endm
/* Check for MTE asynchronous tag check faults */
- .macro check_mte_async_tcf, flgs, tmp
+ .macro check_mte_async_tcf, tmp, ti_flags
#ifdef CONFIG_ARM64_MTE
+ .arch_extension lse
alternative_if_not ARM64_MTE
b 1f
alternative_else_nop_endif
mrs_s \tmp, SYS_TFSRE0_EL1
tbz \tmp, #SYS_TFSR_EL1_TF0_SHIFT, 1f
/* Asynchronous TCF occurred for TTBR0 access, set the TI flag */
- orr \flgs, \flgs, #_TIF_MTE_ASYNC_FAULT
- str \flgs, [tsk, #TSK_TI_FLAGS]
+ mov \tmp, #_TIF_MTE_ASYNC_FAULT
+ add \ti_flags, tsk, #TSK_TI_FLAGS
+ stset \tmp, [\ti_flags]
msr_s SYS_TFSRE0_EL1, xzr
1:
#endif
@@ -207,7 +209,7 @@
disable_step_tsk x19, x20
/* Check for asynchronous tag check faults in user space */
- check_mte_async_tcf x19, x22
+ check_mte_async_tcf x22, x23
apply_ssbd 1, x22, x23
ptrauth_keys_install_kernel tsk, x20, x22, x23
@@ -257,6 +259,8 @@
alternative_if ARM64_HAS_IRQ_PRIO_MASKING
mrs_s x20, SYS_ICC_PMR_EL1
str x20, [sp, #S_PMR_SAVE]
+ mov x20, #GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET
+ msr_s SYS_ICC_PMR_EL1, x20
alternative_else_nop_endif
/* Re-enable tag checking (TCO set on exception entry) */
@@ -462,8 +466,8 @@
/*
* Interrupt handling.
*/
- .macro irq_handler
- ldr_l x1, handle_arch_irq
+ .macro irq_handler, handler:req
+ ldr_l x1, \handler
mov x0, sp
irq_stack_entry
blr x1
@@ -493,13 +497,41 @@
#endif
.endm
- .macro gic_prio_irq_setup, pmr:req, tmp:req
-#ifdef CONFIG_ARM64_PSEUDO_NMI
- alternative_if ARM64_HAS_IRQ_PRIO_MASKING
- orr \tmp, \pmr, #GIC_PRIO_PSR_I_SET
- msr_s SYS_ICC_PMR_EL1, \tmp
- alternative_else_nop_endif
+ .macro el1_interrupt_handler, handler:req
+ enable_da_f
+
+ mov x0, sp
+ bl enter_el1_irq_or_nmi
+
+ irq_handler \handler
+
+#ifdef CONFIG_PREEMPTION
+ ldr x24, [tsk, #TSK_TI_PREEMPT] // get preempt count
+alternative_if ARM64_HAS_IRQ_PRIO_MASKING
+ /*
+ * DA_F were cleared at start of handling. If anything is set in DAIF,
+ * we come back from an NMI, so skip preemption
+ */
+ mrs x0, daif
+ orr x24, x24, x0
+alternative_else_nop_endif
+ cbnz x24, 1f // preempt count != 0 || NMI return path
+ bl arm64_preempt_schedule_irq // irq en/disable is done inside
+1:
#endif
+
+ mov x0, sp
+ bl exit_el1_irq_or_nmi
+ .endm
+
+ .macro el0_interrupt_handler, handler:req
+ user_exit_irqoff
+ enable_da_f
+
+ tbz x22, #55, 1f
+ bl do_el0_irq_bp_hardening
+1:
+ irq_handler \handler
.endm
.text
@@ -631,32 +663,7 @@
.align 6
SYM_CODE_START_LOCAL_NOALIGN(el1_irq)
kernel_entry 1
- gic_prio_irq_setup pmr=x20, tmp=x1
- enable_da_f
-
- mov x0, sp
- bl enter_el1_irq_or_nmi
-
- irq_handler
-
-#ifdef CONFIG_PREEMPTION
- ldr x24, [tsk, #TSK_TI_PREEMPT] // get preempt count
-alternative_if ARM64_HAS_IRQ_PRIO_MASKING
- /*
- * DA_F were cleared at start of handling. If anything is set in DAIF,
- * we come back from an NMI, so skip preemption
- */
- mrs x0, daif
- orr x24, x24, x0
-alternative_else_nop_endif
- cbnz x24, 1f // preempt count != 0 || NMI return path
- bl arm64_preempt_schedule_irq // irq en/disable is done inside
-1:
-#endif
-
- mov x0, sp
- bl exit_el1_irq_or_nmi
-
+ el1_interrupt_handler handle_arch_irq
kernel_exit 1
SYM_CODE_END(el1_irq)
@@ -696,22 +703,13 @@
SYM_CODE_START_LOCAL_NOALIGN(el0_irq)
kernel_entry 0
el0_irq_naked:
- gic_prio_irq_setup pmr=x20, tmp=x0
- user_exit_irqoff
- enable_da_f
-
- tbz x22, #55, 1f
- bl do_el0_irq_bp_hardening
-1:
- irq_handler
-
+ el0_interrupt_handler handle_arch_irq
b ret_to_user
SYM_CODE_END(el0_irq)
SYM_CODE_START_LOCAL(el1_error)
kernel_entry 1
mrs x1, esr_el1
- gic_prio_kentry_setup tmp=x2
enable_dbg
mov x0, sp
bl do_serror
@@ -722,7 +720,6 @@
kernel_entry 0
el0_error_naked:
mrs x25, esr_el1
- gic_prio_kentry_setup tmp=x2
user_exit_irqoff
enable_dbg
mov x0, sp
diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
index f11a1a1..798c3e7 100644
--- a/arch/arm64/kernel/probes/kprobes.c
+++ b/arch/arm64/kernel/probes/kprobes.c
@@ -286,10 +286,12 @@ int __kprobes kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr)
if (!instruction_pointer(regs))
BUG();
- if (kcb->kprobe_status == KPROBE_REENTER)
+ if (kcb->kprobe_status == KPROBE_REENTER) {
restore_previous_kprobe(kcb);
- else
+ } else {
+ kprobes_restore_local_irqflag(kcb, regs);
reset_current_kprobe();
+ }
break;
case KPROBE_HIT_ACTIVE:
diff --git a/arch/arm64/kernel/vdso/vdso.lds.S b/arch/arm64/kernel/vdso/vdso.lds.S
index d808ad3..b840ab1 100644
--- a/arch/arm64/kernel/vdso/vdso.lds.S
+++ b/arch/arm64/kernel/vdso/vdso.lds.S
@@ -31,6 +31,13 @@
.gnu.version_d : { *(.gnu.version_d) }
.gnu.version_r : { *(.gnu.version_r) }
+ /*
+ * Discard .note.gnu.property sections which are unused and have
+ * different alignment requirement from vDSO note sections.
+ */
+ /DISCARD/ : {
+ *(.note.GNU-stack .note.gnu.property)
+ }
.note : { *(.note.*) } :text :note
. = ALIGN(16);
@@ -51,7 +58,6 @@
PROVIDE(end = .);
/DISCARD/ : {
- *(.note.GNU-stack)
*(.data .data.* .gnu.linkonce.d.* .sdata*)
*(.bss .sbss .dynbss .dynsbss)
}
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index a1c2c955..5e5dd99 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -547,6 +547,8 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
vcpu->arch.has_run_once = true;
+ kvm_arm_vcpu_init_debug(vcpu);
+
if (likely(irqchip_in_kernel(kvm))) {
/*
* Map the VGIC hardware resources before running a vcpu the
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index 7a7e425..2484b2c 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -69,6 +69,64 @@ void kvm_arm_init_debug(void)
}
/**
+ * kvm_arm_setup_mdcr_el2 - configure vcpu mdcr_el2 value
+ *
+ * @vcpu: the vcpu pointer
+ *
+ * This ensures we will trap access to:
+ * - Performance monitors (MDCR_EL2_TPM/MDCR_EL2_TPMCR)
+ * - Debug ROM Address (MDCR_EL2_TDRA)
+ * - OS related registers (MDCR_EL2_TDOSA)
+ * - Statistical profiler (MDCR_EL2_TPMS/MDCR_EL2_E2PB)
+ * - Self-hosted Trace Filter controls (MDCR_EL2_TTRF)
+ */
+static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu)
+{
+ /*
+ * This also clears MDCR_EL2_E2PB_MASK to disable guest access
+ * to the profiling buffer.
+ */
+ vcpu->arch.mdcr_el2 = __this_cpu_read(mdcr_el2) & MDCR_EL2_HPMN_MASK;
+ vcpu->arch.mdcr_el2 |= (MDCR_EL2_TPM |
+ MDCR_EL2_TPMS |
+ MDCR_EL2_TTRF |
+ MDCR_EL2_TPMCR |
+ MDCR_EL2_TDRA |
+ MDCR_EL2_TDOSA);
+
+ /* Is the VM being debugged by userspace? */
+ if (vcpu->guest_debug)
+ /* Route all software debug exceptions to EL2 */
+ vcpu->arch.mdcr_el2 |= MDCR_EL2_TDE;
+
+ /*
+ * Trap debug register access when one of the following is true:
+ * - Userspace is using the hardware to debug the guest
+ * (KVM_GUESTDBG_USE_HW is set).
+ * - The guest is not using debug (KVM_ARM64_DEBUG_DIRTY is clear).
+ */
+ if ((vcpu->guest_debug & KVM_GUESTDBG_USE_HW) ||
+ !(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
+ vcpu->arch.mdcr_el2 |= MDCR_EL2_TDA;
+
+ trace_kvm_arm_set_dreg32("MDCR_EL2", vcpu->arch.mdcr_el2);
+}
+
+/**
+ * kvm_arm_vcpu_init_debug - setup vcpu debug traps
+ *
+ * @vcpu: the vcpu pointer
+ *
+ * Set vcpu initial mdcr_el2 value.
+ */
+void kvm_arm_vcpu_init_debug(struct kvm_vcpu *vcpu)
+{
+ preempt_disable();
+ kvm_arm_setup_mdcr_el2(vcpu);
+ preempt_enable();
+}
+
+/**
* kvm_arm_reset_debug_ptr - reset the debug ptr to point to the vcpu state
*/
@@ -83,12 +141,7 @@ void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu)
* @vcpu: the vcpu pointer
*
* This is called before each entry into the hypervisor to setup any
- * debug related registers. Currently this just ensures we will trap
- * access to:
- * - Performance monitors (MDCR_EL2_TPM/MDCR_EL2_TPMCR)
- * - Debug ROM Address (MDCR_EL2_TDRA)
- * - OS related registers (MDCR_EL2_TDOSA)
- * - Statistical profiler (MDCR_EL2_TPMS/MDCR_EL2_E2PB)
+ * debug related registers.
*
* Additionally, KVM only traps guest accesses to the debug registers if
* the guest is not actively using them (see the KVM_ARM64_DEBUG_DIRTY
@@ -100,27 +153,14 @@ void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu)
void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
{
- bool trap_debug = !(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY);
unsigned long mdscr, orig_mdcr_el2 = vcpu->arch.mdcr_el2;
trace_kvm_arm_setup_debug(vcpu, vcpu->guest_debug);
- /*
- * This also clears MDCR_EL2_E2PB_MASK to disable guest access
- * to the profiling buffer.
- */
- vcpu->arch.mdcr_el2 = __this_cpu_read(mdcr_el2) & MDCR_EL2_HPMN_MASK;
- vcpu->arch.mdcr_el2 |= (MDCR_EL2_TPM |
- MDCR_EL2_TPMS |
- MDCR_EL2_TPMCR |
- MDCR_EL2_TDRA |
- MDCR_EL2_TDOSA);
+ kvm_arm_setup_mdcr_el2(vcpu);
/* Is Guest debugging in effect? */
if (vcpu->guest_debug) {
- /* Route all software debug exceptions to EL2 */
- vcpu->arch.mdcr_el2 |= MDCR_EL2_TDE;
-
/* Save guest debug state */
save_guest_debug_regs(vcpu);
@@ -174,7 +214,6 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
vcpu->arch.debug_ptr = &vcpu->arch.external_debug_state;
vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;
- trap_debug = true;
trace_kvm_arm_set_regset("BKPTS", get_num_brps(),
&vcpu->arch.debug_ptr->dbg_bcr[0],
@@ -189,10 +228,6 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
BUG_ON(!vcpu->guest_debug &&
vcpu->arch.debug_ptr != &vcpu->arch.vcpu_debug_state);
- /* Trap debug register access */
- if (trap_debug)
- vcpu->arch.mdcr_el2 |= MDCR_EL2_TDA;
-
/* If KDE or MDE are set, perform a full save/restore cycle. */
if (vcpu_read_sys_reg(vcpu, MDSCR_EL1) & (DBG_MDSCR_KDE | DBG_MDSCR_MDE))
vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;
@@ -201,7 +236,6 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
if (has_vhe() && orig_mdcr_el2 != vcpu->arch.mdcr_el2)
write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
- trace_kvm_arm_set_dreg32("MDCR_EL2", vcpu->arch.mdcr_el2);
trace_kvm_arm_set_dreg32("MDSCR_EL1", vcpu_read_sys_reg(vcpu, MDSCR_EL1));
}
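
For reference, a minimal userspace sketch of the trap decision that kvm_arm_setup_mdcr_el2() centralises above: trap debug-register accesses (TDA) whenever userspace owns the hardware breakpoints or the guest is not actively using its debug state, and route debug exceptions to EL2 (TDE) whenever userspace is debugging the guest. The bit values and flag names below are illustrative stand-ins, not the real MDCR_EL2 layout.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative bit values only; the real MDCR_EL2 layout differs. */
#define TDE (1u << 0)   /* route software debug exceptions to EL2 */
#define TDA (1u << 1)   /* trap guest debug register accesses     */

static uint32_t mdcr_debug_bits(bool userspace_debug, bool userspace_hw_debug,
                                bool guest_debug_dirty)
{
    uint32_t mdcr = 0;

    if (userspace_debug)
        mdcr |= TDE;    /* userspace wants the debug exceptions */

    /* Trap register accesses when userspace owns the HW breakpoints,
     * or when the guest is not actively using its debug state. */
    if (userspace_hw_debug || !guest_debug_dirty)
        mdcr |= TDA;

    return mdcr;
}

int main(void)
{
    printf("idle guest, no debugger: %#x\n", mdcr_debug_bits(false, false, false));
    printf("guest using debug regs:  %#x\n", mdcr_debug_bits(false, false, true));
    printf("userspace HW debug:      %#x\n", mdcr_debug_bits(true, true, true));
    return 0;
}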
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index e911eea..b969c21 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -223,6 +223,25 @@ static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
return 0;
}
+static bool vcpu_allowed_register_width(struct kvm_vcpu *vcpu)
+{
+ struct kvm_vcpu *tmp;
+ bool is32bit;
+ int i;
+
+ is32bit = vcpu_has_feature(vcpu, KVM_ARM_VCPU_EL1_32BIT);
+ if (!cpus_have_const_cap(ARM64_HAS_32BIT_EL1) && is32bit)
+ return false;
+
+ /* Check that the vcpus are either all 32bit or all 64bit */
+ kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
+ if (vcpu_has_feature(tmp, KVM_ARM_VCPU_EL1_32BIT) != is32bit)
+ return false;
+ }
+
+ return true;
+}
+
/**
* kvm_reset_vcpu - sets core registers and sys_regs to reset value
* @vcpu: The VCPU pointer
@@ -274,13 +293,14 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
}
}
+ if (!vcpu_allowed_register_width(vcpu)) {
+ ret = -EINVAL;
+ goto out;
+ }
+
switch (vcpu->arch.target) {
default:
if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) {
- if (!cpus_have_const_cap(ARM64_HAS_32BIT_EL1)) {
- ret = -EINVAL;
- goto out;
- }
pstate = VCPU_RESET_PSTATE_SVC;
} else {
pstate = VCPU_RESET_PSTATE_EL1;
@@ -291,6 +311,11 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
/* Reset core registers */
memset(vcpu_gp_regs(vcpu), 0, sizeof(*vcpu_gp_regs(vcpu)));
+ memset(&vcpu->arch.ctxt.fp_regs, 0, sizeof(vcpu->arch.ctxt.fp_regs));
+ vcpu->arch.ctxt.spsr_abt = 0;
+ vcpu->arch.ctxt.spsr_und = 0;
+ vcpu->arch.ctxt.spsr_irq = 0;
+ vcpu->arch.ctxt.spsr_fiq = 0;
vcpu_gp_regs(vcpu)->pstate = pstate;
/* Reset system registers */
diff --git a/arch/arm64/kvm/vgic/vgic-kvm-device.c b/arch/arm64/kvm/vgic/vgic-kvm-device.c
index 4441967..7740995 100644
--- a/arch/arm64/kvm/vgic/vgic-kvm-device.c
+++ b/arch/arm64/kvm/vgic/vgic-kvm-device.c
@@ -87,8 +87,8 @@ int kvm_vgic_addr(struct kvm *kvm, unsigned long type, u64 *addr, bool write)
r = vgic_v3_set_redist_base(kvm, 0, *addr, 0);
goto out;
}
- rdreg = list_first_entry(&vgic->rd_regions,
- struct vgic_redist_region, list);
+ rdreg = list_first_entry_or_null(&vgic->rd_regions,
+ struct vgic_redist_region, list);
if (!rdreg)
addr_ptr = &undef_value;
else
@@ -226,6 +226,9 @@ static int vgic_get_common_attr(struct kvm_device *dev,
u64 addr;
unsigned long type = (unsigned long)attr->attr;
+ if (copy_from_user(&addr, uaddr, sizeof(addr)))
+ return -EFAULT;
+
r = kvm_vgic_addr(dev->kvm, type, &addr, false);
if (r)
return (r == -ENODEV) ? -ENXIO : r;
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index ac48516..6d44c02 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -55,8 +55,10 @@ void __sync_icache_dcache(pte_t pte)
{
struct page *page = pte_page(pte);
- if (!test_and_set_bit(PG_dcache_clean, &page->flags))
+ if (!test_bit(PG_dcache_clean, &page->flags)) {
sync_icache_aliases(page_address(page), page_size(page));
+ set_bit(PG_dcache_clean, &page->flags);
+ }
}
EXPORT_SYMBOL_GPL(__sync_icache_dcache);
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 916e054..a985d29 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -53,13 +53,13 @@ s64 memstart_addr __ro_after_init = -1;
EXPORT_SYMBOL(memstart_addr);
/*
- * We create both ZONE_DMA and ZONE_DMA32. ZONE_DMA covers the first 1G of
- * memory as some devices, namely the Raspberry Pi 4, have peripherals with
- * this limited view of the memory. ZONE_DMA32 will cover the rest of the 32
- * bit addressable memory area.
+ * If the corresponding config options are enabled, we create both ZONE_DMA
+ * and ZONE_DMA32. By default ZONE_DMA covers the 32-bit addressable memory
+ * unless restricted on specific platforms (e.g. 30-bit on Raspberry Pi 4).
+ * In such case, ZONE_DMA32 covers the rest of the 32-bit addressable memory,
+ * otherwise it is empty.
*/
phys_addr_t arm64_dma_phys_limit __ro_after_init;
-phys_addr_t arm64_dma32_phys_limit __ro_after_init;
#ifdef CONFIG_KEXEC_CORE
/*
@@ -84,7 +84,7 @@ static void __init reserve_crashkernel(void)
if (crash_base == 0) {
/* Current arm64 boot protocol requires 2MB alignment */
- crash_base = memblock_find_in_range(0, arm64_dma32_phys_limit,
+ crash_base = memblock_find_in_range(0, arm64_dma_phys_limit,
crash_size, SZ_2M);
if (crash_base == 0) {
pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
@@ -189,6 +189,7 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
unsigned long max_zone_pfns[MAX_NR_ZONES] = {0};
unsigned int __maybe_unused acpi_zone_dma_bits;
unsigned int __maybe_unused dt_zone_dma_bits;
+ phys_addr_t __maybe_unused dma32_phys_limit = max_zone_phys(32);
#ifdef CONFIG_ZONE_DMA
acpi_zone_dma_bits = fls64(acpi_iort_dma_get_max_cpu_address());
@@ -198,8 +199,12 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
#endif
#ifdef CONFIG_ZONE_DMA32
- max_zone_pfns[ZONE_DMA32] = PFN_DOWN(arm64_dma32_phys_limit);
+ max_zone_pfns[ZONE_DMA32] = PFN_DOWN(dma32_phys_limit);
+ if (!arm64_dma_phys_limit)
+ arm64_dma_phys_limit = dma32_phys_limit;
#endif
+ if (!arm64_dma_phys_limit)
+ arm64_dma_phys_limit = PHYS_MASK + 1;
max_zone_pfns[ZONE_NORMAL] = max;
free_area_init(max_zone_pfns);
@@ -393,16 +398,9 @@ void __init arm64_memblock_init(void)
early_init_fdt_scan_reserved_mem();
- if (IS_ENABLED(CONFIG_ZONE_DMA32))
- arm64_dma32_phys_limit = max_zone_phys(32);
- else
- arm64_dma32_phys_limit = PHYS_MASK + 1;
-
reserve_elfcorehdr();
high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
-
- dma_contiguous_reserve(arm64_dma32_phys_limit);
}
void __init bootmem_init(void)
@@ -438,6 +436,11 @@ void __init bootmem_init(void)
zone_sizes_init(min, max);
/*
+ * Reserve the CMA area after arm64_dma_phys_limit was initialised.
+ */
+ dma_contiguous_reserve(arm64_dma_phys_limit);
+
+ /*
* request_standard_resources() depends on crashkernel's memory being
* reserved, so do it here.
*/
@@ -519,7 +522,7 @@ static void __init free_unused_memmap(void)
void __init mem_init(void)
{
if (swiotlb_force == SWIOTLB_FORCE ||
- max_pfn > PFN_DOWN(arm64_dma_phys_limit ? : arm64_dma32_phys_limit))
+ max_pfn > PFN_DOWN(arm64_dma_phys_limit))
swiotlb_init(1);
else
swiotlb_force = SWIOTLB_NO_FORCE;
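
A hedged sketch of the limit selection the rewritten zone_sizes_init()/bootmem_init() path ends up with: arm64_dma_phys_limit takes the ZONE_DMA limit when that zone is configured, otherwise the ZONE_DMA32 limit, otherwise PHYS_MASK + 1, and that single value now drives both the CMA reservation and the swiotlb decision. The constants below are placeholders, not the values max_zone_phys() would compute.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Placeholder limits; the kernel derives these from max_zone_phys(). */
#define DMA_LIMIT    0x0000000040000000ULL  /* e.g. 30-bit devices (RPi 4) */
#define DMA32_LIMIT  0x0000000100000000ULL  /* 32-bit addressable memory   */
#define PHYS_MASK_1  0x0001000000000000ULL  /* PHYS_MASK + 1 (48-bit PA)   */

static uint64_t dma_phys_limit(bool have_zone_dma, bool have_zone_dma32)
{
    uint64_t limit = 0;

    if (have_zone_dma)
        limit = DMA_LIMIT;      /* set in the CONFIG_ZONE_DMA branch   */
    if (!limit && have_zone_dma32)
        limit = DMA32_LIMIT;    /* fall back to the ZONE_DMA32 limit   */
    if (!limit)
        limit = PHYS_MASK_1;    /* no DMA zones: no practical limit    */

    return limit;
}

int main(void)
{
    printf("DMA+DMA32:  %#llx\n", (unsigned long long)dma_phys_limit(true, true));
    printf("DMA32 only: %#llx\n", (unsigned long long)dma_phys_limit(false, true));
    printf("neither:    %#llx\n", (unsigned long long)dma_phys_limit(false, false));
    return 0;
}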
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 23c326a..a149273 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -444,6 +444,18 @@
mov x10, #(SYS_GCR_EL1_RRND | SYS_GCR_EL1_EXCL_MASK)
msr_s SYS_GCR_EL1, x10
+ /*
+ * If GCR_EL1.RRND=1 is implemented the same way as RRND=0, then
+ * RGSR_EL1.SEED must be non-zero for IRG to produce
+ * pseudorandom numbers. As RGSR_EL1 is UNKNOWN out of reset, we
+ * must initialize it.
+ */
+ mrs x10, CNTVCT_EL0
+ ands x10, x10, #SYS_RGSR_EL1_SEED_MASK
+ csinc x10, x10, xzr, ne
+ lsl x10, x10, #SYS_RGSR_EL1_SEED_SHIFT
+ msr_s SYS_RGSR_EL1, x10
+
/* clear any pending tag check faults in TFSR*_EL1 */
msr_s SYS_TFSR_EL1, xzr
msr_s SYS_TFSRE0_EL1, xzr
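
The four new instructions seed RGSR_EL1 from the virtual counter and use csinc so the seed can never be zero (IRG would otherwise produce non-random tags when RRND=1 is implemented like RRND=0). A rough C equivalent, with the SEED field width and shift below used purely for illustration:

#include <stdint.h>
#include <stdio.h>

#define SEED_MASK  0xffffULL   /* SYS_RGSR_EL1_SEED_MASK (assumed 16 bits) */
#define SEED_SHIFT 8           /* SYS_RGSR_EL1_SEED_SHIFT (illustrative)   */

static uint64_t rgsr_from_counter(uint64_t cntvct)
{
    uint64_t seed = cntvct & SEED_MASK;  /* ands x10, x10, #SEED_MASK        */
    if (seed == 0)                       /* csinc: if the AND result was zero */
        seed = 1;                        /* ... use 1 instead, never 0        */
    return seed << SEED_SHIFT;           /* lsl + msr_s SYS_RGSR_EL1          */
}

int main(void)
{
    printf("%#llx\n", (unsigned long long)rgsr_from_counter(0x12340000)); /* zero case */
    printf("%#llx\n", (unsigned long long)rgsr_from_counter(0x1234abcd));
    return 0;
}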
diff --git a/arch/csky/Kconfig b/arch/csky/Kconfig
index 268fad5..7bf0a617 100644
--- a/arch/csky/Kconfig
+++ b/arch/csky/Kconfig
@@ -292,7 +292,7 @@
int "Maximum zone order"
default "11"
-config RAM_BASE
+config DRAM_BASE
hex "DRAM start addr (the same with memory-section in dts)"
default 0x0
diff --git a/arch/csky/include/asm/page.h b/arch/csky/include/asm/page.h
index 9b98bf3..1687824 100644
--- a/arch/csky/include/asm/page.h
+++ b/arch/csky/include/asm/page.h
@@ -28,7 +28,7 @@
#define SSEG_SIZE 0x20000000
#define LOWMEM_LIMIT (SSEG_SIZE * 2)
-#define PHYS_OFFSET_OFFSET (CONFIG_RAM_BASE & (SSEG_SIZE - 1))
+#define PHYS_OFFSET_OFFSET (CONFIG_DRAM_BASE & (SSEG_SIZE - 1))
#ifndef __ASSEMBLY__
diff --git a/arch/ia64/configs/generic_defconfig b/arch/ia64/configs/generic_defconfig
index ca0d596..8916a28 100644
--- a/arch/ia64/configs/generic_defconfig
+++ b/arch/ia64/configs/generic_defconfig
@@ -55,8 +55,6 @@
CONFIG_SCSI_FC_ATTRS=y
CONFIG_SCSI_SYM53C8XX_2=y
CONFIG_SCSI_QLOGIC_1280=y
-CONFIG_ATA=y
-CONFIG_ATA_PIIX=y
CONFIG_SATA_VITESSE=y
CONFIG_MD=y
CONFIG_BLK_DEV_MD=m
diff --git a/arch/ia64/include/asm/module.h b/arch/ia64/include/asm/module.h
index 5a29652..7271b9c 100644
--- a/arch/ia64/include/asm/module.h
+++ b/arch/ia64/include/asm/module.h
@@ -14,16 +14,20 @@
struct elf64_shdr; /* forward declaration */
struct mod_arch_specific {
+ /* Used only at module load time. */
struct elf64_shdr *core_plt; /* core PLT section */
struct elf64_shdr *init_plt; /* init PLT section */
struct elf64_shdr *got; /* global offset table */
struct elf64_shdr *opd; /* official procedure descriptors */
struct elf64_shdr *unwind; /* unwind-table section */
unsigned long gp; /* global-pointer for module */
+ unsigned int next_got_entry; /* index of next available got entry */
+ /* Used at module run and cleanup time. */
void *core_unw_table; /* core unwind-table cookie returned by unwinder */
void *init_unw_table; /* init unwind-table cookie returned by unwinder */
- unsigned int next_got_entry; /* index of next available got entry */
+ void *opd_addr; /* symbolize uses .opd to get to actual function */
+ unsigned long opd_size;
};
#define ARCH_SHF_SMALL SHF_IA_64_SHORT
diff --git a/arch/ia64/include/asm/ptrace.h b/arch/ia64/include/asm/ptrace.h
index b3aa460..08179135 100644
--- a/arch/ia64/include/asm/ptrace.h
+++ b/arch/ia64/include/asm/ptrace.h
@@ -54,8 +54,7 @@
static inline unsigned long user_stack_pointer(struct pt_regs *regs)
{
- /* FIXME: should this be bspstore + nr_dirty regs? */
- return regs->ar_bspstore;
+ return regs->r12;
}
static inline int is_syscall_success(struct pt_regs *regs)
@@ -79,11 +78,6 @@ static inline long regs_return_value(struct pt_regs *regs)
unsigned long __ip = instruction_pointer(regs); \
(__ip & ~3UL) + ((__ip & 3UL) << 2); \
})
-/*
- * Why not default? Because user_stack_pointer() on ia64 gives register
- * stack backing store instead...
- */
-#define current_user_stack_pointer() (current_pt_regs()->r12)
/* given a pointer to a task_struct, return the user's pt_regs */
# define task_pt_regs(t) (((struct pt_regs *) ((char *) (t) + IA64_STK_OFFSET)) - 1)
diff --git a/arch/ia64/kernel/efi.c b/arch/ia64/kernel/efi.c
index f932b25..33282f3 100644
--- a/arch/ia64/kernel/efi.c
+++ b/arch/ia64/kernel/efi.c
@@ -413,10 +413,10 @@ efi_get_pal_addr (void)
mask = ~((1 << IA64_GRANULE_SHIFT) - 1);
printk(KERN_INFO "CPU %d: mapping PAL code "
- "[0x%lx-0x%lx) into [0x%lx-0x%lx)\n",
- smp_processor_id(), md->phys_addr,
- md->phys_addr + efi_md_size(md),
- vaddr & mask, (vaddr & mask) + IA64_GRANULE_SIZE);
+ "[0x%llx-0x%llx) into [0x%llx-0x%llx)\n",
+ smp_processor_id(), md->phys_addr,
+ md->phys_addr + efi_md_size(md),
+ vaddr & mask, (vaddr & mask) + IA64_GRANULE_SIZE);
#endif
return __va(md->phys_addr);
}
@@ -558,6 +558,7 @@ efi_init (void)
{
efi_memory_desc_t *md;
void *p;
+ unsigned int i;
for (i = 0, p = efi_map_start; p < efi_map_end;
++i, p += efi_desc_size)
@@ -584,7 +585,7 @@ efi_init (void)
}
printk("mem%02d: %s "
- "range=[0x%016lx-0x%016lx) (%4lu%s)\n",
+ "range=[0x%016llx-0x%016llx) (%4lu%s)\n",
i, efi_md_typeattr_format(buf, sizeof(buf), md),
md->phys_addr,
md->phys_addr + efi_md_size(md), size, unit);
diff --git a/arch/ia64/kernel/err_inject.c b/arch/ia64/kernel/err_inject.c
index 8b5b8e6b..dd5bfed 100644
--- a/arch/ia64/kernel/err_inject.c
+++ b/arch/ia64/kernel/err_inject.c
@@ -59,7 +59,7 @@ show_##name(struct device *dev, struct device_attribute *attr, \
char *buf) \
{ \
u32 cpu=dev->id; \
- return sprintf(buf, "%lx\n", name[cpu]); \
+ return sprintf(buf, "%llx\n", name[cpu]); \
}
#define store(name) \
@@ -86,9 +86,9 @@ store_call_start(struct device *dev, struct device_attribute *attr,
#ifdef ERR_INJ_DEBUG
printk(KERN_DEBUG "pal_mc_err_inject for cpu%d:\n", cpu);
- printk(KERN_DEBUG "err_type_info=%lx,\n", err_type_info[cpu]);
- printk(KERN_DEBUG "err_struct_info=%lx,\n", err_struct_info[cpu]);
- printk(KERN_DEBUG "err_data_buffer=%lx, %lx, %lx.\n",
+ printk(KERN_DEBUG "err_type_info=%llx,\n", err_type_info[cpu]);
+ printk(KERN_DEBUG "err_struct_info=%llx,\n", err_struct_info[cpu]);
+ printk(KERN_DEBUG "err_data_buffer=%llx, %llx, %llx.\n",
err_data_buffer[cpu].data1,
err_data_buffer[cpu].data2,
err_data_buffer[cpu].data3);
@@ -117,8 +117,8 @@ store_call_start(struct device *dev, struct device_attribute *attr,
#ifdef ERR_INJ_DEBUG
printk(KERN_DEBUG "Returns: status=%d,\n", (int)status[cpu]);
- printk(KERN_DEBUG "capabilities=%lx,\n", capabilities[cpu]);
- printk(KERN_DEBUG "resources=%lx\n", resources[cpu]);
+ printk(KERN_DEBUG "capabilities=%llx,\n", capabilities[cpu]);
+ printk(KERN_DEBUG "resources=%llx\n", resources[cpu]);
#endif
return size;
}
@@ -131,7 +131,7 @@ show_virtual_to_phys(struct device *dev, struct device_attribute *attr,
char *buf)
{
unsigned int cpu=dev->id;
- return sprintf(buf, "%lx\n", phys_addr[cpu]);
+ return sprintf(buf, "%llx\n", phys_addr[cpu]);
}
static ssize_t
@@ -145,7 +145,7 @@ store_virtual_to_phys(struct device *dev, struct device_attribute *attr,
ret = get_user_pages_fast(virt_addr, 1, FOLL_WRITE, NULL);
if (ret<=0) {
#ifdef ERR_INJ_DEBUG
- printk("Virtual address %lx is not existing.\n",virt_addr);
+ printk("Virtual address %llx is not existing.\n", virt_addr);
#endif
return -EINVAL;
}
@@ -163,7 +163,7 @@ show_err_data_buffer(struct device *dev,
{
unsigned int cpu=dev->id;
- return sprintf(buf, "%lx, %lx, %lx\n",
+ return sprintf(buf, "%llx, %llx, %llx\n",
err_data_buffer[cpu].data1,
err_data_buffer[cpu].data2,
err_data_buffer[cpu].data3);
@@ -178,13 +178,13 @@ store_err_data_buffer(struct device *dev,
int ret;
#ifdef ERR_INJ_DEBUG
- printk("write err_data_buffer=[%lx,%lx,%lx] on cpu%d\n",
+ printk("write err_data_buffer=[%llx,%llx,%llx] on cpu%d\n",
err_data_buffer[cpu].data1,
err_data_buffer[cpu].data2,
err_data_buffer[cpu].data3,
cpu);
#endif
- ret=sscanf(buf, "%lx, %lx, %lx",
+ ret = sscanf(buf, "%llx, %llx, %llx",
&err_data_buffer[cpu].data1,
&err_data_buffer[cpu].data2,
&err_data_buffer[cpu].data3);
diff --git a/arch/ia64/kernel/mca.c b/arch/ia64/kernel/mca.c
index 2703f77..bd0a51dc 100644
--- a/arch/ia64/kernel/mca.c
+++ b/arch/ia64/kernel/mca.c
@@ -1822,7 +1822,7 @@ ia64_mca_cpu_init(void *cpu_data)
data = mca_bootmem();
first_time = 0;
} else
- data = (void *)__get_free_pages(GFP_KERNEL,
+ data = (void *)__get_free_pages(GFP_ATOMIC,
get_order(sz));
if (!data)
panic("Could not allocate MCA memory for cpu %d\n",
diff --git a/arch/ia64/kernel/module.c b/arch/ia64/kernel/module.c
index 00a496c..2cba53c 100644
--- a/arch/ia64/kernel/module.c
+++ b/arch/ia64/kernel/module.c
@@ -905,9 +905,31 @@ register_unwind_table (struct module *mod)
int
module_finalize (const Elf_Ehdr *hdr, const Elf_Shdr *sechdrs, struct module *mod)
{
+ struct mod_arch_specific *mas = &mod->arch;
+
DEBUGP("%s: init: entry=%p\n", __func__, mod->init);
- if (mod->arch.unwind)
+ if (mas->unwind)
register_unwind_table(mod);
+
+ /*
+ * ".opd" was already relocated to the final destination. Store
+ * its address for use in symbolizer.
+ */
+ mas->opd_addr = (void *)mas->opd->sh_addr;
+ mas->opd_size = mas->opd->sh_size;
+
+ /*
+ * Module relocation was already done at this point. Section
+ * headers are about to be deleted. Wipe out load-time context.
+ */
+ mas->core_plt = NULL;
+ mas->init_plt = NULL;
+ mas->got = NULL;
+ mas->opd = NULL;
+ mas->unwind = NULL;
+ mas->gp = 0;
+ mas->next_got_entry = 0;
+
return 0;
}
@@ -926,10 +948,9 @@ module_arch_cleanup (struct module *mod)
void *dereference_module_function_descriptor(struct module *mod, void *ptr)
{
- Elf64_Shdr *opd = mod->arch.opd;
+ struct mod_arch_specific *mas = &mod->arch;
- if (ptr < (void *)opd->sh_addr ||
- ptr >= (void *)(opd->sh_addr + opd->sh_size))
+ if (ptr < mas->opd_addr || ptr >= mas->opd_addr + mas->opd_size)
return ptr;
return dereference_function_descriptor(ptr);
diff --git a/arch/ia64/mm/discontig.c b/arch/ia64/mm/discontig.c
index dbe829f..4d08134 100644
--- a/arch/ia64/mm/discontig.c
+++ b/arch/ia64/mm/discontig.c
@@ -94,7 +94,7 @@ static int __init build_node_maps(unsigned long start, unsigned long len,
* acpi_boot_init() (which builds the node_to_cpu_mask array) hasn't been
* called yet. Note that node 0 will also count all non-existent cpus.
*/
-static int __meminit early_nr_cpus_node(int node)
+static int early_nr_cpus_node(int node)
{
int cpu, n = 0;
@@ -109,7 +109,7 @@ static int __meminit early_nr_cpus_node(int node)
* compute_pernodesize - compute size of pernode data
* @node: the node id.
*/
-static unsigned long __meminit compute_pernodesize(int node)
+static unsigned long compute_pernodesize(int node)
{
unsigned long pernodesize = 0, cpus;
@@ -366,7 +366,7 @@ static void __init reserve_pernode_space(void)
}
}
-static void __meminit scatter_node_data(void)
+static void scatter_node_data(void)
{
pg_data_t **dst;
int node;
diff --git a/arch/ia64/scripts/unwcheck.py b/arch/ia64/scripts/unwcheck.py
index c55276e..bfd1b67 100644
--- a/arch/ia64/scripts/unwcheck.py
+++ b/arch/ia64/scripts/unwcheck.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
# SPDX-License-Identifier: GPL-2.0
#
# Usage: unwcheck.py FILE
diff --git a/arch/m68k/include/asm/mvme147hw.h b/arch/m68k/include/asm/mvme147hw.h
index 257b291..e28eb1c 100644
--- a/arch/m68k/include/asm/mvme147hw.h
+++ b/arch/m68k/include/asm/mvme147hw.h
@@ -66,6 +66,9 @@ struct pcc_regs {
#define PCC_INT_ENAB 0x08
#define PCC_TIMER_INT_CLR 0x80
+
+#define PCC_TIMER_TIC_EN 0x01
+#define PCC_TIMER_COC_EN 0x02
#define PCC_TIMER_CLR_OVF 0x04
#define PCC_LEVEL_ABORT 0x07
diff --git a/arch/m68k/kernel/sys_m68k.c b/arch/m68k/kernel/sys_m68k.c
index 1c235d8..f55bdcb 100644
--- a/arch/m68k/kernel/sys_m68k.c
+++ b/arch/m68k/kernel/sys_m68k.c
@@ -388,6 +388,8 @@ sys_cacheflush (unsigned long addr, int scope, int cache, unsigned long len)
ret = -EPERM;
if (!capable(CAP_SYS_ADMIN))
goto out;
+
+ mmap_read_lock(current->mm);
} else {
struct vm_area_struct *vma;
diff --git a/arch/m68k/mvme147/config.c b/arch/m68k/mvme147/config.c
index 490700a..aab7880 100644
--- a/arch/m68k/mvme147/config.c
+++ b/arch/m68k/mvme147/config.c
@@ -116,8 +116,10 @@ static irqreturn_t mvme147_timer_int (int irq, void *dev_id)
unsigned long flags;
local_irq_save(flags);
- m147_pcc->t1_int_cntrl = PCC_TIMER_INT_CLR;
- m147_pcc->t1_cntrl = PCC_TIMER_CLR_OVF;
+ m147_pcc->t1_cntrl = PCC_TIMER_CLR_OVF | PCC_TIMER_COC_EN |
+ PCC_TIMER_TIC_EN;
+ m147_pcc->t1_int_cntrl = PCC_INT_ENAB | PCC_TIMER_INT_CLR |
+ PCC_LEVEL_TIMER1;
clk_total += PCC_TIMER_CYCLES;
timer_routine(0, NULL);
local_irq_restore(flags);
@@ -135,10 +137,10 @@ void mvme147_sched_init (irq_handler_t timer_routine)
/* Init the clock with a value */
/* The clock counter increments until 0xFFFF then reloads */
m147_pcc->t1_preload = PCC_TIMER_PRELOAD;
- m147_pcc->t1_cntrl = 0x0; /* clear timer */
- m147_pcc->t1_cntrl = 0x3; /* start timer */
- m147_pcc->t1_int_cntrl = PCC_TIMER_INT_CLR; /* clear pending ints */
- m147_pcc->t1_int_cntrl = PCC_INT_ENAB|PCC_LEVEL_TIMER1;
+ m147_pcc->t1_cntrl = PCC_TIMER_CLR_OVF | PCC_TIMER_COC_EN |
+ PCC_TIMER_TIC_EN;
+ m147_pcc->t1_int_cntrl = PCC_INT_ENAB | PCC_TIMER_INT_CLR |
+ PCC_LEVEL_TIMER1;
clocksource_register_hz(&mvme147_clk, PCC_TIMER_CLOCK_FREQ);
}
diff --git a/arch/m68k/mvme16x/config.c b/arch/m68k/mvme16x/config.c
index 5b86d10..d43d128 100644
--- a/arch/m68k/mvme16x/config.c
+++ b/arch/m68k/mvme16x/config.c
@@ -367,6 +367,7 @@ static u32 clk_total;
#define PCCTOVR1_COC_EN 0x02
#define PCCTOVR1_OVR_CLR 0x04
+#define PCCTIC1_INT_LEVEL 6
#define PCCTIC1_INT_CLR 0x08
#define PCCTIC1_INT_EN 0x10
@@ -376,8 +377,8 @@ static irqreturn_t mvme16x_timer_int (int irq, void *dev_id)
unsigned long flags;
local_irq_save(flags);
- out_8(PCCTIC1, in_8(PCCTIC1) | PCCTIC1_INT_CLR);
- out_8(PCCTOVR1, PCCTOVR1_OVR_CLR);
+ out_8(PCCTOVR1, PCCTOVR1_OVR_CLR | PCCTOVR1_TIC_EN | PCCTOVR1_COC_EN);
+ out_8(PCCTIC1, PCCTIC1_INT_EN | PCCTIC1_INT_CLR | PCCTIC1_INT_LEVEL);
clk_total += PCC_TIMER_CYCLES;
timer_routine(0, NULL);
local_irq_restore(flags);
@@ -391,14 +392,15 @@ void mvme16x_sched_init (irq_handler_t timer_routine)
int irq;
/* Using PCCchip2 or MC2 chip tick timer 1 */
- out_be32(PCCTCNT1, 0);
- out_be32(PCCTCMP1, PCC_TIMER_CYCLES);
- out_8(PCCTOVR1, in_8(PCCTOVR1) | PCCTOVR1_TIC_EN | PCCTOVR1_COC_EN);
- out_8(PCCTIC1, PCCTIC1_INT_EN | 6);
if (request_irq(MVME16x_IRQ_TIMER, mvme16x_timer_int, IRQF_TIMER, "timer",
timer_routine))
panic ("Couldn't register timer int");
+ out_be32(PCCTCNT1, 0);
+ out_be32(PCCTCMP1, PCC_TIMER_CYCLES);
+ out_8(PCCTOVR1, PCCTOVR1_OVR_CLR | PCCTOVR1_TIC_EN | PCCTOVR1_COC_EN);
+ out_8(PCCTIC1, PCCTIC1_INT_EN | PCCTIC1_INT_CLR | PCCTIC1_INT_LEVEL);
+
clocksource_register_hz(&mvme16x_clk, PCC_TIMER_CLOCK_FREQ);
if (brdno == 0x0162 || brdno == 0x172)
diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index 2000bb2..1917ccd 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -6,6 +6,7 @@
select ARCH_BINFMT_ELF_STATE if MIPS_FP_SUPPORT
select ARCH_HAS_FORTIFY_SOURCE
select ARCH_HAS_KCOV
+ select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE if !EVA
select ARCH_HAS_PTE_SPECIAL if !(32BIT && CPU_HAS_RIXI)
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_HAS_UBSAN_SANITIZE_ALL
diff --git a/arch/mips/alchemy/board-xxs1500.c b/arch/mips/alchemy/board-xxs1500.c
index b184baa..f175bce 100644
--- a/arch/mips/alchemy/board-xxs1500.c
+++ b/arch/mips/alchemy/board-xxs1500.c
@@ -18,6 +18,7 @@
#include <asm/reboot.h>
#include <asm/setup.h>
#include <asm/mach-au1x00/au1000.h>
+#include <asm/mach-au1x00/gpio-au1000.h>
#include <prom.h>
const char *get_system_type(void)
diff --git a/arch/mips/boot/dts/brcm/bcm3368.dtsi b/arch/mips/boot/dts/brcm/bcm3368.dtsi
index 69cbef4723..d4b2b43 100644
--- a/arch/mips/boot/dts/brcm/bcm3368.dtsi
+++ b/arch/mips/boot/dts/brcm/bcm3368.dtsi
@@ -59,7 +59,7 @@ clkctl: clock-controller@fff8c004 {
periph_cntl: syscon@fff8c008 {
compatible = "syscon";
- reg = <0xfff8c000 0x4>;
+ reg = <0xfff8c008 0x4>;
native-endian;
};
diff --git a/arch/mips/boot/dts/brcm/bcm63268.dtsi b/arch/mips/boot/dts/brcm/bcm63268.dtsi
index 5acb49b6..365fa75 100644
--- a/arch/mips/boot/dts/brcm/bcm63268.dtsi
+++ b/arch/mips/boot/dts/brcm/bcm63268.dtsi
@@ -59,7 +59,7 @@ clkctl: clock-controller@10000004 {
periph_cntl: syscon@10000008 {
compatible = "syscon";
- reg = <0x10000000 0xc>;
+ reg = <0x10000008 0x4>;
native-endian;
};
diff --git a/arch/mips/boot/dts/brcm/bcm6358.dtsi b/arch/mips/boot/dts/brcm/bcm6358.dtsi
index f21176c..89a3107 100644
--- a/arch/mips/boot/dts/brcm/bcm6358.dtsi
+++ b/arch/mips/boot/dts/brcm/bcm6358.dtsi
@@ -59,7 +59,7 @@ clkctl: clock-controller@fffe0004 {
periph_cntl: syscon@fffe0008 {
compatible = "syscon";
- reg = <0xfffe0000 0x4>;
+ reg = <0xfffe0008 0x4>;
native-endian;
};
diff --git a/arch/mips/boot/dts/brcm/bcm6362.dtsi b/arch/mips/boot/dts/brcm/bcm6362.dtsi
index c98f911..0b2adefd 100644
--- a/arch/mips/boot/dts/brcm/bcm6362.dtsi
+++ b/arch/mips/boot/dts/brcm/bcm6362.dtsi
@@ -59,7 +59,7 @@ clkctl: clock-controller@10000004 {
periph_cntl: syscon@10000008 {
compatible = "syscon";
- reg = <0x10000000 0xc>;
+ reg = <0x10000008 0x4>;
native-endian;
};
diff --git a/arch/mips/boot/dts/brcm/bcm6368.dtsi b/arch/mips/boot/dts/brcm/bcm6368.dtsi
index 449c167..b84a3bf 100644
--- a/arch/mips/boot/dts/brcm/bcm6368.dtsi
+++ b/arch/mips/boot/dts/brcm/bcm6368.dtsi
@@ -59,7 +59,7 @@ clkctl: clock-controller@10000004 {
periph_cntl: syscon@100000008 {
compatible = "syscon";
- reg = <0x10000000 0xc>;
+ reg = <0x10000008 0x4>;
native-endian;
};
diff --git a/arch/mips/crypto/poly1305-glue.c b/arch/mips/crypto/poly1305-glue.c
index fc881b4..bc6110f 100644
--- a/arch/mips/crypto/poly1305-glue.c
+++ b/arch/mips/crypto/poly1305-glue.c
@@ -17,7 +17,7 @@ asmlinkage void poly1305_init_mips(void *state, const u8 *key);
asmlinkage void poly1305_blocks_mips(void *state, const u8 *src, u32 len, u32 hibit);
asmlinkage void poly1305_emit_mips(void *state, u8 *digest, const u32 *nonce);
-void poly1305_init_arch(struct poly1305_desc_ctx *dctx, const u8 *key)
+void poly1305_init_arch(struct poly1305_desc_ctx *dctx, const u8 key[POLY1305_KEY_SIZE])
{
poly1305_init_mips(&dctx->h, key);
dctx->s[0] = get_unaligned_le32(key + 16);
diff --git a/arch/mips/include/asm/asmmacro.h b/arch/mips/include/asm/asmmacro.h
index 86f2323..ca83ada 100644
--- a/arch/mips/include/asm/asmmacro.h
+++ b/arch/mips/include/asm/asmmacro.h
@@ -44,8 +44,7 @@
.endm
#endif
-#if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR5) || \
- defined(CONFIG_CPU_MIPSR6)
+#ifdef CONFIG_CPU_HAS_DIEI
.macro local_irq_enable reg=t0
ei
irq_enable_hazard
diff --git a/arch/mips/include/asm/div64.h b/arch/mips/include/asm/div64.h
index dc5ea57..ceece76 100644
--- a/arch/mips/include/asm/div64.h
+++ b/arch/mips/include/asm/div64.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2000, 2004 Maciej W. Rozycki
+ * Copyright (C) 2000, 2004, 2021 Maciej W. Rozycki
* Copyright (C) 2003, 07 Ralf Baechle (ralf@linux-mips.org)
*
* This file is subject to the terms and conditions of the GNU General Public
@@ -9,25 +9,18 @@
#ifndef __ASM_DIV64_H
#define __ASM_DIV64_H
-#include <asm-generic/div64.h>
+#include <asm/bitsperlong.h>
-#if BITS_PER_LONG == 64
-
-#include <linux/types.h>
+#if BITS_PER_LONG == 32
/*
* No traps on overflows for any of these...
*/
-#define __div64_32(n, base) \
-({ \
+#define do_div64_32(res, high, low, base) ({ \
unsigned long __cf, __tmp, __tmp2, __i; \
unsigned long __quot32, __mod32; \
- unsigned long __high, __low; \
- unsigned long long __n; \
\
- __high = *__n >> 32; \
- __low = __n; \
__asm__( \
" .set push \n" \
" .set noat \n" \
@@ -51,18 +44,48 @@
" subu %0, %0, %z6 \n" \
" addiu %2, %2, 1 \n" \
"3: \n" \
- " bnez %4, 0b\n\t" \
- " srl %5, %1, 0x1f\n\t" \
+ " bnez %4, 0b \n" \
+ " srl %5, %1, 0x1f \n" \
" .set pop" \
: "=&r" (__mod32), "=&r" (__tmp), \
"=&r" (__quot32), "=&r" (__cf), \
"=&r" (__i), "=&r" (__tmp2) \
- : "Jr" (base), "0" (__high), "1" (__low)); \
+ : "Jr" (base), "0" (high), "1" (low)); \
\
- (__n) = __quot32; \
+ (res) = __quot32; \
__mod32; \
})
-#endif /* BITS_PER_LONG == 64 */
+#define __div64_32(n, base) ({ \
+ unsigned long __upper, __low, __high, __radix; \
+ unsigned long long __quot; \
+ unsigned long long __div; \
+ unsigned long __mod; \
+ \
+ __div = (*n); \
+ __radix = (base); \
+ \
+ __high = __div >> 32; \
+ __low = __div; \
+ \
+ if (__high < __radix) { \
+ __upper = __high; \
+ __high = 0; \
+ } else { \
+ __upper = __high % __radix; \
+ __high /= __radix; \
+ } \
+ \
+ __mod = do_div64_32(__low, __upper, __low, __radix); \
+ \
+ __quot = __high; \
+ __quot = __quot << 32 | __low; \
+ (*n) = __quot; \
+ __mod; \
+})
+
+#endif /* BITS_PER_LONG == 32 */
+
+#include <asm-generic/div64.h>
#endif /* __ASM_DIV64_H */
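
The reworked __div64_32() above splits the 64-bit dividend so the assembler helper is only ever handed an upper word strictly smaller than the divisor. A standalone C sketch of the same split, with ordinary / and % standing in for the MIPS assembly:

#include <stdint.h>
#include <stdio.h>

/* 64-by-32 division done as two 32-bit-friendly steps, mirroring the
 * structure of the new __div64_32(): divide the high word first, then
 * feed its remainder in as the upper half of the second division. */
static uint32_t div64_32(uint64_t *n, uint32_t base)
{
    uint32_t high = *n >> 32;
    uint32_t low = (uint32_t)*n;
    uint32_t upper, mod;
    uint64_t rest;

    if (high < base) {
        upper = high;        /* the whole high word becomes the remainder */
        high = 0;
    } else {
        upper = high % base;
        high = high / base;
    }

    /* The kernel macro hands (upper, low) to the asm helper; here the
     * same step is just a 64/32 division of upper:low. */
    rest = ((uint64_t)upper << 32) | low;
    mod = (uint32_t)(rest % base);
    low = (uint32_t)(rest / base);   /* fits in 32 bits because upper < base */

    *n = ((uint64_t)high << 32) | low;
    return mod;
}

int main(void)
{
    uint64_t n = 0x123456789abcdef0ULL;
    uint32_t rem = div64_32(&n, 1000000007u);
    printf("quotient %#llx remainder %u\n", (unsigned long long)n, rem);
    return 0;
}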
diff --git a/arch/mips/include/asm/vdso/gettimeofday.h b/arch/mips/include/asm/vdso/gettimeofday.h
index 2203e2d..44a45f3 100644
--- a/arch/mips/include/asm/vdso/gettimeofday.h
+++ b/arch/mips/include/asm/vdso/gettimeofday.h
@@ -20,6 +20,12 @@
#define VDSO_HAS_CLOCK_GETRES 1
+#if MIPS_ISA_REV < 6
+#define VDSO_SYSCALL_CLOBBERS "hi", "lo",
+#else
+#define VDSO_SYSCALL_CLOBBERS
+#endif
+
static __always_inline long gettimeofday_fallback(
struct __kernel_old_timeval *_tv,
struct timezone *_tz)
@@ -35,7 +41,9 @@ static __always_inline long gettimeofday_fallback(
: "=r" (ret), "=r" (error)
: "r" (tv), "r" (tz), "r" (nr)
: "$1", "$3", "$8", "$9", "$10", "$11", "$12", "$13",
- "$14", "$15", "$24", "$25", "hi", "lo", "memory");
+ "$14", "$15", "$24", "$25",
+ VDSO_SYSCALL_CLOBBERS
+ "memory");
return error ? -ret : ret;
}
@@ -59,7 +67,9 @@ static __always_inline long clock_gettime_fallback(
: "=r" (ret), "=r" (error)
: "r" (clkid), "r" (ts), "r" (nr)
: "$1", "$3", "$8", "$9", "$10", "$11", "$12", "$13",
- "$14", "$15", "$24", "$25", "hi", "lo", "memory");
+ "$14", "$15", "$24", "$25",
+ VDSO_SYSCALL_CLOBBERS
+ "memory");
return error ? -ret : ret;
}
@@ -83,7 +93,9 @@ static __always_inline int clock_getres_fallback(
: "=r" (ret), "=r" (error)
: "r" (clkid), "r" (ts), "r" (nr)
: "$1", "$3", "$8", "$9", "$10", "$11", "$12", "$13",
- "$14", "$15", "$24", "$25", "hi", "lo", "memory");
+ "$14", "$15", "$24", "$25",
+ VDSO_SYSCALL_CLOBBERS
+ "memory");
return error ? -ret : ret;
}
@@ -105,7 +117,9 @@ static __always_inline long clock_gettime32_fallback(
: "=r" (ret), "=r" (error)
: "r" (clkid), "r" (ts), "r" (nr)
: "$1", "$3", "$8", "$9", "$10", "$11", "$12", "$13",
- "$14", "$15", "$24", "$25", "hi", "lo", "memory");
+ "$14", "$15", "$24", "$25",
+ VDSO_SYSCALL_CLOBBERS
+ "memory");
return error ? -ret : ret;
}
@@ -125,7 +139,9 @@ static __always_inline int clock_getres32_fallback(
: "=r" (ret), "=r" (error)
: "r" (clkid), "r" (ts), "r" (nr)
: "$1", "$3", "$8", "$9", "$10", "$11", "$12", "$13",
- "$14", "$15", "$24", "$25", "hi", "lo", "memory");
+ "$14", "$15", "$24", "$25",
+ VDSO_SYSCALL_CLOBBERS
+ "memory");
return error ? -ret : ret;
}
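
MIPS R6 removed the architectural HI/LO registers, so naming them in an asm clobber list breaks R6 builds; VDSO_SYSCALL_CLOBBERS lets one clobber list serve both ISA revisions. The splicing trick is that the macro expands either to extra comma-terminated string literals or to nothing. A portable illustration of that expansion (OLD_ISA is a made-up stand-in for MIPS_ISA_REV < 6):

#include <stdio.h>

/* The same splicing trick as VDSO_SYSCALL_CLOBBERS: the macro either
 * contributes extra comma-terminated string literals to a list, or
 * nothing at all, so the surrounding list stays syntactically valid. */
#if defined(OLD_ISA)
#define EXTRA_CLOBBERS "hi", "lo",
#else
#define EXTRA_CLOBBERS
#endif

static const char *clobbers[] = {
    "$1", "$3",
    EXTRA_CLOBBERS
    "memory",
};

int main(void)
{
    for (unsigned i = 0; i < sizeof(clobbers) / sizeof(clobbers[0]); i++)
        printf("%s\n", clobbers[i]);
    return 0;
}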
diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c
index 31cb919..e6ae2bc 100644
--- a/arch/mips/kernel/cpu-probe.c
+++ b/arch/mips/kernel/cpu-probe.c
@@ -1739,7 +1739,6 @@ static inline void cpu_probe_loongson(struct cpuinfo_mips *c, unsigned int cpu)
set_isa(c, MIPS_CPU_ISA_M64R2);
break;
}
- c->writecombine = _CACHE_UNCACHED_ACCELERATED;
c->ases |= (MIPS_ASE_LOONGSON_MMI | MIPS_ASE_LOONGSON_EXT |
MIPS_ASE_LOONGSON_EXT2);
break;
@@ -1769,7 +1768,6 @@ static inline void cpu_probe_loongson(struct cpuinfo_mips *c, unsigned int cpu)
* register, we correct it here.
*/
c->options |= MIPS_CPU_FTLB | MIPS_CPU_TLBINV | MIPS_CPU_LDPTE;
- c->writecombine = _CACHE_UNCACHED_ACCELERATED;
c->ases |= (MIPS_ASE_LOONGSON_MMI | MIPS_ASE_LOONGSON_CAM |
MIPS_ASE_LOONGSON_EXT | MIPS_ASE_LOONGSON_EXT2);
c->ases &= ~MIPS_ASE_VZ; /* VZ of Loongson-3A2000/3000 is incomplete */
@@ -1780,7 +1778,6 @@ static inline void cpu_probe_loongson(struct cpuinfo_mips *c, unsigned int cpu)
set_elf_platform(cpu, "loongson3a");
set_isa(c, MIPS_CPU_ISA_M64R2);
decode_cpucfg(c);
- c->writecombine = _CACHE_UNCACHED_ACCELERATED;
break;
default:
panic("Unknown Loongson Processor ID!");
diff --git a/arch/mips/loongson64/init.c b/arch/mips/loongson64/init.c
index ed75f79..052cce6 100644
--- a/arch/mips/loongson64/init.c
+++ b/arch/mips/loongson64/init.c
@@ -82,7 +82,7 @@ static int __init add_legacy_isa_io(struct fwnode_handle *fwnode, resource_size_
return -ENOMEM;
range->fwnode = fwnode;
- range->size = size;
+ range->size = size = round_up(size, PAGE_SIZE);
range->hw_start = hw_start;
range->flags = LOGIC_PIO_CPU_MMIO;
diff --git a/arch/mips/pci/pci-legacy.c b/arch/mips/pci/pci-legacy.c
index 39052de..3a90919 100644
--- a/arch/mips/pci/pci-legacy.c
+++ b/arch/mips/pci/pci-legacy.c
@@ -166,8 +166,13 @@ void pci_load_of_ranges(struct pci_controller *hose, struct device_node *node)
res = hose->mem_resource;
break;
}
- if (res != NULL)
- of_pci_range_to_resource(&range, node, res);
+ if (res != NULL) {
+ res->name = node->full_name;
+ res->flags = range.flags;
+ res->start = range.cpu_addr;
+ res->end = range.cpu_addr + range.size - 1;
+ res->parent = res->child = res->sibling = NULL;
+ }
}
}
diff --git a/arch/mips/pci/pci-mt7620.c b/arch/mips/pci/pci-mt7620.c
index d360616..e032932 100644
--- a/arch/mips/pci/pci-mt7620.c
+++ b/arch/mips/pci/pci-mt7620.c
@@ -30,6 +30,7 @@
#define RALINK_GPIOMODE 0x60
#define PPLL_CFG1 0x9c
+#define PPLL_LD BIT(23)
#define PPLL_DRV 0xa0
#define PDRV_SW_SET BIT(31)
@@ -239,8 +240,8 @@ static int mt7620_pci_hw_init(struct platform_device *pdev)
rt_sysc_m32(0, RALINK_PCIE0_CLK_EN, RALINK_CLKCFG1);
mdelay(100);
- if (!(rt_sysc_r32(PPLL_CFG1) & PDRV_SW_SET)) {
- dev_err(&pdev->dev, "MT7620 PPLL unlock\n");
+ if (!(rt_sysc_r32(PPLL_CFG1) & PPLL_LD)) {
+ dev_err(&pdev->dev, "pcie PLL not locked, aborting init\n");
reset_control_assert(rstpcie0);
rt_sysc_m32(RALINK_PCIE0_CLK_EN, 0, RALINK_CLKCFG1);
return -1;
diff --git a/arch/mips/pci/pci-rt2880.c b/arch/mips/pci/pci-rt2880.c
index e1f12e39..f1538d2 100644
--- a/arch/mips/pci/pci-rt2880.c
+++ b/arch/mips/pci/pci-rt2880.c
@@ -180,7 +180,6 @@ static inline void rt2880_pci_write_u32(unsigned long reg, u32 val)
int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
- u16 cmd;
int irq = -1;
if (dev->bus->number != 0)
@@ -188,8 +187,6 @@ int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
switch (PCI_SLOT(dev->devfn)) {
case 0x00:
- rt2880_pci_write_u32(PCI_BASE_ADDRESS_0, 0x08000000);
- (void) rt2880_pci_read_u32(PCI_BASE_ADDRESS_0);
break;
case 0x11:
irq = RT288X_CPU_IRQ_PCI;
@@ -201,16 +198,6 @@ int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
break;
}
- pci_write_config_byte((struct pci_dev *) dev,
- PCI_CACHE_LINE_SIZE, 0x14);
- pci_write_config_byte((struct pci_dev *) dev, PCI_LATENCY_TIMER, 0xFF);
- pci_read_config_word((struct pci_dev *) dev, PCI_COMMAND, &cmd);
- cmd |= PCI_COMMAND_MASTER | PCI_COMMAND_IO | PCI_COMMAND_MEMORY |
- PCI_COMMAND_INVALIDATE | PCI_COMMAND_FAST_BACK |
- PCI_COMMAND_SERR | PCI_COMMAND_WAIT | PCI_COMMAND_PARITY;
- pci_write_config_word((struct pci_dev *) dev, PCI_COMMAND, cmd);
- pci_write_config_byte((struct pci_dev *) dev, PCI_INTERRUPT_LINE,
- dev->irq);
return irq;
}
@@ -251,6 +238,30 @@ static int rt288x_pci_probe(struct platform_device *pdev)
int pcibios_plat_dev_init(struct pci_dev *dev)
{
+ static bool slot0_init;
+
+ /*
+ * Nobody seems to initialize slot 0, but this platform requires it, so
+ * do it once when some other slot is being enabled. The PCI subsystem
+ * should configure other slots properly, so no need to do anything
+ * special for those.
+ */
+ if (!slot0_init && dev->bus->number == 0) {
+ u16 cmd;
+ u32 bar0;
+
+ slot0_init = true;
+
+ pci_bus_write_config_dword(dev->bus, 0, PCI_BASE_ADDRESS_0,
+ 0x08000000);
+ pci_bus_read_config_dword(dev->bus, 0, PCI_BASE_ADDRESS_0,
+ &bar0);
+
+ pci_bus_read_config_word(dev->bus, 0, PCI_COMMAND, &cmd);
+ cmd |= PCI_COMMAND_MASTER | PCI_COMMAND_IO | PCI_COMMAND_MEMORY;
+ pci_bus_write_config_word(dev->bus, 0, PCI_COMMAND, cmd);
+ }
+
return 0;
}
diff --git a/arch/mips/ralink/of.c b/arch/mips/ralink/of.c
index cbae9d2..a971f1a 100644
--- a/arch/mips/ralink/of.c
+++ b/arch/mips/ralink/of.c
@@ -8,6 +8,7 @@
#include <linux/io.h>
#include <linux/clk.h>
+#include <linux/export.h>
#include <linux/init.h>
#include <linux/sizes.h>
#include <linux/of_fdt.h>
@@ -25,6 +26,7 @@
__iomem void *rt_sysc_membase;
__iomem void *rt_memc_membase;
+EXPORT_SYMBOL_GPL(rt_sysc_membase);
__iomem void *plat_of_remap_node(const char *node)
{
diff --git a/arch/nds32/mm/cacheflush.c b/arch/nds32/mm/cacheflush.c
index 6eb98a7..ad5344e 100644
--- a/arch/nds32/mm/cacheflush.c
+++ b/arch/nds32/mm/cacheflush.c
@@ -238,7 +238,7 @@ void flush_dcache_page(struct page *page)
{
struct address_space *mapping;
- mapping = page_mapping(page);
+ mapping = page_mapping_file(page);
if (mapping && !mapping_mapped(mapping))
set_bit(PG_dcache_dirty, &page->flags);
else {
diff --git a/arch/openrisc/include/asm/barrier.h b/arch/openrisc/include/asm/barrier.h
new file mode 100644
index 0000000..7538294
--- /dev/null
+++ b/arch/openrisc/include/asm/barrier.h
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_BARRIER_H
+#define __ASM_BARRIER_H
+
+#define mb() asm volatile ("l.msync" ::: "memory")
+
+#include <asm-generic/barrier.h>
+
+#endif /* __ASM_BARRIER_H */
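
Only mb() needs an explicit l.msync here because asm-generic/barrier.h fills in the remaining barriers from whatever the architecture defines. A shape sketch of that fallback (simplified; the real generic header covers many more macros):

/* Simplified shape of the asm-generic fallback: the arch header defines
 * mb() and the generic header derives the rest from it. */
#define mb()  asm volatile ("l.msync" ::: "memory")   /* from the new header */

#ifndef rmb
#define rmb() mb()          /* read barrier defaults to the full barrier  */
#endif
#ifndef wmb
#define wmb() mb()          /* write barrier defaults to the full barrier */
#endif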
diff --git a/arch/openrisc/kernel/setup.c b/arch/openrisc/kernel/setup.c
index 2416a9f..c6f9e7b 100644
--- a/arch/openrisc/kernel/setup.c
+++ b/arch/openrisc/kernel/setup.c
@@ -278,6 +278,8 @@ void calibrate_delay(void)
pr_cont("%lu.%02lu BogoMIPS (lpj=%lu)\n",
loops_per_jiffy / (500000 / HZ),
(loops_per_jiffy / (5000 / HZ)) % 100, loops_per_jiffy);
+
+ of_node_put(cpu);
}
void __init setup_arch(char **cmdline_p)
diff --git a/arch/openrisc/mm/init.c b/arch/openrisc/mm/init.c
index 8348fea..5e88c35 100644
--- a/arch/openrisc/mm/init.c
+++ b/arch/openrisc/mm/init.c
@@ -76,7 +76,6 @@ static void __init map_ram(void)
/* These mark extents of read-only kernel pages...
* ...from vmlinux.lds.S
*/
- struct memblock_region *region;
v = PAGE_OFFSET;
@@ -122,7 +121,7 @@ static void __init map_ram(void)
}
printk(KERN_INFO "%s: Memory: 0x%x-0x%x\n", __func__,
- region->base, region->base + region->size);
+ start, end);
}
}
diff --git a/arch/parisc/include/asm/cmpxchg.h b/arch/parisc/include/asm/cmpxchg.h
index cf5ee9b..84ee232 100644
--- a/arch/parisc/include/asm/cmpxchg.h
+++ b/arch/parisc/include/asm/cmpxchg.h
@@ -72,7 +72,7 @@ __cmpxchg(volatile void *ptr, unsigned long old, unsigned long new_, int size)
#endif
case 4: return __cmpxchg_u32((unsigned int *)ptr,
(unsigned int)old, (unsigned int)new_);
- case 1: return __cmpxchg_u8((u8 *)ptr, (u8)old, (u8)new_);
+ case 1: return __cmpxchg_u8((u8 *)ptr, old & 0xff, new_ & 0xff);
}
__cmpxchg_called_with_bad_pointer();
return old;
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 31ed808..5afa0eb 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -222,7 +222,7 @@
select HAVE_LIVEPATCH if HAVE_DYNAMIC_FTRACE_WITH_REGS
select HAVE_MOD_ARCH_SPECIFIC
select HAVE_NMI if PERF_EVENTS || (PPC64 && PPC_BOOK3S)
- select HAVE_HARDLOCKUP_DETECTOR_ARCH if (PPC64 && PPC_BOOK3S)
+ select HAVE_HARDLOCKUP_DETECTOR_ARCH if PPC64 && PPC_BOOK3S && SMP
select HAVE_OPROFILE
select HAVE_OPTPROBES if PPC64
select HAVE_PERF_EVENTS
diff --git a/arch/powerpc/Kconfig.debug b/arch/powerpc/Kconfig.debug
index b88900f..52abca8 100644
--- a/arch/powerpc/Kconfig.debug
+++ b/arch/powerpc/Kconfig.debug
@@ -352,6 +352,7 @@
config FAIL_IOMMU
bool "Fault-injection capability for IOMMU"
depends on FAULT_INJECTION
+ depends on PCI || IBMVIO
help
Provide fault-injection capability for IOMMU. Each device can
be selectively enabled via the fail_iommu property.
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index cd3feea..4a3dca0 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -7,6 +7,7 @@
#ifndef __ASSEMBLY__
#include <linux/mmdebug.h>
#include <linux/bug.h>
+#include <linux/sizes.h>
#endif
/*
@@ -323,7 +324,8 @@ extern unsigned long pci_io_base;
#define PHB_IO_END (KERN_IO_START + FULL_IO_SIZE)
#define IOREMAP_BASE (PHB_IO_END)
#define IOREMAP_START (ioremap_bot)
-#define IOREMAP_END (KERN_IO_END)
+#define IOREMAP_END (KERN_IO_END - FIXADDR_SIZE)
+#define FIXADDR_SIZE SZ_32M
/* Advertise special mapping type for AGP */
#define HAVE_PAGE_AGP
diff --git a/arch/powerpc/include/asm/book3s/64/radix.h b/arch/powerpc/include/asm/book3s/64/radix.h
index c7813dc..59cab55 100644
--- a/arch/powerpc/include/asm/book3s/64/radix.h
+++ b/arch/powerpc/include/asm/book3s/64/radix.h
@@ -222,8 +222,10 @@ static inline void radix__set_pte_at(struct mm_struct *mm, unsigned long addr,
* from ptesync, it should probably go into update_mmu_cache, rather
* than set_pte_at (which is used to set ptes unrelated to faults).
*
- * Spurious faults to vmalloc region are not tolerated, so there is
- * a ptesync in flush_cache_vmap.
+ * Spurious faults from the kernel memory are not tolerated, so there
+ * is a ptesync in flush_cache_vmap, and __map_kernel_page() follows
+ * the pte update sequence from ISA Book III 6.10 Translation Table
+ * Update Synchronization Requirements.
*/
}
diff --git a/arch/powerpc/include/asm/fixmap.h b/arch/powerpc/include/asm/fixmap.h
index 6bfc879..591b2f4 100644
--- a/arch/powerpc/include/asm/fixmap.h
+++ b/arch/powerpc/include/asm/fixmap.h
@@ -23,12 +23,17 @@
#include <asm/kmap_types.h>
#endif
+#ifdef CONFIG_PPC64
+#define FIXADDR_TOP (IOREMAP_END + FIXADDR_SIZE)
+#else
+#define FIXADDR_SIZE 0
#ifdef CONFIG_KASAN
#include <asm/kasan.h>
#define FIXADDR_TOP (KASAN_SHADOW_START - PAGE_SIZE)
#else
#define FIXADDR_TOP ((unsigned long)(-PAGE_SIZE))
#endif
+#endif
/*
* Here we define all the compile-time 'special' virtual
@@ -50,6 +55,7 @@
*/
enum fixed_addresses {
FIX_HOLE,
+#ifdef CONFIG_PPC32
/* reserve the top 128K for early debugging purposes */
FIX_EARLY_DEBUG_TOP = FIX_HOLE,
FIX_EARLY_DEBUG_BASE = FIX_EARLY_DEBUG_TOP+(ALIGN(SZ_128K, PAGE_SIZE)/PAGE_SIZE)-1,
@@ -72,6 +78,7 @@ enum fixed_addresses {
FIX_IMMR_SIZE,
#endif
/* FIX_PCIE_MCFG, */
+#endif /* CONFIG_PPC32 */
__end_of_permanent_fixed_addresses,
#define NR_FIX_BTMAPS (SZ_256K / PAGE_SIZE)
@@ -98,6 +105,8 @@ enum fixed_addresses {
static inline void __set_fixmap(enum fixed_addresses idx,
phys_addr_t phys, pgprot_t flags)
{
+ BUILD_BUG_ON(IS_ENABLED(CONFIG_PPC64) && __FIXADDR_SIZE > FIXADDR_SIZE);
+
if (__builtin_constant_p(idx))
BUILD_BUG_ON(idx >= __end_of_fixed_addresses);
else if (WARN_ON(idx >= __end_of_fixed_addresses))
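
With these definitions, the 64-bit fixmap occupies a fixed 32MB window carved off the top of the ioremap range: IOREMAP_END shrinks by FIXADDR_SIZE and FIXADDR_TOP lands exactly where the old IOREMAP_END used to be. A quick arithmetic sketch (the KERN_IO_END value below is illustrative, not the real constant):

#include <stdint.h>
#include <stdio.h>

#define SZ_32M        0x02000000ULL
#define FIXADDR_SIZE  SZ_32M

int main(void)
{
    uint64_t kern_io_end = 0xc00a000000000000ULL;      /* illustrative only   */

    uint64_t ioremap_end = kern_io_end - FIXADDR_SIZE; /* book3s/64 pgtable.h */
    uint64_t fixaddr_top = ioremap_end + FIXADDR_SIZE; /* fixmap.h, PPC64     */

    printf("IOREMAP_END = %#llx\n", (unsigned long long)ioremap_end);
    printf("FIXADDR_TOP = %#llx\n", (unsigned long long)fixaddr_top);
    printf("fixmap window: [%#llx, %#llx)\n",
           (unsigned long long)(fixaddr_top - FIXADDR_SIZE),
           (unsigned long long)fixaddr_top);
    return 0;
}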
diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
index c1fbccb..3e8e19f 100644
--- a/arch/powerpc/include/asm/hvcall.h
+++ b/arch/powerpc/include/asm/hvcall.h
@@ -437,6 +437,9 @@
*/
long plpar_hcall_norets(unsigned long opcode, ...);
+/* Variant which does not do hcall tracing */
+long plpar_hcall_norets_notrace(unsigned long opcode, ...);
+
/**
* plpar_hcall: - Make a pseries hypervisor call
* @opcode: The hypervisor call to make.
diff --git a/arch/powerpc/include/asm/nohash/64/pgtable.h b/arch/powerpc/include/asm/nohash/64/pgtable.h
index 6cb8aa3..57cd389 100644
--- a/arch/powerpc/include/asm/nohash/64/pgtable.h
+++ b/arch/powerpc/include/asm/nohash/64/pgtable.h
@@ -6,6 +6,8 @@
* the ppc64 non-hashed page table.
*/
+#include <linux/sizes.h>
+
#include <asm/nohash/64/pgtable-4k.h>
#include <asm/barrier.h>
#include <asm/asm-const.h>
@@ -54,7 +56,8 @@
#define PHB_IO_END (KERN_IO_START + FULL_IO_SIZE)
#define IOREMAP_BASE (PHB_IO_END)
#define IOREMAP_START (ioremap_bot)
-#define IOREMAP_END (KERN_VIRT_START + KERN_VIRT_SIZE)
+#define IOREMAP_END (KERN_VIRT_START + KERN_VIRT_SIZE - FIXADDR_SIZE)
+#define FIXADDR_SIZE SZ_32M
/*
diff --git a/arch/powerpc/include/asm/paravirt.h b/arch/powerpc/include/asm/paravirt.h
index 9362c94..588bfb9 100644
--- a/arch/powerpc/include/asm/paravirt.h
+++ b/arch/powerpc/include/asm/paravirt.h
@@ -24,19 +24,35 @@ static inline u32 yield_count_of(int cpu)
return be32_to_cpu(yield_count);
}
+/*
+ * Spinlock code confers and prods, so don't trace the hcalls because the
+ * tracing code takes spinlocks which can cause recursion deadlocks.
+ *
+ * These calls are made while the lock is not held: the lock slowpath yields if
+ * it cannot acquire the lock, and the unlock slow path might prod if a waiter has
+ * yielded. So this may not be a problem for simple spin locks because the
+ * tracing does not technically recurse on the lock, but we avoid it anyway.
+ *
+ * However the queued spin lock contended path is more strictly ordered: the
+ * H_CONFER hcall is made after the task has queued itself on the lock, so then
+ * recursing on that lock will cause the task to then queue up again behind the
+ * first instance (or worse: queued spinlocks use tricks that assume a context
+ * never waits on more than one spinlock, so such recursion may cause random
+ * corruption in the lock code).
+ */
static inline void yield_to_preempted(int cpu, u32 yield_count)
{
- plpar_hcall_norets(H_CONFER, get_hard_smp_processor_id(cpu), yield_count);
+ plpar_hcall_norets_notrace(H_CONFER, get_hard_smp_processor_id(cpu), yield_count);
}
static inline void prod_cpu(int cpu)
{
- plpar_hcall_norets(H_PROD, get_hard_smp_processor_id(cpu));
+ plpar_hcall_norets_notrace(H_PROD, get_hard_smp_processor_id(cpu));
}
static inline void yield_to_any(void)
{
- plpar_hcall_norets(H_CONFER, -1, 0);
+ plpar_hcall_norets_notrace(H_CONFER, -1, 0);
}
#else
static inline bool is_shared_processor(void)
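
A toy model of the recursion the comment above warns about, with made-up names: if the H_CONFER hcall made from the queued-spinlock slowpath were traced, the trace hook could take another queued spinlock and the task would queue itself a second time, which the queueing code does not tolerate. The _notrace wrappers simply skip the hook:

#include <stdbool.h>
#include <stdio.h>

/* All names are made up; the point is only that the trace hook re-enters
 * the lock slowpath while the task is already queued on a lock. */
static bool task_is_queued;

static void hcall_confer(bool traced);

static void lock_slowpath(bool traced)
{
    if (task_is_queued) {
        printf("re-entered slowpath while already queued: corruption risk\n");
        return;
    }
    task_is_queued = true;
    hcall_confer(traced);     /* wait for the lock holder to run */
    task_is_queued = false;
}

static void trace_hcall_entry(void)
{
    /* Tracing infrastructure may take its own (queued) spinlock. */
    lock_slowpath(false);
}

static void hcall_confer(bool traced)
{
    if (traced)
        trace_hcall_entry();  /* this is what the _notrace variant avoids */
    /* the actual H_CONFER hypercall would go here */
}

int main(void)
{
    lock_slowpath(true);      /* traced variant: recursion fires  */
    lock_slowpath(false);     /* notrace variant: no recursion    */
    return 0;
}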
diff --git a/arch/powerpc/include/asm/ptrace.h b/arch/powerpc/include/asm/ptrace.h
index d6f262d..a7e7688 100644
--- a/arch/powerpc/include/asm/ptrace.h
+++ b/arch/powerpc/include/asm/ptrace.h
@@ -19,6 +19,7 @@
#ifndef _ASM_POWERPC_PTRACE_H
#define _ASM_POWERPC_PTRACE_H
+#include <linux/err.h>
#include <uapi/asm/ptrace.h>
#include <asm/asm-const.h>
@@ -144,25 +145,6 @@ extern unsigned long profile_pc(struct pt_regs *regs);
long do_syscall_trace_enter(struct pt_regs *regs);
void do_syscall_trace_leave(struct pt_regs *regs);
-#define kernel_stack_pointer(regs) ((regs)->gpr[1])
-static inline int is_syscall_success(struct pt_regs *regs)
-{
- return !(regs->ccr & 0x10000000);
-}
-
-static inline long regs_return_value(struct pt_regs *regs)
-{
- if (is_syscall_success(regs))
- return regs->gpr[3];
- else
- return -regs->gpr[3];
-}
-
-static inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc)
-{
- regs->gpr[3] = rc;
-}
-
#ifdef __powerpc64__
#define user_mode(regs) ((((regs)->msr) >> MSR_PR_LG) & 0x1)
#else
@@ -245,6 +227,31 @@ static inline void set_trap_norestart(struct pt_regs *regs)
regs->trap |= 0x10;
}
+#define kernel_stack_pointer(regs) ((regs)->gpr[1])
+static inline int is_syscall_success(struct pt_regs *regs)
+{
+ if (trap_is_scv(regs))
+ return !IS_ERR_VALUE((unsigned long)regs->gpr[3]);
+ else
+ return !(regs->ccr & 0x10000000);
+}
+
+static inline long regs_return_value(struct pt_regs *regs)
+{
+ if (trap_is_scv(regs))
+ return regs->gpr[3];
+
+ if (is_syscall_success(regs))
+ return regs->gpr[3];
+ else
+ return -regs->gpr[3];
+}
+
+static inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc)
+{
+ regs->gpr[3] = rc;
+}
+
#define arch_has_single_step() (1)
#define arch_has_block_step() (true)
#define ARCH_HAS_USER_SINGLE_STEP_REPORT
diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
index f877a57..f4b9890 100644
--- a/arch/powerpc/include/asm/reg.h
+++ b/arch/powerpc/include/asm/reg.h
@@ -444,6 +444,7 @@
#define LPCR_VRMA_LP1 ASM_CONST(0x0000800000000000)
#define LPCR_RMLS 0x1C000000 /* Implementation dependent RMO limit sel */
#define LPCR_RMLS_SH 26
+#define LPCR_HAIL ASM_CONST(0x0000000004000000) /* HV AIL (ISAv3.1) */
#define LPCR_ILE ASM_CONST(0x0000000002000000) /* !HV irqs set MSR:LE */
#define LPCR_AIL ASM_CONST(0x0000000001800000) /* Alternate interrupt location */
#define LPCR_AIL_0 ASM_CONST(0x0000000000000000) /* MMU off exception offset 0x0 */
diff --git a/arch/powerpc/include/asm/smp.h b/arch/powerpc/include/asm/smp.h
index b2035b2..635bdf9 100644
--- a/arch/powerpc/include/asm/smp.h
+++ b/arch/powerpc/include/asm/smp.h
@@ -121,6 +121,11 @@ static inline struct cpumask *cpu_sibling_mask(int cpu)
return per_cpu(cpu_sibling_map, cpu);
}
+static inline struct cpumask *cpu_core_mask(int cpu)
+{
+ return per_cpu(cpu_core_map, cpu);
+}
+
static inline struct cpumask *cpu_l2_cache_mask(int cpu)
{
return per_cpu(cpu_l2_cache_map, cpu);
diff --git a/arch/powerpc/include/asm/syscall.h b/arch/powerpc/include/asm/syscall.h
index fd1b518..ba0f88f 100644
--- a/arch/powerpc/include/asm/syscall.h
+++ b/arch/powerpc/include/asm/syscall.h
@@ -41,11 +41,17 @@ static inline void syscall_rollback(struct task_struct *task,
static inline long syscall_get_error(struct task_struct *task,
struct pt_regs *regs)
{
- /*
- * If the system call failed,
- * regs->gpr[3] contains a positive ERRORCODE.
- */
- return (regs->ccr & 0x10000000UL) ? -regs->gpr[3] : 0;
+ if (trap_is_scv(regs)) {
+ unsigned long error = regs->gpr[3];
+
+ return IS_ERR_VALUE(error) ? error : 0;
+ } else {
+ /*
+ * If the system call failed,
+ * regs->gpr[3] contains a positive ERRORCODE.
+ */
+ return (regs->ccr & 0x10000000UL) ? -regs->gpr[3] : 0;
+ }
}
static inline long syscall_get_return_value(struct task_struct *task,
@@ -58,18 +64,22 @@ static inline void syscall_set_return_value(struct task_struct *task,
struct pt_regs *regs,
int error, long val)
{
- /*
- * In the general case it's not obvious that we must deal with CCR
- * here, as the syscall exit path will also do that for us. However
- * there are some places, eg. the signal code, which check ccr to
- * decide if the value in r3 is actually an error.
- */
- if (error) {
- regs->ccr |= 0x10000000L;
- regs->gpr[3] = error;
+ if (trap_is_scv(regs)) {
+ regs->gpr[3] = (long) error ?: val;
} else {
- regs->ccr &= ~0x10000000L;
- regs->gpr[3] = val;
+ /*
+ * In the general case it's not obvious that we must deal with
+ * CCR here, as the syscall exit path will also do that for us.
+ * However there are some places, eg. the signal code, which
+ * check ccr to decide if the value in r3 is actually an error.
+ */
+ if (error) {
+ regs->ccr |= 0x10000000L;
+ regs->gpr[3] = error;
+ } else {
+ regs->ccr &= ~0x10000000L;
+ regs->gpr[3] = val;
+ }
}
}
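
Both this helper and the reworked is_syscall_success()/regs_return_value() in ptrace.h distinguish the two syscall ABIs: scv returns a negative errno directly in r3, while the older sc convention returns a positive errno flagged by CR0.SO. A standalone sketch of the two decodings (the struct below is an illustrative stand-in for pt_regs, not its real layout):

#include <stdbool.h>
#include <stdio.h>

#define MAX_ERRNO 4095UL

/* Illustrative stand-in for pt_regs: just r3 and the CR0.SO bit. */
struct regs {
    long r3;
    bool cr0_so;
    bool is_scv;
};

/* Mirrors IS_ERR_VALUE(): the top 4095 values of the address space are errnos. */
static bool is_err_value(unsigned long v)
{
    return v >= (unsigned long)-MAX_ERRNO;
}

static long get_error(const struct regs *r)
{
    if (r->is_scv)
        return is_err_value((unsigned long)r->r3) ? r->r3 : 0;
    /* sc: positive errno in r3, flagged by CR0.SO */
    return r->cr0_so ? -r->r3 : 0;
}

int main(void)
{
    struct regs scv_fail = { .r3 = -13 /* -EACCES */, .is_scv = true  };
    struct regs sc_fail  = { .r3 =  13 /*  EACCES */, .cr0_so = true  };
    struct regs sc_ok    = { .r3 =  42,               .cr0_so = false };

    printf("scv failure -> %ld\n", get_error(&scv_fail));
    printf("sc  failure -> %ld\n", get_error(&sc_fail));
    printf("sc  success -> %ld\n", get_error(&sc_ok));
    return 0;
}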
diff --git a/arch/powerpc/include/uapi/asm/errno.h b/arch/powerpc/include/uapi/asm/errno.h
index cc79856..4ba87de 100644
--- a/arch/powerpc/include/uapi/asm/errno.h
+++ b/arch/powerpc/include/uapi/asm/errno.h
@@ -2,6 +2,7 @@
#ifndef _ASM_POWERPC_ERRNO_H
#define _ASM_POWERPC_ERRNO_H
+#undef EDEADLOCK
#include <asm-generic/errno.h>
#undef EDEADLOCK
diff --git a/arch/powerpc/kernel/eeh.c b/arch/powerpc/kernel/eeh.c
index 813713c..20c417a 100644
--- a/arch/powerpc/kernel/eeh.c
+++ b/arch/powerpc/kernel/eeh.c
@@ -362,14 +362,11 @@ static inline unsigned long eeh_token_to_phys(unsigned long token)
pa = pte_pfn(*ptep);
/* On radix we can do hugepage mappings for io, so handle that */
- if (hugepage_shift) {
- pa <<= hugepage_shift;
- pa |= token & ((1ul << hugepage_shift) - 1);
- } else {
- pa <<= PAGE_SHIFT;
- pa |= token & (PAGE_SIZE - 1);
- }
+ if (!hugepage_shift)
+ hugepage_shift = PAGE_SHIFT;
+ pa <<= PAGE_SHIFT;
+ pa |= token & ((1ul << hugepage_shift) - 1);
return pa;
}
diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
index 8482739..eddf362 100644
--- a/arch/powerpc/kernel/fadump.c
+++ b/arch/powerpc/kernel/fadump.c
@@ -292,7 +292,7 @@ static void fadump_show_config(void)
* that is required for a kernel to boot successfully.
*
*/
-static inline u64 fadump_calculate_reserve_size(void)
+static __init u64 fadump_calculate_reserve_size(void)
{
u64 base, size, bootmem_min;
int ret;
diff --git a/arch/powerpc/kernel/head_32.h b/arch/powerpc/kernel/head_32.h
index fef0b34..f8e3d15d 100644
--- a/arch/powerpc/kernel/head_32.h
+++ b/arch/powerpc/kernel/head_32.h
@@ -338,11 +338,7 @@
lis r1, emergency_ctx@ha
#endif
lwz r1, emergency_ctx@l(r1)
- cmpwi cr1, r1, 0
- bne cr1, 1f
- lis r1, init_thread_union@ha
- addi r1, r1, init_thread_union@l
-1: addi r1, r1, THREAD_SIZE - INT_FRAME_SIZE
+ addi r1, r1, THREAD_SIZE - INT_FRAME_SIZE
EXCEPTION_PROLOG_2
SAVE_NVGPRS(r11)
addi r3, r1, STACK_FRAME_OVERHEAD
diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c
index 5b69a6a..6806eef 100644
--- a/arch/powerpc/kernel/iommu.c
+++ b/arch/powerpc/kernel/iommu.c
@@ -1050,7 +1050,7 @@ int iommu_take_ownership(struct iommu_table *tbl)
spin_lock_irqsave(&tbl->large_pool.lock, flags);
for (i = 0; i < tbl->nr_pools; i++)
- spin_lock(&tbl->pools[i].lock);
+ spin_lock_nest_lock(&tbl->pools[i].lock, &tbl->large_pool.lock);
iommu_table_release_pages(tbl);
@@ -1078,7 +1078,7 @@ void iommu_release_ownership(struct iommu_table *tbl)
spin_lock_irqsave(&tbl->large_pool.lock, flags);
for (i = 0; i < tbl->nr_pools; i++)
- spin_lock(&tbl->pools[i].lock);
+ spin_lock_nest_lock(&tbl->pools[i].lock, &tbl->large_pool.lock);
memset(tbl->it_map, 0, sz);
diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index c1545f2..7a14a09 100644
--- a/arch/powerpc/kernel/prom.c
+++ b/arch/powerpc/kernel/prom.c
@@ -268,7 +268,7 @@ static struct feature_property {
};
#if defined(CONFIG_44x) && defined(CONFIG_PPC_FPU)
-static inline void identical_pvr_fixup(unsigned long node)
+static __init void identical_pvr_fixup(unsigned long node)
{
unsigned int pvr;
const char *model = of_get_flat_dt_prop(node, "model", NULL);
diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
index 057d6b8..e7f2eb7 100644
--- a/arch/powerpc/kernel/setup_32.c
+++ b/arch/powerpc/kernel/setup_32.c
@@ -164,7 +164,7 @@ void __init irqstack_early_init(void)
}
#ifdef CONFIG_VMAP_STACK
-void *emergency_ctx[NR_CPUS] __ro_after_init;
+void *emergency_ctx[NR_CPUS] __ro_after_init = {[0] = &init_stack};
void __init emergency_stack_init(void)
{
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index c28e949..3f8426b 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -231,10 +231,23 @@ static void cpu_ready_for_interrupts(void)
* If we are not in hypervisor mode the job is done once for
* the whole partition in configure_exceptions().
*/
- if (cpu_has_feature(CPU_FTR_HVMODE) &&
- cpu_has_feature(CPU_FTR_ARCH_207S)) {
+ if (cpu_has_feature(CPU_FTR_HVMODE)) {
unsigned long lpcr = mfspr(SPRN_LPCR);
- mtspr(SPRN_LPCR, lpcr | LPCR_AIL_3);
+ unsigned long new_lpcr = lpcr;
+
+ if (cpu_has_feature(CPU_FTR_ARCH_31)) {
+ /* P10 DD1 does not have HAIL */
+ if (pvr_version_is(PVR_POWER10) &&
+ (mfspr(SPRN_PVR) & 0xf00) == 0x100)
+ new_lpcr |= LPCR_AIL_3;
+ else
+ new_lpcr |= LPCR_HAIL;
+ } else if (cpu_has_feature(CPU_FTR_ARCH_207S)) {
+ new_lpcr |= LPCR_AIL_3;
+ }
+
+ if (new_lpcr != lpcr)
+ mtspr(SPRN_LPCR, new_lpcr);
}
/*
@@ -355,11 +368,11 @@ void __init early_setup(unsigned long dt_ptr)
apply_feature_fixups();
setup_feature_keys();
- early_ioremap_setup();
-
/* Initialize the hash table or TLB handling */
early_init_mmu();
+ early_ioremap_setup();
+
/*
* After firmware and early platform setup code has set things up,
* we note the SPR values for configurable control/performance
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 7d6cf75..db7ac77 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -975,17 +975,12 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
local_memory_node(numa_cpu_lookup_table[cpu]));
}
#endif
- /*
- * cpu_core_map is now more updated and exists only since
- * its been exported for long. It only will have a snapshot
- * of cpu_cpu_mask.
- */
- cpumask_copy(per_cpu(cpu_core_map, cpu), cpu_cpu_mask(cpu));
}
/* Init the cpumasks so the boot CPU is related to itself */
cpumask_set_cpu(boot_cpuid, cpu_sibling_mask(boot_cpuid));
cpumask_set_cpu(boot_cpuid, cpu_l2_cache_mask(boot_cpuid));
+ cpumask_set_cpu(boot_cpuid, cpu_core_mask(boot_cpuid));
if (has_coregroup_support())
cpumask_set_cpu(boot_cpuid, cpu_coregroup_mask(boot_cpuid));
@@ -1304,6 +1299,9 @@ static void remove_cpu_from_masks(int cpu)
set_cpus_unrelated(cpu, i, cpu_smallcore_mask);
}
+ for_each_cpu(i, cpu_core_mask(cpu))
+ set_cpus_unrelated(cpu, i, cpu_core_mask);
+
if (has_coregroup_support()) {
for_each_cpu(i, cpu_coregroup_mask(cpu))
set_cpus_unrelated(cpu, i, cpu_coregroup_mask);
@@ -1364,8 +1362,11 @@ static void update_coregroup_mask(int cpu, cpumask_var_t *mask)
static void add_cpu_to_masks(int cpu)
{
+ struct cpumask *(*submask_fn)(int) = cpu_sibling_mask;
int first_thread = cpu_first_thread_sibling(cpu);
+ int chip_id = cpu_to_chip_id(cpu);
cpumask_var_t mask;
+ bool ret;
int i;
/*
@@ -1381,12 +1382,36 @@ static void add_cpu_to_masks(int cpu)
add_cpu_to_smallcore_masks(cpu);
/* In CPU-hotplug path, hence use GFP_ATOMIC */
- alloc_cpumask_var_node(&mask, GFP_ATOMIC, cpu_to_node(cpu));
+ ret = alloc_cpumask_var_node(&mask, GFP_ATOMIC, cpu_to_node(cpu));
update_mask_by_l2(cpu, &mask);
if (has_coregroup_support())
update_coregroup_mask(cpu, &mask);
+ if (chip_id == -1 || !ret) {
+ cpumask_copy(per_cpu(cpu_core_map, cpu), cpu_cpu_mask(cpu));
+ goto out;
+ }
+
+ if (shared_caches)
+ submask_fn = cpu_l2_cache_mask;
+
+ /* Update core_mask with all the CPUs that are part of submask */
+ or_cpumasks_related(cpu, cpu, submask_fn, cpu_core_mask);
+
+ /* Skip all CPUs already part of current CPU core mask */
+ cpumask_andnot(mask, cpu_online_mask, cpu_core_mask(cpu));
+
+ for_each_cpu(i, mask) {
+ if (chip_id == cpu_to_chip_id(i)) {
+ or_cpumasks_related(cpu, i, submask_fn, cpu_core_mask);
+ cpumask_andnot(mask, mask, submask_fn(i));
+ } else {
+ cpumask_andnot(mask, mask, cpu_core_mask(i));
+ }
+ }
+
+out:
free_cpumask_var(mask);
}
@@ -1417,6 +1442,9 @@ void start_secondary(void *unused)
vdso_getcpu_init();
#endif
+ set_numa_node(numa_cpu_lookup_table[cpu]);
+ set_numa_mem(local_memory_node(numa_cpu_lookup_table[cpu]));
+
/* Update topology CPU masks */
add_cpu_to_masks(cpu);
@@ -1435,9 +1463,6 @@ void start_secondary(void *unused)
shared_caches = true;
}
- set_numa_node(numa_cpu_lookup_table[cpu]);
- set_numa_mem(local_memory_node(numa_cpu_lookup_table[cpu]));
-
smp_wmb();
notify_cpu_starting(cpu);
set_cpu_online(cpu, true);
diff --git a/arch/powerpc/kexec/file_load_64.c b/arch/powerpc/kexec/file_load_64.c
index 02b9e4d..a8a7cb7 100644
--- a/arch/powerpc/kexec/file_load_64.c
+++ b/arch/powerpc/kexec/file_load_64.c
@@ -961,6 +961,93 @@ unsigned int kexec_fdt_totalsize_ppc64(struct kimage *image)
}
/**
+ * add_node_props - Reads node properties from device node structure and add
+ * them to fdt.
+ * @fdt: Flattened device tree of the kernel
+ * @node_offset: offset of the node to add a property at
+ * @dn: device node pointer
+ *
+ * Returns 0 on success, negative errno on error.
+ */
+static int add_node_props(void *fdt, int node_offset, const struct device_node *dn)
+{
+ int ret = 0;
+ struct property *pp;
+
+ if (!dn)
+ return -EINVAL;
+
+ for_each_property_of_node(dn, pp) {
+ ret = fdt_setprop(fdt, node_offset, pp->name, pp->value, pp->length);
+ if (ret < 0) {
+ pr_err("Unable to add %s property: %s\n", pp->name, fdt_strerror(ret));
+ return ret;
+ }
+ }
+ return ret;
+}
+
+/**
+ * update_cpus_node - Update cpus node of flattened device tree using of_root
+ * device node.
+ * @fdt: Flattened device tree of the kernel.
+ *
+ * Returns 0 on success, negative errno on error.
+ */
+static int update_cpus_node(void *fdt)
+{
+ struct device_node *cpus_node, *dn;
+ int cpus_offset, cpus_subnode_offset, ret = 0;
+
+ cpus_offset = fdt_path_offset(fdt, "/cpus");
+ if (cpus_offset < 0 && cpus_offset != -FDT_ERR_NOTFOUND) {
+ pr_err("Malformed device tree: error reading /cpus node: %s\n",
+ fdt_strerror(cpus_offset));
+ return cpus_offset;
+ }
+
+ if (cpus_offset > 0) {
+ ret = fdt_del_node(fdt, cpus_offset);
+ if (ret < 0) {
+ pr_err("Error deleting /cpus node: %s\n", fdt_strerror(ret));
+ return -EINVAL;
+ }
+ }
+
+ /* Add cpus node to fdt */
+ cpus_offset = fdt_add_subnode(fdt, fdt_path_offset(fdt, "/"), "cpus");
+ if (cpus_offset < 0) {
+ pr_err("Error creating /cpus node: %s\n", fdt_strerror(cpus_offset));
+ return -EINVAL;
+ }
+
+ /* Add cpus node properties */
+ cpus_node = of_find_node_by_path("/cpus");
+ ret = add_node_props(fdt, cpus_offset, cpus_node);
+ of_node_put(cpus_node);
+ if (ret < 0)
+ return ret;
+
+ /* Loop through all subnodes of cpus and add them to fdt */
+ for_each_node_by_type(dn, "cpu") {
+ cpus_subnode_offset = fdt_add_subnode(fdt, cpus_offset, dn->full_name);
+ if (cpus_subnode_offset < 0) {
+ pr_err("Unable to add %s subnode: %s\n", dn->full_name,
+ fdt_strerror(cpus_subnode_offset));
+ ret = cpus_subnode_offset;
+ goto out;
+ }
+
+ ret = add_node_props(fdt, cpus_subnode_offset, dn);
+ if (ret < 0)
+ goto out;
+ }
+out:
+ of_node_put(dn);
+ return ret;
+}
+
+/**
* setup_new_fdt_ppc64 - Update the flattend device-tree of the kernel
* being loaded.
* @image: kexec image being loaded.
@@ -1020,6 +1107,11 @@ int setup_new_fdt_ppc64(const struct kimage *image, void *fdt,
}
}
+ /* Update cpus nodes information to account hotplug CPUs. */
+ ret = update_cpus_node(fdt);
+ if (ret < 0)
+ goto out;
+
/* Update memory reserve map */
ret = get_reserved_memory_ranges(&rmem);
if (ret)
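[Note] For reference, a minimal sketch of the libfdt calls update_cpus_node() relies on, assuming a writable fdt buffer; the property names and values below are illustrative, not taken from the patch:

	static int add_cpus_skeleton(void *fdt)
	{
		int off = fdt_add_subnode(fdt, fdt_path_offset(fdt, "/"), "cpus");

		if (off < 0)
			return off;
		/* #address-cells / #size-cells are typical /cpus properties */
		if (fdt_setprop_u32(fdt, off, "#address-cells", 1) < 0)
			return -EINVAL;
		return fdt_setprop_u32(fdt, off, "#size-cells", 0);
	}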
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index e3b1839..280f799 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -3697,7 +3697,10 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
vcpu->arch.dec_expires = dec + tb;
vcpu->cpu = -1;
vcpu->arch.thread_cpu = -1;
+ /* Save guest CTRL register, set runlatch to 1 */
vcpu->arch.ctrl = mfspr(SPRN_CTRLF);
+ if (!(vcpu->arch.ctrl & 1))
+ mtspr(SPRN_CTRLT, vcpu->arch.ctrl | 1);
vcpu->arch.iamr = mfspr(SPRN_IAMR);
vcpu->arch.pspb = mfspr(SPRN_PSPB);
diff --git a/arch/powerpc/lib/Makefile b/arch/powerpc/lib/Makefile
index 69a91b5..5899123 100644
--- a/arch/powerpc/lib/Makefile
+++ b/arch/powerpc/lib/Makefile
@@ -5,6 +5,9 @@
ccflags-$(CONFIG_PPC64) := $(NO_MINIMAL_TOC)
+CFLAGS_code-patching.o += -fno-stack-protector
+CFLAGS_feature-fixups.o += -fno-stack-protector
+
CFLAGS_REMOVE_code-patching.o = $(CC_FLAGS_FTRACE)
CFLAGS_REMOVE_feature-fixups.o = $(CC_FLAGS_FTRACE)
diff --git a/arch/powerpc/lib/feature-fixups.c b/arch/powerpc/lib/feature-fixups.c
index 92705d6..bda150e 100644
--- a/arch/powerpc/lib/feature-fixups.c
+++ b/arch/powerpc/lib/feature-fixups.c
@@ -14,6 +14,7 @@
#include <linux/string.h>
#include <linux/init.h>
#include <linux/sched/mm.h>
+#include <linux/stop_machine.h>
#include <asm/cputable.h>
#include <asm/code-patching.h>
#include <asm/page.h>
@@ -227,11 +228,25 @@ static void do_stf_exit_barrier_fixups(enum stf_barrier_type types)
: "unknown");
}
+static int __do_stf_barrier_fixups(void *data)
+{
+ enum stf_barrier_type *types = data;
+
+ do_stf_entry_barrier_fixups(*types);
+ do_stf_exit_barrier_fixups(*types);
+
+ return 0;
+}
void do_stf_barrier_fixups(enum stf_barrier_type types)
{
- do_stf_entry_barrier_fixups(types);
- do_stf_exit_barrier_fixups(types);
+ /*
+ * The call to the fallback entry flush, and the fallback/sync-ori exit
+ * flush can not be safely patched in/out while other CPUs are executing
+ * them. So call __do_stf_barrier_fixups() on one CPU while all other CPUs
+ * spin in the stop machine core with interrupts hard disabled.
+ */
+ stop_machine(__do_stf_barrier_fixups, &types, NULL);
}
void do_uaccess_flush_fixups(enum l1d_flush_type types)
@@ -284,8 +299,9 @@ void do_uaccess_flush_fixups(enum l1d_flush_type types)
: "unknown");
}
-void do_entry_flush_fixups(enum l1d_flush_type types)
+static int __do_entry_flush_fixups(void *data)
{
+ enum l1d_flush_type types = *(enum l1d_flush_type *)data;
unsigned int instrs[3], *dest;
long *start, *end;
int i;
@@ -354,6 +370,19 @@ void do_entry_flush_fixups(enum l1d_flush_type types)
: "ori type" :
(types & L1D_FLUSH_MTTRIG) ? "mttrig type"
: "unknown");
+
+ return 0;
+}
+
+void do_entry_flush_fixups(enum l1d_flush_type types)
+{
+ /*
+ * The call to the fallback flush can not be safely patched in/out while
+ * other CPUs are executing it. So call __do_entry_flush_fixups() on one
+ * CPU while all other CPUs spin in the stop machine core with interrupts
+ * hard disabled.
+ */
+ stop_machine(__do_entry_flush_fixups, &types, NULL);
}
void do_rfi_flush_fixups(enum l1d_flush_type types)
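[Note] The stop_machine() pattern used above generalizes to any live code patching; a minimal sketch, where patch_one_cpu() and apply_patches() are illustrative names, not kernel symbols:

	#include <linux/stop_machine.h>

	static int patch_one_cpu(void *data)
	{
		/* Runs on a single CPU; every other online CPU spins with
		 * interrupts hard disabled until this returns. */
		apply_patches(data);		/* hypothetical patch routine */
		return 0;
	}

	static void do_patch(void *payload)
	{
		/* NULL cpumask: the callback executes on the calling CPU */
		stop_machine(patch_one_cpu, payload, NULL);
	}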
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index 24702c0..0141d57 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -336,7 +336,7 @@ int htab_bolt_mapping(unsigned long vstart, unsigned long vend,
int htab_remove_mapping(unsigned long vstart, unsigned long vend,
int psize, int ssize)
{
- unsigned long vaddr;
+ unsigned long vaddr, time_limit;
unsigned int step, shift;
int rc;
int ret = 0;
@@ -349,8 +349,19 @@ int htab_remove_mapping(unsigned long vstart, unsigned long vend,
/* Unmap the full range specified */
vaddr = ALIGN_DOWN(vstart, step);
+ time_limit = jiffies + HZ;
+
for (;vaddr < vend; vaddr += step) {
rc = mmu_hash_ops.hpte_removebolted(vaddr, psize, ssize);
+
+ /*
+ * For large number of mappings introduce a cond_resched()
+ * to prevent softlockup warnings.
+ */
+ if (time_after(jiffies, time_limit)) {
+ cond_resched();
+ time_limit = jiffies + HZ;
+ }
if (rc == -ENOENT) {
ret = -ENOENT;
continue;
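[Note] The time-bounded rescheduling added above is a generic pattern for long kernel loops; a minimal sketch, with do_one_unit() as a hypothetical stand-in for the per-iteration work:

	static void long_loop(unsigned long n)
	{
		unsigned long time_limit = jiffies + HZ;
		unsigned long i;

		for (i = 0; i < n; i++) {
			do_one_unit(i);		/* hypothetical per-iteration work */

			/* Yield roughly once per second to avoid softlockup warnings */
			if (time_after(jiffies, time_limit)) {
				cond_resched();
				time_limit = jiffies + HZ;
			}
		}
	}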
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index 3adcf73..1d5eec8 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -108,7 +108,7 @@ static int early_map_kernel_page(unsigned long ea, unsigned long pa,
set_the_pte:
set_pte_at(&init_mm, ea, ptep, pfn_pte(pfn, flags));
- smp_wmb();
+ asm volatile("ptesync": : :"memory");
return 0;
}
@@ -168,7 +168,7 @@ static int __map_kernel_page(unsigned long ea, unsigned long pa,
set_the_pte:
set_pte_at(&init_mm, ea, ptep, pfn_pte(pfn, flags));
- smp_wmb();
+ asm volatile("ptesync": : :"memory");
return 0;
}
diff --git a/arch/powerpc/perf/isa207-common.c b/arch/powerpc/perf/isa207-common.c
index e1a21d3..5e8eedd 100644
--- a/arch/powerpc/perf/isa207-common.c
+++ b/arch/powerpc/perf/isa207-common.c
@@ -400,8 +400,8 @@ int isa207_get_constraint(u64 event, unsigned long *maskp, unsigned long *valp)
* EBB events are pinned & exclusive, so this should never actually
* hit, but we leave it as a fallback in case.
*/
- mask |= CNST_EBB_VAL(ebb);
- value |= CNST_EBB_MASK;
+ mask |= CNST_EBB_MASK;
+ value |= CNST_EBB_VAL(ebb);
*maskp = mask;
*valp = value;
diff --git a/arch/powerpc/perf/power10-events-list.h b/arch/powerpc/perf/power10-events-list.h
index 60c1b81..e664878 100644
--- a/arch/powerpc/perf/power10-events-list.h
+++ b/arch/powerpc/perf/power10-events-list.h
@@ -66,5 +66,5 @@ EVENT(PM_RUN_INST_CMPL_ALT, 0x00002);
* thresh end (TE)
*/
-EVENT(MEM_LOADS, 0x34340401e0);
-EVENT(MEM_STORES, 0x343c0401e0);
+EVENT(MEM_LOADS, 0x35340401e0);
+EVENT(MEM_STORES, 0x353c0401e0);
diff --git a/arch/powerpc/platforms/52xx/lite5200_sleep.S b/arch/powerpc/platforms/52xx/lite5200_sleep.S
index 11475c5..afee8b1 100644
--- a/arch/powerpc/platforms/52xx/lite5200_sleep.S
+++ b/arch/powerpc/platforms/52xx/lite5200_sleep.S
@@ -181,7 +181,7 @@
udelay: /* r11 - tb_ticks_per_usec, r12 - usecs, overwrites r13 */
mullw r12, r12, r11
mftb r13 /* start */
- addi r12, r13, r12 /* end */
+ add r12, r13, r12 /* end */
1:
mftb r13 /* current */
cmp cr0, r13, r12
diff --git a/arch/powerpc/platforms/pseries/hotplug-cpu.c b/arch/powerpc/platforms/pseries/hotplug-cpu.c
index 12cbffd..325f3b2 100644
--- a/arch/powerpc/platforms/pseries/hotplug-cpu.c
+++ b/arch/powerpc/platforms/pseries/hotplug-cpu.c
@@ -47,9 +47,6 @@ static void rtas_stop_self(void)
BUG_ON(rtas_stop_self_token == RTAS_UNKNOWN_SERVICE);
- printk("cpu %u (hwid %u) Ready to die...\n",
- smp_processor_id(), hard_smp_processor_id());
-
rtas_call_unlocked(&args, rtas_stop_self_token, 0, 1, NULL);
panic("Alas, I survived.\n");
diff --git a/arch/powerpc/platforms/pseries/hvCall.S b/arch/powerpc/platforms/pseries/hvCall.S
index 2136e42..8a2b8d6 100644
--- a/arch/powerpc/platforms/pseries/hvCall.S
+++ b/arch/powerpc/platforms/pseries/hvCall.S
@@ -102,6 +102,16 @@
#define HCALL_BRANCH(LABEL)
#endif
+_GLOBAL_TOC(plpar_hcall_norets_notrace)
+ HMT_MEDIUM
+
+ mfcr r0
+ stw r0,8(r1)
+ HVSC /* invoke the hypervisor */
+ lwz r0,8(r1)
+ mtcrf 0xff,r0
+ blr /* return r3 = status */
+
_GLOBAL_TOC(plpar_hcall_norets)
HMT_MEDIUM
diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
index 764170f..1c3ac0f 100644
--- a/arch/powerpc/platforms/pseries/lpar.c
+++ b/arch/powerpc/platforms/pseries/lpar.c
@@ -1827,8 +1827,7 @@ void hcall_tracepoint_unregfunc(void)
/*
* Since the tracing code might execute hcalls we need to guard against
- * recursion. One example of this are spinlocks calling H_YIELD on
- * shared processor partitions.
+ * recursion.
*/
static DEFINE_PER_CPU(unsigned int, hcall_trace_depth);
diff --git a/arch/powerpc/platforms/pseries/pci_dlpar.c b/arch/powerpc/platforms/pseries/pci_dlpar.c
index f9ae17e..a8f9140 100644
--- a/arch/powerpc/platforms/pseries/pci_dlpar.c
+++ b/arch/powerpc/platforms/pseries/pci_dlpar.c
@@ -50,6 +50,7 @@ EXPORT_SYMBOL_GPL(init_phb_dynamic);
int remove_phb_dynamic(struct pci_controller *phb)
{
struct pci_bus *b = phb->bus;
+ struct pci_host_bridge *host_bridge = to_pci_host_bridge(b->bridge);
struct resource *res;
int rc, i;
@@ -76,7 +77,8 @@ int remove_phb_dynamic(struct pci_controller *phb)
/* Remove the PCI bus and unregister the bridge device from sysfs */
phb->bus = NULL;
pci_remove_bus(b);
- device_unregister(b->bridge);
+ host_bridge->bus = NULL;
+ device_unregister(&host_bridge->dev);
/* Now release the IO resource */
if (res->flags & IORESOURCE_IO)
diff --git a/arch/powerpc/platforms/pseries/vio.c b/arch/powerpc/platforms/pseries/vio.c
index b2797cf..68276e0 100644
--- a/arch/powerpc/platforms/pseries/vio.c
+++ b/arch/powerpc/platforms/pseries/vio.c
@@ -1286,6 +1286,10 @@ static int vio_bus_remove(struct device *dev)
int __vio_register_driver(struct vio_driver *viodrv, struct module *owner,
const char *mod_name)
{
+ // vio_bus_type is only initialised for pseries
+ if (!machine_is(pseries))
+ return -ENODEV;
+
pr_debug("%s: driver %s registering\n", __func__, viodrv->name);
/* fill in 'struct driver' fields */
diff --git a/arch/powerpc/sysdev/xive/common.c b/arch/powerpc/sysdev/xive/common.c
index a80440a..5b0f6b6 100644
--- a/arch/powerpc/sysdev/xive/common.c
+++ b/arch/powerpc/sysdev/xive/common.c
@@ -261,17 +261,20 @@ notrace void xmon_xive_do_dump(int cpu)
xmon_printf("\n");
}
+static struct irq_data *xive_get_irq_data(u32 hw_irq)
+{
+ unsigned int irq = irq_find_mapping(xive_irq_domain, hw_irq);
+
+ return irq ? irq_get_irq_data(irq) : NULL;
+}
+
int xmon_xive_get_irq_config(u32 hw_irq, struct irq_data *d)
{
- struct irq_chip *chip = irq_data_get_irq_chip(d);
int rc;
u32 target;
u8 prio;
u32 lirq;
- if (!is_xive_irq(chip))
- return -EINVAL;
-
rc = xive_ops->get_irq_config(hw_irq, &target, &prio, &lirq);
if (rc) {
xmon_printf("IRQ 0x%08x : no config rc=%d\n", hw_irq, rc);
@@ -281,6 +284,9 @@ int xmon_xive_get_irq_config(u32 hw_irq, struct irq_data *d)
xmon_printf("IRQ 0x%08x : target=0x%x prio=%02x lirq=0x%x ",
hw_irq, target, prio, lirq);
+ if (!d)
+ d = xive_get_irq_data(hw_irq);
+
if (d) {
struct xive_irq_data *xd = irq_data_get_irq_handler_data(d);
u64 val = xive_esb_read(xd, XIVE_ESB_GET);
@@ -1606,6 +1612,8 @@ static void xive_debug_show_irq(struct seq_file *m, u32 hw_irq, struct irq_data
u32 target;
u8 prio;
u32 lirq;
+ struct xive_irq_data *xd;
+ u64 val;
if (!is_xive_irq(chip))
return;
@@ -1619,17 +1627,14 @@ static void xive_debug_show_irq(struct seq_file *m, u32 hw_irq, struct irq_data
seq_printf(m, "IRQ 0x%08x : target=0x%x prio=%02x lirq=0x%x ",
hw_irq, target, prio, lirq);
- if (d) {
- struct xive_irq_data *xd = irq_data_get_irq_handler_data(d);
- u64 val = xive_esb_read(xd, XIVE_ESB_GET);
-
- seq_printf(m, "flags=%c%c%c PQ=%c%c",
- xd->flags & XIVE_IRQ_FLAG_STORE_EOI ? 'S' : ' ',
- xd->flags & XIVE_IRQ_FLAG_LSI ? 'L' : ' ',
- xd->flags & XIVE_IRQ_FLAG_H_INT_ESB ? 'H' : ' ',
- val & XIVE_ESB_VAL_P ? 'P' : '-',
- val & XIVE_ESB_VAL_Q ? 'Q' : '-');
- }
+ xd = irq_data_get_irq_handler_data(d);
+ val = xive_esb_read(xd, XIVE_ESB_GET);
+ seq_printf(m, "flags=%c%c%c PQ=%c%c",
+ xd->flags & XIVE_IRQ_FLAG_STORE_EOI ? 'S' : ' ',
+ xd->flags & XIVE_IRQ_FLAG_LSI ? 'L' : ' ',
+ xd->flags & XIVE_IRQ_FLAG_H_INT_ESB ? 'H' : ' ',
+ val & XIVE_ESB_VAL_P ? 'P' : '-',
+ val & XIVE_ESB_VAL_Q ? 'Q' : '-');
seq_puts(m, "\n");
}
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index df7fccf..f7abd11 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -144,7 +144,7 @@
config ARCH_SPARSEMEM_ENABLE
def_bool y
depends on MMU
- select SPARSEMEM_STATIC if 32BIT && SPARSMEM
+ select SPARSEMEM_STATIC if 32BIT && SPARSEMEM
select SPARSEMEM_VMEMMAP_ENABLE if 64BIT
config ARCH_SELECT_MEMORY_MODEL
diff --git a/arch/riscv/include/asm/ftrace.h b/arch/riscv/include/asm/ftrace.h
index 845002c..04dad33 100644
--- a/arch/riscv/include/asm/ftrace.h
+++ b/arch/riscv/include/asm/ftrace.h
@@ -13,9 +13,19 @@
#endif
#define HAVE_FUNCTION_GRAPH_RET_ADDR_PTR
+/*
+ * Clang prior to 13 had "mcount" instead of "_mcount":
+ * https://reviews.llvm.org/D98881
+ */
+#if defined(CONFIG_CC_IS_GCC) || CONFIG_CLANG_VERSION >= 130000
+#define MCOUNT_NAME _mcount
+#else
+#define MCOUNT_NAME mcount
+#endif
+
#define ARCH_SUPPORTS_FTRACE_OPS 1
#ifndef __ASSEMBLY__
-void _mcount(void);
+void MCOUNT_NAME(void);
static inline unsigned long ftrace_call_adjust(unsigned long addr)
{
return addr;
@@ -36,7 +46,7 @@ struct dyn_arch_ftrace {
* both auipc and jalr at the same time.
*/
-#define MCOUNT_ADDR ((unsigned long)_mcount)
+#define MCOUNT_ADDR ((unsigned long)MCOUNT_NAME)
#define JALR_SIGN_MASK (0x00000800)
#define JALR_OFFSET_MASK (0x00000fff)
#define AUIPC_OFFSET_MASK (0xfffff000)
diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
index 744f320..76274a4 100644
--- a/arch/riscv/kernel/entry.S
+++ b/arch/riscv/kernel/entry.S
@@ -447,6 +447,7 @@
#endif
.section ".rodata"
+ .align LGREG
/* Exception vector table */
ENTRY(excp_vect_table)
RISCV_PTR do_trap_insn_misaligned
diff --git a/arch/riscv/kernel/mcount.S b/arch/riscv/kernel/mcount.S
index 8a5593f..6d46268 100644
--- a/arch/riscv/kernel/mcount.S
+++ b/arch/riscv/kernel/mcount.S
@@ -47,8 +47,8 @@
ENTRY(ftrace_stub)
#ifdef CONFIG_DYNAMIC_FTRACE
- .global _mcount
- .set _mcount, ftrace_stub
+ .global MCOUNT_NAME
+ .set MCOUNT_NAME, ftrace_stub
#endif
ret
ENDPROC(ftrace_stub)
@@ -78,7 +78,7 @@
#endif
#ifndef CONFIG_DYNAMIC_FTRACE
-ENTRY(_mcount)
+ENTRY(MCOUNT_NAME)
la t4, ftrace_stub
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
la t0, ftrace_graph_return
@@ -124,6 +124,6 @@
jalr t5
RESTORE_ABI_STATE
ret
-ENDPROC(_mcount)
+ENDPROC(MCOUNT_NAME)
#endif
-EXPORT_SYMBOL(_mcount)
+EXPORT_SYMBOL(MCOUNT_NAME)
diff --git a/arch/riscv/kernel/smp.c b/arch/riscv/kernel/smp.c
index ea028d9..d445674 100644
--- a/arch/riscv/kernel/smp.c
+++ b/arch/riscv/kernel/smp.c
@@ -54,7 +54,7 @@ int riscv_hartid_to_cpuid(int hartid)
return i;
pr_err("Couldn't find cpu id for hartid [%d]\n", hartid);
- return i;
+ return -ENOENT;
}
void riscv_cpuid_to_hartid_mask(const struct cpumask *in, struct cpumask *out)
diff --git a/arch/riscv/kernel/vdso/Makefile b/arch/riscv/kernel/vdso/Makefile
index 71a315e..ca2b40d 100644
--- a/arch/riscv/kernel/vdso/Makefile
+++ b/arch/riscv/kernel/vdso/Makefile
@@ -41,11 +41,10 @@
$(obj)/vdso.o: $(obj)/vdso.so
# link rule for the .so file, .lds has to be first
-SYSCFLAGS_vdso.so.dbg = $(c_flags)
$(obj)/vdso.so.dbg: $(src)/vdso.lds $(obj-vdso) FORCE
$(call if_changed,vdsold)
-SYSCFLAGS_vdso.so.dbg = -shared -s -Wl,-soname=linux-vdso.so.1 \
- -Wl,--build-id=sha1 -Wl,--hash-style=both
+LDFLAGS_vdso.so.dbg = -shared -s -soname=linux-vdso.so.1 \
+ --build-id=sha1 --hash-style=both --eh-frame-hdr
# We also create a special relocatable object that should mirror the symbol
# table and layout of the linked DSO. With ld --just-symbols we can then
@@ -60,13 +59,10 @@
# actual build commands
# The DSO images are built using a special linker script
-# Add -lgcc so rv32 gets static muldi3 and lshrdi3 definitions.
# Make sure only to export the intended __vdso_xxx symbol offsets.
quiet_cmd_vdsold = VDSOLD $@
- cmd_vdsold = $(CC) $(KBUILD_CFLAGS) $(call cc-option, -no-pie) -nostdlib -nostartfiles $(SYSCFLAGS_$(@F)) \
- -Wl,-T,$(filter-out FORCE,$^) -o $@.tmp && \
- $(CROSS_COMPILE)objcopy \
- $(patsubst %, -G __vdso_%, $(vdso-syms)) $@.tmp $@ && \
+ cmd_vdsold = $(LD) $(ld_flags) -T $(filter-out FORCE,$^) -o $@.tmp && \
+ $(OBJCOPY) $(patsubst %, -G __vdso_%, $(vdso-syms)) $@.tmp $@ && \
rm $@.tmp
# Extracts symbol offsets from the VDSO, converting them into an assembly file
diff --git a/arch/s390/crypto/arch_random.c b/arch/s390/crypto/arch_random.c
index dd95cdbd..4cbb4b6 100644
--- a/arch/s390/crypto/arch_random.c
+++ b/arch/s390/crypto/arch_random.c
@@ -53,6 +53,10 @@ static DECLARE_DELAYED_WORK(arch_rng_work, arch_rng_refill_buffer);
bool s390_arch_random_generate(u8 *buf, unsigned int nbytes)
{
+ /* max hunk is ARCH_RNG_BUF_SIZE */
+ if (nbytes > ARCH_RNG_BUF_SIZE)
+ return false;
+
/* lock rng buffer */
if (!spin_trylock(&arch_rng_lock))
return false;
diff --git a/arch/s390/kernel/cpcmd.c b/arch/s390/kernel/cpcmd.c
index af013b4..2da0273 100644
--- a/arch/s390/kernel/cpcmd.c
+++ b/arch/s390/kernel/cpcmd.c
@@ -37,10 +37,12 @@ static int diag8_noresponse(int cmdlen)
static int diag8_response(int cmdlen, char *response, int *rlen)
{
+ unsigned long _cmdlen = cmdlen | 0x40000000L;
+ unsigned long _rlen = *rlen;
register unsigned long reg2 asm ("2") = (addr_t) cpcmd_buf;
register unsigned long reg3 asm ("3") = (addr_t) response;
- register unsigned long reg4 asm ("4") = cmdlen | 0x40000000L;
- register unsigned long reg5 asm ("5") = *rlen;
+ register unsigned long reg4 asm ("4") = _cmdlen;
+ register unsigned long reg5 asm ("5") = _rlen;
asm volatile(
" diag %2,%0,0x8\n"
diff --git a/arch/s390/kernel/dis.c b/arch/s390/kernel/dis.c
index a7eab7b..5412efe 100644
--- a/arch/s390/kernel/dis.c
+++ b/arch/s390/kernel/dis.c
@@ -563,7 +563,7 @@ void show_code(struct pt_regs *regs)
void print_fn_code(unsigned char *code, unsigned long len)
{
- char buffer[64], *ptr;
+ char buffer[128], *ptr;
int opsize, i;
while (len) {
diff --git a/arch/s390/kernel/entry.S b/arch/s390/kernel/entry.S
index 7120332..81c458e 100644
--- a/arch/s390/kernel/entry.S
+++ b/arch/s390/kernel/entry.S
@@ -994,6 +994,7 @@
* Load idle PSW.
*/
ENTRY(psw_idle)
+ stg %r14,(__SF_GPRS+8*8)(%r15)
stg %r3,__SF_EMPTY(%r15)
larl %r1,.Lpsw_idle_exit
stg %r1,__SF_EMPTY+8(%r15)
diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
index 4d843e6..e83ce90 100644
--- a/arch/s390/kernel/setup.c
+++ b/arch/s390/kernel/setup.c
@@ -925,9 +925,9 @@ static int __init setup_hwcaps(void)
if (MACHINE_HAS_VX) {
elf_hwcap |= HWCAP_S390_VXRS;
if (test_facility(134))
- elf_hwcap |= HWCAP_S390_VXRS_EXT;
- if (test_facility(135))
elf_hwcap |= HWCAP_S390_VXRS_BCD;
+ if (test_facility(135))
+ elf_hwcap |= HWCAP_S390_VXRS_EXT;
if (test_facility(148))
elf_hwcap |= HWCAP_S390_VXRS_EXT2;
if (test_facility(152))
diff --git a/arch/s390/kvm/gaccess.c b/arch/s390/kvm/gaccess.c
index 6d6b570..b9f85b2 100644
--- a/arch/s390/kvm/gaccess.c
+++ b/arch/s390/kvm/gaccess.c
@@ -976,7 +976,9 @@ int kvm_s390_check_low_addr_prot_real(struct kvm_vcpu *vcpu, unsigned long gra)
* kvm_s390_shadow_tables - walk the guest page table and create shadow tables
* @sg: pointer to the shadow guest address space structure
* @saddr: faulting address in the shadow gmap
- * @pgt: pointer to the page table address result
+ * @pgt: pointer to the beginning of the page table for the given address if
+ * successful (return value 0), or to the first invalid DAT entry in
+ * case of exceptions (return value > 0)
* @fake: pgt references contiguous guest memory block, not a pgtable
*/
static int kvm_s390_shadow_tables(struct gmap *sg, unsigned long saddr,
@@ -1034,6 +1036,7 @@ static int kvm_s390_shadow_tables(struct gmap *sg, unsigned long saddr,
rfte.val = ptr;
goto shadow_r2t;
}
+ *pgt = ptr + vaddr.rfx * 8;
rc = gmap_read_table(parent, ptr + vaddr.rfx * 8, &rfte.val);
if (rc)
return rc;
@@ -1060,6 +1063,7 @@ static int kvm_s390_shadow_tables(struct gmap *sg, unsigned long saddr,
rste.val = ptr;
goto shadow_r3t;
}
+ *pgt = ptr + vaddr.rsx * 8;
rc = gmap_read_table(parent, ptr + vaddr.rsx * 8, &rste.val);
if (rc)
return rc;
@@ -1087,6 +1091,7 @@ static int kvm_s390_shadow_tables(struct gmap *sg, unsigned long saddr,
rtte.val = ptr;
goto shadow_sgt;
}
+ *pgt = ptr + vaddr.rtx * 8;
rc = gmap_read_table(parent, ptr + vaddr.rtx * 8, &rtte.val);
if (rc)
return rc;
@@ -1123,6 +1128,7 @@ static int kvm_s390_shadow_tables(struct gmap *sg, unsigned long saddr,
ste.val = ptr;
goto shadow_pgt;
}
+ *pgt = ptr + vaddr.sx * 8;
rc = gmap_read_table(parent, ptr + vaddr.sx * 8, &ste.val);
if (rc)
return rc;
@@ -1157,6 +1163,8 @@ static int kvm_s390_shadow_tables(struct gmap *sg, unsigned long saddr,
* @vcpu: virtual cpu
* @sg: pointer to the shadow guest address space structure
* @saddr: faulting address in the shadow gmap
+ * @datptr: will contain the address of the faulting DAT table entry, or of
+ * the valid leaf, plus some flags
*
* Returns: - 0 if the shadow fault was successfully resolved
* - > 0 (pgm exception code) on exceptions while faulting
@@ -1165,11 +1173,11 @@ static int kvm_s390_shadow_tables(struct gmap *sg, unsigned long saddr,
* - -ENOMEM if out of memory
*/
int kvm_s390_shadow_fault(struct kvm_vcpu *vcpu, struct gmap *sg,
- unsigned long saddr)
+ unsigned long saddr, unsigned long *datptr)
{
union vaddress vaddr;
union page_table_entry pte;
- unsigned long pgt;
+ unsigned long pgt = 0;
int dat_protection, fake;
int rc;
@@ -1191,8 +1199,20 @@ int kvm_s390_shadow_fault(struct kvm_vcpu *vcpu, struct gmap *sg,
pte.val = pgt + vaddr.px * PAGE_SIZE;
goto shadow_page;
}
- if (!rc)
- rc = gmap_read_table(sg->parent, pgt + vaddr.px * 8, &pte.val);
+
+ switch (rc) {
+ case PGM_SEGMENT_TRANSLATION:
+ case PGM_REGION_THIRD_TRANS:
+ case PGM_REGION_SECOND_TRANS:
+ case PGM_REGION_FIRST_TRANS:
+ pgt |= PEI_NOT_PTE;
+ break;
+ case 0:
+ pgt += vaddr.px * 8;
+ rc = gmap_read_table(sg->parent, pgt, &pte.val);
+ }
+ if (datptr)
+ *datptr = pgt | dat_protection * PEI_DAT_PROT;
if (!rc && pte.i)
rc = PGM_PAGE_TRANSLATION;
if (!rc && pte.z)
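[Note] The datptr encoding above folds the DAT-protection bit in arithmetically; an equivalent, more explicit form of the same assignment, shown only for illustration:

	unsigned long val = pgt;

	if (dat_protection)
		val |= PEI_DAT_PROT;	/* same as pgt | dat_protection * PEI_DAT_PROT */
	if (datptr)
		*datptr = val;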
diff --git a/arch/s390/kvm/gaccess.h b/arch/s390/kvm/gaccess.h
index f4c5175..7c72a5e 100644
--- a/arch/s390/kvm/gaccess.h
+++ b/arch/s390/kvm/gaccess.h
@@ -18,6 +18,23 @@
/**
* kvm_s390_real_to_abs - convert guest real address to guest absolute address
+ * @prefix - guest prefix
+ * @gra - guest real address
+ *
+ * Returns the guest absolute address that corresponds to the passed guest real
+ * address @gra by applying the given prefix.
+ */
+static inline unsigned long _kvm_s390_real_to_abs(u32 prefix, unsigned long gra)
+{
+ if (gra < 2 * PAGE_SIZE)
+ gra += prefix;
+ else if (gra >= prefix && gra < prefix + 2 * PAGE_SIZE)
+ gra -= prefix;
+ return gra;
+}
+
+/**
+ * kvm_s390_real_to_abs - convert guest real address to guest absolute address
* @vcpu - guest virtual cpu
* @gra - guest real address
*
@@ -27,13 +44,30 @@
static inline unsigned long kvm_s390_real_to_abs(struct kvm_vcpu *vcpu,
unsigned long gra)
{
- unsigned long prefix = kvm_s390_get_prefix(vcpu);
+ return _kvm_s390_real_to_abs(kvm_s390_get_prefix(vcpu), gra);
+}
- if (gra < 2 * PAGE_SIZE)
- gra += prefix;
- else if (gra >= prefix && gra < prefix + 2 * PAGE_SIZE)
- gra -= prefix;
- return gra;
+/**
+ * _kvm_s390_logical_to_effective - convert guest logical to effective address
+ * @psw: psw of the guest
+ * @ga: guest logical address
+ *
+ * Convert a guest logical address to an effective address by applying the
+ * rules of the addressing mode defined by bits 31 and 32 of the given PSW
+ * (extended/basic addressing mode).
+ *
+ * Depending on the addressing mode, the upper 40 bits (24 bit addressing
+ * mode), 33 bits (31 bit addressing mode) or no bits (64 bit addressing
+ * mode) of @ga will be zeroed and the remaining bits will be returned.
+ */
+static inline unsigned long _kvm_s390_logical_to_effective(psw_t *psw,
+ unsigned long ga)
+{
+ if (psw_bits(*psw).eaba == PSW_BITS_AMODE_64BIT)
+ return ga;
+ if (psw_bits(*psw).eaba == PSW_BITS_AMODE_31BIT)
+ return ga & ((1UL << 31) - 1);
+ return ga & ((1UL << 24) - 1);
}
/**
@@ -52,13 +86,7 @@ static inline unsigned long kvm_s390_real_to_abs(struct kvm_vcpu *vcpu,
static inline unsigned long kvm_s390_logical_to_effective(struct kvm_vcpu *vcpu,
unsigned long ga)
{
- psw_t *psw = &vcpu->arch.sie_block->gpsw;
-
- if (psw_bits(*psw).eaba == PSW_BITS_AMODE_64BIT)
- return ga;
- if (psw_bits(*psw).eaba == PSW_BITS_AMODE_31BIT)
- return ga & ((1UL << 31) - 1);
- return ga & ((1UL << 24) - 1);
+ return _kvm_s390_logical_to_effective(&vcpu->arch.sie_block->gpsw, ga);
}
/*
@@ -359,7 +387,11 @@ void ipte_unlock(struct kvm_vcpu *vcpu);
int ipte_lock_held(struct kvm_vcpu *vcpu);
int kvm_s390_check_low_addr_prot_real(struct kvm_vcpu *vcpu, unsigned long gra);
+/* MVPG PEI indication bits */
+#define PEI_DAT_PROT 2
+#define PEI_NOT_PTE 4
+
int kvm_s390_shadow_fault(struct kvm_vcpu *vcpu, struct gmap *shadow,
- unsigned long saddr);
+ unsigned long saddr, unsigned long *datptr);
#endif /* __KVM_S390_GACCESS_H */
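[Note] As a concrete illustration of the prefixing rule implemented by _kvm_s390_real_to_abs() above, with an arbitrarily chosen prefix of 0x20000:

	_kvm_s390_real_to_abs(0x20000, 0x1000);   /* real 0x1000  -> absolute 0x21000 */
	_kvm_s390_real_to_abs(0x20000, 0x21000);  /* real 0x21000 -> absolute 0x1000  */
	_kvm_s390_real_to_abs(0x20000, 0x50000);  /* outside both windows, unchanged  */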
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 425d3d7..20afffd 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -4308,16 +4308,16 @@ static void store_regs_fmt2(struct kvm_vcpu *vcpu)
kvm_run->s.regs.bpbc = (vcpu->arch.sie_block->fpf & FPF_BPBC) == FPF_BPBC;
kvm_run->s.regs.diag318 = vcpu->arch.diag318_info.val;
if (MACHINE_HAS_GS) {
+ preempt_disable();
__ctl_set_bit(2, 4);
if (vcpu->arch.gs_enabled)
save_gs_cb(current->thread.gs_cb);
- preempt_disable();
current->thread.gs_cb = vcpu->arch.host_gscb;
restore_gs_cb(vcpu->arch.host_gscb);
- preempt_enable();
if (!vcpu->arch.host_gscb)
__ctl_clear_bit(2, 4);
vcpu->arch.host_gscb = NULL;
+ preempt_enable();
}
/* SIE will save etoken directly into SDNX and therefore kvm_run */
}
diff --git a/arch/s390/kvm/vsie.c b/arch/s390/kvm/vsie.c
index 4f3cbf6..3fbf708 100644
--- a/arch/s390/kvm/vsie.c
+++ b/arch/s390/kvm/vsie.c
@@ -416,11 +416,6 @@ static void unshadow_scb(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
memcpy((void *)((u64)scb_o + 0xc0),
(void *)((u64)scb_s + 0xc0), 0xf0 - 0xc0);
break;
- case ICPT_PARTEXEC:
- /* MVPG only */
- memcpy((void *)((u64)scb_o + 0xc0),
- (void *)((u64)scb_s + 0xc0), 0xd0 - 0xc0);
- break;
}
if (scb_s->ihcpu != 0xffffU)
@@ -619,10 +614,10 @@ static int map_prefix(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
/* with mso/msl, the prefix lies at offset *mso* */
prefix += scb_s->mso;
- rc = kvm_s390_shadow_fault(vcpu, vsie_page->gmap, prefix);
+ rc = kvm_s390_shadow_fault(vcpu, vsie_page->gmap, prefix, NULL);
if (!rc && (scb_s->ecb & ECB_TE))
rc = kvm_s390_shadow_fault(vcpu, vsie_page->gmap,
- prefix + PAGE_SIZE);
+ prefix + PAGE_SIZE, NULL);
/*
* We don't have to mprotect, we will be called for all unshadows.
* SIE will detect if protection applies and trigger a validity.
@@ -913,7 +908,7 @@ static int handle_fault(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
current->thread.gmap_addr, 1);
rc = kvm_s390_shadow_fault(vcpu, vsie_page->gmap,
- current->thread.gmap_addr);
+ current->thread.gmap_addr, NULL);
if (rc > 0) {
rc = inject_fault(vcpu, rc,
current->thread.gmap_addr,
@@ -935,7 +930,7 @@ static void handle_last_fault(struct kvm_vcpu *vcpu,
{
if (vsie_page->fault_addr)
kvm_s390_shadow_fault(vcpu, vsie_page->gmap,
- vsie_page->fault_addr);
+ vsie_page->fault_addr, NULL);
vsie_page->fault_addr = 0;
}
@@ -983,6 +978,98 @@ static int handle_stfle(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
}
/*
+ * Get a register for a nested guest.
+ * @vcpu the vcpu of the guest
+ * @vsie_page the vsie_page for the nested guest
+ * @reg the register number, the upper 4 bits are ignored.
+ * returns: the value of the register.
+ */
+static u64 vsie_get_register(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page, u8 reg)
+{
+ /* no need to validate the parameter and/or perform error handling */
+ reg &= 0xf;
+ switch (reg) {
+ case 15:
+ return vsie_page->scb_s.gg15;
+ case 14:
+ return vsie_page->scb_s.gg14;
+ default:
+ return vcpu->run->s.regs.gprs[reg];
+ }
+}
+
+static int vsie_handle_mvpg(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
+{
+ struct kvm_s390_sie_block *scb_s = &vsie_page->scb_s;
+ unsigned long pei_dest, pei_src, src, dest, mask, prefix;
+ u64 *pei_block = &vsie_page->scb_o->mcic;
+ int edat, rc_dest, rc_src;
+ union ctlreg0 cr0;
+
+ cr0.val = vcpu->arch.sie_block->gcr[0];
+ edat = cr0.edat && test_kvm_facility(vcpu->kvm, 8);
+ mask = _kvm_s390_logical_to_effective(&scb_s->gpsw, PAGE_MASK);
+ prefix = scb_s->prefix << GUEST_PREFIX_SHIFT;
+
+ dest = vsie_get_register(vcpu, vsie_page, scb_s->ipb >> 20) & mask;
+ dest = _kvm_s390_real_to_abs(prefix, dest) + scb_s->mso;
+ src = vsie_get_register(vcpu, vsie_page, scb_s->ipb >> 16) & mask;
+ src = _kvm_s390_real_to_abs(prefix, src) + scb_s->mso;
+
+ rc_dest = kvm_s390_shadow_fault(vcpu, vsie_page->gmap, dest, &pei_dest);
+ rc_src = kvm_s390_shadow_fault(vcpu, vsie_page->gmap, src, &pei_src);
+ /*
+ * Either everything went well, or something non-critical went wrong
+ * e.g. because of a race. In either case, simply retry.
+ */
+ if (rc_dest == -EAGAIN || rc_src == -EAGAIN || (!rc_dest && !rc_src)) {
+ retry_vsie_icpt(vsie_page);
+ return -EAGAIN;
+ }
+ /* Something more serious went wrong, propagate the error */
+ if (rc_dest < 0)
+ return rc_dest;
+ if (rc_src < 0)
+ return rc_src;
+
+ /* The only possible suppressing exception: just deliver it */
+ if (rc_dest == PGM_TRANSLATION_SPEC || rc_src == PGM_TRANSLATION_SPEC) {
+ clear_vsie_icpt(vsie_page);
+ rc_dest = kvm_s390_inject_program_int(vcpu, PGM_TRANSLATION_SPEC);
+ WARN_ON_ONCE(rc_dest);
+ return 1;
+ }
+
+ /*
+ * Forward the PEI intercept to the guest if it was a page fault, or
+ * also for segment and region table faults if EDAT applies.
+ */
+ if (edat) {
+ rc_dest = rc_dest == PGM_ASCE_TYPE ? rc_dest : 0;
+ rc_src = rc_src == PGM_ASCE_TYPE ? rc_src : 0;
+ } else {
+ rc_dest = rc_dest != PGM_PAGE_TRANSLATION ? rc_dest : 0;
+ rc_src = rc_src != PGM_PAGE_TRANSLATION ? rc_src : 0;
+ }
+ if (!rc_dest && !rc_src) {
+ pei_block[0] = pei_dest;
+ pei_block[1] = pei_src;
+ return 1;
+ }
+
+ retry_vsie_icpt(vsie_page);
+
+ /*
+ * The host has edat, and the guest does not, or it was an ASCE type
+ * exception. The host needs to inject the appropriate DAT interrupts
+ * into the guest.
+ */
+ if (rc_dest)
+ return inject_fault(vcpu, rc_dest, dest, 1);
+ return inject_fault(vcpu, rc_src, src, 0);
+}
+
+/*
* Run the vsie on a shadow scb and a shadow gmap, without any further
* sanity checks, handling SIE faults.
*
@@ -1068,6 +1155,10 @@ static int do_vsie_run(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
if ((scb_s->ipa & 0xf000) != 0xf000)
scb_s->ipa += 0x1000;
break;
+ case ICPT_PARTEXEC:
+ if (scb_s->ipa == 0xb254)
+ rc = vsie_handle_mvpg(vcpu, vsie_page);
+ break;
}
return rc;
}
diff --git a/arch/um/Kconfig.debug b/arch/um/Kconfig.debug
index 315d368..1dfb295 100644
--- a/arch/um/Kconfig.debug
+++ b/arch/um/Kconfig.debug
@@ -17,6 +17,7 @@
bool "Enable gcov support"
depends on DEBUG_INFO
depends on !KCOV
+ depends on !MODULES
help
This option allows developers to retrieve coverage data from a UML
session.
diff --git a/arch/um/kernel/Makefile b/arch/um/kernel/Makefile
index 5aa8820..e698e0c 100644
--- a/arch/um/kernel/Makefile
+++ b/arch/um/kernel/Makefile
@@ -21,7 +21,6 @@
obj-$(CONFIG_BLK_DEV_INITRD) += initrd.o
obj-$(CONFIG_GPROF) += gprof_syms.o
-obj-$(CONFIG_GCOV) += gmon_syms.o
obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
obj-$(CONFIG_STACKTRACE) += stacktrace.o
diff --git a/arch/um/kernel/dyn.lds.S b/arch/um/kernel/dyn.lds.S
index dacbfab..2f2a8ce 100644
--- a/arch/um/kernel/dyn.lds.S
+++ b/arch/um/kernel/dyn.lds.S
@@ -6,6 +6,12 @@
ENTRY(_start)
jiffies = jiffies_64;
+VERSION {
+ {
+ local: *;
+ };
+}
+
SECTIONS
{
PROVIDE (__executable_start = START);
diff --git a/arch/um/kernel/gmon_syms.c b/arch/um/kernel/gmon_syms.c
deleted file mode 100644
index 9361a8e..0000000
--- a/arch/um/kernel/gmon_syms.c
+++ /dev/null
@@ -1,16 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Copyright (C) 2001 - 2007 Jeff Dike (jdike@{addtoit,linux.intel}.com)
- */
-
-#include <linux/module.h>
-
-extern void __bb_init_func(void *) __attribute__((weak));
-EXPORT_SYMBOL(__bb_init_func);
-
-extern void __gcov_init(void *) __attribute__((weak));
-EXPORT_SYMBOL(__gcov_init);
-extern void __gcov_merge_add(void *, unsigned int) __attribute__((weak));
-EXPORT_SYMBOL(__gcov_merge_add);
-extern void __gcov_exit(void) __attribute__((weak));
-EXPORT_SYMBOL(__gcov_exit);
diff --git a/arch/um/kernel/uml.lds.S b/arch/um/kernel/uml.lds.S
index 45d957d..7a8e2b1 100644
--- a/arch/um/kernel/uml.lds.S
+++ b/arch/um/kernel/uml.lds.S
@@ -7,6 +7,12 @@
ENTRY(_start)
jiffies = jiffies_64;
+VERSION {
+ {
+ local: *;
+ };
+}
+
SECTIONS
{
/* This must contain the right address - not quite the default ELF one.*/
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 3a5ecb1..f3c8a81 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -562,6 +562,7 @@
depends on X86_EXTENDED_PLATFORM
depends on NUMA
depends on EFI
+ depends on KEXEC_CORE
depends on X86_X2APIC
depends on PCI
help
@@ -1414,7 +1415,7 @@
config HIGHMEM64G
bool "64GB"
- depends on !M486 && !M586 && !M586TSC && !M586MMX && !MGEODE_LX && !MGEODEGX1 && !MCYRIXIII && !MELAN && !MWINCHIPC6 && !WINCHIP3D && !MK6
+ depends on !M486SX && !M486 && !M586 && !M586TSC && !M586MMX && !MGEODE_LX && !MGEODEGX1 && !MCYRIXIII && !MELAN && !MWINCHIPC6 && !WINCHIP3D && !MK6
select X86_PAE
help
Select this if you have a 32-bit processor and more than 4
diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index 0a6d497..8ed757d 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -34,12 +34,13 @@
REALMODE_CFLAGS := $(M16_CFLAGS) -g -Os -DDISABLE_BRANCH_PROFILING \
-Wall -Wstrict-prototypes -march=i386 -mregparm=3 \
-fno-strict-aliasing -fomit-frame-pointer -fno-pic \
- -mno-mmx -mno-sse
+ -mno-mmx -mno-sse $(call cc-option,-fcf-protection=none)
REALMODE_CFLAGS += -ffreestanding
REALMODE_CFLAGS += -fno-stack-protector
REALMODE_CFLAGS += $(call __cc-option, $(CC), $(REALMODE_CFLAGS), -Wno-address-of-packed-member)
REALMODE_CFLAGS += $(call __cc-option, $(CC), $(REALMODE_CFLAGS), $(cc_stack_align4))
+REALMODE_CFLAGS += $(CLANG_FLAGS)
export REALMODE_CFLAGS
# BITS is used as extension for files which are available in a 32 bit
diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index 40b8fd3..6004047 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -46,6 +46,7 @@
# Disable relocation relaxation in case the link is not PIE.
KBUILD_CFLAGS += $(call as-option,-Wa$(comma)-mrelax-relocations=no)
KBUILD_CFLAGS += -include $(srctree)/include/linux/hidden.h
+KBUILD_CFLAGS += $(CLANG_FLAGS)
# sev-es.c indirectly inludes inat-table.h which is generated during
# compilation and stored in $(objtree). Add the directory to the includes so
diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index 017de6c..72f655c 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -172,11 +172,21 @@
*/
call get_sev_encryption_bit
xorl %edx, %edx
+#ifdef CONFIG_AMD_MEM_ENCRYPT
testl %eax, %eax
jz 1f
subl $32, %eax /* Encryption bit is always above bit 31 */
bts %eax, %edx /* Set encryption mask for page tables */
+ /*
+ * Mark SEV as active in sev_status so that startup32_check_sev_cbit()
+ * will do a check. The sev_status memory will be fully initialized
+ * with the contents of MSR_AMD_SEV_STATUS later in
+ * set_sev_encryption_mask(). For now it is sufficient to know that SEV
+ * is active.
+ */
+ movl $1, rva(sev_status)(%ebp)
1:
+#endif
/* Initialize Page tables to 0 */
leal rva(pgtable)(%ebx), %edi
@@ -261,6 +271,9 @@
movl %esi, %edx
1:
#endif
+ /* Check if the C-bit position is correct when SEV is active */
+ call startup32_check_sev_cbit
+
pushl $__KERNEL_CS
pushl %eax
@@ -787,6 +800,78 @@
#endif
/*
+ * Check for the correct C-bit position when the startup_32 boot-path is used.
+ *
+ * The check makes use of the fact that all memory is encrypted when paging is
+ * disabled. The function creates 64 bits of random data using the RDRAND
+ * instruction. RDRAND is mandatory for SEV guests, so always available. If the
+ * hypervisor violates that the kernel will crash right here.
+ *
+ * The 64 bits of random data are stored to a memory location and at the same
+ * time kept in the %eax and %ebx registers. Since encryption is always active
+ * when paging is off the random data will be stored encrypted in main memory.
+ *
+ * Then paging is enabled. When the C-bit position is correct all memory is
+ * still mapped encrypted and comparing the register values with memory will
+ * succeed. An incorrect C-bit position will map all memory unencrypted, so that
+ * the compare will use the encrypted random data and fail.
+ */
+ __HEAD
+ .code32
+SYM_FUNC_START(startup32_check_sev_cbit)
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+ pushl %eax
+ pushl %ebx
+ pushl %ecx
+ pushl %edx
+
+ /* Check for non-zero sev_status */
+ movl rva(sev_status)(%ebp), %eax
+ testl %eax, %eax
+ jz 4f
+
+ /*
+ * Get two 32-bit random values - Don't bail out if RDRAND fails
+ * because it is better to prevent forward progress if no random value
+ * can be gathered.
+ */
+1: rdrand %eax
+ jnc 1b
+2: rdrand %ebx
+ jnc 2b
+
+ /* Store to memory and keep it in the registers */
+ movl %eax, rva(sev_check_data)(%ebp)
+ movl %ebx, rva(sev_check_data+4)(%ebp)
+
+ /* Enable paging to see if encryption is active */
+ movl %cr0, %edx /* Backup %cr0 in %edx */
+ movl $(X86_CR0_PG | X86_CR0_PE), %ecx /* Enable Paging and Protected mode */
+ movl %ecx, %cr0
+
+ cmpl %eax, rva(sev_check_data)(%ebp)
+ jne 3f
+ cmpl %ebx, rva(sev_check_data+4)(%ebp)
+ jne 3f
+
+ movl %edx, %cr0 /* Restore previous %cr0 */
+
+ jmp 4f
+
+3: /* Check failed - hlt the machine */
+ hlt
+ jmp 3b
+
+4:
+ popl %edx
+ popl %ecx
+ popl %ebx
+ popl %eax
+#endif
+ ret
+SYM_FUNC_END(startup32_check_sev_cbit)
+
+/*
* Stack and heap for uncompression
*/
.bss
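[Note] A rough C-level rendering of what startup32_check_sev_cbit does; the real code is 32-bit assembly, and rdrand64(), enable_paging() and halt_forever() are hypothetical helpers used only to illustrate the flow:

	u64 random = rdrand64();		/* retried until RDRAND succeeds */

	sev_check_data = random;		/* stored encrypted while paging is off */
	enable_paging();
	if (sev_check_data != random)		/* wrong C-bit: reads back as ciphertext */
		halt_forever();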
diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
index aa56179..a6dea4e 100644
--- a/arch/x86/boot/compressed/mem_encrypt.S
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -23,12 +23,6 @@
push %ecx
push %edx
- /* Check if running under a hypervisor */
- movl $1, %eax
- cpuid
- bt $31, %ecx /* Check the hypervisor bit */
- jnc .Lno_sev
-
movl $0x80000000, %eax /* CPUID to check the highest leaf */
cpuid
cmpl $0x8000001f, %eax /* See if 0x8000001f is available */
diff --git a/arch/x86/crypto/poly1305_glue.c b/arch/x86/crypto/poly1305_glue.c
index c44aba2..2bc312f 100644
--- a/arch/x86/crypto/poly1305_glue.c
+++ b/arch/x86/crypto/poly1305_glue.c
@@ -16,7 +16,7 @@
#include <asm/simd.h>
asmlinkage void poly1305_init_x86_64(void *ctx,
- const u8 key[POLY1305_KEY_SIZE]);
+ const u8 key[POLY1305_BLOCK_SIZE]);
asmlinkage void poly1305_blocks_x86_64(void *ctx, const u8 *inp,
const size_t len, const u32 padbit);
asmlinkage void poly1305_emit_x86_64(void *ctx, u8 mac[POLY1305_DIGEST_SIZE],
@@ -81,7 +81,7 @@ static void convert_to_base2_64(void *ctx)
state->is_base2_26 = 0;
}
-static void poly1305_simd_init(void *ctx, const u8 key[POLY1305_KEY_SIZE])
+static void poly1305_simd_init(void *ctx, const u8 key[POLY1305_BLOCK_SIZE])
{
poly1305_init_x86_64(ctx, key);
}
@@ -129,7 +129,7 @@ static void poly1305_simd_emit(void *ctx, u8 mac[POLY1305_DIGEST_SIZE],
poly1305_emit_avx(ctx, mac, nonce);
}
-void poly1305_init_arch(struct poly1305_desc_ctx *dctx, const u8 *key)
+void poly1305_init_arch(struct poly1305_desc_ctx *dctx, const u8 key[POLY1305_KEY_SIZE])
{
poly1305_simd_init(&dctx->h, key);
dctx->s[0] = get_unaligned_le32(&key[16]);
diff --git a/arch/x86/events/amd/iommu.c b/arch/x86/events/amd/iommu.c
index be50ef8..6a98a76 100644
--- a/arch/x86/events/amd/iommu.c
+++ b/arch/x86/events/amd/iommu.c
@@ -81,12 +81,12 @@ static struct attribute_group amd_iommu_events_group = {
};
struct amd_iommu_event_desc {
- struct kobj_attribute attr;
+ struct device_attribute attr;
const char *event;
};
-static ssize_t _iommu_event_show(struct kobject *kobj,
- struct kobj_attribute *attr, char *buf)
+static ssize_t _iommu_event_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
{
struct amd_iommu_event_desc *event =
container_of(attr, struct amd_iommu_event_desc, attr);
diff --git a/arch/x86/events/amd/uncore.c b/arch/x86/events/amd/uncore.c
index 7f014d4..582c0ff 100644
--- a/arch/x86/events/amd/uncore.c
+++ b/arch/x86/events/amd/uncore.c
@@ -275,14 +275,14 @@ static struct attribute_group amd_uncore_attr_group = {
};
#define DEFINE_UNCORE_FORMAT_ATTR(_var, _name, _format) \
-static ssize_t __uncore_##_var##_show(struct kobject *kobj, \
- struct kobj_attribute *attr, \
+static ssize_t __uncore_##_var##_show(struct device *dev, \
+ struct device_attribute *attr, \
char *page) \
{ \
BUILD_BUG_ON(sizeof(_format) >= PAGE_SIZE); \
return sprintf(page, _format "\n"); \
} \
-static struct kobj_attribute format_attr_##_var = \
+static struct device_attribute format_attr_##_var = \
__ATTR(_name, 0444, __uncore_##_var##_show, NULL)
DEFINE_UNCORE_FORMAT_ATTR(event12, event, "config:0-7,32-35");
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index e7dc13f..ee659b5 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -4387,7 +4387,7 @@ static const struct x86_cpu_desc isolation_ucodes[] = {
INTEL_CPU_DESC(INTEL_FAM6_BROADWELL_D, 3, 0x07000009),
INTEL_CPU_DESC(INTEL_FAM6_BROADWELL_D, 4, 0x0f000009),
INTEL_CPU_DESC(INTEL_FAM6_BROADWELL_D, 5, 0x0e000002),
- INTEL_CPU_DESC(INTEL_FAM6_BROADWELL_X, 2, 0x0b000014),
+ INTEL_CPU_DESC(INTEL_FAM6_BROADWELL_X, 1, 0x0b000014),
INTEL_CPU_DESC(INTEL_FAM6_SKYLAKE_X, 3, 0x00000021),
INTEL_CPU_DESC(INTEL_FAM6_SKYLAKE_X, 4, 0x00000000),
INTEL_CPU_DESC(INTEL_FAM6_SKYLAKE_X, 5, 0x00000000),
@@ -5563,7 +5563,7 @@ __init int intel_pmu_init(void)
* Check all LBT MSR here.
* Disable LBR access if any LBR MSRs can not be accessed.
*/
- if (x86_pmu.lbr_nr && !check_msr(x86_pmu.lbr_tos, 0x3UL))
+ if (x86_pmu.lbr_tos && !check_msr(x86_pmu.lbr_tos, 0x3UL))
x86_pmu.lbr_nr = 0;
for (i = 0; i < x86_pmu.lbr_nr; i++) {
if (!(check_msr(x86_pmu.lbr_from + i, 0xffffUL) &&
diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
index 7bdb182..3112186 100644
--- a/arch/x86/events/intel/uncore_snbep.c
+++ b/arch/x86/events/intel/uncore_snbep.c
@@ -1159,7 +1159,6 @@ enum {
SNBEP_PCI_QPI_PORT0_FILTER,
SNBEP_PCI_QPI_PORT1_FILTER,
BDX_PCI_QPI_PORT2_FILTER,
- HSWEP_PCI_PCU_3,
};
static int snbep_qpi_hw_config(struct intel_uncore_box *box, struct perf_event *event)
@@ -2816,22 +2815,33 @@ static struct intel_uncore_type *hswep_msr_uncores[] = {
NULL,
};
+#define HSWEP_PCU_DID 0x2fc0
+#define HSWEP_PCU_CAPID4_OFFET 0x94
+#define hswep_get_chop(_cap) (((_cap) >> 6) & 0x3)
+
+static bool hswep_has_limit_sbox(unsigned int device)
+{
+ struct pci_dev *dev = pci_get_device(PCI_VENDOR_ID_INTEL, device, NULL);
+ u32 capid4;
+
+ if (!dev)
+ return false;
+
+ pci_read_config_dword(dev, HSWEP_PCU_CAPID4_OFFET, &capid4);
+ if (!hswep_get_chop(capid4))
+ return true;
+
+ return false;
+}
+
void hswep_uncore_cpu_init(void)
{
- int pkg = boot_cpu_data.logical_proc_id;
-
if (hswep_uncore_cbox.num_boxes > boot_cpu_data.x86_max_cores)
hswep_uncore_cbox.num_boxes = boot_cpu_data.x86_max_cores;
/* Detect 6-8 core systems with only two SBOXes */
- if (uncore_extra_pci_dev[pkg].dev[HSWEP_PCI_PCU_3]) {
- u32 capid4;
-
- pci_read_config_dword(uncore_extra_pci_dev[pkg].dev[HSWEP_PCI_PCU_3],
- 0x94, &capid4);
- if (((capid4 >> 6) & 0x3) == 0)
- hswep_uncore_sbox.num_boxes = 2;
- }
+ if (hswep_has_limit_sbox(HSWEP_PCU_DID))
+ hswep_uncore_sbox.num_boxes = 2;
uncore_msr_uncores = hswep_msr_uncores;
}
@@ -3094,11 +3104,6 @@ static const struct pci_device_id hswep_uncore_pci_ids[] = {
.driver_data = UNCORE_PCI_DEV_DATA(UNCORE_EXTRA_PCI_DEV,
SNBEP_PCI_QPI_PORT1_FILTER),
},
- { /* PCU.3 (for Capability registers) */
- PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x2fc0),
- .driver_data = UNCORE_PCI_DEV_DATA(UNCORE_EXTRA_PCI_DEV,
- HSWEP_PCI_PCU_3),
- },
{ /* end: all zeroes */ }
};
@@ -3190,27 +3195,18 @@ static struct event_constraint bdx_uncore_pcu_constraints[] = {
EVENT_CONSTRAINT_END
};
+#define BDX_PCU_DID 0x6fc0
+
void bdx_uncore_cpu_init(void)
{
- int pkg = topology_phys_to_logical_pkg(boot_cpu_data.phys_proc_id);
-
if (bdx_uncore_cbox.num_boxes > boot_cpu_data.x86_max_cores)
bdx_uncore_cbox.num_boxes = boot_cpu_data.x86_max_cores;
uncore_msr_uncores = bdx_msr_uncores;
- /* BDX-DE doesn't have SBOX */
- if (boot_cpu_data.x86_model == 86) {
- uncore_msr_uncores[BDX_MSR_UNCORE_SBOX] = NULL;
/* Detect systems with no SBOXes */
- } else if (uncore_extra_pci_dev[pkg].dev[HSWEP_PCI_PCU_3]) {
- struct pci_dev *pdev;
- u32 capid4;
+ if ((boot_cpu_data.x86_model == 86) || hswep_has_limit_sbox(BDX_PCU_DID))
+ uncore_msr_uncores[BDX_MSR_UNCORE_SBOX] = NULL;
- pdev = uncore_extra_pci_dev[pkg].dev[HSWEP_PCI_PCU_3];
- pci_read_config_dword(pdev, 0x94, &capid4);
- if (((capid4 >> 6) & 0x3) == 0)
- bdx_msr_uncores[BDX_MSR_UNCORE_SBOX] = NULL;
- }
hswep_uncore_pcu.constraints = bdx_uncore_pcu_constraints;
}
@@ -3431,11 +3427,6 @@ static const struct pci_device_id bdx_uncore_pci_ids[] = {
.driver_data = UNCORE_PCI_DEV_DATA(UNCORE_EXTRA_PCI_DEV,
BDX_PCI_QPI_PORT2_FILTER),
},
- { /* PCU.3 (for Capability registers) */
- PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x6fc0),
- .driver_data = UNCORE_PCI_DEV_DATA(UNCORE_EXTRA_PCI_DEV,
- HSWEP_PCI_PCU_3),
- },
{ /* end: all zeroes */ }
};
diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
index f656aab..0e33257 100644
--- a/arch/x86/include/asm/idtentry.h
+++ b/arch/x86/include/asm/idtentry.h
@@ -588,6 +588,21 @@ DECLARE_IDTENTRY_RAW(X86_TRAP_MC, exc_machine_check);
#endif
/* NMI */
+
+#if defined(CONFIG_X86_64) && IS_ENABLED(CONFIG_KVM_INTEL)
+/*
+ * Special NOIST entry point for VMX which invokes this on the kernel
+ * stack. asm_exc_nmi() requires an IST to work correctly vs. the NMI
+ * 'executing' marker.
+ *
+ * On 32bit this just uses the regular NMI entry point because 32-bit does
+ * not have ISTs.
+ */
+DECLARE_IDTENTRY(X86_TRAP_NMI, exc_nmi_noist);
+#else
+#define asm_exc_nmi_noist asm_exc_nmi
+#endif
+
DECLARE_IDTENTRY_NMI(X86_TRAP_NMI, exc_nmi);
#ifdef CONFIG_XEN_PV
DECLARE_IDTENTRY_RAW(X86_TRAP_NMI, xenpv_exc_nmi);
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 02d4c74..ef56780 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -358,8 +358,6 @@ struct kvm_mmu {
int (*sync_page)(struct kvm_vcpu *vcpu,
struct kvm_mmu_page *sp);
void (*invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa);
- void (*update_pte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
- u64 *spte, const void *pte);
hpa_t root_hpa;
gpa_t root_pgd;
union kvm_mmu_role mmu_role;
@@ -1019,7 +1017,6 @@ struct kvm_arch {
struct kvm_vm_stat {
ulong mmu_shadow_zapped;
ulong mmu_pte_write;
- ulong mmu_pte_updated;
ulong mmu_pde_zapped;
ulong mmu_flooded;
ulong mmu_recycled;
@@ -1671,6 +1668,7 @@ int kvm_pv_send_ipi(struct kvm *kvm, unsigned long ipi_bitmap_low,
unsigned long icr, int op_64_bit);
void kvm_define_user_return_msr(unsigned index, u32 msr);
+int kvm_probe_user_return_msr(u32 msr);
int kvm_set_user_return_msr(unsigned index, u64 val, u64 mask);
u64 kvm_scale_tsc(struct kvm_vcpu *vcpu, u64 tsc);
diff --git a/arch/x86/include/asm/smp.h b/arch/x86/include/asm/smp.h
index 57ef209..630ff08 100644
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -132,7 +132,7 @@ void native_play_dead(void);
void play_dead_common(void);
void wbinvd_on_cpu(int cpu);
int wbinvd_on_all_cpus(void);
-bool wakeup_cpu0(void);
+void cond_wakeup_cpu0(void);
void native_smp_send_reschedule(int cpu);
void native_send_call_func_ipi(const struct cpumask *mask);
diff --git a/arch/x86/kernel/apic/x2apic_uv_x.c b/arch/x86/kernel/apic/x2apic_uv_x.c
index 235f5cd..40f466d 100644
--- a/arch/x86/kernel/apic/x2apic_uv_x.c
+++ b/arch/x86/kernel/apic/x2apic_uv_x.c
@@ -1652,6 +1652,9 @@ static __init int uv_system_init_hubless(void)
if (rc < 0)
return rc;
+ /* Set section block size for current node memory */
+ set_block_size();
+
/* Create user access node */
if (rc >= 0)
uv_setup_proc_files(1);
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 35ad848..25148eb 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1847,7 +1847,7 @@ static inline void setup_getcpu(int cpu)
unsigned long cpudata = vdso_encode_cpunode(cpu, early_cpu_to_node(cpu));
struct desc_struct d = { };
- if (boot_cpu_has(X86_FEATURE_RDTSCP))
+ if (boot_cpu_has(X86_FEATURE_RDTSCP) || boot_cpu_has(X86_FEATURE_RDPID))
write_rdtscp_aux(cpudata);
/* Store CPU and node number in limit. */
diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c
index ec6f041..bbbd248 100644
--- a/arch/x86/kernel/cpu/microcode/core.c
+++ b/arch/x86/kernel/cpu/microcode/core.c
@@ -629,16 +629,16 @@ static ssize_t reload_store(struct device *dev,
if (val != 1)
return size;
- tmp_ret = microcode_ops->request_microcode_fw(bsp, &microcode_pdev->dev, true);
- if (tmp_ret != UCODE_NEW)
- return size;
-
get_online_cpus();
ret = check_online_cpus();
if (ret)
goto put;
+ tmp_ret = microcode_ops->request_microcode_fw(bsp, &microcode_pdev->dev, true);
+ if (tmp_ret != UCODE_NEW)
+ goto put;
+
mutex_lock(&microcode_mutex);
ret = microcode_reload_late();
mutex_unlock(&microcode_mutex);
diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
index a8f3af2..b1deacb 100644
--- a/arch/x86/kernel/crash.c
+++ b/arch/x86/kernel/crash.c
@@ -337,7 +337,7 @@ int crash_setup_memmap_entries(struct kimage *image, struct boot_params *params)
struct crash_memmap_data cmd;
struct crash_mem *cmem;
- cmem = vzalloc(sizeof(struct crash_mem));
+ cmem = vzalloc(struct_size(cmem, ranges, 1));
if (!cmem)
return -ENOMEM;
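[Note] struct_size() from <linux/overflow.h> computes the size of a structure that ends in a flexible array, with overflow checking; a minimal sketch using an illustrative stand-in for struct crash_mem:

	struct demo_mem {
		unsigned int nr_ranges;
		u64 ranges[];			/* flexible array member */
	};

	static struct demo_mem *alloc_demo_mem(unsigned int n)
	{
		struct demo_mem *m;

		/* header plus n trailing elements, overflow-checked */
		m = vzalloc(struct_size(m, ranges, n));
		return m;
	}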
diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
index 22aad41..629c499 100644
--- a/arch/x86/kernel/e820.c
+++ b/arch/x86/kernel/e820.c
@@ -31,8 +31,8 @@
* - inform the user about the firmware's notion of memory layout
* via /sys/firmware/memmap
*
- * - the hibernation code uses it to generate a kernel-independent MD5
- * fingerprint of the physical memory layout of a system.
+ * - the hibernation code uses it to generate a kernel-independent CRC32
+ * checksum of the physical memory layout of a system.
*
* - 'e820_table_kexec': a slightly modified (by the kernel) firmware version
* passed to us by the bootloader - the major difference between
diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index 39f7d8c..535da74 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -159,6 +159,8 @@ NOKPROBE_SYMBOL(skip_prefixes);
int can_boost(struct insn *insn, void *addr)
{
kprobe_opcode_t opcode;
+ insn_byte_t prefix;
+ int i;
if (search_exception_tables((unsigned long)addr))
return 0; /* Page fault may occur on this address. */
@@ -171,9 +173,14 @@ int can_boost(struct insn *insn, void *addr)
if (insn->opcode.nbytes != 1)
return 0;
- /* Can't boost Address-size override prefix */
- if (unlikely(inat_is_address_size_prefix(insn->attr)))
- return 0;
+ for_each_insn_prefix(insn, i, prefix) {
+ insn_attr_t attr;
+
+ attr = inat_get_opcode_attribute(prefix);
+ /* Can't boost Address-size override prefix and CS override prefix */
+ if (prefix == 0x2e || inat_is_address_size_prefix(attr))
+ return 0;
+ }
opcode = insn->opcode.bytes[0];
@@ -198,8 +205,8 @@ int can_boost(struct insn *insn, void *addr)
/* clear and set flags are boostable */
return (opcode == 0xf5 || (0xf7 < opcode && opcode < 0xfe));
default:
- /* CS override prefix and call are not boostable */
- return (opcode != 0x2e && opcode != 0x9a);
+ /* call is not boostable */
+ return opcode != 0x9a;
}
}
diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
index bf250a3..2ef961c 100644
--- a/arch/x86/kernel/nmi.c
+++ b/arch/x86/kernel/nmi.c
@@ -524,6 +524,16 @@ DEFINE_IDTENTRY_RAW(exc_nmi)
mds_user_clear_cpu_buffers();
}
+#if defined(CONFIG_X86_64) && IS_ENABLED(CONFIG_KVM_INTEL)
+DEFINE_IDTENTRY_RAW(exc_nmi_noist)
+{
+ exc_nmi(regs);
+}
+#endif
+#if IS_MODULE(CONFIG_KVM_INTEL)
+EXPORT_SYMBOL_GPL(asm_exc_nmi_noist);
+#endif
+
void stop_nmi(void)
{
ignore_nmis++;
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index d237950..28c89fc 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -1051,9 +1051,6 @@ void __init setup_arch(char **cmdline_p)
cleanup_highmap();
- /* Look for ACPI tables and reserve memory occupied by them. */
- acpi_boot_table_init();
-
memblock_set_current_limit(ISA_END_ADDRESS);
e820__memblock_setup();
@@ -1132,6 +1129,8 @@ void __init setup_arch(char **cmdline_p)
reserve_initrd();
acpi_table_upgrade();
+ /* Look for ACPI tables and reserve memory occupied by them. */
+ acpi_boot_table_init();
vsmp_init();
diff --git a/arch/x86/kernel/sev-es-shared.c b/arch/x86/kernel/sev-es-shared.c
index cdc04d0..ecb20b1 100644
--- a/arch/x86/kernel/sev-es-shared.c
+++ b/arch/x86/kernel/sev-es-shared.c
@@ -63,6 +63,7 @@ static bool sev_es_negotiate_protocol(void)
static __always_inline void vc_ghcb_invalidate(struct ghcb *ghcb)
{
+ ghcb->save.sw_exit_code = 0;
memset(ghcb->save.valid_bitmap, 0, sizeof(ghcb->save.valid_bitmap));
}
@@ -186,7 +187,6 @@ void __init do_vc_no_ghcb(struct pt_regs *regs, unsigned long exit_code)
* make it accessible to the hypervisor.
*
* In particular, check for:
- * - Hypervisor CPUID bit
* - Availability of CPUID leaf 0x8000001f
* - SEV CPUID bit.
*
@@ -194,10 +194,7 @@ void __init do_vc_no_ghcb(struct pt_regs *regs, unsigned long exit_code)
* can't be checked here.
*/
- if ((fn == 1 && !(regs->cx & BIT(31))))
- /* Hypervisor bit */
- goto fail;
- else if (fn == 0x80000000 && (regs->ax < 0x8000001f))
+ if (fn == 0x80000000 && (regs->ax < 0x8000001f))
/* SEV leaf check */
goto fail;
else if ((fn == 0x8000001f && !(regs->ax & BIT(1))))
diff --git a/arch/x86/kernel/sev-es.c b/arch/x86/kernel/sev-es.c
index 04a780a..e0cdab7 100644
--- a/arch/x86/kernel/sev-es.c
+++ b/arch/x86/kernel/sev-es.c
@@ -191,8 +191,18 @@ static __always_inline struct ghcb *sev_es_get_ghcb(struct ghcb_state *state)
if (unlikely(data->ghcb_active)) {
/* GHCB is already in use - save its contents */
- if (unlikely(data->backup_ghcb_active))
- return NULL;
+ if (unlikely(data->backup_ghcb_active)) {
+ /*
+ * Backup-GHCB is also already in use. There is no way
+ * to continue here so just kill the machine. To make
+ * panic() work, mark GHCBs inactive so that messages
+ * can be printed out.
+ */
+ data->ghcb_active = false;
+ data->backup_ghcb_active = false;
+
+ panic("Unable to handle #VC exception! GHCB and Backup GHCB are already in use");
+ }
/* Mark backup_ghcb active before writing to it */
data->backup_ghcb_active = true;
@@ -209,24 +219,6 @@ static __always_inline struct ghcb *sev_es_get_ghcb(struct ghcb_state *state)
return ghcb;
}
-static __always_inline void sev_es_put_ghcb(struct ghcb_state *state)
-{
- struct sev_es_runtime_data *data;
- struct ghcb *ghcb;
-
- data = this_cpu_read(runtime_data);
- ghcb = &data->ghcb_page;
-
- if (state->ghcb) {
- /* Restore GHCB from Backup */
- *ghcb = *state->ghcb;
- data->backup_ghcb_active = false;
- state->ghcb = NULL;
- } else {
- data->ghcb_active = false;
- }
-}
-
/* Needed in vc_early_forward_exception */
void do_early_exception(struct pt_regs *regs, int trapnr);
@@ -296,31 +288,44 @@ static enum es_result vc_write_mem(struct es_em_ctxt *ctxt,
u16 d2;
u8 d1;
- /* If instruction ran in kernel mode and the I/O buffer is in kernel space */
- if (!user_mode(ctxt->regs) && !access_ok(target, size)) {
- memcpy(dst, buf, size);
- return ES_OK;
- }
-
+ /*
+ * This function uses __put_user() independent of whether kernel or user
+ * memory is accessed. This works fine because __put_user() does no
+ * sanity checks of the pointer being accessed. All that it does is
+ * to report when the access failed.
+ *
+ * Also, this function runs in atomic context, so __put_user() is not
+ * allowed to sleep. The page-fault handler detects that it is running
+ * in atomic context and will not try to take mmap_sem and handle the
+ * fault, so additional pagefault_enable()/disable() calls are not
+ * needed.
+ *
+ * The access can't be done via copy_to_user() here because
+ * vc_write_mem() must not use string instructions to access unsafe
+ * memory. The reason is that MOVS is emulated by the #VC handler by
+ * splitting the move up into a read and a write and taking a nested #VC
+ * exception on whatever of them is the MMIO access. Using string
+ * instructions here would cause infinite nesting.
+ */
switch (size) {
case 1:
memcpy(&d1, buf, 1);
- if (put_user(d1, target))
+ if (__put_user(d1, target))
goto fault;
break;
case 2:
memcpy(&d2, buf, 2);
- if (put_user(d2, target))
+ if (__put_user(d2, target))
goto fault;
break;
case 4:
memcpy(&d4, buf, 4);
- if (put_user(d4, target))
+ if (__put_user(d4, target))
goto fault;
break;
case 8:
memcpy(&d8, buf, 8);
- if (put_user(d8, target))
+ if (__put_user(d8, target))
goto fault;
break;
default:
@@ -351,30 +356,43 @@ static enum es_result vc_read_mem(struct es_em_ctxt *ctxt,
u16 d2;
u8 d1;
- /* If instruction ran in kernel mode and the I/O buffer is in kernel space */
- if (!user_mode(ctxt->regs) && !access_ok(s, size)) {
- memcpy(buf, src, size);
- return ES_OK;
- }
-
+ /*
+ * This function uses __get_user() independent of whether kernel or user
+ * memory is accessed. This works fine because __get_user() does no
+ * sanity checks of the pointer being accessed. All that it does is
+ * to report when the access failed.
+ *
+ * Also, this function runs in atomic context, so __get_user() is not
+ * allowed to sleep. The page-fault handler detects that it is running
+ * in atomic context and will not try to take mmap_sem and handle the
+ * fault, so additional pagefault_enable()/disable() calls are not
+ * needed.
+ *
+ * The access can't be done via copy_from_user() here because
+ * vc_read_mem() must not use string instructions to access unsafe
+ * memory. The reason is that MOVS is emulated by the #VC handler by
+ * splitting the move up into a read and a write and taking a nested #VC
+ * exception on whatever of them is the MMIO access. Using string
+ * instructions here would cause infinite nesting.
+ */
switch (size) {
case 1:
- if (get_user(d1, s))
+ if (__get_user(d1, s))
goto fault;
memcpy(buf, &d1, 1);
break;
case 2:
- if (get_user(d2, s))
+ if (__get_user(d2, s))
goto fault;
memcpy(buf, &d2, 2);
break;
case 4:
- if (get_user(d4, s))
+ if (__get_user(d4, s))
goto fault;
memcpy(buf, &d4, 4);
break;
case 8:
- if (get_user(d8, s))
+ if (__get_user(d8, s))
goto fault;
memcpy(buf, &d8, 8);
break;
@@ -434,6 +452,29 @@ static enum es_result vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt
/* Include code shared with pre-decompression boot stage */
#include "sev-es-shared.c"
+static __always_inline void sev_es_put_ghcb(struct ghcb_state *state)
+{
+ struct sev_es_runtime_data *data;
+ struct ghcb *ghcb;
+
+ data = this_cpu_read(runtime_data);
+ ghcb = &data->ghcb_page;
+
+ if (state->ghcb) {
+ /* Restore GHCB from Backup */
+ *ghcb = *state->ghcb;
+ data->backup_ghcb_active = false;
+ state->ghcb = NULL;
+ } else {
+ /*
+ * Invalidate the GHCB so a VMGEXIT instruction issued
+ * from userspace won't appear to be valid.
+ */
+ vc_ghcb_invalidate(ghcb);
+ data->ghcb_active = false;
+ }
+}
+
void noinstr __sev_es_nmi_complete(void)
{
struct ghcb_state state;
@@ -1228,6 +1269,10 @@ static __always_inline void vc_forward_exception(struct es_em_ctxt *ctxt)
case X86_TRAP_UD:
exc_invalid_op(ctxt->regs);
break;
+ case X86_TRAP_PF:
+ write_cr2(ctxt->fi.cr2);
+ exc_page_fault(ctxt->regs, error_code);
+ break;
case X86_TRAP_AC:
exc_alignment_check(ctxt->regs, error_code);
break;
@@ -1257,7 +1302,6 @@ static __always_inline bool on_vc_fallback_stack(struct pt_regs *regs)
*/
DEFINE_IDTENTRY_VC_SAFE_STACK(exc_vmm_communication)
{
- struct sev_es_runtime_data *data = this_cpu_read(runtime_data);
irqentry_state_t irq_state;
struct ghcb_state state;
struct es_em_ctxt ctxt;
@@ -1283,16 +1327,6 @@ DEFINE_IDTENTRY_VC_SAFE_STACK(exc_vmm_communication)
*/
ghcb = sev_es_get_ghcb(&state);
- if (!ghcb) {
- /*
- * Mark GHCBs inactive so that panic() is able to print the
- * message.
- */
- data->ghcb_active = false;
- data->backup_ghcb_active = false;
-
- panic("Unable to handle #VC exception! GHCB and Backup GHCB are already in use");
- }
vc_ghcb_invalidate(ghcb);
result = vc_init_em_ctxt(&ctxt, regs, error_code);
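
A minimal stand-alone model of the access pattern vc_read_mem()/vc_write_mem() rely on: every transfer is a single 1/2/4/8-byte scalar move, never a bulk or string copy, so the #VC MMIO emulation cannot nest on its own copy. copy_scalar() is an illustrative name, not a kernel helper:

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/*
 * Toy model only: each width is bounced through a correctly sized
 * local so the data is moved as one scalar item, mirroring the
 * __get_user()/__put_user() shape used above.
 */
static bool copy_scalar(void *dst, const void *src, unsigned int size)
{
	switch (size) {
	case 1: { uint8_t  v; memcpy(&v, src, 1); memcpy(dst, &v, 1); return true; }
	case 2: { uint16_t v; memcpy(&v, src, 2); memcpy(dst, &v, 2); return true; }
	case 4: { uint32_t v; memcpy(&v, src, 4); memcpy(dst, &v, 4); return true; }
	case 8: { uint64_t v; memcpy(&v, src, 8); memcpy(dst, &v, 8); return true; }
	default:
		return false;		/* unsupported access width */
	}
}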
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 5ea5f96..582387f 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -452,47 +452,12 @@ static bool match_smt(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
return false;
}
-/*
- * Define snc_cpu[] for SNC (Sub-NUMA Cluster) CPUs.
- *
- * These are Intel CPUs that enumerate an LLC that is shared by
- * multiple NUMA nodes. The LLC on these systems is shared for
- * off-package data access but private to the NUMA node (half
- * of the package) for on-package access.
- *
- * CPUID (the source of the information about the LLC) can only
- * enumerate the cache as being shared *or* unshared, but not
- * this particular configuration. The CPU in this case enumerates
- * the cache to be shared across the entire package (spanning both
- * NUMA nodes).
- */
-
-static const struct x86_cpu_id snc_cpu[] = {
- X86_MATCH_INTEL_FAM6_MODEL(SKYLAKE_X, NULL),
- {}
-};
-
-static bool match_llc(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
+static bool match_die(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
{
- int cpu1 = c->cpu_index, cpu2 = o->cpu_index;
-
- /* Do not match if we do not have a valid APICID for cpu: */
- if (per_cpu(cpu_llc_id, cpu1) == BAD_APICID)
- return false;
-
- /* Do not match if LLC id does not match: */
- if (per_cpu(cpu_llc_id, cpu1) != per_cpu(cpu_llc_id, cpu2))
- return false;
-
- /*
- * Allow the SNC topology without warning. Return of false
- * means 'c' does not share the LLC of 'o'. This will be
- * reflected to userspace.
- */
- if (!topology_same_node(c, o) && x86_match_cpu(snc_cpu))
- return false;
-
- return topology_sane(c, o, "llc");
+ if (c->phys_proc_id == o->phys_proc_id &&
+ c->cpu_die_id == o->cpu_die_id)
+ return true;
+ return false;
}
/*
@@ -507,12 +472,50 @@ static bool match_pkg(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
return false;
}
-static bool match_die(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
+/*
+ * Define intel_cod_cpu[] for Intel COD (Cluster-on-Die) CPUs.
+ *
+ * Any Intel CPU that has multiple nodes per package and does not
+ * match intel_cod_cpu[] has the SNC (Sub-NUMA Cluster) topology.
+ *
+ * When in SNC mode, these CPUs enumerate an LLC that is shared
+ * by multiple NUMA nodes. The LLC is shared for off-package data
+ * access but private to the NUMA node (half of the package) for
+ * on-package access. CPUID (the source of the information about
+ * the LLC) can only enumerate the cache as shared or unshared,
+ * but not this particular configuration.
+ */
+
+static const struct x86_cpu_id intel_cod_cpu[] = {
+ X86_MATCH_INTEL_FAM6_MODEL(HASWELL_X, 0), /* COD */
+ X86_MATCH_INTEL_FAM6_MODEL(BROADWELL_X, 0), /* COD */
+ X86_MATCH_INTEL_FAM6_MODEL(ANY, 1), /* SNC */
+ {}
+};
+
+static bool match_llc(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
{
- if ((c->phys_proc_id == o->phys_proc_id) &&
- (c->cpu_die_id == o->cpu_die_id))
- return true;
- return false;
+ const struct x86_cpu_id *id = x86_match_cpu(intel_cod_cpu);
+ int cpu1 = c->cpu_index, cpu2 = o->cpu_index;
+ bool intel_snc = id && id->driver_data;
+
+ /* Do not match if we do not have a valid APICID for cpu: */
+ if (per_cpu(cpu_llc_id, cpu1) == BAD_APICID)
+ return false;
+
+ /* Do not match if LLC id does not match: */
+ if (per_cpu(cpu_llc_id, cpu1) != per_cpu(cpu_llc_id, cpu2))
+ return false;
+
+ /*
+ * Allow the SNC topology without warning. Return of false
+ * means 'c' does not share the LLC of 'o'. This will be
+ * reflected to userspace.
+ */
+ if (match_pkg(c, o) && !topology_same_node(c, o) && intel_snc)
+ return false;
+
+ return topology_sane(c, o, "llc");
}
@@ -586,14 +589,23 @@ void set_cpu_sibling_map(int cpu)
for_each_cpu(i, cpu_sibling_setup_mask) {
o = &cpu_data(i);
+ if (match_pkg(c, o) && !topology_same_node(c, o))
+ x86_has_numa_in_package = true;
+
if ((i == cpu) || (has_smt && match_smt(c, o)))
link_mask(topology_sibling_cpumask, cpu, i);
if ((i == cpu) || (has_mp && match_llc(c, o)))
link_mask(cpu_llc_shared_mask, cpu, i);
+ if ((i == cpu) || (has_mp && match_die(c, o)))
+ link_mask(topology_die_cpumask, cpu, i);
}
+ threads = cpumask_weight(topology_sibling_cpumask(cpu));
+ if (threads > __max_smt_threads)
+ __max_smt_threads = threads;
+
/*
* This needs a separate iteration over the cpus because we rely on all
* topology_sibling_cpumask links to be set-up.
@@ -607,8 +619,7 @@ void set_cpu_sibling_map(int cpu)
/*
* Does this new cpu bringup a new core?
*/
- if (cpumask_weight(
- topology_sibling_cpumask(cpu)) == 1) {
+ if (threads == 1) {
/*
* for each core in package, increment
* the booted_cores for this new cpu
@@ -625,16 +636,7 @@ void set_cpu_sibling_map(int cpu)
} else if (i != cpu && !c->booted_cores)
c->booted_cores = cpu_data(i).booted_cores;
}
- if (match_pkg(c, o) && !topology_same_node(c, o))
- x86_has_numa_in_package = true;
-
- if ((i == cpu) || (has_mp && match_die(c, o)))
- link_mask(topology_die_cpumask, cpu, i);
}
-
- threads = cpumask_weight(topology_sibling_cpumask(cpu));
- if (threads > __max_smt_threads)
- __max_smt_threads = threads;
}
/* maps the cpu to the sched domain representing multi-core */
@@ -1655,13 +1657,17 @@ void play_dead_common(void)
local_irq_disable();
}
-bool wakeup_cpu0(void)
+/**
+ * cond_wakeup_cpu0 - Wake up CPU0 if needed.
+ *
+ * If NMI wants to wake up CPU0, start CPU0.
+ */
+void cond_wakeup_cpu0(void)
{
if (smp_processor_id() == 0 && enable_start_cpu0)
- return true;
-
- return false;
+ start_cpu0();
}
+EXPORT_SYMBOL_GPL(cond_wakeup_cpu0);
/*
* We need to flush the caches before going to sleep, lest we have
@@ -1730,11 +1736,8 @@ static inline void mwait_play_dead(void)
__monitor(mwait_ptr, 0, 0);
mb();
__mwait(eax, 0);
- /*
- * If NMI wants to wake up CPU0, start CPU0.
- */
- if (wakeup_cpu0())
- start_cpu0();
+
+ cond_wakeup_cpu0();
}
}
@@ -1745,11 +1748,8 @@ void hlt_play_dead(void)
while (1) {
native_halt();
- /*
- * If NMI wants to wake up CPU0, start CPU0.
- */
- if (wakeup_cpu0())
- start_cpu0();
+
+ cond_wakeup_cpu0();
}
}
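
The intel_cod_cpu[] decision above can be illustrated with a toy match table whose driver_data separates Cluster-on-Die parts (0) from everything else, which is treated as Sub-NUMA-Cluster (1). The struct and model numbers below are illustrative only, not the kernel's x86_cpu_id layout:

#include <stdbool.h>

struct toy_cpu_id {
	int model;			/* -1 matches any model, 0 ends the table */
	unsigned long driver_data;	/* 0 = COD, 1 = SNC */
};

static const struct toy_cpu_id cod_table[] = {
	{ 0x3f, 0 },			/* Haswell-X style COD part */
	{ 0x4f, 0 },			/* Broadwell-X style COD part */
	{   -1, 1 },			/* any other multi-node package: SNC */
	{    0, 0 }
};

/* First matching entry wins, exactly as an x86_match_cpu()-style walk. */
static bool package_is_snc(int model)
{
	for (const struct toy_cpu_id *id = cod_table; id->model; id++) {
		if (id->model == model || id->model == -1)
			return id->driver_data != 0;
	}
	return false;			/* not a known multi-node package */
}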
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 62157b1..56a62d5 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -572,7 +572,8 @@ static int __do_cpuid_func_emulated(struct kvm_cpuid_array *array, u32 func)
case 7:
entry->flags |= KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
entry->eax = 0;
- entry->ecx = F(RDPID);
+ if (kvm_cpu_cap_has(X86_FEATURE_RDTSCP))
+ entry->ecx = F(RDPID);
++array->nent;
default:
break;
diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 1453b9b..e82151b 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -4220,7 +4220,7 @@ static bool valid_cr(int nr)
}
}
-static int check_cr_read(struct x86_emulate_ctxt *ctxt)
+static int check_cr_access(struct x86_emulate_ctxt *ctxt)
{
if (!valid_cr(ctxt->modrm_reg))
return emulate_ud(ctxt);
@@ -4228,80 +4228,6 @@ static int check_cr_read(struct x86_emulate_ctxt *ctxt)
return X86EMUL_CONTINUE;
}
-static int check_cr_write(struct x86_emulate_ctxt *ctxt)
-{
- u64 new_val = ctxt->src.val64;
- int cr = ctxt->modrm_reg;
- u64 efer = 0;
-
- static u64 cr_reserved_bits[] = {
- 0xffffffff00000000ULL,
- 0, 0, 0, /* CR3 checked later */
- CR4_RESERVED_BITS,
- 0, 0, 0,
- CR8_RESERVED_BITS,
- };
-
- if (!valid_cr(cr))
- return emulate_ud(ctxt);
-
- if (new_val & cr_reserved_bits[cr])
- return emulate_gp(ctxt, 0);
-
- switch (cr) {
- case 0: {
- u64 cr4;
- if (((new_val & X86_CR0_PG) && !(new_val & X86_CR0_PE)) ||
- ((new_val & X86_CR0_NW) && !(new_val & X86_CR0_CD)))
- return emulate_gp(ctxt, 0);
-
- cr4 = ctxt->ops->get_cr(ctxt, 4);
- ctxt->ops->get_msr(ctxt, MSR_EFER, &efer);
-
- if ((new_val & X86_CR0_PG) && (efer & EFER_LME) &&
- !(cr4 & X86_CR4_PAE))
- return emulate_gp(ctxt, 0);
-
- break;
- }
- case 3: {
- u64 rsvd = 0;
-
- ctxt->ops->get_msr(ctxt, MSR_EFER, &efer);
- if (efer & EFER_LMA) {
- u64 maxphyaddr;
- u32 eax, ebx, ecx, edx;
-
- eax = 0x80000008;
- ecx = 0;
- if (ctxt->ops->get_cpuid(ctxt, &eax, &ebx, &ecx,
- &edx, true))
- maxphyaddr = eax & 0xff;
- else
- maxphyaddr = 36;
- rsvd = rsvd_bits(maxphyaddr, 63);
- if (ctxt->ops->get_cr(ctxt, 4) & X86_CR4_PCIDE)
- rsvd &= ~X86_CR3_PCID_NOFLUSH;
- }
-
- if (new_val & rsvd)
- return emulate_gp(ctxt, 0);
-
- break;
- }
- case 4: {
- ctxt->ops->get_msr(ctxt, MSR_EFER, &efer);
-
- if ((efer & EFER_LMA) && !(new_val & X86_CR4_PAE))
- return emulate_gp(ctxt, 0);
-
- break;
- }
- }
-
- return X86EMUL_CONTINUE;
-}
-
static int check_dr7_gd(struct x86_emulate_ctxt *ctxt)
{
unsigned long dr7;
@@ -4576,7 +4502,7 @@ static const struct opcode group8[] = {
* from the register case of group9.
*/
static const struct gprefix pfx_0f_c7_7 = {
- N, N, N, II(DstMem | ModRM | Op3264 | EmulateOnUD, em_rdpid, rdtscp),
+ N, N, N, II(DstMem | ModRM | Op3264 | EmulateOnUD, em_rdpid, rdpid),
};
@@ -4841,10 +4767,10 @@ static const struct opcode twobyte_table[256] = {
D(ImplicitOps | ModRM | SrcMem | NoAccess), /* 8 * reserved NOP */
D(ImplicitOps | ModRM | SrcMem | NoAccess), /* NOP + 7 * reserved NOP */
/* 0x20 - 0x2F */
- DIP(ModRM | DstMem | Priv | Op3264 | NoMod, cr_read, check_cr_read),
+ DIP(ModRM | DstMem | Priv | Op3264 | NoMod, cr_read, check_cr_access),
DIP(ModRM | DstMem | Priv | Op3264 | NoMod, dr_read, check_dr_read),
IIP(ModRM | SrcMem | Priv | Op3264 | NoMod, em_cr_write, cr_write,
- check_cr_write),
+ check_cr_access),
IIP(ModRM | SrcMem | Priv | Op3264 | NoMod, em_dr_write, dr_write,
check_dr_write),
N, N, N, N,
diff --git a/arch/x86/kvm/kvm_emulate.h b/arch/x86/kvm/kvm_emulate.h
index 43c93ff..7d5be04 100644
--- a/arch/x86/kvm/kvm_emulate.h
+++ b/arch/x86/kvm/kvm_emulate.h
@@ -468,6 +468,7 @@ enum x86_intercept {
x86_intercept_clgi,
x86_intercept_skinit,
x86_intercept_rdtscp,
+ x86_intercept_rdpid,
x86_intercept_icebp,
x86_intercept_wbinvd,
x86_intercept_monitor,
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 4ca81ae..5759eb0 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -1908,8 +1908,8 @@ void kvm_lapic_expired_hv_timer(struct kvm_vcpu *vcpu)
if (!apic->lapic_timer.hv_timer_in_use)
goto out;
WARN_ON(rcuwait_active(&vcpu->wait));
- cancel_hv_timer(apic);
apic_timer_expired(apic, false);
+ cancel_hv_timer(apic);
if (apic_lvtt_period(apic) && apic->lapic_timer.period) {
advance_periodic_target_expiration(apic);
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index dacbd13..ac50547 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1715,13 +1715,6 @@ static int nonpaging_sync_page(struct kvm_vcpu *vcpu,
return 0;
}
-static void nonpaging_update_pte(struct kvm_vcpu *vcpu,
- struct kvm_mmu_page *sp, u64 *spte,
- const void *pte)
-{
- WARN_ON(1);
-}
-
#define KVM_PAGE_ARRAY_NR 16
struct kvm_mmu_pages {
@@ -3195,14 +3188,14 @@ void kvm_mmu_free_roots(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
if (mmu->shadow_root_level >= PT64_ROOT_4LEVEL &&
(mmu->root_level >= PT64_ROOT_4LEVEL || mmu->direct_map)) {
mmu_free_root_page(kvm, &mmu->root_hpa, &invalid_list);
- } else {
+ } else if (mmu->pae_root) {
for (i = 0; i < 4; ++i)
if (mmu->pae_root[i] != 0)
mmu_free_root_page(kvm,
&mmu->pae_root[i],
&invalid_list);
- mmu->root_hpa = INVALID_PAGE;
}
+ mmu->root_hpa = INVALID_PAGE;
mmu->root_pgd = 0;
}
@@ -3314,9 +3307,23 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
* the shadow page table may be a PAE or a long mode page table.
*/
pm_mask = PT_PRESENT_MASK;
- if (vcpu->arch.mmu->shadow_root_level == PT64_ROOT_4LEVEL)
+ if (vcpu->arch.mmu->shadow_root_level == PT64_ROOT_4LEVEL) {
pm_mask |= PT_ACCESSED_MASK | PT_WRITABLE_MASK | PT_USER_MASK;
+ /*
+ * Allocate the page for the PDPTEs when shadowing 32-bit NPT
+ * with 64-bit only when needed. Unlike 32-bit NPT, it doesn't
+ * need to be in low mem. See also lm_root below.
+ */
+ if (!vcpu->arch.mmu->pae_root) {
+ WARN_ON_ONCE(!tdp_enabled);
+
+ vcpu->arch.mmu->pae_root = (void *)get_zeroed_page(GFP_KERNEL_ACCOUNT);
+ if (!vcpu->arch.mmu->pae_root)
+ return -ENOMEM;
+ }
+ }
+
for (i = 0; i < 4; ++i) {
MMU_WARN_ON(VALID_PAGE(vcpu->arch.mmu->pae_root[i]));
if (vcpu->arch.mmu->root_level == PT32E_ROOT_LEVEL) {
@@ -3339,21 +3346,19 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
vcpu->arch.mmu->root_hpa = __pa(vcpu->arch.mmu->pae_root);
/*
- * If we shadow a 32 bit page table with a long mode page
- * table we enter this path.
+ * When shadowing 32-bit or PAE NPT with 64-bit NPT, the PML4 and PDP
+ * tables are allocated and initialized at MMU creation as there is no
+ * equivalent level in the guest's NPT to shadow. Allocate the tables
+ * on demand, as running a 32-bit L1 VMM is very rare. The PDP is
+ * handled above (to share logic with PAE), deal with the PML4 here.
*/
if (vcpu->arch.mmu->shadow_root_level == PT64_ROOT_4LEVEL) {
if (vcpu->arch.mmu->lm_root == NULL) {
- /*
- * The additional page necessary for this is only
- * allocated on demand.
- */
-
u64 *lm_root;
lm_root = (void*)get_zeroed_page(GFP_KERNEL_ACCOUNT);
- if (lm_root == NULL)
- return 1;
+ if (!lm_root)
+ return -ENOMEM;
lm_root[0] = __pa(vcpu->arch.mmu->pae_root) | pm_mask;
@@ -3651,6 +3656,14 @@ static bool try_async_pf(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn,
struct kvm_memory_slot *slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
bool async;
+ /*
+ * Retry the page fault if the gfn hit a memslot that is being deleted
+ * or moved. This ensures any existing SPTEs for the old memslot will
+ * be zapped before KVM inserts a new MMIO SPTE for the gfn.
+ */
+ if (slot && (slot->flags & KVM_MEMSLOT_INVALID))
+ return true;
+
/* Don't expose private memslots to L2. */
if (is_guest_mode(vcpu) && !kvm_is_visible_memslot(slot)) {
*pfn = KVM_PFN_NOSLOT;
@@ -3800,7 +3813,6 @@ static void nonpaging_init_context(struct kvm_vcpu *vcpu,
context->gva_to_gpa = nonpaging_gva_to_gpa;
context->sync_page = nonpaging_sync_page;
context->invlpg = NULL;
- context->update_pte = nonpaging_update_pte;
context->root_level = 0;
context->shadow_root_level = PT32E_ROOT_LEVEL;
context->direct_map = true;
@@ -4382,7 +4394,6 @@ static void paging64_init_context_common(struct kvm_vcpu *vcpu,
context->gva_to_gpa = paging64_gva_to_gpa;
context->sync_page = paging64_sync_page;
context->invlpg = paging64_invlpg;
- context->update_pte = paging64_update_pte;
context->shadow_root_level = level;
context->direct_map = false;
}
@@ -4411,7 +4422,6 @@ static void paging32_init_context(struct kvm_vcpu *vcpu,
context->gva_to_gpa = paging32_gva_to_gpa;
context->sync_page = paging32_sync_page;
context->invlpg = paging32_invlpg;
- context->update_pte = paging32_update_pte;
context->shadow_root_level = PT32E_ROOT_LEVEL;
context->direct_map = false;
}
@@ -4493,7 +4503,6 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
context->page_fault = kvm_tdp_page_fault;
context->sync_page = nonpaging_sync_page;
context->invlpg = NULL;
- context->update_pte = nonpaging_update_pte;
context->shadow_root_level = kvm_mmu_get_tdp_level(vcpu);
context->direct_map = true;
context->get_guest_pgd = get_cr3;
@@ -4605,12 +4614,17 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, u32 cr0, u32 cr4, u32 efer,
struct kvm_mmu *context = &vcpu->arch.guest_mmu;
union kvm_mmu_role new_role = kvm_calc_shadow_npt_root_page_role(vcpu);
- context->shadow_root_level = new_role.base.level;
-
__kvm_mmu_new_pgd(vcpu, nested_cr3, new_role.base, false, false);
- if (new_role.as_u64 != context->mmu_role.as_u64)
+ if (new_role.as_u64 != context->mmu_role.as_u64) {
shadow_mmu_init_context(vcpu, context, cr0, cr4, efer, new_role);
+
+ /*
+ * Override the level set by the common init helper, nested TDP
+ * always uses the host's TDP configuration.
+ */
+ context->shadow_root_level = new_role.base.level;
+ }
}
EXPORT_SYMBOL_GPL(kvm_init_shadow_npt_mmu);
@@ -4665,7 +4679,6 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
context->gva_to_gpa = ept_gva_to_gpa;
context->sync_page = ept_sync_page;
context->invlpg = ept_invlpg;
- context->update_pte = ept_update_pte;
context->root_level = level;
context->direct_map = false;
context->mmu_role.as_u64 = new_role.as_u64;
@@ -4813,19 +4826,6 @@ void kvm_mmu_unload(struct kvm_vcpu *vcpu)
}
EXPORT_SYMBOL_GPL(kvm_mmu_unload);
-static void mmu_pte_write_new_pte(struct kvm_vcpu *vcpu,
- struct kvm_mmu_page *sp, u64 *spte,
- const void *new)
-{
- if (sp->role.level != PG_LEVEL_4K) {
- ++vcpu->kvm->stat.mmu_pde_zapped;
- return;
- }
-
- ++vcpu->kvm->stat.mmu_pte_updated;
- vcpu->arch.mmu->update_pte(vcpu, sp, spte, new);
-}
-
static bool need_remote_flush(u64 old, u64 new)
{
if (!is_shadow_present_pte(old))
@@ -4941,22 +4941,6 @@ static u64 *get_written_sptes(struct kvm_mmu_page *sp, gpa_t gpa, int *nspte)
return spte;
}
-/*
- * Ignore various flags when determining if a SPTE can be immediately
- * overwritten for the current MMU.
- * - level: explicitly checked in mmu_pte_write_new_pte(), and will never
- * match the current MMU role, as MMU's level tracks the root level.
- * - access: updated based on the new guest PTE
- * - quadrant: handled by get_written_sptes()
- * - invalid: always false (loop only walks valid shadow pages)
- */
-static const union kvm_mmu_page_role role_ign = {
- .level = 0xf,
- .access = 0x7,
- .quadrant = 0x3,
- .invalid = 0x1,
-};
-
static void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
const u8 *new, int bytes,
struct kvm_page_track_notifier_node *node)
@@ -5007,14 +4991,10 @@ static void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
local_flush = true;
while (npte--) {
- u32 base_role = vcpu->arch.mmu->mmu_role.base.word;
-
entry = *spte;
mmu_page_zap_pte(vcpu->kvm, sp, spte, NULL);
- if (gentry &&
- !((sp->role.word ^ base_role) & ~role_ign.word) &&
- rmap_can_add(vcpu))
- mmu_pte_write_new_pte(vcpu, sp, spte, &gentry);
+ if (gentry && sp->role.level != PG_LEVEL_4K)
+ ++vcpu->kvm->stat.mmu_pde_zapped;
if (need_remote_flush(entry, *spte))
remote_flush = true;
++spte;
@@ -5297,9 +5277,11 @@ static int __kvm_mmu_create(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu)
* while the PDP table is a per-vCPU construct that's allocated at MMU
* creation. When emulating 32-bit mode, cr3 is only 32 bits even on
* x86_64. Therefore we need to allocate the PDP table in the first
- * 4GB of memory, which happens to fit the DMA32 zone. Except for
- * SVM's 32-bit NPT support, TDP paging doesn't use PAE paging and can
- * skip allocating the PDP table.
+ * 4GB of memory, which happens to fit the DMA32 zone. TDP paging
+ * generally doesn't use PAE paging and can skip allocating the PDP
+ * table. The main exception, handled here, is SVM's 32-bit NPT. The
+ * other exception is for shadowing L1's 32-bit or PAE NPT on 64-bit
+ * KVM; that horror is handled on-demand by mmu_alloc_shadow_roots().
*/
if (tdp_enabled && kvm_mmu_get_tdp_level(vcpu) > PT32E_ROOT_LEVEL)
return 0;
@@ -5972,6 +5954,7 @@ static void kvm_recover_nx_lpages(struct kvm *kvm)
struct kvm_mmu_page *sp;
unsigned int ratio;
LIST_HEAD(invalid_list);
+ bool flush = false;
ulong to_zap;
rcu_idx = srcu_read_lock(&kvm->srcu);
@@ -5992,20 +5975,20 @@ static void kvm_recover_nx_lpages(struct kvm *kvm)
struct kvm_mmu_page,
lpage_disallowed_link);
WARN_ON_ONCE(!sp->lpage_disallowed);
- if (sp->tdp_mmu_page)
- kvm_tdp_mmu_zap_gfn_range(kvm, sp->gfn,
- sp->gfn + KVM_PAGES_PER_HPAGE(sp->role.level));
- else {
+ if (sp->tdp_mmu_page) {
+ flush |= kvm_tdp_mmu_zap_sp(kvm, sp);
+ } else {
kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
WARN_ON_ONCE(sp->lpage_disallowed);
}
if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
- kvm_mmu_commit_zap_page(kvm, &invalid_list);
+ kvm_mmu_remote_flush_or_zap(kvm, &invalid_list, flush);
cond_resched_lock(&kvm->mmu_lock);
+ flush = false;
}
}
- kvm_mmu_commit_zap_page(kvm, &invalid_list);
+ kvm_mmu_remote_flush_or_zap(kvm, &invalid_list, flush);
spin_unlock(&kvm->mmu_lock);
srcu_read_unlock(&kvm->srcu, rcu_idx);
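
The flush batching that kvm_recover_nx_lpages() now uses follows a simple shape: accumulate a pending-TLB-flush flag while zapping, flush whenever the lock must be dropped, and flush once more at the end. A self-contained toy of that shape, with fake helpers standing in for the KVM calls:

#include <stdbool.h>
#include <stdio.h>

static bool zap_one(int item)       { return item % 2 == 0; }	/* pretend even items dirty the TLB */
static bool need_break(int step)    { return step % 4 == 3; }	/* pretend the lock gets contended */
static void flush_remote_tlbs(void) { puts("flush"); }
static void yield_lock(void)        { puts("yield"); }

int main(void)
{
	bool flush = false;

	for (int i = 0; i < 10; i++) {
		flush |= zap_one(i);		/* remember whether a flush is owed */

		if (need_break(i)) {
			if (flush)
				flush_remote_tlbs();	/* flush before dropping the lock */
			flush = false;
			yield_lock();
		}
	}
	if (flush)
		flush_remote_tlbs();		/* final flush after the loop */
	return 0;
}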
diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c
index 87b7e16..1a09d21 100644
--- a/arch/x86/kvm/mmu/tdp_iter.c
+++ b/arch/x86/kvm/mmu/tdp_iter.c
@@ -22,21 +22,22 @@ static gfn_t round_gfn_for_level(gfn_t gfn, int level)
/*
* Sets a TDP iterator to walk a pre-order traversal of the paging structure
- * rooted at root_pt, starting with the walk to translate goal_gfn.
+ * rooted at root_pt, starting with the walk to translate next_last_level_gfn.
*/
void tdp_iter_start(struct tdp_iter *iter, u64 *root_pt, int root_level,
- int min_level, gfn_t goal_gfn)
+ int min_level, gfn_t next_last_level_gfn)
{
WARN_ON(root_level < 1);
WARN_ON(root_level > PT64_ROOT_MAX_LEVEL);
- iter->goal_gfn = goal_gfn;
+ iter->next_last_level_gfn = next_last_level_gfn;
+ iter->yielded_gfn = iter->next_last_level_gfn;
iter->root_level = root_level;
iter->min_level = min_level;
iter->level = root_level;
iter->pt_path[iter->level - 1] = root_pt;
- iter->gfn = round_gfn_for_level(iter->goal_gfn, iter->level);
+ iter->gfn = round_gfn_for_level(iter->next_last_level_gfn, iter->level);
tdp_iter_refresh_sptep(iter);
iter->valid = true;
@@ -82,7 +83,7 @@ static bool try_step_down(struct tdp_iter *iter)
iter->level--;
iter->pt_path[iter->level - 1] = child_pt;
- iter->gfn = round_gfn_for_level(iter->goal_gfn, iter->level);
+ iter->gfn = round_gfn_for_level(iter->next_last_level_gfn, iter->level);
tdp_iter_refresh_sptep(iter);
return true;
@@ -106,7 +107,7 @@ static bool try_step_side(struct tdp_iter *iter)
return false;
iter->gfn += KVM_PAGES_PER_HPAGE(iter->level);
- iter->goal_gfn = iter->gfn;
+ iter->next_last_level_gfn = iter->gfn;
iter->sptep++;
iter->old_spte = READ_ONCE(*iter->sptep);
@@ -158,23 +159,6 @@ void tdp_iter_next(struct tdp_iter *iter)
iter->valid = false;
}
-/*
- * Restart the walk over the paging structure from the root, starting from the
- * highest gfn the iterator had previously reached. Assumes that the entire
- * paging structure, except the root page, may have been completely torn down
- * and rebuilt.
- */
-void tdp_iter_refresh_walk(struct tdp_iter *iter)
-{
- gfn_t goal_gfn = iter->goal_gfn;
-
- if (iter->gfn > goal_gfn)
- goal_gfn = iter->gfn;
-
- tdp_iter_start(iter, iter->pt_path[iter->root_level - 1],
- iter->root_level, iter->min_level, goal_gfn);
-}
-
u64 *tdp_iter_root_pt(struct tdp_iter *iter)
{
return iter->pt_path[iter->root_level - 1];
diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h
index 47170d0..d480c54 100644
--- a/arch/x86/kvm/mmu/tdp_iter.h
+++ b/arch/x86/kvm/mmu/tdp_iter.h
@@ -15,7 +15,13 @@ struct tdp_iter {
* The iterator will traverse the paging structure towards the mapping
* for this GFN.
*/
- gfn_t goal_gfn;
+ gfn_t next_last_level_gfn;
+ /*
+ * The next_last_level_gfn at the time when the thread last
+ * yielded. Only yielding when the next_last_level_gfn !=
+ * yielded_gfn helps ensure forward progress.
+ */
+ gfn_t yielded_gfn;
/* Pointers to the page tables traversed to reach the current SPTE */
u64 *pt_path[PT64_ROOT_MAX_LEVEL];
/* A pointer to the current SPTE */
@@ -52,9 +58,8 @@ struct tdp_iter {
u64 *spte_to_child_pt(u64 pte, int level);
void tdp_iter_start(struct tdp_iter *iter, u64 *root_pt, int root_level,
- int min_level, gfn_t goal_gfn);
+ int min_level, gfn_t next_last_level_gfn);
void tdp_iter_next(struct tdp_iter *iter);
-void tdp_iter_refresh_walk(struct tdp_iter *iter);
u64 *tdp_iter_root_pt(struct tdp_iter *iter);
#endif /* __KVM_X86_MMU_TDP_ITER_H */
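
The forward-progress rule behind yielded_gfn can be shown in isolation: the walk may only yield once it has advanced past the point recorded at the previous yield, so a permanently contended lock cannot stall it forever. A toy model, with illustrative names and a fake contention check:

#include <stdbool.h>
#include <stdio.h>

struct toy_iter {
	unsigned long next_gfn;		/* next GFN the walk will visit */
	unsigned long yielded_gfn;	/* next_gfn captured at the last yield */
};

static bool lock_contended(void) { return true; }	/* pretend it always is */

static bool maybe_yield(struct toy_iter *it)
{
	if (it->next_gfn == it->yielded_gfn)
		return false;			/* no progress since the last yield */

	if (lock_contended()) {
		it->yielded_gfn = it->next_gfn;	/* the walk restarts from here */
		puts("yielded");
		return true;
	}
	return false;
}

int main(void)
{
	struct toy_iter it = { .next_gfn = 0, .yielded_gfn = 0 };

	for (int step = 0; step < 4; step++) {
		if (maybe_yield(&it))
			continue;		/* caller restarts the traversal */
		it.next_gfn++;			/* visit one entry, then advance */
	}
	return 0;
}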
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index ffa0bd0..61c00f8 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -103,7 +103,7 @@ bool is_tdp_mmu_root(struct kvm *kvm, hpa_t hpa)
}
static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
- gfn_t start, gfn_t end, bool can_yield);
+ gfn_t start, gfn_t end, bool can_yield, bool flush);
void kvm_tdp_mmu_free_root(struct kvm *kvm, struct kvm_mmu_page *root)
{
@@ -116,7 +116,7 @@ void kvm_tdp_mmu_free_root(struct kvm *kvm, struct kvm_mmu_page *root)
list_del(&root->link);
- zap_gfn_range(kvm, root, 0, max_gfn, false);
+ zap_gfn_range(kvm, root, 0, max_gfn, false, false);
free_page((unsigned long)root->spt);
kmem_cache_free(mmu_page_header_cache, root);
@@ -405,27 +405,43 @@ static inline void tdp_mmu_set_spte_no_dirty_log(struct kvm *kvm,
_mmu->shadow_root_level, _start, _end)
/*
- * Flush the TLB if the process should drop kvm->mmu_lock.
- * Return whether the caller still needs to flush the tlb.
+ * Yield if the MMU lock is contended or this thread needs to return control
+ * to the scheduler.
+ *
+ * If this function should yield and flush is set, it will perform a remote
+ * TLB flush before yielding.
+ *
+ * If this function yields, it will also reset the tdp_iter's walk over the
+ * paging structure and the calling function should skip to the next
+ * iteration to allow the iterator to continue its traversal from the
+ * paging structure root.
+ *
+ * Return true if this function yielded and the iterator's traversal was reset.
+ * Return false if a yield was not needed.
*/
-static bool tdp_mmu_iter_flush_cond_resched(struct kvm *kvm, struct tdp_iter *iter)
+static inline bool tdp_mmu_iter_cond_resched(struct kvm *kvm,
+ struct tdp_iter *iter, bool flush)
{
- if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
- kvm_flush_remote_tlbs(kvm);
- cond_resched_lock(&kvm->mmu_lock);
- tdp_iter_refresh_walk(iter);
+ /* Ensure forward progress has been made before yielding. */
+ if (iter->next_last_level_gfn == iter->yielded_gfn)
return false;
- } else {
+
+ if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
+ if (flush)
+ kvm_flush_remote_tlbs(kvm);
+
+ cond_resched_lock(&kvm->mmu_lock);
+
+ WARN_ON(iter->gfn > iter->next_last_level_gfn);
+
+ tdp_iter_start(iter, iter->pt_path[iter->root_level - 1],
+ iter->root_level, iter->min_level,
+ iter->next_last_level_gfn);
+
return true;
}
-}
-static void tdp_mmu_iter_cond_resched(struct kvm *kvm, struct tdp_iter *iter)
-{
- if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
- cond_resched_lock(&kvm->mmu_lock);
- tdp_iter_refresh_walk(iter);
- }
+ return false;
}
/*
@@ -437,15 +453,22 @@ static void tdp_mmu_iter_cond_resched(struct kvm *kvm, struct tdp_iter *iter)
* scheduler needs the CPU or there is contention on the MMU lock. If this
* function cannot yield, it will not release the MMU lock or reschedule and
* the caller must ensure it does not supply too large a GFN range, or the
- * operation can cause a soft lockup.
+ * operation can cause a soft lockup. Note, in some use cases a flush may be
+ * required by prior actions. Ensure the pending flush is performed prior to
+ * yielding.
*/
static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
- gfn_t start, gfn_t end, bool can_yield)
+ gfn_t start, gfn_t end, bool can_yield, bool flush)
{
struct tdp_iter iter;
- bool flush_needed = false;
tdp_root_for_each_pte(iter, root, start, end) {
+ if (can_yield &&
+ tdp_mmu_iter_cond_resched(kvm, &iter, flush)) {
+ flush = false;
+ continue;
+ }
+
if (!is_shadow_present_pte(iter.old_spte))
continue;
@@ -460,13 +483,10 @@ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
continue;
tdp_mmu_set_spte(kvm, &iter, 0);
-
- if (can_yield)
- flush_needed = tdp_mmu_iter_flush_cond_resched(kvm, &iter);
- else
- flush_needed = true;
+ flush = true;
}
- return flush_needed;
+
+ return flush;
}
/*
@@ -475,13 +495,14 @@ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
* SPTEs have been cleared and a TLB flush is needed before releasing the
* MMU lock.
*/
-bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end)
+bool __kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end,
+ bool can_yield)
{
struct kvm_mmu_page *root;
bool flush = false;
for_each_tdp_mmu_root_yield_safe(kvm, root)
- flush |= zap_gfn_range(kvm, root, start, end, true);
+ flush = zap_gfn_range(kvm, root, start, end, can_yield, flush);
return flush;
}
@@ -673,7 +694,7 @@ static int zap_gfn_range_hva_wrapper(struct kvm *kvm,
struct kvm_mmu_page *root, gfn_t start,
gfn_t end, unsigned long unused)
{
- return zap_gfn_range(kvm, root, start, end, false);
+ return zap_gfn_range(kvm, root, start, end, false, false);
}
int kvm_tdp_mmu_zap_hva_range(struct kvm *kvm, unsigned long start,
@@ -824,6 +845,9 @@ static bool wrprot_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
for_each_tdp_pte_min_level(iter, root->spt, root->role.level,
min_level, start, end) {
+ if (tdp_mmu_iter_cond_resched(kvm, &iter, false))
+ continue;
+
if (!is_shadow_present_pte(iter.old_spte) ||
!is_last_spte(iter.old_spte, iter.level))
continue;
@@ -832,8 +856,6 @@ static bool wrprot_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
tdp_mmu_set_spte_no_dirty_log(kvm, &iter, new_spte);
spte_set = true;
-
- tdp_mmu_iter_cond_resched(kvm, &iter);
}
return spte_set;
}
@@ -877,6 +899,9 @@ static bool clear_dirty_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
bool spte_set = false;
tdp_root_for_each_leaf_pte(iter, root, start, end) {
+ if (tdp_mmu_iter_cond_resched(kvm, &iter, false))
+ continue;
+
if (spte_ad_need_write_protect(iter.old_spte)) {
if (is_writable_pte(iter.old_spte))
new_spte = iter.old_spte & ~PT_WRITABLE_MASK;
@@ -891,8 +916,6 @@ static bool clear_dirty_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
tdp_mmu_set_spte_no_dirty_log(kvm, &iter, new_spte);
spte_set = true;
-
- tdp_mmu_iter_cond_resched(kvm, &iter);
}
return spte_set;
}
@@ -1000,6 +1023,9 @@ static bool set_dirty_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
bool spte_set = false;
tdp_root_for_each_pte(iter, root, start, end) {
+ if (tdp_mmu_iter_cond_resched(kvm, &iter, false))
+ continue;
+
if (!is_shadow_present_pte(iter.old_spte))
continue;
@@ -1007,8 +1033,6 @@ static bool set_dirty_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
tdp_mmu_set_spte(kvm, &iter, new_spte);
spte_set = true;
-
- tdp_mmu_iter_cond_resched(kvm, &iter);
}
return spte_set;
@@ -1049,6 +1073,11 @@ static void zap_collapsible_spte_range(struct kvm *kvm,
bool spte_set = false;
tdp_root_for_each_pte(iter, root, start, end) {
+ if (tdp_mmu_iter_cond_resched(kvm, &iter, spte_set)) {
+ spte_set = false;
+ continue;
+ }
+
if (!is_shadow_present_pte(iter.old_spte) ||
!is_last_spte(iter.old_spte, iter.level))
continue;
@@ -1061,7 +1090,7 @@ static void zap_collapsible_spte_range(struct kvm *kvm,
tdp_mmu_set_spte(kvm, &iter, 0);
- spte_set = tdp_mmu_iter_flush_cond_resched(kvm, &iter);
+ spte_set = true;
}
if (spte_set)
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index cbbdbad..a7a3f6d 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -12,7 +12,23 @@ bool is_tdp_mmu_root(struct kvm *kvm, hpa_t root);
hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu);
void kvm_tdp_mmu_free_root(struct kvm *kvm, struct kvm_mmu_page *root);
-bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end);
+bool __kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end,
+ bool can_yield);
+static inline bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start,
+ gfn_t end)
+{
+ return __kvm_tdp_mmu_zap_gfn_range(kvm, start, end, true);
+}
+static inline bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
+{
+ gfn_t end = sp->gfn + KVM_PAGES_PER_HPAGE(sp->role.level);
+
+ /*
+ * Don't allow yielding, as the caller may have pending pages to zap
+ * on the shadow MMU.
+ */
+ return __kvm_tdp_mmu_zap_gfn_range(kvm, sp->gfn, end, false);
+}
void kvm_tdp_mmu_zap_all(struct kvm *kvm);
int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index e3e0498..16b10b9 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -168,6 +168,9 @@ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
int asid, ret;
+ if (kvm->created_vcpus)
+ return -EINVAL;
+
ret = -EBUSY;
if (unlikely(sev->active))
return ret;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 642f0da..9d4eb11 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1805,7 +1805,7 @@ static void svm_set_dr7(struct kvm_vcpu *vcpu, unsigned long value)
static int pf_interception(struct vcpu_svm *svm)
{
- u64 fault_address = __sme_clr(svm->vmcb->control.exit_info_2);
+ u64 fault_address = svm->vmcb->control.exit_info_2;
u64 error_code = svm->vmcb->control.exit_info_1;
return kvm_handle_page_fault(&svm->vcpu, error_code, fault_address,
@@ -2519,6 +2519,9 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
case MSR_TSC_AUX:
if (!boot_cpu_has(X86_FEATURE_RDTSCP))
return 1;
+ if (!msr_info->host_initiated &&
+ !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP))
+ return 1;
msr_info->data = svm->tsc_aux;
break;
/*
@@ -2713,6 +2716,10 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
if (!boot_cpu_has(X86_FEATURE_RDTSCP))
return 1;
+ if (!msr->host_initiated &&
+ !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP))
+ return 1;
+
/*
* This is rare, so we update the MSR here instead of using
* direct_access_msrs. Doing that would require a rdmsr in
@@ -3525,15 +3532,15 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu,
* have them in state 'on' as recorded before entering guest mode.
* Same as enter_from_user_mode().
*
- * guest_exit_irqoff() restores host context and reinstates RCU if
- * enabled and required.
+ * context_tracking_guest_exit() restores host context and reinstates
+ * RCU if enabled and required.
*
* This needs to be done before the below as native_read_msr()
* contains a tracepoint and x86_spec_ctrl_restore_host() calls
* into world and some more.
*/
lockdep_hardirqs_off(CALLER_ADDR0);
- guest_exit_irqoff();
+ context_tracking_guest_exit();
instrumentation_begin();
trace_hardirqs_off_finish();
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index f3eca452..32e6f33 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -618,6 +618,7 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
}
/* KVM unconditionally exposes the FS/GS base MSRs to L1. */
+#ifdef CONFIG_X86_64
nested_vmx_disable_intercept_for_msr(msr_bitmap_l1, msr_bitmap_l0,
MSR_FS_BASE, MSR_TYPE_RW);
@@ -626,6 +627,7 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
nested_vmx_disable_intercept_for_msr(msr_bitmap_l1, msr_bitmap_l0,
MSR_KERNEL_GS_BASE, MSR_TYPE_RW);
+#endif
/*
* Checking the L0->L1 bitmap is trying to verify two things:
@@ -3137,15 +3139,8 @@ static bool nested_get_evmcs_page(struct kvm_vcpu *vcpu)
nested_vmx_handle_enlightened_vmptrld(vcpu, false);
if (evmptrld_status == EVMPTRLD_VMFAIL ||
- evmptrld_status == EVMPTRLD_ERROR) {
- pr_debug_ratelimited("%s: enlightened vmptrld failed\n",
- __func__);
- vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
- vcpu->run->internal.suberror =
- KVM_INTERNAL_ERROR_EMULATION;
- vcpu->run->internal.ndata = 0;
+ evmptrld_status == EVMPTRLD_ERROR)
return false;
- }
}
return true;
@@ -3233,8 +3228,16 @@ static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
static bool vmx_get_nested_state_pages(struct kvm_vcpu *vcpu)
{
- if (!nested_get_evmcs_page(vcpu))
+ if (!nested_get_evmcs_page(vcpu)) {
+ pr_debug_ratelimited("%s: enlightened vmptrld failed\n",
+ __func__);
+ vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+ vcpu->run->internal.suberror =
+ KVM_INTERNAL_ERROR_EMULATION;
+ vcpu->run->internal.ndata = 0;
+
return false;
+ }
if (is_guest_mode(vcpu) && !nested_get_vmcs12_pages(vcpu))
return false;
@@ -3329,7 +3332,11 @@ enum nvmx_vmentry_status nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu,
struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
enum vm_entry_failure_code entry_failure_code;
bool evaluate_pending_interrupts;
- u32 exit_reason, failed_index;
+ union vmx_exit_reason exit_reason = {
+ .basic = EXIT_REASON_INVALID_STATE,
+ .failed_vmentry = 1,
+ };
+ u32 failed_index;
if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
kvm_vcpu_flush_tlb_current(vcpu);
@@ -3381,7 +3388,7 @@ enum nvmx_vmentry_status nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu,
if (nested_vmx_check_guest_state(vcpu, vmcs12,
&entry_failure_code)) {
- exit_reason = EXIT_REASON_INVALID_STATE;
+ exit_reason.basic = EXIT_REASON_INVALID_STATE;
vmcs12->exit_qualification = entry_failure_code;
goto vmentry_fail_vmexit;
}
@@ -3392,7 +3399,7 @@ enum nvmx_vmentry_status nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu,
vcpu->arch.tsc_offset += vmcs12->tsc_offset;
if (prepare_vmcs02(vcpu, vmcs12, &entry_failure_code)) {
- exit_reason = EXIT_REASON_INVALID_STATE;
+ exit_reason.basic = EXIT_REASON_INVALID_STATE;
vmcs12->exit_qualification = entry_failure_code;
goto vmentry_fail_vmexit_guest_mode;
}
@@ -3402,7 +3409,7 @@ enum nvmx_vmentry_status nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu,
vmcs12->vm_entry_msr_load_addr,
vmcs12->vm_entry_msr_load_count);
if (failed_index) {
- exit_reason = EXIT_REASON_MSR_LOAD_FAIL;
+ exit_reason.basic = EXIT_REASON_MSR_LOAD_FAIL;
vmcs12->exit_qualification = failed_index;
goto vmentry_fail_vmexit_guest_mode;
}
@@ -3470,7 +3477,7 @@ enum nvmx_vmentry_status nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu,
return NVMX_VMENTRY_VMEXIT;
load_vmcs12_host_state(vcpu, vmcs12);
- vmcs12->vm_exit_reason = exit_reason | VMX_EXIT_REASONS_FAILED_VMENTRY;
+ vmcs12->vm_exit_reason = exit_reason.full;
if (enable_shadow_vmcs || vmx->nested.hv_evmcs)
vmx->nested.need_vmcs12_to_shadow_sync = true;
return NVMX_VMENTRY_VMEXIT;
@@ -4435,7 +4442,15 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
/* trying to cancel vmlaunch/vmresume is a bug */
WARN_ON_ONCE(vmx->nested.nested_run_pending);
- kvm_clear_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);
+ if (kvm_check_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu)) {
+ /*
+ * KVM_REQ_GET_NESTED_STATE_PAGES is also used to map
+ * Enlightened VMCS after migration and we still need to
+ * do that when something is forcing L2->L1 exit prior to
+ * the first L2 run.
+ */
+ (void)nested_get_evmcs_page(vcpu);
+ }
/* Service the TLB flush request for L2 before switching to L1. */
if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
@@ -4609,9 +4624,9 @@ int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsigned long exit_qualification,
else if (addr_size == 0)
off = (gva_t)sign_extend64(off, 15);
if (base_is_valid)
- off += kvm_register_read(vcpu, base_reg);
+ off += kvm_register_readl(vcpu, base_reg);
if (index_is_valid)
- off += kvm_register_read(vcpu, index_reg) << scaling;
+ off += kvm_register_readl(vcpu, index_reg) << scaling;
vmx_get_segment(vcpu, &s, seg_reg);
/*
@@ -5487,16 +5502,11 @@ static int nested_vmx_eptp_switching(struct kvm_vcpu *vcpu,
if (!nested_vmx_check_eptp(vcpu, new_eptp))
return 1;
- kvm_mmu_unload(vcpu);
mmu->ept_ad = accessed_dirty;
mmu->mmu_role.base.ad_disabled = !accessed_dirty;
vmcs12->ept_pointer = new_eptp;
- /*
- * TODO: Check what's the correct approach in case
- * mmu reload fails. Currently, we just let the next
- * reload potentially fail
- */
- kvm_mmu_reload(vcpu);
+
+ kvm_make_request(KVM_REQ_MMU_RELOAD, vcpu);
}
return 0;
@@ -5533,7 +5543,12 @@ static int handle_vmfunc(struct kvm_vcpu *vcpu)
return kvm_skip_emulated_instruction(vcpu);
fail:
- nested_vmx_vmexit(vcpu, vmx->exit_reason,
+ /*
+ * This is effectively a reflected VM-Exit, as opposed to a synthesized
+ * nested VM-Exit. Pass the original exit reason, i.e. don't hardcode
+ * EXIT_REASON_VMFUNC as the exit reason.
+ */
+ nested_vmx_vmexit(vcpu, vmx->exit_reason.full,
vmx_get_intr_info(vcpu),
vmx_get_exit_qual(vcpu));
return 1;
@@ -5601,7 +5616,8 @@ static bool nested_vmx_exit_handled_io(struct kvm_vcpu *vcpu,
* MSR bitmap. This may be the case even when L0 doesn't use MSR bitmaps.
*/
static bool nested_vmx_exit_handled_msr(struct kvm_vcpu *vcpu,
- struct vmcs12 *vmcs12, u32 exit_reason)
+ struct vmcs12 *vmcs12,
+ union vmx_exit_reason exit_reason)
{
u32 msr_index = kvm_rcx_read(vcpu);
gpa_t bitmap;
@@ -5615,7 +5631,7 @@ static bool nested_vmx_exit_handled_msr(struct kvm_vcpu *vcpu,
* First we need to figure out which of the four to use:
*/
bitmap = vmcs12->msr_bitmap;
- if (exit_reason == EXIT_REASON_MSR_WRITE)
+ if (exit_reason.basic == EXIT_REASON_MSR_WRITE)
bitmap += 2048;
if (msr_index >= 0xc0000000) {
msr_index -= 0xc0000000;
@@ -5719,7 +5735,7 @@ static bool nested_vmx_exit_handled_vmcs_access(struct kvm_vcpu *vcpu,
/* Decode instruction info and find the field to access */
vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO);
- field = kvm_register_read(vcpu, (((vmx_instruction_info) >> 28) & 0xf));
+ field = kvm_register_readl(vcpu, (((vmx_instruction_info) >> 28) & 0xf));
/* Out-of-range fields always cause a VM exit from L2 to L1 */
if (field >> 15)
@@ -5752,11 +5768,12 @@ static bool nested_vmx_exit_handled_mtf(struct vmcs12 *vmcs12)
* Return true if L0 wants to handle an exit from L2 regardless of whether or not
* L1 wants the exit. Only call this when in is_guest_mode (L2).
*/
-static bool nested_vmx_l0_wants_exit(struct kvm_vcpu *vcpu, u32 exit_reason)
+static bool nested_vmx_l0_wants_exit(struct kvm_vcpu *vcpu,
+ union vmx_exit_reason exit_reason)
{
u32 intr_info;
- switch ((u16)exit_reason) {
+ switch ((u16)exit_reason.basic) {
case EXIT_REASON_EXCEPTION_NMI:
intr_info = vmx_get_intr_info(vcpu);
if (is_nmi(intr_info))
@@ -5812,12 +5829,13 @@ static bool nested_vmx_l0_wants_exit(struct kvm_vcpu *vcpu, u32 exit_reason)
* Return 1 if L1 wants to intercept an exit from L2. Only call this when in
* is_guest_mode (L2).
*/
-static bool nested_vmx_l1_wants_exit(struct kvm_vcpu *vcpu, u32 exit_reason)
+static bool nested_vmx_l1_wants_exit(struct kvm_vcpu *vcpu,
+ union vmx_exit_reason exit_reason)
{
struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
u32 intr_info;
- switch ((u16)exit_reason) {
+ switch ((u16)exit_reason.basic) {
case EXIT_REASON_EXCEPTION_NMI:
intr_info = vmx_get_intr_info(vcpu);
if (is_nmi(intr_info))
@@ -5936,7 +5954,7 @@ static bool nested_vmx_l1_wants_exit(struct kvm_vcpu *vcpu, u32 exit_reason)
bool nested_vmx_reflect_vmexit(struct kvm_vcpu *vcpu)
{
struct vcpu_vmx *vmx = to_vmx(vcpu);
- u32 exit_reason = vmx->exit_reason;
+ union vmx_exit_reason exit_reason = vmx->exit_reason;
unsigned long exit_qual;
u32 exit_intr_info;
@@ -5955,7 +5973,7 @@ bool nested_vmx_reflect_vmexit(struct kvm_vcpu *vcpu)
goto reflect_vmexit;
}
- trace_kvm_nested_vmexit(exit_reason, vcpu, KVM_ISA_VMX);
+ trace_kvm_nested_vmexit(exit_reason.full, vcpu, KVM_ISA_VMX);
/* If L0 (KVM) wants the exit, it trumps L1's desires. */
if (nested_vmx_l0_wants_exit(vcpu, exit_reason))
@@ -5981,7 +5999,7 @@ bool nested_vmx_reflect_vmexit(struct kvm_vcpu *vcpu)
exit_qual = vmx_get_exit_qual(vcpu);
reflect_vmexit:
- nested_vmx_vmexit(vcpu, exit_reason, exit_intr_info, exit_qual);
+ nested_vmx_vmexit(vcpu, exit_reason.full, exit_intr_info, exit_qual);
return true;
}
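
The union vmx_exit_reason that these patches switch to overlays the raw 32-bit VM_EXIT_REASON value with bitfields, so callers can test .basic and .failed_vmentry instead of masking by hand. A sketch keeping only the architectural pieces (basic reason in bits 15:0, VM-entry failure in bit 31), with the bits in between collapsed into one reserved field:

#include <stdint.h>
#include <stdio.h>

union toy_exit_reason {
	struct {
		uint32_t basic		: 16;	/* basic exit reason */
		uint32_t reserved	: 15;	/* other flag bits, collapsed here */
		uint32_t failed_vmentry	: 1;	/* VM-entry failure bit */
	};
	uint32_t full;				/* raw VM_EXIT_REASON value */
};

int main(void)
{
	/* 0x80000021: VM-entry failure, basic reason 0x21 (invalid guest state) */
	union toy_exit_reason r = { .full = 0x80000021u };

	printf("basic=0x%x failed_vmentry=%u\n", r.basic, r.failed_vmentry);
	return 0;
}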
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 82af43e..4587736 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -36,6 +36,7 @@
#include <asm/debugreg.h>
#include <asm/desc.h>
#include <asm/fpu/internal.h>
+#include <asm/idtentry.h>
#include <asm/io.h>
#include <asm/irq_remapping.h>
#include <asm/kexec.h>
@@ -156,9 +157,11 @@ static u32 vmx_possible_passthrough_msrs[MAX_POSSIBLE_PASSTHROUGH_MSRS] = {
MSR_IA32_SPEC_CTRL,
MSR_IA32_PRED_CMD,
MSR_IA32_TSC,
+#ifdef CONFIG_X86_64
MSR_FS_BASE,
MSR_GS_BASE,
MSR_KERNEL_GS_BASE,
+#endif
MSR_IA32_SYSENTER_CS,
MSR_IA32_SYSENTER_ESP,
MSR_IA32_SYSENTER_EIP,
@@ -1578,7 +1581,7 @@ static int skip_emulated_instruction(struct kvm_vcpu *vcpu)
* i.e. we end up advancing IP with some random value.
*/
if (!static_cpu_has(X86_FEATURE_HYPERVISOR) ||
- to_vmx(vcpu)->exit_reason != EXIT_REASON_EPT_MISCONFIG) {
+ to_vmx(vcpu)->exit_reason.basic != EXIT_REASON_EPT_MISCONFIG) {
orig_rip = kvm_rip_read(vcpu);
rip = orig_rip + vmcs_read32(VM_EXIT_INSTRUCTION_LEN);
#ifdef CONFIG_X86_64
@@ -5687,7 +5690,7 @@ static void vmx_get_exit_info(struct kvm_vcpu *vcpu, u64 *info1, u64 *info2,
struct vcpu_vmx *vmx = to_vmx(vcpu);
*info1 = vmx_get_exit_qual(vcpu);
- if (!(vmx->exit_reason & VMX_EXIT_REASONS_FAILED_VMENTRY)) {
+ if (!(vmx->exit_reason.failed_vmentry)) {
*info2 = vmx->idt_vectoring_info;
*intr_info = vmx_get_intr_info(vcpu);
if (is_exception_with_error_code(*intr_info))
@@ -5779,7 +5782,6 @@ void dump_vmcs(void)
u32 vmentry_ctl, vmexit_ctl;
u32 cpu_based_exec_ctrl, pin_based_exec_ctrl, secondary_exec_control;
unsigned long cr4;
- u64 efer;
if (!dump_invalid_vmcs) {
pr_warn_ratelimited("set kvm_intel.dump_invalid_vmcs=1 to dump internal KVM state.\n");
@@ -5791,7 +5793,6 @@ void dump_vmcs(void)
cpu_based_exec_ctrl = vmcs_read32(CPU_BASED_VM_EXEC_CONTROL);
pin_based_exec_ctrl = vmcs_read32(PIN_BASED_VM_EXEC_CONTROL);
cr4 = vmcs_readl(GUEST_CR4);
- efer = vmcs_read64(GUEST_IA32_EFER);
secondary_exec_control = 0;
if (cpu_has_secondary_exec_ctrls())
secondary_exec_control = vmcs_read32(SECONDARY_VM_EXEC_CONTROL);
@@ -5803,9 +5804,7 @@ void dump_vmcs(void)
pr_err("CR4: actual=0x%016lx, shadow=0x%016lx, gh_mask=%016lx\n",
cr4, vmcs_readl(CR4_READ_SHADOW), vmcs_readl(CR4_GUEST_HOST_MASK));
pr_err("CR3 = 0x%016lx\n", vmcs_readl(GUEST_CR3));
- if ((secondary_exec_control & SECONDARY_EXEC_ENABLE_EPT) &&
- (cr4 & X86_CR4_PAE) && !(efer & EFER_LMA))
- {
+ if (cpu_has_vmx_ept()) {
pr_err("PDPTR0 = 0x%016llx PDPTR1 = 0x%016llx\n",
vmcs_read64(GUEST_PDPTR0), vmcs_read64(GUEST_PDPTR1));
pr_err("PDPTR2 = 0x%016llx PDPTR3 = 0x%016llx\n",
@@ -5831,7 +5830,8 @@ void dump_vmcs(void)
if ((vmexit_ctl & (VM_EXIT_SAVE_IA32_PAT | VM_EXIT_SAVE_IA32_EFER)) ||
(vmentry_ctl & (VM_ENTRY_LOAD_IA32_PAT | VM_ENTRY_LOAD_IA32_EFER)))
pr_err("EFER = 0x%016llx PAT = 0x%016llx\n",
- efer, vmcs_read64(GUEST_IA32_PAT));
+ vmcs_read64(GUEST_IA32_EFER),
+ vmcs_read64(GUEST_IA32_PAT));
pr_err("DebugCtl = 0x%016llx DebugExceptions = 0x%016lx\n",
vmcs_read64(GUEST_IA32_DEBUGCTL),
vmcs_readl(GUEST_PENDING_DBG_EXCEPTIONS));
@@ -5931,8 +5931,9 @@ void dump_vmcs(void)
static int vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
{
struct vcpu_vmx *vmx = to_vmx(vcpu);
- u32 exit_reason = vmx->exit_reason;
+ union vmx_exit_reason exit_reason = vmx->exit_reason;
u32 vectoring_info = vmx->idt_vectoring_info;
+ u16 exit_handler_index;
/*
* Flush logged GPAs PML buffer, this will make dirty_bitmap more
@@ -5974,11 +5975,11 @@ static int vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
return 1;
}
- if (exit_reason & VMX_EXIT_REASONS_FAILED_VMENTRY) {
+ if (exit_reason.failed_vmentry) {
dump_vmcs();
vcpu->run->exit_reason = KVM_EXIT_FAIL_ENTRY;
vcpu->run->fail_entry.hardware_entry_failure_reason
- = exit_reason;
+ = exit_reason.full;
vcpu->run->fail_entry.cpu = vcpu->arch.last_vmentry_cpu;
return 0;
}
@@ -6000,24 +6001,24 @@ static int vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
* will cause infinite loop.
*/
if ((vectoring_info & VECTORING_INFO_VALID_MASK) &&
- (exit_reason != EXIT_REASON_EXCEPTION_NMI &&
- exit_reason != EXIT_REASON_EPT_VIOLATION &&
- exit_reason != EXIT_REASON_PML_FULL &&
- exit_reason != EXIT_REASON_APIC_ACCESS &&
- exit_reason != EXIT_REASON_TASK_SWITCH)) {
+ (exit_reason.basic != EXIT_REASON_EXCEPTION_NMI &&
+ exit_reason.basic != EXIT_REASON_EPT_VIOLATION &&
+ exit_reason.basic != EXIT_REASON_PML_FULL &&
+ exit_reason.basic != EXIT_REASON_APIC_ACCESS &&
+ exit_reason.basic != EXIT_REASON_TASK_SWITCH)) {
+ int ndata = 3;
+
vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
vcpu->run->internal.suberror = KVM_INTERNAL_ERROR_DELIVERY_EV;
- vcpu->run->internal.ndata = 3;
vcpu->run->internal.data[0] = vectoring_info;
- vcpu->run->internal.data[1] = exit_reason;
+ vcpu->run->internal.data[1] = exit_reason.full;
vcpu->run->internal.data[2] = vcpu->arch.exit_qualification;
- if (exit_reason == EXIT_REASON_EPT_MISCONFIG) {
- vcpu->run->internal.ndata++;
- vcpu->run->internal.data[3] =
+ if (exit_reason.basic == EXIT_REASON_EPT_MISCONFIG) {
+ vcpu->run->internal.data[ndata++] =
vmcs_read64(GUEST_PHYSICAL_ADDRESS);
}
- vcpu->run->internal.data[vcpu->run->internal.ndata++] =
- vcpu->arch.last_vmentry_cpu;
+ vcpu->run->internal.data[ndata++] = vcpu->arch.last_vmentry_cpu;
+ vcpu->run->internal.ndata = ndata;
return 0;
}
@@ -6043,38 +6044,39 @@ static int vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
if (exit_fastpath != EXIT_FASTPATH_NONE)
return 1;
- if (exit_reason >= kvm_vmx_max_exit_handlers)
+ if (exit_reason.basic >= kvm_vmx_max_exit_handlers)
goto unexpected_vmexit;
#ifdef CONFIG_RETPOLINE
- if (exit_reason == EXIT_REASON_MSR_WRITE)
+ if (exit_reason.basic == EXIT_REASON_MSR_WRITE)
return kvm_emulate_wrmsr(vcpu);
- else if (exit_reason == EXIT_REASON_PREEMPTION_TIMER)
+ else if (exit_reason.basic == EXIT_REASON_PREEMPTION_TIMER)
return handle_preemption_timer(vcpu);
- else if (exit_reason == EXIT_REASON_INTERRUPT_WINDOW)
+ else if (exit_reason.basic == EXIT_REASON_INTERRUPT_WINDOW)
return handle_interrupt_window(vcpu);
- else if (exit_reason == EXIT_REASON_EXTERNAL_INTERRUPT)
+ else if (exit_reason.basic == EXIT_REASON_EXTERNAL_INTERRUPT)
return handle_external_interrupt(vcpu);
- else if (exit_reason == EXIT_REASON_HLT)
+ else if (exit_reason.basic == EXIT_REASON_HLT)
return kvm_emulate_halt(vcpu);
- else if (exit_reason == EXIT_REASON_EPT_MISCONFIG)
+ else if (exit_reason.basic == EXIT_REASON_EPT_MISCONFIG)
return handle_ept_misconfig(vcpu);
#endif
- exit_reason = array_index_nospec(exit_reason,
- kvm_vmx_max_exit_handlers);
- if (!kvm_vmx_exit_handlers[exit_reason])
+ exit_handler_index = array_index_nospec((u16)exit_reason.basic,
+ kvm_vmx_max_exit_handlers);
+ if (!kvm_vmx_exit_handlers[exit_handler_index])
goto unexpected_vmexit;
- return kvm_vmx_exit_handlers[exit_reason](vcpu);
+ return kvm_vmx_exit_handlers[exit_handler_index](vcpu);
unexpected_vmexit:
- vcpu_unimpl(vcpu, "vmx: unexpected exit reason 0x%x\n", exit_reason);
+ vcpu_unimpl(vcpu, "vmx: unexpected exit reason 0x%x\n",
+ exit_reason.full);
dump_vmcs();
vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
vcpu->run->internal.suberror =
KVM_INTERNAL_ERROR_UNEXPECTED_EXIT_REASON;
vcpu->run->internal.ndata = 2;
- vcpu->run->internal.data[0] = exit_reason;
+ vcpu->run->internal.data[0] = exit_reason.full;
vcpu->run->internal.data[1] = vcpu->arch.last_vmentry_cpu;
return 0;
}
@@ -6353,18 +6355,17 @@ static void vmx_apicv_post_state_restore(struct kvm_vcpu *vcpu)
void vmx_do_interrupt_nmi_irqoff(unsigned long entry);
-static void handle_interrupt_nmi_irqoff(struct kvm_vcpu *vcpu, u32 intr_info)
+static void handle_interrupt_nmi_irqoff(struct kvm_vcpu *vcpu,
+ unsigned long entry)
{
- unsigned int vector = intr_info & INTR_INFO_VECTOR_MASK;
- gate_desc *desc = (gate_desc *)host_idt_base + vector;
-
kvm_before_interrupt(vcpu);
- vmx_do_interrupt_nmi_irqoff(gate_offset(desc));
+ vmx_do_interrupt_nmi_irqoff(entry);
kvm_after_interrupt(vcpu);
}
static void handle_exception_nmi_irqoff(struct vcpu_vmx *vmx)
{
+ const unsigned long nmi_entry = (unsigned long)asm_exc_nmi_noist;
u32 intr_info = vmx_get_intr_info(&vmx->vcpu);
/* if exit due to PF check for async PF */
@@ -6375,27 +6376,29 @@ static void handle_exception_nmi_irqoff(struct vcpu_vmx *vmx)
kvm_machine_check();
/* We need to handle NMIs before interrupts are enabled */
else if (is_nmi(intr_info))
- handle_interrupt_nmi_irqoff(&vmx->vcpu, intr_info);
+ handle_interrupt_nmi_irqoff(&vmx->vcpu, nmi_entry);
}
static void handle_external_interrupt_irqoff(struct kvm_vcpu *vcpu)
{
u32 intr_info = vmx_get_intr_info(vcpu);
+ unsigned int vector = intr_info & INTR_INFO_VECTOR_MASK;
+ gate_desc *desc = (gate_desc *)host_idt_base + vector;
if (WARN_ONCE(!is_external_intr(intr_info),
"KVM: unexpected VM-Exit interrupt info: 0x%x", intr_info))
return;
- handle_interrupt_nmi_irqoff(vcpu, intr_info);
+ handle_interrupt_nmi_irqoff(vcpu, gate_offset(desc));
}
static void vmx_handle_exit_irqoff(struct kvm_vcpu *vcpu)
{
struct vcpu_vmx *vmx = to_vmx(vcpu);
- if (vmx->exit_reason == EXIT_REASON_EXTERNAL_INTERRUPT)
+ if (vmx->exit_reason.basic == EXIT_REASON_EXTERNAL_INTERRUPT)
handle_external_interrupt_irqoff(vcpu);
- else if (vmx->exit_reason == EXIT_REASON_EXCEPTION_NMI)
+ else if (vmx->exit_reason.basic == EXIT_REASON_EXCEPTION_NMI)
handle_exception_nmi_irqoff(vmx);
}
@@ -6583,7 +6586,7 @@ void noinstr vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp)
static fastpath_t vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
{
- switch (to_vmx(vcpu)->exit_reason) {
+ switch (to_vmx(vcpu)->exit_reason.basic) {
case EXIT_REASON_MSR_WRITE:
return handle_fastpath_set_msr_irqoff(vcpu);
case EXIT_REASON_PREEMPTION_TIMER:
@@ -6637,15 +6640,15 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
* have them in state 'on' as recorded before entering guest mode.
* Same as enter_from_user_mode().
*
- * guest_exit_irqoff() restores host context and reinstates RCU if
- * enabled and required.
+ * context_tracking_guest_exit() restores host context and reinstates
+ * RCU if enabled and required.
*
* This needs to be done before the below as native_read_msr()
* contains a tracepoint and x86_spec_ctrl_restore_host() calls
* into world and some more.
*/
lockdep_hardirqs_off(CALLER_ADDR0);
- guest_exit_irqoff();
+ context_tracking_guest_exit();
instrumentation_begin();
trace_hardirqs_off_finish();
@@ -6782,17 +6785,17 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
vmx->idt_vectoring_info = 0;
if (unlikely(vmx->fail)) {
- vmx->exit_reason = 0xdead;
+ vmx->exit_reason.full = 0xdead;
return EXIT_FASTPATH_NONE;
}
- vmx->exit_reason = vmcs_read32(VM_EXIT_REASON);
- if (unlikely((u16)vmx->exit_reason == EXIT_REASON_MCE_DURING_VMENTRY))
+ vmx->exit_reason.full = vmcs_read32(VM_EXIT_REASON);
+ if (unlikely((u16)vmx->exit_reason.basic == EXIT_REASON_MCE_DURING_VMENTRY))
kvm_machine_check();
- trace_kvm_exit(vmx->exit_reason, vcpu, KVM_ISA_VMX);
+ trace_kvm_exit(vmx->exit_reason.full, vcpu, KVM_ISA_VMX);
- if (unlikely(vmx->exit_reason & VMX_EXIT_REASONS_FAILED_VMENTRY))
+ if (unlikely(vmx->exit_reason.failed_vmentry))
return EXIT_FASTPATH_NONE;
vmx->loaded_vmcs->launched = 1;
@@ -6861,12 +6864,9 @@ static int vmx_create_vcpu(struct kvm_vcpu *vcpu)
for (i = 0; i < ARRAY_SIZE(vmx_uret_msrs_list); ++i) {
u32 index = vmx_uret_msrs_list[i];
- u32 data_low, data_high;
int j = vmx->nr_uret_msrs;
- if (rdmsr_safe(index, &data_low, &data_high) < 0)
- continue;
- if (wrmsr_safe(index, data_low, data_high) < 0)
+ if (kvm_probe_user_return_msr(index))
continue;
vmx->guest_uret_msrs[j].slot = i;
@@ -6905,9 +6905,11 @@ static int vmx_create_vcpu(struct kvm_vcpu *vcpu)
bitmap_fill(vmx->shadow_msr_intercept.write, MAX_POSSIBLE_PASSTHROUGH_MSRS);
vmx_disable_intercept_for_msr(vcpu, MSR_IA32_TSC, MSR_TYPE_R);
+#ifdef CONFIG_X86_64
vmx_disable_intercept_for_msr(vcpu, MSR_FS_BASE, MSR_TYPE_RW);
vmx_disable_intercept_for_msr(vcpu, MSR_GS_BASE, MSR_TYPE_RW);
vmx_disable_intercept_for_msr(vcpu, MSR_KERNEL_GS_BASE, MSR_TYPE_RW);
+#endif
vmx_disable_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_CS, MSR_TYPE_RW);
vmx_disable_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_ESP, MSR_TYPE_RW);
vmx_disable_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_EIP, MSR_TYPE_RW);
@@ -7297,9 +7299,11 @@ static __init void vmx_set_cpu_caps(void)
if (!cpu_has_vmx_xsaves())
kvm_cpu_cap_clear(X86_FEATURE_XSAVES);
- /* CPUID 0x80000001 */
- if (!cpu_has_vmx_rdtscp())
+ /* CPUID 0x80000001 and 0x7 (RDPID) */
+ if (!cpu_has_vmx_rdtscp()) {
kvm_cpu_cap_clear(X86_FEATURE_RDTSCP);
+ kvm_cpu_cap_clear(X86_FEATURE_RDPID);
+ }
if (cpu_has_vmx_waitpkg())
kvm_cpu_cap_check_and_set(X86_FEATURE_WAITPKG);
@@ -7355,8 +7359,9 @@ static int vmx_check_intercept(struct kvm_vcpu *vcpu,
/*
* RDPID causes #UD if disabled through secondary execution controls.
* Because it is marked as EmulateOnUD, we need to intercept it here.
+ * Note, RDPID is hidden behind ENABLE_RDTSCP.
*/
- case x86_intercept_rdtscp:
+ case x86_intercept_rdpid:
if (!nested_cpu_has2(vmcs12, SECONDARY_EXEC_ENABLE_RDTSCP)) {
exception->vector = UD_VECTOR;
exception->error_code_valid = false;
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index f6f66e5..ae3a89a 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -70,6 +70,29 @@ struct pt_desc {
struct pt_ctx guest;
};
+union vmx_exit_reason {
+ struct {
+ u32 basic : 16;
+ u32 reserved16 : 1;
+ u32 reserved17 : 1;
+ u32 reserved18 : 1;
+ u32 reserved19 : 1;
+ u32 reserved20 : 1;
+ u32 reserved21 : 1;
+ u32 reserved22 : 1;
+ u32 reserved23 : 1;
+ u32 reserved24 : 1;
+ u32 reserved25 : 1;
+ u32 reserved26 : 1;
+ u32 enclave_mode : 1;
+ u32 smi_pending_mtf : 1;
+ u32 smi_from_vmx_root : 1;
+ u32 reserved30 : 1;
+ u32 failed_vmentry : 1;
+ };
+ u32 full;
+};
+
/*
* The nested_vmx structure is part of vcpu_vmx, and holds information we need
* for correct emulation of VMX (i.e., nested VMX) on this vcpu.
@@ -244,7 +267,7 @@ struct vcpu_vmx {
int vpid;
bool emulation_required;
- u32 exit_reason;
+ union vmx_exit_reason exit_reason;
/* Posted interrupt descriptor */
struct pi_desc pi_desc;
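The vmx.h hunk above replaces the raw u32 exit reason with a bitfield union. A minimal, self-contained sketch of how a raw VM_EXIT_REASON value decomposes under that layout (this mirrors the hunk, not the real header; C bitfield ordering is compiler/ABI dependent, so treat it as illustrative for little-endian x86 builds):

    #include <stdint.h>
    #include <stdio.h>

    union exit_reason_sketch {                 /* mirrors the union in the hunk above */
            struct {
                    uint32_t basic             : 16;
                    uint32_t reserved          : 11;   /* bits 16..26 */
                    uint32_t enclave_mode      : 1;
                    uint32_t smi_pending_mtf   : 1;
                    uint32_t smi_from_vmx_root : 1;
                    uint32_t reserved30        : 1;
                    uint32_t failed_vmentry    : 1;    /* bit 31 */
            };
            uint32_t full;
    };

    int main(void)
    {
            union exit_reason_sketch r;

            r.full = 0x80000021u;   /* bit 31 set + basic reason 0x21 */
            printf("basic=0x%x failed_vmentry=%u\n", r.basic, r.failed_vmentry);
            return 0;
    }

With the union, handlers index on exit_reason.basic while error reporting keeps the full 32-bit value, which is exactly the split the vmx.c hunks above perform.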
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 0d8383b..1090416 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -233,7 +233,6 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
VCPU_STAT("halt_poll_fail_ns", halt_poll_fail_ns),
VM_STAT("mmu_shadow_zapped", mmu_shadow_zapped),
VM_STAT("mmu_pte_write", mmu_pte_write),
- VM_STAT("mmu_pte_updated", mmu_pte_updated),
VM_STAT("mmu_pde_zapped", mmu_pde_zapped),
VM_STAT("mmu_flooded", mmu_flooded),
VM_STAT("mmu_recycled", mmu_recycled),
@@ -323,6 +322,22 @@ static void kvm_on_user_return(struct user_return_notifier *urn)
}
}
+int kvm_probe_user_return_msr(u32 msr)
+{
+ u64 val;
+ int ret;
+
+ preempt_disable();
+ ret = rdmsrl_safe(msr, &val);
+ if (ret)
+ goto out;
+ ret = wrmsrl_safe(msr, val);
+out:
+ preempt_enable();
+ return ret;
+}
+EXPORT_SYMBOL_GPL(kvm_probe_user_return_msr);
+
void kvm_define_user_return_msr(unsigned slot, u32 msr)
{
BUG_ON(slot >= KVM_MAX_NR_USER_RETURN_MSRS);
@@ -2991,6 +3006,8 @@ static void record_steal_time(struct kvm_vcpu *vcpu)
st->preempted & KVM_VCPU_FLUSH_TLB);
if (xchg(&st->preempted, 0) & KVM_VCPU_FLUSH_TLB)
kvm_vcpu_flush_tlb_guest(vcpu);
+ } else {
+ st->preempted = 0;
}
vcpu->arch.st.preempted = 0;
@@ -7850,6 +7867,18 @@ static void pvclock_gtod_update_fn(struct work_struct *work)
static DECLARE_WORK(pvclock_gtod_work, pvclock_gtod_update_fn);
/*
+ * Indirection to move queue_work() out of the tk_core.seq write held
+ * region to prevent possible deadlocks against time accessors which
+ * are invoked with work related locks held.
+ */
+static void pvclock_irq_work_fn(struct irq_work *w)
+{
+ queue_work(system_long_wq, &pvclock_gtod_work);
+}
+
+static DEFINE_IRQ_WORK(pvclock_irq_work, pvclock_irq_work_fn);
+
+/*
* Notification about pvclock gtod data update.
*/
static int pvclock_gtod_notify(struct notifier_block *nb, unsigned long unused,
@@ -7860,13 +7889,14 @@ static int pvclock_gtod_notify(struct notifier_block *nb, unsigned long unused,
update_pvclock_gtod(tk);
- /* disable master clock if host does not trust, or does not
- * use, TSC based clocksource.
+ /*
+ * Disable master clock if host does not trust, or does not use,
+ * TSC based clocksource. Delegate queue_work() to irq_work as
+ * this is invoked with tk_core.seq write held.
*/
if (!gtod_is_based_on_tsc(gtod->clock.vclock_mode) &&
atomic_read(&kvm_guest_has_master_clock) != 0)
- queue_work(system_long_wq, &pvclock_gtod_work);
-
+ irq_work_queue(&pvclock_irq_work);
return 0;
}
@@ -7982,6 +8012,8 @@ void kvm_arch_exit(void)
cpuhp_remove_state_nocalls(CPUHP_AP_X86_KVM_CLK_ONLINE);
#ifdef CONFIG_X86_64
pvclock_gtod_unregister_notifier(&pvclock_gtod_notifier);
+ irq_work_sync(&pvclock_irq_work);
+ cancel_work_sync(&pvclock_gtod_work);
#endif
kvm_x86_ops.hardware_enable = NULL;
kvm_mmu_module_exit();
@@ -9033,6 +9065,15 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
local_irq_disable();
kvm_after_interrupt(vcpu);
+ /*
+ * Wait until after servicing IRQs to account guest time so that any
+ * ticks that occurred while running the guest are properly accounted
+ * to the guest. Waiting until IRQs are enabled degrades the accuracy
+ * of accounting via context tracking, but the loss of accuracy is
+ * acceptable for all known use cases.
+ */
+ vtime_account_guest_exit();
+
if (lapic_in_kernel(vcpu)) {
s64 delta = vcpu->arch.apic->lapic_timer.advance_expire_delta;
if (delta != S64_MIN) {
@@ -11290,7 +11331,7 @@ int kvm_handle_invpcid(struct kvm_vcpu *vcpu, unsigned long type, gva_t gva)
fallthrough;
case INVPCID_TYPE_ALL_INCL_GLOBAL:
- kvm_mmu_unload(vcpu);
+ kvm_make_request(KVM_REQ_MMU_RELOAD, vcpu);
return kvm_skip_emulated_instruction(vcpu);
default:
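One of the x86.c changes above is a reusable pattern: the pvclock notifier runs with the tk_core.seq write held, so it must not call queue_work() directly and instead bounces through an irq_work. A minimal sketch of that indirection, using hypothetical my_* names (only the pvclock identifiers come from the hunk):

    #include <linux/irq_work.h>
    #include <linux/workqueue.h>

    static void my_work_fn(struct work_struct *work)        /* hypothetical */
    {
            /* heavy work: may sleep and take locks freely here */
    }
    static DECLARE_WORK(my_work, my_work_fn);

    static void my_irq_work_fn(struct irq_work *w)          /* hypothetical */
    {
            queue_work(system_long_wq, &my_work);
    }
    static DEFINE_IRQ_WORK(my_irq_work, my_irq_work_fn);

    static int my_notifier(struct notifier_block *nb, unsigned long ev, void *p)
    {
            /* runs with tk_core.seq write held: only raise the irq_work */
            irq_work_queue(&my_irq_work);
            return 0;
    }

The matching teardown is the irq_work_sync() + cancel_work_sync() pair added to kvm_arch_exit() above, so neither the irq_work nor the queued work can run after the notifier is unregistered.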
diff --git a/arch/x86/lib/msr-smp.c b/arch/x86/lib/msr-smp.c
index fee8b9c..9009393 100644
--- a/arch/x86/lib/msr-smp.c
+++ b/arch/x86/lib/msr-smp.c
@@ -253,7 +253,7 @@ static void __wrmsr_safe_regs_on_cpu(void *info)
rv->err = wrmsr_safe_regs(rv->regs);
}
-int rdmsr_safe_regs_on_cpu(unsigned int cpu, u32 *regs)
+int rdmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8])
{
int err;
struct msr_regs_info rv;
@@ -266,7 +266,7 @@ int rdmsr_safe_regs_on_cpu(unsigned int cpu, u32 *regs)
}
EXPORT_SYMBOL(rdmsr_safe_regs_on_cpu);
-int wrmsr_safe_regs_on_cpu(unsigned int cpu, u32 *regs)
+int wrmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8])
{
int err;
struct msr_regs_info rv;
diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
index 6c5eb6f..a19374d 100644
--- a/arch/x86/mm/mem_encrypt_identity.c
+++ b/arch/x86/mm/mem_encrypt_identity.c
@@ -503,14 +503,10 @@ void __init sme_enable(struct boot_params *bp)
#define AMD_SME_BIT BIT(0)
#define AMD_SEV_BIT BIT(1)
- /*
- * Set the feature mask (SME or SEV) based on whether we are
- * running under a hypervisor.
- */
- eax = 1;
- ecx = 0;
- native_cpuid(&eax, &ebx, &ecx, &edx);
- feature_mask = (ecx & BIT(31)) ? AMD_SEV_BIT : AMD_SME_BIT;
+
+ /* Check the SEV MSR whether SEV or SME is enabled */
+ sev_status = __rdmsr(MSR_AMD64_SEV);
+ feature_mask = (sev_status & MSR_AMD64_SEV_ENABLED) ? AMD_SEV_BIT : AMD_SME_BIT;
/*
* Check for the SME/SEV feature:
@@ -530,19 +526,26 @@ void __init sme_enable(struct boot_params *bp)
/* Check if memory encryption is enabled */
if (feature_mask == AMD_SME_BIT) {
+ /*
+ * No SME if Hypervisor bit is set. This check is here to
+ * prevent a guest from trying to enable SME. For running as a
+ * KVM guest the MSR_K8_SYSCFG will be sufficient, but there
+ * might be other hypervisors which emulate that MSR as non-zero
+ * or even pass it through to the guest.
+ * A malicious hypervisor can still trick a guest into this
+ * path, but there is no way to protect against that.
+ */
+ eax = 1;
+ ecx = 0;
+ native_cpuid(&eax, &ebx, &ecx, &edx);
+ if (ecx & BIT(31))
+ return;
+
/* For SME, check the SYSCFG MSR */
msr = __rdmsr(MSR_K8_SYSCFG);
if (!(msr & MSR_K8_SYSCFG_MEM_ENCRYPT))
return;
} else {
- /* For SEV, check the SEV MSR */
- msr = __rdmsr(MSR_AMD64_SEV);
- if (!(msr & MSR_AMD64_SEV_ENABLED))
- return;
-
- /* Save SEV_STATUS to avoid reading MSR again */
- sev_status = msr;
-
/* SEV state cannot be controlled by a command line option */
sme_me_mask = me_mask;
sev_enabled = true;
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 213a3ed..d5b0bed 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1573,7 +1573,16 @@ st: if (is_imm8(insn->off))
}
if (image) {
- if (unlikely(proglen + ilen > oldproglen)) {
+ /*
+ * When populating the image, assert that:
+ *
+ * i) We do not write beyond the allocated space, and
+ * ii) addrs[i] did not change from the prior run, in order
+ * to validate assumptions made for computing branch
+ * displacements.
+ */
+ if (unlikely(proglen + ilen > oldproglen ||
+ proglen + ilen != addrs[i])) {
pr_err("bpf_jit: fatal error\n");
return -EFAULT;
}
@@ -2135,7 +2144,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
extra_pass = true;
goto skip_init_addrs;
}
- addrs = kmalloc_array(prog->len + 1, sizeof(*addrs), GFP_KERNEL);
+ addrs = kvmalloc_array(prog->len + 1, sizeof(*addrs), GFP_KERNEL);
if (!addrs) {
prog = orig_prog;
goto out_addrs;
@@ -2225,7 +2234,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
if (image)
bpf_prog_fill_jited_linfo(prog, addrs + 1);
out_addrs:
- kfree(addrs);
+ kvfree(addrs);
kfree(jit_data);
prog->aux->jit_data = NULL;
}
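The check added above is about pass convergence: the JIT runs sizing passes that record, in addrs[], where each BPF instruction ends, then a final pass that writes the image. If an instruction encodes to a different length in the emitting pass, every branch displacement derived from addrs[] is stale, so bailing out with -EFAULT is safer than emitting a corrupt image. A toy sketch of the invariant, with emit_len() as a hypothetical stand-in for the real encoder:

    /* Toy sketch of the two-phase layout. Sizing passes (image == NULL)
     * record instruction end offsets; the emitting pass checks that the
     * layout has not shifted and that it stays inside the allocation.
     */
    static int jit_pass(const struct bpf_insn *prog, int len, u8 *image,
                        u32 *addrs, int oldproglen)
    {
            int i, proglen = 0;

            for (i = 0; i < len; i++) {
                    int ilen = emit_len(&prog[i], image ? image + proglen : NULL);

                    if (image && (proglen + ilen > oldproglen ||
                                  proglen + ilen != addrs[i]))
                            return -EFAULT;   /* layout changed between passes */
                    proglen += ilen;
                    addrs[i] = proglen;       /* end offset, used for branches */
            }
            return proglen;
    }

The companion change from kmalloc_array() to kvmalloc_array() simply lets very large programs fall back to vmalloc for the addrs[] scratch array.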
diff --git a/arch/x86/net/bpf_jit_comp32.c b/arch/x86/net/bpf_jit_comp32.c
index d17b67c..6a99def 100644
--- a/arch/x86/net/bpf_jit_comp32.c
+++ b/arch/x86/net/bpf_jit_comp32.c
@@ -2276,7 +2276,16 @@ emit_cond_jmp: jmp_cond = get_cond_jmp_opcode(BPF_OP(code), false);
}
if (image) {
- if (unlikely(proglen + ilen > oldproglen)) {
+ /*
+ * When populating the image, assert that:
+ *
+ * i) We do not write beyond the allocated space, and
+ * ii) addrs[i] did not change from the prior run, in order
+ * to validate assumptions made for computing branch
+ * displacements.
+ */
+ if (unlikely(proglen + ilen > oldproglen ||
+ proglen + ilen != addrs[i])) {
pr_err("bpf_jit: fatal error\n");
return -EFAULT;
}
diff --git a/arch/x86/power/hibernate.c b/arch/x86/power/hibernate.c
index cd3914f..e94e005 100644
--- a/arch/x86/power/hibernate.c
+++ b/arch/x86/power/hibernate.c
@@ -13,8 +13,8 @@
#include <linux/kdebug.h>
#include <linux/cpu.h>
#include <linux/pgtable.h>
-
-#include <crypto/hash.h>
+#include <linux/types.h>
+#include <linux/crc32.h>
#include <asm/e820/api.h>
#include <asm/init.h>
@@ -54,95 +54,33 @@ int pfn_is_nosave(unsigned long pfn)
return pfn >= nosave_begin_pfn && pfn < nosave_end_pfn;
}
-
-#define MD5_DIGEST_SIZE 16
-
struct restore_data_record {
unsigned long jump_address;
unsigned long jump_address_phys;
unsigned long cr3;
unsigned long magic;
- u8 e820_digest[MD5_DIGEST_SIZE];
+ unsigned long e820_checksum;
};
-#if IS_BUILTIN(CONFIG_CRYPTO_MD5)
/**
- * get_e820_md5 - calculate md5 according to given e820 table
+ * compute_e820_crc32 - calculate crc32 of a given e820 table
*
* @table: the e820 table to be calculated
- * @buf: the md5 result to be stored to
+ *
+ * Return: the resulting checksum
*/
-static int get_e820_md5(struct e820_table *table, void *buf)
+static inline u32 compute_e820_crc32(struct e820_table *table)
{
- struct crypto_shash *tfm;
- struct shash_desc *desc;
- int size;
- int ret = 0;
-
- tfm = crypto_alloc_shash("md5", 0, 0);
- if (IS_ERR(tfm))
- return -ENOMEM;
-
- desc = kmalloc(sizeof(struct shash_desc) + crypto_shash_descsize(tfm),
- GFP_KERNEL);
- if (!desc) {
- ret = -ENOMEM;
- goto free_tfm;
- }
-
- desc->tfm = tfm;
-
- size = offsetof(struct e820_table, entries) +
+ int size = offsetof(struct e820_table, entries) +
sizeof(struct e820_entry) * table->nr_entries;
- if (crypto_shash_digest(desc, (u8 *)table, size, buf))
- ret = -EINVAL;
-
- kfree_sensitive(desc);
-
-free_tfm:
- crypto_free_shash(tfm);
- return ret;
+ return ~crc32_le(~0, (unsigned char const *)table, size);
}
-static int hibernation_e820_save(void *buf)
-{
- return get_e820_md5(e820_table_firmware, buf);
-}
-
-static bool hibernation_e820_mismatch(void *buf)
-{
- int ret;
- u8 result[MD5_DIGEST_SIZE];
-
- memset(result, 0, MD5_DIGEST_SIZE);
- /* If there is no digest in suspend kernel, let it go. */
- if (!memcmp(result, buf, MD5_DIGEST_SIZE))
- return false;
-
- ret = get_e820_md5(e820_table_firmware, result);
- if (ret)
- return true;
-
- return memcmp(result, buf, MD5_DIGEST_SIZE) ? true : false;
-}
-#else
-static int hibernation_e820_save(void *buf)
-{
- return 0;
-}
-
-static bool hibernation_e820_mismatch(void *buf)
-{
- /* If md5 is not builtin for restore kernel, let it go. */
- return false;
-}
-#endif
-
#ifdef CONFIG_X86_64
-#define RESTORE_MAGIC 0x23456789ABCDEF01UL
+#define RESTORE_MAGIC 0x23456789ABCDEF02UL
#else
-#define RESTORE_MAGIC 0x12345678UL
+#define RESTORE_MAGIC 0x12345679UL
#endif
/**
@@ -179,7 +117,8 @@ int arch_hibernation_header_save(void *addr, unsigned int max_size)
*/
rdr->cr3 = restore_cr3 & ~CR3_PCID_MASK;
- return hibernation_e820_save(rdr->e820_digest);
+ rdr->e820_checksum = compute_e820_crc32(e820_table_firmware);
+ return 0;
}
/**
@@ -200,7 +139,7 @@ int arch_hibernation_header_restore(void *addr)
jump_address_phys = rdr->jump_address_phys;
restore_cr3 = rdr->cr3;
- if (hibernation_e820_mismatch(rdr->e820_digest)) {
+ if (rdr->e820_checksum != compute_e820_crc32(e820_table_firmware)) {
pr_crit("Hibernate inconsistent memory map detected!\n");
return -ENODEV;
}
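The hibernate change above stores a CRC32 of the firmware e820 table directly in the restore header instead of an MD5 digest, and bumps RESTORE_MAGIC so headers written by older kernels are rejected outright. A condensed sketch of the resulting save/verify flow (note that ~crc32_le(~0, buf, len) should give the conventional CRC-32, i.e. the same value userspace tools such as zlib's crc32() compute):

    static u32 e820_crc(const struct e820_table *t)
    {
            size_t size = offsetof(struct e820_table, entries) +
                          sizeof(struct e820_entry) * t->nr_entries;

            return ~crc32_le(~0, (const unsigned char *)t, size);
    }

    /* save:    rdr->e820_checksum = e820_crc(e820_table_firmware);          */
    /* restore: if (rdr->e820_checksum != e820_crc(e820_table_firmware))     */
    /*                  refuse to resume (-ENODEV), as in the hunk above.    */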
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index 9a5a50c..8064df6 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -1262,16 +1262,16 @@ asmlinkage __visible void __init xen_start_kernel(void)
/* Get mfn list */
xen_build_dynamic_phys_to_machine();
+ /* Work out if we support NX */
+ get_cpu_cap(&boot_cpu_data);
+ x86_configure_nx();
+
/*
* Set up kernel GDT and segment registers, mainly so that
* -fstack-protector code can be executed.
*/
xen_setup_gdt(0);
- /* Work out if we support NX */
- get_cpu_cap(&boot_cpu_data);
- x86_configure_nx();
-
/* Determine virtual and physical address sizes */
get_cpu_address_sizes(&boot_cpu_data);
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 5720978..c91dca6 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -2210,10 +2210,9 @@ static void bfq_remove_request(struct request_queue *q,
}
-static bool bfq_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio,
+static bool bfq_bio_merge(struct request_queue *q, struct bio *bio,
unsigned int nr_segs)
{
- struct request_queue *q = hctx->queue;
struct bfq_data *bfqd = q->elevator->elevator_data;
struct request *free = NULL;
/*
diff --git a/block/bio.c b/block/bio.c
index fa01bef3..9c931df 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -313,7 +313,7 @@ static struct bio *__bio_chain_endio(struct bio *bio)
{
struct bio *parent = bio->bi_private;
- if (!parent->bi_status)
+ if (bio->bi_status && !parent->bi_status)
parent->bi_status = bio->bi_status;
bio_put(bio);
return parent;
diff --git a/block/blk-iocost.c b/block/blk-iocost.c
index 7e963b4..aaae531 100644
--- a/block/blk-iocost.c
+++ b/block/blk-iocost.c
@@ -1023,7 +1023,17 @@ static void __propagate_weights(struct ioc_gq *iocg, u32 active, u32 inuse,
lockdep_assert_held(&ioc->lock);
- inuse = clamp_t(u32, inuse, 1, active);
+ /*
+ * For an active leaf node, its inuse shouldn't be zero or exceed
+ * @active. An active internal node's inuse is solely determined by the
+ * inuse to active ratio of its children regardless of @inuse.
+ */
+ if (list_empty(&iocg->active_list) && iocg->child_active_sum) {
+ inuse = DIV64_U64_ROUND_UP(active * iocg->child_inuse_sum,
+ iocg->child_active_sum);
+ } else {
+ inuse = clamp_t(u32, inuse, 1, active);
+ }
iocg->last_inuse = iocg->inuse;
if (save)
@@ -1040,7 +1050,7 @@ static void __propagate_weights(struct ioc_gq *iocg, u32 active, u32 inuse,
/* update the level sums */
parent->child_active_sum += (s32)(active - child->active);
parent->child_inuse_sum += (s32)(inuse - child->inuse);
- /* apply the udpates */
+ /* apply the updates */
child->active = active;
child->inuse = inuse;
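For an active internal node the new branch derives inuse from its children rather than from the caller-supplied value. A worked example with illustrative numbers (not taken from the patch):

    /* An internal iocg with active = 200 whose children sum to
     * child_active_sum = 240 and child_inuse_sum = 60 ends up with
     *   inuse = DIV64_U64_ROUND_UP(200 * 60, 240) = 50
     * i.e. the same 25% inuse/active ratio its children collectively
     * have, instead of whatever stale value the clamp-only path kept.
     */
    u32 active = 200, child_inuse_sum = 60, child_active_sum = 240;
    u32 inuse = DIV64_U64_ROUND_UP((u64)active * child_inuse_sum,
                                   child_active_sum);   /* == 50 */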
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index d1eafe2..581be65 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -348,14 +348,16 @@ bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio,
unsigned int nr_segs)
{
struct elevator_queue *e = q->elevator;
- struct blk_mq_ctx *ctx = blk_mq_get_ctx(q);
- struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, bio->bi_opf, ctx);
+ struct blk_mq_ctx *ctx;
+ struct blk_mq_hw_ctx *hctx;
bool ret = false;
enum hctx_type type;
if (e && e->type->ops.bio_merge)
- return e->type->ops.bio_merge(hctx, bio, nr_segs);
+ return e->type->ops.bio_merge(q, bio, nr_segs);
+ ctx = blk_mq_get_ctx(q);
+ hctx = blk_mq_map_queue(q, bio->bi_opf, ctx);
type = hctx->type;
if (!(hctx->flags & BLK_MQ_F_SHOULD_MERGE) ||
list_empty_careful(&ctx->rq_lists[type]))
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 2a1eff6..4bf9449 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2203,8 +2203,9 @@ blk_qc_t blk_mq_submit_bio(struct bio *bio)
/* Bypass scheduler for flush requests */
blk_insert_flush(rq);
blk_mq_run_hw_queue(data.hctx, true);
- } else if (plug && (q->nr_hw_queues == 1 || q->mq_ops->commit_rqs ||
- !blk_queue_nonrot(q))) {
+ } else if (plug && (q->nr_hw_queues == 1 ||
+ blk_mq_is_sbitmap_shared(rq->mq_hctx->flags) ||
+ q->mq_ops->commit_rqs || !blk_queue_nonrot(q))) {
/*
* Use plugging if we have a ->commit_rqs() hook as well, as
* we know the driver uses bd->last in a smart fashion.
@@ -3255,10 +3256,12 @@ EXPORT_SYMBOL(blk_mq_init_allocated_queue);
/* tags can _not_ be used after returning from blk_mq_exit_queue */
void blk_mq_exit_queue(struct request_queue *q)
{
- struct blk_mq_tag_set *set = q->tag_set;
+ struct blk_mq_tag_set *set = q->tag_set;
- blk_mq_del_queue_tag_set(q);
+ /* Checks hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED. */
blk_mq_exit_hw_queues(q, set, set->nr_hw_queues);
+ /* May clear BLK_MQ_F_TAG_QUEUE_SHARED in hctx->flags. */
+ blk_mq_del_queue_tag_set(q);
}
static int __blk_mq_alloc_rq_maps(struct blk_mq_tag_set *set)
diff --git a/block/ioctl.c b/block/ioctl.c
index 3be4d0e..ed240e1 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -98,6 +98,8 @@ static int blkdev_reread_part(struct block_device *bdev, fmode_t mode)
return -EINVAL;
if (!capable(CAP_SYS_ADMIN))
return -EACCES;
+ if (bdev->bd_part_count)
+ return -EBUSY;
/*
* Reopen the device to revalidate the driver state and force a
diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c
index dc89199..7f9ef77 100644
--- a/block/kyber-iosched.c
+++ b/block/kyber-iosched.c
@@ -562,11 +562,12 @@ static void kyber_limit_depth(unsigned int op, struct blk_mq_alloc_data *data)
}
}
-static bool kyber_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio,
+static bool kyber_bio_merge(struct request_queue *q, struct bio *bio,
unsigned int nr_segs)
{
+ struct blk_mq_ctx *ctx = blk_mq_get_ctx(q);
+ struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, bio->bi_opf, ctx);
struct kyber_hctx_data *khd = hctx->sched_data;
- struct blk_mq_ctx *ctx = blk_mq_get_ctx(hctx->queue);
struct kyber_ctx_queue *kcq = &khd->kcqs[ctx->index_hw[hctx->type]];
unsigned int sched_domain = kyber_sched_domain(bio->bi_opf);
struct list_head *rq_list = &kcq->rq_list[sched_domain];
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 800ac90..2b9635d 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -461,10 +461,9 @@ static int dd_request_merge(struct request_queue *q, struct request **rq,
return ELEVATOR_NO_MERGE;
}
-static bool dd_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio,
+static bool dd_bio_merge(struct request_queue *q, struct bio *bio,
unsigned int nr_segs)
{
- struct request_queue *q = hctx->queue;
struct deadline_data *dd = q->elevator->elevator_data;
struct request *free = NULL;
bool ret;
diff --git a/crypto/api.c b/crypto/api.c
index ed08cbd..c4eda56 100644
--- a/crypto/api.c
+++ b/crypto/api.c
@@ -562,7 +562,7 @@ void crypto_destroy_tfm(void *mem, struct crypto_tfm *tfm)
{
struct crypto_alg *alg;
- if (unlikely(!mem))
+ if (IS_ERR_OR_NULL(mem))
return;
alg = tfm->__crt_alg;
diff --git a/crypto/async_tx/async_xor.c b/crypto/async_tx/async_xor.c
index a057ecb..6cd7f70 100644
--- a/crypto/async_tx/async_xor.c
+++ b/crypto/async_tx/async_xor.c
@@ -233,6 +233,7 @@ async_xor_offs(struct page *dest, unsigned int offset,
if (submit->flags & ASYNC_TX_XOR_DROP_DST) {
src_cnt--;
src_list++;
+ src_offs++;
}
/* wait for any prerequisite operations */
diff --git a/crypto/rng.c b/crypto/rng.c
index a888d84..fea082b 100644
--- a/crypto/rng.c
+++ b/crypto/rng.c
@@ -34,22 +34,18 @@ int crypto_rng_reset(struct crypto_rng *tfm, const u8 *seed, unsigned int slen)
u8 *buf = NULL;
int err;
- crypto_stats_get(alg);
if (!seed && slen) {
buf = kmalloc(slen, GFP_KERNEL);
- if (!buf) {
- crypto_alg_put(alg);
+ if (!buf)
return -ENOMEM;
- }
err = get_random_bytes_wait(buf, slen);
- if (err) {
- crypto_alg_put(alg);
+ if (err)
goto out;
- }
seed = buf;
}
+ crypto_stats_get(alg);
err = crypto_rng_alg(tfm)->seed(tfm, seed, slen);
crypto_stats_rng_seed(alg, err);
out:
diff --git a/drivers/acpi/acpi_apd.c b/drivers/acpi/acpi_apd.c
index 39359ce..645e82a 100644
--- a/drivers/acpi/acpi_apd.c
+++ b/drivers/acpi/acpi_apd.c
@@ -226,6 +226,7 @@ static const struct acpi_device_id acpi_apd_device_ids[] = {
{ "AMDI0010", APD_ADDR(wt_i2c_desc) },
{ "AMD0020", APD_ADDR(cz_uart_desc) },
{ "AMDI0020", APD_ADDR(cz_uart_desc) },
+ { "AMDI0022", APD_ADDR(cz_uart_desc) },
{ "AMD0030", },
{ "AMD0040", APD_ADDR(fch_misc_desc)},
{ "HYGO0010", APD_ADDR(wt_i2c_desc) },
diff --git a/drivers/acpi/arm64/gtdt.c b/drivers/acpi/arm64/gtdt.c
index f2d0e59..0a0a982 100644
--- a/drivers/acpi/arm64/gtdt.c
+++ b/drivers/acpi/arm64/gtdt.c
@@ -329,7 +329,7 @@ static int __init gtdt_import_sbsa_gwdt(struct acpi_gtdt_watchdog *wd,
int index)
{
struct platform_device *pdev;
- int irq = map_gt_gsi(wd->timer_interrupt, wd->timer_flags);
+ int irq;
/*
* According to SBSA specification the size of refresh and control
@@ -338,7 +338,7 @@ static int __init gtdt_import_sbsa_gwdt(struct acpi_gtdt_watchdog *wd,
struct resource res[] = {
DEFINE_RES_MEM(wd->control_frame_address, SZ_4K),
DEFINE_RES_MEM(wd->refresh_frame_address, SZ_4K),
- DEFINE_RES_IRQ(irq),
+ {},
};
int nr_res = ARRAY_SIZE(res);
@@ -348,10 +348,11 @@ static int __init gtdt_import_sbsa_gwdt(struct acpi_gtdt_watchdog *wd,
if (!(wd->refresh_frame_address && wd->control_frame_address)) {
pr_err(FW_BUG "failed to get the Watchdog base address.\n");
- acpi_unregister_gsi(wd->timer_interrupt);
return -EINVAL;
}
+ irq = map_gt_gsi(wd->timer_interrupt, wd->timer_flags);
+ res[2] = (struct resource)DEFINE_RES_IRQ(irq);
if (irq <= 0) {
pr_warn("failed to map the Watchdog interrupt.\n");
nr_res--;
@@ -364,7 +365,8 @@ static int __init gtdt_import_sbsa_gwdt(struct acpi_gtdt_watchdog *wd,
*/
pdev = platform_device_register_simple("sbsa-gwdt", index, res, nr_res);
if (IS_ERR(pdev)) {
- acpi_unregister_gsi(wd->timer_interrupt);
+ if (irq > 0)
+ acpi_unregister_gsi(wd->timer_interrupt);
return PTR_ERR(pdev);
}
diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
index 7a99b19..0a2da06 100644
--- a/drivers/acpi/cppc_acpi.c
+++ b/drivers/acpi/cppc_acpi.c
@@ -118,23 +118,15 @@ static DEFINE_PER_CPU(struct cpc_desc *, cpc_desc_ptr);
*/
#define NUM_RETRIES 500ULL
-struct cppc_attr {
- struct attribute attr;
- ssize_t (*show)(struct kobject *kobj,
- struct attribute *attr, char *buf);
- ssize_t (*store)(struct kobject *kobj,
- struct attribute *attr, const char *c, ssize_t count);
-};
-
#define define_one_cppc_ro(_name) \
-static struct cppc_attr _name = \
+static struct kobj_attribute _name = \
__ATTR(_name, 0444, show_##_name, NULL)
#define to_cpc_desc(a) container_of(a, struct cpc_desc, kobj)
#define show_cppc_data(access_fn, struct_name, member_name) \
static ssize_t show_##member_name(struct kobject *kobj, \
- struct attribute *attr, char *buf) \
+ struct kobj_attribute *attr, char *buf) \
{ \
struct cpc_desc *cpc_ptr = to_cpc_desc(kobj); \
struct struct_name st_name = {0}; \
@@ -160,7 +152,7 @@ show_cppc_data(cppc_get_perf_ctrs, cppc_perf_fb_ctrs, reference_perf);
show_cppc_data(cppc_get_perf_ctrs, cppc_perf_fb_ctrs, wraparound_time);
static ssize_t show_feedback_ctrs(struct kobject *kobj,
- struct attribute *attr, char *buf)
+ struct kobj_attribute *attr, char *buf)
{
struct cpc_desc *cpc_ptr = to_cpc_desc(kobj);
struct cppc_perf_fb_ctrs fb_ctrs = {0};
diff --git a/drivers/acpi/custom_method.c b/drivers/acpi/custom_method.c
index 7b54dc9..4058e02 100644
--- a/drivers/acpi/custom_method.c
+++ b/drivers/acpi/custom_method.c
@@ -42,6 +42,8 @@ static ssize_t cm_write(struct file *file, const char __user * user_buf,
sizeof(struct acpi_table_header)))
return -EFAULT;
uncopied_bytes = max_size = table.length;
+ /* make sure the buf is not allocated */
+ kfree(buf);
buf = kzalloc(max_size, GFP_KERNEL);
if (!buf)
return -ENOMEM;
@@ -55,6 +57,7 @@ static ssize_t cm_write(struct file *file, const char __user * user_buf,
(*ppos + count < count) ||
(count > uncopied_bytes)) {
kfree(buf);
+ buf = NULL;
return -EINVAL;
}
@@ -76,7 +79,6 @@ static ssize_t cm_write(struct file *file, const char __user * user_buf,
add_taint(TAINT_OVERRIDDEN_ACPI_TABLE, LOCKDEP_NOW_UNRELIABLE);
}
- kfree(buf);
return count;
}
diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c
index ef77dbc..48ff682 100644
--- a/drivers/acpi/device_pm.c
+++ b/drivers/acpi/device_pm.c
@@ -1301,6 +1301,7 @@ int acpi_dev_pm_attach(struct device *dev, bool power_on)
{"PNP0C0B", }, /* Generic ACPI fan */
{"INT3404", }, /* Fan */
{"INTC1044", }, /* Fan for Tiger Lake generation */
+ {"INTC1048", }, /* Fan for Alder Lake generation */
{}
};
struct acpi_device *adev = ACPI_COMPANION(dev);
diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
index 4e30396..fb161a2 100644
--- a/drivers/acpi/processor_idle.c
+++ b/drivers/acpi/processor_idle.c
@@ -545,9 +545,7 @@ static int acpi_idle_play_dead(struct cpuidle_device *dev, int index)
return -ENODEV;
#if defined(CONFIG_X86) && defined(CONFIG_HOTPLUG_CPU)
- /* If NMI wants to wake up CPU0, start CPU0. */
- if (wakeup_cpu0())
- start_cpu0();
+ cond_wakeup_cpu0();
#endif
}
diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
index b47f14a..de0533b 100644
--- a/drivers/acpi/scan.c
+++ b/drivers/acpi/scan.c
@@ -705,6 +705,7 @@ int acpi_device_add(struct acpi_device *device,
result = acpi_device_set_name(device, acpi_device_bus_id);
if (result) {
+ kfree_const(acpi_device_bus_id->bus_id);
kfree(acpi_device_bus_id);
goto err_unlock;
}
diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
index 00ba8e5..33192a8 100644
--- a/drivers/ata/ahci.c
+++ b/drivers/ata/ahci.c
@@ -1772,6 +1772,11 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
hpriv->flags |= AHCI_HFLAG_NO_DEVSLP;
#ifdef CONFIG_ARM64
+ if (pdev->vendor == PCI_VENDOR_ID_HUAWEI &&
+ pdev->device == 0xa235 &&
+ pdev->revision < 0x30)
+ hpriv->flags |= AHCI_HFLAG_NO_SXS;
+
if (pdev->vendor == 0x177d && pdev->device == 0xa01c)
hpriv->irq_handler = ahci_thunderx_irq_handler;
#endif
diff --git a/drivers/ata/ahci.h b/drivers/ata/ahci.h
index 98b8baa..d1f284f 100644
--- a/drivers/ata/ahci.h
+++ b/drivers/ata/ahci.h
@@ -242,6 +242,7 @@ enum {
suspend/resume */
AHCI_HFLAG_IGN_NOTSUPP_POWER_ON = (1 << 27), /* ignore -EOPNOTSUPP
from phy_power_on() */
+ AHCI_HFLAG_NO_SXS = (1 << 28), /* SXS not supported */
/* ap->flags bits */
diff --git a/drivers/ata/libahci.c b/drivers/ata/libahci.c
index ea5bf5f..fec2e97 100644
--- a/drivers/ata/libahci.c
+++ b/drivers/ata/libahci.c
@@ -493,6 +493,11 @@ void ahci_save_initial_config(struct device *dev, struct ahci_host_priv *hpriv)
cap |= HOST_CAP_ALPM;
}
+ if ((cap & HOST_CAP_SXS) && (hpriv->flags & AHCI_HFLAG_NO_SXS)) {
+ dev_info(dev, "controller does not support SXS, disabling CAP_SXS\n");
+ cap &= ~HOST_CAP_SXS;
+ }
+
if (hpriv->force_port_map && port_map != hpriv->force_port_map) {
dev_info(dev, "forcing port_map 0x%x -> 0x%x\n",
port_map, hpriv->force_port_map);
diff --git a/drivers/ata/libahci_platform.c b/drivers/ata/libahci_platform.c
index de638da..b2f5520 100644
--- a/drivers/ata/libahci_platform.c
+++ b/drivers/ata/libahci_platform.c
@@ -582,11 +582,13 @@ int ahci_platform_init_host(struct platform_device *pdev,
int i, irq, n_ports, rc;
irq = platform_get_irq(pdev, 0);
- if (irq <= 0) {
+ if (irq < 0) {
if (irq != -EPROBE_DEFER)
dev_err(dev, "no irq\n");
return irq;
}
+ if (!irq)
+ return -EINVAL;
hpriv->irq = irq;
diff --git a/drivers/ata/pata_arasan_cf.c b/drivers/ata/pata_arasan_cf.c
index e9cf31f..63f3944 100644
--- a/drivers/ata/pata_arasan_cf.c
+++ b/drivers/ata/pata_arasan_cf.c
@@ -818,12 +818,19 @@ static int arasan_cf_probe(struct platform_device *pdev)
else
quirk = CF_BROKEN_UDMA; /* as it is on spear1340 */
- /* if irq is 0, support only PIO */
- acdev->irq = platform_get_irq(pdev, 0);
- if (acdev->irq)
+ /*
+ * If there's an error getting IRQ (or we do get IRQ0),
+ * support only PIO
+ */
+ ret = platform_get_irq(pdev, 0);
+ if (ret > 0) {
+ acdev->irq = ret;
irq_handler = arasan_cf_interrupt;
- else
+ } else if (ret == -EPROBE_DEFER) {
+ return ret;
+ } else {
quirk |= CF_BROKEN_MWDMA | CF_BROKEN_UDMA;
+ }
acdev->pbase = res->start;
acdev->vbase = devm_ioremap(&pdev->dev, res->start,
diff --git a/drivers/ata/pata_ixp4xx_cf.c b/drivers/ata/pata_ixp4xx_cf.c
index d1644a8e..abc0e87 100644
--- a/drivers/ata/pata_ixp4xx_cf.c
+++ b/drivers/ata/pata_ixp4xx_cf.c
@@ -165,8 +165,12 @@ static int ixp4xx_pata_probe(struct platform_device *pdev)
return -ENOMEM;
irq = platform_get_irq(pdev, 0);
- if (irq)
+ if (irq > 0)
irq_set_irq_type(irq, IRQ_TYPE_EDGE_RISING);
+ else if (irq < 0)
+ return irq;
+ else
+ return -EINVAL;
/* Setup expansion bus chip selects */
*data->cs0_cfg = data->cs0_bits;
diff --git a/drivers/ata/sata_mv.c b/drivers/ata/sata_mv.c
index 664ef65..b62446e 100644
--- a/drivers/ata/sata_mv.c
+++ b/drivers/ata/sata_mv.c
@@ -4097,6 +4097,10 @@ static int mv_platform_probe(struct platform_device *pdev)
n_ports = mv_platform_data->n_ports;
irq = platform_get_irq(pdev, 0);
}
+ if (irq < 0)
+ return irq;
+ if (!irq)
+ return -EINVAL;
host = ata_host_alloc_pinfo(&pdev->dev, ppi, n_ports);
hpriv = devm_kzalloc(&pdev->dev, sizeof(*hpriv), GFP_KERNEL);
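The libahci_platform, pata_ixp4xx_cf and sata_mv hunks above converge on the same platform_get_irq() handling (pata_arasan_cf degrades to PIO instead of failing, but applies the same classification): negative return values, including -EPROBE_DEFER, are propagated, and an IRQ number of 0 is treated as invalid rather than as "no interrupt". A minimal sketch of the pattern, with foo_probe() as a hypothetical probe function:

    static int foo_probe(struct platform_device *pdev)   /* hypothetical */
    {
            int irq = platform_get_irq(pdev, 0);

            if (irq < 0)
                    return irq;        /* propagates -EPROBE_DEFER and real errors */
            if (!irq)
                    return -EINVAL;    /* IRQ0 is not a valid Linux interrupt */

            /* ... devm_request_irq(&pdev->dev, irq, ...) ... */
            return 0;
    }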
diff --git a/drivers/base/core.c b/drivers/base/core.c
index 96f73aa..8b3c3fc 100644
--- a/drivers/base/core.c
+++ b/drivers/base/core.c
@@ -83,6 +83,11 @@ int device_links_read_lock_held(void)
{
return srcu_read_lock_held(&device_links_srcu);
}
+
+static void device_link_synchronize_removal(void)
+{
+ synchronize_srcu(&device_links_srcu);
+}
#else /* !CONFIG_SRCU */
static DECLARE_RWSEM(device_links_lock);
@@ -113,6 +118,10 @@ int device_links_read_lock_held(void)
return lockdep_is_held(&device_links_lock);
}
#endif
+
+static inline void device_link_synchronize_removal(void)
+{
+}
#endif /* !CONFIG_SRCU */
static bool device_is_ancestor(struct device *dev, struct device *target)
@@ -332,8 +341,13 @@ static struct attribute *devlink_attrs[] = {
};
ATTRIBUTE_GROUPS(devlink);
-static void device_link_free(struct device_link *link)
+static void device_link_release_fn(struct work_struct *work)
{
+ struct device_link *link = container_of(work, struct device_link, rm_work);
+
+ /* Ensure that all references to the link object have been dropped. */
+ device_link_synchronize_removal();
+
while (refcount_dec_not_one(&link->rpm_active))
pm_runtime_put(link->supplier);
@@ -342,24 +356,19 @@ static void device_link_free(struct device_link *link)
kfree(link);
}
-#ifdef CONFIG_SRCU
-static void __device_link_free_srcu(struct rcu_head *rhead)
-{
- device_link_free(container_of(rhead, struct device_link, rcu_head));
-}
-
static void devlink_dev_release(struct device *dev)
{
struct device_link *link = to_devlink(dev);
- call_srcu(&device_links_srcu, &link->rcu_head, __device_link_free_srcu);
+ INIT_WORK(&link->rm_work, device_link_release_fn);
+ /*
+ * It may take a while to complete this work because of the SRCU
+ * synchronization in device_link_release_fn() and if the consumer or
+ * supplier devices get deleted when it runs, so put it into the "long"
+ * workqueue.
+ */
+ queue_work(system_long_wq, &link->rm_work);
}
-#else
-static void devlink_dev_release(struct device *dev)
-{
- device_link_free(to_devlink(dev));
-}
-#endif
static struct class devlink_class = {
.name = "devlink",
diff --git a/drivers/base/dd.c b/drivers/base/dd.c
index 43130d6..8d7f94e 100644
--- a/drivers/base/dd.c
+++ b/drivers/base/dd.c
@@ -292,14 +292,16 @@ int driver_deferred_probe_check_state(struct device *dev)
static void deferred_probe_timeout_work_func(struct work_struct *work)
{
- struct device_private *private, *p;
+ struct device_private *p;
driver_deferred_probe_timeout = 0;
driver_deferred_probe_trigger();
flush_work(&deferred_probe_work);
- list_for_each_entry_safe(private, p, &deferred_probe_pending_list, deferred_probe)
- dev_info(private->device, "deferred probe pending\n");
+ mutex_lock(&deferred_probe_mutex);
+ list_for_each_entry(p, &deferred_probe_pending_list, deferred_probe)
+ dev_info(p->device, "deferred probe pending\n");
+ mutex_unlock(&deferred_probe_mutex);
wake_up_all(&probe_timeout_waitqueue);
}
static DECLARE_DELAYED_WORK(deferred_probe_timeout_work, deferred_probe_timeout_work_func);
diff --git a/drivers/base/devtmpfs.c b/drivers/base/devtmpfs.c
index 0ff04cd..6bf82b3 100644
--- a/drivers/base/devtmpfs.c
+++ b/drivers/base/devtmpfs.c
@@ -420,7 +420,6 @@ static int __init devtmpfs_setup(void *p)
init_chroot(".");
out:
*(int *)p = err;
- complete(&setup_done);
return err;
}
@@ -433,6 +432,7 @@ static int __ref devtmpfsd(void *p)
{
int err = devtmpfs_setup(p);
+ complete(&setup_done);
if (err)
return err;
devtmpfs_work_loop();
diff --git a/drivers/base/node.c b/drivers/base/node.c
index 6ffa470..21965de 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -268,21 +268,20 @@ static void node_init_cache_dev(struct node *node)
if (!dev)
return;
+ device_initialize(dev);
dev->parent = &node->dev;
dev->release = node_cache_release;
if (dev_set_name(dev, "memory_side_cache"))
- goto free_dev;
+ goto put_device;
- if (device_register(dev))
- goto free_name;
+ if (device_add(dev))
+ goto put_device;
pm_runtime_no_callbacks(dev);
node->cache_dev = dev;
return;
-free_name:
- kfree_const(dev->kobj.name);
-free_dev:
- kfree(dev);
+put_device:
+ put_device(dev);
}
/**
@@ -319,25 +318,24 @@ void node_add_cache(unsigned int nid, struct node_cache_attrs *cache_attrs)
return;
dev = &info->dev;
+ device_initialize(dev);
dev->parent = node->cache_dev;
dev->release = node_cacheinfo_release;
dev->groups = cache_groups;
if (dev_set_name(dev, "index%d", cache_attrs->level))
- goto free_cache;
+ goto put_device;
info->cache_attrs = *cache_attrs;
- if (device_register(dev)) {
+ if (device_add(dev)) {
dev_warn(&node->dev, "failed to add cache level:%d\n",
cache_attrs->level);
- goto free_name;
+ goto put_device;
}
pm_runtime_no_callbacks(dev);
list_add_tail(&info->node, &node->cache_attrs);
return;
-free_name:
- kfree_const(dev->kobj.name);
-free_cache:
- kfree(info);
+put_device:
+ put_device(dev);
}
static void node_remove_caches(struct node *node)
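The node.c conversion above is the standard fix for the device_register() + kfree() anti-pattern: once device_initialize() has run, the object is refcounted and must be released through put_device() so the release callback frees it exactly once, whatever point the error occurs at. A minimal sketch of that ownership model, using a hypothetical foo wrapper:

    struct foo {                       /* hypothetical wrapper object */
            struct device dev;
    };

    static void foo_release(struct device *dev)
    {
            kfree(container_of(dev, struct foo, dev));
    }

    static int foo_create(struct device *parent)
    {
            struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL);
            int err;

            if (!f)
                    return -ENOMEM;

            device_initialize(&f->dev);            /* refcount live from here on */
            f->dev.parent = parent;
            f->dev.release = foo_release;

            err = dev_set_name(&f->dev, "foo0");
            if (!err)
                    err = device_add(&f->dev);
            if (err)
                    put_device(&f->dev);           /* foo_release() frees f */
            return err;
    }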
diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
index d6d73ff..bc649da 100644
--- a/drivers/base/power/runtime.c
+++ b/drivers/base/power/runtime.c
@@ -1637,6 +1637,7 @@ void pm_runtime_init(struct device *dev)
dev->power.request_pending = false;
dev->power.request = RPM_REQ_NONE;
dev->power.deferred_resume = false;
+ dev->power.needs_force_resume = 0;
INIT_WORK(&dev->power.work, pm_runtime_work);
dev->power.timer_expires = 0;
@@ -1804,10 +1805,12 @@ int pm_runtime_force_suspend(struct device *dev)
* its parent, but set its status to RPM_SUSPENDED anyway in case this
* function will be called again for it in the meantime.
*/
- if (pm_runtime_need_not_resume(dev))
+ if (pm_runtime_need_not_resume(dev)) {
pm_runtime_set_suspended(dev);
- else
+ } else {
__update_runtime_status(dev, RPM_SUSPENDED);
+ dev->power.needs_force_resume = 1;
+ }
return 0;
@@ -1834,7 +1837,7 @@ int pm_runtime_force_resume(struct device *dev)
int (*callback)(struct device *);
int ret = 0;
- if (!pm_runtime_status_suspended(dev) || pm_runtime_need_not_resume(dev))
+ if (!pm_runtime_status_suspended(dev) || !dev->power.needs_force_resume)
goto out;
/*
@@ -1853,6 +1856,7 @@ int pm_runtime_force_resume(struct device *dev)
pm_runtime_mark_last_busy(dev);
out:
+ dev->power.needs_force_resume = 0;
pm_runtime_enable(dev);
return ret;
}
diff --git a/drivers/base/regmap/regmap-debugfs.c b/drivers/base/regmap/regmap-debugfs.c
index ff2ee87..211a335 100644
--- a/drivers/base/regmap/regmap-debugfs.c
+++ b/drivers/base/regmap/regmap-debugfs.c
@@ -660,6 +660,7 @@ void regmap_debugfs_exit(struct regmap *map)
regmap_debugfs_free_dump_cache(map);
mutex_unlock(&map->cache_lock);
kfree(map->debugfs_name);
+ map->debugfs_name = NULL;
} else {
struct regmap_debugfs_node *node, *tmp;
diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 5e45edd..9a70eab 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -2031,7 +2031,8 @@ static void nbd_disconnect_and_put(struct nbd_device *nbd)
* config ref and try to destroy the workqueue from inside the work
* queue.
*/
- flush_workqueue(nbd->recv_workq);
+ if (nbd->recv_workq)
+ flush_workqueue(nbd->recv_workq);
if (test_and_clear_bit(NBD_RT_HAS_CONFIG_REF,
&nbd->config->runtime_flags))
nbd_config_put(nbd);
diff --git a/drivers/block/null_blk.h b/drivers/block/null_blk.h
index c24d9b5..7de703f 100644
--- a/drivers/block/null_blk.h
+++ b/drivers/block/null_blk.h
@@ -20,6 +20,7 @@ struct nullb_cmd {
blk_status_t error;
struct nullb_queue *nq;
struct hrtimer timer;
+ bool fake_timeout;
};
struct nullb_queue {
diff --git a/drivers/block/null_blk_main.c b/drivers/block/null_blk_main.c
index 4685ea4..bb3686c 100644
--- a/drivers/block/null_blk_main.c
+++ b/drivers/block/null_blk_main.c
@@ -1367,10 +1367,13 @@ static blk_status_t null_handle_cmd(struct nullb_cmd *cmd, sector_t sector,
}
if (dev->zoned)
- cmd->error = null_process_zoned_cmd(cmd, op,
- sector, nr_sectors);
+ sts = null_process_zoned_cmd(cmd, op, sector, nr_sectors);
else
- cmd->error = null_process_cmd(cmd, op, sector, nr_sectors);
+ sts = null_process_cmd(cmd, op, sector, nr_sectors);
+
+ /* Do not overwrite errors (e.g. timeout errors) */
+ if (cmd->error == BLK_STS_OK)
+ cmd->error = sts;
out:
nullb_complete_cmd(cmd);
@@ -1449,8 +1452,20 @@ static bool should_requeue_request(struct request *rq)
static enum blk_eh_timer_return null_timeout_rq(struct request *rq, bool res)
{
+ struct nullb_cmd *cmd = blk_mq_rq_to_pdu(rq);
+
pr_info("rq %p timed out\n", rq);
- blk_mq_complete_request(rq);
+
+ /*
+ * If the device is marked as blocking (i.e. memory backed or zoned
+ * device), the submission path may be blocked waiting for resources
+ * and cause real timeouts. For these real timeouts, the submission
+ * path will complete the request using blk_mq_complete_request().
+ * Only fake timeouts need to execute blk_mq_complete_request() here.
+ */
+ cmd->error = BLK_STS_TIMEOUT;
+ if (cmd->fake_timeout)
+ blk_mq_complete_request(rq);
return BLK_EH_DONE;
}
@@ -1471,6 +1486,7 @@ static blk_status_t null_queue_rq(struct blk_mq_hw_ctx *hctx,
cmd->rq = bd->rq;
cmd->error = BLK_STS_OK;
cmd->nq = nq;
+ cmd->fake_timeout = should_timeout_request(bd->rq);
blk_mq_start_request(bd->rq);
@@ -1487,7 +1503,7 @@ static blk_status_t null_queue_rq(struct blk_mq_hw_ctx *hctx,
return BLK_STS_OK;
}
}
- if (should_timeout_request(bd->rq))
+ if (cmd->fake_timeout)
return BLK_STS_OK;
return null_handle_cmd(cmd, sector, nr_sectors, req_op(bd->rq));
diff --git a/drivers/block/null_blk_zoned.c b/drivers/block/null_blk_zoned.c
index 172f720..f5df82c 100644
--- a/drivers/block/null_blk_zoned.c
+++ b/drivers/block/null_blk_zoned.c
@@ -149,6 +149,7 @@ void null_free_zoned_dev(struct nullb_device *dev)
{
bitmap_free(dev->zone_locks);
kvfree(dev->zones);
+ dev->zones = NULL;
}
static inline void null_lock_zone(struct nullb_device *dev, unsigned int zno)
diff --git a/drivers/block/rnbd/rnbd-clt-sysfs.c b/drivers/block/rnbd/rnbd-clt-sysfs.c
index d9dd138..5613cd4 100644
--- a/drivers/block/rnbd/rnbd-clt-sysfs.c
+++ b/drivers/block/rnbd/rnbd-clt-sysfs.c
@@ -433,10 +433,14 @@ void rnbd_clt_remove_dev_symlink(struct rnbd_clt_dev *dev)
* i.e. rnbd_clt_unmap_dev_store() leading to a sysfs warning because
* of sysfs link already was removed already.
*/
- if (dev->blk_symlink_name && try_module_get(THIS_MODULE)) {
- sysfs_remove_link(rnbd_devs_kobj, dev->blk_symlink_name);
+ if (dev->blk_symlink_name) {
+ if (try_module_get(THIS_MODULE)) {
+ sysfs_remove_link(rnbd_devs_kobj, dev->blk_symlink_name);
+ module_put(THIS_MODULE);
+ }
+ /* It should be freed always. */
kfree(dev->blk_symlink_name);
- module_put(THIS_MODULE);
+ dev->blk_symlink_name = NULL;
}
}
diff --git a/drivers/block/rnbd/rnbd-clt.c b/drivers/block/rnbd/rnbd-clt.c
index ba334fe..71b86fe 100644
--- a/drivers/block/rnbd/rnbd-clt.c
+++ b/drivers/block/rnbd/rnbd-clt.c
@@ -679,7 +679,11 @@ static void remap_devs(struct rnbd_clt_session *sess)
return;
}
- rtrs_clt_query(sess->rtrs, &attrs);
+ err = rtrs_clt_query(sess->rtrs, &attrs);
+ if (err) {
+ pr_err("rtrs_clt_query(\"%s\"): %d\n", sess->sessname, err);
+ return;
+ }
mutex_lock(&sess->lock);
sess->max_io_size = attrs.max_io_size;
@@ -1211,7 +1215,11 @@ find_and_get_or_create_sess(const char *sessname,
err = PTR_ERR(sess->rtrs);
goto wake_up_and_put;
}
- rtrs_clt_query(sess->rtrs, &attrs);
+
+ err = rtrs_clt_query(sess->rtrs, &attrs);
+ if (err)
+ goto close_rtrs;
+
sess->max_io_size = attrs.max_io_size;
sess->queue_depth = attrs.queue_depth;
diff --git a/drivers/block/rnbd/rnbd-clt.h b/drivers/block/rnbd/rnbd-clt.h
index b193d590..2941e38 100644
--- a/drivers/block/rnbd/rnbd-clt.h
+++ b/drivers/block/rnbd/rnbd-clt.h
@@ -79,7 +79,7 @@ struct rnbd_clt_session {
DECLARE_BITMAP(cpu_queues_bm, NR_CPUS);
int __percpu *cpu_rr; /* per-cpu var for CPU round-robin */
atomic_t busy;
- int queue_depth;
+ size_t queue_depth;
u32 max_io_size;
struct blk_mq_tag_set tag_set;
struct mutex lock; /* protects state and devs_list */
diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
index a1b9df2..040829e 100644
--- a/drivers/block/xen-blkback/common.h
+++ b/drivers/block/xen-blkback/common.h
@@ -313,6 +313,7 @@ struct xen_blkif {
struct work_struct free_work;
unsigned int nr_ring_pages;
+ bool multi_ref;
/* All rings for this device. */
struct xen_blkif_ring *rings;
unsigned int nr_rings;
diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index 9860d48..6c5e937 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -998,14 +998,17 @@ static int read_per_ring_refs(struct xen_blkif_ring *ring, const char *dir)
for (i = 0; i < nr_grefs; i++) {
char ring_ref_name[RINGREF_NAME_LEN];
- snprintf(ring_ref_name, RINGREF_NAME_LEN, "ring-ref%u", i);
+ if (blkif->multi_ref)
+ snprintf(ring_ref_name, RINGREF_NAME_LEN, "ring-ref%u", i);
+ else {
+ WARN_ON(i != 0);
+ snprintf(ring_ref_name, RINGREF_NAME_LEN, "ring-ref");
+ }
+
err = xenbus_scanf(XBT_NIL, dir, ring_ref_name,
"%u", &ring_ref[i]);
if (err != 1) {
- if (nr_grefs == 1)
- break;
-
err = -EINVAL;
xenbus_dev_fatal(dev, err, "reading %s/%s",
dir, ring_ref_name);
@@ -1013,18 +1016,6 @@ static int read_per_ring_refs(struct xen_blkif_ring *ring, const char *dir)
}
}
- if (err != 1) {
- WARN_ON(nr_grefs != 1);
-
- err = xenbus_scanf(XBT_NIL, dir, "ring-ref", "%u",
- &ring_ref[0]);
- if (err != 1) {
- err = -EINVAL;
- xenbus_dev_fatal(dev, err, "reading %s/ring-ref", dir);
- return err;
- }
- }
-
err = -ENOMEM;
for (i = 0; i < nr_grefs * XEN_BLKIF_REQS_PER_PAGE; i++) {
req = kzalloc(sizeof(*req), GFP_KERNEL);
@@ -1129,10 +1120,15 @@ static int connect_ring(struct backend_info *be)
blkif->nr_rings, blkif->blk_protocol, protocol,
blkif->vbd.feature_gnt_persistent ? "persistent grants" : "");
- ring_page_order = xenbus_read_unsigned(dev->otherend,
- "ring-page-order", 0);
-
- if (ring_page_order > xen_blkif_max_ring_order) {
+ err = xenbus_scanf(XBT_NIL, dev->otherend, "ring-page-order", "%u",
+ &ring_page_order);
+ if (err != 1) {
+ blkif->nr_ring_pages = 1;
+ blkif->multi_ref = false;
+ } else if (ring_page_order <= xen_blkif_max_ring_order) {
+ blkif->nr_ring_pages = 1 << ring_page_order;
+ blkif->multi_ref = true;
+ } else {
err = -EINVAL;
xenbus_dev_fatal(dev, err,
"requested ring page order %d exceed max:%d",
@@ -1141,8 +1137,6 @@ static int connect_ring(struct backend_info *be)
return err;
}
- blkif->nr_ring_pages = 1 << ring_page_order;
-
if (blkif->nr_rings == 1)
return read_per_ring_refs(&blkif->rings[0], dev->otherend);
else {
diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
index 2953b96..175cb1c 100644
--- a/drivers/bluetooth/btusb.c
+++ b/drivers/bluetooth/btusb.c
@@ -392,7 +392,9 @@ static const struct usb_device_id blacklist_table[] = {
/* MediaTek Bluetooth devices */
{ USB_VENDOR_AND_INTERFACE_INFO(0x0e8d, 0xe0, 0x01, 0x01),
- .driver_info = BTUSB_MEDIATEK },
+ .driver_info = BTUSB_MEDIATEK |
+ BTUSB_WIDEBAND_SPEECH |
+ BTUSB_VALID_LE_STATES },
/* Additional Realtek 8723AE Bluetooth devices */
{ USB_DEVICE(0x0930, 0x021d), .driver_info = BTUSB_REALTEK },
diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
index 8cefa359..0d0386f 100644
--- a/drivers/bus/mhi/core/init.c
+++ b/drivers/bus/mhi/core/init.c
@@ -544,6 +544,7 @@ void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,
struct mhi_ring *buf_ring;
struct mhi_ring *tre_ring;
struct mhi_chan_ctxt *chan_ctxt;
+ u32 tmp;
buf_ring = &mhi_chan->buf_ring;
tre_ring = &mhi_chan->tre_ring;
@@ -554,7 +555,19 @@ void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,
vfree(buf_ring->base);
buf_ring->base = tre_ring->base = NULL;
+ tre_ring->ctxt_wp = NULL;
chan_ctxt->rbase = 0;
+ chan_ctxt->rlen = 0;
+ chan_ctxt->rp = 0;
+ chan_ctxt->wp = 0;
+
+ tmp = chan_ctxt->chcfg;
+ tmp &= ~CHAN_CTX_CHSTATE_MASK;
+ tmp |= (MHI_CH_STATE_DISABLED << CHAN_CTX_CHSTATE_SHIFT);
+ chan_ctxt->chcfg = tmp;
+
+ /* Update to all cores */
+ smp_wmb();
}
int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
@@ -1267,7 +1280,8 @@ static int mhi_driver_remove(struct device *dev)
mutex_lock(&mhi_chan->mutex);
- if (ch_state[dir] == MHI_CH_STATE_ENABLED &&
+ if ((ch_state[dir] == MHI_CH_STATE_ENABLED ||
+ ch_state[dir] == MHI_CH_STATE_STOP) &&
!mhi_chan->offload_ch)
mhi_deinit_chan_ctxt(mhi_cntrl, mhi_chan);
diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
index 2cff5dd..d86ce1a 100644
--- a/drivers/bus/mhi/core/main.c
+++ b/drivers/bus/mhi/core/main.c
@@ -220,10 +220,17 @@ static void mhi_del_ring_element(struct mhi_controller *mhi_cntrl,
smp_wmb();
}
+static bool is_valid_ring_ptr(struct mhi_ring *ring, dma_addr_t addr)
+{
+ return addr >= ring->iommu_base && addr < ring->iommu_base + ring->len;
+}
+
int mhi_destroy_device(struct device *dev, void *data)
{
+ struct mhi_chan *ul_chan, *dl_chan;
struct mhi_device *mhi_dev;
struct mhi_controller *mhi_cntrl;
+ enum mhi_ee_type ee = MHI_EE_MAX;
if (dev->bus != &mhi_bus_type)
return 0;
@@ -235,6 +242,17 @@ int mhi_destroy_device(struct device *dev, void *data)
if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
return 0;
+ ul_chan = mhi_dev->ul_chan;
+ dl_chan = mhi_dev->dl_chan;
+
+ /*
+ * If execution environment is specified, remove only those devices that
+ * started in them based on ee_mask for the channels as we move on to a
+ * different execution environment
+ */
+ if (data)
+ ee = *(enum mhi_ee_type *)data;
+
/*
* For the suspend and resume case, this function will get called
* without mhi_unregister_controller(). Hence, we need to drop the
@@ -242,11 +260,19 @@ int mhi_destroy_device(struct device *dev, void *data)
* be sure that there will be no instances of mhi_dev left after
* this.
*/
- if (mhi_dev->ul_chan)
- put_device(&mhi_dev->ul_chan->mhi_dev->dev);
+ if (ul_chan) {
+ if (ee != MHI_EE_MAX && !(ul_chan->ee_mask & BIT(ee)))
+ return 0;
- if (mhi_dev->dl_chan)
- put_device(&mhi_dev->dl_chan->mhi_dev->dev);
+ put_device(&ul_chan->mhi_dev->dev);
+ }
+
+ if (dl_chan) {
+ if (ee != MHI_EE_MAX && !(dl_chan->ee_mask & BIT(ee)))
+ return 0;
+
+ put_device(&dl_chan->mhi_dev->dev);
+ }
dev_dbg(&mhi_cntrl->mhi_dev->dev, "destroy device for chan:%s\n",
mhi_dev->name);
@@ -349,7 +375,16 @@ irqreturn_t mhi_irq_handler(int irq_number, void *dev)
struct mhi_event_ctxt *er_ctxt =
&mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
struct mhi_ring *ev_ring = &mhi_event->ring;
- void *dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
+ dma_addr_t ptr = er_ctxt->rp;
+ void *dev_rp;
+
+ if (!is_valid_ring_ptr(ev_ring, ptr)) {
+ dev_err(&mhi_cntrl->mhi_dev->dev,
+ "Event ring rp points outside of the event ring\n");
+ return IRQ_HANDLED;
+ }
+
+ dev_rp = mhi_to_virtual(ev_ring, ptr);
/* Only proceed if event ring has pending events */
if (ev_ring->rp == dev_rp)
@@ -498,6 +533,11 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
struct mhi_buf_info *buf_info;
u16 xfer_len;
+ if (!is_valid_ring_ptr(tre_ring, ptr)) {
+ dev_err(&mhi_cntrl->mhi_dev->dev,
+ "Event element points outside of the tre ring\n");
+ break;
+ }
/* Get the TRB this event points to */
ev_tre = mhi_to_virtual(tre_ring, ptr);
@@ -657,6 +697,12 @@ static void mhi_process_cmd_completion(struct mhi_controller *mhi_cntrl,
struct mhi_chan *mhi_chan;
u32 chan;
+ if (!is_valid_ring_ptr(mhi_ring, ptr)) {
+ dev_err(&mhi_cntrl->mhi_dev->dev,
+ "Event element points outside of the cmd ring\n");
+ return;
+ }
+
cmd_pkt = mhi_to_virtual(mhi_ring, ptr);
chan = MHI_TRE_GET_CMD_CHID(cmd_pkt);
@@ -681,6 +727,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
struct device *dev = &mhi_cntrl->mhi_dev->dev;
u32 chan;
int count = 0;
+ dma_addr_t ptr = er_ctxt->rp;
/*
* This is a quick check to avoid unnecessary event processing
@@ -690,7 +737,13 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state)))
return -EIO;
- dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
+ if (!is_valid_ring_ptr(ev_ring, ptr)) {
+ dev_err(&mhi_cntrl->mhi_dev->dev,
+ "Event ring rp points outside of the event ring\n");
+ return -EIO;
+ }
+
+ dev_rp = mhi_to_virtual(ev_ring, ptr);
local_rp = ev_ring->rp;
while (dev_rp != local_rp) {
@@ -801,6 +854,8 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
*/
if (chan < mhi_cntrl->max_chan) {
mhi_chan = &mhi_cntrl->mhi_chan[chan];
+ if (!mhi_chan->configured)
+ break;
parse_xfer_event(mhi_cntrl, local_rp, mhi_chan);
event_quota--;
}
@@ -812,7 +867,15 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
local_rp = ev_ring->rp;
- dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
+
+ ptr = er_ctxt->rp;
+ if (!is_valid_ring_ptr(ev_ring, ptr)) {
+ dev_err(&mhi_cntrl->mhi_dev->dev,
+ "Event ring rp points outside of the event ring\n");
+ return -EIO;
+ }
+
+ dev_rp = mhi_to_virtual(ev_ring, ptr);
count++;
}
@@ -835,11 +898,18 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
int count = 0;
u32 chan;
struct mhi_chan *mhi_chan;
+ dma_addr_t ptr = er_ctxt->rp;
if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state)))
return -EIO;
- dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
+ if (!is_valid_ring_ptr(ev_ring, ptr)) {
+ dev_err(&mhi_cntrl->mhi_dev->dev,
+ "Event ring rp points outside of the event ring\n");
+ return -EIO;
+ }
+
+ dev_rp = mhi_to_virtual(ev_ring, ptr);
local_rp = ev_ring->rp;
while (dev_rp != local_rp && event_quota > 0) {
@@ -853,7 +923,8 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
* Only process the event ring elements whose channel
* ID is within the maximum supported range.
*/
- if (chan < mhi_cntrl->max_chan) {
+ if (chan < mhi_cntrl->max_chan &&
+ mhi_cntrl->mhi_chan[chan].configured) {
mhi_chan = &mhi_cntrl->mhi_chan[chan];
if (likely(type == MHI_PKT_TYPE_TX_EVENT)) {
@@ -867,7 +938,15 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
local_rp = ev_ring->rp;
- dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
+
+ ptr = er_ctxt->rp;
+ if (!is_valid_ring_ptr(ev_ring, ptr)) {
+ dev_err(&mhi_cntrl->mhi_dev->dev,
+ "Event ring rp points outside of the event ring\n");
+ return -EIO;
+ }
+
+ dev_rp = mhi_to_virtual(ev_ring, ptr);
count++;
}
read_lock_bh(&mhi_cntrl->pm_lock);
@@ -1394,6 +1473,7 @@ static void mhi_mark_stale_events(struct mhi_controller *mhi_cntrl,
struct mhi_ring *ev_ring;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
unsigned long flags;
+ dma_addr_t ptr;
dev_dbg(dev, "Marking all events for chan: %d as stale\n", chan);
@@ -1401,7 +1481,15 @@ static void mhi_mark_stale_events(struct mhi_controller *mhi_cntrl,
/* mark all stale events related to channel as STALE event */
spin_lock_irqsave(&mhi_event->lock, flags);
- dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
+
+ ptr = er_ctxt->rp;
+ if (!is_valid_ring_ptr(ev_ring, ptr)) {
+ dev_err(&mhi_cntrl->mhi_dev->dev,
+ "Event ring rp points outside of the event ring\n");
+ dev_rp = ev_ring->rp;
+ } else {
+ dev_rp = mhi_to_virtual(ev_ring, ptr);
+ }
local_rp = ev_ring->rp;
while (dev_rp != local_rp) {
diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/core/pm.c
index 3de7b16..aeb895c 100644
--- a/drivers/bus/mhi/core/pm.c
+++ b/drivers/bus/mhi/core/pm.c
@@ -376,6 +376,7 @@ static int mhi_pm_mission_mode_transition(struct mhi_controller *mhi_cntrl)
{
struct mhi_event *mhi_event;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
+ enum mhi_ee_type current_ee = mhi_cntrl->ee;
int i, ret;
dev_dbg(dev, "Processing Mission Mode transition\n");
@@ -390,6 +391,8 @@ static int mhi_pm_mission_mode_transition(struct mhi_controller *mhi_cntrl)
wake_up_all(&mhi_cntrl->state_event);
+ device_for_each_child(&mhi_cntrl->mhi_dev->dev, &current_ee,
+ mhi_destroy_device);
mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_EE_MISSION_MODE);
/* Force MHI to be in M0 state before continuing */
@@ -992,7 +995,7 @@ int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
&val) ||
!val,
msecs_to_jiffies(mhi_cntrl->timeout_ms));
- if (ret) {
+ if (!ret) {
ret = -EIO;
dev_info(dev, "Failed to reset MHI due to syserr state\n");
goto error_bhi_offset;
diff --git a/drivers/bus/qcom-ebi2.c b/drivers/bus/qcom-ebi2.c
index 03ddcf4..0b8f53a 100644
--- a/drivers/bus/qcom-ebi2.c
+++ b/drivers/bus/qcom-ebi2.c
@@ -353,8 +353,10 @@ static int qcom_ebi2_probe(struct platform_device *pdev)
/* Figure out the chipselect */
ret = of_property_read_u32(child, "reg", &csindex);
- if (ret)
+ if (ret) {
+ of_node_put(child);
return ret;
+ }
if (csindex > 5) {
dev_err(dev,
diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
index 45f5530..9afbe49 100644
--- a/drivers/bus/ti-sysc.c
+++ b/drivers/bus/ti-sysc.c
@@ -635,6 +635,51 @@ static int sysc_parse_and_check_child_range(struct sysc *ddata)
return 0;
}
+/* Interconnect instances to probe before l4_per instances */
+static struct resource early_bus_ranges[] = {
+ /* am3/4 l4_wkup */
+ { .start = 0x44c00000, .end = 0x44c00000 + 0x300000, },
+ /* omap4/5 and dra7 l4_cfg */
+ { .start = 0x4a000000, .end = 0x4a000000 + 0x300000, },
+ /* omap4 l4_wkup */
+ { .start = 0x4a300000, .end = 0x4a300000 + 0x30000, },
+ /* omap5 and dra7 l4_wkup without dra7 dcan segment */
+ { .start = 0x4ae00000, .end = 0x4ae00000 + 0x30000, },
+};
+
+static atomic_t sysc_defer = ATOMIC_INIT(10);
+
+/**
+ * sysc_defer_non_critical - defer non_critical interconnect probing
+ * @ddata: device driver data
+ *
+ * We want to probe l4_cfg and l4_wkup interconnect instances before any
+ * l4_per instances as l4_per instances depend on resources on l4_cfg and
+ * l4_wkup interconnects.
+ */
+static int sysc_defer_non_critical(struct sysc *ddata)
+{
+ struct resource *res;
+ int i;
+
+ if (!atomic_read(&sysc_defer))
+ return 0;
+
+ for (i = 0; i < ARRAY_SIZE(early_bus_ranges); i++) {
+ res = &early_bus_ranges[i];
+ if (ddata->module_pa >= res->start &&
+ ddata->module_pa <= res->end) {
+ atomic_set(&sysc_defer, 0);
+
+ return 0;
+ }
+ }
+
+ atomic_dec_if_positive(&sysc_defer);
+
+ return -EPROBE_DEFER;
+}
+
static struct device_node *stdout_path;
static void sysc_init_stdout_path(struct sysc *ddata)
@@ -859,6 +904,10 @@ static int sysc_map_and_check_registers(struct sysc *ddata)
if (error)
return error;
+ error = sysc_defer_non_critical(ddata);
+ if (error)
+ return error;
+
sysc_check_children(ddata);
error = sysc_parse_registers(ddata);
@@ -3044,7 +3093,9 @@ static int sysc_remove(struct platform_device *pdev)
pm_runtime_put_sync(&pdev->dev);
pm_runtime_disable(&pdev->dev);
- reset_control_assert(ddata->rsts);
+
+ if (!reset_control_status(ddata->rsts))
+ reset_control_assert(ddata->rsts);
unprepare:
sysc_unprepare(ddata);
diff --git a/drivers/cdrom/gdrom.c b/drivers/cdrom/gdrom.c
index 9874fc1..1831099 100644
--- a/drivers/cdrom/gdrom.c
+++ b/drivers/cdrom/gdrom.c
@@ -743,6 +743,13 @@ static const struct blk_mq_ops gdrom_mq_ops = {
static int probe_gdrom(struct platform_device *devptr)
{
int err;
+
+ /*
+ * Ensure our "one" device is initialized properly in case of previous
+ * usages of it
+ */
+ memset(&gd, 0, sizeof(gd));
+
/* Start the device */
if (gdrom_execute_diagnostic() != 1) {
pr_warn("ATA Probe for GDROM failed\n");
@@ -831,6 +838,8 @@ static int remove_gdrom(struct platform_device *devptr)
if (gdrom_major)
unregister_blkdev(gdrom_major, GDROM_DEV_NAME);
unregister_cdrom(gd.cd_info);
+ kfree(gd.cd_info);
+ kfree(gd.toc);
return 0;
}
@@ -846,7 +855,7 @@ static struct platform_driver gdrom_driver = {
static int __init init_gdrom(void)
{
int rc;
- gd.toc = NULL;
+
rc = platform_driver_register(&gdrom_driver);
if (rc)
return rc;
@@ -862,8 +871,6 @@ static void __exit exit_gdrom(void)
{
platform_device_unregister(pd);
platform_driver_unregister(&gdrom_driver);
- kfree(gd.toc);
- kfree(gd.cd_info);
}
module_init(init_gdrom);
diff --git a/drivers/char/agp/Kconfig b/drivers/char/agp/Kconfig
index a086dd34..4f501e4 100644
--- a/drivers/char/agp/Kconfig
+++ b/drivers/char/agp/Kconfig
@@ -125,7 +125,7 @@
config AGP_PARISC
tristate "HP Quicksilver AGP support"
- depends on AGP && PARISC && 64BIT
+ depends on AGP && PARISC && 64BIT && IOMMU_SBA
help
This option gives you AGP GART support for the HP Quicksilver
AGP bus adapter on HP PA-RISC machines (Ok, just on the C8000
diff --git a/drivers/char/hpet.c b/drivers/char/hpet.c
index ed3b7da..8b55085 100644
--- a/drivers/char/hpet.c
+++ b/drivers/char/hpet.c
@@ -984,6 +984,8 @@ static acpi_status hpet_resources(struct acpi_resource *res, void *data)
hdp->hd_phys_address = fixmem32->address;
hdp->hd_address = ioremap(fixmem32->address,
HPET_RANGE_SIZE);
+ if (!hdp->hd_address)
+ return AE_ERROR;
if (hpet_is_known(hdp)) {
iounmap(hdp->hd_address);
diff --git a/drivers/char/random.c b/drivers/char/random.c
index f462b9d..340ad21 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -819,7 +819,7 @@ static bool __init crng_init_try_arch_early(struct crng_state *crng)
static void __maybe_unused crng_initialize_secondary(struct crng_state *crng)
{
- memcpy(&crng->state[0], "expand 32-byte k", 16);
+ chacha_init_consts(crng->state);
_get_random_bytes(&crng->state[4], sizeof(__u32) * 12);
crng_init_try_arch(crng);
crng->init_time = jiffies - CRNG_RESEED_INTERVAL - 1;
@@ -827,7 +827,7 @@ static void __maybe_unused crng_initialize_secondary(struct crng_state *crng)
static void __init crng_initialize_primary(struct crng_state *crng)
{
- memcpy(&crng->state[0], "expand 32-byte k", 16);
+ chacha_init_consts(crng->state);
_extract_entropy(&input_pool, &crng->state[4], sizeof(__u32) * 12, 0);
if (crng_init_try_arch_early(crng) && trust_cpu) {
invalidate_batched_entropy();
diff --git a/drivers/char/tpm/eventlog/acpi.c b/drivers/char/tpm/eventlog/acpi.c
index 3633ed7..1b18ce5 100644
--- a/drivers/char/tpm/eventlog/acpi.c
+++ b/drivers/char/tpm/eventlog/acpi.c
@@ -41,6 +41,27 @@ struct acpi_tcpa {
};
};
+/* Check that the given log is indeed a TPM2 log. */
+static bool tpm_is_tpm2_log(void *bios_event_log, u64 len)
+{
+ struct tcg_efi_specid_event_head *efispecid;
+ struct tcg_pcr_event *event_header;
+ int n;
+
+ if (len < sizeof(*event_header))
+ return false;
+ len -= sizeof(*event_header);
+ event_header = bios_event_log;
+
+ if (len < sizeof(*efispecid))
+ return false;
+ efispecid = (struct tcg_efi_specid_event_head *)event_header->event;
+
+ n = memcmp(efispecid->signature, TCG_SPECID_SIG,
+ sizeof(TCG_SPECID_SIG));
+ return n == 0;
+}
+
/* read binary bios log */
int tpm_read_log_acpi(struct tpm_chip *chip)
{
@@ -52,6 +73,7 @@ int tpm_read_log_acpi(struct tpm_chip *chip)
struct acpi_table_tpm2 *tbl;
struct acpi_tpm2_phy *tpm2_phy;
int format;
+ int ret;
log = &chip->log;
@@ -112,6 +134,7 @@ int tpm_read_log_acpi(struct tpm_chip *chip)
log->bios_event_log_end = log->bios_event_log + len;
+ ret = -EIO;
virt = acpi_os_map_iomem(start, len);
if (!virt)
goto err;
@@ -119,11 +142,19 @@ int tpm_read_log_acpi(struct tpm_chip *chip)
memcpy_fromio(log->bios_event_log, virt, len);
acpi_os_unmap_iomem(virt, len);
+
+ if (chip->flags & TPM_CHIP_FLAG_TPM2 &&
+ !tpm_is_tpm2_log(log->bios_event_log, len)) {
+ /* try EFI log next */
+ ret = -ENODEV;
+ goto err;
+ }
+
return format;
err:
kfree(log->bios_event_log);
log->bios_event_log = NULL;
- return -EIO;
+ return ret;
}
diff --git a/drivers/char/tpm/eventlog/common.c b/drivers/char/tpm/eventlog/common.c
index 7460f23..8512ec7 100644
--- a/drivers/char/tpm/eventlog/common.c
+++ b/drivers/char/tpm/eventlog/common.c
@@ -107,6 +107,9 @@ void tpm_bios_log_setup(struct tpm_chip *chip)
int log_version;
int rc = 0;
+ if (chip->flags & TPM_CHIP_FLAG_VIRTUAL)
+ return;
+
rc = tpm_read_log(chip);
if (rc < 0)
return;
diff --git a/drivers/char/tpm/eventlog/efi.c b/drivers/char/tpm/eventlog/efi.c
index 35229e5..e6cb9d5 100644
--- a/drivers/char/tpm/eventlog/efi.c
+++ b/drivers/char/tpm/eventlog/efi.c
@@ -17,6 +17,7 @@ int tpm_read_log_efi(struct tpm_chip *chip)
{
struct efi_tcg2_final_events_table *final_tbl = NULL;
+ int final_events_log_size = efi_tpm_final_log_size;
struct linux_efi_tpm_eventlog *log_tbl;
struct tpm_bios_log *log;
u32 log_size;
@@ -66,12 +67,12 @@ int tpm_read_log_efi(struct tpm_chip *chip)
ret = tpm_log_version;
if (efi.tpm_final_log == EFI_INVALID_TABLE_ADDR ||
- efi_tpm_final_log_size == 0 ||
+ final_events_log_size == 0 ||
tpm_log_version != EFI_TCG2_EVENT_LOG_FORMAT_TCG_2)
goto out;
final_tbl = memremap(efi.tpm_final_log,
- sizeof(*final_tbl) + efi_tpm_final_log_size,
+ sizeof(*final_tbl) + final_events_log_size,
MEMREMAP_WB);
if (!final_tbl) {
pr_err("Could not map UEFI TPM final log\n");
@@ -80,10 +81,18 @@ int tpm_read_log_efi(struct tpm_chip *chip)
goto out;
}
- efi_tpm_final_log_size -= log_tbl->final_events_preboot_size;
+ /*
+ * The 'final events log' size excludes the 'final events preboot log'
+ * at its beginning.
+ */
+ final_events_log_size -= log_tbl->final_events_preboot_size;
+ /*
+ * Allocate memory for the 'combined log' where we will append the
+ * 'final events log' to.
+ */
tmp = krealloc(log->bios_event_log,
- log_size + efi_tpm_final_log_size,
+ log_size + final_events_log_size,
GFP_KERNEL);
if (!tmp) {
kfree(log->bios_event_log);
@@ -94,15 +103,19 @@ int tpm_read_log_efi(struct tpm_chip *chip)
log->bios_event_log = tmp;
/*
- * Copy any of the final events log that didn't also end up in the
- * main log. Events can be logged in both if events are generated
+ * Append any of the 'final events log' that didn't also end up in the
+ * 'main log'. Events can be logged in both if events are generated
* between GetEventLog() and ExitBootServices().
*/
memcpy((void *)log->bios_event_log + log_size,
final_tbl->events + log_tbl->final_events_preboot_size,
- efi_tpm_final_log_size);
+ final_events_log_size);
+ /*
+ * The size of the 'combined log' is the size of the 'main log' plus
+ * the size of the 'final events log'.
+ */
log->bios_event_log_end = log->bios_event_log +
- log_size + efi_tpm_final_log_size;
+ log_size + final_events_log_size;
out:
memunmap(final_tbl);
diff --git a/drivers/char/tpm/tpm2-cmd.c b/drivers/char/tpm/tpm2-cmd.c
index eff1f12..c84d239 100644
--- a/drivers/char/tpm/tpm2-cmd.c
+++ b/drivers/char/tpm/tpm2-cmd.c
@@ -656,6 +656,7 @@ int tpm2_get_cc_attrs_tbl(struct tpm_chip *chip)
if (nr_commands !=
be32_to_cpup((__be32 *)&buf.data[TPM_HEADER_SIZE + 5])) {
+ rc = -EFAULT;
tpm_buf_destroy(&buf);
goto out;
}
diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
index a2e0395..55b9d39 100644
--- a/drivers/char/tpm/tpm_tis_core.c
+++ b/drivers/char/tpm/tpm_tis_core.c
@@ -709,16 +709,14 @@ static int tpm_tis_gen_interrupt(struct tpm_chip *chip)
cap_t cap;
int ret;
- /* TPM 2.0 */
- if (chip->flags & TPM_CHIP_FLAG_TPM2)
- return tpm2_get_tpm_pt(chip, 0x100, &cap2, desc);
-
- /* TPM 1.2 */
ret = request_locality(chip, 0);
if (ret < 0)
return ret;
- ret = tpm1_getcap(chip, TPM_CAP_PROP_TIS_TIMEOUT, &cap, desc, 0);
+ if (chip->flags & TPM_CHIP_FLAG_TPM2)
+ ret = tpm2_get_tpm_pt(chip, 0x100, &cap2, desc);
+ else
+ ret = tpm1_getcap(chip, TPM_CAP_PROP_TIS_TIMEOUT, &cap, desc, 0);
release_locality(chip, 0);
@@ -1127,12 +1125,20 @@ int tpm_tis_resume(struct device *dev)
if (ret)
return ret;
- /* TPM 1.2 requires self-test on resume. This function actually returns
+ /*
+ * TPM 1.2 requires self-test on resume. This function actually returns
* an error code but for unknown reason it isn't handled.
*/
- if (!(chip->flags & TPM_CHIP_FLAG_TPM2))
+ if (!(chip->flags & TPM_CHIP_FLAG_TPM2)) {
+ ret = request_locality(chip, 0);
+ if (ret < 0)
+ return ret;
+
tpm1_do_selftest(chip);
+ release_locality(chip, 0);
+ }
+
return 0;
}
EXPORT_SYMBOL_GPL(tpm_tis_resume);
diff --git a/drivers/char/ttyprintk.c b/drivers/char/ttyprintk.c
index 6a0059e..93f5d11 100644
--- a/drivers/char/ttyprintk.c
+++ b/drivers/char/ttyprintk.c
@@ -158,12 +158,23 @@ static int tpk_ioctl(struct tty_struct *tty,
return 0;
}
+/*
+ * TTY operations hangup function.
+ */
+static void tpk_hangup(struct tty_struct *tty)
+{
+ struct ttyprintk_port *tpkp = tty->driver_data;
+
+ tty_port_hangup(&tpkp->port);
+}
+
static const struct tty_operations ttyprintk_ops = {
.open = tpk_open,
.close = tpk_close,
.write = tpk_write,
.write_room = tpk_write_room,
.ioctl = tpk_ioctl,
+ .hangup = tpk_hangup,
};
static const struct tty_port_operations null_ops = { };
diff --git a/drivers/clk/clk-ast2600.c b/drivers/clk/clk-ast2600.c
index a55b37f..bc3be5f 100644
--- a/drivers/clk/clk-ast2600.c
+++ b/drivers/clk/clk-ast2600.c
@@ -61,10 +61,10 @@ static void __iomem *scu_g6_base;
static const struct aspeed_gate_data aspeed_g6_gates[] = {
/* clk rst name parent flags */
[ASPEED_CLK_GATE_MCLK] = { 0, -1, "mclk-gate", "mpll", CLK_IS_CRITICAL }, /* SDRAM */
- [ASPEED_CLK_GATE_ECLK] = { 1, -1, "eclk-gate", "eclk", 0 }, /* Video Engine */
+ [ASPEED_CLK_GATE_ECLK] = { 1, 6, "eclk-gate", "eclk", 0 }, /* Video Engine */
[ASPEED_CLK_GATE_GCLK] = { 2, 7, "gclk-gate", NULL, 0 }, /* 2D engine */
/* vclk parent - dclk/d1clk/hclk/mclk */
- [ASPEED_CLK_GATE_VCLK] = { 3, 6, "vclk-gate", NULL, 0 }, /* Video Capture */
+ [ASPEED_CLK_GATE_VCLK] = { 3, -1, "vclk-gate", NULL, 0 }, /* Video Capture */
[ASPEED_CLK_GATE_BCLK] = { 4, 8, "bclk-gate", "bclk", 0 }, /* PCIe/PCI */
/* From dpll */
[ASPEED_CLK_GATE_DCLK] = { 5, -1, "dclk-gate", NULL, CLK_IS_CRITICAL }, /* DAC */
diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index f83dac54..61c7871 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -4262,20 +4262,19 @@ int clk_notifier_register(struct clk *clk, struct notifier_block *nb)
/* search the list of notifiers for this clk */
list_for_each_entry(cn, &clk_notifier_list, node)
if (cn->clk == clk)
- break;
+ goto found;
/* if clk wasn't in the notifier list, allocate new clk_notifier */
- if (cn->clk != clk) {
- cn = kzalloc(sizeof(*cn), GFP_KERNEL);
- if (!cn)
- goto out;
+ cn = kzalloc(sizeof(*cn), GFP_KERNEL);
+ if (!cn)
+ goto out;
- cn->clk = clk;
- srcu_init_notifier_head(&cn->notifier_head);
+ cn->clk = clk;
+ srcu_init_notifier_head(&cn->notifier_head);
- list_add(&cn->node, &clk_notifier_list);
- }
+ list_add(&cn->node, &clk_notifier_list);
+found:
ret = srcu_notifier_chain_register(&cn->notifier_head, nb);
clk->core->notifier_count++;
@@ -4300,32 +4299,28 @@ EXPORT_SYMBOL_GPL(clk_notifier_register);
*/
int clk_notifier_unregister(struct clk *clk, struct notifier_block *nb)
{
- struct clk_notifier *cn = NULL;
- int ret = -EINVAL;
+ struct clk_notifier *cn;
+ int ret = -ENOENT;
if (!clk || !nb)
return -EINVAL;
clk_prepare_lock();
- list_for_each_entry(cn, &clk_notifier_list, node)
- if (cn->clk == clk)
+ list_for_each_entry(cn, &clk_notifier_list, node) {
+ if (cn->clk == clk) {
+ ret = srcu_notifier_chain_unregister(&cn->notifier_head, nb);
+
+ clk->core->notifier_count--;
+
+ /* XXX the notifier code should handle this better */
+ if (!cn->notifier_head.head) {
+ srcu_cleanup_notifier_head(&cn->notifier_head);
+ list_del(&cn->node);
+ kfree(cn);
+ }
break;
-
- if (cn->clk == clk) {
- ret = srcu_notifier_chain_unregister(&cn->notifier_head, nb);
-
- clk->core->notifier_count--;
-
- /* XXX the notifier code should handle this better */
- if (!cn->notifier_head.head) {
- srcu_cleanup_notifier_head(&cn->notifier_head);
- list_del(&cn->node);
- kfree(cn);
}
-
- } else {
- ret = -ENOENT;
}
clk_prepare_unlock();
diff --git a/drivers/clk/imx/clk-imx25.c b/drivers/clk/imx/clk-imx25.c
index a66cabf..66192fe 100644
--- a/drivers/clk/imx/clk-imx25.c
+++ b/drivers/clk/imx/clk-imx25.c
@@ -73,16 +73,6 @@ enum mx25_clks {
static struct clk *clk[clk_max];
-static struct clk ** const uart_clks[] __initconst = {
- &clk[uart_ipg_per],
- &clk[uart1_ipg],
- &clk[uart2_ipg],
- &clk[uart3_ipg],
- &clk[uart4_ipg],
- &clk[uart5_ipg],
- NULL
-};
-
static int __init __mx25_clocks_init(void __iomem *ccm_base)
{
BUG_ON(!ccm_base);
@@ -228,7 +218,7 @@ static int __init __mx25_clocks_init(void __iomem *ccm_base)
*/
clk_set_parent(clk[cko_sel], clk[ipg]);
- imx_register_uart_clocks(uart_clks);
+ imx_register_uart_clocks(6);
return 0;
}
diff --git a/drivers/clk/imx/clk-imx27.c b/drivers/clk/imx/clk-imx27.c
index 5585ded..56a5fc4 100644
--- a/drivers/clk/imx/clk-imx27.c
+++ b/drivers/clk/imx/clk-imx27.c
@@ -49,17 +49,6 @@ static const char *ssi_sel_clks[] = { "spll_gate", "mpll", };
static struct clk *clk[IMX27_CLK_MAX];
static struct clk_onecell_data clk_data;
-static struct clk ** const uart_clks[] __initconst = {
- &clk[IMX27_CLK_PER1_GATE],
- &clk[IMX27_CLK_UART1_IPG_GATE],
- &clk[IMX27_CLK_UART2_IPG_GATE],
- &clk[IMX27_CLK_UART3_IPG_GATE],
- &clk[IMX27_CLK_UART4_IPG_GATE],
- &clk[IMX27_CLK_UART5_IPG_GATE],
- &clk[IMX27_CLK_UART6_IPG_GATE],
- NULL
-};
-
static void __init _mx27_clocks_init(unsigned long fref)
{
BUG_ON(!ccm);
@@ -176,7 +165,7 @@ static void __init _mx27_clocks_init(unsigned long fref)
clk_prepare_enable(clk[IMX27_CLK_EMI_AHB_GATE]);
- imx_register_uart_clocks(uart_clks);
+ imx_register_uart_clocks(7);
imx_print_silicon_rev("i.MX27", mx27_revision());
}
diff --git a/drivers/clk/imx/clk-imx35.c b/drivers/clk/imx/clk-imx35.c
index c1df036..0fe5ac2 100644
--- a/drivers/clk/imx/clk-imx35.c
+++ b/drivers/clk/imx/clk-imx35.c
@@ -82,14 +82,6 @@ enum mx35_clks {
static struct clk *clk[clk_max];
-static struct clk ** const uart_clks[] __initconst = {
- &clk[ipg],
- &clk[uart1_gate],
- &clk[uart2_gate],
- &clk[uart3_gate],
- NULL
-};
-
static void __init _mx35_clocks_init(void)
{
void __iomem *base;
@@ -243,7 +235,7 @@ static void __init _mx35_clocks_init(void)
*/
clk_prepare_enable(clk[scc_gate]);
- imx_register_uart_clocks(uart_clks);
+ imx_register_uart_clocks(4);
imx_print_silicon_rev("i.MX35", mx35_revision());
}
diff --git a/drivers/clk/imx/clk-imx5.c b/drivers/clk/imx/clk-imx5.c
index 01e079b..e449384 100644
--- a/drivers/clk/imx/clk-imx5.c
+++ b/drivers/clk/imx/clk-imx5.c
@@ -128,30 +128,6 @@ static const char *ieee1588_sels[] = { "pll3_sw", "pll4_sw", "dummy" /* usbphy2_
static struct clk *clk[IMX5_CLK_END];
static struct clk_onecell_data clk_data;
-static struct clk ** const uart_clks_mx51[] __initconst = {
- &clk[IMX5_CLK_UART1_IPG_GATE],
- &clk[IMX5_CLK_UART1_PER_GATE],
- &clk[IMX5_CLK_UART2_IPG_GATE],
- &clk[IMX5_CLK_UART2_PER_GATE],
- &clk[IMX5_CLK_UART3_IPG_GATE],
- &clk[IMX5_CLK_UART3_PER_GATE],
- NULL
-};
-
-static struct clk ** const uart_clks_mx50_mx53[] __initconst = {
- &clk[IMX5_CLK_UART1_IPG_GATE],
- &clk[IMX5_CLK_UART1_PER_GATE],
- &clk[IMX5_CLK_UART2_IPG_GATE],
- &clk[IMX5_CLK_UART2_PER_GATE],
- &clk[IMX5_CLK_UART3_IPG_GATE],
- &clk[IMX5_CLK_UART3_PER_GATE],
- &clk[IMX5_CLK_UART4_IPG_GATE],
- &clk[IMX5_CLK_UART4_PER_GATE],
- &clk[IMX5_CLK_UART5_IPG_GATE],
- &clk[IMX5_CLK_UART5_PER_GATE],
- NULL
-};
-
static void __init mx5_clocks_common_init(void __iomem *ccm_base)
{
clk[IMX5_CLK_DUMMY] = imx_clk_fixed("dummy", 0);
@@ -382,7 +358,7 @@ static void __init mx50_clocks_init(struct device_node *np)
r = clk_round_rate(clk[IMX5_CLK_USBOH3_PER_GATE], 54000000);
clk_set_rate(clk[IMX5_CLK_USBOH3_PER_GATE], r);
- imx_register_uart_clocks(uart_clks_mx50_mx53);
+ imx_register_uart_clocks(5);
}
CLK_OF_DECLARE(imx50_ccm, "fsl,imx50-ccm", mx50_clocks_init);
@@ -488,7 +464,7 @@ static void __init mx51_clocks_init(struct device_node *np)
val |= 1 << 23;
writel(val, MXC_CCM_CLPCR);
- imx_register_uart_clocks(uart_clks_mx51);
+ imx_register_uart_clocks(3);
}
CLK_OF_DECLARE(imx51_ccm, "fsl,imx51-ccm", mx51_clocks_init);
@@ -633,6 +609,6 @@ static void __init mx53_clocks_init(struct device_node *np)
r = clk_round_rate(clk[IMX5_CLK_USBOH3_PER_GATE], 54000000);
clk_set_rate(clk[IMX5_CLK_USBOH3_PER_GATE], r);
- imx_register_uart_clocks(uart_clks_mx50_mx53);
+ imx_register_uart_clocks(5);
}
CLK_OF_DECLARE(imx53_ccm, "fsl,imx53-ccm", mx53_clocks_init);
diff --git a/drivers/clk/imx/clk-imx6q.c b/drivers/clk/imx/clk-imx6q.c
index b2ff187..f444bbe 100644
--- a/drivers/clk/imx/clk-imx6q.c
+++ b/drivers/clk/imx/clk-imx6q.c
@@ -140,13 +140,6 @@ static inline int clk_on_imx6dl(void)
return of_machine_is_compatible("fsl,imx6dl");
}
-static const int uart_clk_ids[] __initconst = {
- IMX6QDL_CLK_UART_IPG,
- IMX6QDL_CLK_UART_SERIAL,
-};
-
-static struct clk **uart_clks[ARRAY_SIZE(uart_clk_ids) + 1] __initdata;
-
static int ldb_di_sel_by_clock_id(int clock_id)
{
switch (clock_id) {
@@ -440,7 +433,6 @@ static void __init imx6q_clocks_init(struct device_node *ccm_node)
struct device_node *np;
void __iomem *anatop_base, *base;
int ret;
- int i;
clk_hw_data = kzalloc(struct_size(clk_hw_data, hws,
IMX6QDL_CLK_END), GFP_KERNEL);
@@ -982,12 +974,6 @@ static void __init imx6q_clocks_init(struct device_node *ccm_node)
hws[IMX6QDL_CLK_PLL3_USB_OTG]->clk);
}
- for (i = 0; i < ARRAY_SIZE(uart_clk_ids); i++) {
- int index = uart_clk_ids[i];
-
- uart_clks[i] = &hws[index]->clk;
- }
-
- imx_register_uart_clocks(uart_clks);
+ imx_register_uart_clocks(1);
}
CLK_OF_DECLARE(imx6q, "fsl,imx6q-ccm", imx6q_clocks_init);
diff --git a/drivers/clk/imx/clk-imx6sl.c b/drivers/clk/imx/clk-imx6sl.c
index 2f93619..d997b5b 100644
--- a/drivers/clk/imx/clk-imx6sl.c
+++ b/drivers/clk/imx/clk-imx6sl.c
@@ -178,19 +178,11 @@ void imx6sl_set_wait_clk(bool enter)
imx6sl_enable_pll_arm(false);
}
-static const int uart_clk_ids[] __initconst = {
- IMX6SL_CLK_UART,
- IMX6SL_CLK_UART_SERIAL,
-};
-
-static struct clk **uart_clks[ARRAY_SIZE(uart_clk_ids) + 1] __initdata;
-
static void __init imx6sl_clocks_init(struct device_node *ccm_node)
{
struct device_node *np;
void __iomem *base;
int ret;
- int i;
clk_hw_data = kzalloc(struct_size(clk_hw_data, hws,
IMX6SL_CLK_END), GFP_KERNEL);
@@ -447,12 +439,6 @@ static void __init imx6sl_clocks_init(struct device_node *ccm_node)
clk_set_parent(hws[IMX6SL_CLK_LCDIF_AXI_SEL]->clk,
hws[IMX6SL_CLK_PLL2_PFD2]->clk);
- for (i = 0; i < ARRAY_SIZE(uart_clk_ids); i++) {
- int index = uart_clk_ids[i];
-
- uart_clks[i] = &hws[index]->clk;
- }
-
- imx_register_uart_clocks(uart_clks);
+ imx_register_uart_clocks(2);
}
CLK_OF_DECLARE(imx6sl, "fsl,imx6sl-ccm", imx6sl_clocks_init);
diff --git a/drivers/clk/imx/clk-imx6sll.c b/drivers/clk/imx/clk-imx6sll.c
index 8e8288b..31d777f 100644
--- a/drivers/clk/imx/clk-imx6sll.c
+++ b/drivers/clk/imx/clk-imx6sll.c
@@ -76,26 +76,10 @@ static u32 share_count_ssi1;
static u32 share_count_ssi2;
static u32 share_count_ssi3;
-static const int uart_clk_ids[] __initconst = {
- IMX6SLL_CLK_UART1_IPG,
- IMX6SLL_CLK_UART1_SERIAL,
- IMX6SLL_CLK_UART2_IPG,
- IMX6SLL_CLK_UART2_SERIAL,
- IMX6SLL_CLK_UART3_IPG,
- IMX6SLL_CLK_UART3_SERIAL,
- IMX6SLL_CLK_UART4_IPG,
- IMX6SLL_CLK_UART4_SERIAL,
- IMX6SLL_CLK_UART5_IPG,
- IMX6SLL_CLK_UART5_SERIAL,
-};
-
-static struct clk **uart_clks[ARRAY_SIZE(uart_clk_ids) + 1] __initdata;
-
static void __init imx6sll_clocks_init(struct device_node *ccm_node)
{
struct device_node *np;
void __iomem *base;
- int i;
clk_hw_data = kzalloc(struct_size(clk_hw_data, hws,
IMX6SLL_CLK_END), GFP_KERNEL);
@@ -356,13 +340,7 @@ static void __init imx6sll_clocks_init(struct device_node *ccm_node)
of_clk_add_hw_provider(np, of_clk_hw_onecell_get, clk_hw_data);
- for (i = 0; i < ARRAY_SIZE(uart_clk_ids); i++) {
- int index = uart_clk_ids[i];
-
- uart_clks[i] = &hws[index]->clk;
- }
-
- imx_register_uart_clocks(uart_clks);
+ imx_register_uart_clocks(5);
/* Lower the AHB clock rate before changing the clock source. */
clk_set_rate(hws[IMX6SLL_CLK_AHB]->clk, 99000000);
diff --git a/drivers/clk/imx/clk-imx6sx.c b/drivers/clk/imx/clk-imx6sx.c
index 20dcce5..fc1bd23d 100644
--- a/drivers/clk/imx/clk-imx6sx.c
+++ b/drivers/clk/imx/clk-imx6sx.c
@@ -117,18 +117,10 @@ static u32 share_count_ssi3;
static u32 share_count_sai1;
static u32 share_count_sai2;
-static const int uart_clk_ids[] __initconst = {
- IMX6SX_CLK_UART_IPG,
- IMX6SX_CLK_UART_SERIAL,
-};
-
-static struct clk **uart_clks[ARRAY_SIZE(uart_clk_ids) + 1] __initdata;
-
static void __init imx6sx_clocks_init(struct device_node *ccm_node)
{
struct device_node *np;
void __iomem *base;
- int i;
clk_hw_data = kzalloc(struct_size(clk_hw_data, hws,
IMX6SX_CLK_CLK_END), GFP_KERNEL);
@@ -556,12 +548,6 @@ static void __init imx6sx_clocks_init(struct device_node *ccm_node)
clk_set_parent(hws[IMX6SX_CLK_QSPI1_SEL]->clk, hws[IMX6SX_CLK_PLL2_BUS]->clk);
clk_set_parent(hws[IMX6SX_CLK_QSPI2_SEL]->clk, hws[IMX6SX_CLK_PLL2_BUS]->clk);
- for (i = 0; i < ARRAY_SIZE(uart_clk_ids); i++) {
- int index = uart_clk_ids[i];
-
- uart_clks[i] = &hws[index]->clk;
- }
-
- imx_register_uart_clocks(uart_clks);
+ imx_register_uart_clocks(2);
}
CLK_OF_DECLARE(imx6sx, "fsl,imx6sx-ccm", imx6sx_clocks_init);
diff --git a/drivers/clk/imx/clk-imx7d.c b/drivers/clk/imx/clk-imx7d.c
index 22d24a6..c4e0f1c 100644
--- a/drivers/clk/imx/clk-imx7d.c
+++ b/drivers/clk/imx/clk-imx7d.c
@@ -377,23 +377,10 @@ static const char *pll_video_bypass_sel[] = { "pll_video_main", "pll_video_main_
static struct clk_hw **hws;
static struct clk_hw_onecell_data *clk_hw_data;
-static const int uart_clk_ids[] __initconst = {
- IMX7D_UART1_ROOT_CLK,
- IMX7D_UART2_ROOT_CLK,
- IMX7D_UART3_ROOT_CLK,
- IMX7D_UART4_ROOT_CLK,
- IMX7D_UART5_ROOT_CLK,
- IMX7D_UART6_ROOT_CLK,
- IMX7D_UART7_ROOT_CLK,
-};
-
-static struct clk **uart_clks[ARRAY_SIZE(uart_clk_ids) + 1] __initdata;
-
static void __init imx7d_clocks_init(struct device_node *ccm_node)
{
struct device_node *np;
void __iomem *base;
- int i;
clk_hw_data = kzalloc(struct_size(clk_hw_data, hws,
IMX7D_CLK_END), GFP_KERNEL);
@@ -897,14 +884,7 @@ static void __init imx7d_clocks_init(struct device_node *ccm_node)
hws[IMX7D_USB1_MAIN_480M_CLK] = imx_clk_hw_fixed_factor("pll_usb1_main_clk", "osc", 20, 1);
hws[IMX7D_USB_MAIN_480M_CLK] = imx_clk_hw_fixed_factor("pll_usb_main_clk", "osc", 20, 1);
- for (i = 0; i < ARRAY_SIZE(uart_clk_ids); i++) {
- int index = uart_clk_ids[i];
-
- uart_clks[i] = &hws[index]->clk;
- }
-
-
- imx_register_uart_clocks(uart_clks);
+ imx_register_uart_clocks(7);
}
CLK_OF_DECLARE(imx7d, "fsl,imx7d-ccm", imx7d_clocks_init);
diff --git a/drivers/clk/imx/clk-imx7ulp.c b/drivers/clk/imx/clk-imx7ulp.c
index 634c0b6..779e091 100644
--- a/drivers/clk/imx/clk-imx7ulp.c
+++ b/drivers/clk/imx/clk-imx7ulp.c
@@ -43,19 +43,6 @@ static const struct clk_div_table ulp_div_table[] = {
{ /* sentinel */ },
};
-static const int pcc2_uart_clk_ids[] __initconst = {
- IMX7ULP_CLK_LPUART4,
- IMX7ULP_CLK_LPUART5,
-};
-
-static const int pcc3_uart_clk_ids[] __initconst = {
- IMX7ULP_CLK_LPUART6,
- IMX7ULP_CLK_LPUART7,
-};
-
-static struct clk **pcc2_uart_clks[ARRAY_SIZE(pcc2_uart_clk_ids) + 1] __initdata;
-static struct clk **pcc3_uart_clks[ARRAY_SIZE(pcc3_uart_clk_ids) + 1] __initdata;
-
static void __init imx7ulp_clk_scg1_init(struct device_node *np)
{
struct clk_hw_onecell_data *clk_data;
@@ -150,7 +137,6 @@ static void __init imx7ulp_clk_pcc2_init(struct device_node *np)
struct clk_hw_onecell_data *clk_data;
struct clk_hw **hws;
void __iomem *base;
- int i;
clk_data = kzalloc(struct_size(clk_data, hws, IMX7ULP_CLK_PCC2_END),
GFP_KERNEL);
@@ -190,13 +176,7 @@ static void __init imx7ulp_clk_pcc2_init(struct device_node *np)
of_clk_add_hw_provider(np, of_clk_hw_onecell_get, clk_data);
- for (i = 0; i < ARRAY_SIZE(pcc2_uart_clk_ids); i++) {
- int index = pcc2_uart_clk_ids[i];
-
- pcc2_uart_clks[i] = &hws[index]->clk;
- }
-
- imx_register_uart_clocks(pcc2_uart_clks);
+ imx_register_uart_clocks(2);
}
CLK_OF_DECLARE(imx7ulp_clk_pcc2, "fsl,imx7ulp-pcc2", imx7ulp_clk_pcc2_init);
@@ -205,7 +185,6 @@ static void __init imx7ulp_clk_pcc3_init(struct device_node *np)
struct clk_hw_onecell_data *clk_data;
struct clk_hw **hws;
void __iomem *base;
- int i;
clk_data = kzalloc(struct_size(clk_data, hws, IMX7ULP_CLK_PCC3_END),
GFP_KERNEL);
@@ -244,13 +223,7 @@ static void __init imx7ulp_clk_pcc3_init(struct device_node *np)
of_clk_add_hw_provider(np, of_clk_hw_onecell_get, clk_data);
- for (i = 0; i < ARRAY_SIZE(pcc3_uart_clk_ids); i++) {
- int index = pcc3_uart_clk_ids[i];
-
- pcc3_uart_clks[i] = &hws[index]->clk;
- }
-
- imx_register_uart_clocks(pcc3_uart_clks);
+ imx_register_uart_clocks(7);
}
CLK_OF_DECLARE(imx7ulp_clk_pcc3, "fsl,imx7ulp-pcc3", imx7ulp_clk_pcc3_init);
diff --git a/drivers/clk/imx/clk-imx8mm.c b/drivers/clk/imx/clk-imx8mm.c
index f358ad9..4cbf86a 100644
--- a/drivers/clk/imx/clk-imx8mm.c
+++ b/drivers/clk/imx/clk-imx8mm.c
@@ -291,20 +291,12 @@ static const char *imx8mm_clko2_sels[] = {"osc_24m", "sys_pll2_200m", "sys_pll1_
static struct clk_hw_onecell_data *clk_hw_data;
static struct clk_hw **hws;
-static const int uart_clk_ids[] = {
- IMX8MM_CLK_UART1_ROOT,
- IMX8MM_CLK_UART2_ROOT,
- IMX8MM_CLK_UART3_ROOT,
- IMX8MM_CLK_UART4_ROOT,
-};
-static struct clk **uart_hws[ARRAY_SIZE(uart_clk_ids) + 1];
-
static int imx8mm_clocks_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device_node *np = dev->of_node;
void __iomem *base;
- int ret, i;
+ int ret;
clk_hw_data = kzalloc(struct_size(clk_hw_data, hws,
IMX8MM_CLK_END), GFP_KERNEL);
@@ -622,13 +614,7 @@ static int imx8mm_clocks_probe(struct platform_device *pdev)
goto unregister_hws;
}
- for (i = 0; i < ARRAY_SIZE(uart_clk_ids); i++) {
- int index = uart_clk_ids[i];
-
- uart_hws[i] = &hws[index]->clk;
- }
-
- imx_register_uart_clocks(uart_hws);
+ imx_register_uart_clocks(4);
return 0;
diff --git a/drivers/clk/imx/clk-imx8mn.c b/drivers/clk/imx/clk-imx8mn.c
index f3c5e6c..f98f252 100644
--- a/drivers/clk/imx/clk-imx8mn.c
+++ b/drivers/clk/imx/clk-imx8mn.c
@@ -284,20 +284,12 @@ static const char * const imx8mn_clko2_sels[] = {"osc_24m", "sys_pll2_200m", "sy
static struct clk_hw_onecell_data *clk_hw_data;
static struct clk_hw **hws;
-static const int uart_clk_ids[] = {
- IMX8MN_CLK_UART1_ROOT,
- IMX8MN_CLK_UART2_ROOT,
- IMX8MN_CLK_UART3_ROOT,
- IMX8MN_CLK_UART4_ROOT,
-};
-static struct clk **uart_hws[ARRAY_SIZE(uart_clk_ids) + 1];
-
static int imx8mn_clocks_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device_node *np = dev->of_node;
void __iomem *base;
- int ret, i;
+ int ret;
clk_hw_data = kzalloc(struct_size(clk_hw_data, hws,
IMX8MN_CLK_END), GFP_KERNEL);
@@ -573,13 +565,7 @@ static int imx8mn_clocks_probe(struct platform_device *pdev)
goto unregister_hws;
}
- for (i = 0; i < ARRAY_SIZE(uart_clk_ids); i++) {
- int index = uart_clk_ids[i];
-
- uart_hws[i] = &hws[index]->clk;
- }
-
- imx_register_uart_clocks(uart_hws);
+ imx_register_uart_clocks(4);
return 0;
diff --git a/drivers/clk/imx/clk-imx8mp.c b/drivers/clk/imx/clk-imx8mp.c
index 48e2124..0391f5b 100644
--- a/drivers/clk/imx/clk-imx8mp.c
+++ b/drivers/clk/imx/clk-imx8mp.c
@@ -414,20 +414,11 @@ static const char * const imx8mp_dram_core_sels[] = {"dram_pll_out", "dram_alt_r
static struct clk_hw **hws;
static struct clk_hw_onecell_data *clk_hw_data;
-static const int uart_clk_ids[] = {
- IMX8MP_CLK_UART1_ROOT,
- IMX8MP_CLK_UART2_ROOT,
- IMX8MP_CLK_UART3_ROOT,
- IMX8MP_CLK_UART4_ROOT,
-};
-static struct clk **uart_clks[ARRAY_SIZE(uart_clk_ids) + 1];
-
static int imx8mp_clocks_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device_node *np = dev->of_node;
void __iomem *anatop_base, *ccm_base;
- int i;
np = of_find_compatible_node(NULL, NULL, "fsl,imx8mp-anatop");
anatop_base = of_iomap(np, 0);
@@ -737,13 +728,7 @@ static int imx8mp_clocks_probe(struct platform_device *pdev)
of_clk_add_hw_provider(np, of_clk_hw_onecell_get, clk_hw_data);
- for (i = 0; i < ARRAY_SIZE(uart_clk_ids); i++) {
- int index = uart_clk_ids[i];
-
- uart_clks[i] = &hws[index]->clk;
- }
-
- imx_register_uart_clocks(uart_clks);
+ imx_register_uart_clocks(4);
return 0;
}
diff --git a/drivers/clk/imx/clk-imx8mq.c b/drivers/clk/imx/clk-imx8mq.c
index 06292d4..4e6c81a 100644
--- a/drivers/clk/imx/clk-imx8mq.c
+++ b/drivers/clk/imx/clk-imx8mq.c
@@ -273,20 +273,12 @@ static const char * const imx8mq_clko2_sels[] = {"osc_25m", "sys2_pll_200m", "sy
static struct clk_hw_onecell_data *clk_hw_data;
static struct clk_hw **hws;
-static const int uart_clk_ids[] = {
- IMX8MQ_CLK_UART1_ROOT,
- IMX8MQ_CLK_UART2_ROOT,
- IMX8MQ_CLK_UART3_ROOT,
- IMX8MQ_CLK_UART4_ROOT,
-};
-static struct clk **uart_hws[ARRAY_SIZE(uart_clk_ids) + 1];
-
static int imx8mq_clocks_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device_node *np = dev->of_node;
void __iomem *base;
- int err, i;
+ int err;
clk_hw_data = kzalloc(struct_size(clk_hw_data, hws,
IMX8MQ_CLK_END), GFP_KERNEL);
@@ -607,13 +599,7 @@ static int imx8mq_clocks_probe(struct platform_device *pdev)
goto unregister_hws;
}
- for (i = 0; i < ARRAY_SIZE(uart_clk_ids); i++) {
- int index = uart_clk_ids[i];
-
- uart_hws[i] = &hws[index]->clk;
- }
-
- imx_register_uart_clocks(uart_hws);
+ imx_register_uart_clocks(4);
return 0;
diff --git a/drivers/clk/imx/clk.c b/drivers/clk/imx/clk.c
index 47882c5..7cc6699 100644
--- a/drivers/clk/imx/clk.c
+++ b/drivers/clk/imx/clk.c
@@ -147,8 +147,10 @@ void imx_cscmr1_fixup(u32 *val)
}
#ifndef MODULE
-static int imx_keep_uart_clocks;
-static struct clk ** const *imx_uart_clocks;
+
+static bool imx_keep_uart_clocks;
+static int imx_enabled_uart_clocks;
+static struct clk **imx_uart_clocks;
static int __init imx_keep_uart_clocks_param(char *str)
{
@@ -161,24 +163,45 @@ __setup_param("earlycon", imx_keep_uart_earlycon,
__setup_param("earlyprintk", imx_keep_uart_earlyprintk,
imx_keep_uart_clocks_param, 0);
-void imx_register_uart_clocks(struct clk ** const clks[])
+void imx_register_uart_clocks(unsigned int clk_count)
{
+ imx_enabled_uart_clocks = 0;
+
+/* i.MX boards use device trees now. For build tests without CONFIG_OF, do nothing */
+#ifdef CONFIG_OF
if (imx_keep_uart_clocks) {
int i;
- imx_uart_clocks = clks;
- for (i = 0; imx_uart_clocks[i]; i++)
- clk_prepare_enable(*imx_uart_clocks[i]);
+ imx_uart_clocks = kcalloc(clk_count, sizeof(struct clk *), GFP_KERNEL);
+
+ if (!of_stdout)
+ return;
+
+ for (i = 0; i < clk_count; i++) {
+ imx_uart_clocks[imx_enabled_uart_clocks] = of_clk_get(of_stdout, i);
+
+ /* Stop if there are no more of_stdout references */
+ if (IS_ERR(imx_uart_clocks[imx_enabled_uart_clocks]))
+ return;
+
+ /* Only enable the clock if it's not NULL */
+ if (imx_uart_clocks[imx_enabled_uart_clocks])
+ clk_prepare_enable(imx_uart_clocks[imx_enabled_uart_clocks++]);
+ }
}
+#endif
}
static int __init imx_clk_disable_uart(void)
{
- if (imx_keep_uart_clocks && imx_uart_clocks) {
+ if (imx_keep_uart_clocks && imx_enabled_uart_clocks) {
int i;
- for (i = 0; imx_uart_clocks[i]; i++)
- clk_disable_unprepare(*imx_uart_clocks[i]);
+ for (i = 0; i < imx_enabled_uart_clocks; i++) {
+ clk_disable_unprepare(imx_uart_clocks[i]);
+ clk_put(imx_uart_clocks[i]);
+ }
+ kfree(imx_uart_clocks);
}
return 0;
diff --git a/drivers/clk/imx/clk.h b/drivers/clk/imx/clk.h
index 1d7be0c..f04cbba 100644
--- a/drivers/clk/imx/clk.h
+++ b/drivers/clk/imx/clk.h
@@ -13,9 +13,9 @@ extern spinlock_t imx_ccm_lock;
void imx_check_clocks(struct clk *clks[], unsigned int count);
void imx_check_clk_hws(struct clk_hw *clks[], unsigned int count);
#ifndef MODULE
-void imx_register_uart_clocks(struct clk ** const clks[]);
+void imx_register_uart_clocks(unsigned int clk_count);
#else
-static inline void imx_register_uart_clocks(struct clk ** const clks[])
+static inline void imx_register_uart_clocks(unsigned int clk_count)
{
}
#endif
diff --git a/drivers/clk/mvebu/armada-37xx-periph.c b/drivers/clk/mvebu/armada-37xx-periph.c
index f5746f9..32ac6b6 100644
--- a/drivers/clk/mvebu/armada-37xx-periph.c
+++ b/drivers/clk/mvebu/armada-37xx-periph.c
@@ -84,6 +84,7 @@ struct clk_pm_cpu {
void __iomem *reg_div;
u8 shift_div;
struct regmap *nb_pm_base;
+ unsigned long l1_expiration;
};
#define to_clk_double_div(_hw) container_of(_hw, struct clk_double_div, hw)
@@ -440,33 +441,6 @@ static u8 clk_pm_cpu_get_parent(struct clk_hw *hw)
return val;
}
-static int clk_pm_cpu_set_parent(struct clk_hw *hw, u8 index)
-{
- struct clk_pm_cpu *pm_cpu = to_clk_pm_cpu(hw);
- struct regmap *base = pm_cpu->nb_pm_base;
- int load_level;
-
- /*
- * We set the clock parent only if the DVFS is available but
- * not enabled.
- */
- if (IS_ERR(base) || armada_3700_pm_dvfs_is_enabled(base))
- return -EINVAL;
-
- /* Set the parent clock for all the load level */
- for (load_level = 0; load_level < LOAD_LEVEL_NR; load_level++) {
- unsigned int reg, mask, val,
- offset = ARMADA_37XX_NB_TBG_SEL_OFF;
-
- armada_3700_pm_dvfs_update_regs(load_level, &reg, &offset);
-
- val = index << offset;
- mask = ARMADA_37XX_NB_TBG_SEL_MASK << offset;
- regmap_update_bits(base, reg, mask, val);
- }
- return 0;
-}
-
static unsigned long clk_pm_cpu_recalc_rate(struct clk_hw *hw,
unsigned long parent_rate)
{
@@ -514,8 +488,10 @@ static long clk_pm_cpu_round_rate(struct clk_hw *hw, unsigned long rate,
}
/*
- * Switching the CPU from the L2 or L3 frequencies (300 and 200 Mhz
- * respectively) to L0 frequency (1.2 Ghz) requires a significant
+ * Workaround when base CPU frequency is 1000 or 1200 MHz
+ *
+ * Switching the CPU from the L2 or L3 frequencies (250/300 or 200 MHz
+ * respectively) to L0 frequency (1/1.2 GHz) requires a significant
* amount of time to let VDD stabilize to the appropriate
* voltage. This amount of time is large enough that it cannot be
* covered by the hardware countdown register. Due to this, the CPU
@@ -525,26 +501,56 @@ static long clk_pm_cpu_round_rate(struct clk_hw *hw, unsigned long rate,
* To work around this problem, we prevent switching directly from the
* L2/L3 frequencies to the L0 frequency, and instead switch to the L1
* frequency in-between. The sequence therefore becomes:
- * 1. First switch from L2/L3(200/300MHz) to L1(600MHZ)
+ * 1. First switch from L2/L3 (200/250/300 MHz) to L1 (500/600 MHz)
* 2. Sleep 20ms for stabling VDD voltage
- * 3. Then switch from L1(600MHZ) to L0(1200Mhz).
+ * 3. Then switch from L1 (500/600 MHz) to L0 (1000/1200 MHz).
*/
-static void clk_pm_cpu_set_rate_wa(unsigned long rate, struct regmap *base)
+static void clk_pm_cpu_set_rate_wa(struct clk_pm_cpu *pm_cpu,
+ unsigned int new_level, unsigned long rate,
+ struct regmap *base)
{
unsigned int cur_level;
- if (rate != 1200 * 1000 * 1000)
- return;
-
regmap_read(base, ARMADA_37XX_NB_CPU_LOAD, &cur_level);
cur_level &= ARMADA_37XX_NB_CPU_LOAD_MASK;
- if (cur_level <= ARMADA_37XX_DVFS_LOAD_1)
+
+ if (cur_level == new_level)
return;
+ /*
+ * System wants to go to L1 on its own. If we are going from L2/L3,
+ * remember when 20ms will expire. If from L0, set the value so that
+ * next switch to L0 won't have to wait.
+ */
+ if (new_level == ARMADA_37XX_DVFS_LOAD_1) {
+ if (cur_level == ARMADA_37XX_DVFS_LOAD_0)
+ pm_cpu->l1_expiration = jiffies;
+ else
+ pm_cpu->l1_expiration = jiffies + msecs_to_jiffies(20);
+ return;
+ }
+
+ /*
+ * If we are setting to L2/L3, just invalidate L1 expiration time,
+ * sleeping is not needed.
+ */
+ if (rate < 1000*1000*1000)
+ goto invalidate_l1_exp;
+
+ /*
+ * We are going to L0 with rate >= 1GHz. Check whether we have been at
+ * L1 for long enough time. If not, go to L1 for 20ms.
+ */
+ if (pm_cpu->l1_expiration && jiffies >= pm_cpu->l1_expiration)
+ goto invalidate_l1_exp;
+
regmap_update_bits(base, ARMADA_37XX_NB_CPU_LOAD,
ARMADA_37XX_NB_CPU_LOAD_MASK,
ARMADA_37XX_DVFS_LOAD_1);
msleep(20);
+
+invalidate_l1_exp:
+ pm_cpu->l1_expiration = 0;
}
static int clk_pm_cpu_set_rate(struct clk_hw *hw, unsigned long rate,
@@ -578,7 +584,9 @@ static int clk_pm_cpu_set_rate(struct clk_hw *hw, unsigned long rate,
reg = ARMADA_37XX_NB_CPU_LOAD;
mask = ARMADA_37XX_NB_CPU_LOAD_MASK;
- clk_pm_cpu_set_rate_wa(rate, base);
+ /* Apply workaround when base CPU frequency is 1000 or 1200 MHz */
+ if (parent_rate >= 1000*1000*1000)
+ clk_pm_cpu_set_rate_wa(pm_cpu, load_level, rate, base);
regmap_update_bits(base, reg, mask, load_level);
@@ -592,7 +600,6 @@ static int clk_pm_cpu_set_rate(struct clk_hw *hw, unsigned long rate,
static const struct clk_ops clk_pm_cpu_ops = {
.get_parent = clk_pm_cpu_get_parent,
- .set_parent = clk_pm_cpu_set_parent,
.round_rate = clk_pm_cpu_round_rate,
.set_rate = clk_pm_cpu_set_rate,
.recalc_rate = clk_pm_cpu_recalc_rate,
diff --git a/drivers/clk/qcom/a53-pll.c b/drivers/clk/qcom/a53-pll.c
index 45cfc57..af6ac17 100644
--- a/drivers/clk/qcom/a53-pll.c
+++ b/drivers/clk/qcom/a53-pll.c
@@ -93,6 +93,7 @@ static const struct of_device_id qcom_a53pll_match_table[] = {
{ .compatible = "qcom,msm8916-a53pll" },
{ }
};
+MODULE_DEVICE_TABLE(of, qcom_a53pll_match_table);
static struct platform_driver qcom_a53pll_driver = {
.probe = qcom_a53pll_probe,
diff --git a/drivers/clk/qcom/apss-ipq-pll.c b/drivers/clk/qcom/apss-ipq-pll.c
index 30be87f..bef7899 100644
--- a/drivers/clk/qcom/apss-ipq-pll.c
+++ b/drivers/clk/qcom/apss-ipq-pll.c
@@ -81,6 +81,7 @@ static const struct of_device_id apss_ipq_pll_match_table[] = {
{ .compatible = "qcom,ipq6018-a53pll" },
{ }
};
+MODULE_DEVICE_TABLE(of, apss_ipq_pll_match_table);
static struct platform_driver apss_ipq_pll_driver = {
.probe = apss_ipq_pll_probe,
diff --git a/drivers/clk/samsung/clk-exynos7.c b/drivers/clk/samsung/clk-exynos7.c
index 87ee1ba..4a5d2a9 100644
--- a/drivers/clk/samsung/clk-exynos7.c
+++ b/drivers/clk/samsung/clk-exynos7.c
@@ -537,8 +537,13 @@ static const struct samsung_gate_clock top1_gate_clks[] __initconst = {
GATE(CLK_ACLK_FSYS0_200, "aclk_fsys0_200", "dout_aclk_fsys0_200",
ENABLE_ACLK_TOP13, 28, CLK_SET_RATE_PARENT |
CLK_IS_CRITICAL, 0),
+ /*
+ * This clock is required for the CMU_FSYS1 registers access, keep it
+ * enabled permanently until proper runtime PM support is added.
+ */
GATE(CLK_ACLK_FSYS1_200, "aclk_fsys1_200", "dout_aclk_fsys1_200",
- ENABLE_ACLK_TOP13, 24, CLK_SET_RATE_PARENT, 0),
+ ENABLE_ACLK_TOP13, 24, CLK_SET_RATE_PARENT |
+ CLK_IS_CRITICAL, 0),
GATE(CLK_SCLK_PHY_FSYS1_26M, "sclk_phy_fsys1_26m",
"dout_sclk_phy_fsys1_26m", ENABLE_SCLK_TOP1_FSYS11,
diff --git a/drivers/clk/socfpga/clk-gate-a10.c b/drivers/clk/socfpga/clk-gate-a10.c
index cd5df91..d627788 100644
--- a/drivers/clk/socfpga/clk-gate-a10.c
+++ b/drivers/clk/socfpga/clk-gate-a10.c
@@ -146,6 +146,7 @@ static void __init __socfpga_gate_init(struct device_node *node,
if (IS_ERR(socfpga_clk->sys_mgr_base_addr)) {
pr_err("%s: failed to find altr,sys-mgr regmap!\n",
__func__);
+ kfree(socfpga_clk);
return;
}
}
diff --git a/drivers/clk/socfpga/clk-gate.c b/drivers/clk/socfpga/clk-gate.c
index 43ecd50..cf94a12 100644
--- a/drivers/clk/socfpga/clk-gate.c
+++ b/drivers/clk/socfpga/clk-gate.c
@@ -99,7 +99,7 @@ static unsigned long socfpga_clk_recalc_rate(struct clk_hw *hwclk,
val = readl(socfpgaclk->div_reg) >> socfpgaclk->shift;
val &= GENMASK(socfpgaclk->width - 1, 0);
/* Check for GPIO_DB_CLK by its offset */
- if ((int) socfpgaclk->div_reg & SOCFPGA_GPIO_DB_CLK_OFFSET)
+ if ((uintptr_t) socfpgaclk->div_reg & SOCFPGA_GPIO_DB_CLK_OFFSET)
div = val + 1;
else
div = (1 << val);
diff --git a/drivers/clk/uniphier/clk-uniphier-mux.c b/drivers/clk/uniphier/clk-uniphier-mux.c
index 462c843..1998e9d 100644
--- a/drivers/clk/uniphier/clk-uniphier-mux.c
+++ b/drivers/clk/uniphier/clk-uniphier-mux.c
@@ -31,10 +31,10 @@ static int uniphier_clk_mux_set_parent(struct clk_hw *hw, u8 index)
static u8 uniphier_clk_mux_get_parent(struct clk_hw *hw)
{
struct uniphier_clk_mux *mux = to_uniphier_clk_mux(hw);
- int num_parents = clk_hw_get_num_parents(hw);
+ unsigned int num_parents = clk_hw_get_num_parents(hw);
int ret;
unsigned int val;
- u8 i;
+ unsigned int i;
ret = regmap_read(mux->regmap, mux->reg, &val);
if (ret)
diff --git a/drivers/clk/zynqmp/pll.c b/drivers/clk/zynqmp/pll.c
index 92f449e..abe6afb 100644
--- a/drivers/clk/zynqmp/pll.c
+++ b/drivers/clk/zynqmp/pll.c
@@ -14,10 +14,12 @@
* struct zynqmp_pll - PLL clock
* @hw: Handle between common and hardware-specific interfaces
* @clk_id: PLL clock ID
+ * @set_pll_mode: Whether an IOCTL_SET_PLL_FRAC_MODE request be sent to ATF
*/
struct zynqmp_pll {
struct clk_hw hw;
u32 clk_id;
+ bool set_pll_mode;
};
#define to_zynqmp_pll(_hw) container_of(_hw, struct zynqmp_pll, hw)
@@ -81,6 +83,8 @@ static inline void zynqmp_pll_set_mode(struct clk_hw *hw, bool on)
if (ret)
pr_warn_once("%s() PLL set frac mode failed for %s, ret = %d\n",
__func__, clk_name, ret);
+ else
+ clk->set_pll_mode = true;
}
/**
@@ -100,9 +104,7 @@ static long zynqmp_pll_round_rate(struct clk_hw *hw, unsigned long rate,
/* Enable the fractional mode if needed */
rate_div = (rate * FRAC_DIV) / *prate;
f = rate_div % FRAC_DIV;
- zynqmp_pll_set_mode(hw, !!f);
-
- if (zynqmp_pll_get_mode(hw) == PLL_MODE_FRAC) {
+ if (f) {
if (rate > PS_PLL_VCO_MAX) {
fbdiv = rate / PS_PLL_VCO_MAX;
rate = rate / (fbdiv + 1);
@@ -173,10 +175,12 @@ static int zynqmp_pll_set_rate(struct clk_hw *hw, unsigned long rate,
long rate_div, frac, m, f;
int ret;
- if (zynqmp_pll_get_mode(hw) == PLL_MODE_FRAC) {
- rate_div = (rate * FRAC_DIV) / parent_rate;
+ rate_div = (rate * FRAC_DIV) / parent_rate;
+ f = rate_div % FRAC_DIV;
+ zynqmp_pll_set_mode(hw, !!f);
+
+ if (f) {
m = rate_div / FRAC_DIV;
- f = rate_div % FRAC_DIV;
m = clamp_t(u32, m, (PLL_FBDIV_MIN), (PLL_FBDIV_MAX));
rate = parent_rate * m;
frac = (parent_rate * f) / FRAC_DIV;
@@ -240,9 +244,15 @@ static int zynqmp_pll_enable(struct clk_hw *hw)
u32 clk_id = clk->clk_id;
int ret;
- if (zynqmp_pll_is_enabled(hw))
+ /*
+ * Don't skip enabling clock if there is an IOCTL_SET_PLL_FRAC_MODE request
+ * that has been sent to ATF.
+ */
+ if (zynqmp_pll_is_enabled(hw) && (!clk->set_pll_mode))
return 0;
+ clk->set_pll_mode = false;
+
ret = zynqmp_pm_clock_enable(clk_id);
if (ret)
pr_warn_once("%s() clock enable failed for %s, ret = %d\n",
diff --git a/drivers/clocksource/ingenic-ost.c b/drivers/clocksource/ingenic-ost.c
index 029efc2..6af2470 100644
--- a/drivers/clocksource/ingenic-ost.c
+++ b/drivers/clocksource/ingenic-ost.c
@@ -88,9 +88,9 @@ static int __init ingenic_ost_probe(struct platform_device *pdev)
return PTR_ERR(ost->regs);
map = device_node_to_regmap(dev->parent->of_node);
- if (!map) {
+ if (IS_ERR(map)) {
dev_err(dev, "regmap not found");
- return -EINVAL;
+ return PTR_ERR(map);
}
ost->clk = devm_clk_get(dev, "ost");
diff --git a/drivers/clocksource/timer-ti-dm-systimer.c b/drivers/clocksource/timer-ti-dm-systimer.c
index 33b3e8a..b6f9796 100644
--- a/drivers/clocksource/timer-ti-dm-systimer.c
+++ b/drivers/clocksource/timer-ti-dm-systimer.c
@@ -2,6 +2,7 @@
#include <linux/clk.h>
#include <linux/clocksource.h>
#include <linux/clockchips.h>
+#include <linux/cpuhotplug.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/iopoll.h>
@@ -449,13 +450,13 @@ static int dmtimer_set_next_event(unsigned long cycles,
struct dmtimer_systimer *t = &clkevt->t;
void __iomem *pend = t->base + t->pend;
- writel_relaxed(0xffffffff - cycles, t->base + t->counter);
while (readl_relaxed(pend) & WP_TCRR)
cpu_relax();
+ writel_relaxed(0xffffffff - cycles, t->base + t->counter);
- writel_relaxed(OMAP_TIMER_CTRL_ST, t->base + t->ctrl);
while (readl_relaxed(pend) & WP_TCLR)
cpu_relax();
+ writel_relaxed(OMAP_TIMER_CTRL_ST, t->base + t->ctrl);
return 0;
}
@@ -490,18 +491,18 @@ static int dmtimer_set_periodic(struct clock_event_device *evt)
dmtimer_clockevent_shutdown(evt);
/* Looks like we need to first set the load value separately */
- writel_relaxed(clkevt->period, t->base + t->load);
while (readl_relaxed(pend) & WP_TLDR)
cpu_relax();
+ writel_relaxed(clkevt->period, t->base + t->load);
- writel_relaxed(clkevt->period, t->base + t->counter);
while (readl_relaxed(pend) & WP_TCRR)
cpu_relax();
+ writel_relaxed(clkevt->period, t->base + t->counter);
- writel_relaxed(OMAP_TIMER_CTRL_AR | OMAP_TIMER_CTRL_ST,
- t->base + t->ctrl);
while (readl_relaxed(pend) & WP_TCLR)
cpu_relax();
+ writel_relaxed(OMAP_TIMER_CTRL_AR | OMAP_TIMER_CTRL_ST,
+ t->base + t->ctrl);
return 0;
}
@@ -530,17 +531,17 @@ static void omap_clockevent_unidle(struct clock_event_device *evt)
writel_relaxed(OMAP_TIMER_INT_OVERFLOW, t->base + t->wakeup);
}
-static int __init dmtimer_clockevent_init(struct device_node *np)
+static int __init dmtimer_clkevt_init_common(struct dmtimer_clockevent *clkevt,
+ struct device_node *np,
+ unsigned int features,
+ const struct cpumask *cpumask,
+ const char *name,
+ int rating)
{
- struct dmtimer_clockevent *clkevt;
struct clock_event_device *dev;
struct dmtimer_systimer *t;
int error;
- clkevt = kzalloc(sizeof(*clkevt), GFP_KERNEL);
- if (!clkevt)
- return -ENOMEM;
-
t = &clkevt->t;
dev = &clkevt->dev;
@@ -548,24 +549,23 @@ static int __init dmtimer_clockevent_init(struct device_node *np)
* We mostly use cpuidle_coupled with ARM local timers for runtime,
* so there's probably no use for CLOCK_EVT_FEAT_DYNIRQ here.
*/
- dev->features = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT;
- dev->rating = 300;
+ dev->features = features;
+ dev->rating = rating;
dev->set_next_event = dmtimer_set_next_event;
dev->set_state_shutdown = dmtimer_clockevent_shutdown;
dev->set_state_periodic = dmtimer_set_periodic;
dev->set_state_oneshot = dmtimer_clockevent_shutdown;
+ dev->set_state_oneshot_stopped = dmtimer_clockevent_shutdown;
dev->tick_resume = dmtimer_clockevent_shutdown;
- dev->cpumask = cpu_possible_mask;
+ dev->cpumask = cpumask;
dev->irq = irq_of_parse_and_map(np, 0);
- if (!dev->irq) {
- error = -ENXIO;
- goto err_out_free;
- }
+ if (!dev->irq)
+ return -ENXIO;
error = dmtimer_systimer_setup(np, &clkevt->t);
if (error)
- goto err_out_free;
+ return error;
clkevt->period = 0xffffffff - DIV_ROUND_CLOSEST(t->rate, HZ);
@@ -577,38 +577,132 @@ static int __init dmtimer_clockevent_init(struct device_node *np)
writel_relaxed(OMAP_TIMER_CTRL_POSTED, t->base + t->ifctrl);
error = request_irq(dev->irq, dmtimer_clockevent_interrupt,
- IRQF_TIMER, "clockevent", clkevt);
+ IRQF_TIMER, name, clkevt);
if (error)
goto err_out_unmap;
writel_relaxed(OMAP_TIMER_INT_OVERFLOW, t->base + t->irq_ena);
writel_relaxed(OMAP_TIMER_INT_OVERFLOW, t->base + t->wakeup);
- pr_info("TI gptimer clockevent: %s%lu Hz at %pOF\n",
- of_find_property(np, "ti,timer-alwon", NULL) ?
+ pr_info("TI gptimer %s: %s%lu Hz at %pOF\n",
+ name, of_find_property(np, "ti,timer-alwon", NULL) ?
"always-on " : "", t->rate, np->parent);
- clockevents_config_and_register(dev, t->rate,
- 3, /* Timer internal resynch latency */
- 0xffffffff);
-
- if (of_machine_is_compatible("ti,am33xx") ||
- of_machine_is_compatible("ti,am43")) {
- dev->suspend = omap_clockevent_idle;
- dev->resume = omap_clockevent_unidle;
- }
-
return 0;
err_out_unmap:
iounmap(t->base);
+ return error;
+}
+
+static int __init dmtimer_clockevent_init(struct device_node *np)
+{
+ struct dmtimer_clockevent *clkevt;
+ int error;
+
+ clkevt = kzalloc(sizeof(*clkevt), GFP_KERNEL);
+ if (!clkevt)
+ return -ENOMEM;
+
+ error = dmtimer_clkevt_init_common(clkevt, np,
+ CLOCK_EVT_FEAT_PERIODIC |
+ CLOCK_EVT_FEAT_ONESHOT,
+ cpu_possible_mask, "clockevent",
+ 300);
+ if (error)
+ goto err_out_free;
+
+ clockevents_config_and_register(&clkevt->dev, clkevt->t.rate,
+ 3, /* Timer internal resync latency */
+ 0xffffffff);
+
+ if (of_machine_is_compatible("ti,am33xx") ||
+ of_machine_is_compatible("ti,am43")) {
+ clkevt->dev.suspend = omap_clockevent_idle;
+ clkevt->dev.resume = omap_clockevent_unidle;
+ }
+
+ return 0;
+
err_out_free:
kfree(clkevt);
return error;
}
+/* Dmtimer as percpu timer. See dra7 ARM architected timer wrap erratum i940 */
+static DEFINE_PER_CPU(struct dmtimer_clockevent, dmtimer_percpu_timer);
+
+static int __init dmtimer_percpu_timer_init(struct device_node *np, int cpu)
+{
+ struct dmtimer_clockevent *clkevt;
+ int error;
+
+ if (!cpu_possible(cpu))
+ return -EINVAL;
+
+ if (!of_property_read_bool(np->parent, "ti,no-reset-on-init") ||
+ !of_property_read_bool(np->parent, "ti,no-idle"))
+ pr_warn("Incomplete dtb for percpu dmtimer %pOF\n", np->parent);
+
+ clkevt = per_cpu_ptr(&dmtimer_percpu_timer, cpu);
+
+ error = dmtimer_clkevt_init_common(clkevt, np, CLOCK_EVT_FEAT_ONESHOT,
+ cpumask_of(cpu), "percpu-dmtimer",
+ 500);
+ if (error)
+ return error;
+
+ return 0;
+}
+
+/* See TRM for timer internal resynch latency */
+static int omap_dmtimer_starting_cpu(unsigned int cpu)
+{
+ struct dmtimer_clockevent *clkevt = per_cpu_ptr(&dmtimer_percpu_timer, cpu);
+ struct clock_event_device *dev = &clkevt->dev;
+ struct dmtimer_systimer *t = &clkevt->t;
+
+ clockevents_config_and_register(dev, t->rate, 3, ULONG_MAX);
+ irq_force_affinity(dev->irq, cpumask_of(cpu));
+
+ return 0;
+}
+
+static int __init dmtimer_percpu_timer_startup(void)
+{
+ struct dmtimer_clockevent *clkevt = per_cpu_ptr(&dmtimer_percpu_timer, 0);
+ struct dmtimer_systimer *t = &clkevt->t;
+
+ if (t->sysc) {
+ cpuhp_setup_state(CPUHP_AP_TI_GP_TIMER_STARTING,
+ "clockevents/omap/gptimer:starting",
+ omap_dmtimer_starting_cpu, NULL);
+ }
+
+ return 0;
+}
+subsys_initcall(dmtimer_percpu_timer_startup);
+
+static int __init dmtimer_percpu_quirk_init(struct device_node *np, u32 pa)
+{
+ struct device_node *arm_timer;
+
+ arm_timer = of_find_compatible_node(NULL, NULL, "arm,armv7-timer");
+ if (of_device_is_available(arm_timer)) {
+ pr_warn_once("ARM architected timer wrap issue i940 detected\n");
+ return 0;
+ }
+
+ if (pa == 0x48034000) /* dra7 dmtimer3 */
+ return dmtimer_percpu_timer_init(np, 0);
+ else if (pa == 0x48036000) /* dra7 dmtimer4 */
+ return dmtimer_percpu_timer_init(np, 1);
+
+ return 0;
+}
+
/* Clocksource */
static struct dmtimer_clocksource *
to_dmtimer_clocksource(struct clocksource *cs)
@@ -742,6 +836,9 @@ static int __init dmtimer_systimer_init(struct device_node *np)
if (clockevent == pa)
return dmtimer_clockevent_init(np);
+ if (of_machine_is_compatible("ti,dra7"))
+ return dmtimer_percpu_quirk_init(np, pa);
+
return 0;
}
diff --git a/drivers/cpufreq/armada-37xx-cpufreq.c b/drivers/cpufreq/armada-37xx-cpufreq.c
index b4af409..e4782f5 100644
--- a/drivers/cpufreq/armada-37xx-cpufreq.c
+++ b/drivers/cpufreq/armada-37xx-cpufreq.c
@@ -25,6 +25,10 @@
#include "cpufreq-dt.h"
+/* Clk register set */
+#define ARMADA_37XX_CLK_TBG_SEL 0
+#define ARMADA_37XX_CLK_TBG_SEL_CPU_OFF 22
+
/* Power management in North Bridge register set */
#define ARMADA_37XX_NB_L0L1 0x18
#define ARMADA_37XX_NB_L2L3 0x1C
@@ -69,6 +73,8 @@
#define LOAD_LEVEL_NR 4
#define MIN_VOLT_MV 1000
+#define MIN_VOLT_MV_FOR_L1_1000MHZ 1108
+#define MIN_VOLT_MV_FOR_L1_1200MHZ 1155
/* AVS value for the corresponding voltage (in mV) */
static int avs_map[] = {
@@ -120,10 +126,15 @@ static struct armada_37xx_dvfs *armada_37xx_cpu_freq_info_get(u32 freq)
* will be configured then the DVFS will be enabled.
*/
static void __init armada37xx_cpufreq_dvfs_setup(struct regmap *base,
- struct clk *clk, u8 *divider)
+ struct regmap *clk_base, u8 *divider)
{
+ u32 cpu_tbg_sel;
int load_lvl;
- struct clk *parent;
+
+ /* Determine to which TBG clock is CPU connected */
+ regmap_read(clk_base, ARMADA_37XX_CLK_TBG_SEL, &cpu_tbg_sel);
+ cpu_tbg_sel >>= ARMADA_37XX_CLK_TBG_SEL_CPU_OFF;
+ cpu_tbg_sel &= ARMADA_37XX_NB_TBG_SEL_MASK;
for (load_lvl = 0; load_lvl < LOAD_LEVEL_NR; load_lvl++) {
unsigned int reg, mask, val, offset = 0;
@@ -142,6 +153,11 @@ static void __init armada37xx_cpufreq_dvfs_setup(struct regmap *base,
mask = (ARMADA_37XX_NB_CLK_SEL_MASK
<< ARMADA_37XX_NB_CLK_SEL_OFF);
+ /* Set TBG index, for all levels we use the same TBG */
+ val = cpu_tbg_sel << ARMADA_37XX_NB_TBG_SEL_OFF;
+ mask = (ARMADA_37XX_NB_TBG_SEL_MASK
+ << ARMADA_37XX_NB_TBG_SEL_OFF);
+
/*
* Set cpu divider based on the pre-computed array in
* order to have balanced step.
@@ -160,14 +176,6 @@ static void __init armada37xx_cpufreq_dvfs_setup(struct regmap *base,
regmap_update_bits(base, reg, mask, val);
}
-
- /*
- * Set cpu clock source, for all the level we keep the same
- * clock source that the one already configured. For this one
- * we need to use the clock framework
- */
- parent = clk_get_parent(clk);
- clk_set_parent(clk, parent);
}
/*
@@ -202,6 +210,8 @@ static u32 armada_37xx_avs_val_match(int target_vm)
* - L2 & L3 voltage should be about 150mv smaller than L0 voltage.
* This function calculates L1 & L2 & L3 AVS values dynamically based
* on L0 voltage and fill all AVS values to the AVS value table.
+ * When the base CPU frequency is 1000 or 1200 MHz, an additional
+ * minimum AVS value applies for load L1.
*/
static void __init armada37xx_cpufreq_avs_configure(struct regmap *base,
struct armada_37xx_dvfs *dvfs)
@@ -233,6 +243,19 @@ static void __init armada37xx_cpufreq_avs_configure(struct regmap *base,
for (load_level = 1; load_level < LOAD_LEVEL_NR; load_level++)
dvfs->avs[load_level] = avs_min;
+ /*
+ * When the base CPU frequency is 1000/1200 MHz, set the avs values
+ * for loads L0 and L1 to their typical initial values according to
+ * the Armada 3700 Hardware Specifications.
+ */
+ if (dvfs->cpu_freq_max >= 1000*1000*1000) {
+ if (dvfs->cpu_freq_max >= 1200*1000*1000)
+ avs_min = armada_37xx_avs_val_match(MIN_VOLT_MV_FOR_L1_1200MHZ);
+ else
+ avs_min = armada_37xx_avs_val_match(MIN_VOLT_MV_FOR_L1_1000MHZ);
+ dvfs->avs[0] = dvfs->avs[1] = avs_min;
+ }
+
return;
}
@@ -252,6 +275,26 @@ static void __init armada37xx_cpufreq_avs_configure(struct regmap *base,
target_vm = avs_map[l0_vdd_min] - 150;
target_vm = target_vm > MIN_VOLT_MV ? target_vm : MIN_VOLT_MV;
dvfs->avs[2] = dvfs->avs[3] = armada_37xx_avs_val_match(target_vm);
+
+ /*
+ * Fix the avs value for load L1 when base CPU frequency is 1000/1200 MHz,
+ * otherwise the CPU gets stuck when switching from load L1 to load L0.
+ * Also ensure that avs value for load L1 is not higher than for L0.
+ */
+ if (dvfs->cpu_freq_max >= 1000*1000*1000) {
+ u32 avs_min_l1;
+
+ if (dvfs->cpu_freq_max >= 1200*1000*1000)
+ avs_min_l1 = armada_37xx_avs_val_match(MIN_VOLT_MV_FOR_L1_1200MHZ);
+ else
+ avs_min_l1 = armada_37xx_avs_val_match(MIN_VOLT_MV_FOR_L1_1000MHZ);
+
+ if (avs_min_l1 > dvfs->avs[0])
+ avs_min_l1 = dvfs->avs[0];
+
+ if (dvfs->avs[1] < avs_min_l1)
+ dvfs->avs[1] = avs_min_l1;
+ }
}
static void __init armada37xx_cpufreq_avs_setup(struct regmap *base,
@@ -358,11 +401,16 @@ static int __init armada37xx_cpufreq_driver_init(void)
struct platform_device *pdev;
unsigned long freq;
unsigned int cur_frequency, base_frequency;
- struct regmap *nb_pm_base, *avs_base;
+ struct regmap *nb_clk_base, *nb_pm_base, *avs_base;
struct device *cpu_dev;
int load_lvl, ret;
struct clk *clk, *parent;
+ nb_clk_base =
+ syscon_regmap_lookup_by_compatible("marvell,armada-3700-periph-clock-nb");
+ if (IS_ERR(nb_clk_base))
+ return -ENODEV;
+
nb_pm_base =
syscon_regmap_lookup_by_compatible("marvell,armada-3700-nb-pm");
@@ -421,7 +469,7 @@ static int __init armada37xx_cpufreq_driver_init(void)
return -EINVAL;
}
- dvfs = armada_37xx_cpu_freq_info_get(cur_frequency);
+ dvfs = armada_37xx_cpu_freq_info_get(base_frequency);
if (!dvfs) {
clk_put(clk);
return -EINVAL;
@@ -439,7 +487,7 @@ static int __init armada37xx_cpufreq_driver_init(void)
armada37xx_cpufreq_avs_configure(avs_base, dvfs);
armada37xx_cpufreq_avs_setup(avs_base, dvfs);
- armada37xx_cpufreq_dvfs_setup(nb_pm_base, clk, dvfs->divider);
+ armada37xx_cpufreq_dvfs_setup(nb_pm_base, nb_clk_base, dvfs->divider);
clk_put(clk);
for (load_lvl = ARMADA_37XX_DVFS_LOAD_0; load_lvl < LOAD_LEVEL_NR;
@@ -473,7 +521,7 @@ static int __init armada37xx_cpufreq_driver_init(void)
remove_opp:
/* clean-up the already added opp before leaving */
while (load_lvl-- > ARMADA_37XX_DVFS_LOAD_0) {
- freq = cur_frequency / dvfs->divider[load_lvl];
+ freq = base_frequency / dvfs->divider[load_lvl];
dev_pm_opp_remove(cpu_dev, freq);
}
diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
index c8ae855..44a5d15 100644
--- a/drivers/cpufreq/intel_pstate.c
+++ b/drivers/cpufreq/intel_pstate.c
@@ -3019,6 +3019,14 @@ static const struct x86_cpu_id hwp_support_ids[] __initconst = {
{}
};
+static bool intel_pstate_hwp_is_enabled(void)
+{
+ u64 value;
+
+ rdmsrl(MSR_PM_ENABLE, value);
+ return !!(value & 0x1);
+}
+
static int __init intel_pstate_init(void)
{
const struct x86_cpu_id *id;
@@ -3037,8 +3045,12 @@ static int __init intel_pstate_init(void)
* Avoid enabling HWP for processors without EPP support,
* because that means incomplete HWP implementation which is a
* corner case and supporting it is generally problematic.
+ *
+ * If HWP is enabled already, though, there is no choice but to
+ * deal with it.
*/
- if (!no_hwp && boot_cpu_has(X86_FEATURE_HWP_EPP)) {
+ if ((!no_hwp && boot_cpu_has(X86_FEATURE_HWP_EPP)) ||
+ intel_pstate_hwp_is_enabled()) {
hwp_active++;
hwp_mode_bdw = id->driver_data;
intel_pstate.attr = hwp_cpufreq_attrs;
diff --git a/drivers/cpuidle/Kconfig.arm b/drivers/cpuidle/Kconfig.arm
index 0844fad..334f83e 100644
--- a/drivers/cpuidle/Kconfig.arm
+++ b/drivers/cpuidle/Kconfig.arm
@@ -107,7 +107,7 @@
config ARM_QCOM_SPM_CPUIDLE
bool "CPU Idle Driver for Qualcomm Subsystem Power Manager (SPM)"
- depends on (ARCH_QCOM || COMPILE_TEST) && !ARM64
+ depends on (ARCH_QCOM || COMPILE_TEST) && !ARM64 && MMU
select ARM_CPU_SUSPEND
select CPU_IDLE_MULTIPLE_DRIVERS
select DT_IDLE_STATES
diff --git a/drivers/cpuidle/cpuidle-tegra.c b/drivers/cpuidle/cpuidle-tegra.c
index 191966d..29c5e83 100644
--- a/drivers/cpuidle/cpuidle-tegra.c
+++ b/drivers/cpuidle/cpuidle-tegra.c
@@ -135,13 +135,13 @@ static int tegra_cpuidle_c7_enter(void)
{
int err;
- if (tegra_cpuidle_using_firmware()) {
- err = call_firmware_op(prepare_idle, TF_PM_MODE_LP2_NOFLUSH_L2);
- if (err)
- return err;
+ err = call_firmware_op(prepare_idle, TF_PM_MODE_LP2_NOFLUSH_L2);
+ if (err && err != -ENOSYS)
+ return err;
- return call_firmware_op(do_idle, 0);
- }
+ err = call_firmware_op(do_idle, 0);
+ if (err != -ENOSYS)
+ return err;
return cpu_suspend(0, tegra30_pm_secondary_cpu_suspend);
}
diff --git a/drivers/crypto/allwinner/Kconfig b/drivers/crypto/allwinner/Kconfig
index 0cdfe0e..ce34048 100644
--- a/drivers/crypto/allwinner/Kconfig
+++ b/drivers/crypto/allwinner/Kconfig
@@ -62,10 +62,10 @@
config CRYPTO_DEV_SUN8I_CE_HASH
bool "Enable support for hash on sun8i-ce"
depends on CRYPTO_DEV_SUN8I_CE
- select MD5
- select SHA1
- select SHA256
- select SHA512
+ select CRYPTO_MD5
+ select CRYPTO_SHA1
+ select CRYPTO_SHA256
+ select CRYPTO_SHA512
help
Say y to enable support for hash algorithms.
@@ -123,8 +123,8 @@
config CRYPTO_DEV_SUN8I_SS_HASH
bool "Enable support for hash on sun8i-ss"
depends on CRYPTO_DEV_SUN8I_SS
- select MD5
- select SHA1
- select SHA256
+ select CRYPTO_MD5
+ select CRYPTO_SHA1
+ select CRYPTO_SHA256
help
Say y to enable support for hash algorithms.
diff --git a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-core.c b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-core.c
index 158422f..00194d1 100644
--- a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-core.c
+++ b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-core.c
@@ -932,7 +932,7 @@ static int sun8i_ce_probe(struct platform_device *pdev)
if (err)
goto error_alg;
- err = pm_runtime_get_sync(ce->dev);
+ err = pm_runtime_resume_and_get(ce->dev);
if (err < 0)
goto error_alg;
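The pm_runtime_get_sync() to pm_runtime_resume_and_get() conversions here, and in the sun8i-ss, omap-aes, sa2ul, stm32 and tegra patches further down, all target the same pitfall: pm_runtime_get_sync() bumps the device usage count even when the resume fails, so an early return on error leaks a reference unless the caller also does pm_runtime_put_noidle(). pm_runtime_resume_and_get() drops the count itself on failure. A minimal sketch of the two shapes, using a hypothetical my_hw_init_*() helper rather than any of these drivers' real functions:

#include <linux/pm_runtime.h>

/* Old shape: the usage count must be dropped by hand on failure. */
static int my_hw_init_old(struct device *dev)
{
	int err;

	err = pm_runtime_get_sync(dev);
	if (err < 0) {
		pm_runtime_put_noidle(dev);	/* undo the count bump */
		return err;
	}
	/* ... program the hardware ... */
	pm_runtime_put(dev);
	return 0;
}

/* New shape: the helper has already dropped the count when it fails. */
static int my_hw_init_new(struct device *dev)
{
	int err;

	err = pm_runtime_resume_and_get(dev);
	if (err < 0)
		return err;
	/* ... program the hardware ... */
	pm_runtime_put(dev);
	return 0;
}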
diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
index ed2a69f..7c355bc 100644
--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
+++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
@@ -351,7 +351,7 @@ int sun8i_ss_cipher_init(struct crypto_tfm *tfm)
op->enginectx.op.prepare_request = NULL;
op->enginectx.op.unprepare_request = NULL;
- err = pm_runtime_get_sync(op->ss->dev);
+ err = pm_runtime_resume_and_get(op->ss->dev);
if (err < 0) {
dev_err(op->ss->dev, "pm error %d\n", err);
goto error_pm;
diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
index e0ddc68..80e8906 100644
--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
+++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
@@ -753,7 +753,7 @@ static int sun8i_ss_probe(struct platform_device *pdev)
if (err)
goto error_alg;
- err = pm_runtime_get_sync(ss->dev);
+ err = pm_runtime_resume_and_get(ss->dev);
if (err < 0)
goto error_alg;
diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c
index b6ab205..756d5a7 100644
--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c
+++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c
@@ -347,8 +347,10 @@ int sun8i_ss_hash_run(struct crypto_engine *engine, void *breq)
bf = (__le32 *)pad;
result = kzalloc(digestsize, GFP_KERNEL | GFP_DMA);
- if (!result)
+ if (!result) {
+ kfree(pad);
return -ENOMEM;
+ }
for (i = 0; i < MAX_SG; i++) {
rctx->t_dst[i].addr = 0;
@@ -434,11 +436,10 @@ int sun8i_ss_hash_run(struct crypto_engine *engine, void *breq)
dma_unmap_sg(ss->dev, areq->src, nr_sgs, DMA_TO_DEVICE);
dma_unmap_single(ss->dev, addr_res, digestsize, DMA_FROM_DEVICE);
- kfree(pad);
-
memcpy(areq->result, result, algt->alg.hash.halg.digestsize);
- kfree(result);
theend:
+ kfree(pad);
+ kfree(result);
crypto_finalize_hash_request(engine, breq, err);
return 0;
}
diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-prng.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-prng.c
index 08a1473..3191527 100644
--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-prng.c
+++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-prng.c
@@ -103,7 +103,8 @@ int sun8i_ss_prng_generate(struct crypto_rng *tfm, const u8 *src,
dma_iv = dma_map_single(ss->dev, ctx->seed, ctx->slen, DMA_TO_DEVICE);
if (dma_mapping_error(ss->dev, dma_iv)) {
dev_err(ss->dev, "Cannot DMA MAP IV\n");
- return -EFAULT;
+ err = -EFAULT;
+ goto err_free;
}
dma_dst = dma_map_single(ss->dev, d, todo, DMA_FROM_DEVICE);
@@ -167,6 +168,7 @@ int sun8i_ss_prng_generate(struct crypto_rng *tfm, const u8 *src,
memcpy(ctx->seed, d + dlen, ctx->slen);
}
memzero_explicit(d, todo);
+err_free:
kfree(d);
return err;
diff --git a/drivers/crypto/cavium/nitrox/nitrox_main.c b/drivers/crypto/cavium/nitrox/nitrox_main.c
index 9d14be9..cee2a27 100644
--- a/drivers/crypto/cavium/nitrox/nitrox_main.c
+++ b/drivers/crypto/cavium/nitrox/nitrox_main.c
@@ -451,7 +451,6 @@ static int nitrox_probe(struct pci_dev *pdev,
err = pci_request_mem_regions(pdev, nitrox_driver_name);
if (err) {
pci_disable_device(pdev);
- dev_err(&pdev->dev, "Failed to request mem regions!\n");
return err;
}
pci_set_master(pdev);
diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
index 476113e..21caed4 100644
--- a/drivers/crypto/ccp/sev-dev.c
+++ b/drivers/crypto/ccp/sev-dev.c
@@ -149,6 +149,9 @@ static int __sev_do_cmd_locked(int cmd, void *data, int *psp_ret)
sev = psp->sev_data;
+ if (data && WARN_ON_ONCE(!virt_addr_valid(data)))
+ return -EINVAL;
+
/* Get the physical address of the command buffer */
phys_lsb = data ? lower_32_bits(__psp_pa(data)) : 0;
phys_msb = data ? upper_32_bits(__psp_pa(data)) : 0;
@@ -986,7 +989,7 @@ int sev_dev_init(struct psp_device *psp)
if (!sev->vdata) {
ret = -ENODEV;
dev_err(dev, "sev: missing driver data\n");
- goto e_err;
+ goto e_sev;
}
psp_set_sev_irq_handler(psp, sev_irq_handler, sev);
@@ -1001,6 +1004,8 @@ int sev_dev_init(struct psp_device *psp)
e_irq:
psp_clear_sev_irq_handler(psp);
+e_sev:
+ devm_kfree(dev, sev);
e_err:
psp->sev_data = NULL;
diff --git a/drivers/crypto/ccp/tee-dev.c b/drivers/crypto/ccp/tee-dev.c
index 5e697a9..bcb81fe 100644
--- a/drivers/crypto/ccp/tee-dev.c
+++ b/drivers/crypto/ccp/tee-dev.c
@@ -36,6 +36,7 @@ static int tee_alloc_ring(struct psp_tee_device *tee, int ring_size)
if (!start_addr)
return -ENOMEM;
+ memset(start_addr, 0x0, ring_size);
rb_mgr->ring_start = start_addr;
rb_mgr->ring_size = ring_size;
rb_mgr->ring_pa = __psp_pa(start_addr);
@@ -244,41 +245,54 @@ static int tee_submit_cmd(struct psp_tee_device *tee, enum tee_cmd_id cmd_id,
void *buf, size_t len, struct tee_ring_cmd **resp)
{
struct tee_ring_cmd *cmd;
- u32 rptr, wptr;
int nloop = 1000, ret = 0;
+ u32 rptr;
*resp = NULL;
mutex_lock(&tee->rb_mgr.mutex);
- wptr = tee->rb_mgr.wptr;
-
- /* Check if ring buffer is full */
+ /* Loop until empty entry found in ring buffer */
do {
+ /* Get pointer to ring buffer command entry */
+ cmd = (struct tee_ring_cmd *)
+ (tee->rb_mgr.ring_start + tee->rb_mgr.wptr);
+
rptr = ioread32(tee->io_regs + tee->vdata->ring_rptr_reg);
- if (!(wptr + sizeof(struct tee_ring_cmd) == rptr))
+ /* Check if ring buffer is full or command entry is waiting
+ * for response from TEE
+ */
+ if (!(tee->rb_mgr.wptr + sizeof(struct tee_ring_cmd) == rptr ||
+ cmd->flag == CMD_WAITING_FOR_RESPONSE))
break;
- dev_info(tee->dev, "tee: ring buffer full. rptr = %u wptr = %u\n",
- rptr, wptr);
+ dev_dbg(tee->dev, "tee: ring buffer full. rptr = %u wptr = %u\n",
+ rptr, tee->rb_mgr.wptr);
- /* Wait if ring buffer is full */
+ /* Wait if ring buffer is full or TEE is processing data */
mutex_unlock(&tee->rb_mgr.mutex);
schedule_timeout_interruptible(msecs_to_jiffies(10));
mutex_lock(&tee->rb_mgr.mutex);
} while (--nloop);
- if (!nloop && (wptr + sizeof(struct tee_ring_cmd) == rptr)) {
- dev_err(tee->dev, "tee: ring buffer full. rptr = %u wptr = %u\n",
- rptr, wptr);
+ if (!nloop &&
+ (tee->rb_mgr.wptr + sizeof(struct tee_ring_cmd) == rptr ||
+ cmd->flag == CMD_WAITING_FOR_RESPONSE)) {
+ dev_err(tee->dev, "tee: ring buffer full. rptr = %u wptr = %u response flag %u\n",
+ rptr, tee->rb_mgr.wptr, cmd->flag);
ret = -EBUSY;
goto unlock;
}
- /* Pointer to empty data entry in ring buffer */
- cmd = (struct tee_ring_cmd *)(tee->rb_mgr.ring_start + wptr);
+ /* Do not submit command if PSP got disabled while processing any
+ * command in another thread
+ */
+ if (psp_dead) {
+ ret = -EBUSY;
+ goto unlock;
+ }
/* Write command data into ring buffer */
cmd->cmd_id = cmd_id;
@@ -286,6 +300,9 @@ static int tee_submit_cmd(struct psp_tee_device *tee, enum tee_cmd_id cmd_id,
memset(&cmd->buf[0], 0, sizeof(cmd->buf));
memcpy(&cmd->buf[0], buf, len);
+ /* Indicate driver is waiting for response */
+ cmd->flag = CMD_WAITING_FOR_RESPONSE;
+
/* Update local copy of write pointer */
tee->rb_mgr.wptr += sizeof(struct tee_ring_cmd);
if (tee->rb_mgr.wptr >= tee->rb_mgr.ring_size)
@@ -353,12 +370,16 @@ int psp_tee_process_cmd(enum tee_cmd_id cmd_id, void *buf, size_t len,
return ret;
ret = tee_wait_cmd_completion(tee, resp, TEE_DEFAULT_TIMEOUT);
- if (ret)
+ if (ret) {
+ resp->flag = CMD_RESPONSE_TIMEDOUT;
return ret;
+ }
memcpy(buf, &resp->buf[0], len);
*status = resp->status;
+ resp->flag = CMD_RESPONSE_COPIED;
+
return 0;
}
EXPORT_SYMBOL(psp_tee_process_cmd);
diff --git a/drivers/crypto/ccp/tee-dev.h b/drivers/crypto/ccp/tee-dev.h
index f099601..49d2615 100644
--- a/drivers/crypto/ccp/tee-dev.h
+++ b/drivers/crypto/ccp/tee-dev.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
- * Copyright 2019 Advanced Micro Devices, Inc.
+ * Copyright (C) 2019,2021 Advanced Micro Devices, Inc.
*
* Author: Rijo Thomas <Rijo-john.Thomas@amd.com>
* Author: Devaraj Rangasamy <Devaraj.Rangasamy@amd.com>
@@ -18,7 +18,7 @@
#include <linux/mutex.h>
#define TEE_DEFAULT_TIMEOUT 10
-#define MAX_BUFFER_SIZE 992
+#define MAX_BUFFER_SIZE 988
/**
* enum tee_ring_cmd_id - TEE interface commands for ring buffer configuration
@@ -82,6 +82,20 @@ enum tee_cmd_state {
};
/**
+ * enum cmd_resp_state - TEE command's response status maintained by driver
+ * @CMD_RESPONSE_INVALID: initial state when no command is written to ring
+ * @CMD_WAITING_FOR_RESPONSE: driver waiting for response from TEE
+ * @CMD_RESPONSE_TIMEDOUT: failed to get response from TEE
+ * @CMD_RESPONSE_COPIED: driver has copied response from TEE
+ */
+enum cmd_resp_state {
+ CMD_RESPONSE_INVALID,
+ CMD_WAITING_FOR_RESPONSE,
+ CMD_RESPONSE_TIMEDOUT,
+ CMD_RESPONSE_COPIED,
+};
+
+/**
* struct tee_ring_cmd - Structure of the command buffer in TEE ring
* @cmd_id: refers to &enum tee_cmd_id. Command id for the ring buffer
* interface
@@ -91,6 +105,7 @@ enum tee_cmd_state {
* @pdata: private data (currently unused)
* @res1: reserved region
* @buf: TEE command specific buffer
+ * @flag: refers to &enum cmd_resp_state
*/
struct tee_ring_cmd {
u32 cmd_id;
@@ -100,6 +115,7 @@ struct tee_ring_cmd {
u64 pdata;
u32 res1[2];
u8 buf[MAX_BUFFER_SIZE];
+ u32 flag;
/* Total size: 1024 bytes */
} __packed;
diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
index 13b908e..884adeb 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -768,13 +768,14 @@ static inline void create_wreq(struct chcr_context *ctx,
struct uld_ctx *u_ctx = ULD_CTX(ctx);
unsigned int tx_channel_id, rx_channel_id;
unsigned int txqidx = 0, rxqidx = 0;
- unsigned int qid, fid;
+ unsigned int qid, fid, portno;
get_qidxs(req, &txqidx, &rxqidx);
qid = u_ctx->lldi.rxq_ids[rxqidx];
fid = u_ctx->lldi.rxq_ids[0];
+ portno = rxqidx / ctx->rxq_perchan;
tx_channel_id = txqidx / ctx->txq_perchan;
- rx_channel_id = rxqidx / ctx->rxq_perchan;
+ rx_channel_id = cxgb4_port_e2cchan(u_ctx->lldi.ports[portno]);
chcr_req->wreq.op_to_cctx_size = FILL_WR_OP_CCTX_SIZE;
@@ -805,6 +806,7 @@ static struct sk_buff *create_cipher_wr(struct cipher_wr_param *wrparam)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(wrparam->req);
struct chcr_context *ctx = c_ctx(tfm);
+ struct uld_ctx *u_ctx = ULD_CTX(ctx);
struct ablk_ctx *ablkctx = ABLK_CTX(ctx);
struct sk_buff *skb = NULL;
struct chcr_wr *chcr_req;
@@ -821,6 +823,7 @@ static struct sk_buff *create_cipher_wr(struct cipher_wr_param *wrparam)
struct adapter *adap = padap(ctx->dev);
unsigned int rx_channel_id = reqctx->rxqidx / ctx->rxq_perchan;
+ rx_channel_id = cxgb4_port_e2cchan(u_ctx->lldi.ports[rx_channel_id]);
nents = sg_nents_xlen(reqctx->dstsg, wrparam->bytes, CHCR_DST_SG_SIZE,
reqctx->dst_ofst);
dst_size = get_space_for_phys_dsgl(nents);
@@ -1579,6 +1582,7 @@ static struct sk_buff *create_hash_wr(struct ahash_request *req,
int error = 0;
unsigned int rx_channel_id = req_ctx->rxqidx / ctx->rxq_perchan;
+ rx_channel_id = cxgb4_port_e2cchan(u_ctx->lldi.ports[rx_channel_id]);
transhdr_len = HASH_TRANSHDR_SIZE(param->kctx_len);
req_ctx->hctx_wr.imm = (transhdr_len + param->bfr_len +
param->sg_len) <= SGE_MAX_WR_LEN;
@@ -2437,6 +2441,7 @@ static struct sk_buff *create_authenc_wr(struct aead_request *req,
{
struct crypto_aead *tfm = crypto_aead_reqtfm(req);
struct chcr_context *ctx = a_ctx(tfm);
+ struct uld_ctx *u_ctx = ULD_CTX(ctx);
struct chcr_aead_ctx *aeadctx = AEAD_CTX(ctx);
struct chcr_authenc_ctx *actx = AUTHENC_CTX(aeadctx);
struct chcr_aead_reqctx *reqctx = aead_request_ctx(req);
@@ -2456,6 +2461,7 @@ static struct sk_buff *create_authenc_wr(struct aead_request *req,
struct adapter *adap = padap(ctx->dev);
unsigned int rx_channel_id = reqctx->rxqidx / ctx->rxq_perchan;
+ rx_channel_id = cxgb4_port_e2cchan(u_ctx->lldi.ports[rx_channel_id]);
if (req->cryptlen == 0)
return NULL;
@@ -2709,9 +2715,11 @@ void chcr_add_aead_dst_ent(struct aead_request *req,
struct dsgl_walk dsgl_walk;
unsigned int authsize = crypto_aead_authsize(tfm);
struct chcr_context *ctx = a_ctx(tfm);
+ struct uld_ctx *u_ctx = ULD_CTX(ctx);
u32 temp;
unsigned int rx_channel_id = reqctx->rxqidx / ctx->rxq_perchan;
+ rx_channel_id = cxgb4_port_e2cchan(u_ctx->lldi.ports[rx_channel_id]);
dsgl_walk_init(&dsgl_walk, phys_cpl);
dsgl_walk_add_page(&dsgl_walk, IV + reqctx->b0_len, reqctx->iv_dma);
temp = req->assoclen + req->cryptlen +
@@ -2751,9 +2759,11 @@ void chcr_add_cipher_dst_ent(struct skcipher_request *req,
struct chcr_skcipher_req_ctx *reqctx = skcipher_request_ctx(req);
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(wrparam->req);
struct chcr_context *ctx = c_ctx(tfm);
+ struct uld_ctx *u_ctx = ULD_CTX(ctx);
struct dsgl_walk dsgl_walk;
unsigned int rx_channel_id = reqctx->rxqidx / ctx->rxq_perchan;
+ rx_channel_id = cxgb4_port_e2cchan(u_ctx->lldi.ports[rx_channel_id]);
dsgl_walk_init(&dsgl_walk, phys_cpl);
dsgl_walk_add_sg(&dsgl_walk, reqctx->dstsg, wrparam->bytes,
reqctx->dst_ofst);
@@ -2957,6 +2967,7 @@ static void fill_sec_cpl_for_aead(struct cpl_tx_sec_pdu *sec_cpl,
{
struct crypto_aead *tfm = crypto_aead_reqtfm(req);
struct chcr_context *ctx = a_ctx(tfm);
+ struct uld_ctx *u_ctx = ULD_CTX(ctx);
struct chcr_aead_ctx *aeadctx = AEAD_CTX(ctx);
struct chcr_aead_reqctx *reqctx = aead_request_ctx(req);
unsigned int cipher_mode = CHCR_SCMD_CIPHER_MODE_AES_CCM;
@@ -2966,6 +2977,8 @@ static void fill_sec_cpl_for_aead(struct cpl_tx_sec_pdu *sec_cpl,
unsigned int tag_offset = 0, auth_offset = 0;
unsigned int assoclen;
+ rx_channel_id = cxgb4_port_e2cchan(u_ctx->lldi.ports[rx_channel_id]);
+
if (get_aead_subtype(tfm) == CRYPTO_ALG_SUB_TYPE_AEAD_RFC4309)
assoclen = req->assoclen - 8;
else
@@ -3126,6 +3139,7 @@ static struct sk_buff *create_gcm_wr(struct aead_request *req,
{
struct crypto_aead *tfm = crypto_aead_reqtfm(req);
struct chcr_context *ctx = a_ctx(tfm);
+ struct uld_ctx *u_ctx = ULD_CTX(ctx);
struct chcr_aead_ctx *aeadctx = AEAD_CTX(ctx);
struct chcr_aead_reqctx *reqctx = aead_request_ctx(req);
struct sk_buff *skb = NULL;
@@ -3142,6 +3156,7 @@ static struct sk_buff *create_gcm_wr(struct aead_request *req,
struct adapter *adap = padap(ctx->dev);
unsigned int rx_channel_id = reqctx->rxqidx / ctx->rxq_perchan;
+ rx_channel_id = cxgb4_port_e2cchan(u_ctx->lldi.ports[rx_channel_id]);
if (get_aead_subtype(tfm) == CRYPTO_ALG_SUB_TYPE_AEAD_RFC4106)
assoclen = req->assoclen - 8;
diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.c b/drivers/crypto/hisilicon/sec2/sec_crypto.c
index bb49342..41f1fca 100644
--- a/drivers/crypto/hisilicon/sec2/sec_crypto.c
+++ b/drivers/crypto/hisilicon/sec2/sec_crypto.c
@@ -544,7 +544,7 @@ static int sec_skcipher_init(struct crypto_skcipher *tfm)
crypto_skcipher_set_reqsize(tfm, sizeof(struct sec_req));
ctx->c_ctx.ivsize = crypto_skcipher_ivsize(tfm);
if (ctx->c_ctx.ivsize > SEC_IV_SIZE) {
- dev_err(SEC_CTX_DEV(ctx), "get error skcipher iv size!\n");
+ pr_err("get error skcipher iv size!\n");
return -EINVAL;
}
diff --git a/drivers/crypto/omap-aes.c b/drivers/crypto/omap-aes.c
index 1b1e0ab..0dd4c6b 100644
--- a/drivers/crypto/omap-aes.c
+++ b/drivers/crypto/omap-aes.c
@@ -103,7 +103,7 @@ static int omap_aes_hw_init(struct omap_aes_dev *dd)
dd->err = 0;
}
- err = pm_runtime_get_sync(dd->dev);
+ err = pm_runtime_resume_and_get(dd->dev);
if (err < 0) {
dev_err(dd->dev, "failed to get sync: %d\n", err);
return err;
@@ -1133,7 +1133,7 @@ static int omap_aes_probe(struct platform_device *pdev)
pm_runtime_set_autosuspend_delay(dev, DEFAULT_AUTOSUSPEND_DELAY);
pm_runtime_enable(dev);
- err = pm_runtime_get_sync(dev);
+ err = pm_runtime_resume_and_get(dev);
if (err < 0) {
dev_err(dev, "%s: failed to get_sync(%d)\n",
__func__, err);
@@ -1302,7 +1302,7 @@ static int omap_aes_suspend(struct device *dev)
static int omap_aes_resume(struct device *dev)
{
- pm_runtime_get_sync(dev);
+ pm_runtime_resume_and_get(dev);
return 0;
}
#endif
diff --git a/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c b/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c
index 456979b..ea932b6 100644
--- a/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c
+++ b/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c
@@ -184,12 +184,12 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
if (ret)
goto out_err_free_reg;
- set_bit(ADF_STATUS_PF_RUNNING, &accel_dev->status);
-
ret = adf_dev_init(accel_dev);
if (ret)
goto out_err_dev_shutdown;
+ set_bit(ADF_STATUS_PF_RUNNING, &accel_dev->status);
+
ret = adf_dev_start(accel_dev);
if (ret)
goto out_err_dev_stop;
diff --git a/drivers/crypto/qat/qat_c62xvf/adf_drv.c b/drivers/crypto/qat/qat_c62xvf/adf_drv.c
index b9810f79..6200ad4 100644
--- a/drivers/crypto/qat/qat_c62xvf/adf_drv.c
+++ b/drivers/crypto/qat/qat_c62xvf/adf_drv.c
@@ -184,12 +184,12 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
if (ret)
goto out_err_free_reg;
- set_bit(ADF_STATUS_PF_RUNNING, &accel_dev->status);
-
ret = adf_dev_init(accel_dev);
if (ret)
goto out_err_dev_shutdown;
+ set_bit(ADF_STATUS_PF_RUNNING, &accel_dev->status);
+
ret = adf_dev_start(accel_dev);
if (ret)
goto out_err_dev_stop;
diff --git a/drivers/crypto/qat/qat_common/adf_isr.c b/drivers/crypto/qat/qat_common/adf_isr.c
index 36136f7..da6ef00 100644
--- a/drivers/crypto/qat/qat_common/adf_isr.c
+++ b/drivers/crypto/qat/qat_common/adf_isr.c
@@ -286,19 +286,32 @@ int adf_isr_resource_alloc(struct adf_accel_dev *accel_dev)
ret = adf_isr_alloc_msix_entry_table(accel_dev);
if (ret)
- return ret;
- if (adf_enable_msix(accel_dev))
goto err_out;
- if (adf_setup_bh(accel_dev))
- goto err_out;
+ ret = adf_enable_msix(accel_dev);
+ if (ret)
+ goto err_free_msix_table;
- if (adf_request_irqs(accel_dev))
- goto err_out;
+ ret = adf_setup_bh(accel_dev);
+ if (ret)
+ goto err_disable_msix;
+
+ ret = adf_request_irqs(accel_dev);
+ if (ret)
+ goto err_cleanup_bh;
return 0;
+
+err_cleanup_bh:
+ adf_cleanup_bh(accel_dev);
+
+err_disable_msix:
+ adf_disable_msix(&accel_dev->accel_pci_dev);
+
+err_free_msix_table:
+ adf_isr_free_msix_entry_table(accel_dev);
+
err_out:
- adf_isr_resource_free(accel_dev);
- return -EFAULT;
+ return ret;
}
EXPORT_SYMBOL_GPL(adf_isr_resource_alloc);
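The adf_isr rework above (and the adf_vf_isr one below) replaces a single catch-all error path with the usual staged unwind: every setup step gets its own label, a failure rolls back only the steps that actually succeeded, in reverse order, and the real error code is returned instead of a blanket -EFAULT. A stand-alone sketch of the idiom, with invented setup_*/teardown_* helpers standing in for the real MSI-X, bottom-half and IRQ steps:

#include <stdio.h>

struct example_dev { int a_up, b_up; };

static int setup_a(struct example_dev *d) { d->a_up = 1; return 0; }
static int setup_b(struct example_dev *d) { d->b_up = 1; return 0; }
static int setup_c(struct example_dev *d) { (void)d; return -1; /* simulate failure */ }
static void teardown_b(struct example_dev *d) { d->b_up = 0; }
static void teardown_a(struct example_dev *d) { d->a_up = 0; }

static int example_resource_alloc(struct example_dev *d)
{
	int ret;

	ret = setup_a(d);
	if (ret)
		goto err_out;

	ret = setup_b(d);
	if (ret)
		goto err_teardown_a;

	ret = setup_c(d);
	if (ret)
		goto err_teardown_b;

	return 0;

err_teardown_b:
	teardown_b(d);
err_teardown_a:
	teardown_a(d);
err_out:
	return ret;
}

int main(void)
{
	struct example_dev d = { 0, 0 };
	int ret = example_resource_alloc(&d);

	/* On failure nothing is left half-initialised. */
	printf("ret=%d a_up=%d b_up=%d\n", ret, d.a_up, d.b_up);
	return 0;
}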
diff --git a/drivers/crypto/qat/qat_common/adf_transport.c b/drivers/crypto/qat/qat_common/adf_transport.c
index 2ad7740..cdfd56c 100644
--- a/drivers/crypto/qat/qat_common/adf_transport.c
+++ b/drivers/crypto/qat/qat_common/adf_transport.c
@@ -153,6 +153,7 @@ static int adf_init_ring(struct adf_etr_ring_data *ring)
dev_err(&GET_DEV(accel_dev), "Ring address not aligned\n");
dma_free_coherent(&GET_DEV(accel_dev), ring_size_bytes,
ring->base_addr, ring->dma_addr);
+ ring->base_addr = NULL;
return -EFAULT;
}
diff --git a/drivers/crypto/qat/qat_common/adf_vf_isr.c b/drivers/crypto/qat/qat_common/adf_vf_isr.c
index c4a44dc..31a3628 100644
--- a/drivers/crypto/qat/qat_common/adf_vf_isr.c
+++ b/drivers/crypto/qat/qat_common/adf_vf_isr.c
@@ -260,17 +260,26 @@ int adf_vf_isr_resource_alloc(struct adf_accel_dev *accel_dev)
goto err_out;
if (adf_setup_pf2vf_bh(accel_dev))
- goto err_out;
+ goto err_disable_msi;
if (adf_setup_bh(accel_dev))
- goto err_out;
+ goto err_cleanup_pf2vf_bh;
if (adf_request_msi_irq(accel_dev))
- goto err_out;
+ goto err_cleanup_bh;
return 0;
+
+err_cleanup_bh:
+ adf_cleanup_bh(accel_dev);
+
+err_cleanup_pf2vf_bh:
+ adf_cleanup_pf2vf_bh(accel_dev);
+
+err_disable_msi:
+ adf_disable_msi(accel_dev);
+
err_out:
- adf_vf_isr_resource_free(accel_dev);
return -EFAULT;
}
EXPORT_SYMBOL_GPL(adf_vf_isr_resource_alloc);
diff --git a/drivers/crypto/qat/qat_common/qat_algs.c b/drivers/crypto/qat/qat_common/qat_algs.c
index d552dbc..06abe1e 100644
--- a/drivers/crypto/qat/qat_common/qat_algs.c
+++ b/drivers/crypto/qat/qat_common/qat_algs.c
@@ -670,7 +670,7 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
struct qat_alg_buf_list *bufl;
struct qat_alg_buf_list *buflout = NULL;
dma_addr_t blp;
- dma_addr_t bloutp = 0;
+ dma_addr_t bloutp;
struct scatterlist *sg;
size_t sz_out, sz = struct_size(bufl, bufers, n + 1);
@@ -682,6 +682,9 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
if (unlikely(!bufl))
return -ENOMEM;
+ for_each_sg(sgl, sg, n, i)
+ bufl->bufers[i].addr = DMA_MAPPING_ERROR;
+
blp = dma_map_single(dev, bufl, sz, DMA_TO_DEVICE);
if (unlikely(dma_mapping_error(dev, blp)))
goto err_in;
@@ -715,10 +718,14 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
dev_to_node(&GET_DEV(inst->accel_dev)));
if (unlikely(!buflout))
goto err_in;
+
+ bufers = buflout->bufers;
+ for_each_sg(sglout, sg, n, i)
+ bufers[i].addr = DMA_MAPPING_ERROR;
+
bloutp = dma_map_single(dev, buflout, sz_out, DMA_TO_DEVICE);
if (unlikely(dma_mapping_error(dev, bloutp)))
goto err_out;
- bufers = buflout->bufers;
for_each_sg(sglout, sg, n, i) {
int y = sg_nctr;
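The qat_algs change pre-fills every bufers[i].addr with DMA_MAPPING_ERROR before the first dma_map_single() call, so the error unwind (outside this hunk) can distinguish mapped entries from never-mapped ones and only unmap the former. A hedged sketch of that pattern, using a made-up example_buf array rather than the driver's real qat_alg_buf_list:

#include <linux/dma-mapping.h>
#include <linux/errno.h>

struct example_buf {
	dma_addr_t addr;
	size_t len;
};

static void example_unwind(struct device *dev, struct example_buf *bufs, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		/* Entries that were never mapped still hold DMA_MAPPING_ERROR. */
		if (!dma_mapping_error(dev, bufs[i].addr))
			dma_unmap_single(dev, bufs[i].addr, bufs[i].len,
					 DMA_BIDIRECTIONAL);
	}
}

static int example_map_all(struct device *dev, struct example_buf *bufs,
			   void **cpu_bufs, int n)
{
	int i;

	for (i = 0; i < n; i++)
		bufs[i].addr = DMA_MAPPING_ERROR;	/* "not mapped yet" */

	for (i = 0; i < n; i++) {
		bufs[i].addr = dma_map_single(dev, cpu_bufs[i], bufs[i].len,
					      DMA_BIDIRECTIONAL);
		if (dma_mapping_error(dev, bufs[i].addr)) {
			example_unwind(dev, bufs, n);
			return -ENOMEM;
		}
	}
	return 0;
}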
diff --git a/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c b/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c
index 404cf9d..737508d 100644
--- a/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c
+++ b/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c
@@ -184,12 +184,12 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
if (ret)
goto out_err_free_reg;
- set_bit(ADF_STATUS_PF_RUNNING, &accel_dev->status);
-
ret = adf_dev_init(accel_dev);
if (ret)
goto out_err_dev_shutdown;
+ set_bit(ADF_STATUS_PF_RUNNING, &accel_dev->status);
+
ret = adf_dev_start(accel_dev);
if (ret)
goto out_err_dev_stop;
diff --git a/drivers/crypto/sa2ul.c b/drivers/crypto/sa2ul.c
index eda93fa..4640fe0 100644
--- a/drivers/crypto/sa2ul.c
+++ b/drivers/crypto/sa2ul.c
@@ -1138,8 +1138,10 @@ static int sa_run(struct sa_req *req)
mapped_sg->sgt.sgl = src;
mapped_sg->sgt.orig_nents = src_nents;
ret = dma_map_sgtable(ddev, &mapped_sg->sgt, dir_src, 0);
- if (ret)
+ if (ret) {
+ kfree(rxd);
return ret;
+ }
mapped_sg->dir = dir_src;
mapped_sg->mapped = true;
@@ -1147,8 +1149,10 @@ static int sa_run(struct sa_req *req)
mapped_sg->sgt.sgl = req->src;
mapped_sg->sgt.orig_nents = sg_nents;
ret = dma_map_sgtable(ddev, &mapped_sg->sgt, dir_src, 0);
- if (ret)
+ if (ret) {
+ kfree(rxd);
return ret;
+ }
mapped_sg->dir = dir_src;
mapped_sg->mapped = true;
@@ -2345,7 +2349,7 @@ static int sa_ul_probe(struct platform_device *pdev)
dev_set_drvdata(sa_k3_dev, dev_data);
pm_runtime_enable(dev);
- ret = pm_runtime_get_sync(dev);
+ ret = pm_runtime_resume_and_get(dev);
if (ret < 0) {
dev_err(&pdev->dev, "%s: failed to get sync: %d\n", __func__,
ret);
diff --git a/drivers/crypto/stm32/stm32-cryp.c b/drivers/crypto/stm32/stm32-cryp.c
index 2670c30..7999b26 100644
--- a/drivers/crypto/stm32/stm32-cryp.c
+++ b/drivers/crypto/stm32/stm32-cryp.c
@@ -542,7 +542,7 @@ static int stm32_cryp_hw_init(struct stm32_cryp *cryp)
int ret;
u32 cfg, hw_mode;
- pm_runtime_get_sync(cryp->dev);
+ pm_runtime_resume_and_get(cryp->dev);
/* Disable interrupt */
stm32_cryp_write(cryp, CRYP_IMSCR, 0);
@@ -2043,7 +2043,7 @@ static int stm32_cryp_remove(struct platform_device *pdev)
if (!cryp)
return -ENODEV;
- ret = pm_runtime_get_sync(cryp->dev);
+ ret = pm_runtime_resume_and_get(cryp->dev);
if (ret < 0)
return ret;
diff --git a/drivers/crypto/stm32/stm32-hash.c b/drivers/crypto/stm32/stm32-hash.c
index e3e2527..ff5362d 100644
--- a/drivers/crypto/stm32/stm32-hash.c
+++ b/drivers/crypto/stm32/stm32-hash.c
@@ -812,7 +812,7 @@ static void stm32_hash_finish_req(struct ahash_request *req, int err)
static int stm32_hash_hw_init(struct stm32_hash_dev *hdev,
struct stm32_hash_request_ctx *rctx)
{
- pm_runtime_get_sync(hdev->dev);
+ pm_runtime_resume_and_get(hdev->dev);
if (!(HASH_FLAGS_INIT & hdev->flags)) {
stm32_hash_write(hdev, HASH_CR, HASH_CR_INIT);
@@ -961,7 +961,7 @@ static int stm32_hash_export(struct ahash_request *req, void *out)
u32 *preg;
unsigned int i;
- pm_runtime_get_sync(hdev->dev);
+ pm_runtime_resume_and_get(hdev->dev);
while ((stm32_hash_read(hdev, HASH_SR) & HASH_SR_BUSY))
cpu_relax();
@@ -999,7 +999,7 @@ static int stm32_hash_import(struct ahash_request *req, const void *in)
preg = rctx->hw_context;
- pm_runtime_get_sync(hdev->dev);
+ pm_runtime_resume_and_get(hdev->dev);
stm32_hash_write(hdev, HASH_IMR, *preg++);
stm32_hash_write(hdev, HASH_STR, *preg++);
@@ -1565,7 +1565,7 @@ static int stm32_hash_remove(struct platform_device *pdev)
if (!hdev)
return -ENODEV;
- ret = pm_runtime_get_sync(hdev->dev);
+ ret = pm_runtime_resume_and_get(hdev->dev);
if (ret < 0)
return ret;
diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
index 861c100..98f03a0 100644
--- a/drivers/devfreq/devfreq.c
+++ b/drivers/devfreq/devfreq.c
@@ -377,7 +377,7 @@ static int devfreq_set_target(struct devfreq *devfreq, unsigned long new_freq,
devfreq->previous_freq = new_freq;
if (devfreq->suspend_freq)
- devfreq->resume_freq = cur_freq;
+ devfreq->resume_freq = new_freq;
return err;
}
@@ -788,7 +788,8 @@ struct devfreq *devfreq_add_device(struct device *dev,
if (devfreq->profile->timer < 0
|| devfreq->profile->timer >= DEVFREQ_TIMER_NUM) {
- goto err_out;
+ mutex_unlock(&devfreq->lock);
+ goto err_dev;
}
if (!devfreq->profile->max_state && !devfreq->profile->freq_table) {
diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index fe6a460..af3ee28 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -1086,6 +1086,7 @@ static int __dma_async_device_channel_register(struct dma_device *device,
kfree(chan->dev);
err_free_local:
free_percpu(chan->local);
+ chan->local = NULL;
return rc;
}
diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c
index 08d71da..58c8cc8 100644
--- a/drivers/dma/dw-edma/dw-edma-core.c
+++ b/drivers/dma/dw-edma/dw-edma-core.c
@@ -937,21 +937,20 @@ int dw_edma_remove(struct dw_edma_chip *chip)
/* Power management */
pm_runtime_disable(dev);
- list_for_each_entry_safe(chan, _chan, &dw->wr_edma.channels,
- vc.chan.device_node) {
- list_del(&chan->vc.chan.device_node);
- tasklet_kill(&chan->vc.task);
- }
-
- list_for_each_entry_safe(chan, _chan, &dw->rd_edma.channels,
- vc.chan.device_node) {
- list_del(&chan->vc.chan.device_node);
- tasklet_kill(&chan->vc.task);
- }
-
/* Deregister eDMA device */
dma_async_device_unregister(&dw->wr_edma);
+ list_for_each_entry_safe(chan, _chan, &dw->wr_edma.channels,
+ vc.chan.device_node) {
+ tasklet_kill(&chan->vc.task);
+ list_del(&chan->vc.chan.device_node);
+ }
+
dma_async_device_unregister(&dw->rd_edma);
+ list_for_each_entry_safe(chan, _chan, &dw->rd_edma.channels,
+ vc.chan.device_node) {
+ tasklet_kill(&chan->vc.task);
+ list_del(&chan->vc.chan.device_node);
+ }
/* Turn debugfs off */
dw_edma_v0_core_debugfs_off();
diff --git a/drivers/dma/dw/Kconfig b/drivers/dma/dw/Kconfig
index e516269..db25f9b 100644
--- a/drivers/dma/dw/Kconfig
+++ b/drivers/dma/dw/Kconfig
@@ -10,6 +10,7 @@
config DW_DMAC
tristate "Synopsys DesignWare AHB DMA platform driver"
+ depends on HAS_IOMEM
select DW_DMAC_CORE
help
Support the Synopsys DesignWare AHB DMA controller. This
@@ -18,6 +19,7 @@
config DW_DMAC_PCI
tristate "Synopsys DesignWare AHB DMA PCI driver"
depends on PCI
+ depends on HAS_IOMEM
select DW_DMAC_CORE
help
Support the Synopsys DesignWare AHB DMA controller on the
diff --git a/drivers/dma/idxd/cdev.c b/drivers/dma/idxd/cdev.c
index c397615..4da8857 100644
--- a/drivers/dma/idxd/cdev.c
+++ b/drivers/dma/idxd/cdev.c
@@ -35,15 +35,15 @@ struct idxd_user_context {
unsigned int flags;
};
-enum idxd_cdev_cleanup {
- CDEV_NORMAL = 0,
- CDEV_FAILED,
-};
-
static void idxd_cdev_dev_release(struct device *dev)
{
- dev_dbg(dev, "releasing cdev device\n");
- kfree(dev);
+ struct idxd_cdev *idxd_cdev = container_of(dev, struct idxd_cdev, dev);
+ struct idxd_cdev_context *cdev_ctx;
+ struct idxd_wq *wq = idxd_cdev->wq;
+
+ cdev_ctx = &ictx[wq->idxd->type];
+ ida_simple_remove(&cdev_ctx->minor_ida, idxd_cdev->minor);
+ kfree(idxd_cdev);
}
static struct device_type idxd_cdev_device_type = {
@@ -58,14 +58,11 @@ static inline struct idxd_cdev *inode_idxd_cdev(struct inode *inode)
return container_of(cdev, struct idxd_cdev, cdev);
}
-static inline struct idxd_wq *idxd_cdev_wq(struct idxd_cdev *idxd_cdev)
-{
- return container_of(idxd_cdev, struct idxd_wq, idxd_cdev);
-}
-
static inline struct idxd_wq *inode_wq(struct inode *inode)
{
- return idxd_cdev_wq(inode_idxd_cdev(inode));
+ struct idxd_cdev *idxd_cdev = inode_idxd_cdev(inode);
+
+ return idxd_cdev->wq;
}
static int idxd_cdev_open(struct inode *inode, struct file *filp)
@@ -172,11 +169,10 @@ static __poll_t idxd_cdev_poll(struct file *filp,
struct idxd_user_context *ctx = filp->private_data;
struct idxd_wq *wq = ctx->wq;
struct idxd_device *idxd = wq->idxd;
- struct idxd_cdev *idxd_cdev = &wq->idxd_cdev;
unsigned long flags;
__poll_t out = 0;
- poll_wait(filp, &idxd_cdev->err_queue, wait);
+ poll_wait(filp, &wq->err_queue, wait);
spin_lock_irqsave(&idxd->dev_lock, flags);
if (idxd->sw_err.valid)
out = EPOLLIN | EPOLLRDNORM;
@@ -198,98 +194,67 @@ int idxd_cdev_get_major(struct idxd_device *idxd)
return MAJOR(ictx[idxd->type].devt);
}
-static int idxd_wq_cdev_dev_setup(struct idxd_wq *wq)
+int idxd_wq_add_cdev(struct idxd_wq *wq)
{
struct idxd_device *idxd = wq->idxd;
- struct idxd_cdev *idxd_cdev = &wq->idxd_cdev;
- struct idxd_cdev_context *cdev_ctx;
+ struct idxd_cdev *idxd_cdev;
+ struct cdev *cdev;
struct device *dev;
- int minor, rc;
+ struct idxd_cdev_context *cdev_ctx;
+ int rc, minor;
- idxd_cdev->dev = kzalloc(sizeof(*idxd_cdev->dev), GFP_KERNEL);
- if (!idxd_cdev->dev)
+ idxd_cdev = kzalloc(sizeof(*idxd_cdev), GFP_KERNEL);
+ if (!idxd_cdev)
return -ENOMEM;
- dev = idxd_cdev->dev;
- dev->parent = &idxd->pdev->dev;
- dev_set_name(dev, "%s/wq%u.%u", idxd_get_dev_name(idxd),
- idxd->id, wq->id);
- dev->bus = idxd_get_bus_type(idxd);
-
+ idxd_cdev->wq = wq;
+ cdev = &idxd_cdev->cdev;
+ dev = &idxd_cdev->dev;
cdev_ctx = &ictx[wq->idxd->type];
minor = ida_simple_get(&cdev_ctx->minor_ida, 0, MINORMASK, GFP_KERNEL);
if (minor < 0) {
- rc = minor;
- kfree(dev);
- goto ida_err;
- }
-
- dev->devt = MKDEV(MAJOR(cdev_ctx->devt), minor);
- dev->type = &idxd_cdev_device_type;
- rc = device_register(dev);
- if (rc < 0) {
- dev_err(&idxd->pdev->dev, "device register failed\n");
- goto dev_reg_err;
+ kfree(idxd_cdev);
+ return minor;
}
idxd_cdev->minor = minor;
- return 0;
+ device_initialize(dev);
+ dev->parent = &wq->conf_dev;
+ dev->bus = idxd_get_bus_type(idxd);
+ dev->type = &idxd_cdev_device_type;
+ dev->devt = MKDEV(MAJOR(cdev_ctx->devt), minor);
- dev_reg_err:
- ida_simple_remove(&cdev_ctx->minor_ida, MINOR(dev->devt));
- put_device(dev);
- ida_err:
- idxd_cdev->dev = NULL;
- return rc;
-}
-
-static void idxd_wq_cdev_cleanup(struct idxd_wq *wq,
- enum idxd_cdev_cleanup cdev_state)
-{
- struct idxd_cdev *idxd_cdev = &wq->idxd_cdev;
- struct idxd_cdev_context *cdev_ctx;
-
- cdev_ctx = &ictx[wq->idxd->type];
- if (cdev_state == CDEV_NORMAL)
- cdev_del(&idxd_cdev->cdev);
- device_unregister(idxd_cdev->dev);
- /*
- * The device_type->release() will be called on the device and free
- * the allocated struct device. We can just forget it.
- */
- ida_simple_remove(&cdev_ctx->minor_ida, idxd_cdev->minor);
- idxd_cdev->dev = NULL;
- idxd_cdev->minor = -1;
-}
-
-int idxd_wq_add_cdev(struct idxd_wq *wq)
-{
- struct idxd_cdev *idxd_cdev = &wq->idxd_cdev;
- struct cdev *cdev = &idxd_cdev->cdev;
- struct device *dev;
- int rc;
-
- rc = idxd_wq_cdev_dev_setup(wq);
+ rc = dev_set_name(dev, "%s/wq%u.%u", idxd_get_dev_name(idxd),
+ idxd->id, wq->id);
if (rc < 0)
- return rc;
+ goto err;
- dev = idxd_cdev->dev;
+ wq->idxd_cdev = idxd_cdev;
cdev_init(cdev, &idxd_cdev_fops);
- cdev_set_parent(cdev, &dev->kobj);
- rc = cdev_add(cdev, dev->devt, 1);
+ rc = cdev_device_add(cdev, dev);
if (rc) {
dev_dbg(&wq->idxd->pdev->dev, "cdev_add failed: %d\n", rc);
- idxd_wq_cdev_cleanup(wq, CDEV_FAILED);
- return rc;
+ goto err;
}
- init_waitqueue_head(&idxd_cdev->err_queue);
return 0;
+
+ err:
+ put_device(dev);
+ wq->idxd_cdev = NULL;
+ return rc;
}
void idxd_wq_del_cdev(struct idxd_wq *wq)
{
- idxd_wq_cdev_cleanup(wq, CDEV_NORMAL);
+ struct idxd_cdev *idxd_cdev;
+ struct idxd_cdev_context *cdev_ctx;
+
+ cdev_ctx = &ictx[wq->idxd->type];
+ idxd_cdev = wq->idxd_cdev;
+ wq->idxd_cdev = NULL;
+ cdev_device_del(&idxd_cdev->cdev, &idxd_cdev->dev);
+ put_device(&idxd_cdev->dev);
}
int idxd_cdev_register(void)
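The idxd cdev rework above moves from a separately allocated struct device plus cdev_add() to the embedded pattern: the driver structure owns both the struct device and the struct cdev, the device is set up with device_initialize(), cdev_device_add() registers both together, and the release() callback is the only place the allocation is freed (which is why both the error path and idxd_wq_del_cdev() end in put_device()). A minimal sketch of that lifetime, with hypothetical my_* names in place of the idxd structures:

#include <linux/cdev.h>
#include <linux/device.h>
#include <linux/fs.h>
#include <linux/slab.h>

struct my_cdev {
	struct device dev;	/* embedded, reference counted */
	struct cdev cdev;
};

static void my_cdev_release(struct device *dev)
{
	struct my_cdev *mc = container_of(dev, struct my_cdev, dev);

	kfree(mc);		/* freed only when the last reference goes away */
}

static int my_cdev_add(struct device *parent, dev_t devt,
		       const struct file_operations *fops)
{
	struct my_cdev *mc;
	int rc;

	mc = kzalloc(sizeof(*mc), GFP_KERNEL);
	if (!mc)
		return -ENOMEM;

	device_initialize(&mc->dev);
	mc->dev.parent = parent;
	mc->dev.devt = devt;
	mc->dev.release = my_cdev_release;

	rc = dev_set_name(&mc->dev, "my_cdev");
	if (rc)
		goto err;

	cdev_init(&mc->cdev, fops);
	rc = cdev_device_add(&mc->cdev, &mc->dev);
	if (rc)
		goto err;

	return 0;
err:
	put_device(&mc->dev);	/* ends up in my_cdev_release() */
	return rc;
}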
diff --git a/drivers/dma/idxd/device.c b/drivers/dma/idxd/device.c
index a670483..47aae5f 100644
--- a/drivers/dma/idxd/device.c
+++ b/drivers/dma/idxd/device.c
@@ -169,8 +169,6 @@ int idxd_wq_alloc_resources(struct idxd_wq *wq)
desc->id = i;
desc->wq = wq;
desc->cpu = -1;
- dma_async_tx_descriptor_init(&desc->txd, &wq->dma_chan);
- desc->txd.tx_submit = idxd_dma_tx_submit;
}
return 0;
@@ -263,6 +261,22 @@ void idxd_wq_drain(struct idxd_wq *wq)
idxd_cmd_exec(idxd, IDXD_CMD_DRAIN_WQ, operand, NULL);
}
+void idxd_wq_reset(struct idxd_wq *wq)
+{
+ struct idxd_device *idxd = wq->idxd;
+ struct device *dev = &idxd->pdev->dev;
+ u32 operand;
+
+ if (wq->state != IDXD_WQ_ENABLED) {
+ dev_dbg(dev, "WQ %d in wrong state: %d\n", wq->id, wq->state);
+ return;
+ }
+
+ operand = BIT(wq->id % 16) | ((wq->id / 16) << 16);
+ idxd_cmd_exec(idxd, IDXD_CMD_RESET_WQ, operand, NULL);
+ wq->state = IDXD_WQ_DISABLED;
+}
+
int idxd_wq_map_portal(struct idxd_wq *wq)
{
struct idxd_device *idxd = wq->idxd;
@@ -291,8 +305,6 @@ void idxd_wq_unmap_portal(struct idxd_wq *wq)
void idxd_wq_disable_cleanup(struct idxd_wq *wq)
{
struct idxd_device *idxd = wq->idxd;
- struct device *dev = &idxd->pdev->dev;
- int i, wq_offset;
lockdep_assert_held(&idxd->dev_lock);
memset(wq->wqcfg, 0, idxd->wqcfg_size);
@@ -303,14 +315,6 @@ void idxd_wq_disable_cleanup(struct idxd_wq *wq)
wq->priority = 0;
clear_bit(WQ_FLAG_DEDICATED, &wq->flags);
memset(wq->name, 0, WQ_NAME_SIZE);
-
- for (i = 0; i < WQCFG_STRIDES(idxd); i++) {
- wq_offset = WQCFG_OFFSET(idxd, wq->id, i);
- iowrite32(0, idxd->reg_base + wq_offset);
- dev_dbg(dev, "WQ[%d][%d][%#x]: %#x\n",
- wq->id, i, wq_offset,
- ioread32(idxd->reg_base + wq_offset));
- }
}
/* Device control bits */
@@ -372,7 +376,8 @@ static void idxd_cmd_exec(struct idxd_device *idxd, int cmd_code, u32 operand,
if (idxd_device_is_halted(idxd)) {
dev_warn(&idxd->pdev->dev, "Device is HALTED!\n");
- *status = IDXD_CMDSTS_HW_ERR;
+ if (status)
+ *status = IDXD_CMDSTS_HW_ERR;
return;
}
@@ -560,7 +565,14 @@ static int idxd_wq_config_write(struct idxd_wq *wq)
if (!wq->group)
return 0;
- memset(wq->wqcfg, 0, idxd->wqcfg_size);
+ /*
+ * Instead of memsetting the entire shadow copy of WQCFG, read it back from the
+ * hardware after wq reset. This restores the sticky values that are present on
+ * some devices.
+ */
+ for (i = 0; i < WQCFG_STRIDES(idxd); i++) {
+ wq_offset = WQCFG_OFFSET(idxd, wq->id, i);
+ wq->wqcfg->bits[i] = ioread32(idxd->reg_base + wq_offset);
+ }
/* byte 0-3 */
wq->wqcfg->wq_size = wq->size;
diff --git a/drivers/dma/idxd/dma.c b/drivers/dma/idxd/dma.c
index ec177a5..aa74355 100644
--- a/drivers/dma/idxd/dma.c
+++ b/drivers/dma/idxd/dma.c
@@ -14,7 +14,10 @@
static inline struct idxd_wq *to_idxd_wq(struct dma_chan *c)
{
- return container_of(c, struct idxd_wq, dma_chan);
+ struct idxd_dma_chan *idxd_chan;
+
+ idxd_chan = container_of(c, struct idxd_dma_chan, chan);
+ return idxd_chan->wq;
}
void idxd_dma_complete_txd(struct idxd_desc *desc,
@@ -144,7 +147,7 @@ static void idxd_dma_issue_pending(struct dma_chan *dma_chan)
{
}
-dma_cookie_t idxd_dma_tx_submit(struct dma_async_tx_descriptor *tx)
+static dma_cookie_t idxd_dma_tx_submit(struct dma_async_tx_descriptor *tx)
{
struct dma_chan *c = tx->chan;
struct idxd_wq *wq = to_idxd_wq(c);
@@ -165,14 +168,25 @@ dma_cookie_t idxd_dma_tx_submit(struct dma_async_tx_descriptor *tx)
static void idxd_dma_release(struct dma_device *device)
{
+ struct idxd_dma_dev *idxd_dma = container_of(device, struct idxd_dma_dev, dma);
+
+ kfree(idxd_dma);
}
int idxd_register_dma_device(struct idxd_device *idxd)
{
- struct dma_device *dma = &idxd->dma_dev;
+ struct idxd_dma_dev *idxd_dma;
+ struct dma_device *dma;
+ struct device *dev = &idxd->pdev->dev;
+ int rc;
+ idxd_dma = kzalloc_node(sizeof(*idxd_dma), GFP_KERNEL, dev_to_node(dev));
+ if (!idxd_dma)
+ return -ENOMEM;
+
+ dma = &idxd_dma->dma;
INIT_LIST_HEAD(&dma->channels);
- dma->dev = &idxd->pdev->dev;
+ dma->dev = dev;
dma_cap_set(DMA_PRIVATE, dma->cap_mask);
dma_cap_set(DMA_COMPLETION_NO_ORDER, dma->cap_mask);
@@ -188,35 +202,72 @@ int idxd_register_dma_device(struct idxd_device *idxd)
dma->device_alloc_chan_resources = idxd_dma_alloc_chan_resources;
dma->device_free_chan_resources = idxd_dma_free_chan_resources;
- return dma_async_device_register(&idxd->dma_dev);
+ rc = dma_async_device_register(dma);
+ if (rc < 0) {
+ kfree(idxd_dma);
+ return rc;
+ }
+
+ idxd_dma->idxd = idxd;
+ /*
+ * This pointer is protected by the refs taken by the dma_chan. It will remain valid
+ * as long as there are outstanding channels.
+ */
+ idxd->idxd_dma = idxd_dma;
+ return 0;
}
void idxd_unregister_dma_device(struct idxd_device *idxd)
{
- dma_async_device_unregister(&idxd->dma_dev);
+ dma_async_device_unregister(&idxd->idxd_dma->dma);
}
int idxd_register_dma_channel(struct idxd_wq *wq)
{
struct idxd_device *idxd = wq->idxd;
- struct dma_device *dma = &idxd->dma_dev;
- struct dma_chan *chan = &wq->dma_chan;
- int rc;
+ struct dma_device *dma = &idxd->idxd_dma->dma;
+ struct device *dev = &idxd->pdev->dev;
+ struct idxd_dma_chan *idxd_chan;
+ struct dma_chan *chan;
+ int rc, i;
- memset(&wq->dma_chan, 0, sizeof(struct dma_chan));
+ idxd_chan = kzalloc_node(sizeof(*idxd_chan), GFP_KERNEL, dev_to_node(dev));
+ if (!idxd_chan)
+ return -ENOMEM;
+
+ chan = &idxd_chan->chan;
chan->device = dma;
list_add_tail(&chan->device_node, &dma->channels);
+
+ for (i = 0; i < wq->num_descs; i++) {
+ struct idxd_desc *desc = wq->descs[i];
+
+ dma_async_tx_descriptor_init(&desc->txd, chan);
+ desc->txd.tx_submit = idxd_dma_tx_submit;
+ }
+
rc = dma_async_device_channel_register(dma, chan);
- if (rc < 0)
+ if (rc < 0) {
+ kfree(idxd_chan);
return rc;
+ }
+
+ wq->idxd_chan = idxd_chan;
+ idxd_chan->wq = wq;
+ get_device(&wq->conf_dev);
return 0;
}
void idxd_unregister_dma_channel(struct idxd_wq *wq)
{
- struct dma_chan *chan = &wq->dma_chan;
+ struct idxd_dma_chan *idxd_chan = wq->idxd_chan;
+ struct dma_chan *chan = &idxd_chan->chan;
+ struct idxd_dma_dev *idxd_dma = wq->idxd->idxd_dma;
- dma_async_device_channel_unregister(&wq->idxd->dma_dev, chan);
+ dma_async_device_channel_unregister(&idxd_dma->dma, chan);
list_del(&chan->device_node);
+ kfree(wq->idxd_chan);
+ wq->idxd_chan = NULL;
+ put_device(&wq->conf_dev);
}
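The dma.c changes above stop embedding struct dma_chan directly in struct idxd_wq; instead a small idxd_dma_chan wrapper holds the channel plus a back-pointer, and to_idxd_wq() becomes a container_of() on the wrapper followed by one pointer chase. A stripped-down, stand-alone rendition of that shape (with a simplified container_of and invented names, not the kernel's types):

#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct wq { int id; };

struct wrapped_chan {
	int chan;		/* stands in for struct dma_chan */
	struct wq *wq;		/* back-pointer to the owning work queue */
};

static struct wq *to_wq(int *chan)
{
	struct wrapped_chan *w = container_of(chan, struct wrapped_chan, chan);

	return w->wq;
}

int main(void)
{
	struct wq my_wq = { .id = 3 };
	struct wrapped_chan w = { .chan = 0, .wq = &my_wq };

	printf("wq id via container_of: %d\n", to_wq(&w.chan)->id);
	return 0;
}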
diff --git a/drivers/dma/idxd/idxd.h b/drivers/dma/idxd/idxd.h
index 953ef65..eef6996 100644
--- a/drivers/dma/idxd/idxd.h
+++ b/drivers/dma/idxd/idxd.h
@@ -14,6 +14,9 @@
extern struct kmem_cache *idxd_desc_pool;
+struct idxd_device;
+struct idxd_wq;
+
#define IDXD_REG_TIMEOUT 50
#define IDXD_DRAIN_TIMEOUT 5000
@@ -68,10 +71,10 @@ enum idxd_wq_type {
};
struct idxd_cdev {
+ struct idxd_wq *wq;
struct cdev cdev;
- struct device *dev;
+ struct device dev;
int minor;
- struct wait_queue_head err_queue;
};
#define IDXD_ALLOCATED_BATCH_SIZE 128U
@@ -88,10 +91,16 @@ enum idxd_complete_type {
IDXD_COMPLETE_ABORT,
};
+struct idxd_dma_chan {
+ struct dma_chan chan;
+ struct idxd_wq *wq;
+};
+
struct idxd_wq {
void __iomem *dportal;
struct device conf_dev;
- struct idxd_cdev idxd_cdev;
+ struct idxd_cdev *idxd_cdev;
+ struct wait_queue_head err_queue;
struct idxd_device *idxd;
int id;
enum idxd_wq_type type;
@@ -112,7 +121,7 @@ struct idxd_wq {
int compls_size;
struct idxd_desc **descs;
struct sbitmap_queue sbq;
- struct dma_chan dma_chan;
+ struct idxd_dma_chan *idxd_chan;
char name[WQ_NAME_SIZE + 1];
u64 max_xfer_bytes;
u32 max_batch_size;
@@ -147,6 +156,11 @@ enum idxd_device_flag {
IDXD_FLAG_CMD_RUNNING,
};
+struct idxd_dma_dev {
+ struct idxd_device *idxd;
+ struct dma_device dma;
+};
+
struct idxd_device {
enum idxd_type type;
struct device conf_dev;
@@ -191,7 +205,7 @@ struct idxd_device {
int num_wq_irqs;
struct idxd_irq_entry *irq_entries;
- struct dma_device dma_dev;
+ struct idxd_dma_dev *idxd_dma;
struct workqueue_struct *wq;
struct work_struct work;
};
@@ -295,6 +309,7 @@ void idxd_wq_free_resources(struct idxd_wq *wq);
int idxd_wq_enable(struct idxd_wq *wq);
int idxd_wq_disable(struct idxd_wq *wq);
void idxd_wq_drain(struct idxd_wq *wq);
+void idxd_wq_reset(struct idxd_wq *wq);
int idxd_wq_map_portal(struct idxd_wq *wq);
void idxd_wq_unmap_portal(struct idxd_wq *wq);
void idxd_wq_disable_cleanup(struct idxd_wq *wq);
@@ -312,7 +327,6 @@ void idxd_unregister_dma_channel(struct idxd_wq *wq);
void idxd_parse_completion_status(u8 status, enum dmaengine_tx_result *res);
void idxd_dma_complete_txd(struct idxd_desc *desc,
enum idxd_complete_type comp_type);
-dma_cookie_t idxd_dma_tx_submit(struct dma_async_tx_descriptor *tx);
/* cdev */
int idxd_cdev_register(void);
diff --git a/drivers/dma/idxd/init.c b/drivers/dma/idxd/init.c
index fa8c422..f4c7ce8 100644
--- a/drivers/dma/idxd/init.c
+++ b/drivers/dma/idxd/init.c
@@ -175,7 +175,7 @@ static int idxd_setup_internals(struct idxd_device *idxd)
wq->id = i;
wq->idxd = idxd;
mutex_init(&wq->wq_lock);
- wq->idxd_cdev.minor = -1;
+ init_waitqueue_head(&wq->err_queue);
wq->max_xfer_bytes = idxd->max_xfer_bytes;
wq->max_batch_size = idxd->max_batch_size;
wq->wqcfg = devm_kzalloc(dev, idxd->wqcfg_size, GFP_KERNEL);
diff --git a/drivers/dma/idxd/irq.c b/drivers/dma/idxd/irq.c
index 552e2e2..fc95791 100644
--- a/drivers/dma/idxd/irq.c
+++ b/drivers/dma/idxd/irq.c
@@ -66,14 +66,16 @@ static int process_misc_interrupts(struct idxd_device *idxd, u32 cause)
for (i = 0; i < 4; i++)
idxd->sw_err.bits[i] = ioread64(idxd->reg_base +
IDXD_SWERR_OFFSET + i * sizeof(u64));
- iowrite64(IDXD_SWERR_ACK, idxd->reg_base + IDXD_SWERR_OFFSET);
+
+ iowrite64(idxd->sw_err.bits[0] & IDXD_SWERR_ACK,
+ idxd->reg_base + IDXD_SWERR_OFFSET);
if (idxd->sw_err.valid && idxd->sw_err.wq_idx_valid) {
int id = idxd->sw_err.wq_idx;
struct idxd_wq *wq = &idxd->wqs[id];
if (wq->type == IDXD_WQT_USER)
- wake_up_interruptible(&wq->idxd_cdev.err_queue);
+ wake_up_interruptible(&wq->err_queue);
} else {
int i;
@@ -81,7 +83,7 @@ static int process_misc_interrupts(struct idxd_device *idxd, u32 cause)
struct idxd_wq *wq = &idxd->wqs[i];
if (wq->type == IDXD_WQT_USER)
- wake_up_interruptible(&wq->idxd_cdev.err_queue);
+ wake_up_interruptible(&wq->err_queue);
}
}
diff --git a/drivers/dma/idxd/sysfs.c b/drivers/dma/idxd/sysfs.c
index fb97c9f..7b41cdf 100644
--- a/drivers/dma/idxd/sysfs.c
+++ b/drivers/dma/idxd/sysfs.c
@@ -241,7 +241,6 @@ static void disable_wq(struct idxd_wq *wq)
{
struct idxd_device *idxd = wq->idxd;
struct device *dev = &idxd->pdev->dev;
- int rc;
mutex_lock(&wq->wq_lock);
dev_dbg(dev, "%s removing WQ %s\n", __func__, dev_name(&wq->conf_dev));
@@ -262,17 +261,13 @@ static void disable_wq(struct idxd_wq *wq)
idxd_wq_unmap_portal(wq);
idxd_wq_drain(wq);
- rc = idxd_wq_disable(wq);
+ idxd_wq_reset(wq);
idxd_wq_free_resources(wq);
wq->client_count = 0;
mutex_unlock(&wq->wq_lock);
- if (rc < 0)
- dev_warn(dev, "Failed to disable %s: %d\n",
- dev_name(&wq->conf_dev), rc);
- else
- dev_info(dev, "wq %s disabled\n", dev_name(&wq->conf_dev));
+ dev_info(dev, "wq %s disabled\n", dev_name(&wq->conf_dev));
}
static int idxd_config_bus_remove(struct device *dev)
@@ -923,7 +918,7 @@ static ssize_t wq_size_store(struct device *dev,
if (!test_bit(IDXD_FLAG_CONFIGURABLE, &idxd->flags))
return -EPERM;
- if (wq->state != IDXD_WQ_DISABLED)
+ if (idxd->state == IDXD_DEV_ENABLED)
return -EPERM;
if (size + total_claimed_wq_size(idxd) - wq->size > idxd->max_wq_size)
@@ -1057,8 +1052,16 @@ static ssize_t wq_cdev_minor_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct idxd_wq *wq = container_of(dev, struct idxd_wq, conf_dev);
+ int minor = -1;
- return sprintf(buf, "%d\n", wq->idxd_cdev.minor);
+ mutex_lock(&wq->wq_lock);
+ if (wq->idxd_cdev)
+ minor = wq->idxd_cdev->minor;
+ mutex_unlock(&wq->wq_lock);
+
+ if (minor == -1)
+ return -ENXIO;
+ return sysfs_emit(buf, "%d\n", minor);
}
static struct device_attribute dev_attr_wq_cdev_minor =
@@ -1259,8 +1262,14 @@ static ssize_t op_cap_show(struct device *dev,
{
struct idxd_device *idxd =
container_of(dev, struct idxd_device, conf_dev);
+ int i, rc = 0;
- return sprintf(buf, "%#llx\n", idxd->hw.opcap.bits[0]);
+ for (i = 0; i < 4; i++)
+ rc += sysfs_emit_at(buf, rc, "%#llx ", idxd->hw.opcap.bits[i]);
+
+ rc--;
+ rc += sysfs_emit_at(buf, rc, "\n");
+ return rc;
}
static DEVICE_ATTR_RO(op_cap);
diff --git a/drivers/dma/plx_dma.c b/drivers/dma/plx_dma.c
index f387c5b..1669345 100644
--- a/drivers/dma/plx_dma.c
+++ b/drivers/dma/plx_dma.c
@@ -507,10 +507,8 @@ static int plx_dma_create(struct pci_dev *pdev)
rc = request_irq(pci_irq_vector(pdev, 0), plx_dma_isr, 0,
KBUILD_MODNAME, plxdev);
- if (rc) {
- kfree(plxdev);
- return rc;
- }
+ if (rc)
+ goto free_plx;
spin_lock_init(&plxdev->ring_lock);
tasklet_setup(&plxdev->desc_task, plx_dma_desc_task);
@@ -540,14 +538,20 @@ static int plx_dma_create(struct pci_dev *pdev)
rc = dma_async_device_register(dma);
if (rc) {
pci_err(pdev, "Failed to register dma device: %d\n", rc);
- free_irq(pci_irq_vector(pdev, 0), plxdev);
- kfree(plxdev);
- return rc;
+ goto put_device;
}
pci_set_drvdata(pdev, plxdev);
return 0;
+
+put_device:
+ put_device(&pdev->dev);
+ free_irq(pci_irq_vector(pdev, 0), plxdev);
+free_plx:
+ kfree(plxdev);
+
+ return rc;
}
static int plx_dma_probe(struct pci_dev *pdev,
diff --git a/drivers/dma/qcom/hidma_mgmt.c b/drivers/dma/qcom/hidma_mgmt.c
index 806ca02..6202660 100644
--- a/drivers/dma/qcom/hidma_mgmt.c
+++ b/drivers/dma/qcom/hidma_mgmt.c
@@ -418,8 +418,23 @@ static int __init hidma_mgmt_init(void)
hidma_mgmt_of_populate_channels(child);
}
#endif
- return platform_driver_register(&hidma_mgmt_driver);
+ /*
+ * We do not check the return value here, on the assumption that
+ * platform_driver_register() does not fail. The reason is that the
+ * (potential) hidma_mgmt_of_populate_channels calls above are not
+ * cleaned up if it does fail, and doing that cleanup is quite
+ * complicated. In particular, the various calls to of_address_to_resource,
+ * of_irq_to_resource, platform_device_register_full, of_dma_configure
+ * and of_msi_configure, which in turn call other functions and so on,
+ * would all have to be unwound - this is not a trivial exercise.
+ *
+ * Currently, this module is not intended to be unloaded, and there is
+ * no module_exit function defined which does the needed cleanup. For
+ * this reason, we have to assume success here.
+ */
+ platform_driver_register(&hidma_mgmt_driver);
+ return 0;
}
module_init(hidma_mgmt_init);
MODULE_LICENSE("GPL v2");
diff --git a/drivers/dma/tegra20-apb-dma.c b/drivers/dma/tegra20-apb-dma.c
index 71827d9..b726074 100644
--- a/drivers/dma/tegra20-apb-dma.c
+++ b/drivers/dma/tegra20-apb-dma.c
@@ -723,7 +723,7 @@ static void tegra_dma_issue_pending(struct dma_chan *dc)
goto end;
}
if (!tdc->busy) {
- err = pm_runtime_get_sync(tdc->tdma->dev);
+ err = pm_runtime_resume_and_get(tdc->tdma->dev);
if (err < 0) {
dev_err(tdc2dev(tdc), "Failed to enable DMA\n");
goto end;
@@ -818,7 +818,7 @@ static void tegra_dma_synchronize(struct dma_chan *dc)
struct tegra_dma_channel *tdc = to_tegra_dma_chan(dc);
int err;
- err = pm_runtime_get_sync(tdc->tdma->dev);
+ err = pm_runtime_resume_and_get(tdc->tdma->dev);
if (err < 0) {
dev_err(tdc2dev(tdc), "Failed to synchronize DMA: %d\n", err);
return;
diff --git a/drivers/dma/xilinx/xilinx_dpdma.c b/drivers/dma/xilinx/xilinx_dpdma.c
index 55df63d..70b29bd 100644
--- a/drivers/dma/xilinx/xilinx_dpdma.c
+++ b/drivers/dma/xilinx/xilinx_dpdma.c
@@ -839,6 +839,7 @@ static void xilinx_dpdma_chan_queue_transfer(struct xilinx_dpdma_chan *chan)
struct xilinx_dpdma_tx_desc *desc;
struct virt_dma_desc *vdesc;
u32 reg, channels;
+ bool first_frame;
lockdep_assert_held(&chan->lock);
@@ -852,14 +853,6 @@ static void xilinx_dpdma_chan_queue_transfer(struct xilinx_dpdma_chan *chan)
chan->running = true;
}
- if (chan->video_group)
- channels = xilinx_dpdma_chan_video_group_ready(chan);
- else
- channels = BIT(chan->id);
-
- if (!channels)
- return;
-
vdesc = vchan_next_desc(&chan->vchan);
if (!vdesc)
return;
@@ -884,13 +877,26 @@ static void xilinx_dpdma_chan_queue_transfer(struct xilinx_dpdma_chan *chan)
FIELD_PREP(XILINX_DPDMA_CH_DESC_START_ADDRE_MASK,
upper_32_bits(sw_desc->dma_addr)));
- if (chan->first_frame)
+ first_frame = chan->first_frame;
+ chan->first_frame = false;
+
+ if (chan->video_group) {
+ channels = xilinx_dpdma_chan_video_group_ready(chan);
+ /*
+ * Trigger the transfer only when all channels in the group are
+ * ready.
+ */
+ if (!channels)
+ return;
+ } else {
+ channels = BIT(chan->id);
+ }
+
+ if (first_frame)
reg = XILINX_DPDMA_GBL_TRIG_MASK(channels);
else
reg = XILINX_DPDMA_GBL_RETRIG_MASK(channels);
- chan->first_frame = false;
-
dpdma_write(xdev->reg, XILINX_DPDMA_GBL, reg);
}
@@ -1042,13 +1048,14 @@ static int xilinx_dpdma_chan_stop(struct xilinx_dpdma_chan *chan)
*/
static void xilinx_dpdma_chan_done_irq(struct xilinx_dpdma_chan *chan)
{
- struct xilinx_dpdma_tx_desc *active = chan->desc.active;
+ struct xilinx_dpdma_tx_desc *active;
unsigned long flags;
spin_lock_irqsave(&chan->lock, flags);
xilinx_dpdma_debugfs_desc_done_irq(chan);
+ active = chan->desc.active;
if (active)
vchan_cyclic_callback(&active->vdesc);
else
diff --git a/drivers/extcon/extcon-arizona.c b/drivers/extcon/extcon-arizona.c
index aae82db..76aacba 100644
--- a/drivers/extcon/extcon-arizona.c
+++ b/drivers/extcon/extcon-arizona.c
@@ -601,7 +601,7 @@ static irqreturn_t arizona_hpdet_irq(int irq, void *data)
struct arizona *arizona = info->arizona;
int id_gpio = arizona->pdata.hpdet_id_gpio;
unsigned int report = EXTCON_JACK_HEADPHONE;
- int ret, reading;
+ int ret, reading, state;
bool mic = false;
mutex_lock(&info->lock);
@@ -614,12 +614,11 @@ static irqreturn_t arizona_hpdet_irq(int irq, void *data)
}
/* If the cable was removed while measuring ignore the result */
- ret = extcon_get_state(info->edev, EXTCON_MECHANICAL);
- if (ret < 0) {
- dev_err(arizona->dev, "Failed to check cable state: %d\n",
- ret);
+ state = extcon_get_state(info->edev, EXTCON_MECHANICAL);
+ if (state < 0) {
+ dev_err(arizona->dev, "Failed to check cable state: %d\n", state);
goto out;
- } else if (!ret) {
+ } else if (!state) {
dev_dbg(arizona->dev, "Ignoring HPDET for removed cable\n");
goto done;
}
@@ -667,7 +666,7 @@ static irqreturn_t arizona_hpdet_irq(int irq, void *data)
gpio_set_value_cansleep(id_gpio, 0);
/* If we have a mic then reenable MICDET */
- if (mic || info->mic)
+ if (state && (mic || info->mic))
arizona_start_mic(info);
if (info->hpdet_active) {
@@ -675,7 +674,9 @@ static irqreturn_t arizona_hpdet_irq(int irq, void *data)
info->hpdet_active = false;
}
- info->hpdet_done = true;
+ /* Do not set hp_det done when the cable has been unplugged */
+ if (state)
+ info->hpdet_done = true;
out:
mutex_unlock(&info->lock);
@@ -1759,25 +1760,6 @@ static int arizona_extcon_remove(struct platform_device *pdev)
bool change;
int ret;
- ret = regmap_update_bits_check(arizona->regmap, ARIZONA_MIC_DETECT_1,
- ARIZONA_MICD_ENA, 0,
- &change);
- if (ret < 0) {
- dev_err(&pdev->dev, "Failed to disable micd on remove: %d\n",
- ret);
- } else if (change) {
- regulator_disable(info->micvdd);
- pm_runtime_put(info->dev);
- }
-
- gpiod_put(info->micd_pol_gpio);
-
- pm_runtime_disable(&pdev->dev);
-
- regmap_update_bits(arizona->regmap,
- ARIZONA_MICD_CLAMP_CONTROL,
- ARIZONA_MICD_CLAMP_MODE_MASK, 0);
-
if (info->micd_clamp) {
jack_irq_rise = ARIZONA_IRQ_MICD_CLAMP_RISE;
jack_irq_fall = ARIZONA_IRQ_MICD_CLAMP_FALL;
@@ -1793,10 +1775,31 @@ static int arizona_extcon_remove(struct platform_device *pdev)
arizona_free_irq(arizona, jack_irq_rise, info);
arizona_free_irq(arizona, jack_irq_fall, info);
cancel_delayed_work_sync(&info->hpdet_work);
+ cancel_delayed_work_sync(&info->micd_detect_work);
+ cancel_delayed_work_sync(&info->micd_timeout_work);
+
+ ret = regmap_update_bits_check(arizona->regmap, ARIZONA_MIC_DETECT_1,
+ ARIZONA_MICD_ENA, 0,
+ &change);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "Failed to disable micd on remove: %d\n",
+ ret);
+ } else if (change) {
+ regulator_disable(info->micvdd);
+ pm_runtime_put(info->dev);
+ }
+
+ regmap_update_bits(arizona->regmap,
+ ARIZONA_MICD_CLAMP_CONTROL,
+ ARIZONA_MICD_CLAMP_MODE_MASK, 0);
regmap_update_bits(arizona->regmap, ARIZONA_JACK_DETECT_ANALOGUE,
ARIZONA_JD1_ENA, 0);
arizona_clk32k_disable(arizona);
+ gpiod_put(info->micd_pol_gpio);
+
+ pm_runtime_disable(&pdev->dev);
+
return 0;
}
diff --git a/drivers/firmware/Kconfig b/drivers/firmware/Kconfig
index 3315e3c..5fa6b3c 100644
--- a/drivers/firmware/Kconfig
+++ b/drivers/firmware/Kconfig
@@ -237,6 +237,7 @@
config QCOM_SCM
bool
depends on ARM || ARM64
+ depends on HAVE_ARM_SMCCC
select RESET_CONTROLLER
config QCOM_SCM_DOWNLOAD_MODE_DEFAULT
diff --git a/drivers/firmware/arm_scpi.c b/drivers/firmware/arm_scpi.c
index d0dee37..4ceba5e 100644
--- a/drivers/firmware/arm_scpi.c
+++ b/drivers/firmware/arm_scpi.c
@@ -552,8 +552,10 @@ static unsigned long scpi_clk_get_val(u16 clk_id)
ret = scpi_send_message(CMD_GET_CLOCK_VALUE, &le_clk_id,
sizeof(le_clk_id), &rate, sizeof(rate));
+ if (ret)
+ return 0;
- return ret ? ret : le32_to_cpu(rate);
+ return le32_to_cpu(rate);
}
static int scpi_clk_set_val(u16 clk_id, unsigned long rate)
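The hunk above stops propagating a negative errno from scpi_clk_get_val(), whose return type is unsigned long: the implicit conversion would turn the error code into an enormous bogus clock rate, so the patch returns 0 on error instead. A hypothetical illustration of that conversion (names made up for this sketch):

static unsigned long bogus_rate_example(void)
{
	int err = -5;		/* e.g. -EIO */

	return err;		/* converts to 0xfffffffffffffffb on 64-bit */
}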
diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index 8a94388..a2ae9c3 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -13,7 +13,8 @@
-Wno-pointer-sign \
$(call cc-disable-warning, address-of-packed-member) \
$(call cc-disable-warning, gnu) \
- -fno-asynchronous-unwind-tables
+ -fno-asynchronous-unwind-tables \
+ $(CLANG_FLAGS)
# arm64 uses the full KBUILD_CFLAGS so it's necessary to explicitly
# disable the stackleak plugin
diff --git a/drivers/firmware/qcom_scm-smc.c b/drivers/firmware/qcom_scm-smc.c
index 497c13b..d111833 100644
--- a/drivers/firmware/qcom_scm-smc.c
+++ b/drivers/firmware/qcom_scm-smc.c
@@ -77,8 +77,10 @@ static void __scm_smc_do(const struct arm_smccc_args *smc,
} while (res->a0 == QCOM_SCM_V2_EBUSY);
}
-int scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc,
- struct qcom_scm_res *res, bool atomic)
+
+int __scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc,
+ enum qcom_scm_convention qcom_convention,
+ struct qcom_scm_res *res, bool atomic)
{
int arglen = desc->arginfo & 0xf;
int i;
@@ -87,9 +89,8 @@ int scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc,
size_t alloc_len;
gfp_t flag = atomic ? GFP_ATOMIC : GFP_KERNEL;
u32 smccc_call_type = atomic ? ARM_SMCCC_FAST_CALL : ARM_SMCCC_STD_CALL;
- u32 qcom_smccc_convention =
- (qcom_scm_convention == SMC_CONVENTION_ARM_32) ?
- ARM_SMCCC_SMC_32 : ARM_SMCCC_SMC_64;
+ u32 qcom_smccc_convention = (qcom_convention == SMC_CONVENTION_ARM_32) ?
+ ARM_SMCCC_SMC_32 : ARM_SMCCC_SMC_64;
struct arm_smccc_res smc_res;
struct arm_smccc_args smc = {0};
@@ -148,4 +149,5 @@ int scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc,
}
return (long)smc_res.a0 ? qcom_scm_remap_error(smc_res.a0) : 0;
+
}
diff --git a/drivers/firmware/qcom_scm.c b/drivers/firmware/qcom_scm.c
index 7be48c1..c5b20bd 100644
--- a/drivers/firmware/qcom_scm.c
+++ b/drivers/firmware/qcom_scm.c
@@ -113,14 +113,10 @@ static void qcom_scm_clk_disable(void)
clk_disable_unprepare(__scm->bus_clk);
}
-static int __qcom_scm_is_call_available(struct device *dev, u32 svc_id,
- u32 cmd_id);
+enum qcom_scm_convention qcom_scm_convention = SMC_CONVENTION_UNKNOWN;
+static DEFINE_SPINLOCK(scm_query_lock);
-enum qcom_scm_convention qcom_scm_convention;
-static bool has_queried __read_mostly;
-static DEFINE_SPINLOCK(query_lock);
-
-static void __query_convention(void)
+static enum qcom_scm_convention __get_convention(void)
{
unsigned long flags;
struct qcom_scm_desc desc = {
@@ -133,36 +129,50 @@ static void __query_convention(void)
.owner = ARM_SMCCC_OWNER_SIP,
};
struct qcom_scm_res res;
+ enum qcom_scm_convention probed_convention;
int ret;
+ bool forced = false;
- spin_lock_irqsave(&query_lock, flags);
- if (has_queried)
- goto out;
+ if (likely(qcom_scm_convention != SMC_CONVENTION_UNKNOWN))
+ return qcom_scm_convention;
- qcom_scm_convention = SMC_CONVENTION_ARM_64;
- // Device isn't required as there is only one argument - no device
- // needed to dma_map_single to secure world
- ret = scm_smc_call(NULL, &desc, &res, true);
+ /*
+ * Device isn't required as there is only one argument - no device
+ * needed to dma_map_single to secure world
+ */
+ probed_convention = SMC_CONVENTION_ARM_64;
+ ret = __scm_smc_call(NULL, &desc, probed_convention, &res, true);
if (!ret && res.result[0] == 1)
- goto out;
+ goto found;
- qcom_scm_convention = SMC_CONVENTION_ARM_32;
- ret = scm_smc_call(NULL, &desc, &res, true);
+ /*
+ * Some SC7180 firmwares didn't implement the
+ * QCOM_SCM_INFO_IS_CALL_AVAIL call, so we fallback to forcing ARM_64
+ * calling conventions on these firmwares. Luckily we don't make any
+ * early calls into the firmware on these SoCs so the device pointer
+ * will be valid here to check if the compatible matches.
+ */
+ if (of_device_is_compatible(__scm ? __scm->dev->of_node : NULL, "qcom,scm-sc7180")) {
+ forced = true;
+ goto found;
+ }
+
+ probed_convention = SMC_CONVENTION_ARM_32;
+ ret = __scm_smc_call(NULL, &desc, probed_convention, &res, true);
if (!ret && res.result[0] == 1)
- goto out;
+ goto found;
- qcom_scm_convention = SMC_CONVENTION_LEGACY;
-out:
- has_queried = true;
- spin_unlock_irqrestore(&query_lock, flags);
- pr_info("qcom_scm: convention: %s\n",
- qcom_scm_convention_names[qcom_scm_convention]);
-}
+ probed_convention = SMC_CONVENTION_LEGACY;
+found:
+ spin_lock_irqsave(&scm_query_lock, flags);
+ if (probed_convention != qcom_scm_convention) {
+ qcom_scm_convention = probed_convention;
+ pr_info("qcom_scm: convention: %s%s\n",
+ qcom_scm_convention_names[qcom_scm_convention],
+ forced ? " (forced)" : "");
+ }
+ spin_unlock_irqrestore(&scm_query_lock, flags);
-static inline enum qcom_scm_convention __get_convention(void)
-{
- if (unlikely(!has_queried))
- __query_convention();
return qcom_scm_convention;
}
@@ -219,8 +229,8 @@ static int qcom_scm_call_atomic(struct device *dev,
}
}
-static int __qcom_scm_is_call_available(struct device *dev, u32 svc_id,
- u32 cmd_id)
+static bool __qcom_scm_is_call_available(struct device *dev, u32 svc_id,
+ u32 cmd_id)
{
int ret;
struct qcom_scm_desc desc = {
@@ -247,7 +257,7 @@ static int __qcom_scm_is_call_available(struct device *dev, u32 svc_id,
ret = qcom_scm_call(dev, &desc, &res);
- return ret ? : res.result[0];
+ return ret ? false : !!res.result[0];
}
/**
@@ -585,9 +595,8 @@ bool qcom_scm_pas_supported(u32 peripheral)
};
struct qcom_scm_res res;
- ret = __qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_PIL,
- QCOM_SCM_PIL_PAS_IS_SUPPORTED);
- if (ret <= 0)
+ if (!__qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_PIL,
+ QCOM_SCM_PIL_PAS_IS_SUPPORTED))
return false;
ret = qcom_scm_call(__scm->dev, &desc, &res);
@@ -1054,17 +1063,18 @@ EXPORT_SYMBOL(qcom_scm_ice_set_key);
*/
bool qcom_scm_hdcp_available(void)
{
+ bool avail;
int ret = qcom_scm_clk_enable();
if (ret)
return ret;
- ret = __qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_HDCP,
+ avail = __qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_HDCP,
QCOM_SCM_HDCP_INVOKE);
qcom_scm_clk_disable();
- return ret > 0;
+ return avail;
}
EXPORT_SYMBOL(qcom_scm_hdcp_available);
@@ -1236,7 +1246,7 @@ static int qcom_scm_probe(struct platform_device *pdev)
__scm = scm;
__scm->dev = &pdev->dev;
- __query_convention();
+ __get_convention();
/*
* If requested enable "download mode", from this point on warmboot
diff --git a/drivers/firmware/qcom_scm.h b/drivers/firmware/qcom_scm.h
index 95cd1ac3..632fe31 100644
--- a/drivers/firmware/qcom_scm.h
+++ b/drivers/firmware/qcom_scm.h
@@ -61,8 +61,11 @@ struct qcom_scm_res {
};
#define SCM_SMC_FNID(s, c) ((((s) & 0xFF) << 8) | ((c) & 0xFF))
-extern int scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc,
- struct qcom_scm_res *res, bool atomic);
+extern int __scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc,
+ enum qcom_scm_convention qcom_convention,
+ struct qcom_scm_res *res, bool atomic);
+#define scm_smc_call(dev, desc, res, atomic) \
+ __scm_smc_call((dev), (desc), qcom_scm_convention, (res), (atomic))
#define SCM_LEGACY_FNID(s, c) (((s) << 10) | ((c) & 0x3ff))
extern int scm_legacy_call_atomic(struct device *dev,
diff --git a/drivers/firmware/xilinx/zynqmp.c b/drivers/firmware/xilinx/zynqmp.c
index fd95ede..9e65045 100644
--- a/drivers/firmware/xilinx/zynqmp.c
+++ b/drivers/firmware/xilinx/zynqmp.c
@@ -2,7 +2,7 @@
/*
* Xilinx Zynq MPSoC Firmware layer
*
- * Copyright (C) 2014-2020 Xilinx, Inc.
+ * Copyright (C) 2014-2021 Xilinx, Inc.
*
* Michal Simek <michal.simek@xilinx.com>
* Davorin Mista <davorin.mista@aggios.com>
@@ -1280,12 +1280,13 @@ static int zynqmp_firmware_probe(struct platform_device *pdev)
static int zynqmp_firmware_remove(struct platform_device *pdev)
{
struct pm_api_feature_data *feature_data;
+ struct hlist_node *tmp;
int i;
mfd_remove_devices(&pdev->dev);
zynqmp_pm_api_debugfs_exit();
- hash_for_each(pm_api_features_map, i, feature_data, hentry) {
+ hash_for_each_safe(pm_api_features_map, i, tmp, feature_data, hentry) {
hash_del(&feature_data->hentry);
kfree(feature_data);
}
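Switching to hash_for_each_safe() matters here because the loop body frees the current entry; the _safe iterators stash the next node in the extra cursor before the body runs, so the traversal does not dereference freed memory. The same pattern, sketched with the plain list API purely for illustration (not part of the patch):

#include <linux/list.h>
#include <linux/slab.h>

struct example_item {
	struct list_head node;
};

static void example_free_all(struct list_head *head)
{
	struct example_item *it, *tmp;

	/* 'tmp' already points at the next node when 'it' is freed */
	list_for_each_entry_safe(it, tmp, head, node) {
		list_del(&it->node);
		kfree(it);
	}
}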
diff --git a/drivers/fpga/dfl-pci.c b/drivers/fpga/dfl-pci.c
index a2203d0..bc108ee 100644
--- a/drivers/fpga/dfl-pci.c
+++ b/drivers/fpga/dfl-pci.c
@@ -61,14 +61,16 @@ static void cci_pci_free_irq(struct pci_dev *pcidev)
}
/* PCI Device ID */
-#define PCIE_DEVICE_ID_PF_INT_5_X 0xBCBD
-#define PCIE_DEVICE_ID_PF_INT_6_X 0xBCC0
-#define PCIE_DEVICE_ID_PF_DSC_1_X 0x09C4
-#define PCIE_DEVICE_ID_INTEL_PAC_N3000 0x0B30
+#define PCIE_DEVICE_ID_PF_INT_5_X 0xBCBD
+#define PCIE_DEVICE_ID_PF_INT_6_X 0xBCC0
+#define PCIE_DEVICE_ID_PF_DSC_1_X 0x09C4
+#define PCIE_DEVICE_ID_INTEL_PAC_N3000 0x0B30
+#define PCIE_DEVICE_ID_INTEL_PAC_D5005 0x0B2B
/* VF Device */
-#define PCIE_DEVICE_ID_VF_INT_5_X 0xBCBF
-#define PCIE_DEVICE_ID_VF_INT_6_X 0xBCC1
-#define PCIE_DEVICE_ID_VF_DSC_1_X 0x09C5
+#define PCIE_DEVICE_ID_VF_INT_5_X 0xBCBF
+#define PCIE_DEVICE_ID_VF_INT_6_X 0xBCC1
+#define PCIE_DEVICE_ID_VF_DSC_1_X 0x09C5
+#define PCIE_DEVICE_ID_INTEL_PAC_D5005_VF 0x0B2C
static struct pci_device_id cci_pcie_id_tbl[] = {
{PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_PF_INT_5_X),},
@@ -78,6 +80,8 @@ static struct pci_device_id cci_pcie_id_tbl[] = {
{PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_PF_DSC_1_X),},
{PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_VF_DSC_1_X),},
{PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_INTEL_PAC_N3000),},
+ {PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_INTEL_PAC_D5005),},
+ {PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_INTEL_PAC_D5005_VF),},
{0,}
};
MODULE_DEVICE_TABLE(pci, cci_pcie_id_tbl);
diff --git a/drivers/fpga/xilinx-spi.c b/drivers/fpga/xilinx-spi.c
index 824abbb..d3e6f41 100644
--- a/drivers/fpga/xilinx-spi.c
+++ b/drivers/fpga/xilinx-spi.c
@@ -233,25 +233,19 @@ static int xilinx_spi_probe(struct spi_device *spi)
/* PROGRAM_B is active low */
conf->prog_b = devm_gpiod_get(&spi->dev, "prog_b", GPIOD_OUT_LOW);
- if (IS_ERR(conf->prog_b)) {
- dev_err(&spi->dev, "Failed to get PROGRAM_B gpio: %ld\n",
- PTR_ERR(conf->prog_b));
- return PTR_ERR(conf->prog_b);
- }
+ if (IS_ERR(conf->prog_b))
+ return dev_err_probe(&spi->dev, PTR_ERR(conf->prog_b),
+ "Failed to get PROGRAM_B gpio\n");
conf->init_b = devm_gpiod_get_optional(&spi->dev, "init-b", GPIOD_IN);
- if (IS_ERR(conf->init_b)) {
- dev_err(&spi->dev, "Failed to get INIT_B gpio: %ld\n",
- PTR_ERR(conf->init_b));
- return PTR_ERR(conf->init_b);
- }
+ if (IS_ERR(conf->init_b))
+ return dev_err_probe(&spi->dev, PTR_ERR(conf->init_b),
+ "Failed to get INIT_B gpio\n");
conf->done = devm_gpiod_get(&spi->dev, "done", GPIOD_IN);
- if (IS_ERR(conf->done)) {
- dev_err(&spi->dev, "Failed to get DONE gpio: %ld\n",
- PTR_ERR(conf->done));
- return PTR_ERR(conf->done);
- }
+ if (IS_ERR(conf->done))
+ return dev_err_probe(&spi->dev, PTR_ERR(conf->done),
+ "Failed to get DONE gpio\n");
mgr = devm_fpga_mgr_create(&spi->dev,
"Xilinx Slave Serial FPGA Manager",
diff --git a/drivers/gpio/gpio-cadence.c b/drivers/gpio/gpio-cadence.c
index a4d3239..4ab3fcd 100644
--- a/drivers/gpio/gpio-cadence.c
+++ b/drivers/gpio/gpio-cadence.c
@@ -278,6 +278,7 @@ static const struct of_device_id cdns_of_ids[] = {
{ .compatible = "cdns,gpio-r1p02" },
{ /* sentinel */ },
};
+MODULE_DEVICE_TABLE(of, cdns_of_ids);
static struct platform_driver cdns_gpio_driver = {
.driver = {
diff --git a/drivers/gpio/gpio-omap.c b/drivers/gpio/gpio-omap.c
index f7ceb2b..a7e8ed5 100644
--- a/drivers/gpio/gpio-omap.c
+++ b/drivers/gpio/gpio-omap.c
@@ -29,6 +29,7 @@
#define OMAP4_GPIO_DEBOUNCINGTIME_MASK 0xFF
struct gpio_regs {
+ u32 sysconfig;
u32 irqenable1;
u32 irqenable2;
u32 wake_en;
@@ -1072,6 +1073,7 @@ static void omap_gpio_init_context(struct gpio_bank *p)
const struct omap_gpio_reg_offs *regs = p->regs;
void __iomem *base = p->base;
+ p->context.sysconfig = readl_relaxed(base + regs->sysconfig);
p->context.ctrl = readl_relaxed(base + regs->ctrl);
p->context.oe = readl_relaxed(base + regs->direction);
p->context.wake_en = readl_relaxed(base + regs->wkup_en);
@@ -1091,6 +1093,7 @@ static void omap_gpio_restore_context(struct gpio_bank *bank)
const struct omap_gpio_reg_offs *regs = bank->regs;
void __iomem *base = bank->base;
+ writel_relaxed(bank->context.sysconfig, base + regs->sysconfig);
writel_relaxed(bank->context.wake_en, base + regs->wkup_en);
writel_relaxed(bank->context.ctrl, base + regs->ctrl);
writel_relaxed(bank->context.leveldetect0, base + regs->leveldetect0);
@@ -1118,6 +1121,10 @@ static void omap_gpio_idle(struct gpio_bank *bank, bool may_lose_context)
bank->saved_datain = readl_relaxed(base + bank->regs->datain);
+	/* Save sysconfig, its runtime value can be different from init value */
+ if (bank->loses_context)
+ bank->context.sysconfig = readl_relaxed(base + bank->regs->sysconfig);
+
if (!bank->enabled_non_wakeup_gpios)
goto update_gpio_context_count;
@@ -1282,6 +1289,7 @@ static int gpio_omap_cpu_notifier(struct notifier_block *nb,
static const struct omap_gpio_reg_offs omap2_gpio_regs = {
.revision = OMAP24XX_GPIO_REVISION,
+ .sysconfig = OMAP24XX_GPIO_SYSCONFIG,
.direction = OMAP24XX_GPIO_OE,
.datain = OMAP24XX_GPIO_DATAIN,
.dataout = OMAP24XX_GPIO_DATAOUT,
@@ -1305,6 +1313,7 @@ static const struct omap_gpio_reg_offs omap2_gpio_regs = {
static const struct omap_gpio_reg_offs omap4_gpio_regs = {
.revision = OMAP4_GPIO_REVISION,
+ .sysconfig = OMAP4_GPIO_SYSCONFIG,
.direction = OMAP4_GPIO_OE,
.datain = OMAP4_GPIO_DATAIN,
.dataout = OMAP4_GPIO_DATAOUT,
diff --git a/drivers/gpio/gpiolib-acpi.c b/drivers/gpio/gpiolib-acpi.c
index 863f059..6f11714 100644
--- a/drivers/gpio/gpiolib-acpi.c
+++ b/drivers/gpio/gpiolib-acpi.c
@@ -1409,6 +1409,20 @@ static const struct dmi_system_id gpiolib_acpi_quirks[] __initconst = {
},
{
/*
+ * The Dell Venue 10 Pro 5055, with Bay Trail SoC + TI PMIC uses an
+ * external embedded-controller connected via I2C + an ACPI GPIO
+ * event handler on INT33FFC:02 pin 12, causing spurious wakeups.
+ */
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Venue 10 Pro 5055"),
+ },
+ .driver_data = &(struct acpi_gpiolib_dmi_quirk) {
+ .ignore_wake = "INT33FC:02@12",
+ },
+ },
+ {
+ /*
* HP X2 10 models with Cherry Trail SoC + TI PMIC use an
* external embedded-controller connected via I2C + an ACPI GPIO
* event handler on INT33FF:01 pin 0, causing spurious wakeups.
diff --git a/drivers/gpio/gpiolib-sysfs.c b/drivers/gpio/gpiolib-sysfs.c
index 728f6c6..fa5d945 100644
--- a/drivers/gpio/gpiolib-sysfs.c
+++ b/drivers/gpio/gpiolib-sysfs.c
@@ -458,6 +458,8 @@ static ssize_t export_store(struct class *class,
long gpio;
struct gpio_desc *desc;
int status;
+ struct gpio_chip *gc;
+ int offset;
status = kstrtol(buf, 0, &gpio);
if (status < 0)
@@ -469,6 +471,12 @@ static ssize_t export_store(struct class *class,
pr_warn("%s: invalid GPIO %ld\n", __func__, gpio);
return -EINVAL;
}
+ gc = desc->gdev->chip;
+ offset = gpio_chip_hwgpio(desc);
+ if (!gpiochip_line_is_valid(gc, offset)) {
+ pr_warn("%s: GPIO %ld masked\n", __func__, gpio);
+ return -EINVAL;
+ }
/* No extra locking here; FLAG_SYSFS just signifies that the
* request and export were done on behalf of userspace, so
diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
index 0a2c4ad..af5bb8f 100644
--- a/drivers/gpio/gpiolib.c
+++ b/drivers/gpio/gpiolib.c
@@ -364,22 +364,18 @@ static int gpiochip_set_desc_names(struct gpio_chip *gc)
*
* Looks for device property "gpio-line-names" and if it exists assigns
* GPIO line names for the chip. The memory allocated for the assigned
- * names belong to the underlying software node and should not be released
+ * names belong to the underlying firmware node and should not be released
* by the caller.
*/
static int devprop_gpiochip_set_names(struct gpio_chip *chip)
{
struct gpio_device *gdev = chip->gpiodev;
- struct device *dev = chip->parent;
+ struct fwnode_handle *fwnode = dev_fwnode(&gdev->dev);
const char **names;
int ret, i;
int count;
- /* GPIO chip may not have a parent device whose properties we inspect. */
- if (!dev)
- return 0;
-
- count = device_property_string_array_count(dev, "gpio-line-names");
+ count = fwnode_property_string_array_count(fwnode, "gpio-line-names");
if (count < 0)
return 0;
@@ -393,7 +389,7 @@ static int devprop_gpiochip_set_names(struct gpio_chip *chip)
if (!names)
return -ENOMEM;
- ret = device_property_read_string_array(dev, "gpio-line-names",
+ ret = fwnode_property_read_string_array(fwnode, "gpio-line-names",
names, count);
if (ret < 0) {
dev_warn(&gdev->dev, "failed to read GPIO line names\n");
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10_3.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10_3.c
index 50016bf..8346cae 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10_3.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10_3.c
@@ -157,16 +157,16 @@ static uint32_t get_sdma_rlc_reg_offset(struct amdgpu_device *adev,
mmSDMA0_RLC0_RB_CNTL) - mmSDMA0_RLC0_RB_CNTL;
break;
case 1:
- sdma_engine_reg_base = SOC15_REG_OFFSET(SDMA1, 0,
+ sdma_engine_reg_base = SOC15_REG_OFFSET(SDMA0, 0,
mmSDMA1_RLC0_RB_CNTL) - mmSDMA0_RLC0_RB_CNTL;
break;
case 2:
- sdma_engine_reg_base = SOC15_REG_OFFSET(SDMA2, 0,
- mmSDMA2_RLC0_RB_CNTL) - mmSDMA2_RLC0_RB_CNTL;
+ sdma_engine_reg_base = SOC15_REG_OFFSET(SDMA0, 0,
+ mmSDMA2_RLC0_RB_CNTL) - mmSDMA0_RLC0_RB_CNTL;
break;
case 3:
- sdma_engine_reg_base = SOC15_REG_OFFSET(SDMA3, 0,
- mmSDMA3_RLC0_RB_CNTL) - mmSDMA2_RLC0_RB_CNTL;
+ sdma_engine_reg_base = SOC15_REG_OFFSET(SDMA0, 0,
+ mmSDMA3_RLC0_RB_CNTL) - mmSDMA0_RLC0_RB_CNTL;
break;
}
@@ -451,7 +451,7 @@ static int hqd_sdma_dump_v10_3(struct kgd_dev *kgd,
engine_id, queue_id);
uint32_t i = 0, reg;
#undef HQD_N_REGS
-#define HQD_N_REGS (19+6+7+10)
+#define HQD_N_REGS (19+6+7+12)
*dump = kmalloc(HQD_N_REGS*2*sizeof(uint32_t), GFP_KERNEL);
if (*dump == NULL)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 76d10f1..87c7c45 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -3551,6 +3551,7 @@ void amdgpu_device_fini(struct amdgpu_device *adev)
{
dev_info(adev->dev, "amdgpu: finishing device.\n");
flush_delayed_work(&adev->delayed_init_work);
+ ttm_bo_lock_delayed_workqueue(&adev->mman.bdev);
adev->shutdown = true;
kfree(adev->pci_state);
@@ -4367,7 +4368,6 @@ static int amdgpu_do_asic_reset(struct amdgpu_hive_info *hive,
r = amdgpu_ib_ring_tests(tmp_adev);
if (r) {
dev_err(tmp_adev->dev, "ib ring test failed (%d).\n", r);
- r = amdgpu_device_ip_suspend(tmp_adev);
need_full_reset = true;
r = -EAGAIN;
goto end;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
index 1ea8af4..43f29ee 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
@@ -289,10 +289,13 @@ static int amdgpufb_create(struct drm_fb_helper *helper,
static int amdgpu_fbdev_destroy(struct drm_device *dev, struct amdgpu_fbdev *rfbdev)
{
struct amdgpu_framebuffer *rfb = &rfbdev->rfb;
+ int i;
drm_fb_helper_unregister_fbi(&rfbdev->helper);
if (rfb->base.obj[0]) {
+ for (i = 0; i < rfb->base.format->num_planes; i++)
+ drm_gem_object_put(rfb->base.obj[0]);
amdgpufb_destroy_pinned_object(rfb->base.obj[0]);
rfb->base.obj[0] = NULL;
drm_framebuffer_unregister_private(&rfb->base);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index fe2d495..d07c458 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -532,6 +532,8 @@ void amdgpu_fence_driver_fini(struct amdgpu_device *adev)
if (!ring || !ring->fence_drv.initialized)
continue;
+ if (!ring->no_scheduler)
+ drm_sched_fini(&ring->sched);
r = amdgpu_fence_wait_empty(ring);
if (r) {
/* no need to trigger GPU reset as we are unloading */
@@ -540,8 +542,7 @@ void amdgpu_fence_driver_fini(struct amdgpu_device *adev)
if (ring->fence_drv.irq_src)
amdgpu_irq_put(adev, ring->fence_drv.irq_src,
ring->fence_drv.irq_type);
- if (!ring->no_scheduler)
- drm_sched_fini(&ring->sched);
+
del_timer_sync(&ring->fence_drv.fallback_timer);
for (j = 0; j <= ring->fence_drv.num_fences_mask; ++j)
dma_fence_put(ring->fence_drv.fences[j]);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
index 2f53fa0..28f20f0 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
@@ -75,6 +75,8 @@ int amdgpu_ib_get(struct amdgpu_device *adev, struct amdgpu_vm *vm,
}
ib->ptr = amdgpu_sa_bo_cpu_addr(ib->sa_bo);
+ /* flush the cache before commit the IB */
+ ib->flags = AMDGPU_IB_FLAG_EMIT_MEM_SYNC;
if (!vm)
ib->gpu_addr = amdgpu_sa_bo_gpu_addr(ib->sa_bo);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
index 6e9a9e5d..90e16d1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
@@ -215,7 +215,11 @@ static int amdgpu_vmid_grab_idle(struct amdgpu_vm *vm,
/* Check if we have an idle VMID */
i = 0;
list_for_each_entry((*idle), &id_mgr->ids_lru, list) {
- fences[i] = amdgpu_sync_peek_fence(&(*idle)->active, ring);
+ /* Don't use per engine and per process VMID at the same time */
+ struct amdgpu_ring *r = adev->vm_manager.concurrent_flush ?
+ NULL : ring;
+
+ fences[i] = amdgpu_sync_peek_fence(&(*idle)->active, r);
if (!fences[i])
break;
++i;
@@ -280,7 +284,7 @@ static int amdgpu_vmid_grab_reserved(struct amdgpu_vm *vm,
if (updates && (*id)->flushed_updates &&
updates->context == (*id)->flushed_updates->context &&
!dma_fence_is_later(updates, (*id)->flushed_updates))
- updates = NULL;
+ updates = NULL;
if ((*id)->owner != vm->immediate.fence_context ||
job->vm_pd_addr != (*id)->pd_gpu_addr ||
@@ -289,6 +293,10 @@ static int amdgpu_vmid_grab_reserved(struct amdgpu_vm *vm,
!dma_fence_is_signaled((*id)->last_flush))) {
struct dma_fence *tmp;
+ /* Don't use per engine and per process VMID at the same time */
+ if (adev->vm_manager.concurrent_flush)
+ ring = NULL;
+
/* to prevent one context starved by another context */
(*id)->pd_gpu_addr = 0;
tmp = amdgpu_sync_peek_fence(&(*id)->active, ring);
@@ -364,12 +372,7 @@ static int amdgpu_vmid_grab_used(struct amdgpu_vm *vm,
if (updates && (!flushed || dma_fence_is_later(updates, flushed)))
needs_flush = true;
- /* Concurrent flushes are only possible starting with Vega10 and
- * are broken on Navi10 and Navi14.
- */
- if (needs_flush && (adev->asic_type < CHIP_VEGA10 ||
- adev->asic_type == CHIP_NAVI10 ||
- adev->asic_type == CHIP_NAVI14))
+ if (needs_flush && !adev->vm_manager.concurrent_flush)
continue;
/* Good, we can use this VMID. Remember this submission as
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
index 300ac73..2f70fdd 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
@@ -499,7 +499,7 @@ void amdgpu_irq_gpu_reset_resume_helper(struct amdgpu_device *adev)
for (j = 0; j < AMDGPU_MAX_IRQ_SRC_ID; ++j) {
struct amdgpu_irq_src *src = adev->irq.client[i].sources[j];
- if (!src)
+ if (!src || !src->funcs || !src->funcs->set)
continue;
for (k = 0; k < src->num_types; k++)
amdgpu_irq_update(adev, src, k);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index a0248d7..5207ad6 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -267,7 +267,7 @@ static int amdgpu_ttm_map_buffer(struct ttm_buffer_object *bo,
*addr += offset & ~PAGE_MASK;
num_dw = ALIGN(adev->mman.buffer_funcs->copy_num_dw, 8);
- num_bytes = num_pages * 8;
+ num_bytes = num_pages * 8 * AMDGPU_GPU_PAGES_IN_CPU_PAGE;
r = amdgpu_job_alloc_with_ib(adev, num_dw * 4 + num_bytes,
AMDGPU_IB_POOL_DELAYED, &job);
@@ -1034,7 +1034,7 @@ static void amdgpu_ttm_tt_unpin_userptr(struct ttm_bo_device *bdev,
DMA_BIDIRECTIONAL : DMA_TO_DEVICE;
/* double check that we don't free the table twice */
- if (!ttm->sg->sgl)
+ if (!ttm->sg || !ttm->sg->sgl)
return;
/* unmap the pages mapped to the device */
@@ -1254,13 +1254,13 @@ static void amdgpu_ttm_backend_unbind(struct ttm_bo_device *bdev,
struct amdgpu_ttm_tt *gtt = (void *)ttm;
int r;
- if (!gtt->bound)
- return;
-
/* if the pages have userptr pinning then clear that first */
if (gtt->userptr)
amdgpu_ttm_tt_unpin_userptr(bdev, ttm);
+ if (!gtt->bound)
+ return;
+
if (gtt->offset == AMDGPU_BO_INVALID_OFFSET)
return;
@@ -1381,6 +1381,7 @@ static void amdgpu_ttm_tt_unpopulate(struct ttm_bo_device *bdev, struct ttm_tt *
if (gtt && gtt->userptr) {
amdgpu_ttm_tt_set_user_pages(ttm, NULL);
kfree(ttm->sg);
+ ttm->sg = NULL;
ttm->page_flags &= ~TTM_PAGE_FLAG_SG;
return;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
index f8bebf1..665ead1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
@@ -259,7 +259,7 @@ int amdgpu_uvd_sw_init(struct amdgpu_device *adev)
if ((adev->asic_type == CHIP_POLARIS10 ||
adev->asic_type == CHIP_POLARIS11) &&
(adev->uvd.fw_version < FW_1_66_16))
- DRM_ERROR("POLARIS10/11 UVD firmware version %hu.%hu is too old.\n",
+ DRM_ERROR("POLARIS10/11 UVD firmware version %u.%u is too old.\n",
version_major, version_minor);
} else {
unsigned int enc_major, enc_minor, dec_minor;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 605d154..b47829f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -3173,6 +3173,12 @@ void amdgpu_vm_manager_init(struct amdgpu_device *adev)
{
unsigned i;
+ /* Concurrent flushes are only possible starting with Vega10 and
+ * are broken on Navi10 and Navi14.
+ */
+ adev->vm_manager.concurrent_flush = !(adev->asic_type < CHIP_VEGA10 ||
+ adev->asic_type == CHIP_NAVI10 ||
+ adev->asic_type == CHIP_NAVI14);
amdgpu_vmid_mgr_init(adev);
adev->vm_manager.fence_context =
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
index 58c83a7..c421880 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
@@ -325,6 +325,7 @@ struct amdgpu_vm_manager {
/* Handling of VMIDs */
struct amdgpu_vmid_mgr id_mgr[AMDGPU_MAX_VMHUBS];
unsigned int first_kfd_vmid;
+ bool concurrent_flush;
/* Handling of VM fences */
u64 fence_context;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
index 1162913..0526dec 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
@@ -465,15 +465,22 @@ int amdgpu_xgmi_update_topology(struct amdgpu_hive_info *hive, struct amdgpu_dev
}
+/*
+ * NOTE psp_xgmi_node_info.num_hops layout is as follows:
+ * num_hops[7:6] = link type (0 = xGMI2, 1 = xGMI3, 2/3 = reserved)
+ * num_hops[5:3] = reserved
+ * num_hops[2:0] = number of hops
+ */
int amdgpu_xgmi_get_hops_count(struct amdgpu_device *adev,
struct amdgpu_device *peer_adev)
{
struct psp_xgmi_topology_info *top = &adev->psp.xgmi_context.top_info;
+ uint8_t num_hops_mask = 0x7;
int i;
for (i = 0 ; i < top->num_nodes; ++i)
if (top->nodes[i].node_id == peer_adev->gmc.xgmi.node_id)
- return top->nodes[i].num_hops;
+ return top->nodes[i].num_hops & num_hops_mask;
return -EINVAL;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
index 4ebb43e..fc8da5f 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
@@ -1334,9 +1334,10 @@ static const struct soc15_reg_golden golden_settings_gc_10_1_2[] =
SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG, 0xffffffff, 0x20000000),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG2, 0xffffffff, 0x00000420),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG3, 0xffffffff, 0x00000200),
- SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG4, 0xffffffff, 0x04800000),
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG4, 0xffffffff, 0x04900000),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DFSM_TILES_IN_FLIGHT, 0x0000ffff, 0x0000003f),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_LAST_OF_BURST_CONFIG, 0xffffffff, 0x03860204),
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmGB_ADDR_CONFIG, 0x0c1800ff, 0x00000044),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmGCR_GENERAL_CNTL, 0x1ff0ffff, 0x00000500),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmGE_PRIV_CONTROL, 0x00007fff, 0x000001fe),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL1_PIPE_STEER, 0xffffffff, 0xe4e4e4e4),
@@ -1354,12 +1355,13 @@ static const struct soc15_reg_golden golden_settings_gc_10_1_2[] =
SOC15_REG_GOLDEN_VALUE(GC, 0, mmPA_SC_ENHANCE_2, 0x00000820, 0x00000820),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmPA_SC_LINE_STIPPLE_STATE, 0x0000ff0f, 0x00000000),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmRMI_SPARE, 0xffffffff, 0xffff3101),
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmSPI_CONFIG_CNTL_1, 0x001f0000, 0x00070104),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmSQ_ALU_CLK_CTRL, 0xffffffff, 0xffffffff),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmSQ_ARB_CONFIG, 0x00000133, 0x00000130),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmSQ_LDS_CLK_CTRL, 0xffffffff, 0xffffffff),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmTA_CNTL_AUX, 0xfff7ffff, 0x01030000),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmTCP_CNTL, 0xffdf80ff, 0x479c0010),
- SOC15_REG_GOLDEN_VALUE(GC, 0, mmUTCL1_CTRL, 0xffffffff, 0x00800000)
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmUTCL1_CTRL, 0xffffffff, 0x00c00000)
};
static void gfx_v10_rlcg_wreg(struct amdgpu_device *adev, u32 offset, u32 v)
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index 957c12b..fb15e8b 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -4859,7 +4859,7 @@ static void gfx_v9_0_update_3d_clock_gating(struct amdgpu_device *adev,
amdgpu_gfx_rlc_enter_safe_mode(adev);
/* Enable 3D CGCG/CGLS */
- if (enable && (adev->cg_flags & AMD_CG_SUPPORT_GFX_3D_CGCG)) {
+ if (enable) {
/* write cmd to clear cgcg/cgls ov */
def = data = RREG32_SOC15(GC, 0, mmRLC_CGTT_MGCG_OVERRIDE);
/* unset CGCG override */
@@ -4871,8 +4871,12 @@ static void gfx_v9_0_update_3d_clock_gating(struct amdgpu_device *adev,
/* enable 3Dcgcg FSM(0x0000363f) */
def = RREG32_SOC15(GC, 0, mmRLC_CGCG_CGLS_CTRL_3D);
- data = (0x36 << RLC_CGCG_CGLS_CTRL_3D__CGCG_GFX_IDLE_THRESHOLD__SHIFT) |
- RLC_CGCG_CGLS_CTRL_3D__CGCG_EN_MASK;
+ if (adev->cg_flags & AMD_CG_SUPPORT_GFX_3D_CGCG)
+ data = (0x36 << RLC_CGCG_CGLS_CTRL_3D__CGCG_GFX_IDLE_THRESHOLD__SHIFT) |
+ RLC_CGCG_CGLS_CTRL_3D__CGCG_EN_MASK;
+ else
+ data = 0x0 << RLC_CGCG_CGLS_CTRL_3D__CGCG_GFX_IDLE_THRESHOLD__SHIFT;
+
if (adev->cg_flags & AMD_CG_SUPPORT_GFX_3D_CGLS)
data |= (0x000F << RLC_CGCG_CGLS_CTRL_3D__CGLS_REP_COMPANSAT_DELAY__SHIFT) |
RLC_CGCG_CGLS_CTRL_3D__CGLS_EN_MASK;
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
index 94caf52..ae8c0f8 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
@@ -172,6 +172,8 @@ static int jpeg_v2_0_hw_fini(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ cancel_delayed_work_sync(&adev->vcn.idle_work);
+
if (adev->jpeg.cur_state != AMD_PG_STATE_GATE &&
RREG32_SOC15(JPEG, 0, mmUVD_JRBC_STATUS))
jpeg_v2_0_set_powergating_state(adev, AMD_PG_STATE_GATE);
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
index 845306f..63b3501 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
@@ -198,8 +198,6 @@ static int jpeg_v2_5_hw_fini(void *handle)
if (adev->jpeg.cur_state != AMD_PG_STATE_GATE &&
RREG32_SOC15(JPEG, i, mmUVD_JRBC_STATUS))
jpeg_v2_5_set_powergating_state(adev, AMD_PG_STATE_GATE);
-
- ring->sched.ready = false;
}
return 0;
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
index 3a0dff5..9259e35 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
@@ -166,8 +166,6 @@ static int jpeg_v3_0_hw_fini(void *handle)
RREG32_SOC15(JPEG, 0, mmUVD_JRBC_STATUS))
jpeg_v3_0_set_powergating_state(adev, AMD_PG_STATE_GATE);
- ring->sched.ready = false;
-
return 0;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
index 9c72b95..be23d63 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
@@ -124,6 +124,10 @@ static const struct soc15_reg_golden golden_settings_sdma_nv14[] = {
static const struct soc15_reg_golden golden_settings_sdma_nv12[] = {
SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA0_RLC3_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA0_GB_ADDR_CONFIG, 0x001877ff, 0x00000044),
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA0_GB_ADDR_CONFIG_READ, 0x001877ff, 0x00000044),
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA1_GB_ADDR_CONFIG, 0x001877ff, 0x00000044),
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA1_GB_ADDR_CONFIG_READ, 0x001877ff, 0x00000044),
SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA1_RLC3_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
};
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
index 2a48505..1bd330d 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
@@ -476,11 +476,6 @@ static void sdma_v5_2_gfx_stop(struct amdgpu_device *adev)
ib_cntl = REG_SET_FIELD(ib_cntl, SDMA0_GFX_IB_CNTL, IB_ENABLE, 0);
WREG32(sdma_v5_2_get_reg_offset(adev, i, mmSDMA0_GFX_IB_CNTL), ib_cntl);
}
-
- sdma0->sched.ready = false;
- sdma1->sched.ready = false;
- sdma2->sched.ready = false;
- sdma3->sched.ready = false;
}
/**
diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
index 7efc618..37226cb 100644
--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
+++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
@@ -1183,7 +1183,6 @@ static int soc15_common_early_init(void *handle)
adev->cg_flags = AMD_CG_SUPPORT_GFX_MGCG |
AMD_CG_SUPPORT_GFX_MGLS |
AMD_CG_SUPPORT_GFX_CP_LS |
- AMD_CG_SUPPORT_GFX_3D_CGCG |
AMD_CG_SUPPORT_GFX_3D_CGLS |
AMD_CG_SUPPORT_GFX_CGCG |
AMD_CG_SUPPORT_GFX_CGLS |
@@ -1203,7 +1202,6 @@ static int soc15_common_early_init(void *handle)
AMD_CG_SUPPORT_GFX_MGLS |
AMD_CG_SUPPORT_GFX_RLC_LS |
AMD_CG_SUPPORT_GFX_CP_LS |
- AMD_CG_SUPPORT_GFX_3D_CGCG |
AMD_CG_SUPPORT_GFX_3D_CGLS |
AMD_CG_SUPPORT_GFX_CGCG |
AMD_CG_SUPPORT_GFX_CGLS |
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
index 86e1ef7..aa8ae0c 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
@@ -232,9 +232,13 @@ static int vcn_v1_0_hw_fini(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ cancel_delayed_work_sync(&adev->vcn.idle_work);
+
if ((adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG) ||
- RREG32_SOC15(VCN, 0, mmUVD_STATUS))
+ (adev->vcn.cur_state != AMD_PG_STATE_GATE &&
+ RREG32_SOC15(VCN, 0, mmUVD_STATUS))) {
vcn_v1_0_set_powergating_state(adev, AMD_PG_STATE_GATE);
+ }
return 0;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
index e5d29de..fc939d4 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
@@ -262,6 +262,8 @@ static int vcn_v2_0_hw_fini(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ cancel_delayed_work_sync(&adev->vcn.idle_work);
+
if ((adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG) ||
(adev->vcn.cur_state != AMD_PG_STATE_GATE &&
RREG32_SOC15(VCN, 0, mmUVD_STATUS)))
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c b/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
index 0f1d3ef..2c32836 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
@@ -321,6 +321,8 @@ static int vcn_v2_5_hw_fini(void *handle)
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
int i;
+ cancel_delayed_work_sync(&adev->vcn.idle_work);
+
for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
if (adev->vcn.harvest_config & (1 << i))
continue;
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
index b5f8f3d..700621d 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
@@ -346,7 +346,7 @@ static int vcn_v3_0_hw_fini(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
struct amdgpu_ring *ring;
- int i, j;
+ int i;
for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
if (adev->vcn.harvest_config & (1 << i))
@@ -361,12 +361,6 @@ static int vcn_v3_0_hw_fini(void *handle)
vcn_v3_0_set_powergating_state(adev, AMD_PG_STATE_GATE);
}
}
- ring->sched.ready = false;
-
- for (j = 0; j < adev->vcn.num_enc_rings; ++j) {
- ring = &adev->vcn.inst[i].ring_enc[j];
- ring->sched.ready = false;
- }
}
return 0;
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_debugfs.c b/drivers/gpu/drm/amd/amdkfd/kfd_debugfs.c
index 511712c..673d5e3 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_debugfs.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_debugfs.c
@@ -33,6 +33,11 @@ static int kfd_debugfs_open(struct inode *inode, struct file *file)
return single_open(file, show, NULL);
}
+static int kfd_debugfs_hang_hws_read(struct seq_file *m, void *data)
+{
+ seq_printf(m, "echo gpu_id > hang_hws\n");
+ return 0;
+}
static ssize_t kfd_debugfs_hang_hws_write(struct file *file,
const char __user *user_buf, size_t size, loff_t *ppos)
@@ -94,7 +99,7 @@ void kfd_debugfs_init(void)
debugfs_create_file("rls", S_IFREG | 0444, debugfs_root,
kfd_debugfs_rls_by_device, &kfd_debugfs_fops);
debugfs_create_file("hang_hws", S_IFREG | 0200, debugfs_root,
- NULL, &kfd_debugfs_hang_hws_fops);
+ kfd_debugfs_hang_hws_read, &kfd_debugfs_hang_hws_fops);
}
void kfd_debugfs_fini(void)
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
index 8e5cfb1..6ea8a4b 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
@@ -1128,6 +1128,9 @@ static int set_sched_resources(struct device_queue_manager *dqm)
static int initialize_cpsch(struct device_queue_manager *dqm)
{
+ uint64_t num_sdma_queues;
+ uint64_t num_xgmi_sdma_queues;
+
pr_debug("num of pipes: %d\n", get_pipes_per_mec(dqm));
mutex_init(&dqm->lock_hidden);
@@ -1136,8 +1139,18 @@ static int initialize_cpsch(struct device_queue_manager *dqm)
dqm->active_cp_queue_count = 0;
dqm->gws_queue_count = 0;
dqm->active_runlist = false;
- dqm->sdma_bitmap = ~0ULL >> (64 - get_num_sdma_queues(dqm));
- dqm->xgmi_sdma_bitmap = ~0ULL >> (64 - get_num_xgmi_sdma_queues(dqm));
+
+ num_sdma_queues = get_num_sdma_queues(dqm);
+ if (num_sdma_queues >= BITS_PER_TYPE(dqm->sdma_bitmap))
+ dqm->sdma_bitmap = ULLONG_MAX;
+ else
+ dqm->sdma_bitmap = (BIT_ULL(num_sdma_queues) - 1);
+
+ num_xgmi_sdma_queues = get_num_xgmi_sdma_queues(dqm);
+ if (num_xgmi_sdma_queues >= BITS_PER_TYPE(dqm->xgmi_sdma_bitmap))
+ dqm->xgmi_sdma_bitmap = ULLONG_MAX;
+ else
+ dqm->xgmi_sdma_bitmap = (BIT_ULL(num_xgmi_sdma_queues) - 1);
INIT_WORK(&dqm->hw_exception_work, kfd_process_hw_exception);
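The rewritten bitmap setup avoids shifting a 64-bit value by 64 or more bits, which is undefined behaviour in C: the old '~0ULL >> (64 - n)' expression breaks when n is 0, and 'BIT_ULL(n) - 1' would break when n reaches 64, hence the explicit ULLONG_MAX branch. A standalone sketch of the safe low-bits mask (illustrative helper, not in the patch):

#include <linux/types.h>

static inline u64 example_low_bits_mask(unsigned int n)
{
	if (n >= 64)
		return ~0ULL;			/* all 64 bits set */

	return (1ULL << n) - 1;			/* also correct for n == 0 */
}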
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_iommu.c b/drivers/gpu/drm/amd/amdkfd/kfd_iommu.c
index 66bbca6..9318936 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_iommu.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_iommu.c
@@ -20,6 +20,10 @@
* OTHER DEALINGS IN THE SOFTWARE.
*/
+#include <linux/kconfig.h>
+
+#if IS_REACHABLE(CONFIG_AMD_IOMMU_V2)
+
#include <linux/printk.h>
#include <linux/device.h>
#include <linux/slab.h>
@@ -355,3 +359,5 @@ int kfd_iommu_add_perf_counters(struct kfd_topology_device *kdev)
return 0;
}
+
+#endif
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_iommu.h b/drivers/gpu/drm/amd/amdkfd/kfd_iommu.h
index dd23d9f..afd420b 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_iommu.h
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_iommu.h
@@ -23,7 +23,9 @@
#ifndef __KFD_IOMMU_H__
#define __KFD_IOMMU_H__
-#if defined(CONFIG_AMD_IOMMU_V2_MODULE) || defined(CONFIG_AMD_IOMMU_V2)
+#include <linux/kconfig.h>
+
+#if IS_REACHABLE(CONFIG_AMD_IOMMU_V2)
#define KFD_SUPPORT_IOMMU_V2
@@ -46,6 +48,9 @@ static inline int kfd_iommu_check_device(struct kfd_dev *kfd)
}
static inline int kfd_iommu_device_init(struct kfd_dev *kfd)
{
+#if IS_MODULE(CONFIG_AMD_IOMMU_V2)
+ WARN_ONCE(1, "iommu_v2 module is not usable by built-in KFD");
+#endif
return 0;
}
@@ -73,6 +78,6 @@ static inline int kfd_iommu_add_perf_counters(struct kfd_topology_device *kdev)
return 0;
}
-#endif /* defined(CONFIG_AMD_IOMMU_V2) */
+#endif /* IS_REACHABLE(CONFIG_AMD_IOMMU_V2) */
#endif /* __KFD_IOMMU_H__ */
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index c07737c..fbbb1bd 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -3685,6 +3685,23 @@ static int fill_dc_scaling_info(const struct drm_plane_state *state,
scaling_info->src_rect.x = state->src_x >> 16;
scaling_info->src_rect.y = state->src_y >> 16;
+ /*
+ * For reasons we don't (yet) fully understand a non-zero
+ * src_y coordinate into an NV12 buffer can cause a
+ * system hang. To avoid hangs (and maybe be overly cautious)
+ * let's reject both non-zero src_x and src_y.
+ *
+ * We currently know of only one use-case to reproduce a
+ * scenario with non-zero src_x and src_y for NV12, which
+ * is to gesture the YouTube Android app into full screen
+ * on ChromeOS.
+ */
+ if (state->fb &&
+ state->fb->format->format == DRM_FORMAT_NV12 &&
+ (scaling_info->src_rect.x != 0 ||
+ scaling_info->src_rect.y != 0))
+ return -EINVAL;
+
scaling_info->src_rect.width = state->src_w >> 16;
if (scaling_info->src_rect.width == 0)
return -EINVAL;
@@ -5328,6 +5345,15 @@ create_validate_stream_for_sink(struct amdgpu_dm_connector *aconnector,
} while (stream == NULL && requested_bpc >= 6);
+ if (dc_result == DC_FAIL_ENC_VALIDATE && !aconnector->force_yuv420_output) {
+ DRM_DEBUG_KMS("Retry forcing YCbCr420 encoding\n");
+
+ aconnector->force_yuv420_output = true;
+ stream = create_validate_stream_for_sink(aconnector, drm_mode,
+ dm_state, old_stream);
+ aconnector->force_yuv420_output = false;
+ }
+
return stream;
}
@@ -6800,10 +6826,6 @@ static int get_cursor_position(struct drm_plane *plane, struct drm_crtc *crtc,
int x, y;
int xorigin = 0, yorigin = 0;
- position->enable = false;
- position->x = 0;
- position->y = 0;
-
if (!crtc || !plane->state->fb)
return 0;
@@ -6850,7 +6872,7 @@ static void handle_cursor_update(struct drm_plane *plane,
struct dm_crtc_state *crtc_state = crtc ? to_dm_crtc_state(crtc->state) : NULL;
struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc);
uint64_t address = afb ? afb->address : 0;
- struct dc_cursor_position position;
+ struct dc_cursor_position position = {0};
struct dc_cursor_attributes attributes;
int ret;
@@ -8589,6 +8611,53 @@ static int add_affected_mst_dsc_crtcs(struct drm_atomic_state *state, struct drm
}
#endif
+static int validate_overlay(struct drm_atomic_state *state)
+{
+ int i;
+ struct drm_plane *plane;
+ struct drm_plane_state *old_plane_state, *new_plane_state;
+ struct drm_plane_state *primary_state, *overlay_state = NULL;
+
+ /* Check if primary plane is contained inside overlay */
+ for_each_oldnew_plane_in_state_reverse(state, plane, old_plane_state, new_plane_state, i) {
+ if (plane->type == DRM_PLANE_TYPE_OVERLAY) {
+ if (drm_atomic_plane_disabling(plane->state, new_plane_state))
+ return 0;
+
+ overlay_state = new_plane_state;
+ continue;
+ }
+ }
+
+ /* check if we're making changes to the overlay plane */
+ if (!overlay_state)
+ return 0;
+
+ /* check if overlay plane is enabled */
+ if (!overlay_state->crtc)
+ return 0;
+
+ /* find the primary plane for the CRTC that the overlay is enabled on */
+ primary_state = drm_atomic_get_plane_state(state, overlay_state->crtc->primary);
+ if (IS_ERR(primary_state))
+ return PTR_ERR(primary_state);
+
+ /* check if primary plane is enabled */
+ if (!primary_state->crtc)
+ return 0;
+
+ /* Perform the bounds check to ensure the overlay plane covers the primary */
+ if (primary_state->crtc_x < overlay_state->crtc_x ||
+ primary_state->crtc_y < overlay_state->crtc_y ||
+ primary_state->crtc_x + primary_state->crtc_w > overlay_state->crtc_x + overlay_state->crtc_w ||
+ primary_state->crtc_y + primary_state->crtc_h > overlay_state->crtc_y + overlay_state->crtc_h) {
+ DRM_DEBUG_ATOMIC("Overlay plane is enabled with hardware cursor but does not fully cover primary plane\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
/**
* amdgpu_dm_atomic_check() - Atomic check implementation for AMDgpu DM.
* @dev: The DRM device
@@ -8659,7 +8728,7 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
}
#if defined(CONFIG_DRM_AMD_DC_DCN)
- if (adev->asic_type >= CHIP_NAVI10) {
+ if (dc_resource_is_dsc_encoding_supported(dc)) {
for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
if (drm_atomic_crtc_needs_modeset(new_crtc_state)) {
ret = add_affected_mst_dsc_crtcs(state, crtc);
@@ -8767,6 +8836,10 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
goto fail;
}
+ ret = validate_overlay(state);
+ if (ret)
+ goto fail;
+
/* Add new/modified planes */
for_each_oldnew_plane_in_state_reverse(state, plane, old_plane_state, new_plane_state, i) {
ret = dm_update_plane_state(dc, state, plane,
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
index a8a0e8c..1df7f1b 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
@@ -69,18 +69,6 @@ struct common_irq_params {
};
/**
- * struct irq_list_head - Linked-list for low context IRQ handlers.
- *
- * @head: The list_head within &struct handler_data
- * @work: A work_struct containing the deferred handler work
- */
-struct irq_list_head {
- struct list_head head;
- /* In case this interrupt needs post-processing, 'work' will be queued*/
- struct work_struct work;
-};
-
-/**
* struct dm_compressor_info - Buffer info used by frame buffer compression
* @cpu_addr: MMIO cpu addr
* @bo_ptr: Pointer to the buffer object
@@ -270,7 +258,7 @@ struct amdgpu_display_manager {
* Note that handlers are called in the same order as they were
* registered (FIFO).
*/
- struct irq_list_head irq_handler_list_low_tab[DAL_IRQ_SOURCES_NUMBER];
+ struct list_head irq_handler_list_low_tab[DAL_IRQ_SOURCES_NUMBER];
/**
* @irq_handler_list_high_tab:
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
index 8cd646e..e02a55fc 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
@@ -150,7 +150,7 @@ static int parse_write_buffer_into_params(char *wr_buf, uint32_t wr_buf_size,
*
* --- to get dp configuration
*
- * cat link_settings
+ * cat /sys/kernel/debug/dri/0/DP-x/link_settings
*
* It will list current, verified, reported, preferred dp configuration.
* current -- for current video mode
@@ -163,7 +163,7 @@ static int parse_write_buffer_into_params(char *wr_buf, uint32_t wr_buf_size,
* echo <lane_count> <link_rate> > link_settings
*
* for example, to force to 2 lane, 2.7GHz,
- * echo 4 0xa > link_settings
+ * echo 4 0xa > /sys/kernel/debug/dri/0/DP-x/link_settings
*
* spread_spectrum could not be changed dynamically.
*
@@ -171,7 +171,7 @@ static int parse_write_buffer_into_params(char *wr_buf, uint32_t wr_buf_size,
* done. please check link settings after force operation to see if HW get
* programming.
*
- * cat link_settings
+ * cat /sys/kernel/debug/dri/0/DP-x/link_settings
*
* check current and preferred settings.
*
@@ -255,7 +255,7 @@ static ssize_t dp_link_settings_write(struct file *f, const char __user *buf,
int max_param_num = 2;
uint8_t param_nums = 0;
long param[2];
- bool valid_input = false;
+ bool valid_input = true;
if (size == 0)
return -EINVAL;
@@ -282,9 +282,9 @@ static ssize_t dp_link_settings_write(struct file *f, const char __user *buf,
case LANE_COUNT_ONE:
case LANE_COUNT_TWO:
case LANE_COUNT_FOUR:
- valid_input = true;
break;
default:
+ valid_input = false;
break;
}
@@ -294,9 +294,9 @@ static ssize_t dp_link_settings_write(struct file *f, const char __user *buf,
case LINK_RATE_RBR2:
case LINK_RATE_HIGH2:
case LINK_RATE_HIGH3:
- valid_input = true;
break;
default:
+ valid_input = false;
break;
}
@@ -310,10 +310,11 @@ static ssize_t dp_link_settings_write(struct file *f, const char __user *buf,
* spread spectrum will not be changed
*/
prefer_link_settings.link_spread = link->cur_link_settings.link_spread;
+ prefer_link_settings.use_link_rate_set = false;
prefer_link_settings.lane_count = param[0];
prefer_link_settings.link_rate = param[1];
- dc_link_set_preferred_link_settings(dc, &prefer_link_settings, link);
+ dc_link_set_preferred_training_settings(dc, &prefer_link_settings, NULL, link, true);
kfree(wr_buf);
return size;
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
index 79de68a..0c3b159 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
@@ -643,6 +643,7 @@ struct hdcp_workqueue *hdcp_create_workqueue(struct amdgpu_device *adev, struct
/* File created at /sys/class/drm/card0/device/hdcp_srm*/
hdcp_work[0].attr = data_attr;
+ sysfs_bin_attr_init(&hdcp_work[0].attr);
if (sysfs_create_bin_file(&adev->dev->kobj, &hdcp_work[0].attr))
DRM_WARN("Failed to create device file hdcp_srm");
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
index 3577785..281b274 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
@@ -82,6 +82,7 @@ struct amdgpu_dm_irq_handler_data {
struct amdgpu_display_manager *dm;
/* DAL irq source which registered for this interrupt. */
enum dc_irq_source irq_source;
+ struct work_struct work;
};
#define DM_IRQ_TABLE_LOCK(adev, flags) \
@@ -111,20 +112,10 @@ static void init_handler_common_data(struct amdgpu_dm_irq_handler_data *hcd,
*/
static void dm_irq_work_func(struct work_struct *work)
{
- struct irq_list_head *irq_list_head =
- container_of(work, struct irq_list_head, work);
- struct list_head *handler_list = &irq_list_head->head;
- struct amdgpu_dm_irq_handler_data *handler_data;
+ struct amdgpu_dm_irq_handler_data *handler_data =
+ container_of(work, struct amdgpu_dm_irq_handler_data, work);
- list_for_each_entry(handler_data, handler_list, list) {
- DRM_DEBUG_KMS("DM_IRQ: work_func: for dal_src=%d\n",
- handler_data->irq_source);
-
- DRM_DEBUG_KMS("DM_IRQ: schedule_work: for dal_src=%d\n",
- handler_data->irq_source);
-
- handler_data->handler(handler_data->handler_arg);
- }
+ handler_data->handler(handler_data->handler_arg);
/* Call a DAL subcomponent which registered for interrupt notification
* at INTERRUPT_LOW_IRQ_CONTEXT.
@@ -156,7 +147,7 @@ static struct list_head *remove_irq_handler(struct amdgpu_device *adev,
break;
case INTERRUPT_LOW_IRQ_CONTEXT:
default:
- hnd_list = &adev->dm.irq_handler_list_low_tab[irq_source].head;
+ hnd_list = &adev->dm.irq_handler_list_low_tab[irq_source];
break;
}
@@ -287,7 +278,8 @@ void *amdgpu_dm_irq_register_interrupt(struct amdgpu_device *adev,
break;
case INTERRUPT_LOW_IRQ_CONTEXT:
default:
- hnd_list = &adev->dm.irq_handler_list_low_tab[irq_source].head;
+ hnd_list = &adev->dm.irq_handler_list_low_tab[irq_source];
+ INIT_WORK(&handler_data->work, dm_irq_work_func);
break;
}
@@ -369,7 +361,7 @@ void amdgpu_dm_irq_unregister_interrupt(struct amdgpu_device *adev,
int amdgpu_dm_irq_init(struct amdgpu_device *adev)
{
int src;
- struct irq_list_head *lh;
+ struct list_head *lh;
DRM_DEBUG_KMS("DM_IRQ\n");
@@ -378,9 +370,7 @@ int amdgpu_dm_irq_init(struct amdgpu_device *adev)
for (src = 0; src < DAL_IRQ_SOURCES_NUMBER; src++) {
/* low context handler list init */
lh = &adev->dm.irq_handler_list_low_tab[src];
- INIT_LIST_HEAD(&lh->head);
- INIT_WORK(&lh->work, dm_irq_work_func);
-
+ INIT_LIST_HEAD(lh);
/* high context handler init */
INIT_LIST_HEAD(&adev->dm.irq_handler_list_high_tab[src]);
}
@@ -397,8 +387,11 @@ int amdgpu_dm_irq_init(struct amdgpu_device *adev)
void amdgpu_dm_irq_fini(struct amdgpu_device *adev)
{
int src;
- struct irq_list_head *lh;
+ struct list_head *lh;
+ struct list_head *entry, *tmp;
+ struct amdgpu_dm_irq_handler_data *handler;
unsigned long irq_table_flags;
+
DRM_DEBUG_KMS("DM_IRQ: releasing resources.\n");
for (src = 0; src < DAL_IRQ_SOURCES_NUMBER; src++) {
DM_IRQ_TABLE_LOCK(adev, irq_table_flags);
@@ -407,7 +400,16 @@ void amdgpu_dm_irq_fini(struct amdgpu_device *adev)
* (because no code can schedule a new one). */
lh = &adev->dm.irq_handler_list_low_tab[src];
DM_IRQ_TABLE_UNLOCK(adev, irq_table_flags);
- flush_work(&lh->work);
+
+ if (!list_empty(lh)) {
+ list_for_each_safe(entry, tmp, lh) {
+ handler = list_entry(
+ entry,
+ struct amdgpu_dm_irq_handler_data,
+ list);
+ flush_work(&handler->work);
+ }
+ }
}
}
@@ -417,6 +419,8 @@ int amdgpu_dm_irq_suspend(struct amdgpu_device *adev)
struct list_head *hnd_list_h;
struct list_head *hnd_list_l;
unsigned long irq_table_flags;
+ struct list_head *entry, *tmp;
+ struct amdgpu_dm_irq_handler_data *handler;
DM_IRQ_TABLE_LOCK(adev, irq_table_flags);
@@ -427,14 +431,22 @@ int amdgpu_dm_irq_suspend(struct amdgpu_device *adev)
* will be disabled from manage_dm_interrupts on disable CRTC.
*/
for (src = DC_IRQ_SOURCE_HPD1; src <= DC_IRQ_SOURCE_HPD6RX; src++) {
- hnd_list_l = &adev->dm.irq_handler_list_low_tab[src].head;
+ hnd_list_l = &adev->dm.irq_handler_list_low_tab[src];
hnd_list_h = &adev->dm.irq_handler_list_high_tab[src];
if (!list_empty(hnd_list_l) || !list_empty(hnd_list_h))
dc_interrupt_set(adev->dm.dc, src, false);
DM_IRQ_TABLE_UNLOCK(adev, irq_table_flags);
- flush_work(&adev->dm.irq_handler_list_low_tab[src].work);
+ if (!list_empty(hnd_list_l)) {
+ list_for_each_safe (entry, tmp, hnd_list_l) {
+ handler = list_entry(
+ entry,
+ struct amdgpu_dm_irq_handler_data,
+ list);
+ flush_work(&handler->work);
+ }
+ }
DM_IRQ_TABLE_LOCK(adev, irq_table_flags);
}
@@ -454,7 +466,7 @@ int amdgpu_dm_irq_resume_early(struct amdgpu_device *adev)
/* re-enable short pulse interrupts HW interrupt */
for (src = DC_IRQ_SOURCE_HPD1RX; src <= DC_IRQ_SOURCE_HPD6RX; src++) {
- hnd_list_l = &adev->dm.irq_handler_list_low_tab[src].head;
+ hnd_list_l = &adev->dm.irq_handler_list_low_tab[src];
hnd_list_h = &adev->dm.irq_handler_list_high_tab[src];
if (!list_empty(hnd_list_l) || !list_empty(hnd_list_h))
dc_interrupt_set(adev->dm.dc, src, true);
@@ -480,7 +492,7 @@ int amdgpu_dm_irq_resume_late(struct amdgpu_device *adev)
* will be enabled from manage_dm_interrupts on enable CRTC.
*/
for (src = DC_IRQ_SOURCE_HPD1; src <= DC_IRQ_SOURCE_HPD6; src++) {
- hnd_list_l = &adev->dm.irq_handler_list_low_tab[src].head;
+ hnd_list_l = &adev->dm.irq_handler_list_low_tab[src];
hnd_list_h = &adev->dm.irq_handler_list_high_tab[src];
if (!list_empty(hnd_list_l) || !list_empty(hnd_list_h))
dc_interrupt_set(adev->dm.dc, src, true);
@@ -497,22 +509,53 @@ int amdgpu_dm_irq_resume_late(struct amdgpu_device *adev)
static void amdgpu_dm_irq_schedule_work(struct amdgpu_device *adev,
enum dc_irq_source irq_source)
{
- unsigned long irq_table_flags;
- struct work_struct *work = NULL;
+ struct list_head *handler_list = &adev->dm.irq_handler_list_low_tab[irq_source];
+ struct amdgpu_dm_irq_handler_data *handler_data;
+ bool work_queued = false;
- DM_IRQ_TABLE_LOCK(adev, irq_table_flags);
+ if (list_empty(handler_list))
+ return;
- if (!list_empty(&adev->dm.irq_handler_list_low_tab[irq_source].head))
- work = &adev->dm.irq_handler_list_low_tab[irq_source].work;
-
- DM_IRQ_TABLE_UNLOCK(adev, irq_table_flags);
-
- if (work) {
- if (!schedule_work(work))
- DRM_INFO("amdgpu_dm_irq_schedule_work FAILED src %d\n",
- irq_source);
+ list_for_each_entry (handler_data, handler_list, list) {
+ if (!queue_work(system_highpri_wq, &handler_data->work)) {
+ continue;
+ } else {
+ work_queued = true;
+ break;
+ }
}
+ if (!work_queued) {
+ struct amdgpu_dm_irq_handler_data *handler_data_add;
+ /*get the amdgpu_dm_irq_handler_data of first item pointed by handler_list*/
+ handler_data = container_of(handler_list->next, struct amdgpu_dm_irq_handler_data, list);
+
+ /*allocate a new amdgpu_dm_irq_handler_data*/
+ handler_data_add = kzalloc(sizeof(*handler_data), GFP_KERNEL);
+ if (!handler_data_add) {
+ DRM_ERROR("DM_IRQ: failed to allocate irq handler!\n");
+ return;
+ }
+
+ /*copy new amdgpu_dm_irq_handler_data members from handler_data*/
+ handler_data_add->handler = handler_data->handler;
+ handler_data_add->handler_arg = handler_data->handler_arg;
+ handler_data_add->dm = handler_data->dm;
+ handler_data_add->irq_source = irq_source;
+
+ list_add_tail(&handler_data_add->list, handler_list);
+
+ INIT_WORK(&handler_data_add->work, dm_irq_work_func);
+
+ if (queue_work(system_highpri_wq, &handler_data_add->work))
+ DRM_DEBUG("Queued work for handling interrupt from "
+ "display for IRQ source %d\n",
+ irq_source);
+ else
+ DRM_ERROR("Failed to queue work for handling interrupt "
+ "from display for IRQ source %d\n",
+ irq_source);
+ }
}
/*
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c
index 95d8834..cab47bb 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c
@@ -240,6 +240,7 @@ static void dcn3_update_clocks(struct clk_mgr *clk_mgr_base,
bool force_reset = false;
bool update_uclk = false;
bool p_state_change_support;
+ int total_plane_count;
if (dc->work_arounds.skip_clock_update || !clk_mgr->smu_present)
return;
@@ -280,7 +281,8 @@ static void dcn3_update_clocks(struct clk_mgr *clk_mgr_base,
clk_mgr_base->clks.socclk_khz = new_clocks->socclk_khz;
clk_mgr_base->clks.prev_p_state_change_support = clk_mgr_base->clks.p_state_change_support;
- p_state_change_support = new_clocks->p_state_change_support || (display_count == 0);
+ total_plane_count = clk_mgr_helper_get_active_plane_cnt(dc, context);
+ p_state_change_support = new_clocks->p_state_change_support || (total_plane_count == 0);
if (should_update_pstate_support(safe_to_lower, p_state_change_support, clk_mgr_base->clks.p_state_change_support)) {
clk_mgr_base->clks.p_state_change_support = p_state_change_support;
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
index ffb2119..284ed1c 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
@@ -2345,7 +2345,8 @@ static void commit_planes_do_stream_update(struct dc *dc,
if (pipe_ctx->stream_res.audio && !dc->debug.az_endpoint_mute_only)
pipe_ctx->stream_res.audio->funcs->az_disable(pipe_ctx->stream_res.audio);
- dc->hwss.optimize_bandwidth(dc, dc->current_state);
+ dc->optimized_required = true;
+
} else {
if (dc->optimize_seamless_boot_streams == 0)
dc->hwss.prepare_bandwidth(dc, dc->current_state);
@@ -2503,6 +2504,10 @@ static void commit_planes_for_stream(struct dc *dc,
plane_state->triplebuffer_flips = true;
}
}
+ if (update_type == UPDATE_TYPE_FULL) {
+ /* force vsync flip when reconfiguring pipes to prevent underflow */
+ plane_state->flip_immediate = false;
+ }
}
}
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
index f003959..62778cc 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
@@ -1049,6 +1049,24 @@ static bool dc_link_detect_helper(struct dc_link *link,
dc_is_dvi_signal(link->connector_signal)) {
if (prev_sink)
dc_sink_release(prev_sink);
+ link_disconnect_sink(link);
+
+ return false;
+ }
+ /*
+ * Abort detection for DP connectors if we have
+ * no EDID and connector is active converter
+ * as there are no display downstream
+ *
+ */
+ if (dc_is_dp_sst_signal(link->connector_signal) &&
+ (link->dpcd_caps.dongle_type ==
+ DISPLAY_DONGLE_DP_VGA_CONVERTER ||
+ link->dpcd_caps.dongle_type ==
+ DISPLAY_DONGLE_DP_DVI_CONVERTER)) {
+ if (prev_sink)
+ dc_sink_release(prev_sink);
+ link_disconnect_sink(link);
return false;
}
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_abm.c b/drivers/gpu/drm/amd/display/dc/dce/dce_abm.c
index 4e87e70..874b132 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_abm.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_abm.c
@@ -283,7 +283,7 @@ struct abm *dce_abm_create(
const struct dce_abm_shift *abm_shift,
const struct dce_abm_mask *abm_mask)
{
- struct dce_abm *abm_dce = kzalloc(sizeof(*abm_dce), GFP_KERNEL);
+ struct dce_abm *abm_dce = kzalloc(sizeof(*abm_dce), GFP_ATOMIC);
if (abm_dce == NULL) {
BREAK_TO_DEBUGGER();
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_aux.h b/drivers/gpu/drm/amd/display/dc/dce/dce_aux.h
index 3824658..f72f02e 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_aux.h
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_aux.h
@@ -99,7 +99,6 @@ struct dce110_aux_registers {
AUX_SF(AUX_SW_CONTROL, AUX_SW_GO, mask_sh),\
AUX_SF(AUX_SW_DATA, AUX_SW_AUTOINCREMENT_DISABLE, mask_sh),\
AUX_SF(AUX_SW_DATA, AUX_SW_DATA_RW, mask_sh),\
- AUX_SF(AUX_SW_DATA, AUX_SW_AUTOINCREMENT_DISABLE, mask_sh),\
AUX_SF(AUX_SW_DATA, AUX_SW_INDEX, mask_sh),\
AUX_SF(AUX_SW_DATA, AUX_SW_DATA, mask_sh),\
AUX_SF(AUX_SW_STATUS, AUX_SW_REPLY_BYTE_COUNT, mask_sh),\
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c b/drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c
index f0cebe7..4216419 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c
@@ -925,7 +925,7 @@ struct dmcu *dcn10_dmcu_create(
const struct dce_dmcu_shift *dmcu_shift,
const struct dce_dmcu_mask *dmcu_mask)
{
- struct dce_dmcu *dmcu_dce = kzalloc(sizeof(*dmcu_dce), GFP_KERNEL);
+ struct dce_dmcu *dmcu_dce = kzalloc(sizeof(*dmcu_dce), GFP_ATOMIC);
if (dmcu_dce == NULL) {
BREAK_TO_DEBUGGER();
@@ -946,7 +946,7 @@ struct dmcu *dcn20_dmcu_create(
const struct dce_dmcu_shift *dmcu_shift,
const struct dce_dmcu_mask *dmcu_mask)
{
- struct dce_dmcu *dmcu_dce = kzalloc(sizeof(*dmcu_dce), GFP_KERNEL);
+ struct dce_dmcu *dmcu_dce = kzalloc(sizeof(*dmcu_dce), GFP_ATOMIC);
if (dmcu_dce == NULL) {
BREAK_TO_DEBUGGER();
@@ -967,7 +967,7 @@ struct dmcu *dcn21_dmcu_create(
const struct dce_dmcu_shift *dmcu_shift,
const struct dce_dmcu_mask *dmcu_mask)
{
- struct dce_dmcu *dmcu_dce = kzalloc(sizeof(*dmcu_dce), GFP_KERNEL);
+ struct dce_dmcu *dmcu_dce = kzalloc(sizeof(*dmcu_dce), GFP_ATOMIC);
if (dmcu_dce == NULL) {
BREAK_TO_DEBUGGER();
diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dccg.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dccg.c
index 62cc265..8774406 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dccg.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dccg.c
@@ -112,7 +112,7 @@ struct dccg *dccg2_create(
const struct dccg_shift *dccg_shift,
const struct dccg_mask *dccg_mask)
{
- struct dcn_dccg *dccg_dcn = kzalloc(sizeof(*dccg_dcn), GFP_KERNEL);
+ struct dcn_dccg *dccg_dcn = kzalloc(sizeof(*dccg_dcn), GFP_ATOMIC);
struct dccg *base;
if (dccg_dcn == NULL) {
diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubp.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubp.c
index 368818d..cd9bd71 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubp.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubp.c
@@ -1,5 +1,5 @@
/*
- * Copyright 2012-17 Advanced Micro Devices, Inc.
+ * Copyright 2012-2021 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -181,11 +181,14 @@ void hubp2_vready_at_or_After_vsync(struct hubp *hubp,
else
Set HUBP_VREADY_AT_OR_AFTER_VSYNC = 0
*/
- if ((pipe_dest->vstartup_start - (pipe_dest->vready_offset+pipe_dest->vupdate_width
- + pipe_dest->vupdate_offset) / pipe_dest->htotal) <= pipe_dest->vblank_end) {
- value = 1;
- } else
- value = 0;
+ if (pipe_dest->htotal != 0) {
+ if ((pipe_dest->vstartup_start - (pipe_dest->vready_offset+pipe_dest->vupdate_width
+ + pipe_dest->vupdate_offset) / pipe_dest->htotal) <= pipe_dest->vblank_end) {
+ value = 1;
+ } else
+ value = 0;
+ }
+
REG_UPDATE(DCHUBP_CNTL, HUBP_VREADY_AT_OR_AFTER_VSYNC, value);
}
diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
index 4ea53c5..33488b3 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
@@ -1104,7 +1104,7 @@ struct dpp *dcn20_dpp_create(
uint32_t inst)
{
struct dcn20_dpp *dpp =
- kzalloc(sizeof(struct dcn20_dpp), GFP_KERNEL);
+ kzalloc(sizeof(struct dcn20_dpp), GFP_ATOMIC);
if (!dpp)
return NULL;
@@ -1122,7 +1122,7 @@ struct input_pixel_processor *dcn20_ipp_create(
struct dc_context *ctx, uint32_t inst)
{
struct dcn10_ipp *ipp =
- kzalloc(sizeof(struct dcn10_ipp), GFP_KERNEL);
+ kzalloc(sizeof(struct dcn10_ipp), GFP_ATOMIC);
if (!ipp) {
BREAK_TO_DEBUGGER();
@@ -1139,7 +1139,7 @@ struct output_pixel_processor *dcn20_opp_create(
struct dc_context *ctx, uint32_t inst)
{
struct dcn20_opp *opp =
- kzalloc(sizeof(struct dcn20_opp), GFP_KERNEL);
+ kzalloc(sizeof(struct dcn20_opp), GFP_ATOMIC);
if (!opp) {
BREAK_TO_DEBUGGER();
@@ -1156,7 +1156,7 @@ struct dce_aux *dcn20_aux_engine_create(
uint32_t inst)
{
struct aux_engine_dce110 *aux_engine =
- kzalloc(sizeof(struct aux_engine_dce110), GFP_KERNEL);
+ kzalloc(sizeof(struct aux_engine_dce110), GFP_ATOMIC);
if (!aux_engine)
return NULL;
@@ -1194,7 +1194,7 @@ struct dce_i2c_hw *dcn20_i2c_hw_create(
uint32_t inst)
{
struct dce_i2c_hw *dce_i2c_hw =
- kzalloc(sizeof(struct dce_i2c_hw), GFP_KERNEL);
+ kzalloc(sizeof(struct dce_i2c_hw), GFP_ATOMIC);
if (!dce_i2c_hw)
return NULL;
@@ -1207,7 +1207,7 @@ struct dce_i2c_hw *dcn20_i2c_hw_create(
struct mpc *dcn20_mpc_create(struct dc_context *ctx)
{
struct dcn20_mpc *mpc20 = kzalloc(sizeof(struct dcn20_mpc),
- GFP_KERNEL);
+ GFP_ATOMIC);
if (!mpc20)
return NULL;
@@ -1225,7 +1225,7 @@ struct hubbub *dcn20_hubbub_create(struct dc_context *ctx)
{
int i;
struct dcn20_hubbub *hubbub = kzalloc(sizeof(struct dcn20_hubbub),
- GFP_KERNEL);
+ GFP_ATOMIC);
if (!hubbub)
return NULL;
@@ -1253,7 +1253,7 @@ struct timing_generator *dcn20_timing_generator_create(
uint32_t instance)
{
struct optc *tgn10 =
- kzalloc(sizeof(struct optc), GFP_KERNEL);
+ kzalloc(sizeof(struct optc), GFP_ATOMIC);
if (!tgn10)
return NULL;
@@ -1332,7 +1332,7 @@ static struct clock_source *dcn20_clock_source_create(
bool dp_clk_src)
{
struct dce110_clk_src *clk_src =
- kzalloc(sizeof(struct dce110_clk_src), GFP_KERNEL);
+ kzalloc(sizeof(struct dce110_clk_src), GFP_ATOMIC);
if (!clk_src)
return NULL;
@@ -1438,7 +1438,7 @@ struct display_stream_compressor *dcn20_dsc_create(
struct dc_context *ctx, uint32_t inst)
{
struct dcn20_dsc *dsc =
- kzalloc(sizeof(struct dcn20_dsc), GFP_KERNEL);
+ kzalloc(sizeof(struct dcn20_dsc), GFP_ATOMIC);
if (!dsc) {
BREAK_TO_DEBUGGER();
@@ -1572,7 +1572,7 @@ struct hubp *dcn20_hubp_create(
uint32_t inst)
{
struct dcn20_hubp *hubp2 =
- kzalloc(sizeof(struct dcn20_hubp), GFP_KERNEL);
+ kzalloc(sizeof(struct dcn20_hubp), GFP_ATOMIC);
if (!hubp2)
return NULL;
@@ -3391,7 +3391,7 @@ bool dcn20_mmhubbub_create(struct dc_context *ctx, struct resource_pool *pool)
static struct pp_smu_funcs *dcn20_pp_smu_create(struct dc_context *ctx)
{
- struct pp_smu_funcs *pp_smu = kzalloc(sizeof(*pp_smu), GFP_KERNEL);
+ struct pp_smu_funcs *pp_smu = kzalloc(sizeof(*pp_smu), GFP_ATOMIC);
if (!pp_smu)
return pp_smu;
@@ -4142,7 +4142,7 @@ struct resource_pool *dcn20_create_resource_pool(
struct dc *dc)
{
struct dcn20_resource_pool *pool =
- kzalloc(sizeof(struct dcn20_resource_pool), GFP_KERNEL);
+ kzalloc(sizeof(struct dcn20_resource_pool), GFP_ATOMIC);
if (!pool)
return NULL;
diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
index 2455d21..8465cae 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
@@ -180,7 +180,7 @@ struct _vcs_dpi_soc_bounding_box_st dcn3_0_soc = {
},
.min_dcfclk = 500.0, /* TODO: set this to actual min DCFCLK */
.num_states = 1,
- .sr_exit_time_us = 12,
+ .sr_exit_time_us = 15.5,
.sr_enter_plus_exit_time_us = 20,
.urgent_latency_us = 4.0,
.urgent_latency_pixel_data_only_us = 4.0,
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c
index 45f0289..b3f0476 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c
@@ -3437,6 +3437,7 @@ void dml20_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
mode_lib->vba.DCCEnabledInAnyPlane = true;
}
}
+ mode_lib->vba.UrgentLatency = mode_lib->vba.UrgentLatencyPixelDataOnly;
for (i = 0; i <= mode_lib->vba.soc.num_states; i++) {
locals->FabricAndDRAMBandwidthPerState[i] = dml_min(
mode_lib->vba.DRAMSpeedPerState[i] * mode_lib->vba.NumberOfChannels
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20v2.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20v2.c
index 80170f9..1bcda7e 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20v2.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20v2.c
@@ -3510,6 +3510,7 @@ void dml20v2_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode
mode_lib->vba.DCCEnabledInAnyPlane = true;
}
}
+ mode_lib->vba.UrgentLatency = mode_lib->vba.UrgentLatencyPixelDataOnly;
for (i = 0; i <= mode_lib->vba.soc.num_states; i++) {
locals->FabricAndDRAMBandwidthPerState[i] = dml_min(
mode_lib->vba.DRAMSpeedPerState[i] * mode_lib->vba.NumberOfChannels
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.c
index 72423dc..799bae2 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.c
@@ -293,13 +293,31 @@ static void handle_det_buf_split(struct display_mode_lib *mode_lib,
if (surf_linear) {
log2_swath_height_l = 0;
log2_swath_height_c = 0;
- } else if (!surf_vert) {
- log2_swath_height_l = dml_log2(rq_param->misc.rq_l.blk256_height) - req128_l;
- log2_swath_height_c = dml_log2(rq_param->misc.rq_c.blk256_height) - req128_c;
} else {
- log2_swath_height_l = dml_log2(rq_param->misc.rq_l.blk256_width) - req128_l;
- log2_swath_height_c = dml_log2(rq_param->misc.rq_c.blk256_width) - req128_c;
+ unsigned int swath_height_l;
+ unsigned int swath_height_c;
+
+ if (!surf_vert) {
+ swath_height_l = rq_param->misc.rq_l.blk256_height;
+ swath_height_c = rq_param->misc.rq_c.blk256_height;
+ } else {
+ swath_height_l = rq_param->misc.rq_l.blk256_width;
+ swath_height_c = rq_param->misc.rq_c.blk256_width;
+ }
+
+ if (swath_height_l > 0)
+ log2_swath_height_l = dml_log2(swath_height_l);
+
+ if (req128_l && log2_swath_height_l > 0)
+ log2_swath_height_l -= 1;
+
+ if (swath_height_c > 0)
+ log2_swath_height_c = dml_log2(swath_height_c);
+
+ if (req128_c && log2_swath_height_c > 0)
+ log2_swath_height_c -= 1;
}
+
rq_param->dlg.rq_l.swath_height = 1 << log2_swath_height_l;
rq_param->dlg.rq_c.swath_height = 1 << log2_swath_height_c;
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20v2.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20v2.c
index 9c78446..6a6d597 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20v2.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20v2.c
@@ -293,13 +293,31 @@ static void handle_det_buf_split(struct display_mode_lib *mode_lib,
if (surf_linear) {
log2_swath_height_l = 0;
log2_swath_height_c = 0;
- } else if (!surf_vert) {
- log2_swath_height_l = dml_log2(rq_param->misc.rq_l.blk256_height) - req128_l;
- log2_swath_height_c = dml_log2(rq_param->misc.rq_c.blk256_height) - req128_c;
} else {
- log2_swath_height_l = dml_log2(rq_param->misc.rq_l.blk256_width) - req128_l;
- log2_swath_height_c = dml_log2(rq_param->misc.rq_c.blk256_width) - req128_c;
+ unsigned int swath_height_l;
+ unsigned int swath_height_c;
+
+ if (!surf_vert) {
+ swath_height_l = rq_param->misc.rq_l.blk256_height;
+ swath_height_c = rq_param->misc.rq_c.blk256_height;
+ } else {
+ swath_height_l = rq_param->misc.rq_l.blk256_width;
+ swath_height_c = rq_param->misc.rq_c.blk256_width;
+ }
+
+ if (swath_height_l > 0)
+ log2_swath_height_l = dml_log2(swath_height_l);
+
+ if (req128_l && log2_swath_height_l > 0)
+ log2_swath_height_l -= 1;
+
+ if (swath_height_c > 0)
+ log2_swath_height_c = dml_log2(swath_height_c);
+
+ if (req128_c && log2_swath_height_c > 0)
+ log2_swath_height_c -= 1;
}
+
rq_param->dlg.rq_l.swath_height = 1 << log2_swath_height_l;
rq_param->dlg.rq_c.swath_height = 1 << log2_swath_height_c;
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_rq_dlg_calc_21.c b/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_rq_dlg_calc_21.c
index edd41d3..dc1c81a 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_rq_dlg_calc_21.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_rq_dlg_calc_21.c
@@ -277,13 +277,31 @@ static void handle_det_buf_split(
if (surf_linear) {
log2_swath_height_l = 0;
log2_swath_height_c = 0;
- } else if (!surf_vert) {
- log2_swath_height_l = dml_log2(rq_param->misc.rq_l.blk256_height) - req128_l;
- log2_swath_height_c = dml_log2(rq_param->misc.rq_c.blk256_height) - req128_c;
} else {
- log2_swath_height_l = dml_log2(rq_param->misc.rq_l.blk256_width) - req128_l;
- log2_swath_height_c = dml_log2(rq_param->misc.rq_c.blk256_width) - req128_c;
+ unsigned int swath_height_l;
+ unsigned int swath_height_c;
+
+ if (!surf_vert) {
+ swath_height_l = rq_param->misc.rq_l.blk256_height;
+ swath_height_c = rq_param->misc.rq_c.blk256_height;
+ } else {
+ swath_height_l = rq_param->misc.rq_l.blk256_width;
+ swath_height_c = rq_param->misc.rq_c.blk256_width;
+ }
+
+ if (swath_height_l > 0)
+ log2_swath_height_l = dml_log2(swath_height_l);
+
+ if (req128_l && log2_swath_height_l > 0)
+ log2_swath_height_l -= 1;
+
+ if (swath_height_c > 0)
+ log2_swath_height_c = dml_log2(swath_height_c);
+
+ if (req128_c && log2_swath_height_c > 0)
+ log2_swath_height_c -= 1;
}
+
rq_param->dlg.rq_l.swath_height = 1 << log2_swath_height_l;
rq_param->dlg.rq_c.swath_height = 1 << log2_swath_height_c;
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_rq_dlg_calc_30.c b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_rq_dlg_calc_30.c
index 416bf6f..58c312f80 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_rq_dlg_calc_30.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_rq_dlg_calc_30.c
@@ -237,13 +237,31 @@ static void handle_det_buf_split(struct display_mode_lib *mode_lib,
if (surf_linear) {
log2_swath_height_l = 0;
log2_swath_height_c = 0;
- } else if (!surf_vert) {
- log2_swath_height_l = dml_log2(rq_param->misc.rq_l.blk256_height) - req128_l;
- log2_swath_height_c = dml_log2(rq_param->misc.rq_c.blk256_height) - req128_c;
} else {
- log2_swath_height_l = dml_log2(rq_param->misc.rq_l.blk256_width) - req128_l;
- log2_swath_height_c = dml_log2(rq_param->misc.rq_c.blk256_width) - req128_c;
+ unsigned int swath_height_l;
+ unsigned int swath_height_c;
+
+ if (!surf_vert) {
+ swath_height_l = rq_param->misc.rq_l.blk256_height;
+ swath_height_c = rq_param->misc.rq_c.blk256_height;
+ } else {
+ swath_height_l = rq_param->misc.rq_l.blk256_width;
+ swath_height_c = rq_param->misc.rq_c.blk256_width;
+ }
+
+ if (swath_height_l > 0)
+ log2_swath_height_l = dml_log2(swath_height_l);
+
+ if (req128_l && log2_swath_height_l > 0)
+ log2_swath_height_l -= 1;
+
+ if (swath_height_c > 0)
+ log2_swath_height_c = dml_log2(swath_height_c);
+
+ if (req128_c && log2_swath_height_c > 0)
+ log2_swath_height_c -= 1;
}
+
rq_param->dlg.rq_l.swath_height = 1 << log2_swath_height_l;
rq_param->dlg.rq_c.swath_height = 1 << log2_swath_height_c;
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c b/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c
index 4c3e9cc..414da64 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c
@@ -344,13 +344,31 @@ static void handle_det_buf_split(
if (surf_linear) {
log2_swath_height_l = 0;
log2_swath_height_c = 0;
- } else if (!surf_vert) {
- log2_swath_height_l = dml_log2(rq_param->misc.rq_l.blk256_height) - req128_l;
- log2_swath_height_c = dml_log2(rq_param->misc.rq_c.blk256_height) - req128_c;
} else {
- log2_swath_height_l = dml_log2(rq_param->misc.rq_l.blk256_width) - req128_l;
- log2_swath_height_c = dml_log2(rq_param->misc.rq_c.blk256_width) - req128_c;
+ unsigned int swath_height_l;
+ unsigned int swath_height_c;
+
+ if (!surf_vert) {
+ swath_height_l = rq_param->misc.rq_l.blk256_height;
+ swath_height_c = rq_param->misc.rq_c.blk256_height;
+ } else {
+ swath_height_l = rq_param->misc.rq_l.blk256_width;
+ swath_height_c = rq_param->misc.rq_c.blk256_width;
+ }
+
+ if (swath_height_l > 0)
+ log2_swath_height_l = dml_log2(swath_height_l);
+
+ if (req128_l && log2_swath_height_l > 0)
+ log2_swath_height_l -= 1;
+
+ if (swath_height_c > 0)
+ log2_swath_height_c = dml_log2(swath_height_c);
+
+ if (req128_c && log2_swath_height_c > 0)
+ log2_swath_height_c -= 1;
}
+
rq_param->dlg.rq_l.swath_height = 1 << log2_swath_height_l;
rq_param->dlg.rq_c.swath_height = 1 << log2_swath_height_c;
diff --git a/drivers/gpu/drm/amd/display/dc/hdcp/hdcp_msg.c b/drivers/gpu/drm/amd/display/dc/hdcp/hdcp_msg.c
index 5e384a8..51855a2 100644
--- a/drivers/gpu/drm/amd/display/dc/hdcp/hdcp_msg.c
+++ b/drivers/gpu/drm/amd/display/dc/hdcp/hdcp_msg.c
@@ -39,7 +39,7 @@
#define HDCP14_KSV_SIZE 5
#define HDCP14_MAX_KSV_FIFO_SIZE 127*HDCP14_KSV_SIZE
-static const bool hdcp_cmd_is_read[] = {
+static const bool hdcp_cmd_is_read[HDCP_MESSAGE_ID_MAX] = {
[HDCP_MESSAGE_ID_READ_BKSV] = true,
[HDCP_MESSAGE_ID_READ_RI_R0] = true,
[HDCP_MESSAGE_ID_READ_PJ] = true,
@@ -75,7 +75,7 @@ static const bool hdcp_cmd_is_read[] = {
[HDCP_MESSAGE_ID_WRITE_CONTENT_STREAM_TYPE] = false
};
-static const uint8_t hdcp_i2c_offsets[] = {
+static const uint8_t hdcp_i2c_offsets[HDCP_MESSAGE_ID_MAX] = {
[HDCP_MESSAGE_ID_READ_BKSV] = 0x0,
[HDCP_MESSAGE_ID_READ_RI_R0] = 0x8,
[HDCP_MESSAGE_ID_READ_PJ] = 0xA,
@@ -106,7 +106,8 @@ static const uint8_t hdcp_i2c_offsets[] = {
[HDCP_MESSAGE_ID_WRITE_REPEATER_AUTH_SEND_ACK] = 0x60,
[HDCP_MESSAGE_ID_WRITE_REPEATER_AUTH_STREAM_MANAGE] = 0x60,
[HDCP_MESSAGE_ID_READ_REPEATER_AUTH_STREAM_READY] = 0x80,
- [HDCP_MESSAGE_ID_READ_RXSTATUS] = 0x70
+ [HDCP_MESSAGE_ID_READ_RXSTATUS] = 0x70,
+ [HDCP_MESSAGE_ID_WRITE_CONTENT_STREAM_TYPE] = 0x0,
};
struct protection_properties {
@@ -184,7 +185,7 @@ static const struct protection_properties hdmi_14_protection = {
.process_transaction = hdmi_14_process_transaction
};
-static const uint32_t hdcp_dpcd_addrs[] = {
+static const uint32_t hdcp_dpcd_addrs[HDCP_MESSAGE_ID_MAX] = {
[HDCP_MESSAGE_ID_READ_BKSV] = 0x68000,
[HDCP_MESSAGE_ID_READ_RI_R0] = 0x68005,
[HDCP_MESSAGE_ID_READ_PJ] = 0xFFFFFFFF,
diff --git a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c
index 3a367a59..972f260 100644
--- a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c
+++ b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c
@@ -789,6 +789,8 @@ enum mod_hdcp_status mod_hdcp_hdcp2_validate_rx_id_list(struct mod_hdcp *hdcp)
TA_HDCP2_MSG_AUTHENTICATION_STATUS__RECEIVERID_REVOKED) {
hdcp->connection.is_hdcp2_revoked = 1;
status = MOD_HDCP_STATUS_HDCP2_RX_ID_LIST_REVOKED;
+ } else {
+ status = MOD_HDCP_STATUS_HDCP2_VALIDATE_RX_ID_LIST_FAILURE;
}
}
mutex_unlock(&psp->hdcp_context.mutex);
diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
index ed4eafc..132c269 100644
--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
+++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
@@ -5159,7 +5159,7 @@ static int vega10_set_power_profile_mode(struct pp_hwmgr *hwmgr, long *input, ui
out:
smum_send_msg_to_smc_with_parameter(hwmgr, PPSMC_MSG_SetWorkloadMask,
- 1 << power_profile_mode,
+ (!power_profile_mode) ? 0 : 1 << (power_profile_mode - 1),
NULL);
hwmgr->power_profile_mode = power_profile_mode;
diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
index 5cc45b1..e589321 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
@@ -2001,6 +2001,7 @@ int smu_set_power_limit(struct smu_context *smu, uint32_t limit)
dev_err(smu->adev->dev,
"New power limit (%d) is over the max allowed %d\n",
limit, smu->max_power_limit);
+ ret = -EINVAL;
goto out;
}
diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
index f2c8719..2937784 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
@@ -1110,7 +1110,6 @@ static int navi10_force_clk_levels(struct smu_context *smu,
case SMU_SOCCLK:
case SMU_MCLK:
case SMU_UCLK:
- case SMU_DCEFCLK:
case SMU_FCLK:
/* There is only 2 levels for fine grained DPM */
if (navi10_is_support_fine_grained_dpm(smu, clk_type)) {
@@ -1130,6 +1129,10 @@ static int navi10_force_clk_levels(struct smu_context *smu,
if (ret)
return size;
break;
+ case SMU_DCEFCLK:
+ dev_info(smu->adev->dev,"Setting DCEFCLK min/max dpm level is not supported!\n");
+ break;
+
default:
break;
}
@@ -2603,6 +2606,8 @@ static ssize_t navi10_get_gpu_metrics(struct smu_context *smu,
static int navi10_enable_mgpu_fan_boost(struct smu_context *smu)
{
+ struct smu_table_context *table_context = &smu->smu_table;
+ PPTable_t *smc_pptable = table_context->driver_pptable;
struct amdgpu_device *adev = smu->adev;
uint32_t param = 0;
@@ -2610,6 +2615,13 @@ static int navi10_enable_mgpu_fan_boost(struct smu_context *smu)
if (adev->asic_type == CHIP_NAVI12)
return 0;
+ /*
+ * Skip the MGpuFanBoost setting for those ASICs
+ * which do not support it
+ */
+ if (!smc_pptable->MGpuFanBoostLimitRpm)
+ return 0;
+
/* Workaround for WS SKU */
if (adev->pdev->device == 0x7312 &&
adev->pdev->revision == 0)
diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
index 31da8fa..8556c22 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
@@ -1018,7 +1018,6 @@ static int sienna_cichlid_force_clk_levels(struct smu_context *smu,
case SMU_SOCCLK:
case SMU_MCLK:
case SMU_UCLK:
- case SMU_DCEFCLK:
case SMU_FCLK:
/* There is only 2 levels for fine grained DPM */
if (sienna_cichlid_is_support_fine_grained_dpm(smu, clk_type)) {
@@ -1038,6 +1037,9 @@ static int sienna_cichlid_force_clk_levels(struct smu_context *smu,
if (ret)
goto forec_level_out;
break;
+ case SMU_DCEFCLK:
+ dev_info(smu->adev->dev,"Setting DCEFCLK min/max dpm level is not supported!\n");
+ break;
default:
break;
}
@@ -2713,6 +2715,16 @@ static ssize_t sienna_cichlid_get_gpu_metrics(struct smu_context *smu,
static int sienna_cichlid_enable_mgpu_fan_boost(struct smu_context *smu)
{
+ struct smu_table_context *table_context = &smu->smu_table;
+ PPTable_t *smc_pptable = table_context->driver_pptable;
+
+ /*
+ * Skip the MGpuFanBoost setting for those ASICs
+ * which do not support it
+ */
+ if (!smc_pptable->MGpuFanBoostLimitRpm)
+ return 0;
+
return smu_cmn_send_smc_msg_with_param(smu,
SMU_MSG_SetMGpuFanBoostLimitRpm,
0,
diff --git a/drivers/gpu/drm/arm/display/include/malidp_utils.h b/drivers/gpu/drm/arm/display/include/malidp_utils.h
index 3bc383d..49a1d7f 100644
--- a/drivers/gpu/drm/arm/display/include/malidp_utils.h
+++ b/drivers/gpu/drm/arm/display/include/malidp_utils.h
@@ -13,9 +13,6 @@
#define has_bit(nr, mask) (BIT(nr) & (mask))
#define has_bits(bits, mask) (((bits) & (mask)) == (bits))
-#define dp_for_each_set_bit(bit, mask) \
- for_each_set_bit((bit), ((unsigned long *)&(mask)), sizeof(mask) * 8)
-
#define dp_wait_cond(__cond, __tries, __min_range, __max_range) \
({ \
int num_tries = __tries; \
diff --git a/drivers/gpu/drm/arm/display/komeda/komeda_pipeline.c b/drivers/gpu/drm/arm/display/komeda/komeda_pipeline.c
index 452e505..79a6339 100644
--- a/drivers/gpu/drm/arm/display/komeda/komeda_pipeline.c
+++ b/drivers/gpu/drm/arm/display/komeda/komeda_pipeline.c
@@ -46,8 +46,9 @@ void komeda_pipeline_destroy(struct komeda_dev *mdev,
{
struct komeda_component *c;
int i;
+ unsigned long avail_comps = pipe->avail_comps;
- dp_for_each_set_bit(i, pipe->avail_comps) {
+ for_each_set_bit(i, &avail_comps, 32) {
c = komeda_pipeline_get_component(pipe, i);
komeda_component_destroy(mdev, c);
}
@@ -246,6 +247,7 @@ static void komeda_pipeline_dump(struct komeda_pipeline *pipe)
{
struct komeda_component *c;
int id;
+ unsigned long avail_comps = pipe->avail_comps;
DRM_INFO("Pipeline-%d: n_layers: %d, n_scalers: %d, output: %s.\n",
pipe->id, pipe->n_layers, pipe->n_scalers,
@@ -257,7 +259,7 @@ static void komeda_pipeline_dump(struct komeda_pipeline *pipe)
pipe->of_output_links[1] ?
pipe->of_output_links[1]->full_name : "none");
- dp_for_each_set_bit(id, pipe->avail_comps) {
+ for_each_set_bit(id, &avail_comps, 32) {
c = komeda_pipeline_get_component(pipe, id);
komeda_component_dump(c);
@@ -269,8 +271,9 @@ static void komeda_component_verify_inputs(struct komeda_component *c)
struct komeda_pipeline *pipe = c->pipeline;
struct komeda_component *input;
int id;
+ unsigned long supported_inputs = c->supported_inputs;
- dp_for_each_set_bit(id, c->supported_inputs) {
+ for_each_set_bit(id, &supported_inputs, 32) {
input = komeda_pipeline_get_component(pipe, id);
if (!input) {
c->supported_inputs &= ~(BIT(id));
@@ -301,8 +304,9 @@ static void komeda_pipeline_assemble(struct komeda_pipeline *pipe)
struct komeda_component *c;
struct komeda_layer *layer;
int i, id;
+ unsigned long avail_comps = pipe->avail_comps;
- dp_for_each_set_bit(id, pipe->avail_comps) {
+ for_each_set_bit(id, &avail_comps, 32) {
c = komeda_pipeline_get_component(pipe, id);
komeda_component_verify_inputs(c);
}
@@ -354,13 +358,15 @@ void komeda_pipeline_dump_register(struct komeda_pipeline *pipe,
{
struct komeda_component *c;
u32 id;
+ unsigned long avail_comps;
seq_printf(sf, "\n======== Pipeline-%d ==========\n", pipe->id);
if (pipe->funcs && pipe->funcs->dump_register)
pipe->funcs->dump_register(pipe, sf);
- dp_for_each_set_bit(id, pipe->avail_comps) {
+ avail_comps = pipe->avail_comps;
+ for_each_set_bit(id, &avail_comps, 32) {
c = komeda_pipeline_get_component(pipe, id);
seq_printf(sf, "\n------%s------\n", c->name);
diff --git a/drivers/gpu/drm/arm/display/komeda/komeda_pipeline_state.c b/drivers/gpu/drm/arm/display/komeda/komeda_pipeline_state.c
index 8f32ae7..c3cdf28 100644
--- a/drivers/gpu/drm/arm/display/komeda/komeda_pipeline_state.c
+++ b/drivers/gpu/drm/arm/display/komeda/komeda_pipeline_state.c
@@ -1231,14 +1231,15 @@ komeda_pipeline_unbound_components(struct komeda_pipeline *pipe,
struct komeda_pipeline_state *old = priv_to_pipe_st(pipe->obj.state);
struct komeda_component_state *c_st;
struct komeda_component *c;
- u32 disabling_comps, id;
+ u32 id;
+ unsigned long disabling_comps;
WARN_ON(!old);
disabling_comps = (~new->active_comps) & old->active_comps;
/* unbound all disabling component */
- dp_for_each_set_bit(id, disabling_comps) {
+ for_each_set_bit(id, &disabling_comps, 32) {
c = komeda_pipeline_get_component(pipe, id);
c_st = komeda_component_get_state_and_set_user(c,
drm_st, NULL, new->crtc);
@@ -1286,7 +1287,8 @@ bool komeda_pipeline_disable(struct komeda_pipeline *pipe,
struct komeda_pipeline_state *old;
struct komeda_component *c;
struct komeda_component_state *c_st;
- u32 id, disabling_comps = 0;
+ u32 id;
+ unsigned long disabling_comps;
old = komeda_pipeline_get_old_state(pipe, old_state);
@@ -1296,10 +1298,10 @@ bool komeda_pipeline_disable(struct komeda_pipeline *pipe,
disabling_comps = old->active_comps &
pipe->standalone_disabled_comps;
- DRM_DEBUG_ATOMIC("PIPE%d: active_comps: 0x%x, disabling_comps: 0x%x.\n",
+ DRM_DEBUG_ATOMIC("PIPE%d: active_comps: 0x%x, disabling_comps: 0x%lx.\n",
pipe->id, old->active_comps, disabling_comps);
- dp_for_each_set_bit(id, disabling_comps) {
+ for_each_set_bit(id, &disabling_comps, 32) {
c = komeda_pipeline_get_component(pipe, id);
c_st = priv_to_comp_st(c->obj.state);
@@ -1330,16 +1332,17 @@ void komeda_pipeline_update(struct komeda_pipeline *pipe,
struct komeda_pipeline_state *new = priv_to_pipe_st(pipe->obj.state);
struct komeda_pipeline_state *old;
struct komeda_component *c;
- u32 id, changed_comps = 0;
+ u32 id;
+ unsigned long changed_comps;
old = komeda_pipeline_get_old_state(pipe, old_state);
changed_comps = new->active_comps | old->active_comps;
- DRM_DEBUG_ATOMIC("PIPE%d: active_comps: 0x%x, changed: 0x%x.\n",
+ DRM_DEBUG_ATOMIC("PIPE%d: active_comps: 0x%x, changed: 0x%lx.\n",
pipe->id, new->active_comps, changed_comps);
- dp_for_each_set_bit(id, changed_comps) {
+ for_each_set_bit(id, &changed_comps, 32) {
c = komeda_pipeline_get_component(pipe, id);
if (new->active_comps & BIT(c->id))
diff --git a/drivers/gpu/drm/ast/ast_drv.c b/drivers/gpu/drm/ast/ast_drv.c
index f0b4af1..59d2466 100644
--- a/drivers/gpu/drm/ast/ast_drv.c
+++ b/drivers/gpu/drm/ast/ast_drv.c
@@ -30,6 +30,7 @@
#include <linux/module.h>
#include <linux/pci.h>
+#include <drm/drm_atomic_helper.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_drv.h>
#include <drm/drm_fb_helper.h>
@@ -138,6 +139,7 @@ static void ast_pci_remove(struct pci_dev *pdev)
struct drm_device *dev = pci_get_drvdata(pdev);
drm_dev_unregister(dev);
+ drm_atomic_helper_shutdown(dev);
}
static int ast_drm_freeze(struct drm_device *dev)
diff --git a/drivers/gpu/drm/ast/ast_mode.c b/drivers/gpu/drm/ast/ast_mode.c
index 0a1e1cf..a3c2f76 100644
--- a/drivers/gpu/drm/ast/ast_mode.c
+++ b/drivers/gpu/drm/ast/ast_mode.c
@@ -688,7 +688,7 @@ ast_cursor_plane_helper_atomic_update(struct drm_plane *plane,
unsigned int offset_x, offset_y;
offset_x = AST_MAX_HWC_WIDTH - fb->width;
- offset_y = AST_MAX_HWC_WIDTH - fb->height;
+ offset_y = AST_MAX_HWC_HEIGHT - fb->height;
if (state->fb != old_state->fb) {
/* A new cursor image was installed. */
diff --git a/drivers/gpu/drm/bridge/Kconfig b/drivers/gpu/drm/bridge/Kconfig
index ef91646..e145cbb 100644
--- a/drivers/gpu/drm/bridge/Kconfig
+++ b/drivers/gpu/drm/bridge/Kconfig
@@ -54,6 +54,7 @@
depends on OF
select DRM_PANEL_BRIDGE
select DRM_KMS_HELPER
+ select DRM_MIPI_DSI
select REGMAP_I2C
help
Driver for Lontium LT9611 DSI to HDMI bridge
@@ -138,6 +139,7 @@
tristate "Silicon Image sii902x RGB/HDMI bridge"
depends on OF
select DRM_KMS_HELPER
+ select DRM_MIPI_DSI
select REGMAP_I2C
select I2C_MUX
select SND_SOC_HDMI_CODEC if SND_SOC
@@ -187,6 +189,7 @@
tristate "Toshiba TC358767 eDP bridge"
depends on OF
select DRM_KMS_HELPER
+ select DRM_MIPI_DSI
select REGMAP_I2C
select DRM_PANEL
help
diff --git a/drivers/gpu/drm/bridge/panel.c b/drivers/gpu/drm/bridge/panel.c
index 0ddc375..c916f4b 100644
--- a/drivers/gpu/drm/bridge/panel.c
+++ b/drivers/gpu/drm/bridge/panel.c
@@ -87,6 +87,18 @@ static int panel_bridge_attach(struct drm_bridge *bridge,
static void panel_bridge_detach(struct drm_bridge *bridge)
{
+ struct panel_bridge *panel_bridge = drm_bridge_to_panel_bridge(bridge);
+ struct drm_connector *connector = &panel_bridge->connector;
+
+ /*
+ * Cleanup the connector if we know it was initialized.
+ *
+ * FIXME: This wouldn't be needed if the panel_bridge structure was
+ * allocated with drmm_kzalloc(). This might be tricky since the
+ * drm_device pointer can only be retrieved when the bridge is attached.
+ */
+ if (connector->dev)
+ drm_connector_cleanup(connector);
}
static void panel_bridge_pre_enable(struct drm_bridge *bridge)
diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
index 9cf35da..a08cc6b 100644
--- a/drivers/gpu/drm/drm_dp_mst_topology.c
+++ b/drivers/gpu/drm/drm_dp_mst_topology.c
@@ -1154,6 +1154,7 @@ static void build_clear_payload_id_table(struct drm_dp_sideband_msg_tx *msg)
req.req_type = DP_CLEAR_PAYLOAD_ID_TABLE;
drm_dp_encode_sideband_req(&req, msg);
+ msg->path_msg = true;
}
static int build_enum_path_resources(struct drm_dp_sideband_msg_tx *msg,
@@ -2824,15 +2825,21 @@ static int set_hdr_from_dst_qlock(struct drm_dp_sideband_msg_hdr *hdr,
req_type = txmsg->msg[0] & 0x7f;
if (req_type == DP_CONNECTION_STATUS_NOTIFY ||
- req_type == DP_RESOURCE_STATUS_NOTIFY)
+ req_type == DP_RESOURCE_STATUS_NOTIFY ||
+ req_type == DP_CLEAR_PAYLOAD_ID_TABLE)
hdr->broadcast = 1;
else
hdr->broadcast = 0;
hdr->path_msg = txmsg->path_msg;
- hdr->lct = mstb->lct;
- hdr->lcr = mstb->lct - 1;
- if (mstb->lct > 1)
- memcpy(hdr->rad, mstb->rad, mstb->lct / 2);
+ if (hdr->broadcast) {
+ hdr->lct = 1;
+ hdr->lcr = 6;
+ } else {
+ hdr->lct = mstb->lct;
+ hdr->lcr = mstb->lct - 1;
+ }
+
+ memcpy(hdr->rad, mstb->rad, hdr->lct / 2);
return 0;
}
diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
index 58f5dc2..f6bdec7 100644
--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
+++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
@@ -84,6 +84,13 @@ static const struct drm_dmi_panel_orientation_data itworks_tw891 = {
.orientation = DRM_MODE_PANEL_ORIENTATION_RIGHT_UP,
};
+static const struct drm_dmi_panel_orientation_data onegx1_pro = {
+ .width = 1200,
+ .height = 1920,
+ .bios_dates = (const char * const []){ "12/17/2020", NULL },
+ .orientation = DRM_MODE_PANEL_ORIENTATION_RIGHT_UP,
+};
+
static const struct drm_dmi_panel_orientation_data lcd720x1280_rightside_up = {
.width = 720,
.height = 1280,
@@ -211,6 +218,13 @@ static const struct dmi_system_id orientation_data[] = {
DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "Lenovo ideapad D330-10IGM"),
},
.driver_data = (void *)&lcd1200x1920_rightside_up,
+ }, { /* OneGX1 Pro */
+ .matches = {
+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "SYSTEM_MANUFACTURER"),
+ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "SYSTEM_PRODUCT_NAME"),
+ DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "Default string"),
+ },
+ .driver_data = (void *)&onegx1_pro,
}, { /* VIOS LTH17 */
.matches = {
DMI_EXACT_MATCH(DMI_SYS_VENDOR, "VIOS"),
diff --git a/drivers/gpu/drm/drm_probe_helper.c b/drivers/gpu/drm/drm_probe_helper.c
index d601772..e5432dc 100644
--- a/drivers/gpu/drm/drm_probe_helper.c
+++ b/drivers/gpu/drm/drm_probe_helper.c
@@ -623,6 +623,7 @@ static void output_poll_execute(struct work_struct *work)
struct drm_connector_list_iter conn_iter;
enum drm_connector_status old_status;
bool repoll = false, changed;
+ u64 old_epoch_counter;
if (!dev->mode_config.poll_enabled)
return;
@@ -659,8 +660,9 @@ static void output_poll_execute(struct work_struct *work)
repoll = true;
+ old_epoch_counter = connector->epoch_counter;
connector->status = drm_helper_probe_detect(connector, NULL, false);
- if (old_status != connector->status) {
+ if (old_epoch_counter != connector->epoch_counter) {
const char *old, *new;
/*
@@ -689,6 +691,9 @@ static void output_poll_execute(struct work_struct *work)
connector->base.id,
connector->name,
old, new);
+ DRM_DEBUG_KMS("[CONNECTOR:%d:%s] epoch counter %llu -> %llu\n",
+ connector->base.id, connector->name,
+ old_epoch_counter, connector->epoch_counter);
changed = true;
}
diff --git a/drivers/gpu/drm/i915/display/intel_acpi.c b/drivers/gpu/drm/i915/display/intel_acpi.c
index e21fb14..833d0c1 100644
--- a/drivers/gpu/drm/i915/display/intel_acpi.c
+++ b/drivers/gpu/drm/i915/display/intel_acpi.c
@@ -84,13 +84,31 @@ static void intel_dsm_platform_mux_info(acpi_handle dhandle)
return;
}
+ if (!pkg->package.count) {
+ DRM_DEBUG_DRIVER("no connection in _DSM\n");
+ return;
+ }
+
connector_count = &pkg->package.elements[0];
DRM_DEBUG_DRIVER("MUX info connectors: %lld\n",
(unsigned long long)connector_count->integer.value);
for (i = 1; i < pkg->package.count; i++) {
union acpi_object *obj = &pkg->package.elements[i];
- union acpi_object *connector_id = &obj->package.elements[0];
- union acpi_object *info = &obj->package.elements[1];
+ union acpi_object *connector_id;
+ union acpi_object *info;
+
+ if (obj->type != ACPI_TYPE_PACKAGE || obj->package.count < 2) {
+ DRM_DEBUG_DRIVER("Invalid object for MUX #%d\n", i);
+ continue;
+ }
+
+ connector_id = &obj->package.elements[0];
+ info = &obj->package.elements[1];
+ if (info->type != ACPI_TYPE_BUFFER || info->buffer.length < 4) {
+ DRM_DEBUG_DRIVER("Invalid info for MUX obj #%d\n", i);
+ continue;
+ }
+
DRM_DEBUG_DRIVER("Connector id: 0x%016llx\n",
(unsigned long long)connector_id->integer.value);
DRM_DEBUG_DRIVER(" port id: %s\n",
diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 1937b3d..eb02ecb 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -4136,7 +4136,7 @@ static void chv_dp_post_pll_disable(struct intel_atomic_state *state,
* link status information
*/
bool
-intel_dp_get_link_status(struct intel_dp *intel_dp, u8 link_status[DP_LINK_STATUS_SIZE])
+intel_dp_get_link_status(struct intel_dp *intel_dp, u8 *link_status)
{
return drm_dp_dpcd_read(&intel_dp->aux, DP_LANE0_1_STATUS, link_status,
DP_LINK_STATUS_SIZE) == DP_LINK_STATUS_SIZE;
@@ -5655,7 +5655,18 @@ intel_dp_check_mst_status(struct intel_dp *intel_dp)
drm_WARN_ON_ONCE(&i915->drm, intel_dp->active_mst_links < 0);
for (;;) {
- u8 esi[DP_DPRX_ESI_LEN] = {};
+ /*
+ * The +2 is because DP_DPRX_ESI_LEN is 14, but we then
+ * pass in "esi+10" to drm_dp_channel_eq_ok(), which
+ * takes a 6-byte array. So we actually need 16 bytes
+ * here.
+ *
+ * Somebody who knows what the limits actually are
+ * should check this, but for now this is at least
+ * harmless and avoids a valid compiler warning about
+ * using more of the array than we have allocated.
+ */
+ u8 esi[DP_DPRX_ESI_LEN+2] = {};
bool handled;
int retry;
diff --git a/drivers/gpu/drm/i915/display/intel_overlay.c b/drivers/gpu/drm/i915/display/intel_overlay.c
index b73d51e..0e60aec 100644
--- a/drivers/gpu/drm/i915/display/intel_overlay.c
+++ b/drivers/gpu/drm/i915/display/intel_overlay.c
@@ -382,7 +382,7 @@ static void intel_overlay_off_tail(struct intel_overlay *overlay)
i830_overlay_clock_gating(dev_priv, true);
}
-static void
+__i915_active_call static void
intel_overlay_last_flip_retire(struct i915_active *active)
{
struct intel_overlay *overlay =
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
index 3d69e51f..5754bcc 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
@@ -189,7 +189,7 @@ compute_partial_view(const struct drm_i915_gem_object *obj,
struct i915_ggtt_view view;
if (i915_gem_object_is_tiled(obj))
- chunk = roundup(chunk, tile_row_pages(obj));
+ chunk = roundup(chunk, tile_row_pages(obj) ?: 1);
view.type = I915_GGTT_VIEW_PARTIAL;
view.partial.offset = rounddown(page_offset, chunk);
diff --git a/drivers/gpu/drm/i915/gt/gen7_renderclear.c b/drivers/gpu/drm/i915/gt/gen7_renderclear.c
index 4adbc2b..c724fba 100644
--- a/drivers/gpu/drm/i915/gt/gen7_renderclear.c
+++ b/drivers/gpu/drm/i915/gt/gen7_renderclear.c
@@ -397,7 +397,10 @@ static void emit_batch(struct i915_vma * const vma,
gen7_emit_pipeline_invalidate(&cmds);
batch_add(&cmds, MI_LOAD_REGISTER_IMM(2));
batch_add(&cmds, i915_mmio_reg_offset(CACHE_MODE_0_GEN7));
- batch_add(&cmds, 0xffff0000);
+ batch_add(&cmds, 0xffff0000 |
+ ((IS_IVB_GT1(i915) || IS_VALLEYVIEW(i915)) ?
+ HIZ_RAW_STALL_OPT_DISABLE :
+ 0));
batch_add(&cmds, i915_mmio_reg_offset(CACHE_MODE_1));
batch_add(&cmds, 0xffff0000 | PIXEL_SUBSPAN_COLLECT_OPT_DISABLE);
gen7_emit_pipeline_invalidate(&cmds);
diff --git a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
index 38c7069..f08e25e 100644
--- a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
+++ b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
@@ -628,7 +628,6 @@ static int gen8_preallocate_top_level_pdp(struct i915_ppgtt *ppgtt)
err = pin_pt_dma(vm, pde->pt.base);
if (err) {
- i915_gem_object_put(pde->pt.base);
free_pd(vm, pde);
return err;
}
diff --git a/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c b/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
index 6614f67..b5937b3 100644
--- a/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
+++ b/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
@@ -652,8 +652,8 @@ static void detect_bit_6_swizzle(struct i915_ggtt *ggtt)
* banks of memory are paired and unswizzled on the
* uneven portion, so leave that as unknown.
*/
- if (intel_uncore_read(uncore, C0DRB3) ==
- intel_uncore_read(uncore, C1DRB3)) {
+ if (intel_uncore_read16(uncore, C0DRB3) ==
+ intel_uncore_read16(uncore, C1DRB3)) {
swizzle_x = I915_BIT_6_SWIZZLE_9_10;
swizzle_y = I915_BIT_6_SWIZZLE_9;
}
diff --git a/drivers/gpu/drm/i915/gvt/display.c b/drivers/gpu/drm/i915/gvt/display.c
index d7898e87..2346502 100644
--- a/drivers/gpu/drm/i915/gvt/display.c
+++ b/drivers/gpu/drm/i915/gvt/display.c
@@ -173,21 +173,176 @@ static void emulate_monitor_status_change(struct intel_vgpu *vgpu)
int pipe;
if (IS_BROXTON(dev_priv)) {
+ enum transcoder trans;
+ enum port port;
+
+ /* Clear PIPE, DDI, PHY, HPD before setting new */
vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) &= ~(BXT_DE_PORT_HP_DDIA |
BXT_DE_PORT_HP_DDIB |
BXT_DE_PORT_HP_DDIC);
+ for_each_pipe(dev_priv, pipe) {
+ vgpu_vreg_t(vgpu, PIPECONF(pipe)) &=
+ ~(PIPECONF_ENABLE | I965_PIPECONF_ACTIVE);
+ vgpu_vreg_t(vgpu, DSPCNTR(pipe)) &= ~DISPLAY_PLANE_ENABLE;
+ vgpu_vreg_t(vgpu, SPRCTL(pipe)) &= ~SPRITE_ENABLE;
+ vgpu_vreg_t(vgpu, CURCNTR(pipe)) &= ~MCURSOR_MODE;
+ vgpu_vreg_t(vgpu, CURCNTR(pipe)) |= MCURSOR_MODE_DISABLE;
+ }
+
+ for (trans = TRANSCODER_A; trans <= TRANSCODER_EDP; trans++) {
+ vgpu_vreg_t(vgpu, TRANS_DDI_FUNC_CTL(trans)) &=
+ ~(TRANS_DDI_BPC_MASK | TRANS_DDI_MODE_SELECT_MASK |
+ TRANS_DDI_PORT_MASK | TRANS_DDI_FUNC_ENABLE);
+ }
+ vgpu_vreg_t(vgpu, TRANS_DDI_FUNC_CTL(TRANSCODER_A)) &=
+ ~(TRANS_DDI_BPC_MASK | TRANS_DDI_MODE_SELECT_MASK |
+ TRANS_DDI_PORT_MASK);
+
+ for (port = PORT_A; port <= PORT_C; port++) {
+ vgpu_vreg_t(vgpu, BXT_PHY_CTL(port)) &=
+ ~BXT_PHY_LANE_ENABLED;
+ vgpu_vreg_t(vgpu, BXT_PHY_CTL(port)) |=
+ (BXT_PHY_CMNLANE_POWERDOWN_ACK |
+ BXT_PHY_LANE_POWERDOWN_ACK);
+
+ vgpu_vreg_t(vgpu, BXT_PORT_PLL_ENABLE(port)) &=
+ ~(PORT_PLL_POWER_STATE | PORT_PLL_POWER_ENABLE |
+ PORT_PLL_REF_SEL | PORT_PLL_LOCK |
+ PORT_PLL_ENABLE);
+
+ vgpu_vreg_t(vgpu, DDI_BUF_CTL(port)) &=
+ ~(DDI_INIT_DISPLAY_DETECTED |
+ DDI_BUF_CTL_ENABLE);
+ vgpu_vreg_t(vgpu, DDI_BUF_CTL(port)) |= DDI_BUF_IS_IDLE;
+ }
+ vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &=
+ ~(PORTA_HOTPLUG_ENABLE | PORTA_HOTPLUG_STATUS_MASK);
+ vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &=
+ ~(PORTB_HOTPLUG_ENABLE | PORTB_HOTPLUG_STATUS_MASK);
+ vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &=
+ ~(PORTC_HOTPLUG_ENABLE | PORTC_HOTPLUG_STATUS_MASK);
+ /* No hpd_invert set in vgpu vbt, need to clear invert mask */
+ vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &= ~BXT_DDI_HPD_INVERT_MASK;
+ vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) &= ~BXT_DE_PORT_HOTPLUG_MASK;
+
+ vgpu_vreg_t(vgpu, BXT_P_CR_GT_DISP_PWRON) &= ~(BIT(0) | BIT(1));
+ vgpu_vreg_t(vgpu, BXT_PORT_CL1CM_DW0(DPIO_PHY0)) &=
+ ~PHY_POWER_GOOD;
+ vgpu_vreg_t(vgpu, BXT_PORT_CL1CM_DW0(DPIO_PHY1)) &=
+ ~PHY_POWER_GOOD;
+ vgpu_vreg_t(vgpu, BXT_PHY_CTL_FAMILY(DPIO_PHY0)) &= ~BIT(30);
+ vgpu_vreg_t(vgpu, BXT_PHY_CTL_FAMILY(DPIO_PHY1)) &= ~BIT(30);
+
+ vgpu_vreg_t(vgpu, SFUSE_STRAP) &= ~SFUSE_STRAP_DDIB_DETECTED;
+ vgpu_vreg_t(vgpu, SFUSE_STRAP) &= ~SFUSE_STRAP_DDIC_DETECTED;
+
+ /*
+ * Only 1 PIPE enabled in current vGPU display and PIPE_A is
+ * tied to TRANSCODER_A in HW, so it's safe to assume PIPE_A,
+ * TRANSCODER_A can be enabled. PORT_x depends on the input of
+ * setup_virtual_dp_monitor.
+ */
+ vgpu_vreg_t(vgpu, PIPECONF(PIPE_A)) |= PIPECONF_ENABLE;
+ vgpu_vreg_t(vgpu, PIPECONF(PIPE_A)) |= I965_PIPECONF_ACTIVE;
+
+ /*
+ * Golden M/N are calculated based on:
+ * 24 bpp, 4 lanes, 154000 pixel clk (from virtual EDID),
+ * DP link clk 1620 MHz and non-constant_n.
+ * TODO: calculate DP link symbol clk and stream clk m/n.
+ */
+ vgpu_vreg_t(vgpu, PIPE_DATA_M1(TRANSCODER_A)) = 63 << TU_SIZE_SHIFT;
+ vgpu_vreg_t(vgpu, PIPE_DATA_M1(TRANSCODER_A)) |= 0x5b425e;
+ vgpu_vreg_t(vgpu, PIPE_DATA_N1(TRANSCODER_A)) = 0x800000;
+ vgpu_vreg_t(vgpu, PIPE_LINK_M1(TRANSCODER_A)) = 0x3cd6e;
+ vgpu_vreg_t(vgpu, PIPE_LINK_N1(TRANSCODER_A)) = 0x80000;
+
+ /* Enable per-DDI/PORT vreg */
if (intel_vgpu_has_monitor_on_port(vgpu, PORT_A)) {
+ vgpu_vreg_t(vgpu, BXT_P_CR_GT_DISP_PWRON) |= BIT(1);
+ vgpu_vreg_t(vgpu, BXT_PORT_CL1CM_DW0(DPIO_PHY1)) |=
+ PHY_POWER_GOOD;
+ vgpu_vreg_t(vgpu, BXT_PHY_CTL_FAMILY(DPIO_PHY1)) |=
+ BIT(30);
+ vgpu_vreg_t(vgpu, BXT_PHY_CTL(PORT_A)) |=
+ BXT_PHY_LANE_ENABLED;
+ vgpu_vreg_t(vgpu, BXT_PHY_CTL(PORT_A)) &=
+ ~(BXT_PHY_CMNLANE_POWERDOWN_ACK |
+ BXT_PHY_LANE_POWERDOWN_ACK);
+ vgpu_vreg_t(vgpu, BXT_PORT_PLL_ENABLE(PORT_A)) |=
+ (PORT_PLL_POWER_STATE | PORT_PLL_POWER_ENABLE |
+ PORT_PLL_REF_SEL | PORT_PLL_LOCK |
+ PORT_PLL_ENABLE);
+ vgpu_vreg_t(vgpu, DDI_BUF_CTL(PORT_A)) |=
+ (DDI_BUF_CTL_ENABLE | DDI_INIT_DISPLAY_DETECTED);
+ vgpu_vreg_t(vgpu, DDI_BUF_CTL(PORT_A)) &=
+ ~DDI_BUF_IS_IDLE;
+ vgpu_vreg_t(vgpu, TRANS_DDI_FUNC_CTL(TRANSCODER_EDP)) |=
+ (TRANS_DDI_BPC_8 | TRANS_DDI_MODE_SELECT_DP_SST |
+ TRANS_DDI_FUNC_ENABLE);
+ vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |=
+ PORTA_HOTPLUG_ENABLE;
vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |=
BXT_DE_PORT_HP_DDIA;
}
if (intel_vgpu_has_monitor_on_port(vgpu, PORT_B)) {
+ vgpu_vreg_t(vgpu, SFUSE_STRAP) |= SFUSE_STRAP_DDIB_DETECTED;
+ vgpu_vreg_t(vgpu, BXT_P_CR_GT_DISP_PWRON) |= BIT(0);
+ vgpu_vreg_t(vgpu, BXT_PORT_CL1CM_DW0(DPIO_PHY0)) |=
+ PHY_POWER_GOOD;
+ vgpu_vreg_t(vgpu, BXT_PHY_CTL_FAMILY(DPIO_PHY0)) |=
+ BIT(30);
+ vgpu_vreg_t(vgpu, BXT_PHY_CTL(PORT_B)) |=
+ BXT_PHY_LANE_ENABLED;
+ vgpu_vreg_t(vgpu, BXT_PHY_CTL(PORT_B)) &=
+ ~(BXT_PHY_CMNLANE_POWERDOWN_ACK |
+ BXT_PHY_LANE_POWERDOWN_ACK);
+ vgpu_vreg_t(vgpu, BXT_PORT_PLL_ENABLE(PORT_B)) |=
+ (PORT_PLL_POWER_STATE | PORT_PLL_POWER_ENABLE |
+ PORT_PLL_REF_SEL | PORT_PLL_LOCK |
+ PORT_PLL_ENABLE);
+ vgpu_vreg_t(vgpu, DDI_BUF_CTL(PORT_B)) |=
+ DDI_BUF_CTL_ENABLE;
+ vgpu_vreg_t(vgpu, DDI_BUF_CTL(PORT_B)) &=
+ ~DDI_BUF_IS_IDLE;
+ vgpu_vreg_t(vgpu, TRANS_DDI_FUNC_CTL(TRANSCODER_A)) |=
+ (TRANS_DDI_BPC_8 | TRANS_DDI_MODE_SELECT_DP_SST |
+ (PORT_B << TRANS_DDI_PORT_SHIFT) |
+ TRANS_DDI_FUNC_ENABLE);
+ vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |=
+ PORTB_HOTPLUG_ENABLE;
vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |=
BXT_DE_PORT_HP_DDIB;
}
if (intel_vgpu_has_monitor_on_port(vgpu, PORT_C)) {
+ vgpu_vreg_t(vgpu, SFUSE_STRAP) |= SFUSE_STRAP_DDIC_DETECTED;
+ vgpu_vreg_t(vgpu, BXT_P_CR_GT_DISP_PWRON) |= BIT(0);
+ vgpu_vreg_t(vgpu, BXT_PORT_CL1CM_DW0(DPIO_PHY0)) |=
+ PHY_POWER_GOOD;
+ vgpu_vreg_t(vgpu, BXT_PHY_CTL_FAMILY(DPIO_PHY0)) |=
+ BIT(30);
+ vgpu_vreg_t(vgpu, BXT_PHY_CTL(PORT_C)) |=
+ BXT_PHY_LANE_ENABLED;
+ vgpu_vreg_t(vgpu, BXT_PHY_CTL(PORT_C)) &=
+ ~(BXT_PHY_CMNLANE_POWERDOWN_ACK |
+ BXT_PHY_LANE_POWERDOWN_ACK);
+ vgpu_vreg_t(vgpu, BXT_PORT_PLL_ENABLE(PORT_C)) |=
+ (PORT_PLL_POWER_STATE | PORT_PLL_POWER_ENABLE |
+ PORT_PLL_REF_SEL | PORT_PLL_LOCK |
+ PORT_PLL_ENABLE);
+ vgpu_vreg_t(vgpu, DDI_BUF_CTL(PORT_C)) |=
+ DDI_BUF_CTL_ENABLE;
+ vgpu_vreg_t(vgpu, DDI_BUF_CTL(PORT_C)) &=
+ ~DDI_BUF_IS_IDLE;
+ vgpu_vreg_t(vgpu, TRANS_DDI_FUNC_CTL(TRANSCODER_A)) |=
+ (TRANS_DDI_BPC_8 | TRANS_DDI_MODE_SELECT_DP_SST |
+ (PORT_B << TRANS_DDI_PORT_SHIFT) |
+ TRANS_DDI_FUNC_ENABLE);
+ vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |=
+ PORTC_HOTPLUG_ENABLE;
vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |=
BXT_DE_PORT_HP_DDIC;
}
@@ -519,6 +674,63 @@ void intel_vgpu_emulate_hotplug(struct intel_vgpu *vgpu, bool connected)
vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |=
PORTD_HOTPLUG_STATUS_MASK;
intel_vgpu_trigger_virtual_event(vgpu, DP_D_HOTPLUG);
+ } else if (IS_BROXTON(i915)) {
+ if (intel_vgpu_has_monitor_on_port(vgpu, PORT_A)) {
+ if (connected) {
+ vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |=
+ BXT_DE_PORT_HP_DDIA;
+ } else {
+ vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) &=
+ ~BXT_DE_PORT_HP_DDIA;
+ }
+ vgpu_vreg_t(vgpu, GEN8_DE_PORT_IIR) |=
+ BXT_DE_PORT_HP_DDIA;
+ vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &=
+ ~PORTA_HOTPLUG_STATUS_MASK;
+ vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |=
+ PORTA_HOTPLUG_LONG_DETECT;
+ intel_vgpu_trigger_virtual_event(vgpu, DP_A_HOTPLUG);
+ }
+ if (intel_vgpu_has_monitor_on_port(vgpu, PORT_B)) {
+ if (connected) {
+ vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |=
+ BXT_DE_PORT_HP_DDIB;
+ vgpu_vreg_t(vgpu, SFUSE_STRAP) |=
+ SFUSE_STRAP_DDIB_DETECTED;
+ } else {
+ vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) &=
+ ~BXT_DE_PORT_HP_DDIB;
+ vgpu_vreg_t(vgpu, SFUSE_STRAP) &=
+ ~SFUSE_STRAP_DDIB_DETECTED;
+ }
+ vgpu_vreg_t(vgpu, GEN8_DE_PORT_IIR) |=
+ BXT_DE_PORT_HP_DDIB;
+ vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &=
+ ~PORTB_HOTPLUG_STATUS_MASK;
+ vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |=
+ PORTB_HOTPLUG_LONG_DETECT;
+ intel_vgpu_trigger_virtual_event(vgpu, DP_B_HOTPLUG);
+ }
+ if (intel_vgpu_has_monitor_on_port(vgpu, PORT_C)) {
+ if (connected) {
+ vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |=
+ BXT_DE_PORT_HP_DDIC;
+ vgpu_vreg_t(vgpu, SFUSE_STRAP) |=
+ SFUSE_STRAP_DDIC_DETECTED;
+ } else {
+ vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) &=
+ ~BXT_DE_PORT_HP_DDIC;
+ vgpu_vreg_t(vgpu, SFUSE_STRAP) &=
+ ~SFUSE_STRAP_DDIC_DETECTED;
+ }
+ vgpu_vreg_t(vgpu, GEN8_DE_PORT_IIR) |=
+ BXT_DE_PORT_HP_DDIC;
+ vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &=
+ ~PORTC_HOTPLUG_STATUS_MASK;
+ vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |=
+ PORTC_HOTPLUG_LONG_DETECT;
+ intel_vgpu_trigger_virtual_event(vgpu, DP_C_HOTPLUG);
+ }
}
}
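The IS_BROXTON() branch added here mirrors the SKL/KBL handling earlier in intel_vgpu_emulate_hotplug(): for each port with an emulated monitor it sets or clears the matching DDI bit in GEN8_DE_PORT_ISR (and, for ports B and C, the SFUSE_STRAP detected bit) according to "connected", latches the event in GEN8_DE_PORT_IIR, rewrites PCH_PORT_HOTPLUG to report a long-pulse detect, and finally raises the corresponding virtual DP hotplug event toward the guest.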
diff --git a/drivers/gpu/drm/i915/gvt/gvt.c b/drivers/gpu/drm/i915/gvt/gvt.c
index c7c5612..5c9ef8e 100644
--- a/drivers/gpu/drm/i915/gvt/gvt.c
+++ b/drivers/gpu/drm/i915/gvt/gvt.c
@@ -126,7 +126,7 @@ static bool intel_get_gvt_attrs(struct attribute_group ***intel_vgpu_type_groups
return true;
}
-static bool intel_gvt_init_vgpu_type_groups(struct intel_gvt *gvt)
+static int intel_gvt_init_vgpu_type_groups(struct intel_gvt *gvt)
{
int i, j;
struct intel_vgpu_type *type;
@@ -144,7 +144,7 @@ static bool intel_gvt_init_vgpu_type_groups(struct intel_gvt *gvt)
gvt_vgpu_type_groups[i] = group;
}
- return true;
+ return 0;
unwind:
for (j = 0; j < i; j++) {
@@ -152,7 +152,7 @@ static bool intel_gvt_init_vgpu_type_groups(struct intel_gvt *gvt)
kfree(group);
}
- return false;
+ return -ENOMEM;
}
static void intel_gvt_cleanup_vgpu_type_groups(struct intel_gvt *gvt)
@@ -360,7 +360,7 @@ int intel_gvt_init_device(struct drm_i915_private *i915)
goto out_clean_thread;
ret = intel_gvt_init_vgpu_type_groups(gvt);
- if (ret == false) {
+ if (ret) {
gvt_err("failed to init vgpu type groups: %d\n", ret);
goto out_clean_types;
}
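The gvt.c change above moves intel_gvt_init_vgpu_type_groups() from a bool return to the usual 0/-errno convention, which is why the caller's "ret == false" test becomes a plain "if (ret)". A minimal standalone sketch of the two conventions and the pitfall of mixing them (function names here are illustrative, not taken from the driver):

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

/* old convention: true on success, false on failure */
static bool init_groups_bool(int fail)
{
	return fail ? false : true;
}

/* new convention: 0 on success, negative errno on failure */
static int init_groups_errno(int fail)
{
	return fail ? -ENOMEM : 0;
}

int main(void)
{
	int ret = init_groups_errno(1);

	/* with errno-style returns, any non-zero value means failure
	 * and also carries the reason */
	if (ret)
		printf("init failed: %d\n", ret);

	/* the old "ret == false" test only makes sense for bool returns;
	 * applied to -ENOMEM it would compare against 0, miss the error
	 * and wrongly report success */
	if (!init_groups_bool(1))
		printf("bool-style failure detected\n");

	return 0;
}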
diff --git a/drivers/gpu/drm/i915/gvt/mmio.c b/drivers/gpu/drm/i915/gvt/mmio.c
index b6811f6a..24210b1 100644
--- a/drivers/gpu/drm/i915/gvt/mmio.c
+++ b/drivers/gpu/drm/i915/gvt/mmio.c
@@ -280,6 +280,11 @@ void intel_vgpu_reset_mmio(struct intel_vgpu *vgpu, bool dmlr)
vgpu_vreg_t(vgpu, BXT_PHY_CTL(PORT_C)) |=
BXT_PHY_CMNLANE_POWERDOWN_ACK |
BXT_PHY_LANE_POWERDOWN_ACK;
+ vgpu_vreg_t(vgpu, SKL_FUSE_STATUS) |=
+ SKL_FUSE_DOWNLOAD_STATUS |
+ SKL_FUSE_PG_DIST_STATUS(SKL_PG0) |
+ SKL_FUSE_PG_DIST_STATUS(SKL_PG1) |
+ SKL_FUSE_PG_DIST_STATUS(SKL_PG2);
}
} else {
#define GVT_GEN8_MMIO_RESET_OFFSET (0x44200)
diff --git a/drivers/gpu/drm/i915/gvt/vgpu.c b/drivers/gpu/drm/i915/gvt/vgpu.c
index 399582a..821b6c3 100644
--- a/drivers/gpu/drm/i915/gvt/vgpu.c
+++ b/drivers/gpu/drm/i915/gvt/vgpu.c
@@ -437,10 +437,9 @@ static struct intel_vgpu *__intel_gvt_create_vgpu(struct intel_gvt *gvt,
if (ret)
goto out_clean_sched_policy;
- if (IS_BROADWELL(dev_priv))
+ if (IS_BROADWELL(dev_priv) || IS_BROXTON(dev_priv))
ret = intel_gvt_hypervisor_set_edid(vgpu, PORT_B);
- /* FixMe: Re-enable APL/BXT once vfio_edid enabled */
- else if (!IS_BROXTON(dev_priv))
+ else
ret = intel_gvt_hypervisor_set_edid(vgpu, PORT_D);
if (ret)
goto out_clean_sched_policy;
diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c
index 9ed19b8..c4c2d24 100644
--- a/drivers/gpu/drm/i915/i915_active.c
+++ b/drivers/gpu/drm/i915/i915_active.c
@@ -1159,7 +1159,8 @@ static int auto_active(struct i915_active *ref)
return 0;
}
-static void auto_retire(struct i915_active *ref)
+__i915_active_call static void
+auto_retire(struct i915_active *ref)
{
i915_active_put(ref);
}
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index cfb8067..1f23cb6 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -2992,7 +2992,7 @@ int ilk_wm_max_level(const struct drm_i915_private *dev_priv)
static void intel_print_wm_latency(struct drm_i915_private *dev_priv,
const char *name,
- const u16 wm[8])
+ const u16 wm[])
{
int level, max_level = ilk_wm_max_level(dev_priv);
diff --git a/drivers/gpu/drm/imx/imx-ldb.c b/drivers/gpu/drm/imx/imx-ldb.c
index 41e2978..75036aa 100644
--- a/drivers/gpu/drm/imx/imx-ldb.c
+++ b/drivers/gpu/drm/imx/imx-ldb.c
@@ -190,6 +190,11 @@ static void imx_ldb_encoder_enable(struct drm_encoder *encoder)
int dual = ldb->ldb_ctrl & LDB_SPLIT_MODE_EN;
int mux = drm_of_encoder_active_port_id(imx_ldb_ch->child, encoder);
+ if (mux < 0 || mux >= ARRAY_SIZE(ldb->clk_sel)) {
+ dev_warn(ldb->dev, "%s: invalid mux %d\n", __func__, mux);
+ return;
+ }
+
drm_panel_prepare(imx_ldb_ch->panel);
if (dual) {
@@ -248,6 +253,11 @@ imx_ldb_encoder_atomic_mode_set(struct drm_encoder *encoder,
int mux = drm_of_encoder_active_port_id(imx_ldb_ch->child, encoder);
u32 bus_format = imx_ldb_ch->bus_format;
+ if (mux < 0 || mux >= ARRAY_SIZE(ldb->clk_sel)) {
+ dev_warn(ldb->dev, "%s: invalid mux %d\n", __func__, mux);
+ return;
+ }
+
if (mode->clock > 170000) {
dev_warn(ldb->dev,
"%s: mode exceeds 170 MHz pixel clock\n", __func__);
diff --git a/drivers/gpu/drm/mcde/mcde_dsi.c b/drivers/gpu/drm/mcde/mcde_dsi.c
index 2314c81..b3fd350 100644
--- a/drivers/gpu/drm/mcde/mcde_dsi.c
+++ b/drivers/gpu/drm/mcde/mcde_dsi.c
@@ -760,7 +760,7 @@ static void mcde_dsi_start(struct mcde_dsi *d)
DSI_MCTL_MAIN_DATA_CTL_BTA_EN |
DSI_MCTL_MAIN_DATA_CTL_READ_EN |
DSI_MCTL_MAIN_DATA_CTL_REG_TE_EN;
- if (d->mdsi->mode_flags & MIPI_DSI_MODE_EOT_PACKET)
+ if (!(d->mdsi->mode_flags & MIPI_DSI_MODE_EOT_PACKET))
val |= DSI_MCTL_MAIN_DATA_CTL_HOST_EOT_GEN;
writel(val, d->regs + DSI_MCTL_MAIN_DATA_CTL);
diff --git a/drivers/gpu/drm/meson/meson_drv.c b/drivers/gpu/drm/meson/meson_drv.c
index db56732..2753067 100644
--- a/drivers/gpu/drm/meson/meson_drv.c
+++ b/drivers/gpu/drm/meson/meson_drv.c
@@ -485,11 +485,12 @@ static int meson_probe_remote(struct platform_device *pdev,
static void meson_drv_shutdown(struct platform_device *pdev)
{
struct meson_drm *priv = dev_get_drvdata(&pdev->dev);
- struct drm_device *drm = priv->drm;
- DRM_DEBUG_DRIVER("\n");
- drm_kms_helper_poll_fini(drm);
- drm_atomic_helper_shutdown(drm);
+ if (!priv)
+ return;
+
+ drm_kms_helper_poll_fini(priv->drm);
+ drm_atomic_helper_shutdown(priv->drm);
}
static int meson_drv_probe(struct platform_device *pdev)
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
index 5e11cdb..0ca7e53 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
@@ -1240,8 +1240,8 @@ static int a5xx_pm_suspend(struct msm_gpu *gpu)
static int a5xx_get_timestamp(struct msm_gpu *gpu, uint64_t *value)
{
- *value = gpu_read64(gpu, REG_A5XX_RBBM_PERFCTR_CP_0_LO,
- REG_A5XX_RBBM_PERFCTR_CP_0_HI);
+ *value = gpu_read64(gpu, REG_A5XX_RBBM_ALWAYSON_COUNTER_LO,
+ REG_A5XX_RBBM_ALWAYSON_COUNTER_HI);
return 0;
}
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_power.c b/drivers/gpu/drm/msm/adreno/a5xx_power.c
index f176a6f..e58670a 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_power.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_power.c
@@ -304,7 +304,7 @@ int a5xx_power_init(struct msm_gpu *gpu)
/* Set up the limits management */
if (adreno_is_a530(adreno_gpu))
a530_lm_setup(gpu);
- else
+ else if (adreno_is_a540(adreno_gpu))
a540_lm_setup(gpu);
/* Set up SP/TP power collapse */
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 83b50f6..722c2fe 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -1073,8 +1073,8 @@ static int a6xx_get_timestamp(struct msm_gpu *gpu, uint64_t *value)
/* Force the GPU power on so we can read this register */
a6xx_gmu_set_oob(&a6xx_gpu->gmu, GMU_OOB_PERFCOUNTER_SET);
- *value = gpu_read64(gpu, REG_A6XX_RBBM_PERFCTR_CP_0_LO,
- REG_A6XX_RBBM_PERFCTR_CP_0_HI);
+ *value = gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO,
+ REG_A6XX_CP_ALWAYS_ON_COUNTER_HI);
a6xx_gmu_clear_oob(&a6xx_gpu->gmu, GMU_OOB_PERFCOUNTER_SET);
mutex_unlock(&perfcounter_oob);
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
index d93c44f..e69ea81 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
@@ -43,6 +43,8 @@
#define DPU_DEBUGFS_DIR "msm_dpu"
#define DPU_DEBUGFS_HWMASKNAME "hw_log_mask"
+#define MIN_IB_BW 400000000ULL /* Min ib vote 400MB */
+
static int dpu_kms_hw_init(struct msm_kms *kms);
static void _dpu_kms_mmu_destroy(struct dpu_kms *dpu_kms);
@@ -929,6 +931,9 @@ static int dpu_kms_hw_init(struct msm_kms *kms)
DPU_DEBUG("REG_DMA is not defined");
}
+ if (of_device_is_compatible(dev->dev->of_node, "qcom,sc7180-mdss"))
+ dpu_kms_parse_data_bus_icc_path(dpu_kms);
+
pm_runtime_get_sync(&dpu_kms->pdev->dev);
dpu_kms->core_rev = readl_relaxed(dpu_kms->mmio + 0x0);
@@ -1030,9 +1035,6 @@ static int dpu_kms_hw_init(struct msm_kms *kms)
dpu_vbif_init_memtypes(dpu_kms);
- if (of_device_is_compatible(dev->dev->of_node, "qcom,sc7180-mdss"))
- dpu_kms_parse_data_bus_icc_path(dpu_kms);
-
pm_runtime_put_sync(&dpu_kms->pdev->dev);
return 0;
@@ -1189,10 +1191,10 @@ static int __maybe_unused dpu_runtime_resume(struct device *dev)
ddev = dpu_kms->dev;
+ WARN_ON(!(dpu_kms->num_paths));
/* Min vote of BW is required before turning on AXI clk */
for (i = 0; i < dpu_kms->num_paths; i++)
- icc_set_bw(dpu_kms->path[i], 0,
- dpu_kms->catalog->perf.min_dram_ib);
+ icc_set_bw(dpu_kms->path[i], 0, Bps_to_icc(MIN_IB_BW));
rc = msm_dss_enable_clk(mp->clk_config, mp->num_clk, true);
if (rc) {
diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_cmd_encoder.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_cmd_encoder.c
index ff2c1d5..0392d4d 100644
--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_cmd_encoder.c
+++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_cmd_encoder.c
@@ -20,7 +20,7 @@ static int pingpong_tearcheck_setup(struct drm_encoder *encoder,
{
struct mdp5_kms *mdp5_kms = get_kms(encoder);
struct device *dev = encoder->dev->dev;
- u32 total_lines_x100, vclks_line, cfg;
+ u32 total_lines, vclks_line, cfg;
long vsync_clk_speed;
struct mdp5_hw_mixer *mixer = mdp5_crtc_get_mixer(encoder->crtc);
int pp_id = mixer->pp;
@@ -30,8 +30,8 @@ static int pingpong_tearcheck_setup(struct drm_encoder *encoder,
return -EINVAL;
}
- total_lines_x100 = mode->vtotal * drm_mode_vrefresh(mode);
- if (!total_lines_x100) {
+ total_lines = mode->vtotal * drm_mode_vrefresh(mode);
+ if (!total_lines) {
DRM_DEV_ERROR(dev, "%s: vtotal(%d) or vrefresh(%d) is 0\n",
__func__, mode->vtotal, drm_mode_vrefresh(mode));
return -EINVAL;
@@ -43,15 +43,23 @@ static int pingpong_tearcheck_setup(struct drm_encoder *encoder,
vsync_clk_speed);
return -EINVAL;
}
- vclks_line = vsync_clk_speed * 100 / total_lines_x100;
+ vclks_line = vsync_clk_speed / total_lines;
cfg = MDP5_PP_SYNC_CONFIG_VSYNC_COUNTER_EN
| MDP5_PP_SYNC_CONFIG_VSYNC_IN_EN;
cfg |= MDP5_PP_SYNC_CONFIG_VSYNC_COUNT(vclks_line);
+ /*
+ * Tearcheck emits a blanking signal every vclks_line * vtotal * 2 ticks on
+ * the vsync_clk equating to roughly half the desired panel refresh rate.
+ * This is only necessary as stability fallback if interrupts from the
+ * panel arrive too late or not at all, but is currently used by default
+ * because these panel interrupts are not wired up yet.
+ */
mdp5_write(mdp5_kms, REG_MDP5_PP_SYNC_CONFIG_VSYNC(pp_id), cfg);
mdp5_write(mdp5_kms,
- REG_MDP5_PP_SYNC_CONFIG_HEIGHT(pp_id), 0xfff0);
+ REG_MDP5_PP_SYNC_CONFIG_HEIGHT(pp_id), (2 * mode->vtotal));
+
mdp5_write(mdp5_kms,
REG_MDP5_PP_VSYNC_INIT_VAL(pp_id), mode->vdisplay);
mdp5_write(mdp5_kms, REG_MDP5_PP_RD_PTR_IRQ(pp_id), mode->vdisplay + 1);
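For a rough sense of the values now programmed: assuming a hypothetical panel with vtotal = 1125 lines at 60 Hz and a 19.2 MHz vsync clock, total_lines = 1125 * 60 = 67500 and vclks_line = 19200000 / 67500 ≈ 284 ticks per line. With SYNC_CONFIG_HEIGHT set to 2 * vtotal = 2250 instead of the fixed 0xfff0, the tearcheck blanking signal fires roughly every 284 * 2250 ≈ 639000 vsync-clock ticks, i.e. about 30 times per second — half the 60 Hz panel rate, matching the new comment. All numbers here are illustrative, not taken from a real panel or board.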
diff --git a/drivers/gpu/drm/msm/dp/dp_audio.c b/drivers/gpu/drm/msm/dp/dp_audio.c
index 82a8673..d7e4a39 100644
--- a/drivers/gpu/drm/msm/dp/dp_audio.c
+++ b/drivers/gpu/drm/msm/dp/dp_audio.c
@@ -527,6 +527,7 @@ int dp_audio_hw_params(struct device *dev,
dp_audio_setup_acr(audio);
dp_audio_safe_to_exit_level(audio);
dp_audio_enable(audio, true);
+ dp_display_signal_audio_start(dp_display);
dp_display->audio_enabled = true;
end:
diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
index a2db14f..66f2ea3 100644
--- a/drivers/gpu/drm/msm/dp/dp_display.c
+++ b/drivers/gpu/drm/msm/dp/dp_display.c
@@ -176,6 +176,15 @@ static int dp_del_event(struct dp_display_private *dp_priv, u32 event)
return 0;
}
+void dp_display_signal_audio_start(struct msm_dp *dp_display)
+{
+ struct dp_display_private *dp;
+
+ dp = container_of(dp_display, struct dp_display_private, dp_display);
+
+ reinit_completion(&dp->audio_comp);
+}
+
void dp_display_signal_audio_complete(struct msm_dp *dp_display)
{
struct dp_display_private *dp;
@@ -620,7 +629,6 @@ static int dp_hpd_unplug_handle(struct dp_display_private *dp, u32 data)
dp_add_event(dp, EV_DISCONNECT_PENDING_TIMEOUT, 0, DP_TIMEOUT_5_SECOND);
/* signal the disconnect event early to ensure proper teardown */
- reinit_completion(&dp->audio_comp);
dp_display_handle_plugged_change(g_dp_display, false);
dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_PLUG_INT_MASK |
@@ -841,7 +849,6 @@ static int dp_display_disable(struct dp_display_private *dp, u32 data)
/* wait only if audio was enabled */
if (dp_display->audio_enabled) {
/* signal the disconnect event */
- reinit_completion(&dp->audio_comp);
dp_display_handle_plugged_change(dp_display, false);
if (!wait_for_completion_timeout(&dp->audio_comp,
HZ * 5))
diff --git a/drivers/gpu/drm/msm/dp/dp_display.h b/drivers/gpu/drm/msm/dp/dp_display.h
index 6092ba1..5173c89 100644
--- a/drivers/gpu/drm/msm/dp/dp_display.h
+++ b/drivers/gpu/drm/msm/dp/dp_display.h
@@ -34,6 +34,7 @@ int dp_display_get_modes(struct msm_dp *dp_display,
int dp_display_request_irq(struct msm_dp *dp_display);
bool dp_display_check_video_test(struct msm_dp *dp_display);
int dp_display_get_test_bpp(struct msm_dp *dp_display);
+void dp_display_signal_audio_start(struct msm_dp *dp_display);
void dp_display_signal_audio_complete(struct msm_dp *dp_display);
#endif /* _DP_DISPLAY_H_ */
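Taken together, the three msm/dp hunks move the reinit_completion() of audio_comp out of the unplug and disable paths and into a new dp_display_signal_audio_start() helper called from dp_audio_hw_params(), so the completion is armed when audio actually starts; the later wait_for_completion_timeout() in the disable path then cannot race with a re-initialization happening mid-teardown.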
diff --git a/drivers/gpu/drm/msm/dp/dp_hpd.c b/drivers/gpu/drm/msm/dp/dp_hpd.c
index 5b8fe3202..e1c90fa 100644
--- a/drivers/gpu/drm/msm/dp/dp_hpd.c
+++ b/drivers/gpu/drm/msm/dp/dp_hpd.c
@@ -34,8 +34,8 @@ int dp_hpd_connect(struct dp_usbpd *dp_usbpd, bool hpd)
dp_usbpd->hpd_high = hpd;
- if (!hpd_priv->dp_cb && !hpd_priv->dp_cb->configure
- && !hpd_priv->dp_cb->disconnect) {
+ if (!hpd_priv->dp_cb || !hpd_priv->dp_cb->configure
+ || !hpd_priv->dp_cb->disconnect) {
pr_err("hpd dp_cb not initialized\n");
return -EINVAL;
}
diff --git a/drivers/gpu/drm/msm/dsi/pll/dsi_pll_7nm.c b/drivers/gpu/drm/msm/dsi/pll/dsi_pll_7nm.c
index c1f6708..c1c4184 100644
--- a/drivers/gpu/drm/msm/dsi/pll/dsi_pll_7nm.c
+++ b/drivers/gpu/drm/msm/dsi/pll/dsi_pll_7nm.c
@@ -325,7 +325,7 @@ static void dsi_pll_commit(struct dsi_pll_7nm *pll)
pll_write(base + REG_DSI_7nm_PHY_PLL_FRAC_DIV_START_LOW_1, reg->frac_div_start_low);
pll_write(base + REG_DSI_7nm_PHY_PLL_FRAC_DIV_START_MID_1, reg->frac_div_start_mid);
pll_write(base + REG_DSI_7nm_PHY_PLL_FRAC_DIV_START_HIGH_1, reg->frac_div_start_high);
- pll_write(base + REG_DSI_7nm_PHY_PLL_PLL_LOCKDET_RATE_1, 0x40);
+ pll_write(base + REG_DSI_7nm_PHY_PLL_PLL_LOCKDET_RATE_1, reg->pll_lockdet_rate);
pll_write(base + REG_DSI_7nm_PHY_PLL_PLL_LOCK_DELAY, 0x06);
pll_write(base + REG_DSI_7nm_PHY_PLL_CMODE_1, 0x10); /* TODO: 0x00 for CPHY */
pll_write(base + REG_DSI_7nm_PHY_PLL_CLOCK_INVERTERS, reg->pll_clock_inverters);
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index b38ebcc..0aacc43 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -557,6 +557,7 @@ static int msm_drm_init(struct device *dev, struct drm_driver *drv)
kfree(priv);
err_put_drm_dev:
drm_dev_put(ddev);
+ platform_set_drvdata(pdev, NULL);
return ret;
}
diff --git a/drivers/gpu/drm/msm/msm_fence.c b/drivers/gpu/drm/msm/msm_fence.c
index ad27036..cd59a59 100644
--- a/drivers/gpu/drm/msm/msm_fence.c
+++ b/drivers/gpu/drm/msm/msm_fence.c
@@ -45,7 +45,7 @@ int msm_wait_fence(struct msm_fence_context *fctx, uint32_t fence,
int ret;
if (fence > fctx->last_fence) {
- DRM_ERROR("%s: waiting on invalid fence: %u (of %u)\n",
+ DRM_ERROR_RATELIMITED("%s: waiting on invalid fence: %u (of %u)\n",
fctx->name, fence, fctx->last_fence);
return -EINVAL;
}
diff --git a/drivers/gpu/drm/panel/panel-novatek-nt35510.c b/drivers/gpu/drm/panel/panel-novatek-nt35510.c
index b9a0e56..ef70140 100644
--- a/drivers/gpu/drm/panel/panel-novatek-nt35510.c
+++ b/drivers/gpu/drm/panel/panel-novatek-nt35510.c
@@ -898,8 +898,7 @@ static int nt35510_probe(struct mipi_dsi_device *dsi)
*/
dsi->hs_rate = 349440000;
dsi->lp_rate = 9600000;
- dsi->mode_flags = MIPI_DSI_CLOCK_NON_CONTINUOUS |
- MIPI_DSI_MODE_EOT_PACKET;
+ dsi->mode_flags = MIPI_DSI_CLOCK_NON_CONTINUOUS;
/*
* Every new incarnation of this display must have a unique
diff --git a/drivers/gpu/drm/panel/panel-samsung-s6d16d0.c b/drivers/gpu/drm/panel/panel-samsung-s6d16d0.c
index 4aac0d1..70560ca 100644
--- a/drivers/gpu/drm/panel/panel-samsung-s6d16d0.c
+++ b/drivers/gpu/drm/panel/panel-samsung-s6d16d0.c
@@ -184,9 +184,7 @@ static int s6d16d0_probe(struct mipi_dsi_device *dsi)
* As we only send commands we do not need to be continuously
* clocked.
*/
- dsi->mode_flags =
- MIPI_DSI_CLOCK_NON_CONTINUOUS |
- MIPI_DSI_MODE_EOT_PACKET;
+ dsi->mode_flags = MIPI_DSI_CLOCK_NON_CONTINUOUS;
s6->supply = devm_regulator_get(dev, "vdd1");
if (IS_ERR(s6->supply))
diff --git a/drivers/gpu/drm/panel/panel-samsung-s6e63m0-dsi.c b/drivers/gpu/drm/panel/panel-samsung-s6e63m0-dsi.c
index eec74c1..9c3563c 100644
--- a/drivers/gpu/drm/panel/panel-samsung-s6e63m0-dsi.c
+++ b/drivers/gpu/drm/panel/panel-samsung-s6e63m0-dsi.c
@@ -97,7 +97,6 @@ static int s6e63m0_dsi_probe(struct mipi_dsi_device *dsi)
dsi->hs_rate = 349440000;
dsi->lp_rate = 9600000;
dsi->mode_flags = MIPI_DSI_MODE_VIDEO |
- MIPI_DSI_MODE_EOT_PACKET |
MIPI_DSI_MODE_VIDEO_BURST;
ret = s6e63m0_probe(dev, s6e63m0_dsi_dcs_read, s6e63m0_dsi_dcs_write,
diff --git a/drivers/gpu/drm/panel/panel-sony-acx424akp.c b/drivers/gpu/drm/panel/panel-sony-acx424akp.c
index 065efae..95659a4 100644
--- a/drivers/gpu/drm/panel/panel-sony-acx424akp.c
+++ b/drivers/gpu/drm/panel/panel-sony-acx424akp.c
@@ -449,8 +449,7 @@ static int acx424akp_probe(struct mipi_dsi_device *dsi)
MIPI_DSI_MODE_VIDEO_BURST;
else
dsi->mode_flags =
- MIPI_DSI_CLOCK_NON_CONTINUOUS |
- MIPI_DSI_MODE_EOT_PACKET;
+ MIPI_DSI_CLOCK_NON_CONTINUOUS;
acx->supply = devm_regulator_get(dev, "vddi");
if (IS_ERR(acx->supply))
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index be8d68f..1986862 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -495,8 +495,14 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
}
bo->base.pages = pages;
bo->base.pages_use_count = 1;
- } else
+ } else {
pages = bo->base.pages;
+ if (pages[page_offset]) {
+ /* Pages are already mapped, bail out. */
+ mutex_unlock(&bo->base.pages_lock);
+ goto out;
+ }
+ }
mapping = bo->base.base.filp->f_mapping;
mapping_set_unevictable(mapping);
@@ -529,6 +535,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
dev_dbg(pfdev->dev, "mapped page fault @ AS%d %llx", as, addr);
+out:
panfrost_gem_mapping_put(bomapping);
return 0;
@@ -600,6 +607,8 @@ static irqreturn_t panfrost_mmu_irq_handler_thread(int irq, void *data)
access_type = (fault_status >> 8) & 0x3;
source_id = (fault_status >> 16);
+ mmu_write(pfdev, MMU_INT_CLEAR, mask);
+
/* Page fault only */
ret = -1;
if ((status & mask) == BIT(i) && (exception_type & 0xF8) == 0xC0)
@@ -623,8 +632,6 @@ static irqreturn_t panfrost_mmu_irq_handler_thread(int irq, void *data)
access_type, access_type_name(pfdev, fault_status),
source_id);
- mmu_write(pfdev, MMU_INT_CLEAR, mask);
-
status &= ~mask;
}
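The panfrost change does two related things: the fault handler now bails out early when the faulting page is already mapped (a previous fault for the same region may have been serviced while this one was queued), and the IRQ thread clears MMU_INT_CLEAR before servicing the fault rather than after, so a new fault raised by the GPU while the (potentially slow) mapping work is in progress is not acknowledged away unseen.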
diff --git a/drivers/gpu/drm/qxl/qxl_cmd.c b/drivers/gpu/drm/qxl/qxl_cmd.c
index 54e3c3a..741cc98 100644
--- a/drivers/gpu/drm/qxl/qxl_cmd.c
+++ b/drivers/gpu/drm/qxl/qxl_cmd.c
@@ -268,7 +268,7 @@ int qxl_alloc_bo_reserved(struct qxl_device *qdev,
int ret;
ret = qxl_bo_create(qdev, size, false /* not kernel - device */,
- false, QXL_GEM_DOMAIN_VRAM, NULL, &bo);
+ false, QXL_GEM_DOMAIN_VRAM, 0, NULL, &bo);
if (ret) {
DRM_ERROR("failed to allocate VRAM BO\n");
return ret;
diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
index 862ef59..f22a1b7 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -791,8 +791,8 @@ static int qxl_plane_prepare_fb(struct drm_plane *plane,
qdev->dumb_shadow_bo = NULL;
}
qxl_bo_create(qdev, surf.height * surf.stride,
- true, true, QXL_GEM_DOMAIN_SURFACE, &surf,
- &qdev->dumb_shadow_bo);
+ true, true, QXL_GEM_DOMAIN_SURFACE, 0,
+ &surf, &qdev->dumb_shadow_bo);
}
if (user_bo->shadow != qdev->dumb_shadow_bo) {
if (user_bo->shadow) {
@@ -1224,6 +1224,10 @@ int qxl_modeset_init(struct qxl_device *qdev)
void qxl_modeset_fini(struct qxl_device *qdev)
{
+ if (qdev->dumb_shadow_bo) {
+ drm_gem_object_put(&qdev->dumb_shadow_bo->tbo.base);
+ qdev->dumb_shadow_bo = NULL;
+ }
qxl_destroy_monitors_object(qdev);
drm_mode_config_cleanup(&qdev->ddev);
}
diff --git a/drivers/gpu/drm/qxl/qxl_gem.c b/drivers/gpu/drm/qxl/qxl_gem.c
index 48e0962..a08da0b 100644
--- a/drivers/gpu/drm/qxl/qxl_gem.c
+++ b/drivers/gpu/drm/qxl/qxl_gem.c
@@ -55,7 +55,7 @@ int qxl_gem_object_create(struct qxl_device *qdev, int size,
/* At least align on page size */
if (alignment < PAGE_SIZE)
alignment = PAGE_SIZE;
- r = qxl_bo_create(qdev, size, kernel, false, initial_domain, surf, &qbo);
+ r = qxl_bo_create(qdev, size, kernel, false, initial_domain, 0, surf, &qbo);
if (r) {
if (r != -ERESTARTSYS)
DRM_ERROR(
diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
index 2bc3644..544a9e4 100644
--- a/drivers/gpu/drm/qxl/qxl_object.c
+++ b/drivers/gpu/drm/qxl/qxl_object.c
@@ -103,8 +103,8 @@ static const struct drm_gem_object_funcs qxl_object_funcs = {
.print_info = drm_gem_ttm_print_info,
};
-int qxl_bo_create(struct qxl_device *qdev,
- unsigned long size, bool kernel, bool pinned, u32 domain,
+int qxl_bo_create(struct qxl_device *qdev, unsigned long size,
+ bool kernel, bool pinned, u32 domain, u32 priority,
struct qxl_surface *surf,
struct qxl_bo **bo_ptr)
{
@@ -137,6 +137,7 @@ int qxl_bo_create(struct qxl_device *qdev,
qxl_ttm_placement_from_domain(bo, domain, pinned);
+ bo->tbo.priority = priority;
r = ttm_bo_init(&qdev->mman.bdev, &bo->tbo, size, type,
&bo->placement, 0, !kernel, size,
NULL, NULL, &qxl_ttm_bo_destroy);
diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h
index 6b434e5..5762ea4 100644
--- a/drivers/gpu/drm/qxl/qxl_object.h
+++ b/drivers/gpu/drm/qxl/qxl_object.h
@@ -84,6 +84,7 @@ static inline int qxl_bo_wait(struct qxl_bo *bo, u32 *mem_type,
extern int qxl_bo_create(struct qxl_device *qdev,
unsigned long size,
bool kernel, bool pinned, u32 domain,
+ u32 priority,
struct qxl_surface *surf,
struct qxl_bo **bo_ptr);
extern int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
diff --git a/drivers/gpu/drm/qxl/qxl_release.c b/drivers/gpu/drm/qxl/qxl_release.c
index 4fae3e3..b2a475a 100644
--- a/drivers/gpu/drm/qxl/qxl_release.c
+++ b/drivers/gpu/drm/qxl/qxl_release.c
@@ -199,11 +199,12 @@ qxl_release_free(struct qxl_device *qdev,
}
static int qxl_release_bo_alloc(struct qxl_device *qdev,
- struct qxl_bo **bo)
+ struct qxl_bo **bo,
+ u32 priority)
{
/* pin releases bo's they are too messy to evict */
return qxl_bo_create(qdev, PAGE_SIZE, false, true,
- QXL_GEM_DOMAIN_VRAM, NULL, bo);
+ QXL_GEM_DOMAIN_VRAM, priority, NULL, bo);
}
int qxl_release_list_add(struct qxl_release *release, struct qxl_bo *bo)
@@ -326,13 +327,18 @@ int qxl_alloc_release_reserved(struct qxl_device *qdev, unsigned long size,
int ret = 0;
union qxl_release_info *info;
int cur_idx;
+ u32 priority;
- if (type == QXL_RELEASE_DRAWABLE)
+ if (type == QXL_RELEASE_DRAWABLE) {
cur_idx = 0;
- else if (type == QXL_RELEASE_SURFACE_CMD)
+ priority = 0;
+ } else if (type == QXL_RELEASE_SURFACE_CMD) {
cur_idx = 1;
- else if (type == QXL_RELEASE_CURSOR_CMD)
+ priority = 1;
+ } else if (type == QXL_RELEASE_CURSOR_CMD) {
cur_idx = 2;
+ priority = 1;
+ }
else {
DRM_ERROR("got illegal type: %d\n", type);
return -EINVAL;
@@ -352,7 +358,7 @@ int qxl_alloc_release_reserved(struct qxl_device *qdev, unsigned long size,
qdev->current_release_bo[cur_idx] = NULL;
}
if (!qdev->current_release_bo[cur_idx]) {
- ret = qxl_release_bo_alloc(qdev, &qdev->current_release_bo[cur_idx]);
+ ret = qxl_release_bo_alloc(qdev, &qdev->current_release_bo[cur_idx], priority);
if (ret) {
mutex_unlock(&qdev->release_mutex);
qxl_release_free(qdev, *release);
diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index a6d8de01..a813c00 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -1559,6 +1559,7 @@ struct radeon_dpm {
void *priv;
u32 new_active_crtcs;
int new_active_crtc_count;
+ int high_pixelclock_count;
u32 current_active_crtcs;
int current_active_crtc_count;
bool single_display;
diff --git a/drivers/gpu/drm/radeon/radeon_atombios.c b/drivers/gpu/drm/radeon/radeon_atombios.c
index 5d25917..aca6e5cf 100644
--- a/drivers/gpu/drm/radeon/radeon_atombios.c
+++ b/drivers/gpu/drm/radeon/radeon_atombios.c
@@ -2126,11 +2126,14 @@ static int radeon_atombios_parse_power_table_1_3(struct radeon_device *rdev)
return state_index;
/* last mode is usually default, array is low to high */
for (i = 0; i < num_modes; i++) {
- rdev->pm.power_state[state_index].clock_info =
- kcalloc(1, sizeof(struct radeon_pm_clock_info),
- GFP_KERNEL);
+ /* avoid memory leaks from invalid modes or unknown frev. */
+ if (!rdev->pm.power_state[state_index].clock_info) {
+ rdev->pm.power_state[state_index].clock_info =
+ kzalloc(sizeof(struct radeon_pm_clock_info),
+ GFP_KERNEL);
+ }
if (!rdev->pm.power_state[state_index].clock_info)
- return state_index;
+ goto out;
rdev->pm.power_state[state_index].num_clock_modes = 1;
rdev->pm.power_state[state_index].clock_info[0].voltage.type = VOLTAGE_NONE;
switch (frev) {
@@ -2249,17 +2252,24 @@ static int radeon_atombios_parse_power_table_1_3(struct radeon_device *rdev)
break;
}
}
+out:
+ /* free any unused clock_info allocation. */
+ if (state_index && state_index < num_modes) {
+ kfree(rdev->pm.power_state[state_index].clock_info);
+ rdev->pm.power_state[state_index].clock_info = NULL;
+ }
+
/* last mode is usually default */
- if (rdev->pm.default_power_state_index == -1) {
+ if (state_index && rdev->pm.default_power_state_index == -1) {
rdev->pm.power_state[state_index - 1].type =
POWER_STATE_TYPE_DEFAULT;
rdev->pm.default_power_state_index = state_index - 1;
rdev->pm.power_state[state_index - 1].default_clock_mode =
&rdev->pm.power_state[state_index - 1].clock_info[0];
- rdev->pm.power_state[state_index].flags &=
+ rdev->pm.power_state[state_index - 1].flags &=
~RADEON_PM_STATE_SINGLE_DISPLAY_ONLY;
- rdev->pm.power_state[state_index].misc = 0;
- rdev->pm.power_state[state_index].misc2 = 0;
+ rdev->pm.power_state[state_index - 1].misc = 0;
+ rdev->pm.power_state[state_index - 1].misc2 = 0;
}
return state_index;
}
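The reworked loop only allocates clock_info when the slot is still NULL, so re-parsing a state (or hitting an unknown frev) no longer leaks the previous buffer; the new out: label frees whichever trailing slot was allocated but never filled, and the default-state fixup now consistently indexes state_index - 1 instead of mixing it with state_index.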
diff --git a/drivers/gpu/drm/radeon/radeon_dp_mst.c b/drivers/gpu/drm/radeon/radeon_dp_mst.c
index 0083087..9bd6c06 100644
--- a/drivers/gpu/drm/radeon/radeon_dp_mst.c
+++ b/drivers/gpu/drm/radeon/radeon_dp_mst.c
@@ -242,6 +242,9 @@ radeon_dp_mst_detect(struct drm_connector *connector,
to_radeon_connector(connector);
struct radeon_connector *master = radeon_connector->mst_port;
+ if (drm_connector_is_unregistered(connector))
+ return connector_status_disconnected;
+
return drm_dp_mst_detect_port(connector, ctx, &master->mst_mgr,
radeon_connector->port);
}
diff --git a/drivers/gpu/drm/radeon/radeon_kms.c b/drivers/gpu/drm/radeon/radeon_kms.c
index 99ee60f..8c0a572 100644
--- a/drivers/gpu/drm/radeon/radeon_kms.c
+++ b/drivers/gpu/drm/radeon/radeon_kms.c
@@ -512,6 +512,7 @@ static int radeon_info_ioctl(struct drm_device *dev, void *data, struct drm_file
*value = rdev->config.si.backend_enable_mask;
} else {
DRM_DEBUG_KMS("BACKEND_ENABLED_MASK is si+ only!\n");
+ return -EINVAL;
}
break;
case RADEON_INFO_MAX_SCLK:
diff --git a/drivers/gpu/drm/radeon/radeon_pm.c b/drivers/gpu/drm/radeon/radeon_pm.c
index 05c4196..84b8d58 100644
--- a/drivers/gpu/drm/radeon/radeon_pm.c
+++ b/drivers/gpu/drm/radeon/radeon_pm.c
@@ -1747,6 +1747,7 @@ static void radeon_pm_compute_clocks_dpm(struct radeon_device *rdev)
struct drm_device *ddev = rdev->ddev;
struct drm_crtc *crtc;
struct radeon_crtc *radeon_crtc;
+ struct radeon_connector *radeon_connector;
if (!rdev->pm.dpm_enabled)
return;
@@ -1756,6 +1757,7 @@ static void radeon_pm_compute_clocks_dpm(struct radeon_device *rdev)
/* update active crtc counts */
rdev->pm.dpm.new_active_crtcs = 0;
rdev->pm.dpm.new_active_crtc_count = 0;
+ rdev->pm.dpm.high_pixelclock_count = 0;
if (rdev->num_crtc && rdev->mode_info.mode_config_initialized) {
list_for_each_entry(crtc,
&ddev->mode_config.crtc_list, head) {
@@ -1763,6 +1765,12 @@ static void radeon_pm_compute_clocks_dpm(struct radeon_device *rdev)
if (crtc->enabled) {
rdev->pm.dpm.new_active_crtcs |= (1 << radeon_crtc->crtc_id);
rdev->pm.dpm.new_active_crtc_count++;
+ if (!radeon_crtc->connector)
+ continue;
+
+ radeon_connector = to_radeon_connector(radeon_crtc->connector);
+ if (radeon_connector->pixelclock_for_modeset > 297000)
+ rdev->pm.dpm.high_pixelclock_count++;
}
}
}
diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
index 36150b7..a65cb34 100644
--- a/drivers/gpu/drm/radeon/radeon_ttm.c
+++ b/drivers/gpu/drm/radeon/radeon_ttm.c
@@ -566,13 +566,14 @@ static void radeon_ttm_backend_unbind(struct ttm_bo_device *bdev, struct ttm_tt
struct radeon_ttm_tt *gtt = (void *)ttm;
struct radeon_device *rdev = radeon_get_rdev(bdev);
+ if (gtt->userptr)
+ radeon_ttm_tt_unpin_userptr(bdev, ttm);
+
if (!gtt->bound)
return;
radeon_gart_unbind(rdev, gtt->offset, ttm->num_pages);
- if (gtt->userptr)
- radeon_ttm_tt_unpin_userptr(bdev, ttm);
gtt->bound = false;
}
diff --git a/drivers/gpu/drm/radeon/si_dpm.c b/drivers/gpu/drm/radeon/si_dpm.c
index d1c73e9..a84df43 100644
--- a/drivers/gpu/drm/radeon/si_dpm.c
+++ b/drivers/gpu/drm/radeon/si_dpm.c
@@ -2982,6 +2982,9 @@ static void si_apply_state_adjust_rules(struct radeon_device *rdev,
(rdev->pdev->device == 0x6605)) {
max_sclk = 75000;
}
+
+ if (rdev->pm.dpm.high_pixelclock_count > 1)
+ disable_sclk_switching = true;
}
if (rps->vce_active) {
diff --git a/drivers/gpu/drm/stm/ltdc.c b/drivers/gpu/drm/stm/ltdc.c
index 6e28f70..62488ac 100644
--- a/drivers/gpu/drm/stm/ltdc.c
+++ b/drivers/gpu/drm/stm/ltdc.c
@@ -525,13 +525,42 @@ static void ltdc_crtc_mode_set_nofb(struct drm_crtc *crtc)
{
struct ltdc_device *ldev = crtc_to_ltdc(crtc);
struct drm_device *ddev = crtc->dev;
+ struct drm_connector_list_iter iter;
+ struct drm_connector *connector = NULL;
+ struct drm_encoder *encoder = NULL;
+ struct drm_bridge *bridge = NULL;
struct drm_display_mode *mode = &crtc->state->adjusted_mode;
struct videomode vm;
u32 hsync, vsync, accum_hbp, accum_vbp, accum_act_w, accum_act_h;
u32 total_width, total_height;
+ u32 bus_flags = 0;
u32 val;
int ret;
+ /* get encoder from crtc */
+ drm_for_each_encoder(encoder, ddev)
+ if (encoder->crtc == crtc)
+ break;
+
+ if (encoder) {
+ /* get bridge from encoder */
+ list_for_each_entry(bridge, &encoder->bridge_chain, chain_node)
+ if (bridge->encoder == encoder)
+ break;
+
+ /* Get the connector from encoder */
+ drm_connector_list_iter_begin(ddev, &iter);
+ drm_for_each_connector_iter(connector, &iter)
+ if (connector->encoder == encoder)
+ break;
+ drm_connector_list_iter_end(&iter);
+ }
+
+ if (bridge && bridge->timings)
+ bus_flags = bridge->timings->input_bus_flags;
+ else if (connector)
+ bus_flags = connector->display_info.bus_flags;
+
if (!pm_runtime_active(ddev->dev)) {
ret = pm_runtime_get_sync(ddev->dev);
if (ret) {
@@ -567,10 +596,10 @@ static void ltdc_crtc_mode_set_nofb(struct drm_crtc *crtc)
if (vm.flags & DISPLAY_FLAGS_VSYNC_HIGH)
val |= GCR_VSPOL;
- if (vm.flags & DISPLAY_FLAGS_DE_LOW)
+ if (bus_flags & DRM_BUS_FLAG_DE_LOW)
val |= GCR_DEPOL;
- if (vm.flags & DISPLAY_FLAGS_PIXDATA_NEGEDGE)
+ if (bus_flags & DRM_BUS_FLAG_PIXDATA_DRIVE_NEGEDGE)
val |= GCR_PCPOL;
reg_update_bits(ldev->regs, LTDC_GCR,
diff --git a/drivers/gpu/drm/tegra/dc.c b/drivers/gpu/drm/tegra/dc.c
index 3a244ef..3aa9a74 100644
--- a/drivers/gpu/drm/tegra/dc.c
+++ b/drivers/gpu/drm/tegra/dc.c
@@ -1688,6 +1688,11 @@ static void tegra_dc_commit_state(struct tegra_dc *dc,
dev_err(dc->dev,
"failed to set clock rate to %lu Hz\n",
state->pclk);
+
+ err = clk_set_rate(dc->clk, state->pclk);
+ if (err < 0)
+ dev_err(dc->dev, "failed to set clock %pC to %lu Hz: %d\n",
+ dc->clk, state->pclk, err);
}
DRM_DEBUG_KMS("rate: %lu, div: %u\n", clk_get_rate(dc->clk),
@@ -1698,11 +1703,6 @@ static void tegra_dc_commit_state(struct tegra_dc *dc,
value = SHIFT_CLK_DIVIDER(state->div) | PIXEL_CLK_DIVIDER_PCD1;
tegra_dc_writel(dc, value, DC_DISP_DISP_CLOCK_CONTROL);
}
-
- err = clk_set_rate(dc->clk, state->pclk);
- if (err < 0)
- dev_err(dc->dev, "failed to set clock %pC to %lu Hz: %d\n",
- dc->clk, state->pclk, err);
}
static void tegra_dc_stop(struct tegra_dc *dc)
diff --git a/drivers/gpu/drm/tilcdc/tilcdc_crtc.c b/drivers/gpu/drm/tilcdc/tilcdc_crtc.c
index 518220b..0aaa4a2 100644
--- a/drivers/gpu/drm/tilcdc/tilcdc_crtc.c
+++ b/drivers/gpu/drm/tilcdc/tilcdc_crtc.c
@@ -518,6 +518,15 @@ static void tilcdc_crtc_off(struct drm_crtc *crtc, bool shutdown)
drm_crtc_vblank_off(crtc);
+ spin_lock_irq(&crtc->dev->event_lock);
+
+ if (crtc->state->event) {
+ drm_crtc_send_vblank_event(crtc, crtc->state->event);
+ crtc->state->event = NULL;
+ }
+
+ spin_unlock_irq(&crtc->dev->event_lock);
+
tilcdc_crtc_disable_irqs(dev);
pm_runtime_put_sync(dev->dev);
diff --git a/drivers/gpu/drm/vc4/vc4_crtc.c b/drivers/gpu/drm/vc4/vc4_crtc.c
index 482219f..1d2416d 100644
--- a/drivers/gpu/drm/vc4/vc4_crtc.c
+++ b/drivers/gpu/drm/vc4/vc4_crtc.c
@@ -210,6 +210,7 @@ static u32 vc4_get_fifo_full_level(struct vc4_crtc *vc4_crtc, u32 format)
{
const struct vc4_crtc_data *crtc_data = vc4_crtc_to_vc4_crtc_data(vc4_crtc);
const struct vc4_pv_data *pv_data = vc4_crtc_to_vc4_pv_data(vc4_crtc);
+ struct vc4_dev *vc4 = to_vc4_dev(vc4_crtc->base.dev);
u32 fifo_len_bytes = pv_data->fifo_depth;
/*
@@ -238,6 +239,22 @@ static u32 vc4_get_fifo_full_level(struct vc4_crtc *vc4_crtc, u32 format)
if (crtc_data->hvs_output == 5)
return 32;
+ /*
+ * It looks like in some situations, we will overflow
+ * the PixelValve FIFO (with the bit 10 of PV stat being
+ * set) and stall the HVS / PV, eventually resulting in
+ * a page flip timeout.
+ *
+ * Displaying the video overlay during a playback with
+ * Kodi on an RPi3 seems to be a reliable way to reproduce it,
+ * with a failure rate around 50%.
+ *
+ * Removing 1 from the FIFO full level however
+ * seems to completely remove that issue.
+ */
+ if (!vc4->hvs->hvs5)
+ return fifo_len_bytes - 3 * HVS_FIFO_LATENCY_PIX - 1;
+
return fifo_len_bytes - 3 * HVS_FIFO_LATENCY_PIX;
}
}
diff --git a/drivers/gpu/drm/vkms/vkms_crtc.c b/drivers/gpu/drm/vkms/vkms_crtc.c
index 09c012d..1ae5cd4 100644
--- a/drivers/gpu/drm/vkms/vkms_crtc.c
+++ b/drivers/gpu/drm/vkms/vkms_crtc.c
@@ -18,7 +18,8 @@ static enum hrtimer_restart vkms_vblank_simulate(struct hrtimer *timer)
ret_overrun = hrtimer_forward_now(&output->vblank_hrtimer,
output->period_ns);
- WARN_ON(ret_overrun != 1);
+ if (ret_overrun != 1)
+ pr_warn("%s: vblank timer overrun\n", __func__);
spin_lock(&output->lock);
ret = drm_crtc_handle_vblank(crtc);
diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
index cc93a8c..8ea9154 100644
--- a/drivers/gpu/drm/xen/xen_drm_front.c
+++ b/drivers/gpu/drm/xen/xen_drm_front.c
@@ -531,7 +531,7 @@ static int xen_drm_drv_init(struct xen_drm_front_info *front_info)
drm_dev = drm_dev_alloc(&xen_drm_driver, dev);
if (IS_ERR(drm_dev)) {
ret = PTR_ERR(drm_dev);
- goto fail;
+ goto fail_dev;
}
drm_info->drm_dev = drm_dev;
@@ -561,8 +561,10 @@ static int xen_drm_drv_init(struct xen_drm_front_info *front_info)
drm_kms_helper_poll_fini(drm_dev);
drm_mode_config_cleanup(drm_dev);
drm_dev_put(drm_dev);
-fail:
+fail_dev:
kfree(drm_info);
+ front_info->drm_info = NULL;
+fail:
return ret;
}
diff --git a/drivers/gpu/drm/xlnx/zynqmp_dp.c b/drivers/gpu/drm/xlnx/zynqmp_dp.c
index 99158ee..59d1fb0 100644
--- a/drivers/gpu/drm/xlnx/zynqmp_dp.c
+++ b/drivers/gpu/drm/xlnx/zynqmp_dp.c
@@ -866,7 +866,7 @@ static int zynqmp_dp_train(struct zynqmp_dp *dp)
return ret;
zynqmp_dp_write(dp, ZYNQMP_DP_SCRAMBLING_DISABLE, 1);
- memset(dp->train_set, 0, 4);
+ memset(dp->train_set, 0, sizeof(dp->train_set));
ret = zynqmp_dp_link_train_cr(dp);
if (ret)
return ret;
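The zynqmp_dp fix replaces a hard-coded byte count with sizeof() on the array being cleared, so the memset keeps covering the whole train_set buffer even if its length changes. A small standalone illustration (the 4-byte array below is an assumption for the sketch, not the driver's actual definition):

#include <stdio.h>
#include <string.h>

int main(void)
{
	unsigned char train_set[4] = { 1, 2, 3, 4 };

	/* sizeof(train_set) is the full array size in bytes (4 here) and
	 * keeps tracking the array if its length is ever changed; a
	 * literal "4" would silently clear too little in that case. */
	memset(train_set, 0, sizeof(train_set));

	printf("%d %d %d %d\n", train_set[0], train_set[1],
	       train_set[2], train_set[3]);
	return 0;
}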
diff --git a/drivers/gpu/host1x/bus.c b/drivers/gpu/host1x/bus.c
index e201f62..9e2cb69 100644
--- a/drivers/gpu/host1x/bus.c
+++ b/drivers/gpu/host1x/bus.c
@@ -704,8 +704,9 @@ void host1x_driver_unregister(struct host1x_driver *driver)
EXPORT_SYMBOL(host1x_driver_unregister);
/**
- * host1x_client_register() - register a host1x client
+ * __host1x_client_register() - register a host1x client
* @client: host1x client
+ * @key: lock class key for the client-specific mutex
*
* Registers a host1x client with each host1x controller instance. Note that
* each client will only match their parent host1x controller and will only be
@@ -714,13 +715,14 @@ EXPORT_SYMBOL(host1x_driver_unregister);
* device and call host1x_device_init(), which will in turn call each client's
* &host1x_client_ops.init implementation.
*/
-int host1x_client_register(struct host1x_client *client)
+int __host1x_client_register(struct host1x_client *client,
+ struct lock_class_key *key)
{
struct host1x *host1x;
int err;
INIT_LIST_HEAD(&client->list);
- mutex_init(&client->lock);
+ __mutex_init(&client->lock, "host1x client lock", key);
client->usecount = 0;
mutex_lock(&devices_lock);
@@ -741,7 +743,7 @@ int host1x_client_register(struct host1x_client *client)
return 0;
}
-EXPORT_SYMBOL(host1x_client_register);
+EXPORT_SYMBOL(__host1x_client_register);
/**
* host1x_client_unregister() - unregister a host1x client
diff --git a/drivers/hid/hid-alps.c b/drivers/hid/hid-alps.c
index 3feaece..6b66593 100644
--- a/drivers/hid/hid-alps.c
+++ b/drivers/hid/hid-alps.c
@@ -761,6 +761,7 @@ static int alps_input_configured(struct hid_device *hdev, struct hid_input *hi)
if (input_register_device(data->input2)) {
input_free_device(input2);
+ ret = -ENOENT;
goto exit;
}
}
diff --git a/drivers/hid/hid-cp2112.c b/drivers/hid/hid-cp2112.c
index 21e1562..477baa3 100644
--- a/drivers/hid/hid-cp2112.c
+++ b/drivers/hid/hid-cp2112.c
@@ -161,6 +161,7 @@ struct cp2112_device {
atomic_t read_avail;
atomic_t xfer_avail;
struct gpio_chip gc;
+ struct irq_chip irq;
u8 *in_out_buffer;
struct mutex lock;
@@ -1175,16 +1176,6 @@ static int cp2112_gpio_irq_type(struct irq_data *d, unsigned int type)
return 0;
}
-static struct irq_chip cp2112_gpio_irqchip = {
- .name = "cp2112-gpio",
- .irq_startup = cp2112_gpio_irq_startup,
- .irq_shutdown = cp2112_gpio_irq_shutdown,
- .irq_ack = cp2112_gpio_irq_ack,
- .irq_mask = cp2112_gpio_irq_mask,
- .irq_unmask = cp2112_gpio_irq_unmask,
- .irq_set_type = cp2112_gpio_irq_type,
-};
-
static int __maybe_unused cp2112_allocate_irq(struct cp2112_device *dev,
int pin)
{
@@ -1339,8 +1330,17 @@ static int cp2112_probe(struct hid_device *hdev, const struct hid_device_id *id)
dev->gc.can_sleep = 1;
dev->gc.parent = &hdev->dev;
+ dev->irq.name = "cp2112-gpio";
+ dev->irq.irq_startup = cp2112_gpio_irq_startup;
+ dev->irq.irq_shutdown = cp2112_gpio_irq_shutdown;
+ dev->irq.irq_ack = cp2112_gpio_irq_ack;
+ dev->irq.irq_mask = cp2112_gpio_irq_mask;
+ dev->irq.irq_unmask = cp2112_gpio_irq_unmask;
+ dev->irq.irq_set_type = cp2112_gpio_irq_type;
+ dev->irq.flags = IRQCHIP_MASK_ON_SUSPEND;
+
girq = &dev->gc.irq;
- girq->chip = &cp2112_gpio_irqchip;
+ girq->chip = &dev->irq;
/* The event comes from the outside so no parent handler */
girq->parent_handler = NULL;
girq->num_parents = 0;
diff --git a/drivers/hid/hid-google-hammer.c b/drivers/hid/hid-google-hammer.c
index 85a054f..2a176f7 100644
--- a/drivers/hid/hid-google-hammer.c
+++ b/drivers/hid/hid-google-hammer.c
@@ -527,6 +527,8 @@ static void hammer_remove(struct hid_device *hdev)
static const struct hid_device_id hammer_devices[] = {
{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
+ USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_DON) },
+ { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_HAMMER) },
{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_MAGNEMITE) },
diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
index 06813f2..e220a05 100644
--- a/drivers/hid/hid-ids.h
+++ b/drivers/hid/hid-ids.h
@@ -486,6 +486,7 @@
#define USB_DEVICE_ID_GOOGLE_MASTERBALL 0x503c
#define USB_DEVICE_ID_GOOGLE_MAGNEMITE 0x503d
#define USB_DEVICE_ID_GOOGLE_MOONBALL 0x5044
+#define USB_DEVICE_ID_GOOGLE_DON 0x5050
#define USB_VENDOR_ID_GOTOP 0x08f2
#define USB_DEVICE_ID_SUPER_Q2 0x007f
@@ -937,6 +938,7 @@
#define USB_DEVICE_ID_ORTEK_IHOME_IMAC_A210S 0x8003
#define USB_VENDOR_ID_PLANTRONICS 0x047f
+#define USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3220_SERIES 0xc056
#define USB_VENDOR_ID_PANASONIC 0x04da
#define USB_DEVICE_ID_PANABOARD_UBT780 0x1044
diff --git a/drivers/hid/hid-lenovo.c b/drivers/hid/hid-lenovo.c
index c6c8e20..0ff03fe 100644
--- a/drivers/hid/hid-lenovo.c
+++ b/drivers/hid/hid-lenovo.c
@@ -33,6 +33,9 @@
#include "hid-ids.h"
+/* Userspace expects F20 for mic-mute; KEY_MICMUTE does not work */
+#define LENOVO_KEY_MICMUTE KEY_F20
+
struct lenovo_drvdata {
u8 led_report[3]; /* Must be first for proper alignment */
int led_state;
@@ -62,8 +65,8 @@ struct lenovo_drvdata {
#define TP10UBKBD_LED_OFF 1
#define TP10UBKBD_LED_ON 2
-static void lenovo_led_set_tp10ubkbd(struct hid_device *hdev, u8 led_code,
- enum led_brightness value)
+static int lenovo_led_set_tp10ubkbd(struct hid_device *hdev, u8 led_code,
+ enum led_brightness value)
{
struct lenovo_drvdata *data = hid_get_drvdata(hdev);
int ret;
@@ -75,10 +78,18 @@ static void lenovo_led_set_tp10ubkbd(struct hid_device *hdev, u8 led_code,
data->led_report[2] = value ? TP10UBKBD_LED_ON : TP10UBKBD_LED_OFF;
ret = hid_hw_raw_request(hdev, data->led_report[0], data->led_report, 3,
HID_OUTPUT_REPORT, HID_REQ_SET_REPORT);
- if (ret)
- hid_err(hdev, "Set LED output report error: %d\n", ret);
+ if (ret != 3) {
+ if (ret != -ENODEV)
+ hid_err(hdev, "Set LED output report error: %d\n", ret);
+
+ ret = ret < 0 ? ret : -EIO;
+ } else {
+ ret = 0;
+ }
mutex_unlock(&data->led_report_mutex);
+
+ return ret;
}
static void lenovo_tp10ubkbd_sync_fn_lock(struct work_struct *work)
@@ -126,7 +137,7 @@ static int lenovo_input_mapping_tpkbd(struct hid_device *hdev,
if (usage->hid == (HID_UP_BUTTON | 0x0010)) {
/* This sub-device contains trackpoint, mark it */
hid_set_drvdata(hdev, (void *)1);
- map_key_clear(KEY_MICMUTE);
+ map_key_clear(LENOVO_KEY_MICMUTE);
return 1;
}
return 0;
@@ -141,7 +152,7 @@ static int lenovo_input_mapping_cptkbd(struct hid_device *hdev,
(usage->hid & HID_USAGE_PAGE) == HID_UP_LNVENDOR) {
switch (usage->hid & HID_USAGE) {
case 0x00f1: /* Fn-F4: Mic mute */
- map_key_clear(KEY_MICMUTE);
+ map_key_clear(LENOVO_KEY_MICMUTE);
return 1;
case 0x00f2: /* Fn-F5: Brightness down */
map_key_clear(KEY_BRIGHTNESSDOWN);
@@ -231,7 +242,7 @@ static int lenovo_input_mapping_tp10_ultrabook_kbd(struct hid_device *hdev,
map_key_clear(KEY_FN_ESC);
return 1;
case 9: /* Fn-F4: Mic mute */
- map_key_clear(KEY_MICMUTE);
+ map_key_clear(LENOVO_KEY_MICMUTE);
return 1;
case 10: /* Fn-F7: Control panel */
map_key_clear(KEY_CONFIG);
@@ -349,7 +360,7 @@ static ssize_t attr_fn_lock_store(struct device *dev,
{
struct hid_device *hdev = to_hid_device(dev);
struct lenovo_drvdata *data = hid_get_drvdata(hdev);
- int value;
+ int value, ret;
if (kstrtoint(buf, 10, &value))
return -EINVAL;
@@ -364,7 +375,9 @@ static ssize_t attr_fn_lock_store(struct device *dev,
lenovo_features_set_cptkbd(hdev);
break;
case USB_DEVICE_ID_LENOVO_TP10UBKBD:
- lenovo_led_set_tp10ubkbd(hdev, TP10UBKBD_FN_LOCK_LED, value);
+ ret = lenovo_led_set_tp10ubkbd(hdev, TP10UBKBD_FN_LOCK_LED, value);
+ if (ret)
+ return ret;
break;
}
@@ -498,6 +511,9 @@ static int lenovo_event_cptkbd(struct hid_device *hdev,
static int lenovo_event(struct hid_device *hdev, struct hid_field *field,
struct hid_usage *usage, __s32 value)
{
+ if (!hid_get_drvdata(hdev))
+ return 0;
+
switch (hdev->product) {
case USB_DEVICE_ID_LENOVO_CUSBKBD:
case USB_DEVICE_ID_LENOVO_CBTKBD:
@@ -777,7 +793,7 @@ static enum led_brightness lenovo_led_brightness_get(
: LED_OFF;
}
-static void lenovo_led_brightness_set(struct led_classdev *led_cdev,
+static int lenovo_led_brightness_set(struct led_classdev *led_cdev,
enum led_brightness value)
{
struct device *dev = led_cdev->dev->parent;
@@ -785,6 +801,7 @@ static void lenovo_led_brightness_set(struct led_classdev *led_cdev,
struct lenovo_drvdata *data_pointer = hid_get_drvdata(hdev);
u8 tp10ubkbd_led[] = { TP10UBKBD_MUTE_LED, TP10UBKBD_MICMUTE_LED };
int led_nr = 0;
+ int ret = 0;
if (led_cdev == &data_pointer->led_micmute)
led_nr = 1;
@@ -799,9 +816,11 @@ static void lenovo_led_brightness_set(struct led_classdev *led_cdev,
lenovo_led_set_tpkbd(hdev);
break;
case USB_DEVICE_ID_LENOVO_TP10UBKBD:
- lenovo_led_set_tp10ubkbd(hdev, tp10ubkbd_led[led_nr], value);
+ ret = lenovo_led_set_tp10ubkbd(hdev, tp10ubkbd_led[led_nr], value);
break;
}
+
+ return ret;
}
static int lenovo_register_leds(struct hid_device *hdev)
@@ -822,7 +841,8 @@ static int lenovo_register_leds(struct hid_device *hdev)
data->led_mute.name = name_mute;
data->led_mute.brightness_get = lenovo_led_brightness_get;
- data->led_mute.brightness_set = lenovo_led_brightness_set;
+ data->led_mute.brightness_set_blocking = lenovo_led_brightness_set;
+ data->led_mute.flags = LED_HW_PLUGGABLE;
data->led_mute.dev = &hdev->dev;
ret = led_classdev_register(&hdev->dev, &data->led_mute);
if (ret < 0)
@@ -830,7 +850,8 @@ static int lenovo_register_leds(struct hid_device *hdev)
data->led_micmute.name = name_micm;
data->led_micmute.brightness_get = lenovo_led_brightness_get;
- data->led_micmute.brightness_set = lenovo_led_brightness_set;
+ data->led_micmute.brightness_set_blocking = lenovo_led_brightness_set;
+ data->led_micmute.flags = LED_HW_PLUGGABLE;
data->led_micmute.dev = &hdev->dev;
ret = led_classdev_register(&hdev->dev, &data->led_micmute);
if (ret < 0) {
diff --git a/drivers/hid/hid-plantronics.c b/drivers/hid/hid-plantronics.c
index 85b685e..e81b7ce 100644
--- a/drivers/hid/hid-plantronics.c
+++ b/drivers/hid/hid-plantronics.c
@@ -13,6 +13,7 @@
#include <linux/hid.h>
#include <linux/module.h>
+#include <linux/jiffies.h>
#define PLT_HID_1_0_PAGE 0xffa00000
#define PLT_HID_2_0_PAGE 0xffa20000
@@ -36,6 +37,16 @@
#define PLT_ALLOW_CONSUMER (field->application == HID_CP_CONSUMERCONTROL && \
(usage->hid & HID_USAGE_PAGE) == HID_UP_CONSUMER)
+#define PLT_QUIRK_DOUBLE_VOLUME_KEYS BIT(0)
+
+#define PLT_DOUBLE_KEY_TIMEOUT 5 /* ms */
+
+struct plt_drv_data {
+ unsigned long device_type;
+ unsigned long last_volume_key_ts;
+ u32 quirks;
+};
+
static int plantronics_input_mapping(struct hid_device *hdev,
struct hid_input *hi,
struct hid_field *field,
@@ -43,7 +54,8 @@ static int plantronics_input_mapping(struct hid_device *hdev,
unsigned long **bit, int *max)
{
unsigned short mapped_key;
- unsigned long plt_type = (unsigned long)hid_get_drvdata(hdev);
+ struct plt_drv_data *drv_data = hid_get_drvdata(hdev);
+ unsigned long plt_type = drv_data->device_type;
/* special case for PTT products */
if (field->application == HID_GD_JOYSTICK)
@@ -105,6 +117,30 @@ static int plantronics_input_mapping(struct hid_device *hdev,
return 1;
}
+static int plantronics_event(struct hid_device *hdev, struct hid_field *field,
+ struct hid_usage *usage, __s32 value)
+{
+ struct plt_drv_data *drv_data = hid_get_drvdata(hdev);
+
+ if (drv_data->quirks & PLT_QUIRK_DOUBLE_VOLUME_KEYS) {
+ unsigned long prev_ts, cur_ts;
+
+ /* Usages are filtered in plantronics_usages. */
+
+ if (!value) /* Handle key presses only. */
+ return 0;
+
+ prev_ts = drv_data->last_volume_key_ts;
+ cur_ts = jiffies;
+ if (jiffies_to_msecs(cur_ts - prev_ts) <= PLT_DOUBLE_KEY_TIMEOUT)
+ return 1; /* Ignore the repeated key. */
+
+ drv_data->last_volume_key_ts = cur_ts;
+ }
+
+ return 0;
+}
+
static unsigned long plantronics_device_type(struct hid_device *hdev)
{
unsigned i, col_page;
@@ -133,15 +169,24 @@ static unsigned long plantronics_device_type(struct hid_device *hdev)
static int plantronics_probe(struct hid_device *hdev,
const struct hid_device_id *id)
{
+ struct plt_drv_data *drv_data;
int ret;
+ drv_data = devm_kzalloc(&hdev->dev, sizeof(*drv_data), GFP_KERNEL);
+ if (!drv_data)
+ return -ENOMEM;
+
ret = hid_parse(hdev);
if (ret) {
hid_err(hdev, "parse failed\n");
goto err;
}
- hid_set_drvdata(hdev, (void *)plantronics_device_type(hdev));
+ drv_data->device_type = plantronics_device_type(hdev);
+ drv_data->quirks = id->driver_data;
+ drv_data->last_volume_key_ts = jiffies - msecs_to_jiffies(PLT_DOUBLE_KEY_TIMEOUT);
+
+ hid_set_drvdata(hdev, drv_data);
ret = hid_hw_start(hdev, HID_CONNECT_DEFAULT |
HID_CONNECT_HIDINPUT_FORCE | HID_CONNECT_HIDDEV_FORCE);
@@ -153,15 +198,26 @@ static int plantronics_probe(struct hid_device *hdev,
}
static const struct hid_device_id plantronics_devices[] = {
+ { HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS,
+ USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3220_SERIES),
+ .driver_data = PLT_QUIRK_DOUBLE_VOLUME_KEYS },
{ HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS, HID_ANY_ID) },
{ }
};
MODULE_DEVICE_TABLE(hid, plantronics_devices);
+static const struct hid_usage_id plantronics_usages[] = {
+ { HID_CP_VOLUMEUP, EV_KEY, HID_ANY_ID },
+ { HID_CP_VOLUMEDOWN, EV_KEY, HID_ANY_ID },
+ { HID_TERMINATOR, HID_TERMINATOR, HID_TERMINATOR }
+};
+
static struct hid_driver plantronics_driver = {
.name = "plantronics",
.id_table = plantronics_devices,
+ .usage_table = plantronics_usages,
.input_mapping = plantronics_input_mapping,
+ .event = plantronics_event,
.probe = plantronics_probe,
};
module_hid_driver(plantronics_driver);
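The Plantronics driver now filters the duplicate volume-key reports some headsets emit by remembering the jiffies timestamp of the last accepted press and dropping a second press that arrives within PLT_DOUBLE_KEY_TIMEOUT. A minimal userspace sketch of the same debounce idea, using a plain millisecond counter instead of jiffies (all names and timings below are illustrative):

#include <stdbool.h>
#include <stdio.h>

#define DOUBLE_KEY_TIMEOUT_MS 5

/* Return true if this press should be ignored as a duplicate. */
static bool is_duplicate(unsigned long *last_ms, unsigned long now_ms)
{
	if (now_ms - *last_ms <= DOUBLE_KEY_TIMEOUT_MS)
		return true;	/* second report of the same press */
	*last_ms = now_ms;	/* accept and remember this press */
	return false;
}

int main(void)
{
	unsigned long last = 0;
	unsigned long events[] = { 100, 102, 250, 251, 400 };

	for (unsigned i = 0; i < sizeof(events) / sizeof(events[0]); i++)
		printf("t=%lums -> %s\n", events[i],
		       is_duplicate(&last, events[i]) ? "ignored" : "accepted");
	return 0;
}

In the driver the comparison is done with jiffies_to_msecs() on the jiffies delta, which is what the millisecond subtraction above stands in for.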
diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
index 44d715c..2d70dc4 100644
--- a/drivers/hid/wacom_wac.c
+++ b/drivers/hid/wacom_wac.c
@@ -2533,7 +2533,7 @@ static void wacom_wac_finger_slot(struct wacom_wac *wacom_wac,
!wacom_wac->shared->is_touch_on) {
if (!wacom_wac->shared->touch_down)
return;
- prox = 0;
+ prox = false;
}
wacom_wac->hid_data.num_received++;
@@ -3574,8 +3574,6 @@ int wacom_setup_pen_input_capabilities(struct input_dev *input_dev,
{
struct wacom_features *features = &wacom_wac->features;
- input_dev->evbit[0] |= BIT_MASK(EV_KEY) | BIT_MASK(EV_ABS);
-
if (!(features->device_type & WACOM_DEVICETYPE_PEN))
return -ENODEV;
@@ -3590,6 +3588,7 @@ int wacom_setup_pen_input_capabilities(struct input_dev *input_dev,
return 0;
}
+ input_dev->evbit[0] |= BIT_MASK(EV_KEY) | BIT_MASK(EV_ABS);
__set_bit(BTN_TOUCH, input_dev->keybit);
__set_bit(ABS_MISC, input_dev->absbit);
@@ -3742,8 +3741,6 @@ int wacom_setup_touch_input_capabilities(struct input_dev *input_dev,
{
struct wacom_features *features = &wacom_wac->features;
- input_dev->evbit[0] |= BIT_MASK(EV_KEY) | BIT_MASK(EV_ABS);
-
if (!(features->device_type & WACOM_DEVICETYPE_TOUCH))
return -ENODEV;
@@ -3756,6 +3753,7 @@ int wacom_setup_touch_input_capabilities(struct input_dev *input_dev,
/* setup has already been done */
return 0;
+ input_dev->evbit[0] |= BIT_MASK(EV_KEY) | BIT_MASK(EV_ABS);
__set_bit(BTN_TOUCH, input_dev->keybit);
if (features->touch_max == 1) {
diff --git a/drivers/hsi/hsi_core.c b/drivers/hsi/hsi_core.c
index 47f0208..a5f92e2 100644
--- a/drivers/hsi/hsi_core.c
+++ b/drivers/hsi/hsi_core.c
@@ -210,8 +210,6 @@ static void hsi_add_client_from_dt(struct hsi_port *port,
if (err)
goto err;
- dev_set_name(&cl->device, "%s", name);
-
err = hsi_of_property_parse_mode(client, "hsi-mode", &mode);
if (err) {
err = hsi_of_property_parse_mode(client, "hsi-rx-mode",
@@ -293,6 +291,7 @@ static void hsi_add_client_from_dt(struct hsi_port *port,
cl->device.release = hsi_client_release;
cl->device.of_node = client;
+ dev_set_name(&cl->device, "%s", name);
if (device_register(&cl->device) < 0) {
pr_err("hsi: failed to register client: %s\n", name);
put_device(&cl->device);
diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
index fbdda99..f064fa6 100644
--- a/drivers/hv/channel.c
+++ b/drivers/hv/channel.c
@@ -583,7 +583,7 @@ static int __vmbus_open(struct vmbus_channel *newchannel,
if (newchannel->rescind) {
err = -ENODEV;
- goto error_free_info;
+ goto error_clean_msglist;
}
err = vmbus_post_msg(open_msg,
diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
index 6be9f56..6476bfe 100644
--- a/drivers/hv/channel_mgmt.c
+++ b/drivers/hv/channel_mgmt.c
@@ -725,6 +725,12 @@ static void init_vp_index(struct vmbus_channel *channel)
free_cpumask_var(available_mask);
}
+#define UNLOAD_DELAY_UNIT_MS 10 /* 10 milliseconds */
+#define UNLOAD_WAIT_MS (100*1000) /* 100 seconds */
+#define UNLOAD_WAIT_LOOPS (UNLOAD_WAIT_MS/UNLOAD_DELAY_UNIT_MS)
+#define UNLOAD_MSG_MS (5*1000) /* Every 5 seconds */
+#define UNLOAD_MSG_LOOPS (UNLOAD_MSG_MS/UNLOAD_DELAY_UNIT_MS)
+
static void vmbus_wait_for_unload(void)
{
int cpu;
@@ -742,12 +748,17 @@ static void vmbus_wait_for_unload(void)
* vmbus_connection.unload_event. If not, the last thing we can do is
* read message pages for all CPUs directly.
*
- * Wait no more than 10 seconds so that the panic path can't get
- * hung forever in case the response message isn't seen.
+ * Wait up to 100 seconds since an Azure host must writeback any dirty
+ * data in its disk cache before the VMbus UNLOAD request will
+ * complete. This flushing has been empirically observed to take up
+ * to 50 seconds in cases with a lot of dirty data, so allow additional
+ * leeway and for inaccuracies in mdelay(). But eventually time out so
+ * that the panic path can't get hung forever in case the response
+ * message isn't seen.
*/
- for (i = 0; i < 1000; i++) {
+ for (i = 1; i <= UNLOAD_WAIT_LOOPS; i++) {
if (completion_done(&vmbus_connection.unload_event))
- break;
+ goto completed;
for_each_online_cpu(cpu) {
struct hv_per_cpu_context *hv_cpu
@@ -770,9 +781,18 @@ static void vmbus_wait_for_unload(void)
vmbus_signal_eom(msg, message_type);
}
- mdelay(10);
- }
+ /*
+ * Give a notice periodically so someone watching the
+ * serial output won't think it is completely hung.
+ */
+ if (!(i % UNLOAD_MSG_LOOPS))
+ pr_notice("Waiting for VMBus UNLOAD to complete\n");
+ mdelay(UNLOAD_DELAY_UNIT_MS);
+ }
+ pr_err("Continuing even though VMBus UNLOAD did not complete\n");
+
+completed:
/*
* We're crashing and already got the UNLOAD_RESPONSE, cleanup all
* maybe-pending messages on all CPUs to be able to receive new
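The timeout arithmetic behind the new macros: UNLOAD_WAIT_LOOPS = UNLOAD_WAIT_MS / UNLOAD_DELAY_UNIT_MS = 100000 / 10 = 10000 iterations of a 10 ms mdelay(), which gives the 100-second ceiling described in the comment, and UNLOAD_MSG_LOOPS = 5000 / 10 = 500, so the "Waiting for VMBus UNLOAD to complete" notice is printed every 500 iterations — about once every 5 seconds.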
diff --git a/drivers/hwmon/lm80.c b/drivers/hwmon/lm80.c
index ac4adb4..97ab491d 100644
--- a/drivers/hwmon/lm80.c
+++ b/drivers/hwmon/lm80.c
@@ -596,7 +596,6 @@ static int lm80_probe(struct i2c_client *client)
struct device *dev = &client->dev;
struct device *hwmon_dev;
struct lm80_data *data;
- int rv;
data = devm_kzalloc(dev, sizeof(struct lm80_data), GFP_KERNEL);
if (!data)
@@ -609,14 +608,8 @@ static int lm80_probe(struct i2c_client *client)
lm80_init_client(client);
/* A few vars need to be filled upon startup */
- rv = lm80_read_value(client, LM80_REG_FAN_MIN(1));
- if (rv < 0)
- return rv;
- data->fan[f_min][0] = rv;
- rv = lm80_read_value(client, LM80_REG_FAN_MIN(2));
- if (rv < 0)
- return rv;
- data->fan[f_min][1] = rv;
+ data->fan[f_min][0] = lm80_read_value(client, LM80_REG_FAN_MIN(1));
+ data->fan[f_min][1] = lm80_read_value(client, LM80_REG_FAN_MIN(2));
hwmon_dev = devm_hwmon_device_register_with_groups(dev, client->name,
data, lm80_groups);
diff --git a/drivers/hwmon/occ/common.c b/drivers/hwmon/occ/common.c
index a717779..d052502 100644
--- a/drivers/hwmon/occ/common.c
+++ b/drivers/hwmon/occ/common.c
@@ -209,9 +209,9 @@ int occ_update_response(struct occ *occ)
return rc;
/* limit the maximum rate of polling the OCC */
- if (time_after(jiffies, occ->last_update + OCC_UPDATE_FREQUENCY)) {
+ if (time_after(jiffies, occ->next_update)) {
rc = occ_poll(occ);
- occ->last_update = jiffies;
+ occ->next_update = jiffies + OCC_UPDATE_FREQUENCY;
} else {
rc = occ->last_error;
}
@@ -1089,6 +1089,7 @@ int occ_setup(struct occ *occ, const char *name)
return rc;
}
+ occ->next_update = jiffies + OCC_UPDATE_FREQUENCY;
occ_parse_poll_response(occ);
rc = occ_setup_sensor_attrs(occ);
diff --git a/drivers/hwmon/occ/common.h b/drivers/hwmon/occ/common.h
index 67e6968..e6df719 100644
--- a/drivers/hwmon/occ/common.h
+++ b/drivers/hwmon/occ/common.h
@@ -99,7 +99,7 @@ struct occ {
u8 poll_cmd_data; /* to perform OCC poll command */
int (*send_cmd)(struct occ *occ, u8 *cmd);
- unsigned long last_update;
+ unsigned long next_update;
struct mutex lock; /* lock OCC access */
struct device *hwmon;
diff --git a/drivers/hwmon/pmbus/pxe1610.c b/drivers/hwmon/pmbus/pxe1610.c
index fa5c5dd..212433e 100644
--- a/drivers/hwmon/pmbus/pxe1610.c
+++ b/drivers/hwmon/pmbus/pxe1610.c
@@ -41,6 +41,15 @@ static int pxe1610_identify(struct i2c_client *client,
info->vrm_version[i] = vr13;
break;
default:
+ /*
+ * If prior pages are available, limit operation
+ * to them
+ */
+ if (i != 0) {
+ info->pages = i;
+ return 0;
+ }
+
return -ENODEV;
}
}
diff --git a/drivers/hwtracing/coresight/coresight-platform.c b/drivers/hwtracing/coresight/coresight-platform.c
index 3629b78..c594f45 100644
--- a/drivers/hwtracing/coresight/coresight-platform.c
+++ b/drivers/hwtracing/coresight/coresight-platform.c
@@ -90,6 +90,12 @@ static void of_coresight_get_ports_legacy(const struct device_node *node,
struct of_endpoint endpoint;
int in = 0, out = 0;
+ /*
+ * Avoid warnings in of_graph_get_next_endpoint()
+ * if the device doesn't have any graph connections
+ */
+ if (!of_graph_is_present(node))
+ return;
do {
ep = of_graph_get_next_endpoint(node, ep);
if (!ep)
diff --git a/drivers/hwtracing/intel_th/gth.c b/drivers/hwtracing/intel_th/gth.c
index f72803a..28509b0 100644
--- a/drivers/hwtracing/intel_th/gth.c
+++ b/drivers/hwtracing/intel_th/gth.c
@@ -543,7 +543,7 @@ static void intel_th_gth_disable(struct intel_th_device *thdev,
output->active = false;
for_each_set_bit(master, gth->output[output->port].master,
- TH_CONFIGURABLE_MASTERS) {
+ TH_CONFIGURABLE_MASTERS + 1) {
gth_master_set(gth, master, -1);
}
spin_unlock(&gth->gth_lock);
@@ -697,7 +697,7 @@ static void intel_th_gth_unassign(struct intel_th_device *thdev,
othdev->output.port = -1;
othdev->output.active = false;
gth->output[port].output = NULL;
- for (master = 0; master <= TH_CONFIGURABLE_MASTERS; master++)
+ for (master = 0; master < TH_CONFIGURABLE_MASTERS + 1; master++)
if (gth->master[master] == port)
gth->master[master] = -1;
spin_unlock(&gth->gth_lock);
diff --git a/drivers/hwtracing/intel_th/pci.c b/drivers/hwtracing/intel_th/pci.c
index 251e75c..817cdb2 100644
--- a/drivers/hwtracing/intel_th/pci.c
+++ b/drivers/hwtracing/intel_th/pci.c
@@ -274,10 +274,20 @@ static const struct pci_device_id intel_th_pci_id_table[] = {
.driver_data = (kernel_ulong_t)&intel_th_2x,
},
{
+ /* Alder Lake-M */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x54a6),
+ .driver_data = (kernel_ulong_t)&intel_th_2x,
+ },
+ {
/* Alder Lake CPU */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x466f),
.driver_data = (kernel_ulong_t)&intel_th_2x,
},
+ {
+ /* Rocket Lake CPU */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x4c19),
+ .driver_data = (kernel_ulong_t)&intel_th_2x,
+ },
{ 0 },
};
diff --git a/drivers/i2c/busses/i2c-cadence.c b/drivers/i2c/busses/i2c-cadence.c
index e4b7f2a..c1bbc4c 100644
--- a/drivers/i2c/busses/i2c-cadence.c
+++ b/drivers/i2c/busses/i2c-cadence.c
@@ -789,7 +789,7 @@ static int cdns_i2c_master_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
bool change_role = false;
#endif
- ret = pm_runtime_get_sync(id->dev);
+ ret = pm_runtime_resume_and_get(id->dev);
if (ret < 0)
return ret;
@@ -911,7 +911,7 @@ static int cdns_reg_slave(struct i2c_client *slave)
if (slave->flags & I2C_CLIENT_TEN)
return -EAFNOSUPPORT;
- ret = pm_runtime_get_sync(id->dev);
+ ret = pm_runtime_resume_and_get(id->dev);
if (ret < 0)
return ret;
@@ -1200,7 +1200,10 @@ static int cdns_i2c_probe(struct platform_device *pdev)
if (IS_ERR(id->membase))
return PTR_ERR(id->membase);
- id->irq = platform_get_irq(pdev, 0);
+ ret = platform_get_irq(pdev, 0);
+ if (ret < 0)
+ return ret;
+ id->irq = ret;
id->adap.owner = THIS_MODULE;
id->adap.dev.of_node = pdev->dev.of_node;
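The i2c-cadence hunks above show the two conversions that repeat across most of the I2C drivers in this merge: pm_runtime_get_sync() is replaced by pm_runtime_resume_and_get(), which drops the usage count itself when resume fails, and the platform_get_irq() return value is checked before being stored. A condensed sketch of both idioms in a hypothetical probe/xfer pair follows; the foo_* names are placeholders, not part of the patch.

#include <linux/platform_device.h>
#include <linux/pm_runtime.h>

static int foo_xfer(struct device *dev)
{
	int ret;

	/*
	 * On failure this already does pm_runtime_put_noidle(), so the
	 * caller does not leak a usage-count reference the way a bare
	 * pm_runtime_get_sync() error path can.
	 */
	ret = pm_runtime_resume_and_get(dev);
	if (ret < 0)
		return ret;

	/* ... perform the transfer ... */

	pm_runtime_put(dev);
	return 0;
}

static int foo_probe(struct platform_device *pdev)
{
	int irq;

	/* platform_get_irq() returns a negative errno on failure. */
	irq = platform_get_irq(pdev, 0);
	if (irq < 0)
		return irq;

	/* ... store irq and request it with devm_request_irq() ... */
	return 0;
}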
diff --git a/drivers/i2c/busses/i2c-designware-master.c b/drivers/i2c/busses/i2c-designware-master.c
index d6425ad..2871cf2 100644
--- a/drivers/i2c/busses/i2c-designware-master.c
+++ b/drivers/i2c/busses/i2c-designware-master.c
@@ -129,6 +129,7 @@ static int i2c_dw_set_timings_master(struct dw_i2c_dev *dev)
if ((comp_param1 & DW_IC_COMP_PARAM_1_SPEED_MODE_MASK)
!= DW_IC_COMP_PARAM_1_SPEED_MODE_HIGH) {
dev_err(dev->dev, "High Speed not supported!\n");
+ t->bus_freq_hz = I2C_MAX_FAST_MODE_FREQ;
dev->master_cfg &= ~DW_IC_CON_SPEED_MASK;
dev->master_cfg |= DW_IC_CON_SPEED_FAST;
dev->hs_hcnt = 0;
diff --git a/drivers/i2c/busses/i2c-emev2.c b/drivers/i2c/busses/i2c-emev2.c
index a08554c..bdff0e6 100644
--- a/drivers/i2c/busses/i2c-emev2.c
+++ b/drivers/i2c/busses/i2c-emev2.c
@@ -395,7 +395,10 @@ static int em_i2c_probe(struct platform_device *pdev)
em_i2c_reset(&priv->adap);
- priv->irq = platform_get_irq(pdev, 0);
+ ret = platform_get_irq(pdev, 0);
+ if (ret < 0)
+ goto err_clk;
+ priv->irq = ret;
ret = devm_request_irq(&pdev->dev, priv->irq, em_i2c_irq_handler, 0,
"em_i2c", priv);
if (ret)
diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c
index 877fe37..e42b87e 100644
--- a/drivers/i2c/busses/i2c-i801.c
+++ b/drivers/i2c/busses/i2c-i801.c
@@ -391,11 +391,9 @@ static int i801_check_post(struct i801_priv *priv, int status)
dev_err(&priv->pci_dev->dev, "Transaction timeout\n");
/* try to stop the current command */
dev_dbg(&priv->pci_dev->dev, "Terminating the current operation\n");
- outb_p(inb_p(SMBHSTCNT(priv)) | SMBHSTCNT_KILL,
- SMBHSTCNT(priv));
+ outb_p(SMBHSTCNT_KILL, SMBHSTCNT(priv));
usleep_range(1000, 2000);
- outb_p(inb_p(SMBHSTCNT(priv)) & (~SMBHSTCNT_KILL),
- SMBHSTCNT(priv));
+ outb_p(0, SMBHSTCNT(priv));
/* Check if it worked */
status = inb_p(SMBHSTSTS(priv));
diff --git a/drivers/i2c/busses/i2c-img-scb.c b/drivers/i2c/busses/i2c-img-scb.c
index 98a8930..8e98794 100644
--- a/drivers/i2c/busses/i2c-img-scb.c
+++ b/drivers/i2c/busses/i2c-img-scb.c
@@ -1057,7 +1057,7 @@ static int img_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
atomic = true;
}
- ret = pm_runtime_get_sync(adap->dev.parent);
+ ret = pm_runtime_resume_and_get(adap->dev.parent);
if (ret < 0)
return ret;
@@ -1158,7 +1158,7 @@ static int img_i2c_init(struct img_i2c *i2c)
u32 rev;
int ret;
- ret = pm_runtime_get_sync(i2c->adap.dev.parent);
+ ret = pm_runtime_resume_and_get(i2c->adap.dev.parent);
if (ret < 0)
return ret;
diff --git a/drivers/i2c/busses/i2c-imx-lpi2c.c b/drivers/i2c/busses/i2c-imx-lpi2c.c
index 9db6ccd..8b9ba05 100644
--- a/drivers/i2c/busses/i2c-imx-lpi2c.c
+++ b/drivers/i2c/busses/i2c-imx-lpi2c.c
@@ -259,7 +259,7 @@ static int lpi2c_imx_master_enable(struct lpi2c_imx_struct *lpi2c_imx)
unsigned int temp;
int ret;
- ret = pm_runtime_get_sync(lpi2c_imx->adapter.dev.parent);
+ ret = pm_runtime_resume_and_get(lpi2c_imx->adapter.dev.parent);
if (ret < 0)
return ret;
diff --git a/drivers/i2c/busses/i2c-imx.c b/drivers/i2c/busses/i2c-imx.c
index e6f8d6e..72af4b4 100644
--- a/drivers/i2c/busses/i2c-imx.c
+++ b/drivers/i2c/busses/i2c-imx.c
@@ -1036,7 +1036,7 @@ static int i2c_imx_xfer(struct i2c_adapter *adapter,
struct imx_i2c_struct *i2c_imx = i2c_get_adapdata(adapter);
int result;
- result = pm_runtime_get_sync(i2c_imx->adapter.dev.parent);
+ result = pm_runtime_resume_and_get(i2c_imx->adapter.dev.parent);
if (result < 0)
return result;
@@ -1280,7 +1280,7 @@ static int i2c_imx_remove(struct platform_device *pdev)
struct imx_i2c_struct *i2c_imx = platform_get_drvdata(pdev);
int irq, ret;
- ret = pm_runtime_get_sync(&pdev->dev);
+ ret = pm_runtime_resume_and_get(&pdev->dev);
if (ret < 0)
return ret;
diff --git a/drivers/i2c/busses/i2c-jz4780.c b/drivers/i2c/busses/i2c-jz4780.c
index cb4a25e..e181db3 100644
--- a/drivers/i2c/busses/i2c-jz4780.c
+++ b/drivers/i2c/busses/i2c-jz4780.c
@@ -526,8 +526,8 @@ static irqreturn_t jz4780_i2c_irq(int irqno, void *dev_id)
i2c_sta = jz4780_i2c_readw(i2c, JZ4780_I2C_STA);
data = *i2c->wbuf;
data &= ~JZ4780_I2C_DC_READ;
- if ((!i2c->stop_hold) && (i2c->cdata->version >=
- ID_X1000))
+ if ((i2c->wt_len == 1) && (!i2c->stop_hold) &&
+ (i2c->cdata->version >= ID_X1000))
data |= X1000_I2C_DC_STOP;
jz4780_i2c_writew(i2c, JZ4780_I2C_DC, data);
i2c->wbuf++;
@@ -826,7 +826,10 @@ static int jz4780_i2c_probe(struct platform_device *pdev)
jz4780_i2c_writew(i2c, JZ4780_I2C_INTM, 0x0);
- i2c->irq = platform_get_irq(pdev, 0);
+ ret = platform_get_irq(pdev, 0);
+ if (ret < 0)
+ goto err;
+ i2c->irq = ret;
ret = devm_request_irq(&pdev->dev, i2c->irq, jz4780_i2c_irq, 0,
dev_name(&pdev->dev), i2c);
if (ret)
diff --git a/drivers/i2c/busses/i2c-mlxbf.c b/drivers/i2c/busses/i2c-mlxbf.c
index 2fb0532..ab261d7 100644
--- a/drivers/i2c/busses/i2c-mlxbf.c
+++ b/drivers/i2c/busses/i2c-mlxbf.c
@@ -2376,6 +2376,8 @@ static int mlxbf_i2c_probe(struct platform_device *pdev)
mlxbf_i2c_init_slave(pdev, priv);
irq = platform_get_irq(pdev, 0);
+ if (irq < 0)
+ return irq;
ret = devm_request_irq(dev, irq, mlxbf_smbus_irq,
IRQF_ONESHOT | IRQF_SHARED | IRQF_PROBE_SHARED,
dev_name(dev), priv);
diff --git a/drivers/i2c/busses/i2c-mt65xx.c b/drivers/i2c/busses/i2c-mt65xx.c
index 2ffd2f3..dcde71a 100644
--- a/drivers/i2c/busses/i2c-mt65xx.c
+++ b/drivers/i2c/busses/i2c-mt65xx.c
@@ -478,8 +478,13 @@ static void mtk_i2c_clock_disable(struct mtk_i2c *i2c)
static void mtk_i2c_init_hw(struct mtk_i2c *i2c)
{
u16 control_reg;
+ u16 intr_stat_reg;
- if (i2c->dev_comp->dma_sync) {
+ mtk_i2c_writew(i2c, I2C_CHN_CLR_FLAG, OFFSET_START);
+ intr_stat_reg = mtk_i2c_readw(i2c, OFFSET_INTR_STAT);
+ mtk_i2c_writew(i2c, intr_stat_reg, OFFSET_INTR_STAT);
+
+ if (i2c->dev_comp->apdma_sync) {
writel(I2C_DMA_WARM_RST, i2c->pdmabase + OFFSET_RST);
udelay(10);
writel(I2C_DMA_CLR_FLAG, i2c->pdmabase + OFFSET_RST);
@@ -564,7 +569,7 @@ static const struct i2c_spec_values *mtk_i2c_get_spec(unsigned int speed)
static int mtk_i2c_max_step_cnt(unsigned int target_speed)
{
- if (target_speed > I2C_MAX_FAST_MODE_FREQ)
+ if (target_speed > I2C_MAX_FAST_MODE_PLUS_FREQ)
return MAX_HS_STEP_CNT_DIV;
else
return MAX_STEP_CNT_DIV;
@@ -635,7 +640,7 @@ static int mtk_i2c_check_ac_timing(struct mtk_i2c *i2c,
if (sda_min > sda_max)
return -3;
- if (check_speed > I2C_MAX_FAST_MODE_FREQ) {
+ if (check_speed > I2C_MAX_FAST_MODE_PLUS_FREQ) {
if (i2c->dev_comp->ltiming_adjust) {
i2c->ac_timing.hs = I2C_TIME_DEFAULT_VALUE |
(sample_cnt << 12) | (high_cnt << 8);
@@ -850,7 +855,7 @@ static int mtk_i2c_do_transfer(struct mtk_i2c *i2c, struct i2c_msg *msgs,
control_reg = mtk_i2c_readw(i2c, OFFSET_CONTROL) &
~(I2C_CONTROL_DIR_CHANGE | I2C_CONTROL_RS);
- if ((i2c->speed_hz > I2C_MAX_FAST_MODE_FREQ) || (left_num >= 1))
+ if ((i2c->speed_hz > I2C_MAX_FAST_MODE_PLUS_FREQ) || (left_num >= 1))
control_reg |= I2C_CONTROL_RS;
if (i2c->op == I2C_MASTER_WRRD)
@@ -1067,7 +1072,8 @@ static int mtk_i2c_transfer(struct i2c_adapter *adap,
}
}
- if (i2c->auto_restart && num >= 2 && i2c->speed_hz > I2C_MAX_FAST_MODE_FREQ)
+ if (i2c->auto_restart && num >= 2 &&
+ i2c->speed_hz > I2C_MAX_FAST_MODE_PLUS_FREQ)
/* ignore the first restart irq after the master code,
* otherwise the first transfer will be discarded.
*/
diff --git a/drivers/i2c/busses/i2c-omap.c b/drivers/i2c/busses/i2c-omap.c
index 12ac421..d4f6c6d 100644
--- a/drivers/i2c/busses/i2c-omap.c
+++ b/drivers/i2c/busses/i2c-omap.c
@@ -1404,9 +1404,9 @@ omap_i2c_probe(struct platform_device *pdev)
pm_runtime_set_autosuspend_delay(omap->dev, OMAP_I2C_PM_TIMEOUT);
pm_runtime_use_autosuspend(omap->dev);
- r = pm_runtime_get_sync(omap->dev);
+ r = pm_runtime_resume_and_get(omap->dev);
if (r < 0)
- goto err_free_mem;
+ goto err_disable_pm;
/*
* Read the Rev hi bit-[15:14] ie scheme this is 1 indicates ver2.
@@ -1513,8 +1513,8 @@ omap_i2c_probe(struct platform_device *pdev)
omap_i2c_write_reg(omap, OMAP_I2C_CON_REG, 0);
pm_runtime_dont_use_autosuspend(omap->dev);
pm_runtime_put_sync(omap->dev);
+err_disable_pm:
pm_runtime_disable(&pdev->dev);
-err_free_mem:
return r;
}
@@ -1525,7 +1525,7 @@ static int omap_i2c_remove(struct platform_device *pdev)
int ret;
i2c_del_adapter(&omap->adapter);
- ret = pm_runtime_get_sync(&pdev->dev);
+ ret = pm_runtime_resume_and_get(&pdev->dev);
if (ret < 0)
return ret;
diff --git a/drivers/i2c/busses/i2c-rcar.c b/drivers/i2c/busses/i2c-rcar.c
index ad6630e..8722ca2 100644
--- a/drivers/i2c/busses/i2c-rcar.c
+++ b/drivers/i2c/busses/i2c-rcar.c
@@ -625,20 +625,11 @@ static bool rcar_i2c_slave_irq(struct rcar_i2c_priv *priv)
* generated. It turned out that taking a spinlock at the beginning of the ISR
* was already causing repeated messages. Thus, this driver was converted to
* the now lockless behaviour. Please keep this in mind when hacking the driver.
+ * R-Car Gen3 seems to have this fixed but earlier versions than R-Car Gen2 are
+ * likely affected. Therefore, we have different interrupt handler entries.
*/
-static irqreturn_t rcar_i2c_irq(int irq, void *ptr)
+static irqreturn_t rcar_i2c_irq(int irq, struct rcar_i2c_priv *priv, u32 msr)
{
- struct rcar_i2c_priv *priv = ptr;
- u32 msr;
-
- /* Clear START or STOP immediately, except for REPSTART after read */
- if (likely(!(priv->flags & ID_P_REP_AFTER_RD)))
- rcar_i2c_write(priv, ICMCR, RCAR_BUS_PHASE_DATA);
-
- msr = rcar_i2c_read(priv, ICMSR);
-
- /* Only handle interrupts that are currently enabled */
- msr &= rcar_i2c_read(priv, ICMIER);
if (!msr) {
if (rcar_i2c_slave_irq(priv))
return IRQ_HANDLED;
@@ -682,6 +673,41 @@ static irqreturn_t rcar_i2c_irq(int irq, void *ptr)
return IRQ_HANDLED;
}
+static irqreturn_t rcar_i2c_gen2_irq(int irq, void *ptr)
+{
+ struct rcar_i2c_priv *priv = ptr;
+ u32 msr;
+
+ /* Clear START or STOP immediately, except for REPSTART after read */
+ if (likely(!(priv->flags & ID_P_REP_AFTER_RD)))
+ rcar_i2c_write(priv, ICMCR, RCAR_BUS_PHASE_DATA);
+
+ /* Only handle interrupts that are currently enabled */
+ msr = rcar_i2c_read(priv, ICMSR);
+ msr &= rcar_i2c_read(priv, ICMIER);
+
+ return rcar_i2c_irq(irq, priv, msr);
+}
+
+static irqreturn_t rcar_i2c_gen3_irq(int irq, void *ptr)
+{
+ struct rcar_i2c_priv *priv = ptr;
+ u32 msr;
+
+ /* Only handle interrupts that are currently enabled */
+ msr = rcar_i2c_read(priv, ICMSR);
+ msr &= rcar_i2c_read(priv, ICMIER);
+
+ /*
+ * Clear START or STOP immediately, except for REPSTART after read or
+ * if a spurious interrupt was detected.
+ */
+ if (likely(!(priv->flags & ID_P_REP_AFTER_RD) && msr))
+ rcar_i2c_write(priv, ICMCR, RCAR_BUS_PHASE_DATA);
+
+ return rcar_i2c_irq(irq, priv, msr);
+}
+
static struct dma_chan *rcar_i2c_request_dma_chan(struct device *dev,
enum dma_transfer_direction dir,
dma_addr_t port_addr)
@@ -928,6 +954,8 @@ static int rcar_i2c_probe(struct platform_device *pdev)
struct rcar_i2c_priv *priv;
struct i2c_adapter *adap;
struct device *dev = &pdev->dev;
+ unsigned long irqflags = 0;
+ irqreturn_t (*irqhandler)(int irq, void *ptr) = rcar_i2c_gen3_irq;
int ret;
/* Otherwise logic will break because some bytes must always use PIO */
@@ -976,6 +1004,11 @@ static int rcar_i2c_probe(struct platform_device *pdev)
rcar_i2c_write(priv, ICSAR, 0); /* Gen2: must be 0 if not using slave */
+ if (priv->devtype < I2C_RCAR_GEN3) {
+ irqflags |= IRQF_NO_THREAD;
+ irqhandler = rcar_i2c_gen2_irq;
+ }
+
if (priv->devtype == I2C_RCAR_GEN3) {
priv->rstc = devm_reset_control_get_exclusive(&pdev->dev, NULL);
if (!IS_ERR(priv->rstc)) {
@@ -994,8 +1027,11 @@ static int rcar_i2c_probe(struct platform_device *pdev)
if (of_property_read_bool(dev->of_node, "smbus"))
priv->flags |= ID_P_HOST_NOTIFY;
- priv->irq = platform_get_irq(pdev, 0);
- ret = devm_request_irq(dev, priv->irq, rcar_i2c_irq, 0, dev_name(dev), priv);
+ ret = platform_get_irq(pdev, 0);
+ if (ret < 0)
+ goto out_pm_disable;
+ priv->irq = ret;
+ ret = devm_request_irq(dev, priv->irq, irqhandler, irqflags, dev_name(dev), priv);
if (ret < 0) {
dev_err(dev, "cannot get irq %d\n", priv->irq);
goto out_pm_disable;
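The i2c-rcar change above splits the ISR so Gen2 and earlier clear START/STOP before reading the status register while Gen3 checks for a spurious interrupt first, and the handler plus IRQF_NO_THREAD are chosen at probe time. A schematic sketch of selecting an IRQ handler per hardware generation is below; the foo_* names and the generation check are placeholders used only to show the shape of the code.

#include <linux/interrupt.h>
#include <linux/platform_device.h>

static irqreturn_t foo_gen2_irq(int irq, void *ptr) { return IRQ_HANDLED; }
static irqreturn_t foo_gen3_irq(int irq, void *ptr) { return IRQ_HANDLED; }

static int foo_request_irq(struct platform_device *pdev, int devtype,
			   int gen3_devtype)
{
	irqreturn_t (*handler)(int, void *) = foo_gen3_irq;
	unsigned long irqflags = 0;
	int irq;

	if (devtype < gen3_devtype) {
		/* keep the lockless ISR atomic on older generations */
		irqflags |= IRQF_NO_THREAD;
		handler = foo_gen2_irq;
	}

	irq = platform_get_irq(pdev, 0);
	if (irq < 0)
		return irq;

	return devm_request_irq(&pdev->dev, irq, handler, irqflags,
				dev_name(&pdev->dev), pdev);
}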
diff --git a/drivers/i2c/busses/i2c-s3c2410.c b/drivers/i2c/busses/i2c-s3c2410.c
index 3eafe0e..40fa9e4 100644
--- a/drivers/i2c/busses/i2c-s3c2410.c
+++ b/drivers/i2c/busses/i2c-s3c2410.c
@@ -483,7 +483,10 @@ static int i2c_s3c_irq_nextbyte(struct s3c24xx_i2c *i2c, unsigned long iicstat)
* forces us to send a new START
* when we change direction
*/
+ dev_dbg(i2c->dev,
+ "missing START before write->read\n");
s3c24xx_i2c_stop(i2c, -EINVAL);
+ break;
}
goto retry_write;
diff --git a/drivers/i2c/busses/i2c-sh7760.c b/drivers/i2c/busses/i2c-sh7760.c
index c2005c7..319d1fa 100644
--- a/drivers/i2c/busses/i2c-sh7760.c
+++ b/drivers/i2c/busses/i2c-sh7760.c
@@ -471,7 +471,10 @@ static int sh7760_i2c_probe(struct platform_device *pdev)
goto out2;
}
- id->irq = platform_get_irq(pdev, 0);
+ ret = platform_get_irq(pdev, 0);
+ if (ret < 0)
+ goto out3;
+ id->irq = ret;
id->adap.nr = pdev->id;
id->adap.algo = &sh7760_i2c_algo;
diff --git a/drivers/i2c/busses/i2c-sh_mobile.c b/drivers/i2c/busses/i2c-sh_mobile.c
index bdd60770..c253535 100644
--- a/drivers/i2c/busses/i2c-sh_mobile.c
+++ b/drivers/i2c/busses/i2c-sh_mobile.c
@@ -807,7 +807,7 @@ static const struct sh_mobile_dt_config r8a7740_dt_config = {
static const struct of_device_id sh_mobile_i2c_dt_ids[] = {
{ .compatible = "renesas,iic-r8a73a4", .data = &fast_clock_dt_config },
{ .compatible = "renesas,iic-r8a7740", .data = &r8a7740_dt_config },
- { .compatible = "renesas,iic-r8a774c0", .data = &fast_clock_dt_config },
+ { .compatible = "renesas,iic-r8a774c0", .data = &v2_freq_calc_dt_config },
{ .compatible = "renesas,iic-r8a7790", .data = &v2_freq_calc_dt_config },
{ .compatible = "renesas,iic-r8a7791", .data = &v2_freq_calc_dt_config },
{ .compatible = "renesas,iic-r8a7792", .data = &v2_freq_calc_dt_config },
diff --git a/drivers/i2c/busses/i2c-sprd.c b/drivers/i2c/busses/i2c-sprd.c
index 2917fec..8ead7e0 100644
--- a/drivers/i2c/busses/i2c-sprd.c
+++ b/drivers/i2c/busses/i2c-sprd.c
@@ -290,7 +290,7 @@ static int sprd_i2c_master_xfer(struct i2c_adapter *i2c_adap,
struct sprd_i2c *i2c_dev = i2c_adap->algo_data;
int im, ret;
- ret = pm_runtime_get_sync(i2c_dev->dev);
+ ret = pm_runtime_resume_and_get(i2c_dev->dev);
if (ret < 0)
return ret;
@@ -576,7 +576,7 @@ static int sprd_i2c_remove(struct platform_device *pdev)
struct sprd_i2c *i2c_dev = platform_get_drvdata(pdev);
int ret;
- ret = pm_runtime_get_sync(i2c_dev->dev);
+ ret = pm_runtime_resume_and_get(i2c_dev->dev);
if (ret < 0)
return ret;
diff --git a/drivers/i2c/busses/i2c-stm32f7.c b/drivers/i2c/busses/i2c-stm32f7.c
index 6747353..1e800b6 100644
--- a/drivers/i2c/busses/i2c-stm32f7.c
+++ b/drivers/i2c/busses/i2c-stm32f7.c
@@ -1652,7 +1652,7 @@ static int stm32f7_i2c_xfer(struct i2c_adapter *i2c_adap,
i2c_dev->msg_id = 0;
f7_msg->smbus = false;
- ret = pm_runtime_get_sync(i2c_dev->dev);
+ ret = pm_runtime_resume_and_get(i2c_dev->dev);
if (ret < 0)
return ret;
@@ -1698,7 +1698,7 @@ static int stm32f7_i2c_smbus_xfer(struct i2c_adapter *adapter, u16 addr,
f7_msg->read_write = read_write;
f7_msg->smbus = true;
- ret = pm_runtime_get_sync(dev);
+ ret = pm_runtime_resume_and_get(dev);
if (ret < 0)
return ret;
@@ -1799,7 +1799,7 @@ static int stm32f7_i2c_reg_slave(struct i2c_client *slave)
if (ret)
return ret;
- ret = pm_runtime_get_sync(dev);
+ ret = pm_runtime_resume_and_get(dev);
if (ret < 0)
return ret;
@@ -1880,7 +1880,7 @@ static int stm32f7_i2c_unreg_slave(struct i2c_client *slave)
WARN_ON(!i2c_dev->slave[id]);
- ret = pm_runtime_get_sync(i2c_dev->dev);
+ ret = pm_runtime_resume_and_get(i2c_dev->dev);
if (ret < 0)
return ret;
@@ -2277,7 +2277,7 @@ static int stm32f7_i2c_regs_backup(struct stm32f7_i2c_dev *i2c_dev)
int ret;
struct stm32f7_i2c_regs *backup_regs = &i2c_dev->backup_regs;
- ret = pm_runtime_get_sync(i2c_dev->dev);
+ ret = pm_runtime_resume_and_get(i2c_dev->dev);
if (ret < 0)
return ret;
@@ -2299,7 +2299,7 @@ static int stm32f7_i2c_regs_restore(struct stm32f7_i2c_dev *i2c_dev)
int ret;
struct stm32f7_i2c_regs *backup_regs = &i2c_dev->backup_regs;
- ret = pm_runtime_get_sync(i2c_dev->dev);
+ ret = pm_runtime_resume_and_get(i2c_dev->dev);
if (ret < 0)
return ret;
diff --git a/drivers/i2c/busses/i2c-xiic.c b/drivers/i2c/busses/i2c-xiic.c
index 087b295..2a8568b 100644
--- a/drivers/i2c/busses/i2c-xiic.c
+++ b/drivers/i2c/busses/i2c-xiic.c
@@ -706,7 +706,7 @@ static int xiic_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs, int num)
dev_dbg(adap->dev.parent, "%s entry SR: 0x%x\n", __func__,
xiic_getreg8(i2c, XIIC_SR_REG_OFFSET));
- err = pm_runtime_get_sync(i2c->dev);
+ err = pm_runtime_resume_and_get(i2c->dev);
if (err < 0)
return err;
@@ -873,7 +873,7 @@ static int xiic_i2c_remove(struct platform_device *pdev)
/* remove adapter & data */
i2c_del_adapter(&i2c->adap);
- ret = pm_runtime_get_sync(i2c->dev);
+ ret = pm_runtime_resume_and_get(i2c->dev);
if (ret < 0)
return ret;
diff --git a/drivers/i2c/i2c-core-base.c b/drivers/i2c/i2c-core-base.c
index 573b5da..c13e7f10 100644
--- a/drivers/i2c/i2c-core-base.c
+++ b/drivers/i2c/i2c-core-base.c
@@ -378,7 +378,7 @@ static int i2c_gpio_init_recovery(struct i2c_adapter *adap)
static int i2c_init_recovery(struct i2c_adapter *adap)
{
struct i2c_bus_recovery_info *bri = adap->bus_recovery_info;
- char *err_str;
+ char *err_str, *err_level = KERN_ERR;
if (!bri)
return 0;
@@ -387,7 +387,8 @@ static int i2c_init_recovery(struct i2c_adapter *adap)
return -EPROBE_DEFER;
if (!bri->recover_bus) {
- err_str = "no recover_bus() found";
+ err_str = "no suitable method provided";
+ err_level = KERN_DEBUG;
goto err;
}
@@ -414,7 +415,7 @@ static int i2c_init_recovery(struct i2c_adapter *adap)
return 0;
err:
- dev_err(&adap->dev, "Not using recovery: %s\n", err_str);
+ dev_printk(err_level, &adap->dev, "Not using recovery: %s\n", err_str);
adap->bus_recovery_info = NULL;
return -EINVAL;
diff --git a/drivers/i2c/i2c-dev.c b/drivers/i2c/i2c-dev.c
index 6ceb11c..6ef38a8 100644
--- a/drivers/i2c/i2c-dev.c
+++ b/drivers/i2c/i2c-dev.c
@@ -440,8 +440,13 @@ static long i2cdev_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
sizeof(rdwr_arg)))
return -EFAULT;
- /* Put an arbitrary limit on the number of messages that can
- * be sent at once */
+ if (!rdwr_arg.msgs || rdwr_arg.nmsgs == 0)
+ return -EINVAL;
+
+ /*
+ * Put an arbitrary limit on the number of messages that can
+ * be sent at once
+ */
if (rdwr_arg.nmsgs > I2C_RDWR_IOCTL_MAX_MSGS)
return -EINVAL;
diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c
index b61bf53..1c6b78a 100644
--- a/drivers/i3c/master.c
+++ b/drivers/i3c/master.c
@@ -2537,7 +2537,7 @@ int i3c_master_register(struct i3c_master_controller *master,
ret = i3c_master_bus_init(master);
if (ret)
- goto err_destroy_wq;
+ goto err_put_dev;
ret = device_add(&master->dev);
if (ret)
@@ -2568,9 +2568,6 @@ int i3c_master_register(struct i3c_master_controller *master,
err_cleanup_bus:
i3c_master_bus_cleanup(master);
-err_destroy_wq:
- destroy_workqueue(master->wq);
-
err_put_dev:
put_device(&master->dev);
diff --git a/drivers/iio/accel/Kconfig b/drivers/iio/accel/Kconfig
index 2e0c62c..8acf277 100644
--- a/drivers/iio/accel/Kconfig
+++ b/drivers/iio/accel/Kconfig
@@ -211,7 +211,6 @@
config HID_SENSOR_ACCEL_3D
depends on HID_SENSOR_HUB
select IIO_BUFFER
- select IIO_TRIGGERED_BUFFER
select HID_SENSOR_IIO_COMMON
select HID_SENSOR_IIO_TRIGGER
tristate "HID Accelerometers 3D"
diff --git a/drivers/iio/accel/adis16201.c b/drivers/iio/accel/adis16201.c
index f955ccc..84bbdfd 100644
--- a/drivers/iio/accel/adis16201.c
+++ b/drivers/iio/accel/adis16201.c
@@ -215,7 +215,7 @@ static const struct iio_chan_spec adis16201_channels[] = {
ADIS_AUX_ADC_CHAN(ADIS16201_AUX_ADC_REG, ADIS16201_SCAN_AUX_ADC, 0, 12),
ADIS_INCLI_CHAN(X, ADIS16201_XINCL_OUT_REG, ADIS16201_SCAN_INCLI_X,
BIT(IIO_CHAN_INFO_CALIBBIAS), 0, 14),
- ADIS_INCLI_CHAN(X, ADIS16201_YINCL_OUT_REG, ADIS16201_SCAN_INCLI_Y,
+ ADIS_INCLI_CHAN(Y, ADIS16201_YINCL_OUT_REG, ADIS16201_SCAN_INCLI_Y,
BIT(IIO_CHAN_INFO_CALIBBIAS), 0, 14),
IIO_CHAN_SOFT_TIMESTAMP(7)
};
diff --git a/drivers/iio/adc/Kconfig b/drivers/iio/adc/Kconfig
index 86fda61..e39b679 100644
--- a/drivers/iio/adc/Kconfig
+++ b/drivers/iio/adc/Kconfig
@@ -249,7 +249,7 @@
config AD9467
tristate "Analog Devices AD9467 High Speed ADC driver"
depends on SPI
- select ADI_AXI_ADC
+ depends on ADI_AXI_ADC
help
Say yes here to build support for Analog Devices:
* AD9467 16-Bit, 200 MSPS/250 MSPS Analog-to-Digital Converter
diff --git a/drivers/iio/adc/ad7124.c b/drivers/iio/adc/ad7124.c
index 766c733..9c2401c 100644
--- a/drivers/iio/adc/ad7124.c
+++ b/drivers/iio/adc/ad7124.c
@@ -616,6 +616,13 @@ static int ad7124_of_parse_channel_config(struct iio_dev *indio_dev,
if (ret)
goto err;
+ if (channel >= indio_dev->num_channels) {
+ dev_err(indio_dev->dev.parent,
+ "Channel index >= number of channels\n");
+ ret = -EINVAL;
+ goto err;
+ }
+
ret = of_property_read_u32_array(child, "diff-channels",
ain, 2);
if (ret)
@@ -707,6 +714,11 @@ static int ad7124_setup(struct ad7124_state *st)
return ret;
}
+static void ad7124_reg_disable(void *r)
+{
+ regulator_disable(r);
+}
+
static int ad7124_probe(struct spi_device *spi)
{
const struct ad7124_chip_info *info;
@@ -752,17 +764,20 @@ static int ad7124_probe(struct spi_device *spi)
ret = regulator_enable(st->vref[i]);
if (ret)
return ret;
+
+ ret = devm_add_action_or_reset(&spi->dev, ad7124_reg_disable,
+ st->vref[i]);
+ if (ret)
+ return ret;
}
st->mclk = devm_clk_get(&spi->dev, "mclk");
- if (IS_ERR(st->mclk)) {
- ret = PTR_ERR(st->mclk);
- goto error_regulator_disable;
- }
+ if (IS_ERR(st->mclk))
+ return PTR_ERR(st->mclk);
ret = clk_prepare_enable(st->mclk);
if (ret < 0)
- goto error_regulator_disable;
+ return ret;
ret = ad7124_soft_reset(st);
if (ret < 0)
@@ -792,11 +807,6 @@ static int ad7124_probe(struct spi_device *spi)
ad_sd_cleanup_buffer_and_trigger(indio_dev);
error_clk_disable_unprepare:
clk_disable_unprepare(st->mclk);
-error_regulator_disable:
- for (i = ARRAY_SIZE(st->vref) - 1; i >= 0; i--) {
- if (!IS_ERR_OR_NULL(st->vref[i]))
- regulator_disable(st->vref[i]);
- }
return ret;
}
@@ -805,17 +815,11 @@ static int ad7124_remove(struct spi_device *spi)
{
struct iio_dev *indio_dev = spi_get_drvdata(spi);
struct ad7124_state *st = iio_priv(indio_dev);
- int i;
iio_device_unregister(indio_dev);
ad_sd_cleanup_buffer_and_trigger(indio_dev);
clk_disable_unprepare(st->mclk);
- for (i = ARRAY_SIZE(st->vref) - 1; i >= 0; i--) {
- if (!IS_ERR_OR_NULL(st->vref[i]))
- regulator_disable(st->vref[i]);
- }
-
return 0;
}
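The ad7124 hunks above replace hand-rolled regulator cleanup in the error and remove paths with devm_add_action_or_reset(), so the disable runs automatically when probe fails or the device is unbound. A minimal sketch of that pattern for a single supply is below; names other than the regulator/devm APIs are placeholders.

#include <linux/device.h>
#include <linux/regulator/consumer.h>

static void foo_reg_disable(void *reg)
{
	regulator_disable(reg);
}

static int foo_enable_supply(struct device *dev, struct regulator *reg)
{
	int ret;

	ret = regulator_enable(reg);
	if (ret)
		return ret;

	/*
	 * If registering the action fails, it runs immediately; otherwise
	 * it runs automatically on driver detach, so no explicit error
	 * path or remove() code is needed for the disable.
	 */
	return devm_add_action_or_reset(dev, foo_reg_disable, reg);
}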
diff --git a/drivers/iio/adc/ad7192.c b/drivers/iio/adc/ad7192.c
index 2ed5805..1141cc1 100644
--- a/drivers/iio/adc/ad7192.c
+++ b/drivers/iio/adc/ad7192.c
@@ -912,7 +912,7 @@ static int ad7192_probe(struct spi_device *spi)
{
struct ad7192_state *st;
struct iio_dev *indio_dev;
- int ret, voltage_uv = 0;
+ int ret;
if (!spi->irq) {
dev_err(&spi->dev, "no IRQ?\n");
@@ -949,15 +949,12 @@ static int ad7192_probe(struct spi_device *spi)
goto error_disable_avdd;
}
- voltage_uv = regulator_get_voltage(st->avdd);
-
- if (voltage_uv > 0) {
- st->int_vref_mv = voltage_uv / 1000;
- } else {
- ret = voltage_uv;
+ ret = regulator_get_voltage(st->avdd);
+ if (ret < 0) {
dev_err(&spi->dev, "Device tree error, reference voltage undefined\n");
goto error_disable_avdd;
}
+ st->int_vref_mv = ret / 1000;
spi_set_drvdata(spi, indio_dev);
st->chip_info = of_device_get_match_data(&spi->dev);
@@ -1014,7 +1011,9 @@ static int ad7192_probe(struct spi_device *spi)
return 0;
error_disable_clk:
- clk_disable_unprepare(st->mclk);
+ if (st->clock_sel == AD7192_CLK_EXT_MCLK1_2 ||
+ st->clock_sel == AD7192_CLK_EXT_MCLK2)
+ clk_disable_unprepare(st->mclk);
error_remove_trigger:
ad_sd_cleanup_buffer_and_trigger(indio_dev);
error_disable_dvdd:
@@ -1031,7 +1030,9 @@ static int ad7192_remove(struct spi_device *spi)
struct ad7192_state *st = iio_priv(indio_dev);
iio_device_unregister(indio_dev);
- clk_disable_unprepare(st->mclk);
+ if (st->clock_sel == AD7192_CLK_EXT_MCLK1_2 ||
+ st->clock_sel == AD7192_CLK_EXT_MCLK2)
+ clk_disable_unprepare(st->mclk);
ad_sd_cleanup_buffer_and_trigger(indio_dev);
regulator_disable(st->dvdd);
diff --git a/drivers/iio/adc/ad7476.c b/drivers/iio/adc/ad7476.c
index 66c55ae..bf55726 100644
--- a/drivers/iio/adc/ad7476.c
+++ b/drivers/iio/adc/ad7476.c
@@ -316,25 +316,15 @@ static int ad7476_probe(struct spi_device *spi)
spi_message_init(&st->msg);
spi_message_add_tail(&st->xfer, &st->msg);
- ret = iio_triggered_buffer_setup(indio_dev, NULL,
- &ad7476_trigger_handler, NULL);
+ ret = devm_iio_triggered_buffer_setup(&spi->dev, indio_dev, NULL,
+ &ad7476_trigger_handler, NULL);
if (ret)
- goto error_disable_reg;
+ return ret;
if (st->chip_info->reset)
st->chip_info->reset(st);
- ret = iio_device_register(indio_dev);
- if (ret)
- goto error_ring_unregister;
- return 0;
-
-error_ring_unregister:
- iio_triggered_buffer_cleanup(indio_dev);
-error_disable_reg:
- regulator_disable(st->reg);
-
- return ret;
+ return devm_iio_device_register(&spi->dev, indio_dev);
}
static const struct spi_device_id ad7476_id[] = {
diff --git a/drivers/iio/adc/ad7768-1.c b/drivers/iio/adc/ad7768-1.c
index 0e93b07..c7e15c4 100644
--- a/drivers/iio/adc/ad7768-1.c
+++ b/drivers/iio/adc/ad7768-1.c
@@ -166,6 +166,10 @@ struct ad7768_state {
* transfer buffers to live in their own cache lines.
*/
union {
+ struct {
+ __be32 chan;
+ s64 timestamp;
+ } scan;
__be32 d32;
u8 d8[2];
} data ____cacheline_aligned;
@@ -459,11 +463,11 @@ static irqreturn_t ad7768_trigger_handler(int irq, void *p)
mutex_lock(&st->lock);
- ret = spi_read(st->spi, &st->data.d32, 3);
+ ret = spi_read(st->spi, &st->data.scan.chan, 3);
if (ret < 0)
goto err_unlock;
- iio_push_to_buffers_with_timestamp(indio_dev, &st->data.d32,
+ iio_push_to_buffers_with_timestamp(indio_dev, &st->data.scan,
iio_get_time_ns(indio_dev));
iio_trigger_notify_done(indio_dev->trig);
diff --git a/drivers/iio/adc/ad7793.c b/drivers/iio/adc/ad7793.c
index 5e980a06..440ef4c 100644
--- a/drivers/iio/adc/ad7793.c
+++ b/drivers/iio/adc/ad7793.c
@@ -279,6 +279,7 @@ static int ad7793_setup(struct iio_dev *indio_dev,
id &= AD7793_ID_MASK;
if (id != st->chip_info->id) {
+ ret = -ENODEV;
dev_err(&st->sd.spi->dev, "device ID query failed\n");
goto out;
}
diff --git a/drivers/iio/adc/ad7923.c b/drivers/iio/adc/ad7923.c
index a2cc966..8c1e866 100644
--- a/drivers/iio/adc/ad7923.c
+++ b/drivers/iio/adc/ad7923.c
@@ -59,8 +59,10 @@ struct ad7923_state {
/*
* DMA (thus cache coherency maintenance) requires the
* transfer buffers to live in their own cache lines.
+ * Ensure rx_buf can be directly used in iio_push_to_buffers_with_timetamp
+ * Length = 8 channels + 4 extra for 8 byte timestamp
*/
- __be16 rx_buf[4] ____cacheline_aligned;
+ __be16 rx_buf[12] ____cacheline_aligned;
__be16 tx_buf[4];
};
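The ad7923 change above grows rx_buf from 4 to 12 __be16 entries. The arithmetic, as the new comment hints: up to 8 channels of 2-byte samples is 16 bytes, and iio_push_to_buffers_with_timestamp() appends an 8-byte timestamp at an 8-byte-aligned offset, so the buffer must hold 16 + 8 = 24 bytes, i.e. 12 __be16 slots. A hedged sketch of a scan buffer declared with the same reasoning, using a hypothetical channel count:

#include <linux/types.h>

#define FOO_MAX_CHANNELS	8	/* hypothetical channel count */

/*
 * Scan data followed by a naturally aligned 64-bit timestamp, sized so
 * iio_push_to_buffers_with_timestamp() can write the timestamp in place:
 * 8 * 2 bytes of samples + 8 bytes of timestamp = 24 bytes = 12 * __be16.
 */
struct foo_scan {
	__be16 channels[FOO_MAX_CHANNELS];
	s64 timestamp __aligned(8);
};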
diff --git a/drivers/iio/common/hid-sensors/Kconfig b/drivers/iio/common/hid-sensors/Kconfig
index 24d4925..2a3dd3b 100644
--- a/drivers/iio/common/hid-sensors/Kconfig
+++ b/drivers/iio/common/hid-sensors/Kconfig
@@ -19,6 +19,7 @@
tristate "Common module (trigger) for all HID Sensor IIO drivers"
depends on HID_SENSOR_HUB && HID_SENSOR_IIO_COMMON && IIO_BUFFER
select IIO_TRIGGER
+ select IIO_TRIGGERED_BUFFER
help
Say yes here to build trigger support for HID sensors.
Triggers will be send if all requested attributes were read.
diff --git a/drivers/iio/dac/ad5770r.c b/drivers/iio/dac/ad5770r.c
index 84dcf14..42decba 100644
--- a/drivers/iio/dac/ad5770r.c
+++ b/drivers/iio/dac/ad5770r.c
@@ -524,23 +524,29 @@ static int ad5770r_channel_config(struct ad5770r_state *st)
device_for_each_child_node(&st->spi->dev, child) {
ret = fwnode_property_read_u32(child, "num", &num);
if (ret)
- return ret;
- if (num >= AD5770R_MAX_CHANNELS)
- return -EINVAL;
+ goto err_child_out;
+ if (num >= AD5770R_MAX_CHANNELS) {
+ ret = -EINVAL;
+ goto err_child_out;
+ }
ret = fwnode_property_read_u32_array(child,
"adi,range-microamp",
tmp, 2);
if (ret)
- return ret;
+ goto err_child_out;
min = tmp[0] / 1000;
max = tmp[1] / 1000;
ret = ad5770r_store_output_range(st, min, max, num);
if (ret)
- return ret;
+ goto err_child_out;
}
+ return 0;
+
+err_child_out:
+ fwnode_handle_put(child);
return ret;
}
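The ad5770r fix above routes every early exit from the device_for_each_child_node() loop through fwnode_handle_put(), because the iterator holds a reference on the current child that must be dropped when the loop is left early. A minimal sketch of the pattern, with foo_* placeholders:

#include <linux/device.h>
#include <linux/property.h>

static int foo_parse_children(struct device *dev)
{
	struct fwnode_handle *child;
	u32 num;
	int ret;

	device_for_each_child_node(dev, child) {
		ret = fwnode_property_read_u32(child, "num", &num);
		if (ret)
			goto err_child_out;
		/* ... further per-child parsing ... */
	}
	return 0;

err_child_out:
	/* Drop the reference the iterator took on the current child. */
	fwnode_handle_put(child);
	return ret;
}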
diff --git a/drivers/iio/gyro/Kconfig b/drivers/iio/gyro/Kconfig
index 5824f2e..20b5ac7ab 100644
--- a/drivers/iio/gyro/Kconfig
+++ b/drivers/iio/gyro/Kconfig
@@ -111,7 +111,6 @@
config HID_SENSOR_GYRO_3D
depends on HID_SENSOR_HUB
select IIO_BUFFER
- select IIO_TRIGGERED_BUFFER
select HID_SENSOR_IIO_COMMON
select HID_SENSOR_IIO_TRIGGER
tristate "HID Gyroscope 3D"
diff --git a/drivers/iio/gyro/fxas21002c_core.c b/drivers/iio/gyro/fxas21002c_core.c
index 129eead..b752335 100644
--- a/drivers/iio/gyro/fxas21002c_core.c
+++ b/drivers/iio/gyro/fxas21002c_core.c
@@ -399,6 +399,7 @@ static int fxas21002c_temp_get(struct fxas21002c_data *data, int *val)
ret = regmap_field_read(data->regmap_fields[F_TEMP], &temp);
if (ret < 0) {
dev_err(dev, "failed to read temp: %d\n", ret);
+ fxas21002c_pm_put(data);
goto data_unlock;
}
@@ -432,6 +433,7 @@ static int fxas21002c_axis_get(struct fxas21002c_data *data,
&axis_be, sizeof(axis_be));
if (ret < 0) {
dev_err(dev, "failed to read axis: %d: %d\n", index, ret);
+ fxas21002c_pm_put(data);
goto data_unlock;
}
diff --git a/drivers/iio/gyro/mpu3050-core.c b/drivers/iio/gyro/mpu3050-core.c
index 8ea6c2a..39e1c43 100644
--- a/drivers/iio/gyro/mpu3050-core.c
+++ b/drivers/iio/gyro/mpu3050-core.c
@@ -271,7 +271,16 @@ static int mpu3050_read_raw(struct iio_dev *indio_dev,
case IIO_CHAN_INFO_OFFSET:
switch (chan->type) {
case IIO_TEMP:
- /* The temperature scaling is (x+23000)/280 Celsius */
+ /*
+ * The temperature scaling is (x+23000)/280 Celsius
+ * for the "best fit straight line" temperature range
+ * of -30C..85C. The 23000 includes room temperature
+ * offset of +35C, 280 is the precision scale and x is
+ * the 16-bit signed integer reported by hardware.
+ *
+ * Temperature value itself represents temperature of
+ * the sensor die.
+ */
*val = 23000;
return IIO_VAL_INT;
default:
@@ -328,7 +337,7 @@ static int mpu3050_read_raw(struct iio_dev *indio_dev,
goto out_read_raw_unlock;
}
- *val = be16_to_cpu(raw_val);
+ *val = (s16)be16_to_cpu(raw_val);
ret = IIO_VAL_INT;
goto out_read_raw_unlock;
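The mpu3050 hunks above document the temperature formula and cast the raw big-endian sample to s16 before reporting it, since the register value is a signed 16-bit quantity. Using the offset and scale from the new comment, temperature in Celsius is (raw + 23000) / 280; for example, a raw value of -13200 gives (-13200 + 23000) / 280 = 35 C, matching the room-temperature offset the comment mentions. A small sketch of that conversion, expressed here in milli-degrees to avoid fractions:

#include <linux/types.h>
#include <asm/byteorder.h>

/*
 * Convert a raw MPU-3050 temperature sample (big-endian register value)
 * to milli-degrees Celsius: temp_mC = (raw + 23000) * 1000 / 280.
 */
static int foo_temp_mcelsius(__be16 raw_be)
{
	int raw = (s16)be16_to_cpu(raw_be);	/* sign-extend the 16-bit value */

	return (raw + 23000) * 1000 / 280;
}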
diff --git a/drivers/iio/humidity/Kconfig b/drivers/iio/humidity/Kconfig
index 6549fcf..2de5494 100644
--- a/drivers/iio/humidity/Kconfig
+++ b/drivers/iio/humidity/Kconfig
@@ -52,7 +52,6 @@
tristate "HID Environmental humidity sensor"
depends on HID_SENSOR_HUB
select IIO_BUFFER
- select IIO_TRIGGERED_BUFFER
select HID_SENSOR_IIO_COMMON
select HID_SENSOR_IIO_TRIGGER
help
diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c b/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
index 18a1898..ae391ec 100644
--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
+++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
@@ -723,12 +723,16 @@ inv_mpu6050_read_raw(struct iio_dev *indio_dev,
}
}
-static int inv_mpu6050_write_gyro_scale(struct inv_mpu6050_state *st, int val)
+static int inv_mpu6050_write_gyro_scale(struct inv_mpu6050_state *st, int val,
+ int val2)
{
int result, i;
+ if (val != 0)
+ return -EINVAL;
+
for (i = 0; i < ARRAY_SIZE(gyro_scale_6050); ++i) {
- if (gyro_scale_6050[i] == val) {
+ if (gyro_scale_6050[i] == val2) {
result = inv_mpu6050_set_gyro_fsr(st, i);
if (result)
return result;
@@ -759,13 +763,17 @@ static int inv_write_raw_get_fmt(struct iio_dev *indio_dev,
return -EINVAL;
}
-static int inv_mpu6050_write_accel_scale(struct inv_mpu6050_state *st, int val)
+static int inv_mpu6050_write_accel_scale(struct inv_mpu6050_state *st, int val,
+ int val2)
{
int result, i;
u8 d;
+ if (val != 0)
+ return -EINVAL;
+
for (i = 0; i < ARRAY_SIZE(accel_scale); ++i) {
- if (accel_scale[i] == val) {
+ if (accel_scale[i] == val2) {
d = (i << INV_MPU6050_ACCL_CONFIG_FSR_SHIFT);
result = regmap_write(st->map, st->reg->accl_config, d);
if (result)
@@ -806,10 +814,10 @@ static int inv_mpu6050_write_raw(struct iio_dev *indio_dev,
case IIO_CHAN_INFO_SCALE:
switch (chan->type) {
case IIO_ANGL_VEL:
- result = inv_mpu6050_write_gyro_scale(st, val2);
+ result = inv_mpu6050_write_gyro_scale(st, val, val2);
break;
case IIO_ACCEL:
- result = inv_mpu6050_write_accel_scale(st, val2);
+ result = inv_mpu6050_write_accel_scale(st, val, val2);
break;
default:
result = -EINVAL;
diff --git a/drivers/iio/light/Kconfig b/drivers/iio/light/Kconfig
index 33ad4dd..917f9be 100644
--- a/drivers/iio/light/Kconfig
+++ b/drivers/iio/light/Kconfig
@@ -256,7 +256,6 @@
config HID_SENSOR_ALS
depends on HID_SENSOR_HUB
select IIO_BUFFER
- select IIO_TRIGGERED_BUFFER
select HID_SENSOR_IIO_COMMON
select HID_SENSOR_IIO_TRIGGER
tristate "HID ALS"
@@ -270,7 +269,6 @@
config HID_SENSOR_PROX
depends on HID_SENSOR_HUB
select IIO_BUFFER
- select IIO_TRIGGERED_BUFFER
select HID_SENSOR_IIO_COMMON
select HID_SENSOR_IIO_TRIGGER
tristate "HID PROX"
diff --git a/drivers/iio/light/gp2ap002.c b/drivers/iio/light/gp2ap002.c
index 7ba7aa5..040d842 100644
--- a/drivers/iio/light/gp2ap002.c
+++ b/drivers/iio/light/gp2ap002.c
@@ -583,7 +583,7 @@ static int gp2ap002_probe(struct i2c_client *client,
"gp2ap002", indio_dev);
if (ret) {
dev_err(dev, "unable to request IRQ\n");
- goto out_disable_vio;
+ goto out_put_pm;
}
gp2ap002->irq = client->irq;
@@ -613,8 +613,9 @@ static int gp2ap002_probe(struct i2c_client *client,
return 0;
-out_disable_pm:
+out_put_pm:
pm_runtime_put_noidle(dev);
+out_disable_pm:
pm_runtime_disable(dev);
out_disable_vio:
regulator_disable(gp2ap002->vio);
diff --git a/drivers/iio/light/tsl2583.c b/drivers/iio/light/tsl2583.c
index 9e5490b..40b7dd2 100644
--- a/drivers/iio/light/tsl2583.c
+++ b/drivers/iio/light/tsl2583.c
@@ -341,6 +341,14 @@ static int tsl2583_als_calibrate(struct iio_dev *indio_dev)
return lux_val;
}
+ /* Avoid division by zero of lux_value later on */
+ if (lux_val == 0) {
+ dev_err(&chip->client->dev,
+ "%s: lux_val of 0 will produce out of range trim_value\n",
+ __func__);
+ return -ENODATA;
+ }
+
gain_trim_val = (unsigned int)(((chip->als_settings.als_cal_target)
* chip->als_settings.als_gain_trim) / lux_val);
if ((gain_trim_val < 250) || (gain_trim_val > 4000)) {
diff --git a/drivers/iio/magnetometer/Kconfig b/drivers/iio/magnetometer/Kconfig
index 1697a8c..7e9489a 100644
--- a/drivers/iio/magnetometer/Kconfig
+++ b/drivers/iio/magnetometer/Kconfig
@@ -95,7 +95,6 @@
config HID_SENSOR_MAGNETOMETER_3D
depends on HID_SENSOR_HUB
select IIO_BUFFER
- select IIO_TRIGGERED_BUFFER
select HID_SENSOR_IIO_COMMON
select HID_SENSOR_IIO_TRIGGER
tristate "HID Magenetometer 3D"
diff --git a/drivers/iio/orientation/Kconfig b/drivers/iio/orientation/Kconfig
index a505583..396cbbb 100644
--- a/drivers/iio/orientation/Kconfig
+++ b/drivers/iio/orientation/Kconfig
@@ -9,7 +9,6 @@
config HID_SENSOR_INCLINOMETER_3D
depends on HID_SENSOR_HUB
select IIO_BUFFER
- select IIO_TRIGGERED_BUFFER
select HID_SENSOR_IIO_COMMON
select HID_SENSOR_IIO_TRIGGER
tristate "HID Inclinometer 3D"
@@ -20,7 +19,6 @@
config HID_SENSOR_DEVICE_ROTATION
depends on HID_SENSOR_HUB
select IIO_BUFFER
- select IIO_TRIGGERED_BUFFER
select HID_SENSOR_IIO_COMMON
select HID_SENSOR_IIO_TRIGGER
tristate "HID Device Rotation"
diff --git a/drivers/iio/pressure/Kconfig b/drivers/iio/pressure/Kconfig
index 689b978..fc0d3cf 100644
--- a/drivers/iio/pressure/Kconfig
+++ b/drivers/iio/pressure/Kconfig
@@ -79,7 +79,6 @@
config HID_SENSOR_PRESS
depends on HID_SENSOR_HUB
select IIO_BUFFER
- select IIO_TRIGGERED_BUFFER
select HID_SENSOR_IIO_COMMON
select HID_SENSOR_IIO_TRIGGER
tristate "HID PRESS"
diff --git a/drivers/iio/proximity/pulsedlight-lidar-lite-v2.c b/drivers/iio/proximity/pulsedlight-lidar-lite-v2.c
index c685f10..cc206bf 100644
--- a/drivers/iio/proximity/pulsedlight-lidar-lite-v2.c
+++ b/drivers/iio/proximity/pulsedlight-lidar-lite-v2.c
@@ -160,6 +160,7 @@ static int lidar_get_measurement(struct lidar_data *data, u16 *reg)
ret = lidar_write_control(data, LIDAR_REG_CONTROL_ACQUIRE);
if (ret < 0) {
dev_err(&client->dev, "cannot send start measurement command");
+ pm_runtime_put_noidle(&client->dev);
return ret;
}
diff --git a/drivers/iio/temperature/Kconfig b/drivers/iio/temperature/Kconfig
index f1f2a14..4df6008 100644
--- a/drivers/iio/temperature/Kconfig
+++ b/drivers/iio/temperature/Kconfig
@@ -45,7 +45,6 @@
tristate "HID Environmental temperature sensor"
depends on HID_SENSOR_HUB
select IIO_BUFFER
- select IIO_TRIGGERED_BUFFER
select HID_SENSOR_IIO_COMMON
select HID_SENSOR_IIO_TRIGGER
help
diff --git a/drivers/infiniband/core/addr.c b/drivers/infiniband/core/addr.c
index 0abce00..65e3e7d 100644
--- a/drivers/infiniband/core/addr.c
+++ b/drivers/infiniband/core/addr.c
@@ -76,7 +76,9 @@ static struct workqueue_struct *addr_wq;
static const struct nla_policy ib_nl_addr_policy[LS_NLA_TYPE_MAX] = {
[LS_NLA_TYPE_DGID] = {.type = NLA_BINARY,
- .len = sizeof(struct rdma_nla_ls_gid)},
+ .len = sizeof(struct rdma_nla_ls_gid),
+ .validation_type = NLA_VALIDATE_MIN,
+ .min = sizeof(struct rdma_nla_ls_gid)},
};
static inline bool ib_nl_is_good_ip_resp(const struct nlmsghdr *nlh)
diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index bbba0cd..ee568bd 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -2137,7 +2137,8 @@ static int cm_req_handler(struct cm_work *work)
goto destroy;
}
- cm_process_routed_req(req_msg, work->mad_recv_wc->wc);
+ if (cm_id_priv->av.ah_attr.type != RDMA_AH_ATTR_TYPE_ROCE)
+ cm_process_routed_req(req_msg, work->mad_recv_wc->wc);
memset(&work->path[0], 0, sizeof(work->path[0]));
if (cm_req_has_alt_path(req_msg))
diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
index e3638f8..d1e9414 100644
--- a/drivers/infiniband/core/cma.c
+++ b/drivers/infiniband/core/cma.c
@@ -463,7 +463,6 @@ static void _cma_attach_to_dev(struct rdma_id_private *id_priv,
id_priv->id.route.addr.dev_addr.transport =
rdma_node_get_transport(cma_dev->device->node_type);
list_add_tail(&id_priv->list, &cma_dev->id_list);
- rdma_restrack_add(&id_priv->res);
trace_cm_id_attach(id_priv, cma_dev->device);
}
@@ -483,6 +482,7 @@ static void cma_release_dev(struct rdma_id_private *id_priv)
list_del(&id_priv->list);
cma_dev_put(id_priv->cma_dev);
id_priv->cma_dev = NULL;
+ id_priv->id.device = NULL;
if (id_priv->id.route.addr.dev_addr.sgid_attr) {
rdma_put_gid_attr(id_priv->id.route.addr.dev_addr.sgid_attr);
id_priv->id.route.addr.dev_addr.sgid_attr = NULL;
@@ -700,6 +700,7 @@ static int cma_ib_acquire_dev(struct rdma_id_private *id_priv,
mutex_lock(&lock);
cma_attach_to_dev(id_priv, listen_id_priv->cma_dev);
mutex_unlock(&lock);
+ rdma_restrack_add(&id_priv->res);
return 0;
}
@@ -754,8 +755,10 @@ static int cma_iw_acquire_dev(struct rdma_id_private *id_priv,
}
out:
- if (!ret)
+ if (!ret) {
cma_attach_to_dev(id_priv, cma_dev);
+ rdma_restrack_add(&id_priv->res);
+ }
mutex_unlock(&lock);
return ret;
@@ -816,6 +819,7 @@ static int cma_resolve_ib_dev(struct rdma_id_private *id_priv)
found:
cma_attach_to_dev(id_priv, cma_dev);
+ rdma_restrack_add(&id_priv->res);
mutex_unlock(&lock);
addr = (struct sockaddr_ib *)cma_src_addr(id_priv);
memcpy(&addr->sib_addr, &sgid, sizeof(sgid));
@@ -1861,6 +1865,7 @@ static void _destroy_id(struct rdma_id_private *id_priv,
iw_destroy_cm_id(id_priv->cm_id.iw);
}
cma_leave_mc_groups(id_priv);
+ rdma_restrack_del(&id_priv->res);
cma_release_dev(id_priv);
}
@@ -1874,7 +1879,6 @@ static void _destroy_id(struct rdma_id_private *id_priv,
kfree(id_priv->id.route.path_rec);
put_net(id_priv->id.route.addr.dev_addr.net);
- rdma_restrack_del(&id_priv->res);
kfree(id_priv);
}
@@ -2529,6 +2533,7 @@ static int cma_listen_on_dev(struct rdma_id_private *id_priv,
rdma_addr_size(cma_src_addr(id_priv)));
_cma_attach_to_dev(dev_id_priv, cma_dev);
+ rdma_restrack_add(&dev_id_priv->res);
cma_id_get(id_priv);
dev_id_priv->internal_id = 1;
dev_id_priv->afonly = id_priv->afonly;
@@ -3169,6 +3174,7 @@ static int cma_bind_loopback(struct rdma_id_private *id_priv)
ib_addr_set_pkey(&id_priv->id.route.addr.dev_addr, pkey);
id_priv->id.port_num = p;
cma_attach_to_dev(id_priv, cma_dev);
+ rdma_restrack_add(&id_priv->res);
cma_set_loopback(cma_src_addr(id_priv));
out:
mutex_unlock(&lock);
@@ -3201,6 +3207,7 @@ static void addr_handler(int status, struct sockaddr *src_addr,
if (status)
pr_debug_ratelimited("RDMA CM: ADDR_ERROR: failed to acquire device. status %d\n",
status);
+ rdma_restrack_add(&id_priv->res);
} else if (status) {
pr_debug_ratelimited("RDMA CM: ADDR_ERROR: failed to resolve IP. status %d\n", status);
}
@@ -3734,7 +3741,7 @@ int rdma_listen(struct rdma_cm_id *id, int backlog)
}
id_priv->backlog = backlog;
- if (id->device) {
+ if (id_priv->cma_dev) {
if (rdma_cap_ib_cm(id->device, 1)) {
ret = cma_ib_listen(id_priv);
if (ret)
@@ -3812,6 +3819,8 @@ int rdma_bind_addr(struct rdma_cm_id *id, struct sockaddr *addr)
if (ret)
goto err2;
+ if (!cma_any_addr(addr))
+ rdma_restrack_add(&id_priv->res);
return 0;
err2:
if (id_priv->cma_dev)
diff --git a/drivers/infiniband/core/uverbs_std_types_device.c b/drivers/infiniband/core/uverbs_std_types_device.c
index 9ec6971..0496848 100644
--- a/drivers/infiniband/core/uverbs_std_types_device.c
+++ b/drivers/infiniband/core/uverbs_std_types_device.c
@@ -117,8 +117,8 @@ static int UVERBS_HANDLER(UVERBS_METHOD_INFO_HANDLES)(
return ret;
uapi_object = uapi_get_object(attrs->ufile->device->uapi, object_id);
- if (!uapi_object)
- return -EINVAL;
+ if (IS_ERR(uapi_object))
+ return PTR_ERR(uapi_object);
handles = gather_objects_handle(attrs->ufile, uapi_object, attrs,
out_len, &total);
@@ -331,6 +331,9 @@ static int UVERBS_HANDLER(UVERBS_METHOD_QUERY_GID_TABLE)(
if (ret)
return ret;
+ if (!user_entry_size)
+ return -EINVAL;
+
max_entries = uverbs_attr_ptr_get_array_size(
attrs, UVERBS_ATTR_QUERY_GID_TABLE_RESP_ENTRIES,
user_entry_size);
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
index 995d463..d4d4959 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
@@ -2784,6 +2784,7 @@ static int bnxt_qplib_cq_process_terminal(struct bnxt_qplib_cq *cq,
dev_err(&cq->hwq.pdev->dev,
"FP: CQ Processed terminal reported rq_cons_idx 0x%x exceeds max 0x%x\n",
cqe_cons, rq->max_wqe);
+ rc = -EINVAL;
goto done;
}
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_res.c b/drivers/infiniband/hw/bnxt_re/qplib_res.c
index fa78783..3ca4700 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_res.c
+++ b/drivers/infiniband/hw/bnxt_re/qplib_res.c
@@ -854,6 +854,7 @@ static int bnxt_qplib_alloc_dpi_tbl(struct bnxt_qplib_res *res,
unmap_io:
pci_iounmap(res->pdev, dpit->dbr_bar_reg_iomem);
+ dpit->dbr_bar_reg_iomem = NULL;
return -ENOMEM;
}
diff --git a/drivers/infiniband/hw/cxgb4/cm.c b/drivers/infiniband/hw/cxgb4/cm.c
index 8190374..e42c812 100644
--- a/drivers/infiniband/hw/cxgb4/cm.c
+++ b/drivers/infiniband/hw/cxgb4/cm.c
@@ -3616,7 +3616,8 @@ int c4iw_destroy_listen(struct iw_cm_id *cm_id)
c4iw_init_wr_wait(ep->com.wr_waitp);
err = cxgb4_remove_server(
ep->com.dev->rdev.lldi.ports[0], ep->stid,
- ep->com.dev->rdev.lldi.rxq_ids[0], true);
+ ep->com.dev->rdev.lldi.rxq_ids[0],
+ ep->com.local_addr.ss_family == AF_INET6);
if (err)
goto done;
err = c4iw_wait_for_reply(&ep->com.dev->rdev, ep->com.wr_waitp,
diff --git a/drivers/infiniband/hw/cxgb4/resource.c b/drivers/infiniband/hw/cxgb4/resource.c
index 5c95c78..e800e8e 100644
--- a/drivers/infiniband/hw/cxgb4/resource.c
+++ b/drivers/infiniband/hw/cxgb4/resource.c
@@ -216,7 +216,7 @@ u32 c4iw_get_qpid(struct c4iw_rdev *rdev, struct c4iw_dev_ucontext *uctx)
goto out;
entry->qid = qid;
list_add_tail(&entry->entry, &uctx->cqids);
- for (i = qid; i & rdev->qpmask; i++) {
+ for (i = qid + 1; i & rdev->qpmask; i++) {
entry = kmalloc(sizeof(*entry), GFP_KERNEL);
if (!entry)
goto out;
diff --git a/drivers/infiniband/hw/hfi1/affinity.c b/drivers/infiniband/hw/hfi1/affinity.c
index 2a91b8d..04b1e8f 100644
--- a/drivers/infiniband/hw/hfi1/affinity.c
+++ b/drivers/infiniband/hw/hfi1/affinity.c
@@ -632,22 +632,11 @@ static void _dev_comp_vect_cpu_mask_clean_up(struct hfi1_devdata *dd,
*/
int hfi1_dev_affinity_init(struct hfi1_devdata *dd)
{
- int node = pcibus_to_node(dd->pcidev->bus);
struct hfi1_affinity_node *entry;
const struct cpumask *local_mask;
int curr_cpu, possible, i, ret;
bool new_entry = false;
- /*
- * If the BIOS does not have the NUMA node information set, select
- * NUMA 0 so we get consistent performance.
- */
- if (node < 0) {
- dd_dev_err(dd, "Invalid PCI NUMA node. Performance may be affected\n");
- node = 0;
- }
- dd->node = node;
-
local_mask = cpumask_of_node(dd->node);
if (cpumask_first(local_mask) >= nr_cpu_ids)
local_mask = topology_core_cpumask(0);
@@ -660,7 +649,7 @@ int hfi1_dev_affinity_init(struct hfi1_devdata *dd)
* create an entry in the global affinity structure and initialize it.
*/
if (!entry) {
- entry = node_affinity_allocate(node);
+ entry = node_affinity_allocate(dd->node);
if (!entry) {
dd_dev_err(dd,
"Unable to allocate global affinity node\n");
@@ -751,6 +740,7 @@ int hfi1_dev_affinity_init(struct hfi1_devdata *dd)
if (new_entry)
node_affinity_add_tail(entry);
+ dd->affinity_entry = entry;
mutex_unlock(&node_affinity.lock);
return 0;
@@ -766,10 +756,9 @@ void hfi1_dev_affinity_clean_up(struct hfi1_devdata *dd)
{
struct hfi1_affinity_node *entry;
- if (dd->node < 0)
- return;
-
mutex_lock(&node_affinity.lock);
+ if (!dd->affinity_entry)
+ goto unlock;
entry = node_affinity_lookup(dd->node);
if (!entry)
goto unlock;
@@ -780,8 +769,8 @@ void hfi1_dev_affinity_clean_up(struct hfi1_devdata *dd)
*/
_dev_comp_vect_cpu_mask_clean_up(dd, entry);
unlock:
+ dd->affinity_entry = NULL;
mutex_unlock(&node_affinity.lock);
- dd->node = NUMA_NO_NODE;
}
/*
diff --git a/drivers/infiniband/hw/hfi1/firmware.c b/drivers/infiniband/hw/hfi1/firmware.c
index 0e83d4b..2cf102b 100644
--- a/drivers/infiniband/hw/hfi1/firmware.c
+++ b/drivers/infiniband/hw/hfi1/firmware.c
@@ -1916,6 +1916,7 @@ int parse_platform_config(struct hfi1_devdata *dd)
dd_dev_err(dd, "%s: Failed CRC check at offset %ld\n",
__func__, (ptr -
(u32 *)dd->platform_config.data));
+ ret = -EINVAL;
goto bail;
}
/* Jump the CRC DWORD */
diff --git a/drivers/infiniband/hw/hfi1/hfi.h b/drivers/infiniband/hw/hfi1/hfi.h
index e09e824..2a9a040 100644
--- a/drivers/infiniband/hw/hfi1/hfi.h
+++ b/drivers/infiniband/hw/hfi1/hfi.h
@@ -1409,6 +1409,7 @@ struct hfi1_devdata {
spinlock_t irq_src_lock;
int vnic_num_vports;
struct net_device *dummy_netdev;
+ struct hfi1_affinity_node *affinity_entry;
/* Keeps track of IPoIB RSM rule users */
atomic_t ipoib_rsm_usr_num;
diff --git a/drivers/infiniband/hw/hfi1/init.c b/drivers/infiniband/hw/hfi1/init.c
index cb7ad12..786c631 100644
--- a/drivers/infiniband/hw/hfi1/init.c
+++ b/drivers/infiniband/hw/hfi1/init.c
@@ -1277,7 +1277,6 @@ static struct hfi1_devdata *hfi1_alloc_devdata(struct pci_dev *pdev,
dd->pport = (struct hfi1_pportdata *)(dd + 1);
dd->pcidev = pdev;
pci_set_drvdata(pdev, dd);
- dd->node = NUMA_NO_NODE;
ret = xa_alloc_irq(&hfi1_dev_table, &dd->unit, dd, xa_limit_32b,
GFP_KERNEL);
@@ -1287,6 +1286,15 @@ static struct hfi1_devdata *hfi1_alloc_devdata(struct pci_dev *pdev,
goto bail;
}
rvt_set_ibdev_name(&dd->verbs_dev.rdi, "%s_%d", class_name(), dd->unit);
+ /*
+ * If the BIOS does not have the NUMA node information set, select
+ * NUMA 0 so we get consistent performance.
+ */
+ dd->node = pcibus_to_node(pdev->bus);
+ if (dd->node == NUMA_NO_NODE) {
+ dd_dev_err(dd, "Invalid PCI NUMA node. Performance may be affected\n");
+ dd->node = 0;
+ }
/*
* Initialize all locks for the device. This needs to be as early as
diff --git a/drivers/infiniband/hw/hfi1/ipoib.h b/drivers/infiniband/hw/hfi1/ipoib.h
index b8c9d0a..1ee361c 100644
--- a/drivers/infiniband/hw/hfi1/ipoib.h
+++ b/drivers/infiniband/hw/hfi1/ipoib.h
@@ -52,8 +52,9 @@ union hfi1_ipoib_flow {
* @producer_lock: producer sync lock
* @consumer_lock: consumer sync lock
*/
+struct ipoib_txreq;
struct hfi1_ipoib_circ_buf {
- void **items;
+ struct ipoib_txreq **items;
unsigned long head;
unsigned long tail;
unsigned long max_items;
diff --git a/drivers/infiniband/hw/hfi1/ipoib_tx.c b/drivers/infiniband/hw/hfi1/ipoib_tx.c
index 9df292b5..ab1eeff 100644
--- a/drivers/infiniband/hw/hfi1/ipoib_tx.c
+++ b/drivers/infiniband/hw/hfi1/ipoib_tx.c
@@ -702,14 +702,14 @@ int hfi1_ipoib_txreq_init(struct hfi1_ipoib_dev_priv *priv)
priv->tx_napis = kcalloc_node(dev->num_tx_queues,
sizeof(struct napi_struct),
- GFP_ATOMIC,
+ GFP_KERNEL,
priv->dd->node);
if (!priv->tx_napis)
goto free_txreq_cache;
priv->txqs = kcalloc_node(dev->num_tx_queues,
sizeof(struct hfi1_ipoib_txq),
- GFP_ATOMIC,
+ GFP_KERNEL,
priv->dd->node);
if (!priv->txqs)
goto free_tx_napis;
@@ -741,9 +741,9 @@ int hfi1_ipoib_txreq_init(struct hfi1_ipoib_dev_priv *priv)
priv->dd->node);
txq->tx_ring.items =
- vzalloc_node(array_size(tx_ring_size,
- sizeof(struct ipoib_txreq)),
- priv->dd->node);
+ kcalloc_node(tx_ring_size,
+ sizeof(struct ipoib_txreq *),
+ GFP_KERNEL, priv->dd->node);
if (!txq->tx_ring.items)
goto free_txqs;
@@ -764,7 +764,7 @@ int hfi1_ipoib_txreq_init(struct hfi1_ipoib_dev_priv *priv)
struct hfi1_ipoib_txq *txq = &priv->txqs[i];
netif_napi_del(txq->napi);
- vfree(txq->tx_ring.items);
+ kfree(txq->tx_ring.items);
}
kfree(priv->txqs);
@@ -817,7 +817,7 @@ void hfi1_ipoib_txreq_deinit(struct hfi1_ipoib_dev_priv *priv)
hfi1_ipoib_drain_tx_list(txq);
netif_napi_del(txq->napi);
(void)hfi1_ipoib_drain_tx_ring(txq, txq->tx_ring.max_items);
- vfree(txq->tx_ring.items);
+ kfree(txq->tx_ring.items);
}
kfree(priv->txqs);
diff --git a/drivers/infiniband/hw/hfi1/mmu_rb.c b/drivers/infiniband/hw/hfi1/mmu_rb.c
index f3fb28e..d213f65 100644
--- a/drivers/infiniband/hw/hfi1/mmu_rb.c
+++ b/drivers/infiniband/hw/hfi1/mmu_rb.c
@@ -89,7 +89,7 @@ int hfi1_mmu_rb_register(void *ops_arg,
struct mmu_rb_handler *h;
int ret;
- h = kmalloc(sizeof(*h), GFP_KERNEL);
+ h = kzalloc(sizeof(*h), GFP_KERNEL);
if (!h)
return -ENOMEM;
diff --git a/drivers/infiniband/hw/hfi1/netdev_rx.c b/drivers/infiniband/hw/hfi1/netdev_rx.c
index 6d263c9..ea95baa 100644
--- a/drivers/infiniband/hw/hfi1/netdev_rx.c
+++ b/drivers/infiniband/hw/hfi1/netdev_rx.c
@@ -173,8 +173,7 @@ u32 hfi1_num_netdev_contexts(struct hfi1_devdata *dd, u32 available_contexts,
return 0;
}
- cpumask_and(node_cpu_mask, cpu_mask,
- cpumask_of_node(pcibus_to_node(dd->pcidev->bus)));
+ cpumask_and(node_cpu_mask, cpu_mask, cpumask_of_node(dd->node));
available_cpus = cpumask_weight(node_cpu_mask);
diff --git a/drivers/infiniband/hw/i40iw/i40iw_pble.c b/drivers/infiniband/hw/i40iw/i40iw_pble.c
index 5f97643..ae7d227 100644
--- a/drivers/infiniband/hw/i40iw/i40iw_pble.c
+++ b/drivers/infiniband/hw/i40iw/i40iw_pble.c
@@ -392,12 +392,9 @@ static enum i40iw_status_code add_pble_pool(struct i40iw_sc_dev *dev,
i40iw_debug(dev, I40IW_DEBUG_PBLE, "next_fpm_addr = %llx chunk_size[%u] = 0x%x\n",
pble_rsrc->next_fpm_addr, chunk->size, chunk->size);
pble_rsrc->unallocated_pble -= (chunk->size >> 3);
- list_add(&chunk->list, &pble_rsrc->pinfo.clist);
sd_reg_val = (sd_entry_type == I40IW_SD_TYPE_PAGED) ?
sd_entry->u.pd_table.pd_page_addr.pa : sd_entry->u.bp.addr.pa;
- if (sd_entry->valid)
- return 0;
- if (dev->is_pf) {
+ if (dev->is_pf && !sd_entry->valid) {
ret_code = i40iw_hmc_sd_one(dev, hmc_info->hmc_fn_id,
sd_reg_val, idx->sd_idx,
sd_entry->entry_type, true);
@@ -408,6 +405,7 @@ static enum i40iw_status_code add_pble_pool(struct i40iw_sc_dev *dev,
}
sd_entry->valid = true;
+ list_add(&chunk->list, &pble_rsrc->pinfo.clist);
return 0;
error:
kfree(chunk);
diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c
index efb9ec9..06a8732 100644
--- a/drivers/infiniband/hw/mlx5/devx.c
+++ b/drivers/infiniband/hw/mlx5/devx.c
@@ -559,9 +559,8 @@ static bool devx_is_valid_obj_id(struct uverbs_attr_bundle *attrs,
case UVERBS_OBJECT_QP:
{
struct mlx5_ib_qp *qp = to_mqp(uobj->object);
- enum ib_qp_type qp_type = qp->ibqp.qp_type;
- if (qp_type == IB_QPT_RAW_PACKET ||
+ if (qp->type == IB_QPT_RAW_PACKET ||
(qp->flags & IB_QP_CREATE_SOURCE_QPN)) {
struct mlx5_ib_raw_packet_qp *raw_packet_qp =
&qp->raw_packet_qp;
@@ -578,10 +577,9 @@ static bool devx_is_valid_obj_id(struct uverbs_attr_bundle *attrs,
sq->tisn) == obj_id);
}
- if (qp_type == MLX5_IB_QPT_DCT)
+ if (qp->type == MLX5_IB_QPT_DCT)
return get_enc_obj_id(MLX5_CMD_OP_CREATE_DCT,
qp->dct.mdct.mqp.qpn) == obj_id;
-
return get_enc_obj_id(MLX5_CMD_OP_CREATE_QP,
qp->ibqp.qp_num) == obj_id;
}
diff --git a/drivers/infiniband/hw/mlx5/fs.c b/drivers/infiniband/hw/mlx5/fs.c
index 492cfe0..13d50b1 100644
--- a/drivers/infiniband/hw/mlx5/fs.c
+++ b/drivers/infiniband/hw/mlx5/fs.c
@@ -1528,8 +1528,8 @@ static struct mlx5_ib_flow_handler *raw_fs_rule_add(
dst_num++;
}
- handler = _create_raw_flow_rule(dev, ft_prio, dst, fs_matcher,
- flow_context, flow_act,
+ handler = _create_raw_flow_rule(dev, ft_prio, dst_num ? dst : NULL,
+ fs_matcher, flow_context, flow_act,
cmd_in, inlen, dst_num);
if (IS_ERR(handler)) {
@@ -1885,8 +1885,9 @@ static int get_dests(struct uverbs_attr_bundle *attrs,
else
*dest_id = mqp->raw_packet_qp.rq.tirn;
*dest_type = MLX5_FLOW_DESTINATION_TYPE_TIR;
- } else if (fs_matcher->ns_type == MLX5_FLOW_NAMESPACE_EGRESS ||
- fs_matcher->ns_type == MLX5_FLOW_NAMESPACE_RDMA_TX) {
+ } else if ((fs_matcher->ns_type == MLX5_FLOW_NAMESPACE_EGRESS ||
+ fs_matcher->ns_type == MLX5_FLOW_NAMESPACE_RDMA_TX) &&
+ !(*flags & MLX5_IB_ATTR_CREATE_FLOW_FLAGS_DROP)) {
*dest_type = MLX5_FLOW_DESTINATION_TYPE_PORT;
}
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index beec0d7..b195067 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -4762,6 +4762,7 @@ static void *mlx5_ib_add_slave_port(struct mlx5_core_dev *mdev)
if (bound) {
rdma_roce_rescan_device(&dev->ib_dev);
+ mpi->ibdev->ib_active = true;
break;
}
}
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 75caeec3..6d2715f 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -3079,6 +3079,19 @@ enum {
MLX5_PATH_FLAG_COUNTER = 1 << 2,
};
+static int mlx5_to_ib_rate_map(u8 rate)
+{
+ static const int rates[] = { IB_RATE_PORT_CURRENT, IB_RATE_56_GBPS,
+ IB_RATE_25_GBPS, IB_RATE_100_GBPS,
+ IB_RATE_200_GBPS, IB_RATE_50_GBPS,
+ IB_RATE_400_GBPS };
+
+ if (rate < ARRAY_SIZE(rates))
+ return rates[rate];
+
+ return rate - MLX5_STAT_RATE_OFFSET;
+}
+
static int ib_to_mlx5_rate_map(u8 rate)
{
switch (rate) {
@@ -4420,7 +4433,7 @@ static void to_rdma_ah_attr(struct mlx5_ib_dev *ibdev,
rdma_ah_set_path_bits(ah_attr, MLX5_GET(ads, path, mlid));
static_rate = MLX5_GET(ads, path, stat_rate);
- rdma_ah_set_static_rate(ah_attr, static_rate ? static_rate - 5 : 0);
+ rdma_ah_set_static_rate(ah_attr, mlx5_to_ib_rate_map(static_rate));
if (MLX5_GET(ads, path, grh) ||
ah_attr->type == RDMA_AH_ATTR_TYPE_ROCE) {
rdma_ah_set_grh(ah_attr, NULL, MLX5_GET(ads, path, flow_label),
diff --git a/drivers/infiniband/hw/qedr/qedr_iw_cm.c b/drivers/infiniband/hw/qedr/qedr_iw_cm.c
index c4bc587..1715fbe 100644
--- a/drivers/infiniband/hw/qedr/qedr_iw_cm.c
+++ b/drivers/infiniband/hw/qedr/qedr_iw_cm.c
@@ -636,8 +636,10 @@ int qedr_iw_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
memcpy(in_params.local_mac_addr, dev->ndev->dev_addr, ETH_ALEN);
if (test_and_set_bit(QEDR_IWARP_CM_WAIT_FOR_CONNECT,
- &qp->iwarp_cm_flags))
+ &qp->iwarp_cm_flags)) {
+ rc = -ENODEV;
goto err; /* QP already being destroyed */
+ }
rc = dev->ops->iwarp_connect(dev->rdma_ctx, &in_params, &out_params);
if (rc) {
diff --git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c
index 511c95b..cdfb773 100644
--- a/drivers/infiniband/hw/qedr/verbs.c
+++ b/drivers/infiniband/hw/qedr/verbs.c
@@ -1241,7 +1241,8 @@ static int qedr_check_qp_attrs(struct ib_pd *ibpd, struct qedr_dev *dev,
* TGT QP isn't associated with RQ/SQ
*/
if ((attrs->qp_type != IB_QPT_GSI) && (dev->gsi_qp_created) &&
- (attrs->qp_type != IB_QPT_XRC_TGT)) {
+ (attrs->qp_type != IB_QPT_XRC_TGT) &&
+ (attrs->qp_type != IB_QPT_XRC_INI)) {
struct qedr_cq *send_cq = get_qedr_cq(attrs->send_cq);
struct qedr_cq *recv_cq = get_qedr_cq(attrs->recv_cq);
diff --git a/drivers/infiniband/sw/rxe/rxe_av.c b/drivers/infiniband/sw/rxe/rxe_av.c
index df0d173..da2e867 100644
--- a/drivers/infiniband/sw/rxe/rxe_av.c
+++ b/drivers/infiniband/sw/rxe/rxe_av.c
@@ -88,7 +88,7 @@ void rxe_av_fill_ip_info(struct rxe_av *av, struct rdma_ah_attr *attr)
type = RXE_NETWORK_TYPE_IPV4;
break;
case RDMA_NETWORK_IPV6:
- type = RXE_NETWORK_TYPE_IPV4;
+ type = RXE_NETWORK_TYPE_IPV6;
break;
default:
/* not reached - checked in rxe_av_chk_attr */
diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
index 656a5b4..1e716fe 100644
--- a/drivers/infiniband/sw/rxe/rxe_qp.c
+++ b/drivers/infiniband/sw/rxe/rxe_qp.c
@@ -231,6 +231,7 @@ static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp,
if (err) {
vfree(qp->sq.queue->buf);
kfree(qp->sq.queue);
+ qp->sq.queue = NULL;
return err;
}
@@ -284,6 +285,7 @@ static int rxe_qp_init_resp(struct rxe_dev *rxe, struct rxe_qp *qp,
if (err) {
vfree(qp->rq.queue->buf);
kfree(qp->rq.queue);
+ qp->rq.queue = NULL;
return err;
}
}
@@ -344,6 +346,11 @@ int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd,
err2:
rxe_queue_cleanup(qp->sq.queue);
err1:
+ qp->pd = NULL;
+ qp->rcq = NULL;
+ qp->scq = NULL;
+ qp->srq = NULL;
+
if (srq)
rxe_drop_ref(srq);
rxe_drop_ref(scq);
diff --git a/drivers/infiniband/sw/siw/siw_mem.c b/drivers/infiniband/sw/siw/siw_mem.c
index 34a910c..61c17db 100644
--- a/drivers/infiniband/sw/siw/siw_mem.c
+++ b/drivers/infiniband/sw/siw/siw_mem.c
@@ -106,8 +106,6 @@ int siw_mr_add_mem(struct siw_mr *mr, struct ib_pd *pd, void *mem_obj,
mem->perms = rights & IWARP_ACCESS_MASK;
kref_init(&mem->ref);
- mr->mem = mem;
-
get_random_bytes(&next, 4);
next &= 0x00ffffff;
@@ -116,6 +114,8 @@ int siw_mr_add_mem(struct siw_mr *mr, struct ib_pd *pd, void *mem_obj,
kfree(mem);
return -ENOMEM;
}
+
+ mr->mem = mem;
/* Set the STag index part */
mem->stag = id << 8;
mr->base_mr.lkey = mr->base_mr.rkey = mem->stag;
diff --git a/drivers/infiniband/sw/siw/siw_verbs.c b/drivers/infiniband/sw/siw/siw_verbs.c
index fb25e80..34e847a 100644
--- a/drivers/infiniband/sw/siw/siw_verbs.c
+++ b/drivers/infiniband/sw/siw/siw_verbs.c
@@ -300,7 +300,6 @@ struct ib_qp *siw_create_qp(struct ib_pd *pd,
struct siw_ucontext *uctx =
rdma_udata_to_drv_context(udata, struct siw_ucontext,
base_ucontext);
- struct siw_cq *scq = NULL, *rcq = NULL;
unsigned long flags;
int num_sqe, num_rqe, rv = 0;
size_t length;
@@ -340,10 +339,8 @@ struct ib_qp *siw_create_qp(struct ib_pd *pd,
rv = -EINVAL;
goto err_out;
}
- scq = to_siw_cq(attrs->send_cq);
- rcq = to_siw_cq(attrs->recv_cq);
- if (!scq || (!rcq && !attrs->srq)) {
+ if (!attrs->send_cq || (!attrs->recv_cq && !attrs->srq)) {
siw_dbg(base_dev, "send CQ or receive CQ invalid\n");
rv = -EINVAL;
goto err_out;
@@ -375,7 +372,7 @@ struct ib_qp *siw_create_qp(struct ib_pd *pd,
else {
/* Zero sized SQ is not supported */
rv = -EINVAL;
- goto err_out;
+ goto err_out_xa;
}
if (num_rqe)
num_rqe = roundup_pow_of_two(num_rqe);
@@ -398,8 +395,8 @@ struct ib_qp *siw_create_qp(struct ib_pd *pd,
}
}
qp->pd = pd;
- qp->scq = scq;
- qp->rcq = rcq;
+ qp->scq = to_siw_cq(attrs->send_cq);
+ qp->rcq = to_siw_cq(attrs->recv_cq);
if (attrs->srq) {
/*
diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
index bd47894..e653c83f 100644
--- a/drivers/infiniband/ulp/isert/ib_isert.c
+++ b/drivers/infiniband/ulp/isert/ib_isert.c
@@ -438,23 +438,23 @@ isert_connect_request(struct rdma_cm_id *cma_id, struct rdma_cm_event *event)
isert_init_conn(isert_conn);
isert_conn->cm_id = cma_id;
- ret = isert_alloc_login_buf(isert_conn, cma_id->device);
- if (ret)
- goto out;
-
device = isert_device_get(cma_id);
if (IS_ERR(device)) {
ret = PTR_ERR(device);
- goto out_rsp_dma_map;
+ goto out;
}
isert_conn->device = device;
+ ret = isert_alloc_login_buf(isert_conn, cma_id->device);
+ if (ret)
+ goto out_conn_dev;
+
isert_set_nego_params(isert_conn, &event->param.conn);
isert_conn->qp = isert_create_qp(isert_conn, cma_id);
if (IS_ERR(isert_conn->qp)) {
ret = PTR_ERR(isert_conn->qp);
- goto out_conn_dev;
+ goto out_rsp_dma_map;
}
ret = isert_login_post_recv(isert_conn);
@@ -473,10 +473,10 @@ isert_connect_request(struct rdma_cm_id *cma_id, struct rdma_cm_event *event)
out_destroy_qp:
isert_destroy_qp(isert_conn);
-out_conn_dev:
- isert_device_put(device);
out_rsp_dma_map:
isert_free_login_buf(isert_conn);
+out_conn_dev:
+ isert_device_put(device);
out:
kfree(isert_conn);
rdma_reject(cma_id, NULL, 0, IB_CM_REJ_CONSUMER_DEFINED);
diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
index 6eb95e3..7db550b 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
@@ -2739,8 +2739,8 @@ void rtrs_clt_close(struct rtrs_clt *clt)
/* Now it is safe to iterate over all paths without locks */
list_for_each_entry_safe(sess, tmp, &clt->paths_list, s.entry) {
- rtrs_clt_destroy_sess_files(sess, NULL);
rtrs_clt_close_conns(sess, true);
+ rtrs_clt_destroy_sess_files(sess, NULL);
kobject_put(&sess->kobj);
}
free_clt(clt);
@@ -2803,8 +2803,8 @@ int rtrs_clt_remove_path_from_sysfs(struct rtrs_clt_sess *sess,
} while (!changed && old_state != RTRS_CLT_DEAD);
if (likely(changed)) {
- rtrs_clt_destroy_sess_files(sess, sysfs_self);
rtrs_clt_remove_path_from_arr(sess);
+ rtrs_clt_destroy_sess_files(sess, sysfs_self);
kobject_put(&sess->kobj);
}
diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
index 53a8bec..07ecc7d 100644
--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
+++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
@@ -2378,6 +2378,7 @@ static int srpt_cm_req_recv(struct srpt_device *const sdev,
pr_info("rejected SRP_LOGIN_REQ because target %s_%d is not enabled\n",
dev_name(&sdev->device->dev), port_num);
mutex_unlock(&sport->mutex);
+ ret = -EINVAL;
goto reject;
}
diff --git a/drivers/input/keyboard/nspire-keypad.c b/drivers/input/keyboard/nspire-keypad.c
index 63d5e48..e9fa142 100644
--- a/drivers/input/keyboard/nspire-keypad.c
+++ b/drivers/input/keyboard/nspire-keypad.c
@@ -93,9 +93,15 @@ static irqreturn_t nspire_keypad_irq(int irq, void *dev_id)
return IRQ_HANDLED;
}
-static int nspire_keypad_chip_init(struct nspire_keypad *keypad)
+static int nspire_keypad_open(struct input_dev *input)
{
+ struct nspire_keypad *keypad = input_get_drvdata(input);
unsigned long val = 0, cycles_per_us, delay_cycles, row_delay_cycles;
+ int error;
+
+ error = clk_prepare_enable(keypad->clk);
+ if (error)
+ return error;
cycles_per_us = (clk_get_rate(keypad->clk) / 1000000);
if (cycles_per_us == 0)
@@ -121,30 +127,6 @@ static int nspire_keypad_chip_init(struct nspire_keypad *keypad)
keypad->int_mask = 1 << 1;
writel(keypad->int_mask, keypad->reg_base + KEYPAD_INTMSK);
- /* Disable GPIO interrupts to prevent hanging on touchpad */
- /* Possibly used to detect touchpad events */
- writel(0, keypad->reg_base + KEYPAD_UNKNOWN_INT);
- /* Acknowledge existing interrupts */
- writel(~0, keypad->reg_base + KEYPAD_UNKNOWN_INT_STS);
-
- return 0;
-}
-
-static int nspire_keypad_open(struct input_dev *input)
-{
- struct nspire_keypad *keypad = input_get_drvdata(input);
- int error;
-
- error = clk_prepare_enable(keypad->clk);
- if (error)
- return error;
-
- error = nspire_keypad_chip_init(keypad);
- if (error) {
- clk_disable_unprepare(keypad->clk);
- return error;
- }
-
return 0;
}
@@ -152,6 +134,11 @@ static void nspire_keypad_close(struct input_dev *input)
{
struct nspire_keypad *keypad = input_get_drvdata(input);
+ /* Disable interrupts */
+ writel(0, keypad->reg_base + KEYPAD_INTMSK);
+ /* Acknowledge existing interrupts */
+ writel(~0, keypad->reg_base + KEYPAD_INT);
+
clk_disable_unprepare(keypad->clk);
}
@@ -210,6 +197,25 @@ static int nspire_keypad_probe(struct platform_device *pdev)
return -ENOMEM;
}
+ error = clk_prepare_enable(keypad->clk);
+ if (error) {
+ dev_err(&pdev->dev, "failed to enable clock\n");
+ return error;
+ }
+
+ /* Disable interrupts */
+ writel(0, keypad->reg_base + KEYPAD_INTMSK);
+ /* Acknowledge existing interrupts */
+ writel(~0, keypad->reg_base + KEYPAD_INT);
+
+ /* Disable GPIO interrupts to prevent hanging on touchpad */
+ /* Possibly used to detect touchpad events */
+ writel(0, keypad->reg_base + KEYPAD_UNKNOWN_INT);
+ /* Acknowledge existing GPIO interrupts */
+ writel(~0, keypad->reg_base + KEYPAD_UNKNOWN_INT_STS);
+
+ clk_disable_unprepare(keypad->clk);
+
input_set_drvdata(input, keypad);
input->id.bustype = BUS_HOST;
diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h
index 9119e12..a5a0035 100644
--- a/drivers/input/serio/i8042-x86ia64io.h
+++ b/drivers/input/serio/i8042-x86ia64io.h
@@ -588,6 +588,7 @@ static const struct dmi_system_id i8042_dmi_noselftest_table[] = {
DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
DMI_MATCH(DMI_CHASSIS_TYPE, "10"), /* Notebook */
},
+ }, {
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
DMI_MATCH(DMI_CHASSIS_TYPE, "31"), /* Convertible Notebook */
diff --git a/drivers/input/touchscreen/elants_i2c.c b/drivers/input/touchscreen/elants_i2c.c
index 50c3482..03a4825 100644
--- a/drivers/input/touchscreen/elants_i2c.c
+++ b/drivers/input/touchscreen/elants_i2c.c
@@ -38,6 +38,7 @@
#include <linux/of.h>
#include <linux/gpio/consumer.h>
#include <linux/regulator/consumer.h>
+#include <linux/uuid.h>
#include <asm/unaligned.h>
/* Device, Driver information */
@@ -1224,6 +1225,40 @@ static void elants_i2c_power_off(void *_data)
}
}
+#ifdef CONFIG_ACPI
+static const struct acpi_device_id i2c_hid_ids[] = {
+ {"ACPI0C50", 0 },
+ {"PNP0C50", 0 },
+ { },
+};
+
+static const guid_t i2c_hid_guid =
+ GUID_INIT(0x3CDFF6F7, 0x4267, 0x4555,
+ 0xAD, 0x05, 0xB3, 0x0A, 0x3D, 0x89, 0x38, 0xDE);
+
+static bool elants_acpi_is_hid_device(struct device *dev)
+{
+ acpi_handle handle = ACPI_HANDLE(dev);
+ union acpi_object *obj;
+
+ if (acpi_match_device_ids(ACPI_COMPANION(dev), i2c_hid_ids))
+ return false;
+
+ obj = acpi_evaluate_dsm_typed(handle, &i2c_hid_guid, 1, 1, NULL, ACPI_TYPE_INTEGER);
+ if (obj) {
+ ACPI_FREE(obj);
+ return true;
+ }
+
+ return false;
+}
+#else
+static bool elants_acpi_is_hid_device(struct device *dev)
+{
+ return false;
+}
+#endif
+
static int elants_i2c_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
@@ -1232,9 +1267,14 @@ static int elants_i2c_probe(struct i2c_client *client,
unsigned long irqflags;
int error;
+ /* Don't bind to i2c-hid compatible devices, these are handled by the i2c-hid drv. */
+ if (elants_acpi_is_hid_device(&client->dev)) {
+ dev_warn(&client->dev, "This device appears to be an I2C-HID device, not binding\n");
+ return -ENODEV;
+ }
+
if (!i2c_check_functionality(client->adapter, I2C_FUNC_I2C)) {
- dev_err(&client->dev,
- "%s: i2c check functionality error\n", DEVICE_NAME);
+ dev_err(&client->dev, "I2C check functionality error\n");
return -ENXIO;
}
diff --git a/drivers/input/touchscreen/ili210x.c b/drivers/input/touchscreen/ili210x.c
index d8fccf0..30576a5 100644
--- a/drivers/input/touchscreen/ili210x.c
+++ b/drivers/input/touchscreen/ili210x.c
@@ -87,7 +87,7 @@ static bool ili210x_touchdata_to_coords(const u8 *touchdata,
unsigned int *x, unsigned int *y,
unsigned int *z)
{
- if (touchdata[0] & BIT(finger))
+ if (!(touchdata[0] & BIT(finger)))
return false;
*x = get_unaligned_be16(touchdata + 1 + (finger * 4) + 0);
diff --git a/drivers/input/touchscreen/s6sy761.c b/drivers/input/touchscreen/s6sy761.c
index b63d7fd..85a1f46 100644
--- a/drivers/input/touchscreen/s6sy761.c
+++ b/drivers/input/touchscreen/s6sy761.c
@@ -145,8 +145,8 @@ static void s6sy761_report_coordinates(struct s6sy761_data *sdata,
u8 major = event[4];
u8 minor = event[5];
u8 z = event[6] & S6SY761_MASK_Z;
- u16 x = (event[1] << 3) | ((event[3] & S6SY761_MASK_X) >> 4);
- u16 y = (event[2] << 3) | (event[3] & S6SY761_MASK_Y);
+ u16 x = (event[1] << 4) | ((event[3] & S6SY761_MASK_X) >> 4);
+ u16 y = (event[2] << 4) | (event[3] & S6SY761_MASK_Y);
input_mt_slot(sdata->input, tid);
diff --git a/drivers/input/touchscreen/silead.c b/drivers/input/touchscreen/silead.c
index 8fa2f3b..e8b6c31 100644
--- a/drivers/input/touchscreen/silead.c
+++ b/drivers/input/touchscreen/silead.c
@@ -20,6 +20,7 @@
#include <linux/input/mt.h>
#include <linux/input/touchscreen.h>
#include <linux/pm.h>
+#include <linux/pm_runtime.h>
#include <linux/irq.h>
#include <linux/regulator/consumer.h>
@@ -335,10 +336,8 @@ static int silead_ts_get_id(struct i2c_client *client)
error = i2c_smbus_read_i2c_block_data(client, SILEAD_REG_ID,
sizeof(chip_id), (u8 *)&chip_id);
- if (error < 0) {
- dev_err(&client->dev, "Chip ID read error %d\n", error);
+ if (error < 0)
return error;
- }
data->chip_id = le32_to_cpu(chip_id);
dev_info(&client->dev, "Silead chip ID: 0x%8X", data->chip_id);
@@ -351,12 +350,49 @@ static int silead_ts_setup(struct i2c_client *client)
int error;
u32 status;
+ /*
+ * Some buggy BIOS-es bring up the chip in a stuck state where it
+ * blocks the I2C bus. The following steps are necessary to
+ * unstuck the chip / bus:
+ * 1. Turn off the Silead chip.
+ * 2. Try to do an I2C transfer with the chip, this will fail in
+ * response to which the I2C-bus-driver will call:
+ * i2c_recover_bus() which will unstuck the I2C-bus. Note the
+ * unstuck-ing of the I2C bus only works if we first drop the
+ * chip off the bus by turning it off.
+ * 3. Turn the chip back on.
+ *
+ * On the x86/ACPI systems where this problem is seen, step 1. and
+ * 3. require making ACPI calls and dealing with ACPI Power
+ * Resources. The workaround below runtime-suspends the chip to
+ * turn it off, leaving it up to the ACPI subsystem to deal with
+ * this.
+ */
+
+ if (device_property_read_bool(&client->dev,
+ "silead,stuck-controller-bug")) {
+ pm_runtime_set_active(&client->dev);
+ pm_runtime_enable(&client->dev);
+ pm_runtime_allow(&client->dev);
+
+ pm_runtime_suspend(&client->dev);
+
+ dev_warn(&client->dev, FW_BUG "Stuck I2C bus: please ignore the next 'controller timed out' error\n");
+ silead_ts_get_id(client);
+
+ /* The forbid will also resume the device */
+ pm_runtime_forbid(&client->dev);
+ pm_runtime_disable(&client->dev);
+ }
+
silead_ts_set_power(client, SILEAD_POWER_OFF);
silead_ts_set_power(client, SILEAD_POWER_ON);
error = silead_ts_get_id(client);
- if (error)
+ if (error) {
+ dev_err(&client->dev, "Chip ID read error %d\n", error);
return error;
+ }
error = silead_ts_init(client);
if (error)
diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
index 5ad519c..8a1e70e 100644
--- a/drivers/interconnect/core.c
+++ b/drivers/interconnect/core.c
@@ -942,6 +942,8 @@ int icc_link_destroy(struct icc_node *src, struct icc_node *dst)
GFP_KERNEL);
if (new)
src->links = new;
+ else
+ ret = -ENOMEM;
out:
mutex_unlock(&icc_lock);
diff --git a/drivers/interconnect/qcom/bcm-voter.c b/drivers/interconnect/qcom/bcm-voter.c
index 887d137..dd0e3bd 100644
--- a/drivers/interconnect/qcom/bcm-voter.c
+++ b/drivers/interconnect/qcom/bcm-voter.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
/*
- * Copyright (c) 2020, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved.
*/
#include <asm/div64.h>
@@ -212,6 +212,7 @@ struct bcm_voter *of_bcm_voter_get(struct device *dev, const char *name)
}
mutex_unlock(&bcm_voter_lock);
+ of_node_put(node);
return voter;
}
EXPORT_SYMBOL_GPL(of_bcm_voter_get);
@@ -369,6 +370,7 @@ static const struct of_device_id bcm_voter_of_match[] = {
{ .compatible = "qcom,bcm-voter" },
{ }
};
+MODULE_DEVICE_TABLE(of, bcm_voter_of_match);
static struct platform_driver qcom_icc_bcm_voter_driver = {
.probe = qcom_icc_bcm_voter_probe,
diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
index 3c215f0..cc9869c 100644
--- a/drivers/iommu/amd/init.c
+++ b/drivers/iommu/amd/init.c
@@ -12,7 +12,6 @@
#include <linux/acpi.h>
#include <linux/list.h>
#include <linux/bitmap.h>
-#include <linux/delay.h>
#include <linux/slab.h>
#include <linux/syscore_ops.h>
#include <linux/interrupt.h>
@@ -255,8 +254,6 @@ static enum iommu_init_state init_state = IOMMU_START_STATE;
static int amd_iommu_enable_interrupts(void);
static int __init iommu_go_to_state(enum iommu_init_state state);
static void init_device_table_dma(void);
-static int iommu_pc_get_set_reg(struct amd_iommu *iommu, u8 bank, u8 cntr,
- u8 fxn, u64 *value, bool is_write);
static bool amd_iommu_pre_enabled = true;
@@ -1720,53 +1717,16 @@ static int __init init_iommu_all(struct acpi_table_header *table)
return 0;
}
-static void __init init_iommu_perf_ctr(struct amd_iommu *iommu)
+static void init_iommu_perf_ctr(struct amd_iommu *iommu)
{
- int retry;
+ u64 val;
struct pci_dev *pdev = iommu->dev;
- u64 val = 0xabcd, val2 = 0, save_reg, save_src;
if (!iommu_feature(iommu, FEATURE_PC))
return;
amd_iommu_pc_present = true;
- /* save the value to restore, if writable */
- if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &save_reg, false) ||
- iommu_pc_get_set_reg(iommu, 0, 0, 8, &save_src, false))
- goto pc_false;
-
- /*
- * Disable power gating by programing the performance counter
- * source to 20 (i.e. counts the reads and writes from/to IOMMU
- * Reserved Register [MMIO Offset 1FF8h] that are ignored.),
- * which never get incremented during this init phase.
- * (Note: The event is also deprecated.)
- */
- val = 20;
- if (iommu_pc_get_set_reg(iommu, 0, 0, 8, &val, true))
- goto pc_false;
-
- /* Check if the performance counters can be written to */
- val = 0xabcd;
- for (retry = 5; retry; retry--) {
- if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &val, true) ||
- iommu_pc_get_set_reg(iommu, 0, 0, 0, &val2, false) ||
- val2)
- break;
-
- /* Wait about 20 msec for power gating to disable and retry. */
- msleep(20);
- }
-
- /* restore */
- if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &save_reg, true) ||
- iommu_pc_get_set_reg(iommu, 0, 0, 8, &save_src, true))
- goto pc_false;
-
- if (val != val2)
- goto pc_false;
-
pci_info(pdev, "IOMMU performance counters supported\n");
val = readl(iommu->mmio_base + MMIO_CNTR_CONF_OFFSET);
@@ -1774,11 +1734,6 @@ static void __init init_iommu_perf_ctr(struct amd_iommu *iommu)
iommu->max_counters = (u8) ((val >> 7) & 0xf);
return;
-
-pc_false:
- pci_err(pdev, "Unable to read/write to IOMMU perf counter.\n");
- amd_iommu_pc_present = false;
- return;
}
static ssize_t amd_iommu_show_cap(struct device *dev,
@@ -1840,7 +1795,7 @@ static void __init late_iommu_features_init(struct amd_iommu *iommu)
* IVHD and MMIO conflict.
*/
if (features != iommu->features)
- pr_warn(FW_WARN "EFR mismatch. Use IVHD EFR (%#llx : %#llx\n).",
+ pr_warn(FW_WARN "EFR mismatch. Use IVHD EFR (%#llx : %#llx).\n",
features, iommu->features);
}
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index d4b7f40..57e5d22 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -115,7 +115,7 @@
#define GERROR_PRIQ_ABT_ERR (1 << 3)
#define GERROR_EVTQ_ABT_ERR (1 << 2)
#define GERROR_CMDQ_ERR (1 << 0)
-#define GERROR_ERR_MASK 0xfd
+#define GERROR_ERR_MASK 0x1fd
#define ARM_SMMU_GERRORN 0x64
diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
index 02e7c10..b8d0b56 100644
--- a/drivers/iommu/intel/dmar.c
+++ b/drivers/iommu/intel/dmar.c
@@ -1137,7 +1137,7 @@ static int alloc_iommu(struct dmar_drhd_unit *drhd)
err = iommu_device_register(&iommu->iommu);
if (err)
- goto err_unmap;
+ goto err_sysfs;
}
drhd->iommu = iommu;
@@ -1145,6 +1145,8 @@ static int alloc_iommu(struct dmar_drhd_unit *drhd)
return 0;
+err_sysfs:
+ iommu_device_sysfs_remove(&iommu->iommu);
err_unmap:
unmap_iommu(iommu);
error_free_seq_id:
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 7e3db4c..b21c822 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -656,7 +656,14 @@ static int domain_update_iommu_snooping(struct intel_iommu *skip)
rcu_read_lock();
for_each_active_iommu(iommu, drhd) {
if (iommu != skip) {
- if (!ecap_sc_support(iommu->ecap)) {
+ /*
+ * If the hardware is operating in the scalable mode,
+ * the snooping control is always supported since we
+ * always set PASID-table-entry.PGSNP bit if the domain
+ * is managed outside (UNMANAGED).
+ */
+ if (!sm_supported(iommu) &&
+ !ecap_sc_support(iommu->ecap)) {
ret = 0;
break;
}
@@ -1021,8 +1028,11 @@ static struct dma_pte *pfn_to_dma_pte(struct dmar_domain *domain,
domain_flush_cache(domain, tmp_page, VTD_PAGE_SIZE);
pteval = ((uint64_t)virt_to_dma_pfn(tmp_page) << VTD_PAGE_SHIFT) | DMA_PTE_READ | DMA_PTE_WRITE;
- if (domain_use_first_level(domain))
+ if (domain_use_first_level(domain)) {
pteval |= DMA_FL_PTE_XD | DMA_FL_PTE_US;
+ if (domain->domain.type == IOMMU_DOMAIN_DMA)
+ pteval |= DMA_FL_PTE_ACCESS;
+ }
if (cmpxchg64(&pte->val, 0ULL, pteval))
/* Someone else set it while we were thinking; use theirs. */
free_pgtable_page(tmp_page);
@@ -1338,6 +1348,11 @@ static void iommu_set_root_entry(struct intel_iommu *iommu)
readl, (sts & DMA_GSTS_RTPS), sts);
raw_spin_unlock_irqrestore(&iommu->register_lock, flag);
+
+ iommu->flush.flush_context(iommu, 0, 0, 0, DMA_CCMD_GLOBAL_INVL);
+ if (sm_supported(iommu))
+ qi_flush_pasid_cache(iommu, 0, QI_PC_GLOBAL, 0);
+ iommu->flush.flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH);
}
void iommu_flush_write_buffer(struct intel_iommu *iommu)
@@ -2347,8 +2362,16 @@ static int __domain_mapping(struct dmar_domain *domain, unsigned long iov_pfn,
return -EINVAL;
attr = prot & (DMA_PTE_READ | DMA_PTE_WRITE | DMA_PTE_SNP);
- if (domain_use_first_level(domain))
- attr |= DMA_FL_PTE_PRESENT | DMA_FL_PTE_XD | DMA_FL_PTE_US;
+ attr |= DMA_FL_PTE_PRESENT;
+ if (domain_use_first_level(domain)) {
+ attr |= DMA_FL_PTE_XD | DMA_FL_PTE_US;
+
+ if (domain->domain.type == IOMMU_DOMAIN_DMA) {
+ attr |= DMA_FL_PTE_ACCESS;
+ if (prot & DMA_PTE_WRITE)
+ attr |= DMA_FL_PTE_DIRTY;
+ }
+ }
if (!sg) {
sg_res = nr_pages;
@@ -2506,6 +2529,10 @@ static void domain_context_clear_one(struct intel_iommu *iommu, u8 bus, u8 devfn
(((u16)bus) << 8) | devfn,
DMA_CCMD_MASK_NOBIT,
DMA_CCMD_DEVICE_INVL);
+
+ if (sm_supported(iommu))
+ qi_flush_pasid_cache(iommu, did_old, QI_PC_ALL_PASIDS, 0);
+
iommu->flush.flush_iotlb(iommu,
did_old,
0,
@@ -2579,9 +2606,9 @@ static int domain_setup_first_level(struct intel_iommu *iommu,
struct device *dev,
u32 pasid)
{
- int flags = PASID_FLAG_SUPERVISOR_MODE;
struct dma_pte *pgd = domain->pgd;
int agaw, level;
+ int flags = 0;
/*
* Skip top levels of page tables for iommu which has
@@ -2597,7 +2624,13 @@ static int domain_setup_first_level(struct intel_iommu *iommu,
if (level != 4 && level != 5)
return -EINVAL;
- flags |= (level == 5) ? PASID_FLAG_FL5LP : 0;
+ if (pasid != PASID_RID2PASID)
+ flags |= PASID_FLAG_SUPERVISOR_MODE;
+ if (level == 5)
+ flags |= PASID_FLAG_FL5LP;
+
+ if (domain->domain.type == IOMMU_DOMAIN_UNMANAGED)
+ flags |= PASID_FLAG_PAGE_SNOOP;
return intel_pasid_setup_first_level(iommu, dev, (pgd_t *)pgd, pasid,
domain->iommu_did[iommu->seq_id],
@@ -3369,8 +3402,6 @@ static int __init init_dmars(void)
register_pasid_allocator(iommu);
#endif
iommu_set_root_entry(iommu);
- iommu->flush.flush_context(iommu, 0, 0, 0, DMA_CCMD_GLOBAL_INVL);
- iommu->flush.flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH);
}
#ifdef CONFIG_INTEL_IOMMU_BROKEN_GFX_WA
@@ -4148,12 +4179,7 @@ static int init_iommu_hw(void)
}
iommu_flush_write_buffer(iommu);
-
iommu_set_root_entry(iommu);
-
- iommu->flush.flush_context(iommu, 0, 0, 0,
- DMA_CCMD_GLOBAL_INVL);
- iommu->flush.flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH);
iommu_enable_translation(iommu);
iommu_disable_protect_mem_regions(iommu);
}
@@ -4481,8 +4507,6 @@ static int intel_iommu_add(struct dmar_drhd_unit *dmaru)
goto disable_iommu;
iommu_set_root_entry(iommu);
- iommu->flush.flush_context(iommu, 0, 0, 0, DMA_CCMD_GLOBAL_INVL);
- iommu->flush.flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH);
iommu_enable_translation(iommu);
iommu_disable_protect_mem_regions(iommu);
diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
index b92af83..1e7c179 100644
--- a/drivers/iommu/intel/pasid.c
+++ b/drivers/iommu/intel/pasid.c
@@ -412,6 +412,16 @@ static inline void pasid_set_page_snoop(struct pasid_entry *pe, bool value)
}
/*
+ * Setup the Page Snoop (PGSNP) field (Bit 88) of a scalable mode
+ * PASID entry.
+ */
+static inline void
+pasid_set_pgsnp(struct pasid_entry *pe)
+{
+ pasid_set_bits(&pe->val[1], 1ULL << 24, 1ULL << 24);
+}
+
+/*
* Setup the First Level Page table Pointer field (Bit 140~191)
* of a scalable mode PASID entry.
*/
@@ -579,6 +589,9 @@ int intel_pasid_setup_first_level(struct intel_iommu *iommu,
}
}
+ if (flags & PASID_FLAG_PAGE_SNOOP)
+ pasid_set_pgsnp(pte);
+
pasid_set_domain_id(pte, did);
pasid_set_address_width(pte, iommu->agaw);
pasid_set_page_snoop(pte, !!ecap_smpwc(iommu->ecap));
@@ -657,11 +670,15 @@ int intel_pasid_setup_second_level(struct intel_iommu *iommu,
pasid_set_fault_enable(pte);
pasid_set_page_snoop(pte, !!ecap_smpwc(iommu->ecap));
+ if (domain->domain.type == IOMMU_DOMAIN_UNMANAGED)
+ pasid_set_pgsnp(pte);
+
/*
* Since it is a second level only translation setup, we should
* set SRE bit as well (addresses are expected to be GPAs).
*/
- pasid_set_sre(pte);
+ if (pasid != PASID_RID2PASID)
+ pasid_set_sre(pte);
pasid_set_present(pte);
pasid_flush_caches(iommu, pte, pasid, did);
diff --git a/drivers/iommu/intel/pasid.h b/drivers/iommu/intel/pasid.h
index 444c0be..086ebd6 100644
--- a/drivers/iommu/intel/pasid.h
+++ b/drivers/iommu/intel/pasid.h
@@ -48,6 +48,7 @@
*/
#define PASID_FLAG_SUPERVISOR_MODE BIT(0)
#define PASID_FLAG_NESTED BIT(1)
+#define PASID_FLAG_PAGE_SNOOP BIT(2)
/*
* The PASID_FLAG_FL5LP flag Indicates using 5-level paging for first-
diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
index b200a3a..6168dec 100644
--- a/drivers/iommu/intel/svm.c
+++ b/drivers/iommu/intel/svm.c
@@ -899,7 +899,7 @@ intel_svm_prq_report(struct device *dev, struct page_req_dsc *desc)
/* Fill in event data for device specific processing */
memset(&event, 0, sizeof(struct iommu_fault_event));
event.fault.type = IOMMU_FAULT_PAGE_REQ;
- event.fault.prm.addr = desc->addr;
+ event.fault.prm.addr = (u64)desc->addr << VTD_PAGE_SHIFT;
event.fault.prm.pasid = desc->pasid;
event.fault.prm.grpid = desc->prg_index;
event.fault.prm.perm = prq_to_iommu_prot(desc);
@@ -959,7 +959,17 @@ static irqreturn_t prq_event_thread(int irq, void *d)
((unsigned long long *)req)[1]);
goto no_pasid;
}
-
+ /* We shall not receive page request for supervisor SVM */
+ if (req->pm_req && (req->rd_req | req->wr_req)) {
+ pr_err("Unexpected page request in Privilege Mode");
+ /* No need to find the matching sdev as for bad_req */
+ goto no_pasid;
+ }
+ /* DMA read with exec request is not supported. */
+ if (req->exe_req && req->rd_req) {
+ pr_err("Execution request not supported\n");
+ goto no_pasid;
+ }
if (!svm || svm->pasid != req->pasid) {
rcu_read_lock();
svm = ioasid_find(NULL, req->pasid, NULL);
@@ -1061,12 +1071,12 @@ static irqreturn_t prq_event_thread(int irq, void *d)
QI_PGRP_RESP_TYPE;
resp.qw1 = QI_PGRP_IDX(req->prg_index) |
QI_PGRP_LPIG(req->lpig);
+ resp.qw2 = 0;
+ resp.qw3 = 0;
if (req->priv_data_present)
memcpy(&resp.qw2, req->priv_data,
sizeof(req->priv_data));
- resp.qw2 = 0;
- resp.qw3 = 0;
qi_submit_sync(iommu, &resp, 1, 0);
}
prq_advance:
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 0d9adce..9b8664d 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2872,10 +2872,12 @@ EXPORT_SYMBOL_GPL(iommu_dev_has_feature);
int iommu_dev_enable_feature(struct device *dev, enum iommu_dev_features feat)
{
- const struct iommu_ops *ops = dev->bus->iommu_ops;
+ if (dev->iommu && dev->iommu->iommu_dev) {
+ const struct iommu_ops *ops = dev->iommu->iommu_dev->ops;
- if (ops && ops->dev_enable_feat)
- return ops->dev_enable_feat(dev, feat);
+ if (ops->dev_enable_feat)
+ return ops->dev_enable_feat(dev, feat);
+ }
return -ENODEV;
}
@@ -2888,10 +2890,12 @@ EXPORT_SYMBOL_GPL(iommu_dev_enable_feature);
*/
int iommu_dev_disable_feature(struct device *dev, enum iommu_dev_features feat)
{
- const struct iommu_ops *ops = dev->bus->iommu_ops;
+ if (dev->iommu && dev->iommu->iommu_dev) {
+ const struct iommu_ops *ops = dev->iommu->iommu_dev->ops;
- if (ops && ops->dev_disable_feat)
- return ops->dev_disable_feat(dev, feat);
+ if (ops->dev_disable_feat)
+ return ops->dev_disable_feat(dev, feat);
+ }
return -EBUSY;
}
@@ -2899,10 +2903,12 @@ EXPORT_SYMBOL_GPL(iommu_dev_disable_feature);
bool iommu_dev_feature_enabled(struct device *dev, enum iommu_dev_features feat)
{
- const struct iommu_ops *ops = dev->bus->iommu_ops;
+ if (dev->iommu && dev->iommu->iommu_dev) {
+ const struct iommu_ops *ops = dev->iommu->iommu_dev->ops;
- if (ops && ops->dev_feat_enabled)
- return ops->dev_feat_enabled(dev, feat);
+ if (ops->dev_feat_enabled)
+ return ops->dev_feat_enabled(dev, feat);
+ }
return false;
}
diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index 2bfdd57..81dea4c 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -1138,6 +1138,7 @@ static struct virtio_device_id id_table[] = {
{ VIRTIO_ID_IOMMU, VIRTIO_DEV_ANY_ID },
{ 0 },
};
+MODULE_DEVICE_TABLE(virtio, id_table);
static struct virtio_driver virtio_iommu_drv = {
.driver.name = KBUILD_MODNAME,
diff --git a/drivers/irqchip/irq-gic-v3-mbi.c b/drivers/irqchip/irq-gic-v3-mbi.c
index 563a9b3..e81e89a 100644
--- a/drivers/irqchip/irq-gic-v3-mbi.c
+++ b/drivers/irqchip/irq-gic-v3-mbi.c
@@ -303,7 +303,7 @@ int __init mbi_init(struct fwnode_handle *fwnode, struct irq_domain *parent)
reg = of_get_property(np, "mbi-alias", NULL);
if (reg) {
mbi_phys_base = of_translate_address(np, reg);
- if (mbi_phys_base == OF_BAD_ADDR) {
+ if (mbi_phys_base == (phys_addr_t)OF_BAD_ADDR) {
ret = -ENXIO;
goto err_free_mbi;
}
diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
index 16fecc0..7929bf1 100644
--- a/drivers/irqchip/irq-gic-v3.c
+++ b/drivers/irqchip/irq-gic-v3.c
@@ -648,6 +648,10 @@ static asmlinkage void __exception_irq_entry gic_handle_irq(struct pt_regs *regs
irqnr = gic_read_iar();
+ /* Check for special IDs first */
+ if ((irqnr >= 1020 && irqnr <= 1023))
+ return;
+
if (gic_supports_nmi() &&
unlikely(gic_read_rpr() == GICD_INT_NMI_PRI)) {
gic_handle_nmi(irqnr, regs);
@@ -659,10 +663,6 @@ static asmlinkage void __exception_irq_entry gic_handle_irq(struct pt_regs *regs
gic_arch_enable_irqs();
}
- /* Check for special IDs first */
- if ((irqnr >= 1020 && irqnr <= 1023))
- return;
-
if (static_branch_likely(&supports_deactivate_key))
gic_write_eoir(irqnr);
else
diff --git a/drivers/isdn/capi/kcapi.c b/drivers/isdn/capi/kcapi.c
index 7168778..cb0afe8 100644
--- a/drivers/isdn/capi/kcapi.c
+++ b/drivers/isdn/capi/kcapi.c
@@ -721,7 +721,7 @@ u16 capi20_put_message(struct capi20_appl *ap, struct sk_buff *skb)
* Return value: CAPI result code
*/
-u16 capi20_get_manufacturer(u32 contr, u8 *buf)
+u16 capi20_get_manufacturer(u32 contr, u8 buf[CAPI_MANUFACTURER_LEN])
{
struct capi_ctr *ctr;
u16 ret;
@@ -787,7 +787,7 @@ u16 capi20_get_version(u32 contr, struct capi_version *verp)
* Return value: CAPI result code
*/
-u16 capi20_get_serial(u32 contr, u8 *serial)
+u16 capi20_get_serial(u32 contr, u8 serial[CAPI_SERIAL_LEN])
{
struct capi_ctr *ctr;
u16 ret;
diff --git a/drivers/isdn/hardware/mISDN/hfcsusb.c b/drivers/isdn/hardware/mISDN/hfcsusb.c
index 7006199..cd5642c 100644
--- a/drivers/isdn/hardware/mISDN/hfcsusb.c
+++ b/drivers/isdn/hardware/mISDN/hfcsusb.c
@@ -46,7 +46,7 @@ static void hfcsusb_start_endpoint(struct hfcsusb *hw, int channel);
static void hfcsusb_stop_endpoint(struct hfcsusb *hw, int channel);
static int hfcsusb_setup_bch(struct bchannel *bch, int protocol);
static void deactivate_bchannel(struct bchannel *bch);
-static void hfcsusb_ph_info(struct hfcsusb *hw);
+static int hfcsusb_ph_info(struct hfcsusb *hw);
/* start next background transfer for control channel */
static void
@@ -241,7 +241,7 @@ hfcusb_l2l1B(struct mISDNchannel *ch, struct sk_buff *skb)
* send full D/B channel status information
* as MPH_INFORMATION_IND
*/
-static void
+static int
hfcsusb_ph_info(struct hfcsusb *hw)
{
struct ph_info *phi;
@@ -250,7 +250,7 @@ hfcsusb_ph_info(struct hfcsusb *hw)
phi = kzalloc(struct_size(phi, bch, dch->dev.nrbchan), GFP_ATOMIC);
if (!phi)
- return;
+ return -ENOMEM;
phi->dch.ch.protocol = hw->protocol;
phi->dch.ch.Flags = dch->Flags;
@@ -263,6 +263,8 @@ hfcsusb_ph_info(struct hfcsusb *hw)
_queue_data(&dch->dev.D, MPH_INFORMATION_IND, MISDN_ID_ANY,
struct_size(phi, bch, dch->dev.nrbchan), phi, GFP_ATOMIC);
kfree(phi);
+
+ return 0;
}
/*
@@ -347,8 +349,7 @@ hfcusb_l2l1D(struct mISDNchannel *ch, struct sk_buff *skb)
ret = l1_event(dch->l1, hh->prim);
break;
case MPH_INFORMATION_REQ:
- hfcsusb_ph_info(hw);
- ret = 0;
+ ret = hfcsusb_ph_info(hw);
break;
}
@@ -403,8 +404,7 @@ hfc_l1callback(struct dchannel *dch, u_int cmd)
hw->name, __func__, cmd);
return -1;
}
- hfcsusb_ph_info(hw);
- return 0;
+ return hfcsusb_ph_info(hw);
}
static int
@@ -746,8 +746,7 @@ hfcsusb_setup_bch(struct bchannel *bch, int protocol)
handle_led(hw, (bch->nr == 1) ? LED_B1_OFF :
LED_B2_OFF);
}
- hfcsusb_ph_info(hw);
- return 0;
+ return hfcsusb_ph_info(hw);
}
static void
diff --git a/drivers/isdn/hardware/mISDN/mISDNinfineon.c b/drivers/isdn/hardware/mISDN/mISDNinfineon.c
index a16c7a2..88d592b 100644
--- a/drivers/isdn/hardware/mISDN/mISDNinfineon.c
+++ b/drivers/isdn/hardware/mISDN/mISDNinfineon.c
@@ -630,17 +630,19 @@ static void
release_io(struct inf_hw *hw)
{
if (hw->cfg.mode) {
- if (hw->cfg.p) {
+ if (hw->cfg.mode == AM_MEMIO) {
release_mem_region(hw->cfg.start, hw->cfg.size);
- iounmap(hw->cfg.p);
+ if (hw->cfg.p)
+ iounmap(hw->cfg.p);
} else
release_region(hw->cfg.start, hw->cfg.size);
hw->cfg.mode = AM_NONE;
}
if (hw->addr.mode) {
- if (hw->addr.p) {
+ if (hw->addr.mode == AM_MEMIO) {
release_mem_region(hw->addr.start, hw->addr.size);
- iounmap(hw->addr.p);
+ if (hw->addr.p)
+ iounmap(hw->addr.p);
} else
release_region(hw->addr.start, hw->addr.size);
hw->addr.mode = AM_NONE;
@@ -670,9 +672,12 @@ setup_io(struct inf_hw *hw)
(ulong)hw->cfg.start, (ulong)hw->cfg.size);
return err;
}
- if (hw->ci->cfg_mode == AM_MEMIO)
- hw->cfg.p = ioremap(hw->cfg.start, hw->cfg.size);
hw->cfg.mode = hw->ci->cfg_mode;
+ if (hw->ci->cfg_mode == AM_MEMIO) {
+ hw->cfg.p = ioremap(hw->cfg.start, hw->cfg.size);
+ if (!hw->cfg.p)
+ return -ENOMEM;
+ }
if (debug & DEBUG_HW)
pr_notice("%s: IO cfg %lx (%lu bytes) mode%d\n",
hw->name, (ulong)hw->cfg.start,
@@ -697,12 +702,12 @@ setup_io(struct inf_hw *hw)
(ulong)hw->addr.start, (ulong)hw->addr.size);
return err;
}
+ hw->addr.mode = hw->ci->addr_mode;
if (hw->ci->addr_mode == AM_MEMIO) {
hw->addr.p = ioremap(hw->addr.start, hw->addr.size);
- if (unlikely(!hw->addr.p))
+ if (!hw->addr.p)
return -ENOMEM;
}
- hw->addr.mode = hw->ci->addr_mode;
if (debug & DEBUG_HW)
pr_notice("%s: IO addr %lx (%lu bytes) mode%d\n",
hw->name, (ulong)hw->addr.start,
diff --git a/drivers/isdn/hardware/mISDN/mISDNipac.c b/drivers/isdn/hardware/mISDN/mISDNipac.c
index ec47508..39f841b4 100644
--- a/drivers/isdn/hardware/mISDN/mISDNipac.c
+++ b/drivers/isdn/hardware/mISDN/mISDNipac.c
@@ -694,7 +694,7 @@ isac_release(struct isac_hw *isac)
{
if (isac->type & IPAC_TYPE_ISACX)
WriteISAC(isac, ISACX_MASK, 0xff);
- else
+ else if (isac->type != 0)
WriteISAC(isac, ISAC_MASK, 0xff);
if (isac->dch.timer.function != NULL) {
del_timer(&isac->dch.timer);
diff --git a/drivers/leds/leds-lp5523.c b/drivers/leds/leds-lp5523.c
index fc433e6..b1590cb 100644
--- a/drivers/leds/leds-lp5523.c
+++ b/drivers/leds/leds-lp5523.c
@@ -307,7 +307,7 @@ static int lp5523_init_program_engine(struct lp55xx_chip *chip)
usleep_range(3000, 6000);
ret = lp55xx_read(chip, LP5523_REG_STATUS, &status);
if (ret)
- return ret;
+ goto out;
status &= LP5523_ENG_STATUS_MASK;
if (status != LP5523_ENG_STATUS_MASK) {
diff --git a/drivers/mailbox/sprd-mailbox.c b/drivers/mailbox/sprd-mailbox.c
index 4c32530..94d9067d 100644
--- a/drivers/mailbox/sprd-mailbox.c
+++ b/drivers/mailbox/sprd-mailbox.c
@@ -60,6 +60,8 @@ struct sprd_mbox_priv {
struct clk *clk;
u32 outbox_fifo_depth;
+ struct mutex lock;
+ u32 refcnt;
struct mbox_chan chan[SPRD_MBOX_CHAN_MAX];
};
@@ -115,7 +117,11 @@ static irqreturn_t sprd_mbox_outbox_isr(int irq, void *data)
id = readl(priv->outbox_base + SPRD_MBOX_ID);
chan = &priv->chan[id];
- mbox_chan_received_data(chan, (void *)msg);
+ if (chan->cl)
+ mbox_chan_received_data(chan, (void *)msg);
+ else
+ dev_warn_ratelimited(priv->dev,
+ "message's been dropped at ch[%d]\n", id);
/* Trigger to update outbox FIFO pointer */
writel(0x1, priv->outbox_base + SPRD_MBOX_TRIGGER);
@@ -215,18 +221,22 @@ static int sprd_mbox_startup(struct mbox_chan *chan)
struct sprd_mbox_priv *priv = to_sprd_mbox_priv(chan->mbox);
u32 val;
- /* Select outbox FIFO mode and reset the outbox FIFO status */
- writel(0x0, priv->outbox_base + SPRD_MBOX_FIFO_RST);
+ mutex_lock(&priv->lock);
+ if (priv->refcnt++ == 0) {
+ /* Select outbox FIFO mode and reset the outbox FIFO status */
+ writel(0x0, priv->outbox_base + SPRD_MBOX_FIFO_RST);
- /* Enable inbox FIFO overflow and delivery interrupt */
- val = readl(priv->inbox_base + SPRD_MBOX_IRQ_MSK);
- val &= ~(SPRD_INBOX_FIFO_OVERFLOW_IRQ | SPRD_INBOX_FIFO_DELIVER_IRQ);
- writel(val, priv->inbox_base + SPRD_MBOX_IRQ_MSK);
+ /* Enable inbox FIFO overflow and delivery interrupt */
+ val = readl(priv->inbox_base + SPRD_MBOX_IRQ_MSK);
+ val &= ~(SPRD_INBOX_FIFO_OVERFLOW_IRQ | SPRD_INBOX_FIFO_DELIVER_IRQ);
+ writel(val, priv->inbox_base + SPRD_MBOX_IRQ_MSK);
- /* Enable outbox FIFO not empty interrupt */
- val = readl(priv->outbox_base + SPRD_MBOX_IRQ_MSK);
- val &= ~SPRD_OUTBOX_FIFO_NOT_EMPTY_IRQ;
- writel(val, priv->outbox_base + SPRD_MBOX_IRQ_MSK);
+ /* Enable outbox FIFO not empty interrupt */
+ val = readl(priv->outbox_base + SPRD_MBOX_IRQ_MSK);
+ val &= ~SPRD_OUTBOX_FIFO_NOT_EMPTY_IRQ;
+ writel(val, priv->outbox_base + SPRD_MBOX_IRQ_MSK);
+ }
+ mutex_unlock(&priv->lock);
return 0;
}
@@ -235,9 +245,13 @@ static void sprd_mbox_shutdown(struct mbox_chan *chan)
{
struct sprd_mbox_priv *priv = to_sprd_mbox_priv(chan->mbox);
- /* Disable inbox & outbox interrupt */
- writel(SPRD_INBOX_FIFO_IRQ_MASK, priv->inbox_base + SPRD_MBOX_IRQ_MSK);
- writel(SPRD_OUTBOX_FIFO_IRQ_MASK, priv->outbox_base + SPRD_MBOX_IRQ_MSK);
+ mutex_lock(&priv->lock);
+ if (--priv->refcnt == 0) {
+ /* Disable inbox & outbox interrupt */
+ writel(SPRD_INBOX_FIFO_IRQ_MASK, priv->inbox_base + SPRD_MBOX_IRQ_MSK);
+ writel(SPRD_OUTBOX_FIFO_IRQ_MASK, priv->outbox_base + SPRD_MBOX_IRQ_MSK);
+ }
+ mutex_unlock(&priv->lock);
}
static const struct mbox_chan_ops sprd_mbox_ops = {
@@ -266,6 +280,7 @@ static int sprd_mbox_probe(struct platform_device *pdev)
return -ENOMEM;
priv->dev = dev;
+ mutex_init(&priv->lock);
/*
* The Spreadtrum mailbox uses an inbox to send messages to the target
diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
index b64fede..4c7da1c 100644
--- a/drivers/md/dm-integrity.c
+++ b/drivers/md/dm-integrity.c
@@ -3929,6 +3929,7 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned argc, char **argv)
if (val >= (uint64_t)UINT_MAX * 1000 / HZ) {
r = -EINVAL;
ti->error = "Invalid bitmap_flush_interval argument";
+ goto bad;
}
ic->bitmap_flush_interval = msecs_to_jiffies(val);
} else if (!strncmp(opt_string, "internal_hash:", strlen("internal_hash:"))) {
diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
index 6dca932..f5083b4 100644
--- a/drivers/md/dm-raid.c
+++ b/drivers/md/dm-raid.c
@@ -1869,6 +1869,14 @@ static bool rs_takeover_requested(struct raid_set *rs)
return rs->md.new_level != rs->md.level;
}
+/* True if layout is set to reshape. */
+static bool rs_is_layout_change(struct raid_set *rs, bool use_mddev)
+{
+ return (use_mddev ? rs->md.delta_disks : rs->delta_disks) ||
+ rs->md.new_layout != rs->md.layout ||
+ rs->md.new_chunk_sectors != rs->md.chunk_sectors;
+}
+
/* True if @rs is requested to reshape by ctr */
static bool rs_reshape_requested(struct raid_set *rs)
{
@@ -1881,9 +1889,7 @@ static bool rs_reshape_requested(struct raid_set *rs)
if (rs_is_raid0(rs))
return false;
- change = mddev->new_layout != mddev->layout ||
- mddev->new_chunk_sectors != mddev->chunk_sectors ||
- rs->delta_disks;
+ change = rs_is_layout_change(rs, false);
/* Historical case to support raid1 reshape without delta disks */
if (rs_is_raid1(rs)) {
@@ -2818,7 +2824,7 @@ static sector_t _get_reshape_sectors(struct raid_set *rs)
}
/*
- *
+ * Reshape:
* - change raid layout
* - change chunk size
* - add disks
@@ -2928,6 +2934,20 @@ static int rs_setup_reshape(struct raid_set *rs)
}
/*
+ * If the md resync thread has updated superblock with max reshape position
+ * at the end of a reshape but not (yet) reset the layout configuration
+ * changes -> reset the latter.
+ */
+static void rs_reset_inconclusive_reshape(struct raid_set *rs)
+{
+ if (!rs_is_reshaping(rs) && rs_is_layout_change(rs, true)) {
+ rs_set_cur(rs);
+ rs->md.delta_disks = 0;
+ rs->md.reshape_backwards = 0;
+ }
+}
+
+/*
* Enable/disable discard support on RAID set depending on
* RAID level and discard properties of underlying RAID members.
*/
@@ -3213,11 +3233,14 @@ static int raid_ctr(struct dm_target *ti, unsigned int argc, char **argv)
if (r)
goto bad;
+ /* Catch any inconclusive reshape superblock content. */
+ rs_reset_inconclusive_reshape(rs);
+
/* Start raid set read-only and assumed clean to change in raid_resume() */
rs->md.ro = 1;
rs->md.in_sync = 1;
- /* Keep array frozen */
+ /* Keep array frozen until resume. */
set_bit(MD_RECOVERY_FROZEN, &rs->md.recovery);
/* Has to be held on running the array */
@@ -3231,7 +3254,6 @@ static int raid_ctr(struct dm_target *ti, unsigned int argc, char **argv)
}
r = md_start(&rs->md);
-
if (r) {
ti->error = "Failed to start raid array";
mddev_unlock(&rs->md);
diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
index 729a72e..b1e867f 100644
--- a/drivers/md/dm-rq.c
+++ b/drivers/md/dm-rq.c
@@ -569,6 +569,7 @@ int dm_mq_init_request_queue(struct mapped_device *md, struct dm_table *t)
blk_mq_free_tag_set(md->tag_set);
out_kfree_tag_set:
kfree(md->tag_set);
+ md->tag_set = NULL;
return err;
}
@@ -578,6 +579,7 @@ void dm_mq_cleanup_mapped_device(struct mapped_device *md)
if (md->tag_set) {
blk_mq_free_tag_set(md->tag_set);
kfree(md->tag_set);
+ md->tag_set = NULL;
}
}
diff --git a/drivers/md/dm-snap.c b/drivers/md/dm-snap.c
index 11890db..41735a2 100644
--- a/drivers/md/dm-snap.c
+++ b/drivers/md/dm-snap.c
@@ -854,7 +854,7 @@ static int dm_add_exception(void *context, chunk_t old, chunk_t new)
static uint32_t __minimum_chunk_size(struct origin *o)
{
struct dm_snapshot *snap;
- unsigned chunk_size = 0;
+ unsigned chunk_size = rounddown_pow_of_two(UINT_MAX);
if (o)
list_for_each_entry(snap, &o->snapshots, list)
@@ -1408,6 +1408,7 @@ static int snapshot_ctr(struct dm_target *ti, unsigned int argc, char **argv)
if (!s->store->chunk_size) {
ti->error = "Chunk size not set";
+ r = -EINVAL;
goto bad_read_metadata;
}
diff --git a/drivers/md/dm-verity-fec.c b/drivers/md/dm-verity-fec.c
index 66f4c63..cea2b37 100644
--- a/drivers/md/dm-verity-fec.c
+++ b/drivers/md/dm-verity-fec.c
@@ -65,7 +65,7 @@ static u8 *fec_read_parity(struct dm_verity *v, u64 rsb, int index,
u8 *res;
position = (index + rsb) * v->fec->roots;
- block = div64_u64_rem(position, v->fec->roots << SECTOR_SHIFT, &rem);
+ block = div64_u64_rem(position, v->fec->io_size, &rem);
*offset = (unsigned)rem;
res = dm_bufio_read(v->fec->bufio, block, buf);
@@ -154,7 +154,7 @@ static int fec_decode_bufs(struct dm_verity *v, struct dm_verity_fec_io *fio,
/* read the next block when we run out of parity bytes */
offset += v->fec->roots;
- if (offset >= v->fec->roots << SECTOR_SHIFT) {
+ if (offset >= v->fec->io_size) {
dm_bufio_release(buf);
par = fec_read_parity(v, rsb, block_offset, &offset, &buf);
@@ -742,8 +742,13 @@ int verity_fec_ctr(struct dm_verity *v)
return -E2BIG;
}
+ if ((f->roots << SECTOR_SHIFT) & ((1 << v->data_dev_block_bits) - 1))
+ f->io_size = 1 << v->data_dev_block_bits;
+ else
+ f->io_size = v->fec->roots << SECTOR_SHIFT;
+
f->bufio = dm_bufio_client_create(f->dev->bdev,
- f->roots << SECTOR_SHIFT,
+ f->io_size,
1, 0, NULL, NULL);
if (IS_ERR(f->bufio)) {
ti->error = "Cannot initialize FEC bufio client";
diff --git a/drivers/md/dm-verity-fec.h b/drivers/md/dm-verity-fec.h
index 42fbd3a..3c46c8d 100644
--- a/drivers/md/dm-verity-fec.h
+++ b/drivers/md/dm-verity-fec.h
@@ -36,6 +36,7 @@ struct dm_verity_fec {
struct dm_dev *dev; /* parity data device */
struct dm_bufio_client *data_bufio; /* for data dev access */
struct dm_bufio_client *bufio; /* for parity data access */
+ size_t io_size; /* IO size for roots */
sector_t start; /* parity data start in blocks */
sector_t blocks; /* number of blocks covered */
sector_t rounds; /* number of interleaving rounds */
diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
index 200c5d0..ea3130e 100644
--- a/drivers/md/md-bitmap.c
+++ b/drivers/md/md-bitmap.c
@@ -1722,6 +1722,8 @@ void md_bitmap_flush(struct mddev *mddev)
md_bitmap_daemon_work(mddev);
bitmap->daemon_lastrun -= sleep;
md_bitmap_daemon_work(mddev);
+ if (mddev->bitmap_info.external)
+ md_super_wait(mddev);
md_bitmap_update_sb(bitmap);
}
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 7a0a228d..288d260 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -748,8 +748,35 @@ void mddev_init(struct mddev *mddev)
}
EXPORT_SYMBOL_GPL(mddev_init);
+static struct mddev *mddev_find_locked(dev_t unit)
+{
+ struct mddev *mddev;
+
+ list_for_each_entry(mddev, &all_mddevs, all_mddevs)
+ if (mddev->unit == unit)
+ return mddev;
+
+ return NULL;
+}
+
static struct mddev *mddev_find(dev_t unit)
{
+ struct mddev *mddev;
+
+ if (MAJOR(unit) != MD_MAJOR)
+ unit &= ~((1 << MdpMinorShift) - 1);
+
+ spin_lock(&all_mddevs_lock);
+ mddev = mddev_find_locked(unit);
+ if (mddev)
+ mddev_get(mddev);
+ spin_unlock(&all_mddevs_lock);
+
+ return mddev;
+}
+
+static struct mddev *mddev_find_or_alloc(dev_t unit)
+{
struct mddev *mddev, *new = NULL;
if (unit && MAJOR(unit) != MD_MAJOR)
@@ -759,13 +786,13 @@ static struct mddev *mddev_find(dev_t unit)
spin_lock(&all_mddevs_lock);
if (unit) {
- list_for_each_entry(mddev, &all_mddevs, all_mddevs)
- if (mddev->unit == unit) {
- mddev_get(mddev);
- spin_unlock(&all_mddevs_lock);
- kfree(new);
- return mddev;
- }
+ mddev = mddev_find_locked(unit);
+ if (mddev) {
+ mddev_get(mddev);
+ spin_unlock(&all_mddevs_lock);
+ kfree(new);
+ return mddev;
+ }
if (new) {
list_add(&new->all_mddevs, &all_mddevs);
@@ -791,12 +818,7 @@ static struct mddev *mddev_find(dev_t unit)
return NULL;
}
- is_free = 1;
- list_for_each_entry(mddev, &all_mddevs, all_mddevs)
- if (mddev->unit == dev) {
- is_free = 0;
- break;
- }
+ is_free = !mddev_find_locked(dev);
}
new->unit = dev;
new->md_minor = MINOR(dev);
@@ -5656,7 +5678,7 @@ static int md_alloc(dev_t dev, char *name)
* writing to /sys/module/md_mod/parameters/new_array.
*/
static DEFINE_MUTEX(disks_mutex);
- struct mddev *mddev = mddev_find(dev);
+ struct mddev *mddev = mddev_find_or_alloc(dev);
struct gendisk *disk;
int partitioned;
int shift;
@@ -6539,11 +6561,9 @@ static void autorun_devices(int part)
md_probe(dev, NULL, NULL);
mddev = mddev_find(dev);
- if (!mddev || !mddev->gendisk) {
- if (mddev)
- mddev_put(mddev);
+ if (!mddev)
break;
- }
+
if (mddev_lock(mddev))
pr_warn("md: %s locked, cannot run\n", mdname(mddev));
else if (mddev->raid_disks || mddev->major_version
@@ -7837,8 +7857,7 @@ static int md_open(struct block_device *bdev, fmode_t mode)
/* Wait until bdev->bd_disk is definitely gone */
if (work_pending(&mddev->del_work))
flush_workqueue(md_misc_wq);
- /* Then retry the open from the top */
- return -ERESTARTSYS;
+ return -EBUSY;
}
BUG_ON(mddev != bdev->bd_disk->private_data);
@@ -8168,7 +8187,11 @@ static void *md_seq_start(struct seq_file *seq, loff_t *pos)
loff_t l = *pos;
struct mddev *mddev;
- if (l >= 0x10000)
+ if (l == 0x10000) {
+ ++*pos;
+ return (void *)2;
+ }
+ if (l > 0x10000)
return NULL;
if (!l--)
/* header */
@@ -9267,11 +9290,11 @@ void md_check_recovery(struct mddev *mddev)
}
if (mddev_is_clustered(mddev)) {
- struct md_rdev *rdev;
+ struct md_rdev *rdev, *tmp;
/* kick the device if another node issued a
* remove disk.
*/
- rdev_for_each(rdev, mddev) {
+ rdev_for_each_safe(rdev, tmp, mddev) {
if (test_and_clear_bit(ClusterRemove, &rdev->flags) &&
rdev->raid_disk < 0)
md_kick_rdev_from_array(rdev);
@@ -9588,7 +9611,7 @@ static int __init md_init(void)
static void check_sb_changes(struct mddev *mddev, struct md_rdev *rdev)
{
struct mdp_superblock_1 *sb = page_address(rdev->sb_page);
- struct md_rdev *rdev2;
+ struct md_rdev *rdev2, *tmp;
int role, ret;
char b[BDEVNAME_SIZE];
@@ -9605,7 +9628,7 @@ static void check_sb_changes(struct mddev *mddev, struct md_rdev *rdev)
}
/* Check for change of roles in the active devices */
- rdev_for_each(rdev2, mddev) {
+ rdev_for_each_safe(rdev2, tmp, mddev) {
if (test_bit(Faulty, &rdev2->flags))
continue;
diff --git a/drivers/md/persistent-data/dm-btree-internal.h b/drivers/md/persistent-data/dm-btree-internal.h
index 5648966..21d1a17 100644
--- a/drivers/md/persistent-data/dm-btree-internal.h
+++ b/drivers/md/persistent-data/dm-btree-internal.h
@@ -34,12 +34,12 @@ struct node_header {
__le32 max_entries;
__le32 value_size;
__le32 padding;
-} __packed;
+} __attribute__((packed, aligned(8)));
struct btree_node {
struct node_header header;
__le64 keys[];
-} __packed;
+} __attribute__((packed, aligned(8)));
/*
diff --git a/drivers/md/persistent-data/dm-space-map-common.c b/drivers/md/persistent-data/dm-space-map-common.c
index d8b4125..a213bf1 100644
--- a/drivers/md/persistent-data/dm-space-map-common.c
+++ b/drivers/md/persistent-data/dm-space-map-common.c
@@ -339,6 +339,8 @@ int sm_ll_find_free_block(struct ll_disk *ll, dm_block_t begin,
*/
begin = do_div(index_begin, ll->entries_per_block);
end = do_div(end, ll->entries_per_block);
+ if (end == 0)
+ end = ll->entries_per_block;
for (i = index_begin; i < index_end; i++, begin = 0) {
struct dm_block *blk;
diff --git a/drivers/md/persistent-data/dm-space-map-common.h b/drivers/md/persistent-data/dm-space-map-common.h
index 8de63ce..87e1790 100644
--- a/drivers/md/persistent-data/dm-space-map-common.h
+++ b/drivers/md/persistent-data/dm-space-map-common.h
@@ -33,7 +33,7 @@ struct disk_index_entry {
__le64 blocknr;
__le32 nr_free;
__le32 none_free_before;
-} __packed;
+} __attribute__ ((packed, aligned(8)));
#define MAX_METADATA_BITMAPS 255
@@ -43,7 +43,7 @@ struct disk_metadata_index {
__le64 blocknr;
struct disk_index_entry index[MAX_METADATA_BITMAPS];
-} __packed;
+} __attribute__ ((packed, aligned(8)));
struct ll_disk;
@@ -86,7 +86,7 @@ struct disk_sm_root {
__le64 nr_allocated;
__le64 bitmap_root;
__le64 ref_count_root;
-} __packed;
+} __attribute__ ((packed, aligned(8)));
#define ENTRIES_PER_BYTE 4
@@ -94,7 +94,7 @@ struct disk_bitmap_header {
__le32 csum;
__le32 not_used;
__le64 blocknr;
-} __packed;
+} __attribute__ ((packed, aligned(8)));
enum allocation_event {
SM_NONE,
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 960d854c..a648056 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -478,6 +478,8 @@ static void raid1_end_write_request(struct bio *bio)
if (!test_bit(Faulty, &rdev->flags))
set_bit(R1BIO_WriteError, &r1_bio->state);
else {
+ /* Fail the request */
+ set_bit(R1BIO_Degraded, &r1_bio->state);
/* Finished with this branch */
r1_bio->bios[mirror] = NULL;
to_put = bio;
diff --git a/drivers/media/common/saa7146/saa7146_core.c b/drivers/media/common/saa7146/saa7146_core.c
index 21fb16c..e43edb0 100644
--- a/drivers/media/common/saa7146/saa7146_core.c
+++ b/drivers/media/common/saa7146/saa7146_core.c
@@ -253,7 +253,7 @@ int saa7146_pgtable_build_single(struct pci_dev *pci, struct saa7146_pgtable *pt
i, sg_dma_address(list), sg_dma_len(list),
list->offset);
*/
- for (p = 0; p * 4096 < list->length; p++, ptr++) {
+ for (p = 0; p * 4096 < sg_dma_len(list); p++, ptr++) {
*ptr = cpu_to_le32(sg_dma_address(list) + p * 4096);
nr_pages++;
}
diff --git a/drivers/media/common/saa7146/saa7146_video.c b/drivers/media/common/saa7146/saa7146_video.c
index ccd15b4..0d1be40 100644
--- a/drivers/media/common/saa7146/saa7146_video.c
+++ b/drivers/media/common/saa7146/saa7146_video.c
@@ -247,9 +247,8 @@ static int saa7146_pgtable_build(struct saa7146_dev *dev, struct saa7146_buf *bu
/* walk all pages, copy all page addresses to ptr1 */
for (i = 0; i < length; i++, list++) {
- for (p = 0; p * 4096 < list->length; p++, ptr1++) {
+ for (p = 0; p * 4096 < sg_dma_len(list); p++, ptr1++)
*ptr1 = cpu_to_le32(sg_dma_address(list) - list->offset);
- }
}
/*
ptr1 = pt1->cpu;
diff --git a/drivers/media/dvb-core/dvbdev.c b/drivers/media/dvb-core/dvbdev.c
index 959fa28..ec9ebff 100644
--- a/drivers/media/dvb-core/dvbdev.c
+++ b/drivers/media/dvb-core/dvbdev.c
@@ -241,6 +241,7 @@ static void dvb_media_device_free(struct dvb_device *dvbdev)
if (dvbdev->adapter->conn) {
media_device_unregister_entity(dvbdev->adapter->conn);
+ kfree(dvbdev->adapter->conn);
dvbdev->adapter->conn = NULL;
kfree(dvbdev->adapter->conn_pads);
dvbdev->adapter->conn_pads = NULL;
diff --git a/drivers/media/dvb-frontends/m88ds3103.c b/drivers/media/dvb-frontends/m88ds3103.c
index ad6d9d5..c120cff 100644
--- a/drivers/media/dvb-frontends/m88ds3103.c
+++ b/drivers/media/dvb-frontends/m88ds3103.c
@@ -1904,8 +1904,8 @@ static int m88ds3103_probe(struct i2c_client *client,
dev->dt_client = i2c_new_dummy_device(client->adapter,
dev->dt_addr);
- if (!dev->dt_client) {
- ret = -ENODEV;
+ if (IS_ERR(dev->dt_client)) {
+ ret = PTR_ERR(dev->dt_client);
goto err_kfree;
}
}
diff --git a/drivers/media/dvb-frontends/sp8870.c b/drivers/media/dvb-frontends/sp8870.c
index 655db82..9767159 100644
--- a/drivers/media/dvb-frontends/sp8870.c
+++ b/drivers/media/dvb-frontends/sp8870.c
@@ -281,7 +281,7 @@ static int sp8870_set_frontend_parameters(struct dvb_frontend *fe)
// read status reg in order to clear pending irqs
err = sp8870_readreg(state, 0x200);
- if (err)
+ if (err < 0)
return err;
// system controller start
diff --git a/drivers/media/i2c/adv7511-v4l2.c b/drivers/media/i2c/adv7511-v4l2.c
index a3161d7..ab7883c 100644
--- a/drivers/media/i2c/adv7511-v4l2.c
+++ b/drivers/media/i2c/adv7511-v4l2.c
@@ -1964,7 +1964,7 @@ static int adv7511_remove(struct i2c_client *client)
adv7511_set_isr(sd, false);
adv7511_init_setup(sd);
- cancel_delayed_work(&state->edid_handler);
+ cancel_delayed_work_sync(&state->edid_handler);
i2c_unregister_device(state->i2c_edid);
i2c_unregister_device(state->i2c_cec);
i2c_unregister_device(state->i2c_pktmem);
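This adv7511 change, and the matching adv7604/adv7842/tc358743/tda1997x hunks below, replace `cancel_delayed_work()` with `cancel_delayed_work_sync()` in the remove path. The plain variant only removes a queued item; a handler that is already executing keeps running and may touch driver state after remove() has torn it down. A hedged sketch of the remove-side idiom, using hypothetical names rather than the adv7511 code:

#include <linux/kernel.h>
#include <linux/i2c.h>
#include <linux/workqueue.h>
#include <media/v4l2-subdev.h>

struct demo_state {
        struct v4l2_subdev sd;
        struct delayed_work edid_work;  /* hypothetical deferred work */
};

static int demo_remove(struct i2c_client *client)
{
        struct v4l2_subdev *sd = i2c_get_clientdata(client);
        struct demo_state *state = container_of(sd, struct demo_state, sd);

        /*
         * The _sync variant also waits for an already-running handler to
         * finish, so the work can no longer reference 'state' while the
         * rest of the teardown releases it.
         */
        cancel_delayed_work_sync(&state->edid_work);

        /* ... unregister the subdev and free remaining resources ... */
        return 0;
}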
diff --git a/drivers/media/i2c/adv7604.c b/drivers/media/i2c/adv7604.c
index 09004d9..d1f5879 100644
--- a/drivers/media/i2c/adv7604.c
+++ b/drivers/media/i2c/adv7604.c
@@ -3616,7 +3616,7 @@ static int adv76xx_remove(struct i2c_client *client)
io_write(sd, 0x6e, 0);
io_write(sd, 0x73, 0);
- cancel_delayed_work(&state->delayed_work_enable_hotplug);
+ cancel_delayed_work_sync(&state->delayed_work_enable_hotplug);
v4l2_async_unregister_subdev(sd);
media_entity_cleanup(&sd->entity);
adv76xx_unregister_clients(to_state(sd));
diff --git a/drivers/media/i2c/adv7842.c b/drivers/media/i2c/adv7842.c
index 0855f648..f7d2b6c 100644
--- a/drivers/media/i2c/adv7842.c
+++ b/drivers/media/i2c/adv7842.c
@@ -3586,7 +3586,7 @@ static int adv7842_remove(struct i2c_client *client)
struct adv7842_state *state = to_state(sd);
adv7842_irq_enable(sd, false);
- cancel_delayed_work(&state->delayed_work_enable_hotplug);
+ cancel_delayed_work_sync(&state->delayed_work_enable_hotplug);
v4l2_device_unregister_subdev(sd);
media_entity_cleanup(&sd->entity);
adv7842_unregister_clients(sd);
diff --git a/drivers/media/i2c/imx219.c b/drivers/media/i2c/imx219.c
index 0ae6609..4771d0e 100644
--- a/drivers/media/i2c/imx219.c
+++ b/drivers/media/i2c/imx219.c
@@ -1026,29 +1026,47 @@ static int imx219_start_streaming(struct imx219 *imx219)
const struct imx219_reg_list *reg_list;
int ret;
+ ret = pm_runtime_get_sync(&client->dev);
+ if (ret < 0) {
+ pm_runtime_put_noidle(&client->dev);
+ return ret;
+ }
+
/* Apply default values of current mode */
reg_list = &imx219->mode->reg_list;
ret = imx219_write_regs(imx219, reg_list->regs, reg_list->num_of_regs);
if (ret) {
dev_err(&client->dev, "%s failed to set mode\n", __func__);
- return ret;
+ goto err_rpm_put;
}
ret = imx219_set_framefmt(imx219);
if (ret) {
dev_err(&client->dev, "%s failed to set frame format: %d\n",
__func__, ret);
- return ret;
+ goto err_rpm_put;
}
/* Apply customized values from user */
ret = __v4l2_ctrl_handler_setup(imx219->sd.ctrl_handler);
if (ret)
- return ret;
+ goto err_rpm_put;
/* set stream on register */
- return imx219_write_reg(imx219, IMX219_REG_MODE_SELECT,
- IMX219_REG_VALUE_08BIT, IMX219_MODE_STREAMING);
+ ret = imx219_write_reg(imx219, IMX219_REG_MODE_SELECT,
+ IMX219_REG_VALUE_08BIT, IMX219_MODE_STREAMING);
+ if (ret)
+ goto err_rpm_put;
+
+ /* vflip and hflip cannot change during streaming */
+ __v4l2_ctrl_grab(imx219->vflip, true);
+ __v4l2_ctrl_grab(imx219->hflip, true);
+
+ return 0;
+
+err_rpm_put:
+ pm_runtime_put(&client->dev);
+ return ret;
}
static void imx219_stop_streaming(struct imx219 *imx219)
@@ -1061,12 +1079,16 @@ static void imx219_stop_streaming(struct imx219 *imx219)
IMX219_REG_VALUE_08BIT, IMX219_MODE_STANDBY);
if (ret)
dev_err(&client->dev, "%s failed to set stream\n", __func__);
+
+ __v4l2_ctrl_grab(imx219->vflip, false);
+ __v4l2_ctrl_grab(imx219->hflip, false);
+
+ pm_runtime_put(&client->dev);
}
static int imx219_set_stream(struct v4l2_subdev *sd, int enable)
{
struct imx219 *imx219 = to_imx219(sd);
- struct i2c_client *client = v4l2_get_subdevdata(sd);
int ret = 0;
mutex_lock(&imx219->mutex);
@@ -1076,36 +1098,23 @@ static int imx219_set_stream(struct v4l2_subdev *sd, int enable)
}
if (enable) {
- ret = pm_runtime_get_sync(&client->dev);
- if (ret < 0) {
- pm_runtime_put_noidle(&client->dev);
- goto err_unlock;
- }
-
/*
* Apply default & customized values
* and then start streaming.
*/
ret = imx219_start_streaming(imx219);
if (ret)
- goto err_rpm_put;
+ goto err_unlock;
} else {
imx219_stop_streaming(imx219);
- pm_runtime_put(&client->dev);
}
imx219->streaming = enable;
- /* vflip and hflip cannot change during streaming */
- __v4l2_ctrl_grab(imx219->vflip, enable);
- __v4l2_ctrl_grab(imx219->hflip, enable);
-
mutex_unlock(&imx219->mutex);
return ret;
-err_rpm_put:
- pm_runtime_put(&client->dev);
err_unlock:
mutex_unlock(&imx219->mutex);
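The imx219 rework above moves the runtime-PM reference into imx219_start_streaming() and imx219_stop_streaming(), so each failure path releases exactly what it took. The underlying `pm_runtime_get_sync()` idiom is worth spelling out, because the call bumps the usage count even when it fails; a minimal sketch with a hypothetical `program_sensor()` helper:

#include <linux/pm_runtime.h>

int program_sensor(struct device *dev);  /* hypothetical register setup */

static int demo_start_streaming(struct device *dev)
{
        int ret;

        ret = pm_runtime_get_sync(dev);
        if (ret < 0) {
                /* the usage count was incremented even though resume failed */
                pm_runtime_put_noidle(dev);
                return ret;
        }

        ret = program_sensor(dev);
        if (ret)
                goto err_rpm_put;

        return 0;

err_rpm_put:
        pm_runtime_put(dev);
        return ret;
}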
diff --git a/drivers/media/i2c/tc358743.c b/drivers/media/i2c/tc358743.c
index 831b5b5..1b309bb 100644
--- a/drivers/media/i2c/tc358743.c
+++ b/drivers/media/i2c/tc358743.c
@@ -2193,7 +2193,7 @@ static int tc358743_remove(struct i2c_client *client)
del_timer_sync(&state->timer);
flush_work(&state->work_i2c_poll);
}
- cancel_delayed_work(&state->delayed_work_enable_hotplug);
+ cancel_delayed_work_sync(&state->delayed_work_enable_hotplug);
cec_unregister_adapter(state->cec_adap);
v4l2_async_unregister_subdev(sd);
v4l2_device_unregister_subdev(sd);
diff --git a/drivers/media/i2c/tda1997x.c b/drivers/media/i2c/tda1997x.c
index a09bf0a..89bb7e6 100644
--- a/drivers/media/i2c/tda1997x.c
+++ b/drivers/media/i2c/tda1997x.c
@@ -2804,7 +2804,7 @@ static int tda1997x_remove(struct i2c_client *client)
media_entity_cleanup(&sd->entity);
v4l2_ctrl_handler_free(&state->hdl);
regulator_bulk_disable(TDA1997X_NUM_SUPPLIES, state->supplies);
- cancel_delayed_work(&state->delayed_work_enable_hpd);
+ cancel_delayed_work_sync(&state->delayed_work_enable_hpd);
mutex_destroy(&state->page_lock);
mutex_destroy(&state->lock);
diff --git a/drivers/media/pci/saa7134/saa7134-core.c b/drivers/media/pci/saa7134/saa7134-core.c
index 391572a..efb757d 100644
--- a/drivers/media/pci/saa7134/saa7134-core.c
+++ b/drivers/media/pci/saa7134/saa7134-core.c
@@ -243,7 +243,7 @@ int saa7134_pgtable_build(struct pci_dev *pci, struct saa7134_pgtable *pt,
ptr = pt->cpu + startpage;
for (i = 0; i < length; i++, list = sg_next(list)) {
- for (p = 0; p * 4096 < list->length; p++, ptr++)
+ for (p = 0; p * 4096 < sg_dma_len(list); p++, ptr++)
*ptr = cpu_to_le32(sg_dma_address(list) +
list->offset + p * 4096);
}
diff --git a/drivers/media/pci/saa7164/saa7164-encoder.c b/drivers/media/pci/saa7164/saa7164-encoder.c
index 11e1eb6..1d1d32e 100644
--- a/drivers/media/pci/saa7164/saa7164-encoder.c
+++ b/drivers/media/pci/saa7164/saa7164-encoder.c
@@ -1008,7 +1008,7 @@ int saa7164_encoder_register(struct saa7164_port *port)
printk(KERN_ERR "%s() failed (errno = %d), NO PCI configuration\n",
__func__, result);
result = -ENOMEM;
- goto failed;
+ goto fail_pci;
}
/* Establish encoder defaults here */
@@ -1062,7 +1062,7 @@ int saa7164_encoder_register(struct saa7164_port *port)
100000, ENCODER_DEF_BITRATE);
if (hdl->error) {
result = hdl->error;
- goto failed;
+ goto fail_hdl;
}
port->std = V4L2_STD_NTSC_M;
@@ -1080,7 +1080,7 @@ int saa7164_encoder_register(struct saa7164_port *port)
printk(KERN_INFO "%s: can't allocate mpeg device\n",
dev->name);
result = -ENOMEM;
- goto failed;
+ goto fail_hdl;
}
port->v4l_device->ctrl_handler = hdl;
@@ -1091,10 +1091,7 @@ int saa7164_encoder_register(struct saa7164_port *port)
if (result < 0) {
printk(KERN_INFO "%s: can't register mpeg device\n",
dev->name);
- /* TODO: We're going to leak here if we don't dealloc
- The buffers above. The unreg function can't deal wit it.
- */
- goto failed;
+ goto fail_reg;
}
printk(KERN_INFO "%s: registered device video%d [mpeg]\n",
@@ -1116,9 +1113,14 @@ int saa7164_encoder_register(struct saa7164_port *port)
saa7164_api_set_encoder(port);
saa7164_api_get_encoder(port);
+ return 0;
- result = 0;
-failed:
+fail_reg:
+ video_device_release(port->v4l_device);
+ port->v4l_device = NULL;
+fail_hdl:
+ v4l2_ctrl_handler_free(hdl);
+fail_pci:
return result;
}
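The saa7164 change above replaces the single `failed:` label with staged `fail_pci` / `fail_hdl` / `fail_reg` labels, so each exit releases only what was actually set up, in reverse order of acquisition. A compact sketch of that convention with hypothetical helpers:

/* Hypothetical setup helpers; each *_undo releases what its setup acquired. */
int demo_setup_buffers(void *port);
void demo_undo_buffers(void *port);
int demo_setup_ctrls(void *port);
void demo_undo_ctrls(void *port);
int demo_register_video(void *port);

int demo_register(void *port)
{
        int ret;

        ret = demo_setup_buffers(port);
        if (ret)
                return ret;             /* nothing to unwind yet */

        ret = demo_setup_ctrls(port);
        if (ret)
                goto err_buffers;

        ret = demo_register_video(port);
        if (ret)
                goto err_ctrls;

        return 0;

err_ctrls:
        demo_undo_ctrls(port);          /* unwind in reverse order */
err_buffers:
        demo_undo_buffers(port);
        return ret;
}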
diff --git a/drivers/media/pci/sta2x11/Kconfig b/drivers/media/pci/sta2x11/Kconfig
index 4dd98f9..27bb785 100644
--- a/drivers/media/pci/sta2x11/Kconfig
+++ b/drivers/media/pci/sta2x11/Kconfig
@@ -3,6 +3,7 @@
tristate "STA2X11 VIP Video For Linux"
depends on PCI && VIDEO_V4L2 && VIRT_TO_BUS && I2C
depends on STA2X11 || COMPILE_TEST
+ select GPIOLIB if MEDIA_SUBDRV_AUTOSELECT
select VIDEO_ADV7180 if MEDIA_SUBDRV_AUTOSELECT
select VIDEOBUF2_DMA_CONTIG
select MEDIA_CONTROLLER
diff --git a/drivers/media/platform/aspeed-video.c b/drivers/media/platform/aspeed-video.c
index f2c4dad..7bb6bab 100644
--- a/drivers/media/platform/aspeed-video.c
+++ b/drivers/media/platform/aspeed-video.c
@@ -514,8 +514,8 @@ static void aspeed_video_off(struct aspeed_video *video)
aspeed_video_write(video, VE_INTERRUPT_STATUS, 0xffffffff);
/* Turn off the relevant clocks */
- clk_disable(video->vclk);
clk_disable(video->eclk);
+ clk_disable(video->vclk);
clear_bit(VIDEO_CLOCKS_ON, &video->flags);
}
@@ -526,8 +526,8 @@ static void aspeed_video_on(struct aspeed_video *video)
return;
/* Turn on the relevant clocks */
- clk_enable(video->eclk);
clk_enable(video->vclk);
+ clk_enable(video->eclk);
set_bit(VIDEO_CLOCKS_ON, &video->flags);
}
@@ -1719,8 +1719,11 @@ static int aspeed_video_probe(struct platform_device *pdev)
return rc;
rc = aspeed_video_setup_video(video);
- if (rc)
+ if (rc) {
+ clk_unprepare(video->vclk);
+ clk_unprepare(video->eclk);
return rc;
+ }
return 0;
}
diff --git a/drivers/media/platform/qcom/venus/core.c b/drivers/media/platform/qcom/venus/core.c
index d5bfd6f..fd5993b 100644
--- a/drivers/media/platform/qcom/venus/core.c
+++ b/drivers/media/platform/qcom/venus/core.c
@@ -195,11 +195,11 @@ static int venus_probe(struct platform_device *pdev)
if (IS_ERR(core->base))
return PTR_ERR(core->base);
- core->video_path = of_icc_get(dev, "video-mem");
+ core->video_path = devm_of_icc_get(dev, "video-mem");
if (IS_ERR(core->video_path))
return PTR_ERR(core->video_path);
- core->cpucfg_path = of_icc_get(dev, "cpu-cfg");
+ core->cpucfg_path = devm_of_icc_get(dev, "cpu-cfg");
if (IS_ERR(core->cpucfg_path))
return PTR_ERR(core->cpucfg_path);
@@ -334,9 +334,6 @@ static int venus_remove(struct platform_device *pdev)
hfi_destroy(core);
- icc_put(core->video_path);
- icc_put(core->cpucfg_path);
-
v4l2_device_unregister(&core->v4l2_dev);
mutex_destroy(&core->pm_lock);
mutex_destroy(&core->lock);
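The venus change above switches to `devm_of_icc_get()`, which ties the interconnect path's lifetime to the device, so the explicit `icc_put()` calls in venus_remove() can simply go away. A minimal probe-side sketch, assuming a platform driver and the "video-mem" path name used above:

#include <linux/err.h>
#include <linux/interconnect.h>
#include <linux/platform_device.h>

static int demo_probe(struct platform_device *pdev)
{
        struct icc_path *video_path;

        /* released automatically when the device is unbound */
        video_path = devm_of_icc_get(&pdev->dev, "video-mem");
        if (IS_ERR(video_path))
                return PTR_ERR(video_path);

        /* ... rest of probe ... */
        return 0;
}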
diff --git a/drivers/media/platform/qcom/venus/hfi_parser.c b/drivers/media/platform/qcom/venus/hfi_parser.c
index 363ee2a..2dcf7eae 100644
--- a/drivers/media/platform/qcom/venus/hfi_parser.c
+++ b/drivers/media/platform/qcom/venus/hfi_parser.c
@@ -239,8 +239,10 @@ u32 hfi_parser(struct venus_core *core, struct venus_inst *inst, void *buf,
parser_init(inst, &codecs, &domain);
- core->codecs_count = 0;
- memset(core->caps, 0, sizeof(core->caps));
+ if (core->res->hfi_version > HFI_VERSION_1XX) {
+ core->codecs_count = 0;
+ memset(core->caps, 0, sizeof(core->caps));
+ }
while (words_count) {
data = word + 1;
diff --git a/drivers/media/platform/rcar_drif.c b/drivers/media/platform/rcar_drif.c
index f318cd4..083dba9 100644
--- a/drivers/media/platform/rcar_drif.c
+++ b/drivers/media/platform/rcar_drif.c
@@ -915,7 +915,6 @@ static int rcar_drif_g_fmt_sdr_cap(struct file *file, void *priv,
{
struct rcar_drif_sdr *sdr = video_drvdata(file);
- memset(f->fmt.sdr.reserved, 0, sizeof(f->fmt.sdr.reserved));
f->fmt.sdr.pixelformat = sdr->fmt->pixelformat;
f->fmt.sdr.buffersize = sdr->fmt->buffersize;
diff --git a/drivers/media/platform/sti/bdisp/bdisp-debug.c b/drivers/media/platform/sti/bdisp/bdisp-debug.c
index 2b27009..a27f638 100644
--- a/drivers/media/platform/sti/bdisp/bdisp-debug.c
+++ b/drivers/media/platform/sti/bdisp/bdisp-debug.c
@@ -480,7 +480,7 @@ static int regs_show(struct seq_file *s, void *data)
int ret;
unsigned int i;
- ret = pm_runtime_get_sync(bdisp->dev);
+ ret = pm_runtime_resume_and_get(bdisp->dev);
if (ret < 0) {
seq_puts(s, "Cannot wake up IP\n");
return 0;
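Here, and in the sun8i deinterlacer and arizona-irq hunks further down, `pm_runtime_get_sync()` is replaced by `pm_runtime_resume_and_get()`, which returns 0 or a negative errno and drops the usage count itself on failure, so callers no longer need the put_noidle dance shown in the imx219 sketch earlier. A minimal caller, assuming a generic device pointer:

#include <linux/pm_runtime.h>

static int demo_show_regs(struct device *dev)
{
        int ret;

        ret = pm_runtime_resume_and_get(dev);
        if (ret < 0)
                return ret;     /* usage count already dropped for us */

        /* ... read and report the hardware registers ... */

        pm_runtime_put(dev);
        return 0;
}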
diff --git a/drivers/media/platform/sunxi/sun6i-csi/sun6i_video.c b/drivers/media/platform/sunxi/sun6i-csi/sun6i_video.c
index b55de9a..3181d07 100644
--- a/drivers/media/platform/sunxi/sun6i-csi/sun6i_video.c
+++ b/drivers/media/platform/sunxi/sun6i-csi/sun6i_video.c
@@ -151,8 +151,10 @@ static int sun6i_video_start_streaming(struct vb2_queue *vq, unsigned int count)
}
subdev = sun6i_video_remote_subdev(video, NULL);
- if (!subdev)
+ if (!subdev) {
+ ret = -EINVAL;
goto stop_media_pipeline;
+ }
config.pixelformat = video->fmt.fmt.pix.pixelformat;
config.code = video->mbus_code;
diff --git a/drivers/media/platform/sunxi/sun8i-di/sun8i-di.c b/drivers/media/platform/sunxi/sun8i-di/sun8i-di.c
index ba5d078..2c15948 100644
--- a/drivers/media/platform/sunxi/sun8i-di/sun8i-di.c
+++ b/drivers/media/platform/sunxi/sun8i-di/sun8i-di.c
@@ -589,7 +589,7 @@ static int deinterlace_start_streaming(struct vb2_queue *vq, unsigned int count)
int ret;
if (V4L2_TYPE_IS_OUTPUT(vq->type)) {
- ret = pm_runtime_get_sync(dev);
+ ret = pm_runtime_resume_and_get(dev);
if (ret < 0) {
dev_err(dev, "Failed to enable module\n");
diff --git a/drivers/media/rc/ite-cir.c b/drivers/media/rc/ite-cir.c
index 0c62295..e5c4a69 100644
--- a/drivers/media/rc/ite-cir.c
+++ b/drivers/media/rc/ite-cir.c
@@ -276,8 +276,14 @@ static irqreturn_t ite_cir_isr(int irq, void *data)
/* read the interrupt flags */
iflags = dev->params.get_irq_causes(dev);
+ /* Check for RX overflow */
+ if (iflags & ITE_IRQ_RX_FIFO_OVERRUN) {
+ dev_warn(&dev->rdev->dev, "receive overflow\n");
+ ir_raw_event_reset(dev->rdev);
+ }
+
/* check for the receive interrupt */
- if (iflags & (ITE_IRQ_RX_FIFO | ITE_IRQ_RX_FIFO_OVERRUN)) {
+ if (iflags & ITE_IRQ_RX_FIFO) {
/* read the FIFO bytes */
rx_bytes =
dev->params.get_rx_bytes(dev, rx_buf,
diff --git a/drivers/media/test-drivers/vivid/vivid-core.c b/drivers/media/test-drivers/vivid/vivid-core.c
index aa8d350..1e356dc 100644
--- a/drivers/media/test-drivers/vivid/vivid-core.c
+++ b/drivers/media/test-drivers/vivid/vivid-core.c
@@ -205,13 +205,13 @@ static const u8 vivid_hdmi_edid[256] = {
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x7b,
- 0x02, 0x03, 0x3f, 0xf0, 0x51, 0x61, 0x60, 0x5f,
+ 0x02, 0x03, 0x3f, 0xf1, 0x51, 0x61, 0x60, 0x5f,
0x5e, 0x5d, 0x10, 0x1f, 0x04, 0x13, 0x22, 0x21,
0x20, 0x05, 0x14, 0x02, 0x11, 0x01, 0x23, 0x09,
0x07, 0x07, 0x83, 0x01, 0x00, 0x00, 0x6d, 0x03,
0x0c, 0x00, 0x10, 0x00, 0x00, 0x3c, 0x21, 0x00,
0x60, 0x01, 0x02, 0x03, 0x67, 0xd8, 0x5d, 0xc4,
- 0x01, 0x78, 0x00, 0x00, 0xe2, 0x00, 0xea, 0xe3,
+ 0x01, 0x78, 0x00, 0x00, 0xe2, 0x00, 0xca, 0xe3,
0x05, 0x00, 0x00, 0xe3, 0x06, 0x01, 0x00, 0x4d,
0xd0, 0x00, 0xa0, 0xf0, 0x70, 0x3e, 0x80, 0x30,
0x20, 0x35, 0x00, 0xc0, 0x1c, 0x32, 0x00, 0x00,
@@ -220,7 +220,7 @@ static const u8 vivid_hdmi_edid[256] = {
0x00, 0x00, 0x1a, 0x1a, 0x1d, 0x00, 0x80, 0x51,
0xd0, 0x1c, 0x20, 0x40, 0x80, 0x35, 0x00, 0xc0,
0x1c, 0x32, 0x00, 0x00, 0x1c, 0x00, 0x00, 0x00,
- 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x63,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x82,
};
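The vivid EDID tweak above flips capability bits in the CTA-861 extension block, and because every 128-byte EDID block must sum to 0 modulo 256, the block's trailing checksum byte changes along with them (here from 0x63 to 0x82). A small helper sketch for recomputing such a checksum (illustrative, not part of the driver):

#include <linux/types.h>

/* Return the value for block[127] so that all 128 bytes sum to 0 (mod 256). */
static u8 edid_block_checksum(const u8 block[128])
{
        u8 sum = 0;
        int i;

        for (i = 0; i < 127; i++)
                sum += block[i];
        return (u8)(0 - sum);
}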
static int vidioc_querycap(struct file *file, void *priv,
diff --git a/drivers/media/test-drivers/vivid/vivid-vid-out.c b/drivers/media/test-drivers/vivid/vivid-vid-out.c
index ee3446e..cd6c247 100644
--- a/drivers/media/test-drivers/vivid/vivid-vid-out.c
+++ b/drivers/media/test-drivers/vivid/vivid-vid-out.c
@@ -1025,7 +1025,7 @@ int vivid_vid_out_s_fbuf(struct file *file, void *fh,
return -EINVAL;
}
dev->fbuf_out_flags &= ~(chroma_flags | alpha_flags);
- dev->fbuf_out_flags = a->flags & (chroma_flags | alpha_flags);
+ dev->fbuf_out_flags |= a->flags & (chroma_flags | alpha_flags);
return 0;
}
diff --git a/drivers/media/tuners/m88rs6000t.c b/drivers/media/tuners/m88rs6000t.c
index b3505f4..8647c50 100644
--- a/drivers/media/tuners/m88rs6000t.c
+++ b/drivers/media/tuners/m88rs6000t.c
@@ -525,7 +525,7 @@ static int m88rs6000t_get_rf_strength(struct dvb_frontend *fe, u16 *strength)
PGA2_cri = PGA2_GC >> 2;
PGA2_crf = PGA2_GC & 0x03;
- for (i = 0; i <= RF_GC; i++)
+ for (i = 0; i <= RF_GC && i < ARRAY_SIZE(RFGS); i++)
RFG += RFGS[i];
if (RF_GC == 0)
@@ -537,12 +537,12 @@ static int m88rs6000t_get_rf_strength(struct dvb_frontend *fe, u16 *strength)
if (RF_GC == 3)
RFG += 100;
- for (i = 0; i <= IF_GC; i++)
+ for (i = 0; i <= IF_GC && i < ARRAY_SIZE(IFGS); i++)
IFG += IFGS[i];
TIAG = TIA_GC * TIA_GS;
- for (i = 0; i <= BB_GC; i++)
+ for (i = 0; i <= BB_GC && i < ARRAY_SIZE(BBGS); i++)
BBG += BBGS[i];
PGA2G = PGA2_cri * PGA2_cri_GS + PGA2_crf * PGA2_crf_GS;
diff --git a/drivers/media/usb/dvb-usb/dvb-usb-init.c b/drivers/media/usb/dvb-usb/dvb-usb-init.c
index c1a7634..28e1fd6 100644
--- a/drivers/media/usb/dvb-usb/dvb-usb-init.c
+++ b/drivers/media/usb/dvb-usb/dvb-usb-init.c
@@ -79,11 +79,17 @@ static int dvb_usb_adapter_init(struct dvb_usb_device *d, short *adapter_nrs)
}
}
- if ((ret = dvb_usb_adapter_stream_init(adap)) ||
- (ret = dvb_usb_adapter_dvb_init(adap, adapter_nrs)) ||
- (ret = dvb_usb_adapter_frontend_init(adap))) {
+ ret = dvb_usb_adapter_stream_init(adap);
+ if (ret)
return ret;
- }
+
+ ret = dvb_usb_adapter_dvb_init(adap, adapter_nrs);
+ if (ret)
+ goto dvb_init_err;
+
+ ret = dvb_usb_adapter_frontend_init(adap);
+ if (ret)
+ goto frontend_init_err;
/* use exclusive FE lock if there is multiple shared FEs */
if (adap->fe_adap[1].fe)
@@ -103,6 +109,12 @@ static int dvb_usb_adapter_init(struct dvb_usb_device *d, short *adapter_nrs)
}
return 0;
+
+frontend_init_err:
+ dvb_usb_adapter_dvb_exit(adap);
+dvb_init_err:
+ dvb_usb_adapter_stream_exit(adap);
+ return ret;
}
static int dvb_usb_adapter_exit(struct dvb_usb_device *d)
@@ -158,22 +170,20 @@ static int dvb_usb_init(struct dvb_usb_device *d, short *adapter_nums)
if (d->props.priv_init != NULL) {
ret = d->props.priv_init(d);
- if (ret != 0) {
- kfree(d->priv);
- d->priv = NULL;
- return ret;
- }
+ if (ret != 0)
+ goto err_priv_init;
}
}
/* check the capabilities and set appropriate variables */
dvb_usb_device_power_ctrl(d, 1);
- if ((ret = dvb_usb_i2c_init(d)) ||
- (ret = dvb_usb_adapter_init(d, adapter_nums))) {
- dvb_usb_exit(d);
- return ret;
- }
+ ret = dvb_usb_i2c_init(d);
+ if (ret)
+ goto err_i2c_init;
+ ret = dvb_usb_adapter_init(d, adapter_nums);
+ if (ret)
+ goto err_adapter_init;
if ((ret = dvb_usb_remote_init(d)))
err("could not initialize remote control.");
@@ -181,6 +191,17 @@ static int dvb_usb_init(struct dvb_usb_device *d, short *adapter_nums)
dvb_usb_device_power_ctrl(d, 0);
return 0;
+
+err_adapter_init:
+ dvb_usb_adapter_exit(d);
+err_i2c_init:
+ dvb_usb_i2c_exit(d);
+ if (d->priv && d->props.priv_destroy)
+ d->props.priv_destroy(d);
+err_priv_init:
+ kfree(d->priv);
+ d->priv = NULL;
+ return ret;
}
/* determine the name and the state of the just found USB device */
@@ -255,41 +276,50 @@ int dvb_usb_device_init(struct usb_interface *intf,
if (du != NULL)
*du = NULL;
- if ((desc = dvb_usb_find_device(udev, props, &cold)) == NULL) {
+ d = kzalloc(sizeof(*d), GFP_KERNEL);
+ if (!d) {
+ err("no memory for 'struct dvb_usb_device'");
+ return -ENOMEM;
+ }
+
+ memcpy(&d->props, props, sizeof(struct dvb_usb_device_properties));
+
+ desc = dvb_usb_find_device(udev, &d->props, &cold);
+ if (!desc) {
deb_err("something went very wrong, device was not found in current device list - let's see what comes next.\n");
- return -ENODEV;
+ ret = -ENODEV;
+ goto error;
}
if (cold) {
info("found a '%s' in cold state, will try to load a firmware", desc->name);
ret = dvb_usb_download_firmware(udev, props);
if (!props->no_reconnect || ret != 0)
- return ret;
+ goto error;
}
info("found a '%s' in warm state.", desc->name);
- d = kzalloc(sizeof(struct dvb_usb_device), GFP_KERNEL);
- if (d == NULL) {
- err("no memory for 'struct dvb_usb_device'");
- return -ENOMEM;
- }
-
d->udev = udev;
- memcpy(&d->props, props, sizeof(struct dvb_usb_device_properties));
d->desc = desc;
d->owner = owner;
usb_set_intfdata(intf, d);
- if (du != NULL)
+ ret = dvb_usb_init(d, adapter_nums);
+ if (ret) {
+ info("%s error while loading driver (%d)", desc->name, ret);
+ goto error;
+ }
+
+ if (du)
*du = d;
- ret = dvb_usb_init(d, adapter_nums);
+ info("%s successfully initialized and connected.", desc->name);
+ return 0;
- if (ret == 0)
- info("%s successfully initialized and connected.", desc->name);
- else
- info("%s error while loading driver (%d)", desc->name, ret);
+ error:
+ usb_set_intfdata(intf, NULL);
+ kfree(d);
return ret;
}
EXPORT_SYMBOL(dvb_usb_device_init);
diff --git a/drivers/media/usb/dvb-usb/dvb-usb.h b/drivers/media/usb/dvb-usb/dvb-usb.h
index 741be0e..2b8ad2b 100644
--- a/drivers/media/usb/dvb-usb/dvb-usb.h
+++ b/drivers/media/usb/dvb-usb/dvb-usb.h
@@ -487,7 +487,7 @@ extern int __must_check
dvb_usb_generic_write(struct dvb_usb_device *, u8 *, u16);
/* commonly used remote control parsing */
-extern int dvb_usb_nec_rc_key_to_event(struct dvb_usb_device *, u8[], u32 *, int *);
+extern int dvb_usb_nec_rc_key_to_event(struct dvb_usb_device *, u8[5], u32 *, int *);
/* commonly used firmware download types and function */
struct hexline {
diff --git a/drivers/media/usb/em28xx/em28xx-dvb.c b/drivers/media/usb/em28xx/em28xx-dvb.c
index fb9cbfa..3cd9e95 100644
--- a/drivers/media/usb/em28xx/em28xx-dvb.c
+++ b/drivers/media/usb/em28xx/em28xx-dvb.c
@@ -1984,6 +1984,7 @@ static int em28xx_dvb_init(struct em28xx *dev)
return result;
out_free:
+ em28xx_uninit_usb_xfer(dev, EM28XX_DIGITAL_MODE);
kfree(dvb);
dev->dvb = NULL;
goto ret;
diff --git a/drivers/media/usb/gspca/cpia1.c b/drivers/media/usb/gspca/cpia1.c
index a4f7431..d93d384 100644
--- a/drivers/media/usb/gspca/cpia1.c
+++ b/drivers/media/usb/gspca/cpia1.c
@@ -1424,7 +1424,6 @@ static int sd_config(struct gspca_dev *gspca_dev,
{
struct sd *sd = (struct sd *) gspca_dev;
struct cam *cam;
- int ret;
sd->mainsFreq = FREQ_DEF == V4L2_CID_POWER_LINE_FREQUENCY_60HZ;
reset_camera_params(gspca_dev);
@@ -1436,10 +1435,7 @@ static int sd_config(struct gspca_dev *gspca_dev,
cam->cam_mode = mode;
cam->nmodes = ARRAY_SIZE(mode);
- ret = goto_low_power(gspca_dev);
- if (ret)
- gspca_err(gspca_dev, "Cannot go to low power mode: %d\n",
- ret);
+ goto_low_power(gspca_dev);
/* Check the firmware version. */
sd->params.version.firmwareVersion = 0;
get_version_information(gspca_dev);
diff --git a/drivers/media/usb/gspca/gspca.c b/drivers/media/usb/gspca/gspca.c
index 158c8e2..47d8f28 100644
--- a/drivers/media/usb/gspca/gspca.c
+++ b/drivers/media/usb/gspca/gspca.c
@@ -1576,6 +1576,8 @@ int gspca_dev_probe2(struct usb_interface *intf,
#endif
v4l2_ctrl_handler_free(gspca_dev->vdev.ctrl_handler);
v4l2_device_unregister(&gspca_dev->v4l2_dev);
+ if (sd_desc->probe_error)
+ sd_desc->probe_error(gspca_dev);
kfree(gspca_dev->usb_buf);
kfree(gspca_dev);
return ret;
diff --git a/drivers/media/usb/gspca/gspca.h b/drivers/media/usb/gspca/gspca.h
index b0ced2e..a6554d5 100644
--- a/drivers/media/usb/gspca/gspca.h
+++ b/drivers/media/usb/gspca/gspca.h
@@ -105,6 +105,7 @@ struct sd_desc {
cam_cf_op config; /* called on probe */
cam_op init; /* called on probe and resume */
cam_op init_controls; /* called on probe */
+ cam_v_op probe_error; /* called if probe failed, do cleanup here */
cam_op start; /* called on stream on after URBs creation */
cam_pkt_op pkt_scan;
/* optional operations */
diff --git a/drivers/media/usb/gspca/m5602/m5602_mt9m111.c b/drivers/media/usb/gspca/m5602/m5602_mt9m111.c
index bfa3b38..bf1af6e 100644
--- a/drivers/media/usb/gspca/m5602/m5602_mt9m111.c
+++ b/drivers/media/usb/gspca/m5602/m5602_mt9m111.c
@@ -195,7 +195,7 @@ static const struct v4l2_ctrl_config mt9m111_greenbal_cfg = {
int mt9m111_probe(struct sd *sd)
{
u8 data[2] = {0x00, 0x00};
- int i, rc = 0;
+ int i, err;
struct gspca_dev *gspca_dev = (struct gspca_dev *)sd;
if (force_sensor) {
@@ -213,18 +213,18 @@ int mt9m111_probe(struct sd *sd)
/* Do the preinit */
for (i = 0; i < ARRAY_SIZE(preinit_mt9m111); i++) {
if (preinit_mt9m111[i][0] == BRIDGE) {
- rc |= m5602_write_bridge(sd,
- preinit_mt9m111[i][1],
- preinit_mt9m111[i][2]);
+ err = m5602_write_bridge(sd,
+ preinit_mt9m111[i][1],
+ preinit_mt9m111[i][2]);
} else {
data[0] = preinit_mt9m111[i][2];
data[1] = preinit_mt9m111[i][3];
- rc |= m5602_write_sensor(sd,
- preinit_mt9m111[i][1], data, 2);
+ err = m5602_write_sensor(sd,
+ preinit_mt9m111[i][1], data, 2);
}
+ if (err < 0)
+ return err;
}
- if (rc < 0)
- return rc;
if (m5602_read_sensor(sd, MT9M111_SC_CHIPVER, data, 2))
return -ENODEV;
diff --git a/drivers/media/usb/gspca/m5602/m5602_po1030.c b/drivers/media/usb/gspca/m5602/m5602_po1030.c
index d680b77..8fd99ce 100644
--- a/drivers/media/usb/gspca/m5602/m5602_po1030.c
+++ b/drivers/media/usb/gspca/m5602/m5602_po1030.c
@@ -154,8 +154,8 @@ static const struct v4l2_ctrl_config po1030_greenbal_cfg = {
int po1030_probe(struct sd *sd)
{
- int rc = 0;
u8 dev_id_h = 0, i;
+ int err;
struct gspca_dev *gspca_dev = (struct gspca_dev *)sd;
if (force_sensor) {
@@ -174,14 +174,14 @@ int po1030_probe(struct sd *sd)
for (i = 0; i < ARRAY_SIZE(preinit_po1030); i++) {
u8 data = preinit_po1030[i][2];
if (preinit_po1030[i][0] == SENSOR)
- rc |= m5602_write_sensor(sd,
- preinit_po1030[i][1], &data, 1);
+ err = m5602_write_sensor(sd, preinit_po1030[i][1],
+ &data, 1);
else
- rc |= m5602_write_bridge(sd, preinit_po1030[i][1],
- data);
+ err = m5602_write_bridge(sd, preinit_po1030[i][1],
+ data);
+ if (err < 0)
+ return err;
}
- if (rc < 0)
- return rc;
if (m5602_read_sensor(sd, PO1030_DEVID_H, &dev_id_h, 1))
return -ENODEV;
diff --git a/drivers/media/usb/gspca/sq905.c b/drivers/media/usb/gspca/sq905.c
index 97799cf..9491110 100644
--- a/drivers/media/usb/gspca/sq905.c
+++ b/drivers/media/usb/gspca/sq905.c
@@ -158,7 +158,7 @@ static int
sq905_read_data(struct gspca_dev *gspca_dev, u8 *data, int size, int need_lock)
{
int ret;
- int act_len;
+ int act_len = 0;
gspca_dev->usb_buf[0] = '\0';
if (need_lock)
diff --git a/drivers/media/usb/gspca/stv06xx/stv06xx.c b/drivers/media/usb/gspca/stv06xx/stv06xx.c
index 95673fc..d9bc2aa 100644
--- a/drivers/media/usb/gspca/stv06xx/stv06xx.c
+++ b/drivers/media/usb/gspca/stv06xx/stv06xx.c
@@ -529,12 +529,21 @@ static int sd_int_pkt_scan(struct gspca_dev *gspca_dev,
static int stv06xx_config(struct gspca_dev *gspca_dev,
const struct usb_device_id *id);
+static void stv06xx_probe_error(struct gspca_dev *gspca_dev)
+{
+ struct sd *sd = (struct sd *)gspca_dev;
+
+ kfree(sd->sensor_priv);
+ sd->sensor_priv = NULL;
+}
+
/* sub-driver description */
static const struct sd_desc sd_desc = {
.name = MODULE_NAME,
.config = stv06xx_config,
.init = stv06xx_init,
.init_controls = stv06xx_init_controls,
+ .probe_error = stv06xx_probe_error,
.start = stv06xx_start,
.stopN = stv06xx_stopN,
.pkt_scan = stv06xx_pkt_scan,
diff --git a/drivers/media/v4l2-core/v4l2-ctrls.c b/drivers/media/v4l2-core/v4l2-ctrls.c
index cfe422d..41f8410 100644
--- a/drivers/media/v4l2-core/v4l2-ctrls.c
+++ b/drivers/media/v4l2-core/v4l2-ctrls.c
@@ -2201,7 +2201,16 @@ static void new_to_req(struct v4l2_ctrl_ref *ref)
if (!ref)
return;
ptr_to_ptr(ref->ctrl, ref->ctrl->p_new, ref->p_req);
- ref->req = ref;
+ ref->valid_p_req = true;
+}
+
+/* Copy the current value to the request value */
+static void cur_to_req(struct v4l2_ctrl_ref *ref)
+{
+ if (!ref)
+ return;
+ ptr_to_ptr(ref->ctrl, ref->ctrl->p_cur, ref->p_req);
+ ref->valid_p_req = true;
}
/* Copy the request value to the new value */
@@ -2209,8 +2218,8 @@ static void req_to_new(struct v4l2_ctrl_ref *ref)
{
if (!ref)
return;
- if (ref->req)
- ptr_to_ptr(ref->ctrl, ref->req->p_req, ref->ctrl->p_new);
+ if (ref->valid_p_req)
+ ptr_to_ptr(ref->ctrl, ref->p_req, ref->ctrl->p_new);
else
ptr_to_ptr(ref->ctrl, ref->ctrl->p_cur, ref->ctrl->p_new);
}
@@ -2347,7 +2356,15 @@ void v4l2_ctrl_handler_free(struct v4l2_ctrl_handler *hdl)
if (hdl == NULL || hdl->buckets == NULL)
return;
- if (!hdl->req_obj.req && !list_empty(&hdl->requests)) {
+ /*
+ * If the main handler is freed and it is used by handler objects in
+ * outstanding requests, then unbind and put those objects before
+ * freeing the main handler.
+ *
+ * The main handler can be identified by having a NULL ops pointer in
+ * the request object.
+ */
+ if (!hdl->req_obj.ops && !list_empty(&hdl->requests)) {
struct v4l2_ctrl_handler *req, *next_req;
list_for_each_entry_safe(req, next_req, &hdl->requests, requests) {
@@ -3380,39 +3397,8 @@ static void v4l2_ctrl_request_queue(struct media_request_object *obj)
struct v4l2_ctrl_handler *hdl =
container_of(obj, struct v4l2_ctrl_handler, req_obj);
struct v4l2_ctrl_handler *main_hdl = obj->priv;
- struct v4l2_ctrl_handler *prev_hdl = NULL;
- struct v4l2_ctrl_ref *ref_ctrl, *ref_ctrl_prev = NULL;
mutex_lock(main_hdl->lock);
- if (list_empty(&main_hdl->requests_queued))
- goto queue;
-
- prev_hdl = list_last_entry(&main_hdl->requests_queued,
- struct v4l2_ctrl_handler, requests_queued);
- /*
- * Note: prev_hdl and hdl must contain the same list of control
- * references, so if any differences are detected then that is a
- * driver bug and the WARN_ON is triggered.
- */
- mutex_lock(prev_hdl->lock);
- ref_ctrl_prev = list_first_entry(&prev_hdl->ctrl_refs,
- struct v4l2_ctrl_ref, node);
- list_for_each_entry(ref_ctrl, &hdl->ctrl_refs, node) {
- if (ref_ctrl->req)
- continue;
- while (ref_ctrl_prev->ctrl->id < ref_ctrl->ctrl->id) {
- /* Should never happen, but just in case... */
- if (list_is_last(&ref_ctrl_prev->node,
- &prev_hdl->ctrl_refs))
- break;
- ref_ctrl_prev = list_next_entry(ref_ctrl_prev, node);
- }
- if (WARN_ON(ref_ctrl_prev->ctrl->id != ref_ctrl->ctrl->id))
- break;
- ref_ctrl->req = ref_ctrl_prev->req;
- }
- mutex_unlock(prev_hdl->lock);
-queue:
list_add_tail(&hdl->requests_queued, &main_hdl->requests_queued);
hdl->request_is_queued = true;
mutex_unlock(main_hdl->lock);
@@ -3424,8 +3410,8 @@ static void v4l2_ctrl_request_unbind(struct media_request_object *obj)
container_of(obj, struct v4l2_ctrl_handler, req_obj);
struct v4l2_ctrl_handler *main_hdl = obj->priv;
- list_del_init(&hdl->requests);
mutex_lock(main_hdl->lock);
+ list_del_init(&hdl->requests);
if (hdl->request_is_queued) {
list_del_init(&hdl->requests_queued);
hdl->request_is_queued = false;
@@ -3469,7 +3455,7 @@ v4l2_ctrl_request_hdl_ctrl_find(struct v4l2_ctrl_handler *hdl, u32 id)
{
struct v4l2_ctrl_ref *ref = find_ref_lock(hdl, id);
- return (ref && ref->req == ref) ? ref->ctrl : NULL;
+ return (ref && ref->valid_p_req) ? ref->ctrl : NULL;
}
EXPORT_SYMBOL_GPL(v4l2_ctrl_request_hdl_ctrl_find);
@@ -3484,8 +3470,11 @@ static int v4l2_ctrl_request_bind(struct media_request *req,
if (!ret) {
ret = media_request_object_bind(req, &req_ops,
from, false, &hdl->req_obj);
- if (!ret)
+ if (!ret) {
+ mutex_lock(from->lock);
list_add_tail(&hdl->requests, &from->requests);
+ mutex_unlock(from->lock);
+ }
}
return ret;
}
@@ -3655,7 +3644,13 @@ static int class_check(struct v4l2_ctrl_handler *hdl, u32 which)
return find_ref_lock(hdl, which | 1) ? 0 : -EINVAL;
}
-/* Get extended controls. Allocates the helpers array if needed. */
+/*
+ * Get extended controls. Allocates the helpers array if needed.
+ *
+ * Note that v4l2_g_ext_ctrls_common() with 'which' set to
+ * V4L2_CTRL_WHICH_REQUEST_VAL is only called if the request was
+ * completed, and in that case valid_p_req is true for all controls.
+ */
static int v4l2_g_ext_ctrls_common(struct v4l2_ctrl_handler *hdl,
struct v4l2_ext_controls *cs,
struct video_device *vdev)
@@ -3664,9 +3659,10 @@ static int v4l2_g_ext_ctrls_common(struct v4l2_ctrl_handler *hdl,
struct v4l2_ctrl_helper *helpers = helper;
int ret;
int i, j;
- bool def_value;
+ bool is_default, is_request;
- def_value = (cs->which == V4L2_CTRL_WHICH_DEF_VAL);
+ is_default = (cs->which == V4L2_CTRL_WHICH_DEF_VAL);
+ is_request = (cs->which == V4L2_CTRL_WHICH_REQUEST_VAL);
cs->error_idx = cs->count;
cs->which = V4L2_CTRL_ID2WHICH(cs->which);
@@ -3692,11 +3688,9 @@ static int v4l2_g_ext_ctrls_common(struct v4l2_ctrl_handler *hdl,
ret = -EACCES;
for (i = 0; !ret && i < cs->count; i++) {
- int (*ctrl_to_user)(struct v4l2_ext_control *c,
- struct v4l2_ctrl *ctrl);
struct v4l2_ctrl *master;
-
- ctrl_to_user = def_value ? def_to_user : cur_to_user;
+ bool is_volatile = false;
+ u32 idx = i;
if (helpers[i].mref == NULL)
continue;
@@ -3706,31 +3700,48 @@ static int v4l2_g_ext_ctrls_common(struct v4l2_ctrl_handler *hdl,
v4l2_ctrl_lock(master);
- /* g_volatile_ctrl will update the new control values */
- if (!def_value &&
+ /*
+ * g_volatile_ctrl will update the new control values.
+ * This makes no sense for V4L2_CTRL_WHICH_DEF_VAL and
+ * V4L2_CTRL_WHICH_REQUEST_VAL. In the case of requests
+ * it is v4l2_ctrl_request_complete() that copies the
+ * volatile controls at the time of request completion
+ * to the request, so you don't want to do that again.
+ */
+ if (!is_default && !is_request &&
((master->flags & V4L2_CTRL_FLAG_VOLATILE) ||
(master->has_volatiles && !is_cur_manual(master)))) {
for (j = 0; j < master->ncontrols; j++)
cur_to_new(master->cluster[j]);
ret = call_op(master, g_volatile_ctrl);
- ctrl_to_user = new_to_user;
+ is_volatile = true;
}
- /* If OK, then copy the current (for non-volatile controls)
- or the new (for volatile controls) control values to the
- caller */
- if (!ret) {
- u32 idx = i;
- do {
- if (helpers[idx].ref->req)
- ret = req_to_user(cs->controls + idx,
- helpers[idx].ref->req);
- else
- ret = ctrl_to_user(cs->controls + idx,
- helpers[idx].ref->ctrl);
- idx = helpers[idx].next;
- } while (!ret && idx);
+ if (ret) {
+ v4l2_ctrl_unlock(master);
+ break;
}
+
+ /*
+ * Copy the default value (if is_default is true), the
+ * request value (if is_request is true and p_req is valid),
+ * the new volatile value (if is_volatile is true) or the
+ * current value.
+ */
+ do {
+ struct v4l2_ctrl_ref *ref = helpers[idx].ref;
+
+ if (is_default)
+ ret = def_to_user(cs->controls + idx, ref->ctrl);
+ else if (is_request && ref->valid_p_req)
+ ret = req_to_user(cs->controls + idx, ref);
+ else if (is_volatile)
+ ret = new_to_user(cs->controls + idx, ref->ctrl);
+ else
+ ret = cur_to_user(cs->controls + idx, ref->ctrl);
+ idx = helpers[idx].next;
+ } while (!ret && idx);
+
v4l2_ctrl_unlock(master);
}
@@ -4368,8 +4379,6 @@ void v4l2_ctrl_request_complete(struct media_request *req,
unsigned int i;
if (ctrl->flags & V4L2_CTRL_FLAG_VOLATILE) {
- ref->req = ref;
-
v4l2_ctrl_lock(master);
/* g_volatile_ctrl will update the current control values */
for (i = 0; i < master->ncontrols; i++)
@@ -4379,21 +4388,12 @@ void v4l2_ctrl_request_complete(struct media_request *req,
v4l2_ctrl_unlock(master);
continue;
}
- if (ref->req == ref)
+ if (ref->valid_p_req)
continue;
+ /* Copy the current control value into the request */
v4l2_ctrl_lock(ctrl);
- if (ref->req) {
- ptr_to_ptr(ctrl, ref->req->p_req, ref->p_req);
- } else {
- ptr_to_ptr(ctrl, ctrl->p_cur, ref->p_req);
- /*
- * Set ref->req to ensure that when userspace wants to
- * obtain the controls of this request it will take
- * this value and not the current value of the control.
- */
- ref->req = ref;
- }
+ cur_to_req(ref);
v4l2_ctrl_unlock(ctrl);
}
@@ -4458,7 +4458,7 @@ int v4l2_ctrl_request_setup(struct media_request *req,
struct v4l2_ctrl_ref *r =
find_ref(hdl, master->cluster[i]->id);
- if (r->req && r == r->req) {
+ if (r->valid_p_req) {
have_new_data = true;
break;
}
diff --git a/drivers/memory/omap-gpmc.c b/drivers/memory/omap-gpmc.c
index cfa730c..f80c2ea 100644
--- a/drivers/memory/omap-gpmc.c
+++ b/drivers/memory/omap-gpmc.c
@@ -1009,8 +1009,8 @@ EXPORT_SYMBOL(gpmc_cs_request);
void gpmc_cs_free(int cs)
{
- struct gpmc_cs_data *gpmc = &gpmc_cs[cs];
- struct resource *res = &gpmc->mem;
+ struct gpmc_cs_data *gpmc;
+ struct resource *res;
spin_lock(&gpmc_mem_lock);
if (cs >= gpmc_cs_num || cs < 0 || !gpmc_cs_reserved(cs)) {
@@ -1018,6 +1018,9 @@ void gpmc_cs_free(int cs)
spin_unlock(&gpmc_mem_lock);
return;
}
+ gpmc = &gpmc_cs[cs];
+ res = &gpmc->mem;
+
gpmc_cs_disable_mem(cs);
if (res->flags)
release_resource(res);
diff --git a/drivers/memory/pl353-smc.c b/drivers/memory/pl353-smc.c
index 73bd302..b42804b 100644
--- a/drivers/memory/pl353-smc.c
+++ b/drivers/memory/pl353-smc.c
@@ -63,7 +63,7 @@
/* ECC memory config register specific constants */
#define PL353_SMC_ECC_MEMCFG_MODE_MASK 0xC
#define PL353_SMC_ECC_MEMCFG_MODE_SHIFT 2
-#define PL353_SMC_ECC_MEMCFG_PGSIZE_MASK 0xC
+#define PL353_SMC_ECC_MEMCFG_PGSIZE_MASK 0x3
#define PL353_SMC_DC_UPT_NAND_REGS ((4 << 23) | /* CS: NAND chip */ \
(2 << 21)) /* UpdateRegs operation */
diff --git a/drivers/memory/renesas-rpc-if.c b/drivers/memory/renesas-rpc-if.c
index da0fdb4..1fe6c35 100644
--- a/drivers/memory/renesas-rpc-if.c
+++ b/drivers/memory/renesas-rpc-if.c
@@ -193,10 +193,10 @@ int rpcif_sw_init(struct rpcif *rpc, struct device *dev)
}
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dirmap");
- rpc->size = resource_size(res);
rpc->dirmap = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(rpc->dirmap))
rpc->dirmap = NULL;
+ rpc->size = resource_size(res);
rpc->rstc = devm_reset_control_get_exclusive(&pdev->dev, NULL);
diff --git a/drivers/memory/samsung/exynos5422-dmc.c b/drivers/memory/samsung/exynos5422-dmc.c
index c5ee412..3d230f0 100644
--- a/drivers/memory/samsung/exynos5422-dmc.c
+++ b/drivers/memory/samsung/exynos5422-dmc.c
@@ -1298,7 +1298,9 @@ static int exynos5_dmc_init_clks(struct exynos5_dmc *dmc)
dmc->curr_volt = target_volt;
- clk_set_parent(dmc->mout_mx_mspll_ccore, dmc->mout_spll);
+ ret = clk_set_parent(dmc->mout_mx_mspll_ccore, dmc->mout_spll);
+ if (ret)
+ return ret;
clk_prepare_enable(dmc->fout_bpll);
clk_prepare_enable(dmc->mout_bpll);
diff --git a/drivers/mfd/arizona-irq.c b/drivers/mfd/arizona-irq.c
index 077d9ab1..d919ae9 100644
--- a/drivers/mfd/arizona-irq.c
+++ b/drivers/mfd/arizona-irq.c
@@ -100,7 +100,7 @@ static irqreturn_t arizona_irq_thread(int irq, void *data)
unsigned int val;
int ret;
- ret = pm_runtime_get_sync(arizona->dev);
+ ret = pm_runtime_resume_and_get(arizona->dev);
if (ret < 0) {
dev_err(arizona->dev, "Failed to resume device: %d\n", ret);
return IRQ_NONE;
diff --git a/drivers/mfd/da9063-i2c.c b/drivers/mfd/da9063-i2c.c
index b8217ad..3419814 100644
--- a/drivers/mfd/da9063-i2c.c
+++ b/drivers/mfd/da9063-i2c.c
@@ -442,6 +442,16 @@ static int da9063_i2c_probe(struct i2c_client *i2c,
return ret;
}
+ /* If SMBus is not available and only I2C is possible, enter I2C mode */
+ if (i2c_check_functionality(i2c->adapter, I2C_FUNC_I2C)) {
+ ret = regmap_clear_bits(da9063->regmap, DA9063_REG_CONFIG_J,
+ DA9063_TWOWIRE_TO);
+ if (ret < 0) {
+ dev_err(da9063->dev, "Failed to set Two-Wire Bus Mode.\n");
+ return -EIO;
+ }
+ }
+
return da9063_device_init(da9063, i2c->irq);
}
diff --git a/drivers/mfd/stm32-timers.c b/drivers/mfd/stm32-timers.c
index add6033..44ed2fc 100644
--- a/drivers/mfd/stm32-timers.c
+++ b/drivers/mfd/stm32-timers.c
@@ -158,13 +158,18 @@ static const struct regmap_config stm32_timers_regmap_cfg = {
static void stm32_timers_get_arr_size(struct stm32_timers *ddata)
{
+ u32 arr;
+
+ /* Backup ARR to restore it after getting the maximum value */
+ regmap_read(ddata->regmap, TIM_ARR, &arr);
+
/*
* Only the available bits will be written so when readback
* we get the maximum value of auto reload register
*/
regmap_write(ddata->regmap, TIM_ARR, ~0L);
regmap_read(ddata->regmap, TIM_ARR, &ddata->max_arr);
- regmap_write(ddata->regmap, TIM_ARR, 0x0);
+ regmap_write(ddata->regmap, TIM_ARR, arr);
}
static int stm32_timers_dma_probe(struct device *dev,
diff --git a/drivers/misc/eeprom/at24.c b/drivers/misc/eeprom/at24.c
index 926408b..7a6f01a 100644
--- a/drivers/misc/eeprom/at24.c
+++ b/drivers/misc/eeprom/at24.c
@@ -763,7 +763,8 @@ static int at24_probe(struct i2c_client *client)
at24->nvmem = devm_nvmem_register(dev, &nvmem_config);
if (IS_ERR(at24->nvmem)) {
pm_runtime_disable(dev);
- regulator_disable(at24->vcc_reg);
+ if (!pm_runtime_status_suspended(dev))
+ regulator_disable(at24->vcc_reg);
return PTR_ERR(at24->nvmem);
}
@@ -774,7 +775,8 @@ static int at24_probe(struct i2c_client *client)
err = at24_read(at24, 0, &test_byte, 1);
if (err) {
pm_runtime_disable(dev);
- regulator_disable(at24->vcc_reg);
+ if (!pm_runtime_status_suspended(dev))
+ regulator_disable(at24->vcc_reg);
return -ENODEV;
}
diff --git a/drivers/misc/ics932s401.c b/drivers/misc/ics932s401.c
index 2bdf560..0f9ea75 100644
--- a/drivers/misc/ics932s401.c
+++ b/drivers/misc/ics932s401.c
@@ -134,7 +134,7 @@ static struct ics932s401_data *ics932s401_update_device(struct device *dev)
for (i = 0; i < NUM_MIRRORED_REGS; i++) {
temp = i2c_smbus_read_word_data(client, regs_to_copy[i]);
if (temp < 0)
- data->regs[regs_to_copy[i]] = 0;
+ temp = 0;
data->regs[regs_to_copy[i]] = temp >> 8;
}
diff --git a/drivers/misc/kgdbts.c b/drivers/misc/kgdbts.c
index 945701b..4948915 100644
--- a/drivers/misc/kgdbts.c
+++ b/drivers/misc/kgdbts.c
@@ -95,19 +95,20 @@
#include <asm/sections.h>
-#define v1printk(a...) do { \
- if (verbose) \
- printk(KERN_INFO a); \
- } while (0)
-#define v2printk(a...) do { \
- if (verbose > 1) \
- printk(KERN_INFO a); \
- touch_nmi_watchdog(); \
- } while (0)
-#define eprintk(a...) do { \
- printk(KERN_ERR a); \
- WARN_ON(1); \
- } while (0)
+#define v1printk(a...) do { \
+ if (verbose) \
+ printk(KERN_INFO a); \
+} while (0)
+#define v2printk(a...) do { \
+ if (verbose > 1) { \
+ printk(KERN_INFO a); \
+ } \
+ touch_nmi_watchdog(); \
+} while (0)
+#define eprintk(a...) do { \
+ printk(KERN_ERR a); \
+ WARN_ON(1); \
+} while (0)
#define MAX_CONFIG_LEN 40
static struct kgdb_io kgdbts_io_ops;
diff --git a/drivers/misc/lis3lv02d/lis3lv02d.c b/drivers/misc/lis3lv02d/lis3lv02d.c
index dd65ced..9d14bf4 100644
--- a/drivers/misc/lis3lv02d/lis3lv02d.c
+++ b/drivers/misc/lis3lv02d/lis3lv02d.c
@@ -208,7 +208,7 @@ static int lis3_3dc_rates[16] = {0, 1, 10, 25, 50, 100, 200, 400, 1600, 5000};
static int lis3_3dlh_rates[4] = {50, 100, 400, 1000};
/* ODR is Output Data Rate */
-static int lis3lv02d_get_odr(struct lis3lv02d *lis3)
+static int lis3lv02d_get_odr_index(struct lis3lv02d *lis3)
{
u8 ctrl;
int shift;
@@ -216,15 +216,23 @@ static int lis3lv02d_get_odr(struct lis3lv02d *lis3)
lis3->read(lis3, CTRL_REG1, &ctrl);
ctrl &= lis3->odr_mask;
shift = ffs(lis3->odr_mask) - 1;
- return lis3->odrs[(ctrl >> shift)];
+ return (ctrl >> shift);
}
static int lis3lv02d_get_pwron_wait(struct lis3lv02d *lis3)
{
- int div = lis3lv02d_get_odr(lis3);
+ int odr_idx = lis3lv02d_get_odr_index(lis3);
+ int div = lis3->odrs[odr_idx];
- if (WARN_ONCE(div == 0, "device returned spurious data"))
+ if (div == 0) {
+ if (odr_idx == 0) {
+ /* Power-down mode, not sampling no need to sleep */
+ return 0;
+ }
+
+ dev_err(&lis3->pdev->dev, "Error unknown odrs-index: %d\n", odr_idx);
return -ENXIO;
+ }
/* LIS3 power on delay is quite long */
msleep(lis3->pwron_delay / div);
@@ -816,9 +824,12 @@ static ssize_t lis3lv02d_rate_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct lis3lv02d *lis3 = dev_get_drvdata(dev);
+ int odr_idx;
lis3lv02d_sysfs_poweron(lis3);
- return sprintf(buf, "%d\n", lis3lv02d_get_odr(lis3));
+
+ odr_idx = lis3lv02d_get_odr_index(lis3);
+ return sprintf(buf, "%d\n", lis3->odrs[odr_idx]);
}
static ssize_t lis3lv02d_rate_set(struct device *dev,
diff --git a/drivers/misc/lis3lv02d/lis3lv02d.h b/drivers/misc/lis3lv02d/lis3lv02d.h
index c394c0b..7ac788f 100644
--- a/drivers/misc/lis3lv02d/lis3lv02d.h
+++ b/drivers/misc/lis3lv02d/lis3lv02d.h
@@ -271,6 +271,7 @@ struct lis3lv02d {
int regs_size;
u8 *reg_cache;
bool regs_stored;
+ bool init_required;
u8 odr_mask; /* ODR bit mask */
u8 whoami; /* indicates measurement precision */
s16 (*read_data) (struct lis3lv02d *lis3, int reg);
diff --git a/drivers/misc/mei/hw-me-regs.h b/drivers/misc/mei/hw-me-regs.h
index 14be76d..cb34925 100644
--- a/drivers/misc/mei/hw-me-regs.h
+++ b/drivers/misc/mei/hw-me-regs.h
@@ -105,6 +105,7 @@
#define MEI_DEV_ID_ADP_S 0x7AE8 /* Alder Lake Point S */
#define MEI_DEV_ID_ADP_LP 0x7A60 /* Alder Lake Point LP */
+#define MEI_DEV_ID_ADP_P 0x51E0 /* Alder Lake Point P */
/*
* MEI HW Section
diff --git a/drivers/misc/mei/interrupt.c b/drivers/misc/mei/interrupt.c
index 2161c12..fee6030 100644
--- a/drivers/misc/mei/interrupt.c
+++ b/drivers/misc/mei/interrupt.c
@@ -277,6 +277,9 @@ static int mei_cl_irq_read(struct mei_cl *cl, struct mei_cl_cb *cb,
return ret;
}
+ pm_runtime_mark_last_busy(dev->dev);
+ pm_request_autosuspend(dev->dev);
+
list_move_tail(&cb->list, &cl->rd_pending);
return 0;
diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
index a7e1796..c3393b3 100644
--- a/drivers/misc/mei/pci-me.c
+++ b/drivers/misc/mei/pci-me.c
@@ -111,6 +111,7 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
{MEI_PCI_DEVICE(MEI_DEV_ID_ADP_S, MEI_ME_PCH15_CFG)},
{MEI_PCI_DEVICE(MEI_DEV_ID_ADP_LP, MEI_ME_PCH15_CFG)},
+ {MEI_PCI_DEVICE(MEI_DEV_ID_ADP_P, MEI_ME_PCH15_CFG)},
/* required last entry */
{0, }
diff --git a/drivers/misc/vmw_vmci/vmci_doorbell.c b/drivers/misc/vmw_vmci/vmci_doorbell.c
index 345addd..fa8a7fc 100644
--- a/drivers/misc/vmw_vmci/vmci_doorbell.c
+++ b/drivers/misc/vmw_vmci/vmci_doorbell.c
@@ -326,7 +326,7 @@ int vmci_dbell_host_context_notify(u32 src_cid, struct vmci_handle handle)
bool vmci_dbell_register_notification_bitmap(u64 bitmap_ppn)
{
int result;
- struct vmci_notify_bm_set_msg bitmap_set_msg;
+ struct vmci_notify_bm_set_msg bitmap_set_msg = { };
bitmap_set_msg.hdr.dst = vmci_make_handle(VMCI_HYPERVISOR_CONTEXT_ID,
VMCI_SET_NOTIFY_BITMAP);
diff --git a/drivers/misc/vmw_vmci/vmci_guest.c b/drivers/misc/vmw_vmci/vmci_guest.c
index cc8eeb3..1018dc7 100644
--- a/drivers/misc/vmw_vmci/vmci_guest.c
+++ b/drivers/misc/vmw_vmci/vmci_guest.c
@@ -168,7 +168,7 @@ static int vmci_check_host_caps(struct pci_dev *pdev)
VMCI_UTIL_NUM_RESOURCES * sizeof(u32);
struct vmci_datagram *check_msg;
- check_msg = kmalloc(msg_size, GFP_KERNEL);
+ check_msg = kzalloc(msg_size, GFP_KERNEL);
if (!check_msg) {
dev_err(&pdev->dev, "%s: Insufficient memory\n", __func__);
return -ENOMEM;
diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 42e27a2..3246598 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -572,6 +572,18 @@ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
}
/*
+ * Make sure to update CACHE_CTRL in case it was changed. The cache
+ * will get turned back on if the card is re-initialized, e.g.
+ * suspend/resume or hw reset in recovery.
+ */
+ if ((MMC_EXTRACT_INDEX_FROM_ARG(cmd.arg) == EXT_CSD_CACHE_CTRL) &&
+ (cmd.opcode == MMC_SWITCH)) {
+ u8 value = MMC_EXTRACT_VALUE_FROM_ARG(cmd.arg) & 1;
+
+ card->ext_csd.cache_ctrl = value;
+ }
+
+ /*
* According to the SD specs, some commands require a delay after
* issuing the command.
*/
@@ -2221,6 +2233,10 @@ enum mmc_issued mmc_blk_mq_issue_rq(struct mmc_queue *mq, struct request *req)
case MMC_ISSUE_ASYNC:
switch (req_op(req)) {
case REQ_OP_FLUSH:
+ if (!mmc_cache_enabled(host)) {
+ blk_mq_end_request(req, BLK_STS_OK);
+ return MMC_REQ_FINISHED;
+ }
ret = mmc_blk_cqe_issue_flush(mq, req);
break;
case REQ_OP_READ:
diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index d42037f..eaf4810 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -1204,7 +1204,7 @@ int mmc_set_uhs_voltage(struct mmc_host *host, u32 ocr)
err = mmc_wait_for_cmd(host, &cmd, 0);
if (err)
- return err;
+ goto power_cycle;
if (!mmc_host_is_spi(host) && (cmd.resp[0] & R1_ERROR))
return -EIO;
@@ -2355,80 +2355,6 @@ void mmc_stop_host(struct mmc_host *host)
mmc_release_host(host);
}
-#ifdef CONFIG_PM_SLEEP
-/* Do the card removal on suspend if card is assumed removeable
- * Do that in pm notifier while userspace isn't yet frozen, so we will be able
- to sync the card.
-*/
-static int mmc_pm_notify(struct notifier_block *notify_block,
- unsigned long mode, void *unused)
-{
- struct mmc_host *host = container_of(
- notify_block, struct mmc_host, pm_notify);
- unsigned long flags;
- int err = 0;
-
- switch (mode) {
- case PM_HIBERNATION_PREPARE:
- case PM_SUSPEND_PREPARE:
- case PM_RESTORE_PREPARE:
- spin_lock_irqsave(&host->lock, flags);
- host->rescan_disable = 1;
- spin_unlock_irqrestore(&host->lock, flags);
- cancel_delayed_work_sync(&host->detect);
-
- if (!host->bus_ops)
- break;
-
- /* Validate prerequisites for suspend */
- if (host->bus_ops->pre_suspend)
- err = host->bus_ops->pre_suspend(host);
- if (!err)
- break;
-
- if (!mmc_card_is_removable(host)) {
- dev_warn(mmc_dev(host),
- "pre_suspend failed for non-removable host: "
- "%d\n", err);
- /* Avoid removing non-removable hosts */
- break;
- }
-
- /* Calling bus_ops->remove() with a claimed host can deadlock */
- host->bus_ops->remove(host);
- mmc_claim_host(host);
- mmc_detach_bus(host);
- mmc_power_off(host);
- mmc_release_host(host);
- host->pm_flags = 0;
- break;
-
- case PM_POST_SUSPEND:
- case PM_POST_HIBERNATION:
- case PM_POST_RESTORE:
-
- spin_lock_irqsave(&host->lock, flags);
- host->rescan_disable = 0;
- spin_unlock_irqrestore(&host->lock, flags);
- _mmc_detect_change(host, 0, false);
-
- }
-
- return 0;
-}
-
-void mmc_register_pm_notifier(struct mmc_host *host)
-{
- host->pm_notify.notifier_call = mmc_pm_notify;
- register_pm_notifier(&host->pm_notify);
-}
-
-void mmc_unregister_pm_notifier(struct mmc_host *host)
-{
- unregister_pm_notifier(&host->pm_notify);
-}
-#endif
-
static int __init mmc_init(void)
{
int ret;
diff --git a/drivers/mmc/core/core.h b/drivers/mmc/core/core.h
index 575ac02..db3c9c6 100644
--- a/drivers/mmc/core/core.h
+++ b/drivers/mmc/core/core.h
@@ -29,6 +29,7 @@ struct mmc_bus_ops {
int (*shutdown)(struct mmc_host *);
int (*hw_reset)(struct mmc_host *);
int (*sw_reset)(struct mmc_host *);
+ bool (*cache_enabled)(struct mmc_host *);
};
void mmc_attach_bus(struct mmc_host *host, const struct mmc_bus_ops *ops);
@@ -93,14 +94,6 @@ int mmc_execute_tuning(struct mmc_card *card);
int mmc_hs200_to_hs400(struct mmc_card *card);
int mmc_hs400_to_hs200(struct mmc_card *card);
-#ifdef CONFIG_PM_SLEEP
-void mmc_register_pm_notifier(struct mmc_host *host);
-void mmc_unregister_pm_notifier(struct mmc_host *host);
-#else
-static inline void mmc_register_pm_notifier(struct mmc_host *host) { }
-static inline void mmc_unregister_pm_notifier(struct mmc_host *host) { }
-#endif
-
void mmc_wait_for_req_done(struct mmc_host *host, struct mmc_request *mrq);
bool mmc_is_req_done(struct mmc_host *host, struct mmc_request *mrq);
@@ -171,4 +164,12 @@ static inline void mmc_post_req(struct mmc_host *host, struct mmc_request *mrq,
host->ops->post_req(host, mrq, err);
}
+static inline bool mmc_cache_enabled(struct mmc_host *host)
+{
+ if (host->bus_ops->cache_enabled)
+ return host->bus_ops->cache_enabled(host);
+
+ return false;
+}
+
#endif
diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c
index 96b2ca1..fa59e6f 100644
--- a/drivers/mmc/core/host.c
+++ b/drivers/mmc/core/host.c
@@ -34,6 +34,42 @@
static DEFINE_IDA(mmc_host_ida);
+#ifdef CONFIG_PM_SLEEP
+static int mmc_host_class_prepare(struct device *dev)
+{
+ struct mmc_host *host = cls_dev_to_mmc_host(dev);
+
+ /*
+ * It's safe to access the bus_ops pointer, as both userspace and the
+ * workqueue for detecting cards are frozen at this point.
+ */
+ if (!host->bus_ops)
+ return 0;
+
+ /* Validate conditions for system suspend. */
+ if (host->bus_ops->pre_suspend)
+ return host->bus_ops->pre_suspend(host);
+
+ return 0;
+}
+
+static void mmc_host_class_complete(struct device *dev)
+{
+ struct mmc_host *host = cls_dev_to_mmc_host(dev);
+
+ _mmc_detect_change(host, 0, false);
+}
+
+static const struct dev_pm_ops mmc_host_class_dev_pm_ops = {
+ .prepare = mmc_host_class_prepare,
+ .complete = mmc_host_class_complete,
+};
+
+#define MMC_HOST_CLASS_DEV_PM_OPS (&mmc_host_class_dev_pm_ops)
+#else
+#define MMC_HOST_CLASS_DEV_PM_OPS NULL
+#endif
+
static void mmc_host_classdev_release(struct device *dev)
{
struct mmc_host *host = cls_dev_to_mmc_host(dev);
@@ -45,6 +81,7 @@ static void mmc_host_classdev_release(struct device *dev)
static struct class mmc_host_class = {
.name = "mmc_host",
.dev_release = mmc_host_classdev_release,
+ .pm = MMC_HOST_CLASS_DEV_PM_OPS,
};
int mmc_register_host_class(void)
@@ -493,8 +530,6 @@ int mmc_add_host(struct mmc_host *host)
#endif
mmc_start_host(host);
- mmc_register_pm_notifier(host);
-
return 0;
}
@@ -510,7 +545,6 @@ EXPORT_SYMBOL(mmc_add_host);
*/
void mmc_remove_host(struct mmc_host *host)
{
- mmc_unregister_pm_notifier(host);
mmc_stop_host(host);
#ifdef CONFIG_DEBUG_FS
diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
index 9ce34e8..7494d59 100644
--- a/drivers/mmc/core/mmc.c
+++ b/drivers/mmc/core/mmc.c
@@ -2033,6 +2033,12 @@ static void mmc_detect(struct mmc_host *host)
}
}
+static bool _mmc_cache_enabled(struct mmc_host *host)
+{
+ return host->card->ext_csd.cache_size > 0 &&
+ host->card->ext_csd.cache_ctrl & 1;
+}
+
static int _mmc_suspend(struct mmc_host *host, bool is_suspend)
{
int err = 0;
@@ -2212,6 +2218,7 @@ static const struct mmc_bus_ops mmc_ops = {
.alive = mmc_alive,
.shutdown = mmc_shutdown,
.hw_reset = _mmc_hw_reset,
+ .cache_enabled = _mmc_cache_enabled,
};
/*
diff --git a/drivers/mmc/core/mmc_ops.c b/drivers/mmc/core/mmc_ops.c
index baa6314..ebad70e 100644
--- a/drivers/mmc/core/mmc_ops.c
+++ b/drivers/mmc/core/mmc_ops.c
@@ -988,9 +988,7 @@ int mmc_flush_cache(struct mmc_card *card)
{
int err = 0;
- if (mmc_card_mmc(card) &&
- (card->ext_csd.cache_size > 0) &&
- (card->ext_csd.cache_ctrl & 1)) {
+ if (mmc_cache_enabled(card->host)) {
err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL,
EXT_CSD_FLUSH_CACHE, 1,
MMC_CACHE_FLUSH_TIMEOUT_MS);
diff --git a/drivers/mmc/core/sd.c b/drivers/mmc/core/sd.c
index 6f054c4..636d4e3 100644
--- a/drivers/mmc/core/sd.c
+++ b/drivers/mmc/core/sd.c
@@ -135,6 +135,9 @@ static int mmc_decode_csd(struct mmc_card *card)
csd->erase_size = UNSTUFF_BITS(resp, 39, 7) + 1;
csd->erase_size <<= csd->write_blkbits - 9;
}
+
+ if (UNSTUFF_BITS(resp, 13, 1))
+ mmc_card_set_readonly(card);
break;
case 1:
/*
@@ -169,6 +172,9 @@ static int mmc_decode_csd(struct mmc_card *card)
csd->write_blkbits = 9;
csd->write_partial = 0;
csd->erase_size = 1;
+
+ if (UNSTUFF_BITS(resp, 13, 1))
+ mmc_card_set_readonly(card);
break;
default:
pr_err("%s: unrecognised CSD structure version %d\n",
diff --git a/drivers/mmc/core/sdio.c b/drivers/mmc/core/sdio.c
index 694a212..1b0853a 100644
--- a/drivers/mmc/core/sdio.c
+++ b/drivers/mmc/core/sdio.c
@@ -985,21 +985,37 @@ static void mmc_sdio_detect(struct mmc_host *host)
*/
static int mmc_sdio_pre_suspend(struct mmc_host *host)
{
- int i, err = 0;
+ int i;
for (i = 0; i < host->card->sdio_funcs; i++) {
struct sdio_func *func = host->card->sdio_func[i];
if (func && sdio_func_present(func) && func->dev.driver) {
const struct dev_pm_ops *pmops = func->dev.driver->pm;
- if (!pmops || !pmops->suspend || !pmops->resume) {
+ if (!pmops || !pmops->suspend || !pmops->resume)
/* force removal of entire card in that case */
- err = -ENOSYS;
- break;
- }
+ goto remove;
}
}
- return err;
+ return 0;
+
+remove:
+ if (!mmc_card_is_removable(host)) {
+ dev_warn(mmc_dev(host),
+ "missing suspend/resume ops for non-removable SDIO card\n");
+ /* Don't remove a non-removable card - we can't re-detect it. */
+ return 0;
+ }
+
+ /* Remove the SDIO card and let it be re-detected later on. */
+ mmc_sdio_remove(host);
+ mmc_claim_host(host);
+ mmc_detach_bus(host);
+ mmc_power_off(host);
+ mmc_release_host(host);
+ host->pm_flags = 0;
+
+ return 0;
}
/*
diff --git a/drivers/mmc/host/sdhci-brcmstb.c b/drivers/mmc/host/sdhci-brcmstb.c
index f9780c6..f24623a 100644
--- a/drivers/mmc/host/sdhci-brcmstb.c
+++ b/drivers/mmc/host/sdhci-brcmstb.c
@@ -199,7 +199,6 @@ static int sdhci_brcmstb_add_host(struct sdhci_host *host,
if (dma64) {
dev_dbg(mmc_dev(host->mmc), "Using 64 bit DMA\n");
cq_host->caps |= CQHCI_TASK_DESC_SZ_128;
- cq_host->quirks |= CQHCI_QUIRK_SHORT_TXFR_DESC_SZ;
}
ret = cqhci_init(cq_host, host->mmc, dma64);
diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c
index 5d9b310..d28809e 100644
--- a/drivers/mmc/host/sdhci-esdhc-imx.c
+++ b/drivers/mmc/host/sdhci-esdhc-imx.c
@@ -1504,7 +1504,7 @@ sdhci_esdhc_imx_probe_dt(struct platform_device *pdev,
mmc_of_parse_voltage(np, &host->ocr_mask);
- if (esdhc_is_usdhc(imx_data)) {
+ if (esdhc_is_usdhc(imx_data) && !IS_ERR(imx_data->pinctrl)) {
imx_data->pins_100mhz = pinctrl_lookup_state(imx_data->pinctrl,
ESDHC_PINCTRL_STATE_100MHZ);
imx_data->pins_200mhz = pinctrl_lookup_state(imx_data->pinctrl,
diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c
index 9552708..bf04a08 100644
--- a/drivers/mmc/host/sdhci-pci-core.c
+++ b/drivers/mmc/host/sdhci-pci-core.c
@@ -516,6 +516,7 @@ struct intel_host {
int drv_strength;
bool d3_retune;
bool rpm_retune_ok;
+ bool needs_pwr_off;
u32 glk_rx_ctrl1;
u32 glk_tun_val;
u32 active_ltr;
@@ -643,9 +644,25 @@ static int bxt_get_cd(struct mmc_host *mmc)
static void sdhci_intel_set_power(struct sdhci_host *host, unsigned char mode,
unsigned short vdd)
{
+ struct sdhci_pci_slot *slot = sdhci_priv(host);
+ struct intel_host *intel_host = sdhci_pci_priv(slot);
int cntr;
u8 reg;
+ /*
+ * Bus power may control card power, but a full reset still may not
+ * reset the power, whereas a direct write to SDHCI_POWER_CONTROL can.
+ * That might be needed to initialize correctly, if the card was left
+ * powered on previously.
+ */
+ if (intel_host->needs_pwr_off) {
+ intel_host->needs_pwr_off = false;
+ if (mode != MMC_POWER_OFF) {
+ sdhci_writeb(host, 0, SDHCI_POWER_CONTROL);
+ usleep_range(10000, 12500);
+ }
+ }
+
sdhci_set_power(host, mode, vdd);
if (mode == MMC_POWER_OFF)
@@ -1135,6 +1152,14 @@ static int byt_sdio_probe_slot(struct sdhci_pci_slot *slot)
return 0;
}
+static void byt_needs_pwr_off(struct sdhci_pci_slot *slot)
+{
+ struct intel_host *intel_host = sdhci_pci_priv(slot);
+ u8 reg = sdhci_readb(slot->host, SDHCI_POWER_CONTROL);
+
+ intel_host->needs_pwr_off = reg & SDHCI_POWER_ON;
+}
+
static int byt_sd_probe_slot(struct sdhci_pci_slot *slot)
{
byt_probe_slot(slot);
@@ -1152,6 +1177,8 @@ static int byt_sd_probe_slot(struct sdhci_pci_slot *slot)
slot->chip->pdev->subsystem_device == PCI_SUBDEVICE_ID_NI_78E3)
slot->host->mmc->caps2 |= MMC_CAP2_AVOID_3_3V;
+ byt_needs_pwr_off(slot);
+
return 0;
}
@@ -1903,6 +1930,8 @@ static const struct pci_device_id pci_ids[] = {
SDHCI_PCI_DEVICE(INTEL, CMLH_SD, intel_byt_sd),
SDHCI_PCI_DEVICE(INTEL, JSL_EMMC, intel_glk_emmc),
SDHCI_PCI_DEVICE(INTEL, JSL_SD, intel_byt_sd),
+ SDHCI_PCI_DEVICE(INTEL, LKF_EMMC, intel_glk_emmc),
+ SDHCI_PCI_DEVICE(INTEL, LKF_SD, intel_byt_sd),
SDHCI_PCI_DEVICE(O2, 8120, o2),
SDHCI_PCI_DEVICE(O2, 8220, o2),
SDHCI_PCI_DEVICE(O2, 8221, o2),
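The Intel changes above remember at probe time whether firmware left bus power on (needs_pwr_off) and, if so, force an explicit power-off plus settle delay before the first power-up, since a controller reset alone may not drop card power. A minimal userspace sketch of that pattern follows; the register stand-ins and helper names are invented for illustration.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical stand-ins for the host's power-control register and delay. */
    static unsigned char power_reg = 0x01;   /* pretend firmware left power on */
    #define POWER_ON 0x01

    static void write_power(unsigned char v) { power_reg = v; printf("POWER_CONTROL <- 0x%02x\n", v); }
    static void delay_ms(int ms)             { printf("delay %d ms\n", ms); }

    struct host { bool needs_pwr_off; };

    /* At probe time, remember whether the card was already powered. */
    static void probe(struct host *h) { h->needs_pwr_off = power_reg & POWER_ON; }

    /* On the first power-up, force a real off + settle time before powering on,
     * because a controller reset alone may not remove card power. */
    static void set_power(struct host *h, bool on)
    {
        if (h->needs_pwr_off) {
            h->needs_pwr_off = false;
            if (on) {
                write_power(0);
                delay_ms(10);
            }
        }
        write_power(on ? POWER_ON : 0);
    }

    int main(void)
    {
        struct host h;
        probe(&h);
        set_power(&h, true);   /* off, wait, then on */
        return 0;
    }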
diff --git a/drivers/mmc/host/sdhci-pci-gli.c b/drivers/mmc/host/sdhci-pci-gli.c
index 9887485..23b89b4 100644
--- a/drivers/mmc/host/sdhci-pci-gli.c
+++ b/drivers/mmc/host/sdhci-pci-gli.c
@@ -555,8 +555,13 @@ static void sdhci_gli_voltage_switch(struct sdhci_host *host)
*
* Wait 5ms after set 1.8V signal enable in Host Control 2 register
* to ensure 1.8V signal enable bit is set by GL9750/GL9755.
+ *
+ * ...however, the controller in the NUC10i3FNK4 (a 9755) requires
+ * slightly longer than 5ms before the control register reports that
+ * 1.8V is ready, and far longer still before the card will actually
+ * work reliably.
*/
- usleep_range(5000, 5500);
+ usleep_range(100000, 110000);
}
static void sdhci_gl9750_reset(struct sdhci_host *host, u8 mask)
diff --git a/drivers/mmc/host/sdhci-pci.h b/drivers/mmc/host/sdhci-pci.h
index d0ed232..8f90c41 100644
--- a/drivers/mmc/host/sdhci-pci.h
+++ b/drivers/mmc/host/sdhci-pci.h
@@ -57,6 +57,8 @@
#define PCI_DEVICE_ID_INTEL_CMLH_SD 0x06f5
#define PCI_DEVICE_ID_INTEL_JSL_EMMC 0x4dc4
#define PCI_DEVICE_ID_INTEL_JSL_SD 0x4df8
+#define PCI_DEVICE_ID_INTEL_LKF_EMMC 0x98c4
+#define PCI_DEVICE_ID_INTEL_LKF_SD 0x98f8
#define PCI_DEVICE_ID_SYSKONNECT_8000 0x8000
#define PCI_DEVICE_ID_VIA_95D0 0x95d0
diff --git a/drivers/mmc/host/sdhci-tegra.c b/drivers/mmc/host/sdhci-tegra.c
index 41d193f..8ea9132 100644
--- a/drivers/mmc/host/sdhci-tegra.c
+++ b/drivers/mmc/host/sdhci-tegra.c
@@ -119,6 +119,10 @@
/* SDMMC CQE Base Address for Tegra Host Ver 4.1 and Higher */
#define SDHCI_TEGRA_CQE_BASE_ADDR 0xF000
+#define SDHCI_TEGRA_CQE_TRNS_MODE (SDHCI_TRNS_MULTI | \
+ SDHCI_TRNS_BLK_CNT_EN | \
+ SDHCI_TRNS_DMA)
+
struct sdhci_tegra_soc_data {
const struct sdhci_pltfm_data *pdata;
u64 dma_mask;
@@ -1156,6 +1160,7 @@ static void tegra_sdhci_voltage_switch(struct sdhci_host *host)
static void tegra_cqhci_writel(struct cqhci_host *cq_host, u32 val, int reg)
{
struct mmc_host *mmc = cq_host->mmc;
+ struct sdhci_host *host = mmc_priv(mmc);
u8 ctrl;
ktime_t timeout;
bool timed_out;
@@ -1170,6 +1175,7 @@ static void tegra_cqhci_writel(struct cqhci_host *cq_host, u32 val, int reg)
*/
if (reg == CQHCI_CTL && !(val & CQHCI_HALT) &&
cqhci_readl(cq_host, CQHCI_CTL) & CQHCI_HALT) {
+ sdhci_writew(host, SDHCI_TEGRA_CQE_TRNS_MODE, SDHCI_TRANSFER_MODE);
sdhci_cqe_enable(mmc);
writel(val, cq_host->mmio + reg);
timeout = ktime_add_us(ktime_get(), 50);
@@ -1205,6 +1211,7 @@ static void sdhci_tegra_update_dcmd_desc(struct mmc_host *mmc,
static void sdhci_tegra_cqe_enable(struct mmc_host *mmc)
{
struct cqhci_host *cq_host = mmc->cqe_private;
+ struct sdhci_host *host = mmc_priv(mmc);
u32 val;
/*
@@ -1218,6 +1225,7 @@ static void sdhci_tegra_cqe_enable(struct mmc_host *mmc)
if (val & CQHCI_ENABLE)
cqhci_writel(cq_host, (val & ~CQHCI_ENABLE),
CQHCI_CFG);
+ sdhci_writew(host, SDHCI_TEGRA_CQE_TRNS_MODE, SDHCI_TRANSFER_MODE);
sdhci_cqe_enable(mmc);
if (val & CQHCI_ENABLE)
cqhci_writel(cq_host, val, CQHCI_CFG);
@@ -1281,12 +1289,36 @@ static void tegra_sdhci_set_timeout(struct sdhci_host *host,
__sdhci_set_timeout(host, cmd);
}
+static void sdhci_tegra_cqe_pre_enable(struct mmc_host *mmc)
+{
+ struct cqhci_host *cq_host = mmc->cqe_private;
+ u32 reg;
+
+ reg = cqhci_readl(cq_host, CQHCI_CFG);
+ reg |= CQHCI_ENABLE;
+ cqhci_writel(cq_host, reg, CQHCI_CFG);
+}
+
+static void sdhci_tegra_cqe_post_disable(struct mmc_host *mmc)
+{
+ struct cqhci_host *cq_host = mmc->cqe_private;
+ struct sdhci_host *host = mmc_priv(mmc);
+ u32 reg;
+
+ reg = cqhci_readl(cq_host, CQHCI_CFG);
+ reg &= ~CQHCI_ENABLE;
+ cqhci_writel(cq_host, reg, CQHCI_CFG);
+ sdhci_writew(host, 0x0, SDHCI_TRANSFER_MODE);
+}
+
static const struct cqhci_host_ops sdhci_tegra_cqhci_ops = {
.write_l = tegra_cqhci_writel,
.enable = sdhci_tegra_cqe_enable,
.disable = sdhci_cqe_disable,
.dumpregs = sdhci_tegra_dumpregs,
.update_dcmd_desc = sdhci_tegra_update_dcmd_desc,
+ .pre_enable = sdhci_tegra_cqe_pre_enable,
+ .post_disable = sdhci_tegra_cqe_post_disable,
};
static int tegra_sdhci_set_dma_mask(struct sdhci_host *host)
diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
index 6edf9ff..58c977d 100644
--- a/drivers/mmc/host/sdhci.c
+++ b/drivers/mmc/host/sdhci.c
@@ -2998,6 +2998,37 @@ static bool sdhci_request_done(struct sdhci_host *host)
}
/*
+ * The controller needs a reset of internal state machines
+ * upon error conditions.
+ */
+ if (sdhci_needs_reset(host, mrq)) {
+ /*
+ * Do not finish until command and data lines are available for
+ * reset. Note there can only be one other mrq, so it cannot
+ * also be in mrqs_done, otherwise host->cmd and host->data_cmd
+ * would both be null.
+ */
+ if (host->cmd || host->data_cmd) {
+ spin_unlock_irqrestore(&host->lock, flags);
+ return true;
+ }
+
+ /* Some controllers need this kick or reset won't work here */
+ if (host->quirks & SDHCI_QUIRK_CLOCK_BEFORE_RESET)
+ /* This is to force an update */
+ host->ops->set_clock(host, host->clock);
+
+ /*
+ * Spec says we should do both at the same time, but Ricoh
+ * controllers do not like that.
+ */
+ sdhci_do_reset(host, SDHCI_RESET_CMD);
+ sdhci_do_reset(host, SDHCI_RESET_DATA);
+
+ host->pending_reset = false;
+ }
+
+ /*
* Always unmap the data buffers if they were mapped by
* sdhci_prepare_data() whenever we finish with a request.
* This avoids leaking DMA mappings on error.
@@ -3060,35 +3091,6 @@ static bool sdhci_request_done(struct sdhci_host *host)
}
}
- /*
- * The controller needs a reset of internal state machines
- * upon error conditions.
- */
- if (sdhci_needs_reset(host, mrq)) {
- /*
- * Do not finish until command and data lines are available for
- * reset. Note there can only be one other mrq, so it cannot
- * also be in mrqs_done, otherwise host->cmd and host->data_cmd
- * would both be null.
- */
- if (host->cmd || host->data_cmd) {
- spin_unlock_irqrestore(&host->lock, flags);
- return true;
- }
-
- /* Some controllers need this kick or reset won't work here */
- if (host->quirks & SDHCI_QUIRK_CLOCK_BEFORE_RESET)
- /* This is to force an update */
- host->ops->set_clock(host, host->clock);
-
- /* Spec says we should do both at the same time, but Ricoh
- controllers do not like that. */
- sdhci_do_reset(host, SDHCI_RESET_CMD);
- sdhci_do_reset(host, SDHCI_RESET_DATA);
-
- host->pending_reset = false;
- }
-
host->mrqs_done[i] = NULL;
spin_unlock_irqrestore(&host->lock, flags);
diff --git a/drivers/mmc/host/uniphier-sd.c b/drivers/mmc/host/uniphier-sd.c
index 3092466..196e94b 100644
--- a/drivers/mmc/host/uniphier-sd.c
+++ b/drivers/mmc/host/uniphier-sd.c
@@ -636,7 +636,7 @@ static int uniphier_sd_probe(struct platform_device *pdev)
ret = tmio_mmc_host_probe(host);
if (ret)
- goto free_host;
+ goto disable_clk;
ret = devm_request_irq(dev, irq, tmio_mmc_irq, IRQF_SHARED,
dev_name(dev), host);
@@ -647,6 +647,8 @@ static int uniphier_sd_probe(struct platform_device *pdev)
remove_host:
tmio_mmc_host_remove(host);
+disable_clk:
+ uniphier_sd_clk_disable(host);
free_host:
tmio_mmc_host_free(host);
@@ -659,6 +661,7 @@ static int uniphier_sd_remove(struct platform_device *pdev)
tmio_mmc_host_remove(host);
uniphier_sd_clk_disable(host);
+ tmio_mmc_host_free(host);
return 0;
}
diff --git a/drivers/mtd/maps/physmap-bt1-rom.c b/drivers/mtd/maps/physmap-bt1-rom.c
index 27cfe1c..d68ae75 100644
--- a/drivers/mtd/maps/physmap-bt1-rom.c
+++ b/drivers/mtd/maps/physmap-bt1-rom.c
@@ -79,7 +79,7 @@ static void __xipram bt1_rom_map_copy_from(struct map_info *map,
if (shift) {
chunk = min_t(ssize_t, 4 - shift, len);
data = readl_relaxed(src - shift);
- memcpy(to, &data + shift, chunk);
+ memcpy(to, (char *)&data + shift, chunk);
src += chunk;
to += chunk;
len -= chunk;
diff --git a/drivers/mtd/maps/physmap-core.c b/drivers/mtd/maps/physmap-core.c
index 001ed5d..4f63b84 100644
--- a/drivers/mtd/maps/physmap-core.c
+++ b/drivers/mtd/maps/physmap-core.c
@@ -69,8 +69,10 @@ static int physmap_flash_remove(struct platform_device *dev)
int i, err = 0;
info = platform_get_drvdata(dev);
- if (!info)
+ if (!info) {
+ err = -EINVAL;
goto out;
+ }
if (info->cmtd) {
err = mtd_device_unregister(info->cmtd);
diff --git a/drivers/mtd/mtdchar.c b/drivers/mtd/mtdchar.c
index b40f46a..69fb5da 100644
--- a/drivers/mtd/mtdchar.c
+++ b/drivers/mtd/mtdchar.c
@@ -651,16 +651,12 @@ static int mtdchar_ioctl(struct file *file, u_int cmd, u_long arg)
case MEMGETINFO:
case MEMREADOOB:
case MEMREADOOB64:
- case MEMLOCK:
- case MEMUNLOCK:
case MEMISLOCKED:
case MEMGETOOBSEL:
case MEMGETBADBLOCK:
- case MEMSETBADBLOCK:
case OTPSELECT:
case OTPGETREGIONCOUNT:
case OTPGETREGIONINFO:
- case OTPLOCK:
case ECCGETLAYOUT:
case ECCGETSTATS:
case MTDFILEMODE:
@@ -671,9 +667,13 @@ static int mtdchar_ioctl(struct file *file, u_int cmd, u_long arg)
/* "dangerous" commands */
case MEMERASE:
case MEMERASE64:
+ case MEMLOCK:
+ case MEMUNLOCK:
+ case MEMSETBADBLOCK:
case MEMWRITEOOB:
case MEMWRITEOOB64:
case MEMWRITE:
+ case OTPLOCK:
if (!(file->f_mode & FMODE_WRITE))
return -EPERM;
break;
diff --git a/drivers/mtd/mtdcore.c b/drivers/mtd/mtdcore.c
index b07cbb0..1c8c4072 100644
--- a/drivers/mtd/mtdcore.c
+++ b/drivers/mtd/mtdcore.c
@@ -820,6 +820,9 @@ int mtd_device_parse_register(struct mtd_info *mtd, const char * const *types,
/* Prefer parsed partitions over driver-provided fallback */
ret = parse_mtd_partitions(mtd, types, parser_data);
+ if (ret == -EPROBE_DEFER)
+ goto out;
+
if (ret > 0)
ret = 0;
else if (nr_parts)
diff --git a/drivers/mtd/mtdpart.c b/drivers/mtd/mtdpart.c
index c3575b6..95d47422 100644
--- a/drivers/mtd/mtdpart.c
+++ b/drivers/mtd/mtdpart.c
@@ -331,7 +331,7 @@ static int __del_mtd_partitions(struct mtd_info *mtd)
list_for_each_entry_safe(child, next, &mtd->partitions, part.node) {
if (mtd_has_partitions(child))
- del_mtd_partitions(child);
+ __del_mtd_partitions(child);
pr_info("Deleting %s MTD partition\n", child->name);
ret = del_mtd_device(child);
diff --git a/drivers/mtd/nand/raw/atmel/nand-controller.c b/drivers/mtd/nand/raw/atmel/nand-controller.c
index e6ceec8f..8aab101 100644
--- a/drivers/mtd/nand/raw/atmel/nand-controller.c
+++ b/drivers/mtd/nand/raw/atmel/nand-controller.c
@@ -883,10 +883,12 @@ static int atmel_nand_pmecc_correct_data(struct nand_chip *chip, void *buf,
NULL, 0,
chip->ecc.strength);
- if (ret >= 0)
+ if (ret >= 0) {
+ mtd->ecc_stats.corrected += ret;
max_bitflips = max(ret, max_bitflips);
- else
+ } else {
mtd->ecc_stats.failed++;
+ }
databuf += chip->ecc.size;
eccbuf += chip->ecc.bytes;
diff --git a/drivers/mtd/nand/raw/brcmnand/brcmnand.c b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
index 2da39ab..909b14c 100644
--- a/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+++ b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
@@ -2688,6 +2688,12 @@ static int brcmnand_attach_chip(struct nand_chip *chip)
ret = brcmstb_choose_ecc_layout(host);
+ /* If OOB is written with ECC enabled it will cause ECC errors */
+ if (is_hamming_ecc(host->ctrl, &host->hwcfg)) {
+ chip->ecc.write_oob = brcmnand_write_oob_raw;
+ chip->ecc.read_oob = brcmnand_read_oob_raw;
+ }
+
return ret;
}
diff --git a/drivers/mtd/nand/raw/fsmc_nand.c b/drivers/mtd/nand/raw/fsmc_nand.c
index c88421a..ce05dd4 100644
--- a/drivers/mtd/nand/raw/fsmc_nand.c
+++ b/drivers/mtd/nand/raw/fsmc_nand.c
@@ -1078,11 +1078,13 @@ static int __init fsmc_nand_probe(struct platform_device *pdev)
host->read_dma_chan = dma_request_channel(mask, filter, NULL);
if (!host->read_dma_chan) {
dev_err(&pdev->dev, "Unable to get read dma channel\n");
+ ret = -ENODEV;
goto disable_clk;
}
host->write_dma_chan = dma_request_channel(mask, filter, NULL);
if (!host->write_dma_chan) {
dev_err(&pdev->dev, "Unable to get write dma channel\n");
+ ret = -ENODEV;
goto release_dma_read_chan;
}
}
diff --git a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
index 31a6210..a665856 100644
--- a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
+++ b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
@@ -2447,7 +2447,7 @@ static int gpmi_nand_init(struct gpmi_nand_data *this)
this->bch_geometry.auxiliary_size = 128;
ret = gpmi_alloc_dma_buffer(this);
if (ret)
- goto err_out;
+ return ret;
nand_controller_init(&this->base);
this->base.ops = &gpmi_nand_controller_ops;
diff --git a/drivers/mtd/nand/raw/mtk_nand.c b/drivers/mtd/nand/raw/mtk_nand.c
index 57f1f17..5c5c921 100644
--- a/drivers/mtd/nand/raw/mtk_nand.c
+++ b/drivers/mtd/nand/raw/mtk_nand.c
@@ -488,8 +488,8 @@ static int mtk_nfc_exec_instr(struct nand_chip *chip,
return 0;
case NAND_OP_WAITRDY_INSTR:
return readl_poll_timeout(nfc->regs + NFI_STA, status,
- status & STA_BUSY, 20,
- instr->ctx.waitrdy.timeout_ms);
+ !(status & STA_BUSY), 20,
+ instr->ctx.waitrdy.timeout_ms * 1000);
default:
break;
}
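The mtk_nand fix above corrects the wait-ready poll in two ways: the loop must exit when the busy bit clears (not when it is set), and the timeout handed to the polling helper is in microseconds, so the millisecond value from the instruction has to be scaled. A small self-contained sketch of such a poll, with a faked status register:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define STA_BUSY 0x1u

    /* Fake status register: reports busy for the first few reads. */
    static uint32_t read_status(void)
    {
        static int reads;
        return (reads++ < 3) ? STA_BUSY : 0;
    }

    /* Poll until the busy bit clears or the timeout (in microseconds) expires.
     * Returns 0 on success, -1 on timeout. The condition describes the state
     * being waited FOR (ready), not the busy state. */
    static int wait_ready(uint32_t sleep_us, uint32_t timeout_us)
    {
        uint32_t waited = 0, status;

        for (;;) {
            status = read_status();
            if (!(status & STA_BUSY))     /* ready: busy bit cleared */
                return 0;
            if (waited >= timeout_us)
                return -1;
            waited += sleep_us;           /* stand-in for an actual delay */
        }
    }

    int main(void)
    {
        unsigned int timeout_ms = 500;

        /* The caller's timeout is in ms; the poller expects us. */
        if (wait_ready(20, timeout_ms * 1000) == 0)
            printf("controller ready\n");
        else
            printf("timed out\n");
        return 0;
    }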
diff --git a/drivers/mtd/nand/raw/qcom_nandc.c b/drivers/mtd/nand/raw/qcom_nandc.c
index dfc17a2..b99d2e9 100644
--- a/drivers/mtd/nand/raw/qcom_nandc.c
+++ b/drivers/mtd/nand/raw/qcom_nandc.c
@@ -2874,7 +2874,7 @@ static int qcom_probe_nand_devices(struct qcom_nand_controller *nandc)
struct device *dev = nandc->dev;
struct device_node *dn = dev->of_node, *child;
struct qcom_nand_host *host;
- int ret;
+ int ret = -ENODEV;
for_each_available_child_of_node(dn, child) {
host = devm_kzalloc(dev, sizeof(*host), GFP_KERNEL);
@@ -2892,10 +2892,7 @@ static int qcom_probe_nand_devices(struct qcom_nand_controller *nandc)
list_add_tail(&host->node, &nandc->host_list);
}
- if (list_empty(&nandc->host_list))
- return -ENODEV;
-
- return 0;
+ return ret;
}
/* parse custom DT properties here */
diff --git a/drivers/mtd/nand/spi/core.c b/drivers/mtd/nand/spi/core.c
index c352217..558d8a1 100644
--- a/drivers/mtd/nand/spi/core.c
+++ b/drivers/mtd/nand/spi/core.c
@@ -1173,12 +1173,14 @@ static const struct spi_device_id spinand_ids[] = {
{ .name = "spi-nand" },
{ /* sentinel */ },
};
+MODULE_DEVICE_TABLE(spi, spinand_ids);
#ifdef CONFIG_OF
static const struct of_device_id spinand_of_ids[] = {
{ .compatible = "spi-nand" },
{ /* sentinel */ },
};
+MODULE_DEVICE_TABLE(of, spinand_of_ids);
#endif
static struct spi_mem_driver spinand_drv = {
diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c
index 06e1bf0..2b26a87 100644
--- a/drivers/mtd/spi-nor/core.c
+++ b/drivers/mtd/spi-nor/core.c
@@ -2981,6 +2981,37 @@ static void spi_nor_resume(struct mtd_info *mtd)
dev_err(dev, "resume() failed\n");
}
+static int spi_nor_get_device(struct mtd_info *mtd)
+{
+ struct mtd_info *master = mtd_get_master(mtd);
+ struct spi_nor *nor = mtd_to_spi_nor(master);
+ struct device *dev;
+
+ if (nor->spimem)
+ dev = nor->spimem->spi->controller->dev.parent;
+ else
+ dev = nor->dev;
+
+ if (!try_module_get(dev->driver->owner))
+ return -ENODEV;
+
+ return 0;
+}
+
+static void spi_nor_put_device(struct mtd_info *mtd)
+{
+ struct mtd_info *master = mtd_get_master(mtd);
+ struct spi_nor *nor = mtd_to_spi_nor(master);
+ struct device *dev;
+
+ if (nor->spimem)
+ dev = nor->spimem->spi->controller->dev.parent;
+ else
+ dev = nor->dev;
+
+ module_put(dev->driver->owner);
+}
+
void spi_nor_restore(struct spi_nor *nor)
{
/* restore the addressing mode */
@@ -3157,6 +3188,8 @@ int spi_nor_scan(struct spi_nor *nor, const char *name,
mtd->_erase = spi_nor_erase;
mtd->_read = spi_nor_read;
mtd->_resume = spi_nor_resume;
+ mtd->_get_device = spi_nor_get_device;
+ mtd->_put_device = spi_nor_put_device;
if (nor->params->locking_ops) {
mtd->_lock = spi_nor_lock;
diff --git a/drivers/mtd/spi-nor/macronix.c b/drivers/mtd/spi-nor/macronix.c
index 9203aba..662b212 100644
--- a/drivers/mtd/spi-nor/macronix.c
+++ b/drivers/mtd/spi-nor/macronix.c
@@ -73,9 +73,6 @@ static const struct flash_info macronix_parts[] = {
SECT_4K | SPI_NOR_DUAL_READ |
SPI_NOR_QUAD_READ) },
{ "mx25l25655e", INFO(0xc22619, 0, 64 * 1024, 512, 0) },
- { "mx25l51245g", INFO(0xc2201a, 0, 64 * 1024, 1024,
- SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ |
- SPI_NOR_4B_OPCODES) },
{ "mx66l51235l", INFO(0xc2201a, 0, 64 * 1024, 1024,
SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ |
SPI_NOR_4B_OPCODES) },
diff --git a/drivers/net/caif/caif_serial.c b/drivers/net/caif/caif_serial.c
index bcc14c5..d025ea4 100644
--- a/drivers/net/caif/caif_serial.c
+++ b/drivers/net/caif/caif_serial.c
@@ -270,9 +270,6 @@ static netdev_tx_t caif_xmit(struct sk_buff *skb, struct net_device *dev)
{
struct ser_device *ser;
- if (WARN_ON(!dev))
- return -EINVAL;
-
ser = netdev_priv(dev);
/* Send flow off once, on high water mark */
diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
index 6f0bf5d..62bcef4 100644
--- a/drivers/net/can/m_can/m_can.c
+++ b/drivers/net/can/m_can/m_can.c
@@ -1466,6 +1466,8 @@ static netdev_tx_t m_can_tx_handler(struct m_can_classdev *cdev)
int i;
int putidx;
+ cdev->tx_skb = NULL;
+
/* Generate ID field for TX buffer Element */
/* Common to all supported M_CAN versions */
if (cf->can_id & CAN_EFF_FLAG) {
@@ -1582,7 +1584,6 @@ static void m_can_tx_work_queue(struct work_struct *ws)
tx_work);
m_can_tx_handler(cdev);
- cdev->tx_skb = NULL;
}
static netdev_tx_t m_can_start_xmit(struct sk_buff *skb,
diff --git a/drivers/net/can/spi/mcp251x.c b/drivers/net/can/spi/mcp251x.c
index 22d814ae..89897a2 100644
--- a/drivers/net/can/spi/mcp251x.c
+++ b/drivers/net/can/spi/mcp251x.c
@@ -314,6 +314,18 @@ static int mcp251x_spi_trans(struct spi_device *spi, int len)
return ret;
}
+static int mcp251x_spi_write(struct spi_device *spi, int len)
+{
+ struct mcp251x_priv *priv = spi_get_drvdata(spi);
+ int ret;
+
+ ret = spi_write(spi, priv->spi_tx_buf, len);
+ if (ret)
+ dev_err(&spi->dev, "spi write failed: ret = %d\n", ret);
+
+ return ret;
+}
+
static u8 mcp251x_read_reg(struct spi_device *spi, u8 reg)
{
struct mcp251x_priv *priv = spi_get_drvdata(spi);
@@ -361,7 +373,7 @@ static void mcp251x_write_reg(struct spi_device *spi, u8 reg, u8 val)
priv->spi_tx_buf[1] = reg;
priv->spi_tx_buf[2] = val;
- mcp251x_spi_trans(spi, 3);
+ mcp251x_spi_write(spi, 3);
}
static void mcp251x_write_2regs(struct spi_device *spi, u8 reg, u8 v1, u8 v2)
@@ -373,7 +385,7 @@ static void mcp251x_write_2regs(struct spi_device *spi, u8 reg, u8 v1, u8 v2)
priv->spi_tx_buf[2] = v1;
priv->spi_tx_buf[3] = v2;
- mcp251x_spi_trans(spi, 4);
+ mcp251x_spi_write(spi, 4);
}
static void mcp251x_write_bits(struct spi_device *spi, u8 reg,
@@ -386,7 +398,7 @@ static void mcp251x_write_bits(struct spi_device *spi, u8 reg,
priv->spi_tx_buf[2] = mask;
priv->spi_tx_buf[3] = val;
- mcp251x_spi_trans(spi, 4);
+ mcp251x_spi_write(spi, 4);
}
static u8 mcp251x_read_stat(struct spi_device *spi)
@@ -618,7 +630,7 @@ static void mcp251x_hw_tx_frame(struct spi_device *spi, u8 *buf,
buf[i]);
} else {
memcpy(priv->spi_tx_buf, buf, TXBDAT_OFF + len);
- mcp251x_spi_trans(spi, TXBDAT_OFF + len);
+ mcp251x_spi_write(spi, TXBDAT_OFF + len);
}
}
@@ -650,7 +662,7 @@ static void mcp251x_hw_tx(struct spi_device *spi, struct can_frame *frame,
/* use INSTRUCTION_RTS, to avoid "repeated frame problem" */
priv->spi_tx_buf[0] = INSTRUCTION_RTS(1 << tx_buf_idx);
- mcp251x_spi_trans(priv->spi, 1);
+ mcp251x_spi_write(priv->spi, 1);
}
static void mcp251x_hw_rx_frame(struct spi_device *spi, u8 *buf,
@@ -888,7 +900,7 @@ static int mcp251x_hw_reset(struct spi_device *spi)
mdelay(MCP251X_OST_DELAY_MS);
priv->spi_tx_buf[0] = INSTRUCTION_RESET;
- ret = mcp251x_spi_trans(spi, 1);
+ ret = mcp251x_spi_write(spi, 1);
if (ret)
return ret;
@@ -944,8 +956,6 @@ static int mcp251x_stop(struct net_device *net)
priv->force_quit = 1;
free_irq(spi->irq, priv);
- destroy_workqueue(priv->wq);
- priv->wq = NULL;
mutex_lock(&priv->mcp_lock);
@@ -1212,24 +1222,15 @@ static int mcp251x_open(struct net_device *net)
goto out_close;
}
- priv->wq = alloc_workqueue("mcp251x_wq", WQ_FREEZABLE | WQ_MEM_RECLAIM,
- 0);
- if (!priv->wq) {
- ret = -ENOMEM;
- goto out_clean;
- }
- INIT_WORK(&priv->tx_work, mcp251x_tx_work_handler);
- INIT_WORK(&priv->restart_work, mcp251x_restart_work_handler);
-
ret = mcp251x_hw_wake(spi);
if (ret)
- goto out_free_wq;
+ goto out_free_irq;
ret = mcp251x_setup(net, spi);
if (ret)
- goto out_free_wq;
+ goto out_free_irq;
ret = mcp251x_set_normal_mode(spi);
if (ret)
- goto out_free_wq;
+ goto out_free_irq;
can_led_event(net, CAN_LED_EVENT_OPEN);
@@ -1238,9 +1239,7 @@ static int mcp251x_open(struct net_device *net)
return 0;
-out_free_wq:
- destroy_workqueue(priv->wq);
-out_clean:
+out_free_irq:
free_irq(spi->irq, priv);
mcp251x_hw_sleep(spi);
out_close:
@@ -1361,6 +1360,15 @@ static int mcp251x_can_probe(struct spi_device *spi)
if (ret)
goto out_clk;
+ priv->wq = alloc_workqueue("mcp251x_wq", WQ_FREEZABLE | WQ_MEM_RECLAIM,
+ 0);
+ if (!priv->wq) {
+ ret = -ENOMEM;
+ goto out_clk;
+ }
+ INIT_WORK(&priv->tx_work, mcp251x_tx_work_handler);
+ INIT_WORK(&priv->restart_work, mcp251x_restart_work_handler);
+
priv->spi = spi;
mutex_init(&priv->mcp_lock);
@@ -1405,6 +1413,8 @@ static int mcp251x_can_probe(struct spi_device *spi)
return 0;
error_probe:
+ destroy_workqueue(priv->wq);
+ priv->wq = NULL;
mcp251x_power_enable(priv->power, 0);
out_clk:
@@ -1426,6 +1436,9 @@ static int mcp251x_can_remove(struct spi_device *spi)
mcp251x_power_enable(priv->power, 0);
+ destroy_workqueue(priv->wq);
+ priv->wq = NULL;
+
clk_disable_unprepare(priv->clk);
free_candev(net);
diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
index 096d818..68ff931 100644
--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
+++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
@@ -2870,10 +2870,12 @@ static int mcp251xfd_probe(struct spi_device *spi)
err = mcp251xfd_register(priv);
if (err)
- goto out_free_candev;
+ goto out_can_rx_offload_del;
return 0;
+ out_can_rx_offload_del:
+ can_rx_offload_del(&priv->offload);
out_free_candev:
spi->max_speed_hz = priv->spi_max_speed_hz_orig;
diff --git a/drivers/net/can/usb/peak_usb/pcan_usb_core.c b/drivers/net/can/usb/peak_usb/pcan_usb_core.c
index 204ccb2..73c1bc3 100644
--- a/drivers/net/can/usb/peak_usb/pcan_usb_core.c
+++ b/drivers/net/can/usb/peak_usb/pcan_usb_core.c
@@ -856,7 +856,7 @@ static int peak_usb_create_dev(const struct peak_usb_adapter *peak_usb_adapter,
if (dev->adapter->dev_set_bus) {
err = dev->adapter->dev_set_bus(dev, 0);
if (err)
- goto lbl_unregister_candev;
+ goto adap_dev_free;
}
/* get device number early */
@@ -868,6 +868,10 @@ static int peak_usb_create_dev(const struct peak_usb_adapter *peak_usb_adapter,
return 0;
+adap_dev_free:
+ if (dev->adapter->dev_free)
+ dev->adapter->dev_free(dev);
+
lbl_unregister_candev:
unregister_candev(netdev);
diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c
index 662e68a0..93c7fa1 100644
--- a/drivers/net/dsa/lantiq_gswip.c
+++ b/drivers/net/dsa/lantiq_gswip.c
@@ -93,8 +93,12 @@
/* GSWIP MII Registers */
#define GSWIP_MII_CFGp(p) (0x2 * (p))
+#define GSWIP_MII_CFG_RESET BIT(15)
#define GSWIP_MII_CFG_EN BIT(14)
+#define GSWIP_MII_CFG_ISOLATE BIT(13)
#define GSWIP_MII_CFG_LDCLKDIS BIT(12)
+#define GSWIP_MII_CFG_RGMII_IBS BIT(8)
+#define GSWIP_MII_CFG_RMII_CLK BIT(7)
#define GSWIP_MII_CFG_MODE_MIIP 0x0
#define GSWIP_MII_CFG_MODE_MIIM 0x1
#define GSWIP_MII_CFG_MODE_RMIIP 0x2
@@ -190,6 +194,23 @@
#define GSWIP_PCE_DEFPVID(p) (0x486 + ((p) * 0xA))
#define GSWIP_MAC_FLEN 0x8C5
+#define GSWIP_MAC_CTRL_0p(p) (0x903 + ((p) * 0xC))
+#define GSWIP_MAC_CTRL_0_PADEN BIT(8)
+#define GSWIP_MAC_CTRL_0_FCS_EN BIT(7)
+#define GSWIP_MAC_CTRL_0_FCON_MASK 0x0070
+#define GSWIP_MAC_CTRL_0_FCON_AUTO 0x0000
+#define GSWIP_MAC_CTRL_0_FCON_RX 0x0010
+#define GSWIP_MAC_CTRL_0_FCON_TX 0x0020
+#define GSWIP_MAC_CTRL_0_FCON_RXTX 0x0030
+#define GSWIP_MAC_CTRL_0_FCON_NONE 0x0040
+#define GSWIP_MAC_CTRL_0_FDUP_MASK 0x000C
+#define GSWIP_MAC_CTRL_0_FDUP_AUTO 0x0000
+#define GSWIP_MAC_CTRL_0_FDUP_EN 0x0004
+#define GSWIP_MAC_CTRL_0_FDUP_DIS 0x000C
+#define GSWIP_MAC_CTRL_0_GMII_MASK 0x0003
+#define GSWIP_MAC_CTRL_0_GMII_AUTO 0x0000
+#define GSWIP_MAC_CTRL_0_GMII_MII 0x0001
+#define GSWIP_MAC_CTRL_0_GMII_RGMII 0x0002
#define GSWIP_MAC_CTRL_2p(p) (0x905 + ((p) * 0xC))
#define GSWIP_MAC_CTRL_2_MLEN BIT(3) /* Maximum Untagged Frame Length */
@@ -653,16 +674,13 @@ static int gswip_port_enable(struct dsa_switch *ds, int port,
GSWIP_SDMA_PCTRLp(port));
if (!dsa_is_cpu_port(ds, port)) {
- u32 macconf = GSWIP_MDIO_PHY_LINK_AUTO |
- GSWIP_MDIO_PHY_SPEED_AUTO |
- GSWIP_MDIO_PHY_FDUP_AUTO |
- GSWIP_MDIO_PHY_FCONTX_AUTO |
- GSWIP_MDIO_PHY_FCONRX_AUTO |
- (phydev->mdio.addr & GSWIP_MDIO_PHY_ADDR_MASK);
+ u32 mdio_phy = 0;
- gswip_mdio_w(priv, macconf, GSWIP_MDIO_PHYp(port));
- /* Activate MDIO auto polling */
- gswip_mdio_mask(priv, 0, BIT(port), GSWIP_MDIO_MDC_CFG0);
+ if (phydev)
+ mdio_phy = phydev->mdio.addr & GSWIP_MDIO_PHY_ADDR_MASK;
+
+ gswip_mdio_mask(priv, GSWIP_MDIO_PHY_ADDR_MASK, mdio_phy,
+ GSWIP_MDIO_PHYp(port));
}
return 0;
@@ -675,14 +693,6 @@ static void gswip_port_disable(struct dsa_switch *ds, int port)
if (!dsa_is_user_port(ds, port))
return;
- if (!dsa_is_cpu_port(ds, port)) {
- gswip_mdio_mask(priv, GSWIP_MDIO_PHY_LINK_DOWN,
- GSWIP_MDIO_PHY_LINK_MASK,
- GSWIP_MDIO_PHYp(port));
- /* Deactivate MDIO auto polling */
- gswip_mdio_mask(priv, BIT(port), 0, GSWIP_MDIO_MDC_CFG0);
- }
-
gswip_switch_mask(priv, GSWIP_FDMA_PCTRL_EN, 0,
GSWIP_FDMA_PCTRLp(port));
gswip_switch_mask(priv, GSWIP_SDMA_PCTRL_EN, 0,
@@ -806,14 +816,32 @@ static int gswip_setup(struct dsa_switch *ds)
gswip_switch_w(priv, BIT(cpu_port), GSWIP_PCE_PMAP2);
gswip_switch_w(priv, BIT(cpu_port), GSWIP_PCE_PMAP3);
- /* disable PHY auto polling */
+ /* Deactivate MDIO PHY auto polling. Some PHYs such as the AR8030 have an
+ * interoperability problem with this auto polling mechanism because
+ * their status registers think that the link is in a different state
+ * than it actually is. For the AR8030 it has the BMSR_ESTATEN bit set
+ * as well as ESTATUS_1000_TFULL and ESTATUS_1000_XFULL. This makes the
+ * auto polling state machine consider the link being negotiated with
+ * 1Gbit/s. Since the PHY itself is a Fast Ethernet RMII PHY this leads
+ * to the switch port being completely dead (RX and TX are both not
+ * working).
+ * Also with various other PHY / port combinations (PHY11G GPHY, PHY22F
+ * GPHY, external RGMII PEF7071/7072) any traffic would stop. Sometimes
+ * it would work fine for a few minutes to hours and then stop, on
+ * other devices no traffic could be sent or received at all.
+ * Testing shows that when PHY auto polling is disabled these problems
+ * go away.
+ */
gswip_mdio_w(priv, 0x0, GSWIP_MDIO_MDC_CFG0);
+
/* Configure the MDIO Clock 2.5 MHz */
gswip_mdio_mask(priv, 0xff, 0x09, GSWIP_MDIO_MDC_CFG1);
- /* Disable the xMII link */
+ /* Disable the xMII interface and clear its isolation bit */
for (i = 0; i < priv->hw_info->max_ports; i++)
- gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_EN, 0, i);
+ gswip_mii_mask_cfg(priv,
+ GSWIP_MII_CFG_EN | GSWIP_MII_CFG_ISOLATE,
+ 0, i);
/* enable special tag insertion on cpu port */
gswip_switch_mask(priv, 0, GSWIP_FDMA_PCTRL_STEN,
@@ -1464,6 +1492,112 @@ static void gswip_phylink_validate(struct dsa_switch *ds, int port,
return;
}
+static void gswip_port_set_link(struct gswip_priv *priv, int port, bool link)
+{
+ u32 mdio_phy;
+
+ if (link)
+ mdio_phy = GSWIP_MDIO_PHY_LINK_UP;
+ else
+ mdio_phy = GSWIP_MDIO_PHY_LINK_DOWN;
+
+ gswip_mdio_mask(priv, GSWIP_MDIO_PHY_LINK_MASK, mdio_phy,
+ GSWIP_MDIO_PHYp(port));
+}
+
+static void gswip_port_set_speed(struct gswip_priv *priv, int port, int speed,
+ phy_interface_t interface)
+{
+ u32 mdio_phy = 0, mii_cfg = 0, mac_ctrl_0 = 0;
+
+ switch (speed) {
+ case SPEED_10:
+ mdio_phy = GSWIP_MDIO_PHY_SPEED_M10;
+
+ if (interface == PHY_INTERFACE_MODE_RMII)
+ mii_cfg = GSWIP_MII_CFG_RATE_M50;
+ else
+ mii_cfg = GSWIP_MII_CFG_RATE_M2P5;
+
+ mac_ctrl_0 = GSWIP_MAC_CTRL_0_GMII_MII;
+ break;
+
+ case SPEED_100:
+ mdio_phy = GSWIP_MDIO_PHY_SPEED_M100;
+
+ if (interface == PHY_INTERFACE_MODE_RMII)
+ mii_cfg = GSWIP_MII_CFG_RATE_M50;
+ else
+ mii_cfg = GSWIP_MII_CFG_RATE_M25;
+
+ mac_ctrl_0 = GSWIP_MAC_CTRL_0_GMII_MII;
+ break;
+
+ case SPEED_1000:
+ mdio_phy = GSWIP_MDIO_PHY_SPEED_G1;
+
+ mii_cfg = GSWIP_MII_CFG_RATE_M125;
+
+ mac_ctrl_0 = GSWIP_MAC_CTRL_0_GMII_RGMII;
+ break;
+ }
+
+ gswip_mdio_mask(priv, GSWIP_MDIO_PHY_SPEED_MASK, mdio_phy,
+ GSWIP_MDIO_PHYp(port));
+ gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_RATE_MASK, mii_cfg, port);
+ gswip_switch_mask(priv, GSWIP_MAC_CTRL_0_GMII_MASK, mac_ctrl_0,
+ GSWIP_MAC_CTRL_0p(port));
+}
+
+static void gswip_port_set_duplex(struct gswip_priv *priv, int port, int duplex)
+{
+ u32 mac_ctrl_0, mdio_phy;
+
+ if (duplex == DUPLEX_FULL) {
+ mac_ctrl_0 = GSWIP_MAC_CTRL_0_FDUP_EN;
+ mdio_phy = GSWIP_MDIO_PHY_FDUP_EN;
+ } else {
+ mac_ctrl_0 = GSWIP_MAC_CTRL_0_FDUP_DIS;
+ mdio_phy = GSWIP_MDIO_PHY_FDUP_DIS;
+ }
+
+ gswip_switch_mask(priv, GSWIP_MAC_CTRL_0_FDUP_MASK, mac_ctrl_0,
+ GSWIP_MAC_CTRL_0p(port));
+ gswip_mdio_mask(priv, GSWIP_MDIO_PHY_FDUP_MASK, mdio_phy,
+ GSWIP_MDIO_PHYp(port));
+}
+
+static void gswip_port_set_pause(struct gswip_priv *priv, int port,
+ bool tx_pause, bool rx_pause)
+{
+ u32 mac_ctrl_0, mdio_phy;
+
+ if (tx_pause && rx_pause) {
+ mac_ctrl_0 = GSWIP_MAC_CTRL_0_FCON_RXTX;
+ mdio_phy = GSWIP_MDIO_PHY_FCONTX_EN |
+ GSWIP_MDIO_PHY_FCONRX_EN;
+ } else if (tx_pause) {
+ mac_ctrl_0 = GSWIP_MAC_CTRL_0_FCON_TX;
+ mdio_phy = GSWIP_MDIO_PHY_FCONTX_EN |
+ GSWIP_MDIO_PHY_FCONRX_DIS;
+ } else if (rx_pause) {
+ mac_ctrl_0 = GSWIP_MAC_CTRL_0_FCON_RX;
+ mdio_phy = GSWIP_MDIO_PHY_FCONTX_DIS |
+ GSWIP_MDIO_PHY_FCONRX_EN;
+ } else {
+ mac_ctrl_0 = GSWIP_MAC_CTRL_0_FCON_NONE;
+ mdio_phy = GSWIP_MDIO_PHY_FCONTX_DIS |
+ GSWIP_MDIO_PHY_FCONRX_DIS;
+ }
+
+ gswip_switch_mask(priv, GSWIP_MAC_CTRL_0_FCON_MASK,
+ mac_ctrl_0, GSWIP_MAC_CTRL_0p(port));
+ gswip_mdio_mask(priv,
+ GSWIP_MDIO_PHY_FCONTX_MASK |
+ GSWIP_MDIO_PHY_FCONRX_MASK,
+ mdio_phy, GSWIP_MDIO_PHYp(port));
+}
+
static void gswip_phylink_mac_config(struct dsa_switch *ds, int port,
unsigned int mode,
const struct phylink_link_state *state)
@@ -1483,6 +1617,9 @@ static void gswip_phylink_mac_config(struct dsa_switch *ds, int port,
break;
case PHY_INTERFACE_MODE_RMII:
miicfg |= GSWIP_MII_CFG_MODE_RMIIM;
+
+ /* Configure the RMII clock as output: */
+ miicfg |= GSWIP_MII_CFG_RMII_CLK;
break;
case PHY_INTERFACE_MODE_RGMII:
case PHY_INTERFACE_MODE_RGMII_ID:
@@ -1495,7 +1632,11 @@ static void gswip_phylink_mac_config(struct dsa_switch *ds, int port,
"Unsupported interface: %d\n", state->interface);
return;
}
- gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_MODE_MASK, miicfg, port);
+
+ gswip_mii_mask_cfg(priv,
+ GSWIP_MII_CFG_MODE_MASK | GSWIP_MII_CFG_RMII_CLK |
+ GSWIP_MII_CFG_RGMII_IBS | GSWIP_MII_CFG_LDCLKDIS,
+ miicfg, port);
switch (state->interface) {
case PHY_INTERFACE_MODE_RGMII_ID:
@@ -1520,6 +1661,9 @@ static void gswip_phylink_mac_link_down(struct dsa_switch *ds, int port,
struct gswip_priv *priv = ds->priv;
gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_EN, 0, port);
+
+ if (!dsa_is_cpu_port(ds, port))
+ gswip_port_set_link(priv, port, false);
}
static void gswip_phylink_mac_link_up(struct dsa_switch *ds, int port,
@@ -1531,6 +1675,13 @@ static void gswip_phylink_mac_link_up(struct dsa_switch *ds, int port,
{
struct gswip_priv *priv = ds->priv;
+ if (!dsa_is_cpu_port(ds, port)) {
+ gswip_port_set_link(priv, port, true);
+ gswip_port_set_speed(priv, port, speed, interface);
+ gswip_port_set_duplex(priv, port, duplex);
+ gswip_port_set_pause(priv, port, tx_pause, rx_pause);
+ }
+
gswip_mii_mask_cfg(priv, 0, GSWIP_MII_CFG_EN, port);
}
diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
index de7692b..190025a0 100644
--- a/drivers/net/dsa/mt7530.c
+++ b/drivers/net/dsa/mt7530.c
@@ -1128,14 +1128,6 @@ mt7530_port_set_vlan_aware(struct dsa_switch *ds, int port)
{
struct mt7530_priv *priv = ds->priv;
- /* The real fabric path would be decided on the membership in the
- * entry of VLAN table. PCR_MATRIX set up here with ALL_MEMBERS
- * means potential VLAN can be consisting of certain subset of all
- * ports.
- */
- mt7530_rmw(priv, MT7530_PCR_P(port),
- PCR_MATRIX_MASK, PCR_MATRIX(MT7530_ALL_MEMBERS));
-
/* Trapped into security mode allows packet forwarding through VLAN
* table lookup. CPU port is set to fallback mode to let untagged
* frames pass through.
diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
index 87160e7..70ec17f 100644
--- a/drivers/net/dsa/mv88e6xxx/chip.c
+++ b/drivers/net/dsa/mv88e6xxx/chip.c
@@ -2994,10 +2994,17 @@ static int mv88e6xxx_setup(struct dsa_switch *ds)
return err;
}
+/* prod_id for switch families which do not have a PHY model number */
+static const u16 family_prod_id_table[] = {
+ [MV88E6XXX_FAMILY_6341] = MV88E6XXX_PORT_SWITCH_ID_PROD_6341,
+ [MV88E6XXX_FAMILY_6390] = MV88E6XXX_PORT_SWITCH_ID_PROD_6390,
+};
+
static int mv88e6xxx_mdio_read(struct mii_bus *bus, int phy, int reg)
{
struct mv88e6xxx_mdio_bus *mdio_bus = bus->priv;
struct mv88e6xxx_chip *chip = mdio_bus->chip;
+ u16 prod_id;
u16 val;
int err;
@@ -3008,23 +3015,12 @@ static int mv88e6xxx_mdio_read(struct mii_bus *bus, int phy, int reg)
err = chip->info->ops->phy_read(chip, bus, phy, reg, &val);
mv88e6xxx_reg_unlock(chip);
- if (reg == MII_PHYSID2) {
- /* Some internal PHYs don't have a model number. */
- if (chip->info->family != MV88E6XXX_FAMILY_6165)
- /* Then there is the 6165 family. It gets is
- * PHYs correct. But it can also have two
- * SERDES interfaces in the PHY address
- * space. And these don't have a model
- * number. But they are not PHYs, so we don't
- * want to give them something a PHY driver
- * will recognise.
- *
- * Use the mv88e6390 family model number
- * instead, for anything which really could be
- * a PHY,
- */
- if (!(val & 0x3f0))
- val |= MV88E6XXX_PORT_SWITCH_ID_PROD_6390 >> 4;
+ /* Some internal PHYs don't have a model number. */
+ if (reg == MII_PHYSID2 && !(val & 0x3f0) &&
+ chip->info->family < ARRAY_SIZE(family_prod_id_table)) {
+ prod_id = family_prod_id_table[chip->info->family];
+ if (prod_id)
+ val |= prod_id >> 4;
}
return err ? err : val;
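The mv88e6xxx change above swaps family-specific conditionals for a sparse lookup table guarded by an array-bounds check and a zero check for families without an entry. A standalone sketch of that idiom, with placeholder family and product IDs:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

    enum family { FAMILY_A, FAMILY_B, FAMILY_C };

    /* Sparse table: only families that need a substitute product ID get one;
     * everything else stays zero thanks to designated initializers. */
    static const uint16_t family_prod_id[] = {
        [FAMILY_B] = 0x3410,
        [FAMILY_C] = 0x3900,
    };

    static uint16_t fixup_phy_id(enum family family, uint16_t id_reg)
    {
        /* Only patch IDs that report no model number (low model bits clear). */
        if (!(id_reg & 0x3f0) && (size_t)family < ARRAY_SIZE(family_prod_id)) {
            uint16_t prod_id = family_prod_id[family];

            if (prod_id)
                id_reg |= prod_id >> 4;
        }
        return id_reg;
    }

    int main(void)
    {
        printf("FAMILY_A: 0x%04x\n", fixup_phy_id(FAMILY_A, 0x0001)); /* no entry, unchanged */
        printf("FAMILY_C: 0x%04x\n", fixup_phy_id(FAMILY_C, 0x0001)); /* patched */
        return 0;
    }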
diff --git a/drivers/net/dsa/sja1105/sja1105_dynamic_config.c b/drivers/net/dsa/sja1105/sja1105_dynamic_config.c
index b777d3f..12cd04b 100644
--- a/drivers/net/dsa/sja1105/sja1105_dynamic_config.c
+++ b/drivers/net/dsa/sja1105/sja1105_dynamic_config.c
@@ -167,9 +167,10 @@ enum sja1105_hostcmd {
SJA1105_HOSTCMD_INVALIDATE = 4,
};
+/* Command and entry overlap */
static void
-sja1105_vl_lookup_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
- enum packing_op op)
+sja1105et_vl_lookup_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
+ enum packing_op op)
{
const int size = SJA1105_SIZE_DYN_CMD;
@@ -179,6 +180,20 @@ sja1105_vl_lookup_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
sja1105_packing(buf, &cmd->index, 9, 0, size, op);
}
+/* Command and entry are separate */
+static void
+sja1105pqrs_vl_lookup_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
+ enum packing_op op)
+{
+ u8 *p = buf + SJA1105_SIZE_VL_LOOKUP_ENTRY;
+ const int size = SJA1105_SIZE_DYN_CMD;
+
+ sja1105_packing(p, &cmd->valid, 31, 31, size, op);
+ sja1105_packing(p, &cmd->errors, 30, 30, size, op);
+ sja1105_packing(p, &cmd->rdwrset, 29, 29, size, op);
+ sja1105_packing(p, &cmd->index, 9, 0, size, op);
+}
+
static size_t sja1105et_vl_lookup_entry_packing(void *buf, void *entry_ptr,
enum packing_op op)
{
@@ -641,7 +656,7 @@ static size_t sja1105pqrs_cbs_entry_packing(void *buf, void *entry_ptr,
const struct sja1105_dynamic_table_ops sja1105et_dyn_ops[BLK_IDX_MAX_DYN] = {
[BLK_IDX_VL_LOOKUP] = {
.entry_packing = sja1105et_vl_lookup_entry_packing,
- .cmd_packing = sja1105_vl_lookup_cmd_packing,
+ .cmd_packing = sja1105et_vl_lookup_cmd_packing,
.access = OP_WRITE,
.max_entry_count = SJA1105_MAX_VL_LOOKUP_COUNT,
.packed_size = SJA1105ET_SIZE_VL_LOOKUP_DYN_CMD,
@@ -725,7 +740,7 @@ const struct sja1105_dynamic_table_ops sja1105et_dyn_ops[BLK_IDX_MAX_DYN] = {
const struct sja1105_dynamic_table_ops sja1105pqrs_dyn_ops[BLK_IDX_MAX_DYN] = {
[BLK_IDX_VL_LOOKUP] = {
.entry_packing = sja1105_vl_lookup_entry_packing,
- .cmd_packing = sja1105_vl_lookup_cmd_packing,
+ .cmd_packing = sja1105pqrs_vl_lookup_cmd_packing,
.access = (OP_READ | OP_WRITE),
.max_entry_count = SJA1105_MAX_VL_LOOKUP_COUNT,
.packed_size = SJA1105PQRS_SIZE_VL_LOOKUP_DYN_CMD,
diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c
index 1a85581..e273b2b 100644
--- a/drivers/net/dsa/sja1105/sja1105_main.c
+++ b/drivers/net/dsa/sja1105/sja1105_main.c
@@ -25,6 +25,8 @@
#include "sja1105_sgmii.h"
#include "sja1105_tas.h"
+#define SJA1105_DEFAULT_VLAN (VLAN_N_VID - 1)
+
static const struct dsa_switch_ops sja1105_switch_ops;
static void sja1105_hw_reset(struct gpio_desc *gpio, unsigned int pulse_len,
@@ -204,6 +206,7 @@ static int sja1105_init_mii_settings(struct sja1105_private *priv,
default:
dev_err(dev, "Unsupported PHY mode %s!\n",
phy_modes(ports[i].phy_mode));
+ return -EINVAL;
}
/* Even though the SerDes port is able to drive SGMII autoneg
@@ -292,6 +295,13 @@ static int sja1105_init_l2_lookup_params(struct sja1105_private *priv)
return 0;
}
+/* Set up a default VLAN for untagged traffic injected from the CPU
+ * using management routes (e.g. STP, PTP) as opposed to tag_8021q.
+ * All DT-defined ports are members of this VLAN, and there are no
+ * restrictions on forwarding (since the CPU selects the destination).
+ * Frames from this VLAN will always be transmitted as untagged, and
+ * neither the bridge nor the 8021q module can create this VLAN ID.
+ */
static int sja1105_init_static_vlan(struct sja1105_private *priv)
{
struct sja1105_table *table;
@@ -301,17 +311,13 @@ static int sja1105_init_static_vlan(struct sja1105_private *priv)
.vmemb_port = 0,
.vlan_bc = 0,
.tag_port = 0,
- .vlanid = 1,
+ .vlanid = SJA1105_DEFAULT_VLAN,
};
struct dsa_switch *ds = priv->ds;
int port;
table = &priv->static_config.tables[BLK_IDX_VLAN_LOOKUP];
- /* The static VLAN table will only contain the initial pvid of 1.
- * All other VLANs are to be configured through dynamic entries,
- * and kept in the static configuration table as backing memory.
- */
if (table->entry_count) {
kfree(table->entries);
table->entry_count = 0;
@@ -324,9 +330,6 @@ static int sja1105_init_static_vlan(struct sja1105_private *priv)
table->entry_count = 1;
- /* VLAN 1: all DT-defined ports are members; no restrictions on
- * forwarding; always transmit as untagged.
- */
for (port = 0; port < ds->num_ports; port++) {
struct sja1105_bridge_vlan *v;
@@ -337,15 +340,12 @@ static int sja1105_init_static_vlan(struct sja1105_private *priv)
pvid.vlan_bc |= BIT(port);
pvid.tag_port &= ~BIT(port);
- /* Let traffic that don't need dsa_8021q (e.g. STP, PTP) be
- * transmitted as untagged.
- */
v = kzalloc(sizeof(*v), GFP_KERNEL);
if (!v)
return -ENOMEM;
v->port = port;
- v->vid = 1;
+ v->vid = SJA1105_DEFAULT_VLAN;
v->untagged = true;
if (dsa_is_cpu_port(ds, port))
v->pvid = true;
@@ -2756,11 +2756,22 @@ static int sja1105_vlan_add_one(struct dsa_switch *ds, int port, u16 vid,
bool pvid = flags & BRIDGE_VLAN_INFO_PVID;
struct sja1105_bridge_vlan *v;
- list_for_each_entry(v, vlan_list, list)
- if (v->port == port && v->vid == vid &&
- v->untagged == untagged && v->pvid == pvid)
+ list_for_each_entry(v, vlan_list, list) {
+ if (v->port == port && v->vid == vid) {
/* Already added */
- return 0;
+ if (v->untagged == untagged && v->pvid == pvid)
+ /* Nothing changed */
+ return 0;
+
+ /* It's the same VLAN, but some of the flags changed
+ * and the user did not bother to delete it first.
+ * Update it and trigger sja1105_build_vlan_table.
+ */
+ v->untagged = untagged;
+ v->pvid = pvid;
+ return 1;
+ }
+ }
v = kzalloc(sizeof(*v), GFP_KERNEL);
if (!v) {
@@ -2911,13 +2922,13 @@ static int sja1105_setup(struct dsa_switch *ds)
rc = sja1105_static_config_load(priv, ports);
if (rc < 0) {
dev_err(ds->dev, "Failed to load static config: %d\n", rc);
- return rc;
+ goto out_ptp_clock_unregister;
}
/* Configure the CGU (PHY link modes and speeds) */
rc = sja1105_clocking_setup(priv);
if (rc < 0) {
dev_err(ds->dev, "Failed to configure MII clocking: %d\n", rc);
- return rc;
+ goto out_static_config_free;
}
/* On SJA1105, VLAN filtering per se is always enabled in hardware.
* The only thing we can do to disable it is lie about what the 802.1Q
@@ -2938,7 +2949,7 @@ static int sja1105_setup(struct dsa_switch *ds)
rc = sja1105_devlink_setup(ds);
if (rc < 0)
- return rc;
+ goto out_static_config_free;
/* The DSA/switchdev model brings up switch ports in standalone mode by
* default, and that means vlan_filtering is 0 since they're not under
@@ -2947,6 +2958,17 @@ static int sja1105_setup(struct dsa_switch *ds)
rtnl_lock();
rc = sja1105_setup_8021q_tagging(ds, true);
rtnl_unlock();
+ if (rc)
+ goto out_devlink_teardown;
+
+ return 0;
+
+out_devlink_teardown:
+ sja1105_devlink_teardown(ds);
+out_ptp_clock_unregister:
+ sja1105_ptp_clock_unregister(ds);
+out_static_config_free:
+ sja1105_static_config_free(&priv->static_config);
return rc;
}
@@ -3461,8 +3483,10 @@ static int sja1105_probe(struct spi_device *spi)
priv->cbs = devm_kcalloc(dev, priv->info->num_cbs_shapers,
sizeof(struct sja1105_cbs_entry),
GFP_KERNEL);
- if (!priv->cbs)
- return -ENOMEM;
+ if (!priv->cbs) {
+ rc = -ENOMEM;
+ goto out_unregister_switch;
+ }
}
/* Connections between dsa_port and sja1105_port */
@@ -3487,7 +3511,7 @@ static int sja1105_probe(struct spi_device *spi)
dev_err(ds->dev,
"failed to create deferred xmit thread: %d\n",
rc);
- goto out;
+ goto out_destroy_workers;
}
skb_queue_head_init(&sp->xmit_queue);
sp->xmit_tpid = ETH_P_SJA1105;
@@ -3497,7 +3521,8 @@ static int sja1105_probe(struct spi_device *spi)
}
return 0;
-out:
+
+out_destroy_workers:
while (port-- > 0) {
struct sja1105_port *sp = &priv->ports[port];
@@ -3506,6 +3531,10 @@ static int sja1105_probe(struct spi_device *spi)
kthread_destroy_worker(sp->xmit_worker);
}
+
+out_unregister_switch:
+ dsa_unregister_switch(ds);
+
return rc;
}
diff --git a/drivers/net/ethernet/amd/pcnet32.c b/drivers/net/ethernet/amd/pcnet32.c
index 187b0b9..f78daba 100644
--- a/drivers/net/ethernet/amd/pcnet32.c
+++ b/drivers/net/ethernet/amd/pcnet32.c
@@ -1534,8 +1534,7 @@ pcnet32_probe_pci(struct pci_dev *pdev, const struct pci_device_id *ent)
}
pci_set_master(pdev);
- ioaddr = pci_resource_start(pdev, 0);
- if (!ioaddr) {
+ if (!pci_resource_len(pdev, 0)) {
if (pcnet32_debug & NETIF_MSG_PROBE)
pr_err("card has no PCI IO resources, aborting\n");
err = -ENODEV;
@@ -1548,6 +1547,8 @@ pcnet32_probe_pci(struct pci_dev *pdev, const struct pci_device_id *ent)
pr_err("architecture does not support 32bit PCI busmaster DMA\n");
goto err_disable_dev;
}
+
+ ioaddr = pci_resource_start(pdev, 0);
if (!request_region(ioaddr, PCNET32_TOTAL_SIZE, "pcnet32_probe_pci")) {
if (pcnet32_debug & NETIF_MSG_PROBE)
pr_err("io address range already allocated\n");
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe.h b/drivers/net/ethernet/amd/xgbe/xgbe.h
index ba8321e..3305979 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe.h
+++ b/drivers/net/ethernet/amd/xgbe/xgbe.h
@@ -180,9 +180,9 @@
#define XGBE_DMA_SYS_AWCR 0x30303030
/* DMA cache settings - PCI device */
-#define XGBE_DMA_PCI_ARCR 0x00000003
-#define XGBE_DMA_PCI_AWCR 0x13131313
-#define XGBE_DMA_PCI_AWARCR 0x00000313
+#define XGBE_DMA_PCI_ARCR 0x000f0f0f
+#define XGBE_DMA_PCI_AWCR 0x0f0f0f0f
+#define XGBE_DMA_PCI_AWARCR 0x00000f0f
/* DMA channel interrupt modes */
#define XGBE_IRQ_MODE_EDGE 0
diff --git a/drivers/net/ethernet/broadcom/bnx2.c b/drivers/net/ethernet/broadcom/bnx2.c
index 3e8a179..633b103 100644
--- a/drivers/net/ethernet/broadcom/bnx2.c
+++ b/drivers/net/ethernet/broadcom/bnx2.c
@@ -8247,9 +8247,9 @@ bnx2_init_board(struct pci_dev *pdev, struct net_device *dev)
BNX2_WR(bp, PCI_COMMAND, reg);
} else if ((BNX2_CHIP_ID(bp) == BNX2_CHIP_ID_5706_A1) &&
!(bp->flags & BNX2_FLAG_PCIX)) {
-
dev_err(&pdev->dev,
"5706 A1 can only be used in a PCIX bus, aborting\n");
+ rc = -EPERM;
goto err_out_unmap;
}
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index a59c1f1..adfaa9a 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -122,7 +122,10 @@ enum board_idx {
NETXTREME_E_VF,
NETXTREME_C_VF,
NETXTREME_S_VF,
+ NETXTREME_C_VF_HV,
+ NETXTREME_E_VF_HV,
NETXTREME_E_P5_VF,
+ NETXTREME_E_P5_VF_HV,
};
/* indexed by enum above */
@@ -170,7 +173,10 @@ static const struct {
[NETXTREME_E_VF] = { "Broadcom NetXtreme-E Ethernet Virtual Function" },
[NETXTREME_C_VF] = { "Broadcom NetXtreme-C Ethernet Virtual Function" },
[NETXTREME_S_VF] = { "Broadcom NetXtreme-S Ethernet Virtual Function" },
+ [NETXTREME_C_VF_HV] = { "Broadcom NetXtreme-C Virtual Function for Hyper-V" },
+ [NETXTREME_E_VF_HV] = { "Broadcom NetXtreme-E Virtual Function for Hyper-V" },
[NETXTREME_E_P5_VF] = { "Broadcom BCM5750X NetXtreme-E Ethernet Virtual Function" },
+ [NETXTREME_E_P5_VF_HV] = { "Broadcom BCM5750X NetXtreme-E Virtual Function for Hyper-V" },
};
static const struct pci_device_id bnxt_pci_tbl[] = {
@@ -222,15 +228,25 @@ static const struct pci_device_id bnxt_pci_tbl[] = {
{ PCI_VDEVICE(BROADCOM, 0xd804), .driver_data = BCM58804 },
#ifdef CONFIG_BNXT_SRIOV
{ PCI_VDEVICE(BROADCOM, 0x1606), .driver_data = NETXTREME_E_VF },
+ { PCI_VDEVICE(BROADCOM, 0x1607), .driver_data = NETXTREME_E_VF_HV },
+ { PCI_VDEVICE(BROADCOM, 0x1608), .driver_data = NETXTREME_E_VF_HV },
{ PCI_VDEVICE(BROADCOM, 0x1609), .driver_data = NETXTREME_E_VF },
+ { PCI_VDEVICE(BROADCOM, 0x16bd), .driver_data = NETXTREME_E_VF_HV },
{ PCI_VDEVICE(BROADCOM, 0x16c1), .driver_data = NETXTREME_E_VF },
+ { PCI_VDEVICE(BROADCOM, 0x16c2), .driver_data = NETXTREME_C_VF_HV },
+ { PCI_VDEVICE(BROADCOM, 0x16c3), .driver_data = NETXTREME_C_VF_HV },
+ { PCI_VDEVICE(BROADCOM, 0x16c4), .driver_data = NETXTREME_E_VF_HV },
+ { PCI_VDEVICE(BROADCOM, 0x16c5), .driver_data = NETXTREME_E_VF_HV },
{ PCI_VDEVICE(BROADCOM, 0x16cb), .driver_data = NETXTREME_C_VF },
{ PCI_VDEVICE(BROADCOM, 0x16d3), .driver_data = NETXTREME_E_VF },
{ PCI_VDEVICE(BROADCOM, 0x16dc), .driver_data = NETXTREME_E_VF },
{ PCI_VDEVICE(BROADCOM, 0x16e1), .driver_data = NETXTREME_C_VF },
{ PCI_VDEVICE(BROADCOM, 0x16e5), .driver_data = NETXTREME_C_VF },
+ { PCI_VDEVICE(BROADCOM, 0x16e6), .driver_data = NETXTREME_C_VF_HV },
{ PCI_VDEVICE(BROADCOM, 0x1806), .driver_data = NETXTREME_E_P5_VF },
{ PCI_VDEVICE(BROADCOM, 0x1807), .driver_data = NETXTREME_E_P5_VF },
+ { PCI_VDEVICE(BROADCOM, 0x1808), .driver_data = NETXTREME_E_P5_VF_HV },
+ { PCI_VDEVICE(BROADCOM, 0x1809), .driver_data = NETXTREME_E_P5_VF_HV },
{ PCI_VDEVICE(BROADCOM, 0xd800), .driver_data = NETXTREME_S_VF },
#endif
{ 0 }
@@ -263,7 +279,9 @@ static struct workqueue_struct *bnxt_pf_wq;
static bool bnxt_vf_pciid(enum board_idx idx)
{
return (idx == NETXTREME_C_VF || idx == NETXTREME_E_VF ||
- idx == NETXTREME_S_VF || idx == NETXTREME_E_P5_VF);
+ idx == NETXTREME_S_VF || idx == NETXTREME_C_VF_HV ||
+ idx == NETXTREME_E_VF_HV || idx == NETXTREME_E_P5_VF ||
+ idx == NETXTREME_E_P5_VF_HV);
}
#define DB_CP_REARM_FLAGS (DB_KEY_CP | DB_IDX_VALID)
@@ -1731,14 +1749,16 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
cons = rxcmp->rx_cmp_opaque;
if (unlikely(cons != rxr->rx_next_cons)) {
- int rc1 = bnxt_discard_rx(bp, cpr, raw_cons, rxcmp);
+ int rc1 = bnxt_discard_rx(bp, cpr, &tmp_raw_cons, rxcmp);
/* 0xffff is forced error, don't print it */
if (rxr->rx_next_cons != 0xffff)
netdev_warn(bp->dev, "RX cons %x != expected cons %x\n",
cons, rxr->rx_next_cons);
bnxt_sched_reset(bp, rxr);
- return rc1;
+ if (rc1)
+ return rc1;
+ goto next_rx_no_prod_no_len;
}
rx_buf = &rxr->rx_buf_ring[cons];
data = rx_buf->data;
@@ -6814,14 +6834,7 @@ static int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp)
static void bnxt_hwrm_set_pg_attr(struct bnxt_ring_mem_info *rmem, u8 *pg_attr,
__le64 *pg_dir)
{
- u8 pg_size = 0;
-
- if (BNXT_PAGE_SHIFT == 13)
- pg_size = 1 << 4;
- else if (BNXT_PAGE_SIZE == 16)
- pg_size = 2 << 4;
-
- *pg_attr = pg_size;
+ BNXT_SET_CTX_PAGE_ATTR(*pg_attr);
if (rmem->depth >= 1) {
if (rmem->depth == 2)
*pg_attr |= 2;
@@ -9546,7 +9559,9 @@ static ssize_t bnxt_show_temp(struct device *dev,
if (!rc)
len = sprintf(buf, "%u\n", resp->temp * 1000); /* display millidegree */
mutex_unlock(&bp->hwrm_cmd_lock);
- return rc ?: len;
+ if (rc)
+ return rc;
+ return len;
}
static SENSOR_DEVICE_ATTR(temp1_input, 0444, bnxt_show_temp, NULL, 0);
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index e4e926c..a95c5af 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -1440,6 +1440,16 @@ struct bnxt_ctx_pg_info {
#define BNXT_MAX_TQM_RINGS \
(BNXT_MAX_TQM_SP_RINGS + BNXT_MAX_TQM_FP_RINGS)
+#define BNXT_SET_CTX_PAGE_ATTR(attr) \
+do { \
+ if (BNXT_PAGE_SIZE == 0x2000) \
+ attr = FUNC_BACKING_STORE_CFG_REQ_SRQ_PG_SIZE_PG_8K; \
+ else if (BNXT_PAGE_SIZE == 0x10000) \
+ attr = FUNC_BACKING_STORE_CFG_REQ_QPC_PG_SIZE_PG_64K; \
+ else \
+ attr = FUNC_BACKING_STORE_CFG_REQ_QPC_PG_SIZE_PG_4K; \
+} while (0)
+
struct bnxt_ctx_mem_info {
u32 qp_max_entries;
u16 qp_min_qp1_entries;
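BNXT_SET_CTX_PAGE_ATTR above selects the backing-store page-size encoding from the kernel's page size at build time. A compile-time selection of that shape can be sketched as below; the encoding values and names are local to this example, not the firmware ABI constants.

    #include <stdio.h>

    /* Illustrative page-size encodings; real values come from the firmware ABI. */
    #define PG_SIZE_PG_4K  0
    #define PG_SIZE_PG_8K  1
    #define PG_SIZE_PG_64K 2

    #define CTX_PAGE_SIZE 0x2000   /* pretend this build uses 8K pages */

    /* Pick the attribute matching the build-time page size; anything that is
     * neither 8K nor 64K falls back to the 4K encoding. */
    #define SET_CTX_PAGE_ATTR(attr)              \
    do {                                         \
        if (CTX_PAGE_SIZE == 0x2000)             \
            (attr) = PG_SIZE_PG_8K;              \
        else if (CTX_PAGE_SIZE == 0x10000)       \
            (attr) = PG_SIZE_PG_64K;             \
        else                                     \
            (attr) = PG_SIZE_PG_4K;              \
    } while (0)

    int main(void)
    {
        unsigned char attr;

        SET_CTX_PAGE_ATTR(attr);
        printf("page attr encoding: %u\n", attr);
        return 0;
    }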
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 286f034..390f45e 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -3111,6 +3111,9 @@ static void gem_prog_cmp_regs(struct macb *bp, struct ethtool_rx_flow_spec *fs)
bool cmp_b = false;
bool cmp_c = false;
+ if (!macb_is_gem(bp))
+ return;
+
tp4sp_v = &(fs->h_u.tcp_ip4_spec);
tp4sp_m = &(fs->m_u.tcp_ip4_spec);
@@ -3479,6 +3482,7 @@ static void macb_restore_features(struct macb *bp)
{
struct net_device *netdev = bp->dev;
netdev_features_t features = netdev->features;
+ struct ethtool_rx_fs_item *item;
/* TX checksum offload */
macb_set_txcsum_feature(bp, features);
@@ -3487,6 +3491,9 @@ static void macb_restore_features(struct macb *bp)
macb_set_rxcsum_feature(bp, features);
/* RX Flow Filters */
+ list_for_each_entry(item, &bp->rx_fs_list.list, list)
+ gem_prog_cmp_regs(bp, &item->fs);
+
macb_set_rxflow_feature(bp, features);
}
@@ -3770,6 +3777,7 @@ static int macb_init(struct platform_device *pdev)
reg = gem_readl(bp, DCFG8);
bp->max_tuples = min((GEM_BFEXT(SCR2CMP, reg) / 3),
GEM_BFEXT(T2SCR, reg));
+ INIT_LIST_HEAD(&bp->rx_fs_list.list);
if (bp->max_tuples > 0) {
/* also needs one ethtype match to check IPv4 */
if (GEM_BFEXT(SCR2ETH, reg) > 0) {
@@ -3780,7 +3788,6 @@ static int macb_init(struct platform_device *pdev)
/* Filtering is supported in hw but don't enable it in kernel now */
dev->hw_features |= NETIF_F_NTUPLE;
/* init Rx flow definitions */
- INIT_LIST_HEAD(&bp->rx_fs_list.list);
bp->rx_fs_list.count = 0;
spin_lock_init(&bp->rx_fs_lock);
} else
diff --git a/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_regs.h b/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_regs.h
index e6d4ad9..3f1c189 100644
--- a/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_regs.h
+++ b/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_regs.h
@@ -521,7 +521,7 @@
#define CN23XX_BAR1_INDEX_OFFSET 3
#define CN23XX_PEM_BAR1_INDEX_REG(port, idx) \
- (CN23XX_PEM_BAR1_INDEX_START + ((port) << CN23XX_PEM_OFFSET) + \
+ (CN23XX_PEM_BAR1_INDEX_START + (((u64)port) << CN23XX_PEM_OFFSET) + \
((idx) << CN23XX_BAR1_INDEX_OFFSET))
/*############################ DPI #########################*/
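The register-offset fix above casts the port number to a 64-bit type before shifting, so the shift happens in 64-bit arithmetic rather than in a 32-bit int where a large shift would overflow. A short demonstration (the shift amount is chosen only to make the point):

    #include <stdint.h>
    #include <stdio.h>
    #include <inttypes.h>

    int main(void)
    {
        uint32_t port = 3;
        int shift = 36;   /* offsets of this size appear in wide register maps */

        /* Wrong: `port << shift` would be evaluated in 32-bit arithmetic and
         * overflow (undefined behaviour) before being widened to 64 bits. */

        /* Right: widen first, then shift in 64-bit arithmetic. */
        uint64_t offset = (uint64_t)port << shift;

        printf("offset = 0x%" PRIx64 "\n", offset);
        return 0;
    }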
diff --git a/drivers/net/ethernet/cavium/liquidio/cn66xx_regs.h b/drivers/net/ethernet/cavium/liquidio/cn66xx_regs.h
index b248966..7aad40b 100644
--- a/drivers/net/ethernet/cavium/liquidio/cn66xx_regs.h
+++ b/drivers/net/ethernet/cavium/liquidio/cn66xx_regs.h
@@ -412,7 +412,7 @@
| CN6XXX_INTR_M0UNWI_ERR \
| CN6XXX_INTR_M1UPB0_ERR \
| CN6XXX_INTR_M1UPWI_ERR \
- | CN6XXX_INTR_M1UPB0_ERR \
+ | CN6XXX_INTR_M1UNB0_ERR \
| CN6XXX_INTR_M1UNWI_ERR \
| CN6XXX_INTR_INSTR_DB_OF_ERR \
| CN6XXX_INTR_SLIST_DB_OF_ERR \
diff --git a/drivers/net/ethernet/cavium/liquidio/lio_main.c b/drivers/net/ethernet/cavium/liquidio/lio_main.c
index 7d00d3a..e0d18e9 100644
--- a/drivers/net/ethernet/cavium/liquidio/lio_main.c
+++ b/drivers/net/ethernet/cavium/liquidio/lio_main.c
@@ -1153,7 +1153,7 @@ static void octeon_destroy_resources(struct octeon_device *oct)
* @lio: per-network private data
* @start_stop: whether to start or stop
*/
-static void send_rx_ctrl_cmd(struct lio *lio, int start_stop)
+static int send_rx_ctrl_cmd(struct lio *lio, int start_stop)
{
struct octeon_soft_command *sc;
union octnet_cmd *ncmd;
@@ -1161,15 +1161,15 @@ static void send_rx_ctrl_cmd(struct lio *lio, int start_stop)
int retval;
if (oct->props[lio->ifidx].rx_on == start_stop)
- return;
+ return 0;
sc = (struct octeon_soft_command *)
octeon_alloc_soft_command(oct, OCTNET_CMD_SIZE,
16, 0);
if (!sc) {
netif_info(lio, rx_err, lio->netdev,
- "Failed to allocate octeon_soft_command\n");
- return;
+ "Failed to allocate octeon_soft_command struct\n");
+ return -ENOMEM;
}
ncmd = (union octnet_cmd *)sc->virtdptr;
@@ -1192,18 +1192,19 @@ static void send_rx_ctrl_cmd(struct lio *lio, int start_stop)
if (retval == IQ_SEND_FAILED) {
netif_info(lio, rx_err, lio->netdev, "Failed to send RX Control message\n");
octeon_free_soft_command(oct, sc);
- return;
} else {
/* Sleep on a wait queue till the cond flag indicates that the
* response arrived or timed-out.
*/
retval = wait_for_sc_completion_timeout(oct, sc, 0);
if (retval)
- return;
+ return retval;
oct->props[lio->ifidx].rx_on = start_stop;
WRITE_ONCE(sc->caller_is_done, true);
}
+
+ return retval;
}
/**
@@ -1778,6 +1779,7 @@ static int liquidio_open(struct net_device *netdev)
struct octeon_device_priv *oct_priv =
(struct octeon_device_priv *)oct->priv;
struct napi_struct *napi, *n;
+ int ret = 0;
if (oct->props[lio->ifidx].napi_enabled == 0) {
tasklet_disable(&oct_priv->droq_tasklet);
@@ -1813,7 +1815,9 @@ static int liquidio_open(struct net_device *netdev)
netif_info(lio, ifup, lio->netdev, "Interface Open, ready for traffic\n");
/* tell Octeon to start forwarding packets to host */
- send_rx_ctrl_cmd(lio, 1);
+ ret = send_rx_ctrl_cmd(lio, 1);
+ if (ret)
+ return ret;
/* start periodical statistics fetch */
INIT_DELAYED_WORK(&lio->stats_wk.work, lio_fetch_stats);
@@ -1824,7 +1828,7 @@ static int liquidio_open(struct net_device *netdev)
dev_info(&oct->pci_dev->dev, "%s interface is opened\n",
netdev->name);
- return 0;
+ return ret;
}
/**
@@ -1838,6 +1842,7 @@ static int liquidio_stop(struct net_device *netdev)
struct octeon_device_priv *oct_priv =
(struct octeon_device_priv *)oct->priv;
struct napi_struct *napi, *n;
+ int ret = 0;
ifstate_reset(lio, LIO_IFSTATE_RUNNING);
@@ -1854,7 +1859,9 @@ static int liquidio_stop(struct net_device *netdev)
lio->link_changes++;
/* Tell Octeon that nic interface is down. */
- send_rx_ctrl_cmd(lio, 0);
+ ret = send_rx_ctrl_cmd(lio, 0);
+ if (ret)
+ return ret;
if (OCTEON_CN23XX_PF(oct)) {
if (!oct->msix_on)
@@ -1889,7 +1896,7 @@ static int liquidio_stop(struct net_device *netdev)
dev_info(&oct->pci_dev->dev, "%s interface is stopped\n", netdev->name);
- return 0;
+ return ret;
}
/**
diff --git a/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c b/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c
index 103440f..226a784 100644
--- a/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c
+++ b/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c
@@ -595,7 +595,7 @@ static void octeon_destroy_resources(struct octeon_device *oct)
* @lio: per-network private data
* @start_stop: whether to start or stop
*/
-static void send_rx_ctrl_cmd(struct lio *lio, int start_stop)
+static int send_rx_ctrl_cmd(struct lio *lio, int start_stop)
{
struct octeon_device *oct = (struct octeon_device *)lio->oct_dev;
struct octeon_soft_command *sc;
@@ -603,11 +603,16 @@ static void send_rx_ctrl_cmd(struct lio *lio, int start_stop)
int retval;
if (oct->props[lio->ifidx].rx_on == start_stop)
- return;
+ return 0;
sc = (struct octeon_soft_command *)
octeon_alloc_soft_command(oct, OCTNET_CMD_SIZE,
16, 0);
+ if (!sc) {
+ netif_info(lio, rx_err, lio->netdev,
+ "Failed to allocate octeon_soft_command struct\n");
+ return -ENOMEM;
+ }
ncmd = (union octnet_cmd *)sc->virtdptr;
@@ -635,11 +640,13 @@ static void send_rx_ctrl_cmd(struct lio *lio, int start_stop)
*/
retval = wait_for_sc_completion_timeout(oct, sc, 0);
if (retval)
- return;
+ return retval;
oct->props[lio->ifidx].rx_on = start_stop;
WRITE_ONCE(sc->caller_is_done, true);
}
+
+ return retval;
}
/**
@@ -906,6 +913,7 @@ static int liquidio_open(struct net_device *netdev)
struct octeon_device_priv *oct_priv =
(struct octeon_device_priv *)oct->priv;
struct napi_struct *napi, *n;
+ int ret = 0;
if (!oct->props[lio->ifidx].napi_enabled) {
tasklet_disable(&oct_priv->droq_tasklet);
@@ -932,11 +940,13 @@ static int liquidio_open(struct net_device *netdev)
(LIQUIDIO_NDEV_STATS_POLL_TIME_MS));
/* tell Octeon to start forwarding packets to host */
- send_rx_ctrl_cmd(lio, 1);
+ ret = send_rx_ctrl_cmd(lio, 1);
+ if (ret)
+ return ret;
dev_info(&oct->pci_dev->dev, "%s interface is opened\n", netdev->name);
- return 0;
+ return ret;
}
/**
@@ -950,9 +960,12 @@ static int liquidio_stop(struct net_device *netdev)
struct octeon_device_priv *oct_priv =
(struct octeon_device_priv *)oct->priv;
struct napi_struct *napi, *n;
+ int ret = 0;
/* tell Octeon to stop forwarding packets to host */
- send_rx_ctrl_cmd(lio, 0);
+ ret = send_rx_ctrl_cmd(lio, 0);
+ if (ret)
+ return ret;
netif_info(lio, ifdown, lio->netdev, "Stopping interface!\n");
/* Inform that netif carrier is down */
@@ -986,7 +999,7 @@ static int liquidio_stop(struct net_device *netdev)
dev_info(&oct->pci_dev->dev, "%s interface is stopped\n", netdev->name);
- return 0;
+ return ret;
}
/**
diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_queues.c b/drivers/net/ethernet/cavium/thunder/nicvf_queues.c
index 7a141ce..0ccd5b4 100644
--- a/drivers/net/ethernet/cavium/thunder/nicvf_queues.c
+++ b/drivers/net/ethernet/cavium/thunder/nicvf_queues.c
@@ -776,7 +776,7 @@ static void nicvf_rcv_queue_config(struct nicvf *nic, struct queue_set *qs,
mbx.rq.msg = NIC_MBOX_MSG_RQ_CFG;
mbx.rq.qs_num = qs->vnic_id;
mbx.rq.rq_num = qidx;
- mbx.rq.cfg = (rq->caching << 26) | (rq->cq_qs << 19) |
+ mbx.rq.cfg = ((u64)rq->caching << 26) | (rq->cq_qs << 19) |
(rq->cq_idx << 16) | (rq->cont_rbdr_qs << 9) |
(rq->cont_qs_rbdr_idx << 8) |
(rq->start_rbdr_qs << 1) | (rq->start_qs_rbdr_idx);
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c b/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c
index 75474f8..c5b0e72 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c
@@ -1794,11 +1794,25 @@ int cudbg_collect_sge_indirect(struct cudbg_init *pdbg_init,
struct cudbg_buffer temp_buff = { 0 };
struct sge_qbase_reg_field *sge_qbase;
struct ireg_buf *ch_sge_dbg;
+ u8 padap_running = 0;
int i, rc;
+ u32 size;
- rc = cudbg_get_buff(pdbg_init, dbg_buff,
- sizeof(*ch_sge_dbg) * 2 + sizeof(*sge_qbase),
- &temp_buff);
+ /* Accessing SGE_QBASE_MAP[0-3] and SGE_QBASE_INDEX regs can
+ * lead to SGE missing doorbells under heavy traffic. So, only
+ * collect them when adapter is idle.
+ */
+ for_each_port(padap, i) {
+ padap_running = netif_running(padap->port[i]);
+ if (padap_running)
+ break;
+ }
+
+ size = sizeof(*ch_sge_dbg) * 2;
+ if (!padap_running)
+ size += sizeof(*sge_qbase);
+
+ rc = cudbg_get_buff(pdbg_init, dbg_buff, size, &temp_buff);
if (rc)
return rc;
@@ -1820,7 +1834,8 @@ int cudbg_collect_sge_indirect(struct cudbg_init *pdbg_init,
ch_sge_dbg++;
}
- if (CHELSIO_CHIP_VERSION(padap->params.chip) > CHELSIO_T5) {
+ if (CHELSIO_CHIP_VERSION(padap->params.chip) > CHELSIO_T5 &&
+ !padap_running) {
sge_qbase = (struct sge_qbase_reg_field *)ch_sge_dbg;
/* 1 addr reg SGE_QBASE_INDEX and 4 data reg
* SGE_QBASE_MAP[0-3]
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c
index 17410fe..7d49fd4 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c
@@ -2671,7 +2671,7 @@ do { \
seq_printf(seq, "%-12s", s); \
for (i = 0; i < n; ++i) \
seq_printf(seq, " %16" fmt_spec, v); \
- seq_putc(seq, '\n'); \
+ seq_putc(seq, '\n'); \
} while (0)
#define S(s, v) S3("s", s, v)
#define T3(fmt_spec, s, v) S3(fmt_spec, s, tx[i].v)
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
index 83b4644..e664e05 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
@@ -174,31 +174,31 @@ static void set_nat_params(struct adapter *adap, struct filter_entry *f,
WORD_MASK, f->fs.nat_lip[15] |
f->fs.nat_lip[14] << 8 |
f->fs.nat_lip[13] << 16 |
- f->fs.nat_lip[12] << 24, 1);
+ (u64)f->fs.nat_lip[12] << 24, 1);
set_tcb_field(adap, f, tid, TCB_SND_UNA_RAW_W + 1,
WORD_MASK, f->fs.nat_lip[11] |
f->fs.nat_lip[10] << 8 |
f->fs.nat_lip[9] << 16 |
- f->fs.nat_lip[8] << 24, 1);
+ (u64)f->fs.nat_lip[8] << 24, 1);
set_tcb_field(adap, f, tid, TCB_SND_UNA_RAW_W + 2,
WORD_MASK, f->fs.nat_lip[7] |
f->fs.nat_lip[6] << 8 |
f->fs.nat_lip[5] << 16 |
- f->fs.nat_lip[4] << 24, 1);
+ (u64)f->fs.nat_lip[4] << 24, 1);
set_tcb_field(adap, f, tid, TCB_SND_UNA_RAW_W + 3,
WORD_MASK, f->fs.nat_lip[3] |
f->fs.nat_lip[2] << 8 |
f->fs.nat_lip[1] << 16 |
- f->fs.nat_lip[0] << 24, 1);
+ (u64)f->fs.nat_lip[0] << 24, 1);
} else {
set_tcb_field(adap, f, tid, TCB_RX_FRAG3_LEN_RAW_W,
WORD_MASK, f->fs.nat_lip[3] |
f->fs.nat_lip[2] << 8 |
f->fs.nat_lip[1] << 16 |
- f->fs.nat_lip[0] << 24, 1);
+ (u64)f->fs.nat_lip[0] << 24, 1);
}
}
@@ -208,25 +208,25 @@ static void set_nat_params(struct adapter *adap, struct filter_entry *f,
WORD_MASK, f->fs.nat_fip[15] |
f->fs.nat_fip[14] << 8 |
f->fs.nat_fip[13] << 16 |
- f->fs.nat_fip[12] << 24, 1);
+ (u64)f->fs.nat_fip[12] << 24, 1);
set_tcb_field(adap, f, tid, TCB_RX_FRAG2_PTR_RAW_W + 1,
WORD_MASK, f->fs.nat_fip[11] |
f->fs.nat_fip[10] << 8 |
f->fs.nat_fip[9] << 16 |
- f->fs.nat_fip[8] << 24, 1);
+ (u64)f->fs.nat_fip[8] << 24, 1);
set_tcb_field(adap, f, tid, TCB_RX_FRAG2_PTR_RAW_W + 2,
WORD_MASK, f->fs.nat_fip[7] |
f->fs.nat_fip[6] << 8 |
f->fs.nat_fip[5] << 16 |
- f->fs.nat_fip[4] << 24, 1);
+ (u64)f->fs.nat_fip[4] << 24, 1);
set_tcb_field(adap, f, tid, TCB_RX_FRAG2_PTR_RAW_W + 3,
WORD_MASK, f->fs.nat_fip[3] |
f->fs.nat_fip[2] << 8 |
f->fs.nat_fip[1] << 16 |
- f->fs.nat_fip[0] << 24, 1);
+ (u64)f->fs.nat_fip[0] << 24, 1);
} else {
set_tcb_field(adap, f, tid,
@@ -234,13 +234,13 @@ static void set_nat_params(struct adapter *adap, struct filter_entry *f,
WORD_MASK, f->fs.nat_fip[3] |
f->fs.nat_fip[2] << 8 |
f->fs.nat_fip[1] << 16 |
- f->fs.nat_fip[0] << 24, 1);
+ (u64)f->fs.nat_fip[0] << 24, 1);
}
}
set_tcb_field(adap, f, tid, TCB_PDU_HDR_LEN_W, WORD_MASK,
(dp ? (nat_lp[1] | nat_lp[0] << 8) : 0) |
- (sp ? (nat_fp[1] << 16 | nat_fp[0] << 24) : 0),
+ (sp ? (nat_fp[1] << 16 | (u64)nat_fp[0] << 24) : 0),
1);
}
@@ -1042,7 +1042,7 @@ void clear_all_filters(struct adapter *adapter)
cxgb4_del_filter(dev, f->tid, &f->fs);
}
- sb = t4_read_reg(adapter, LE_DB_SRVR_START_INDEX_A);
+ sb = adapter->tids.stid_base;
for (i = 0; i < sb; i++) {
f = (struct filter_entry *)adapter->tids.tid_tab[i];
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
index 7fd264a..23c13f3 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
@@ -6484,9 +6484,9 @@ static void cxgb4_ktls_dev_del(struct net_device *netdev,
adap->uld[CXGB4_ULD_KTLS].tlsdev_ops->tls_dev_del(netdev, tls_ctx,
direction);
- cxgb4_set_ktls_feature(adap, FW_PARAMS_PARAM_DEV_KTLS_HW_DISABLE);
out_unlock:
+ cxgb4_set_ktls_feature(adap, FW_PARAMS_PARAM_DEV_KTLS_HW_DISABLE);
mutex_unlock(&uld_mutex);
}
diff --git a/drivers/net/ethernet/chelsio/cxgb4/sge.c b/drivers/net/ethernet/chelsio/cxgb4/sge.c
index 3334c9e..5463012 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/sge.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/sge.c
@@ -2559,12 +2559,12 @@ int cxgb4_ethofld_send_flowc(struct net_device *dev, u32 eotid, u32 tc)
spin_lock_bh(&eosw_txq->lock);
if (tc != FW_SCHED_CLS_NONE) {
if (eosw_txq->state != CXGB4_EO_STATE_CLOSED)
- goto out_unlock;
+ goto out_free_skb;
next_state = CXGB4_EO_STATE_FLOWC_OPEN_SEND;
} else {
if (eosw_txq->state != CXGB4_EO_STATE_ACTIVE)
- goto out_unlock;
+ goto out_free_skb;
next_state = CXGB4_EO_STATE_FLOWC_CLOSE_SEND;
}
@@ -2600,17 +2600,19 @@ int cxgb4_ethofld_send_flowc(struct net_device *dev, u32 eotid, u32 tc)
eosw_txq_flush_pending_skbs(eosw_txq);
ret = eosw_txq_enqueue(eosw_txq, skb);
- if (ret) {
- dev_consume_skb_any(skb);
- goto out_unlock;
- }
+ if (ret)
+ goto out_free_skb;
eosw_txq->state = next_state;
eosw_txq->flowc_idx = eosw_txq->pidx;
eosw_txq_advance(eosw_txq, 1);
ethofld_xmit(dev, eosw_txq);
-out_unlock:
+ spin_unlock_bh(&eosw_txq->lock);
+ return 0;
+
+out_free_skb:
+ dev_consume_skb_any(skb);
spin_unlock_bh(&eosw_txq->lock);
return ret;
}
diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
index 98d01a7..581670d 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
@@ -2090,7 +2090,8 @@ void t4_get_regs(struct adapter *adap, void *buf, size_t buf_size)
0x1190, 0x1194,
0x11a0, 0x11a4,
0x11b0, 0x11b4,
- 0x11fc, 0x1274,
+ 0x11fc, 0x123c,
+ 0x1254, 0x1274,
0x1280, 0x133c,
0x1800, 0x18fc,
0x3000, 0x302c,
diff --git a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c
index 423d6d7..f935382 100644
--- a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c
+++ b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c
@@ -59,6 +59,7 @@ static int chcr_get_nfrags_to_send(struct sk_buff *skb, u32 start, u32 len)
}
static int chcr_init_tcb_fields(struct chcr_ktls_info *tx_info);
+static void clear_conn_resources(struct chcr_ktls_info *tx_info);
/*
* chcr_ktls_save_keys: calculate and save crypto keys.
* @tx_info - driver specific tls info.
@@ -355,18 +356,6 @@ static int chcr_set_tcb_field(struct chcr_ktls_info *tx_info, u16 word,
}
/*
- * chcr_ktls_mark_tcb_close: mark tcb state to CLOSE
- * @tx_info - driver specific tls info.
- * return: NET_TX_OK/NET_XMIT_DROP.
- */
-static int chcr_ktls_mark_tcb_close(struct chcr_ktls_info *tx_info)
-{
- return chcr_set_tcb_field(tx_info, TCB_T_STATE_W,
- TCB_T_STATE_V(TCB_T_STATE_M),
- CHCR_TCB_STATE_CLOSED, 1);
-}
-
-/*
* chcr_ktls_dev_del: call back for tls_dev_del.
* Remove the tid and l2t entry and close the connection.
* it per connection basis.
@@ -382,10 +371,14 @@ static void chcr_ktls_dev_del(struct net_device *netdev,
chcr_get_ktls_tx_context(tls_ctx);
struct chcr_ktls_info *tx_info = tx_ctx->chcr_info;
struct ch_ktls_port_stats_debug *port_stats;
+ struct chcr_ktls_uld_ctx *u_ctx;
if (!tx_info)
return;
+ u_ctx = tx_info->adap->uld[CXGB4_ULD_KTLS].handle;
+ if (u_ctx && u_ctx->detach)
+ return;
/* clear l2t entry */
if (tx_info->l2te)
cxgb4_l2t_release(tx_info->l2te);
@@ -400,10 +393,10 @@ static void chcr_ktls_dev_del(struct net_device *netdev,
/* clear tid */
if (tx_info->tid != -1) {
- /* clear tcb state and then release tid */
- chcr_ktls_mark_tcb_close(tx_info);
cxgb4_remove_tid(&tx_info->adap->tids, tx_info->tx_chan,
tx_info->tid, tx_info->ip_family);
+
+ xa_erase(&u_ctx->tid_list, tx_info->tid);
}
port_stats = &tx_info->adap->ch_ktls_stats.ktls_port[tx_info->port_id];
@@ -431,6 +424,7 @@ static int chcr_ktls_dev_add(struct net_device *netdev, struct sock *sk,
struct tls_context *tls_ctx = tls_get_ctx(sk);
struct ch_ktls_port_stats_debug *port_stats;
struct chcr_ktls_ofld_ctx_tx *tx_ctx;
+ struct chcr_ktls_uld_ctx *u_ctx;
struct chcr_ktls_info *tx_info;
struct dst_entry *dst;
struct adapter *adap;
@@ -445,6 +439,7 @@ static int chcr_ktls_dev_add(struct net_device *netdev, struct sock *sk,
adap = pi->adapter;
port_stats = &adap->ch_ktls_stats.ktls_port[pi->port_id];
atomic64_inc(&port_stats->ktls_tx_connection_open);
+ u_ctx = adap->uld[CXGB4_ULD_KTLS].handle;
if (direction == TLS_OFFLOAD_CTX_DIR_RX) {
pr_err("not expecting for RX direction\n");
@@ -454,6 +449,9 @@ static int chcr_ktls_dev_add(struct net_device *netdev, struct sock *sk,
if (tx_ctx->chcr_info)
goto out;
+ if (u_ctx && u_ctx->detach)
+ goto out;
+
tx_info = kvzalloc(sizeof(*tx_info), GFP_KERNEL);
if (!tx_info)
goto out;
@@ -579,7 +577,6 @@ static int chcr_ktls_dev_add(struct net_device *netdev, struct sock *sk,
return 0;
free_tid:
- chcr_ktls_mark_tcb_close(tx_info);
#if IS_ENABLED(CONFIG_IPV6)
/* clear clip entry */
if (tx_info->ip_family == AF_INET6)
@@ -590,6 +587,8 @@ static int chcr_ktls_dev_add(struct net_device *netdev, struct sock *sk,
cxgb4_remove_tid(&tx_info->adap->tids, tx_info->tx_chan,
tx_info->tid, tx_info->ip_family);
+ xa_erase(&u_ctx->tid_list, tx_info->tid);
+
put_module:
/* release module refcount */
module_put(THIS_MODULE);
@@ -654,8 +653,12 @@ static int chcr_ktls_cpl_act_open_rpl(struct adapter *adap,
{
const struct cpl_act_open_rpl *p = (void *)input;
struct chcr_ktls_info *tx_info = NULL;
+ struct chcr_ktls_ofld_ctx_tx *tx_ctx;
+ struct chcr_ktls_uld_ctx *u_ctx;
unsigned int atid, tid, status;
+ struct tls_context *tls_ctx;
struct tid_info *t;
+ int ret = 0;
tid = GET_TID(p);
status = AOPEN_STATUS_G(ntohl(p->atid_status));
@@ -677,10 +680,6 @@ static int chcr_ktls_cpl_act_open_rpl(struct adapter *adap,
if (tx_info->pending_close) {
spin_unlock(&tx_info->lock);
if (!status) {
- /* it's a late success, tcb status is establised,
- * mark it close.
- */
- chcr_ktls_mark_tcb_close(tx_info);
cxgb4_remove_tid(&tx_info->adap->tids, tx_info->tx_chan,
tid, tx_info->ip_family);
}
@@ -691,14 +690,29 @@ static int chcr_ktls_cpl_act_open_rpl(struct adapter *adap,
if (!status) {
tx_info->tid = tid;
cxgb4_insert_tid(t, tx_info, tx_info->tid, tx_info->ip_family);
+ /* Adding tid */
+ tls_ctx = tls_get_ctx(tx_info->sk);
+ tx_ctx = chcr_get_ktls_tx_context(tls_ctx);
+ u_ctx = adap->uld[CXGB4_ULD_KTLS].handle;
+ if (u_ctx) {
+ ret = xa_insert_bh(&u_ctx->tid_list, tid, tx_ctx,
+ GFP_NOWAIT);
+ if (ret < 0) {
+ pr_err("%s: Failed to allocate tid XA entry = %d\n",
+ __func__, tx_info->tid);
+ tx_info->open_state = CH_KTLS_OPEN_FAILURE;
+ goto out;
+ }
+ }
tx_info->open_state = CH_KTLS_OPEN_SUCCESS;
} else {
tx_info->open_state = CH_KTLS_OPEN_FAILURE;
}
+out:
spin_unlock(&tx_info->lock);
complete(&tx_info->completion);
- return 0;
+ return ret;
}
/*
@@ -1669,54 +1683,6 @@ static void chcr_ktls_copy_record_in_skb(struct sk_buff *nskb,
}
/*
- * chcr_ktls_update_snd_una: Reset the SEND_UNA. It will be done to avoid
- * sending the same segment again. It will discard the segment which is before
- * the current tx max.
- * @tx_info - driver specific tls info.
- * @q - TX queue.
- * return: NET_TX_OK/NET_XMIT_DROP.
- */
-static int chcr_ktls_update_snd_una(struct chcr_ktls_info *tx_info,
- struct sge_eth_txq *q)
-{
- struct fw_ulptx_wr *wr;
- unsigned int ndesc;
- int credits;
- void *pos;
- u32 len;
-
- len = sizeof(*wr) + roundup(CHCR_SET_TCB_FIELD_LEN, 16);
- ndesc = DIV_ROUND_UP(len, 64);
-
- credits = chcr_txq_avail(&q->q) - ndesc;
- if (unlikely(credits < 0)) {
- chcr_eth_txq_stop(q);
- return NETDEV_TX_BUSY;
- }
-
- pos = &q->q.desc[q->q.pidx];
-
- wr = pos;
- /* ULPTX wr */
- wr->op_to_compl = htonl(FW_WR_OP_V(FW_ULPTX_WR));
- wr->cookie = 0;
- /* fill len in wr field */
- wr->flowid_len16 = htonl(FW_WR_LEN16_V(DIV_ROUND_UP(len, 16)));
-
- pos += sizeof(*wr);
-
- pos = chcr_write_cpl_set_tcb_ulp(tx_info, q, tx_info->tid, pos,
- TCB_SND_UNA_RAW_W,
- TCB_SND_UNA_RAW_V(TCB_SND_UNA_RAW_M),
- TCB_SND_UNA_RAW_V(0), 0);
-
- chcr_txq_advance(&q->q, ndesc);
- cxgb4_ring_tx_db(tx_info->adap, &q->q, ndesc);
-
- return 0;
-}
-
-/*
* chcr_end_part_handler: This handler will handle the record which
* is complete or if record's end part is received. T6 adapter has a issue that
* it can't send out TAG with partial record so if its an end part then we have
@@ -1740,7 +1706,9 @@ static int chcr_end_part_handler(struct chcr_ktls_info *tx_info,
struct sge_eth_txq *q, u32 skb_offset,
u32 tls_end_offset, bool last_wr)
{
+ bool free_skb_if_tx_fails = false;
struct sk_buff *nskb = NULL;
+
/* check if it is a complete record */
if (tls_end_offset == record->len) {
nskb = skb;
@@ -1763,6 +1731,8 @@ static int chcr_end_part_handler(struct chcr_ktls_info *tx_info,
if (last_wr)
dev_kfree_skb_any(skb);
+ else
+ free_skb_if_tx_fails = true;
last_wr = true;
@@ -1774,6 +1744,8 @@ static int chcr_end_part_handler(struct chcr_ktls_info *tx_info,
record->num_frags,
(last_wr && tcp_push_no_fin),
mss)) {
+ if (free_skb_if_tx_fails)
+ dev_kfree_skb_any(skb);
goto out;
}
tx_info->prev_seq = record->end_seq;
@@ -1910,11 +1882,6 @@ static int chcr_short_record_handler(struct chcr_ktls_info *tx_info,
/* reset tcp_seq as per the prior_data_required len */
tcp_seq -= prior_data_len;
}
- /* reset snd una, so the middle record won't send the already
- * sent part.
- */
- if (chcr_ktls_update_snd_una(tx_info, q))
- goto out;
atomic64_inc(&tx_info->adap->ch_ktls_stats.ktls_tx_middle_pkts);
} else {
atomic64_inc(&tx_info->adap->ch_ktls_stats.ktls_tx_start_pkts);
@@ -2015,12 +1982,11 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev)
* we will send the complete record again.
*/
+ spin_lock_irqsave(&tx_ctx->base.lock, flags);
+
do {
- int i;
cxgb4_reclaim_completed_tx(adap, &q->q, true);
- /* lock taken */
- spin_lock_irqsave(&tx_ctx->base.lock, flags);
/* fetch the tls record */
record = tls_get_record(&tx_ctx->base, tcp_seq,
&tx_info->record_no);
@@ -2079,11 +2045,11 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev)
tls_end_offset, skb_offset,
0);
- spin_unlock_irqrestore(&tx_ctx->base.lock, flags);
if (ret) {
/* free the refcount taken earlier */
if (tls_end_offset < data_len)
dev_kfree_skb_any(skb);
+ spin_unlock_irqrestore(&tx_ctx->base.lock, flags);
goto out;
}
@@ -2093,16 +2059,6 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev)
continue;
}
- /* increase page reference count of the record, so that there
- * won't be any chance of page free in middle if in case stack
- * receives ACK and try to delete the record.
- */
- for (i = 0; i < record->num_frags; i++)
- __skb_frag_ref(&record->frags[i]);
- /* lock cleared */
- spin_unlock_irqrestore(&tx_ctx->base.lock, flags);
-
-
/* if a tls record is finishing in this SKB */
if (tls_end_offset <= data_len) {
ret = chcr_end_part_handler(tx_info, skb, record,
@@ -2127,13 +2083,9 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev)
data_len = 0;
}
- /* clear the frag ref count which increased locally before */
- for (i = 0; i < record->num_frags; i++) {
- /* clear the frag ref count */
- __skb_frag_unref(&record->frags[i]);
- }
/* if any failure, come out from the loop. */
if (ret) {
+ spin_unlock_irqrestore(&tx_ctx->base.lock, flags);
if (th->fin)
dev_kfree_skb_any(skb);
@@ -2148,6 +2100,7 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev)
} while (data_len > 0);
+ spin_unlock_irqrestore(&tx_ctx->base.lock, flags);
atomic64_inc(&port_stats->ktls_tx_encrypted_packets);
atomic64_add(skb_data_len, &port_stats->ktls_tx_encrypted_bytes);
@@ -2177,6 +2130,8 @@ static void *chcr_ktls_uld_add(const struct cxgb4_lld_info *lldi)
goto out;
}
u_ctx->lldi = *lldi;
+ u_ctx->detach = false;
+ xa_init_flags(&u_ctx->tid_list, XA_FLAGS_LOCK_BH);
out:
return u_ctx;
}
@@ -2210,6 +2165,45 @@ static int chcr_ktls_uld_rx_handler(void *handle, const __be64 *rsp,
return 0;
}
+static void clear_conn_resources(struct chcr_ktls_info *tx_info)
+{
+ /* clear l2t entry */
+ if (tx_info->l2te)
+ cxgb4_l2t_release(tx_info->l2te);
+
+#if IS_ENABLED(CONFIG_IPV6)
+ /* clear clip entry */
+ if (tx_info->ip_family == AF_INET6)
+ cxgb4_clip_release(tx_info->netdev, (const u32 *)
+ &tx_info->sk->sk_v6_rcv_saddr,
+ 1);
+#endif
+
+ /* clear tid */
+ if (tx_info->tid != -1)
+ cxgb4_remove_tid(&tx_info->adap->tids, tx_info->tx_chan,
+ tx_info->tid, tx_info->ip_family);
+}
+
+static void ch_ktls_reset_all_conn(struct chcr_ktls_uld_ctx *u_ctx)
+{
+ struct ch_ktls_port_stats_debug *port_stats;
+ struct chcr_ktls_ofld_ctx_tx *tx_ctx;
+ struct chcr_ktls_info *tx_info;
+ unsigned long index;
+
+ xa_for_each(&u_ctx->tid_list, index, tx_ctx) {
+ tx_info = tx_ctx->chcr_info;
+ clear_conn_resources(tx_info);
+ port_stats = &tx_info->adap->ch_ktls_stats.ktls_port[tx_info->port_id];
+ atomic64_inc(&port_stats->ktls_tx_connection_close);
+ kvfree(tx_info);
+ tx_ctx->chcr_info = NULL;
+ /* release module refcount */
+ module_put(THIS_MODULE);
+ }
+}
+
static int chcr_ktls_uld_state_change(void *handle, enum cxgb4_state new_state)
{
struct chcr_ktls_uld_ctx *u_ctx = handle;
@@ -2226,7 +2220,10 @@ static int chcr_ktls_uld_state_change(void *handle, enum cxgb4_state new_state)
case CXGB4_STATE_DETACH:
pr_info("%s: Down\n", pci_name(u_ctx->lldi.pdev));
mutex_lock(&dev_mutex);
+ u_ctx->detach = true;
list_del(&u_ctx->entry);
+ ch_ktls_reset_all_conn(u_ctx);
+ xa_destroy(&u_ctx->tid_list);
mutex_unlock(&dev_mutex);
break;
default:
@@ -2265,6 +2262,7 @@ static void __exit chcr_ktls_exit(void)
adap = pci_get_drvdata(u_ctx->lldi.pdev);
memset(&adap->ch_ktls_stats, 0, sizeof(adap->ch_ktls_stats));
list_del(&u_ctx->entry);
+ xa_destroy(&u_ctx->tid_list);
kfree(u_ctx);
}
mutex_unlock(&dev_mutex);
diff --git a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.h b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.h
index 18b3b1f..10572dc 100644
--- a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.h
+++ b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.h
@@ -75,6 +75,8 @@ struct chcr_ktls_ofld_ctx_tx {
struct chcr_ktls_uld_ctx {
struct list_head entry;
struct cxgb4_lld_info lldi;
+ struct xarray tid_list;
+ bool detach;
};
static inline struct chcr_ktls_ofld_ctx_tx *
diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c
index 188d871..c320cc8 100644
--- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c
+++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c
@@ -1564,8 +1564,10 @@ static int chtls_pt_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
cerr = put_cmsg(msg, SOL_TLS, TLS_GET_RECORD_TYPE,
sizeof(thdr->type), &thdr->type);
- if (cerr && thdr->type != TLS_RECORD_TYPE_DATA)
- return -EIO;
+ if (cerr && thdr->type != TLS_RECORD_TYPE_DATA) {
+ copied = -EIO;
+ break;
+ }
/* don't send tls header, skip copy */
goto skip_copy;
}
diff --git a/drivers/net/ethernet/cisco/enic/enic_main.c b/drivers/net/ethernet/cisco/enic/enic_main.c
index fb269d5..548d809 100644
--- a/drivers/net/ethernet/cisco/enic/enic_main.c
+++ b/drivers/net/ethernet/cisco/enic/enic_main.c
@@ -768,7 +768,7 @@ static inline int enic_queue_wq_skb_encap(struct enic *enic, struct vnic_wq *wq,
return err;
}
-static inline void enic_queue_wq_skb(struct enic *enic,
+static inline int enic_queue_wq_skb(struct enic *enic,
struct vnic_wq *wq, struct sk_buff *skb)
{
unsigned int mss = skb_shinfo(skb)->gso_size;
@@ -814,6 +814,7 @@ static inline void enic_queue_wq_skb(struct enic *enic,
wq->to_use = buf->next;
dev_kfree_skb(skb);
}
+ return err;
}
/* netif_tx_lock held, process context with BHs disabled, or BH */
@@ -857,7 +858,8 @@ static netdev_tx_t enic_hard_start_xmit(struct sk_buff *skb,
return NETDEV_TX_BUSY;
}
- enic_queue_wq_skb(enic, wq, skb);
+ if (enic_queue_wq_skb(enic, wq, skb))
+ goto error;
if (vnic_wq_desc_avail(wq) < MAX_SKB_FRAGS + ENIC_DESC_MAX_SPLITS)
netif_tx_stop_queue(txq);
@@ -865,6 +867,7 @@ static netdev_tx_t enic_hard_start_xmit(struct sk_buff *skb,
if (!netdev_xmit_more() || netif_xmit_stopped(txq))
vnic_wq_doorbell(wq);
+error:
spin_unlock(&enic->wq_lock[txq_map]);
return NETDEV_TX_OK;
diff --git a/drivers/net/ethernet/davicom/dm9000.c b/drivers/net/ethernet/davicom/dm9000.c
index ae09cac..afc4a10 100644
--- a/drivers/net/ethernet/davicom/dm9000.c
+++ b/drivers/net/ethernet/davicom/dm9000.c
@@ -1474,8 +1474,10 @@ dm9000_probe(struct platform_device *pdev)
/* Init network device */
ndev = alloc_etherdev(sizeof(struct board_info));
- if (!ndev)
- return -ENOMEM;
+ if (!ndev) {
+ ret = -ENOMEM;
+ goto out_regulator_disable;
+ }
SET_NETDEV_DEV(ndev, &pdev->dev);
diff --git a/drivers/net/ethernet/freescale/Makefile b/drivers/net/ethernet/freescale/Makefile
index 67c4364..de7b318 100644
--- a/drivers/net/ethernet/freescale/Makefile
+++ b/drivers/net/ethernet/freescale/Makefile
@@ -24,6 +24,4 @@
obj-$(CONFIG_FSL_DPAA2_ETH) += dpaa2/
-obj-$(CONFIG_FSL_ENETC) += enetc/
-obj-$(CONFIG_FSL_ENETC_MDIO) += enetc/
-obj-$(CONFIG_FSL_ENETC_VF) += enetc/
+obj-y += enetc/
diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
index 55c28fb..960def4 100644
--- a/drivers/net/ethernet/freescale/fec_main.c
+++ b/drivers/net/ethernet/freescale/fec_main.c
@@ -3277,7 +3277,9 @@ static int fec_enet_init(struct net_device *ndev)
return ret;
}
- fec_enet_alloc_queue(ndev);
+ ret = fec_enet_alloc_queue(ndev);
+ if (ret)
+ return ret;
bd_size = (fep->total_tx_ring_size + fep->total_rx_ring_size) * dsize;
@@ -3285,7 +3287,8 @@ static int fec_enet_init(struct net_device *ndev)
cbd_base = dmam_alloc_coherent(&fep->pdev->dev, bd_size, &bd_dma,
GFP_KERNEL);
if (!cbd_base) {
- return -ENOMEM;
+ ret = -ENOMEM;
+ goto free_queue_mem;
}
/* Get the Ethernet address */
@@ -3363,6 +3366,10 @@ static int fec_enet_init(struct net_device *ndev)
fec_enet_update_ethtool_stats(ndev);
return 0;
+
+free_queue_mem:
+ fec_enet_free_queue(ndev);
+ return ret;
}
#ifdef CONFIG_OF
diff --git a/drivers/net/ethernet/freescale/gianfar.c b/drivers/net/ethernet/freescale/gianfar.c
index 4fab2ee..e4d9c4c 100644
--- a/drivers/net/ethernet/freescale/gianfar.c
+++ b/drivers/net/ethernet/freescale/gianfar.c
@@ -364,7 +364,11 @@ static void gfar_set_mac_for_addr(struct net_device *dev, int num,
static int gfar_set_mac_addr(struct net_device *dev, void *p)
{
- eth_mac_addr(dev, p);
+ int ret;
+
+ ret = eth_mac_addr(dev, p);
+ if (ret)
+ return ret;
gfar_set_mac_for_addr(dev, 0, dev->dev_addr);
diff --git a/drivers/net/ethernet/fujitsu/fmvj18x_cs.c b/drivers/net/ethernet/fujitsu/fmvj18x_cs.c
index a7b7a4a..b0c0504 100644
--- a/drivers/net/ethernet/fujitsu/fmvj18x_cs.c
+++ b/drivers/net/ethernet/fujitsu/fmvj18x_cs.c
@@ -548,8 +548,8 @@ static int fmvj18x_get_hwinfo(struct pcmcia_device *link, u_char *node_id)
base = ioremap(link->resource[2]->start, resource_size(link->resource[2]));
if (!base) {
- pcmcia_release_window(link, link->resource[2]);
- return -ENOMEM;
+ pcmcia_release_window(link, link->resource[2]);
+ return -1;
}
pcmcia_map_mem_page(link, link->resource[2], 0);
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 02e7d74..d6e3542 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -180,7 +180,7 @@ static int gve_napi_poll(struct napi_struct *napi, int budget)
/* Double check we have no extra work.
* Ensure unmask synchronizes with checking for work.
*/
- dma_rmb();
+ mb();
if (block->tx)
reschedule |= gve_tx_poll(block, -1);
if (block->rx)
@@ -220,6 +220,7 @@ static int gve_alloc_notify_blocks(struct gve_priv *priv)
int vecs_left = new_num_ntfy_blks % 2;
priv->num_ntfy_blks = new_num_ntfy_blks;
+ priv->mgmt_msix_idx = priv->num_ntfy_blks;
priv->tx_cfg.max_queues = min_t(int, priv->tx_cfg.max_queues,
vecs_per_type);
priv->rx_cfg.max_queues = min_t(int, priv->rx_cfg.max_queues,
@@ -300,20 +301,22 @@ static void gve_free_notify_blocks(struct gve_priv *priv)
{
int i;
- /* Free the irqs */
- for (i = 0; i < priv->num_ntfy_blks; i++) {
- struct gve_notify_block *block = &priv->ntfy_blocks[i];
- int msix_idx = i;
+ if (priv->msix_vectors) {
+ /* Free the irqs */
+ for (i = 0; i < priv->num_ntfy_blks; i++) {
+ struct gve_notify_block *block = &priv->ntfy_blocks[i];
+ int msix_idx = i;
- irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
- NULL);
- free_irq(priv->msix_vectors[msix_idx].vector, block);
+ irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
+ NULL);
+ free_irq(priv->msix_vectors[msix_idx].vector, block);
+ }
+ free_irq(priv->msix_vectors[priv->mgmt_msix_idx].vector, priv);
}
dma_free_coherent(&priv->pdev->dev,
priv->num_ntfy_blks * sizeof(*priv->ntfy_blocks),
priv->ntfy_blocks, priv->ntfy_block_bus);
priv->ntfy_blocks = NULL;
- free_irq(priv->msix_vectors[priv->mgmt_msix_idx].vector, priv);
pci_disable_msix(priv->pdev);
kvfree(priv->msix_vectors);
priv->msix_vectors = NULL;
diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c
index 29afc95..61c6b30 100644
--- a/drivers/net/ethernet/google/gve/gve_tx.c
+++ b/drivers/net/ethernet/google/gve/gve_tx.c
@@ -207,10 +207,12 @@ static int gve_tx_alloc_ring(struct gve_priv *priv, int idx)
goto abort_with_info;
tx->tx_fifo.qpl = gve_assign_tx_qpl(priv);
+ if (!tx->tx_fifo.qpl)
+ goto abort_with_desc;
/* map Tx FIFO */
if (gve_tx_fifo_init(priv, &tx->tx_fifo))
- goto abort_with_desc;
+ goto abort_with_qpl;
tx->q_resources =
dma_alloc_coherent(hdev,
@@ -229,6 +231,8 @@ static int gve_tx_alloc_ring(struct gve_priv *priv, int idx)
abort_with_fifo:
gve_tx_fifo_release(priv, &tx->tx_fifo);
+abort_with_qpl:
+ gve_unassign_qpl(priv, tx->tx_fifo.qpl->id);
abort_with_desc:
dma_free_coherent(hdev, bytes, tx->desc, tx->bus);
tx->desc = NULL;
@@ -497,7 +501,7 @@ netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev)
struct gve_tx_ring *tx;
int nsegs;
- WARN(skb_get_queue_mapping(skb) > priv->tx_cfg.num_queues,
+ WARN(skb_get_queue_mapping(skb) >= priv->tx_cfg.num_queues,
"skb queue index out of range");
tx = &priv->tx[skb_get_queue_mapping(skb)];
if (unlikely(gve_maybe_stop_tx(tx, skb))) {
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
index a362516..92ca3b2 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
@@ -526,8 +526,8 @@ static int hns3_nic_net_stop(struct net_device *netdev)
if (h->ae_algo->ops->set_timer_task)
h->ae_algo->ops->set_timer_task(priv->ae_handle, false);
- netif_tx_stop_all_queues(netdev);
netif_carrier_off(netdev);
+ netif_tx_disable(netdev);
hns3_nic_net_down(netdev);
@@ -778,7 +778,7 @@ static int hns3_get_l4_protocol(struct sk_buff *skb, u8 *ol4_proto,
* and it is udp packet, which has a dest port as the IANA assigned.
* the hardware is expected to do the checksum offload, but the
* hardware will not do the checksum offload when udp dest port is
- * 4789 or 6081.
+ * 4789, 4790 or 6081.
*/
static bool hns3_tunnel_csum_bug(struct sk_buff *skb)
{
@@ -788,11 +788,10 @@ static bool hns3_tunnel_csum_bug(struct sk_buff *skb)
if (!(!skb->encapsulation &&
(l4.udp->dest == htons(IANA_VXLAN_UDP_PORT) ||
- l4.udp->dest == htons(GENEVE_UDP_PORT))))
+ l4.udp->dest == htons(GENEVE_UDP_PORT) ||
+ l4.udp->dest == htons(4790))))
return false;
- skb_checksum_help(skb);
-
return true;
}
@@ -870,8 +869,7 @@ static int hns3_set_l2l3l4(struct sk_buff *skb, u8 ol4_proto,
/* the stack computes the IP header already,
* driver calculate l4 checksum when not TSO.
*/
- skb_checksum_help(skb);
- return 0;
+ return skb_checksum_help(skb);
}
hns3_set_outer_l2l3l4(skb, ol4_proto, ol_type_vlan_len_msec);
@@ -916,7 +914,7 @@ static int hns3_set_l2l3l4(struct sk_buff *skb, u8 ol4_proto,
break;
case IPPROTO_UDP:
if (hns3_tunnel_csum_bug(skb))
- break;
+ return skb_checksum_help(skb);
hns3_set_field(*type_cs_vlan_tso, HNS3_TXD_L4CS_B, 1);
hns3_set_field(*type_cs_vlan_tso, HNS3_TXD_L4T_S,
@@ -941,8 +939,7 @@ static int hns3_set_l2l3l4(struct sk_buff *skb, u8 ol4_proto,
/* the stack computes the IP header already,
* driver calculate l4 checksum when not TSO.
*/
- skb_checksum_help(skb);
- return 0;
+ return skb_checksum_help(skb);
}
return 0;
@@ -1192,23 +1189,21 @@ static unsigned int hns3_skb_bd_num(struct sk_buff *skb, unsigned int *bd_size,
}
static unsigned int hns3_tx_bd_num(struct sk_buff *skb, unsigned int *bd_size,
- u8 max_non_tso_bd_num)
+ u8 max_non_tso_bd_num, unsigned int bd_num,
+ unsigned int recursion_level)
{
+#define HNS3_MAX_RECURSION_LEVEL 24
+
struct sk_buff *frag_skb;
- unsigned int bd_num = 0;
/* If the total len is within the max bd limit */
- if (likely(skb->len <= HNS3_MAX_BD_SIZE && !skb_has_frag_list(skb) &&
+ if (likely(skb->len <= HNS3_MAX_BD_SIZE && !recursion_level &&
+ !skb_has_frag_list(skb) &&
skb_shinfo(skb)->nr_frags < max_non_tso_bd_num))
return skb_shinfo(skb)->nr_frags + 1U;
- /* The below case will always be linearized, return
- * HNS3_MAX_BD_NUM_TSO + 1U to make sure it is linearized.
- */
- if (unlikely(skb->len > HNS3_MAX_TSO_SIZE ||
- (!skb_is_gso(skb) && skb->len >
- HNS3_MAX_NON_TSO_SIZE(max_non_tso_bd_num))))
- return HNS3_MAX_TSO_BD_NUM + 1U;
+ if (unlikely(recursion_level >= HNS3_MAX_RECURSION_LEVEL))
+ return UINT_MAX;
bd_num = hns3_skb_bd_num(skb, bd_size, bd_num);
@@ -1216,7 +1211,8 @@ static unsigned int hns3_tx_bd_num(struct sk_buff *skb, unsigned int *bd_size,
return bd_num;
skb_walk_frags(skb, frag_skb) {
- bd_num = hns3_skb_bd_num(frag_skb, bd_size, bd_num);
+ bd_num = hns3_tx_bd_num(frag_skb, bd_size, max_non_tso_bd_num,
+ bd_num, recursion_level + 1);
if (bd_num > HNS3_MAX_TSO_BD_NUM)
return bd_num;
}
@@ -1276,6 +1272,43 @@ void hns3_shinfo_pack(struct skb_shared_info *shinfo, __u32 *size)
size[i] = skb_frag_size(&shinfo->frags[i]);
}
+static int hns3_skb_linearize(struct hns3_enet_ring *ring,
+ struct sk_buff *skb,
+ u8 max_non_tso_bd_num,
+ unsigned int bd_num)
+{
+ /* 'bd_num == UINT_MAX' means the skb' fraglist has a
+ * recursion level of over HNS3_MAX_RECURSION_LEVEL.
+ */
+ if (bd_num == UINT_MAX) {
+ u64_stats_update_begin(&ring->syncp);
+ ring->stats.over_max_recursion++;
+ u64_stats_update_end(&ring->syncp);
+ return -ENOMEM;
+ }
+
+ /* The skb->len has exceeded the hw limitation, linearization
+ * will not help.
+ */
+ if (skb->len > HNS3_MAX_TSO_SIZE ||
+ (!skb_is_gso(skb) && skb->len >
+ HNS3_MAX_NON_TSO_SIZE(max_non_tso_bd_num))) {
+ u64_stats_update_begin(&ring->syncp);
+ ring->stats.hw_limitation++;
+ u64_stats_update_end(&ring->syncp);
+ return -ENOMEM;
+ }
+
+ if (__skb_linearize(skb)) {
+ u64_stats_update_begin(&ring->syncp);
+ ring->stats.sw_err_cnt++;
+ u64_stats_update_end(&ring->syncp);
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
static int hns3_nic_maybe_stop_tx(struct hns3_enet_ring *ring,
struct net_device *netdev,
struct sk_buff *skb)
@@ -1285,7 +1318,7 @@ static int hns3_nic_maybe_stop_tx(struct hns3_enet_ring *ring,
unsigned int bd_size[HNS3_MAX_TSO_BD_NUM + 1U];
unsigned int bd_num;
- bd_num = hns3_tx_bd_num(skb, bd_size, max_non_tso_bd_num);
+ bd_num = hns3_tx_bd_num(skb, bd_size, max_non_tso_bd_num, 0, 0);
if (unlikely(bd_num > max_non_tso_bd_num)) {
if (bd_num <= HNS3_MAX_TSO_BD_NUM && skb_is_gso(skb) &&
!hns3_skb_need_linearized(skb, bd_size, bd_num,
@@ -1294,16 +1327,11 @@ static int hns3_nic_maybe_stop_tx(struct hns3_enet_ring *ring,
goto out;
}
- if (__skb_linearize(skb))
+ if (hns3_skb_linearize(ring, skb, max_non_tso_bd_num,
+ bd_num))
return -ENOMEM;
bd_num = hns3_tx_bd_count(skb->len);
- if ((skb_is_gso(skb) && bd_num > HNS3_MAX_TSO_BD_NUM) ||
- (!skb_is_gso(skb) &&
- bd_num > max_non_tso_bd_num)) {
- trace_hns3_over_max_bd(skb);
- return -ENOMEM;
- }
u64_stats_update_begin(&ring->syncp);
ring->stats.tx_copy++;
@@ -1327,6 +1355,10 @@ static int hns3_nic_maybe_stop_tx(struct hns3_enet_ring *ring,
return bd_num;
}
+ u64_stats_update_begin(&ring->syncp);
+ ring->stats.tx_busy++;
+ u64_stats_update_end(&ring->syncp);
+
return -EBUSY;
}
@@ -1374,6 +1406,7 @@ static int hns3_fill_skb_to_desc(struct hns3_enet_ring *ring,
struct sk_buff *skb, enum hns_desc_type type)
{
unsigned int size = skb_headlen(skb);
+ struct sk_buff *frag_skb;
int i, ret, bd_num = 0;
if (size) {
@@ -1398,6 +1431,15 @@ static int hns3_fill_skb_to_desc(struct hns3_enet_ring *ring,
bd_num += ret;
}
+ skb_walk_frags(skb, frag_skb) {
+ ret = hns3_fill_skb_to_desc(ring, frag_skb,
+ DESC_TYPE_FRAGLIST_SKB);
+ if (unlikely(ret < 0))
+ return ret;
+
+ bd_num += ret;
+ }
+
return bd_num;
}
@@ -1428,8 +1470,6 @@ netdev_tx_t hns3_nic_net_xmit(struct sk_buff *skb, struct net_device *netdev)
struct hns3_enet_ring *ring = &priv->ring[skb->queue_mapping];
struct netdev_queue *dev_queue;
int pre_ntu, next_to_use_head;
- struct sk_buff *frag_skb;
- int bd_num = 0;
bool doorbell;
int ret;
@@ -1445,15 +1485,8 @@ netdev_tx_t hns3_nic_net_xmit(struct sk_buff *skb, struct net_device *netdev)
ret = hns3_nic_maybe_stop_tx(ring, netdev, skb);
if (unlikely(ret <= 0)) {
if (ret == -EBUSY) {
- u64_stats_update_begin(&ring->syncp);
- ring->stats.tx_busy++;
- u64_stats_update_end(&ring->syncp);
hns3_tx_doorbell(ring, 0, true);
return NETDEV_TX_BUSY;
- } else if (ret == -ENOMEM) {
- u64_stats_update_begin(&ring->syncp);
- ring->stats.sw_err_cnt++;
- u64_stats_update_end(&ring->syncp);
}
hns3_rl_err(netdev, "xmit error: %d!\n", ret);
@@ -1466,21 +1499,14 @@ netdev_tx_t hns3_nic_net_xmit(struct sk_buff *skb, struct net_device *netdev)
if (unlikely(ret < 0))
goto fill_err;
+ /* 'ret < 0' means filling error, 'ret == 0' means skb->len is
+ * zero, which is unlikely, and 'ret > 0' means how many tx desc
+ * need to be notified to the hw.
+ */
ret = hns3_fill_skb_to_desc(ring, skb, DESC_TYPE_SKB);
- if (unlikely(ret < 0))
+ if (unlikely(ret <= 0))
goto fill_err;
- bd_num += ret;
-
- skb_walk_frags(skb, frag_skb) {
- ret = hns3_fill_skb_to_desc(ring, frag_skb,
- DESC_TYPE_FRAGLIST_SKB);
- if (unlikely(ret < 0))
- goto fill_err;
-
- bd_num += ret;
- }
-
pre_ntu = ring->next_to_use ? (ring->next_to_use - 1) :
(ring->desc_num - 1);
ring->desc[pre_ntu].tx.bdtp_fe_sc_vld_ra_ri |=
@@ -1491,7 +1517,7 @@ netdev_tx_t hns3_nic_net_xmit(struct sk_buff *skb, struct net_device *netdev)
dev_queue = netdev_get_tx_queue(netdev, ring->queue_index);
doorbell = __netdev_tx_sent_queue(dev_queue, skb->len,
netdev_xmit_more());
- hns3_tx_doorbell(ring, bd_num, doorbell);
+ hns3_tx_doorbell(ring, ret, doorbell);
return NETDEV_TX_OK;
@@ -1656,11 +1682,15 @@ static void hns3_nic_get_stats64(struct net_device *netdev,
tx_drop += ring->stats.tx_l4_proto_err;
tx_drop += ring->stats.tx_l2l3l4_err;
tx_drop += ring->stats.tx_tso_err;
+ tx_drop += ring->stats.over_max_recursion;
+ tx_drop += ring->stats.hw_limitation;
tx_errors += ring->stats.sw_err_cnt;
tx_errors += ring->stats.tx_vlan_err;
tx_errors += ring->stats.tx_l4_proto_err;
tx_errors += ring->stats.tx_l2l3l4_err;
tx_errors += ring->stats.tx_tso_err;
+ tx_errors += ring->stats.over_max_recursion;
+ tx_errors += ring->stats.hw_limitation;
} while (u64_stats_fetch_retry_irq(&ring->syncp, start));
/* fetch the rx stats */
@@ -3526,7 +3556,6 @@ static void hns3_nic_set_cpumask(struct hns3_nic_priv *priv)
static int hns3_nic_init_vector_data(struct hns3_nic_priv *priv)
{
- struct hnae3_ring_chain_node vector_ring_chain;
struct hnae3_handle *h = priv->ae_handle;
struct hns3_enet_tqp_vector *tqp_vector;
int ret;
@@ -3558,6 +3587,8 @@ static int hns3_nic_init_vector_data(struct hns3_nic_priv *priv)
}
for (i = 0; i < priv->vector_num; i++) {
+ struct hnae3_ring_chain_node vector_ring_chain;
+
tqp_vector = &priv->tqp_vector[i];
tqp_vector->rx_group.total_bytes = 0;
@@ -4078,12 +4109,6 @@ static int hns3_client_init(struct hnae3_handle *handle)
if (ret)
goto out_init_phy;
- ret = register_netdev(netdev);
- if (ret) {
- dev_err(priv->dev, "probe register netdev fail!\n");
- goto out_reg_netdev_fail;
- }
-
/* the device can work without cpu rmap, only aRFS needs it */
ret = hns3_set_rx_cpu_rmap(netdev);
if (ret)
@@ -4111,17 +4136,23 @@ static int hns3_client_init(struct hnae3_handle *handle)
set_bit(HNS3_NIC_STATE_INITED, &priv->state);
+ ret = register_netdev(netdev);
+ if (ret) {
+ dev_err(priv->dev, "probe register netdev fail!\n");
+ goto out_reg_netdev_fail;
+ }
+
if (netif_msg_drv(handle))
hns3_info_show(priv);
return ret;
+out_reg_netdev_fail:
+ hns3_dbg_uninit(handle);
out_client_start:
hns3_free_rx_cpu_rmap(netdev);
hns3_nic_uninit_irq(priv);
out_init_irq_fail:
- unregister_netdev(netdev);
-out_reg_netdev_fail:
hns3_uninit_phy(netdev);
out_init_phy:
hns3_uninit_all_ring(priv);
@@ -4392,6 +4423,11 @@ static int hns3_reset_notify_up_enet(struct hnae3_handle *handle)
struct hns3_nic_priv *priv = netdev_priv(kinfo->netdev);
int ret = 0;
+ if (!test_bit(HNS3_NIC_STATE_INITED, &priv->state)) {
+ netdev_err(kinfo->netdev, "device is not initialized yet\n");
+ return -EFAULT;
+ }
+
clear_bit(HNS3_NIC_STATE_RESETTING, &priv->state);
if (netif_running(kinfo->netdev)) {
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
index 1c81dea..398686b 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
@@ -359,6 +359,8 @@ struct ring_stats {
u64 tx_l4_proto_err;
u64 tx_l2l3l4_err;
u64 tx_tso_err;
+ u64 over_max_recursion;
+ u64 hw_limitation;
};
struct {
u64 rx_pkts;
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
index 6b07b27..c0aa3be 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
@@ -39,6 +39,8 @@ static const struct hns3_stats hns3_txq_stats[] = {
HNS3_TQP_STAT("l4_proto_err", tx_l4_proto_err),
HNS3_TQP_STAT("l2l3l4_err", tx_l2l3l4_err),
HNS3_TQP_STAT("tso_err", tx_tso_err),
+ HNS3_TQP_STAT("over_max_recursion", over_max_recursion),
+ HNS3_TQP_STAT("hw_limitation", hw_limitation),
};
#define HNS3_TXQ_STATS_COUNT ARRAY_SIZE(hns3_txq_stats)
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
index 9ee55ee0..3226ca1 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
@@ -753,8 +753,9 @@ static int hclge_config_igu_egu_hw_err_int(struct hclge_dev *hdev, bool en)
/* configure IGU,EGU error interrupts */
hclge_cmd_setup_basic_desc(&desc, HCLGE_IGU_COMMON_INT_EN, false);
+ desc.data[0] = cpu_to_le32(HCLGE_IGU_ERR_INT_TYPE);
if (en)
- desc.data[0] = cpu_to_le32(HCLGE_IGU_ERR_INT_EN);
+ desc.data[0] |= cpu_to_le32(HCLGE_IGU_ERR_INT_EN);
desc.data[1] = cpu_to_le32(HCLGE_IGU_ERR_INT_EN_MASK);
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h
index 608fe26..d647f3c 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h
@@ -32,7 +32,8 @@
#define HCLGE_TQP_ECC_ERR_INT_EN_MASK 0x0FFF
#define HCLGE_MSIX_SRAM_ECC_ERR_INT_EN_MASK 0x0F000000
#define HCLGE_MSIX_SRAM_ECC_ERR_INT_EN 0x0F000000
-#define HCLGE_IGU_ERR_INT_EN 0x0000066F
+#define HCLGE_IGU_ERR_INT_EN 0x0000000F
+#define HCLGE_IGU_ERR_INT_TYPE 0x00000660
#define HCLGE_IGU_ERR_INT_EN_MASK 0x000F
#define HCLGE_IGU_TNL_ERR_INT_EN 0x0002AABF
#define HCLGE_IGU_TNL_ERR_INT_EN_MASK 0x003F
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
index b856dbe..98190aa9 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
@@ -10845,7 +10845,6 @@ static int hclge_get_64_bit_regs(struct hclge_dev *hdev, u32 regs_num,
#define REG_LEN_PER_LINE (REG_NUM_PER_LINE * sizeof(u32))
#define REG_SEPARATOR_LINE 1
#define REG_NUM_REMAIN_MASK 3
-#define BD_LIST_MAX_NUM 30
int hclge_query_bd_num_cmd_send(struct hclge_dev *hdev, struct hclge_desc *desc)
{
@@ -10939,15 +10938,19 @@ static int hclge_get_dfx_reg_len(struct hclge_dev *hdev, int *len)
{
u32 dfx_reg_type_num = ARRAY_SIZE(hclge_dfx_bd_offset_list);
int data_len_per_desc, bd_num, i;
- int bd_num_list[BD_LIST_MAX_NUM];
+ int *bd_num_list;
u32 data_len;
int ret;
+ bd_num_list = kcalloc(dfx_reg_type_num, sizeof(int), GFP_KERNEL);
+ if (!bd_num_list)
+ return -ENOMEM;
+
ret = hclge_get_dfx_reg_bd_num(hdev, bd_num_list, dfx_reg_type_num);
if (ret) {
dev_err(&hdev->pdev->dev,
"Get dfx reg bd num fail, status is %d.\n", ret);
- return ret;
+ goto out;
}
data_len_per_desc = sizeof_field(struct hclge_desc, data);
@@ -10958,6 +10961,8 @@ static int hclge_get_dfx_reg_len(struct hclge_dev *hdev, int *len)
*len += (data_len / REG_LEN_PER_LINE + 1) * REG_LEN_PER_LINE;
}
+out:
+ kfree(bd_num_list);
return ret;
}
@@ -10965,16 +10970,20 @@ static int hclge_get_dfx_reg(struct hclge_dev *hdev, void *data)
{
u32 dfx_reg_type_num = ARRAY_SIZE(hclge_dfx_bd_offset_list);
int bd_num, bd_num_max, buf_len, i;
- int bd_num_list[BD_LIST_MAX_NUM];
struct hclge_desc *desc_src;
+ int *bd_num_list;
u32 *reg = data;
int ret;
+ bd_num_list = kcalloc(dfx_reg_type_num, sizeof(int), GFP_KERNEL);
+ if (!bd_num_list)
+ return -ENOMEM;
+
ret = hclge_get_dfx_reg_bd_num(hdev, bd_num_list, dfx_reg_type_num);
if (ret) {
dev_err(&hdev->pdev->dev,
"Get dfx reg bd num fail, status is %d.\n", ret);
- return ret;
+ goto out;
}
bd_num_max = bd_num_list[0];
@@ -10983,8 +10992,10 @@ static int hclge_get_dfx_reg(struct hclge_dev *hdev, void *data)
buf_len = sizeof(*desc_src) * bd_num_max;
desc_src = kzalloc(buf_len, GFP_KERNEL);
- if (!desc_src)
- return -ENOMEM;
+ if (!desc_src) {
+ ret = -ENOMEM;
+ goto out;
+ }
for (i = 0; i < dfx_reg_type_num; i++) {
bd_num = bd_num_list[i];
@@ -11000,6 +11011,8 @@ static int hclge_get_dfx_reg(struct hclge_dev *hdev, void *data)
}
kfree(desc_src);
+out:
+ kfree(bd_num_list);
return ret;
}
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
index 9c8004f..2c2d53f 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
@@ -519,7 +519,7 @@ static void hclge_get_link_mode(struct hclge_vport *vport,
unsigned long advertising;
unsigned long supported;
unsigned long send_data;
- u8 msg_data[10];
+ u8 msg_data[10] = {};
u8 dest_vfid;
advertising = hdev->hw.mac.advertising[0];
@@ -678,7 +678,6 @@ void hclge_mbx_handler(struct hclge_dev *hdev)
unsigned int flag;
int ret = 0;
- memset(&resp_msg, 0, sizeof(resp_msg));
/* handle all the mailbox requests in the queue */
while (!hclge_cmd_crq_empty(&hdev->hw)) {
if (test_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state)) {
@@ -706,6 +705,9 @@ void hclge_mbx_handler(struct hclge_dev *hdev)
trace_hclge_pf_mbx_get(hdev, req);
+ /* clear the resp_msg before processing every mailbox message */
+ memset(&resp_msg, 0, sizeof(resp_msg));
+
switch (req->msg.code) {
case HCLGE_MBX_MAP_RING_TO_VECTOR:
ret = hclge_map_unmap_ring_to_vf_vector(vport, true,
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
index e898207..c194bba 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
@@ -255,6 +255,8 @@ void hclge_mac_start_phy(struct hclge_dev *hdev)
if (!phydev)
return;
+ phy_loopback(phydev, false);
+
phy_start(phydev);
}
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
index dc5d150..ac6980a 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
@@ -2554,14 +2554,14 @@ static int hclgevf_ae_start(struct hnae3_handle *handle)
{
struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+ clear_bit(HCLGEVF_STATE_DOWN, &hdev->state);
+
hclgevf_reset_tqp_stats(handle);
hclgevf_request_link_info(hdev);
hclgevf_update_link_mode(hdev);
- clear_bit(HCLGEVF_STATE_DOWN, &hdev->state);
-
return 0;
}
diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index 4a4cb62..8cc4446 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -1159,19 +1159,13 @@ static int __ibmvnic_open(struct net_device *netdev)
rc = set_link_state(adapter, IBMVNIC_LOGICAL_LNK_UP);
if (rc) {
- for (i = 0; i < adapter->req_rx_queues; i++)
- napi_disable(&adapter->napi[i]);
+ ibmvnic_napi_disable(adapter);
release_resources(adapter);
return rc;
}
netif_tx_start_all_queues(netdev);
- if (prev_state == VNIC_CLOSED) {
- for (i = 0; i < adapter->req_rx_queues; i++)
- napi_schedule(&adapter->napi[i]);
- }
-
adapter->state = VNIC_OPEN;
return rc;
}
@@ -1942,7 +1936,7 @@ static int do_reset(struct ibmvnic_adapter *adapter,
u64 old_num_rx_queues, old_num_tx_queues;
u64 old_num_rx_slots, old_num_tx_slots;
struct net_device *netdev = adapter->netdev;
- int i, rc;
+ int rc;
netdev_dbg(adapter->netdev,
"[S:%d FOP:%d] Reset reason %d, reset_state %d\n",
@@ -2088,10 +2082,6 @@ static int do_reset(struct ibmvnic_adapter *adapter,
/* refresh device's multicast list */
ibmvnic_set_multi(netdev);
- /* kick napi */
- for (i = 0; i < adapter->req_rx_queues; i++)
- napi_schedule(&adapter->napi[i]);
-
if (adapter->reset_reason == VNIC_RESET_FAILOVER ||
adapter->reset_reason == VNIC_RESET_MOBILITY) {
call_netdevice_notifiers(NETDEV_NOTIFY_PEERS, netdev);
diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
index 118473d..fe12587 100644
--- a/drivers/net/ethernet/intel/i40e/i40e.h
+++ b/drivers/net/ethernet/intel/i40e/i40e.h
@@ -142,6 +142,7 @@ enum i40e_state_t {
__I40E_VIRTCHNL_OP_PENDING,
__I40E_RECOVERY_MODE,
__I40E_VF_RESETS_DISABLED, /* disable resets during i40e_remove */
+ __I40E_VFS_RELEASING,
/* This must be last as it determines the size of the BITMAP */
__I40E_STATE_SIZE__,
};
diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
index 1e960c3..e84054f 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
@@ -1565,8 +1565,10 @@ enum i40e_aq_phy_type {
I40E_PHY_TYPE_25GBASE_LR = 0x22,
I40E_PHY_TYPE_25GBASE_AOC = 0x23,
I40E_PHY_TYPE_25GBASE_ACC = 0x24,
- I40E_PHY_TYPE_2_5GBASE_T = 0x30,
- I40E_PHY_TYPE_5GBASE_T = 0x31,
+ I40E_PHY_TYPE_2_5GBASE_T = 0x26,
+ I40E_PHY_TYPE_5GBASE_T = 0x27,
+ I40E_PHY_TYPE_2_5GBASE_T_LINK_STATUS = 0x30,
+ I40E_PHY_TYPE_5GBASE_T_LINK_STATUS = 0x31,
I40E_PHY_TYPE_MAX,
I40E_PHY_TYPE_NOT_SUPPORTED_HIGH_TEMP = 0xFD,
I40E_PHY_TYPE_EMPTY = 0xFE,
diff --git a/drivers/net/ethernet/intel/i40e/i40e_client.c b/drivers/net/ethernet/intel/i40e/i40e_client.c
index a2dba32..32f3fac 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_client.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_client.c
@@ -375,6 +375,7 @@ void i40e_client_subtask(struct i40e_pf *pf)
clear_bit(__I40E_CLIENT_INSTANCE_OPENED,
&cdev->state);
i40e_client_del_instance(pf);
+ return;
}
}
}
diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c
index adc9e4f..ba10907 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_common.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_common.c
@@ -1154,8 +1154,8 @@ static enum i40e_media_type i40e_get_media_type(struct i40e_hw *hw)
break;
case I40E_PHY_TYPE_100BASE_TX:
case I40E_PHY_TYPE_1000BASE_T:
- case I40E_PHY_TYPE_2_5GBASE_T:
- case I40E_PHY_TYPE_5GBASE_T:
+ case I40E_PHY_TYPE_2_5GBASE_T_LINK_STATUS:
+ case I40E_PHY_TYPE_5GBASE_T_LINK_STATUS:
case I40E_PHY_TYPE_10GBASE_T:
media = I40E_MEDIA_TYPE_BASET;
break;
diff --git a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
index d7c13ca..d627b59 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
@@ -578,6 +578,9 @@ static void i40e_dbg_dump_desc(int cnt, int vsi_seid, int ring_id, int desc_n,
case RING_TYPE_XDP:
ring = kmemdup(vsi->xdp_rings[ring_id], sizeof(*ring), GFP_KERNEL);
break;
+ default:
+ ring = NULL;
+ break;
}
if (!ring)
return;
diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
index 9e81f85..5d48bc0 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
@@ -232,6 +232,8 @@ static void __i40e_add_stat_strings(u8 **p, const struct i40e_stats stats[],
I40E_STAT(struct i40e_vsi, _name, _stat)
#define I40E_VEB_STAT(_name, _stat) \
I40E_STAT(struct i40e_veb, _name, _stat)
+#define I40E_VEB_TC_STAT(_name, _stat) \
+ I40E_STAT(struct i40e_cp_veb_tc_stats, _name, _stat)
#define I40E_PFC_STAT(_name, _stat) \
I40E_STAT(struct i40e_pfc_stats, _name, _stat)
#define I40E_QUEUE_STAT(_name, _stat) \
@@ -266,11 +268,18 @@ static const struct i40e_stats i40e_gstrings_veb_stats[] = {
I40E_VEB_STAT("veb.rx_unknown_protocol", stats.rx_unknown_protocol),
};
+struct i40e_cp_veb_tc_stats {
+ u64 tc_rx_packets;
+ u64 tc_rx_bytes;
+ u64 tc_tx_packets;
+ u64 tc_tx_bytes;
+};
+
static const struct i40e_stats i40e_gstrings_veb_tc_stats[] = {
- I40E_VEB_STAT("veb.tc_%u_tx_packets", tc_stats.tc_tx_packets),
- I40E_VEB_STAT("veb.tc_%u_tx_bytes", tc_stats.tc_tx_bytes),
- I40E_VEB_STAT("veb.tc_%u_rx_packets", tc_stats.tc_rx_packets),
- I40E_VEB_STAT("veb.tc_%u_rx_bytes", tc_stats.tc_rx_bytes),
+ I40E_VEB_TC_STAT("veb.tc_%u_tx_packets", tc_tx_packets),
+ I40E_VEB_TC_STAT("veb.tc_%u_tx_bytes", tc_tx_bytes),
+ I40E_VEB_TC_STAT("veb.tc_%u_rx_packets", tc_rx_packets),
+ I40E_VEB_TC_STAT("veb.tc_%u_rx_bytes", tc_rx_bytes),
};
static const struct i40e_stats i40e_gstrings_misc_stats[] = {
@@ -832,8 +841,8 @@ static void i40e_get_settings_link_up(struct i40e_hw *hw,
10000baseT_Full);
break;
case I40E_PHY_TYPE_10GBASE_T:
- case I40E_PHY_TYPE_5GBASE_T:
- case I40E_PHY_TYPE_2_5GBASE_T:
+ case I40E_PHY_TYPE_5GBASE_T_LINK_STATUS:
+ case I40E_PHY_TYPE_2_5GBASE_T_LINK_STATUS:
case I40E_PHY_TYPE_1000BASE_T:
case I40E_PHY_TYPE_100BASE_TX:
ethtool_link_ksettings_add_link_mode(ks, supported, Autoneg);
@@ -1101,6 +1110,7 @@ static int i40e_get_link_ksettings(struct net_device *netdev,
/* Set flow control settings */
ethtool_link_ksettings_add_link_mode(ks, supported, Pause);
+ ethtool_link_ksettings_add_link_mode(ks, supported, Asym_Pause);
switch (hw->fc.requested_mode) {
case I40E_FC_FULL:
@@ -1399,7 +1409,8 @@ static int i40e_set_fec_cfg(struct net_device *netdev, u8 fec_cfg)
memset(&config, 0, sizeof(config));
config.phy_type = abilities.phy_type;
- config.abilities = abilities.abilities;
+ config.abilities = abilities.abilities |
+ I40E_AQ_PHY_ENABLE_ATOMIC_LINK;
config.phy_type_ext = abilities.phy_type_ext;
config.link_speed = abilities.link_speed;
config.eee_capability = abilities.eee_capability;
@@ -2217,6 +2228,29 @@ static int i40e_get_sset_count(struct net_device *netdev, int sset)
}
/**
+ * i40e_get_veb_tc_stats - copy VEB TC statistics to formatted structure
+ * @tc: the TC statistics in VEB structure (veb->tc_stats)
+ * @i: the index of traffic class in (veb->tc_stats) structure to copy
+ *
+ * Copy VEB TC statistics from structure of arrays (veb->tc_stats) to
+ * one dimensional structure i40e_cp_veb_tc_stats.
+ * Produce formatted i40e_cp_veb_tc_stats structure of the VEB TC
+ * statistics for the given TC.
+ **/
+static struct i40e_cp_veb_tc_stats
+i40e_get_veb_tc_stats(struct i40e_veb_tc_stats *tc, unsigned int i)
+{
+ struct i40e_cp_veb_tc_stats veb_tc = {
+ .tc_rx_packets = tc->tc_rx_packets[i],
+ .tc_rx_bytes = tc->tc_rx_bytes[i],
+ .tc_tx_packets = tc->tc_tx_packets[i],
+ .tc_tx_bytes = tc->tc_tx_bytes[i],
+ };
+
+ return veb_tc;
+}
+
+/**
* i40e_get_pfc_stats - copy HW PFC statistics to formatted structure
* @pf: the PF device structure
* @i: the priority value to copy
@@ -2300,8 +2334,16 @@ static void i40e_get_ethtool_stats(struct net_device *netdev,
i40e_gstrings_veb_stats);
for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++)
- i40e_add_ethtool_stats(&data, veb_stats ? veb : NULL,
- i40e_gstrings_veb_tc_stats);
+ if (veb_stats) {
+ struct i40e_cp_veb_tc_stats veb_tc =
+ i40e_get_veb_tc_stats(&veb->tc_stats, i);
+
+ i40e_add_ethtool_stats(&data, &veb_tc,
+ i40e_gstrings_veb_tc_stats);
+ } else {
+ i40e_add_ethtool_stats(&data, NULL,
+ i40e_gstrings_veb_tc_stats);
+ }
i40e_add_ethtool_stats(&data, pf, i40e_gstrings_stats);
@@ -5244,7 +5286,7 @@ static int i40e_get_module_eeprom(struct net_device *netdev,
status = i40e_aq_get_phy_register(hw,
I40E_AQ_PHY_REG_ACCESS_EXTERNAL_MODULE,
- true, addr, offset, &value, NULL);
+ addr, true, offset, &value, NULL);
if (status)
return -EIO;
data[i] = value;
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index 4a2d03ca..f0edea7 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -2560,8 +2560,7 @@ int i40e_sync_vsi_filters(struct i40e_vsi *vsi)
i40e_stat_str(hw, aq_ret),
i40e_aq_str(hw, hw->aq.asq_last_status));
} else {
- dev_info(&pf->pdev->dev, "%s is %s allmulti mode.\n",
- vsi->netdev->name,
+ dev_info(&pf->pdev->dev, "%s allmulti mode.\n",
cur_multipromisc ? "entering" : "leaving");
}
}
@@ -11864,6 +11863,7 @@ static int i40e_sw_init(struct i40e_pf *pf)
{
int err = 0;
int size;
+ u16 pow;
/* Set default capability flags */
pf->flags = I40E_FLAG_RX_CSUM_ENABLED |
@@ -11882,6 +11882,11 @@ static int i40e_sw_init(struct i40e_pf *pf)
pf->rss_table_size = pf->hw.func_caps.rss_table_size;
pf->rss_size_max = min_t(int, pf->rss_size_max,
pf->hw.func_caps.num_tx_qp);
+
+ /* find the next higher power-of-2 of num cpus */
+ pow = roundup_pow_of_two(num_online_cpus());
+ pf->rss_size_max = min_t(int, pf->rss_size_max, pow);
+
if (pf->hw.func_caps.rss) {
pf->flags |= I40E_FLAG_RSS_ENABLED;
pf->alloc_rss_size = min_t(int, pf->rss_size_max,
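The clamp added above keeps the RSS table no larger than the next power of two above the online CPU count. A minimal standalone sketch of that behavior, with roundup_pow_of_two() open-coded and an illustrative helper name (not driver code): with 6 online CPUs and a hardware limit of 64 queues, the result is 8.

	#include <stdio.h>

	/* Illustrative only: mirrors "pow = roundup_pow_of_two(num_online_cpus());
	 * rss_size_max = min(rss_size_max, pow)" from the hunk above.
	 */
	static unsigned int example_rss_cap(unsigned int hw_max, unsigned int ncpus)
	{
		unsigned int pow = 1;

		while (pow < ncpus)	/* next power of two >= ncpus */
			pow <<= 1;

		return hw_max < pow ? hw_max : pow;
	}

	int main(void)
	{
		printf("%u\n", example_rss_cap(64, 6));	/* prints 8 */
		return 0;
	}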
@@ -14647,12 +14652,16 @@ static int i40e_init_recovery_mode(struct i40e_pf *pf, struct i40e_hw *hw)
* in order to register the netdev
*/
v_idx = i40e_vsi_mem_alloc(pf, I40E_VSI_MAIN);
- if (v_idx < 0)
+ if (v_idx < 0) {
+ err = v_idx;
goto err_switch_setup;
+ }
pf->lan_vsi = v_idx;
vsi = pf->vsi[v_idx];
- if (!vsi)
+ if (!vsi) {
+ err = -EFAULT;
goto err_switch_setup;
+ }
vsi->alloc_queue_pairs = 1;
err = i40e_config_netdev(vsi);
if (err)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index 8997142..011f484 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -1810,10 +1810,6 @@ static bool i40e_cleanup_headers(struct i40e_ring *rx_ring, struct sk_buff *skb,
union i40e_rx_desc *rx_desc)
{
- /* XDP packets use error pointer so abort at this point */
- if (IS_ERR(skb))
- return true;
-
/* ERR_MASK will only have valid bits if EOP set, and
* what we are doing here is actually checking
* I40E_RX_DESC_ERROR_RXE_SHIFT, since it is the zeroth bit in
@@ -2187,8 +2183,7 @@ int i40e_xmit_xdp_tx_ring(struct xdp_buff *xdp, struct i40e_ring *xdp_ring)
* @rx_ring: Rx ring being processed
* @xdp: XDP buffer containing the frame
**/
-static struct sk_buff *i40e_run_xdp(struct i40e_ring *rx_ring,
- struct xdp_buff *xdp)
+static int i40e_run_xdp(struct i40e_ring *rx_ring, struct xdp_buff *xdp)
{
int err, result = I40E_XDP_PASS;
struct i40e_ring *xdp_ring;
@@ -2227,7 +2222,7 @@ static struct sk_buff *i40e_run_xdp(struct i40e_ring *rx_ring,
}
xdp_out:
rcu_read_unlock();
- return ERR_PTR(-result);
+ return result;
}
/**
@@ -2339,6 +2334,7 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
unsigned int xdp_xmit = 0;
bool failure = false;
struct xdp_buff xdp;
+ int xdp_res = 0;
#if (PAGE_SIZE < 8192)
xdp.frame_sz = i40e_rx_frame_truesize(rx_ring, 0);
@@ -2405,12 +2401,10 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
/* At larger PAGE_SIZE, frame_sz depend on len size */
xdp.frame_sz = i40e_rx_frame_truesize(rx_ring, size);
#endif
- skb = i40e_run_xdp(rx_ring, &xdp);
+ xdp_res = i40e_run_xdp(rx_ring, &xdp);
}
- if (IS_ERR(skb)) {
- unsigned int xdp_res = -PTR_ERR(skb);
-
+ if (xdp_res) {
if (xdp_res & (I40E_XDP_TX | I40E_XDP_REDIR)) {
xdp_xmit |= xdp_res;
i40e_rx_buffer_flip(rx_ring, rx_buffer, size);
@@ -2428,7 +2422,7 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
}
/* exit if we failed to retrieve a buffer */
- if (!skb) {
+ if (!xdp_res && !skb) {
rx_ring->rx_stats.alloc_buff_failed++;
rx_buffer->pagecnt_bias++;
break;
@@ -2440,7 +2434,7 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
if (i40e_is_non_eop(rx_ring, rx_desc, skb))
continue;
- if (i40e_cleanup_headers(rx_ring, skb, rx_desc)) {
+ if (xdp_res || i40e_cleanup_headers(rx_ring, skb, rx_desc)) {
skb = NULL;
continue;
}
diff --git a/drivers/net/ethernet/intel/i40e/i40e_type.h b/drivers/net/ethernet/intel/i40e/i40e_type.h
index c0bdc66..add67f7 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_type.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_type.h
@@ -239,11 +239,8 @@ struct i40e_phy_info {
#define I40E_CAP_PHY_TYPE_25GBASE_ACC BIT_ULL(I40E_PHY_TYPE_25GBASE_ACC + \
I40E_PHY_TYPE_OFFSET)
/* Offset for 2.5G/5G PHY Types value to bit number conversion */
-#define I40E_PHY_TYPE_OFFSET2 (-10)
-#define I40E_CAP_PHY_TYPE_2_5GBASE_T BIT_ULL(I40E_PHY_TYPE_2_5GBASE_T + \
- I40E_PHY_TYPE_OFFSET2)
-#define I40E_CAP_PHY_TYPE_5GBASE_T BIT_ULL(I40E_PHY_TYPE_5GBASE_T + \
- I40E_PHY_TYPE_OFFSET2)
+#define I40E_CAP_PHY_TYPE_2_5GBASE_T BIT_ULL(I40E_PHY_TYPE_2_5GBASE_T)
+#define I40E_CAP_PHY_TYPE_5GBASE_T BIT_ULL(I40E_PHY_TYPE_5GBASE_T)
#define I40E_HW_CAP_MAX_GPIO 30
/* Capabilities of a PF or a VF or the whole device */
struct i40e_hw_capabilities {
diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
index 3b269c70..e4f13a4 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
@@ -137,6 +137,7 @@ void i40e_vc_notify_vf_reset(struct i40e_vf *vf)
**/
static inline void i40e_vc_disable_vf(struct i40e_vf *vf)
{
+ struct i40e_pf *pf = vf->pf;
int i;
i40e_vc_notify_vf_reset(vf);
@@ -147,6 +148,11 @@ static inline void i40e_vc_disable_vf(struct i40e_vf *vf)
* ensure a reset.
*/
for (i = 0; i < 20; i++) {
+ /* If the PF is in the VFs-releasing state, resetting the VF
+ * is impossible, so leave it.
+ */
+ if (test_bit(__I40E_VFS_RELEASING, pf->state))
+ return;
if (i40e_reset_vf(vf, false))
return;
usleep_range(10000, 20000);
@@ -1574,6 +1580,8 @@ void i40e_free_vfs(struct i40e_pf *pf)
if (!pf->vf)
return;
+
+ set_bit(__I40E_VFS_RELEASING, pf->state);
while (test_and_set_bit(__I40E_VF_DISABLE, pf->state))
usleep_range(1000, 2000);
@@ -1631,6 +1639,7 @@ void i40e_free_vfs(struct i40e_pf *pf)
}
}
clear_bit(__I40E_VF_DISABLE, pf->state);
+ clear_bit(__I40E_VFS_RELEASING, pf->state);
}
#ifdef CONFIG_PCI_IOV
diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
index dc5b3c0..ebd0854 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
@@ -3899,8 +3899,6 @@ static void iavf_remove(struct pci_dev *pdev)
iounmap(hw->hw_addr);
pci_release_regions(pdev);
- iavf_free_all_tx_resources(adapter);
- iavf_free_all_rx_resources(adapter);
iavf_free_queues(adapter);
kfree(adapter->vf_res);
spin_lock_bh(&adapter->mac_vlan_list_lock);
diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index 5b3f2bb..6a57b41 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -194,7 +194,6 @@ enum ice_state {
__ICE_NEEDS_RESTART,
__ICE_PREPARED_FOR_RESET, /* set by driver when prepared */
__ICE_RESET_OICR_RECV, /* set by driver after rcv reset OICR */
- __ICE_DCBNL_DEVRESET, /* set by dcbnl devreset */
__ICE_PFR_REQ, /* set by driver and peers */
__ICE_CORER_REQ, /* set by driver and peers */
__ICE_GLOBR_REQ, /* set by driver and peers */
@@ -587,7 +586,7 @@ int ice_schedule_reset(struct ice_pf *pf, enum ice_reset_req reset);
void ice_print_link_msg(struct ice_vsi *vsi, bool isup);
const char *ice_stat_str(enum ice_status stat_err);
const char *ice_aq_str(enum ice_aq_err aq_err);
-bool ice_is_wol_supported(struct ice_pf *pf);
+bool ice_is_wol_supported(struct ice_hw *hw);
int
ice_fdir_write_fltr(struct ice_pf *pf, struct ice_fdir_fltr *input, bool add,
bool is_tun);
@@ -605,6 +604,7 @@ int ice_fdir_create_dflt_rules(struct ice_pf *pf);
int ice_aq_wait_for_event(struct ice_pf *pf, u16 opcode, unsigned long timeout,
struct ice_rq_event_info *event);
int ice_open(struct net_device *netdev);
+int ice_open_internal(struct net_device *netdev);
int ice_stop(struct net_device *netdev);
void ice_service_task_schedule(struct ice_pf *pf);
diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index 7db5fd9..2239a5f 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -717,8 +717,8 @@ static enum ice_status ice_cfg_fw_log(struct ice_hw *hw, bool enable)
if (!data) {
data = devm_kcalloc(ice_hw_to_dev(hw),
- sizeof(*data),
ICE_AQC_FW_LOG_ID_MAX,
+ sizeof(*data),
GFP_KERNEL);
if (!data)
return ICE_ERR_NO_MEMORY;
diff --git a/drivers/net/ethernet/intel/ice/ice_controlq.h b/drivers/net/ethernet/intel/ice/ice_controlq.h
index faaa08e..68866f4 100644
--- a/drivers/net/ethernet/intel/ice/ice_controlq.h
+++ b/drivers/net/ethernet/intel/ice/ice_controlq.h
@@ -31,8 +31,8 @@ enum ice_ctl_q {
ICE_CTL_Q_MAILBOX,
};
-/* Control Queue timeout settings - max delay 250ms */
-#define ICE_CTL_Q_SQ_CMD_TIMEOUT 2500 /* Count 2500 times */
+/* Control Queue timeout settings - max delay 1s */
+#define ICE_CTL_Q_SQ_CMD_TIMEOUT 10000 /* Count 10000 times */
#define ICE_CTL_Q_SQ_CMD_USEC 100 /* Check every 100usec */
#define ICE_CTL_Q_ADMIN_INIT_TIMEOUT 10 /* Count 10 times */
#define ICE_CTL_Q_ADMIN_INIT_MSEC 100 /* Check every 100msec */
diff --git a/drivers/net/ethernet/intel/ice/ice_dcb.c b/drivers/net/ethernet/intel/ice/ice_dcb.c
index 2a3147e..28e834a 100644
--- a/drivers/net/ethernet/intel/ice/ice_dcb.c
+++ b/drivers/net/ethernet/intel/ice/ice_dcb.c
@@ -738,22 +738,27 @@ ice_aq_get_cee_dcb_cfg(struct ice_hw *hw,
/**
* ice_cee_to_dcb_cfg
* @cee_cfg: pointer to CEE configuration struct
- * @dcbcfg: DCB configuration struct
+ * @pi: port information structure
*
* Convert CEE configuration from firmware to DCB configuration
*/
static void
ice_cee_to_dcb_cfg(struct ice_aqc_get_cee_dcb_cfg_resp *cee_cfg,
- struct ice_dcbx_cfg *dcbcfg)
+ struct ice_port_info *pi)
{
u32 status, tlv_status = le32_to_cpu(cee_cfg->tlv_status);
- u32 ice_aqc_cee_status_mask, ice_aqc_cee_status_shift;
- u16 app_prio = le16_to_cpu(cee_cfg->oper_app_prio);
+ u32 ice_aqc_cee_status_mask, ice_aqc_cee_status_shift, j;
u8 i, err, sync, oper, app_index, ice_app_sel_type;
+ u16 app_prio = le16_to_cpu(cee_cfg->oper_app_prio);
u16 ice_aqc_cee_app_mask, ice_aqc_cee_app_shift;
+ struct ice_dcbx_cfg *cmp_dcbcfg, *dcbcfg;
u16 ice_app_prot_id_type;
- /* CEE PG data to ETS config */
+ dcbcfg = &pi->qos_cfg.local_dcbx_cfg;
+ dcbcfg->dcbx_mode = ICE_DCBX_MODE_CEE;
+ dcbcfg->tlv_status = tlv_status;
+
+ /* CEE PG data */
dcbcfg->etscfg.maxtcs = cee_cfg->oper_num_tc;
/* Note that the FW creates the oper_prio_tc nibbles reversed
@@ -780,10 +785,16 @@ ice_cee_to_dcb_cfg(struct ice_aqc_get_cee_dcb_cfg_resp *cee_cfg,
}
}
- /* CEE PFC data to ETS config */
+ /* CEE PFC data */
dcbcfg->pfc.pfcena = cee_cfg->oper_pfc_en;
dcbcfg->pfc.pfccap = ICE_MAX_TRAFFIC_CLASS;
+ /* CEE APP TLV data */
+ if (dcbcfg->app_mode == ICE_DCBX_APPS_NON_WILLING)
+ cmp_dcbcfg = &pi->qos_cfg.desired_dcbx_cfg;
+ else
+ cmp_dcbcfg = &pi->qos_cfg.remote_dcbx_cfg;
+
app_index = 0;
for (i = 0; i < 3; i++) {
if (i == 0) {
@@ -802,6 +813,18 @@ ice_cee_to_dcb_cfg(struct ice_aqc_get_cee_dcb_cfg_resp *cee_cfg,
ice_aqc_cee_app_shift = ICE_AQC_CEE_APP_ISCSI_S;
ice_app_sel_type = ICE_APP_SEL_TCPIP;
ice_app_prot_id_type = ICE_APP_PROT_ID_ISCSI;
+
+ for (j = 0; j < cmp_dcbcfg->numapps; j++) {
+ u16 prot_id = cmp_dcbcfg->app[j].prot_id;
+ u8 sel = cmp_dcbcfg->app[j].selector;
+
+ if (sel == ICE_APP_SEL_TCPIP &&
+ (prot_id == ICE_APP_PROT_ID_ISCSI ||
+ prot_id == ICE_APP_PROT_ID_ISCSI_860)) {
+ ice_app_prot_id_type = prot_id;
+ break;
+ }
+ }
} else {
/* FIP APP */
ice_aqc_cee_status_mask = ICE_AQC_CEE_FIP_STATUS_M;
@@ -850,9 +873,9 @@ ice_get_ieee_or_cee_dcb_cfg(struct ice_port_info *pi, u8 dcbx_mode)
return ICE_ERR_PARAM;
if (dcbx_mode == ICE_DCBX_MODE_IEEE)
- dcbx_cfg = &pi->local_dcbx_cfg;
+ dcbx_cfg = &pi->qos_cfg.local_dcbx_cfg;
else if (dcbx_mode == ICE_DCBX_MODE_CEE)
- dcbx_cfg = &pi->desired_dcbx_cfg;
+ dcbx_cfg = &pi->qos_cfg.desired_dcbx_cfg;
/* Get Local DCB Config in case of ICE_DCBX_MODE_IEEE
* or get CEE DCB Desired Config in case of ICE_DCBX_MODE_CEE
@@ -863,7 +886,7 @@ ice_get_ieee_or_cee_dcb_cfg(struct ice_port_info *pi, u8 dcbx_mode)
goto out;
/* Get Remote DCB Config */
- dcbx_cfg = &pi->remote_dcbx_cfg;
+ dcbx_cfg = &pi->qos_cfg.remote_dcbx_cfg;
ret = ice_aq_get_dcb_cfg(pi->hw, ICE_AQ_LLDP_MIB_REMOTE,
ICE_AQ_LLDP_BRID_TYPE_NEAREST_BRID, dcbx_cfg);
/* Don't treat ENOENT as an error for Remote MIBs */
@@ -892,14 +915,11 @@ enum ice_status ice_get_dcb_cfg(struct ice_port_info *pi)
ret = ice_aq_get_cee_dcb_cfg(pi->hw, &cee_cfg, NULL);
if (!ret) {
/* CEE mode */
- dcbx_cfg = &pi->local_dcbx_cfg;
- dcbx_cfg->dcbx_mode = ICE_DCBX_MODE_CEE;
- dcbx_cfg->tlv_status = le32_to_cpu(cee_cfg.tlv_status);
- ice_cee_to_dcb_cfg(&cee_cfg, dcbx_cfg);
ret = ice_get_ieee_or_cee_dcb_cfg(pi, ICE_DCBX_MODE_CEE);
+ ice_cee_to_dcb_cfg(&cee_cfg, pi);
} else if (pi->hw->adminq.sq_last_status == ICE_AQ_RC_ENOENT) {
/* CEE mode not enabled try querying IEEE data */
- dcbx_cfg = &pi->local_dcbx_cfg;
+ dcbx_cfg = &pi->qos_cfg.local_dcbx_cfg;
dcbx_cfg->dcbx_mode = ICE_DCBX_MODE_IEEE;
ret = ice_get_ieee_or_cee_dcb_cfg(pi, ICE_DCBX_MODE_IEEE);
}
@@ -916,26 +936,26 @@ enum ice_status ice_get_dcb_cfg(struct ice_port_info *pi)
*/
enum ice_status ice_init_dcb(struct ice_hw *hw, bool enable_mib_change)
{
- struct ice_port_info *pi = hw->port_info;
+ struct ice_qos_cfg *qos_cfg = &hw->port_info->qos_cfg;
enum ice_status ret = 0;
if (!hw->func_caps.common_cap.dcb)
return ICE_ERR_NOT_SUPPORTED;
- pi->is_sw_lldp = true;
+ qos_cfg->is_sw_lldp = true;
/* Get DCBX status */
- pi->dcbx_status = ice_get_dcbx_status(hw);
+ qos_cfg->dcbx_status = ice_get_dcbx_status(hw);
- if (pi->dcbx_status == ICE_DCBX_STATUS_DONE ||
- pi->dcbx_status == ICE_DCBX_STATUS_IN_PROGRESS ||
- pi->dcbx_status == ICE_DCBX_STATUS_NOT_STARTED) {
+ if (qos_cfg->dcbx_status == ICE_DCBX_STATUS_DONE ||
+ qos_cfg->dcbx_status == ICE_DCBX_STATUS_IN_PROGRESS ||
+ qos_cfg->dcbx_status == ICE_DCBX_STATUS_NOT_STARTED) {
/* Get current DCBX configuration */
- ret = ice_get_dcb_cfg(pi);
+ ret = ice_get_dcb_cfg(hw->port_info);
if (ret)
return ret;
- pi->is_sw_lldp = false;
- } else if (pi->dcbx_status == ICE_DCBX_STATUS_DIS) {
+ qos_cfg->is_sw_lldp = false;
+ } else if (qos_cfg->dcbx_status == ICE_DCBX_STATUS_DIS) {
return ICE_ERR_NOT_READY;
}
@@ -943,7 +963,7 @@ enum ice_status ice_init_dcb(struct ice_hw *hw, bool enable_mib_change)
if (enable_mib_change) {
ret = ice_aq_cfg_lldp_mib_change(hw, true, NULL);
if (ret)
- pi->is_sw_lldp = true;
+ qos_cfg->is_sw_lldp = true;
}
return ret;
@@ -958,21 +978,21 @@ enum ice_status ice_init_dcb(struct ice_hw *hw, bool enable_mib_change)
*/
enum ice_status ice_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_mib)
{
- struct ice_port_info *pi = hw->port_info;
+ struct ice_qos_cfg *qos_cfg = &hw->port_info->qos_cfg;
enum ice_status ret;
if (!hw->func_caps.common_cap.dcb)
return ICE_ERR_NOT_SUPPORTED;
/* Get DCBX status */
- pi->dcbx_status = ice_get_dcbx_status(hw);
+ qos_cfg->dcbx_status = ice_get_dcbx_status(hw);
- if (pi->dcbx_status == ICE_DCBX_STATUS_DIS)
+ if (qos_cfg->dcbx_status == ICE_DCBX_STATUS_DIS)
return ICE_ERR_NOT_READY;
ret = ice_aq_cfg_lldp_mib_change(hw, ena_mib, NULL);
if (!ret)
- pi->is_sw_lldp = !ena_mib;
+ qos_cfg->is_sw_lldp = !ena_mib;
return ret;
}
@@ -1270,7 +1290,7 @@ enum ice_status ice_set_dcb_cfg(struct ice_port_info *pi)
hw = pi->hw;
/* update the HW local config */
- dcbcfg = &pi->local_dcbx_cfg;
+ dcbcfg = &pi->qos_cfg.local_dcbx_cfg;
/* Allocate the LLDPDU */
lldpmib = devm_kzalloc(ice_hw_to_dev(hw), ICE_LLDPDU_SIZE, GFP_KERNEL);
if (!lldpmib)
diff --git a/drivers/net/ethernet/intel/ice/ice_dcb_lib.c b/drivers/net/ethernet/intel/ice/ice_dcb_lib.c
index 36abd6b..1e8f71f 100644
--- a/drivers/net/ethernet/intel/ice/ice_dcb_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_dcb_lib.c
@@ -28,7 +28,7 @@ void ice_vsi_cfg_netdev_tc(struct ice_vsi *vsi, u8 ena_tc)
if (netdev_set_num_tc(netdev, vsi->tc_cfg.numtc))
return;
- dcbcfg = &pf->hw.port_info->local_dcbx_cfg;
+ dcbcfg = &pf->hw.port_info->qos_cfg.local_dcbx_cfg;
ice_for_each_traffic_class(i)
if (vsi->tc_cfg.ena_tc & BIT(i))
@@ -134,7 +134,7 @@ static u8 ice_dcb_get_mode(struct ice_port_info *port_info, bool host)
else
mode = DCB_CAP_DCBX_LLD_MANAGED;
- if (port_info->local_dcbx_cfg.dcbx_mode & ICE_DCBX_MODE_CEE)
+ if (port_info->qos_cfg.local_dcbx_cfg.dcbx_mode & ICE_DCBX_MODE_CEE)
return mode | DCB_CAP_DCBX_VER_CEE;
else
return mode | DCB_CAP_DCBX_VER_IEEE;
@@ -277,10 +277,10 @@ int ice_pf_dcb_cfg(struct ice_pf *pf, struct ice_dcbx_cfg *new_cfg, bool locked)
int ret = ICE_DCB_NO_HW_CHG;
struct ice_vsi *pf_vsi;
- curr_cfg = &pf->hw.port_info->local_dcbx_cfg;
+ curr_cfg = &pf->hw.port_info->qos_cfg.local_dcbx_cfg;
/* FW does not care if change happened */
- if (!pf->hw.port_info->is_sw_lldp)
+ if (!pf->hw.port_info->qos_cfg.is_sw_lldp)
ret = ICE_DCB_HW_CHG_RST;
/* Enable DCB tagging only when more than one TC */
@@ -327,7 +327,7 @@ int ice_pf_dcb_cfg(struct ice_pf *pf, struct ice_dcbx_cfg *new_cfg, bool locked)
/* Only send new config to HW if we are in SW LLDP mode. Otherwise,
* the new config came from the HW in the first place.
*/
- if (pf->hw.port_info->is_sw_lldp) {
+ if (pf->hw.port_info->qos_cfg.is_sw_lldp) {
ret = ice_set_dcb_cfg(pf->hw.port_info);
if (ret) {
dev_err(dev, "Set DCB Config failed\n");
@@ -360,7 +360,7 @@ int ice_pf_dcb_cfg(struct ice_pf *pf, struct ice_dcbx_cfg *new_cfg, bool locked)
*/
static void ice_cfg_etsrec_defaults(struct ice_port_info *pi)
{
- struct ice_dcbx_cfg *dcbcfg = &pi->local_dcbx_cfg;
+ struct ice_dcbx_cfg *dcbcfg = &pi->qos_cfg.local_dcbx_cfg;
u8 i;
/* Ensure ETS recommended DCB configuration is not already set */
@@ -446,7 +446,7 @@ void ice_dcb_rebuild(struct ice_pf *pf)
mutex_lock(&pf->tc_mutex);
- if (!pf->hw.port_info->is_sw_lldp)
+ if (!pf->hw.port_info->qos_cfg.is_sw_lldp)
ice_cfg_etsrec_defaults(pf->hw.port_info);
ret = ice_set_dcb_cfg(pf->hw.port_info);
@@ -455,9 +455,9 @@ void ice_dcb_rebuild(struct ice_pf *pf)
goto dcb_error;
}
- if (!pf->hw.port_info->is_sw_lldp) {
+ if (!pf->hw.port_info->qos_cfg.is_sw_lldp) {
ret = ice_cfg_lldp_mib_change(&pf->hw, true);
- if (ret && !pf->hw.port_info->is_sw_lldp) {
+ if (ret && !pf->hw.port_info->qos_cfg.is_sw_lldp) {
dev_err(dev, "Failed to register for MIB changes\n");
goto dcb_error;
}
@@ -510,11 +510,12 @@ static int ice_dcb_init_cfg(struct ice_pf *pf, bool locked)
int ret = 0;
pi = pf->hw.port_info;
- newcfg = kmemdup(&pi->local_dcbx_cfg, sizeof(*newcfg), GFP_KERNEL);
+ newcfg = kmemdup(&pi->qos_cfg.local_dcbx_cfg, sizeof(*newcfg),
+ GFP_KERNEL);
if (!newcfg)
return -ENOMEM;
- memset(&pi->local_dcbx_cfg, 0, sizeof(*newcfg));
+ memset(&pi->qos_cfg.local_dcbx_cfg, 0, sizeof(*newcfg));
dev_info(ice_pf_to_dev(pf), "Configuring initial DCB values\n");
if (ice_pf_dcb_cfg(pf, newcfg, locked))
@@ -545,7 +546,7 @@ static int ice_dcb_sw_dflt_cfg(struct ice_pf *pf, bool ets_willing, bool locked)
if (!dcbcfg)
return -ENOMEM;
- memset(&pi->local_dcbx_cfg, 0, sizeof(*dcbcfg));
+ memset(&pi->qos_cfg.local_dcbx_cfg, 0, sizeof(*dcbcfg));
dcbcfg->etscfg.willing = ets_willing ? 1 : 0;
dcbcfg->etscfg.maxtcs = hw->func_caps.common_cap.maxtc;
@@ -608,7 +609,7 @@ static bool ice_dcb_tc_contig(u8 *prio_table)
*/
static int ice_dcb_noncontig_cfg(struct ice_pf *pf)
{
- struct ice_dcbx_cfg *dcbcfg = &pf->hw.port_info->local_dcbx_cfg;
+ struct ice_dcbx_cfg *dcbcfg = &pf->hw.port_info->qos_cfg.local_dcbx_cfg;
struct device *dev = ice_pf_to_dev(pf);
int ret;
@@ -638,7 +639,7 @@ static int ice_dcb_noncontig_cfg(struct ice_pf *pf)
*/
void ice_pf_dcb_recfg(struct ice_pf *pf)
{
- struct ice_dcbx_cfg *dcbcfg = &pf->hw.port_info->local_dcbx_cfg;
+ struct ice_dcbx_cfg *dcbcfg = &pf->hw.port_info->qos_cfg.local_dcbx_cfg;
u8 tc_map = 0;
int v, ret;
@@ -691,7 +692,7 @@ int ice_init_pf_dcb(struct ice_pf *pf, bool locked)
port_info = hw->port_info;
err = ice_init_dcb(hw, false);
- if (err && !port_info->is_sw_lldp) {
+ if (err && !port_info->qos_cfg.is_sw_lldp) {
dev_err(dev, "Error initializing DCB %d\n", err);
goto dcb_init_err;
}
@@ -858,7 +859,7 @@ ice_dcb_process_lldp_set_mib_change(struct ice_pf *pf,
/* Update the remote cached instance and return */
ret = ice_aq_get_dcb_cfg(pi->hw, ICE_AQ_LLDP_MIB_REMOTE,
ICE_AQ_LLDP_BRID_TYPE_NEAREST_BRID,
- &pi->remote_dcbx_cfg);
+ &pi->qos_cfg.remote_dcbx_cfg);
if (ret) {
dev_err(dev, "Failed to get remote DCB config\n");
return;
@@ -868,10 +869,11 @@ ice_dcb_process_lldp_set_mib_change(struct ice_pf *pf,
mutex_lock(&pf->tc_mutex);
/* store the old configuration */
- tmp_dcbx_cfg = pf->hw.port_info->local_dcbx_cfg;
+ tmp_dcbx_cfg = pf->hw.port_info->qos_cfg.local_dcbx_cfg;
/* Reset the old DCBX configuration data */
- memset(&pi->local_dcbx_cfg, 0, sizeof(pi->local_dcbx_cfg));
+ memset(&pi->qos_cfg.local_dcbx_cfg, 0,
+ sizeof(pi->qos_cfg.local_dcbx_cfg));
/* Get updated DCBX data from firmware */
ret = ice_get_dcb_cfg(pf->hw.port_info);
@@ -881,7 +883,8 @@ ice_dcb_process_lldp_set_mib_change(struct ice_pf *pf,
}
/* No change detected in DCBX configs */
- if (!memcmp(&tmp_dcbx_cfg, &pi->local_dcbx_cfg, sizeof(tmp_dcbx_cfg))) {
+ if (!memcmp(&tmp_dcbx_cfg, &pi->qos_cfg.local_dcbx_cfg,
+ sizeof(tmp_dcbx_cfg))) {
dev_dbg(dev, "No change detected in DCBX configuration.\n");
goto out;
}
@@ -889,13 +892,13 @@ ice_dcb_process_lldp_set_mib_change(struct ice_pf *pf,
pf->dcbx_cap = ice_dcb_get_mode(pi, false);
need_reconfig = ice_dcb_need_recfg(pf, &tmp_dcbx_cfg,
- &pi->local_dcbx_cfg);
- ice_dcbnl_flush_apps(pf, &tmp_dcbx_cfg, &pi->local_dcbx_cfg);
+ &pi->qos_cfg.local_dcbx_cfg);
+ ice_dcbnl_flush_apps(pf, &tmp_dcbx_cfg, &pi->qos_cfg.local_dcbx_cfg);
if (!need_reconfig)
goto out;
/* Enable DCB tagging only when more than one TC */
- if (ice_dcb_get_num_tc(&pi->local_dcbx_cfg) > 1) {
+ if (ice_dcb_get_num_tc(&pi->qos_cfg.local_dcbx_cfg) > 1) {
dev_dbg(dev, "DCB tagging enabled (num TC > 1)\n");
set_bit(ICE_FLAG_DCB_ENA, pf->flags);
} else {
diff --git a/drivers/net/ethernet/intel/ice/ice_dcb_nl.c b/drivers/net/ethernet/intel/ice/ice_dcb_nl.c
index 8c133a8..4180f1f 100644
--- a/drivers/net/ethernet/intel/ice/ice_dcb_nl.c
+++ b/drivers/net/ethernet/intel/ice/ice_dcb_nl.c
@@ -18,12 +18,10 @@ static void ice_dcbnl_devreset(struct net_device *netdev)
while (ice_is_reset_in_progress(pf->state))
usleep_range(1000, 2000);
- set_bit(__ICE_DCBNL_DEVRESET, pf->state);
dev_close(netdev);
netdev_state_change(netdev);
dev_open(netdev, NULL);
netdev_state_change(netdev);
- clear_bit(__ICE_DCBNL_DEVRESET, pf->state);
}
/**
@@ -34,12 +32,10 @@ static void ice_dcbnl_devreset(struct net_device *netdev)
static int ice_dcbnl_getets(struct net_device *netdev, struct ieee_ets *ets)
{
struct ice_dcbx_cfg *dcbxcfg;
- struct ice_port_info *pi;
struct ice_pf *pf;
pf = ice_netdev_to_pf(netdev);
- pi = pf->hw.port_info;
- dcbxcfg = &pi->local_dcbx_cfg;
+ dcbxcfg = &pf->hw.port_info->qos_cfg.local_dcbx_cfg;
ets->willing = dcbxcfg->etscfg.willing;
ets->ets_cap = dcbxcfg->etscfg.maxtcs;
@@ -74,7 +70,7 @@ static int ice_dcbnl_setets(struct net_device *netdev, struct ieee_ets *ets)
!(pf->dcbx_cap & DCB_CAP_DCBX_VER_IEEE))
return -EINVAL;
- new_cfg = &pf->hw.port_info->desired_dcbx_cfg;
+ new_cfg = &pf->hw.port_info->qos_cfg.desired_dcbx_cfg;
mutex_lock(&pf->tc_mutex);
@@ -159,6 +155,7 @@ static u8 ice_dcbnl_getdcbx(struct net_device *netdev)
static u8 ice_dcbnl_setdcbx(struct net_device *netdev, u8 mode)
{
struct ice_pf *pf = ice_netdev_to_pf(netdev);
+ struct ice_qos_cfg *qos_cfg;
/* if FW LLDP agent is running, DCBNL not allowed to change mode */
if (test_bit(ICE_FLAG_FW_LLDP_AGENT, pf->flags))
@@ -175,10 +172,11 @@ static u8 ice_dcbnl_setdcbx(struct net_device *netdev, u8 mode)
return ICE_DCB_NO_HW_CHG;
pf->dcbx_cap = mode;
+ qos_cfg = &pf->hw.port_info->qos_cfg;
if (mode & DCB_CAP_DCBX_VER_CEE)
- pf->hw.port_info->local_dcbx_cfg.dcbx_mode = ICE_DCBX_MODE_CEE;
+ qos_cfg->local_dcbx_cfg.dcbx_mode = ICE_DCBX_MODE_CEE;
else
- pf->hw.port_info->local_dcbx_cfg.dcbx_mode = ICE_DCBX_MODE_IEEE;
+ qos_cfg->local_dcbx_cfg.dcbx_mode = ICE_DCBX_MODE_IEEE;
dev_info(ice_pf_to_dev(pf), "DCBx mode = 0x%x\n", mode);
return ICE_DCB_HW_CHG_RST;
@@ -229,7 +227,7 @@ static int ice_dcbnl_getpfc(struct net_device *netdev, struct ieee_pfc *pfc)
struct ice_dcbx_cfg *dcbxcfg;
int i;
- dcbxcfg = &pi->local_dcbx_cfg;
+ dcbxcfg = &pi->qos_cfg.local_dcbx_cfg;
pfc->pfc_cap = dcbxcfg->pfc.pfccap;
pfc->pfc_en = dcbxcfg->pfc.pfcena;
pfc->mbc = dcbxcfg->pfc.mbc;
@@ -260,7 +258,7 @@ static int ice_dcbnl_setpfc(struct net_device *netdev, struct ieee_pfc *pfc)
mutex_lock(&pf->tc_mutex);
- new_cfg = &pf->hw.port_info->desired_dcbx_cfg;
+ new_cfg = &pf->hw.port_info->qos_cfg.desired_dcbx_cfg;
if (pfc->pfc_cap)
new_cfg->pfc.pfccap = pfc->pfc_cap;
@@ -297,9 +295,9 @@ ice_dcbnl_get_pfc_cfg(struct net_device *netdev, int prio, u8 *setting)
if (prio >= ICE_MAX_USER_PRIORITY)
return;
- *setting = (pi->local_dcbx_cfg.pfc.pfcena >> prio) & 0x1;
+ *setting = (pi->qos_cfg.local_dcbx_cfg.pfc.pfcena >> prio) & 0x1;
dev_dbg(ice_pf_to_dev(pf), "Get PFC Config up=%d, setting=%d, pfcenable=0x%x\n",
- prio, *setting, pi->local_dcbx_cfg.pfc.pfcena);
+ prio, *setting, pi->qos_cfg.local_dcbx_cfg.pfc.pfcena);
}
/**
@@ -320,7 +318,7 @@ static void ice_dcbnl_set_pfc_cfg(struct net_device *netdev, int prio, u8 set)
if (prio >= ICE_MAX_USER_PRIORITY)
return;
- new_cfg = &pf->hw.port_info->desired_dcbx_cfg;
+ new_cfg = &pf->hw.port_info->qos_cfg.desired_dcbx_cfg;
new_cfg->pfc.pfccap = pf->hw.func_caps.common_cap.maxtc;
if (set)
@@ -342,7 +340,7 @@ static u8 ice_dcbnl_getpfcstate(struct net_device *netdev)
struct ice_port_info *pi = pf->hw.port_info;
/* Return enabled if any UP enabled for PFC */
- if (pi->local_dcbx_cfg.pfc.pfcena)
+ if (pi->qos_cfg.local_dcbx_cfg.pfc.pfcena)
return 1;
return 0;
@@ -382,8 +380,8 @@ static u8 ice_dcbnl_setstate(struct net_device *netdev, u8 state)
if (state) {
set_bit(ICE_FLAG_DCB_ENA, pf->flags);
- memcpy(&pf->hw.port_info->desired_dcbx_cfg,
- &pf->hw.port_info->local_dcbx_cfg,
+ memcpy(&pf->hw.port_info->qos_cfg.desired_dcbx_cfg,
+ &pf->hw.port_info->qos_cfg.local_dcbx_cfg,
sizeof(struct ice_dcbx_cfg));
} else {
clear_bit(ICE_FLAG_DCB_ENA, pf->flags);
@@ -417,7 +415,7 @@ ice_dcbnl_get_pg_tc_cfg_tx(struct net_device *netdev, int prio,
if (prio >= ICE_MAX_USER_PRIORITY)
return;
- *pgid = pi->local_dcbx_cfg.etscfg.prio_table[prio];
+ *pgid = pi->qos_cfg.local_dcbx_cfg.etscfg.prio_table[prio];
dev_dbg(ice_pf_to_dev(pf), "Get PG config prio=%d tc=%d\n", prio,
*pgid);
}
@@ -448,7 +446,7 @@ ice_dcbnl_set_pg_tc_cfg_tx(struct net_device *netdev, int tc,
if (tc >= ICE_MAX_TRAFFIC_CLASS)
return;
- new_cfg = &pf->hw.port_info->desired_dcbx_cfg;
+ new_cfg = &pf->hw.port_info->qos_cfg.desired_dcbx_cfg;
/* prio_type, bwg_id and bw_pct per UP are not supported */
@@ -478,7 +476,7 @@ ice_dcbnl_get_pg_bwg_cfg_tx(struct net_device *netdev, int pgid, u8 *bw_pct)
if (pgid >= ICE_MAX_TRAFFIC_CLASS)
return;
- *bw_pct = pi->local_dcbx_cfg.etscfg.tcbwtable[pgid];
+ *bw_pct = pi->qos_cfg.local_dcbx_cfg.etscfg.tcbwtable[pgid];
dev_dbg(ice_pf_to_dev(pf), "Get PG BW config tc=%d bw_pct=%d\n",
pgid, *bw_pct);
}
@@ -502,7 +500,7 @@ ice_dcbnl_set_pg_bwg_cfg_tx(struct net_device *netdev, int pgid, u8 bw_pct)
if (pgid >= ICE_MAX_TRAFFIC_CLASS)
return;
- new_cfg = &pf->hw.port_info->desired_dcbx_cfg;
+ new_cfg = &pf->hw.port_info->qos_cfg.desired_dcbx_cfg;
new_cfg->etscfg.tcbwtable[pgid] = bw_pct;
}
@@ -532,7 +530,7 @@ ice_dcbnl_get_pg_tc_cfg_rx(struct net_device *netdev, int prio,
if (prio >= ICE_MAX_USER_PRIORITY)
return;
- *pgid = pi->local_dcbx_cfg.etscfg.prio_table[prio];
+ *pgid = pi->qos_cfg.local_dcbx_cfg.etscfg.prio_table[prio];
}
/**
@@ -703,9 +701,9 @@ static int ice_dcbnl_setapp(struct net_device *netdev, struct dcb_app *app)
mutex_lock(&pf->tc_mutex);
- new_cfg = &pf->hw.port_info->desired_dcbx_cfg;
+ new_cfg = &pf->hw.port_info->qos_cfg.desired_dcbx_cfg;
- old_cfg = &pf->hw.port_info->local_dcbx_cfg;
+ old_cfg = &pf->hw.port_info->qos_cfg.local_dcbx_cfg;
if (old_cfg->numapps == ICE_DCBX_MAX_APPS) {
ret = -EINVAL;
@@ -755,7 +753,7 @@ static int ice_dcbnl_delapp(struct net_device *netdev, struct dcb_app *app)
return -EINVAL;
mutex_lock(&pf->tc_mutex);
- old_cfg = &pf->hw.port_info->local_dcbx_cfg;
+ old_cfg = &pf->hw.port_info->qos_cfg.local_dcbx_cfg;
if (old_cfg->numapps <= 1)
goto delapp_out;
@@ -764,7 +762,7 @@ static int ice_dcbnl_delapp(struct net_device *netdev, struct dcb_app *app)
if (ret)
goto delapp_out;
- new_cfg = &pf->hw.port_info->desired_dcbx_cfg;
+ new_cfg = &pf->hw.port_info->qos_cfg.desired_dcbx_cfg;
for (i = 1; i < new_cfg->numapps; i++) {
if (app->selector == new_cfg->app[i].selector &&
@@ -817,7 +815,7 @@ static u8 ice_dcbnl_cee_set_all(struct net_device *netdev)
!(pf->dcbx_cap & DCB_CAP_DCBX_VER_CEE))
return ICE_DCB_NO_HW_CHG;
- new_cfg = &pf->hw.port_info->desired_dcbx_cfg;
+ new_cfg = &pf->hw.port_info->qos_cfg.desired_dcbx_cfg;
mutex_lock(&pf->tc_mutex);
@@ -888,7 +886,7 @@ void ice_dcbnl_set_all(struct ice_vsi *vsi)
if (!test_bit(ICE_FLAG_DCB_ENA, pf->flags))
return;
- dcbxcfg = &pi->local_dcbx_cfg;
+ dcbxcfg = &pi->qos_cfg.local_dcbx_cfg;
for (i = 0; i < dcbxcfg->numapps; i++) {
u8 prio, tc_map;
diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
index aebebd2..d70573f 100644
--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
+++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
@@ -2986,7 +2986,7 @@ ice_get_pauseparam(struct net_device *netdev, struct ethtool_pauseparam *pause)
pause->rx_pause = 0;
pause->tx_pause = 0;
- dcbx_cfg = &pi->local_dcbx_cfg;
+ dcbx_cfg = &pi->qos_cfg.local_dcbx_cfg;
pcaps = kzalloc(sizeof(*pcaps), GFP_KERNEL);
if (!pcaps)
@@ -3038,7 +3038,7 @@ ice_set_pauseparam(struct net_device *netdev, struct ethtool_pauseparam *pause)
pi = vsi->port_info;
hw_link_info = &pi->phy.link_info;
- dcbx_cfg = &pi->local_dcbx_cfg;
+ dcbx_cfg = &pi->qos_cfg.local_dcbx_cfg;
link_up = hw_link_info->link_info & ICE_AQ_LINK_UP;
/* Changing the port's flow control is not supported if this isn't the
@@ -3472,7 +3472,7 @@ static void ice_get_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
netdev_warn(netdev, "Wake on LAN is not supported on this interface!\n");
/* Get WoL settings based on the HW capability */
- if (ice_is_wol_supported(pf)) {
+ if (ice_is_wol_supported(&pf->hw)) {
wol->supported = WAKE_MAGIC;
wol->wolopts = pf->wol_ena ? WAKE_MAGIC : 0;
} else {
@@ -3492,7 +3492,7 @@ static int ice_set_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
struct ice_vsi *vsi = np->vsi;
struct ice_pf *pf = vsi->back;
- if (vsi->type != ICE_VSI_PF || !ice_is_wol_supported(pf))
+ if (vsi->type != ICE_VSI_PF || !ice_is_wol_supported(&pf->hw))
return -EOPNOTSUPP;
/* only magic packet is supported */
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index ad9c22a..e138450 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -2078,7 +2078,7 @@ int ice_cfg_vlan_pruning(struct ice_vsi *vsi, bool ena, bool vlan_promisc)
static void ice_vsi_set_tc_cfg(struct ice_vsi *vsi)
{
- struct ice_dcbx_cfg *cfg = &vsi->port_info->local_dcbx_cfg;
+ struct ice_dcbx_cfg *cfg = &vsi->port_info->qos_cfg.local_dcbx_cfg;
vsi->tc_cfg.ena_tc = ice_dcb_get_ena_tc(cfg);
vsi->tc_cfg.numtc = ice_dcb_get_num_tc(cfg);
@@ -2489,7 +2489,7 @@ int ice_ena_vsi(struct ice_vsi *vsi, bool locked)
if (!locked)
rtnl_lock();
- err = ice_open(vsi->netdev);
+ err = ice_open_internal(vsi->netdev);
if (!locked)
rtnl_unlock();
@@ -2518,7 +2518,7 @@ void ice_dis_vsi(struct ice_vsi *vsi, bool locked)
if (!locked)
rtnl_lock();
- ice_stop(vsi->netdev);
+ ice_vsi_close(vsi);
if (!locked)
rtnl_unlock();
@@ -2684,39 +2684,47 @@ int ice_vsi_release(struct ice_vsi *vsi)
}
/**
- * ice_vsi_rebuild_update_coalesce - set coalesce for a q_vector
+ * ice_vsi_rebuild_update_coalesce_intrl - set interrupt rate limit for a q_vector
* @q_vector: pointer to q_vector which is being updated
- * @coalesce: pointer to array of struct with stored coalesce
+ * @stored_intrl_setting: original INTRL setting
*
* Set coalesce param in q_vector and update these parameters in HW.
*/
static void
-ice_vsi_rebuild_update_coalesce(struct ice_q_vector *q_vector,
- struct ice_coalesce_stored *coalesce)
+ice_vsi_rebuild_update_coalesce_intrl(struct ice_q_vector *q_vector,
+ u16 stored_intrl_setting)
{
- struct ice_ring_container *rx_rc = &q_vector->rx;
- struct ice_ring_container *tx_rc = &q_vector->tx;
struct ice_hw *hw = &q_vector->vsi->back->hw;
- tx_rc->itr_setting = coalesce->itr_tx;
- rx_rc->itr_setting = coalesce->itr_rx;
-
- /* dynamic ITR values will be updated during Tx/Rx */
- if (!ITR_IS_DYNAMIC(tx_rc->itr_setting))
- wr32(hw, GLINT_ITR(tx_rc->itr_idx, q_vector->reg_idx),
- ITR_REG_ALIGN(tx_rc->itr_setting) >>
- ICE_ITR_GRAN_S);
- if (!ITR_IS_DYNAMIC(rx_rc->itr_setting))
- wr32(hw, GLINT_ITR(rx_rc->itr_idx, q_vector->reg_idx),
- ITR_REG_ALIGN(rx_rc->itr_setting) >>
- ICE_ITR_GRAN_S);
-
- q_vector->intrl = coalesce->intrl;
+ q_vector->intrl = stored_intrl_setting;
wr32(hw, GLINT_RATE(q_vector->reg_idx),
ice_intrl_usec_to_reg(q_vector->intrl, hw->intrl_gran));
}
/**
+ * ice_vsi_rebuild_update_coalesce_itr - set coalesce for a q_vector
+ * @q_vector: pointer to q_vector which is being updated
+ * @rc: pointer to ring container
+ * @stored_itr_setting: original ITR setting
+ *
+ * Set coalesce param in q_vector and update these parameters in HW.
+ */
+static void
+ice_vsi_rebuild_update_coalesce_itr(struct ice_q_vector *q_vector,
+ struct ice_ring_container *rc,
+ u16 stored_itr_setting)
+{
+ struct ice_hw *hw = &q_vector->vsi->back->hw;
+
+ rc->itr_setting = stored_itr_setting;
+
+ /* dynamic ITR values will be updated during Tx/Rx */
+ if (!ITR_IS_DYNAMIC(rc->itr_setting))
+ wr32(hw, GLINT_ITR(rc->itr_idx, q_vector->reg_idx),
+ ITR_REG_ALIGN(rc->itr_setting) >> ICE_ITR_GRAN_S);
+}
+
+/**
* ice_vsi_rebuild_get_coalesce - get coalesce from all q_vectors
* @vsi: VSI connected with q_vectors
* @coalesce: array of struct with stored coalesce
@@ -2735,6 +2743,11 @@ ice_vsi_rebuild_get_coalesce(struct ice_vsi *vsi,
coalesce[i].itr_tx = q_vector->tx.itr_setting;
coalesce[i].itr_rx = q_vector->rx.itr_setting;
coalesce[i].intrl = q_vector->intrl;
+
+ if (i < vsi->num_txq)
+ coalesce[i].tx_valid = true;
+ if (i < vsi->num_rxq)
+ coalesce[i].rx_valid = true;
}
return vsi->num_q_vectors;
@@ -2759,17 +2772,59 @@ ice_vsi_rebuild_set_coalesce(struct ice_vsi *vsi,
if ((size && !coalesce) || !vsi)
return;
- for (i = 0; i < size && i < vsi->num_q_vectors; i++)
- ice_vsi_rebuild_update_coalesce(vsi->q_vectors[i],
- &coalesce[i]);
-
- /* number of q_vectors increased, so assume coalesce settings were
- * changed globally (i.e. ethtool -C eth0 instead of per-queue) and use
- * the previous settings from q_vector 0 for all of the new q_vectors
+ /* There are a couple of cases that have to be handled here:
+ * 1. The case where the number of queue vectors stays the same, but
+ * the number of Tx or Rx rings changes (the first for loop)
+ * 2. The case where the number of queue vectors increased (the
+ * second for loop)
*/
- for (; i < vsi->num_q_vectors; i++)
- ice_vsi_rebuild_update_coalesce(vsi->q_vectors[i],
- &coalesce[0]);
+ for (i = 0; i < size && i < vsi->num_q_vectors; i++) {
+ /* There are 2 cases to handle here and they are the same for
+ * both Tx and Rx:
+ * if the entry was valid previously (coalesce[i].[tr]x_valid)
+ * and the loop variable is less than the number of rings
+ * allocated, then write the previous values
+ *
+ * if the entry was not valid previously, but the loop variable
+ * is still less than the number of rings allocated (this means
+ * the number of rings increased from previously), then write
+ * out the values in the first element
+ */
+ if (i < vsi->alloc_rxq && coalesce[i].rx_valid)
+ ice_vsi_rebuild_update_coalesce_itr(vsi->q_vectors[i],
+ &vsi->q_vectors[i]->rx,
+ coalesce[i].itr_rx);
+ else if (i < vsi->alloc_rxq)
+ ice_vsi_rebuild_update_coalesce_itr(vsi->q_vectors[i],
+ &vsi->q_vectors[i]->rx,
+ coalesce[0].itr_rx);
+
+ if (i < vsi->alloc_txq && coalesce[i].tx_valid)
+ ice_vsi_rebuild_update_coalesce_itr(vsi->q_vectors[i],
+ &vsi->q_vectors[i]->tx,
+ coalesce[i].itr_tx);
+ else if (i < vsi->alloc_txq)
+ ice_vsi_rebuild_update_coalesce_itr(vsi->q_vectors[i],
+ &vsi->q_vectors[i]->tx,
+ coalesce[0].itr_tx);
+
+ ice_vsi_rebuild_update_coalesce_intrl(vsi->q_vectors[i],
+ coalesce[i].intrl);
+ }
+
+ /* the number of queue vectors increased so write whatever is in
+ * the first element
+ */
+ for (; i < vsi->num_q_vectors; i++) {
+ ice_vsi_rebuild_update_coalesce_itr(vsi->q_vectors[i],
+ &vsi->q_vectors[i]->tx,
+ coalesce[0].itr_tx);
+ ice_vsi_rebuild_update_coalesce_itr(vsi->q_vectors[i],
+ &vsi->q_vectors[i]->rx,
+ coalesce[0].itr_rx);
+ ice_vsi_rebuild_update_coalesce_intrl(vsi->q_vectors[i],
+ coalesce[0].intrl);
+ }
}
/**
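The comment above describes the per-vector restore rule; the sketch below condenses it under the assumption of a flat array of stored entries (names are illustrative, not driver code): a vector that still owns a ring reuses its own stored ITR when one was recorded, and otherwise, like newly added vectors, inherits the settings of vector 0.

	/* Illustrative helper: which stored coalesce entry should queue
	 * vector i be restored from after a rebuild?
	 */
	static unsigned int pick_stored_entry(unsigned int i, unsigned int prev_cnt,
					      const unsigned char *valid)
	{
		if (i < prev_cnt && valid[i])
			return i;	/* entry was recorded for this vector: reuse it */
		return 0;		/* new vector or changed ring count: use vector 0 */
	}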
@@ -2798,9 +2853,11 @@ int ice_vsi_rebuild(struct ice_vsi *vsi, bool init_vsi)
coalesce = kcalloc(vsi->num_q_vectors,
sizeof(struct ice_coalesce_stored), GFP_KERNEL);
- if (coalesce)
- prev_num_q_vectors = ice_vsi_rebuild_get_coalesce(vsi,
- coalesce);
+ if (!coalesce)
+ return -ENOMEM;
+
+ prev_num_q_vectors = ice_vsi_rebuild_get_coalesce(vsi, coalesce);
+
ice_rm_vsi_lan_cfg(vsi->port_info, vsi->idx);
ice_vsi_free_q_vectors(vsi);
@@ -2944,7 +3001,6 @@ int ice_vsi_rebuild(struct ice_vsi *vsi, bool init_vsi)
bool ice_is_reset_in_progress(unsigned long *state)
{
return test_bit(__ICE_RESET_OICR_RECV, state) ||
- test_bit(__ICE_DCBNL_DEVRESET, state) ||
test_bit(__ICE_PFR_REQ, state) ||
test_bit(__ICE_CORER_REQ, state) ||
test_bit(__ICE_GLOBR_REQ, state);
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index bacb368..6f30aad 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -3515,15 +3515,14 @@ static int ice_init_interrupt_scheme(struct ice_pf *pf)
}
/**
- * ice_is_wol_supported - get NVM state of WoL
- * @pf: board private structure
+ * ice_is_wol_supported - check if WoL is supported
+ * @hw: pointer to hardware info
*
* Check if WoL is supported based on the HW configuration.
* Returns true if NVM supports and enables WoL for this port, false otherwise
*/
-bool ice_is_wol_supported(struct ice_pf *pf)
+bool ice_is_wol_supported(struct ice_hw *hw)
{
- struct ice_hw *hw = &pf->hw;
u16 wol_ctrl;
/* A bit set to 1 in the NVM Software Reserved Word 2 (WoL control
@@ -3532,7 +3531,7 @@ bool ice_is_wol_supported(struct ice_pf *pf)
if (ice_read_sr_word(hw, ICE_SR_NVM_WOL_CFG, &wol_ctrl))
return false;
- return !(BIT(hw->pf_id) & wol_ctrl);
+ return !(BIT(hw->port_info->lport) & wol_ctrl);
}
/**
@@ -4170,28 +4169,25 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
goto err_send_version_unroll;
}
+ /* not a fatal error if this fails */
err = ice_init_nvm_phy_type(pf->hw.port_info);
- if (err) {
+ if (err)
dev_err(dev, "ice_init_nvm_phy_type failed: %d\n", err);
- goto err_send_version_unroll;
- }
+ /* not a fatal error if this fails */
err = ice_update_link_info(pf->hw.port_info);
- if (err) {
+ if (err)
dev_err(dev, "ice_update_link_info failed: %d\n", err);
- goto err_send_version_unroll;
- }
ice_init_link_dflt_override(pf->hw.port_info);
/* if media available, initialize PHY settings */
if (pf->hw.port_info->phy.link_info.link_info &
ICE_AQ_MEDIA_AVAILABLE) {
+ /* not a fatal error if this fails */
err = ice_init_phy_user_cfg(pf->hw.port_info);
- if (err) {
+ if (err)
dev_err(dev, "ice_init_phy_user_cfg failed: %d\n", err);
- goto err_send_version_unroll;
- }
if (!test_bit(ICE_FLAG_LINK_DOWN_ON_CLOSE_ENA, pf->flags)) {
struct ice_vsi *vsi = ice_get_main_vsi(pf);
@@ -4542,6 +4538,7 @@ static int __maybe_unused ice_suspend(struct device *dev)
continue;
ice_vsi_free_q_vectors(pf->vsi[v]);
}
+ ice_free_cpu_rx_rmap(ice_get_main_vsi(pf));
ice_clear_interrupt_scheme(pf);
pci_save_state(pdev);
@@ -6618,6 +6615,28 @@ static void ice_tx_timeout(struct net_device *netdev, unsigned int txqueue)
int ice_open(struct net_device *netdev)
{
struct ice_netdev_priv *np = netdev_priv(netdev);
+ struct ice_pf *pf = np->vsi->back;
+
+ if (ice_is_reset_in_progress(pf->state)) {
+ netdev_err(netdev, "can't open net device while reset is in progress");
+ return -EBUSY;
+ }
+
+ return ice_open_internal(netdev);
+}
+
+/**
+ * ice_open_internal - Called when a network interface becomes active
+ * @netdev: network interface device structure
+ *
+ * Internal ice_open implementation. Should not be called directly except by ice_open and the
+ * reset handling routine.
+ *
+ * Returns 0 on success, negative value on failure
+ */
+int ice_open_internal(struct net_device *netdev)
+{
+ struct ice_netdev_priv *np = netdev_priv(netdev);
struct ice_vsi *vsi = np->vsi;
struct ice_pf *pf = vsi->back;
struct ice_port_info *pi;
@@ -6696,6 +6715,12 @@ int ice_stop(struct net_device *netdev)
{
struct ice_netdev_priv *np = netdev_priv(netdev);
struct ice_vsi *vsi = np->vsi;
+ struct ice_pf *pf = vsi->back;
+
+ if (ice_is_reset_in_progress(pf->state)) {
+ netdev_err(netdev, "can't stop net device while reset is in progress");
+ return -EBUSY;
+ }
ice_vsi_close(vsi);
diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c
index c3a6c41..5ce8590 100644
--- a/drivers/net/ethernet/intel/ice/ice_switch.c
+++ b/drivers/net/ethernet/intel/ice/ice_switch.c
@@ -1239,6 +1239,9 @@ ice_add_update_vsi_list(struct ice_hw *hw,
ice_create_vsi_list_map(hw, &vsi_handle_arr[0], 2,
vsi_list_id);
+ if (!m_entry->vsi_list_info)
+ return ICE_ERR_NO_MEMORY;
+
/* If this entry was large action then the large action needs
* to be updated to point to FWD to VSI list
*/
@@ -2224,6 +2227,7 @@ ice_vsi_uses_fltr(struct ice_fltr_mgmt_list_entry *fm_entry, u16 vsi_handle)
return ((fm_entry->fltr_info.fltr_act == ICE_FWD_TO_VSI &&
fm_entry->fltr_info.vsi_handle == vsi_handle) ||
(fm_entry->fltr_info.fltr_act == ICE_FWD_TO_VSI_LIST &&
+ fm_entry->vsi_list_info &&
(test_bit(vsi_handle, fm_entry->vsi_list_info->vsi_map))));
}
@@ -2296,14 +2300,12 @@ ice_add_to_vsi_fltr_list(struct ice_hw *hw, u16 vsi_handle,
return ICE_ERR_PARAM;
list_for_each_entry(fm_entry, lkup_list_head, list_entry) {
- struct ice_fltr_info *fi;
-
- fi = &fm_entry->fltr_info;
- if (!fi || !ice_vsi_uses_fltr(fm_entry, vsi_handle))
+ if (!ice_vsi_uses_fltr(fm_entry, vsi_handle))
continue;
status = ice_add_entry_to_vsi_fltr_list(hw, vsi_handle,
- vsi_list_head, fi);
+ vsi_list_head,
+ &fm_entry->fltr_info);
if (status)
return status;
}
@@ -2626,7 +2628,7 @@ ice_remove_vsi_lkup_fltr(struct ice_hw *hw, u16 vsi_handle,
&remove_list_head);
mutex_unlock(rule_lock);
if (status)
- return;
+ goto free_fltr_list;
switch (lkup) {
case ICE_SW_LKUP_MAC:
@@ -2649,6 +2651,7 @@ ice_remove_vsi_lkup_fltr(struct ice_hw *hw, u16 vsi_handle,
break;
}
+free_fltr_list:
list_for_each_entry_safe(fm_entry, tmp, &remove_list_head, list_entry) {
list_del(&fm_entry->list_entry);
devm_kfree(ice_hw_to_dev(hw), fm_entry);
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index af5b7f3..0f2544c 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -2421,7 +2421,7 @@ ice_xmit_frame_ring(struct sk_buff *skb, struct ice_ring *tx_ring)
/* allow CONTROL frames egress from main VSI if FW LLDP disabled */
if (unlikely(skb->priority == TC_PRIO_CONTROL &&
vsi->type == ICE_VSI_PF &&
- vsi->port_info->is_sw_lldp))
+ vsi->port_info->qos_cfg.is_sw_lldp))
offload.cd_qw1 |= (u64)(ICE_TX_DESC_DTYPE_CTX |
ICE_TX_CTX_DESC_SWTCH_UPLINK <<
ICE_TXD_CTX_QW1_CMD_S);
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h
index ff1a1cb..eab7cea 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.h
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.h
@@ -351,6 +351,8 @@ struct ice_coalesce_stored {
u16 itr_tx;
u16 itr_rx;
u8 intrl;
+ u8 tx_valid;
+ u8 rx_valid;
};
/* iterator for handling rings in ring container */
diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h
index 2226a29..1bed183 100644
--- a/drivers/net/ethernet/intel/ice/ice_type.h
+++ b/drivers/net/ethernet/intel/ice/ice_type.h
@@ -493,6 +493,7 @@ struct ice_dcb_app_priority_table {
#define ICE_TLV_STATUS_ERR 0x4
#define ICE_APP_PROT_ID_FCOE 0x8906
#define ICE_APP_PROT_ID_ISCSI 0x0cbc
+#define ICE_APP_PROT_ID_ISCSI_860 0x035c
#define ICE_APP_PROT_ID_FIP 0x8914
#define ICE_APP_SEL_ETHTYPE 0x1
#define ICE_APP_SEL_TCPIP 0x2
@@ -514,6 +515,14 @@ struct ice_dcbx_cfg {
#define ICE_DCBX_APPS_NON_WILLING 0x1
};
+struct ice_qos_cfg {
+ struct ice_dcbx_cfg local_dcbx_cfg; /* Oper/Local Cfg */
+ struct ice_dcbx_cfg desired_dcbx_cfg; /* CEE Desired Cfg */
+ struct ice_dcbx_cfg remote_dcbx_cfg; /* Peer Cfg */
+ u8 dcbx_status : 3; /* see ICE_DCBX_STATUS_DIS */
+ u8 is_sw_lldp : 1;
+};
+
struct ice_port_info {
struct ice_sched_node *root; /* Root Node per Port */
struct ice_hw *hw; /* back pointer to HW instance */
@@ -537,13 +546,7 @@ struct ice_port_info {
sib_head[ICE_MAX_TRAFFIC_CLASS][ICE_AQC_TOPO_MAX_LEVEL_NUM];
/* List contain profile ID(s) and other params per layer */
struct list_head rl_prof_list[ICE_AQC_TOPO_MAX_LEVEL_NUM];
- struct ice_dcbx_cfg local_dcbx_cfg; /* Oper/Local Cfg */
- /* DCBX info */
- struct ice_dcbx_cfg remote_dcbx_cfg; /* Peer Cfg */
- struct ice_dcbx_cfg desired_dcbx_cfg; /* CEE Desired Cfg */
- /* LLDP/DCBX Status */
- u8 dcbx_status:3; /* see ICE_DCBX_STATUS_DIS */
- u8 is_sw_lldp:1;
+ struct ice_qos_cfg qos_cfg;
u8 is_vf:1;
};
diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index fecfcfc..368f0aa 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -4482,8 +4482,7 @@ static void igb_setup_mrqc(struct igb_adapter *adapter)
else
mrqc |= E1000_MRQC_ENABLE_VMDQ;
} else {
- if (hw->mac.type != e1000_i211)
- mrqc |= E1000_MRQC_ENABLE_RSS_MQ;
+ mrqc |= E1000_MRQC_ENABLE_RSS_MQ;
}
igb_vmm_control(adapter);
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 278fc86..0b9fddb 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -6896,6 +6896,11 @@ static int __maybe_unused ixgbe_resume(struct device *dev_d)
adapter->hw.hw_addr = adapter->io_addr;
+ err = pci_enable_device_mem(pdev);
+ if (err) {
+ e_dev_err("Cannot enable PCI device from suspend\n");
+ return err;
+ }
smp_mb__before_atomic();
clear_bit(__IXGBE_DISABLED, &adapter->state);
pci_set_master(pdev);
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
index 988db46..214a38d 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
@@ -467,12 +467,16 @@ static int ixgbe_set_vf_vlan(struct ixgbe_adapter *adapter, int add, int vid,
return err;
}
-static s32 ixgbe_set_vf_lpe(struct ixgbe_adapter *adapter, u32 *msgbuf, u32 vf)
+static int ixgbe_set_vf_lpe(struct ixgbe_adapter *adapter, u32 max_frame, u32 vf)
{
struct ixgbe_hw *hw = &adapter->hw;
- int max_frame = msgbuf[1];
u32 max_frs;
+ if (max_frame < ETH_MIN_MTU || max_frame > IXGBE_MAX_JUMBO_FRAME_SIZE) {
+ e_err(drv, "VF max_frame %d out of range\n", max_frame);
+ return -EINVAL;
+ }
+
/*
* For 82599EB we have to keep all PFs and VFs operating with
* the same max_frame value in order to avoid sending an oversize
@@ -533,12 +537,6 @@ static s32 ixgbe_set_vf_lpe(struct ixgbe_adapter *adapter, u32 *msgbuf, u32 vf)
}
}
- /* MTU < 68 is an error and causes problems on some kernels */
- if (max_frame > IXGBE_MAX_JUMBO_FRAME_SIZE) {
- e_err(drv, "VF max_frame %d out of range\n", max_frame);
- return -EINVAL;
- }
-
/* pull current max frame size from hardware */
max_frs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
max_frs &= IXGBE_MHADD_MFS_MASK;
@@ -1249,7 +1247,7 @@ static int ixgbe_rcv_msg_from_vf(struct ixgbe_adapter *adapter, u32 vf)
retval = ixgbe_set_vf_vlan_msg(adapter, msgbuf, vf);
break;
case IXGBE_VF_SET_LPE:
- retval = ixgbe_set_vf_lpe(adapter, msgbuf, vf);
+ retval = ixgbe_set_vf_lpe(adapter, msgbuf[1], vf);
break;
case IXGBE_VF_SET_MACVLAN:
retval = ixgbe_set_vf_macvlan_msg(adapter, msgbuf, vf);
diff --git a/drivers/net/ethernet/lantiq_xrx200.c b/drivers/net/ethernet/lantiq_xrx200.c
index 51ed8a5..135ba5b 100644
--- a/drivers/net/ethernet/lantiq_xrx200.c
+++ b/drivers/net/ethernet/lantiq_xrx200.c
@@ -154,6 +154,7 @@ static int xrx200_close(struct net_device *net_dev)
static int xrx200_alloc_skb(struct xrx200_chan *ch)
{
+ dma_addr_t mapping;
int ret = 0;
ch->skb[ch->dma.desc] = netdev_alloc_skb_ip_align(ch->priv->net_dev,
@@ -163,16 +164,17 @@ static int xrx200_alloc_skb(struct xrx200_chan *ch)
goto skip;
}
- ch->dma.desc_base[ch->dma.desc].addr = dma_map_single(ch->priv->dev,
- ch->skb[ch->dma.desc]->data, XRX200_DMA_DATA_LEN,
- DMA_FROM_DEVICE);
- if (unlikely(dma_mapping_error(ch->priv->dev,
- ch->dma.desc_base[ch->dma.desc].addr))) {
+ mapping = dma_map_single(ch->priv->dev, ch->skb[ch->dma.desc]->data,
+ XRX200_DMA_DATA_LEN, DMA_FROM_DEVICE);
+ if (unlikely(dma_mapping_error(ch->priv->dev, mapping))) {
dev_kfree_skb_any(ch->skb[ch->dma.desc]);
ret = -ENOMEM;
goto skip;
}
+ ch->dma.desc_base[ch->dma.desc].addr = mapping;
+ /* Make sure the address is written before we give it to HW */
+ wmb();
skip:
ch->dma.desc_base[ch->dma.desc].ctl =
LTQ_DMA_OWN | LTQ_DMA_RX_OFFSET(NET_IP_ALIGN) |
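The reordering above exists so the descriptor address is published before the hardware can see the descriptor as owned. A minimal sketch of that publish pattern, with illustrative names and a generic compiler barrier standing in for wmb():

	struct demo_desc {
		volatile unsigned int addr;
		volatile unsigned int ctl;
	};

	#define DEMO_OWN	(1u << 31)

	static void demo_publish(struct demo_desc *d, unsigned int dma_addr)
	{
		d->addr = dma_addr;		/* 1. fill in the buffer address       */
		__sync_synchronize();		/* 2. barrier, stands in for wmb()     */
		d->ctl = DEMO_OWN;		/* 3. only now hand the descriptor off */
	}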
@@ -196,6 +198,8 @@ static int xrx200_hw_receive(struct xrx200_chan *ch)
ch->dma.desc %= LTQ_DESC_NUM;
if (ret) {
+ ch->skb[ch->dma.desc] = skb;
+ net_dev->stats.rx_dropped++;
netdev_err(net_dev, "failed to allocate new rx buffer\n");
return ret;
}
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
index 8347758..a1aefce 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
@@ -909,6 +909,14 @@ enum mvpp22_ptp_packet_format {
#define MVPP2_DESC_DMA_MASK DMA_BIT_MASK(40)
+/* Buffer header info bits */
+#define MVPP2_B_HDR_INFO_MC_ID_MASK 0xfff
+#define MVPP2_B_HDR_INFO_MC_ID(info) ((info) & MVPP2_B_HDR_INFO_MC_ID_MASK)
+#define MVPP2_B_HDR_INFO_LAST_OFFS 12
+#define MVPP2_B_HDR_INFO_LAST_MASK BIT(12)
+#define MVPP2_B_HDR_INFO_IS_LAST(info) \
+ (((info) & MVPP2_B_HDR_INFO_LAST_MASK) >> MVPP2_B_HDR_INFO_LAST_OFFS)
+
struct mvpp2_tai;
/* Definitions */
@@ -918,6 +926,20 @@ struct mvpp2_rss_table {
u32 indir[MVPP22_RSS_TABLE_ENTRIES];
};
+struct mvpp2_buff_hdr {
+ __le32 next_phys_addr;
+ __le32 next_dma_addr;
+ __le16 byte_count;
+ __le16 info;
+ __le16 reserved1; /* bm_qset (for future use, BM) */
+ u8 next_phys_addr_high;
+ u8 next_dma_addr_high;
+ __le16 reserved2;
+ __le16 reserved3;
+ __le16 reserved4;
+ __le16 reserved5;
+};
+
/* Shared Packet Processor resources */
struct mvpp2 {
/* Shared registers' base addresses */
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
index f5333fc..6aa13c9 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
@@ -3481,6 +3481,35 @@ mvpp2_run_xdp(struct mvpp2_port *port, struct mvpp2_rx_queue *rxq,
return ret;
}
+static void mvpp2_buff_hdr_pool_put(struct mvpp2_port *port, struct mvpp2_rx_desc *rx_desc,
+ int pool, u32 rx_status)
+{
+ phys_addr_t phys_addr, phys_addr_next;
+ dma_addr_t dma_addr, dma_addr_next;
+ struct mvpp2_buff_hdr *buff_hdr;
+
+ phys_addr = mvpp2_rxdesc_dma_addr_get(port, rx_desc);
+ dma_addr = mvpp2_rxdesc_cookie_get(port, rx_desc);
+
+ do {
+ buff_hdr = (struct mvpp2_buff_hdr *)phys_to_virt(phys_addr);
+
+ phys_addr_next = le32_to_cpu(buff_hdr->next_phys_addr);
+ dma_addr_next = le32_to_cpu(buff_hdr->next_dma_addr);
+
+ if (port->priv->hw_version >= MVPP22) {
+ phys_addr_next |= ((u64)buff_hdr->next_phys_addr_high << 32);
+ dma_addr_next |= ((u64)buff_hdr->next_dma_addr_high << 32);
+ }
+
+ mvpp2_bm_pool_put(port, pool, dma_addr, phys_addr);
+
+ phys_addr = phys_addr_next;
+ dma_addr = dma_addr_next;
+
+ } while (!MVPP2_B_HDR_INFO_IS_LAST(le16_to_cpu(buff_hdr->info)));
+}
+
/* Main rx processing */
static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
int rx_todo, struct mvpp2_rx_queue *rxq)
@@ -3527,14 +3556,6 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
MVPP2_RXD_BM_POOL_ID_OFFS;
bm_pool = &port->priv->bm_pools[pool];
- /* In case of an error, release the requested buffer pointer
- * to the Buffer Manager. This request process is controlled
- * by the hardware, and the information about the buffer is
- * comprised by the RX descriptor.
- */
- if (rx_status & MVPP2_RXD_ERR_SUMMARY)
- goto err_drop_frame;
-
if (port->priv->percpu_pools) {
pp = port->priv->page_pool[pool];
dma_dir = page_pool_get_dma_dir(pp);
@@ -3546,6 +3567,18 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
rx_bytes + MVPP2_MH_SIZE,
dma_dir);
+ /* Buffer header not supported */
+ if (rx_status & MVPP2_RXD_BUF_HDR)
+ goto err_drop_frame;
+
+ /* In case of an error, release the requested buffer pointer
+ * to the Buffer Manager. This request process is controlled
+ * by the hardware, and the information about the buffer is
+ * comprised by the RX descriptor.
+ */
+ if (rx_status & MVPP2_RXD_ERR_SUMMARY)
+ goto err_drop_frame;
+
/* Prefetch header */
prefetch(data);
@@ -3627,7 +3660,10 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
dev->stats.rx_errors++;
mvpp2_rx_error(port, rx_desc);
/* Return the buffer to the pool */
- mvpp2_bm_pool_put(port, pool, dma_addr, phys_addr);
+ if (rx_status & MVPP2_RXD_BUF_HDR)
+ mvpp2_buff_hdr_pool_put(port, rx_desc, pool, rx_status);
+ else
+ mvpp2_bm_pool_put(port, pool, dma_addr, phys_addr);
}
rcu_read_unlock();
diff --git a/drivers/net/ethernet/marvell/prestera/prestera_main.c b/drivers/net/ethernet/marvell/prestera/prestera_main.c
index da4b286..feb69fc 100644
--- a/drivers/net/ethernet/marvell/prestera/prestera_main.c
+++ b/drivers/net/ethernet/marvell/prestera/prestera_main.c
@@ -436,7 +436,8 @@ static void prestera_port_handle_event(struct prestera_switch *sw,
netif_carrier_on(port->dev);
if (!delayed_work_pending(caching_dw))
queue_delayed_work(prestera_wq, caching_dw, 0);
- } else {
+ } else if (netif_running(port->dev) &&
+ netif_carrier_ok(port->dev)) {
netif_carrier_off(port->dev);
if (delayed_work_pending(caching_dw))
cancel_delayed_work(caching_dw);
diff --git a/drivers/net/ethernet/marvell/pxa168_eth.c b/drivers/net/ethernet/marvell/pxa168_eth.c
index d1e4d42..3712e17 100644
--- a/drivers/net/ethernet/marvell/pxa168_eth.c
+++ b/drivers/net/ethernet/marvell/pxa168_eth.c
@@ -1544,8 +1544,8 @@ static int pxa168_eth_remove(struct platform_device *pdev)
clk_disable_unprepare(pep->clk);
mdiobus_unregister(pep->smi_bus);
mdiobus_free(pep->smi_bus);
- unregister_netdev(dev);
cancel_work_sync(&pep->tx_timeout_task);
+ unregister_netdev(dev);
free_netdev(dev);
return 0;
}
diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
index 6d2d606..a2d3f04 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
@@ -679,32 +679,53 @@ static int mtk_set_mac_address(struct net_device *dev, void *p)
void mtk_stats_update_mac(struct mtk_mac *mac)
{
struct mtk_hw_stats *hw_stats = mac->hw_stats;
- unsigned int base = MTK_GDM1_TX_GBCNT;
- u64 stats;
-
- base += hw_stats->reg_offset;
+ struct mtk_eth *eth = mac->hw;
u64_stats_update_begin(&hw_stats->syncp);
- hw_stats->rx_bytes += mtk_r32(mac->hw, base);
- stats = mtk_r32(mac->hw, base + 0x04);
- if (stats)
- hw_stats->rx_bytes += (stats << 32);
- hw_stats->rx_packets += mtk_r32(mac->hw, base + 0x08);
- hw_stats->rx_overflow += mtk_r32(mac->hw, base + 0x10);
- hw_stats->rx_fcs_errors += mtk_r32(mac->hw, base + 0x14);
- hw_stats->rx_short_errors += mtk_r32(mac->hw, base + 0x18);
- hw_stats->rx_long_errors += mtk_r32(mac->hw, base + 0x1c);
- hw_stats->rx_checksum_errors += mtk_r32(mac->hw, base + 0x20);
- hw_stats->rx_flow_control_packets +=
- mtk_r32(mac->hw, base + 0x24);
- hw_stats->tx_skip += mtk_r32(mac->hw, base + 0x28);
- hw_stats->tx_collisions += mtk_r32(mac->hw, base + 0x2c);
- hw_stats->tx_bytes += mtk_r32(mac->hw, base + 0x30);
- stats = mtk_r32(mac->hw, base + 0x34);
- if (stats)
- hw_stats->tx_bytes += (stats << 32);
- hw_stats->tx_packets += mtk_r32(mac->hw, base + 0x38);
+ if (MTK_HAS_CAPS(eth->soc->caps, MTK_SOC_MT7628)) {
+ hw_stats->tx_packets += mtk_r32(mac->hw, MT7628_SDM_TPCNT);
+ hw_stats->tx_bytes += mtk_r32(mac->hw, MT7628_SDM_TBCNT);
+ hw_stats->rx_packets += mtk_r32(mac->hw, MT7628_SDM_RPCNT);
+ hw_stats->rx_bytes += mtk_r32(mac->hw, MT7628_SDM_RBCNT);
+ hw_stats->rx_checksum_errors +=
+ mtk_r32(mac->hw, MT7628_SDM_CS_ERR);
+ } else {
+ unsigned int offs = hw_stats->reg_offset;
+ u64 stats;
+
+ hw_stats->rx_bytes += mtk_r32(mac->hw,
+ MTK_GDM1_RX_GBCNT_L + offs);
+ stats = mtk_r32(mac->hw, MTK_GDM1_RX_GBCNT_H + offs);
+ if (stats)
+ hw_stats->rx_bytes += (stats << 32);
+ hw_stats->rx_packets +=
+ mtk_r32(mac->hw, MTK_GDM1_RX_GPCNT + offs);
+ hw_stats->rx_overflow +=
+ mtk_r32(mac->hw, MTK_GDM1_RX_OERCNT + offs);
+ hw_stats->rx_fcs_errors +=
+ mtk_r32(mac->hw, MTK_GDM1_RX_FERCNT + offs);
+ hw_stats->rx_short_errors +=
+ mtk_r32(mac->hw, MTK_GDM1_RX_SERCNT + offs);
+ hw_stats->rx_long_errors +=
+ mtk_r32(mac->hw, MTK_GDM1_RX_LENCNT + offs);
+ hw_stats->rx_checksum_errors +=
+ mtk_r32(mac->hw, MTK_GDM1_RX_CERCNT + offs);
+ hw_stats->rx_flow_control_packets +=
+ mtk_r32(mac->hw, MTK_GDM1_RX_FCCNT + offs);
+ hw_stats->tx_skip +=
+ mtk_r32(mac->hw, MTK_GDM1_TX_SKIPCNT + offs);
+ hw_stats->tx_collisions +=
+ mtk_r32(mac->hw, MTK_GDM1_TX_COLCNT + offs);
+ hw_stats->tx_bytes +=
+ mtk_r32(mac->hw, MTK_GDM1_TX_GBCNT_L + offs);
+ stats = mtk_r32(mac->hw, MTK_GDM1_TX_GBCNT_H + offs);
+ if (stats)
+ hw_stats->tx_bytes += (stats << 32);
+ hw_stats->tx_packets +=
+ mtk_r32(mac->hw, MTK_GDM1_TX_GPCNT + offs);
+ }
+
u64_stats_update_end(&hw_stats->syncp);
}
@@ -1319,7 +1340,7 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget,
skb->protocol = eth_type_trans(skb, netdev);
if (netdev->features & NETIF_F_HW_VLAN_CTAG_RX &&
- RX_DMA_VID(trxd.rxd3))
+ (trxd.rxd2 & RX_DMA_VTAG))
__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
RX_DMA_VID(trxd.rxd3));
skb_record_rx_queue(skb, 0);
diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
index 454cfcd..54a7cd9 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
@@ -266,8 +266,21 @@
/* QDMA FQ Free Page Buffer Length Register */
#define MTK_QDMA_FQ_BLEN 0x1B2C
-/* GMA1 Received Good Byte Count Register */
-#define MTK_GDM1_TX_GBCNT 0x2400
+/* GMA1 counter / statistics register */
+#define MTK_GDM1_RX_GBCNT_L 0x2400
+#define MTK_GDM1_RX_GBCNT_H 0x2404
+#define MTK_GDM1_RX_GPCNT 0x2408
+#define MTK_GDM1_RX_OERCNT 0x2410
+#define MTK_GDM1_RX_FERCNT 0x2414
+#define MTK_GDM1_RX_SERCNT 0x2418
+#define MTK_GDM1_RX_LENCNT 0x241c
+#define MTK_GDM1_RX_CERCNT 0x2420
+#define MTK_GDM1_RX_FCCNT 0x2424
+#define MTK_GDM1_TX_SKIPCNT 0x2428
+#define MTK_GDM1_TX_COLCNT 0x242c
+#define MTK_GDM1_TX_GBCNT_L 0x2430
+#define MTK_GDM1_TX_GBCNT_H 0x2434
+#define MTK_GDM1_TX_GPCNT 0x2438
#define MTK_STAT_OFFSET 0x40
/* QDMA descriptor txd4 */
@@ -295,6 +308,7 @@
#define RX_DMA_LSO BIT(30)
#define RX_DMA_PLEN0(_x) (((_x) & 0x3fff) << 16)
#define RX_DMA_GET_PLEN0(_x) (((_x) >> 16) & 0x3fff)
+#define RX_DMA_VTAG BIT(15)
/* QDMA descriptor rxd3 */
#define RX_DMA_VID(_x) ((_x) & 0xfff)
@@ -477,6 +491,13 @@
#define MT7628_SDM_MAC_ADRL (MT7628_SDM_OFFSET + 0x0c)
#define MT7628_SDM_MAC_ADRH (MT7628_SDM_OFFSET + 0x10)
+/* Counter / stat register */
+#define MT7628_SDM_TPCNT (MT7628_SDM_OFFSET + 0x100)
+#define MT7628_SDM_TBCNT (MT7628_SDM_OFFSET + 0x104)
+#define MT7628_SDM_RPCNT (MT7628_SDM_OFFSET + 0x108)
+#define MT7628_SDM_RBCNT (MT7628_SDM_OFFSET + 0x10c)
+#define MT7628_SDM_CS_ERR (MT7628_SDM_OFFSET + 0x110)
+
struct mtk_rx_dma {
unsigned int rxd1;
unsigned int rxd2;
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
index 1434df6..3616b77 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
@@ -2027,8 +2027,6 @@ static int mlx4_en_set_tunable(struct net_device *dev,
return ret;
}
-#define MLX4_EEPROM_PAGE_LEN 256
-
static int mlx4_en_get_module_info(struct net_device *dev,
struct ethtool_modinfo *modinfo)
{
@@ -2063,7 +2061,7 @@ static int mlx4_en_get_module_info(struct net_device *dev,
break;
case MLX4_MODULE_ID_SFP:
modinfo->type = ETH_MODULE_SFF_8472;
- modinfo->eeprom_len = MLX4_EEPROM_PAGE_LEN;
+ modinfo->eeprom_len = ETH_MODULE_SFF_8472_LEN;
break;
default:
return -EINVAL;
diff --git a/drivers/net/ethernet/mellanox/mlx4/port.c b/drivers/net/ethernet/mellanox/mlx4/port.c
index ba6ac31..256a06b3 100644
--- a/drivers/net/ethernet/mellanox/mlx4/port.c
+++ b/drivers/net/ethernet/mellanox/mlx4/port.c
@@ -1973,6 +1973,7 @@ EXPORT_SYMBOL(mlx4_get_roce_gid_from_slave);
#define I2C_ADDR_LOW 0x50
#define I2C_ADDR_HIGH 0x51
#define I2C_PAGE_SIZE 256
+#define I2C_HIGH_PAGE_SIZE 128
/* Module Info Data */
struct mlx4_cable_info {
@@ -2026,6 +2027,88 @@ static inline const char *cable_info_mad_err_str(u16 mad_status)
return "Unknown Error";
}
+static int mlx4_get_module_id(struct mlx4_dev *dev, u8 port, u8 *module_id)
+{
+ struct mlx4_cmd_mailbox *inbox, *outbox;
+ struct mlx4_mad_ifc *inmad, *outmad;
+ struct mlx4_cable_info *cable_info;
+ int ret;
+
+ inbox = mlx4_alloc_cmd_mailbox(dev);
+ if (IS_ERR(inbox))
+ return PTR_ERR(inbox);
+
+ outbox = mlx4_alloc_cmd_mailbox(dev);
+ if (IS_ERR(outbox)) {
+ mlx4_free_cmd_mailbox(dev, inbox);
+ return PTR_ERR(outbox);
+ }
+
+ inmad = (struct mlx4_mad_ifc *)(inbox->buf);
+ outmad = (struct mlx4_mad_ifc *)(outbox->buf);
+
+ inmad->method = 0x1; /* Get */
+ inmad->class_version = 0x1;
+ inmad->mgmt_class = 0x1;
+ inmad->base_version = 0x1;
+ inmad->attr_id = cpu_to_be16(0xFF60); /* Module Info */
+
+ cable_info = (struct mlx4_cable_info *)inmad->data;
+ cable_info->dev_mem_address = 0;
+ cable_info->page_num = 0;
+ cable_info->i2c_addr = I2C_ADDR_LOW;
+ cable_info->size = cpu_to_be16(1);
+
+ ret = mlx4_cmd_box(dev, inbox->dma, outbox->dma, port, 3,
+ MLX4_CMD_MAD_IFC, MLX4_CMD_TIME_CLASS_C,
+ MLX4_CMD_NATIVE);
+ if (ret)
+ goto out;
+
+ if (be16_to_cpu(outmad->status)) {
+ /* Mad returned with bad status */
+ ret = be16_to_cpu(outmad->status);
+ mlx4_warn(dev,
+ "MLX4_CMD_MAD_IFC Get Module ID attr(%x) port(%d) i2c_addr(%x) offset(%d) size(%d): Response Mad Status(%x) - %s\n",
+ 0xFF60, port, I2C_ADDR_LOW, 0, 1, ret,
+ cable_info_mad_err_str(ret));
+ ret = -ret;
+ goto out;
+ }
+ cable_info = (struct mlx4_cable_info *)outmad->data;
+ *module_id = cable_info->data[0];
+out:
+ mlx4_free_cmd_mailbox(dev, inbox);
+ mlx4_free_cmd_mailbox(dev, outbox);
+ return ret;
+}
+
+static void mlx4_sfp_eeprom_params_set(u8 *i2c_addr, u8 *page_num, u16 *offset)
+{
+ *i2c_addr = I2C_ADDR_LOW;
+ *page_num = 0;
+
+ if (*offset < I2C_PAGE_SIZE)
+ return;
+
+ *i2c_addr = I2C_ADDR_HIGH;
+ *offset -= I2C_PAGE_SIZE;
+}
+
+static void mlx4_qsfp_eeprom_params_set(u8 *i2c_addr, u8 *page_num, u16 *offset)
+{
+ /* Offsets 0-255 belong to page 0.
+ * Offsets 256-639 belong to pages 01, 02, 03.
+ * For example, offset 400 is page 02: 1 + (400 - 256) / 128 = 2
+ */
+ if (*offset < I2C_PAGE_SIZE)
+ *page_num = 0;
+ else
+ *page_num = 1 + (*offset - I2C_PAGE_SIZE) / I2C_HIGH_PAGE_SIZE;
+ *i2c_addr = I2C_ADDR_LOW;
+ *offset -= *page_num * I2C_HIGH_PAGE_SIZE;
+}
+
/**
* mlx4_get_module_info - Read cable module eeprom data
* @dev: mlx4_dev.
@@ -2045,12 +2128,30 @@ int mlx4_get_module_info(struct mlx4_dev *dev, u8 port,
struct mlx4_cmd_mailbox *inbox, *outbox;
struct mlx4_mad_ifc *inmad, *outmad;
struct mlx4_cable_info *cable_info;
- u16 i2c_addr;
+ u8 module_id, i2c_addr, page_num;
int ret;
if (size > MODULE_INFO_MAX_READ)
size = MODULE_INFO_MAX_READ;
+ ret = mlx4_get_module_id(dev, port, &module_id);
+ if (ret)
+ return ret;
+
+ switch (module_id) {
+ case MLX4_MODULE_ID_SFP:
+ mlx4_sfp_eeprom_params_set(&i2c_addr, &page_num, &offset);
+ break;
+ case MLX4_MODULE_ID_QSFP:
+ case MLX4_MODULE_ID_QSFP_PLUS:
+ case MLX4_MODULE_ID_QSFP28:
+ mlx4_qsfp_eeprom_params_set(&i2c_addr, &page_num, &offset);
+ break;
+ default:
+ mlx4_err(dev, "Module ID not recognized: %#x\n", module_id);
+ return -EINVAL;
+ }
+
inbox = mlx4_alloc_cmd_mailbox(dev);
if (IS_ERR(inbox))
return PTR_ERR(inbox);
@@ -2076,11 +2177,9 @@ int mlx4_get_module_info(struct mlx4_dev *dev, u8 port,
*/
size -= offset + size - I2C_PAGE_SIZE;
- i2c_addr = I2C_ADDR_LOW;
-
cable_info = (struct mlx4_cable_info *)inmad->data;
cable_info->dev_mem_address = cpu_to_be16(offset);
- cable_info->page_num = 0;
+ cable_info->page_num = page_num;
cable_info->i2c_addr = i2c_addr;
cable_info->size = cpu_to_be16(size);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/port.c b/drivers/net/ethernet/mellanox/mlx5/core/en/port.c
index 308fd27..89510ca 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/port.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/port.c
@@ -387,21 +387,6 @@ enum mlx5e_fec_supported_link_mode {
*_policy = MLX5_GET(pplm_reg, _buf, fec_override_admin_##link); \
} while (0)
-#define MLX5E_FEC_OVERRIDE_ADMIN_50G_POLICY(buf, policy, write, link) \
- do { \
- unsigned long policy_long; \
- u16 *__policy = &(policy); \
- bool _write = (write); \
- \
- policy_long = *__policy; \
- if (_write && *__policy) \
- *__policy = find_first_bit(&policy_long, \
- sizeof(policy_long) * BITS_PER_BYTE);\
- MLX5E_FEC_OVERRIDE_ADMIN_POLICY(buf, *__policy, _write, link); \
- if (!_write && *__policy) \
- *__policy = 1 << *__policy; \
- } while (0)
-
/* get/set FEC admin field for a given speed */
static int mlx5e_fec_admin_field(u32 *pplm, u16 *fec_policy, bool write,
enum mlx5e_fec_supported_link_mode link_mode)
@@ -423,16 +408,16 @@ static int mlx5e_fec_admin_field(u32 *pplm, u16 *fec_policy, bool write,
MLX5E_FEC_OVERRIDE_ADMIN_POLICY(pplm, *fec_policy, write, 100g);
break;
case MLX5E_FEC_SUPPORTED_LINK_MODE_50G_1X:
- MLX5E_FEC_OVERRIDE_ADMIN_50G_POLICY(pplm, *fec_policy, write, 50g_1x);
+ MLX5E_FEC_OVERRIDE_ADMIN_POLICY(pplm, *fec_policy, write, 50g_1x);
break;
case MLX5E_FEC_SUPPORTED_LINK_MODE_100G_2X:
- MLX5E_FEC_OVERRIDE_ADMIN_50G_POLICY(pplm, *fec_policy, write, 100g_2x);
+ MLX5E_FEC_OVERRIDE_ADMIN_POLICY(pplm, *fec_policy, write, 100g_2x);
break;
case MLX5E_FEC_SUPPORTED_LINK_MODE_200G_4X:
- MLX5E_FEC_OVERRIDE_ADMIN_50G_POLICY(pplm, *fec_policy, write, 200g_4x);
+ MLX5E_FEC_OVERRIDE_ADMIN_POLICY(pplm, *fec_policy, write, 200g_4x);
break;
case MLX5E_FEC_SUPPORTED_LINK_MODE_400G_8X:
- MLX5E_FEC_OVERRIDE_ADMIN_50G_POLICY(pplm, *fec_policy, write, 400g_8x);
+ MLX5E_FEC_OVERRIDE_ADMIN_POLICY(pplm, *fec_policy, write, 400g_8x);
break;
default:
return -EINVAL;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bond.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bond.c
index 95f2b26..9c076aa 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bond.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bond.c
@@ -223,6 +223,8 @@ static void mlx5e_rep_changelowerstate_event(struct net_device *netdev, void *pt
rpriv = priv->ppriv;
fwd_vport_num = rpriv->rep->vport;
lag_dev = netdev_master_upper_dev_get(netdev);
+ if (!lag_dev)
+ return;
netdev_dbg(netdev, "lag_dev(%s)'s slave vport(%d) is txable(%d)\n",
lag_dev->name, fwd_vport_num, net_lag_port_dev_txable(netdev));
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c
index 76177f7..e6f7827 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c
@@ -643,7 +643,7 @@ bool mlx5e_rep_tc_update_skb(struct mlx5_cqe64 *cqe,
}
if (chain) {
- tc_skb_ext = skb_ext_add(skb, TC_SKB_EXT);
+ tc_skb_ext = tc_skb_ext_alloc(skb);
if (!tc_skb_ext) {
WARN_ON(1);
return false;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
index b42396d..0469f53 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
@@ -185,6 +185,28 @@ mlx5_tc_ct_entry_has_nat(struct mlx5_ct_entry *entry)
}
static int
+mlx5_get_label_mapping(struct mlx5_tc_ct_priv *ct_priv,
+ u32 *labels, u32 *id)
+{
+ if (!memchr_inv(labels, 0, sizeof(u32) * 4)) {
+ *id = 0;
+ return 0;
+ }
+
+ if (mapping_add(ct_priv->labels_mapping, labels, id))
+ return -EOPNOTSUPP;
+
+ return 0;
+}
+
+static void
+mlx5_put_label_mapping(struct mlx5_tc_ct_priv *ct_priv, u32 id)
+{
+ if (id)
+ mapping_remove(ct_priv->labels_mapping, id);
+}
+
+static int
mlx5_tc_ct_rule_to_tuple(struct mlx5_ct_tuple *tuple, struct flow_rule *rule)
{
struct flow_match_control control;
@@ -435,7 +457,7 @@ mlx5_tc_ct_entry_del_rule(struct mlx5_tc_ct_priv *ct_priv,
mlx5_tc_rule_delete(netdev_priv(ct_priv->netdev), zone_rule->rule, attr);
mlx5e_mod_hdr_detach(ct_priv->dev,
ct_priv->mod_hdr_tbl, zone_rule->mh);
- mapping_remove(ct_priv->labels_mapping, attr->ct_attr.ct_labels_id);
+ mlx5_put_label_mapping(ct_priv, attr->ct_attr.ct_labels_id);
kfree(attr);
}
@@ -638,8 +660,8 @@ mlx5_tc_ct_entry_create_mod_hdr(struct mlx5_tc_ct_priv *ct_priv,
if (!meta)
return -EOPNOTSUPP;
- err = mapping_add(ct_priv->labels_mapping, meta->ct_metadata.labels,
- &attr->ct_attr.ct_labels_id);
+ err = mlx5_get_label_mapping(ct_priv, meta->ct_metadata.labels,
+ &attr->ct_attr.ct_labels_id);
if (err)
return -EOPNOTSUPP;
if (nat) {
@@ -675,7 +697,7 @@ mlx5_tc_ct_entry_create_mod_hdr(struct mlx5_tc_ct_priv *ct_priv,
err_mapping:
dealloc_mod_hdr_actions(&mod_acts);
- mapping_remove(ct_priv->labels_mapping, attr->ct_attr.ct_labels_id);
+ mlx5_put_label_mapping(ct_priv, attr->ct_attr.ct_labels_id);
return err;
}
@@ -743,7 +765,7 @@ mlx5_tc_ct_entry_add_rule(struct mlx5_tc_ct_priv *ct_priv,
err_rule:
mlx5e_mod_hdr_detach(ct_priv->dev,
ct_priv->mod_hdr_tbl, zone_rule->mh);
- mapping_remove(ct_priv->labels_mapping, attr->ct_attr.ct_labels_id);
+ mlx5_put_label_mapping(ct_priv, attr->ct_attr.ct_labels_id);
err_mod_hdr:
kfree(attr);
err_attr:
@@ -1198,7 +1220,7 @@ void mlx5_tc_ct_match_del(struct mlx5_tc_ct_priv *priv, struct mlx5_ct_attr *ct_
if (!priv || !ct_attr->ct_labels_id)
return;
- mapping_remove(priv->labels_mapping, ct_attr->ct_labels_id);
+ mlx5_put_label_mapping(priv, ct_attr->ct_labels_id);
}
int
@@ -1276,7 +1298,7 @@ mlx5_tc_ct_match_add(struct mlx5_tc_ct_priv *priv,
ct_labels[1] = key->ct_labels[1] & mask->ct_labels[1];
ct_labels[2] = key->ct_labels[2] & mask->ct_labels[2];
ct_labels[3] = key->ct_labels[3] & mask->ct_labels[3];
- if (mapping_add(priv->labels_mapping, ct_labels, &ct_attr->ct_labels_id))
+ if (mlx5_get_label_mapping(priv, ct_labels, &ct_attr->ct_labels_id))
return -EOPNOTSUPP;
mlx5e_tc_match_to_reg_match(spec, LABELS_TO_REG, ct_attr->ct_labels_id,
MLX5_CT_LABELS_MASK);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
index bcd0545..986f0d8 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
@@ -744,11 +744,11 @@ static int get_fec_supported_advertised(struct mlx5_core_dev *dev,
return 0;
}
-static void ptys2ethtool_supported_advertised_port(struct ethtool_link_ksettings *link_ksettings,
- u32 eth_proto_cap,
- u8 connector_type, bool ext)
+static void ptys2ethtool_supported_advertised_port(struct mlx5_core_dev *mdev,
+ struct ethtool_link_ksettings *link_ksettings,
+ u32 eth_proto_cap, u8 connector_type)
{
- if ((!connector_type && !ext) || connector_type >= MLX5E_CONNECTOR_TYPE_NUMBER) {
+ if (!MLX5_CAP_PCAM_FEATURE(mdev, ptys_connector_type)) {
if (eth_proto_cap & (MLX5E_PROT_MASK(MLX5E_10GBASE_CR)
| MLX5E_PROT_MASK(MLX5E_10GBASE_SR)
| MLX5E_PROT_MASK(MLX5E_40GBASE_CR4)
@@ -884,9 +884,9 @@ static int ptys2connector_type[MLX5E_CONNECTOR_TYPE_NUMBER] = {
[MLX5E_PORT_OTHER] = PORT_OTHER,
};
-static u8 get_connector_port(u32 eth_proto, u8 connector_type, bool ext)
+static u8 get_connector_port(struct mlx5_core_dev *mdev, u32 eth_proto, u8 connector_type)
{
- if ((connector_type || ext) && connector_type < MLX5E_CONNECTOR_TYPE_NUMBER)
+ if (MLX5_CAP_PCAM_FEATURE(mdev, ptys_connector_type))
return ptys2connector_type[connector_type];
if (eth_proto &
@@ -987,11 +987,11 @@ int mlx5e_ethtool_get_link_ksettings(struct mlx5e_priv *priv,
data_rate_oper, link_ksettings);
eth_proto_oper = eth_proto_oper ? eth_proto_oper : eth_proto_cap;
-
- link_ksettings->base.port = get_connector_port(eth_proto_oper,
- connector_type, ext);
- ptys2ethtool_supported_advertised_port(link_ksettings, eth_proto_admin,
- connector_type, ext);
+ connector_type = connector_type < MLX5E_CONNECTOR_TYPE_NUMBER ?
+ connector_type : MLX5E_PORT_UNKNOWN;
+ link_ksettings->base.port = get_connector_port(mdev, eth_proto_oper, connector_type);
+ ptys2ethtool_supported_advertised_port(mdev, link_ksettings, eth_proto_admin,
+ connector_type);
get_lp_advertising(mdev, eth_proto_lp, link_ksettings);
if (an_status == MLX5_AN_COMPLETE)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
index 7ad332d..93877be 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
@@ -35,6 +35,7 @@
#include <linux/ipv6.h>
#include <linux/tcp.h>
#include <linux/mlx5/fs.h>
+#include <linux/mlx5/mpfs.h>
#include "en.h"
#include "lib/mpfs.h"
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index e2006c6..f18b52b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -2326,8 +2326,9 @@ static u8 mlx5e_build_icosq_log_wq_sz(struct mlx5e_params *params,
{
switch (params->rq_wq_type) {
case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ:
- return order_base_2(MLX5E_UMR_WQEBBS) +
- mlx5e_get_rq_log_wq_sz(rqp->rqc);
+ return max_t(u8, MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE,
+ order_base_2(MLX5E_UMR_WQEBBS) +
+ mlx5e_get_rq_log_wq_sz(rqp->rqc));
default: /* MLX5_WQ_TYPE_CYCLIC */
return MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE;
}
@@ -2919,7 +2920,7 @@ static int mlx5e_update_netdev_queues(struct mlx5e_priv *priv)
int err;
old_num_txqs = netdev->real_num_tx_queues;
- old_ntc = netdev->num_tc;
+ old_ntc = netdev->num_tc ? : 1;
nch = priv->channels.params.num_channels;
ntc = priv->channels.params.num_tc;
@@ -5384,6 +5385,11 @@ struct net_device *mlx5e_create_netdev(struct mlx5_core_dev *mdev,
return NULL;
}
+static void mlx5e_reset_channels(struct net_device *netdev)
+{
+ netdev_reset_tc(netdev);
+}
+
int mlx5e_attach_netdev(struct mlx5e_priv *priv)
{
const bool take_rtnl = priv->netdev->reg_state == NETREG_REGISTERED;
@@ -5437,6 +5443,7 @@ int mlx5e_attach_netdev(struct mlx5e_priv *priv)
profile->cleanup_tx(priv);
out:
+ mlx5e_reset_channels(priv->netdev);
set_bit(MLX5E_STATE_DESTROYING, &priv->state);
cancel_work_sync(&priv->update_stats_work);
return err;
@@ -5454,6 +5461,7 @@ void mlx5e_detach_netdev(struct mlx5e_priv *priv)
profile->cleanup_rx(priv);
profile->cleanup_tx(priv);
+ mlx5e_reset_channels(priv->netdev);
cancel_work_sync(&priv->update_stats_work);
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index 930f19c..1bdeb94 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -2196,6 +2196,9 @@ static int mlx5e_flower_parse_meta(struct net_device *filter_dev,
return 0;
flow_rule_match_meta(rule, &match);
+ if (!match.mask->ingress_ifindex)
+ return 0;
+
if (match.mask->ingress_ifindex != 0xFFFFFFFF) {
NL_SET_ERR_MSG_MOD(extack, "Unsupported ingress ifindex mask");
return -EOPNOTSUPP;
@@ -4022,8 +4025,12 @@ static int add_vlan_push_action(struct mlx5e_priv *priv,
if (err)
return err;
- *out_dev = dev_get_by_index_rcu(dev_net(vlan_dev),
- dev_get_iflink(vlan_dev));
+ rcu_read_lock();
+ *out_dev = dev_get_by_index_rcu(dev_net(vlan_dev), dev_get_iflink(vlan_dev));
+ rcu_read_unlock();
+ if (!*out_dev)
+ return -ENODEV;
+
if (is_vlan_dev(*out_dev))
err = add_vlan_push_action(priv, attr, out_dev, action);
@@ -5487,7 +5494,7 @@ bool mlx5e_tc_update_skb(struct mlx5_cqe64 *cqe,
}
if (chain) {
- tc_skb_ext = skb_ext_add(skb, TC_SKB_EXT);
+ tc_skb_ext = tc_skb_ext_alloc(skb);
if (WARN_ON(!tc_skb_ext))
return false;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
index 38a23d2..3736680 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
@@ -486,7 +486,7 @@ static void mlx5e_tx_mpwqe_session_start(struct mlx5e_txqsq *sq,
pi = mlx5e_txqsq_get_next_pi(sq, MLX5E_TX_MPW_MAX_WQEBBS);
wqe = MLX5E_TX_FETCH_WQE(sq, pi);
- prefetchw(wqe->data);
+ net_prefetchw(wqe->data);
*session = (struct mlx5e_tx_mpwqe) {
.wqe = wqe,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
index 8ebfe78..ccd53a7 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
@@ -926,13 +926,24 @@ void mlx5_core_eq_free_irqs(struct mlx5_core_dev *dev)
mutex_unlock(&table->lock);
}
+#ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING
+#define MLX5_MAX_ASYNC_EQS 4
+#else
+#define MLX5_MAX_ASYNC_EQS 3
+#endif
+
int mlx5_eq_table_create(struct mlx5_core_dev *dev)
{
struct mlx5_eq_table *eq_table = dev->priv.eq_table;
+ int num_eqs = MLX5_CAP_GEN(dev, max_num_eqs) ?
+ MLX5_CAP_GEN(dev, max_num_eqs) :
+ 1 << MLX5_CAP_GEN(dev, log_max_eq);
int err;
eq_table->num_comp_eqs =
- mlx5_irq_get_num_comp(eq_table->irq_table);
+ min_t(int,
+ mlx5_irq_get_num_comp(eq_table->irq_table),
+ num_eqs - MLX5_MAX_ASYNC_EQS);
err = create_async_eqs(dev);
if (err) {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index d4ee0a9..d61539b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -35,6 +35,7 @@
#include <linux/mlx5/mlx5_ifc.h>
#include <linux/mlx5/vport.h>
#include <linux/mlx5/fs.h>
+#include <linux/mlx5/mpfs.h>
#include "esw/acl/lgcy.h"
#include "mlx5_core.h"
#include "lib/eq.h"
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c
index ec67956..1cbb330 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c
@@ -76,10 +76,11 @@ mlx5_eswitch_termtbl_create(struct mlx5_core_dev *dev,
/* As this is the terminating action then the termination table is the
* same prio as the slow path
*/
- ft_attr.flags = MLX5_FLOW_TABLE_TERMINATION |
+ ft_attr.flags = MLX5_FLOW_TABLE_TERMINATION | MLX5_FLOW_TABLE_UNMANAGED |
MLX5_FLOW_TABLE_TUNNEL_EN_REFORMAT;
- ft_attr.prio = FDB_SLOW_PATH;
+ ft_attr.prio = FDB_TC_OFFLOAD;
ft_attr.max_fte = 1;
+ ft_attr.level = 1;
ft_attr.autogroup.max_num_groups = 1;
tt->termtbl = mlx5_create_auto_grouped_flow_table(root_ns, &ft_attr);
if (IS_ERR(tt->termtbl)) {
@@ -171,19 +172,6 @@ mlx5_eswitch_termtbl_put(struct mlx5_eswitch *esw,
}
}
-static bool mlx5_eswitch_termtbl_is_encap_reformat(struct mlx5_pkt_reformat *rt)
-{
- switch (rt->reformat_type) {
- case MLX5_REFORMAT_TYPE_L2_TO_VXLAN:
- case MLX5_REFORMAT_TYPE_L2_TO_NVGRE:
- case MLX5_REFORMAT_TYPE_L2_TO_L2_TUNNEL:
- case MLX5_REFORMAT_TYPE_L2_TO_L3_TUNNEL:
- return true;
- default:
- return false;
- }
-}
-
static void
mlx5_eswitch_termtbl_actions_move(struct mlx5_flow_act *src,
struct mlx5_flow_act *dst)
@@ -201,14 +189,6 @@ mlx5_eswitch_termtbl_actions_move(struct mlx5_flow_act *src,
memset(&src->vlan[1], 0, sizeof(src->vlan[1]));
}
}
-
- if (src->action & MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT &&
- mlx5_eswitch_termtbl_is_encap_reformat(src->pkt_reformat)) {
- src->action &= ~MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT;
- dst->action |= MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT;
- dst->pkt_reformat = src->pkt_reformat;
- src->pkt_reformat = NULL;
- }
}
static bool mlx5_eswitch_offload_is_uplink_port(const struct mlx5_eswitch *esw,
@@ -237,6 +217,7 @@ mlx5_eswitch_termtbl_required(struct mlx5_eswitch *esw,
int i;
if (!MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, termination_table) ||
+ !MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, ignore_flow_level) ||
attr->flags & MLX5_ESW_ATTR_FLAG_SLOW_PATH ||
!mlx5_eswitch_offload_is_uplink_port(esw, spec))
return false;
@@ -278,6 +259,14 @@ mlx5_eswitch_add_termtbl_rule(struct mlx5_eswitch *esw,
if (dest[i].type != MLX5_FLOW_DESTINATION_TYPE_VPORT)
continue;
+ if (attr->dests[num_vport_dests].flags & MLX5_ESW_DEST_ENCAP) {
+ term_tbl_act.action |= MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT;
+ term_tbl_act.pkt_reformat = attr->dests[num_vport_dests].pkt_reformat;
+ } else {
+ term_tbl_act.action &= ~MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT;
+ term_tbl_act.pkt_reformat = NULL;
+ }
+
/* get the terminating table for the action list */
tt = mlx5_eswitch_termtbl_get_create(esw, &term_tbl_act,
&dest[i], attr);
@@ -299,6 +288,9 @@ mlx5_eswitch_add_termtbl_rule(struct mlx5_eswitch *esw,
goto revert_changes;
/* create the FTE */
+ flow_act->action &= ~MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT;
+ flow_act->pkt_reformat = NULL;
+ flow_act->flags |= FLOW_ACT_IGNORE_FLOW_LEVEL;
rule = mlx5_add_flow_rules(fdb, spec, flow_act, dest, num_dest);
if (IS_ERR(rule))
goto revert_changes;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.c
index cc67366..bed154e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.c
@@ -850,7 +850,7 @@ mlx5_fpga_ipsec_release_sa_ctx(struct mlx5_fpga_ipsec_sa_ctx *sa_ctx)
return;
}
- if (sa_ctx->fpga_xfrm->accel_xfrm.attrs.action &
+ if (sa_ctx->fpga_xfrm->accel_xfrm.attrs.action ==
MLX5_ACCEL_ESP_ACTION_DECRYPT)
ida_simple_remove(&fipsec->halloc, sa_ctx->sa_handle);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c b/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c
index 88e58ac..15c3a90 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c
@@ -307,6 +307,11 @@ int mlx5_lag_mp_init(struct mlx5_lag *ldev)
struct lag_mp *mp = &ldev->lag_mp;
int err;
+ /* always clear mfi, as it might become stale when a route delete event
+ * has been missed
+ */
+ mp->mfi = NULL;
+
if (mp->fib_nb.notifier_call)
return 0;
@@ -335,4 +340,5 @@ void mlx5_lag_mp_cleanup(struct mlx5_lag *ldev)
unregister_fib_notifier(&init_net, &mp->fib_nb);
destroy_workqueue(mp->wq);
mp->fib_nb.notifier_call = NULL;
+ mp->mfi = NULL;
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.c
index fd8449f..839a01d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.c
@@ -33,6 +33,7 @@
#include <linux/etherdevice.h>
#include <linux/mlx5/driver.h>
#include <linux/mlx5/mlx5_ifc.h>
+#include <linux/mlx5/mpfs.h>
#include <linux/mlx5/eswitch.h>
#include "mlx5_core.h"
#include "lib/mpfs.h"
@@ -175,6 +176,7 @@ int mlx5_mpfs_add_mac(struct mlx5_core_dev *dev, u8 *mac)
mutex_unlock(&mpfs->lock);
return err;
}
+EXPORT_SYMBOL(mlx5_mpfs_add_mac);
int mlx5_mpfs_del_mac(struct mlx5_core_dev *dev, u8 *mac)
{
@@ -206,3 +208,4 @@ int mlx5_mpfs_del_mac(struct mlx5_core_dev *dev, u8 *mac)
mutex_unlock(&mpfs->lock);
return err;
}
+EXPORT_SYMBOL(mlx5_mpfs_del_mac);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.h
index 4a7b2c3..4a29354 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.h
@@ -84,12 +84,9 @@ struct l2addr_node {
#ifdef CONFIG_MLX5_MPFS
int mlx5_mpfs_init(struct mlx5_core_dev *dev);
void mlx5_mpfs_cleanup(struct mlx5_core_dev *dev);
-int mlx5_mpfs_add_mac(struct mlx5_core_dev *dev, u8 *mac);
-int mlx5_mpfs_del_mac(struct mlx5_core_dev *dev, u8 *mac);
#else /* #ifndef CONFIG_MLX5_MPFS */
static inline int mlx5_mpfs_init(struct mlx5_core_dev *dev) { return 0; }
static inline void mlx5_mpfs_cleanup(struct mlx5_core_dev *dev) {}
-static inline int mlx5_mpfs_add_mac(struct mlx5_core_dev *dev, u8 *mac) { return 0; }
-static inline int mlx5_mpfs_del_mac(struct mlx5_core_dev *dev, u8 *mac) { return 0; }
#endif
+
#endif
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
index 74b3959..3e7576e 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
@@ -20,6 +20,7 @@
#include <net/red.h>
#include <net/vxlan.h>
#include <net/flow_offload.h>
+#include <net/inet_ecn.h>
#include "port.h"
#include "core.h"
@@ -345,6 +346,20 @@ struct mlxsw_sp_port_type_speed_ops {
u32 (*ptys_proto_cap_masked_get)(u32 eth_proto_cap);
};
+static inline u8 mlxsw_sp_tunnel_ecn_decap(u8 outer_ecn, u8 inner_ecn,
+ bool *trap_en)
+{
+ bool set_ce = false;
+
+ *trap_en = !!__INET_ECN_decapsulate(outer_ecn, inner_ecn, &set_ce);
+ if (set_ce)
+ return INET_ECN_CE;
+ else if (outer_ecn == INET_ECN_ECT_1 && inner_ecn == INET_ECN_ECT_0)
+ return INET_ECN_ECT_1;
+ else
+ return inner_ecn;
+}
+
static inline struct net_device *
mlxsw_sp_bridge_vxlan_dev_find(struct net_device *br_dev)
{
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c
index a852599..3262a2c 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c
@@ -371,12 +371,11 @@ static int mlxsw_sp_ipip_ecn_decap_init_one(struct mlxsw_sp *mlxsw_sp,
u8 inner_ecn, u8 outer_ecn)
{
char tidem_pl[MLXSW_REG_TIDEM_LEN];
- bool trap_en, set_ce = false;
u8 new_inner_ecn;
+ bool trap_en;
- trap_en = __INET_ECN_decapsulate(outer_ecn, inner_ecn, &set_ce);
- new_inner_ecn = set_ce ? INET_ECN_CE : inner_ecn;
-
+ new_inner_ecn = mlxsw_sp_tunnel_ecn_decap(outer_ecn, inner_ecn,
+ &trap_en);
mlxsw_reg_tidem_pack(tidem_pl, outer_ecn, inner_ecn, new_inner_ecn,
trap_en, trap_en ? MLXSW_TRAP_ID_DECAP_ECN0 : 0);
return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(tidem), tidem_pl);
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr.c
index 47eb751..ee308d9 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr.c
@@ -535,6 +535,16 @@ mlxsw_sp_mr_route_evif_resolve(struct mlxsw_sp_mr_table *mr_table,
u16 erif_index = 0;
int err;
+ /* Add the eRIF */
+ if (mlxsw_sp_mr_vif_valid(rve->mr_vif)) {
+ erif_index = mlxsw_sp_rif_index(rve->mr_vif->rif);
+ err = mr->mr_ops->route_erif_add(mlxsw_sp,
+ rve->mr_route->route_priv,
+ erif_index);
+ if (err)
+ return err;
+ }
+
/* Update the route action, as the new eVIF can be a tunnel or a pimreg
* device which will require updating the action.
*/
@@ -544,17 +554,7 @@ mlxsw_sp_mr_route_evif_resolve(struct mlxsw_sp_mr_table *mr_table,
rve->mr_route->route_priv,
route_action);
if (err)
- return err;
- }
-
- /* Add the eRIF */
- if (mlxsw_sp_mr_vif_valid(rve->mr_vif)) {
- erif_index = mlxsw_sp_rif_index(rve->mr_vif->rif);
- err = mr->mr_ops->route_erif_add(mlxsw_sp,
- rve->mr_route->route_priv,
- erif_index);
- if (err)
- goto err_route_erif_add;
+ goto err_route_action_update;
}
/* Update the minimum MTU */
@@ -572,14 +572,14 @@ mlxsw_sp_mr_route_evif_resolve(struct mlxsw_sp_mr_table *mr_table,
return 0;
err_route_min_mtu_update:
- if (mlxsw_sp_mr_vif_valid(rve->mr_vif))
- mr->mr_ops->route_erif_del(mlxsw_sp, rve->mr_route->route_priv,
- erif_index);
-err_route_erif_add:
if (route_action != rve->mr_route->route_action)
mr->mr_ops->route_action_update(mlxsw_sp,
rve->mr_route->route_priv,
rve->mr_route->route_action);
+err_route_action_update:
+ if (mlxsw_sp_mr_vif_valid(rve->mr_vif))
+ mr->mr_ops->route_erif_del(mlxsw_sp, rve->mr_route->route_priv,
+ erif_index);
return err;
}
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_nve.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_nve.c
index 54d3e7d..a2d1b95 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_nve.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_nve.c
@@ -909,12 +909,11 @@ static int __mlxsw_sp_nve_ecn_decap_init(struct mlxsw_sp *mlxsw_sp,
u8 inner_ecn, u8 outer_ecn)
{
char tndem_pl[MLXSW_REG_TNDEM_LEN];
- bool trap_en, set_ce = false;
u8 new_inner_ecn;
+ bool trap_en;
- trap_en = !!__INET_ECN_decapsulate(outer_ecn, inner_ecn, &set_ce);
- new_inner_ecn = set_ce ? INET_ECN_CE : inner_ecn;
-
+ new_inner_ecn = mlxsw_sp_tunnel_ecn_decap(outer_ecn, inner_ecn,
+ &trap_en);
mlxsw_reg_tndem_pack(tndem_pl, outer_ecn, inner_ecn, new_inner_ecn,
trap_en, trap_en ? MLXSW_TRAP_ID_DECAP_ECN0 : 0);
return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(tndem), tndem_pl);
diff --git a/drivers/net/ethernet/myricom/myri10ge/myri10ge.c b/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
index 1634ca6..c84c8bf 100644
--- a/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
+++ b/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
@@ -2897,7 +2897,7 @@ static netdev_tx_t myri10ge_sw_tso(struct sk_buff *skb,
dev_kfree_skb_any(curr);
if (segs != NULL) {
curr = segs;
- segs = segs->next;
+ segs = next;
curr->next = NULL;
dev_kfree_skb_any(segs);
}
diff --git a/drivers/net/ethernet/netronome/nfp/bpf/cmsg.c b/drivers/net/ethernet/netronome/nfp/bpf/cmsg.c
index 0e2db6e..2ec62c8 100644
--- a/drivers/net/ethernet/netronome/nfp/bpf/cmsg.c
+++ b/drivers/net/ethernet/netronome/nfp/bpf/cmsg.c
@@ -454,6 +454,7 @@ void nfp_bpf_ctrl_msg_rx(struct nfp_app *app, struct sk_buff *skb)
dev_consume_skb_any(skb);
else
dev_kfree_skb_any(skb);
+ return;
}
nfp_ccm_rx(&bpf->ccm, skb);
diff --git a/drivers/net/ethernet/netronome/nfp/flower/main.h b/drivers/net/ethernet/netronome/nfp/flower/main.h
index caf12ee..56833a4 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/main.h
+++ b/drivers/net/ethernet/netronome/nfp/flower/main.h
@@ -190,6 +190,7 @@ struct nfp_fl_internal_ports {
* @qos_rate_limiters: Current active qos rate limiters
* @qos_stats_lock: Lock on qos stats updates
* @pre_tun_rule_cnt: Number of pre-tunnel rules offloaded
+ * @merge_table: Hash table to store merged flows
*/
struct nfp_flower_priv {
struct nfp_app *app;
@@ -223,6 +224,7 @@ struct nfp_flower_priv {
unsigned int qos_rate_limiters;
spinlock_t qos_stats_lock; /* Protect the qos stats */
int pre_tun_rule_cnt;
+ struct rhashtable merge_table;
};
/**
@@ -350,6 +352,12 @@ struct nfp_fl_payload_link {
};
extern const struct rhashtable_params nfp_flower_table_params;
+extern const struct rhashtable_params merge_table_params;
+
+struct nfp_merge_info {
+ u64 parent_ctx;
+ struct rhash_head ht_node;
+};
struct nfp_fl_stats_frame {
__be32 stats_con_id;
diff --git a/drivers/net/ethernet/netronome/nfp/flower/metadata.c b/drivers/net/ethernet/netronome/nfp/flower/metadata.c
index aa06fcb..327bb56 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/metadata.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/metadata.c
@@ -490,6 +490,12 @@ const struct rhashtable_params nfp_flower_table_params = {
.automatic_shrinking = true,
};
+const struct rhashtable_params merge_table_params = {
+ .key_offset = offsetof(struct nfp_merge_info, parent_ctx),
+ .head_offset = offsetof(struct nfp_merge_info, ht_node),
+ .key_len = sizeof(u64),
+};
+
int nfp_flower_metadata_init(struct nfp_app *app, u64 host_ctx_count,
unsigned int host_num_mems)
{
@@ -506,6 +512,10 @@ int nfp_flower_metadata_init(struct nfp_app *app, u64 host_ctx_count,
if (err)
goto err_free_flow_table;
+ err = rhashtable_init(&priv->merge_table, &merge_table_params);
+ if (err)
+ goto err_free_stats_ctx_table;
+
get_random_bytes(&priv->mask_id_seed, sizeof(priv->mask_id_seed));
/* Init ring buffer and unallocated mask_ids. */
@@ -513,7 +523,7 @@ int nfp_flower_metadata_init(struct nfp_app *app, u64 host_ctx_count,
kmalloc_array(NFP_FLOWER_MASK_ENTRY_RS,
NFP_FLOWER_MASK_ELEMENT_RS, GFP_KERNEL);
if (!priv->mask_ids.mask_id_free_list.buf)
- goto err_free_stats_ctx_table;
+ goto err_free_merge_table;
priv->mask_ids.init_unallocated = NFP_FLOWER_MASK_ENTRY_RS - 1;
@@ -550,6 +560,8 @@ int nfp_flower_metadata_init(struct nfp_app *app, u64 host_ctx_count,
kfree(priv->mask_ids.last_used);
err_free_mask_id:
kfree(priv->mask_ids.mask_id_free_list.buf);
+err_free_merge_table:
+ rhashtable_destroy(&priv->merge_table);
err_free_stats_ctx_table:
rhashtable_destroy(&priv->stats_ctx_table);
err_free_flow_table:
@@ -568,6 +580,8 @@ void nfp_flower_metadata_cleanup(struct nfp_app *app)
nfp_check_rhashtable_empty, NULL);
rhashtable_free_and_destroy(&priv->stats_ctx_table,
nfp_check_rhashtable_empty, NULL);
+ rhashtable_free_and_destroy(&priv->merge_table,
+ nfp_check_rhashtable_empty, NULL);
kvfree(priv->stats);
kfree(priv->mask_ids.mask_id_free_list.buf);
kfree(priv->mask_ids.last_used);
diff --git a/drivers/net/ethernet/netronome/nfp/flower/offload.c b/drivers/net/ethernet/netronome/nfp/flower/offload.c
index d72225d..e95969c 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/offload.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c
@@ -1009,6 +1009,8 @@ int nfp_flower_merge_offloaded_flows(struct nfp_app *app,
struct netlink_ext_ack *extack = NULL;
struct nfp_fl_payload *merge_flow;
struct nfp_fl_key_ls merge_key_ls;
+ struct nfp_merge_info *merge_info;
+ u64 parent_ctx = 0;
int err;
ASSERT_RTNL();
@@ -1019,6 +1021,15 @@ int nfp_flower_merge_offloaded_flows(struct nfp_app *app,
nfp_flower_is_merge_flow(sub_flow2))
return -EINVAL;
+ /* check if the two flows are already merged */
+ parent_ctx = (u64)(be32_to_cpu(sub_flow1->meta.host_ctx_id)) << 32;
+ parent_ctx |= (u64)(be32_to_cpu(sub_flow2->meta.host_ctx_id));
+ if (rhashtable_lookup_fast(&priv->merge_table,
+ &parent_ctx, merge_table_params)) {
+ nfp_flower_cmsg_warn(app, "The two flows are already merged.\n");
+ return 0;
+ }
+
err = nfp_flower_can_merge(sub_flow1, sub_flow2);
if (err)
return err;
@@ -1060,16 +1071,33 @@ int nfp_flower_merge_offloaded_flows(struct nfp_app *app,
if (err)
goto err_release_metadata;
+ merge_info = kmalloc(sizeof(*merge_info), GFP_KERNEL);
+ if (!merge_info) {
+ err = -ENOMEM;
+ goto err_remove_rhash;
+ }
+ merge_info->parent_ctx = parent_ctx;
+ err = rhashtable_insert_fast(&priv->merge_table, &merge_info->ht_node,
+ merge_table_params);
+ if (err)
+ goto err_destroy_merge_info;
+
err = nfp_flower_xmit_flow(app, merge_flow,
NFP_FLOWER_CMSG_TYPE_FLOW_MOD);
if (err)
- goto err_remove_rhash;
+ goto err_remove_merge_info;
merge_flow->in_hw = true;
sub_flow1->in_hw = false;
return 0;
+err_remove_merge_info:
+ WARN_ON_ONCE(rhashtable_remove_fast(&priv->merge_table,
+ &merge_info->ht_node,
+ merge_table_params));
+err_destroy_merge_info:
+ kfree(merge_info);
err_remove_rhash:
WARN_ON_ONCE(rhashtable_remove_fast(&priv->flow_table,
&merge_flow->fl_node,
@@ -1359,7 +1387,9 @@ nfp_flower_remove_merge_flow(struct nfp_app *app,
{
struct nfp_flower_priv *priv = app->priv;
struct nfp_fl_payload_link *link, *temp;
+ struct nfp_merge_info *merge_info;
struct nfp_fl_payload *origin;
+ u64 parent_ctx = 0;
bool mod = false;
int err;
@@ -1396,8 +1426,22 @@ nfp_flower_remove_merge_flow(struct nfp_app *app,
err_free_links:
/* Clean any links connected with the merged flow. */
list_for_each_entry_safe(link, temp, &merge_flow->linked_flows,
- merge_flow.list)
+ merge_flow.list) {
+ u32 ctx_id = be32_to_cpu(link->sub_flow.flow->meta.host_ctx_id);
+
+ parent_ctx = (parent_ctx << 32) | (u64)(ctx_id);
nfp_flower_unlink_flow(link);
+ }
+
+ merge_info = rhashtable_lookup_fast(&priv->merge_table,
+ &parent_ctx,
+ merge_table_params);
+ if (merge_info) {
+ WARN_ON_ONCE(rhashtable_remove_fast(&priv->merge_table,
+ &merge_info->ht_node,
+ merge_table_params));
+ kfree(merge_info);
+ }
kfree(merge_flow->action_data);
kfree(merge_flow->mask_data);
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_devlink.c b/drivers/net/ethernet/netronome/nfp/nfp_devlink.c
index 97d2b03..7a81874 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_devlink.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_devlink.c
@@ -364,6 +364,7 @@ int nfp_devlink_port_register(struct nfp_app *app, struct nfp_port *port)
attrs.split = eth_port.is_split;
attrs.splittable = !attrs.split;
+ attrs.lanes = eth_port.port_lanes;
attrs.flavour = DEVLINK_PORT_FLAVOUR_PHYSICAL;
attrs.phys.port_number = eth_port.label_port;
attrs.phys.split_subport_number = eth_port.label_subport;
diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_ethtool.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_ethtool.c
index d8a3eca..d8f0863 100644
--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_ethtool.c
+++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_ethtool.c
@@ -1048,7 +1048,7 @@ int qlcnic_do_lb_test(struct qlcnic_adapter *adapter, u8 mode)
for (i = 0; i < QLCNIC_NUM_ILB_PKT; i++) {
skb = netdev_alloc_skb(adapter->netdev, QLCNIC_ILB_PKT_SIZE);
if (!skb)
- break;
+ goto error;
qlcnic_create_loopback_buff(skb->data, adapter->mac_addr);
skb_put(skb, QLCNIC_ILB_PKT_SIZE);
adapter->ahw->diag_cnt = 0;
@@ -1072,6 +1072,7 @@ int qlcnic_do_lb_test(struct qlcnic_adapter *adapter, u8 mode)
cnt++;
}
if (cnt != i) {
+error:
dev_err(&adapter->pdev->dev,
"LB Test: failed, TX[%d], RX[%d]\n", i, cnt);
if (mode != QLCNIC_ILB_MODE)
diff --git a/drivers/net/ethernet/qualcomm/emac/emac-mac.c b/drivers/net/ethernet/qualcomm/emac/emac-mac.c
index 117188e3..87b8c03 100644
--- a/drivers/net/ethernet/qualcomm/emac/emac-mac.c
+++ b/drivers/net/ethernet/qualcomm/emac/emac-mac.c
@@ -1437,6 +1437,7 @@ netdev_tx_t emac_mac_tx_buf_send(struct emac_adapter *adpt,
{
struct emac_tpd tpd;
u32 prod_idx;
+ int len;
memset(&tpd, 0, sizeof(tpd));
@@ -1456,9 +1457,10 @@ netdev_tx_t emac_mac_tx_buf_send(struct emac_adapter *adpt,
if (skb_network_offset(skb) != ETH_HLEN)
TPD_TYP_SET(&tpd, 1);
+ len = skb->len;
emac_tx_fill_tpd(adpt, tx_q, skb, &tpd);
- netdev_sent_queue(adpt->netdev, skb->len);
+ netdev_sent_queue(adpt->netdev, len);
/* Make sure there are enough free descriptors to hold one
* maximum-sized SKB. We need one desc for each fragment,
diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
index d634da2..3bb36f4 100644
--- a/drivers/net/ethernet/realtek/r8169_main.c
+++ b/drivers/net/ethernet/realtek/r8169_main.c
@@ -2378,13 +2378,14 @@ static void r8168b_1_hw_jumbo_disable(struct rtl8169_private *tp)
static void rtl_jumbo_config(struct rtl8169_private *tp)
{
bool jumbo = tp->dev->mtu > ETH_DATA_LEN;
+ int readrq = 4096;
rtl_unlock_config_regs(tp);
switch (tp->mac_version) {
case RTL_GIGA_MAC_VER_12:
case RTL_GIGA_MAC_VER_17:
if (jumbo) {
- pcie_set_readrq(tp->pci_dev, 512);
+ readrq = 512;
r8168b_1_hw_jumbo_enable(tp);
} else {
r8168b_1_hw_jumbo_disable(tp);
@@ -2392,7 +2393,7 @@ static void rtl_jumbo_config(struct rtl8169_private *tp)
break;
case RTL_GIGA_MAC_VER_18 ... RTL_GIGA_MAC_VER_26:
if (jumbo) {
- pcie_set_readrq(tp->pci_dev, 512);
+ readrq = 512;
r8168c_hw_jumbo_enable(tp);
} else {
r8168c_hw_jumbo_disable(tp);
@@ -2417,8 +2418,15 @@ static void rtl_jumbo_config(struct rtl8169_private *tp)
}
rtl_lock_config_regs(tp);
- if (!jumbo && pci_is_pcie(tp->pci_dev) && tp->supports_gmii)
- pcie_set_readrq(tp->pci_dev, 4096);
+ if (pci_is_pcie(tp->pci_dev) && tp->supports_gmii)
+ pcie_set_readrq(tp->pci_dev, readrq);
+
+ /* Chip doesn't support pause in jumbo mode */
+ linkmode_mod_bit(ETHTOOL_LINK_MODE_Pause_BIT,
+ tp->phydev->advertising, !jumbo);
+ linkmode_mod_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
+ tp->phydev->advertising, !jumbo);
+ phy_start_aneg(tp->phydev);
}
DECLARE_RTL_COND(rtl_chipcmd_cond)
@@ -4710,8 +4718,6 @@ static int r8169_phy_connect(struct rtl8169_private *tp)
if (!tp->supports_gmii)
phy_set_max_speed(phydev, SPEED_100);
- phy_support_asym_pause(phydev);
-
phy_attached_info(phydev);
return 0;
diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
index bd30505..f96eed6 100644
--- a/drivers/net/ethernet/renesas/ravb_main.c
+++ b/drivers/net/ethernet/renesas/ravb_main.c
@@ -911,31 +911,20 @@ static int ravb_poll(struct napi_struct *napi, int budget)
int q = napi - priv->napi;
int mask = BIT(q);
int quota = budget;
- u32 ris0, tis;
- for (;;) {
- tis = ravb_read(ndev, TIS);
- ris0 = ravb_read(ndev, RIS0);
- if (!((ris0 & mask) || (tis & mask)))
- break;
+ /* Processing RX Descriptor Ring */
+ /* Clear RX interrupt */
+ ravb_write(ndev, ~(mask | RIS0_RESERVED), RIS0);
+ if (ravb_rx(ndev, &quota, q))
+ goto out;
- /* Processing RX Descriptor Ring */
- if (ris0 & mask) {
- /* Clear RX interrupt */
- ravb_write(ndev, ~(mask | RIS0_RESERVED), RIS0);
- if (ravb_rx(ndev, &quota, q))
- goto out;
- }
- /* Processing TX Descriptor Ring */
- if (tis & mask) {
- spin_lock_irqsave(&priv->lock, flags);
- /* Clear TX interrupt */
- ravb_write(ndev, ~(mask | TIS_RESERVED), TIS);
- ravb_tx_free(ndev, q, true);
- netif_wake_subqueue(ndev, q);
- spin_unlock_irqrestore(&priv->lock, flags);
- }
- }
+ /* Processing TX Descriptor Ring */
+ spin_lock_irqsave(&priv->lock, flags);
+ /* Clear TX interrupt */
+ ravb_write(ndev, ~(mask | TIS_RESERVED), TIS);
+ ravb_tx_free(ndev, q, true);
+ netif_wake_subqueue(ndev, q);
+ spin_unlock_irqrestore(&priv->lock, flags);
napi_complete(napi);
diff --git a/drivers/net/ethernet/sfc/ef10.c b/drivers/net/ethernet/sfc/ef10.c
index da6886d..4fa72b5 100644
--- a/drivers/net/ethernet/sfc/ef10.c
+++ b/drivers/net/ethernet/sfc/ef10.c
@@ -2928,8 +2928,7 @@ efx_ef10_handle_tx_event(struct efx_channel *channel, efx_qword_t *event)
/* Get the transmit queue */
tx_ev_q_label = EFX_QWORD_FIELD(*event, ESF_DZ_TX_QLABEL);
- tx_queue = efx_channel_get_tx_queue(channel,
- tx_ev_q_label % EFX_MAX_TXQ_PER_CHANNEL);
+ tx_queue = channel->tx_queue + (tx_ev_q_label % EFX_MAX_TXQ_PER_CHANNEL);
if (!tx_queue->timestamping) {
/* Transmit completion */
diff --git a/drivers/net/ethernet/sfc/farch.c b/drivers/net/ethernet/sfc/farch.c
index d75cf5f..49df02e 100644
--- a/drivers/net/ethernet/sfc/farch.c
+++ b/drivers/net/ethernet/sfc/farch.c
@@ -835,14 +835,14 @@ efx_farch_handle_tx_event(struct efx_channel *channel, efx_qword_t *event)
/* Transmit completion */
tx_ev_desc_ptr = EFX_QWORD_FIELD(*event, FSF_AZ_TX_EV_DESC_PTR);
tx_ev_q_label = EFX_QWORD_FIELD(*event, FSF_AZ_TX_EV_Q_LABEL);
- tx_queue = efx_channel_get_tx_queue(
- channel, tx_ev_q_label % EFX_MAX_TXQ_PER_CHANNEL);
+ tx_queue = channel->tx_queue +
+ (tx_ev_q_label % EFX_MAX_TXQ_PER_CHANNEL);
efx_xmit_done(tx_queue, tx_ev_desc_ptr);
} else if (EFX_QWORD_FIELD(*event, FSF_AZ_TX_EV_WQ_FF_FULL)) {
/* Rewrite the FIFO write pointer */
tx_ev_q_label = EFX_QWORD_FIELD(*event, FSF_AZ_TX_EV_Q_LABEL);
- tx_queue = efx_channel_get_tx_queue(
- channel, tx_ev_q_label % EFX_MAX_TXQ_PER_CHANNEL);
+ tx_queue = channel->tx_queue +
+ (tx_ev_q_label % EFX_MAX_TXQ_PER_CHANNEL);
netif_tx_lock(efx->net_dev);
efx_farch_notify_tx_desc(tx_queue);
@@ -1081,16 +1081,16 @@ static void
efx_farch_handle_tx_flush_done(struct efx_nic *efx, efx_qword_t *event)
{
struct efx_tx_queue *tx_queue;
+ struct efx_channel *channel;
int qid;
qid = EFX_QWORD_FIELD(*event, FSF_AZ_DRIVER_EV_SUBDATA);
if (qid < EFX_MAX_TXQ_PER_CHANNEL * (efx->n_tx_channels + efx->n_extra_tx_channels)) {
- tx_queue = efx_get_tx_queue(efx, qid / EFX_MAX_TXQ_PER_CHANNEL,
- qid % EFX_MAX_TXQ_PER_CHANNEL);
- if (atomic_cmpxchg(&tx_queue->flush_outstanding, 1, 0)) {
+ channel = efx_get_tx_channel(efx, qid / EFX_MAX_TXQ_PER_CHANNEL);
+ tx_queue = channel->tx_queue + (qid % EFX_MAX_TXQ_PER_CHANNEL);
+ if (atomic_cmpxchg(&tx_queue->flush_outstanding, 1, 0))
efx_farch_magic_event(tx_queue->channel,
EFX_CHANNEL_MAGIC_TX_DRAIN(tx_queue));
- }
}
}
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
index bf3250e..749585f 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
@@ -352,6 +352,8 @@ static int ipq806x_gmac_probe(struct platform_device *pdev)
plat_dat->bsp_priv = gmac;
plat_dat->fix_mac_speed = ipq806x_gmac_fix_mac_speed;
plat_dat->multicast_filter_bins = 0;
+ plat_dat->tx_fifo_size = 8192;
+ plat_dat->rx_fifo_size = 8192;
err = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
if (err)
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-sunxi.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sunxi.c
index 0e1ca2c..e18dee7 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sunxi.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sunxi.c
@@ -30,7 +30,7 @@ struct sunxi_priv_data {
static int sun7i_gmac_init(struct platform_device *pdev, void *priv)
{
struct sunxi_priv_data *gmac = priv;
- int ret;
+ int ret = 0;
if (gmac->regulator) {
ret = regulator_enable(gmac->regulator);
@@ -51,11 +51,11 @@ static int sun7i_gmac_init(struct platform_device *pdev, void *priv)
} else {
clk_set_rate(gmac->tx_clk, SUN7I_GMAC_MII_RATE);
ret = clk_prepare(gmac->tx_clk);
- if (ret)
- return ret;
+ if (ret && gmac->regulator)
+ regulator_disable(gmac->regulator);
}
- return 0;
+ return ret;
}
static void sun7i_gmac_exit(struct platform_device *pdev, void *priv)
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
index ced6d76..16c538c 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
@@ -617,6 +617,7 @@ static void dwmac4_set_filter(struct mac_device_info *hw,
value &= ~GMAC_PACKET_FILTER_PCF;
value &= ~GMAC_PACKET_FILTER_PM;
value &= ~GMAC_PACKET_FILTER_PR;
+ value &= ~GMAC_PACKET_FILTER_RA;
if (dev->flags & IFF_PROMISC) {
/* VLAN Tag Filter Fail Packets Queuing */
if (hw->vlan_fail_q_en) {
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
index 62aa0e9..a7249e4 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
@@ -222,7 +222,7 @@ static void dwmac4_dma_rx_chan_op_mode(void __iomem *ioaddr, int mode,
u32 channel, int fifosz, u8 qmode)
{
unsigned int rqs = fifosz / 256 - 1;
- u32 mtl_rx_op, mtl_rx_int;
+ u32 mtl_rx_op;
mtl_rx_op = readl(ioaddr + MTL_CHAN_RX_OP_MODE(channel));
@@ -283,11 +283,6 @@ static void dwmac4_dma_rx_chan_op_mode(void __iomem *ioaddr, int mode,
}
writel(mtl_rx_op, ioaddr + MTL_CHAN_RX_OP_MODE(channel));
-
- /* Enable MTL RX overflow */
- mtl_rx_int = readl(ioaddr + MTL_CHAN_INT_CTRL(channel));
- writel(mtl_rx_int | MTL_RX_OVERFLOW_INT_EN,
- ioaddr + MTL_CHAN_INT_CTRL(channel));
}
static void dwmac4_dma_tx_chan_op_mode(void __iomem *ioaddr, int mode,
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index 6012ead..3134f7e 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -1052,7 +1052,6 @@ static void stmmac_check_pcs_mode(struct stmmac_priv *priv)
*/
static int stmmac_init_phy(struct net_device *dev)
{
- struct ethtool_wolinfo wol = { .cmd = ETHTOOL_GWOL };
struct stmmac_priv *priv = netdev_priv(dev);
struct device_node *node;
int ret;
@@ -1078,8 +1077,12 @@ static int stmmac_init_phy(struct net_device *dev)
ret = phylink_connect_phy(priv->phylink, phydev);
}
- phylink_ethtool_get_wol(priv->phylink, &wol);
- device_set_wakeup_capable(priv->device, !!wol.supported);
+ if (!priv->plat->pmt) {
+ struct ethtool_wolinfo wol = { .cmd = ETHTOOL_GWOL };
+
+ phylink_ethtool_get_wol(priv->phylink, &wol);
+ device_set_wakeup_capable(priv->device, !!wol.supported);
+ }
return ret;
}
@@ -2727,8 +2730,15 @@ static int stmmac_hw_setup(struct net_device *dev, bool init_ptp)
/* Enable TSO */
if (priv->tso) {
- for (chan = 0; chan < tx_cnt; chan++)
+ for (chan = 0; chan < tx_cnt; chan++) {
+ struct stmmac_tx_queue *tx_q = &priv->tx_queue[chan];
+
+ /* TSO and TBS cannot co-exist */
+ if (tx_q->tbs & STMMAC_TBS_AVAIL)
+ continue;
+
stmmac_enable_tso(priv, priv->ioaddr, 1, chan);
+ }
}
/* Enable Split Header */
@@ -2820,9 +2830,8 @@ static int stmmac_open(struct net_device *dev)
struct stmmac_tx_queue *tx_q = &priv->tx_queue[chan];
int tbs_en = priv->plat->tx_queues_cfg[chan].tbs_en;
+ /* Setup per-TXQ tbs flag before TX descriptor alloc */
tx_q->tbs |= tbs_en ? STMMAC_TBS_AVAIL : 0;
- if (stmmac_enable_tbs(priv, priv->ioaddr, tbs_en, chan))
- tx_q->tbs &= ~STMMAC_TBS_AVAIL;
}
ret = alloc_dma_desc_resources(priv);
@@ -4132,7 +4141,6 @@ static irqreturn_t stmmac_interrupt(int irq, void *dev_id)
/* To handle GMAC own interrupts */
if ((priv->plat->has_gmac) || xmac) {
int status = stmmac_host_irq_status(priv, priv->hw, &priv->xstats);
- int mtl_status;
if (unlikely(status)) {
/* For LPI we need to save the tx status */
@@ -4143,17 +4151,8 @@ static irqreturn_t stmmac_interrupt(int irq, void *dev_id)
}
for (queue = 0; queue < queues_count; queue++) {
- struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
-
- mtl_status = stmmac_host_mtl_irq_status(priv, priv->hw,
- queue);
- if (mtl_status != -EINVAL)
- status |= mtl_status;
-
- if (status & CORE_IRQ_MTL_RX_OVERFLOW)
- stmmac_set_rx_tail_ptr(priv, priv->ioaddr,
- rx_q->rx_tail_addr,
- queue);
+ status = stmmac_host_mtl_irq_status(priv, priv->hw,
+ queue);
}
/* PCS link status */
diff --git a/drivers/net/ethernet/sun/niu.c b/drivers/net/ethernet/sun/niu.c
index 707ccdd..74e7486 100644
--- a/drivers/net/ethernet/sun/niu.c
+++ b/drivers/net/ethernet/sun/niu.c
@@ -8144,10 +8144,10 @@ static int niu_pci_vpd_scan_props(struct niu *np, u32 start, u32 end)
"VPD_SCAN: Reading in property [%s] len[%d]\n",
namebuf, prop_len);
for (i = 0; i < prop_len; i++) {
- err = niu_pci_eeprom_read(np, off + i);
- if (err >= 0)
- *prop_buf = err;
- ++prop_buf;
+ err = niu_pci_eeprom_read(np, off + i);
+ if (err < 0)
+ return err;
+ *prop_buf++ = err;
}
}
@@ -8158,14 +8158,14 @@ static int niu_pci_vpd_scan_props(struct niu *np, u32 start, u32 end)
}
/* ESPC_PIO_EN_ENABLE must be set */
-static void niu_pci_vpd_fetch(struct niu *np, u32 start)
+static int niu_pci_vpd_fetch(struct niu *np, u32 start)
{
u32 offset;
int err;
err = niu_pci_eeprom_read16_swp(np, start + 1);
if (err < 0)
- return;
+ return err;
offset = err + 3;
@@ -8174,12 +8174,14 @@ static void niu_pci_vpd_fetch(struct niu *np, u32 start)
u32 end;
err = niu_pci_eeprom_read(np, here);
+ if (err < 0)
+ return err;
if (err != 0x90)
- return;
+ return -EINVAL;
err = niu_pci_eeprom_read16_swp(np, here + 1);
if (err < 0)
- return;
+ return err;
here = start + offset + 3;
end = start + offset + err;
@@ -8187,9 +8189,12 @@ static void niu_pci_vpd_fetch(struct niu *np, u32 start)
offset += err;
err = niu_pci_vpd_scan_props(np, here, end);
- if (err < 0 || err == 1)
- return;
+ if (err < 0)
+ return err;
+ if (err == 1)
+ return -EINVAL;
}
+ return 0;
}
/* ESPC_PIO_EN_ENABLE must be set */
@@ -9280,8 +9285,11 @@ static int niu_get_invariants(struct niu *np)
offset = niu_pci_vpd_offset(np);
netif_printk(np, probe, KERN_DEBUG, np->dev,
"%s() VPD offset [%08x]\n", __func__, offset);
- if (offset)
- niu_pci_vpd_fetch(np, offset);
+ if (offset) {
+ err = niu_pci_vpd_fetch(np, offset);
+ if (err < 0)
+ return err;
+ }
nw64(ESPC_PIO_EN, 0);
if (np->flags & NIU_FLAGS_VPD_VALID) {
diff --git a/drivers/net/ethernet/ti/davinci_emac.c b/drivers/net/ethernet/ti/davinci_emac.c
index c7031e1..03055c96 100644
--- a/drivers/net/ethernet/ti/davinci_emac.c
+++ b/drivers/net/ethernet/ti/davinci_emac.c
@@ -169,11 +169,11 @@ static const char emac_version_string[] = "TI DaVinci EMAC Linux v6.1";
/* EMAC mac_status register */
#define EMAC_MACSTATUS_TXERRCODE_MASK (0xF00000)
#define EMAC_MACSTATUS_TXERRCODE_SHIFT (20)
-#define EMAC_MACSTATUS_TXERRCH_MASK (0x7)
+#define EMAC_MACSTATUS_TXERRCH_MASK (0x70000)
#define EMAC_MACSTATUS_TXERRCH_SHIFT (16)
#define EMAC_MACSTATUS_RXERRCODE_MASK (0xF000)
#define EMAC_MACSTATUS_RXERRCODE_SHIFT (12)
-#define EMAC_MACSTATUS_RXERRCH_MASK (0x7)
+#define EMAC_MACSTATUS_RXERRCH_MASK (0x700)
#define EMAC_MACSTATUS_RXERRCH_SHIFT (8)
/* EMAC RX register masks */
diff --git a/drivers/net/ethernet/ti/netcp_core.c b/drivers/net/ethernet/ti/netcp_core.c
index d7a144b..dc50e94 100644
--- a/drivers/net/ethernet/ti/netcp_core.c
+++ b/drivers/net/ethernet/ti/netcp_core.c
@@ -1350,8 +1350,8 @@ int netcp_txpipe_open(struct netcp_tx_pipe *tx_pipe)
tx_pipe->dma_queue = knav_queue_open(name, tx_pipe->dma_queue_id,
KNAV_QUEUE_SHARED);
if (IS_ERR(tx_pipe->dma_queue)) {
- dev_err(dev, "Could not open DMA queue for channel \"%s\": %d\n",
- name, ret);
+ dev_err(dev, "Could not open DMA queue for channel \"%s\": %pe\n",
+ name, tx_pipe->dma_queue);
ret = PTR_ERR(tx_pipe->dma_queue);
goto err;
}
diff --git a/drivers/net/ethernet/xscale/ixp4xx_eth.c b/drivers/net/ethernet/xscale/ixp4xx_eth.c
index 2e52029..403358f2 100644
--- a/drivers/net/ethernet/xscale/ixp4xx_eth.c
+++ b/drivers/net/ethernet/xscale/ixp4xx_eth.c
@@ -1086,7 +1086,7 @@ static int init_queues(struct port *port)
int i;
if (!ports_open) {
- dma_pool = dma_pool_create(DRV_NAME, port->netdev->dev.parent,
+ dma_pool = dma_pool_create(DRV_NAME, &port->netdev->dev,
POOL_ALLOC_SIZE, 32, 0);
if (!dma_pool)
return -ENOMEM;
@@ -1436,6 +1436,9 @@ static int ixp4xx_eth_probe(struct platform_device *pdev)
ndev->netdev_ops = &ixp4xx_netdev_ops;
ndev->ethtool_ops = &ixp4xx_ethtool_ops;
ndev->tx_queue_len = 100;
+ /* Inherit the DMA masks from the platform device */
+ ndev->dev.dma_mask = dev->dma_mask;
+ ndev->dev.coherent_dma_mask = dev->coherent_dma_mask;
netif_napi_add(ndev, &port->napi, eth_poll, NAPI_WEIGHT);
diff --git a/drivers/net/fddi/Kconfig b/drivers/net/fddi/Kconfig
index f722079..f99c104 100644
--- a/drivers/net/fddi/Kconfig
+++ b/drivers/net/fddi/Kconfig
@@ -40,17 +40,20 @@
config DEFXX_MMIO
bool
- prompt "Use MMIO instead of PIO" if PCI || EISA
+ prompt "Use MMIO instead of IOP" if PCI || EISA
depends on DEFXX
- default n if PCI || EISA
+ default n if EISA
default y
help
This instructs the driver to use EISA or PCI memory-mapped I/O
- (MMIO) as appropriate instead of programmed I/O ports (PIO).
+ (MMIO) as appropriate instead of programmed I/O ports (IOP).
Enabling this gives an improvement in processing time in parts
- of the driver, but it may cause problems with EISA (DEFEA)
- adapters. TURBOchannel does not have the concept of I/O ports,
- so MMIO is always used for these (DEFTA) adapters.
+ of the driver, but it requires a memory window to be configured
+ for EISA (DEFEA) adapters that may not always be available.
+ Conversely some PCIe host bridges do not support IOP, so MMIO
+ may be required to access PCI (DEFPA) adapters on downstream PCI
+ buses with some systems. TURBOchannel does not have the concept
+ of I/O ports, so MMIO is always used for these (DEFTA) adapters.
If unsure, say N.
diff --git a/drivers/net/fddi/defxx.c b/drivers/net/fddi/defxx.c
index 077c6849..c7ce6d5 100644
--- a/drivers/net/fddi/defxx.c
+++ b/drivers/net/fddi/defxx.c
@@ -495,6 +495,25 @@ static const struct net_device_ops dfx_netdev_ops = {
.ndo_set_mac_address = dfx_ctl_set_mac_address,
};
+static void dfx_register_res_alloc_err(const char *print_name, bool mmio,
+ bool eisa)
+{
+ pr_err("%s: Cannot use %s, no address set, aborting\n",
+ print_name, mmio ? "MMIO" : "I/O");
+ pr_err("%s: Recompile driver with \"CONFIG_DEFXX_MMIO=%c\"\n",
+ print_name, mmio ? 'n' : 'y');
+ if (eisa && mmio)
+ pr_err("%s: Or run ECU and set adapter's MMIO location\n",
+ print_name);
+}
+
+static void dfx_register_res_err(const char *print_name, bool mmio,
+ unsigned long start, unsigned long len)
+{
+ pr_err("%s: Cannot reserve %s resource 0x%lx @ 0x%lx, aborting\n",
+ print_name, mmio ? "MMIO" : "I/O", len, start);
+}
+
/*
* ================
* = dfx_register =
@@ -568,15 +587,12 @@ static int dfx_register(struct device *bdev)
dev_set_drvdata(bdev, dev);
dfx_get_bars(bdev, bar_start, bar_len);
- if (dfx_bus_eisa && dfx_use_mmio && bar_start[0] == 0) {
- pr_err("%s: Cannot use MMIO, no address set, aborting\n",
- print_name);
- pr_err("%s: Run ECU and set adapter's MMIO location\n",
- print_name);
- pr_err("%s: Or recompile driver with \"CONFIG_DEFXX_MMIO=n\""
- "\n", print_name);
+ if (bar_len[0] == 0 ||
+ (dfx_bus_eisa && dfx_use_mmio && bar_start[0] == 0)) {
+ dfx_register_res_alloc_err(print_name, dfx_use_mmio,
+ dfx_bus_eisa);
err = -ENXIO;
- goto err_out;
+ goto err_out_disable;
}
if (dfx_use_mmio)
@@ -585,18 +601,16 @@ static int dfx_register(struct device *bdev)
else
region = request_region(bar_start[0], bar_len[0], print_name);
if (!region) {
- pr_err("%s: Cannot reserve %s resource 0x%lx @ 0x%lx, "
- "aborting\n", dfx_use_mmio ? "MMIO" : "I/O", print_name,
- (long)bar_len[0], (long)bar_start[0]);
+ dfx_register_res_err(print_name, dfx_use_mmio,
+ bar_start[0], bar_len[0]);
err = -EBUSY;
goto err_out_disable;
}
if (bar_start[1] != 0) {
region = request_region(bar_start[1], bar_len[1], print_name);
if (!region) {
- pr_err("%s: Cannot reserve I/O resource "
- "0x%lx @ 0x%lx, aborting\n", print_name,
- (long)bar_len[1], (long)bar_start[1]);
+ dfx_register_res_err(print_name, 0,
+ bar_start[1], bar_len[1]);
err = -EBUSY;
goto err_out_csr_region;
}
@@ -604,9 +618,8 @@ static int dfx_register(struct device *bdev)
if (bar_start[2] != 0) {
region = request_region(bar_start[2], bar_len[2], print_name);
if (!region) {
- pr_err("%s: Cannot reserve I/O resource "
- "0x%lx @ 0x%lx, aborting\n", print_name,
- (long)bar_len[2], (long)bar_start[2]);
+ dfx_register_res_err(print_name, 0,
+ bar_start[2], bar_len[2]);
err = -EBUSY;
goto err_out_bh_region;
}
diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
index 1426bfc..5ddb2db 100644
--- a/drivers/net/geneve.c
+++ b/drivers/net/geneve.c
@@ -890,6 +890,9 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
__be16 sport;
int err;
+ if (!pskb_inet_may_pull(skb))
+ return -EINVAL;
+
sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info,
geneve->cfg.info.key.tp_dst, sport);
@@ -907,8 +910,16 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
info = skb_tunnel_info(skb);
if (info) {
- info->key.u.ipv4.dst = fl4.saddr;
- info->key.u.ipv4.src = fl4.daddr;
+ struct ip_tunnel_info *unclone;
+
+ unclone = skb_tunnel_info_unclone(skb);
+ if (unlikely(!unclone)) {
+ dst_release(&rt->dst);
+ return -ENOMEM;
+ }
+
+ unclone->key.u.ipv4.dst = fl4.saddr;
+ unclone->key.u.ipv4.src = fl4.daddr;
}
if (!pskb_may_pull(skb, ETH_HLEN)) {
@@ -976,6 +987,9 @@ static int geneve6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
__be16 sport;
int err;
+ if (!pskb_inet_may_pull(skb))
+ return -EINVAL;
+
sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info,
geneve->cfg.info.key.tp_dst, sport);
@@ -992,8 +1006,16 @@ static int geneve6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
struct ip_tunnel_info *info = skb_tunnel_info(skb);
if (info) {
- info->key.u.ipv6.dst = fl6.saddr;
- info->key.u.ipv6.src = fl6.daddr;
+ struct ip_tunnel_info *unclone;
+
+ unclone = skb_tunnel_info_unclone(skb);
+ if (unlikely(!unclone)) {
+ dst_release(dst);
+ return -ENOMEM;
+ }
+
+ unclone->key.u.ipv6.dst = fl6.saddr;
+ unclone->key.u.ipv6.src = fl6.daddr;
}
if (!pskb_may_pull(skb, ETH_HLEN)) {
diff --git a/drivers/net/ieee802154/atusb.c b/drivers/net/ieee802154/atusb.c
index 0dd0ba9..23ee0b1 100644
--- a/drivers/net/ieee802154/atusb.c
+++ b/drivers/net/ieee802154/atusb.c
@@ -365,6 +365,7 @@ static int atusb_alloc_urbs(struct atusb *atusb, int n)
return -ENOMEM;
}
usb_anchor_urb(urb, &atusb->idle_urbs);
+ usb_free_urb(urb);
n--;
}
return 0;
diff --git a/drivers/net/ipa/ipa.h b/drivers/net/ipa/ipa.h
index 6c23710..da862db 100644
--- a/drivers/net/ipa/ipa.h
+++ b/drivers/net/ipa/ipa.h
@@ -56,6 +56,7 @@ enum ipa_flag {
* @mem_virt: Virtual address of IPA-local memory space
* @mem_offset: Offset from @mem_virt used for access to IPA memory
* @mem_size: Total size (bytes) of memory at @mem_virt
+ * @mem_count: Number of entries in the mem array
* @mem: Array of IPA-local memory region descriptors
* @imem_iova: I/O virtual address of IPA region in IMEM
* @imem_size; Size of IMEM region
@@ -102,6 +103,7 @@ struct ipa {
void *mem_virt;
u32 mem_offset;
u32 mem_size;
+ u32 mem_count;
const struct ipa_mem *mem;
unsigned long imem_iova;
diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c
index 46d8b73..a47378b 100644
--- a/drivers/net/ipa/ipa_cmd.c
+++ b/drivers/net/ipa/ipa_cmd.c
@@ -175,21 +175,23 @@ bool ipa_cmd_table_valid(struct ipa *ipa, const struct ipa_mem *mem,
: field_max(IP_FLTRT_FLAGS_NHASH_ADDR_FMASK);
if (mem->offset > offset_max ||
ipa->mem_offset > offset_max - mem->offset) {
- dev_err(dev, "IPv%c %s%s table region offset too large "
- "(0x%04x + 0x%04x > 0x%04x)\n",
- ipv6 ? '6' : '4', hashed ? "hashed " : "",
- route ? "route" : "filter",
- ipa->mem_offset, mem->offset, offset_max);
+ dev_err(dev, "IPv%c %s%s table region offset too large\n",
+ ipv6 ? '6' : '4', hashed ? "hashed " : "",
+ route ? "route" : "filter");
+ dev_err(dev, " (0x%04x + 0x%04x > 0x%04x)\n",
+ ipa->mem_offset, mem->offset, offset_max);
+
return false;
}
if (mem->offset > ipa->mem_size ||
mem->size > ipa->mem_size - mem->offset) {
- dev_err(dev, "IPv%c %s%s table region out of range "
- "(0x%04x + 0x%04x > 0x%04x)\n",
- ipv6 ? '6' : '4', hashed ? "hashed " : "",
- route ? "route" : "filter",
- mem->offset, mem->size, ipa->mem_size);
+ dev_err(dev, "IPv%c %s%s table region out of range\n",
+ ipv6 ? '6' : '4', hashed ? "hashed " : "",
+ route ? "route" : "filter");
+ dev_err(dev, " (0x%04x + 0x%04x > 0x%04x)\n",
+ mem->offset, mem->size, ipa->mem_size);
+
return false;
}
@@ -205,22 +207,36 @@ static bool ipa_cmd_header_valid(struct ipa *ipa)
u32 size_max;
u32 size;
+ /* In ipa_cmd_hdr_init_local_add() we record the offset and size
+ * of the header table memory area. Make sure the offset and size
+ * fit in the fields that need to hold them, and that the entire
+ * range is within the overall IPA memory range.
+ */
offset_max = field_max(HDR_INIT_LOCAL_FLAGS_HDR_ADDR_FMASK);
if (mem->offset > offset_max ||
ipa->mem_offset > offset_max - mem->offset) {
- dev_err(dev, "header table region offset too large "
- "(0x%04x + 0x%04x > 0x%04x)\n",
- ipa->mem_offset + mem->offset, offset_max);
+ dev_err(dev, "header table region offset too large\n");
+ dev_err(dev, " (0x%04x + 0x%04x > 0x%04x)\n",
+ ipa->mem_offset, mem->offset, offset_max);
+
return false;
}
size_max = field_max(HDR_INIT_LOCAL_FLAGS_TABLE_SIZE_FMASK);
size = ipa->mem[IPA_MEM_MODEM_HEADER].size;
size += ipa->mem[IPA_MEM_AP_HEADER].size;
- if (mem->offset > ipa->mem_size || size > ipa->mem_size - mem->offset) {
- dev_err(dev, "header table region out of range "
- "(0x%04x + 0x%04x > 0x%04x)\n",
- mem->offset, size, ipa->mem_size);
+
+ if (size > size_max) {
+ dev_err(dev, "header table region size too large\n");
+ dev_err(dev, " (0x%04x > 0x%08x)\n", size, size_max);
+
+ return false;
+ }
+ if (size > ipa->mem_size || mem->offset > ipa->mem_size - size) {
+ dev_err(dev, "header table region out of range\n");
+ dev_err(dev, " (0x%04x + 0x%04x > 0x%04x)\n",
+ mem->offset, size, ipa->mem_size);
+
return false;
}
diff --git a/drivers/net/ipa/ipa_mem.c b/drivers/net/ipa/ipa_mem.c
index 2d45c44..a78d660 100644
--- a/drivers/net/ipa/ipa_mem.c
+++ b/drivers/net/ipa/ipa_mem.c
@@ -181,7 +181,7 @@ int ipa_mem_config(struct ipa *ipa)
* for the region, write "canary" values in the space prior to
* the region's base address.
*/
- for (mem_id = 0; mem_id < IPA_MEM_COUNT; mem_id++) {
+ for (mem_id = 0; mem_id < ipa->mem_count; mem_id++) {
const struct ipa_mem *mem = &ipa->mem[mem_id];
u16 canary_count;
__le32 *canary;
@@ -488,6 +488,7 @@ int ipa_mem_init(struct ipa *ipa, const struct ipa_mem_data *mem_data)
ipa->mem_size = resource_size(res);
/* The ipa->mem[] array is indexed by enum ipa_mem_id values */
+ ipa->mem_count = mem_data->local_count;
ipa->mem = mem_data->local;
ret = ipa_imem_init(ipa, mem_data->imem_addr, mem_data->imem_size);
diff --git a/drivers/net/mdio/mdio-octeon.c b/drivers/net/mdio/mdio-octeon.c
index d1e1009..6faf393 100644
--- a/drivers/net/mdio/mdio-octeon.c
+++ b/drivers/net/mdio/mdio-octeon.c
@@ -71,7 +71,6 @@ static int octeon_mdiobus_probe(struct platform_device *pdev)
return 0;
fail_register:
- mdiobus_free(bus->mii_bus);
smi_en.u64 = 0;
oct_mdio_writeq(smi_en.u64, bus->register_base + SMI_EN);
return err;
@@ -85,7 +84,6 @@ static int octeon_mdiobus_remove(struct platform_device *pdev)
bus = platform_get_drvdata(pdev);
mdiobus_unregister(bus->mii_bus);
- mdiobus_free(bus->mii_bus);
smi_en.u64 = 0;
oct_mdio_writeq(smi_en.u64, bus->register_base + SMI_EN);
return 0;
diff --git a/drivers/net/mdio/mdio-thunder.c b/drivers/net/mdio/mdio-thunder.c
index 3d7eda9..dd7430c 100644
--- a/drivers/net/mdio/mdio-thunder.c
+++ b/drivers/net/mdio/mdio-thunder.c
@@ -126,7 +126,6 @@ static void thunder_mdiobus_pci_remove(struct pci_dev *pdev)
continue;
mdiobus_unregister(bus->mii_bus);
- mdiobus_free(bus->mii_bus);
oct_mdio_writeq(0, bus->register_base + SMI_EN);
}
pci_release_regions(pdev);
diff --git a/drivers/net/phy/bcm-phy-lib.c b/drivers/net/phy/bcm-phy-lib.c
index ef6825b..b72e11c 100644
--- a/drivers/net/phy/bcm-phy-lib.c
+++ b/drivers/net/phy/bcm-phy-lib.c
@@ -328,7 +328,7 @@ EXPORT_SYMBOL_GPL(bcm_phy_enable_apd);
int bcm_phy_set_eee(struct phy_device *phydev, bool enable)
{
- int val;
+ int val, mask = 0;
/* Enable EEE at PHY level */
val = phy_read_mmd(phydev, MDIO_MMD_AN, BRCM_CL45VEN_EEE_CONTROL);
@@ -347,10 +347,17 @@ int bcm_phy_set_eee(struct phy_device *phydev, bool enable)
if (val < 0)
return val;
+ if (linkmode_test_bit(ETHTOOL_LINK_MODE_1000baseT_Full_BIT,
+ phydev->supported))
+ mask |= MDIO_EEE_1000T;
+ if (linkmode_test_bit(ETHTOOL_LINK_MODE_100baseT_Full_BIT,
+ phydev->supported))
+ mask |= MDIO_EEE_100TX;
+
if (enable)
- val |= (MDIO_EEE_100TX | MDIO_EEE_1000T);
+ val |= mask;
else
- val &= ~(MDIO_EEE_100TX | MDIO_EEE_1000T);
+ val &= ~mask;
phy_write_mmd(phydev, MDIO_MMD_AN, BCM_CL45VEN_EEE_ADV, (u32)val);
diff --git a/drivers/net/phy/intel-xway.c b/drivers/net/phy/intel-xway.c
index b7875b3..574a8bc 100644
--- a/drivers/net/phy/intel-xway.c
+++ b/drivers/net/phy/intel-xway.c
@@ -11,6 +11,18 @@
#define XWAY_MDIO_IMASK 0x19 /* interrupt mask */
#define XWAY_MDIO_ISTAT 0x1A /* interrupt status */
+#define XWAY_MDIO_LED 0x1B /* led control */
+
+/* bit 15:12 are reserved */
+#define XWAY_MDIO_LED_LED3_EN BIT(11) /* Enable the integrated function of LED3 */
+#define XWAY_MDIO_LED_LED2_EN BIT(10) /* Enable the integrated function of LED2 */
+#define XWAY_MDIO_LED_LED1_EN BIT(9) /* Enable the integrated function of LED1 */
+#define XWAY_MDIO_LED_LED0_EN BIT(8) /* Enable the integrated function of LED0 */
+/* bit 7:4 are reserved */
+#define XWAY_MDIO_LED_LED3_DA BIT(3) /* Direct Access to LED3 */
+#define XWAY_MDIO_LED_LED2_DA BIT(2) /* Direct Access to LED2 */
+#define XWAY_MDIO_LED_LED1_DA BIT(1) /* Direct Access to LED1 */
+#define XWAY_MDIO_LED_LED0_DA BIT(0) /* Direct Access to LED0 */
#define XWAY_MDIO_INIT_WOL BIT(15) /* Wake-On-LAN */
#define XWAY_MDIO_INIT_MSRE BIT(14)
@@ -159,6 +171,15 @@ static int xway_gphy_config_init(struct phy_device *phydev)
/* Clear all pending interrupts */
phy_read(phydev, XWAY_MDIO_ISTAT);
+ /* Ensure that integrated led function is enabled for all leds */
+ err = phy_write(phydev, XWAY_MDIO_LED,
+ XWAY_MDIO_LED_LED0_EN |
+ XWAY_MDIO_LED_LED1_EN |
+ XWAY_MDIO_LED_LED2_EN |
+ XWAY_MDIO_LED_LED3_EN);
+ if (err)
+ return err;
+
phy_write_mmd(phydev, MDIO_MMD_VEND2, XWAY_MMD_LEDCH,
XWAY_MMD_LEDCH_NACS_NONE |
XWAY_MMD_LEDCH_SBF_F02HZ |
diff --git a/drivers/net/phy/marvell.c b/drivers/net/phy/marvell.c
index 5dbdaf0..9161618 100644
--- a/drivers/net/phy/marvell.c
+++ b/drivers/net/phy/marvell.c
@@ -861,22 +861,28 @@ static int m88e1111_get_downshift(struct phy_device *phydev, u8 *data)
static int m88e1111_set_downshift(struct phy_device *phydev, u8 cnt)
{
- int val;
+ int val, err;
if (cnt > MII_M1111_PHY_EXT_CR_DOWNSHIFT_MAX)
return -E2BIG;
- if (!cnt)
- return phy_clear_bits(phydev, MII_M1111_PHY_EXT_CR,
- MII_M1111_PHY_EXT_CR_DOWNSHIFT_EN);
+ if (!cnt) {
+ err = phy_clear_bits(phydev, MII_M1111_PHY_EXT_CR,
+ MII_M1111_PHY_EXT_CR_DOWNSHIFT_EN);
+ } else {
+ val = MII_M1111_PHY_EXT_CR_DOWNSHIFT_EN;
+ val |= FIELD_PREP(MII_M1111_PHY_EXT_CR_DOWNSHIFT_MASK, cnt - 1);
- val = MII_M1111_PHY_EXT_CR_DOWNSHIFT_EN;
- val |= FIELD_PREP(MII_M1111_PHY_EXT_CR_DOWNSHIFT_MASK, cnt - 1);
+ err = phy_modify(phydev, MII_M1111_PHY_EXT_CR,
+ MII_M1111_PHY_EXT_CR_DOWNSHIFT_EN |
+ MII_M1111_PHY_EXT_CR_DOWNSHIFT_MASK,
+ val);
+ }
- return phy_modify(phydev, MII_M1111_PHY_EXT_CR,
- MII_M1111_PHY_EXT_CR_DOWNSHIFT_EN |
- MII_M1111_PHY_EXT_CR_DOWNSHIFT_MASK,
- val);
+ if (err < 0)
+ return err;
+
+ return genphy_soft_reset(phydev);
}
static int m88e1111_get_tunable(struct phy_device *phydev,
@@ -919,22 +925,28 @@ static int m88e1011_get_downshift(struct phy_device *phydev, u8 *data)
static int m88e1011_set_downshift(struct phy_device *phydev, u8 cnt)
{
- int val;
+ int val, err;
if (cnt > MII_M1011_PHY_SCR_DOWNSHIFT_MAX)
return -E2BIG;
- if (!cnt)
- return phy_clear_bits(phydev, MII_M1011_PHY_SCR,
- MII_M1011_PHY_SCR_DOWNSHIFT_EN);
+ if (!cnt) {
+ err = phy_clear_bits(phydev, MII_M1011_PHY_SCR,
+ MII_M1011_PHY_SCR_DOWNSHIFT_EN);
+ } else {
+ val = MII_M1011_PHY_SCR_DOWNSHIFT_EN;
+ val |= FIELD_PREP(MII_M1011_PHY_SCR_DOWNSHIFT_MASK, cnt - 1);
- val = MII_M1011_PHY_SCR_DOWNSHIFT_EN;
- val |= FIELD_PREP(MII_M1011_PHY_SCR_DOWNSHIFT_MASK, cnt - 1);
+ err = phy_modify(phydev, MII_M1011_PHY_SCR,
+ MII_M1011_PHY_SCR_DOWNSHIFT_EN |
+ MII_M1011_PHY_SCR_DOWNSHIFT_MASK,
+ val);
+ }
- return phy_modify(phydev, MII_M1011_PHY_SCR,
- MII_M1011_PHY_SCR_DOWNSHIFT_EN |
- MII_M1011_PHY_SCR_DOWNSHIFT_MASK,
- val);
+ if (err < 0)
+ return err;
+
+ return genphy_soft_reset(phydev);
}
static int m88e1011_get_tunable(struct phy_device *phydev,
@@ -2913,9 +2925,35 @@ static struct phy_driver marvell_drivers[] = {
.get_stats = marvell_get_stats,
},
{
- .phy_id = MARVELL_PHY_ID_88E6390,
+ .phy_id = MARVELL_PHY_ID_88E6341_FAMILY,
.phy_id_mask = MARVELL_PHY_ID_MASK,
- .name = "Marvell 88E6390",
+ .name = "Marvell 88E6341 Family",
+ /* PHY_GBIT_FEATURES */
+ .flags = PHY_POLL_CABLE_TEST,
+ .probe = m88e1510_probe,
+ .config_init = marvell_config_init,
+ .config_aneg = m88e6390_config_aneg,
+ .read_status = marvell_read_status,
+ .ack_interrupt = marvell_ack_interrupt,
+ .config_intr = marvell_config_intr,
+ .did_interrupt = m88e1121_did_interrupt,
+ .resume = genphy_resume,
+ .suspend = genphy_suspend,
+ .read_page = marvell_read_page,
+ .write_page = marvell_write_page,
+ .get_sset_count = marvell_get_sset_count,
+ .get_strings = marvell_get_strings,
+ .get_stats = marvell_get_stats,
+ .get_tunable = m88e1540_get_tunable,
+ .set_tunable = m88e1540_set_tunable,
+ .cable_test_start = marvell_vct7_cable_test_start,
+ .cable_test_tdr_start = marvell_vct5_cable_test_tdr_start,
+ .cable_test_get_status = marvell_vct7_cable_test_get_status,
+ },
+ {
+ .phy_id = MARVELL_PHY_ID_88E6390_FAMILY,
+ .phy_id_mask = MARVELL_PHY_ID_MASK,
+ .name = "Marvell 88E6390 Family",
/* PHY_GBIT_FEATURES */
.flags = PHY_POLL_CABLE_TEST,
.probe = m88e6390_probe,
@@ -3001,7 +3039,8 @@ static struct mdio_device_id __maybe_unused marvell_tbl[] = {
{ MARVELL_PHY_ID_88E1540, MARVELL_PHY_ID_MASK },
{ MARVELL_PHY_ID_88E1545, MARVELL_PHY_ID_MASK },
{ MARVELL_PHY_ID_88E3016, MARVELL_PHY_ID_MASK },
- { MARVELL_PHY_ID_88E6390, MARVELL_PHY_ID_MASK },
+ { MARVELL_PHY_ID_88E6341_FAMILY, MARVELL_PHY_ID_MASK },
+ { MARVELL_PHY_ID_88E6390_FAMILY, MARVELL_PHY_ID_MASK },
{ MARVELL_PHY_ID_88E1340S, MARVELL_PHY_ID_MASK },
{ MARVELL_PHY_ID_88E1548P, MARVELL_PHY_ID_MASK },
{ }
diff --git a/drivers/net/phy/sfp-bus.c b/drivers/net/phy/sfp-bus.c
index fb954e8..4cf874f 100644
--- a/drivers/net/phy/sfp-bus.c
+++ b/drivers/net/phy/sfp-bus.c
@@ -349,14 +349,13 @@ void sfp_parse_support(struct sfp_bus *bus, const struct sfp_eeprom_id *id,
}
/* If we haven't discovered any modes that this module supports, try
- * the encoding and bitrate to determine supported modes. Some BiDi
- * modules (eg, 1310nm/1550nm) are not 1000BASE-BX compliant due to
- * the differing wavelengths, so do not set any transceiver bits.
+ * the bitrate to determine supported modes. Some BiDi modules (eg,
+ * 1310nm/1550nm) are not 1000BASE-BX compliant due to the differing
+ * wavelengths, so do not set any transceiver bits.
*/
if (bitmap_empty(modes, __ETHTOOL_LINK_MODE_MASK_NBITS)) {
- /* If the encoding and bit rate allows 1000baseX */
- if (id->base.encoding == SFF8024_ENCODING_8B10B && br_nom &&
- br_min <= 1300 && br_max >= 1200)
+ /* If the bit rate allows 1000baseX */
+ if (br_nom && br_min <= 1300 && br_max >= 1200)
phylink_set(modes, 1000baseX_Full);
}
diff --git a/drivers/net/phy/sfp.c b/drivers/net/phy/sfp.c
index 7a680b5..2fff626 100644
--- a/drivers/net/phy/sfp.c
+++ b/drivers/net/phy/sfp.c
@@ -1501,15 +1501,19 @@ static void sfp_sm_link_down(struct sfp *sfp)
static void sfp_sm_link_check_los(struct sfp *sfp)
{
- unsigned int los = sfp->state & SFP_F_LOS;
+ const __be16 los_inverted = cpu_to_be16(SFP_OPTIONS_LOS_INVERTED);
+ const __be16 los_normal = cpu_to_be16(SFP_OPTIONS_LOS_NORMAL);
+ __be16 los_options = sfp->id.ext.options & (los_inverted | los_normal);
+ bool los = false;
/* If neither SFP_OPTIONS_LOS_INVERTED nor SFP_OPTIONS_LOS_NORMAL
- * are set, we assume that no LOS signal is available.
+ * are set, we assume that no LOS signal is available. If both are
+ * set, we assume LOS is not implemented (and is meaningless.)
*/
- if (sfp->id.ext.options & cpu_to_be16(SFP_OPTIONS_LOS_INVERTED))
- los ^= SFP_F_LOS;
- else if (!(sfp->id.ext.options & cpu_to_be16(SFP_OPTIONS_LOS_NORMAL)))
- los = 0;
+ if (los_options == los_inverted)
+ los = !(sfp->state & SFP_F_LOS);
+ else if (los_options == los_normal)
+ los = !!(sfp->state & SFP_F_LOS);
if (los)
sfp_sm_next(sfp, SFP_S_WAIT_LOS, 0);
@@ -1519,18 +1523,22 @@ static void sfp_sm_link_check_los(struct sfp *sfp)
static bool sfp_los_event_active(struct sfp *sfp, unsigned int event)
{
- return (sfp->id.ext.options & cpu_to_be16(SFP_OPTIONS_LOS_INVERTED) &&
- event == SFP_E_LOS_LOW) ||
- (sfp->id.ext.options & cpu_to_be16(SFP_OPTIONS_LOS_NORMAL) &&
- event == SFP_E_LOS_HIGH);
+ const __be16 los_inverted = cpu_to_be16(SFP_OPTIONS_LOS_INVERTED);
+ const __be16 los_normal = cpu_to_be16(SFP_OPTIONS_LOS_NORMAL);
+ __be16 los_options = sfp->id.ext.options & (los_inverted | los_normal);
+
+ return (los_options == los_inverted && event == SFP_E_LOS_LOW) ||
+ (los_options == los_normal && event == SFP_E_LOS_HIGH);
}
static bool sfp_los_event_inactive(struct sfp *sfp, unsigned int event)
{
- return (sfp->id.ext.options & cpu_to_be16(SFP_OPTIONS_LOS_INVERTED) &&
- event == SFP_E_LOS_HIGH) ||
- (sfp->id.ext.options & cpu_to_be16(SFP_OPTIONS_LOS_NORMAL) &&
- event == SFP_E_LOS_LOW);
+ const __be16 los_inverted = cpu_to_be16(SFP_OPTIONS_LOS_INVERTED);
+ const __be16 los_normal = cpu_to_be16(SFP_OPTIONS_LOS_NORMAL);
+ __be16 los_options = sfp->id.ext.options & (los_inverted | los_normal);
+
+ return (los_options == los_inverted && event == SFP_E_LOS_HIGH) ||
+ (los_options == los_normal && event == SFP_E_LOS_LOW);
}
static void sfp_sm_fault(struct sfp *sfp, unsigned int next_state, bool warn)
diff --git a/drivers/net/phy/smsc.c b/drivers/net/phy/smsc.c
index 10722fe..caf7291 100644
--- a/drivers/net/phy/smsc.c
+++ b/drivers/net/phy/smsc.c
@@ -152,10 +152,13 @@ static int lan87xx_config_aneg(struct phy_device *phydev)
return genphy_config_aneg(phydev);
}
-static int lan87xx_config_aneg_ext(struct phy_device *phydev)
+static int lan95xx_config_aneg_ext(struct phy_device *phydev)
{
int rc;
+ if (phydev->phy_id != 0x0007c0f0) /* not (LAN9500A or LAN9505A) */
+ return lan87xx_config_aneg(phydev);
+
/* Extend Manual AutoMDIX timer */
rc = phy_read(phydev, PHY_EDPD_CONFIG);
if (rc < 0)
@@ -408,7 +411,7 @@ static struct phy_driver smsc_phy_driver[] = {
.read_status = lan87xx_read_status,
.config_init = smsc_phy_config_init,
.soft_reset = smsc_phy_reset,
- .config_aneg = lan87xx_config_aneg_ext,
+ .config_aneg = lan95xx_config_aneg_ext,
/* IRQ related */
.ack_interrupt = smsc_phy_ack_interrupt,
diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index accde25..c671d8e 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -69,6 +69,14 @@
#include <linux/bpf.h>
#include <linux/bpf_trace.h>
#include <linux/mutex.h>
+#include <linux/ieee802154.h>
+#include <linux/if_ltalk.h>
+#include <uapi/linux/if_fddi.h>
+#include <uapi/linux/if_hippi.h>
+#include <uapi/linux/if_fc.h>
+#include <net/ax25.h>
+#include <net/rose.h>
+#include <net/6lowpan.h>
#include <linux/uaccess.h>
#include <linux/proc_fs.h>
@@ -2978,6 +2986,45 @@ static int tun_set_ebpf(struct tun_struct *tun, struct tun_prog __rcu **prog_p,
return __tun_set_ebpf(tun, prog_p, prog);
}
+/* Return correct value for tun->dev->addr_len based on tun->dev->type. */
+static unsigned char tun_get_addr_len(unsigned short type)
+{
+ switch (type) {
+ case ARPHRD_IP6GRE:
+ case ARPHRD_TUNNEL6:
+ return sizeof(struct in6_addr);
+ case ARPHRD_IPGRE:
+ case ARPHRD_TUNNEL:
+ case ARPHRD_SIT:
+ return 4;
+ case ARPHRD_ETHER:
+ return ETH_ALEN;
+ case ARPHRD_IEEE802154:
+ case ARPHRD_IEEE802154_MONITOR:
+ return IEEE802154_EXTENDED_ADDR_LEN;
+ case ARPHRD_PHONET_PIPE:
+ case ARPHRD_PPP:
+ case ARPHRD_NONE:
+ return 0;
+ case ARPHRD_6LOWPAN:
+ return EUI64_ADDR_LEN;
+ case ARPHRD_FDDI:
+ return FDDI_K_ALEN;
+ case ARPHRD_HIPPI:
+ return HIPPI_ALEN;
+ case ARPHRD_IEEE802:
+ return FC_ALEN;
+ case ARPHRD_ROSE:
+ return ROSE_ADDR_LEN;
+ case ARPHRD_NETROM:
+ return AX25_ADDR_LEN;
+ case ARPHRD_LOCALTLK:
+ return LTALK_ALEN;
+ default:
+ return 0;
+ }
+}
+
static long __tun_chr_ioctl(struct file *file, unsigned int cmd,
unsigned long arg, int ifreq_len)
{
@@ -3133,6 +3180,7 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd,
ret = -EBUSY;
} else {
tun->dev->type = (int) arg;
+ tun->dev->addr_len = tun_get_addr_len(tun->dev->type);
netif_info(tun, drv, tun->dev, "linktype set to %d\n",
tun->dev->type);
ret = 0;
diff --git a/drivers/net/usb/ax88179_178a.c b/drivers/net/usb/ax88179_178a.c
index 5541f3f..b77b0a3 100644
--- a/drivers/net/usb/ax88179_178a.c
+++ b/drivers/net/usb/ax88179_178a.c
@@ -296,12 +296,12 @@ static int ax88179_read_cmd(struct usbnet *dev, u8 cmd, u16 value, u16 index,
int ret;
if (2 == size) {
- u16 buf;
+ u16 buf = 0;
ret = __ax88179_read_cmd(dev, cmd, value, index, size, &buf, 0);
le16_to_cpus(&buf);
*((u16 *)data) = buf;
} else if (4 == size) {
- u32 buf;
+ u32 buf = 0;
ret = __ax88179_read_cmd(dev, cmd, value, index, size, &buf, 0);
le32_to_cpus(&buf);
*((u32 *)data) = buf;
@@ -1296,6 +1296,8 @@ static void ax88179_get_mac_addr(struct usbnet *dev)
{
u8 mac[ETH_ALEN];
+ memset(mac, 0, sizeof(mac));
+
/* Maybe the boot loader passed the MAC address via device tree */
if (!eth_platform_get_mac_address(&dev->udev->dev, mac)) {
netif_dbg(dev, ifup, dev->net,
diff --git a/drivers/net/usb/hso.c b/drivers/net/usb/hso.c
index 2bb28db..fbfcbd0 100644
--- a/drivers/net/usb/hso.c
+++ b/drivers/net/usb/hso.c
@@ -611,7 +611,7 @@ static struct hso_serial *get_serial_by_index(unsigned index)
return serial;
}
-static int get_free_serial_index(void)
+static int obtain_minor(struct hso_serial *serial)
{
int index;
unsigned long flags;
@@ -619,8 +619,10 @@ static int get_free_serial_index(void)
spin_lock_irqsave(&serial_table_lock, flags);
for (index = 0; index < HSO_SERIAL_TTY_MINORS; index++) {
if (serial_table[index] == NULL) {
+ serial_table[index] = serial->parent;
+ serial->minor = index;
spin_unlock_irqrestore(&serial_table_lock, flags);
- return index;
+ return 0;
}
}
spin_unlock_irqrestore(&serial_table_lock, flags);
@@ -629,15 +631,12 @@ static int get_free_serial_index(void)
return -1;
}
-static void set_serial_by_index(unsigned index, struct hso_serial *serial)
+static void release_minor(struct hso_serial *serial)
{
unsigned long flags;
spin_lock_irqsave(&serial_table_lock, flags);
- if (serial)
- serial_table[index] = serial->parent;
- else
- serial_table[index] = NULL;
+ serial_table[serial->minor] = NULL;
spin_unlock_irqrestore(&serial_table_lock, flags);
}
@@ -1690,7 +1689,7 @@ static int hso_serial_tiocmset(struct tty_struct *tty,
spin_unlock_irqrestore(&serial->serial_lock, flags);
return usb_control_msg(serial->parent->usb,
- usb_rcvctrlpipe(serial->parent->usb, 0), 0x22,
+ usb_sndctrlpipe(serial->parent->usb, 0), 0x22,
0x21, val, if_num, NULL, 0,
USB_CTRL_SET_TIMEOUT);
}
@@ -2230,6 +2229,7 @@ static int hso_stop_serial_device(struct hso_device *hso_dev)
static void hso_serial_tty_unregister(struct hso_serial *serial)
{
tty_unregister_device(tty_drv, serial->minor);
+ release_minor(serial);
}
static void hso_serial_common_free(struct hso_serial *serial)
@@ -2253,24 +2253,22 @@ static void hso_serial_common_free(struct hso_serial *serial)
static int hso_serial_common_create(struct hso_serial *serial, int num_urbs,
int rx_size, int tx_size)
{
- int minor;
int i;
tty_port_init(&serial->port);
- minor = get_free_serial_index();
- if (minor < 0)
+ if (obtain_minor(serial))
goto exit2;
/* register our minor number */
serial->parent->dev = tty_port_register_device_attr(&serial->port,
- tty_drv, minor, &serial->parent->interface->dev,
+ tty_drv, serial->minor, &serial->parent->interface->dev,
serial->parent, hso_serial_dev_groups);
- if (IS_ERR(serial->parent->dev))
+ if (IS_ERR(serial->parent->dev)) {
+ release_minor(serial);
goto exit2;
+ }
- /* fill in specific data for later use */
- serial->minor = minor;
serial->magic = HSO_SERIAL_MAGIC;
spin_lock_init(&serial->serial_lock);
serial->num_rx_urbs = num_urbs;
@@ -2438,7 +2436,7 @@ static int hso_rfkill_set_block(void *data, bool blocked)
if (hso_dev->usb_gone)
rv = 0;
else
- rv = usb_control_msg(hso_dev->usb, usb_rcvctrlpipe(hso_dev->usb, 0),
+ rv = usb_control_msg(hso_dev->usb, usb_sndctrlpipe(hso_dev->usb, 0),
enabled ? 0x82 : 0x81, 0x40, 0, 0, NULL, 0,
USB_CTRL_SET_TIMEOUT);
mutex_unlock(&hso_dev->mutex);
@@ -2620,32 +2618,31 @@ static struct hso_device *hso_create_bulk_serial_device(
num_urbs = 2;
serial->tiocmget = kzalloc(sizeof(struct hso_tiocmget),
GFP_KERNEL);
+ if (!serial->tiocmget)
+ goto exit;
serial->tiocmget->serial_state_notification
= kzalloc(sizeof(struct hso_serial_state_notification),
GFP_KERNEL);
- /* it isn't going to break our heart if serial->tiocmget
- * allocation fails don't bother checking this.
- */
- if (serial->tiocmget && serial->tiocmget->serial_state_notification) {
- tiocmget = serial->tiocmget;
- tiocmget->endp = hso_get_ep(interface,
- USB_ENDPOINT_XFER_INT,
- USB_DIR_IN);
- if (!tiocmget->endp) {
- dev_err(&interface->dev, "Failed to find INT IN ep\n");
- goto exit;
- }
-
- tiocmget->urb = usb_alloc_urb(0, GFP_KERNEL);
- if (tiocmget->urb) {
- mutex_init(&tiocmget->mutex);
- init_waitqueue_head(&tiocmget->waitq);
- } else
- hso_free_tiomget(serial);
+ if (!serial->tiocmget->serial_state_notification)
+ goto exit;
+ tiocmget = serial->tiocmget;
+ tiocmget->endp = hso_get_ep(interface,
+ USB_ENDPOINT_XFER_INT,
+ USB_DIR_IN);
+ if (!tiocmget->endp) {
+ dev_err(&interface->dev, "Failed to find INT IN ep\n");
+ goto exit;
}
- }
- else
+
+ tiocmget->urb = usb_alloc_urb(0, GFP_KERNEL);
+ if (!tiocmget->urb)
+ goto exit;
+
+ mutex_init(&tiocmget->mutex);
+ init_waitqueue_head(&tiocmget->waitq);
+ } else {
num_urbs = 1;
+ }
if (hso_serial_common_create(serial, num_urbs, BULK_URB_RX_SIZE,
BULK_URB_TX_SIZE))
@@ -2667,9 +2664,6 @@ static struct hso_device *hso_create_bulk_serial_device(
serial->write_data = hso_std_serial_write_data;
- /* and record this serial */
- set_serial_by_index(serial->minor, serial);
-
/* setup the proc dirs and files if needed */
hso_log_port(hso_dev);
@@ -2726,9 +2720,6 @@ struct hso_device *hso_create_mux_serial_device(struct usb_interface *interface,
serial->shared_int->ref_count++;
mutex_unlock(&serial->shared_int->shared_int_lock);
- /* and record this serial */
- set_serial_by_index(serial->minor, serial);
-
/* setup the proc dirs and files if needed */
hso_log_port(hso_dev);
@@ -3112,8 +3103,7 @@ static void hso_free_interface(struct usb_interface *interface)
cancel_work_sync(&serial_table[i]->async_put_intf);
cancel_work_sync(&serial_table[i]->async_get_intf);
hso_serial_tty_unregister(serial);
- kref_put(&serial_table[i]->ref, hso_serial_ref_free);
- set_serial_by_index(i, NULL);
+ kref_put(&serial->parent->ref, hso_serial_ref_free);
}
}
diff --git a/drivers/net/usb/smsc75xx.c b/drivers/net/usb/smsc75xx.c
index 8689835..d44657b 100644
--- a/drivers/net/usb/smsc75xx.c
+++ b/drivers/net/usb/smsc75xx.c
@@ -1483,7 +1483,7 @@ static int smsc75xx_bind(struct usbnet *dev, struct usb_interface *intf)
ret = smsc75xx_wait_ready(dev, 0);
if (ret < 0) {
netdev_warn(dev->net, "device not ready in smsc75xx_bind\n");
- return ret;
+ goto err;
}
smsc75xx_init_mac_address(dev);
@@ -1492,7 +1492,7 @@ static int smsc75xx_bind(struct usbnet *dev, struct usb_interface *intf)
ret = smsc75xx_reset(dev);
if (ret < 0) {
netdev_warn(dev->net, "smsc75xx_reset error %d\n", ret);
- return ret;
+ goto err;
}
dev->net->netdev_ops = &smsc75xx_netdev_ops;
@@ -1502,6 +1502,10 @@ static int smsc75xx_bind(struct usbnet *dev, struct usb_interface *intf)
dev->hard_mtu = dev->net->mtu + dev->net->hard_header_len;
dev->net->max_mtu = MAX_SINGLE_PACKET_SIZE;
return 0;
+
+err:
+ kfree(pdata);
+ return ret;
}
static void smsc75xx_unbind(struct usbnet *dev, struct usb_interface *intf)
diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
index 50cb8f045..d3b698d 100644
--- a/drivers/net/vxlan.c
+++ b/drivers/net/vxlan.c
@@ -2724,12 +2724,17 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
goto tx_error;
} else if (err) {
if (info) {
+ struct ip_tunnel_info *unclone;
struct in_addr src, dst;
+ unclone = skb_tunnel_info_unclone(skb);
+ if (unlikely(!unclone))
+ goto tx_error;
+
src = remote_ip.sin.sin_addr;
dst = local_ip.sin.sin_addr;
- info->key.u.ipv4.src = src.s_addr;
- info->key.u.ipv4.dst = dst.s_addr;
+ unclone->key.u.ipv4.src = src.s_addr;
+ unclone->key.u.ipv4.dst = dst.s_addr;
}
vxlan_encap_bypass(skb, vxlan, vxlan, vni, false);
dst_release(ndst);
@@ -2780,12 +2785,17 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
goto tx_error;
} else if (err) {
if (info) {
+ struct ip_tunnel_info *unclone;
struct in6_addr src, dst;
+ unclone = skb_tunnel_info_unclone(skb);
+ if (unlikely(!unclone))
+ goto tx_error;
+
src = remote_ip.sin6.sin6_addr;
dst = local_ip.sin6.sin6_addr;
- info->key.u.ipv6.src = src;
- info->key.u.ipv6.dst = dst;
+ unclone->key.u.ipv6.src = src;
+ unclone->key.u.ipv6.dst = dst;
}
vxlan_encap_bypass(skb, vxlan, vxlan, vni, false);
diff --git a/drivers/net/wan/lapbether.c b/drivers/net/wan/lapbether.c
index 605c01f..f6562a3 100644
--- a/drivers/net/wan/lapbether.c
+++ b/drivers/net/wan/lapbether.c
@@ -51,6 +51,8 @@ struct lapbethdev {
struct list_head node;
struct net_device *ethdev; /* link to ethernet device */
struct net_device *axdev; /* lapbeth device (lapb#) */
+ bool up;
+ spinlock_t up_lock; /* Protects "up" */
};
static LIST_HEAD(lapbeth_devices);
@@ -98,8 +100,9 @@ static int lapbeth_rcv(struct sk_buff *skb, struct net_device *dev, struct packe
rcu_read_lock();
lapbeth = lapbeth_get_x25_dev(dev);
if (!lapbeth)
- goto drop_unlock;
- if (!netif_running(lapbeth->axdev))
+ goto drop_unlock_rcu;
+ spin_lock_bh(&lapbeth->up_lock);
+ if (!lapbeth->up)
goto drop_unlock;
len = skb->data[0] + skb->data[1] * 256;
@@ -114,11 +117,14 @@ static int lapbeth_rcv(struct sk_buff *skb, struct net_device *dev, struct packe
goto drop_unlock;
}
out:
+ spin_unlock_bh(&lapbeth->up_lock);
rcu_read_unlock();
return 0;
drop_unlock:
kfree_skb(skb);
goto out;
+drop_unlock_rcu:
+ rcu_read_unlock();
drop:
kfree_skb(skb);
return 0;
@@ -148,13 +154,11 @@ static int lapbeth_data_indication(struct net_device *dev, struct sk_buff *skb)
static netdev_tx_t lapbeth_xmit(struct sk_buff *skb,
struct net_device *dev)
{
+ struct lapbethdev *lapbeth = netdev_priv(dev);
int err;
- /*
- * Just to be *really* sure not to send anything if the interface
- * is down, the ethernet device may have gone.
- */
- if (!netif_running(dev))
+ spin_lock_bh(&lapbeth->up_lock);
+ if (!lapbeth->up)
goto drop;
/* There should be a pseudo header of 1 byte added by upper layers.
@@ -185,6 +189,7 @@ static netdev_tx_t lapbeth_xmit(struct sk_buff *skb,
goto drop;
}
out:
+ spin_unlock_bh(&lapbeth->up_lock);
return NETDEV_TX_OK;
drop:
kfree_skb(skb);
@@ -276,6 +281,7 @@ static const struct lapb_register_struct lapbeth_callbacks = {
*/
static int lapbeth_open(struct net_device *dev)
{
+ struct lapbethdev *lapbeth = netdev_priv(dev);
int err;
if ((err = lapb_register(dev, &lapbeth_callbacks)) != LAPB_OK) {
@@ -283,13 +289,22 @@ static int lapbeth_open(struct net_device *dev)
return -ENODEV;
}
+ spin_lock_bh(&lapbeth->up_lock);
+ lapbeth->up = true;
+ spin_unlock_bh(&lapbeth->up_lock);
+
return 0;
}
static int lapbeth_close(struct net_device *dev)
{
+ struct lapbethdev *lapbeth = netdev_priv(dev);
int err;
+ spin_lock_bh(&lapbeth->up_lock);
+ lapbeth->up = false;
+ spin_unlock_bh(&lapbeth->up_lock);
+
if ((err = lapb_unregister(dev)) != LAPB_OK)
pr_err("lapb_unregister error: %d\n", err);
@@ -347,6 +362,9 @@ static int lapbeth_new_device(struct net_device *dev)
dev_hold(dev);
lapbeth->ethdev = dev;
+ lapbeth->up = false;
+ spin_lock_init(&lapbeth->up_lock);
+
rc = -EIO;
if (register_netdevice(ndev))
goto fail;
diff --git a/drivers/net/wimax/i2400m/op-rfkill.c b/drivers/net/wimax/i2400m/op-rfkill.c
index 5c79f05..34f81f1 100644
--- a/drivers/net/wimax/i2400m/op-rfkill.c
+++ b/drivers/net/wimax/i2400m/op-rfkill.c
@@ -86,7 +86,7 @@ int i2400m_op_rfkill_sw_toggle(struct wimax_dev *wimax_dev,
if (cmd == NULL)
goto error_alloc;
cmd->hdr.type = cpu_to_le16(I2400M_MT_CMD_RF_CONTROL);
- cmd->hdr.length = sizeof(cmd->sw_rf);
+ cmd->hdr.length = cpu_to_le16(sizeof(cmd->sw_rf));
cmd->hdr.version = cpu_to_le16(I2400M_L3L4_VERSION);
cmd->sw_rf.hdr.type = cpu_to_le16(I2400M_TLV_RF_OPERATION);
cmd->sw_rf.hdr.length = cpu_to_le16(sizeof(cmd->sw_rf.status));
diff --git a/drivers/net/wireless/ath/ath10k/htc.c b/drivers/net/wireless/ath/ath10k/htc.c
index 31df6dd..540dd59 100644
--- a/drivers/net/wireless/ath/ath10k/htc.c
+++ b/drivers/net/wireless/ath/ath10k/htc.c
@@ -665,7 +665,7 @@ static int ath10k_htc_send_bundle(struct ath10k_htc_ep *ep,
ath10k_dbg(ar, ATH10K_DBG_HTC,
"bundle tx status %d eid %d req count %d count %d len %d\n",
- ret, ep->eid, skb_queue_len(&ep->tx_req_head), cn, bundle_skb->len);
+ ret, ep->eid, skb_queue_len(&ep->tx_req_head), cn, skb_len);
return ret;
}
diff --git a/drivers/net/wireless/ath/ath10k/htt.h b/drivers/net/wireless/ath/ath10k/htt.h
index cad5949..2ee2c65 100644
--- a/drivers/net/wireless/ath/ath10k/htt.h
+++ b/drivers/net/wireless/ath/ath10k/htt.h
@@ -845,6 +845,7 @@ enum htt_security_types {
#define ATH10K_HTT_TXRX_PEER_SECURITY_MAX 2
#define ATH10K_TXRX_NUM_EXT_TIDS 19
+#define ATH10K_TXRX_NON_QOS_TID 16
enum htt_security_flags {
#define HTT_SECURITY_TYPE_MASK 0x7F
diff --git a/drivers/net/wireless/ath/ath10k/htt_rx.c b/drivers/net/wireless/ath/ath10k/htt_rx.c
index 5c1af20..28ec3c5 100644
--- a/drivers/net/wireless/ath/ath10k/htt_rx.c
+++ b/drivers/net/wireless/ath/ath10k/htt_rx.c
@@ -1746,16 +1746,97 @@ static void ath10k_htt_rx_h_csum_offload(struct sk_buff *msdu)
msdu->ip_summed = ath10k_htt_rx_get_csum_state(msdu);
}
+static u64 ath10k_htt_rx_h_get_pn(struct ath10k *ar, struct sk_buff *skb,
+ u16 offset,
+ enum htt_rx_mpdu_encrypt_type enctype)
+{
+ struct ieee80211_hdr *hdr;
+ u64 pn = 0;
+ u8 *ehdr;
+
+ hdr = (struct ieee80211_hdr *)(skb->data + offset);
+ ehdr = skb->data + offset + ieee80211_hdrlen(hdr->frame_control);
+
+ if (enctype == HTT_RX_MPDU_ENCRYPT_AES_CCM_WPA2) {
+ pn = ehdr[0];
+ pn |= (u64)ehdr[1] << 8;
+ pn |= (u64)ehdr[4] << 16;
+ pn |= (u64)ehdr[5] << 24;
+ pn |= (u64)ehdr[6] << 32;
+ pn |= (u64)ehdr[7] << 40;
+ }
+ return pn;
+}
+
+static bool ath10k_htt_rx_h_frag_multicast_check(struct ath10k *ar,
+ struct sk_buff *skb,
+ u16 offset)
+{
+ struct ieee80211_hdr *hdr;
+
+ hdr = (struct ieee80211_hdr *)(skb->data + offset);
+ return !is_multicast_ether_addr(hdr->addr1);
+}
+
+static bool ath10k_htt_rx_h_frag_pn_check(struct ath10k *ar,
+ struct sk_buff *skb,
+ u16 peer_id,
+ u16 offset,
+ enum htt_rx_mpdu_encrypt_type enctype)
+{
+ struct ath10k_peer *peer;
+ union htt_rx_pn_t *last_pn, new_pn = {0};
+ struct ieee80211_hdr *hdr;
+ bool more_frags;
+ u8 tid, frag_number;
+ u32 seq;
+
+ peer = ath10k_peer_find_by_id(ar, peer_id);
+ if (!peer) {
+ ath10k_dbg(ar, ATH10K_DBG_HTT, "invalid peer for frag pn check\n");
+ return false;
+ }
+
+ hdr = (struct ieee80211_hdr *)(skb->data + offset);
+ if (ieee80211_is_data_qos(hdr->frame_control))
+ tid = ieee80211_get_tid(hdr);
+ else
+ tid = ATH10K_TXRX_NON_QOS_TID;
+
+ last_pn = &peer->frag_tids_last_pn[tid];
+ new_pn.pn48 = ath10k_htt_rx_h_get_pn(ar, skb, offset, enctype);
+ more_frags = ieee80211_has_morefrags(hdr->frame_control);
+ frag_number = le16_to_cpu(hdr->seq_ctrl) & IEEE80211_SCTL_FRAG;
+ seq = (__le16_to_cpu(hdr->seq_ctrl) & IEEE80211_SCTL_SEQ) >> 4;
+
+ if (frag_number == 0) {
+ last_pn->pn48 = new_pn.pn48;
+ peer->frag_tids_seq[tid] = seq;
+ } else {
+ if (seq != peer->frag_tids_seq[tid])
+ return false;
+
+ if (new_pn.pn48 != last_pn->pn48 + 1)
+ return false;
+
+ last_pn->pn48 = new_pn.pn48;
+ }
+
+ return true;
+}
+
static void ath10k_htt_rx_h_mpdu(struct ath10k *ar,
struct sk_buff_head *amsdu,
struct ieee80211_rx_status *status,
bool fill_crypt_header,
u8 *rx_hdr,
- enum ath10k_pkt_rx_err *err)
+ enum ath10k_pkt_rx_err *err,
+ u16 peer_id,
+ bool frag)
{
struct sk_buff *first;
struct sk_buff *last;
- struct sk_buff *msdu;
+ struct sk_buff *msdu, *temp;
struct htt_rx_desc *rxd;
struct ieee80211_hdr *hdr;
enum htt_rx_mpdu_encrypt_type enctype;
@@ -1768,6 +1849,7 @@ static void ath10k_htt_rx_h_mpdu(struct ath10k *ar,
bool is_decrypted;
bool is_mgmt;
u32 attention;
+ bool frag_pn_check = true, multicast_check = true;
if (skb_queue_empty(amsdu))
return;
@@ -1866,7 +1948,37 @@ static void ath10k_htt_rx_h_mpdu(struct ath10k *ar,
}
skb_queue_walk(amsdu, msdu) {
+ if (frag && !fill_crypt_header && is_decrypted &&
+ enctype == HTT_RX_MPDU_ENCRYPT_AES_CCM_WPA2)
+ frag_pn_check = ath10k_htt_rx_h_frag_pn_check(ar,
+ msdu,
+ peer_id,
+ 0,
+ enctype);
+
+ if (frag)
+ multicast_check = ath10k_htt_rx_h_frag_multicast_check(ar,
+ msdu,
+ 0);
+
+ if (!frag_pn_check || !multicast_check) {
+ /* Discard the fragment with invalid PN or multicast DA
+ */
+ temp = msdu->prev;
+ __skb_unlink(msdu, amsdu);
+ dev_kfree_skb_any(msdu);
+ msdu = temp;
+ frag_pn_check = true;
+ multicast_check = true;
+ continue;
+ }
+
ath10k_htt_rx_h_csum_offload(msdu);
+
+ if (frag && !fill_crypt_header &&
+ enctype == HTT_RX_MPDU_ENCRYPT_TKIP_WPA)
+ status->flag &= ~RX_FLAG_MMIC_STRIPPED;
+
ath10k_htt_rx_h_undecap(ar, msdu, status, first_hdr, enctype,
is_decrypted);
@@ -1884,6 +1996,11 @@ static void ath10k_htt_rx_h_mpdu(struct ath10k *ar,
hdr = (void *)msdu->data;
hdr->frame_control &= ~__cpu_to_le16(IEEE80211_FCTL_PROTECTED);
+
+ if (frag && !fill_crypt_header &&
+ enctype == HTT_RX_MPDU_ENCRYPT_TKIP_WPA)
+ status->flag &= ~RX_FLAG_IV_STRIPPED &
+ ~RX_FLAG_MMIC_STRIPPED;
}
}
@@ -1991,14 +2108,62 @@ static void ath10k_htt_rx_h_unchain(struct ath10k *ar,
ath10k_unchain_msdu(amsdu, unchain_cnt);
}
+static bool ath10k_htt_rx_validate_amsdu(struct ath10k *ar,
+ struct sk_buff_head *amsdu)
+{
+ u8 *subframe_hdr;
+ struct sk_buff *first;
+ bool is_first, is_last;
+ struct htt_rx_desc *rxd;
+ struct ieee80211_hdr *hdr;
+ size_t hdr_len, crypto_len;
+ enum htt_rx_mpdu_encrypt_type enctype;
+ int bytes_aligned = ar->hw_params.decap_align_bytes;
+
+ first = skb_peek(amsdu);
+
+ rxd = (void *)first->data - sizeof(*rxd);
+ hdr = (void *)rxd->rx_hdr_status;
+
+ is_first = !!(rxd->msdu_end.common.info0 &
+ __cpu_to_le32(RX_MSDU_END_INFO0_FIRST_MSDU));
+ is_last = !!(rxd->msdu_end.common.info0 &
+ __cpu_to_le32(RX_MSDU_END_INFO0_LAST_MSDU));
+
+ /* Return in case of non-aggregated msdu */
+ if (is_first && is_last)
+ return true;
+
+ /* First msdu flag is not set for the first msdu of the list */
+ if (!is_first)
+ return false;
+
+ enctype = MS(__le32_to_cpu(rxd->mpdu_start.info0),
+ RX_MPDU_START_INFO0_ENCRYPT_TYPE);
+
+ hdr_len = ieee80211_hdrlen(hdr->frame_control);
+ crypto_len = ath10k_htt_rx_crypto_param_len(ar, enctype);
+
+ subframe_hdr = (u8 *)hdr + round_up(hdr_len, bytes_aligned) +
+ crypto_len;
+
+ /* Validate if the amsdu has a proper first subframe.
+ * There are chances a single msdu can be received as amsdu when
+ * the unauthenticated amsdu flag of a QoS header
+ * gets flipped in non-SPP AMSDU's, in such cases the first
+ * subframe has llc/snap header in place of a valid da.
+ * return false if the da matches rfc1042 pattern
+ */
+ if (ether_addr_equal(subframe_hdr, rfc1042_header))
+ return false;
+
+ return true;
+}
+
static bool ath10k_htt_rx_amsdu_allowed(struct ath10k *ar,
struct sk_buff_head *amsdu,
struct ieee80211_rx_status *rx_status)
{
- /* FIXME: It might be a good idea to do some fuzzy-testing to drop
- * invalid/dangerous frames.
- */
-
if (!rx_status->freq) {
ath10k_dbg(ar, ATH10K_DBG_HTT, "no channel configured; ignoring frame(s)!\n");
return false;
@@ -2009,6 +2174,11 @@ static bool ath10k_htt_rx_amsdu_allowed(struct ath10k *ar,
return false;
}
+ if (!ath10k_htt_rx_validate_amsdu(ar, amsdu)) {
+ ath10k_dbg(ar, ATH10K_DBG_HTT, "invalid amsdu received\n");
+ return false;
+ }
+
return true;
}
@@ -2071,7 +2241,8 @@ static int ath10k_htt_rx_handle_amsdu(struct ath10k_htt *htt)
ath10k_htt_rx_h_unchain(ar, &amsdu, &drop_cnt, &unchain_cnt);
ath10k_htt_rx_h_filter(ar, &amsdu, rx_status, &drop_cnt_filter);
- ath10k_htt_rx_h_mpdu(ar, &amsdu, rx_status, true, first_hdr, &err);
+ ath10k_htt_rx_h_mpdu(ar, &amsdu, rx_status, true, first_hdr, &err, 0,
+ false);
msdus_to_queue = skb_queue_len(&amsdu);
ath10k_htt_rx_h_enqueue(ar, &amsdu, rx_status);
@@ -2204,6 +2375,11 @@ static bool ath10k_htt_rx_proc_rx_ind_hl(struct ath10k_htt *htt,
fw_desc = &rx->fw_desc;
rx_desc_len = fw_desc->len;
+ if (fw_desc->u.bits.discard) {
+ ath10k_dbg(ar, ATH10K_DBG_HTT, "htt discard mpdu\n");
+ goto err;
+ }
+
/* I have not yet seen any case where num_mpdu_ranges > 1.
* qcacld does not seem handle that case either, so we introduce the
* same limitiation here as well.
@@ -2509,6 +2685,13 @@ static bool ath10k_htt_rx_proc_rx_frag_ind_hl(struct ath10k_htt *htt,
rx_desc = (struct htt_hl_rx_desc *)(skb->data + tot_hdr_len);
rx_desc_info = __le32_to_cpu(rx_desc->info);
+ hdr = (struct ieee80211_hdr *)((u8 *)rx_desc + rx_hl->fw_desc.len);
+
+ if (is_multicast_ether_addr(hdr->addr1)) {
+ /* Discard the fragment with multicast DA */
+ goto err;
+ }
+
if (!MS(rx_desc_info, HTT_RX_DESC_HL_INFO_ENCRYPTED)) {
spin_unlock_bh(&ar->data_lock);
return ath10k_htt_rx_proc_rx_ind_hl(htt, &resp->rx_ind_hl, skb,
@@ -2516,8 +2699,6 @@ static bool ath10k_htt_rx_proc_rx_frag_ind_hl(struct ath10k_htt *htt,
HTT_RX_NON_TKIP_MIC);
}
- hdr = (struct ieee80211_hdr *)((u8 *)rx_desc + rx_hl->fw_desc.len);
-
if (ieee80211_has_retry(hdr->frame_control))
goto err;
@@ -3027,7 +3208,7 @@ static int ath10k_htt_rx_in_ord_ind(struct ath10k *ar, struct sk_buff *skb)
ath10k_htt_rx_h_ppdu(ar, &amsdu, status, vdev_id);
ath10k_htt_rx_h_filter(ar, &amsdu, status, NULL);
ath10k_htt_rx_h_mpdu(ar, &amsdu, status, false, NULL,
- NULL);
+ NULL, peer_id, frag);
ath10k_htt_rx_h_enqueue(ar, &amsdu, status);
break;
case -EAGAIN:
diff --git a/drivers/net/wireless/ath/ath10k/rx_desc.h b/drivers/net/wireless/ath/ath10k/rx_desc.h
index dec1582..13a1cae6 100644
--- a/drivers/net/wireless/ath/ath10k/rx_desc.h
+++ b/drivers/net/wireless/ath/ath10k/rx_desc.h
@@ -1282,7 +1282,19 @@ struct fw_rx_desc_base {
#define FW_RX_DESC_UDP (1 << 6)
struct fw_rx_desc_hl {
- u8 info0;
+ union {
+ struct {
+ u8 discard:1,
+ forward:1,
+ any_err:1,
+ dup_err:1,
+ reserved:1,
+ inspect:1,
+ extension:2;
+ } bits;
+ u8 info0;
+ } u;
+
u8 version;
u8 len;
u8 flags;
diff --git a/drivers/net/wireless/ath/ath10k/wmi-tlv.c b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
index e7072fc..4f2fbc6 100644
--- a/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+++ b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
@@ -592,6 +592,9 @@ static void ath10k_wmi_event_tdls_peer(struct ath10k *ar, struct sk_buff *skb)
GFP_ATOMIC
);
break;
+ default:
+ kfree(tb);
+ return;
}
exit:
diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c
index 3638501..2bff8eb5 100644
--- a/drivers/net/wireless/ath/ath11k/dp_rx.c
+++ b/drivers/net/wireless/ath/ath11k/dp_rx.c
@@ -832,6 +832,24 @@ static void ath11k_dp_rx_frags_cleanup(struct dp_rx_tid *rx_tid, bool rel_link_d
__skb_queue_purge(&rx_tid->rx_frags);
}
+void ath11k_peer_frags_flush(struct ath11k *ar, struct ath11k_peer *peer)
+{
+ struct dp_rx_tid *rx_tid;
+ int i;
+
+ lockdep_assert_held(&ar->ab->base_lock);
+
+ for (i = 0; i <= IEEE80211_NUM_TIDS; i++) {
+ rx_tid = &peer->rx_tid[i];
+
+ spin_unlock_bh(&ar->ab->base_lock);
+ del_timer_sync(&rx_tid->frag_timer);
+ spin_lock_bh(&ar->ab->base_lock);
+
+ ath11k_dp_rx_frags_cleanup(rx_tid, true);
+ }
+}
+
void ath11k_peer_rx_tid_cleanup(struct ath11k *ar, struct ath11k_peer *peer)
{
struct dp_rx_tid *rx_tid;
diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.h b/drivers/net/wireless/ath/ath11k/dp_rx.h
index fbea45f..6986752 100644
--- a/drivers/net/wireless/ath/ath11k/dp_rx.h
+++ b/drivers/net/wireless/ath/ath11k/dp_rx.h
@@ -49,6 +49,7 @@ int ath11k_dp_peer_rx_pn_replay_config(struct ath11k_vif *arvif,
const u8 *peer_addr,
enum set_key_cmd key_cmd,
struct ieee80211_key_conf *key);
+void ath11k_peer_frags_flush(struct ath11k *ar, struct ath11k_peer *peer);
void ath11k_peer_rx_tid_cleanup(struct ath11k *ar, struct ath11k_peer *peer);
void ath11k_peer_rx_tid_delete(struct ath11k *ar,
struct ath11k_peer *peer, u8 tid);
diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
index e9e6b0c..0738c78 100644
--- a/drivers/net/wireless/ath/ath11k/mac.c
+++ b/drivers/net/wireless/ath/ath11k/mac.c
@@ -2525,6 +2525,12 @@ static int ath11k_mac_op_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
*/
spin_lock_bh(&ab->base_lock);
peer = ath11k_peer_find(ab, arvif->vdev_id, peer_addr);
+
+ /* flush the fragments cache during key (re)install to
+ * ensure all frags in the new frag list belong to the same key.
+ */
+ if (peer && cmd == SET_KEY)
+ ath11k_peer_frags_flush(ar, peer);
spin_unlock_bh(&ab->base_lock);
if (!peer) {
diff --git a/drivers/net/wireless/ath/ath11k/wmi.c b/drivers/net/wireless/ath/ath11k/wmi.c
index 173ab6c..eca8622 100644
--- a/drivers/net/wireless/ath/ath11k/wmi.c
+++ b/drivers/net/wireless/ath/ath11k/wmi.c
@@ -4986,31 +4986,6 @@ int ath11k_wmi_pull_fw_stats(struct ath11k_base *ab, struct sk_buff *skb,
return 0;
}
-static int
-ath11k_pull_pdev_temp_ev(struct ath11k_base *ab, u8 *evt_buf,
- u32 len, const struct wmi_pdev_temperature_event *ev)
-{
- const void **tb;
- int ret;
-
- tb = ath11k_wmi_tlv_parse_alloc(ab, evt_buf, len, GFP_ATOMIC);
- if (IS_ERR(tb)) {
- ret = PTR_ERR(tb);
- ath11k_warn(ab, "failed to parse tlv: %d\n", ret);
- return ret;
- }
-
- ev = tb[WMI_TAG_PDEV_TEMPERATURE_EVENT];
- if (!ev) {
- ath11k_warn(ab, "failed to fetch pdev temp ev");
- kfree(tb);
- return -EPROTO;
- }
-
- kfree(tb);
- return 0;
-}
-
size_t ath11k_wmi_fw_stats_num_vdevs(struct list_head *head)
{
struct ath11k_fw_stats_vdev *i;
@@ -6390,23 +6365,37 @@ ath11k_wmi_pdev_temperature_event(struct ath11k_base *ab,
struct sk_buff *skb)
{
struct ath11k *ar;
- struct wmi_pdev_temperature_event ev = {0};
+ const void **tb;
+ const struct wmi_pdev_temperature_event *ev;
+ int ret;
- if (ath11k_pull_pdev_temp_ev(ab, skb->data, skb->len, &ev) != 0) {
- ath11k_warn(ab, "failed to extract pdev temperature event");
+ tb = ath11k_wmi_tlv_parse_alloc(ab, skb->data, skb->len, GFP_ATOMIC);
+ if (IS_ERR(tb)) {
+ ret = PTR_ERR(tb);
+ ath11k_warn(ab, "failed to parse tlv: %d\n", ret);
+ return;
+ }
+
+ ev = tb[WMI_TAG_PDEV_TEMPERATURE_EVENT];
+ if (!ev) {
+ ath11k_warn(ab, "failed to fetch pdev temp ev");
+ kfree(tb);
return;
}
ath11k_dbg(ab, ATH11K_DBG_WMI,
- "pdev temperature ev temp %d pdev_id %d\n", ev.temp, ev.pdev_id);
+ "pdev temperature ev temp %d pdev_id %d\n", ev->temp, ev->pdev_id);
- ar = ath11k_mac_get_ar_by_pdev_id(ab, ev.pdev_id);
+ ar = ath11k_mac_get_ar_by_pdev_id(ab, ev->pdev_id);
if (!ar) {
- ath11k_warn(ab, "invalid pdev id in pdev temperature ev %d", ev.pdev_id);
+ ath11k_warn(ab, "invalid pdev id in pdev temperature ev %d", ev->pdev_id);
+ kfree(tb);
return;
}
- ath11k_thermal_event_temperature(ar, ev.temp);
+ ath11k_thermal_event_temperature(ar, ev->temp);
+
+ kfree(tb);
}
static void ath11k_wmi_tlv_op_rx(struct ath11k_base *ab, struct sk_buff *skb)
diff --git a/drivers/net/wireless/ath/ath6kl/debug.c b/drivers/net/wireless/ath/ath6kl/debug.c
index 7506cea..433a047 100644
--- a/drivers/net/wireless/ath/ath6kl/debug.c
+++ b/drivers/net/wireless/ath/ath6kl/debug.c
@@ -1027,14 +1027,17 @@ static ssize_t ath6kl_lrssi_roam_write(struct file *file,
{
struct ath6kl *ar = file->private_data;
unsigned long lrssi_roam_threshold;
+ int ret;
if (kstrtoul_from_user(user_buf, count, 0, &lrssi_roam_threshold))
return -EINVAL;
ar->lrssi_roam_threshold = lrssi_roam_threshold;
- ath6kl_wmi_set_roam_lrssi_cmd(ar->wmi, ar->lrssi_roam_threshold);
+ ret = ath6kl_wmi_set_roam_lrssi_cmd(ar->wmi, ar->lrssi_roam_threshold);
+ if (ret)
+ return ret;
return count;
}
diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_init.c b/drivers/net/wireless/ath/ath9k/htc_drv_init.c
index db0c6fa..ff61ae3 100644
--- a/drivers/net/wireless/ath/ath9k/htc_drv_init.c
+++ b/drivers/net/wireless/ath/ath9k/htc_drv_init.c
@@ -246,7 +246,7 @@ static unsigned int ath9k_regread(void *hw_priv, u32 reg_offset)
if (unlikely(r)) {
ath_dbg(common, WMI, "REGISTER READ FAILED: (0x%04x, %d)\n",
reg_offset, r);
- return -EIO;
+ return -1;
}
return be32_to_cpu(val);
diff --git a/drivers/net/wireless/ath/ath9k/hw.c b/drivers/net/wireless/ath/ath9k/hw.c
index 6609ce1..c86faebbc 100644
--- a/drivers/net/wireless/ath/ath9k/hw.c
+++ b/drivers/net/wireless/ath/ath9k/hw.c
@@ -287,7 +287,7 @@ static bool ath9k_hw_read_revisions(struct ath_hw *ah)
srev = REG_READ(ah, AR_SREV);
- if (srev == -EIO) {
+ if (srev == -1) {
ath_err(ath9k_hw_common(ah),
"Failed to read SREV register");
return false;
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
index f9ebb98..b6d0bc7 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
@@ -1217,13 +1217,9 @@ static struct sdio_driver brcmf_sdmmc_driver = {
},
};
-void brcmf_sdio_register(void)
+int brcmf_sdio_register(void)
{
- int ret;
-
- ret = sdio_register_driver(&brcmf_sdmmc_driver);
- if (ret)
- brcmf_err("sdio_register_driver failed: %d\n", ret);
+ return sdio_register_driver(&brcmf_sdmmc_driver);
}
void brcmf_sdio_exit(void)
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bus.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bus.h
index 08f9d47..3f5da3b 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bus.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bus.h
@@ -275,11 +275,26 @@ void brcmf_bus_add_txhdrlen(struct device *dev, uint len);
#ifdef CONFIG_BRCMFMAC_SDIO
void brcmf_sdio_exit(void);
-void brcmf_sdio_register(void);
+int brcmf_sdio_register(void);
+#else
+static inline void brcmf_sdio_exit(void) { }
+static inline int brcmf_sdio_register(void) { return 0; }
#endif
+
#ifdef CONFIG_BRCMFMAC_USB
void brcmf_usb_exit(void);
-void brcmf_usb_register(void);
+int brcmf_usb_register(void);
+#else
+static inline void brcmf_usb_exit(void) { }
+static inline int brcmf_usb_register(void) { return 0; }
+#endif
+
+#ifdef CONFIG_BRCMFMAC_PCIE
+void brcmf_pcie_exit(void);
+int brcmf_pcie_register(void);
+#else
+static inline void brcmf_pcie_exit(void) { }
+static inline int brcmf_pcie_register(void) { return 0; }
#endif
#endif /* BRCMFMAC_BUS_H */
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
index 3dd28f5..6103953 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
@@ -1518,40 +1518,34 @@ void brcmf_bus_change_state(struct brcmf_bus *bus, enum brcmf_bus_state state)
}
}
-static void brcmf_driver_register(struct work_struct *work)
-{
-#ifdef CONFIG_BRCMFMAC_SDIO
- brcmf_sdio_register();
-#endif
-#ifdef CONFIG_BRCMFMAC_USB
- brcmf_usb_register();
-#endif
-#ifdef CONFIG_BRCMFMAC_PCIE
- brcmf_pcie_register();
-#endif
-}
-static DECLARE_WORK(brcmf_driver_work, brcmf_driver_register);
-
int __init brcmf_core_init(void)
{
- if (!schedule_work(&brcmf_driver_work))
- return -EBUSY;
+ int err;
+ err = brcmf_sdio_register();
+ if (err)
+ return err;
+
+ err = brcmf_usb_register();
+ if (err)
+ goto error_usb_register;
+
+ err = brcmf_pcie_register();
+ if (err)
+ goto error_pcie_register;
return 0;
+
+error_pcie_register:
+ brcmf_usb_exit();
+error_usb_register:
+ brcmf_sdio_exit();
+ return err;
}
void __exit brcmf_core_exit(void)
{
- cancel_work_sync(&brcmf_driver_work);
-
-#ifdef CONFIG_BRCMFMAC_SDIO
brcmf_sdio_exit();
-#endif
-#ifdef CONFIG_BRCMFMAC_USB
brcmf_usb_exit();
-#endif
-#ifdef CONFIG_BRCMFMAC_PCIE
brcmf_pcie_exit();
-#endif
}
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
index d8db0db..603aff4 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
@@ -2138,15 +2138,10 @@ static struct pci_driver brcmf_pciedrvr = {
};
-void brcmf_pcie_register(void)
+int brcmf_pcie_register(void)
{
- int err;
-
brcmf_dbg(PCIE, "Enter\n");
- err = pci_register_driver(&brcmf_pciedrvr);
- if (err)
- brcmf_err(NULL, "PCIE driver registration failed, err=%d\n",
- err);
+ return pci_register_driver(&brcmf_pciedrvr);
}
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.h
index d026401..8e6c227 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.h
@@ -11,9 +11,4 @@ struct brcmf_pciedev {
struct brcmf_pciedev_info *devinfo;
};
-
-void brcmf_pcie_exit(void);
-void brcmf_pcie_register(void);
-
-
#endif /* BRCMFMAC_PCIE_H */
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c
index 586f4df..9fb68c2 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c
@@ -1584,12 +1584,8 @@ void brcmf_usb_exit(void)
usb_deregister(&brcmf_usbdrvr);
}
-void brcmf_usb_register(void)
+int brcmf_usb_register(void)
{
- int ret;
-
brcmf_dbg(USB, "Enter\n");
- ret = usb_register(&brcmf_usbdrvr);
- if (ret)
- brcmf_err("usb_register failed %d\n", ret);
+ return usb_register(&brcmf_usbdrvr);
}
diff --git a/drivers/net/wireless/cisco/airo.c b/drivers/net/wireless/cisco/airo.c
index 87b9398..0569f37 100644
--- a/drivers/net/wireless/cisco/airo.c
+++ b/drivers/net/wireless/cisco/airo.c
@@ -3825,6 +3825,68 @@ static inline void set_auth_type(struct airo_info *local, int auth_type)
local->last_auth = auth_type;
}
+static int noinline_for_stack airo_readconfig(struct airo_info *ai, u8 *mac, int lock)
+{
+ int i, status;
+ /* large variables, so don't inline this function,
+ * maybe change to kmalloc
+ */
+ tdsRssiRid rssi_rid;
+ CapabilityRid cap_rid;
+
+ kfree(ai->SSID);
+ ai->SSID = NULL;
+ // general configuration (read/modify/write)
+ status = readConfigRid(ai, lock);
+ if (status != SUCCESS) return ERROR;
+
+ status = readCapabilityRid(ai, &cap_rid, lock);
+ if (status != SUCCESS) return ERROR;
+
+ status = PC4500_readrid(ai, RID_RSSI, &rssi_rid, sizeof(rssi_rid), lock);
+ if (status == SUCCESS) {
+ if (ai->rssi || (ai->rssi = kmalloc(512, GFP_KERNEL)) != NULL)
+ memcpy(ai->rssi, (u8*)&rssi_rid + 2, 512); /* Skip RID length member */
+ }
+ else {
+ kfree(ai->rssi);
+ ai->rssi = NULL;
+ if (cap_rid.softCap & cpu_to_le16(8))
+ ai->config.rmode |= RXMODE_NORMALIZED_RSSI;
+ else
+ airo_print_warn(ai->dev->name, "unknown received signal "
+ "level scale");
+ }
+ ai->config.opmode = adhoc ? MODE_STA_IBSS : MODE_STA_ESS;
+ set_auth_type(ai, AUTH_OPEN);
+ ai->config.modulation = MOD_CCK;
+
+ if (le16_to_cpu(cap_rid.len) >= sizeof(cap_rid) &&
+ (cap_rid.extSoftCap & cpu_to_le16(1)) &&
+ micsetup(ai) == SUCCESS) {
+ ai->config.opmode |= MODE_MIC;
+ set_bit(FLAG_MIC_CAPABLE, &ai->flags);
+ }
+
+ /* Save off the MAC */
+ for (i = 0; i < ETH_ALEN; i++) {
+ mac[i] = ai->config.macAddr[i];
+ }
+
+ /* Check to see if there are any insmod configured
+ rates to add */
+ if (rates[0]) {
+ memset(ai->config.rates, 0, sizeof(ai->config.rates));
+ for (i = 0; i < 8 && rates[i]; i++) {
+ ai->config.rates[i] = rates[i];
+ }
+ }
+ set_bit (FLAG_COMMIT, &ai->flags);
+
+ return SUCCESS;
+}
+
+
static u16 setup_card(struct airo_info *ai, u8 *mac, int lock)
{
Cmd cmd;
@@ -3871,58 +3933,9 @@ static u16 setup_card(struct airo_info *ai, u8 *mac, int lock)
if (lock)
up(&ai->sem);
if (ai->config.len == 0) {
- int i;
- tdsRssiRid rssi_rid;
- CapabilityRid cap_rid;
-
- kfree(ai->SSID);
- ai->SSID = NULL;
- // general configuration (read/modify/write)
- status = readConfigRid(ai, lock);
- if (status != SUCCESS) return ERROR;
-
- status = readCapabilityRid(ai, &cap_rid, lock);
- if (status != SUCCESS) return ERROR;
-
- status = PC4500_readrid(ai, RID_RSSI,&rssi_rid, sizeof(rssi_rid), lock);
- if (status == SUCCESS) {
- if (ai->rssi || (ai->rssi = kmalloc(512, GFP_KERNEL)) != NULL)
- memcpy(ai->rssi, (u8*)&rssi_rid + 2, 512); /* Skip RID length member */
- }
- else {
- kfree(ai->rssi);
- ai->rssi = NULL;
- if (cap_rid.softCap & cpu_to_le16(8))
- ai->config.rmode |= RXMODE_NORMALIZED_RSSI;
- else
- airo_print_warn(ai->dev->name, "unknown received signal "
- "level scale");
- }
- ai->config.opmode = adhoc ? MODE_STA_IBSS : MODE_STA_ESS;
- set_auth_type(ai, AUTH_OPEN);
- ai->config.modulation = MOD_CCK;
-
- if (le16_to_cpu(cap_rid.len) >= sizeof(cap_rid) &&
- (cap_rid.extSoftCap & cpu_to_le16(1)) &&
- micsetup(ai) == SUCCESS) {
- ai->config.opmode |= MODE_MIC;
- set_bit(FLAG_MIC_CAPABLE, &ai->flags);
- }
-
- /* Save off the MAC */
- for (i = 0; i < ETH_ALEN; i++) {
- mac[i] = ai->config.macAddr[i];
- }
-
- /* Check to see if there are any insmod configured
- rates to add */
- if (rates[0]) {
- memset(ai->config.rates, 0, sizeof(ai->config.rates));
- for (i = 0; i < 8 && rates[i]; i++) {
- ai->config.rates[i] = rates[i];
- }
- }
- set_bit (FLAG_COMMIT, &ai->flags);
+ status = airo_readconfig(ai, mac, lock);
+ if (status != SUCCESS)
+ return ERROR;
}
/* Setup the SSIDs if present */
diff --git a/drivers/net/wireless/intel/ipw2x00/libipw_wx.c b/drivers/net/wireless/intel/ipw2x00/libipw_wx.c
index a0cf78c..903de340 100644
--- a/drivers/net/wireless/intel/ipw2x00/libipw_wx.c
+++ b/drivers/net/wireless/intel/ipw2x00/libipw_wx.c
@@ -633,8 +633,10 @@ int libipw_wx_set_encodeext(struct libipw_device *ieee,
}
if (ext->alg != IW_ENCODE_ALG_NONE) {
- memcpy(sec.keys[idx], ext->key, ext->key_len);
- sec.key_sizes[idx] = ext->key_len;
+ int key_len = clamp_val(ext->key_len, 0, SCM_KEY_LEN);
+
+ memcpy(sec.keys[idx], ext->key, key_len);
+ sec.key_sizes[idx] = key_len;
sec.flags |= (1 << idx);
if (ext->alg == IW_ENCODE_ALG_WEP) {
sec.encode_alg[idx] = SEC_ALG_WEP;
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
index 6d19de3..cbde21e 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
@@ -285,7 +285,7 @@ enum iwl_reg_capa_flags_v2 {
REG_CAPA_V2_MCS_9_ALLOWED = BIT(6),
REG_CAPA_V2_WEATHER_DISABLED = BIT(7),
REG_CAPA_V2_40MHZ_ALLOWED = BIT(8),
- REG_CAPA_V2_11AX_DISABLED = BIT(13),
+ REG_CAPA_V2_11AX_DISABLED = BIT(10),
};
/*
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
index 81ef4fc..ec1d602 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
@@ -5,7 +5,7 @@
*
* GPL LICENSE SUMMARY
*
- * Copyright(c) 2018 - 2020 Intel Corporation
+ * Copyright(c) 2018 - 2021 Intel Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of version 2 of the GNU General Public License as
@@ -122,15 +122,6 @@ int iwl_pcie_ctxt_info_gen3_init(struct iwl_trans *trans,
const struct fw_img *fw)
{
struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
- u32 ltr_val = CSR_LTR_LONG_VAL_AD_NO_SNOOP_REQ |
- u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC,
- CSR_LTR_LONG_VAL_AD_NO_SNOOP_SCALE) |
- u32_encode_bits(250,
- CSR_LTR_LONG_VAL_AD_NO_SNOOP_VAL) |
- CSR_LTR_LONG_VAL_AD_SNOOP_REQ |
- u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC,
- CSR_LTR_LONG_VAL_AD_SNOOP_SCALE) |
- u32_encode_bits(250, CSR_LTR_LONG_VAL_AD_SNOOP_VAL);
struct iwl_context_info_gen3 *ctxt_info_gen3;
struct iwl_prph_scratch *prph_scratch;
struct iwl_prph_scratch_ctrl_cfg *prph_sc_ctrl;
@@ -264,26 +255,6 @@ int iwl_pcie_ctxt_info_gen3_init(struct iwl_trans *trans,
iwl_set_bit(trans, CSR_CTXT_INFO_BOOT_CTRL,
CSR_AUTO_FUNC_BOOT_ENA);
- /*
- * To workaround hardware latency issues during the boot process,
- * initialize the LTR to ~250 usec (see ltr_val above).
- * The firmware initializes this again later (to a smaller value).
- */
- if ((trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_AX210 ||
- trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_22000) &&
- !trans->trans_cfg->integrated) {
- iwl_write32(trans, CSR_LTR_LONG_VAL_AD, ltr_val);
- } else if (trans->trans_cfg->integrated &&
- trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_22000) {
- iwl_write_prph(trans, HPM_MAC_LTR_CSR, HPM_MAC_LRT_ENABLE_ALL);
- iwl_write_prph(trans, HPM_UMAC_LTR, ltr_val);
- }
-
- if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210)
- iwl_write_umac_prph(trans, UREG_CPU_INIT_RUN, 1);
- else
- iwl_set_bit(trans, CSR_GP_CNTRL, CSR_AUTO_FUNC_INIT);
-
return 0;
err_free_ctxt_info:
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c
index 13fe9c0..f65d363 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c
@@ -6,7 +6,7 @@
* GPL LICENSE SUMMARY
*
* Copyright(c) 2017 Intel Deutschland GmbH
- * Copyright(c) 2018 - 2020 Intel Corporation
+ * Copyright(c) 2018 - 2021 Intel Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of version 2 of the GNU General Public License as
@@ -288,7 +288,6 @@ int iwl_pcie_ctxt_info_init(struct iwl_trans *trans,
/* kick FW self load */
iwl_write64(trans, CSR_CTXT_INFO_BA, trans_pcie->ctxt_info_dma_addr);
- iwl_write_prph(trans, UREG_CPU_INIT_RUN, 1);
/* Context info will be released upon alive or failure to get one */
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
index fa32f90..eeb7056 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
@@ -73,10 +73,20 @@
#include "iwl-prph.h"
#include "internal.h"
+#define TRANS_CFG_MARKER BIT(0)
+#define _IS_A(cfg, _struct) __builtin_types_compatible_p(typeof(cfg), \
+ struct _struct)
+extern int _invalid_type;
+#define _TRANS_CFG_MARKER(cfg) \
+ (__builtin_choose_expr(_IS_A(cfg, iwl_cfg_trans_params), \
+ TRANS_CFG_MARKER, \
+ __builtin_choose_expr(_IS_A(cfg, iwl_cfg), 0, _invalid_type)))
+#define _ASSIGN_CFG(cfg) (_TRANS_CFG_MARKER(cfg) + (kernel_ulong_t)&(cfg))
+
#define IWL_PCI_DEVICE(dev, subdev, cfg) \
.vendor = PCI_VENDOR_ID_INTEL, .device = (dev), \
.subvendor = PCI_ANY_ID, .subdevice = (subdev), \
- .driver_data = (kernel_ulong_t)&(cfg)
+ .driver_data = _ASSIGN_CFG(cfg)
/* Hardware specific file defines the PCI IDs table for that hardware module */
static const struct pci_device_id iwl_hw_card_ids[] = {
@@ -684,6 +694,7 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
IWL_DEV_INFO(0x4DF0, 0x1652, killer1650i_2ax_cfg_qu_b0_hr_b0, NULL),
IWL_DEV_INFO(0x4DF0, 0x2074, iwl_ax201_cfg_qu_hr, NULL),
IWL_DEV_INFO(0x4DF0, 0x4070, iwl_ax201_cfg_qu_hr, NULL),
+ IWL_DEV_INFO(0x4DF0, 0x6074, iwl_ax201_cfg_qu_hr, NULL),
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_PU, IWL_CFG_ANY,
@@ -1017,20 +1028,23 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
- const struct iwl_cfg_trans_params *trans =
- (struct iwl_cfg_trans_params *)(ent->driver_data);
+ const struct iwl_cfg_trans_params *trans;
const struct iwl_cfg *cfg_7265d __maybe_unused = NULL;
struct iwl_trans *iwl_trans;
struct iwl_trans_pcie *trans_pcie;
unsigned long flags;
int i, ret;
+ const struct iwl_cfg *cfg;
+
+ trans = (void *)(ent->driver_data & ~TRANS_CFG_MARKER);
+
/*
* This is needed for backwards compatibility with the old
* tables, so we don't need to change all the config structs
* at the same time. The cfg is used to compare with the old
* full cfg structs.
*/
- const struct iwl_cfg *cfg = (struct iwl_cfg *)(ent->driver_data);
+ cfg = (void *)(ent->driver_data & ~TRANS_CFG_MARKER);
/* make sure trans is the first element in iwl_cfg */
BUILD_BUG_ON(offsetof(struct iwl_cfg, trans));
@@ -1132,11 +1146,19 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
#endif
/*
- * If we didn't set the cfg yet, assume the trans is actually
- * a full cfg from the old tables.
+ * If we didn't set the cfg yet, the PCI ID table entry should have
+ * been a full config - if yes, use it, otherwise fail.
*/
- if (!iwl_trans->cfg)
+ if (!iwl_trans->cfg) {
+ if (ent->driver_data & TRANS_CFG_MARKER) {
+ pr_err("No config found for PCI dev %04x/%04x, rev=0x%x, rfid=0x%x\n",
+ pdev->device, pdev->subsystem_device,
+ iwl_trans->hw_rev, iwl_trans->hw_rf_id);
+ ret = -EINVAL;
+ goto out_free_trans;
+ }
iwl_trans->cfg = cfg;
+ }
/* if we don't have a name yet, copy name from the old cfg */
if (!iwl_trans->name)
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
index 91ec937..4c3ca2a 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
@@ -281,6 +281,34 @@ void iwl_trans_pcie_gen2_fw_alive(struct iwl_trans *trans, u32 scd_addr)
mutex_unlock(&trans_pcie->mutex);
}
+static void iwl_pcie_set_ltr(struct iwl_trans *trans)
+{
+ u32 ltr_val = CSR_LTR_LONG_VAL_AD_NO_SNOOP_REQ |
+ u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC,
+ CSR_LTR_LONG_VAL_AD_NO_SNOOP_SCALE) |
+ u32_encode_bits(250,
+ CSR_LTR_LONG_VAL_AD_NO_SNOOP_VAL) |
+ CSR_LTR_LONG_VAL_AD_SNOOP_REQ |
+ u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC,
+ CSR_LTR_LONG_VAL_AD_SNOOP_SCALE) |
+ u32_encode_bits(250, CSR_LTR_LONG_VAL_AD_SNOOP_VAL);
+
+ /*
+ * To workaround hardware latency issues during the boot process,
+ * initialize the LTR to ~250 usec (see ltr_val above).
+ * The firmware initializes this again later (to a smaller value).
+ */
+ if ((trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_AX210 ||
+ trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_22000) &&
+ !trans->trans_cfg->integrated) {
+ iwl_write32(trans, CSR_LTR_LONG_VAL_AD, ltr_val);
+ } else if (trans->trans_cfg->integrated &&
+ trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_22000) {
+ iwl_write_prph(trans, HPM_MAC_LTR_CSR, HPM_MAC_LRT_ENABLE_ALL);
+ iwl_write_prph(trans, HPM_UMAC_LTR, ltr_val);
+ }
+}
+
int iwl_trans_pcie_gen2_start_fw(struct iwl_trans *trans,
const struct fw_img *fw, bool run_in_rfkill)
{
@@ -347,6 +375,13 @@ int iwl_trans_pcie_gen2_start_fw(struct iwl_trans *trans,
if (ret)
goto out;
+ iwl_pcie_set_ltr(trans);
+
+ if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210)
+ iwl_write_umac_prph(trans, UREG_CPU_INIT_RUN, 1);
+ else
+ iwl_write_prph(trans, UREG_CPU_INIT_RUN, 1);
+
/* re-check RF-Kill state since we may have missed the interrupt */
hw_rfkill = iwl_pcie_check_hw_rf_kill(trans);
if (hw_rfkill && !run_in_rfkill)
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c
index 8c71382..833fd13 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c
@@ -87,6 +87,7 @@ static int iwl_pcie_gen2_enqueue_hcmd(struct iwl_trans *trans,
const u8 *cmddata[IWL_MAX_CMD_TBS_PER_TFD];
u16 cmdlen[IWL_MAX_CMD_TBS_PER_TFD];
struct iwl_tfh_tfd *tfd;
+ unsigned long flags;
copy_size = sizeof(struct iwl_cmd_header_wide);
cmd_size = sizeof(struct iwl_cmd_header_wide);
@@ -155,14 +156,14 @@ static int iwl_pcie_gen2_enqueue_hcmd(struct iwl_trans *trans,
goto free_dup_buf;
}
- spin_lock_bh(&txq->lock);
+ spin_lock_irqsave(&txq->lock, flags);
idx = iwl_txq_get_cmd_index(txq, txq->write_ptr);
tfd = iwl_txq_get_tfd(trans, txq, txq->write_ptr);
memset(tfd, 0, sizeof(*tfd));
if (iwl_txq_space(trans, txq) < ((cmd->flags & CMD_ASYNC) ? 2 : 1)) {
- spin_unlock_bh(&txq->lock);
+ spin_unlock_irqrestore(&txq->lock, flags);
IWL_ERR(trans, "No space in command queue\n");
iwl_op_mode_cmd_queue_full(trans->op_mode);
@@ -297,7 +298,7 @@ static int iwl_pcie_gen2_enqueue_hcmd(struct iwl_trans *trans,
spin_unlock(&trans_pcie->reg_lock);
out:
- spin_unlock_bh(&txq->lock);
+ spin_unlock_irqrestore(&txq->lock, flags);
free_dup_buf:
if (idx < 0)
kfree(dup_buf);
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
index 50133c0..1333713 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
@@ -1181,6 +1181,7 @@ static int iwl_pcie_enqueue_hcmd(struct iwl_trans *trans,
u32 cmd_pos;
const u8 *cmddata[IWL_MAX_CMD_TBS_PER_TFD];
u16 cmdlen[IWL_MAX_CMD_TBS_PER_TFD];
+ unsigned long flags;
if (WARN(!trans->wide_cmd_header &&
group_id > IWL_ALWAYS_LONG_GROUP,
@@ -1264,10 +1265,10 @@ static int iwl_pcie_enqueue_hcmd(struct iwl_trans *trans,
goto free_dup_buf;
}
- spin_lock_bh(&txq->lock);
+ spin_lock_irqsave(&txq->lock, flags);
if (iwl_txq_space(trans, txq) < ((cmd->flags & CMD_ASYNC) ? 2 : 1)) {
- spin_unlock_bh(&txq->lock);
+ spin_unlock_irqrestore(&txq->lock, flags);
IWL_ERR(trans, "No space in command queue\n");
iwl_op_mode_cmd_queue_full(trans->op_mode);
@@ -1427,7 +1428,7 @@ static int iwl_pcie_enqueue_hcmd(struct iwl_trans *trans,
unlock_reg:
spin_unlock(&trans_pcie->reg_lock);
out:
- spin_unlock_bh(&txq->lock);
+ spin_unlock_irqrestore(&txq->lock, flags);
free_dup_buf:
if (idx < 0)
kfree(dup_buf);
diff --git a/drivers/net/wireless/marvell/libertas/mesh.c b/drivers/net/wireless/marvell/libertas/mesh.c
index f5b7825..c688148 100644
--- a/drivers/net/wireless/marvell/libertas/mesh.c
+++ b/drivers/net/wireless/marvell/libertas/mesh.c
@@ -801,24 +801,6 @@ static const struct attribute_group mesh_ie_group = {
.attrs = mesh_ie_attrs,
};
-static void lbs_persist_config_init(struct net_device *dev)
-{
- int ret;
- ret = sysfs_create_group(&(dev->dev.kobj), &boot_opts_group);
- if (ret)
- pr_err("failed to create boot_opts_group.\n");
-
- ret = sysfs_create_group(&(dev->dev.kobj), &mesh_ie_group);
- if (ret)
- pr_err("failed to create mesh_ie_group.\n");
-}
-
-static void lbs_persist_config_remove(struct net_device *dev)
-{
- sysfs_remove_group(&(dev->dev.kobj), &boot_opts_group);
- sysfs_remove_group(&(dev->dev.kobj), &mesh_ie_group);
-}
-
/***************************************************************************
* Initializing and starting, stopping mesh
@@ -1014,6 +996,10 @@ static int lbs_add_mesh(struct lbs_private *priv)
SET_NETDEV_DEV(priv->mesh_dev, priv->dev->dev.parent);
mesh_dev->flags |= IFF_BROADCAST | IFF_MULTICAST;
+ mesh_dev->sysfs_groups[0] = &lbs_mesh_attr_group;
+ mesh_dev->sysfs_groups[1] = &boot_opts_group;
+ mesh_dev->sysfs_groups[2] = &mesh_ie_group;
+
/* Register virtual mesh interface */
ret = register_netdev(mesh_dev);
if (ret) {
@@ -1021,19 +1007,10 @@ static int lbs_add_mesh(struct lbs_private *priv)
goto err_free_netdev;
}
- ret = sysfs_create_group(&(mesh_dev->dev.kobj), &lbs_mesh_attr_group);
- if (ret)
- goto err_unregister;
-
- lbs_persist_config_init(mesh_dev);
-
/* Everything successful */
ret = 0;
goto done;
-err_unregister:
- unregister_netdev(mesh_dev);
-
err_free_netdev:
free_netdev(mesh_dev);
@@ -1054,8 +1031,6 @@ void lbs_remove_mesh(struct lbs_private *priv)
netif_stop_queue(mesh_dev);
netif_carrier_off(mesh_dev);
- sysfs_remove_group(&(mesh_dev->dev.kobj), &lbs_mesh_attr_group);
- lbs_persist_config_remove(mesh_dev);
unregister_netdev(mesh_dev);
priv->mesh_dev = NULL;
kfree(mesh_dev->ieee80211_ptr);
diff --git a/drivers/net/wireless/marvell/mwl8k.c b/drivers/net/wireless/marvell/mwl8k.c
index 23efd70..27b7d4b 100644
--- a/drivers/net/wireless/marvell/mwl8k.c
+++ b/drivers/net/wireless/marvell/mwl8k.c
@@ -1469,6 +1469,7 @@ static int mwl8k_txq_init(struct ieee80211_hw *hw, int index)
txq->skb = kcalloc(MWL8K_TX_DESCS, sizeof(*txq->skb), GFP_KERNEL);
if (txq->skb == NULL) {
pci_free_consistent(priv->pdev, size, txq->txd, txq->txd_dma);
+ txq->txd = NULL;
return -ENOMEM;
}
diff --git a/drivers/net/wireless/mediatek/mt76/dma.c b/drivers/net/wireless/mediatek/mt76/dma.c
index 665a03e..0fdfead 100644
--- a/drivers/net/wireless/mediatek/mt76/dma.c
+++ b/drivers/net/wireless/mediatek/mt76/dma.c
@@ -318,7 +318,7 @@ mt76_dma_tx_queue_skb_raw(struct mt76_dev *dev, enum mt76_txq_id qid,
struct sk_buff *skb, u32 tx_info)
{
struct mt76_queue *q = dev->q_tx[qid];
- struct mt76_queue_buf buf;
+ struct mt76_queue_buf buf = {};
dma_addr_t addr;
if (q->queued + 1 >= q->ndesc - 1)
diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/eeprom.c b/drivers/net/wireless/mediatek/mt76/mt7615/eeprom.c
index f4756bb..e9cdcdc 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7615/eeprom.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7615/eeprom.c
@@ -86,6 +86,7 @@ static int mt7615_check_eeprom(struct mt76_dev *dev)
switch (val) {
case 0x7615:
case 0x7622:
+ case 0x7663:
return 0;
default:
return -EINVAL;
diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
index f1f954f..5795e44 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
@@ -688,7 +688,7 @@ mt7615_txp_skb_unmap_fw(struct mt76_dev *dev, struct mt7615_fw_txp *txp)
{
int i;
- for (i = 1; i < txp->nbuf; i++)
+ for (i = 0; i < txp->nbuf; i++)
dma_unmap_single(dev->dev, le32_to_cpu(txp->buf[i]),
le16_to_cpu(txp->len[i]), DMA_TO_DEVICE);
}
@@ -1817,10 +1817,8 @@ mt7615_mac_update_mib_stats(struct mt7615_phy *phy)
int i, aggr;
u32 val, val2;
- memset(mib, 0, sizeof(*mib));
-
- mib->fcs_err_cnt = mt76_get_field(dev, MT_MIB_SDR3(ext_phy),
- MT_MIB_SDR3_FCS_ERR_MASK);
+ mib->fcs_err_cnt += mt76_get_field(dev, MT_MIB_SDR3(ext_phy),
+ MT_MIB_SDR3_FCS_ERR_MASK);
val = mt76_get_field(dev, MT_MIB_SDR14(ext_phy),
MT_MIB_AMPDU_MPDU_COUNT);
@@ -1833,24 +1831,16 @@ mt7615_mac_update_mib_stats(struct mt7615_phy *phy)
aggr = ext_phy ? ARRAY_SIZE(dev->mt76.aggr_stats) / 2 : 0;
for (i = 0; i < 4; i++) {
val = mt76_rr(dev, MT_MIB_MB_SDR1(ext_phy, i));
-
- val2 = FIELD_GET(MT_MIB_ACK_FAIL_COUNT_MASK, val);
- if (val2 > mib->ack_fail_cnt)
- mib->ack_fail_cnt = val2;
-
- val2 = FIELD_GET(MT_MIB_BA_MISS_COUNT_MASK, val);
- if (val2 > mib->ba_miss_cnt)
- mib->ba_miss_cnt = val2;
+ mib->ba_miss_cnt += FIELD_GET(MT_MIB_BA_MISS_COUNT_MASK, val);
+ mib->ack_fail_cnt += FIELD_GET(MT_MIB_ACK_FAIL_COUNT_MASK,
+ val);
val = mt76_rr(dev, MT_MIB_MB_SDR0(ext_phy, i));
- val2 = FIELD_GET(MT_MIB_RTS_RETRIES_COUNT_MASK, val);
- if (val2 > mib->rts_retries_cnt) {
- mib->rts_cnt = FIELD_GET(MT_MIB_RTS_COUNT_MASK, val);
- mib->rts_retries_cnt = val2;
- }
+ mib->rts_cnt += FIELD_GET(MT_MIB_RTS_COUNT_MASK, val);
+ mib->rts_retries_cnt += FIELD_GET(MT_MIB_RTS_RETRIES_COUNT_MASK,
+ val);
val = mt76_rr(dev, MT_TX_AGG_CNT(ext_phy, i));
-
dev->mt76.aggr_stats[aggr++] += val & 0xffff;
dev->mt76.aggr_stats[aggr++] += val >> 16;
}
@@ -2106,8 +2096,12 @@ void mt7615_tx_token_put(struct mt7615_dev *dev)
spin_lock_bh(&dev->token_lock);
idr_for_each_entry(&dev->token, txwi, id) {
mt7615_txp_skb_unmap(&dev->mt76, txwi);
- if (txwi->skb)
- dev_kfree_skb_any(txwi->skb);
+ if (txwi->skb) {
+ struct ieee80211_hw *hw;
+
+ hw = mt76_tx_status_get_hw(&dev->mt76, txwi->skb);
+ ieee80211_free_txskb(hw, txwi->skb);
+ }
mt76_put_txwi(&dev->mt76, txwi);
}
spin_unlock_bh(&dev->token_lock);
diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/main.c b/drivers/net/wireless/mediatek/mt76/mt7615/main.c
index 3186b7b..88cdc2b 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7615/main.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7615/main.c
@@ -851,11 +851,17 @@ mt7615_get_stats(struct ieee80211_hw *hw,
struct mt7615_phy *phy = mt7615_hw_phy(hw);
struct mib_stats *mib = &phy->mib;
+ mt7615_mutex_acquire(phy->dev);
+
stats->dot11RTSSuccessCount = mib->rts_cnt;
stats->dot11RTSFailureCount = mib->rts_retries_cnt;
stats->dot11FCSErrorCount = mib->fcs_err_cnt;
stats->dot11ACKFailureCount = mib->ack_fail_cnt;
+ memset(mib, 0, sizeof(*mib));
+
+ mt7615_mutex_release(phy->dev);
+
return 0;
}
diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
index c31036f..62a9716 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
@@ -341,12 +341,20 @@ static int mt7615_mcu_drv_pmctrl(struct mt7615_dev *dev)
u32 addr;
int err;
- addr = is_mt7663(mdev) ? MT_PCIE_DOORBELL_PUSH : MT_CFG_LPCR_HOST;
+ if (is_mt7663(mdev)) {
+ /* Clear firmware own via N9 eint */
+ mt76_wr(dev, MT_PCIE_DOORBELL_PUSH, MT_CFG_LPCR_HOST_DRV_OWN);
+ mt76_poll(dev, MT_CONN_ON_MISC, MT_CFG_LPCR_HOST_FW_OWN, 0, 3000);
+
+ addr = MT_CONN_HIF_ON_LPCTL;
+ } else {
+ addr = MT_CFG_LPCR_HOST;
+ }
+
mt76_wr(dev, addr, MT_CFG_LPCR_HOST_DRV_OWN);
mt7622_trigger_hif_int(dev, true);
- addr = is_mt7663(mdev) ? MT_CONN_HIF_ON_LPCTL : MT_CFG_LPCR_HOST;
err = !mt76_poll_msec(dev, addr, MT_CFG_LPCR_HOST_FW_OWN, 0, 3000);
mt7622_trigger_hif_int(dev, false);
diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h b/drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h
index 5b06294..4cee766 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h
+++ b/drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h
@@ -161,11 +161,11 @@ struct mt7615_vif {
};
struct mib_stats {
- u16 ack_fail_cnt;
- u16 fcs_err_cnt;
- u16 rts_cnt;
- u16 rts_retries_cnt;
- u16 ba_miss_cnt;
+ u32 ack_fail_cnt;
+ u32 fcs_err_cnt;
+ u32 rts_cnt;
+ u32 rts_retries_cnt;
+ u32 ba_miss_cnt;
unsigned long aggr_per;
};
diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/pci_init.c b/drivers/net/wireless/mediatek/mt76/mt7615/pci_init.c
index 7b81aef..726e4781 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7615/pci_init.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7615/pci_init.c
@@ -161,10 +161,9 @@ void mt7615_unregister_device(struct mt7615_dev *dev)
mt76_unregister_device(&dev->mt76);
if (mcu_running)
mt7615_mcu_exit(dev);
- mt7615_dma_cleanup(dev);
mt7615_tx_token_put(dev);
-
+ mt7615_dma_cleanup(dev);
tasklet_disable(&dev->irq_tasklet);
mt76_free_device(&dev->mt76);
diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/sdio_txrx.c b/drivers/net/wireless/mediatek/mt76/mt7615/sdio_txrx.c
index 595519c..d7d61a5 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7615/sdio_txrx.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7615/sdio_txrx.c
@@ -195,11 +195,14 @@ static int mt7663s_tx_run_queue(struct mt76_dev *dev, enum mt76_txq_id qid)
int err, nframes = 0, len = 0, pse_sz = 0, ple_sz = 0;
struct mt76_queue *q = dev->q_tx[qid];
struct mt76_sdio *sdio = &dev->sdio;
+ u8 pad;
while (q->first != q->head) {
struct mt76_queue_entry *e = &q->entry[q->first];
struct sk_buff *iter;
+ smp_rmb();
+
if (!test_bit(MT76_STATE_MCU_RUNNING, &dev->phy.state)) {
__skb_put_zero(e->skb, 4);
err = __mt7663s_xmit_queue(dev, e->skb->data,
@@ -210,7 +213,8 @@ static int mt7663s_tx_run_queue(struct mt76_dev *dev, enum mt76_txq_id qid)
goto next;
}
- if (len + e->skb->len + 4 > MT76S_XMIT_BUF_SZ)
+ pad = roundup(e->skb->len, 4) - e->skb->len;
+ if (len + e->skb->len + pad + 4 > MT76S_XMIT_BUF_SZ)
break;
if (mt7663s_tx_pick_quota(sdio, qid, e->buf_sz, &pse_sz,
@@ -228,6 +232,11 @@ static int mt7663s_tx_run_queue(struct mt76_dev *dev, enum mt76_txq_id qid)
len += iter->len;
nframes++;
}
+
+ if (unlikely(pad)) {
+ memset(sdio->xmit_buf[qid] + len, 0, pad);
+ len += pad;
+ }
next:
q->first = (q->first + 1) % q->ndesc;
e->done = true;
diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_util.c b/drivers/net/wireless/mediatek/mt76/mt76x02_util.c
index 11b769a..0f191bd 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76x02_util.c
+++ b/drivers/net/wireless/mediatek/mt76/mt76x02_util.c
@@ -446,6 +446,10 @@ int mt76x02_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
!(key->flags & IEEE80211_KEY_FLAG_PAIRWISE))
return -EOPNOTSUPP;
+ /* MT76x0 GTK offloading does not work with more than one VIF */
+ if (is_mt76x0(dev) && !(key->flags & IEEE80211_KEY_FLAG_PAIRWISE))
+ return -EOPNOTSUPP;
+
msta = sta ? (struct mt76x02_sta *)sta->drv_priv : NULL;
wcid = msta ? &msta->wcid : &mvif->group_wcid;
diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c b/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
index 8f2ad32..e4d7eb3 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
@@ -124,7 +124,7 @@ mt7915_ampdu_stat_read_phy(struct mt7915_phy *phy,
range[i] = mt76_rr(dev, MT_MIB_ARNG(ext_phy, i));
for (i = 0; i < ARRAY_SIZE(bound); i++)
- bound[i] = MT_MIB_ARNCR_RANGE(range[i / 4], i) + 1;
+ bound[i] = MT_MIB_ARNCR_RANGE(range[i / 4], i % 4) + 1;
seq_printf(file, "\nPhy %d\n", ext_phy);
diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c b/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c
index 7deba7e..e4c5f96 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c
@@ -104,7 +104,7 @@ int mt7915_eeprom_get_target_power(struct mt7915_dev *dev,
struct ieee80211_channel *chan,
u8 chain_idx)
{
- int index;
+ int index, target_power;
bool tssi_on;
if (chain_idx > 3)
@@ -113,15 +113,22 @@ int mt7915_eeprom_get_target_power(struct mt7915_dev *dev,
tssi_on = mt7915_tssi_enabled(dev, chan->band);
if (chan->band == NL80211_BAND_2GHZ) {
- index = MT_EE_TX0_POWER_2G + chain_idx * 3 + !tssi_on;
- } else {
- int group = tssi_on ?
- mt7915_get_channel_group(chan->hw_value) : 8;
+ index = MT_EE_TX0_POWER_2G + chain_idx * 3;
+ target_power = mt7915_eeprom_read(dev, index);
- index = MT_EE_TX0_POWER_5G + chain_idx * 12 + group;
+ if (!tssi_on)
+ target_power += mt7915_eeprom_read(dev, index + 1);
+ } else {
+ int group = mt7915_get_channel_group(chan->hw_value);
+
+ index = MT_EE_TX0_POWER_5G + chain_idx * 12;
+ target_power = mt7915_eeprom_read(dev, index + group);
+
+ if (!tssi_on)
+ target_power += mt7915_eeprom_read(dev, index + 8);
}
- return mt7915_eeprom_read(dev, index);
+ return target_power;
}
static const u8 sku_cck_delta_map[] = {
diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
index 6f159d9..1e14d77 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
@@ -856,7 +856,7 @@ void mt7915_txp_skb_unmap(struct mt76_dev *dev,
int i;
txp = mt7915_txwi_to_txp(dev, t);
- for (i = 1; i < txp->nbuf; i++)
+ for (i = 0; i < txp->nbuf; i++)
dma_unmap_single(dev->dev, le32_to_cpu(txp->buf[i]),
le16_to_cpu(txp->len[i]), DMA_TO_DEVICE);
}
@@ -1277,39 +1277,30 @@ mt7915_mac_update_mib_stats(struct mt7915_phy *phy)
bool ext_phy = phy != &dev->phy;
int i, aggr0, aggr1;
- memset(mib, 0, sizeof(*mib));
-
- mib->fcs_err_cnt = mt76_get_field(dev, MT_MIB_SDR3(ext_phy),
- MT_MIB_SDR3_FCS_ERR_MASK);
+ mib->fcs_err_cnt += mt76_get_field(dev, MT_MIB_SDR3(ext_phy),
+ MT_MIB_SDR3_FCS_ERR_MASK);
aggr0 = ext_phy ? ARRAY_SIZE(dev->mt76.aggr_stats) / 2 : 0;
for (i = 0, aggr1 = aggr0 + 4; i < 4; i++) {
- u32 val, val2;
+ u32 val;
val = mt76_rr(dev, MT_MIB_MB_SDR1(ext_phy, i));
-
- val2 = FIELD_GET(MT_MIB_ACK_FAIL_COUNT_MASK, val);
- if (val2 > mib->ack_fail_cnt)
- mib->ack_fail_cnt = val2;
-
- val2 = FIELD_GET(MT_MIB_BA_MISS_COUNT_MASK, val);
- if (val2 > mib->ba_miss_cnt)
- mib->ba_miss_cnt = val2;
+ mib->ba_miss_cnt += FIELD_GET(MT_MIB_BA_MISS_COUNT_MASK, val);
+ mib->ack_fail_cnt +=
+ FIELD_GET(MT_MIB_ACK_FAIL_COUNT_MASK, val);
val = mt76_rr(dev, MT_MIB_MB_SDR0(ext_phy, i));
- val2 = FIELD_GET(MT_MIB_RTS_RETRIES_COUNT_MASK, val);
- if (val2 > mib->rts_retries_cnt) {
- mib->rts_cnt = FIELD_GET(MT_MIB_RTS_COUNT_MASK, val);
- mib->rts_retries_cnt = val2;
- }
+ mib->rts_cnt += FIELD_GET(MT_MIB_RTS_COUNT_MASK, val);
+ mib->rts_retries_cnt +=
+ FIELD_GET(MT_MIB_RTS_RETRIES_COUNT_MASK, val);
val = mt76_rr(dev, MT_TX_AGG_CNT(ext_phy, i));
- val2 = mt76_rr(dev, MT_TX_AGG_CNT2(ext_phy, i));
-
dev->mt76.aggr_stats[aggr0++] += val & 0xffff;
dev->mt76.aggr_stats[aggr0++] += val >> 16;
- dev->mt76.aggr_stats[aggr1++] += val2 & 0xffff;
- dev->mt76.aggr_stats[aggr1++] += val2 >> 16;
+
+ val = mt76_rr(dev, MT_TX_AGG_CNT2(ext_phy, i));
+ dev->mt76.aggr_stats[aggr1++] += val & 0xffff;
+ dev->mt76.aggr_stats[aggr1++] += val >> 16;
}
}
diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/main.c b/drivers/net/wireless/mediatek/mt76/mt7915/main.c
index c481583..e78d3ef 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7915/main.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7915/main.c
@@ -651,13 +651,19 @@ mt7915_get_stats(struct ieee80211_hw *hw,
struct ieee80211_low_level_stats *stats)
{
struct mt7915_phy *phy = mt7915_hw_phy(hw);
+ struct mt7915_dev *dev = mt7915_hw_dev(hw);
struct mib_stats *mib = &phy->mib;
+ mutex_lock(&dev->mt76.mutex);
stats->dot11RTSSuccessCount = mib->rts_cnt;
stats->dot11RTSFailureCount = mib->rts_retries_cnt;
stats->dot11FCSErrorCount = mib->fcs_err_cnt;
stats->dot11ACKFailureCount = mib->ack_fail_cnt;
+ memset(mib, 0, sizeof(*mib));
+
+ mutex_unlock(&dev->mt76.mutex);
+
return 0;
}
diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h b/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
index 4b8908f..c84110e 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
+++ b/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
@@ -99,11 +99,11 @@ struct mt7915_vif {
};
struct mib_stats {
- u16 ack_fail_cnt;
- u16 fcs_err_cnt;
- u16 rts_cnt;
- u16 rts_retries_cnt;
- u16 ba_miss_cnt;
+ u32 ack_fail_cnt;
+ u32 fcs_err_cnt;
+ u32 rts_cnt;
+ u32 rts_retries_cnt;
+ u32 ba_miss_cnt;
};
struct mt7915_phy {
diff --git a/drivers/net/wireless/mediatek/mt76/sdio.c b/drivers/net/wireless/mediatek/mt76/sdio.c
index 9a4d95a..439ea41 100644
--- a/drivers/net/wireless/mediatek/mt76/sdio.c
+++ b/drivers/net/wireless/mediatek/mt76/sdio.c
@@ -215,6 +215,9 @@ mt76s_tx_queue_skb(struct mt76_dev *dev, enum mt76_txq_id qid,
q->entry[q->head].skb = tx_info.skb;
q->entry[q->head].buf_sz = len;
+
+ smp_wmb();
+
q->head = (q->head + 1) % q->ndesc;
q->queued++;
diff --git a/drivers/net/wireless/mediatek/mt7601u/eeprom.c b/drivers/net/wireless/mediatek/mt7601u/eeprom.c
index c868582..aa3b649 100644
--- a/drivers/net/wireless/mediatek/mt7601u/eeprom.c
+++ b/drivers/net/wireless/mediatek/mt7601u/eeprom.c
@@ -99,7 +99,7 @@ mt7601u_has_tssi(struct mt7601u_dev *dev, u8 *eeprom)
{
u16 nic_conf1 = get_unaligned_le16(eeprom + MT_EE_NIC_CONF_1);
- return ~nic_conf1 && (nic_conf1 & MT_EE_NIC_CONF_1_TX_ALC_EN);
+ return (u16)~nic_conf1 && (nic_conf1 & MT_EE_NIC_CONF_1_TX_ALC_EN);
}
static void
diff --git a/drivers/net/wireless/microchip/wilc1000/sdio.c b/drivers/net/wireless/microchip/wilc1000/sdio.c
index 351ff90..e14b9fc 100644
--- a/drivers/net/wireless/microchip/wilc1000/sdio.c
+++ b/drivers/net/wireless/microchip/wilc1000/sdio.c
@@ -947,7 +947,7 @@ static int wilc_sdio_sync_ext(struct wilc *wilc, int nint)
for (i = 0; (i < 3) && (nint > 0); i++, nint--)
reg |= BIT(i);
- ret = wilc_sdio_read_reg(wilc, WILC_INTR2_ENABLE, &reg);
+ ret = wilc_sdio_write_reg(wilc, WILC_INTR2_ENABLE, reg);
if (ret) {
dev_err(&func->dev,
"Failed write reg (%08x)...\n",
diff --git a/drivers/net/wireless/quantenna/qtnfmac/event.c b/drivers/net/wireless/quantenna/qtnfmac/event.c
index c775c17..8dc8057 100644
--- a/drivers/net/wireless/quantenna/qtnfmac/event.c
+++ b/drivers/net/wireless/quantenna/qtnfmac/event.c
@@ -570,8 +570,10 @@ qtnf_event_handle_external_auth(struct qtnf_vif *vif,
return 0;
if (ev->ssid_len) {
- memcpy(auth.ssid.ssid, ev->ssid, ev->ssid_len);
- auth.ssid.ssid_len = ev->ssid_len;
+ int len = clamp_val(ev->ssid_len, 0, IEEE80211_MAX_SSID_LEN);
+
+ memcpy(auth.ssid.ssid, ev->ssid, len);
+ auth.ssid.ssid_len = len;
}
auth.key_mgmt_suite = le32_to_cpu(ev->akm_suite);
diff --git a/drivers/net/wireless/realtek/rtlwifi/base.c b/drivers/net/wireless/realtek/rtlwifi/base.c
index 6e8bd99..1866f6c 100644
--- a/drivers/net/wireless/realtek/rtlwifi/base.c
+++ b/drivers/net/wireless/realtek/rtlwifi/base.c
@@ -440,9 +440,14 @@ static void rtl_watchdog_wq_callback(struct work_struct *work);
static void rtl_fwevt_wq_callback(struct work_struct *work);
static void rtl_c2hcmd_wq_callback(struct work_struct *work);
-static void _rtl_init_deferred_work(struct ieee80211_hw *hw)
+static int _rtl_init_deferred_work(struct ieee80211_hw *hw)
{
struct rtl_priv *rtlpriv = rtl_priv(hw);
+ struct workqueue_struct *wq;
+
+ wq = alloc_workqueue("%s", 0, 0, rtlpriv->cfg->name);
+ if (!wq)
+ return -ENOMEM;
/* <1> timer */
timer_setup(&rtlpriv->works.watchdog_timer,
@@ -451,11 +456,7 @@ static void _rtl_init_deferred_work(struct ieee80211_hw *hw)
rtl_easy_concurrent_retrytimer_callback, 0);
/* <2> work queue */
rtlpriv->works.hw = hw;
- rtlpriv->works.rtl_wq = alloc_workqueue("%s", 0, 0, rtlpriv->cfg->name);
- if (unlikely(!rtlpriv->works.rtl_wq)) {
- pr_err("Failed to allocate work queue\n");
- return;
- }
+ rtlpriv->works.rtl_wq = wq;
INIT_DELAYED_WORK(&rtlpriv->works.watchdog_wq,
rtl_watchdog_wq_callback);
@@ -466,6 +467,7 @@ static void _rtl_init_deferred_work(struct ieee80211_hw *hw)
rtl_swlps_rfon_wq_callback);
INIT_DELAYED_WORK(&rtlpriv->works.fwevt_wq, rtl_fwevt_wq_callback);
INIT_DELAYED_WORK(&rtlpriv->works.c2hcmd_wq, rtl_c2hcmd_wq_callback);
+ return 0;
}
void rtl_deinit_deferred_work(struct ieee80211_hw *hw, bool ips_wq)
@@ -565,9 +567,7 @@ int rtl_init_core(struct ieee80211_hw *hw)
rtlmac->link_state = MAC80211_NOLINK;
/* <6> init deferred work */
- _rtl_init_deferred_work(hw);
-
- return 0;
+ return _rtl_init_deferred_work(hw);
}
EXPORT_SYMBOL_GPL(rtl_init_core);
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/table.c b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/table.c
index 85093b3..ed72a2a 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/table.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/table.c
@@ -249,7 +249,7 @@ u32 RTL8821AE_PHY_REG_ARRAY[] = {
0x824, 0x00030FE0,
0x828, 0x00000000,
0x82C, 0x002081DD,
- 0x830, 0x2AAA8E24,
+ 0x830, 0x2AAAEEC8,
0x834, 0x0037A706,
0x838, 0x06489B44,
0x83C, 0x0000095B,
@@ -324,10 +324,10 @@ u32 RTL8821AE_PHY_REG_ARRAY[] = {
0x9D8, 0x00000000,
0x9DC, 0x00000000,
0x9E0, 0x00005D00,
- 0x9E4, 0x00000002,
+ 0x9E4, 0x00000003,
0x9E8, 0x00000001,
0xA00, 0x00D047C8,
- 0xA04, 0x01FF000C,
+ 0xA04, 0x01FF800C,
0xA08, 0x8C8A8300,
0xA0C, 0x2E68000F,
0xA10, 0x9500BB78,
@@ -1320,7 +1320,11 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
0x083, 0x00021800,
0x084, 0x00028000,
0x085, 0x00048000,
+ 0x80000111, 0x00000000, 0x40000000, 0x00000000,
+ 0x086, 0x0009483A,
+ 0xA0000000, 0x00000000,
0x086, 0x00094838,
+ 0xB0000000, 0x00000000,
0x087, 0x00044980,
0x088, 0x00048000,
0x089, 0x0000D480,
@@ -1409,26 +1413,32 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
0x03C, 0x000CA000,
0x0EF, 0x00000000,
0x0EF, 0x00001100,
- 0xFF0F0104, 0xABCD,
+ 0x80000111, 0x00000000, 0x40000000, 0x00000000,
0x034, 0x0004ADF3,
0x034, 0x00049DF0,
- 0xFF0F0204, 0xCDEF,
+ 0x90000110, 0x00000000, 0x40000000, 0x00000000,
0x034, 0x0004ADF3,
0x034, 0x00049DF0,
- 0xFF0F0404, 0xCDEF,
- 0x034, 0x0004ADF3,
- 0x034, 0x00049DF0,
- 0xFF0F0200, 0xCDEF,
+ 0x90000210, 0x00000000, 0x40000000, 0x00000000,
0x034, 0x0004ADF5,
0x034, 0x00049DF2,
- 0xFF0F02C0, 0xCDEF,
+ 0x9000020c, 0x00000000, 0x40000000, 0x00000000,
0x034, 0x0004A0F3,
0x034, 0x000490B1,
- 0xCDCDCDCD, 0xCDCD,
+ 0x9000040c, 0x00000000, 0x40000000, 0x00000000,
+ 0x034, 0x0004A0F3,
+ 0x034, 0x000490B1,
+ 0x90000200, 0x00000000, 0x40000000, 0x00000000,
+ 0x034, 0x0004ADF5,
+ 0x034, 0x00049DF2,
+ 0x90000410, 0x00000000, 0x40000000, 0x00000000,
+ 0x034, 0x0004ADF3,
+ 0x034, 0x00049DF0,
+ 0xA0000000, 0x00000000,
0x034, 0x0004ADF7,
0x034, 0x00049DF3,
- 0xFF0F0104, 0xDEAD,
- 0xFF0F0104, 0xABCD,
+ 0xB0000000, 0x00000000,
+ 0x80000111, 0x00000000, 0x40000000, 0x00000000,
0x034, 0x00048DED,
0x034, 0x00047DEA,
0x034, 0x00046DE7,
@@ -1438,7 +1448,7 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
0x034, 0x00042886,
0x034, 0x00041486,
0x034, 0x00040447,
- 0xFF0F0204, 0xCDEF,
+ 0x90000110, 0x00000000, 0x40000000, 0x00000000,
0x034, 0x00048DED,
0x034, 0x00047DEA,
0x034, 0x00046DE7,
@@ -1448,17 +1458,7 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
0x034, 0x00042886,
0x034, 0x00041486,
0x034, 0x00040447,
- 0xFF0F0404, 0xCDEF,
- 0x034, 0x00048DED,
- 0x034, 0x00047DEA,
- 0x034, 0x00046DE7,
- 0x034, 0x00045CE9,
- 0x034, 0x00044CE6,
- 0x034, 0x000438C6,
- 0x034, 0x00042886,
- 0x034, 0x00041486,
- 0x034, 0x00040447,
- 0xFF0F02C0, 0xCDEF,
+ 0x9000020c, 0x00000000, 0x40000000, 0x00000000,
0x034, 0x000480AE,
0x034, 0x000470AB,
0x034, 0x0004608B,
@@ -1468,7 +1468,27 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
0x034, 0x00042026,
0x034, 0x00041023,
0x034, 0x00040002,
- 0xCDCDCDCD, 0xCDCD,
+ 0x9000040c, 0x00000000, 0x40000000, 0x00000000,
+ 0x034, 0x000480AE,
+ 0x034, 0x000470AB,
+ 0x034, 0x0004608B,
+ 0x034, 0x00045069,
+ 0x034, 0x00044048,
+ 0x034, 0x00043045,
+ 0x034, 0x00042026,
+ 0x034, 0x00041023,
+ 0x034, 0x00040002,
+ 0x90000410, 0x00000000, 0x40000000, 0x00000000,
+ 0x034, 0x00048DED,
+ 0x034, 0x00047DEA,
+ 0x034, 0x00046DE7,
+ 0x034, 0x00045CE9,
+ 0x034, 0x00044CE6,
+ 0x034, 0x000438C6,
+ 0x034, 0x00042886,
+ 0x034, 0x00041486,
+ 0x034, 0x00040447,
+ 0xA0000000, 0x00000000,
0x034, 0x00048DEF,
0x034, 0x00047DEC,
0x034, 0x00046DE9,
@@ -1478,28 +1498,36 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
0x034, 0x0004248A,
0x034, 0x0004108D,
0x034, 0x0004008A,
- 0xFF0F0104, 0xDEAD,
- 0xFF0F0200, 0xABCD,
+ 0xB0000000, 0x00000000,
+ 0x80000210, 0x00000000, 0x40000000, 0x00000000,
0x034, 0x0002ADF4,
- 0xFF0F02C0, 0xCDEF,
+ 0x9000020c, 0x00000000, 0x40000000, 0x00000000,
0x034, 0x0002A0F3,
- 0xCDCDCDCD, 0xCDCD,
+ 0x9000040c, 0x00000000, 0x40000000, 0x00000000,
+ 0x034, 0x0002A0F3,
+ 0x90000200, 0x00000000, 0x40000000, 0x00000000,
+ 0x034, 0x0002ADF4,
+ 0xA0000000, 0x00000000,
0x034, 0x0002ADF7,
- 0xFF0F0200, 0xDEAD,
- 0xFF0F0104, 0xABCD,
+ 0xB0000000, 0x00000000,
+ 0x80000111, 0x00000000, 0x40000000, 0x00000000,
0x034, 0x00029DF4,
- 0xFF0F0204, 0xCDEF,
+ 0x90000110, 0x00000000, 0x40000000, 0x00000000,
0x034, 0x00029DF4,
- 0xFF0F0404, 0xCDEF,
- 0x034, 0x00029DF4,
- 0xFF0F0200, 0xCDEF,
+ 0x90000210, 0x00000000, 0x40000000, 0x00000000,
0x034, 0x00029DF1,
- 0xFF0F02C0, 0xCDEF,
+ 0x9000020c, 0x00000000, 0x40000000, 0x00000000,
0x034, 0x000290F0,
- 0xCDCDCDCD, 0xCDCD,
+ 0x9000040c, 0x00000000, 0x40000000, 0x00000000,
+ 0x034, 0x000290F0,
+ 0x90000200, 0x00000000, 0x40000000, 0x00000000,
+ 0x034, 0x00029DF1,
+ 0x90000410, 0x00000000, 0x40000000, 0x00000000,
+ 0x034, 0x00029DF4,
+ 0xA0000000, 0x00000000,
0x034, 0x00029DF2,
- 0xFF0F0104, 0xDEAD,
- 0xFF0F0104, 0xABCD,
+ 0xB0000000, 0x00000000,
+ 0x80000111, 0x00000000, 0x40000000, 0x00000000,
0x034, 0x00028DF1,
0x034, 0x00027DEE,
0x034, 0x00026DEB,
@@ -1509,7 +1537,7 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
0x034, 0x00022889,
0x034, 0x00021489,
0x034, 0x0002044A,
- 0xFF0F0204, 0xCDEF,
+ 0x90000110, 0x00000000, 0x40000000, 0x00000000,
0x034, 0x00028DF1,
0x034, 0x00027DEE,
0x034, 0x00026DEB,
@@ -1519,17 +1547,7 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
0x034, 0x00022889,
0x034, 0x00021489,
0x034, 0x0002044A,
- 0xFF0F0404, 0xCDEF,
- 0x034, 0x00028DF1,
- 0x034, 0x00027DEE,
- 0x034, 0x00026DEB,
- 0x034, 0x00025CEC,
- 0x034, 0x00024CE9,
- 0x034, 0x000238CA,
- 0x034, 0x00022889,
- 0x034, 0x00021489,
- 0x034, 0x0002044A,
- 0xFF0F02C0, 0xCDEF,
+ 0x9000020c, 0x00000000, 0x40000000, 0x00000000,
0x034, 0x000280AF,
0x034, 0x000270AC,
0x034, 0x0002608B,
@@ -1539,7 +1557,27 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
0x034, 0x00022026,
0x034, 0x00021023,
0x034, 0x00020002,
- 0xCDCDCDCD, 0xCDCD,
+ 0x9000040c, 0x00000000, 0x40000000, 0x00000000,
+ 0x034, 0x000280AF,
+ 0x034, 0x000270AC,
+ 0x034, 0x0002608B,
+ 0x034, 0x00025069,
+ 0x034, 0x00024048,
+ 0x034, 0x00023045,
+ 0x034, 0x00022026,
+ 0x034, 0x00021023,
+ 0x034, 0x00020002,
+ 0x90000410, 0x00000000, 0x40000000, 0x00000000,
+ 0x034, 0x00028DF1,
+ 0x034, 0x00027DEE,
+ 0x034, 0x00026DEB,
+ 0x034, 0x00025CEC,
+ 0x034, 0x00024CE9,
+ 0x034, 0x000238CA,
+ 0x034, 0x00022889,
+ 0x034, 0x00021489,
+ 0x034, 0x0002044A,
+ 0xA0000000, 0x00000000,
0x034, 0x00028DEE,
0x034, 0x00027DEB,
0x034, 0x00026CCD,
@@ -1549,19 +1587,24 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
0x034, 0x00022849,
0x034, 0x00021449,
0x034, 0x0002004D,
- 0xFF0F0104, 0xDEAD,
- 0xFF0F02C0, 0xABCD,
+ 0xB0000000, 0x00000000,
+ 0x8000020c, 0x00000000, 0x40000000, 0x00000000,
0x034, 0x0000A0D7,
0x034, 0x000090D3,
0x034, 0x000080B1,
0x034, 0x000070AE,
- 0xCDCDCDCD, 0xCDCD,
+ 0x9000040c, 0x00000000, 0x40000000, 0x00000000,
+ 0x034, 0x0000A0D7,
+ 0x034, 0x000090D3,
+ 0x034, 0x000080B1,
+ 0x034, 0x000070AE,
+ 0xA0000000, 0x00000000,
0x034, 0x0000ADF7,
0x034, 0x00009DF4,
0x034, 0x00008DF1,
0x034, 0x00007DEE,
- 0xFF0F02C0, 0xDEAD,
- 0xFF0F0104, 0xABCD,
+ 0xB0000000, 0x00000000,
+ 0x80000111, 0x00000000, 0x40000000, 0x00000000,
0x034, 0x00006DEB,
0x034, 0x00005CEC,
0x034, 0x00004CE9,
@@ -1569,7 +1612,7 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
0x034, 0x00002889,
0x034, 0x00001489,
0x034, 0x0000044A,
- 0xFF0F0204, 0xCDEF,
+ 0x90000110, 0x00000000, 0x40000000, 0x00000000,
0x034, 0x00006DEB,
0x034, 0x00005CEC,
0x034, 0x00004CE9,
@@ -1577,15 +1620,7 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
0x034, 0x00002889,
0x034, 0x00001489,
0x034, 0x0000044A,
- 0xFF0F0404, 0xCDEF,
- 0x034, 0x00006DEB,
- 0x034, 0x00005CEC,
- 0x034, 0x00004CE9,
- 0x034, 0x000038CA,
- 0x034, 0x00002889,
- 0x034, 0x00001489,
- 0x034, 0x0000044A,
- 0xFF0F02C0, 0xCDEF,
+ 0x9000020c, 0x00000000, 0x40000000, 0x00000000,
0x034, 0x0000608D,
0x034, 0x0000506B,
0x034, 0x0000404A,
@@ -1593,7 +1628,23 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
0x034, 0x00002044,
0x034, 0x00001025,
0x034, 0x00000004,
- 0xCDCDCDCD, 0xCDCD,
+ 0x9000040c, 0x00000000, 0x40000000, 0x00000000,
+ 0x034, 0x0000608D,
+ 0x034, 0x0000506B,
+ 0x034, 0x0000404A,
+ 0x034, 0x00003047,
+ 0x034, 0x00002044,
+ 0x034, 0x00001025,
+ 0x034, 0x00000004,
+ 0x90000410, 0x00000000, 0x40000000, 0x00000000,
+ 0x034, 0x00006DEB,
+ 0x034, 0x00005CEC,
+ 0x034, 0x00004CE9,
+ 0x034, 0x000038CA,
+ 0x034, 0x00002889,
+ 0x034, 0x00001489,
+ 0x034, 0x0000044A,
+ 0xA0000000, 0x00000000,
0x034, 0x00006DCD,
0x034, 0x00005CCD,
0x034, 0x00004CCA,
@@ -1601,11 +1652,11 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
0x034, 0x00002888,
0x034, 0x00001488,
0x034, 0x00000486,
- 0xFF0F0104, 0xDEAD,
+ 0xB0000000, 0x00000000,
0x0EF, 0x00000000,
0x018, 0x0001712A,
0x0EF, 0x00000040,
- 0xFF0F0104, 0xABCD,
+ 0x80000111, 0x00000000, 0x40000000, 0x00000000,
0x035, 0x00000187,
0x035, 0x00008187,
0x035, 0x00010187,
@@ -1615,7 +1666,7 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
0x035, 0x00040188,
0x035, 0x00048188,
0x035, 0x00050188,
- 0xFF0F0204, 0xCDEF,
+ 0x90000110, 0x00000000, 0x40000000, 0x00000000,
0x035, 0x00000187,
0x035, 0x00008187,
0x035, 0x00010187,
@@ -1625,17 +1676,17 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
0x035, 0x00040188,
0x035, 0x00048188,
0x035, 0x00050188,
- 0xFF0F0404, 0xCDEF,
- 0x035, 0x00000187,
- 0x035, 0x00008187,
- 0x035, 0x00010187,
- 0x035, 0x00020188,
- 0x035, 0x00028188,
- 0x035, 0x00030188,
- 0x035, 0x00040188,
- 0x035, 0x00048188,
- 0x035, 0x00050188,
- 0xCDCDCDCD, 0xCDCD,
+ 0x90000210, 0x00000000, 0x40000000, 0x00000000,
+ 0x035, 0x00000128,
+ 0x035, 0x00008128,
+ 0x035, 0x00010128,
+ 0x035, 0x000201C8,
+ 0x035, 0x000281C8,
+ 0x035, 0x000301C8,
+ 0x035, 0x000401C8,
+ 0x035, 0x000481C8,
+ 0x035, 0x000501C8,
+ 0x9000040c, 0x00000000, 0x40000000, 0x00000000,
0x035, 0x00000145,
0x035, 0x00008145,
0x035, 0x00010145,
@@ -1645,11 +1696,41 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
0x035, 0x000401C7,
0x035, 0x000481C7,
0x035, 0x000501C7,
- 0xFF0F0104, 0xDEAD,
+ 0x90000200, 0x00000000, 0x40000000, 0x00000000,
+ 0x035, 0x00000128,
+ 0x035, 0x00008128,
+ 0x035, 0x00010128,
+ 0x035, 0x000201C8,
+ 0x035, 0x000281C8,
+ 0x035, 0x000301C8,
+ 0x035, 0x000401C8,
+ 0x035, 0x000481C8,
+ 0x035, 0x000501C8,
+ 0x90000410, 0x00000000, 0x40000000, 0x00000000,
+ 0x035, 0x00000187,
+ 0x035, 0x00008187,
+ 0x035, 0x00010187,
+ 0x035, 0x00020188,
+ 0x035, 0x00028188,
+ 0x035, 0x00030188,
+ 0x035, 0x00040188,
+ 0x035, 0x00048188,
+ 0x035, 0x00050188,
+ 0xA0000000, 0x00000000,
+ 0x035, 0x00000145,
+ 0x035, 0x00008145,
+ 0x035, 0x00010145,
+ 0x035, 0x00020196,
+ 0x035, 0x00028196,
+ 0x035, 0x00030196,
+ 0x035, 0x000401C7,
+ 0x035, 0x000481C7,
+ 0x035, 0x000501C7,
+ 0xB0000000, 0x00000000,
0x0EF, 0x00000000,
0x018, 0x0001712A,
0x0EF, 0x00000010,
- 0xFF0F0104, 0xABCD,
+ 0x80000111, 0x00000000, 0x40000000, 0x00000000,
0x036, 0x00085733,
0x036, 0x0008D733,
0x036, 0x00095733,
@@ -1662,7 +1743,7 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
0x036, 0x000CE4B4,
0x036, 0x000D64B4,
0x036, 0x000DE4B4,
- 0xFF0F0204, 0xCDEF,
+ 0x90000110, 0x00000000, 0x40000000, 0x00000000,
0x036, 0x00085733,
0x036, 0x0008D733,
0x036, 0x00095733,
@@ -1675,20 +1756,20 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
0x036, 0x000CE4B4,
0x036, 0x000D64B4,
0x036, 0x000DE4B4,
- 0xFF0F0404, 0xCDEF,
- 0x036, 0x00085733,
- 0x036, 0x0008D733,
- 0x036, 0x00095733,
- 0x036, 0x0009D733,
- 0x036, 0x000A64B4,
- 0x036, 0x000AE4B4,
- 0x036, 0x000B64B4,
- 0x036, 0x000BE4B4,
- 0x036, 0x000C64B4,
- 0x036, 0x000CE4B4,
- 0x036, 0x000D64B4,
- 0x036, 0x000DE4B4,
- 0xCDCDCDCD, 0xCDCD,
+ 0x90000210, 0x00000000, 0x40000000, 0x00000000,
+ 0x036, 0x000063B5,
+ 0x036, 0x0000E3B5,
+ 0x036, 0x000163B5,
+ 0x036, 0x0001E3B5,
+ 0x036, 0x000263B5,
+ 0x036, 0x0002E3B5,
+ 0x036, 0x000363B5,
+ 0x036, 0x0003E3B5,
+ 0x036, 0x000463B5,
+ 0x036, 0x0004E3B5,
+ 0x036, 0x000563B5,
+ 0x036, 0x0005E3B5,
+ 0x9000040c, 0x00000000, 0x40000000, 0x00000000,
0x036, 0x000056B3,
0x036, 0x0000D6B3,
0x036, 0x000156B3,
@@ -1701,103 +1782,201 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
0x036, 0x0004E7B4,
0x036, 0x000567B4,
0x036, 0x0005E7B4,
- 0xFF0F0104, 0xDEAD,
+ 0x90000200, 0x00000000, 0x40000000, 0x00000000,
+ 0x036, 0x000063B5,
+ 0x036, 0x0000E3B5,
+ 0x036, 0x000163B5,
+ 0x036, 0x0001E3B5,
+ 0x036, 0x000263B5,
+ 0x036, 0x0002E3B5,
+ 0x036, 0x000363B5,
+ 0x036, 0x0003E3B5,
+ 0x036, 0x000463B5,
+ 0x036, 0x0004E3B5,
+ 0x036, 0x000563B5,
+ 0x036, 0x0005E3B5,
+ 0x90000410, 0x00000000, 0x40000000, 0x00000000,
+ 0x036, 0x00085733,
+ 0x036, 0x0008D733,
+ 0x036, 0x00095733,
+ 0x036, 0x0009D733,
+ 0x036, 0x000A64B4,
+ 0x036, 0x000AE4B4,
+ 0x036, 0x000B64B4,
+ 0x036, 0x000BE4B4,
+ 0x036, 0x000C64B4,
+ 0x036, 0x000CE4B4,
+ 0x036, 0x000D64B4,
+ 0x036, 0x000DE4B4,
+ 0xA0000000, 0x00000000,
+ 0x036, 0x000056B3,
+ 0x036, 0x0000D6B3,
+ 0x036, 0x000156B3,
+ 0x036, 0x0001D6B3,
+ 0x036, 0x00026634,
+ 0x036, 0x0002E634,
+ 0x036, 0x00036634,
+ 0x036, 0x0003E634,
+ 0x036, 0x000467B4,
+ 0x036, 0x0004E7B4,
+ 0x036, 0x000567B4,
+ 0x036, 0x0005E7B4,
+ 0xB0000000, 0x00000000,
0x0EF, 0x00000000,
0x0EF, 0x00000008,
- 0xFF0F0104, 0xABCD,
+ 0x80000111, 0x00000000, 0x40000000, 0x00000000,
0x03C, 0x000001C8,
0x03C, 0x00000492,
- 0xFF0F0204, 0xCDEF,
+ 0x90000110, 0x00000000, 0x40000000, 0x00000000,
0x03C, 0x000001C8,
0x03C, 0x00000492,
- 0xFF0F0404, 0xCDEF,
- 0x03C, 0x000001C8,
+ 0x90000210, 0x00000000, 0x40000000, 0x00000000,
+ 0x03C, 0x000001B6,
0x03C, 0x00000492,
- 0xCDCDCDCD, 0xCDCD,
+ 0x9000040c, 0x00000000, 0x40000000, 0x00000000,
0x03C, 0x0000022A,
0x03C, 0x00000594,
- 0xFF0F0104, 0xDEAD,
- 0xFF0F0104, 0xABCD,
+ 0x90000200, 0x00000000, 0x40000000, 0x00000000,
+ 0x03C, 0x000001B6,
+ 0x03C, 0x00000492,
+ 0x90000410, 0x00000000, 0x40000000, 0x00000000,
+ 0x03C, 0x000001C8,
+ 0x03C, 0x00000492,
+ 0xA0000000, 0x00000000,
+ 0x03C, 0x0000022A,
+ 0x03C, 0x00000594,
+ 0xB0000000, 0x00000000,
+ 0x80000111, 0x00000000, 0x40000000, 0x00000000,
0x03C, 0x00000800,
- 0xFF0F0204, 0xCDEF,
+ 0x90000110, 0x00000000, 0x40000000, 0x00000000,
0x03C, 0x00000800,
- 0xFF0F0404, 0xCDEF,
+ 0x90000210, 0x00000000, 0x40000000, 0x00000000,
0x03C, 0x00000800,
- 0xFF0F02C0, 0xCDEF,
+ 0x9000020c, 0x00000000, 0x40000000, 0x00000000,
0x03C, 0x00000820,
- 0xCDCDCDCD, 0xCDCD,
+ 0x9000040c, 0x00000000, 0x40000000, 0x00000000,
+ 0x03C, 0x00000820,
+ 0x90000200, 0x00000000, 0x40000000, 0x00000000,
+ 0x03C, 0x00000800,
+ 0x90000410, 0x00000000, 0x40000000, 0x00000000,
+ 0x03C, 0x00000800,
+ 0xA0000000, 0x00000000,
0x03C, 0x00000900,
- 0xFF0F0104, 0xDEAD,
+ 0xB0000000, 0x00000000,
0x0EF, 0x00000000,
0x018, 0x0001712A,
0x0EF, 0x00000002,
- 0xFF0F0104, 0xABCD,
+ 0x80000111, 0x00000000, 0x40000000, 0x00000000,
0x008, 0x0004E400,
- 0xFF0F0204, 0xCDEF,
+ 0x90000110, 0x00000000, 0x40000000, 0x00000000,
0x008, 0x0004E400,
- 0xFF0F0404, 0xCDEF,
- 0x008, 0x0004E400,
- 0xCDCDCDCD, 0xCDCD,
+ 0x90000210, 0x00000000, 0x40000000, 0x00000000,
0x008, 0x00002000,
- 0xFF0F0104, 0xDEAD,
+ 0x9000020c, 0x00000000, 0x40000000, 0x00000000,
+ 0x008, 0x00002000,
+ 0x9000040c, 0x00000000, 0x40000000, 0x00000000,
+ 0x008, 0x00002000,
+ 0x90000200, 0x00000000, 0x40000000, 0x00000000,
+ 0x008, 0x00002000,
+ 0x90000410, 0x00000000, 0x40000000, 0x00000000,
+ 0x008, 0x0004E400,
+ 0xA0000000, 0x00000000,
+ 0x008, 0x00002000,
+ 0xB0000000, 0x00000000,
0x0EF, 0x00000000,
0x0DF, 0x000000C0,
- 0x01F, 0x00040064,
- 0xFF0F0104, 0xABCD,
+ 0x01F, 0x00000064,
+ 0x80000111, 0x00000000, 0x40000000, 0x00000000,
0x058, 0x000A7284,
0x059, 0x000600EC,
- 0xFF0F0204, 0xCDEF,
+ 0x90000110, 0x00000000, 0x40000000, 0x00000000,
0x058, 0x000A7284,
0x059, 0x000600EC,
- 0xFF0F0404, 0xCDEF,
- 0x058, 0x000A7284,
- 0x059, 0x000600EC,
- 0xCDCDCDCD, 0xCDCD,
+ 0x9000020c, 0x00000000, 0x40000000, 0x00000000,
0x058, 0x00081184,
0x059, 0x0006016C,
- 0xFF0F0104, 0xDEAD,
- 0xFF0F0104, 0xABCD,
+ 0x9000040c, 0x00000000, 0x40000000, 0x00000000,
+ 0x058, 0x00081184,
+ 0x059, 0x0006016C,
+ 0x90000200, 0x00000000, 0x40000000, 0x00000000,
+ 0x058, 0x00081184,
+ 0x059, 0x0006016C,
+ 0x90000410, 0x00000000, 0x40000000, 0x00000000,
+ 0x058, 0x000A7284,
+ 0x059, 0x000600EC,
+ 0xA0000000, 0x00000000,
+ 0x058, 0x00081184,
+ 0x059, 0x0006016C,
+ 0xB0000000, 0x00000000,
+ 0x80000111, 0x00000000, 0x40000000, 0x00000000,
0x061, 0x000E8D73,
0x062, 0x00093FC5,
- 0xFF0F0204, 0xCDEF,
+ 0x90000110, 0x00000000, 0x40000000, 0x00000000,
0x061, 0x000E8D73,
0x062, 0x00093FC5,
- 0xFF0F0404, 0xCDEF,
- 0x061, 0x000E8D73,
- 0x062, 0x00093FC5,
- 0xCDCDCDCD, 0xCDCD,
+ 0x90000210, 0x00000000, 0x40000000, 0x00000000,
+ 0x061, 0x000EFD83,
+ 0x062, 0x00093FCC,
+ 0x9000040c, 0x00000000, 0x40000000, 0x00000000,
0x061, 0x000EAD53,
0x062, 0x00093BC4,
- 0xFF0F0104, 0xDEAD,
- 0xFF0F0104, 0xABCD,
+ 0x90000200, 0x00000000, 0x40000000, 0x00000000,
+ 0x061, 0x000EFD83,
+ 0x062, 0x00093FCC,
+ 0x90000410, 0x00000000, 0x40000000, 0x00000000,
+ 0x061, 0x000E8D73,
+ 0x062, 0x00093FC5,
+ 0xA0000000, 0x00000000,
+ 0x061, 0x000EAD53,
+ 0x062, 0x00093BC4,
+ 0xB0000000, 0x00000000,
+ 0x80000111, 0x00000000, 0x40000000, 0x00000000,
0x063, 0x000110E9,
- 0xFF0F0204, 0xCDEF,
+ 0x90000110, 0x00000000, 0x40000000, 0x00000000,
0x063, 0x000110E9,
- 0xFF0F0404, 0xCDEF,
+ 0x90000210, 0x00000000, 0x40000000, 0x00000000,
+ 0x063, 0x000110EB,
+ 0x9000020c, 0x00000000, 0x40000000, 0x00000000,
0x063, 0x000110E9,
- 0xFF0F0200, 0xCDEF,
- 0x063, 0x000710E9,
- 0xFF0F02C0, 0xCDEF,
+ 0x9000040c, 0x00000000, 0x40000000, 0x00000000,
0x063, 0x000110E9,
- 0xCDCDCDCD, 0xCDCD,
+ 0x90000200, 0x00000000, 0x40000000, 0x00000000,
+ 0x063, 0x000110EB,
+ 0x90000410, 0x00000000, 0x40000000, 0x00000000,
+ 0x063, 0x000110E9,
+ 0xA0000000, 0x00000000,
0x063, 0x000714E9,
- 0xFF0F0104, 0xDEAD,
- 0xFF0F0104, 0xABCD,
+ 0xB0000000, 0x00000000,
+ 0x80000111, 0x00000000, 0x40000000, 0x00000000,
0x064, 0x0001C27C,
- 0xFF0F0204, 0xCDEF,
+ 0x90000110, 0x00000000, 0x40000000, 0x00000000,
0x064, 0x0001C27C,
- 0xFF0F0404, 0xCDEF,
+ 0x90000210, 0x00000000, 0x40000000, 0x00000000,
0x064, 0x0001C27C,
- 0xCDCDCDCD, 0xCDCD,
+ 0x9000040c, 0x00000000, 0x40000000, 0x00000000,
0x064, 0x0001C67C,
- 0xFF0F0104, 0xDEAD,
- 0xFF0F0200, 0xABCD,
- 0x065, 0x00093016,
- 0xFF0F02C0, 0xCDEF,
- 0x065, 0x00093015,
- 0xCDCDCDCD, 0xCDCD,
+ 0x90000200, 0x00000000, 0x40000000, 0x00000000,
+ 0x064, 0x0001C27C,
+ 0x90000410, 0x00000000, 0x40000000, 0x00000000,
+ 0x064, 0x0001C27C,
+ 0xA0000000, 0x00000000,
+ 0x064, 0x0001C67C,
+ 0xB0000000, 0x00000000,
+ 0x80000111, 0x00000000, 0x40000000, 0x00000000,
0x065, 0x00091016,
- 0xFF0F0200, 0xDEAD,
+ 0x90000110, 0x00000000, 0x40000000, 0x00000000,
+ 0x065, 0x00091016,
+ 0x90000210, 0x00000000, 0x40000000, 0x00000000,
+ 0x065, 0x00093016,
+ 0x9000020c, 0x00000000, 0x40000000, 0x00000000,
+ 0x065, 0x00093015,
+ 0x9000040c, 0x00000000, 0x40000000, 0x00000000,
+ 0x065, 0x00093015,
+ 0x90000200, 0x00000000, 0x40000000, 0x00000000,
+ 0x065, 0x00093016,
+ 0xA0000000, 0x00000000,
+ 0x065, 0x00091016,
+ 0xB0000000, 0x00000000,
0x018, 0x00000006,
0x0EF, 0x00002000,
0x03B, 0x0003824B,
@@ -1895,9 +2074,10 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
0x0B4, 0x0001214C,
0x0B7, 0x0003000C,
0x01C, 0x000539D2,
+ 0x0C4, 0x000AFE00,
0x018, 0x0001F12A,
- 0x0FE, 0x00000000,
- 0x0FE, 0x00000000,
+ 0xFFE, 0x00000000,
+ 0xFFE, 0x00000000,
0x018, 0x0001712A,
};
@@ -2017,6 +2197,7 @@ u32 RTL8812AE_MAC_REG_ARRAY[] = {
u32 RTL8812AE_MAC_1T_ARRAYLEN = ARRAY_SIZE(RTL8812AE_MAC_REG_ARRAY);
u32 RTL8821AE_MAC_REG_ARRAY[] = {
+ 0x421, 0x0000000F,
0x428, 0x0000000A,
0x429, 0x00000010,
0x430, 0x00000000,
@@ -2485,7 +2666,7 @@ u32 RTL8821AE_AGC_TAB_ARRAY[] = {
0x81C, 0xA6360001,
0x81C, 0xA5380001,
0x81C, 0xA43A0001,
- 0x81C, 0xA33C0001,
+ 0x81C, 0x683C0001,
0x81C, 0x673E0001,
0x81C, 0x66400001,
0x81C, 0x65420001,
@@ -2519,7 +2700,7 @@ u32 RTL8821AE_AGC_TAB_ARRAY[] = {
0x81C, 0x017A0001,
0x81C, 0x017C0001,
0x81C, 0x017E0001,
- 0xFF0F02C0, 0xABCD,
+ 0x8000020c, 0x00000000, 0x40000000, 0x00000000,
0x81C, 0xFB000101,
0x81C, 0xFA020101,
0x81C, 0xF9040101,
@@ -2578,7 +2759,66 @@ u32 RTL8821AE_AGC_TAB_ARRAY[] = {
0x81C, 0x016E0101,
0x81C, 0x01700101,
0x81C, 0x01720101,
- 0xCDCDCDCD, 0xCDCD,
+ 0x9000040c, 0x00000000, 0x40000000, 0x00000000,
+ 0x81C, 0xFB000101,
+ 0x81C, 0xFA020101,
+ 0x81C, 0xF9040101,
+ 0x81C, 0xF8060101,
+ 0x81C, 0xF7080101,
+ 0x81C, 0xF60A0101,
+ 0x81C, 0xF50C0101,
+ 0x81C, 0xF40E0101,
+ 0x81C, 0xF3100101,
+ 0x81C, 0xF2120101,
+ 0x81C, 0xF1140101,
+ 0x81C, 0xF0160101,
+ 0x81C, 0xEF180101,
+ 0x81C, 0xEE1A0101,
+ 0x81C, 0xED1C0101,
+ 0x81C, 0xEC1E0101,
+ 0x81C, 0xEB200101,
+ 0x81C, 0xEA220101,
+ 0x81C, 0xE9240101,
+ 0x81C, 0xE8260101,
+ 0x81C, 0xE7280101,
+ 0x81C, 0xE62A0101,
+ 0x81C, 0xE52C0101,
+ 0x81C, 0xE42E0101,
+ 0x81C, 0xE3300101,
+ 0x81C, 0xA5320101,
+ 0x81C, 0xA4340101,
+ 0x81C, 0xA3360101,
+ 0x81C, 0x87380101,
+ 0x81C, 0x863A0101,
+ 0x81C, 0x853C0101,
+ 0x81C, 0x843E0101,
+ 0x81C, 0x69400101,
+ 0x81C, 0x68420101,
+ 0x81C, 0x67440101,
+ 0x81C, 0x66460101,
+ 0x81C, 0x49480101,
+ 0x81C, 0x484A0101,
+ 0x81C, 0x474C0101,
+ 0x81C, 0x2A4E0101,
+ 0x81C, 0x29500101,
+ 0x81C, 0x28520101,
+ 0x81C, 0x27540101,
+ 0x81C, 0x26560101,
+ 0x81C, 0x25580101,
+ 0x81C, 0x245A0101,
+ 0x81C, 0x235C0101,
+ 0x81C, 0x055E0101,
+ 0x81C, 0x04600101,
+ 0x81C, 0x03620101,
+ 0x81C, 0x02640101,
+ 0x81C, 0x01660101,
+ 0x81C, 0x01680101,
+ 0x81C, 0x016A0101,
+ 0x81C, 0x016C0101,
+ 0x81C, 0x016E0101,
+ 0x81C, 0x01700101,
+ 0x81C, 0x01720101,
+ 0xA0000000, 0x00000000,
0x81C, 0xFF000101,
0x81C, 0xFF020101,
0x81C, 0xFE040101,
@@ -2637,7 +2877,7 @@ u32 RTL8821AE_AGC_TAB_ARRAY[] = {
0x81C, 0x046E0101,
0x81C, 0x03700101,
0x81C, 0x02720101,
- 0xFF0F02C0, 0xDEAD,
+ 0xB0000000, 0x00000000,
0x81C, 0x01740101,
0x81C, 0x01760101,
0x81C, 0x01780101,
diff --git a/drivers/net/wireless/realtek/rtw88/debug.c b/drivers/net/wireless/realtek/rtw88/debug.c
index efbba9c..8bb6cc8 100644
--- a/drivers/net/wireless/realtek/rtw88/debug.c
+++ b/drivers/net/wireless/realtek/rtw88/debug.c
@@ -270,7 +270,7 @@ static ssize_t rtw_debugfs_set_rsvd_page(struct file *filp,
if (num != 2) {
rtw_warn(rtwdev, "invalid arguments\n");
- return num;
+ return -EINVAL;
}
debugfs_priv->rsvd_page.page_offset = offset;
diff --git a/drivers/net/wireless/realtek/rtw88/main.h b/drivers/net/wireless/realtek/rtw88/main.h
index ffb02e6..8ba0b08 100644
--- a/drivers/net/wireless/realtek/rtw88/main.h
+++ b/drivers/net/wireless/realtek/rtw88/main.h
@@ -1156,6 +1156,7 @@ struct rtw_chip_info {
bool en_dis_dpd;
u16 dpd_ratemask;
u8 iqk_threshold;
+ u8 lck_threshold;
const struct rtw_pwr_track_tbl *pwr_track_tbl;
u8 bfer_su_max_num;
@@ -1485,6 +1486,7 @@ struct rtw_dm_info {
u8 tx_rate;
u8 thermal_avg[RTW_RF_PATH_MAX];
u8 thermal_meter_k;
+ u8 thermal_meter_lck;
s8 delta_power_index[RTW_RF_PATH_MAX];
s8 delta_power_index_last[RTW_RF_PATH_MAX];
u8 default_ofdm_index;
diff --git a/drivers/net/wireless/realtek/rtw88/phy.c b/drivers/net/wireless/realtek/rtw88/phy.c
index 5cd9cc4..af8b703 100644
--- a/drivers/net/wireless/realtek/rtw88/phy.c
+++ b/drivers/net/wireless/realtek/rtw88/phy.c
@@ -1518,7 +1518,7 @@ void rtw_phy_load_tables(struct rtw_dev *rtwdev)
}
EXPORT_SYMBOL(rtw_phy_load_tables);
-static u8 rtw_get_channel_group(u8 channel)
+static u8 rtw_get_channel_group(u8 channel, u8 rate)
{
switch (channel) {
default:
@@ -1562,6 +1562,7 @@ static u8 rtw_get_channel_group(u8 channel)
case 106:
return 4;
case 14:
+ return rate <= DESC_RATE11M ? 5 : 4;
case 108:
case 110:
case 112:
@@ -1813,7 +1814,7 @@ void rtw_get_tx_power_params(struct rtw_dev *rtwdev, u8 path, u8 rate, u8 bw,
s8 *remnant = &pwr_param->pwr_remnant;
pwr_idx = &rtwdev->efuse.txpwr_idx_table[path];
- group = rtw_get_channel_group(ch);
+ group = rtw_get_channel_group(ch, rate);
/* base power index for 2.4G/5G */
if (IS_CH_2G_BAND(ch)) {
@@ -2153,6 +2154,20 @@ s8 rtw_phy_pwrtrack_get_pwridx(struct rtw_dev *rtwdev,
}
EXPORT_SYMBOL(rtw_phy_pwrtrack_get_pwridx);
+bool rtw_phy_pwrtrack_need_lck(struct rtw_dev *rtwdev)
+{
+ struct rtw_dm_info *dm_info = &rtwdev->dm_info;
+ u8 delta_lck;
+
+ delta_lck = abs(dm_info->thermal_avg[0] - dm_info->thermal_meter_lck);
+ if (delta_lck >= rtwdev->chip->lck_threshold) {
+ dm_info->thermal_meter_lck = dm_info->thermal_avg[0];
+ return true;
+ }
+ return false;
+}
+EXPORT_SYMBOL(rtw_phy_pwrtrack_need_lck);
+
bool rtw_phy_pwrtrack_need_iqk(struct rtw_dev *rtwdev)
{
struct rtw_dm_info *dm_info = &rtwdev->dm_info;
diff --git a/drivers/net/wireless/realtek/rtw88/phy.h b/drivers/net/wireless/realtek/rtw88/phy.h
index b924ed0..9623248 100644
--- a/drivers/net/wireless/realtek/rtw88/phy.h
+++ b/drivers/net/wireless/realtek/rtw88/phy.h
@@ -55,6 +55,7 @@ u8 rtw_phy_pwrtrack_get_delta(struct rtw_dev *rtwdev, u8 path);
s8 rtw_phy_pwrtrack_get_pwridx(struct rtw_dev *rtwdev,
struct rtw_swing_table *swing_table,
u8 tbl_path, u8 therm_path, u8 delta);
+bool rtw_phy_pwrtrack_need_lck(struct rtw_dev *rtwdev);
bool rtw_phy_pwrtrack_need_iqk(struct rtw_dev *rtwdev);
void rtw_phy_config_swing_table(struct rtw_dev *rtwdev,
struct rtw_swing_table *swing_table);
diff --git a/drivers/net/wireless/realtek/rtw88/reg.h b/drivers/net/wireless/realtek/rtw88/reg.h
index 86b94c0..aca3dbd 100644
--- a/drivers/net/wireless/realtek/rtw88/reg.h
+++ b/drivers/net/wireless/realtek/rtw88/reg.h
@@ -639,8 +639,13 @@
#define RF_TXATANK 0x64
#define RF_TRXIQ 0x66
#define RF_RXIQGEN 0x8d
+#define RF_SYN_PFD 0xb0
#define RF_XTALX2 0xb8
+#define RF_SYN_CTRL 0xbb
#define RF_MALSEL 0xbe
+#define RF_SYN_AAC 0xc9
+#define RF_AAC_CTRL 0xca
+#define RF_FAST_LCK 0xcc
#define RF_RCKD 0xde
#define RF_TXADBG 0xde
#define RF_LUTDBG 0xdf
diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822c.c b/drivers/net/wireless/realtek/rtw88/rtw8822c.c
index e37300e..b718f5d 100644
--- a/drivers/net/wireless/realtek/rtw88/rtw8822c.c
+++ b/drivers/net/wireless/realtek/rtw88/rtw8822c.c
@@ -1124,6 +1124,7 @@ static void rtw8822c_pwrtrack_init(struct rtw_dev *rtwdev)
dm_info->pwr_trk_triggered = false;
dm_info->thermal_meter_k = rtwdev->efuse.thermal_meter_k;
+ dm_info->thermal_meter_lck = rtwdev->efuse.thermal_meter_k;
}
static void rtw8822c_phy_set_param(struct rtw_dev *rtwdev)
@@ -2106,6 +2107,26 @@ static void rtw8822c_false_alarm_statistics(struct rtw_dev *rtwdev)
rtw_write32_set(rtwdev, REG_RX_BREAK, BIT_COM_RX_GCK_EN);
}
+static void rtw8822c_do_lck(struct rtw_dev *rtwdev)
+{
+ u32 val;
+
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_SYN_CTRL, RFREG_MASK, 0x80010);
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_SYN_PFD, RFREG_MASK, 0x1F0FA);
+ fsleep(1);
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_AAC_CTRL, RFREG_MASK, 0x80000);
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_SYN_AAC, RFREG_MASK, 0x80001);
+ read_poll_timeout(rtw_read_rf, val, val != 0x1, 1000, 100000,
+ true, rtwdev, RF_PATH_A, RF_AAC_CTRL, 0x1000);
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_SYN_PFD, RFREG_MASK, 0x1F0F8);
+ rtw_write_rf(rtwdev, RF_PATH_B, RF_SYN_CTRL, RFREG_MASK, 0x80010);
+
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_FAST_LCK, RFREG_MASK, 0x0f000);
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_FAST_LCK, RFREG_MASK, 0x4f000);
+ fsleep(1);
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_FAST_LCK, RFREG_MASK, 0x0f000);
+}
+
static void rtw8822c_do_iqk(struct rtw_dev *rtwdev)
{
struct rtw_iqk_para para = {0};
@@ -3519,11 +3540,12 @@ static void __rtw8822c_pwr_track(struct rtw_dev *rtwdev)
rtw_phy_config_swing_table(rtwdev, &swing_table);
+ if (rtw_phy_pwrtrack_need_lck(rtwdev))
+ rtw8822c_do_lck(rtwdev);
+
for (i = 0; i < rtwdev->hal.rf_path_num; i++)
rtw8822c_pwr_track_path(rtwdev, &swing_table, i);
- if (rtw_phy_pwrtrack_need_iqk(rtwdev))
- rtw8822c_do_iqk(rtwdev);
}
static void rtw8822c_pwr_track(struct rtw_dev *rtwdev)
@@ -4328,6 +4350,7 @@ struct rtw_chip_info rtw8822c_hw_spec = {
.dpd_ratemask = DIS_DPD_RATEALL,
.pwr_track_tbl = &rtw8822c_rtw_pwr_track_tbl,
.iqk_threshold = 8,
+ .lck_threshold = 8,
.bfer_su_max_num = 2,
.bfer_mu_max_num = 1,
.rx_ldpc = true,
diff --git a/drivers/net/wireless/rsi/rsi_91x_sdio.c b/drivers/net/wireless/rsi/rsi_91x_sdio.c
index 592e9da..3a243c5 100644
--- a/drivers/net/wireless/rsi/rsi_91x_sdio.c
+++ b/drivers/net/wireless/rsi/rsi_91x_sdio.c
@@ -1513,7 +1513,7 @@ static int rsi_restore(struct device *dev)
}
static const struct dev_pm_ops rsi_pm_ops = {
.suspend = rsi_suspend,
- .resume = rsi_resume,
+ .resume_noirq = rsi_resume,
.freeze = rsi_freeze,
.thaw = rsi_thaw,
.restore = rsi_restore,
diff --git a/drivers/net/wireless/ti/wlcore/boot.c b/drivers/net/wireless/ti/wlcore/boot.c
index e14d88e..85abd0a 100644
--- a/drivers/net/wireless/ti/wlcore/boot.c
+++ b/drivers/net/wireless/ti/wlcore/boot.c
@@ -72,6 +72,7 @@ static int wlcore_validate_fw_ver(struct wl1271 *wl)
unsigned int *min_ver = (wl->fw_type == WL12XX_FW_TYPE_MULTI) ?
wl->min_mr_fw_ver : wl->min_sr_fw_ver;
char min_fw_str[32] = "";
+ int off = 0;
int i;
/* the chip must be exactly equal */
@@ -105,13 +106,15 @@ static int wlcore_validate_fw_ver(struct wl1271 *wl)
return 0;
fail:
- for (i = 0; i < NUM_FW_VER; i++)
+ for (i = 0; i < NUM_FW_VER && off < sizeof(min_fw_str); i++)
if (min_ver[i] == WLCORE_FW_VER_IGNORE)
- snprintf(min_fw_str, sizeof(min_fw_str),
- "%s*.", min_fw_str);
+ off += snprintf(min_fw_str + off,
+ sizeof(min_fw_str) - off,
+ "*.");
else
- snprintf(min_fw_str, sizeof(min_fw_str),
- "%s%u.", min_fw_str, min_ver[i]);
+ off += snprintf(min_fw_str + off,
+ sizeof(min_fw_str) - off,
+ "%u.", min_ver[i]);
wl1271_error("Your WiFi FW version (%u.%u.%u.%u.%u) is invalid.\n"
"Please use at least FW %s\n"
diff --git a/drivers/net/wireless/ti/wlcore/debugfs.h b/drivers/net/wireless/ti/wlcore/debugfs.h
index b143293..a9e13e6 100644
--- a/drivers/net/wireless/ti/wlcore/debugfs.h
+++ b/drivers/net/wireless/ti/wlcore/debugfs.h
@@ -78,13 +78,14 @@ static ssize_t sub## _ ##name## _read(struct file *file, \
struct wl1271 *wl = file->private_data; \
struct struct_type *stats = wl->stats.fw_stats; \
char buf[DEBUGFS_FORMAT_BUFFER_SIZE] = ""; \
+ int pos = 0; \
int i; \
\
wl1271_debugfs_update_stats(wl); \
\
- for (i = 0; i < len; i++) \
- snprintf(buf, sizeof(buf), "%s[%d] = %d\n", \
- buf, i, stats->sub.name[i]); \
+ for (i = 0; i < len && pos < sizeof(buf); i++) \
+ pos += snprintf(buf + pos, sizeof(buf) - pos, \
+ "[%d] = %d\n", i, stats->sub.name[i]); \
\
return wl1271_format_buffer(userbuf, count, ppos, "%s", buf); \
} \
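Both wlcore hunks above replace snprintf() calls that passed the destination buffer back into their own format string (undefined behaviour that also kept rewriting the start of the buffer) with an accumulated write offset. A minimal standalone sketch of that bounded-append pattern, using illustrative names only:

#include <stdio.h>
#include <stddef.h>

/*
 * Append formatted entries into a fixed-size buffer. Tracking the write
 * offset, instead of re-passing the buffer to its own format string,
 * keeps every snprintf() call bounded.
 */
static void format_entries(char *buf, size_t size, const int *vals, size_t n)
{
	size_t pos = 0;

	for (size_t i = 0; i < n && pos < size; i++)
		pos += snprintf(buf + pos, size - pos, "[%zu] = %d\n", i, vals[i]);
}

The pos < size loop guard mirrors the checks added in boot.c and debugfs.h, so snprintf() is never handed a stale or out-of-range remaining length.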
diff --git a/drivers/net/wireless/virt_wifi.c b/drivers/net/wireless/virt_wifi.c
index c878097..1df9595 100644
--- a/drivers/net/wireless/virt_wifi.c
+++ b/drivers/net/wireless/virt_wifi.c
@@ -12,6 +12,7 @@
#include <net/cfg80211.h>
#include <net/rtnetlink.h>
#include <linux/etherdevice.h>
+#include <linux/math64.h>
#include <linux/module.h>
static struct wiphy *common_wiphy;
@@ -168,11 +169,11 @@ static void virt_wifi_scan_result(struct work_struct *work)
scan_result.work);
struct wiphy *wiphy = priv_to_wiphy(priv);
struct cfg80211_scan_info scan_info = { .aborted = false };
+ u64 tsf = div_u64(ktime_get_boottime_ns(), 1000);
informed_bss = cfg80211_inform_bss(wiphy, &channel_5ghz,
CFG80211_BSS_FTYPE_PRESP,
- fake_router_bssid,
- ktime_get_boottime_ns(),
+ fake_router_bssid, tsf,
WLAN_CAPABILITY_ESS, 0,
(void *)&ssid, sizeof(ssid),
DBM_TO_MBM(-50), GFP_KERNEL);
diff --git a/drivers/net/wireless/wl3501.h b/drivers/net/wireless/wl3501.h
index b446cb3..87195c1 100644
--- a/drivers/net/wireless/wl3501.h
+++ b/drivers/net/wireless/wl3501.h
@@ -379,16 +379,7 @@ struct wl3501_get_confirm {
u8 mib_value[100];
};
-struct wl3501_join_req {
- u16 next_blk;
- u8 sig_id;
- u8 reserved;
- struct iw_mgmt_data_rset operational_rset;
- u16 reserved2;
- u16 timeout;
- u16 probe_delay;
- u8 timestamp[8];
- u8 local_time[8];
+struct wl3501_req {
u16 beacon_period;
u16 dtim_period;
u16 cap_info;
@@ -401,6 +392,19 @@ struct wl3501_join_req {
struct iw_mgmt_data_rset bss_basic_rset;
};
+struct wl3501_join_req {
+ u16 next_blk;
+ u8 sig_id;
+ u8 reserved;
+ struct iw_mgmt_data_rset operational_rset;
+ u16 reserved2;
+ u16 timeout;
+ u16 probe_delay;
+ u8 timestamp[8];
+ u8 local_time[8];
+ struct wl3501_req req;
+};
+
struct wl3501_join_confirm {
u16 next_blk;
u8 sig_id;
@@ -443,16 +447,7 @@ struct wl3501_scan_confirm {
u16 status;
char timestamp[8];
char localtime[8];
- u16 beacon_period;
- u16 dtim_period;
- u16 cap_info;
- u8 bss_type;
- u8 bssid[ETH_ALEN];
- struct iw_mgmt_essid_pset ssid;
- struct iw_mgmt_ds_pset ds_pset;
- struct iw_mgmt_cf_pset cf_pset;
- struct iw_mgmt_ibss_pset ibss_pset;
- struct iw_mgmt_data_rset bss_basic_rset;
+ struct wl3501_req req;
u8 rssi;
};
@@ -471,8 +466,10 @@ struct wl3501_md_req {
u16 size;
u8 pri;
u8 service_class;
- u8 daddr[ETH_ALEN];
- u8 saddr[ETH_ALEN];
+ struct {
+ u8 daddr[ETH_ALEN];
+ u8 saddr[ETH_ALEN];
+ } addr;
};
struct wl3501_md_ind {
@@ -484,8 +481,10 @@ struct wl3501_md_ind {
u8 reception;
u8 pri;
u8 service_class;
- u8 daddr[ETH_ALEN];
- u8 saddr[ETH_ALEN];
+ struct {
+ u8 daddr[ETH_ALEN];
+ u8 saddr[ETH_ALEN];
+ } addr;
};
struct wl3501_md_confirm {
diff --git a/drivers/net/wireless/wl3501_cs.c b/drivers/net/wireless/wl3501_cs.c
index 026e88b..ff1701a 100644
--- a/drivers/net/wireless/wl3501_cs.c
+++ b/drivers/net/wireless/wl3501_cs.c
@@ -471,6 +471,7 @@ static int wl3501_send_pkt(struct wl3501_card *this, u8 *data, u16 len)
struct wl3501_md_req sig = {
.sig_id = WL3501_SIG_MD_REQ,
};
+ size_t sig_addr_len = sizeof(sig.addr);
u8 *pdata = (char *)data;
int rc = -EIO;
@@ -486,9 +487,9 @@ static int wl3501_send_pkt(struct wl3501_card *this, u8 *data, u16 len)
goto out;
}
rc = 0;
- memcpy(&sig.daddr[0], pdata, 12);
- pktlen = len - 12;
- pdata += 12;
+ memcpy(&sig.addr, pdata, sig_addr_len);
+ pktlen = len - sig_addr_len;
+ pdata += sig_addr_len;
sig.data = bf;
if (((*pdata) * 256 + (*(pdata + 1))) > 1500) {
u8 addr4[ETH_ALEN] = {
@@ -591,7 +592,7 @@ static int wl3501_mgmt_join(struct wl3501_card *this, u16 stas)
struct wl3501_join_req sig = {
.sig_id = WL3501_SIG_JOIN_REQ,
.timeout = 10,
- .ds_pset = {
+ .req.ds_pset = {
.el = {
.id = IW_MGMT_INFO_ELEMENT_DS_PARAMETER_SET,
.len = 1,
@@ -600,7 +601,7 @@ static int wl3501_mgmt_join(struct wl3501_card *this, u16 stas)
},
};
- memcpy(&sig.beacon_period, &this->bss_set[stas].beacon_period, 72);
+ memcpy(&sig.req, &this->bss_set[stas].req, sizeof(sig.req));
return wl3501_esbq_exec(this, &sig, sizeof(sig));
}
@@ -668,35 +669,37 @@ static void wl3501_mgmt_scan_confirm(struct wl3501_card *this, u16 addr)
if (sig.status == WL3501_STATUS_SUCCESS) {
pr_debug("success");
if ((this->net_type == IW_MODE_INFRA &&
- (sig.cap_info & WL3501_MGMT_CAPABILITY_ESS)) ||
+ (sig.req.cap_info & WL3501_MGMT_CAPABILITY_ESS)) ||
(this->net_type == IW_MODE_ADHOC &&
- (sig.cap_info & WL3501_MGMT_CAPABILITY_IBSS)) ||
+ (sig.req.cap_info & WL3501_MGMT_CAPABILITY_IBSS)) ||
this->net_type == IW_MODE_AUTO) {
if (!this->essid.el.len)
matchflag = 1;
else if (this->essid.el.len == 3 &&
!memcmp(this->essid.essid, "ANY", 3))
matchflag = 1;
- else if (this->essid.el.len != sig.ssid.el.len)
+ else if (this->essid.el.len != sig.req.ssid.el.len)
matchflag = 0;
- else if (memcmp(this->essid.essid, sig.ssid.essid,
+ else if (memcmp(this->essid.essid, sig.req.ssid.essid,
this->essid.el.len))
matchflag = 0;
else
matchflag = 1;
if (matchflag) {
for (i = 0; i < this->bss_cnt; i++) {
- if (ether_addr_equal_unaligned(this->bss_set[i].bssid, sig.bssid)) {
+ if (ether_addr_equal_unaligned(this->bss_set[i].req.bssid,
+ sig.req.bssid)) {
matchflag = 0;
break;
}
}
}
if (matchflag && (i < 20)) {
- memcpy(&this->bss_set[i].beacon_period,
- &sig.beacon_period, 73);
+ memcpy(&this->bss_set[i].req,
+ &sig.req, sizeof(sig.req));
this->bss_cnt++;
this->rssi = sig.rssi;
+ this->bss_set[i].rssi = sig.rssi;
}
}
} else if (sig.status == WL3501_STATUS_TIMEOUT) {
@@ -888,19 +891,19 @@ static void wl3501_mgmt_join_confirm(struct net_device *dev, u16 addr)
if (this->join_sta_bss < this->bss_cnt) {
const int i = this->join_sta_bss;
memcpy(this->bssid,
- this->bss_set[i].bssid, ETH_ALEN);
- this->chan = this->bss_set[i].ds_pset.chan;
+ this->bss_set[i].req.bssid, ETH_ALEN);
+ this->chan = this->bss_set[i].req.ds_pset.chan;
iw_copy_mgmt_info_element(&this->keep_essid.el,
- &this->bss_set[i].ssid.el);
+ &this->bss_set[i].req.ssid.el);
wl3501_mgmt_auth(this);
}
} else {
const int i = this->join_sta_bss;
- memcpy(&this->bssid, &this->bss_set[i].bssid, ETH_ALEN);
- this->chan = this->bss_set[i].ds_pset.chan;
+ memcpy(&this->bssid, &this->bss_set[i].req.bssid, ETH_ALEN);
+ this->chan = this->bss_set[i].req.ds_pset.chan;
iw_copy_mgmt_info_element(&this->keep_essid.el,
- &this->bss_set[i].ssid.el);
+ &this->bss_set[i].req.ssid.el);
wl3501_online(dev);
}
} else {
@@ -982,7 +985,8 @@ static inline void wl3501_md_ind_interrupt(struct net_device *dev,
} else {
skb->dev = dev;
skb_reserve(skb, 2); /* IP headers on 16 bytes boundaries */
- skb_copy_to_linear_data(skb, (unsigned char *)&sig.daddr, 12);
+ skb_copy_to_linear_data(skb, (unsigned char *)&sig.addr,
+ sizeof(sig.addr));
wl3501_receive(this, skb->data, pkt_len);
skb_put(skb, pkt_len);
skb->protocol = eth_type_trans(skb, dev);
@@ -1573,30 +1577,30 @@ static int wl3501_get_scan(struct net_device *dev, struct iw_request_info *info,
for (i = 0; i < this->bss_cnt; ++i) {
iwe.cmd = SIOCGIWAP;
iwe.u.ap_addr.sa_family = ARPHRD_ETHER;
- memcpy(iwe.u.ap_addr.sa_data, this->bss_set[i].bssid, ETH_ALEN);
+ memcpy(iwe.u.ap_addr.sa_data, this->bss_set[i].req.bssid, ETH_ALEN);
current_ev = iwe_stream_add_event(info, current_ev,
extra + IW_SCAN_MAX_DATA,
&iwe, IW_EV_ADDR_LEN);
iwe.cmd = SIOCGIWESSID;
iwe.u.data.flags = 1;
- iwe.u.data.length = this->bss_set[i].ssid.el.len;
+ iwe.u.data.length = this->bss_set[i].req.ssid.el.len;
current_ev = iwe_stream_add_point(info, current_ev,
extra + IW_SCAN_MAX_DATA,
&iwe,
- this->bss_set[i].ssid.essid);
+ this->bss_set[i].req.ssid.essid);
iwe.cmd = SIOCGIWMODE;
- iwe.u.mode = this->bss_set[i].bss_type;
+ iwe.u.mode = this->bss_set[i].req.bss_type;
current_ev = iwe_stream_add_event(info, current_ev,
extra + IW_SCAN_MAX_DATA,
&iwe, IW_EV_UINT_LEN);
iwe.cmd = SIOCGIWFREQ;
- iwe.u.freq.m = this->bss_set[i].ds_pset.chan;
+ iwe.u.freq.m = this->bss_set[i].req.ds_pset.chan;
iwe.u.freq.e = 0;
current_ev = iwe_stream_add_event(info, current_ev,
extra + IW_SCAN_MAX_DATA,
&iwe, IW_EV_FREQ_LEN);
iwe.cmd = SIOCGIWENCODE;
- if (this->bss_set[i].cap_info & WL3501_MGMT_CAPABILITY_PRIVACY)
+ if (this->bss_set[i].req.cap_info & WL3501_MGMT_CAPABILITY_PRIVACY)
iwe.u.data.flags = IW_ENCODE_ENABLED | IW_ENCODE_NOKEY;
else
iwe.u.data.flags = IW_ENCODE_DISABLED;
diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
index 6f10e09..94d1915 100644
--- a/drivers/net/xen-netback/xenbus.c
+++ b/drivers/net/xen-netback/xenbus.c
@@ -824,11 +824,15 @@ static void connect(struct backend_info *be)
xenvif_carrier_on(be->vif);
unregister_hotplug_status_watch(be);
- err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch, NULL,
- hotplug_status_changed,
- "%s/%s", dev->nodename, "hotplug-status");
- if (!err)
+ if (xenbus_exists(XBT_NIL, dev->nodename, "hotplug-status")) {
+ err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch,
+ NULL, hotplug_status_changed,
+ "%s/%s", dev->nodename,
+ "hotplug-status");
+ if (err)
+ goto err;
be->have_hotplug_status_watch = 1;
+ }
netif_tx_wake_all_queues(be->vif->dev);
diff --git a/drivers/nfc/pn533/pn533.c b/drivers/nfc/pn533/pn533.c
index f7464bd..18e3435 100644
--- a/drivers/nfc/pn533/pn533.c
+++ b/drivers/nfc/pn533/pn533.c
@@ -706,6 +706,9 @@ static bool pn533_target_type_a_is_valid(struct pn533_target_type_a *type_a,
if (PN533_TYPE_A_SEL_CASCADE(type_a->sel_res) != 0)
return false;
+ if (type_a->nfcid_len > NFC_NFCID1_MAXSIZE)
+ return false;
+
return true;
}
diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
index ef23119..e05cc9f 100644
--- a/drivers/nvdimm/region_devs.c
+++ b/drivers/nvdimm/region_devs.c
@@ -1239,6 +1239,11 @@ int nvdimm_has_flush(struct nd_region *nd_region)
|| !IS_ENABLED(CONFIG_ARCH_HAS_PMEM_API))
return -ENXIO;
+ /* Test if an explicit flush function is defined */
+ if (test_bit(ND_REGION_ASYNC, &nd_region->flags) && nd_region->flush)
+ return 1;
+
+ /* Test if any flush hints for the region are available */
for (i = 0; i < nd_region->ndr_mappings; i++) {
struct nd_mapping *nd_mapping = &nd_region->mapping[i];
struct nvdimm *nvdimm = nd_mapping->nvdimm;
@@ -1249,8 +1254,8 @@ int nvdimm_has_flush(struct nd_region *nd_region)
}
/*
- * The platform defines dimm devices without hints, assume
- * platform persistence mechanism like ADR
+ * The platform defines dimm devices without hints or an explicit flush;
+ * assume a platform persistence mechanism like ADR
*/
return 0;
}
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 610d2bc..f520a71 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2622,7 +2622,8 @@ static void nvme_set_latency_tolerance(struct device *dev, s32 val)
if (ctrl->ps_max_latency_us != latency) {
ctrl->ps_max_latency_us = latency;
- nvme_configure_apst(ctrl);
+ if (ctrl->state == NVME_CTRL_LIVE)
+ nvme_configure_apst(ctrl);
}
}
@@ -3130,7 +3131,7 @@ int nvme_init_identify(struct nvme_ctrl *ctrl)
ctrl->hmmaxd = le16_to_cpu(id->hmmaxd);
}
- ret = nvme_mpath_init(ctrl, id);
+ ret = nvme_mpath_init_identify(ctrl, id);
kfree(id);
if (ret < 0)
@@ -4516,6 +4517,7 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
min(default_ps_max_latency_us, (unsigned long)S32_MAX));
nvme_fault_inject_init(&ctrl->fault_inject, dev_name(ctrl->device));
+ nvme_mpath_init_ctrl(ctrl);
return 0;
out_free_name:
diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index 41257da..a0bcec33 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -2460,6 +2460,18 @@ nvme_fc_terminate_exchange(struct request *req, void *data, bool reserved)
static void
__nvme_fc_abort_outstanding_ios(struct nvme_fc_ctrl *ctrl, bool start_queues)
{
+ int q;
+
+ /*
+ * if aborting io, the queues are no longer good, mark them
+ * all as not live.
+ */
+ if (ctrl->ctrl.queue_count > 1) {
+ for (q = 1; q < ctrl->ctrl.queue_count; q++)
+ clear_bit(NVME_FC_Q_LIVE, &ctrl->queues[q].flags);
+ }
+ clear_bit(NVME_FC_Q_LIVE, &ctrl->queues[0].flags);
+
/*
* If io queues are present, stop them and terminate all outstanding
* ios on them. As FC allocates FC exchange for each io, the
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index e812a0d..2747efc 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -667,6 +667,10 @@ void nvme_mpath_add_disk(struct nvme_ns *ns, struct nvme_id_ns *id)
if (desc.state) {
/* found the group desc: update */
nvme_update_ns_ana_state(&desc, ns);
+ } else {
+ /* group desc not found: trigger a re-read */
+ set_bit(NVME_NS_ANA_PENDING, &ns->flags);
+ queue_work(nvme_wq, &ns->ctrl->ana_work);
}
} else {
ns->ana_state = NVME_ANA_OPTIMIZED;
@@ -704,9 +708,18 @@ void nvme_mpath_remove_disk(struct nvme_ns_head *head)
put_disk(head->disk);
}
-int nvme_mpath_init(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id)
+void nvme_mpath_init_ctrl(struct nvme_ctrl *ctrl)
{
- int error;
+ mutex_init(&ctrl->ana_lock);
+ timer_setup(&ctrl->anatt_timer, nvme_anatt_timeout, 0);
+ INIT_WORK(&ctrl->ana_work, nvme_ana_work);
+}
+
+int nvme_mpath_init_identify(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id)
+{
+ size_t max_transfer_size = ctrl->max_hw_sectors << SECTOR_SHIFT;
+ size_t ana_log_size;
+ int error = 0;
/* check if multipath is enabled and we have the capability */
if (!multipath || !ctrl->subsys ||
@@ -718,37 +731,31 @@ int nvme_mpath_init(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id)
ctrl->nanagrpid = le32_to_cpu(id->nanagrpid);
ctrl->anagrpmax = le32_to_cpu(id->anagrpmax);
- mutex_init(&ctrl->ana_lock);
- timer_setup(&ctrl->anatt_timer, nvme_anatt_timeout, 0);
- ctrl->ana_log_size = sizeof(struct nvme_ana_rsp_hdr) +
- ctrl->nanagrpid * sizeof(struct nvme_ana_group_desc);
- ctrl->ana_log_size += ctrl->max_namespaces * sizeof(__le32);
-
- if (ctrl->ana_log_size > ctrl->max_hw_sectors << SECTOR_SHIFT) {
+ ana_log_size = sizeof(struct nvme_ana_rsp_hdr) +
+ ctrl->nanagrpid * sizeof(struct nvme_ana_group_desc) +
+ ctrl->max_namespaces * sizeof(__le32);
+ if (ana_log_size > max_transfer_size) {
dev_err(ctrl->device,
- "ANA log page size (%zd) larger than MDTS (%d).\n",
- ctrl->ana_log_size,
- ctrl->max_hw_sectors << SECTOR_SHIFT);
+ "ANA log page size (%zd) larger than MDTS (%zd).\n",
+ ana_log_size, max_transfer_size);
dev_err(ctrl->device, "disabling ANA support.\n");
- return 0;
+ goto out_uninit;
}
-
- INIT_WORK(&ctrl->ana_work, nvme_ana_work);
- kfree(ctrl->ana_log_buf);
- ctrl->ana_log_buf = kmalloc(ctrl->ana_log_size, GFP_KERNEL);
- if (!ctrl->ana_log_buf) {
- error = -ENOMEM;
- goto out;
+ if (ana_log_size > ctrl->ana_log_size) {
+ nvme_mpath_stop(ctrl);
+ kfree(ctrl->ana_log_buf);
+ ctrl->ana_log_buf = kmalloc(ana_log_size, GFP_KERNEL);
+ if (!ctrl->ana_log_buf)
+ return -ENOMEM;
}
-
+ ctrl->ana_log_size = ana_log_size;
error = nvme_read_ana_log(ctrl);
if (error)
- goto out_free_ana_log_buf;
+ goto out_uninit;
return 0;
-out_free_ana_log_buf:
- kfree(ctrl->ana_log_buf);
- ctrl->ana_log_buf = NULL;
-out:
+
+out_uninit:
+ nvme_mpath_uninit(ctrl);
return error;
}
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index f843540..3cb3c82 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -654,7 +654,8 @@ void nvme_kick_requeue_lists(struct nvme_ctrl *ctrl);
int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl,struct nvme_ns_head *head);
void nvme_mpath_add_disk(struct nvme_ns *ns, struct nvme_id_ns *id);
void nvme_mpath_remove_disk(struct nvme_ns_head *head);
-int nvme_mpath_init(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id);
+int nvme_mpath_init_identify(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id);
+void nvme_mpath_init_ctrl(struct nvme_ctrl *ctrl);
void nvme_mpath_uninit(struct nvme_ctrl *ctrl);
void nvme_mpath_stop(struct nvme_ctrl *ctrl);
bool nvme_mpath_clear_current_path(struct nvme_ns *ns);
@@ -730,7 +731,10 @@ static inline void nvme_trace_bio_complete(struct request *req,
blk_status_t status)
{
}
-static inline int nvme_mpath_init(struct nvme_ctrl *ctrl,
+static inline void nvme_mpath_init_ctrl(struct nvme_ctrl *ctrl)
+{
+}
+static inline int nvme_mpath_init_identify(struct nvme_ctrl *ctrl,
struct nvme_id_ctrl *id)
{
if (ctrl->subsys->cmic & (1 << 3))
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 4dca58f..c1f34462 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -852,7 +852,7 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req,
return nvme_setup_prp_simple(dev, req,
&cmnd->rw, &bv);
- if (iod->nvmeq->qid &&
+ if (iod->nvmeq->qid && sgl_threshold &&
dev->ctrl.sgls & ((1 << 0) | (1 << 1)))
return nvme_setup_sgl_simple(dev, req,
&cmnd->rw, &bv);
@@ -2634,6 +2634,7 @@ static void nvme_reset_work(struct work_struct *work)
* Don't limit the IOMMU merged segment size.
*/
dma_set_max_seg_size(dev->dev, 0xffffffff);
+ dma_set_min_align_mask(dev->dev, NVME_CTRL_PAGE_SIZE - 1);
mutex_unlock(&dev->shutdown_lock);
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 9444e5e2..82b2611 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -874,7 +874,7 @@ static void nvme_tcp_state_change(struct sock *sk)
{
struct nvme_tcp_queue *queue;
- read_lock(&sk->sk_callback_lock);
+ read_lock_bh(&sk->sk_callback_lock);
queue = sk->sk_user_data;
if (!queue)
goto done;
@@ -895,7 +895,7 @@ static void nvme_tcp_state_change(struct sock *sk)
queue->state_change(sk);
done:
- read_unlock(&sk->sk_callback_lock);
+ read_unlock_bh(&sk->sk_callback_lock);
}
static inline bool nvme_tcp_queue_more(struct nvme_tcp_queue *queue)
@@ -940,7 +940,6 @@ static int nvme_tcp_try_send_data(struct nvme_tcp_request *req)
if (ret <= 0)
return ret;
- nvme_tcp_advance_req(req, ret);
if (queue->data_digest)
nvme_tcp_ddgst_update(queue->snd_hash, page,
offset, ret);
@@ -957,6 +956,7 @@ static int nvme_tcp_try_send_data(struct nvme_tcp_request *req)
}
return 1;
}
+ nvme_tcp_advance_req(req, ret);
}
return -EAGAIN;
}
@@ -1140,7 +1140,8 @@ static void nvme_tcp_io_work(struct work_struct *w)
pending = true;
else if (unlikely(result < 0))
break;
- }
+ } else
+ pending = !llist_empty(&queue->req_list);
result = nvme_tcp_try_recv(queue);
if (result > 0)
diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index e20dea5..6a8274c 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -313,7 +313,7 @@ static void nvmet_execute_get_log_page(struct nvmet_req *req)
case NVME_LOG_ANA:
return nvmet_execute_get_log_page_ana(req);
}
- pr_err("unhandled lid %d on qid %d\n",
+ pr_debug("unhandled lid %d on qid %d\n",
req->cmd->get_log_page.lid, req->sq->qid);
req->error_loc = offsetof(struct nvme_get_log_page_command, lid);
nvmet_req_complete(req, NVME_SC_INVALID_FIELD | NVME_SC_DNR);
@@ -657,7 +657,7 @@ static void nvmet_execute_identify(struct nvmet_req *req)
return nvmet_execute_identify_desclist(req);
}
- pr_err("unhandled identify cns %d on qid %d\n",
+ pr_debug("unhandled identify cns %d on qid %d\n",
req->cmd->identify.cns, req->sq->qid);
req->error_loc = offsetof(struct nvme_identify, cns);
nvmet_req_complete(req, NVME_SC_INVALID_FIELD | NVME_SC_DNR);
@@ -972,7 +972,7 @@ u16 nvmet_parse_admin_cmd(struct nvmet_req *req)
return 0;
}
- pr_err("unhandled cmd %d on qid %d\n", cmd->common.opcode,
+ pr_debug("unhandled cmd %d on qid %d\n", cmd->common.opcode,
req->sq->qid);
req->error_loc = offsetof(struct nvme_common_command, opcode);
return NVME_SC_INVALID_OPCODE | NVME_SC_DNR;
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 1e79d33..46e4f7e 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -757,8 +757,6 @@ void nvmet_cq_setup(struct nvmet_ctrl *ctrl, struct nvmet_cq *cq,
{
cq->qid = qid;
cq->size = size;
-
- ctrl->cqs[qid] = cq;
}
void nvmet_sq_setup(struct nvmet_ctrl *ctrl, struct nvmet_sq *sq,
@@ -1355,20 +1353,14 @@ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
if (!ctrl->changed_ns_list)
goto out_free_ctrl;
- ctrl->cqs = kcalloc(subsys->max_qid + 1,
- sizeof(struct nvmet_cq *),
- GFP_KERNEL);
- if (!ctrl->cqs)
- goto out_free_changed_ns_list;
-
ctrl->sqs = kcalloc(subsys->max_qid + 1,
sizeof(struct nvmet_sq *),
GFP_KERNEL);
if (!ctrl->sqs)
- goto out_free_cqs;
+ goto out_free_changed_ns_list;
if (subsys->cntlid_min > subsys->cntlid_max)
- goto out_free_cqs;
+ goto out_free_sqs;
ret = ida_simple_get(&cntlid_ida,
subsys->cntlid_min, subsys->cntlid_max,
@@ -1406,8 +1398,6 @@ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
out_free_sqs:
kfree(ctrl->sqs);
-out_free_cqs:
- kfree(ctrl->cqs);
out_free_changed_ns_list:
kfree(ctrl->changed_ns_list);
out_free_ctrl:
@@ -1437,7 +1427,6 @@ static void nvmet_ctrl_free(struct kref *ref)
nvmet_async_events_free(ctrl);
kfree(ctrl->sqs);
- kfree(ctrl->cqs);
kfree(ctrl->changed_ns_list);
kfree(ctrl);
diff --git a/drivers/nvme/target/discovery.c b/drivers/nvme/target/discovery.c
index f40c05c..5b8ee82 100644
--- a/drivers/nvme/target/discovery.c
+++ b/drivers/nvme/target/discovery.c
@@ -177,12 +177,14 @@ static void nvmet_execute_disc_get_log_page(struct nvmet_req *req)
if (req->cmd->get_log_page.lid != NVME_LOG_DISC) {
req->error_loc =
offsetof(struct nvme_get_log_page_command, lid);
- status = NVME_SC_INVALID_OPCODE | NVME_SC_DNR;
+ status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
goto out;
}
/* Spec requires dword aligned offsets */
if (offset & 0x3) {
+ req->error_loc =
+ offsetof(struct nvme_get_log_page_command, lpo);
status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
goto out;
}
@@ -249,7 +251,7 @@ static void nvmet_execute_disc_identify(struct nvmet_req *req)
if (req->cmd->identify.cns != NVME_ID_CNS_CTRL) {
req->error_loc = offsetof(struct nvme_identify, cns);
- status = NVME_SC_INVALID_OPCODE | NVME_SC_DNR;
+ status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
goto out;
}
diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index 125dde3..6a9626f 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -256,10 +256,9 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
if (is_pci_p2pdma_page(sg_page(req->sg)))
op |= REQ_NOMERGE;
- sector = le64_to_cpu(req->cmd->rw.slba);
- sector <<= (req->ns->blksize_shift - 9);
+ sector = nvmet_lba_to_sect(req->ns, req->cmd->rw.slba);
- if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
+ if (nvmet_use_inline_bvec(req)) {
bio = &req->b.inline_bio;
bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
} else {
@@ -345,7 +344,7 @@ static u16 nvmet_bdev_discard_range(struct nvmet_req *req,
int ret;
ret = __blkdev_issue_discard(ns->bdev,
- le64_to_cpu(range->slba) << (ns->blksize_shift - 9),
+ nvmet_lba_to_sect(ns, range->slba),
le32_to_cpu(range->nlb) << (ns->blksize_shift - 9),
GFP_KERNEL, 0, bio);
if (ret && ret != -EOPNOTSUPP) {
@@ -414,8 +413,7 @@ static void nvmet_bdev_execute_write_zeroes(struct nvmet_req *req)
if (!nvmet_check_transfer_len(req, 0))
return;
- sector = le64_to_cpu(write_zeroes->slba) <<
- (req->ns->blksize_shift - 9);
+ sector = nvmet_lba_to_sect(req->ns, write_zeroes->slba);
nr_sector = (((sector_t)le16_to_cpu(write_zeroes->length) + 1) <<
(req->ns->blksize_shift - 9));
diff --git a/drivers/nvme/target/io-cmd-file.c b/drivers/nvme/target/io-cmd-file.c
index 0abbefd..b575997 100644
--- a/drivers/nvme/target/io-cmd-file.c
+++ b/drivers/nvme/target/io-cmd-file.c
@@ -49,9 +49,11 @@ int nvmet_file_ns_enable(struct nvmet_ns *ns)
ns->file = filp_open(ns->device_path, flags, 0);
if (IS_ERR(ns->file)) {
- pr_err("failed to open file %s: (%ld)\n",
- ns->device_path, PTR_ERR(ns->file));
- return PTR_ERR(ns->file);
+ ret = PTR_ERR(ns->file);
+ pr_err("failed to open file %s: (%d)\n",
+ ns->device_path, ret);
+ ns->file = NULL;
+ return ret;
}
ret = nvmet_file_ns_revalidate(ns);
diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
index f6d8123..b869b68 100644
--- a/drivers/nvme/target/loop.c
+++ b/drivers/nvme/target/loop.c
@@ -578,8 +578,10 @@ static struct nvme_ctrl *nvme_loop_create_ctrl(struct device *dev,
ret = nvme_init_ctrl(&ctrl->ctrl, dev, &nvme_loop_ctrl_ops,
0 /* no quirks, we're perfect! */);
- if (ret)
+ if (ret) {
+ kfree(ctrl);
goto out;
+ }
if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING))
WARN_ON_ONCE(1);
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 559a15c..ea96487 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -164,7 +164,6 @@ static inline struct nvmet_port *ana_groups_to_port(
struct nvmet_ctrl {
struct nvmet_subsys *subsys;
- struct nvmet_cq **cqs;
struct nvmet_sq **sqs;
bool cmd_seen;
@@ -601,4 +600,20 @@ static inline bool nvmet_ns_has_pi(struct nvmet_ns *ns)
return ns->pi_type && ns->metadata_size == sizeof(struct t10_pi_tuple);
}
+static inline __le64 nvmet_sect_to_lba(struct nvmet_ns *ns, sector_t sect)
+{
+ return cpu_to_le64(sect >> (ns->blksize_shift - SECTOR_SHIFT));
+}
+
+static inline sector_t nvmet_lba_to_sect(struct nvmet_ns *ns, __le64 lba)
+{
+ return le64_to_cpu(lba) << (ns->blksize_shift - SECTOR_SHIFT);
+}
+
+static inline bool nvmet_use_inline_bvec(struct nvmet_req *req)
+{
+ return req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN &&
+ req->sg_cnt <= NVMET_MAX_INLINE_BIOVEC;
+}
+
#endif /* _NVMET_H */
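The nvmet_sect_to_lba()/nvmet_lba_to_sect() helpers above centralise the shift by (blksize_shift - SECTOR_SHIFT) that io-cmd-bdev.c previously open-coded. A small illustrative sketch of the same arithmetic, with the little-endian conversions omitted:

#include <stdint.h>

#define SECTOR_SHIFT 9	/* 512-byte sectors */

/*
 * With 4 KiB blocks, blksize_shift is 12, so one LBA covers
 * 2^(12 - 9) = 8 sectors: LBA 10 starts at sector 80.
 */
static uint64_t lba_to_sect(uint64_t lba, unsigned int blksize_shift)
{
	return lba << (blksize_shift - SECTOR_SHIFT);
}

static uint64_t sect_to_lba(uint64_t sect, unsigned int blksize_shift)
{
	return sect >> (blksize_shift - SECTOR_SHIFT);
}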
diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index 6c1f3ab..7d607f4 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -700,7 +700,7 @@ static void nvmet_rdma_send_done(struct ib_cq *cq, struct ib_wc *wc)
{
struct nvmet_rdma_rsp *rsp =
container_of(wc->wr_cqe, struct nvmet_rdma_rsp, send_cqe);
- struct nvmet_rdma_queue *queue = cq->cq_context;
+ struct nvmet_rdma_queue *queue = wc->qp->qp_context;
nvmet_rdma_release_rsp(rsp);
@@ -786,7 +786,7 @@ static void nvmet_rdma_write_data_done(struct ib_cq *cq, struct ib_wc *wc)
{
struct nvmet_rdma_rsp *rsp =
container_of(wc->wr_cqe, struct nvmet_rdma_rsp, write_cqe);
- struct nvmet_rdma_queue *queue = cq->cq_context;
+ struct nvmet_rdma_queue *queue = wc->qp->qp_context;
struct rdma_cm_id *cm_id = rsp->queue->cm_id;
u16 status;
diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index d658c6e..4df4f37 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -525,11 +525,36 @@ static void nvmet_tcp_queue_response(struct nvmet_req *req)
struct nvmet_tcp_cmd *cmd =
container_of(req, struct nvmet_tcp_cmd, req);
struct nvmet_tcp_queue *queue = cmd->queue;
+ struct nvme_sgl_desc *sgl;
+ u32 len;
+
+ if (unlikely(cmd == queue->cmd)) {
+ sgl = &cmd->req.cmd->common.dptr.sgl;
+ len = le32_to_cpu(sgl->length);
+
+ /*
+ * Wait for inline data before processing the response.
+ * Avoid using helpers, this might happen before
+ * nvmet_req_init is completed.
+ */
+ if (queue->rcv_state == NVMET_TCP_RECV_PDU &&
+ len && len <= cmd->req.port->inline_data_size &&
+ nvme_is_write(cmd->req.cmd))
+ return;
+ }
llist_add(&cmd->lentry, &queue->resp_list);
queue_work_on(queue_cpu(queue), nvmet_tcp_wq, &cmd->queue->io_work);
}
+static void nvmet_tcp_execute_request(struct nvmet_tcp_cmd *cmd)
+{
+ if (unlikely(cmd->flags & NVMET_TCP_F_INIT_FAILED))
+ nvmet_tcp_queue_response(&cmd->req);
+ else
+ cmd->req.execute(&cmd->req);
+}
+
static int nvmet_try_send_data_pdu(struct nvmet_tcp_cmd *cmd)
{
u8 hdgst = nvmet_tcp_hdgst_len(cmd->queue);
@@ -961,7 +986,7 @@ static int nvmet_tcp_done_recv_pdu(struct nvmet_tcp_queue *queue)
le32_to_cpu(req->cmd->common.dptr.sgl.length));
nvmet_tcp_handle_req_failure(queue, queue->cmd, req);
- return -EAGAIN;
+ return 0;
}
ret = nvmet_tcp_map_data(queue->cmd);
@@ -1104,10 +1129,8 @@ static int nvmet_tcp_try_recv_data(struct nvmet_tcp_queue *queue)
return 0;
}
- if (!(cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
- cmd->rbytes_done == cmd->req.transfer_len) {
- cmd->req.execute(&cmd->req);
- }
+ if (cmd->rbytes_done == cmd->req.transfer_len)
+ nvmet_tcp_execute_request(cmd);
nvmet_prepare_receive_pdu(queue);
return 0;
@@ -1144,9 +1167,9 @@ static int nvmet_tcp_try_recv_ddgst(struct nvmet_tcp_queue *queue)
goto out;
}
- if (!(cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
- cmd->rbytes_done == cmd->req.transfer_len)
- cmd->req.execute(&cmd->req);
+ if (cmd->rbytes_done == cmd->req.transfer_len)
+ nvmet_tcp_execute_request(cmd);
+
ret = 0;
out:
nvmet_prepare_receive_pdu(queue);
@@ -1434,7 +1457,7 @@ static void nvmet_tcp_state_change(struct sock *sk)
{
struct nvmet_tcp_queue *queue;
- write_lock_bh(&sk->sk_callback_lock);
+ read_lock_bh(&sk->sk_callback_lock);
queue = sk->sk_user_data;
if (!queue)
goto done;
@@ -1452,7 +1475,7 @@ static void nvmet_tcp_state_change(struct sock *sk)
queue->idx, sk->sk_state);
}
done:
- write_unlock_bh(&sk->sk_callback_lock);
+ read_unlock_bh(&sk->sk_callback_lock);
}
static int nvmet_tcp_set_queue_sock(struct nvmet_tcp_queue *queue)
diff --git a/drivers/nvmem/qfprom.c b/drivers/nvmem/qfprom.c
index 5e9e60e..955b8b8 100644
--- a/drivers/nvmem/qfprom.c
+++ b/drivers/nvmem/qfprom.c
@@ -104,6 +104,16 @@ static void qfprom_disable_fuse_blowing(const struct qfprom_priv *priv,
{
int ret;
+ /*
+ * This may be a shared rail and may be able to run at a lower rate
+ * when we're not blowing fuses. At the moment, the regulator framework
+ * applies voltage constraints even on disabled rails, so remove our
+ * constraints and allow the rail to be adjusted by other users.
+ */
+ ret = regulator_set_voltage(priv->vcc, 0, INT_MAX);
+ if (ret)
+ dev_warn(priv->dev, "Failed to set 0 voltage (ignoring)\n");
+
ret = regulator_disable(priv->vcc);
if (ret)
dev_warn(priv->dev, "Failed to disable regulator (ignoring)\n");
@@ -149,6 +159,17 @@ static int qfprom_enable_fuse_blowing(const struct qfprom_priv *priv,
goto err_clk_prepared;
}
+ /*
+ * Hardware requires 1.8V min for fuse blowing; this may be
+ * a shared rail so don't specify a max; the regulator
+ * constraints will handle it.
+ */
+ ret = regulator_set_voltage(priv->vcc, 1800000, INT_MAX);
+ if (ret) {
+ dev_err(priv->dev, "Failed to set 1.8 voltage\n");
+ goto err_clk_rate_set;
+ }
+
ret = regulator_enable(priv->vcc);
if (ret) {
dev_err(priv->dev, "Failed to enable regulator\n");
diff --git a/drivers/of/overlay.c b/drivers/of/overlay.c
index 50bbe0e..43a77d72 100644
--- a/drivers/of/overlay.c
+++ b/drivers/of/overlay.c
@@ -796,6 +796,7 @@ static int init_overlay_changeset(struct overlay_changeset *ovcs,
if (!fragment->target) {
of_node_put(fragment->overlay);
ret = -EINVAL;
+ of_node_put(node);
goto err_free_fragments;
}
diff --git a/drivers/of/property.c b/drivers/of/property.c
index 408a7b5..1d7d24e 100644
--- a/drivers/of/property.c
+++ b/drivers/of/property.c
@@ -1308,7 +1308,16 @@ DEFINE_SIMPLE_PROP(pinctrl7, "pinctrl-7", NULL)
DEFINE_SIMPLE_PROP(pinctrl8, "pinctrl-8", NULL)
DEFINE_SUFFIX_PROP(regulators, "-supply", NULL)
DEFINE_SUFFIX_PROP(gpio, "-gpio", "#gpio-cells")
-DEFINE_SUFFIX_PROP(gpios, "-gpios", "#gpio-cells")
+
+static struct device_node *parse_gpios(struct device_node *np,
+ const char *prop_name, int index)
+{
+ if (!strcmp_suffix(prop_name, ",nr-gpios"))
+ return NULL;
+
+ return parse_suffix_prop_cells(np, prop_name, index, "-gpios",
+ "#gpio-cells");
+}
static struct device_node *parse_iommu_maps(struct device_node *np,
const char *prop_name, int index)
diff --git a/drivers/pci/controller/dwc/pci-keystone.c b/drivers/pci/controller/dwc/pci-keystone.c
index a222728..90482d5 100644
--- a/drivers/pci/controller/dwc/pci-keystone.c
+++ b/drivers/pci/controller/dwc/pci-keystone.c
@@ -811,7 +811,8 @@ static int __init ks_pcie_host_init(struct pcie_port *pp)
int ret;
pp->bridge->ops = &ks_pcie_ops;
- pp->bridge->child_ops = &ks_child_pcie_ops;
+ if (!ks_pcie->is_am6)
+ pp->bridge->child_ops = &ks_child_pcie_ops;
ret = ks_pcie_config_legacy_irq(ks_pcie);
if (ret)
diff --git a/drivers/pci/controller/dwc/pcie-tegra194.c b/drivers/pci/controller/dwc/pcie-tegra194.c
index f920e7e..d788f4d 100644
--- a/drivers/pci/controller/dwc/pcie-tegra194.c
+++ b/drivers/pci/controller/dwc/pcie-tegra194.c
@@ -1660,7 +1660,7 @@ static void pex_ep_event_pex_rst_deassert(struct tegra_pcie_dw *pcie)
if (pcie->ep_state == EP_STATE_ENABLED)
return;
- ret = pm_runtime_get_sync(dev);
+ ret = pm_runtime_resume_and_get(dev);
if (ret < 0) {
dev_err(dev, "Failed to get runtime sync for PCIe dev: %d\n",
ret);
diff --git a/drivers/pci/controller/pci-thunder-ecam.c b/drivers/pci/controller/pci-thunder-ecam.c
index 7e8835f..d793958 100644
--- a/drivers/pci/controller/pci-thunder-ecam.c
+++ b/drivers/pci/controller/pci-thunder-ecam.c
@@ -116,7 +116,7 @@ static int thunder_ecam_p2_config_read(struct pci_bus *bus, unsigned int devfn,
* the config space access window. Since we are working with
* the high-order 32 bits, shift everything down by 32 bits.
*/
- node_bits = (cfg->res.start >> 32) & (1 << 12);
+ node_bits = upper_32_bits(cfg->res.start) & (1 << 12);
v |= node_bits;
set_val(v, where, size, val);
diff --git a/drivers/pci/controller/pci-thunder-pem.c b/drivers/pci/controller/pci-thunder-pem.c
index 3f84796..4b12dd4 100644
--- a/drivers/pci/controller/pci-thunder-pem.c
+++ b/drivers/pci/controller/pci-thunder-pem.c
@@ -12,6 +12,7 @@
#include <linux/pci-acpi.h>
#include <linux/pci-ecam.h>
#include <linux/platform_device.h>
+#include <linux/io-64-nonatomic-lo-hi.h>
#include "../pci.h"
#if defined(CONFIG_PCI_HOST_THUNDER_PEM) || (defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS))
@@ -315,9 +316,9 @@ static int thunder_pem_init(struct device *dev, struct pci_config_window *cfg,
* structure here for the BAR.
*/
bar4_start = res_pem->start + 0xf00000;
- pem_pci->ea_entry[0] = (u32)bar4_start | 2;
- pem_pci->ea_entry[1] = (u32)(res_pem->end - bar4_start) & ~3u;
- pem_pci->ea_entry[2] = (u32)(bar4_start >> 32);
+ pem_pci->ea_entry[0] = lower_32_bits(bar4_start) | 2;
+ pem_pci->ea_entry[1] = lower_32_bits(res_pem->end - bar4_start) & ~3u;
+ pem_pci->ea_entry[2] = upper_32_bits(bar4_start);
cfg->priv = pem_pci;
return 0;
@@ -325,9 +326,9 @@ static int thunder_pem_init(struct device *dev, struct pci_config_window *cfg,
#if defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS)
-#define PEM_RES_BASE 0x87e0c0000000UL
-#define PEM_NODE_MASK GENMASK(45, 44)
-#define PEM_INDX_MASK GENMASK(26, 24)
+#define PEM_RES_BASE 0x87e0c0000000ULL
+#define PEM_NODE_MASK GENMASK_ULL(45, 44)
+#define PEM_INDX_MASK GENMASK_ULL(26, 24)
#define PEM_MIN_DOM_IN_NODE 4
#define PEM_MAX_DOM_IN_NODE 10
diff --git a/drivers/pci/controller/pci-xgene.c b/drivers/pci/controller/pci-xgene.c
index 8e0db84..c33b385 100644
--- a/drivers/pci/controller/pci-xgene.c
+++ b/drivers/pci/controller/pci-xgene.c
@@ -355,7 +355,8 @@ static int xgene_pcie_map_reg(struct xgene_pcie_port *port,
if (IS_ERR(port->csr_base))
return PTR_ERR(port->csr_base);
- port->cfg_base = devm_platform_ioremap_resource_byname(pdev, "cfg");
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "cfg");
+ port->cfg_base = devm_ioremap_resource(dev, res);
if (IS_ERR(port->cfg_base))
return PTR_ERR(port->cfg_base);
port->cfg_addr = res->start;
diff --git a/drivers/pci/controller/pcie-iproc-msi.c b/drivers/pci/controller/pcie-iproc-msi.c
index 908475d..eede4e8 100644
--- a/drivers/pci/controller/pcie-iproc-msi.c
+++ b/drivers/pci/controller/pcie-iproc-msi.c
@@ -271,7 +271,7 @@ static int iproc_msi_irq_domain_alloc(struct irq_domain *domain,
NULL, NULL);
}
- return hwirq;
+ return 0;
}
static void iproc_msi_irq_domain_free(struct irq_domain *domain,
diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/endpoint/functions/pci-epf-test.c
index e4e51d8..d415707 100644
--- a/drivers/pci/endpoint/functions/pci-epf-test.c
+++ b/drivers/pci/endpoint/functions/pci-epf-test.c
@@ -830,13 +830,18 @@ static int pci_epf_test_bind(struct pci_epf *epf)
return -EINVAL;
epc_features = pci_epc_get_features(epc, epf->func_no);
- if (epc_features) {
- linkup_notifier = epc_features->linkup_notifier;
- core_init_notifier = epc_features->core_init_notifier;
- test_reg_bar = pci_epc_get_first_free_bar(epc_features);
- pci_epf_configure_bar(epf, epc_features);
+ if (!epc_features) {
+ dev_err(&epf->dev, "epc_features not implemented\n");
+ return -EOPNOTSUPP;
}
+ linkup_notifier = epc_features->linkup_notifier;
+ core_init_notifier = epc_features->core_init_notifier;
+ test_reg_bar = pci_epc_get_first_free_bar(epc_features);
+ if (test_reg_bar < 0)
+ return -EINVAL;
+ pci_epf_configure_bar(epf, epc_features);
+
epf_test->test_reg_bar = test_reg_bar;
epf_test->epc_features = epc_features;
@@ -917,6 +922,7 @@ static int __init pci_epf_test_init(void)
ret = pci_epf_register_driver(&test_driver);
if (ret) {
+ destroy_workqueue(kpcitest_workqueue);
pr_err("Failed to register pci epf test driver --> %d\n", ret);
return ret;
}
@@ -927,6 +933,8 @@ module_init(pci_epf_test_init);
static void __exit pci_epf_test_exit(void)
{
+ if (kpcitest_workqueue)
+ destroy_workqueue(kpcitest_workqueue);
pci_epf_unregister_driver(&test_driver);
}
module_exit(pci_epf_test_exit);
diff --git a/drivers/pci/endpoint/pci-epc-core.c b/drivers/pci/endpoint/pci-epc-core.c
index cadd3db..ea7e746 100644
--- a/drivers/pci/endpoint/pci-epc-core.c
+++ b/drivers/pci/endpoint/pci-epc-core.c
@@ -87,24 +87,50 @@ EXPORT_SYMBOL_GPL(pci_epc_get);
* pci_epc_get_first_free_bar() - helper to get first unreserved BAR
* @epc_features: pci_epc_features structure that holds the reserved bar bitmap
*
- * Invoke to get the first unreserved BAR that can be used for endpoint
+ * Invoke to get the first unreserved BAR that can be used by the endpoint
* function. For any incorrect value in reserved_bar return '0'.
*/
-unsigned int pci_epc_get_first_free_bar(const struct pci_epc_features
- *epc_features)
+enum pci_barno
+pci_epc_get_first_free_bar(const struct pci_epc_features *epc_features)
{
- int free_bar;
+ return pci_epc_get_next_free_bar(epc_features, BAR_0);
+}
+EXPORT_SYMBOL_GPL(pci_epc_get_first_free_bar);
+
+/**
+ * pci_epc_get_next_free_bar() - helper to get unreserved BAR starting from @bar
+ * @epc_features: pci_epc_features structure that holds the reserved bar bitmap
+ * @bar: the starting BAR number from where unreserved BAR should be searched
+ *
+ * Invoke to get the next unreserved BAR starting from @bar that can be used
+ * for endpoint function. For any incorrect value in reserved_bar return '0'.
+ */
+enum pci_barno pci_epc_get_next_free_bar(const struct pci_epc_features
+ *epc_features, enum pci_barno bar)
+{
+ unsigned long free_bar;
if (!epc_features)
- return 0;
+ return BAR_0;
- free_bar = ffz(epc_features->reserved_bar);
+ /* If 'bar - 1' is a 64-bit BAR, move to the next BAR */
+ if ((epc_features->bar_fixed_64bit << 1) & 1 << bar)
+ bar++;
+
+ /* Find if the reserved BAR is also a 64-bit BAR */
+ free_bar = epc_features->reserved_bar & epc_features->bar_fixed_64bit;
+
+ /* Set the adjacent bit if the reserved BAR is also a 64-bit BAR */
+ free_bar <<= 1;
+ free_bar |= epc_features->reserved_bar;
+
+ free_bar = find_next_zero_bit(&free_bar, 6, bar);
if (free_bar > 5)
- return 0;
+ return NO_BAR;
return free_bar;
}
-EXPORT_SYMBOL_GPL(pci_epc_get_first_free_bar);
+EXPORT_SYMBOL_GPL(pci_epc_get_next_free_bar);
/**
* pci_epc_get_features() - get the features supported by EPC
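pci_epc_get_next_free_bar() above folds the 64-bit BAR bitmap into the reserved-BAR bitmap so that the slot following a reserved 64-bit BAR is skipped as well. A hedged, standalone sketch of that mask construction with one worked case (names are illustrative):

/*
 * reserved: bitmap of BARs the endpoint controller keeps for itself.
 * fixed64:  bitmap of BARs that are hardwired as 64-bit; each one also
 *           consumes the following BAR slot.
 *
 * Example: reserved = 0x01 and fixed64 = 0x01 means BAR0 is a reserved
 * 64-bit BAR, so BAR1 is unusable too and the first free BAR is BAR2.
 */
static int first_free_bar(unsigned int reserved, unsigned int fixed64)
{
	unsigned int busy = reserved & fixed64;	/* reserved 64-bit BARs */

	busy <<= 1;			/* block their upper halves */
	busy |= reserved;		/* plus every reserved BAR  */

	for (int bar = 0; bar < 6; bar++)
		if (!(busy & (1u << bar)))
			return bar;
	return -1;			/* no free BAR */
}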
diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c
index 3365c93..f031302 100644
--- a/drivers/pci/hotplug/acpiphp_glue.c
+++ b/drivers/pci/hotplug/acpiphp_glue.c
@@ -533,6 +533,7 @@ static void enable_slot(struct acpiphp_slot *slot, bool bridge)
slot->flags &= ~SLOT_ENABLED;
continue;
}
+ pci_dev_put(dev);
}
}
diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index 9e971ff..d5d9ea8 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -1874,20 +1874,10 @@ static int pci_enable_device_flags(struct pci_dev *dev, unsigned long flags)
int err;
int i, bars = 0;
- /*
- * Power state could be unknown at this point, either due to a fresh
- * boot or a device removal call. So get the current power state
- * so that things like MSI message writing will behave as expected
- * (e.g. if the device really is in D0 at enable time).
- */
- if (dev->pm_cap) {
- u16 pmcsr;
- pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr);
- dev->current_state = (pmcsr & PCI_PM_CTRL_STATE_MASK);
- }
-
- if (atomic_inc_return(&dev->enable_cnt) > 1)
+ if (atomic_inc_return(&dev->enable_cnt) > 1) {
+ pci_update_current_state(dev, dev->current_state);
return 0; /* already enabled */
+ }
bridge = pci_upstream_bridge(dev);
if (bridge)
diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
index f86cae9..09ebc13 100644
--- a/drivers/pci/pci.h
+++ b/drivers/pci/pci.h
@@ -606,6 +606,12 @@ static inline int pci_dev_specific_reset(struct pci_dev *dev, int probe)
#if defined(CONFIG_PCI_QUIRKS) && defined(CONFIG_ARM64)
int acpi_get_rc_resources(struct device *dev, const char *hid, u16 segment,
struct resource *res);
+#else
+static inline int acpi_get_rc_resources(struct device *dev, const char *hid,
+ u16 segment, struct resource *res)
+{
+ return -ENODEV;
+}
#endif
u32 pci_rebar_get_possible_sizes(struct pci_dev *pdev, int bar);
diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
index 4289030..ece90a2 100644
--- a/drivers/pci/probe.c
+++ b/drivers/pci/probe.c
@@ -2367,6 +2367,7 @@ static struct pci_dev *pci_scan_device(struct pci_bus *bus, int devfn)
pci_set_of_node(dev);
if (pci_setup_device(dev)) {
+ pci_release_of_node(dev);
pci_bus_put(dev->bus);
kfree(dev);
return NULL;
diff --git a/drivers/pci/vpd.c b/drivers/pci/vpd.c
index 7915d10..bd54907 100644
--- a/drivers/pci/vpd.c
+++ b/drivers/pci/vpd.c
@@ -570,7 +570,6 @@ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x005d, quirk_blacklist_vpd);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x005f, quirk_blacklist_vpd);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATTANSIC, PCI_ANY_ID,
quirk_blacklist_vpd);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_QLOGIC, 0x2261, quirk_blacklist_vpd);
/*
* The Amazon Annapurna Labs 0x0031 device id is reused for other non Root Port
* device types, so the quirk is registered for the PCI_CLASS_BRIDGE_PCI class.
diff --git a/drivers/perf/arm_pmu_platform.c b/drivers/perf/arm_pmu_platform.c
index 933bd84..ef96764 100644
--- a/drivers/perf/arm_pmu_platform.c
+++ b/drivers/perf/arm_pmu_platform.c
@@ -6,6 +6,7 @@
* Copyright (C) 2010 ARM Ltd., Will Deacon <will.deacon@arm.com>
*/
#define pr_fmt(fmt) "hw perfevents: " fmt
+#define dev_fmt pr_fmt
#include <linux/bug.h>
#include <linux/cpumask.h>
@@ -100,10 +101,8 @@ static int pmu_parse_irqs(struct arm_pmu *pmu)
struct pmu_hw_events __percpu *hw_events = pmu->hw_events;
num_irqs = platform_irq_count(pdev);
- if (num_irqs < 0) {
- pr_err("unable to count PMU IRQs\n");
- return num_irqs;
- }
+ if (num_irqs < 0)
+ return dev_err_probe(&pdev->dev, num_irqs, "unable to count PMU IRQs\n");
/*
* In this case we have no idea which CPUs are covered by the PMU.
@@ -236,7 +235,7 @@ int arm_pmu_device_probe(struct platform_device *pdev,
ret = armpmu_register(pmu);
if (ret)
- goto out_free;
+ goto out_free_irqs;
return 0;
diff --git a/drivers/phy/cadence/phy-cadence-sierra.c b/drivers/phy/cadence/phy-cadence-sierra.c
index 453ef26..aaa0bbe 100644
--- a/drivers/phy/cadence/phy-cadence-sierra.c
+++ b/drivers/phy/cadence/phy-cadence-sierra.c
@@ -319,6 +319,12 @@ static int cdns_sierra_phy_on(struct phy *gphy)
u32 val;
int ret;
+ ret = reset_control_deassert(sp->phy_rst);
+ if (ret) {
+ dev_err(dev, "Failed to take the PHY out of reset\n");
+ return ret;
+ }
+
/* Take the PHY lane group out of reset */
ret = reset_control_deassert(ins->lnk_rst);
if (ret) {
@@ -618,7 +624,6 @@ static int cdns_sierra_phy_probe(struct platform_device *pdev)
pm_runtime_enable(dev);
phy_provider = devm_of_phy_provider_register(dev, of_phy_simple_xlate);
- reset_control_deassert(sp->phy_rst);
return PTR_ERR_OR_ZERO(phy_provider);
put_child:
diff --git a/drivers/phy/marvell/Kconfig b/drivers/phy/marvell/Kconfig
index 8f6273c..8ab9031 100644
--- a/drivers/phy/marvell/Kconfig
+++ b/drivers/phy/marvell/Kconfig
@@ -3,8 +3,8 @@
# Phy drivers for Marvell platforms
#
config ARMADA375_USBCLUSTER_PHY
- def_bool y
- depends on MACH_ARMADA_375 || COMPILE_TEST
+ bool "Armada 375 USB cluster PHY support" if COMPILE_TEST
+ default y if MACH_ARMADA_375
depends on OF && HAS_IOMEM
select GENERIC_PHY
diff --git a/drivers/phy/ti/phy-j721e-wiz.c b/drivers/phy/ti/phy-j721e-wiz.c
index c9cfafe..e28e25f 100644
--- a/drivers/phy/ti/phy-j721e-wiz.c
+++ b/drivers/phy/ti/phy-j721e-wiz.c
@@ -615,6 +615,12 @@ static void wiz_clock_cleanup(struct wiz *wiz, struct device_node *node)
of_clk_del_provider(clk_node);
of_node_put(clk_node);
}
+
+ for (i = 0; i < wiz->clk_div_sel_num; i++) {
+ clk_node = of_get_child_by_name(node, clk_div_sel[i].node_name);
+ of_clk_del_provider(clk_node);
+ of_node_put(clk_node);
+ }
}
static int wiz_clock_init(struct wiz *wiz, struct device_node *node)
@@ -947,27 +953,24 @@ static int wiz_probe(struct platform_device *pdev)
goto err_get_sync;
}
- serdes_pdev = of_platform_device_create(child_node, NULL, dev);
- if (!serdes_pdev) {
- dev_WARN(dev, "Unable to create SERDES platform device\n");
- ret = -ENOMEM;
- goto err_pdev_create;
- }
- wiz->serdes_pdev = serdes_pdev;
-
ret = wiz_init(wiz);
if (ret) {
dev_err(dev, "WIZ initialization failed\n");
goto err_wiz_init;
}
+ serdes_pdev = of_platform_device_create(child_node, NULL, dev);
+ if (!serdes_pdev) {
+ dev_WARN(dev, "Unable to create SERDES platform device\n");
+ ret = -ENOMEM;
+ goto err_wiz_init;
+ }
+ wiz->serdes_pdev = serdes_pdev;
+
of_node_put(child_node);
return 0;
err_wiz_init:
- of_platform_device_destroy(&serdes_pdev->dev, NULL);
-
-err_pdev_create:
wiz_clock_cleanup(wiz, node);
err_get_sync:
diff --git a/drivers/phy/ti/phy-twl4030-usb.c b/drivers/phy/ti/phy-twl4030-usb.c
index 9887f90..812e540 100644
--- a/drivers/phy/ti/phy-twl4030-usb.c
+++ b/drivers/phy/ti/phy-twl4030-usb.c
@@ -779,7 +779,7 @@ static int twl4030_usb_remove(struct platform_device *pdev)
usb_remove_phy(&twl->phy);
pm_runtime_get_sync(twl->dev);
- cancel_delayed_work(&twl->id_workaround_work);
+ cancel_delayed_work_sync(&twl->id_workaround_work);
device_remove_file(twl->dev, &dev_attr_vbus);
/* set transceiver mode to power on defaults */
diff --git a/drivers/pinctrl/core.c b/drivers/pinctrl/core.c
index 9fc4433..20b477c 100644
--- a/drivers/pinctrl/core.c
+++ b/drivers/pinctrl/core.c
@@ -1604,8 +1604,8 @@ static int pinctrl_pins_show(struct seq_file *s, void *what)
unsigned i, pin;
#ifdef CONFIG_GPIOLIB
struct pinctrl_gpio_range *range;
- unsigned int gpio_num;
struct gpio_chip *chip;
+ int gpio_num;
#endif
seq_printf(s, "registered pins: %d\n", pctldev->desc->npins);
@@ -1625,7 +1625,7 @@ static int pinctrl_pins_show(struct seq_file *s, void *what)
seq_printf(s, "pin %d (%s) ", pin, desc->name);
#ifdef CONFIG_GPIOLIB
- gpio_num = 0;
+ gpio_num = -1;
list_for_each_entry(range, &pctldev->gpio_ranges, node) {
if ((pin >= range->pin_base) &&
(pin < (range->pin_base + range->npins))) {
@@ -1633,10 +1633,12 @@ static int pinctrl_pins_show(struct seq_file *s, void *what)
break;
}
}
- chip = gpio_to_chip(gpio_num);
- if (chip && chip->gpiodev && chip->gpiodev->base)
- seq_printf(s, "%u:%s ", gpio_num -
- chip->gpiodev->base, chip->label);
+ if (gpio_num >= 0)
+ chip = gpio_to_chip(gpio_num);
+ else
+ chip = NULL;
+ if (chip)
+ seq_printf(s, "%u:%s ", gpio_num - chip->gpiodev->base, chip->label);
else
seq_puts(s, "0:? ");
#endif
diff --git a/drivers/pinctrl/intel/pinctrl-lewisburg.c b/drivers/pinctrl/intel/pinctrl-lewisburg.c
index 7fdf425..ad4b446 100644
--- a/drivers/pinctrl/intel/pinctrl-lewisburg.c
+++ b/drivers/pinctrl/intel/pinctrl-lewisburg.c
@@ -299,9 +299,9 @@ static const struct pinctrl_pin_desc lbg_pins[] = {
static const struct intel_community lbg_communities[] = {
LBG_COMMUNITY(0, 0, 71),
LBG_COMMUNITY(1, 72, 132),
- LBG_COMMUNITY(3, 133, 144),
- LBG_COMMUNITY(4, 145, 180),
- LBG_COMMUNITY(5, 181, 246),
+ LBG_COMMUNITY(3, 133, 143),
+ LBG_COMMUNITY(4, 144, 178),
+ LBG_COMMUNITY(5, 179, 246),
};
static const struct intel_pinctrl_soc_data lbg_soc_data = {
diff --git a/drivers/pinctrl/pinctrl-single.c b/drivers/pinctrl/pinctrl-single.c
index f3cd7e2..12cc4eb1 100644
--- a/drivers/pinctrl/pinctrl-single.c
+++ b/drivers/pinctrl/pinctrl-single.c
@@ -270,20 +270,44 @@ static void __maybe_unused pcs_writel(unsigned val, void __iomem *reg)
writel(val, reg);
}
+static unsigned int pcs_pin_reg_offset_get(struct pcs_device *pcs,
+ unsigned int pin)
+{
+ unsigned int mux_bytes = pcs->width / BITS_PER_BYTE;
+
+ if (pcs->bits_per_mux) {
+ unsigned int pin_offset_bytes;
+
+ pin_offset_bytes = (pcs->bits_per_pin * pin) / BITS_PER_BYTE;
+ return (pin_offset_bytes / mux_bytes) * mux_bytes;
+ }
+
+ return pin * mux_bytes;
+}
+
+static unsigned int pcs_pin_shift_reg_get(struct pcs_device *pcs,
+ unsigned int pin)
+{
+ return (pin % (pcs->width / pcs->bits_per_pin)) * pcs->bits_per_pin;
+}
+
static void pcs_pin_dbg_show(struct pinctrl_dev *pctldev,
struct seq_file *s,
unsigned pin)
{
struct pcs_device *pcs;
- unsigned val, mux_bytes;
+ unsigned int val;
unsigned long offset;
size_t pa;
pcs = pinctrl_dev_get_drvdata(pctldev);
- mux_bytes = pcs->width / BITS_PER_BYTE;
- offset = pin * mux_bytes;
+ offset = pcs_pin_reg_offset_get(pcs, pin);
val = pcs->read(pcs->base + offset);
+
+ if (pcs->bits_per_mux)
+ val &= pcs->fmask << pcs_pin_shift_reg_get(pcs, pin);
+
pa = pcs->res->start + offset;
seq_printf(s, "%zx %08x %s ", pa, val, DRIVER_NAME);
@@ -384,7 +408,6 @@ static int pcs_request_gpio(struct pinctrl_dev *pctldev,
struct pcs_device *pcs = pinctrl_dev_get_drvdata(pctldev);
struct pcs_gpiofunc_range *frange = NULL;
struct list_head *pos, *tmp;
- int mux_bytes = 0;
unsigned data;
/* If function mask is null, return directly. */
@@ -392,29 +415,27 @@ static int pcs_request_gpio(struct pinctrl_dev *pctldev,
return -ENOTSUPP;
list_for_each_safe(pos, tmp, &pcs->gpiofuncs) {
+ u32 offset;
+
frange = list_entry(pos, struct pcs_gpiofunc_range, node);
if (pin >= frange->offset + frange->npins
|| pin < frange->offset)
continue;
- mux_bytes = pcs->width / BITS_PER_BYTE;
+
+ offset = pcs_pin_reg_offset_get(pcs, pin);
if (pcs->bits_per_mux) {
- int byte_num, offset, pin_shift;
-
- byte_num = (pcs->bits_per_pin * pin) / BITS_PER_BYTE;
- offset = (byte_num / mux_bytes) * mux_bytes;
- pin_shift = pin % (pcs->width / pcs->bits_per_pin) *
- pcs->bits_per_pin;
+ int pin_shift = pcs_pin_shift_reg_get(pcs, pin);
data = pcs->read(pcs->base + offset);
data &= ~(pcs->fmask << pin_shift);
data |= frange->gpiofunc << pin_shift;
pcs->write(data, pcs->base + offset);
} else {
- data = pcs->read(pcs->base + pin * mux_bytes);
+ data = pcs->read(pcs->base + offset);
data &= ~pcs->fmask;
data |= frange->gpiofunc;
- pcs->write(data, pcs->base + pin * mux_bytes);
+ pcs->write(data, pcs->base + offset);
}
break;
}
@@ -656,10 +677,8 @@ static const struct pinconf_ops pcs_pinconf_ops = {
* pcs_add_pin() - add a pin to the static per controller pin array
* @pcs: pcs driver instance
* @offset: register offset from base
- * @pin_pos: unused
*/
-static int pcs_add_pin(struct pcs_device *pcs, unsigned offset,
- unsigned pin_pos)
+static int pcs_add_pin(struct pcs_device *pcs, unsigned int offset)
{
struct pcs_soc_data *pcs_soc = &pcs->socdata;
struct pinctrl_pin_desc *pin;
@@ -728,17 +747,9 @@ static int pcs_allocate_pin_table(struct pcs_device *pcs)
for (i = 0; i < pcs->desc.npins; i++) {
unsigned offset;
int res;
- int byte_num;
- int pin_pos = 0;
- if (pcs->bits_per_mux) {
- byte_num = (pcs->bits_per_pin * i) / BITS_PER_BYTE;
- offset = (byte_num / mux_bytes) * mux_bytes;
- pin_pos = i % num_pins_in_register;
- } else {
- offset = i * mux_bytes;
- }
- res = pcs_add_pin(pcs, offset, pin_pos);
+ offset = pcs_pin_reg_offset_get(pcs, i);
+ res = pcs_add_pin(pcs, offset);
if (res < 0) {
dev_err(pcs->dev, "error adding pins: %i\n", res);
return res;
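
The pcs_pin_reg_offset_get()/pcs_pin_shift_reg_get() helpers introduced above centralize the arithmetic that was previously duplicated in pcs_request_gpio() and pcs_allocate_pin_table(): when several pins share one register (bits_per_mux), the register offset is the pin's bit offset rounded down to a register boundary, and the shift is the pin's position within that register. A standalone sketch of that arithmetic, using an assumed 32-bit register and 2 bits per pin (illustrative values only, not a specific pcs_device configuration):

#include <stdio.h>

#define BITS_PER_BYTE 8

int main(void)
{
	unsigned int width = 32;	/* assumed register width in bits */
	unsigned int bits_per_pin = 2;	/* assumed bits per pin */
	unsigned int mux_bytes = width / BITS_PER_BYTE;
	unsigned int pin;

	for (pin = 0; pin < 18; pin++) {
		unsigned int byte = (bits_per_pin * pin) / BITS_PER_BYTE;
		unsigned int offset = (byte / mux_bytes) * mux_bytes;
		unsigned int shift = (pin % (width / bits_per_pin)) * bits_per_pin;

		/* pins 0..15 land in the register at offset 0, pin 16 starts the next one */
		printf("pin %2u -> register offset 0x%x, bit shift %u\n",
		       pin, offset, shift);
	}
	return 0;
}
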
diff --git a/drivers/pinctrl/samsung/pinctrl-exynos.c b/drivers/pinctrl/samsung/pinctrl-exynos.c
index b9ea09f..493079a 100644
--- a/drivers/pinctrl/samsung/pinctrl-exynos.c
+++ b/drivers/pinctrl/samsung/pinctrl-exynos.c
@@ -55,7 +55,7 @@ static void exynos_irq_mask(struct irq_data *irqd)
struct exynos_irq_chip *our_chip = to_exynos_irq_chip(chip);
struct samsung_pin_bank *bank = irq_data_get_irq_chip_data(irqd);
unsigned long reg_mask = our_chip->eint_mask + bank->eint_offset;
- unsigned long mask;
+ unsigned int mask;
unsigned long flags;
spin_lock_irqsave(&bank->slock, flags);
@@ -83,7 +83,7 @@ static void exynos_irq_unmask(struct irq_data *irqd)
struct exynos_irq_chip *our_chip = to_exynos_irq_chip(chip);
struct samsung_pin_bank *bank = irq_data_get_irq_chip_data(irqd);
unsigned long reg_mask = our_chip->eint_mask + bank->eint_offset;
- unsigned long mask;
+ unsigned int mask;
unsigned long flags;
/*
@@ -483,7 +483,7 @@ static void exynos_irq_eint0_15(struct irq_desc *desc)
chained_irq_exit(chip, desc);
}
-static inline void exynos_irq_demux_eint(unsigned long pend,
+static inline void exynos_irq_demux_eint(unsigned int pend,
struct irq_domain *domain)
{
unsigned int irq;
@@ -500,8 +500,8 @@ static void exynos_irq_demux_eint16_31(struct irq_desc *desc)
{
struct irq_chip *chip = irq_desc_get_chip(desc);
struct exynos_muxed_weint_data *eintd = irq_desc_get_handler_data(desc);
- unsigned long pend;
- unsigned long mask;
+ unsigned int pend;
+ unsigned int mask;
int i;
chained_irq_enter(chip, desc);
diff --git a/drivers/platform/chrome/cros_ec_typec.c b/drivers/platform/chrome/cros_ec_typec.c
index 31be311..036d54d 100644
--- a/drivers/platform/chrome/cros_ec_typec.c
+++ b/drivers/platform/chrome/cros_ec_typec.c
@@ -475,6 +475,11 @@ static int cros_typec_enable_dp(struct cros_typec_data *typec,
return -ENOTSUPP;
}
+ if (!pd_ctrl->dp_mode) {
+ dev_err(typec->dev, "No valid DP mode provided.\n");
+ return -EINVAL;
+ }
+
/* Status VDO. */
dp_data.status = DP_STATUS_ENABLED;
if (port->mux_flags & USB_PD_MUX_HPD_IRQ)
diff --git a/drivers/platform/mellanox/mlxbf-tmfifo.c b/drivers/platform/mellanox/mlxbf-tmfifo.c
index bbc4e71..38800e8 100644
--- a/drivers/platform/mellanox/mlxbf-tmfifo.c
+++ b/drivers/platform/mellanox/mlxbf-tmfifo.c
@@ -294,6 +294,9 @@ mlxbf_tmfifo_get_next_desc(struct mlxbf_tmfifo_vring *vring)
if (vring->next_avail == virtio16_to_cpu(vdev, vr->avail->idx))
return NULL;
+ /* Make sure 'avail->idx' is visible already. */
+ virtio_rmb(false);
+
idx = vring->next_avail % vr->num;
head = virtio16_to_cpu(vdev, vr->avail->ring[idx]);
if (WARN_ON(head >= vr->num))
@@ -322,7 +325,7 @@ static void mlxbf_tmfifo_release_desc(struct mlxbf_tmfifo_vring *vring,
* done or not. Add a memory barrier here to make sure the update above
* completes before updating the idx.
*/
- mb();
+ virtio_mb(false);
vr->used->idx = cpu_to_virtio16(vdev, vr_idx + 1);
}
@@ -733,6 +736,12 @@ static bool mlxbf_tmfifo_rxtx_one_desc(struct mlxbf_tmfifo_vring *vring,
desc = NULL;
fifo->vring[is_rx] = NULL;
+ /*
+ * Make sure the load/store are in order before
+ * returning back to virtio.
+ */
+ virtio_mb(false);
+
/* Notify upper layer that packet is done. */
spin_lock_irqsave(&fifo->spin_lock[is_rx], flags);
vring_interrupt(0, vring->vq);
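
The virtio_rmb()/virtio_mb() calls added above pair the read of avail->idx with the subsequent descriptor accesses, and order the used-ring updates before the used->idx write, which is the usual virtio producer/consumer contract. A minimal user-space analogue of that ordering using C11 acquire/release atomics (purely illustrative; this is not the kernel or virtio API):

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

struct ring {
	uint16_t buf[256];
	_Atomic uint16_t avail_idx;	/* producer publishes new entries here */
};

/* Producer: fill the slot first, then publish the index (release). */
static void produce(struct ring *r, uint16_t next, uint16_t val)
{
	r->buf[next % 256] = val;
	atomic_store_explicit(&r->avail_idx, (uint16_t)(next + 1),
			      memory_order_release);
}

/*
 * Consumer: read the index with acquire ordering before touching the slot,
 * mirroring the virtio_rmb() added in mlxbf_tmfifo_get_next_desc().
 */
static int consume(struct ring *r, uint16_t next, uint16_t *out)
{
	if (next == atomic_load_explicit(&r->avail_idx, memory_order_acquire))
		return 0;			/* nothing published yet */
	*out = r->buf[next % 256];
	return 1;
}

int main(void)
{
	struct ring r = { .avail_idx = 0 };
	uint16_t val;

	produce(&r, 0, 42);
	if (consume(&r, 0, &val))
		printf("consumed %u\n", val);
	return 0;
}
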
diff --git a/drivers/platform/x86/Kconfig b/drivers/platform/x86/Kconfig
index 0d91d13..a185868 100644
--- a/drivers/platform/x86/Kconfig
+++ b/drivers/platform/x86/Kconfig
@@ -821,7 +821,7 @@
config INTEL_INT0002_VGPIO
tristate "Intel ACPI INT0002 Virtual GPIO driver"
- depends on GPIOLIB && ACPI
+ depends on GPIOLIB && ACPI && PM_SLEEP
select GPIOLIB_IRQCHIP
help
Some peripherals on Bay Trail and Cherry Trail platforms signal a
diff --git a/drivers/platform/x86/dell-smbios-wmi.c b/drivers/platform/x86/dell-smbios-wmi.c
index 27a298b..c97bd4a 100644
--- a/drivers/platform/x86/dell-smbios-wmi.c
+++ b/drivers/platform/x86/dell-smbios-wmi.c
@@ -271,7 +271,8 @@ int init_dell_smbios_wmi(void)
void exit_dell_smbios_wmi(void)
{
- wmi_driver_unregister(&dell_smbios_wmi_driver);
+ if (wmi_supported)
+ wmi_driver_unregister(&dell_smbios_wmi_driver);
}
MODULE_DEVICE_TABLE(wmi, dell_smbios_wmi_id_table);
diff --git a/drivers/platform/x86/hp-wireless.c b/drivers/platform/x86/hp-wireless.c
index 12c31fd..0753ef1 100644
--- a/drivers/platform/x86/hp-wireless.c
+++ b/drivers/platform/x86/hp-wireless.c
@@ -17,12 +17,14 @@ MODULE_LICENSE("GPL");
MODULE_AUTHOR("Alex Hung");
MODULE_ALIAS("acpi*:HPQ6001:*");
MODULE_ALIAS("acpi*:WSTADEF:*");
+MODULE_ALIAS("acpi*:AMDI0051:*");
static struct input_dev *hpwl_input_dev;
static const struct acpi_device_id hpwl_ids[] = {
{"HPQ6001", 0},
{"WSTADEF", 0},
+ {"AMDI0051", 0},
{"", 0},
};
diff --git a/drivers/platform/x86/hp_accel.c b/drivers/platform/x86/hp_accel.c
index 799cbe2..8c0867b 100644
--- a/drivers/platform/x86/hp_accel.c
+++ b/drivers/platform/x86/hp_accel.c
@@ -88,6 +88,9 @@ MODULE_DEVICE_TABLE(acpi, lis3lv02d_device_ids);
static int lis3lv02d_acpi_init(struct lis3lv02d *lis3)
{
struct acpi_device *dev = lis3->bus_priv;
+ if (!lis3->init_required)
+ return 0;
+
if (acpi_evaluate_object(dev->handle, METHOD_NAME__INI,
NULL, NULL) != AE_OK)
return -EINVAL;
@@ -356,6 +359,7 @@ static int lis3lv02d_add(struct acpi_device *device)
}
/* call the core layer do its init */
+ lis3_dev.init_required = true;
ret = lis3lv02d_init_device(&lis3_dev);
if (ret)
return ret;
@@ -403,11 +407,27 @@ static int lis3lv02d_suspend(struct device *dev)
static int lis3lv02d_resume(struct device *dev)
{
+ lis3_dev.init_required = false;
lis3lv02d_poweron(&lis3_dev);
return 0;
}
-static SIMPLE_DEV_PM_OPS(hp_accel_pm, lis3lv02d_suspend, lis3lv02d_resume);
+static int lis3lv02d_restore(struct device *dev)
+{
+ lis3_dev.init_required = true;
+ lis3lv02d_poweron(&lis3_dev);
+ return 0;
+}
+
+static const struct dev_pm_ops hp_accel_pm = {
+ .suspend = lis3lv02d_suspend,
+ .resume = lis3lv02d_resume,
+ .freeze = lis3lv02d_suspend,
+ .thaw = lis3lv02d_resume,
+ .poweroff = lis3lv02d_suspend,
+ .restore = lis3lv02d_restore,
+};
+
#define HP_ACCEL_PM (&hp_accel_pm)
#else
#define HP_ACCEL_PM NULL
diff --git a/drivers/platform/x86/intel-hid.c b/drivers/platform/x86/intel-hid.c
index 8626197..8a0cd5b 100644
--- a/drivers/platform/x86/intel-hid.c
+++ b/drivers/platform/x86/intel-hid.c
@@ -86,6 +86,13 @@ static const struct dmi_system_id button_array_table[] = {
DMI_MATCH(DMI_PRODUCT_NAME, "HP Spectre x2 Detachable"),
},
},
+ {
+ .ident = "Lenovo ThinkPad X1 Tablet Gen 2",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_PRODUCT_FAMILY, "ThinkPad X1 Tablet Gen 2"),
+ },
+ },
{ }
};
diff --git a/drivers/platform/x86/intel_int0002_vgpio.c b/drivers/platform/x86/intel_int0002_vgpio.c
index 289c665..569342a 100644
--- a/drivers/platform/x86/intel_int0002_vgpio.c
+++ b/drivers/platform/x86/intel_int0002_vgpio.c
@@ -51,6 +51,12 @@
#define GPE0A_STS_PORT 0x420
#define GPE0A_EN_PORT 0x428
+struct int0002_data {
+ struct gpio_chip chip;
+ int parent_irq;
+ int wake_enable_count;
+};
+
/*
* As this is not a real GPIO at all, but just a hack to model an event in
* ACPI the get / set functions are dummy functions.
@@ -98,14 +104,16 @@ static void int0002_irq_mask(struct irq_data *data)
static int int0002_irq_set_wake(struct irq_data *data, unsigned int on)
{
struct gpio_chip *chip = irq_data_get_irq_chip_data(data);
- struct platform_device *pdev = to_platform_device(chip->parent);
- int irq = platform_get_irq(pdev, 0);
+ struct int0002_data *int0002 = container_of(chip, struct int0002_data, chip);
- /* Propagate to parent irq */
+ /*
+ * Applying the wakeup flag to our parent IRQ is delayed until system
+ * suspend, because we only want to do this when using s2idle.
+ */
if (on)
- enable_irq_wake(irq);
+ int0002->wake_enable_count++;
else
- disable_irq_wake(irq);
+ int0002->wake_enable_count--;
return 0;
}
@@ -135,7 +143,7 @@ static bool int0002_check_wake(void *data)
return (gpe_sts_reg & GPE0A_PME_B0_STS_BIT);
}
-static struct irq_chip int0002_byt_irqchip = {
+static struct irq_chip int0002_irqchip = {
.name = DRV_NAME,
.irq_ack = int0002_irq_ack,
.irq_mask = int0002_irq_mask,
@@ -143,21 +151,9 @@ static struct irq_chip int0002_byt_irqchip = {
.irq_set_wake = int0002_irq_set_wake,
};
-static struct irq_chip int0002_cht_irqchip = {
- .name = DRV_NAME,
- .irq_ack = int0002_irq_ack,
- .irq_mask = int0002_irq_mask,
- .irq_unmask = int0002_irq_unmask,
- /*
- * No set_wake, on CHT the IRQ is typically shared with the ACPI SCI
- * and we don't want to mess with the ACPI SCI irq settings.
- */
- .flags = IRQCHIP_SKIP_SET_WAKE,
-};
-
static const struct x86_cpu_id int0002_cpu_ids[] = {
- X86_MATCH_INTEL_FAM6_MODEL(ATOM_SILVERMONT, &int0002_byt_irqchip),
- X86_MATCH_INTEL_FAM6_MODEL(ATOM_AIRMONT, &int0002_cht_irqchip),
+ X86_MATCH_INTEL_FAM6_MODEL(ATOM_SILVERMONT, NULL),
+ X86_MATCH_INTEL_FAM6_MODEL(ATOM_AIRMONT, NULL),
{}
};
@@ -172,8 +168,9 @@ static int int0002_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
const struct x86_cpu_id *cpu_id;
- struct gpio_chip *chip;
+ struct int0002_data *int0002;
struct gpio_irq_chip *girq;
+ struct gpio_chip *chip;
int irq, ret;
/* Menlow has a different INT0002 device? <sigh> */
@@ -185,10 +182,13 @@ static int int0002_probe(struct platform_device *pdev)
if (irq < 0)
return irq;
- chip = devm_kzalloc(dev, sizeof(*chip), GFP_KERNEL);
- if (!chip)
+ int0002 = devm_kzalloc(dev, sizeof(*int0002), GFP_KERNEL);
+ if (!int0002)
return -ENOMEM;
+ int0002->parent_irq = irq;
+
+ chip = &int0002->chip;
chip->label = DRV_NAME;
chip->parent = dev;
chip->owner = THIS_MODULE;
@@ -214,7 +214,7 @@ static int int0002_probe(struct platform_device *pdev)
}
girq = &chip->irq;
- girq->chip = (struct irq_chip *)cpu_id->driver_data;
+ girq->chip = &int0002_irqchip;
/* This let us handle the parent IRQ in the driver */
girq->parent_handler = NULL;
girq->num_parents = 0;
@@ -230,6 +230,7 @@ static int int0002_probe(struct platform_device *pdev)
acpi_register_wakeup_handler(irq, int0002_check_wake, NULL);
device_init_wakeup(dev, true);
+ dev_set_drvdata(dev, int0002);
return 0;
}
@@ -240,6 +241,36 @@ static int int0002_remove(struct platform_device *pdev)
return 0;
}
+static int int0002_suspend(struct device *dev)
+{
+ struct int0002_data *int0002 = dev_get_drvdata(dev);
+
+ /*
+ * The INT0002 parent IRQ is often shared with the ACPI GPE IRQ, don't
+ * muck with it when firmware based suspend is used, otherwise we may
+ * cause spurious wakeups from firmware managed suspend.
+ */
+ if (!pm_suspend_via_firmware() && int0002->wake_enable_count)
+ enable_irq_wake(int0002->parent_irq);
+
+ return 0;
+}
+
+static int int0002_resume(struct device *dev)
+{
+ struct int0002_data *int0002 = dev_get_drvdata(dev);
+
+ if (!pm_suspend_via_firmware() && int0002->wake_enable_count)
+ disable_irq_wake(int0002->parent_irq);
+
+ return 0;
+}
+
+static const struct dev_pm_ops int0002_pm_ops = {
+ .suspend = int0002_suspend,
+ .resume = int0002_resume,
+};
+
static const struct acpi_device_id int0002_acpi_ids[] = {
{ "INT0002", 0 },
{ },
@@ -250,6 +281,7 @@ static struct platform_driver int0002_driver = {
.driver = {
.name = DRV_NAME,
.acpi_match_table = int0002_acpi_ids,
+ .pm = &int0002_pm_ops,
},
.probe = int0002_probe,
.remove = int0002_remove,
diff --git a/drivers/platform/x86/intel_pmc_core.c b/drivers/platform/x86/intel_pmc_core.c
index 3e5fe66..ca32a4c 100644
--- a/drivers/platform/x86/intel_pmc_core.c
+++ b/drivers/platform/x86/intel_pmc_core.c
@@ -863,34 +863,45 @@ static int pmc_core_pll_show(struct seq_file *s, void *unused)
}
DEFINE_SHOW_ATTRIBUTE(pmc_core_pll);
-static ssize_t pmc_core_ltr_ignore_write(struct file *file,
- const char __user *userbuf,
- size_t count, loff_t *ppos)
+static int pmc_core_send_ltr_ignore(u32 value)
{
struct pmc_dev *pmcdev = &pmc;
const struct pmc_reg_map *map = pmcdev->map;
- u32 val, buf_size, fd;
- int err;
-
- buf_size = count < 64 ? count : 64;
-
- err = kstrtou32_from_user(userbuf, buf_size, 10, &val);
- if (err)
- return err;
+ u32 reg;
+ int err = 0;
mutex_lock(&pmcdev->lock);
- if (val > map->ltr_ignore_max) {
+ if (value > map->ltr_ignore_max) {
err = -EINVAL;
goto out_unlock;
}
- fd = pmc_core_reg_read(pmcdev, map->ltr_ignore_offset);
- fd |= (1U << val);
- pmc_core_reg_write(pmcdev, map->ltr_ignore_offset, fd);
+ reg = pmc_core_reg_read(pmcdev, map->ltr_ignore_offset);
+ reg |= BIT(value);
+ pmc_core_reg_write(pmcdev, map->ltr_ignore_offset, reg);
out_unlock:
mutex_unlock(&pmcdev->lock);
+
+ return err;
+}
+
+static ssize_t pmc_core_ltr_ignore_write(struct file *file,
+ const char __user *userbuf,
+ size_t count, loff_t *ppos)
+{
+ u32 buf_size, value;
+ int err;
+
+ buf_size = min_t(u32, count, 64);
+
+ err = kstrtou32_from_user(userbuf, buf_size, 10, &value);
+ if (err)
+ return err;
+
+ err = pmc_core_send_ltr_ignore(value);
+
return err == 0 ? count : err;
}
@@ -1175,9 +1186,15 @@ static const struct pci_device_id pmc_pci_ids[] = {
* the platform BIOS enforces 24Mhz crystal to shutdown
* before PMC can assert SLP_S0#.
*/
+static bool xtal_ignore;
static int quirk_xtal_ignore(const struct dmi_system_id *id)
{
- struct pmc_dev *pmcdev = &pmc;
+ xtal_ignore = true;
+ return 0;
+}
+
+static void pmc_core_xtal_ignore(struct pmc_dev *pmcdev)
+{
u32 value;
value = pmc_core_reg_read(pmcdev, pmcdev->map->pm_vric1_offset);
@@ -1186,7 +1203,6 @@ static int quirk_xtal_ignore(const struct dmi_system_id *id)
/* Low Voltage Mode Enable */
value &= ~SPT_PMC_VRIC1_SLPS0LVEN;
pmc_core_reg_write(pmcdev, pmcdev->map->pm_vric1_offset, value);
- return 0;
}
static const struct dmi_system_id pmc_core_dmi_table[] = {
@@ -1201,6 +1217,14 @@ static const struct dmi_system_id pmc_core_dmi_table[] = {
{}
};
+static void pmc_core_do_dmi_quirks(struct pmc_dev *pmcdev)
+{
+ dmi_check_system(pmc_core_dmi_table);
+
+ if (xtal_ignore)
+ pmc_core_xtal_ignore(pmcdev);
+}
+
static int pmc_core_probe(struct platform_device *pdev)
{
static bool device_initialized;
@@ -1242,7 +1266,16 @@ static int pmc_core_probe(struct platform_device *pdev)
mutex_init(&pmcdev->lock);
platform_set_drvdata(pdev, pmcdev);
pmcdev->pmc_xram_read_bit = pmc_core_check_read_lock_bit();
- dmi_check_system(pmc_core_dmi_table);
+ pmc_core_do_dmi_quirks(pmcdev);
+
+ /*
+ * On TGL, due to a hardware limitation, the GBE LTR blocks PC10 when
+ * a cable is attached. Tell the PMC to ignore it.
+ */
+ if (pmcdev->map == &tgl_reg_map) {
+ dev_dbg(&pdev->dev, "ignoring GBE LTR\n");
+ pmc_core_send_ltr_ignore(3);
+ }
pmc_core_dbgfs_register(pmcdev);
diff --git a/drivers/platform/x86/intel_punit_ipc.c b/drivers/platform/x86/intel_punit_ipc.c
index 05cced5..f58b854 100644
--- a/drivers/platform/x86/intel_punit_ipc.c
+++ b/drivers/platform/x86/intel_punit_ipc.c
@@ -312,6 +312,7 @@ static const struct acpi_device_id punit_ipc_acpi_ids[] = {
{ "INT34D4", 0 },
{ }
};
+MODULE_DEVICE_TABLE(acpi, punit_ipc_acpi_ids);
static struct platform_driver intel_punit_ipc_driver = {
.probe = intel_punit_ipc_probe,
diff --git a/drivers/platform/x86/intel_speed_select_if/isst_if_mbox_pci.c b/drivers/platform/x86/intel_speed_select_if/isst_if_mbox_pci.c
index 95f01e7..da958aa 100644
--- a/drivers/platform/x86/intel_speed_select_if/isst_if_mbox_pci.c
+++ b/drivers/platform/x86/intel_speed_select_if/isst_if_mbox_pci.c
@@ -21,12 +21,16 @@
#define PUNIT_MAILBOX_BUSY_BIT 31
/*
- * The average time to complete some commands is about 40us. The current
- * count is enough to satisfy 40us. But when the firmware is very busy, this
- * causes timeout occasionally. So increase to deal with some worst case
- * scenarios. Most of the command still complete in few us.
+ * The average time to complete mailbox commands is less than 40us. Most of
+ * the commands complete in a few microseconds. But the same firmware handles
+ * requests from all power management features.
+ * We can create a scenario where we flood the firmware with requests; then
+ * the mailbox response can be delayed for 100s of microseconds. So define
+ * two timeouts: one for the average case and one for the long case.
+ * If the firmware is taking more than average, just call cond_resched().
*/
-#define OS_MAILBOX_RETRY_COUNT 100
+#define OS_MAILBOX_TIMEOUT_AVG_US 40
+#define OS_MAILBOX_TIMEOUT_MAX_US 1000
struct isst_if_device {
struct mutex mutex;
@@ -35,11 +39,13 @@ struct isst_if_device {
static int isst_if_mbox_cmd(struct pci_dev *pdev,
struct isst_if_mbox_cmd *mbox_cmd)
{
- u32 retries, data;
+ s64 tm_delta = 0;
+ ktime_t tm;
+ u32 data;
int ret;
/* Poll for rb bit == 0 */
- retries = OS_MAILBOX_RETRY_COUNT;
+ tm = ktime_get();
do {
ret = pci_read_config_dword(pdev, PUNIT_MAILBOX_INTERFACE,
&data);
@@ -48,11 +54,14 @@ static int isst_if_mbox_cmd(struct pci_dev *pdev,
if (data & BIT_ULL(PUNIT_MAILBOX_BUSY_BIT)) {
ret = -EBUSY;
+ tm_delta = ktime_us_delta(ktime_get(), tm);
+ if (tm_delta > OS_MAILBOX_TIMEOUT_AVG_US)
+ cond_resched();
continue;
}
ret = 0;
break;
- } while (--retries);
+ } while (tm_delta < OS_MAILBOX_TIMEOUT_MAX_US);
if (ret)
return ret;
@@ -74,7 +83,8 @@ static int isst_if_mbox_cmd(struct pci_dev *pdev,
return ret;
/* Poll for rb bit == 0 */
- retries = OS_MAILBOX_RETRY_COUNT;
+ tm_delta = 0;
+ tm = ktime_get();
do {
ret = pci_read_config_dword(pdev, PUNIT_MAILBOX_INTERFACE,
&data);
@@ -83,6 +93,9 @@ static int isst_if_mbox_cmd(struct pci_dev *pdev,
if (data & BIT_ULL(PUNIT_MAILBOX_BUSY_BIT)) {
ret = -EBUSY;
+ tm_delta = ktime_us_delta(ktime_get(), tm);
+ if (tm_delta > OS_MAILBOX_TIMEOUT_AVG_US)
+ cond_resched();
continue;
}
@@ -96,7 +109,7 @@ static int isst_if_mbox_cmd(struct pci_dev *pdev,
mbox_cmd->resp_data = data;
ret = 0;
break;
- } while (--retries);
+ } while (tm_delta < OS_MAILBOX_TIMEOUT_MAX_US);
return ret;
}
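
The retry-count loop is replaced above by a wall-clock bound: keep polling the busy bit up to OS_MAILBOX_TIMEOUT_MAX_US, but once the average completion time (OS_MAILBOX_TIMEOUT_AVG_US) has passed, yield between polls with cond_resched(). A user-space sketch of that two-threshold polling pattern (clock_gettime() and sched_yield() stand in for ktime_get() and cond_resched(); poll_busy() is a placeholder, not a real register read):

#include <sched.h>
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

#define TIMEOUT_AVG_US 40
#define TIMEOUT_MAX_US 1000

static bool poll_busy(void)	/* placeholder for reading the busy bit */
{
	return true;
}

static long elapsed_us(const struct timespec *start)
{
	struct timespec now;

	clock_gettime(CLOCK_MONOTONIC, &now);
	return (now.tv_sec - start->tv_sec) * 1000000L +
	       (now.tv_nsec - start->tv_nsec) / 1000L;
}

int main(void)
{
	struct timespec start;
	long delta = 0;
	int ret = -1;

	clock_gettime(CLOCK_MONOTONIC, &start);
	do {
		if (!poll_busy()) {
			ret = 0;		/* mailbox is free */
			break;
		}
		delta = elapsed_us(&start);
		if (delta > TIMEOUT_AVG_US)	/* past the average: be polite */
			sched_yield();
	} while (delta < TIMEOUT_MAX_US);

	printf("result: %d after %ld us\n", ret, delta);
	return 0;
}
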
diff --git a/drivers/platform/x86/pmc_atom.c b/drivers/platform/x86/pmc_atom.c
index ca684ed..a9d2a4b 100644
--- a/drivers/platform/x86/pmc_atom.c
+++ b/drivers/platform/x86/pmc_atom.c
@@ -393,34 +393,10 @@ static const struct dmi_system_id critclk_systems[] = {
},
{
/* pmc_plt_clk* - are used for ethernet controllers */
- .ident = "Beckhoff CB3163",
+ .ident = "Beckhoff Baytrail",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Beckhoff Automation"),
- DMI_MATCH(DMI_BOARD_NAME, "CB3163"),
- },
- },
- {
- /* pmc_plt_clk* - are used for ethernet controllers */
- .ident = "Beckhoff CB4063",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "Beckhoff Automation"),
- DMI_MATCH(DMI_BOARD_NAME, "CB4063"),
- },
- },
- {
- /* pmc_plt_clk* - are used for ethernet controllers */
- .ident = "Beckhoff CB6263",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "Beckhoff Automation"),
- DMI_MATCH(DMI_BOARD_NAME, "CB6263"),
- },
- },
- {
- /* pmc_plt_clk* - are used for ethernet controllers */
- .ident = "Beckhoff CB6363",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "Beckhoff Automation"),
- DMI_MATCH(DMI_BOARD_NAME, "CB6363"),
+ DMI_MATCH(DMI_PRODUCT_FAMILY, "CBxx63"),
},
},
{
diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c
index 6940275..1c25af2 100644
--- a/drivers/platform/x86/thinkpad_acpi.c
+++ b/drivers/platform/x86/thinkpad_acpi.c
@@ -4079,13 +4079,19 @@ static bool hotkey_notify_6xxx(const u32 hkey,
case TP_HKEY_EV_KEY_NUMLOCK:
case TP_HKEY_EV_KEY_FN:
- case TP_HKEY_EV_KEY_FN_ESC:
/* key press events, we just ignore them as long as the EC
* is still reporting them in the normal keyboard stream */
*send_acpi_ev = false;
*ignore_acpi_ev = true;
return true;
+ case TP_HKEY_EV_KEY_FN_ESC:
+ /* Get the media key status to force the status LED to update */
+ acpi_evalf(hkey_handle, NULL, "GMKS", "v");
+ *send_acpi_ev = false;
+ *ignore_acpi_ev = true;
+ return true;
+
case TP_HKEY_EV_TABLET_CHANGED:
tpacpi_input_send_tabletsw();
hotkey_tablet_mode_notify_change();
@@ -6252,6 +6258,7 @@ enum thermal_access_mode {
enum { /* TPACPI_THERMAL_TPEC_* */
TP_EC_THERMAL_TMP0 = 0x78, /* ACPI EC regs TMP 0..7 */
TP_EC_THERMAL_TMP8 = 0xC0, /* ACPI EC regs TMP 8..15 */
+ TP_EC_FUNCREV = 0xEF, /* ACPI EC Functional revision */
TP_EC_THERMAL_TMP_NA = -128, /* ACPI EC sensor not available */
TPACPI_THERMAL_SENSOR_NA = -128000, /* Sensor not available */
@@ -6450,7 +6457,7 @@ static const struct attribute_group thermal_temp_input8_group = {
static int __init thermal_init(struct ibm_init_struct *iibm)
{
- u8 t, ta1, ta2;
+ u8 t, ta1, ta2, ver = 0;
int i;
int acpi_tmp7;
int res;
@@ -6465,7 +6472,14 @@ static int __init thermal_init(struct ibm_init_struct *iibm)
* 0x78-0x7F, 0xC0-0xC7. Registers return 0x00 for
* non-implemented, thermal sensors return 0x80 when
* not available
+ * The above rule is unfortunately flawed. This has been seen with
+ * 0xC2 (power supply ID) causing thermal control problems.
+ * The EC version can be determined by offset 0xEF and at least for
+ * version 3 the Lenovo firmware team confirmed that registers 0xC0-0xC7
+ * are not thermal registers.
*/
+ if (!acpi_ec_read(TP_EC_FUNCREV, &ver))
+ pr_warn("Thinkpad ACPI EC unable to access EC version\n");
ta1 = ta2 = 0;
for (i = 0; i < 8; i++) {
@@ -6475,11 +6489,13 @@ static int __init thermal_init(struct ibm_init_struct *iibm)
ta1 = 0;
break;
}
- if (acpi_ec_read(TP_EC_THERMAL_TMP8 + i, &t)) {
- ta2 |= t;
- } else {
- ta1 = 0;
- break;
+ if (ver < 3) {
+ if (acpi_ec_read(TP_EC_THERMAL_TMP8 + i, &t)) {
+ ta2 |= t;
+ } else {
+ ta1 = 0;
+ break;
+ }
}
}
if (ta1 == 0) {
@@ -6492,9 +6508,12 @@ static int __init thermal_init(struct ibm_init_struct *iibm)
thermal_read_mode = TPACPI_THERMAL_NONE;
}
} else {
- thermal_read_mode =
- (ta2 != 0) ?
- TPACPI_THERMAL_TPEC_16 : TPACPI_THERMAL_TPEC_8;
+ if (ver >= 3)
+ thermal_read_mode = TPACPI_THERMAL_TPEC_8;
+ else
+ thermal_read_mode =
+ (ta2 != 0) ?
+ TPACPI_THERMAL_TPEC_16 : TPACPI_THERMAL_TPEC_8;
}
} else if (acpi_tmp7) {
if (tpacpi_is_ibm() &&
diff --git a/drivers/platform/x86/touchscreen_dmi.c b/drivers/platform/x86/touchscreen_dmi.c
index c4de932..3743d89 100644
--- a/drivers/platform/x86/touchscreen_dmi.c
+++ b/drivers/platform/x86/touchscreen_dmi.c
@@ -115,6 +115,32 @@ static const struct ts_dmi_data chuwi_hi10_plus_data = {
.properties = chuwi_hi10_plus_props,
};
+static const struct property_entry chuwi_hi10_pro_props[] = {
+ PROPERTY_ENTRY_U32("touchscreen-min-x", 8),
+ PROPERTY_ENTRY_U32("touchscreen-min-y", 8),
+ PROPERTY_ENTRY_U32("touchscreen-size-x", 1912),
+ PROPERTY_ENTRY_U32("touchscreen-size-y", 1272),
+ PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"),
+ PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-chuwi-hi10-pro.fw"),
+ PROPERTY_ENTRY_U32("silead,max-fingers", 10),
+ PROPERTY_ENTRY_BOOL("silead,home-button"),
+ { }
+};
+
+static const struct ts_dmi_data chuwi_hi10_pro_data = {
+ .embedded_fw = {
+ .name = "silead/gsl1680-chuwi-hi10-pro.fw",
+ .prefix = { 0xf0, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00 },
+ .length = 42504,
+ .sha256 = { 0xdb, 0x92, 0x68, 0xa8, 0xdb, 0x81, 0x31, 0x00,
+ 0x1f, 0x58, 0x89, 0xdb, 0x19, 0x1b, 0x15, 0x8c,
+ 0x05, 0x14, 0xf4, 0x95, 0xba, 0x15, 0x45, 0x98,
+ 0x42, 0xa3, 0xbb, 0x65, 0xe3, 0x30, 0xa5, 0x93 },
+ },
+ .acpi_name = "MSSL1680:00",
+ .properties = chuwi_hi10_pro_props,
+};
+
static const struct property_entry chuwi_vi8_props[] = {
PROPERTY_ENTRY_U32("touchscreen-min-x", 4),
PROPERTY_ENTRY_U32("touchscreen-min-y", 6),
@@ -873,6 +899,15 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
},
},
{
+ /* Chuwi Hi10 Prus (CWI597) */
+ .driver_data = (void *)&chuwi_hi10_pro_data,
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "Hampoo"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Hi10 pro tablet"),
+ DMI_MATCH(DMI_BOARD_NAME, "Cherry Trail CR"),
+ },
+ },
+ {
/* Chuwi Vi8 (CWI506) */
.driver_data = (void *)&chuwi_vi8_data,
.matches = {
@@ -1044,6 +1079,14 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
},
},
{
+ /* Mediacom WinPad 7.0 W700 (same hw as Wintron surftab 7") */
+ .driver_data = (void *)&trekstor_surftab_wintron70_data,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "MEDIACOM"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "WinPad 7 W10 - WPW700"),
+ },
+ },
+ {
/* Mediacom Flexbook Edge 11 (same hw as TS Primebook C11) */
.driver_data = (void *)&trekstor_primebook_c11_data,
.matches = {
diff --git a/drivers/power/supply/bq25980_charger.c b/drivers/power/supply/bq25980_charger.c
index c936f31..b94ecf8 100644
--- a/drivers/power/supply/bq25980_charger.c
+++ b/drivers/power/supply/bq25980_charger.c
@@ -606,33 +606,6 @@ static int bq25980_get_state(struct bq25980_device *bq,
return 0;
}
-static int bq25980_set_battery_property(struct power_supply *psy,
- enum power_supply_property psp,
- const union power_supply_propval *val)
-{
- struct bq25980_device *bq = power_supply_get_drvdata(psy);
- int ret = 0;
-
- switch (psp) {
- case POWER_SUPPLY_PROP_CONSTANT_CHARGE_CURRENT:
- ret = bq25980_set_const_charge_curr(bq, val->intval);
- if (ret)
- return ret;
- break;
-
- case POWER_SUPPLY_PROP_CONSTANT_CHARGE_VOLTAGE:
- ret = bq25980_set_const_charge_volt(bq, val->intval);
- if (ret)
- return ret;
- break;
-
- default:
- return -EINVAL;
- }
-
- return ret;
-}
-
static int bq25980_get_battery_property(struct power_supply *psy,
enum power_supply_property psp,
union power_supply_propval *val)
@@ -701,6 +674,18 @@ static int bq25980_set_charger_property(struct power_supply *psy,
return ret;
break;
+ case POWER_SUPPLY_PROP_CONSTANT_CHARGE_CURRENT:
+ ret = bq25980_set_const_charge_curr(bq, val->intval);
+ if (ret)
+ return ret;
+ break;
+
+ case POWER_SUPPLY_PROP_CONSTANT_CHARGE_VOLTAGE:
+ ret = bq25980_set_const_charge_volt(bq, val->intval);
+ if (ret)
+ return ret;
+ break;
+
default:
return -EINVAL;
}
@@ -922,7 +907,6 @@ static struct power_supply_desc bq25980_battery_desc = {
.name = "bq25980-battery",
.type = POWER_SUPPLY_TYPE_BATTERY,
.get_property = bq25980_get_battery_property,
- .set_property = bq25980_set_battery_property,
.properties = bq25980_battery_props,
.num_properties = ARRAY_SIZE(bq25980_battery_props),
.property_is_writeable = bq25980_property_is_writeable,
diff --git a/drivers/power/supply/bq27xxx_battery.c b/drivers/power/supply/bq27xxx_battery.c
index 315e090..72a2bcf 100644
--- a/drivers/power/supply/bq27xxx_battery.c
+++ b/drivers/power/supply/bq27xxx_battery.c
@@ -1632,27 +1632,6 @@ static int bq27xxx_battery_read_time(struct bq27xxx_device_info *di, u8 reg)
}
/*
- * Read an average power register.
- * Return < 0 if something fails.
- */
-static int bq27xxx_battery_read_pwr_avg(struct bq27xxx_device_info *di)
-{
- int tval;
-
- tval = bq27xxx_read(di, BQ27XXX_REG_AP, false);
- if (tval < 0) {
- dev_err(di->dev, "error reading average power register %02x: %d\n",
- BQ27XXX_REG_AP, tval);
- return tval;
- }
-
- if (di->opts & BQ27XXX_O_ZERO)
- return (tval * BQ27XXX_POWER_CONSTANT) / BQ27XXX_RS;
- else
- return tval;
-}
-
-/*
* Returns true if a battery over temperature condition is detected
*/
static bool bq27xxx_battery_overtemp(struct bq27xxx_device_info *di, u16 flags)
@@ -1739,8 +1718,6 @@ void bq27xxx_battery_update(struct bq27xxx_device_info *di)
}
if (di->regs[BQ27XXX_REG_CYCT] != INVALID_REG_ADDR)
cache.cycle_count = bq27xxx_battery_read_cyct(di);
- if (di->regs[BQ27XXX_REG_AP] != INVALID_REG_ADDR)
- cache.power_avg = bq27xxx_battery_read_pwr_avg(di);
/* We only have to read charge design full once */
if (di->charge_design_full <= 0)
@@ -1803,6 +1780,32 @@ static int bq27xxx_battery_current(struct bq27xxx_device_info *di,
return 0;
}
+/*
+ * Get the average power in µW
+ * Return < 0 if something fails.
+ */
+static int bq27xxx_battery_pwr_avg(struct bq27xxx_device_info *di,
+ union power_supply_propval *val)
+{
+ int power;
+
+ power = bq27xxx_read(di, BQ27XXX_REG_AP, false);
+ if (power < 0) {
+ dev_err(di->dev,
+ "error reading average power register %02x: %d\n",
+ BQ27XXX_REG_AP, power);
+ return power;
+ }
+
+ if (di->opts & BQ27XXX_O_ZERO)
+ val->intval = (power * BQ27XXX_POWER_CONSTANT) / BQ27XXX_RS;
+ else
+ /* Other gauges return a signed value in units of 10mW */
+ val->intval = (int)((s16)power) * 10000;
+
+ return 0;
+}
+
static int bq27xxx_battery_status(struct bq27xxx_device_info *di,
union power_supply_propval *val)
{
@@ -1987,7 +1990,7 @@ static int bq27xxx_battery_get_property(struct power_supply *psy,
ret = bq27xxx_simple_value(di->cache.energy, val);
break;
case POWER_SUPPLY_PROP_POWER_AVG:
- ret = bq27xxx_simple_value(di->cache.power_avg, val);
+ ret = bq27xxx_battery_pwr_avg(di, val);
break;
case POWER_SUPPLY_PROP_HEALTH:
ret = bq27xxx_simple_value(di->cache.health, val);
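
The new bq27xxx_battery_pwr_avg() reads the average-power register on demand instead of caching it, and scales the result to microwatts: BQ27XXX_O_ZERO gauges use the BQ27XXX_POWER_CONSTANT / BQ27XXX_RS conversion, while the other gauges report a signed 16-bit value in 10 mW units. A small sketch of the sign-extension and scaling for the non-ZERO case (the raw register value below is an illustrative assumption):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint16_t raw = 0xff38;				/* example register value */
	int32_t uw = (int32_t)(int16_t)raw * 10000;	/* signed 10 mW units -> uW */

	printf("raw 0x%04x -> %d uW (%.2f W)\n", raw, uw, uw / 1e6);
	return 0;
}
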
diff --git a/drivers/power/supply/cpcap-battery.c b/drivers/power/supply/cpcap-battery.c
index cebc5c8..793d4ca 100644
--- a/drivers/power/supply/cpcap-battery.c
+++ b/drivers/power/supply/cpcap-battery.c
@@ -626,7 +626,7 @@ static irqreturn_t cpcap_battery_irq_thread(int irq, void *data)
break;
}
- if (!d)
+ if (list_entry_is_head(d, &ddata->irq_list, node))
return IRQ_NONE;
latest = cpcap_battery_latest(ddata);
diff --git a/drivers/power/supply/cpcap-charger.c b/drivers/power/supply/cpcap-charger.c
index 22fff01..891e1eb 100644
--- a/drivers/power/supply/cpcap-charger.c
+++ b/drivers/power/supply/cpcap-charger.c
@@ -633,6 +633,9 @@ static void cpcap_usb_detect(struct work_struct *work)
return;
}
+ /* Delay for 80ms to avoid vbus bouncing when usb cable is plugged in */
+ usleep_range(80000, 120000);
+
/* Throttle chrgcurr2 interrupt for charger done and retry */
switch (ddata->state) {
case CPCAP_CHARGER_CHARGING:
diff --git a/drivers/power/supply/generic-adc-battery.c b/drivers/power/supply/generic-adc-battery.c
index caa8297..58f0931 100644
--- a/drivers/power/supply/generic-adc-battery.c
+++ b/drivers/power/supply/generic-adc-battery.c
@@ -382,7 +382,7 @@ static int gab_remove(struct platform_device *pdev)
}
kfree(adc_bat->psy_desc.properties);
- cancel_delayed_work(&adc_bat->bat_work);
+ cancel_delayed_work_sync(&adc_bat->bat_work);
return 0;
}
diff --git a/drivers/power/supply/lp8788-charger.c b/drivers/power/supply/lp8788-charger.c
index e7931ff..397e5a0 100644
--- a/drivers/power/supply/lp8788-charger.c
+++ b/drivers/power/supply/lp8788-charger.c
@@ -501,7 +501,7 @@ static int lp8788_set_irqs(struct platform_device *pdev,
ret = request_threaded_irq(virq, NULL,
lp8788_charger_irq_thread,
- 0, name, pchg);
+ IRQF_ONESHOT, name, pchg);
if (ret)
break;
}
diff --git a/drivers/power/supply/pm2301_charger.c b/drivers/power/supply/pm2301_charger.c
index 2df6a24..34f168f 100644
--- a/drivers/power/supply/pm2301_charger.c
+++ b/drivers/power/supply/pm2301_charger.c
@@ -1090,7 +1090,7 @@ static int pm2xxx_wall_charger_probe(struct i2c_client *i2c_client,
ret = request_threaded_irq(gpio_to_irq(pm2->pdata->gpio_irq_number),
NULL,
pm2xxx_charger_irq[0].isr,
- pm2->pdata->irq_type,
+ pm2->pdata->irq_type | IRQF_ONESHOT,
pm2xxx_charger_irq[0].name, pm2);
if (ret != 0) {
diff --git a/drivers/power/supply/s3c_adc_battery.c b/drivers/power/supply/s3c_adc_battery.c
index 60b7f41..ff46bcf 100644
--- a/drivers/power/supply/s3c_adc_battery.c
+++ b/drivers/power/supply/s3c_adc_battery.c
@@ -394,7 +394,7 @@ static int s3c_adc_bat_remove(struct platform_device *pdev)
gpio_free(pdata->gpio_charge_finished);
}
- cancel_delayed_work(&bat_work);
+ cancel_delayed_work_sync(&bat_work);
if (pdata->exit)
pdata->exit();
diff --git a/drivers/power/supply/tps65090-charger.c b/drivers/power/supply/tps65090-charger.c
index 6b0098e..0990b2f 100644
--- a/drivers/power/supply/tps65090-charger.c
+++ b/drivers/power/supply/tps65090-charger.c
@@ -301,7 +301,7 @@ static int tps65090_charger_probe(struct platform_device *pdev)
if (irq != -ENXIO) {
ret = devm_request_threaded_irq(&pdev->dev, irq, NULL,
- tps65090_charger_isr, 0, "tps65090-charger", cdata);
+ tps65090_charger_isr, IRQF_ONESHOT, "tps65090-charger", cdata);
if (ret) {
dev_err(cdata->dev,
"Unable to register irq %d err %d\n", irq,
diff --git a/drivers/power/supply/tps65217_charger.c b/drivers/power/supply/tps65217_charger.c
index 814c2b8..ba33d16 100644
--- a/drivers/power/supply/tps65217_charger.c
+++ b/drivers/power/supply/tps65217_charger.c
@@ -238,7 +238,7 @@ static int tps65217_charger_probe(struct platform_device *pdev)
for (i = 0; i < NUM_CHARGER_IRQS; i++) {
ret = devm_request_threaded_irq(&pdev->dev, irq[i], NULL,
tps65217_charger_irq,
- 0, "tps65217-charger",
+ IRQF_ONESHOT, "tps65217-charger",
charger);
if (ret) {
dev_err(charger->dev,
diff --git a/drivers/ptp/ptp_qoriq.c b/drivers/ptp/ptp_qoriq.c
index beb5f74..08f4cf0 100644
--- a/drivers/ptp/ptp_qoriq.c
+++ b/drivers/ptp/ptp_qoriq.c
@@ -189,15 +189,16 @@ int ptp_qoriq_adjfine(struct ptp_clock_info *ptp, long scaled_ppm)
tmr_add = ptp_qoriq->tmr_add;
adj = tmr_add;
- /* calculate diff as adj*(scaled_ppm/65536)/1000000
- * and round() to the nearest integer
+ /*
+ * Calculate diff and round() to the nearest integer
+ *
+ * diff = adj * (ppb / 1000000000)
+ * = adj * scaled_ppm / 65536000000
*/
- adj *= scaled_ppm;
- diff = div_u64(adj, 8000000);
- diff = (diff >> 13) + ((diff >> 12) & 1);
+ diff = mul_u64_u64_div_u64(adj, scaled_ppm, 32768000000);
+ diff = DIV64_U64_ROUND_UP(diff, 2);
tmr_add = neg_adj ? tmr_add - diff : tmr_add + diff;
-
ptp_qoriq->write(&regs->ctrl_regs->tmr_add, tmr_add);
return 0;
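
The rewritten adjfine math computes diff = tmr_add * scaled_ppm / 65536000000 with round-to-nearest: mul_u64_u64_div_u64() divides by half the denominator, then the rounding halving finishes the job. A user-space sketch of the same arithmetic, with unsigned __int128 standing in for the 64x64->128 helper and illustrative register values:

#include <stdio.h>
#include <stdint.h>

/* Round-to-nearest of adj * scaled_ppm / 65536000000, done as in the patch. */
static uint64_t adjfine_diff(uint64_t adj, uint64_t scaled_ppm)
{
	unsigned __int128 prod = (unsigned __int128)adj * scaled_ppm;
	uint64_t diff = (uint64_t)(prod / 32768000000ULL);

	return (diff + 1) / 2;		/* DIV64_U64_ROUND_UP(diff, 2) */
}

int main(void)
{
	/* Assumed values: tmr_add of 0x80000000 and a +1 ppm request
	 * (scaled_ppm = 1 << 16). Expect roughly adj / 1e6, i.e. 2147. */
	printf("diff = %llu\n",
	       (unsigned long long)adjfine_diff(0x80000000ULL, 1 << 16));
	return 0;
}
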
diff --git a/drivers/pwm/pwm-atmel.c b/drivers/pwm/pwm-atmel.c
index 6161e7e..d7cb0df 100644
--- a/drivers/pwm/pwm-atmel.c
+++ b/drivers/pwm/pwm-atmel.c
@@ -319,7 +319,7 @@ static void atmel_pwm_get_state(struct pwm_chip *chip, struct pwm_device *pwm,
cdty = atmel_pwm_ch_readl(atmel_pwm, pwm->hwpwm,
atmel_pwm->data->regs.duty);
- tmp = (u64)cdty * NSEC_PER_SEC;
+ tmp = (u64)(cprd - cdty) * NSEC_PER_SEC;
tmp <<= pres;
state->duty_cycle = DIV64_U64_ROUND_UP(tmp, rate);
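
The readback fix above reflects that the Atmel CDTY register counts the inactive part of the cycle, so the on-time is (cprd - cdty), not cdty. A quick numeric check of the corrected conversion under assumed clock settings (all values illustrative):

#include <stdio.h>
#include <stdint.h>

#define NSEC_PER_SEC 1000000000ULL

int main(void)
{
	uint64_t rate = 133000000;	/* assumed peripheral clock, Hz */
	unsigned int pres = 0;		/* prescaler exponent */
	uint64_t cprd = 1000, cdty = 250;

	uint64_t tmp = (cprd - cdty) * NSEC_PER_SEC;
	tmp <<= pres;
	uint64_t duty_ns = (tmp + rate - 1) / rate;	/* DIV64_U64_ROUND_UP */

	tmp = cprd * NSEC_PER_SEC;
	tmp <<= pres;
	uint64_t period_ns = (tmp + rate - 1) / rate;

	/* cdty = 250 of cprd = 1000 gives a 75% duty cycle, since CDTY is the off-time */
	printf("duty %llu ns of period %llu ns\n",
	       (unsigned long long)duty_ns, (unsigned long long)period_ns);
	return 0;
}
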
diff --git a/drivers/rapidio/rio_cm.c b/drivers/rapidio/rio_cm.c
index 50ec53d..db4c265 100644
--- a/drivers/rapidio/rio_cm.c
+++ b/drivers/rapidio/rio_cm.c
@@ -2127,6 +2127,14 @@ static int riocm_add_mport(struct device *dev,
return -ENODEV;
}
+ cm->rx_wq = create_workqueue(DRV_NAME "/rxq");
+ if (!cm->rx_wq) {
+ rio_release_inb_mbox(mport, cmbox);
+ rio_release_outb_mbox(mport, cmbox);
+ kfree(cm);
+ return -ENOMEM;
+ }
+
/*
* Allocate and register inbound messaging buffers to be ready
* to receive channel and system management requests
@@ -2137,15 +2145,6 @@ static int riocm_add_mport(struct device *dev,
cm->rx_slots = RIOCM_RX_RING_SIZE;
mutex_init(&cm->rx_lock);
riocm_rx_fill(cm, RIOCM_RX_RING_SIZE);
- cm->rx_wq = create_workqueue(DRV_NAME "/rxq");
- if (!cm->rx_wq) {
- riocm_error("failed to allocate IBMBOX_%d on %s",
- cmbox, mport->name);
- rio_release_outb_mbox(mport, cmbox);
- kfree(cm);
- return -ENOMEM;
- }
-
INIT_WORK(&cm->rx_work, rio_ibmsg_handler);
cm->tx_slot = 0;
diff --git a/drivers/ras/cec.c b/drivers/ras/cec.c
index ddecf25..d7894f1 100644
--- a/drivers/ras/cec.c
+++ b/drivers/ras/cec.c
@@ -309,11 +309,20 @@ static bool sanity_check(struct ce_array *ca)
return ret;
}
+/**
+ * cec_add_elem - Add an element to the CEC array.
+ * @pfn: page frame number to insert
+ *
+ * Return values:
+ * - <0: on error
+ * - 0: on success
+ * - >0: when the inserted pfn was offlined
+ */
static int cec_add_elem(u64 pfn)
{
struct ce_array *ca = &ce_arr;
+ int count, err, ret = 0;
unsigned int to = 0;
- int count, ret = 0;
/*
* We can be called very early on the identify_cpu() path where we are
@@ -330,8 +339,8 @@ static int cec_add_elem(u64 pfn)
if (ca->n == MAX_ELEMS)
WARN_ON(!del_lru_elem_unlocked(ca));
- ret = find_elem(ca, pfn, &to);
- if (ret < 0) {
+ err = find_elem(ca, pfn, &to);
+ if (err < 0) {
/*
* Shift range [to-end] to make room for one more element.
*/
diff --git a/drivers/regulator/bd9571mwv-regulator.c b/drivers/regulator/bd9571mwv-regulator.c
index e690c2c..25e3302 100644
--- a/drivers/regulator/bd9571mwv-regulator.c
+++ b/drivers/regulator/bd9571mwv-regulator.c
@@ -124,7 +124,7 @@ static const struct regulator_ops vid_ops = {
static const struct regulator_desc regulators[] = {
BD9571MWV_REG("VD09", "vd09", VD09, avs_ops, 0, 0x7f,
- 0x80, 600000, 10000, 0x3c),
+ 0x6f, 600000, 10000, 0x3c),
BD9571MWV_REG("VD18", "vd18", VD18, vid_ops, BD9571MWV_VD18_VID, 0xf,
16, 1625000, 25000, 0),
BD9571MWV_REG("VD25", "vd25", VD25, vid_ops, BD9571MWV_VD25_VID, 0xf,
@@ -133,7 +133,7 @@ static const struct regulator_desc regulators[] = {
11, 2800000, 100000, 0),
BD9571MWV_REG("DVFS", "dvfs", DVFS, reg_ops,
BD9571MWV_DVFS_MONIVDAC, 0x7f,
- 0x80, 600000, 10000, 0x3c),
+ 0x6f, 600000, 10000, 0x3c),
};
#ifdef CONFIG_PM_SLEEP
diff --git a/drivers/regulator/bd9576-regulator.c b/drivers/regulator/bd9576-regulator.c
index a8b5832a..204a2da 100644
--- a/drivers/regulator/bd9576-regulator.c
+++ b/drivers/regulator/bd9576-regulator.c
@@ -206,7 +206,7 @@ static int bd957x_probe(struct platform_device *pdev)
{
struct regmap *regmap;
struct regulator_config config = { 0 };
- int i, err;
+ int i;
bool vout_mode, ddr_sel;
const struct bd957x_regulator_data *reg_data = &bd9576_regulators[0];
unsigned int num_reg_data = ARRAY_SIZE(bd9576_regulators);
@@ -279,8 +279,7 @@ static int bd957x_probe(struct platform_device *pdev)
break;
default:
dev_err(&pdev->dev, "Unsupported chip type\n");
- err = -EINVAL;
- goto err;
+ return -EINVAL;
}
config.dev = pdev->dev.parent;
@@ -300,8 +299,7 @@ static int bd957x_probe(struct platform_device *pdev)
dev_err(&pdev->dev,
"failed to register %s regulator\n",
desc->name);
- err = PTR_ERR(rdev);
- goto err;
+ return PTR_ERR(rdev);
}
/*
* Clear the VOUT1 GPIO setting - rest of the regulators do not
@@ -310,8 +308,7 @@ static int bd957x_probe(struct platform_device *pdev)
config.ena_gpiod = NULL;
}
-err:
- return err;
+ return 0;
}
static const struct platform_device_id bd957x_pmic_id[] = {
diff --git a/drivers/remoteproc/qcom_pil_info.c b/drivers/remoteproc/qcom_pil_info.c
index 5521c44..7c007dd 100644
--- a/drivers/remoteproc/qcom_pil_info.c
+++ b/drivers/remoteproc/qcom_pil_info.c
@@ -56,7 +56,7 @@ static int qcom_pil_info_init(void)
memset_io(base, 0, resource_size(&imem));
_reloc.base = base;
- _reloc.num_entries = resource_size(&imem) / PIL_RELOC_ENTRY_SIZE;
+ _reloc.num_entries = (u32)resource_size(&imem) / PIL_RELOC_ENTRY_SIZE;
return 0;
}
diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
index ba6f755..ebc3e75 100644
--- a/drivers/remoteproc/qcom_q6v5_mss.c
+++ b/drivers/remoteproc/qcom_q6v5_mss.c
@@ -1182,7 +1182,15 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
goto release_firmware;
}
- ptr = ioremap_wc(qproc->mpss_phys + offset, phdr->p_memsz);
+ if (phdr->p_filesz > phdr->p_memsz) {
+ dev_err(qproc->dev,
+ "refusing to load segment %d with p_filesz > p_memsz\n",
+ i);
+ ret = -EINVAL;
+ goto release_firmware;
+ }
+
+ ptr = memremap(qproc->mpss_phys + offset, phdr->p_memsz, MEMREMAP_WC);
if (!ptr) {
dev_err(qproc->dev,
"unable to map memory region: %pa+%zx-%x\n",
@@ -1197,7 +1205,7 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
"failed to load segment %d from truncated file %s\n",
i, fw_name);
ret = -EINVAL;
- iounmap(ptr);
+ memunmap(ptr);
goto release_firmware;
}
@@ -1209,7 +1217,17 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
ptr, phdr->p_filesz);
if (ret) {
dev_err(qproc->dev, "failed to load %s\n", fw_name);
- iounmap(ptr);
+ memunmap(ptr);
+ goto release_firmware;
+ }
+
+ if (seg_fw->size != phdr->p_filesz) {
+ dev_err(qproc->dev,
+ "failed to load segment %d from truncated file %s\n",
+ i, fw_name);
+ ret = -EINVAL;
+ release_firmware(seg_fw);
+ memunmap(ptr);
goto release_firmware;
}
@@ -1220,7 +1238,7 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
memset(ptr + phdr->p_filesz, 0,
phdr->p_memsz - phdr->p_filesz);
}
- iounmap(ptr);
+ memunmap(ptr);
size += phdr->p_memsz;
code_length = readl(qproc->rmb_base + RMB_PMI_CODE_LENGTH_REG);
@@ -1287,11 +1305,11 @@ static void qcom_q6v5_dump_segment(struct rproc *rproc,
}
if (!ret)
- ptr = ioremap_wc(qproc->mpss_phys + offset + cp_offset, size);
+ ptr = memremap(qproc->mpss_phys + offset + cp_offset, size, MEMREMAP_WC);
if (ptr) {
memcpy(dest, ptr, size);
- iounmap(ptr);
+ memunmap(ptr);
} else {
memset(dest, 0xff, size);
}
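
The loader changes above switch from ioremap_wc() to memremap(..., MEMREMAP_WC), reject segments whose p_filesz exceeds p_memsz, and verify the per-segment firmware size before copying. The per-segment rule is: copy p_filesz bytes from the file, then zero the remaining p_memsz - p_filesz bytes (the BSS tail). A self-contained sketch of that rule (plain memcpy/memset stand in for the firmware and remap helpers; the struct below is a stand-in, not the real ELF header type):

#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Minimal stand-ins for the relevant ELF program-header fields. */
struct seg_hdr {
	size_t p_filesz;	/* bytes present in the file image */
	size_t p_memsz;		/* bytes occupied in memory */
};

static int load_segment(void *dst, const void *src, size_t src_len,
			const struct seg_hdr *phdr)
{
	if (phdr->p_filesz > phdr->p_memsz)	/* refuse oversized file data */
		return -EINVAL;
	if (src_len < phdr->p_filesz)		/* truncated firmware image */
		return -EINVAL;

	memcpy(dst, src, phdr->p_filesz);
	memset((char *)dst + phdr->p_filesz, 0,
	       phdr->p_memsz - phdr->p_filesz);	/* zero the BSS tail */
	return 0;
}

int main(void)
{
	char image[16] = "firmware";
	char region[32];
	struct seg_hdr phdr = { .p_filesz = 8, .p_memsz = 32 };

	printf("load: %d\n", load_segment(region, image, sizeof(image), &phdr));
	return 0;
}
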
diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c
index 27a0516..4840886 100644
--- a/drivers/rpmsg/qcom_glink_native.c
+++ b/drivers/rpmsg/qcom_glink_native.c
@@ -857,6 +857,7 @@ static int qcom_glink_rx_data(struct qcom_glink *glink, size_t avail)
dev_err(glink->dev,
"no intent found for channel %s intent %d",
channel->name, liid);
+ ret = -ENOENT;
goto advance_rx;
}
}
diff --git a/drivers/rtc/rtc-ds1307.c b/drivers/rtc/rtc-ds1307.c
index 9f5f54c..07a9cc9 100644
--- a/drivers/rtc/rtc-ds1307.c
+++ b/drivers/rtc/rtc-ds1307.c
@@ -295,7 +295,11 @@ static int ds1307_get_time(struct device *dev, struct rtc_time *t)
t->tm_min = bcd2bin(regs[DS1307_REG_MIN] & 0x7f);
tmp = regs[DS1307_REG_HOUR] & 0x3f;
t->tm_hour = bcd2bin(tmp);
- t->tm_wday = bcd2bin(regs[DS1307_REG_WDAY] & 0x07) - 1;
+ /* rx8130 is bit position, not BCD */
+ if (ds1307->type == rx_8130)
+ t->tm_wday = fls(regs[DS1307_REG_WDAY] & 0x7f);
+ else
+ t->tm_wday = bcd2bin(regs[DS1307_REG_WDAY] & 0x07) - 1;
t->tm_mday = bcd2bin(regs[DS1307_REG_MDAY] & 0x3f);
tmp = regs[DS1307_REG_MONTH] & 0x1f;
t->tm_mon = bcd2bin(tmp) - 1;
@@ -342,7 +346,11 @@ static int ds1307_set_time(struct device *dev, struct rtc_time *t)
regs[DS1307_REG_SECS] = bin2bcd(t->tm_sec);
regs[DS1307_REG_MIN] = bin2bcd(t->tm_min);
regs[DS1307_REG_HOUR] = bin2bcd(t->tm_hour);
- regs[DS1307_REG_WDAY] = bin2bcd(t->tm_wday + 1);
+ /* rx8130 is bit position, not BCD */
+ if (ds1307->type == rx_8130)
+ regs[DS1307_REG_WDAY] = 1 << t->tm_wday;
+ else
+ regs[DS1307_REG_WDAY] = bin2bcd(t->tm_wday + 1);
regs[DS1307_REG_MDAY] = bin2bcd(t->tm_mday);
regs[DS1307_REG_MONTH] = bin2bcd(t->tm_mon + 1);
diff --git a/drivers/rtc/rtc-fsl-ftm-alarm.c b/drivers/rtc/rtc-fsl-ftm-alarm.c
index 48d3b38..e08672e 100644
--- a/drivers/rtc/rtc-fsl-ftm-alarm.c
+++ b/drivers/rtc/rtc-fsl-ftm-alarm.c
@@ -310,6 +310,7 @@ static const struct of_device_id ftm_rtc_match[] = {
{ .compatible = "fsl,lx2160a-ftm-alarm", },
{ },
};
+MODULE_DEVICE_TABLE(of, ftm_rtc_match);
static const struct acpi_device_id ftm_imx_acpi_ids[] = {
{"NXP0014",},
diff --git a/drivers/rtc/rtc-pcf85063.c b/drivers/rtc/rtc-pcf85063.c
index f8b99cb..62684ca 100644
--- a/drivers/rtc/rtc-pcf85063.c
+++ b/drivers/rtc/rtc-pcf85063.c
@@ -486,6 +486,7 @@ static struct clk *pcf85063_clkout_register_clk(struct pcf85063 *pcf85063)
{
struct clk *clk;
struct clk_init_data init;
+ struct device_node *node = pcf85063->rtc->dev.parent->of_node;
init.name = "pcf85063-clkout";
init.ops = &pcf85063_clkout_ops;
@@ -495,15 +496,13 @@ static struct clk *pcf85063_clkout_register_clk(struct pcf85063 *pcf85063)
pcf85063->clkout_hw.init = &init;
/* optional override of the clockname */
- of_property_read_string(pcf85063->rtc->dev.of_node,
- "clock-output-names", &init.name);
+ of_property_read_string(node, "clock-output-names", &init.name);
/* register the clock */
clk = devm_clk_register(&pcf85063->rtc->dev, &pcf85063->clkout_hw);
if (!IS_ERR(clk))
- of_clk_add_provider(pcf85063->rtc->dev.of_node,
- of_clk_src_simple_get, clk);
+ of_clk_add_provider(node, of_clk_src_simple_get, clk);
return clk;
}
diff --git a/drivers/s390/cio/vfio_ccw_cp.c b/drivers/s390/cio/vfio_ccw_cp.c
index b9febc5..8d1b277 100644
--- a/drivers/s390/cio/vfio_ccw_cp.c
+++ b/drivers/s390/cio/vfio_ccw_cp.c
@@ -638,6 +638,10 @@ int cp_init(struct channel_program *cp, struct device *mdev, union orb *orb)
static DEFINE_RATELIMIT_STATE(ratelimit_state, 5 * HZ, 1);
int ret;
+ /* this is an error in the caller */
+ if (cp->initialized)
+ return -EBUSY;
+
/*
* We only support prefetching the channel program. We assume all channel
* programs executed by supported guests likewise support prefetching.
diff --git a/drivers/s390/crypto/zcrypt_card.c b/drivers/s390/crypto/zcrypt_card.c
index 33b2388..09fe6bb 100644
--- a/drivers/s390/crypto/zcrypt_card.c
+++ b/drivers/s390/crypto/zcrypt_card.c
@@ -192,5 +192,6 @@ void zcrypt_card_unregister(struct zcrypt_card *zc)
spin_unlock(&zcrypt_list_lock);
sysfs_remove_group(&zc->card->ap_dev.device.kobj,
&zcrypt_card_attr_group);
+ zcrypt_card_put(zc);
}
EXPORT_SYMBOL(zcrypt_card_unregister);
diff --git a/drivers/s390/crypto/zcrypt_queue.c b/drivers/s390/crypto/zcrypt_queue.c
index 5062eae..c3ffbd2 100644
--- a/drivers/s390/crypto/zcrypt_queue.c
+++ b/drivers/s390/crypto/zcrypt_queue.c
@@ -223,5 +223,6 @@ void zcrypt_queue_unregister(struct zcrypt_queue *zq)
sysfs_remove_group(&zq->queue->ap_dev.device.kobj,
&zcrypt_queue_attr_group);
zcrypt_card_put(zc);
+ zcrypt_queue_put(zq);
}
EXPORT_SYMBOL(zcrypt_queue_unregister);
diff --git a/drivers/scsi/BusLogic.c b/drivers/scsi/BusLogic.c
index ccb061a..7231de2 100644
--- a/drivers/scsi/BusLogic.c
+++ b/drivers/scsi/BusLogic.c
@@ -3078,11 +3078,11 @@ static int blogic_qcmd_lck(struct scsi_cmnd *command,
ccb->opcode = BLOGIC_INITIATOR_CCB_SG;
ccb->datalen = count * sizeof(struct blogic_sg_seg);
if (blogic_multimaster_type(adapter))
- ccb->data = (void *)((unsigned int) ccb->dma_handle +
+ ccb->data = (unsigned int) ccb->dma_handle +
((unsigned long) &ccb->sglist -
- (unsigned long) ccb));
+ (unsigned long) ccb);
else
- ccb->data = ccb->sglist;
+ ccb->data = virt_to_32bit_virt(ccb->sglist);
scsi_for_each_sg(command, sg, count, i) {
ccb->sglist[i].segbytes = sg_dma_len(sg);
diff --git a/drivers/scsi/BusLogic.h b/drivers/scsi/BusLogic.h
index 6182cc8..e081ad4 100644
--- a/drivers/scsi/BusLogic.h
+++ b/drivers/scsi/BusLogic.h
@@ -814,7 +814,7 @@ struct blogic_ccb {
unsigned char cdblen; /* Byte 2 */
unsigned char sense_datalen; /* Byte 3 */
u32 datalen; /* Bytes 4-7 */
- void *data; /* Bytes 8-11 */
+ u32 data; /* Bytes 8-11 */
unsigned char:8; /* Byte 12 */
unsigned char:8; /* Byte 13 */
enum blogic_adapter_status adapter_status; /* Byte 14 */
diff --git a/drivers/scsi/device_handler/scsi_dh_alua.c b/drivers/scsi/device_handler/scsi_dh_alua.c
index 308bda2e..df5a3bb 100644
--- a/drivers/scsi/device_handler/scsi_dh_alua.c
+++ b/drivers/scsi/device_handler/scsi_dh_alua.c
@@ -565,10 +565,11 @@ static int alua_rtpg(struct scsi_device *sdev, struct alua_port_group *pg)
* even though it shouldn't according to T10.
* The retry without rtpg_ext_hdr_req set
* handles this.
+ * Note: some arrays return a sense key of ILLEGAL_REQUEST
+ * with ASC 00h if they don't support the extended header.
*/
if (!(pg->flags & ALUA_RTPG_EXT_HDR_UNSUPP) &&
- sense_hdr.sense_key == ILLEGAL_REQUEST &&
- sense_hdr.asc == 0x24 && sense_hdr.ascq == 0) {
+ sense_hdr.sense_key == ILLEGAL_REQUEST) {
pg->flags |= ALUA_RTPG_EXT_HDR_UNSUPP;
goto retry;
}
diff --git a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
index 22eecc8..6c2a97f 100644
--- a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
+++ b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
@@ -1644,7 +1644,7 @@ static int interrupt_init_v1_hw(struct hisi_hba *hisi_hba)
idx = i * HISI_SAS_PHY_INT_NR;
for (j = 0; j < HISI_SAS_PHY_INT_NR; j++, idx++) {
irq = platform_get_irq(pdev, idx);
- if (!irq) {
+ if (irq < 0) {
dev_err(dev, "irq init: fail map phy interrupt %d\n",
idx);
return -ENOENT;
@@ -1663,7 +1663,7 @@ static int interrupt_init_v1_hw(struct hisi_hba *hisi_hba)
idx = hisi_hba->n_phy * HISI_SAS_PHY_INT_NR;
for (i = 0; i < hisi_hba->queue_count; i++, idx++) {
irq = platform_get_irq(pdev, idx);
- if (!irq) {
+ if (irq < 0) {
dev_err(dev, "irq init: could not map cq interrupt %d\n",
idx);
return -ENOENT;
@@ -1681,7 +1681,7 @@ static int interrupt_init_v1_hw(struct hisi_hba *hisi_hba)
idx = (hisi_hba->n_phy * HISI_SAS_PHY_INT_NR) + hisi_hba->queue_count;
for (i = 0; i < HISI_SAS_FATAL_INT_NR; i++, idx++) {
irq = platform_get_irq(pdev, idx);
- if (!irq) {
+ if (irq < 0) {
dev_err(dev, "irq init: could not map fatal interrupt %d\n",
idx);
return -ENOENT;
diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
index 57c9a71..f6d6539 100644
--- a/drivers/scsi/ibmvscsi/ibmvfc.c
+++ b/drivers/scsi/ibmvscsi/ibmvfc.c
@@ -532,8 +532,17 @@ static void ibmvfc_set_host_action(struct ibmvfc_host *vhost,
if (vhost->action == IBMVFC_HOST_ACTION_ALLOC_TGTS)
vhost->action = action;
break;
+ case IBMVFC_HOST_ACTION_REENABLE:
+ case IBMVFC_HOST_ACTION_RESET:
+ vhost->action = action;
+ break;
case IBMVFC_HOST_ACTION_INIT:
case IBMVFC_HOST_ACTION_TGT_DEL:
+ case IBMVFC_HOST_ACTION_LOGO:
+ case IBMVFC_HOST_ACTION_QUERY_TGTS:
+ case IBMVFC_HOST_ACTION_TGT_DEL_FAILED:
+ case IBMVFC_HOST_ACTION_NONE:
+ default:
switch (vhost->action) {
case IBMVFC_HOST_ACTION_RESET:
case IBMVFC_HOST_ACTION_REENABLE:
@@ -543,15 +552,6 @@ static void ibmvfc_set_host_action(struct ibmvfc_host *vhost,
break;
}
break;
- case IBMVFC_HOST_ACTION_LOGO:
- case IBMVFC_HOST_ACTION_QUERY_TGTS:
- case IBMVFC_HOST_ACTION_TGT_DEL_FAILED:
- case IBMVFC_HOST_ACTION_NONE:
- case IBMVFC_HOST_ACTION_RESET:
- case IBMVFC_HOST_ACTION_REENABLE:
- default:
- vhost->action = action;
- break;
}
}
@@ -4658,26 +4658,45 @@ static void ibmvfc_do_work(struct ibmvfc_host *vhost)
case IBMVFC_HOST_ACTION_INIT_WAIT:
break;
case IBMVFC_HOST_ACTION_RESET:
- vhost->action = IBMVFC_HOST_ACTION_TGT_DEL;
spin_unlock_irqrestore(vhost->host->host_lock, flags);
rc = ibmvfc_reset_crq(vhost);
+
spin_lock_irqsave(vhost->host->host_lock, flags);
- if (rc == H_CLOSED)
+ if (!rc || rc == H_CLOSED)
vio_enable_interrupts(to_vio_dev(vhost->dev));
- if (rc || (rc = ibmvfc_send_crq_init(vhost)) ||
- (rc = vio_enable_interrupts(to_vio_dev(vhost->dev)))) {
- ibmvfc_link_down(vhost, IBMVFC_LINK_DEAD);
- dev_err(vhost->dev, "Error after reset (rc=%d)\n", rc);
+ if (vhost->action == IBMVFC_HOST_ACTION_RESET) {
+ /*
+ * The only action we could have changed to would have
+ * been reenable, in which case, we skip the rest of
+ * this path and wait until we've done the re-enable
+ * before sending the crq init.
+ */
+ vhost->action = IBMVFC_HOST_ACTION_TGT_DEL;
+
+ if (rc || (rc = ibmvfc_send_crq_init(vhost)) ||
+ (rc = vio_enable_interrupts(to_vio_dev(vhost->dev)))) {
+ ibmvfc_link_down(vhost, IBMVFC_LINK_DEAD);
+ dev_err(vhost->dev, "Error after reset (rc=%d)\n", rc);
+ }
}
break;
case IBMVFC_HOST_ACTION_REENABLE:
- vhost->action = IBMVFC_HOST_ACTION_TGT_DEL;
spin_unlock_irqrestore(vhost->host->host_lock, flags);
rc = ibmvfc_reenable_crq_queue(vhost);
+
spin_lock_irqsave(vhost->host->host_lock, flags);
- if (rc || (rc = ibmvfc_send_crq_init(vhost))) {
- ibmvfc_link_down(vhost, IBMVFC_LINK_DEAD);
- dev_err(vhost->dev, "Error after enable (rc=%d)\n", rc);
+ if (vhost->action == IBMVFC_HOST_ACTION_REENABLE) {
+ /*
+ * The only action we could have changed to would have
+ * been reset, in which case, we skip the rest of this
+ * path and wait until we've done the reset before
+ * sending the crq init.
+ */
+ vhost->action = IBMVFC_HOST_ACTION_TGT_DEL;
+ if (rc || (rc = ibmvfc_send_crq_init(vhost))) {
+ ibmvfc_link_down(vhost, IBMVFC_LINK_DEAD);
+ dev_err(vhost->dev, "Error after enable (rc=%d)\n", rc);
+ }
}
break;
case IBMVFC_HOST_ACTION_LOGO:
diff --git a/drivers/scsi/jazz_esp.c b/drivers/scsi/jazz_esp.c
index f0ed686..60a88a9 100644
--- a/drivers/scsi/jazz_esp.c
+++ b/drivers/scsi/jazz_esp.c
@@ -143,7 +143,9 @@ static int esp_jazz_probe(struct platform_device *dev)
if (!esp->command_block)
goto fail_unmap_regs;
- host->irq = platform_get_irq(dev, 0);
+ host->irq = err = platform_get_irq(dev, 0);
+ if (err < 0)
+ goto fail_unmap_command_block;
err = request_irq(host->irq, scsi_esp_intr, IRQF_SHARED, "ESP", esp);
if (err < 0)
goto fail_unmap_command_block;
diff --git a/drivers/scsi/libfc/fc_lport.c b/drivers/scsi/libfc/fc_lport.c
index 6557fda..abb14b2 100644
--- a/drivers/scsi/libfc/fc_lport.c
+++ b/drivers/scsi/libfc/fc_lport.c
@@ -1731,7 +1731,7 @@ void fc_lport_flogi_resp(struct fc_seq *sp, struct fc_frame *fp,
if (mfs < FC_SP_MIN_MAX_PAYLOAD || mfs > FC_SP_MAX_MAX_PAYLOAD) {
FC_LPORT_DBG(lport, "FLOGI bad mfs:%hu response, "
- "lport->mfs:%hu\n", mfs, lport->mfs);
+ "lport->mfs:%u\n", mfs, lport->mfs);
fc_lport_error(lport, fp);
goto out;
}
diff --git a/drivers/scsi/libsas/sas_ata.c b/drivers/scsi/libsas/sas_ata.c
index 024e5a5..8b9a390 100644
--- a/drivers/scsi/libsas/sas_ata.c
+++ b/drivers/scsi/libsas/sas_ata.c
@@ -201,18 +201,17 @@ static unsigned int sas_ata_qc_issue(struct ata_queued_cmd *qc)
memcpy(task->ata_task.atapi_packet, qc->cdb, qc->dev->cdb_len);
task->total_xfer_len = qc->nbytes;
task->num_scatter = qc->n_elem;
+ task->data_dir = qc->dma_dir;
+ } else if (qc->tf.protocol == ATA_PROT_NODATA) {
+ task->data_dir = DMA_NONE;
} else {
for_each_sg(qc->sg, sg, qc->n_elem, si)
xfer += sg_dma_len(sg);
task->total_xfer_len = xfer;
task->num_scatter = si;
- }
-
- if (qc->tf.protocol == ATA_PROT_NODATA)
- task->data_dir = DMA_NONE;
- else
task->data_dir = qc->dma_dir;
+ }
task->scatter = qc->sg;
task->ata_task.retry_count = 1;
task->task_state_flags = SAS_TASK_STATE_PENDING;
diff --git a/drivers/scsi/libsas/sas_port.c b/drivers/scsi/libsas/sas_port.c
index 19cf418..e3d03d7 100644
--- a/drivers/scsi/libsas/sas_port.c
+++ b/drivers/scsi/libsas/sas_port.c
@@ -25,7 +25,7 @@ static bool phy_is_wideport_member(struct asd_sas_port *port, struct asd_sas_phy
static void sas_resume_port(struct asd_sas_phy *phy)
{
- struct domain_device *dev;
+ struct domain_device *dev, *n;
struct asd_sas_port *port = phy->port;
struct sas_ha_struct *sas_ha = phy->ha;
struct sas_internal *si = to_sas_internal(sas_ha->core.shost->transportt);
@@ -44,7 +44,7 @@ static void sas_resume_port(struct asd_sas_phy *phy)
* 1/ presume every device came back
* 2/ force the next revalidation to check all expander phys
*/
- list_for_each_entry(dev, &port->dev_list, dev_list_node) {
+ list_for_each_entry_safe(dev, n, &port->dev_list, dev_list_node) {
int i, rc;
rc = sas_notify_lldd_dev_found(dev);
diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
index e94eac1..bdea286 100644
--- a/drivers/scsi/lpfc/lpfc_attr.c
+++ b/drivers/scsi/lpfc/lpfc_attr.c
@@ -1690,8 +1690,7 @@ lpfc_set_trunking(struct lpfc_hba *phba, char *buff_out)
lpfc_printf_log(phba, KERN_ERR, LOG_MBOX,
"0071 Set trunk mode failed with status: %d",
rc);
- if (rc != MBX_TIMEOUT)
- mempool_free(mbox, phba->mbox_mem_pool);
+ mempool_free(mbox, phba->mbox_mem_pool);
return 0;
}
@@ -6780,15 +6779,19 @@ lpfc_get_stats(struct Scsi_Host *shost)
pmboxq->ctx_buf = NULL;
pmboxq->vport = vport;
- if (vport->fc_flag & FC_OFFLINE_MODE)
+ if (vport->fc_flag & FC_OFFLINE_MODE) {
rc = lpfc_sli_issue_mbox(phba, pmboxq, MBX_POLL);
- else
- rc = lpfc_sli_issue_mbox_wait(phba, pmboxq, phba->fc_ratov * 2);
-
- if (rc != MBX_SUCCESS) {
- if (rc != MBX_TIMEOUT)
+ if (rc != MBX_SUCCESS) {
mempool_free(pmboxq, phba->mbox_mem_pool);
- return NULL;
+ return NULL;
+ }
+ } else {
+ rc = lpfc_sli_issue_mbox_wait(phba, pmboxq, phba->fc_ratov * 2);
+ if (rc != MBX_SUCCESS) {
+ if (rc != MBX_TIMEOUT)
+ mempool_free(pmboxq, phba->mbox_mem_pool);
+ return NULL;
+ }
}
memset(hs, 0, sizeof (struct fc_host_statistics));
@@ -6812,15 +6815,19 @@ lpfc_get_stats(struct Scsi_Host *shost)
pmboxq->ctx_buf = NULL;
pmboxq->vport = vport;
- if (vport->fc_flag & FC_OFFLINE_MODE)
+ if (vport->fc_flag & FC_OFFLINE_MODE) {
rc = lpfc_sli_issue_mbox(phba, pmboxq, MBX_POLL);
- else
- rc = lpfc_sli_issue_mbox_wait(phba, pmboxq, phba->fc_ratov * 2);
-
- if (rc != MBX_SUCCESS) {
- if (rc != MBX_TIMEOUT)
+ if (rc != MBX_SUCCESS) {
mempool_free(pmboxq, phba->mbox_mem_pool);
- return NULL;
+ return NULL;
+ }
+ } else {
+ rc = lpfc_sli_issue_mbox_wait(phba, pmboxq, phba->fc_ratov * 2);
+ if (rc != MBX_SUCCESS) {
+ if (rc != MBX_TIMEOUT)
+ mempool_free(pmboxq, phba->mbox_mem_pool);
+ return NULL;
+ }
}
hs->link_failure_count = pmb->un.varRdLnk.linkFailureCnt;
@@ -6893,15 +6900,19 @@ lpfc_reset_stats(struct Scsi_Host *shost)
pmboxq->vport = vport;
if ((vport->fc_flag & FC_OFFLINE_MODE) ||
- (!(psli->sli_flag & LPFC_SLI_ACTIVE)))
+ (!(psli->sli_flag & LPFC_SLI_ACTIVE))) {
rc = lpfc_sli_issue_mbox(phba, pmboxq, MBX_POLL);
- else
- rc = lpfc_sli_issue_mbox_wait(phba, pmboxq, phba->fc_ratov * 2);
-
- if (rc != MBX_SUCCESS) {
- if (rc != MBX_TIMEOUT)
+ if (rc != MBX_SUCCESS) {
mempool_free(pmboxq, phba->mbox_mem_pool);
- return;
+ return;
+ }
+ } else {
+ rc = lpfc_sli_issue_mbox_wait(phba, pmboxq, phba->fc_ratov * 2);
+ if (rc != MBX_SUCCESS) {
+ if (rc != MBX_TIMEOUT)
+ mempool_free(pmboxq, phba->mbox_mem_pool);
+ return;
+ }
}
memset(pmboxq, 0, sizeof(LPFC_MBOXQ_t));
@@ -6911,15 +6922,19 @@ lpfc_reset_stats(struct Scsi_Host *shost)
pmboxq->vport = vport;
if ((vport->fc_flag & FC_OFFLINE_MODE) ||
- (!(psli->sli_flag & LPFC_SLI_ACTIVE)))
+ (!(psli->sli_flag & LPFC_SLI_ACTIVE))) {
rc = lpfc_sli_issue_mbox(phba, pmboxq, MBX_POLL);
- else
+ if (rc != MBX_SUCCESS) {
+ mempool_free(pmboxq, phba->mbox_mem_pool);
+ return;
+ }
+ } else {
rc = lpfc_sli_issue_mbox_wait(phba, pmboxq, phba->fc_ratov * 2);
-
- if (rc != MBX_SUCCESS) {
- if (rc != MBX_TIMEOUT)
- mempool_free( pmboxq, phba->mbox_mem_pool);
- return;
+ if (rc != MBX_SUCCESS) {
+ if (rc != MBX_TIMEOUT)
+ mempool_free(pmboxq, phba->mbox_mem_pool);
+ return;
+ }
}
lso->link_failure_count = pmb->un.varRdLnk.linkFailureCnt;
diff --git a/drivers/scsi/lpfc/lpfc_crtn.h b/drivers/scsi/lpfc/lpfc_crtn.h
index 782f6f7..26f4a34 100644
--- a/drivers/scsi/lpfc/lpfc_crtn.h
+++ b/drivers/scsi/lpfc/lpfc_crtn.h
@@ -55,9 +55,6 @@ void lpfc_register_new_vport(struct lpfc_hba *, struct lpfc_vport *,
void lpfc_unreg_vpi(struct lpfc_hba *, uint16_t, LPFC_MBOXQ_t *);
void lpfc_init_link(struct lpfc_hba *, LPFC_MBOXQ_t *, uint32_t, uint32_t);
void lpfc_request_features(struct lpfc_hba *, struct lpfcMboxq *);
-void lpfc_supported_pages(struct lpfcMboxq *);
-void lpfc_pc_sli4_params(struct lpfcMboxq *);
-int lpfc_pc_sli4_params_get(struct lpfc_hba *, LPFC_MBOXQ_t *);
int lpfc_sli4_mbox_rsrc_extent(struct lpfc_hba *, struct lpfcMboxq *,
uint16_t, uint16_t, bool);
int lpfc_get_sli4_parameters(struct lpfc_hba *, LPFC_MBOXQ_t *);
diff --git a/drivers/scsi/lpfc/lpfc_hw4.h b/drivers/scsi/lpfc/lpfc_hw4.h
index 12e4e76..47e832b 100644
--- a/drivers/scsi/lpfc/lpfc_hw4.h
+++ b/drivers/scsi/lpfc/lpfc_hw4.h
@@ -124,6 +124,7 @@ struct lpfc_sli_intf {
/* Define SLI4 Alignment requirements. */
#define LPFC_ALIGN_16_BYTE 16
#define LPFC_ALIGN_64_BYTE 64
+#define SLI4_PAGE_SIZE 4096
/* Define SLI4 specific definitions. */
#define LPFC_MQ_CQE_BYTE_OFFSET 256
@@ -2976,62 +2977,6 @@ struct lpfc_mbx_request_features {
#define lpfc_mbx_rq_ftr_rsp_mrqp_WORD word3
};
-struct lpfc_mbx_supp_pages {
- uint32_t word1;
-#define qs_SHIFT 0
-#define qs_MASK 0x00000001
-#define qs_WORD word1
-#define wr_SHIFT 1
-#define wr_MASK 0x00000001
-#define wr_WORD word1
-#define pf_SHIFT 8
-#define pf_MASK 0x000000ff
-#define pf_WORD word1
-#define cpn_SHIFT 16
-#define cpn_MASK 0x000000ff
-#define cpn_WORD word1
- uint32_t word2;
-#define list_offset_SHIFT 0
-#define list_offset_MASK 0x000000ff
-#define list_offset_WORD word2
-#define next_offset_SHIFT 8
-#define next_offset_MASK 0x000000ff
-#define next_offset_WORD word2
-#define elem_cnt_SHIFT 16
-#define elem_cnt_MASK 0x000000ff
-#define elem_cnt_WORD word2
- uint32_t word3;
-#define pn_0_SHIFT 24
-#define pn_0_MASK 0x000000ff
-#define pn_0_WORD word3
-#define pn_1_SHIFT 16
-#define pn_1_MASK 0x000000ff
-#define pn_1_WORD word3
-#define pn_2_SHIFT 8
-#define pn_2_MASK 0x000000ff
-#define pn_2_WORD word3
-#define pn_3_SHIFT 0
-#define pn_3_MASK 0x000000ff
-#define pn_3_WORD word3
- uint32_t word4;
-#define pn_4_SHIFT 24
-#define pn_4_MASK 0x000000ff
-#define pn_4_WORD word4
-#define pn_5_SHIFT 16
-#define pn_5_MASK 0x000000ff
-#define pn_5_WORD word4
-#define pn_6_SHIFT 8
-#define pn_6_MASK 0x000000ff
-#define pn_6_WORD word4
-#define pn_7_SHIFT 0
-#define pn_7_MASK 0x000000ff
-#define pn_7_WORD word4
- uint32_t rsvd[27];
-#define LPFC_SUPP_PAGES 0
-#define LPFC_BLOCK_GUARD_PROFILES 1
-#define LPFC_SLI4_PARAMETERS 2
-};
-
struct lpfc_mbx_memory_dump_type3 {
uint32_t word1;
#define lpfc_mbx_memory_dump_type3_type_SHIFT 0
@@ -3248,121 +3193,6 @@ struct user_eeprom {
uint8_t reserved191[57];
};
-struct lpfc_mbx_pc_sli4_params {
- uint32_t word1;
-#define qs_SHIFT 0
-#define qs_MASK 0x00000001
-#define qs_WORD word1
-#define wr_SHIFT 1
-#define wr_MASK 0x00000001
-#define wr_WORD word1
-#define pf_SHIFT 8
-#define pf_MASK 0x000000ff
-#define pf_WORD word1
-#define cpn_SHIFT 16
-#define cpn_MASK 0x000000ff
-#define cpn_WORD word1
- uint32_t word2;
-#define if_type_SHIFT 0
-#define if_type_MASK 0x00000007
-#define if_type_WORD word2
-#define sli_rev_SHIFT 4
-#define sli_rev_MASK 0x0000000f
-#define sli_rev_WORD word2
-#define sli_family_SHIFT 8
-#define sli_family_MASK 0x000000ff
-#define sli_family_WORD word2
-#define featurelevel_1_SHIFT 16
-#define featurelevel_1_MASK 0x000000ff
-#define featurelevel_1_WORD word2
-#define featurelevel_2_SHIFT 24
-#define featurelevel_2_MASK 0x0000001f
-#define featurelevel_2_WORD word2
- uint32_t word3;
-#define fcoe_SHIFT 0
-#define fcoe_MASK 0x00000001
-#define fcoe_WORD word3
-#define fc_SHIFT 1
-#define fc_MASK 0x00000001
-#define fc_WORD word3
-#define nic_SHIFT 2
-#define nic_MASK 0x00000001
-#define nic_WORD word3
-#define iscsi_SHIFT 3
-#define iscsi_MASK 0x00000001
-#define iscsi_WORD word3
-#define rdma_SHIFT 4
-#define rdma_MASK 0x00000001
-#define rdma_WORD word3
- uint32_t sge_supp_len;
-#define SLI4_PAGE_SIZE 4096
- uint32_t word5;
-#define if_page_sz_SHIFT 0
-#define if_page_sz_MASK 0x0000ffff
-#define if_page_sz_WORD word5
-#define loopbk_scope_SHIFT 24
-#define loopbk_scope_MASK 0x0000000f
-#define loopbk_scope_WORD word5
-#define rq_db_window_SHIFT 28
-#define rq_db_window_MASK 0x0000000f
-#define rq_db_window_WORD word5
- uint32_t word6;
-#define eq_pages_SHIFT 0
-#define eq_pages_MASK 0x0000000f
-#define eq_pages_WORD word6
-#define eqe_size_SHIFT 8
-#define eqe_size_MASK 0x000000ff
-#define eqe_size_WORD word6
- uint32_t word7;
-#define cq_pages_SHIFT 0
-#define cq_pages_MASK 0x0000000f
-#define cq_pages_WORD word7
-#define cqe_size_SHIFT 8
-#define cqe_size_MASK 0x000000ff
-#define cqe_size_WORD word7
- uint32_t word8;
-#define mq_pages_SHIFT 0
-#define mq_pages_MASK 0x0000000f
-#define mq_pages_WORD word8
-#define mqe_size_SHIFT 8
-#define mqe_size_MASK 0x000000ff
-#define mqe_size_WORD word8
-#define mq_elem_cnt_SHIFT 16
-#define mq_elem_cnt_MASK 0x000000ff
-#define mq_elem_cnt_WORD word8
- uint32_t word9;
-#define wq_pages_SHIFT 0
-#define wq_pages_MASK 0x0000ffff
-#define wq_pages_WORD word9
-#define wqe_size_SHIFT 8
-#define wqe_size_MASK 0x000000ff
-#define wqe_size_WORD word9
- uint32_t word10;
-#define rq_pages_SHIFT 0
-#define rq_pages_MASK 0x0000ffff
-#define rq_pages_WORD word10
-#define rqe_size_SHIFT 8
-#define rqe_size_MASK 0x000000ff
-#define rqe_size_WORD word10
- uint32_t word11;
-#define hdr_pages_SHIFT 0
-#define hdr_pages_MASK 0x0000000f
-#define hdr_pages_WORD word11
-#define hdr_size_SHIFT 8
-#define hdr_size_MASK 0x0000000f
-#define hdr_size_WORD word11
-#define hdr_pp_align_SHIFT 16
-#define hdr_pp_align_MASK 0x0000ffff
-#define hdr_pp_align_WORD word11
- uint32_t word12;
-#define sgl_pages_SHIFT 0
-#define sgl_pages_MASK 0x0000000f
-#define sgl_pages_WORD word12
-#define sgl_pp_align_SHIFT 16
-#define sgl_pp_align_MASK 0x0000ffff
-#define sgl_pp_align_WORD word12
- uint32_t rsvd_13_63[51];
-};
#define SLI4_PAGE_ALIGN(addr) (((addr)+((SLI4_PAGE_SIZE)-1)) \
&(~((SLI4_PAGE_SIZE)-1)))
@@ -3988,8 +3818,6 @@ struct lpfc_mqe {
struct lpfc_mbx_post_hdr_tmpl hdr_tmpl;
struct lpfc_mbx_query_fw_config query_fw_cfg;
struct lpfc_mbx_set_beacon_config beacon_config;
- struct lpfc_mbx_supp_pages supp_pages;
- struct lpfc_mbx_pc_sli4_params sli4_params;
struct lpfc_mbx_get_sli4_parameters get_sli4_parameters;
struct lpfc_mbx_set_link_diag_state link_diag_state;
struct lpfc_mbx_set_link_diag_loopback link_diag_loopback;
diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
index 40fe889..2dde5dd 100644
--- a/drivers/scsi/lpfc/lpfc_init.c
+++ b/drivers/scsi/lpfc/lpfc_init.c
@@ -6548,8 +6548,6 @@ lpfc_sli4_driver_resource_setup(struct lpfc_hba *phba)
LPFC_MBOXQ_t *mboxq;
MAILBOX_t *mb;
int rc, i, max_buf_size;
- uint8_t pn_page[LPFC_MAX_SUPPORTED_PAGES] = {0};
- struct lpfc_mqe *mqe;
int longs;
int extra;
uint64_t wwn;
@@ -6783,32 +6781,6 @@ lpfc_sli4_driver_resource_setup(struct lpfc_hba *phba)
lpfc_nvme_mod_param_dep(phba);
- /* Get the Supported Pages if PORT_CAPABILITIES is supported by port. */
- lpfc_supported_pages(mboxq);
- rc = lpfc_sli_issue_mbox(phba, mboxq, MBX_POLL);
- if (!rc) {
- mqe = &mboxq->u.mqe;
- memcpy(&pn_page[0], ((uint8_t *)&mqe->un.supp_pages.word3),
- LPFC_MAX_SUPPORTED_PAGES);
- for (i = 0; i < LPFC_MAX_SUPPORTED_PAGES; i++) {
- switch (pn_page[i]) {
- case LPFC_SLI4_PARAMETERS:
- phba->sli4_hba.pc_sli4_params.supported = 1;
- break;
- default:
- break;
- }
- }
- /* Read the port's SLI4 Parameters capabilities if supported. */
- if (phba->sli4_hba.pc_sli4_params.supported)
- rc = lpfc_pc_sli4_params_get(phba, mboxq);
- if (rc) {
- mempool_free(mboxq, phba->mbox_mem_pool);
- rc = -EIO;
- goto out_free_bsmbx;
- }
- }
-
/*
* Get sli4 parameters that override parameters from Port capabilities.
* If this call fails, it isn't critical unless the SLI4 parameters come
@@ -9636,8 +9608,7 @@ lpfc_sli4_queue_setup(struct lpfc_hba *phba)
"3250 QUERY_FW_CFG mailbox failed with status "
"x%x add_status x%x, mbx status x%x\n",
shdr_status, shdr_add_status, rc);
- if (rc != MBX_TIMEOUT)
- mempool_free(mboxq, phba->mbox_mem_pool);
+ mempool_free(mboxq, phba->mbox_mem_pool);
rc = -ENXIO;
goto out_error;
}
@@ -9653,8 +9624,7 @@ lpfc_sli4_queue_setup(struct lpfc_hba *phba)
"ulp1_mode:x%x\n", phba->sli4_hba.fw_func_mode,
phba->sli4_hba.ulp0_mode, phba->sli4_hba.ulp1_mode);
- if (rc != MBX_TIMEOUT)
- mempool_free(mboxq, phba->mbox_mem_pool);
+ mempool_free(mboxq, phba->mbox_mem_pool);
/*
* Set up HBA Event Queues (EQs)
@@ -10252,8 +10222,7 @@ lpfc_pci_function_reset(struct lpfc_hba *phba)
shdr_status = bf_get(lpfc_mbox_hdr_status, &shdr->response);
shdr_add_status = bf_get(lpfc_mbox_hdr_add_status,
&shdr->response);
- if (rc != MBX_TIMEOUT)
- mempool_free(mboxq, phba->mbox_mem_pool);
+ mempool_free(mboxq, phba->mbox_mem_pool);
if (shdr_status || shdr_add_status || rc) {
lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
"0495 SLI_FUNCTION_RESET mailbox "
@@ -12049,78 +12018,6 @@ lpfc_sli4_hba_unset(struct lpfc_hba *phba)
phba->pport->work_port_events = 0;
}
- /**
- * lpfc_pc_sli4_params_get - Get the SLI4_PARAMS port capabilities.
- * @phba: Pointer to HBA context object.
- * @mboxq: Pointer to the mailboxq memory for the mailbox command response.
- *
- * This function is called in the SLI4 code path to read the port's
- * sli4 capabilities.
- *
- * This function may be be called from any context that can block-wait
- * for the completion. The expectation is that this routine is called
- * typically from probe_one or from the online routine.
- **/
-int
-lpfc_pc_sli4_params_get(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
-{
- int rc;
- struct lpfc_mqe *mqe;
- struct lpfc_pc_sli4_params *sli4_params;
- uint32_t mbox_tmo;
-
- rc = 0;
- mqe = &mboxq->u.mqe;
-
- /* Read the port's SLI4 Parameters port capabilities */
- lpfc_pc_sli4_params(mboxq);
- if (!phba->sli4_hba.intr_enable)
- rc = lpfc_sli_issue_mbox(phba, mboxq, MBX_POLL);
- else {
- mbox_tmo = lpfc_mbox_tmo_val(phba, mboxq);
- rc = lpfc_sli_issue_mbox_wait(phba, mboxq, mbox_tmo);
- }
-
- if (unlikely(rc))
- return 1;
-
- sli4_params = &phba->sli4_hba.pc_sli4_params;
- sli4_params->if_type = bf_get(if_type, &mqe->un.sli4_params);
- sli4_params->sli_rev = bf_get(sli_rev, &mqe->un.sli4_params);
- sli4_params->sli_family = bf_get(sli_family, &mqe->un.sli4_params);
- sli4_params->featurelevel_1 = bf_get(featurelevel_1,
- &mqe->un.sli4_params);
- sli4_params->featurelevel_2 = bf_get(featurelevel_2,
- &mqe->un.sli4_params);
- sli4_params->proto_types = mqe->un.sli4_params.word3;
- sli4_params->sge_supp_len = mqe->un.sli4_params.sge_supp_len;
- sli4_params->if_page_sz = bf_get(if_page_sz, &mqe->un.sli4_params);
- sli4_params->rq_db_window = bf_get(rq_db_window, &mqe->un.sli4_params);
- sli4_params->loopbk_scope = bf_get(loopbk_scope, &mqe->un.sli4_params);
- sli4_params->eq_pages_max = bf_get(eq_pages, &mqe->un.sli4_params);
- sli4_params->eqe_size = bf_get(eqe_size, &mqe->un.sli4_params);
- sli4_params->cq_pages_max = bf_get(cq_pages, &mqe->un.sli4_params);
- sli4_params->cqe_size = bf_get(cqe_size, &mqe->un.sli4_params);
- sli4_params->mq_pages_max = bf_get(mq_pages, &mqe->un.sli4_params);
- sli4_params->mqe_size = bf_get(mqe_size, &mqe->un.sli4_params);
- sli4_params->mq_elem_cnt = bf_get(mq_elem_cnt, &mqe->un.sli4_params);
- sli4_params->wq_pages_max = bf_get(wq_pages, &mqe->un.sli4_params);
- sli4_params->wqe_size = bf_get(wqe_size, &mqe->un.sli4_params);
- sli4_params->rq_pages_max = bf_get(rq_pages, &mqe->un.sli4_params);
- sli4_params->rqe_size = bf_get(rqe_size, &mqe->un.sli4_params);
- sli4_params->hdr_pages_max = bf_get(hdr_pages, &mqe->un.sli4_params);
- sli4_params->hdr_size = bf_get(hdr_size, &mqe->un.sli4_params);
- sli4_params->hdr_pp_align = bf_get(hdr_pp_align, &mqe->un.sli4_params);
- sli4_params->sgl_pages_max = bf_get(sgl_pages, &mqe->un.sli4_params);
- sli4_params->sgl_pp_align = bf_get(sgl_pp_align, &mqe->un.sli4_params);
-
- /* Make sure that sge_supp_len can be handled by the driver */
- if (sli4_params->sge_supp_len > LPFC_MAX_SGE_SIZE)
- sli4_params->sge_supp_len = LPFC_MAX_SGE_SIZE;
-
- return rc;
-}
-
/**
* lpfc_get_sli4_parameters - Get the SLI4 Config PARAMETERS.
* @phba: Pointer to HBA context object.
@@ -12179,7 +12076,8 @@ lpfc_get_sli4_parameters(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
else
phba->sli3_options &= ~LPFC_SLI4_PHWQ_ENABLED;
sli4_params->sge_supp_len = mbx_sli4_parameters->sge_supp_len;
- sli4_params->loopbk_scope = bf_get(loopbk_scope, mbx_sli4_parameters);
+ sli4_params->loopbk_scope = bf_get(cfg_loopbk_scope,
+ mbx_sli4_parameters);
sli4_params->oas_supported = bf_get(cfg_oas, mbx_sli4_parameters);
sli4_params->cqv = bf_get(cfg_cqv, mbx_sli4_parameters);
sli4_params->mqv = bf_get(cfg_mqv, mbx_sli4_parameters);
diff --git a/drivers/scsi/lpfc/lpfc_mbox.c b/drivers/scsi/lpfc/lpfc_mbox.c
index 3414ffc..8764fdf 100644
--- a/drivers/scsi/lpfc/lpfc_mbox.c
+++ b/drivers/scsi/lpfc/lpfc_mbox.c
@@ -2624,39 +2624,3 @@ lpfc_resume_rpi(struct lpfcMboxq *mbox, struct lpfc_nodelist *ndlp)
resume_rpi->event_tag = ndlp->phba->fc_eventTag;
}
-/**
- * lpfc_supported_pages - Initialize the PORT_CAPABILITIES supported pages
- * mailbox command.
- * @mbox: pointer to lpfc mbox command to initialize.
- *
- * The PORT_CAPABILITIES supported pages mailbox command is issued to
- * retrieve the particular feature pages supported by the port.
- **/
-void
-lpfc_supported_pages(struct lpfcMboxq *mbox)
-{
- struct lpfc_mbx_supp_pages *supp_pages;
-
- memset(mbox, 0, sizeof(*mbox));
- supp_pages = &mbox->u.mqe.un.supp_pages;
- bf_set(lpfc_mqe_command, &mbox->u.mqe, MBX_PORT_CAPABILITIES);
- bf_set(cpn, supp_pages, LPFC_SUPP_PAGES);
-}
-
-/**
- * lpfc_pc_sli4_params - Initialize the PORT_CAPABILITIES SLI4 Params mbox cmd.
- * @mbox: pointer to lpfc mbox command to initialize.
- *
- * The PORT_CAPABILITIES SLI4 parameters mailbox command is issued to
- * retrieve the particular SLI4 features supported by the port.
- **/
-void
-lpfc_pc_sli4_params(struct lpfcMboxq *mbox)
-{
- struct lpfc_mbx_pc_sli4_params *sli4_params;
-
- memset(mbox, 0, sizeof(*mbox));
- sli4_params = &mbox->u.mqe.un.sli4_params;
- bf_set(lpfc_mqe_command, &mbox->u.mqe, MBX_PORT_CAPABILITIES);
- bf_set(cpn, sli4_params, LPFC_SLI4_PARAMETERS);
-}
diff --git a/drivers/scsi/lpfc/lpfc_nportdisc.c b/drivers/scsi/lpfc/lpfc_nportdisc.c
index 92d6e7b..6afcb14 100644
--- a/drivers/scsi/lpfc/lpfc_nportdisc.c
+++ b/drivers/scsi/lpfc/lpfc_nportdisc.c
@@ -903,9 +903,14 @@ lpfc_rcv_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
}
} else if ((!(ndlp->nlp_type & NLP_FABRIC) &&
((ndlp->nlp_type & NLP_FCP_TARGET) ||
- !(ndlp->nlp_type & NLP_FCP_INITIATOR))) ||
+ (ndlp->nlp_type & NLP_NVME_TARGET) ||
+ (vport->fc_flag & FC_PT2PT))) ||
(ndlp->nlp_state == NLP_STE_ADISC_ISSUE)) {
- /* Only try to re-login if this is NOT a Fabric Node */
+ /* Only try to re-login if this is NOT a Fabric Node
+ * AND the remote NPORT is a FCP/NVME Target or we
+ * are in pt2pt mode. NLP_STE_ADISC_ISSUE is a special
+ * case for LOGO as a response to ADISC behavior.
+ */
mod_timer(&ndlp->nlp_delayfunc,
jiffies + msecs_to_jiffies(1000 * 1));
spin_lock_irq(shost->host_lock);
@@ -1979,8 +1984,6 @@ lpfc_cmpl_reglogin_reglogin_issue(struct lpfc_vport *vport,
ndlp->nlp_last_elscmd = ELS_CMD_PLOGI;
lpfc_issue_els_logo(vport, ndlp, 0);
- ndlp->nlp_prev_state = NLP_STE_REG_LOGIN_ISSUE;
- lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
return ndlp->nlp_state;
}
diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c
index d4ade7c..deab893 100644
--- a/drivers/scsi/lpfc/lpfc_nvmet.c
+++ b/drivers/scsi/lpfc/lpfc_nvmet.c
@@ -3300,7 +3300,6 @@ lpfc_nvmet_unsol_issue_abort(struct lpfc_hba *phba,
bf_set(wqe_rcvoxid, &wqe_abts->xmit_sequence.wqe_com, xri);
/* Word 10 */
- bf_set(wqe_dbde, &wqe_abts->xmit_sequence.wqe_com, 1);
bf_set(wqe_iod, &wqe_abts->xmit_sequence.wqe_com, LPFC_WQE_IOD_WRITE);
bf_set(wqe_lenloc, &wqe_abts->xmit_sequence.wqe_com,
LPFC_WQE_LENLOC_WORD12);
diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
index f103340..bf171ef 100644
--- a/drivers/scsi/lpfc/lpfc_sli.c
+++ b/drivers/scsi/lpfc/lpfc_sli.c
@@ -5546,12 +5546,10 @@ lpfc_sli4_get_ctl_attr(struct lpfc_hba *phba)
phba->sli4_hba.lnk_info.lnk_no,
phba->BIOSVersion);
out_free_mboxq:
- if (rc != MBX_TIMEOUT) {
- if (bf_get(lpfc_mqe_command, &mboxq->u.mqe) == MBX_SLI4_CONFIG)
- lpfc_sli4_mbox_cmd_free(phba, mboxq);
- else
- mempool_free(mboxq, phba->mbox_mem_pool);
- }
+ if (bf_get(lpfc_mqe_command, &mboxq->u.mqe) == MBX_SLI4_CONFIG)
+ lpfc_sli4_mbox_cmd_free(phba, mboxq);
+ else
+ mempool_free(mboxq, phba->mbox_mem_pool);
return rc;
}
@@ -5652,12 +5650,10 @@ lpfc_sli4_retrieve_pport_name(struct lpfc_hba *phba)
}
out_free_mboxq:
- if (rc != MBX_TIMEOUT) {
- if (bf_get(lpfc_mqe_command, &mboxq->u.mqe) == MBX_SLI4_CONFIG)
- lpfc_sli4_mbox_cmd_free(phba, mboxq);
- else
- mempool_free(mboxq, phba->mbox_mem_pool);
- }
+ if (bf_get(lpfc_mqe_command, &mboxq->u.mqe) == MBX_SLI4_CONFIG)
+ lpfc_sli4_mbox_cmd_free(phba, mboxq);
+ else
+ mempool_free(mboxq, phba->mbox_mem_pool);
return rc;
}
@@ -11594,13 +11590,20 @@ lpfc_sli_validate_fcp_iocb(struct lpfc_iocbq *iocbq, struct lpfc_vport *vport,
lpfc_ctx_cmd ctx_cmd)
{
struct lpfc_io_buf *lpfc_cmd;
+ IOCB_t *icmd = NULL;
int rc = 1;
if (iocbq->vport != vport)
return rc;
- if (!(iocbq->iocb_flag & LPFC_IO_FCP) ||
- !(iocbq->iocb_flag & LPFC_IO_ON_TXCMPLQ))
+ if (!(iocbq->iocb_flag & LPFC_IO_FCP) ||
+ !(iocbq->iocb_flag & LPFC_IO_ON_TXCMPLQ) ||
+ iocbq->iocb_flag & LPFC_DRIVER_ABORTED)
+ return rc;
+
+ icmd = &iocbq->iocb;
+ if (icmd->ulpCommand == CMD_ABORT_XRI_CN ||
+ icmd->ulpCommand == CMD_CLOSE_XRI_CN)
return rc;
lpfc_cmd = container_of(iocbq, struct lpfc_io_buf, cur_iocbq);
@@ -16844,8 +16847,7 @@ lpfc_rq_destroy(struct lpfc_hba *phba, struct lpfc_queue *hrq,
"2509 RQ_DESTROY mailbox failed with "
"status x%x add_status x%x, mbx status x%x\n",
shdr_status, shdr_add_status, rc);
- if (rc != MBX_TIMEOUT)
- mempool_free(mbox, hrq->phba->mbox_mem_pool);
+ mempool_free(mbox, hrq->phba->mbox_mem_pool);
return -ENXIO;
}
bf_set(lpfc_mbx_rq_destroy_q_id, &mbox->u.mqe.un.rq_destroy.u.request,
@@ -16942,7 +16944,9 @@ lpfc_sli4_post_sgl(struct lpfc_hba *phba,
shdr = (union lpfc_sli4_cfg_shdr *) &post_sgl_pages->header.cfg_shdr;
shdr_status = bf_get(lpfc_mbox_hdr_status, &shdr->response);
shdr_add_status = bf_get(lpfc_mbox_hdr_add_status, &shdr->response);
- if (rc != MBX_TIMEOUT)
+ if (!phba->sli4_hba.intr_enable)
+ mempool_free(mbox, phba->mbox_mem_pool);
+ else if (rc != MBX_TIMEOUT)
mempool_free(mbox, phba->mbox_mem_pool);
if (shdr_status || shdr_add_status || rc) {
lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
@@ -17139,7 +17143,9 @@ lpfc_sli4_post_sgl_list(struct lpfc_hba *phba,
shdr = (union lpfc_sli4_cfg_shdr *) &sgl->cfg_shdr;
shdr_status = bf_get(lpfc_mbox_hdr_status, &shdr->response);
shdr_add_status = bf_get(lpfc_mbox_hdr_add_status, &shdr->response);
- if (rc != MBX_TIMEOUT)
+ if (!phba->sli4_hba.intr_enable)
+ lpfc_sli4_mbox_cmd_free(phba, mbox);
+ else if (rc != MBX_TIMEOUT)
lpfc_sli4_mbox_cmd_free(phba, mbox);
if (shdr_status || shdr_add_status || rc) {
lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
@@ -17252,7 +17258,9 @@ lpfc_sli4_post_io_sgl_block(struct lpfc_hba *phba, struct list_head *nblist,
shdr = (union lpfc_sli4_cfg_shdr *)&sgl->cfg_shdr;
shdr_status = bf_get(lpfc_mbox_hdr_status, &shdr->response);
shdr_add_status = bf_get(lpfc_mbox_hdr_add_status, &shdr->response);
- if (rc != MBX_TIMEOUT)
+ if (!phba->sli4_hba.intr_enable)
+ lpfc_sli4_mbox_cmd_free(phba, mbox);
+ else if (rc != MBX_TIMEOUT)
lpfc_sli4_mbox_cmd_free(phba, mbox);
if (shdr_status || shdr_add_status || rc) {
lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
@@ -17836,7 +17844,6 @@ lpfc_sli4_seq_abort_rsp_cmpl(struct lpfc_hba *phba,
if (cmd_iocbq) {
ndlp = (struct lpfc_nodelist *)cmd_iocbq->context1;
lpfc_nlp_put(ndlp);
- lpfc_nlp_not_used(ndlp);
lpfc_sli_release_iocbq(phba, cmd_iocbq);
}
@@ -18608,8 +18615,7 @@ lpfc_sli4_post_rpi_hdr(struct lpfc_hba *phba, struct lpfc_rpi_hdr *rpi_page)
shdr = (union lpfc_sli4_cfg_shdr *) &hdr_tmpl->header.cfg_shdr;
shdr_status = bf_get(lpfc_mbox_hdr_status, &shdr->response);
shdr_add_status = bf_get(lpfc_mbox_hdr_add_status, &shdr->response);
- if (rc != MBX_TIMEOUT)
- mempool_free(mboxq, phba->mbox_mem_pool);
+ mempool_free(mboxq, phba->mbox_mem_pool);
if (shdr_status || shdr_add_status || rc) {
lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
"2514 POST_RPI_HDR mailbox failed with "
@@ -19853,7 +19859,9 @@ lpfc_wr_object(struct lpfc_hba *phba, struct list_head *dmabuf_list,
break;
}
}
- if (rc != MBX_TIMEOUT)
+ if (!phba->sli4_hba.intr_enable)
+ mempool_free(mbox, phba->mbox_mem_pool);
+ else if (rc != MBX_TIMEOUT)
mempool_free(mbox, phba->mbox_mem_pool);
if (shdr_status || shdr_add_status || rc) {
lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
index ac25ec5..3fbbdf0 100644
--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
+++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
@@ -6804,6 +6804,8 @@ _base_diag_reset(struct MPT3SAS_ADAPTER *ioc)
ioc_info(ioc, "sending diag reset !!\n");
+ pci_cfg_access_lock(ioc->pdev);
+
drsprintk(ioc, ioc_info(ioc, "clear interrupts\n"));
count = 0;
@@ -6894,10 +6896,12 @@ _base_diag_reset(struct MPT3SAS_ADAPTER *ioc)
goto out;
}
+ pci_cfg_access_unlock(ioc->pdev);
ioc_info(ioc, "diag reset: SUCCESS\n");
return 0;
out:
+ pci_cfg_access_unlock(ioc->pdev);
ioc_err(ioc, "diag reset: FAILED\n");
return -EFAULT;
}
diff --git a/drivers/scsi/pm8001/pm8001_hwi.c b/drivers/scsi/pm8001/pm8001_hwi.c
index 95ba1bd..2114d2d 100644
--- a/drivers/scsi/pm8001/pm8001_hwi.c
+++ b/drivers/scsi/pm8001/pm8001_hwi.c
@@ -223,7 +223,7 @@ static void init_default_table_values(struct pm8001_hba_info *pm8001_ha)
PM8001_EVENT_LOG_SIZE;
pm8001_ha->main_cfg_tbl.pm8001_tbl.iop_event_log_option = 0x01;
pm8001_ha->main_cfg_tbl.pm8001_tbl.fatal_err_interrupt = 0x01;
- for (i = 0; i < PM8001_MAX_INB_NUM; i++) {
+ for (i = 0; i < pm8001_ha->max_q_num; i++) {
pm8001_ha->inbnd_q_tbl[i].element_pri_size_cnt =
PM8001_MPI_QUEUE | (pm8001_ha->iomb_size << 16) | (0x00<<30);
pm8001_ha->inbnd_q_tbl[i].upper_base_addr =
@@ -249,7 +249,7 @@ static void init_default_table_values(struct pm8001_hba_info *pm8001_ha)
pm8001_ha->inbnd_q_tbl[i].producer_idx = 0;
pm8001_ha->inbnd_q_tbl[i].consumer_index = 0;
}
- for (i = 0; i < PM8001_MAX_OUTB_NUM; i++) {
+ for (i = 0; i < pm8001_ha->max_q_num; i++) {
pm8001_ha->outbnd_q_tbl[i].element_size_cnt =
PM8001_MPI_QUEUE | (pm8001_ha->iomb_size << 16) | (0x01<<30);
pm8001_ha->outbnd_q_tbl[i].upper_base_addr =
@@ -643,7 +643,7 @@ static void init_pci_device_addresses(struct pm8001_hba_info *pm8001_ha)
*/
static int pm8001_chip_init(struct pm8001_hba_info *pm8001_ha)
{
- u8 i = 0;
+ u32 i = 0;
u16 deviceid;
pci_read_config_word(pm8001_ha->pdev, PCI_DEVICE_ID, &deviceid);
/* 8081 controllers need BAR shift to access MPI space
@@ -671,9 +671,9 @@ static int pm8001_chip_init(struct pm8001_hba_info *pm8001_ha)
read_outbnd_queue_table(pm8001_ha);
/* update main config table ,inbound table and outbound table */
update_main_config_table(pm8001_ha);
- for (i = 0; i < PM8001_MAX_INB_NUM; i++)
+ for (i = 0; i < pm8001_ha->max_q_num; i++)
update_inbnd_queue_table(pm8001_ha, i);
- for (i = 0; i < PM8001_MAX_OUTB_NUM; i++)
+ for (i = 0; i < pm8001_ha->max_q_num; i++)
update_outbnd_queue_table(pm8001_ha, i);
/* 8081 controller donot require these operations */
if (deviceid != 0x8081 && deviceid != 0x0042) {
@@ -3703,11 +3703,13 @@ static int mpi_hw_event(struct pm8001_hba_info *pm8001_ha, void* piomb)
case HW_EVENT_PHY_START_STATUS:
pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_PHY_START_STATUS status = %x\n",
status);
- if (status == 0) {
+ if (status == 0)
phy->phy_state = 1;
- if (pm8001_ha->flags == PM8001F_RUN_TIME &&
- phy->enable_completion != NULL)
- complete(phy->enable_completion);
+
+ if (pm8001_ha->flags == PM8001F_RUN_TIME &&
+ phy->enable_completion != NULL) {
+ complete(phy->enable_completion);
+ phy->enable_completion = NULL;
}
break;
case HW_EVENT_SAS_PHY_UP:
diff --git a/drivers/scsi/pm8001/pm8001_init.c b/drivers/scsi/pm8001/pm8001_init.c
index 7657d68..0c0c886 100644
--- a/drivers/scsi/pm8001/pm8001_init.c
+++ b/drivers/scsi/pm8001/pm8001_init.c
@@ -1139,8 +1139,8 @@ static int pm8001_pci_probe(struct pci_dev *pdev,
goto err_out_shost;
}
list_add_tail(&pm8001_ha->list, &hba_list);
- scsi_scan_host(pm8001_ha->shost);
pm8001_ha->flags = PM8001F_RUN_TIME;
+ scsi_scan_host(pm8001_ha->shost);
return 0;
err_out_shost:
diff --git a/drivers/scsi/pm8001/pm8001_sas.c b/drivers/scsi/pm8001/pm8001_sas.c
index 474468d..39de9a9 100644
--- a/drivers/scsi/pm8001/pm8001_sas.c
+++ b/drivers/scsi/pm8001/pm8001_sas.c
@@ -264,12 +264,17 @@ void pm8001_scan_start(struct Scsi_Host *shost)
int i;
struct pm8001_hba_info *pm8001_ha;
struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost);
+ DECLARE_COMPLETION_ONSTACK(completion);
pm8001_ha = sha->lldd_ha;
/* SAS_RE_INITIALIZATION not available in SPCv/ve */
if (pm8001_ha->chip_id == chip_8001)
PM8001_CHIP_DISP->sas_re_init_req(pm8001_ha);
- for (i = 0; i < pm8001_ha->chip->n_phy; ++i)
+ for (i = 0; i < pm8001_ha->chip->n_phy; ++i) {
+ pm8001_ha->phy[i].enable_completion = &completion;
PM8001_CHIP_DISP->phy_start_req(pm8001_ha, i);
+ wait_for_completion(&completion);
+ msleep(300);
+ }
}
int pm8001_scan_finished(struct Scsi_Host *shost, unsigned long time)
diff --git a/drivers/scsi/pm8001/pm80xx_hwi.c b/drivers/scsi/pm8001/pm80xx_hwi.c
index 055f764..a203a4f 100644
--- a/drivers/scsi/pm8001/pm80xx_hwi.c
+++ b/drivers/scsi/pm8001/pm80xx_hwi.c
@@ -1488,9 +1488,9 @@ static int mpi_uninit_check(struct pm8001_hba_info *pm8001_ha)
/* wait until Inbound DoorBell Clear Register toggled */
if (IS_SPCV_12G(pm8001_ha->pdev)) {
- max_wait_count = 4 * 1000 * 1000;/* 4 sec */
+ max_wait_count = 30 * 1000 * 1000; /* 30 sec */
} else {
- max_wait_count = 2 * 1000 * 1000;/* 2 sec */
+ max_wait_count = 15 * 1000 * 1000; /* 15 sec */
}
do {
udelay(1);
@@ -3432,13 +3432,13 @@ static int mpi_phy_start_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
pm8001_dbg(pm8001_ha, INIT,
"phy start resp status:0x%x, phyid:0x%x\n",
status, phy_id);
- if (status == 0) {
+ if (status == 0)
phy->phy_state = PHY_LINK_DOWN;
- if (pm8001_ha->flags == PM8001F_RUN_TIME &&
- phy->enable_completion != NULL) {
- complete(phy->enable_completion);
- phy->enable_completion = NULL;
- }
+
+ if (pm8001_ha->flags == PM8001F_RUN_TIME &&
+ phy->enable_completion != NULL) {
+ complete(phy->enable_completion);
+ phy->enable_completion = NULL;
}
return 0;
diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
index 46d185c..a464d0a 100644
--- a/drivers/scsi/qedf/qedf_main.c
+++ b/drivers/scsi/qedf/qedf_main.c
@@ -536,7 +536,9 @@ static void qedf_update_link_speed(struct qedf_ctx *qedf,
if (linkmode_intersects(link->supported_caps, sup_caps))
lport->link_supported_speeds |= FC_PORTSPEED_20GBIT;
- fc_host_supported_speeds(lport->host) = lport->link_supported_speeds;
+ if (lport->host && lport->host->shost_data)
+ fc_host_supported_speeds(lport->host) =
+ lport->link_supported_speeds;
}
static void qedf_bw_update(void *dev)
diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c
index ab45ac1..6a2c4a6f 100644
--- a/drivers/scsi/qla2xxx/qla_attr.c
+++ b/drivers/scsi/qla2xxx/qla_attr.c
@@ -2855,6 +2855,8 @@ qla2x00_reset_host_stats(struct Scsi_Host *shost)
vha->qla_stats.jiffies_at_last_reset = get_jiffies_64();
if (IS_FWI2_CAPABLE(ha)) {
+ int rval;
+
stats = dma_alloc_coherent(&ha->pdev->dev,
sizeof(*stats), &stats_dma, GFP_KERNEL);
if (!stats) {
@@ -2864,7 +2866,11 @@ qla2x00_reset_host_stats(struct Scsi_Host *shost)
}
/* reset firmware statistics */
- qla24xx_get_isp_stats(base_vha, stats, stats_dma, BIT_0);
+ rval = qla24xx_get_isp_stats(base_vha, stats, stats_dma, BIT_0);
+ if (rval != QLA_SUCCESS)
+ ql_log(ql_log_warn, vha, 0x70de,
+ "Resetting ISP statistics failed: rval = %d\n",
+ rval);
dma_free_coherent(&ha->pdev->dev, sizeof(*stats),
stats, stats_dma);
diff --git a/drivers/scsi/qla2xxx/qla_bsg.c b/drivers/scsi/qla2xxx/qla_bsg.c
index 23b6048..7fa0859 100644
--- a/drivers/scsi/qla2xxx/qla_bsg.c
+++ b/drivers/scsi/qla2xxx/qla_bsg.c
@@ -24,10 +24,11 @@ void qla2x00_bsg_job_done(srb_t *sp, int res)
struct bsg_job *bsg_job = sp->u.bsg_job;
struct fc_bsg_reply *bsg_reply = bsg_job->reply;
+ sp->free(sp);
+
bsg_reply->result = res;
bsg_job_done(bsg_job, bsg_reply->result,
bsg_reply->reply_payload_rcv_len);
- sp->free(sp);
}
void qla2x00_bsg_sp_free(srb_t *sp)
diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
index 52e8b55..6faf34f 100644
--- a/drivers/scsi/qla2xxx/qla_init.c
+++ b/drivers/scsi/qla2xxx/qla_init.c
@@ -1190,6 +1190,9 @@ static int qla24xx_post_prli_work(struct scsi_qla_host *vha, fc_port_t *fcport)
{
struct qla_work_evt *e;
+ if (vha->host->active_mode == MODE_TARGET)
+ return QLA_FUNCTION_FAILED;
+
e = qla2x00_alloc_work(vha, QLA_EVT_PRLI);
if (!e)
return QLA_FUNCTION_FAILED;
diff --git a/drivers/scsi/qla2xxx/qla_nx.c b/drivers/scsi/qla2xxx/qla_nx.c
index b3ba0de..0563c95 100644
--- a/drivers/scsi/qla2xxx/qla_nx.c
+++ b/drivers/scsi/qla2xxx/qla_nx.c
@@ -1066,7 +1066,8 @@ qla82xx_write_flash_dword(struct qla_hw_data *ha, uint32_t flashaddr,
return ret;
}
- if (qla82xx_flash_set_write_enable(ha))
+ ret = qla82xx_flash_set_write_enable(ha);
+ if (ret < 0)
goto done_write;
qla82xx_wr_32(ha, QLA82XX_ROMUSB_ROM_WDATA, data);
diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
index d389f56..21be50b 100644
--- a/drivers/scsi/qla2xxx/qla_os.c
+++ b/drivers/scsi/qla2xxx/qla_os.c
@@ -1008,8 +1008,6 @@ qla2xxx_mqueuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd,
if (rval != QLA_SUCCESS) {
ql_dbg(ql_dbg_io + ql_dbg_verbose, vha, 0x3078,
"Start scsi failed rval=%d for cmd=%p.\n", rval, cmd);
- if (rval == QLA_INTERFACE_ERROR)
- goto qc24_free_sp_fail_command;
goto qc24_host_busy_free_sp;
}
@@ -1021,11 +1019,6 @@ qla2xxx_mqueuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd,
qc24_target_busy:
return SCSI_MLQUEUE_TARGET_BUSY;
-qc24_free_sp_fail_command:
- sp->free(sp);
- CMD_SP(cmd) = NULL;
- qla2xxx_rel_qpair_sp(sp->qpair, sp);
-
qc24_fail_command:
cmd->scsi_done(cmd);
diff --git a/drivers/scsi/scsi_transport_srp.c b/drivers/scsi/scsi_transport_srp.c
index 1e939a2..98a34ed 100644
--- a/drivers/scsi/scsi_transport_srp.c
+++ b/drivers/scsi/scsi_transport_srp.c
@@ -541,7 +541,7 @@ int srp_reconnect_rport(struct srp_rport *rport)
res = mutex_lock_interruptible(&rport->mutex);
if (res)
goto out;
- if (rport->state != SRP_RPORT_FAIL_FAST)
+ if (rport->state != SRP_RPORT_FAIL_FAST && rport->state != SRP_RPORT_LOST)
/*
* sdev state must be SDEV_TRANSPORT_OFFLINE, transition
* to SDEV_BLOCK is illegal. Calling scsi_target_unblock()
diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c
index 9d02296..5083e5d 100644
--- a/drivers/scsi/smartpqi/smartpqi_init.c
+++ b/drivers/scsi/smartpqi/smartpqi_init.c
@@ -5491,6 +5491,8 @@ static void pqi_fail_io_queued_for_device(struct pqi_ctrl_info *ctrl_info,
list_del(&io_request->request_list_entry);
set_host_byte(scmd, DID_RESET);
+ pqi_free_io_request(io_request);
+ scsi_dma_unmap(scmd);
pqi_scsi_done(scmd);
}
@@ -5527,6 +5529,8 @@ static void pqi_fail_io_queued_for_all_devices(struct pqi_ctrl_info *ctrl_info)
list_del(&io_request->request_list_entry);
set_host_byte(scmd, DID_RESET);
+ pqi_free_io_request(io_request);
+ scsi_dma_unmap(scmd);
pqi_scsi_done(scmd);
}
@@ -6601,6 +6605,7 @@ static int pqi_register_scsi(struct pqi_ctrl_info *ctrl_info)
shost->irq = pci_irq_vector(ctrl_info->pci_dev, 0);
shost->unique_id = shost->irq;
shost->nr_hw_queues = ctrl_info->num_queue_groups;
+ shost->host_tagset = 1;
shost->hostdata[0] = (unsigned long)ctrl_info;
rc = scsi_add_host(shost, &ctrl_info->pci_dev->dev);
@@ -8223,6 +8228,10 @@ static const struct pci_device_id pqi_pci_id_table[] = {
},
{
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 0x193d, 0x8460)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
0x193d, 0x1104)
},
{
@@ -8295,6 +8304,22 @@ static const struct pci_device_id pqi_pci_id_table[] = {
},
{
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 0x1bd4, 0x0051)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 0x1bd4, 0x0052)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 0x1bd4, 0x0053)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 0x1bd4, 0x0054)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
0x19e5, 0xd227)
},
{
@@ -8455,6 +8480,122 @@ static const struct pci_device_id pqi_pci_id_table[] = {
},
{
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ PCI_VENDOR_ID_ADAPTEC2, 0x1400)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ PCI_VENDOR_ID_ADAPTEC2, 0x1402)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ PCI_VENDOR_ID_ADAPTEC2, 0x1410)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ PCI_VENDOR_ID_ADAPTEC2, 0x1411)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ PCI_VENDOR_ID_ADAPTEC2, 0x1412)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ PCI_VENDOR_ID_ADAPTEC2, 0x1420)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ PCI_VENDOR_ID_ADAPTEC2, 0x1430)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ PCI_VENDOR_ID_ADAPTEC2, 0x1440)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ PCI_VENDOR_ID_ADAPTEC2, 0x1441)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ PCI_VENDOR_ID_ADAPTEC2, 0x1450)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ PCI_VENDOR_ID_ADAPTEC2, 0x1452)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ PCI_VENDOR_ID_ADAPTEC2, 0x1460)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ PCI_VENDOR_ID_ADAPTEC2, 0x1461)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ PCI_VENDOR_ID_ADAPTEC2, 0x1462)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ PCI_VENDOR_ID_ADAPTEC2, 0x1470)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ PCI_VENDOR_ID_ADAPTEC2, 0x1471)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ PCI_VENDOR_ID_ADAPTEC2, 0x1472)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ PCI_VENDOR_ID_ADAPTEC2, 0x1480)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ PCI_VENDOR_ID_ADAPTEC2, 0x1490)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ PCI_VENDOR_ID_ADAPTEC2, 0x1491)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ PCI_VENDOR_ID_ADAPTEC2, 0x14a0)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ PCI_VENDOR_ID_ADAPTEC2, 0x14a1)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ PCI_VENDOR_ID_ADAPTEC2, 0x14b0)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ PCI_VENDOR_ID_ADAPTEC2, 0x14b1)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ PCI_VENDOR_ID_ADAPTEC2, 0x14c0)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ PCI_VENDOR_ID_ADAPTEC2, 0x14c1)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ PCI_VENDOR_ID_ADAPTEC2, 0x14d0)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ PCI_VENDOR_ID_ADAPTEC2, 0x14e0)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ PCI_VENDOR_ID_ADAPTEC2, 0x14f0)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
PCI_VENDOR_ID_ADVANTECH, 0x8312)
},
{
@@ -8519,6 +8660,10 @@ static const struct pci_device_id pqi_pci_id_table[] = {
},
{
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ PCI_VENDOR_ID_HP, 0x1002)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
PCI_VENDOR_ID_HP, 0x1100)
},
{
@@ -8527,6 +8672,22 @@ static const struct pci_device_id pqi_pci_id_table[] = {
},
{
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 0x1590, 0x0294)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 0x1590, 0x02db)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 0x1590, 0x02dc)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 0x1590, 0x032e)
+ },
+ {
+ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
0x1d8d, 0x0800)
},
{
diff --git a/drivers/scsi/sni_53c710.c b/drivers/scsi/sni_53c710.c
index 9e2e196..97c6f81 100644
--- a/drivers/scsi/sni_53c710.c
+++ b/drivers/scsi/sni_53c710.c
@@ -58,6 +58,7 @@ static int snirm710_probe(struct platform_device *dev)
struct NCR_700_Host_Parameters *hostdata;
struct Scsi_Host *host;
struct resource *res;
+ int rc;
res = platform_get_resource(dev, IORESOURCE_MEM, 0);
if (!res)
@@ -83,7 +84,9 @@ static int snirm710_probe(struct platform_device *dev)
goto out_kfree;
host->this_id = 7;
host->base = base;
- host->irq = platform_get_irq(dev, 0);
+ host->irq = rc = platform_get_irq(dev, 0);
+ if (rc < 0)
+ goto out_put_host;
if(request_irq(host->irq, NCR_700_intr, IRQF_SHARED, "snirm710", host)) {
printk(KERN_ERR "snirm710: request_irq failed!\n");
goto out_put_host;
diff --git a/drivers/scsi/sun3x_esp.c b/drivers/scsi/sun3x_esp.c
index 7de82f2..d3489ac 100644
--- a/drivers/scsi/sun3x_esp.c
+++ b/drivers/scsi/sun3x_esp.c
@@ -206,7 +206,9 @@ static int esp_sun3x_probe(struct platform_device *dev)
if (!esp->command_block)
goto fail_unmap_regs_dma;
- host->irq = platform_get_irq(dev, 0);
+ host->irq = err = platform_get_irq(dev, 0);
+ if (err < 0)
+ goto fail_unmap_command_block;
err = request_irq(host->irq, scsi_esp_intr, IRQF_SHARED,
"SUN3X ESP", esp);
if (err < 0)
diff --git a/drivers/scsi/ufs/ufs-hisi.c b/drivers/scsi/ufs/ufs-hisi.c
index 074a6a0..55b7161 100644
--- a/drivers/scsi/ufs/ufs-hisi.c
+++ b/drivers/scsi/ufs/ufs-hisi.c
@@ -478,21 +478,24 @@ static int ufs_hisi_init_common(struct ufs_hba *hba)
host->hba = hba;
ufshcd_set_variant(hba, host);
- host->rst = devm_reset_control_get(dev, "rst");
+ host->rst = devm_reset_control_get(dev, "rst");
if (IS_ERR(host->rst)) {
dev_err(dev, "%s: failed to get reset control\n", __func__);
- return PTR_ERR(host->rst);
+ err = PTR_ERR(host->rst);
+ goto error;
}
ufs_hisi_set_pm_lvl(hba);
err = ufs_hisi_get_resource(host);
- if (err) {
- ufshcd_set_variant(hba, NULL);
- return err;
- }
+ if (err)
+ goto error;
return 0;
+
+error:
+ ufshcd_set_variant(hba, NULL);
+ return err;
}
static int ufs_hi3660_init(struct ufs_hba *hba)
diff --git a/drivers/scsi/ufs/ufs-mediatek.c b/drivers/scsi/ufs/ufs-mediatek.c
index 09d2ac2..aace133 100644
--- a/drivers/scsi/ufs/ufs-mediatek.c
+++ b/drivers/scsi/ufs/ufs-mediatek.c
@@ -824,6 +824,7 @@ static void ufs_mtk_vreg_set_lpm(struct ufs_hba *hba, bool lpm)
static int ufs_mtk_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op)
{
int err;
+ struct arm_smccc_res res;
if (ufshcd_is_link_hibern8(hba)) {
err = ufs_mtk_link_set_lpm(hba);
@@ -844,6 +845,9 @@ static int ufs_mtk_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op)
ufs_mtk_vreg_set_lpm(hba, true);
}
+ if (ufshcd_is_link_off(hba))
+ ufs_mtk_device_reset_ctrl(0, res);
+
return 0;
}
diff --git a/drivers/scsi/ufs/ufshcd-pltfrm.c b/drivers/scsi/ufs/ufshcd-pltfrm.c
index 3db0af6..24927cf 100644
--- a/drivers/scsi/ufs/ufshcd-pltfrm.c
+++ b/drivers/scsi/ufs/ufshcd-pltfrm.c
@@ -377,7 +377,7 @@ int ufshcd_pltfrm_init(struct platform_device *pdev,
irq = platform_get_irq(pdev, 0);
if (irq < 0) {
- err = -ENODEV;
+ err = irq;
goto out;
}
diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index 97d9d5d..854c96e 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -2768,7 +2768,7 @@ static int ufshcd_wait_for_dev_cmd(struct ufs_hba *hba,
* ufshcd_exec_dev_cmd - API for sending device management requests
* @hba: UFS hba
* @cmd_type: specifies the type (NOP, Query...)
- * @timeout: time in seconds
+ * @timeout: timeout in milliseconds
*
* NOTE: Since there is only one available tag for device management commands,
* it is expected you hold the hba->dev_cmd.lock mutex.
@@ -2798,6 +2798,9 @@ static int ufshcd_exec_dev_cmd(struct ufs_hba *hba,
}
tag = req->tag;
WARN_ON_ONCE(!ufshcd_valid_tag(hba, tag));
+ /* Set the timeout such that the SCSI error handler is not activated. */
+ req->timeout = msecs_to_jiffies(2 * timeout);
+ blk_mq_start_request(req);
init_completion(&wait);
lrbp = &hba->lrb[tag];
@@ -6256,37 +6259,34 @@ static int __ufshcd_issue_tm_cmd(struct ufs_hba *hba,
DECLARE_COMPLETION_ONSTACK(wait);
struct request *req;
unsigned long flags;
- int free_slot, task_tag, err;
+ int task_tag, err;
/*
- * Get free slot, sleep if slots are unavailable.
- * Even though we use wait_event() which sleeps indefinitely,
- * the maximum wait time is bounded by %TM_CMD_TIMEOUT.
+ * blk_get_request() is used here only to get a free tag.
*/
req = blk_get_request(q, REQ_OP_DRV_OUT, 0);
if (IS_ERR(req))
return PTR_ERR(req);
req->end_io_data = &wait;
- free_slot = req->tag;
- WARN_ON_ONCE(free_slot < 0 || free_slot >= hba->nutmrs);
ufshcd_hold(hba, false);
spin_lock_irqsave(host->host_lock, flags);
- task_tag = hba->nutrs + free_slot;
+ blk_mq_start_request(req);
+ task_tag = req->tag;
treq->req_header.dword_0 |= cpu_to_be32(task_tag);
- memcpy(hba->utmrdl_base_addr + free_slot, treq, sizeof(*treq));
- ufshcd_vops_setup_task_mgmt(hba, free_slot, tm_function);
+ memcpy(hba->utmrdl_base_addr + task_tag, treq, sizeof(*treq));
+ ufshcd_vops_setup_task_mgmt(hba, task_tag, tm_function);
/* send command to the controller */
- __set_bit(free_slot, &hba->outstanding_tasks);
+ __set_bit(task_tag, &hba->outstanding_tasks);
/* Make sure descriptors are ready before ringing the task doorbell */
wmb();
- ufshcd_writel(hba, 1 << free_slot, REG_UTP_TASK_REQ_DOOR_BELL);
+ ufshcd_writel(hba, 1 << task_tag, REG_UTP_TASK_REQ_DOOR_BELL);
/* Make sure that doorbell is committed immediately */
wmb();
@@ -6306,24 +6306,24 @@ static int __ufshcd_issue_tm_cmd(struct ufs_hba *hba,
ufshcd_add_tm_upiu_trace(hba, task_tag, "tm_complete_err");
dev_err(hba->dev, "%s: task management cmd 0x%.2x timed-out\n",
__func__, tm_function);
- if (ufshcd_clear_tm_cmd(hba, free_slot))
- dev_WARN(hba->dev, "%s: unable clear tm cmd (slot %d) after timeout\n",
- __func__, free_slot);
+ if (ufshcd_clear_tm_cmd(hba, task_tag))
+ dev_WARN(hba->dev, "%s: unable to clear tm cmd (slot %d) after timeout\n",
+ __func__, task_tag);
err = -ETIMEDOUT;
} else {
err = 0;
- memcpy(treq, hba->utmrdl_base_addr + free_slot, sizeof(*treq));
+ memcpy(treq, hba->utmrdl_base_addr + task_tag, sizeof(*treq));
ufshcd_add_tm_upiu_trace(hba, task_tag, "tm_complete");
}
spin_lock_irqsave(hba->host->host_lock, flags);
- __clear_bit(free_slot, &hba->outstanding_tasks);
+ __clear_bit(task_tag, &hba->outstanding_tasks);
spin_unlock_irqrestore(hba->host->host_lock, flags);
+ ufshcd_release(hba);
blk_put_request(req);
- ufshcd_release(hba);
return err;
}
@@ -8462,7 +8462,7 @@ static void ufshcd_vreg_set_lpm(struct ufs_hba *hba)
} else if (!ufshcd_is_ufs_dev_active(hba)) {
ufshcd_toggle_vreg(hba->dev, hba->vreg_info.vcc, false);
vcc_off = true;
- if (!ufshcd_is_link_active(hba)) {
+ if (ufshcd_is_link_hibern8(hba) || ufshcd_is_link_off(hba)) {
ufshcd_config_vreg_lpm(hba, hba->vreg_info.vccq);
ufshcd_config_vreg_lpm(hba, hba->vreg_info.vccq2);
}
@@ -8484,7 +8484,7 @@ static int ufshcd_vreg_set_hpm(struct ufs_hba *hba)
!hba->dev_info.is_lu_power_on_wp) {
ret = ufshcd_setup_vreg(hba, true);
} else if (!ufshcd_is_ufs_dev_active(hba)) {
- if (!ret && !ufshcd_is_link_active(hba)) {
+ if (!ufshcd_is_link_active(hba)) {
ret = ufshcd_config_vreg_hpm(hba, hba->vreg_info.vccq);
if (ret)
goto vcc_disable;
@@ -8822,10 +8822,13 @@ int ufshcd_system_suspend(struct ufs_hba *hba)
if (!hba || !hba->is_powered)
return 0;
+ cancel_delayed_work_sync(&hba->rpm_dev_flush_recheck_work);
+
if ((ufs_get_pm_lvl_to_dev_pwr_mode(hba->spm_lvl) ==
hba->curr_dev_pwr_mode) &&
(ufs_get_pm_lvl_to_link_pwr_state(hba->spm_lvl) ==
hba->uic_link_state) &&
+ pm_runtime_suspended(hba->dev) &&
!hba->dev_info.b_rpm_dev_flush_capable)
goto out;
diff --git a/drivers/soc/aspeed/aspeed-lpc-snoop.c b/drivers/soc/aspeed/aspeed-lpc-snoop.c
index dbe5325..538d7aa 100644
--- a/drivers/soc/aspeed/aspeed-lpc-snoop.c
+++ b/drivers/soc/aspeed/aspeed-lpc-snoop.c
@@ -95,8 +95,10 @@ static ssize_t snoop_file_read(struct file *file, char __user *buffer,
return -EINTR;
}
ret = kfifo_to_user(&chan->fifo, buffer, count, &copied);
+ if (ret)
+ return ret;
- return ret ? ret : copied;
+ return copied;
}
static __poll_t snoop_file_poll(struct file *file,
diff --git a/drivers/soc/fsl/qbman/qman.c b/drivers/soc/fsl/qbman/qman.c
index 9888a70..feb9747 100644
--- a/drivers/soc/fsl/qbman/qman.c
+++ b/drivers/soc/fsl/qbman/qman.c
@@ -186,7 +186,7 @@ struct qm_eqcr_entry {
__be32 tag;
struct qm_fd fd;
u8 __reserved3[32];
-} __packed;
+} __packed __aligned(8);
#define QM_EQCR_VERB_VBIT 0x80
#define QM_EQCR_VERB_CMD_MASK 0x61 /* but only one value; */
#define QM_EQCR_VERB_CMD_ENQUEUE 0x01
diff --git a/drivers/soc/qcom/mdt_loader.c b/drivers/soc/qcom/mdt_loader.c
index 24cd193..eba7f76 100644
--- a/drivers/soc/qcom/mdt_loader.c
+++ b/drivers/soc/qcom/mdt_loader.c
@@ -230,6 +230,14 @@ static int __qcom_mdt_load(struct device *dev, const struct firmware *fw,
break;
}
+ if (phdr->p_filesz > phdr->p_memsz) {
+ dev_err(dev,
+ "refusing to load segment %d with p_filesz > p_memsz\n",
+ i);
+ ret = -EINVAL;
+ break;
+ }
+
ptr = mem_region + offset;
if (phdr->p_filesz && phdr->p_offset < fw->size) {
@@ -253,6 +261,15 @@ static int __qcom_mdt_load(struct device *dev, const struct firmware *fw,
break;
}
+ if (seg_fw->size != phdr->p_filesz) {
+ dev_err(dev,
+ "failed to load segment %d from truncated file %s\n",
+ i, fw_name);
+ release_firmware(seg_fw);
+ ret = -EINVAL;
+ break;
+ }
+
release_firmware(seg_fw);
}
diff --git a/drivers/soc/qcom/pdr_interface.c b/drivers/soc/qcom/pdr_interface.c
index f63135c..205cc96 100644
--- a/drivers/soc/qcom/pdr_interface.c
+++ b/drivers/soc/qcom/pdr_interface.c
@@ -153,7 +153,7 @@ static int pdr_register_listener(struct pdr_handle *pdr,
if (resp.resp.result != QMI_RESULT_SUCCESS_V01) {
pr_err("PDR: %s register listener failed: 0x%x\n",
pds->service_path, resp.resp.error);
- return ret;
+ return -EREMOTEIO;
}
pds->state = resp.curr_state;
diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c
index be76fdd..0dbca67 100644
--- a/drivers/soc/qcom/qcom-geni-se.c
+++ b/drivers/soc/qcom/qcom-geni-se.c
@@ -741,6 +741,9 @@ int geni_icc_get(struct geni_se *se, const char *icc_ddr)
int i, err;
const char *icc_names[] = {"qup-core", "qup-config", icc_ddr};
+ if (has_acpi_companion(se->dev))
+ return 0;
+
for (i = 0; i < ARRAY_SIZE(se->icc_paths); i++) {
if (!icc_names[i])
continue;
diff --git a/drivers/soc/tegra/pmc.c b/drivers/soc/tegra/pmc.c
index df9a5ca..0118bd9 100644
--- a/drivers/soc/tegra/pmc.c
+++ b/drivers/soc/tegra/pmc.c
@@ -317,6 +317,8 @@ struct tegra_pmc_soc {
bool invert);
int (*irq_set_wake)(struct irq_data *data, unsigned int on);
int (*irq_set_type)(struct irq_data *data, unsigned int type);
+ int (*powergate_set)(struct tegra_pmc *pmc, unsigned int id,
+ bool new_state);
const char * const *reset_sources;
unsigned int num_reset_sources;
@@ -517,6 +519,63 @@ static int tegra_powergate_lookup(struct tegra_pmc *pmc, const char *name)
return -ENODEV;
}
+static int tegra20_powergate_set(struct tegra_pmc *pmc, unsigned int id,
+ bool new_state)
+{
+ unsigned int retries = 100;
+ bool status;
+ int ret;
+
+ /*
+ * As per TRM documentation, the toggle command will be dropped by PMC
+ * if there is contention with a HW-initiated toggling (i.e. CPU core
+ * power-gated), the command should be retried in that case.
+ */
+ do {
+ tegra_pmc_writel(pmc, PWRGATE_TOGGLE_START | id, PWRGATE_TOGGLE);
+
+ /* wait for PMC to execute the command */
+ ret = readx_poll_timeout(tegra_powergate_state, id, status,
+ status == new_state, 1, 10);
+ } while (ret == -ETIMEDOUT && retries--);
+
+ return ret;
+}
+
+static inline bool tegra_powergate_toggle_ready(struct tegra_pmc *pmc)
+{
+ return !(tegra_pmc_readl(pmc, PWRGATE_TOGGLE) & PWRGATE_TOGGLE_START);
+}
+
+static int tegra114_powergate_set(struct tegra_pmc *pmc, unsigned int id,
+ bool new_state)
+{
+ bool status;
+ int err;
+
+ /* wait while PMC power gating is contended */
+ err = readx_poll_timeout(tegra_powergate_toggle_ready, pmc, status,
+ status == true, 1, 100);
+ if (err)
+ return err;
+
+ tegra_pmc_writel(pmc, PWRGATE_TOGGLE_START | id, PWRGATE_TOGGLE);
+
+ /* wait for PMC to accept the command */
+ err = readx_poll_timeout(tegra_powergate_toggle_ready, pmc, status,
+ status == true, 1, 100);
+ if (err)
+ return err;
+
+ /* wait for PMC to execute the command */
+ err = readx_poll_timeout(tegra_powergate_state, id, status,
+ status == new_state, 10, 100000);
+ if (err)
+ return err;
+
+ return 0;
+}
+
/**
* tegra_powergate_set() - set the state of a partition
* @pmc: power management controller
@@ -526,7 +585,6 @@ static int tegra_powergate_lookup(struct tegra_pmc *pmc, const char *name)
static int tegra_powergate_set(struct tegra_pmc *pmc, unsigned int id,
bool new_state)
{
- bool status;
int err;
if (id == TEGRA_POWERGATE_3D && pmc->soc->has_gpu_clamps)
@@ -539,10 +597,7 @@ static int tegra_powergate_set(struct tegra_pmc *pmc, unsigned int id,
return 0;
}
- tegra_pmc_writel(pmc, PWRGATE_TOGGLE_START | id, PWRGATE_TOGGLE);
-
- err = readx_poll_timeout(tegra_powergate_state, id, status,
- status == new_state, 10, 100000);
+ err = pmc->soc->powergate_set(pmc, id, new_state);
mutex_unlock(&pmc->powergates_lock);
@@ -2699,6 +2754,7 @@ static const struct tegra_pmc_soc tegra20_pmc_soc = {
.regs = &tegra20_pmc_regs,
.init = tegra20_pmc_init,
.setup_irq_polarity = tegra20_pmc_setup_irq_polarity,
+ .powergate_set = tegra20_powergate_set,
.reset_sources = NULL,
.num_reset_sources = 0,
.reset_levels = NULL,
@@ -2757,6 +2813,7 @@ static const struct tegra_pmc_soc tegra30_pmc_soc = {
.regs = &tegra20_pmc_regs,
.init = tegra20_pmc_init,
.setup_irq_polarity = tegra20_pmc_setup_irq_polarity,
+ .powergate_set = tegra20_powergate_set,
.reset_sources = tegra30_reset_sources,
.num_reset_sources = ARRAY_SIZE(tegra30_reset_sources),
.reset_levels = NULL,
@@ -2811,6 +2868,7 @@ static const struct tegra_pmc_soc tegra114_pmc_soc = {
.regs = &tegra20_pmc_regs,
.init = tegra20_pmc_init,
.setup_irq_polarity = tegra20_pmc_setup_irq_polarity,
+ .powergate_set = tegra114_powergate_set,
.reset_sources = tegra30_reset_sources,
.num_reset_sources = ARRAY_SIZE(tegra30_reset_sources),
.reset_levels = NULL,
@@ -2925,6 +2983,7 @@ static const struct tegra_pmc_soc tegra124_pmc_soc = {
.regs = &tegra20_pmc_regs,
.init = tegra20_pmc_init,
.setup_irq_polarity = tegra20_pmc_setup_irq_polarity,
+ .powergate_set = tegra114_powergate_set,
.reset_sources = tegra30_reset_sources,
.num_reset_sources = ARRAY_SIZE(tegra30_reset_sources),
.reset_levels = NULL,
@@ -3048,6 +3107,7 @@ static const struct tegra_pmc_soc tegra210_pmc_soc = {
.regs = &tegra20_pmc_regs,
.init = tegra20_pmc_init,
.setup_irq_polarity = tegra20_pmc_setup_irq_polarity,
+ .powergate_set = tegra114_powergate_set,
.irq_set_wake = tegra210_pmc_irq_set_wake,
.irq_set_type = tegra210_pmc_irq_set_type,
.reset_sources = tegra210_reset_sources,
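The Tegra PMC hunks above lean on the kernel's readx_poll_timeout() helper from <linux/iopoll.h>, retrying the whole toggle when the poll times out because a contended PMC may silently drop the command. A minimal sketch of that retry-on-timeout idiom follows; every name in it (my_read_powered, my_issue_toggle) is illustrative, not part of the patch.
#include <linux/iopoll.h>
#include <linux/errno.h>
/* Hypothetical accessor: reports whether the partition is currently on. */
static bool my_read_powered(unsigned int id);
/* Hypothetical helper: writes the toggle command to the controller. */
static void my_issue_toggle(unsigned int id);
static int my_toggle_and_wait(unsigned int id, bool on)
{
	unsigned int retries = 3;
	bool status;
	int ret;

	do {
		my_issue_toggle(id);

		/* readx_poll_timeout(op, addr, val, cond, sleep_us, timeout_us) */
		ret = readx_poll_timeout(my_read_powered, id, status,
					 status == on, 1, 10);
	} while (ret == -ETIMEDOUT && retries--);

	return ret;
}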
diff --git a/drivers/soc/tegra/regulators-tegra30.c b/drivers/soc/tegra/regulators-tegra30.c
index 7f21f31..0e776b2 100644
--- a/drivers/soc/tegra/regulators-tegra30.c
+++ b/drivers/soc/tegra/regulators-tegra30.c
@@ -178,7 +178,7 @@ static int tegra30_voltage_update(struct tegra_regulator_coupler *tegra,
* survive the voltage drop if it's running on a higher frequency.
*/
if (!cpu_min_uV_consumers)
- cpu_min_uV = cpu_uV;
+ cpu_min_uV = max(cpu_uV, cpu_min_uV);
/*
* Bootloader shall set up voltages correctly, but if it
diff --git a/drivers/soundwire/bus.c b/drivers/soundwire/bus.c
index 1fe7868..3317a02 100644
--- a/drivers/soundwire/bus.c
+++ b/drivers/soundwire/bus.c
@@ -703,7 +703,7 @@ static int sdw_program_device_num(struct sdw_bus *bus)
struct sdw_slave *slave, *_s;
struct sdw_slave_id id;
struct sdw_msg msg;
- bool found = false;
+ bool found;
int count = 0, ret;
u64 addr;
@@ -735,6 +735,7 @@ static int sdw_program_device_num(struct sdw_bus *bus)
sdw_extract_slave_id(bus, addr, &id);
+ found = false;
/* Now compare with entries */
list_for_each_entry_safe(slave, _s, &bus->slaves, node) {
if (sdw_compare_devid(slave, id) == 0) {
diff --git a/drivers/soundwire/cadence_master.c b/drivers/soundwire/cadence_master.c
index 5806605..c6d421a 100644
--- a/drivers/soundwire/cadence_master.c
+++ b/drivers/soundwire/cadence_master.c
@@ -1449,10 +1449,12 @@ int sdw_cdns_clock_stop(struct sdw_cdns *cdns, bool block_wake)
}
/* Prepare slaves for clock stop */
- ret = sdw_bus_prep_clk_stop(&cdns->bus);
- if (ret < 0) {
- dev_err(cdns->dev, "prepare clock stop failed %d", ret);
- return ret;
+ if (slave_present) {
+ ret = sdw_bus_prep_clk_stop(&cdns->bus);
+ if (ret < 0 && ret != -ENODATA) {
+ dev_err(cdns->dev, "prepare clock stop failed %d\n", ret);
+ return ret;
+ }
}
/*
diff --git a/drivers/soundwire/stream.c b/drivers/soundwire/stream.c
index 1099b5d..a418c3c7 100644
--- a/drivers/soundwire/stream.c
+++ b/drivers/soundwire/stream.c
@@ -1375,8 +1375,16 @@ int sdw_stream_add_slave(struct sdw_slave *slave,
}
ret = sdw_config_stream(&slave->dev, stream, stream_config, true);
- if (ret)
+ if (ret) {
+ /*
+ * sdw_release_master_stream will release s_rt in slave_rt_list in
+ * stream_error case, but s_rt is only added to slave_rt_list
+ * when sdw_config_stream is successful, so free s_rt explicitly
+ * when sdw_config_stream is failed.
+ */
+ kfree(s_rt);
goto stream_error;
+ }
list_add_tail(&s_rt->m_rt_node, &m_rt->slave_rt_list);
diff --git a/drivers/spi/spi-ath79.c b/drivers/spi/spi-ath79.c
index eb9a243..98ace74 100644
--- a/drivers/spi/spi-ath79.c
+++ b/drivers/spi/spi-ath79.c
@@ -156,8 +156,7 @@ static int ath79_spi_probe(struct platform_device *pdev)
master->use_gpio_descriptors = true;
master->bits_per_word_mask = SPI_BPW_RANGE_MASK(1, 32);
- master->setup = spi_bitbang_setup;
- master->cleanup = spi_bitbang_cleanup;
+ master->flags = SPI_MASTER_GPIO_SS;
if (pdata) {
master->bus_num = pdata->bus_num;
master->num_chipselect = pdata->num_chipselect;
diff --git a/drivers/spi/spi-dln2.c b/drivers/spi/spi-dln2.c
index 75b33d7..9a4d942 100644
--- a/drivers/spi/spi-dln2.c
+++ b/drivers/spi/spi-dln2.c
@@ -780,7 +780,7 @@ static int dln2_spi_probe(struct platform_device *pdev)
static int dln2_spi_remove(struct platform_device *pdev)
{
- struct spi_master *master = spi_master_get(platform_get_drvdata(pdev));
+ struct spi_master *master = platform_get_drvdata(pdev);
struct dln2_spi *dln2 = spi_master_get_devdata(master);
pm_runtime_disable(&pdev->dev);
diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c
index 0287366..fb45e6a 100644
--- a/drivers/spi/spi-fsl-dspi.c
+++ b/drivers/spi/spi-fsl-dspi.c
@@ -1375,11 +1375,13 @@ static int dspi_probe(struct platform_device *pdev)
ret = spi_register_controller(ctlr);
if (ret != 0) {
dev_err(&pdev->dev, "Problem registering DSPI ctlr\n");
- goto out_free_irq;
+ goto out_release_dma;
}
return ret;
+out_release_dma:
+ dspi_release_dma(dspi);
out_free_irq:
if (dspi->irq)
free_irq(dspi->irq, dspi);
diff --git a/drivers/spi/spi-fsl-lpspi.c b/drivers/spi/spi-fsl-lpspi.c
index a2886ee..5d98611 100644
--- a/drivers/spi/spi-fsl-lpspi.c
+++ b/drivers/spi/spi-fsl-lpspi.c
@@ -200,7 +200,7 @@ static int lpspi_prepare_xfer_hardware(struct spi_controller *controller)
spi_controller_get_devdata(controller);
int ret;
- ret = pm_runtime_get_sync(fsl_lpspi->dev);
+ ret = pm_runtime_resume_and_get(fsl_lpspi->dev);
if (ret < 0) {
dev_err(fsl_lpspi->dev, "failed to enable clock\n");
return ret;
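The conversions to pm_runtime_resume_and_get() in the SPI hunks above and below matter because pm_runtime_get_sync() raises the usage counter even when the resume fails, leaving it for the caller to drop. A hedged sketch of the resulting error handling (function name is illustrative):
#include <linux/pm_runtime.h>
static int my_prepare_hw(struct device *dev)
{
	int ret;

	/* On failure the usage counter has already been dropped for us. */
	ret = pm_runtime_resume_and_get(dev);
	if (ret < 0)
		return ret;

	/* ... access the hardware ... */

	pm_runtime_mark_last_busy(dev);
	pm_runtime_put_autosuspend(dev);
	return 0;
}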
diff --git a/drivers/spi/spi-fsl-spi.c b/drivers/spi/spi-fsl-spi.c
index e4a8d20..d0e5aa1 100644
--- a/drivers/spi/spi-fsl-spi.c
+++ b/drivers/spi/spi-fsl-spi.c
@@ -707,6 +707,11 @@ static int of_fsl_spi_probe(struct platform_device *ofdev)
struct resource mem;
int irq, type;
int ret;
+ bool spisel_boot = false;
+#if IS_ENABLED(CONFIG_FSL_SOC)
+ struct mpc8xxx_spi_probe_info *pinfo = NULL;
+#endif
+
ret = of_mpc8xxx_spi_probe(ofdev);
if (ret)
@@ -715,9 +720,8 @@ static int of_fsl_spi_probe(struct platform_device *ofdev)
type = fsl_spi_get_type(&ofdev->dev);
if (type == TYPE_FSL) {
struct fsl_spi_platform_data *pdata = dev_get_platdata(dev);
- bool spisel_boot = false;
#if IS_ENABLED(CONFIG_FSL_SOC)
- struct mpc8xxx_spi_probe_info *pinfo = to_of_pinfo(pdata);
+ pinfo = to_of_pinfo(pdata);
spisel_boot = of_property_read_bool(np, "fsl,spisel_boot");
if (spisel_boot) {
@@ -746,15 +750,24 @@ static int of_fsl_spi_probe(struct platform_device *ofdev)
ret = of_address_to_resource(np, 0, &mem);
if (ret)
- return ret;
+ goto unmap_out;
irq = platform_get_irq(ofdev, 0);
- if (irq < 0)
- return irq;
+ if (irq < 0) {
+ ret = irq;
+ goto unmap_out;
+ }
master = fsl_spi_probe(dev, &mem, irq);
return PTR_ERR_OR_ZERO(master);
+
+unmap_out:
+#if IS_ENABLED(CONFIG_FSL_SOC)
+ if (spisel_boot)
+ iounmap(pinfo->immr_spi_cs);
+#endif
+ return ret;
}
static int of_fsl_spi_remove(struct platform_device *ofdev)
diff --git a/drivers/spi/spi-omap-100k.c b/drivers/spi/spi-omap-100k.c
index 36a4922..ccd817e 100644
--- a/drivers/spi/spi-omap-100k.c
+++ b/drivers/spi/spi-omap-100k.c
@@ -424,7 +424,7 @@ static int omap1_spi100k_probe(struct platform_device *pdev)
static int omap1_spi100k_remove(struct platform_device *pdev)
{
- struct spi_master *master = spi_master_get(platform_get_drvdata(pdev));
+ struct spi_master *master = platform_get_drvdata(pdev);
struct omap1_spi100k *spi100k = spi_master_get_devdata(master);
pm_runtime_disable(&pdev->dev);
@@ -438,7 +438,7 @@ static int omap1_spi100k_remove(struct platform_device *pdev)
#ifdef CONFIG_PM
static int omap1_spi100k_runtime_suspend(struct device *dev)
{
- struct spi_master *master = spi_master_get(dev_get_drvdata(dev));
+ struct spi_master *master = dev_get_drvdata(dev);
struct omap1_spi100k *spi100k = spi_master_get_devdata(master);
clk_disable_unprepare(spi100k->ick);
@@ -449,7 +449,7 @@ static int omap1_spi100k_runtime_suspend(struct device *dev)
static int omap1_spi100k_runtime_resume(struct device *dev)
{
- struct spi_master *master = spi_master_get(dev_get_drvdata(dev));
+ struct spi_master *master = dev_get_drvdata(dev);
struct omap1_spi100k *spi100k = spi_master_get_devdata(master);
int ret;
diff --git a/drivers/spi/spi-qup.c b/drivers/spi/spi-qup.c
index 8dcb2e7..d39dec6 100644
--- a/drivers/spi/spi-qup.c
+++ b/drivers/spi/spi-qup.c
@@ -1263,7 +1263,7 @@ static int spi_qup_remove(struct platform_device *pdev)
struct spi_qup *controller = spi_master_get_devdata(master);
int ret;
- ret = pm_runtime_get_sync(&pdev->dev);
+ ret = pm_runtime_resume_and_get(&pdev->dev);
if (ret < 0)
return ret;
diff --git a/drivers/spi/spi-rockchip.c b/drivers/spi/spi-rockchip.c
index 75a8a94..0aab37c 100644
--- a/drivers/spi/spi-rockchip.c
+++ b/drivers/spi/spi-rockchip.c
@@ -474,7 +474,7 @@ static int rockchip_spi_prepare_dma(struct rockchip_spi *rs,
return 1;
}
-static void rockchip_spi_config(struct rockchip_spi *rs,
+static int rockchip_spi_config(struct rockchip_spi *rs,
struct spi_device *spi, struct spi_transfer *xfer,
bool use_dma, bool slave_mode)
{
@@ -519,7 +519,9 @@ static void rockchip_spi_config(struct rockchip_spi *rs,
* ctlr->bits_per_word_mask, so this shouldn't
* happen
*/
- unreachable();
+ dev_err(rs->dev, "unknown bits per word: %d\n",
+ xfer->bits_per_word);
+ return -EINVAL;
}
if (use_dma) {
@@ -552,6 +554,8 @@ static void rockchip_spi_config(struct rockchip_spi *rs,
*/
writel_relaxed(2 * DIV_ROUND_UP(rs->freq, 2 * xfer->speed_hz),
rs->regs + ROCKCHIP_SPI_BAUDR);
+
+ return 0;
}
static size_t rockchip_spi_max_transfer_size(struct spi_device *spi)
@@ -575,6 +579,7 @@ static int rockchip_spi_transfer_one(
struct spi_transfer *xfer)
{
struct rockchip_spi *rs = spi_controller_get_devdata(ctlr);
+ int ret;
bool use_dma;
WARN_ON(readl_relaxed(rs->regs + ROCKCHIP_SPI_SSIENR) &&
@@ -594,7 +599,9 @@ static int rockchip_spi_transfer_one(
use_dma = ctlr->can_dma ? ctlr->can_dma(ctlr, spi, xfer) : false;
- rockchip_spi_config(rs, spi, xfer, use_dma, ctlr->slave);
+ ret = rockchip_spi_config(rs, spi, xfer, use_dma, ctlr->slave);
+ if (ret)
+ return ret;
if (use_dma)
return rockchip_spi_prepare_dma(rs, ctlr, xfer);
diff --git a/drivers/spi/spi-stm32-qspi.c b/drivers/spi/spi-stm32-qspi.c
index 947e6b9..2786470 100644
--- a/drivers/spi/spi-stm32-qspi.c
+++ b/drivers/spi/spi-stm32-qspi.c
@@ -727,21 +727,31 @@ static int __maybe_unused stm32_qspi_suspend(struct device *dev)
{
pinctrl_pm_select_sleep_state(dev);
- return 0;
+ return pm_runtime_force_suspend(dev);
}
static int __maybe_unused stm32_qspi_resume(struct device *dev)
{
struct stm32_qspi *qspi = dev_get_drvdata(dev);
+ int ret;
+
+ ret = pm_runtime_force_resume(dev);
+ if (ret < 0)
+ return ret;
pinctrl_pm_select_default_state(dev);
- clk_prepare_enable(qspi->clk);
+
+ ret = pm_runtime_get_sync(dev);
+ if (ret < 0) {
+ pm_runtime_put_noidle(dev);
+ return ret;
+ }
writel_relaxed(qspi->cr_reg, qspi->io_base + QSPI_CR);
writel_relaxed(qspi->dcr_reg, qspi->io_base + QSPI_DCR);
- pm_runtime_mark_last_busy(qspi->dev);
- pm_runtime_put_autosuspend(qspi->dev);
+ pm_runtime_mark_last_busy(dev);
+ pm_runtime_put_autosuspend(dev);
return 0;
}
diff --git a/drivers/spi/spi-stm32.c b/drivers/spi/spi-stm32.c
index 53c4311..0318f02 100644
--- a/drivers/spi/spi-stm32.c
+++ b/drivers/spi/spi-stm32.c
@@ -1830,7 +1830,7 @@ static int stm32_spi_probe(struct platform_device *pdev)
struct resource *res;
int ret;
- master = spi_alloc_master(&pdev->dev, sizeof(struct stm32_spi));
+ master = devm_spi_alloc_master(&pdev->dev, sizeof(struct stm32_spi));
if (!master) {
dev_err(&pdev->dev, "spi master allocation failed\n");
return -ENOMEM;
@@ -1848,18 +1848,16 @@ static int stm32_spi_probe(struct platform_device *pdev)
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
spi->base = devm_ioremap_resource(&pdev->dev, res);
- if (IS_ERR(spi->base)) {
- ret = PTR_ERR(spi->base);
- goto err_master_put;
- }
+ if (IS_ERR(spi->base))
+ return PTR_ERR(spi->base);
spi->phys_addr = (dma_addr_t)res->start;
spi->irq = platform_get_irq(pdev, 0);
- if (spi->irq <= 0) {
- ret = dev_err_probe(&pdev->dev, spi->irq, "failed to get irq\n");
- goto err_master_put;
- }
+ if (spi->irq <= 0)
+ return dev_err_probe(&pdev->dev, spi->irq,
+ "failed to get irq\n");
+
ret = devm_request_threaded_irq(&pdev->dev, spi->irq,
spi->cfg->irq_handler_event,
spi->cfg->irq_handler_thread,
@@ -1867,20 +1865,20 @@ static int stm32_spi_probe(struct platform_device *pdev)
if (ret) {
dev_err(&pdev->dev, "irq%d request failed: %d\n", spi->irq,
ret);
- goto err_master_put;
+ return ret;
}
spi->clk = devm_clk_get(&pdev->dev, NULL);
if (IS_ERR(spi->clk)) {
ret = PTR_ERR(spi->clk);
dev_err(&pdev->dev, "clk get failed: %d\n", ret);
- goto err_master_put;
+ return ret;
}
ret = clk_prepare_enable(spi->clk);
if (ret) {
dev_err(&pdev->dev, "clk enable failed: %d\n", ret);
- goto err_master_put;
+ return ret;
}
spi->clk_rate = clk_get_rate(spi->clk);
if (!spi->clk_rate) {
@@ -1950,7 +1948,7 @@ static int stm32_spi_probe(struct platform_device *pdev)
pm_runtime_set_active(&pdev->dev);
pm_runtime_enable(&pdev->dev);
- ret = devm_spi_register_master(&pdev->dev, master);
+ ret = spi_register_master(master);
if (ret) {
dev_err(&pdev->dev, "spi master registration failed: %d\n",
ret);
@@ -1976,8 +1974,6 @@ static int stm32_spi_probe(struct platform_device *pdev)
dma_release_channel(spi->dma_rx);
err_clk_disable:
clk_disable_unprepare(spi->clk);
-err_master_put:
- spi_master_put(master);
return ret;
}
@@ -1987,6 +1983,7 @@ static int stm32_spi_remove(struct platform_device *pdev)
struct spi_master *master = platform_get_drvdata(pdev);
struct stm32_spi *spi = spi_master_get_devdata(master);
+ spi_unregister_master(master);
spi->cfg->disable(spi);
if (master->dma_tx)
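The spi-stm32 changes above pair devm_spi_alloc_master() with explicit spi_register_master()/spi_unregister_master(): the devres allocation removes the need for spi_master_put() on error paths, while manual registration lets remove() unregister before the hardware is shut down. A rough sketch of that probe/remove shape, with a hypothetical my_priv driver structure:
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/spi/spi.h>
struct my_priv { int dummy; };	/* placeholder driver state */
static int my_probe(struct platform_device *pdev)
{
	struct spi_master *master;
	int ret;

	master = devm_spi_alloc_master(&pdev->dev, sizeof(struct my_priv));
	if (!master)
		return -ENOMEM;

	/* resource setup: plain "return ret;" on failure, no spi_master_put() */

	ret = spi_register_master(master);
	if (ret)
		return ret;

	platform_set_drvdata(pdev, master);
	return 0;
}
static int my_remove(struct platform_device *pdev)
{
	struct spi_master *master = platform_get_drvdata(pdev);

	spi_unregister_master(master);	/* stop traffic before tearing down */
	/* ... disable clocks, release DMA channels, etc. ... */
	return 0;
}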
diff --git a/drivers/spi/spi-ti-qspi.c b/drivers/spi/spi-ti-qspi.c
index 9417385..e06aafe 100644
--- a/drivers/spi/spi-ti-qspi.c
+++ b/drivers/spi/spi-ti-qspi.c
@@ -733,6 +733,17 @@ static int ti_qspi_runtime_resume(struct device *dev)
return 0;
}
+static void ti_qspi_dma_cleanup(struct ti_qspi *qspi)
+{
+ if (qspi->rx_bb_addr)
+ dma_free_coherent(qspi->dev, QSPI_DMA_BUFFER_SIZE,
+ qspi->rx_bb_addr,
+ qspi->rx_bb_dma_addr);
+
+ if (qspi->rx_chan)
+ dma_release_channel(qspi->rx_chan);
+}
+
static const struct of_device_id ti_qspi_match[] = {
{.compatible = "ti,dra7xxx-qspi" },
{.compatible = "ti,am4372-qspi" },
@@ -886,6 +897,8 @@ static int ti_qspi_probe(struct platform_device *pdev)
if (!ret)
return 0;
+ ti_qspi_dma_cleanup(qspi);
+
pm_runtime_disable(&pdev->dev);
free_master:
spi_master_put(master);
@@ -904,12 +917,7 @@ static int ti_qspi_remove(struct platform_device *pdev)
pm_runtime_put_sync(&pdev->dev);
pm_runtime_disable(&pdev->dev);
- if (qspi->rx_bb_addr)
- dma_free_coherent(qspi->dev, QSPI_DMA_BUFFER_SIZE,
- qspi->rx_bb_addr,
- qspi->rx_bb_dma_addr);
- if (qspi->rx_chan)
- dma_release_channel(qspi->rx_chan);
+ ti_qspi_dma_cleanup(qspi);
return 0;
}
diff --git a/drivers/spi/spi-zynqmp-gqspi.c b/drivers/spi/spi-zynqmp-gqspi.c
index c8fa6ee..1dd2af9 100644
--- a/drivers/spi/spi-zynqmp-gqspi.c
+++ b/drivers/spi/spi-zynqmp-gqspi.c
@@ -157,6 +157,7 @@ enum mode_type {GQSPI_MODE_IO, GQSPI_MODE_DMA};
* @data_completion: completion structure
*/
struct zynqmp_qspi {
+ struct spi_controller *ctlr;
void __iomem *regs;
struct clk *refclk;
struct clk *pclk;
@@ -173,6 +174,7 @@ struct zynqmp_qspi {
u32 genfifoentry;
enum mode_type mode;
struct completion data_completion;
+ struct mutex op_lock;
};
/**
@@ -486,24 +488,10 @@ static int zynqmp_qspi_setup_op(struct spi_device *qspi)
{
struct spi_controller *ctlr = qspi->master;
struct zynqmp_qspi *xqspi = spi_controller_get_devdata(ctlr);
- struct device *dev = &ctlr->dev;
- int ret;
if (ctlr->busy)
return -EBUSY;
- ret = clk_enable(xqspi->refclk);
- if (ret) {
- dev_err(dev, "Cannot enable device clock.\n");
- return ret;
- }
-
- ret = clk_enable(xqspi->pclk);
- if (ret) {
- dev_err(dev, "Cannot enable APB clock.\n");
- clk_disable(xqspi->refclk);
- return ret;
- }
zynqmp_gqspi_write(xqspi, GQSPI_EN_OFST, GQSPI_EN_MASK);
return 0;
@@ -520,7 +508,7 @@ static void zynqmp_qspi_filltxfifo(struct zynqmp_qspi *xqspi, int size)
{
u32 count = 0, intermediate;
- while ((xqspi->bytes_to_transfer > 0) && (count < size)) {
+ while ((xqspi->bytes_to_transfer > 0) && (count < size) && (xqspi->txbuf)) {
memcpy(&intermediate, xqspi->txbuf, 4);
zynqmp_gqspi_write(xqspi, GQSPI_TXD_OFST, intermediate);
@@ -579,7 +567,7 @@ static void zynqmp_qspi_fillgenfifo(struct zynqmp_qspi *xqspi, u8 nbits,
genfifoentry |= GQSPI_GENFIFO_DATA_XFER;
genfifoentry |= GQSPI_GENFIFO_TX;
transfer_len = xqspi->bytes_to_transfer;
- } else {
+ } else if (xqspi->rxbuf) {
genfifoentry &= ~GQSPI_GENFIFO_TX;
genfifoentry |= GQSPI_GENFIFO_DATA_XFER;
genfifoentry |= GQSPI_GENFIFO_RX;
@@ -587,6 +575,11 @@ static void zynqmp_qspi_fillgenfifo(struct zynqmp_qspi *xqspi, u8 nbits,
transfer_len = xqspi->dma_rx_bytes;
else
transfer_len = xqspi->bytes_to_receive;
+ } else {
+ /* Sending dummy cycles here */
+ genfifoentry &= ~(GQSPI_GENFIFO_TX | GQSPI_GENFIFO_RX);
+ genfifoentry |= GQSPI_GENFIFO_DATA_XFER;
+ transfer_len = xqspi->bytes_to_transfer;
}
genfifoentry |= zynqmp_qspi_selectspimode(xqspi, nbits);
xqspi->genfifoentry = genfifoentry;
@@ -738,7 +731,7 @@ static irqreturn_t zynqmp_qspi_irq(int irq, void *dev_id)
* zynqmp_qspi_setuprxdma - This function sets up the RX DMA operation
* @xqspi: xqspi is a pointer to the GQSPI instance.
*/
-static void zynqmp_qspi_setuprxdma(struct zynqmp_qspi *xqspi)
+static int zynqmp_qspi_setuprxdma(struct zynqmp_qspi *xqspi)
{
u32 rx_bytes, rx_rem, config_reg;
dma_addr_t addr;
@@ -752,7 +745,7 @@ static void zynqmp_qspi_setuprxdma(struct zynqmp_qspi *xqspi)
zynqmp_gqspi_write(xqspi, GQSPI_CONFIG_OFST, config_reg);
xqspi->mode = GQSPI_MODE_IO;
xqspi->dma_rx_bytes = 0;
- return;
+ return 0;
}
rx_rem = xqspi->bytes_to_receive % 4;
@@ -760,8 +753,10 @@ static void zynqmp_qspi_setuprxdma(struct zynqmp_qspi *xqspi)
addr = dma_map_single(xqspi->dev, (void *)xqspi->rxbuf,
rx_bytes, DMA_FROM_DEVICE);
- if (dma_mapping_error(xqspi->dev, addr))
+ if (dma_mapping_error(xqspi->dev, addr)) {
dev_err(xqspi->dev, "ERR:rxdma:memory not mapped\n");
+ return -ENOMEM;
+ }
xqspi->dma_rx_bytes = rx_bytes;
xqspi->dma_addr = addr;
@@ -782,6 +777,8 @@ static void zynqmp_qspi_setuprxdma(struct zynqmp_qspi *xqspi)
/* Write the number of bytes to transfer */
zynqmp_gqspi_write(xqspi, GQSPI_QSPIDMA_DST_SIZE_OFST, rx_bytes);
+
+ return 0;
}
/**
@@ -818,11 +815,17 @@ static void zynqmp_qspi_write_op(struct zynqmp_qspi *xqspi, u8 tx_nbits,
* @genfifoentry: genfifoentry is pointer to the variable in which
* GENFIFO mask is returned to calling function
*/
-static void zynqmp_qspi_read_op(struct zynqmp_qspi *xqspi, u8 rx_nbits,
+static int zynqmp_qspi_read_op(struct zynqmp_qspi *xqspi, u8 rx_nbits,
u32 genfifoentry)
{
+ int ret;
+
+ ret = zynqmp_qspi_setuprxdma(xqspi);
+ if (ret)
+ return ret;
zynqmp_qspi_fillgenfifo(xqspi, rx_nbits, genfifoentry);
- zynqmp_qspi_setuprxdma(xqspi);
+
+ return 0;
}
/**
@@ -835,10 +838,13 @@ static void zynqmp_qspi_read_op(struct zynqmp_qspi *xqspi, u8 rx_nbits,
*/
static int __maybe_unused zynqmp_qspi_suspend(struct device *dev)
{
- struct spi_controller *ctlr = dev_get_drvdata(dev);
- struct zynqmp_qspi *xqspi = spi_controller_get_devdata(ctlr);
+ struct zynqmp_qspi *xqspi = dev_get_drvdata(dev);
+ struct spi_controller *ctlr = xqspi->ctlr;
+ int ret;
- spi_controller_suspend(ctlr);
+ ret = spi_controller_suspend(ctlr);
+ if (ret)
+ return ret;
zynqmp_gqspi_write(xqspi, GQSPI_EN_OFST, 0x0);
@@ -856,27 +862,13 @@ static int __maybe_unused zynqmp_qspi_suspend(struct device *dev)
*/
static int __maybe_unused zynqmp_qspi_resume(struct device *dev)
{
- struct spi_controller *ctlr = dev_get_drvdata(dev);
- struct zynqmp_qspi *xqspi = spi_controller_get_devdata(ctlr);
- int ret = 0;
+ struct zynqmp_qspi *xqspi = dev_get_drvdata(dev);
+ struct spi_controller *ctlr = xqspi->ctlr;
- ret = clk_enable(xqspi->pclk);
- if (ret) {
- dev_err(dev, "Cannot enable APB clock.\n");
- return ret;
- }
-
- ret = clk_enable(xqspi->refclk);
- if (ret) {
- dev_err(dev, "Cannot enable device clock.\n");
- clk_disable(xqspi->pclk);
- return ret;
- }
+ zynqmp_gqspi_write(xqspi, GQSPI_EN_OFST, GQSPI_EN_MASK);
spi_controller_resume(ctlr);
- clk_disable(xqspi->refclk);
- clk_disable(xqspi->pclk);
return 0;
}
@@ -890,10 +882,10 @@ static int __maybe_unused zynqmp_qspi_resume(struct device *dev)
*/
static int __maybe_unused zynqmp_runtime_suspend(struct device *dev)
{
- struct zynqmp_qspi *xqspi = (struct zynqmp_qspi *)dev_get_drvdata(dev);
+ struct zynqmp_qspi *xqspi = dev_get_drvdata(dev);
- clk_disable(xqspi->refclk);
- clk_disable(xqspi->pclk);
+ clk_disable_unprepare(xqspi->refclk);
+ clk_disable_unprepare(xqspi->pclk);
return 0;
}
@@ -908,19 +900,19 @@ static int __maybe_unused zynqmp_runtime_suspend(struct device *dev)
*/
static int __maybe_unused zynqmp_runtime_resume(struct device *dev)
{
- struct zynqmp_qspi *xqspi = (struct zynqmp_qspi *)dev_get_drvdata(dev);
+ struct zynqmp_qspi *xqspi = dev_get_drvdata(dev);
int ret;
- ret = clk_enable(xqspi->pclk);
+ ret = clk_prepare_enable(xqspi->pclk);
if (ret) {
dev_err(dev, "Cannot enable APB clock.\n");
return ret;
}
- ret = clk_enable(xqspi->refclk);
+ ret = clk_prepare_enable(xqspi->refclk);
if (ret) {
dev_err(dev, "Cannot enable device clock.\n");
- clk_disable(xqspi->pclk);
+ clk_disable_unprepare(xqspi->pclk);
return ret;
}
@@ -944,25 +936,23 @@ static int zynqmp_qspi_exec_op(struct spi_mem *mem,
struct zynqmp_qspi *xqspi = spi_controller_get_devdata
(mem->spi->master);
int err = 0, i;
- u8 *tmpbuf;
u32 genfifoentry = 0;
+ u16 opcode = op->cmd.opcode;
+ u64 opaddr;
dev_dbg(xqspi->dev, "cmd:%#x mode:%d.%d.%d.%d\n",
op->cmd.opcode, op->cmd.buswidth, op->addr.buswidth,
op->dummy.buswidth, op->data.buswidth);
+ mutex_lock(&xqspi->op_lock);
zynqmp_qspi_config_op(xqspi, mem->spi);
zynqmp_qspi_chipselect(mem->spi, false);
genfifoentry |= xqspi->genfifocs;
genfifoentry |= xqspi->genfifobus;
if (op->cmd.opcode) {
- tmpbuf = kzalloc(op->cmd.nbytes, GFP_KERNEL | GFP_DMA);
- if (!tmpbuf)
- return -ENOMEM;
- tmpbuf[0] = op->cmd.opcode;
reinit_completion(&xqspi->data_completion);
- xqspi->txbuf = tmpbuf;
+ xqspi->txbuf = &opcode;
xqspi->rxbuf = NULL;
xqspi->bytes_to_transfer = op->cmd.nbytes;
xqspi->bytes_to_receive = 0;
@@ -973,16 +963,15 @@ static int zynqmp_qspi_exec_op(struct spi_mem *mem,
zynqmp_gqspi_write(xqspi, GQSPI_IER_OFST,
GQSPI_IER_GENFIFOEMPTY_MASK |
GQSPI_IER_TXNOT_FULL_MASK);
- if (!wait_for_completion_interruptible_timeout
+ if (!wait_for_completion_timeout
(&xqspi->data_completion, msecs_to_jiffies(1000))) {
err = -ETIMEDOUT;
- kfree(tmpbuf);
goto return_err;
}
- kfree(tmpbuf);
}
if (op->addr.nbytes) {
+ xqspi->txbuf = &opaddr;
for (i = 0; i < op->addr.nbytes; i++) {
*(((u8 *)xqspi->txbuf) + i) = op->addr.val >>
(8 * (op->addr.nbytes - i - 1));
@@ -1001,7 +990,7 @@ static int zynqmp_qspi_exec_op(struct spi_mem *mem,
GQSPI_IER_TXEMPTY_MASK |
GQSPI_IER_GENFIFOEMPTY_MASK |
GQSPI_IER_TXNOT_FULL_MASK);
- if (!wait_for_completion_interruptible_timeout
+ if (!wait_for_completion_timeout
(&xqspi->data_completion, msecs_to_jiffies(1000))) {
err = -ETIMEDOUT;
goto return_err;
@@ -1009,32 +998,23 @@ static int zynqmp_qspi_exec_op(struct spi_mem *mem,
}
if (op->dummy.nbytes) {
- tmpbuf = kzalloc(op->dummy.nbytes, GFP_KERNEL | GFP_DMA);
- if (!tmpbuf)
- return -ENOMEM;
- memset(tmpbuf, 0xff, op->dummy.nbytes);
- reinit_completion(&xqspi->data_completion);
- xqspi->txbuf = tmpbuf;
+ xqspi->txbuf = NULL;
xqspi->rxbuf = NULL;
- xqspi->bytes_to_transfer = op->dummy.nbytes;
+ /*
+ * xqspi->bytes_to_transfer here represents the dummy cycles
+ * which need to be sent.
+ */
+ xqspi->bytes_to_transfer = op->dummy.nbytes * 8 / op->dummy.buswidth;
xqspi->bytes_to_receive = 0;
- zynqmp_qspi_write_op(xqspi, op->dummy.buswidth,
+ /*
+ * Using op->data.buswidth instead of op->dummy.buswidth here because
+ * we need to use it to configure the correct SPI mode.
+ */
+ zynqmp_qspi_write_op(xqspi, op->data.buswidth,
genfifoentry);
zynqmp_gqspi_write(xqspi, GQSPI_CONFIG_OFST,
zynqmp_gqspi_read(xqspi, GQSPI_CONFIG_OFST) |
GQSPI_CFG_START_GEN_FIFO_MASK);
- zynqmp_gqspi_write(xqspi, GQSPI_IER_OFST,
- GQSPI_IER_TXEMPTY_MASK |
- GQSPI_IER_GENFIFOEMPTY_MASK |
- GQSPI_IER_TXNOT_FULL_MASK);
- if (!wait_for_completion_interruptible_timeout
- (&xqspi->data_completion, msecs_to_jiffies(1000))) {
- err = -ETIMEDOUT;
- kfree(tmpbuf);
- goto return_err;
- }
-
- kfree(tmpbuf);
}
if (op->data.nbytes) {
@@ -1059,8 +1039,11 @@ static int zynqmp_qspi_exec_op(struct spi_mem *mem,
xqspi->rxbuf = (u8 *)op->data.buf.in;
xqspi->bytes_to_receive = op->data.nbytes;
xqspi->bytes_to_transfer = 0;
- zynqmp_qspi_read_op(xqspi, op->data.buswidth,
+ err = zynqmp_qspi_read_op(xqspi, op->data.buswidth,
genfifoentry);
+ if (err)
+ goto return_err;
+
zynqmp_gqspi_write(xqspi, GQSPI_CONFIG_OFST,
zynqmp_gqspi_read
(xqspi, GQSPI_CONFIG_OFST) |
@@ -1076,7 +1059,7 @@ static int zynqmp_qspi_exec_op(struct spi_mem *mem,
GQSPI_IER_RXEMPTY_MASK);
}
}
- if (!wait_for_completion_interruptible_timeout
+ if (!wait_for_completion_timeout
(&xqspi->data_completion, msecs_to_jiffies(1000)))
err = -ETIMEDOUT;
}
@@ -1084,6 +1067,7 @@ static int zynqmp_qspi_exec_op(struct spi_mem *mem,
return_err:
zynqmp_qspi_chipselect(mem->spi, true);
+ mutex_unlock(&xqspi->op_lock);
return err;
}
@@ -1120,6 +1104,7 @@ static int zynqmp_qspi_probe(struct platform_device *pdev)
xqspi = spi_controller_get_devdata(ctlr);
xqspi->dev = dev;
+ xqspi->ctlr = ctlr;
platform_set_drvdata(pdev, xqspi);
xqspi->regs = devm_platform_ioremap_resource(pdev, 0);
@@ -1135,13 +1120,11 @@ static int zynqmp_qspi_probe(struct platform_device *pdev)
goto remove_master;
}
- init_completion(&xqspi->data_completion);
-
xqspi->refclk = devm_clk_get(&pdev->dev, "ref_clk");
if (IS_ERR(xqspi->refclk)) {
dev_err(dev, "ref_clk clock not found.\n");
ret = PTR_ERR(xqspi->refclk);
- goto clk_dis_pclk;
+ goto remove_master;
}
ret = clk_prepare_enable(xqspi->pclk);
@@ -1156,6 +1139,10 @@ static int zynqmp_qspi_probe(struct platform_device *pdev)
goto clk_dis_pclk;
}
+ init_completion(&xqspi->data_completion);
+
+ mutex_init(&xqspi->op_lock);
+
pm_runtime_use_autosuspend(&pdev->dev);
pm_runtime_set_autosuspend_delay(&pdev->dev, SPI_AUTOSUSPEND_TIMEOUT);
pm_runtime_set_active(&pdev->dev);
@@ -1178,6 +1165,7 @@ static int zynqmp_qspi_probe(struct platform_device *pdev)
goto clk_dis_all;
}
+ dma_set_mask(&pdev->dev, DMA_BIT_MASK(44));
ctlr->bits_per_word_mask = SPI_BPW_MASK(8);
ctlr->num_chipselect = GQSPI_DEFAULT_NUM_CS;
ctlr->mem_ops = &zynqmp_qspi_mem_ops;
diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
index 4257a2d..a6f1e94 100644
--- a/drivers/spi/spi.c
+++ b/drivers/spi/spi.c
@@ -787,7 +787,7 @@ int spi_register_board_info(struct spi_board_info const *info, unsigned n)
/*-------------------------------------------------------------------------*/
-static void spi_set_cs(struct spi_device *spi, bool enable)
+static void spi_set_cs(struct spi_device *spi, bool enable, bool force)
{
bool enable1 = enable;
@@ -795,7 +795,7 @@ static void spi_set_cs(struct spi_device *spi, bool enable)
* Avoid calling into the driver (or doing delays) if the chip select
* isn't actually changing from the last time this was called.
*/
- if ((spi->controller->last_cs_enable == enable) &&
+ if (!force && (spi->controller->last_cs_enable == enable) &&
(spi->controller->last_cs_mode_high == (spi->mode & SPI_CS_HIGH)))
return;
@@ -814,16 +814,29 @@ static void spi_set_cs(struct spi_device *spi, bool enable)
if (spi->cs_gpiod || gpio_is_valid(spi->cs_gpio)) {
if (!(spi->mode & SPI_NO_CS)) {
- if (spi->cs_gpiod)
- /* polarity handled by gpiolib */
- gpiod_set_value_cansleep(spi->cs_gpiod,
- enable1);
- else
+ if (spi->cs_gpiod) {
+ /*
+ * Historically ACPI has had no means of expressing the GPIO polarity,
+ * so the SPISerialBus() resource defines it on a per-chip basis. In
+ * order to avoid a chain of negations, the GPIO polarity is considered
+ * to be Active High. Even for cases where _DSD() is involved (in the
+ * updated versions of ACPI) the GPIO CS polarity must be defined
+ * Active High to avoid ambiguity. That's why we use enable, which
+ * takes SPI_CS_HIGH into account.
+ */
+ if (has_acpi_companion(&spi->dev))
+ gpiod_set_value_cansleep(spi->cs_gpiod, !enable);
+ else
+ /* Polarity handled by GPIO library */
+ gpiod_set_value_cansleep(spi->cs_gpiod, enable1);
+ } else {
/*
* invert the enable line, as active low is
* default for SPI.
*/
gpio_set_value_cansleep(spi->cs_gpio, !enable);
+ }
}
/* Some SPI masters need both GPIO CS & slave_select */
if ((spi->controller->flags & SPI_MASTER_GPIO_SS) &&
@@ -1243,7 +1256,7 @@ static int spi_transfer_one_message(struct spi_controller *ctlr,
struct spi_statistics *statm = &ctlr->statistics;
struct spi_statistics *stats = &msg->spi->statistics;
- spi_set_cs(msg->spi, true);
+ spi_set_cs(msg->spi, true, false);
SPI_STATISTICS_INCREMENT_FIELD(statm, messages);
SPI_STATISTICS_INCREMENT_FIELD(stats, messages);
@@ -1311,9 +1324,9 @@ static int spi_transfer_one_message(struct spi_controller *ctlr,
&msg->transfers)) {
keep_cs = true;
} else {
- spi_set_cs(msg->spi, false);
+ spi_set_cs(msg->spi, false, false);
_spi_transfer_cs_change_delay(msg, xfer);
- spi_set_cs(msg->spi, true);
+ spi_set_cs(msg->spi, true, false);
}
}
@@ -1322,7 +1335,7 @@ static int spi_transfer_one_message(struct spi_controller *ctlr,
out:
if (ret != 0 || !keep_cs)
- spi_set_cs(msg->spi, false);
+ spi_set_cs(msg->spi, false, false);
if (msg->status == -EINPROGRESS)
msg->status = ret;
@@ -2480,6 +2493,7 @@ struct spi_controller *__devm_spi_alloc_controller(struct device *dev,
ctlr = __spi_alloc_controller(dev, size, slave);
if (ctlr) {
+ ctlr->devm_allocated = true;
*ptr = ctlr;
devres_add(dev, ptr);
} else {
@@ -2826,11 +2840,6 @@ int devm_spi_register_controller(struct device *dev,
}
EXPORT_SYMBOL_GPL(devm_spi_register_controller);
-static int devm_spi_match_controller(struct device *dev, void *res, void *ctlr)
-{
- return *(struct spi_controller **)res == ctlr;
-}
-
static int __unregister(struct device *dev, void *null)
{
spi_unregister_device(to_spi_device(dev));
@@ -2877,8 +2886,7 @@ void spi_unregister_controller(struct spi_controller *ctlr)
/* Release the last reference on the controller if its driver
* has not yet been converted to devm_spi_alloc_master/slave().
*/
- if (!devres_find(ctlr->dev.parent, devm_spi_release_controller,
- devm_spi_match_controller, ctlr))
+ if (!ctlr->devm_allocated)
put_device(&ctlr->dev);
/* free bus id */
@@ -3400,11 +3408,11 @@ int spi_setup(struct spi_device *spi)
*/
status = 0;
- spi_set_cs(spi, false);
+ spi_set_cs(spi, false, true);
pm_runtime_mark_last_busy(spi->controller->dev.parent);
pm_runtime_put_autosuspend(spi->controller->dev.parent);
} else {
- spi_set_cs(spi, false);
+ spi_set_cs(spi, false, true);
}
mutex_unlock(&spi->controller->io_mutex);
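The spi_set_cs() changes above add a force parameter so spi_setup() can drive the chip-select line even when the cached state already matches, since the cache may not reflect the real line level at setup time. A small sketch of that cache-with-override pattern (all names illustrative):
#include <linux/types.h>
struct my_cs_cache {
	bool valid;
	bool last_enable;
};
static void my_set_cs(struct my_cs_cache *cache, bool enable, bool force)
{
	/* Skip redundant updates unless explicitly forced. */
	if (!force && cache->valid && cache->last_enable == enable)
		return;

	/* ... drive the chip-select GPIO / controller register here ... */

	cache->valid = true;
	cache->last_enable = enable;
}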
diff --git a/drivers/staging/comedi/drivers/tests/ni_routes_test.c b/drivers/staging/comedi/drivers/tests/ni_routes_test.c
index eaefaf5..02606e3 100644
--- a/drivers/staging/comedi/drivers/tests/ni_routes_test.c
+++ b/drivers/staging/comedi/drivers/tests/ni_routes_test.c
@@ -217,7 +217,8 @@ void test_ni_assign_device_routes(void)
const u8 *table, *oldtable;
init_pci_6070e();
- ni_assign_device_routes(ni_eseries, pci_6070e, &private.routing_tables);
+ ni_assign_device_routes(ni_eseries, pci_6070e, NULL,
+ &private.routing_tables);
devroutes = private.routing_tables.valid_routes;
table = private.routing_tables.route_values;
@@ -253,7 +254,8 @@ void test_ni_assign_device_routes(void)
olddevroutes = devroutes;
oldtable = table;
init_pci_6220();
- ni_assign_device_routes(ni_mseries, pci_6220, &private.routing_tables);
+ ni_assign_device_routes(ni_mseries, pci_6220, NULL,
+ &private.routing_tables);
devroutes = private.routing_tables.valid_routes;
table = private.routing_tables.route_values;
diff --git a/drivers/staging/emxx_udc/emxx_udc.c b/drivers/staging/emxx_udc/emxx_udc.c
index a30b4f5..3897f8e 100644
--- a/drivers/staging/emxx_udc/emxx_udc.c
+++ b/drivers/staging/emxx_udc/emxx_udc.c
@@ -2062,7 +2062,7 @@ static int _nbu2ss_nuke(struct nbu2ss_udc *udc,
struct nbu2ss_ep *ep,
int status)
{
- struct nbu2ss_req *req;
+ struct nbu2ss_req *req, *n;
/* Endpoint Disable */
_nbu2ss_epn_exit(udc, ep);
@@ -2074,7 +2074,7 @@ static int _nbu2ss_nuke(struct nbu2ss_udc *udc,
return 0;
/* called with irqs blocked */
- list_for_each_entry(req, &ep->queue, queue) {
+ list_for_each_entry_safe(req, n, &ep->queue, queue) {
_nbu2ss_ep_done(ep, req, status);
}
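The emxx_udc fix above is the classic list_for_each_entry_safe() case: the loop body completes (and thereby unlinks and may free) the current request, so iteration must keep a saved next pointer. A generic sketch:
#include <linux/list.h>
#include <linux/slab.h>
struct my_req {
	struct list_head queue;
};
static void my_nuke_queue(struct list_head *queue)
{
	struct my_req *req, *n;

	/* The _safe variant caches the next entry before the body runs. */
	list_for_each_entry_safe(req, n, queue, queue) {
		list_del(&req->queue);	/* would break plain list_for_each_entry */
		kfree(req);
	}
}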
diff --git a/drivers/staging/fwserial/fwserial.c b/drivers/staging/fwserial/fwserial.c
index c368082a..0f4655d 100644
--- a/drivers/staging/fwserial/fwserial.c
+++ b/drivers/staging/fwserial/fwserial.c
@@ -1218,13 +1218,12 @@ static int get_serial_info(struct tty_struct *tty,
struct fwtty_port *port = tty->driver_data;
mutex_lock(&port->port.mutex);
- ss->type = PORT_UNKNOWN;
- ss->line = port->port.tty->index;
- ss->flags = port->port.flags;
- ss->xmit_fifo_size = FWTTY_PORT_TXFIFO_LEN;
+ ss->line = port->index;
ss->baud_base = 400000000;
- ss->close_delay = port->port.close_delay;
+ ss->close_delay = jiffies_to_msecs(port->port.close_delay) / 10;
+ ss->closing_wait = 3000;
mutex_unlock(&port->port.mutex);
+
return 0;
}
@@ -1232,20 +1231,20 @@ static int set_serial_info(struct tty_struct *tty,
struct serial_struct *ss)
{
struct fwtty_port *port = tty->driver_data;
+ unsigned int cdelay;
- if (ss->irq != 0 || ss->port != 0 || ss->custom_divisor != 0 ||
- ss->baud_base != 400000000)
- return -EPERM;
+ cdelay = msecs_to_jiffies(ss->close_delay * 10);
mutex_lock(&port->port.mutex);
if (!capable(CAP_SYS_ADMIN)) {
- if (((ss->flags & ~ASYNC_USR_MASK) !=
+ if (cdelay != port->port.close_delay ||
+ ((ss->flags & ~ASYNC_USR_MASK) !=
(port->port.flags & ~ASYNC_USR_MASK))) {
mutex_unlock(&port->port.mutex);
return -EPERM;
}
}
- port->port.close_delay = ss->close_delay * HZ / 100;
+ port->port.close_delay = cdelay;
mutex_unlock(&port->port.mutex);
return 0;
diff --git a/drivers/staging/greybus/uart.c b/drivers/staging/greybus/uart.c
index 607378b..a520f7f 100644
--- a/drivers/staging/greybus/uart.c
+++ b/drivers/staging/greybus/uart.c
@@ -614,10 +614,12 @@ static int get_serial_info(struct tty_struct *tty,
ss->line = gb_tty->minor;
ss->xmit_fifo_size = 16;
ss->baud_base = 9600;
- ss->close_delay = gb_tty->port.close_delay / 10;
+ ss->close_delay = jiffies_to_msecs(gb_tty->port.close_delay) / 10;
ss->closing_wait =
gb_tty->port.closing_wait == ASYNC_CLOSING_WAIT_NONE ?
- ASYNC_CLOSING_WAIT_NONE : gb_tty->port.closing_wait / 10;
+ ASYNC_CLOSING_WAIT_NONE :
+ jiffies_to_msecs(gb_tty->port.closing_wait) / 10;
+
return 0;
}
@@ -629,17 +631,16 @@ static int set_serial_info(struct tty_struct *tty,
unsigned int close_delay;
int retval = 0;
- close_delay = ss->close_delay * 10;
+ close_delay = msecs_to_jiffies(ss->close_delay * 10);
closing_wait = ss->closing_wait == ASYNC_CLOSING_WAIT_NONE ?
- ASYNC_CLOSING_WAIT_NONE : ss->closing_wait * 10;
+ ASYNC_CLOSING_WAIT_NONE :
+ msecs_to_jiffies(ss->closing_wait * 10);
mutex_lock(&gb_tty->port.mutex);
if (!capable(CAP_SYS_ADMIN)) {
if ((close_delay != gb_tty->port.close_delay) ||
(closing_wait != gb_tty->port.closing_wait))
retval = -EPERM;
- else
- retval = -EOPNOTSUPP;
} else {
gb_tty->port.close_delay = close_delay;
gb_tty->port.closing_wait = closing_wait;
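The fwserial and greybus hunks above fix the same unit mismatch: struct serial_struct carries close_delay/closing_wait in hundredths of a second, while struct tty_port stores jiffies, so both directions must convert through milliseconds. A minimal sketch of the conversion:
#include <linux/jiffies.h>
#include <linux/serial.h>
#include <linux/tty.h>
static void my_get_close_delay(struct tty_port *port, struct serial_struct *ss)
{
	/* jiffies -> hundredths of a second */
	ss->close_delay = jiffies_to_msecs(port->close_delay) / 10;
}
static void my_set_close_delay(struct tty_port *port, const struct serial_struct *ss)
{
	/* hundredths of a second -> jiffies */
	port->close_delay = msecs_to_jiffies(ss->close_delay * 10);
}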
diff --git a/drivers/staging/iio/cdc/ad7746.c b/drivers/staging/iio/cdc/ad7746.c
index dfd71e9..eab534d 100644
--- a/drivers/staging/iio/cdc/ad7746.c
+++ b/drivers/staging/iio/cdc/ad7746.c
@@ -700,7 +700,6 @@ static int ad7746_probe(struct i2c_client *client,
indio_dev->num_channels = ARRAY_SIZE(ad7746_channels);
else
indio_dev->num_channels = ARRAY_SIZE(ad7746_channels) - 2;
- indio_dev->num_channels = ARRAY_SIZE(ad7746_channels);
indio_dev->modes = INDIO_DIRECT_MODE;
if (pdata) {
diff --git a/drivers/staging/media/atomisp/i2c/atomisp-lm3554.c b/drivers/staging/media/atomisp/i2c/atomisp-lm3554.c
index 7ca7378..0ab67b2 100644
--- a/drivers/staging/media/atomisp/i2c/atomisp-lm3554.c
+++ b/drivers/staging/media/atomisp/i2c/atomisp-lm3554.c
@@ -843,8 +843,10 @@ static int lm3554_probe(struct i2c_client *client)
return -ENOMEM;
flash->pdata = lm3554_platform_data_func(client);
- if (IS_ERR(flash->pdata))
- return PTR_ERR(flash->pdata);
+ if (IS_ERR(flash->pdata)) {
+ err = PTR_ERR(flash->pdata);
+ goto fail1;
+ }
v4l2_i2c_subdev_init(&flash->sd, client, &lm3554_ops);
flash->sd.internal_ops = &lm3554_internal_ops;
@@ -856,7 +858,7 @@ static int lm3554_probe(struct i2c_client *client)
ARRAY_SIZE(lm3554_controls));
if (ret) {
dev_err(&client->dev, "error initialize a ctrl_handler.\n");
- goto fail2;
+ goto fail3;
}
for (i = 0; i < ARRAY_SIZE(lm3554_controls); i++)
@@ -865,14 +867,14 @@ static int lm3554_probe(struct i2c_client *client)
if (flash->ctrl_handler.error) {
dev_err(&client->dev, "ctrl_handler error.\n");
- goto fail2;
+ goto fail3;
}
flash->sd.ctrl_handler = &flash->ctrl_handler;
err = media_entity_pads_init(&flash->sd.entity, 0, NULL);
if (err) {
dev_err(&client->dev, "error initialize a media entity.\n");
- goto fail1;
+ goto fail2;
}
flash->sd.entity.function = MEDIA_ENT_F_FLASH;
@@ -884,14 +886,15 @@ static int lm3554_probe(struct i2c_client *client)
err = lm3554_gpio_init(client);
if (err) {
dev_err(&client->dev, "gpio request/direction_output fail");
- goto fail2;
+ goto fail3;
}
return atomisp_register_i2c_module(&flash->sd, NULL, LED_FLASH);
-fail2:
+fail3:
media_entity_cleanup(&flash->sd.entity);
v4l2_ctrl_handler_free(&flash->ctrl_handler);
-fail1:
+fail2:
v4l2_device_unregister_subdev(&flash->sd);
+fail1:
kfree(flash);
return err;
diff --git a/drivers/staging/media/atomisp/pci/atomisp_fops.c b/drivers/staging/media/atomisp/pci/atomisp_fops.c
index 453bb69..f1e6b25 100644
--- a/drivers/staging/media/atomisp/pci/atomisp_fops.c
+++ b/drivers/staging/media/atomisp/pci/atomisp_fops.c
@@ -221,6 +221,9 @@ int atomisp_q_video_buffers_to_css(struct atomisp_sub_device *asd,
unsigned long irqflags;
int err = 0;
+ if (WARN_ON(css_pipe_id >= IA_CSS_PIPE_ID_NUM))
+ return -EINVAL;
+
while (pipe->buffers_in_css < ATOMISP_CSS_Q_DEPTH) {
struct videobuf_buffer *vb;
diff --git a/drivers/staging/media/atomisp/pci/atomisp_ioctl.c b/drivers/staging/media/atomisp/pci/atomisp_ioctl.c
index 2ae50dec..9da8285 100644
--- a/drivers/staging/media/atomisp/pci/atomisp_ioctl.c
+++ b/drivers/staging/media/atomisp/pci/atomisp_ioctl.c
@@ -948,10 +948,8 @@ int atomisp_alloc_css_stat_bufs(struct atomisp_sub_device *asd,
dev_dbg(isp->dev, "allocating %d dis buffers\n", count);
while (count--) {
dis_buf = kzalloc(sizeof(struct atomisp_dis_buf), GFP_KERNEL);
- if (!dis_buf) {
- kfree(s3a_buf);
+ if (!dis_buf)
goto error;
- }
if (atomisp_css_allocate_stat_buffers(
asd, stream_id, NULL, dis_buf, NULL)) {
kfree(dis_buf);
diff --git a/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c b/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c
index f13af23..0168f98 100644
--- a/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c
+++ b/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c
@@ -857,16 +857,17 @@ static void free_private_pages(struct hmm_buffer_object *bo,
kfree(bo->page_obj);
}
-static void free_user_pages(struct hmm_buffer_object *bo)
+static void free_user_pages(struct hmm_buffer_object *bo,
+ unsigned int page_nr)
{
int i;
hmm_mem_stat.usr_size -= bo->pgnr;
if (bo->mem_type == HMM_BO_MEM_TYPE_PFN) {
- unpin_user_pages(bo->pages, bo->pgnr);
+ unpin_user_pages(bo->pages, page_nr);
} else {
- for (i = 0; i < bo->pgnr; i++)
+ for (i = 0; i < page_nr; i++)
put_page(bo->pages[i]);
}
kfree(bo->pages);
@@ -942,6 +943,8 @@ static int alloc_user_pages(struct hmm_buffer_object *bo,
dev_err(atomisp_dev,
"get_user_pages err: bo->pgnr = %d, pgnr actually pinned = %d.\n",
bo->pgnr, page_nr);
+ if (page_nr < 0)
+ page_nr = 0;
goto out_of_mem;
}
@@ -954,7 +957,7 @@ static int alloc_user_pages(struct hmm_buffer_object *bo,
out_of_mem:
- free_user_pages(bo);
+ free_user_pages(bo, page_nr);
return -ENOMEM;
}
@@ -1037,7 +1040,7 @@ void hmm_bo_free_pages(struct hmm_buffer_object *bo)
if (bo->type == HMM_BO_PRIVATE)
free_private_pages(bo, &dynamic_pool, &reserved_pool);
else if (bo->type == HMM_BO_USER)
- free_user_pages(bo);
+ free_user_pages(bo, bo->pgnr);
else
dev_err(atomisp_dev, "invalid buffer type.\n");
mutex_unlock(&bo->mutex);
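The hmm_bo fix above frees only the pages that were actually pinned rather than the requested count, which matters whenever a pin call returns short. A generic sketch of the same rule (helper name illustrative):
#include <linux/mm.h>
static int my_pin_user_buffer(unsigned long uaddr, int nr_pages,
			      struct page **pages)
{
	int pinned;

	pinned = pin_user_pages_fast(uaddr, nr_pages, FOLL_WRITE, pages);
	if (pinned == nr_pages)
		return 0;

	/* Release only what was actually pinned, never the requested count. */
	if (pinned > 0)
		unpin_user_pages(pages, pinned);

	return pinned < 0 ? pinned : -ENOMEM;
}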
diff --git a/drivers/staging/media/imx/imx-media-capture.c b/drivers/staging/media/imx/imx-media-capture.c
index c1931eb..b2f2cb3 100644
--- a/drivers/staging/media/imx/imx-media-capture.c
+++ b/drivers/staging/media/imx/imx-media-capture.c
@@ -557,7 +557,7 @@ static int capture_validate_fmt(struct capture_priv *priv)
priv->vdev.fmt.fmt.pix.height != f.fmt.pix.height ||
priv->vdev.cc->cs != cc->cs ||
priv->vdev.compose.width != compose.width ||
- priv->vdev.compose.height != compose.height) ? -EINVAL : 0;
+ priv->vdev.compose.height != compose.height) ? -EPIPE : 0;
}
static int capture_start_streaming(struct vb2_queue *vq, unsigned int count)
diff --git a/drivers/staging/media/ipu3/ipu3-v4l2.c b/drivers/staging/media/ipu3/ipu3-v4l2.c
index 4dc8d91..e017961 100644
--- a/drivers/staging/media/ipu3/ipu3-v4l2.c
+++ b/drivers/staging/media/ipu3/ipu3-v4l2.c
@@ -686,6 +686,7 @@ static int imgu_fmt(struct imgu_device *imgu, unsigned int pipe, int node,
dev_dbg(dev, "IPU3 pipe %u pipe_id = %u", pipe, css_pipe->pipe_id);
+ css_q = imgu_node_to_queue(node);
for (i = 0; i < IPU3_CSS_QUEUES; i++) {
unsigned int inode = imgu_map_node(imgu, i);
@@ -693,6 +694,18 @@ static int imgu_fmt(struct imgu_device *imgu, unsigned int pipe, int node,
if (inode == IMGU_NODE_STAT_3A || inode == IMGU_NODE_PARAMS)
continue;
+ /* CSS expects some format on OUT queue */
+ if (i != IPU3_CSS_QUEUE_OUT &&
+ !imgu_pipe->nodes[inode].enabled) {
+ fmts[i] = NULL;
+ continue;
+ }
+
+ if (i == css_q) {
+ fmts[i] = &f->fmt.pix_mp;
+ continue;
+ }
+
if (try) {
fmts[i] = kmemdup(&imgu_pipe->nodes[inode].vdev_fmt.fmt.pix_mp,
sizeof(struct v4l2_pix_format_mplane),
@@ -705,10 +718,6 @@ static int imgu_fmt(struct imgu_device *imgu, unsigned int pipe, int node,
fmts[i] = &imgu_pipe->nodes[inode].vdev_fmt.fmt.pix_mp;
}
- /* CSS expects some format on OUT queue */
- if (i != IPU3_CSS_QUEUE_OUT &&
- !imgu_pipe->nodes[inode].enabled)
- fmts[i] = NULL;
}
if (!try) {
@@ -725,16 +734,10 @@ static int imgu_fmt(struct imgu_device *imgu, unsigned int pipe, int node,
rects[IPU3_CSS_RECT_GDC]->height = pad_fmt.height;
}
- /*
- * imgu doesn't set the node to the value given by user
- * before we return success from this function, so set it here.
- */
- css_q = imgu_node_to_queue(node);
if (!fmts[css_q]) {
ret = -EINVAL;
goto out;
}
- *fmts[css_q] = f->fmt.pix_mp;
if (try)
ret = imgu_css_fmt_try(&imgu->css, fmts, rects, pipe);
@@ -745,15 +748,18 @@ static int imgu_fmt(struct imgu_device *imgu, unsigned int pipe, int node,
if (ret < 0)
goto out;
- if (try)
- f->fmt.pix_mp = *fmts[css_q];
- else
- f->fmt = imgu_pipe->nodes[node].vdev_fmt.fmt;
+ /*
+ * imgu doesn't set the node to the value given by user
+ * before we return success from this function, so set it here.
+ */
+ if (!try)
+ imgu_pipe->nodes[node].vdev_fmt.fmt.pix_mp = f->fmt.pix_mp;
out:
if (try) {
for (i = 0; i < IPU3_CSS_QUEUES; i++)
- kfree(fmts[i]);
+ if (i != css_q)
+ kfree(fmts[i]);
}
return ret;
diff --git a/drivers/staging/media/omap4iss/iss.c b/drivers/staging/media/omap4iss/iss.c
index e06ea7e..3dac35f 100644
--- a/drivers/staging/media/omap4iss/iss.c
+++ b/drivers/staging/media/omap4iss/iss.c
@@ -1236,8 +1236,10 @@ static int iss_probe(struct platform_device *pdev)
if (ret < 0)
goto error;
- if (!omap4iss_get(iss))
+ if (!omap4iss_get(iss)) {
+ ret = -EINVAL;
goto error;
+ }
ret = iss_reset(iss);
if (ret < 0)
diff --git a/drivers/staging/media/rkisp1/rkisp1-resizer.c b/drivers/staging/media/rkisp1/rkisp1-resizer.c
index 1687d82..4dcc342 100644
--- a/drivers/staging/media/rkisp1/rkisp1-resizer.c
+++ b/drivers/staging/media/rkisp1/rkisp1-resizer.c
@@ -520,14 +520,15 @@ static void rkisp1_rsz_set_src_fmt(struct rkisp1_resizer *rsz,
struct v4l2_mbus_framefmt *format,
unsigned int which)
{
- const struct rkisp1_isp_mbus_info *mbus_info;
- struct v4l2_mbus_framefmt *src_fmt;
+ const struct rkisp1_isp_mbus_info *sink_mbus_info;
+ struct v4l2_mbus_framefmt *src_fmt, *sink_fmt;
+ sink_fmt = rkisp1_rsz_get_pad_fmt(rsz, cfg, RKISP1_RSZ_PAD_SINK, which);
src_fmt = rkisp1_rsz_get_pad_fmt(rsz, cfg, RKISP1_RSZ_PAD_SRC, which);
- mbus_info = rkisp1_isp_mbus_info_get(src_fmt->code);
+ sink_mbus_info = rkisp1_isp_mbus_info_get(sink_fmt->code);
/* for YUV formats, userspace can change the mbus code on the src pad if it is supported */
- if (mbus_info->pixel_enc == V4L2_PIXEL_ENC_YUV &&
+ if (sink_mbus_info->pixel_enc == V4L2_PIXEL_ENC_YUV &&
rkisp1_rsz_get_yuv_mbus_info(format->code))
src_fmt->code = format->code;
diff --git a/drivers/staging/media/rkvdec/rkvdec.c b/drivers/staging/media/rkvdec/rkvdec.c
index d25c4a3..1263991 100644
--- a/drivers/staging/media/rkvdec/rkvdec.c
+++ b/drivers/staging/media/rkvdec/rkvdec.c
@@ -1107,7 +1107,7 @@ static struct platform_driver rkvdec_driver = {
.remove = rkvdec_remove,
.driver = {
.name = "rkvdec",
- .of_match_table = of_match_ptr(of_rkvdec_match),
+ .of_match_table = of_rkvdec_match,
.pm = &rkvdec_pm_ops,
},
};
diff --git a/drivers/staging/media/sunxi/cedrus/cedrus_regs.h b/drivers/staging/media/sunxi/cedrus/cedrus_regs.h
index 66b152f..426387c 100644
--- a/drivers/staging/media/sunxi/cedrus/cedrus_regs.h
+++ b/drivers/staging/media/sunxi/cedrus/cedrus_regs.h
@@ -443,16 +443,17 @@
#define VE_DEC_H265_STATUS_STCD_BUSY BIT(21)
#define VE_DEC_H265_STATUS_WB_BUSY BIT(20)
#define VE_DEC_H265_STATUS_BS_DMA_BUSY BIT(19)
-#define VE_DEC_H265_STATUS_IQIT_BUSY BIT(18)
+#define VE_DEC_H265_STATUS_IT_BUSY BIT(18)
#define VE_DEC_H265_STATUS_INTER_BUSY BIT(17)
#define VE_DEC_H265_STATUS_MORE_DATA BIT(16)
-#define VE_DEC_H265_STATUS_VLD_BUSY BIT(14)
-#define VE_DEC_H265_STATUS_DEBLOCKING_BUSY BIT(13)
-#define VE_DEC_H265_STATUS_DEBLOCKING_DRAM_BUSY BIT(12)
-#define VE_DEC_H265_STATUS_INTRA_BUSY BIT(11)
-#define VE_DEC_H265_STATUS_SAO_BUSY BIT(10)
-#define VE_DEC_H265_STATUS_MVP_BUSY BIT(9)
-#define VE_DEC_H265_STATUS_SWDEC_BUSY BIT(8)
+#define VE_DEC_H265_STATUS_DBLK_BUSY BIT(15)
+#define VE_DEC_H265_STATUS_IREC_BUSY BIT(14)
+#define VE_DEC_H265_STATUS_INTRA_BUSY BIT(13)
+#define VE_DEC_H265_STATUS_MCRI_BUSY BIT(12)
+#define VE_DEC_H265_STATUS_IQIT_BUSY BIT(11)
+#define VE_DEC_H265_STATUS_MVP_BUSY BIT(10)
+#define VE_DEC_H265_STATUS_IS_BUSY BIT(9)
+#define VE_DEC_H265_STATUS_VLD_BUSY BIT(8)
#define VE_DEC_H265_STATUS_OVER_TIME BIT(3)
#define VE_DEC_H265_STATUS_VLD_DATA_REQ BIT(2)
#define VE_DEC_H265_STATUS_ERROR BIT(1)
diff --git a/drivers/staging/rtl8192u/r8192U_core.c b/drivers/staging/rtl8192u/r8192U_core.c
index 27dc181..03d31e5 100644
--- a/drivers/staging/rtl8192u/r8192U_core.c
+++ b/drivers/staging/rtl8192u/r8192U_core.c
@@ -3208,7 +3208,7 @@ static void rtl819x_update_rxcounts(struct r8192_priv *priv, u32 *TotalRxBcnNum,
u32 *TotalRxDataNum)
{
u16 SlotIndex;
- u8 i;
+ u16 i;
*TotalRxBcnNum = 0;
*TotalRxDataNum = 0;
diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
index 518fac4..a237f1c 100644
--- a/drivers/target/iscsi/iscsi_target.c
+++ b/drivers/target/iscsi/iscsi_target.c
@@ -1166,6 +1166,7 @@ int iscsit_setup_scsi_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
target_get_sess_cmd(&cmd->se_cmd, true);
+ cmd->se_cmd.tag = (__force u32)cmd->init_task_tag;
cmd->sense_reason = target_cmd_init_cdb(&cmd->se_cmd, hdr->cdb);
if (cmd->sense_reason) {
if (cmd->sense_reason == TCM_OUT_OF_RESOURCES) {
@@ -1180,8 +1181,6 @@ int iscsit_setup_scsi_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
if (cmd->sense_reason)
goto attach_cmd;
- /* only used for printks or comparing with ->ref_task_tag */
- cmd->se_cmd.tag = (__force u32)cmd->init_task_tag;
cmd->sense_reason = target_cmd_parse_cdb(&cmd->se_cmd);
if (cmd->sense_reason)
goto attach_cmd;
diff --git a/drivers/target/target_core_pscsi.c b/drivers/target/target_core_pscsi.c
index 4e37fa9..f10f0aa 100644
--- a/drivers/target/target_core_pscsi.c
+++ b/drivers/target/target_core_pscsi.c
@@ -620,8 +620,9 @@ static void pscsi_complete_cmd(struct se_cmd *cmd, u8 scsi_status,
unsigned char *buf;
buf = transport_kmap_data_sg(cmd);
- if (!buf)
+ if (!buf) {
; /* XXX: TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE */
+ }
if (cdb[0] == MODE_SENSE_10) {
if (!(buf[3] & 0x80))
@@ -939,6 +940,14 @@ pscsi_map_sg(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
return 0;
fail:
+ if (bio)
+ bio_put(bio);
+ while (req->bio) {
+ bio = req->bio;
+ req->bio = bio->bi_next;
+ bio_put(bio);
+ }
+ req->biotail = NULL;
return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
}
diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
index 7d5814a..c6950f1 100644
--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -1391,7 +1391,7 @@ static int tcmu_run_tmr_queue(struct tcmu_dev *udev)
return 1;
}
-static unsigned int tcmu_handle_completions(struct tcmu_dev *udev)
+static bool tcmu_handle_completions(struct tcmu_dev *udev)
{
struct tcmu_mailbox *mb;
struct tcmu_cmd *cmd;
@@ -1434,7 +1434,7 @@ static unsigned int tcmu_handle_completions(struct tcmu_dev *udev)
pr_err("cmd_id %u not found, ring is broken\n",
entry->hdr.cmd_id);
set_bit(TCMU_DEV_BIT_BROKEN, &udev->flags);
- break;
+ return false;
}
tcmu_handle_completion(cmd, entry);
diff --git a/drivers/tee/amdtee/amdtee_private.h b/drivers/tee/amdtee/amdtee_private.h
index 337c8d8..6d0f706 100644
--- a/drivers/tee/amdtee/amdtee_private.h
+++ b/drivers/tee/amdtee/amdtee_private.h
@@ -21,6 +21,7 @@
#define TEEC_SUCCESS 0x00000000
#define TEEC_ERROR_GENERIC 0xFFFF0000
#define TEEC_ERROR_BAD_PARAMETERS 0xFFFF0006
+#define TEEC_ERROR_OUT_OF_MEMORY 0xFFFF000C
#define TEEC_ERROR_COMMUNICATION 0xFFFF000E
#define TEEC_ORIGIN_COMMS 0x00000002
@@ -93,6 +94,18 @@ struct amdtee_shm_data {
u32 buf_id;
};
+/**
+ * struct amdtee_ta_data - Keeps track of all TAs loaded in AMD Secure
+ * Processor
+ * @ta_handle: Handle to TA loaded in TEE
+ * @refcount: Reference count for the loaded TA
+ */
+struct amdtee_ta_data {
+ struct list_head list_node;
+ u32 ta_handle;
+ u32 refcount;
+};
+
#define LOWER_TWO_BYTE_MASK 0x0000FFFF
/**
diff --git a/drivers/tee/amdtee/call.c b/drivers/tee/amdtee/call.c
index 096dd4d..07f36ac 100644
--- a/drivers/tee/amdtee/call.c
+++ b/drivers/tee/amdtee/call.c
@@ -121,15 +121,69 @@ static int amd_params_to_tee_params(struct tee_param *tee, u32 count,
return ret;
}
+static DEFINE_MUTEX(ta_refcount_mutex);
+static struct list_head ta_list = LIST_HEAD_INIT(ta_list);
+
+static u32 get_ta_refcount(u32 ta_handle)
+{
+ struct amdtee_ta_data *ta_data;
+ u32 count = 0;
+
+ /* Caller must hold a mutex */
+ list_for_each_entry(ta_data, &ta_list, list_node)
+ if (ta_data->ta_handle == ta_handle)
+ return ++ta_data->refcount;
+
+ ta_data = kzalloc(sizeof(*ta_data), GFP_KERNEL);
+ if (ta_data) {
+ ta_data->ta_handle = ta_handle;
+ ta_data->refcount = 1;
+ count = ta_data->refcount;
+ list_add(&ta_data->list_node, &ta_list);
+ }
+
+ return count;
+}
+
+static u32 put_ta_refcount(u32 ta_handle)
+{
+ struct amdtee_ta_data *ta_data;
+ u32 count = 0;
+
+ /* Caller must hold a mutex */
+ list_for_each_entry(ta_data, &ta_list, list_node)
+ if (ta_data->ta_handle == ta_handle) {
+ count = --ta_data->refcount;
+ if (count == 0) {
+ list_del(&ta_data->list_node);
+ kfree(ta_data);
+ break;
+ }
+ }
+
+ return count;
+}
+
int handle_unload_ta(u32 ta_handle)
{
struct tee_cmd_unload_ta cmd = {0};
- u32 status;
+ u32 status, count;
int ret;
if (!ta_handle)
return -EINVAL;
+ mutex_lock(&ta_refcount_mutex);
+
+ count = put_ta_refcount(ta_handle);
+
+ if (count) {
+ pr_debug("unload ta: not unloading %u count %u\n",
+ ta_handle, count);
+ ret = -EBUSY;
+ goto unlock;
+ }
+
cmd.ta_handle = ta_handle;
ret = psp_tee_process_cmd(TEE_CMD_ID_UNLOAD_TA, (void *)&cmd,
@@ -137,8 +191,12 @@ int handle_unload_ta(u32 ta_handle)
if (!ret && status != 0) {
pr_err("unload ta: status = 0x%x\n", status);
ret = -EBUSY;
+ } else {
+ pr_debug("unloaded ta handle %u\n", ta_handle);
}
+unlock:
+ mutex_unlock(&ta_refcount_mutex);
return ret;
}
@@ -340,7 +398,8 @@ int handle_open_session(struct tee_ioctl_open_session_arg *arg, u32 *info,
int handle_load_ta(void *data, u32 size, struct tee_ioctl_open_session_arg *arg)
{
- struct tee_cmd_load_ta cmd = {0};
+ struct tee_cmd_unload_ta unload_cmd = {};
+ struct tee_cmd_load_ta load_cmd = {};
phys_addr_t blob;
int ret;
@@ -353,21 +412,36 @@ int handle_load_ta(void *data, u32 size, struct tee_ioctl_open_session_arg *arg)
return -EINVAL;
}
- cmd.hi_addr = upper_32_bits(blob);
- cmd.low_addr = lower_32_bits(blob);
- cmd.size = size;
+ load_cmd.hi_addr = upper_32_bits(blob);
+ load_cmd.low_addr = lower_32_bits(blob);
+ load_cmd.size = size;
- ret = psp_tee_process_cmd(TEE_CMD_ID_LOAD_TA, (void *)&cmd,
- sizeof(cmd), &arg->ret);
+ mutex_lock(&ta_refcount_mutex);
+
+ ret = psp_tee_process_cmd(TEE_CMD_ID_LOAD_TA, (void *)&load_cmd,
+ sizeof(load_cmd), &arg->ret);
if (ret) {
arg->ret_origin = TEEC_ORIGIN_COMMS;
arg->ret = TEEC_ERROR_COMMUNICATION;
- } else {
- set_session_id(cmd.ta_handle, 0, &arg->session);
+ } else if (arg->ret == TEEC_SUCCESS) {
+ ret = get_ta_refcount(load_cmd.ta_handle);
+ if (!ret) {
+ arg->ret_origin = TEEC_ORIGIN_COMMS;
+ arg->ret = TEEC_ERROR_OUT_OF_MEMORY;
+
+ /* Unload the TA on error */
+ unload_cmd.ta_handle = load_cmd.ta_handle;
+ psp_tee_process_cmd(TEE_CMD_ID_UNLOAD_TA,
+ (void *)&unload_cmd,
+ sizeof(unload_cmd), &ret);
+ } else {
+ set_session_id(load_cmd.ta_handle, 0, &arg->session);
+ }
}
+ mutex_unlock(&ta_refcount_mutex);
pr_debug("load TA: TA handle = 0x%x, RO = 0x%x, ret = 0x%x\n",
- cmd.ta_handle, arg->ret_origin, arg->ret);
+ load_cmd.ta_handle, arg->ret_origin, arg->ret);
return 0;
}
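The amdtee changes above make TA unloading reference counted: every open session takes a reference under a single mutex, and the firmware unload command is only sent when the last reference is dropped. A condensed sketch of that caller pattern; my_ta_put() and my_send_unload_cmd() are hypothetical stand-ins for the helpers introduced by the patch:
#include <linux/errno.h>
#include <linux/mutex.h>
#include <linux/types.h>
static DEFINE_MUTEX(my_ta_lock);
/* Hypothetical: drops one reference, returns the remaining count. */
static u32 my_ta_put(u32 handle);
/* Hypothetical: asks the secure processor to unload the TA. */
static int my_send_unload_cmd(u32 handle);
static int my_unload_ta(u32 handle)
{
	int ret;

	mutex_lock(&my_ta_lock);
	if (my_ta_put(handle) == 0)
		ret = my_send_unload_cmd(handle);	/* last user gone */
	else
		ret = -EBUSY;	/* still in use elsewhere, keep it loaded */
	mutex_unlock(&my_ta_lock);

	return ret;
}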
diff --git a/drivers/tee/amdtee/core.c b/drivers/tee/amdtee/core.c
index 8a6a8f3..da6b88e 100644
--- a/drivers/tee/amdtee/core.c
+++ b/drivers/tee/amdtee/core.c
@@ -59,10 +59,9 @@ static void release_session(struct amdtee_session *sess)
continue;
handle_close_session(sess->ta_handle, sess->session_info[i]);
+ handle_unload_ta(sess->ta_handle);
}
- /* Unload Trusted Application once all sessions are closed */
- handle_unload_ta(sess->ta_handle);
kfree(sess);
}
@@ -224,8 +223,6 @@ static void destroy_session(struct kref *ref)
struct amdtee_session *sess = container_of(ref, struct amdtee_session,
refcount);
- /* Unload the TA from TEE */
- handle_unload_ta(sess->ta_handle);
mutex_lock(&session_list_mutex);
list_del(&sess->list_node);
mutex_unlock(&session_list_mutex);
@@ -238,7 +235,7 @@ int amdtee_open_session(struct tee_context *ctx,
{
struct amdtee_context_data *ctxdata = ctx->data;
struct amdtee_session *sess = NULL;
- u32 session_info;
+ u32 session_info, ta_handle;
size_t ta_size;
int rc, i;
void *ta;
@@ -259,11 +256,14 @@ int amdtee_open_session(struct tee_context *ctx,
if (arg->ret != TEEC_SUCCESS)
goto out;
+ ta_handle = get_ta_handle(arg->session);
+
mutex_lock(&session_list_mutex);
sess = alloc_session(ctxdata, arg->session);
mutex_unlock(&session_list_mutex);
if (!sess) {
+ handle_unload_ta(ta_handle);
rc = -ENOMEM;
goto out;
}
@@ -277,6 +277,7 @@ int amdtee_open_session(struct tee_context *ctx,
if (i >= TEE_NUM_SESSIONS) {
pr_err("reached maximum session count %d\n", TEE_NUM_SESSIONS);
+ handle_unload_ta(ta_handle);
kref_put(&sess->refcount, destroy_session);
rc = -ENOMEM;
goto out;
@@ -289,12 +290,13 @@ int amdtee_open_session(struct tee_context *ctx,
spin_lock(&sess->lock);
clear_bit(i, sess->sess_mask);
spin_unlock(&sess->lock);
+ handle_unload_ta(ta_handle);
kref_put(&sess->refcount, destroy_session);
goto out;
}
sess->session_info[i] = session_info;
- set_session_id(sess->ta_handle, i, &arg->session);
+ set_session_id(ta_handle, i, &arg->session);
out:
free_pages((u64)ta, get_order(ta_size));
return rc;
@@ -329,6 +331,7 @@ int amdtee_close_session(struct tee_context *ctx, u32 session)
/* Close the session */
handle_close_session(ta_handle, session_info);
+ handle_unload_ta(ta_handle);
kref_put(&sess->refcount, destroy_session);
diff --git a/drivers/tee/optee/core.c b/drivers/tee/optee/core.c
index cf4718c..63542c1 100644
--- a/drivers/tee/optee/core.c
+++ b/drivers/tee/optee/core.c
@@ -79,16 +79,6 @@ int optee_from_msg_param(struct tee_param *params, size_t num_params,
return rc;
p->u.memref.shm_offs = mp->u.tmem.buf_ptr - pa;
p->u.memref.shm = shm;
-
- /* Check that the memref is covered by the shm object */
- if (p->u.memref.size) {
- size_t o = p->u.memref.shm_offs +
- p->u.memref.size - 1;
-
- rc = tee_shm_get_pa(shm, o, NULL);
- if (rc)
- return rc;
- }
break;
case OPTEE_MSG_ATTR_TYPE_RMEM_INPUT:
case OPTEE_MSG_ATTR_TYPE_RMEM_OUTPUT:
diff --git a/drivers/thermal/cpufreq_cooling.c b/drivers/thermal/cpufreq_cooling.c
index ddc166e..3f6a69cc 100644
--- a/drivers/thermal/cpufreq_cooling.c
+++ b/drivers/thermal/cpufreq_cooling.c
@@ -123,7 +123,7 @@ static u32 cpu_power_to_freq(struct cpufreq_cooling_device *cpufreq_cdev,
{
int i;
- for (i = cpufreq_cdev->max_level; i >= 0; i--) {
+ for (i = cpufreq_cdev->max_level; i > 0; i--) {
if (power >= cpufreq_cdev->em->table[i].power)
break;
}
diff --git a/drivers/thermal/gov_fair_share.c b/drivers/thermal/gov_fair_share.c
index aaa0718..645432c 100644
--- a/drivers/thermal/gov_fair_share.c
+++ b/drivers/thermal/gov_fair_share.c
@@ -82,6 +82,8 @@ static int fair_share_throttle(struct thermal_zone_device *tz, int trip)
int total_instance = 0;
int cur_trip_level = get_trip_level(tz);
+ mutex_lock(&tz->lock);
+
list_for_each_entry(instance, &tz->thermal_instances, tz_node) {
if (instance->trip != trip)
continue;
@@ -110,6 +112,8 @@ static int fair_share_throttle(struct thermal_zone_device *tz, int trip)
mutex_unlock(&instance->cdev->lock);
thermal_cdev_update(cdev);
}
+
+ mutex_unlock(&tz->lock);
return 0;
}
diff --git a/drivers/thermal/intel/int340x_thermal/int340x_thermal_zone.c b/drivers/thermal/intel/int340x_thermal/int340x_thermal_zone.c
index 6e479de..a337600 100644
--- a/drivers/thermal/intel/int340x_thermal/int340x_thermal_zone.c
+++ b/drivers/thermal/intel/int340x_thermal/int340x_thermal_zone.c
@@ -231,6 +231,8 @@ struct int34x_thermal_zone *int340x_thermal_zone_add(struct acpi_device *adev,
if (ACPI_FAILURE(status))
trip_cnt = 0;
else {
+ int i;
+
int34x_thermal_zone->aux_trips =
kcalloc(trip_cnt,
sizeof(*int34x_thermal_zone->aux_trips),
@@ -241,6 +243,8 @@ struct int34x_thermal_zone *int340x_thermal_zone_add(struct acpi_device *adev,
}
trip_mask = BIT(trip_cnt) - 1;
int34x_thermal_zone->aux_trip_nr = trip_cnt;
+ for (i = 0; i < trip_cnt; ++i)
+ int34x_thermal_zone->aux_trips[i] = THERMAL_TEMP_INVALID;
}
trip_cnt = int340x_thermal_read_trips(int34x_thermal_zone);
diff --git a/drivers/thermal/intel/x86_pkg_temp_thermal.c b/drivers/thermal/intel/x86_pkg_temp_thermal.c
index b81c332..4f5d973 100644
--- a/drivers/thermal/intel/x86_pkg_temp_thermal.c
+++ b/drivers/thermal/intel/x86_pkg_temp_thermal.c
@@ -164,7 +164,7 @@ static int sys_get_trip_temp(struct thermal_zone_device *tzd,
if (thres_reg_value)
*temp = zonedev->tj_max - thres_reg_value * 1000;
else
- *temp = 0;
+ *temp = THERMAL_TEMP_INVALID;
pr_debug("sys_get_trip_temp %d\n", *temp);
return 0;
diff --git a/drivers/thermal/qcom/tsens.c b/drivers/thermal/qcom/tsens.c
index d8ce3a6..3c4c051 100644
--- a/drivers/thermal/qcom/tsens.c
+++ b/drivers/thermal/qcom/tsens.c
@@ -755,8 +755,10 @@ int __init init_common(struct tsens_priv *priv)
for (i = VER_MAJOR; i <= VER_STEP; i++) {
priv->rf[i] = devm_regmap_field_alloc(dev, priv->srot_map,
priv->fields[i]);
- if (IS_ERR(priv->rf[i]))
- return PTR_ERR(priv->rf[i]);
+ if (IS_ERR(priv->rf[i])) {
+ ret = PTR_ERR(priv->rf[i]);
+ goto err_put_device;
+ }
}
ret = regmap_field_read(priv->rf[VER_MINOR], &ver_minor);
if (ret)
diff --git a/drivers/thermal/thermal_of.c b/drivers/thermal/thermal_of.c
index 69ef12f..5b76f9a 100644
--- a/drivers/thermal/thermal_of.c
+++ b/drivers/thermal/thermal_of.c
@@ -704,14 +704,17 @@ static int thermal_of_populate_bind_params(struct device_node *np,
count = of_count_phandle_with_args(np, "cooling-device",
"#cooling-cells");
- if (!count) {
+ if (count <= 0) {
pr_err("Add a cooling_device property with at least one device\n");
+ ret = -ENOENT;
goto end;
}
__tcbp = kcalloc(count, sizeof(*__tcbp), GFP_KERNEL);
- if (!__tcbp)
+ if (!__tcbp) {
+ ret = -ENOMEM;
goto end;
+ }
for (i = 0; i < count; i++) {
ret = of_parse_phandle_with_args(np, "cooling-device",
diff --git a/drivers/thunderbolt/dma_port.c b/drivers/thunderbolt/dma_port.c
index 847dd07..de21995 100644
--- a/drivers/thunderbolt/dma_port.c
+++ b/drivers/thunderbolt/dma_port.c
@@ -364,15 +364,15 @@ int dma_port_flash_read(struct tb_dma_port *dma, unsigned int address,
void *buf, size_t size)
{
unsigned int retries = DMA_PORT_RETRIES;
- unsigned int offset;
-
- offset = address & 3;
- address = address & ~3;
do {
- u32 nbytes = min_t(u32, size, MAIL_DATA_DWORDS * 4);
+ unsigned int offset;
+ size_t nbytes;
int ret;
+ offset = address & 3;
+ nbytes = min_t(size_t, size + offset, MAIL_DATA_DWORDS * 4);
+
ret = dma_port_flash_read_block(dma, address, dma->buf,
ALIGN(nbytes, 4));
if (ret) {
@@ -384,6 +384,7 @@ int dma_port_flash_read(struct tb_dma_port *dma, unsigned int address,
return ret;
}
+ nbytes -= offset;
memcpy(buf, dma->buf + offset, nbytes);
size -= nbytes;
diff --git a/drivers/thunderbolt/retimer.c b/drivers/thunderbolt/retimer.c
index 620bcf5..c44fad2 100644
--- a/drivers/thunderbolt/retimer.c
+++ b/drivers/thunderbolt/retimer.c
@@ -347,7 +347,7 @@ static int tb_retimer_add(struct tb_port *port, u8 index, u32 auth_status)
ret = tb_retimer_nvm_add(rt);
if (ret) {
dev_err(&rt->dev, "failed to add NVM devices: %d\n", ret);
- device_del(&rt->dev);
+ device_unregister(&rt->dev);
return ret;
}
@@ -406,7 +406,7 @@ static struct tb_retimer *tb_port_find_retimer(struct tb_port *port, u8 index)
*/
int tb_retimer_scan(struct tb_port *port)
{
- u32 status[TB_MAX_RETIMER_INDEX] = {};
+ u32 status[TB_MAX_RETIMER_INDEX + 1] = {};
int ret, i, last_idx = 0;
if (!port->cap_usb4)
diff --git a/drivers/thunderbolt/usb4.c b/drivers/thunderbolt/usb4.c
index f2583b4..c05ec6f 100644
--- a/drivers/thunderbolt/usb4.c
+++ b/drivers/thunderbolt/usb4.c
@@ -108,15 +108,15 @@ static int usb4_do_read_data(u16 address, void *buf, size_t size,
unsigned int retries = USB4_DATA_RETRIES;
unsigned int offset;
- offset = address & 3;
- address = address & ~3;
-
do {
- size_t nbytes = min_t(size_t, size, USB4_DATA_DWORDS * 4);
unsigned int dwaddress, dwords;
u8 data[USB4_DATA_DWORDS * 4];
+ size_t nbytes;
int ret;
+ offset = address & 3;
+ nbytes = min_t(size_t, size + offset, USB4_DATA_DWORDS * 4);
+
dwaddress = address / 4;
dwords = ALIGN(nbytes, 4) / 4;
@@ -127,6 +127,7 @@ static int usb4_do_read_data(u16 address, void *buf, size_t size,
return ret;
}
+ nbytes -= offset;
memcpy(buf, data + offset, nbytes);
size -= nbytes;
diff --git a/drivers/tty/amiserial.c b/drivers/tty/amiserial.c
index 13f63c01..f60db96 100644
--- a/drivers/tty/amiserial.c
+++ b/drivers/tty/amiserial.c
@@ -970,6 +970,7 @@ static int set_serial_info(struct tty_struct *tty, struct serial_struct *ss)
if (!serial_isroot()) {
if ((ss->baud_base != state->baud_base) ||
(ss->close_delay != port->close_delay) ||
+ (ss->closing_wait != port->closing_wait) ||
(ss->xmit_fifo_size != state->xmit_fifo_size) ||
((ss->flags & ~ASYNC_USR_MASK) !=
(port->flags & ~ASYNC_USR_MASK))) {
diff --git a/drivers/tty/moxa.c b/drivers/tty/moxa.c
index 9f13f7d..f9f1410 100644
--- a/drivers/tty/moxa.c
+++ b/drivers/tty/moxa.c
@@ -2040,7 +2040,7 @@ static int moxa_get_serial_info(struct tty_struct *tty,
ss->line = info->port.tty->index,
ss->flags = info->port.flags,
ss->baud_base = 921600,
- ss->close_delay = info->port.close_delay;
+ ss->close_delay = jiffies_to_msecs(info->port.close_delay) / 10;
mutex_unlock(&info->port.mutex);
return 0;
}
@@ -2050,6 +2050,7 @@ static int moxa_set_serial_info(struct tty_struct *tty,
struct serial_struct *ss)
{
struct moxa_port *info = tty->driver_data;
+ unsigned int close_delay;
if (tty->index == MAX_PORTS)
return -EINVAL;
@@ -2061,19 +2062,24 @@ static int moxa_set_serial_info(struct tty_struct *tty,
ss->baud_base != 921600)
return -EPERM;
+ close_delay = msecs_to_jiffies(ss->close_delay * 10);
+
mutex_lock(&info->port.mutex);
if (!capable(CAP_SYS_ADMIN)) {
- if (((ss->flags & ~ASYNC_USR_MASK) !=
+ if (close_delay != info->port.close_delay ||
+ ss->type != info->type ||
+ ((ss->flags & ~ASYNC_USR_MASK) !=
(info->port.flags & ~ASYNC_USR_MASK))) {
mutex_unlock(&info->port.mutex);
return -EPERM;
}
+ } else {
+ info->port.close_delay = close_delay;
+
+ MoxaSetFifo(info, ss->type == PORT_16550A);
+
+ info->type = ss->type;
}
- info->port.close_delay = ss->close_delay * HZ / 100;
-
- MoxaSetFifo(info, ss->type == PORT_16550A);
-
- info->type = ss->type;
mutex_unlock(&info->port.mutex);
return 0;
}
diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
index fea1eea..d76880a 100644
--- a/drivers/tty/n_gsm.c
+++ b/drivers/tty/n_gsm.c
@@ -2382,8 +2382,18 @@ static int gsmld_attach_gsm(struct tty_struct *tty, struct gsm_mux *gsm)
/* Don't register device 0 - this is the control channel and not
a usable tty interface */
base = mux_num_to_base(gsm); /* Base for this MUX */
- for (i = 1; i < NUM_DLCI; i++)
- tty_register_device(gsm_tty_driver, base + i, NULL);
+ for (i = 1; i < NUM_DLCI; i++) {
+ struct device *dev;
+
+ dev = tty_register_device(gsm_tty_driver,
+ base + i, NULL);
+ if (IS_ERR(dev)) {
+ for (i--; i >= 1; i--)
+ tty_unregister_device(gsm_tty_driver,
+ base + i);
+ return PTR_ERR(dev);
+ }
+ }
}
return ret;
}
diff --git a/drivers/tty/serial/8250/8250.h b/drivers/tty/serial/8250/8250.h
index 52bb212..34aa271 100644
--- a/drivers/tty/serial/8250/8250.h
+++ b/drivers/tty/serial/8250/8250.h
@@ -88,6 +88,7 @@ struct serial8250_config {
#define UART_BUG_NOMSR (1 << 2) /* UART has buggy MSR status bits (Au1x00) */
#define UART_BUG_THRE (1 << 3) /* UART has buggy THRE reassertion */
#define UART_BUG_PARITY (1 << 4) /* UART mishandles parity if FIFO enabled */
+#define UART_BUG_TXRACE (1 << 5) /* UART Tx fails to set remote DR */
#ifdef CONFIG_SERIAL_8250_SHARE_IRQ
diff --git a/drivers/tty/serial/8250/8250_aspeed_vuart.c b/drivers/tty/serial/8250/8250_aspeed_vuart.c
index c33e02c..ec0d1da 100644
--- a/drivers/tty/serial/8250/8250_aspeed_vuart.c
+++ b/drivers/tty/serial/8250/8250_aspeed_vuart.c
@@ -403,6 +403,7 @@ static int aspeed_vuart_probe(struct platform_device *pdev)
port.port.status = UPSTAT_SYNC_FIFO;
port.port.dev = &pdev->dev;
port.port.has_sysrq = IS_ENABLED(CONFIG_SERIAL_8250_CONSOLE);
+ port.bugs |= UART_BUG_TXRACE;
rc = sysfs_create_group(&vuart->dev->kobj, &aspeed_vuart_attr_group);
if (rc < 0)
diff --git a/drivers/tty/serial/8250/8250_dw.c b/drivers/tty/serial/8250/8250_dw.c
index 9e204f9..a3a0154 100644
--- a/drivers/tty/serial/8250/8250_dw.c
+++ b/drivers/tty/serial/8250/8250_dw.c
@@ -714,6 +714,7 @@ static const struct acpi_device_id dw8250_acpi_match[] = {
{ "APMC0D08", 0},
{ "AMD0020", 0 },
{ "AMDI0020", 0 },
+ { "AMDI0022", 0 },
{ "BRCM2032", 0 },
{ "HISI0031", 0 },
{ },
diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
index d5a513e..13929ab 100644
--- a/drivers/tty/serial/8250/8250_pci.c
+++ b/drivers/tty/serial/8250/8250_pci.c
@@ -56,6 +56,8 @@ struct serial_private {
int line[];
};
+#define PCI_DEVICE_ID_HPE_PCI_SERIAL 0x37e
+
static const struct pci_device_id pci_use_msi[] = {
{ PCI_DEVICE_SUB(PCI_VENDOR_ID_NETMOS, PCI_DEVICE_ID_NETMOS_9900,
0xA000, 0x1000) },
@@ -63,6 +65,8 @@ static const struct pci_device_id pci_use_msi[] = {
0xA000, 0x1000) },
{ PCI_DEVICE_SUB(PCI_VENDOR_ID_NETMOS, PCI_DEVICE_ID_NETMOS_9922,
0xA000, 0x1000) },
+ { PCI_DEVICE_SUB(PCI_VENDOR_ID_HP_3PAR, PCI_DEVICE_ID_HPE_PCI_SERIAL,
+ PCI_ANY_ID, PCI_ANY_ID) },
{ }
};
@@ -1998,6 +2002,16 @@ static struct pci_serial_quirk pci_serial_quirks[] __refdata = {
.setup = pci_hp_diva_setup,
},
/*
+ * HPE PCI serial device
+ */
+ {
+ .vendor = PCI_VENDOR_ID_HP_3PAR,
+ .device = PCI_DEVICE_ID_HPE_PCI_SERIAL,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ .setup = pci_hp_diva_setup,
+ },
+ /*
* Intel
*/
{
@@ -3944,21 +3958,26 @@ pciserial_init_ports(struct pci_dev *dev, const struct pciserial_board *board)
uart.port.flags = UPF_SKIP_TEST | UPF_BOOT_AUTOCONF | UPF_SHARE_IRQ;
uart.port.uartclk = board->base_baud * 16;
- if (pci_match_id(pci_use_msi, dev)) {
- dev_dbg(&dev->dev, "Using MSI(-X) interrupts\n");
- pci_set_master(dev);
- rc = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_ALL_TYPES);
+ if (board->flags & FL_NOIRQ) {
+ uart.port.irq = 0;
} else {
- dev_dbg(&dev->dev, "Using legacy interrupts\n");
- rc = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_LEGACY);
- }
- if (rc < 0) {
- kfree(priv);
- priv = ERR_PTR(rc);
- goto err_deinit;
+ if (pci_match_id(pci_use_msi, dev)) {
+ dev_dbg(&dev->dev, "Using MSI(-X) interrupts\n");
+ pci_set_master(dev);
+ rc = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_ALL_TYPES);
+ } else {
+ dev_dbg(&dev->dev, "Using legacy interrupts\n");
+ rc = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_LEGACY);
+ }
+ if (rc < 0) {
+ kfree(priv);
+ priv = ERR_PTR(rc);
+ goto err_deinit;
+ }
+
+ uart.port.irq = pci_irq_vector(dev, 0);
}
- uart.port.irq = pci_irq_vector(dev, 0);
uart.port.dev = &dev->dev;
for (i = 0; i < nr_ports; i++) {
@@ -4973,6 +4992,10 @@ static const struct pci_device_id serial_pci_tbl[] = {
{ PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_DIVA_AUX,
PCI_ANY_ID, PCI_ANY_ID, 0, 0,
pbn_b2_1_115200 },
+ /* HPE PCI serial device */
+ { PCI_VENDOR_ID_HP_3PAR, PCI_DEVICE_ID_HPE_PCI_SERIAL,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ pbn_b1_1_115200 },
{ PCI_VENDOR_ID_DCI, PCI_DEVICE_ID_DCI_PCCOM2,
PCI_ANY_ID, PCI_ANY_ID, 0, 0,
diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
index b0af130..6e14142 100644
--- a/drivers/tty/serial/8250/8250_port.c
+++ b/drivers/tty/serial/8250/8250_port.c
@@ -1815,6 +1815,18 @@ void serial8250_tx_chars(struct uart_8250_port *up)
count = up->tx_loadsz;
do {
serial_out(up, UART_TX, xmit->buf[xmit->tail]);
+ if (up->bugs & UART_BUG_TXRACE) {
+ /*
+ * The Aspeed BMC virtual UARTs have a bug where data
+ * may get stuck in the BMC's Tx FIFO from bursts of
+ * writes on the APB interface.
+ *
+ * Delay back-to-back writes by a read cycle to avoid
+ * stalling the VUART. Read a register that won't have
+ * side-effects and discard the result.
+ */
+ serial_in(up, UART_SCR);
+ }
xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1);
port->icount.tx++;
if (uart_circ_empty(xmit))
diff --git a/drivers/tty/serial/max310x.c b/drivers/tty/serial/max310x.c
index 8434bd5..5bf8dd6 100644
--- a/drivers/tty/serial/max310x.c
+++ b/drivers/tty/serial/max310x.c
@@ -1528,6 +1528,8 @@ static int __init max310x_uart_init(void)
#ifdef CONFIG_SPI_MASTER
ret = spi_register_driver(&max310x_spi_driver);
+ if (ret)
+ uart_unregister_driver(&max310x_uart);
#endif
return ret;
diff --git a/drivers/tty/serial/mvebu-uart.c b/drivers/tty/serial/mvebu-uart.c
index e0c00a1..51b0eca 100644
--- a/drivers/tty/serial/mvebu-uart.c
+++ b/drivers/tty/serial/mvebu-uart.c
@@ -818,9 +818,6 @@ static int mvebu_uart_probe(struct platform_device *pdev)
return -EINVAL;
}
- if (!match)
- return -ENODEV;
-
/* Assume that all UART ports have a DT alias or none has */
id = of_alias_get_id(pdev->dev.of_node, "serial");
if (!pdev->dev.of_node || id < 0)
diff --git a/drivers/tty/serial/omap-serial.c b/drivers/tty/serial/omap-serial.c
index 76b94d0..84e8158 100644
--- a/drivers/tty/serial/omap-serial.c
+++ b/drivers/tty/serial/omap-serial.c
@@ -159,6 +159,8 @@ struct uart_omap_port {
u32 calc_latency;
struct work_struct qos_work;
bool is_suspending;
+
+ unsigned int rs485_tx_filter_count;
};
#define to_uart_omap_port(p) ((container_of((p), struct uart_omap_port, port)))
@@ -302,7 +304,8 @@ static void serial_omap_stop_tx(struct uart_port *port)
serial_out(up, UART_OMAP_SCR, up->scr);
res = (port->rs485.flags & SER_RS485_RTS_AFTER_SEND) ?
1 : 0;
- if (gpiod_get_value(up->rts_gpiod) != res) {
+ if (up->rts_gpiod &&
+ gpiod_get_value(up->rts_gpiod) != res) {
if (port->rs485.delay_rts_after_send > 0)
mdelay(
port->rs485.delay_rts_after_send);
@@ -328,19 +331,6 @@ static void serial_omap_stop_tx(struct uart_port *port)
serial_out(up, UART_IER, up->ier);
}
- if ((port->rs485.flags & SER_RS485_ENABLED) &&
- !(port->rs485.flags & SER_RS485_RX_DURING_TX)) {
- /*
- * Empty the RX FIFO, we are not interested in anything
- * received during the half-duplex transmission.
- */
- serial_out(up, UART_FCR, up->fcr | UART_FCR_CLEAR_RCVR);
- /* Re-enable RX interrupts */
- up->ier |= UART_IER_RLSI | UART_IER_RDI;
- up->port.read_status_mask |= UART_LSR_DR;
- serial_out(up, UART_IER, up->ier);
- }
-
pm_runtime_mark_last_busy(up->dev);
pm_runtime_put_autosuspend(up->dev);
}
@@ -366,6 +356,10 @@ static void transmit_chars(struct uart_omap_port *up, unsigned int lsr)
serial_out(up, UART_TX, up->port.x_char);
up->port.icount.tx++;
up->port.x_char = 0;
+ if ((up->port.rs485.flags & SER_RS485_ENABLED) &&
+ !(up->port.rs485.flags & SER_RS485_RX_DURING_TX))
+ up->rs485_tx_filter_count++;
+
return;
}
if (uart_circ_empty(xmit) || uart_tx_stopped(&up->port)) {
@@ -377,6 +371,10 @@ static void transmit_chars(struct uart_omap_port *up, unsigned int lsr)
serial_out(up, UART_TX, xmit->buf[xmit->tail]);
xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1);
up->port.icount.tx++;
+ if ((up->port.rs485.flags & SER_RS485_ENABLED) &&
+ !(up->port.rs485.flags & SER_RS485_RX_DURING_TX))
+ up->rs485_tx_filter_count++;
+
if (uart_circ_empty(xmit))
break;
} while (--count > 0);
@@ -411,7 +409,7 @@ static void serial_omap_start_tx(struct uart_port *port)
/* if rts not already enabled */
res = (port->rs485.flags & SER_RS485_RTS_ON_SEND) ? 1 : 0;
- if (gpiod_get_value(up->rts_gpiod) != res) {
+ if (up->rts_gpiod && gpiod_get_value(up->rts_gpiod) != res) {
gpiod_set_value(up->rts_gpiod, res);
if (port->rs485.delay_rts_before_send > 0)
mdelay(port->rs485.delay_rts_before_send);
@@ -420,7 +418,7 @@ static void serial_omap_start_tx(struct uart_port *port)
if ((port->rs485.flags & SER_RS485_ENABLED) &&
!(port->rs485.flags & SER_RS485_RX_DURING_TX))
- serial_omap_stop_rx(port);
+ up->rs485_tx_filter_count = 0;
serial_omap_enable_ier_thri(up);
pm_runtime_mark_last_busy(up->dev);
@@ -491,8 +489,13 @@ static void serial_omap_rlsi(struct uart_omap_port *up, unsigned int lsr)
* Read one data character out to avoid stalling the receiver according
* to the table 23-246 of the omap4 TRM.
*/
- if (likely(lsr & UART_LSR_DR))
+ if (likely(lsr & UART_LSR_DR)) {
serial_in(up, UART_RX);
+ if ((up->port.rs485.flags & SER_RS485_ENABLED) &&
+ !(up->port.rs485.flags & SER_RS485_RX_DURING_TX) &&
+ up->rs485_tx_filter_count)
+ up->rs485_tx_filter_count--;
+ }
up->port.icount.rx++;
flag = TTY_NORMAL;
@@ -543,6 +546,13 @@ static void serial_omap_rdi(struct uart_omap_port *up, unsigned int lsr)
return;
ch = serial_in(up, UART_RX);
+ if ((up->port.rs485.flags & SER_RS485_ENABLED) &&
+ !(up->port.rs485.flags & SER_RS485_RX_DURING_TX) &&
+ up->rs485_tx_filter_count) {
+ up->rs485_tx_filter_count--;
+ return;
+ }
+
flag = TTY_NORMAL;
up->port.icount.rx++;
@@ -1407,18 +1417,13 @@ serial_omap_config_rs485(struct uart_port *port, struct serial_rs485 *rs485)
/* store new config */
port->rs485 = *rs485;
- /*
- * Just as a precaution, only allow rs485
- * to be enabled if the gpio pin is valid
- */
if (up->rts_gpiod) {
/* enable / disable rts */
val = (port->rs485.flags & SER_RS485_ENABLED) ?
SER_RS485_RTS_AFTER_SEND : SER_RS485_RTS_ON_SEND;
val = (port->rs485.flags & val) ? 1 : 0;
gpiod_set_value(up->rts_gpiod, val);
- } else
- port->rs485.flags &= ~SER_RS485_ENABLED;
+ }
/* Enable interrupts */
up->ier = mode;
diff --git a/drivers/tty/serial/rp2.c b/drivers/tty/serial/rp2.c
index 5690c09..944a4c0 100644
--- a/drivers/tty/serial/rp2.c
+++ b/drivers/tty/serial/rp2.c
@@ -195,7 +195,6 @@ struct rp2_card {
void __iomem *bar0;
void __iomem *bar1;
spinlock_t card_lock;
- struct completion fw_loaded;
};
#define RP_ID(prod) PCI_VDEVICE(RP, (prod))
@@ -664,17 +663,10 @@ static void rp2_remove_ports(struct rp2_card *card)
card->initialized_ports = 0;
}
-static void rp2_fw_cb(const struct firmware *fw, void *context)
+static int rp2_load_firmware(struct rp2_card *card, const struct firmware *fw)
{
- struct rp2_card *card = context;
resource_size_t phys_base;
- int i, rc = -ENOENT;
-
- if (!fw) {
- dev_err(&card->pdev->dev, "cannot find '%s' firmware image\n",
- RP2_FW_NAME);
- goto no_fw;
- }
+ int i, rc = 0;
phys_base = pci_resource_start(card->pdev, 1);
@@ -720,23 +712,13 @@ static void rp2_fw_cb(const struct firmware *fw, void *context)
card->initialized_ports++;
}
- release_firmware(fw);
-no_fw:
- /*
- * rp2_fw_cb() is called from a workqueue long after rp2_probe()
- * has already returned success. So if something failed here,
- * we'll just leave the now-dormant device in place until somebody
- * unbinds it.
- */
- if (rc)
- dev_warn(&card->pdev->dev, "driver initialization failed\n");
-
- complete(&card->fw_loaded);
+ return rc;
}
static int rp2_probe(struct pci_dev *pdev,
const struct pci_device_id *id)
{
+ const struct firmware *fw;
struct rp2_card *card;
struct rp2_uart_port *ports;
void __iomem * const *bars;
@@ -747,7 +729,6 @@ static int rp2_probe(struct pci_dev *pdev,
return -ENOMEM;
pci_set_drvdata(pdev, card);
spin_lock_init(&card->card_lock);
- init_completion(&card->fw_loaded);
rc = pcim_enable_device(pdev);
if (rc)
@@ -780,22 +761,24 @@ static int rp2_probe(struct pci_dev *pdev,
return -ENOMEM;
card->ports = ports;
+ rc = request_firmware(&fw, RP2_FW_NAME, &pdev->dev);
+ if (rc < 0) {
+ dev_err(&pdev->dev, "cannot find '%s' firmware image\n",
+ RP2_FW_NAME);
+ return rc;
+ }
+
+ rc = rp2_load_firmware(card, fw);
+
+ release_firmware(fw);
+ if (rc < 0)
+ return rc;
+
rc = devm_request_irq(&pdev->dev, pdev->irq, rp2_uart_interrupt,
IRQF_SHARED, DRV_NAME, card);
if (rc)
return rc;
- /*
- * Only catastrophic errors (e.g. ENOMEM) are reported here.
- * If the FW image is missing, we'll find out in rp2_fw_cb()
- * and print an error message.
- */
- rc = request_firmware_nowait(THIS_MODULE, 1, RP2_FW_NAME, &pdev->dev,
- GFP_KERNEL, card, rp2_fw_cb);
- if (rc)
- return rc;
- dev_dbg(&pdev->dev, "waiting for firmware blob...\n");
-
return 0;
}
@@ -803,7 +786,6 @@ static void rp2_remove(struct pci_dev *pdev)
{
struct rp2_card *card = pci_get_drvdata(pdev);
- wait_for_completion(&card->fw_loaded);
rp2_remove_ports(card);
}
diff --git a/drivers/tty/serial/sc16is7xx.c b/drivers/tty/serial/sc16is7xx.c
index f86ec2d..9adb836 100644
--- a/drivers/tty/serial/sc16is7xx.c
+++ b/drivers/tty/serial/sc16is7xx.c
@@ -1196,7 +1196,7 @@ static int sc16is7xx_probe(struct device *dev,
ret = regmap_read(regmap,
SC16IS7XX_LSR_REG << SC16IS7XX_REG_SHIFT, &val);
if (ret < 0)
- return ret;
+ return -EPROBE_DEFER;
/* Alloc port structure */
s = devm_kzalloc(dev, struct_size(s, p, devtype->nr_uart), GFP_KERNEL);
diff --git a/drivers/tty/serial/serial-tegra.c b/drivers/tty/serial/serial-tegra.c
index bd13014..fdce1a7 100644
--- a/drivers/tty/serial/serial-tegra.c
+++ b/drivers/tty/serial/serial-tegra.c
@@ -333,7 +333,7 @@ static void tegra_uart_fifo_reset(struct tegra_uart_port *tup, u8 fcr_bits)
do {
lsr = tegra_uart_read(tup, UART_LSR);
- if ((lsr | UART_LSR_TEMT) && !(lsr & UART_LSR_DR))
+ if ((lsr & UART_LSR_TEMT) && !(lsr & UART_LSR_DR))
break;
udelay(1);
} while (--tmout);
diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
index 828f9ad..68a0ff6 100644
--- a/drivers/tty/serial/serial_core.c
+++ b/drivers/tty/serial/serial_core.c
@@ -865,9 +865,11 @@ static int uart_set_info(struct tty_struct *tty, struct tty_port *port,
goto check_and_exit;
}
- retval = security_locked_down(LOCKDOWN_TIOCSSERIAL);
- if (retval && (change_irq || change_port))
- goto exit;
+ if (change_irq || change_port) {
+ retval = security_locked_down(LOCKDOWN_TIOCSSERIAL);
+ if (retval)
+ goto exit;
+ }
/*
* Ask the low level driver to verify the settings.
@@ -1306,7 +1308,7 @@ static int uart_set_rs485_config(struct uart_port *port,
unsigned long flags;
if (!port->rs485_config)
- return -ENOIOCTLCMD;
+ return -ENOTTY;
if (copy_from_user(&rs485, rs485_user, sizeof(*rs485_user)))
return -EFAULT;
@@ -1330,7 +1332,7 @@ static int uart_get_iso7816_config(struct uart_port *port,
struct serial_iso7816 aux;
if (!port->iso7816_config)
- return -ENOIOCTLCMD;
+ return -ENOTTY;
spin_lock_irqsave(&port->lock, flags);
aux = port->iso7816;
@@ -1350,7 +1352,7 @@ static int uart_set_iso7816_config(struct uart_port *port,
unsigned long flags;
if (!port->iso7816_config)
- return -ENOIOCTLCMD;
+ return -ENOTTY;
if (copy_from_user(&iso7816, iso7816_user, sizeof(*iso7816_user)))
return -EFAULT;
diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
index e1179e7..3b1aaa9 100644
--- a/drivers/tty/serial/sh-sci.c
+++ b/drivers/tty/serial/sh-sci.c
@@ -1023,10 +1023,10 @@ static int scif_set_rtrg(struct uart_port *port, int rx_trig)
{
unsigned int bits;
+ if (rx_trig >= port->fifosize)
+ rx_trig = port->fifosize - 1;
if (rx_trig < 1)
rx_trig = 1;
- if (rx_trig >= port->fifosize)
- rx_trig = port->fifosize;
/* HSCIF can be set to an arbitrary level. */
if (sci_getreg(port, HSRTRGR)->size) {
diff --git a/drivers/tty/serial/stm32-usart.c b/drivers/tty/serial/stm32-usart.c
index 6248304..2cf9fc9 100644
--- a/drivers/tty/serial/stm32-usart.c
+++ b/drivers/tty/serial/stm32-usart.c
@@ -34,15 +34,15 @@
#include "serial_mctrl_gpio.h"
#include "stm32-usart.h"
-static void stm32_stop_tx(struct uart_port *port);
-static void stm32_transmit_chars(struct uart_port *port);
+static void stm32_usart_stop_tx(struct uart_port *port);
+static void stm32_usart_transmit_chars(struct uart_port *port);
static inline struct stm32_port *to_stm32_port(struct uart_port *port)
{
return container_of(port, struct stm32_port, port);
}
-static void stm32_set_bits(struct uart_port *port, u32 reg, u32 bits)
+static void stm32_usart_set_bits(struct uart_port *port, u32 reg, u32 bits)
{
u32 val;
@@ -51,7 +51,7 @@ static void stm32_set_bits(struct uart_port *port, u32 reg, u32 bits)
writel_relaxed(val, port->membase + reg);
}
-static void stm32_clr_bits(struct uart_port *port, u32 reg, u32 bits)
+static void stm32_usart_clr_bits(struct uart_port *port, u32 reg, u32 bits)
{
u32 val;
@@ -60,8 +60,8 @@ static void stm32_clr_bits(struct uart_port *port, u32 reg, u32 bits)
writel_relaxed(val, port->membase + reg);
}
-static void stm32_config_reg_rs485(u32 *cr1, u32 *cr3, u32 delay_ADE,
- u32 delay_DDE, u32 baud)
+static void stm32_usart_config_reg_rs485(u32 *cr1, u32 *cr3, u32 delay_ADE,
+ u32 delay_DDE, u32 baud)
{
u32 rs485_deat_dedt;
u32 rs485_deat_dedt_max = (USART_CR1_DEAT_MASK >> USART_CR1_DEAT_SHIFT);
@@ -95,16 +95,16 @@ static void stm32_config_reg_rs485(u32 *cr1, u32 *cr3, u32 delay_ADE,
*cr1 |= rs485_deat_dedt;
}
-static int stm32_config_rs485(struct uart_port *port,
- struct serial_rs485 *rs485conf)
+static int stm32_usart_config_rs485(struct uart_port *port,
+ struct serial_rs485 *rs485conf)
{
struct stm32_port *stm32_port = to_stm32_port(port);
- struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
- struct stm32_usart_config *cfg = &stm32_port->info->cfg;
+ const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ const struct stm32_usart_config *cfg = &stm32_port->info->cfg;
u32 usartdiv, baud, cr1, cr3;
bool over8;
- stm32_clr_bits(port, ofs->cr1, BIT(cfg->uart_enable_bit));
+ stm32_usart_clr_bits(port, ofs->cr1, BIT(cfg->uart_enable_bit));
port->rs485 = *rs485conf;
@@ -122,9 +122,10 @@ static int stm32_config_rs485(struct uart_port *port,
<< USART_BRR_04_R_SHIFT;
baud = DIV_ROUND_CLOSEST(port->uartclk, usartdiv);
- stm32_config_reg_rs485(&cr1, &cr3,
- rs485conf->delay_rts_before_send,
- rs485conf->delay_rts_after_send, baud);
+ stm32_usart_config_reg_rs485(&cr1, &cr3,
+ rs485conf->delay_rts_before_send,
+ rs485conf->delay_rts_after_send,
+ baud);
if (rs485conf->flags & SER_RS485_RTS_ON_SEND) {
cr3 &= ~USART_CR3_DEP;
@@ -137,18 +138,19 @@ static int stm32_config_rs485(struct uart_port *port,
writel_relaxed(cr3, port->membase + ofs->cr3);
writel_relaxed(cr1, port->membase + ofs->cr1);
} else {
- stm32_clr_bits(port, ofs->cr3, USART_CR3_DEM | USART_CR3_DEP);
- stm32_clr_bits(port, ofs->cr1,
- USART_CR1_DEDT_MASK | USART_CR1_DEAT_MASK);
+ stm32_usart_clr_bits(port, ofs->cr3,
+ USART_CR3_DEM | USART_CR3_DEP);
+ stm32_usart_clr_bits(port, ofs->cr1,
+ USART_CR1_DEDT_MASK | USART_CR1_DEAT_MASK);
}
- stm32_set_bits(port, ofs->cr1, BIT(cfg->uart_enable_bit));
+ stm32_usart_set_bits(port, ofs->cr1, BIT(cfg->uart_enable_bit));
return 0;
}
-static int stm32_init_rs485(struct uart_port *port,
- struct platform_device *pdev)
+static int stm32_usart_init_rs485(struct uart_port *port,
+ struct platform_device *pdev)
{
struct serial_rs485 *rs485conf = &port->rs485;
@@ -162,11 +164,11 @@ static int stm32_init_rs485(struct uart_port *port,
return uart_get_rs485_mode(port);
}
-static int stm32_pending_rx(struct uart_port *port, u32 *sr, int *last_res,
- bool threaded)
+static int stm32_usart_pending_rx(struct uart_port *port, u32 *sr,
+ int *last_res, bool threaded)
{
struct stm32_port *stm32_port = to_stm32_port(port);
- struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
enum dma_status status;
struct dma_tx_state state;
@@ -176,8 +178,7 @@ static int stm32_pending_rx(struct uart_port *port, u32 *sr, int *last_res,
status = dmaengine_tx_status(stm32_port->rx_ch,
stm32_port->rx_ch->cookie,
&state);
- if ((status == DMA_IN_PROGRESS) &&
- (*last_res != state.residue))
+ if (status == DMA_IN_PROGRESS && (*last_res != state.residue))
return 1;
else
return 0;
@@ -187,11 +188,11 @@ static int stm32_pending_rx(struct uart_port *port, u32 *sr, int *last_res,
return 0;
}
-static unsigned long stm32_get_char(struct uart_port *port, u32 *sr,
- int *last_res)
+static unsigned long stm32_usart_get_char(struct uart_port *port, u32 *sr,
+ int *last_res)
{
struct stm32_port *stm32_port = to_stm32_port(port);
- struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
unsigned long c;
if (stm32_port->rx_ch) {
@@ -207,19 +208,22 @@ static unsigned long stm32_get_char(struct uart_port *port, u32 *sr,
return c;
}
-static void stm32_receive_chars(struct uart_port *port, bool threaded)
+static void stm32_usart_receive_chars(struct uart_port *port, bool threaded)
{
struct tty_port *tport = &port->state->port;
struct stm32_port *stm32_port = to_stm32_port(port);
- struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
- unsigned long c;
+ const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ unsigned long c, flags;
u32 sr;
char flag;
- if (irqd_is_wakeup_set(irq_get_irq_data(port->irq)))
- pm_wakeup_event(tport->tty->dev, 0);
+ if (threaded)
+ spin_lock_irqsave(&port->lock, flags);
+ else
+ spin_lock(&port->lock);
- while (stm32_pending_rx(port, &sr, &stm32_port->last_res, threaded)) {
+ while (stm32_usart_pending_rx(port, &sr, &stm32_port->last_res,
+ threaded)) {
sr |= USART_SR_DUMMY_RX;
flag = TTY_NORMAL;
@@ -238,7 +242,7 @@ static void stm32_receive_chars(struct uart_port *port, bool threaded)
writel_relaxed(sr & USART_SR_ERR_MASK,
port->membase + ofs->icr);
- c = stm32_get_char(port, &sr, &stm32_port->last_res);
+ c = stm32_usart_get_char(port, &sr, &stm32_port->last_res);
port->icount.rx++;
if (sr & USART_SR_ERR_MASK) {
if (sr & USART_SR_ORE) {
@@ -273,58 +277,65 @@ static void stm32_receive_chars(struct uart_port *port, bool threaded)
uart_insert_char(port, sr, USART_SR_ORE, c, flag);
}
- spin_unlock(&port->lock);
+ if (threaded)
+ spin_unlock_irqrestore(&port->lock, flags);
+ else
+ spin_unlock(&port->lock);
+
tty_flip_buffer_push(tport);
- spin_lock(&port->lock);
}
-static void stm32_tx_dma_complete(void *arg)
+static void stm32_usart_tx_dma_complete(void *arg)
{
struct uart_port *port = arg;
struct stm32_port *stm32port = to_stm32_port(port);
- struct stm32_usart_offsets *ofs = &stm32port->info->ofs;
+ const struct stm32_usart_offsets *ofs = &stm32port->info->ofs;
+ unsigned long flags;
- stm32_clr_bits(port, ofs->cr3, USART_CR3_DMAT);
+ dmaengine_terminate_async(stm32port->tx_ch);
+ stm32_usart_clr_bits(port, ofs->cr3, USART_CR3_DMAT);
stm32port->tx_dma_busy = false;
/* Let's see if we have pending data to send */
- stm32_transmit_chars(port);
+ spin_lock_irqsave(&port->lock, flags);
+ stm32_usart_transmit_chars(port);
+ spin_unlock_irqrestore(&port->lock, flags);
}
-static void stm32_tx_interrupt_enable(struct uart_port *port)
+static void stm32_usart_tx_interrupt_enable(struct uart_port *port)
{
struct stm32_port *stm32_port = to_stm32_port(port);
- struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
/*
* Enables TX FIFO threshold irq when FIFO is enabled,
* or TX empty irq when FIFO is disabled
*/
if (stm32_port->fifoen)
- stm32_set_bits(port, ofs->cr3, USART_CR3_TXFTIE);
+ stm32_usart_set_bits(port, ofs->cr3, USART_CR3_TXFTIE);
else
- stm32_set_bits(port, ofs->cr1, USART_CR1_TXEIE);
+ stm32_usart_set_bits(port, ofs->cr1, USART_CR1_TXEIE);
}
-static void stm32_tx_interrupt_disable(struct uart_port *port)
+static void stm32_usart_tx_interrupt_disable(struct uart_port *port)
{
struct stm32_port *stm32_port = to_stm32_port(port);
- struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
if (stm32_port->fifoen)
- stm32_clr_bits(port, ofs->cr3, USART_CR3_TXFTIE);
+ stm32_usart_clr_bits(port, ofs->cr3, USART_CR3_TXFTIE);
else
- stm32_clr_bits(port, ofs->cr1, USART_CR1_TXEIE);
+ stm32_usart_clr_bits(port, ofs->cr1, USART_CR1_TXEIE);
}
-static void stm32_transmit_chars_pio(struct uart_port *port)
+static void stm32_usart_transmit_chars_pio(struct uart_port *port)
{
struct stm32_port *stm32_port = to_stm32_port(port);
- struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
struct circ_buf *xmit = &port->state->xmit;
if (stm32_port->tx_dma_busy) {
- stm32_clr_bits(port, ofs->cr3, USART_CR3_DMAT);
+ stm32_usart_clr_bits(port, ofs->cr3, USART_CR3_DMAT);
stm32_port->tx_dma_busy = false;
}
@@ -339,15 +350,15 @@ static void stm32_transmit_chars_pio(struct uart_port *port)
/* rely on TXE irq (mask or unmask) for sending remaining data */
if (uart_circ_empty(xmit))
- stm32_tx_interrupt_disable(port);
+ stm32_usart_tx_interrupt_disable(port);
else
- stm32_tx_interrupt_enable(port);
+ stm32_usart_tx_interrupt_enable(port);
}
-static void stm32_transmit_chars_dma(struct uart_port *port)
+static void stm32_usart_transmit_chars_dma(struct uart_port *port)
{
struct stm32_port *stm32port = to_stm32_port(port);
- struct stm32_usart_offsets *ofs = &stm32port->info->ofs;
+ const struct stm32_usart_offsets *ofs = &stm32port->info->ofs;
struct circ_buf *xmit = &port->state->xmit;
struct dma_async_tx_descriptor *desc = NULL;
unsigned int count, i;
@@ -386,7 +397,7 @@ static void stm32_transmit_chars_dma(struct uart_port *port)
if (!desc)
goto fallback_err;
- desc->callback = stm32_tx_dma_complete;
+ desc->callback = stm32_usart_tx_dma_complete;
desc->callback_param = port;
/* Push current DMA TX transaction in the pending queue */
@@ -399,7 +410,7 @@ static void stm32_transmit_chars_dma(struct uart_port *port)
/* Issue pending DMA TX requests */
dma_async_issue_pending(stm32port->tx_ch);
- stm32_set_bits(port, ofs->cr3, USART_CR3_DMAT);
+ stm32_usart_set_bits(port, ofs->cr3, USART_CR3_DMAT);
xmit->tail = (xmit->tail + count) & (UART_XMIT_SIZE - 1);
port->icount.tx += count;
@@ -407,74 +418,79 @@ static void stm32_transmit_chars_dma(struct uart_port *port)
fallback_err:
for (i = count; i > 0; i--)
- stm32_transmit_chars_pio(port);
+ stm32_usart_transmit_chars_pio(port);
}
-static void stm32_transmit_chars(struct uart_port *port)
+static void stm32_usart_transmit_chars(struct uart_port *port)
{
struct stm32_port *stm32_port = to_stm32_port(port);
- struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
struct circ_buf *xmit = &port->state->xmit;
if (port->x_char) {
if (stm32_port->tx_dma_busy)
- stm32_clr_bits(port, ofs->cr3, USART_CR3_DMAT);
+ stm32_usart_clr_bits(port, ofs->cr3, USART_CR3_DMAT);
writel_relaxed(port->x_char, port->membase + ofs->tdr);
port->x_char = 0;
port->icount.tx++;
if (stm32_port->tx_dma_busy)
- stm32_set_bits(port, ofs->cr3, USART_CR3_DMAT);
+ stm32_usart_set_bits(port, ofs->cr3, USART_CR3_DMAT);
return;
}
if (uart_circ_empty(xmit) || uart_tx_stopped(port)) {
- stm32_tx_interrupt_disable(port);
+ stm32_usart_tx_interrupt_disable(port);
return;
}
if (ofs->icr == UNDEF_REG)
- stm32_clr_bits(port, ofs->isr, USART_SR_TC);
+ stm32_usart_clr_bits(port, ofs->isr, USART_SR_TC);
else
writel_relaxed(USART_ICR_TCCF, port->membase + ofs->icr);
if (stm32_port->tx_ch)
- stm32_transmit_chars_dma(port);
+ stm32_usart_transmit_chars_dma(port);
else
- stm32_transmit_chars_pio(port);
+ stm32_usart_transmit_chars_pio(port);
if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
uart_write_wakeup(port);
if (uart_circ_empty(xmit))
- stm32_tx_interrupt_disable(port);
+ stm32_usart_tx_interrupt_disable(port);
}
-static irqreturn_t stm32_interrupt(int irq, void *ptr)
+static irqreturn_t stm32_usart_interrupt(int irq, void *ptr)
{
struct uart_port *port = ptr;
+ struct tty_port *tport = &port->state->port;
struct stm32_port *stm32_port = to_stm32_port(port);
- struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
u32 sr;
- spin_lock(&port->lock);
-
sr = readl_relaxed(port->membase + ofs->isr);
if ((sr & USART_SR_RTOF) && ofs->icr != UNDEF_REG)
writel_relaxed(USART_ICR_RTOCF,
port->membase + ofs->icr);
- if ((sr & USART_SR_WUF) && (ofs->icr != UNDEF_REG))
+ if ((sr & USART_SR_WUF) && ofs->icr != UNDEF_REG) {
+ /* Clear wake up flag and disable wake up interrupt */
writel_relaxed(USART_ICR_WUCF,
port->membase + ofs->icr);
+ stm32_usart_clr_bits(port, ofs->cr3, USART_CR3_WUFIE);
+ if (irqd_is_wakeup_set(irq_get_irq_data(port->irq)))
+ pm_wakeup_event(tport->tty->dev, 0);
+ }
if ((sr & USART_SR_RXNE) && !(stm32_port->rx_ch))
- stm32_receive_chars(port, false);
+ stm32_usart_receive_chars(port, false);
- if ((sr & USART_SR_TXE) && !(stm32_port->tx_ch))
- stm32_transmit_chars(port);
-
- spin_unlock(&port->lock);
+ if ((sr & USART_SR_TXE) && !(stm32_port->tx_ch)) {
+ spin_lock(&port->lock);
+ stm32_usart_transmit_chars(port);
+ spin_unlock(&port->lock);
+ }
if (stm32_port->rx_ch)
return IRQ_WAKE_THREAD;
@@ -482,43 +498,42 @@ static irqreturn_t stm32_interrupt(int irq, void *ptr)
return IRQ_HANDLED;
}
-static irqreturn_t stm32_threaded_interrupt(int irq, void *ptr)
+static irqreturn_t stm32_usart_threaded_interrupt(int irq, void *ptr)
{
struct uart_port *port = ptr;
struct stm32_port *stm32_port = to_stm32_port(port);
- spin_lock(&port->lock);
-
if (stm32_port->rx_ch)
- stm32_receive_chars(port, true);
-
- spin_unlock(&port->lock);
+ stm32_usart_receive_chars(port, true);
return IRQ_HANDLED;
}
-static unsigned int stm32_tx_empty(struct uart_port *port)
+static unsigned int stm32_usart_tx_empty(struct uart_port *port)
{
struct stm32_port *stm32_port = to_stm32_port(port);
- struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
- return readl_relaxed(port->membase + ofs->isr) & USART_SR_TXE;
+ if (readl_relaxed(port->membase + ofs->isr) & USART_SR_TC)
+ return TIOCSER_TEMT;
+
+ return 0;
}
-static void stm32_set_mctrl(struct uart_port *port, unsigned int mctrl)
+static void stm32_usart_set_mctrl(struct uart_port *port, unsigned int mctrl)
{
struct stm32_port *stm32_port = to_stm32_port(port);
- struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
if ((mctrl & TIOCM_RTS) && (port->status & UPSTAT_AUTORTS))
- stm32_set_bits(port, ofs->cr3, USART_CR3_RTSE);
+ stm32_usart_set_bits(port, ofs->cr3, USART_CR3_RTSE);
else
- stm32_clr_bits(port, ofs->cr3, USART_CR3_RTSE);
+ stm32_usart_clr_bits(port, ofs->cr3, USART_CR3_RTSE);
mctrl_gpio_set(stm32_port->gpios, mctrl);
}
-static unsigned int stm32_get_mctrl(struct uart_port *port)
+static unsigned int stm32_usart_get_mctrl(struct uart_port *port)
{
struct stm32_port *stm32_port = to_stm32_port(port);
unsigned int ret;
@@ -529,23 +544,23 @@ static unsigned int stm32_get_mctrl(struct uart_port *port)
return mctrl_gpio_get(stm32_port->gpios, &ret);
}
-static void stm32_enable_ms(struct uart_port *port)
+static void stm32_usart_enable_ms(struct uart_port *port)
{
mctrl_gpio_enable_ms(to_stm32_port(port)->gpios);
}
-static void stm32_disable_ms(struct uart_port *port)
+static void stm32_usart_disable_ms(struct uart_port *port)
{
mctrl_gpio_disable_ms(to_stm32_port(port)->gpios);
}
/* Transmit stop */
-static void stm32_stop_tx(struct uart_port *port)
+static void stm32_usart_stop_tx(struct uart_port *port)
{
struct stm32_port *stm32_port = to_stm32_port(port);
struct serial_rs485 *rs485conf = &port->rs485;
- stm32_tx_interrupt_disable(port);
+ stm32_usart_tx_interrupt_disable(port);
if (rs485conf->flags & SER_RS485_ENABLED) {
if (rs485conf->flags & SER_RS485_RTS_ON_SEND) {
@@ -559,7 +574,7 @@ static void stm32_stop_tx(struct uart_port *port)
}
/* There are probably characters waiting to be transmitted. */
-static void stm32_start_tx(struct uart_port *port)
+static void stm32_usart_start_tx(struct uart_port *port)
{
struct stm32_port *stm32_port = to_stm32_port(port);
struct serial_rs485 *rs485conf = &port->rs485;
@@ -578,102 +593,91 @@ static void stm32_start_tx(struct uart_port *port)
}
}
- stm32_transmit_chars(port);
+ stm32_usart_transmit_chars(port);
}
/* Throttle the remote when input buffer is about to overflow. */
-static void stm32_throttle(struct uart_port *port)
+static void stm32_usart_throttle(struct uart_port *port)
{
struct stm32_port *stm32_port = to_stm32_port(port);
- struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
unsigned long flags;
spin_lock_irqsave(&port->lock, flags);
- stm32_clr_bits(port, ofs->cr1, stm32_port->cr1_irq);
+ stm32_usart_clr_bits(port, ofs->cr1, stm32_port->cr1_irq);
if (stm32_port->cr3_irq)
- stm32_clr_bits(port, ofs->cr3, stm32_port->cr3_irq);
+ stm32_usart_clr_bits(port, ofs->cr3, stm32_port->cr3_irq);
spin_unlock_irqrestore(&port->lock, flags);
}
/* Unthrottle the remote, the input buffer can now accept data. */
-static void stm32_unthrottle(struct uart_port *port)
+static void stm32_usart_unthrottle(struct uart_port *port)
{
struct stm32_port *stm32_port = to_stm32_port(port);
- struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
unsigned long flags;
spin_lock_irqsave(&port->lock, flags);
- stm32_set_bits(port, ofs->cr1, stm32_port->cr1_irq);
+ stm32_usart_set_bits(port, ofs->cr1, stm32_port->cr1_irq);
if (stm32_port->cr3_irq)
- stm32_set_bits(port, ofs->cr3, stm32_port->cr3_irq);
+ stm32_usart_set_bits(port, ofs->cr3, stm32_port->cr3_irq);
spin_unlock_irqrestore(&port->lock, flags);
}
/* Receive stop */
-static void stm32_stop_rx(struct uart_port *port)
+static void stm32_usart_stop_rx(struct uart_port *port)
{
struct stm32_port *stm32_port = to_stm32_port(port);
- struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
- stm32_clr_bits(port, ofs->cr1, stm32_port->cr1_irq);
+ stm32_usart_clr_bits(port, ofs->cr1, stm32_port->cr1_irq);
if (stm32_port->cr3_irq)
- stm32_clr_bits(port, ofs->cr3, stm32_port->cr3_irq);
-
+ stm32_usart_clr_bits(port, ofs->cr3, stm32_port->cr3_irq);
}
/* Handle breaks - ignored by us */
-static void stm32_break_ctl(struct uart_port *port, int break_state)
+static void stm32_usart_break_ctl(struct uart_port *port, int break_state)
{
}
-static int stm32_startup(struct uart_port *port)
+static int stm32_usart_startup(struct uart_port *port)
{
struct stm32_port *stm32_port = to_stm32_port(port);
- struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ const struct stm32_usart_config *cfg = &stm32_port->info->cfg;
const char *name = to_platform_device(port->dev)->name;
u32 val;
int ret;
- ret = request_threaded_irq(port->irq, stm32_interrupt,
- stm32_threaded_interrupt,
+ ret = request_threaded_irq(port->irq, stm32_usart_interrupt,
+ stm32_usart_threaded_interrupt,
IRQF_NO_SUSPEND, name, port);
if (ret)
return ret;
/* RX FIFO Flush */
if (ofs->rqr != UNDEF_REG)
- stm32_set_bits(port, ofs->rqr, USART_RQR_RXFRQ);
+ writel_relaxed(USART_RQR_RXFRQ, port->membase + ofs->rqr);
- /* Tx and RX FIFO configuration */
- if (stm32_port->fifoen) {
- val = readl_relaxed(port->membase + ofs->cr3);
- val &= ~(USART_CR3_TXFTCFG_MASK | USART_CR3_RXFTCFG_MASK);
- val |= USART_CR3_TXFTCFG_HALF << USART_CR3_TXFTCFG_SHIFT;
- val |= USART_CR3_RXFTCFG_HALF << USART_CR3_RXFTCFG_SHIFT;
- writel_relaxed(val, port->membase + ofs->cr3);
- }
-
- /* RX FIFO enabling */
- val = stm32_port->cr1_irq | USART_CR1_RE;
- if (stm32_port->fifoen)
- val |= USART_CR1_FIFOEN;
- stm32_set_bits(port, ofs->cr1, val);
+ /* RX enabling */
+ val = stm32_port->cr1_irq | USART_CR1_RE | BIT(cfg->uart_enable_bit);
+ stm32_usart_set_bits(port, ofs->cr1, val);
return 0;
}
-static void stm32_shutdown(struct uart_port *port)
+static void stm32_usart_shutdown(struct uart_port *port)
{
struct stm32_port *stm32_port = to_stm32_port(port);
- struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
- struct stm32_usart_config *cfg = &stm32_port->info->cfg;
+ const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ const struct stm32_usart_config *cfg = &stm32_port->info->cfg;
u32 val, isr;
int ret;
/* Disable modem control interrupts */
- stm32_disable_ms(port);
+ stm32_usart_disable_ms(port);
val = USART_CR1_TXEIE | USART_CR1_TE;
val |= stm32_port->cr1_irq | USART_CR1_RE;
@@ -688,12 +692,17 @@ static void stm32_shutdown(struct uart_port *port)
if (ret)
dev_err(port->dev, "transmission complete not set\n");
- stm32_clr_bits(port, ofs->cr1, val);
+ /* flush RX & TX FIFO */
+ if (ofs->rqr != UNDEF_REG)
+ writel_relaxed(USART_RQR_TXFRQ | USART_RQR_RXFRQ,
+ port->membase + ofs->rqr);
+
+ stm32_usart_clr_bits(port, ofs->cr1, val);
free_irq(port->irq, port);
}
-static unsigned int stm32_get_databits(struct ktermios *termios)
+static unsigned int stm32_usart_get_databits(struct ktermios *termios)
{
unsigned int bits;
@@ -723,18 +732,20 @@ static unsigned int stm32_get_databits(struct ktermios *termios)
return bits;
}
-static void stm32_set_termios(struct uart_port *port, struct ktermios *termios,
- struct ktermios *old)
+static void stm32_usart_set_termios(struct uart_port *port,
+ struct ktermios *termios,
+ struct ktermios *old)
{
struct stm32_port *stm32_port = to_stm32_port(port);
- struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
- struct stm32_usart_config *cfg = &stm32_port->info->cfg;
+ const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ const struct stm32_usart_config *cfg = &stm32_port->info->cfg;
struct serial_rs485 *rs485conf = &port->rs485;
unsigned int baud, bits;
u32 usartdiv, mantissa, fraction, oversampling;
tcflag_t cflag = termios->c_cflag;
- u32 cr1, cr2, cr3;
+ u32 cr1, cr2, cr3, isr;
unsigned long flags;
+ int ret;
if (!stm32_port->hw_flow_control)
cflag &= ~CRTSCTS;
@@ -743,26 +754,41 @@ static void stm32_set_termios(struct uart_port *port, struct ktermios *termios,
spin_lock_irqsave(&port->lock, flags);
+ ret = readl_relaxed_poll_timeout_atomic(port->membase + ofs->isr,
+ isr,
+ (isr & USART_SR_TC),
+ 10, 100000);
+
+ /* Send the TC error message only when ISR_TC is not set. */
+ if (ret)
+ dev_err(port->dev, "Transmission is not complete\n");
+
/* Stop serial port and reset value */
writel_relaxed(0, port->membase + ofs->cr1);
/* flush RX & TX FIFO */
if (ofs->rqr != UNDEF_REG)
- stm32_set_bits(port, ofs->rqr,
- USART_RQR_TXFRQ | USART_RQR_RXFRQ);
+ writel_relaxed(USART_RQR_TXFRQ | USART_RQR_RXFRQ,
+ port->membase + ofs->rqr);
cr1 = USART_CR1_TE | USART_CR1_RE;
if (stm32_port->fifoen)
cr1 |= USART_CR1_FIFOEN;
cr2 = 0;
+
+ /* Tx and RX FIFO configuration */
cr3 = readl_relaxed(port->membase + ofs->cr3);
- cr3 &= USART_CR3_TXFTIE | USART_CR3_RXFTCFG_MASK | USART_CR3_RXFTIE
- | USART_CR3_TXFTCFG_MASK;
+ cr3 &= USART_CR3_TXFTIE | USART_CR3_RXFTIE;
+ if (stm32_port->fifoen) {
+ cr3 &= ~(USART_CR3_TXFTCFG_MASK | USART_CR3_RXFTCFG_MASK);
+ cr3 |= USART_CR3_TXFTCFG_HALF << USART_CR3_TXFTCFG_SHIFT;
+ cr3 |= USART_CR3_RXFTCFG_HALF << USART_CR3_RXFTCFG_SHIFT;
+ }
if (cflag & CSTOPB)
cr2 |= USART_CR2_STOP_2B;
- bits = stm32_get_databits(termios);
+ bits = stm32_usart_get_databits(termios);
stm32_port->rdr_mask = (BIT(bits) - 1);
if (cflag & PARENB) {
@@ -813,12 +839,6 @@ static void stm32_set_termios(struct uart_port *port, struct ktermios *termios,
cr3 |= USART_CR3_CTSE | USART_CR3_RTSE;
}
- /* Handle modem control interrupts */
- if (UART_ENABLE_MS(port, termios->c_cflag))
- stm32_enable_ms(port);
- else
- stm32_disable_ms(port);
-
usartdiv = DIV_ROUND_CLOSEST(port->uartclk, baud);
/*
@@ -830,11 +850,11 @@ static void stm32_set_termios(struct uart_port *port, struct ktermios *termios,
if (usartdiv < 16) {
oversampling = 8;
cr1 |= USART_CR1_OVER8;
- stm32_set_bits(port, ofs->cr1, USART_CR1_OVER8);
+ stm32_usart_set_bits(port, ofs->cr1, USART_CR1_OVER8);
} else {
oversampling = 16;
cr1 &= ~USART_CR1_OVER8;
- stm32_clr_bits(port, ofs->cr1, USART_CR1_OVER8);
+ stm32_usart_clr_bits(port, ofs->cr1, USART_CR1_OVER8);
}
mantissa = (usartdiv / oversampling) << USART_BRR_DIV_M_SHIFT;
@@ -871,9 +891,10 @@ static void stm32_set_termios(struct uart_port *port, struct ktermios *termios,
cr3 |= USART_CR3_DMAR;
if (rs485conf->flags & SER_RS485_ENABLED) {
- stm32_config_reg_rs485(&cr1, &cr3,
- rs485conf->delay_rts_before_send,
- rs485conf->delay_rts_after_send, baud);
+ stm32_usart_config_reg_rs485(&cr1, &cr3,
+ rs485conf->delay_rts_before_send,
+ rs485conf->delay_rts_after_send,
+ baud);
if (rs485conf->flags & SER_RS485_RTS_ON_SEND) {
cr3 &= ~USART_CR3_DEP;
rs485conf->flags &= ~SER_RS485_RTS_AFTER_SEND;
@@ -887,48 +908,60 @@ static void stm32_set_termios(struct uart_port *port, struct ktermios *termios,
cr1 &= ~(USART_CR1_DEDT_MASK | USART_CR1_DEAT_MASK);
}
+ /* Configure wake up from low power on start bit detection */
+ if (stm32_port->wakeirq > 0) {
+ cr3 &= ~USART_CR3_WUS_MASK;
+ cr3 |= USART_CR3_WUS_START_BIT;
+ }
+
writel_relaxed(cr3, port->membase + ofs->cr3);
writel_relaxed(cr2, port->membase + ofs->cr2);
writel_relaxed(cr1, port->membase + ofs->cr1);
- stm32_set_bits(port, ofs->cr1, BIT(cfg->uart_enable_bit));
+ stm32_usart_set_bits(port, ofs->cr1, BIT(cfg->uart_enable_bit));
spin_unlock_irqrestore(&port->lock, flags);
+
+ /* Handle modem control interrupts */
+ if (UART_ENABLE_MS(port, termios->c_cflag))
+ stm32_usart_enable_ms(port);
+ else
+ stm32_usart_disable_ms(port);
}
-static const char *stm32_type(struct uart_port *port)
+static const char *stm32_usart_type(struct uart_port *port)
{
return (port->type == PORT_STM32) ? DRIVER_NAME : NULL;
}
-static void stm32_release_port(struct uart_port *port)
+static void stm32_usart_release_port(struct uart_port *port)
{
}
-static int stm32_request_port(struct uart_port *port)
+static int stm32_usart_request_port(struct uart_port *port)
{
return 0;
}
-static void stm32_config_port(struct uart_port *port, int flags)
+static void stm32_usart_config_port(struct uart_port *port, int flags)
{
if (flags & UART_CONFIG_TYPE)
port->type = PORT_STM32;
}
static int
-stm32_verify_port(struct uart_port *port, struct serial_struct *ser)
+stm32_usart_verify_port(struct uart_port *port, struct serial_struct *ser)
{
/* No user changeable parameters */
return -EINVAL;
}
-static void stm32_pm(struct uart_port *port, unsigned int state,
- unsigned int oldstate)
+static void stm32_usart_pm(struct uart_port *port, unsigned int state,
+ unsigned int oldstate)
{
struct stm32_port *stm32port = container_of(port,
struct stm32_port, port);
- struct stm32_usart_offsets *ofs = &stm32port->info->ofs;
- struct stm32_usart_config *cfg = &stm32port->info->cfg;
+ const struct stm32_usart_offsets *ofs = &stm32port->info->ofs;
+ const struct stm32_usart_config *cfg = &stm32port->info->cfg;
unsigned long flags = 0;
switch (state) {
@@ -937,7 +970,7 @@ static void stm32_pm(struct uart_port *port, unsigned int state,
break;
case UART_PM_STATE_OFF:
spin_lock_irqsave(&port->lock, flags);
- stm32_clr_bits(port, ofs->cr1, BIT(cfg->uart_enable_bit));
+ stm32_usart_clr_bits(port, ofs->cr1, BIT(cfg->uart_enable_bit));
spin_unlock_irqrestore(&port->lock, flags);
pm_runtime_put_sync(port->dev);
break;
@@ -945,49 +978,48 @@ static void stm32_pm(struct uart_port *port, unsigned int state,
}
static const struct uart_ops stm32_uart_ops = {
- .tx_empty = stm32_tx_empty,
- .set_mctrl = stm32_set_mctrl,
- .get_mctrl = stm32_get_mctrl,
- .stop_tx = stm32_stop_tx,
- .start_tx = stm32_start_tx,
- .throttle = stm32_throttle,
- .unthrottle = stm32_unthrottle,
- .stop_rx = stm32_stop_rx,
- .enable_ms = stm32_enable_ms,
- .break_ctl = stm32_break_ctl,
- .startup = stm32_startup,
- .shutdown = stm32_shutdown,
- .set_termios = stm32_set_termios,
- .pm = stm32_pm,
- .type = stm32_type,
- .release_port = stm32_release_port,
- .request_port = stm32_request_port,
- .config_port = stm32_config_port,
- .verify_port = stm32_verify_port,
+ .tx_empty = stm32_usart_tx_empty,
+ .set_mctrl = stm32_usart_set_mctrl,
+ .get_mctrl = stm32_usart_get_mctrl,
+ .stop_tx = stm32_usart_stop_tx,
+ .start_tx = stm32_usart_start_tx,
+ .throttle = stm32_usart_throttle,
+ .unthrottle = stm32_usart_unthrottle,
+ .stop_rx = stm32_usart_stop_rx,
+ .enable_ms = stm32_usart_enable_ms,
+ .break_ctl = stm32_usart_break_ctl,
+ .startup = stm32_usart_startup,
+ .shutdown = stm32_usart_shutdown,
+ .set_termios = stm32_usart_set_termios,
+ .pm = stm32_usart_pm,
+ .type = stm32_usart_type,
+ .release_port = stm32_usart_release_port,
+ .request_port = stm32_usart_request_port,
+ .config_port = stm32_usart_config_port,
+ .verify_port = stm32_usart_verify_port,
};
-static int stm32_init_port(struct stm32_port *stm32port,
- struct platform_device *pdev)
+static int stm32_usart_init_port(struct stm32_port *stm32port,
+ struct platform_device *pdev)
{
struct uart_port *port = &stm32port->port;
struct resource *res;
int ret;
+ ret = platform_get_irq(pdev, 0);
+ if (ret <= 0)
+ return ret ? : -ENODEV;
+
port->iotype = UPIO_MEM;
port->flags = UPF_BOOT_AUTOCONF;
port->ops = &stm32_uart_ops;
port->dev = &pdev->dev;
port->fifosize = stm32port->info->cfg.fifosize;
port->has_sysrq = IS_ENABLED(CONFIG_SERIAL_STM32_CONSOLE);
-
- ret = platform_get_irq(pdev, 0);
- if (ret <= 0)
- return ret ? : -ENODEV;
port->irq = ret;
+ port->rs485_config = stm32_usart_config_rs485;
- port->rs485_config = stm32_config_rs485;
-
- ret = stm32_init_rs485(port, pdev);
+ ret = stm32_usart_init_rs485(port, pdev);
if (ret)
return ret;
@@ -1046,7 +1078,7 @@ static int stm32_init_port(struct stm32_port *stm32port,
return ret;
}
-static struct stm32_port *stm32_of_get_stm32_port(struct platform_device *pdev)
+static struct stm32_port *stm32_usart_of_get_port(struct platform_device *pdev)
{
struct device_node *np = pdev->dev.of_node;
int id;
@@ -1084,10 +1116,10 @@ static const struct of_device_id stm32_match[] = {
MODULE_DEVICE_TABLE(of, stm32_match);
#endif
-static int stm32_of_dma_rx_probe(struct stm32_port *stm32port,
- struct platform_device *pdev)
+static int stm32_usart_of_dma_rx_probe(struct stm32_port *stm32port,
+ struct platform_device *pdev)
{
- struct stm32_usart_offsets *ofs = &stm32port->info->ofs;
+ const struct stm32_usart_offsets *ofs = &stm32port->info->ofs;
struct uart_port *port = &stm32port->port;
struct device *dev = &pdev->dev;
struct dma_slave_config config;
@@ -1101,8 +1133,8 @@ static int stm32_of_dma_rx_probe(struct stm32_port *stm32port,
return -ENODEV;
}
stm32port->rx_buf = dma_alloc_coherent(&pdev->dev, RX_BUF_L,
- &stm32port->rx_dma_buf,
- GFP_KERNEL);
+ &stm32port->rx_dma_buf,
+ GFP_KERNEL);
if (!stm32port->rx_buf) {
ret = -ENOMEM;
goto alloc_err;
@@ -1159,10 +1191,10 @@ static int stm32_of_dma_rx_probe(struct stm32_port *stm32port,
return ret;
}
-static int stm32_of_dma_tx_probe(struct stm32_port *stm32port,
- struct platform_device *pdev)
+static int stm32_usart_of_dma_tx_probe(struct stm32_port *stm32port,
+ struct platform_device *pdev)
{
- struct stm32_usart_offsets *ofs = &stm32port->info->ofs;
+ const struct stm32_usart_offsets *ofs = &stm32port->info->ofs;
struct uart_port *port = &stm32port->port;
struct device *dev = &pdev->dev;
struct dma_slave_config config;
@@ -1177,8 +1209,8 @@ static int stm32_of_dma_tx_probe(struct stm32_port *stm32port,
return -ENODEV;
}
stm32port->tx_buf = dma_alloc_coherent(&pdev->dev, TX_BUF_L,
- &stm32port->tx_dma_buf,
- GFP_KERNEL);
+ &stm32port->tx_dma_buf,
+ GFP_KERNEL);
if (!stm32port->tx_buf) {
ret = -ENOMEM;
goto alloc_err;
@@ -1210,23 +1242,20 @@ static int stm32_of_dma_tx_probe(struct stm32_port *stm32port,
return ret;
}
-static int stm32_serial_probe(struct platform_device *pdev)
+static int stm32_usart_serial_probe(struct platform_device *pdev)
{
- const struct of_device_id *match;
struct stm32_port *stm32port;
int ret;
- stm32port = stm32_of_get_stm32_port(pdev);
+ stm32port = stm32_usart_of_get_port(pdev);
if (!stm32port)
return -ENODEV;
- match = of_match_device(stm32_match, &pdev->dev);
- if (match && match->data)
- stm32port->info = (struct stm32_usart_info *)match->data;
- else
+ stm32port->info = of_device_get_match_data(&pdev->dev);
+ if (!stm32port->info)
return -EINVAL;
- ret = stm32_init_port(stm32port, pdev);
+ ret = stm32_usart_init_port(stm32port, pdev);
if (ret)
return ret;
@@ -1243,15 +1272,11 @@ static int stm32_serial_probe(struct platform_device *pdev)
device_set_wakeup_enable(&pdev->dev, false);
}
- ret = uart_add_one_port(&stm32_usart_driver, &stm32port->port);
- if (ret)
- goto err_wirq;
-
- ret = stm32_of_dma_rx_probe(stm32port, pdev);
+ ret = stm32_usart_of_dma_rx_probe(stm32port, pdev);
if (ret)
dev_info(&pdev->dev, "interrupt mode used for rx (no dma)\n");
- ret = stm32_of_dma_tx_probe(stm32port, pdev);
+ ret = stm32_usart_of_dma_tx_probe(stm32port, pdev);
if (ret)
dev_info(&pdev->dev, "interrupt mode used for tx (no dma)\n");
@@ -1260,11 +1285,40 @@ static int stm32_serial_probe(struct platform_device *pdev)
pm_runtime_get_noresume(&pdev->dev);
pm_runtime_set_active(&pdev->dev);
pm_runtime_enable(&pdev->dev);
+
+ ret = uart_add_one_port(&stm32_usart_driver, &stm32port->port);
+ if (ret)
+ goto err_port;
+
pm_runtime_put_sync(&pdev->dev);
return 0;
-err_wirq:
+err_port:
+ pm_runtime_disable(&pdev->dev);
+ pm_runtime_set_suspended(&pdev->dev);
+ pm_runtime_put_noidle(&pdev->dev);
+
+ if (stm32port->rx_ch) {
+ dmaengine_terminate_async(stm32port->rx_ch);
+ dma_release_channel(stm32port->rx_ch);
+ }
+
+ if (stm32port->rx_dma_buf)
+ dma_free_coherent(&pdev->dev,
+ RX_BUF_L, stm32port->rx_buf,
+ stm32port->rx_dma_buf);
+
+ if (stm32port->tx_ch) {
+ dmaengine_terminate_async(stm32port->tx_ch);
+ dma_release_channel(stm32port->tx_ch);
+ }
+
+ if (stm32port->tx_dma_buf)
+ dma_free_coherent(&pdev->dev,
+ TX_BUF_L, stm32port->tx_buf,
+ stm32port->tx_dma_buf);
+
if (stm32port->wakeirq > 0)
dev_pm_clear_wake_irq(&pdev->dev);
@@ -1278,29 +1332,40 @@ static int stm32_serial_probe(struct platform_device *pdev)
return ret;
}
-static int stm32_serial_remove(struct platform_device *pdev)
+static int stm32_usart_serial_remove(struct platform_device *pdev)
{
struct uart_port *port = platform_get_drvdata(pdev);
struct stm32_port *stm32_port = to_stm32_port(port);
- struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
int err;
pm_runtime_get_sync(&pdev->dev);
+ err = uart_remove_one_port(&stm32_usart_driver, port);
+ if (err)
+ return(err);
- stm32_clr_bits(port, ofs->cr3, USART_CR3_DMAR);
+ pm_runtime_disable(&pdev->dev);
+ pm_runtime_set_suspended(&pdev->dev);
+ pm_runtime_put_noidle(&pdev->dev);
- if (stm32_port->rx_ch)
+ stm32_usart_clr_bits(port, ofs->cr3, USART_CR3_DMAR);
+
+ if (stm32_port->rx_ch) {
+ dmaengine_terminate_async(stm32_port->rx_ch);
dma_release_channel(stm32_port->rx_ch);
+ }
if (stm32_port->rx_dma_buf)
dma_free_coherent(&pdev->dev,
RX_BUF_L, stm32_port->rx_buf,
stm32_port->rx_dma_buf);
- stm32_clr_bits(port, ofs->cr3, USART_CR3_DMAT);
+ stm32_usart_clr_bits(port, ofs->cr3, USART_CR3_DMAT);
- if (stm32_port->tx_ch)
+ if (stm32_port->tx_ch) {
+ dmaengine_terminate_async(stm32_port->tx_ch);
dma_release_channel(stm32_port->tx_ch);
+ }
if (stm32_port->tx_dma_buf)
dma_free_coherent(&pdev->dev,
@@ -1314,20 +1379,14 @@ static int stm32_serial_remove(struct platform_device *pdev)
clk_disable_unprepare(stm32_port->clk);
- err = uart_remove_one_port(&stm32_usart_driver, port);
-
- pm_runtime_disable(&pdev->dev);
- pm_runtime_put_noidle(&pdev->dev);
-
- return err;
+ return 0;
}
-
#ifdef CONFIG_SERIAL_STM32_CONSOLE
-static void stm32_console_putchar(struct uart_port *port, int ch)
+static void stm32_usart_console_putchar(struct uart_port *port, int ch)
{
struct stm32_port *stm32_port = to_stm32_port(port);
- struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
while (!(readl_relaxed(port->membase + ofs->isr) & USART_SR_TXE))
cpu_relax();
@@ -1335,12 +1394,13 @@ static void stm32_console_putchar(struct uart_port *port, int ch)
writel_relaxed(ch, port->membase + ofs->tdr);
}
-static void stm32_console_write(struct console *co, const char *s, unsigned cnt)
+static void stm32_usart_console_write(struct console *co, const char *s,
+ unsigned int cnt)
{
struct uart_port *port = &stm32_ports[co->index].port;
struct stm32_port *stm32_port = to_stm32_port(port);
- struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
- struct stm32_usart_config *cfg = &stm32_port->info->cfg;
+ const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ const struct stm32_usart_config *cfg = &stm32_port->info->cfg;
unsigned long flags;
u32 old_cr1, new_cr1;
int locked = 1;
@@ -1359,7 +1419,7 @@ static void stm32_console_write(struct console *co, const char *s, unsigned cnt)
new_cr1 |= USART_CR1_TE | BIT(cfg->uart_enable_bit);
writel_relaxed(new_cr1, port->membase + ofs->cr1);
- uart_console_write(port, s, cnt, stm32_console_putchar);
+ uart_console_write(port, s, cnt, stm32_usart_console_putchar);
/* Restore interrupt state */
writel_relaxed(old_cr1, port->membase + ofs->cr1);
@@ -1369,7 +1429,7 @@ static void stm32_console_write(struct console *co, const char *s, unsigned cnt)
local_irq_restore(flags);
}
-static int stm32_console_setup(struct console *co, char *options)
+static int stm32_usart_console_setup(struct console *co, char *options)
{
struct stm32_port *stm32port;
int baud = 9600;
@@ -1388,7 +1448,7 @@ static int stm32_console_setup(struct console *co, char *options)
* this to be called during the uart port registration when the
* driver gets probed and the port should be mapped at that point.
*/
- if (stm32port->port.mapbase == 0 || stm32port->port.membase == NULL)
+ if (stm32port->port.mapbase == 0 || !stm32port->port.membase)
return -ENXIO;
if (options)
@@ -1400,8 +1460,8 @@ static int stm32_console_setup(struct console *co, char *options)
static struct console stm32_console = {
.name = STM32_SERIAL_NAME,
.device = uart_console_device,
- .write = stm32_console_write,
- .setup = stm32_console_setup,
+ .write = stm32_usart_console_write,
+ .setup = stm32_usart_console_setup,
.flags = CON_PRINTBUFFER,
.index = -1,
.data = &stm32_usart_driver,
@@ -1422,41 +1482,38 @@ static struct uart_driver stm32_usart_driver = {
.cons = STM32_SERIAL_CONSOLE,
};
-static void __maybe_unused stm32_serial_enable_wakeup(struct uart_port *port,
- bool enable)
+static void __maybe_unused stm32_usart_serial_en_wakeup(struct uart_port *port,
+ bool enable)
{
struct stm32_port *stm32_port = to_stm32_port(port);
- struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
- struct stm32_usart_config *cfg = &stm32_port->info->cfg;
- u32 val;
+ const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
if (stm32_port->wakeirq <= 0)
return;
+ /*
+ * Enable low-power wake-up and wake-up irq if argument is set to
+ * "enable", disable low-power wake-up and wake-up irq otherwise
+ */
if (enable) {
- stm32_clr_bits(port, ofs->cr1, BIT(cfg->uart_enable_bit));
- stm32_set_bits(port, ofs->cr1, USART_CR1_UESM);
- val = readl_relaxed(port->membase + ofs->cr3);
- val &= ~USART_CR3_WUS_MASK;
- /* Enable Wake up interrupt from low power on start bit */
- val |= USART_CR3_WUS_START_BIT | USART_CR3_WUFIE;
- writel_relaxed(val, port->membase + ofs->cr3);
- stm32_set_bits(port, ofs->cr1, BIT(cfg->uart_enable_bit));
+ stm32_usart_set_bits(port, ofs->cr1, USART_CR1_UESM);
+ stm32_usart_set_bits(port, ofs->cr3, USART_CR3_WUFIE);
} else {
- stm32_clr_bits(port, ofs->cr1, USART_CR1_UESM);
+ stm32_usart_clr_bits(port, ofs->cr1, USART_CR1_UESM);
+ stm32_usart_clr_bits(port, ofs->cr3, USART_CR3_WUFIE);
}
}
-static int __maybe_unused stm32_serial_suspend(struct device *dev)
+static int __maybe_unused stm32_usart_serial_suspend(struct device *dev)
{
struct uart_port *port = dev_get_drvdata(dev);
uart_suspend_port(&stm32_usart_driver, port);
if (device_may_wakeup(dev))
- stm32_serial_enable_wakeup(port, true);
+ stm32_usart_serial_en_wakeup(port, true);
else
- stm32_serial_enable_wakeup(port, false);
+ stm32_usart_serial_en_wakeup(port, false);
/*
* When "no_console_suspend" is enabled, keep the pinctrl default state
@@ -1474,19 +1531,19 @@ static int __maybe_unused stm32_serial_suspend(struct device *dev)
return 0;
}
-static int __maybe_unused stm32_serial_resume(struct device *dev)
+static int __maybe_unused stm32_usart_serial_resume(struct device *dev)
{
struct uart_port *port = dev_get_drvdata(dev);
pinctrl_pm_select_default_state(dev);
if (device_may_wakeup(dev))
- stm32_serial_enable_wakeup(port, false);
+ stm32_usart_serial_en_wakeup(port, false);
return uart_resume_port(&stm32_usart_driver, port);
}
-static int __maybe_unused stm32_serial_runtime_suspend(struct device *dev)
+static int __maybe_unused stm32_usart_runtime_suspend(struct device *dev)
{
struct uart_port *port = dev_get_drvdata(dev);
struct stm32_port *stm32port = container_of(port,
@@ -1497,7 +1554,7 @@ static int __maybe_unused stm32_serial_runtime_suspend(struct device *dev)
return 0;
}
-static int __maybe_unused stm32_serial_runtime_resume(struct device *dev)
+static int __maybe_unused stm32_usart_runtime_resume(struct device *dev)
{
struct uart_port *port = dev_get_drvdata(dev);
struct stm32_port *stm32port = container_of(port,
@@ -1507,14 +1564,15 @@ static int __maybe_unused stm32_serial_runtime_resume(struct device *dev)
}
static const struct dev_pm_ops stm32_serial_pm_ops = {
- SET_RUNTIME_PM_OPS(stm32_serial_runtime_suspend,
- stm32_serial_runtime_resume, NULL)
- SET_SYSTEM_SLEEP_PM_OPS(stm32_serial_suspend, stm32_serial_resume)
+ SET_RUNTIME_PM_OPS(stm32_usart_runtime_suspend,
+ stm32_usart_runtime_resume, NULL)
+ SET_SYSTEM_SLEEP_PM_OPS(stm32_usart_serial_suspend,
+ stm32_usart_serial_resume)
};
static struct platform_driver stm32_serial_driver = {
- .probe = stm32_serial_probe,
- .remove = stm32_serial_remove,
+ .probe = stm32_usart_serial_probe,
+ .remove = stm32_usart_serial_remove,
.driver = {
.name = DRIVER_NAME,
.pm = &stm32_serial_pm_ops,
@@ -1522,7 +1580,7 @@ static struct platform_driver stm32_serial_driver = {
},
};
-static int __init usart_init(void)
+static int __init stm32_usart_init(void)
{
static char banner[] __initdata = "STM32 USART driver initialized";
int ret;
@@ -1540,14 +1598,14 @@ static int __init usart_init(void)
return ret;
}
-static void __exit usart_exit(void)
+static void __exit stm32_usart_exit(void)
{
platform_driver_unregister(&stm32_serial_driver);
uart_unregister_driver(&stm32_usart_driver);
}
-module_init(usart_init);
-module_exit(usart_exit);
+module_init(stm32_usart_init);
+module_exit(stm32_usart_exit);
MODULE_ALIAS("platform:" DRIVER_NAME);
MODULE_DESCRIPTION("STMicroelectronics STM32 serial port driver");
diff --git a/drivers/tty/serial/stm32-usart.h b/drivers/tty/serial/stm32-usart.h
index d4c916e..94b568aa 100644
--- a/drivers/tty/serial/stm32-usart.h
+++ b/drivers/tty/serial/stm32-usart.h
@@ -127,9 +127,6 @@ struct stm32_usart_info stm32h7_info = {
/* Dummy bits */
#define USART_SR_DUMMY_RX BIT(16)
-/* USART_ICR (F7) */
-#define USART_CR_TC BIT(6)
-
/* USART_DR */
#define USART_DR_MASK GENMASK(8, 0)
@@ -259,7 +256,7 @@ struct stm32_usart_info stm32h7_info = {
struct stm32_port {
struct uart_port port;
struct clk *clk;
- struct stm32_usart_info *info;
+ const struct stm32_usart_info *info;
struct dma_chan *rx_ch; /* dma rx channel */
dma_addr_t rx_dma_buf; /* dma rx buffer bus address */
unsigned char *rx_buf; /* dma rx buffer cpu address */
diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
index 146bd67..bc53140 100644
--- a/drivers/tty/tty_io.c
+++ b/drivers/tty/tty_io.c
@@ -2492,14 +2492,14 @@ static int send_break(struct tty_struct *tty, unsigned int duration)
* @p: pointer to result
*
* Obtain the modem status bits from the tty driver if the feature
- * is supported. Return -EINVAL if it is not available.
+ * is supported. Return -ENOTTY if it is not available.
*
* Locking: none (up to the driver)
*/
static int tty_tiocmget(struct tty_struct *tty, int __user *p)
{
- int retval = -EINVAL;
+ int retval = -ENOTTY;
if (tty->ops->tiocmget) {
retval = tty->ops->tiocmget(tty);
@@ -2517,7 +2517,7 @@ static int tty_tiocmget(struct tty_struct *tty, int __user *p)
* @p: pointer to desired bits
*
* Set the modem status bits from the tty driver if the feature
- * is supported. Return -EINVAL if it is not available.
+ * is supported. Return -ENOTTY if it is not available.
*
* Locking: none (up to the driver)
*/
@@ -2529,7 +2529,7 @@ static int tty_tiocmset(struct tty_struct *tty, unsigned int cmd,
unsigned int set, clear, val;
if (tty->ops->tiocmset == NULL)
- return -EINVAL;
+ return -ENOTTY;
retval = get_user(val, p);
if (retval)
diff --git a/drivers/tty/tty_ioctl.c b/drivers/tty/tty_ioctl.c
index e18f318..803da2d 100644
--- a/drivers/tty/tty_ioctl.c
+++ b/drivers/tty/tty_ioctl.c
@@ -443,51 +443,6 @@ static int get_termio(struct tty_struct *tty, struct termio __user *termio)
return 0;
}
-
-#ifdef TCGETX
-
-/**
- * set_termiox - set termiox fields if possible
- * @tty: terminal
- * @arg: termiox structure from user
- * @opt: option flags for ioctl type
- *
- * Implement the device calling points for the SYS5 termiox ioctl
- * interface in Linux
- */
-
-static int set_termiox(struct tty_struct *tty, void __user *arg, int opt)
-{
- struct termiox tnew;
- struct tty_ldisc *ld;
-
- if (tty->termiox == NULL)
- return -EINVAL;
- if (copy_from_user(&tnew, arg, sizeof(struct termiox)))
- return -EFAULT;
-
- ld = tty_ldisc_ref(tty);
- if (ld != NULL) {
- if ((opt & TERMIOS_FLUSH) && ld->ops->flush_buffer)
- ld->ops->flush_buffer(tty);
- tty_ldisc_deref(ld);
- }
- if (opt & TERMIOS_WAIT) {
- tty_wait_until_sent(tty, 0);
- if (signal_pending(current))
- return -ERESTARTSYS;
- }
-
- down_write(&tty->termios_rwsem);
- if (tty->ops->set_termiox)
- tty->ops->set_termiox(tty, &tnew);
- up_write(&tty->termios_rwsem);
- return 0;
-}
-
-#endif
-
-
#ifdef TIOCGETP
/*
* These are deprecated, but there is limited support..
@@ -815,24 +770,12 @@ int tty_mode_ioctl(struct tty_struct *tty, struct file *file,
return ret;
#endif
#ifdef TCGETX
- case TCGETX: {
- struct termiox ktermx;
- if (real_tty->termiox == NULL)
- return -EINVAL;
- down_read(&real_tty->termios_rwsem);
- memcpy(&ktermx, real_tty->termiox, sizeof(struct termiox));
- up_read(&real_tty->termios_rwsem);
- if (copy_to_user(p, &ktermx, sizeof(struct termiox)))
- ret = -EFAULT;
- return ret;
- }
+ case TCGETX:
case TCSETX:
- return set_termiox(real_tty, p, 0);
case TCSETXW:
- return set_termiox(real_tty, p, TERMIOS_WAIT);
case TCSETXF:
- return set_termiox(real_tty, p, TERMIOS_FLUSH);
-#endif
+ return -ENOTTY;
+#endif
case TIOCGSOFTCAR:
copy_termios(real_tty, &kterm);
ret = put_user((kterm.c_cflag & CLOCAL) ? 1 : 0,
diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
index d04a162..06757b1 100644
--- a/drivers/tty/vt/vt.c
+++ b/drivers/tty/vt/vt.c
@@ -1172,7 +1172,7 @@ static inline int resize_screen(struct vc_data *vc, int width, int height,
/* Resizes the resolution of the display adapater */
int err = 0;
- if (vc->vc_mode != KD_GRAPHICS && vc->vc_sw->con_resize)
+ if (vc->vc_sw->con_resize)
err = vc->vc_sw->con_resize(vc, width, height, user);
return err;
@@ -1382,6 +1382,7 @@ struct vc_data *vc_deallocate(unsigned int currcons)
atomic_notifier_call_chain(&vt_notifier_list, VT_DEALLOCATE, &param);
vcs_remove_sysfs(currcons);
visual_deinit(vc);
+ con_free_unimap(vc);
put_pid(vc->vt_pid);
vc_uniscr_set(vc, NULL);
kfree(vc->vc_screenbuf);
diff --git a/drivers/tty/vt/vt_ioctl.c b/drivers/tty/vt/vt_ioctl.c
index 5f61b25a9..09b8d02 100644
--- a/drivers/tty/vt/vt_ioctl.c
+++ b/drivers/tty/vt/vt_ioctl.c
@@ -771,21 +771,58 @@ static int vt_resizex(struct vc_data *vc, struct vt_consize __user *cs)
if (copy_from_user(&v, cs, sizeof(struct vt_consize)))
return -EFAULT;
- if (v.v_vlin)
- pr_info_once("\"struct vt_consize\"->v_vlin is ignored. Please report if you need this.\n");
- if (v.v_clin)
- pr_info_once("\"struct vt_consize\"->v_clin is ignored. Please report if you need this.\n");
+ /* FIXME: Should check the copies properly */
+ if (!v.v_vlin)
+ v.v_vlin = vc->vc_scan_lines;
- console_lock();
- for (i = 0; i < MAX_NR_CONSOLES; i++) {
- vc = vc_cons[i].d;
-
- if (vc) {
- vc->vc_resize_user = 1;
- vc_resize(vc, v.v_cols, v.v_rows);
+ if (v.v_clin) {
+ int rows = v.v_vlin / v.v_clin;
+ if (v.v_rows != rows) {
+ if (v.v_rows) /* Parameters don't add up */
+ return -EINVAL;
+ v.v_rows = rows;
}
}
- console_unlock();
+
+ if (v.v_vcol && v.v_ccol) {
+ int cols = v.v_vcol / v.v_ccol;
+ if (v.v_cols != cols) {
+ if (v.v_cols)
+ return -EINVAL;
+ v.v_cols = cols;
+ }
+ }
+
+ if (v.v_clin > 32)
+ return -EINVAL;
+
+ for (i = 0; i < MAX_NR_CONSOLES; i++) {
+ struct vc_data *vcp;
+
+ if (!vc_cons[i].d)
+ continue;
+ console_lock();
+ vcp = vc_cons[i].d;
+ if (vcp) {
+ int ret;
+ int save_scan_lines = vcp->vc_scan_lines;
+ int save_cell_height = vcp->vc_cell_height;
+
+ if (v.v_vlin)
+ vcp->vc_scan_lines = v.v_vlin;
+ if (v.v_clin)
+ vcp->vc_cell_height = v.v_clin;
+ vcp->vc_resize_user = 1;
+ ret = vc_resize(vcp, v.v_cols, v.v_rows);
+ if (ret) {
+ vcp->vc_scan_lines = save_scan_lines;
+ vcp->vc_cell_height = save_cell_height;
+ console_unlock();
+ return ret;
+ }
+ }
+ console_unlock();
+ }
return 0;
}
diff --git a/drivers/uio/uio_hv_generic.c b/drivers/uio/uio_hv_generic.c
index 4dae232..c31febe 100644
--- a/drivers/uio/uio_hv_generic.c
+++ b/drivers/uio/uio_hv_generic.c
@@ -296,8 +296,10 @@ hv_uio_probe(struct hv_device *dev,
ret = vmbus_establish_gpadl(channel, pdata->recv_buf,
RECV_BUFFER_SIZE, &pdata->recv_gpadl);
- if (ret)
+ if (ret) {
+ vfree(pdata->recv_buf);
goto fail_close;
+ }
/* put Global Physical Address Label in name */
snprintf(pdata->recv_name, sizeof(pdata->recv_name),
@@ -316,8 +318,10 @@ hv_uio_probe(struct hv_device *dev,
ret = vmbus_establish_gpadl(channel, pdata->send_buf,
SEND_BUFFER_SIZE, &pdata->send_gpadl);
- if (ret)
+ if (ret) {
+ vfree(pdata->send_buf);
goto fail_close;
+ }
snprintf(pdata->send_name, sizeof(pdata->send_name),
"send:%u", pdata->send_gpadl);
diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
index e793593..6fbabf5 100644
--- a/drivers/usb/class/cdc-acm.c
+++ b/drivers/usb/class/cdc-acm.c
@@ -929,8 +929,7 @@ static int get_serial_info(struct tty_struct *tty, struct serial_struct *ss)
{
struct acm *acm = tty->driver_data;
- ss->xmit_fifo_size = acm->writesize;
- ss->baud_base = le32_to_cpu(acm->line.dwDTERate);
+ ss->line = acm->minor;
ss->close_delay = jiffies_to_msecs(acm->port.close_delay) / 10;
ss->closing_wait = acm->port.closing_wait == ASYNC_CLOSING_WAIT_NONE ?
ASYNC_CLOSING_WAIT_NONE :
@@ -942,7 +941,6 @@ static int set_serial_info(struct tty_struct *tty, struct serial_struct *ss)
{
struct acm *acm = tty->driver_data;
unsigned int closing_wait, close_delay;
- unsigned int old_closing_wait, old_close_delay;
int retval = 0;
close_delay = msecs_to_jiffies(ss->close_delay * 10);
@@ -950,20 +948,12 @@ static int set_serial_info(struct tty_struct *tty, struct serial_struct *ss)
ASYNC_CLOSING_WAIT_NONE :
msecs_to_jiffies(ss->closing_wait * 10);
- /* we must redo the rounding here, so that the values match */
- old_close_delay = jiffies_to_msecs(acm->port.close_delay) / 10;
- old_closing_wait = acm->port.closing_wait == ASYNC_CLOSING_WAIT_NONE ?
- ASYNC_CLOSING_WAIT_NONE :
- jiffies_to_msecs(acm->port.closing_wait) / 10;
-
mutex_lock(&acm->port.mutex);
if (!capable(CAP_SYS_ADMIN)) {
- if ((ss->close_delay != old_close_delay) ||
- (ss->closing_wait != old_closing_wait))
+ if ((close_delay != acm->port.close_delay) ||
+ (closing_wait != acm->port.closing_wait))
retval = -EPERM;
- else
- retval = -EOPNOTSUPP;
} else {
acm->port.close_delay = close_delay;
acm->port.closing_wait = closing_wait;
@@ -1637,12 +1627,13 @@ static int acm_resume(struct usb_interface *intf)
struct urb *urb;
int rv = 0;
- acm_unpoison_urbs(acm);
spin_lock_irq(&acm->write_lock);
if (--acm->susp_count)
goto out;
+ acm_unpoison_urbs(acm);
+
if (tty_port_initialized(&acm->port)) {
rv = usb_submit_urb(acm->ctrlurb, GFP_ATOMIC);
diff --git a/drivers/usb/class/cdc-wdm.c b/drivers/usb/class/cdc-wdm.c
index 508b1c3..d1e4a73 100644
--- a/drivers/usb/class/cdc-wdm.c
+++ b/drivers/usb/class/cdc-wdm.c
@@ -321,12 +321,23 @@ static void wdm_int_callback(struct urb *urb)
}
-static void kill_urbs(struct wdm_device *desc)
+static void poison_urbs(struct wdm_device *desc)
{
/* the order here is essential */
- usb_kill_urb(desc->command);
- usb_kill_urb(desc->validity);
- usb_kill_urb(desc->response);
+ usb_poison_urb(desc->command);
+ usb_poison_urb(desc->validity);
+ usb_poison_urb(desc->response);
+}
+
+static void unpoison_urbs(struct wdm_device *desc)
+{
+ /*
+ * the order here is not essential
+ * it is symmetrical just to be nice
+ */
+ usb_unpoison_urb(desc->response);
+ usb_unpoison_urb(desc->validity);
+ usb_unpoison_urb(desc->command);
}
static void free_urbs(struct wdm_device *desc)
@@ -741,11 +752,12 @@ static int wdm_release(struct inode *inode, struct file *file)
if (!desc->count) {
if (!test_bit(WDM_DISCONNECTING, &desc->flags)) {
dev_dbg(&desc->intf->dev, "wdm_release: cleanup\n");
- kill_urbs(desc);
+ poison_urbs(desc);
spin_lock_irq(&desc->iuspin);
desc->resp_count = 0;
spin_unlock_irq(&desc->iuspin);
desc->manage_power(desc->intf, 0);
+ unpoison_urbs(desc);
} else {
/* must avoid dev_printk here as desc->intf is invalid */
pr_debug(KBUILD_MODNAME " %s: device gone - cleaning up\n", __func__);
@@ -1037,9 +1049,9 @@ static void wdm_disconnect(struct usb_interface *intf)
wake_up_all(&desc->wait);
mutex_lock(&desc->rlock);
mutex_lock(&desc->wlock);
+ poison_urbs(desc);
cancel_work_sync(&desc->rxwork);
cancel_work_sync(&desc->service_outs_intr);
- kill_urbs(desc);
mutex_unlock(&desc->wlock);
mutex_unlock(&desc->rlock);
@@ -1080,9 +1092,10 @@ static int wdm_suspend(struct usb_interface *intf, pm_message_t message)
set_bit(WDM_SUSPENDING, &desc->flags);
spin_unlock_irq(&desc->iuspin);
/* callback submits work - order is essential */
- kill_urbs(desc);
+ poison_urbs(desc);
cancel_work_sync(&desc->rxwork);
cancel_work_sync(&desc->service_outs_intr);
+ unpoison_urbs(desc);
}
if (!PMSG_IS_AUTO(message)) {
mutex_unlock(&desc->wlock);
@@ -1140,7 +1153,7 @@ static int wdm_pre_reset(struct usb_interface *intf)
wake_up_all(&desc->wait);
mutex_lock(&desc->rlock);
mutex_lock(&desc->wlock);
- kill_urbs(desc);
+ poison_urbs(desc);
cancel_work_sync(&desc->rxwork);
cancel_work_sync(&desc->service_outs_intr);
return 0;
@@ -1151,6 +1164,7 @@ static int wdm_post_reset(struct usb_interface *intf)
struct wdm_device *desc = wdm_find_device(intf);
int rv;
+ unpoison_urbs(desc);
clear_bit(WDM_OVERFLOW, &desc->flags);
clear_bit(WDM_RESETTING, &desc->flags);
rv = recover_from_urb_loss(desc);
diff --git a/drivers/usb/core/devio.c b/drivers/usb/core/devio.c
index 5332363..2218941 100644
--- a/drivers/usb/core/devio.c
+++ b/drivers/usb/core/devio.c
@@ -1218,7 +1218,12 @@ static int do_proc_bulk(struct usb_dev_state *ps,
ret = usbfs_increase_memory_usage(len1 + sizeof(struct urb));
if (ret)
return ret;
- tbuf = kmalloc(len1, GFP_KERNEL);
+
+ /*
+ * len1 can be almost arbitrarily large. Don't WARN if it's
+ * too big, just fail the request.
+ */
+ tbuf = kmalloc(len1, GFP_KERNEL | __GFP_NOWARN);
if (!tbuf) {
ret = -ENOMEM;
goto done;
@@ -1696,7 +1701,7 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
if (num_sgs) {
as->urb->sg = kmalloc_array(num_sgs,
sizeof(struct scatterlist),
- GFP_KERNEL);
+ GFP_KERNEL | __GFP_NOWARN);
if (!as->urb->sg) {
ret = -ENOMEM;
goto error;
@@ -1731,7 +1736,7 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
(uurb_start - as->usbm->vm_start);
} else {
as->urb->transfer_buffer = kmalloc(uurb->buffer_length,
- GFP_KERNEL);
+ GFP_KERNEL | __GFP_NOWARN);
if (!as->urb->transfer_buffer) {
ret = -ENOMEM;
goto error;
diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
index 17202b2..228e3d4 100644
--- a/drivers/usb/core/hub.c
+++ b/drivers/usb/core/hub.c
@@ -3555,7 +3555,7 @@ int usb_port_resume(struct usb_device *udev, pm_message_t msg)
u16 portchange, portstatus;
if (!test_and_set_bit(port1, hub->child_usage_bits)) {
- status = pm_runtime_get_sync(&port_dev->dev);
+ status = pm_runtime_resume_and_get(&port_dev->dev);
if (status < 0) {
dev_dbg(&udev->dev, "can't resume usb port, status %d\n",
status);
@@ -3592,9 +3592,6 @@ int usb_port_resume(struct usb_device *udev, pm_message_t msg)
* sequence.
*/
status = hub_port_status(hub, port1, &portstatus, &portchange);
-
- /* TRSMRCY = 10 msec */
- msleep(10);
}
SuspendCleared:
@@ -3609,6 +3606,9 @@ int usb_port_resume(struct usb_device *udev, pm_message_t msg)
usb_clear_port_feature(hub->hdev, port1,
USB_PORT_FEAT_C_SUSPEND);
}
+
+ /* TRSMRCY = 10 msec */
+ msleep(10);
}
if (udev->persist_enabled)
diff --git a/drivers/usb/core/hub.h b/drivers/usb/core/hub.h
index 73f4482..22ea1f4 100644
--- a/drivers/usb/core/hub.h
+++ b/drivers/usb/core/hub.h
@@ -148,8 +148,10 @@ static inline unsigned hub_power_on_good_delay(struct usb_hub *hub)
{
unsigned delay = hub->descriptor->bPwrOn2PwrGood * 2;
- /* Wait at least 100 msec for power to become stable */
- return max(delay, 100U);
+ if (!hub->hdev->parent) /* root hub */
+ return delay;
+ else /* Wait at least 100 msec for power to become stable */
+ return max(delay, 100U);
}
static inline int hub_port_debounce_be_connected(struct usb_hub *hub,
diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
index 76ac5d6..21e7522 100644
--- a/drivers/usb/core/quirks.c
+++ b/drivers/usb/core/quirks.c
@@ -406,6 +406,7 @@ static const struct usb_device_id usb_quirk_list[] = {
/* Realtek hub in Dell WD19 (Type-C) */
{ USB_DEVICE(0x0bda, 0x0487), .driver_info = USB_QUIRK_NO_LPM },
+ { USB_DEVICE(0x0bda, 0x5487), .driver_info = USB_QUIRK_RESET_RESUME },
/* Generic RTL8153 based ethernet adapters */
{ USB_DEVICE(0x0bda, 0x8153), .driver_info = USB_QUIRK_NO_LPM },
@@ -438,6 +439,9 @@ static const struct usb_device_id usb_quirk_list[] = {
{ USB_DEVICE(0x17ef, 0xa012), .driver_info =
USB_QUIRK_DISCONNECT_SUSPEND },
+ /* Lenovo ThinkPad USB-C Dock Gen2 Ethernet (RTL8153 GigE) */
+ { USB_DEVICE(0x17ef, 0xa387), .driver_info = USB_QUIRK_NO_LPM },
+
/* BUILDWIN Photo Frame */
{ USB_DEVICE(0x1908, 0x1315), .driver_info =
USB_QUIRK_HONOR_BNUMINTERFACES },
diff --git a/drivers/usb/dwc2/core.h b/drivers/usb/dwc2/core.h
index 7161344..641e425 100644
--- a/drivers/usb/dwc2/core.h
+++ b/drivers/usb/dwc2/core.h
@@ -112,6 +112,7 @@ struct dwc2_hsotg_req;
* @debugfs: File entry for debugfs file for this endpoint.
* @dir_in: Set to true if this endpoint is of the IN direction, which
* means that it is sending data to the Host.
+ * @map_dir: Set to the value of dir_in when the DMA buffer is mapped.
* @index: The index for the endpoint registers.
* @mc: Multi Count - number of transactions per microframe
* @interval: Interval for periodic endpoints, in frames or microframes.
@@ -161,6 +162,7 @@ struct dwc2_hsotg_ep {
unsigned short fifo_index;
unsigned char dir_in;
+ unsigned char map_dir;
unsigned char index;
unsigned char mc;
u16 interval;
diff --git a/drivers/usb/dwc2/core_intr.c b/drivers/usb/dwc2/core_intr.c
index 55f1d14..510fd05 100644
--- a/drivers/usb/dwc2/core_intr.c
+++ b/drivers/usb/dwc2/core_intr.c
@@ -307,6 +307,7 @@ static void dwc2_handle_conn_id_status_change_intr(struct dwc2_hsotg *hsotg)
static void dwc2_handle_session_req_intr(struct dwc2_hsotg *hsotg)
{
int ret;
+ u32 hprt0;
/* Clear interrupt */
dwc2_writel(hsotg, GINTSTS_SESSREQINT, GINTSTS);
@@ -327,6 +328,13 @@ static void dwc2_handle_session_req_intr(struct dwc2_hsotg *hsotg)
* established
*/
dwc2_hsotg_disconnect(hsotg);
+ } else {
+ /* Turn on the port power bit. */
+ hprt0 = dwc2_read_hprt0(hsotg);
+ hprt0 |= HPRT0_PWR;
+ dwc2_writel(hsotg, hprt0, HPRT0);
+ /* Connect hcd after port power is set. */
+ dwc2_hcd_connect(hsotg);
}
}
@@ -652,6 +660,71 @@ static u32 dwc2_read_common_intr(struct dwc2_hsotg *hsotg)
return 0;
}
+/**
+ * dwc_handle_gpwrdn_disc_det() - Handles the gpwrdn disconnect detect.
+ * Exits hibernation without restoring registers.
+ *
+ * @hsotg: Programming view of DWC_otg controller
+ * @gpwrdn: GPWRDN register
+ */
+static inline void dwc_handle_gpwrdn_disc_det(struct dwc2_hsotg *hsotg,
+ u32 gpwrdn)
+{
+ u32 gpwrdn_tmp;
+
+ /* Switch-on voltage to the core */
+ gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
+ gpwrdn_tmp &= ~GPWRDN_PWRDNSWTCH;
+ dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
+ udelay(5);
+
+ /* Reset core */
+ gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
+ gpwrdn_tmp &= ~GPWRDN_PWRDNRSTN;
+ dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
+ udelay(5);
+
+ /* Disable Power Down Clamp */
+ gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
+ gpwrdn_tmp &= ~GPWRDN_PWRDNCLMP;
+ dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
+ udelay(5);
+
+ /* Deassert reset core */
+ gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
+ gpwrdn_tmp |= GPWRDN_PWRDNRSTN;
+ dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
+ udelay(5);
+
+ /* Disable PMU interrupt */
+ gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
+ gpwrdn_tmp &= ~GPWRDN_PMUINTSEL;
+ dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
+
+ /* De-assert Wakeup Logic */
+ gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
+ gpwrdn_tmp &= ~GPWRDN_PMUACTV;
+ dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
+
+ hsotg->hibernated = 0;
+ hsotg->bus_suspended = 0;
+
+ if (gpwrdn & GPWRDN_IDSTS) {
+ hsotg->op_state = OTG_STATE_B_PERIPHERAL;
+ dwc2_core_init(hsotg, false);
+ dwc2_enable_global_interrupts(hsotg);
+ dwc2_hsotg_core_init_disconnected(hsotg, false);
+ dwc2_hsotg_core_connect(hsotg);
+ } else {
+ hsotg->op_state = OTG_STATE_A_HOST;
+
+ /* Initialize the Core for Host mode */
+ dwc2_core_init(hsotg, false);
+ dwc2_enable_global_interrupts(hsotg);
+ dwc2_hcd_start(hsotg);
+ }
+}
+
/*
* GPWRDN interrupt handler.
*
@@ -673,64 +746,14 @@ static void dwc2_handle_gpwrdn_intr(struct dwc2_hsotg *hsotg)
if ((gpwrdn & GPWRDN_DISCONN_DET) &&
(gpwrdn & GPWRDN_DISCONN_DET_MSK) && !linestate) {
- u32 gpwrdn_tmp;
-
dev_dbg(hsotg->dev, "%s: GPWRDN_DISCONN_DET\n", __func__);
-
- /* Switch-on voltage to the core */
- gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
- gpwrdn_tmp &= ~GPWRDN_PWRDNSWTCH;
- dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
- udelay(10);
-
- /* Reset core */
- gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
- gpwrdn_tmp &= ~GPWRDN_PWRDNRSTN;
- dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
- udelay(10);
-
- /* Disable Power Down Clamp */
- gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
- gpwrdn_tmp &= ~GPWRDN_PWRDNCLMP;
- dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
- udelay(10);
-
- /* Deassert reset core */
- gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
- gpwrdn_tmp |= GPWRDN_PWRDNRSTN;
- dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
- udelay(10);
-
- /* Disable PMU interrupt */
- gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
- gpwrdn_tmp &= ~GPWRDN_PMUINTSEL;
- dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
-
- /* De-assert Wakeup Logic */
- gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
- gpwrdn_tmp &= ~GPWRDN_PMUACTV;
- dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
-
- hsotg->hibernated = 0;
-
- if (gpwrdn & GPWRDN_IDSTS) {
- hsotg->op_state = OTG_STATE_B_PERIPHERAL;
- dwc2_core_init(hsotg, false);
- dwc2_enable_global_interrupts(hsotg);
- dwc2_hsotg_core_init_disconnected(hsotg, false);
- dwc2_hsotg_core_connect(hsotg);
- } else {
- hsotg->op_state = OTG_STATE_A_HOST;
-
- /* Initialize the Core for Host mode */
- dwc2_core_init(hsotg, false);
- dwc2_enable_global_interrupts(hsotg);
- dwc2_hcd_start(hsotg);
- }
- }
-
- if ((gpwrdn & GPWRDN_LNSTSCHG) &&
- (gpwrdn & GPWRDN_LNSTSCHG_MSK) && linestate) {
+ /*
+ * Call disconnect detect function to exit from
+ * hibernation
+ */
+ dwc_handle_gpwrdn_disc_det(hsotg, gpwrdn);
+ } else if ((gpwrdn & GPWRDN_LNSTSCHG) &&
+ (gpwrdn & GPWRDN_LNSTSCHG_MSK) && linestate) {
dev_dbg(hsotg->dev, "%s: GPWRDN_LNSTSCHG\n", __func__);
if (hsotg->hw_params.hibernation &&
hsotg->hibernated) {
@@ -741,24 +764,21 @@ static void dwc2_handle_gpwrdn_intr(struct dwc2_hsotg *hsotg)
dwc2_exit_hibernation(hsotg, 1, 0, 1);
}
}
- }
- if ((gpwrdn & GPWRDN_RST_DET) && (gpwrdn & GPWRDN_RST_DET_MSK)) {
+ } else if ((gpwrdn & GPWRDN_RST_DET) &&
+ (gpwrdn & GPWRDN_RST_DET_MSK)) {
dev_dbg(hsotg->dev, "%s: GPWRDN_RST_DET\n", __func__);
if (!linestate && (gpwrdn & GPWRDN_BSESSVLD))
dwc2_exit_hibernation(hsotg, 0, 1, 0);
- }
- if ((gpwrdn & GPWRDN_STS_CHGINT) &&
- (gpwrdn & GPWRDN_STS_CHGINT_MSK) && linestate) {
+ } else if ((gpwrdn & GPWRDN_STS_CHGINT) &&
+ (gpwrdn & GPWRDN_STS_CHGINT_MSK)) {
dev_dbg(hsotg->dev, "%s: GPWRDN_STS_CHGINT\n", __func__);
- if (hsotg->hw_params.hibernation &&
- hsotg->hibernated) {
- if (gpwrdn & GPWRDN_IDSTS) {
- dwc2_exit_hibernation(hsotg, 0, 0, 0);
- call_gadget(hsotg, resume);
- } else {
- dwc2_exit_hibernation(hsotg, 1, 0, 1);
- }
- }
+ /*
+ * As GPWRDN_STS_CHGINT exit from hibernation flow is
+ * the same as in GPWRDN_DISCONN_DET flow. Call
+ * disconnect detect helper function to exit from
+ * hibernation.
+ */
+ dwc_handle_gpwrdn_disc_det(hsotg, gpwrdn);
}
}
diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c
index ad4c943..d2f623d 100644
--- a/drivers/usb/dwc2/gadget.c
+++ b/drivers/usb/dwc2/gadget.c
@@ -422,7 +422,7 @@ static void dwc2_hsotg_unmap_dma(struct dwc2_hsotg *hsotg,
{
struct usb_request *req = &hs_req->req;
- usb_gadget_unmap_request(&hsotg->gadget, req, hs_ep->dir_in);
+ usb_gadget_unmap_request(&hsotg->gadget, req, hs_ep->map_dir);
}
/*
@@ -1242,6 +1242,7 @@ static int dwc2_hsotg_map_dma(struct dwc2_hsotg *hsotg,
{
int ret;
+ hs_ep->map_dir = hs_ep->dir_in;
ret = usb_gadget_map_request(&hsotg->gadget, req, hs_ep->dir_in);
if (ret)
goto dma_error;
diff --git a/drivers/usb/dwc2/hcd.c b/drivers/usb/dwc2/hcd.c
index 1a9789e..6af1dcb 100644
--- a/drivers/usb/dwc2/hcd.c
+++ b/drivers/usb/dwc2/hcd.c
@@ -5580,7 +5580,15 @@ int dwc2_host_exit_hibernation(struct dwc2_hsotg *hsotg, int rem_wakeup,
return ret;
}
- dwc2_hcd_rem_wakeup(hsotg);
+ if (rem_wakeup) {
+ dwc2_hcd_rem_wakeup(hsotg);
+ /*
+ * Change "port_connect_status_change" flag to re-enumerate,
+ * because after exit from hibernation port connection status
+ * is not detected.
+ */
+ hsotg->flags.b.port_connect_status_change = 1;
+ }
hsotg->hibernated = 0;
hsotg->bus_suspended = 0;
diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
index 3101f0d..e07fd5e 100644
--- a/drivers/usb/dwc3/core.c
+++ b/drivers/usb/dwc3/core.c
@@ -114,6 +114,8 @@ void dwc3_set_prtcap(struct dwc3 *dwc, u32 mode)
dwc->current_dr_role = mode;
}
+static int dwc3_core_soft_reset(struct dwc3 *dwc);
+
static void __dwc3_set_mode(struct work_struct *work)
{
struct dwc3 *dwc = work_to_dwc(work);
@@ -121,6 +123,8 @@ static void __dwc3_set_mode(struct work_struct *work)
int ret;
u32 reg;
+ mutex_lock(&dwc->mutex);
+
pm_runtime_get_sync(dwc->dev);
if (dwc->current_dr_role == DWC3_GCTL_PRTCAP_OTG)
@@ -154,6 +158,25 @@ static void __dwc3_set_mode(struct work_struct *work)
break;
}
+ /* For DRD host or device mode only */
+ if (dwc->desired_dr_role != DWC3_GCTL_PRTCAP_OTG) {
+ reg = dwc3_readl(dwc->regs, DWC3_GCTL);
+ reg |= DWC3_GCTL_CORESOFTRESET;
+ dwc3_writel(dwc->regs, DWC3_GCTL, reg);
+
+ /*
+ * Wait for internal clocks to synchronized. DWC_usb31 and
+ * DWC_usb32 may need at least 50ms (less for DWC_usb3). To
+ * keep it consistent across different IPs, let's wait up to
+ * 100ms before clearing GCTL.CORESOFTRESET.
+ */
+ msleep(100);
+
+ reg = dwc3_readl(dwc->regs, DWC3_GCTL);
+ reg &= ~DWC3_GCTL_CORESOFTRESET;
+ dwc3_writel(dwc->regs, DWC3_GCTL, reg);
+ }
+
spin_lock_irqsave(&dwc->lock, flags);
dwc3_set_prtcap(dwc, dwc->desired_dr_role);
@@ -178,6 +201,8 @@ static void __dwc3_set_mode(struct work_struct *work)
}
break;
case DWC3_GCTL_PRTCAP_DEVICE:
+ dwc3_core_soft_reset(dwc);
+
dwc3_event_buffers_setup(dwc);
if (dwc->usb2_phy)
@@ -200,6 +225,7 @@ static void __dwc3_set_mode(struct work_struct *work)
out:
pm_runtime_mark_last_busy(dwc->dev);
pm_runtime_put_autosuspend(dwc->dev);
+ mutex_unlock(&dwc->mutex);
}
void dwc3_set_mode(struct dwc3 *dwc, u32 mode)
@@ -1297,6 +1323,8 @@ static void dwc3_get_properties(struct dwc3 *dwc)
"snps,usb3_lpm_capable");
dwc->usb2_lpm_disable = device_property_read_bool(dev,
"snps,usb2-lpm-disable");
+ dwc->usb2_gadget_lpm_disable = device_property_read_bool(dev,
+ "snps,usb2-gadget-lpm-disable");
device_property_read_u8(dev, "snps,rx-thr-num-pkt-prd",
&rx_thr_num_pkt_prd);
device_property_read_u8(dev, "snps,rx-max-burst-prd",
@@ -1527,6 +1555,7 @@ static int dwc3_probe(struct platform_device *pdev)
dwc3_cache_hwparams(dwc);
spin_lock_init(&dwc->lock);
+ mutex_init(&dwc->mutex);
pm_runtime_set_active(dev);
pm_runtime_use_autosuspend(dev);
diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
index 1b241f9..79e1b82 100644
--- a/drivers/usb/dwc3/core.h
+++ b/drivers/usb/dwc3/core.h
@@ -13,6 +13,7 @@
#include <linux/device.h>
#include <linux/spinlock.h>
+#include <linux/mutex.h>
#include <linux/ioport.h>
#include <linux/list.h>
#include <linux/bitops.h>
@@ -942,6 +943,7 @@ struct dwc3_scratchpad_array {
* @scratch_addr: dma address of scratchbuf
* @ep0_in_setup: one control transfer is completed and enter setup phase
* @lock: for synchronizing
+ * @mutex: for mode switching
* @dev: pointer to our struct device
* @sysdev: pointer to the DMA-capable device
* @xhci: pointer to our xHCI child
@@ -1026,7 +1028,8 @@ struct dwc3_scratchpad_array {
* @dis_start_transfer_quirk: set if start_transfer failure SW workaround is
* not needed for DWC_usb31 version 1.70a-ea06 and below
* @usb3_lpm_capable: set if hadrware supports Link Power Management
- * @usb2_lpm_disable: set to disable usb2 lpm
+ * @usb2_lpm_disable: set to disable usb2 lpm for host
+ * @usb2_gadget_lpm_disable: set to disable usb2 lpm for gadget
* @disable_scramble_quirk: set if we enable the disable scramble quirk
* @u2exit_lfps_quirk: set if we enable u2exit lfps quirk
* @u2ss_inp3_quirk: set if we enable P3 OK for U2/SS Inactive quirk
@@ -1077,6 +1080,9 @@ struct dwc3 {
/* device lock */
spinlock_t lock;
+ /* mode switching lock */
+ struct mutex mutex;
+
struct device *dev;
struct device *sysdev;
@@ -1227,6 +1233,7 @@ struct dwc3 {
unsigned dis_start_transfer_quirk:1;
unsigned usb3_lpm_capable:1;
unsigned usb2_lpm_disable:1;
+ unsigned usb2_gadget_lpm_disable:1;
unsigned disable_scramble_quirk:1;
unsigned u2exit_lfps_quirk:1;
diff --git a/drivers/usb/dwc3/dwc3-omap.c b/drivers/usb/dwc3/dwc3-omap.c
index 3db1780..e196673 100644
--- a/drivers/usb/dwc3/dwc3-omap.c
+++ b/drivers/usb/dwc3/dwc3-omap.c
@@ -437,8 +437,13 @@ static int dwc3_omap_extcon_register(struct dwc3_omap *omap)
if (extcon_get_state(edev, EXTCON_USB) == true)
dwc3_omap_set_mailbox(omap, OMAP_DWC3_VBUS_VALID);
+ else
+ dwc3_omap_set_mailbox(omap, OMAP_DWC3_VBUS_OFF);
+
if (extcon_get_state(edev, EXTCON_USB_HOST) == true)
dwc3_omap_set_mailbox(omap, OMAP_DWC3_ID_GROUND);
+ else
+ dwc3_omap_set_mailbox(omap, OMAP_DWC3_ID_FLOAT);
omap->edev = edev;
}
diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c
index 598daed..17117870 100644
--- a/drivers/usb/dwc3/dwc3-pci.c
+++ b/drivers/usb/dwc3/dwc3-pci.c
@@ -120,6 +120,7 @@ static const struct property_entry dwc3_pci_mrfld_properties[] = {
PROPERTY_ENTRY_STRING("linux,extcon-name", "mrfld_bcove_pwrsrc"),
PROPERTY_ENTRY_BOOL("snps,dis_u3_susphy_quirk"),
PROPERTY_ENTRY_BOOL("snps,dis_u2_susphy_quirk"),
+ PROPERTY_ENTRY_BOOL("snps,usb2-gadget-lpm-disable"),
PROPERTY_ENTRY_BOOL("linux,sysdev_is_parent"),
{}
};
diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
index 65ff41e..ead877e 100644
--- a/drivers/usb/dwc3/gadget.c
+++ b/drivers/usb/dwc3/gadget.c
@@ -308,13 +308,12 @@ int dwc3_send_gadget_ep_cmd(struct dwc3_ep *dep, unsigned int cmd,
}
if (DWC3_DEPCMD_CMD(cmd) == DWC3_DEPCMD_STARTTRANSFER) {
- int needs_wakeup;
+ int link_state;
- needs_wakeup = (dwc->link_state == DWC3_LINK_STATE_U1 ||
- dwc->link_state == DWC3_LINK_STATE_U2 ||
- dwc->link_state == DWC3_LINK_STATE_U3);
-
- if (unlikely(needs_wakeup)) {
+ link_state = dwc3_gadget_get_link_state(dwc);
+ if (link_state == DWC3_LINK_STATE_U1 ||
+ link_state == DWC3_LINK_STATE_U2 ||
+ link_state == DWC3_LINK_STATE_U3) {
ret = __dwc3_gadget_wakeup(dwc);
dev_WARN_ONCE(dwc->dev, ret, "wakeup failed --> %d\n",
ret);
@@ -608,12 +607,14 @@ static int dwc3_gadget_set_ep_config(struct dwc3_ep *dep, unsigned int action)
u8 bInterval_m1;
/*
- * Valid range for DEPCFG.bInterval_m1 is from 0 to 13, and it
- * must be set to 0 when the controller operates in full-speed.
+ * Valid range for DEPCFG.bInterval_m1 is from 0 to 13.
+ *
+ * NOTE: The programming guide incorrectly stated bInterval_m1
+ * must be set to 0 when operating in fullspeed. Internally the
+ * controller does not have this limitation. See DWC_usb3x
+ * programming guide section 3.2.2.1.
*/
bInterval_m1 = min_t(u8, desc->bInterval - 1, 13);
- if (dwc->gadget->speed == USB_SPEED_FULL)
- bInterval_m1 = 0;
if (usb_endpoint_type(desc) == USB_ENDPOINT_XFER_INT &&
dwc->gadget->speed == USB_SPEED_FULL)
@@ -1235,6 +1236,7 @@ static int dwc3_prepare_trbs_sg(struct dwc3_ep *dep,
req->start_sg = sg_next(s);
req->num_queued_sgs++;
+ req->num_pending_sgs--;
/*
* The number of pending SG entries may not correspond to the
@@ -1242,7 +1244,7 @@ static int dwc3_prepare_trbs_sg(struct dwc3_ep *dep,
* don't include unused SG entries.
*/
if (length == 0) {
- req->num_pending_sgs -= req->request.num_mapped_sgs - req->num_queued_sgs;
+ req->num_pending_sgs = 0;
break;
}
@@ -1675,7 +1677,9 @@ static int __dwc3_gadget_ep_queue(struct dwc3_ep *dep, struct dwc3_request *req)
}
}
- return __dwc3_gadget_kick_transfer(dep);
+ __dwc3_gadget_kick_transfer(dep);
+
+ return 0;
}
static int dwc3_gadget_ep_queue(struct usb_ep *ep, struct usb_request *request,
@@ -1973,6 +1977,8 @@ static int __dwc3_gadget_wakeup(struct dwc3 *dwc)
case DWC3_LINK_STATE_RESET:
case DWC3_LINK_STATE_RX_DET: /* in HS, means Early Suspend */
case DWC3_LINK_STATE_U3: /* in HS, means SUSPEND */
+ case DWC3_LINK_STATE_U2: /* in HS, means Sleep (L1) */
+ case DWC3_LINK_STATE_U1:
case DWC3_LINK_STATE_RESUME:
break;
default:
@@ -2203,6 +2209,10 @@ static void dwc3_gadget_enable_irq(struct dwc3 *dwc)
if (DWC3_VER_IS_PRIOR(DWC3, 250A))
reg |= DWC3_DEVTEN_ULSTCNGEN;
+ /* On 2.30a and above this bit enables U3/L2-L1 Suspend Events */
+ if (!DWC3_VER_IS_PRIOR(DWC3, 230A))
+ reg |= DWC3_DEVTEN_EOPFEN;
+
dwc3_writel(dwc->regs, DWC3_DEVTEN, reg);
}
@@ -2775,15 +2785,15 @@ static int dwc3_gadget_ep_reclaim_trb_sg(struct dwc3_ep *dep,
struct dwc3_trb *trb = &dep->trb_pool[dep->trb_dequeue];
struct scatterlist *sg = req->sg;
struct scatterlist *s;
- unsigned int pending = req->num_pending_sgs;
+ unsigned int num_queued = req->num_queued_sgs;
unsigned int i;
int ret = 0;
- for_each_sg(sg, s, pending, i) {
+ for_each_sg(sg, s, num_queued, i) {
trb = &dep->trb_pool[dep->trb_dequeue];
req->sg = sg_next(s);
- req->num_pending_sgs--;
+ req->num_queued_sgs--;
ret = dwc3_gadget_ep_reclaim_completed_trb(dep, req,
trb, event, status, true);
@@ -2806,7 +2816,7 @@ static int dwc3_gadget_ep_reclaim_trb_linear(struct dwc3_ep *dep,
static bool dwc3_gadget_ep_request_completed(struct dwc3_request *req)
{
- return req->num_pending_sgs == 0;
+ return req->num_pending_sgs == 0 && req->num_queued_sgs == 0;
}
static int dwc3_gadget_ep_cleanup_completed_request(struct dwc3_ep *dep,
@@ -2815,7 +2825,7 @@ static int dwc3_gadget_ep_cleanup_completed_request(struct dwc3_ep *dep,
{
int ret;
- if (req->num_pending_sgs)
+ if (req->request.num_mapped_sgs)
ret = dwc3_gadget_ep_reclaim_trb_sg(dep, req, event,
status);
else
@@ -3268,6 +3278,15 @@ static void dwc3_gadget_reset_interrupt(struct dwc3 *dwc)
u32 reg;
/*
+ * Ideally, dwc3_reset_gadget() would trigger the function
+ * drivers to stop any active transfers through ep disable.
+ * However, for functions which defer ep disable, such as mass
+ * storage, we will need to rely on the call to stop active
+ * transfers here, and avoid allowing of request queuing.
+ */
+ dwc->connected = false;
+
+ /*
* WORKAROUND: DWC3 revisions <1.88a have an issue which
* would cause a missing Disconnect Event if there's a
* pending Setup Packet in the FIFO.
@@ -3389,6 +3408,7 @@ static void dwc3_gadget_conndone_interrupt(struct dwc3 *dwc)
/* Enable USB2 LPM Capability */
if (!DWC3_VER_IS_WITHIN(DWC3, ANY, 194A) &&
+ !dwc->usb2_gadget_lpm_disable &&
(speed != DWC3_DSTS_SUPERSPEED) &&
(speed != DWC3_DSTS_SUPERSPEED_PLUS)) {
reg = dwc3_readl(dwc->regs, DWC3_DCFG);
@@ -3415,6 +3435,12 @@ static void dwc3_gadget_conndone_interrupt(struct dwc3 *dwc)
dwc3_gadget_dctl_write_safe(dwc, reg);
} else {
+ if (dwc->usb2_gadget_lpm_disable) {
+ reg = dwc3_readl(dwc->regs, DWC3_DCFG);
+ reg &= ~DWC3_DCFG_LPM_CAP;
+ dwc3_writel(dwc->regs, DWC3_DCFG, reg);
+ }
+
reg = dwc3_readl(dwc->regs, DWC3_DCTL);
reg &= ~DWC3_DCTL_HIRD_THRES_MASK;
dwc3_gadget_dctl_write_safe(dwc, reg);
@@ -3862,7 +3888,7 @@ int dwc3_gadget_init(struct dwc3 *dwc)
dwc->gadget->speed = USB_SPEED_UNKNOWN;
dwc->gadget->sg_supported = true;
dwc->gadget->name = "dwc3-gadget";
- dwc->gadget->lpm_capable = true;
+ dwc->gadget->lpm_capable = !dwc->usb2_gadget_lpm_disable;
/*
* FIXME We might be setting max_speed to <SUPER, however versions
@@ -3929,8 +3955,9 @@ int dwc3_gadget_init(struct dwc3 *dwc)
void dwc3_gadget_exit(struct dwc3 *dwc)
{
- usb_del_gadget_udc(dwc->gadget);
+ usb_del_gadget(dwc->gadget);
dwc3_gadget_free_endpoints(dwc);
+ usb_put_gadget(dwc->gadget);
dma_free_coherent(dwc->sysdev, DWC3_BOUNCE_SIZE, dwc->bounce,
dwc->bounce_addr);
kfree(dwc->setup_buf);
diff --git a/drivers/usb/gadget/config.c b/drivers/usb/gadget/config.c
index 2d11535..8bb2577 100644
--- a/drivers/usb/gadget/config.c
+++ b/drivers/usb/gadget/config.c
@@ -194,9 +194,13 @@ EXPORT_SYMBOL_GPL(usb_assign_descriptors);
void usb_free_all_descriptors(struct usb_function *f)
{
usb_free_descriptors(f->fs_descriptors);
+ f->fs_descriptors = NULL;
usb_free_descriptors(f->hs_descriptors);
+ f->hs_descriptors = NULL;
usb_free_descriptors(f->ss_descriptors);
+ f->ss_descriptors = NULL;
usb_free_descriptors(f->ssp_descriptors);
+ f->ssp_descriptors = NULL;
}
EXPORT_SYMBOL_GPL(usb_free_all_descriptors);
diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
index f3443347..ffe67d8 100644
--- a/drivers/usb/gadget/function/f_fs.c
+++ b/drivers/usb/gadget/function/f_fs.c
@@ -2639,6 +2639,7 @@ static int __ffs_data_got_strings(struct ffs_data *ffs,
do { /* lang_count > 0 so we can use do-while */
unsigned needed = needed_count;
+ u32 str_per_lang = str_count;
if (unlikely(len < 3))
goto error_free;
@@ -2674,7 +2675,7 @@ static int __ffs_data_got_strings(struct ffs_data *ffs,
data += length + 1;
len -= length + 1;
- } while (--str_count);
+ } while (--str_per_lang);
s->id = 0; /* terminator */
s->s = NULL;
diff --git a/drivers/usb/gadget/function/f_uac1.c b/drivers/usb/gadget/function/f_uac1.c
index 560382e..e65f474 100644
--- a/drivers/usb/gadget/function/f_uac1.c
+++ b/drivers/usb/gadget/function/f_uac1.c
@@ -19,6 +19,9 @@
#include "u_audio.h"
#include "u_uac1.h"
+/* UAC1 spec: 3.7.2.3 Audio Channel Cluster Format */
+#define UAC1_CHANNEL_MASK 0x0FFF
+
struct f_uac1 {
struct g_audio g_audio;
u8 ac_intf, as_in_intf, as_out_intf;
@@ -30,6 +33,11 @@ static inline struct f_uac1 *func_to_uac1(struct usb_function *f)
return container_of(f, struct f_uac1, g_audio.func);
}
+static inline struct f_uac1_opts *g_audio_to_uac1_opts(struct g_audio *audio)
+{
+ return container_of(audio->func.fi, struct f_uac1_opts, func_inst);
+}
+
/*
* DESCRIPTORS ... most are static, but strings and full
* configuration descriptors are built on demand.
@@ -505,11 +513,42 @@ static void f_audio_disable(struct usb_function *f)
/*-------------------------------------------------------------------------*/
+static int f_audio_validate_opts(struct g_audio *audio, struct device *dev)
+{
+ struct f_uac1_opts *opts = g_audio_to_uac1_opts(audio);
+
+ if (!opts->p_chmask && !opts->c_chmask) {
+ dev_err(dev, "Error: no playback and capture channels\n");
+ return -EINVAL;
+ } else if (opts->p_chmask & ~UAC1_CHANNEL_MASK) {
+ dev_err(dev, "Error: unsupported playback channels mask\n");
+ return -EINVAL;
+ } else if (opts->c_chmask & ~UAC1_CHANNEL_MASK) {
+ dev_err(dev, "Error: unsupported capture channels mask\n");
+ return -EINVAL;
+ } else if ((opts->p_ssize < 1) || (opts->p_ssize > 4)) {
+ dev_err(dev, "Error: incorrect playback sample size\n");
+ return -EINVAL;
+ } else if ((opts->c_ssize < 1) || (opts->c_ssize > 4)) {
+ dev_err(dev, "Error: incorrect capture sample size\n");
+ return -EINVAL;
+ } else if (!opts->p_srate) {
+ dev_err(dev, "Error: incorrect playback sampling rate\n");
+ return -EINVAL;
+ } else if (!opts->c_srate) {
+ dev_err(dev, "Error: incorrect capture sampling rate\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
/* audio function driver setup/binding */
static int f_audio_bind(struct usb_configuration *c, struct usb_function *f)
{
struct usb_composite_dev *cdev = c->cdev;
struct usb_gadget *gadget = cdev->gadget;
+ struct device *dev = &gadget->dev;
struct f_uac1 *uac1 = func_to_uac1(f);
struct g_audio *audio = func_to_g_audio(f);
struct f_uac1_opts *audio_opts;
@@ -519,6 +558,10 @@ static int f_audio_bind(struct usb_configuration *c, struct usb_function *f)
int rate;
int status;
+ status = f_audio_validate_opts(audio, dev);
+ if (status)
+ return status;
+
audio_opts = container_of(f->fi, struct f_uac1_opts, func_inst);
us = usb_gstrings_attach(cdev, uac1_strings, ARRAY_SIZE(strings_uac1));
diff --git a/drivers/usb/gadget/function/f_uac2.c b/drivers/usb/gadget/function/f_uac2.c
index 6f03e94..dd960ce 100644
--- a/drivers/usb/gadget/function/f_uac2.c
+++ b/drivers/usb/gadget/function/f_uac2.c
@@ -14,6 +14,9 @@
#include "u_audio.h"
#include "u_uac2.h"
+/* UAC2 spec: 4.1 Audio Channel Cluster Descriptor */
+#define UAC2_CHANNEL_MASK 0x07FFFFFF
+
/*
* The driver implements a simple UAC_2 topology.
* USB-OUT -> IT_1 -> OT_3 -> ALSA_Capture
@@ -604,6 +607,36 @@ static void setup_descriptor(struct f_uac2_opts *opts)
hs_audio_desc[i] = NULL;
}
+static int afunc_validate_opts(struct g_audio *agdev, struct device *dev)
+{
+ struct f_uac2_opts *opts = g_audio_to_uac2_opts(agdev);
+
+ if (!opts->p_chmask && !opts->c_chmask) {
+ dev_err(dev, "Error: no playback and capture channels\n");
+ return -EINVAL;
+ } else if (opts->p_chmask & ~UAC2_CHANNEL_MASK) {
+ dev_err(dev, "Error: unsupported playback channels mask\n");
+ return -EINVAL;
+ } else if (opts->c_chmask & ~UAC2_CHANNEL_MASK) {
+ dev_err(dev, "Error: unsupported capture channels mask\n");
+ return -EINVAL;
+ } else if ((opts->p_ssize < 1) || (opts->p_ssize > 4)) {
+ dev_err(dev, "Error: incorrect playback sample size\n");
+ return -EINVAL;
+ } else if ((opts->c_ssize < 1) || (opts->c_ssize > 4)) {
+ dev_err(dev, "Error: incorrect capture sample size\n");
+ return -EINVAL;
+ } else if (!opts->p_srate) {
+ dev_err(dev, "Error: incorrect playback sampling rate\n");
+ return -EINVAL;
+ } else if (!opts->c_srate) {
+ dev_err(dev, "Error: incorrect capture sampling rate\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
static int
afunc_bind(struct usb_configuration *cfg, struct usb_function *fn)
{
@@ -612,11 +645,13 @@ afunc_bind(struct usb_configuration *cfg, struct usb_function *fn)
struct usb_composite_dev *cdev = cfg->cdev;
struct usb_gadget *gadget = cdev->gadget;
struct device *dev = &gadget->dev;
- struct f_uac2_opts *uac2_opts;
+ struct f_uac2_opts *uac2_opts = g_audio_to_uac2_opts(agdev);
struct usb_string *us;
int ret;
- uac2_opts = container_of(fn->fi, struct f_uac2_opts, func_inst);
+ ret = afunc_validate_opts(agdev, dev);
+ if (ret)
+ return ret;
us = usb_gstrings_attach(cdev, fn_strings, ARRAY_SIZE(strings_fn));
if (IS_ERR(us))
diff --git a/drivers/usb/gadget/function/f_uvc.c b/drivers/usb/gadget/function/f_uvc.c
index 44b4352..f48a00e 100644
--- a/drivers/usb/gadget/function/f_uvc.c
+++ b/drivers/usb/gadget/function/f_uvc.c
@@ -633,7 +633,12 @@ uvc_function_bind(struct usb_configuration *c, struct usb_function *f)
uvc_hs_streaming_ep.wMaxPacketSize =
cpu_to_le16(max_packet_size | ((max_packet_mult - 1) << 11));
- uvc_hs_streaming_ep.bInterval = opts->streaming_interval;
+
+ /* A high-bandwidth endpoint must specify a bInterval value of 1 */
+ if (max_packet_mult > 1)
+ uvc_hs_streaming_ep.bInterval = 1;
+ else
+ uvc_hs_streaming_ep.bInterval = opts->streaming_interval;
uvc_ss_streaming_ep.wMaxPacketSize = cpu_to_le16(max_packet_size);
uvc_ss_streaming_ep.bInterval = opts->streaming_interval;
@@ -817,6 +822,7 @@ static struct usb_function_instance *uvc_alloc_inst(void)
pd->bmControls[0] = 1;
pd->bmControls[1] = 0;
pd->iProcessing = 0;
+ pd->bmVideoStandards = 0;
od = &opts->uvc_output_terminal;
od->bLength = UVC_DT_OUTPUT_TERMINAL_SIZE;
diff --git a/drivers/usb/gadget/legacy/webcam.c b/drivers/usb/gadget/legacy/webcam.c
index a9f8eb8..2c9eab2 100644
--- a/drivers/usb/gadget/legacy/webcam.c
+++ b/drivers/usb/gadget/legacy/webcam.c
@@ -125,6 +125,7 @@ static const struct uvc_processing_unit_descriptor uvc_processing = {
.bmControls[0] = 1,
.bmControls[1] = 0,
.iProcessing = 0,
+ .bmVideoStandards = 0,
};
static const struct uvc_output_terminal_descriptor uvc_output_terminal = {
diff --git a/drivers/usb/gadget/udc/aspeed-vhub/core.c b/drivers/usb/gadget/udc/aspeed-vhub/core.c
index be7bb64..d11d3d1 100644
--- a/drivers/usb/gadget/udc/aspeed-vhub/core.c
+++ b/drivers/usb/gadget/udc/aspeed-vhub/core.c
@@ -36,6 +36,7 @@ void ast_vhub_done(struct ast_vhub_ep *ep, struct ast_vhub_req *req,
int status)
{
bool internal = req->internal;
+ struct ast_vhub *vhub = ep->vhub;
EPVDBG(ep, "completing request @%p, status %d\n", req, status);
@@ -46,7 +47,7 @@ void ast_vhub_done(struct ast_vhub_ep *ep, struct ast_vhub_req *req,
if (req->req.dma) {
if (!WARN_ON(!ep->dev))
- usb_gadget_unmap_request(&ep->dev->gadget,
+ usb_gadget_unmap_request_by_dev(&vhub->pdev->dev,
&req->req, ep->epn.is_in);
req->req.dma = 0;
}
diff --git a/drivers/usb/gadget/udc/aspeed-vhub/epn.c b/drivers/usb/gadget/udc/aspeed-vhub/epn.c
index 02d8bfa..cb164c6 100644
--- a/drivers/usb/gadget/udc/aspeed-vhub/epn.c
+++ b/drivers/usb/gadget/udc/aspeed-vhub/epn.c
@@ -376,7 +376,7 @@ static int ast_vhub_epn_queue(struct usb_ep* u_ep, struct usb_request *u_req,
if (ep->epn.desc_mode ||
((((unsigned long)u_req->buf & 7) == 0) &&
(ep->epn.is_in || !(u_req->length & (u_ep->maxpacket - 1))))) {
- rc = usb_gadget_map_request(&ep->dev->gadget, u_req,
+ rc = usb_gadget_map_request_by_dev(&vhub->pdev->dev, u_req,
ep->epn.is_in);
if (rc) {
dev_warn(&vhub->pdev->dev,
diff --git a/drivers/usb/gadget/udc/dummy_hcd.c b/drivers/usb/gadget/udc/dummy_hcd.c
index 17704ee..92d01dd 100644
--- a/drivers/usb/gadget/udc/dummy_hcd.c
+++ b/drivers/usb/gadget/udc/dummy_hcd.c
@@ -901,6 +901,21 @@ static int dummy_pullup(struct usb_gadget *_gadget, int value)
spin_lock_irqsave(&dum->lock, flags);
dum->pullup = (value != 0);
set_link_state(dum_hcd);
+ if (value == 0) {
+ /*
+ * Emulate synchronize_irq(): wait for callbacks to finish.
+ * This seems to be the best place to emulate the call to
+ * synchronize_irq() that's in usb_gadget_remove_driver().
+ * Doing it in dummy_udc_stop() would be too late since it
+ * is called after the unbind callback and unbind shouldn't
+ * be invoked until all the other callbacks are finished.
+ */
+ while (dum->callback_usage > 0) {
+ spin_unlock_irqrestore(&dum->lock, flags);
+ usleep_range(1000, 2000);
+ spin_lock_irqsave(&dum->lock, flags);
+ }
+ }
spin_unlock_irqrestore(&dum->lock, flags);
usb_hcd_poll_rh_status(dummy_hcd_to_hcd(dum_hcd));
@@ -1002,14 +1017,6 @@ static int dummy_udc_stop(struct usb_gadget *g)
spin_lock_irq(&dum->lock);
dum->ints_enabled = 0;
stop_activity(dum);
-
- /* emulate synchronize_irq(): wait for callbacks to finish */
- while (dum->callback_usage > 0) {
- spin_unlock_irq(&dum->lock);
- usleep_range(1000, 2000);
- spin_lock_irq(&dum->lock);
- }
-
dum->driver = NULL;
spin_unlock_irq(&dum->lock);
diff --git a/drivers/usb/gadget/udc/fotg210-udc.c b/drivers/usb/gadget/udc/fotg210-udc.c
index d6ca50f..75bf446 100644
--- a/drivers/usb/gadget/udc/fotg210-udc.c
+++ b/drivers/usb/gadget/udc/fotg210-udc.c
@@ -338,15 +338,16 @@ static void fotg210_start_dma(struct fotg210_ep *ep,
} else {
buffer = req->req.buf + req->req.actual;
length = ioread32(ep->fotg210->reg +
- FOTG210_FIBCR(ep->epnum - 1));
- length &= FIBCR_BCFX;
+ FOTG210_FIBCR(ep->epnum - 1)) & FIBCR_BCFX;
+ if (length > req->req.length - req->req.actual)
+ length = req->req.length - req->req.actual;
}
} else {
buffer = req->req.buf + req->req.actual;
if (req->req.length - req->req.actual > ep->ep.maxpacket)
length = ep->ep.maxpacket;
else
- length = req->req.length;
+ length = req->req.length - req->req.actual;
}
d = dma_map_single(dev, buffer, length,
@@ -379,8 +380,7 @@ static void fotg210_ep0_queue(struct fotg210_ep *ep,
}
if (ep->dir_in) { /* if IN */
fotg210_start_dma(ep, req);
- if ((req->req.length == req->req.actual) ||
- (req->req.actual < ep->ep.maxpacket))
+ if (req->req.length == req->req.actual)
fotg210_done(ep, req, 0);
} else { /* OUT */
u32 value = ioread32(ep->fotg210->reg + FOTG210_DMISGR0);
@@ -820,7 +820,7 @@ static void fotg210_ep0in(struct fotg210_udc *fotg210)
if (req->req.length)
fotg210_start_dma(ep, req);
- if ((req->req.length - req->req.actual) < ep->ep.maxpacket)
+ if (req->req.actual == req->req.length)
fotg210_done(ep, req, 0);
} else {
fotg210_set_cxdone(fotg210);
@@ -849,12 +849,16 @@ static void fotg210_out_fifo_handler(struct fotg210_ep *ep)
{
struct fotg210_request *req = list_entry(ep->queue.next,
struct fotg210_request, queue);
+ int disgr1 = ioread32(ep->fotg210->reg + FOTG210_DISGR1);
fotg210_start_dma(ep, req);
- /* finish out transfer */
+ /* Complete the request when it's full or a short packet arrived.
+ * Like other drivers, short_not_ok isn't handled.
+ */
+
if (req->req.length == req->req.actual ||
- req->req.actual < ep->ep.maxpacket)
+ (disgr1 & DISGR1_SPK_INT(ep->epnum - 1)))
fotg210_done(ep, req, 0);
}
@@ -1027,6 +1031,12 @@ static void fotg210_init(struct fotg210_udc *fotg210)
value &= ~DMCR_GLINT_EN;
iowrite32(value, fotg210->reg + FOTG210_DMCR);
+ /* enable only grp2 irqs we handle */
+ iowrite32(~(DISGR2_DMA_ERROR | DISGR2_RX0BYTE_INT | DISGR2_TX0BYTE_INT
+ | DISGR2_ISO_SEQ_ABORT_INT | DISGR2_ISO_SEQ_ERR_INT
+ | DISGR2_RESM_INT | DISGR2_SUSP_INT | DISGR2_USBRST_INT),
+ fotg210->reg + FOTG210_DMISGR2);
+
/* disable all fifo interrupt */
iowrite32(~(u32)0, fotg210->reg + FOTG210_DMISGR1);
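Editorial note on the fotg210-udc hunks above (not part of the patch): the fix is to never program a DMA length larger than what is still outstanding in the request, i.e. clamp the hardware-reported byte count to req.length - req.actual. A standalone sketch of that clamp; the struct and values are illustrative.

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical request state mirroring the usb_request fields used above. */
    struct req_state {
            uint32_t length;   /* total bytes requested */
            uint32_t actual;   /* bytes already transferred */
    };

    /* Clamp a hardware-reported byte count to what is still outstanding. */
    static uint32_t clamp_dma_length(const struct req_state *req, uint32_t hw_len)
    {
            uint32_t remaining = req->length - req->actual;

            return hw_len > remaining ? remaining : hw_len;
    }

    int main(void)
    {
            struct req_state req = { .length = 512, .actual = 448 };

            /* FIFO byte count register claims 128 bytes, but only 64 remain. */
            printf("dma length = %u\n", clamp_dma_length(&req, 128)); /* prints 64 */
            return 0;
    }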
diff --git a/drivers/usb/gadget/udc/pch_udc.c b/drivers/usb/gadget/udc/pch_udc.c
index a3c1fc9..fd3656d 100644
--- a/drivers/usb/gadget/udc/pch_udc.c
+++ b/drivers/usb/gadget/udc/pch_udc.c
@@ -7,12 +7,14 @@
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/delay.h>
+#include <linux/dmi.h>
#include <linux/errno.h>
+#include <linux/gpio/consumer.h>
+#include <linux/gpio/machine.h>
#include <linux/list.h>
#include <linux/interrupt.h>
#include <linux/usb/ch9.h>
#include <linux/usb/gadget.h>
-#include <linux/gpio/consumer.h>
#include <linux/irq.h>
#define PCH_VBUS_PERIOD 3000 /* VBUS polling period (msec) */
@@ -596,18 +598,22 @@ static void pch_udc_reconnect(struct pch_udc_dev *dev)
static inline void pch_udc_vbus_session(struct pch_udc_dev *dev,
int is_active)
{
+ unsigned long iflags;
+
+ spin_lock_irqsave(&dev->lock, iflags);
if (is_active) {
pch_udc_reconnect(dev);
dev->vbus_session = 1;
} else {
if (dev->driver && dev->driver->disconnect) {
- spin_lock(&dev->lock);
+ spin_unlock_irqrestore(&dev->lock, iflags);
dev->driver->disconnect(&dev->gadget);
- spin_unlock(&dev->lock);
+ spin_lock_irqsave(&dev->lock, iflags);
}
pch_udc_set_disconnect(dev);
dev->vbus_session = 0;
}
+ spin_unlock_irqrestore(&dev->lock, iflags);
}
/**
@@ -1166,20 +1172,25 @@ static int pch_udc_pcd_selfpowered(struct usb_gadget *gadget, int value)
static int pch_udc_pcd_pullup(struct usb_gadget *gadget, int is_on)
{
struct pch_udc_dev *dev;
+ unsigned long iflags;
if (!gadget)
return -EINVAL;
+
dev = container_of(gadget, struct pch_udc_dev, gadget);
+
+ spin_lock_irqsave(&dev->lock, iflags);
if (is_on) {
pch_udc_reconnect(dev);
} else {
if (dev->driver && dev->driver->disconnect) {
- spin_lock(&dev->lock);
+ spin_unlock_irqrestore(&dev->lock, iflags);
dev->driver->disconnect(&dev->gadget);
- spin_unlock(&dev->lock);
+ spin_lock_irqsave(&dev->lock, iflags);
}
pch_udc_set_disconnect(dev);
}
+ spin_unlock_irqrestore(&dev->lock, iflags);
return 0;
}
@@ -1350,6 +1361,43 @@ static irqreturn_t pch_vbus_gpio_irq(int irq, void *data)
return IRQ_HANDLED;
}
+static struct gpiod_lookup_table minnowboard_udc_gpios = {
+ .dev_id = "0000:02:02.4",
+ .table = {
+ GPIO_LOOKUP("sch_gpio.33158", 12, NULL, GPIO_ACTIVE_HIGH),
+ {}
+ },
+};
+
+static const struct dmi_system_id pch_udc_gpio_dmi_table[] = {
+ {
+ .ident = "MinnowBoard",
+ .matches = {
+ DMI_MATCH(DMI_BOARD_NAME, "MinnowBoard"),
+ },
+ .driver_data = &minnowboard_udc_gpios,
+ },
+ { }
+};
+
+static void pch_vbus_gpio_remove_table(void *table)
+{
+ gpiod_remove_lookup_table(table);
+}
+
+static int pch_vbus_gpio_add_table(struct pch_udc_dev *dev)
+{
+ struct device *d = &dev->pdev->dev;
+ const struct dmi_system_id *dmi;
+
+ dmi = dmi_first_match(pch_udc_gpio_dmi_table);
+ if (!dmi)
+ return 0;
+
+ gpiod_add_lookup_table(dmi->driver_data);
+ return devm_add_action_or_reset(d, pch_vbus_gpio_remove_table, dmi->driver_data);
+}
+
/**
* pch_vbus_gpio_init() - This API initializes GPIO port detecting VBUS.
* @dev: Reference to the driver structure
@@ -1360,6 +1408,7 @@ static irqreturn_t pch_vbus_gpio_irq(int irq, void *data)
*/
static int pch_vbus_gpio_init(struct pch_udc_dev *dev)
{
+ struct device *d = &dev->pdev->dev;
int err;
int irq_num = 0;
struct gpio_desc *gpiod;
@@ -1367,8 +1416,12 @@ static int pch_vbus_gpio_init(struct pch_udc_dev *dev)
dev->vbus_gpio.port = NULL;
dev->vbus_gpio.intr = 0;
+ err = pch_vbus_gpio_add_table(dev);
+ if (err)
+ return err;
+
/* Retrieve the GPIO line from the USB gadget device */
- gpiod = devm_gpiod_get(dev->gadget.dev.parent, NULL, GPIOD_IN);
+ gpiod = devm_gpiod_get_optional(d, NULL, GPIOD_IN);
if (IS_ERR(gpiod))
return PTR_ERR(gpiod);
gpiod_set_consumer_name(gpiod, "pch_vbus");
@@ -1756,7 +1809,7 @@ static struct usb_request *pch_udc_alloc_request(struct usb_ep *usbep,
}
/* prevent from using desc. - set HOST BUSY */
dma_desc->status |= PCH_UDC_BS_HST_BSY;
- dma_desc->dataptr = cpu_to_le32(DMA_ADDR_INVALID);
+ dma_desc->dataptr = lower_32_bits(DMA_ADDR_INVALID);
req->td_data = dma_desc;
req->td_data_last = dma_desc;
req->chain_len = 1;
@@ -2298,6 +2351,21 @@ static void pch_udc_svc_data_out(struct pch_udc_dev *dev, int ep_num)
pch_udc_set_dma(dev, DMA_DIR_RX);
}
+static int pch_udc_gadget_setup(struct pch_udc_dev *dev)
+ __must_hold(&dev->lock)
+{
+ int rc;
+
+ /* In some cases we can get an interrupt before driver gets setup */
+ if (!dev->driver)
+ return -ESHUTDOWN;
+
+ spin_unlock(&dev->lock);
+ rc = dev->driver->setup(&dev->gadget, &dev->setup_data);
+ spin_lock(&dev->lock);
+ return rc;
+}
+
/**
* pch_udc_svc_control_in() - Handle Control IN endpoint interrupts
* @dev: Reference to the device structure
@@ -2369,15 +2437,12 @@ static void pch_udc_svc_control_out(struct pch_udc_dev *dev)
dev->gadget.ep0 = &dev->ep[UDC_EP0IN_IDX].ep;
else /* OUT */
dev->gadget.ep0 = &ep->ep;
- spin_lock(&dev->lock);
/* If Mass storage Reset */
if ((dev->setup_data.bRequestType == 0x21) &&
(dev->setup_data.bRequest == 0xFF))
dev->prot_stall = 0;
/* call gadget with setup data received */
- setup_supported = dev->driver->setup(&dev->gadget,
- &dev->setup_data);
- spin_unlock(&dev->lock);
+ setup_supported = pch_udc_gadget_setup(dev);
if (dev->setup_data.bRequestType & USB_DIR_IN) {
ep->td_data->status = (ep->td_data->status &
@@ -2625,9 +2690,7 @@ static void pch_udc_svc_intf_interrupt(struct pch_udc_dev *dev)
dev->ep[i].halted = 0;
}
dev->stall = 0;
- spin_unlock(&dev->lock);
- dev->driver->setup(&dev->gadget, &dev->setup_data);
- spin_lock(&dev->lock);
+ pch_udc_gadget_setup(dev);
}
/**
@@ -2662,9 +2725,7 @@ static void pch_udc_svc_cfg_interrupt(struct pch_udc_dev *dev)
dev->stall = 0;
/* call gadget zero with setup data received */
- spin_unlock(&dev->lock);
- dev->driver->setup(&dev->gadget, &dev->setup_data);
- spin_lock(&dev->lock);
+ pch_udc_gadget_setup(dev);
}
/**
@@ -2870,14 +2931,20 @@ static void pch_udc_pcd_reinit(struct pch_udc_dev *dev)
* @dev: Reference to the driver structure
*
* Return codes:
- * 0: Success
+ * 0: Success
+ * -%ERRNO: All kinds of errors when retrieving VBUS GPIO

*/
static int pch_udc_pcd_init(struct pch_udc_dev *dev)
{
+ int ret;
+
pch_udc_init(dev);
pch_udc_pcd_reinit(dev);
- pch_vbus_gpio_init(dev);
- return 0;
+
+ ret = pch_vbus_gpio_init(dev);
+ if (ret)
+ pch_udc_exit(dev);
+ return ret;
}
/**
@@ -2938,7 +3005,7 @@ static int init_dma_pools(struct pch_udc_dev *dev)
dev->dma_addr = dma_map_single(&dev->pdev->dev, ep0out_buf,
UDC_EP0OUT_BUFF_SIZE * 4,
DMA_FROM_DEVICE);
- return 0;
+ return dma_mapping_error(&dev->pdev->dev, dev->dma_addr);
}
static int pch_udc_start(struct usb_gadget *g,
@@ -3063,6 +3130,7 @@ static int pch_udc_probe(struct pci_dev *pdev,
if (retval)
return retval;
+ dev->pdev = pdev;
pci_set_drvdata(pdev, dev);
/* Determine BAR based on PCI ID */
@@ -3078,16 +3146,10 @@ static int pch_udc_probe(struct pci_dev *pdev,
dev->base_addr = pcim_iomap_table(pdev)[bar];
- /*
- * FIXME: add a GPIO descriptor table to pdev.dev using
- * gpiod_add_descriptor_table() from <linux/gpio/machine.h> based on
- * the PCI subsystem ID. The system-dependent GPIO is necessary for
- * VBUS operation.
- */
-
/* initialize the hardware */
- if (pch_udc_pcd_init(dev))
- return -ENODEV;
+ retval = pch_udc_pcd_init(dev);
+ if (retval)
+ return retval;
pci_enable_msi(pdev);
@@ -3104,7 +3166,6 @@ static int pch_udc_probe(struct pci_dev *pdev,
/* device struct setup */
spin_lock_init(&dev->lock);
- dev->pdev = pdev;
dev->gadget.ops = &pch_udc_ops;
retval = init_dma_pools(dev);
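Editorial note on the pch_udc hunks above (not part of the patch): pch_udc_gadget_setup() centralizes two things -- bail out if an interrupt arrives before a function driver is bound, and release the UDC lock around the ->setup() callback, since the gadget driver may sleep or call back into the UDC. A userspace analogue of that unlock-around-callback pattern with a pthread mutex; the names are illustrative.

    #include <pthread.h>
    #include <stdio.h>
    #include <errno.h>

    struct udc {
            pthread_mutex_t lock;
            int (*setup)(struct udc *udc);   /* NULL until a driver binds */
    };

    static int udc_call_setup(struct udc *udc)
    {
            int rc;

            /* caller holds udc->lock, mirroring __must_hold(&dev->lock) */
            if (!udc->setup)
                    return -ESHUTDOWN;       /* interrupt before bind */

            pthread_mutex_unlock(&udc->lock);
            rc = udc->setup(udc);            /* may sleep / re-enter the UDC */
            pthread_mutex_lock(&udc->lock);
            return rc;
    }

    static int demo_setup(struct udc *udc) { (void)udc; return 0; }

    int main(void)
    {
            struct udc udc = { .lock = PTHREAD_MUTEX_INITIALIZER, .setup = demo_setup };

            pthread_mutex_lock(&udc.lock);
            printf("setup rc = %d\n", udc_call_setup(&udc));
            pthread_mutex_unlock(&udc.lock);
            return 0;
    }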
diff --git a/drivers/usb/gadget/udc/r8a66597-udc.c b/drivers/usb/gadget/udc/r8a66597-udc.c
index 896c1a0..65cae48 100644
--- a/drivers/usb/gadget/udc/r8a66597-udc.c
+++ b/drivers/usb/gadget/udc/r8a66597-udc.c
@@ -1849,6 +1849,8 @@ static int r8a66597_probe(struct platform_device *pdev)
return PTR_ERR(reg);
ires = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
+ if (!ires)
+ return -EINVAL;
irq = ires->start;
irq_trigger = ires->flags & IRQF_TRIGGER_MASK;
diff --git a/drivers/usb/gadget/udc/renesas_usb3.c b/drivers/usb/gadget/udc/renesas_usb3.c
index 0c418ce..f1b35a3 100644
--- a/drivers/usb/gadget/udc/renesas_usb3.c
+++ b/drivers/usb/gadget/udc/renesas_usb3.c
@@ -1488,7 +1488,7 @@ static void usb3_start_pipen(struct renesas_usb3_ep *usb3_ep,
struct renesas_usb3_request *usb3_req)
{
struct renesas_usb3 *usb3 = usb3_ep_to_usb3(usb3_ep);
- struct renesas_usb3_request *usb3_req_first = usb3_get_request(usb3_ep);
+ struct renesas_usb3_request *usb3_req_first;
unsigned long flags;
int ret = -EAGAIN;
u32 enable_bits = 0;
@@ -1496,7 +1496,8 @@ static void usb3_start_pipen(struct renesas_usb3_ep *usb3_ep,
spin_lock_irqsave(&usb3->lock, flags);
if (usb3_ep->halt || usb3_ep->started)
goto out;
- if (usb3_req != usb3_req_first)
+ usb3_req_first = __usb3_get_request(usb3_ep);
+ if (!usb3_req_first || usb3_req != usb3_req_first)
goto out;
if (usb3_pn_change(usb3, usb3_ep->num) < 0)
diff --git a/drivers/usb/gadget/udc/s3c2410_udc.c b/drivers/usb/gadget/udc/s3c2410_udc.c
index 1d3ebb0..b154b62 100644
--- a/drivers/usb/gadget/udc/s3c2410_udc.c
+++ b/drivers/usb/gadget/udc/s3c2410_udc.c
@@ -54,8 +54,6 @@ static struct clk *udc_clock;
static struct clk *usb_bus_clock;
static void __iomem *base_addr;
static int irq_usbd;
-static u64 rsrc_start;
-static u64 rsrc_len;
static struct dentry *s3c2410_udc_debugfs_root;
static inline u32 udc_read(u32 reg)
@@ -1752,7 +1750,8 @@ static int s3c2410_udc_probe(struct platform_device *pdev)
udc_clock = clk_get(NULL, "usb-device");
if (IS_ERR(udc_clock)) {
dev_err(dev, "failed to get udc clock source\n");
- return PTR_ERR(udc_clock);
+ retval = PTR_ERR(udc_clock);
+ goto err_usb_bus_clk;
}
clk_prepare_enable(udc_clock);
@@ -1775,7 +1774,7 @@ static int s3c2410_udc_probe(struct platform_device *pdev)
base_addr = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(base_addr)) {
retval = PTR_ERR(base_addr);
- goto err_mem;
+ goto err_udc_clk;
}
the_controller = udc;
@@ -1793,7 +1792,7 @@ static int s3c2410_udc_probe(struct platform_device *pdev)
if (retval != 0) {
dev_err(dev, "cannot get irq %i, err %d\n", irq_usbd, retval);
retval = -EBUSY;
- goto err_map;
+ goto err_udc_clk;
}
dev_dbg(dev, "got irq %i\n", irq_usbd);
@@ -1864,10 +1863,14 @@ static int s3c2410_udc_probe(struct platform_device *pdev)
gpio_free(udc_info->vbus_pin);
err_int:
free_irq(irq_usbd, udc);
-err_map:
- iounmap(base_addr);
-err_mem:
- release_mem_region(rsrc_start, rsrc_len);
+err_udc_clk:
+ clk_disable_unprepare(udc_clock);
+ clk_put(udc_clock);
+ udc_clock = NULL;
+err_usb_bus_clk:
+ clk_disable_unprepare(usb_bus_clock);
+ clk_put(usb_bus_clock);
+ usb_bus_clock = NULL;
return retval;
}
@@ -1899,9 +1902,6 @@ static int s3c2410_udc_remove(struct platform_device *pdev)
free_irq(irq_usbd, udc);
- iounmap(base_addr);
- release_mem_region(rsrc_start, rsrc_len);
-
if (!IS_ERR(udc_clock) && udc_clock != NULL) {
clk_disable_unprepare(udc_clock);
clk_put(udc_clock);
diff --git a/drivers/usb/gadget/udc/snps_udc_plat.c b/drivers/usb/gadget/udc/snps_udc_plat.c
index 32f1d3e..99805d6 100644
--- a/drivers/usb/gadget/udc/snps_udc_plat.c
+++ b/drivers/usb/gadget/udc/snps_udc_plat.c
@@ -114,8 +114,8 @@ static int udc_plat_probe(struct platform_device *pdev)
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
udc->virt_addr = devm_ioremap_resource(dev, res);
- if (IS_ERR(udc->regs))
- return PTR_ERR(udc->regs);
+ if (IS_ERR(udc->virt_addr))
+ return PTR_ERR(udc->virt_addr);
/* udc csr registers base */
udc->csr = udc->virt_addr + UDC_CSR_ADDR;
diff --git a/drivers/usb/gadget/udc/tegra-xudc.c b/drivers/usb/gadget/udc/tegra-xudc.c
index 580bef8..2319c97 100644
--- a/drivers/usb/gadget/udc/tegra-xudc.c
+++ b/drivers/usb/gadget/udc/tegra-xudc.c
@@ -3883,7 +3883,7 @@ static int tegra_xudc_remove(struct platform_device *pdev)
pm_runtime_get_sync(xudc->dev);
- cancel_delayed_work(&xudc->plc_reset_work);
+ cancel_delayed_work_sync(&xudc->plc_reset_work);
cancel_work_sync(&xudc->usb_role_sw_work);
usb_del_gadget_udc(&xudc->gadget);
diff --git a/drivers/usb/host/fotg210-hcd.c b/drivers/usb/host/fotg210-hcd.c
index 1d94fcf..bd958f0 100644
--- a/drivers/usb/host/fotg210-hcd.c
+++ b/drivers/usb/host/fotg210-hcd.c
@@ -5568,7 +5568,7 @@ static int fotg210_hcd_probe(struct platform_device *pdev)
struct usb_hcd *hcd;
struct resource *res;
int irq;
- int retval = -ENODEV;
+ int retval;
struct fotg210_hcd *fotg210;
if (usb_disabled())
@@ -5588,7 +5588,7 @@ static int fotg210_hcd_probe(struct platform_device *pdev)
hcd = usb_create_hcd(&fotg210_fotg210_hc_driver, dev,
dev_name(dev));
if (!hcd) {
- dev_err(dev, "failed to create hcd with err %d\n", retval);
+ dev_err(dev, "failed to create hcd\n");
retval = -ENOMEM;
goto fail_create_hcd;
}
diff --git a/drivers/usb/host/sl811-hcd.c b/drivers/usb/host/sl811-hcd.c
index adaf406..9465fce 100644
--- a/drivers/usb/host/sl811-hcd.c
+++ b/drivers/usb/host/sl811-hcd.c
@@ -1287,11 +1287,10 @@ sl811h_hub_control(
goto error;
put_unaligned_le32(sl811->port1, buf);
-#ifndef VERBOSE
- if (*(u16*)(buf+2)) /* only if wPortChange is interesting */
-#endif
- dev_dbg(hcd->self.controller, "GetPortStatus %08x\n",
- sl811->port1);
+ if (__is_defined(VERBOSE) ||
+ *(u16*)(buf+2)) /* only if wPortChange is interesting */
+ dev_dbg(hcd->self.controller, "GetPortStatus %08x\n",
+ sl811->port1);
break;
case SetPortFeature:
if (wIndex != 1 || wLength != 0)
diff --git a/drivers/usb/host/xhci-ext-caps.h b/drivers/usb/host/xhci-ext-caps.h
index fa59b24..e8af0a1 100644
--- a/drivers/usb/host/xhci-ext-caps.h
+++ b/drivers/usb/host/xhci-ext-caps.h
@@ -7,8 +7,9 @@
* Author: Sarah Sharp
* Some code borrowed from the Linux EHCI driver.
*/
-/* Up to 16 ms to halt an HC */
-#define XHCI_MAX_HALT_USEC (16*1000)
+
+/* HC should halt within 16 ms, but use 32 ms as some hosts take longer */
+#define XHCI_MAX_HALT_USEC (32 * 1000)
/* HC not running - set to 1 when run/stop bit is cleared. */
#define XHCI_STS_HALT (1<<0)
diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
index 138ba45..8ce043e 100644
--- a/drivers/usb/host/xhci-mem.c
+++ b/drivers/usb/host/xhci-mem.c
@@ -2143,6 +2143,15 @@ static void xhci_add_in_port(struct xhci_hcd *xhci, unsigned int num_ports,
if (major_revision == 0x03) {
rhub = &xhci->usb3_rhub;
+ /*
+ * Some hosts incorrectly use sub-minor version for minor
+ * version (i.e. 0x02 instead of 0x20 for bcdUSB 0x320 and 0x01
+ * for bcdUSB 0x310). Since there is no USB release with sub
+ * minor version 0x301 to 0x309, we can assume that they are
+ * incorrect and fix it here.
+ */
+ if (minor_revision > 0x00 && minor_revision < 0x10)
+ minor_revision <<= 4;
} else if (major_revision <= 0x02) {
rhub = &xhci->usb2_rhub;
} else {
@@ -2254,6 +2263,9 @@ static void xhci_create_rhub_port_array(struct xhci_hcd *xhci,
return;
rhub->ports = kcalloc_node(rhub->num_ports, sizeof(*rhub->ports),
flags, dev_to_node(dev));
+ if (!rhub->ports)
+ return;
+
for (i = 0; i < HCS_MAX_PORTS(xhci->hcs_params1); i++) {
if (xhci->hw_ports[i].rhub != rhub ||
xhci->hw_ports[i].hcd_portnum == DUPLICATE_ENTRY)
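Editorial note on the xhci-mem hunk above (not part of the patch): since no USB3 release uses a minor revision between 0x01 and 0x0f, such values can only be a misplaced sub-minor digit and are shifted into the minor nibble. A standalone sketch of that normalization; the test values are illustrative.

    #include <stdio.h>
    #include <stdint.h>

    /*
     * Hosts reporting bcdUSB 0x310/0x320 sometimes put the sub-minor digit in
     * the minor field (0x01/0x02 instead of 0x10/0x20).  Values 0x01..0x0f are
     * impossible as real minor revisions, so shift them into place.
     */
    static uint8_t fix_usb3_minor_revision(uint8_t minor_revision)
    {
            if (minor_revision > 0x00 && minor_revision < 0x10)
                    minor_revision <<= 4;
            return minor_revision;
    }

    int main(void)
    {
            printf("0x02 -> 0x%02x\n", fix_usb3_minor_revision(0x02)); /* 0x20 */
            printf("0x10 -> 0x%02x\n", fix_usb3_minor_revision(0x10)); /* unchanged */
            return 0;
    }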
diff --git a/drivers/usb/host/xhci-mtk-sch.c b/drivers/usb/host/xhci-mtk-sch.c
index b45e5bf..8950d1f 100644
--- a/drivers/usb/host/xhci-mtk-sch.c
+++ b/drivers/usb/host/xhci-mtk-sch.c
@@ -378,6 +378,31 @@ static void update_bus_bw(struct mu3h_sch_bw_info *sch_bw,
sch_ep->allocated = used;
}
+static int check_fs_bus_bw(struct mu3h_sch_ep_info *sch_ep, int offset)
+{
+ struct mu3h_sch_tt *tt = sch_ep->sch_tt;
+ u32 num_esit, tmp;
+ int base;
+ int i, j;
+
+ num_esit = XHCI_MTK_MAX_ESIT / sch_ep->esit;
+ for (i = 0; i < num_esit; i++) {
+ base = offset + i * sch_ep->esit;
+
+ /*
+ * Compared with hs bus, no matter what ep type,
+ * the hub will always delay one uframe to send data
+ */
+ for (j = 0; j < sch_ep->cs_count; j++) {
+ tmp = tt->fs_bus_bw[base + j] + sch_ep->bw_cost_per_microframe;
+ if (tmp > FS_PAYLOAD_MAX)
+ return -ERANGE;
+ }
+ }
+
+ return 0;
+}
+
static int check_sch_tt(struct usb_device *udev,
struct mu3h_sch_ep_info *sch_ep, u32 offset)
{
@@ -402,7 +427,7 @@ static int check_sch_tt(struct usb_device *udev,
return -ERANGE;
for (i = 0; i < sch_ep->cs_count; i++)
- if (test_bit(offset + i, tt->split_bit_map))
+ if (test_bit(offset + i, tt->ss_bit_map))
return -ERANGE;
} else {
@@ -432,7 +457,7 @@ static int check_sch_tt(struct usb_device *udev,
cs_count = 7; /* HW limit */
for (i = 0; i < cs_count + 2; i++) {
- if (test_bit(offset + i, tt->split_bit_map))
+ if (test_bit(offset + i, tt->ss_bit_map))
return -ERANGE;
}
@@ -448,24 +473,44 @@ static int check_sch_tt(struct usb_device *udev,
sch_ep->num_budget_microframes = sch_ep->esit;
}
- return 0;
+ return check_fs_bus_bw(sch_ep, offset);
}
static void update_sch_tt(struct usb_device *udev,
- struct mu3h_sch_ep_info *sch_ep)
+ struct mu3h_sch_ep_info *sch_ep, bool used)
{
struct mu3h_sch_tt *tt = sch_ep->sch_tt;
u32 base, num_esit;
+ int bw_updated;
+ int bits;
int i, j;
num_esit = XHCI_MTK_MAX_ESIT / sch_ep->esit;
+ bits = (sch_ep->ep_type == ISOC_OUT_EP) ? sch_ep->cs_count : 1;
+
+ if (used)
+ bw_updated = sch_ep->bw_cost_per_microframe;
+ else
+ bw_updated = -sch_ep->bw_cost_per_microframe;
+
for (i = 0; i < num_esit; i++) {
base = sch_ep->offset + i * sch_ep->esit;
- for (j = 0; j < sch_ep->num_budget_microframes; j++)
- set_bit(base + j, tt->split_bit_map);
+
+ for (j = 0; j < bits; j++) {
+ if (used)
+ set_bit(base + j, tt->ss_bit_map);
+ else
+ clear_bit(base + j, tt->ss_bit_map);
+ }
+
+ for (j = 0; j < sch_ep->cs_count; j++)
+ tt->fs_bus_bw[base + j] += bw_updated;
}
- list_add_tail(&sch_ep->tt_endpoint, &tt->ep_list);
+ if (used)
+ list_add_tail(&sch_ep->tt_endpoint, &tt->ep_list);
+ else
+ list_del(&sch_ep->tt_endpoint);
}
static int check_sch_bw(struct usb_device *udev,
@@ -535,7 +580,7 @@ static int check_sch_bw(struct usb_device *udev,
if (!tt_offset_ok)
return -ERANGE;
- update_sch_tt(udev, sch_ep);
+ update_sch_tt(udev, sch_ep, 1);
}
/* update bus bandwidth info */
@@ -548,15 +593,16 @@ static void destroy_sch_ep(struct usb_device *udev,
struct mu3h_sch_bw_info *sch_bw, struct mu3h_sch_ep_info *sch_ep)
{
/* only release ep bw check passed by check_sch_bw() */
- if (sch_ep->allocated)
+ if (sch_ep->allocated) {
update_bus_bw(sch_bw, sch_ep, 0);
+ if (sch_ep->sch_tt)
+ update_sch_tt(udev, sch_ep, 0);
+ }
+
+ if (sch_ep->sch_tt)
+ drop_tt(udev);
list_del(&sch_ep->endpoint);
-
- if (sch_ep->sch_tt) {
- list_del(&sch_ep->tt_endpoint);
- drop_tt(udev);
- }
kfree(sch_ep);
}
@@ -643,7 +689,7 @@ int xhci_mtk_add_ep_quirk(struct usb_hcd *hcd, struct usb_device *udev,
*/
if (usb_endpoint_xfer_int(&ep->desc)
|| usb_endpoint_xfer_isoc(&ep->desc))
- ep_ctx->reserved[0] |= cpu_to_le32(EP_BPKTS(1));
+ ep_ctx->reserved[0] = cpu_to_le32(EP_BPKTS(1));
return 0;
}
@@ -730,10 +776,10 @@ int xhci_mtk_check_bandwidth(struct usb_hcd *hcd, struct usb_device *udev)
list_move_tail(&sch_ep->endpoint, &sch_bw->bw_ep_list);
ep_ctx = xhci_get_ep_ctx(xhci, virt_dev->in_ctx, ep_index);
- ep_ctx->reserved[0] |= cpu_to_le32(EP_BPKTS(sch_ep->pkts)
+ ep_ctx->reserved[0] = cpu_to_le32(EP_BPKTS(sch_ep->pkts)
| EP_BCSCOUNT(sch_ep->cs_count)
| EP_BBM(sch_ep->burst_mode));
- ep_ctx->reserved[1] |= cpu_to_le32(EP_BOFFSET(sch_ep->offset)
+ ep_ctx->reserved[1] = cpu_to_le32(EP_BOFFSET(sch_ep->offset)
| EP_BREPEAT(sch_ep->repeat));
xhci_dbg(xhci, " PKTS:%x, CSCOUNT:%x, BM:%x, OFFSET:%x, REPEAT:%x\n",
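Editorial note on the xhci-mtk-sch hunks above (not part of the patch): check_fs_bus_bw() verifies, for every repetition of the endpoint's service interval within the 64-microframe window, that adding this endpoint's per-microframe cost to each complete-split slot stays under the full-speed payload budget. A standalone sketch of that budget check; array sizes, the 188-byte limit, and the test values are assumptions for illustration.

    #include <stdio.h>
    #include <stdint.h>

    #define MAX_ESIT        64
    #define FS_PAYLOAD_MAX  188     /* assumed per-slot full-speed byte budget */

    static int check_fs_bw(const uint32_t fs_bus_bw[MAX_ESIT], uint32_t esit,
                           uint32_t cs_count, uint32_t cost, uint32_t offset)
    {
            uint32_t num_esit = MAX_ESIT / esit;

            for (uint32_t i = 0; i < num_esit; i++) {
                    uint32_t base = offset + i * esit;

                    for (uint32_t j = 0; j < cs_count; j++)
                            if (fs_bus_bw[base + j] + cost > FS_PAYLOAD_MAX)
                                    return -1;      /* stands in for -ERANGE */
            }
            return 0;
    }

    int main(void)
    {
            uint32_t bw[MAX_ESIT] = { [0] = 100, [1] = 100 };

            printf("%d\n", check_fs_bw(bw, 8, 2, 64, 0));   /* fits: 0 */
            printf("%d\n", check_fs_bw(bw, 8, 2, 100, 0));  /* overflows: -1 */
            return 0;
    }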
diff --git a/drivers/usb/host/xhci-mtk.c b/drivers/usb/host/xhci-mtk.c
index 2f27dc0..1c33157 100644
--- a/drivers/usb/host/xhci-mtk.c
+++ b/drivers/usb/host/xhci-mtk.c
@@ -397,6 +397,8 @@ static void xhci_mtk_quirks(struct device *dev, struct xhci_hcd *xhci)
xhci->quirks |= XHCI_SPURIOUS_SUCCESS;
if (mtk->lpm_support)
xhci->quirks |= XHCI_LPM_SUPPORT;
+ if (mtk->u2_lpm_disable)
+ xhci->quirks |= XHCI_HW_LPM_DISABLE;
/*
* MTK xHCI 0.96: PSA is 1 by default even if doesn't support stream,
@@ -469,6 +471,7 @@ static int xhci_mtk_probe(struct platform_device *pdev)
return ret;
mtk->lpm_support = of_property_read_bool(node, "usb3-lpm-capable");
+ mtk->u2_lpm_disable = of_property_read_bool(node, "usb2-lpm-disable");
/* optional property, ignore the error if it does not exist */
of_property_read_u32(node, "mediatek,u3p-dis-msk",
&mtk->u3p_dis_msk);
diff --git a/drivers/usb/host/xhci-mtk.h b/drivers/usb/host/xhci-mtk.h
index cbb09df..2fc0568 100644
--- a/drivers/usb/host/xhci-mtk.h
+++ b/drivers/usb/host/xhci-mtk.h
@@ -20,13 +20,15 @@
#define XHCI_MTK_MAX_ESIT 64
/**
- * @split_bit_map: used to avoid split microframes overlay
+ * @ss_bit_map: used to avoid start split microframes overlay
+ * @fs_bus_bw: array to keep track of bandwidth already used for FS
* @ep_list: Endpoints using this TT
* @usb_tt: usb TT related
* @tt_port: TT port number
*/
struct mu3h_sch_tt {
- DECLARE_BITMAP(split_bit_map, XHCI_MTK_MAX_ESIT);
+ DECLARE_BITMAP(ss_bit_map, XHCI_MTK_MAX_ESIT);
+ u32 fs_bus_bw[XHCI_MTK_MAX_ESIT];
struct list_head ep_list;
struct usb_tt *usb_tt;
int tt_port;
@@ -150,6 +152,7 @@ struct xhci_hcd_mtk {
struct phy **phys;
int num_phys;
bool lpm_support;
+ bool u2_lpm_disable;
/* usb remote wakeup */
bool uwk_en;
struct regmap *uwk;
diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
index 5bbccc9..7bc18cf 100644
--- a/drivers/usb/host/xhci-pci.c
+++ b/drivers/usb/host/xhci-pci.c
@@ -57,6 +57,7 @@
#define PCI_DEVICE_ID_INTEL_CML_XHCI 0xa3af
#define PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI 0x9a13
#define PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI 0x1138
+#define PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI 0x461e
#define PCI_DEVICE_ID_AMD_PROMONTORYA_4 0x43b9
#define PCI_DEVICE_ID_AMD_PROMONTORYA_3 0x43ba
@@ -166,8 +167,10 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
(pdev->device == 0x15e0 || pdev->device == 0x15e1))
xhci->quirks |= XHCI_SNPS_BROKEN_SUSPEND;
- if (pdev->vendor == PCI_VENDOR_ID_AMD && pdev->device == 0x15e5)
+ if (pdev->vendor == PCI_VENDOR_ID_AMD && pdev->device == 0x15e5) {
xhci->quirks |= XHCI_DISABLE_SPARSE;
+ xhci->quirks |= XHCI_RESET_ON_RESUME;
+ }
if (pdev->vendor == PCI_VENDOR_ID_AMD)
xhci->quirks |= XHCI_TRUST_TX_LENGTH;
@@ -243,7 +246,8 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
pdev->device == PCI_DEVICE_ID_INTEL_TITAN_RIDGE_DD_XHCI ||
pdev->device == PCI_DEVICE_ID_INTEL_ICE_LAKE_XHCI ||
pdev->device == PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI ||
- pdev->device == PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI))
+ pdev->device == PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI ||
+ pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI))
xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index c449de6..a8d97e2 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -228,6 +228,7 @@ static void xhci_zero_64b_regs(struct xhci_hcd *xhci)
struct device *dev = xhci_to_hcd(xhci)->self.sysdev;
int err, i;
u64 val;
+ u32 intrs;
/*
* Some Renesas controllers get into a weird state if they are
@@ -266,7 +267,10 @@ static void xhci_zero_64b_regs(struct xhci_hcd *xhci)
if (upper_32_bits(val))
xhci_write_64(xhci, 0, &xhci->op_regs->cmd_ring);
- for (i = 0; i < HCS_MAX_INTRS(xhci->hcs_params1); i++) {
+ intrs = min_t(u32, HCS_MAX_INTRS(xhci->hcs_params1),
+ ARRAY_SIZE(xhci->run_regs->ir_set));
+
+ for (i = 0; i < intrs; i++) {
struct xhci_intr_reg __iomem *ir;
ir = &xhci->run_regs->ir_set[i];
@@ -1393,7 +1397,7 @@ static int xhci_configure_endpoint(struct xhci_hcd *xhci,
* we need to issue an evaluate context command and wait on it.
*/
static int xhci_check_maxpacket(struct xhci_hcd *xhci, unsigned int slot_id,
- unsigned int ep_index, struct urb *urb)
+ unsigned int ep_index, struct urb *urb, gfp_t mem_flags)
{
struct xhci_container_ctx *out_ctx;
struct xhci_input_control_ctx *ctrl_ctx;
@@ -1424,7 +1428,7 @@ static int xhci_check_maxpacket(struct xhci_hcd *xhci, unsigned int slot_id,
* changes max packet sizes.
*/
- command = xhci_alloc_command(xhci, true, GFP_KERNEL);
+ command = xhci_alloc_command(xhci, true, mem_flags);
if (!command)
return -ENOMEM;
@@ -1520,7 +1524,7 @@ static int xhci_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, gfp_t mem_flag
*/
if (urb->dev->speed == USB_SPEED_FULL) {
ret = xhci_check_maxpacket(xhci, slot_id,
- ep_index, urb);
+ ep_index, urb, mem_flags);
if (ret < 0) {
xhci_urb_free_priv(urb_priv);
urb->hcpriv = NULL;
@@ -3227,6 +3231,14 @@ static void xhci_endpoint_reset(struct usb_hcd *hcd,
/* config ep command clears toggle if add and drop ep flags are set */
ctrl_ctx = xhci_get_input_control_ctx(cfg_cmd->in_ctx);
+ if (!ctrl_ctx) {
+ spin_unlock_irqrestore(&xhci->lock, flags);
+ xhci_free_command(xhci, cfg_cmd);
+ xhci_warn(xhci, "%s: Could not get input context, bad type.\n",
+ __func__);
+ goto cleanup;
+ }
+
xhci_setup_input_ctx_for_config_ep(xhci, cfg_cmd->in_ctx, vdev->out_ctx,
ctrl_ctx, ep_flag, ep_flag);
xhci_endpoint_copy(xhci, cfg_cmd->in_ctx, vdev->out_ctx, ep_index);
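Editorial note on the xhci_zero_64b_regs() hunk above (not part of the patch): the loop bound is clamped with min_t() so a bogus HCS_MAX_INTRS value reported by the controller cannot walk past the mapped ir_set[] array. A userspace sketch of the clamp; the array size and the reported value are illustrative, not the real register file.

    #include <stdio.h>
    #include <stdint.h>

    #define ARRAY_SIZE(a)   (sizeof(a) / sizeof((a)[0]))

    /* Illustrative register file: only 8 interrupter register sets are mapped. */
    static uint64_t ir_set[8];

    int main(void)
    {
            uint32_t hcs_max_intrs = 1024;   /* value a misbehaving HC might report */
            uint32_t intrs = hcs_max_intrs < ARRAY_SIZE(ir_set) ?
                             hcs_max_intrs : ARRAY_SIZE(ir_set);

            /* zero only the register sets that actually exist in the mapping */
            for (uint32_t i = 0; i < intrs; i++)
                    ir_set[i] = 0;

            printf("zeroed %u interrupter register sets\n", intrs);
            return 0;
    }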
diff --git a/drivers/usb/misc/trancevibrator.c b/drivers/usb/misc/trancevibrator.c
index a3dfc77..26baba3 100644
--- a/drivers/usb/misc/trancevibrator.c
+++ b/drivers/usb/misc/trancevibrator.c
@@ -61,9 +61,9 @@ static ssize_t speed_store(struct device *dev, struct device_attribute *attr,
/* Set speed */
retval = usb_control_msg(tv->udev, usb_sndctrlpipe(tv->udev, 0),
0x01, /* vendor request: set speed */
- USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_OTHER,
+ USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
tv->speed, /* speed value */
- 0, NULL, 0, USB_CTRL_GET_TIMEOUT);
+ 0, NULL, 0, USB_CTRL_SET_TIMEOUT);
if (retval) {
tv->speed = old;
dev_dbg(&tv->udev->dev, "retval = %d\n", retval);
diff --git a/drivers/usb/misc/uss720.c b/drivers/usb/misc/uss720.c
index b5d6616..748139d 100644
--- a/drivers/usb/misc/uss720.c
+++ b/drivers/usb/misc/uss720.c
@@ -736,6 +736,7 @@ static int uss720_probe(struct usb_interface *intf,
parport_announce_port(pp);
usb_set_intfdata(intf, pp);
+ usb_put_dev(usbdev);
return 0;
probe_abort:
diff --git a/drivers/usb/musb/mediatek.c b/drivers/usb/musb/mediatek.c
index eebeadd..6b92d03 100644
--- a/drivers/usb/musb/mediatek.c
+++ b/drivers/usb/musb/mediatek.c
@@ -518,8 +518,8 @@ static int mtk_musb_probe(struct platform_device *pdev)
glue->xceiv = devm_usb_get_phy(dev, USB_PHY_TYPE_USB2);
if (IS_ERR(glue->xceiv)) {
- dev_err(dev, "fail to getting usb-phy %d\n", ret);
ret = PTR_ERR(glue->xceiv);
+ dev_err(dev, "fail to getting usb-phy %d\n", ret);
goto err_unregister_usb_phy;
}
diff --git a/drivers/usb/musb/musb_core.c b/drivers/usb/musb/musb_core.c
index fc0457d..8f09a38 100644
--- a/drivers/usb/musb/musb_core.c
+++ b/drivers/usb/musb/musb_core.c
@@ -2070,7 +2070,7 @@ static void musb_irq_work(struct work_struct *data)
struct musb *musb = container_of(data, struct musb, irq_work.work);
int error;
- error = pm_runtime_get_sync(musb->controller);
+ error = pm_runtime_resume_and_get(musb->controller);
if (error < 0) {
dev_err(musb->controller, "Could not enable: %i\n", error);
diff --git a/drivers/usb/roles/class.c b/drivers/usb/roles/class.c
index 97f3707..33b637d 100644
--- a/drivers/usb/roles/class.c
+++ b/drivers/usb/roles/class.c
@@ -189,6 +189,8 @@ usb_role_switch_find_by_fwnode(const struct fwnode_handle *fwnode)
return NULL;
dev = class_find_device_by_fwnode(role_class, fwnode);
+ if (dev)
+ WARN_ON(!try_module_get(dev->parent->driver->owner));
return dev ? to_role_switch(dev) : NULL;
}
diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
index 56cd70b..7c64b6e 100644
--- a/drivers/usb/serial/ftdi_sio.c
+++ b/drivers/usb/serial/ftdi_sio.c
@@ -1034,6 +1034,9 @@ static const struct usb_device_id id_table_combined[] = {
/* Sienna devices */
{ USB_DEVICE(FTDI_VID, FTDI_SIENNA_PID) },
{ USB_DEVICE(ECHELON_VID, ECHELON_U20_PID) },
+ /* IDS GmbH devices */
+ { USB_DEVICE(IDS_VID, IDS_SI31A_PID) },
+ { USB_DEVICE(IDS_VID, IDS_CM31A_PID) },
/* U-Blox devices */
{ USB_DEVICE(UBLOX_VID, UBLOX_C099F9P_ZED_PID) },
{ USB_DEVICE(UBLOX_VID, UBLOX_C099F9P_ODIN_PID) },
diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
index 3d47c6d..d854e04 100644
--- a/drivers/usb/serial/ftdi_sio_ids.h
+++ b/drivers/usb/serial/ftdi_sio_ids.h
@@ -1568,6 +1568,13 @@
#define UNJO_ISODEBUG_V1_PID 0x150D
/*
+ * IDS GmbH
+ */
+#define IDS_VID 0x2CAF
+#define IDS_SI31A_PID 0x13A2
+#define IDS_CM31A_PID 0x13A3
+
+/*
* U-Blox products (http://www.u-blox.com).
*/
#define UBLOX_VID 0x1546
diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
index c6969ca..61d9464 100644
--- a/drivers/usb/serial/option.c
+++ b/drivers/usb/serial/option.c
@@ -1240,6 +1240,10 @@ static const struct usb_device_id option_ids[] = {
.driver_info = NCTRL(0) | RSVD(1) },
{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1901, 0xff), /* Telit LN940 (MBIM) */
.driver_info = NCTRL(0) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x7010, 0xff), /* Telit LE910-S1 (RNDIS) */
+ .driver_info = NCTRL(2) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x7011, 0xff), /* Telit LE910-S1 (ECM) */
+ .driver_info = NCTRL(2) },
{ USB_DEVICE(TELIT_VENDOR_ID, 0x9010), /* Telit SBL FN980 flashing device */
.driver_info = NCTRL(0) | ZLP },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MF622, 0xff, 0xff, 0xff) }, /* ZTE WCDMA products */
diff --git a/drivers/usb/serial/pl2303.c b/drivers/usb/serial/pl2303.c
index 29dda60..1bbe18f 100644
--- a/drivers/usb/serial/pl2303.c
+++ b/drivers/usb/serial/pl2303.c
@@ -113,6 +113,7 @@ static const struct usb_device_id id_table[] = {
{ USB_DEVICE(SONY_VENDOR_ID, SONY_QN3USB_PRODUCT_ID) },
{ USB_DEVICE(SANWA_VENDOR_ID, SANWA_PRODUCT_ID) },
{ USB_DEVICE(ADLINK_VENDOR_ID, ADLINK_ND6530_PRODUCT_ID) },
+ { USB_DEVICE(ADLINK_VENDOR_ID, ADLINK_ND6530GC_PRODUCT_ID) },
{ USB_DEVICE(SMART_VENDOR_ID, SMART_PRODUCT_ID) },
{ USB_DEVICE(AT_VENDOR_ID, AT_VTKIT3_PRODUCT_ID) },
{ } /* Terminating entry */
diff --git a/drivers/usb/serial/pl2303.h b/drivers/usb/serial/pl2303.h
index 0f681dd..6097ee8 100644
--- a/drivers/usb/serial/pl2303.h
+++ b/drivers/usb/serial/pl2303.h
@@ -158,6 +158,7 @@
/* ADLINK ND-6530 RS232,RS485 and RS422 adapter */
#define ADLINK_VENDOR_ID 0x0b63
#define ADLINK_ND6530_PRODUCT_ID 0x6530
+#define ADLINK_ND6530GC_PRODUCT_ID 0x653a
/* SMART USB Serial Adapter */
#define SMART_VENDOR_ID 0x0b8c
diff --git a/drivers/usb/serial/ti_usb_3410_5052.c b/drivers/usb/serial/ti_usb_3410_5052.c
index 73075b9..afc4f96 100644
--- a/drivers/usb/serial/ti_usb_3410_5052.c
+++ b/drivers/usb/serial/ti_usb_3410_5052.c
@@ -37,6 +37,7 @@
/* Vendor and product ids */
#define TI_VENDOR_ID 0x0451
#define IBM_VENDOR_ID 0x04b3
+#define STARTECH_VENDOR_ID 0x14b0
#define TI_3410_PRODUCT_ID 0x3410
#define IBM_4543_PRODUCT_ID 0x4543
#define IBM_454B_PRODUCT_ID 0x454b
@@ -372,6 +373,7 @@ static const struct usb_device_id ti_id_table_3410[] = {
{ USB_DEVICE(MXU1_VENDOR_ID, MXU1_1131_PRODUCT_ID) },
{ USB_DEVICE(MXU1_VENDOR_ID, MXU1_1150_PRODUCT_ID) },
{ USB_DEVICE(MXU1_VENDOR_ID, MXU1_1151_PRODUCT_ID) },
+ { USB_DEVICE(STARTECH_VENDOR_ID, TI_3410_PRODUCT_ID) },
{ } /* terminator */
};
@@ -410,6 +412,7 @@ static const struct usb_device_id ti_id_table_combined[] = {
{ USB_DEVICE(MXU1_VENDOR_ID, MXU1_1131_PRODUCT_ID) },
{ USB_DEVICE(MXU1_VENDOR_ID, MXU1_1150_PRODUCT_ID) },
{ USB_DEVICE(MXU1_VENDOR_ID, MXU1_1151_PRODUCT_ID) },
+ { USB_DEVICE(STARTECH_VENDOR_ID, TI_3410_PRODUCT_ID) },
{ } /* terminator */
};
@@ -1420,14 +1423,19 @@ static int ti_set_serial_info(struct tty_struct *tty,
struct serial_struct *ss)
{
struct usb_serial_port *port = tty->driver_data;
- struct ti_port *tport = usb_get_serial_port_data(port);
+ struct tty_port *tport = &port->port;
unsigned cwait;
cwait = ss->closing_wait;
if (cwait != ASYNC_CLOSING_WAIT_NONE)
cwait = msecs_to_jiffies(10 * ss->closing_wait);
- tport->tp_port->port.closing_wait = cwait;
+ if (!capable(CAP_SYS_ADMIN)) {
+ if (cwait != tport->closing_wait)
+ return -EPERM;
+ }
+
+ tport->closing_wait = cwait;
return 0;
}
diff --git a/drivers/usb/serial/usb_wwan.c b/drivers/usb/serial/usb_wwan.c
index 4b98458..b2285d5a 100644
--- a/drivers/usb/serial/usb_wwan.c
+++ b/drivers/usb/serial/usb_wwan.c
@@ -140,10 +140,10 @@ int usb_wwan_get_serial_info(struct tty_struct *tty,
ss->line = port->minor;
ss->port = port->port_number;
ss->baud_base = tty_get_baud_rate(port->port.tty);
- ss->close_delay = port->port.close_delay / 10;
+ ss->close_delay = jiffies_to_msecs(port->port.close_delay) / 10;
ss->closing_wait = port->port.closing_wait == ASYNC_CLOSING_WAIT_NONE ?
ASYNC_CLOSING_WAIT_NONE :
- port->port.closing_wait / 10;
+ jiffies_to_msecs(port->port.closing_wait) / 10;
return 0;
}
EXPORT_SYMBOL(usb_wwan_get_serial_info);
@@ -155,9 +155,10 @@ int usb_wwan_set_serial_info(struct tty_struct *tty,
unsigned int closing_wait, close_delay;
int retval = 0;
- close_delay = ss->close_delay * 10;
+ close_delay = msecs_to_jiffies(ss->close_delay * 10);
closing_wait = ss->closing_wait == ASYNC_CLOSING_WAIT_NONE ?
- ASYNC_CLOSING_WAIT_NONE : ss->closing_wait * 10;
+ ASYNC_CLOSING_WAIT_NONE :
+ msecs_to_jiffies(ss->closing_wait * 10);
mutex_lock(&port->port.mutex);
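Editorial note on the usb_wwan hunks above (not part of the patch): close_delay and closing_wait are stored in jiffies, while serial_struct reports them in centiseconds, so the fix converts via jiffies_to_msecs()/msecs_to_jiffies() instead of dividing raw jiffies. A standalone sketch of the unit conversion, assuming a tick rate that divides 1000 evenly; HZ here is illustrative.

    #include <stdio.h>
    #include <stdint.h>

    #define HZ 250                            /* illustrative kernel tick rate */

    /* simplified conversions; exact only when HZ divides 1000 */
    static uint32_t jiffies_to_msecs(uint32_t j)  { return j * (1000 / HZ); }
    static uint32_t msecs_to_jiffies(uint32_t ms) { return ms * HZ / 1000; }

    int main(void)
    {
            uint32_t close_delay_jiffies = msecs_to_jiffies(500);  /* stored in jiffies */

            /* serial_struct close_delay is in centiseconds (1/100 s) */
            uint32_t cs = jiffies_to_msecs(close_delay_jiffies) / 10;

            printf("%u jiffies -> %u cs\n", close_delay_jiffies, cs); /* 125 -> 50 */
            return 0;
    }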
diff --git a/drivers/usb/typec/mux.c b/drivers/usb/typec/mux.c
index cf720e9..42acdc8 100644
--- a/drivers/usb/typec/mux.c
+++ b/drivers/usb/typec/mux.c
@@ -191,6 +191,7 @@ static void *typec_mux_match(struct fwnode_handle *fwnode, const char *id,
bool match;
int nval;
u16 *val;
+ int ret;
int i;
/*
@@ -218,10 +219,10 @@ static void *typec_mux_match(struct fwnode_handle *fwnode, const char *id,
if (!val)
return ERR_PTR(-ENOMEM);
- nval = fwnode_property_read_u16_array(fwnode, "svid", val, nval);
- if (nval < 0) {
+ ret = fwnode_property_read_u16_array(fwnode, "svid", val, nval);
+ if (ret < 0) {
kfree(val);
- return ERR_PTR(nval);
+ return ERR_PTR(ret);
}
for (i = 0; i < nval; i++) {
diff --git a/drivers/usb/typec/stusb160x.c b/drivers/usb/typec/stusb160x.c
index d21750b..6eaeba9 100644
--- a/drivers/usb/typec/stusb160x.c
+++ b/drivers/usb/typec/stusb160x.c
@@ -682,8 +682,8 @@ static int stusb160x_probe(struct i2c_client *client)
}
fwnode = device_get_named_child_node(chip->dev, "connector");
- if (IS_ERR(fwnode))
- return PTR_ERR(fwnode);
+ if (!fwnode)
+ return -ENODEV;
/*
* When both VDD and VSYS power supplies are present, the low power
diff --git a/drivers/usb/typec/tcpm/tcpci.c b/drivers/usb/typec/tcpm/tcpci.c
index f9f0af6..a06da18 100644
--- a/drivers/usb/typec/tcpm/tcpci.c
+++ b/drivers/usb/typec/tcpm/tcpci.c
@@ -20,6 +20,15 @@
#define PD_RETRY_COUNT 3
+#define tcpc_presenting_cc1_rd(reg) \
+ (!(TCPC_ROLE_CTRL_DRP & (reg)) && \
+ (((reg) & (TCPC_ROLE_CTRL_CC1_MASK << TCPC_ROLE_CTRL_CC1_SHIFT)) == \
+ (TCPC_ROLE_CTRL_CC_RD << TCPC_ROLE_CTRL_CC1_SHIFT)))
+#define tcpc_presenting_cc2_rd(reg) \
+ (!(TCPC_ROLE_CTRL_DRP & (reg)) && \
+ (((reg) & (TCPC_ROLE_CTRL_CC2_MASK << TCPC_ROLE_CTRL_CC2_SHIFT)) == \
+ (TCPC_ROLE_CTRL_CC_RD << TCPC_ROLE_CTRL_CC2_SHIFT)))
+
struct tcpci {
struct device *dev;
@@ -174,19 +183,25 @@ static int tcpci_get_cc(struct tcpc_dev *tcpc,
enum typec_cc_status *cc1, enum typec_cc_status *cc2)
{
struct tcpci *tcpci = tcpc_to_tcpci(tcpc);
- unsigned int reg;
+ unsigned int reg, role_control;
int ret;
+ ret = regmap_read(tcpci->regmap, TCPC_ROLE_CTRL, &role_control);
+ if (ret < 0)
+ return ret;
+
ret = regmap_read(tcpci->regmap, TCPC_CC_STATUS, &reg);
if (ret < 0)
return ret;
*cc1 = tcpci_to_typec_cc((reg >> TCPC_CC_STATUS_CC1_SHIFT) &
TCPC_CC_STATUS_CC1_MASK,
- reg & TCPC_CC_STATUS_TERM);
+ reg & TCPC_CC_STATUS_TERM ||
+ tcpc_presenting_cc1_rd(role_control));
*cc2 = tcpci_to_typec_cc((reg >> TCPC_CC_STATUS_CC2_SHIFT) &
TCPC_CC_STATUS_CC2_MASK,
- reg & TCPC_CC_STATUS_TERM);
+ reg & TCPC_CC_STATUS_TERM ||
+ tcpc_presenting_cc2_rd(role_control));
return 0;
}
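Editorial note on the tcpci hunks above (not part of the patch): when the port itself is presenting a fixed Rd on a CC line (sink role, not DRP toggling), the CC status must be decoded as sink-terminated even if the TERM bit alone would not say so; the new macros read that condition out of the Role Control register. A standalone sketch of the CC1 check, using the TCPCI register layout assumed here (CC1 in bits [1:0], DRP in bit 6, Rd = 0x2); treat the bit positions as assumptions.

    #include <stdio.h>
    #include <stdint.h>

    #define ROLE_CTRL_CC1_SHIFT  0
    #define ROLE_CTRL_CC_MASK    0x3
    #define ROLE_CTRL_CC_RD      0x2
    #define ROLE_CTRL_DRP        (1u << 6)

    static int presenting_cc1_rd(uint8_t reg)
    {
            return !(reg & ROLE_CTRL_DRP) &&
                   ((reg >> ROLE_CTRL_CC1_SHIFT) & ROLE_CTRL_CC_MASK) == ROLE_CTRL_CC_RD;
    }

    int main(void)
    {
            uint8_t role_ctrl = ROLE_CTRL_CC_RD << ROLE_CTRL_CC1_SHIFT;  /* fixed Rd on CC1 */

            /* a connected source's CC1 should then be reported as sink-terminated */
            printf("cc1 presenting Rd: %d\n", presenting_cc1_rd(role_ctrl));
            return 0;
    }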
diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
index 5636580..bdbd346 100644
--- a/drivers/usb/typec/tcpm/tcpm.c
+++ b/drivers/usb/typec/tcpm/tcpm.c
@@ -218,12 +218,27 @@ struct pd_mode_data {
struct typec_altmode_desc altmode_desc[ALTMODE_DISCOVERY_MAX];
};
+/*
+ * @min_volt: Actual min voltage at the local port
+ * @req_min_volt: Requested min voltage to the port partner
+ * @max_volt: Actual max voltage at the local port
+ * @req_max_volt: Requested max voltage to the port partner
+ * @max_curr: Actual max current at the local port
+ * @req_max_curr: Requested max current of the port partner
+ * @req_out_volt: Requested output voltage to the port partner
+ * @req_op_curr: Requested operating current to the port partner
+ * @supported: Partner has at least one APDO hence supports PPS
+ * @active: PPS mode is active
+ */
struct pd_pps_data {
u32 min_volt;
+ u32 req_min_volt;
u32 max_volt;
+ u32 req_max_volt;
u32 max_curr;
- u32 out_volt;
- u32 op_curr;
+ u32 req_max_curr;
+ u32 req_out_volt;
+ u32 req_op_curr;
bool supported;
bool active;
};
@@ -326,7 +341,10 @@ struct tcpm_port {
unsigned int operating_snk_mw;
bool update_sink_caps;
- /* Requested current / voltage */
+ /* Requested current / voltage to the port partner */
+ u32 req_current_limit;
+ u32 req_supply_voltage;
+ /* Actual current / voltage limit of the local port */
u32 current_limit;
u32 supply_voltage;
@@ -1873,8 +1891,8 @@ static void tcpm_pd_ctrl_request(struct tcpm_port *port,
case SNK_TRANSITION_SINK:
if (port->vbus_present) {
tcpm_set_current_limit(port,
- port->current_limit,
- port->supply_voltage);
+ port->req_current_limit,
+ port->req_supply_voltage);
port->explicit_contract = true;
tcpm_set_state(port, SNK_READY, 0);
} else {
@@ -1916,8 +1934,8 @@ static void tcpm_pd_ctrl_request(struct tcpm_port *port,
break;
case SNK_NEGOTIATE_PPS_CAPABILITIES:
/* Revert data back from any requested PPS updates */
- port->pps_data.out_volt = port->supply_voltage;
- port->pps_data.op_curr = port->current_limit;
+ port->pps_data.req_out_volt = port->supply_voltage;
+ port->pps_data.req_op_curr = port->current_limit;
port->pps_status = (type == PD_CTRL_WAIT ?
-EAGAIN : -EOPNOTSUPP);
tcpm_set_state(port, SNK_READY, 0);
@@ -1956,8 +1974,12 @@ static void tcpm_pd_ctrl_request(struct tcpm_port *port,
break;
case SNK_NEGOTIATE_PPS_CAPABILITIES:
port->pps_data.active = true;
- port->supply_voltage = port->pps_data.out_volt;
- port->current_limit = port->pps_data.op_curr;
+ port->pps_data.min_volt = port->pps_data.req_min_volt;
+ port->pps_data.max_volt = port->pps_data.req_max_volt;
+ port->pps_data.max_curr = port->pps_data.req_max_curr;
+ port->req_supply_voltage = port->pps_data.req_out_volt;
+ port->req_current_limit = port->pps_data.req_op_curr;
+ power_supply_changed(port->psy);
tcpm_set_state(port, SNK_TRANSITION_SINK, 0);
break;
case SOFT_RESET_SEND:
@@ -2474,17 +2496,16 @@ static unsigned int tcpm_pd_select_pps_apdo(struct tcpm_port *port)
src = port->source_caps[src_pdo];
snk = port->snk_pdo[snk_pdo];
- port->pps_data.min_volt = max(pdo_pps_apdo_min_voltage(src),
- pdo_pps_apdo_min_voltage(snk));
- port->pps_data.max_volt = min(pdo_pps_apdo_max_voltage(src),
- pdo_pps_apdo_max_voltage(snk));
- port->pps_data.max_curr = min_pps_apdo_current(src, snk);
- port->pps_data.out_volt = min(port->pps_data.max_volt,
- max(port->pps_data.min_volt,
- port->pps_data.out_volt));
- port->pps_data.op_curr = min(port->pps_data.max_curr,
- port->pps_data.op_curr);
- power_supply_changed(port->psy);
+ port->pps_data.req_min_volt = max(pdo_pps_apdo_min_voltage(src),
+ pdo_pps_apdo_min_voltage(snk));
+ port->pps_data.req_max_volt = min(pdo_pps_apdo_max_voltage(src),
+ pdo_pps_apdo_max_voltage(snk));
+ port->pps_data.req_max_curr = min_pps_apdo_current(src, snk);
+ port->pps_data.req_out_volt = min(port->pps_data.req_max_volt,
+ max(port->pps_data.req_min_volt,
+ port->pps_data.req_out_volt));
+ port->pps_data.req_op_curr = min(port->pps_data.req_max_curr,
+ port->pps_data.req_op_curr);
}
return src_pdo;
@@ -2564,8 +2585,8 @@ static int tcpm_pd_build_request(struct tcpm_port *port, u32 *rdo)
flags & RDO_CAP_MISMATCH ? " [mismatch]" : "");
}
- port->current_limit = ma;
- port->supply_voltage = mv;
+ port->req_current_limit = ma;
+ port->req_supply_voltage = mv;
return 0;
}
@@ -2611,10 +2632,10 @@ static int tcpm_pd_build_pps_request(struct tcpm_port *port, u32 *rdo)
tcpm_log(port, "Invalid APDO selected!");
return -EINVAL;
}
- max_mv = port->pps_data.max_volt;
- max_ma = port->pps_data.max_curr;
- out_mv = port->pps_data.out_volt;
- op_ma = port->pps_data.op_curr;
+ max_mv = port->pps_data.req_max_volt;
+ max_ma = port->pps_data.req_max_curr;
+ out_mv = port->pps_data.req_out_volt;
+ op_ma = port->pps_data.req_op_curr;
break;
default:
tcpm_log(port, "Invalid PDO selected!");
@@ -2661,8 +2682,8 @@ static int tcpm_pd_build_pps_request(struct tcpm_port *port, u32 *rdo)
tcpm_log(port, "Requesting APDO %d: %u mV, %u mA",
src_pdo_index, out_mv, op_ma);
- port->pps_data.op_curr = op_ma;
- port->pps_data.out_volt = out_mv;
+ port->pps_data.req_op_curr = op_ma;
+ port->pps_data.req_out_volt = out_mv;
return 0;
}
@@ -2890,8 +2911,6 @@ static void tcpm_reset_port(struct tcpm_port *port)
port->sink_cap_done = false;
if (port->tcpc->enable_frs)
port->tcpc->enable_frs(port->tcpc, false);
-
- power_supply_changed(port->psy);
}
static void tcpm_detach(struct tcpm_port *port)
@@ -4503,7 +4522,7 @@ static int tcpm_try_role(struct typec_port *p, int role)
return ret;
}
-static int tcpm_pps_set_op_curr(struct tcpm_port *port, u16 op_curr)
+static int tcpm_pps_set_op_curr(struct tcpm_port *port, u16 req_op_curr)
{
unsigned int target_mw;
int ret;
@@ -4521,22 +4540,22 @@ static int tcpm_pps_set_op_curr(struct tcpm_port *port, u16 op_curr)
goto port_unlock;
}
- if (op_curr > port->pps_data.max_curr) {
+ if (req_op_curr > port->pps_data.max_curr) {
ret = -EINVAL;
goto port_unlock;
}
- target_mw = (op_curr * port->pps_data.out_volt) / 1000;
+ target_mw = (req_op_curr * port->supply_voltage) / 1000;
if (target_mw < port->operating_snk_mw) {
ret = -EINVAL;
goto port_unlock;
}
/* Round down operating current to align with PPS valid steps */
- op_curr = op_curr - (op_curr % RDO_PROG_CURR_MA_STEP);
+ req_op_curr = req_op_curr - (req_op_curr % RDO_PROG_CURR_MA_STEP);
reinit_completion(&port->pps_complete);
- port->pps_data.op_curr = op_curr;
+ port->pps_data.req_op_curr = req_op_curr;
port->pps_status = 0;
port->pps_pending = true;
tcpm_set_state(port, SNK_NEGOTIATE_PPS_CAPABILITIES, 0);
@@ -4558,7 +4577,7 @@ static int tcpm_pps_set_op_curr(struct tcpm_port *port, u16 op_curr)
return ret;
}
-static int tcpm_pps_set_out_volt(struct tcpm_port *port, u16 out_volt)
+static int tcpm_pps_set_out_volt(struct tcpm_port *port, u16 req_out_volt)
{
unsigned int target_mw;
int ret;
@@ -4576,23 +4595,23 @@ static int tcpm_pps_set_out_volt(struct tcpm_port *port, u16 out_volt)
goto port_unlock;
}
- if (out_volt < port->pps_data.min_volt ||
- out_volt > port->pps_data.max_volt) {
+ if (req_out_volt < port->pps_data.min_volt ||
+ req_out_volt > port->pps_data.max_volt) {
ret = -EINVAL;
goto port_unlock;
}
- target_mw = (port->pps_data.op_curr * out_volt) / 1000;
+ target_mw = (port->current_limit * req_out_volt) / 1000;
if (target_mw < port->operating_snk_mw) {
ret = -EINVAL;
goto port_unlock;
}
/* Round down output voltage to align with PPS valid steps */
- out_volt = out_volt - (out_volt % RDO_PROG_VOLT_MV_STEP);
+ req_out_volt = req_out_volt - (req_out_volt % RDO_PROG_VOLT_MV_STEP);
reinit_completion(&port->pps_complete);
- port->pps_data.out_volt = out_volt;
+ port->pps_data.req_out_volt = req_out_volt;
port->pps_status = 0;
port->pps_pending = true;
tcpm_set_state(port, SNK_NEGOTIATE_PPS_CAPABILITIES, 0);
@@ -4641,8 +4660,8 @@ static int tcpm_pps_activate(struct tcpm_port *port, bool activate)
/* Trigger PPS request or move back to standard PDO contract */
if (activate) {
- port->pps_data.out_volt = port->supply_voltage;
- port->pps_data.op_curr = port->current_limit;
+ port->pps_data.req_out_volt = port->supply_voltage;
+ port->pps_data.req_op_curr = port->current_limit;
tcpm_set_state(port, SNK_NEGOTIATE_PPS_CAPABILITIES, 0);
} else {
tcpm_set_state(port, SNK_NEGOTIATE_CAPABILITIES, 0);
diff --git a/drivers/usb/typec/tps6598x.c b/drivers/usb/typec/tps6598x.c
index d8e4594..30bfc31 100644
--- a/drivers/usb/typec/tps6598x.c
+++ b/drivers/usb/typec/tps6598x.c
@@ -515,8 +515,8 @@ static int tps6598x_probe(struct i2c_client *client)
return ret;
fwnode = device_get_named_child_node(&client->dev, "connector");
- if (IS_ERR(fwnode))
- return PTR_ERR(fwnode);
+ if (!fwnode)
+ return -ENODEV;
tps->role_sw = fwnode_usb_role_switch_get(fwnode);
if (IS_ERR(tps->role_sw)) {
diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
index 51a570d..b4615bb 100644
--- a/drivers/usb/typec/ucsi/ucsi.c
+++ b/drivers/usb/typec/ucsi/ucsi.c
@@ -495,7 +495,8 @@ static void ucsi_unregister_altmodes(struct ucsi_connector *con, u8 recipient)
}
}
-static void ucsi_get_pdos(struct ucsi_connector *con, int is_partner)
+static int ucsi_get_pdos(struct ucsi_connector *con, int is_partner,
+ u32 *pdos, int offset, int num_pdos)
{
struct ucsi *ucsi = con->ucsi;
u64 command;
@@ -503,17 +504,39 @@ static void ucsi_get_pdos(struct ucsi_connector *con, int is_partner)
command = UCSI_COMMAND(UCSI_GET_PDOS) | UCSI_CONNECTOR_NUMBER(con->num);
command |= UCSI_GET_PDOS_PARTNER_PDO(is_partner);
- command |= UCSI_GET_PDOS_NUM_PDOS(UCSI_MAX_PDOS - 1);
+ command |= UCSI_GET_PDOS_PDO_OFFSET(offset);
+ command |= UCSI_GET_PDOS_NUM_PDOS(num_pdos - 1);
command |= UCSI_GET_PDOS_SRC_PDOS;
- ret = ucsi_send_command(ucsi, command, con->src_pdos,
- sizeof(con->src_pdos));
- if (ret < 0) {
+ ret = ucsi_send_command(ucsi, command, pdos + offset,
+ num_pdos * sizeof(u32));
+ if (ret < 0)
dev_err(ucsi->dev, "UCSI_GET_PDOS failed (%d)\n", ret);
- return;
- }
- con->num_pdos = ret / sizeof(u32); /* number of bytes to 32-bit PDOs */
- if (ret == 0)
+ if (ret == 0 && offset == 0)
dev_warn(ucsi->dev, "UCSI_GET_PDOS returned 0 bytes\n");
+
+ return ret;
+}
+
+static void ucsi_get_src_pdos(struct ucsi_connector *con, int is_partner)
+{
+ int ret;
+
+ /* UCSI max payload means only getting at most 4 PDOs at a time */
+ ret = ucsi_get_pdos(con, 1, con->src_pdos, 0, UCSI_MAX_PDOS);
+ if (ret < 0)
+ return;
+
+ con->num_pdos = ret / sizeof(u32); /* number of bytes to 32-bit PDOs */
+ if (con->num_pdos < UCSI_MAX_PDOS)
+ return;
+
+ /* get the remaining PDOs, if any */
+ ret = ucsi_get_pdos(con, 1, con->src_pdos, UCSI_MAX_PDOS,
+ PDO_MAX_OBJECTS - UCSI_MAX_PDOS);
+ if (ret < 0)
+ return;
+
+ con->num_pdos += ret / sizeof(u32);
}
static void ucsi_pwr_opmode_change(struct ucsi_connector *con)
@@ -522,7 +545,7 @@ static void ucsi_pwr_opmode_change(struct ucsi_connector *con)
case UCSI_CONSTAT_PWR_OPMODE_PD:
con->rdo = con->status.request_data_obj;
typec_set_pwr_opmode(con->port, TYPEC_PWR_MODE_PD);
- ucsi_get_pdos(con, 1);
+ ucsi_get_src_pdos(con, 1);
break;
case UCSI_CONSTAT_PWR_OPMODE_TYPEC1_5:
con->rdo = 0;
@@ -887,6 +910,7 @@ static const struct typec_operations ucsi_ops = {
.pr_set = ucsi_pr_swap
};
+/* Caller must call fwnode_handle_put() after use */
static struct fwnode_handle *ucsi_find_fwnode(struct ucsi_connector *con)
{
struct fwnode_handle *fwnode;
@@ -920,7 +944,7 @@ static int ucsi_register_port(struct ucsi *ucsi, int index)
command |= UCSI_CONNECTOR_NUMBER(con->num);
ret = ucsi_send_command(ucsi, command, &con->cap, sizeof(con->cap));
if (ret < 0)
- goto out;
+ goto out_unlock;
if (con->cap.op_mode & UCSI_CONCAP_OPMODE_DRP)
cap->data = TYPEC_PORT_DRD;
@@ -1016,6 +1040,8 @@ static int ucsi_register_port(struct ucsi *ucsi, int index)
trace_ucsi_register_port(con->num, &con->status);
out:
+ fwnode_handle_put(cap->fwnode);
+out_unlock:
mutex_unlock(&con->lock);
return ret;
}
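Editorial note on the ucsi.c hunks above (not part of the patch): the UCSI GET_PDOS payload can carry at most 4 PDOs per command, so the source capabilities are now fetched in two batches -- the first 4, then the remainder starting at offset 4 if the connector reported a full first batch. A standalone sketch of that batching; get_pdos() is a stand-in returning a byte count, and the partner's PDO list is made up for illustration.

    #include <stdio.h>
    #include <stdint.h>

    #define MAX_PDOS_PER_CMD 4
    #define PDO_MAX_OBJECTS  7      /* USB PD allows up to 7 source PDOs */

    /* Stand-in for the UCSI GET_PDOS command: returns bytes written, <0 on error. */
    static int get_pdos(uint32_t *pdos, int offset, int num_pdos)
    {
            static const uint32_t partner_pdos[6] = { 1, 2, 3, 4, 5, 6 };
            int avail = (int)(sizeof(partner_pdos) / sizeof(partner_pdos[0])) - offset;
            int n = num_pdos < avail ? num_pdos : avail;

            for (int i = 0; i < n; i++)
                    pdos[offset + i] = partner_pdos[offset + i];
            return n * (int)sizeof(uint32_t);
    }

    int main(void)
    {
            uint32_t src_pdos[PDO_MAX_OBJECTS];
            int num_pdos, ret;

            ret = get_pdos(src_pdos, 0, MAX_PDOS_PER_CMD);
            if (ret < 0)
                    return 1;
            num_pdos = ret / (int)sizeof(uint32_t);

            if (num_pdos == MAX_PDOS_PER_CMD) {           /* maybe more to fetch */
                    ret = get_pdos(src_pdos, MAX_PDOS_PER_CMD,
                                   PDO_MAX_OBJECTS - MAX_PDOS_PER_CMD);
                    if (ret > 0)
                            num_pdos += ret / (int)sizeof(uint32_t);
            }

            printf("got %d PDOs\n", num_pdos);            /* prints 6 */
            return 0;
    }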
diff --git a/drivers/usb/typec/ucsi/ucsi.h b/drivers/usb/typec/ucsi/ucsi.h
index b7a92f2..047e17c 100644
--- a/drivers/usb/typec/ucsi/ucsi.h
+++ b/drivers/usb/typec/ucsi/ucsi.h
@@ -8,6 +8,7 @@
#include <linux/power_supply.h>
#include <linux/types.h>
#include <linux/usb/typec.h>
+#include <linux/usb/pd.h>
/* -------------------------------------------------------------------------- */
@@ -133,7 +134,9 @@ void ucsi_connector_change(struct ucsi *ucsi, u8 num);
/* GET_PDOS command bits */
#define UCSI_GET_PDOS_PARTNER_PDO(_r_) ((u64)(_r_) << 23)
+#define UCSI_GET_PDOS_PDO_OFFSET(_r_) ((u64)(_r_) << 24)
#define UCSI_GET_PDOS_NUM_PDOS(_r_) ((u64)(_r_) << 32)
+#define UCSI_MAX_PDOS (4)
#define UCSI_GET_PDOS_SRC_PDOS ((u64)1 << 34)
/* -------------------------------------------------------------------------- */
@@ -300,7 +303,6 @@ struct ucsi {
#define UCSI_MAX_SVID 5
#define UCSI_MAX_ALTMODES (UCSI_MAX_SVID * 6)
-#define UCSI_MAX_PDOS (4)
#define UCSI_TYPEC_VSAFE5V 5000
#define UCSI_TYPEC_1_5_CURRENT 1500
@@ -327,7 +329,7 @@ struct ucsi_connector {
struct power_supply *psy;
struct power_supply_desc psy_desc;
u32 rdo;
- u32 src_pdos[UCSI_MAX_PDOS];
+ u32 src_pdos[PDO_MAX_OBJECTS];
int num_pdos;
};
diff --git a/drivers/usb/usbip/stub_dev.c b/drivers/usb/usbip/stub_dev.c
index 8f1de1f..d8d3892 100644
--- a/drivers/usb/usbip/stub_dev.c
+++ b/drivers/usb/usbip/stub_dev.c
@@ -63,6 +63,7 @@ static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *a
dev_info(dev, "stub up\n");
+ mutex_lock(&sdev->ud.sysfs_lock);
spin_lock_irq(&sdev->ud.lock);
if (sdev->ud.status != SDEV_ST_AVAILABLE) {
@@ -87,13 +88,13 @@ static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *a
tcp_rx = kthread_create(stub_rx_loop, &sdev->ud, "stub_rx");
if (IS_ERR(tcp_rx)) {
sockfd_put(socket);
- return -EINVAL;
+ goto unlock_mutex;
}
tcp_tx = kthread_create(stub_tx_loop, &sdev->ud, "stub_tx");
if (IS_ERR(tcp_tx)) {
kthread_stop(tcp_rx);
sockfd_put(socket);
- return -EINVAL;
+ goto unlock_mutex;
}
/* get task structs now */
@@ -112,6 +113,8 @@ static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *a
wake_up_process(sdev->ud.tcp_rx);
wake_up_process(sdev->ud.tcp_tx);
+ mutex_unlock(&sdev->ud.sysfs_lock);
+
} else {
dev_info(dev, "stub down\n");
@@ -122,6 +125,7 @@ static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *a
spin_unlock_irq(&sdev->ud.lock);
usbip_event_add(&sdev->ud, SDEV_EVENT_DOWN);
+ mutex_unlock(&sdev->ud.sysfs_lock);
}
return count;
@@ -130,6 +134,8 @@ static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *a
sockfd_put(socket);
err:
spin_unlock_irq(&sdev->ud.lock);
+unlock_mutex:
+ mutex_unlock(&sdev->ud.sysfs_lock);
return -EINVAL;
}
static DEVICE_ATTR_WO(usbip_sockfd);
@@ -270,6 +276,7 @@ static struct stub_device *stub_device_alloc(struct usb_device *udev)
sdev->ud.side = USBIP_STUB;
sdev->ud.status = SDEV_ST_AVAILABLE;
spin_lock_init(&sdev->ud.lock);
+ mutex_init(&sdev->ud.sysfs_lock);
sdev->ud.tcp_socket = NULL;
sdev->ud.sockfd = -1;
diff --git a/drivers/usb/usbip/usbip_common.h b/drivers/usb/usbip/usbip_common.h
index 8be857a4..a7e6ce9 100644
--- a/drivers/usb/usbip/usbip_common.h
+++ b/drivers/usb/usbip/usbip_common.h
@@ -263,6 +263,9 @@ struct usbip_device {
/* lock for status */
spinlock_t lock;
+ /* mutex for synchronizing sysfs store paths */
+ struct mutex sysfs_lock;
+
int sockfd;
struct socket *tcp_socket;
diff --git a/drivers/usb/usbip/usbip_event.c b/drivers/usb/usbip/usbip_event.c
index 5d88917..086ca76 100644
--- a/drivers/usb/usbip/usbip_event.c
+++ b/drivers/usb/usbip/usbip_event.c
@@ -70,6 +70,7 @@ static void event_handler(struct work_struct *work)
while ((ud = get_event()) != NULL) {
usbip_dbg_eh("pending event %lx\n", ud->event);
+ mutex_lock(&ud->sysfs_lock);
/*
* NOTE: shutdown must come first.
* Shutdown the device.
@@ -90,6 +91,7 @@ static void event_handler(struct work_struct *work)
ud->eh_ops.unusable(ud);
unset_event(ud, USBIP_EH_UNUSABLE);
}
+ mutex_unlock(&ud->sysfs_lock);
wake_up(&ud->eh_waitq);
}
diff --git a/drivers/usb/usbip/vhci_hcd.c b/drivers/usb/usbip/vhci_hcd.c
index a20a838..4ba6bcd 100644
--- a/drivers/usb/usbip/vhci_hcd.c
+++ b/drivers/usb/usbip/vhci_hcd.c
@@ -1101,6 +1101,7 @@ static void vhci_device_init(struct vhci_device *vdev)
vdev->ud.side = USBIP_VHCI;
vdev->ud.status = VDEV_ST_NULL;
spin_lock_init(&vdev->ud.lock);
+ mutex_init(&vdev->ud.sysfs_lock);
INIT_LIST_HEAD(&vdev->priv_rx);
INIT_LIST_HEAD(&vdev->priv_tx);
diff --git a/drivers/usb/usbip/vhci_sysfs.c b/drivers/usb/usbip/vhci_sysfs.c
index e64ea31..ebc7be1d 100644
--- a/drivers/usb/usbip/vhci_sysfs.c
+++ b/drivers/usb/usbip/vhci_sysfs.c
@@ -185,6 +185,8 @@ static int vhci_port_disconnect(struct vhci_hcd *vhci_hcd, __u32 rhport)
usbip_dbg_vhci_sysfs("enter\n");
+ mutex_lock(&vdev->ud.sysfs_lock);
+
/* lock */
spin_lock_irqsave(&vhci->lock, flags);
spin_lock(&vdev->ud.lock);
@@ -195,6 +197,7 @@ static int vhci_port_disconnect(struct vhci_hcd *vhci_hcd, __u32 rhport)
/* unlock */
spin_unlock(&vdev->ud.lock);
spin_unlock_irqrestore(&vhci->lock, flags);
+ mutex_unlock(&vdev->ud.sysfs_lock);
return -EINVAL;
}
@@ -205,6 +208,8 @@ static int vhci_port_disconnect(struct vhci_hcd *vhci_hcd, __u32 rhport)
usbip_event_add(&vdev->ud, VDEV_EVENT_DOWN);
+ mutex_unlock(&vdev->ud.sysfs_lock);
+
return 0;
}
@@ -349,30 +354,36 @@ static ssize_t attach_store(struct device *dev, struct device_attribute *attr,
else
vdev = &vhci->vhci_hcd_hs->vdev[rhport];
+ mutex_lock(&vdev->ud.sysfs_lock);
+
/* Extract socket from fd. */
socket = sockfd_lookup(sockfd, &err);
if (!socket) {
dev_err(dev, "failed to lookup sock");
- return -EINVAL;
+ err = -EINVAL;
+ goto unlock_mutex;
}
if (socket->type != SOCK_STREAM) {
dev_err(dev, "Expecting SOCK_STREAM - found %d",
socket->type);
sockfd_put(socket);
- return -EINVAL;
+ err = -EINVAL;
+ goto unlock_mutex;
}
/* create threads before locking */
tcp_rx = kthread_create(vhci_rx_loop, &vdev->ud, "vhci_rx");
if (IS_ERR(tcp_rx)) {
sockfd_put(socket);
- return -EINVAL;
+ err = -EINVAL;
+ goto unlock_mutex;
}
tcp_tx = kthread_create(vhci_tx_loop, &vdev->ud, "vhci_tx");
if (IS_ERR(tcp_tx)) {
kthread_stop(tcp_rx);
sockfd_put(socket);
- return -EINVAL;
+ err = -EINVAL;
+ goto unlock_mutex;
}
/* get task structs now */
@@ -397,7 +408,8 @@ static ssize_t attach_store(struct device *dev, struct device_attribute *attr,
* Will be retried from userspace
* if there's another free port.
*/
- return -EBUSY;
+ err = -EBUSY;
+ goto unlock_mutex;
}
dev_info(dev, "pdev(%u) rhport(%u) sockfd(%d)\n",
@@ -422,7 +434,15 @@ static ssize_t attach_store(struct device *dev, struct device_attribute *attr,
rh_port_connect(vdev, speed);
+ dev_info(dev, "Device attached\n");
+
+ mutex_unlock(&vdev->ud.sysfs_lock);
+
return count;
+
+unlock_mutex:
+ mutex_unlock(&vdev->ud.sysfs_lock);
+ return err;
}
static DEVICE_ATTR_WO(attach);
diff --git a/drivers/usb/usbip/vudc_dev.c b/drivers/usb/usbip/vudc_dev.c
index c8eeabd..2bc428f 100644
--- a/drivers/usb/usbip/vudc_dev.c
+++ b/drivers/usb/usbip/vudc_dev.c
@@ -572,6 +572,7 @@ static int init_vudc_hw(struct vudc *udc)
init_waitqueue_head(&udc->tx_waitq);
spin_lock_init(&ud->lock);
+ mutex_init(&ud->sysfs_lock);
ud->status = SDEV_ST_AVAILABLE;
ud->side = USBIP_VUDC;
diff --git a/drivers/usb/usbip/vudc_sysfs.c b/drivers/usb/usbip/vudc_sysfs.c
index 7383a54..d1cf6b5 100644
--- a/drivers/usb/usbip/vudc_sysfs.c
+++ b/drivers/usb/usbip/vudc_sysfs.c
@@ -112,6 +112,7 @@ static ssize_t usbip_sockfd_store(struct device *dev,
dev_err(dev, "no device");
return -ENODEV;
}
+ mutex_lock(&udc->ud.sysfs_lock);
spin_lock_irqsave(&udc->lock, flags);
/* Don't export what we don't have */
if (!udc->driver || !udc->pullup) {
@@ -155,12 +156,14 @@ static ssize_t usbip_sockfd_store(struct device *dev,
tcp_rx = kthread_create(&v_rx_loop, &udc->ud, "vudc_rx");
if (IS_ERR(tcp_rx)) {
sockfd_put(socket);
+ mutex_unlock(&udc->ud.sysfs_lock);
return -EINVAL;
}
tcp_tx = kthread_create(&v_tx_loop, &udc->ud, "vudc_tx");
if (IS_ERR(tcp_tx)) {
kthread_stop(tcp_rx);
sockfd_put(socket);
+ mutex_unlock(&udc->ud.sysfs_lock);
return -EINVAL;
}
@@ -187,6 +190,8 @@ static ssize_t usbip_sockfd_store(struct device *dev,
wake_up_process(udc->ud.tcp_rx);
wake_up_process(udc->ud.tcp_tx);
+
+ mutex_unlock(&udc->ud.sysfs_lock);
return count;
} else {
@@ -207,6 +212,7 @@ static ssize_t usbip_sockfd_store(struct device *dev,
}
spin_unlock_irqrestore(&udc->lock, flags);
+ mutex_unlock(&udc->ud.sysfs_lock);
return count;
@@ -216,6 +222,7 @@ static ssize_t usbip_sockfd_store(struct device *dev,
spin_unlock_irq(&udc->ud.lock);
unlock:
spin_unlock_irqrestore(&udc->lock, flags);
+ mutex_unlock(&udc->ud.sysfs_lock);
return ret;
}
diff --git a/drivers/vdpa/mlx5/core/mlx5_vdpa.h b/drivers/vdpa/mlx5/core/mlx5_vdpa.h
index 08f742f..b6cc53b 100644
--- a/drivers/vdpa/mlx5/core/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/core/mlx5_vdpa.h
@@ -4,9 +4,13 @@
#ifndef __MLX5_VDPA_H__
#define __MLX5_VDPA_H__
+#include <linux/etherdevice.h>
+#include <linux/if_vlan.h>
#include <linux/vdpa.h>
#include <linux/mlx5/driver.h>
+#define MLX5V_ETH_HARD_MTU (ETH_HLEN + VLAN_HLEN + ETH_FCS_LEN)
+
struct mlx5_vdpa_direct_mr {
u64 start;
u64 end;
diff --git a/drivers/vdpa/mlx5/core/mr.c b/drivers/vdpa/mlx5/core/mr.c
index d300f79..aa656f5 100644
--- a/drivers/vdpa/mlx5/core/mr.c
+++ b/drivers/vdpa/mlx5/core/mr.c
@@ -273,8 +273,10 @@ static int map_direct_mr(struct mlx5_vdpa_dev *mvdev, struct mlx5_vdpa_direct_mr
mr->log_size = log_entity_size;
mr->nsg = nsg;
mr->nent = dma_map_sg_attrs(dma, mr->sg_head.sgl, mr->nsg, DMA_BIDIRECTIONAL, 0);
- if (!mr->nent)
+ if (!mr->nent) {
+ err = -ENOMEM;
goto err_map;
+ }
err = create_direct_mr(mvdev, mr);
if (err)
diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
index 5a86ede..8af30d0 100644
--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
+++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
@@ -9,6 +9,7 @@
#include <linux/mlx5/vport.h>
#include <linux/mlx5/fs.h>
#include <linux/mlx5/device.h>
+#include <linux/mlx5/mpfs.h>
#include "mlx5_vnet.h"
#include "mlx5_vdpa_ifc.h"
#include "mlx5_vdpa.h"
@@ -805,7 +806,7 @@ static int create_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtque
MLX5_SET(virtio_q, vq_ctx, event_qpn_or_msix, mvq->fwqp.mqp.qpn);
MLX5_SET(virtio_q, vq_ctx, queue_size, mvq->num_ent);
MLX5_SET(virtio_q, vq_ctx, virtio_version_1_0,
- !!(ndev->mvdev.actual_features & VIRTIO_F_VERSION_1));
+ !!(ndev->mvdev.actual_features & BIT_ULL(VIRTIO_F_VERSION_1)));
MLX5_SET64(virtio_q, vq_ctx, desc_addr, mvq->desc_addr);
MLX5_SET64(virtio_q, vq_ctx, used_addr, mvq->device_addr);
MLX5_SET64(virtio_q, vq_ctx, available_addr, mvq->driver_addr);
@@ -1154,6 +1155,7 @@ static void suspend_vq(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *m
return;
}
mvq->avail_idx = attr.available_index;
+ mvq->used_idx = attr.used_index;
}
static void suspend_vqs(struct mlx5_vdpa_net *ndev)
@@ -1411,6 +1413,7 @@ static int mlx5_vdpa_set_vq_state(struct vdpa_device *vdev, u16 idx,
return -EINVAL;
}
+ mvq->used_idx = state->avail_index;
mvq->avail_idx = state->avail_index;
return 0;
}
@@ -1428,7 +1431,11 @@ static int mlx5_vdpa_get_vq_state(struct vdpa_device *vdev, u16 idx, struct vdpa
* that cares about emulating the index after vq is stopped.
*/
if (!mvq->initialized) {
- state->avail_index = mvq->avail_idx;
+ /* Firmware returns a wrong value for the available index.
+ * Since both values should be identical, we take the value of
+ * used_idx which is reported correctly.
+ */
+ state->avail_index = mvq->used_idx;
return 0;
}
@@ -1437,7 +1444,7 @@ static int mlx5_vdpa_get_vq_state(struct vdpa_device *vdev, u16 idx, struct vdpa
mlx5_vdpa_warn(mvdev, "failed to query virtqueue\n");
return err;
}
- state->avail_index = attr.available_index;
+ state->avail_index = attr.used_index;
return 0;
}
@@ -1525,21 +1532,11 @@ static void teardown_virtqueues(struct mlx5_vdpa_net *ndev)
}
}
-static void clear_virtqueues(struct mlx5_vdpa_net *ndev)
-{
- int i;
-
- for (i = ndev->mvdev.max_vqs - 1; i >= 0; i--) {
- ndev->vqs[i].avail_idx = 0;
- ndev->vqs[i].used_idx = 0;
- }
-}
-
/* TODO: cross-endian support */
static inline bool mlx5_vdpa_is_little_endian(struct mlx5_vdpa_dev *mvdev)
{
return virtio_legacy_is_little_endian() ||
- (mvdev->actual_features & (1ULL << VIRTIO_F_VERSION_1));
+ (mvdev->actual_features & BIT_ULL(VIRTIO_F_VERSION_1));
}
static __virtio16 cpu_to_mlx5vdpa16(struct mlx5_vdpa_dev *mvdev, u16 val)
@@ -1770,7 +1767,6 @@ static void mlx5_vdpa_set_status(struct vdpa_device *vdev, u8 status)
if (!status) {
mlx5_vdpa_info(mvdev, "performing device reset\n");
teardown_driver(ndev);
- clear_virtqueues(ndev);
mlx5_vdpa_destroy_mr(&ndev->mvdev);
ndev->mvdev.status = 0;
ndev->mvdev.mlx_features = 0;
@@ -1844,11 +1840,16 @@ static int mlx5_vdpa_set_map(struct vdpa_device *vdev, struct vhost_iotlb *iotlb
static void mlx5_vdpa_free(struct vdpa_device *vdev)
{
struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
+ struct mlx5_core_dev *pfmdev;
struct mlx5_vdpa_net *ndev;
ndev = to_mlx5_vdpa_ndev(mvdev);
free_resources(ndev);
+ if (!is_zero_ether_addr(ndev->config.mac)) {
+ pfmdev = pci_get_drvdata(pci_physfn(mvdev->mdev->pdev));
+ mlx5_mpfs_del_mac(pfmdev, ndev->config.mac);
+ }
mlx5_vdpa_free_resources(&ndev->mvdev);
mutex_destroy(&ndev->reslock);
}
@@ -1892,6 +1893,19 @@ static const struct vdpa_config_ops mlx5_vdpa_ops = {
.free = mlx5_vdpa_free,
};
+static int query_mtu(struct mlx5_core_dev *mdev, u16 *mtu)
+{
+ u16 hw_mtu;
+ int err;
+
+ err = mlx5_query_nic_vport_mtu(mdev, &hw_mtu);
+ if (err)
+ return err;
+
+ *mtu = hw_mtu - MLX5V_ETH_HARD_MTU;
+ return 0;
+}
+
static int alloc_resources(struct mlx5_vdpa_net *ndev)
{
struct mlx5_vdpa_net_resources *res = &ndev->res;
@@ -1954,6 +1968,7 @@ static void init_mvqs(struct mlx5_vdpa_net *ndev)
void *mlx5_vdpa_add_dev(struct mlx5_core_dev *mdev)
{
struct virtio_net_config *config;
+ struct mlx5_core_dev *pfmdev;
struct mlx5_vdpa_dev *mvdev;
struct mlx5_vdpa_net *ndev;
u32 max_vqs;
@@ -1974,7 +1989,7 @@ void *mlx5_vdpa_add_dev(struct mlx5_core_dev *mdev)
init_mvqs(ndev);
mutex_init(&ndev->reslock);
config = &ndev->config;
- err = mlx5_query_nic_vport_mtu(mdev, &ndev->mtu);
+ err = query_mtu(mdev, &ndev->mtu);
if (err)
goto err_mtu;
@@ -1982,10 +1997,17 @@ void *mlx5_vdpa_add_dev(struct mlx5_core_dev *mdev)
if (err)
goto err_mtu;
+ if (!is_zero_ether_addr(config->mac)) {
+ pfmdev = pci_get_drvdata(pci_physfn(mdev->pdev));
+ err = mlx5_mpfs_add_mac(pfmdev, config->mac);
+ if (err)
+ goto err_mtu;
+ }
+
mvdev->vdev.dma_dev = mdev->device;
err = mlx5_vdpa_alloc_resources(&ndev->mvdev);
if (err)
- goto err_mtu;
+ goto err_mpfs;
err = alloc_resources(ndev);
if (err)
@@ -2001,6 +2023,9 @@ void *mlx5_vdpa_add_dev(struct mlx5_core_dev *mdev)
free_resources(ndev);
err_res:
mlx5_vdpa_free_resources(&ndev->mvdev);
+err_mpfs:
+ if (!is_zero_ether_addr(config->mac))
+ mlx5_mpfs_del_mac(pfmdev, config->mac);
err_mtu:
mutex_destroy(&ndev->reslock);
put_device(&mvdev->vdev.dev);
diff --git a/drivers/vfio/Kconfig b/drivers/vfio/Kconfig
index 90c0525..67d0bf4 100644
--- a/drivers/vfio/Kconfig
+++ b/drivers/vfio/Kconfig
@@ -22,7 +22,7 @@
menuconfig VFIO
tristate "VFIO Non-Privileged userspace driver framework"
select IOMMU_API
- select VFIO_IOMMU_TYPE1 if (X86 || S390 || ARM || ARM64)
+ select VFIO_IOMMU_TYPE1 if MMU && (X86 || S390 || ARM || ARM64)
help
VFIO provides a framework for secure userspace device drivers.
See Documentation/driver-api/vfio.rst for more details.
diff --git a/drivers/vfio/fsl-mc/vfio_fsl_mc.c b/drivers/vfio/fsl-mc/vfio_fsl_mc.c
index f27e251..8722f5ef 100644
--- a/drivers/vfio/fsl-mc/vfio_fsl_mc.c
+++ b/drivers/vfio/fsl-mc/vfio_fsl_mc.c
@@ -568,25 +568,41 @@ static int vfio_fsl_mc_init_device(struct vfio_fsl_mc_device *vdev)
dev_err(&mc_dev->dev, "VFIO_FSL_MC: Failed to setup DPRC (%d)\n", ret);
goto out_nc_unreg;
}
-
- ret = dprc_scan_container(mc_dev, false);
- if (ret) {
- dev_err(&mc_dev->dev, "VFIO_FSL_MC: Container scanning failed (%d)\n", ret);
- goto out_dprc_cleanup;
- }
-
return 0;
-out_dprc_cleanup:
- dprc_remove_devices(mc_dev, NULL, 0);
- dprc_cleanup(mc_dev);
out_nc_unreg:
bus_unregister_notifier(&fsl_mc_bus_type, &vdev->nb);
- vdev->nb.notifier_call = NULL;
-
return ret;
}
+static int vfio_fsl_mc_scan_container(struct fsl_mc_device *mc_dev)
+{
+ int ret;
+
+ /* non dprc devices do not scan for other devices */
+ if (!is_fsl_mc_bus_dprc(mc_dev))
+ return 0;
+ ret = dprc_scan_container(mc_dev, false);
+ if (ret) {
+ dev_err(&mc_dev->dev,
+ "VFIO_FSL_MC: Container scanning failed (%d)\n", ret);
+ dprc_remove_devices(mc_dev, NULL, 0);
+ return ret;
+ }
+ return 0;
+}
+
+static void vfio_fsl_uninit_device(struct vfio_fsl_mc_device *vdev)
+{
+ struct fsl_mc_device *mc_dev = vdev->mc_dev;
+
+ if (!is_fsl_mc_bus_dprc(mc_dev))
+ return;
+
+ dprc_cleanup(mc_dev);
+ bus_unregister_notifier(&fsl_mc_bus_type, &vdev->nb);
+}
+
static int vfio_fsl_mc_probe(struct fsl_mc_device *mc_dev)
{
struct iommu_group *group;
@@ -607,29 +623,39 @@ static int vfio_fsl_mc_probe(struct fsl_mc_device *mc_dev)
}
vdev->mc_dev = mc_dev;
-
- ret = vfio_add_group_dev(dev, &vfio_fsl_mc_ops, vdev);
- if (ret) {
- dev_err(dev, "VFIO_FSL_MC: Failed to add to vfio group\n");
- goto out_group_put;
- }
+ mutex_init(&vdev->igate);
ret = vfio_fsl_mc_reflck_attach(vdev);
if (ret)
- goto out_group_dev;
+ goto out_group_put;
ret = vfio_fsl_mc_init_device(vdev);
if (ret)
goto out_reflck;
- mutex_init(&vdev->igate);
+ ret = vfio_add_group_dev(dev, &vfio_fsl_mc_ops, vdev);
+ if (ret) {
+ dev_err(dev, "VFIO_FSL_MC: Failed to add to vfio group\n");
+ goto out_device;
+ }
+ /*
+ * This triggers recursion into vfio_fsl_mc_probe() on another device
+ * and the vfio_fsl_mc_reflck_attach() must succeed, which relies on the
+ * vfio_add_group_dev() above. It has no impact on this vdev, so it is
+ * safe to be after the vfio device is made live.
+ */
+ ret = vfio_fsl_mc_scan_container(mc_dev);
+ if (ret)
+ goto out_group_dev;
return 0;
-out_reflck:
- vfio_fsl_mc_reflck_put(vdev->reflck);
out_group_dev:
vfio_del_group_dev(dev);
+out_device:
+ vfio_fsl_uninit_device(vdev);
+out_reflck:
+ vfio_fsl_mc_reflck_put(vdev->reflck);
out_group_put:
vfio_iommu_group_put(group, dev);
return ret;
@@ -646,16 +672,10 @@ static int vfio_fsl_mc_remove(struct fsl_mc_device *mc_dev)
mutex_destroy(&vdev->igate);
+ dprc_remove_devices(mc_dev, NULL, 0);
+ vfio_fsl_uninit_device(vdev);
vfio_fsl_mc_reflck_put(vdev->reflck);
- if (is_fsl_mc_bus_dprc(mc_dev)) {
- dprc_remove_devices(mc_dev, NULL, 0);
- dprc_cleanup(mc_dev);
- }
-
- if (vdev->nb.notifier_call)
- bus_unregister_notifier(&fsl_mc_bus_type, &vdev->nb);
-
vfio_iommu_group_put(mc_dev->dev.iommu_group, dev);
return 0;
diff --git a/drivers/vfio/mdev/mdev_sysfs.c b/drivers/vfio/mdev/mdev_sysfs.c
index 917fd84..367ff54 100644
--- a/drivers/vfio/mdev/mdev_sysfs.c
+++ b/drivers/vfio/mdev/mdev_sysfs.c
@@ -105,6 +105,7 @@ static struct mdev_type *add_mdev_supported_type(struct mdev_parent *parent,
return ERR_PTR(-ENOMEM);
type->kobj.kset = parent->mdev_types_kset;
+ type->parent = parent;
ret = kobject_init_and_add(&type->kobj, &mdev_type_ktype, NULL,
"%s-%s", dev_driver_string(parent->dev),
@@ -132,7 +133,6 @@ static struct mdev_type *add_mdev_supported_type(struct mdev_parent *parent,
}
type->group = group;
- type->parent = parent;
return type;
attrs_failed:
diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
index 706de3e..48b048e 100644
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -1658,6 +1658,8 @@ static int vfio_pci_mmap(void *device_data, struct vm_area_struct *vma)
index = vma->vm_pgoff >> (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT);
+ if (index >= VFIO_PCI_NUM_REGIONS + vdev->num_regions)
+ return -EINVAL;
if (vma->vm_end < vma->vm_start)
return -EINVAL;
if ((vma->vm_flags & VM_SHARED) == 0)
@@ -1666,7 +1668,7 @@ static int vfio_pci_mmap(void *device_data, struct vm_area_struct *vma)
int regnum = index - VFIO_PCI_NUM_REGIONS;
struct vfio_pci_region *region = vdev->region + regnum;
- if (region && region->ops && region->ops->mmap &&
+ if (region->ops && region->ops->mmap &&
(region->flags & VFIO_REGION_INFO_FLAG_MMAP))
return region->ops->mmap(vdev, region, vma);
return -EINVAL;
@@ -1924,6 +1926,68 @@ static int vfio_pci_bus_notifier(struct notifier_block *nb,
return 0;
}
+static int vfio_pci_vf_init(struct vfio_pci_device *vdev)
+{
+ struct pci_dev *pdev = vdev->pdev;
+ int ret;
+
+ if (!pdev->is_physfn)
+ return 0;
+
+ vdev->vf_token = kzalloc(sizeof(*vdev->vf_token), GFP_KERNEL);
+ if (!vdev->vf_token)
+ return -ENOMEM;
+
+ mutex_init(&vdev->vf_token->lock);
+ uuid_gen(&vdev->vf_token->uuid);
+
+ vdev->nb.notifier_call = vfio_pci_bus_notifier;
+ ret = bus_register_notifier(&pci_bus_type, &vdev->nb);
+ if (ret) {
+ kfree(vdev->vf_token);
+ return ret;
+ }
+ return 0;
+}
+
+static void vfio_pci_vf_uninit(struct vfio_pci_device *vdev)
+{
+ if (!vdev->vf_token)
+ return;
+
+ bus_unregister_notifier(&pci_bus_type, &vdev->nb);
+ WARN_ON(vdev->vf_token->users);
+ mutex_destroy(&vdev->vf_token->lock);
+ kfree(vdev->vf_token);
+}
+
+static int vfio_pci_vga_init(struct vfio_pci_device *vdev)
+{
+ struct pci_dev *pdev = vdev->pdev;
+ int ret;
+
+ if (!vfio_pci_is_vga(pdev))
+ return 0;
+
+ ret = vga_client_register(pdev, vdev, NULL, vfio_pci_set_vga_decode);
+ if (ret)
+ return ret;
+ vga_set_legacy_decoding(pdev, vfio_pci_set_vga_decode(vdev, false));
+ return 0;
+}
+
+static void vfio_pci_vga_uninit(struct vfio_pci_device *vdev)
+{
+ struct pci_dev *pdev = vdev->pdev;
+
+ if (!vfio_pci_is_vga(pdev))
+ return;
+ vga_client_register(pdev, NULL, NULL, NULL);
+ vga_set_legacy_decoding(pdev, VGA_RSRC_NORMAL_IO | VGA_RSRC_NORMAL_MEM |
+ VGA_RSRC_LEGACY_IO |
+ VGA_RSRC_LEGACY_MEM);
+}
+
static int vfio_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
struct vfio_pci_device *vdev;
@@ -1970,35 +2034,15 @@ static int vfio_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
INIT_LIST_HEAD(&vdev->vma_list);
init_rwsem(&vdev->memory_lock);
- ret = vfio_add_group_dev(&pdev->dev, &vfio_pci_ops, vdev);
- if (ret)
- goto out_free;
-
ret = vfio_pci_reflck_attach(vdev);
if (ret)
- goto out_del_group_dev;
-
- if (pdev->is_physfn) {
- vdev->vf_token = kzalloc(sizeof(*vdev->vf_token), GFP_KERNEL);
- if (!vdev->vf_token) {
- ret = -ENOMEM;
- goto out_reflck;
- }
-
- mutex_init(&vdev->vf_token->lock);
- uuid_gen(&vdev->vf_token->uuid);
-
- vdev->nb.notifier_call = vfio_pci_bus_notifier;
- ret = bus_register_notifier(&pci_bus_type, &vdev->nb);
- if (ret)
- goto out_vf_token;
- }
-
- if (vfio_pci_is_vga(pdev)) {
- vga_client_register(pdev, vdev, NULL, vfio_pci_set_vga_decode);
- vga_set_legacy_decoding(pdev,
- vfio_pci_set_vga_decode(vdev, false));
- }
+ goto out_free;
+ ret = vfio_pci_vf_init(vdev);
+ if (ret)
+ goto out_reflck;
+ ret = vfio_pci_vga_init(vdev);
+ if (ret)
+ goto out_vf;
vfio_pci_probe_power_state(vdev);
@@ -2016,15 +2060,20 @@ static int vfio_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
vfio_pci_set_power_state(vdev, PCI_D3hot);
}
- return ret;
+ ret = vfio_add_group_dev(&pdev->dev, &vfio_pci_ops, vdev);
+ if (ret)
+ goto out_power;
+ return 0;
-out_vf_token:
- kfree(vdev->vf_token);
+out_power:
+ if (!disable_idle_d3)
+ vfio_pci_set_power_state(vdev, PCI_D0);
+out_vf:
+ vfio_pci_vf_uninit(vdev);
out_reflck:
vfio_pci_reflck_put(vdev->reflck);
-out_del_group_dev:
- vfio_del_group_dev(&pdev->dev);
out_free:
+ kfree(vdev->pm_save);
kfree(vdev);
out_group_put:
vfio_iommu_group_put(group, &pdev->dev);
@@ -2041,33 +2090,19 @@ static void vfio_pci_remove(struct pci_dev *pdev)
if (!vdev)
return;
- if (vdev->vf_token) {
- WARN_ON(vdev->vf_token->users);
- mutex_destroy(&vdev->vf_token->lock);
- kfree(vdev->vf_token);
- }
-
- if (vdev->nb.notifier_call)
- bus_unregister_notifier(&pci_bus_type, &vdev->nb);
-
+ vfio_pci_vf_uninit(vdev);
vfio_pci_reflck_put(vdev->reflck);
+ vfio_pci_vga_uninit(vdev);
vfio_iommu_group_put(pdev->dev.iommu_group, &pdev->dev);
- kfree(vdev->region);
- mutex_destroy(&vdev->ioeventfds_lock);
if (!disable_idle_d3)
vfio_pci_set_power_state(vdev, PCI_D0);
+ mutex_destroy(&vdev->ioeventfds_lock);
+ kfree(vdev->region);
kfree(vdev->pm_save);
kfree(vdev);
-
- if (vfio_pci_is_vga(pdev)) {
- vga_client_register(pdev, NULL, NULL, NULL);
- vga_set_legacy_decoding(pdev,
- VGA_RSRC_NORMAL_IO | VGA_RSRC_NORMAL_MEM |
- VGA_RSRC_LEGACY_IO | VGA_RSRC_LEGACY_MEM);
- }
}
static pci_ers_result_t vfio_pci_aer_err_detected(struct pci_dev *pdev,
diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index fc5707a..8018415 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -749,9 +749,11 @@ static int vhost_vdpa_process_iotlb_msg(struct vhost_dev *dev,
const struct vdpa_config_ops *ops = vdpa->config;
int r = 0;
+ mutex_lock(&dev->mutex);
+
r = vhost_dev_check_owner(dev);
if (r)
- return r;
+ goto unlock;
switch (msg->type) {
case VHOST_IOTLB_UPDATE:
@@ -772,6 +774,8 @@ static int vhost_vdpa_process_iotlb_msg(struct vhost_dev *dev,
r = -EINVAL;
break;
}
+unlock:
+ mutex_unlock(&dev->mutex);
return r;
}
@@ -993,6 +997,7 @@ static int vhost_vdpa_mmap(struct file *file, struct vm_area_struct *vma)
if (vma->vm_end - vma->vm_start != notify.size)
return -ENOTSUPP;
+ vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
vma->vm_ops = &vhost_vdpa_vm_ops;
return 0;
}
diff --git a/drivers/video/backlight/qcom-wled.c b/drivers/video/backlight/qcom-wled.c
index 3bc7800..cd11c57 100644
--- a/drivers/video/backlight/qcom-wled.c
+++ b/drivers/video/backlight/qcom-wled.c
@@ -336,19 +336,19 @@ static int wled3_sync_toggle(struct wled *wled)
unsigned int mask = GENMASK(wled->max_string_count - 1, 0);
rc = regmap_update_bits(wled->regmap,
- wled->ctrl_addr + WLED3_SINK_REG_SYNC,
+ wled->sink_addr + WLED3_SINK_REG_SYNC,
mask, mask);
if (rc < 0)
return rc;
rc = regmap_update_bits(wled->regmap,
- wled->ctrl_addr + WLED3_SINK_REG_SYNC,
+ wled->sink_addr + WLED3_SINK_REG_SYNC,
mask, WLED3_SINK_REG_SYNC_CLEAR);
return rc;
}
-static int wled5_sync_toggle(struct wled *wled)
+static int wled5_mod_sync_toggle(struct wled *wled)
{
int rc;
u8 val;
@@ -445,10 +445,23 @@ static int wled_update_status(struct backlight_device *bl)
goto unlock_mutex;
}
- rc = wled->wled_sync_toggle(wled);
- if (rc < 0) {
- dev_err(wled->dev, "wled sync failed rc:%d\n", rc);
- goto unlock_mutex;
+ if (wled->version < 5) {
+ rc = wled->wled_sync_toggle(wled);
+ if (rc < 0) {
+ dev_err(wled->dev, "wled sync failed rc:%d\n", rc);
+ goto unlock_mutex;
+ }
+ } else {
+ /*
+ * For WLED5 toggling the MOD_SYNC_BIT updates the
+ * brightness
+ */
+ rc = wled5_mod_sync_toggle(wled);
+ if (rc < 0) {
+ dev_err(wled->dev, "wled mod sync failed rc:%d\n",
+ rc);
+ goto unlock_mutex;
+ }
}
}
@@ -1459,7 +1472,7 @@ static int wled_configure(struct wled *wled)
size = ARRAY_SIZE(wled5_opts);
*cfg = wled5_config_defaults;
wled->wled_set_brightness = wled5_set_brightness;
- wled->wled_sync_toggle = wled5_sync_toggle;
+ wled->wled_sync_toggle = wled3_sync_toggle;
wled->wled_cabc_config = wled5_cabc_config;
wled->wled_ovp_delay = wled5_ovp_delay;
wled->wled_auto_detection_required =
diff --git a/drivers/video/console/vgacon.c b/drivers/video/console/vgacon.c
index 17876f0..5dc88a9 100644
--- a/drivers/video/console/vgacon.c
+++ b/drivers/video/console/vgacon.c
@@ -384,7 +384,7 @@ static void vgacon_init(struct vc_data *c, int init)
vc_resize(c, vga_video_num_columns, vga_video_num_lines);
c->vc_scan_lines = vga_scan_lines;
- c->vc_font.height = vga_video_font_height;
+ c->vc_font.height = c->vc_cell_height = vga_video_font_height;
c->vc_complement_mask = 0x7700;
if (vga_512_chars)
c->vc_hi_font_mask = 0x0800;
@@ -519,32 +519,32 @@ static void vgacon_cursor(struct vc_data *c, int mode)
switch (CUR_SIZE(c->vc_cursor_type)) {
case CUR_UNDERLINE:
vgacon_set_cursor_size(c->state.x,
- c->vc_font.height -
- (c->vc_font.height <
+ c->vc_cell_height -
+ (c->vc_cell_height <
10 ? 2 : 3),
- c->vc_font.height -
- (c->vc_font.height <
+ c->vc_cell_height -
+ (c->vc_cell_height <
10 ? 1 : 2));
break;
case CUR_TWO_THIRDS:
vgacon_set_cursor_size(c->state.x,
- c->vc_font.height / 3,
- c->vc_font.height -
- (c->vc_font.height <
+ c->vc_cell_height / 3,
+ c->vc_cell_height -
+ (c->vc_cell_height <
10 ? 1 : 2));
break;
case CUR_LOWER_THIRD:
vgacon_set_cursor_size(c->state.x,
- (c->vc_font.height * 2) / 3,
- c->vc_font.height -
- (c->vc_font.height <
+ (c->vc_cell_height * 2) / 3,
+ c->vc_cell_height -
+ (c->vc_cell_height <
10 ? 1 : 2));
break;
case CUR_LOWER_HALF:
vgacon_set_cursor_size(c->state.x,
- c->vc_font.height / 2,
- c->vc_font.height -
- (c->vc_font.height <
+ c->vc_cell_height / 2,
+ c->vc_cell_height -
+ (c->vc_cell_height <
10 ? 1 : 2));
break;
case CUR_NONE:
@@ -555,7 +555,7 @@ static void vgacon_cursor(struct vc_data *c, int mode)
break;
default:
vgacon_set_cursor_size(c->state.x, 1,
- c->vc_font.height);
+ c->vc_cell_height);
break;
}
break;
@@ -566,13 +566,13 @@ static int vgacon_doresize(struct vc_data *c,
unsigned int width, unsigned int height)
{
unsigned long flags;
- unsigned int scanlines = height * c->vc_font.height;
+ unsigned int scanlines = height * c->vc_cell_height;
u8 scanlines_lo = 0, r7 = 0, vsync_end = 0, mode, max_scan;
raw_spin_lock_irqsave(&vga_lock, flags);
vgacon_xres = width * VGA_FONTWIDTH;
- vgacon_yres = height * c->vc_font.height;
+ vgacon_yres = height * c->vc_cell_height;
if (vga_video_type >= VIDEO_TYPE_VGAC) {
outb_p(VGA_CRTC_MAX_SCAN, vga_video_port_reg);
max_scan = inb_p(vga_video_port_val);
@@ -627,9 +627,9 @@ static int vgacon_doresize(struct vc_data *c,
static int vgacon_switch(struct vc_data *c)
{
int x = c->vc_cols * VGA_FONTWIDTH;
- int y = c->vc_rows * c->vc_font.height;
+ int y = c->vc_rows * c->vc_cell_height;
int rows = screen_info.orig_video_lines * vga_default_font_height/
- c->vc_font.height;
+ c->vc_cell_height;
/*
* We need to save screen size here as it's the only way
* we can spot the screen has been resized and we need to
@@ -1060,7 +1060,7 @@ static int vgacon_adjust_height(struct vc_data *vc, unsigned fontheight)
cursor_size_lastto = 0;
c->vc_sw->con_cursor(c, CM_DRAW);
}
- c->vc_font.height = fontheight;
+ c->vc_font.height = c->vc_cell_height = fontheight;
vc_resize(c, 0, rows); /* Adjust console size */
}
}
@@ -1108,12 +1108,20 @@ static int vgacon_resize(struct vc_data *c, unsigned int width,
if ((width << 1) * height > vga_vram_size)
return -EINVAL;
+ if (user) {
+ /*
+ * Ho ho! Someone (svgatextmode, eh?) may have reprogrammed
+ * the video mode! Set the new defaults then and go away.
+ */
+ screen_info.orig_video_cols = width;
+ screen_info.orig_video_lines = height;
+ vga_default_font_height = c->vc_cell_height;
+ return 0;
+ }
if (width % 2 || width > screen_info.orig_video_cols ||
height > (screen_info.orig_video_lines * vga_default_font_height)/
- c->vc_font.height)
- /* let svgatextmode tinker with video timings and
- return success */
- return (user) ? 0 : -EINVAL;
+ c->vc_cell_height)
+ return -EINVAL;
if (con_is_visible(c) && !vga_is_gfx) /* who knows */
vgacon_doresize(c, width, height);
diff --git a/drivers/video/fbdev/core/fbcmap.c b/drivers/video/fbdev/core/fbcmap.c
index e5ae33c..e8a17fb 100644
--- a/drivers/video/fbdev/core/fbcmap.c
+++ b/drivers/video/fbdev/core/fbcmap.c
@@ -101,17 +101,17 @@ int fb_alloc_cmap_gfp(struct fb_cmap *cmap, int len, int transp, gfp_t flags)
if (!len)
return 0;
- cmap->red = kmalloc(size, flags);
+ cmap->red = kzalloc(size, flags);
if (!cmap->red)
goto fail;
- cmap->green = kmalloc(size, flags);
+ cmap->green = kzalloc(size, flags);
if (!cmap->green)
goto fail;
- cmap->blue = kmalloc(size, flags);
+ cmap->blue = kzalloc(size, flags);
if (!cmap->blue)
goto fail;
if (transp) {
- cmap->transp = kmalloc(size, flags);
+ cmap->transp = kzalloc(size, flags);
if (!cmap->transp)
goto fail;
} else {
diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
index 2658119..42c72d0 100644
--- a/drivers/video/fbdev/core/fbcon.c
+++ b/drivers/video/fbdev/core/fbcon.c
@@ -2031,7 +2031,7 @@ static int fbcon_resize(struct vc_data *vc, unsigned int width,
return -EINVAL;
DPRINTK("resize now %ix%i\n", var.xres, var.yres);
- if (con_is_visible(vc)) {
+ if (con_is_visible(vc) && vc->vc_mode == KD_TEXT) {
var.activate = FB_ACTIVATE_NOW |
FB_ACTIVATE_FORCE;
fb_set_var(info, &var);
diff --git a/drivers/video/fbdev/hgafb.c b/drivers/video/fbdev/hgafb.c
index a45fcff..0fe3273 100644
--- a/drivers/video/fbdev/hgafb.c
+++ b/drivers/video/fbdev/hgafb.c
@@ -286,7 +286,7 @@ static int hga_card_detect(void)
hga_vram = ioremap(0xb0000, hga_vram_len);
if (!hga_vram)
- goto error;
+ return -ENOMEM;
if (request_region(0x3b0, 12, "hgafb"))
release_io_ports = 1;
@@ -346,13 +346,18 @@ static int hga_card_detect(void)
hga_type_name = "Hercules";
break;
}
- return 1;
+ return 0;
error:
if (release_io_ports)
release_region(0x3b0, 12);
if (release_io_port)
release_region(0x3bf, 1);
- return 0;
+
+ iounmap(hga_vram);
+
+ pr_err("hgafb: HGA card not detected.\n");
+
+ return -EINVAL;
}
/**
@@ -550,13 +555,11 @@ static const struct fb_ops hgafb_ops = {
static int hgafb_probe(struct platform_device *pdev)
{
struct fb_info *info;
+ int ret;
- if (! hga_card_detect()) {
- printk(KERN_INFO "hgafb: HGA card not detected.\n");
- if (hga_vram)
- iounmap(hga_vram);
- return -EINVAL;
- }
+ ret = hga_card_detect();
+ if (ret)
+ return ret;
printk(KERN_INFO "hgafb: %s with %ldK of memory detected.\n",
hga_type_name, hga_vram_len/1024);
diff --git a/drivers/video/fbdev/imsttfb.c b/drivers/video/fbdev/imsttfb.c
index 3ac053b..e044117 100644
--- a/drivers/video/fbdev/imsttfb.c
+++ b/drivers/video/fbdev/imsttfb.c
@@ -1512,11 +1512,6 @@ static int imsttfb_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
info->fix.smem_start = addr;
info->screen_base = (__u8 *)ioremap(addr, par->ramdac == IBM ?
0x400000 : 0x800000);
- if (!info->screen_base) {
- release_mem_region(addr, size);
- framebuffer_release(info);
- return -ENOMEM;
- }
info->fix.mmio_start = addr + 0x800000;
par->dc_regs = ioremap(addr + 0x800000, 0x1000);
par->cmap_regs_phys = addr + 0x840000;
diff --git a/drivers/virt/nitro_enclaves/ne_misc_dev.c b/drivers/virt/nitro_enclaves/ne_misc_dev.c
index f1964ea..e21e1e8 100644
--- a/drivers/virt/nitro_enclaves/ne_misc_dev.c
+++ b/drivers/virt/nitro_enclaves/ne_misc_dev.c
@@ -1524,7 +1524,8 @@ static const struct file_operations ne_enclave_fops = {
* enclave file descriptor to be further used for enclave
* resources handling e.g. memory regions and CPUs.
* @ne_pci_dev : Private data associated with the PCI device.
- * @slot_uid: Generated unique slot id associated with an enclave.
+ * @slot_uid: User pointer to store the generated unique slot id
+ * associated with an enclave to.
*
* Context: Process context. This function is called with the ne_pci_dev enclave
* mutex held.
@@ -1532,7 +1533,7 @@ static const struct file_operations ne_enclave_fops = {
* * Enclave fd on success.
* * Negative return value on failure.
*/
-static int ne_create_vm_ioctl(struct ne_pci_dev *ne_pci_dev, u64 *slot_uid)
+static int ne_create_vm_ioctl(struct ne_pci_dev *ne_pci_dev, u64 __user *slot_uid)
{
struct ne_pci_dev_cmd_reply cmd_reply = {};
int enclave_fd = -1;
@@ -1634,7 +1635,18 @@ static int ne_create_vm_ioctl(struct ne_pci_dev *ne_pci_dev, u64 *slot_uid)
list_add(&ne_enclave->enclave_list_entry, &ne_pci_dev->enclaves_list);
- *slot_uid = ne_enclave->slot_uid;
+ if (copy_to_user(slot_uid, &ne_enclave->slot_uid, sizeof(ne_enclave->slot_uid))) {
+ /*
+ * As we're holding the only reference to 'enclave_file', fput()
+ * will call ne_enclave_release() which will do a proper cleanup
+ * of all so far allocated resources, leaving only the unused fd
+ * for us to free.
+ */
+ fput(enclave_file);
+ put_unused_fd(enclave_fd);
+
+ return -EFAULT;
+ }
fd_install(enclave_fd, enclave_file);
@@ -1671,34 +1683,13 @@ static long ne_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
switch (cmd) {
case NE_CREATE_VM: {
int enclave_fd = -1;
- struct file *enclave_file = NULL;
struct ne_pci_dev *ne_pci_dev = ne_devs.ne_pci_dev;
- int rc = -EINVAL;
- u64 slot_uid = 0;
+ u64 __user *slot_uid = (void __user *)arg;
mutex_lock(&ne_pci_dev->enclaves_list_mutex);
-
- enclave_fd = ne_create_vm_ioctl(ne_pci_dev, &slot_uid);
- if (enclave_fd < 0) {
- rc = enclave_fd;
-
- mutex_unlock(&ne_pci_dev->enclaves_list_mutex);
-
- return rc;
- }
-
+ enclave_fd = ne_create_vm_ioctl(ne_pci_dev, slot_uid);
mutex_unlock(&ne_pci_dev->enclaves_list_mutex);
- if (copy_to_user((void __user *)arg, &slot_uid, sizeof(slot_uid))) {
- enclave_file = fget(enclave_fd);
- /* Decrement file refs to have release() called. */
- fput(enclave_file);
- fput(enclave_file);
- put_unused_fd(enclave_fd);
-
- return -EFAULT;
- }
-
return enclave_fd;
}
diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 7bd03f6..29bec07 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -108,7 +108,7 @@ struct irq_info {
unsigned short eoi_cpu; /* EOI must happen on this cpu-1 */
unsigned int irq_epoch; /* If eoi_cpu valid: irq_epoch of event */
u64 eoi_time; /* Time in jiffies when to EOI. */
- spinlock_t lock;
+ raw_spinlock_t lock;
union {
unsigned short virq;
@@ -280,7 +280,7 @@ static int xen_irq_info_common_setup(struct irq_info *info,
info->evtchn = evtchn;
info->cpu = cpu;
info->mask_reason = EVT_MASK_REASON_EXPLICIT;
- spin_lock_init(&info->lock);
+ raw_spin_lock_init(&info->lock);
ret = set_evtchn_to_irq(evtchn, irq);
if (ret < 0)
@@ -432,28 +432,28 @@ static void do_mask(struct irq_info *info, u8 reason)
{
unsigned long flags;
- spin_lock_irqsave(&info->lock, flags);
+ raw_spin_lock_irqsave(&info->lock, flags);
if (!info->mask_reason)
mask_evtchn(info->evtchn);
info->mask_reason |= reason;
- spin_unlock_irqrestore(&info->lock, flags);
+ raw_spin_unlock_irqrestore(&info->lock, flags);
}
static void do_unmask(struct irq_info *info, u8 reason)
{
unsigned long flags;
- spin_lock_irqsave(&info->lock, flags);
+ raw_spin_lock_irqsave(&info->lock, flags);
info->mask_reason &= ~reason;
if (!info->mask_reason)
unmask_evtchn(info->evtchn);
- spin_unlock_irqrestore(&info->lock, flags);
+ raw_spin_unlock_irqrestore(&info->lock, flags);
}
#ifdef CONFIG_X86
@@ -1809,7 +1809,7 @@ static void lateeoi_ack_dynirq(struct irq_data *data)
if (VALID_EVTCHN(evtchn)) {
do_mask(info, EVT_MASK_REASON_EOI_PENDING);
- event_handler_exit(info);
+ ack_dynirq(data);
}
}
@@ -1820,7 +1820,7 @@ static void lateeoi_mask_ack_dynirq(struct irq_data *data)
if (VALID_EVTCHN(evtchn)) {
do_mask(info, EVT_MASK_REASON_EXPLICIT);
- event_handler_exit(info);
+ ack_dynirq(data);
}
}
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index 5447c51..b9651f7 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -1005,8 +1005,10 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
err = mmu_interval_notifier_insert_locked(
&map->notifier, vma->vm_mm, vma->vm_start,
vma->vm_end - vma->vm_start, &gntdev_mmu_ops);
- if (err)
+ if (err) {
+ map->vma = NULL;
goto out_unlock_put;
+ }
}
mutex_unlock(&priv->lock);
diff --git a/drivers/xen/unpopulated-alloc.c b/drivers/xen/unpopulated-alloc.c
index 7762c1bb..87e6b7d 100644
--- a/drivers/xen/unpopulated-alloc.c
+++ b/drivers/xen/unpopulated-alloc.c
@@ -27,11 +27,6 @@ static int fill_list(unsigned int nr_pages)
if (!res)
return -ENOMEM;
- pgmap = kzalloc(sizeof(*pgmap), GFP_KERNEL);
- if (!pgmap)
- goto err_pgmap;
-
- pgmap->type = MEMORY_DEVICE_GENERIC;
res->name = "Xen scratch";
res->flags = IORESOURCE_MEM | IORESOURCE_BUSY;
@@ -43,6 +38,13 @@ static int fill_list(unsigned int nr_pages)
goto err_resource;
}
+ pgmap = kzalloc(sizeof(*pgmap), GFP_KERNEL);
+ if (!pgmap) {
+ ret = -ENOMEM;
+ goto err_pgmap;
+ }
+
+ pgmap->type = MEMORY_DEVICE_GENERIC;
pgmap->range = (struct range) {
.start = res->start,
.end = res->end,
@@ -92,10 +94,10 @@ static int fill_list(unsigned int nr_pages)
return 0;
err_memremap:
- release_resource(res);
-err_resource:
kfree(pgmap);
err_pgmap:
+ release_resource(res);
+err_resource:
kfree(res);
return ret;
}
diff --git a/drivers/xen/xen-pciback/vpci.c b/drivers/xen/xen-pciback/vpci.c
index 5447b5a..1221cfd 100644
--- a/drivers/xen/xen-pciback/vpci.c
+++ b/drivers/xen/xen-pciback/vpci.c
@@ -70,7 +70,7 @@ static int __xen_pcibk_add_pci_dev(struct xen_pcibk_device *pdev,
struct pci_dev *dev, int devid,
publish_pci_dev_cb publish_cb)
{
- int err = 0, slot, func = -1;
+ int err = 0, slot, func = PCI_FUNC(dev->devfn);
struct pci_dev_entry *t, *dev_entry;
struct vpci_dev_data *vpci_dev = pdev->pci_dev_data;
@@ -95,22 +95,25 @@ static int __xen_pcibk_add_pci_dev(struct xen_pcibk_device *pdev,
/*
* Keep multi-function devices together on the virtual PCI bus, except
- * virtual functions.
+ * that we want to keep virtual functions at func 0 on their own. They
+ * aren't multi-function devices and hence their presence at func 0
+ * may cause guests to not scan the other functions.
*/
- if (!dev->is_virtfn) {
+ if (!dev->is_virtfn || func) {
for (slot = 0; slot < PCI_SLOT_MAX; slot++) {
if (list_empty(&vpci_dev->dev_list[slot]))
continue;
t = list_entry(list_first(&vpci_dev->dev_list[slot]),
struct pci_dev_entry, list);
+ if (t->dev->is_virtfn && !PCI_FUNC(t->dev->devfn))
+ continue;
if (match_slot(dev, t->dev)) {
dev_info(&dev->dev, "vpci: assign to virtual slot %d func %d\n",
- slot, PCI_FUNC(dev->devfn));
+ slot, func);
list_add_tail(&dev_entry->list,
&vpci_dev->dev_list[slot]);
- func = PCI_FUNC(dev->devfn);
goto unlock;
}
}
@@ -123,7 +126,6 @@ static int __xen_pcibk_add_pci_dev(struct xen_pcibk_device *pdev,
slot);
list_add_tail(&dev_entry->list,
&vpci_dev->dev_list[slot]);
- func = dev->is_virtfn ? 0 : PCI_FUNC(dev->devfn);
goto unlock;
}
}
diff --git a/drivers/xen/xen-pciback/xenbus.c b/drivers/xen/xen-pciback/xenbus.c
index e7c692c..cad56ea6 100644
--- a/drivers/xen/xen-pciback/xenbus.c
+++ b/drivers/xen/xen-pciback/xenbus.c
@@ -359,7 +359,8 @@ static int xen_pcibk_publish_pci_root(struct xen_pcibk_device *pdev,
return err;
}
-static int xen_pcibk_reconfigure(struct xen_pcibk_device *pdev)
+static int xen_pcibk_reconfigure(struct xen_pcibk_device *pdev,
+ enum xenbus_state state)
{
int err = 0;
int num_devs;
@@ -373,9 +374,7 @@ static int xen_pcibk_reconfigure(struct xen_pcibk_device *pdev)
dev_dbg(&pdev->xdev->dev, "Reconfiguring device ...\n");
mutex_lock(&pdev->dev_lock);
- /* Make sure we only reconfigure once */
- if (xenbus_read_driver_state(pdev->xdev->nodename) !=
- XenbusStateReconfiguring)
+ if (xenbus_read_driver_state(pdev->xdev->nodename) != state)
goto out;
err = xenbus_scanf(XBT_NIL, pdev->xdev->nodename, "num_devs", "%d",
@@ -500,6 +499,10 @@ static int xen_pcibk_reconfigure(struct xen_pcibk_device *pdev)
}
}
+ if (state != XenbusStateReconfiguring)
+ /* Make sure we only reconfigure once. */
+ goto out;
+
err = xenbus_switch_state(pdev->xdev, XenbusStateReconfigured);
if (err) {
xenbus_dev_fatal(pdev->xdev, err,
@@ -525,7 +528,7 @@ static void xen_pcibk_frontend_changed(struct xenbus_device *xdev,
break;
case XenbusStateReconfiguring:
- xen_pcibk_reconfigure(pdev);
+ xen_pcibk_reconfigure(pdev, XenbusStateReconfiguring);
break;
case XenbusStateConnected:
@@ -664,6 +667,15 @@ static void xen_pcibk_be_watch(struct xenbus_watch *watch,
xen_pcibk_setup_backend(pdev);
break;
+ case XenbusStateInitialised:
+ /*
+ * We typically move to Initialised when the first device was
+ * added. Hence subsequent devices getting added may need
+ * reconfiguring.
+ */
+ xen_pcibk_reconfigure(pdev, XenbusStateInitialised);
+ break;
+
default:
break;
}
diff --git a/fs/afs/dir.c b/fs/afs/dir.c
index 9dc6f4b..92d7fd7 100644
--- a/fs/afs/dir.c
+++ b/fs/afs/dir.c
@@ -1337,6 +1337,7 @@ static int afs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
afs_op_set_vnode(op, 0, dvnode);
op->file[0].dv_delta = 1;
+ op->file[0].modification = true;
op->file[0].update_ctime = true;
op->dentry = dentry;
op->create.mode = S_IFDIR | mode;
@@ -1418,6 +1419,7 @@ static int afs_rmdir(struct inode *dir, struct dentry *dentry)
afs_op_set_vnode(op, 0, dvnode);
op->file[0].dv_delta = 1;
+ op->file[0].modification = true;
op->file[0].update_ctime = true;
op->dentry = dentry;
@@ -1554,6 +1556,7 @@ static int afs_unlink(struct inode *dir, struct dentry *dentry)
afs_op_set_vnode(op, 0, dvnode);
op->file[0].dv_delta = 1;
+ op->file[0].modification = true;
op->file[0].update_ctime = true;
/* Try to make sure we have a callback promise on the victim. */
@@ -1636,6 +1639,7 @@ static int afs_create(struct inode *dir, struct dentry *dentry, umode_t mode,
afs_op_set_vnode(op, 0, dvnode);
op->file[0].dv_delta = 1;
+ op->file[0].modification = true;
op->file[0].update_ctime = true;
op->dentry = dentry;
@@ -1710,6 +1714,7 @@ static int afs_link(struct dentry *from, struct inode *dir,
afs_op_set_vnode(op, 0, dvnode);
afs_op_set_vnode(op, 1, vnode);
op->file[0].dv_delta = 1;
+ op->file[0].modification = true;
op->file[0].update_ctime = true;
op->file[1].update_ctime = true;
@@ -1832,7 +1837,9 @@ static void afs_rename_edit_dir(struct afs_operation *op)
new_inode = d_inode(new_dentry);
if (new_inode) {
spin_lock(&new_inode->i_lock);
- if (new_inode->i_nlink > 0)
+ if (S_ISDIR(new_inode->i_mode))
+ clear_nlink(new_inode);
+ else if (new_inode->i_nlink > 0)
drop_nlink(new_inode);
spin_unlock(&new_inode->i_lock);
}
@@ -1905,6 +1912,8 @@ static int afs_rename(struct inode *old_dir, struct dentry *old_dentry,
afs_op_set_vnode(op, 1, new_dvnode); /* May be same as orig_dvnode */
op->file[0].dv_delta = 1;
op->file[1].dv_delta = 1;
+ op->file[0].modification = true;
+ op->file[1].modification = true;
op->file[0].update_ctime = true;
op->file[1].update_ctime = true;
diff --git a/fs/afs/dir_silly.c b/fs/afs/dir_silly.c
index 04f75a4..dae9a57 100644
--- a/fs/afs/dir_silly.c
+++ b/fs/afs/dir_silly.c
@@ -73,6 +73,8 @@ static int afs_do_silly_rename(struct afs_vnode *dvnode, struct afs_vnode *vnode
afs_op_set_vnode(op, 1, dvnode);
op->file[0].dv_delta = 1;
op->file[1].dv_delta = 1;
+ op->file[0].modification = true;
+ op->file[1].modification = true;
op->file[0].update_ctime = true;
op->file[1].update_ctime = true;
@@ -201,6 +203,7 @@ static int afs_do_silly_unlink(struct afs_vnode *dvnode, struct afs_vnode *vnode
afs_op_set_vnode(op, 0, dvnode);
afs_op_set_vnode(op, 1, vnode);
op->file[0].dv_delta = 1;
+ op->file[0].modification = true;
op->file[0].update_ctime = true;
op->file[1].op_unlinked = true;
op->file[1].update_ctime = true;
diff --git a/fs/afs/fs_operation.c b/fs/afs/fs_operation.c
index 71c5872..a82515b 100644
--- a/fs/afs/fs_operation.c
+++ b/fs/afs/fs_operation.c
@@ -118,6 +118,8 @@ static void afs_prepare_vnode(struct afs_operation *op, struct afs_vnode_param *
vp->cb_break_before = afs_calc_vnode_cb_break(vnode);
if (vnode->lock_state != AFS_VNODE_LOCK_NONE)
op->flags |= AFS_OPERATION_CUR_ONLY;
+ if (vp->modification)
+ set_bit(AFS_VNODE_MODIFYING, &vnode->flags);
}
if (vp->fid.vnode)
@@ -223,6 +225,10 @@ int afs_put_operation(struct afs_operation *op)
if (op->ops && op->ops->put)
op->ops->put(op);
+ if (op->file[0].modification)
+ clear_bit(AFS_VNODE_MODIFYING, &op->file[0].vnode->flags);
+ if (op->file[1].modification && op->file[1].vnode != op->file[0].vnode)
+ clear_bit(AFS_VNODE_MODIFYING, &op->file[1].vnode->flags);
if (op->file[0].put_vnode)
iput(&op->file[0].vnode->vfs_inode);
if (op->file[1].put_vnode)
diff --git a/fs/afs/inode.c b/fs/afs/inode.c
index 1d03eb1..ae3016a 100644
--- a/fs/afs/inode.c
+++ b/fs/afs/inode.c
@@ -102,13 +102,13 @@ static int afs_inode_init_from_status(struct afs_operation *op,
switch (status->type) {
case AFS_FTYPE_FILE:
- inode->i_mode = S_IFREG | status->mode;
+ inode->i_mode = S_IFREG | (status->mode & S_IALLUGO);
inode->i_op = &afs_file_inode_operations;
inode->i_fop = &afs_file_operations;
inode->i_mapping->a_ops = &afs_fs_aops;
break;
case AFS_FTYPE_DIR:
- inode->i_mode = S_IFDIR | status->mode;
+ inode->i_mode = S_IFDIR | (status->mode & S_IALLUGO);
inode->i_op = &afs_dir_inode_operations;
inode->i_fop = &afs_dir_file_operations;
inode->i_mapping->a_ops = &afs_dir_aops;
@@ -198,7 +198,7 @@ static void afs_apply_status(struct afs_operation *op,
if (status->mode != vnode->status.mode) {
mode = inode->i_mode;
mode &= ~S_IALLUGO;
- mode |= status->mode;
+ mode |= status->mode & S_IALLUGO;
WRITE_ONCE(inode->i_mode, mode);
}
@@ -293,8 +293,9 @@ void afs_vnode_commit_status(struct afs_operation *op, struct afs_vnode_param *v
op->flags &= ~AFS_OPERATION_DIR_CONFLICT;
}
} else if (vp->scb.have_status) {
- if (vp->dv_before + vp->dv_delta != vp->scb.status.data_version &&
- vp->speculative)
+ if (vp->speculative &&
+ (test_bit(AFS_VNODE_MODIFYING, &vnode->flags) ||
+ vp->dv_before != vnode->status.data_version))
/* Ignore the result of a speculative bulk status fetch
* if it splits around a modification op, thereby
* appearing to regress the data version.
@@ -909,6 +910,7 @@ int afs_setattr(struct dentry *dentry, struct iattr *attr)
}
op->ctime = attr->ia_ctime;
op->file[0].update_ctime = 1;
+ op->file[0].modification = true;
op->ops = &afs_setattr_operation;
ret = afs_do_sync_operation(op);
diff --git a/fs/afs/internal.h b/fs/afs/internal.h
index 525ef07..ffe318a 100644
--- a/fs/afs/internal.h
+++ b/fs/afs/internal.h
@@ -640,6 +640,7 @@ struct afs_vnode {
#define AFS_VNODE_PSEUDODIR 7 /* set if Vnode is a pseudo directory */
#define AFS_VNODE_NEW_CONTENT 8 /* Set if file has new content (create/trunc-0) */
#define AFS_VNODE_SILLY_DELETED 9 /* Set if file has been silly-deleted */
+#define AFS_VNODE_MODIFYING 10 /* Set if we're performing a modification op */
struct list_head wb_keys; /* List of keys available for writeback */
struct list_head pending_locks; /* locks waiting to be granted */
@@ -756,6 +757,7 @@ struct afs_vnode_param {
bool set_size:1; /* Must update i_size */
bool op_unlinked:1; /* True if file was unlinked by op */
bool speculative:1; /* T if speculative status fetch (no vnode lock) */
+ bool modification:1; /* Set if the content gets modified */
};
/*
diff --git a/fs/afs/write.c b/fs/afs/write.c
index c9195fc..d37b5cf 100644
--- a/fs/afs/write.c
+++ b/fs/afs/write.c
@@ -450,6 +450,7 @@ static int afs_store_data(struct address_space *mapping,
afs_op_set_vnode(op, 0, vnode);
op->file[0].dv_delta = 1;
op->store.mapping = mapping;
+ op->file[0].modification = true;
op->store.first = first;
op->store.last = last;
op->store.first_offset = offset;
diff --git a/fs/block_dev.c b/fs/block_dev.c
index fe201b7..29f020c 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -280,6 +280,8 @@ __blkdev_direct_IO_simple(struct kiocb *iocb, struct iov_iter *iter,
bio.bi_opf = dio_bio_write_op(iocb);
task_io_account_write(ret);
}
+ if (iocb->ki_flags & IOCB_NOWAIT)
+ bio.bi_opf |= REQ_NOWAIT;
if (iocb->ki_flags & IOCB_HIPRI)
bio_set_polled(&bio, iocb);
@@ -433,6 +435,8 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
bio->bi_opf = dio_bio_write_op(iocb);
task_io_account_write(bio->bi_iter.bi_size);
}
+ if (iocb->ki_flags & IOCB_NOWAIT)
+ bio->bi_opf |= REQ_NOWAIT;
dio->size += bio->bi_iter.bi_size;
pos += bio->bi_iter.bi_size;
@@ -1404,13 +1408,16 @@ int bdev_disk_changed(struct block_device *bdev, bool invalidate)
lockdep_assert_held(&bdev->bd_mutex);
- clear_bit(GD_NEED_PART_SCAN, &bdev->bd_disk->state);
+ if (!(disk->flags & GENHD_FL_UP))
+ return -ENXIO;
rescan:
ret = blk_drop_partitions(bdev);
if (ret)
return ret;
+ clear_bit(GD_NEED_PART_SCAN, &disk->state);
+
/*
* Historically we only set the capacity to zero for devices that
* support partitions (independ of actually having partitions created).
@@ -1899,6 +1906,7 @@ ssize_t blkdev_write_iter(struct kiocb *iocb, struct iov_iter *from)
struct inode *bd_inode = bdev_file_inode(file);
loff_t size = i_size_read(bd_inode);
struct blk_plug plug;
+ size_t shorted = 0;
ssize_t ret;
if (bdev_read_only(I_BDEV(bd_inode)))
@@ -1916,12 +1924,17 @@ ssize_t blkdev_write_iter(struct kiocb *iocb, struct iov_iter *from)
if ((iocb->ki_flags & (IOCB_NOWAIT | IOCB_DIRECT)) == IOCB_NOWAIT)
return -EOPNOTSUPP;
- iov_iter_truncate(from, size - iocb->ki_pos);
+ size -= iocb->ki_pos;
+ if (iov_iter_count(from) > size) {
+ shorted = iov_iter_count(from) - size;
+ iov_iter_truncate(from, size);
+ }
blk_start_plug(&plug);
ret = __generic_file_write_iter(iocb, from);
if (ret > 0)
ret = generic_write_sync(iocb, ret);
+ iov_iter_reexpand(from, iov_iter_count(from) + shorted);
blk_finish_plug(&plug);
return ret;
}
@@ -1933,13 +1946,21 @@ ssize_t blkdev_read_iter(struct kiocb *iocb, struct iov_iter *to)
struct inode *bd_inode = bdev_file_inode(file);
loff_t size = i_size_read(bd_inode);
loff_t pos = iocb->ki_pos;
+ size_t shorted = 0;
+ ssize_t ret;
if (pos >= size)
return 0;
size -= pos;
- iov_iter_truncate(to, size);
- return generic_file_read_iter(iocb, to);
+ if (iov_iter_count(to) > size) {
+ shorted = iov_iter_count(to) - size;
+ iov_iter_truncate(to, size);
+ }
+
+ ret = generic_file_read_iter(iocb, to);
+ iov_iter_reexpand(to, iov_iter_count(to) + shorted);
+ return ret;
}
EXPORT_SYMBOL_GPL(blkdev_read_iter);
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index eeface3..7bdc86b 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -80,10 +80,15 @@ static int compression_compress_pages(int type, struct list_head *ws,
case BTRFS_COMPRESS_NONE:
default:
/*
- * This can't happen, the type is validated several times
- * before we get here. As a sane fallback, return what the
- * callers will understand as 'no compression happened'.
+ * This can happen when compression races with remount setting
+ * it to 'no compress', while caller doesn't call
+ * inode_need_compress() to check if we really need to
+ * compress.
+ *
+ * Not a big deal, just need to inform caller that we
+ * haven't allocated any pages yet.
*/
+ *out_pages = 0;
return -E2BIG;
}
}
diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
index 9faf15b..519cf14 100644
--- a/fs/btrfs/ctree.c
+++ b/fs/btrfs/ctree.c
@@ -1367,10 +1367,30 @@ get_old_root(struct btrfs_root *root, u64 time_seq)
"failed to read tree block %llu from get_old_root",
logical);
} else {
+ struct tree_mod_elem *tm2;
+
btrfs_tree_read_lock(old);
eb = btrfs_clone_extent_buffer(old);
+ /*
+ * After the lookup for the most recent tree mod operation
+ * above and before we locked and cloned the extent buffer
+ * 'old', a new tree mod log operation may have been added.
+ * So lookup for a more recent one to make sure the number
+ * of mod log operations we replay is consistent with the
+ * number of items we have in the cloned extent buffer,
+ * otherwise we can hit a BUG_ON when rewinding the extent
+ * buffer.
+ */
+ tm2 = tree_mod_log_search(fs_info, logical, time_seq);
btrfs_tree_read_unlock(old);
free_extent_buffer(old);
+ ASSERT(tm2);
+ ASSERT(tm2 == tm || tm2->seq > tm->seq);
+ if (!tm2 || tm2->seq < tm->seq) {
+ free_extent_buffer(eb);
+ return NULL;
+ }
+ tm = tm2;
}
} else if (old_root) {
eb_root_owner = btrfs_header_owner(eb_root);
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 30cf917..81e98a4 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -4662,7 +4662,7 @@ int extent_fiemap(struct btrfs_inode *inode, struct fiemap_extent_info *fieinfo,
u64 start, u64 len)
{
int ret = 0;
- u64 off = start;
+ u64 off;
u64 max = start + len;
u32 flags = 0;
u32 found_type;
@@ -4698,6 +4698,11 @@ int extent_fiemap(struct btrfs_inode *inode, struct fiemap_extent_info *fieinfo,
goto out_free_ulist;
}
+ /*
+ * We can't initialize that to 'start' as this could miss extents due
+ * to extent item merging
+ */
+ off = 0;
start = round_down(start, btrfs_inode_sectorsize(inode));
len = round_up(max, btrfs_inode_sectorsize(inode)) - start;
diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index c81a20c..7e87549 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -2065,6 +2065,30 @@ static int start_ordered_ops(struct inode *inode, loff_t start, loff_t end)
return ret;
}
+static inline bool skip_inode_logging(const struct btrfs_log_ctx *ctx)
+{
+ struct btrfs_inode *inode = BTRFS_I(ctx->inode);
+ struct btrfs_fs_info *fs_info = inode->root->fs_info;
+
+ if (btrfs_inode_in_log(inode, fs_info->generation) &&
+ list_empty(&ctx->ordered_extents))
+ return true;
+
+ /*
+ * If we are doing a fast fsync we can not bail out if the inode's
+ * last_trans is <= then the last committed transaction, because we only
+ * update the last_trans of the inode during ordered extent completion,
+ * and for a fast fsync we don't wait for that, we only wait for the
+ * writeback to complete.
+ */
+ if (inode->last_trans <= fs_info->last_trans_committed &&
+ (test_bit(BTRFS_INODE_NEEDS_FULL_SYNC, &inode->runtime_flags) ||
+ list_empty(&ctx->ordered_extents)))
+ return true;
+
+ return false;
+}
+
/*
* fsync call for both files and directories. This logs the inode into
* the tree log instead of forcing full commits whenever possible.
@@ -2080,7 +2104,6 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
{
struct dentry *dentry = file_dentry(file);
struct inode *inode = d_inode(dentry);
- struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
struct btrfs_root *root = BTRFS_I(inode)->root;
struct btrfs_trans_handle *trans;
struct btrfs_log_ctx ctx;
@@ -2187,17 +2210,8 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
atomic_inc(&root->log_batch);
- /*
- * If we are doing a fast fsync we can not bail out if the inode's
- * last_trans is <= then the last committed transaction, because we only
- * update the last_trans of the inode during ordered extent completion,
- * and for a fast fsync we don't wait for that, we only wait for the
- * writeback to complete.
- */
smp_mb();
- if (btrfs_inode_in_log(BTRFS_I(inode), fs_info->generation) ||
- (BTRFS_I(inode)->last_trans <= fs_info->last_trans_committed &&
- (full_sync || list_empty(&ctx.ordered_extents)))) {
+ if (skip_inode_logging(&ctx)) {
/*
* We've had everything committed since the last time we were
* modified so clear this flag in case it was set for whatever
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 4162ef6..94c24b2a 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -2967,6 +2967,7 @@ void btrfs_run_delayed_iputs(struct btrfs_fs_info *fs_info)
inode = list_first_entry(&fs_info->delayed_iputs,
struct btrfs_inode, delayed_iput);
run_delayed_iput_locked(fs_info, inode);
+ cond_resched_lock(&fs_info->delayed_iput_lock);
}
spin_unlock(&fs_info->delayed_iput_lock);
}
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index f513531..040db0d 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -678,8 +678,6 @@ static noinline int create_subvol(struct inode *dir,
btrfs_set_root_otransid(root_item, trans->transid);
btrfs_tree_unlock(leaf);
- free_extent_buffer(leaf);
- leaf = NULL;
btrfs_set_root_dirid(root_item, new_dirid);
@@ -688,8 +686,22 @@ static noinline int create_subvol(struct inode *dir,
key.type = BTRFS_ROOT_ITEM_KEY;
ret = btrfs_insert_root(trans, fs_info->tree_root, &key,
root_item);
- if (ret)
+ if (ret) {
+ /*
+ * Since we don't abort the transaction in this case, free the
+ * tree block so that we don't leak space and leave the
+ * filesystem in an inconsistent state (an extent item in the
+ * extent tree without backreferences). Also no need to have
+ * the tree block locked since it is not in any tree at this
+ * point, so no other task can find it and use it.
+ */
+ btrfs_free_tree_block(trans, root, leaf, 0, 1);
+ free_extent_buffer(leaf);
goto fail;
+ }
+
+ free_extent_buffer(leaf);
+ leaf = NULL;
key.offset = (u64)-1;
new_root = btrfs_get_new_fs_root(fs_info, objectid, anon_dev);
diff --git a/fs/btrfs/reflink.c b/fs/btrfs/reflink.c
index c4f87df..eeb66e7 100644
--- a/fs/btrfs/reflink.c
+++ b/fs/btrfs/reflink.c
@@ -282,6 +282,11 @@ static int clone_copy_inline_extent(struct inode *dst,
out:
if (!ret && !trans) {
/*
+ * Release path before starting a new transaction so we don't
+ * hold locks that would confuse lockdep.
+ */
+ btrfs_release_path(path);
+ /*
* No transaction here means we copied the inline extent into a
* page of the destination inode.
*
diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index 6a44d8f..c21545c 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -733,10 +733,12 @@ static struct btrfs_root *create_reloc_root(struct btrfs_trans_handle *trans,
struct extent_buffer *eb;
struct btrfs_root_item *root_item;
struct btrfs_key root_key;
- int ret;
+ int ret = 0;
+ bool must_abort = false;
root_item = kmalloc(sizeof(*root_item), GFP_NOFS);
- BUG_ON(!root_item);
+ if (!root_item)
+ return ERR_PTR(-ENOMEM);
root_key.objectid = BTRFS_TREE_RELOC_OBJECTID;
root_key.type = BTRFS_ROOT_ITEM_KEY;
@@ -748,7 +750,9 @@ static struct btrfs_root *create_reloc_root(struct btrfs_trans_handle *trans,
/* called by btrfs_init_reloc_root */
ret = btrfs_copy_root(trans, root, root->commit_root, &eb,
BTRFS_TREE_RELOC_OBJECTID);
- BUG_ON(ret);
+ if (ret)
+ goto fail;
+
/*
* Set the last_snapshot field to the generation of the commit
* root - like this ctree.c:btrfs_block_can_be_shared() behaves
@@ -769,9 +773,16 @@ static struct btrfs_root *create_reloc_root(struct btrfs_trans_handle *trans,
*/
ret = btrfs_copy_root(trans, root, root->node, &eb,
BTRFS_TREE_RELOC_OBJECTID);
- BUG_ON(ret);
+ if (ret)
+ goto fail;
}
+ /*
+ * We have changed references at this point, we must abort the
+ * transaction if anything fails.
+ */
+ must_abort = true;
+
memcpy(root_item, &root->root_item, sizeof(*root_item));
btrfs_set_root_bytenr(root_item, eb->start);
btrfs_set_root_level(root_item, btrfs_header_level(eb));
@@ -789,14 +800,25 @@ static struct btrfs_root *create_reloc_root(struct btrfs_trans_handle *trans,
ret = btrfs_insert_root(trans, fs_info->tree_root,
&root_key, root_item);
- BUG_ON(ret);
+ if (ret)
+ goto fail;
+
kfree(root_item);
reloc_root = btrfs_read_tree_root(fs_info->tree_root, &root_key);
- BUG_ON(IS_ERR(reloc_root));
+ if (IS_ERR(reloc_root)) {
+ ret = PTR_ERR(reloc_root);
+ goto abort;
+ }
set_bit(BTRFS_ROOT_SHAREABLE, &reloc_root->state);
reloc_root->last_trans = trans->transid;
return reloc_root;
+fail:
+ kfree(root_item);
+abort:
+ if (must_abort)
+ btrfs_abort_transaction(trans, ret);
+ return ERR_PTR(ret);
}
/*
@@ -875,7 +897,7 @@ int btrfs_update_reloc_root(struct btrfs_trans_handle *trans,
int ret;
if (!have_reloc_root(root))
- goto out;
+ return 0;
reloc_root = root->reloc_root;
root_item = &reloc_root->root_item;
@@ -908,10 +930,8 @@ int btrfs_update_reloc_root(struct btrfs_trans_handle *trans,
ret = btrfs_update_root(trans, fs_info->tree_root,
&reloc_root->root_key, root_item);
- BUG_ON(ret);
btrfs_put_root(reloc_root);
-out:
- return 0;
+ return ret;
}
/*
@@ -1185,8 +1205,8 @@ int replace_path(struct btrfs_trans_handle *trans, struct reloc_control *rc,
int ret;
int slot;
- BUG_ON(src->root_key.objectid != BTRFS_TREE_RELOC_OBJECTID);
- BUG_ON(dest->root_key.objectid == BTRFS_TREE_RELOC_OBJECTID);
+ ASSERT(src->root_key.objectid == BTRFS_TREE_RELOC_OBJECTID);
+ ASSERT(dest->root_key.objectid != BTRFS_TREE_RELOC_OBJECTID);
last_snapshot = btrfs_root_last_snapshot(&src->root_item);
again:
@@ -1221,7 +1241,7 @@ int replace_path(struct btrfs_trans_handle *trans, struct reloc_control *rc,
struct btrfs_key first_key;
level = btrfs_header_level(parent);
- BUG_ON(level < lowest_level);
+ ASSERT(level >= lowest_level);
ret = btrfs_bin_search(parent, &key, &slot);
if (ret < 0)
diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
index 261a507..af2f2f8 100644
--- a/fs/btrfs/transaction.c
+++ b/fs/btrfs/transaction.c
@@ -1950,7 +1950,6 @@ static void cleanup_transaction(struct btrfs_trans_handle *trans, int err)
*/
BUG_ON(list_empty(&cur_trans->list));
- list_del_init(&cur_trans->list);
if (cur_trans == fs_info->running_transaction) {
cur_trans->state = TRANS_STATE_COMMIT_DOING;
spin_unlock(&fs_info->trans_lock);
@@ -1959,6 +1958,17 @@ static void cleanup_transaction(struct btrfs_trans_handle *trans, int err)
spin_lock(&fs_info->trans_lock);
}
+
+ /*
+ * Now that we know no one else is still using the transaction we can
+ * remove the transaction from the list of transactions. This avoids
+ * the transaction kthread from cleaning up the transaction while some
+ * other task is still using it, which could result in a use-after-free
+ * on things like log trees, as it forces the transaction kthread to
+ * wait for this transaction to be cleaned up by us.
+ */
+ list_del_init(&cur_trans->list);
+
spin_unlock(&fs_info->trans_lock);
btrfs_cleanup_one_transaction(trans->transaction, fs_info);
diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
index 5b11bb9..9a0cfa0 100644
--- a/fs/btrfs/tree-log.c
+++ b/fs/btrfs/tree-log.c
@@ -1823,8 +1823,6 @@ static noinline int link_to_fixup_dir(struct btrfs_trans_handle *trans,
ret = btrfs_update_inode(trans, root, inode);
} else if (ret == -EEXIST) {
ret = 0;
- } else {
- BUG(); /* Logic Error */
}
iput(inode);
@@ -6062,7 +6060,8 @@ static int btrfs_log_inode_parent(struct btrfs_trans_handle *trans,
* (since logging them is pointless, a link count of 0 means they
* will never be accessible).
*/
- if (btrfs_inode_in_log(inode, trans->transid) ||
+ if ((btrfs_inode_in_log(inode, trans->transid) &&
+ list_empty(&ctx->ordered_extents)) ||
inode->vfs_inode.i_nlink == 0) {
ret = BTRFS_NO_LOG_SYNC;
goto end_no_trans;
diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
index 576d012..e4fc99a 100644
--- a/fs/ceph/caps.c
+++ b/fs/ceph/caps.c
@@ -1866,6 +1866,7 @@ static int try_nonblocking_invalidate(struct inode *inode)
u32 invalidating_gen = ci->i_rdcache_gen;
spin_unlock(&ci->i_ceph_lock);
+ ceph_fscache_invalidate(inode);
invalidate_mapping_pages(&inode->i_data, 0, -1);
spin_lock(&ci->i_ceph_lock);
diff --git a/fs/ceph/export.c b/fs/ceph/export.c
index e088843..042bb4a 100644
--- a/fs/ceph/export.c
+++ b/fs/ceph/export.c
@@ -129,6 +129,10 @@ static struct inode *__lookup_inode(struct super_block *sb, u64 ino)
vino.ino = ino;
vino.snap = CEPH_NOSNAP;
+
+ if (ceph_vino_is_reserved(vino))
+ return ERR_PTR(-ESTALE);
+
inode = ceph_find_inode(sb, vino);
if (!inode) {
struct ceph_mds_request *req;
@@ -178,8 +182,10 @@ static struct dentry *__fh_to_dentry(struct super_block *sb, u64 ino)
return ERR_CAST(inode);
/* We need LINK caps to reliably check i_nlink */
err = ceph_do_getattr(inode, CEPH_CAP_LINK_SHARED, false);
- if (err)
+ if (err) {
+ iput(inode);
return ERR_PTR(err);
+ }
/* -ESTALE if inode as been unlinked and no file is open */
if ((inode->i_nlink == 0) && (atomic_read(&inode->i_count) == 1)) {
iput(inode);
@@ -212,6 +218,10 @@ static struct dentry *__snapfh_to_dentry(struct super_block *sb,
vino.ino = sfh->ino;
vino.snap = sfh->snapid;
}
+
+ if (ceph_vino_is_reserved(vino))
+ return ERR_PTR(-ESTALE);
+
inode = ceph_find_inode(sb, vino);
if (inode)
return d_obtain_alias(inode);
diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
index 2462a9a..346fcdf 100644
--- a/fs/ceph/inode.c
+++ b/fs/ceph/inode.c
@@ -56,6 +56,9 @@ struct inode *ceph_get_inode(struct super_block *sb, struct ceph_vino vino)
{
struct inode *inode;
+ if (ceph_vino_is_reserved(vino))
+ return ERR_PTR(-EREMOTEIO);
+
inode = iget5_locked(sb, (unsigned long)vino.ino, ceph_ino_compare,
ceph_set_ino_cb, &vino);
if (!inode)
@@ -87,14 +90,15 @@ struct inode *ceph_get_snapdir(struct inode *parent)
inode->i_mtime = parent->i_mtime;
inode->i_ctime = parent->i_ctime;
inode->i_atime = parent->i_atime;
- inode->i_op = &ceph_snapdir_iops;
- inode->i_fop = &ceph_snapdir_fops;
- ci->i_snap_caps = CEPH_CAP_PIN; /* so we can open */
ci->i_rbytes = 0;
ci->i_btime = ceph_inode(parent)->i_btime;
- if (inode->i_state & I_NEW)
+ if (inode->i_state & I_NEW) {
+ inode->i_op = &ceph_snapdir_iops;
+ inode->i_fop = &ceph_snapdir_fops;
+ ci->i_snap_caps = CEPH_CAP_PIN; /* so we can open */
unlock_new_inode(inode);
+ }
return inode;
}
@@ -1912,6 +1916,7 @@ static void ceph_do_invalidate_pages(struct inode *inode)
orig_gen = ci->i_rdcache_gen;
spin_unlock(&ci->i_ceph_lock);
+ ceph_fscache_invalidate(inode);
if (invalidate_inode_pages2(inode->i_mapping) < 0) {
pr_err("invalidate_pages %p fails\n", inode);
}
diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
index 8f1d750..d560752 100644
--- a/fs/ceph/mds_client.c
+++ b/fs/ceph/mds_client.c
@@ -433,6 +433,13 @@ static int ceph_parse_deleg_inos(void **p, void *end,
ceph_decode_64_safe(p, end, start, bad);
ceph_decode_64_safe(p, end, len, bad);
+
+ /* Don't accept a delegation of system inodes */
+ if (start < CEPH_INO_SYSTEM_BASE) {
+ pr_warn_ratelimited("ceph: ignoring reserved inode range delegation (start=0x%llx len=0x%llx)\n",
+ start, len);
+ continue;
+ }
while (len--) {
int err = xa_insert(&s->s_delegated_inos, ino = start++,
DELEGATED_INO_AVAILABLE,
diff --git a/fs/ceph/super.h b/fs/ceph/super.h
index 482473e..c33f744a 100644
--- a/fs/ceph/super.h
+++ b/fs/ceph/super.h
@@ -529,10 +529,34 @@ static inline int ceph_ino_compare(struct inode *inode, void *data)
ci->i_vino.snap == pvino->snap;
}
+/*
+ * The MDS reserves a set of inodes for its own usage. These should never
+ * be accessible by clients, and so the MDS has no reason to ever hand these
+ * out. The range is CEPH_MDS_INO_MDSDIR_OFFSET..CEPH_INO_SYSTEM_BASE.
+ *
+ * These come from src/mds/mdstypes.h in the ceph sources.
+ */
+#define CEPH_MAX_MDS 0x100
+#define CEPH_NUM_STRAY 10
+#define CEPH_MDS_INO_MDSDIR_OFFSET (1 * CEPH_MAX_MDS)
+#define CEPH_INO_SYSTEM_BASE ((6*CEPH_MAX_MDS) + (CEPH_MAX_MDS * CEPH_NUM_STRAY))
+
+static inline bool ceph_vino_is_reserved(const struct ceph_vino vino)
+{
+ if (vino.ino < CEPH_INO_SYSTEM_BASE &&
+ vino.ino >= CEPH_MDS_INO_MDSDIR_OFFSET) {
+ WARN_RATELIMIT(1, "Attempt to access reserved inode number 0x%llx", vino.ino);
+ return true;
+ }
+ return false;
+}
static inline struct inode *ceph_find_inode(struct super_block *sb,
struct ceph_vino vino)
{
+ if (ceph_vino_is_reserved(vino))
+ return NULL;
+
/*
* NB: The hashval will be run through the fs/inode.c hash function
* anyway, so there is no need to squash the inode number down to
diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
index fa359f4..aabaebd 100644
--- a/fs/cifs/connect.c
+++ b/fs/cifs/connect.c
@@ -663,6 +663,7 @@ server_unresponsive(struct TCP_Server_Info *server)
*/
if ((server->tcpStatus == CifsGood ||
server->tcpStatus == CifsNeedNegotiate) &&
+ (!server->ops->can_echo || server->ops->can_echo(server)) &&
time_after(jiffies, server->lstrp + 3 * server->echo_interval)) {
cifs_server_dbg(VFS, "has not responded in %lu seconds. Reconnecting...\n",
(3 * server->echo_interval) / HZ);
diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index be46fab..da05757 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -164,6 +164,7 @@ int cifs_posix_open(char *full_path, struct inode **pinode,
goto posix_open_ret;
}
} else {
+ cifs_revalidate_mapping(*pinode);
cifs_fattr_to_inode(*pinode, &fattr);
}
diff --git a/fs/cifs/sess.c b/fs/cifs/sess.c
index c2fe85c..1a0298d 100644
--- a/fs/cifs/sess.c
+++ b/fs/cifs/sess.c
@@ -92,6 +92,12 @@ int cifs_try_adding_channels(struct cifs_ses *ses)
return 0;
}
+ if (!(ses->server->capabilities & SMB2_GLOBAL_CAP_MULTI_CHANNEL)) {
+ cifs_dbg(VFS, "server %s does not support multichannel\n", ses->server->hostname);
+ ses->chan_max = 1;
+ return 0;
+ }
+
/*
* Make a copy of the iface list at the time and use that
* instead so as to not hold the iface spinlock for opening
diff --git a/fs/cifs/smb2misc.c b/fs/cifs/smb2misc.c
index db22d68..be3df90 100644
--- a/fs/cifs/smb2misc.c
+++ b/fs/cifs/smb2misc.c
@@ -745,8 +745,8 @@ smb2_is_valid_oplock_break(char *buffer, struct TCP_Server_Info *server)
}
}
spin_unlock(&cifs_tcp_ses_lock);
- cifs_dbg(FYI, "Can not process oplock break for non-existent connection\n");
- return false;
+ cifs_dbg(FYI, "No file id matched, oplock break ignored\n");
+ return true;
}
void
diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
index 46d247c..a9d15553 100644
--- a/fs/cifs/smb2ops.c
+++ b/fs/cifs/smb2ops.c
@@ -1705,18 +1705,14 @@ smb2_ioctl_query_info(const unsigned int xid,
}
iqinf_exit:
- kfree(vars);
- kfree(buffer);
- SMB2_open_free(&rqst[0]);
- if (qi.flags & PASSTHRU_FSCTL)
- SMB2_ioctl_free(&rqst[1]);
- else
- SMB2_query_info_free(&rqst[1]);
-
- SMB2_close_free(&rqst[2]);
+ cifs_small_buf_release(rqst[0].rq_iov[0].iov_base);
+ cifs_small_buf_release(rqst[1].rq_iov[0].iov_base);
+ cifs_small_buf_release(rqst[2].rq_iov[0].iov_base);
free_rsp_buf(resp_buftype[0], rsp_iov[0].iov_base);
free_rsp_buf(resp_buftype[1], rsp_iov[1].iov_base);
free_rsp_buf(resp_buftype[2], rsp_iov[2].iov_base);
+ kfree(vars);
+ kfree(buffer);
return rc;
e_fault:
@@ -1768,6 +1764,8 @@ smb2_copychunk_range(const unsigned int xid,
cpu_to_le32(min_t(u32, len, tcon->max_bytes_chunk));
/* Request server copy to target from src identified by key */
+ kfree(retbuf);
+ retbuf = NULL;
rc = SMB2_ioctl(xid, tcon, trgtfile->fid.persistent_fid,
trgtfile->fid.volatile_fid, FSCTL_SRV_COPYCHUNK_WRITE,
true /* is_fsctl */, (char *)pcchunk,
@@ -2174,7 +2172,7 @@ smb3_notify(const unsigned int xid, struct file *pfile,
cifs_sb = CIFS_SB(inode->i_sb);
- utf16_path = cifs_convert_path_to_utf16(path + 1, cifs_sb);
+ utf16_path = cifs_convert_path_to_utf16(path, cifs_sb);
if (utf16_path == NULL) {
rc = -ENOMEM;
goto notify_exit;
@@ -4076,7 +4074,7 @@ smb2_get_enc_key(struct TCP_Server_Info *server, __u64 ses_id, int enc, u8 *key)
}
spin_unlock(&cifs_tcp_ses_lock);
- return 1;
+ return -EAGAIN;
}
/*
* Encrypt or decrypt @rqst message. @rqst[0] has the following format:
diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
index d1d5506..ab50996 100644
--- a/fs/cifs/smb2pdu.c
+++ b/fs/cifs/smb2pdu.c
@@ -840,6 +840,8 @@ SMB2_negotiate(const unsigned int xid, struct cifs_ses *ses)
req->SecurityMode = 0;
req->Capabilities = cpu_to_le32(server->vals->req_capabilities);
+ if (ses->chan_max > 1)
+ req->Capabilities |= cpu_to_le32(SMB2_GLOBAL_CAP_MULTI_CHANNEL);
/* ClientGUID must be zero for SMB2.02 dialect */
if (server->vals->protocol_id == SMB20_PROT_ID)
@@ -949,6 +951,13 @@ SMB2_negotiate(const unsigned int xid, struct cifs_ses *ses)
/* Internal types */
server->capabilities |= SMB2_NT_FIND | SMB2_LARGE_FILES;
+ /*
+ * SMB3.0 supports only 1 cipher and doesn't have an encryption neg context.
+ * Set the cipher type manually.
+ */
+ if (server->dialect == SMB30_PROT_ID && (server->capabilities & SMB2_GLOBAL_CAP_ENCRYPTION))
+ server->cipher_type = SMB2_ENCRYPTION_AES128_CCM;
+
security_blob = smb2_get_data_area_len(&blob_offset, &blob_length,
(struct smb2_sync_hdr *)rsp);
/*
@@ -1025,6 +1034,9 @@ int smb3_validate_negotiate(const unsigned int xid, struct cifs_tcon *tcon)
pneg_inbuf->Capabilities =
cpu_to_le32(server->vals->req_capabilities);
+ if (tcon->ses->chan_max > 1)
+ pneg_inbuf->Capabilities |= cpu_to_le32(SMB2_GLOBAL_CAP_MULTI_CHANNEL);
+
memcpy(pneg_inbuf->Guid, server->client_guid,
SMB2_CLIENT_GUID_SIZE);
@@ -3886,10 +3898,10 @@ smb2_new_read_req(void **buf, unsigned int *total_len,
* Related requests use info from previous read request
* in chain.
*/
- shdr->SessionId = 0xFFFFFFFF;
+ shdr->SessionId = 0xFFFFFFFFFFFFFFFF;
shdr->TreeId = 0xFFFFFFFF;
- req->PersistentFileId = 0xFFFFFFFF;
- req->VolatileFileId = 0xFFFFFFFF;
+ req->PersistentFileId = 0xFFFFFFFFFFFFFFFF;
+ req->VolatileFileId = 0xFFFFFFFFFFFFFFFF;
}
}
if (remaining_bytes > io_parms->length)
diff --git a/fs/dax.c b/fs/dax.c
index b3d27fd..df5485b4 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -144,6 +144,16 @@ struct wait_exceptional_entry_queue {
struct exceptional_entry_key key;
};
+/**
+ * enum dax_wake_mode: waitqueue wakeup behaviour
+ * @WAKE_ALL: wake all waiters in the waitqueue
+ * @WAKE_NEXT: wake only the first waiter in the waitqueue
+ */
+enum dax_wake_mode {
+ WAKE_ALL,
+ WAKE_NEXT,
+};
+
static wait_queue_head_t *dax_entry_waitqueue(struct xa_state *xas,
void *entry, struct exceptional_entry_key *key)
{
@@ -182,7 +192,8 @@ static int wake_exceptional_entry_func(wait_queue_entry_t *wait,
* The important information it's conveying is whether the entry at
* this index used to be a PMD entry.
*/
-static void dax_wake_entry(struct xa_state *xas, void *entry, bool wake_all)
+static void dax_wake_entry(struct xa_state *xas, void *entry,
+ enum dax_wake_mode mode)
{
struct exceptional_entry_key key;
wait_queue_head_t *wq;
@@ -196,7 +207,7 @@ static void dax_wake_entry(struct xa_state *xas, void *entry, bool wake_all)
* must be in the waitqueue and the following check will see them.
*/
if (waitqueue_active(wq))
- __wake_up(wq, TASK_NORMAL, wake_all ? 0 : 1, &key);
+ __wake_up(wq, TASK_NORMAL, mode == WAKE_ALL ? 0 : 1, &key);
}
/*
@@ -264,11 +275,11 @@ static void wait_entry_unlocked(struct xa_state *xas, void *entry)
finish_wait(wq, &ewait.wait);
}
-static void put_unlocked_entry(struct xa_state *xas, void *entry)
+static void put_unlocked_entry(struct xa_state *xas, void *entry,
+ enum dax_wake_mode mode)
{
- /* If we were the only waiter woken, wake the next one */
if (entry && !dax_is_conflict(entry))
- dax_wake_entry(xas, entry, false);
+ dax_wake_entry(xas, entry, mode);
}
/*
@@ -286,7 +297,7 @@ static void dax_unlock_entry(struct xa_state *xas, void *entry)
old = xas_store(xas, entry);
xas_unlock_irq(xas);
BUG_ON(!dax_is_locked(old));
- dax_wake_entry(xas, entry, false);
+ dax_wake_entry(xas, entry, WAKE_NEXT);
}
/*
@@ -524,7 +535,7 @@ static void *grab_mapping_entry(struct xa_state *xas,
dax_disassociate_entry(entry, mapping, false);
xas_store(xas, NULL); /* undo the PMD join */
- dax_wake_entry(xas, entry, true);
+ dax_wake_entry(xas, entry, WAKE_ALL);
mapping->nrexceptional--;
entry = NULL;
xas_set(xas, index);
@@ -622,7 +633,7 @@ struct page *dax_layout_busy_page_range(struct address_space *mapping,
entry = get_unlocked_entry(&xas, 0);
if (entry)
page = dax_busy_page(entry);
- put_unlocked_entry(&xas, entry);
+ put_unlocked_entry(&xas, entry, WAKE_NEXT);
if (page)
break;
if (++scanned % XA_CHECK_SCHED)
@@ -664,7 +675,7 @@ static int __dax_invalidate_entry(struct address_space *mapping,
mapping->nrexceptional--;
ret = 1;
out:
- put_unlocked_entry(&xas, entry);
+ put_unlocked_entry(&xas, entry, WAKE_ALL);
xas_unlock_irq(&xas);
return ret;
}
@@ -937,13 +948,13 @@ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
xas_lock_irq(xas);
xas_store(xas, entry);
xas_clear_mark(xas, PAGECACHE_TAG_DIRTY);
- dax_wake_entry(xas, entry, false);
+ dax_wake_entry(xas, entry, WAKE_NEXT);
trace_dax_writeback_one(mapping->host, index, count);
return ret;
put_unlocked:
- put_unlocked_entry(xas, entry);
+ put_unlocked_entry(xas, entry, WAKE_NEXT);
return ret;
}
@@ -1684,7 +1695,7 @@ dax_insert_pfn_mkwrite(struct vm_fault *vmf, pfn_t pfn, unsigned int order)
/* Did we race with someone splitting entry or so? */
if (!entry || dax_is_conflict(entry) ||
(order == 0 && !dax_is_pte_entry(entry))) {
- put_unlocked_entry(&xas, entry);
+ put_unlocked_entry(&xas, entry, WAKE_NEXT);
xas_unlock_irq(&xas);
trace_dax_insert_pfn_mkwrite_no_entry(mapping->host, vmf,
VM_FAULT_NOPAGE);
diff --git a/fs/debugfs/inode.c b/fs/debugfs/inode.c
index 86c7f04..720d65f 100644
--- a/fs/debugfs/inode.c
+++ b/fs/debugfs/inode.c
@@ -35,7 +35,7 @@
static struct vfsmount *debugfs_mount;
static int debugfs_mount_count;
static bool debugfs_registered;
-static unsigned int debugfs_allow = DEFAULT_DEBUGFS_ALLOW_BITS;
+static unsigned int debugfs_allow __ro_after_init = DEFAULT_DEBUGFS_ALLOW_BITS;
/*
* Don't allow access attributes to be changed whilst the kernel is locked down
diff --git a/fs/direct-io.c b/fs/direct-io.c
index d53fa92..c64d4eb3 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -810,6 +810,7 @@ submit_page_section(struct dio *dio, struct dio_submit *sdio, struct page *page,
struct buffer_head *map_bh)
{
int ret = 0;
+ int boundary = sdio->boundary; /* dio_send_cur_page may clear it */
if (dio->op == REQ_OP_WRITE) {
/*
@@ -848,10 +849,10 @@ submit_page_section(struct dio *dio, struct dio_submit *sdio, struct page *page,
sdio->cur_page_fs_offset = sdio->block_in_file << sdio->blkbits;
out:
/*
- * If sdio->boundary then we want to schedule the IO now to
+ * If boundary then we want to schedule the IO now to
* avoid metadata seeks.
*/
- if (sdio->boundary) {
+ if (boundary) {
ret = dio_send_cur_page(dio, sdio, map_bh);
if (sdio->bio)
dio_bio_submit(dio, sdio);
diff --git a/fs/dlm/config.c b/fs/dlm/config.c
index 49c5f94..73e6643 100644
--- a/fs/dlm/config.c
+++ b/fs/dlm/config.c
@@ -125,7 +125,7 @@ static ssize_t cluster_cluster_name_store(struct config_item *item,
CONFIGFS_ATTR(cluster_, cluster_name);
static ssize_t cluster_set(struct dlm_cluster *cl, unsigned int *cl_field,
- int *info_field, bool (*check_cb)(unsigned int x),
+ int *info_field, int (*check_cb)(unsigned int x),
const char *buf, size_t len)
{
unsigned int x;
@@ -137,8 +137,11 @@ static ssize_t cluster_set(struct dlm_cluster *cl, unsigned int *cl_field,
if (rc)
return rc;
- if (check_cb && check_cb(x))
- return -EINVAL;
+ if (check_cb) {
+ rc = check_cb(x);
+ if (rc)
+ return rc;
+ }
*cl_field = x;
*info_field = x;
@@ -161,14 +164,20 @@ static ssize_t cluster_##name##_show(struct config_item *item, char *buf) \
} \
CONFIGFS_ATTR(cluster_, name);
-static bool dlm_check_zero(unsigned int x)
+static int dlm_check_zero(unsigned int x)
{
- return !x;
+ if (!x)
+ return -EINVAL;
+
+ return 0;
}
-static bool dlm_check_buffer_size(unsigned int x)
+static int dlm_check_buffer_size(unsigned int x)
{
- return (x < DEFAULT_BUFFER_SIZE);
+ if (x < DEFAULT_BUFFER_SIZE)
+ return -EINVAL;
+
+ return 0;
}
CLUSTER_ATTR(tcp_port, dlm_check_zero);
diff --git a/fs/dlm/debug_fs.c b/fs/dlm/debug_fs.c
index d6bbccb0..d5bd990 100644
--- a/fs/dlm/debug_fs.c
+++ b/fs/dlm/debug_fs.c
@@ -542,6 +542,7 @@ static void *table_seq_next(struct seq_file *seq, void *iter_ptr, loff_t *pos)
if (bucket >= ls->ls_rsbtbl_size) {
kfree(ri);
+ ++*pos;
return NULL;
}
tree = toss ? &ls->ls_rsbtbl[bucket].toss : &ls->ls_rsbtbl[bucket].keep;
diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
index 79f56f1..44e2716 100644
--- a/fs/dlm/lowcomms.c
+++ b/fs/dlm/lowcomms.c
@@ -612,10 +612,7 @@ static void shutdown_connection(struct connection *con)
{
int ret;
- if (cancel_work_sync(&con->swork)) {
- log_print("canceled swork for node %d", con->nodeid);
- clear_bit(CF_WRITE_PENDING, &con->flags);
- }
+ flush_work(&con->swork);
mutex_lock(&con->sock_mutex);
/* nothing to shutdown */
diff --git a/fs/dlm/midcomms.c b/fs/dlm/midcomms.c
index fde3a6a..0bedfa8 100644
--- a/fs/dlm/midcomms.c
+++ b/fs/dlm/midcomms.c
@@ -49,9 +49,10 @@ int dlm_process_incoming_buffer(int nodeid, unsigned char *buf, int len)
* cannot deliver this message to upper layers
*/
msglen = get_unaligned_le16(&hd->h_length);
- if (msglen > DEFAULT_BUFFER_SIZE) {
- log_print("received invalid length header: %u, will abort message parsing",
- msglen);
+ if (msglen > DEFAULT_BUFFER_SIZE ||
+ msglen < sizeof(struct dlm_header)) {
+ log_print("received invalid length header: %u from node %d, will abort message parsing",
+ msglen, nodeid);
return -EBADMSG;
}
diff --git a/fs/ecryptfs/crypto.c b/fs/ecryptfs/crypto.c
index 0681540..adf0707 100644
--- a/fs/ecryptfs/crypto.c
+++ b/fs/ecryptfs/crypto.c
@@ -296,10 +296,8 @@ static int crypt_scatterlist(struct ecryptfs_crypt_stat *crypt_stat,
struct extent_crypt_result ecr;
int rc = 0;
- if (!crypt_stat || !crypt_stat->tfm
- || !(crypt_stat->flags & ECRYPTFS_STRUCT_INITIALIZED))
- return -EINVAL;
-
+ BUG_ON(!crypt_stat || !crypt_stat->tfm
+ || !(crypt_stat->flags & ECRYPTFS_STRUCT_INITIALIZED));
if (unlikely(ecryptfs_verbosity > 0)) {
ecryptfs_printk(KERN_DEBUG, "Key size [%zd]; key:\n",
crypt_stat->key_size);
diff --git a/fs/ecryptfs/main.c b/fs/ecryptfs/main.c
index e63259f..b2f6a19 100644
--- a/fs/ecryptfs/main.c
+++ b/fs/ecryptfs/main.c
@@ -492,6 +492,12 @@ static struct dentry *ecryptfs_mount(struct file_system_type *fs_type, int flags
goto out;
}
+ if (!dev_name) {
+ rc = -EINVAL;
+ err = "Device name cannot be null";
+ goto out;
+ }
+
rc = ecryptfs_parse_options(sbi, raw_data, &check_ruid);
if (rc) {
err = "Error parsing options";
diff --git a/fs/erofs/erofs_fs.h b/fs/erofs/erofs_fs.h
index 9ad1615f..e8d04d8 100644
--- a/fs/erofs/erofs_fs.h
+++ b/fs/erofs/erofs_fs.h
@@ -75,6 +75,9 @@ static inline bool erofs_inode_is_data_compressed(unsigned int datamode)
#define EROFS_I_VERSION_BIT 0
#define EROFS_I_DATALAYOUT_BIT 1
+#define EROFS_I_ALL \
+ ((1 << (EROFS_I_DATALAYOUT_BIT + EROFS_I_DATALAYOUT_BITS)) - 1)
+
/* 32-byte reduced form of an ondisk inode */
struct erofs_inode_compact {
__le16 i_format; /* inode format hints */
diff --git a/fs/erofs/inode.c b/fs/erofs/inode.c
index 3e21c0e..0a94a52 100644
--- a/fs/erofs/inode.c
+++ b/fs/erofs/inode.c
@@ -44,6 +44,13 @@ static struct page *erofs_read_inode(struct inode *inode,
dic = page_address(page) + *ofs;
ifmt = le16_to_cpu(dic->i_format);
+ if (ifmt & ~EROFS_I_ALL) {
+ erofs_err(inode->i_sb, "unsupported i_format %u of nid %llu",
+ ifmt, vi->nid);
+ err = -EOPNOTSUPP;
+ goto err_out;
+ }
+
vi->datalayout = erofs_inode_datalayout(ifmt);
if (vi->datalayout >= EROFS_INODE_DATALAYOUT_MAX) {
erofs_err(inode->i_sb, "unsupported datalayout %u of nid %llu",
diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index 2d5b172f4..6094b2e 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -746,6 +746,12 @@ static __poll_t ep_scan_ready_list(struct eventpoll *ep,
*/
list_splice(&txlist, &ep->rdllist);
__pm_relax(ep->ws);
+
+ if (!list_empty(&ep->rdllist)) {
+ if (waitqueue_active(&ep->wq))
+ wake_up(&ep->wq);
+ }
+
write_unlock_irq(&ep->lock);
if (!ep_locked)
diff --git a/fs/exfat/balloc.c b/fs/exfat/balloc.c
index a987919..579c10f 100644
--- a/fs/exfat/balloc.c
+++ b/fs/exfat/balloc.c
@@ -141,10 +141,6 @@ void exfat_free_bitmap(struct exfat_sb_info *sbi)
kfree(sbi->vol_amap);
}
-/*
- * If the value of "clu" is 0, it means cluster 2 which is the first cluster of
- * the cluster heap.
- */
int exfat_set_bitmap(struct inode *inode, unsigned int clu)
{
int i, b;
@@ -162,10 +158,6 @@ int exfat_set_bitmap(struct inode *inode, unsigned int clu)
return 0;
}
-/*
- * If the value of "clu" is 0, it means cluster 2 which is the first cluster of
- * the cluster heap.
- */
void exfat_clear_bitmap(struct inode *inode, unsigned int clu)
{
int i, b;
@@ -186,8 +178,7 @@ void exfat_clear_bitmap(struct inode *inode, unsigned int clu)
int ret_discard;
ret_discard = sb_issue_discard(sb,
- exfat_cluster_to_sector(sbi, clu +
- EXFAT_RESERVED_CLUSTERS),
+ exfat_cluster_to_sector(sbi, clu),
(1 << sbi->sect_per_clus_bits), GFP_NOFS, 0);
if (ret_discard == -EOPNOTSUPP) {
diff --git a/fs/ext4/fast_commit.c b/fs/ext4/fast_commit.c
index 62e9e55..585ecbd 100644
--- a/fs/ext4/fast_commit.c
+++ b/fs/ext4/fast_commit.c
@@ -1093,8 +1093,10 @@ static int ext4_fc_perform_commit(journal_t *journal)
head.fc_tid = cpu_to_le32(
sbi->s_journal->j_running_transaction->t_tid);
if (!ext4_fc_add_tlv(sb, EXT4_FC_TAG_HEAD, sizeof(head),
- (u8 *)&head, &crc))
+ (u8 *)&head, &crc)) {
+ ret = -ENOSPC;
goto out;
+ }
}
spin_lock(&sbi->s_fc_lock);
@@ -1741,7 +1743,7 @@ static int ext4_fc_replay_add_range(struct super_block *sb,
}
/* Range is mapped and needs a state change */
- jbd_debug(1, "Converting from %d to %d %lld",
+ jbd_debug(1, "Converting from %ld to %d %lld",
map.m_flags & EXT4_MAP_UNWRITTEN,
ext4_ext_is_unwritten(ex), map.m_pblk);
ret = ext4_ext_replay_update_ex(inode, cur, map.m_len,
diff --git a/fs/ext4/file.c b/fs/ext4/file.c
index b692355..7b28d44 100644
--- a/fs/ext4/file.c
+++ b/fs/ext4/file.c
@@ -372,15 +372,32 @@ static ssize_t ext4_handle_inode_extension(struct inode *inode, loff_t offset,
static int ext4_dio_write_end_io(struct kiocb *iocb, ssize_t size,
int error, unsigned int flags)
{
- loff_t offset = iocb->ki_pos;
+ loff_t pos = iocb->ki_pos;
struct inode *inode = file_inode(iocb->ki_filp);
if (error)
return error;
- if (size && flags & IOMAP_DIO_UNWRITTEN)
- return ext4_convert_unwritten_extents(NULL, inode,
- offset, size);
+ if (size && flags & IOMAP_DIO_UNWRITTEN) {
+ error = ext4_convert_unwritten_extents(NULL, inode, pos, size);
+ if (error < 0)
+ return error;
+ }
+ /*
+ * If we are extending the file, we have to update i_size here before
+ * page cache gets invalidated in iomap_dio_rw(). Otherwise racing
+ * buffered reads could zero out too much from page cache pages. Update
+ * of on-disk size will happen later in ext4_dio_write_iter() where
+ * we have enough information to also perform orphan list handling etc.
+ * Note that we perform all extending writes synchronously under
+ * i_rwsem held exclusively so i_size update is safe here in that case.
+ * If the write was not extending, we cannot see pos > i_size here
+ * because operations reducing i_size like truncate wait for all
+ * outstanding DIO before updating i_size.
+ */
+ pos += size;
+ if (pos > i_size_read(inode))
+ i_size_write(inode, pos);
return 0;
}
diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
index b215c56..c92558e 100644
--- a/fs/ext4/ialloc.c
+++ b/fs/ext4/ialloc.c
@@ -1291,7 +1291,8 @@ struct inode *__ext4_new_inode(handle_t *handle, struct inode *dir,
ei->i_extra_isize = sbi->s_want_extra_isize;
ei->i_inline_off = 0;
- if (ext4_has_feature_inline_data(sb))
+ if (ext4_has_feature_inline_data(sb) &&
+ (!(ei->i_flags & EXT4_DAX_FL) || S_ISDIR(mode)))
ext4_set_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA);
ret = inode;
err = dquot_alloc_inode(inode);
@@ -1512,6 +1513,7 @@ int ext4_init_inode_table(struct super_block *sb, ext4_group_t group,
handle_t *handle;
ext4_fsblk_t blk;
int num, ret = 0, used_blks = 0;
+ unsigned long used_inos = 0;
/* This should not happen, but just to be sure check this */
if (sb_rdonly(sb)) {
@@ -1542,22 +1544,37 @@ int ext4_init_inode_table(struct super_block *sb, ext4_group_t group,
* used inodes so we need to skip blocks with used inodes in
* inode table.
*/
- if (!(gdp->bg_flags & cpu_to_le16(EXT4_BG_INODE_UNINIT)))
- used_blks = DIV_ROUND_UP((EXT4_INODES_PER_GROUP(sb) -
- ext4_itable_unused_count(sb, gdp)),
- sbi->s_inodes_per_block);
+ if (!(gdp->bg_flags & cpu_to_le16(EXT4_BG_INODE_UNINIT))) {
+ used_inos = EXT4_INODES_PER_GROUP(sb) -
+ ext4_itable_unused_count(sb, gdp);
+ used_blks = DIV_ROUND_UP(used_inos, sbi->s_inodes_per_block);
- if ((used_blks < 0) || (used_blks > sbi->s_itb_per_group) ||
- ((group == 0) && ((EXT4_INODES_PER_GROUP(sb) -
- ext4_itable_unused_count(sb, gdp)) <
- EXT4_FIRST_INO(sb)))) {
- ext4_error(sb, "Something is wrong with group %u: "
- "used itable blocks: %d; "
- "itable unused count: %u",
- group, used_blks,
- ext4_itable_unused_count(sb, gdp));
- ret = 1;
- goto err_out;
+ /* Bogus inode unused count? */
+ if (used_blks < 0 || used_blks > sbi->s_itb_per_group) {
+ ext4_error(sb, "Something is wrong with group %u: "
+ "used itable blocks: %d; "
+ "itable unused count: %u",
+ group, used_blks,
+ ext4_itable_unused_count(sb, gdp));
+ ret = 1;
+ goto err_out;
+ }
+
+ used_inos += group * EXT4_INODES_PER_GROUP(sb);
+ /*
+ * Are there some uninitialized inodes in the inode table
+ * before the first normal inode?
+ */
+ if ((used_blks != sbi->s_itb_per_group) &&
+ (used_inos < EXT4_FIRST_INO(sb))) {
+ ext4_error(sb, "Something is wrong with group %u: "
+ "itable unused count: %u; "
+ "itables initialized count: %ld",
+ group, ext4_itable_unused_count(sb, gdp),
+ used_inos);
+ ret = 1;
+ goto err_out;
+ }
}
blk = ext4_inode_table(sb, gdp) + used_blks;
diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
index 106bf14..cb54ea6 100644
--- a/fs/ext4/ioctl.c
+++ b/fs/ext4/ioctl.c
@@ -312,6 +312,12 @@ static void ext4_dax_dontcache(struct inode *inode, unsigned int flags)
static bool dax_compatible(struct inode *inode, unsigned int oldflags,
unsigned int flags)
{
+ /* Allow the DAX flag to be changed on inline directories */
+ if (S_ISDIR(inode->i_mode)) {
+ flags &= ~EXT4_INLINE_DATA_FL;
+ oldflags &= ~EXT4_INLINE_DATA_FL;
+ }
+
if (flags & EXT4_DAX_FL) {
if ((oldflags & EXT4_DAX_MUT_EXCL) ||
ext4_test_inode_state(inode,
diff --git a/fs/ext4/mmp.c b/fs/ext4/mmp.c
index 795c3ff..68fbeed 100644
--- a/fs/ext4/mmp.c
+++ b/fs/ext4/mmp.c
@@ -56,7 +56,7 @@ static int write_mmp_block(struct super_block *sb, struct buffer_head *bh)
wait_on_buffer(bh);
sb_end_write(sb);
if (unlikely(!buffer_uptodate(bh)))
- return 1;
+ return -EIO;
return 0;
}
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 7d02aef..a105ad1 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -3055,9 +3055,6 @@ static void ext4_orphan_cleanup(struct super_block *sb,
sb->s_flags &= ~SB_RDONLY;
}
#ifdef CONFIG_QUOTA
- /* Needed for iput() to work correctly and not trash data */
- sb->s_flags |= SB_ACTIVE;
-
/*
* Turn on quotas which were not enabled for read-only mounts if
* filesystem has quota feature, so that they are updated correctly.
@@ -5523,8 +5520,10 @@ static int ext4_commit_super(struct super_block *sb, int sync)
struct buffer_head *sbh = EXT4_SB(sb)->s_sbh;
int error = 0;
- if (!sbh || block_device_ejected(sb))
- return error;
+ if (!sbh)
+ return -EINVAL;
+ if (block_device_ejected(sb))
+ return -ENODEV;
/*
* If the file system is mounted read-only, don't update the
diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
index d3f407b..f94b130 100644
--- a/fs/f2fs/compress.c
+++ b/fs/f2fs/compress.c
@@ -123,19 +123,6 @@ static void f2fs_unlock_rpages(struct compress_ctx *cc, int len)
f2fs_drop_rpages(cc, len, true);
}
-static void f2fs_put_rpages_mapping(struct address_space *mapping,
- pgoff_t start, int len)
-{
- int i;
-
- for (i = 0; i < len; i++) {
- struct page *page = find_get_page(mapping, start + i);
-
- put_page(page);
- put_page(page);
- }
-}
-
static void f2fs_put_rpages_wbc(struct compress_ctx *cc,
struct writeback_control *wbc, bool redirty, int unlock)
{
@@ -164,13 +151,14 @@ int f2fs_init_compress_ctx(struct compress_ctx *cc)
return cc->rpages ? 0 : -ENOMEM;
}
-void f2fs_destroy_compress_ctx(struct compress_ctx *cc)
+void f2fs_destroy_compress_ctx(struct compress_ctx *cc, bool reuse)
{
page_array_free(cc->inode, cc->rpages, cc->cluster_size);
cc->rpages = NULL;
cc->nr_rpages = 0;
cc->nr_cpages = 0;
- cc->cluster_idx = NULL_CLUSTER;
+ if (!reuse)
+ cc->cluster_idx = NULL_CLUSTER;
}
void f2fs_compress_ctx_add_page(struct compress_ctx *cc, struct page *page)
@@ -986,7 +974,7 @@ static int prepare_compress_overwrite(struct compress_ctx *cc,
}
if (PageUptodate(page))
- unlock_page(page);
+ f2fs_put_page(page, 1);
else
f2fs_compress_ctx_add_page(cc, page);
}
@@ -996,33 +984,35 @@ static int prepare_compress_overwrite(struct compress_ctx *cc,
ret = f2fs_read_multi_pages(cc, &bio, cc->cluster_size,
&last_block_in_bio, false, true);
- f2fs_destroy_compress_ctx(cc);
+ f2fs_put_rpages(cc);
+ f2fs_destroy_compress_ctx(cc, true);
if (ret)
- goto release_pages;
+ goto out;
if (bio)
f2fs_submit_bio(sbi, bio, DATA);
ret = f2fs_init_compress_ctx(cc);
if (ret)
- goto release_pages;
+ goto out;
}
for (i = 0; i < cc->cluster_size; i++) {
f2fs_bug_on(sbi, cc->rpages[i]);
page = find_lock_page(mapping, start_idx + i);
- f2fs_bug_on(sbi, !page);
+ if (!page) {
+ /* page can be truncated */
+ goto release_and_retry;
+ }
f2fs_wait_on_page_writeback(page, DATA, true, true);
-
f2fs_compress_ctx_add_page(cc, page);
- f2fs_put_page(page, 0);
if (!PageUptodate(page)) {
+release_and_retry:
+ f2fs_put_rpages(cc);
f2fs_unlock_rpages(cc, i + 1);
- f2fs_put_rpages_mapping(mapping, start_idx,
- cc->cluster_size);
- f2fs_destroy_compress_ctx(cc);
+ f2fs_destroy_compress_ctx(cc, true);
goto retry;
}
}
@@ -1053,10 +1043,10 @@ static int prepare_compress_overwrite(struct compress_ctx *cc,
}
unlock_pages:
+ f2fs_put_rpages(cc);
f2fs_unlock_rpages(cc, i);
-release_pages:
- f2fs_put_rpages_mapping(mapping, start_idx, i);
- f2fs_destroy_compress_ctx(cc);
+ f2fs_destroy_compress_ctx(cc, true);
+out:
return ret;
}
@@ -1091,7 +1081,7 @@ bool f2fs_compress_write_end(struct inode *inode, void *fsdata,
set_cluster_dirty(&cc);
f2fs_put_rpages_wbc(&cc, NULL, false, 1);
- f2fs_destroy_compress_ctx(&cc);
+ f2fs_destroy_compress_ctx(&cc, false);
return first_index;
}
@@ -1310,7 +1300,7 @@ static int f2fs_write_compressed_pages(struct compress_ctx *cc,
f2fs_put_rpages(cc);
page_array_free(cc->inode, cc->cpages, cc->nr_cpages);
cc->cpages = NULL;
- f2fs_destroy_compress_ctx(cc);
+ f2fs_destroy_compress_ctx(cc, false);
return 0;
out_destroy_crypt:
@@ -1321,7 +1311,8 @@ static int f2fs_write_compressed_pages(struct compress_ctx *cc,
for (i = 0; i < cc->nr_cpages; i++) {
if (!cc->cpages[i])
continue;
- f2fs_put_page(cc->cpages[i], 1);
+ f2fs_compress_free_page(cc->cpages[i]);
+ cc->cpages[i] = NULL;
}
out_put_cic:
kmem_cache_free(cic_entry_slab, cic);
@@ -1471,7 +1462,7 @@ int f2fs_write_multi_pages(struct compress_ctx *cc,
err = f2fs_write_raw_pages(cc, submitted, wbc, io_type);
f2fs_put_rpages_wbc(cc, wbc, false, 0);
destroy_out:
- f2fs_destroy_compress_ctx(cc);
+ f2fs_destroy_compress_ctx(cc, false);
return err;
}
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 901bd1d..bdc0f3b2 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -2419,7 +2419,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
max_nr_pages,
&last_block_in_bio,
rac != NULL, false);
- f2fs_destroy_compress_ctx(&cc);
+ f2fs_destroy_compress_ctx(&cc, false);
if (ret)
goto set_error_page;
}
@@ -2464,7 +2464,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
max_nr_pages,
&last_block_in_bio,
rac != NULL, false);
- f2fs_destroy_compress_ctx(&cc);
+ f2fs_destroy_compress_ctx(&cc, false);
}
}
#endif
@@ -3168,7 +3168,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
}
}
if (f2fs_compressed_file(inode))
- f2fs_destroy_compress_ctx(&cc);
+ f2fs_destroy_compress_ctx(&cc, false);
#endif
if (retry) {
index = 0;
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index 699815e..69a390c 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -402,85 +402,6 @@ static inline bool __has_cursum_space(struct f2fs_journal *journal,
return size <= MAX_SIT_JENTRIES(journal);
}
-/*
- * f2fs-specific ioctl commands
- */
-#define F2FS_IOCTL_MAGIC 0xf5
-#define F2FS_IOC_START_ATOMIC_WRITE _IO(F2FS_IOCTL_MAGIC, 1)
-#define F2FS_IOC_COMMIT_ATOMIC_WRITE _IO(F2FS_IOCTL_MAGIC, 2)
-#define F2FS_IOC_START_VOLATILE_WRITE _IO(F2FS_IOCTL_MAGIC, 3)
-#define F2FS_IOC_RELEASE_VOLATILE_WRITE _IO(F2FS_IOCTL_MAGIC, 4)
-#define F2FS_IOC_ABORT_VOLATILE_WRITE _IO(F2FS_IOCTL_MAGIC, 5)
-#define F2FS_IOC_GARBAGE_COLLECT _IOW(F2FS_IOCTL_MAGIC, 6, __u32)
-#define F2FS_IOC_WRITE_CHECKPOINT _IO(F2FS_IOCTL_MAGIC, 7)
-#define F2FS_IOC_DEFRAGMENT _IOWR(F2FS_IOCTL_MAGIC, 8, \
- struct f2fs_defragment)
-#define F2FS_IOC_MOVE_RANGE _IOWR(F2FS_IOCTL_MAGIC, 9, \
- struct f2fs_move_range)
-#define F2FS_IOC_FLUSH_DEVICE _IOW(F2FS_IOCTL_MAGIC, 10, \
- struct f2fs_flush_device)
-#define F2FS_IOC_GARBAGE_COLLECT_RANGE _IOW(F2FS_IOCTL_MAGIC, 11, \
- struct f2fs_gc_range)
-#define F2FS_IOC_GET_FEATURES _IOR(F2FS_IOCTL_MAGIC, 12, __u32)
-#define F2FS_IOC_SET_PIN_FILE _IOW(F2FS_IOCTL_MAGIC, 13, __u32)
-#define F2FS_IOC_GET_PIN_FILE _IOR(F2FS_IOCTL_MAGIC, 14, __u32)
-#define F2FS_IOC_PRECACHE_EXTENTS _IO(F2FS_IOCTL_MAGIC, 15)
-#define F2FS_IOC_RESIZE_FS _IOW(F2FS_IOCTL_MAGIC, 16, __u64)
-#define F2FS_IOC_GET_COMPRESS_BLOCKS _IOR(F2FS_IOCTL_MAGIC, 17, __u64)
-#define F2FS_IOC_RELEASE_COMPRESS_BLOCKS \
- _IOR(F2FS_IOCTL_MAGIC, 18, __u64)
-#define F2FS_IOC_RESERVE_COMPRESS_BLOCKS \
- _IOR(F2FS_IOCTL_MAGIC, 19, __u64)
-#define F2FS_IOC_SEC_TRIM_FILE _IOW(F2FS_IOCTL_MAGIC, 20, \
- struct f2fs_sectrim_range)
-
-/*
- * should be same as XFS_IOC_GOINGDOWN.
- * Flags for going down operation used by FS_IOC_GOINGDOWN
- */
-#define F2FS_IOC_SHUTDOWN _IOR('X', 125, __u32) /* Shutdown */
-#define F2FS_GOING_DOWN_FULLSYNC 0x0 /* going down with full sync */
-#define F2FS_GOING_DOWN_METASYNC 0x1 /* going down with metadata */
-#define F2FS_GOING_DOWN_NOSYNC 0x2 /* going down */
-#define F2FS_GOING_DOWN_METAFLUSH 0x3 /* going down with meta flush */
-#define F2FS_GOING_DOWN_NEED_FSCK 0x4 /* going down to trigger fsck */
-
-/*
- * Flags used by F2FS_IOC_SEC_TRIM_FILE
- */
-#define F2FS_TRIM_FILE_DISCARD 0x1 /* send discard command */
-#define F2FS_TRIM_FILE_ZEROOUT 0x2 /* zero out */
-#define F2FS_TRIM_FILE_MASK 0x3
-
-struct f2fs_gc_range {
- u32 sync;
- u64 start;
- u64 len;
-};
-
-struct f2fs_defragment {
- u64 start;
- u64 len;
-};
-
-struct f2fs_move_range {
- u32 dst_fd; /* destination fd */
- u64 pos_in; /* start position in src_fd */
- u64 pos_out; /* start position in dst_fd */
- u64 len; /* size to move */
-};
-
-struct f2fs_flush_device {
- u32 dev_num; /* device number to flush */
- u32 segments; /* # of segments to flush */
-};
-
-struct f2fs_sectrim_range {
- u64 start;
- u64 len;
- u64 flags;
-};
-
/* for inline stuff */
#define DEF_INLINE_RESERVED_SIZE 1
static inline int get_extra_isize(struct inode *inode);
@@ -3361,6 +3282,7 @@ block_t f2fs_get_unusable_blocks(struct f2fs_sb_info *sbi);
int f2fs_disable_cp_again(struct f2fs_sb_info *sbi, block_t unusable);
void f2fs_release_discard_addrs(struct f2fs_sb_info *sbi);
int f2fs_npages_for_summary_flush(struct f2fs_sb_info *sbi, bool for_ra);
+bool f2fs_segment_has_free_slot(struct f2fs_sb_info *sbi, int segno);
void f2fs_init_inmem_curseg(struct f2fs_sb_info *sbi);
void f2fs_save_inmem_curseg(struct f2fs_sb_info *sbi);
void f2fs_restore_inmem_curseg(struct f2fs_sb_info *sbi);
@@ -3368,7 +3290,7 @@ void f2fs_get_new_segment(struct f2fs_sb_info *sbi,
unsigned int *newseg, bool new_sec, int dir);
void f2fs_allocate_segment_for_resize(struct f2fs_sb_info *sbi, int type,
unsigned int start, unsigned int end);
-void f2fs_allocate_new_segment(struct f2fs_sb_info *sbi, int type);
+void f2fs_allocate_new_section(struct f2fs_sb_info *sbi, int type);
void f2fs_allocate_new_segments(struct f2fs_sb_info *sbi);
int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range);
bool f2fs_exist_trim_candidates(struct f2fs_sb_info *sbi,
@@ -3528,7 +3450,7 @@ void f2fs_destroy_post_read_wq(struct f2fs_sb_info *sbi);
int f2fs_start_gc_thread(struct f2fs_sb_info *sbi);
void f2fs_stop_gc_thread(struct f2fs_sb_info *sbi);
block_t f2fs_start_bidx_of_node(unsigned int node_ofs, struct inode *inode);
-int f2fs_gc(struct f2fs_sb_info *sbi, bool sync, bool background,
+int f2fs_gc(struct f2fs_sb_info *sbi, bool sync, bool background, bool force,
unsigned int segno);
void f2fs_build_gc_manager(struct f2fs_sb_info *sbi);
int f2fs_resize_fs(struct f2fs_sb_info *sbi, __u64 block_count);
@@ -3934,7 +3856,7 @@ void f2fs_free_dic(struct decompress_io_ctx *dic);
void f2fs_decompress_end_io(struct page **rpages,
unsigned int cluster_size, bool err, bool verity);
int f2fs_init_compress_ctx(struct compress_ctx *cc);
-void f2fs_destroy_compress_ctx(struct compress_ctx *cc);
+void f2fs_destroy_compress_ctx(struct compress_ctx *cc, bool reuse);
void f2fs_init_compress_info(struct f2fs_sb_info *sbi);
int f2fs_init_page_array_cache(struct f2fs_sb_info *sbi);
void f2fs_destroy_page_array_cache(struct f2fs_sb_info *sbi);
diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
index 498e3aa..5c74b29 100644
--- a/fs/f2fs/file.c
+++ b/fs/f2fs/file.c
@@ -31,6 +31,7 @@
#include "gc.h"
#include "trace.h"
#include <trace/events/f2fs.h>
+#include <uapi/linux/f2fs.h>
static vm_fault_t f2fs_filemap_fault(struct vm_fault *vmf)
{
@@ -1615,9 +1616,10 @@ static int expand_inode_data(struct inode *inode, loff_t offset,
struct f2fs_map_blocks map = { .m_next_pgofs = NULL,
.m_next_extent = NULL, .m_seg_type = NO_CHECK_TYPE,
.m_may_create = true };
- pgoff_t pg_end;
+ pgoff_t pg_start, pg_end;
loff_t new_size = i_size_read(inode);
loff_t off_end;
+ block_t expanded = 0;
int err;
err = inode_newsize_ok(inode, (len + offset));
@@ -1630,11 +1632,12 @@ static int expand_inode_data(struct inode *inode, loff_t offset,
f2fs_balance_fs(sbi, true);
+ pg_start = ((unsigned long long)offset) >> PAGE_SHIFT;
pg_end = ((unsigned long long)offset + len) >> PAGE_SHIFT;
off_end = (offset + len) & (PAGE_SIZE - 1);
- map.m_lblk = ((unsigned long long)offset) >> PAGE_SHIFT;
- map.m_len = pg_end - map.m_lblk;
+ map.m_lblk = pg_start;
+ map.m_len = pg_end - pg_start;
if (off_end)
map.m_len++;
@@ -1642,19 +1645,15 @@ static int expand_inode_data(struct inode *inode, loff_t offset,
return 0;
if (f2fs_is_pinned_file(inode)) {
- block_t len = (map.m_len >> sbi->log_blocks_per_seg) <<
- sbi->log_blocks_per_seg;
- block_t done = 0;
+ block_t sec_blks = BLKS_PER_SEC(sbi);
+ block_t sec_len = roundup(map.m_len, sec_blks);
- if (map.m_len % sbi->blocks_per_seg)
- len += sbi->blocks_per_seg;
-
- map.m_len = sbi->blocks_per_seg;
+ map.m_len = sec_blks;
next_alloc:
if (has_not_enough_free_secs(sbi, 0,
GET_SEC_FROM_SEG(sbi, overprovision_segments(sbi)))) {
down_write(&sbi->gc_lock);
- err = f2fs_gc(sbi, true, false, NULL_SEGNO);
+ err = f2fs_gc(sbi, true, false, false, NULL_SEGNO);
if (err && err != -ENODATA && err != -EAGAIN)
goto out_err;
}
@@ -1662,7 +1661,7 @@ static int expand_inode_data(struct inode *inode, loff_t offset,
down_write(&sbi->pin_sem);
f2fs_lock_op(sbi);
- f2fs_allocate_new_segment(sbi, CURSEG_COLD_DATA_PINNED);
+ f2fs_allocate_new_section(sbi, CURSEG_COLD_DATA_PINNED);
f2fs_unlock_op(sbi);
map.m_seg_type = CURSEG_COLD_DATA_PINNED;
@@ -1670,24 +1669,25 @@ static int expand_inode_data(struct inode *inode, loff_t offset,
up_write(&sbi->pin_sem);
- done += map.m_len;
- len -= map.m_len;
+ expanded += map.m_len;
+ sec_len -= map.m_len;
map.m_lblk += map.m_len;
- if (!err && len)
+ if (!err && sec_len)
goto next_alloc;
- map.m_len = done;
+ map.m_len = expanded;
} else {
err = f2fs_map_blocks(inode, &map, 1, F2FS_GET_BLOCK_PRE_AIO);
+ expanded = map.m_len;
}
out_err:
if (err) {
pgoff_t last_off;
- if (!map.m_len)
+ if (!expanded)
return err;
- last_off = map.m_lblk + map.m_len - 1;
+ last_off = pg_start + expanded - 1;
/* update new size to the failed position */
new_size = (last_off == pg_end) ? offset + len :
@@ -2489,32 +2489,25 @@ static int f2fs_ioc_gc(struct file *filp, unsigned long arg)
down_write(&sbi->gc_lock);
}
- ret = f2fs_gc(sbi, sync, true, NULL_SEGNO);
+ ret = f2fs_gc(sbi, sync, true, false, NULL_SEGNO);
out:
mnt_drop_write_file(filp);
return ret;
}
-static int f2fs_ioc_gc_range(struct file *filp, unsigned long arg)
+static int __f2fs_ioc_gc_range(struct file *filp, struct f2fs_gc_range *range)
{
- struct inode *inode = file_inode(filp);
- struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
- struct f2fs_gc_range range;
+ struct f2fs_sb_info *sbi = F2FS_I_SB(file_inode(filp));
u64 end;
int ret;
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
-
- if (copy_from_user(&range, (struct f2fs_gc_range __user *)arg,
- sizeof(range)))
- return -EFAULT;
-
if (f2fs_readonly(sbi->sb))
return -EROFS;
- end = range.start + range.len;
- if (end < range.start || range.start < MAIN_BLKADDR(sbi) ||
+ end = range->start + range->len;
+ if (end < range->start || range->start < MAIN_BLKADDR(sbi) ||
end >= MAX_BLKADDR(sbi))
return -EINVAL;
@@ -2523,7 +2516,7 @@ static int f2fs_ioc_gc_range(struct file *filp, unsigned long arg)
return ret;
do_more:
- if (!range.sync) {
+ if (!range->sync) {
if (!down_write_trylock(&sbi->gc_lock)) {
ret = -EBUSY;
goto out;
@@ -2532,20 +2525,31 @@ static int f2fs_ioc_gc_range(struct file *filp, unsigned long arg)
down_write(&sbi->gc_lock);
}
- ret = f2fs_gc(sbi, range.sync, true, GET_SEGNO(sbi, range.start));
+ ret = f2fs_gc(sbi, range->sync, true, false,
+ GET_SEGNO(sbi, range->start));
if (ret) {
if (ret == -EBUSY)
ret = -EAGAIN;
goto out;
}
- range.start += BLKS_PER_SEC(sbi);
- if (range.start <= end)
+ range->start += BLKS_PER_SEC(sbi);
+ if (range->start <= end)
goto do_more;
out:
mnt_drop_write_file(filp);
return ret;
}
+static int f2fs_ioc_gc_range(struct file *filp, unsigned long arg)
+{
+ struct f2fs_gc_range range;
+
+ if (copy_from_user(&range, (struct f2fs_gc_range __user *)arg,
+ sizeof(range)))
+ return -EFAULT;
+ return __f2fs_ioc_gc_range(filp, &range);
+}
+
static int f2fs_ioc_write_checkpoint(struct file *filp, unsigned long arg)
{
struct inode *inode = file_inode(filp);
@@ -2882,9 +2886,9 @@ static int f2fs_move_file_range(struct file *file_in, loff_t pos_in,
return ret;
}
-static int f2fs_ioc_move_range(struct file *filp, unsigned long arg)
+static int __f2fs_ioc_move_range(struct file *filp,
+ struct f2fs_move_range *range)
{
- struct f2fs_move_range range;
struct fd dst;
int err;
@@ -2892,11 +2896,7 @@ static int f2fs_ioc_move_range(struct file *filp, unsigned long arg)
!(filp->f_mode & FMODE_WRITE))
return -EBADF;
- if (copy_from_user(&range, (struct f2fs_move_range __user *)arg,
- sizeof(range)))
- return -EFAULT;
-
- dst = fdget(range.dst_fd);
+ dst = fdget(range->dst_fd);
if (!dst.file)
return -EBADF;
@@ -2909,21 +2909,25 @@ static int f2fs_ioc_move_range(struct file *filp, unsigned long arg)
if (err)
goto err_out;
- err = f2fs_move_file_range(filp, range.pos_in, dst.file,
- range.pos_out, range.len);
+ err = f2fs_move_file_range(filp, range->pos_in, dst.file,
+ range->pos_out, range->len);
mnt_drop_write_file(filp);
- if (err)
- goto err_out;
-
- if (copy_to_user((struct f2fs_move_range __user *)arg,
- &range, sizeof(range)))
- err = -EFAULT;
err_out:
fdput(dst);
return err;
}
+static int f2fs_ioc_move_range(struct file *filp, unsigned long arg)
+{
+ struct f2fs_move_range range;
+
+ if (copy_from_user(&range, (struct f2fs_move_range __user *)arg,
+ sizeof(range)))
+ return -EFAULT;
+ return __f2fs_ioc_move_range(filp, &range);
+}
+
static int f2fs_ioc_flush_device(struct file *filp, unsigned long arg)
{
struct inode *inode = file_inode(filp);
@@ -2975,7 +2979,7 @@ static int f2fs_ioc_flush_device(struct file *filp, unsigned long arg)
sm->last_victim[GC_CB] = end_segno + 1;
sm->last_victim[GC_GREEDY] = end_segno + 1;
sm->last_victim[ALLOC_NEXT] = end_segno + 1;
- ret = f2fs_gc(sbi, true, true, start_segno);
+ ret = f2fs_gc(sbi, true, true, true, start_segno);
if (ret == -EAGAIN)
ret = 0;
else if (ret < 0)
@@ -3960,13 +3964,8 @@ static int f2fs_sec_trim_file(struct file *filp, unsigned long arg)
return ret;
}
-long f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+static long __f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
- if (unlikely(f2fs_cp_error(F2FS_I_SB(file_inode(filp)))))
- return -EIO;
- if (!f2fs_is_checkpoint_ready(F2FS_I_SB(file_inode(filp))))
- return -ENOSPC;
-
switch (cmd) {
case FS_IOC_GETFLAGS:
return f2fs_ioc_getflags(filp, arg);
@@ -4053,6 +4052,16 @@ long f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
}
}
+long f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+{
+ if (unlikely(f2fs_cp_error(F2FS_I_SB(file_inode(filp)))))
+ return -EIO;
+ if (!f2fs_is_checkpoint_ready(F2FS_I_SB(file_inode(filp))))
+ return -ENOSPC;
+
+ return __f2fs_ioctl(filp, cmd, arg);
+}
+
static ssize_t f2fs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
{
struct file *file = iocb->ki_filp;
@@ -4175,8 +4184,63 @@ static ssize_t f2fs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
}
#ifdef CONFIG_COMPAT
+struct compat_f2fs_gc_range {
+ u32 sync;
+ compat_u64 start;
+ compat_u64 len;
+};
+#define F2FS_IOC32_GARBAGE_COLLECT_RANGE _IOW(F2FS_IOCTL_MAGIC, 11,\
+ struct compat_f2fs_gc_range)
+
+static int f2fs_compat_ioc_gc_range(struct file *file, unsigned long arg)
+{
+ struct compat_f2fs_gc_range __user *urange;
+ struct f2fs_gc_range range;
+ int err;
+
+ urange = compat_ptr(arg);
+ err = get_user(range.sync, &urange->sync);
+ err |= get_user(range.start, &urange->start);
+ err |= get_user(range.len, &urange->len);
+ if (err)
+ return -EFAULT;
+
+ return __f2fs_ioc_gc_range(file, &range);
+}
+
+struct compat_f2fs_move_range {
+ u32 dst_fd;
+ compat_u64 pos_in;
+ compat_u64 pos_out;
+ compat_u64 len;
+};
+#define F2FS_IOC32_MOVE_RANGE _IOWR(F2FS_IOCTL_MAGIC, 9, \
+ struct compat_f2fs_move_range)
+
+static int f2fs_compat_ioc_move_range(struct file *file, unsigned long arg)
+{
+ struct compat_f2fs_move_range __user *urange;
+ struct f2fs_move_range range;
+ int err;
+
+ urange = compat_ptr(arg);
+ err = get_user(range.dst_fd, &urange->dst_fd);
+ err |= get_user(range.pos_in, &urange->pos_in);
+ err |= get_user(range.pos_out, &urange->pos_out);
+ err |= get_user(range.len, &urange->len);
+ if (err)
+ return -EFAULT;
+
+ return __f2fs_ioc_move_range(file, &range);
+}
+
long f2fs_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
+ if (unlikely(f2fs_cp_error(F2FS_I_SB(file_inode(file)))))
+ return -EIO;
+ if (!f2fs_is_checkpoint_ready(F2FS_I_SB(file_inode(file))))
+ return -ENOSPC;
+
switch (cmd) {
case FS_IOC32_GETFLAGS:
cmd = FS_IOC_GETFLAGS;
@@ -4187,6 +4251,10 @@ long f2fs_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
case FS_IOC32_GETVERSION:
cmd = FS_IOC_GETVERSION;
break;
+ case F2FS_IOC32_GARBAGE_COLLECT_RANGE:
+ return f2fs_compat_ioc_gc_range(file, arg);
+ case F2FS_IOC32_MOVE_RANGE:
+ return f2fs_compat_ioc_move_range(file, arg);
case F2FS_IOC_START_ATOMIC_WRITE:
case F2FS_IOC_COMMIT_ATOMIC_WRITE:
case F2FS_IOC_START_VOLATILE_WRITE:
@@ -4204,10 +4272,8 @@ long f2fs_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
case FS_IOC_GET_ENCRYPTION_KEY_STATUS:
case FS_IOC_GET_ENCRYPTION_NONCE:
case F2FS_IOC_GARBAGE_COLLECT:
- case F2FS_IOC_GARBAGE_COLLECT_RANGE:
case F2FS_IOC_WRITE_CHECKPOINT:
case F2FS_IOC_DEFRAGMENT:
- case F2FS_IOC_MOVE_RANGE:
case F2FS_IOC_FLUSH_DEVICE:
case F2FS_IOC_GET_FEATURES:
case FS_IOC_FSGETXATTR:
@@ -4228,7 +4294,7 @@ long f2fs_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
default:
return -ENOIOCTLCMD;
}
- return f2fs_ioctl(file, cmd, (unsigned long) compat_ptr(arg));
+ return __f2fs_ioctl(file, cmd, (unsigned long) compat_ptr(arg));
}
#endif
diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
index 05641a1..9b38cef 100644
--- a/fs/f2fs/gc.c
+++ b/fs/f2fs/gc.c
@@ -112,7 +112,7 @@ static int gc_thread_func(void *data)
sync_mode = F2FS_OPTION(sbi).bggc_mode == BGGC_MODE_SYNC;
/* if return value is not zero, no victim was selected */
- if (f2fs_gc(sbi, sync_mode, true, NULL_SEGNO))
+ if (f2fs_gc(sbi, sync_mode, true, false, NULL_SEGNO))
wait_ms = gc_th->no_gc_sleep_time;
trace_f2fs_background_gc(sbi->sb, wait_ms,
@@ -392,10 +392,6 @@ static void add_victim_entry(struct f2fs_sb_info *sbi,
if (p->gc_mode == GC_AT &&
get_valid_blocks(sbi, segno, true) == 0)
return;
-
- if (p->alloc_mode == AT_SSR &&
- get_seg_entry(sbi, segno)->ckpt_valid_blocks == 0)
- return;
}
for (i = 0; i < sbi->segs_per_sec; i++)
@@ -728,11 +724,27 @@ static int get_victim_by_default(struct f2fs_sb_info *sbi,
if (sec_usage_check(sbi, secno))
goto next;
+
/* Don't touch checkpointed data */
- if (unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED) &&
- get_ckpt_valid_blocks(sbi, segno) &&
- p.alloc_mode == LFS))
- goto next;
+ if (unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED))) {
+ if (p.alloc_mode == LFS) {
+ /*
+ * LFS is set to find source section during GC.
+ * The victim should have no checkpointed data.
+ */
+ if (get_ckpt_valid_blocks(sbi, segno, true))
+ goto next;
+ } else {
+ /*
+ * SSR | AT_SSR are set to find target segment
+ * for writes which can be full by checkpointed
+ * and newly written blocks.
+ */
+ if (!f2fs_segment_has_free_slot(sbi, segno))
+ goto next;
+ }
+ }
+
if (gc_type == BG_GC && test_bit(secno, dirty_i->victim_secmap))
goto next;
@@ -1356,7 +1368,8 @@ static int move_data_page(struct inode *inode, block_t bidx, int gc_type,
* the victim data block is ignored.
*/
static int gc_data_segment(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
- struct gc_inode_list *gc_list, unsigned int segno, int gc_type)
+ struct gc_inode_list *gc_list, unsigned int segno, int gc_type,
+ bool force_migrate)
{
struct super_block *sb = sbi->sb;
struct f2fs_summary *entry;
@@ -1385,8 +1398,8 @@ static int gc_data_segment(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
* race condition along with SSR block allocation.
*/
if ((gc_type == BG_GC && has_not_enough_free_secs(sbi, 0, 0)) ||
- get_valid_blocks(sbi, segno, true) ==
- BLKS_PER_SEC(sbi))
+ (!force_migrate && get_valid_blocks(sbi, segno, true) ==
+ BLKS_PER_SEC(sbi)))
return submitted;
if (check_valid_map(sbi, segno, off) == 0)
@@ -1521,7 +1534,8 @@ static int __get_victim(struct f2fs_sb_info *sbi, unsigned int *victim,
static int do_garbage_collect(struct f2fs_sb_info *sbi,
unsigned int start_segno,
- struct gc_inode_list *gc_list, int gc_type)
+ struct gc_inode_list *gc_list, int gc_type,
+ bool force_migrate)
{
struct page *sum_page;
struct f2fs_summary_block *sum;
@@ -1608,7 +1622,8 @@ static int do_garbage_collect(struct f2fs_sb_info *sbi,
gc_type);
else
submitted += gc_data_segment(sbi, sum->entries, gc_list,
- segno, gc_type);
+ segno, gc_type,
+ force_migrate);
stat_inc_seg_count(sbi, type, gc_type);
migrated++;
@@ -1636,7 +1651,7 @@ static int do_garbage_collect(struct f2fs_sb_info *sbi,
}
int f2fs_gc(struct f2fs_sb_info *sbi, bool sync,
- bool background, unsigned int segno)
+ bool background, bool force, unsigned int segno)
{
int gc_type = sync ? FG_GC : BG_GC;
int sec_freed = 0, seg_freed = 0, total_freed = 0;
@@ -1698,7 +1713,7 @@ int f2fs_gc(struct f2fs_sb_info *sbi, bool sync,
if (ret)
goto stop;
- seg_freed = do_garbage_collect(sbi, segno, &gc_list, gc_type);
+ seg_freed = do_garbage_collect(sbi, segno, &gc_list, gc_type, force);
if (gc_type == FG_GC &&
seg_freed == f2fs_usable_segs_in_sec(sbi, segno))
sec_freed++;
@@ -1837,7 +1852,7 @@ static int free_segment_range(struct f2fs_sb_info *sbi,
.iroot = RADIX_TREE_INIT(gc_list.iroot, GFP_NOFS),
};
- do_garbage_collect(sbi, segno, &gc_list, FG_GC);
+ do_garbage_collect(sbi, segno, &gc_list, FG_GC, true);
put_gc_inode(&gc_list);
if (!gc_only && get_valid_blocks(sbi, segno, true)) {
@@ -1976,7 +1991,20 @@ int f2fs_resize_fs(struct f2fs_sb_info *sbi, __u64 block_count)
/* stop CP to protect MAIN_SEC in free_segment_range */
f2fs_lock_op(sbi);
+
+ spin_lock(&sbi->stat_lock);
+ if (shrunk_blocks + valid_user_blocks(sbi) +
+ sbi->current_reserved_blocks + sbi->unusable_block_count +
+ F2FS_OPTION(sbi).root_reserved_blocks > sbi->user_block_count)
+ err = -ENOSPC;
+ spin_unlock(&sbi->stat_lock);
+
+ if (err)
+ goto out_unlock;
+
err = free_segment_range(sbi, secs, true);
+
+out_unlock:
f2fs_unlock_op(sbi);
up_write(&sbi->gc_lock);
if (err)
diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
index b9e37f0..1d7dafd 100644
--- a/fs/f2fs/inline.c
+++ b/fs/f2fs/inline.c
@@ -218,7 +218,8 @@ int f2fs_convert_inline_inode(struct inode *inode)
f2fs_put_page(page, 1);
- f2fs_balance_fs(sbi, dn.node_changed);
+ if (!err)
+ f2fs_balance_fs(sbi, dn.node_changed);
return err;
}
diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
index e65d732..597a145 100644
--- a/fs/f2fs/node.c
+++ b/fs/f2fs/node.c
@@ -2781,6 +2781,9 @@ static void remove_nats_in_journal(struct f2fs_sb_info *sbi)
struct f2fs_nat_entry raw_ne;
nid_t nid = le32_to_cpu(nid_in_journal(journal, i));
+ if (f2fs_check_nid_range(sbi, nid))
+ continue;
+
raw_ne = nat_in_journal(journal, i);
ne = __lookup_nat_cache(nm_i, nid);
diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
index f2a4265..d04b449 100644
--- a/fs/f2fs/segment.c
+++ b/fs/f2fs/segment.c
@@ -327,23 +327,27 @@ void f2fs_drop_inmem_pages(struct inode *inode)
struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
struct f2fs_inode_info *fi = F2FS_I(inode);
- while (!list_empty(&fi->inmem_pages)) {
+ do {
mutex_lock(&fi->inmem_lock);
+ if (list_empty(&fi->inmem_pages)) {
+ fi->i_gc_failures[GC_FAILURE_ATOMIC] = 0;
+
+ spin_lock(&sbi->inode_lock[ATOMIC_FILE]);
+ if (!list_empty(&fi->inmem_ilist))
+ list_del_init(&fi->inmem_ilist);
+ if (f2fs_is_atomic_file(inode)) {
+ clear_inode_flag(inode, FI_ATOMIC_FILE);
+ sbi->atomic_files--;
+ }
+ spin_unlock(&sbi->inode_lock[ATOMIC_FILE]);
+
+ mutex_unlock(&fi->inmem_lock);
+ break;
+ }
__revoke_inmem_pages(inode, &fi->inmem_pages,
true, false, true);
mutex_unlock(&fi->inmem_lock);
- }
-
- fi->i_gc_failures[GC_FAILURE_ATOMIC] = 0;
-
- spin_lock(&sbi->inode_lock[ATOMIC_FILE]);
- if (!list_empty(&fi->inmem_ilist))
- list_del_init(&fi->inmem_ilist);
- if (f2fs_is_atomic_file(inode)) {
- clear_inode_flag(inode, FI_ATOMIC_FILE);
- sbi->atomic_files--;
- }
- spin_unlock(&sbi->inode_lock[ATOMIC_FILE]);
+ } while (1);
}
void f2fs_drop_inmem_page(struct inode *inode, struct page *page)
@@ -507,7 +511,7 @@ void f2fs_balance_fs(struct f2fs_sb_info *sbi, bool need)
*/
if (has_not_enough_free_secs(sbi, 0, 0)) {
down_write(&sbi->gc_lock);
- f2fs_gc(sbi, false, false, NULL_SEGNO);
+ f2fs_gc(sbi, false, false, false, NULL_SEGNO);
}
}
@@ -871,7 +875,7 @@ static void locate_dirty_segment(struct f2fs_sb_info *sbi, unsigned int segno)
mutex_lock(&dirty_i->seglist_lock);
valid_blocks = get_valid_blocks(sbi, segno, false);
- ckpt_valid_blocks = get_ckpt_valid_blocks(sbi, segno);
+ ckpt_valid_blocks = get_ckpt_valid_blocks(sbi, segno, false);
if (valid_blocks == 0 && (!is_sbi_flag_set(sbi, SBI_CP_DISABLED) ||
ckpt_valid_blocks == usable_blocks)) {
@@ -956,7 +960,7 @@ static unsigned int get_free_segment(struct f2fs_sb_info *sbi)
for_each_set_bit(segno, dirty_i->dirty_segmap[DIRTY], MAIN_SEGS(sbi)) {
if (get_valid_blocks(sbi, segno, false))
continue;
- if (get_ckpt_valid_blocks(sbi, segno))
+ if (get_ckpt_valid_blocks(sbi, segno, false))
continue;
mutex_unlock(&dirty_i->seglist_lock);
return segno;
@@ -2646,6 +2650,23 @@ static void __refresh_next_blkoff(struct f2fs_sb_info *sbi,
seg->next_blkoff++;
}
+bool f2fs_segment_has_free_slot(struct f2fs_sb_info *sbi, int segno)
+{
+ struct seg_entry *se = get_seg_entry(sbi, segno);
+ int entries = SIT_VBLOCK_MAP_SIZE / sizeof(unsigned long);
+ unsigned long *target_map = SIT_I(sbi)->tmp_map;
+ unsigned long *ckpt_map = (unsigned long *)se->ckpt_valid_map;
+ unsigned long *cur_map = (unsigned long *)se->cur_valid_map;
+ int i, pos;
+
+ for (i = 0; i < entries; i++)
+ target_map[i] = ckpt_map[i] | cur_map[i];
+
+ pos = __find_rev_next_zero_bit(target_map, sbi->blocks_per_seg, 0);
+
+ return pos < sbi->blocks_per_seg;
+}
+
/*
* This function always allocates a used segment(from dirty seglist) by SSR
* manner, so it should recover the existing segment information of valid blocks
@@ -2903,7 +2924,8 @@ void f2fs_allocate_segment_for_resize(struct f2fs_sb_info *sbi, int type,
up_read(&SM_I(sbi)->curseg_lock);
}
-static void __allocate_new_segment(struct f2fs_sb_info *sbi, int type)
+static void __allocate_new_segment(struct f2fs_sb_info *sbi, int type,
+ bool new_sec)
{
struct curseg_info *curseg = CURSEG_I(sbi, type);
unsigned int old_segno;
@@ -2911,32 +2933,42 @@ static void __allocate_new_segment(struct f2fs_sb_info *sbi, int type)
if (!curseg->inited)
goto alloc;
- if (!curseg->next_blkoff &&
- !get_valid_blocks(sbi, curseg->segno, false) &&
- !get_ckpt_valid_blocks(sbi, curseg->segno))
- return;
+ if (curseg->next_blkoff ||
+ get_valid_blocks(sbi, curseg->segno, new_sec))
+ goto alloc;
+ if (!get_ckpt_valid_blocks(sbi, curseg->segno, new_sec))
+ return;
alloc:
old_segno = curseg->segno;
SIT_I(sbi)->s_ops->allocate_segment(sbi, type, true);
locate_dirty_segment(sbi, old_segno);
}
-void f2fs_allocate_new_segment(struct f2fs_sb_info *sbi, int type)
+static void __allocate_new_section(struct f2fs_sb_info *sbi, int type)
{
+ __allocate_new_segment(sbi, type, true);
+}
+
+void f2fs_allocate_new_section(struct f2fs_sb_info *sbi, int type)
+{
+ down_read(&SM_I(sbi)->curseg_lock);
down_write(&SIT_I(sbi)->sentry_lock);
- __allocate_new_segment(sbi, type);
+ __allocate_new_section(sbi, type);
up_write(&SIT_I(sbi)->sentry_lock);
+ up_read(&SM_I(sbi)->curseg_lock);
}
void f2fs_allocate_new_segments(struct f2fs_sb_info *sbi)
{
int i;
+ down_read(&SM_I(sbi)->curseg_lock);
down_write(&SIT_I(sbi)->sentry_lock);
for (i = CURSEG_HOT_DATA; i <= CURSEG_COLD_DATA; i++)
- __allocate_new_segment(sbi, i);
+ __allocate_new_segment(sbi, i, false);
up_write(&SIT_I(sbi)->sentry_lock);
+ up_read(&SM_I(sbi)->curseg_lock);
}
static const struct segment_allocation default_salloc_ops = {
@@ -3375,12 +3407,12 @@ void f2fs_allocate_data_block(struct f2fs_sb_info *sbi, struct page *page,
f2fs_inode_chksum_set(sbi, page);
}
- if (F2FS_IO_ALIGNED(sbi))
- fio->retry = false;
-
if (fio) {
struct f2fs_bio_info *io;
+ if (F2FS_IO_ALIGNED(sbi))
+ fio->retry = false;
+
INIT_LIST_HEAD(&fio->list);
fio->in_list = true;
io = sbi->write_io[fio->type] + fio->temp;
diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
index 229814b..1bf33fc 100644
--- a/fs/f2fs/segment.h
+++ b/fs/f2fs/segment.h
@@ -361,8 +361,20 @@ static inline unsigned int get_valid_blocks(struct f2fs_sb_info *sbi,
}
static inline unsigned int get_ckpt_valid_blocks(struct f2fs_sb_info *sbi,
- unsigned int segno)
+ unsigned int segno, bool use_section)
{
+ if (use_section && __is_large_section(sbi)) {
+ unsigned int start_segno = START_SEGNO(segno);
+ unsigned int blocks = 0;
+ int i;
+
+ for (i = 0; i < sbi->segs_per_sec; i++, start_segno++) {
+ struct seg_entry *se = get_seg_entry(sbi, start_segno);
+
+ blocks += se->ckpt_valid_blocks;
+ }
+ return blocks;
+ }
return get_seg_entry(sbi, segno)->ckpt_valid_blocks;
}
diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
index 4fffbef..abc469d 100644
--- a/fs/f2fs/super.c
+++ b/fs/f2fs/super.c
@@ -1723,7 +1723,7 @@ static int f2fs_disable_checkpoint(struct f2fs_sb_info *sbi)
while (!f2fs_time_over(sbi, DISABLE_TIME)) {
down_write(&sbi->gc_lock);
- err = f2fs_gc(sbi, true, false, NULL_SEGNO);
+ err = f2fs_gc(sbi, true, false, false, NULL_SEGNO);
if (err == -ENODATA) {
err = 0;
break;
diff --git a/fs/f2fs/verity.c b/fs/f2fs/verity.c
index 054ec85..15ba369 100644
--- a/fs/f2fs/verity.c
+++ b/fs/f2fs/verity.c
@@ -152,40 +152,73 @@ static int f2fs_end_enable_verity(struct file *filp, const void *desc,
size_t desc_size, u64 merkle_tree_size)
{
struct inode *inode = file_inode(filp);
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
u64 desc_pos = f2fs_verity_metadata_pos(inode) + merkle_tree_size;
struct fsverity_descriptor_location dloc = {
.version = cpu_to_le32(F2FS_VERIFY_VER),
.size = cpu_to_le32(desc_size),
.pos = cpu_to_le64(desc_pos),
};
- int err = 0;
+ int err = 0, err2 = 0;
- if (desc != NULL) {
- /* Succeeded; write the verity descriptor. */
- err = pagecache_write(inode, desc, desc_size, desc_pos);
+ /*
+ * If an error already occurred (which fs/verity/ signals by passing
+ * desc == NULL), then only clean-up is needed.
+ */
+ if (desc == NULL)
+ goto cleanup;
- /* Write all pages before clearing FI_VERITY_IN_PROGRESS. */
- if (!err)
- err = filemap_write_and_wait(inode->i_mapping);
- }
+ /* Append the verity descriptor. */
+ err = pagecache_write(inode, desc, desc_size, desc_pos);
+ if (err)
+ goto cleanup;
- /* If we failed, truncate anything we wrote past i_size. */
- if (desc == NULL || err)
- f2fs_truncate(inode);
+ /*
+ * Write all pages (both data and verity metadata). Note that this must
+ * happen before clearing FI_VERITY_IN_PROGRESS; otherwise pages beyond
+ * i_size won't be written properly. For crash consistency, this also
+ * must happen before the verity inode flag gets persisted.
+ */
+ err = filemap_write_and_wait(inode->i_mapping);
+ if (err)
+ goto cleanup;
+
+ /* Set the verity xattr. */
+ err = f2fs_setxattr(inode, F2FS_XATTR_INDEX_VERITY,
+ F2FS_XATTR_NAME_VERITY, &dloc, sizeof(dloc),
+ NULL, XATTR_CREATE);
+ if (err)
+ goto cleanup;
+
+ /* Finally, set the verity inode flag. */
+ file_set_verity(inode);
+ f2fs_set_inode_flags(inode);
+ f2fs_mark_inode_dirty_sync(inode, true);
clear_inode_flag(inode, FI_VERITY_IN_PROGRESS);
+ return 0;
- if (desc != NULL && !err) {
- err = f2fs_setxattr(inode, F2FS_XATTR_INDEX_VERITY,
- F2FS_XATTR_NAME_VERITY, &dloc, sizeof(dloc),
- NULL, XATTR_CREATE);
- if (!err) {
- file_set_verity(inode);
- f2fs_set_inode_flags(inode);
- f2fs_mark_inode_dirty_sync(inode, true);
- }
+cleanup:
+ /*
+ * Verity failed to be enabled, so clean up by truncating any verity
+ * metadata that was written beyond i_size (both from cache and from
+ * disk) and clearing FI_VERITY_IN_PROGRESS.
+ *
+ * Taking i_gc_rwsem[WRITE] is needed to stop f2fs garbage collection
+ * from re-instantiating cached pages we are truncating (since unlike
+ * normal file accesses, garbage collection isn't limited by i_size).
+ */
+ down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+ truncate_inode_pages(inode->i_mapping, inode->i_size);
+ err2 = f2fs_truncate(inode);
+ if (err2) {
+ f2fs_err(sbi, "Truncating verity metadata failed (errno=%d)",
+ err2);
+ set_sbi_flag(sbi, SBI_NEED_FSCK);
}
- return err;
+ up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+ clear_inode_flag(inode, FI_VERITY_IN_PROGRESS);
+ return err ?: err2;
}
static int f2fs_get_verity_descriptor(struct inode *inode, void *buf,
diff --git a/fs/fuse/cuse.c b/fs/fuse/cuse.c
index 4508226..a37528b 100644
--- a/fs/fuse/cuse.c
+++ b/fs/fuse/cuse.c
@@ -627,6 +627,8 @@ static int __init cuse_init(void)
cuse_channel_fops.owner = THIS_MODULE;
cuse_channel_fops.open = cuse_channel_open;
cuse_channel_fops.release = cuse_channel_release;
+ /* CUSE is not prepared for FUSE_DEV_IOC_CLONE */
+ cuse_channel_fops.unlocked_ioctl = NULL;
cuse_class = class_create(THIS_MODULE, "cuse");
if (IS_ERR(cuse_class))
diff --git a/fs/fuse/file.c b/fs/fuse/file.c
index 8b30600..8de9c24 100644
--- a/fs/fuse/file.c
+++ b/fs/fuse/file.c
@@ -1093,6 +1093,7 @@ static ssize_t fuse_send_write_pages(struct fuse_io_args *ia,
struct fuse_file *ff = file->private_data;
struct fuse_mount *fm = ff->fm;
unsigned int offset, i;
+ bool short_write;
int err;
for (i = 0; i < ap->num_pages; i++)
@@ -1105,32 +1106,38 @@ static ssize_t fuse_send_write_pages(struct fuse_io_args *ia,
if (!err && ia->write.out.size > count)
err = -EIO;
+ short_write = ia->write.out.size < count;
offset = ap->descs[0].offset;
count = ia->write.out.size;
for (i = 0; i < ap->num_pages; i++) {
struct page *page = ap->pages[i];
- if (!err && !offset && count >= PAGE_SIZE)
- SetPageUptodate(page);
-
- if (count > PAGE_SIZE - offset)
- count -= PAGE_SIZE - offset;
- else
- count = 0;
- offset = 0;
-
- unlock_page(page);
+ if (err) {
+ ClearPageUptodate(page);
+ } else {
+ if (count >= PAGE_SIZE - offset)
+ count -= PAGE_SIZE - offset;
+ else {
+ if (short_write)
+ ClearPageUptodate(page);
+ count = 0;
+ }
+ offset = 0;
+ }
+ if (ia->write.page_locked && (i == ap->num_pages - 1))
+ unlock_page(page);
put_page(page);
}
return err;
}
-static ssize_t fuse_fill_write_pages(struct fuse_args_pages *ap,
+static ssize_t fuse_fill_write_pages(struct fuse_io_args *ia,
struct address_space *mapping,
struct iov_iter *ii, loff_t pos,
unsigned int max_pages)
{
+ struct fuse_args_pages *ap = &ia->ap;
struct fuse_conn *fc = get_fuse_conn(mapping->host);
unsigned offset = pos & (PAGE_SIZE - 1);
size_t count = 0;
@@ -1183,6 +1190,16 @@ static ssize_t fuse_fill_write_pages(struct fuse_args_pages *ap,
if (offset == PAGE_SIZE)
offset = 0;
+ /* If we copied full page, mark it uptodate */
+ if (tmp == PAGE_SIZE)
+ SetPageUptodate(page);
+
+ if (PageUptodate(page)) {
+ unlock_page(page);
+ } else {
+ ia->write.page_locked = true;
+ break;
+ }
if (!fc->big_writes)
break;
} while (iov_iter_count(ii) && count < fc->max_write &&
@@ -1226,7 +1243,7 @@ static ssize_t fuse_perform_write(struct kiocb *iocb,
break;
}
- count = fuse_fill_write_pages(ap, mapping, ii, pos, nr_pages);
+ count = fuse_fill_write_pages(&ia, mapping, ii, pos, nr_pages);
if (count <= 0) {
err = count;
} else {
@@ -1744,8 +1761,17 @@ static void fuse_writepage_end(struct fuse_mount *fm, struct fuse_args *args,
container_of(args, typeof(*wpa), ia.ap.args);
struct inode *inode = wpa->inode;
struct fuse_inode *fi = get_fuse_inode(inode);
+ struct fuse_conn *fc = get_fuse_conn(inode);
mapping_set_error(inode->i_mapping, error);
+ /*
+ * A writeback finished and this might have updated mtime/ctime on
+ * server making local mtime/ctime stale. Hence invalidate attrs.
+ * Do this only if writeback_cache is not enabled. If writeback_cache
+ * is enabled, we trust local ctime/mtime.
+ */
+ if (!fc->writeback_cache)
+ fuse_invalidate_attr(inode);
spin_lock(&fi->lock);
rb_erase(&wpa->writepages_entry, &fi->writepages);
while (wpa->next) {
diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
index d64aee0..8150621 100644
--- a/fs/fuse/fuse_i.h
+++ b/fs/fuse/fuse_i.h
@@ -911,6 +911,7 @@ struct fuse_io_args {
struct {
struct fuse_write_in in;
struct fuse_write_out out;
+ bool page_locked;
} write;
};
struct fuse_args_pages ap;
diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c
index 3d83c9e..b9cfb11 100644
--- a/fs/fuse/virtio_fs.c
+++ b/fs/fuse/virtio_fs.c
@@ -896,6 +896,7 @@ static int virtio_fs_probe(struct virtio_device *vdev)
out_vqs:
vdev->config->reset(vdev);
virtio_fs_cleanup_vqs(vdev, fs);
+ kfree(fs->vqs);
out:
vdev->priv = NULL;
@@ -1456,8 +1457,7 @@ static int virtio_fs_get_tree(struct fs_context *fsc)
return -ENOMEM;
}
- fuse_conn_init(fc, fm, get_user_ns(current_user_ns()),
- &virtio_fs_fiq_ops, fs);
+ fuse_conn_init(fc, fm, fsc->user_ns, &virtio_fs_fiq_ops, fs);
fc->release = fuse_free_conn;
fc->delete_stale = true;
fc->auto_submounts = true;
diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
index ddd40c9..077dc8c 100644
--- a/fs/gfs2/super.c
+++ b/fs/gfs2/super.c
@@ -169,8 +169,10 @@ int gfs2_make_fs_rw(struct gfs2_sbd *sdp)
int error;
error = init_threads(sdp);
- if (error)
+ if (error) {
+ gfs2_withdraw_delayed(sdp);
return error;
+ }
j_gl->gl_ops->go_inval(j_gl, DIO_METADATA);
if (gfs2_withdrawn(sdp)) {
@@ -767,11 +769,13 @@ void gfs2_freeze_func(struct work_struct *work)
static int gfs2_freeze(struct super_block *sb)
{
struct gfs2_sbd *sdp = sb->s_fs_info;
- int error = 0;
+ int error;
mutex_lock(&sdp->sd_freeze_mutex);
- if (atomic_read(&sdp->sd_freeze_state) != SFS_UNFROZEN)
+ if (atomic_read(&sdp->sd_freeze_state) != SFS_UNFROZEN) {
+ error = -EBUSY;
goto out;
+ }
for (;;) {
if (gfs2_withdrawn(sdp)) {
@@ -812,10 +816,10 @@ static int gfs2_unfreeze(struct super_block *sb)
struct gfs2_sbd *sdp = sb->s_fs_info;
mutex_lock(&sdp->sd_freeze_mutex);
- if (atomic_read(&sdp->sd_freeze_state) != SFS_FROZEN ||
+ if (atomic_read(&sdp->sd_freeze_state) != SFS_FROZEN ||
!gfs2_holder_initialized(&sdp->sd_freeze_gh)) {
mutex_unlock(&sdp->sd_freeze_mutex);
- return 0;
+ return -EINVAL;
}
gfs2_freeze_unlock(&sdp->sd_freeze_gh);
diff --git a/fs/hfsplus/extents.c b/fs/hfsplus/extents.c
index a930ddd..7054a54 100644
--- a/fs/hfsplus/extents.c
+++ b/fs/hfsplus/extents.c
@@ -598,13 +598,15 @@ void hfsplus_file_truncate(struct inode *inode)
res = __hfsplus_ext_cache_extent(&fd, inode, alloc_cnt);
if (res)
break;
- hfs_brec_remove(&fd);
- mutex_unlock(&fd.tree->tree_lock);
start = hip->cached_start;
+ if (blk_cnt <= start)
+ hfs_brec_remove(&fd);
+ mutex_unlock(&fd.tree->tree_lock);
hfsplus_free_extents(sb, hip->cached_extents,
alloc_cnt - start, alloc_cnt - blk_cnt);
hfsplus_dump_extent(hip->cached_extents);
+ mutex_lock(&fd.tree->tree_lock);
if (blk_cnt > start) {
hip->extent_state |= HFSPLUS_EXT_DIRTY;
break;
@@ -612,7 +614,6 @@ void hfsplus_file_truncate(struct inode *inode)
alloc_cnt = start;
hip->cached_start = hip->cached_blocks = 0;
hip->extent_state &= ~(HFSPLUS_EXT_DIRTY | HFSPLUS_EXT_NEW);
- mutex_lock(&fd.tree->tree_lock);
}
hfs_find_exit(&fd);
diff --git a/fs/hostfs/hostfs_kern.c b/fs/hostfs/hostfs_kern.c
index c070c0d..d4e3602 100644
--- a/fs/hostfs/hostfs_kern.c
+++ b/fs/hostfs/hostfs_kern.c
@@ -142,7 +142,7 @@ static char *follow_link(char *link)
char *name, *resolved, *end;
int n;
- name = __getname();
+ name = kmalloc(PATH_MAX, GFP_KERNEL);
if (!name) {
n = -ENOMEM;
goto out_free;
@@ -171,12 +171,11 @@ static char *follow_link(char *link)
goto out_free;
}
- __putname(name);
- kfree(link);
+ kfree(name);
return resolved;
out_free:
- __putname(name);
+ kfree(name);
return ERR_PTR(n);
}
diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 21c20fd..b7c24d1 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -131,6 +131,7 @@ static void huge_pagevec_release(struct pagevec *pvec)
static int hugetlbfs_file_mmap(struct file *file, struct vm_area_struct *vma)
{
struct inode *inode = file_inode(file);
+ struct hugetlbfs_inode_info *info = HUGETLBFS_I(inode);
loff_t len, vma_len;
int ret;
struct hstate *h = hstate_file(file);
@@ -146,6 +147,10 @@ static int hugetlbfs_file_mmap(struct file *file, struct vm_area_struct *vma)
vma->vm_flags |= VM_HUGETLB | VM_DONTEXPAND;
vma->vm_ops = &hugetlb_vm_ops;
+ ret = seal_check_future_write(info->seals, vma);
+ if (ret)
+ return ret;
+
/*
* page based offset in vm_pgoff could be sufficiently large to
* overflow a loff_t when converted to byte offset. This can
diff --git a/fs/io_uring.c b/fs/io_uring.c
index c5b2fd0..f2ed92d 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -222,7 +222,7 @@ struct fixed_file_data {
struct io_buffer {
struct list_head list;
__u64 addr;
- __s32 len;
+ __u32 len;
__u16 bid;
};
@@ -527,7 +527,7 @@ struct io_splice {
struct io_provide_buf {
struct file *file;
__u64 addr;
- __s32 len;
+ __u32 len;
__u32 bgid;
__u16 nbufs;
__u16 bid;
@@ -1439,7 +1439,7 @@ static void io_prep_async_work(struct io_kiocb *req)
if (req->flags & REQ_F_ISREG) {
if (def->hash_reg_file || (ctx->flags & IORING_SETUP_IOPOLL))
io_wq_hash_work(&req->work, file_inode(req->file));
- } else {
+ } else if (!req->file || !S_ISBLK(file_inode(req->file)->i_mode)) {
if (def->unbound_nonreg_file)
req->work.flags |= IO_WQ_WORK_UNBOUND;
}
@@ -1489,7 +1489,7 @@ static void io_queue_async_work(struct io_kiocb *req)
io_queue_linked_timeout(link);
}
-static void io_kill_timeout(struct io_kiocb *req)
+static void io_kill_timeout(struct io_kiocb *req, int status)
{
struct io_timeout_data *io = req->async_data;
int ret;
@@ -1499,7 +1499,7 @@ static void io_kill_timeout(struct io_kiocb *req)
atomic_set(&req->ctx->cq_timeouts,
atomic_read(&req->ctx->cq_timeouts) + 1);
list_del_init(&req->timeout.list);
- io_cqring_fill_event(req, 0);
+ io_cqring_fill_event(req, status);
io_put_req_deferred(req, 1);
}
}
@@ -1516,7 +1516,7 @@ static bool io_kill_timeouts(struct io_ring_ctx *ctx, struct task_struct *tsk,
spin_lock_irq(&ctx->completion_lock);
list_for_each_entry_safe(req, tmp, &ctx->timeout_list, timeout.list) {
if (io_match_task(req, tsk, files)) {
- io_kill_timeout(req);
+ io_kill_timeout(req, -ECANCELED);
canceled++;
}
}
@@ -1568,7 +1568,7 @@ static void io_flush_timeouts(struct io_ring_ctx *ctx)
break;
list_del_init(&req->timeout.list);
- io_kill_timeout(req);
+ io_kill_timeout(req, 0);
} while (!list_empty(&ctx->timeout_list));
ctx->cq_last_tm_flush = seq;
@@ -3996,7 +3996,7 @@ static int io_remove_buffers(struct io_kiocb *req, bool force_nonblock,
static int io_provide_buffers_prep(struct io_kiocb *req,
const struct io_uring_sqe *sqe)
{
- unsigned long size;
+ unsigned long size, tmp_check;
struct io_provide_buf *p = &req->pbuf;
u64 tmp;
@@ -4010,6 +4010,12 @@ static int io_provide_buffers_prep(struct io_kiocb *req,
p->addr = READ_ONCE(sqe->addr);
p->len = READ_ONCE(sqe->len);
+ if (check_mul_overflow((unsigned long)p->len, (unsigned long)p->nbufs,
+ &size))
+ return -EOVERFLOW;
+ if (check_add_overflow((unsigned long)p->addr, size, &tmp_check))
+ return -EOVERFLOW;
+
size = (unsigned long)p->len * p->nbufs;
if (!access_ok(u64_to_user_ptr(p->addr), size))
return -EFAULT;
@@ -4034,7 +4040,7 @@ static int io_add_buffers(struct io_provide_buf *pbuf, struct io_buffer **head)
break;
buf->addr = addr;
- buf->len = pbuf->len;
+ buf->len = min_t(__u32, pbuf->len, MAX_RW_COUNT);
buf->bid = bid;
addr += pbuf->len;
bid++;
diff --git a/fs/jbd2/recovery.c b/fs/jbd2/recovery.c
index dc0694f..1e07dfac 100644
--- a/fs/jbd2/recovery.c
+++ b/fs/jbd2/recovery.c
@@ -245,15 +245,14 @@ static int fc_do_one_pass(journal_t *journal,
return 0;
while (next_fc_block <= journal->j_fc_last) {
- jbd_debug(3, "Fast commit replay: next block %ld",
+ jbd_debug(3, "Fast commit replay: next block %ld\n",
next_fc_block);
err = jread(&bh, journal, next_fc_block);
if (err) {
- jbd_debug(3, "Fast commit replay: read error");
+ jbd_debug(3, "Fast commit replay: read error\n");
break;
}
- jbd_debug(3, "Processing fast commit blk with seq %d");
err = journal->j_fc_replay_callback(journal, bh, pass,
next_fc_block - journal->j_fc_first,
expected_commit_id);
diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
index 9396666..e8fc45f 100644
--- a/fs/jbd2/transaction.c
+++ b/fs/jbd2/transaction.c
@@ -349,7 +349,12 @@ static int start_this_handle(journal_t *journal, handle_t *handle,
}
alloc_transaction:
- if (!journal->j_running_transaction) {
+ /*
+ * This check is racy but it is just an optimization of allocating new
+ * transaction early if there are high chances we'll need it. If we
+ * guess wrong, we'll retry or free unused transaction.
+ */
+ if (!data_race(journal->j_running_transaction)) {
/*
* If __GFP_FS is not present, then we may be being called from
* inside the fs writeback layer, so we MUST NOT fail.
@@ -1474,8 +1479,8 @@ int jbd2_journal_dirty_metadata(handle_t *handle, struct buffer_head *bh)
* crucial to catch bugs so let's do a reliable check until the
* lockless handling is fully proven.
*/
- if (jh->b_transaction != transaction &&
- jh->b_next_transaction != transaction) {
+ if (data_race(jh->b_transaction != transaction &&
+ jh->b_next_transaction != transaction)) {
spin_lock(&jh->b_state_lock);
J_ASSERT_JH(jh, jh->b_transaction == transaction ||
jh->b_next_transaction == transaction);
@@ -1483,8 +1488,8 @@ int jbd2_journal_dirty_metadata(handle_t *handle, struct buffer_head *bh)
}
if (jh->b_modified == 1) {
/* If it's in our transaction it must be in BJ_Metadata list. */
- if (jh->b_transaction == transaction &&
- jh->b_jlist != BJ_Metadata) {
+ if (data_race(jh->b_transaction == transaction &&
+ jh->b_jlist != BJ_Metadata)) {
spin_lock(&jh->b_state_lock);
if (jh->b_transaction == transaction &&
jh->b_jlist != BJ_Metadata)
diff --git a/fs/jffs2/compr_rtime.c b/fs/jffs2/compr_rtime.c
index 406d9cc..79e771a 100644
--- a/fs/jffs2/compr_rtime.c
+++ b/fs/jffs2/compr_rtime.c
@@ -37,6 +37,9 @@ static int jffs2_rtime_compress(unsigned char *data_in,
int outpos = 0;
int pos=0;
+ if (*dstlen <= 3)
+ return -1;
+
memset(positions,0,sizeof(positions));
while (pos < (*sourcelen) && outpos <= (*dstlen)-2) {
diff --git a/fs/jffs2/file.c b/fs/jffs2/file.c
index f8fb89b..4fc8cd6 100644
--- a/fs/jffs2/file.c
+++ b/fs/jffs2/file.c
@@ -57,6 +57,7 @@ const struct file_operations jffs2_file_operations =
.mmap = generic_file_readonly_mmap,
.fsync = jffs2_fsync,
.splice_read = generic_file_splice_read,
+ .splice_write = iter_file_splice_write,
};
/* jffs2_file_inode_operations */
diff --git a/fs/jffs2/scan.c b/fs/jffs2/scan.c
index db72a9d..b676056 100644
--- a/fs/jffs2/scan.c
+++ b/fs/jffs2/scan.c
@@ -1079,7 +1079,7 @@ static int jffs2_scan_dirent_node(struct jffs2_sb_info *c, struct jffs2_eraseblo
memcpy(&fd->name, rd->name, checkedlen);
fd->name[checkedlen] = 0;
- crc = crc32(0, fd->name, rd->nsize);
+ crc = crc32(0, fd->name, checkedlen);
if (crc != je32_to_cpu(rd->name_crc)) {
pr_notice("%s(): Name CRC failed on node at 0x%08x: Read 0x%08x, calculated 0x%08x\n",
__func__, ofs, je32_to_cpu(rd->name_crc), crc);
diff --git a/fs/namei.c b/fs/namei.c
index 7af66d5..4c9d0c36 100644
--- a/fs/namei.c
+++ b/fs/namei.c
@@ -2328,16 +2328,16 @@ static int path_lookupat(struct nameidata *nd, unsigned flags, struct path *path
while (!(err = link_path_walk(s, nd)) &&
(s = lookup_last(nd)) != NULL)
;
+ if (!err && unlikely(nd->flags & LOOKUP_MOUNTPOINT)) {
+ err = handle_lookup_down(nd);
+ nd->flags &= ~LOOKUP_JUMPED; // no d_weak_revalidate(), please...
+ }
if (!err)
err = complete_walk(nd);
if (!err && nd->flags & LOOKUP_DIRECTORY)
if (!d_can_lookup(nd->path.dentry))
err = -ENOTDIR;
- if (!err && unlikely(nd->flags & LOOKUP_MOUNTPOINT)) {
- err = handle_lookup_down(nd);
- nd->flags &= ~LOOKUP_JUMPED; // no d_weak_revalidate(), please...
- }
if (!err) {
*path = nd->path;
nd->path.mnt = NULL;
diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
index e61dbc9..be546ec 100644
--- a/fs/nfs/callback_proc.c
+++ b/fs/nfs/callback_proc.c
@@ -132,12 +132,12 @@ static struct inode *nfs_layout_find_inode_by_stateid(struct nfs_client *clp,
list_for_each_entry_rcu(lo, &server->layouts, plh_layouts) {
if (!pnfs_layout_is_valid(lo))
continue;
- if (stateid != NULL &&
- !nfs4_stateid_match_other(stateid, &lo->plh_stateid))
+ if (!nfs4_stateid_match_other(stateid, &lo->plh_stateid))
continue;
- if (!nfs_sb_active(server->super))
- continue;
- inode = igrab(lo->plh_inode);
+ if (nfs_sb_active(server->super))
+ inode = igrab(lo->plh_inode);
+ else
+ inode = ERR_PTR(-EAGAIN);
rcu_read_unlock();
if (inode)
return inode;
@@ -171,9 +171,10 @@ static struct inode *nfs_layout_find_inode_by_fh(struct nfs_client *clp,
continue;
if (nfsi->layout != lo)
continue;
- if (!nfs_sb_active(server->super))
- continue;
- inode = igrab(lo->plh_inode);
+ if (nfs_sb_active(server->super))
+ inode = igrab(lo->plh_inode);
+ else
+ inode = ERR_PTR(-EAGAIN);
rcu_read_unlock();
if (inode)
return inode;
diff --git a/fs/nfs/filelayout/filelayout.c b/fs/nfs/filelayout/filelayout.c
index 7f5aa04..ae5ed3a 100644
--- a/fs/nfs/filelayout/filelayout.c
+++ b/fs/nfs/filelayout/filelayout.c
@@ -718,7 +718,7 @@ filelayout_decode_layout(struct pnfs_layout_hdr *flo,
if (unlikely(!p))
goto out_err;
fl->fh_array[i]->size = be32_to_cpup(p++);
- if (sizeof(struct nfs_fh) < fl->fh_array[i]->size) {
+ if (fl->fh_array[i]->size > NFS_MAXFHSIZE) {
printk(KERN_ERR "NFS: Too big fh %d received %d\n",
i, fl->fh_array[i]->size);
goto out_err;
diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c
index fd0eda3..a8a0208 100644
--- a/fs/nfs/flexfilelayout/flexfilelayout.c
+++ b/fs/nfs/flexfilelayout/flexfilelayout.c
@@ -106,7 +106,7 @@ static int decode_nfs_fh(struct xdr_stream *xdr, struct nfs_fh *fh)
if (unlikely(!p))
return -ENOBUFS;
fh->size = be32_to_cpup(p++);
- if (fh->size > sizeof(struct nfs_fh)) {
+ if (fh->size > NFS_MAXFHSIZE) {
printk(KERN_ERR "NFS flexfiles: Too big fh received %d\n",
fh->size);
return -EOVERFLOW;
diff --git a/fs/nfs/fs_context.c b/fs/nfs/fs_context.c
index 29ec8b0..05b39e8 100644
--- a/fs/nfs/fs_context.c
+++ b/fs/nfs/fs_context.c
@@ -938,6 +938,15 @@ static int nfs23_parse_monolithic(struct fs_context *fc,
sizeof(mntfh->data) - mntfh->size);
/*
+ * for proto == XPRT_TRANSPORT_UDP, which is what uses
+ * to_exponential, implying shift: limit the shift value
+ * to BITS_PER_LONG (majortimeo is unsigned long)
+ */
+ if (!(data->flags & NFS_MOUNT_TCP)) /* this will be UDP */
+ if (data->retrans >= 64) /* shift value is too large */
+ goto out_invalid_data;
+
+ /*
* Translate to nfs_fs_context, which nfs_fill_super
* can deal with.
*/
@@ -1037,6 +1046,9 @@ static int nfs23_parse_monolithic(struct fs_context *fc,
out_invalid_fh:
return nfs_invalf(fc, "NFS: invalid root filehandle");
+
+out_invalid_data:
+ return nfs_invalf(fc, "NFS: invalid binary mount data");
}
#if IS_ENABLED(CONFIG_NFS_V4)
diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
index 43af053..dc2cbca 100644
--- a/fs/nfs/inode.c
+++ b/fs/nfs/inode.c
@@ -207,7 +207,8 @@ static void nfs_set_cache_invalid(struct inode *inode, unsigned long flags)
| NFS_INO_INVALID_SIZE
| NFS_INO_REVAL_PAGECACHE
| NFS_INO_INVALID_XATTR);
- }
+ } else if (flags & NFS_INO_REVAL_PAGECACHE)
+ flags |= NFS_INO_INVALID_CHANGE | NFS_INO_INVALID_SIZE;
if (inode->i_mapping->nrpages == 0)
flags &= ~(NFS_INO_INVALID_DATA|NFS_INO_DATA_INVAL_DEFER);
@@ -1642,10 +1643,10 @@ EXPORT_SYMBOL_GPL(_nfs_display_fhandle);
*/
static int nfs_inode_attrs_need_update(const struct inode *inode, const struct nfs_fattr *fattr)
{
- const struct nfs_inode *nfsi = NFS_I(inode);
+ unsigned long attr_gencount = NFS_I(inode)->attr_gencount;
- return ((long)fattr->gencount - (long)nfsi->attr_gencount) > 0 ||
- ((long)nfsi->attr_gencount - (long)nfs_read_attr_generation_counter() > 0);
+ return (long)(fattr->gencount - attr_gencount) > 0 ||
+ (long)(attr_gencount - nfs_read_attr_generation_counter()) > 0;
}
static int nfs_refresh_inode_locked(struct inode *inode, struct nfs_fattr *fattr)
@@ -2074,7 +2075,7 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr)
nfsi->attrtimeo_timestamp = now;
}
/* Set the barrier to be more recent than this fattr */
- if ((long)fattr->gencount - (long)nfsi->attr_gencount > 0)
+ if ((long)(fattr->gencount - nfsi->attr_gencount) > 0)
nfsi->attr_gencount = fattr->gencount;
}
diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c
index 4fc61e3..4ebcd9d 100644
--- a/fs/nfs/nfs42proc.c
+++ b/fs/nfs/nfs42proc.c
@@ -46,11 +46,12 @@ static int _nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep,
{
struct inode *inode = file_inode(filep);
struct nfs_server *server = NFS_SERVER(inode);
+ u32 bitmask[3];
struct nfs42_falloc_args args = {
.falloc_fh = NFS_FH(inode),
.falloc_offset = offset,
.falloc_length = len,
- .falloc_bitmask = nfs4_fattr_bitmap,
+ .falloc_bitmask = bitmask,
};
struct nfs42_falloc_res res = {
.falloc_server = server,
@@ -68,6 +69,10 @@ static int _nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep,
return status;
}
+ memcpy(bitmask, server->cache_consistency_bitmask, sizeof(bitmask));
+ if (server->attr_bitmask[1] & FATTR4_WORD1_SPACE_USED)
+ bitmask[1] |= FATTR4_WORD1_SPACE_USED;
+
res.falloc_fattr = nfs_alloc_fattr();
if (!res.falloc_fattr)
return -ENOMEM;
@@ -75,7 +80,8 @@ static int _nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep,
status = nfs4_call_sync(server->client, server, msg,
&args.seq_args, &res.seq_res, 0);
if (status == 0)
- status = nfs_post_op_update_inode(inode, res.falloc_fattr);
+ status = nfs_post_op_update_inode_force_wcc(inode,
+ res.falloc_fattr);
kfree(res.falloc_fattr);
return status;
@@ -84,7 +90,8 @@ static int _nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep,
static int nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep,
loff_t offset, loff_t len)
{
- struct nfs_server *server = NFS_SERVER(file_inode(filep));
+ struct inode *inode = file_inode(filep);
+ struct nfs_server *server = NFS_SERVER(inode);
struct nfs4_exception exception = { };
struct nfs_lock_context *lock;
int err;
@@ -93,9 +100,13 @@ static int nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep,
if (IS_ERR(lock))
return PTR_ERR(lock);
- exception.inode = file_inode(filep);
+ exception.inode = inode;
exception.state = lock->open_context->state;
+ err = nfs_sync_inode(inode);
+ if (err)
+ goto out;
+
do {
err = _nfs42_proc_fallocate(msg, filep, lock, offset, len);
if (err == -ENOTSUPP) {
@@ -104,7 +115,7 @@ static int nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep,
}
err = nfs4_handle_exception(server, err, &exception);
} while (exception.retry);
-
+out:
nfs_put_lock_context(lock);
return err;
}
@@ -142,16 +153,13 @@ int nfs42_proc_deallocate(struct file *filep, loff_t offset, loff_t len)
return -EOPNOTSUPP;
inode_lock(inode);
- err = nfs_sync_inode(inode);
- if (err)
- goto out_unlock;
err = nfs42_proc_fallocate(&msg, filep, offset, len);
if (err == 0)
truncate_pagecache_range(inode, offset, (offset + len) -1);
if (err == -EOPNOTSUPP)
NFS_SERVER(inode)->caps &= ~NFS_CAP_DEALLOCATE;
-out_unlock:
+
inode_unlock(inode);
return err;
}
@@ -657,7 +665,10 @@ static loff_t _nfs42_proc_llseek(struct file *filep,
if (status)
return status;
- return vfs_setpos(filep, res.sr_offset, inode->i_sb->s_maxbytes);
+ if (whence == SEEK_DATA && res.sr_eof)
+ return -NFS4ERR_NXIO;
+ else
+ return vfs_setpos(filep, res.sr_offset, inode->i_sb->s_maxbytes);
}
loff_t nfs42_proc_llseek(struct file *filep, loff_t offset, int whence)
diff --git a/fs/nfs/nfs4file.c b/fs/nfs/nfs4file.c
index 57b3821..a1e5c6b 100644
--- a/fs/nfs/nfs4file.c
+++ b/fs/nfs/nfs4file.c
@@ -211,7 +211,7 @@ static loff_t nfs4_file_llseek(struct file *filep, loff_t offset, int whence)
case SEEK_HOLE:
case SEEK_DATA:
ret = nfs42_proc_llseek(filep, offset, whence);
- if (ret != -ENOTSUPP)
+ if (ret != -EOPNOTSUPP)
return ret;
fallthrough;
default:
diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index 15ac6b6..c92d6ff 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -112,9 +112,10 @@ static int nfs41_test_stateid(struct nfs_server *, nfs4_stateid *,
static int nfs41_free_stateid(struct nfs_server *, const nfs4_stateid *,
const struct cred *, bool);
#endif
-static void nfs4_bitmask_adjust(__u32 *bitmask, struct inode *inode,
- struct nfs_server *server,
- struct nfs4_label *label);
+static void nfs4_bitmask_set(__u32 bitmask[NFS4_BITMASK_SZ],
+ const __u32 *src, struct inode *inode,
+ struct nfs_server *server,
+ struct nfs4_label *label);
#ifdef CONFIG_NFS_V4_SECURITY_LABEL
static inline struct nfs4_label *
@@ -1687,7 +1688,7 @@ static void nfs_set_open_stateid_locked(struct nfs4_state *state,
rcu_read_unlock();
trace_nfs4_open_stateid_update_wait(state->inode, stateid, 0);
- if (!signal_pending(current)) {
+ if (!fatal_signal_pending(current)) {
if (schedule_timeout(5*HZ) == 0)
status = -EAGAIN;
else
@@ -3462,7 +3463,7 @@ static bool nfs4_refresh_open_old_stateid(nfs4_stateid *dst,
write_sequnlock(&state->seqlock);
trace_nfs4_close_stateid_update_wait(state->inode, dst, 0);
- if (signal_pending(current))
+ if (fatal_signal_pending(current))
status = -EINTR;
else
if (schedule_timeout(5*HZ) != 0)
@@ -3596,6 +3597,7 @@ static void nfs4_close_prepare(struct rpc_task *task, void *data)
struct nfs4_closedata *calldata = data;
struct nfs4_state *state = calldata->state;
struct inode *inode = calldata->inode;
+ struct nfs_server *server = NFS_SERVER(inode);
struct pnfs_layout_hdr *lo;
bool is_rdonly, is_wronly, is_rdwr;
int call_close = 0;
@@ -3652,8 +3654,10 @@ static void nfs4_close_prepare(struct rpc_task *task, void *data)
if (calldata->arg.fmode == 0 || calldata->arg.fmode == FMODE_READ) {
/* Close-to-open cache consistency revalidation */
if (!nfs4_have_delegation(inode, FMODE_READ)) {
- calldata->arg.bitmask = NFS_SERVER(inode)->cache_consistency_bitmask;
- nfs4_bitmask_adjust(calldata->arg.bitmask, inode, NFS_SERVER(inode), NULL);
+ nfs4_bitmask_set(calldata->arg.bitmask_store,
+ server->cache_consistency_bitmask,
+ inode, server, NULL);
+ calldata->arg.bitmask = calldata->arg.bitmask_store;
} else
calldata->arg.bitmask = NULL;
}
@@ -5418,19 +5422,17 @@ bool nfs4_write_need_cache_consistency_data(struct nfs_pgio_header *hdr)
return nfs4_have_delegation(hdr->inode, FMODE_READ) == 0;
}
-static void nfs4_bitmask_adjust(__u32 *bitmask, struct inode *inode,
- struct nfs_server *server,
- struct nfs4_label *label)
+static void nfs4_bitmask_set(__u32 bitmask[NFS4_BITMASK_SZ], const __u32 *src,
+ struct inode *inode, struct nfs_server *server,
+ struct nfs4_label *label)
{
-
unsigned long cache_validity = READ_ONCE(NFS_I(inode)->cache_validity);
+ unsigned int i;
- if ((cache_validity & NFS_INO_INVALID_DATA) ||
- (cache_validity & NFS_INO_REVAL_PAGECACHE) ||
- (cache_validity & NFS_INO_REVAL_FORCED) ||
- (cache_validity & NFS_INO_INVALID_OTHER))
- nfs4_bitmap_copy_adjust(bitmask, nfs4_bitmask(server, label), inode);
+ memcpy(bitmask, src, sizeof(*bitmask) * NFS4_BITMASK_SZ);
+ if (cache_validity & (NFS_INO_INVALID_CHANGE | NFS_INO_REVAL_PAGECACHE))
+ bitmask[0] |= FATTR4_WORD0_CHANGE;
if (cache_validity & NFS_INO_INVALID_ATIME)
bitmask[1] |= FATTR4_WORD1_TIME_ACCESS;
if (cache_validity & NFS_INO_INVALID_OTHER)
@@ -5439,16 +5441,22 @@ static void nfs4_bitmask_adjust(__u32 *bitmask, struct inode *inode,
FATTR4_WORD1_NUMLINKS;
if (label && label->len && cache_validity & NFS_INO_INVALID_LABEL)
bitmask[2] |= FATTR4_WORD2_SECURITY_LABEL;
- if (cache_validity & NFS_INO_INVALID_CHANGE)
- bitmask[0] |= FATTR4_WORD0_CHANGE;
if (cache_validity & NFS_INO_INVALID_CTIME)
bitmask[1] |= FATTR4_WORD1_TIME_METADATA;
if (cache_validity & NFS_INO_INVALID_MTIME)
bitmask[1] |= FATTR4_WORD1_TIME_MODIFY;
- if (cache_validity & NFS_INO_INVALID_SIZE)
- bitmask[0] |= FATTR4_WORD0_SIZE;
if (cache_validity & NFS_INO_INVALID_BLOCKS)
bitmask[1] |= FATTR4_WORD1_SPACE_USED;
+
+ if (nfs4_have_delegation(inode, FMODE_READ) &&
+ !(cache_validity & NFS_INO_REVAL_FORCED))
+ bitmask[0] &= ~FATTR4_WORD0_SIZE;
+ else if (cache_validity &
+ (NFS_INO_INVALID_SIZE | NFS_INO_REVAL_PAGECACHE))
+ bitmask[0] |= FATTR4_WORD0_SIZE;
+
+ for (i = 0; i < NFS4_BITMASK_SZ; i++)
+ bitmask[i] &= server->attr_bitmask[i];
}
static void nfs4_proc_write_setup(struct nfs_pgio_header *hdr,
@@ -5461,8 +5469,10 @@ static void nfs4_proc_write_setup(struct nfs_pgio_header *hdr,
hdr->args.bitmask = NULL;
hdr->res.fattr = NULL;
} else {
- hdr->args.bitmask = server->cache_consistency_bitmask;
- nfs4_bitmask_adjust(hdr->args.bitmask, hdr->inode, server, NULL);
+ nfs4_bitmask_set(hdr->args.bitmask_store,
+ server->cache_consistency_bitmask,
+ hdr->inode, server, NULL);
+ hdr->args.bitmask = hdr->args.bitmask_store;
}
if (!hdr->pgio_done_cb)
@@ -6504,8 +6514,10 @@ static int _nfs4_proc_delegreturn(struct inode *inode, const struct cred *cred,
data->args.fhandle = &data->fh;
data->args.stateid = &data->stateid;
- data->args.bitmask = server->cache_consistency_bitmask;
- nfs4_bitmask_adjust(data->args.bitmask, inode, server, NULL);
+ nfs4_bitmask_set(data->args.bitmask_store,
+ server->cache_consistency_bitmask, inode, server,
+ NULL);
+ data->args.bitmask = data->args.bitmask_store;
nfs_copy_fh(&data->fh, NFS_FH(inode));
nfs4_stateid_copy(&data->stateid, stateid);
data->res.fattr = &data->fattr;
diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c
index 78c9c4b..98b9c1e 100644
--- a/fs/nfs/pagelist.c
+++ b/fs/nfs/pagelist.c
@@ -1094,15 +1094,16 @@ nfs_pageio_do_add_request(struct nfs_pageio_descriptor *desc,
struct nfs_page *prev = NULL;
unsigned int size;
- if (mirror->pg_count != 0) {
- prev = nfs_list_entry(mirror->pg_list.prev);
- } else {
+ if (list_empty(&mirror->pg_list)) {
if (desc->pg_ops->pg_init)
desc->pg_ops->pg_init(desc, req);
if (desc->pg_error < 0)
return 0;
mirror->pg_base = req->wb_pgbase;
- }
+ mirror->pg_count = 0;
+ mirror->pg_recoalesce = 0;
+ } else
+ prev = nfs_list_entry(mirror->pg_list.prev);
if (desc->pg_maxretrans && req->wb_nio > desc->pg_maxretrans) {
if (NFS_SERVER(desc->pg_inode)->flags & NFS_MOUNT_SOFTERR)
@@ -1127,17 +1128,16 @@ static void nfs_pageio_doio(struct nfs_pageio_descriptor *desc)
{
struct nfs_pgio_mirror *mirror = nfs_pgio_current_mirror(desc);
-
if (!list_empty(&mirror->pg_list)) {
int error = desc->pg_ops->pg_doio(desc);
if (error < 0)
desc->pg_error = error;
- else
+ if (list_empty(&mirror->pg_list)) {
mirror->pg_bytes_written += mirror->pg_count;
- }
- if (list_empty(&mirror->pg_list)) {
- mirror->pg_count = 0;
- mirror->pg_base = 0;
+ mirror->pg_count = 0;
+ mirror->pg_base = 0;
+ mirror->pg_recoalesce = 0;
+ }
}
}
@@ -1227,7 +1227,6 @@ static int nfs_do_recoalesce(struct nfs_pageio_descriptor *desc)
do {
list_splice_init(&mirror->pg_list, &head);
- mirror->pg_bytes_written -= mirror->pg_count;
mirror->pg_count = 0;
mirror->pg_base = 0;
mirror->pg_recoalesce = 0;
diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
index b8712b8..4d20125e 100644
--- a/fs/nfs/pnfs.c
+++ b/fs/nfs/pnfs.c
@@ -1317,6 +1317,11 @@ _pnfs_return_layout(struct inode *ino)
{
struct pnfs_layout_hdr *lo = NULL;
struct nfs_inode *nfsi = NFS_I(ino);
+ struct pnfs_layout_range range = {
+ .iomode = IOMODE_ANY,
+ .offset = 0,
+ .length = NFS4_MAX_UINT64,
+ };
LIST_HEAD(tmp_list);
const struct cred *cred;
nfs4_stateid stateid;
@@ -1344,16 +1349,10 @@ _pnfs_return_layout(struct inode *ino)
}
valid_layout = pnfs_layout_is_valid(lo);
pnfs_clear_layoutcommit(ino, &tmp_list);
- pnfs_mark_matching_lsegs_invalid(lo, &tmp_list, NULL, 0);
+ pnfs_mark_matching_lsegs_return(lo, &tmp_list, &range, 0);
- if (NFS_SERVER(ino)->pnfs_curr_ld->return_range) {
- struct pnfs_layout_range range = {
- .iomode = IOMODE_ANY,
- .offset = 0,
- .length = NFS4_MAX_UINT64,
- };
+ if (NFS_SERVER(ino)->pnfs_curr_ld->return_range)
NFS_SERVER(ino)->pnfs_curr_ld->return_range(lo, &range);
- }
/* Don't send a LAYOUTRETURN if list was initially empty */
if (!test_bit(NFS_LAYOUT_RETURN_REQUESTED, &lo->plh_flags) ||
@@ -2470,6 +2469,9 @@ pnfs_mark_matching_lsegs_return(struct pnfs_layout_hdr *lo,
assert_spin_locked(&lo->plh_inode->i_lock);
+ if (test_bit(NFS_LAYOUT_RETURN_REQUESTED, &lo->plh_flags))
+ tmp_list = &lo->plh_return_segs;
+
list_for_each_entry_safe(lseg, next, &lo->plh_segs, pls_list)
if (pnfs_match_lseg_recall(lseg, return_range, seq)) {
dprintk("%s: marking lseg %p iomode %d "
@@ -2477,6 +2479,8 @@ pnfs_mark_matching_lsegs_return(struct pnfs_layout_hdr *lo,
lseg, lseg->pls_range.iomode,
lseg->pls_range.offset,
lseg->pls_range.length);
+ if (test_bit(NFS_LSEG_LAYOUTRETURN, &lseg->pls_flags))
+ tmp_list = &lo->plh_return_segs;
if (mark_lseg_invalid(lseg, tmp_list))
continue;
remaining++;
diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
index 2e68cea..0044033 100644
--- a/fs/nfsd/nfs4proc.c
+++ b/fs/nfsd/nfs4proc.c
@@ -1425,7 +1425,7 @@ static __be32 nfsd4_do_copy(struct nfsd4_copy *copy, bool sync)
return status;
}
-static int dup_copy_fields(struct nfsd4_copy *src, struct nfsd4_copy *dst)
+static void dup_copy_fields(struct nfsd4_copy *src, struct nfsd4_copy *dst)
{
dst->cp_src_pos = src->cp_src_pos;
dst->cp_dst_pos = src->cp_dst_pos;
@@ -1444,8 +1444,6 @@ static int dup_copy_fields(struct nfsd4_copy *src, struct nfsd4_copy *dst)
memcpy(&dst->stateid, &src->stateid, sizeof(src->stateid));
memcpy(&dst->c_fh, &src->c_fh, sizeof(src->c_fh));
dst->ss_mnt = src->ss_mnt;
-
- return 0;
}
static void cleanup_async_copy(struct nfsd4_copy *copy)
@@ -1537,11 +1535,9 @@ nfsd4_copy(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
if (!nfs4_init_copy_state(nn, copy))
goto out_err;
refcount_set(&async_copy->refcount, 1);
- memcpy(&copy->cp_res.cb_stateid, &copy->cp_stateid,
- sizeof(copy->cp_stateid));
- status = dup_copy_fields(copy, async_copy);
- if (status)
- goto out_err;
+ memcpy(&copy->cp_res.cb_stateid, &copy->cp_stateid.stid,
+ sizeof(copy->cp_res.cb_stateid));
+ dup_copy_fields(copy, async_copy);
async_copy->copy_task = kthread_create(nfsd4_do_async_copy,
async_copy, "%s", "copy thread");
if (IS_ERR(async_copy->copy_task))
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 55cf60b..ac20f79 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -4874,6 +4874,11 @@ static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp,
if (nf)
nfsd_file_put(nf);
+ status = nfserrno(nfsd_open_break_lease(cur_fh->fh_dentry->d_inode,
+ access));
+ if (status)
+ goto out_put_access;
+
status = nfsd4_truncate(rqstp, cur_fh, open);
if (status)
goto out_put_access;
@@ -6856,11 +6861,20 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
static __be32 nfsd_test_lock(struct svc_rqst *rqstp, struct svc_fh *fhp, struct file_lock *lock)
{
struct nfsd_file *nf;
- __be32 err = nfsd_file_acquire(rqstp, fhp, NFSD_MAY_READ, &nf);
- if (!err) {
- err = nfserrno(vfs_test_lock(nf->nf_file, lock));
- nfsd_file_put(nf);
- }
+ __be32 err;
+
+ err = nfsd_file_acquire(rqstp, fhp, NFSD_MAY_READ, &nf);
+ if (err)
+ return err;
+ fh_lock(fhp); /* to block new leases till after test_lock: */
+ err = nfserrno(nfsd_open_break_lease(fhp->fh_dentry->d_inode,
+ NFSD_MAY_READ));
+ if (err)
+ goto out;
+ err = nfserrno(vfs_test_lock(nf->nf_file, lock));
+out:
+ fh_unlock(fhp);
+ nfsd_file_put(nf);
return err;
}
diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c
index 3bfb414..ad20403 100644
--- a/fs/ocfs2/aops.c
+++ b/fs/ocfs2/aops.c
@@ -2295,7 +2295,7 @@ static int ocfs2_dio_end_io_write(struct inode *inode,
struct ocfs2_alloc_context *meta_ac = NULL;
handle_t *handle = NULL;
loff_t end = offset + bytes;
- int ret = 0, credits = 0, locked = 0;
+ int ret = 0, credits = 0;
ocfs2_init_dealloc_ctxt(&dealloc);
@@ -2306,13 +2306,6 @@ static int ocfs2_dio_end_io_write(struct inode *inode,
!dwc->dw_orphaned)
goto out;
- /* ocfs2_file_write_iter will get i_mutex, so we need not lock if we
- * are in that context. */
- if (dwc->dw_writer_pid != task_pid_nr(current)) {
- inode_lock(inode);
- locked = 1;
- }
-
ret = ocfs2_inode_lock(inode, &di_bh, 1);
if (ret < 0) {
mlog_errno(ret);
@@ -2393,8 +2386,6 @@ static int ocfs2_dio_end_io_write(struct inode *inode,
if (meta_ac)
ocfs2_free_alloc_context(meta_ac);
ocfs2_run_deallocs(osb, &dealloc);
- if (locked)
- inode_unlock(inode);
ocfs2_dio_free_write_ctx(inode, dwc);
return ret;
diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
index 85979e2..8880071 100644
--- a/fs/ocfs2/file.c
+++ b/fs/ocfs2/file.c
@@ -1244,22 +1244,24 @@ int ocfs2_setattr(struct dentry *dentry, struct iattr *attr)
goto bail_unlock;
}
}
+ down_write(&OCFS2_I(inode)->ip_alloc_sem);
handle = ocfs2_start_trans(osb, OCFS2_INODE_UPDATE_CREDITS +
2 * ocfs2_quota_trans_credits(sb));
if (IS_ERR(handle)) {
status = PTR_ERR(handle);
mlog_errno(status);
- goto bail_unlock;
+ goto bail_unlock_alloc;
}
status = __dquot_transfer(inode, transfer_to);
if (status < 0)
goto bail_commit;
} else {
+ down_write(&OCFS2_I(inode)->ip_alloc_sem);
handle = ocfs2_start_trans(osb, OCFS2_INODE_UPDATE_CREDITS);
if (IS_ERR(handle)) {
status = PTR_ERR(handle);
mlog_errno(status);
- goto bail_unlock;
+ goto bail_unlock_alloc;
}
}
@@ -1272,6 +1274,8 @@ int ocfs2_setattr(struct dentry *dentry, struct iattr *attr)
bail_commit:
ocfs2_commit_trans(osb, handle);
+bail_unlock_alloc:
+ up_write(&OCFS2_I(inode)->ip_alloc_sem);
bail_unlock:
if (status && inode_locked) {
ocfs2_inode_unlock_tracker(inode, 1, &oh, had_lock);
diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c
index 89d5d59..e466c58 100644
--- a/fs/overlayfs/copy_up.c
+++ b/fs/overlayfs/copy_up.c
@@ -928,7 +928,7 @@ static int ovl_copy_up_one(struct dentry *parent, struct dentry *dentry,
static int ovl_copy_up_flags(struct dentry *dentry, int flags)
{
int err = 0;
- const struct cred *old_cred = ovl_override_creds(dentry->d_sb);
+ const struct cred *old_cred;
bool disconnected = (dentry->d_flags & DCACHE_DISCONNECTED);
/*
@@ -939,6 +939,7 @@ static int ovl_copy_up_flags(struct dentry *dentry, int flags)
if (WARN_ON(disconnected && d_is_dir(dentry)))
return -EIO;
+ old_cred = ovl_override_creds(dentry->d_sb);
while (!err) {
struct dentry *next;
struct dentry *parent = NULL;
diff --git a/fs/overlayfs/namei.c b/fs/overlayfs/namei.c
index a6162c4..f3309e0 100644
--- a/fs/overlayfs/namei.c
+++ b/fs/overlayfs/namei.c
@@ -913,6 +913,7 @@ struct dentry *ovl_lookup(struct inode *dir, struct dentry *dentry,
continue;
if ((uppermetacopy || d.metacopy) && !ofs->config.metacopy) {
+ dput(this);
err = -EPERM;
pr_warn_ratelimited("refusing to follow metacopy origin for (%pd2)\n", dentry);
goto out_put;
diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h
index 9f7af98..e43dc68 100644
--- a/fs/overlayfs/overlayfs.h
+++ b/fs/overlayfs/overlayfs.h
@@ -308,9 +308,6 @@ int ovl_check_setxattr(struct dentry *dentry, struct dentry *upperdentry,
enum ovl_xattr ox, const void *value, size_t size,
int xerr);
int ovl_set_impure(struct dentry *dentry, struct dentry *upperdentry);
-void ovl_set_flag(unsigned long flag, struct inode *inode);
-void ovl_clear_flag(unsigned long flag, struct inode *inode);
-bool ovl_test_flag(unsigned long flag, struct inode *inode);
bool ovl_inuse_trylock(struct dentry *dentry);
void ovl_inuse_unlock(struct dentry *dentry);
bool ovl_is_inuse(struct dentry *dentry);
@@ -324,6 +321,21 @@ char *ovl_get_redirect_xattr(struct ovl_fs *ofs, struct dentry *dentry,
int padding);
int ovl_sync_status(struct ovl_fs *ofs);
+static inline void ovl_set_flag(unsigned long flag, struct inode *inode)
+{
+ set_bit(flag, &OVL_I(inode)->flags);
+}
+
+static inline void ovl_clear_flag(unsigned long flag, struct inode *inode)
+{
+ clear_bit(flag, &OVL_I(inode)->flags);
+}
+
+static inline bool ovl_test_flag(unsigned long flag, struct inode *inode)
+{
+ return test_bit(flag, &OVL_I(inode)->flags);
+}
+
static inline bool ovl_is_impuredir(struct super_block *sb,
struct dentry *dentry)
{
@@ -427,6 +439,18 @@ int ovl_workdir_cleanup(struct inode *dir, struct vfsmount *mnt,
struct dentry *dentry, int level);
int ovl_indexdir_cleanup(struct ovl_fs *ofs);
+/*
+ * Can we iterate real dir directly?
+ *
+ * Non-merge dir may contain whiteouts from a time it was a merge upper, before
+ * lower dir was removed under it and possibly before it was rotated from upper
+ * to lower layer.
+ */
+static inline bool ovl_dir_is_real(struct dentry *dir)
+{
+ return !ovl_test_flag(OVL_WHITEOUTS, d_inode(dir));
+}
+
/* inode.c */
int ovl_set_nlink_upper(struct dentry *dentry);
int ovl_set_nlink_lower(struct dentry *dentry);
diff --git a/fs/overlayfs/readdir.c b/fs/overlayfs/readdir.c
index f404a78..cc1e802 100644
--- a/fs/overlayfs/readdir.c
+++ b/fs/overlayfs/readdir.c
@@ -319,18 +319,6 @@ static inline int ovl_dir_read(struct path *realpath,
return err;
}
-/*
- * Can we iterate real dir directly?
- *
- * Non-merge dir may contain whiteouts from a time it was a merge upper, before
- * lower dir was removed under it and possibly before it was rotated from upper
- * to lower layer.
- */
-static bool ovl_dir_is_real(struct dentry *dir)
-{
- return !ovl_test_flag(OVL_WHITEOUTS, d_inode(dir));
-}
-
static void ovl_dir_reset(struct file *file)
{
struct ovl_dir_file *od = file->private_data;
diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
index 50529a4..77f08ac 100644
--- a/fs/overlayfs/super.c
+++ b/fs/overlayfs/super.c
@@ -1759,7 +1759,8 @@ static struct ovl_entry *ovl_get_lowerstack(struct super_block *sb,
* - upper/work dir of any overlayfs instance
*/
static int ovl_check_layer(struct super_block *sb, struct ovl_fs *ofs,
- struct dentry *dentry, const char *name)
+ struct dentry *dentry, const char *name,
+ bool is_lower)
{
struct dentry *next = dentry, *parent;
int err = 0;
@@ -1771,7 +1772,7 @@ static int ovl_check_layer(struct super_block *sb, struct ovl_fs *ofs,
/* Walk back ancestors to root (inclusive) looking for traps */
while (!err && parent != next) {
- if (ovl_lookup_trap_inode(sb, parent)) {
+ if (is_lower && ovl_lookup_trap_inode(sb, parent)) {
err = -ELOOP;
pr_err("overlapping %s path\n", name);
} else if (ovl_is_inuse(parent)) {
@@ -1797,7 +1798,7 @@ static int ovl_check_overlapping_layers(struct super_block *sb,
if (ovl_upper_mnt(ofs)) {
err = ovl_check_layer(sb, ofs, ovl_upper_mnt(ofs)->mnt_root,
- "upperdir");
+ "upperdir", false);
if (err)
return err;
@@ -1808,7 +1809,8 @@ static int ovl_check_overlapping_layers(struct super_block *sb,
* workbasedir. In that case, we already have their traps in
* inode cache and we will catch that case on lookup.
*/
- err = ovl_check_layer(sb, ofs, ofs->workbasedir, "workdir");
+ err = ovl_check_layer(sb, ofs, ofs->workbasedir, "workdir",
+ false);
if (err)
return err;
}
@@ -1816,7 +1818,7 @@ static int ovl_check_overlapping_layers(struct super_block *sb,
for (i = 1; i < ofs->numlayer; i++) {
err = ovl_check_layer(sb, ofs,
ofs->layers[i].mnt->mnt_root,
- "lowerdir");
+ "lowerdir", true);
if (err)
return err;
}
diff --git a/fs/overlayfs/util.c b/fs/overlayfs/util.c
index 6e7b8c8..e8b14d2 100644
--- a/fs/overlayfs/util.c
+++ b/fs/overlayfs/util.c
@@ -419,18 +419,20 @@ void ovl_inode_update(struct inode *inode, struct dentry *upperdentry)
}
}
-static void ovl_dentry_version_inc(struct dentry *dentry, bool impurity)
+static void ovl_dir_version_inc(struct dentry *dentry, bool impurity)
{
struct inode *inode = d_inode(dentry);
WARN_ON(!inode_is_locked(inode));
+ WARN_ON(!d_is_dir(dentry));
/*
- * Version is used by readdir code to keep cache consistent. For merge
- * dirs all changes need to be noted. For non-merge dirs, cache only
- * contains impure (ones which have been copied up and have origins)
- * entries, so only need to note changes to impure entries.
+ * Version is used by readdir code to keep cache consistent.
+ * For merge dirs (or dirs with origin) all changes need to be noted.
+ * For non-merge dirs, cache contains only impure entries (i.e. ones
+ * which have been copied up and have origins), so only need to note
+ * changes to impure entries.
*/
- if (OVL_TYPE_MERGE(ovl_path_type(dentry)) || impurity)
+ if (!ovl_dir_is_real(dentry) || impurity)
OVL_I(inode)->version++;
}
@@ -439,7 +441,7 @@ void ovl_dir_modified(struct dentry *dentry, bool impurity)
/* Copy mtime/ctime */
ovl_copyattr(d_inode(ovl_dentry_upper(dentry)), d_inode(dentry));
- ovl_dentry_version_inc(dentry, impurity);
+ ovl_dir_version_inc(dentry, impurity);
}
u64 ovl_dentry_version_get(struct dentry *dentry)
@@ -634,21 +636,6 @@ int ovl_set_impure(struct dentry *dentry, struct dentry *upperdentry)
return err;
}
-void ovl_set_flag(unsigned long flag, struct inode *inode)
-{
- set_bit(flag, &OVL_I(inode)->flags);
-}
-
-void ovl_clear_flag(unsigned long flag, struct inode *inode)
-{
- clear_bit(flag, &OVL_I(inode)->flags);
-}
-
-bool ovl_test_flag(unsigned long flag, struct inode *inode)
-{
- return test_bit(flag, &OVL_I(inode)->flags);
-}
-
/**
* Caller must hold a reference to inode to prevent it from being freed while
* it is marked inuse.
diff --git a/fs/proc/array.c b/fs/proc/array.c
index 65ec202..18a4588 100644
--- a/fs/proc/array.c
+++ b/fs/proc/array.c
@@ -341,9 +341,11 @@ static inline void task_seccomp(struct seq_file *m, struct task_struct *p)
seq_put_decimal_ull(m, "NoNewPrivs:\t", task_no_new_privs(p));
#ifdef CONFIG_SECCOMP
seq_put_decimal_ull(m, "\nSeccomp:\t", p->seccomp.mode);
+#ifdef CONFIG_SECCOMP_FILTER
seq_put_decimal_ull(m, "\nSeccomp_filters:\t",
atomic_read(&p->seccomp.filter_count));
#endif
+#endif
seq_puts(m, "\nSpeculation_Store_Bypass:\t");
switch (arch_prctl_spec_ctrl_get(p, PR_SPEC_STORE_BYPASS)) {
case -EINVAL:
diff --git a/fs/proc/base.c b/fs/proc/base.c
index 4df86d0..e5ef64e 100644
--- a/fs/proc/base.c
+++ b/fs/proc/base.c
@@ -2714,6 +2714,10 @@ static ssize_t proc_pid_attr_write(struct file * file, const char __user * buf,
void *page;
int rv;
+ /* A task may only write when it was the opener. */
+ if (file->f_cred != current_real_cred())
+ return -EPERM;
+
rcu_read_lock();
task = pid_task(proc_pid(inode), PIDTYPE_PID);
if (!task) {
diff --git a/fs/proc/generic.c b/fs/proc/generic.c
index 6c0a05f..09e4d8a 100644
--- a/fs/proc/generic.c
+++ b/fs/proc/generic.c
@@ -754,7 +754,7 @@ int remove_proc_subtree(const char *name, struct proc_dir_entry *parent)
while (1) {
next = pde_subdir_first(de);
if (next) {
- if (unlikely(pde_is_permanent(root))) {
+ if (unlikely(pde_is_permanent(next))) {
write_unlock(&proc_subdir_lock);
WARN(1, "removing permanent /proc entry '%s/%s'",
next->parent->name, next->name);
diff --git a/fs/readdir.c b/fs/readdir.c
index 19434b3..09e8ed7 100644
--- a/fs/readdir.c
+++ b/fs/readdir.c
@@ -150,6 +150,9 @@ static int fillonedir(struct dir_context *ctx, const char *name, int namlen,
if (buf->result)
return -EINVAL;
+ buf->result = verify_dirent_name(name, namlen);
+ if (buf->result < 0)
+ return buf->result;
d_ino = ino;
if (sizeof(d_ino) < sizeof(ino) && d_ino != ino) {
buf->result = -EOVERFLOW;
@@ -405,6 +408,9 @@ static int compat_fillonedir(struct dir_context *ctx, const char *name,
if (buf->result)
return -EINVAL;
+ buf->result = verify_dirent_name(name, namlen);
+ if (buf->result < 0)
+ return buf->result;
d_ino = ino;
if (sizeof(d_ino) < sizeof(ino) && d_ino != ino) {
buf->result = -EOVERFLOW;
diff --git a/fs/squashfs/file.c b/fs/squashfs/file.c
index 7b11283..89d4929 100644
--- a/fs/squashfs/file.c
+++ b/fs/squashfs/file.c
@@ -211,11 +211,11 @@ static long long read_indexes(struct super_block *sb, int n,
* If the skip factor is limited in this way then the file will use multiple
* slots.
*/
-static inline int calculate_skip(int blocks)
+static inline int calculate_skip(u64 blocks)
{
- int skip = blocks / ((SQUASHFS_META_ENTRIES + 1)
+ u64 skip = blocks / ((SQUASHFS_META_ENTRIES + 1)
* SQUASHFS_META_INDEXES);
- return min(SQUASHFS_CACHED_BLKS - 1, skip + 1);
+ return min((u64) SQUASHFS_CACHED_BLKS - 1, skip + 1);
}
diff --git a/fs/stat.c b/fs/stat.c
index dacecdd..1196af4 100644
--- a/fs/stat.c
+++ b/fs/stat.c
@@ -77,12 +77,20 @@ int vfs_getattr_nosec(const struct path *path, struct kstat *stat,
/* SB_NOATIME means filesystem supplies dummy atime value */
if (inode->i_sb->s_flags & SB_NOATIME)
stat->result_mask &= ~STATX_ATIME;
+
+ /*
+ * Note: If you add another clause to set an attribute flag, please
+ * update attributes_mask below.
+ */
if (IS_AUTOMOUNT(inode))
stat->attributes |= STATX_ATTR_AUTOMOUNT;
if (IS_DAX(inode))
stat->attributes |= STATX_ATTR_DAX;
+ stat->attributes_mask |= (STATX_ATTR_AUTOMOUNT |
+ STATX_ATTR_DAX);
+
if (inode->i_op->getattr)
return inode->i_op->getattr(path, stat, request_mask,
query_flags);
diff --git a/fs/ubifs/replay.c b/fs/ubifs/replay.c
index 9a151a1..1c6fc99 100644
--- a/fs/ubifs/replay.c
+++ b/fs/ubifs/replay.c
@@ -223,7 +223,8 @@ static bool inode_still_linked(struct ubifs_info *c, struct replay_entry *rino)
*/
list_for_each_entry_reverse(r, &c->replay_list, list) {
ubifs_assert(c, r->sqnum >= rino->sqnum);
- if (key_inum(c, &r->key) == key_inum(c, &rino->key))
+ if (key_inum(c, &r->key) == key_inum(c, &rino->key) &&
+ key_type(c, &r->key) == UBIFS_INO_KEY)
return r->deletion == 0;
}
diff --git a/fs/xfs/libxfs/xfs_attr.c b/fs/xfs/libxfs/xfs_attr.c
index fd8e641..96ac7e5 100644
--- a/fs/xfs/libxfs/xfs_attr.c
+++ b/fs/xfs/libxfs/xfs_attr.c
@@ -928,6 +928,7 @@ xfs_attr_node_addname(
* Search to see if name already exists, and get back a pointer
* to where it should go.
*/
+ error = 0;
retval = xfs_attr_node_hasname(args, &state);
if (retval != -ENOATTR && retval != -EEXIST)
goto out;
diff --git a/include/crypto/acompress.h b/include/crypto/acompress.h
index fcde59c..cb3d6b1 100644
--- a/include/crypto/acompress.h
+++ b/include/crypto/acompress.h
@@ -165,6 +165,8 @@ static inline struct crypto_acomp *crypto_acomp_reqtfm(struct acomp_req *req)
* crypto_free_acomp() -- free ACOMPRESS tfm handle
*
* @tfm: ACOMPRESS tfm handle allocated with crypto_alloc_acomp()
+ *
+ * If @tfm is a NULL or error pointer, this function does nothing.
*/
static inline void crypto_free_acomp(struct crypto_acomp *tfm)
{
diff --git a/include/crypto/aead.h b/include/crypto/aead.h
index c32a6f5..fe95662 100644
--- a/include/crypto/aead.h
+++ b/include/crypto/aead.h
@@ -185,6 +185,8 @@ static inline struct crypto_tfm *crypto_aead_tfm(struct crypto_aead *tfm)
/**
* crypto_free_aead() - zeroize and free aead handle
* @tfm: cipher handle to be freed
+ *
+ * If @tfm is a NULL or error pointer, this function does nothing.
*/
static inline void crypto_free_aead(struct crypto_aead *tfm)
{
diff --git a/include/crypto/akcipher.h b/include/crypto/akcipher.h
index 1d3aa25..5764b46 100644
--- a/include/crypto/akcipher.h
+++ b/include/crypto/akcipher.h
@@ -174,6 +174,8 @@ static inline struct crypto_akcipher *crypto_akcipher_reqtfm(
* crypto_free_akcipher() - free AKCIPHER tfm handle
*
* @tfm: AKCIPHER tfm handle allocated with crypto_alloc_akcipher()
+ *
+ * If @tfm is a NULL or error pointer, this function does nothing.
*/
static inline void crypto_free_akcipher(struct crypto_akcipher *tfm)
{
diff --git a/include/crypto/chacha.h b/include/crypto/chacha.h
index 3a1c72f..dabaee6 100644
--- a/include/crypto/chacha.h
+++ b/include/crypto/chacha.h
@@ -47,13 +47,18 @@ static inline void hchacha_block(const u32 *state, u32 *out, int nrounds)
hchacha_block_generic(state, out, nrounds);
}
-void chacha_init_arch(u32 *state, const u32 *key, const u8 *iv);
-static inline void chacha_init_generic(u32 *state, const u32 *key, const u8 *iv)
+static inline void chacha_init_consts(u32 *state)
{
state[0] = 0x61707865; /* "expa" */
state[1] = 0x3320646e; /* "nd 3" */
state[2] = 0x79622d32; /* "2-by" */
state[3] = 0x6b206574; /* "te k" */
+}
+
+void chacha_init_arch(u32 *state, const u32 *key, const u8 *iv);
+static inline void chacha_init_generic(u32 *state, const u32 *key, const u8 *iv)
+{
+ chacha_init_consts(state);
state[4] = key[0];
state[5] = key[1];
state[6] = key[2];
diff --git a/include/crypto/hash.h b/include/crypto/hash.h
index 13f8a6a..b2bc1e4 100644
--- a/include/crypto/hash.h
+++ b/include/crypto/hash.h
@@ -281,6 +281,8 @@ static inline struct crypto_tfm *crypto_ahash_tfm(struct crypto_ahash *tfm)
/**
* crypto_free_ahash() - zeroize and free the ahash handle
* @tfm: cipher handle to be freed
+ *
+ * If @tfm is a NULL or error pointer, this function does nothing.
*/
static inline void crypto_free_ahash(struct crypto_ahash *tfm)
{
@@ -724,6 +726,8 @@ static inline struct crypto_tfm *crypto_shash_tfm(struct crypto_shash *tfm)
/**
* crypto_free_shash() - zeroize and free the message digest handle
* @tfm: cipher handle to be freed
+ *
+ * If @tfm is a NULL or error pointer, this function does nothing.
*/
static inline void crypto_free_shash(struct crypto_shash *tfm)
{
diff --git a/include/crypto/internal/poly1305.h b/include/crypto/internal/poly1305.h
index 064e52ca..196aa76 100644
--- a/include/crypto/internal/poly1305.h
+++ b/include/crypto/internal/poly1305.h
@@ -18,7 +18,8 @@
* only the ε-almost-∆-universal hash function (not the full MAC) is computed.
*/
-void poly1305_core_setkey(struct poly1305_core_key *key, const u8 *raw_key);
+void poly1305_core_setkey(struct poly1305_core_key *key,
+ const u8 raw_key[POLY1305_BLOCK_SIZE]);
static inline void poly1305_core_init(struct poly1305_state *state)
{
*state = (struct poly1305_state){};
diff --git a/include/crypto/kpp.h b/include/crypto/kpp.h
index 88b5912..ccccead 100644
--- a/include/crypto/kpp.h
+++ b/include/crypto/kpp.h
@@ -154,6 +154,8 @@ static inline void crypto_kpp_set_flags(struct crypto_kpp *tfm, u32 flags)
* crypto_free_kpp() - free KPP tfm handle
*
* @tfm: KPP tfm handle allocated with crypto_alloc_kpp()
+ *
+ * If @tfm is a NULL or error pointer, this function does nothing.
*/
static inline void crypto_free_kpp(struct crypto_kpp *tfm)
{
diff --git a/include/crypto/poly1305.h b/include/crypto/poly1305.h
index f1f67fc..090692e 100644
--- a/include/crypto/poly1305.h
+++ b/include/crypto/poly1305.h
@@ -58,8 +58,10 @@ struct poly1305_desc_ctx {
};
};
-void poly1305_init_arch(struct poly1305_desc_ctx *desc, const u8 *key);
-void poly1305_init_generic(struct poly1305_desc_ctx *desc, const u8 *key);
+void poly1305_init_arch(struct poly1305_desc_ctx *desc,
+ const u8 key[POLY1305_KEY_SIZE]);
+void poly1305_init_generic(struct poly1305_desc_ctx *desc,
+ const u8 key[POLY1305_KEY_SIZE]);
static inline void poly1305_init(struct poly1305_desc_ctx *desc, const u8 *key)
{
diff --git a/include/crypto/rng.h b/include/crypto/rng.h
index 8b4b844b..17bb3673 100644
--- a/include/crypto/rng.h
+++ b/include/crypto/rng.h
@@ -111,6 +111,8 @@ static inline struct rng_alg *crypto_rng_alg(struct crypto_rng *tfm)
/**
* crypto_free_rng() - zeroize and free RNG handle
* @tfm: cipher handle to be freed
+ *
+ * If @tfm is a NULL or error pointer, this function does nothing.
*/
static inline void crypto_free_rng(struct crypto_rng *tfm)
{
diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h
index 6a733b1..ef0fc9e 100644
--- a/include/crypto/skcipher.h
+++ b/include/crypto/skcipher.h
@@ -196,6 +196,8 @@ static inline struct crypto_tfm *crypto_skcipher_tfm(
/**
* crypto_free_skcipher() - zeroize and free cipher handle
* @tfm: cipher handle to be freed
+ *
+ * If @tfm is a NULL or error pointer, this function does nothing.
*/
static inline void crypto_free_skcipher(struct crypto_skcipher *tfm)
{
diff --git a/include/keys/trusted-type.h b/include/keys/trusted-type.h
index a94c03a..b2ed348 100644
--- a/include/keys/trusted-type.h
+++ b/include/keys/trusted-type.h
@@ -30,6 +30,7 @@ struct trusted_key_options {
uint16_t keytype;
uint32_t keyhandle;
unsigned char keyauth[TPM_DIGEST_SIZE];
+ uint32_t blobauth_len;
unsigned char blobauth[TPM_DIGEST_SIZE];
uint32_t pcrinfo_len;
unsigned char pcrinfo[MAX_PCRINFO_SIZE];
diff --git a/include/linux/avf/virtchnl.h b/include/linux/avf/virtchnl.h
index 40bad71..532bcbf 100644
--- a/include/linux/avf/virtchnl.h
+++ b/include/linux/avf/virtchnl.h
@@ -476,7 +476,6 @@ struct virtchnl_rss_key {
u16 vsi_id;
u16 key_len;
u8 key[1]; /* RSS hash key, packed bytes */
- u8 pad[1];
};
VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_key);
@@ -485,7 +484,6 @@ struct virtchnl_rss_lut {
u16 vsi_id;
u16 lut_entries;
u8 lut[1]; /* RSS lookup table */
- u8 pad[1];
};
VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_lut);
diff --git a/include/linux/bits.h b/include/linux/bits.h
index 7f475d5..87d1126 100644
--- a/include/linux/bits.h
+++ b/include/linux/bits.h
@@ -22,7 +22,7 @@
#include <linux/build_bug.h>
#define GENMASK_INPUT_CHECK(h, l) \
(BUILD_BUG_ON_ZERO(__builtin_choose_expr( \
- __builtin_constant_p((l) > (h)), (l) > (h), 0)))
+ __is_constexpr((l) > (h)), (l) > (h), 0)))
#else
/*
* BUILD_BUG_ON_ZERO is not available in h files included from asm files,
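For reference, the practical effect of this hunk: when both GENMASK() bounds are compile-time constants, swapped arguments now break the build instead of silently producing a bogus mask, while runtime-variable bounds are left alone. A minimal sketch of what it catches (variable names are illustrative, not part of the patch):

#include <linux/bits.h>

/* Builds fine: high bit (7) is not below low bit (0). */
static const unsigned long ok_mask = GENMASK(7, 0);	/* 0xff */

/*
 * Would break the build: both arguments are constant expressions and
 * l > h, so BUILD_BUG_ON_ZERO() trips inside GENMASK_INPUT_CHECK():
 *
 *	static const unsigned long bad_mask = GENMASK(0, 7);
 *
 * With a non-constant bound, __is_constexpr() evaluates to 0 and the
 * check is skipped, so such callers still compile.
 */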
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 281d0c7..6a86d65 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1260,6 +1260,11 @@ static inline bool bpf_allow_ptr_leaks(void)
return perfmon_capable();
}
+static inline bool bpf_allow_uninit_stack(void)
+{
+ return perfmon_capable();
+}
+
static inline bool bpf_allow_ptr_to_map_access(void)
{
return perfmon_capable();
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index e83ef6f..2739a64 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -187,7 +187,7 @@ struct bpf_func_state {
* 0 = main function, 1 = first callee.
*/
u32 frameno;
- /* subprog number == index within subprog_stack_depth
+ /* subprog number == index within subprog_info
* zero == main subprog
*/
u32 subprogno;
@@ -291,10 +291,11 @@ struct bpf_verifier_state_list {
};
/* Possible states for alu_state member. */
-#define BPF_ALU_SANITIZE_SRC 1U
-#define BPF_ALU_SANITIZE_DST 2U
+#define BPF_ALU_SANITIZE_SRC (1U << 0)
+#define BPF_ALU_SANITIZE_DST (1U << 1)
#define BPF_ALU_NEG_VALUE (1U << 2)
#define BPF_ALU_NON_POINTER (1U << 3)
+#define BPF_ALU_IMMEDIATE (1U << 4)
#define BPF_ALU_SANITIZE (BPF_ALU_SANITIZE_SRC | \
BPF_ALU_SANITIZE_DST)
@@ -390,6 +391,7 @@ struct bpf_verifier_env {
u32 used_map_cnt; /* number of used maps */
u32 id_gen; /* used to generate unique reg IDs */
bool allow_ptr_leaks;
+ bool allow_uninit_stack;
bool allow_ptr_to_map_access;
bool bpf_capable;
bool bypass_spec_v1;
diff --git a/include/linux/console_struct.h b/include/linux/console_struct.h
index 1537348..d5b9c8d 100644
--- a/include/linux/console_struct.h
+++ b/include/linux/console_struct.h
@@ -101,6 +101,7 @@ struct vc_data {
unsigned int vc_rows;
unsigned int vc_size_row; /* Bytes per row */
unsigned int vc_scan_lines; /* # of scan lines */
+ unsigned int vc_cell_height; /* CRTC character cell height */
unsigned long vc_origin; /* [!] Start of real screen */
unsigned long vc_scr_end; /* [!] End of real screen */
unsigned long vc_visible_origin; /* [!] Top of visible window */
diff --git a/include/linux/const.h b/include/linux/const.h
index 81b8aae..435ddd7 100644
--- a/include/linux/const.h
+++ b/include/linux/const.h
@@ -3,4 +3,12 @@
#include <vdso/const.h>
+/*
+ * This returns a constant expression while determining if an argument is
+ * a constant expression, most importantly without evaluating the argument.
+ * Glory to Martin Uecker <Martin.Uecker@med.uni-goettingen.de>
+ */
+#define __is_constexpr(x) \
+ (sizeof(int) == sizeof(*(8 ? ((void *)((long)(x) * 0l)) : (int *)8)))
+
#endif /* _LINUX_CONST_H */
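The macro moved here relies on a subtle C rule: '(void *)((long)(x) * 0l)' is a null pointer constant only when x is an integer constant expression, and the type of '8 ? <null pointer constant> : (int *)8' is then 'int *' rather than 'void *'. A small stand-alone user-space sketch of the same trick, illustrative only (it relies on sizeof(void) == 1, a GCC/Clang extension, just like the kernel macro):

#include <stdio.h>

#define my_is_constexpr(x) \
	(sizeof(int) == sizeof(*(8 ? ((void *)((long)(x) * 0l)) : (int *)8)))

int main(void)
{
	int at_runtime = 7;

	printf("%d\n", my_is_constexpr(5));		/* 1: constant expression */
	printf("%d\n", my_is_constexpr(at_runtime));	/* 0: not constant */
	printf("%d\n", my_is_constexpr(at_runtime++));	/* 0, and x is never evaluated */
	return 0;
}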
diff --git a/include/linux/context_tracking.h b/include/linux/context_tracking.h
index d53cd33..f5d127a 100644
--- a/include/linux/context_tracking.h
+++ b/include/linux/context_tracking.h
@@ -129,16 +129,26 @@ static __always_inline void guest_enter_irqoff(void)
}
}
-static __always_inline void guest_exit_irqoff(void)
+static __always_inline void context_tracking_guest_exit(void)
{
if (context_tracking_enabled())
__context_tracking_exit(CONTEXT_GUEST);
+}
- instrumentation_begin();
+static __always_inline void vtime_account_guest_exit(void)
+{
if (vtime_accounting_enabled_this_cpu())
vtime_guest_exit(current);
else
current->flags &= ~PF_VCPU;
+}
+
+static __always_inline void guest_exit_irqoff(void)
+{
+ context_tracking_guest_exit();
+
+ instrumentation_begin();
+ vtime_account_guest_exit();
instrumentation_end();
}
@@ -157,12 +167,19 @@ static __always_inline void guest_enter_irqoff(void)
instrumentation_end();
}
+static __always_inline void context_tracking_guest_exit(void) { }
+
+static __always_inline void vtime_account_guest_exit(void)
+{
+ vtime_account_kernel(current);
+ current->flags &= ~PF_VCPU;
+}
+
static __always_inline void guest_exit_irqoff(void)
{
instrumentation_begin();
/* Flush the guest cputime we spent on the guest */
- vtime_account_kernel(current);
- current->flags &= ~PF_VCPU;
+ vtime_account_guest_exit();
instrumentation_end();
}
#endif /* CONFIG_VIRT_CPU_ACCOUNTING_GEN */
diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
index bc56287..8fb893e 100644
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -135,6 +135,7 @@ enum cpuhp_state {
CPUHP_AP_RISCV_TIMER_STARTING,
CPUHP_AP_CLINT_TIMER_STARTING,
CPUHP_AP_CSKY_TIMER_STARTING,
+ CPUHP_AP_TI_GP_TIMER_STARTING,
CPUHP_AP_HYPERV_TIMER_STARTING,
CPUHP_AP_KVM_STARTING,
CPUHP_AP_KVM_ARM_VGIC_INIT_STARTING,
diff --git a/include/linux/device.h b/include/linux/device.h
index 2b39de3..8d97871 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -291,6 +291,7 @@ struct device_dma_parameters {
* sg limitations.
*/
unsigned int max_segment_size;
+ unsigned int min_align_mask;
unsigned long segment_boundary_mask;
};
@@ -569,7 +570,7 @@ struct device {
* @flags: Link flags.
* @rpm_active: Whether or not the consumer device is runtime-PM-active.
* @kref: Count repeated addition of the same link.
- * @rcu_head: An RCU head to use for deferred execution of SRCU callbacks.
+ * @rm_work: Work structure used for removing the link.
* @supplier_preactivated: Supplier has been made active before consumer probe.
*/
struct device_link {
@@ -582,9 +583,7 @@ struct device_link {
u32 flags;
refcount_t rpm_active;
struct kref kref;
-#ifdef CONFIG_SRCU
- struct rcu_head rcu_head;
-#endif
+ struct work_struct rm_work;
bool supplier_preactivated; /* Owned by consumer probe. */
};
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 9561510..a7d70cde 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -500,6 +500,22 @@ static inline int dma_set_seg_boundary(struct device *dev, unsigned long mask)
return -EIO;
}
+static inline unsigned int dma_get_min_align_mask(struct device *dev)
+{
+ if (dev->dma_parms)
+ return dev->dma_parms->min_align_mask;
+ return 0;
+}
+
+static inline int dma_set_min_align_mask(struct device *dev,
+ unsigned int min_align_mask)
+{
+ if (WARN_ON_ONCE(!dev->dma_parms))
+ return -EIO;
+ dev->dma_parms->min_align_mask = min_align_mask;
+ return 0;
+}
+
static inline int dma_get_cache_alignment(void)
{
#ifdef ARCH_DMA_MINALIGN
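The new min_align_mask tells the DMA layer that a mapping (including any swiotlb bounce buffer) must preserve the original buffer's offset within the given alignment granule. A hedged sketch of how a driver probe path could opt in, assuming dev->dma_parms is already allocated; the 4 KiB granule and function name are illustrative choices, not something this patch mandates:

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/sizes.h>

static int example_setup_dma(struct device *dev)
{
	/* Keep the low 12 address bits identical across bounce buffering,
	 * e.g. for hardware that derives an offset from the DMA address. */
	if (dma_set_min_align_mask(dev, SZ_4K - 1))
		return -EIO;	/* only fails if dev->dma_parms is missing */

	return 0;
}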
diff --git a/include/linux/elevator.h b/include/linux/elevator.h
index bacc40a..bc26b4e 100644
--- a/include/linux/elevator.h
+++ b/include/linux/elevator.h
@@ -34,7 +34,7 @@ struct elevator_mq_ops {
void (*depth_updated)(struct blk_mq_hw_ctx *);
bool (*allow_merge)(struct request_queue *, struct request *, struct bio *);
- bool (*bio_merge)(struct blk_mq_hw_ctx *, struct bio *, unsigned int);
+ bool (*bio_merge)(struct request_queue *, struct bio *, unsigned int);
int (*request_merge)(struct request_queue *q, struct request **, struct bio *);
void (*request_merged)(struct request_queue *, struct request *, enum elv_merge);
void (*requests_merged)(struct request_queue *, struct request *, struct request *);
diff --git a/include/linux/firmware/xlnx-zynqmp.h b/include/linux/firmware/xlnx-zynqmp.h
index 41a1bab..4930ece 100644
--- a/include/linux/firmware/xlnx-zynqmp.h
+++ b/include/linux/firmware/xlnx-zynqmp.h
@@ -354,111 +354,131 @@ int zynqmp_pm_read_pggs(u32 index, u32 *value);
int zynqmp_pm_system_shutdown(const u32 type, const u32 subtype);
int zynqmp_pm_set_boot_health_status(u32 value);
#else
-static inline struct zynqmp_eemi_ops *zynqmp_pm_get_eemi_ops(void)
-{
- return ERR_PTR(-ENODEV);
-}
static inline int zynqmp_pm_get_api_version(u32 *version)
{
return -ENODEV;
}
+
static inline int zynqmp_pm_get_chipid(u32 *idcode, u32 *version)
{
return -ENODEV;
}
+
static inline int zynqmp_pm_query_data(struct zynqmp_pm_query_data qdata,
u32 *out)
{
return -ENODEV;
}
+
static inline int zynqmp_pm_clock_enable(u32 clock_id)
{
return -ENODEV;
}
+
static inline int zynqmp_pm_clock_disable(u32 clock_id)
{
return -ENODEV;
}
+
static inline int zynqmp_pm_clock_getstate(u32 clock_id, u32 *state)
{
return -ENODEV;
}
+
static inline int zynqmp_pm_clock_setdivider(u32 clock_id, u32 divider)
{
return -ENODEV;
}
+
static inline int zynqmp_pm_clock_getdivider(u32 clock_id, u32 *divider)
{
return -ENODEV;
}
+
static inline int zynqmp_pm_clock_setrate(u32 clock_id, u64 rate)
{
return -ENODEV;
}
+
static inline int zynqmp_pm_clock_getrate(u32 clock_id, u64 *rate)
{
return -ENODEV;
}
+
static inline int zynqmp_pm_clock_setparent(u32 clock_id, u32 parent_id)
{
return -ENODEV;
}
+
static inline int zynqmp_pm_clock_getparent(u32 clock_id, u32 *parent_id)
{
return -ENODEV;
}
+
static inline int zynqmp_pm_set_pll_frac_mode(u32 clk_id, u32 mode)
{
return -ENODEV;
}
+
static inline int zynqmp_pm_get_pll_frac_mode(u32 clk_id, u32 *mode)
{
return -ENODEV;
}
+
static inline int zynqmp_pm_set_pll_frac_data(u32 clk_id, u32 data)
{
return -ENODEV;
}
+
static inline int zynqmp_pm_get_pll_frac_data(u32 clk_id, u32 *data)
{
return -ENODEV;
}
+
static inline int zynqmp_pm_set_sd_tapdelay(u32 node_id, u32 type, u32 value)
{
return -ENODEV;
}
+
static inline int zynqmp_pm_sd_dll_reset(u32 node_id, u32 type)
{
return -ENODEV;
}
+
static inline int zynqmp_pm_reset_assert(const enum zynqmp_pm_reset reset,
const enum zynqmp_pm_reset_action assert_flag)
{
return -ENODEV;
}
+
static inline int zynqmp_pm_reset_get_status(const enum zynqmp_pm_reset reset,
u32 *status)
{
return -ENODEV;
}
+
static inline int zynqmp_pm_init_finalize(void)
{
return -ENODEV;
}
+
static inline int zynqmp_pm_set_suspend_mode(u32 mode)
{
return -ENODEV;
}
+
static inline int zynqmp_pm_request_node(const u32 node, const u32 capabilities,
const u32 qos,
const enum zynqmp_pm_request_ack ack)
{
return -ENODEV;
}
+
static inline int zynqmp_pm_release_node(const u32 node)
{
return -ENODEV;
}
+
static inline int zynqmp_pm_set_requirement(const u32 node,
const u32 capabilities,
const u32 qos,
@@ -466,39 +486,48 @@ static inline int zynqmp_pm_set_requirement(const u32 node,
{
return -ENODEV;
}
+
static inline int zynqmp_pm_aes_engine(const u64 address, u32 *out)
{
return -ENODEV;
}
+
static inline int zynqmp_pm_fpga_load(const u64 address, const u32 size,
const u32 flags)
{
return -ENODEV;
}
+
static inline int zynqmp_pm_fpga_get_status(u32 *value)
{
return -ENODEV;
}
+
static inline int zynqmp_pm_write_ggs(u32 index, u32 value)
{
return -ENODEV;
}
+
static inline int zynqmp_pm_read_ggs(u32 index, u32 *value)
{
return -ENODEV;
}
+
static inline int zynqmp_pm_write_pggs(u32 index, u32 value)
{
return -ENODEV;
}
+
static inline int zynqmp_pm_read_pggs(u32 index, u32 *value)
{
return -ENODEV;
}
+
static inline int zynqmp_pm_system_shutdown(const u32 type, const u32 subtype)
{
return -ENODEV;
}
+
static inline int zynqmp_pm_set_boot_health_status(u32 value)
{
return -ENODEV;
diff --git a/include/linux/gpio/driver.h b/include/linux/gpio/driver.h
index 4a7e295..8e14430 100644
--- a/include/linux/gpio/driver.h
+++ b/include/linux/gpio/driver.h
@@ -637,8 +637,17 @@ int gpiochip_irqchip_add_key(struct gpio_chip *gc,
bool gpiochip_irqchip_irq_valid(const struct gpio_chip *gc,
unsigned int offset);
+#ifdef CONFIG_GPIOLIB_IRQCHIP
int gpiochip_irqchip_add_domain(struct gpio_chip *gc,
struct irq_domain *domain);
+#else
+static inline int gpiochip_irqchip_add_domain(struct gpio_chip *gc,
+ struct irq_domain *domain)
+{
+ WARN_ON(1);
+ return -EINVAL;
+}
+#endif
#ifdef CONFIG_LOCKDEP
diff --git a/include/linux/hid.h b/include/linux/hid.h
index 5868465..8578db5 100644
--- a/include/linux/hid.h
+++ b/include/linux/hid.h
@@ -262,6 +262,8 @@ struct hid_item {
#define HID_CP_SELECTION 0x000c0080
#define HID_CP_MEDIASELECTION 0x000c0087
#define HID_CP_SELECTDISC 0x000c00ba
+#define HID_CP_VOLUMEUP 0x000c00e9
+#define HID_CP_VOLUMEDOWN 0x000c00ea
#define HID_CP_PLAYBACKSPEED 0x000c00f1
#define HID_CP_PROXIMITY 0x000c0109
#define HID_CP_SPEAKERSYSTEM 0x000c0160
diff --git a/include/linux/host1x.h b/include/linux/host1x.h
index ce59a6a..9eb77c8 100644
--- a/include/linux/host1x.h
+++ b/include/linux/host1x.h
@@ -320,7 +320,14 @@ static inline struct host1x_device *to_host1x_device(struct device *dev)
int host1x_device_init(struct host1x_device *device);
int host1x_device_exit(struct host1x_device *device);
-int host1x_client_register(struct host1x_client *client);
+int __host1x_client_register(struct host1x_client *client,
+ struct lock_class_key *key);
+#define host1x_client_register(class) \
+ ({ \
+ static struct lock_class_key __key; \
+ __host1x_client_register(class, &__key); \
+ })
+
int host1x_client_unregister(struct host1x_client *client);
int host1x_client_suspend(struct host1x_client *client);
diff --git a/include/linux/i2c.h b/include/linux/i2c.h
index 5662265..a670ae1 100644
--- a/include/linux/i2c.h
+++ b/include/linux/i2c.h
@@ -687,6 +687,8 @@ struct i2c_adapter_quirks {
#define I2C_AQ_NO_ZERO_LEN_READ BIT(5)
#define I2C_AQ_NO_ZERO_LEN_WRITE BIT(6)
#define I2C_AQ_NO_ZERO_LEN (I2C_AQ_NO_ZERO_LEN_READ | I2C_AQ_NO_ZERO_LEN_WRITE)
+/* adapter cannot do repeated START */
+#define I2C_AQ_NO_REP_START BIT(7)
/*
* i2c_adapter is the structure used to identify a physical i2c bus along
diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
index 9452268..c00ee34 100644
--- a/include/linux/intel-iommu.h
+++ b/include/linux/intel-iommu.h
@@ -42,6 +42,8 @@
#define DMA_FL_PTE_PRESENT BIT_ULL(0)
#define DMA_FL_PTE_US BIT_ULL(2)
+#define DMA_FL_PTE_ACCESS BIT_ULL(5)
+#define DMA_FL_PTE_DIRTY BIT_ULL(6)
#define DMA_FL_PTE_XD BIT_ULL(63)
#define ADDR_WIDTH_5LEVEL (57)
@@ -367,6 +369,7 @@ enum {
/* PASID cache invalidation granu */
#define QI_PC_ALL_PASIDS 0
#define QI_PC_PASID_SEL 1
+#define QI_PC_GLOBAL 3
#define QI_EIOTLB_ADDR(addr) ((u64)(addr) & VTD_PAGE_MASK)
#define QI_EIOTLB_IH(ih) (((u64)ih) << 6)
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index f11f507..e90c267 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -544,7 +544,7 @@ static inline void iommu_iotlb_gather_add_page(struct iommu_domain *domain,
* structure can be rewritten.
*/
if (gather->pgsize != size ||
- end < gather->start || start > gather->end) {
+ end + 1 < gather->start || start > gather->end + 1) {
if (gather->pgsize)
iommu_iotlb_sync(domain, gather);
gather->pgsize = size;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 7f2e2a0..a2278b9 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -190,8 +190,8 @@ int kvm_io_bus_read(struct kvm_vcpu *vcpu, enum kvm_bus bus_idx, gpa_t addr,
int len, void *val);
int kvm_io_bus_register_dev(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
int len, struct kvm_io_device *dev);
-void kvm_io_bus_unregister_dev(struct kvm *kvm, enum kvm_bus bus_idx,
- struct kvm_io_device *dev);
+int kvm_io_bus_unregister_dev(struct kvm *kvm, enum kvm_bus bus_idx,
+ struct kvm_io_device *dev);
struct kvm_io_device *kvm_io_bus_get_dev(struct kvm *kvm, enum kvm_bus bus_idx,
gpa_t addr);
diff --git a/include/linux/marvell_phy.h b/include/linux/marvell_phy.h
index ff7b760..f5cf19d 100644
--- a/include/linux/marvell_phy.h
+++ b/include/linux/marvell_phy.h
@@ -25,11 +25,12 @@
#define MARVELL_PHY_ID_88X3310 0x002b09a0
#define MARVELL_PHY_ID_88E2110 0x002b09b0
-/* The MV88e6390 Ethernet switch contains embedded PHYs. These PHYs do
+/* These Ethernet switch families contain embedded PHYs, but they do
* not have a model ID. So the switch driver traps reads to the ID2
* register and returns the switch family ID
*/
-#define MARVELL_PHY_ID_88E6390 0x01410f90
+#define MARVELL_PHY_ID_88E6341_FAMILY 0x01410f41
+#define MARVELL_PHY_ID_88E6390_FAMILY 0x01410f90
#define MARVELL_PHY_FAMILY_ID(id) ((id) >> 4)
diff --git a/include/linux/mfd/da9063/registers.h b/include/linux/mfd/da9063/registers.h
index 1dbabf1..6e0f66a 100644
--- a/include/linux/mfd/da9063/registers.h
+++ b/include/linux/mfd/da9063/registers.h
@@ -1037,6 +1037,9 @@
#define DA9063_NONKEY_PIN_AUTODOWN 0x02
#define DA9063_NONKEY_PIN_AUTOFLPRT 0x03
+/* DA9063_REG_CONFIG_J (addr=0x10F) */
+#define DA9063_TWOWIRE_TO 0x40
+
/* DA9063_REG_MON_REG_5 (addr=0x116) */
#define DA9063_MON_A8_IDX_MASK 0x07
#define DA9063_MON_A8_IDX_NONE 0x00
diff --git a/include/linux/mfd/intel-m10-bmc.h b/include/linux/mfd/intel-m10-bmc.h
index c8ef2f1..06da62c 100644
--- a/include/linux/mfd/intel-m10-bmc.h
+++ b/include/linux/mfd/intel-m10-bmc.h
@@ -11,7 +11,7 @@
#define M10BMC_LEGACY_SYS_BASE 0x300400
#define M10BMC_SYS_BASE 0x300800
-#define M10BMC_MEM_END 0x200000fc
+#define M10BMC_MEM_END 0x1fffffff
/* Register offset of system registers */
#define NIOS2_FW_VERSION 0x0
diff --git a/include/linux/minmax.h b/include/linux/minmax.h
index c0f57b0..5433c08 100644
--- a/include/linux/minmax.h
+++ b/include/linux/minmax.h
@@ -2,6 +2,8 @@
#ifndef _LINUX_MINMAX_H
#define _LINUX_MINMAX_H
+#include <linux/const.h>
+
/*
* min()/max()/clamp() macros must accomplish three things:
*
@@ -17,14 +19,6 @@
#define __typecheck(x, y) \
(!!(sizeof((typeof(x) *)1 == (typeof(y) *)1)))
-/*
- * This returns a constant expression while determining if an argument is
- * a constant expression, most importantly without evaluating the argument.
- * Glory to Martin Uecker <Martin.Uecker@med.uni-goettingen.de>
- */
-#define __is_constexpr(x) \
- (sizeof(int) == sizeof(*(8 ? ((void *)((long)(x) * 0l)) : (int *)8)))
-
#define __no_side_effects(x, y) \
(__is_constexpr(x) && __is_constexpr(y))
diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
index 2333524..cc9ee07 100644
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -435,11 +435,11 @@ struct mlx5_ifc_flow_table_prop_layout_bits {
u8 reserved_at_60[0x18];
u8 log_max_ft_num[0x8];
- u8 reserved_at_80[0x18];
+ u8 reserved_at_80[0x10];
+ u8 log_max_flow_counter[0x8];
u8 log_max_destination[0x8];
- u8 log_max_flow_counter[0x8];
- u8 reserved_at_a8[0x10];
+ u8 reserved_at_a0[0x18];
u8 log_max_flow[0x8];
u8 reserved_at_c0[0x40];
@@ -8719,6 +8719,8 @@ struct mlx5_ifc_pplm_reg_bits {
u8 fec_override_admin_100g_2x[0x10];
u8 fec_override_admin_50g_1x[0x10];
+
+ u8 reserved_at_140[0x140];
};
struct mlx5_ifc_ppcnt_reg_bits {
@@ -10056,7 +10058,7 @@ struct mlx5_ifc_pbmc_reg_bits {
struct mlx5_ifc_bufferx_reg_bits buffer[10];
- u8 reserved_at_2e0[0x40];
+ u8 reserved_at_2e0[0x80];
};
struct mlx5_ifc_qtct_reg_bits {
diff --git a/include/linux/mlx5/mpfs.h b/include/linux/mlx5/mpfs.h
new file mode 100644
index 0000000..bf700c8
--- /dev/null
+++ b/include/linux/mlx5/mpfs.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+ * Copyright (c) 2021 Mellanox Technologies Ltd.
+ */
+
+#ifndef _MLX5_MPFS_
+#define _MLX5_MPFS_
+
+struct mlx5_core_dev;
+
+#ifdef CONFIG_MLX5_MPFS
+int mlx5_mpfs_add_mac(struct mlx5_core_dev *dev, u8 *mac);
+int mlx5_mpfs_del_mac(struct mlx5_core_dev *dev, u8 *mac);
+#else /* #ifndef CONFIG_MLX5_MPFS */
+static inline int mlx5_mpfs_add_mac(struct mlx5_core_dev *dev, u8 *mac) { return 0; }
+static inline int mlx5_mpfs_del_mac(struct mlx5_core_dev *dev, u8 *mac) { return 0; }
+#endif
+
+#endif
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 08a48d3..5106db3 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3178,5 +3178,37 @@ unsigned long wp_shared_mapping_range(struct address_space *mapping,
extern int sysctl_nr_trim_pages;
+/**
+ * seal_check_future_write - Check for F_SEAL_FUTURE_WRITE flag and handle it
+ * @seals: the seals to check
+ * @vma: the vma to operate on
+ *
+ * Check whether F_SEAL_FUTURE_WRITE is set; if so, do the proper checks and
+ * handling on the vma flags. Return 0 if the check passes, or <0 on error.
+ */
+static inline int seal_check_future_write(int seals, struct vm_area_struct *vma)
+{
+ if (seals & F_SEAL_FUTURE_WRITE) {
+ /*
+ * New PROT_WRITE and MAP_SHARED mmaps are not allowed when
+ * "future write" seal active.
+ */
+ if ((vma->vm_flags & VM_SHARED) && (vma->vm_flags & VM_WRITE))
+ return -EPERM;
+
+ /*
+ * Since an F_SEAL_FUTURE_WRITE sealed memfd can be mapped as
+ * MAP_SHARED and read-only, take care to not allow mprotect to
+ * revert protections on such mappings. Do this only for shared
+ * mappings. For private mappings, don't need to mask
+ * VM_MAYWRITE as we still want them to be COW-writable.
+ */
+ if (vma->vm_flags & VM_SHARED)
+ vma->vm_flags &= ~(VM_MAYWRITE);
+ }
+
+ return 0;
+}
+
#endif /* __KERNEL__ */
#endif /* _LINUX_MM_H */
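seal_check_future_write() is meant to be called from an ->mmap handler so that shmem and hugetlbfs treat F_SEAL_FUTURE_WRITE identically. A minimal sketch of that call pattern; example_get_seals() is a hypothetical stand-in for however the filesystem looks up the inode's seals:

#include <linux/fs.h>
#include <linux/mm.h>

int example_get_seals(struct inode *inode);	/* hypothetical helper */

static int example_mmap(struct file *file, struct vm_area_struct *vma)
{
	int seals = example_get_seals(file_inode(file));
	int ret;

	/* Rejects new shared writable mappings and drops VM_MAYWRITE from
	 * shared read-only ones when the future-write seal is active. */
	ret = seal_check_future_write(seals, vma);
	if (ret)
		return ret;

	/* ... normal mmap setup continues ... */
	return 0;
}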
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 3433ecc..a4fff7d 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -97,10 +97,10 @@ struct page {
};
struct { /* page_pool used by netstack */
/**
- * @dma_addr: might require a 64-bit value even on
+ * @dma_addr: might require a 64-bit value on
* 32-bit architectures.
*/
- dma_addr_t dma_addr;
+ unsigned long dma_addr[2];
};
struct { /* slab, slob and slub */
union {
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index c079b93..40d7e98 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -285,9 +285,6 @@ struct mmc_host {
u32 ocr_avail_sdio; /* SDIO-specific OCR */
u32 ocr_avail_sd; /* SD-specific OCR */
u32 ocr_avail_mmc; /* MMC-specific OCR */
-#ifdef CONFIG_PM_SLEEP
- struct notifier_block pm_notify;
-#endif
struct wakeup_source *ws; /* Enable consume of uevents */
u32 max_current_330;
u32 max_current_300;
diff --git a/include/linux/netfilter_arp/arp_tables.h b/include/linux/netfilter_arp/arp_tables.h
index 7d3537c..26a1329 100644
--- a/include/linux/netfilter_arp/arp_tables.h
+++ b/include/linux/netfilter_arp/arp_tables.h
@@ -52,8 +52,9 @@ extern void *arpt_alloc_initial_table(const struct xt_table *);
int arpt_register_table(struct net *net, const struct xt_table *table,
const struct arpt_replace *repl,
const struct nf_hook_ops *ops, struct xt_table **res);
-void arpt_unregister_table(struct net *net, struct xt_table *table,
- const struct nf_hook_ops *ops);
+void arpt_unregister_table(struct net *net, struct xt_table *table);
+void arpt_unregister_table_pre_exit(struct net *net, struct xt_table *table,
+ const struct nf_hook_ops *ops);
extern unsigned int arpt_do_table(struct sk_buff *skb,
const struct nf_hook_state *state,
struct xt_table *table);
diff --git a/include/linux/netfilter_bridge/ebtables.h b/include/linux/netfilter_bridge/ebtables.h
index 2f5c4e6..3a95614 100644
--- a/include/linux/netfilter_bridge/ebtables.h
+++ b/include/linux/netfilter_bridge/ebtables.h
@@ -110,8 +110,9 @@ extern int ebt_register_table(struct net *net,
const struct ebt_table *table,
const struct nf_hook_ops *ops,
struct ebt_table **res);
-extern void ebt_unregister_table(struct net *net, struct ebt_table *table,
- const struct nf_hook_ops *);
+extern void ebt_unregister_table(struct net *net, struct ebt_table *table);
+void ebt_unregister_table_pre_exit(struct net *net, const char *tablename,
+ const struct nf_hook_ops *ops);
extern unsigned int ebt_do_table(struct sk_buff *skb,
const struct nf_hook_state *state,
struct ebt_table *table);
diff --git a/include/linux/nfs_xdr.h b/include/linux/nfs_xdr.h
index d63cb86..5491ad5 100644
--- a/include/linux/nfs_xdr.h
+++ b/include/linux/nfs_xdr.h
@@ -15,6 +15,8 @@
#define NFS_DEF_FILE_IO_SIZE (4096U)
#define NFS_MIN_FILE_IO_SIZE (1024U)
+#define NFS_BITMASK_SZ 3
+
struct nfs4_string {
unsigned int len;
char *data;
@@ -525,7 +527,8 @@ struct nfs_closeargs {
struct nfs_seqid * seqid;
fmode_t fmode;
u32 share_access;
- u32 * bitmask;
+ const u32 * bitmask;
+ u32 bitmask_store[NFS_BITMASK_SZ];
struct nfs4_layoutreturn_args *lr_args;
};
@@ -608,7 +611,8 @@ struct nfs4_delegreturnargs {
struct nfs4_sequence_args seq_args;
const struct nfs_fh *fhandle;
const nfs4_stateid *stateid;
- u32 * bitmask;
+ const u32 *bitmask;
+ u32 bitmask_store[NFS_BITMASK_SZ];
struct nfs4_layoutreturn_args *lr_args;
};
@@ -648,7 +652,8 @@ struct nfs_pgio_args {
union {
unsigned int replen; /* used by read */
struct {
- u32 * bitmask; /* used by write */
+ const u32 * bitmask; /* used by write */
+ u32 bitmask_store[NFS_BITMASK_SZ]; /* used by write */
enum nfs3_stable_how stable; /* used by write */
};
};
diff --git a/include/linux/pci-epc.h b/include/linux/pci-epc.h
index cc66bec..88d311ba 100644
--- a/include/linux/pci-epc.h
+++ b/include/linux/pci-epc.h
@@ -201,8 +201,10 @@ int pci_epc_start(struct pci_epc *epc);
void pci_epc_stop(struct pci_epc *epc);
const struct pci_epc_features *pci_epc_get_features(struct pci_epc *epc,
u8 func_no);
-unsigned int pci_epc_get_first_free_bar(const struct pci_epc_features
- *epc_features);
+enum pci_barno
+pci_epc_get_first_free_bar(const struct pci_epc_features *epc_features);
+enum pci_barno pci_epc_get_next_free_bar(const struct pci_epc_features
+ *epc_features, enum pci_barno bar);
struct pci_epc *pci_epc_get(const char *epc_name);
void pci_epc_put(struct pci_epc *epc);
diff --git a/include/linux/pci-epf.h b/include/linux/pci-epf.h
index 6644ff3..fa3aca4 100644
--- a/include/linux/pci-epf.h
+++ b/include/linux/pci-epf.h
@@ -21,6 +21,7 @@ enum pci_notify_event {
};
enum pci_barno {
+ NO_BAR = -1,
BAR_0,
BAR_1,
BAR_2,
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 22ce060..072ac6c 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -607,6 +607,7 @@ struct swevent_hlist {
#define PERF_ATTACH_TASK_DATA 0x08
#define PERF_ATTACH_ITRACE 0x10
#define PERF_ATTACH_SCHED_CB 0x20
+#define PERF_ATTACH_CHILD 0x40
struct perf_cgroup;
struct perf_buffer;
diff --git a/include/linux/platform_data/gpio-omap.h b/include/linux/platform_data/gpio-omap.h
index 8b30b14..f377817 100644
--- a/include/linux/platform_data/gpio-omap.h
+++ b/include/linux/platform_data/gpio-omap.h
@@ -85,6 +85,7 @@
* omap2+ specific GPIO registers
*/
#define OMAP24XX_GPIO_REVISION 0x0000
+#define OMAP24XX_GPIO_SYSCONFIG 0x0010
#define OMAP24XX_GPIO_IRQSTATUS1 0x0018
#define OMAP24XX_GPIO_IRQSTATUS2 0x0028
#define OMAP24XX_GPIO_IRQENABLE2 0x002c
@@ -108,6 +109,7 @@
#define OMAP24XX_GPIO_SETDATAOUT 0x0094
#define OMAP4_GPIO_REVISION 0x0000
+#define OMAP4_GPIO_SYSCONFIG 0x0010
#define OMAP4_GPIO_EOI 0x0020
#define OMAP4_GPIO_IRQSTATUSRAW0 0x0024
#define OMAP4_GPIO_IRQSTATUSRAW1 0x0028
@@ -148,6 +150,7 @@
#ifndef __ASSEMBLER__
struct omap_gpio_reg_offs {
u16 revision;
+ u16 sysconfig;
u16 direction;
u16 datain;
u16 dataout;
diff --git a/include/linux/platform_device.h b/include/linux/platform_device.h
index 77a2aad..17f9cd5 100644
--- a/include/linux/platform_device.h
+++ b/include/linux/platform_device.h
@@ -350,4 +350,7 @@ static inline int is_sh_early_platform_device(struct platform_device *pdev)
}
#endif /* CONFIG_SUPERH */
+/* For now only SuperH uses it */
+void early_platform_cleanup(void);
+
#endif /* _PLATFORM_DEVICE_H_ */
diff --git a/include/linux/pm.h b/include/linux/pm.h
index 47aca6b..52d9724 100644
--- a/include/linux/pm.h
+++ b/include/linux/pm.h
@@ -600,6 +600,7 @@ struct dev_pm_info {
unsigned int idle_notification:1;
unsigned int request_pending:1;
unsigned int deferred_resume:1;
+ unsigned int needs_force_resume:1;
unsigned int runtime_auto:1;
bool ignore_children:1;
unsigned int no_callbacks:1;
diff --git a/include/linux/pm_runtime.h b/include/linux/pm_runtime.h
index b492ae0..6c08a08 100644
--- a/include/linux/pm_runtime.h
+++ b/include/linux/pm_runtime.h
@@ -265,7 +265,7 @@ static inline void pm_runtime_no_callbacks(struct device *dev) {}
static inline void pm_runtime_irq_safe(struct device *dev) {}
static inline bool pm_runtime_is_irq_safe(struct device *dev) { return false; }
-static inline bool pm_runtime_callbacks_present(struct device *dev) { return false; }
+static inline bool pm_runtime_has_no_callbacks(struct device *dev) { return false; }
static inline void pm_runtime_mark_last_busy(struct device *dev) {}
static inline void __pm_runtime_use_autosuspend(struct device *dev,
bool use) {}
diff --git a/include/linux/power/bq27xxx_battery.h b/include/linux/power/bq27xxx_battery.h
index 111a40d..8d5f4f4 100644
--- a/include/linux/power/bq27xxx_battery.h
+++ b/include/linux/power/bq27xxx_battery.h
@@ -53,7 +53,6 @@ struct bq27xxx_reg_cache {
int capacity;
int energy;
int flags;
- int power_avg;
int health;
};
diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
index fec0c5a..82126d5 100644
--- a/include/linux/skmsg.h
+++ b/include/linux/skmsg.h
@@ -349,8 +349,13 @@ static inline void sk_psock_update_proto(struct sock *sk,
static inline void sk_psock_restore_proto(struct sock *sk,
struct sk_psock *psock)
{
- sk->sk_prot->unhash = psock->saved_unhash;
if (inet_csk_has_ulp(sk)) {
+ /* TLS does not have an unhash proto in SW cases, but we need
+ * to ensure we stop using the sock_map unhash routine because
+ * the associated psock is being removed. So use the original
+ * unhash handler.
+ */
+ WRITE_ONCE(sk->sk_prot->unhash, psock->saved_unhash);
tcp_update_ulp(sk, psock->sk_proto, psock->saved_write_space);
} else {
sk->sk_write_space = psock->saved_write_space;
diff --git a/include/linux/smp.h b/include/linux/smp.h
index 9f13966..04f44e0 100644
--- a/include/linux/smp.h
+++ b/include/linux/smp.h
@@ -74,7 +74,7 @@ void on_each_cpu_cond(smp_cond_func_t cond_func, smp_call_func_t func,
void on_each_cpu_cond_mask(smp_cond_func_t cond_func, smp_call_func_t func,
void *info, bool wait, const struct cpumask *mask);
-int smp_call_function_single_async(int cpu, call_single_data_t *csd);
+int smp_call_function_single_async(int cpu, struct __call_single_data *csd);
#ifdef CONFIG_SMP
diff --git a/include/linux/spi/spi.h b/include/linux/spi/spi.h
index b390fda..2d906b9 100644
--- a/include/linux/spi/spi.h
+++ b/include/linux/spi/spi.h
@@ -511,6 +511,9 @@ struct spi_controller {
#define SPI_MASTER_GPIO_SS BIT(5) /* GPIO CS must select slave */
+ /* flag indicating this is a non-devres managed controller */
+ bool devm_allocated;
+
/* flag indicating this is an SPI slave controller */
bool slave;
diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h
index 3ac5037..cad1fa2 100644
--- a/include/linux/sunrpc/xprt.h
+++ b/include/linux/sunrpc/xprt.h
@@ -367,6 +367,8 @@ struct rpc_xprt * xprt_alloc(struct net *net, size_t size,
unsigned int num_prealloc,
unsigned int max_req);
void xprt_free(struct rpc_xprt *);
+void xprt_add_backlog(struct rpc_xprt *xprt, struct rpc_task *task);
+bool xprt_wake_up_backlog(struct rpc_xprt *xprt, struct rpc_rqst *req);
static inline int
xprt_enable_swap(struct rpc_xprt *xprt)
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index fbdc657..5d2dbe7 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -29,6 +29,7 @@ enum swiotlb_force {
* controllable.
*/
#define IO_TLB_SHIFT 11
+#define IO_TLB_SIZE (1 << IO_TLB_SHIFT)
extern void swiotlb_init(int verbose);
int swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose);
diff --git a/include/linux/tty.h b/include/linux/tty.h
index bc8caac..5972f43 100644
--- a/include/linux/tty.h
+++ b/include/linux/tty.h
@@ -303,7 +303,6 @@ struct tty_struct {
spinlock_t flow_lock;
/* Termios values are protected by the termios rwsem */
struct ktermios termios, termios_locked;
- struct termiox *termiox; /* May be NULL for unsupported */
char name[64];
struct pid *pgrp; /* Protected by ctrl lock */
/*
diff --git a/include/linux/tty_driver.h b/include/linux/tty_driver.h
index 3584462..2f719b4 100644
--- a/include/linux/tty_driver.h
+++ b/include/linux/tty_driver.h
@@ -224,19 +224,11 @@
* line). See tty_do_resize() if you need to wrap the standard method
* in your own logic - the usual case.
*
- * void (*set_termiox)(struct tty_struct *tty, struct termiox *new);
- *
- * Called when the device receives a termiox based ioctl. Passes down
- * the requested data from user space. This method will not be invoked
- * unless the tty also has a valid tty->termiox pointer.
- *
- * Optional: Called under the termios lock
- *
* int (*get_icount)(struct tty_struct *tty, struct serial_icounter *icount);
*
* Called when the device receives a TIOCGICOUNT ioctl. Passed a kernel
* structure to complete. This method is optional and will only be called
- * if provided (otherwise EINVAL will be returned).
+ * if provided (otherwise ENOTTY will be returned).
*/
#include <linux/export.h>
@@ -285,7 +277,6 @@ struct tty_operations {
int (*tiocmset)(struct tty_struct *tty,
unsigned int set, unsigned int clear);
int (*resize)(struct tty_struct *tty, struct winsize *ws);
- int (*set_termiox)(struct tty_struct *tty, struct termiox *tnew);
int (*get_icount)(struct tty_struct *tty,
struct serial_icounter_struct *icount);
int (*get_serial)(struct tty_struct *tty, struct serial_struct *p);
diff --git a/include/linux/udp.h b/include/linux/udp.h
index aa84597..ae58ff3 100644
--- a/include/linux/udp.h
+++ b/include/linux/udp.h
@@ -51,7 +51,9 @@ struct udp_sock {
* different encapsulation layer set
* this
*/
- gro_enabled:1; /* Can accept GRO packets */
+ gro_enabled:1, /* Request GRO aggregation */
+ accept_udp_l4:1,
+ accept_udp_fraglist:1;
/*
* Following member retains the information to create a UDP header
* when the socket is uncorked.
@@ -131,8 +133,16 @@ static inline void udp_cmsg_recv(struct msghdr *msg, struct sock *sk,
static inline bool udp_unexpected_gso(struct sock *sk, struct sk_buff *skb)
{
- return !udp_sk(sk)->gro_enabled && skb_is_gso(skb) &&
- skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4;
+ if (!skb_is_gso(skb))
+ return false;
+
+ if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4 && !udp_sk(sk)->accept_udp_l4)
+ return true;
+
+ if (skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST && !udp_sk(sk)->accept_udp_fraglist)
+ return true;
+
+ return false;
}
#define udp_portaddr_for_each_entry(__sk, list) \
diff --git a/include/linux/user_namespace.h b/include/linux/user_namespace.h
index 6ef1c71..7616c7b 100644
--- a/include/linux/user_namespace.h
+++ b/include/linux/user_namespace.h
@@ -64,6 +64,9 @@ struct user_namespace {
kgid_t group;
struct ns_common ns;
unsigned long flags;
+	/* parent_could_setfcap: true if the creator of this ns had CAP_SETFCAP
+ * in its effective capability set at the child ns creation time. */
+ bool parent_could_setfcap;
#ifdef CONFIG_KEYS
/* List of joinable keyrings in this namespace. Modification access of
diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h
index 2d29d25..b465f8f 100644
--- a/include/linux/virtio_net.h
+++ b/include/linux/virtio_net.h
@@ -62,6 +62,8 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
return -EINVAL;
}
+ skb_reset_mac_header(skb);
+
if (hdr->flags & VIRTIO_NET_HDR_F_NEEDS_CSUM) {
u32 start = __virtio16_to_cpu(little_endian, hdr->csum_start);
u32 off = __virtio16_to_cpu(little_endian, hdr->csum_offset);
diff --git a/include/media/v4l2-ctrls.h b/include/media/v4l2-ctrls.h
index cb25f34..9ecbb98 100644
--- a/include/media/v4l2-ctrls.h
+++ b/include/media/v4l2-ctrls.h
@@ -303,12 +303,14 @@ struct v4l2_ctrl {
* the control has been applied. This prevents applying controls
* from a cluster with multiple controls twice (when the first
* control of a cluster is applied, they all are).
- * @req: If set, this refers to another request that sets this control.
+ * @valid_p_req: If set, then p_req contains the control value for the request.
* @p_req: If the control handler containing this control reference
* is bound to a media request, then this points to the
- * value of the control that should be applied when the request
+ * value of the control that must be applied when the request
* is executed, or to the value of the control at the time
- * that the request was completed.
+ * that the request was completed. If @valid_p_req is false,
+ * then this control was never set for this request and the
+ * control will not be updated when this request is applied.
*
* Each control handler has a list of these refs. The list_head is used to
* keep a sorted-by-control-ID list of all controls, while the next pointer
@@ -321,7 +323,7 @@ struct v4l2_ctrl_ref {
struct v4l2_ctrl_helper *helper;
bool from_other_dev;
bool req_done;
- struct v4l2_ctrl_ref *req;
+ bool valid_p_req;
union v4l2_ctrl_ptr p_req;
};
@@ -348,7 +350,7 @@ struct v4l2_ctrl_ref {
* @error: The error code of the first failed control addition.
* @request_is_queued: True if the request was queued.
* @requests: List to keep track of open control handler request objects.
- * For the parent control handler (@req_obj.req == NULL) this
+ * For the parent control handler (@req_obj.ops == NULL) this
* is the list header. When the parent control handler is
* removed, it has to unbind and put all these requests since
* they refer to the parent.
diff --git a/include/net/act_api.h b/include/net/act_api.h
index 89b42a1..2c88b8a 100644
--- a/include/net/act_api.h
+++ b/include/net/act_api.h
@@ -170,12 +170,7 @@ void tcf_idr_insert_many(struct tc_action *actions[]);
void tcf_idr_cleanup(struct tc_action_net *tn, u32 index);
int tcf_idr_check_alloc(struct tc_action_net *tn, u32 *index,
struct tc_action **a, int bind);
-int __tcf_idr_release(struct tc_action *a, bool bind, bool strict);
-
-static inline int tcf_idr_release(struct tc_action *a, bool bind)
-{
- return __tcf_idr_release(a, bind, false);
-}
+int tcf_idr_release(struct tc_action *a, bool bind);
int tcf_register_action(struct tc_action_ops *a, struct pernet_operations *ops);
int tcf_unregister_action(struct tc_action_ops *a,
@@ -185,7 +180,7 @@ int tcf_action_exec(struct sk_buff *skb, struct tc_action **actions,
int nr_actions, struct tcf_result *res);
int tcf_action_init(struct net *net, struct tcf_proto *tp, struct nlattr *nla,
struct nlattr *est, char *name, int ovr, int bind,
- struct tc_action *actions[], size_t *attr_size,
+ struct tc_action *actions[], int init_res[], size_t *attr_size,
bool rtnl_held, struct netlink_ext_ack *extack);
struct tc_action_ops *tc_action_load_ops(char *name, struct nlattr *nla,
bool rtnl_held,
@@ -193,7 +188,8 @@ struct tc_action_ops *tc_action_load_ops(char *name, struct nlattr *nla,
struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp,
struct nlattr *nla, struct nlattr *est,
char *name, int ovr, int bind,
- struct tc_action_ops *ops, bool rtnl_held,
+ struct tc_action_ops *a_o, int *init_res,
+ bool rtnl_held,
struct netlink_ext_ack *extack);
int tcf_action_dump(struct sk_buff *skb, struct tc_action *actions[], int bind,
int ref, bool terse);
diff --git a/include/net/addrconf.h b/include/net/addrconf.h
index 18f783d..78ea3e3 100644
--- a/include/net/addrconf.h
+++ b/include/net/addrconf.h
@@ -233,7 +233,6 @@ void ipv6_mc_unmap(struct inet6_dev *idev);
void ipv6_mc_remap(struct inet6_dev *idev);
void ipv6_mc_init_dev(struct inet6_dev *idev);
void ipv6_mc_destroy_dev(struct inet6_dev *idev);
-int ipv6_mc_check_icmpv6(struct sk_buff *skb);
int ipv6_mc_check_mld(struct sk_buff *skb);
void addrconf_dad_failure(struct sk_buff *skb, struct inet6_ifaddr *ifp);
diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
index 9873e1c..df611c8 100644
--- a/include/net/bluetooth/hci_core.h
+++ b/include/net/bluetooth/hci_core.h
@@ -669,6 +669,7 @@ struct hci_chan {
struct sk_buff_head data_q;
unsigned int sent;
__u8 state;
+ bool amp;
};
struct hci_conn_params {
diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h
index d5ab8d9..8a1bf2d 100644
--- a/include/net/cfg80211.h
+++ b/include/net/cfg80211.h
@@ -5624,7 +5624,7 @@ unsigned int ieee80211_get_mesh_hdrlen(struct ieee80211s_hdr *meshhdr);
*/
int ieee80211_data_to_8023_exthdr(struct sk_buff *skb, struct ethhdr *ehdr,
const u8 *addr, enum nl80211_iftype iftype,
- u8 data_offset);
+ u8 data_offset, bool is_amsdu);
/**
* ieee80211_data_to_8023 - convert an 802.11 data frame to 802.3
@@ -5636,7 +5636,7 @@ int ieee80211_data_to_8023_exthdr(struct sk_buff *skb, struct ethhdr *ehdr,
static inline int ieee80211_data_to_8023(struct sk_buff *skb, const u8 *addr,
enum nl80211_iftype iftype)
{
- return ieee80211_data_to_8023_exthdr(skb, NULL, addr, iftype, 0);
+ return ieee80211_data_to_8023_exthdr(skb, NULL, addr, iftype, 0, false);
}
/**
diff --git a/include/net/netfilter/nf_flow_table.h b/include/net/netfilter/nf_flow_table.h
index 16e8b2f..b338638 100644
--- a/include/net/netfilter/nf_flow_table.h
+++ b/include/net/netfilter/nf_flow_table.h
@@ -126,7 +126,6 @@ enum nf_flow_flags {
NF_FLOW_HW,
NF_FLOW_HW_DYING,
NF_FLOW_HW_DEAD,
- NF_FLOW_HW_REFRESH,
NF_FLOW_HW_PENDING,
};
diff --git a/include/net/netfilter/nf_tables_offload.h b/include/net/netfilter/nf_tables_offload.h
index 1d34fe1..434a615 100644
--- a/include/net/netfilter/nf_tables_offload.h
+++ b/include/net/netfilter/nf_tables_offload.h
@@ -4,11 +4,16 @@
#include <net/flow_offload.h>
#include <net/netfilter/nf_tables.h>
+enum nft_offload_reg_flags {
+ NFT_OFFLOAD_F_NETWORK2HOST = (1 << 0),
+};
+
struct nft_offload_reg {
u32 key;
u32 len;
u32 base_offset;
u32 offset;
+ u32 flags;
struct nft_data data;
struct nft_data mask;
};
@@ -45,6 +50,7 @@ struct nft_flow_key {
struct flow_dissector_key_ports tp;
struct flow_dissector_key_ip ip;
struct flow_dissector_key_vlan vlan;
+ struct flow_dissector_key_vlan cvlan;
struct flow_dissector_key_eth_addrs eth_addrs;
struct flow_dissector_key_meta meta;
} __aligned(BITS_PER_LONG / 8); /* Ensure that we can do comparisons as longs. */
@@ -71,13 +77,17 @@ struct nft_flow_rule *nft_flow_rule_create(struct net *net, const struct nft_rul
void nft_flow_rule_destroy(struct nft_flow_rule *flow);
int nft_flow_rule_offload_commit(struct net *net);
-#define NFT_OFFLOAD_MATCH(__key, __base, __field, __len, __reg) \
+#define NFT_OFFLOAD_MATCH_FLAGS(__key, __base, __field, __len, __reg, __flags) \
(__reg)->base_offset = \
offsetof(struct nft_flow_key, __base); \
(__reg)->offset = \
offsetof(struct nft_flow_key, __base.__field); \
(__reg)->len = __len; \
(__reg)->key = __key; \
+ (__reg)->flags = __flags;
+
+#define NFT_OFFLOAD_MATCH(__key, __base, __field, __len, __reg) \
+ NFT_OFFLOAD_MATCH_FLAGS(__key, __base, __field, __len, __reg, 0)
#define NFT_OFFLOAD_MATCH_EXACT(__key, __base, __field, __len, __reg) \
NFT_OFFLOAD_MATCH(__key, __base, __field, __len, __reg) \
diff --git a/include/net/netns/xfrm.h b/include/net/netns/xfrm.h
index 59f45b1..b59d73d 100644
--- a/include/net/netns/xfrm.h
+++ b/include/net/netns/xfrm.h
@@ -72,7 +72,9 @@ struct netns_xfrm {
#if IS_ENABLED(CONFIG_IPV6)
struct dst_ops xfrm6_dst_ops;
#endif
- spinlock_t xfrm_state_lock;
+ spinlock_t xfrm_state_lock;
+ seqcount_t xfrm_state_hash_generation;
+
spinlock_t xfrm_policy_lock;
struct mutex xfrm_cfg_mutex;
};
diff --git a/include/net/nfc/nci_core.h b/include/net/nfc/nci_core.h
index 43c9c5d..3397901 100644
--- a/include/net/nfc/nci_core.h
+++ b/include/net/nfc/nci_core.h
@@ -298,6 +298,7 @@ int nci_nfcc_loopback(struct nci_dev *ndev, void *data, size_t data_len,
struct sk_buff **resp);
struct nci_hci_dev *nci_hci_allocate(struct nci_dev *ndev);
+void nci_hci_deallocate(struct nci_dev *ndev);
int nci_hci_send_event(struct nci_dev *ndev, u8 gate, u8 event,
const u8 *param, size_t param_len);
int nci_hci_send_cmd(struct nci_dev *ndev, u8 gate,
diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 81d7773..b139e7b 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -191,7 +191,17 @@ static inline void page_pool_recycle_direct(struct page_pool *pool,
static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
{
- return page->dma_addr;
+ dma_addr_t ret = page->dma_addr[0];
+ if (sizeof(dma_addr_t) > sizeof(unsigned long))
+ ret |= (dma_addr_t)page->dma_addr[1] << 16 << 16;
+ return ret;
+}
+
+static inline void page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
+{
+ page->dma_addr[0] = addr;
+ if (sizeof(dma_addr_t) > sizeof(unsigned long))
+ page->dma_addr[1] = upper_32_bits(addr);
}
static inline bool is_page_pool_compiled_in(void)
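Splitting the stored DMA address across two unsigned longs keeps page_pool working on 32-bit systems with a 64-bit dma_addr_t; the '<< 16 << 16' form avoids a shift-width warning in the (dead) upper-half branch when dma_addr_t is only 32 bits. A hedged sketch of the round-trip a page_pool driver effectively depends on (simplified and illustrative, not taken from this patch):

#include <linux/dma-mapping.h>
#include <net/page_pool.h>

static bool example_map_and_stash(struct device *dev, struct page *page)
{
	dma_addr_t dma = dma_map_page(dev, page, 0, PAGE_SIZE,
				      DMA_BIDIRECTIONAL);

	if (dma_mapping_error(dev, dma))
		return false;

	page_pool_set_dma_addr(page, dma);		/* split into dma_addr[0..1] */
	return page_pool_get_dma_addr(page) == dma;	/* reassembled losslessly */
}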
diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
index d4d4612..b608be5 100644
--- a/include/net/pkt_cls.h
+++ b/include/net/pkt_cls.h
@@ -709,6 +709,17 @@ tc_cls_common_offload_init(struct flow_cls_common_offload *cls_common,
cls_common->extack = extack;
}
+#if IS_ENABLED(CONFIG_NET_TC_SKB_EXT)
+static inline struct tc_skb_ext *tc_skb_ext_alloc(struct sk_buff *skb)
+{
+ struct tc_skb_ext *tc_skb_ext = skb_ext_add(skb, TC_SKB_EXT);
+
+ if (tc_skb_ext)
+ memset(tc_skb_ext, 0, sizeof(*tc_skb_ext));
+ return tc_skb_ext;
+}
+#endif
+
enum tc_matchall_command {
TC_CLSMATCHALL_REPLACE,
TC_CLSMATCHALL_DESTROY,
diff --git a/include/net/pkt_sched.h b/include/net/pkt_sched.h
index 4ed32e6..2be90a54 100644
--- a/include/net/pkt_sched.h
+++ b/include/net/pkt_sched.h
@@ -123,12 +123,7 @@ void __qdisc_run(struct Qdisc *q);
static inline void qdisc_run(struct Qdisc *q)
{
if (qdisc_run_begin(q)) {
- /* NOLOCK qdisc must check 'state' under the qdisc seqlock
- * to avoid racing with dev_qdisc_reset()
- */
- if (!(q->flags & TCQ_F_NOLOCK) ||
- likely(!test_bit(__QDISC_STATE_DEACTIVATED, &q->state)))
- __qdisc_run(q);
+ __qdisc_run(q);
qdisc_run_end(q);
}
}
diff --git a/include/net/red.h b/include/net/red.h
index 9e6647c..cc9f6b0 100644
--- a/include/net/red.h
+++ b/include/net/red.h
@@ -171,9 +171,9 @@ static inline void red_set_vars(struct red_vars *v)
static inline bool red_check_params(u32 qth_min, u32 qth_max, u8 Wlog,
u8 Scell_log, u8 *stab)
{
- if (fls(qth_min) + Wlog > 32)
+ if (fls(qth_min) + Wlog >= 32)
return false;
- if (fls(qth_max) + Wlog > 32)
+ if (fls(qth_max) + Wlog >= 32)
return false;
if (Scell_log >= 32)
return false;
diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index 3648164..4dd2c9e 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -36,6 +36,7 @@ struct qdisc_rate_table {
enum qdisc_state_t {
__QDISC_STATE_SCHED,
__QDISC_STATE_DEACTIVATED,
+ __QDISC_STATE_MISSED,
};
struct qdisc_size_table {
@@ -159,8 +160,33 @@ static inline bool qdisc_is_empty(const struct Qdisc *qdisc)
static inline bool qdisc_run_begin(struct Qdisc *qdisc)
{
if (qdisc->flags & TCQ_F_NOLOCK) {
+ if (spin_trylock(&qdisc->seqlock))
+ goto nolock_empty;
+
+ /* If the MISSED flag is set, it means other thread has
+ * set the MISSED flag before second spin_trylock(), so
+ * we can return false here to avoid multi cpus doing
+ * the set_bit() and second spin_trylock() concurrently.
+ */
+ if (test_bit(__QDISC_STATE_MISSED, &qdisc->state))
+ return false;
+
+ /* Set the MISSED flag before the second spin_trylock(),
+ * if the second spin_trylock() return false, it means
+ * other cpu holding the lock will do dequeuing for us
+ * or it will see the MISSED flag set after releasing
+ * lock and reschedule the net_tx_action() to do the
+ * dequeuing.
+ */
+ set_bit(__QDISC_STATE_MISSED, &qdisc->state);
+
+ /* Retry again in case other CPU may not see the new flag
+ * after it releases the lock at the end of qdisc_run_end().
+ */
if (!spin_trylock(&qdisc->seqlock))
return false;
+
+nolock_empty:
WRITE_ONCE(qdisc->empty, false);
} else if (qdisc_is_running(qdisc)) {
return false;
@@ -176,8 +202,15 @@ static inline bool qdisc_run_begin(struct Qdisc *qdisc)
static inline void qdisc_run_end(struct Qdisc *qdisc)
{
write_seqcount_end(&qdisc->running);
- if (qdisc->flags & TCQ_F_NOLOCK)
+ if (qdisc->flags & TCQ_F_NOLOCK) {
spin_unlock(&qdisc->seqlock);
+
+ if (unlikely(test_bit(__QDISC_STATE_MISSED,
+ &qdisc->state))) {
+ clear_bit(__QDISC_STATE_MISSED, &qdisc->state);
+ __netif_schedule(qdisc);
+ }
+ }
}
static inline bool qdisc_may_bulk(const struct Qdisc *qdisc)
diff --git a/include/net/sock.h b/include/net/sock.h
index 253202d..f68184b 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -2197,6 +2197,17 @@ static inline void skb_set_owner_r(struct sk_buff *skb, struct sock *sk)
sk_mem_charge(sk, skb->truesize);
}
+static inline __must_check bool skb_set_owner_sk_safe(struct sk_buff *skb, struct sock *sk)
+{
+ if (sk && refcount_inc_not_zero(&sk->sk_refcnt)) {
+ skb_orphan(skb);
+ skb->destructor = sock_efree;
+ skb->sk = sk;
+ return true;
+ }
+ return false;
+}
+
void sk_reset_timer(struct sock *sk, struct timer_list *timer,
unsigned long expires);
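skb_set_owner_sk_safe() only assigns ownership if it can still take a reference on the socket, so callers have to handle the failure case themselves. A minimal sketch of the expected pattern (illustrative, not lifted from any specific caller):

#include <linux/skbuff.h>
#include <net/sock.h>

static void example_deliver(struct sk_buff *skb, struct sock *sk)
{
	/* If the socket is already going away, drop instead of touching it. */
	if (!skb_set_owner_sk_safe(skb, sk)) {
		kfree_skb(skb);
		return;
	}

	/* ... queue skb to the socket's receive path ... */
}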
diff --git a/include/net/xfrm.h b/include/net/xfrm.h
index b2a06f1..c58a6d4 100644
--- a/include/net/xfrm.h
+++ b/include/net/xfrm.h
@@ -1097,7 +1097,7 @@ static inline int __xfrm_policy_check2(struct sock *sk, int dir,
return __xfrm_policy_check(sk, ndir, skb, family);
return (!net->xfrm.policy_count[dir] && !secpath_exists(skb)) ||
- (skb_dst(skb)->flags & DST_NOPOLICY) ||
+ (skb_dst(skb) && (skb_dst(skb)->flags & DST_NOPOLICY)) ||
__xfrm_policy_check(sk, ndir, skb, family);
}
@@ -1557,7 +1557,7 @@ int xfrm_trans_queue_net(struct net *net, struct sk_buff *skb,
int xfrm_trans_queue(struct sk_buff *skb,
int (*finish)(struct net *, struct sock *,
struct sk_buff *));
-int xfrm_output_resume(struct sk_buff *skb, int err);
+int xfrm_output_resume(struct sock *sk, struct sk_buff *skb, int err);
int xfrm_output(struct sock *sk, struct sk_buff *skb);
#if IS_ENABLED(CONFIG_NET_PKTGEN)
diff --git a/include/scsi/libfcoe.h b/include/scsi/libfcoe.h
index 2568cb0..fac8e89 100644
--- a/include/scsi/libfcoe.h
+++ b/include/scsi/libfcoe.h
@@ -249,7 +249,7 @@ int fcoe_ctlr_recv_flogi(struct fcoe_ctlr *, struct fc_lport *,
struct fc_frame *);
/* libfcoe funcs */
-u64 fcoe_wwn_from_mac(unsigned char mac[], unsigned int, unsigned int);
+u64 fcoe_wwn_from_mac(unsigned char mac[MAX_ADDR_LEN], unsigned int, unsigned int);
int fcoe_libfc_config(struct fc_lport *, struct fcoe_ctlr *,
const struct libfc_function_template *, int init_fcp);
u32 fcoe_fc_crc(struct fc_frame *fp);
diff --git a/include/trace/events/f2fs.h b/include/trace/events/f2fs.h
index f8f1e85..56b113e 100644
--- a/include/trace/events/f2fs.h
+++ b/include/trace/events/f2fs.h
@@ -6,6 +6,7 @@
#define _TRACE_F2FS_H
#include <linux/tracepoint.h>
+#include <uapi/linux/f2fs.h>
#define show_dev(dev) MAJOR(dev), MINOR(dev)
#define show_dev_ino(entry) show_dev(entry->dev), (unsigned long)entry->ino
diff --git a/include/trace/events/sunrpc.h b/include/trace/events/sunrpc.h
index 2a03263..23db248 100644
--- a/include/trace/events/sunrpc.h
+++ b/include/trace/events/sunrpc.h
@@ -1141,7 +1141,6 @@ DECLARE_EVENT_CLASS(xprt_writelock_event,
DEFINE_WRITELOCK_EVENT(reserve_xprt);
DEFINE_WRITELOCK_EVENT(release_xprt);
-DEFINE_WRITELOCK_EVENT(transmit_queued);
DECLARE_EVENT_CLASS(xprt_cong_event,
TP_PROTO(
diff --git a/include/uapi/linux/capability.h b/include/uapi/linux/capability.h
index c6ca330..2ddb422 100644
--- a/include/uapi/linux/capability.h
+++ b/include/uapi/linux/capability.h
@@ -335,7 +335,8 @@ struct vfs_ns_cap_data {
#define CAP_AUDIT_CONTROL 30
-/* Set or remove capabilities on files */
+/* Set or remove capabilities on files.
+ Map uid=0 into a child user namespace. */
#define CAP_SETFCAP 31
diff --git a/include/uapi/linux/f2fs.h b/include/uapi/linux/f2fs.h
new file mode 100644
index 0000000..28bcfe8
--- /dev/null
+++ b/include/uapi/linux/f2fs.h
@@ -0,0 +1,87 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+
+#ifndef _UAPI_LINUX_F2FS_H
+#define _UAPI_LINUX_F2FS_H
+#include <linux/types.h>
+#include <linux/ioctl.h>
+
+/*
+ * f2fs-specific ioctl commands
+ */
+#define F2FS_IOCTL_MAGIC 0xf5
+#define F2FS_IOC_START_ATOMIC_WRITE _IO(F2FS_IOCTL_MAGIC, 1)
+#define F2FS_IOC_COMMIT_ATOMIC_WRITE _IO(F2FS_IOCTL_MAGIC, 2)
+#define F2FS_IOC_START_VOLATILE_WRITE _IO(F2FS_IOCTL_MAGIC, 3)
+#define F2FS_IOC_RELEASE_VOLATILE_WRITE _IO(F2FS_IOCTL_MAGIC, 4)
+#define F2FS_IOC_ABORT_VOLATILE_WRITE _IO(F2FS_IOCTL_MAGIC, 5)
+#define F2FS_IOC_GARBAGE_COLLECT _IOW(F2FS_IOCTL_MAGIC, 6, __u32)
+#define F2FS_IOC_WRITE_CHECKPOINT _IO(F2FS_IOCTL_MAGIC, 7)
+#define F2FS_IOC_DEFRAGMENT _IOWR(F2FS_IOCTL_MAGIC, 8, \
+ struct f2fs_defragment)
+#define F2FS_IOC_MOVE_RANGE _IOWR(F2FS_IOCTL_MAGIC, 9, \
+ struct f2fs_move_range)
+#define F2FS_IOC_FLUSH_DEVICE _IOW(F2FS_IOCTL_MAGIC, 10, \
+ struct f2fs_flush_device)
+#define F2FS_IOC_GARBAGE_COLLECT_RANGE _IOW(F2FS_IOCTL_MAGIC, 11, \
+ struct f2fs_gc_range)
+#define F2FS_IOC_GET_FEATURES _IOR(F2FS_IOCTL_MAGIC, 12, __u32)
+#define F2FS_IOC_SET_PIN_FILE _IOW(F2FS_IOCTL_MAGIC, 13, __u32)
+#define F2FS_IOC_GET_PIN_FILE _IOR(F2FS_IOCTL_MAGIC, 14, __u32)
+#define F2FS_IOC_PRECACHE_EXTENTS _IO(F2FS_IOCTL_MAGIC, 15)
+#define F2FS_IOC_RESIZE_FS _IOW(F2FS_IOCTL_MAGIC, 16, __u64)
+#define F2FS_IOC_GET_COMPRESS_BLOCKS _IOR(F2FS_IOCTL_MAGIC, 17, __u64)
+#define F2FS_IOC_RELEASE_COMPRESS_BLOCKS \
+ _IOR(F2FS_IOCTL_MAGIC, 18, __u64)
+#define F2FS_IOC_RESERVE_COMPRESS_BLOCKS \
+ _IOR(F2FS_IOCTL_MAGIC, 19, __u64)
+#define F2FS_IOC_SEC_TRIM_FILE _IOW(F2FS_IOCTL_MAGIC, 20, \
+ struct f2fs_sectrim_range)
+
+/*
+ * should be same as XFS_IOC_GOINGDOWN.
+ * Flags for going down operation used by FS_IOC_GOINGDOWN
+ */
+#define F2FS_IOC_SHUTDOWN _IOR('X', 125, __u32) /* Shutdown */
+#define F2FS_GOING_DOWN_FULLSYNC 0x0 /* going down with full sync */
+#define F2FS_GOING_DOWN_METASYNC 0x1 /* going down with metadata */
+#define F2FS_GOING_DOWN_NOSYNC 0x2 /* going down */
+#define F2FS_GOING_DOWN_METAFLUSH 0x3 /* going down with meta flush */
+#define F2FS_GOING_DOWN_NEED_FSCK 0x4 /* going down to trigger fsck */
+
+/*
+ * Flags used by F2FS_IOC_SEC_TRIM_FILE
+ */
+#define F2FS_TRIM_FILE_DISCARD 0x1 /* send discard command */
+#define F2FS_TRIM_FILE_ZEROOUT 0x2 /* zero out */
+#define F2FS_TRIM_FILE_MASK 0x3
+
+struct f2fs_gc_range {
+ __u32 sync;
+ __u64 start;
+ __u64 len;
+};
+
+struct f2fs_defragment {
+ __u64 start;
+ __u64 len;
+};
+
+struct f2fs_move_range {
+ __u32 dst_fd; /* destination fd */
+ __u64 pos_in; /* start position in src_fd */
+ __u64 pos_out; /* start position in dst_fd */
+ __u64 len; /* size to move */
+};
+
+struct f2fs_flush_device {
+ __u32 dev_num; /* device number to flush */
+ __u32 segments; /* # of segments to flush */
+};
+
+struct f2fs_sectrim_range {
+ __u64 start;
+ __u64 len;
+ __u64 flags;
+};
+
+#endif /* _UAPI_LINUX_F2FS_H */
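
For illustration only (not part of the applied diff): a minimal userspace sketch of how the newly exported <linux/f2fs.h> ioctls might be exercised. The mount path, the choice of F2FS_IOC_GET_PIN_FILE, and the error handling are assumptions for the example, not taken from this change.

/* Illustrative sketch; assumes the exported uapi header is installed. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/f2fs.h>

int main(void)
{
	/* "/mnt/f2fs/file" is a placeholder path, not from the patch. */
	int fd = open("/mnt/f2fs/file", O_RDONLY);
	__u32 pinned = 0;

	if (fd < 0)
		return 1;
	/* F2FS_IOC_GET_PIN_FILE is _IOR(0xf5, 14, __u32), so it fills a __u32. */
	if (ioctl(fd, F2FS_IOC_GET_PIN_FILE, &pinned) == 0)
		printf("pinned: %u\n", pinned);
	close(fd);
	return 0;
}
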
diff --git a/include/uapi/linux/idxd.h b/include/uapi/linux/idxd.h
index fdcdfe4..9d9ecc0 100644
--- a/include/uapi/linux/idxd.h
+++ b/include/uapi/linux/idxd.h
@@ -187,8 +187,8 @@ struct dsa_completion_record {
uint32_t rsvd2:8;
};
- uint16_t delta_rec_size;
- uint16_t crc_val;
+ uint32_t delta_rec_size;
+ uint32_t crc_val;
/* DIF check & strip */
struct {
diff --git a/include/uapi/linux/if_packet.h b/include/uapi/linux/if_packet.h
index 3d884d6..c07caf7 100644
--- a/include/uapi/linux/if_packet.h
+++ b/include/uapi/linux/if_packet.h
@@ -2,6 +2,7 @@
#ifndef __LINUX_IF_PACKET_H
#define __LINUX_IF_PACKET_H
+#include <asm/byteorder.h>
#include <linux/types.h>
struct sockaddr_pkt {
@@ -296,6 +297,17 @@ struct packet_mreq {
unsigned char mr_address[8];
};
+struct fanout_args {
+#if defined(__LITTLE_ENDIAN_BITFIELD)
+ __u16 id;
+ __u16 type_flags;
+#else
+ __u16 type_flags;
+ __u16 id;
+#endif
+ __u32 max_num_members;
+};
+
#define PACKET_MR_MULTICAST 0
#define PACKET_MR_PROMISC 1
#define PACKET_MR_ALLMULTI 2
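
For illustration only (not part of the applied diff): a hedged sketch of how userspace could opt into the extended fanout API by passing the new struct fanout_args instead of a bare __u32 to setsockopt(PACKET_FANOUT). The group id, fanout mode, and member cap below are arbitrary example values.

/* Illustrative sketch; values are examples, not taken from this change. */
#include <string.h>
#include <sys/socket.h>
#include <linux/if_packet.h>

static int join_fanout_group(int pkt_fd)
{
	struct fanout_args args;

	memset(&args, 0, sizeof(args));
	args.id = 42;                          /* example fanout group id */
	args.type_flags = PACKET_FANOUT_HASH;  /* example fanout mode */
	args.max_num_members = 512;            /* request a larger member limit */

	/* Older kernels only understand a bare __u32 (id | mode << 16) here. */
	return setsockopt(pkt_fd, SOL_PACKET, PACKET_FANOUT, &args, sizeof(args));
}
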
diff --git a/include/uapi/linux/netfilter/xt_SECMARK.h b/include/uapi/linux/netfilter/xt_SECMARK.h
index 1f2a7084..beb2cad 100644
--- a/include/uapi/linux/netfilter/xt_SECMARK.h
+++ b/include/uapi/linux/netfilter/xt_SECMARK.h
@@ -20,4 +20,10 @@ struct xt_secmark_target_info {
char secctx[SECMARK_SECCTX_MAX];
};
+struct xt_secmark_target_info_v1 {
+ __u8 mode;
+ char secctx[SECMARK_SECCTX_MAX];
+ __u32 secid;
+};
+
#endif /*_XT_SECMARK_H_target */
diff --git a/include/uapi/linux/tty_flags.h b/include/uapi/linux/tty_flags.h
index 900a32e..6a3ac49 100644
--- a/include/uapi/linux/tty_flags.h
+++ b/include/uapi/linux/tty_flags.h
@@ -39,7 +39,7 @@
* WARNING: These flags are no longer used and have been superceded by the
* TTY_PORT_ flags in the iflags field (and not userspace-visible)
*/
-#ifndef _KERNEL_
+#ifndef __KERNEL__
#define ASYNCB_INITIALIZED 31 /* Serial port was initialized */
#define ASYNCB_SUSPENDED 30 /* Serial port is suspended */
#define ASYNCB_NORMAL_ACTIVE 29 /* Normal device is active */
@@ -81,7 +81,7 @@
#define ASYNC_SPD_WARP (ASYNC_SPD_HI|ASYNC_SPD_SHI)
#define ASYNC_SPD_MASK (ASYNC_SPD_HI|ASYNC_SPD_VHI|ASYNC_SPD_SHI)
-#ifndef _KERNEL_
+#ifndef __KERNEL__
/* These flags are no longer used (and were always masked from userspace) */
#define ASYNC_INITIALIZED (1U << ASYNCB_INITIALIZED)
#define ASYNC_NORMAL_ACTIVE (1U << ASYNCB_NORMAL_ACTIVE)
diff --git a/include/uapi/linux/usb/video.h b/include/uapi/linux/usb/video.h
index d854cb1..bfdae12 100644
--- a/include/uapi/linux/usb/video.h
+++ b/include/uapi/linux/usb/video.h
@@ -302,9 +302,10 @@ struct uvc_processing_unit_descriptor {
__u8 bControlSize;
__u8 bmControls[2];
__u8 iProcessing;
+ __u8 bmVideoStandards;
} __attribute__((__packed__));
-#define UVC_DT_PROCESSING_UNIT_SIZE(n) (9+(n))
+#define UVC_DT_PROCESSING_UNIT_SIZE(n) (10+(n))
/* 3.7.2.6. Extension Unit Descriptor */
struct uvc_extension_unit_descriptor {
diff --git a/init/Kconfig b/init/Kconfig
index d559abf..fc4c9f41 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -114,8 +114,7 @@
config COMPILE_TEST
bool "Compile also drivers which will not load"
- depends on !UML
- default n
+ depends on HAS_IOMEM
help
Some drivers can be compiled on a different platform than they are
intended to be run on. Despite they cannot be loaded there (or even
diff --git a/init/init_task.c b/init/init_task.c
index 1395bda..68bfc90 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -209,7 +209,7 @@ struct task_struct init_task
#ifdef CONFIG_SECURITY
.security = NULL,
#endif
-#ifdef CONFIG_SECCOMP
+#ifdef CONFIG_SECCOMP_FILTER
.seccomp = { .filter_count = ATOMIC_INIT(0) },
#endif
};
diff --git a/ipc/mqueue.c b/ipc/mqueue.c
index beff0cf..05d2176c 100644
--- a/ipc/mqueue.c
+++ b/ipc/mqueue.c
@@ -1003,12 +1003,14 @@ static inline void __pipelined_op(struct wake_q_head *wake_q,
struct mqueue_inode_info *info,
struct ext_wait_queue *this)
{
+ struct task_struct *task;
+
list_del(&this->list);
- get_task_struct(this->task);
+ task = get_task_struct(this->task);
/* see MQ_BARRIER for purpose/pairing */
smp_store_release(&this->state, STATE_READY);
- wake_q_add_safe(wake_q, this->task);
+ wake_q_add_safe(wake_q, task);
}
/* pipelined_send() - send a message directly to the task waiting in
diff --git a/ipc/msg.c b/ipc/msg.c
index acd1bc7..6e6c8e0 100644
--- a/ipc/msg.c
+++ b/ipc/msg.c
@@ -251,11 +251,13 @@ static void expunge_all(struct msg_queue *msq, int res,
struct msg_receiver *msr, *t;
list_for_each_entry_safe(msr, t, &msq->q_receivers, r_list) {
- get_task_struct(msr->r_tsk);
+ struct task_struct *r_tsk;
+
+ r_tsk = get_task_struct(msr->r_tsk);
/* see MSG_BARRIER for purpose/pairing */
smp_store_release(&msr->r_msg, ERR_PTR(res));
- wake_q_add_safe(wake_q, msr->r_tsk);
+ wake_q_add_safe(wake_q, r_tsk);
}
}
diff --git a/ipc/sem.c b/ipc/sem.c
index f6c30a8..7d9c06b 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -784,12 +784,14 @@ static int perform_atomic_semop(struct sem_array *sma, struct sem_queue *q)
static inline void wake_up_sem_queue_prepare(struct sem_queue *q, int error,
struct wake_q_head *wake_q)
{
- get_task_struct(q->sleeper);
+ struct task_struct *sleeper;
+
+ sleeper = get_task_struct(q->sleeper);
/* see SEM_BARRIER_2 for purpuse/pairing */
smp_store_release(&q->status, error);
- wake_q_add_safe(wake_q, q->sleeper);
+ wake_q_add_safe(wake_q, sleeper);
}
static void unlink_queue(struct sem_array *sma, struct sem_queue *q)
diff --git a/kernel/.gitignore b/kernel/.gitignore
index 78701ea..5518835 100644
--- a/kernel/.gitignore
+++ b/kernel/.gitignore
@@ -1,4 +1,5 @@
# SPDX-License-Identifier: GPL-2.0-only
+/config_data
kheaders.md5
timeconst.h
hz.bc
diff --git a/kernel/Makefile b/kernel/Makefile
index 88b60a6..e7905bd 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -134,10 +134,15 @@
$(obj)/configs.o: $(obj)/config_data.gz
-targets += config_data.gz
-$(obj)/config_data.gz: $(KCONFIG_CONFIG) FORCE
+targets += config_data config_data.gz
+$(obj)/config_data.gz: $(obj)/config_data FORCE
$(call if_changed,gzip)
+filechk_cat = cat $<
+
+$(obj)/config_data: $(KCONFIG_CONFIG) FORCE
+ $(call filechk,cat)
+
$(obj)/kheaders.o: $(obj)/kheaders_data.tar.xz
quiet_cmd_genikh = CHK $(obj)/kheaders_data.tar.xz
diff --git a/kernel/bpf/inode.c b/kernel/bpf/inode.c
index dd4b7fd..6b14b4c 100644
--- a/kernel/bpf/inode.c
+++ b/kernel/bpf/inode.c
@@ -546,7 +546,7 @@ int bpf_obj_get_user(const char __user *pathname, int flags)
else if (type == BPF_TYPE_MAP)
ret = bpf_map_new_fd(raw, f_flags);
else if (type == BPF_TYPE_LINK)
- ret = bpf_link_new_fd(raw);
+ ret = (f_flags != O_RDWR) ? -EINVAL : bpf_link_new_fd(raw);
else
return -ENOENT;
diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
index 31cb04a..add0b34 100644
--- a/kernel/bpf/ringbuf.c
+++ b/kernel/bpf/ringbuf.c
@@ -240,25 +240,20 @@ static int ringbuf_map_get_next_key(struct bpf_map *map, void *key,
return -ENOTSUPP;
}
-static size_t bpf_ringbuf_mmap_page_cnt(const struct bpf_ringbuf *rb)
-{
- size_t data_pages = (rb->mask + 1) >> PAGE_SHIFT;
-
- /* consumer page + producer page + 2 x data pages */
- return RINGBUF_POS_PAGES + 2 * data_pages;
-}
-
static int ringbuf_map_mmap(struct bpf_map *map, struct vm_area_struct *vma)
{
struct bpf_ringbuf_map *rb_map;
- size_t mmap_sz;
rb_map = container_of(map, struct bpf_ringbuf_map, map);
- mmap_sz = bpf_ringbuf_mmap_page_cnt(rb_map->rb) << PAGE_SHIFT;
- if (vma->vm_pgoff * PAGE_SIZE + (vma->vm_end - vma->vm_start) > mmap_sz)
- return -EINVAL;
-
+ if (vma->vm_flags & VM_WRITE) {
+ /* allow writable mapping for the consumer_pos only */
+ if (vma->vm_pgoff != 0 || vma->vm_end - vma->vm_start != PAGE_SIZE)
+ return -EPERM;
+ } else {
+ vma->vm_flags &= ~VM_MAYWRITE;
+ }
+ /* remap_vmalloc_range() checks size and offset constraints */
return remap_vmalloc_range(vma, rb_map->rb,
vma->vm_pgoff + RINGBUF_PGOFF);
}
@@ -334,6 +329,9 @@ static void *__bpf_ringbuf_reserve(struct bpf_ringbuf *rb, u64 size)
return NULL;
len = round_up(size + BPF_RINGBUF_HDR_SZ, 8);
+ if (len > rb->mask + 1)
+ return NULL;
+
cons_pos = smp_load_acquire(&rb->consumer_pos);
if (in_nmi()) {
diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index 6e83bf8..ebf6084 100644
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c
@@ -662,9 +662,17 @@ const struct bpf_func_proto bpf_get_stack_proto = {
BPF_CALL_4(bpf_get_task_stack, struct task_struct *, task, void *, buf,
u32, size, u64, flags)
{
- struct pt_regs *regs = task_pt_regs(task);
+ struct pt_regs *regs;
+ long res;
- return __bpf_get_stack(regs, task, NULL, buf, size, flags);
+ if (!try_get_task_stack(task))
+ return -EFAULT;
+
+ regs = task_pt_regs(task);
+ res = __bpf_get_stack(regs, task, NULL, buf, size, flags);
+ put_task_stack(task);
+
+ return res;
}
BTF_ID_LIST_SINGLE(bpf_get_task_stack_btf_ids, struct, task_struct)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 76aefb4..fcfefff 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1310,9 +1310,7 @@ static bool __reg64_bound_s32(s64 a)
static bool __reg64_bound_u32(u64 a)
{
- if (a > U32_MIN && a < U32_MAX)
- return true;
- return false;
+ return a > U32_MIN && a < U32_MAX;
}
static void __reg_combine_64_into_32(struct bpf_reg_state *reg)
@@ -1323,10 +1321,10 @@ static void __reg_combine_64_into_32(struct bpf_reg_state *reg)
reg->s32_min_value = (s32)reg->smin_value;
reg->s32_max_value = (s32)reg->smax_value;
}
- if (__reg64_bound_u32(reg->umin_value))
+ if (__reg64_bound_u32(reg->umin_value) && __reg64_bound_u32(reg->umax_value)) {
reg->u32_min_value = (u32)reg->umin_value;
- if (__reg64_bound_u32(reg->umax_value))
reg->u32_max_value = (u32)reg->umax_value;
+ }
/* Intersecting with the old var_off might have improved our bounds
* slightly. e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc),
@@ -2295,12 +2293,14 @@ static void save_register_state(struct bpf_func_state *state,
state->stack[spi].slot_type[i] = STACK_SPILL;
}
-/* check_stack_read/write functions track spill/fill of registers,
+/* check_stack_{read,write}_fixed_off functions track spill/fill of registers,
* stack boundary and alignment are checked in check_mem_access()
*/
-static int check_stack_write(struct bpf_verifier_env *env,
- struct bpf_func_state *state, /* func where register points to */
- int off, int size, int value_regno, int insn_idx)
+static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
+ /* stack frame we're writing to */
+ struct bpf_func_state *state,
+ int off, int size, int value_regno,
+ int insn_idx)
{
struct bpf_func_state *cur; /* state of the current function */
int i, slot = -off - 1, spi = slot / BPF_REG_SIZE, err;
@@ -2426,9 +2426,175 @@ static int check_stack_write(struct bpf_verifier_env *env,
return 0;
}
-static int check_stack_read(struct bpf_verifier_env *env,
- struct bpf_func_state *reg_state /* func where register points to */,
- int off, int size, int value_regno)
+/* Write the stack: 'stack[ptr_regno + off] = value_regno'. 'ptr_regno' is
+ * known to contain a variable offset.
+ * This function checks whether the write is permitted and conservatively
+ * tracks the effects of the write, considering that each stack slot in the
+ * dynamic range is potentially written to.
+ *
+ * 'off' includes 'regno->off'.
+ * 'value_regno' can be -1, meaning that an unknown value is being written to
+ * the stack.
+ *
+ * Spilled pointers in range are not marked as written because we don't know
+ * what's going to be actually written. This means that read propagation for
+ * future reads cannot be terminated by this write.
+ *
+ * For privileged programs, uninitialized stack slots are considered
+ * initialized by this write (even though we don't know exactly what offsets
+ * are going to be written to). The idea is that we don't want the verifier to
+ * reject future reads that access slots written to through variable offsets.
+ */
+static int check_stack_write_var_off(struct bpf_verifier_env *env,
+ /* func where register points to */
+ struct bpf_func_state *state,
+ int ptr_regno, int off, int size,
+ int value_regno, int insn_idx)
+{
+ struct bpf_func_state *cur; /* state of the current function */
+ int min_off, max_off;
+ int i, err;
+ struct bpf_reg_state *ptr_reg = NULL, *value_reg = NULL;
+ bool writing_zero = false;
+ /* set if the fact that we're writing a zero is used to let any
+ * stack slots remain STACK_ZERO
+ */
+ bool zero_used = false;
+
+ cur = env->cur_state->frame[env->cur_state->curframe];
+ ptr_reg = &cur->regs[ptr_regno];
+ min_off = ptr_reg->smin_value + off;
+ max_off = ptr_reg->smax_value + off + size;
+ if (value_regno >= 0)
+ value_reg = &cur->regs[value_regno];
+ if (value_reg && register_is_null(value_reg))
+ writing_zero = true;
+
+ err = realloc_func_state(state, round_up(-min_off, BPF_REG_SIZE),
+ state->acquired_refs, true);
+ if (err)
+ return err;
+
+
+ /* Variable offset writes destroy any spilled pointers in range. */
+ for (i = min_off; i < max_off; i++) {
+ u8 new_type, *stype;
+ int slot, spi;
+
+ slot = -i - 1;
+ spi = slot / BPF_REG_SIZE;
+ stype = &state->stack[spi].slot_type[slot % BPF_REG_SIZE];
+
+ if (!env->allow_ptr_leaks
+ && *stype != NOT_INIT
+ && *stype != SCALAR_VALUE) {
+ /* Reject the write if there's are spilled pointers in
+ * range. If we didn't reject here, the ptr status
+ * would be erased below (even though not all slots are
+ * actually overwritten), possibly opening the door to
+ * leaks.
+ */
+ verbose(env, "spilled ptr in range of var-offset stack write; insn %d, ptr off: %d",
+ insn_idx, i);
+ return -EINVAL;
+ }
+
+ /* Erase all spilled pointers. */
+ state->stack[spi].spilled_ptr.type = NOT_INIT;
+
+ /* Update the slot type. */
+ new_type = STACK_MISC;
+ if (writing_zero && *stype == STACK_ZERO) {
+ new_type = STACK_ZERO;
+ zero_used = true;
+ }
+ /* If the slot is STACK_INVALID, we check whether it's OK to
+ * pretend that it will be initialized by this write. The slot
+ * might not actually be written to, and so if we mark it as
+ * initialized future reads might leak uninitialized memory.
+ * For privileged programs, we will accept such reads to slots
+ * that may or may not be written because, if we're reject
+ * them, the error would be too confusing.
+ */
+ if (*stype == STACK_INVALID && !env->allow_uninit_stack) {
+ verbose(env, "uninit stack in range of var-offset write prohibited for !root; insn %d, off: %d",
+ insn_idx, i);
+ return -EINVAL;
+ }
+ *stype = new_type;
+ }
+ if (zero_used) {
+ /* backtracking doesn't work for STACK_ZERO yet. */
+ err = mark_chain_precision(env, value_regno);
+ if (err)
+ return err;
+ }
+ return 0;
+}
+
+/* When register 'dst_regno' is assigned some values from stack[min_off,
+ * max_off), we set the register's type according to the types of the
+ * respective stack slots. If all the stack values are known to be zeros, then
+ * so is the destination reg. Otherwise, the register is considered to be
+ * SCALAR. This function does not deal with register filling; the caller must
+ * ensure that all spilled registers in the stack range have been marked as
+ * read.
+ */
+static void mark_reg_stack_read(struct bpf_verifier_env *env,
+ /* func where src register points to */
+ struct bpf_func_state *ptr_state,
+ int min_off, int max_off, int dst_regno)
+{
+ struct bpf_verifier_state *vstate = env->cur_state;
+ struct bpf_func_state *state = vstate->frame[vstate->curframe];
+ int i, slot, spi;
+ u8 *stype;
+ int zeros = 0;
+
+ for (i = min_off; i < max_off; i++) {
+ slot = -i - 1;
+ spi = slot / BPF_REG_SIZE;
+ stype = ptr_state->stack[spi].slot_type;
+ if (stype[slot % BPF_REG_SIZE] != STACK_ZERO)
+ break;
+ zeros++;
+ }
+ if (zeros == max_off - min_off) {
+ /* any access_size read into register is zero extended,
+ * so the whole register == const_zero
+ */
+ __mark_reg_const_zero(&state->regs[dst_regno]);
+ /* backtracking doesn't support STACK_ZERO yet,
+ * so mark it precise here, so that later
+ * backtracking can stop here.
+ * Backtracking may not need this if this register
+ * doesn't participate in pointer adjustment.
+ * Forward propagation of precise flag is not
+ * necessary either. This mark is only to stop
+ * backtracking. Any register that contributed
+ * to const 0 was marked precise before spill.
+ */
+ state->regs[dst_regno].precise = true;
+ } else {
+ /* have read misc data from the stack */
+ mark_reg_unknown(env, state->regs, dst_regno);
+ }
+ state->regs[dst_regno].live |= REG_LIVE_WRITTEN;
+}
+
+/* Read the stack at 'off' and put the results into the register indicated by
+ * 'dst_regno'. It handles reg filling if the addressed stack slot is a
+ * spilled reg.
+ *
+ * 'dst_regno' can be -1, meaning that the read value is not going to a
+ * register.
+ *
+ * The access is assumed to be within the current stack bounds.
+ */
+static int check_stack_read_fixed_off(struct bpf_verifier_env *env,
+ /* func where src register points to */
+ struct bpf_func_state *reg_state,
+ int off, int size, int dst_regno)
{
struct bpf_verifier_state *vstate = env->cur_state;
struct bpf_func_state *state = vstate->frame[vstate->curframe];
@@ -2436,11 +2602,6 @@ static int check_stack_read(struct bpf_verifier_env *env,
struct bpf_reg_state *reg;
u8 *stype;
- if (reg_state->allocated_stack <= slot) {
- verbose(env, "invalid read from stack off %d+0 size %d\n",
- off, size);
- return -EACCES;
- }
stype = reg_state->stack[spi].slot_type;
reg = &reg_state->stack[spi].spilled_ptr;
@@ -2451,9 +2612,9 @@ static int check_stack_read(struct bpf_verifier_env *env,
verbose(env, "invalid size of register fill\n");
return -EACCES;
}
- if (value_regno >= 0) {
- mark_reg_unknown(env, state->regs, value_regno);
- state->regs[value_regno].live |= REG_LIVE_WRITTEN;
+ if (dst_regno >= 0) {
+ mark_reg_unknown(env, state->regs, dst_regno);
+ state->regs[dst_regno].live |= REG_LIVE_WRITTEN;
}
mark_reg_read(env, reg, reg->parent, REG_LIVE_READ64);
return 0;
@@ -2465,16 +2626,16 @@ static int check_stack_read(struct bpf_verifier_env *env,
}
}
- if (value_regno >= 0) {
+ if (dst_regno >= 0) {
/* restore register state from stack */
- state->regs[value_regno] = *reg;
+ state->regs[dst_regno] = *reg;
/* mark reg as written since spilled pointer state likely
* has its liveness marks cleared by is_state_visited()
* which resets stack/reg liveness for state transitions
*/
- state->regs[value_regno].live |= REG_LIVE_WRITTEN;
+ state->regs[dst_regno].live |= REG_LIVE_WRITTEN;
} else if (__is_pointer_value(env->allow_ptr_leaks, reg)) {
- /* If value_regno==-1, the caller is asking us whether
+ /* If dst_regno==-1, the caller is asking us whether
* it is acceptable to use this value as a SCALAR_VALUE
* (e.g. for XADD).
* We must not allow unprivileged callers to do that
@@ -2486,70 +2647,167 @@ static int check_stack_read(struct bpf_verifier_env *env,
}
mark_reg_read(env, reg, reg->parent, REG_LIVE_READ64);
} else {
- int zeros = 0;
+ u8 type;
for (i = 0; i < size; i++) {
- if (stype[(slot - i) % BPF_REG_SIZE] == STACK_MISC)
+ type = stype[(slot - i) % BPF_REG_SIZE];
+ if (type == STACK_MISC)
continue;
- if (stype[(slot - i) % BPF_REG_SIZE] == STACK_ZERO) {
- zeros++;
+ if (type == STACK_ZERO)
continue;
- }
verbose(env, "invalid read from stack off %d+%d size %d\n",
off, i, size);
return -EACCES;
}
mark_reg_read(env, reg, reg->parent, REG_LIVE_READ64);
- if (value_regno >= 0) {
- if (zeros == size) {
- /* any size read into register is zero extended,
- * so the whole register == const_zero
- */
- __mark_reg_const_zero(&state->regs[value_regno]);
- /* backtracking doesn't support STACK_ZERO yet,
- * so mark it precise here, so that later
- * backtracking can stop here.
- * Backtracking may not need this if this register
- * doesn't participate in pointer adjustment.
- * Forward propagation of precise flag is not
- * necessary either. This mark is only to stop
- * backtracking. Any register that contributed
- * to const 0 was marked precise before spill.
- */
- state->regs[value_regno].precise = true;
- } else {
- /* have read misc data from the stack */
- mark_reg_unknown(env, state->regs, value_regno);
- }
- state->regs[value_regno].live |= REG_LIVE_WRITTEN;
- }
+ if (dst_regno >= 0)
+ mark_reg_stack_read(env, reg_state, off, off + size, dst_regno);
}
return 0;
}
-static int check_stack_access(struct bpf_verifier_env *env,
- const struct bpf_reg_state *reg,
- int off, int size)
+enum stack_access_src {
+ ACCESS_DIRECT = 1, /* the access is performed by an instruction */
+ ACCESS_HELPER = 2, /* the access is performed by a helper */
+};
+
+static int check_stack_range_initialized(struct bpf_verifier_env *env,
+ int regno, int off, int access_size,
+ bool zero_size_allowed,
+ enum stack_access_src type,
+ struct bpf_call_arg_meta *meta);
+
+static struct bpf_reg_state *reg_state(struct bpf_verifier_env *env, int regno)
{
- /* Stack accesses must be at a fixed offset, so that we
- * can determine what type of data were returned. See
- * check_stack_read().
+ return cur_regs(env) + regno;
+}
+
+/* Read the stack at 'ptr_regno + off' and put the result into the register
+ * 'dst_regno'.
+ * 'off' includes the pointer register's fixed offset(i.e. 'ptr_regno.off'),
+ * but not its variable offset.
+ * 'size' is assumed to be <= reg size and the access is assumed to be aligned.
+ *
+ * As opposed to check_stack_read_fixed_off, this function doesn't deal with
+ * filling registers (i.e. reads of spilled register cannot be detected when
+ * the offset is not fixed). We conservatively mark 'dst_regno' as containing
+ * SCALAR_VALUE. That's why we assert that the 'ptr_regno' has a variable
+ * offset; for a fixed offset check_stack_read_fixed_off should be used
+ * instead.
+ */
+static int check_stack_read_var_off(struct bpf_verifier_env *env,
+ int ptr_regno, int off, int size, int dst_regno)
+{
+ /* The state of the source register. */
+ struct bpf_reg_state *reg = reg_state(env, ptr_regno);
+ struct bpf_func_state *ptr_state = func(env, reg);
+ int err;
+ int min_off, max_off;
+
+ /* Note that we pass a NULL meta, so raw access will not be permitted.
*/
- if (!tnum_is_const(reg->var_off)) {
+ err = check_stack_range_initialized(env, ptr_regno, off, size,
+ false, ACCESS_DIRECT, NULL);
+ if (err)
+ return err;
+
+ min_off = reg->smin_value + off;
+ max_off = reg->smax_value + off;
+ mark_reg_stack_read(env, ptr_state, min_off, max_off + size, dst_regno);
+ return 0;
+}
+
+/* check_stack_read dispatches to check_stack_read_fixed_off or
+ * check_stack_read_var_off.
+ *
+ * The caller must ensure that the offset falls within the allocated stack
+ * bounds.
+ *
+ * 'dst_regno' is a register which will receive the value from the stack. It
+ * can be -1, meaning that the read value is not going to a register.
+ */
+static int check_stack_read(struct bpf_verifier_env *env,
+ int ptr_regno, int off, int size,
+ int dst_regno)
+{
+ struct bpf_reg_state *reg = reg_state(env, ptr_regno);
+ struct bpf_func_state *state = func(env, reg);
+ int err;
+ /* Some accesses are only permitted with a static offset. */
+ bool var_off = !tnum_is_const(reg->var_off);
+
+ /* The offset is required to be static when reads don't go to a
+ * register, in order to not leak pointers (see
+ * check_stack_read_fixed_off).
+ */
+ if (dst_regno < 0 && var_off) {
char tn_buf[48];
tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
- verbose(env, "variable stack access var_off=%s off=%d size=%d\n",
+ verbose(env, "variable offset stack pointer cannot be passed into helper function; var_off=%s off=%d size=%d\n",
tn_buf, off, size);
return -EACCES;
}
+ /* Variable offset is prohibited for unprivileged mode for simplicity
+ * since it requires corresponding support in Spectre masking for stack
+ * ALU. See also retrieve_ptr_limit().
+ */
+ if (!env->bypass_spec_v1 && var_off) {
+ char tn_buf[48];
- if (off >= 0 || off < -MAX_BPF_STACK) {
- verbose(env, "invalid stack off=%d size=%d\n", off, size);
+ tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
+ verbose(env, "R%d variable offset stack access prohibited for !root, var_off=%s\n",
+ ptr_regno, tn_buf);
return -EACCES;
}
- return 0;
+ if (!var_off) {
+ off += reg->var_off.value;
+ err = check_stack_read_fixed_off(env, state, off, size,
+ dst_regno);
+ } else {
+ /* Variable offset stack reads need more conservative handling
+ * than fixed offset ones. Note that dst_regno >= 0 on this
+ * branch.
+ */
+ err = check_stack_read_var_off(env, ptr_regno, off, size,
+ dst_regno);
+ }
+ return err;
+}
+
+
+/* check_stack_write dispatches to check_stack_write_fixed_off or
+ * check_stack_write_var_off.
+ *
+ * 'ptr_regno' is the register used as a pointer into the stack.
+ * 'off' includes 'ptr_regno->off', but not its variable offset (if any).
+ * 'value_regno' is the register whose value we're writing to the stack. It can
+ * be -1, meaning that we're not writing from a register.
+ *
+ * The caller must ensure that the offset falls within the maximum stack size.
+ */
+static int check_stack_write(struct bpf_verifier_env *env,
+ int ptr_regno, int off, int size,
+ int value_regno, int insn_idx)
+{
+ struct bpf_reg_state *reg = reg_state(env, ptr_regno);
+ struct bpf_func_state *state = func(env, reg);
+ int err;
+
+ if (tnum_is_const(reg->var_off)) {
+ off += reg->var_off.value;
+ err = check_stack_write_fixed_off(env, state, off, size,
+ value_regno, insn_idx);
+ } else {
+ /* Variable offset stack reads need more conservative handling
+ * than fixed offset ones.
+ */
+ err = check_stack_write_var_off(env, state,
+ ptr_regno, off, size,
+ value_regno, insn_idx);
+ }
+ return err;
}
static int check_map_access_type(struct bpf_verifier_env *env, u32 regno,
@@ -2878,11 +3136,6 @@ static int check_sock_access(struct bpf_verifier_env *env, int insn_idx,
return -EACCES;
}
-static struct bpf_reg_state *reg_state(struct bpf_verifier_env *env, int regno)
-{
- return cur_regs(env) + regno;
-}
-
static bool is_pointer_value(struct bpf_verifier_env *env, int regno)
{
return __is_pointer_value(env->allow_ptr_leaks, reg_state(env, regno));
@@ -3001,8 +3254,8 @@ static int check_ptr_alignment(struct bpf_verifier_env *env,
break;
case PTR_TO_STACK:
pointer_desc = "stack ";
- /* The stack spill tracking logic in check_stack_write()
- * and check_stack_read() relies on stack accesses being
+ /* The stack spill tracking logic in check_stack_write_fixed_off()
+ * and check_stack_read_fixed_off() relies on stack accesses being
* aligned.
*/
strict = true;
@@ -3420,6 +3673,91 @@ static int check_ptr_to_map_access(struct bpf_verifier_env *env,
return 0;
}
+/* Check that the stack access at the given offset is within bounds. The
+ * maximum valid offset is -1.
+ *
+ * The minimum valid offset is -MAX_BPF_STACK for writes, and
+ * -state->allocated_stack for reads.
+ */
+static int check_stack_slot_within_bounds(int off,
+ struct bpf_func_state *state,
+ enum bpf_access_type t)
+{
+ int min_valid_off;
+
+ if (t == BPF_WRITE)
+ min_valid_off = -MAX_BPF_STACK;
+ else
+ min_valid_off = -state->allocated_stack;
+
+ if (off < min_valid_off || off > -1)
+ return -EACCES;
+ return 0;
+}
+
+/* Check that the stack access at 'regno + off' falls within the maximum stack
+ * bounds.
+ *
+ * 'off' includes `regno->offset`, but not its dynamic part (if any).
+ */
+static int check_stack_access_within_bounds(
+ struct bpf_verifier_env *env,
+ int regno, int off, int access_size,
+ enum stack_access_src src, enum bpf_access_type type)
+{
+ struct bpf_reg_state *regs = cur_regs(env);
+ struct bpf_reg_state *reg = regs + regno;
+ struct bpf_func_state *state = func(env, reg);
+ int min_off, max_off;
+ int err;
+ char *err_extra;
+
+ if (src == ACCESS_HELPER)
+ /* We don't know if helpers are reading or writing (or both). */
+ err_extra = " indirect access to";
+ else if (type == BPF_READ)
+ err_extra = " read from";
+ else
+ err_extra = " write to";
+
+ if (tnum_is_const(reg->var_off)) {
+ min_off = reg->var_off.value + off;
+ if (access_size > 0)
+ max_off = min_off + access_size - 1;
+ else
+ max_off = min_off;
+ } else {
+ if (reg->smax_value >= BPF_MAX_VAR_OFF ||
+ reg->smin_value <= -BPF_MAX_VAR_OFF) {
+ verbose(env, "invalid unbounded variable-offset%s stack R%d\n",
+ err_extra, regno);
+ return -EACCES;
+ }
+ min_off = reg->smin_value + off;
+ if (access_size > 0)
+ max_off = reg->smax_value + off + access_size - 1;
+ else
+ max_off = min_off;
+ }
+
+ err = check_stack_slot_within_bounds(min_off, state, type);
+ if (!err)
+ err = check_stack_slot_within_bounds(max_off, state, type);
+
+ if (err) {
+ if (tnum_is_const(reg->var_off)) {
+ verbose(env, "invalid%s stack R%d off=%d size=%d\n",
+ err_extra, regno, off, access_size);
+ } else {
+ char tn_buf[48];
+
+ tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
+ verbose(env, "invalid variable-offset%s stack R%d var_off=%s size=%d\n",
+ err_extra, regno, tn_buf, access_size);
+ }
+ }
+ return err;
+}
/* check whether memory at (regno + off) is accessible for t = (read | write)
* if t==write, value_regno is a register which value is stored into memory
@@ -3532,8 +3870,8 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
}
} else if (reg->type == PTR_TO_STACK) {
- off += reg->var_off.value;
- err = check_stack_access(env, reg, off, size);
+ /* Basic bounds checks. */
+ err = check_stack_access_within_bounds(env, regno, off, size, ACCESS_DIRECT, t);
if (err)
return err;
@@ -3542,12 +3880,12 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
if (err)
return err;
- if (t == BPF_WRITE)
- err = check_stack_write(env, state, off, size,
- value_regno, insn_idx);
- else
- err = check_stack_read(env, state, off, size,
+ if (t == BPF_READ)
+ err = check_stack_read(env, regno, off, size,
value_regno);
+ else
+ err = check_stack_write(env, regno, off, size,
+ value_regno, insn_idx);
} else if (reg_is_pkt_pointer(reg)) {
if (t == BPF_WRITE && !may_access_direct_pkt_data(env, NULL, t)) {
verbose(env, "cannot write into packet\n");
@@ -3714,49 +4052,53 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i
return 0;
}
-static int __check_stack_boundary(struct bpf_verifier_env *env, u32 regno,
- int off, int access_size,
- bool zero_size_allowed)
-{
- struct bpf_reg_state *reg = reg_state(env, regno);
-
- if (off >= 0 || off < -MAX_BPF_STACK || off + access_size > 0 ||
- access_size < 0 || (access_size == 0 && !zero_size_allowed)) {
- if (tnum_is_const(reg->var_off)) {
- verbose(env, "invalid stack type R%d off=%d access_size=%d\n",
- regno, off, access_size);
- } else {
- char tn_buf[48];
-
- tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
- verbose(env, "invalid stack type R%d var_off=%s access_size=%d\n",
- regno, tn_buf, access_size);
- }
- return -EACCES;
- }
- return 0;
-}
-
-/* when register 'regno' is passed into function that will read 'access_size'
- * bytes from that pointer, make sure that it's within stack boundary
- * and all elements of stack are initialized.
- * Unlike most pointer bounds-checking functions, this one doesn't take an
- * 'off' argument, so it has to add in reg->off itself.
+/* When register 'regno' is used to read the stack (either directly or through
+ * a helper function) make sure that it's within stack boundary and, depending
+ * on the access type, that all elements of the stack are initialized.
+ *
+ * 'off' includes 'regno->off', but not its dynamic part (if any).
+ *
+ * All registers that have been spilled on the stack in the slots within the
+ * read offsets are marked as read.
*/
-static int check_stack_boundary(struct bpf_verifier_env *env, int regno,
- int access_size, bool zero_size_allowed,
- struct bpf_call_arg_meta *meta)
+static int check_stack_range_initialized(
+ struct bpf_verifier_env *env, int regno, int off,
+ int access_size, bool zero_size_allowed,
+ enum stack_access_src type, struct bpf_call_arg_meta *meta)
{
struct bpf_reg_state *reg = reg_state(env, regno);
struct bpf_func_state *state = func(env, reg);
int err, min_off, max_off, i, j, slot, spi;
+ char *err_extra = type == ACCESS_HELPER ? " indirect" : "";
+ enum bpf_access_type bounds_check_type;
+ /* Some accesses can write anything into the stack, others are
+ * read-only.
+ */
+ bool clobber = false;
+
+ if (access_size == 0 && !zero_size_allowed) {
+ verbose(env, "invalid zero-sized read\n");
+ return -EACCES;
+ }
+
+ if (type == ACCESS_HELPER) {
+ /* The bounds checks for writes are more permissive than for
+ * reads. However, if raw_mode is not set, we'll do extra
+ * checks below.
+ */
+ bounds_check_type = BPF_WRITE;
+ clobber = true;
+ } else {
+ bounds_check_type = BPF_READ;
+ }
+ err = check_stack_access_within_bounds(env, regno, off, access_size,
+ type, bounds_check_type);
+ if (err)
+ return err;
+
if (tnum_is_const(reg->var_off)) {
- min_off = max_off = reg->var_off.value + reg->off;
- err = __check_stack_boundary(env, regno, min_off, access_size,
- zero_size_allowed);
- if (err)
- return err;
+ min_off = max_off = reg->var_off.value + off;
} else {
/* Variable offset is prohibited for unprivileged mode for
* simplicity since it requires corresponding support in
@@ -3767,8 +4109,8 @@ static int check_stack_boundary(struct bpf_verifier_env *env, int regno,
char tn_buf[48];
tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
- verbose(env, "R%d indirect variable offset stack access prohibited for !root, var_off=%s\n",
- regno, tn_buf);
+ verbose(env, "R%d%s variable offset stack access prohibited for !root, var_off=%s\n",
+ regno, err_extra, tn_buf);
return -EACCES;
}
/* Only initialized buffer on stack is allowed to be accessed
@@ -3780,28 +4122,8 @@ static int check_stack_boundary(struct bpf_verifier_env *env, int regno,
if (meta && meta->raw_mode)
meta = NULL;
- if (reg->smax_value >= BPF_MAX_VAR_OFF ||
- reg->smax_value <= -BPF_MAX_VAR_OFF) {
- verbose(env, "R%d unbounded indirect variable offset stack access\n",
- regno);
- return -EACCES;
- }
- min_off = reg->smin_value + reg->off;
- max_off = reg->smax_value + reg->off;
- err = __check_stack_boundary(env, regno, min_off, access_size,
- zero_size_allowed);
- if (err) {
- verbose(env, "R%d min value is outside of stack bound\n",
- regno);
- return err;
- }
- err = __check_stack_boundary(env, regno, max_off, access_size,
- zero_size_allowed);
- if (err) {
- verbose(env, "R%d max value is outside of stack bound\n",
- regno);
- return err;
- }
+ min_off = reg->smin_value + off;
+ max_off = reg->smax_value + off;
}
if (meta && meta->raw_mode) {
@@ -3821,8 +4143,10 @@ static int check_stack_boundary(struct bpf_verifier_env *env, int regno,
if (*stype == STACK_MISC)
goto mark;
if (*stype == STACK_ZERO) {
- /* helper can write anything into the stack */
- *stype = STACK_MISC;
+ if (clobber) {
+ /* helper can write anything into the stack */
+ *stype = STACK_MISC;
+ }
goto mark;
}
@@ -3831,23 +4155,26 @@ static int check_stack_boundary(struct bpf_verifier_env *env, int regno,
goto mark;
if (state->stack[spi].slot_type[0] == STACK_SPILL &&
- state->stack[spi].spilled_ptr.type == SCALAR_VALUE) {
- __mark_reg_unknown(env, &state->stack[spi].spilled_ptr);
- for (j = 0; j < BPF_REG_SIZE; j++)
- state->stack[spi].slot_type[j] = STACK_MISC;
+ (state->stack[spi].spilled_ptr.type == SCALAR_VALUE ||
+ env->allow_ptr_leaks)) {
+ if (clobber) {
+ __mark_reg_unknown(env, &state->stack[spi].spilled_ptr);
+ for (j = 0; j < BPF_REG_SIZE; j++)
+ state->stack[spi].slot_type[j] = STACK_MISC;
+ }
goto mark;
}
err:
if (tnum_is_const(reg->var_off)) {
- verbose(env, "invalid indirect read from stack off %d+%d size %d\n",
- min_off, i - min_off, access_size);
+ verbose(env, "invalid%s read from stack R%d off %d+%d size %d\n",
+ err_extra, regno, min_off, i - min_off, access_size);
} else {
char tn_buf[48];
tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
- verbose(env, "invalid indirect read from stack var_off %s+%d size %d\n",
- tn_buf, i - min_off, access_size);
+ verbose(env, "invalid%s read from stack R%d var_off %s+%d size %d\n",
+ err_extra, regno, tn_buf, i - min_off, access_size);
}
return -EACCES;
mark:
@@ -3896,8 +4223,10 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
"rdwr",
&env->prog->aux->max_rdwr_access);
case PTR_TO_STACK:
- return check_stack_boundary(env, regno, access_size,
- zero_size_allowed, meta);
+ return check_stack_range_initialized(
+ env,
+ regno, reg->off, access_size,
+ zero_size_allowed, ACCESS_HELPER, meta);
default: /* scalar_value or invalid ptr */
/* Allow zero-byte read from NULL, regardless of pointer type */
if (zero_size_allowed && access_size == 0 &&
@@ -5413,40 +5742,43 @@ static struct bpf_insn_aux_data *cur_aux(struct bpf_verifier_env *env)
return &env->insn_aux_data[env->insn_idx];
}
+enum {
+ REASON_BOUNDS = -1,
+ REASON_TYPE = -2,
+ REASON_PATHS = -3,
+ REASON_LIMIT = -4,
+ REASON_STACK = -5,
+};
+
static int retrieve_ptr_limit(const struct bpf_reg_state *ptr_reg,
- u32 *ptr_limit, u8 opcode, bool off_is_neg)
+ u32 *alu_limit, bool mask_to_left)
{
- bool mask_to_left = (opcode == BPF_ADD && off_is_neg) ||
- (opcode == BPF_SUB && !off_is_neg);
- u32 off, max;
+ u32 max = 0, ptr_limit = 0;
switch (ptr_reg->type) {
case PTR_TO_STACK:
/* Offset 0 is out-of-bounds, but acceptable start for the
- * left direction, see BPF_REG_FP.
+ * left direction, see BPF_REG_FP. Also, unknown scalar
+ * offset where we would need to deal with min/max bounds is
+ * currently prohibited for unprivileged.
*/
max = MAX_BPF_STACK + mask_to_left;
- /* Indirect variable offset stack access is prohibited in
- * unprivileged mode so it's not handled here.
- */
- off = ptr_reg->off + ptr_reg->var_off.value;
- if (mask_to_left)
- *ptr_limit = MAX_BPF_STACK + off;
- else
- *ptr_limit = -off - 1;
- return *ptr_limit >= max ? -ERANGE : 0;
+ ptr_limit = -(ptr_reg->var_off.value + ptr_reg->off);
+ break;
case PTR_TO_MAP_VALUE:
max = ptr_reg->map_ptr->value_size;
- if (mask_to_left) {
- *ptr_limit = ptr_reg->umax_value + ptr_reg->off;
- } else {
- off = ptr_reg->smin_value + ptr_reg->off;
- *ptr_limit = ptr_reg->map_ptr->value_size - off - 1;
- }
- return *ptr_limit >= max ? -ERANGE : 0;
+ ptr_limit = (mask_to_left ?
+ ptr_reg->smin_value :
+ ptr_reg->umax_value) + ptr_reg->off;
+ break;
default:
- return -EINVAL;
+ return REASON_TYPE;
}
+
+ if (ptr_limit >= max)
+ return REASON_LIMIT;
+ *alu_limit = ptr_limit;
+ return 0;
}
static bool can_skip_alu_sanitation(const struct bpf_verifier_env *env,
@@ -5464,7 +5796,7 @@ static int update_alu_sanitation_state(struct bpf_insn_aux_data *aux,
if (aux->alu_state &&
(aux->alu_state != alu_state ||
aux->alu_limit != alu_limit))
- return -EACCES;
+ return REASON_PATHS;
/* Corresponding fixup done in fixup_bpf_calls(). */
aux->alu_state = alu_state;
@@ -5483,14 +5815,28 @@ static int sanitize_val_alu(struct bpf_verifier_env *env,
return update_alu_sanitation_state(aux, BPF_ALU_NON_POINTER, 0);
}
+static bool sanitize_needed(u8 opcode)
+{
+ return opcode == BPF_ADD || opcode == BPF_SUB;
+}
+
+struct bpf_sanitize_info {
+ struct bpf_insn_aux_data aux;
+ bool mask_to_left;
+};
+
static int sanitize_ptr_alu(struct bpf_verifier_env *env,
struct bpf_insn *insn,
const struct bpf_reg_state *ptr_reg,
+ const struct bpf_reg_state *off_reg,
struct bpf_reg_state *dst_reg,
- bool off_is_neg)
+ struct bpf_sanitize_info *info,
+ const bool commit_window)
{
+ struct bpf_insn_aux_data *aux = commit_window ? cur_aux(env) : &info->aux;
struct bpf_verifier_state *vstate = env->cur_state;
- struct bpf_insn_aux_data *aux = cur_aux(env);
+ bool off_is_imm = tnum_is_const(off_reg->var_off);
+ bool off_is_neg = off_reg->smin_value < 0;
bool ptr_is_dst_reg = ptr_reg == dst_reg;
u8 opcode = BPF_OP(insn->code);
u32 alu_state, alu_limit;
@@ -5508,18 +5854,47 @@ static int sanitize_ptr_alu(struct bpf_verifier_env *env,
if (vstate->speculative)
goto do_sim;
- alu_state = off_is_neg ? BPF_ALU_NEG_VALUE : 0;
- alu_state |= ptr_is_dst_reg ?
- BPF_ALU_SANITIZE_SRC : BPF_ALU_SANITIZE_DST;
+ if (!commit_window) {
+ if (!tnum_is_const(off_reg->var_off) &&
+ (off_reg->smin_value < 0) != (off_reg->smax_value < 0))
+ return REASON_BOUNDS;
- err = retrieve_ptr_limit(ptr_reg, &alu_limit, opcode, off_is_neg);
+ info->mask_to_left = (opcode == BPF_ADD && off_is_neg) ||
+ (opcode == BPF_SUB && !off_is_neg);
+ }
+
+ err = retrieve_ptr_limit(ptr_reg, &alu_limit, info->mask_to_left);
if (err < 0)
return err;
+ if (commit_window) {
+ /* In commit phase we narrow the masking window based on
+ * the observed pointer move after the simulated operation.
+ */
+ alu_state = info->aux.alu_state;
+ alu_limit = abs(info->aux.alu_limit - alu_limit);
+ } else {
+ alu_state = off_is_neg ? BPF_ALU_NEG_VALUE : 0;
+ alu_state |= off_is_imm ? BPF_ALU_IMMEDIATE : 0;
+ alu_state |= ptr_is_dst_reg ?
+ BPF_ALU_SANITIZE_SRC : BPF_ALU_SANITIZE_DST;
+ }
+
err = update_alu_sanitation_state(aux, alu_state, alu_limit);
if (err < 0)
return err;
do_sim:
+ /* If we're in commit phase, we're done here given we already
+ * pushed the truncated dst_reg into the speculative verification
+ * stack.
+ *
+ * Also, when register is a known constant, we rewrite register-based
+ * operation to immediate-based, and thus do not need masking (and as
+ * a consequence, do not need to simulate the zero-truncation either).
+ */
+ if (commit_window || off_is_imm)
+ return 0;
+
/* Simulate and find potential out-of-bounds access under
* speculative execution from truncation as a result of
* masking when off was not within expected range. If off
@@ -5536,7 +5911,112 @@ static int sanitize_ptr_alu(struct bpf_verifier_env *env,
ret = push_stack(env, env->insn_idx + 1, env->insn_idx, true);
if (!ptr_is_dst_reg && ret)
*dst_reg = tmp;
- return !ret ? -EFAULT : 0;
+ return !ret ? REASON_STACK : 0;
+}
+
+static int sanitize_err(struct bpf_verifier_env *env,
+ const struct bpf_insn *insn, int reason,
+ const struct bpf_reg_state *off_reg,
+ const struct bpf_reg_state *dst_reg)
+{
+ static const char *err = "pointer arithmetic with it prohibited for !root";
+ const char *op = BPF_OP(insn->code) == BPF_ADD ? "add" : "sub";
+ u32 dst = insn->dst_reg, src = insn->src_reg;
+
+ switch (reason) {
+ case REASON_BOUNDS:
+ verbose(env, "R%d has unknown scalar with mixed signed bounds, %s\n",
+ off_reg == dst_reg ? dst : src, err);
+ break;
+ case REASON_TYPE:
+ verbose(env, "R%d has pointer with unsupported alu operation, %s\n",
+ off_reg == dst_reg ? src : dst, err);
+ break;
+ case REASON_PATHS:
+ verbose(env, "R%d tried to %s from different maps, paths or scalars, %s\n",
+ dst, op, err);
+ break;
+ case REASON_LIMIT:
+ verbose(env, "R%d tried to %s beyond pointer bounds, %s\n",
+ dst, op, err);
+ break;
+ case REASON_STACK:
+ verbose(env, "R%d could not be pushed for speculative verification, %s\n",
+ dst, err);
+ break;
+ default:
+ verbose(env, "verifier internal error: unknown reason (%d)\n",
+ reason);
+ break;
+ }
+
+ return -EACCES;
+}
+
+/* check that stack access falls within stack limits and that 'reg' doesn't
+ * have a variable offset.
+ *
+ * Variable offset is prohibited for unprivileged mode for simplicity since it
+ * requires corresponding support in Spectre masking for stack ALU. See also
+ * retrieve_ptr_limit().
+ *
+ *
+ * 'off' includes 'reg->off'.
+ */
+static int check_stack_access_for_ptr_arithmetic(
+ struct bpf_verifier_env *env,
+ int regno,
+ const struct bpf_reg_state *reg,
+ int off)
+{
+ if (!tnum_is_const(reg->var_off)) {
+ char tn_buf[48];
+
+ tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
+ verbose(env, "R%d variable stack access prohibited for !root, var_off=%s off=%d\n",
+ regno, tn_buf, off);
+ return -EACCES;
+ }
+
+ if (off >= 0 || off < -MAX_BPF_STACK) {
+ verbose(env, "R%d stack pointer arithmetic goes out of range, "
+ "prohibited for !root; off=%d\n", regno, off);
+ return -EACCES;
+ }
+
+ return 0;
+}
+
+static int sanitize_check_bounds(struct bpf_verifier_env *env,
+ const struct bpf_insn *insn,
+ const struct bpf_reg_state *dst_reg)
+{
+ u32 dst = insn->dst_reg;
+
+ /* For unprivileged we require that resulting offset must be in bounds
+ * in order to be able to sanitize access later on.
+ */
+ if (env->bypass_spec_v1)
+ return 0;
+
+ switch (dst_reg->type) {
+ case PTR_TO_STACK:
+ if (check_stack_access_for_ptr_arithmetic(env, dst, dst_reg,
+ dst_reg->off + dst_reg->var_off.value))
+ return -EACCES;
+ break;
+ case PTR_TO_MAP_VALUE:
+ if (check_map_access(env, dst, dst_reg->off, 1, false)) {
+ verbose(env, "R%d pointer arithmetic of map value goes out of range, "
+ "prohibited for !root\n", dst);
+ return -EACCES;
+ }
+ break;
+ default:
+ break;
+ }
+
+ return 0;
}
/* Handles arithmetic on a pointer and a scalar: computes new min/max and var_off.
@@ -5557,8 +6037,9 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
smin_ptr = ptr_reg->smin_value, smax_ptr = ptr_reg->smax_value;
u64 umin_val = off_reg->umin_value, umax_val = off_reg->umax_value,
umin_ptr = ptr_reg->umin_value, umax_ptr = ptr_reg->umax_value;
- u32 dst = insn->dst_reg, src = insn->src_reg;
+ struct bpf_sanitize_info info = {};
u8 opcode = BPF_OP(insn->code);
+ u32 dst = insn->dst_reg;
int ret;
dst_reg = &regs[dst];
@@ -5606,13 +6087,6 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
verbose(env, "R%d pointer arithmetic on %s prohibited\n",
dst, reg_type_str[ptr_reg->type]);
return -EACCES;
- case PTR_TO_MAP_VALUE:
- if (!env->allow_ptr_leaks && !known && (smin_val < 0) != (smax_val < 0)) {
- verbose(env, "R%d has unknown scalar with mixed signed bounds, pointer arithmetic with it prohibited for !root\n",
- off_reg == dst_reg ? dst : src);
- return -EACCES;
- }
- fallthrough;
default:
break;
}
@@ -5630,13 +6104,15 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
/* pointer types do not carry 32-bit bounds at the moment. */
__mark_reg32_unbounded(dst_reg);
+ if (sanitize_needed(opcode)) {
+ ret = sanitize_ptr_alu(env, insn, ptr_reg, off_reg, dst_reg,
+ &info, false);
+ if (ret < 0)
+ return sanitize_err(env, insn, ret, off_reg, dst_reg);
+ }
+
switch (opcode) {
case BPF_ADD:
- ret = sanitize_ptr_alu(env, insn, ptr_reg, dst_reg, smin_val < 0);
- if (ret < 0) {
- verbose(env, "R%d tried to add from different maps, paths, or prohibited types\n", dst);
- return ret;
- }
/* We can take a fixed offset as long as it doesn't overflow
* the s32 'off' field
*/
@@ -5687,11 +6163,6 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
}
break;
case BPF_SUB:
- ret = sanitize_ptr_alu(env, insn, ptr_reg, dst_reg, smin_val < 0);
- if (ret < 0) {
- verbose(env, "R%d tried to sub from different maps, paths, or prohibited types\n", dst);
- return ret;
- }
if (dst_reg == off_reg) {
/* scalar -= pointer. Creates an unknown scalar */
verbose(env, "R%d tried to subtract pointer from scalar\n",
@@ -5772,22 +6243,13 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
__reg_deduce_bounds(dst_reg);
__reg_bound_offset(dst_reg);
- /* For unprivileged we require that resulting offset must be in bounds
- * in order to be able to sanitize access later on.
- */
- if (!env->bypass_spec_v1) {
- if (dst_reg->type == PTR_TO_MAP_VALUE &&
- check_map_access(env, dst, dst_reg->off, 1, false)) {
- verbose(env, "R%d pointer arithmetic of map value goes out of range, "
- "prohibited for !root\n", dst);
- return -EACCES;
- } else if (dst_reg->type == PTR_TO_STACK &&
- check_stack_access(env, dst_reg, dst_reg->off +
- dst_reg->var_off.value, 1)) {
- verbose(env, "R%d stack pointer arithmetic goes out of range, "
- "prohibited for !root\n", dst);
- return -EACCES;
- }
+ if (sanitize_check_bounds(env, insn, dst_reg) < 0)
+ return -EACCES;
+ if (sanitize_needed(opcode)) {
+ ret = sanitize_ptr_alu(env, insn, dst_reg, off_reg, dst_reg,
+ &info, true);
+ if (ret < 0)
+ return sanitize_err(env, insn, ret, off_reg, dst_reg);
}
return 0;
@@ -5974,11 +6436,10 @@ static void scalar32_min_max_and(struct bpf_reg_state *dst_reg,
s32 smin_val = src_reg->s32_min_value;
u32 umax_val = src_reg->u32_max_value;
- /* Assuming scalar64_min_max_and will be called so its safe
- * to skip updating register for known 32-bit case.
- */
- if (src_known && dst_known)
+ if (src_known && dst_known) {
+ __mark_reg32_known(dst_reg, var32_off.value);
return;
+ }
/* We get our minimum from the var_off, since that's inherently
* bitwise. Our maximum is the minimum of the operands' maxima.
@@ -5998,7 +6459,6 @@ static void scalar32_min_max_and(struct bpf_reg_state *dst_reg,
dst_reg->s32_min_value = dst_reg->u32_min_value;
dst_reg->s32_max_value = dst_reg->u32_max_value;
}
-
}
static void scalar_min_max_and(struct bpf_reg_state *dst_reg,
@@ -6045,11 +6505,10 @@ static void scalar32_min_max_or(struct bpf_reg_state *dst_reg,
s32 smin_val = src_reg->s32_min_value;
u32 umin_val = src_reg->u32_min_value;
- /* Assuming scalar64_min_max_or will be called so it is safe
- * to skip updating register for known case.
- */
- if (src_known && dst_known)
+ if (src_known && dst_known) {
+ __mark_reg32_known(dst_reg, var32_off.value);
return;
+ }
/* We get our maximum from the var_off, and our minimum is the
* maximum of the operands' minima
@@ -6114,11 +6573,10 @@ static void scalar32_min_max_xor(struct bpf_reg_state *dst_reg,
struct tnum var32_off = tnum_subreg(dst_reg->var_off);
s32 smin_val = src_reg->s32_min_value;
- /* Assuming scalar64_min_max_xor will be called so it is safe
- * to skip updating register for known case.
- */
- if (src_known && dst_known)
+ if (src_known && dst_known) {
+ __mark_reg32_known(dst_reg, var32_off.value);
return;
+ }
/* We get both minimum and maximum from the var32_off. */
dst_reg->u32_min_value = var32_off.value;
@@ -6381,9 +6839,8 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
s32 s32_min_val, s32_max_val;
u32 u32_min_val, u32_max_val;
u64 insn_bitness = (BPF_CLASS(insn->code) == BPF_ALU64) ? 64 : 32;
- u32 dst = insn->dst_reg;
- int ret;
bool alu32 = (BPF_CLASS(insn->code) != BPF_ALU64);
+ int ret;
smin_val = src_reg.smin_value;
smax_val = src_reg.smax_value;
@@ -6425,6 +6882,12 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
return 0;
}
+ if (sanitize_needed(opcode)) {
+ ret = sanitize_val_alu(env, insn);
+ if (ret < 0)
+ return sanitize_err(env, insn, ret, NULL, NULL);
+ }
+
/* Calculate sign/unsigned bounds and tnum for alu32 and alu64 bit ops.
* There are two classes of instructions: The first class we track both
* alu32 and alu64 sign/unsigned bounds independently this provides the
@@ -6441,21 +6904,11 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
*/
switch (opcode) {
case BPF_ADD:
- ret = sanitize_val_alu(env, insn);
- if (ret < 0) {
- verbose(env, "R%d tried to add from different pointers or scalars\n", dst);
- return ret;
- }
scalar32_min_max_add(dst_reg, &src_reg);
scalar_min_max_add(dst_reg, &src_reg);
dst_reg->var_off = tnum_add(dst_reg->var_off, src_reg.var_off);
break;
case BPF_SUB:
- ret = sanitize_val_alu(env, insn);
- if (ret < 0) {
- verbose(env, "R%d tried to sub from different pointers or scalars\n", dst);
- return ret;
- }
scalar32_min_max_sub(dst_reg, &src_reg);
scalar_min_max_sub(dst_reg, &src_reg);
dst_reg->var_off = tnum_sub(dst_reg->var_off, src_reg.var_off);
@@ -11048,7 +11501,7 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
const u8 code_sub = BPF_ALU64 | BPF_SUB | BPF_X;
struct bpf_insn insn_buf[16];
struct bpf_insn *patch = &insn_buf[0];
- bool issrc, isneg;
+ bool issrc, isneg, isimm;
u32 off_reg;
aux = &env->insn_aux_data[i + delta];
@@ -11059,28 +11512,29 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
isneg = aux->alu_state & BPF_ALU_NEG_VALUE;
issrc = (aux->alu_state & BPF_ALU_SANITIZE) ==
BPF_ALU_SANITIZE_SRC;
+ isimm = aux->alu_state & BPF_ALU_IMMEDIATE;
off_reg = issrc ? insn->src_reg : insn->dst_reg;
- if (isneg)
- *patch++ = BPF_ALU64_IMM(BPF_MUL, off_reg, -1);
- *patch++ = BPF_MOV32_IMM(BPF_REG_AX, aux->alu_limit);
- *patch++ = BPF_ALU64_REG(BPF_SUB, BPF_REG_AX, off_reg);
- *patch++ = BPF_ALU64_REG(BPF_OR, BPF_REG_AX, off_reg);
- *patch++ = BPF_ALU64_IMM(BPF_NEG, BPF_REG_AX, 0);
- *patch++ = BPF_ALU64_IMM(BPF_ARSH, BPF_REG_AX, 63);
- if (issrc) {
- *patch++ = BPF_ALU64_REG(BPF_AND, BPF_REG_AX,
- off_reg);
- insn->src_reg = BPF_REG_AX;
+ if (isimm) {
+ *patch++ = BPF_MOV32_IMM(BPF_REG_AX, aux->alu_limit);
} else {
- *patch++ = BPF_ALU64_REG(BPF_AND, off_reg,
- BPF_REG_AX);
+ if (isneg)
+ *patch++ = BPF_ALU64_IMM(BPF_MUL, off_reg, -1);
+ *patch++ = BPF_MOV32_IMM(BPF_REG_AX, aux->alu_limit);
+ *patch++ = BPF_ALU64_REG(BPF_SUB, BPF_REG_AX, off_reg);
+ *patch++ = BPF_ALU64_REG(BPF_OR, BPF_REG_AX, off_reg);
+ *patch++ = BPF_ALU64_IMM(BPF_NEG, BPF_REG_AX, 0);
+ *patch++ = BPF_ALU64_IMM(BPF_ARSH, BPF_REG_AX, 63);
+ *patch++ = BPF_ALU64_REG(BPF_AND, BPF_REG_AX, off_reg);
}
+ if (!issrc)
+ *patch++ = BPF_MOV64_REG(insn->dst_reg, insn->src_reg);
+ insn->src_reg = BPF_REG_AX;
if (isneg)
insn->code = insn->code == code_add ?
code_sub : code_add;
*patch++ = *insn;
- if (issrc && isneg)
+ if (issrc && isneg && !isimm)
*patch++ = BPF_ALU64_IMM(BPF_MUL, off_reg, -1);
cnt = patch - insn_buf;
@@ -11541,6 +11995,11 @@ static int check_struct_ops_btf_id(struct bpf_verifier_env *env)
u32 btf_id, member_idx;
const char *mname;
+ if (!prog->gpl_compatible) {
+ verbose(env, "struct ops programs must have a GPL compatible license\n");
+ return -EINVAL;
+ }
+
btf_id = prog->aux->attach_btf_id;
st_ops = bpf_struct_ops_find(btf_id);
if (!st_ops) {
@@ -11994,6 +12453,7 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr,
env->strict_alignment = false;
env->allow_ptr_leaks = bpf_allow_ptr_leaks();
+ env->allow_uninit_stack = bpf_allow_uninit_stack();
env->allow_ptr_to_map_access = bpf_allow_ptr_to_map_access();
env->bypass_spec_v1 = bpf_bypass_spec_v1();
env->bypass_spec_v4 = bpf_bypass_spec_v4();
@@ -12002,12 +12462,6 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr,
if (is_priv)
env->test_state_freq = attr->prog_flags & BPF_F_TEST_STATE_FREQ;
- if (bpf_prog_is_dev_bound(env->prog->aux)) {
- ret = bpf_prog_offload_verifier_prep(env->prog);
- if (ret)
- goto skip_full_check;
- }
-
env->explored_states = kvcalloc(state_htab_size(env),
sizeof(struct bpf_verifier_state_list *),
GFP_USER);
@@ -12031,6 +12485,12 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr,
if (ret < 0)
goto skip_full_check;
+ if (bpf_prog_is_dev_bound(env->prog->aux)) {
+ ret = bpf_prog_offload_verifier_prep(env->prog);
+ if (ret)
+ goto skip_full_check;
+ }
+
ret = check_cfg(env);
if (ret < 0)
goto skip_full_check;
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 781b9dc..0f61b14 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -50,9 +50,6 @@
#define CREATE_TRACE_POINTS
#include <trace/events/swiotlb.h>
-#define OFFSET(val,align) ((unsigned long) \
- ( (val) & ( (align) - 1)))
-
#define SLABS_PER_PAGE (1 << (PAGE_SHIFT - IO_TLB_SHIFT))
/*
@@ -176,6 +173,16 @@ void swiotlb_print_info(void)
bytes >> 20);
}
+static inline unsigned long io_tlb_offset(unsigned long val)
+{
+ return val & (IO_TLB_SEGSIZE - 1);
+}
+
+static inline unsigned long nr_slots(u64 val)
+{
+ return DIV_ROUND_UP(val, IO_TLB_SIZE);
+}
+
/*
* Early SWIOTLB allocation may be too early to allow an architecture to
* perform the desired operations. This function allows the architecture to
@@ -225,7 +232,7 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
__func__, alloc_size, PAGE_SIZE);
for (i = 0; i < io_tlb_nslabs; i++) {
- io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
+ io_tlb_list[i] = IO_TLB_SEGSIZE - io_tlb_offset(i);
io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
}
io_tlb_index = 0;
@@ -359,7 +366,7 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
goto cleanup4;
for (i = 0; i < io_tlb_nslabs; i++) {
- io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
+ io_tlb_list[i] = IO_TLB_SEGSIZE - io_tlb_offset(i);
io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
}
io_tlb_index = 0;
@@ -445,19 +452,120 @@ static void swiotlb_bounce(phys_addr_t orig_addr, phys_addr_t tlb_addr,
}
}
-phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
+#define slot_addr(start, idx) ((start) + ((idx) << IO_TLB_SHIFT))
+
+/*
+ * Return the offset into a iotlb slot required to keep the device happy.
+ */
+static unsigned int swiotlb_align_offset(struct device *dev, u64 addr)
+{
+ return addr & dma_get_min_align_mask(dev) & (IO_TLB_SIZE - 1);
+}
+
+/*
+ * Carefully handle integer overflow which can occur when boundary_mask == ~0UL.
+ */
+static inline unsigned long get_max_slots(unsigned long boundary_mask)
+{
+ if (boundary_mask == ~0UL)
+ return 1UL << (BITS_PER_LONG - IO_TLB_SHIFT);
+ return nr_slots(boundary_mask + 1);
+}
+
+static unsigned int wrap_index(unsigned int index)
+{
+ if (index >= io_tlb_nslabs)
+ return 0;
+ return index;
+}
+
+/*
+ * Find a suitable number of IO TLB entries size that will fit this request and
+ * allocate a buffer from that IO TLB pool.
+ */
+static int find_slots(struct device *dev, phys_addr_t orig_addr,
+ size_t alloc_size)
+{
+ unsigned long boundary_mask = dma_get_seg_boundary(dev);
+ dma_addr_t tbl_dma_addr =
+ phys_to_dma_unencrypted(dev, io_tlb_start) & boundary_mask;
+ unsigned long max_slots = get_max_slots(boundary_mask);
+ unsigned int iotlb_align_mask =
+ dma_get_min_align_mask(dev) & ~(IO_TLB_SIZE - 1);
+ unsigned int nslots = nr_slots(alloc_size), stride;
+ unsigned int index, wrap, count = 0, i;
+ unsigned long flags;
+
+ BUG_ON(!nslots);
+
+ /*
+ * For mappings with an alignment requirement don't bother looping to
+ * unaligned slots once we found an aligned one. For allocations of
+ * PAGE_SIZE or larger only look for page aligned allocations.
+ */
+ stride = (iotlb_align_mask >> IO_TLB_SHIFT) + 1;
+ if (alloc_size >= PAGE_SIZE)
+ stride = max(stride, stride << (PAGE_SHIFT - IO_TLB_SHIFT));
+
+ spin_lock_irqsave(&io_tlb_lock, flags);
+ if (unlikely(nslots > io_tlb_nslabs - io_tlb_used))
+ goto not_found;
+
+ index = wrap = wrap_index(ALIGN(io_tlb_index, stride));
+ do {
+ if ((slot_addr(tbl_dma_addr, index) & iotlb_align_mask) !=
+ (orig_addr & iotlb_align_mask)) {
+ index = wrap_index(index + 1);
+ continue;
+ }
+
+ /*
+ * If we find a slot that indicates we have 'nslots' number of
+ * contiguous buffers, we allocate the buffers from that slot
+ * and mark the entries as '0' indicating unavailable.
+ */
+ if (!iommu_is_span_boundary(index, nslots,
+ nr_slots(tbl_dma_addr),
+ max_slots)) {
+ if (io_tlb_list[index] >= nslots)
+ goto found;
+ }
+ index = wrap_index(index + stride);
+ } while (index != wrap);
+
+not_found:
+ spin_unlock_irqrestore(&io_tlb_lock, flags);
+ return -1;
+
+found:
+ for (i = index; i < index + nslots; i++)
+ io_tlb_list[i] = 0;
+ for (i = index - 1;
+ io_tlb_offset(i) != IO_TLB_SEGSIZE - 1 &&
+ io_tlb_list[i]; i--)
+ io_tlb_list[i] = ++count;
+
+ /*
+ * Update the indices to avoid searching in the next round.
+ */
+ if (index + nslots < io_tlb_nslabs)
+ io_tlb_index = index + nslots;
+ else
+ io_tlb_index = 0;
+ io_tlb_used += nslots;
+
+ spin_unlock_irqrestore(&io_tlb_lock, flags);
+ return index;
+}
+
+phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
size_t mapping_size, size_t alloc_size,
enum dma_data_direction dir, unsigned long attrs)
{
- dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(hwdev, io_tlb_start);
- unsigned long flags;
+ unsigned int offset = swiotlb_align_offset(dev, orig_addr);
+ unsigned int i;
+ int index;
phys_addr_t tlb_addr;
- unsigned int nslots, stride, index, wrap;
- int i;
- unsigned long mask;
- unsigned long offset_slots;
- unsigned long max_slots;
- unsigned long tmp_io_tlb_used;
if (no_iotlb_memory)
panic("Can not allocate SWIOTLB buffer earlier and can't now provide you with the DMA bounce buffer");
@@ -466,111 +574,32 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
pr_warn_once("Memory encryption is active and system is using DMA bounce buffers\n");
if (mapping_size > alloc_size) {
- dev_warn_once(hwdev, "Invalid sizes (mapping: %zd bytes, alloc: %zd bytes)",
+ dev_warn_once(dev, "Invalid sizes (mapping: %zd bytes, alloc: %zd bytes)",
mapping_size, alloc_size);
return (phys_addr_t)DMA_MAPPING_ERROR;
}
- mask = dma_get_seg_boundary(hwdev);
-
- tbl_dma_addr &= mask;
-
- offset_slots = ALIGN(tbl_dma_addr, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
-
- /*
- * Carefully handle integer overflow which can occur when mask == ~0UL.
- */
- max_slots = mask + 1
- ? ALIGN(mask + 1, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT
- : 1UL << (BITS_PER_LONG - IO_TLB_SHIFT);
-
- /*
- * For mappings greater than or equal to a page, we limit the stride
- * (and hence alignment) to a page size.
- */
- nslots = ALIGN(alloc_size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
- if (alloc_size >= PAGE_SIZE)
- stride = (1 << (PAGE_SHIFT - IO_TLB_SHIFT));
- else
- stride = 1;
-
- BUG_ON(!nslots);
-
- /*
- * Find suitable number of IO TLB entries size that will fit this
- * request and allocate a buffer from that IO TLB pool.
- */
- spin_lock_irqsave(&io_tlb_lock, flags);
-
- if (unlikely(nslots > io_tlb_nslabs - io_tlb_used))
- goto not_found;
-
- index = ALIGN(io_tlb_index, stride);
- if (index >= io_tlb_nslabs)
- index = 0;
- wrap = index;
-
- do {
- while (iommu_is_span_boundary(index, nslots, offset_slots,
- max_slots)) {
- index += stride;
- if (index >= io_tlb_nslabs)
- index = 0;
- if (index == wrap)
- goto not_found;
- }
-
- /*
- * If we find a slot that indicates we have 'nslots' number of
- * contiguous buffers, we allocate the buffers from that slot
- * and mark the entries as '0' indicating unavailable.
- */
- if (io_tlb_list[index] >= nslots) {
- int count = 0;
-
- for (i = index; i < (int) (index + nslots); i++)
- io_tlb_list[i] = 0;
- for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE - 1) && io_tlb_list[i]; i--)
- io_tlb_list[i] = ++count;
- tlb_addr = io_tlb_start + (index << IO_TLB_SHIFT);
-
- /*
- * Update the indices to avoid searching in the next
- * round.
- */
- io_tlb_index = ((index + nslots) < io_tlb_nslabs
- ? (index + nslots) : 0);
-
- goto found;
- }
- index += stride;
- if (index >= io_tlb_nslabs)
- index = 0;
- } while (index != wrap);
-
-not_found:
- tmp_io_tlb_used = io_tlb_used;
-
- spin_unlock_irqrestore(&io_tlb_lock, flags);
- if (!(attrs & DMA_ATTR_NO_WARN) && printk_ratelimit())
- dev_warn(hwdev, "swiotlb buffer is full (sz: %zd bytes), total %lu (slots), used %lu (slots)\n",
- alloc_size, io_tlb_nslabs, tmp_io_tlb_used);
- return (phys_addr_t)DMA_MAPPING_ERROR;
-found:
- io_tlb_used += nslots;
- spin_unlock_irqrestore(&io_tlb_lock, flags);
+ index = find_slots(dev, orig_addr, alloc_size + offset);
+ if (index == -1) {
+ if (!(attrs & DMA_ATTR_NO_WARN))
+ dev_warn_ratelimited(dev,
+ "swiotlb buffer is full (sz: %zd bytes), total %lu (slots), used %lu (slots)\n",
+ alloc_size, io_tlb_nslabs, io_tlb_used);
+ return (phys_addr_t)DMA_MAPPING_ERROR;
+ }
/*
* Save away the mapping from the original address to the DMA address.
* This is needed when we sync the memory. Then we sync the buffer if
* needed.
*/
- for (i = 0; i < nslots; i++)
- io_tlb_orig_addr[index+i] = orig_addr + (i << IO_TLB_SHIFT);
+ for (i = 0; i < nr_slots(alloc_size + offset); i++)
+ io_tlb_orig_addr[index + i] = slot_addr(orig_addr, i);
+
+ tlb_addr = slot_addr(io_tlb_start, index) + offset;
if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
(dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
-
return tlb_addr;
}
@@ -582,8 +611,9 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
enum dma_data_direction dir, unsigned long attrs)
{
unsigned long flags;
- int i, count, nslots = ALIGN(alloc_size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
- int index = (tlb_addr - io_tlb_start) >> IO_TLB_SHIFT;
+ unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
+ int i, count, nslots = nr_slots(alloc_size + offset);
+ int index = (tlb_addr - offset - io_tlb_start) >> IO_TLB_SHIFT;
phys_addr_t orig_addr = io_tlb_orig_addr[index];
/*
@@ -601,26 +631,29 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
* with slots below and above the pool being returned.
*/
spin_lock_irqsave(&io_tlb_lock, flags);
- {
- count = ((index + nslots) < ALIGN(index + 1, IO_TLB_SEGSIZE) ?
- io_tlb_list[index + nslots] : 0);
- /*
- * Step 1: return the slots to the free list, merging the
- * slots with superceeding slots
- */
- for (i = index + nslots - 1; i >= index; i--) {
- io_tlb_list[i] = ++count;
- io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
- }
- /*
- * Step 2: merge the returned slots with the preceding slots,
- * if available (non zero)
- */
- for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
- io_tlb_list[i] = ++count;
+ if (index + nslots < ALIGN(index + 1, IO_TLB_SEGSIZE))
+ count = io_tlb_list[index + nslots];
+ else
+ count = 0;
- io_tlb_used -= nslots;
+ /*
+ * Step 1: return the slots to the free list, merging the slots with
+ * superceeding slots
+ */
+ for (i = index + nslots - 1; i >= index; i--) {
+ io_tlb_list[i] = ++count;
+ io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
}
+
+ /*
+ * Step 2: merge the returned slots with the preceding slots, if
+ * available (non zero)
+ */
+ for (i = index - 1;
+ io_tlb_offset(i) != IO_TLB_SEGSIZE - 1 && io_tlb_list[i];
+ i--)
+ io_tlb_list[i] = ++count;
+ io_tlb_used -= nslots;
spin_unlock_irqrestore(&io_tlb_lock, flags);
}
@@ -633,7 +666,6 @@ void swiotlb_tbl_sync_single(struct device *hwdev, phys_addr_t tlb_addr,
if (orig_addr == INVALID_PHYS_ADDR)
return;
- orig_addr += (unsigned long)tlb_addr & ((1 << IO_TLB_SHIFT) - 1);
switch (target) {
case SYNC_FOR_CPU:
@@ -691,7 +723,7 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t paddr, size_t size,
size_t swiotlb_max_mapping_size(struct device *dev)
{
- return ((size_t)1 << IO_TLB_SHIFT) * IO_TLB_SEGSIZE;
+ return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE;
}
bool is_swiotlb_active(void)
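A standalone sketch (illustration only, not part of the patch) of the slot arithmetic performed by the io_tlb_offset()/nr_slots()/slot_addr() helpers introduced above, assuming the usual IO_TLB_SHIFT=11 (2 KiB slots) and IO_TLB_SEGSIZE=128 values; DIV_ROUND_UP is open-coded for userspace.

#include <stdint.h>
#include <stdio.h>

#define IO_TLB_SHIFT	11
#define IO_TLB_SIZE	(1UL << IO_TLB_SHIFT)	/* 2 KiB per slot */
#define IO_TLB_SEGSIZE	128

static unsigned long io_tlb_offset(unsigned long val)
{
	return val & (IO_TLB_SEGSIZE - 1);	/* index within a 128-slot segment */
}

static unsigned long nr_slots(uint64_t val)
{
	return (val + IO_TLB_SIZE - 1) / IO_TLB_SIZE;	/* DIV_ROUND_UP */
}

static uint64_t slot_addr(uint64_t start, unsigned long idx)
{
	return start + ((uint64_t)idx << IO_TLB_SHIFT);
}

int main(void)
{
	printf("offset of slot 130 in its segment: %lu\n", io_tlb_offset(130)); /* 2 */
	printf("slots needed for 5000 bytes: %lu\n", nr_slots(5000));           /* 3 */
	printf("address of slot 3 from base 0x1000: %#llx\n",
	       (unsigned long long)slot_addr(0x1000, 3));                       /* 0x2800 */
	return 0;
}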
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 4af161b..45fa716 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -2209,6 +2209,26 @@ static void perf_group_detach(struct perf_event *event)
perf_event__header_size(leader);
}
+static void sync_child_event(struct perf_event *child_event);
+
+static void perf_child_detach(struct perf_event *event)
+{
+ struct perf_event *parent_event = event->parent;
+
+ if (!(event->attach_state & PERF_ATTACH_CHILD))
+ return;
+
+ event->attach_state &= ~PERF_ATTACH_CHILD;
+
+ if (WARN_ON_ONCE(!parent_event))
+ return;
+
+ lockdep_assert_held(&parent_event->child_mutex);
+
+ sync_child_event(event);
+ list_del_init(&event->child_list);
+}
+
static bool is_orphaned_event(struct perf_event *event)
{
return event->state == PERF_EVENT_STATE_DEAD;
@@ -2316,6 +2336,7 @@ group_sched_out(struct perf_event *group_event,
}
#define DETACH_GROUP 0x01UL
+#define DETACH_CHILD 0x02UL
/*
* Cross CPU call to remove a performance event
@@ -2339,6 +2360,8 @@ __perf_remove_from_context(struct perf_event *event,
event_sched_out(event, cpuctx, ctx);
if (flags & DETACH_GROUP)
perf_group_detach(event);
+ if (flags & DETACH_CHILD)
+ perf_child_detach(event);
list_del_event(event, ctx);
if (!ctx->nr_events && ctx->is_active) {
@@ -2367,25 +2390,21 @@ static void perf_remove_from_context(struct perf_event *event, unsigned long fla
lockdep_assert_held(&ctx->mutex);
- event_function_call(event, __perf_remove_from_context, (void *)flags);
-
/*
- * The above event_function_call() can NO-OP when it hits
- * TASK_TOMBSTONE. In that case we must already have been detached
- * from the context (by perf_event_exit_event()) but the grouping
- * might still be in-tact.
+ * Because of perf_event_exit_task(), perf_remove_from_context() ought
+ * to work in the face of TASK_TOMBSTONE, unlike every other
+ * event_function_call() user.
*/
- WARN_ON_ONCE(event->attach_state & PERF_ATTACH_CONTEXT);
- if ((flags & DETACH_GROUP) &&
- (event->attach_state & PERF_ATTACH_GROUP)) {
- /*
- * Since in that case we cannot possibly be scheduled, simply
- * detach now.
- */
- raw_spin_lock_irq(&ctx->lock);
- perf_group_detach(event);
+ raw_spin_lock_irq(&ctx->lock);
+ if (!ctx->is_active) {
+ __perf_remove_from_context(event, __get_cpu_context(ctx),
+ ctx, (void *)flags);
raw_spin_unlock_irq(&ctx->lock);
+ return;
}
+ raw_spin_unlock_irq(&ctx->lock);
+
+ event_function_call(event, __perf_remove_from_context, (void *)flags);
}
/*
@@ -11705,12 +11724,12 @@ SYSCALL_DEFINE5(perf_event_open,
return err;
}
- err = security_locked_down(LOCKDOWN_PERF);
- if (err && (attr.sample_type & PERF_SAMPLE_REGS_INTR))
- /* REGS_INTR can leak data, lockdown must prevent this */
- return err;
-
- err = 0;
+ /* REGS_INTR can leak data, lockdown must prevent this */
+ if (attr.sample_type & PERF_SAMPLE_REGS_INTR) {
+ err = security_locked_down(LOCKDOWN_PERF);
+ if (err)
+ return err;
+ }
/*
* In cgroup mode, the pid argument is used to pass the fd
@@ -12249,14 +12268,17 @@ void perf_pmu_migrate_context(struct pmu *pmu, int src_cpu, int dst_cpu)
}
EXPORT_SYMBOL_GPL(perf_pmu_migrate_context);
-static void sync_child_event(struct perf_event *child_event,
- struct task_struct *child)
+static void sync_child_event(struct perf_event *child_event)
{
struct perf_event *parent_event = child_event->parent;
u64 child_val;
- if (child_event->attr.inherit_stat)
- perf_event_read_event(child_event, child);
+ if (child_event->attr.inherit_stat) {
+ struct task_struct *task = child_event->ctx->task;
+
+ if (task && task != TASK_TOMBSTONE)
+ perf_event_read_event(child_event, task);
+ }
child_val = perf_event_count(child_event);
@@ -12271,60 +12293,53 @@ static void sync_child_event(struct perf_event *child_event,
}
static void
-perf_event_exit_event(struct perf_event *child_event,
- struct perf_event_context *child_ctx,
- struct task_struct *child)
+perf_event_exit_event(struct perf_event *event, struct perf_event_context *ctx)
{
- struct perf_event *parent_event = child_event->parent;
+ struct perf_event *parent_event = event->parent;
+ unsigned long detach_flags = 0;
+
+ if (parent_event) {
+ /*
+ * Do not destroy the 'original' grouping; because of the
+ * context switch optimization the original events could've
+ * ended up in a random child task.
+ *
+ * If we were to destroy the original group, all group related
+ * operations would cease to function properly after this
+ * random child dies.
+ *
+ * Do destroy all inherited groups, we don't care about those
+ * and being thorough is better.
+ */
+ detach_flags = DETACH_GROUP | DETACH_CHILD;
+ mutex_lock(&parent_event->child_mutex);
+ }
+
+ perf_remove_from_context(event, detach_flags);
+
+ raw_spin_lock_irq(&ctx->lock);
+ if (event->state > PERF_EVENT_STATE_EXIT)
+ perf_event_set_state(event, PERF_EVENT_STATE_EXIT);
+ raw_spin_unlock_irq(&ctx->lock);
/*
- * Do not destroy the 'original' grouping; because of the context
- * switch optimization the original events could've ended up in a
- * random child task.
- *
- * If we were to destroy the original group, all group related
- * operations would cease to function properly after this random
- * child dies.
- *
- * Do destroy all inherited groups, we don't care about those
- * and being thorough is better.
+ * Child events can be freed.
*/
- raw_spin_lock_irq(&child_ctx->lock);
- WARN_ON_ONCE(child_ctx->is_active);
-
- if (parent_event)
- perf_group_detach(child_event);
- list_del_event(child_event, child_ctx);
- perf_event_set_state(child_event, PERF_EVENT_STATE_EXIT); /* is_event_hup() */
- raw_spin_unlock_irq(&child_ctx->lock);
+ if (parent_event) {
+ mutex_unlock(&parent_event->child_mutex);
+ /*
+ * Kick perf_poll() for is_event_hup();
+ */
+ perf_event_wakeup(parent_event);
+ free_event(event);
+ put_event(parent_event);
+ return;
+ }
/*
* Parent events are governed by their filedesc, retain them.
*/
- if (!parent_event) {
- perf_event_wakeup(child_event);
- return;
- }
- /*
- * Child events can be cleaned up.
- */
-
- sync_child_event(child_event, child);
-
- /*
- * Remove this event from the parent's list
- */
- WARN_ON_ONCE(parent_event->ctx->parent_ctx);
- mutex_lock(&parent_event->child_mutex);
- list_del_init(&child_event->child_list);
- mutex_unlock(&parent_event->child_mutex);
-
- /*
- * Kick perf_poll() for is_event_hup().
- */
- perf_event_wakeup(parent_event);
- free_event(child_event);
- put_event(parent_event);
+ perf_event_wakeup(event);
}
static void perf_event_exit_task_context(struct task_struct *child, int ctxn)
@@ -12381,7 +12396,7 @@ static void perf_event_exit_task_context(struct task_struct *child, int ctxn)
perf_event_task(child, child_ctx, 0);
list_for_each_entry_safe(child_event, next, &child_ctx->event_list, event_entry)
- perf_event_exit_event(child_event, child_ctx, child);
+ perf_event_exit_event(child_event, child_ctx);
mutex_unlock(&child_ctx->mutex);
@@ -12641,6 +12656,7 @@ inherit_event(struct perf_event *parent_event,
*/
raw_spin_lock_irqsave(&child_ctx->lock, flags);
add_event_to_ctx(child_event, child_ctx);
+ child_event->attach_state |= PERF_ATTACH_CHILD;
raw_spin_unlock_irqrestore(&child_ctx->lock, flags);
/*
diff --git a/kernel/futex.c b/kernel/futex.c
index 851a144..30048a7 100644
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -3771,8 +3771,7 @@ long do_futex(u32 __user *uaddr, int op, u32 val, ktime_t *timeout,
if (op & FUTEX_CLOCK_REALTIME) {
flags |= FLAGS_CLOCKRT;
- if (cmd != FUTEX_WAIT && cmd != FUTEX_WAIT_BITSET && \
- cmd != FUTEX_WAIT_REQUEUE_PI)
+ if (cmd != FUTEX_WAIT_BITSET && cmd != FUTEX_WAIT_REQUEUE_PI)
return -ENOSYS;
}
@@ -3844,7 +3843,7 @@ SYSCALL_DEFINE6(futex, u32 __user *, uaddr, int, op, u32, val,
t = timespec64_to_ktime(ts);
if (cmd == FUTEX_WAIT || cmd == GFUTEX_SWAP)
t = ktime_add_safe(ktime_get(), t);
- else if (!(op & FUTEX_CLOCK_REALTIME))
+ else if (cmd != FUTEX_LOCK_PI && !(op & FUTEX_CLOCK_REALTIME))
t = timens_ktime_to_host(CLOCK_MONOTONIC, t);
tp = &t;
}
@@ -4038,7 +4037,7 @@ SYSCALL_DEFINE6(futex_time32, u32 __user *, uaddr, int, op, u32, val,
t = timespec64_to_ktime(ts);
if (cmd == FUTEX_WAIT)
t = ktime_add_safe(ktime_get(), t);
- else if (!(op & FUTEX_CLOCK_REALTIME))
+ else if (cmd != FUTEX_LOCK_PI && !(op & FUTEX_CLOCK_REALTIME))
t = timens_ktime_to_host(CLOCK_MONOTONIC, t);
tp = &t;
}
diff --git a/kernel/gcov/clang.c b/kernel/gcov/clang.c
index 8743150..c466c7f 100644
--- a/kernel/gcov/clang.c
+++ b/kernel/gcov/clang.c
@@ -70,7 +70,9 @@ struct gcov_fn_info {
u32 ident;
u32 checksum;
+#if CONFIG_CLANG_VERSION < 110000
u8 use_extra_checksum;
+#endif
u32 cfg_checksum;
u32 num_counters;
@@ -145,10 +147,8 @@ void llvm_gcda_emit_function(u32 ident, const char *function_name,
list_add_tail(&info->head, &current_info->functions);
}
-EXPORT_SYMBOL(llvm_gcda_emit_function);
#else
-void llvm_gcda_emit_function(u32 ident, u32 func_checksum,
- u8 use_extra_checksum, u32 cfg_checksum)
+void llvm_gcda_emit_function(u32 ident, u32 func_checksum, u32 cfg_checksum)
{
struct gcov_fn_info *info = kzalloc(sizeof(*info), GFP_KERNEL);
@@ -158,12 +158,11 @@ void llvm_gcda_emit_function(u32 ident, u32 func_checksum,
INIT_LIST_HEAD(&info->head);
info->ident = ident;
info->checksum = func_checksum;
- info->use_extra_checksum = use_extra_checksum;
info->cfg_checksum = cfg_checksum;
list_add_tail(&info->head, &current_info->functions);
}
-EXPORT_SYMBOL(llvm_gcda_emit_function);
#endif
+EXPORT_SYMBOL(llvm_gcda_emit_function);
void llvm_gcda_emit_arcs(u32 num_counters, u64 *counters)
{
@@ -293,11 +292,16 @@ int gcov_info_is_compatible(struct gcov_info *info1, struct gcov_info *info2)
!list_is_last(&fn_ptr2->head, &info2->functions)) {
if (fn_ptr1->checksum != fn_ptr2->checksum)
return false;
+#if CONFIG_CLANG_VERSION < 110000
if (fn_ptr1->use_extra_checksum != fn_ptr2->use_extra_checksum)
return false;
if (fn_ptr1->use_extra_checksum &&
fn_ptr1->cfg_checksum != fn_ptr2->cfg_checksum)
return false;
+#else
+ if (fn_ptr1->cfg_checksum != fn_ptr2->cfg_checksum)
+ return false;
+#endif
fn_ptr1 = list_next_entry(fn_ptr1, head);
fn_ptr2 = list_next_entry(fn_ptr2, head);
}
@@ -529,17 +533,22 @@ static size_t convert_to_gcda(char *buffer, struct gcov_info *info)
list_for_each_entry(fi_ptr, &info->functions, head) {
u32 i;
- u32 len = 2;
-
- if (fi_ptr->use_extra_checksum)
- len++;
pos += store_gcov_u32(buffer, pos, GCOV_TAG_FUNCTION);
- pos += store_gcov_u32(buffer, pos, len);
+#if CONFIG_CLANG_VERSION < 110000
+ pos += store_gcov_u32(buffer, pos,
+ fi_ptr->use_extra_checksum ? 3 : 2);
+#else
+ pos += store_gcov_u32(buffer, pos, 3);
+#endif
pos += store_gcov_u32(buffer, pos, fi_ptr->ident);
pos += store_gcov_u32(buffer, pos, fi_ptr->checksum);
+#if CONFIG_CLANG_VERSION < 110000
if (fi_ptr->use_extra_checksum)
pos += store_gcov_u32(buffer, pos, fi_ptr->cfg_checksum);
+#else
+ pos += store_gcov_u32(buffer, pos, fi_ptr->cfg_checksum);
+#endif
pos += store_gcov_u32(buffer, pos, GCOV_TAG_COUNTER_BASE);
pos += store_gcov_u32(buffer, pos, fi_ptr->num_counters * 2);
diff --git a/kernel/irq/matrix.c b/kernel/irq/matrix.c
index 651a4ad..8e58685 100644
--- a/kernel/irq/matrix.c
+++ b/kernel/irq/matrix.c
@@ -423,7 +423,9 @@ void irq_matrix_free(struct irq_matrix *m, unsigned int cpu,
if (WARN_ON_ONCE(bit < m->alloc_start || bit >= m->alloc_end))
return;
- clear_bit(bit, cm->alloc_map);
+ if (WARN_ON_ONCE(!test_and_clear_bit(bit, cm->alloc_map)))
+ return;
+
cm->allocated--;
if(managed)
cm->managed_allocated--;
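A userspace sketch (illustration only) of the double-free protection the irq_matrix change above adds: clearing a bit that is already clear would silently decrement the allocation counter a second time, while test-and-clear lets the caller detect it and bail out. A plain unsigned long and a non-atomic helper stand in for the per-CPU alloc_map and the kernel's atomic test_and_clear_bit().

#include <stdbool.h>
#include <stdio.h>

static bool test_and_clear_bit(unsigned int bit, unsigned long *map)
{
	bool was_set = (*map >> bit) & 1;

	*map &= ~(1UL << bit);
	return was_set;
}

int main(void)
{
	unsigned long alloc_map = 1UL << 3;
	unsigned int allocated = 1;

	if (test_and_clear_bit(3, &alloc_map))
		allocated--;			/* first free: ok */
	if (test_and_clear_bit(3, &alloc_map))
		allocated--;			/* double free: detected, no underflow */

	printf("allocated = %u\n", allocated);	/* 0 */
	return 0;
}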
diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
index 3bf98db..23e7acb 100644
--- a/kernel/kcsan/core.c
+++ b/kernel/kcsan/core.c
@@ -639,8 +639,6 @@ void __init kcsan_init(void)
BUG_ON(!in_task());
- kcsan_debugfs_init();
-
for_each_possible_cpu(cpu)
per_cpu(kcsan_rand_state, cpu) = (u32)get_cycles();
diff --git a/kernel/kcsan/debugfs.c b/kernel/kcsan/debugfs.c
index 3c8093a..62a52be 100644
--- a/kernel/kcsan/debugfs.c
+++ b/kernel/kcsan/debugfs.c
@@ -261,7 +261,10 @@ static const struct file_operations debugfs_ops =
.release = single_release
};
-void __init kcsan_debugfs_init(void)
+static int __init kcsan_debugfs_init(void)
{
debugfs_create_file("kcsan", 0644, NULL, NULL, &debugfs_ops);
+ return 0;
}
+
+late_initcall(kcsan_debugfs_init);
diff --git a/kernel/kcsan/kcsan.h b/kernel/kcsan/kcsan.h
index 8d4bf34..87ccdb3 100644
--- a/kernel/kcsan/kcsan.h
+++ b/kernel/kcsan/kcsan.h
@@ -31,11 +31,6 @@ void kcsan_save_irqtrace(struct task_struct *task);
void kcsan_restore_irqtrace(struct task_struct *task);
/*
- * Initialize debugfs file.
- */
-void kcsan_debugfs_init(void);
-
-/*
* Statistics counters displayed via debugfs; should only be modified in
* slow-paths.
*/
diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
index 7825adc..aea9104 100644
--- a/kernel/kexec_file.c
+++ b/kernel/kexec_file.c
@@ -740,8 +740,10 @@ static int kexec_calculate_store_digests(struct kimage *image)
sha_region_sz = KEXEC_SEGMENT_MAX * sizeof(struct kexec_sha_region);
sha_regions = vzalloc(sha_region_sz);
- if (!sha_regions)
+ if (!sha_regions) {
+ ret = -ENOMEM;
goto out_free_desc;
+ }
desc->tfm = tfm;
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 780012eb2..858b96b 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -705,7 +705,7 @@ static void print_lock_name(struct lock_class *class)
printk(KERN_CONT " (");
__print_lock_name(class);
- printk(KERN_CONT "){%s}-{%hd:%hd}", usage,
+ printk(KERN_CONT "){%s}-{%d:%d}", usage,
class->wait_type_outer ?: class->wait_type_inner,
class->wait_type_inner);
}
@@ -930,7 +930,8 @@ static bool assign_lock_key(struct lockdep_map *lock)
/* Debug-check: all keys must be persistent! */
debug_locks_off();
pr_err("INFO: trying to register non-static key.\n");
- pr_err("the code is fine but needs lockdep annotation.\n");
+ pr_err("The code is fine but needs lockdep annotation, or maybe\n");
+ pr_err("you didn't initialize this object before use?\n");
pr_err("turning off the locking correctness validator.\n");
dump_stack();
return false;
@@ -5663,7 +5664,7 @@ void lock_contended(struct lockdep_map *lock, unsigned long ip)
{
unsigned long flags;
- trace_lock_acquired(lock, ip);
+ trace_lock_contended(lock, ip);
if (unlikely(!lock_stat || !lockdep_enabled()))
return;
@@ -5681,7 +5682,7 @@ void lock_acquired(struct lockdep_map *lock, unsigned long ip)
{
unsigned long flags;
- trace_lock_contended(lock, ip);
+ trace_lock_acquired(lock, ip);
if (unlikely(!lock_stat || !lockdep_enabled()))
return;
diff --git a/kernel/locking/mutex-debug.c b/kernel/locking/mutex-debug.c
index a7276aa..db93015 100644
--- a/kernel/locking/mutex-debug.c
+++ b/kernel/locking/mutex-debug.c
@@ -57,7 +57,7 @@ void debug_mutex_add_waiter(struct mutex *lock, struct mutex_waiter *waiter,
task->blocked_on = waiter;
}
-void mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter,
+void debug_mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter,
struct task_struct *task)
{
DEBUG_LOCKS_WARN_ON(list_empty(&waiter->list));
@@ -65,7 +65,7 @@ void mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter,
DEBUG_LOCKS_WARN_ON(task->blocked_on != waiter);
task->blocked_on = NULL;
- list_del_init(&waiter->list);
+ INIT_LIST_HEAD(&waiter->list);
waiter->task = NULL;
}
diff --git a/kernel/locking/mutex-debug.h b/kernel/locking/mutex-debug.h
index 1edd3f4..53e631e 100644
--- a/kernel/locking/mutex-debug.h
+++ b/kernel/locking/mutex-debug.h
@@ -22,7 +22,7 @@ extern void debug_mutex_free_waiter(struct mutex_waiter *waiter);
extern void debug_mutex_add_waiter(struct mutex *lock,
struct mutex_waiter *waiter,
struct task_struct *task);
-extern void mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter,
+extern void debug_mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter,
struct task_struct *task);
extern void debug_mutex_unlock(struct mutex *lock);
extern void debug_mutex_init(struct mutex *lock, const char *name,
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 2c25b83..15ac7c4b 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -204,7 +204,7 @@ static inline bool __mutex_waiter_is_first(struct mutex *lock, struct mutex_wait
* Add @waiter to a given location in the lock wait_list and set the
* FLAG_WAITERS flag if it's the first waiter.
*/
-static void __sched
+static void
__mutex_add_waiter(struct mutex *lock, struct mutex_waiter *waiter,
struct list_head *list)
{
@@ -215,6 +215,16 @@ __mutex_add_waiter(struct mutex *lock, struct mutex_waiter *waiter,
__mutex_set_flag(lock, MUTEX_FLAG_WAITERS);
}
+static void
+__mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter)
+{
+ list_del(&waiter->list);
+ if (likely(list_empty(&lock->wait_list)))
+ __mutex_clear_flag(lock, MUTEX_FLAGS);
+
+ debug_mutex_remove_waiter(lock, waiter, current);
+}
+
/*
* Give up ownership to a specific task, when @task = NULL, this is equivalent
* to a regular unlock. Sets PICKUP on a handoff, clears HANDOF, preserves
@@ -1071,9 +1081,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
__ww_mutex_check_waiters(lock, ww_ctx);
}
- mutex_remove_waiter(lock, &waiter, current);
- if (likely(list_empty(&lock->wait_list)))
- __mutex_clear_flag(lock, MUTEX_FLAGS);
+ __mutex_remove_waiter(lock, &waiter);
debug_mutex_free_waiter(&waiter);
@@ -1090,7 +1098,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
err:
__set_current_state(TASK_RUNNING);
- mutex_remove_waiter(lock, &waiter, current);
+ __mutex_remove_waiter(lock, &waiter);
err_early_kill:
spin_unlock(&lock->wait_lock);
debug_mutex_free_waiter(&waiter);
diff --git a/kernel/locking/mutex.h b/kernel/locking/mutex.h
index 1c2287d..f0c710b1 100644
--- a/kernel/locking/mutex.h
+++ b/kernel/locking/mutex.h
@@ -10,12 +10,10 @@
* !CONFIG_DEBUG_MUTEXES case. Most of them are NOPs:
*/
-#define mutex_remove_waiter(lock, waiter, task) \
- __list_del((waiter)->list.prev, (waiter)->list.next)
-
#define debug_mutex_wake_waiter(lock, waiter) do { } while (0)
#define debug_mutex_free_waiter(waiter) do { } while (0)
#define debug_mutex_add_waiter(lock, waiter, ti) do { } while (0)
+#define debug_mutex_remove_waiter(lock, waiter, ti) do { } while (0)
#define debug_mutex_unlock(lock) do { } while (0)
#define debug_mutex_init(lock, name, key) do { } while (0)
diff --git a/kernel/locking/qrwlock.c b/kernel/locking/qrwlock.c
index fe9ca92..909b0bf 100644
--- a/kernel/locking/qrwlock.c
+++ b/kernel/locking/qrwlock.c
@@ -61,6 +61,8 @@ EXPORT_SYMBOL(queued_read_lock_slowpath);
*/
void queued_write_lock_slowpath(struct qrwlock *lock)
{
+ int cnts;
+
/* Put the writer into the wait queue */
arch_spin_lock(&lock->wait_lock);
@@ -74,9 +76,8 @@ void queued_write_lock_slowpath(struct qrwlock *lock)
/* When no more readers or writers, set the locked flag */
do {
- atomic_cond_read_acquire(&lock->cnts, VAL == _QW_WAITING);
- } while (atomic_cmpxchg_relaxed(&lock->cnts, _QW_WAITING,
- _QW_LOCKED) != _QW_WAITING);
+ cnts = atomic_cond_read_relaxed(&lock->cnts, VAL == _QW_WAITING);
+ } while (!atomic_try_cmpxchg_acquire(&lock->cnts, &cnts, _QW_LOCKED));
unlock:
arch_spin_unlock(&lock->wait_lock);
}
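A sketch (illustration only, not part of the patch) of the pattern the qrwlock change above switches to: a relaxed wait for the cnts word, then a try-cmpxchg that provides acquire ordering only on the successful transition to locked. C11 atomics stand in for the kernel atomic_t helpers; the constants match the generic qrwlock layout.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define _QW_WAITING	0x100	/* a writer is waiting */
#define _QW_LOCKED	0x0ff	/* a writer holds the lock */

static void write_lock_slowpath_tail(atomic_int *cnts)
{
	int c;

	do {
		/* wait until only the waiting bit is left (relaxed reads) */
		do {
			c = atomic_load_explicit(cnts, memory_order_relaxed);
		} while (c != _QW_WAITING);
		/* acquire semantics only when the cmpxchg succeeds */
	} while (!atomic_compare_exchange_weak_explicit(cnts, &c, _QW_LOCKED,
							memory_order_acquire,
							memory_order_relaxed));
}

int main(void)
{
	atomic_int cnts = _QW_WAITING;	/* pretend all readers already left */

	write_lock_slowpath_tail(&cnts);
	printf("cnts now %#x (locked)\n", atomic_load(&cnts));	/* 0xff */
	return 0;
}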
diff --git a/kernel/ptrace.c b/kernel/ptrace.c
index 79de129..eb4d04c 100644
--- a/kernel/ptrace.c
+++ b/kernel/ptrace.c
@@ -169,6 +169,21 @@ void __ptrace_unlink(struct task_struct *child)
spin_unlock(&child->sighand->siglock);
}
+static bool looks_like_a_spurious_pid(struct task_struct *task)
+{
+ if (task->exit_code != ((PTRACE_EVENT_EXEC << 8) | SIGTRAP))
+ return false;
+
+ if (task_pid_vnr(task) == task->ptrace_message)
+ return false;
+ /*
+ * The tracee changed its pid but the PTRACE_EVENT_EXEC event
+ * was not wait()'ed, most probably debugger targets the old
+ * leader which was destroyed in de_thread().
+ */
+ return true;
+}
+
/* Ensure that nothing can wake it up, even SIGKILL */
static bool ptrace_freeze_traced(struct task_struct *task)
{
@@ -179,7 +194,8 @@ static bool ptrace_freeze_traced(struct task_struct *task)
return ret;
spin_lock_irq(&task->sighand->siglock);
- if (task_is_traced(task) && !__fatal_signal_pending(task)) {
+ if (task_is_traced(task) && !looks_like_a_spurious_pid(task) &&
+ !__fatal_signal_pending(task)) {
task->state = __TASK_TRACED;
ret = true;
}
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 5dc36c6..61e250c 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1019,7 +1019,6 @@ noinstr void rcu_nmi_enter(void)
} else if (!in_nmi()) {
instrumentation_begin();
rcu_irq_enter_check_tick();
- instrumentation_end();
} else {
instrumentation_begin();
}
@@ -3386,7 +3385,7 @@ static void fill_page_cache_func(struct work_struct *work)
for (i = 0; i < rcu_min_cached_objs; i++) {
bnode = (struct kvfree_rcu_bulk_data *)
- __get_free_page(GFP_KERNEL | __GFP_NOWARN);
+ __get_free_page(GFP_KERNEL | __GFP_NORETRY | __GFP_NOMEMALLOC | __GFP_NOWARN);
if (bnode) {
raw_spin_lock_irqsave(&krcp->lock, flags);
diff --git a/kernel/resource.c b/kernel/resource.c
index 3ae2f56..817545f 100644
--- a/kernel/resource.c
+++ b/kernel/resource.c
@@ -450,7 +450,7 @@ int walk_system_ram_res(u64 start, u64 end, void *arg,
{
unsigned long flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY;
- return __walk_iomem_res_desc(start, end, flags, IORES_DESC_NONE, true,
+ return __walk_iomem_res_desc(start, end, flags, IORES_DESC_NONE, false,
arg, func);
}
@@ -463,7 +463,7 @@ int walk_mem_res(u64 start, u64 end, void *arg,
{
unsigned long flags = IORESOURCE_MEM | IORESOURCE_BUSY;
- return __walk_iomem_res_desc(start, end, flags, IORES_DESC_NONE, true,
+ return __walk_iomem_res_desc(start, end, flags, IORES_DESC_NONE, false,
arg, func);
}
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 59b13ac..1f2a641 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -321,7 +321,7 @@ void update_rq_clock(struct rq *rq)
}
static inline void
-rq_csd_init(struct rq *rq, call_single_data_t *csd, smp_call_func_t func)
+rq_csd_init(struct rq *rq, struct __call_single_data *csd, smp_call_func_t func)
{
csd->flags = 0;
csd->func = func;
@@ -936,7 +936,7 @@ DEFINE_STATIC_KEY_FALSE(sched_uclamp_used);
static inline unsigned int uclamp_bucket_id(unsigned int clamp_value)
{
- return clamp_value / UCLAMP_BUCKET_DELTA;
+ return min_t(unsigned int, clamp_value / UCLAMP_BUCKET_DELTA, UCLAMP_BUCKETS - 1);
}
static inline unsigned int uclamp_none(enum uclamp_id clamp_id)
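A sketch (illustration only) of why uclamp_bucket_id() above gains the min_t() clamp: for some values of CONFIG_UCLAMP_BUCKETS_COUNT a clamp value of SCHED_CAPACITY_SCALE indexes one past the last bucket. The 20-bucket configuration below is an assumed example (the default of 5 buckets does not trigger the overflow); the delta is hard-coded as DIV_ROUND_CLOSEST(1024, 20).

#include <stdio.h>

#define SCHED_CAPACITY_SCALE	1024
#define UCLAMP_BUCKETS		20	/* example CONFIG_UCLAMP_BUCKETS_COUNT */
#define UCLAMP_BUCKET_DELTA	51	/* DIV_ROUND_CLOSEST(1024, 20) */

static unsigned int uclamp_bucket_id(unsigned int clamp_value)
{
	unsigned int id = clamp_value / UCLAMP_BUCKET_DELTA;

	return id < UCLAMP_BUCKETS - 1 ? id : UCLAMP_BUCKETS - 1;	/* min_t() */
}

int main(void)
{
	printf("bucket for 1000: %u\n", uclamp_bucket_id(1000));	/* 19 */
	printf("bucket for 1024: %u\n", uclamp_bucket_id(1024));	/* 19; was 20, out of bounds */
	return 0;
}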
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 2357921..6264584 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -8,8 +8,6 @@
*/
#include "sched.h"
-static DEFINE_SPINLOCK(sched_debug_lock);
-
/*
* This allows printing both to /proc/sched_debug and
* to the console
@@ -470,16 +468,37 @@ static void print_cfs_group_stats(struct seq_file *m, int cpu, struct task_group
#endif
#ifdef CONFIG_CGROUP_SCHED
+static DEFINE_SPINLOCK(sched_debug_lock);
static char group_path[PATH_MAX];
-static char *task_group_path(struct task_group *tg)
+static void task_group_path(struct task_group *tg, char *path, int plen)
{
- if (autogroup_path(tg, group_path, PATH_MAX))
- return group_path;
+ if (autogroup_path(tg, path, plen))
+ return;
- cgroup_path(tg->css.cgroup, group_path, PATH_MAX);
+ cgroup_path(tg->css.cgroup, path, plen);
+}
- return group_path;
+/*
+ * Only 1 SEQ_printf_task_group_path() caller can use the full length
+ * group_path[] for cgroup path. Other simultaneous callers will have
+ * to use a shorter stack buffer. A "..." suffix is appended at the end
+ * of the stack buffer so that it will show up in case the output length
+ * matches the given buffer size to indicate possible path name truncation.
+ */
+#define SEQ_printf_task_group_path(m, tg, fmt...) \
+{ \
+ if (spin_trylock(&sched_debug_lock)) { \
+ task_group_path(tg, group_path, sizeof(group_path)); \
+ SEQ_printf(m, fmt, group_path); \
+ spin_unlock(&sched_debug_lock); \
+ } else { \
+ char buf[128]; \
+ char *bufend = buf + sizeof(buf) - 3; \
+ task_group_path(tg, buf, bufend - buf); \
+ strcpy(bufend - 1, "..."); \
+ SEQ_printf(m, fmt, buf); \
+ } \
}
#endif
@@ -506,7 +525,7 @@ print_task(struct seq_file *m, struct rq *rq, struct task_struct *p)
SEQ_printf(m, " %d %d", task_node(p), task_numa_group_id(p));
#endif
#ifdef CONFIG_CGROUP_SCHED
- SEQ_printf(m, " %s", task_group_path(task_group(p)));
+ SEQ_printf_task_group_path(m, task_group(p), " %s")
#endif
SEQ_printf(m, "\n");
@@ -543,7 +562,7 @@ void print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq)
#ifdef CONFIG_FAIR_GROUP_SCHED
SEQ_printf(m, "\n");
- SEQ_printf(m, "cfs_rq[%d]:%s\n", cpu, task_group_path(cfs_rq->tg));
+ SEQ_printf_task_group_path(m, cfs_rq->tg, "cfs_rq[%d]:%s\n", cpu);
#else
SEQ_printf(m, "\n");
SEQ_printf(m, "cfs_rq[%d]:\n", cpu);
@@ -614,7 +633,7 @@ void print_rt_rq(struct seq_file *m, int cpu, struct rt_rq *rt_rq)
{
#ifdef CONFIG_RT_GROUP_SCHED
SEQ_printf(m, "\n");
- SEQ_printf(m, "rt_rq[%d]:%s\n", cpu, task_group_path(rt_rq->tg));
+ SEQ_printf_task_group_path(m, rt_rq->tg, "rt_rq[%d]:%s\n", cpu);
#else
SEQ_printf(m, "\n");
SEQ_printf(m, "rt_rq[%d]:\n", cpu);
@@ -666,7 +685,6 @@ void print_dl_rq(struct seq_file *m, int cpu, struct dl_rq *dl_rq)
static void print_cpu(struct seq_file *m, int cpu)
{
struct rq *rq = cpu_rq(cpu);
- unsigned long flags;
#ifdef CONFIG_X86
{
@@ -717,13 +735,11 @@ do { \
}
#undef P
- spin_lock_irqsave(&sched_debug_lock, flags);
print_cfs_stats(m, cpu);
print_rt_stats(m, cpu);
print_dl_stats(m, cpu);
print_rq(m, rq, cpu);
- spin_unlock_irqrestore(&sched_debug_lock, flags);
SEQ_printf(m, "\n");
}
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8bdfa75..92cd153 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -700,7 +700,13 @@ static u64 __sched_period(unsigned long nr_running)
*/
static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
- u64 slice = __sched_period(cfs_rq->nr_running + !se->on_rq);
+ unsigned int nr_running = cfs_rq->nr_running;
+ u64 slice;
+
+ if (sched_feat(ALT_PERIOD))
+ nr_running = rq_of(cfs_rq)->cfs.h_nr_running;
+
+ slice = __sched_period(nr_running + !se->on_rq);
for_each_sched_entity(se) {
struct load_weight *load;
@@ -717,6 +723,10 @@ static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
}
slice = __calc_delta(slice, se->load.weight, load);
}
+
+ if (sched_feat(BASE_SLICE))
+ slice = max(slice, (u64)sysctl_sched_min_granularity);
+
return slice;
}
@@ -3948,6 +3958,8 @@ static inline void util_est_dequeue(struct cfs_rq *cfs_rq,
trace_sched_util_est_cfs_tp(cfs_rq);
}
+#define UTIL_EST_MARGIN (SCHED_CAPACITY_SCALE / 100)
+
/*
* Check if a (signed) value is within a specified (unsigned) margin,
* based on the observation that:
@@ -3965,7 +3977,7 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
struct task_struct *p,
bool task_sleep)
{
- long last_ewma_diff;
+ long last_ewma_diff, last_enqueued_diff;
struct util_est ue;
if (!sched_feat(UTIL_EST))
@@ -3986,6 +3998,8 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
if (ue.enqueued & UTIL_AVG_UNCHANGED)
return;
+ last_enqueued_diff = ue.enqueued;
+
/*
* Reset EWMA on utilization increases, the moving average is used only
* to smooth utilization decreases.
@@ -3999,12 +4013,17 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
}
/*
- * Skip update of task's estimated utilization when its EWMA is
+ * Skip update of task's estimated utilization when its members are
* already ~1% close to its last activation value.
*/
last_ewma_diff = ue.enqueued - ue.ewma;
- if (within_margin(last_ewma_diff, (SCHED_CAPACITY_SCALE / 100)))
+ last_enqueued_diff -= ue.enqueued;
+ if (within_margin(last_ewma_diff, UTIL_EST_MARGIN)) {
+ if (!within_margin(last_enqueued_diff, UTIL_EST_MARGIN))
+ goto done;
+
return;
+ }
/*
* To avoid overestimation of actual task utilization, skip updates if
@@ -7546,6 +7565,10 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
if (throttled_lb_pair(task_group(p), env->src_cpu, env->dst_cpu))
return 0;
+ /* Disregard pcpu kthreads; they are where they need to be. */
+ if ((p->flags & PF_KTHREAD) && kthread_is_per_cpu(p))
+ return 0;
+
if (!cpumask_test_cpu(env->dst_cpu, p->cpus_ptr)) {
int cpu;
@@ -7715,8 +7738,7 @@ static int detach_tasks(struct lb_env *env)
* scheduler fails to find a good waiting task to
* migrate.
*/
-
- if ((load >> env->sd->nr_balance_failed) > env->imbalance)
+ if (shr_bound(load, env->sd->nr_balance_failed) > env->imbalance)
goto next;
env->imbalance -= load;
@@ -10821,16 +10843,22 @@ static void propagate_entity_cfs_rq(struct sched_entity *se)
{
struct cfs_rq *cfs_rq;
+ list_add_leaf_cfs_rq(cfs_rq_of(se));
+
/* Start to propagate at parent */
se = se->parent;
for_each_sched_entity(se) {
cfs_rq = cfs_rq_of(se);
- if (cfs_rq_throttled(cfs_rq))
- break;
+ if (!cfs_rq_throttled(cfs_rq)){
+ update_load_avg(cfs_rq, se, UPDATE_TG);
+ list_add_leaf_cfs_rq(cfs_rq);
+ continue;
+ }
- update_load_avg(cfs_rq, se, UPDATE_TG);
+ if (list_add_leaf_cfs_rq(cfs_rq))
+ break;
}
}
#else
diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index 68d369c..f1bf5e1 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -90,3 +90,6 @@ SCHED_FEAT(WA_BIAS, true)
*/
SCHED_FEAT(UTIL_EST, true)
SCHED_FEAT(UTIL_EST_FASTUP, true)
+
+SCHED_FEAT(ALT_PERIOD, true)
+SCHED_FEAT(BASE_SLICE, true)
diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
index 967732c..651218d 100644
--- a/kernel/sched/psi.c
+++ b/kernel/sched/psi.c
@@ -711,14 +711,15 @@ static void psi_group_change(struct psi_group *group, int cpu,
for (t = 0, m = clear; m; m &= ~(1 << t), t++) {
if (!(m & (1 << t)))
continue;
- if (groupc->tasks[t] == 0 && !psi_bug) {
+ if (groupc->tasks[t]) {
+ groupc->tasks[t]--;
+ } else if (!psi_bug) {
printk_deferred(KERN_ERR "psi: task underflow! cpu=%d t=%d tasks=[%u %u %u %u] clear=%x set=%x\n",
cpu, t, groupc->tasks[0],
groupc->tasks[1], groupc->tasks[2],
groupc->tasks[3], clear, set);
psi_bug = 1;
}
- groupc->tasks[t]--;
}
for (t = 0; set; set &= ~(1 << t), t++)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index d8d52317..91578ff 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -206,6 +206,13 @@ static inline void update_avg(u64 *avg, u64 sample)
}
/*
+ * Shifting a value by an exponent greater *or equal* to the size of said value
+ * is UB; cap at size-1.
+ */
+#define shr_bound(val, shift) \
+ (val >> min_t(typeof(shift), shift, BITS_PER_TYPE(typeof(val)) - 1))
+
+/*
* !! For sched_setattr_nocheck() (kernel) only !!
*
* This is actually gross. :(
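A sketch (illustration only, not part of the patch) of the undefined behaviour shr_bound() above avoids: shifting a value by a count greater than or equal to its width is UB in C, and nr_balance_failed can exceed 63. Clamping the shift to width-1 keeps the result well defined, effectively 0 for any large count.

#include <stdint.h>
#include <stdio.h>

static uint64_t shr_bound(uint64_t val, unsigned int shift)
{
	unsigned int max = 64 - 1;	/* BITS_PER_TYPE(u64) - 1 */

	return val >> (shift < max ? shift : max);
}

int main(void)
{
	uint64_t load = 123456;

	printf("%llu\n", (unsigned long long)shr_bound(load, 3));   /* 15432 */
	printf("%llu\n", (unsigned long long)shr_bound(load, 200)); /* 0, not UB */
	return 0;
}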
diff --git a/kernel/seccomp.c b/kernel/seccomp.c
index 0ceaaba..0aabfca 100644
--- a/kernel/seccomp.c
+++ b/kernel/seccomp.c
@@ -864,28 +864,30 @@ static int seccomp_do_user_notification(int this_syscall,
up(&match->notif->request);
wake_up_poll(&match->wqh, EPOLLIN | EPOLLRDNORM);
- mutex_unlock(&match->notify_lock);
/*
* This is where we wait for a reply from userspace.
*/
-wait:
- err = wait_for_completion_interruptible(&n.ready);
- mutex_lock(&match->notify_lock);
- if (err == 0) {
- /* Check if we were woken up by a addfd message */
+ do {
+ mutex_unlock(&match->notify_lock);
+ err = wait_for_completion_interruptible(&n.ready);
+ mutex_lock(&match->notify_lock);
+ if (err != 0)
+ goto interrupted;
+
addfd = list_first_entry_or_null(&n.addfd,
struct seccomp_kaddfd, list);
- if (addfd && n.state != SECCOMP_NOTIFY_REPLIED) {
+ /* Check if we were woken up by a addfd message */
+ if (addfd)
seccomp_handle_addfd(addfd);
- mutex_unlock(&match->notify_lock);
- goto wait;
- }
- ret = n.val;
- err = n.error;
- flags = n.flags;
- }
+ } while (n.state != SECCOMP_NOTIFY_REPLIED);
+
+ ret = n.val;
+ err = n.error;
+ flags = n.flags;
+
+interrupted:
/* If there were any pending addfd calls, clear them out */
list_for_each_entry_safe(addfd, tmp, &n.addfd, list) {
/* The process went away before we got a chance to handle it */
diff --git a/kernel/smp.c b/kernel/smp.c
index 25240fb..f73a597 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -110,7 +110,7 @@ static DEFINE_PER_CPU(void *, cur_csd_info);
static atomic_t csd_bug_count = ATOMIC_INIT(0);
/* Record current CSD work for current CPU, NULL to erase. */
-static void csd_lock_record(call_single_data_t *csd)
+static void csd_lock_record(struct __call_single_data *csd)
{
if (!csd) {
smp_mb(); /* NULL cur_csd after unlock. */
@@ -125,7 +125,7 @@ static void csd_lock_record(call_single_data_t *csd)
/* Or before unlock, as the case may be. */
}
-static __always_inline int csd_lock_wait_getcpu(call_single_data_t *csd)
+static __always_inline int csd_lock_wait_getcpu(struct __call_single_data *csd)
{
unsigned int csd_type;
@@ -140,7 +140,7 @@ static __always_inline int csd_lock_wait_getcpu(call_single_data_t *csd)
* the CSD_TYPE_SYNC/ASYNC types provide the destination CPU,
* so waiting on other types gets much less information.
*/
-static __always_inline bool csd_lock_wait_toolong(call_single_data_t *csd, u64 ts0, u64 *ts1, int *bug_id)
+static __always_inline bool csd_lock_wait_toolong(struct __call_single_data *csd, u64 ts0, u64 *ts1, int *bug_id)
{
int cpu = -1;
int cpux;
@@ -204,7 +204,7 @@ static __always_inline bool csd_lock_wait_toolong(call_single_data_t *csd, u64 t
* previous function call. For multi-cpu calls its even more interesting
* as we'll have to ensure no other cpu is observing our csd.
*/
-static __always_inline void csd_lock_wait(call_single_data_t *csd)
+static __always_inline void csd_lock_wait(struct __call_single_data *csd)
{
int bug_id = 0;
u64 ts0, ts1;
@@ -219,17 +219,17 @@ static __always_inline void csd_lock_wait(call_single_data_t *csd)
}
#else
-static void csd_lock_record(call_single_data_t *csd)
+static void csd_lock_record(struct __call_single_data *csd)
{
}
-static __always_inline void csd_lock_wait(call_single_data_t *csd)
+static __always_inline void csd_lock_wait(struct __call_single_data *csd)
{
smp_cond_load_acquire(&csd->flags, !(VAL & CSD_FLAG_LOCK));
}
#endif
-static __always_inline void csd_lock(call_single_data_t *csd)
+static __always_inline void csd_lock(struct __call_single_data *csd)
{
csd_lock_wait(csd);
csd->flags |= CSD_FLAG_LOCK;
@@ -242,7 +242,7 @@ static __always_inline void csd_lock(call_single_data_t *csd)
smp_wmb();
}
-static __always_inline void csd_unlock(call_single_data_t *csd)
+static __always_inline void csd_unlock(struct __call_single_data *csd)
{
WARN_ON(!(csd->flags & CSD_FLAG_LOCK));
@@ -276,7 +276,7 @@ void __smp_call_single_queue(int cpu, struct llist_node *node)
* for execution on the given CPU. data must already have
* ->func, ->info, and ->flags set.
*/
-static int generic_exec_single(int cpu, call_single_data_t *csd)
+static int generic_exec_single(int cpu, struct __call_single_data *csd)
{
if (cpu == smp_processor_id()) {
smp_call_func_t func = csd->func;
@@ -542,7 +542,7 @@ EXPORT_SYMBOL(smp_call_function_single);
* NOTE: Be careful, there is unfortunately no current debugging facility to
* validate the correctness of this serialization.
*/
-int smp_call_function_single_async(int cpu, call_single_data_t *csd)
+int smp_call_function_single_async(int cpu, struct __call_single_data *csd)
{
int err = 0;
diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
index bf540f5a..dd5697d 100644
--- a/kernel/time/posix-timers.c
+++ b/kernel/time/posix-timers.c
@@ -1191,8 +1191,8 @@ SYSCALL_DEFINE2(clock_adjtime32, clockid_t, which_clock,
err = do_clock_adjtime(which_clock, &ktx);
- if (err >= 0)
- err = put_old_timex32(utp, &ktx);
+ if (err >= 0 && put_old_timex32(utp, &ktx))
+ return -EFAULT;
return err;
}
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 82041bb..a6d15a3 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -3230,7 +3230,8 @@ ftrace_allocate_pages(unsigned long num_to_init)
pg = start_pg;
while (pg) {
order = get_count_order(pg->size / ENTRIES_PER_PAGE);
- free_pages((unsigned long)pg->records, order);
+ if (order >= 0)
+ free_pages((unsigned long)pg->records, order);
start_pg = pg->next;
kfree(pg);
pg = start_pg;
@@ -5631,7 +5632,10 @@ int ftrace_regex_release(struct inode *inode, struct file *file)
parser = &iter->parser;
if (trace_parser_loaded(parser)) {
- ftrace_match_records(iter->hash, parser->buffer, parser->idx);
+ int enable = !(iter->flags & FTRACE_ITER_NOTRACE);
+
+ ftrace_process_regex(iter, parser->buffer,
+ parser->idx, enable);
}
trace_parser_put(parser);
@@ -6452,7 +6456,8 @@ void ftrace_release_mod(struct module *mod)
clear_mod_from_hashes(pg);
order = get_count_order(pg->size / ENTRIES_PER_PAGE);
- free_pages((unsigned long)pg->records, order);
+ if (order >= 0)
+ free_pages((unsigned long)pg->records, order);
tmp_page = pg->next;
kfree(pg);
ftrace_number_of_pages -= 1 << order;
@@ -6812,7 +6817,8 @@ void ftrace_free_mem(struct module *mod, void *start_ptr, void *end_ptr)
if (!pg->index) {
*last_pg = pg->next;
order = get_count_order(pg->size / ENTRIES_PER_PAGE);
- free_pages((unsigned long)pg->records, order);
+ if (order >= 0)
+ free_pages((unsigned long)pg->records, order);
ftrace_number_of_pages -= 1 << order;
ftrace_number_of_groups--;
kfree(pg);
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 8bfa4e7..321f7f7 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -2387,14 +2387,13 @@ static void tracing_stop_tr(struct trace_array *tr)
static int trace_save_cmdline(struct task_struct *tsk)
{
- unsigned pid, idx;
+ unsigned tpid, idx;
/* treat recording of idle task as a success */
if (!tsk->pid)
return 1;
- if (unlikely(tsk->pid > PID_MAX_DEFAULT))
- return 0;
+ tpid = tsk->pid & (PID_MAX_DEFAULT - 1);
/*
* It's not the end of the world if we don't get
@@ -2405,26 +2404,15 @@ static int trace_save_cmdline(struct task_struct *tsk)
if (!arch_spin_trylock(&trace_cmdline_lock))
return 0;
- idx = savedcmd->map_pid_to_cmdline[tsk->pid];
+ idx = savedcmd->map_pid_to_cmdline[tpid];
if (idx == NO_CMDLINE_MAP) {
idx = (savedcmd->cmdline_idx + 1) % savedcmd->cmdline_num;
- /*
- * Check whether the cmdline buffer at idx has a pid
- * mapped. We are going to overwrite that entry so we
- * need to clear the map_pid_to_cmdline. Otherwise we
- * would read the new comm for the old pid.
- */
- pid = savedcmd->map_cmdline_to_pid[idx];
- if (pid != NO_CMDLINE_MAP)
- savedcmd->map_pid_to_cmdline[pid] = NO_CMDLINE_MAP;
-
- savedcmd->map_cmdline_to_pid[idx] = tsk->pid;
- savedcmd->map_pid_to_cmdline[tsk->pid] = idx;
-
+ savedcmd->map_pid_to_cmdline[tpid] = idx;
savedcmd->cmdline_idx = idx;
}
+ savedcmd->map_cmdline_to_pid[idx] = tsk->pid;
set_cmdline(idx, tsk->comm);
arch_spin_unlock(&trace_cmdline_lock);
@@ -2435,6 +2423,7 @@ static int trace_save_cmdline(struct task_struct *tsk)
static void __trace_find_cmdline(int pid, char comm[])
{
unsigned map;
+ int tpid;
if (!pid) {
strcpy(comm, "<idle>");
@@ -2446,16 +2435,16 @@ static void __trace_find_cmdline(int pid, char comm[])
return;
}
- if (pid > PID_MAX_DEFAULT) {
- strcpy(comm, "<...>");
- return;
+ tpid = pid & (PID_MAX_DEFAULT - 1);
+ map = savedcmd->map_pid_to_cmdline[tpid];
+ if (map != NO_CMDLINE_MAP) {
+ tpid = savedcmd->map_cmdline_to_pid[map];
+ if (tpid == pid) {
+ strlcpy(comm, get_saved_cmdlines(map), TASK_COMM_LEN);
+ return;
+ }
}
-
- map = savedcmd->map_pid_to_cmdline[pid];
- if (map != NO_CMDLINE_MAP)
- strlcpy(comm, get_saved_cmdlines(map), TASK_COMM_LEN);
- else
- strcpy(comm, "<...>");
+ strcpy(comm, "<...>");
}
void trace_find_cmdline(int pid, char comm[])
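A sketch (illustration only) of the masked-pid indexing the saved-cmdline change above relies on: map_pid_to_cmdline[] is indexed by pid & (PID_MAX_DEFAULT - 1), and the full pid is stored alongside so a later lookup can tell that a colliding pid has taken over the slot. The pid values below are hypothetical.

#include <stdio.h>

#define PID_MAX_DEFAULT	0x8000	/* 32768, the default pid space */

int main(void)
{
	int pid1 = 1234;
	int pid2 = 1234 + PID_MAX_DEFAULT;	/* collides with pid1 after masking */

	int t1 = pid1 & (PID_MAX_DEFAULT - 1);
	int t2 = pid2 & (PID_MAX_DEFAULT - 1);

	printf("pid %d and pid %d share slot %d == %d\n", pid1, pid2, t1, t2);
	/* Storing the full pid in map_cmdline_to_pid[] is what lets
	 * __trace_find_cmdline() reject pid1 once pid2 owns the slot and
	 * print "<...>" instead of a stale comm. */
	return 0;
}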
diff --git a/kernel/trace/trace_clock.c b/kernel/trace/trace_clock.c
index aaf6793..c1637f9 100644
--- a/kernel/trace/trace_clock.c
+++ b/kernel/trace/trace_clock.c
@@ -95,33 +95,49 @@ u64 notrace trace_clock_global(void)
{
unsigned long flags;
int this_cpu;
- u64 now;
+ u64 now, prev_time;
raw_local_irq_save(flags);
this_cpu = raw_smp_processor_id();
- now = sched_clock_cpu(this_cpu);
+
/*
- * If in an NMI context then dont risk lockups and return the
- * cpu_clock() time:
+ * The global clock "guarantees" that the events are ordered
+ * between CPUs. But if two events on two different CPUS call
+ * trace_clock_global at roughly the same time, it really does
+ * not matter which one gets the earlier time. Just make sure
+ * that the same CPU will always show a monotonic clock.
+ *
+ * Use a read memory barrier to get the latest written
+ * time that was recorded.
+ */
+ smp_rmb();
+ prev_time = READ_ONCE(trace_clock_struct.prev_time);
+ now = sched_clock_cpu(this_cpu);
+
+ /* Make sure that now is always greater than prev_time */
+ if ((s64)(now - prev_time) < 0)
+ now = prev_time + 1;
+
+ /*
+ * If in an NMI context then dont risk lockups and simply return
+ * the current time.
*/
if (unlikely(in_nmi()))
goto out;
- arch_spin_lock(&trace_clock_struct.lock);
+ /* Tracing can cause strange recursion, always use a try lock */
+ if (arch_spin_trylock(&trace_clock_struct.lock)) {
+ /* Reread prev_time in case it was already updated */
+ prev_time = READ_ONCE(trace_clock_struct.prev_time);
+ if ((s64)(now - prev_time) < 0)
+ now = prev_time + 1;
- /*
- * TODO: if this happens often then maybe we should reset
- * my_scd->clock to prev_time+1, to make sure
- * we start ticking with the local clock from now on?
- */
- if ((s64)(now - trace_clock_struct.prev_time) < 0)
- now = trace_clock_struct.prev_time + 1;
+ trace_clock_struct.prev_time = now;
- trace_clock_struct.prev_time = now;
-
- arch_spin_unlock(&trace_clock_struct.lock);
-
+ /* The unlock acts as the wmb for the above rmb */
+ arch_spin_unlock(&trace_clock_struct.lock);
+ }
out:
raw_local_irq_restore(flags);
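A sketch (illustration only, not part of the patch) of the per-read clamp used above: subtracting in u64 and testing the result as s64 handles wrap-around, and forcing "now" to prev_time + 1 keeps the global trace clock monotonic even when one CPU's sched_clock reads slightly behind another CPU's last recorded stamp.

#include <stdint.h>
#include <stdio.h>

static uint64_t clamp_monotonic(uint64_t now, uint64_t prev_time)
{
	if ((int64_t)(now - prev_time) < 0)	/* now is "before" prev_time */
		now = prev_time + 1;
	return now;
}

int main(void)
{
	printf("%llu\n", (unsigned long long)clamp_monotonic(1005, 1000)); /* 1005 */
	printf("%llu\n", (unsigned long long)clamp_monotonic(995, 1000));  /* 1001 */
	return 0;
}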
diff --git a/kernel/up.c b/kernel/up.c
index c6f323d..4edd549 100644
--- a/kernel/up.c
+++ b/kernel/up.c
@@ -25,7 +25,7 @@ int smp_call_function_single(int cpu, void (*func) (void *info), void *info,
}
EXPORT_SYMBOL(smp_call_function_single);
-int smp_call_function_single_async(int cpu, call_single_data_t *csd)
+int smp_call_function_single_async(int cpu, struct __call_single_data *csd)
{
unsigned long flags;
diff --git a/kernel/user_namespace.c b/kernel/user_namespace.c
index e703d5d..ce396ea 100644
--- a/kernel/user_namespace.c
+++ b/kernel/user_namespace.c
@@ -106,6 +106,7 @@ int create_user_ns(struct cred *new)
if (!ns)
goto fail_dec;
+ ns->parent_could_setfcap = cap_raised(new->cap_effective, CAP_SETFCAP);
ret = ns_alloc_inum(&ns->ns);
if (ret)
goto fail_free;
@@ -841,6 +842,60 @@ static int sort_idmaps(struct uid_gid_map *map)
return 0;
}
+/**
+ * verify_root_map() - check the uid 0 mapping
+ * @file: idmapping file
+ * @map_ns: user namespace of the target process
+ * @new_map: requested idmap
+ *
+ * If a process requests mapping parent uid 0 into the new ns, verify that the
+ * process writing the map had the CAP_SETFCAP capability as the target process
+ * will be able to write fscaps that are valid in ancestor user namespaces.
+ *
+ * Return: true if the mapping is allowed, false if not.
+ */
+static bool verify_root_map(const struct file *file,
+ struct user_namespace *map_ns,
+ struct uid_gid_map *new_map)
+{
+ int idx;
+ const struct user_namespace *file_ns = file->f_cred->user_ns;
+ struct uid_gid_extent *extent0 = NULL;
+
+ for (idx = 0; idx < new_map->nr_extents; idx++) {
+ if (new_map->nr_extents <= UID_GID_MAP_MAX_BASE_EXTENTS)
+ extent0 = &new_map->extent[idx];
+ else
+ extent0 = &new_map->forward[idx];
+ if (extent0->lower_first == 0)
+ break;
+
+ extent0 = NULL;
+ }
+
+ if (!extent0)
+ return true;
+
+ if (map_ns == file_ns) {
+ /* The process unshared its ns and is writing to its own
+ * /proc/self/uid_map. User already has full capabilites in
+ * the new namespace. Verify that the parent had CAP_SETFCAP
+ * when it unshared.
+ * */
+ if (!file_ns->parent_could_setfcap)
+ return false;
+ } else {
+ /* Process p1 is writing to uid_map of p2, who is in a child
+ * user namespace to p1's. Verify that the opener of the map
+ * file has CAP_SETFCAP against the parent of the new map
+ * namespace */
+ if (!file_ns_capable(file, map_ns->parent, CAP_SETFCAP))
+ return false;
+ }
+
+ return true;
+}
+
static ssize_t map_write(struct file *file, const char __user *buf,
size_t count, loff_t *ppos,
int cap_setid,
@@ -848,7 +903,7 @@ static ssize_t map_write(struct file *file, const char __user *buf,
struct uid_gid_map *parent_map)
{
struct seq_file *seq = file->private_data;
- struct user_namespace *ns = seq->private;
+ struct user_namespace *map_ns = seq->private;
struct uid_gid_map new_map;
unsigned idx;
struct uid_gid_extent extent;
@@ -895,7 +950,7 @@ static ssize_t map_write(struct file *file, const char __user *buf,
/*
* Adjusting namespace settings requires capabilities on the target.
*/
- if (cap_valid(cap_setid) && !file_ns_capable(file, ns, CAP_SYS_ADMIN))
+ if (cap_valid(cap_setid) && !file_ns_capable(file, map_ns, CAP_SYS_ADMIN))
goto out;
/* Parse the user data */
@@ -965,7 +1020,7 @@ static ssize_t map_write(struct file *file, const char __user *buf,
ret = -EPERM;
/* Validate the user is allowed to use user id's mapped to. */
- if (!new_idmap_permitted(file, ns, cap_setid, &new_map))
+ if (!new_idmap_permitted(file, map_ns, cap_setid, &new_map))
goto out;
ret = -EPERM;
@@ -1086,6 +1141,10 @@ static bool new_idmap_permitted(const struct file *file,
struct uid_gid_map *new_map)
{
const struct cred *cred = file->f_cred;
+
+ if (cap_setid == CAP_SETUID && !verify_root_map(file, ns, new_map))
+ return false;
+
/* Don't allow mappings that would allow anything that wouldn't
* be allowed without the establishment of unprivileged mappings.
*/
diff --git a/kernel/watchdog.c b/kernel/watchdog.c
index 7110906..01bf977 100644
--- a/kernel/watchdog.c
+++ b/kernel/watchdog.c
@@ -172,7 +172,6 @@ static u64 __read_mostly sample_period;
static DEFINE_PER_CPU(unsigned long, watchdog_touch_ts);
static DEFINE_PER_CPU(struct hrtimer, watchdog_hrtimer);
static DEFINE_PER_CPU(bool, softlockup_touch_sync);
-static DEFINE_PER_CPU(bool, soft_watchdog_warn);
static DEFINE_PER_CPU(unsigned long, hrtimer_interrupts);
static DEFINE_PER_CPU(unsigned long, hrtimer_interrupts_saved);
static unsigned long soft_lockup_nmi_warn;
@@ -236,7 +235,7 @@ static void set_sample_period(void)
}
/* Commands for resetting the watchdog */
-static void __touch_watchdog(void)
+static void update_touch_ts(void)
{
__this_cpu_write(watchdog_touch_ts, get_timestamp());
}
@@ -331,7 +330,7 @@ static DEFINE_PER_CPU(struct cpu_stop_work, softlockup_stop_work);
*/
static int softlockup_fn(void *data)
{
- __touch_watchdog();
+ update_touch_ts();
complete(this_cpu_ptr(&softlockup_completion));
return 0;
@@ -374,7 +373,7 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
/* Clear the guest paused flag on watchdog reset */
kvm_check_and_clear_guest_paused();
- __touch_watchdog();
+ update_touch_ts();
return HRTIMER_RESTART;
}
@@ -394,21 +393,18 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
if (kvm_check_and_clear_guest_paused())
return HRTIMER_RESTART;
- /* only warn once */
- if (__this_cpu_read(soft_watchdog_warn) == true)
- return HRTIMER_RESTART;
-
+ /*
+ * Prevent multiple soft-lockup reports if one cpu is already
+ * engaged in dumping all cpu back traces.
+ */
if (softlockup_all_cpu_backtrace) {
- /* Prevent multiple soft-lockup reports if one cpu is already
- * engaged in dumping cpu back traces
- */
- if (test_and_set_bit(0, &soft_lockup_nmi_warn)) {
- /* Someone else will report us. Let's give up */
- __this_cpu_write(soft_watchdog_warn, true);
+ if (test_and_set_bit_lock(0, &soft_lockup_nmi_warn))
return HRTIMER_RESTART;
- }
}
+ /* Start period for the next softlockup warning. */
+ update_touch_ts();
+
pr_emerg("BUG: soft lockup - CPU#%d stuck for %us! [%s:%d]\n",
smp_processor_id(), duration,
current->comm, task_pid_nr(current));
@@ -420,22 +416,14 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
dump_stack();
if (softlockup_all_cpu_backtrace) {
- /* Avoid generating two back traces for current
- * given that one is already made above
- */
trigger_allbutself_cpu_backtrace();
-
- clear_bit(0, &soft_lockup_nmi_warn);
- /* Barrier to sync with other cpus */
- smp_mb__after_atomic();
+ clear_bit_unlock(0, &soft_lockup_nmi_warn);
}
add_taint(TAINT_SOFTLOCKUP, LOCKDEP_STILL_OK);
if (softlockup_panic)
panic("softlockup: hung tasks");
- __this_cpu_write(soft_watchdog_warn, true);
- } else
- __this_cpu_write(soft_watchdog_warn, false);
+ }
return HRTIMER_RESTART;
}
@@ -460,7 +448,7 @@ static void watchdog_enable(unsigned int cpu)
HRTIMER_MODE_REL_PINNED_HARD);
/* Initialize timestamp */
- __touch_watchdog();
+ update_touch_ts();
/* Enable the perf event */
if (watchdog_enabled & NMI_WATCHDOG_ENABLED)
watchdog_nmi_enable(cpu);
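The test_and_set_bit_lock()/clear_bit_unlock() pair above acts as a one-owner latch: the CPU that wins the bit dumps all backtraces, the others return early instead of printing duplicate warnings. A rough userspace analogue of that pattern, sketched with C11 atomics (not kernel code):

#include <stdatomic.h>
#include <stdio.h>

static atomic_flag report_in_progress = ATOMIC_FLAG_INIT;

/* Called from several threads; only the first caller emits the report. */
static void report_lockup(int cpu)
{
        /* test-and-set with acquire ordering: the winner takes ownership,
         * like test_and_set_bit_lock() in the patch.
         */
        if (atomic_flag_test_and_set_explicit(&report_in_progress,
                                              memory_order_acquire))
                return; /* someone else is already reporting */

        printf("lockup report from cpu %d\n", cpu);

        /* Release ordering pairs with the acquire above, like
         * clear_bit_unlock().
         */
        atomic_flag_clear_explicit(&report_in_progress, memory_order_release);
}

int main(void)
{
        report_lockup(0);
        report_lockup(1);
        return 0;
}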
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 1d99c52..1e2ca74 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1409,7 +1409,6 @@ static void __queue_work(int cpu, struct workqueue_struct *wq,
*/
lockdep_assert_irqs_disabled();
- debug_work_activate(work);
/* if draining, only works from the same workqueue are allowed */
if (unlikely(wq->flags & __WQ_DRAINING) &&
@@ -1491,6 +1490,7 @@ static void __queue_work(int cpu, struct workqueue_struct *wq,
worklist = &pwq->delayed_works;
}
+ debug_work_activate(work);
insert_work(pwq, work, worklist, work_flags);
out:
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index c789b39..dcf4a90 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1302,7 +1302,7 @@
bool
depends on DEBUG_KERNEL && LOCK_DEBUGGING_SUPPORT
select STACKTRACE
- select FRAME_POINTER if !MIPS && !PPC && !ARM && !S390 && !MICROBLAZE && !ARC && !X86
+ depends on FRAME_POINTER || MIPS || PPC || S390 || MICROBLAZE || ARM || ARC || X86
select KALLSYMS
select KALLSYMS_ALL
@@ -1596,7 +1596,7 @@
depends on DEBUG_KERNEL
depends on STACKTRACE_SUPPORT
depends on PROC_FS
- select FRAME_POINTER if !MIPS && !PPC && !S390 && !MICROBLAZE && !ARM && !ARC && !X86
+ depends on FRAME_POINTER || MIPS || PPC || S390 || MICROBLAZE || ARM || ARC || X86
select KALLSYMS
select KALLSYMS_ALL
select STACKTRACE
@@ -1849,7 +1849,7 @@
depends on FAULT_INJECTION_DEBUG_FS && STACKTRACE_SUPPORT
depends on !X86_64
select STACKTRACE
- select FRAME_POINTER if !MIPS && !PPC && !S390 && !MICROBLAZE && !ARM && !ARC && !X86
+ depends on FRAME_POINTER || MIPS || PPC || S390 || MICROBLAZE || ARM || ARC || X86
help
Provide stacktrace filter for fault-injection capabilities
diff --git a/lib/bug.c b/lib/bug.c
index 7103440..4ab398a 100644
--- a/lib/bug.c
+++ b/lib/bug.c
@@ -158,30 +158,27 @@ enum bug_trap_type report_bug(unsigned long bugaddr, struct pt_regs *regs)
file = NULL;
line = 0;
- warning = 0;
- if (bug) {
#ifdef CONFIG_DEBUG_BUGVERBOSE
#ifndef CONFIG_GENERIC_BUG_RELATIVE_POINTERS
- file = bug->file;
+ file = bug->file;
#else
- file = (const char *)bug + bug->file_disp;
+ file = (const char *)bug + bug->file_disp;
#endif
- line = bug->line;
+ line = bug->line;
#endif
- warning = (bug->flags & BUGFLAG_WARNING) != 0;
- once = (bug->flags & BUGFLAG_ONCE) != 0;
- done = (bug->flags & BUGFLAG_DONE) != 0;
+ warning = (bug->flags & BUGFLAG_WARNING) != 0;
+ once = (bug->flags & BUGFLAG_ONCE) != 0;
+ done = (bug->flags & BUGFLAG_DONE) != 0;
- if (warning && once) {
- if (done)
- return BUG_TRAP_TYPE_WARN;
+ if (warning && once) {
+ if (done)
+ return BUG_TRAP_TYPE_WARN;
- /*
- * Since this is the only store, concurrency is not an issue.
- */
- bug->flags |= BUGFLAG_DONE;
- }
+ /*
+ * Since this is the only store, concurrency is not an issue.
+ */
+ bug->flags |= BUGFLAG_DONE;
}
/*
diff --git a/lib/crypto/poly1305-donna32.c b/lib/crypto/poly1305-donna32.c
index 3cc77d9..7fb7184 100644
--- a/lib/crypto/poly1305-donna32.c
+++ b/lib/crypto/poly1305-donna32.c
@@ -10,7 +10,8 @@
#include <asm/unaligned.h>
#include <crypto/internal/poly1305.h>
-void poly1305_core_setkey(struct poly1305_core_key *key, const u8 raw_key[16])
+void poly1305_core_setkey(struct poly1305_core_key *key,
+ const u8 raw_key[POLY1305_BLOCK_SIZE])
{
/* r &= 0xffffffc0ffffffc0ffffffc0fffffff */
key->key.r[0] = (get_unaligned_le32(&raw_key[0])) & 0x3ffffff;
diff --git a/lib/crypto/poly1305-donna64.c b/lib/crypto/poly1305-donna64.c
index 6ae181b..d34cf40 100644
--- a/lib/crypto/poly1305-donna64.c
+++ b/lib/crypto/poly1305-donna64.c
@@ -12,7 +12,8 @@
typedef __uint128_t u128;
-void poly1305_core_setkey(struct poly1305_core_key *key, const u8 raw_key[16])
+void poly1305_core_setkey(struct poly1305_core_key *key,
+ const u8 raw_key[POLY1305_BLOCK_SIZE])
{
u64 t0, t1;
diff --git a/lib/crypto/poly1305.c b/lib/crypto/poly1305.c
index 9d2d14d..26d87fc 100644
--- a/lib/crypto/poly1305.c
+++ b/lib/crypto/poly1305.c
@@ -12,7 +12,8 @@
#include <linux/module.h>
#include <asm/unaligned.h>
-void poly1305_init_generic(struct poly1305_desc_ctx *desc, const u8 *key)
+void poly1305_init_generic(struct poly1305_desc_ctx *desc,
+ const u8 key[POLY1305_KEY_SIZE])
{
poly1305_core_setkey(&desc->core_r, key);
desc->s[0] = get_unaligned_le32(key + 16);
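The poly1305 prototypes above now spell out the expected key length in the parameter declaration. The parameter still decays to a plain pointer in C, but the annotation documents the contract and lets some compilers flag obviously undersized arguments. A minimal sketch of the idea, using a hypothetical function name:

#include <stdint.h>

/* The [16] annotation documents the expected length; the parameter still
 * decays to a pointer, but some compilers can warn when a visibly smaller
 * array is passed.
 */
static void load_key(const uint8_t key[16])
{
        (void)key;
}

int main(void)
{
        uint8_t short_key[8] = { 0 };
        uint8_t full_key[16] = { 0 };

        load_key(full_key);   /* fine */
        load_key(short_key);  /* may trigger a compiler warning */
        return 0;
}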
diff --git a/lib/dynamic_debug.c b/lib/dynamic_debug.c
index c70d634..921d0a6 100644
--- a/lib/dynamic_debug.c
+++ b/lib/dynamic_debug.c
@@ -396,7 +396,7 @@ static int ddebug_parse_query(char *words[], int nwords,
/* tail :$info is function or line-range */
fline = strchr(query->filename, ':');
if (!fline)
- break;
+ continue;
*fline++ = '\0';
if (isalpha(*fline) || *fline == '*' || *fline == '?') {
/* take as function name */
diff --git a/lib/kobject_uevent.c b/lib/kobject_uevent.c
index 7998aff..c87d5b6 100644
--- a/lib/kobject_uevent.c
+++ b/lib/kobject_uevent.c
@@ -251,12 +251,13 @@ static int kobj_usermode_filter(struct kobject *kobj)
static int init_uevent_argv(struct kobj_uevent_env *env, const char *subsystem)
{
+ int buffer_size = sizeof(env->buf) - env->buflen;
int len;
- len = strlcpy(&env->buf[env->buflen], subsystem,
- sizeof(env->buf) - env->buflen);
- if (len >= (sizeof(env->buf) - env->buflen)) {
- WARN(1, KERN_ERR "init_uevent_argv: buffer size too small\n");
+ len = strlcpy(&env->buf[env->buflen], subsystem, buffer_size);
+ if (len >= buffer_size) {
+ pr_warn("init_uevent_argv: buffer size of %d too small, needed %d\n",
+ buffer_size, len);
return -ENOMEM;
}
diff --git a/lib/math/div64.c b/lib/math/div64.c
index 3952a07..edd1090 100644
--- a/lib/math/div64.c
+++ b/lib/math/div64.c
@@ -230,4 +230,5 @@ u64 mul_u64_u64_div_u64(u64 a, u64 b, u64 c)
return res + div64_u64(a * b, c);
}
+EXPORT_SYMBOL(mul_u64_u64_div_u64);
#endif
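mul_u64_u64_div_u64() computes a * b / c with a 128-bit intermediate so the product cannot overflow; the hunk above only exports the symbol for modules. A userspace analogue, assuming a compiler that provides unsigned __int128 (gcc/clang on 64-bit targets):

#include <stdint.h>
#include <stdio.h>

/* Userspace analogue of mul_u64_u64_div_u64(): a * b / c without the
 * intermediate product overflowing 64 bits.
 */
static uint64_t mul_div_u64(uint64_t a, uint64_t b, uint64_t c)
{
        return (uint64_t)(((unsigned __int128)a * b) / c);
}

int main(void)
{
        /* 1e18 * 30 overflows a plain 64-bit multiply; the 128-bit
         * intermediate keeps the result (6e18) exact.
         */
        printf("%llu\n",
               (unsigned long long)mul_div_u64(1000000000000000000ULL, 30, 5));
        return 0;
}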
diff --git a/lib/nlattr.c b/lib/nlattr.c
index 74019c8..fe60f9a 100644
--- a/lib/nlattr.c
+++ b/lib/nlattr.c
@@ -816,7 +816,7 @@ int nla_strcmp(const struct nlattr *nla, const char *str)
int attrlen = nla_len(nla);
int d;
- if (attrlen > 0 && buf[attrlen - 1] == '\0')
+ while (attrlen > 0 && buf[attrlen - 1] == '\0')
attrlen--;
d = attrlen - len;
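The nla_strcmp() change turns the single trailing-NUL trim into a loop, so a payload padded with several NUL bytes still compares equal to the string. An illustrative standalone version of the same comparison logic:

#include <stdio.h>
#include <string.h>

/* Illustrative analogue of the fixed nla_strcmp(): strip every trailing
 * NUL byte from the attribute payload before comparing, not just one.
 */
static int attr_strcmp(const char *buf, int attrlen, const char *str)
{
        int len = strlen(str);
        int d;

        while (attrlen > 0 && buf[attrlen - 1] == '\0')
                attrlen--;

        d = attrlen - len;
        if (d == 0)
                d = memcmp(buf, str, len);
        return d;
}

int main(void)
{
        /* Payload padded with two NUL bytes still matches "eth0". */
        const char payload[] = { 'e', 't', 'h', '0', '\0', '\0' };

        printf("%d\n", attr_strcmp(payload, sizeof(payload), "eth0"));
        return 0;
}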
diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 2caffc6..25bbac4 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -70,7 +70,7 @@ static void *stack_slabs[STACK_ALLOC_MAX_SLABS];
static int depot_index;
static int next_slab_inited;
static size_t depot_offset;
-static DEFINE_SPINLOCK(depot_lock);
+static DEFINE_RAW_SPINLOCK(depot_lock);
static bool init_stack_slab(void **prealloc)
{
@@ -281,7 +281,7 @@ depot_stack_handle_t stack_depot_save(unsigned long *entries,
prealloc = page_address(page);
}
- spin_lock_irqsave(&depot_lock, flags);
+ raw_spin_lock_irqsave(&depot_lock, flags);
found = find_stack(*bucket, entries, nr_entries, hash);
if (!found) {
@@ -305,7 +305,7 @@ depot_stack_handle_t stack_depot_save(unsigned long *entries,
WARN_ON(!init_stack_slab(&prealloc));
}
- spin_unlock_irqrestore(&depot_lock, flags);
+ raw_spin_unlock_irqrestore(&depot_lock, flags);
exit:
if (prealloc) {
/* Nobody used this memory, ok to free it. */
diff --git a/lib/test_kasan.c b/lib/test_kasan.c
index 400507f..28c7c12 100644
--- a/lib/test_kasan.c
+++ b/lib/test_kasan.c
@@ -449,8 +449,20 @@ static char global_array[10];
static void kasan_global_oob(struct kunit *test)
{
- volatile int i = 3;
- char *p = &global_array[ARRAY_SIZE(global_array) + i];
+ /*
+ * Deliberate out-of-bounds access. To prevent CONFIG_UBSAN_LOCAL_BOUNDS
+ * from failing here and panicking the kernel, access the array via a
+ * volatile pointer, which will prevent the compiler from being able to
+ * determine the array bounds.
+ *
+ * This access uses a volatile pointer to char (char *volatile) rather
+ * than the more conventional pointer to volatile char (volatile char *)
+ * because we want to prevent the compiler from making inferences about
+ * the pointer itself (i.e. its array bounds), not the data that it
+ * refers to.
+ */
+ char *volatile array = global_array;
+ char *p = &array[ARRAY_SIZE(global_array) + 3];
/* Only generic mode instruments globals. */
if (!IS_ENABLED(CONFIG_KASAN_GENERIC)) {
@@ -479,8 +491,9 @@ static void ksize_unpoisons_memory(struct kunit *test)
static void kasan_stack_oob(struct kunit *test)
{
char stack_array[10];
- volatile int i = OOB_TAG_OFF;
- char *p = &stack_array[ARRAY_SIZE(stack_array) + i];
+ /* See comment in kasan_global_oob. */
+ char *volatile array = stack_array;
+ char *p = &array[ARRAY_SIZE(stack_array) + OOB_TAG_OFF];
if (!IS_ENABLED(CONFIG_KASAN_STACK)) {
kunit_info(test, "CONFIG_KASAN_STACK is not enabled");
@@ -494,7 +507,9 @@ static void kasan_alloca_oob_left(struct kunit *test)
{
volatile int i = 10;
char alloca_array[i];
- char *p = alloca_array - 1;
+ /* See comment in kasan_global_oob. */
+ char *volatile array = alloca_array;
+ char *p = array - 1;
/* Only generic mode instruments dynamic allocas. */
if (!IS_ENABLED(CONFIG_KASAN_GENERIC)) {
@@ -514,7 +529,9 @@ static void kasan_alloca_oob_right(struct kunit *test)
{
volatile int i = 10;
char alloca_array[i];
- char *p = alloca_array + i;
+ /* See comment in kasan_global_oob. */
+ char *volatile array = alloca_array;
+ char *p = array + i;
/* Only generic mode instruments dynamic allocas. */
if (!IS_ENABLED(CONFIG_KASAN_GENERIC)) {
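The KASAN tests above rely on the distinction between a pointer to volatile data and a pointer that is itself volatile: only the latter hides the array's identity from the optimizer, so the deliberate out-of-bounds access survives to run time. A compile-only sketch of the same trick (hypothetical function, intended to be caught by a runtime checker such as KASAN or ASan, never called in normal code):

#include <stddef.h>

static char global_array[10];

/* char *volatile: the pointer object is volatile, so the compiler cannot
 * track that it points at global_array and cannot prove the access below
 * is out of bounds. volatile char * would only affect the pointed-to data
 * and leave the bounds information intact.
 */
char read_past_end(void)
{
        char *volatile array = global_array;

        /* Deliberate out-of-bounds read, left for a runtime checker to
         * catch rather than being diagnosed or folded away at compile time.
         */
        return array[sizeof(global_array) + 3];
}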
diff --git a/lib/test_xarray.c b/lib/test_xarray.c
index 8294f43..8b1c318 100644
--- a/lib/test_xarray.c
+++ b/lib/test_xarray.c
@@ -1530,24 +1530,24 @@ static noinline void check_store_range(struct xarray *xa)
#ifdef CONFIG_XARRAY_MULTI
static void check_split_1(struct xarray *xa, unsigned long index,
- unsigned int order)
+ unsigned int order, unsigned int new_order)
{
- XA_STATE(xas, xa, index);
- void *entry;
- unsigned int i = 0;
+ XA_STATE_ORDER(xas, xa, index, new_order);
+ unsigned int i;
xa_store_order(xa, index, order, xa, GFP_KERNEL);
xas_split_alloc(&xas, xa, order, GFP_KERNEL);
xas_lock(&xas);
xas_split(&xas, xa, order);
+ for (i = 0; i < (1 << order); i += (1 << new_order))
+ __xa_store(xa, index + i, xa_mk_index(index + i), 0);
xas_unlock(&xas);
- xa_for_each(xa, index, entry) {
- XA_BUG_ON(xa, entry != xa);
- i++;
+ for (i = 0; i < (1 << order); i++) {
+ unsigned int val = index + (i & ~((1 << new_order) - 1));
+ XA_BUG_ON(xa, xa_load(xa, index + i) != xa_mk_index(val));
}
- XA_BUG_ON(xa, i != 1 << order);
xa_set_mark(xa, index, XA_MARK_0);
XA_BUG_ON(xa, !xa_get_mark(xa, index, XA_MARK_0));
@@ -1557,14 +1557,16 @@ static void check_split_1(struct xarray *xa, unsigned long index,
static noinline void check_split(struct xarray *xa)
{
- unsigned int order;
+ unsigned int order, new_order;
XA_BUG_ON(xa, !xa_empty(xa));
for (order = 1; order < 2 * XA_CHUNK_SHIFT; order++) {
- check_split_1(xa, 0, order);
- check_split_1(xa, 1UL << order, order);
- check_split_1(xa, 3UL << order, order);
+ for (new_order = 0; new_order < order; new_order++) {
+ check_split_1(xa, 0, order, new_order);
+ check_split_1(xa, 1UL << order, order, new_order);
+ check_split_1(xa, 3UL << order, order, new_order);
+ }
}
}
#else
diff --git a/lib/vsprintf.c b/lib/vsprintf.c
index 14c9a6a..fd0fde6 100644
--- a/lib/vsprintf.c
+++ b/lib/vsprintf.c
@@ -3102,8 +3102,6 @@ int bstr_printf(char *buf, size_t size, const char *fmt, const u32 *bin_buf)
switch (*fmt) {
case 'S':
case 's':
- case 'F':
- case 'f':
case 'x':
case 'K':
case 'e':
diff --git a/lib/xarray.c b/lib/xarray.c
index 5fa5161..ed775de 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -1011,7 +1011,7 @@ void xas_split_alloc(struct xa_state *xas, void *entry, unsigned int order,
do {
unsigned int i;
- void *sibling;
+ void *sibling = NULL;
struct xa_node *node;
node = kmem_cache_alloc(radix_tree_node_cachep, gfp);
@@ -1021,7 +1021,7 @@ void xas_split_alloc(struct xa_state *xas, void *entry, unsigned int order,
for (i = 0; i < XA_CHUNK_SIZE; i++) {
if ((i & mask) == 0) {
RCU_INIT_POINTER(node->slots[i], entry);
- sibling = xa_mk_sibling(0);
+ sibling = xa_mk_sibling(i);
} else {
RCU_INIT_POINTER(node->slots[i], sibling);
}
diff --git a/mm/gup.c b/mm/gup.c
index 054ff92..c2826f3 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1561,54 +1561,60 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
struct vm_area_struct **vmas,
unsigned int gup_flags)
{
- unsigned long i;
- unsigned long step;
- bool drain_allow = true;
- bool migrate_allow = true;
+ unsigned long i, isolation_error_count;
+ bool drain_allow;
LIST_HEAD(cma_page_list);
long ret = nr_pages;
+ struct page *prev_head, *head;
struct migration_target_control mtc = {
.nid = NUMA_NO_NODE,
.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_NOWARN,
};
check_again:
- for (i = 0; i < nr_pages;) {
-
- struct page *head = compound_head(pages[i]);
-
- /*
- * gup may start from a tail page. Advance step by the left
- * part.
- */
- step = compound_nr(head) - (pages[i] - head);
+ prev_head = NULL;
+ isolation_error_count = 0;
+ drain_allow = true;
+ for (i = 0; i < nr_pages; i++) {
+ head = compound_head(pages[i]);
+ if (head == prev_head)
+ continue;
+ prev_head = head;
/*
* If we get a page from the CMA zone, since we are going to
* be pinning these entries, we might as well move them out
* of the CMA zone if possible.
*/
if (is_migrate_cma_page(head)) {
- if (PageHuge(head))
- isolate_huge_page(head, &cma_page_list);
- else {
+ if (PageHuge(head)) {
+ if (!isolate_huge_page(head, &cma_page_list))
+ isolation_error_count++;
+ } else {
if (!PageLRU(head) && drain_allow) {
lru_add_drain_all();
drain_allow = false;
}
- if (!isolate_lru_page(head)) {
- list_add_tail(&head->lru, &cma_page_list);
- mod_node_page_state(page_pgdat(head),
- NR_ISOLATED_ANON +
- page_is_file_lru(head),
- thp_nr_pages(head));
+ if (isolate_lru_page(head)) {
+ isolation_error_count++;
+ continue;
}
+ list_add_tail(&head->lru, &cma_page_list);
+ mod_node_page_state(page_pgdat(head),
+ NR_ISOLATED_ANON +
+ page_is_file_lru(head),
+ thp_nr_pages(head));
}
}
-
- i += step;
}
+ /*
+ * If the list is empty and there were no isolation errors, all pages
+ * are in the correct zone.
+ */
+ if (list_empty(&cma_page_list) && !isolation_error_count)
+ return ret;
+
if (!list_empty(&cma_page_list)) {
/*
* drop the above get_user_pages reference.
@@ -1619,34 +1625,28 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
for (i = 0; i < nr_pages; i++)
put_page(pages[i]);
- if (migrate_pages(&cma_page_list, alloc_migration_target, NULL,
- (unsigned long)&mtc, MIGRATE_SYNC, MR_CONTIG_RANGE)) {
- /*
- * some of the pages failed migration. Do get_user_pages
- * without migration.
- */
- migrate_allow = false;
-
+ ret = migrate_pages(&cma_page_list, alloc_migration_target,
+ NULL, (unsigned long)&mtc, MIGRATE_SYNC,
+ MR_CONTIG_RANGE);
+ if (ret) {
if (!list_empty(&cma_page_list))
putback_movable_pages(&cma_page_list);
+ return ret > 0 ? -ENOMEM : ret;
}
- /*
- * We did migrate all the pages, Try to get the page references
- * again migrating any new CMA pages which we failed to isolate
- * earlier.
- */
- ret = __get_user_pages_locked(mm, start, nr_pages,
- pages, vmas, NULL,
- gup_flags);
- if ((ret > 0) && migrate_allow) {
- nr_pages = ret;
- drain_allow = true;
- goto check_again;
- }
+ /* We unpinned pages before migration, pin them again */
+ ret = __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
+ NULL, gup_flags);
+ if (ret <= 0)
+ return ret;
+ nr_pages = ret;
}
- return ret;
+ /*
+ * check again because pages were unpinned, and we also might have
+ * had isolation errors and need more pages to migrate.
+ */
+ goto check_again;
}
#else
static long check_and_migrate_cma_pages(struct mm_struct *mm,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 573f1a0..900851a 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -745,13 +745,20 @@ void hugetlb_fix_reserve_counts(struct inode *inode)
{
struct hugepage_subpool *spool = subpool_inode(inode);
long rsv_adjust;
+ bool reserved = false;
rsv_adjust = hugepage_subpool_get_pages(spool, 1);
- if (rsv_adjust) {
+ if (rsv_adjust > 0) {
struct hstate *h = hstate_inode(inode);
- hugetlb_acct_memory(h, 1);
+ if (!hugetlb_acct_memory(h, 1))
+ reserved = true;
+ } else if (!rsv_adjust) {
+ reserved = true;
}
+
+ if (!reserved)
+ pr_warn("hugetlb: Huge Page Reserved count may go negative.\n");
}
/*
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index abab394..a623811 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -714,17 +714,17 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
if (pte_write(pteval))
writable = true;
}
- if (likely(writable)) {
- if (likely(referenced)) {
- result = SCAN_SUCCEED;
- trace_mm_collapse_huge_page_isolate(page, none_or_zero,
- referenced, writable, result);
- return 1;
- }
- } else {
- result = SCAN_PAGE_RO;
- }
+ if (unlikely(!writable)) {
+ result = SCAN_PAGE_RO;
+ } else if (unlikely(!referenced)) {
+ result = SCAN_LACK_REFERENCED_PAGE;
+ } else {
+ result = SCAN_SUCCEED;
+ trace_mm_collapse_huge_page_isolate(page, none_or_zero,
+ referenced, writable, result);
+ return 1;
+ }
out:
release_pte_pages(pte, _pte, compound_pagelist);
trace_mm_collapse_huge_page_isolate(page, none_or_zero,
diff --git a/mm/ksm.c b/mm/ksm.c
index 0960750..25b8362 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -794,6 +794,7 @@ static void remove_rmap_item_from_tree(struct rmap_item *rmap_item)
stable_node->rmap_hlist_len--;
put_anon_vma(rmap_item->anon_vma);
+ rmap_item->head = NULL;
rmap_item->address &= PAGE_MASK;
} else if (rmap_item->address & UNSTABLE_FLAG) {
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index d72d2b9..8d9f5fa 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3162,9 +3162,17 @@ static void drain_obj_stock(struct memcg_stock_pcp *stock)
unsigned int nr_bytes = stock->nr_bytes & (PAGE_SIZE - 1);
if (nr_pages) {
+ struct mem_cgroup *memcg;
+
rcu_read_lock();
- __memcg_kmem_uncharge(obj_cgroup_memcg(old), nr_pages);
+retry:
+ memcg = obj_cgroup_memcg(old);
+ if (unlikely(!css_tryget(&memcg->css)))
+ goto retry;
rcu_read_unlock();
+
+ __memcg_kmem_uncharge(memcg, nr_pages);
+ css_put(&memcg->css);
}
/*
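The drain_obj_stock() fix takes a reference with css_tryget() under rcu_read_lock() and retries until it succeeds, so the uncharge can run safely after the RCU section ends. A generic sketch of the underlying increment-only-if-nonzero pattern, written with C11 atomics rather than the kernel's refcount API:

#include <stdatomic.h>
#include <stdbool.h>

/* Generic "take a reference only if one still exists" helper, analogous to
 * css_tryget()/refcount_inc_not_zero(): once the count has hit zero the
 * object is dying and must not be resurrected.
 */
struct obj {
        atomic_int refcount;
};

static bool obj_tryget(struct obj *o)
{
        int old = atomic_load(&o->refcount);

        while (old != 0) {
                if (atomic_compare_exchange_weak(&o->refcount, &old, old + 1))
                        return true;    /* reference taken */
                /* old was reloaded by the failed CAS; retry */
        }
        return false;   /* object already dying; caller must look it up again */
}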
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 570a20b..2d7a667 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1293,7 +1293,7 @@ static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
* communicated in siginfo, see kill_proc()
*/
start = (page->index << PAGE_SHIFT) & ~(size - 1);
- unmap_mapping_range(page->mapping, start, start + size, 0);
+ unmap_mapping_range(page->mapping, start, size, 0);
}
kill_procs(&tokill, flags & MF_MUST_KILL, !unmap_success, pfn, flags);
rc = 0;
diff --git a/mm/migrate.c b/mm/migrate.c
index 9d7ca1b..7982256 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2914,6 +2914,13 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
swp_entry = make_device_private_entry(page, vma->vm_flags & VM_WRITE);
entry = swp_entry_to_pte(swp_entry);
+ } else {
+ /*
+ * For now we only support migrating to un-addressable
+ * device memory.
+ */
+ pr_warn_once("Unsupported ZONE_DEVICE page type.\n");
+ goto abort;
}
} else {
entry = mk_pte(page, vma->vm_page_prot);
diff --git a/mm/percpu-internal.h b/mm/percpu-internal.h
index 18b768a..095d7ea 100644
--- a/mm/percpu-internal.h
+++ b/mm/percpu-internal.h
@@ -87,7 +87,7 @@ extern spinlock_t pcpu_lock;
extern struct list_head *pcpu_chunk_lists;
extern int pcpu_nr_slots;
-extern int pcpu_nr_empty_pop_pages;
+extern int pcpu_nr_empty_pop_pages[];
extern struct pcpu_chunk *pcpu_first_chunk;
extern struct pcpu_chunk *pcpu_reserved_chunk;
diff --git a/mm/percpu-stats.c b/mm/percpu-stats.c
index c8400a2..f6026db 100644
--- a/mm/percpu-stats.c
+++ b/mm/percpu-stats.c
@@ -145,6 +145,7 @@ static int percpu_stats_show(struct seq_file *m, void *v)
int slot, max_nr_alloc;
int *buffer;
enum pcpu_chunk_type type;
+ int nr_empty_pop_pages;
alloc_buffer:
spin_lock_irq(&pcpu_lock);
@@ -165,7 +166,11 @@ static int percpu_stats_show(struct seq_file *m, void *v)
goto alloc_buffer;
}
-#define PL(X) \
+ nr_empty_pop_pages = 0;
+ for (type = 0; type < PCPU_NR_CHUNK_TYPES; type++)
+ nr_empty_pop_pages += pcpu_nr_empty_pop_pages[type];
+
+#define PL(X) \
seq_printf(m, " %-20s: %12lld\n", #X, (long long int)pcpu_stats_ai.X)
seq_printf(m,
@@ -196,7 +201,7 @@ static int percpu_stats_show(struct seq_file *m, void *v)
PU(nr_max_chunks);
PU(min_alloc_size);
PU(max_alloc_size);
- P("empty_pop_pages", pcpu_nr_empty_pop_pages);
+ P("empty_pop_pages", nr_empty_pop_pages);
seq_putc(m, '\n');
#undef PU
diff --git a/mm/percpu.c b/mm/percpu.c
index ad7a37e..e12ab70 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -172,10 +172,10 @@ struct list_head *pcpu_chunk_lists __ro_after_init; /* chunk list slots */
static LIST_HEAD(pcpu_map_extend_chunks);
/*
- * The number of empty populated pages, protected by pcpu_lock. The
- * reserved chunk doesn't contribute to the count.
+ * The number of empty populated pages by chunk type, protected by pcpu_lock.
+ * The reserved chunk doesn't contribute to the count.
*/
-int pcpu_nr_empty_pop_pages;
+int pcpu_nr_empty_pop_pages[PCPU_NR_CHUNK_TYPES];
/*
* The number of populated pages in use by the allocator, protected by
@@ -555,7 +555,7 @@ static inline void pcpu_update_empty_pages(struct pcpu_chunk *chunk, int nr)
{
chunk->nr_empty_pop_pages += nr;
if (chunk != pcpu_reserved_chunk)
- pcpu_nr_empty_pop_pages += nr;
+ pcpu_nr_empty_pop_pages[pcpu_chunk_type(chunk)] += nr;
}
/*
@@ -1831,7 +1831,7 @@ static void __percpu *pcpu_alloc(size_t size, size_t align, bool reserved,
mutex_unlock(&pcpu_alloc_mutex);
}
- if (pcpu_nr_empty_pop_pages < PCPU_EMPTY_POP_PAGES_LOW)
+ if (pcpu_nr_empty_pop_pages[type] < PCPU_EMPTY_POP_PAGES_LOW)
pcpu_schedule_balance_work();
/* clear the areas and return address relative to base address */
@@ -1999,7 +1999,7 @@ static void __pcpu_balance_workfn(enum pcpu_chunk_type type)
pcpu_atomic_alloc_failed = false;
} else {
nr_to_pop = clamp(PCPU_EMPTY_POP_PAGES_HIGH -
- pcpu_nr_empty_pop_pages,
+ pcpu_nr_empty_pop_pages[type],
0, PCPU_EMPTY_POP_PAGES_HIGH);
}
@@ -2579,7 +2579,7 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
/* link the first chunk in */
pcpu_first_chunk = chunk;
- pcpu_nr_empty_pop_pages = pcpu_first_chunk->nr_empty_pop_pages;
+ pcpu_nr_empty_pop_pages[PCPU_CHUNK_ROOT] = pcpu_first_chunk->nr_empty_pop_pages;
pcpu_chunk_relocate(pcpu_first_chunk, -1);
/* include all regions of the first chunk */
diff --git a/mm/ptdump.c b/mm/ptdump.c
index ba88ec4..93f2f63 100644
--- a/mm/ptdump.c
+++ b/mm/ptdump.c
@@ -108,7 +108,7 @@ static int ptdump_pte_entry(pte_t *pte, unsigned long addr,
unsigned long next, struct mm_walk *walk)
{
struct ptdump_state *st = walk->private;
- pte_t val = READ_ONCE(*pte);
+ pte_t val = ptep_get(pte);
if (st->effective_prot)
st->effective_prot(st, 4, pte_val(val));
diff --git a/mm/shmem.c b/mm/shmem.c
index 537c137..6e487bf 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2256,25 +2256,11 @@ int shmem_lock(struct file *file, int lock, struct user_struct *user)
static int shmem_mmap(struct file *file, struct vm_area_struct *vma)
{
struct shmem_inode_info *info = SHMEM_I(file_inode(file));
+ int ret;
- if (info->seals & F_SEAL_FUTURE_WRITE) {
- /*
- * New PROT_WRITE and MAP_SHARED mmaps are not allowed when
- * "future write" seal active.
- */
- if ((vma->vm_flags & VM_SHARED) && (vma->vm_flags & VM_WRITE))
- return -EPERM;
-
- /*
- * Since an F_SEAL_FUTURE_WRITE sealed memfd can be mapped as
- * MAP_SHARED and read-only, take care to not allow mprotect to
- * revert protections on such mappings. Do this only for shared
- * mappings. For private mappings, don't need to mask
- * VM_MAYWRITE as we still want them to be COW-writable.
- */
- if (vma->vm_flags & VM_SHARED)
- vma->vm_flags &= ~(VM_MAYWRITE);
- }
+ ret = seal_check_future_write(info->seals, vma);
+ if (ret)
+ return ret;
/* arm64 - allow memory tagging on RAM-based files */
vma->vm_flags |= VM_MTE_ALLOWED;
@@ -2378,8 +2364,18 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
pgoff_t offset, max_off;
ret = -ENOMEM;
- if (!shmem_inode_acct_block(inode, 1))
+ if (!shmem_inode_acct_block(inode, 1)) {
+ /*
+ * We may have got a page, returned -ENOENT triggering a retry,
+ * and now we find ourselves with -ENOMEM. Release the page, to
+ * avoid a BUG_ON in our caller.
+ */
+ if (unlikely(*pagep)) {
+ put_page(*pagep);
+ *pagep = NULL;
+ }
goto out;
+ }
if (!*pagep) {
page = shmem_alloc_page(gfp, info, pgoff);
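The shmem_mmap() hunk folds the F_SEAL_FUTURE_WRITE handling into a seal_check_future_write() helper. The user-visible behaviour it preserves can be exercised from userspace roughly as follows (assumes Linux >= 5.1 and a libc exposing memfd_create() and F_SEAL_FUTURE_WRITE):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        int fd = memfd_create("demo", MFD_ALLOW_SEALING);
        if (fd < 0) {
                perror("memfd_create");
                return 1;
        }
        if (ftruncate(fd, 4096) < 0) {
                perror("ftruncate");
                return 1;
        }

        if (fcntl(fd, F_ADD_SEALS, F_SEAL_FUTURE_WRITE) < 0) {
                perror("F_ADD_SEALS");
                return 1;
        }

        /* A new MAP_SHARED + PROT_WRITE mapping should now fail with EPERM. */
        void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED)
                perror("mmap (expected to fail)");
        else
                munmap(p, 4096);

        close(fd);
        return 0;
}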
diff --git a/mm/slab.c b/mm/slab.c
index b111356..b2cc2cf 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1789,8 +1789,7 @@ static int __ref setup_cpu_cache(struct kmem_cache *cachep, gfp_t gfp)
}
slab_flags_t kmem_cache_flags(unsigned int object_size,
- slab_flags_t flags, const char *name,
- void (*ctor)(void *))
+ slab_flags_t flags, const char *name)
{
return flags;
}
diff --git a/mm/slab.h b/mm/slab.h
index f9977d6..e258ffc 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -110,8 +110,7 @@ __kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
slab_flags_t flags, void (*ctor)(void *));
slab_flags_t kmem_cache_flags(unsigned int object_size,
- slab_flags_t flags, const char *name,
- void (*ctor)(void *));
+ slab_flags_t flags, const char *name);
#else
static inline struct kmem_cache *
__kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
@@ -119,8 +118,7 @@ __kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
{ return NULL; }
static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
- slab_flags_t flags, const char *name,
- void (*ctor)(void *))
+ slab_flags_t flags, const char *name)
{
return flags;
}
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 8d96679..8f27ccf 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -196,7 +196,7 @@ struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
size = ALIGN(size, sizeof(void *));
align = calculate_alignment(flags, align, size);
size = ALIGN(size, align);
- flags = kmem_cache_flags(size, flags, name, NULL);
+ flags = kmem_cache_flags(size, flags, name);
if (flags & SLAB_NEVER_MERGE)
return NULL;
diff --git a/mm/slub.c b/mm/slub.c
index fbc415c..05a501b 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1397,7 +1397,6 @@ __setup("slub_debug", setup_slub_debug);
* @object_size: the size of an object without meta data
* @flags: flags to set
* @name: name of the cache
- * @ctor: constructor function
*
* Debug option(s) are applied to @flags. In addition to the debug
* option(s), if a slab name (or multiple) is specified i.e.
@@ -1405,8 +1404,7 @@ __setup("slub_debug", setup_slub_debug);
* then only the select slabs will receive the debug option(s).
*/
slab_flags_t kmem_cache_flags(unsigned int object_size,
- slab_flags_t flags, const char *name,
- void (*ctor)(void *))
+ slab_flags_t flags, const char *name)
{
char *iter;
size_t len;
@@ -1471,8 +1469,7 @@ static inline void add_full(struct kmem_cache *s, struct kmem_cache_node *n,
static inline void remove_full(struct kmem_cache *s, struct kmem_cache_node *n,
struct page *page) {}
slab_flags_t kmem_cache_flags(unsigned int object_size,
- slab_flags_t flags, const char *name,
- void (*ctor)(void *))
+ slab_flags_t flags, const char *name)
{
return flags;
}
@@ -3782,7 +3779,7 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
{
- s->flags = kmem_cache_flags(s->size, flags, s->name, s->ctor);
+ s->flags = kmem_cache_flags(s->size, flags, s->name);
#ifdef CONFIG_SLAB_FREELIST_HARDENED
s->random = get_random_long();
#endif
diff --git a/mm/sparse.c b/mm/sparse.c
index 7bd23f9..33406ea 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -547,6 +547,7 @@ static void __init sparse_init_nid(int nid, unsigned long pnum_begin,
pr_err("%s: node[%d] memory map backing failed. Some memory will not be available.",
__func__, nid);
pnum_begin = pnum;
+ sparse_buffer_fini();
goto failed;
}
check_usemap_section_nr(nid, usage);
diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c
index 98a0aaa..a510f7f 100644
--- a/net/batman-adv/translation-table.c
+++ b/net/batman-adv/translation-table.c
@@ -891,6 +891,7 @@ batadv_tt_prepare_tvlv_global_data(struct batadv_orig_node *orig_node,
hlist_for_each_entry(vlan, &orig_node->vlan_list, list) {
tt_vlan->vid = htons(vlan->vid);
tt_vlan->crc = htonl(vlan->tt.crc);
+ tt_vlan->reserved = 0;
tt_vlan++;
}
@@ -974,6 +975,7 @@ batadv_tt_prepare_tvlv_local_data(struct batadv_priv *bat_priv,
tt_vlan->vid = htons(vlan->vid);
tt_vlan->crc = htonl(vlan->tt.crc);
+ tt_vlan->reserved = 0;
tt_vlan++;
}
diff --git a/net/bluetooth/cmtp/core.c b/net/bluetooth/cmtp/core.c
index 07cfa32..0a2d78e 100644
--- a/net/bluetooth/cmtp/core.c
+++ b/net/bluetooth/cmtp/core.c
@@ -392,6 +392,11 @@ int cmtp_add_connection(struct cmtp_connadd_req *req, struct socket *sock)
if (!(session->flags & BIT(CMTP_LOOPBACK))) {
err = cmtp_attach_device(session);
if (err < 0) {
+ /* Caller will call fput in case of failure, and so
+ * will the cmtp_session kthread.
+ */
+ get_file(session->sock->file);
+
atomic_inc(&session->terminate);
wake_up_interruptible(sk_sleep(session->sock->sk));
up_write(&cmtp_session_sem);
diff --git a/net/bluetooth/ecdh_helper.h b/net/bluetooth/ecdh_helper.h
index a6f8d03..8307239 100644
--- a/net/bluetooth/ecdh_helper.h
+++ b/net/bluetooth/ecdh_helper.h
@@ -25,6 +25,6 @@
int compute_ecdh_secret(struct crypto_kpp *tfm, const u8 pair_public_key[64],
u8 secret[32]);
-int set_ecdh_privkey(struct crypto_kpp *tfm, const u8 *private_key);
+int set_ecdh_privkey(struct crypto_kpp *tfm, const u8 private_key[32]);
int generate_ecdh_public_key(struct crypto_kpp *tfm, u8 public_key[64]);
int generate_ecdh_keys(struct crypto_kpp *tfm, u8 public_key[64]);
diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
index d0c1024..1c5a0a6 100644
--- a/net/bluetooth/hci_conn.c
+++ b/net/bluetooth/hci_conn.c
@@ -1789,8 +1789,6 @@ u32 hci_conn_get_phy(struct hci_conn *conn)
{
u32 phys = 0;
- hci_dev_lock(conn->hdev);
-
/* BLUETOOTH CORE SPECIFICATION Version 5.2 | Vol 2, Part B page 471:
* Table 6.2: Packets defined for synchronous, asynchronous, and
* CSB logical transport types.
@@ -1887,7 +1885,5 @@ u32 hci_conn_get_phy(struct hci_conn *conn)
break;
}
- hci_dev_unlock(conn->hdev);
-
return phys;
}
diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
index 17a7269..4676e4b 100644
--- a/net/bluetooth/hci_event.c
+++ b/net/bluetooth/hci_event.c
@@ -4990,6 +4990,7 @@ static void hci_loglink_complete_evt(struct hci_dev *hdev, struct sk_buff *skb)
return;
hchan->handle = le16_to_cpu(ev->handle);
+ hchan->amp = true;
BT_DBG("hcon %p mgr %p hchan %p", hcon, hcon->amp_mgr, hchan);
@@ -5022,7 +5023,7 @@ static void hci_disconn_loglink_complete_evt(struct hci_dev *hdev,
hci_dev_lock(hdev);
hchan = hci_chan_lookup_handle(hdev, le16_to_cpu(ev->handle));
- if (!hchan)
+ if (!hchan || !hchan->amp)
goto unlock;
amp_destroy_logical_link(hchan, ev->reason);
@@ -5896,7 +5897,7 @@ static void hci_le_phy_update_evt(struct hci_dev *hdev, struct sk_buff *skb)
BT_DBG("%s status 0x%2.2x", hdev->name, ev->status);
- if (!ev->status)
+ if (ev->status)
return;
hci_dev_lock(hdev);
diff --git a/net/bluetooth/hci_request.c b/net/bluetooth/hci_request.c
index 610ed08..161ea93 100644
--- a/net/bluetooth/hci_request.c
+++ b/net/bluetooth/hci_request.c
@@ -271,12 +271,16 @@ int hci_req_sync(struct hci_dev *hdev, int (*req)(struct hci_request *req,
{
int ret;
- if (!test_bit(HCI_UP, &hdev->flags))
- return -ENETDOWN;
-
/* Serialize all requests */
hci_req_sync_lock(hdev);
- ret = __hci_req_sync(hdev, req, opt, timeout, hci_status);
+ /* check the state after obtaining the lock to protect the HCI_UP
+ * against any races from hci_dev_do_close when the controller
+ * gets removed.
+ */
+ if (test_bit(HCI_UP, &hdev->flags))
+ ret = __hci_req_sync(hdev, req, opt, timeout, hci_status);
+ else
+ ret = -ENETDOWN;
hci_req_sync_unlock(hdev);
return ret;
diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
index 1ab27b9..cdc3863 100644
--- a/net/bluetooth/l2cap_core.c
+++ b/net/bluetooth/l2cap_core.c
@@ -451,6 +451,8 @@ struct l2cap_chan *l2cap_chan_create(void)
if (!chan)
return NULL;
+ skb_queue_head_init(&chan->tx_q);
+ skb_queue_head_init(&chan->srej_q);
mutex_init(&chan->lock);
/* Set default lock nesting level */
@@ -516,7 +518,9 @@ void l2cap_chan_set_defaults(struct l2cap_chan *chan)
chan->flush_to = L2CAP_DEFAULT_FLUSH_TO;
chan->retrans_timeout = L2CAP_DEFAULT_RETRANS_TO;
chan->monitor_timeout = L2CAP_DEFAULT_MONITOR_TO;
+
chan->conf_state = 0;
+ set_bit(CONF_NOT_COMPLETE, &chan->conf_state);
set_bit(FLAG_FORCE_ACTIVE, &chan->flags);
}
diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
index f1b1edd..c99d65e 100644
--- a/net/bluetooth/l2cap_sock.c
+++ b/net/bluetooth/l2cap_sock.c
@@ -179,9 +179,17 @@ static int l2cap_sock_connect(struct socket *sock, struct sockaddr *addr,
struct l2cap_chan *chan = l2cap_pi(sk)->chan;
struct sockaddr_l2 la;
int len, err = 0;
+ bool zapped;
BT_DBG("sk %p", sk);
+ lock_sock(sk);
+ zapped = sock_flag(sk, SOCK_ZAPPED);
+ release_sock(sk);
+
+ if (zapped)
+ return -EINVAL;
+
if (!addr || alen < offsetofend(struct sockaddr, sa_family) ||
addr->sa_family != AF_BLUETOOTH)
return -EINVAL;
diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c
index bf4bef1..2b7879a 100644
--- a/net/bluetooth/smp.c
+++ b/net/bluetooth/smp.c
@@ -2733,6 +2733,15 @@ static int smp_cmd_public_key(struct l2cap_conn *conn, struct sk_buff *skb)
if (skb->len < sizeof(*key))
return SMP_INVALID_PARAMS;
+ /* Check if remote and local public keys are the same and debug key is
+ * not in use.
+ */
+ if (!test_bit(SMP_FLAG_DEBUG_KEY, &smp->flags) &&
+ !crypto_memneq(key, smp->local_pk, 64)) {
+ bt_dev_err(hdev, "Remote and local public keys are identical");
+ return SMP_UNSPECIFIED;
+ }
+
memcpy(smp->remote_pk, key, 64);
if (test_bit(SMP_FLAG_REMOTE_OOB, &smp->flags)) {
diff --git a/net/bridge/br_arp_nd_proxy.c b/net/bridge/br_arp_nd_proxy.c
index dfec65e..3db1def 100644
--- a/net/bridge/br_arp_nd_proxy.c
+++ b/net/bridge/br_arp_nd_proxy.c
@@ -160,7 +160,9 @@ void br_do_proxy_suppress_arp(struct sk_buff *skb, struct net_bridge *br,
if (br_opt_get(br, BROPT_NEIGH_SUPPRESS_ENABLED)) {
if (p && (p->flags & BR_NEIGH_SUPPRESS))
return;
- if (ipv4_is_zeronet(sip) || sip == tip) {
+ if (parp->ar_op != htons(ARPOP_RREQUEST) &&
+ parp->ar_op != htons(ARPOP_RREPLY) &&
+ (ipv4_is_zeronet(sip) || sip == tip)) {
/* prevent flooding to neigh suppress ports */
BR_INPUT_SKB_CB(skb)->proxyarp_replied = 1;
return;
diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
index 54cb82a..5015ece 100644
--- a/net/bridge/br_multicast.c
+++ b/net/bridge/br_multicast.c
@@ -3070,25 +3070,14 @@ static int br_multicast_ipv4_rcv(struct net_bridge *br,
}
#if IS_ENABLED(CONFIG_IPV6)
-static int br_ip6_multicast_mrd_rcv(struct net_bridge *br,
- struct net_bridge_port *port,
- struct sk_buff *skb)
+static void br_ip6_multicast_mrd_rcv(struct net_bridge *br,
+ struct net_bridge_port *port,
+ struct sk_buff *skb)
{
- int ret;
-
- if (ipv6_hdr(skb)->nexthdr != IPPROTO_ICMPV6)
- return -ENOMSG;
-
- ret = ipv6_mc_check_icmpv6(skb);
- if (ret < 0)
- return ret;
-
if (icmp6_hdr(skb)->icmp6_type != ICMPV6_MRDISC_ADV)
- return -ENOMSG;
+ return;
br_multicast_mark_router(br, port);
-
- return 0;
}
static int br_multicast_ipv6_rcv(struct net_bridge *br,
@@ -3102,18 +3091,12 @@ static int br_multicast_ipv6_rcv(struct net_bridge *br,
err = ipv6_mc_check_mld(skb);
- if (err == -ENOMSG) {
+ if (err == -ENOMSG || err == -ENODATA) {
if (!ipv6_addr_is_ll_all_nodes(&ipv6_hdr(skb)->daddr))
BR_INPUT_SKB_CB(skb)->mrouters_only = 1;
-
- if (ipv6_addr_is_all_snoopers(&ipv6_hdr(skb)->daddr)) {
- err = br_ip6_multicast_mrd_rcv(br, port, skb);
-
- if (err < 0 && err != -ENOMSG) {
- br_multicast_err_count(br, port, skb->protocol);
- return err;
- }
- }
+ if (err == -ENODATA &&
+ ipv6_addr_is_all_snoopers(&ipv6_hdr(skb)->daddr))
+ br_ip6_multicast_mrd_rcv(br, port, skb);
return 0;
} else if (err < 0) {
diff --git a/net/bridge/br_netlink.c b/net/bridge/br_netlink.c
index 92d64ab..73f71c2 100644
--- a/net/bridge/br_netlink.c
+++ b/net/bridge/br_netlink.c
@@ -99,8 +99,9 @@ static size_t br_get_link_af_size_filtered(const struct net_device *dev,
rcu_read_lock();
if (netif_is_bridge_port(dev)) {
- p = br_port_get_rcu(dev);
- vg = nbp_vlan_group_rcu(p);
+ p = br_port_get_check_rcu(dev);
+ if (p)
+ vg = nbp_vlan_group_rcu(p);
} else if (dev->priv_flags & IFF_EBRIDGE) {
br = netdev_priv(dev);
vg = br_vlan_group_rcu(br);
diff --git a/net/bridge/netfilter/ebtable_broute.c b/net/bridge/netfilter/ebtable_broute.c
index 66e7af1..32bc282 100644
--- a/net/bridge/netfilter/ebtable_broute.c
+++ b/net/bridge/netfilter/ebtable_broute.c
@@ -105,14 +105,20 @@ static int __net_init broute_net_init(struct net *net)
&net->xt.broute_table);
}
+static void __net_exit broute_net_pre_exit(struct net *net)
+{
+ ebt_unregister_table_pre_exit(net, "broute", &ebt_ops_broute);
+}
+
static void __net_exit broute_net_exit(struct net *net)
{
- ebt_unregister_table(net, net->xt.broute_table, &ebt_ops_broute);
+ ebt_unregister_table(net, net->xt.broute_table);
}
static struct pernet_operations broute_net_ops = {
.init = broute_net_init,
.exit = broute_net_exit,
+ .pre_exit = broute_net_pre_exit,
};
static int __init ebtable_broute_init(void)
diff --git a/net/bridge/netfilter/ebtable_filter.c b/net/bridge/netfilter/ebtable_filter.c
index 78cb9b2..bcf982e 100644
--- a/net/bridge/netfilter/ebtable_filter.c
+++ b/net/bridge/netfilter/ebtable_filter.c
@@ -99,14 +99,20 @@ static int __net_init frame_filter_net_init(struct net *net)
&net->xt.frame_filter);
}
+static void __net_exit frame_filter_net_pre_exit(struct net *net)
+{
+ ebt_unregister_table_pre_exit(net, "filter", ebt_ops_filter);
+}
+
static void __net_exit frame_filter_net_exit(struct net *net)
{
- ebt_unregister_table(net, net->xt.frame_filter, ebt_ops_filter);
+ ebt_unregister_table(net, net->xt.frame_filter);
}
static struct pernet_operations frame_filter_net_ops = {
.init = frame_filter_net_init,
.exit = frame_filter_net_exit,
+ .pre_exit = frame_filter_net_pre_exit,
};
static int __init ebtable_filter_init(void)
diff --git a/net/bridge/netfilter/ebtable_nat.c b/net/bridge/netfilter/ebtable_nat.c
index 0888936..0d09277 100644
--- a/net/bridge/netfilter/ebtable_nat.c
+++ b/net/bridge/netfilter/ebtable_nat.c
@@ -99,14 +99,20 @@ static int __net_init frame_nat_net_init(struct net *net)
&net->xt.frame_nat);
}
+static void __net_exit frame_nat_net_pre_exit(struct net *net)
+{
+ ebt_unregister_table_pre_exit(net, "nat", ebt_ops_nat);
+}
+
static void __net_exit frame_nat_net_exit(struct net *net)
{
- ebt_unregister_table(net, net->xt.frame_nat, ebt_ops_nat);
+ ebt_unregister_table(net, net->xt.frame_nat);
}
static struct pernet_operations frame_nat_net_ops = {
.init = frame_nat_net_init,
.exit = frame_nat_net_exit,
+ .pre_exit = frame_nat_net_pre_exit,
};
static int __init ebtable_nat_init(void)
diff --git a/net/bridge/netfilter/ebtables.c b/net/bridge/netfilter/ebtables.c
index ebe33b6..d481ff24 100644
--- a/net/bridge/netfilter/ebtables.c
+++ b/net/bridge/netfilter/ebtables.c
@@ -1232,10 +1232,34 @@ int ebt_register_table(struct net *net, const struct ebt_table *input_table,
return ret;
}
-void ebt_unregister_table(struct net *net, struct ebt_table *table,
- const struct nf_hook_ops *ops)
+static struct ebt_table *__ebt_find_table(struct net *net, const char *name)
{
- nf_unregister_net_hooks(net, ops, hweight32(table->valid_hooks));
+ struct ebt_table *t;
+
+ mutex_lock(&ebt_mutex);
+
+ list_for_each_entry(t, &net->xt.tables[NFPROTO_BRIDGE], list) {
+ if (strcmp(t->name, name) == 0) {
+ mutex_unlock(&ebt_mutex);
+ return t;
+ }
+ }
+
+ mutex_unlock(&ebt_mutex);
+ return NULL;
+}
+
+void ebt_unregister_table_pre_exit(struct net *net, const char *name, const struct nf_hook_ops *ops)
+{
+ struct ebt_table *table = __ebt_find_table(net, name);
+
+ if (table)
+ nf_unregister_net_hooks(net, ops, hweight32(table->valid_hooks));
+}
+EXPORT_SYMBOL(ebt_unregister_table_pre_exit);
+
+void ebt_unregister_table(struct net *net, struct ebt_table *table)
+{
__ebt_unregister_table(net, table);
}
diff --git a/net/can/bcm.c b/net/can/bcm.c
index 0e5c37b..909b9e6 100644
--- a/net/can/bcm.c
+++ b/net/can/bcm.c
@@ -86,6 +86,8 @@ MODULE_LICENSE("Dual BSD/GPL");
MODULE_AUTHOR("Oliver Hartkopp <oliver.hartkopp@volkswagen.de>");
MODULE_ALIAS("can-proto-2");
+#define BCM_MIN_NAMELEN CAN_REQUIRED_SIZE(struct sockaddr_can, can_ifindex)
+
/*
* easy access to the first 64 bit of can(fd)_frame payload. cp->data is
* 64 bit aligned so the offset has to be multiples of 8 which is ensured
@@ -1292,7 +1294,7 @@ static int bcm_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
/* no bound device as default => check msg_name */
DECLARE_SOCKADDR(struct sockaddr_can *, addr, msg->msg_name);
- if (msg->msg_namelen < CAN_REQUIRED_SIZE(*addr, can_ifindex))
+ if (msg->msg_namelen < BCM_MIN_NAMELEN)
return -EINVAL;
if (addr->can_family != AF_CAN)
@@ -1534,7 +1536,7 @@ static int bcm_connect(struct socket *sock, struct sockaddr *uaddr, int len,
struct net *net = sock_net(sk);
int ret = 0;
- if (len < CAN_REQUIRED_SIZE(*addr, can_ifindex))
+ if (len < BCM_MIN_NAMELEN)
return -EINVAL;
lock_sock(sk);
@@ -1616,8 +1618,8 @@ static int bcm_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
sock_recv_ts_and_drops(msg, sk, skb);
if (msg->msg_name) {
- __sockaddr_check_size(sizeof(struct sockaddr_can));
- msg->msg_namelen = sizeof(struct sockaddr_can);
+ __sockaddr_check_size(BCM_MIN_NAMELEN);
+ msg->msg_namelen = BCM_MIN_NAMELEN;
memcpy(msg->msg_name, skb->cb, msg->msg_namelen);
}
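BCM_MIN_NAMELEN above is built with CAN_REQUIRED_SIZE(), which is essentially an offsetofend() computation: the protocol only needs msg_name to cover the members it actually reads, not the full sockaddr_can. A small illustration of that computation (assumes the kernel UAPI header linux/can.h is installed; the REQUIRED_SIZE macro here is a stand-in, not the kernel's definition):

#include <stddef.h>
#include <stdio.h>
#include <linux/can.h>

/* offsetofend()-style helper: size needed to reach the end of one member. */
#define REQUIRED_SIZE(type, member) \
        (offsetof(type, member) + sizeof(((type *)0)->member))

int main(void)
{
        printf("min namelen up to can_ifindex: %zu (full sockaddr_can: %zu)\n",
               REQUIRED_SIZE(struct sockaddr_can, can_ifindex),
               sizeof(struct sockaddr_can));
        return 0;
}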
diff --git a/net/can/isotp.c b/net/can/isotp.c
index ea1e227..d5780ab 100644
--- a/net/can/isotp.c
+++ b/net/can/isotp.c
@@ -77,6 +77,8 @@ MODULE_LICENSE("Dual BSD/GPL");
MODULE_AUTHOR("Oliver Hartkopp <socketcan@hartkopp.net>");
MODULE_ALIAS("can-proto-6");
+#define ISOTP_MIN_NAMELEN CAN_REQUIRED_SIZE(struct sockaddr_can, can_addr.tp)
+
#define SINGLE_MASK(id) (((id) & CAN_EFF_FLAG) ? \
(CAN_EFF_MASK | CAN_EFF_FLAG | CAN_RTR_FLAG) : \
(CAN_SFF_MASK | CAN_EFF_FLAG | CAN_RTR_FLAG))
@@ -981,7 +983,8 @@ static int isotp_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
sock_recv_timestamp(msg, sk, skb);
if (msg->msg_name) {
- msg->msg_namelen = sizeof(struct sockaddr_can);
+ __sockaddr_check_size(ISOTP_MIN_NAMELEN);
+ msg->msg_namelen = ISOTP_MIN_NAMELEN;
memcpy(msg->msg_name, skb->cb, msg->msg_namelen);
}
@@ -1050,7 +1053,7 @@ static int isotp_bind(struct socket *sock, struct sockaddr *uaddr, int len)
int err = 0;
int notify_enetdown = 0;
- if (len < CAN_REQUIRED_SIZE(struct sockaddr_can, can_addr.tp))
+ if (len < ISOTP_MIN_NAMELEN)
return -EINVAL;
if (addr->can_addr.tp.rx_id == addr->can_addr.tp.tx_id)
@@ -1136,13 +1139,13 @@ static int isotp_getname(struct socket *sock, struct sockaddr *uaddr, int peer)
if (peer)
return -EOPNOTSUPP;
- memset(addr, 0, sizeof(*addr));
+ memset(addr, 0, ISOTP_MIN_NAMELEN);
addr->can_family = AF_CAN;
addr->can_ifindex = so->ifindex;
addr->can_addr.tp.rx_id = so->rxid;
addr->can_addr.tp.tx_id = so->txid;
- return sizeof(*addr);
+ return ISOTP_MIN_NAMELEN;
}
static int isotp_setsockopt(struct socket *sock, int level, int optname,
diff --git a/net/can/raw.c b/net/can/raw.c
index 6ec8aa1..95113b0 100644
--- a/net/can/raw.c
+++ b/net/can/raw.c
@@ -60,6 +60,8 @@ MODULE_LICENSE("Dual BSD/GPL");
MODULE_AUTHOR("Urs Thuermann <urs.thuermann@volkswagen.de>");
MODULE_ALIAS("can-proto-1");
+#define RAW_MIN_NAMELEN CAN_REQUIRED_SIZE(struct sockaddr_can, can_ifindex)
+
#define MASK_ALL 0
/* A raw socket has a list of can_filters attached to it, each receiving
@@ -394,7 +396,7 @@ static int raw_bind(struct socket *sock, struct sockaddr *uaddr, int len)
int err = 0;
int notify_enetdown = 0;
- if (len < CAN_REQUIRED_SIZE(*addr, can_ifindex))
+ if (len < RAW_MIN_NAMELEN)
return -EINVAL;
if (addr->can_family != AF_CAN)
return -EINVAL;
@@ -475,11 +477,11 @@ static int raw_getname(struct socket *sock, struct sockaddr *uaddr,
if (peer)
return -EOPNOTSUPP;
- memset(addr, 0, sizeof(*addr));
+ memset(addr, 0, RAW_MIN_NAMELEN);
addr->can_family = AF_CAN;
addr->can_ifindex = ro->ifindex;
- return sizeof(*addr);
+ return RAW_MIN_NAMELEN;
}
static int raw_setsockopt(struct socket *sock, int level, int optname,
@@ -731,7 +733,7 @@ static int raw_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
if (msg->msg_name) {
DECLARE_SOCKADDR(struct sockaddr_can *, addr, msg->msg_name);
- if (msg->msg_namelen < CAN_REQUIRED_SIZE(*addr, can_ifindex))
+ if (msg->msg_namelen < RAW_MIN_NAMELEN)
return -EINVAL;
if (addr->can_family != AF_CAN)
@@ -824,8 +826,8 @@ static int raw_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
sock_recv_ts_and_drops(msg, sk, skb);
if (msg->msg_name) {
- __sockaddr_check_size(sizeof(struct sockaddr_can));
- msg->msg_namelen = sizeof(struct sockaddr_can);
+ __sockaddr_check_size(RAW_MIN_NAMELEN);
+ msg->msg_namelen = RAW_MIN_NAMELEN;
memcpy(msg->msg_name, skb->cb, msg->msg_namelen);
}
diff --git a/net/core/dev.c b/net/core/dev.c
index 62ff712..0c9ce36 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -3764,7 +3764,8 @@ static inline int __dev_xmit_skb(struct sk_buff *skb, struct Qdisc *q,
if (q->flags & TCQ_F_NOLOCK) {
rc = q->enqueue(skb, q, &to_free) & NET_XMIT_MASK;
- qdisc_run(q);
+ if (likely(!netif_xmit_frozen_or_stopped(txq)))
+ qdisc_run(q);
if (unlikely(to_free))
kfree_skb_list(to_free);
@@ -4910,25 +4911,43 @@ static __latent_entropy void net_tx_action(struct softirq_action *h)
sd->output_queue_tailp = &sd->output_queue;
local_irq_enable();
+ rcu_read_lock();
+
while (head) {
struct Qdisc *q = head;
spinlock_t *root_lock = NULL;
head = head->next_sched;
- if (!(q->flags & TCQ_F_NOLOCK)) {
- root_lock = qdisc_lock(q);
- spin_lock(root_lock);
- }
/* We need to make sure head->next_sched is read
* before clearing __QDISC_STATE_SCHED
*/
smp_mb__before_atomic();
+
+ if (!(q->flags & TCQ_F_NOLOCK)) {
+ root_lock = qdisc_lock(q);
+ spin_lock(root_lock);
+ } else if (unlikely(test_bit(__QDISC_STATE_DEACTIVATED,
+ &q->state))) {
+ /* There is a synchronize_net() between
+ * STATE_DEACTIVATED flag being set and
+ * qdisc_reset()/some_qdisc_is_busy() in
+ * dev_deactivate(), so we can safely bail out
+ * early here to avoid data race between
+ * qdisc_deactivate() and some_qdisc_is_busy()
+ * for lockless qdisc.
+ */
+ clear_bit(__QDISC_STATE_SCHED, &q->state);
+ continue;
+ }
+
clear_bit(__QDISC_STATE_SCHED, &q->state);
qdisc_run(q);
if (root_lock)
spin_unlock(root_lock);
}
+
+ rcu_read_unlock();
}
xfrm_dev_backlog(sd);
@@ -5857,7 +5876,7 @@ static struct list_head *gro_list_prepare(struct napi_struct *napi,
return head;
}
-static void skb_gro_reset_offset(struct sk_buff *skb)
+static inline void skb_gro_reset_offset(struct sk_buff *skb, u32 nhoff)
{
const struct skb_shared_info *pinfo = skb_shinfo(skb);
const skb_frag_t *frag0 = &pinfo->frags[0];
@@ -5867,7 +5886,8 @@ static void skb_gro_reset_offset(struct sk_buff *skb)
NAPI_GRO_CB(skb)->frag0_len = 0;
if (!skb_headlen(skb) && pinfo->nr_frags &&
- !PageHighMem(skb_frag_page(frag0))) {
+ !PageHighMem(skb_frag_page(frag0)) &&
+ (!NET_IP_ALIGN || !((skb_frag_off(frag0) + nhoff) & 3))) {
NAPI_GRO_CB(skb)->frag0 = skb_frag_address(frag0);
NAPI_GRO_CB(skb)->frag0_len = min_t(unsigned int,
skb_frag_size(frag0),
@@ -6100,7 +6120,7 @@ gro_result_t napi_gro_receive(struct napi_struct *napi, struct sk_buff *skb)
skb_mark_napi_id(skb, napi);
trace_napi_gro_receive_entry(skb);
- skb_gro_reset_offset(skb);
+ skb_gro_reset_offset(skb, 0);
ret = napi_skb_finish(napi, skb, dev_gro_receive(napi, skb));
trace_napi_gro_receive_exit(ret);
@@ -6193,7 +6213,7 @@ static struct sk_buff *napi_frags_skb(struct napi_struct *napi)
napi->skb = NULL;
skb_reset_mac_header(skb);
- skb_gro_reset_offset(skb);
+ skb_gro_reset_offset(skb, hlen);
if (unlikely(skb_gro_header_hard(skb, hlen))) {
eth = skb_gro_header_slow(skb, hlen, 0);
diff --git a/net/core/filter.c b/net/core/filter.c
index 2e1ad8c..14d9e4d 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -3782,6 +3782,7 @@ static inline int __bpf_skb_change_head(struct sk_buff *skb, u32 head_room,
__skb_push(skb, head_room);
memset(skb->data, 0, head_room);
skb_reset_mac_header(skb);
+ skb_reset_mac_len(skb);
}
return ret;
diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
index d48b37b..c52e5ea 100644
--- a/net/core/flow_dissector.c
+++ b/net/core/flow_dissector.c
@@ -822,8 +822,10 @@ static void __skb_flow_bpf_to_target(const struct bpf_flow_keys *flow_keys,
key_addrs = skb_flow_dissector_target(flow_dissector,
FLOW_DISSECTOR_KEY_IPV6_ADDRS,
target_container);
- memcpy(&key_addrs->v6addrs, &flow_keys->ipv6_src,
- sizeof(key_addrs->v6addrs));
+ memcpy(&key_addrs->v6addrs.src, &flow_keys->ipv6_src,
+ sizeof(key_addrs->v6addrs.src));
+ memcpy(&key_addrs->v6addrs.dst, &flow_keys->ipv6_dst,
+ sizeof(key_addrs->v6addrs.dst));
key_control->addr_type = FLOW_DISSECTOR_KEY_IPV6_ADDRS;
}
diff --git a/net/core/neighbour.c b/net/core/neighbour.c
index 2fe4bbb..a18c297 100644
--- a/net/core/neighbour.c
+++ b/net/core/neighbour.c
@@ -132,6 +132,9 @@ static void neigh_update_gc_list(struct neighbour *n)
write_lock_bh(&n->tbl->lock);
write_lock(&n->lock);
+ if (n->dead)
+ goto out;
+
/* remove from the gc list if new state is permanent or if neighbor
* is externally learned; otherwise entry should be on the gc list
*/
@@ -148,6 +151,7 @@ static void neigh_update_gc_list(struct neighbour *n)
atomic_inc(&n->tbl->gc_entries);
}
+out:
write_unlock(&n->lock);
write_unlock_bh(&n->tbl->lock);
}
@@ -1380,7 +1384,7 @@ static int __neigh_update(struct neighbour *neigh, const u8 *lladdr,
* we can reinject the packet there.
*/
n2 = NULL;
- if (dst) {
+ if (dst && dst->obsolete != DST_OBSOLETE_DEAD) {
n2 = dst_neigh_lookup_skb(dst, skb);
if (n2)
n1 = n2;
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index ef98372..08fbf40 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -172,8 +172,10 @@ static void page_pool_dma_sync_for_device(struct page_pool *pool,
struct page *page,
unsigned int dma_sync_size)
{
+ dma_addr_t dma_addr = page_pool_get_dma_addr(page);
+
dma_sync_size = min(dma_sync_size, pool->p.max_len);
- dma_sync_single_range_for_device(pool->p.dev, page->dma_addr,
+ dma_sync_single_range_for_device(pool->p.dev, dma_addr,
pool->p.offset, dma_sync_size,
pool->p.dma_dir);
}
@@ -224,7 +226,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
put_page(page);
return NULL;
}
- page->dma_addr = dma;
+ page_pool_set_dma_addr(page, dma);
if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
@@ -292,13 +294,13 @@ void page_pool_release_page(struct page_pool *pool, struct page *page)
*/
goto skip_dma_unmap;
- dma = page->dma_addr;
+ dma = page_pool_get_dma_addr(page);
- /* When page is unmapped, it cannot be returned our pool */
+ /* When page is unmapped, it cannot be returned to our pool */
dma_unmap_page_attrs(pool->p.dev, dma,
PAGE_SIZE << pool->p.order, pool->p.dma_dir,
DMA_ATTR_SKIP_CPU_SYNC);
- page->dma_addr = 0;
+ page_pool_set_dma_addr(page, 0);
skip_dma_unmap:
/* This may be the last page returned, releasing the pool, so
* it is not safe to reference pool afterwards.
diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 25cdbb2..923a1d0 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -488,6 +488,7 @@ static int sk_psock_skb_ingress_self(struct sk_psock *psock, struct sk_buff *skb
if (unlikely(!msg))
return -EAGAIN;
sk_msg_init(msg);
+ skb_set_owner_r(skb, sk);
return sk_psock_skb_ingress_enqueue(skb, psock, sk, msg);
}
@@ -791,7 +792,6 @@ static void sk_psock_tls_verdict_apply(struct sk_buff *skb, struct sock *sk, int
{
switch (verdict) {
case __SK_REDIRECT:
- skb_set_owner_r(skb, sk);
sk_psock_skb_redirect(skb);
break;
case __SK_PASS:
@@ -809,10 +809,6 @@ int sk_psock_tls_strp_read(struct sk_psock *psock, struct sk_buff *skb)
rcu_read_lock();
prog = READ_ONCE(psock->progs.skb_verdict);
if (likely(prog)) {
- /* We skip full set_owner_r here because if we do a SK_PASS
- * or SK_DROP we can skip skb memory accounting and use the
- * TLS context.
- */
skb->sk = psock->sk;
tcp_skb_bpf_redirect_clear(skb);
ret = sk_psock_bpf_run(psock, prog, skb);
@@ -881,12 +877,13 @@ static void sk_psock_strp_read(struct strparser *strp, struct sk_buff *skb)
kfree_skb(skb);
goto out;
}
- skb_set_owner_r(skb, sk);
prog = READ_ONCE(psock->progs.skb_verdict);
if (likely(prog)) {
+ skb->sk = sk;
tcp_skb_bpf_redirect_clear(skb);
ret = sk_psock_bpf_run(psock, prog, skb);
ret = sk_psock_map_verd(ret, tcp_skb_bpf_redirect_fetch(skb));
+ skb->sk = NULL;
}
sk_psock_verdict_apply(psock, skb, ret);
out:
@@ -957,12 +954,13 @@ static int sk_psock_verdict_recv(read_descriptor_t *desc, struct sk_buff *skb,
kfree_skb(skb);
goto out;
}
- skb_set_owner_r(skb, sk);
prog = READ_ONCE(psock->progs.skb_verdict);
if (likely(prog)) {
+ skb->sk = sk;
tcp_skb_bpf_redirect_clear(skb);
ret = sk_psock_bpf_run(psock, prog, skb);
ret = sk_psock_map_verd(ret, tcp_skb_bpf_redirect_fetch(skb));
+ skb->sk = NULL;
}
sk_psock_verdict_apply(psock, skb, ret);
out:
diff --git a/net/core/sock.c b/net/core/sock.c
index 727ea1c..dee29f4 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -2099,16 +2099,10 @@ void skb_orphan_partial(struct sk_buff *skb)
if (skb_is_tcp_pure_ack(skb))
return;
- if (can_skb_orphan_partial(skb)) {
- struct sock *sk = skb->sk;
+ if (can_skb_orphan_partial(skb) && skb_set_owner_sk_safe(skb, skb->sk))
+ return;
- if (refcount_inc_not_zero(&sk->sk_refcnt)) {
- WARN_ON(refcount_sub_and_test(skb->truesize, &sk->sk_wmem_alloc));
- skb->destructor = sock_efree;
- }
- } else {
- skb_orphan(skb);
- }
+ skb_orphan(skb);
}
EXPORT_SYMBOL(skb_orphan_partial);
diff --git a/net/core/xdp.c b/net/core/xdp.c
index d900ceb..b8d7fa4 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -349,7 +349,8 @@ static void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct,
/* mem->id is valid, checked in xdp_rxq_info_reg_mem_model() */
xa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params);
page = virt_to_head_page(data);
- napi_direct &= !xdp_return_frame_no_direct();
+ if (napi_direct && xdp_return_frame_no_direct())
+ napi_direct = false;
page_pool_put_full_page(xa->page_pool, page, napi_direct);
rcu_read_unlock();
break;
diff --git a/net/dsa/dsa2.c b/net/dsa/dsa2.c
index a04fd63..3ada338 100644
--- a/net/dsa/dsa2.c
+++ b/net/dsa/dsa2.c
@@ -533,8 +533,14 @@ static int dsa_tree_setup_switches(struct dsa_switch_tree *dst)
list_for_each_entry(dp, &dst->ports, list) {
err = dsa_port_setup(dp);
- if (err)
+ if (err) {
+ dsa_port_devlink_teardown(dp);
+ dp->type = DSA_PORT_TYPE_UNUSED;
+ err = dsa_port_devlink_setup(dp);
+ if (err)
+ goto teardown;
continue;
+ }
}
return 0;
diff --git a/net/dsa/master.c b/net/dsa/master.c
index 3a44da3..45bd627 100644
--- a/net/dsa/master.c
+++ b/net/dsa/master.c
@@ -147,8 +147,7 @@ static void dsa_master_get_strings(struct net_device *dev, uint32_t stringset,
struct dsa_switch *ds = cpu_dp->ds;
int port = cpu_dp->index;
int len = ETH_GSTRING_LEN;
- int mcount = 0, count;
- unsigned int i;
+ int mcount = 0, count, i;
uint8_t pfx[4];
uint8_t *ndata;
@@ -178,6 +177,8 @@ static void dsa_master_get_strings(struct net_device *dev, uint32_t stringset,
*/
ds->ops->get_strings(ds, port, stringset, ndata);
count = ds->ops->get_sset_count(ds, port, stringset);
+ if (count < 0)
+ return;
for (i = 0; i < count; i++) {
memmove(ndata + (i * len + sizeof(pfx)),
ndata + i * len, len - sizeof(pfx));
diff --git a/net/dsa/slave.c b/net/dsa/slave.c
index c6806ee..9281c9c 100644
--- a/net/dsa/slave.c
+++ b/net/dsa/slave.c
@@ -746,13 +746,15 @@ static int dsa_slave_get_sset_count(struct net_device *dev, int sset)
struct dsa_switch *ds = dp->ds;
if (sset == ETH_SS_STATS) {
- int count;
+ int count = 0;
- count = 4;
- if (ds->ops->get_sset_count)
- count += ds->ops->get_sset_count(ds, dp->index, sset);
+ if (ds->ops->get_sset_count) {
+ count = ds->ops->get_sset_count(ds, dp->index, sset);
+ if (count < 0)
+ return count;
+ }
- return count;
+ return count + 4;
}
return -EOPNOTSUPP;
diff --git a/net/ethtool/eee.c b/net/ethtool/eee.c
index 901b7de..e10bfcc 100644
--- a/net/ethtool/eee.c
+++ b/net/ethtool/eee.c
@@ -169,8 +169,8 @@ int ethnl_set_eee(struct sk_buff *skb, struct genl_info *info)
ethnl_update_bool32(&eee.eee_enabled, tb[ETHTOOL_A_EEE_ENABLED], &mod);
ethnl_update_bool32(&eee.tx_lpi_enabled,
tb[ETHTOOL_A_EEE_TX_LPI_ENABLED], &mod);
- ethnl_update_bool32(&eee.tx_lpi_timer, tb[ETHTOOL_A_EEE_TX_LPI_TIMER],
- &mod);
+ ethnl_update_u32(&eee.tx_lpi_timer, tb[ETHTOOL_A_EEE_TX_LPI_TIMER],
+ &mod);
ret = 0;
if (!mod)
goto out_ops;
diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c
index ec2cd7a..2917af3 100644
--- a/net/ethtool/ioctl.c
+++ b/net/ethtool/ioctl.c
@@ -489,7 +489,7 @@ store_link_ksettings_for_user(void __user *to,
{
struct ethtool_link_usettings link_usettings;
- memcpy(&link_usettings.base, &from->base, sizeof(link_usettings));
+ memcpy(&link_usettings, from, sizeof(link_usettings));
bitmap_to_arr32(link_usettings.link_modes.supported,
from->link_modes.supported,
__ETHTOOL_LINK_MODE_MASK_NBITS);
diff --git a/net/ethtool/netlink.c b/net/ethtool/netlink.c
index 50d3c88..25a5508 100644
--- a/net/ethtool/netlink.c
+++ b/net/ethtool/netlink.c
@@ -384,7 +384,8 @@ static int ethnl_default_dump_one(struct sk_buff *skb, struct net_device *dev,
int ret;
ehdr = genlmsg_put(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq,
- &ethtool_genl_family, 0, ctx->ops->reply_cmd);
+ &ethtool_genl_family, NLM_F_MULTI,
+ ctx->ops->reply_cmd);
if (!ehdr)
return -EMSGSIZE;
diff --git a/net/ethtool/pause.c b/net/ethtool/pause.c
index 09998dc..d4ac027 100644
--- a/net/ethtool/pause.c
+++ b/net/ethtool/pause.c
@@ -38,16 +38,16 @@ static int pause_prepare_data(const struct ethnl_req_info *req_base,
if (!dev->ethtool_ops->get_pauseparam)
return -EOPNOTSUPP;
+ ethtool_stats_init((u64 *)&data->pausestat,
+ sizeof(data->pausestat) / 8);
+
ret = ethnl_ops_begin(dev);
if (ret < 0)
return ret;
dev->ethtool_ops->get_pauseparam(dev, &data->pauseparam);
if (req_base->flags & ETHTOOL_FLAG_STATS &&
- dev->ethtool_ops->get_pause_stats) {
- ethtool_stats_init((u64 *)&data->pausestat,
- sizeof(data->pausestat) / 8);
+ dev->ethtool_ops->get_pause_stats)
dev->ethtool_ops->get_pause_stats(dev, &data->pausestat);
- }
ethnl_ops_complete(dev);
return 0;
diff --git a/net/hsr/hsr_device.c b/net/hsr/hsr_device.c
index ab953a1..fec1b01 100644
--- a/net/hsr/hsr_device.c
+++ b/net/hsr/hsr_device.c
@@ -217,6 +217,8 @@ static netdev_tx_t hsr_dev_xmit(struct sk_buff *skb, struct net_device *dev)
master = hsr_port_get_hsr(hsr, HSR_PT_MASTER);
if (master) {
skb->dev = master->dev;
+ skb_reset_mac_header(skb);
+ skb_reset_mac_len(skb);
hsr_forward_skb(skb, master);
} else {
atomic_long_inc(&dev->tx_dropped);
@@ -260,6 +262,7 @@ static struct sk_buff *hsr_init_skb(struct hsr_port *master, u16 proto)
goto out;
skb_reset_mac_header(skb);
+ skb_reset_mac_len(skb);
skb_reset_network_header(skb);
skb_reset_transport_header(skb);
diff --git a/net/hsr/hsr_forward.c b/net/hsr/hsr_forward.c
index cadfccd..baf4765 100644
--- a/net/hsr/hsr_forward.c
+++ b/net/hsr/hsr_forward.c
@@ -451,25 +451,31 @@ static void handle_std_frame(struct sk_buff *skb,
}
}
-void hsr_fill_frame_info(__be16 proto, struct sk_buff *skb,
- struct hsr_frame_info *frame)
+int hsr_fill_frame_info(__be16 proto, struct sk_buff *skb,
+ struct hsr_frame_info *frame)
{
if (proto == htons(ETH_P_PRP) ||
proto == htons(ETH_P_HSR)) {
+ /* Check if skb contains hsr_ethhdr */
+ if (skb->mac_len < sizeof(struct hsr_ethhdr))
+ return -EINVAL;
+
/* HSR tagged frame :- Data or Supervision */
frame->skb_std = NULL;
frame->skb_prp = NULL;
frame->skb_hsr = skb;
frame->sequence_nr = hsr_get_skb_sequence_nr(skb);
- return;
+ return 0;
}
/* Standard frame or PRP from master port */
handle_std_frame(skb, frame);
+
+ return 0;
}
-void prp_fill_frame_info(__be16 proto, struct sk_buff *skb,
- struct hsr_frame_info *frame)
+int prp_fill_frame_info(__be16 proto, struct sk_buff *skb,
+ struct hsr_frame_info *frame)
{
/* Supervision frame */
struct prp_rct *rct = skb_get_PRP_rct(skb);
@@ -480,9 +486,11 @@ void prp_fill_frame_info(__be16 proto, struct sk_buff *skb,
frame->skb_std = NULL;
frame->skb_prp = skb;
frame->sequence_nr = prp_get_skb_sequence_nr(rct);
- return;
+ return 0;
}
handle_std_frame(skb, frame);
+
+ return 0;
}
static int fill_frame_info(struct hsr_frame_info *frame,
@@ -492,6 +500,11 @@ static int fill_frame_info(struct hsr_frame_info *frame,
struct hsr_vlan_ethhdr *vlan_hdr;
struct ethhdr *ethhdr;
__be16 proto;
+ int ret;
+
+ /* Check if skb contains ethhdr */
+ if (skb->mac_len < sizeof(struct ethhdr))
+ return -EINVAL;
memset(frame, 0, sizeof(*frame));
frame->is_supervision = is_supervision_frame(port->hsr, skb);
@@ -517,7 +530,10 @@ static int fill_frame_info(struct hsr_frame_info *frame,
frame->is_from_san = false;
frame->port_rcv = port;
- hsr->proto_ops->fill_frame_info(proto, skb, frame);
+ ret = hsr->proto_ops->fill_frame_info(proto, skb, frame);
+ if (ret)
+ return ret;
+
check_local_dest(port->hsr, skb, frame);
return 0;
@@ -528,12 +544,6 @@ void hsr_forward_skb(struct sk_buff *skb, struct hsr_port *port)
{
struct hsr_frame_info frame;
- if (skb_mac_header(skb) != skb->data) {
- WARN_ONCE(1, "%s:%d: Malformed frame (port_src %s)\n",
- __FILE__, __LINE__, port->dev->name);
- goto out_drop;
- }
-
if (fill_frame_info(&frame, skb, port) < 0)
goto out_drop;
diff --git a/net/hsr/hsr_forward.h b/net/hsr/hsr_forward.h
index 618140d..008f457 100644
--- a/net/hsr/hsr_forward.h
+++ b/net/hsr/hsr_forward.h
@@ -23,8 +23,8 @@ struct sk_buff *hsr_get_untagged_frame(struct hsr_frame_info *frame,
struct sk_buff *prp_get_untagged_frame(struct hsr_frame_info *frame,
struct hsr_port *port);
bool prp_drop_frame(struct hsr_frame_info *frame, struct hsr_port *port);
-void prp_fill_frame_info(__be16 proto, struct sk_buff *skb,
- struct hsr_frame_info *frame);
-void hsr_fill_frame_info(__be16 proto, struct sk_buff *skb,
- struct hsr_frame_info *frame);
+int prp_fill_frame_info(__be16 proto, struct sk_buff *skb,
+ struct hsr_frame_info *frame);
+int hsr_fill_frame_info(__be16 proto, struct sk_buff *skb,
+ struct hsr_frame_info *frame);
#endif /* __HSR_FORWARD_H */
diff --git a/net/hsr/hsr_main.h b/net/hsr/hsr_main.h
index f79ca55..9a25a5d 100644
--- a/net/hsr/hsr_main.h
+++ b/net/hsr/hsr_main.h
@@ -192,8 +192,8 @@ struct hsr_proto_ops {
struct hsr_port *port);
struct sk_buff * (*create_tagged_frame)(struct hsr_frame_info *frame,
struct hsr_port *port);
- void (*fill_frame_info)(__be16 proto, struct sk_buff *skb,
- struct hsr_frame_info *frame);
+ int (*fill_frame_info)(__be16 proto, struct sk_buff *skb,
+ struct hsr_frame_info *frame);
bool (*invalid_dan_ingress_frame)(__be16 protocol);
void (*update_san_info)(struct hsr_node *node, bool is_sup);
};
diff --git a/net/hsr/hsr_slave.c b/net/hsr/hsr_slave.c
index 36d5fcf..aecc05a 100644
--- a/net/hsr/hsr_slave.c
+++ b/net/hsr/hsr_slave.c
@@ -58,12 +58,11 @@ static rx_handler_result_t hsr_handle_frame(struct sk_buff **pskb)
goto finish_pass;
skb_push(skb, ETH_HLEN);
-
- if (skb_mac_header(skb) != skb->data) {
- WARN_ONCE(1, "%s:%d: Malformed frame at source port %s)\n",
- __func__, __LINE__, port->dev->name);
- goto finish_consume;
- }
+ skb_reset_mac_header(skb);
+ if ((!hsr->prot_version && protocol == htons(ETH_P_PRP)) ||
+ protocol == htons(ETH_P_HSR))
+ skb_set_network_header(skb, ETH_HLEN + HSR_HLEN);
+ skb_reset_mac_len(skb);
hsr_forward_skb(skb, port);
diff --git a/net/ieee802154/nl-mac.c b/net/ieee802154/nl-mac.c
index 6d091e4..d19c40c 100644
--- a/net/ieee802154/nl-mac.c
+++ b/net/ieee802154/nl-mac.c
@@ -551,9 +551,7 @@ ieee802154_llsec_parse_key_id(struct genl_info *info,
desc->mode = nla_get_u8(info->attrs[IEEE802154_ATTR_LLSEC_KEY_MODE]);
if (desc->mode == IEEE802154_SCF_KEY_IMPLICIT) {
- if (!info->attrs[IEEE802154_ATTR_PAN_ID] &&
- !(info->attrs[IEEE802154_ATTR_SHORT_ADDR] ||
- info->attrs[IEEE802154_ATTR_HW_ADDR]))
+ if (!info->attrs[IEEE802154_ATTR_PAN_ID])
return -EINVAL;
desc->device_addr.pan_id = nla_get_shortaddr(info->attrs[IEEE802154_ATTR_PAN_ID]);
@@ -562,6 +560,9 @@ ieee802154_llsec_parse_key_id(struct genl_info *info,
desc->device_addr.mode = IEEE802154_ADDR_SHORT;
desc->device_addr.short_addr = nla_get_shortaddr(info->attrs[IEEE802154_ATTR_SHORT_ADDR]);
} else {
+ if (!info->attrs[IEEE802154_ATTR_HW_ADDR])
+ return -EINVAL;
+
desc->device_addr.mode = IEEE802154_ADDR_LONG;
desc->device_addr.extended_addr = nla_get_hwaddr(info->attrs[IEEE802154_ATTR_HW_ADDR]);
}
diff --git a/net/ieee802154/nl802154.c b/net/ieee802154/nl802154.c
index 7c5a1aa..f0b47d4 100644
--- a/net/ieee802154/nl802154.c
+++ b/net/ieee802154/nl802154.c
@@ -820,8 +820,13 @@ nl802154_send_iface(struct sk_buff *msg, u32 portid, u32 seq, int flags,
goto nla_put_failure;
#ifdef CONFIG_IEEE802154_NL802154_EXPERIMENTAL
+ if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR)
+ goto out;
+
if (nl802154_get_llsec_params(msg, rdev, wpan_dev) < 0)
goto nla_put_failure;
+
+out:
#endif /* CONFIG_IEEE802154_NL802154_EXPERIMENTAL */
genlmsg_end(msg, hdr);
@@ -1384,6 +1389,9 @@ static int nl802154_set_llsec_params(struct sk_buff *skb,
u32 changed = 0;
int ret;
+ if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR)
+ return -EOPNOTSUPP;
+
if (info->attrs[NL802154_ATTR_SEC_ENABLED]) {
u8 enabled;
@@ -1490,6 +1498,11 @@ nl802154_dump_llsec_key(struct sk_buff *skb, struct netlink_callback *cb)
if (err)
return err;
+ if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) {
+ err = skb->len;
+ goto out_err;
+ }
+
if (!wpan_dev->netdev) {
err = -EINVAL;
goto out_err;
@@ -1544,7 +1557,11 @@ static int nl802154_add_llsec_key(struct sk_buff *skb, struct genl_info *info)
struct ieee802154_llsec_key_id id = { };
u32 commands[NL802154_CMD_FRAME_NR_IDS / 32] = { };
- if (nla_parse_nested_deprecated(attrs, NL802154_KEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_KEY], nl802154_key_policy, info->extack))
+ if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR)
+ return -EOPNOTSUPP;
+
+ if (!info->attrs[NL802154_ATTR_SEC_KEY] ||
+ nla_parse_nested_deprecated(attrs, NL802154_KEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_KEY], nl802154_key_policy, info->extack))
return -EINVAL;
if (!attrs[NL802154_KEY_ATTR_USAGE_FRAMES] ||
@@ -1592,7 +1609,11 @@ static int nl802154_del_llsec_key(struct sk_buff *skb, struct genl_info *info)
struct nlattr *attrs[NL802154_KEY_ATTR_MAX + 1];
struct ieee802154_llsec_key_id id;
- if (nla_parse_nested_deprecated(attrs, NL802154_KEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_KEY], nl802154_key_policy, info->extack))
+ if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR)
+ return -EOPNOTSUPP;
+
+ if (!info->attrs[NL802154_ATTR_SEC_KEY] ||
+ nla_parse_nested_deprecated(attrs, NL802154_KEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_KEY], nl802154_key_policy, info->extack))
return -EINVAL;
if (ieee802154_llsec_parse_key_id(attrs[NL802154_KEY_ATTR_ID], &id) < 0)
@@ -1656,6 +1677,11 @@ nl802154_dump_llsec_dev(struct sk_buff *skb, struct netlink_callback *cb)
if (err)
return err;
+ if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) {
+ err = skb->len;
+ goto out_err;
+ }
+
if (!wpan_dev->netdev) {
err = -EINVAL;
goto out_err;
@@ -1742,6 +1768,9 @@ static int nl802154_add_llsec_dev(struct sk_buff *skb, struct genl_info *info)
struct wpan_dev *wpan_dev = dev->ieee802154_ptr;
struct ieee802154_llsec_device dev_desc;
+ if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR)
+ return -EOPNOTSUPP;
+
if (ieee802154_llsec_parse_device(info->attrs[NL802154_ATTR_SEC_DEVICE],
&dev_desc) < 0)
return -EINVAL;
@@ -1757,7 +1786,11 @@ static int nl802154_del_llsec_dev(struct sk_buff *skb, struct genl_info *info)
struct nlattr *attrs[NL802154_DEV_ATTR_MAX + 1];
__le64 extended_addr;
- if (nla_parse_nested_deprecated(attrs, NL802154_DEV_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_DEVICE], nl802154_dev_policy, info->extack))
+ if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR)
+ return -EOPNOTSUPP;
+
+ if (!info->attrs[NL802154_ATTR_SEC_DEVICE] ||
+ nla_parse_nested_deprecated(attrs, NL802154_DEV_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_DEVICE], nl802154_dev_policy, info->extack))
return -EINVAL;
if (!attrs[NL802154_DEV_ATTR_EXTENDED_ADDR])
@@ -1825,6 +1858,11 @@ nl802154_dump_llsec_devkey(struct sk_buff *skb, struct netlink_callback *cb)
if (err)
return err;
+ if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) {
+ err = skb->len;
+ goto out_err;
+ }
+
if (!wpan_dev->netdev) {
err = -EINVAL;
goto out_err;
@@ -1882,6 +1920,9 @@ static int nl802154_add_llsec_devkey(struct sk_buff *skb, struct genl_info *info
struct ieee802154_llsec_device_key key;
__le64 extended_addr;
+ if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR)
+ return -EOPNOTSUPP;
+
if (!info->attrs[NL802154_ATTR_SEC_DEVKEY] ||
nla_parse_nested_deprecated(attrs, NL802154_DEVKEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_DEVKEY], nl802154_devkey_policy, info->extack) < 0)
return -EINVAL;
@@ -1913,7 +1954,11 @@ static int nl802154_del_llsec_devkey(struct sk_buff *skb, struct genl_info *info
struct ieee802154_llsec_device_key key;
__le64 extended_addr;
- if (nla_parse_nested_deprecated(attrs, NL802154_DEVKEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_DEVKEY], nl802154_devkey_policy, info->extack))
+ if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR)
+ return -EOPNOTSUPP;
+
+ if (!info->attrs[NL802154_ATTR_SEC_DEVKEY] ||
+ nla_parse_nested_deprecated(attrs, NL802154_DEVKEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_DEVKEY], nl802154_devkey_policy, info->extack))
return -EINVAL;
if (!attrs[NL802154_DEVKEY_ATTR_EXTENDED_ADDR])
@@ -1986,6 +2031,11 @@ nl802154_dump_llsec_seclevel(struct sk_buff *skb, struct netlink_callback *cb)
if (err)
return err;
+ if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) {
+ err = skb->len;
+ goto out_err;
+ }
+
if (!wpan_dev->netdev) {
err = -EINVAL;
goto out_err;
@@ -2070,6 +2120,9 @@ static int nl802154_add_llsec_seclevel(struct sk_buff *skb,
struct wpan_dev *wpan_dev = dev->ieee802154_ptr;
struct ieee802154_llsec_seclevel sl;
+ if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR)
+ return -EOPNOTSUPP;
+
if (llsec_parse_seclevel(info->attrs[NL802154_ATTR_SEC_LEVEL],
&sl) < 0)
return -EINVAL;
@@ -2085,6 +2138,9 @@ static int nl802154_del_llsec_seclevel(struct sk_buff *skb,
struct wpan_dev *wpan_dev = dev->ieee802154_ptr;
struct ieee802154_llsec_seclevel sl;
+ if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR)
+ return -EOPNOTSUPP;
+
if (!info->attrs[NL802154_ATTR_SEC_LEVEL] ||
llsec_parse_seclevel(info->attrs[NL802154_ATTR_SEC_LEVEL],
&sl) < 0)
diff --git a/net/ipv4/ah4.c b/net/ipv4/ah4.c
index d99e1be..36ed85b 100644
--- a/net/ipv4/ah4.c
+++ b/net/ipv4/ah4.c
@@ -141,7 +141,7 @@ static void ah_output_done(struct crypto_async_request *base, int err)
}
kfree(AH_SKB_CB(skb)->tmp);
- xfrm_output_resume(skb, err);
+ xfrm_output_resume(skb->sk, skb, err);
}
static int ah_output(struct xfrm_state *x, struct sk_buff *skb)
diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
index a3271ec..4b834bb 100644
--- a/net/ipv4/esp4.c
+++ b/net/ipv4/esp4.c
@@ -279,7 +279,7 @@ static void esp_output_done(struct crypto_async_request *base, int err)
x->encap && x->encap->encap_type == TCP_ENCAP_ESPINTCP)
esp_output_tail_tcp(x, skb);
else
- xfrm_output_resume(skb, err);
+ xfrm_output_resume(skb->sk, skb, err);
}
}
diff --git a/net/ipv4/esp4_offload.c b/net/ipv4/esp4_offload.c
index 5bda5ae..5aa7344 100644
--- a/net/ipv4/esp4_offload.c
+++ b/net/ipv4/esp4_offload.c
@@ -217,10 +217,12 @@ static struct sk_buff *esp4_gso_segment(struct sk_buff *skb,
if ((!(skb->dev->gso_partial_features & NETIF_F_HW_ESP) &&
!(features & NETIF_F_HW_ESP)) || x->xso.dev != skb->dev)
- esp_features = features & ~(NETIF_F_SG | NETIF_F_CSUM_MASK);
+ esp_features = features & ~(NETIF_F_SG | NETIF_F_CSUM_MASK |
+ NETIF_F_SCTP_CRC);
else if (!(features & NETIF_F_HW_ESP_TX_CSUM) &&
!(skb->dev->gso_partial_features & NETIF_F_HW_ESP_TX_CSUM))
- esp_features = features & ~NETIF_F_CSUM_MASK;
+ esp_features = features & ~(NETIF_F_CSUM_MASK |
+ NETIF_F_SCTP_CRC);
xo->flags |= XFRM_GSO_SEGMENT;
@@ -312,8 +314,17 @@ static int esp_xmit(struct xfrm_state *x, struct sk_buff *skb, netdev_features_
ip_hdr(skb)->tot_len = htons(skb->len);
ip_send_check(ip_hdr(skb));
- if (hw_offload)
+ if (hw_offload) {
+ if (!skb_ext_add(skb, SKB_EXT_SEC_PATH))
+ return -ENOMEM;
+
+ xo = xfrm_offload(skb);
+ if (!xo)
+ return -EINVAL;
+
+ xo->flags |= XFRM_XMIT;
return 0;
+ }
err = esp_output_tail(x, skb, &esp);
if (err)
diff --git a/net/ipv4/netfilter/arp_tables.c b/net/ipv4/netfilter/arp_tables.c
index d1e04d2..d6d45d8 100644
--- a/net/ipv4/netfilter/arp_tables.c
+++ b/net/ipv4/netfilter/arp_tables.c
@@ -1193,6 +1193,8 @@ static int translate_compat_table(struct net *net,
if (!newinfo)
goto out_unlock;
+ memset(newinfo->entries, 0, size);
+
newinfo->number = compatr->num_entries;
for (i = 0; i < NF_ARP_NUMHOOKS; i++) {
newinfo->hook_entry[i] = compatr->hook_entry[i];
@@ -1539,10 +1541,15 @@ int arpt_register_table(struct net *net,
return ret;
}
-void arpt_unregister_table(struct net *net, struct xt_table *table,
- const struct nf_hook_ops *ops)
+void arpt_unregister_table_pre_exit(struct net *net, struct xt_table *table,
+ const struct nf_hook_ops *ops)
{
nf_unregister_net_hooks(net, ops, hweight32(table->valid_hooks));
+}
+EXPORT_SYMBOL(arpt_unregister_table_pre_exit);
+
+void arpt_unregister_table(struct net *net, struct xt_table *table)
+{
__arpt_unregister_table(net, table);
}
diff --git a/net/ipv4/netfilter/arptable_filter.c b/net/ipv4/netfilter/arptable_filter.c
index c216b9a..6c300ba5 100644
--- a/net/ipv4/netfilter/arptable_filter.c
+++ b/net/ipv4/netfilter/arptable_filter.c
@@ -56,16 +56,24 @@ static int __net_init arptable_filter_table_init(struct net *net)
return err;
}
+static void __net_exit arptable_filter_net_pre_exit(struct net *net)
+{
+ if (net->ipv4.arptable_filter)
+ arpt_unregister_table_pre_exit(net, net->ipv4.arptable_filter,
+ arpfilter_ops);
+}
+
static void __net_exit arptable_filter_net_exit(struct net *net)
{
if (!net->ipv4.arptable_filter)
return;
- arpt_unregister_table(net, net->ipv4.arptable_filter, arpfilter_ops);
+ arpt_unregister_table(net, net->ipv4.arptable_filter);
net->ipv4.arptable_filter = NULL;
}
static struct pernet_operations arptable_filter_net_ops = {
.exit = arptable_filter_net_exit,
+ .pre_exit = arptable_filter_net_pre_exit,
};
static int __init arptable_filter_init(void)
diff --git a/net/ipv4/netfilter/ip_tables.c b/net/ipv4/netfilter/ip_tables.c
index f15bc21..f77ea0d 100644
--- a/net/ipv4/netfilter/ip_tables.c
+++ b/net/ipv4/netfilter/ip_tables.c
@@ -1428,6 +1428,8 @@ translate_compat_table(struct net *net,
if (!newinfo)
goto out_unlock;
+ memset(newinfo->entries, 0, size);
+
newinfo->number = compatr->num_entries;
for (i = 0; i < NF_INET_NUMHOOKS; i++) {
newinfo->hook_entry[i] = compatr->hook_entry[i];
diff --git a/net/ipv4/route.c b/net/ipv4/route.c
index 50a6d93..798dc85b 100644
--- a/net/ipv4/route.c
+++ b/net/ipv4/route.c
@@ -66,6 +66,7 @@
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/mm.h>
+#include <linux/memblock.h>
#include <linux/string.h>
#include <linux/socket.h>
#include <linux/sockios.h>
@@ -476,8 +477,10 @@ static void ipv4_confirm_neigh(const struct dst_entry *dst, const void *daddr)
__ipv4_confirm_neigh(dev, *(__force u32 *)pkey);
}
-#define IP_IDENTS_SZ 2048u
-
+/* Hash tables of size 2048..262144 depending on RAM size.
+ * Each bucket uses 8 bytes.
+ */
+static u32 ip_idents_mask __read_mostly;
static atomic_t *ip_idents __read_mostly;
static u32 *ip_tstamps __read_mostly;
@@ -487,12 +490,16 @@ static u32 *ip_tstamps __read_mostly;
*/
u32 ip_idents_reserve(u32 hash, int segs)
{
- u32 *p_tstamp = ip_tstamps + hash % IP_IDENTS_SZ;
- atomic_t *p_id = ip_idents + hash % IP_IDENTS_SZ;
- u32 old = READ_ONCE(*p_tstamp);
- u32 now = (u32)jiffies;
+ u32 bucket, old, now = (u32)jiffies;
+ atomic_t *p_id;
+ u32 *p_tstamp;
u32 delta = 0;
+ bucket = hash & ip_idents_mask;
+ p_tstamp = ip_tstamps + bucket;
+ p_id = ip_idents + bucket;
+ old = READ_ONCE(*p_tstamp);
+
if (old != now && cmpxchg(p_tstamp, old, now) == old)
delta = prandom_u32_max(now - old);
@@ -3544,18 +3551,25 @@ struct ip_rt_acct __percpu *ip_rt_acct __read_mostly;
int __init ip_rt_init(void)
{
+ void *idents_hash;
int cpu;
- ip_idents = kmalloc_array(IP_IDENTS_SZ, sizeof(*ip_idents),
- GFP_KERNEL);
- if (!ip_idents)
- panic("IP: failed to allocate ip_idents\n");
+ /* For modern hosts, this will use 2 MB of memory */
+ idents_hash = alloc_large_system_hash("IP idents",
+ sizeof(*ip_idents) + sizeof(*ip_tstamps),
+ 0,
+ 16, /* one bucket per 64 KB */
+ HASH_ZERO,
+ NULL,
+ &ip_idents_mask,
+ 2048,
+ 256*1024);
- prandom_bytes(ip_idents, IP_IDENTS_SZ * sizeof(*ip_idents));
+ ip_idents = idents_hash;
- ip_tstamps = kcalloc(IP_IDENTS_SZ, sizeof(*ip_tstamps), GFP_KERNEL);
- if (!ip_tstamps)
- panic("IP: failed to allocate ip_tstamps\n");
+ prandom_bytes(ip_idents, (ip_idents_mask + 1) * sizeof(*ip_idents));
+
+ ip_tstamps = idents_hash + (ip_idents_mask + 1) * sizeof(*ip_idents);
for_each_possible_cpu(cpu) {
struct uncached_list *ul = &per_cpu(rt_uncached_list, cpu);
diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c
index 3e5f4f2..0882980 100644
--- a/net/ipv4/sysctl_net_ipv4.c
+++ b/net/ipv4/sysctl_net_ipv4.c
@@ -1369,9 +1369,19 @@ static __net_init int ipv4_sysctl_init_net(struct net *net)
if (!table)
goto err_alloc;
- /* Update the variables to point into the current struct net */
- for (i = 0; i < ARRAY_SIZE(ipv4_net_table) - 1; i++)
- table[i].data += (void *)net - (void *)&init_net;
+ for (i = 0; i < ARRAY_SIZE(ipv4_net_table) - 1; i++) {
+ if (table[i].data) {
+ /* Update the variables to point into
+ * the current struct net
+ */
+ table[i].data += (void *)net - (void *)&init_net;
+ } else {
+ /* Entries without data pointer are global;
+ * Make them read-only in non-init_net ns
+ */
+ table[i].mode &= ~0222;
+ }
+ }
}
net->ipv4.ipv4_hdr = register_net_sysctl(net, "net/ipv4", table);
diff --git a/net/ipv4/tcp_cong.c b/net/ipv4/tcp_cong.c
index 563d016..db5831e 100644
--- a/net/ipv4/tcp_cong.c
+++ b/net/ipv4/tcp_cong.c
@@ -230,6 +230,10 @@ int tcp_set_default_congestion_control(struct net *net, const char *name)
ret = -ENOENT;
} else if (!bpf_try_module_get(ca, ca->owner)) {
ret = -EBUSY;
+ } else if (!net_eq(net, &init_net) &&
+ !(ca->flags & TCP_CONG_NON_RESTRICTED)) {
+ /* Only init netns can set default to a restricted algorithm */
+ ret = -EPERM;
} else {
prev = xchg(&net->ipv4.tcp_congestion_control, ca);
if (prev)
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index e37a2fa..9d28b27 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -2657,9 +2657,12 @@ int udp_lib_setsockopt(struct sock *sk, int level, int optname,
case UDP_GRO:
lock_sock(sk);
+
+ /* when enabling GRO, accept the related GSO packet type */
if (valbool)
udp_tunnel_encap_enable(sk->sk_socket);
up->gro_enabled = valbool;
+ up->accept_udp_l4 = valbool;
release_sock(sk);
break;
@@ -2747,6 +2750,10 @@ int udp_lib_getsockopt(struct sock *sk, int level, int optname,
val = up->gso_size;
break;
+ case UDP_GRO:
+ val = up->gro_enabled;
+ break;
+
/* The following two cannot be changed on UDP sockets, the return is
* always 0 (which corresponds to the full checksum coverage of UDP). */
case UDPLITE_SEND_CSCOV:
diff --git a/net/ipv6/ah6.c b/net/ipv6/ah6.c
index 440080d..080ee7f4 100644
--- a/net/ipv6/ah6.c
+++ b/net/ipv6/ah6.c
@@ -316,7 +316,7 @@ static void ah6_output_done(struct crypto_async_request *base, int err)
}
kfree(AH_SKB_CB(skb)->tmp);
- xfrm_output_resume(skb, err);
+ xfrm_output_resume(skb->sk, skb, err);
}
static int ah6_output(struct xfrm_state *x, struct sk_buff *skb)
diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
index 2b804fc..4071cb7 100644
--- a/net/ipv6/esp6.c
+++ b/net/ipv6/esp6.c
@@ -314,7 +314,7 @@ static void esp_output_done(struct crypto_async_request *base, int err)
x->encap && x->encap->encap_type == TCP_ENCAP_ESPINTCP)
esp_output_tail_tcp(x, skb);
else
- xfrm_output_resume(skb, err);
+ xfrm_output_resume(skb->sk, skb, err);
}
}
diff --git a/net/ipv6/esp6_offload.c b/net/ipv6/esp6_offload.c
index 1ca516f..4af56af 100644
--- a/net/ipv6/esp6_offload.c
+++ b/net/ipv6/esp6_offload.c
@@ -254,9 +254,11 @@ static struct sk_buff *esp6_gso_segment(struct sk_buff *skb,
skb->encap_hdr_csum = 1;
if (!(features & NETIF_F_HW_ESP) || x->xso.dev != skb->dev)
- esp_features = features & ~(NETIF_F_SG | NETIF_F_CSUM_MASK);
+ esp_features = features & ~(NETIF_F_SG | NETIF_F_CSUM_MASK |
+ NETIF_F_SCTP_CRC);
else if (!(features & NETIF_F_HW_ESP_TX_CSUM))
- esp_features = features & ~NETIF_F_CSUM_MASK;
+ esp_features = features & ~(NETIF_F_CSUM_MASK |
+ NETIF_F_SCTP_CRC);
xo->flags |= XFRM_GSO_SEGMENT;
@@ -346,8 +348,17 @@ static int esp6_xmit(struct xfrm_state *x, struct sk_buff *skb, netdev_features
ipv6_hdr(skb)->payload_len = htons(len);
- if (hw_offload)
+ if (hw_offload) {
+ if (!skb_ext_add(skb, SKB_EXT_SEC_PATH))
+ return -ENOMEM;
+
+ xo = xfrm_offload(skb);
+ if (!xo)
+ return -EINVAL;
+
+ xo->flags |= XFRM_XMIT;
return 0;
+ }
err = esp6_output_tail(x, skb, &esp);
if (err)
diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
index 640f71a..09fa49b 100644
--- a/net/ipv6/ip6_gre.c
+++ b/net/ipv6/ip6_gre.c
@@ -387,7 +387,6 @@ static struct ip6_tnl *ip6gre_tunnel_locate(struct net *net,
if (!(nt->parms.o_flags & TUNNEL_SEQ))
dev->features |= NETIF_F_LLTX;
- dev_hold(dev);
ip6gre_tunnel_link(ign, nt);
return nt;
@@ -1496,6 +1495,7 @@ static int ip6gre_tunnel_init_common(struct net_device *dev)
}
ip6gre_tnl_init_features(dev);
+ dev_hold(dev);
return 0;
cleanup_dst_cache_init:
@@ -1538,8 +1538,6 @@ static void ip6gre_fb_tunnel_init(struct net_device *dev)
strcpy(tunnel->parms.name, dev->name);
tunnel->hlen = sizeof(struct ipv6hdr) + 4;
-
- dev_hold(dev);
}
static struct inet6_protocol ip6gre_protocol __read_mostly = {
@@ -1889,6 +1887,7 @@ static int ip6erspan_tap_init(struct net_device *dev)
dev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
ip6erspan_tnl_link_config(tunnel, 1);
+ dev_hold(dev);
return 0;
cleanup_dst_cache_init:
@@ -1988,8 +1987,6 @@ static int ip6gre_newlink_common(struct net *src_net, struct net_device *dev,
if (tb[IFLA_MTU])
ip6_tnl_change_mtu(dev, nla_get_u32(tb[IFLA_MTU]));
- dev_hold(dev);
-
out:
return err;
}
diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
index 5d27b5c..42ca2d0 100644
--- a/net/ipv6/ip6_tunnel.c
+++ b/net/ipv6/ip6_tunnel.c
@@ -293,7 +293,6 @@ static int ip6_tnl_create2(struct net_device *dev)
strcpy(t->parms.name, dev->name);
- dev_hold(dev);
ip6_tnl_link(ip6n, t);
return 0;
@@ -1913,6 +1912,7 @@ ip6_tnl_dev_init_gen(struct net_device *dev)
dev->min_mtu = ETH_MIN_MTU;
dev->max_mtu = IP6_MAX_MTU - dev->hard_header_len;
+ dev_hold(dev);
return 0;
destroy_dst:
@@ -1956,7 +1956,6 @@ static int __net_init ip6_fb_tnl_dev_init(struct net_device *dev)
struct ip6_tnl_net *ip6n = net_generic(net, ip6_tnl_net_id);
t->parms.proto = IPPROTO_IPV6;
- dev_hold(dev);
rcu_assign_pointer(ip6n->tnls_wc[0], t);
return 0;
@@ -2275,6 +2274,16 @@ static void __net_exit ip6_tnl_destroy_tunnels(struct net *net, struct list_head
t = rtnl_dereference(t->next);
}
}
+
+ t = rtnl_dereference(ip6n->tnls_wc[0]);
+ while (t) {
+ /* If dev is in the same netns, it has already
+ * been added to the list by the previous loop.
+ */
+ if (!net_eq(dev_net(t->dev), net))
+ unregister_netdevice_queue(t->dev, list);
+ t = rtnl_dereference(t->next);
+ }
}
static int __net_init ip6_tnl_init_net(struct net *net)
diff --git a/net/ipv6/ip6_vti.c b/net/ipv6/ip6_vti.c
index ecfeffc..23aeeb4 100644
--- a/net/ipv6/ip6_vti.c
+++ b/net/ipv6/ip6_vti.c
@@ -192,7 +192,6 @@ static int vti6_tnl_create2(struct net_device *dev)
strcpy(t->parms.name, dev->name);
- dev_hold(dev);
vti6_tnl_link(ip6n, t);
return 0;
@@ -931,6 +930,7 @@ static inline int vti6_dev_init_gen(struct net_device *dev)
dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
if (!dev->tstats)
return -ENOMEM;
+ dev_hold(dev);
return 0;
}
@@ -962,7 +962,6 @@ static int __net_init vti6_fb_tnl_dev_init(struct net_device *dev)
struct vti6_net *ip6n = net_generic(net, vti6_net_id);
t->parms.proto = IPPROTO_IPV6;
- dev_hold(dev);
rcu_assign_pointer(ip6n->tnls_wc[0], t);
return 0;
diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c
index 8cd2782..9fb5077 100644
--- a/net/ipv6/mcast.c
+++ b/net/ipv6/mcast.c
@@ -1601,10 +1601,7 @@ static struct sk_buff *mld_newpack(struct inet6_dev *idev, unsigned int mtu)
IPV6_TLV_PADN, 0 };
/* we assume size > sizeof(ra) here */
- /* limit our allocations to order-0 page */
- size = min_t(int, size, SKB_MAX_ORDER(0, 0));
skb = sock_alloc_send_skb(sk, size, 1, &err);
-
if (!skb)
return NULL;
diff --git a/net/ipv6/mcast_snoop.c b/net/ipv6/mcast_snoop.c
index d3d6b6a..04d5fcd 100644
--- a/net/ipv6/mcast_snoop.c
+++ b/net/ipv6/mcast_snoop.c
@@ -109,7 +109,7 @@ static int ipv6_mc_check_mld_msg(struct sk_buff *skb)
struct mld_msg *mld;
if (!ipv6_mc_may_pull(skb, len))
- return -EINVAL;
+ return -ENODATA;
mld = (struct mld_msg *)skb_transport_header(skb);
@@ -122,7 +122,7 @@ static int ipv6_mc_check_mld_msg(struct sk_buff *skb)
case ICMPV6_MGM_QUERY:
return ipv6_mc_check_mld_query(skb);
default:
- return -ENOMSG;
+ return -ENODATA;
}
}
@@ -131,7 +131,7 @@ static inline __sum16 ipv6_mc_validate_checksum(struct sk_buff *skb)
return skb_checksum_validate(skb, IPPROTO_ICMPV6, ip6_compute_pseudo);
}
-int ipv6_mc_check_icmpv6(struct sk_buff *skb)
+static int ipv6_mc_check_icmpv6(struct sk_buff *skb)
{
unsigned int len = skb_transport_offset(skb) + sizeof(struct icmp6hdr);
unsigned int transport_len = ipv6_transport_len(skb);
@@ -150,7 +150,6 @@ int ipv6_mc_check_icmpv6(struct sk_buff *skb)
return 0;
}
-EXPORT_SYMBOL(ipv6_mc_check_icmpv6);
/**
* ipv6_mc_check_mld - checks whether this is a sane MLD packet
@@ -161,7 +160,10 @@ EXPORT_SYMBOL(ipv6_mc_check_icmpv6);
*
* -EINVAL: A broken packet was detected, i.e. it violates some internet
* standard
- * -ENOMSG: IP header validation succeeded but it is not an MLD packet.
+ * -ENOMSG: IP header validation succeeded but it is not an ICMPv6 packet
+ * with a hop-by-hop option.
+ * -ENODATA: IP+ICMPv6 header with hop-by-hop option validation succeeded
+ * but it is not an MLD packet.
* -ENOMEM: A memory allocation failure happened.
*
* Caller needs to set the skb network header and free any returned skb if it
diff --git a/net/ipv6/netfilter/ip6_tables.c b/net/ipv6/netfilter/ip6_tables.c
index 2e2119b..eb2b540 100644
--- a/net/ipv6/netfilter/ip6_tables.c
+++ b/net/ipv6/netfilter/ip6_tables.c
@@ -1443,6 +1443,8 @@ translate_compat_table(struct net *net,
if (!newinfo)
goto out_unlock;
+ memset(newinfo->entries, 0, size);
+
newinfo->number = compatr->num_entries;
for (i = 0; i < NF_INET_NUMHOOKS; i++) {
newinfo->hook_entry[i] = compatr->hook_entry[i];
diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c
index 6e4ab80..00f133a 100644
--- a/net/ipv6/raw.c
+++ b/net/ipv6/raw.c
@@ -298,7 +298,7 @@ static int rawv6_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len)
*/
v4addr = LOOPBACK4_IPV6;
if (!(addr_type & IPV6_ADDR_MULTICAST) &&
- !sock_net(sk)->ipv6.sysctl.ip_nonlocal_bind) {
+ !ipv6_can_nonlocal_bind(sock_net(sk), inet)) {
err = -EADDRNOTAVAIL;
if (!ipv6_chk_addr(sock_net(sk), &addr->sin6_addr,
dev, 0)) {
diff --git a/net/ipv6/reassembly.c b/net/ipv6/reassembly.c
index 47a0dc4..28e4478 100644
--- a/net/ipv6/reassembly.c
+++ b/net/ipv6/reassembly.c
@@ -343,7 +343,7 @@ static int ipv6_frag_rcv(struct sk_buff *skb)
hdr = ipv6_hdr(skb);
fhdr = (struct frag_hdr *)skb_transport_header(skb);
- if (!(fhdr->frag_off & htons(0xFFF9))) {
+ if (!(fhdr->frag_off & htons(IP6_OFFSET | IP6_MF))) {
/* It is not a fragmented frame */
skb->transport_header += sizeof(struct frag_hdr);
__IP6_INC_STATS(net,
@@ -351,6 +351,8 @@ static int ipv6_frag_rcv(struct sk_buff *skb)
IP6CB(skb)->nhoff = (u8 *)fhdr - skb_network_header(skb);
IP6CB(skb)->flags |= IP6SKB_FRAGMENTED;
+ IP6CB(skb)->frag_max_size = ntohs(hdr->payload_len) +
+ sizeof(struct ipv6hdr);
return 1;
}
diff --git a/net/ipv6/route.c b/net/ipv6/route.c
index fa27644..71e578ed 100644
--- a/net/ipv6/route.c
+++ b/net/ipv6/route.c
@@ -5203,9 +5203,11 @@ static int ip6_route_multipath_add(struct fib6_config *cfg,
* nexthops have been replaced by first new, the rest should
* be added to it.
*/
- cfg->fc_nlinfo.nlh->nlmsg_flags &= ~(NLM_F_EXCL |
- NLM_F_REPLACE);
- cfg->fc_nlinfo.nlh->nlmsg_flags |= NLM_F_CREATE;
+ if (cfg->fc_nlinfo.nlh) {
+ cfg->fc_nlinfo.nlh->nlmsg_flags &= ~(NLM_F_EXCL |
+ NLM_F_REPLACE);
+ cfg->fc_nlinfo.nlh->nlmsg_flags |= NLM_F_CREATE;
+ }
nhn++;
}
diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c
index b26f469..a6a3d75 100644
--- a/net/ipv6/sit.c
+++ b/net/ipv6/sit.c
@@ -218,8 +218,6 @@ static int ipip6_tunnel_create(struct net_device *dev)
ipip6_tunnel_clone_6rd(dev, sitn);
- dev_hold(dev);
-
ipip6_tunnel_link(sitn, t);
return 0;
@@ -1456,7 +1454,7 @@ static int ipip6_tunnel_init(struct net_device *dev)
dev->tstats = NULL;
return err;
}
-
+ dev_hold(dev);
return 0;
}
@@ -1472,7 +1470,6 @@ static void __net_init ipip6_fb_tunnel_init(struct net_device *dev)
iph->ihl = 5;
iph->ttl = 64;
- dev_hold(dev);
rcu_assign_pointer(sitn->tunnels_wc[0], tunnel);
}
@@ -1867,9 +1864,9 @@ static void __net_exit sit_destroy_tunnels(struct net *net,
if (dev->rtnl_link_ops == &sit_link_ops)
unregister_netdevice_queue(dev, head);
- for (prio = 1; prio < 4; prio++) {
+ for (prio = 0; prio < 4; prio++) {
int h;
- for (h = 0; h < IP6_SIT_HASH_SIZE; h++) {
+ for (h = 0; h < (prio ? IP6_SIT_HASH_SIZE : 1); h++) {
struct ip_tunnel *t;
t = rtnl_dereference(sitn->tunnels[prio][h]);
diff --git a/net/mac80211/aead_api.c b/net/mac80211/aead_api.c
index d7b3d90..b00d6f5 100644
--- a/net/mac80211/aead_api.c
+++ b/net/mac80211/aead_api.c
@@ -23,6 +23,7 @@ int aead_encrypt(struct crypto_aead *tfm, u8 *b_0, u8 *aad, size_t aad_len,
struct aead_request *aead_req;
int reqsize = sizeof(*aead_req) + crypto_aead_reqsize(tfm);
u8 *__aad;
+ int ret;
aead_req = kzalloc(reqsize + aad_len, GFP_ATOMIC);
if (!aead_req)
@@ -40,10 +41,10 @@ int aead_encrypt(struct crypto_aead *tfm, u8 *b_0, u8 *aad, size_t aad_len,
aead_request_set_crypt(aead_req, sg, sg, data_len, b_0);
aead_request_set_ad(aead_req, sg[0].length);
- crypto_aead_encrypt(aead_req);
+ ret = crypto_aead_encrypt(aead_req);
kfree_sensitive(aead_req);
- return 0;
+ return ret;
}
int aead_decrypt(struct crypto_aead *tfm, u8 *b_0, u8 *aad, size_t aad_len,
diff --git a/net/mac80211/aes_gmac.c b/net/mac80211/aes_gmac.c
index 6f3b3a0..512cab0 100644
--- a/net/mac80211/aes_gmac.c
+++ b/net/mac80211/aes_gmac.c
@@ -22,6 +22,7 @@ int ieee80211_aes_gmac(struct crypto_aead *tfm, const u8 *aad, u8 *nonce,
struct aead_request *aead_req;
int reqsize = sizeof(*aead_req) + crypto_aead_reqsize(tfm);
const __le16 *fc;
+ int ret;
if (data_len < GMAC_MIC_LEN)
return -EINVAL;
@@ -59,10 +60,10 @@ int ieee80211_aes_gmac(struct crypto_aead *tfm, const u8 *aad, u8 *nonce,
aead_request_set_crypt(aead_req, sg, sg, 0, iv);
aead_request_set_ad(aead_req, GMAC_AAD_LEN + data_len);
- crypto_aead_encrypt(aead_req);
+ ret = crypto_aead_encrypt(aead_req);
kfree_sensitive(aead_req);
- return 0;
+ return ret;
}
struct crypto_aead *ieee80211_aes_gmac_key_setup(const u8 key[],
diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
index 2bf6271..6a96dede 100644
--- a/net/mac80211/cfg.c
+++ b/net/mac80211/cfg.c
@@ -1789,8 +1789,10 @@ static int ieee80211_change_station(struct wiphy *wiphy,
}
if (sta->sdata->vif.type == NL80211_IFTYPE_AP_VLAN &&
- sta->sdata->u.vlan.sta)
+ sta->sdata->u.vlan.sta) {
+ ieee80211_clear_fast_rx(sta);
RCU_INIT_POINTER(sta->sdata->u.vlan.sta, NULL);
+ }
if (test_sta_flag(sta, WLAN_STA_AUTHORIZED))
ieee80211_vif_dec_num_mcast(sta->sdata);
diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
index d691378..be40f6b 100644
--- a/net/mac80211/ieee80211_i.h
+++ b/net/mac80211/ieee80211_i.h
@@ -50,12 +50,6 @@ struct ieee80211_local;
#define IEEE80211_ENCRYPT_HEADROOM 8
#define IEEE80211_ENCRYPT_TAILROOM 18
-/* IEEE 802.11 (Ch. 9.5 Defragmentation) requires support for concurrent
- * reception of at least three fragmented frames. This limit can be increased
- * by changing this define, at the cost of slower frame reassembly and
- * increased memory use (about 2 kB of RAM per entry). */
-#define IEEE80211_FRAGMENT_MAX 4
-
/* power level hasn't been configured (or set to automatic) */
#define IEEE80211_UNSET_POWER_LEVEL INT_MIN
@@ -88,18 +82,6 @@ extern const u8 ieee80211_ac_to_qos_mask[IEEE80211_NUM_ACS];
#define IEEE80211_MAX_NAN_INSTANCE_ID 255
-struct ieee80211_fragment_entry {
- struct sk_buff_head skb_list;
- unsigned long first_frag_time;
- u16 seq;
- u16 extra_len;
- u16 last_frag;
- u8 rx_queue;
- bool check_sequential_pn; /* needed for CCMP/GCMP */
- u8 last_pn[6]; /* PN of the last fragment if CCMP was used */
-};
-
-
struct ieee80211_bss {
u32 device_ts_beacon, device_ts_presp;
@@ -241,8 +223,15 @@ struct ieee80211_rx_data {
*/
int security_idx;
- u32 tkip_iv32;
- u16 tkip_iv16;
+ union {
+ struct {
+ u32 iv32;
+ u16 iv16;
+ } tkip;
+ struct {
+ u8 pn[IEEE80211_CCMP_PN_LEN];
+ } ccm_gcm;
+ };
};
struct ieee80211_csa_settings {
@@ -906,9 +895,7 @@ struct ieee80211_sub_if_data {
char name[IFNAMSIZ];
- /* Fragment table for host-based reassembly */
- struct ieee80211_fragment_entry fragments[IEEE80211_FRAGMENT_MAX];
- unsigned int fragment_next;
+ struct ieee80211_fragment_cache frags;
/* TID bitmap for NoAck policy */
u16 noack_map;
@@ -2327,4 +2314,7 @@ u32 ieee80211_calc_expected_tx_airtime(struct ieee80211_hw *hw,
#define debug_noinline
#endif
+void ieee80211_init_frag_cache(struct ieee80211_fragment_cache *cache);
+void ieee80211_destroy_frag_cache(struct ieee80211_fragment_cache *cache);
+
#endif /* IEEE80211_I_H */
diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
index f3c3557..30589b4 100644
--- a/net/mac80211/iface.c
+++ b/net/mac80211/iface.c
@@ -8,7 +8,7 @@
* Copyright 2008, Johannes Berg <johannes@sipsolutions.net>
* Copyright 2013-2014 Intel Mobile Communications GmbH
* Copyright (c) 2016 Intel Deutschland GmbH
- * Copyright (C) 2018-2020 Intel Corporation
+ * Copyright (C) 2018-2021 Intel Corporation
*/
#include <linux/slab.h>
#include <linux/kernel.h>
@@ -679,16 +679,12 @@ static void ieee80211_set_multicast_list(struct net_device *dev)
*/
static void ieee80211_teardown_sdata(struct ieee80211_sub_if_data *sdata)
{
- int i;
-
/* free extra data */
ieee80211_free_keys(sdata, false);
ieee80211_debugfs_remove_netdev(sdata);
- for (i = 0; i < IEEE80211_FRAGMENT_MAX; i++)
- __skb_queue_purge(&sdata->fragments[i].skb_list);
- sdata->fragment_next = 0;
+ ieee80211_destroy_frag_cache(&sdata->frags);
if (ieee80211_vif_is_mesh(&sdata->vif))
ieee80211_mesh_teardown_sdata(sdata);
@@ -1950,8 +1946,7 @@ int ieee80211_if_add(struct ieee80211_local *local, const char *name,
sdata->wdev.wiphy = local->hw.wiphy;
sdata->local = local;
- for (i = 0; i < IEEE80211_FRAGMENT_MAX; i++)
- skb_queue_head_init(&sdata->fragments[i].skb_list);
+ ieee80211_init_frag_cache(&sdata->frags);
INIT_LIST_HEAD(&sdata->key_list);
diff --git a/net/mac80211/key.c b/net/mac80211/key.c
index 8c5f829..6a72c33 100644
--- a/net/mac80211/key.c
+++ b/net/mac80211/key.c
@@ -799,6 +799,7 @@ int ieee80211_key_link(struct ieee80211_key *key,
struct ieee80211_sub_if_data *sdata,
struct sta_info *sta)
{
+ static atomic_t key_color = ATOMIC_INIT(0);
struct ieee80211_key *old_key;
int idx = key->conf.keyidx;
bool pairwise = key->conf.flags & IEEE80211_KEY_FLAG_PAIRWISE;
@@ -850,6 +851,12 @@ int ieee80211_key_link(struct ieee80211_key *key,
key->sdata = sdata;
key->sta = sta;
+ /*
+ * Assign a unique ID to every key so we can easily prevent mixed
+ * key and fragment cache attacks.
+ */
+ key->color = atomic_inc_return(&key_color);
+
increment_tailroom_need_count(sdata);
ret = ieee80211_key_replace(sdata, sta, pairwise, old_key, key);
diff --git a/net/mac80211/key.h b/net/mac80211/key.h
index 7ad72e9..1e326c8 100644
--- a/net/mac80211/key.h
+++ b/net/mac80211/key.h
@@ -128,6 +128,8 @@ struct ieee80211_key {
} debugfs;
#endif
+ unsigned int color;
+
/*
* key config, must be last because it contains key
* material as variable length member
diff --git a/net/mac80211/main.c b/net/mac80211/main.c
index 523380a..7389302 100644
--- a/net/mac80211/main.c
+++ b/net/mac80211/main.c
@@ -982,8 +982,19 @@ int ieee80211_register_hw(struct ieee80211_hw *hw)
continue;
if (!dflt_chandef.chan) {
+ /*
+ * Assign the first enabled channel to dflt_chandef
+ * from the list of channels
+ */
+ for (i = 0; i < sband->n_channels; i++)
+ if (!(sband->channels[i].flags &
+ IEEE80211_CHAN_DISABLED))
+ break;
+ /* if none found then use the first anyway */
+ if (i == sband->n_channels)
+ i = 0;
cfg80211_chandef_create(&dflt_chandef,
- &sband->channels[0],
+ &sband->channels[i],
NL80211_CHAN_NO_HT);
/* init channel we're on */
if (!local->use_chanctx && !local->_oper_chandef.chan) {
@@ -1139,8 +1150,11 @@ int ieee80211_register_hw(struct ieee80211_hw *hw)
if (local->hw.wiphy->max_scan_ie_len)
local->hw.wiphy->max_scan_ie_len -= local->scan_ies_len;
- WARN_ON(!ieee80211_cs_list_valid(local->hw.cipher_schemes,
- local->hw.n_cipher_schemes));
+ if (WARN_ON(!ieee80211_cs_list_valid(local->hw.cipher_schemes,
+ local->hw.n_cipher_schemes))) {
+ result = -EINVAL;
+ goto fail_workqueue;
+ }
result = ieee80211_init_cipher_suites(local);
if (result < 0)
diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
index 3f483e8..6d3220c 100644
--- a/net/mac80211/mlme.c
+++ b/net/mac80211/mlme.c
@@ -1295,6 +1295,11 @@ static void ieee80211_chswitch_post_beacon(struct ieee80211_sub_if_data *sdata)
sdata->vif.csa_active = false;
ifmgd->csa_waiting_bcn = false;
+ /*
+ * If the CSA IE is still present on the beacon after the switch,
+ * we need to consider it as a new CSA (possibly to self).
+ */
+ ifmgd->beacon_crc_valid = false;
ret = drv_post_channel_switch(sdata);
if (ret) {
@@ -4660,7 +4665,10 @@ static void ieee80211_sta_conn_mon_timer(struct timer_list *t)
timeout = sta->rx_stats.last_rx;
timeout += IEEE80211_CONNECTION_IDLE_TIME;
- if (time_is_before_jiffies(timeout)) {
+ /* If timeout is after now, then update timer to fire at
+ * the later date, but do not actually probe at this time.
+ */
+ if (time_is_after_jiffies(timeout)) {
mod_timer(&ifmgd->conn_mon_timer, round_jiffies_up(timeout));
return;
}
diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
index 9851742..ef8ff0b 100644
--- a/net/mac80211/rx.c
+++ b/net/mac80211/rx.c
@@ -6,7 +6,7 @@
* Copyright 2007-2010 Johannes Berg <johannes@sipsolutions.net>
* Copyright 2013-2014 Intel Mobile Communications GmbH
* Copyright(c) 2015 - 2017 Intel Deutschland GmbH
- * Copyright (C) 2018-2020 Intel Corporation
+ * Copyright (C) 2018-2021 Intel Corporation
*/
#include <linux/jiffies.h>
@@ -2133,19 +2133,34 @@ ieee80211_rx_h_decrypt(struct ieee80211_rx_data *rx)
return result;
}
+void ieee80211_init_frag_cache(struct ieee80211_fragment_cache *cache)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(cache->entries); i++)
+ skb_queue_head_init(&cache->entries[i].skb_list);
+}
+
+void ieee80211_destroy_frag_cache(struct ieee80211_fragment_cache *cache)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(cache->entries); i++)
+ __skb_queue_purge(&cache->entries[i].skb_list);
+}
+
static inline struct ieee80211_fragment_entry *
-ieee80211_reassemble_add(struct ieee80211_sub_if_data *sdata,
+ieee80211_reassemble_add(struct ieee80211_fragment_cache *cache,
unsigned int frag, unsigned int seq, int rx_queue,
struct sk_buff **skb)
{
struct ieee80211_fragment_entry *entry;
- entry = &sdata->fragments[sdata->fragment_next++];
- if (sdata->fragment_next >= IEEE80211_FRAGMENT_MAX)
- sdata->fragment_next = 0;
+ entry = &cache->entries[cache->next++];
+ if (cache->next >= IEEE80211_FRAGMENT_MAX)
+ cache->next = 0;
- if (!skb_queue_empty(&entry->skb_list))
- __skb_queue_purge(&entry->skb_list);
+ __skb_queue_purge(&entry->skb_list);
__skb_queue_tail(&entry->skb_list, *skb); /* no need for locking */
*skb = NULL;
@@ -2160,14 +2175,14 @@ ieee80211_reassemble_add(struct ieee80211_sub_if_data *sdata,
}
static inline struct ieee80211_fragment_entry *
-ieee80211_reassemble_find(struct ieee80211_sub_if_data *sdata,
+ieee80211_reassemble_find(struct ieee80211_fragment_cache *cache,
unsigned int frag, unsigned int seq,
int rx_queue, struct ieee80211_hdr *hdr)
{
struct ieee80211_fragment_entry *entry;
int i, idx;
- idx = sdata->fragment_next;
+ idx = cache->next;
for (i = 0; i < IEEE80211_FRAGMENT_MAX; i++) {
struct ieee80211_hdr *f_hdr;
struct sk_buff *f_skb;
@@ -2176,7 +2191,7 @@ ieee80211_reassemble_find(struct ieee80211_sub_if_data *sdata,
if (idx < 0)
idx = IEEE80211_FRAGMENT_MAX - 1;
- entry = &sdata->fragments[idx];
+ entry = &cache->entries[idx];
if (skb_queue_empty(&entry->skb_list) || entry->seq != seq ||
entry->rx_queue != rx_queue ||
entry->last_frag + 1 != frag)
@@ -2204,15 +2219,27 @@ ieee80211_reassemble_find(struct ieee80211_sub_if_data *sdata,
return NULL;
}
+static bool requires_sequential_pn(struct ieee80211_rx_data *rx, __le16 fc)
+{
+ return rx->key &&
+ (rx->key->conf.cipher == WLAN_CIPHER_SUITE_CCMP ||
+ rx->key->conf.cipher == WLAN_CIPHER_SUITE_CCMP_256 ||
+ rx->key->conf.cipher == WLAN_CIPHER_SUITE_GCMP ||
+ rx->key->conf.cipher == WLAN_CIPHER_SUITE_GCMP_256) &&
+ ieee80211_has_protected(fc);
+}
+
static ieee80211_rx_result debug_noinline
ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx)
{
+ struct ieee80211_fragment_cache *cache = &rx->sdata->frags;
struct ieee80211_hdr *hdr;
u16 sc;
__le16 fc;
unsigned int frag, seq;
struct ieee80211_fragment_entry *entry;
struct sk_buff *skb;
+ struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(rx->skb);
hdr = (struct ieee80211_hdr *)rx->skb->data;
fc = hdr->frame_control;
@@ -2228,6 +2255,9 @@ ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx)
goto out_no_led;
}
+ if (rx->sta)
+ cache = &rx->sta->frags;
+
if (likely(!ieee80211_has_morefrags(fc) && frag == 0))
goto out;
@@ -2246,20 +2276,17 @@ ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx)
if (frag == 0) {
/* This is the first fragment of a new frame. */
- entry = ieee80211_reassemble_add(rx->sdata, frag, seq,
+ entry = ieee80211_reassemble_add(cache, frag, seq,
rx->seqno_idx, &(rx->skb));
- if (rx->key &&
- (rx->key->conf.cipher == WLAN_CIPHER_SUITE_CCMP ||
- rx->key->conf.cipher == WLAN_CIPHER_SUITE_CCMP_256 ||
- rx->key->conf.cipher == WLAN_CIPHER_SUITE_GCMP ||
- rx->key->conf.cipher == WLAN_CIPHER_SUITE_GCMP_256) &&
- ieee80211_has_protected(fc)) {
+ if (requires_sequential_pn(rx, fc)) {
int queue = rx->security_idx;
/* Store CCMP/GCMP PN so that we can verify that the
* next fragment has a sequential PN value.
*/
entry->check_sequential_pn = true;
+ entry->is_protected = true;
+ entry->key_color = rx->key->color;
memcpy(entry->last_pn,
rx->key->u.ccmp.rx_pn[queue],
IEEE80211_CCMP_PN_LEN);
@@ -2271,6 +2298,11 @@ ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx)
sizeof(rx->key->u.gcmp.rx_pn[queue]));
BUILD_BUG_ON(IEEE80211_CCMP_PN_LEN !=
IEEE80211_GCMP_PN_LEN);
+ } else if (rx->key &&
+ (ieee80211_has_protected(fc) ||
+ (status->flag & RX_FLAG_DECRYPTED))) {
+ entry->is_protected = true;
+ entry->key_color = rx->key->color;
}
return RX_QUEUED;
}
@@ -2278,7 +2310,7 @@ ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx)
/* This is a fragment for a frame that should already be pending in
* fragment cache. Add this fragment to the end of the pending entry.
*/
- entry = ieee80211_reassemble_find(rx->sdata, frag, seq,
+ entry = ieee80211_reassemble_find(cache, frag, seq,
rx->seqno_idx, hdr);
if (!entry) {
I802_DEBUG_INC(rx->local->rx_handlers_drop_defrag);
@@ -2293,25 +2325,39 @@ ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx)
if (entry->check_sequential_pn) {
int i;
u8 pn[IEEE80211_CCMP_PN_LEN], *rpn;
- int queue;
- if (!rx->key ||
- (rx->key->conf.cipher != WLAN_CIPHER_SUITE_CCMP &&
- rx->key->conf.cipher != WLAN_CIPHER_SUITE_CCMP_256 &&
- rx->key->conf.cipher != WLAN_CIPHER_SUITE_GCMP &&
- rx->key->conf.cipher != WLAN_CIPHER_SUITE_GCMP_256))
+ if (!requires_sequential_pn(rx, fc))
return RX_DROP_UNUSABLE;
+
+ /* Prevent mixed key and fragment cache attacks */
+ if (entry->key_color != rx->key->color)
+ return RX_DROP_UNUSABLE;
+
memcpy(pn, entry->last_pn, IEEE80211_CCMP_PN_LEN);
for (i = IEEE80211_CCMP_PN_LEN - 1; i >= 0; i--) {
pn[i]++;
if (pn[i])
break;
}
- queue = rx->security_idx;
- rpn = rx->key->u.ccmp.rx_pn[queue];
+
+ rpn = rx->ccm_gcm.pn;
if (memcmp(pn, rpn, IEEE80211_CCMP_PN_LEN))
return RX_DROP_UNUSABLE;
memcpy(entry->last_pn, pn, IEEE80211_CCMP_PN_LEN);
+ } else if (entry->is_protected &&
+ (!rx->key ||
+ (!ieee80211_has_protected(fc) &&
+ !(status->flag & RX_FLAG_DECRYPTED)) ||
+ rx->key->color != entry->key_color)) {
+ /* Drop this as a mixed key or fragment cache attack, even
+ * if for TKIP Michael MIC should protect us, and WEP is a
+ * lost cause anyway.
+ */
+ return RX_DROP_UNUSABLE;
+ } else if (entry->is_protected && rx->key &&
+ entry->key_color != rx->key->color &&
+ (status->flag & RX_FLAG_DECRYPTED)) {
+ return RX_DROP_UNUSABLE;
}
skb_pull(rx->skb, ieee80211_hdrlen(fc));
@@ -2504,13 +2550,13 @@ static bool ieee80211_frame_allowed(struct ieee80211_rx_data *rx, __le16 fc)
struct ethhdr *ehdr = (struct ethhdr *) rx->skb->data;
/*
- * Allow EAPOL frames to us/the PAE group address regardless
- * of whether the frame was encrypted or not.
+ * Allow EAPOL frames to us/the PAE group address regardless of
+ * whether the frame was encrypted or not, and always disallow
+ * all other destination addresses for them.
*/
- if (ehdr->h_proto == rx->sdata->control_port_protocol &&
- (ether_addr_equal(ehdr->h_dest, rx->sdata->vif.addr) ||
- ether_addr_equal(ehdr->h_dest, pae_group_addr)))
- return true;
+ if (unlikely(ehdr->h_proto == rx->sdata->control_port_protocol))
+ return ether_addr_equal(ehdr->h_dest, rx->sdata->vif.addr) ||
+ ether_addr_equal(ehdr->h_dest, pae_group_addr);
if (ieee80211_802_1x_port_control(rx) ||
ieee80211_drop_unencrypted(rx, fc))
@@ -2535,8 +2581,28 @@ static void ieee80211_deliver_skb_to_local_stack(struct sk_buff *skb,
cfg80211_rx_control_port(dev, skb, noencrypt);
dev_kfree_skb(skb);
} else {
+ struct ethhdr *ehdr = (void *)skb_mac_header(skb);
+
memset(skb->cb, 0, sizeof(skb->cb));
+ /*
+ * 802.1X over 802.11 requires that the authenticator address
+ * be used for EAPOL frames. However, 802.1X allows the use of
+ * the PAE group address instead. If the interface is part of
+ * a bridge and we pass the frame with the PAE group address,
+ * then the bridge will forward it to the network (even if the
+ * client was not associated yet), which isn't supposed to
+ * happen.
+ * To avoid that, rewrite the destination address to our own
+ * address, so that the authenticator (e.g. hostapd) will see
+ * the frame, but bridge won't forward it anywhere else. Note
+ * that due to earlier filtering, the only other address can
+ * be the PAE group address.
+ */
+ if (unlikely(skb->protocol == sdata->control_port_protocol &&
+ !ether_addr_equal(ehdr->h_dest, sdata->vif.addr)))
+ ether_addr_copy(ehdr->h_dest, sdata->vif.addr);
+
/* deliver to local stack */
if (rx->list)
list_add_tail(&skb->list, rx->list);
@@ -2576,6 +2642,7 @@ ieee80211_deliver_skb(struct ieee80211_rx_data *rx)
if ((sdata->vif.type == NL80211_IFTYPE_AP ||
sdata->vif.type == NL80211_IFTYPE_AP_VLAN) &&
!(sdata->flags & IEEE80211_SDATA_DONT_BRIDGE_PACKETS) &&
+ ehdr->h_proto != rx->sdata->control_port_protocol &&
(sdata->vif.type != NL80211_IFTYPE_AP_VLAN || !sdata->u.vlan.sta)) {
if (is_multicast_ether_addr(ehdr->h_dest) &&
ieee80211_vif_get_num_mcast_if(sdata) != 0) {
@@ -2685,7 +2752,7 @@ __ieee80211_rx_h_amsdu(struct ieee80211_rx_data *rx, u8 data_offset)
if (ieee80211_data_to_8023_exthdr(skb, &ethhdr,
rx->sdata->vif.addr,
rx->sdata->vif.type,
- data_offset))
+ data_offset, true))
return RX_DROP_UNUSABLE;
ieee80211_amsdu_to_8023s(skb, &frame_list, dev->dev_addr,
@@ -2742,6 +2809,23 @@ ieee80211_rx_h_amsdu(struct ieee80211_rx_data *rx)
if (is_multicast_ether_addr(hdr->addr1))
return RX_DROP_UNUSABLE;
+ if (rx->key) {
+ /*
+ * We should not receive A-MSDUs on pre-HT connections,
+ * and HT connections cannot use old ciphers. Thus drop
+ * them, as in those cases we couldn't even have SPP
+ * A-MSDUs or such.
+ */
+ switch (rx->key->conf.cipher) {
+ case WLAN_CIPHER_SUITE_WEP40:
+ case WLAN_CIPHER_SUITE_WEP104:
+ case WLAN_CIPHER_SUITE_TKIP:
+ return RX_DROP_UNUSABLE;
+ default:
+ break;
+ }
+ }
+
return __ieee80211_rx_h_amsdu(rx, 0);
}
diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
index ec6973e..f2fb69d 100644
--- a/net/mac80211/sta_info.c
+++ b/net/mac80211/sta_info.c
@@ -4,7 +4,7 @@
* Copyright 2006-2007 Jiri Benc <jbenc@suse.cz>
* Copyright 2013-2014 Intel Mobile Communications GmbH
* Copyright (C) 2015 - 2017 Intel Deutschland GmbH
- * Copyright (C) 2018-2020 Intel Corporation
+ * Copyright (C) 2018-2021 Intel Corporation
*/
#include <linux/module.h>
@@ -392,6 +392,8 @@ struct sta_info *sta_info_alloc(struct ieee80211_sub_if_data *sdata,
u64_stats_init(&sta->rx_stats.syncp);
+ ieee80211_init_frag_cache(&sta->frags);
+
sta->sta_state = IEEE80211_STA_NONE;
/* Mark TID as unreserved */
@@ -1102,6 +1104,8 @@ static void __sta_info_destroy_part2(struct sta_info *sta)
ieee80211_sta_debugfs_remove(sta);
+ ieee80211_destroy_frag_cache(&sta->frags);
+
cleanup_single_sta(sta);
}
diff --git a/net/mac80211/sta_info.h b/net/mac80211/sta_info.h
index 7afd076..355e006 100644
--- a/net/mac80211/sta_info.h
+++ b/net/mac80211/sta_info.h
@@ -3,7 +3,7 @@
* Copyright 2002-2005, Devicescape Software, Inc.
* Copyright 2013-2014 Intel Mobile Communications GmbH
* Copyright(c) 2015-2017 Intel Deutschland GmbH
- * Copyright(c) 2020 Intel Corporation
+ * Copyright(c) 2020-2021 Intel Corporation
*/
#ifndef STA_INFO_H
@@ -437,6 +437,34 @@ struct ieee80211_sta_rx_stats {
};
/*
+ * IEEE 802.11-2016 (10.6 "Defragmentation") recommends support for "concurrent
+ * reception of at least one MSDU per access category per associated STA"
+ * on APs, or "at least one MSDU per access category" on other interface types.
+ *
+ * This limit can be increased by changing this define, at the cost of slower
+ * frame reassembly and increased memory use while fragments are pending.
+ */
+#define IEEE80211_FRAGMENT_MAX 4
+
+struct ieee80211_fragment_entry {
+ struct sk_buff_head skb_list;
+ unsigned long first_frag_time;
+ u16 seq;
+ u16 extra_len;
+ u16 last_frag;
+ u8 rx_queue;
+ u8 check_sequential_pn:1, /* needed for CCMP/GCMP */
+ is_protected:1;
+ u8 last_pn[6]; /* PN of the last fragment if CCMP was used */
+ unsigned int key_color;
+};
+
+struct ieee80211_fragment_cache {
+ struct ieee80211_fragment_entry entries[IEEE80211_FRAGMENT_MAX];
+ unsigned int next;
+};
+
+/*
* The bandwidth threshold below which the per-station CoDel parameters will be
* scaled to be more lenient (to prevent starvation of slow stations). This
* value will be scaled by the number of active stations when it is being
@@ -529,6 +557,7 @@ struct ieee80211_sta_rx_stats {
* @status_stats.last_ack_signal: last ACK signal
* @status_stats.ack_signal_filled: last ACK signal validity
* @status_stats.avg_ack_signal: average ACK signal
+ * @frags: fragment cache
*/
struct sta_info {
/* General information, mostly static */
@@ -637,6 +666,8 @@ struct sta_info {
struct cfg80211_chan_def tdls_chandef;
+ struct ieee80211_fragment_cache frags;
+
/* keep last! */
struct ieee80211_sta sta;
};
diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
index 88868bf..1d8526d 100644
--- a/net/mac80211/tx.c
+++ b/net/mac80211/tx.c
@@ -3605,7 +3605,7 @@ struct sk_buff *ieee80211_tx_dequeue(struct ieee80211_hw *hw,
test_bit(IEEE80211_TXQ_STOP_NETIF_TX, &txqi->flags))
goto out;
- if (vif->txqs_stopped[ieee80211_ac_from_tid(txq->tid)]) {
+ if (vif->txqs_stopped[txq->ac]) {
set_bit(IEEE80211_TXQ_STOP_NETIF_TX, &txqi->flags);
goto out;
}
diff --git a/net/mac80211/wpa.c b/net/mac80211/wpa.c
index 91bf32a..bca47fa 100644
--- a/net/mac80211/wpa.c
+++ b/net/mac80211/wpa.c
@@ -3,6 +3,7 @@
* Copyright 2002-2004, Instant802 Networks, Inc.
* Copyright 2008, Jouni Malinen <j@w1.fi>
* Copyright (C) 2016-2017 Intel Deutschland GmbH
+ * Copyright (C) 2020-2021 Intel Corporation
*/
#include <linux/netdevice.h>
@@ -167,8 +168,8 @@ ieee80211_rx_h_michael_mic_verify(struct ieee80211_rx_data *rx)
update_iv:
/* update IV in key information to be able to detect replays */
- rx->key->u.tkip.rx[rx->security_idx].iv32 = rx->tkip_iv32;
- rx->key->u.tkip.rx[rx->security_idx].iv16 = rx->tkip_iv16;
+ rx->key->u.tkip.rx[rx->security_idx].iv32 = rx->tkip.iv32;
+ rx->key->u.tkip.rx[rx->security_idx].iv16 = rx->tkip.iv16;
return RX_CONTINUE;
@@ -294,8 +295,8 @@ ieee80211_crypto_tkip_decrypt(struct ieee80211_rx_data *rx)
key, skb->data + hdrlen,
skb->len - hdrlen, rx->sta->sta.addr,
hdr->addr1, hwaccel, rx->security_idx,
- &rx->tkip_iv32,
- &rx->tkip_iv16);
+ &rx->tkip.iv32,
+ &rx->tkip.iv16);
if (res != TKIP_DECRYPT_OK)
return RX_DROP_UNUSABLE;
@@ -553,6 +554,8 @@ ieee80211_crypto_ccmp_decrypt(struct ieee80211_rx_data *rx,
}
memcpy(key->u.ccmp.rx_pn[queue], pn, IEEE80211_CCMP_PN_LEN);
+ if (unlikely(ieee80211_is_frag(hdr)))
+ memcpy(rx->ccm_gcm.pn, pn, IEEE80211_CCMP_PN_LEN);
}
/* Remove CCMP header and MIC */
@@ -781,6 +784,8 @@ ieee80211_crypto_gcmp_decrypt(struct ieee80211_rx_data *rx)
}
memcpy(key->u.gcmp.rx_pn[queue], pn, IEEE80211_GCMP_PN_LEN);
+ if (unlikely(ieee80211_is_frag(hdr)))
+ memcpy(rx->ccm_gcm.pn, pn, IEEE80211_CCMP_PN_LEN);
}
/* Remove GCMP header and MIC */
diff --git a/net/mac802154/llsec.c b/net/mac802154/llsec.c
index 585d331..55550ea 100644
--- a/net/mac802154/llsec.c
+++ b/net/mac802154/llsec.c
@@ -152,7 +152,7 @@ llsec_key_alloc(const struct ieee802154_llsec_key *template)
crypto_free_sync_skcipher(key->tfm0);
err_tfm:
for (i = 0; i < ARRAY_SIZE(key->tfm); i++)
- if (key->tfm[i])
+ if (!IS_ERR_OR_NULL(key->tfm[i]))
crypto_free_aead(key->tfm[i]);
kfree_sensitive(key);
diff --git a/net/mptcp/options.c b/net/mptcp/options.c
index a044dd4..91034a2 100644
--- a/net/mptcp/options.c
+++ b/net/mptcp/options.c
@@ -127,7 +127,6 @@ static void mptcp_parse_option(const struct sk_buff *skb,
memcpy(mp_opt->hmac, ptr, MPTCPOPT_HMAC_LEN);
pr_debug("MP_JOIN hmac");
} else {
- pr_warn("MP_JOIN bad option size");
mp_opt->mp_join = 0;
}
break;
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index f56b2e3..7832b20 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -758,11 +758,17 @@ static bool mptcp_skb_can_collapse_to(u64 write_seq,
return mpext && mpext->data_seq + mpext->data_len == write_seq;
}
+/* we can append data to the given data frag if:
+ * - there is space available in the backing page_frag
+ * - the data frag tail matches the current page_frag free offset
+ * - the data frag end sequence number matches the current write seq
+ */
static bool mptcp_frag_can_collapse_to(const struct mptcp_sock *msk,
const struct page_frag *pfrag,
const struct mptcp_data_frag *df)
{
return df && pfrag->page == df->page &&
+ pfrag->offset == (df->offset + df->data_len) &&
df->data_seq + df->data_len == msk->write_seq;
}
@@ -2245,6 +2251,48 @@ static int mptcp_setsockopt_v6(struct mptcp_sock *msk, int optname,
return ret;
}
+static bool mptcp_unsupported(int level, int optname)
+{
+ if (level == SOL_IP) {
+ switch (optname) {
+ case IP_ADD_MEMBERSHIP:
+ case IP_ADD_SOURCE_MEMBERSHIP:
+ case IP_DROP_MEMBERSHIP:
+ case IP_DROP_SOURCE_MEMBERSHIP:
+ case IP_BLOCK_SOURCE:
+ case IP_UNBLOCK_SOURCE:
+ case MCAST_JOIN_GROUP:
+ case MCAST_LEAVE_GROUP:
+ case MCAST_JOIN_SOURCE_GROUP:
+ case MCAST_LEAVE_SOURCE_GROUP:
+ case MCAST_BLOCK_SOURCE:
+ case MCAST_UNBLOCK_SOURCE:
+ case MCAST_MSFILTER:
+ return true;
+ }
+ return false;
+ }
+ if (level == SOL_IPV6) {
+ switch (optname) {
+ case IPV6_ADDRFORM:
+ case IPV6_ADD_MEMBERSHIP:
+ case IPV6_DROP_MEMBERSHIP:
+ case IPV6_JOIN_ANYCAST:
+ case IPV6_LEAVE_ANYCAST:
+ case MCAST_JOIN_GROUP:
+ case MCAST_LEAVE_GROUP:
+ case MCAST_JOIN_SOURCE_GROUP:
+ case MCAST_LEAVE_SOURCE_GROUP:
+ case MCAST_BLOCK_SOURCE:
+ case MCAST_UNBLOCK_SOURCE:
+ case MCAST_MSFILTER:
+ return true;
+ }
+ return false;
+ }
+ return false;
+}
+
static int mptcp_setsockopt(struct sock *sk, int level, int optname,
sockptr_t optval, unsigned int optlen)
{
@@ -2253,6 +2301,9 @@ static int mptcp_setsockopt(struct sock *sk, int level, int optname,
pr_debug("msk=%p", msk);
+ if (mptcp_unsupported(level, optname))
+ return -ENOPROTOOPT;
+
if (level == SOL_SOCKET)
return mptcp_setsockopt_sol_socket(msk, optname, optval, optlen);
diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
index 6317b9b..bdd6af3 100644
--- a/net/mptcp/subflow.c
+++ b/net/mptcp/subflow.c
@@ -445,8 +445,7 @@ static void mptcp_sock_destruct(struct sock *sk)
* ESTABLISHED state and will not have the SOCK_DEAD flag.
* Both result in warnings from inet_sock_destruct.
*/
-
- if (sk->sk_state == TCP_ESTABLISHED) {
+ if ((1 << sk->sk_state) & (TCPF_ESTABLISHED | TCPF_CLOSE_WAIT)) {
sk->sk_state = TCP_CLOSE;
WARN_ON_ONCE(sk->sk_socket);
sock_orphan(sk);
@@ -741,7 +740,6 @@ static enum mapping_status get_mapping_status(struct sock *ssk,
data_len = mpext->data_len;
if (data_len == 0) {
- pr_err("Infinite mapping not handled");
MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_INFINITEMAPRX);
return MAPPING_INVALID;
}
diff --git a/net/ncsi/ncsi-manage.c b/net/ncsi/ncsi-manage.c
index a9cb355..ffff8da 100644
--- a/net/ncsi/ncsi-manage.c
+++ b/net/ncsi/ncsi-manage.c
@@ -105,13 +105,20 @@ static void ncsi_channel_monitor(struct timer_list *t)
monitor_state = nc->monitor.state;
spin_unlock_irqrestore(&nc->lock, flags);
- if (!enabled || chained) {
- ncsi_stop_channel_monitor(nc);
- return;
- }
+ if (!enabled)
+ return; /* expected race disabling timer */
+ if (WARN_ON_ONCE(chained))
+ goto bad_state;
+
if (state != NCSI_CHANNEL_INACTIVE &&
state != NCSI_CHANNEL_ACTIVE) {
- ncsi_stop_channel_monitor(nc);
+bad_state:
+ netdev_warn(ndp->ndev.dev,
+ "Bad NCSI monitor state channel %d 0x%x %s queue\n",
+ nc->id, state, chained ? "on" : "off");
+ spin_lock_irqsave(&nc->lock, flags);
+ nc->monitor.enabled = false;
+ spin_unlock_irqrestore(&nc->lock, flags);
return;
}
@@ -136,10 +143,9 @@ static void ncsi_channel_monitor(struct timer_list *t)
ncsi_report_link(ndp, true);
ndp->flags |= NCSI_DEV_RESHUFFLE;
- ncsi_stop_channel_monitor(nc);
-
ncm = &nc->modes[NCSI_MODE_LINK];
spin_lock_irqsave(&nc->lock, flags);
+ nc->monitor.enabled = false;
nc->state = NCSI_CHANNEL_INVISIBLE;
ncm->data[2] &= ~0x1;
spin_unlock_irqrestore(&nc->lock, flags);
diff --git a/net/netfilter/nf_conntrack_proto_gre.c b/net/netfilter/nf_conntrack_proto_gre.c
index 5b05487..db11e40 100644
--- a/net/netfilter/nf_conntrack_proto_gre.c
+++ b/net/netfilter/nf_conntrack_proto_gre.c
@@ -218,9 +218,6 @@ int nf_conntrack_gre_packet(struct nf_conn *ct,
enum ip_conntrack_info ctinfo,
const struct nf_hook_state *state)
{
- if (state->pf != NFPROTO_IPV4)
- return -NF_ACCEPT;
-
if (!nf_ct_is_confirmed(ct)) {
unsigned int *timeouts = nf_ct_timeout_lookup(ct);
diff --git a/net/netfilter/nf_conntrack_standalone.c b/net/netfilter/nf_conntrack_standalone.c
index 0ee702d..313d1c8 100644
--- a/net/netfilter/nf_conntrack_standalone.c
+++ b/net/netfilter/nf_conntrack_standalone.c
@@ -266,6 +266,7 @@ static const char* l4proto_name(u16 proto)
case IPPROTO_GRE: return "gre";
case IPPROTO_SCTP: return "sctp";
case IPPROTO_UDPLITE: return "udplite";
+ case IPPROTO_ICMPV6: return "icmpv6";
}
return "unknown";
@@ -1059,16 +1060,10 @@ static int nf_conntrack_standalone_init_sysctl(struct net *net)
nf_conntrack_standalone_init_dccp_sysctl(net, table);
nf_conntrack_standalone_init_gre_sysctl(net, table);
- /* Don't allow unprivileged users to alter certain sysctls */
- if (net->user_ns != &init_user_ns) {
+ /* Don't allow non-init_net ns to alter global sysctls */
+ if (!net_eq(&init_net, net)) {
table[NF_SYSCTL_CT_MAX].mode = 0444;
table[NF_SYSCTL_CT_EXPECT_MAX].mode = 0444;
- table[NF_SYSCTL_CT_HELPER].mode = 0444;
-#ifdef CONFIG_NF_CONNTRACK_EVENTS
- table[NF_SYSCTL_CT_EVENTS].mode = 0444;
-#endif
- table[NF_SYSCTL_CT_BUCKETS].mode = 0444;
- } else if (!net_eq(&init_net, net)) {
table[NF_SYSCTL_CT_BUCKETS].mode = 0444;
}
diff --git a/net/netfilter/nf_flow_table_core.c b/net/netfilter/nf_flow_table_core.c
index b03feb6..f4029fc 100644
--- a/net/netfilter/nf_flow_table_core.c
+++ b/net/netfilter/nf_flow_table_core.c
@@ -259,8 +259,7 @@ void flow_offload_refresh(struct nf_flowtable *flow_table,
{
flow->timeout = nf_flowtable_time_stamp + NF_FLOW_TIMEOUT;
- if (likely(!nf_flowtable_hw_offload(flow_table) ||
- !test_and_clear_bit(NF_FLOW_HW_REFRESH, &flow->flags)))
+ if (likely(!nf_flowtable_hw_offload(flow_table)))
return;
nf_flow_offload_add(flow_table, flow);
diff --git a/net/netfilter/nf_flow_table_offload.c b/net/netfilter/nf_flow_table_offload.c
index 2a6993f..92047ce 100644
--- a/net/netfilter/nf_flow_table_offload.c
+++ b/net/netfilter/nf_flow_table_offload.c
@@ -305,12 +305,12 @@ static void flow_offload_ipv6_mangle(struct nf_flow_rule *flow_rule,
const __be32 *addr, const __be32 *mask)
{
struct flow_action_entry *entry;
- int i;
+ int i, j;
- for (i = 0; i < sizeof(struct in6_addr) / sizeof(u32); i += sizeof(u32)) {
+ for (i = 0, j = 0; i < sizeof(struct in6_addr) / sizeof(u32); i += sizeof(u32), j++) {
entry = flow_action_entry_next(flow_rule);
flow_offload_mangle(entry, FLOW_ACT_MANGLE_HDR_TYPE_IP6,
- offset + i, &addr[i], mask);
+ offset + i, &addr[j], mask);
}
}
@@ -753,10 +753,11 @@ static void flow_offload_work_add(struct flow_offload_work *offload)
err = flow_offload_rule_add(offload, flow_rule);
if (err < 0)
- set_bit(NF_FLOW_HW_REFRESH, &offload->flow->flags);
- else
- set_bit(IPS_HW_OFFLOAD_BIT, &offload->flow->ct->status);
+ goto out;
+ set_bit(IPS_HW_OFFLOAD_BIT, &offload->flow->ct->status);
+
+out:
nf_flow_offload_destroy(flow_rule);
}
diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
index 978a968..7bf7bfa 100644
--- a/net/netfilter/nf_tables_api.c
+++ b/net/netfilter/nf_tables_api.c
@@ -6015,9 +6015,9 @@ static int nf_tables_newobj(struct net *net, struct sock *nlsk,
INIT_LIST_HEAD(&obj->list);
return err;
err_trans:
- kfree(obj->key.name);
-err_userdata:
kfree(obj->udata);
+err_userdata:
+ kfree(obj->key.name);
err_strdup:
if (obj->ops->destroy)
obj->ops->destroy(&ctx, obj);
@@ -6573,6 +6573,9 @@ static int nft_register_flowtable_net_hooks(struct net *net,
list_for_each_entry(hook, hook_list, list) {
list_for_each_entry(ft, &table->flowtables, list) {
+ if (!nft_is_active_next(net, ft))
+ continue;
+
list_for_each_entry(hook2, &ft->hook_list, list) {
if (hook->ops.dev == hook2->ops.dev &&
hook->ops.pf == hook2->ops.pf) {
diff --git a/net/netfilter/nf_tables_offload.c b/net/netfilter/nf_tables_offload.c
index 9ae1427..2b00f7f 100644
--- a/net/netfilter/nf_tables_offload.c
+++ b/net/netfilter/nf_tables_offload.c
@@ -45,6 +45,48 @@ void nft_flow_rule_set_addr_type(struct nft_flow_rule *flow,
offsetof(struct nft_flow_key, control);
}
+struct nft_offload_ethertype {
+ __be16 value;
+ __be16 mask;
+};
+
+static void nft_flow_rule_transfer_vlan(struct nft_offload_ctx *ctx,
+ struct nft_flow_rule *flow)
+{
+ struct nft_flow_match *match = &flow->match;
+ struct nft_offload_ethertype ethertype;
+
+ if (match->dissector.used_keys & BIT(FLOW_DISSECTOR_KEY_CONTROL) &&
+ match->key.basic.n_proto != htons(ETH_P_8021Q) &&
+ match->key.basic.n_proto != htons(ETH_P_8021AD))
+ return;
+
+ ethertype.value = match->key.basic.n_proto;
+ ethertype.mask = match->mask.basic.n_proto;
+
+ if (match->dissector.used_keys & BIT(FLOW_DISSECTOR_KEY_VLAN) &&
+ (match->key.vlan.vlan_tpid == htons(ETH_P_8021Q) ||
+ match->key.vlan.vlan_tpid == htons(ETH_P_8021AD))) {
+ match->key.basic.n_proto = match->key.cvlan.vlan_tpid;
+ match->mask.basic.n_proto = match->mask.cvlan.vlan_tpid;
+ match->key.cvlan.vlan_tpid = match->key.vlan.vlan_tpid;
+ match->mask.cvlan.vlan_tpid = match->mask.vlan.vlan_tpid;
+ match->key.vlan.vlan_tpid = ethertype.value;
+ match->mask.vlan.vlan_tpid = ethertype.mask;
+ match->dissector.offset[FLOW_DISSECTOR_KEY_CVLAN] =
+ offsetof(struct nft_flow_key, cvlan);
+ match->dissector.used_keys |= BIT(FLOW_DISSECTOR_KEY_CVLAN);
+ } else {
+ match->key.basic.n_proto = match->key.vlan.vlan_tpid;
+ match->mask.basic.n_proto = match->mask.vlan.vlan_tpid;
+ match->key.vlan.vlan_tpid = ethertype.value;
+ match->mask.vlan.vlan_tpid = ethertype.mask;
+ match->dissector.offset[FLOW_DISSECTOR_KEY_VLAN] =
+ offsetof(struct nft_flow_key, vlan);
+ match->dissector.used_keys |= BIT(FLOW_DISSECTOR_KEY_VLAN);
+ }
+}
+
struct nft_flow_rule *nft_flow_rule_create(struct net *net,
const struct nft_rule *rule)
{
@@ -89,6 +131,8 @@ struct nft_flow_rule *nft_flow_rule_create(struct net *net,
expr = nft_expr_next(expr);
}
+ nft_flow_rule_transfer_vlan(ctx, flow);
+
flow->proto = ctx->dep.l3num;
kfree(ctx);
diff --git a/net/netfilter/nfnetlink_osf.c b/net/netfilter/nfnetlink_osf.c
index 916a3c7..79fbf37 100644
--- a/net/netfilter/nfnetlink_osf.c
+++ b/net/netfilter/nfnetlink_osf.c
@@ -186,6 +186,8 @@ static const struct tcphdr *nf_osf_hdr_ctx_init(struct nf_osf_hdr_ctx *ctx,
ctx->optp = skb_header_pointer(skb, ip_hdrlen(skb) +
sizeof(struct tcphdr), ctx->optsize, opts);
+ if (!ctx->optp)
+ return NULL;
}
return tcp;
diff --git a/net/netfilter/nft_cmp.c b/net/netfilter/nft_cmp.c
index 00e563a..1d42d06 100644
--- a/net/netfilter/nft_cmp.c
+++ b/net/netfilter/nft_cmp.c
@@ -115,19 +115,56 @@ static int nft_cmp_dump(struct sk_buff *skb, const struct nft_expr *expr)
return -1;
}
+union nft_cmp_offload_data {
+ u16 val16;
+ u32 val32;
+ u64 val64;
+};
+
+static void nft_payload_n2h(union nft_cmp_offload_data *data,
+ const u8 *val, u32 len)
+{
+ switch (len) {
+ case 2:
+ data->val16 = ntohs(*((u16 *)val));
+ break;
+ case 4:
+ data->val32 = ntohl(*((u32 *)val));
+ break;
+ case 8:
+ data->val64 = be64_to_cpu(*((u64 *)val));
+ break;
+ default:
+ WARN_ON_ONCE(1);
+ break;
+ }
+}
+
static int __nft_cmp_offload(struct nft_offload_ctx *ctx,
struct nft_flow_rule *flow,
const struct nft_cmp_expr *priv)
{
struct nft_offload_reg *reg = &ctx->regs[priv->sreg];
+ union nft_cmp_offload_data _data, _datamask;
u8 *mask = (u8 *)&flow->match.mask;
u8 *key = (u8 *)&flow->match.key;
+ u8 *data, *datamask;
if (priv->op != NFT_CMP_EQ || priv->len > reg->len)
return -EOPNOTSUPP;
- memcpy(key + reg->offset, &priv->data, reg->len);
- memcpy(mask + reg->offset, &reg->mask, reg->len);
+ if (reg->flags & NFT_OFFLOAD_F_NETWORK2HOST) {
+ nft_payload_n2h(&_data, (u8 *)&priv->data, reg->len);
+ nft_payload_n2h(&_datamask, (u8 *)&reg->mask, reg->len);
+ data = (u8 *)&_data;
+ datamask = (u8 *)&_datamask;
+ } else {
+ data = (u8 *)&priv->data;
+ datamask = (u8 *)&reg->mask;
+ }
+
+ memcpy(key + reg->offset, data, reg->len);
+ memcpy(mask + reg->offset, datamask, reg->len);
flow->match.dissector.used_keys |= BIT(reg->key);
flow->match.dissector.offset[reg->key] = reg->base_offset;
diff --git a/net/netfilter/nft_limit.c b/net/netfilter/nft_limit.c
index 0e2c315..82ec27b 100644
--- a/net/netfilter/nft_limit.c
+++ b/net/netfilter/nft_limit.c
@@ -76,13 +76,13 @@ static int nft_limit_init(struct nft_limit *limit,
return -EOVERFLOW;
if (pkts) {
- tokens = div_u64(limit->nsecs, limit->rate) * limit->burst;
+ tokens = div64_u64(limit->nsecs, limit->rate) * limit->burst;
} else {
/* The token bucket size limits the number of tokens can be
* accumulated. tokens_max specifies the bucket size.
* tokens_max = unit * (rate + burst) / rate.
*/
- tokens = div_u64(limit->nsecs * (limit->rate + limit->burst),
+ tokens = div64_u64(limit->nsecs * (limit->rate + limit->burst),
limit->rate);
}
diff --git a/net/netfilter/nft_payload.c b/net/netfilter/nft_payload.c
index 47d4e0e..1ebee25 100644
--- a/net/netfilter/nft_payload.c
+++ b/net/netfilter/nft_payload.c
@@ -226,8 +226,9 @@ static int nft_payload_offload_ll(struct nft_offload_ctx *ctx,
if (!nft_payload_offload_mask(reg, priv->len, sizeof(__be16)))
return -EOPNOTSUPP;
- NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_VLAN, vlan,
- vlan_tci, sizeof(__be16), reg);
+ NFT_OFFLOAD_MATCH_FLAGS(FLOW_DISSECTOR_KEY_VLAN, vlan,
+ vlan_tci, sizeof(__be16), reg,
+ NFT_OFFLOAD_F_NETWORK2HOST);
break;
case offsetof(struct vlan_ethhdr, h_vlan_encapsulated_proto):
if (!nft_payload_offload_mask(reg, priv->len, sizeof(__be16)))
@@ -241,16 +242,18 @@ static int nft_payload_offload_ll(struct nft_offload_ctx *ctx,
if (!nft_payload_offload_mask(reg, priv->len, sizeof(__be16)))
return -EOPNOTSUPP;
- NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_CVLAN, vlan,
- vlan_tci, sizeof(__be16), reg);
+ NFT_OFFLOAD_MATCH_FLAGS(FLOW_DISSECTOR_KEY_CVLAN, cvlan,
+ vlan_tci, sizeof(__be16), reg,
+ NFT_OFFLOAD_F_NETWORK2HOST);
break;
case offsetof(struct vlan_ethhdr, h_vlan_encapsulated_proto) +
sizeof(struct vlan_hdr):
if (!nft_payload_offload_mask(reg, priv->len, sizeof(__be16)))
return -EOPNOTSUPP;
- NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_CVLAN, vlan,
+ NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_CVLAN, cvlan,
vlan_tpid, sizeof(__be16), reg);
+ nft_offload_set_dependency(ctx, NFT_OFFLOAD_DEP_NETWORK);
break;
default:
return -EOPNOTSUPP;
diff --git a/net/netfilter/nft_set_hash.c b/net/netfilter/nft_set_hash.c
index 4d3f147..d7083bc 100644
--- a/net/netfilter/nft_set_hash.c
+++ b/net/netfilter/nft_set_hash.c
@@ -393,9 +393,17 @@ static void nft_rhash_destroy(const struct nft_set *set)
(void *)set);
}
+/* Number of buckets is stored in u32, so cap our result to 1U<<31 */
+#define NFT_MAX_BUCKETS (1U << 31)
+
static u32 nft_hash_buckets(u32 size)
{
- return roundup_pow_of_two(size * 4 / 3);
+ u64 val = div_u64((u64)size * 4, 3);
+
+ if (val >= NFT_MAX_BUCKETS)
+ return NFT_MAX_BUCKETS;
+
+ return roundup_pow_of_two(val);
}
static bool nft_rhash_estimate(const struct nft_set_desc *desc, u32 features,
diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
index 9944523..2d73f26 100644
--- a/net/netfilter/nft_set_pipapo.c
+++ b/net/netfilter/nft_set_pipapo.c
@@ -408,8 +408,8 @@ int pipapo_refill(unsigned long *map, int len, int rules, unsigned long *dst,
*
* Return: true on match, false otherwise.
*/
-static bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
- const u32 *key, const struct nft_set_ext **ext)
+bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
+ const u32 *key, const struct nft_set_ext **ext)
{
struct nft_pipapo *priv = nft_set_priv(set);
unsigned long *res_map, *fill_map;
diff --git a/net/netfilter/nft_set_pipapo.h b/net/netfilter/nft_set_pipapo.h
index 25a7559..d84afb8 100644
--- a/net/netfilter/nft_set_pipapo.h
+++ b/net/netfilter/nft_set_pipapo.h
@@ -178,6 +178,8 @@ struct nft_pipapo_elem {
int pipapo_refill(unsigned long *map, int len, int rules, unsigned long *dst,
union nft_pipapo_map_bucket *mt, bool match_only);
+bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
+ const u32 *key, const struct nft_set_ext **ext);
/**
* pipapo_and_field_buckets_4bit() - Intersect 4-bit buckets
diff --git a/net/netfilter/nft_set_pipapo_avx2.c b/net/netfilter/nft_set_pipapo_avx2.c
index d65ae0e..eabdb8d 100644
--- a/net/netfilter/nft_set_pipapo_avx2.c
+++ b/net/netfilter/nft_set_pipapo_avx2.c
@@ -1131,6 +1131,9 @@ bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
bool map_index;
int i, ret = 0;
+ if (unlikely(!irq_fpu_usable()))
+ return nft_pipapo_lookup(net, set, key, ext);
+
m = rcu_dereference(priv->match);
/* This also protects access to all data related to scratch maps */
diff --git a/net/netfilter/x_tables.c b/net/netfilter/x_tables.c
index 6bd31a7..92e9d4e 100644
--- a/net/netfilter/x_tables.c
+++ b/net/netfilter/x_tables.c
@@ -733,7 +733,7 @@ void xt_compat_match_from_user(struct xt_entry_match *m, void **dstptr,
{
const struct xt_match *match = m->u.kernel.match;
struct compat_xt_entry_match *cm = (struct compat_xt_entry_match *)m;
- int pad, off = xt_compat_match_offset(match);
+ int off = xt_compat_match_offset(match);
u_int16_t msize = cm->u.user.match_size;
char name[sizeof(m->u.user.name)];
@@ -743,9 +743,6 @@ void xt_compat_match_from_user(struct xt_entry_match *m, void **dstptr,
match->compat_from_user(m->data, cm->data);
else
memcpy(m->data, cm->data, msize - sizeof(*cm));
- pad = XT_ALIGN(match->matchsize) - match->matchsize;
- if (pad > 0)
- memset(m->data + match->matchsize, 0, pad);
msize += off;
m->u.user.match_size = msize;
@@ -1116,7 +1113,7 @@ void xt_compat_target_from_user(struct xt_entry_target *t, void **dstptr,
{
const struct xt_target *target = t->u.kernel.target;
struct compat_xt_entry_target *ct = (struct compat_xt_entry_target *)t;
- int pad, off = xt_compat_target_offset(target);
+ int off = xt_compat_target_offset(target);
u_int16_t tsize = ct->u.user.target_size;
char name[sizeof(t->u.user.name)];
@@ -1126,9 +1123,6 @@ void xt_compat_target_from_user(struct xt_entry_target *t, void **dstptr,
target->compat_from_user(t->data, ct->data);
else
memcpy(t->data, ct->data, tsize - sizeof(*ct));
- pad = XT_ALIGN(target->targetsize) - target->targetsize;
- if (pad > 0)
- memset(t->data + target->targetsize, 0, pad);
tsize += off;
t->u.user.target_size = tsize;
diff --git a/net/netfilter/xt_SECMARK.c b/net/netfilter/xt_SECMARK.c
index 75625d1..498a0bf 100644
--- a/net/netfilter/xt_SECMARK.c
+++ b/net/netfilter/xt_SECMARK.c
@@ -24,10 +24,9 @@ MODULE_ALIAS("ip6t_SECMARK");
static u8 mode;
static unsigned int
-secmark_tg(struct sk_buff *skb, const struct xt_action_param *par)
+secmark_tg(struct sk_buff *skb, const struct xt_secmark_target_info_v1 *info)
{
u32 secmark = 0;
- const struct xt_secmark_target_info *info = par->targinfo;
switch (mode) {
case SECMARK_MODE_SEL:
@@ -41,7 +40,7 @@ secmark_tg(struct sk_buff *skb, const struct xt_action_param *par)
return XT_CONTINUE;
}
-static int checkentry_lsm(struct xt_secmark_target_info *info)
+static int checkentry_lsm(struct xt_secmark_target_info_v1 *info)
{
int err;
@@ -73,15 +72,15 @@ static int checkentry_lsm(struct xt_secmark_target_info *info)
return 0;
}
-static int secmark_tg_check(const struct xt_tgchk_param *par)
+static int
+secmark_tg_check(const char *table, struct xt_secmark_target_info_v1 *info)
{
- struct xt_secmark_target_info *info = par->targinfo;
int err;
- if (strcmp(par->table, "mangle") != 0 &&
- strcmp(par->table, "security") != 0) {
+ if (strcmp(table, "mangle") != 0 &&
+ strcmp(table, "security") != 0) {
pr_info_ratelimited("only valid in \'mangle\' or \'security\' table, not \'%s\'\n",
- par->table);
+ table);
return -EINVAL;
}
@@ -116,25 +115,76 @@ static void secmark_tg_destroy(const struct xt_tgdtor_param *par)
}
}
-static struct xt_target secmark_tg_reg __read_mostly = {
- .name = "SECMARK",
- .revision = 0,
- .family = NFPROTO_UNSPEC,
- .checkentry = secmark_tg_check,
- .destroy = secmark_tg_destroy,
- .target = secmark_tg,
- .targetsize = sizeof(struct xt_secmark_target_info),
- .me = THIS_MODULE,
+static int secmark_tg_check_v0(const struct xt_tgchk_param *par)
+{
+ struct xt_secmark_target_info *info = par->targinfo;
+ struct xt_secmark_target_info_v1 newinfo = {
+ .mode = info->mode,
+ };
+ int ret;
+
+ memcpy(newinfo.secctx, info->secctx, SECMARK_SECCTX_MAX);
+
+ ret = secmark_tg_check(par->table, &newinfo);
+ info->secid = newinfo.secid;
+
+ return ret;
+}
+
+static unsigned int
+secmark_tg_v0(struct sk_buff *skb, const struct xt_action_param *par)
+{
+ const struct xt_secmark_target_info *info = par->targinfo;
+ struct xt_secmark_target_info_v1 newinfo = {
+ .secid = info->secid,
+ };
+
+ return secmark_tg(skb, &newinfo);
+}
+
+static int secmark_tg_check_v1(const struct xt_tgchk_param *par)
+{
+ return secmark_tg_check(par->table, par->targinfo);
+}
+
+static unsigned int
+secmark_tg_v1(struct sk_buff *skb, const struct xt_action_param *par)
+{
+ return secmark_tg(skb, par->targinfo);
+}
+
+static struct xt_target secmark_tg_reg[] __read_mostly = {
+ {
+ .name = "SECMARK",
+ .revision = 0,
+ .family = NFPROTO_UNSPEC,
+ .checkentry = secmark_tg_check_v0,
+ .destroy = secmark_tg_destroy,
+ .target = secmark_tg_v0,
+ .targetsize = sizeof(struct xt_secmark_target_info),
+ .me = THIS_MODULE,
+ },
+ {
+ .name = "SECMARK",
+ .revision = 1,
+ .family = NFPROTO_UNSPEC,
+ .checkentry = secmark_tg_check_v1,
+ .destroy = secmark_tg_destroy,
+ .target = secmark_tg_v1,
+ .targetsize = sizeof(struct xt_secmark_target_info_v1),
+ .usersize = offsetof(struct xt_secmark_target_info_v1, secid),
+ .me = THIS_MODULE,
+ },
};
static int __init secmark_tg_init(void)
{
- return xt_register_target(&secmark_tg_reg);
+ return xt_register_targets(secmark_tg_reg, ARRAY_SIZE(secmark_tg_reg));
}
static void __exit secmark_tg_exit(void)
{
- xt_unregister_target(&secmark_tg_reg);
+ xt_unregister_targets(secmark_tg_reg, ARRAY_SIZE(secmark_tg_reg));
}
module_init(secmark_tg_init);
diff --git a/net/nfc/digital_dep.c b/net/nfc/digital_dep.c
index 5971fb6..dc21b41 100644
--- a/net/nfc/digital_dep.c
+++ b/net/nfc/digital_dep.c
@@ -1273,6 +1273,8 @@ static void digital_tg_recv_dep_req(struct nfc_digital_dev *ddev, void *arg,
}
rc = nfc_tm_data_received(ddev->nfc_dev, resp);
+ if (rc)
+ resp = NULL;
exit:
kfree_skb(ddev->chaining_skb);
diff --git a/net/nfc/llcp_sock.c b/net/nfc/llcp_sock.c
index d257ed3..53dbe73 100644
--- a/net/nfc/llcp_sock.c
+++ b/net/nfc/llcp_sock.c
@@ -108,11 +108,15 @@ static int llcp_sock_bind(struct socket *sock, struct sockaddr *addr, int alen)
llcp_sock->service_name_len,
GFP_KERNEL);
if (!llcp_sock->service_name) {
+ nfc_llcp_local_put(llcp_sock->local);
+ llcp_sock->local = NULL;
ret = -ENOMEM;
goto put_dev;
}
llcp_sock->ssap = nfc_llcp_get_sdp_ssap(local, llcp_sock);
if (llcp_sock->ssap == LLCP_SAP_MAX) {
+ nfc_llcp_local_put(llcp_sock->local);
+ llcp_sock->local = NULL;
kfree(llcp_sock->service_name);
llcp_sock->service_name = NULL;
ret = -EADDRINUSE;
@@ -671,6 +675,10 @@ static int llcp_sock_connect(struct socket *sock, struct sockaddr *_addr,
ret = -EISCONN;
goto error;
}
+ if (sk->sk_state == LLCP_CONNECTING) {
+ ret = -EINPROGRESS;
+ goto error;
+ }
dev = nfc_get_device(addr->dev_idx);
if (dev == NULL) {
@@ -702,6 +710,8 @@ static int llcp_sock_connect(struct socket *sock, struct sockaddr *_addr,
llcp_sock->local = nfc_llcp_local_get(local);
llcp_sock->ssap = nfc_llcp_get_local_ssap(local);
if (llcp_sock->ssap == LLCP_SAP_MAX) {
+ nfc_llcp_local_put(llcp_sock->local);
+ llcp_sock->local = NULL;
ret = -ENOMEM;
goto put_dev;
}
@@ -743,9 +753,13 @@ static int llcp_sock_connect(struct socket *sock, struct sockaddr *_addr,
sock_unlink:
nfc_llcp_sock_unlink(&local->connecting_sockets, sk);
+ kfree(llcp_sock->service_name);
+ llcp_sock->service_name = NULL;
sock_llcp_release:
nfc_llcp_put_ssap(local, llcp_sock->ssap);
+ nfc_llcp_local_put(llcp_sock->local);
+ llcp_sock->local = NULL;
put_dev:
nfc_put_device(dev);
diff --git a/net/nfc/nci/core.c b/net/nfc/nci/core.c
index 741da8f..32e8154 100644
--- a/net/nfc/nci/core.c
+++ b/net/nfc/nci/core.c
@@ -1175,6 +1175,7 @@ EXPORT_SYMBOL(nci_allocate_device);
void nci_free_device(struct nci_dev *ndev)
{
nfc_free_device(ndev->nfc_dev);
+ nci_hci_deallocate(ndev);
kfree(ndev);
}
EXPORT_SYMBOL(nci_free_device);
diff --git a/net/nfc/nci/hci.c b/net/nfc/nci/hci.c
index c18e76d..04e55cc 100644
--- a/net/nfc/nci/hci.c
+++ b/net/nfc/nci/hci.c
@@ -795,3 +795,8 @@ struct nci_hci_dev *nci_hci_allocate(struct nci_dev *ndev)
return hdev;
}
+
+void nci_hci_deallocate(struct nci_dev *ndev)
+{
+ kfree(ndev->hci_dev);
+}
diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c
index e8902a7..fc487f9 100644
--- a/net/openvswitch/actions.c
+++ b/net/openvswitch/actions.c
@@ -827,17 +827,17 @@ static void ovs_fragment(struct net *net, struct vport *vport,
}
if (key->eth.type == htons(ETH_P_IP)) {
- struct dst_entry ovs_dst;
+ struct rtable ovs_rt = { 0 };
unsigned long orig_dst;
prepare_frag(vport, skb, orig_network_offset,
ovs_key_mac_proto(key));
- dst_init(&ovs_dst, &ovs_dst_ops, NULL, 1,
+ dst_init(&ovs_rt.dst, &ovs_dst_ops, NULL, 1,
DST_OBSOLETE_NONE, DST_NOCOUNT);
- ovs_dst.dev = vport->dev;
+ ovs_rt.dst.dev = vport->dev;
orig_dst = skb->_skb_refdst;
- skb_dst_set_noref(skb, &ovs_dst);
+ skb_dst_set_noref(skb, &ovs_rt.dst);
IPCB(skb)->frag_max_size = mru;
ip_do_fragment(net, skb->sk, skb, ovs_vport_output);
diff --git a/net/openvswitch/conntrack.c b/net/openvswitch/conntrack.c
index 4beb961..a11b558 100644
--- a/net/openvswitch/conntrack.c
+++ b/net/openvswitch/conntrack.c
@@ -2024,16 +2024,12 @@ static int ovs_ct_limit_del_zone_limit(struct nlattr *nla_zone_limit,
static int ovs_ct_limit_get_default_limit(struct ovs_ct_limit_info *info,
struct sk_buff *reply)
{
- struct ovs_zone_limit zone_limit;
- int err;
+ struct ovs_zone_limit zone_limit = {
+ .zone_id = OVS_ZONE_LIMIT_DEFAULT_ZONE,
+ .limit = info->default_limit,
+ };
- zone_limit.zone_id = OVS_ZONE_LIMIT_DEFAULT_ZONE;
- zone_limit.limit = info->default_limit;
- err = nla_put_nohdr(reply, sizeof(zone_limit), &zone_limit);
- if (err)
- return err;
-
- return 0;
+ return nla_put_nohdr(reply, sizeof(zone_limit), &zone_limit);
}
static int __ovs_ct_limit_get_zone_limit(struct net *net,
diff --git a/net/openvswitch/meter.c b/net/openvswitch/meter.c
index 8fbefd5..e594b4d 100644
--- a/net/openvswitch/meter.c
+++ b/net/openvswitch/meter.c
@@ -611,6 +611,14 @@ bool ovs_meter_execute(struct datapath *dp, struct sk_buff *skb,
spin_lock(&meter->lock);
long_delta_ms = (now_ms - meter->used); /* ms */
+ if (long_delta_ms < 0) {
+ /* This condition means that we have several threads fighting
+ * for a meter lock, and the one who received the packets a
+ * bit later wins. Assuming that all racing threads received
+ * packets at the same time to avoid overflow.
+ */
+ long_delta_ms = 0;
+ }
/* Make sure delta_ms will not be too large, so that bucket will not
* wrap around below.
diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
index a0121e7..ddb68aa 100644
--- a/net/packet/af_packet.c
+++ b/net/packet/af_packet.c
@@ -421,7 +421,8 @@ static __u32 tpacket_get_timestamp(struct sk_buff *skb, struct timespec64 *ts,
ktime_to_timespec64_cond(shhwtstamps->hwtstamp, ts))
return TP_STATUS_TS_RAW_HARDWARE;
- if (ktime_to_timespec64_cond(skb->tstamp, ts))
+ if ((flags & SOF_TIMESTAMPING_SOFTWARE) &&
+ ktime_to_timespec64_cond(skb->tstamp, ts))
return TP_STATUS_TS_SOFTWARE;
return 0;
@@ -1358,7 +1359,7 @@ static unsigned int fanout_demux_rollover(struct packet_fanout *f,
struct packet_sock *po, *po_next, *po_skip = NULL;
unsigned int i, j, room = ROOM_NONE;
- po = pkt_sk(f->arr[idx]);
+ po = pkt_sk(rcu_dereference(f->arr[idx]));
if (try_self) {
room = packet_rcv_has_room(po, skb);
@@ -1370,7 +1371,7 @@ static unsigned int fanout_demux_rollover(struct packet_fanout *f,
i = j = min_t(int, po->rollover->sock, num - 1);
do {
- po_next = pkt_sk(f->arr[i]);
+ po_next = pkt_sk(rcu_dereference(f->arr[i]));
if (po_next != po_skip && !READ_ONCE(po_next->pressure) &&
packet_rcv_has_room(po_next, skb) == ROOM_NORMAL) {
if (i != j)
@@ -1465,7 +1466,7 @@ static int packet_rcv_fanout(struct sk_buff *skb, struct net_device *dev,
if (fanout_has_flag(f, PACKET_FANOUT_FLAG_ROLLOVER))
idx = fanout_demux_rollover(f, skb, idx, true, num);
- po = pkt_sk(f->arr[idx]);
+ po = pkt_sk(rcu_dereference(f->arr[idx]));
return po->prot_hook.func(skb, dev, &po->prot_hook, orig_dev);
}
@@ -1479,7 +1480,7 @@ static void __fanout_link(struct sock *sk, struct packet_sock *po)
struct packet_fanout *f = po->fanout;
spin_lock(&f->lock);
- f->arr[f->num_members] = sk;
+ rcu_assign_pointer(f->arr[f->num_members], sk);
smp_wmb();
f->num_members++;
if (f->num_members == 1)
@@ -1494,11 +1495,14 @@ static void __fanout_unlink(struct sock *sk, struct packet_sock *po)
spin_lock(&f->lock);
for (i = 0; i < f->num_members; i++) {
- if (f->arr[i] == sk)
+ if (rcu_dereference_protected(f->arr[i],
+ lockdep_is_held(&f->lock)) == sk)
break;
}
BUG_ON(i >= f->num_members);
- f->arr[i] = f->arr[f->num_members - 1];
+ rcu_assign_pointer(f->arr[i],
+ rcu_dereference_protected(f->arr[f->num_members - 1],
+ lockdep_is_held(&f->lock)));
f->num_members--;
if (f->num_members == 0)
__dev_remove_pack(&f->prot_hook);
@@ -1636,13 +1640,15 @@ static bool fanout_find_new_id(struct sock *sk, u16 *new_id)
return false;
}
-static int fanout_add(struct sock *sk, u16 id, u16 type_flags)
+static int fanout_add(struct sock *sk, struct fanout_args *args)
{
struct packet_rollover *rollover = NULL;
struct packet_sock *po = pkt_sk(sk);
+ u16 type_flags = args->type_flags;
struct packet_fanout *f, *match;
u8 type = type_flags & 0xff;
u8 flags = type_flags >> 8;
+ u16 id = args->id;
int err;
switch (type) {
@@ -1700,11 +1706,21 @@ static int fanout_add(struct sock *sk, u16 id, u16 type_flags)
}
}
err = -EINVAL;
- if (match && match->flags != flags)
- goto out;
- if (!match) {
+ if (match) {
+ if (match->flags != flags)
+ goto out;
+ if (args->max_num_members &&
+ args->max_num_members != match->max_num_members)
+ goto out;
+ } else {
+ if (args->max_num_members > PACKET_FANOUT_MAX)
+ goto out;
+ if (!args->max_num_members)
+ /* legacy PACKET_FANOUT_MAX */
+ args->max_num_members = 256;
err = -ENOMEM;
- match = kzalloc(sizeof(*match), GFP_KERNEL);
+ match = kvzalloc(struct_size(match, arr, args->max_num_members),
+ GFP_KERNEL);
if (!match)
goto out;
write_pnet(&match->net, sock_net(sk));
@@ -1720,6 +1736,7 @@ static int fanout_add(struct sock *sk, u16 id, u16 type_flags)
match->prot_hook.func = packet_rcv_fanout;
match->prot_hook.af_packet_priv = match;
match->prot_hook.id_match = match_fanout_group;
+ match->max_num_members = args->max_num_members;
list_add(&match->list, &fanout_list);
}
err = -EINVAL;
@@ -1730,7 +1747,7 @@ static int fanout_add(struct sock *sk, u16 id, u16 type_flags)
match->prot_hook.type == po->prot_hook.type &&
match->prot_hook.dev == po->prot_hook.dev) {
err = -ENOSPC;
- if (refcount_read(&match->sk_ref) < PACKET_FANOUT_MAX) {
+ if (refcount_read(&match->sk_ref) < match->max_num_members) {
__dev_remove_pack(&po->prot_hook);
po->fanout = match;
po->rollover = rollover;
@@ -1744,7 +1761,7 @@ static int fanout_add(struct sock *sk, u16 id, u16 type_flags)
if (err && !refcount_read(&match->sk_ref)) {
list_del(&match->list);
- kfree(match);
+ kvfree(match);
}
out:
@@ -2323,7 +2340,12 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev,
skb_copy_bits(skb, 0, h.raw + macoff, snaplen);
- if (!(ts_status = tpacket_get_timestamp(skb, &ts, po->tp_tstamp)))
+ /* Always timestamp; prefer an existing software timestamp taken
+ * closer to the time of capture.
+ */
+ ts_status = tpacket_get_timestamp(skb, &ts,
+ po->tp_tstamp | SOF_TIMESTAMPING_SOFTWARE);
+ if (!ts_status)
ktime_get_real_ts64(&ts);
status |= ts_status;
@@ -3075,7 +3097,7 @@ static int packet_release(struct socket *sock)
kfree(po->rollover);
if (f) {
fanout_release_data(f);
- kfree(f);
+ kvfree(f);
}
/*
* Now the socket is dead. No more input will appear.
@@ -3866,14 +3888,14 @@ packet_setsockopt(struct socket *sock, int level, int optname, sockptr_t optval,
}
case PACKET_FANOUT:
{
- int val;
+ struct fanout_args args = { 0 };
- if (optlen != sizeof(val))
+ if (optlen != sizeof(int) && optlen != sizeof(args))
return -EINVAL;
- if (copy_from_sockptr(&val, optval, sizeof(val)))
+ if (copy_from_sockptr(&args, optval, optlen))
return -EFAULT;
- return fanout_add(sk, val & 0xffff, val >> 16);
+ return fanout_add(sk, &args);
}
case PACKET_FANOUT_DATA:
{
diff --git a/net/packet/internal.h b/net/packet/internal.h
index fd41ecb..7af1e91 100644
--- a/net/packet/internal.h
+++ b/net/packet/internal.h
@@ -77,11 +77,12 @@ struct packet_ring_buffer {
};
extern struct mutex fanout_mutex;
-#define PACKET_FANOUT_MAX 256
+#define PACKET_FANOUT_MAX (1 << 16)
struct packet_fanout {
possible_net_t net;
unsigned int num_members;
+ u32 max_num_members;
u16 id;
u8 type;
u8 flags;
@@ -90,10 +91,10 @@ struct packet_fanout {
struct bpf_prog __rcu *bpf_prog;
};
struct list_head list;
- struct sock *arr[PACKET_FANOUT_MAX];
spinlock_t lock;
refcount_t sk_ref;
struct packet_type prot_hook ____cacheline_aligned_in_smp;
+ struct sock __rcu *arr[];
};
struct packet_rollover {
diff --git a/net/qrtr/mhi.c b/net/qrtr/mhi.c
index ff0c414..0fe4bf4 100644
--- a/net/qrtr/mhi.c
+++ b/net/qrtr/mhi.c
@@ -50,6 +50,9 @@ static int qcom_mhi_qrtr_send(struct qrtr_endpoint *ep, struct sk_buff *skb)
struct qrtr_mhi_dev *qdev = container_of(ep, struct qrtr_mhi_dev, ep);
int rc;
+ if (skb->sk)
+ sock_hold(skb->sk);
+
rc = skb_linearize(skb);
if (rc)
goto free_skb;
@@ -59,12 +62,11 @@ static int qcom_mhi_qrtr_send(struct qrtr_endpoint *ep, struct sk_buff *skb)
if (rc)
goto free_skb;
- if (skb->sk)
- sock_hold(skb->sk);
-
return rc;
free_skb:
+ if (skb->sk)
+ sock_put(skb->sk);
kfree_skb(skb);
return rc;
diff --git a/net/qrtr/qrtr.c b/net/qrtr/qrtr.c
index 45fbf5f..93a7edc 100644
--- a/net/qrtr/qrtr.c
+++ b/net/qrtr/qrtr.c
@@ -266,7 +266,10 @@ static int qrtr_tx_wait(struct qrtr_node *node, int dest_node, int dest_port,
flow = kzalloc(sizeof(*flow), GFP_KERNEL);
if (flow) {
init_waitqueue_head(&flow->resume_tx);
- radix_tree_insert(&node->qrtr_tx_flow, key, flow);
+ if (radix_tree_insert(&node->qrtr_tx_flow, key, flow)) {
+ kfree(flow);
+ flow = NULL;
+ }
}
}
mutex_unlock(&node->qrtr_tx_lock);
diff --git a/net/rds/message.c b/net/rds/message.c
index 071a261..799034e 100644
--- a/net/rds/message.c
+++ b/net/rds/message.c
@@ -347,8 +347,9 @@ struct rds_message *rds_message_map_pages(unsigned long *page_addrs, unsigned in
rm->data.op_nents = DIV_ROUND_UP(total_len, PAGE_SIZE);
rm->data.op_sg = rds_message_alloc_sgs(rm, num_sgs);
if (IS_ERR(rm->data.op_sg)) {
+ void *err = ERR_CAST(rm->data.op_sg);
rds_message_put(rm);
- return ERR_CAST(rm->data.op_sg);
+ return err;
}
for (i = 0; i < rm->data.op_nents; ++i) {
diff --git a/net/sched/act_api.c b/net/sched/act_api.c
index 181c4b5..88e14cf 100644
--- a/net/sched/act_api.c
+++ b/net/sched/act_api.c
@@ -142,7 +142,7 @@ static int __tcf_action_put(struct tc_action *p, bool bind)
return 0;
}
-int __tcf_idr_release(struct tc_action *p, bool bind, bool strict)
+static int __tcf_idr_release(struct tc_action *p, bool bind, bool strict)
{
int ret = 0;
@@ -168,7 +168,18 @@ int __tcf_idr_release(struct tc_action *p, bool bind, bool strict)
return ret;
}
-EXPORT_SYMBOL(__tcf_idr_release);
+
+int tcf_idr_release(struct tc_action *a, bool bind)
+{
+ const struct tc_action_ops *ops = a->ops;
+ int ret;
+
+ ret = __tcf_idr_release(a, bind, false);
+ if (ret == ACT_P_DELETED)
+ module_put(ops->owner);
+ return ret;
+}
+EXPORT_SYMBOL(tcf_idr_release);
static size_t tcf_action_shared_attrs_size(const struct tc_action *act)
{
@@ -445,6 +456,7 @@ int tcf_idr_create(struct tc_action_net *tn, u32 index, struct nlattr *est,
}
p->idrinfo = idrinfo;
+ __module_get(ops->owner);
p->ops = ops;
*a = p;
return 0;
@@ -972,7 +984,8 @@ struct tc_action_ops *tc_action_load_ops(char *name, struct nlattr *nla,
struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp,
struct nlattr *nla, struct nlattr *est,
char *name, int ovr, int bind,
- struct tc_action_ops *a_o, bool rtnl_held,
+ struct tc_action_ops *a_o, int *init_res,
+ bool rtnl_held,
struct netlink_ext_ack *extack)
{
struct nla_bitfield32 flags = { 0, 0 };
@@ -1008,6 +1021,7 @@ struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp,
}
if (err < 0)
goto err_out;
+ *init_res = err;
if (!name && tb[TCA_ACT_COOKIE])
tcf_set_action_cookie(&a->act_cookie, cookie);
@@ -1015,13 +1029,6 @@ struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp,
if (!name)
a->hw_stats = hw_stats;
- /* module count goes up only when brand new policy is created
- * if it exists and is only bound to in a_o->init() then
- * ACT_P_CREATED is not returned (a zero is).
- */
- if (err != ACT_P_CREATED)
- module_put(a_o->owner);
-
return a;
err_out:
@@ -1036,7 +1043,7 @@ struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp,
int tcf_action_init(struct net *net, struct tcf_proto *tp, struct nlattr *nla,
struct nlattr *est, char *name, int ovr, int bind,
- struct tc_action *actions[], size_t *attr_size,
+ struct tc_action *actions[], int init_res[], size_t *attr_size,
bool rtnl_held, struct netlink_ext_ack *extack)
{
struct tc_action_ops *ops[TCA_ACT_MAX_PRIO] = {};
@@ -1064,7 +1071,8 @@ int tcf_action_init(struct net *net, struct tcf_proto *tp, struct nlattr *nla,
for (i = 1; i <= TCA_ACT_MAX_PRIO && tb[i]; i++) {
act = tcf_action_init_1(net, tp, tb[i], est, name, ovr, bind,
- ops[i - 1], rtnl_held, extack);
+ ops[i - 1], &init_res[i - 1], rtnl_held,
+ extack);
if (IS_ERR(act)) {
err = PTR_ERR(act);
goto err;
@@ -1080,7 +1088,8 @@ int tcf_action_init(struct net *net, struct tcf_proto *tp, struct nlattr *nla,
tcf_idr_insert_many(actions);
*attr_size = tcf_action_full_attrs_size(sz);
- return i - 1;
+ err = i - 1;
+ goto err_mod;
err:
tcf_action_destroy(actions, bind);
@@ -1477,12 +1486,13 @@ static int tcf_action_add(struct net *net, struct nlattr *nla,
struct netlink_ext_ack *extack)
{
size_t attr_size = 0;
- int loop, ret;
+ int loop, ret, i;
struct tc_action *actions[TCA_ACT_MAX_PRIO] = {};
+ int init_res[TCA_ACT_MAX_PRIO] = {};
for (loop = 0; loop < 10; loop++) {
ret = tcf_action_init(net, NULL, nla, NULL, NULL, ovr, 0,
- actions, &attr_size, true, extack);
+ actions, init_res, &attr_size, true, extack);
if (ret != -EAGAIN)
break;
}
@@ -1490,8 +1500,12 @@ static int tcf_action_add(struct net *net, struct nlattr *nla,
if (ret < 0)
return ret;
ret = tcf_add_notify(net, n, actions, portid, attr_size, extack);
- if (ovr)
- tcf_action_put_many(actions);
+
+ /* only put existing actions */
+ for (i = 0; i < TCA_ACT_MAX_PRIO; i++)
+ if (init_res[i] == ACT_P_CREATED)
+ actions[i] = NULL;
+ tcf_action_put_many(actions);
return ret;
}
diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
index b2b7834..a281da0 100644
--- a/net/sched/cls_api.c
+++ b/net/sched/cls_api.c
@@ -646,7 +646,7 @@ static void tc_block_indr_cleanup(struct flow_block_cb *block_cb)
struct net_device *dev = block_cb->indr.dev;
struct Qdisc *sch = block_cb->indr.sch;
struct netlink_ext_ack extack = {};
- struct flow_block_offload bo;
+ struct flow_block_offload bo = {};
tcf_block_offload_init(&bo, dev, sch, FLOW_BLOCK_UNBIND,
block_cb->indr.binder_type,
@@ -1625,7 +1625,7 @@ int tcf_classify_ingress(struct sk_buff *skb,
/* If we missed on some chain */
if (ret == TC_ACT_UNSPEC && last_executed_chain) {
- ext = skb_ext_add(skb, TC_SKB_EXT);
+ ext = tc_skb_ext_alloc(skb);
if (WARN_ON_ONCE(!ext))
return TC_ACT_SHOT;
ext->chain = last_executed_chain;
@@ -3051,6 +3051,7 @@ int tcf_exts_validate(struct net *net, struct tcf_proto *tp, struct nlattr **tb,
{
#ifdef CONFIG_NET_CLS_ACT
{
+ int init_res[TCA_ACT_MAX_PRIO] = {};
struct tc_action *act;
size_t attr_size = 0;
@@ -3062,12 +3063,11 @@ int tcf_exts_validate(struct net *net, struct tcf_proto *tp, struct nlattr **tb,
return PTR_ERR(a_o);
act = tcf_action_init_1(net, tp, tb[exts->police],
rate_tlv, "police", ovr,
- TCA_ACT_BIND, a_o, rtnl_held,
- extack);
- if (IS_ERR(act)) {
- module_put(a_o->owner);
+ TCA_ACT_BIND, a_o, init_res,
+ rtnl_held, extack);
+ module_put(a_o->owner);
+ if (IS_ERR(act))
return PTR_ERR(act);
- }
act->type = exts->type = TCA_OLD_COMPAT;
exts->actions[0] = act;
@@ -3078,8 +3078,8 @@ int tcf_exts_validate(struct net *net, struct tcf_proto *tp, struct nlattr **tb,
err = tcf_action_init(net, tp, tb[exts->action],
rate_tlv, NULL, ovr, TCA_ACT_BIND,
- exts->actions, &attr_size,
- rtnl_held, extack);
+ exts->actions, init_res,
+ &attr_size, rtnl_held, extack);
if (err < 0)
return err;
exts->nr_actions = err;
diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
index 14316ba..a5212a3 100644
--- a/net/sched/cls_flower.c
+++ b/net/sched/cls_flower.c
@@ -209,16 +209,16 @@ static bool fl_range_port_dst_cmp(struct cls_fl_filter *filter,
struct fl_flow_key *key,
struct fl_flow_key *mkey)
{
- __be16 min_mask, max_mask, min_val, max_val;
+ u16 min_mask, max_mask, min_val, max_val;
- min_mask = htons(filter->mask->key.tp_range.tp_min.dst);
- max_mask = htons(filter->mask->key.tp_range.tp_max.dst);
- min_val = htons(filter->key.tp_range.tp_min.dst);
- max_val = htons(filter->key.tp_range.tp_max.dst);
+ min_mask = ntohs(filter->mask->key.tp_range.tp_min.dst);
+ max_mask = ntohs(filter->mask->key.tp_range.tp_max.dst);
+ min_val = ntohs(filter->key.tp_range.tp_min.dst);
+ max_val = ntohs(filter->key.tp_range.tp_max.dst);
if (min_mask && max_mask) {
- if (htons(key->tp_range.tp.dst) < min_val ||
- htons(key->tp_range.tp.dst) > max_val)
+ if (ntohs(key->tp_range.tp.dst) < min_val ||
+ ntohs(key->tp_range.tp.dst) > max_val)
return false;
/* skb does not have min and max values */
@@ -232,16 +232,16 @@ static bool fl_range_port_src_cmp(struct cls_fl_filter *filter,
struct fl_flow_key *key,
struct fl_flow_key *mkey)
{
- __be16 min_mask, max_mask, min_val, max_val;
+ u16 min_mask, max_mask, min_val, max_val;
- min_mask = htons(filter->mask->key.tp_range.tp_min.src);
- max_mask = htons(filter->mask->key.tp_range.tp_max.src);
- min_val = htons(filter->key.tp_range.tp_min.src);
- max_val = htons(filter->key.tp_range.tp_max.src);
+ min_mask = ntohs(filter->mask->key.tp_range.tp_min.src);
+ max_mask = ntohs(filter->mask->key.tp_range.tp_max.src);
+ min_val = ntohs(filter->key.tp_range.tp_min.src);
+ max_val = ntohs(filter->key.tp_range.tp_max.src);
if (min_mask && max_mask) {
- if (htons(key->tp_range.tp.src) < min_val ||
- htons(key->tp_range.tp.src) > max_val)
+ if (ntohs(key->tp_range.tp.src) < min_val ||
+ ntohs(key->tp_range.tp.src) > max_val)
return false;
/* skb does not have min and max values */
@@ -779,16 +779,16 @@ static int fl_set_key_port_range(struct nlattr **tb, struct fl_flow_key *key,
TCA_FLOWER_UNSPEC, sizeof(key->tp_range.tp_max.src));
if (mask->tp_range.tp_min.dst && mask->tp_range.tp_max.dst &&
- htons(key->tp_range.tp_max.dst) <=
- htons(key->tp_range.tp_min.dst)) {
+ ntohs(key->tp_range.tp_max.dst) <=
+ ntohs(key->tp_range.tp_min.dst)) {
NL_SET_ERR_MSG_ATTR(extack,
tb[TCA_FLOWER_KEY_PORT_DST_MIN],
"Invalid destination port range (min must be strictly smaller than max)");
return -EINVAL;
}
if (mask->tp_range.tp_min.src && mask->tp_range.tp_max.src &&
- htons(key->tp_range.tp_max.src) <=
- htons(key->tp_range.tp_min.src)) {
+ ntohs(key->tp_range.tp_max.src) <=
+ ntohs(key->tp_range.tp_min.src)) {
NL_SET_ERR_MSG_ATTR(extack,
tb[TCA_FLOWER_KEY_PORT_SRC_MIN],
"Invalid source port range (min must be strictly smaller than max)");
diff --git a/net/sched/sch_dsmark.c b/net/sched/sch_dsmark.c
index 2b88710..76ed1a0 100644
--- a/net/sched/sch_dsmark.c
+++ b/net/sched/sch_dsmark.c
@@ -406,7 +406,8 @@ static void dsmark_reset(struct Qdisc *sch)
struct dsmark_qdisc_data *p = qdisc_priv(sch);
pr_debug("%s(sch %p,[qdisc %p])\n", __func__, sch, p);
- qdisc_reset(p->q);
+ if (p->q)
+ qdisc_reset(p->q);
sch->qstats.backlog = 0;
sch->q.qlen = 0;
}
diff --git a/net/sched/sch_fq_pie.c b/net/sched/sch_fq_pie.c
index 949163f..cac6849 100644
--- a/net/sched/sch_fq_pie.c
+++ b/net/sched/sch_fq_pie.c
@@ -138,8 +138,15 @@ static int fq_pie_qdisc_enqueue(struct sk_buff *skb, struct Qdisc *sch,
/* Classifies packet into corresponding flow */
idx = fq_pie_classify(skb, sch, &ret);
- sel_flow = &q->flows[idx];
+ if (idx == 0) {
+ if (ret & __NET_XMIT_BYPASS)
+ qdisc_qstats_drop(sch);
+ __qdisc_drop(skb, to_free);
+ return ret;
+ }
+ idx--;
+ sel_flow = &q->flows[idx];
/* Checks whether adding a new packet would exceed memory limit */
get_pie_cb(skb)->mem_usage = skb->truesize;
memory_limited = q->memory_usage > q->memory_limit + skb->truesize;
@@ -297,9 +304,9 @@ static int fq_pie_change(struct Qdisc *sch, struct nlattr *opt,
goto flow_error;
}
q->flows_cnt = nla_get_u32(tb[TCA_FQ_PIE_FLOWS]);
- if (!q->flows_cnt || q->flows_cnt >= 65536) {
+ if (!q->flows_cnt || q->flows_cnt > 65536) {
NL_SET_ERR_MSG_MOD(extack,
- "Number of flows must range in [1..65535]");
+ "Number of flows must range in [1..65536]");
goto flow_error;
}
}
@@ -367,7 +374,7 @@ static void fq_pie_timer(struct timer_list *t)
struct fq_pie_sched_data *q = from_timer(q, t, adapt_timer);
struct Qdisc *sch = q->sch;
spinlock_t *root_lock; /* to lock qdisc for probability calculations */
- u16 idx;
+ u32 idx;
root_lock = qdisc_lock(qdisc_root_sleeping(sch));
spin_lock(root_lock);
@@ -388,7 +395,7 @@ static int fq_pie_init(struct Qdisc *sch, struct nlattr *opt,
{
struct fq_pie_sched_data *q = qdisc_priv(sch);
int err;
- u16 idx;
+ u32 idx;
pie_params_init(&q->p_params);
sch->limit = 10 * 1024;
@@ -500,7 +507,7 @@ static int fq_pie_dump_stats(struct Qdisc *sch, struct gnet_dump *d)
static void fq_pie_reset(struct Qdisc *sch)
{
struct fq_pie_sched_data *q = qdisc_priv(sch);
- u16 idx;
+ u32 idx;
INIT_LIST_HEAD(&q->new_flows);
INIT_LIST_HEAD(&q->old_flows);
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index 49eae93..854d2b3 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -35,6 +35,25 @@
const struct Qdisc_ops *default_qdisc_ops = &pfifo_fast_ops;
EXPORT_SYMBOL(default_qdisc_ops);
+static void qdisc_maybe_clear_missed(struct Qdisc *q,
+ const struct netdev_queue *txq)
+{
+ clear_bit(__QDISC_STATE_MISSED, &q->state);
+
+ /* Make sure the below netif_xmit_frozen_or_stopped()
+ * checking happens after clearing STATE_MISSED.
+ */
+ smp_mb__after_atomic();
+
+ /* Checking netif_xmit_frozen_or_stopped() again to
+ * make sure STATE_MISSED is set if the STATE_MISSED
+ * set by netif_tx_wake_queue()'s rescheduling of
+ * net_tx_action() is cleared by the above clear_bit().
+ */
+ if (!netif_xmit_frozen_or_stopped(txq))
+ set_bit(__QDISC_STATE_MISSED, &q->state);
+}
+
/* Main transmission queue. */
/* Modifications to data participating in scheduling must be protected with
@@ -74,6 +93,7 @@ static inline struct sk_buff *__skb_dequeue_bad_txq(struct Qdisc *q)
}
} else {
skb = SKB_XOFF_MAGIC;
+ qdisc_maybe_clear_missed(q, txq);
}
}
@@ -242,6 +262,7 @@ static struct sk_buff *dequeue_skb(struct Qdisc *q, bool *validate,
}
} else {
skb = NULL;
+ qdisc_maybe_clear_missed(q, txq);
}
if (lock)
spin_unlock(lock);
@@ -251,8 +272,10 @@ static struct sk_buff *dequeue_skb(struct Qdisc *q, bool *validate,
*validate = true;
if ((q->flags & TCQ_F_ONETXQUEUE) &&
- netif_xmit_frozen_or_stopped(txq))
+ netif_xmit_frozen_or_stopped(txq)) {
+ qdisc_maybe_clear_missed(q, txq);
return skb;
+ }
skb = qdisc_dequeue_skb_bad_txq(q);
if (unlikely(skb)) {
@@ -311,6 +334,8 @@ bool sch_direct_xmit(struct sk_buff *skb, struct Qdisc *q,
HARD_TX_LOCK(dev, txq, smp_processor_id());
if (!netif_xmit_frozen_or_stopped(txq))
skb = dev_hard_start_xmit(skb, dev, txq, &ret);
+ else
+ qdisc_maybe_clear_missed(q, txq);
HARD_TX_UNLOCK(dev, txq);
} else {
@@ -640,8 +665,10 @@ static struct sk_buff *pfifo_fast_dequeue(struct Qdisc *qdisc)
{
struct pfifo_fast_priv *priv = qdisc_priv(qdisc);
struct sk_buff *skb = NULL;
+ bool need_retry = true;
int band;
+retry:
for (band = 0; band < PFIFO_FAST_BANDS && !skb; band++) {
struct skb_array *q = band2list(priv, band);
@@ -652,6 +679,23 @@ static struct sk_buff *pfifo_fast_dequeue(struct Qdisc *qdisc)
}
if (likely(skb)) {
qdisc_update_stats_at_dequeue(qdisc, skb);
+ } else if (need_retry &&
+ test_bit(__QDISC_STATE_MISSED, &qdisc->state)) {
+ /* Delay clearing the STATE_MISSED here to reduce
+ * the overhead of the second spin_trylock() in
+ * qdisc_run_begin() and __netif_schedule() calling
+ * in qdisc_run_end().
+ */
+ clear_bit(__QDISC_STATE_MISSED, &qdisc->state);
+
+ /* Make sure dequeuing happens after clearing
+ * STATE_MISSED.
+ */
+ smp_mb__after_atomic();
+
+ need_retry = false;
+
+ goto retry;
} else {
WRITE_ONCE(qdisc->empty, true);
}
@@ -1158,8 +1202,10 @@ static void dev_reset_queue(struct net_device *dev,
qdisc_reset(qdisc);
spin_unlock_bh(qdisc_lock(qdisc));
- if (nolock)
+ if (nolock) {
+ clear_bit(__QDISC_STATE_MISSED, &qdisc->state);
spin_unlock_bh(&qdisc->seqlock);
+ }
}
static bool some_qdisc_is_busy(struct net_device *dev)
diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
index c966c05..0085306 100644
--- a/net/sched/sch_taprio.c
+++ b/net/sched/sch_taprio.c
@@ -900,6 +900,12 @@ static int parse_taprio_schedule(struct taprio_sched *q, struct nlattr **tb,
list_for_each_entry(entry, &new->entries, list)
cycle = ktime_add_ns(cycle, entry->interval);
+
+ if (!cycle) {
+ NL_SET_ERR_MSG(extack, "'cycle_time' can never be 0");
+ return -EINVAL;
+ }
+
new->cycle_time = cycle;
}
diff --git a/net/sched/sch_teql.c b/net/sched/sch_teql.c
index 2f1f0a3..6af6b95 100644
--- a/net/sched/sch_teql.c
+++ b/net/sched/sch_teql.c
@@ -134,6 +134,9 @@ teql_destroy(struct Qdisc *sch)
struct teql_sched_data *dat = qdisc_priv(sch);
struct teql_master *master = dat->m;
+ if (!master)
+ return;
+
prev = master->slaves;
if (prev) {
do {
diff --git a/net/sctp/ipv6.c b/net/sctp/ipv6.c
index 8a58f42..c8074f43 100644
--- a/net/sctp/ipv6.c
+++ b/net/sctp/ipv6.c
@@ -643,8 +643,8 @@ static int sctp_v6_available(union sctp_addr *addr, struct sctp_sock *sp)
if (!(type & IPV6_ADDR_UNICAST))
return 0;
- return sp->inet.freebind || net->ipv6.sysctl.ip_nonlocal_bind ||
- ipv6_chk_addr(net, in6, NULL, 0);
+ return ipv6_can_nonlocal_bind(net, &sp->inet) ||
+ ipv6_chk_addr(net, in6, NULL, 0);
}
/* This function checks if the address is a valid address to be used for
@@ -933,8 +933,7 @@ static int sctp_inet6_bind_verify(struct sctp_sock *opt, union sctp_addr *addr)
net = sock_net(&opt->inet.sk);
rcu_read_lock();
dev = dev_get_by_index_rcu(net, addr->v6.sin6_scope_id);
- if (!dev || !(opt->inet.freebind ||
- net->ipv6.sysctl.ip_nonlocal_bind ||
+ if (!dev || !(ipv6_can_nonlocal_bind(net, &opt->inet) ||
ipv6_chk_addr(net, &addr->v6.sin6_addr,
dev, 0))) {
rcu_read_unlock();
diff --git a/net/sctp/sm_make_chunk.c b/net/sctp/sm_make_chunk.c
index 9a56ae2..b9d6bab 100644
--- a/net/sctp/sm_make_chunk.c
+++ b/net/sctp/sm_make_chunk.c
@@ -3126,7 +3126,7 @@ static __be16 sctp_process_asconf_param(struct sctp_association *asoc,
* primary.
*/
if (af->is_any(&addr))
- memcpy(&addr.v4, sctp_source(asconf), sizeof(addr));
+ memcpy(&addr, sctp_source(asconf), sizeof(addr));
if (security_sctp_bind_connect(asoc->ep->base.sk,
SCTP_PARAM_SET_PRIMARY,
diff --git a/net/sctp/sm_statefuns.c b/net/sctp/sm_statefuns.c
index c669f8b..b65bdaa 100644
--- a/net/sctp/sm_statefuns.c
+++ b/net/sctp/sm_statefuns.c
@@ -1841,20 +1841,35 @@ static enum sctp_disposition sctp_sf_do_dupcook_a(
SCTP_TO(SCTP_EVENT_TIMEOUT_T4_RTO));
sctp_add_cmd_sf(commands, SCTP_CMD_PURGE_ASCONF_QUEUE, SCTP_NULL());
- repl = sctp_make_cookie_ack(new_asoc, chunk);
+ /* Update the content of current association. */
+ if (sctp_assoc_update((struct sctp_association *)asoc, new_asoc)) {
+ struct sctp_chunk *abort;
+
+ abort = sctp_make_abort(asoc, NULL, sizeof(struct sctp_errhdr));
+ if (abort) {
+ sctp_init_cause(abort, SCTP_ERROR_RSRC_LOW, 0);
+ sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(abort));
+ }
+ sctp_add_cmd_sf(commands, SCTP_CMD_SET_SK_ERR, SCTP_ERROR(ECONNABORTED));
+ sctp_add_cmd_sf(commands, SCTP_CMD_ASSOC_FAILED,
+ SCTP_PERR(SCTP_ERROR_RSRC_LOW));
+ SCTP_INC_STATS(net, SCTP_MIB_ABORTEDS);
+ SCTP_DEC_STATS(net, SCTP_MIB_CURRESTAB);
+ goto nomem;
+ }
+
+ repl = sctp_make_cookie_ack(asoc, chunk);
if (!repl)
goto nomem;
/* Report association restart to upper layer. */
ev = sctp_ulpevent_make_assoc_change(asoc, 0, SCTP_RESTART, 0,
- new_asoc->c.sinit_num_ostreams,
- new_asoc->c.sinit_max_instreams,
+ asoc->c.sinit_num_ostreams,
+ asoc->c.sinit_max_instreams,
NULL, GFP_ATOMIC);
if (!ev)
goto nomem_ev;
- /* Update the content of current association. */
- sctp_add_cmd_sf(commands, SCTP_CMD_UPDATE_ASSOC, SCTP_ASOC(new_asoc));
sctp_add_cmd_sf(commands, SCTP_CMD_EVENT_ULP, SCTP_ULPEVENT(ev));
if ((sctp_state(asoc, SHUTDOWN_PENDING) ||
sctp_state(asoc, SHUTDOWN_SENT)) &&
@@ -1918,7 +1933,8 @@ static enum sctp_disposition sctp_sf_do_dupcook_b(
sctp_add_cmd_sf(commands, SCTP_CMD_UPDATE_ASSOC, SCTP_ASOC(new_asoc));
sctp_add_cmd_sf(commands, SCTP_CMD_NEW_STATE,
SCTP_STATE(SCTP_STATE_ESTABLISHED));
- SCTP_INC_STATS(net, SCTP_MIB_CURRESTAB);
+ if (asoc->state < SCTP_STATE_ESTABLISHED)
+ SCTP_INC_STATS(net, SCTP_MIB_CURRESTAB);
sctp_add_cmd_sf(commands, SCTP_CMD_HB_TIMERS_START, SCTP_NULL());
repl = sctp_make_cookie_ack(new_asoc, chunk);
diff --git a/net/sctp/socket.c b/net/sctp/socket.c
index 53d0a41..3ac6b21 100644
--- a/net/sctp/socket.c
+++ b/net/sctp/socket.c
@@ -357,6 +357,18 @@ static struct sctp_af *sctp_sockaddr_af(struct sctp_sock *opt,
return af;
}
+static void sctp_auto_asconf_init(struct sctp_sock *sp)
+{
+ struct net *net = sock_net(&sp->inet.sk);
+
+ if (net->sctp.default_auto_asconf) {
+ spin_lock(&net->sctp.addr_wq_lock);
+ list_add_tail(&sp->auto_asconf_list, &net->sctp.auto_asconf_splist);
+ spin_unlock(&net->sctp.addr_wq_lock);
+ sp->do_auto_asconf = 1;
+ }
+}
+
/* Bind a local address either to an endpoint or to an association. */
static int sctp_do_bind(struct sock *sk, union sctp_addr *addr, int len)
{
@@ -418,8 +430,10 @@ static int sctp_do_bind(struct sock *sk, union sctp_addr *addr, int len)
return -EADDRINUSE;
/* Refresh ephemeral port. */
- if (!bp->port)
+ if (!bp->port) {
bp->port = inet_sk(sk)->inet_num;
+ sctp_auto_asconf_init(sp);
+ }
/* Add the address to the bind address list.
* Use GFP_ATOMIC since BHs will be disabled.
@@ -4939,19 +4953,6 @@ static int sctp_init_sock(struct sock *sk)
sk_sockets_allocated_inc(sk);
sock_prot_inuse_add(net, sk->sk_prot, 1);
- /* Nothing can fail after this block, otherwise
- * sctp_destroy_sock() will be called without addr_wq_lock held
- */
- if (net->sctp.default_auto_asconf) {
- spin_lock(&sock_net(sk)->sctp.addr_wq_lock);
- list_add_tail(&sp->auto_asconf_list,
- &net->sctp.auto_asconf_splist);
- sp->do_auto_asconf = 1;
- spin_unlock(&sock_net(sk)->sctp.addr_wq_lock);
- } else {
- sp->do_auto_asconf = 0;
- }
-
local_bh_enable();
return 0;
@@ -9285,6 +9286,8 @@ static int sctp_sock_migrate(struct sock *oldsk, struct sock *newsk,
return err;
}
+ sctp_auto_asconf_init(newsp);
+
/* Move any messages in the old socket's receive queue that are for the
* peeled off association to the new socket's receive queue.
*/
diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
index 5dd4faa..030d7f3 100644
--- a/net/smc/af_smc.c
+++ b/net/smc/af_smc.c
@@ -2147,6 +2147,9 @@ static int smc_setsockopt(struct socket *sock, int level, int optname,
struct smc_sock *smc;
int val, rc;
+ if (level == SOL_TCP && optname == TCP_ULP)
+ return -EOPNOTSUPP;
+
smc = smc_sk(sk);
/* generic setsockopts reaching us here always apply to the
@@ -2171,7 +2174,6 @@ static int smc_setsockopt(struct socket *sock, int level, int optname,
if (rc || smc->use_fallback)
goto out;
switch (optname) {
- case TCP_ULP:
case TCP_FASTOPEN:
case TCP_FASTOPEN_CONNECT:
case TCP_FASTOPEN_KEY:
diff --git a/net/smc/smc_ism.c b/net/smc/smc_ism.c
index 6abbdd0..8e33c01 100644
--- a/net/smc/smc_ism.c
+++ b/net/smc/smc_ism.c
@@ -304,6 +304,14 @@ struct smcd_dev *smcd_alloc_dev(struct device *parent, const char *name,
return NULL;
}
+ smcd->event_wq = alloc_ordered_workqueue("ism_evt_wq-%s)",
+ WQ_MEM_RECLAIM, name);
+ if (!smcd->event_wq) {
+ kfree(smcd->conn);
+ kfree(smcd);
+ return NULL;
+ }
+
smcd->dev.parent = parent;
smcd->dev.release = smcd_release;
device_initialize(&smcd->dev);
@@ -317,19 +325,14 @@ struct smcd_dev *smcd_alloc_dev(struct device *parent, const char *name,
INIT_LIST_HEAD(&smcd->vlan);
INIT_LIST_HEAD(&smcd->lgr_list);
init_waitqueue_head(&smcd->lgrs_deleted);
- smcd->event_wq = alloc_ordered_workqueue("ism_evt_wq-%s)",
- WQ_MEM_RECLAIM, name);
- if (!smcd->event_wq) {
- kfree(smcd->conn);
- kfree(smcd);
- return NULL;
- }
return smcd;
}
EXPORT_SYMBOL_GPL(smcd_alloc_dev);
int smcd_register_dev(struct smcd_dev *smcd)
{
+ int rc;
+
mutex_lock(&smcd_dev_list.mutex);
if (list_empty(&smcd_dev_list.list)) {
u8 *system_eid = NULL;
@@ -349,7 +352,14 @@ int smcd_register_dev(struct smcd_dev *smcd)
dev_name(&smcd->dev), smcd->pnetid,
smcd->pnetid_by_user ? " (user defined)" : "");
- return device_add(&smcd->dev);
+ rc = device_add(&smcd->dev);
+ if (rc) {
+ mutex_lock(&smcd_dev_list.mutex);
+ list_del(&smcd->list);
+ mutex_unlock(&smcd_dev_list.mutex);
+ }
+
+ return rc;
}
EXPORT_SYMBOL_GPL(smcd_register_dev);
diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
index 3259120..84c8a53 100644
--- a/net/sunrpc/clnt.c
+++ b/net/sunrpc/clnt.c
@@ -1680,13 +1680,6 @@ call_reserveresult(struct rpc_task *task)
return;
}
- /*
- * Even though there was an error, we may have acquired
- * a request slot somehow. Make sure not to leak it.
- */
- if (task->tk_rqstp)
- xprt_release(task);
-
switch (status) {
case -ENOMEM:
rpc_delay(task, HZ >> 2);
@@ -1802,7 +1795,6 @@ call_allocate(struct rpc_task *task)
status = xprt->ops->buf_alloc(task);
trace_rpc_buf_alloc(task, status);
- xprt_inject_disconnect(xprt);
if (status == 0)
return;
if (status != -ENOMEM) {
@@ -2461,12 +2453,6 @@ call_decode(struct rpc_task *task)
}
/*
- * Ensure that we see all writes made by xprt_complete_rqst()
- * before it changed req->rq_reply_bytes_recvd.
- */
- smp_rmb();
-
- /*
* Did we ever call xprt_complete_rqst()? If not, we should assume
* the message is incomplete.
*/
@@ -2474,6 +2460,11 @@ call_decode(struct rpc_task *task)
if (!req->rq_reply_bytes_recvd)
goto out;
+ /* Ensure that we see all writes made by xprt_complete_rqst()
+ * before it changed req->rq_reply_bytes_recvd.
+ */
+ smp_rmb();
+
req->rq_rcv_buf.len = req->rq_private_buf.len;
trace_rpc_xdr_recvfrom(task, &req->rq_rcv_buf);
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index fa7b7ae..eba1714 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -1176,7 +1176,7 @@ static int svc_tcp_sendto(struct svc_rqst *rqstp)
goto out_notconn;
err = svc_tcp_sendmsg(svsk->sk_sock, &msg, xdr, marker, &sent);
xdr_free_bvec(xdr);
- trace_svcsock_tcp_send(xprt, err < 0 ? err : sent);
+ trace_svcsock_tcp_send(xprt, err < 0 ? (long)err : sent);
if (err < 0 || sent != (xdr->len + sizeof(marker)))
goto out_close;
mutex_unlock(&xprt->xpt_mutex);
diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
index 57f09ea..9a50764 100644
--- a/net/sunrpc/xprt.c
+++ b/net/sunrpc/xprt.c
@@ -70,6 +70,7 @@
static void xprt_init(struct rpc_xprt *xprt, struct net *net);
static __be32 xprt_alloc_xid(struct rpc_xprt *xprt);
static void xprt_destroy(struct rpc_xprt *xprt);
+static void xprt_request_init(struct rpc_task *task);
static DEFINE_SPINLOCK(xprt_list_lock);
static LIST_HEAD(xprt_list);
@@ -670,9 +671,9 @@ int xprt_adjust_timeout(struct rpc_rqst *req)
const struct rpc_timeout *to = req->rq_task->tk_client->cl_timeout;
int status = 0;
- if (time_before(jiffies, req->rq_minortimeo))
- return status;
if (time_before(jiffies, req->rq_majortimeo)) {
+ if (time_before(jiffies, req->rq_minortimeo))
+ return status;
if (to->to_exponential)
req->rq_timeout <<= 1;
else
@@ -1441,8 +1442,6 @@ bool xprt_prepare_transmit(struct rpc_task *task)
struct rpc_xprt *xprt = req->rq_xprt;
if (!xprt_lock_write(xprt, task)) {
- trace_xprt_transmit_queued(xprt, task);
-
/* Race breaker: someone may have transmitted us */
if (!test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate))
rpc_wake_up_queued_task_set_status(&xprt->sending,
@@ -1455,7 +1454,10 @@ bool xprt_prepare_transmit(struct rpc_task *task)
void xprt_end_transmit(struct rpc_task *task)
{
- xprt_release_write(task->tk_rqstp->rq_xprt, task);
+ struct rpc_xprt *xprt = task->tk_rqstp->rq_xprt;
+
+ xprt_inject_disconnect(xprt);
+ xprt_release_write(xprt, task);
}
/**
@@ -1573,17 +1575,40 @@ xprt_transmit(struct rpc_task *task)
spin_unlock(&xprt->queue_lock);
}
-static void xprt_add_backlog(struct rpc_xprt *xprt, struct rpc_task *task)
+static void xprt_complete_request_init(struct rpc_task *task)
{
- set_bit(XPRT_CONGESTED, &xprt->state);
- rpc_sleep_on(&xprt->backlog, task, NULL);
+ if (task->tk_rqstp)
+ xprt_request_init(task);
}
-static void xprt_wake_up_backlog(struct rpc_xprt *xprt)
+void xprt_add_backlog(struct rpc_xprt *xprt, struct rpc_task *task)
{
- if (rpc_wake_up_next(&xprt->backlog) == NULL)
- clear_bit(XPRT_CONGESTED, &xprt->state);
+ set_bit(XPRT_CONGESTED, &xprt->state);
+ rpc_sleep_on(&xprt->backlog, task, xprt_complete_request_init);
}
+EXPORT_SYMBOL_GPL(xprt_add_backlog);
+
+static bool __xprt_set_rq(struct rpc_task *task, void *data)
+{
+ struct rpc_rqst *req = data;
+
+ if (task->tk_rqstp == NULL) {
+ memset(req, 0, sizeof(*req)); /* mark unused */
+ task->tk_rqstp = req;
+ return true;
+ }
+ return false;
+}
+
+bool xprt_wake_up_backlog(struct rpc_xprt *xprt, struct rpc_rqst *req)
+{
+ if (rpc_wake_up_first(&xprt->backlog, __xprt_set_rq, req) == NULL) {
+ clear_bit(XPRT_CONGESTED, &xprt->state);
+ return false;
+ }
+ return true;
+}
+EXPORT_SYMBOL_GPL(xprt_wake_up_backlog);
static bool xprt_throttle_congested(struct rpc_xprt *xprt, struct rpc_task *task)
{
@@ -1593,7 +1618,7 @@ static bool xprt_throttle_congested(struct rpc_xprt *xprt, struct rpc_task *task
goto out;
spin_lock(&xprt->reserve_lock);
if (test_bit(XPRT_CONGESTED, &xprt->state)) {
- rpc_sleep_on(&xprt->backlog, task, NULL);
+ xprt_add_backlog(xprt, task);
ret = true;
}
spin_unlock(&xprt->reserve_lock);
@@ -1670,11 +1695,11 @@ EXPORT_SYMBOL_GPL(xprt_alloc_slot);
void xprt_free_slot(struct rpc_xprt *xprt, struct rpc_rqst *req)
{
spin_lock(&xprt->reserve_lock);
- if (!xprt_dynamic_free_slot(xprt, req)) {
+ if (!xprt_wake_up_backlog(xprt, req) &&
+ !xprt_dynamic_free_slot(xprt, req)) {
memset(req, 0, sizeof(*req)); /* mark unused */
list_add(&req->rq_list, &xprt->free);
}
- xprt_wake_up_backlog(xprt);
spin_unlock(&xprt->reserve_lock);
}
EXPORT_SYMBOL_GPL(xprt_free_slot);
@@ -1857,15 +1882,14 @@ void xprt_release(struct rpc_task *task)
spin_unlock(&xprt->transport_lock);
if (req->rq_buffer)
xprt->ops->buf_free(task);
- xprt_inject_disconnect(xprt);
xdr_free_bvec(&req->rq_rcv_buf);
xdr_free_bvec(&req->rq_snd_buf);
if (req->rq_cred != NULL)
put_rpccred(req->rq_cred);
- task->tk_rqstp = NULL;
if (req->rq_release_snd_buf)
req->rq_release_snd_buf(req);
+ task->tk_rqstp = NULL;
if (likely(!bc_prealloc(req)))
xprt->ops->free_slot(xprt, req);
else
diff --git a/net/sunrpc/xprtrdma/frwr_ops.c b/net/sunrpc/xprtrdma/frwr_ops.c
index 44888f5..bf3627dc 100644
--- a/net/sunrpc/xprtrdma/frwr_ops.c
+++ b/net/sunrpc/xprtrdma/frwr_ops.c
@@ -242,6 +242,7 @@ int frwr_query_device(struct rpcrdma_ep *ep, const struct ib_device *device)
ep->re_attr.cap.max_send_wr += 1; /* for ib_drain_sq */
ep->re_attr.cap.max_recv_wr = ep->re_max_requests;
ep->re_attr.cap.max_recv_wr += RPCRDMA_BACKWARD_WRS;
+ ep->re_attr.cap.max_recv_wr += RPCRDMA_MAX_RECV_BATCH;
ep->re_attr.cap.max_recv_wr += 1; /* for ib_drain_rq */
ep->re_max_rdma_segs =
@@ -554,7 +555,6 @@ void frwr_unmap_sync(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req)
mr = container_of(frwr, struct rpcrdma_mr, frwr);
bad_wr = bad_wr->next;
- list_del_init(&mr->mr_list);
frwr_mr_recycle(mr);
}
}
diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
index c48536f..ca267a8 100644
--- a/net/sunrpc/xprtrdma/rpc_rdma.c
+++ b/net/sunrpc/xprtrdma/rpc_rdma.c
@@ -1467,9 +1467,10 @@ void rpcrdma_reply_handler(struct rpcrdma_rep *rep)
credits = 1; /* don't deadlock */
else if (credits > r_xprt->rx_ep->re_max_requests)
credits = r_xprt->rx_ep->re_max_requests;
+ rpcrdma_post_recvs(r_xprt, credits + (buf->rb_bc_srv_max_requests << 1),
+ false);
if (buf->rb_credits != credits)
rpcrdma_update_cwnd(r_xprt, credits);
- rpcrdma_post_recvs(r_xprt, false);
req = rpcr_to_rdmar(rqst);
if (req->rl_reply) {
diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
index 035060c..c26db0a 100644
--- a/net/sunrpc/xprtrdma/transport.c
+++ b/net/sunrpc/xprtrdma/transport.c
@@ -262,8 +262,10 @@ xprt_rdma_connect_worker(struct work_struct *work)
* xprt_rdma_inject_disconnect - inject a connection fault
* @xprt: transport context
*
- * If @xprt is connected, disconnect it to simulate spurious connection
- * loss.
+ * If @xprt is connected, disconnect it to simulate spurious
+ * connection loss. Caller must hold @xprt's send lock to
+ * ensure that data structures and hardware resources are
+ * stable during the rdma_disconnect() call.
*/
static void
xprt_rdma_inject_disconnect(struct rpc_xprt *xprt)
@@ -518,9 +520,8 @@ xprt_rdma_alloc_slot(struct rpc_xprt *xprt, struct rpc_task *task)
return;
out_sleep:
- set_bit(XPRT_CONGESTED, &xprt->state);
- rpc_sleep_on(&xprt->backlog, task, NULL);
task->tk_status = -EAGAIN;
+ xprt_add_backlog(xprt, task);
}
/**
@@ -535,10 +536,11 @@ xprt_rdma_free_slot(struct rpc_xprt *xprt, struct rpc_rqst *rqst)
struct rpcrdma_xprt *r_xprt =
container_of(xprt, struct rpcrdma_xprt, rx_xprt);
- memset(rqst, 0, sizeof(*rqst));
- rpcrdma_buffer_put(&r_xprt->rx_buf, rpcr_to_rdmar(rqst));
- if (unlikely(!rpc_wake_up_next(&xprt->backlog)))
- clear_bit(XPRT_CONGESTED, &xprt->state);
+ rpcrdma_reply_put(&r_xprt->rx_buf, rpcr_to_rdmar(rqst));
+ if (!xprt_wake_up_backlog(xprt, rqst)) {
+ memset(rqst, 0, sizeof(*rqst));
+ rpcrdma_buffer_put(&r_xprt->rx_buf, rpcr_to_rdmar(rqst));
+ }
}
static bool rpcrdma_check_regbuf(struct rpcrdma_xprt *r_xprt,
diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index ad6e2e4..2555426 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -535,7 +535,7 @@ int rpcrdma_xprt_connect(struct rpcrdma_xprt *r_xprt)
* outstanding Receives.
*/
rpcrdma_ep_get(ep);
- rpcrdma_post_recvs(r_xprt, true);
+ rpcrdma_post_recvs(r_xprt, 1, true);
rc = rdma_connect(ep->re_id, &ep->re_remote_cma);
if (rc)
@@ -1198,6 +1198,20 @@ void rpcrdma_mr_put(struct rpcrdma_mr *mr)
}
/**
+ * rpcrdma_reply_put - Put reply buffers back into pool
+ * @buffers: buffer pool
+ * @req: object to return
+ *
+ */
+void rpcrdma_reply_put(struct rpcrdma_buffer *buffers, struct rpcrdma_req *req)
+{
+ if (req->rl_reply) {
+ rpcrdma_rep_put(buffers, req->rl_reply);
+ req->rl_reply = NULL;
+ }
+}
+
+/**
* rpcrdma_buffer_get - Get a request buffer
* @buffers: Buffer pool from which to obtain a buffer
*
@@ -1225,9 +1239,7 @@ rpcrdma_buffer_get(struct rpcrdma_buffer *buffers)
*/
void rpcrdma_buffer_put(struct rpcrdma_buffer *buffers, struct rpcrdma_req *req)
{
- if (req->rl_reply)
- rpcrdma_rep_put(buffers, req->rl_reply);
- req->rl_reply = NULL;
+ rpcrdma_reply_put(buffers, req);
spin_lock(&buffers->rb_lock);
list_add(&req->rl_list, &buffers->rb_send_bufs);
@@ -1377,21 +1389,21 @@ int rpcrdma_post_sends(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req)
/**
* rpcrdma_post_recvs - Refill the Receive Queue
* @r_xprt: controlling transport instance
- * @temp: mark Receive buffers to be deleted after use
+ * @needed: current credit grant
+ * @temp: mark Receive buffers to be deleted after one use
*
*/
-void rpcrdma_post_recvs(struct rpcrdma_xprt *r_xprt, bool temp)
+void rpcrdma_post_recvs(struct rpcrdma_xprt *r_xprt, int needed, bool temp)
{
struct rpcrdma_buffer *buf = &r_xprt->rx_buf;
struct rpcrdma_ep *ep = r_xprt->rx_ep;
struct ib_recv_wr *wr, *bad_wr;
struct rpcrdma_rep *rep;
- int needed, count, rc;
+ int count, rc;
rc = 0;
count = 0;
- needed = buf->rb_credits + (buf->rb_bc_srv_max_requests << 1);
if (likely(ep->re_receive_count > needed))
goto out;
needed -= ep->re_receive_count;
diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
index 43974ef..702f034 100644
--- a/net/sunrpc/xprtrdma/xprt_rdma.h
+++ b/net/sunrpc/xprtrdma/xprt_rdma.h
@@ -452,7 +452,7 @@ int rpcrdma_xprt_connect(struct rpcrdma_xprt *r_xprt);
void rpcrdma_xprt_disconnect(struct rpcrdma_xprt *r_xprt);
int rpcrdma_post_sends(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req);
-void rpcrdma_post_recvs(struct rpcrdma_xprt *r_xprt, bool temp);
+void rpcrdma_post_recvs(struct rpcrdma_xprt *r_xprt, int needed, bool temp);
/*
* Buffer calls - xprtrdma/verbs.c
@@ -472,6 +472,7 @@ void rpcrdma_mrs_refresh(struct rpcrdma_xprt *r_xprt);
struct rpcrdma_req *rpcrdma_buffer_get(struct rpcrdma_buffer *);
void rpcrdma_buffer_put(struct rpcrdma_buffer *buffers,
struct rpcrdma_req *req);
+void rpcrdma_reply_put(struct rpcrdma_buffer *buffers, struct rpcrdma_req *req);
void rpcrdma_recv_buffer_put(struct rpcrdma_rep *);
bool rpcrdma_regbuf_realloc(struct rpcrdma_regbuf *rb, size_t size,
diff --git a/net/tipc/core.c b/net/tipc/core.c
index c2ff429..40c0308 100644
--- a/net/tipc/core.c
+++ b/net/tipc/core.c
@@ -121,6 +121,8 @@ static void __net_exit tipc_exit_net(struct net *net)
#ifdef CONFIG_TIPC_CRYPTO
tipc_crypto_stop(&tipc_net(net)->crypto_tx);
#endif
+ while (atomic_read(&tn->wq_count))
+ cond_resched();
}
static void __net_exit tipc_pernet_pre_exit(struct net *net)
diff --git a/net/tipc/core.h b/net/tipc/core.h
index 1d57a4d..992924a 100644
--- a/net/tipc/core.h
+++ b/net/tipc/core.h
@@ -151,6 +151,8 @@ struct tipc_net {
#endif
/* Work item for net finalize */
struct tipc_net_work final_work;
+ /* The numbers of work queues in schedule */
+ atomic_t wq_count;
};
static inline struct tipc_net *tipc_net(struct net *net)
diff --git a/net/tipc/crypto.c b/net/tipc/crypto.c
index 740ab9a..2301b66 100644
--- a/net/tipc/crypto.c
+++ b/net/tipc/crypto.c
@@ -1485,6 +1485,8 @@ int tipc_crypto_start(struct tipc_crypto **crypto, struct net *net,
/* Allocate statistic structure */
c->stats = alloc_percpu_gfp(struct tipc_crypto_stats, GFP_ATOMIC);
if (!c->stats) {
+ if (c->wq)
+ destroy_workqueue(c->wq);
kfree_sensitive(c);
return -ENOMEM;
}
@@ -1934,12 +1936,13 @@ static void tipc_crypto_rcv_complete(struct net *net, struct tipc_aead *aead,
goto rcv;
if (tipc_aead_clone(&tmp, aead) < 0)
goto rcv;
+ WARN_ON(!refcount_inc_not_zero(&tmp->refcnt));
if (tipc_crypto_key_attach(rx, tmp, ehdr->tx_key, false) < 0) {
tipc_aead_free(&tmp->rcu);
goto rcv;
}
tipc_aead_put(aead);
- aead = tipc_aead_get(tmp);
+ aead = tmp;
}
if (unlikely(err)) {
diff --git a/net/tipc/msg.c b/net/tipc/msg.c
index 32c79c5..88a3ed8 100644
--- a/net/tipc/msg.c
+++ b/net/tipc/msg.c
@@ -151,18 +151,13 @@ int tipc_buf_append(struct sk_buff **headbuf, struct sk_buff **buf)
if (unlikely(head))
goto err;
*buf = NULL;
+ if (skb_has_frag_list(frag) && __skb_linearize(frag))
+ goto err;
frag = skb_unshare(frag, GFP_ATOMIC);
if (unlikely(!frag))
goto err;
head = *headbuf = frag;
TIPC_SKB_CB(head)->tail = NULL;
- if (skb_is_nonlinear(head)) {
- skb_walk_frags(head, tail) {
- TIPC_SKB_CB(head)->tail = tail;
- }
- } else {
- skb_frag_list_init(head);
- }
return 0;
}
diff --git a/net/tipc/netlink_compat.c b/net/tipc/netlink_compat.c
index 1c7aa51..49e8933 100644
--- a/net/tipc/netlink_compat.c
+++ b/net/tipc/netlink_compat.c
@@ -693,7 +693,7 @@ static int tipc_nl_compat_link_dump(struct tipc_nl_compat_msg *msg,
if (err)
return err;
- link_info.dest = nla_get_flag(link[TIPC_NLA_LINK_DEST]);
+ link_info.dest = htonl(nla_get_flag(link[TIPC_NLA_LINK_DEST]));
link_info.up = htonl(nla_get_flag(link[TIPC_NLA_LINK_UP]));
nla_strlcpy(link_info.str, link[TIPC_NLA_LINK_NAME],
TIPC_MAX_LINK_NAME);
diff --git a/net/tipc/socket.c b/net/tipc/socket.c
index e795a8a..9f7cc9e1 100644
--- a/net/tipc/socket.c
+++ b/net/tipc/socket.c
@@ -1244,6 +1244,9 @@ void tipc_sk_mcast_rcv(struct net *net, struct sk_buff_head *arrvq,
spin_lock_bh(&inputq->lock);
if (skb_peek(arrvq) == skb) {
skb_queue_splice_tail_init(&tmpq, inputq);
+ /* Decrease the skb's refcnt as increasing in the
+ * function tipc_skb_peek
+ */
kfree_skb(__skb_dequeue(arrvq));
}
spin_unlock_bh(&inputq->lock);
diff --git a/net/tipc/udp_media.c b/net/tipc/udp_media.c
index 1d17f44..a236281 100644
--- a/net/tipc/udp_media.c
+++ b/net/tipc/udp_media.c
@@ -806,6 +806,7 @@ static void cleanup_bearer(struct work_struct *work)
kfree_rcu(rcast, rcu);
}
+ atomic_dec(&tipc_net(sock_net(ub->ubsock->sk))->wq_count);
dst_cache_destroy(&ub->rcast.dst_cache);
udp_tunnel_sock_release(ub->ubsock);
synchronize_net();
@@ -826,6 +827,7 @@ static void tipc_udp_disable(struct tipc_bearer *b)
RCU_INIT_POINTER(ub->bearer, NULL);
/* sock_release need to be done outside of rtnl lock */
+ atomic_inc(&tipc_net(sock_net(ub->ubsock->sk))->wq_count);
INIT_WORK(&ub->work, cleanup_bearer);
schedule_work(&ub->work);
}
diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 845c628..3abe525 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -37,6 +37,7 @@
#include <linux/sched/signal.h>
#include <linux/module.h>
+#include <linux/splice.h>
#include <crypto/aead.h>
#include <net/strparser.h>
@@ -1282,7 +1283,7 @@ int tls_sw_sendpage(struct sock *sk, struct page *page,
}
static struct sk_buff *tls_wait_data(struct sock *sk, struct sk_psock *psock,
- int flags, long timeo, int *err)
+ bool nonblock, long timeo, int *err)
{
struct tls_context *tls_ctx = tls_get_ctx(sk);
struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx);
@@ -1307,7 +1308,7 @@ static struct sk_buff *tls_wait_data(struct sock *sk, struct sk_psock *psock,
if (sock_flag(sk, SOCK_DONE))
return NULL;
- if ((flags & MSG_DONTWAIT) || !timeo) {
+ if (nonblock || !timeo) {
*err = -EAGAIN;
return NULL;
}
@@ -1787,7 +1788,7 @@ int tls_sw_recvmsg(struct sock *sk,
bool async_capable;
bool async = false;
- skb = tls_wait_data(sk, psock, flags, timeo, &err);
+ skb = tls_wait_data(sk, psock, flags & MSG_DONTWAIT, timeo, &err);
if (!skb) {
if (psock) {
int ret = __tcp_bpf_recvmsg(sk, psock,
@@ -1991,9 +1992,9 @@ ssize_t tls_sw_splice_read(struct socket *sock, loff_t *ppos,
lock_sock(sk);
- timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
+ timeo = sock_rcvtimeo(sk, flags & SPLICE_F_NONBLOCK);
- skb = tls_wait_data(sk, NULL, flags, timeo, &err);
+ skb = tls_wait_data(sk, NULL, flags & SPLICE_F_NONBLOCK, timeo, &err);
if (!skb)
goto splice_read_end;
diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
index e4370b1..902cb6d 100644
--- a/net/vmw_vsock/virtio_transport_common.c
+++ b/net/vmw_vsock/virtio_transport_common.c
@@ -733,6 +733,23 @@ static int virtio_transport_reset_no_sock(const struct virtio_transport *t,
return t->send_pkt(reply);
}
+/* This function should be called with sk_lock held and SOCK_DONE set */
+static void virtio_transport_remove_sock(struct vsock_sock *vsk)
+{
+ struct virtio_vsock_sock *vvs = vsk->trans;
+ struct virtio_vsock_pkt *pkt, *tmp;
+
+ /* We don't need to take rx_lock, as the socket is closing and we are
+ * removing it.
+ */
+ list_for_each_entry_safe(pkt, tmp, &vvs->rx_queue, list) {
+ list_del(&pkt->list);
+ virtio_transport_free_pkt(pkt);
+ }
+
+ vsock_remove_sock(vsk);
+}
+
static void virtio_transport_wait_close(struct sock *sk, long timeout)
{
if (timeout) {
@@ -765,7 +782,7 @@ static void virtio_transport_do_close(struct vsock_sock *vsk,
(!cancel_timeout || cancel_delayed_work(&vsk->close_work))) {
vsk->close_work_scheduled = false;
- vsock_remove_sock(vsk);
+ virtio_transport_remove_sock(vsk);
/* Release refcnt obtained when we scheduled the timeout */
sock_put(sk);
@@ -828,22 +845,15 @@ static bool virtio_transport_close(struct vsock_sock *vsk)
void virtio_transport_release(struct vsock_sock *vsk)
{
- struct virtio_vsock_sock *vvs = vsk->trans;
- struct virtio_vsock_pkt *pkt, *tmp;
struct sock *sk = &vsk->sk;
bool remove_sock = true;
if (sk->sk_type == SOCK_STREAM)
remove_sock = virtio_transport_close(vsk);
- list_for_each_entry_safe(pkt, tmp, &vvs->rx_queue, list) {
- list_del(&pkt->list);
- virtio_transport_free_pkt(pkt);
- }
-
if (remove_sock) {
sock_set_flag(sk, SOCK_DONE);
- vsock_remove_sock(vsk);
+ virtio_transport_remove_sock(vsk);
}
}
EXPORT_SYMBOL_GPL(virtio_transport_release);
diff --git a/net/vmw_vsock/vmci_transport.c b/net/vmw_vsock/vmci_transport.c
index 8b65323..1c9ecb1 100644
--- a/net/vmw_vsock/vmci_transport.c
+++ b/net/vmw_vsock/vmci_transport.c
@@ -568,8 +568,7 @@ vmci_transport_queue_pair_alloc(struct vmci_qp **qpair,
peer, flags, VMCI_NO_PRIVILEGE_FLAGS);
out:
if (err < 0) {
- pr_err("Could not attach to queue pair with %d\n",
- err);
+ pr_err_once("Could not attach to queue pair with %d\n", err);
err = vmci_transport_error_to_vsock_error(err);
}
diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
index 535e34a..daf3f29 100644
--- a/net/wireless/nl80211.c
+++ b/net/wireless/nl80211.c
@@ -5,7 +5,7 @@
* Copyright 2006-2010 Johannes Berg <johannes@sipsolutions.net>
* Copyright 2013-2014 Intel Mobile Communications GmbH
* Copyright 2015-2017 Intel Deutschland GmbH
- * Copyright (C) 2018-2020 Intel Corporation
+ * Copyright (C) 2018-2021 Intel Corporation
*/
#include <linux/if.h>
@@ -209,9 +209,13 @@ static int validate_beacon_head(const struct nlattr *attr,
unsigned int len = nla_len(attr);
const struct element *elem;
const struct ieee80211_mgmt *mgmt = (void *)data;
- bool s1g_bcn = ieee80211_is_s1g_beacon(mgmt->frame_control);
unsigned int fixedlen, hdrlen;
+ bool s1g_bcn;
+ if (len < offsetofend(typeof(*mgmt), frame_control))
+ goto err;
+
+ s1g_bcn = ieee80211_is_s1g_beacon(mgmt->frame_control);
if (s1g_bcn) {
fixedlen = offsetof(struct ieee80211_ext,
u.s1g_beacon.variable);
@@ -5320,7 +5324,7 @@ static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info)
rdev, info->attrs[NL80211_ATTR_UNSOL_BCAST_PROBE_RESP],
&params);
if (err)
- return err;
+ goto out;
}
nl80211_calculate_ap_params(&params);
diff --git a/net/wireless/scan.c b/net/wireless/scan.c
index 3409f37..87fc56b 100644
--- a/net/wireless/scan.c
+++ b/net/wireless/scan.c
@@ -1753,6 +1753,8 @@ cfg80211_bss_update(struct cfg80211_registered_device *rdev,
if (rdev->bss_entries >= bss_entries_limit &&
!cfg80211_bss_expire_oldest(rdev)) {
+ if (!list_empty(&new->hidden_list))
+ list_del(&new->hidden_list);
kfree(new);
goto drop;
}
@@ -2351,14 +2353,16 @@ cfg80211_inform_single_bss_frame_data(struct wiphy *wiphy,
return NULL;
if (ext) {
- struct ieee80211_s1g_bcn_compat_ie *compat;
- u8 *ie;
+ const struct ieee80211_s1g_bcn_compat_ie *compat;
+ const struct element *elem;
- ie = (void *)cfg80211_find_ie(WLAN_EID_S1G_BCN_COMPAT,
- variable, ielen);
- if (!ie)
+ elem = cfg80211_find_elem(WLAN_EID_S1G_BCN_COMPAT,
+ variable, ielen);
+ if (!elem)
return NULL;
- compat = (void *)(ie + 2);
+ if (elem->datalen < sizeof(*compat))
+ return NULL;
+ compat = (void *)elem->data;
bssid = ext->u.s1g_beacon.sa;
capability = le16_to_cpu(compat->compat_info);
beacon_int = le16_to_cpu(compat->beacon_int);
diff --git a/net/wireless/sme.c b/net/wireless/sme.c
index 38df713..060e365 100644
--- a/net/wireless/sme.c
+++ b/net/wireless/sme.c
@@ -530,7 +530,7 @@ static int cfg80211_sme_connect(struct wireless_dev *wdev,
cfg80211_sme_free(wdev);
}
- if (WARN_ON(wdev->conn))
+ if (wdev->conn)
return -EINPROGRESS;
wdev->conn = kzalloc(sizeof(*wdev->conn), GFP_KERNEL);
diff --git a/net/wireless/util.c b/net/wireless/util.c
index e4247c3..2731267 100644
--- a/net/wireless/util.c
+++ b/net/wireless/util.c
@@ -541,7 +541,7 @@ EXPORT_SYMBOL(ieee80211_get_mesh_hdrlen);
int ieee80211_data_to_8023_exthdr(struct sk_buff *skb, struct ethhdr *ehdr,
const u8 *addr, enum nl80211_iftype iftype,
- u8 data_offset)
+ u8 data_offset, bool is_amsdu)
{
struct ieee80211_hdr *hdr = (struct ieee80211_hdr *) skb->data;
struct {
@@ -629,7 +629,7 @@ int ieee80211_data_to_8023_exthdr(struct sk_buff *skb, struct ethhdr *ehdr,
skb_copy_bits(skb, hdrlen, &payload, sizeof(payload));
tmp.h_proto = payload.proto;
- if (likely((ether_addr_equal(payload.hdr, rfc1042_header) &&
+ if (likely((!is_amsdu && ether_addr_equal(payload.hdr, rfc1042_header) &&
tmp.h_proto != htons(ETH_P_AARP) &&
tmp.h_proto != htons(ETH_P_IPX)) ||
ether_addr_equal(payload.hdr, bridge_tunnel_header)))
@@ -771,6 +771,9 @@ void ieee80211_amsdu_to_8023s(struct sk_buff *skb, struct sk_buff_head *list,
remaining = skb->len - offset;
if (subframe_len > remaining)
goto purge;
+ /* mitigate A-MSDU aggregation injection attacks */
+ if (ether_addr_equal(eth.h_dest, rfc1042_header))
+ goto purge;
offset += sizeof(struct ethhdr);
last = remaining <= subframe_len + padding;
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index 52fd1f9..ca4716b 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -380,12 +380,16 @@ static int xsk_generic_xmit(struct sock *sk)
struct sk_buff *skb;
unsigned long flags;
int err = 0;
+ u32 hr, tr;
mutex_lock(&xs->mutex);
if (xs->queue_id >= xs->dev->real_num_tx_queues)
goto out;
+ hr = max(NET_SKB_PAD, L1_CACHE_ALIGN(xs->dev->needed_headroom));
+ tr = xs->dev->needed_tailroom;
+
while (xskq_cons_peek_desc(xs->tx, &desc, xs->pool)) {
char *buffer;
u64 addr;
@@ -397,11 +401,13 @@ static int xsk_generic_xmit(struct sock *sk)
}
len = desc.len;
- skb = sock_alloc_send_skb(sk, len, 1, &err);
+ skb = sock_alloc_send_skb(sk, hr + len + tr, 1, &err);
if (unlikely(!skb))
goto out;
+ skb_reserve(skb, hr);
skb_put(skb, len);
+
addr = desc.addr;
buffer = xsk_buff_raw_get_data(xs->pool, addr);
err = skb_store_bits(skb, 0, buffer, len);
diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
index ef6de0f..be9fd5a 100644
--- a/net/xdp/xsk_queue.h
+++ b/net/xdp/xsk_queue.h
@@ -126,13 +126,12 @@ static inline bool xskq_cons_read_addr_unchecked(struct xsk_queue *q, u64 *addr)
static inline bool xp_aligned_validate_desc(struct xsk_buff_pool *pool,
struct xdp_desc *desc)
{
- u64 chunk, chunk_end;
+ u64 chunk;
- chunk = xp_aligned_extract_addr(pool, desc->addr);
- chunk_end = xp_aligned_extract_addr(pool, desc->addr + desc->len);
- if (chunk != chunk_end)
+ if (desc->len > pool->chunk_size)
return false;
+ chunk = xp_aligned_extract_addr(pool, desc->addr);
if (chunk >= pool->addrs_cnt)
return false;
diff --git a/net/xfrm/xfrm_compat.c b/net/xfrm/xfrm_compat.c
index d8e8a11..a20aec9 100644
--- a/net/xfrm/xfrm_compat.c
+++ b/net/xfrm/xfrm_compat.c
@@ -216,7 +216,7 @@ static struct nlmsghdr *xfrm_nlmsg_put_compat(struct sk_buff *skb,
case XFRM_MSG_GETSADINFO:
case XFRM_MSG_GETSPDINFO:
default:
- WARN_ONCE(1, "unsupported nlmsg_type %d", nlh_src->nlmsg_type);
+ pr_warn_once("unsupported nlmsg_type %d\n", nlh_src->nlmsg_type);
return ERR_PTR(-EOPNOTSUPP);
}
@@ -277,7 +277,7 @@ static int xfrm_xlate64_attr(struct sk_buff *dst, const struct nlattr *src)
return xfrm_nla_cpy(dst, src, nla_len(src));
default:
BUILD_BUG_ON(XFRMA_MAX != XFRMA_IF_ID);
- WARN_ONCE(1, "unsupported nla_type %d", src->nla_type);
+ pr_warn_once("unsupported nla_type %d\n", src->nla_type);
return -EOPNOTSUPP;
}
}
@@ -315,8 +315,10 @@ static int xfrm_alloc_compat(struct sk_buff *skb, const struct nlmsghdr *nlh_src
struct sk_buff *new = NULL;
int err;
- if (WARN_ON_ONCE(type >= ARRAY_SIZE(xfrm_msg_min)))
+ if (type >= ARRAY_SIZE(xfrm_msg_min)) {
+ pr_warn_once("unsupported nlmsg_type %d\n", nlh_src->nlmsg_type);
return -EOPNOTSUPP;
+ }
if (skb_shinfo(skb)->frag_list == NULL) {
new = alloc_skb(skb->len + skb_tailroom(skb), GFP_ATOMIC);
@@ -378,6 +380,10 @@ static int xfrm_attr_cpy32(void *dst, size_t *pos, const struct nlattr *src,
struct nlmsghdr *nlmsg = dst;
struct nlattr *nla;
+ /* xfrm_user_rcv_msg_compat() relies on fact that 32-bit messages
+ * have the same len or shorted than 64-bit ones.
+ * 32-bit translation that is bigger than 64-bit original is unexpected.
+ */
if (WARN_ON_ONCE(copy_len > payload))
copy_len = payload;
diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c
index edf1189..6d6917b 100644
--- a/net/xfrm/xfrm_device.c
+++ b/net/xfrm/xfrm_device.c
@@ -134,8 +134,6 @@ struct sk_buff *validate_xmit_xfrm(struct sk_buff *skb, netdev_features_t featur
return skb;
}
- xo->flags |= XFRM_XMIT;
-
if (skb_is_gso(skb) && unlikely(x->xso.dev != dev)) {
struct sk_buff *segs;
diff --git a/net/xfrm/xfrm_interface.c b/net/xfrm/xfrm_interface.c
index 9b8e292..e9ce233 100644
--- a/net/xfrm/xfrm_interface.c
+++ b/net/xfrm/xfrm_interface.c
@@ -305,6 +305,8 @@ xfrmi_xmit2(struct sk_buff *skb, struct net_device *dev, struct flowi *fl)
icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
} else {
+ if (!(ip_hdr(skb)->frag_off & htons(IP_DF)))
+ goto xmit;
icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
htonl(mtu));
}
@@ -313,6 +315,7 @@ xfrmi_xmit2(struct sk_buff *skb, struct net_device *dev, struct flowi *fl)
return -EMSGSIZE;
}
+xmit:
xfrmi_scrub_packet(skb, !net_eq(xi->net, dev_net(dev)));
skb_dst_set(skb, dst);
skb->dev = tdev;
diff --git a/net/xfrm/xfrm_output.c b/net/xfrm/xfrm_output.c
index a7ab193..e4cb0ff 100644
--- a/net/xfrm/xfrm_output.c
+++ b/net/xfrm/xfrm_output.c
@@ -503,22 +503,22 @@ static int xfrm_output_one(struct sk_buff *skb, int err)
return err;
}
-int xfrm_output_resume(struct sk_buff *skb, int err)
+int xfrm_output_resume(struct sock *sk, struct sk_buff *skb, int err)
{
struct net *net = xs_net(skb_dst(skb)->xfrm);
while (likely((err = xfrm_output_one(skb, err)) == 0)) {
nf_reset_ct(skb);
- err = skb_dst(skb)->ops->local_out(net, skb->sk, skb);
+ err = skb_dst(skb)->ops->local_out(net, sk, skb);
if (unlikely(err != 1))
goto out;
if (!skb_dst(skb)->xfrm)
- return dst_output(net, skb->sk, skb);
+ return dst_output(net, sk, skb);
err = nf_hook(skb_dst(skb)->ops->family,
- NF_INET_POST_ROUTING, net, skb->sk, skb,
+ NF_INET_POST_ROUTING, net, sk, skb,
NULL, skb_dst(skb)->dev, xfrm_output2);
if (unlikely(err != 1))
goto out;
@@ -534,7 +534,7 @@ EXPORT_SYMBOL_GPL(xfrm_output_resume);
static int xfrm_output2(struct net *net, struct sock *sk, struct sk_buff *skb)
{
- return xfrm_output_resume(skb, 1);
+ return xfrm_output_resume(sk, skb, 1);
}
static int xfrm_output_gso(struct net *net, struct sock *sk, struct sk_buff *skb)
@@ -660,6 +660,12 @@ static int xfrm4_extract_output(struct xfrm_state *x, struct sk_buff *skb)
{
int err;
+ if (x->outer_mode.encap == XFRM_MODE_BEET &&
+ ip_is_fragment(ip_hdr(skb))) {
+ net_warn_ratelimited("BEET mode doesn't support inner IPv4 fragments\n");
+ return -EAFNOSUPPORT;
+ }
+
err = xfrm4_tunnel_check_size(skb);
if (err)
return err;
@@ -705,8 +711,15 @@ static int xfrm6_tunnel_check_size(struct sk_buff *skb)
static int xfrm6_extract_output(struct xfrm_state *x, struct sk_buff *skb)
{
#if IS_ENABLED(CONFIG_IPV6)
+ unsigned int ptr = 0;
int err;
+ if (x->outer_mode.encap == XFRM_MODE_BEET &&
+ ipv6_find_hdr(skb, &ptr, NEXTHDR_FRAGMENT, NULL, NULL) >= 0) {
+ net_warn_ratelimited("BEET mode doesn't support inner IPv6 fragments\n");
+ return -EAFNOSUPPORT;
+ }
+
err = xfrm6_tunnel_check_size(skb);
if (err)
return err;
diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
index 2f15178..77499ab 100644
--- a/net/xfrm/xfrm_state.c
+++ b/net/xfrm/xfrm_state.c
@@ -44,7 +44,6 @@ static void xfrm_state_gc_task(struct work_struct *work);
*/
static unsigned int xfrm_state_hashmax __read_mostly = 1 * 1024 * 1024;
-static __read_mostly seqcount_t xfrm_state_hash_generation = SEQCNT_ZERO(xfrm_state_hash_generation);
static struct kmem_cache *xfrm_state_cache __ro_after_init;
static DECLARE_WORK(xfrm_state_gc_work, xfrm_state_gc_task);
@@ -140,7 +139,7 @@ static void xfrm_hash_resize(struct work_struct *work)
}
spin_lock_bh(&net->xfrm.xfrm_state_lock);
- write_seqcount_begin(&xfrm_state_hash_generation);
+ write_seqcount_begin(&net->xfrm.xfrm_state_hash_generation);
nhashmask = (nsize / sizeof(struct hlist_head)) - 1U;
odst = xfrm_state_deref_prot(net->xfrm.state_bydst, net);
@@ -156,7 +155,7 @@ static void xfrm_hash_resize(struct work_struct *work)
rcu_assign_pointer(net->xfrm.state_byspi, nspi);
net->xfrm.state_hmask = nhashmask;
- write_seqcount_end(&xfrm_state_hash_generation);
+ write_seqcount_end(&net->xfrm.xfrm_state_hash_generation);
spin_unlock_bh(&net->xfrm.xfrm_state_lock);
osize = (ohashmask + 1) * sizeof(struct hlist_head);
@@ -1061,7 +1060,7 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
to_put = NULL;
- sequence = read_seqcount_begin(&xfrm_state_hash_generation);
+ sequence = read_seqcount_begin(&net->xfrm.xfrm_state_hash_generation);
rcu_read_lock();
h = xfrm_dst_hash(net, daddr, saddr, tmpl->reqid, encap_family);
@@ -1174,7 +1173,7 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
if (to_put)
xfrm_state_put(to_put);
- if (read_seqcount_retry(&xfrm_state_hash_generation, sequence)) {
+ if (read_seqcount_retry(&net->xfrm.xfrm_state_hash_generation, sequence)) {
*err = -EAGAIN;
if (x) {
xfrm_state_put(x);
@@ -2664,6 +2663,7 @@ int __net_init xfrm_state_init(struct net *net)
net->xfrm.state_num = 0;
INIT_WORK(&net->xfrm.state_hash_work, xfrm_hash_resize);
spin_lock_init(&net->xfrm.xfrm_state_lock);
+ seqcount_init(&net->xfrm.xfrm_state_hash_generation);
return 0;
out_byspi:
diff --git a/samples/bpf/tracex1_kern.c b/samples/bpf/tracex1_kern.c
index 3f4599c..ef30d2b 100644
--- a/samples/bpf/tracex1_kern.c
+++ b/samples/bpf/tracex1_kern.c
@@ -26,7 +26,7 @@
SEC("kprobe/__netif_receive_skb_core")
int bpf_prog1(struct pt_regs *ctx)
{
- /* attaches to kprobe netif_receive_skb,
+ /* attaches to kprobe __netif_receive_skb_core,
* looks for packets on loobpack device and prints them
*/
char devname[IFNAMSIZ];
@@ -35,7 +35,7 @@ int bpf_prog1(struct pt_regs *ctx)
int len;
/* non-portable! works for the given kernel only */
- skb = (struct sk_buff *) PT_REGS_PARM1(ctx);
+ bpf_probe_read_kernel(&skb, sizeof(skb), (void *)PT_REGS_PARM1(ctx));
dev = _(skb->dev);
len = _(skb->len);
diff --git a/samples/bpf/xdpsock_user.c b/samples/bpf/xdpsock_user.c
index 3edae90..2e4508a6 100644
--- a/samples/bpf/xdpsock_user.c
+++ b/samples/bpf/xdpsock_user.c
@@ -1257,7 +1257,7 @@ static void tx_only(struct xsk_socket_info *xsk, u32 *frame_nb, int batch_size)
for (i = 0; i < batch_size; i++) {
struct xdp_desc *tx_desc = xsk_ring_prod__tx_desc(&xsk->tx,
idx + i);
- tx_desc->addr = (*frame_nb + i) << XSK_UMEM__DEFAULT_FRAME_SHIFT;
+ tx_desc->addr = (*frame_nb + i) * opt_xsk_frame_size;
tx_desc->len = PKT_SIZE;
}
diff --git a/samples/kfifo/bytestream-example.c b/samples/kfifo/bytestream-example.c
index c406f03..5a90aa5 100644
--- a/samples/kfifo/bytestream-example.c
+++ b/samples/kfifo/bytestream-example.c
@@ -122,8 +122,10 @@ static ssize_t fifo_write(struct file *file, const char __user *buf,
ret = kfifo_from_user(&test, buf, count, &copied);
mutex_unlock(&write_lock);
+ if (ret)
+ return ret;
- return ret ? ret : copied;
+ return copied;
}
static ssize_t fifo_read(struct file *file, char __user *buf,
@@ -138,8 +140,10 @@ static ssize_t fifo_read(struct file *file, char __user *buf,
ret = kfifo_to_user(&test, buf, count, &copied);
mutex_unlock(&read_lock);
+ if (ret)
+ return ret;
- return ret ? ret : copied;
+ return copied;
}
static const struct proc_ops fifo_proc_ops = {
diff --git a/samples/kfifo/inttype-example.c b/samples/kfifo/inttype-example.c
index 78977fc..e5403d8 100644
--- a/samples/kfifo/inttype-example.c
+++ b/samples/kfifo/inttype-example.c
@@ -115,8 +115,10 @@ static ssize_t fifo_write(struct file *file, const char __user *buf,
ret = kfifo_from_user(&test, buf, count, &copied);
mutex_unlock(&write_lock);
+ if (ret)
+ return ret;
- return ret ? ret : copied;
+ return copied;
}
static ssize_t fifo_read(struct file *file, char __user *buf,
@@ -131,8 +133,10 @@ static ssize_t fifo_read(struct file *file, char __user *buf,
ret = kfifo_to_user(&test, buf, count, &copied);
mutex_unlock(&read_lock);
+ if (ret)
+ return ret;
- return ret ? ret : copied;
+ return copied;
}
static const struct proc_ops fifo_proc_ops = {
diff --git a/samples/kfifo/record-example.c b/samples/kfifo/record-example.c
index c507998..f64f3d6 100644
--- a/samples/kfifo/record-example.c
+++ b/samples/kfifo/record-example.c
@@ -129,8 +129,10 @@ static ssize_t fifo_write(struct file *file, const char __user *buf,
ret = kfifo_from_user(&test, buf, count, &copied);
mutex_unlock(&write_lock);
+ if (ret)
+ return ret;
- return ret ? ret : copied;
+ return copied;
}
static ssize_t fifo_read(struct file *file, char __user *buf,
@@ -145,8 +147,10 @@ static ssize_t fifo_read(struct file *file, char __user *buf,
ret = kfifo_to_user(&test, buf, count, &copied);
mutex_unlock(&read_lock);
+ if (ret)
+ return ret;
- return ret ? ret : copied;
+ return copied;
}
static const struct proc_ops fifo_proc_ops = {
diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
index 1e000cc..127012f 100644
--- a/scripts/Makefile.kasan
+++ b/scripts/Makefile.kasan
@@ -2,6 +2,8 @@
CFLAGS_KASAN_NOSANITIZE := -fno-builtin
KASAN_SHADOW_OFFSET ?= $(CONFIG_KASAN_SHADOW_OFFSET)
+cc-param = $(call cc-option, -mllvm -$(1), $(call cc-option, --param $(1)))
+
ifdef CONFIG_KASAN_GENERIC
ifdef CONFIG_KASAN_INLINE
@@ -12,8 +14,6 @@
CFLAGS_KASAN_MINIMAL := -fsanitize=kernel-address
-cc-param = $(call cc-option, -mllvm -$(1), $(call cc-option, --param $(1)))
-
# -fasan-shadow-offset fails without -fsanitize
CFLAGS_KASAN_SHADOW := $(call cc-option, -fsanitize=kernel-address \
-fasan-shadow-offset=$(KASAN_SHADOW_OFFSET), \
@@ -36,14 +36,14 @@
ifdef CONFIG_KASAN_SW_TAGS
ifdef CONFIG_KASAN_INLINE
- instrumentation_flags := -mllvm -hwasan-mapping-offset=$(KASAN_SHADOW_OFFSET)
+ instrumentation_flags := $(call cc-param,hwasan-mapping-offset=$(KASAN_SHADOW_OFFSET))
else
- instrumentation_flags := -mllvm -hwasan-instrument-with-calls=1
+ instrumentation_flags := $(call cc-param,hwasan-instrument-with-calls=1)
endif
CFLAGS_KASAN := -fsanitize=kernel-hwaddress \
- -mllvm -hwasan-instrument-stack=$(CONFIG_KASAN_STACK) \
- -mllvm -hwasan-use-short-granules=0 \
+ $(call cc-param,hwasan-instrument-stack=$(CONFIG_KASAN_STACK)) \
+ $(call cc-param,hwasan-use-short-granules=0) \
$(instrumentation_flags)
endif # CONFIG_KASAN_SW_TAGS
diff --git a/scripts/Makefile.modpost b/scripts/Makefile.modpost
index f54b6ac..12a87be 100644
--- a/scripts/Makefile.modpost
+++ b/scripts/Makefile.modpost
@@ -65,7 +65,20 @@
ifeq ($(KBUILD_EXTMOD),)
input-symdump := vmlinux.symvers
-output-symdump := Module.symvers
+output-symdump := modules-only.symvers
+
+quiet_cmd_cat = GEN $@
+ cmd_cat = cat $(real-prereqs) > $@
+
+ifneq ($(wildcard vmlinux.symvers),)
+
+__modpost: Module.symvers
+Module.symvers: vmlinux.symvers modules-only.symvers FORCE
+ $(call if_changed,cat)
+
+targets += Module.symvers
+
+endif
else
diff --git a/scripts/bloat-o-meter b/scripts/bloat-o-meter
index d7ca46c..dcd8d87 100755
--- a/scripts/bloat-o-meter
+++ b/scripts/bloat-o-meter
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python3
#
# Copyright 2004 Matt Mackall <mpm@selenic.com>
#
diff --git a/scripts/clang-tools/gen_compile_commands.py b/scripts/clang-tools/gen_compile_commands.py
index 1996370..8ddb5d09 100755
--- a/scripts/clang-tools/gen_compile_commands.py
+++ b/scripts/clang-tools/gen_compile_commands.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# SPDX-License-Identifier: GPL-2.0
#
# Copyright (C) Google LLC, 2018
diff --git a/scripts/clang-tools/run-clang-tools.py b/scripts/clang-tools/run-clang-tools.py
index fa7655c..f754415a 100755
--- a/scripts/clang-tools/run-clang-tools.py
+++ b/scripts/clang-tools/run-clang-tools.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# SPDX-License-Identifier: GPL-2.0
#
# Copyright (C) Google LLC, 2020
diff --git a/scripts/config b/scripts/config
index eee5b7f..8c8d7c3 100755
--- a/scripts/config
+++ b/scripts/config
@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
# SPDX-License-Identifier: GPL-2.0
# Manipulate options in a .config file from the command line
diff --git a/scripts/diffconfig b/scripts/diffconfig
index 89abf77..d5da5fa 100755
--- a/scripts/diffconfig
+++ b/scripts/diffconfig
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python3
# SPDX-License-Identifier: GPL-2.0
#
# diffconfig - a tool to compare .config files.
diff --git a/scripts/get_abi.pl b/scripts/get_abi.pl
index 68dab82..92d9aa6 100755
--- a/scripts/get_abi.pl
+++ b/scripts/get_abi.pl
@@ -1,4 +1,4 @@
-#!/usr/bin/perl
+#!/usr/bin/env perl
# SPDX-License-Identifier: GPL-2.0
use strict;
diff --git a/scripts/kconfig/nconf.c b/scripts/kconfig/nconf.c
index e0f9655..af814b3 100644
--- a/scripts/kconfig/nconf.c
+++ b/scripts/kconfig/nconf.c
@@ -504,8 +504,8 @@ static int get_mext_match(const char *match_str, match_f flag)
else if (flag == FIND_NEXT_MATCH_UP)
--match_start;
+ match_start = (match_start + items_num) % items_num;
index = match_start;
- index = (index + items_num) % items_num;
while (true) {
char *str = k_menu_items[index].str;
if (strcasestr(str, match_str) != NULL)
diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
index f882ce0..e08f75a 100644
--- a/scripts/mod/modpost.c
+++ b/scripts/mod/modpost.c
@@ -2481,19 +2481,6 @@ static void read_dump(const char *fname)
fatal("parse error in symbol dump file\n");
}
-/* For normal builds always dump all symbols.
- * For external modules only dump symbols
- * that are not read from kernel Module.symvers.
- **/
-static int dump_sym(struct symbol *sym)
-{
- if (!external_module)
- return 1;
- if (sym->module->from_dump)
- return 0;
- return 1;
-}
-
static void write_dump(const char *fname)
{
struct buffer buf = { };
@@ -2504,7 +2491,7 @@ static void write_dump(const char *fname)
for (n = 0; n < SYMBOL_HASH_SIZE ; n++) {
symbol = symbolhash[n];
while (symbol) {
- if (dump_sym(symbol)) {
+ if (!symbol->module->from_dump) {
namespace = symbol->namespace;
buf_printf(&buf, "0x%08x\t%s\t%s\t%s\t%s\n",
symbol->crc, symbol->name,
diff --git a/scripts/recordmcount.pl b/scripts/recordmcount.pl
index 0bafed8..4f84657 100755
--- a/scripts/recordmcount.pl
+++ b/scripts/recordmcount.pl
@@ -395,7 +395,7 @@
$mcount_regex = "^\\s*([0-9a-fA-F]+):.*\\s_mcount\$";
} elsif ($arch eq "riscv") {
$function_regex = "^([0-9a-fA-F]+)\\s+<([^.0-9][0-9a-zA-Z_\\.]+)>:";
- $mcount_regex = "^\\s*([0-9a-fA-F]+):\\sR_RISCV_CALL\\s_mcount\$";
+ $mcount_regex = "^\\s*([0-9a-fA-F]+):\\sR_RISCV_CALL(_PLT)?\\s_?mcount\$";
$type = ".quad";
$alignment = 2;
} elsif ($arch eq "nds32") {
diff --git a/scripts/show_delta b/scripts/show_delta
index 2643993..28e67e1 100755
--- a/scripts/show_delta
+++ b/scripts/show_delta
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
# SPDX-License-Identifier: GPL-2.0-only
#
# show_deltas: Read list of printk messages instrumented with
diff --git a/scripts/sphinx-pre-install b/scripts/sphinx-pre-install
index 40fa692..828a861 100755
--- a/scripts/sphinx-pre-install
+++ b/scripts/sphinx-pre-install
@@ -1,4 +1,4 @@
-#!/usr/bin/perl
+#!/usr/bin/env perl
# SPDX-License-Identifier: GPL-2.0-or-later
use strict;
diff --git a/scripts/split-man.pl b/scripts/split-man.pl
index c3db607..96bd99d 100755
--- a/scripts/split-man.pl
+++ b/scripts/split-man.pl
@@ -1,4 +1,4 @@
-#!/usr/bin/perl
+#!/usr/bin/env perl
# SPDX-License-Identifier: GPL-2.0
#
# Author: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
diff --git a/scripts/tracing/draw_functrace.py b/scripts/tracing/draw_functrace.py
index b657357..74f8aad 100755
--- a/scripts/tracing/draw_functrace.py
+++ b/scripts/tracing/draw_functrace.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
# SPDX-License-Identifier: GPL-2.0-only
"""
diff --git a/security/commoncap.c b/security/commoncap.c
index a6c9bb4..28d582e 100644
--- a/security/commoncap.c
+++ b/security/commoncap.c
@@ -391,7 +391,7 @@ int cap_inode_getsecurity(struct inode *inode, const char *name, void **buffer,
&tmpbuf, size, GFP_NOFS);
dput(dentry);
- if (ret < 0)
+ if (ret < 0 || !tmpbuf)
return ret;
fs_ns = inode->i_sb->s_user_ns;
diff --git a/security/integrity/ima/ima_template.c b/security/integrity/ima/ima_template.c
index 1e89e2d3..f83255a 100644
--- a/security/integrity/ima/ima_template.c
+++ b/security/integrity/ima/ima_template.c
@@ -468,8 +468,8 @@ int ima_restore_measurement_list(loff_t size, void *buf)
}
}
- entry->pcr = !ima_canonical_fmt ? *(hdr[HDR_PCR].data) :
- le32_to_cpu(*(hdr[HDR_PCR].data));
+ entry->pcr = !ima_canonical_fmt ? *(u32 *)(hdr[HDR_PCR].data) :
+ le32_to_cpu(*(u32 *)(hdr[HDR_PCR].data));
ret = ima_restore_measurement_entry(entry);
if (ret < 0)
break;
diff --git a/security/keys/trusted-keys/trusted_tpm1.c b/security/keys/trusted-keys/trusted_tpm1.c
index 7a937c3..4c3cffc 100644
--- a/security/keys/trusted-keys/trusted_tpm1.c
+++ b/security/keys/trusted-keys/trusted_tpm1.c
@@ -500,10 +500,12 @@ static int tpm_seal(struct tpm_buf *tb, uint16_t keytype,
ret = tpm_get_random(chip, td->nonceodd, TPM_NONCE_SIZE);
if (ret < 0)
- return ret;
+ goto out;
- if (ret != TPM_NONCE_SIZE)
- return -EIO;
+ if (ret != TPM_NONCE_SIZE) {
+ ret = -EIO;
+ goto out;
+ }
ordinal = htonl(TPM_ORD_SEAL);
datsize = htonl(datalen);
@@ -791,13 +793,33 @@ static int getoptions(char *c, struct trusted_key_payload *pay,
return -EINVAL;
break;
case Opt_blobauth:
- if (strlen(args[0].from) != 2 * SHA1_DIGEST_SIZE)
- return -EINVAL;
- res = hex2bin(opt->blobauth, args[0].from,
- SHA1_DIGEST_SIZE);
- if (res < 0)
- return -EINVAL;
+ /*
+ * TPM 1.2 authorizations are sha1 hashes passed in as
+ * hex strings. TPM 2.0 authorizations are simple
+ * passwords (although it can take a hash as well)
+ */
+ opt->blobauth_len = strlen(args[0].from);
+
+ if (opt->blobauth_len == 2 * TPM_DIGEST_SIZE) {
+ res = hex2bin(opt->blobauth, args[0].from,
+ TPM_DIGEST_SIZE);
+ if (res < 0)
+ return -EINVAL;
+
+ opt->blobauth_len = TPM_DIGEST_SIZE;
+ break;
+ }
+
+ if (tpm2 && opt->blobauth_len <= sizeof(opt->blobauth)) {
+ memcpy(opt->blobauth, args[0].from,
+ opt->blobauth_len);
+ break;
+ }
+
+ return -EINVAL;
+
break;
+
case Opt_migratable:
if (*args[0].from == '0')
pay->migratable = 0;
diff --git a/security/keys/trusted-keys/trusted_tpm2.c b/security/keys/trusted-keys/trusted_tpm2.c
index e2a0ed5..4c19d3a 100644
--- a/security/keys/trusted-keys/trusted_tpm2.c
+++ b/security/keys/trusted-keys/trusted_tpm2.c
@@ -79,7 +79,7 @@ int tpm2_seal_trusted(struct tpm_chip *chip,
if (i == ARRAY_SIZE(tpm2_hash_map))
return -EINVAL;
- rc = tpm_buf_init(&buf, TPM2_ST_SESSIONS, TPM2_CC_CREATE);
+ rc = tpm_try_get_ops(chip);
if (rc)
return rc;
@@ -97,10 +97,12 @@ int tpm2_seal_trusted(struct tpm_chip *chip,
TPM_DIGEST_SIZE);
/* sensitive */
- tpm_buf_append_u16(&buf, 4 + TPM_DIGEST_SIZE + payload->key_len + 1);
+ tpm_buf_append_u16(&buf, 4 + options->blobauth_len + payload->key_len + 1);
- tpm_buf_append_u16(&buf, TPM_DIGEST_SIZE);
- tpm_buf_append(&buf, options->blobauth, TPM_DIGEST_SIZE);
+ tpm_buf_append_u16(&buf, options->blobauth_len);
+ if (options->blobauth_len)
+ tpm_buf_append(&buf, options->blobauth, options->blobauth_len);
+
tpm_buf_append_u16(&buf, payload->key_len + 1);
tpm_buf_append(&buf, payload->key, payload->key_len);
tpm_buf_append_u8(&buf, payload->migratable);
@@ -265,7 +267,7 @@ static int tpm2_unseal_cmd(struct tpm_chip *chip,
NULL /* nonce */, 0,
TPM2_SA_CONTINUE_SESSION,
options->blobauth /* hmac */,
- TPM_DIGEST_SIZE);
+ options->blobauth_len);
rc = tpm_transmit_cmd(chip, &buf, 6, "unsealing");
if (rc > 0)
diff --git a/security/selinux/include/classmap.h b/security/selinux/include/classmap.h
index 40cebde..b9fdba2 100644
--- a/security/selinux/include/classmap.h
+++ b/security/selinux/include/classmap.h
@@ -242,11 +242,12 @@ struct security_class_mapping secclass_map[] = {
{ "infiniband_endport",
{ "manage_subnet", NULL } },
{ "bpf",
- {"map_create", "map_read", "map_write", "prog_load", "prog_run"} },
+ { "map_create", "map_read", "map_write", "prog_load", "prog_run",
+ NULL } },
{ "xdp_socket",
{ COMMON_SOCK_PERMS, NULL } },
{ "perf_event",
- {"open", "cpu", "kernel", "tracepoint", "read", "write"} },
+ { "open", "cpu", "kernel", "tracepoint", "read", "write", NULL } },
{ "lockdown",
{ "integrity", "confidentiality", NULL } },
{ NULL }
diff --git a/security/selinux/ss/avtab.c b/security/selinux/ss/avtab.c
index 0172d87..364b2ef 100644
--- a/security/selinux/ss/avtab.c
+++ b/security/selinux/ss/avtab.c
@@ -109,7 +109,7 @@ static int avtab_insert(struct avtab *h, struct avtab_key *key, struct avtab_dat
struct avtab_node *prev, *cur, *newnode;
u16 specified = key->specified & ~(AVTAB_ENABLED|AVTAB_ENABLED_OLD);
- if (!h)
+ if (!h || !h->nslot)
return -EINVAL;
hvalue = avtab_hash(key, h->mask);
@@ -154,7 +154,7 @@ avtab_insert_nonunique(struct avtab *h, struct avtab_key *key, struct avtab_datu
struct avtab_node *prev, *cur;
u16 specified = key->specified & ~(AVTAB_ENABLED|AVTAB_ENABLED_OLD);
- if (!h)
+ if (!h || !h->nslot)
return NULL;
hvalue = avtab_hash(key, h->mask);
for (prev = NULL, cur = h->htable[hvalue];
@@ -184,7 +184,7 @@ struct avtab_datum *avtab_search(struct avtab *h, struct avtab_key *key)
struct avtab_node *cur;
u16 specified = key->specified & ~(AVTAB_ENABLED|AVTAB_ENABLED_OLD);
- if (!h)
+ if (!h || !h->nslot)
return NULL;
hvalue = avtab_hash(key, h->mask);
@@ -220,7 +220,7 @@ avtab_search_node(struct avtab *h, struct avtab_key *key)
struct avtab_node *cur;
u16 specified = key->specified & ~(AVTAB_ENABLED|AVTAB_ENABLED_OLD);
- if (!h)
+ if (!h || !h->nslot)
return NULL;
hvalue = avtab_hash(key, h->mask);
@@ -295,6 +295,7 @@ void avtab_destroy(struct avtab *h)
}
kvfree(h->htable);
h->htable = NULL;
+ h->nel = 0;
h->nslot = 0;
h->mask = 0;
}
@@ -303,88 +304,52 @@ void avtab_init(struct avtab *h)
{
h->htable = NULL;
h->nel = 0;
+ h->nslot = 0;
+ h->mask = 0;
}
-int avtab_alloc(struct avtab *h, u32 nrules)
+static int avtab_alloc_common(struct avtab *h, u32 nslot)
{
- u32 mask = 0;
- u32 shift = 0;
- u32 work = nrules;
- u32 nslot = 0;
-
- if (nrules == 0)
- goto avtab_alloc_out;
-
- while (work) {
- work = work >> 1;
- shift++;
- }
- if (shift > 2)
- shift = shift - 2;
- nslot = 1 << shift;
- if (nslot > MAX_AVTAB_HASH_BUCKETS)
- nslot = MAX_AVTAB_HASH_BUCKETS;
- mask = nslot - 1;
+ if (!nslot)
+ return 0;
h->htable = kvcalloc(nslot, sizeof(void *), GFP_KERNEL);
if (!h->htable)
return -ENOMEM;
- avtab_alloc_out:
- h->nel = 0;
h->nslot = nslot;
- h->mask = mask;
- pr_debug("SELinux: %d avtab hash slots, %d rules.\n",
- h->nslot, nrules);
+ h->mask = nslot - 1;
return 0;
}
-int avtab_duplicate(struct avtab *new, struct avtab *orig)
+int avtab_alloc(struct avtab *h, u32 nrules)
{
- int i;
- struct avtab_node *node, *tmp, *tail;
+ int rc;
+ u32 nslot = 0;
- memset(new, 0, sizeof(*new));
-
- new->htable = kvcalloc(orig->nslot, sizeof(void *), GFP_KERNEL);
- if (!new->htable)
- return -ENOMEM;
- new->nslot = orig->nslot;
- new->mask = orig->mask;
-
- for (i = 0; i < orig->nslot; i++) {
- tail = NULL;
- for (node = orig->htable[i]; node; node = node->next) {
- tmp = kmem_cache_zalloc(avtab_node_cachep, GFP_KERNEL);
- if (!tmp)
- goto error;
- tmp->key = node->key;
- if (tmp->key.specified & AVTAB_XPERMS) {
- tmp->datum.u.xperms =
- kmem_cache_zalloc(avtab_xperms_cachep,
- GFP_KERNEL);
- if (!tmp->datum.u.xperms) {
- kmem_cache_free(avtab_node_cachep, tmp);
- goto error;
- }
- tmp->datum.u.xperms = node->datum.u.xperms;
- } else
- tmp->datum.u.data = node->datum.u.data;
-
- if (tail)
- tail->next = tmp;
- else
- new->htable[i] = tmp;
-
- tail = tmp;
- new->nel++;
+ if (nrules != 0) {
+ u32 shift = 1;
+ u32 work = nrules >> 3;
+ while (work) {
+ work >>= 1;
+ shift++;
}
+ nslot = 1 << shift;
+ if (nslot > MAX_AVTAB_HASH_BUCKETS)
+ nslot = MAX_AVTAB_HASH_BUCKETS;
+
+ rc = avtab_alloc_common(h, nslot);
+ if (rc)
+ return rc;
}
+ pr_debug("SELinux: %d avtab hash slots, %d rules.\n", nslot, nrules);
return 0;
-error:
- avtab_destroy(new);
- return -ENOMEM;
+}
+
+int avtab_alloc_dup(struct avtab *new, const struct avtab *orig)
+{
+ return avtab_alloc_common(new, orig->nslot);
}
void avtab_hash_eval(struct avtab *h, char *tag)
diff --git a/security/selinux/ss/avtab.h b/security/selinux/ss/avtab.h
index 4c4445c..f2eeb36 100644
--- a/security/selinux/ss/avtab.h
+++ b/security/selinux/ss/avtab.h
@@ -89,7 +89,7 @@ struct avtab {
void avtab_init(struct avtab *h);
int avtab_alloc(struct avtab *, u32);
-int avtab_duplicate(struct avtab *new, struct avtab *orig);
+int avtab_alloc_dup(struct avtab *new, const struct avtab *orig);
struct avtab_datum *avtab_search(struct avtab *h, struct avtab_key *k);
void avtab_destroy(struct avtab *h);
void avtab_hash_eval(struct avtab *h, char *tag);
diff --git a/security/selinux/ss/conditional.c b/security/selinux/ss/conditional.c
index 0b32f3a..1ef74c0 100644
--- a/security/selinux/ss/conditional.c
+++ b/security/selinux/ss/conditional.c
@@ -605,7 +605,6 @@ static int cond_dup_av_list(struct cond_av_list *new,
struct cond_av_list *orig,
struct avtab *avtab)
{
- struct avtab_node *avnode;
u32 i;
memset(new, 0, sizeof(*new));
@@ -615,10 +614,11 @@ static int cond_dup_av_list(struct cond_av_list *new,
return -ENOMEM;
for (i = 0; i < orig->len; i++) {
- avnode = avtab_search_node(avtab, &orig->nodes[i]->key);
- if (WARN_ON(!avnode))
- return -EINVAL;
- new->nodes[i] = avnode;
+ new->nodes[i] = avtab_insert_nonunique(avtab,
+ &orig->nodes[i]->key,
+ &orig->nodes[i]->datum);
+ if (!new->nodes[i])
+ return -ENOMEM;
new->len++;
}
@@ -630,7 +630,7 @@ static int duplicate_policydb_cond_list(struct policydb *newp,
{
int rc, i, j;
- rc = avtab_duplicate(&newp->te_cond_avtab, &origp->te_cond_avtab);
+ rc = avtab_alloc_dup(&newp->te_cond_avtab, &origp->te_cond_avtab);
if (rc)
return rc;
diff --git a/security/selinux/ss/services.c b/security/selinux/ss/services.c
index 495fc86..fbdbfb5 100644
--- a/security/selinux/ss/services.c
+++ b/security/selinux/ss/services.c
@@ -1553,6 +1553,7 @@ static int security_context_to_sid_core(struct selinux_state *state,
if (!str)
goto out;
}
+retry:
rcu_read_lock();
policy = rcu_dereference(state->policy);
policydb = &policy->policydb;
@@ -1566,6 +1567,15 @@ static int security_context_to_sid_core(struct selinux_state *state,
} else if (rc)
goto out_unlock;
rc = sidtab_context_to_sid(sidtab, &context, sid);
+ if (rc == -ESTALE) {
+ rcu_read_unlock();
+ if (context.str) {
+ str = context.str;
+ context.str = NULL;
+ }
+ context_destroy(&context);
+ goto retry;
+ }
context_destroy(&context);
out_unlock:
rcu_read_unlock();
@@ -1715,7 +1725,7 @@ static int security_compute_sid(struct selinux_state *state,
struct selinux_policy *policy;
struct policydb *policydb;
struct sidtab *sidtab;
- struct class_datum *cladatum = NULL;
+ struct class_datum *cladatum;
struct context *scontext, *tcontext, newcontext;
struct sidtab_entry *sentry, *tentry;
struct avtab_key avkey;
@@ -1737,6 +1747,8 @@ static int security_compute_sid(struct selinux_state *state,
goto out;
}
+retry:
+ cladatum = NULL;
context_init(&newcontext);
rcu_read_lock();
@@ -1881,6 +1893,11 @@ static int security_compute_sid(struct selinux_state *state,
}
/* Obtain the sid for the context. */
rc = sidtab_context_to_sid(sidtab, &newcontext, out_sid);
+ if (rc == -ESTALE) {
+ rcu_read_unlock();
+ context_destroy(&newcontext);
+ goto retry;
+ }
out_unlock:
rcu_read_unlock();
context_destroy(&newcontext);
@@ -2192,6 +2209,7 @@ void selinux_policy_commit(struct selinux_state *state,
struct selinux_load_state *load_state)
{
struct selinux_policy *oldpolicy, *newpolicy = load_state->policy;
+ unsigned long flags;
u32 seqno;
oldpolicy = rcu_dereference_protected(state->policy,
@@ -2213,7 +2231,13 @@ void selinux_policy_commit(struct selinux_state *state,
seqno = newpolicy->latest_granting;
/* Install the new policy. */
- rcu_assign_pointer(state->policy, newpolicy);
+ if (oldpolicy) {
+ sidtab_freeze_begin(oldpolicy->sidtab, &flags);
+ rcu_assign_pointer(state->policy, newpolicy);
+ sidtab_freeze_end(oldpolicy->sidtab, &flags);
+ } else {
+ rcu_assign_pointer(state->policy, newpolicy);
+ }
/* Load the policycaps from the new policy */
security_load_policycaps(state, newpolicy);
@@ -2357,13 +2381,15 @@ int security_port_sid(struct selinux_state *state,
struct policydb *policydb;
struct sidtab *sidtab;
struct ocontext *c;
- int rc = 0;
+ int rc;
if (!selinux_initialized(state)) {
*out_sid = SECINITSID_PORT;
return 0;
}
+retry:
+ rc = 0;
rcu_read_lock();
policy = rcu_dereference(state->policy);
policydb = &policy->policydb;
@@ -2382,6 +2408,10 @@ int security_port_sid(struct selinux_state *state,
if (!c->sid[0]) {
rc = sidtab_context_to_sid(sidtab, &c->context[0],
&c->sid[0]);
+ if (rc == -ESTALE) {
+ rcu_read_unlock();
+ goto retry;
+ }
if (rc)
goto out;
}
@@ -2408,13 +2438,15 @@ int security_ib_pkey_sid(struct selinux_state *state,
struct policydb *policydb;
struct sidtab *sidtab;
struct ocontext *c;
- int rc = 0;
+ int rc;
if (!selinux_initialized(state)) {
*out_sid = SECINITSID_UNLABELED;
return 0;
}
+retry:
+ rc = 0;
rcu_read_lock();
policy = rcu_dereference(state->policy);
policydb = &policy->policydb;
@@ -2435,6 +2467,10 @@ int security_ib_pkey_sid(struct selinux_state *state,
rc = sidtab_context_to_sid(sidtab,
&c->context[0],
&c->sid[0]);
+ if (rc == -ESTALE) {
+ rcu_read_unlock();
+ goto retry;
+ }
if (rc)
goto out;
}
@@ -2460,13 +2496,15 @@ int security_ib_endport_sid(struct selinux_state *state,
struct policydb *policydb;
struct sidtab *sidtab;
struct ocontext *c;
- int rc = 0;
+ int rc;
if (!selinux_initialized(state)) {
*out_sid = SECINITSID_UNLABELED;
return 0;
}
+retry:
+ rc = 0;
rcu_read_lock();
policy = rcu_dereference(state->policy);
policydb = &policy->policydb;
@@ -2487,6 +2525,10 @@ int security_ib_endport_sid(struct selinux_state *state,
if (!c->sid[0]) {
rc = sidtab_context_to_sid(sidtab, &c->context[0],
&c->sid[0]);
+ if (rc == -ESTALE) {
+ rcu_read_unlock();
+ goto retry;
+ }
if (rc)
goto out;
}
@@ -2510,7 +2552,7 @@ int security_netif_sid(struct selinux_state *state,
struct selinux_policy *policy;
struct policydb *policydb;
struct sidtab *sidtab;
- int rc = 0;
+ int rc;
struct ocontext *c;
if (!selinux_initialized(state)) {
@@ -2518,6 +2560,8 @@ int security_netif_sid(struct selinux_state *state,
return 0;
}
+retry:
+ rc = 0;
rcu_read_lock();
policy = rcu_dereference(state->policy);
policydb = &policy->policydb;
@@ -2534,10 +2578,18 @@ int security_netif_sid(struct selinux_state *state,
if (!c->sid[0] || !c->sid[1]) {
rc = sidtab_context_to_sid(sidtab, &c->context[0],
&c->sid[0]);
+ if (rc == -ESTALE) {
+ rcu_read_unlock();
+ goto retry;
+ }
if (rc)
goto out;
rc = sidtab_context_to_sid(sidtab, &c->context[1],
&c->sid[1]);
+ if (rc == -ESTALE) {
+ rcu_read_unlock();
+ goto retry;
+ }
if (rc)
goto out;
}
@@ -2587,6 +2639,7 @@ int security_node_sid(struct selinux_state *state,
return 0;
}
+retry:
rcu_read_lock();
policy = rcu_dereference(state->policy);
policydb = &policy->policydb;
@@ -2635,6 +2688,10 @@ int security_node_sid(struct selinux_state *state,
rc = sidtab_context_to_sid(sidtab,
&c->context[0],
&c->sid[0]);
+ if (rc == -ESTALE) {
+ rcu_read_unlock();
+ goto retry;
+ }
if (rc)
goto out;
}
@@ -2676,18 +2733,24 @@ int security_get_user_sids(struct selinux_state *state,
struct sidtab *sidtab;
struct context *fromcon, usercon;
u32 *mysids = NULL, *mysids2, sid;
- u32 mynel = 0, maxnel = SIDS_NEL;
+ u32 i, j, mynel, maxnel = SIDS_NEL;
struct user_datum *user;
struct role_datum *role;
struct ebitmap_node *rnode, *tnode;
- int rc = 0, i, j;
+ int rc;
*sids = NULL;
*nel = 0;
if (!selinux_initialized(state))
- goto out;
+ return 0;
+ mysids = kcalloc(maxnel, sizeof(*mysids), GFP_KERNEL);
+ if (!mysids)
+ return -ENOMEM;
+
+retry:
+ mynel = 0;
rcu_read_lock();
policy = rcu_dereference(state->policy);
policydb = &policy->policydb;
@@ -2707,11 +2770,6 @@ int security_get_user_sids(struct selinux_state *state,
usercon.user = user->value;
- rc = -ENOMEM;
- mysids = kcalloc(maxnel, sizeof(*mysids), GFP_ATOMIC);
- if (!mysids)
- goto out_unlock;
-
ebitmap_for_each_positive_bit(&user->roles, rnode, i) {
role = policydb->role_val_to_struct[i];
usercon.role = i + 1;
@@ -2723,6 +2781,10 @@ int security_get_user_sids(struct selinux_state *state,
continue;
rc = sidtab_context_to_sid(sidtab, &usercon, &sid);
+ if (rc == -ESTALE) {
+ rcu_read_unlock();
+ goto retry;
+ }
if (rc)
goto out_unlock;
if (mynel < maxnel) {
@@ -2745,14 +2807,14 @@ int security_get_user_sids(struct selinux_state *state,
rcu_read_unlock();
if (rc || !mynel) {
kfree(mysids);
- goto out;
+ return rc;
}
rc = -ENOMEM;
mysids2 = kcalloc(mynel, sizeof(*mysids2), GFP_KERNEL);
if (!mysids2) {
kfree(mysids);
- goto out;
+ return rc;
}
for (i = 0, j = 0; i < mynel; i++) {
struct av_decision dummy_avd;
@@ -2765,12 +2827,10 @@ int security_get_user_sids(struct selinux_state *state,
mysids2[j++] = mysids[i];
cond_resched();
}
- rc = 0;
kfree(mysids);
*sids = mysids2;
*nel = j;
-out:
- return rc;
+ return 0;
}
/**
@@ -2783,6 +2843,9 @@ int security_get_user_sids(struct selinux_state *state,
* Obtain a SID to use for a file in a filesystem that
* cannot support xattr or use a fixed labeling behavior like
* transition SIDs or task SIDs.
+ *
+ * WARNING: This function may return -ESTALE, indicating that the caller
+ * must retry the operation after re-acquiring the policy pointer!
*/
static inline int __security_genfs_sid(struct selinux_policy *policy,
const char *fstype,
@@ -2861,11 +2924,13 @@ int security_genfs_sid(struct selinux_state *state,
return 0;
}
- rcu_read_lock();
- policy = rcu_dereference(state->policy);
- retval = __security_genfs_sid(policy,
- fstype, path, orig_sclass, sid);
- rcu_read_unlock();
+ do {
+ rcu_read_lock();
+ policy = rcu_dereference(state->policy);
+ retval = __security_genfs_sid(policy, fstype, path,
+ orig_sclass, sid);
+ rcu_read_unlock();
+ } while (retval == -ESTALE);
return retval;
}
@@ -2888,7 +2953,7 @@ int security_fs_use(struct selinux_state *state, struct super_block *sb)
struct selinux_policy *policy;
struct policydb *policydb;
struct sidtab *sidtab;
- int rc = 0;
+ int rc;
struct ocontext *c;
struct superblock_security_struct *sbsec = sb->s_security;
const char *fstype = sb->s_type->name;
@@ -2899,6 +2964,8 @@ int security_fs_use(struct selinux_state *state, struct super_block *sb)
return 0;
}
+retry:
+ rc = 0;
rcu_read_lock();
policy = rcu_dereference(state->policy);
policydb = &policy->policydb;
@@ -2916,6 +2983,10 @@ int security_fs_use(struct selinux_state *state, struct super_block *sb)
if (!c->sid[0]) {
rc = sidtab_context_to_sid(sidtab, &c->context[0],
&c->sid[0]);
+ if (rc == -ESTALE) {
+ rcu_read_unlock();
+ goto retry;
+ }
if (rc)
goto out;
}
@@ -2923,6 +2994,10 @@ int security_fs_use(struct selinux_state *state, struct super_block *sb)
} else {
rc = __security_genfs_sid(policy, fstype, "/",
SECCLASS_DIR, &sbsec->sid);
+ if (rc == -ESTALE) {
+ rcu_read_unlock();
+ goto retry;
+ }
if (rc) {
sbsec->behavior = SECURITY_FS_USE_NONE;
rc = 0;
@@ -3132,12 +3207,13 @@ int security_sid_mls_copy(struct selinux_state *state,
u32 len;
int rc;
- rc = 0;
if (!selinux_initialized(state)) {
*new_sid = sid;
- goto out;
+ return 0;
}
+retry:
+ rc = 0;
context_init(&newcon);
rcu_read_lock();
@@ -3196,10 +3272,14 @@ int security_sid_mls_copy(struct selinux_state *state,
}
}
rc = sidtab_context_to_sid(sidtab, &newcon, new_sid);
+ if (rc == -ESTALE) {
+ rcu_read_unlock();
+ context_destroy(&newcon);
+ goto retry;
+ }
out_unlock:
rcu_read_unlock();
context_destroy(&newcon);
-out:
return rc;
}
@@ -3796,6 +3876,8 @@ int security_netlbl_secattr_to_sid(struct selinux_state *state,
return 0;
}
+retry:
+ rc = 0;
rcu_read_lock();
policy = rcu_dereference(state->policy);
policydb = &policy->policydb;
@@ -3822,23 +3904,24 @@ int security_netlbl_secattr_to_sid(struct selinux_state *state,
goto out;
}
rc = -EIDRM;
- if (!mls_context_isvalid(policydb, &ctx_new))
- goto out_free;
+ if (!mls_context_isvalid(policydb, &ctx_new)) {
+ ebitmap_destroy(&ctx_new.range.level[0].cat);
+ goto out;
+ }
rc = sidtab_context_to_sid(sidtab, &ctx_new, sid);
+ ebitmap_destroy(&ctx_new.range.level[0].cat);
+ if (rc == -ESTALE) {
+ rcu_read_unlock();
+ goto retry;
+ }
if (rc)
- goto out_free;
+ goto out;
security_netlbl_cache_add(secattr, *sid);
-
- ebitmap_destroy(&ctx_new.range.level[0].cat);
} else
*sid = SECSID_NULL;
- rcu_read_unlock();
- return 0;
-out_free:
- ebitmap_destroy(&ctx_new.range.level[0].cat);
out:
rcu_read_unlock();
return rc;
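Aside (not part of the diff): every sidtab_context_to_sid() caller patched above now follows the same shape: take the RCU read lock, dereference the current policy, and if the lookup fails with -ESTALE (the old sidtab was frozen by a concurrent policy commit), drop the lock, release any per-iteration state, and jump back to a retry label so the lookup is redone against the newly installed policy. A self-contained userspace model of that control flow follows; every *_demo name is hypothetical, and only the retry shape mirrors the real code.

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the sidtab attached to the currently installed policy. */
struct sidtab_demo {
        bool frozen;                    /* set at policy-commit time */
        unsigned int next_sid;
};

/* Models sidtab_context_to_sid(): refuses to hand out new SIDs once frozen. */
static int sidtab_to_sid_demo(struct sidtab_demo *s, unsigned int *sid)
{
        if (s->frozen)
                return -ESTALE;         /* caller must re-read the policy pointer */
        *sid = s->next_sid++;
        return 0;
}

/* The pattern each patched services.c function now follows. */
static int lookup_sid_demo(struct sidtab_demo **current_policy, unsigned int *sid)
{
        int rc;

retry:
        /* stands in for rcu_read_lock() + rcu_dereference(state->policy);
         * in the kernel, the re-read pointer eventually names the new,
         * unfrozen sidtab, so the retry terminates */
        rc = sidtab_to_sid_demo(*current_policy, sid);
        if (rc == -ESTALE)
                goto retry;             /* a new policy is in place; redo the lookup */
        /* stands in for rcu_read_unlock() */
        return rc;
}

int main(void)
{
        struct sidtab_demo live = { .frozen = false, .next_sid = 100 };
        struct sidtab_demo *policy = &live;
        unsigned int sid;

        if (!lookup_sid_demo(&policy, &sid))
                printf("got sid %u\n", sid);
        return 0;
}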
diff --git a/security/selinux/ss/sidtab.c b/security/selinux/ss/sidtab.c
index 5ee190b..656d50b 100644
--- a/security/selinux/ss/sidtab.c
+++ b/security/selinux/ss/sidtab.c
@@ -39,6 +39,7 @@ int sidtab_init(struct sidtab *s)
for (i = 0; i < SECINITSID_NUM; i++)
s->isids[i].set = 0;
+ s->frozen = false;
s->count = 0;
s->convert = NULL;
hash_init(s->context_to_sid);
@@ -281,6 +282,15 @@ int sidtab_context_to_sid(struct sidtab *s, struct context *context,
if (*sid)
goto out_unlock;
+ if (unlikely(s->frozen)) {
+ /*
+ * This sidtab is now frozen - tell the caller to abort and
+ * get the new one.
+ */
+ rc = -ESTALE;
+ goto out_unlock;
+ }
+
count = s->count;
convert = s->convert;
@@ -474,6 +484,17 @@ void sidtab_cancel_convert(struct sidtab *s)
spin_unlock_irqrestore(&s->lock, flags);
}
+void sidtab_freeze_begin(struct sidtab *s, unsigned long *flags) __acquires(&s->lock)
+{
+ spin_lock_irqsave(&s->lock, *flags);
+ s->frozen = true;
+ s->convert = NULL;
+}
+void sidtab_freeze_end(struct sidtab *s, unsigned long *flags) __releases(&s->lock)
+{
+ spin_unlock_irqrestore(&s->lock, *flags);
+}
+
static void sidtab_destroy_entry(struct sidtab_entry *entry)
{
context_destroy(&entry->context);
diff --git a/security/selinux/ss/sidtab.h b/security/selinux/ss/sidtab.h
index 80c744d..4eff0e4 100644
--- a/security/selinux/ss/sidtab.h
+++ b/security/selinux/ss/sidtab.h
@@ -86,6 +86,7 @@ struct sidtab {
u32 count;
/* access only under spinlock */
struct sidtab_convert_params *convert;
+ bool frozen;
spinlock_t lock;
#if CONFIG_SECURITY_SELINUX_SID2STR_CACHE_SIZE > 0
@@ -125,6 +126,9 @@ int sidtab_convert(struct sidtab *s, struct sidtab_convert_params *params);
void sidtab_cancel_convert(struct sidtab *s);
+void sidtab_freeze_begin(struct sidtab *s, unsigned long *flags) __acquires(&s->lock);
+void sidtab_freeze_end(struct sidtab *s, unsigned long *flags) __releases(&s->lock);
+
int sidtab_context_to_sid(struct sidtab *s, struct context *context, u32 *sid);
void sidtab_destroy(struct sidtab *s);
diff --git a/sound/core/init.c b/sound/core/init.c
index 018ce4e..9f5270c 100644
--- a/sound/core/init.c
+++ b/sound/core/init.c
@@ -390,10 +390,8 @@ int snd_card_disconnect(struct snd_card *card)
return 0;
}
card->shutdown = 1;
- spin_unlock(&card->files_lock);
/* replace file->f_op with special dummy operations */
- spin_lock(&card->files_lock);
list_for_each_entry(mfile, &card->files_list, list) {
/* it's critical part, use endless loop */
/* we have no room to fail */
diff --git a/sound/drivers/aloop.c b/sound/drivers/aloop.c
index c913563..2c5f7e9 100644
--- a/sound/drivers/aloop.c
+++ b/sound/drivers/aloop.c
@@ -1572,6 +1572,14 @@ static int loopback_mixer_new(struct loopback *loopback, int notify)
return -ENOMEM;
kctl->id.device = dev;
kctl->id.subdevice = substr;
+
+ /* Add the control before copying the id so that
+ * the numid field of the id is set in the copy.
+ */
+ err = snd_ctl_add(card, kctl);
+ if (err < 0)
+ return err;
+
switch (idx) {
case ACTIVE_IDX:
setup->active_id = kctl->id;
@@ -1588,9 +1596,6 @@ static int loopback_mixer_new(struct loopback *loopback, int notify)
default:
break;
}
- err = snd_ctl_add(card, kctl);
- if (err < 0)
- return err;
}
}
}
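Aside (not part of the diff): the aloop reordering above matters because, as the added comment notes, a control's numid is only set once snd_ctl_add() registers it, so copying kctl->id into the setup structure before the add left a zero numid in the copy. A tiny userspace model of the corrected ordering; all *_demo names are hypothetical.

#include <stdio.h>

struct ctl_id_demo { unsigned int numid, device, subdevice; };
struct kctl_demo   { struct ctl_id_demo id; };

static unsigned int last_numid_demo;

/* Models snd_ctl_add(): registration is what assigns the numid. */
static int ctl_add_demo(struct kctl_demo *k)
{
        k->id.numid = ++last_numid_demo;
        return 0;
}

int main(void)
{
        struct kctl_demo kctl = { .id = { .device = 0, .subdevice = 2 } };
        struct ctl_id_demo saved;

        if (ctl_add_demo(&kctl))        /* register first ... */
                return 1;
        saved = kctl.id;                /* ... then copy: the numid is now valid */

        printf("saved numid = %u (nonzero)\n", saved.numid);
        return 0;
}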
diff --git a/sound/firewire/Kconfig b/sound/firewire/Kconfig
index 2577876..9897bd2 100644
--- a/sound/firewire/Kconfig
+++ b/sound/firewire/Kconfig
@@ -38,7 +38,7 @@
* Mackie(Loud) Onyx 1640i (former model)
* Mackie(Loud) Onyx Satellite
* Mackie(Loud) Tapco Link.Firewire
- * Mackie(Loud) d.2 pro/d.4 pro
+ * Mackie(Loud) d.4 pro
* Mackie(Loud) U.420/U.420d
* TASCAM FireOne
* Stanton Controllers & Systems 1 Deck/Mixer
@@ -84,7 +84,7 @@
* PreSonus FIREBOX/FIREPOD/FP10/Inspire1394
* BridgeCo RDAudio1/Audio5
* Mackie Onyx 1220/1620/1640 (FireWire I/O Card)
- * Mackie d.2 (FireWire Option)
+ * Mackie d.2 (FireWire Option) and d.2 Pro
* Stanton FinalScratch 2 (ScratchAmp)
* Tascam IF-FW/DM
* Behringer XENIX UFX 1204/1604
diff --git a/sound/firewire/amdtp-stream-trace.h b/sound/firewire/amdtp-stream-trace.h
index 26e7cb5..aa53c13 100644
--- a/sound/firewire/amdtp-stream-trace.h
+++ b/sound/firewire/amdtp-stream-trace.h
@@ -14,8 +14,8 @@
#include <linux/tracepoint.h>
TRACE_EVENT(amdtp_packet,
- TP_PROTO(const struct amdtp_stream *s, u32 cycles, const __be32 *cip_header, unsigned int payload_length, unsigned int data_blocks, unsigned int data_block_counter, unsigned int index),
- TP_ARGS(s, cycles, cip_header, payload_length, data_blocks, data_block_counter, index),
+ TP_PROTO(const struct amdtp_stream *s, u32 cycles, const __be32 *cip_header, unsigned int payload_length, unsigned int data_blocks, unsigned int data_block_counter, unsigned int packet_index, unsigned int index),
+ TP_ARGS(s, cycles, cip_header, payload_length, data_blocks, data_block_counter, packet_index, index),
TP_STRUCT__entry(
__field(unsigned int, second)
__field(unsigned int, cycle)
@@ -48,7 +48,7 @@ TRACE_EVENT(amdtp_packet,
__entry->payload_quadlets = payload_length / sizeof(__be32);
__entry->data_blocks = data_blocks;
__entry->data_block_counter = data_block_counter,
- __entry->packet_index = s->packet_index;
+ __entry->packet_index = packet_index;
__entry->irq = !!in_interrupt();
__entry->index = index;
),
diff --git a/sound/firewire/amdtp-stream.c b/sound/firewire/amdtp-stream.c
index 4e2f2bb78..e0faa66 100644
--- a/sound/firewire/amdtp-stream.c
+++ b/sound/firewire/amdtp-stream.c
@@ -526,7 +526,7 @@ static void build_it_pkt_header(struct amdtp_stream *s, unsigned int cycle,
}
trace_amdtp_packet(s, cycle, cip_header, payload_length, data_blocks,
- data_block_counter, index);
+ data_block_counter, s->packet_index, index);
}
static int check_cip_header(struct amdtp_stream *s, const __be32 *buf,
@@ -630,21 +630,27 @@ static int parse_ir_ctx_header(struct amdtp_stream *s, unsigned int cycle,
unsigned int *payload_length,
unsigned int *data_blocks,
unsigned int *data_block_counter,
- unsigned int *syt, unsigned int index)
+ unsigned int *syt, unsigned int packet_index, unsigned int index)
{
const __be32 *cip_header;
+ unsigned int cip_header_size;
int err;
*payload_length = be32_to_cpu(ctx_header[0]) >> ISO_DATA_LENGTH_SHIFT;
- if (*payload_length > s->ctx_data.tx.ctx_header_size +
- s->ctx_data.tx.max_ctx_payload_length) {
+
+ if (!(s->flags & CIP_NO_HEADER))
+ cip_header_size = 8;
+ else
+ cip_header_size = 0;
+
+ if (*payload_length > cip_header_size + s->ctx_data.tx.max_ctx_payload_length) {
dev_err(&s->unit->device,
"Detect jumbo payload: %04x %04x\n",
- *payload_length, s->ctx_data.tx.max_ctx_payload_length);
+ *payload_length, cip_header_size + s->ctx_data.tx.max_ctx_payload_length);
return -EIO;
}
- if (!(s->flags & CIP_NO_HEADER)) {
+ if (cip_header_size > 0) {
cip_header = ctx_header + 2;
err = check_cip_header(s, cip_header, *payload_length,
data_blocks, data_block_counter, syt);
@@ -662,7 +668,7 @@ static int parse_ir_ctx_header(struct amdtp_stream *s, unsigned int cycle,
}
trace_amdtp_packet(s, cycle, cip_header, *payload_length, *data_blocks,
- *data_block_counter, index);
+ *data_block_counter, packet_index, index);
return err;
}
@@ -701,12 +707,13 @@ static int generate_device_pkt_descs(struct amdtp_stream *s,
unsigned int packets)
{
unsigned int dbc = s->data_block_counter;
+ unsigned int packet_index = s->packet_index;
+ unsigned int queue_size = s->queue_size;
int i;
int err;
for (i = 0; i < packets; ++i) {
struct pkt_desc *desc = descs + i;
- unsigned int index = (s->packet_index + i) % s->queue_size;
unsigned int cycle;
unsigned int payload_length;
unsigned int data_blocks;
@@ -715,7 +722,7 @@ static int generate_device_pkt_descs(struct amdtp_stream *s,
cycle = compute_cycle_count(ctx_header[1]);
err = parse_ir_ctx_header(s, cycle, ctx_header, &payload_length,
- &data_blocks, &dbc, &syt, i);
+ &data_blocks, &dbc, &syt, packet_index, i);
if (err < 0)
return err;
@@ -723,13 +730,15 @@ static int generate_device_pkt_descs(struct amdtp_stream *s,
desc->syt = syt;
desc->data_blocks = data_blocks;
desc->data_block_counter = dbc;
- desc->ctx_payload = s->buffer.packets[index].buffer;
+ desc->ctx_payload = s->buffer.packets[packet_index].buffer;
if (!(s->flags & CIP_DBC_IS_END_EVENT))
dbc = (dbc + desc->data_blocks) & 0xff;
ctx_header +=
s->ctx_data.tx.ctx_header_size / sizeof(*ctx_header);
+
+ packet_index = (packet_index + 1) % queue_size;
}
s->data_block_counter = dbc;
@@ -1065,23 +1074,22 @@ static int amdtp_stream_start(struct amdtp_stream *s, int channel, int speed,
s->data_block_counter = 0;
}
- /* initialize packet buffer */
+ // initialize packet buffer.
+ max_ctx_payload_size = amdtp_stream_get_max_payload(s);
if (s->direction == AMDTP_IN_STREAM) {
dir = DMA_FROM_DEVICE;
type = FW_ISO_CONTEXT_RECEIVE;
- if (!(s->flags & CIP_NO_HEADER))
+ if (!(s->flags & CIP_NO_HEADER)) {
+ max_ctx_payload_size -= 8;
ctx_header_size = IR_CTX_HEADER_SIZE_CIP;
- else
+ } else {
ctx_header_size = IR_CTX_HEADER_SIZE_NO_CIP;
-
- max_ctx_payload_size = amdtp_stream_get_max_payload(s) -
- ctx_header_size;
+ }
} else {
dir = DMA_TO_DEVICE;
type = FW_ISO_CONTEXT_TRANSMIT;
ctx_header_size = 0; // No effect for IT context.
- max_ctx_payload_size = amdtp_stream_get_max_payload(s);
if (!(s->flags & CIP_NO_HEADER))
max_ctx_payload_size -= IT_PKT_HEADER_SIZE_CIP;
}
diff --git a/sound/firewire/bebob/bebob.c b/sound/firewire/bebob/bebob.c
index 2c8e339..daeecfa 100644
--- a/sound/firewire/bebob/bebob.c
+++ b/sound/firewire/bebob/bebob.c
@@ -387,7 +387,7 @@ static const struct ieee1394_device_id bebob_id_table[] = {
SND_BEBOB_DEV_ENTRY(VEN_BRIDGECO, 0x00010049, &spec_normal),
/* Mackie, Onyx 1220/1620/1640 (Firewire I/O Card) */
SND_BEBOB_DEV_ENTRY(VEN_MACKIE2, 0x00010065, &spec_normal),
- /* Mackie, d.2 (Firewire Option) */
+ // Mackie, d.2 (Firewire option card) and d.2 Pro (the card is built-in).
SND_BEBOB_DEV_ENTRY(VEN_MACKIE1, 0x00010067, &spec_normal),
/* Stanton, ScratchAmp */
SND_BEBOB_DEV_ENTRY(VEN_STANTON, 0x00000001, &spec_normal),
diff --git a/sound/firewire/bebob/bebob_stream.c b/sound/firewire/bebob/bebob_stream.c
index bbae047..c18017e 100644
--- a/sound/firewire/bebob/bebob_stream.c
+++ b/sound/firewire/bebob/bebob_stream.c
@@ -517,20 +517,22 @@ int snd_bebob_stream_init_duplex(struct snd_bebob *bebob)
static int keep_resources(struct snd_bebob *bebob, struct amdtp_stream *stream,
unsigned int rate, unsigned int index)
{
- struct snd_bebob_stream_formation *formation;
+ unsigned int pcm_channels;
+ unsigned int midi_ports;
struct cmp_connection *conn;
int err;
if (stream == &bebob->tx_stream) {
- formation = bebob->tx_stream_formations + index;
+ pcm_channels = bebob->tx_stream_formations[index].pcm;
+ midi_ports = bebob->midi_input_ports;
conn = &bebob->out_conn;
} else {
- formation = bebob->rx_stream_formations + index;
+ pcm_channels = bebob->rx_stream_formations[index].pcm;
+ midi_ports = bebob->midi_output_ports;
conn = &bebob->in_conn;
}
- err = amdtp_am824_set_parameters(stream, rate, formation->pcm,
- formation->midi, false);
+ err = amdtp_am824_set_parameters(stream, rate, pcm_channels, midi_ports, false);
if (err < 0)
return err;
diff --git a/sound/firewire/dice/dice-alesis.c b/sound/firewire/dice/dice-alesis.c
index 0916864..27c13b9 100644
--- a/sound/firewire/dice/dice-alesis.c
+++ b/sound/firewire/dice/dice-alesis.c
@@ -16,7 +16,7 @@ alesis_io14_tx_pcm_chs[MAX_STREAMS][SND_DICE_RATE_MODE_COUNT] = {
static const unsigned int
alesis_io26_tx_pcm_chs[MAX_STREAMS][SND_DICE_RATE_MODE_COUNT] = {
{10, 10, 4}, /* Tx0 = Analog + S/PDIF. */
- {16, 8, 0}, /* Tx1 = ADAT1 + ADAT2. */
+ {16, 4, 0}, /* Tx1 = ADAT1 + ADAT2 (available at low rate). */
};
int snd_dice_detect_alesis_formats(struct snd_dice *dice)
diff --git a/sound/firewire/dice/dice-tcelectronic.c b/sound/firewire/dice/dice-tcelectronic.c
index a8875d2..43a3bcb 100644
--- a/sound/firewire/dice/dice-tcelectronic.c
+++ b/sound/firewire/dice/dice-tcelectronic.c
@@ -38,8 +38,8 @@ static const struct dice_tc_spec konnekt_24d = {
};
static const struct dice_tc_spec konnekt_live = {
- .tx_pcm_chs = {{16, 16, 16}, {0, 0, 0} },
- .rx_pcm_chs = {{16, 16, 16}, {0, 0, 0} },
+ .tx_pcm_chs = {{16, 16, 6}, {0, 0, 0} },
+ .rx_pcm_chs = {{16, 16, 6}, {0, 0, 0} },
.has_midi = true,
};
diff --git a/sound/firewire/oxfw/oxfw.c b/sound/firewire/oxfw/oxfw.c
index 1f1e323..9eea25c4 100644
--- a/sound/firewire/oxfw/oxfw.c
+++ b/sound/firewire/oxfw/oxfw.c
@@ -355,7 +355,6 @@ static const struct ieee1394_device_id oxfw_id_table[] = {
* Onyx-i series (former models): 0x081216
* Mackie Onyx Satellite: 0x00200f
* Tapco LINK.firewire 4x6: 0x000460
- * d.2 pro: Unknown
* d.4 pro: Unknown
* U.420: Unknown
* U.420d: Unknown
diff --git a/sound/isa/gus/gus_main.c b/sound/isa/gus/gus_main.c
index afc088f..b751812 100644
--- a/sound/isa/gus/gus_main.c
+++ b/sound/isa/gus/gus_main.c
@@ -77,17 +77,8 @@ static const struct snd_kcontrol_new snd_gus_joystick_control = {
static void snd_gus_init_control(struct snd_gus_card *gus)
{
- int ret;
-
- if (!gus->ace_flag) {
- ret =
- snd_ctl_add(gus->card,
- snd_ctl_new1(&snd_gus_joystick_control,
- gus));
- if (ret)
- snd_printk(KERN_ERR "gus: snd_ctl_add failed: %d\n",
- ret);
- }
+ if (!gus->ace_flag)
+ snd_ctl_add(gus->card, snd_ctl_new1(&snd_gus_joystick_control, gus));
}
/*
diff --git a/sound/isa/sb/emu8000.c b/sound/isa/sb/emu8000.c
index 0aa545a..1c90421 100644
--- a/sound/isa/sb/emu8000.c
+++ b/sound/isa/sb/emu8000.c
@@ -1029,8 +1029,10 @@ snd_emu8000_create_mixer(struct snd_card *card, struct snd_emu8000 *emu)
memset(emu->controls, 0, sizeof(emu->controls));
for (i = 0; i < EMU8000_NUM_CONTROLS; i++) {
- if ((err = snd_ctl_add(card, emu->controls[i] = snd_ctl_new1(mixer_defs[i], emu))) < 0)
+ if ((err = snd_ctl_add(card, emu->controls[i] = snd_ctl_new1(mixer_defs[i], emu))) < 0) {
+ emu->controls[i] = NULL;
goto __error;
+ }
}
return 0;
diff --git a/sound/isa/sb/sb16_csp.c b/sound/isa/sb/sb16_csp.c
index 270af86..1528e04 100644
--- a/sound/isa/sb/sb16_csp.c
+++ b/sound/isa/sb/sb16_csp.c
@@ -1045,10 +1045,14 @@ static int snd_sb_qsound_build(struct snd_sb_csp * p)
spin_lock_init(&p->q_lock);
- if ((err = snd_ctl_add(card, p->qsound_switch = snd_ctl_new1(&snd_sb_qsound_switch, p))) < 0)
+ if ((err = snd_ctl_add(card, p->qsound_switch = snd_ctl_new1(&snd_sb_qsound_switch, p))) < 0) {
+ p->qsound_switch = NULL;
goto __error;
- if ((err = snd_ctl_add(card, p->qsound_space = snd_ctl_new1(&snd_sb_qsound_space, p))) < 0)
+ }
+ if ((err = snd_ctl_add(card, p->qsound_space = snd_ctl_new1(&snd_sb_qsound_space, p))) < 0) {
+ p->qsound_space = NULL;
goto __error;
+ }
return 0;
diff --git a/sound/isa/sb/sb16_main.c b/sound/isa/sb/sb16_main.c
index 38dc1fd..aa48705 100644
--- a/sound/isa/sb/sb16_main.c
+++ b/sound/isa/sb/sb16_main.c
@@ -846,14 +846,10 @@ int snd_sb16dsp_pcm(struct snd_sb *chip, int device)
snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &snd_sb16_playback_ops);
snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_CAPTURE, &snd_sb16_capture_ops);
- if (chip->dma16 >= 0 && chip->dma8 != chip->dma16) {
- err = snd_ctl_add(card, snd_ctl_new1(
- &snd_sb16_dma_control, chip));
- if (err)
- return err;
- } else {
+ if (chip->dma16 >= 0 && chip->dma8 != chip->dma16)
+ snd_ctl_add(card, snd_ctl_new1(&snd_sb16_dma_control, chip));
+ else
pcm->info_flags = SNDRV_PCM_INFO_HALF_DUPLEX;
- }
snd_pcm_set_managed_buffer_all(pcm, SNDRV_DMA_TYPE_DEV,
card->dev, 64*1024, 128*1024);
diff --git a/sound/isa/sb/sb8.c b/sound/isa/sb/sb8.c
index 438109f..ae93191 100644
--- a/sound/isa/sb/sb8.c
+++ b/sound/isa/sb/sb8.c
@@ -96,10 +96,6 @@ static int snd_sb8_probe(struct device *pdev, unsigned int dev)
/* block the 0x388 port to avoid PnP conflicts */
acard->fm_res = request_region(0x388, 4, "SoundBlaster FM");
- if (!acard->fm_res) {
- err = -EBUSY;
- goto _err;
- }
if (port[dev] != SNDRV_AUTO_PORT) {
if ((err = snd_sbdsp_create(card, port[dev], irq[dev],
diff --git a/sound/pci/hda/hda_generic.c b/sound/pci/hda/hda_generic.c
index 9690329..7c49a7e 100644
--- a/sound/pci/hda/hda_generic.c
+++ b/sound/pci/hda/hda_generic.c
@@ -1202,11 +1202,17 @@ static const char *get_line_out_pfx(struct hda_codec *codec, int ch,
*index = ch;
return "Headphone";
case AUTO_PIN_LINE_OUT:
- /* This deals with the case where we have two DACs and
- * one LO, one HP and one Speaker */
- if (!ch && cfg->speaker_outs && cfg->hp_outs) {
- bool hp_lo_shared = !path_has_mixer(codec, spec->hp_paths[0], ctl_type);
- bool spk_lo_shared = !path_has_mixer(codec, spec->speaker_paths[0], ctl_type);
+ /* This deals with the case where one HP or one Speaker or
+ * one HP + one Speaker need to share the DAC with LO
+ */
+ if (!ch) {
+ bool hp_lo_shared = false, spk_lo_shared = false;
+
+ if (cfg->speaker_outs)
+ spk_lo_shared = !path_has_mixer(codec,
+ spec->speaker_paths[0], ctl_type);
+ if (cfg->hp_outs)
+ hp_lo_shared = !path_has_mixer(codec, spec->hp_paths[0], ctl_type);
if (hp_lo_shared && spk_lo_shared)
return spec->vmaster_mute.hook ? "PCM" : "Master";
if (hp_lo_shared)
diff --git a/sound/pci/hda/ideapad_s740_helper.c b/sound/pci/hda/ideapad_s740_helper.c
new file mode 100644
index 0000000..564b908
--- /dev/null
+++ b/sound/pci/hda/ideapad_s740_helper.c
@@ -0,0 +1,492 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Fixes for Lenovo Ideapad S740, to be included from codec driver */
+
+static const struct hda_verb alc285_ideapad_s740_coefs[] = {
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x10 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0320 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0041 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0041 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x007f },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x007f },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x003c },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0011 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x003c },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0011 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x000c },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x001a },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x000c },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x001a },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x000f },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0042 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x000f },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0042 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0010 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0040 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0010 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0040 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0003 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0009 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0003 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0009 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x001c },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x004c },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x001c },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x004c },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x001d },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x004e },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x001d },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x004e },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x001b },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x001b },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0019 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0025 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0019 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0025 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0018 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0037 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0018 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0037 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x001a },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0040 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x001a },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0040 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0016 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0076 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0016 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0076 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0017 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0010 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0017 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0010 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0015 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0015 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0015 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0015 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0007 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0086 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0007 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0086 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0002 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0002 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0002 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0002 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0042 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0042 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x007f },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x007f },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x003c },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0011 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x003c },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0011 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x000c },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x002a },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x000c },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x002a },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x000f },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0046 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x000f },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0046 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0010 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0044 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0010 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0044 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0003 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0009 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0003 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0009 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x001c },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x004c },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x001c },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x004c },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x001b },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x001b },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0019 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0025 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0019 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0025 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0018 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0037 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0018 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0037 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x001a },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0040 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x001a },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0040 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0016 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0076 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0016 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0076 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0017 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0010 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0017 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0010 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0015 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0015 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0015 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0015 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0007 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0086 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0007 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0086 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0002 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0002 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
+{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0002 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
+{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+{}
+};
+
+static void alc285_fixup_ideapad_s740_coef(struct hda_codec *codec,
+ const struct hda_fixup *fix,
+ int action)
+{
+ switch (action) {
+ case HDA_FIXUP_ACT_PRE_PROBE:
+ snd_hda_add_verbs(codec, alc285_ideapad_s740_coefs);
+ break;
+ }
+}
diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
index a980a4e..8098088 100644
--- a/sound/pci/hda/patch_conexant.c
+++ b/sound/pci/hda/patch_conexant.c
@@ -930,20 +930,21 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
SND_PCI_QUIRK(0x103c, 0x8079, "HP EliteBook 840 G3", CXT_FIXUP_HP_DOCK),
SND_PCI_QUIRK(0x103c, 0x807C, "HP EliteBook 820 G3", CXT_FIXUP_HP_DOCK),
SND_PCI_QUIRK(0x103c, 0x80FD, "HP ProBook 640 G2", CXT_FIXUP_HP_DOCK),
+ SND_PCI_QUIRK(0x103c, 0x8115, "HP Z1 Gen3", CXT_FIXUP_HP_GATE_MIC),
+ SND_PCI_QUIRK(0x103c, 0x814f, "HP ZBook 15u G3", CXT_FIXUP_MUTE_LED_GPIO),
+ SND_PCI_QUIRK(0x103c, 0x8174, "HP Spectre x360", CXT_FIXUP_HP_SPECTRE),
+ SND_PCI_QUIRK(0x103c, 0x822e, "HP ProBook 440 G4", CXT_FIXUP_MUTE_LED_GPIO),
SND_PCI_QUIRK(0x103c, 0x828c, "HP EliteBook 840 G4", CXT_FIXUP_HP_DOCK),
+ SND_PCI_QUIRK(0x103c, 0x8299, "HP 800 G3 SFF", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x103c, 0x829a, "HP 800 G3 DM", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x103c, 0x836e, "HP ProBook 455 G5", CXT_FIXUP_MUTE_LED_GPIO),
+ SND_PCI_QUIRK(0x103c, 0x837f, "HP ProBook 470 G5", CXT_FIXUP_MUTE_LED_GPIO),
SND_PCI_QUIRK(0x103c, 0x83b2, "HP EliteBook 840 G5", CXT_FIXUP_HP_DOCK),
SND_PCI_QUIRK(0x103c, 0x83b3, "HP EliteBook 830 G5", CXT_FIXUP_HP_DOCK),
SND_PCI_QUIRK(0x103c, 0x83d3, "HP ProBook 640 G4", CXT_FIXUP_HP_DOCK),
- SND_PCI_QUIRK(0x103c, 0x8174, "HP Spectre x360", CXT_FIXUP_HP_SPECTRE),
- SND_PCI_QUIRK(0x103c, 0x8115, "HP Z1 Gen3", CXT_FIXUP_HP_GATE_MIC),
- SND_PCI_QUIRK(0x103c, 0x814f, "HP ZBook 15u G3", CXT_FIXUP_MUTE_LED_GPIO),
- SND_PCI_QUIRK(0x103c, 0x822e, "HP ProBook 440 G4", CXT_FIXUP_MUTE_LED_GPIO),
- SND_PCI_QUIRK(0x103c, 0x836e, "HP ProBook 455 G5", CXT_FIXUP_MUTE_LED_GPIO),
- SND_PCI_QUIRK(0x103c, 0x837f, "HP ProBook 470 G5", CXT_FIXUP_MUTE_LED_GPIO),
- SND_PCI_QUIRK(0x103c, 0x8299, "HP 800 G3 SFF", CXT_FIXUP_HP_MIC_NO_PRESENCE),
- SND_PCI_QUIRK(0x103c, 0x829a, "HP 800 G3 DM", CXT_FIXUP_HP_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x103c, 0x8402, "HP ProBook 645 G4", CXT_FIXUP_MUTE_LED_GPIO),
SND_PCI_QUIRK(0x103c, 0x8427, "HP ZBook Studio G5", CXT_FIXUP_HP_ZBOOK_MUTE_LED),
+ SND_PCI_QUIRK(0x103c, 0x844f, "HP ZBook Studio G5", CXT_FIXUP_HP_ZBOOK_MUTE_LED),
SND_PCI_QUIRK(0x103c, 0x8455, "HP Z2 G4", CXT_FIXUP_HP_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x103c, 0x8456, "HP Z2 G4 SFF", CXT_FIXUP_HP_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x103c, 0x8457, "HP Z2 G4 mini", CXT_FIXUP_HP_MIC_NO_PRESENCE),
diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
index 8c6f10c..6d2a4df 100644
--- a/sound/pci/hda/patch_hdmi.c
+++ b/sound/pci/hda/patch_hdmi.c
@@ -2653,7 +2653,7 @@ static void generic_acomp_pin_eld_notify(void *audio_ptr, int port, int dev_id)
/* skip notification during system suspend (but not in runtime PM);
* the state will be updated at resume
*/
- if (snd_power_get_state(codec->card) != SNDRV_CTL_POWER_D0)
+ if (codec->core.dev.power.power_state.event == PM_EVENT_SUSPEND)
return;
/* ditto during suspend/resume process itself */
if (snd_hdac_is_in_pm(&codec->core))
@@ -2839,7 +2839,7 @@ static void intel_pin_eld_notify(void *audio_ptr, int port, int pipe)
/* skip notification during system suspend (but not in runtime PM);
* the state will be updated at resume
*/
- if (snd_power_get_state(codec->card) != SNDRV_CTL_POWER_D0)
+ if (codec->core.dev.power.power_state.event == PM_EVENT_SUSPEND)
return;
/* ditto during suspend/resume process itself */
if (snd_hdac_is_in_pm(&codec->core))
diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
index 58946d0..d8424d2 100644
--- a/sound/pci/hda/patch_realtek.c
+++ b/sound/pci/hda/patch_realtek.c
@@ -395,7 +395,6 @@ static void alc_fill_eapd_coef(struct hda_codec *codec)
case 0x10ec0282:
case 0x10ec0283:
case 0x10ec0286:
- case 0x10ec0287:
case 0x10ec0288:
case 0x10ec0285:
case 0x10ec0298:
@@ -406,6 +405,10 @@ static void alc_fill_eapd_coef(struct hda_codec *codec)
case 0x10ec0275:
alc_update_coef_idx(codec, 0xe, 0, 1<<0);
break;
+ case 0x10ec0287:
+ alc_update_coef_idx(codec, 0x10, 1<<9, 0);
+ alc_write_coef_idx(codec, 0x8, 0x4ab7);
+ break;
case 0x10ec0293:
alc_update_coef_idx(codec, 0xa, 1<<13, 0);
break;
@@ -2470,13 +2473,13 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
ALC882_FIXUP_ACER_ASPIRE_8930G),
SND_PCI_QUIRK(0x1025, 0x0146, "Acer Aspire 6935G",
ALC882_FIXUP_ACER_ASPIRE_8930G),
+ SND_PCI_QUIRK(0x1025, 0x0142, "Acer Aspire 7730G",
+ ALC882_FIXUP_ACER_ASPIRE_4930G),
+ SND_PCI_QUIRK(0x1025, 0x0155, "Packard-Bell M5120", ALC882_FIXUP_PB_M5210),
SND_PCI_QUIRK(0x1025, 0x015e, "Acer Aspire 6930G",
ALC882_FIXUP_ACER_ASPIRE_4930G),
SND_PCI_QUIRK(0x1025, 0x0166, "Acer Aspire 6530G",
ALC882_FIXUP_ACER_ASPIRE_4930G),
- SND_PCI_QUIRK(0x1025, 0x0142, "Acer Aspire 7730G",
- ALC882_FIXUP_ACER_ASPIRE_4930G),
- SND_PCI_QUIRK(0x1025, 0x0155, "Packard-Bell M5120", ALC882_FIXUP_PB_M5210),
SND_PCI_QUIRK(0x1025, 0x021e, "Acer Aspire 5739G",
ALC882_FIXUP_ACER_ASPIRE_4930G),
SND_PCI_QUIRK(0x1025, 0x0259, "Acer Aspire 5935", ALC889_FIXUP_DAC_ROUTE),
@@ -2489,11 +2492,11 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
SND_PCI_QUIRK(0x1043, 0x835f, "Asus Eee 1601", ALC888_FIXUP_EEE1601),
SND_PCI_QUIRK(0x1043, 0x84bc, "ASUS ET2700", ALC887_FIXUP_ASUS_BASS),
SND_PCI_QUIRK(0x1043, 0x8691, "ASUS ROG Ranger VIII", ALC882_FIXUP_GPIO3),
+ SND_PCI_QUIRK(0x104d, 0x9043, "Sony Vaio VGC-LN51JGB", ALC882_FIXUP_NO_PRIMARY_HP),
+ SND_PCI_QUIRK(0x104d, 0x9044, "Sony VAIO AiO", ALC882_FIXUP_NO_PRIMARY_HP),
SND_PCI_QUIRK(0x104d, 0x9047, "Sony Vaio TT", ALC889_FIXUP_VAIO_TT),
SND_PCI_QUIRK(0x104d, 0x905a, "Sony Vaio Z", ALC882_FIXUP_NO_PRIMARY_HP),
SND_PCI_QUIRK(0x104d, 0x9060, "Sony Vaio VPCL14M1R", ALC882_FIXUP_NO_PRIMARY_HP),
- SND_PCI_QUIRK(0x104d, 0x9043, "Sony Vaio VGC-LN51JGB", ALC882_FIXUP_NO_PRIMARY_HP),
- SND_PCI_QUIRK(0x104d, 0x9044, "Sony VAIO AiO", ALC882_FIXUP_NO_PRIMARY_HP),
/* All Apple entries are in codec SSIDs */
SND_PCI_QUIRK(0x106b, 0x00a0, "MacBookPro 3,1", ALC889_FIXUP_MBP_VREF),
@@ -2536,9 +2539,19 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
SND_PCI_QUIRK(0x1462, 0xda57, "MSI Z270-Gaming", ALC1220_FIXUP_GB_DUAL_CODECS),
SND_PCI_QUIRK_VENDOR(0x1462, "MSI", ALC882_FIXUP_GPIO3),
SND_PCI_QUIRK(0x147b, 0x107a, "Abit AW9D-MAX", ALC882_FIXUP_ABIT_AW9D_MAX),
+ SND_PCI_QUIRK(0x1558, 0x50d3, "Clevo PC50[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ SND_PCI_QUIRK(0x1558, 0x65d1, "Clevo PB51[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ SND_PCI_QUIRK(0x1558, 0x65d2, "Clevo PB51R[CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ SND_PCI_QUIRK(0x1558, 0x65e1, "Clevo PB51[ED][DF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ SND_PCI_QUIRK(0x1558, 0x65e5, "Clevo PC50D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ SND_PCI_QUIRK(0x1558, 0x67d1, "Clevo PB71[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ SND_PCI_QUIRK(0x1558, 0x67e1, "Clevo PB71[DE][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ SND_PCI_QUIRK(0x1558, 0x67e5, "Clevo PC70D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ SND_PCI_QUIRK(0x1558, 0x70d1, "Clevo PC70[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ SND_PCI_QUIRK(0x1558, 0x7714, "Clevo X170", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
SND_PCI_QUIRK(0x1558, 0x9501, "Clevo P950HR", ALC1220_FIXUP_CLEVO_P950),
SND_PCI_QUIRK(0x1558, 0x9506, "Clevo P955HQ", ALC1220_FIXUP_CLEVO_P950),
- SND_PCI_QUIRK(0x1558, 0x950A, "Clevo P955H[PR]", ALC1220_FIXUP_CLEVO_P950),
+ SND_PCI_QUIRK(0x1558, 0x950a, "Clevo P955H[PR]", ALC1220_FIXUP_CLEVO_P950),
SND_PCI_QUIRK(0x1558, 0x95e1, "Clevo P95xER", ALC1220_FIXUP_CLEVO_P950),
SND_PCI_QUIRK(0x1558, 0x95e2, "Clevo P950ER", ALC1220_FIXUP_CLEVO_P950),
SND_PCI_QUIRK(0x1558, 0x95e3, "Clevo P955[ER]T", ALC1220_FIXUP_CLEVO_P950),
@@ -2548,14 +2561,6 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
SND_PCI_QUIRK(0x1558, 0x96e1, "Clevo P960[ER][CDFN]-K", ALC1220_FIXUP_CLEVO_P950),
SND_PCI_QUIRK(0x1558, 0x97e1, "Clevo P970[ER][CDFN]", ALC1220_FIXUP_CLEVO_P950),
SND_PCI_QUIRK(0x1558, 0x97e2, "Clevo P970RC-M", ALC1220_FIXUP_CLEVO_P950),
- SND_PCI_QUIRK(0x1558, 0x50d3, "Clevo PC50[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
- SND_PCI_QUIRK(0x1558, 0x65d1, "Clevo PB51[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
- SND_PCI_QUIRK(0x1558, 0x65d2, "Clevo PB51R[CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
- SND_PCI_QUIRK(0x1558, 0x65e1, "Clevo PB51[ED][DF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
- SND_PCI_QUIRK(0x1558, 0x67d1, "Clevo PB71[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
- SND_PCI_QUIRK(0x1558, 0x67e1, "Clevo PB71[DE][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
- SND_PCI_QUIRK(0x1558, 0x70d1, "Clevo PC70[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
- SND_PCI_QUIRK(0x1558, 0x7714, "Clevo X170", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
SND_PCI_QUIRK_VENDOR(0x1558, "Clevo laptop", ALC882_FIXUP_EAPD),
SND_PCI_QUIRK(0x161f, 0x2054, "Medion laptop", ALC883_FIXUP_EAPD),
SND_PCI_QUIRK(0x17aa, 0x3a0d, "Lenovo Y530", ALC882_FIXUP_LENOVO_Y530),
@@ -2598,6 +2603,28 @@ static const struct hda_model_fixup alc882_fixup_models[] = {
{}
};
+static const struct snd_hda_pin_quirk alc882_pin_fixup_tbl[] = {
+ SND_HDA_PIN_QUIRK(0x10ec1220, 0x1043, "ASUS", ALC1220_FIXUP_CLEVO_P950,
+ {0x14, 0x01014010},
+ {0x15, 0x01011012},
+ {0x16, 0x01016011},
+ {0x18, 0x01a19040},
+ {0x19, 0x02a19050},
+ {0x1a, 0x0181304f},
+ {0x1b, 0x0221401f},
+ {0x1e, 0x01456130}),
+ SND_HDA_PIN_QUIRK(0x10ec1220, 0x1462, "MS-7C35", ALC1220_FIXUP_CLEVO_P950,
+ {0x14, 0x01015010},
+ {0x15, 0x01011012},
+ {0x16, 0x01011011},
+ {0x18, 0x01a11040},
+ {0x19, 0x02a19050},
+ {0x1a, 0x0181104f},
+ {0x1b, 0x0221401f},
+ {0x1e, 0x01451130}),
+ {}
+};
+
/*
* BIOS auto configuration
*/
@@ -2639,6 +2666,7 @@ static int patch_alc882(struct hda_codec *codec)
snd_hda_pick_fixup(codec, alc882_fixup_models, alc882_fixup_tbl,
alc882_fixups);
+ snd_hda_pick_pin_fixup(codec, alc882_pin_fixup_tbl, alc882_fixups, true);
snd_hda_apply_fixup(codec, HDA_FIXUP_ACT_PRE_PROBE);
alc_auto_parse_customize_define(codec);
@@ -3927,6 +3955,15 @@ static void alc271_fixup_dmic(struct hda_codec *codec,
snd_hda_sequence_write(codec, verbs);
}
+/* Fix the speaker amp after resume, etc */
+static void alc269vb_fixup_aspire_e1_coef(struct hda_codec *codec,
+ const struct hda_fixup *fix,
+ int action)
+{
+ if (action == HDA_FIXUP_ACT_INIT)
+ alc_update_coef_idx(codec, 0x0d, 0x6000, 0x6000);
+}
+
static void alc269_fixup_pcm_44k(struct hda_codec *codec,
const struct hda_fixup *fix, int action)
{
@@ -4320,6 +4357,35 @@ static void alc245_fixup_hp_x360_amp(struct hda_codec *codec,
}
}
+/* toggle GPIO2 each time a stream is started; we use the PREPARE state instead */
+static void alc274_hp_envy_pcm_hook(struct hda_pcm_stream *hinfo,
+ struct hda_codec *codec,
+ struct snd_pcm_substream *substream,
+ int action)
+{
+ switch (action) {
+ case HDA_GEN_PCM_ACT_PREPARE:
+ alc_update_gpio_data(codec, 0x04, true);
+ break;
+ case HDA_GEN_PCM_ACT_CLEANUP:
+ alc_update_gpio_data(codec, 0x04, false);
+ break;
+ }
+}
+
+static void alc274_fixup_hp_envy_gpio(struct hda_codec *codec,
+ const struct hda_fixup *fix,
+ int action)
+{
+ struct alc_spec *spec = codec->spec;
+
+ if (action == HDA_FIXUP_ACT_PROBE) {
+ spec->gpio_mask |= 0x04;
+ spec->gpio_dir |= 0x04;
+ spec->gen.pcm_playback_hook = alc274_hp_envy_pcm_hook;
+ }
+}
+
static void alc_update_coef_led(struct hda_codec *codec,
struct alc_coef_led *led,
bool polarity, bool on)
@@ -4429,6 +4495,25 @@ static void alc236_fixup_hp_mute_led(struct hda_codec *codec,
alc236_fixup_hp_coef_micmute_led(codec, fix, action);
}
+static void alc236_fixup_hp_micmute_led_vref(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+{
+ struct alc_spec *spec = codec->spec;
+
+ if (action == HDA_FIXUP_ACT_PRE_PROBE) {
+ spec->cap_mute_led_nid = 0x1a;
+ snd_hda_gen_add_micmute_led_cdev(codec, vref_micmute_led_set);
+ codec->power_filter = led_power_filter;
+ }
+}
+
+static void alc236_fixup_hp_mute_led_micmute_vref(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+{
+ alc236_fixup_hp_mute_led_coefbit(codec, fix, action);
+ alc236_fixup_hp_micmute_led_vref(codec, fix, action);
+}
+
#if IS_REACHABLE(CONFIG_INPUT)
static void gpio2_mic_hotkey_event(struct hda_codec *codec,
struct hda_jack_callback *event)
@@ -5658,6 +5743,18 @@ static void alc_fixup_tpt470_dacs(struct hda_codec *codec,
spec->gen.preferred_dacs = preferred_pairs;
}
+static void alc295_fixup_asus_dacs(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+{
+ static const hda_nid_t preferred_pairs[] = {
+ 0x17, 0x02, 0x21, 0x03, 0
+ };
+ struct alc_spec *spec = codec->spec;
+
+ if (action == HDA_FIXUP_ACT_PRE_PROBE)
+ spec->gen.preferred_dacs = preferred_pairs;
+}
+
static void alc_shutup_dell_xps13(struct hda_codec *codec)
{
struct alc_spec *spec = codec->spec;
@@ -6173,6 +6270,35 @@ static void alc294_fixup_gx502_hp(struct hda_codec *codec,
}
}
+static void alc294_gu502_toggle_output(struct hda_codec *codec,
+ struct hda_jack_callback *cb)
+{
+ /* Windows sets 0x10 to 0x8420 for Node 0x20 which is
+ * responsible for switching between speakers and headphones
+ */
+ if (snd_hda_jack_detect_state(codec, 0x21) == HDA_JACK_PRESENT)
+ alc_write_coef_idx(codec, 0x10, 0x8420);
+ else
+ alc_write_coef_idx(codec, 0x10, 0x0a20);
+}
+
+static void alc294_fixup_gu502_hp(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+{
+ if (!is_jack_detectable(codec, 0x21))
+ return;
+
+ switch (action) {
+ case HDA_FIXUP_ACT_PRE_PROBE:
+ snd_hda_jack_detect_enable_callback(codec, 0x21,
+ alc294_gu502_toggle_output);
+ break;
+ case HDA_FIXUP_ACT_INIT:
+ alc294_gu502_toggle_output(codec, NULL);
+ break;
+ }
+}
+
static void alc285_fixup_hp_gpio_amp_init(struct hda_codec *codec,
const struct hda_fixup *fix, int action)
{
@@ -6223,6 +6349,9 @@ static void alc_fixup_thinkpad_acpi(struct hda_codec *codec,
/* for alc295_fixup_hp_top_speakers */
#include "hp_x360_helper.c"
+/* for alc285_fixup_ideapad_s740_coef() */
+#include "ideapad_s740_helper.c"
+
enum {
ALC269_FIXUP_GPIO2,
ALC269_FIXUP_SONY_VAIO,
@@ -6301,6 +6430,7 @@ enum {
ALC283_FIXUP_HEADSET_MIC,
ALC255_FIXUP_MIC_MUTE_LED,
ALC282_FIXUP_ASPIRE_V5_PINS,
+ ALC269VB_FIXUP_ASPIRE_E1_COEF,
ALC280_FIXUP_HP_GPIO4,
ALC286_FIXUP_HP_GPIO_LED,
ALC280_FIXUP_HP_GPIO2_MIC_HOTKEY,
@@ -6386,10 +6516,14 @@ enum {
ALC294_FIXUP_ASUS_GX502_HP,
ALC294_FIXUP_ASUS_GX502_PINS,
ALC294_FIXUP_ASUS_GX502_VERBS,
+ ALC294_FIXUP_ASUS_GU502_HP,
+ ALC294_FIXUP_ASUS_GU502_PINS,
+ ALC294_FIXUP_ASUS_GU502_VERBS,
ALC285_FIXUP_HP_GPIO_LED,
ALC285_FIXUP_HP_MUTE_LED,
ALC236_FIXUP_HP_GPIO_LED,
ALC236_FIXUP_HP_MUTE_LED,
+ ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF,
ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET,
ALC295_FIXUP_ASUS_MIC_NO_PRESENCE,
ALC269VC_FIXUP_ACER_VCOPPERBOX_PINS,
@@ -6405,10 +6539,13 @@ enum {
ALC269_FIXUP_LEMOTE_A1802,
ALC269_FIXUP_LEMOTE_A190X,
ALC256_FIXUP_INTEL_NUC8_RUGGED,
+ ALC233_FIXUP_INTEL_NUC8_DMIC,
+ ALC233_FIXUP_INTEL_NUC8_BOOST,
ALC256_FIXUP_INTEL_NUC10,
ALC255_FIXUP_XIAOMI_HEADSET_MIC,
ALC274_FIXUP_HP_MIC,
ALC274_FIXUP_HP_HEADSET_MIC,
+ ALC274_FIXUP_HP_ENVY_GPIO,
ALC256_FIXUP_ASUS_HPE,
ALC285_FIXUP_THINKPAD_NO_BASS_SPK_HEADSET_JACK,
ALC287_FIXUP_HP_GPIO_LED,
@@ -6417,6 +6554,12 @@ enum {
ALC282_FIXUP_ACER_DISABLE_LINEOUT,
ALC255_FIXUP_ACER_LIMIT_INT_MIC_BOOST,
ALC256_FIXUP_ACER_HEADSET_MIC,
+ ALC285_FIXUP_IDEAPAD_S740_COEF,
+ ALC295_FIXUP_ASUS_DACS,
+ ALC295_FIXUP_HP_OMEN,
+ ALC285_FIXUP_HP_SPECTRE_X360,
+ ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP,
+ ALC623_FIXUP_LENOVO_THINKSTATION_P340,
};
static const struct hda_fixup alc269_fixups[] = {
@@ -6979,6 +7122,10 @@ static const struct hda_fixup alc269_fixups[] = {
{ },
},
},
+ [ALC269VB_FIXUP_ASPIRE_E1_COEF] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc269vb_fixup_aspire_e1_coef,
+ },
[ALC280_FIXUP_HP_GPIO4] = {
.type = HDA_FIXUP_FUNC,
.v.func = alc280_fixup_hp_gpio4,
@@ -7122,6 +7269,16 @@ static const struct hda_fixup alc269_fixups[] = {
.type = HDA_FIXUP_FUNC,
.v.func = alc233_fixup_lenovo_line2_mic_hotkey,
},
+ [ALC233_FIXUP_INTEL_NUC8_DMIC] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc_fixup_inv_dmic,
+ .chained = true,
+ .chain_id = ALC233_FIXUP_INTEL_NUC8_BOOST,
+ },
+ [ALC233_FIXUP_INTEL_NUC8_BOOST] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc269_fixup_limit_int_mic_boost
+ },
[ALC255_FIXUP_DELL_SPK_NOISE] = {
.type = HDA_FIXUP_FUNC,
.v.func = alc_fixup_disable_aamix,
@@ -7605,6 +7762,35 @@ static const struct hda_fixup alc269_fixups[] = {
.type = HDA_FIXUP_FUNC,
.v.func = alc294_fixup_gx502_hp,
},
+ [ALC294_FIXUP_ASUS_GU502_PINS] = {
+ .type = HDA_FIXUP_PINS,
+ .v.pins = (const struct hda_pintbl[]) {
+ { 0x19, 0x01a11050 }, /* rear HP mic */
+ { 0x1a, 0x01a11830 }, /* rear external mic */
+ { 0x21, 0x012110f0 }, /* rear HP out */
+ { }
+ },
+ .chained = true,
+ .chain_id = ALC294_FIXUP_ASUS_GU502_VERBS
+ },
+ [ALC294_FIXUP_ASUS_GU502_VERBS] = {
+ .type = HDA_FIXUP_VERBS,
+ .v.verbs = (const struct hda_verb[]) {
+ /* set 0x15 to HP-OUT ctrl */
+ { 0x15, AC_VERB_SET_PIN_WIDGET_CONTROL, 0xc0 },
+ /* unmute the 0x15 amp */
+ { 0x15, AC_VERB_SET_AMP_GAIN_MUTE, 0xb000 },
+ /* set 0x1b to HP-OUT */
+ { 0x1b, AC_VERB_SET_PIN_WIDGET_CONTROL, 0x24 },
+ { }
+ },
+ .chained = true,
+ .chain_id = ALC294_FIXUP_ASUS_GU502_HP
+ },
+ [ALC294_FIXUP_ASUS_GU502_HP] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc294_fixup_gu502_hp,
+ },
[ALC294_FIXUP_ASUS_COEF_1B] = {
.type = HDA_FIXUP_VERBS,
.v.verbs = (const struct hda_verb[]) {
@@ -7632,6 +7818,10 @@ static const struct hda_fixup alc269_fixups[] = {
.type = HDA_FIXUP_FUNC,
.v.func = alc236_fixup_hp_mute_led,
},
+ [ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc236_fixup_hp_mute_led_micmute_vref,
+ },
[ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET] = {
.type = HDA_FIXUP_VERBS,
.v.verbs = (const struct hda_verb[]) {
@@ -7830,6 +8020,10 @@ static const struct hda_fixup alc269_fixups[] = {
.chained = true,
.chain_id = ALC274_FIXUP_HP_MIC
},
+ [ALC274_FIXUP_HP_ENVY_GPIO] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc274_fixup_hp_envy_gpio,
+ },
[ALC256_FIXUP_ASUS_HPE] = {
.type = HDA_FIXUP_VERBS,
.v.verbs = (const struct hda_verb[]) {
@@ -7887,6 +8081,57 @@ static const struct hda_fixup alc269_fixups[] = {
.chained = true,
.chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC
},
+ [ALC285_FIXUP_IDEAPAD_S740_COEF] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc285_fixup_ideapad_s740_coef,
+ .chained = true,
+ .chain_id = ALC269_FIXUP_THINKPAD_ACPI,
+ },
+ [ALC295_FIXUP_ASUS_DACS] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc295_fixup_asus_dacs,
+ },
+ [ALC295_FIXUP_HP_OMEN] = {
+ .type = HDA_FIXUP_PINS,
+ .v.pins = (const struct hda_pintbl[]) {
+ { 0x12, 0xb7a60130 },
+ { 0x13, 0x40000000 },
+ { 0x14, 0x411111f0 },
+ { 0x16, 0x411111f0 },
+ { 0x17, 0x90170110 },
+ { 0x18, 0x411111f0 },
+ { 0x19, 0x02a11030 },
+ { 0x1a, 0x411111f0 },
+ { 0x1b, 0x04a19030 },
+ { 0x1d, 0x40600001 },
+ { 0x1e, 0x411111f0 },
+ { 0x21, 0x03211020 },
+ {}
+ },
+ .chained = true,
+ .chain_id = ALC269_FIXUP_HP_LINE1_MIC1_LED,
+ },
+ [ALC285_FIXUP_HP_SPECTRE_X360] = {
+ .type = HDA_FIXUP_PINS,
+ .v.pins = (const struct hda_pintbl[]) {
+ { 0x14, 0x90170110 }, /* enable top speaker */
+ {}
+ },
+ .chained = true,
+ .chain_id = ALC285_FIXUP_SPEAKER2_TO_DAC1,
+ },
+ [ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc285_fixup_ideapad_s740_coef,
+ .chained = true,
+ .chain_id = ALC285_FIXUP_THINKPAD_HEADSET_JACK,
+ },
+ [ALC623_FIXUP_LENOVO_THINKSTATION_P340] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc_fixup_no_shutup,
+ .chained = true,
+ .chain_id = ALC283_FIXUP_HEADSET_MIC,
+ },
};
static const struct snd_pci_quirk alc269_fixup_tbl[] = {
@@ -7895,12 +8140,13 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
SND_PCI_QUIRK(0x1025, 0x0349, "Acer AOD260", ALC269_FIXUP_INV_DMIC),
SND_PCI_QUIRK(0x1025, 0x047c, "Acer AC700", ALC269_FIXUP_ACER_AC700),
SND_PCI_QUIRK(0x1025, 0x072d, "Acer Aspire V5-571G", ALC269_FIXUP_ASPIRE_HEADSET_MIC),
- SND_PCI_QUIRK(0x1025, 0x080d, "Acer Aspire V5-122P", ALC269_FIXUP_ASPIRE_HEADSET_MIC),
SND_PCI_QUIRK(0x1025, 0x0740, "Acer AO725", ALC271_FIXUP_HP_GATE_MIC_JACK),
SND_PCI_QUIRK(0x1025, 0x0742, "Acer AO756", ALC271_FIXUP_HP_GATE_MIC_JACK),
SND_PCI_QUIRK(0x1025, 0x0762, "Acer Aspire E1-472", ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572),
SND_PCI_QUIRK(0x1025, 0x0775, "Acer Aspire E1-572", ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572),
SND_PCI_QUIRK(0x1025, 0x079b, "Acer Aspire V5-573G", ALC282_FIXUP_ASPIRE_V5_PINS),
+ SND_PCI_QUIRK(0x1025, 0x080d, "Acer Aspire V5-122P", ALC269_FIXUP_ASPIRE_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1025, 0x0840, "Acer Aspire E1", ALC269VB_FIXUP_ASPIRE_E1_COEF),
SND_PCI_QUIRK(0x1025, 0x101c, "Acer Veriton N2510G", ALC269_FIXUP_LIFEBOOK),
SND_PCI_QUIRK(0x1025, 0x102b, "Acer Aspire C24-860", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1025, 0x1065, "Acer Aspire C20-820", ALC269VC_FIXUP_ACER_HEADSET_MIC),
@@ -7955,8 +8201,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
SND_PCI_QUIRK(0x1028, 0x0738, "Dell Precision 5820", ALC269_FIXUP_NO_SHUTUP),
SND_PCI_QUIRK(0x1028, 0x075c, "Dell XPS 27 7760", ALC298_FIXUP_SPK_VOLUME),
SND_PCI_QUIRK(0x1028, 0x075d, "Dell AIO", ALC298_FIXUP_SPK_VOLUME),
- SND_PCI_QUIRK(0x1028, 0x07b0, "Dell Precision 7520", ALC295_FIXUP_DISABLE_DAC3),
SND_PCI_QUIRK(0x1028, 0x0798, "Dell Inspiron 17 7000 Gaming", ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER),
+ SND_PCI_QUIRK(0x1028, 0x07b0, "Dell Precision 7520", ALC295_FIXUP_DISABLE_DAC3),
SND_PCI_QUIRK(0x1028, 0x080c, "Dell WYSE", ALC225_FIXUP_DELL_WYSE_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x084b, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB),
SND_PCI_QUIRK(0x1028, 0x084e, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB),
@@ -7966,8 +8212,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
SND_PCI_QUIRK(0x1028, 0x08ad, "Dell WYSE AIO", ALC225_FIXUP_DELL_WYSE_AIO_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x08ae, "Dell WYSE NB", ALC225_FIXUP_DELL1_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x0935, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB),
- SND_PCI_QUIRK(0x1028, 0x097e, "Dell Precision", ALC289_FIXUP_DUAL_SPK),
SND_PCI_QUIRK(0x1028, 0x097d, "Dell Precision", ALC289_FIXUP_DUAL_SPK),
+ SND_PCI_QUIRK(0x1028, 0x097e, "Dell Precision", ALC289_FIXUP_DUAL_SPK),
SND_PCI_QUIRK(0x1028, 0x098d, "Dell Precision", ALC233_FIXUP_ASUS_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x09bf, "Dell Precision", ALC233_FIXUP_ASUS_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x0a2e, "Dell", ALC236_FIXUP_DELL_AIO_HEADSET_MIC),
@@ -7978,35 +8224,18 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),
SND_PCI_QUIRK(0x103c, 0x18e6, "HP", ALC269_FIXUP_HP_GPIO_LED),
SND_PCI_QUIRK(0x103c, 0x218b, "HP", ALC269_FIXUP_LIMIT_INT_MIC_BOOST_MUTE_LED),
- SND_PCI_QUIRK(0x103c, 0x225f, "HP", ALC280_FIXUP_HP_GPIO2_MIC_HOTKEY),
- /* ALC282 */
SND_PCI_QUIRK(0x103c, 0x21f9, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
SND_PCI_QUIRK(0x103c, 0x2210, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
SND_PCI_QUIRK(0x103c, 0x2214, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ SND_PCI_QUIRK(0x103c, 0x221b, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
+ SND_PCI_QUIRK(0x103c, 0x221c, "HP EliteBook 755 G2", ALC280_FIXUP_HP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x103c, 0x2221, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
+ SND_PCI_QUIRK(0x103c, 0x2225, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
SND_PCI_QUIRK(0x103c, 0x2236, "HP", ALC269_FIXUP_HP_LINE1_MIC1_LED),
SND_PCI_QUIRK(0x103c, 0x2237, "HP", ALC269_FIXUP_HP_LINE1_MIC1_LED),
SND_PCI_QUIRK(0x103c, 0x2238, "HP", ALC269_FIXUP_HP_LINE1_MIC1_LED),
SND_PCI_QUIRK(0x103c, 0x2239, "HP", ALC269_FIXUP_HP_LINE1_MIC1_LED),
SND_PCI_QUIRK(0x103c, 0x224b, "HP", ALC269_FIXUP_HP_LINE1_MIC1_LED),
- SND_PCI_QUIRK(0x103c, 0x2268, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
- SND_PCI_QUIRK(0x103c, 0x226a, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
- SND_PCI_QUIRK(0x103c, 0x226b, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
- SND_PCI_QUIRK(0x103c, 0x226e, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
- SND_PCI_QUIRK(0x103c, 0x2271, "HP", ALC286_FIXUP_HP_GPIO_LED),
- SND_PCI_QUIRK(0x103c, 0x2272, "HP", ALC280_FIXUP_HP_DOCK_PINS),
- SND_PCI_QUIRK(0x103c, 0x2273, "HP", ALC280_FIXUP_HP_DOCK_PINS),
- SND_PCI_QUIRK(0x103c, 0x229e, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
- SND_PCI_QUIRK(0x103c, 0x22b2, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
- SND_PCI_QUIRK(0x103c, 0x22b7, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
- SND_PCI_QUIRK(0x103c, 0x22bf, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
- SND_PCI_QUIRK(0x103c, 0x22cf, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
- SND_PCI_QUIRK(0x103c, 0x22db, "HP", ALC280_FIXUP_HP_9480M),
- SND_PCI_QUIRK(0x103c, 0x22dc, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
- SND_PCI_QUIRK(0x103c, 0x22fb, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
- /* ALC290 */
- SND_PCI_QUIRK(0x103c, 0x221b, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
- SND_PCI_QUIRK(0x103c, 0x2221, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
- SND_PCI_QUIRK(0x103c, 0x2225, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
SND_PCI_QUIRK(0x103c, 0x2253, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
SND_PCI_QUIRK(0x103c, 0x2254, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
SND_PCI_QUIRK(0x103c, 0x2255, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
@@ -8014,28 +8243,45 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
SND_PCI_QUIRK(0x103c, 0x2257, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
SND_PCI_QUIRK(0x103c, 0x2259, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
SND_PCI_QUIRK(0x103c, 0x225a, "HP", ALC269_FIXUP_HP_DOCK_GPIO_MIC1_LED),
+ SND_PCI_QUIRK(0x103c, 0x225f, "HP", ALC280_FIXUP_HP_GPIO2_MIC_HOTKEY),
SND_PCI_QUIRK(0x103c, 0x2260, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
SND_PCI_QUIRK(0x103c, 0x2263, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
SND_PCI_QUIRK(0x103c, 0x2264, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
SND_PCI_QUIRK(0x103c, 0x2265, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ SND_PCI_QUIRK(0x103c, 0x2268, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ SND_PCI_QUIRK(0x103c, 0x226a, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ SND_PCI_QUIRK(0x103c, 0x226b, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ SND_PCI_QUIRK(0x103c, 0x226e, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ SND_PCI_QUIRK(0x103c, 0x2271, "HP", ALC286_FIXUP_HP_GPIO_LED),
SND_PCI_QUIRK(0x103c, 0x2272, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
+ SND_PCI_QUIRK(0x103c, 0x2272, "HP", ALC280_FIXUP_HP_DOCK_PINS),
SND_PCI_QUIRK(0x103c, 0x2273, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
+ SND_PCI_QUIRK(0x103c, 0x2273, "HP", ALC280_FIXUP_HP_DOCK_PINS),
SND_PCI_QUIRK(0x103c, 0x2278, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
SND_PCI_QUIRK(0x103c, 0x227f, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
SND_PCI_QUIRK(0x103c, 0x2282, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
SND_PCI_QUIRK(0x103c, 0x228b, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
SND_PCI_QUIRK(0x103c, 0x228e, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ SND_PCI_QUIRK(0x103c, 0x229e, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ SND_PCI_QUIRK(0x103c, 0x22b2, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ SND_PCI_QUIRK(0x103c, 0x22b7, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ SND_PCI_QUIRK(0x103c, 0x22bf, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ SND_PCI_QUIRK(0x103c, 0x22c4, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
SND_PCI_QUIRK(0x103c, 0x22c5, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
SND_PCI_QUIRK(0x103c, 0x22c7, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
SND_PCI_QUIRK(0x103c, 0x22c8, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
- SND_PCI_QUIRK(0x103c, 0x22c4, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ SND_PCI_QUIRK(0x103c, 0x22cf, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ SND_PCI_QUIRK(0x103c, 0x22db, "HP", ALC280_FIXUP_HP_9480M),
+ SND_PCI_QUIRK(0x103c, 0x22dc, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
+ SND_PCI_QUIRK(0x103c, 0x22fb, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
SND_PCI_QUIRK(0x103c, 0x2334, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
SND_PCI_QUIRK(0x103c, 0x2335, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
SND_PCI_QUIRK(0x103c, 0x2336, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
SND_PCI_QUIRK(0x103c, 0x2337, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
- SND_PCI_QUIRK(0x103c, 0x221c, "HP EliteBook 755 G2", ALC280_FIXUP_HP_HEADSET_MIC),
SND_PCI_QUIRK(0x103c, 0x802e, "HP Z240 SFF", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x103c, 0x802f, "HP Z240", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x103c, 0x8077, "HP", ALC256_FIXUP_HP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x103c, 0x8158, "HP", ALC256_FIXUP_HP_HEADSET_MIC),
SND_PCI_QUIRK(0x103c, 0x820d, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
SND_PCI_QUIRK(0x103c, 0x8256, "HP", ALC221_FIXUP_HP_FRONT_MIC),
SND_PCI_QUIRK(0x103c, 0x827e, "HP x360", ALC295_FIXUP_HP_X360),
@@ -8044,10 +8290,14 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
SND_PCI_QUIRK(0x103c, 0x82c0, "HP G3 mini premium", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x103c, 0x83b9, "HP Spectre x360", ALC269_FIXUP_HP_MUTE_LED_MIC3),
SND_PCI_QUIRK(0x103c, 0x8497, "HP Envy x360", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ SND_PCI_QUIRK(0x103c, 0x84da, "HP OMEN dc0019-ur", ALC295_FIXUP_HP_OMEN),
SND_PCI_QUIRK(0x103c, 0x84e7, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ SND_PCI_QUIRK(0x103c, 0x8519, "HP Spectre x360 15-df0xxx", ALC285_FIXUP_HP_SPECTRE_X360),
SND_PCI_QUIRK(0x103c, 0x869d, "HP", ALC236_FIXUP_HP_MUTE_LED),
+ SND_PCI_QUIRK(0x103c, 0x86c7, "HP Envy AiO 32", ALC274_FIXUP_HP_ENVY_GPIO),
SND_PCI_QUIRK(0x103c, 0x8724, "HP EliteBook 850 G7", ALC285_FIXUP_HP_GPIO_LED),
SND_PCI_QUIRK(0x103c, 0x8729, "HP", ALC285_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8730, "HP ProBook 445 G7", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
SND_PCI_QUIRK(0x103c, 0x8736, "HP", ALC285_FIXUP_HP_GPIO_AMP_INIT),
SND_PCI_QUIRK(0x103c, 0x8760, "HP", ALC285_FIXUP_HP_MUTE_LED),
SND_PCI_QUIRK(0x103c, 0x877a, "HP", ALC285_FIXUP_HP_MUTE_LED),
@@ -8064,6 +8314,10 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
SND_PCI_QUIRK(0x103c, 0x87f7, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP),
SND_PCI_QUIRK(0x103c, 0x8846, "HP EliteBook 850 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
SND_PCI_QUIRK(0x103c, 0x884c, "HP EliteBook 840 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x886d, "HP ZBook Fury 17.3 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ SND_PCI_QUIRK(0x103c, 0x8870, "HP ZBook Fury 15.6 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ SND_PCI_QUIRK(0x103c, 0x8873, "HP ZBook Studio 15.6 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ SND_PCI_QUIRK(0x103c, 0x8896, "HP EliteBook 855 G8 Notebook PC", ALC285_FIXUP_HP_MUTE_LED),
SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
@@ -8072,16 +8326,19 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
SND_PCI_QUIRK(0x1043, 0x10d0, "ASUS X540LA/X540LJ", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1043, 0x115d, "Asus 1015E", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
SND_PCI_QUIRK(0x1043, 0x11c0, "ASUS X556UR", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x125e, "ASUS Q524UQK", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1043, 0x1271, "ASUS X430UN", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1043, 0x1290, "ASUS X441SA", ALC233_FIXUP_EAPD_COEF_AND_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1043, 0x12a0, "ASUS X441UV", ALC233_FIXUP_EAPD_COEF_AND_MIC_NO_PRESENCE),
- SND_PCI_QUIRK(0x1043, 0x12f0, "ASUS X541UV", ALC256_FIXUP_ASUS_MIC),
SND_PCI_QUIRK(0x1043, 0x12e0, "ASUS X541SA", ALC256_FIXUP_ASUS_MIC),
+ SND_PCI_QUIRK(0x1043, 0x12f0, "ASUS X541UV", ALC256_FIXUP_ASUS_MIC),
SND_PCI_QUIRK(0x1043, 0x13b0, "ASUS Z550SA", ALC256_FIXUP_ASUS_MIC),
SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_ASUS_ZENBOOK),
SND_PCI_QUIRK(0x1043, 0x1517, "Asus Zenbook UX31A", ALC269VB_FIXUP_ASUS_ZENBOOK_UX31A),
SND_PCI_QUIRK(0x1043, 0x16e3, "ASUS UX50", ALC269_FIXUP_STEREO_DMIC),
+ SND_PCI_QUIRK(0x1043, 0x1740, "ASUS UX430UA", ALC295_FIXUP_ASUS_DACS),
SND_PCI_QUIRK(0x1043, 0x17d1, "ASUS UX431FL", ALC294_FIXUP_ASUS_DUAL_SPK),
+ SND_PCI_QUIRK(0x1043, 0x1881, "ASUS Zephyrus S/M", ALC294_FIXUP_ASUS_GX502_PINS),
SND_PCI_QUIRK(0x1043, 0x18b1, "Asus MJ401TA", ALC256_FIXUP_ASUS_HEADSET_MIC),
SND_PCI_QUIRK(0x1043, 0x18f1, "Asus FX505DT", ALC256_FIXUP_ASUS_HEADSET_MIC),
SND_PCI_QUIRK(0x1043, 0x194e, "ASUS UX563FD", ALC294_FIXUP_ASUS_HPE),
@@ -8094,31 +8351,32 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
SND_PCI_QUIRK(0x1043, 0x1b13, "Asus U41SV", ALC269_FIXUP_INV_DMIC),
SND_PCI_QUIRK(0x1043, 0x1bbd, "ASUS Z550MA", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1043, 0x1c23, "Asus X55U", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
- SND_PCI_QUIRK(0x1043, 0x125e, "ASUS Q524UQK", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1043, 0x1ccd, "ASUS X555UB", ALC256_FIXUP_ASUS_MIC),
SND_PCI_QUIRK(0x1043, 0x1d4e, "ASUS TM420", ALC256_FIXUP_ASUS_HPE),
SND_PCI_QUIRK(0x1043, 0x1e11, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA502),
+ SND_PCI_QUIRK(0x1043, 0x1e51, "ASUS Zephyrus M15", ALC294_FIXUP_ASUS_GU502_PINS),
+ SND_PCI_QUIRK(0x1043, 0x1e8e, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA401),
SND_PCI_QUIRK(0x1043, 0x1f11, "ASUS Zephyrus G14", ALC289_FIXUP_ASUS_GA401),
- SND_PCI_QUIRK(0x1043, 0x1881, "ASUS Zephyrus S/M", ALC294_FIXUP_ASUS_GX502_PINS),
SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2),
SND_PCI_QUIRK(0x1043, 0x831a, "ASUS P901", ALC269_FIXUP_STEREO_DMIC),
SND_PCI_QUIRK(0x1043, 0x834a, "ASUS S101", ALC269_FIXUP_STEREO_DMIC),
SND_PCI_QUIRK(0x1043, 0x8398, "ASUS P1005", ALC269_FIXUP_STEREO_DMIC),
SND_PCI_QUIRK(0x1043, 0x83ce, "ASUS P1005", ALC269_FIXUP_STEREO_DMIC),
SND_PCI_QUIRK(0x1043, 0x8516, "ASUS X101CH", ALC269_FIXUP_ASUS_X101),
- SND_PCI_QUIRK(0x104d, 0x90b5, "Sony VAIO Pro 11", ALC286_FIXUP_SONY_MIC_NO_PRESENCE),
- SND_PCI_QUIRK(0x104d, 0x90b6, "Sony VAIO Pro 13", ALC286_FIXUP_SONY_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x104d, 0x9073, "Sony VAIO", ALC275_FIXUP_SONY_VAIO_GPIO2),
SND_PCI_QUIRK(0x104d, 0x907b, "Sony VAIO", ALC275_FIXUP_SONY_HWEQ),
SND_PCI_QUIRK(0x104d, 0x9084, "Sony VAIO", ALC275_FIXUP_SONY_HWEQ),
SND_PCI_QUIRK(0x104d, 0x9099, "Sony VAIO S13", ALC275_FIXUP_SONY_DISABLE_AAMIX),
+ SND_PCI_QUIRK(0x104d, 0x90b5, "Sony VAIO Pro 11", ALC286_FIXUP_SONY_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x104d, 0x90b6, "Sony VAIO Pro 13", ALC286_FIXUP_SONY_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x10cf, 0x1475, "Lifebook", ALC269_FIXUP_LIFEBOOK),
SND_PCI_QUIRK(0x10cf, 0x159f, "Lifebook E780", ALC269_FIXUP_LIFEBOOK_NO_HP_TO_LINEOUT),
SND_PCI_QUIRK(0x10cf, 0x15dc, "Lifebook T731", ALC269_FIXUP_LIFEBOOK_HP_PIN),
- SND_PCI_QUIRK(0x10cf, 0x1757, "Lifebook E752", ALC269_FIXUP_LIFEBOOK_HP_PIN),
SND_PCI_QUIRK(0x10cf, 0x1629, "Lifebook U7x7", ALC255_FIXUP_LIFEBOOK_U7x7_HEADSET_MIC),
+ SND_PCI_QUIRK(0x10cf, 0x1757, "Lifebook E752", ALC269_FIXUP_LIFEBOOK_HP_PIN),
SND_PCI_QUIRK(0x10cf, 0x1845, "Lifebook U904", ALC269_FIXUP_LIFEBOOK_EXTMIC),
SND_PCI_QUIRK(0x10ec, 0x10f2, "Intel Reference board", ALC700_FIXUP_INTEL_REFERENCE),
+ SND_PCI_QUIRK(0x10ec, 0x118c, "Medion EE4254 MD62100", ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE),
SND_PCI_QUIRK(0x10ec, 0x1230, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
SND_PCI_QUIRK(0x10ec, 0x1252, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
SND_PCI_QUIRK(0x10ec, 0x1254, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
@@ -8128,9 +8386,9 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
SND_PCI_QUIRK(0x144d, 0xc176, "Samsung Notebook 9 Pro (NP930MBE-K04US)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
SND_PCI_QUIRK(0x144d, 0xc189, "Samsung Galaxy Flex Book (NT950QCG-X716)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
SND_PCI_QUIRK(0x144d, 0xc18a, "Samsung Galaxy Book Ion (NP930XCJ-K01US)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
- SND_PCI_QUIRK(0x144d, 0xc830, "Samsung Galaxy Book Ion (NT950XCJ-X716A)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
SND_PCI_QUIRK(0x144d, 0xc740, "Samsung Ativ book 8 (NP870Z5G)", ALC269_FIXUP_ATIV_BOOK_8),
SND_PCI_QUIRK(0x144d, 0xc812, "Samsung Notebook Pen S (NT950SBE-X58)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ SND_PCI_QUIRK(0x144d, 0xc830, "Samsung Galaxy Book Ion (NT950XCJ-X716A)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
SND_PCI_QUIRK(0x1458, 0xfa53, "Gigabyte BXBT-2807", ALC283_FIXUP_HEADSET_MIC),
SND_PCI_QUIRK(0x1462, 0xb120, "MSI Cubi MS-B120", ALC283_FIXUP_HEADSET_MIC),
SND_PCI_QUIRK(0x1462, 0xb171, "Cubi N 8GL (MS-B171)", ALC283_FIXUP_HEADSET_MIC),
@@ -8153,12 +8411,19 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
SND_PCI_QUIRK(0x1558, 0x50b8, "Clevo NK50SZ", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1558, 0x50d5, "Clevo NP50D5", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1558, 0x50f0, "Clevo NH50A[CDF]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x50f2, "Clevo NH50E[PR]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1558, 0x50f3, "Clevo NH58DPQ", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x50f5, "Clevo NH55EPY", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x50f6, "Clevo NH55DPQ", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1558, 0x5101, "Clevo S510WU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1558, 0x5157, "Clevo W517GU1", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1558, 0x51a1, "Clevo NS50MU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1558, 0x70a1, "Clevo NB70T[HJK]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1558, 0x70b3, "Clevo NK70SB", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x70f2, "Clevo NH79EPY", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x70f3, "Clevo NH77DPQ", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x70f4, "Clevo NH77EPY", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x70f6, "Clevo NH77DPQ-Y", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1558, 0x8228, "Clevo NR40BU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1558, 0x8520, "Clevo NH50D[CD]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1558, 0x8521, "Clevo NH77D[CD]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
@@ -8176,19 +8441,27 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
SND_PCI_QUIRK(0x1558, 0x8a51, "Clevo NH70RCQ-Y", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1558, 0x8d50, "Clevo NH55RCQ-M", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1558, 0x951d, "Clevo N950T[CDF]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x9600, "Clevo N960K[PR]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1558, 0x961d, "Clevo N960S[CDF]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1558, 0x971d, "Clevo N970T[CDF]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1558, 0xa500, "Clevo NL53RU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0xa600, "Clevo NL5XNU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0xb018, "Clevo NP50D[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0xb019, "Clevo NH77D[BE]Q", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0xb022, "Clevo NH77D[DC][QW]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0xc018, "Clevo NP50D[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0xc019, "Clevo NH77D[BE]Q", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0xc022, "Clevo NH77[DC][QW]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x17aa, 0x1036, "Lenovo P520", ALC233_FIXUP_LENOVO_MULTI_CODECS),
- SND_PCI_QUIRK(0x17aa, 0x1048, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x17aa, 0x1048, "ThinkCentre Station", ALC623_FIXUP_LENOVO_THINKSTATION_P340),
SND_PCI_QUIRK(0x17aa, 0x20f2, "Thinkpad SL410/510", ALC269_FIXUP_SKU_IGNORE),
SND_PCI_QUIRK(0x17aa, 0x215e, "Thinkpad L512", ALC269_FIXUP_SKU_IGNORE),
SND_PCI_QUIRK(0x17aa, 0x21b8, "Thinkpad Edge 14", ALC269_FIXUP_SKU_IGNORE),
SND_PCI_QUIRK(0x17aa, 0x21ca, "Thinkpad L412", ALC269_FIXUP_SKU_IGNORE),
SND_PCI_QUIRK(0x17aa, 0x21e9, "Thinkpad Edge 15", ALC269_FIXUP_SKU_IGNORE),
+ SND_PCI_QUIRK(0x17aa, 0x21f3, "Thinkpad T430", ALC269_FIXUP_LENOVO_DOCK),
SND_PCI_QUIRK(0x17aa, 0x21f6, "Thinkpad T530", ALC269_FIXUP_LENOVO_DOCK_LIMIT_BOOST),
SND_PCI_QUIRK(0x17aa, 0x21fa, "Thinkpad X230", ALC269_FIXUP_LENOVO_DOCK),
- SND_PCI_QUIRK(0x17aa, 0x21f3, "Thinkpad T430", ALC269_FIXUP_LENOVO_DOCK),
SND_PCI_QUIRK(0x17aa, 0x21fb, "Thinkpad T430s", ALC269_FIXUP_LENOVO_DOCK),
SND_PCI_QUIRK(0x17aa, 0x2203, "Thinkpad X230 Tablet", ALC269_FIXUP_LENOVO_DOCK),
SND_PCI_QUIRK(0x17aa, 0x2208, "Thinkpad T431s", ALC269_FIXUP_LENOVO_DOCK),
@@ -8229,9 +8502,12 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
SND_PCI_QUIRK(0x17aa, 0x3176, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
SND_PCI_QUIRK(0x17aa, 0x3178, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940", ALC298_FIXUP_LENOVO_SPK_VOLUME),
+ SND_PCI_QUIRK(0x17aa, 0x3827, "Ideapad S740", ALC285_FIXUP_IDEAPAD_S740_COEF),
+ SND_PCI_QUIRK(0x17aa, 0x3843, "Yoga 9i", ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP),
SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC),
SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo B50-70", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+ SND_PCI_QUIRK(0x17aa, 0x3bf8, "Quanta FL1", ALC269_FIXUP_PCM_44K),
SND_PCI_QUIRK(0x17aa, 0x5013, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
SND_PCI_QUIRK(0x17aa, 0x501a, "Thinkpad", ALC283_FIXUP_INT_MIC),
SND_PCI_QUIRK(0x17aa, 0x501e, "Thinkpad L440", ALC292_FIXUP_TPT440_DOCK),
@@ -8250,20 +8526,19 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
SND_PCI_QUIRK(0x17aa, 0x5109, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
SND_PCI_QUIRK(0x17aa, 0x511e, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
SND_PCI_QUIRK(0x17aa, 0x511f, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
- SND_PCI_QUIRK(0x17aa, 0x3bf8, "Quanta FL1", ALC269_FIXUP_PCM_44K),
SND_PCI_QUIRK(0x17aa, 0x9e54, "LENOVO NB", ALC269_FIXUP_LENOVO_EAPD),
SND_PCI_QUIRK(0x19e5, 0x3204, "Huawei MACH-WX9", ALC256_FIXUP_HUAWEI_MACH_WX9_PINS),
SND_PCI_QUIRK(0x1b35, 0x1235, "CZC B20", ALC269_FIXUP_CZC_B20),
SND_PCI_QUIRK(0x1b35, 0x1236, "CZC TMI", ALC269_FIXUP_CZC_TMI),
SND_PCI_QUIRK(0x1b35, 0x1237, "CZC L101", ALC269_FIXUP_CZC_L101),
SND_PCI_QUIRK(0x1b7d, 0xa831, "Ordissimo EVE2 ", ALC269VB_FIXUP_ORDISSIMO_EVE2), /* Also known as Malata PC-B1303 */
+ SND_PCI_QUIRK(0x1c06, 0x2013, "Lemote A1802", ALC269_FIXUP_LEMOTE_A1802),
+ SND_PCI_QUIRK(0x1c06, 0x2015, "Lemote A190X", ALC269_FIXUP_LEMOTE_A190X),
SND_PCI_QUIRK(0x1d72, 0x1602, "RedmiBook", ALC255_FIXUP_XIAOMI_HEADSET_MIC),
SND_PCI_QUIRK(0x1d72, 0x1701, "XiaomiNotebook Pro", ALC298_FIXUP_DELL1_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1d72, 0x1901, "RedmiBook 14", ALC256_FIXUP_ASUS_HEADSET_MIC),
SND_PCI_QUIRK(0x1d72, 0x1947, "RedmiBook Air", ALC255_FIXUP_XIAOMI_HEADSET_MIC),
- SND_PCI_QUIRK(0x10ec, 0x118c, "Medion EE4254 MD62100", ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE),
- SND_PCI_QUIRK(0x1c06, 0x2013, "Lemote A1802", ALC269_FIXUP_LEMOTE_A1802),
- SND_PCI_QUIRK(0x1c06, 0x2015, "Lemote A190X", ALC269_FIXUP_LEMOTE_A190X),
+ SND_PCI_QUIRK(0x8086, 0x2074, "Intel NUC 8", ALC233_FIXUP_INTEL_NUC8_DMIC),
SND_PCI_QUIRK(0x8086, 0x2080, "Intel NUC 8 Rugged", ALC256_FIXUP_INTEL_NUC8_RUGGED),
SND_PCI_QUIRK(0x8086, 0x2081, "Intel NUC 10", ALC256_FIXUP_INTEL_NUC10),
@@ -8395,6 +8670,7 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
{.id = ALC283_FIXUP_HEADSET_MIC, .name = "alc283-headset"},
{.id = ALC255_FIXUP_MIC_MUTE_LED, .name = "alc255-dell-mute"},
{.id = ALC282_FIXUP_ASPIRE_V5_PINS, .name = "aspire-v5"},
+ {.id = ALC269VB_FIXUP_ASPIRE_E1_COEF, .name = "aspire-e1-coef"},
{.id = ALC280_FIXUP_HP_GPIO4, .name = "hp-gpio4"},
{.id = ALC286_FIXUP_HP_GPIO_LED, .name = "hp-gpio-led"},
{.id = ALC280_FIXUP_HP_GPIO2_MIC_HOTKEY, .name = "hp-gpio2-hotkey"},
@@ -8441,6 +8717,10 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
{.id = ALC255_FIXUP_XIAOMI_HEADSET_MIC, .name = "alc255-xiaomi-headset"},
{.id = ALC274_FIXUP_HP_MIC, .name = "alc274-hp-mic-detect"},
{.id = ALC245_FIXUP_HP_X360_AMP, .name = "alc245-hp-x360-amp"},
+ {.id = ALC295_FIXUP_HP_OMEN, .name = "alc295-hp-omen"},
+ {.id = ALC285_FIXUP_HP_SPECTRE_X360, .name = "alc285-hp-spectre-x360"},
+ {.id = ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP, .name = "alc287-ideapad-bass-spk-amp"},
+ {.id = ALC623_FIXUP_LENOVO_THINKSTATION_P340, .name = "alc623-lenovo-thinkstation-p340"},
{}
};
#define ALC225_STANDARD_PINS \
@@ -8717,12 +8997,17 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
{0x12, 0x90a60130},
{0x19, 0x03a11020},
{0x21, 0x0321101f}),
- SND_HDA_PIN_QUIRK(0x10ec0285, 0x17aa, "Lenovo", ALC285_FIXUP_THINKPAD_NO_BASS_SPK_HEADSET_JACK,
+ SND_HDA_PIN_QUIRK(0x10ec0285, 0x17aa, "Lenovo", ALC285_FIXUP_LENOVO_PC_BEEP_IN_NOISE,
+ {0x12, 0x90a60130},
{0x14, 0x90170110},
{0x19, 0x04a11040},
{0x21, 0x04211020}),
SND_HDA_PIN_QUIRK(0x10ec0285, 0x17aa, "Lenovo", ALC285_FIXUP_LENOVO_PC_BEEP_IN_NOISE,
- {0x12, 0x90a60130},
+ {0x14, 0x90170110},
+ {0x19, 0x04a11040},
+ {0x1d, 0x40600001},
+ {0x21, 0x04211020}),
+ SND_HDA_PIN_QUIRK(0x10ec0285, 0x17aa, "Lenovo", ALC285_FIXUP_THINKPAD_NO_BASS_SPK_HEADSET_JACK,
{0x14, 0x90170110},
{0x19, 0x04a11040},
{0x21, 0x04211020}),
@@ -9208,8 +9493,7 @@ static const struct snd_pci_quirk alc861_fixup_tbl[] = {
SND_PCI_QUIRK(0x1043, 0x1393, "ASUS A6Rp", ALC861_FIXUP_ASUS_A6RP),
SND_PCI_QUIRK_VENDOR(0x1043, "ASUS laptop", ALC861_FIXUP_AMP_VREF_0F),
SND_PCI_QUIRK(0x1462, 0x7254, "HP DX2200", ALC861_FIXUP_NO_JACK_DETECT),
- SND_PCI_QUIRK(0x1584, 0x2b01, "Haier W18", ALC861_FIXUP_AMP_VREF_0F),
- SND_PCI_QUIRK(0x1584, 0x0000, "Uniwill ECS M31EI", ALC861_FIXUP_AMP_VREF_0F),
+ SND_PCI_QUIRK_VENDOR(0x1584, "Haier/Uniwill", ALC861_FIXUP_AMP_VREF_0F),
SND_PCI_QUIRK(0x1734, 0x10c7, "FSC Amilo Pi1505", ALC861_FIXUP_FSC_AMILO_PI1505),
{}
};
@@ -10004,6 +10288,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
SND_PCI_QUIRK(0x1025, 0x0349, "eMachines eM250", ALC662_FIXUP_INV_DMIC),
SND_PCI_QUIRK(0x1025, 0x034a, "Gateway LT27", ALC662_FIXUP_INV_DMIC),
SND_PCI_QUIRK(0x1025, 0x038b, "Acer Aspire 8943G", ALC662_FIXUP_ASPIRE),
+ SND_PCI_QUIRK(0x1025, 0x0566, "Acer Aspire Ethos 8951G", ALC669_FIXUP_ACER_ASPIRE_ETHOS),
SND_PCI_QUIRK(0x1025, 0x123c, "Acer Nitro N50-600", ALC662_FIXUP_ACER_NITRO_HEADSET_MODE),
SND_PCI_QUIRK(0x1025, 0x124e, "Acer 2660G", ALC662_FIXUP_ACER_X2660G_HEADSET_MODE),
SND_PCI_QUIRK(0x1028, 0x05d8, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
@@ -10020,9 +10305,9 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
SND_PCI_QUIRK(0x103c, 0x873e, "HP", ALC671_FIXUP_HP_HEADSET_MIC2),
SND_PCI_QUIRK(0x1043, 0x1080, "Asus UX501VW", ALC668_FIXUP_HEADSET_MODE),
SND_PCI_QUIRK(0x1043, 0x11cd, "Asus N550", ALC662_FIXUP_ASUS_Nx50),
- SND_PCI_QUIRK(0x1043, 0x13df, "Asus N550JX", ALC662_FIXUP_BASS_1A),
SND_PCI_QUIRK(0x1043, 0x129d, "Asus N750", ALC662_FIXUP_ASUS_Nx50),
SND_PCI_QUIRK(0x1043, 0x12ff, "ASUS G751", ALC668_FIXUP_ASUS_G751),
+ SND_PCI_QUIRK(0x1043, 0x13df, "Asus N550JX", ALC662_FIXUP_BASS_1A),
SND_PCI_QUIRK(0x1043, 0x1477, "ASUS N56VZ", ALC662_FIXUP_BASS_MODE4_CHMAP),
SND_PCI_QUIRK(0x1043, 0x15a7, "ASUS UX51VZH", ALC662_FIXUP_BASS_16),
SND_PCI_QUIRK(0x1043, 0x177d, "ASUS N551", ALC668_FIXUP_ASUS_Nx51),
@@ -10042,7 +10327,6 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
SND_PCI_QUIRK(0x1b0a, 0x01b8, "ACER Veriton", ALC662_FIXUP_ACER_VERITON),
SND_PCI_QUIRK(0x1b35, 0x1234, "CZC ET26", ALC662_FIXUP_CZC_ET26),
SND_PCI_QUIRK(0x1b35, 0x2206, "CZC P10T", ALC662_FIXUP_CZC_P10T),
- SND_PCI_QUIRK(0x1025, 0x0566, "Acer Aspire Ethos 8951G", ALC669_FIXUP_ACER_ASPIRE_ETHOS),
#if 0
/* Below is a quirk table taken from the old code.
diff --git a/sound/pci/intel8x0.c b/sound/pci/intel8x0.c
index 3349e45..6fb6f36 100644
--- a/sound/pci/intel8x0.c
+++ b/sound/pci/intel8x0.c
@@ -354,6 +354,7 @@ struct ichdev {
unsigned int ali_slot; /* ALI DMA slot */
struct ac97_pcm *pcm;
int pcm_open_flag;
+ unsigned int prepared:1;
unsigned int suspended: 1;
};
@@ -714,6 +715,9 @@ static inline void snd_intel8x0_update(struct intel8x0 *chip, struct ichdev *ich
int status, civ, i, step;
int ack = 0;
+ if (!ichdev->prepared || ichdev->suspended)
+ return;
+
spin_lock_irqsave(&chip->reg_lock, flags);
status = igetbyte(chip, port + ichdev->roff_sr);
civ = igetbyte(chip, port + ICH_REG_OFF_CIV);
@@ -904,6 +908,7 @@ static int snd_intel8x0_hw_params(struct snd_pcm_substream *substream,
if (ichdev->pcm_open_flag) {
snd_ac97_pcm_close(ichdev->pcm);
ichdev->pcm_open_flag = 0;
+ ichdev->prepared = 0;
}
err = snd_ac97_pcm_open(ichdev->pcm, params_rate(hw_params),
params_channels(hw_params),
@@ -925,6 +930,7 @@ static int snd_intel8x0_hw_free(struct snd_pcm_substream *substream)
if (ichdev->pcm_open_flag) {
snd_ac97_pcm_close(ichdev->pcm);
ichdev->pcm_open_flag = 0;
+ ichdev->prepared = 0;
}
return 0;
}
@@ -999,6 +1005,7 @@ static int snd_intel8x0_pcm_prepare(struct snd_pcm_substream *substream)
ichdev->pos_shift = (runtime->sample_bits > 16) ? 2 : 1;
}
snd_intel8x0_setup_periods(chip, ichdev);
+ ichdev->prepared = 1;
return 0;
}
diff --git a/sound/pci/rme9652/hdsp.c b/sound/pci/rme9652/hdsp.c
index cea53a8..4aee30d 100644
--- a/sound/pci/rme9652/hdsp.c
+++ b/sound/pci/rme9652/hdsp.c
@@ -5321,7 +5321,8 @@ static int snd_hdsp_free(struct hdsp *hdsp)
if (hdsp->port)
pci_release_regions(hdsp->pci);
- pci_disable_device(hdsp->pci);
+ if (pci_is_enabled(hdsp->pci))
+ pci_disable_device(hdsp->pci);
return 0;
}
diff --git a/sound/pci/rme9652/hdspm.c b/sound/pci/rme9652/hdspm.c
index 4a1f576..51c3c6a 100644
--- a/sound/pci/rme9652/hdspm.c
+++ b/sound/pci/rme9652/hdspm.c
@@ -6891,7 +6891,8 @@ static int snd_hdspm_free(struct hdspm * hdspm)
if (hdspm->port)
pci_release_regions(hdspm->pci);
- pci_disable_device(hdspm->pci);
+ if (pci_is_enabled(hdspm->pci))
+ pci_disable_device(hdspm->pci);
return 0;
}
diff --git a/sound/pci/rme9652/rme9652.c b/sound/pci/rme9652/rme9652.c
index 7ab1002..8def246 100644
--- a/sound/pci/rme9652/rme9652.c
+++ b/sound/pci/rme9652/rme9652.c
@@ -1740,7 +1740,8 @@ static int snd_rme9652_free(struct snd_rme9652 *rme9652)
if (rme9652->port)
pci_release_regions(rme9652->pci);
- pci_disable_device(rme9652->pci);
+ if (pci_is_enabled(rme9652->pci))
+ pci_disable_device(rme9652->pci);
return 0;
}
diff --git a/sound/soc/codecs/ak5558.c b/sound/soc/codecs/ak5558.c
index 65a248c..adbdfdb 100644
--- a/sound/soc/codecs/ak5558.c
+++ b/sound/soc/codecs/ak5558.c
@@ -272,7 +272,7 @@ static void ak5558_power_off(struct ak5558_priv *ak5558)
if (!ak5558->reset_gpiod)
return;
- gpiod_set_value_cansleep(ak5558->reset_gpiod, 0);
+ gpiod_set_value_cansleep(ak5558->reset_gpiod, 1);
usleep_range(1000, 2000);
}
@@ -281,7 +281,7 @@ static void ak5558_power_on(struct ak5558_priv *ak5558)
if (!ak5558->reset_gpiod)
return;
- gpiod_set_value_cansleep(ak5558->reset_gpiod, 1);
+ gpiod_set_value_cansleep(ak5558->reset_gpiod, 0);
usleep_range(1000, 2000);
}
diff --git a/sound/soc/codecs/cs35l33.c b/sound/soc/codecs/cs35l33.c
index 6042194..8894369 100644
--- a/sound/soc/codecs/cs35l33.c
+++ b/sound/soc/codecs/cs35l33.c
@@ -1201,6 +1201,7 @@ static int cs35l33_i2c_probe(struct i2c_client *i2c_client,
dev_err(&i2c_client->dev,
"CS35L33 Device ID (%X). Expected ID %X\n",
devid, CS35L33_CHIP_ID);
+ ret = -EINVAL;
goto err_enable;
}
diff --git a/sound/soc/codecs/cs42l42.c b/sound/soc/codecs/cs42l42.c
index 4d82d24..7c6b10b 100644
--- a/sound/soc/codecs/cs42l42.c
+++ b/sound/soc/codecs/cs42l42.c
@@ -398,6 +398,9 @@ static const struct regmap_config cs42l42_regmap = {
.reg_defaults = cs42l42_reg_defaults,
.num_reg_defaults = ARRAY_SIZE(cs42l42_reg_defaults),
.cache_type = REGCACHE_RBTREE,
+
+ .use_single_read = true,
+ .use_single_write = true,
};
static DECLARE_TLV_DB_SCALE(adc_tlv, -9600, 100, false);
diff --git a/sound/soc/codecs/cs43130.c b/sound/soc/codecs/cs43130.c
index 7fb3442..8f70dee 100644
--- a/sound/soc/codecs/cs43130.c
+++ b/sound/soc/codecs/cs43130.c
@@ -1735,6 +1735,14 @@ static DEVICE_ATTR(hpload_dc_r, 0444, cs43130_show_dc_r, NULL);
static DEVICE_ATTR(hpload_ac_l, 0444, cs43130_show_ac_l, NULL);
static DEVICE_ATTR(hpload_ac_r, 0444, cs43130_show_ac_r, NULL);
+static struct attribute *hpload_attrs[] = {
+ &dev_attr_hpload_dc_l.attr,
+ &dev_attr_hpload_dc_r.attr,
+ &dev_attr_hpload_ac_l.attr,
+ &dev_attr_hpload_ac_r.attr,
+};
+ATTRIBUTE_GROUPS(hpload);
+
static struct reg_sequence hp_en_cal_seq[] = {
{CS43130_INT_MASK_4, CS43130_INT_MASK_ALL},
{CS43130_HP_MEAS_LOAD_1, 0},
@@ -2302,25 +2310,15 @@ static int cs43130_probe(struct snd_soc_component *component)
cs43130->hpload_done = false;
if (cs43130->dc_meas) {
- ret = device_create_file(component->dev, &dev_attr_hpload_dc_l);
- if (ret < 0)
- return ret;
-
- ret = device_create_file(component->dev, &dev_attr_hpload_dc_r);
- if (ret < 0)
- return ret;
-
- ret = device_create_file(component->dev, &dev_attr_hpload_ac_l);
- if (ret < 0)
- return ret;
-
- ret = device_create_file(component->dev, &dev_attr_hpload_ac_r);
- if (ret < 0)
+ ret = sysfs_create_groups(&component->dev->kobj, hpload_groups);
+ if (ret)
return ret;
cs43130->wq = create_singlethread_workqueue("cs43130_hp");
- if (!cs43130->wq)
+ if (!cs43130->wq) {
+ sysfs_remove_groups(&component->dev->kobj, hpload_groups);
return -ENOMEM;
+ }
INIT_WORK(&cs43130->work, cs43130_imp_meas);
}
diff --git a/sound/soc/codecs/max98373-i2c.c b/sound/soc/codecs/max98373-i2c.c
index 92921e3..32b0c1d 100644
--- a/sound/soc/codecs/max98373-i2c.c
+++ b/sound/soc/codecs/max98373-i2c.c
@@ -440,6 +440,7 @@ static bool max98373_volatile_reg(struct device *dev, unsigned int reg)
case MAX98373_R2054_MEAS_ADC_PVDD_CH_READBACK:
case MAX98373_R2055_MEAS_ADC_THERM_CH_READBACK:
case MAX98373_R20B6_BDE_CUR_STATE_READBACK:
+ case MAX98373_R20FF_GLOBAL_SHDN:
case MAX98373_R21FF_REV_ID:
return true;
default:
diff --git a/sound/soc/codecs/max98373-sdw.c b/sound/soc/codecs/max98373-sdw.c
index fa589d8..14fd2f9 100644
--- a/sound/soc/codecs/max98373-sdw.c
+++ b/sound/soc/codecs/max98373-sdw.c
@@ -214,6 +214,7 @@ static bool max98373_volatile_reg(struct device *dev, unsigned int reg)
case MAX98373_R2054_MEAS_ADC_PVDD_CH_READBACK:
case MAX98373_R2055_MEAS_ADC_THERM_CH_READBACK:
case MAX98373_R20B6_BDE_CUR_STATE_READBACK:
+ case MAX98373_R20FF_GLOBAL_SHDN:
case MAX98373_R21FF_REV_ID:
/* SoundWire Control Port Registers */
case MAX98373_R0040_SCP_INIT_STAT_1 ... MAX98373_R0070_SCP_FRAME_CTLR:
diff --git a/sound/soc/codecs/max98373.c b/sound/soc/codecs/max98373.c
index 929bb17..1fd4dbb 100644
--- a/sound/soc/codecs/max98373.c
+++ b/sound/soc/codecs/max98373.c
@@ -28,11 +28,13 @@ static int max98373_dac_event(struct snd_soc_dapm_widget *w,
regmap_update_bits(max98373->regmap,
MAX98373_R20FF_GLOBAL_SHDN,
MAX98373_GLOBAL_EN_MASK, 1);
+ usleep_range(30000, 31000);
break;
case SND_SOC_DAPM_POST_PMD:
regmap_update_bits(max98373->regmap,
MAX98373_R20FF_GLOBAL_SHDN,
MAX98373_GLOBAL_EN_MASK, 0);
+ usleep_range(30000, 31000);
max98373->tdm_mode = false;
break;
default:
diff --git a/sound/soc/codecs/rt286.c b/sound/soc/codecs/rt286.c
index 5fb9653..eec2dd9 100644
--- a/sound/soc/codecs/rt286.c
+++ b/sound/soc/codecs/rt286.c
@@ -171,6 +171,9 @@ static bool rt286_readable_register(struct device *dev, unsigned int reg)
case RT286_PROC_COEF:
case RT286_SET_AMP_GAIN_ADC_IN1:
case RT286_SET_AMP_GAIN_ADC_IN2:
+ case RT286_SET_GPIO_MASK:
+ case RT286_SET_GPIO_DIRECTION:
+ case RT286_SET_GPIO_DATA:
case RT286_SET_POWER(RT286_DAC_OUT1):
case RT286_SET_POWER(RT286_DAC_OUT2):
case RT286_SET_POWER(RT286_ADC_IN1):
@@ -1117,12 +1120,11 @@ static const struct dmi_system_id force_combo_jack_table[] = {
{ }
};
-static const struct dmi_system_id dmi_dell_dino[] = {
+static const struct dmi_system_id dmi_dell[] = {
{
- .ident = "Dell Dino",
+ .ident = "Dell",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
- DMI_MATCH(DMI_PRODUCT_NAME, "XPS 13 9343")
}
},
{ }
@@ -1133,7 +1135,7 @@ static int rt286_i2c_probe(struct i2c_client *i2c,
{
struct rt286_platform_data *pdata = dev_get_platdata(&i2c->dev);
struct rt286_priv *rt286;
- int i, ret, val;
+ int i, ret, vendor_id;
rt286 = devm_kzalloc(&i2c->dev, sizeof(*rt286),
GFP_KERNEL);
@@ -1149,14 +1151,15 @@ static int rt286_i2c_probe(struct i2c_client *i2c,
}
ret = regmap_read(rt286->regmap,
- RT286_GET_PARAM(AC_NODE_ROOT, AC_PAR_VENDOR_ID), &val);
+ RT286_GET_PARAM(AC_NODE_ROOT, AC_PAR_VENDOR_ID), &vendor_id);
if (ret != 0) {
dev_err(&i2c->dev, "I2C error %d\n", ret);
return ret;
}
- if (val != RT286_VENDOR_ID && val != RT288_VENDOR_ID) {
+ if (vendor_id != RT286_VENDOR_ID && vendor_id != RT288_VENDOR_ID) {
dev_err(&i2c->dev,
- "Device with ID register %#x is not rt286\n", val);
+ "Device with ID register %#x is not rt286\n",
+ vendor_id);
return -ENODEV;
}
@@ -1180,8 +1183,8 @@ static int rt286_i2c_probe(struct i2c_client *i2c,
if (pdata)
rt286->pdata = *pdata;
- if (dmi_check_system(force_combo_jack_table) ||
- dmi_check_system(dmi_dell_dino))
+ if ((vendor_id == RT288_VENDOR_ID && dmi_check_system(dmi_dell)) ||
+ dmi_check_system(force_combo_jack_table))
rt286->pdata.cbj_en = true;
regmap_write(rt286->regmap, RT286_SET_AUDIO_POWER, AC_PWRST_D3);
@@ -1220,7 +1223,7 @@ static int rt286_i2c_probe(struct i2c_client *i2c,
regmap_update_bits(rt286->regmap, RT286_DEPOP_CTRL3, 0xf777, 0x4737);
regmap_update_bits(rt286->regmap, RT286_DEPOP_CTRL4, 0x00ff, 0x003f);
- if (dmi_check_system(dmi_dell_dino)) {
+ if (vendor_id == RT288_VENDOR_ID && dmi_check_system(dmi_dell)) {
regmap_update_bits(rt286->regmap,
RT286_SET_GPIO_MASK, 0x40, 0x40);
regmap_update_bits(rt286->regmap,
diff --git a/sound/soc/codecs/rt5670.c b/sound/soc/codecs/rt5670.c
index a0c8f58..47ce074 100644
--- a/sound/soc/codecs/rt5670.c
+++ b/sound/soc/codecs/rt5670.c
@@ -2910,6 +2910,18 @@ static const struct dmi_system_id dmi_platform_intel_quirks[] = {
},
{
.callback = rt5670_quirk_cb,
+ .ident = "Dell Venue 10 Pro 5055",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Venue 10 Pro 5055"),
+ },
+ .driver_data = (unsigned long *)(RT5670_DMIC_EN |
+ RT5670_DMIC2_INR |
+ RT5670_GPIO1_IS_IRQ |
+ RT5670_JD_MODE1),
+ },
+ {
+ .callback = rt5670_quirk_cb,
.ident = "Aegex 10 tablet (RU2)",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "AEGEX"),
diff --git a/sound/soc/codecs/tlv320aic32x4.c b/sound/soc/codecs/tlv320aic32x4.c
index 9e3de9d..b895075 100644
--- a/sound/soc/codecs/tlv320aic32x4.c
+++ b/sound/soc/codecs/tlv320aic32x4.c
@@ -577,12 +577,12 @@ static const struct regmap_range_cfg aic32x4_regmap_pages[] = {
.window_start = 0,
.window_len = 128,
.range_min = 0,
- .range_max = AIC32X4_RMICPGAVOL,
+ .range_max = AIC32X4_REFPOWERUP,
},
};
const struct regmap_config aic32x4_regmap_config = {
- .max_register = AIC32X4_RMICPGAVOL,
+ .max_register = AIC32X4_REFPOWERUP,
.ranges = aic32x4_regmap_pages,
.num_ranges = ARRAY_SIZE(aic32x4_regmap_pages),
};
@@ -1243,6 +1243,10 @@ int aic32x4_probe(struct device *dev, struct regmap *regmap)
if (ret)
goto err_disable_regulators;
+ ret = aic32x4_register_clocks(dev, aic32x4->mclk_name);
+ if (ret)
+ goto err_disable_regulators;
+
ret = devm_snd_soc_register_component(dev,
&soc_component_dev_aic32x4, &aic32x4_dai, 1);
if (ret) {
@@ -1250,10 +1254,6 @@ int aic32x4_probe(struct device *dev, struct regmap *regmap)
goto err_disable_regulators;
}
- ret = aic32x4_register_clocks(dev, aic32x4->mclk_name);
- if (ret)
- goto err_disable_regulators;
-
return 0;
err_disable_regulators:
diff --git a/sound/soc/codecs/wm8960.c b/sound/soc/codecs/wm8960.c
index 660ec46..9d32555 100644
--- a/sound/soc/codecs/wm8960.c
+++ b/sound/soc/codecs/wm8960.c
@@ -608,10 +608,6 @@ static const int bclk_divs[] = {
* - lrclk = sysclk / dac_divs
* - 10 * bclk = sysclk / bclk_divs
*
- * If we cannot find an exact match for (sysclk, lrclk, bclk)
- * triplet, we relax the bclk such that bclk is chosen as the
- * closest available frequency greater than expected bclk.
- *
* @wm8960: codec private data
* @mclk: MCLK used to derive sysclk
* @sysclk_idx: sysclk_divs index for found sysclk
@@ -629,7 +625,7 @@ int wm8960_configure_sysclk(struct wm8960_priv *wm8960, int mclk,
{
int sysclk, bclk, lrclk;
int i, j, k;
- int diff, closest = mclk;
+ int diff;
/* marker for no match */
*bclk_idx = -1;
@@ -653,12 +649,6 @@ int wm8960_configure_sysclk(struct wm8960_priv *wm8960, int mclk,
*bclk_idx = k;
break;
}
- if (diff > 0 && closest > diff) {
- *sysclk_idx = i;
- *dac_idx = j;
- *bclk_idx = k;
- closest = diff;
- }
}
if (k != ARRAY_SIZE(bclk_divs))
break;
@@ -707,7 +697,13 @@ int wm8960_configure_pll(struct snd_soc_component *component, int freq_in,
best_freq_out = -EINVAL;
*sysclk_idx = *dac_idx = *bclk_idx = -1;
- for (i = 0; i < ARRAY_SIZE(sysclk_divs); ++i) {
+ /*
+ * Per the datasheet, the PLL performs best when f2 is between
+ * 90MHz and 100MHz; since the desired sysclk output is 11.2896MHz
+ * or 12.288MHz, sysclkdiv = 2 is the best choice.
+ * So search sysclk_divs from 2 to 1 rather than from 1 to 2.
+ */
+ for (i = ARRAY_SIZE(sysclk_divs) - 1; i >= 0; --i) {
if (sysclk_divs[i] == -1)
continue;
for (j = 0; j < ARRAY_SIZE(dac_divs); ++j) {
diff --git a/sound/soc/fsl/fsl_esai.c b/sound/soc/fsl/fsl_esai.c
index 39637ca..9f5f217 100644
--- a/sound/soc/fsl/fsl_esai.c
+++ b/sound/soc/fsl/fsl_esai.c
@@ -524,11 +524,13 @@ static int fsl_esai_startup(struct snd_pcm_substream *substream,
ESAI_SAICR_SYNC, esai_priv->synchronous ?
ESAI_SAICR_SYNC : 0);
- /* Set a default slot number -- 2 */
+ /* Set slots count */
regmap_update_bits(esai_priv->regmap, REG_ESAI_TCCR,
- ESAI_xCCR_xDC_MASK, ESAI_xCCR_xDC(2));
+ ESAI_xCCR_xDC_MASK,
+ ESAI_xCCR_xDC(esai_priv->slots));
regmap_update_bits(esai_priv->regmap, REG_ESAI_RCCR,
- ESAI_xCCR_xDC_MASK, ESAI_xCCR_xDC(2));
+ ESAI_xCCR_xDC_MASK,
+ ESAI_xCCR_xDC(esai_priv->slots));
}
return 0;
diff --git a/sound/soc/generic/audio-graph-card.c b/sound/soc/generic/audio-graph-card.c
index 97b4f54..0c64030 100644
--- a/sound/soc/generic/audio-graph-card.c
+++ b/sound/soc/generic/audio-graph-card.c
@@ -340,7 +340,7 @@ static int graph_dai_link_of(struct asoc_simple_priv *priv,
struct device_node *top = dev->of_node;
struct asoc_simple_dai *cpu_dai;
struct asoc_simple_dai *codec_dai;
- int ret, single_cpu;
+ int ret, single_cpu = 0;
/* Do it only CPU turn */
if (!li->cpu)
diff --git a/sound/soc/generic/simple-card.c b/sound/soc/generic/simple-card.c
index 75365c7..d916ec6 100644
--- a/sound/soc/generic/simple-card.c
+++ b/sound/soc/generic/simple-card.c
@@ -258,7 +258,7 @@ static int simple_dai_link_of(struct asoc_simple_priv *priv,
struct device_node *plat = NULL;
char prop[128];
char *prefix = "";
- int ret, single_cpu;
+ int ret, single_cpu = 0;
/*
* |CPU |Codec : turn
diff --git a/sound/soc/intel/Makefile b/sound/soc/intel/Makefile
index 4e0248d..7c503880 100644
--- a/sound/soc/intel/Makefile
+++ b/sound/soc/intel/Makefile
@@ -5,7 +5,7 @@
# Platform Support
obj-$(CONFIG_SND_SST_ATOM_HIFI2_PLATFORM) += atom/
obj-$(CONFIG_SND_SOC_INTEL_CATPT) += catpt/
-obj-$(CONFIG_SND_SOC_INTEL_SKYLAKE) += skylake/
+obj-$(CONFIG_SND_SOC_INTEL_SKYLAKE_COMMON) += skylake/
obj-$(CONFIG_SND_SOC_INTEL_KEEMBAY) += keembay/
# Machine support
diff --git a/sound/soc/intel/atom/sst-mfld-platform-pcm.c b/sound/soc/intel/atom/sst-mfld-platform-pcm.c
index 9e9b058..aa5dd59 100644
--- a/sound/soc/intel/atom/sst-mfld-platform-pcm.c
+++ b/sound/soc/intel/atom/sst-mfld-platform-pcm.c
@@ -488,14 +488,14 @@ static struct snd_soc_dai_driver sst_platform_dai[] = {
.channels_min = SST_STEREO,
.channels_max = SST_STEREO,
.rates = SNDRV_PCM_RATE_44100|SNDRV_PCM_RATE_48000,
- .formats = SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE,
+ .formats = SNDRV_PCM_FMTBIT_S16_LE,
},
.capture = {
.stream_name = "Headset Capture",
.channels_min = 1,
.channels_max = 2,
.rates = SNDRV_PCM_RATE_44100|SNDRV_PCM_RATE_48000,
- .formats = SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE,
+ .formats = SNDRV_PCM_FMTBIT_S16_LE,
},
},
{
@@ -506,7 +506,7 @@ static struct snd_soc_dai_driver sst_platform_dai[] = {
.channels_min = SST_STEREO,
.channels_max = SST_STEREO,
.rates = SNDRV_PCM_RATE_44100|SNDRV_PCM_RATE_48000,
- .formats = SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE,
+ .formats = SNDRV_PCM_FMTBIT_S16_LE,
},
},
{
diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
index d5812e7..1ef0464 100644
--- a/sound/soc/intel/boards/bytcr_rt5640.c
+++ b/sound/soc/intel/boards/bytcr_rt5640.c
@@ -478,6 +478,9 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T100TAF"),
},
.driver_data = (void *)(BYT_RT5640_IN1_MAP |
+ BYT_RT5640_JD_SRC_JD2_IN4N |
+ BYT_RT5640_OVCD_TH_2000UA |
+ BYT_RT5640_OVCD_SF_0P75 |
BYT_RT5640_MONO_SPEAKER |
BYT_RT5640_DIFF_MIC |
BYT_RT5640_SSP0_AIF2 |
@@ -512,6 +515,23 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
BYT_RT5640_MCLK_EN),
},
{
+ /* Chuwi Hi8 (CWI509) */
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "Hampoo"),
+ DMI_MATCH(DMI_BOARD_NAME, "BYT-PA03C"),
+ DMI_MATCH(DMI_SYS_VENDOR, "ilife"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "S806"),
+ },
+ .driver_data = (void *)(BYT_RT5640_IN1_MAP |
+ BYT_RT5640_JD_SRC_JD2_IN4N |
+ BYT_RT5640_OVCD_TH_2000UA |
+ BYT_RT5640_OVCD_SF_0P75 |
+ BYT_RT5640_MONO_SPEAKER |
+ BYT_RT5640_DIFF_MIC |
+ BYT_RT5640_SSP0_AIF1 |
+ BYT_RT5640_MCLK_EN),
+ },
+ {
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Circuitco"),
DMI_MATCH(DMI_PRODUCT_NAME, "Minnowboard Max B3 PLATFORM"),
diff --git a/sound/soc/intel/boards/kbl_da7219_max98927.c b/sound/soc/intel/boards/kbl_da7219_max98927.c
index cc9a2509..e0149cf 100644
--- a/sound/soc/intel/boards/kbl_da7219_max98927.c
+++ b/sound/soc/intel/boards/kbl_da7219_max98927.c
@@ -282,12 +282,34 @@ static int kabylake_ssp_fixup(struct snd_soc_pcm_runtime *rtd,
struct snd_interval *chan = hw_param_interval(params,
SNDRV_PCM_HW_PARAM_CHANNELS);
struct snd_mask *fmt = hw_param_mask(params, SNDRV_PCM_HW_PARAM_FORMAT);
- struct snd_soc_dpcm *dpcm = container_of(
- params, struct snd_soc_dpcm, hw_params);
- struct snd_soc_dai_link *fe_dai_link = dpcm->fe->dai_link;
- struct snd_soc_dai_link *be_dai_link = dpcm->be->dai_link;
+ struct snd_soc_dpcm *dpcm, *rtd_dpcm = NULL;
/*
+ * The following loop will be called only for playback stream
+ * In this platform, there is only one playback device on every SSP
+ */
+ for_each_dpcm_fe(rtd, SNDRV_PCM_STREAM_PLAYBACK, dpcm) {
+ rtd_dpcm = dpcm;
+ break;
+ }
+
+ /*
+ * This following loop will be called only for capture stream
+ * In this platform, there is only one capture device on every SSP
+ */
+ for_each_dpcm_fe(rtd, SNDRV_PCM_STREAM_CAPTURE, dpcm) {
+ rtd_dpcm = dpcm;
+ break;
+ }
+
+ if (!rtd_dpcm)
+ return -EINVAL;
+
+ /*
+ * The above 2 loops are mutually exclusive based on the stream direction,
+ * thus rtd_dpcm variable will never be overwritten
+ */
+ /*
* Topology for kblda7219m98373 & kblmax98373 supports only S24_LE,
* where as kblda7219m98927 & kblmax98927 supports S16_LE by default.
* Skipping the port wise FE and BE configuration for kblda7219m98373 &
@@ -309,9 +331,9 @@ static int kabylake_ssp_fixup(struct snd_soc_pcm_runtime *rtd,
/*
* The ADSP will convert the FE rate to 48k, stereo, 24 bit
*/
- if (!strcmp(fe_dai_link->name, "Kbl Audio Port") ||
- !strcmp(fe_dai_link->name, "Kbl Audio Headset Playback") ||
- !strcmp(fe_dai_link->name, "Kbl Audio Capture Port")) {
+ if (!strcmp(rtd_dpcm->fe->dai_link->name, "Kbl Audio Port") ||
+ !strcmp(rtd_dpcm->fe->dai_link->name, "Kbl Audio Headset Playback") ||
+ !strcmp(rtd_dpcm->fe->dai_link->name, "Kbl Audio Capture Port")) {
rate->min = rate->max = 48000;
chan->min = chan->max = 2;
snd_mask_none(fmt);
@@ -322,7 +344,7 @@ static int kabylake_ssp_fixup(struct snd_soc_pcm_runtime *rtd,
* The speaker on the SSP0 supports S16_LE and not S24_LE.
* thus changing the mask here
*/
- if (!strcmp(be_dai_link->name, "SSP0-Codec"))
+ if (!strcmp(rtd_dpcm->be->dai_link->name, "SSP0-Codec"))
snd_mask_set_format(fmt, SNDRV_PCM_FORMAT_S16_LE);
return 0;
diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
index 1d76773..9dc982c 100644
--- a/sound/soc/intel/boards/sof_sdw.c
+++ b/sound/soc/intel/boards/sof_sdw.c
@@ -187,6 +187,17 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
SOF_RT715_DAI_ID_FIX |
SOF_SDW_FOUR_SPK),
},
+ /* AlderLake devices */
+ {
+ .callback = sof_sdw_quirk_cb,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Alder Lake Client Platform"),
+ },
+ .driver_data = (void *)(SOF_RT711_JD_SRC_JD1 |
+ SOF_SDW_TGL_HDMI |
+ SOF_SDW_PCH_DMIC),
+ },
{}
};
diff --git a/sound/soc/intel/boards/sof_wm8804.c b/sound/soc/intel/boards/sof_wm8804.c
index a46ba13..6a181e4 100644
--- a/sound/soc/intel/boards/sof_wm8804.c
+++ b/sound/soc/intel/boards/sof_wm8804.c
@@ -124,7 +124,11 @@ static int sof_wm8804_hw_params(struct snd_pcm_substream *substream,
}
snd_soc_dai_set_clkdiv(codec_dai, WM8804_MCLK_DIV, mclk_div);
- snd_soc_dai_set_pll(codec_dai, 0, 0, sysclk, mclk_freq);
+ ret = snd_soc_dai_set_pll(codec_dai, 0, 0, sysclk, mclk_freq);
+ if (ret < 0) {
+ dev_err(rtd->card->dev, "Failed to set WM8804 PLL\n");
+ return ret;
+ }
ret = snd_soc_dai_set_sysclk(codec_dai, WM8804_TX_CLKSRC_PLL,
sysclk, SND_SOC_CLOCK_OUT);
diff --git a/sound/soc/intel/skylake/Makefile b/sound/soc/intel/skylake/Makefile
index dd39149..1c4649b 100644
--- a/sound/soc/intel/skylake/Makefile
+++ b/sound/soc/intel/skylake/Makefile
@@ -7,7 +7,7 @@
snd-soc-skl-objs += skl-debug.o
endif
-obj-$(CONFIG_SND_SOC_INTEL_SKYLAKE) += snd-soc-skl.o
+obj-$(CONFIG_SND_SOC_INTEL_SKYLAKE_COMMON) += snd-soc-skl.o
#Skylake Clock device support
snd-soc-skl-ssp-clk-objs := skl-ssp-clk.o
diff --git a/sound/soc/qcom/lpass-cpu.c b/sound/soc/qcom/lpass-cpu.c
index 4fb2ec7..7a30a12 100644
--- a/sound/soc/qcom/lpass-cpu.c
+++ b/sound/soc/qcom/lpass-cpu.c
@@ -839,18 +839,8 @@ int asoc_qcom_lpass_cpu_platform_probe(struct platform_device *pdev)
if (dai_id == LPASS_DP_RX)
continue;
- drvdata->mi2s_osr_clk[dai_id] = devm_clk_get(dev,
+ drvdata->mi2s_osr_clk[dai_id] = devm_clk_get_optional(dev,
variant->dai_osr_clk_names[i]);
- if (IS_ERR(drvdata->mi2s_osr_clk[dai_id])) {
- dev_warn(dev,
- "%s() error getting optional %s: %ld\n",
- __func__,
- variant->dai_osr_clk_names[i],
- PTR_ERR(drvdata->mi2s_osr_clk[dai_id]));
-
- drvdata->mi2s_osr_clk[dai_id] = NULL;
- }
-
drvdata->mi2s_bit_clk[dai_id] = devm_clk_get(dev,
variant->dai_bit_clk_names[i]);
if (IS_ERR(drvdata->mi2s_bit_clk[dai_id])) {
diff --git a/sound/soc/samsung/tm2_wm5110.c b/sound/soc/samsung/tm2_wm5110.c
index 9300fef..125e07f 100644
--- a/sound/soc/samsung/tm2_wm5110.c
+++ b/sound/soc/samsung/tm2_wm5110.c
@@ -553,7 +553,7 @@ static int tm2_probe(struct platform_device *pdev)
ret = of_parse_phandle_with_args(dev->of_node, "i2s-controller",
cells_name, i, &args);
- if (!args.np) {
+ if (ret) {
dev_err(dev, "i2s-controller property parse error: %d\n", i);
ret = -EINVAL;
goto dai_node_put;
diff --git a/sound/soc/sh/rcar/core.c b/sound/soc/sh/rcar/core.c
index 6e670b3..289928d 100644
--- a/sound/soc/sh/rcar/core.c
+++ b/sound/soc/sh/rcar/core.c
@@ -1428,8 +1428,75 @@ static int rsnd_hw_params(struct snd_soc_component *component,
}
if (io->converted_chan)
dev_dbg(dev, "convert channels = %d\n", io->converted_chan);
- if (io->converted_rate)
+ if (io->converted_rate) {
+ /*
+ * SRC supports convert rates from params_rate(hw_params)/k_down
+ * to params_rate(hw_params)*k_up, where k_up is always 6, and
+ * k_down depends on number of channels and SRC unit.
+ * So all SRC units can upsample audio up to 6 times regardless
+ * its number of channels. And all SRC units can downsample
+ * 2 channel audio up to 6 times too.
+ */
+ int k_up = 6;
+ int k_down = 6;
+ int channel;
+ struct rsnd_mod *src_mod = rsnd_io_to_mod_src(io);
+
dev_dbg(dev, "convert rate = %d\n", io->converted_rate);
+
+ channel = io->converted_chan ? io->converted_chan :
+ params_channels(hw_params);
+
+ switch (rsnd_mod_id(src_mod)) {
+ /*
+ * SRC0 can downsample 4, 6 and 8 channel audio up to 4 times.
+ * SRC1, SRC3 and SRC4 can downsample 4 channel audio
+ * up to 4 times.
+ * SRC1, SRC3 and SRC4 can downsample 6 and 8 channel audio
+ * no more than twice.
+ */
+ case 1:
+ case 3:
+ case 4:
+ if (channel > 4) {
+ k_down = 2;
+ break;
+ }
+ fallthrough;
+ case 0:
+ if (channel > 2)
+ k_down = 4;
+ break;
+
+ /* Other SRC units do not support more than 2 channels */
+ default:
+ if (channel > 2)
+ return -EINVAL;
+ }
+
+ if (params_rate(hw_params) > io->converted_rate * k_down) {
+ hw_param_interval(hw_params, SNDRV_PCM_HW_PARAM_RATE)->min =
+ io->converted_rate * k_down;
+ hw_param_interval(hw_params, SNDRV_PCM_HW_PARAM_RATE)->max =
+ io->converted_rate * k_down;
+ hw_params->cmask |= SNDRV_PCM_HW_PARAM_RATE;
+ } else if (params_rate(hw_params) * k_up < io->converted_rate) {
+ hw_param_interval(hw_params, SNDRV_PCM_HW_PARAM_RATE)->min =
+ (io->converted_rate + k_up - 1) / k_up;
+ hw_param_interval(hw_params, SNDRV_PCM_HW_PARAM_RATE)->max =
+ (io->converted_rate + k_up - 1) / k_up;
+ hw_params->cmask |= SNDRV_PCM_HW_PARAM_RATE;
+ }
+
+ /*
+ * TBD: Max SRC input and output rates also depend on number
+ * of channels and SRC unit:
+ * SRC1, SRC3 and SRC4 do not support more than 128kHz
+ * for 6 channel and 96kHz for 8 channel audio.
+ * Perhaps this function should return EINVAL if the input or
+ * the output rate exceeds the limitation.
+ */
+ }
}
return rsnd_dai_call(hw_params, io, substream, hw_params);
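Note: the rcar hunk above keeps the requested stream rate inside the window the SRC can actually reach: no more than k_down times above the converted rate and no more than k_up (always 6) times below it. A minimal user-space sketch of that clamping arithmetic, with illustrative values that are not taken from the driver:

    /* Sketch of the SRC rate window used above: the requested rate is
     * clamped to [ceil(out_rate / k_up), out_rate * k_down]. */
    #include <stdio.h>

    static unsigned int clamp_src_rate(unsigned int req, unsigned int out_rate,
                                       unsigned int k_up, unsigned int k_down)
    {
            unsigned int lo = (out_rate + k_up - 1) / k_up;  /* at most k_up x upsample */
            unsigned int hi = out_rate * k_down;             /* at most k_down x downsample */

            if (req > hi)
                    return hi;
            if (req < lo)
                    return lo;
            return req;
    }

    int main(void)
    {
            /* e.g. an 8-channel stream on SRC1 converted to 48 kHz: k_down is 2 */
            printf("%u\n", clamp_src_rate(192000, 48000, 6, 2)); /* prints 96000 */
            return 0;
    }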
diff --git a/sound/soc/sh/rcar/ssi.c b/sound/soc/sh/rcar/ssi.c
index d0ded42..042207c 100644
--- a/sound/soc/sh/rcar/ssi.c
+++ b/sound/soc/sh/rcar/ssi.c
@@ -507,10 +507,15 @@ static int rsnd_ssi_init(struct rsnd_mod *mod,
struct rsnd_priv *priv)
{
struct rsnd_ssi *ssi = rsnd_mod_to_ssi(mod);
+ int ret;
if (!rsnd_ssi_is_run_mods(mod, io))
return 0;
+ ret = rsnd_ssi_master_clk_start(mod, io);
+ if (ret < 0)
+ return ret;
+
ssi->usrcnt++;
rsnd_mod_power_on(mod);
@@ -792,7 +797,6 @@ static void __rsnd_ssi_interrupt(struct rsnd_mod *mod,
SSI_SYS_STATUS(i * 2),
0xf << (id * 4));
stop = true;
- break;
}
}
break;
@@ -810,7 +814,6 @@ static void __rsnd_ssi_interrupt(struct rsnd_mod *mod,
SSI_SYS_STATUS((i * 2) + 1),
0xf << 4);
stop = true;
- break;
}
}
break;
@@ -1060,13 +1063,6 @@ static int rsnd_ssi_pio_pointer(struct rsnd_mod *mod,
return 0;
}
-static int rsnd_ssi_prepare(struct rsnd_mod *mod,
- struct rsnd_dai_stream *io,
- struct rsnd_priv *priv)
-{
- return rsnd_ssi_master_clk_start(mod, io);
-}
-
static struct rsnd_mod_ops rsnd_ssi_pio_ops = {
.name = SSI_NAME,
.probe = rsnd_ssi_common_probe,
@@ -1079,7 +1075,6 @@ static struct rsnd_mod_ops rsnd_ssi_pio_ops = {
.pointer = rsnd_ssi_pio_pointer,
.pcm_new = rsnd_ssi_pcm_new,
.hw_params = rsnd_ssi_hw_params,
- .prepare = rsnd_ssi_prepare,
.get_status = rsnd_ssi_get_status,
};
@@ -1166,7 +1161,6 @@ static struct rsnd_mod_ops rsnd_ssi_dma_ops = {
.pcm_new = rsnd_ssi_pcm_new,
.fallback = rsnd_ssi_fallback,
.hw_params = rsnd_ssi_hw_params,
- .prepare = rsnd_ssi_prepare,
.get_status = rsnd_ssi_get_status,
};
diff --git a/sound/soc/sof/intel/hda-dsp.c b/sound/soc/sof/intel/hda-dsp.c
index c731b9b..85ec436 100644
--- a/sound/soc/sof/intel/hda-dsp.c
+++ b/sound/soc/sof/intel/hda-dsp.c
@@ -226,10 +226,17 @@ bool hda_dsp_core_is_enabled(struct snd_sof_dev *sdev,
val = snd_sof_dsp_read(sdev, HDA_DSP_BAR, HDA_DSP_REG_ADSPCS);
- is_enable = (val & HDA_DSP_ADSPCS_CPA_MASK(core_mask)) &&
- (val & HDA_DSP_ADSPCS_SPA_MASK(core_mask)) &&
- !(val & HDA_DSP_ADSPCS_CRST_MASK(core_mask)) &&
- !(val & HDA_DSP_ADSPCS_CSTALL_MASK(core_mask));
+#define MASK_IS_EQUAL(v, m, field) ({ \
+ u32 _m = field(m); \
+ ((v) & _m) == _m; \
+})
+
+ is_enable = MASK_IS_EQUAL(val, core_mask, HDA_DSP_ADSPCS_CPA_MASK) &&
+ MASK_IS_EQUAL(val, core_mask, HDA_DSP_ADSPCS_SPA_MASK) &&
+ !(val & HDA_DSP_ADSPCS_CRST_MASK(core_mask)) &&
+ !(val & HDA_DSP_ADSPCS_CSTALL_MASK(core_mask));
+
+#undef MASK_IS_EQUAL
dev_dbg(sdev->dev, "DSP core(s) enabled? %d : core_mask %x\n",
is_enable, core_mask);
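Note: the MASK_IS_EQUAL() change above matters whenever core_mask has more than one bit set: "(val & mask)" is true as soon as any requested core is powered, while the fixed check only reports enabled when every requested core is. A tiny stand-alone illustration (not driver code):

    #include <stdio.h>

    #define MASK_IS_EQUAL(v, m) (((v) & (m)) == (m))

    int main(void)
    {
            unsigned int core_mask = 0x3;   /* cores 0 and 1 requested */
            unsigned int status    = 0x1;   /* only core 0 actually enabled */

            printf("any-bit check:   %d\n", !!(status & core_mask));           /* 1 (wrong) */
            printf("full-mask check: %d\n", MASK_IS_EQUAL(status, core_mask)); /* 0 (right) */
            return 0;
    }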
diff --git a/sound/soc/sunxi/sun4i-codec.c b/sound/soc/sunxi/sun4i-codec.c
index 6c13cc8..2173991 100644
--- a/sound/soc/sunxi/sun4i-codec.c
+++ b/sound/soc/sunxi/sun4i-codec.c
@@ -1364,6 +1364,7 @@ static struct snd_soc_card *sun4i_codec_create_card(struct device *dev)
return ERR_PTR(-ENOMEM);
card->dev = dev;
+ card->owner = THIS_MODULE;
card->name = "sun4i-codec";
card->dapm_widgets = sun4i_codec_card_dapm_widgets;
card->num_dapm_widgets = ARRAY_SIZE(sun4i_codec_card_dapm_widgets);
@@ -1396,6 +1397,7 @@ static struct snd_soc_card *sun6i_codec_create_card(struct device *dev)
return ERR_PTR(-ENOMEM);
card->dev = dev;
+ card->owner = THIS_MODULE;
card->name = "A31 Audio Codec";
card->dapm_widgets = sun6i_codec_card_dapm_widgets;
card->num_dapm_widgets = ARRAY_SIZE(sun6i_codec_card_dapm_widgets);
@@ -1449,6 +1451,7 @@ static struct snd_soc_card *sun8i_a23_codec_create_card(struct device *dev)
return ERR_PTR(-ENOMEM);
card->dev = dev;
+ card->owner = THIS_MODULE;
card->name = "A23 Audio Codec";
card->dapm_widgets = sun6i_codec_card_dapm_widgets;
card->num_dapm_widgets = ARRAY_SIZE(sun6i_codec_card_dapm_widgets);
@@ -1487,6 +1490,7 @@ static struct snd_soc_card *sun8i_h3_codec_create_card(struct device *dev)
return ERR_PTR(-ENOMEM);
card->dev = dev;
+ card->owner = THIS_MODULE;
card->name = "H3 Audio Codec";
card->dapm_widgets = sun6i_codec_card_dapm_widgets;
card->num_dapm_widgets = ARRAY_SIZE(sun6i_codec_card_dapm_widgets);
@@ -1525,6 +1529,7 @@ static struct snd_soc_card *sun8i_v3s_codec_create_card(struct device *dev)
return ERR_PTR(-ENOMEM);
card->dev = dev;
+ card->owner = THIS_MODULE;
card->name = "V3s Audio Codec";
card->dapm_widgets = sun6i_codec_card_dapm_widgets;
card->num_dapm_widgets = ARRAY_SIZE(sun6i_codec_card_dapm_widgets);
diff --git a/sound/usb/card.c b/sound/usb/card.c
index fc7c359..258b81b 100644
--- a/sound/usb/card.c
+++ b/sound/usb/card.c
@@ -182,9 +182,8 @@ static int snd_usb_create_stream(struct snd_usb_audio *chip, int ctrlif, int int
ctrlif, interface);
return -EINVAL;
}
- usb_driver_claim_interface(&usb_audio_driver, iface, (void *)-1L);
-
- return 0;
+ return usb_driver_claim_interface(&usb_audio_driver, iface,
+ USB_AUDIO_IFACE_UNUSED);
}
if ((altsd->bInterfaceClass != USB_CLASS_AUDIO &&
@@ -204,7 +203,8 @@ static int snd_usb_create_stream(struct snd_usb_audio *chip, int ctrlif, int int
if (! snd_usb_parse_audio_interface(chip, interface)) {
usb_set_interface(dev, interface, 0); /* reset the current interface */
- usb_driver_claim_interface(&usb_audio_driver, iface, (void *)-1L);
+ return usb_driver_claim_interface(&usb_audio_driver, iface,
+ USB_AUDIO_IFACE_UNUSED);
}
return 0;
@@ -864,7 +864,7 @@ static void usb_audio_disconnect(struct usb_interface *intf)
struct snd_card *card;
struct list_head *p;
- if (chip == (void *)-1L)
+ if (chip == USB_AUDIO_IFACE_UNUSED)
return;
card = chip->card;
@@ -993,7 +993,7 @@ static int usb_audio_suspend(struct usb_interface *intf, pm_message_t message)
struct usb_mixer_interface *mixer;
struct list_head *p;
- if (chip == (void *)-1L)
+ if (chip == USB_AUDIO_IFACE_UNUSED)
return 0;
if (!chip->num_suspended_intf++) {
@@ -1024,7 +1024,7 @@ static int __usb_audio_resume(struct usb_interface *intf, bool reset_resume)
struct list_head *p;
int err = 0;
- if (chip == (void *)-1L)
+ if (chip == USB_AUDIO_IFACE_UNUSED)
return 0;
atomic_inc(&chip->active); /* avoid autopm */
diff --git a/sound/usb/clock.c b/sound/usb/clock.c
index 674e15b..e3d97e5 100644
--- a/sound/usb/clock.c
+++ b/sound/usb/clock.c
@@ -296,7 +296,7 @@ static int __uac_clock_find_source(struct snd_usb_audio *chip,
selector = snd_usb_find_clock_selector(chip->ctrl_intf, entity_id);
if (selector) {
- int ret, i, cur;
+ int ret, i, cur, err;
/* the entity ID we are looking for is a selector.
* find out what it currently selects */
@@ -318,13 +318,17 @@ static int __uac_clock_find_source(struct snd_usb_audio *chip,
ret = __uac_clock_find_source(chip, fmt,
selector->baCSourceID[ret - 1],
visited, validate);
+ if (ret > 0) {
+ err = uac_clock_selector_set_val(chip, entity_id, cur);
+ if (err < 0)
+ return err;
+ }
+
if (!validate || ret > 0 || !chip->autoclock)
return ret;
/* The current clock source is invalid, try others. */
for (i = 1; i <= selector->bNrInPins; i++) {
- int err;
-
if (i == cur)
continue;
@@ -390,7 +394,7 @@ static int __uac3_clock_find_source(struct snd_usb_audio *chip,
selector = snd_usb_find_clock_selector_v3(chip->ctrl_intf, entity_id);
if (selector) {
- int ret, i, cur;
+ int ret, i, cur, err;
/* the entity ID we are looking for is a selector.
* find out what it currently selects */
@@ -412,6 +416,12 @@ static int __uac3_clock_find_source(struct snd_usb_audio *chip,
ret = __uac3_clock_find_source(chip, fmt,
selector->baCSourceID[ret - 1],
visited, validate);
+ if (ret > 0) {
+ err = uac_clock_selector_set_val(chip, entity_id, cur);
+ if (err < 0)
+ return err;
+ }
+
if (!validate || ret > 0 || !chip->autoclock)
return ret;
diff --git a/sound/usb/line6/driver.c b/sound/usb/line6/driver.c
index a030dd6..9602929 100644
--- a/sound/usb/line6/driver.c
+++ b/sound/usb/line6/driver.c
@@ -699,6 +699,10 @@ static int line6_init_cap_control(struct usb_line6 *line6)
line6->buffer_message = kmalloc(LINE6_MIDI_MESSAGE_MAXLEN, GFP_KERNEL);
if (!line6->buffer_message)
return -ENOMEM;
+
+ ret = line6_init_midi(line6);
+ if (ret < 0)
+ return ret;
} else {
ret = line6_hwdep_init(line6);
if (ret < 0)
diff --git a/sound/usb/line6/pod.c b/sound/usb/line6/pod.c
index cd44cb5..16e6443 100644
--- a/sound/usb/line6/pod.c
+++ b/sound/usb/line6/pod.c
@@ -376,11 +376,6 @@ static int pod_init(struct usb_line6 *line6,
if (err < 0)
return err;
- /* initialize MIDI subsystem: */
- err = line6_init_midi(line6);
- if (err < 0)
- return err;
-
/* initialize PCM subsystem: */
err = line6_init_pcm(line6, &pod_pcm_properties);
if (err < 0)
diff --git a/sound/usb/line6/variax.c b/sound/usb/line6/variax.c
index ed158f0..c2245aa 100644
--- a/sound/usb/line6/variax.c
+++ b/sound/usb/line6/variax.c
@@ -159,7 +159,6 @@ static int variax_init(struct usb_line6 *line6,
const struct usb_device_id *id)
{
struct usb_line6_variax *variax = line6_to_variax(line6);
- int err;
line6->process_message = line6_variax_process_message;
line6->disconnect = line6_variax_disconnect;
@@ -172,11 +171,6 @@ static int variax_init(struct usb_line6 *line6,
if (variax->buffer_activate == NULL)
return -ENOMEM;
- /* initialize MIDI subsystem: */
- err = line6_init_midi(&variax->line6);
- if (err < 0)
- return err;
-
/* initiate startup procedure: */
schedule_delayed_work(&line6->startup_work,
msecs_to_jiffies(VARIAX_STARTUP_DELAY1));
diff --git a/sound/usb/midi.c b/sound/usb/midi.c
index 0c23fa6..fa91290 100644
--- a/sound/usb/midi.c
+++ b/sound/usb/midi.c
@@ -1332,7 +1332,7 @@ static int snd_usbmidi_in_endpoint_create(struct snd_usb_midi *umidi,
error:
snd_usbmidi_in_endpoint_delete(ep);
- return -ENOMEM;
+ return err;
}
/*
@@ -1889,8 +1889,12 @@ static int snd_usbmidi_get_ms_info(struct snd_usb_midi *umidi,
ms_ep = find_usb_ms_endpoint_descriptor(hostep);
if (!ms_ep)
continue;
+ if (ms_ep->bLength <= sizeof(*ms_ep))
+ continue;
if (ms_ep->bNumEmbMIDIJack > 0x10)
continue;
+ if (ms_ep->bLength < sizeof(*ms_ep) + ms_ep->bNumEmbMIDIJack)
+ continue;
if (usb_endpoint_dir_out(ep)) {
if (endpoints[epidx].out_ep) {
if (++epidx >= MIDI_MAX_ENDPOINTS) {
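Note: the midi.c checks above only trust the variable-length embedded-jack array when bLength covers both the fixed descriptor header and the advertised number of entries. A hedged user-space sketch of the same validation; the struct layout here is illustrative, not the full class-specific descriptor:

    #include <stdint.h>
    #include <stdio.h>

    struct ms_endpoint_desc {
            uint8_t bLength;
            uint8_t bDescriptorType;
            uint8_t bDescriptorSubtype;
            uint8_t bNumEmbMIDIJack;
            uint8_t baAssocJackID[];   /* bNumEmbMIDIJack entries follow */
    };

    static int ms_ep_valid(const struct ms_endpoint_desc *ep)
    {
            if (ep->bLength <= sizeof(*ep))                      /* no room for any entry */
                    return 0;
            if (ep->bNumEmbMIDIJack > 0x10)                      /* sanity cap */
                    return 0;
            if (ep->bLength < sizeof(*ep) + ep->bNumEmbMIDIJack) /* array truncated */
                    return 0;
            return 1;
    }

    int main(void)
    {
            struct ms_endpoint_desc bogus = { .bLength = 4, .bNumEmbMIDIJack = 8 };

            printf("%d\n", ms_ep_valid(&bogus)); /* 0: rejected, descriptor too short */
            return 0;
    }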
diff --git a/sound/usb/mixer_maps.c b/sound/usb/mixer_maps.c
index 646deb6..c5794e8 100644
--- a/sound/usb/mixer_maps.c
+++ b/sound/usb/mixer_maps.c
@@ -337,6 +337,13 @@ static const struct usbmix_name_map bose_companion5_map[] = {
{ 0 } /* terminator */
};
+/* Sennheiser Communications Headset [PC 8], the dB value is reported as -6 negative maximum */
+static const struct usbmix_dB_map sennheiser_pc8_dB = {-9500, 0};
+static const struct usbmix_name_map sennheiser_pc8_map[] = {
+ { 9, NULL, .dB = &sennheiser_pc8_dB },
+ { 0 } /* terminator */
+};
+
/*
* Dell usb dock with ALC4020 codec had a firmware problem where it got
* screwed up when zero volume is passed; just skip it as a workaround
@@ -593,6 +600,11 @@ static const struct usbmix_ctl_map usbmix_ctl_maps[] = {
.id = USB_ID(0x17aa, 0x1046),
.map = lenovo_p620_rear_map,
},
+ {
+ /* Sennheiser Communications Headset [PC 8] */
+ .id = USB_ID(0x1395, 0x0025),
+ .map = sennheiser_pc8_map,
+ },
{ 0 } /* terminator */
};
diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
index 5171b3d..8297117 100644
--- a/sound/usb/mixer_quirks.c
+++ b/sound/usb/mixer_quirks.c
@@ -3017,7 +3017,7 @@ int snd_usb_mixer_apply_create_quirk(struct usb_mixer_interface *mixer)
case USB_ID(0x1235, 0x8203): /* Focusrite Scarlett 6i6 2nd Gen */
case USB_ID(0x1235, 0x8204): /* Focusrite Scarlett 18i8 2nd Gen */
case USB_ID(0x1235, 0x8201): /* Focusrite Scarlett 18i20 2nd Gen */
- err = snd_scarlett_gen2_controls_create(mixer);
+ err = snd_scarlett_gen2_init(mixer);
break;
case USB_ID(0x041e, 0x323b): /* Creative Sound Blaster E1 */
diff --git a/sound/usb/mixer_scarlett_gen2.c b/sound/usb/mixer_scarlett_gen2.c
index 4bbec56..9a98b0c 100644
--- a/sound/usb/mixer_scarlett_gen2.c
+++ b/sound/usb/mixer_scarlett_gen2.c
@@ -635,7 +635,7 @@ static int scarlett2_usb(
/* send a second message to get the response */
err = snd_usb_ctl_msg(mixer->chip->dev,
- usb_sndctrlpipe(mixer->chip->dev, 0),
+ usb_rcvctrlpipe(mixer->chip->dev, 0),
SCARLETT2_USB_VENDOR_SPECIFIC_CMD_RESP,
USB_RECIP_INTERFACE | USB_TYPE_CLASS | USB_DIR_IN,
0,
@@ -1997,38 +1997,11 @@ static int scarlett2_mixer_status_create(struct usb_mixer_interface *mixer)
return usb_submit_urb(mixer->urb, GFP_KERNEL);
}
-/* Entry point */
-int snd_scarlett_gen2_controls_create(struct usb_mixer_interface *mixer)
+static int snd_scarlett_gen2_controls_create(struct usb_mixer_interface *mixer,
+ const struct scarlett2_device_info *info)
{
- const struct scarlett2_device_info *info;
int err;
- /* only use UAC_VERSION_2 */
- if (!mixer->protocol)
- return 0;
-
- switch (mixer->chip->usb_id) {
- case USB_ID(0x1235, 0x8203):
- info = &s6i6_gen2_info;
- break;
- case USB_ID(0x1235, 0x8204):
- info = &s18i8_gen2_info;
- break;
- case USB_ID(0x1235, 0x8201):
- info = &s18i20_gen2_info;
- break;
- default: /* device not (yet) supported */
- return -EINVAL;
- }
-
- if (!(mixer->chip->setup & SCARLETT2_ENABLE)) {
- usb_audio_err(mixer->chip,
- "Focusrite Scarlett Gen 2 Mixer Driver disabled; "
- "use options snd_usb_audio device_setup=1 "
- "to enable and report any issues to g@b4.vu");
- return 0;
- }
-
/* Initialise private data, routing, sequence number */
err = scarlett2_init_private(mixer, info);
if (err < 0)
@@ -2073,3 +2046,51 @@ int snd_scarlett_gen2_controls_create(struct usb_mixer_interface *mixer)
return 0;
}
+
+int snd_scarlett_gen2_init(struct usb_mixer_interface *mixer)
+{
+ struct snd_usb_audio *chip = mixer->chip;
+ const struct scarlett2_device_info *info;
+ int err;
+
+ /* only use UAC_VERSION_2 */
+ if (!mixer->protocol)
+ return 0;
+
+ switch (chip->usb_id) {
+ case USB_ID(0x1235, 0x8203):
+ info = &s6i6_gen2_info;
+ break;
+ case USB_ID(0x1235, 0x8204):
+ info = &s18i8_gen2_info;
+ break;
+ case USB_ID(0x1235, 0x8201):
+ info = &s18i20_gen2_info;
+ break;
+ default: /* device not (yet) supported */
+ return -EINVAL;
+ }
+
+ if (!(chip->setup & SCARLETT2_ENABLE)) {
+ usb_audio_info(chip,
+ "Focusrite Scarlett Gen 2 Mixer Driver disabled; "
+ "use options snd_usb_audio vid=0x%04x pid=0x%04x "
+ "device_setup=1 to enable and report any issues "
+ "to g@b4.vu",
+ USB_ID_VENDOR(chip->usb_id),
+ USB_ID_PRODUCT(chip->usb_id));
+ return 0;
+ }
+
+ usb_audio_info(chip,
+ "Focusrite Scarlett Gen 2 Mixer Driver enabled pid=0x%04x",
+ USB_ID_PRODUCT(chip->usb_id));
+
+ err = snd_scarlett_gen2_controls_create(mixer, info);
+ if (err < 0)
+ usb_audio_err(mixer->chip,
+ "Error initialising Scarlett Mixer Driver: %d",
+ err);
+
+ return err;
+}
diff --git a/sound/usb/mixer_scarlett_gen2.h b/sound/usb/mixer_scarlett_gen2.h
index 52e1dad..668c6b0 100644
--- a/sound/usb/mixer_scarlett_gen2.h
+++ b/sound/usb/mixer_scarlett_gen2.h
@@ -2,6 +2,6 @@
#ifndef __USB_MIXER_SCARLETT_GEN2_H
#define __USB_MIXER_SCARLETT_GEN2_H
-int snd_scarlett_gen2_controls_create(struct usb_mixer_interface *mixer);
+int snd_scarlett_gen2_init(struct usb_mixer_interface *mixer);
#endif /* __USB_MIXER_SCARLETT_GEN2_H */
diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
index 3c1697f..5728bf7 100644
--- a/sound/usb/quirks-table.h
+++ b/sound/usb/quirks-table.h
@@ -2376,6 +2376,16 @@ YAMAHA_DEVICE(0x7010, "UB99"),
}
},
+{
+ USB_DEVICE_VENDOR_SPEC(0x0944, 0x0204),
+ .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
+ .vendor_name = "KORG, Inc.",
+ /* .product_name = "ToneLab EX", */
+ .ifnum = 3,
+ .type = QUIRK_MIDI_STANDARD_INTERFACE,
+ }
+},
+
/* AKAI devices */
{
USB_DEVICE(0x09e8, 0x0062),
diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
index 5ab2a45..bddef8a 100644
--- a/sound/usb/quirks.c
+++ b/sound/usb/quirks.c
@@ -55,8 +55,12 @@ static int create_composite_quirk(struct snd_usb_audio *chip,
if (!iface)
continue;
if (quirk->ifnum != probed_ifnum &&
- !usb_interface_claimed(iface))
- usb_driver_claim_interface(driver, iface, (void *)-1L);
+ !usb_interface_claimed(iface)) {
+ err = usb_driver_claim_interface(driver, iface,
+ USB_AUDIO_IFACE_UNUSED);
+ if (err < 0)
+ return err;
+ }
}
return 0;
@@ -390,8 +394,12 @@ static int create_autodetect_quirks(struct snd_usb_audio *chip,
continue;
err = create_autodetect_quirk(chip, iface, driver);
- if (err >= 0)
- usb_driver_claim_interface(driver, iface, (void *)-1L);
+ if (err >= 0) {
+ err = usb_driver_claim_interface(driver, iface,
+ USB_AUDIO_IFACE_UNUSED);
+ if (err < 0)
+ return err;
+ }
}
return 0;
diff --git a/sound/usb/usbaudio.h b/sound/usb/usbaudio.h
index 9667060..e54a98f 100644
--- a/sound/usb/usbaudio.h
+++ b/sound/usb/usbaudio.h
@@ -63,6 +63,8 @@ struct snd_usb_audio {
struct media_intf_devnode *ctl_intf_media_devnode;
};
+#define USB_AUDIO_IFACE_UNUSED ((void *)-1L)
+
#define usb_audio_err(chip, fmt, args...) \
dev_err(&(chip)->dev->dev, fmt, ##args)
#define usb_audio_warn(chip, fmt, args...) \
diff --git a/tools/arch/ia64/include/asm/barrier.h b/tools/arch/ia64/include/asm/barrier.h
index 4d471d9..6fffe56 100644
--- a/tools/arch/ia64/include/asm/barrier.h
+++ b/tools/arch/ia64/include/asm/barrier.h
@@ -39,9 +39,6 @@
* sequential memory pages only.
*/
-/* XXX From arch/ia64/include/uapi/asm/gcc_intrin.h */
-#define ia64_mf() asm volatile ("mf" ::: "memory")
-
#define mb() ia64_mf()
#define rmb() mb()
#define wmb() mb()
diff --git a/tools/bpf/bpftool/Documentation/bpftool-cgroup.rst b/tools/bpf/bpftool/Documentation/bpftool-cgroup.rst
index 790944c..baee859 100644
--- a/tools/bpf/bpftool/Documentation/bpftool-cgroup.rst
+++ b/tools/bpf/bpftool/Documentation/bpftool-cgroup.rst
@@ -30,7 +30,8 @@
| *ATTACH_TYPE* := { **ingress** | **egress** | **sock_create** | **sock_ops** | **device** |
| **bind4** | **bind6** | **post_bind4** | **post_bind6** | **connect4** | **connect6** |
| **getpeername4** | **getpeername6** | **getsockname4** | **getsockname6** | **sendmsg4** |
-| **sendmsg6** | **recvmsg4** | **recvmsg6** | **sysctl** | **getsockopt** | **setsockopt** }
+| **sendmsg6** | **recvmsg4** | **recvmsg6** | **sysctl** | **getsockopt** | **setsockopt** |
+| **sock_release** }
| *ATTACH_FLAGS* := { **multi** | **override** }
DESCRIPTION
@@ -106,6 +107,7 @@
**getpeername6** call to getpeername(2) for an inet6 socket (since 5.8);
**getsockname4** call to getsockname(2) for an inet4 socket (since 5.8);
**getsockname6** call to getsockname(2) for an inet6 socket (since 5.8).
+ **sock_release** closing an userspace inet socket (since 5.9).
**bpftool cgroup detach** *CGROUP* *ATTACH_TYPE* *PROG*
Detach *PROG* from the cgroup *CGROUP* and attach type
diff --git a/tools/bpf/bpftool/Documentation/bpftool-prog.rst b/tools/bpf/bpftool/Documentation/bpftool-prog.rst
index 358c730..fe1b38e 100644
--- a/tools/bpf/bpftool/Documentation/bpftool-prog.rst
+++ b/tools/bpf/bpftool/Documentation/bpftool-prog.rst
@@ -44,7 +44,7 @@
| **cgroup/connect4** | **cgroup/connect6** | **cgroup/getpeername4** | **cgroup/getpeername6** |
| **cgroup/getsockname4** | **cgroup/getsockname6** | **cgroup/sendmsg4** | **cgroup/sendmsg6** |
| **cgroup/recvmsg4** | **cgroup/recvmsg6** | **cgroup/sysctl** |
-| **cgroup/getsockopt** | **cgroup/setsockopt** |
+| **cgroup/getsockopt** | **cgroup/setsockopt** | **cgroup/sock_release** |
| **struct_ops** | **fentry** | **fexit** | **freplace** | **sk_lookup**
| }
| *ATTACH_TYPE* := {
diff --git a/tools/bpf/bpftool/bash-completion/bpftool b/tools/bpf/bpftool/bash-completion/bpftool
index 3f1da30..f783e7c 100644
--- a/tools/bpf/bpftool/bash-completion/bpftool
+++ b/tools/bpf/bpftool/bash-completion/bpftool
@@ -478,7 +478,7 @@
cgroup/recvmsg4 cgroup/recvmsg6 \
cgroup/post_bind4 cgroup/post_bind6 \
cgroup/sysctl cgroup/getsockopt \
- cgroup/setsockopt struct_ops \
+ cgroup/setsockopt cgroup/sock_release struct_ops \
fentry fexit freplace sk_lookup" -- \
"$cur" ) )
return 0
@@ -1008,7 +1008,7 @@
device bind4 bind6 post_bind4 post_bind6 connect4 connect6 \
getpeername4 getpeername6 getsockname4 getsockname6 \
sendmsg4 sendmsg6 recvmsg4 recvmsg6 sysctl getsockopt \
- setsockopt'
+ setsockopt sock_release'
local ATTACH_FLAGS='multi override'
local PROG_TYPE='id pinned tag name'
case $prev in
@@ -1019,7 +1019,7 @@
ingress|egress|sock_create|sock_ops|device|bind4|bind6|\
post_bind4|post_bind6|connect4|connect6|getpeername4|\
getpeername6|getsockname4|getsockname6|sendmsg4|sendmsg6|\
- recvmsg4|recvmsg6|sysctl|getsockopt|setsockopt)
+ recvmsg4|recvmsg6|sysctl|getsockopt|setsockopt|sock_release)
COMPREPLY=( $( compgen -W "$PROG_TYPE" -- \
"$cur" ) )
return 0
diff --git a/tools/bpf/bpftool/btf.c b/tools/bpf/bpftool/btf.c
index 2afb7d5..592803a 100644
--- a/tools/bpf/bpftool/btf.c
+++ b/tools/bpf/bpftool/btf.c
@@ -519,6 +519,7 @@ static int do_dump(int argc, char **argv)
NEXT_ARG();
if (argc < 1) {
p_err("expecting value for 'format' option\n");
+ err = -EINVAL;
goto done;
}
if (strcmp(*argv, "c") == 0) {
@@ -528,11 +529,13 @@ static int do_dump(int argc, char **argv)
} else {
p_err("unrecognized format specifier: '%s', possible values: raw, c",
*argv);
+ err = -EINVAL;
goto done;
}
NEXT_ARG();
} else {
p_err("unrecognized option: '%s'", *argv);
+ err = -EINVAL;
goto done;
}
}
diff --git a/tools/bpf/bpftool/cgroup.c b/tools/bpf/bpftool/cgroup.c
index d901cc1..6e53b1d 100644
--- a/tools/bpf/bpftool/cgroup.c
+++ b/tools/bpf/bpftool/cgroup.c
@@ -28,7 +28,8 @@
" connect6 | getpeername4 | getpeername6 |\n" \
" getsockname4 | getsockname6 | sendmsg4 |\n" \
" sendmsg6 | recvmsg4 | recvmsg6 |\n" \
- " sysctl | getsockopt | setsockopt }"
+ " sysctl | getsockopt | setsockopt |\n" \
+ " sock_release }"
static unsigned int query_flags;
diff --git a/tools/bpf/bpftool/main.c b/tools/bpf/bpftool/main.c
index 682daaa..33068d6 100644
--- a/tools/bpf/bpftool/main.c
+++ b/tools/bpf/bpftool/main.c
@@ -274,7 +274,7 @@ static int do_batch(int argc, char **argv)
int n_argc;
FILE *fp;
char *cp;
- int err;
+ int err = 0;
int i;
if (argc < 2) {
@@ -368,7 +368,6 @@ static int do_batch(int argc, char **argv)
} else {
if (!json_output)
printf("processed %d commands\n", lines);
- err = 0;
}
err_close:
if (fp != stdin)
diff --git a/tools/bpf/bpftool/map.c b/tools/bpf/bpftool/map.c
index a7efbd8..ce6faf1b 100644
--- a/tools/bpf/bpftool/map.c
+++ b/tools/bpf/bpftool/map.c
@@ -99,7 +99,7 @@ static int do_dump_btf(const struct btf_dumper *d,
void *value)
{
__u32 value_id;
- int ret;
+ int ret = 0;
/* start of key-value pair */
jsonw_start_object(d->jw);
diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
index acdb2c24..14237ff 100644
--- a/tools/bpf/bpftool/prog.c
+++ b/tools/bpf/bpftool/prog.c
@@ -2105,7 +2105,7 @@ static int do_help(int argc, char **argv)
" cgroup/getpeername4 | cgroup/getpeername6 |\n"
" cgroup/getsockname4 | cgroup/getsockname6 | cgroup/sendmsg4 |\n"
" cgroup/sendmsg6 | cgroup/recvmsg4 | cgroup/recvmsg6 |\n"
- " cgroup/getsockopt | cgroup/setsockopt |\n"
+ " cgroup/getsockopt | cgroup/setsockopt | cgroup/sock_release |\n"
" struct_ops | fentry | fexit | freplace | sk_lookup }\n"
" ATTACH_TYPE := { msg_verdict | stream_verdict | stream_parser |\n"
" flow_dissector }\n"
diff --git a/tools/bpf/resolve_btfids/.gitignore b/tools/bpf/resolve_btfids/.gitignore
index a026df7..16913ff 100644
--- a/tools/bpf/resolve_btfids/.gitignore
+++ b/tools/bpf/resolve_btfids/.gitignore
@@ -1,4 +1,3 @@
-/FEATURE-DUMP.libbpf
-/bpf_helper_defs.h
/fixdep
/resolve_btfids
+/libbpf/
diff --git a/tools/bpf/resolve_btfids/Makefile b/tools/bpf/resolve_btfids/Makefile
index bf65643..bb9fa8d 100644
--- a/tools/bpf/resolve_btfids/Makefile
+++ b/tools/bpf/resolve_btfids/Makefile
@@ -2,11 +2,7 @@
include ../../scripts/Makefile.include
include ../../scripts/Makefile.arch
-ifeq ($(srctree),)
-srctree := $(patsubst %/,%,$(dir $(CURDIR)))
-srctree := $(patsubst %/,%,$(dir $(srctree)))
-srctree := $(patsubst %/,%,$(dir $(srctree)))
-endif
+srctree := $(abspath $(CURDIR)/../../../)
ifeq ($(V),1)
Q =
@@ -22,28 +18,29 @@
CC = $(HOSTCC)
LD = $(HOSTLD)
ARCH = $(HOSTARCH)
+RM ?= rm
OUTPUT ?= $(srctree)/tools/bpf/resolve_btfids/
LIBBPF_SRC := $(srctree)/tools/lib/bpf/
SUBCMD_SRC := $(srctree)/tools/lib/subcmd/
-BPFOBJ := $(OUTPUT)/libbpf.a
-SUBCMDOBJ := $(OUTPUT)/libsubcmd.a
+BPFOBJ := $(OUTPUT)/libbpf/libbpf.a
+SUBCMDOBJ := $(OUTPUT)/libsubcmd/libsubcmd.a
BINARY := $(OUTPUT)/resolve_btfids
BINARY_IN := $(BINARY)-in.o
all: $(BINARY)
-$(OUTPUT):
+$(OUTPUT) $(OUTPUT)/libbpf $(OUTPUT)/libsubcmd:
$(call msg,MKDIR,,$@)
- $(Q)mkdir -p $(OUTPUT)
+ $(Q)mkdir -p $(@)
-$(SUBCMDOBJ): fixdep FORCE
- $(Q)$(MAKE) -C $(SUBCMD_SRC) OUTPUT=$(OUTPUT)
+$(SUBCMDOBJ): fixdep FORCE | $(OUTPUT)/libsubcmd
+ $(Q)$(MAKE) -C $(SUBCMD_SRC) OUTPUT=$(abspath $(dir $@))/ $(abspath $@)
-$(BPFOBJ): $(wildcard $(LIBBPF_SRC)/*.[ch] $(LIBBPF_SRC)/Makefile) | $(OUTPUT)
+$(BPFOBJ): $(wildcard $(LIBBPF_SRC)/*.[ch] $(LIBBPF_SRC)/Makefile) | $(OUTPUT)/libbpf
$(Q)$(MAKE) $(submake_extras) -C $(LIBBPF_SRC) OUTPUT=$(abspath $(dir $@))/ $(abspath $@)
CFLAGS := -g \
@@ -57,24 +54,27 @@
export srctree OUTPUT CFLAGS Q
include $(srctree)/tools/build/Makefile.include
-$(BINARY_IN): fixdep FORCE
+$(BINARY_IN): fixdep FORCE | $(OUTPUT)
$(Q)$(MAKE) $(build)=resolve_btfids
$(BINARY): $(BPFOBJ) $(SUBCMDOBJ) $(BINARY_IN)
$(call msg,LINK,$@)
$(Q)$(CC) $(BINARY_IN) $(LDFLAGS) -o $@ $(BPFOBJ) $(SUBCMDOBJ) $(LIBS)
-libsubcmd-clean:
- $(Q)$(MAKE) -C $(SUBCMD_SRC) OUTPUT=$(OUTPUT) clean
+clean_objects := $(wildcard $(OUTPUT)/*.o \
+ $(OUTPUT)/.*.o.cmd \
+ $(OUTPUT)/.*.o.d \
+ $(OUTPUT)/libbpf \
+ $(OUTPUT)/libsubcmd \
+ $(OUTPUT)/resolve_btfids)
-libbpf-clean:
- $(Q)$(MAKE) -C $(LIBBPF_SRC) OUTPUT=$(OUTPUT) clean
-
-clean: libsubcmd-clean libbpf-clean fixdep-clean
+ifneq ($(clean_objects),)
+clean: fixdep-clean
$(call msg,CLEAN,$(BINARY))
- $(Q)$(RM) -f $(BINARY); \
- $(RM) -rf $(if $(OUTPUT),$(OUTPUT),.)/feature; \
- find $(if $(OUTPUT),$(OUTPUT),.) -name \*.o -or -name \*.o.cmd -or -name \*.o.d | xargs $(RM)
+ $(Q)$(RM) -rf $(clean_objects)
+else
+clean:
+endif
tags:
$(call msg,GEN,,tags)
diff --git a/tools/cgroup/memcg_slabinfo.py b/tools/cgroup/memcg_slabinfo.py
index c4225ed..1600b17 100644
--- a/tools/cgroup/memcg_slabinfo.py
+++ b/tools/cgroup/memcg_slabinfo.py
@@ -128,9 +128,9 @@
cfg['nr_nodes'] = prog['nr_online_nodes'].value_()
- if prog.type('struct kmem_cache').members[1][1] == 'flags':
+ if prog.type('struct kmem_cache').members[1].name == 'flags':
cfg['allocator'] = 'SLUB'
- elif prog.type('struct kmem_cache').members[1][1] == 'batchcount':
+ elif prog.type('struct kmem_cache').members[1].name == 'batchcount':
cfg['allocator'] = 'SLAB'
else:
err('Can\'t determine the slab allocator')
@@ -193,7 +193,7 @@
# look over all slab pages, belonging to non-root memcgs
# and look for objects belonging to the given memory cgroup
for page in for_each_slab_page(prog):
- objcg_vec_raw = page.obj_cgroups.value_()
+ objcg_vec_raw = page.memcg_data.value_()
if objcg_vec_raw == 0:
continue
cache = page.slab_cache
@@ -202,7 +202,7 @@
addr = cache.value_()
caches[addr] = cache
# clear the lowest bit to get the true obj_cgroups
- objcg_vec = Object(prog, page.obj_cgroups.type_,
+ objcg_vec = Object(prog, 'struct obj_cgroup **',
value=objcg_vec_raw & ~1)
if addr not in stats:
diff --git a/tools/include/linux/bits.h b/tools/include/linux/bits.h
index 7f475d5..87d1126 100644
--- a/tools/include/linux/bits.h
+++ b/tools/include/linux/bits.h
@@ -22,7 +22,7 @@
#include <linux/build_bug.h>
#define GENMASK_INPUT_CHECK(h, l) \
(BUILD_BUG_ON_ZERO(__builtin_choose_expr( \
- __builtin_constant_p((l) > (h)), (l) > (h), 0)))
+ __is_constexpr((l) > (h)), (l) > (h), 0)))
#else
/*
* BUILD_BUG_ON_ZERO is not available in h files included from asm files,
diff --git a/tools/include/linux/const.h b/tools/include/linux/const.h
index 81b8aae..435ddd7 100644
--- a/tools/include/linux/const.h
+++ b/tools/include/linux/const.h
@@ -3,4 +3,12 @@
#include <vdso/const.h>
+/*
+ * This returns a constant expression while determining if an argument is
+ * a constant expression, most importantly without evaluating the argument.
+ * Glory to Martin Uecker <Martin.Uecker@med.uni-goettingen.de>
+ */
+#define __is_constexpr(x) \
+ (sizeof(int) == sizeof(*(8 ? ((void *)((long)(x) * 0l)) : (int *)8)))
+
#endif /* _LINUX_CONST_H */
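Note: the __is_constexpr() trick added above works because (long)(x) * 0l is a null pointer constant only when x is an integer constant expression, so the conditional picks int * versus void * and the sizeof comparison collapses to 1 or 0 at compile time. A minimal demonstration outside the kernel headers (GNU C, since it relies on sizeof(void)):

    #include <stdio.h>

    #define __is_constexpr(x) \
            (sizeof(int) == sizeof(*(8 ? ((void *)((long)(x) * 0l)) : (int *)8)))

    int main(void)
    {
            int runtime = 5;

            printf("%d\n", (int)__is_constexpr(42));      /* 1: constant expression */
            printf("%d\n", (int)__is_constexpr(runtime)); /* 0: runtime value */
            return 0;
    }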
diff --git a/tools/include/uapi/asm/errno.h b/tools/include/uapi/asm/errno.h
index 637189e..d30439b 100644
--- a/tools/include/uapi/asm/errno.h
+++ b/tools/include/uapi/asm/errno.h
@@ -9,8 +9,6 @@
#include "../../../arch/alpha/include/uapi/asm/errno.h"
#elif defined(__mips__)
#include "../../../arch/mips/include/uapi/asm/errno.h"
-#elif defined(__ia64__)
-#include "../../../arch/ia64/include/uapi/asm/errno.h"
#elif defined(__xtensa__)
#include "../../../arch/xtensa/include/uapi/asm/errno.h"
#else
diff --git a/tools/kvm/kvm_stat/kvm_stat.service b/tools/kvm/kvm_stat/kvm_stat.service
index 71aabaf..8f13b84 100644
--- a/tools/kvm/kvm_stat/kvm_stat.service
+++ b/tools/kvm/kvm_stat/kvm_stat.service
@@ -9,6 +9,7 @@
ExecStart=/usr/bin/kvm_stat -dtcz -s 10 -L /var/log/kvm_stat.csv
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
+RestartSec=60s
SyslogIdentifier=kvm_stat
SyslogLevel=debug
diff --git a/tools/lib/bpf/bpf_core_read.h b/tools/lib/bpf/bpf_core_read.h
index bbcefb3..4538ed7 100644
--- a/tools/lib/bpf/bpf_core_read.h
+++ b/tools/lib/bpf/bpf_core_read.h
@@ -88,11 +88,19 @@ enum bpf_enum_value_kind {
const void *p = (const void *)s + __CORE_RELO(s, field, BYTE_OFFSET); \
unsigned long long val; \
\
+ /* This is a so-called barrier_var() operation that makes specified \
+ * variable "a black box" for optimizing compiler. \
+ * It forces compiler to perform BYTE_OFFSET relocation on p and use \
+ * its calculated value in the switch below, instead of applying \
+ * the same relocation 4 times for each individual memory load. \
+ */ \
+ asm volatile("" : "=r"(p) : "0"(p)); \
+ \
switch (__CORE_RELO(s, field, BYTE_SIZE)) { \
- case 1: val = *(const unsigned char *)p; \
- case 2: val = *(const unsigned short *)p; \
- case 4: val = *(const unsigned int *)p; \
- case 8: val = *(const unsigned long long *)p; \
+ case 1: val = *(const unsigned char *)p; break; \
+ case 2: val = *(const unsigned short *)p; break; \
+ case 4: val = *(const unsigned int *)p; break; \
+ case 8: val = *(const unsigned long long *)p; break; \
} \
val <<= __CORE_RELO(s, field, LSHIFT_U64); \
if (__CORE_RELO(s, field, SIGNED)) \
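Note: the bpf_core_read.h hunk above adds the missing break statements; without them every case falls through, so the final 8-byte load always wins regardless of the relocated size. A tiny illustration of that failure mode (not the libbpf macro itself):

    #include <stdio.h>

    static unsigned long long read_sized(int size)
    {
            unsigned long long val = 0;

            switch (size) {
            case 1: val = 1;        /* missing break: falls through */
            case 2: val = 2;        /* missing break: falls through */
            case 4: val = 4;        /* missing break: falls through */
            case 8: val = 8;
            }
            return val;
    }

    int main(void)
    {
            printf("%llu\n", read_sized(1)); /* prints 8, not 1 */
            return 0;
    }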
diff --git a/tools/lib/bpf/bpf_tracing.h b/tools/lib/bpf/bpf_tracing.h
index f9ef377..1c2e91e 100644
--- a/tools/lib/bpf/bpf_tracing.h
+++ b/tools/lib/bpf/bpf_tracing.h
@@ -413,20 +413,38 @@ typeof(name(0)) name(struct pt_regs *ctx) \
} \
static __always_inline typeof(name(0)) ____##name(struct pt_regs *ctx, ##args)
+#define ___bpf_fill0(arr, p, x) do {} while (0)
+#define ___bpf_fill1(arr, p, x) arr[p] = x
+#define ___bpf_fill2(arr, p, x, args...) arr[p] = x; ___bpf_fill1(arr, p + 1, args)
+#define ___bpf_fill3(arr, p, x, args...) arr[p] = x; ___bpf_fill2(arr, p + 1, args)
+#define ___bpf_fill4(arr, p, x, args...) arr[p] = x; ___bpf_fill3(arr, p + 1, args)
+#define ___bpf_fill5(arr, p, x, args...) arr[p] = x; ___bpf_fill4(arr, p + 1, args)
+#define ___bpf_fill6(arr, p, x, args...) arr[p] = x; ___bpf_fill5(arr, p + 1, args)
+#define ___bpf_fill7(arr, p, x, args...) arr[p] = x; ___bpf_fill6(arr, p + 1, args)
+#define ___bpf_fill8(arr, p, x, args...) arr[p] = x; ___bpf_fill7(arr, p + 1, args)
+#define ___bpf_fill9(arr, p, x, args...) arr[p] = x; ___bpf_fill8(arr, p + 1, args)
+#define ___bpf_fill10(arr, p, x, args...) arr[p] = x; ___bpf_fill9(arr, p + 1, args)
+#define ___bpf_fill11(arr, p, x, args...) arr[p] = x; ___bpf_fill10(arr, p + 1, args)
+#define ___bpf_fill12(arr, p, x, args...) arr[p] = x; ___bpf_fill11(arr, p + 1, args)
+#define ___bpf_fill(arr, args...) \
+ ___bpf_apply(___bpf_fill, ___bpf_narg(args))(arr, 0, args)
+
/*
* BPF_SEQ_PRINTF to wrap bpf_seq_printf to-be-printed values
* in a structure.
*/
-#define BPF_SEQ_PRINTF(seq, fmt, args...) \
- ({ \
- _Pragma("GCC diagnostic push") \
- _Pragma("GCC diagnostic ignored \"-Wint-conversion\"") \
- static const char ___fmt[] = fmt; \
- unsigned long long ___param[] = { args }; \
- _Pragma("GCC diagnostic pop") \
- int ___ret = bpf_seq_printf(seq, ___fmt, sizeof(___fmt), \
- ___param, sizeof(___param)); \
- ___ret; \
- })
+#define BPF_SEQ_PRINTF(seq, fmt, args...) \
+({ \
+ static const char ___fmt[] = fmt; \
+ unsigned long long ___param[___bpf_narg(args)]; \
+ \
+ _Pragma("GCC diagnostic push") \
+ _Pragma("GCC diagnostic ignored \"-Wint-conversion\"") \
+ ___bpf_fill(___param, args); \
+ _Pragma("GCC diagnostic pop") \
+ \
+ bpf_seq_printf(seq, ___fmt, sizeof(___fmt), \
+ ___param, sizeof(___param)); \
+})
#endif
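Note: the BPF_SEQ_PRINTF rewrite above sizes the parameter array by the argument count and fills it element by element instead of using a brace initializer. A simplified sketch of that count-then-fill pattern; the macro names here are illustrative, not the libbpf-internal ___bpf_narg/___bpf_fill helpers:

    #include <stdio.h>

    #define NARG(...) NARG_(__VA_ARGS__, 3, 2, 1, 0)
    #define NARG_(_1, _2, _3, n, ...) n

    #define FILL1(arr, a)       do { arr[0] = (a); } while (0)
    #define FILL2(arr, a, b)    do { arr[0] = (a); arr[1] = (b); } while (0)
    #define FILL3(arr, a, b, c) do { arr[0] = (a); arr[1] = (b); arr[2] = (c); } while (0)

    int main(void)
    {
            unsigned long long param[NARG(10, 20)];  /* array sized by arg count */

            FILL2(param, 10, 20);
            printf("%zu elements: %llu %llu\n",
                   sizeof(param) / sizeof(param[0]), param[0], param[1]);
            return 0;
    }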
diff --git a/tools/lib/bpf/btf.h b/tools/lib/bpf/btf.h
index 5724724..9cabc8b 100644
--- a/tools/lib/bpf/btf.h
+++ b/tools/lib/bpf/btf.h
@@ -164,6 +164,7 @@ struct btf_dump_emit_type_decl_opts {
int indent_level;
/* strip all the const/volatile/restrict mods */
bool strip_mods;
+ size_t :0;
};
#define btf_dump_emit_type_decl_opts__last_field strip_mods
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index 6909ee8..57d10b7 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -507,6 +507,7 @@ struct xdp_link_info {
struct bpf_xdp_set_link_opts {
size_t sz;
int old_fd;
+ size_t :0;
};
#define bpf_xdp_set_link_opts__last_field old_fd
diff --git a/tools/lib/bpf/ringbuf.c b/tools/lib/bpf/ringbuf.c
index 98537ff..86c31c7 100644
--- a/tools/lib/bpf/ringbuf.c
+++ b/tools/lib/bpf/ringbuf.c
@@ -202,9 +202,11 @@ static inline int roundup_len(__u32 len)
return (len + 7) / 8 * 8;
}
-static int ringbuf_process_ring(struct ring* r)
+static int64_t ringbuf_process_ring(struct ring* r)
{
- int *len_ptr, len, err, cnt = 0;
+ int *len_ptr, len, err;
+ /* 64-bit to avoid overflow in case of extreme application behavior */
+ int64_t cnt = 0;
unsigned long cons_pos, prod_pos;
bool got_new_data;
void *sample;
@@ -227,7 +229,7 @@ static int ringbuf_process_ring(struct ring* r)
if ((len & BPF_RINGBUF_DISCARD_BIT) == 0) {
sample = (void *)len_ptr + BPF_RINGBUF_HDR_SZ;
err = r->sample_cb(r->ctx, sample, len);
- if (err) {
+ if (err < 0) {
/* update consumer pos and bail out */
smp_store_release(r->consumer_pos,
cons_pos);
@@ -244,12 +246,14 @@ static int ringbuf_process_ring(struct ring* r)
}
/* Consume available ring buffer(s) data without event polling.
- * Returns number of records consumed across all registered ring buffers, or
- * negative number if any of the callbacks return error.
+ * Returns number of records consumed across all registered ring buffers (or
+ * INT_MAX, whichever is less), or negative number if any of the callbacks
+ * return error.
*/
int ring_buffer__consume(struct ring_buffer *rb)
{
- int i, err, res = 0;
+ int64_t err, res = 0;
+ int i;
for (i = 0; i < rb->ring_cnt; i++) {
struct ring *ring = &rb->rings[i];
@@ -259,18 +263,24 @@ int ring_buffer__consume(struct ring_buffer *rb)
return err;
res += err;
}
+ if (res > INT_MAX)
+ return INT_MAX;
return res;
}
/* Poll for available data and consume records, if any are available.
- * Returns number of records consumed, or negative number, if any of the
- * registered callbacks returned error.
+ * Returns number of records consumed (or INT_MAX, whichever is less), or
+ * negative number, if any of the registered callbacks returned error.
*/
int ring_buffer__poll(struct ring_buffer *rb, int timeout_ms)
{
- int i, cnt, err, res = 0;
+ int i, cnt;
+ int64_t err, res = 0;
cnt = epoll_wait(rb->epoll_fd, rb->events, rb->ring_cnt, timeout_ms);
+ if (cnt < 0)
+ return -errno;
+
for (i = 0; i < cnt; i++) {
__u32 ring_id = rb->events[i].data.fd;
struct ring *ring = &rb->rings[ring_id];
@@ -280,5 +290,7 @@ int ring_buffer__poll(struct ring_buffer *rb, int timeout_ms)
return err;
res += err;
}
- return cnt < 0 ? -errno : res;
+ if (res > INT_MAX)
+ return INT_MAX;
+ return res;
}
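Note: the ringbuf.c change above accumulates per-ring record counts in a 64-bit value and only clamps to INT_MAX when converting back to the int return type. A short sketch of that overflow-avoidance pattern with made-up counts:

    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>

    static int total_records(const int64_t *per_ring, int n)
    {
            int64_t res = 0;

            for (int i = 0; i < n; i++)
                    res += per_ring[i];
            return res > INT_MAX ? INT_MAX : (int)res;
    }

    int main(void)
    {
            int64_t counts[] = { 2000000000, 2000000000 }; /* sum would overflow int */

            printf("%d\n", total_records(counts, 2)); /* INT_MAX, not a negative value */
            return 0;
    }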
diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
index f6e8831..7150e34 100644
--- a/tools/lib/bpf/xsk.c
+++ b/tools/lib/bpf/xsk.c
@@ -54,6 +54,8 @@ struct xsk_umem {
int fd;
int refcount;
struct list_head ctx_list;
+ bool rx_ring_setup_done;
+ bool tx_ring_setup_done;
};
struct xsk_ctx {
@@ -628,26 +630,30 @@ static struct xsk_ctx *xsk_get_ctx(struct xsk_umem *umem, int ifindex,
return NULL;
}
-static void xsk_put_ctx(struct xsk_ctx *ctx)
+static void xsk_put_ctx(struct xsk_ctx *ctx, bool unmap)
{
struct xsk_umem *umem = ctx->umem;
struct xdp_mmap_offsets off;
int err;
- if (--ctx->refcount == 0) {
- err = xsk_get_mmap_offsets(umem->fd, &off);
- if (!err) {
- munmap(ctx->fill->ring - off.fr.desc,
- off.fr.desc + umem->config.fill_size *
- sizeof(__u64));
- munmap(ctx->comp->ring - off.cr.desc,
- off.cr.desc + umem->config.comp_size *
- sizeof(__u64));
- }
+ if (--ctx->refcount)
+ return;
- list_del(&ctx->list);
- free(ctx);
- }
+ if (!unmap)
+ goto out_free;
+
+ err = xsk_get_mmap_offsets(umem->fd, &off);
+ if (err)
+ goto out_free;
+
+ munmap(ctx->fill->ring - off.fr.desc, off.fr.desc + umem->config.fill_size *
+ sizeof(__u64));
+ munmap(ctx->comp->ring - off.cr.desc, off.cr.desc + umem->config.comp_size *
+ sizeof(__u64));
+
+out_free:
+ list_del(&ctx->list);
+ free(ctx);
}
static struct xsk_ctx *xsk_create_ctx(struct xsk_socket *xsk,
@@ -682,8 +688,6 @@ static struct xsk_ctx *xsk_create_ctx(struct xsk_socket *xsk,
memcpy(ctx->ifname, ifname, IFNAMSIZ - 1);
ctx->ifname[IFNAMSIZ - 1] = '\0';
- umem->fill_save = NULL;
- umem->comp_save = NULL;
ctx->fill = fill;
ctx->comp = comp;
list_add(&ctx->list, &umem->ctx_list);
@@ -699,6 +703,7 @@ int xsk_socket__create_shared(struct xsk_socket **xsk_ptr,
struct xsk_ring_cons *comp,
const struct xsk_socket_config *usr_config)
{
+ bool unmap, rx_setup_done = false, tx_setup_done = false;
void *rx_map = NULL, *tx_map = NULL;
struct sockaddr_xdp sxdp = {};
struct xdp_mmap_offsets off;
@@ -709,6 +714,8 @@ int xsk_socket__create_shared(struct xsk_socket **xsk_ptr,
if (!umem || !xsk_ptr || !(rx || tx))
return -EFAULT;
+ unmap = umem->fill_save != fill;
+
xsk = calloc(1, sizeof(*xsk));
if (!xsk)
return -ENOMEM;
@@ -732,6 +739,8 @@ int xsk_socket__create_shared(struct xsk_socket **xsk_ptr,
}
} else {
xsk->fd = umem->fd;
+ rx_setup_done = umem->rx_ring_setup_done;
+ tx_setup_done = umem->tx_ring_setup_done;
}
ctx = xsk_get_ctx(umem, ifindex, queue_id);
@@ -750,7 +759,7 @@ int xsk_socket__create_shared(struct xsk_socket **xsk_ptr,
}
xsk->ctx = ctx;
- if (rx) {
+ if (rx && !rx_setup_done) {
err = setsockopt(xsk->fd, SOL_XDP, XDP_RX_RING,
&xsk->config.rx_size,
sizeof(xsk->config.rx_size));
@@ -758,8 +767,10 @@ int xsk_socket__create_shared(struct xsk_socket **xsk_ptr,
err = -errno;
goto out_put_ctx;
}
+ if (xsk->fd == umem->fd)
+ umem->rx_ring_setup_done = true;
}
- if (tx) {
+ if (tx && !tx_setup_done) {
err = setsockopt(xsk->fd, SOL_XDP, XDP_TX_RING,
&xsk->config.tx_size,
sizeof(xsk->config.tx_size));
@@ -767,6 +778,8 @@ int xsk_socket__create_shared(struct xsk_socket **xsk_ptr,
err = -errno;
goto out_put_ctx;
}
+ if (xsk->fd == umem->fd)
+ umem->tx_ring_setup_done = true;
}
err = xsk_get_mmap_offsets(xsk->fd, &off);
@@ -845,6 +858,8 @@ int xsk_socket__create_shared(struct xsk_socket **xsk_ptr,
}
*xsk_ptr = xsk;
+ umem->fill_save = NULL;
+ umem->comp_save = NULL;
return 0;
out_mmap_tx:
@@ -856,7 +871,7 @@ int xsk_socket__create_shared(struct xsk_socket **xsk_ptr,
munmap(rx_map, off.rx.desc +
xsk->config.rx_size * sizeof(struct xdp_desc));
out_put_ctx:
- xsk_put_ctx(ctx);
+ xsk_put_ctx(ctx, unmap);
out_socket:
if (--umem->refcount)
close(xsk->fd);
@@ -870,6 +885,9 @@ int xsk_socket__create(struct xsk_socket **xsk_ptr, const char *ifname,
struct xsk_ring_cons *rx, struct xsk_ring_prod *tx,
const struct xsk_socket_config *usr_config)
{
+ if (!umem)
+ return -EFAULT;
+
return xsk_socket__create_shared(xsk_ptr, ifname, queue_id, umem,
rx, tx, umem->fill_save,
umem->comp_save, usr_config);
@@ -919,7 +937,7 @@ void xsk_socket__delete(struct xsk_socket *xsk)
}
}
- xsk_put_ctx(ctx);
+ xsk_put_ctx(ctx, true);
umem->refcount--;
/* Do not close an fd that also has an associated umem connected
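Note: the xsk.c hunks above record, per shared umem, whether the RX and TX rings have already been configured, so a second socket sharing the same descriptor does not repeat the setsockopt() calls. A minimal sketch of that set-up-once flag pattern; all names below are illustrative:

    #include <stdbool.h>
    #include <stdio.h>

    struct shared_umem {
            bool rx_ring_setup_done;
            bool tx_ring_setup_done;
    };

    static int setup_rx_ring(struct shared_umem *umem, bool socket_owns_umem_fd)
    {
            if (umem->rx_ring_setup_done)
                    return 0;                     /* already configured by a sibling */

            printf("configuring RX ring\n");      /* stands in for setsockopt() */
            if (socket_owns_umem_fd)
                    umem->rx_ring_setup_done = true;
            return 0;
    }

    int main(void)
    {
            struct shared_umem umem = { 0 };

            setup_rx_ring(&umem, true);   /* prints once */
            setup_rx_ring(&umem, true);   /* skipped */
            return 0;
    }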
diff --git a/tools/lib/perf/include/perf/event.h b/tools/lib/perf/include/perf/event.h
index 988c539..4a24b85 100644
--- a/tools/lib/perf/include/perf/event.h
+++ b/tools/lib/perf/include/perf/event.h
@@ -8,6 +8,8 @@
#include <linux/bpf.h>
#include <sys/types.h> /* pid_t */
+#define event_contains(obj, mem) ((obj).header.size > offsetof(typeof(obj), mem))
+
struct perf_record_mmap {
struct perf_event_header header;
__u32 pid, tid;
@@ -336,8 +338,9 @@ struct perf_record_time_conv {
__u64 time_zero;
__u64 time_cycles;
__u64 time_mask;
- bool cap_user_time_zero;
- bool cap_user_time_short;
+ __u8 cap_user_time_zero;
+ __u8 cap_user_time_short;
+ __u8 reserved[6]; /* For alignment */
};
struct perf_record_header_feature {
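Note: the event_contains() macro added above lets readers of older perf.data files detect whether an extended field is actually present by comparing the recorded header.size against the field's offset. A user-space illustration with a deliberately simplified struct layout:

    #include <stddef.h>
    #include <stdio.h>

    struct header { unsigned short size; };

    struct time_conv {
            struct header header;
            unsigned long long time_zero;
            unsigned long long time_cycles;   /* extension, may be absent on disk */
    };

    #define event_contains(obj, mem) ((obj).header.size > offsetof(typeof(obj), mem))

    int main(void)
    {
            struct time_conv older = { .header = { offsetof(struct time_conv, time_cycles) } };
            struct time_conv newer = { .header = { sizeof(struct time_conv) } };

            printf("old event has time_cycles: %d\n", (int)event_contains(older, time_cycles)); /* 0 */
            printf("new event has time_cycles: %d\n", (int)event_contains(newer, time_cycles)); /* 1 */
            return 0;
    }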
diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
index ce8516e..2abbd75 100644
--- a/tools/perf/Makefile.config
+++ b/tools/perf/Makefile.config
@@ -530,6 +530,7 @@
ifdef LIBBPF_DYNAMIC
ifeq ($(feature-libbpf), 1)
EXTLIBS += -lbpf
+ $(call detected,CONFIG_LIBBPF_DYNAMIC)
else
dummy := $(error Error: No libbpf devel library found, please install libbpf-devel);
endif
diff --git a/tools/perf/builtin-ftrace.c b/tools/perf/builtin-ftrace.c
index 9366fad..eecc70f 100644
--- a/tools/perf/builtin-ftrace.c
+++ b/tools/perf/builtin-ftrace.c
@@ -289,7 +289,7 @@ static int set_tracing_pid(struct perf_ftrace *ftrace)
for (i = 0; i < perf_thread_map__nr(ftrace->evlist->core.threads); i++) {
scnprintf(buf, sizeof(buf), "%d",
- ftrace->evlist->core.threads->map[i]);
+ perf_thread_map__pid(ftrace->evlist->core.threads, i));
if (append_tracing_file("set_ftrace_pid", buf) < 0)
return -1;
}
diff --git a/tools/perf/builtin-inject.c b/tools/perf/builtin-inject.c
index 0462dc8..5320ac1 100644
--- a/tools/perf/builtin-inject.c
+++ b/tools/perf/builtin-inject.c
@@ -904,7 +904,7 @@ int cmd_inject(int argc, const char **argv)
}
data.path = inject.input_name;
- inject.session = perf_session__new(&data, true, &inject.tool);
+ inject.session = perf_session__new(&data, inject.output.is_pipe, &inject.tool);
if (IS_ERR(inject.session))
return PTR_ERR(inject.session);
diff --git a/tools/perf/pmu-events/arch/x86/amdzen1/cache.json b/tools/perf/pmu-events/arch/x86/amdzen1/cache.json
index 4ea7ec4..008f168 100644
--- a/tools/perf/pmu-events/arch/x86/amdzen1/cache.json
+++ b/tools/perf/pmu-events/arch/x86/amdzen1/cache.json
@@ -275,7 +275,7 @@
{
"EventName": "l2_pf_hit_l2",
"EventCode": "0x70",
- "BriefDescription": "L2 prefetch hit in L2.",
+ "BriefDescription": "L2 prefetch hit in L2. Use l2_cache_hits_from_l2_hwpf instead.",
"UMask": "0xff"
},
{
diff --git a/tools/perf/pmu-events/arch/x86/amdzen1/recommended.json b/tools/perf/pmu-events/arch/x86/amdzen1/recommended.json
index 2cfe2d2..3c95454 100644
--- a/tools/perf/pmu-events/arch/x86/amdzen1/recommended.json
+++ b/tools/perf/pmu-events/arch/x86/amdzen1/recommended.json
@@ -79,10 +79,10 @@
"UMask": "0x70"
},
{
- "MetricName": "l2_cache_hits_from_l2_hwpf",
+ "EventName": "l2_cache_hits_from_l2_hwpf",
+ "EventCode": "0x70",
"BriefDescription": "L2 Cache Hits from L2 HWPF",
- "MetricExpr": "l2_pf_hit_l2 + l2_pf_miss_l2_hit_l3 + l2_pf_miss_l2_l3",
- "MetricGroup": "l2_cache"
+ "UMask": "0xff"
},
{
"EventName": "l3_accesses",
diff --git a/tools/perf/pmu-events/arch/x86/amdzen2/cache.json b/tools/perf/pmu-events/arch/x86/amdzen2/cache.json
index f61b982..8ba84a4 100644
--- a/tools/perf/pmu-events/arch/x86/amdzen2/cache.json
+++ b/tools/perf/pmu-events/arch/x86/amdzen2/cache.json
@@ -205,7 +205,7 @@
{
"EventName": "l2_pf_hit_l2",
"EventCode": "0x70",
- "BriefDescription": "L2 prefetch hit in L2.",
+ "BriefDescription": "L2 prefetch hit in L2. Use l2_cache_hits_from_l2_hwpf instead.",
"UMask": "0xff"
},
{
diff --git a/tools/perf/pmu-events/arch/x86/amdzen2/recommended.json b/tools/perf/pmu-events/arch/x86/amdzen2/recommended.json
index 2ef91e2..1c624cee 100644
--- a/tools/perf/pmu-events/arch/x86/amdzen2/recommended.json
+++ b/tools/perf/pmu-events/arch/x86/amdzen2/recommended.json
@@ -79,10 +79,10 @@
"UMask": "0x70"
},
{
- "MetricName": "l2_cache_hits_from_l2_hwpf",
+ "EventName": "l2_cache_hits_from_l2_hwpf",
+ "EventCode": "0x70",
"BriefDescription": "L2 Cache Hits from L2 HWPF",
- "MetricExpr": "l2_pf_hit_l2 + l2_pf_miss_l2_hit_l3 + l2_pf_miss_l2_l3",
- "MetricGroup": "l2_cache"
+ "UMask": "0xff"
},
{
"EventName": "l3_accesses",
diff --git a/tools/perf/pmu-events/jevents.c b/tools/perf/pmu-events/jevents.c
index e47644c..dcfdf6a 100644
--- a/tools/perf/pmu-events/jevents.c
+++ b/tools/perf/pmu-events/jevents.c
@@ -894,7 +894,7 @@ static int get_maxfds(void)
struct rlimit rlim;
if (getrlimit(RLIMIT_NOFILE, &rlim) == 0)
- return min((int)rlim.rlim_max / 2, 512);
+ return min(rlim.rlim_max / 2, (rlim_t)512);
return 512;
}
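Note: the jevents.c fix above performs the comparison in rlim_t instead of truncating rlim_max to int first; with an "unlimited" RLIMIT_NOFILE, the truncation can produce a bogus (even negative) file-descriptor budget. A small sketch of the difference, using uint64_t as a stand-in for rlim_t:

    #include <stdint.h>
    #include <stdio.h>

    #define MIN(a, b) ((a) < (b) ? (a) : (b))

    int main(void)
    {
            uint64_t rlim_max = UINT64_MAX;  /* stand-in for RLIM_INFINITY */

            int broken = MIN((int)(rlim_max / 2), 512);         /* truncates before comparing */
            int fixed  = (int)MIN(rlim_max / 2, (uint64_t)512); /* compares in 64 bits */

            printf("broken=%d fixed=%d\n", broken, fixed); /* broken is typically -1, fixed is 512 */
            return 0;
    }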
diff --git a/tools/perf/python/tracepoint.py b/tools/perf/python/tracepoint.py
index eb76f65..461848c 100755
--- a/tools/perf/python/tracepoint.py
+++ b/tools/perf/python/tracepoint.py
@@ -1,4 +1,4 @@
-#! /usr/bin/python
+#! /usr/bin/env python
# SPDX-License-Identifier: GPL-2.0
# -*- python -*-
# -*- coding: utf-8 -*-
diff --git a/tools/perf/python/twatch.py b/tools/perf/python/twatch.py
index ff87ccf..04f3db2 100755
--- a/tools/perf/python/twatch.py
+++ b/tools/perf/python/twatch.py
@@ -1,4 +1,4 @@
-#! /usr/bin/python
+#! /usr/bin/env python
# SPDX-License-Identifier: GPL-2.0-only
# -*- python -*-
# -*- coding: utf-8 -*-
diff --git a/tools/perf/scripts/python/exported-sql-viewer.py b/tools/perf/scripts/python/exported-sql-viewer.py
index 7daa8bb..711d4f9 100755
--- a/tools/perf/scripts/python/exported-sql-viewer.py
+++ b/tools/perf/scripts/python/exported-sql-viewer.py
@@ -91,6 +91,11 @@
from __future__ import print_function
import sys
+# Only change warnings if the python -W option was not used
+if not sys.warnoptions:
+ import warnings
+ # PySide2 causes deprecation warnings, ignore them.
+ warnings.filterwarnings("ignore", category=DeprecationWarning)
import argparse
import weakref
import threading
@@ -125,8 +130,9 @@
from PySide.QtGui import *
from PySide.QtSql import *
-from decimal import *
-from ctypes import *
+from decimal import Decimal, ROUND_HALF_UP
+from ctypes import CDLL, Structure, create_string_buffer, addressof, sizeof, \
+ c_void_p, c_bool, c_byte, c_char, c_int, c_uint, c_longlong, c_ulonglong
from multiprocessing import Process, Array, Value, Event
# xrange is range in Python3
@@ -3868,7 +3874,7 @@
if with_hdr:
model = indexes[0].model()
for col in range(min_col, max_col + 1):
- val = model.headerData(col, Qt.Horizontal)
+ val = model.headerData(col, Qt.Horizontal, Qt.DisplayRole)
if as_csv:
text += sep + ToCSValue(val)
sep = ","
diff --git a/tools/perf/trace/beauty/fsconfig.sh b/tools/perf/trace/beauty/fsconfig.sh
index 83fb24d..bc6ef7b 100755
--- a/tools/perf/trace/beauty/fsconfig.sh
+++ b/tools/perf/trace/beauty/fsconfig.sh
@@ -10,8 +10,7 @@
linux_mount=${linux_header_dir}/mount.h
printf "static const char *fsconfig_cmds[] = {\n"
-regex='^[[:space:]]*+FSCONFIG_([[:alnum:]_]+)[[:space:]]*=[[:space:]]*([[:digit:]]+)[[:space:]]*,[[:space:]]*.*'
-egrep $regex ${linux_mount} | \
- sed -r "s/$regex/\2 \1/g" | \
- xargs printf "\t[%s] = \"%s\",\n"
+ms='[[:space:]]*'
+sed -nr "s/^${ms}FSCONFIG_([[:alnum:]_]+)${ms}=${ms}([[:digit:]]+)${ms},.*/\t[\2] = \"\1\",/p" \
+ ${linux_mount}
printf "};\n"
diff --git a/tools/perf/util/Build b/tools/perf/util/Build
index e2563d0..0cf2735 100644
--- a/tools/perf/util/Build
+++ b/tools/perf/util/Build
@@ -140,7 +140,14 @@
perf-$(CONFIG_LIBELF) += probe-file.o
perf-$(CONFIG_LIBELF) += probe-event.o
+ifdef CONFIG_LIBBPF_DYNAMIC
+ hashmap := 1
+endif
ifndef CONFIG_LIBBPF
+ hashmap := 1
+endif
+
+ifdef hashmap
perf-y += hashmap.o
endif
diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c
index d8ada6a..d3c15b5 100644
--- a/tools/perf/util/auxtrace.c
+++ b/tools/perf/util/auxtrace.c
@@ -636,7 +636,7 @@ int auxtrace_parse_snapshot_options(struct auxtrace_record *itr,
break;
}
- if (itr)
+ if (itr && itr->parse_snapshot_options)
return itr->parse_snapshot_options(itr, opts, str);
pr_err("No AUX area tracing to snapshot\n");
diff --git a/tools/perf/util/block-info.c b/tools/perf/util/block-info.c
index 423ec69..5ecd4f4 100644
--- a/tools/perf/util/block-info.c
+++ b/tools/perf/util/block-info.c
@@ -201,7 +201,7 @@ static int block_total_cycles_pct_entry(struct perf_hpp_fmt *fmt,
double ratio = 0.0;
if (block_fmt->total_cycles)
- ratio = (double)bi->cycles / (double)block_fmt->total_cycles;
+ ratio = (double)bi->cycles_aggr / (double)block_fmt->total_cycles;
return color_pct(hpp, block_fmt->width, 100.0 * ratio);
}
@@ -216,9 +216,9 @@ static int64_t block_total_cycles_pct_sort(struct perf_hpp_fmt *fmt,
double l, r;
if (block_fmt->total_cycles) {
- l = ((double)bi_l->cycles /
+ l = ((double)bi_l->cycles_aggr /
(double)block_fmt->total_cycles) * 100000.0;
- r = ((double)bi_r->cycles /
+ r = ((double)bi_r->cycles_aggr /
(double)block_fmt->total_cycles) * 100000.0;
return (int64_t)l - (int64_t)r;
}
diff --git a/tools/perf/util/data.c b/tools/perf/util/data.c
index c47aa34..5d97b3e 100644
--- a/tools/perf/util/data.c
+++ b/tools/perf/util/data.c
@@ -35,7 +35,7 @@ void perf_data__close_dir(struct perf_data *data)
int perf_data__create_dir(struct perf_data *data, int nr)
{
struct perf_data_file *files = NULL;
- int i, ret = -1;
+ int i, ret;
if (WARN_ON(!data->is_dir))
return -EINVAL;
@@ -51,7 +51,8 @@ int perf_data__create_dir(struct perf_data *data, int nr)
for (i = 0; i < nr; i++) {
struct perf_data_file *file = &files[i];
- if (asprintf(&file->path, "%s/data.%d", data->path, i) < 0)
+ ret = asprintf(&file->path, "%s/data.%d", data->path, i);
+ if (ret < 0)
goto out_err;
ret = open(file->path, O_RDWR|O_CREAT|O_TRUNC, S_IRUSR|S_IWUSR);
diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
index 197eb58..e6029d4 100644
--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
@@ -1120,6 +1120,8 @@ static bool intel_pt_fup_event(struct intel_pt_decoder *decoder)
decoder->set_fup_tx_flags = false;
decoder->tx_flags = decoder->fup_tx_flags;
decoder->state.type = INTEL_PT_TRANSACTION;
+ if (decoder->fup_tx_flags & INTEL_PT_ABORT_TX)
+ decoder->state.type |= INTEL_PT_BRANCH;
decoder->state.from_ip = decoder->ip;
decoder->state.to_ip = 0;
decoder->state.flags = decoder->fup_tx_flags;
@@ -1194,8 +1196,10 @@ static int intel_pt_walk_fup(struct intel_pt_decoder *decoder)
return 0;
if (err == -EAGAIN ||
intel_pt_fup_with_nlip(decoder, &intel_pt_insn, ip, err)) {
+ bool no_tip = decoder->pkt_state != INTEL_PT_STATE_FUP;
+
decoder->pkt_state = INTEL_PT_STATE_IN_SYNC;
- if (intel_pt_fup_event(decoder))
+ if (intel_pt_fup_event(decoder) && no_tip)
return 0;
return -EAGAIN;
}
diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
index dc023b8..e5aaf13 100644
--- a/tools/perf/util/intel-pt.c
+++ b/tools/perf/util/intel-pt.c
@@ -647,8 +647,10 @@ static int intel_pt_walk_next_insn(struct intel_pt_insn *intel_pt_insn,
*ip += intel_pt_insn->length;
- if (to_ip && *ip == to_ip)
+ if (to_ip && *ip == to_ip) {
+ intel_pt_insn->length = 0;
goto out_no_cache;
+ }
if (*ip >= al.map->end)
break;
@@ -1131,6 +1133,7 @@ static void intel_pt_set_pid_tid_cpu(struct intel_pt *pt,
static void intel_pt_sample_flags(struct intel_pt_queue *ptq)
{
+ ptq->insn_len = 0;
if (ptq->state->flags & INTEL_PT_ABORT_TX) {
ptq->flags = PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_TX_ABORT;
} else if (ptq->state->flags & INTEL_PT_ASYNC) {
diff --git a/tools/perf/util/jitdump.c b/tools/perf/util/jitdump.c
index 055bab7..64d8f9b 100644
--- a/tools/perf/util/jitdump.c
+++ b/tools/perf/util/jitdump.c
@@ -369,21 +369,31 @@ jit_inject_event(struct jit_buf_desc *jd, union perf_event *event)
static uint64_t convert_timestamp(struct jit_buf_desc *jd, uint64_t timestamp)
{
- struct perf_tsc_conversion tc;
+ struct perf_tsc_conversion tc = { .time_shift = 0, };
+ struct perf_record_time_conv *time_conv = &jd->session->time_conv;
if (!jd->use_arch_timestamp)
return timestamp;
- tc.time_shift = jd->session->time_conv.time_shift;
- tc.time_mult = jd->session->time_conv.time_mult;
- tc.time_zero = jd->session->time_conv.time_zero;
- tc.time_cycles = jd->session->time_conv.time_cycles;
- tc.time_mask = jd->session->time_conv.time_mask;
- tc.cap_user_time_zero = jd->session->time_conv.cap_user_time_zero;
- tc.cap_user_time_short = jd->session->time_conv.cap_user_time_short;
+ tc.time_shift = time_conv->time_shift;
+ tc.time_mult = time_conv->time_mult;
+ tc.time_zero = time_conv->time_zero;
- if (!tc.cap_user_time_zero)
- return 0;
+ /*
+ * The TIME_CONV event was extended with the fields starting at
+ * "time_cycles" when cap_user_time_short support was added. For
+ * backward compatibility, check the event size and assign these
+ * extended fields only if they are contained in the event.
+ */
+ if (event_contains(*time_conv, time_cycles)) {
+ tc.time_cycles = time_conv->time_cycles;
+ tc.time_mask = time_conv->time_mask;
+ tc.cap_user_time_zero = time_conv->cap_user_time_zero;
+ tc.cap_user_time_short = time_conv->cap_user_time_short;
+
+ if (!tc.cap_user_time_zero)
+ return 0;
+ }
return tsc_to_perf_time(timestamp, &tc);
}
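
A minimal, self-contained C sketch of the size-based compatibility check used in the jitdump change above: an event record carries its own size, so a reader built against the extended layout touches the appended fields only when the record is large enough to contain them. The struct and macro names below are illustrative only, not the perf-internal definitions.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    struct time_conv_record {
    	uint16_t size;		/* total record size written by the producer */
    	uint64_t time_shift;
    	uint64_t time_mult;
    	uint64_t time_zero;
    	/* fields below were appended in a later format revision */
    	uint64_t time_cycles;
    	uint64_t time_mask;
    };

    /* True if the record is large enough to include member 'm'. */
    #define record_contains(rec, m) ((rec).size > offsetof(struct time_conv_record, m))

    int main(void)
    {
    	struct time_conv_record old = { .size = offsetof(struct time_conv_record, time_cycles) };
    	struct time_conv_record cur = { .size = sizeof(struct time_conv_record), .time_cycles = 7 };

    	printf("old record has time_cycles: %d\n", record_contains(old, time_cycles)); /* 0 */
    	printf("new record has time_cycles: %d\n", record_contains(cur, time_cycles)); /* 1 */
    	return 0;
    }
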
diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
index f44ede4..f4d44f7 100644
--- a/tools/perf/util/map.c
+++ b/tools/perf/util/map.c
@@ -77,8 +77,7 @@ static inline bool replace_android_lib(const char *filename, char *newfilename)
if (strstarts(filename, "/system/lib/")) {
char *ndk, *app;
const char *arch;
- size_t ndk_length;
- size_t app_length;
+ int ndk_length, app_length;
ndk = getenv("NDK_ROOT");
app = getenv("APP_PLATFORM");
@@ -106,8 +105,8 @@ static inline bool replace_android_lib(const char *filename, char *newfilename)
if (new_length > PATH_MAX)
return false;
snprintf(newfilename, new_length,
- "%s/platforms/%s/arch-%s/usr/lib/%s",
- ndk, app, arch, libname);
+ "%.*s/platforms/%.*s/arch-%s/usr/lib/%s",
+ ndk_length, ndk, app_length, app, arch, libname);
return true;
}
@@ -837,15 +836,18 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
int maps__clone(struct thread *thread, struct maps *parent)
{
struct maps *maps = thread->maps;
- int err = -ENOMEM;
+ int err;
struct map *map;
down_read(&parent->lock);
maps__for_each_entry(parent, map) {
struct map *new = map__clone(map);
- if (new == NULL)
+
+ if (new == NULL) {
+ err = -ENOMEM;
goto out_unlock;
+ }
err = unwind__prepare_access(maps, new, NULL);
if (err)
diff --git a/tools/perf/util/session.c b/tools/perf/util/session.c
index 22098ff..63b6190 100644
--- a/tools/perf/util/session.c
+++ b/tools/perf/util/session.c
@@ -945,6 +945,19 @@ static void perf_event__stat_round_swap(union perf_event *event,
event->stat_round.time = bswap_64(event->stat_round.time);
}
+static void perf_event__time_conv_swap(union perf_event *event,
+ bool sample_id_all __maybe_unused)
+{
+ event->time_conv.time_shift = bswap_64(event->time_conv.time_shift);
+ event->time_conv.time_mult = bswap_64(event->time_conv.time_mult);
+ event->time_conv.time_zero = bswap_64(event->time_conv.time_zero);
+
+ if (event_contains(event->time_conv, time_cycles)) {
+ event->time_conv.time_cycles = bswap_64(event->time_conv.time_cycles);
+ event->time_conv.time_mask = bswap_64(event->time_conv.time_mask);
+ }
+}
+
typedef void (*perf_event__swap_op)(union perf_event *event,
bool sample_id_all);
@@ -981,7 +994,7 @@ static perf_event__swap_op perf_event__swap_ops[] = {
[PERF_RECORD_STAT] = perf_event__stat_swap,
[PERF_RECORD_STAT_ROUND] = perf_event__stat_round_swap,
[PERF_RECORD_EVENT_UPDATE] = perf_event__event_update_swap,
- [PERF_RECORD_TIME_CONV] = perf_event__all64_swap,
+ [PERF_RECORD_TIME_CONV] = perf_event__time_conv_swap,
[PERF_RECORD_HEADER_MAX] = NULL,
};
diff --git a/tools/perf/util/symbol_fprintf.c b/tools/perf/util/symbol_fprintf.c
index 35c936c..2664fb6 100644
--- a/tools/perf/util/symbol_fprintf.c
+++ b/tools/perf/util/symbol_fprintf.c
@@ -68,7 +68,7 @@ size_t dso__fprintf_symbols_by_name(struct dso *dso,
for (nd = rb_first_cached(&dso->symbol_names); nd; nd = rb_next(nd)) {
pos = rb_entry(nd, struct symbol_name_rb_node, rb_node);
- fprintf(fp, "%s\n", pos->sym.name);
+ ret += fprintf(fp, "%s\n", pos->sym.name);
}
return ret;
diff --git a/tools/perf/util/unwind-libdw.c b/tools/perf/util/unwind-libdw.c
index 7a3dbc2..a74b517 100644
--- a/tools/perf/util/unwind-libdw.c
+++ b/tools/perf/util/unwind-libdw.c
@@ -20,10 +20,24 @@
static char *debuginfo_path;
+static int __find_debuginfo(Dwfl_Module *mod __maybe_unused, void **userdata,
+ const char *modname __maybe_unused, Dwarf_Addr base __maybe_unused,
+ const char *file_name, const char *debuglink_file __maybe_unused,
+ GElf_Word debuglink_crc __maybe_unused, char **debuginfo_file_name)
+{
+ const struct dso *dso = *userdata;
+
+ assert(dso);
+ if (dso->symsrc_filename && strcmp(file_name, dso->symsrc_filename))
+ *debuginfo_file_name = strdup(dso->symsrc_filename);
+ return -1;
+}
+
static const Dwfl_Callbacks offline_callbacks = {
- .find_debuginfo = dwfl_standard_find_debuginfo,
+ .find_debuginfo = __find_debuginfo,
.debuginfo_path = &debuginfo_path,
.section_address = dwfl_offline_section_address,
+ // .find_elf is not set as we use dwfl_report_elf() instead.
};
static int __report_module(struct addr_location *al, u64 ip,
@@ -53,9 +67,22 @@ static int __report_module(struct addr_location *al, u64 ip,
}
if (!mod)
- mod = dwfl_report_elf(ui->dwfl, dso->short_name,
- (dso->symsrc_filename ? dso->symsrc_filename : dso->long_name), -1, al->map->start - al->map->pgoff,
- false);
+ mod = dwfl_report_elf(ui->dwfl, dso->short_name, dso->long_name, -1,
+ al->map->start - al->map->pgoff, false);
+ if (!mod) {
+ char filename[PATH_MAX];
+
+ if (dso__build_id_filename(dso, filename, sizeof(filename), false))
+ mod = dwfl_report_elf(ui->dwfl, dso->short_name, filename, -1,
+ al->map->start - al->map->pgoff, false);
+ }
+
+ if (mod) {
+ void **userdatap;
+
+ dwfl_module_info(mod, &userdatap, NULL, NULL, NULL, NULL, NULL, NULL);
+ *userdatap = dso;
+ }
return mod && dwfl_addrmodule(ui->dwfl, ip) == mod ? 0 : -1;
}
diff --git a/tools/power/x86/intel-speed-select/isst-display.c b/tools/power/x86/intel-speed-select/isst-display.c
index e105fec..f32ce036 100644
--- a/tools/power/x86/intel-speed-select/isst-display.c
+++ b/tools/power/x86/intel-speed-select/isst-display.c
@@ -25,10 +25,14 @@ static void printcpulist(int str_len, char *str, int mask_size,
index = snprintf(&str[curr_index],
str_len - curr_index, ",");
curr_index += index;
+ if (curr_index >= str_len)
+ break;
}
index = snprintf(&str[curr_index], str_len - curr_index, "%d",
i);
curr_index += index;
+ if (curr_index >= str_len)
+ break;
first = 0;
}
}
@@ -64,10 +68,14 @@ static void printcpumask(int str_len, char *str, int mask_size,
index = snprintf(&str[curr_index], str_len - curr_index, "%08x",
mask[i]);
curr_index += index;
+ if (curr_index >= str_len)
+ break;
if (i) {
strncat(&str[curr_index], ",", str_len - curr_index);
curr_index++;
}
+ if (curr_index >= str_len)
+ break;
}
free(mask);
@@ -185,7 +193,7 @@ static void _isst_pbf_display_information(int cpu, FILE *outf, int level,
int disp_level)
{
char header[256];
- char value[256];
+ char value[512];
snprintf(header, sizeof(header), "speed-select-base-freq-properties");
format_and_print(outf, disp_level, header, NULL);
@@ -349,7 +357,7 @@ void isst_ctdp_display_information(int cpu, FILE *outf, int tdp_level,
struct isst_pkg_ctdp *pkg_dev)
{
char header[256];
- char value[256];
+ char value[512];
static int level;
int i;
diff --git a/tools/power/x86/intel_pstate_tracer/intel_pstate_tracer.py b/tools/power/x86/intel_pstate_tracer/intel_pstate_tracer.py
index 3c47865..e15e206 100755
--- a/tools/power/x86/intel_pstate_tracer/intel_pstate_tracer.py
+++ b/tools/power/x86/intel_pstate_tracer/intel_pstate_tracer.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
# SPDX-License-Identifier: GPL-2.0-only
# -*- coding: utf-8 -*-
#
diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
index f3a1746..424ed19 100644
--- a/tools/power/x86/turbostat/turbostat.c
+++ b/tools/power/x86/turbostat/turbostat.c
@@ -291,13 +291,16 @@ struct msr_sum_array {
/* The percpu MSR sum array.*/
struct msr_sum_array *per_cpu_msr_sum;
-int idx_to_offset(int idx)
+off_t idx_to_offset(int idx)
{
- int offset;
+ off_t offset;
switch (idx) {
case IDX_PKG_ENERGY:
- offset = MSR_PKG_ENERGY_STATUS;
+ if (do_rapl & RAPL_AMD_F17H)
+ offset = MSR_PKG_ENERGY_STAT;
+ else
+ offset = MSR_PKG_ENERGY_STATUS;
break;
case IDX_DRAM_ENERGY:
offset = MSR_DRAM_ENERGY_STATUS;
@@ -320,12 +323,13 @@ int idx_to_offset(int idx)
return offset;
}
-int offset_to_idx(int offset)
+int offset_to_idx(off_t offset)
{
int idx;
switch (offset) {
case MSR_PKG_ENERGY_STATUS:
+ case MSR_PKG_ENERGY_STAT:
idx = IDX_PKG_ENERGY;
break;
case MSR_DRAM_ENERGY_STATUS:
@@ -353,7 +357,7 @@ int idx_valid(int idx)
{
switch (idx) {
case IDX_PKG_ENERGY:
- return do_rapl & RAPL_PKG;
+ return do_rapl & (RAPL_PKG | RAPL_AMD_F17H);
case IDX_DRAM_ENERGY:
return do_rapl & RAPL_DRAM;
case IDX_PP0_ENERGY:
@@ -3245,7 +3249,7 @@ static int update_msr_sum(struct thread_data *t, struct core_data *c, struct pkg
for (i = IDX_PKG_ENERGY; i < IDX_COUNT; i++) {
unsigned long long msr_cur, msr_last;
- int offset;
+ off_t offset;
if (!idx_valid(i))
continue;
@@ -3254,7 +3258,8 @@ static int update_msr_sum(struct thread_data *t, struct core_data *c, struct pkg
continue;
ret = get_msr(cpu, offset, &msr_cur);
if (ret) {
- fprintf(outf, "Can not update msr(0x%x)\n", offset);
+ fprintf(outf, "Can not update msr(0x%llx)\n",
+ (unsigned long long)offset);
continue;
}
@@ -4790,33 +4795,12 @@ double discover_bclk(unsigned int family, unsigned int model)
* below this value, including the Digital Thermal Sensor (DTS),
* Package Thermal Management Sensor (PTM), and thermal event thresholds.
*/
-int read_tcc_activation_temp()
-{
- unsigned long long msr;
- unsigned int tcc, target_c, offset_c;
-
- /* Temperature Target MSR is Nehalem and newer only */
- if (!do_nhm_platform_info)
- return 0;
-
- if (get_msr(base_cpu, MSR_IA32_TEMPERATURE_TARGET, &msr))
- return 0;
-
- target_c = (msr >> 16) & 0xFF;
-
- offset_c = (msr >> 24) & 0xF;
-
- tcc = target_c - offset_c;
-
- if (!quiet)
- fprintf(outf, "cpu%d: MSR_IA32_TEMPERATURE_TARGET: 0x%08llx (%d C) (%d default - %d offset)\n",
- base_cpu, msr, tcc, target_c, offset_c);
-
- return tcc;
-}
-
int set_temperature_target(struct thread_data *t, struct core_data *c, struct pkg_data *p)
{
+ unsigned long long msr;
+ unsigned int target_c_local;
+ int cpu;
+
/* tcc_activation_temp is used only for dts or ptm */
if (!(do_dts || do_ptm))
return 0;
@@ -4825,18 +4809,43 @@ int set_temperature_target(struct thread_data *t, struct core_data *c, struct pk
if (!(t->flags & CPU_IS_FIRST_THREAD_IN_CORE) || !(t->flags & CPU_IS_FIRST_CORE_IN_PACKAGE))
return 0;
+ cpu = t->cpu_id;
+ if (cpu_migrate(cpu)) {
+ fprintf(outf, "Could not migrate to CPU %d\n", cpu);
+ return -1;
+ }
+
if (tcc_activation_temp_override != 0) {
tcc_activation_temp = tcc_activation_temp_override;
- fprintf(outf, "Using cmdline TCC Target (%d C)\n", tcc_activation_temp);
+ fprintf(outf, "cpu%d: Using cmdline TCC Target (%d C)\n",
+ cpu, tcc_activation_temp);
return 0;
}
- tcc_activation_temp = read_tcc_activation_temp();
- if (tcc_activation_temp)
- return 0;
+ /* Temperature Target MSR is Nehalem and newer only */
+ if (!do_nhm_platform_info)
+ goto guess;
+ if (get_msr(base_cpu, MSR_IA32_TEMPERATURE_TARGET, &msr))
+ goto guess;
+
+ target_c_local = (msr >> 16) & 0xFF;
+
+ if (!quiet)
+ fprintf(outf, "cpu%d: MSR_IA32_TEMPERATURE_TARGET: 0x%08llx (%d C)\n",
+ cpu, msr, target_c_local);
+
+ if (!target_c_local)
+ goto guess;
+
+ tcc_activation_temp = target_c_local;
+
+ return 0;
+
+guess:
tcc_activation_temp = TJMAX_DEFAULT;
- fprintf(outf, "Guessing tjMax %d C, Please use -T to specify\n", tcc_activation_temp);
+ fprintf(outf, "cpu%d: Guessing tjMax %d C, Please use -T to specify\n",
+ cpu, tcc_activation_temp);
return 0;
}
diff --git a/tools/testing/ktest/compare-ktest-sample.pl b/tools/testing/ktest/compare-ktest-sample.pl
index 4118eb4..ebea21d 100755
--- a/tools/testing/ktest/compare-ktest-sample.pl
+++ b/tools/testing/ktest/compare-ktest-sample.pl
@@ -1,4 +1,4 @@
-#!/usr/bin/perl
+#!/usr/bin/env perl
# SPDX-License-Identifier: GPL-2.0
open (IN,"ktest.pl");
diff --git a/tools/testing/kunit/kunit.py b/tools/testing/kunit/kunit.py
index d4f7846..21516e2 100755
--- a/tools/testing/kunit/kunit.py
+++ b/tools/testing/kunit/kunit.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python3
+#!/usr/bin/env python3
# SPDX-License-Identifier: GPL-2.0
#
# A thin wrapper on top of the KUnit Kernel
diff --git a/tools/testing/kunit/kunit_config.py b/tools/testing/kunit/kunit_config.py
index 02ffc3a..b30e9d6 100644
--- a/tools/testing/kunit/kunit_config.py
+++ b/tools/testing/kunit/kunit_config.py
@@ -12,7 +12,7 @@
CONFIG_IS_NOT_SET_PATTERN = r'^# CONFIG_(\w+) is not set$'
CONFIG_PATTERN = r'^CONFIG_(\w+)=(\S+|".*")$'
-KconfigEntryBase = collections.namedtuple('KconfigEntry', ['name', 'value'])
+KconfigEntryBase = collections.namedtuple('KconfigEntryBase', ['name', 'value'])
class KconfigEntry(KconfigEntryBase):
diff --git a/tools/testing/kunit/kunit_tool_test.py b/tools/testing/kunit/kunit_tool_test.py
index 3fbe1ac..9a036e9 100755
--- a/tools/testing/kunit/kunit_tool_test.py
+++ b/tools/testing/kunit/kunit_tool_test.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python3
+#!/usr/bin/env python3
# SPDX-License-Identifier: GPL-2.0
#
# A collection of tests for tools/testing/kunit/kunit.py
diff --git a/tools/testing/radix-tree/idr-test.c b/tools/testing/radix-tree/idr-test.c
index 3b796dd..6ce7460 100644
--- a/tools/testing/radix-tree/idr-test.c
+++ b/tools/testing/radix-tree/idr-test.c
@@ -301,16 +301,20 @@ void idr_find_test_1(int anchor_id, int throbber_id)
pthread_t throbber;
time_t start = time(NULL);
- pthread_create(&throbber, NULL, idr_throbber, &throbber_id);
-
BUG_ON(idr_alloc(&find_idr, xa_mk_value(anchor_id), anchor_id,
anchor_id + 1, GFP_KERNEL) != anchor_id);
+ pthread_create(&throbber, NULL, idr_throbber, &throbber_id);
+
+ rcu_read_lock();
do {
int id = 0;
void *entry = idr_get_next(&find_idr, &id);
+ rcu_read_unlock();
BUG_ON(entry != xa_mk_value(id));
+ rcu_read_lock();
} while (time(NULL) < start + 11);
+ rcu_read_unlock();
pthread_join(throbber, NULL);
@@ -577,6 +581,7 @@ void ida_tests(void)
int __weak main(void)
{
+ rcu_register_thread();
radix_tree_init();
idr_checks();
ida_tests();
@@ -584,5 +589,6 @@ int __weak main(void)
rcu_barrier();
if (nr_allocated)
printf("nr_allocated = %d\n", nr_allocated);
+ rcu_unregister_thread();
return 0;
}
diff --git a/tools/testing/radix-tree/multiorder.c b/tools/testing/radix-tree/multiorder.c
index 9eae0fb..e00520c 100644
--- a/tools/testing/radix-tree/multiorder.c
+++ b/tools/testing/radix-tree/multiorder.c
@@ -224,7 +224,9 @@ void multiorder_checks(void)
int __weak main(void)
{
+ rcu_register_thread();
radix_tree_init();
multiorder_checks();
+ rcu_unregister_thread();
return 0;
}
diff --git a/tools/testing/radix-tree/xarray.c b/tools/testing/radix-tree/xarray.c
index e61e43e..f20e12c 100644
--- a/tools/testing/radix-tree/xarray.c
+++ b/tools/testing/radix-tree/xarray.c
@@ -25,11 +25,13 @@ void xarray_tests(void)
int __weak main(void)
{
+ rcu_register_thread();
radix_tree_init();
xarray_tests();
radix_tree_cpu_dead(1);
rcu_barrier();
if (nr_allocated)
printf("nr_allocated = %d\n", nr_allocated);
+ rcu_unregister_thread();
return 0;
}
diff --git a/tools/testing/selftests/arm64/fp/sve-test.S b/tools/testing/selftests/arm64/fp/sve-test.S
index f95074c..07f14e2 100644
--- a/tools/testing/selftests/arm64/fp/sve-test.S
+++ b/tools/testing/selftests/arm64/fp/sve-test.S
@@ -284,16 +284,28 @@
// Set up test pattern in the FFR
// x0: pid
// x2: generation
+//
+// We need to generate a canonical FFR value, which consists of a number of
+// low "1" bits, followed by a number of zeros. This gives us 17 unique values
+// per 16 bits of FFR, so we create a 4 bit signature out of the PID and
+// generation, and use that as the initial number of ones in the pattern.
+// We fill the upper lanes of FFR with zeros.
// Beware: corrupts P0.
function setup_ffr
mov x4, x30
- bl pattern
+ and w0, w0, #0x3
+ bfi w0, w2, #2, #2
+ mov w1, #1
+ lsl w1, w1, w0
+ sub w1, w1, #1
+
ldr x0, =ffrref
- ldr x1, =scratch
- rdvl x2, #1
- lsr x2, x2, #3
- bl memcpy
+ strh w1, [x0], 2
+ rdvl x1, #1
+ lsr x1, x1, #3
+ sub x1, x1, #2
+ bl memclr
mov x0, #0
ldr x1, =ffrref
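
A small C sketch of the canonical FFR value construction that the new setup_ffr code above performs in assembly (illustrative only): the low two bits of the PID and of the generation form a 4-bit signature, which becomes the count of low set bits in the first 16-bit FFR element; everything above that is zeroed.

    #include <stdint.h>
    #include <stdio.h>

    /* Count of low "1" bits = 4-bit signature built from PID and generation. */
    static uint16_t ffr_first_element(unsigned int pid, unsigned int generation)
    {
    	unsigned int sig = (pid & 0x3) | ((generation & 0x3) << 2);	/* 0..15 */

    	return (uint16_t)((1u << sig) - 1);	/* 'sig' low bits set */
    }

    int main(void)
    {
    	/* pid = 5, generation = 2 -> signature 9 -> 0x01ff */
    	printf("0x%04x\n", ffr_first_element(5, 2));
    	return 0;
    }
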
diff --git a/tools/testing/selftests/arm64/mte/Makefile b/tools/testing/selftests/arm64/mte/Makefile
index 2480226..4084ef1 100644
--- a/tools/testing/selftests/arm64/mte/Makefile
+++ b/tools/testing/selftests/arm64/mte/Makefile
@@ -6,9 +6,7 @@
PROGS := $(patsubst %.c,%,$(SRCS))
#Add mte compiler option
-ifneq ($(shell $(CC) --version 2>&1 | head -n 1 | grep gcc),)
CFLAGS += -march=armv8.5-a+memtag
-endif
#check if the compiler works well
mte_cc_support := $(shell if ($(CC) $(CFLAGS) -E -x c /dev/null -o /dev/null 2>&1) then echo "1"; fi)
diff --git a/tools/testing/selftests/arm64/mte/mte_common_util.c b/tools/testing/selftests/arm64/mte/mte_common_util.c
index 39f8908..70665ba 100644
--- a/tools/testing/selftests/arm64/mte/mte_common_util.c
+++ b/tools/testing/selftests/arm64/mte/mte_common_util.c
@@ -278,22 +278,13 @@ int mte_switch_mode(int mte_option, unsigned long incl_mask)
return 0;
}
-#define ID_AA64PFR1_MTE_SHIFT 8
-#define ID_AA64PFR1_MTE 2
-
int mte_default_setup(void)
{
- unsigned long hwcaps = getauxval(AT_HWCAP);
+ unsigned long hwcaps2 = getauxval(AT_HWCAP2);
unsigned long en = 0;
int ret;
- if (!(hwcaps & HWCAP_CPUID)) {
- ksft_print_msg("FAIL: CPUID registers unavailable\n");
- return KSFT_FAIL;
- }
- /* Read ID_AA64PFR1_EL1 register */
- asm volatile("mrs %0, id_aa64pfr1_el1" : "=r"(hwcaps) : : "memory");
- if (((hwcaps >> ID_AA64PFR1_MTE_SHIFT) & MT_TAG_MASK) != ID_AA64PFR1_MTE) {
+ if (!(hwcaps2 & HWCAP2_MTE)) {
ksft_print_msg("FAIL: MTE features unavailable\n");
return KSFT_SKIP;
}
diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 9359377..b5322d6 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -196,7 +196,7 @@
$(call msg,MKDIR,,$@)
$(Q)mkdir -p $@
-$(INCLUDE_DIR)/vmlinux.h: $(VMLINUX_BTF) | $(BPFTOOL) $(INCLUDE_DIR)
+$(INCLUDE_DIR)/vmlinux.h: $(VMLINUX_BTF) $(BPFTOOL) | $(INCLUDE_DIR)
ifeq ($(VMLINUX_H),)
$(call msg,GEN,,$@)
$(Q)$(BPFTOOL) btf dump file $(VMLINUX_BTF) format c > $@
@@ -333,7 +333,8 @@
$(TRUNNER_BPF_SKELS): $(TRUNNER_OUTPUT)/%.skel.h: \
$(TRUNNER_OUTPUT)/%.o \
- | $(BPFTOOL) $(TRUNNER_OUTPUT)
+ $(BPFTOOL) \
+ | $(TRUNNER_OUTPUT)
$$(call msg,GEN-SKEL,$(TRUNNER_BINARY),$$@)
$(Q)$$(BPFTOOL) gen skeleton $$< > $$@
endif
diff --git a/tools/testing/selftests/bpf/prog_tests/core_reloc.c b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
index 30e40ff..8b641c30 100644
--- a/tools/testing/selftests/bpf/prog_tests/core_reloc.c
+++ b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
@@ -185,11 +185,6 @@ static int duration = 0;
.bpf_obj_file = "test_core_reloc_existence.o", \
.btf_src_file = "btf__core_reloc_" #name ".o" \
-#define FIELD_EXISTS_ERR_CASE(name) { \
- FIELD_EXISTS_CASE_COMMON(name), \
- .fails = true, \
-}
-
#define BITFIELDS_CASE_COMMON(objfile, test_name_prefix, name) \
.case_name = test_name_prefix#name, \
.bpf_obj_file = objfile, \
@@ -197,7 +192,7 @@ static int duration = 0;
#define BITFIELDS_CASE(name, ...) { \
BITFIELDS_CASE_COMMON("test_core_reloc_bitfields_probed.o", \
- "direct:", name), \
+ "probed:", name), \
.input = STRUCT_TO_CHAR_PTR(core_reloc_##name) __VA_ARGS__, \
.input_len = sizeof(struct core_reloc_##name), \
.output = STRUCT_TO_CHAR_PTR(core_reloc_bitfields_output) \
@@ -205,7 +200,7 @@ static int duration = 0;
.output_len = sizeof(struct core_reloc_bitfields_output), \
}, { \
BITFIELDS_CASE_COMMON("test_core_reloc_bitfields_direct.o", \
- "probed:", name), \
+ "direct:", name), \
.input = STRUCT_TO_CHAR_PTR(core_reloc_##name) __VA_ARGS__, \
.input_len = sizeof(struct core_reloc_##name), \
.output = STRUCT_TO_CHAR_PTR(core_reloc_bitfields_output) \
@@ -500,8 +495,7 @@ static struct core_reloc_test_case test_cases[] = {
ARRAYS_ERR_CASE(arrays___err_too_small),
ARRAYS_ERR_CASE(arrays___err_too_shallow),
ARRAYS_ERR_CASE(arrays___err_non_array),
- ARRAYS_ERR_CASE(arrays___err_wrong_val_type1),
- ARRAYS_ERR_CASE(arrays___err_wrong_val_type2),
+ ARRAYS_ERR_CASE(arrays___err_wrong_val_type),
ARRAYS_ERR_CASE(arrays___err_bad_zero_sz_arr),
/* enum/ptr/int handling scenarios */
@@ -592,13 +586,25 @@ static struct core_reloc_test_case test_cases[] = {
},
.output_len = sizeof(struct core_reloc_existence_output),
},
-
- FIELD_EXISTS_ERR_CASE(existence__err_int_sz),
- FIELD_EXISTS_ERR_CASE(existence__err_int_type),
- FIELD_EXISTS_ERR_CASE(existence__err_int_kind),
- FIELD_EXISTS_ERR_CASE(existence__err_arr_kind),
- FIELD_EXISTS_ERR_CASE(existence__err_arr_value_type),
- FIELD_EXISTS_ERR_CASE(existence__err_struct_type),
+ {
+ FIELD_EXISTS_CASE_COMMON(existence___wrong_field_defs),
+ .input = STRUCT_TO_CHAR_PTR(core_reloc_existence___wrong_field_defs) {
+ },
+ .input_len = sizeof(struct core_reloc_existence___wrong_field_defs),
+ .output = STRUCT_TO_CHAR_PTR(core_reloc_existence_output) {
+ .a_exists = 0,
+ .b_exists = 0,
+ .c_exists = 0,
+ .arr_exists = 0,
+ .s_exists = 0,
+ .a_value = 0xff000001u,
+ .b_value = 0xff000002u,
+ .c_value = 0xff000003u,
+ .arr_value = 0xff000004u,
+ .s_value = 0xff000005u,
+ },
+ .output_len = sizeof(struct core_reloc_existence_output),
+ },
/* bitfield relocation checks */
BITFIELDS_CASE(bitfields, {
@@ -804,13 +810,20 @@ void test_core_reloc(void)
"prog '%s' not found\n", probe_name))
goto cleanup;
+
+ if (test_case->btf_src_file) {
+ err = access(test_case->btf_src_file, R_OK);
+ if (!ASSERT_OK(err, "btf_src_file"))
+ goto cleanup;
+ }
+
load_attr.obj = obj;
load_attr.log_level = 0;
load_attr.target_btf_path = test_case->btf_src_file;
err = bpf_object__load_xattr(&load_attr);
if (err) {
if (!test_case->fails)
- CHECK(false, "obj_load", "failed to load prog '%s': %d\n", probe_name, err);
+ ASSERT_OK(err, "obj_load");
goto cleanup;
}
@@ -844,10 +857,8 @@ void test_core_reloc(void)
goto cleanup;
}
- if (test_case->fails) {
- CHECK(false, "obj_load_fail", "should fail to load prog '%s'\n", probe_name);
+ if (!ASSERT_FALSE(test_case->fails, "obj_load_should_fail"))
goto cleanup;
- }
equal = memcmp(data->out, test_case->output,
test_case->output_len) == 0;
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_arr_kind.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_arr_kind.c
deleted file mode 100644
index dd0ffa5..0000000
--- a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_arr_kind.c
+++ /dev/null
@@ -1,3 +0,0 @@
-#include "core_reloc_types.h"
-
-void f(struct core_reloc_existence___err_wrong_arr_kind x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_arr_value_type.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_arr_value_type.c
deleted file mode 100644
index bc83372..0000000
--- a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_arr_value_type.c
+++ /dev/null
@@ -1,3 +0,0 @@
-#include "core_reloc_types.h"
-
-void f(struct core_reloc_existence___err_wrong_arr_value_type x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_int_kind.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_int_kind.c
deleted file mode 100644
index 917bec4..0000000
--- a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_int_kind.c
+++ /dev/null
@@ -1,3 +0,0 @@
-#include "core_reloc_types.h"
-
-void f(struct core_reloc_existence___err_wrong_int_kind x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_int_sz.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_int_sz.c
deleted file mode 100644
index 6ec7e6e..0000000
--- a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_int_sz.c
+++ /dev/null
@@ -1,3 +0,0 @@
-#include "core_reloc_types.h"
-
-void f(struct core_reloc_existence___err_wrong_int_sz x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_int_type.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_int_type.c
deleted file mode 100644
index 7bbcacf..0000000
--- a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_int_type.c
+++ /dev/null
@@ -1,3 +0,0 @@
-#include "core_reloc_types.h"
-
-void f(struct core_reloc_existence___err_wrong_int_type x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_struct_type.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_struct_type.c
deleted file mode 100644
index f384dd3..0000000
--- a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_struct_type.c
+++ /dev/null
@@ -1,3 +0,0 @@
-#include "core_reloc_types.h"
-
-void f(struct core_reloc_existence___err_wrong_struct_type x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___wrong_field_defs.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___wrong_field_defs.c
new file mode 100644
index 0000000..d14b496
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___wrong_field_defs.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_existence___wrong_field_defs x) {}
diff --git a/tools/testing/selftests/bpf/progs/core_reloc_types.h b/tools/testing/selftests/bpf/progs/core_reloc_types.h
index e6e616c..af58ef9 100644
--- a/tools/testing/selftests/bpf/progs/core_reloc_types.h
+++ b/tools/testing/selftests/bpf/progs/core_reloc_types.h
@@ -683,27 +683,11 @@ struct core_reloc_existence___minimal {
int a;
};
-struct core_reloc_existence___err_wrong_int_sz {
- short a;
-};
-
-struct core_reloc_existence___err_wrong_int_type {
+struct core_reloc_existence___wrong_field_defs {
+ void *a;
int b[1];
-};
-
-struct core_reloc_existence___err_wrong_int_kind {
struct{ int x; } c;
-};
-
-struct core_reloc_existence___err_wrong_arr_kind {
int arr;
-};
-
-struct core_reloc_existence___err_wrong_arr_value_type {
- short arr[1];
-};
-
-struct core_reloc_existence___err_wrong_struct_type {
int s;
};
diff --git a/tools/testing/selftests/bpf/test_offload.py b/tools/testing/selftests/bpf/test_offload.py
index b99bb8e..edaffd4 100755
--- a/tools/testing/selftests/bpf/test_offload.py
+++ b/tools/testing/selftests/bpf/test_offload.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python3
+#!/usr/bin/env python3
# Copyright (C) 2017 Netronome Systems, Inc.
# Copyright (c) 2019 Mellanox Technologies. All rights reserved
diff --git a/tools/testing/selftests/bpf/verifier/array_access.c b/tools/testing/selftests/bpf/verifier/array_access.c
index 1b138cd..1b1c798 100644
--- a/tools/testing/selftests/bpf/verifier/array_access.c
+++ b/tools/testing/selftests/bpf/verifier/array_access.c
@@ -186,7 +186,7 @@
},
.fixup_map_hash_48b = { 3 },
.errstr_unpriv = "R0 leaks addr",
- .errstr = "invalid access to map value, value_size=48 off=44 size=8",
+ .errstr = "R0 unbounded memory access",
.result_unpriv = REJECT,
.result = REJECT,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
diff --git a/tools/testing/selftests/drivers/net/mlxsw/mirror_gre_scale.sh b/tools/testing/selftests/drivers/net/mlxsw/mirror_gre_scale.sh
index 6f3a70d..e004357 100644
--- a/tools/testing/selftests/drivers/net/mlxsw/mirror_gre_scale.sh
+++ b/tools/testing/selftests/drivers/net/mlxsw/mirror_gre_scale.sh
@@ -120,12 +120,13 @@
sleep 5
for ((i = 0; i < count; ++i)); do
+ local sip=$(mirror_gre_ipv6_addr 1 $i)::1
local dip=$(mirror_gre_ipv6_addr 1 $i)::2
local htun=h3-gt6-$i
local message
icmp6_capture_install $htun
- mirror_test v$h1 "" $dip $htun 100 10
+ mirror_test v$h1 $sip $dip $htun 100 10
icmp6_capture_uninstall $htun
done
}
diff --git a/tools/testing/selftests/drivers/net/mlxsw/sch_red_core.sh b/tools/testing/selftests/drivers/net/mlxsw/sch_red_core.sh
index b0cb1aa..33ddd01 100644
--- a/tools/testing/selftests/drivers/net/mlxsw/sch_red_core.sh
+++ b/tools/testing/selftests/drivers/net/mlxsw/sch_red_core.sh
@@ -507,8 +507,8 @@
check_err $? "backlog $backlog / $limit Got $pct% marked packets, expected == 0."
local diff=$((limit - backlog))
pct=$((100 * diff / limit))
- ((0 <= pct && pct <= 5))
- check_err $? "backlog $backlog / $limit expected <= 5% distance"
+ ((0 <= pct && pct <= 10))
+ check_err $? "backlog $backlog / $limit expected <= 10% distance"
log_test "TC $((vlan - 10)): RED backlog > limit"
stop_traffic
diff --git a/tools/testing/selftests/drivers/net/mlxsw/sharedbuffer_configuration.py b/tools/testing/selftests/drivers/net/mlxsw/sharedbuffer_configuration.py
index 0d4b932..2223337 100755
--- a/tools/testing/selftests/drivers/net/mlxsw/sharedbuffer_configuration.py
+++ b/tools/testing/selftests/drivers/net/mlxsw/sharedbuffer_configuration.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
# SPDX-License-Identifier: GPL-2.0
import subprocess
diff --git a/tools/testing/selftests/drivers/net/mlxsw/tc_flower_scale.sh b/tools/testing/selftests/drivers/net/mlxsw/tc_flower_scale.sh
index cc0f07e..aa74be9 100644
--- a/tools/testing/selftests/drivers/net/mlxsw/tc_flower_scale.sh
+++ b/tools/testing/selftests/drivers/net/mlxsw/tc_flower_scale.sh
@@ -98,11 +98,7 @@
jq -r '[ .[] | select(.kind == "flower") |
.options | .in_hw ]' | jq .[] | wc -l)
[[ $((offload_count - 1)) -eq $count ]]
- if [[ $should_fail -eq 0 ]]; then
- check_err $? "Offload mismatch"
- else
- check_err_fail $should_fail $? "Offload more than expacted"
- fi
+ check_err_fail $should_fail $? "Attempt to offload $count rules (actual result $((offload_count - 1)))"
}
tc_flower_test()
diff --git a/tools/testing/selftests/exec/Makefile b/tools/testing/selftests/exec/Makefile
index cf69b2f..dd61118 100644
--- a/tools/testing/selftests/exec/Makefile
+++ b/tools/testing/selftests/exec/Makefile
@@ -28,8 +28,8 @@
cp $< $@
chmod -x $@
$(OUTPUT)/load_address_4096: load_address.c
- $(CC) $(CFLAGS) $(LDFLAGS) -Wl,-z,max-page-size=0x1000 -pie $< -o $@
+ $(CC) $(CFLAGS) $(LDFLAGS) -Wl,-z,max-page-size=0x1000 -pie -static $< -o $@
$(OUTPUT)/load_address_2097152: load_address.c
- $(CC) $(CFLAGS) $(LDFLAGS) -Wl,-z,max-page-size=0x200000 -pie $< -o $@
+ $(CC) $(CFLAGS) $(LDFLAGS) -Wl,-z,max-page-size=0x200000 -pie -static $< -o $@
$(OUTPUT)/load_address_16777216: load_address.c
- $(CC) $(CFLAGS) $(LDFLAGS) -Wl,-z,max-page-size=0x1000000 -pie $< -o $@
+ $(CC) $(CFLAGS) $(LDFLAGS) -Wl,-z,max-page-size=0x1000000 -pie -static $< -o $@
diff --git a/tools/testing/selftests/gpio/Makefile b/tools/testing/selftests/gpio/Makefile
index 32bdc97..acf4088 100644
--- a/tools/testing/selftests/gpio/Makefile
+++ b/tools/testing/selftests/gpio/Makefile
@@ -11,22 +11,24 @@
TEST_PROGS := gpio-mockup.sh
TEST_FILES := gpio-mockup-sysfs.sh
-TEST_PROGS_EXTENDED := gpio-mockup-chardev
-
-GPIODIR := $(realpath ../../../gpio)
-GPIOOBJ := gpio-utils.o
-
-all: $(TEST_PROGS_EXTENDED)
-
-override define CLEAN
- $(RM) $(TEST_PROGS_EXTENDED)
- $(MAKE) -C $(GPIODIR) OUTPUT=$(GPIODIR)/ clean
-endef
+TEST_GEN_PROGS_EXTENDED := gpio-mockup-chardev
KSFT_KHDR_INSTALL := 1
include ../lib.mk
-$(TEST_PROGS_EXTENDED): $(GPIODIR)/$(GPIOOBJ)
+GPIODIR := $(realpath ../../../gpio)
+GPIOOUT := $(OUTPUT)/tools-gpio/
+GPIOOBJ := $(GPIOOUT)/gpio-utils.o
-$(GPIODIR)/$(GPIOOBJ):
- $(MAKE) OUTPUT=$(GPIODIR)/ -C $(GPIODIR)
+override define CLEAN
+ $(RM) $(TEST_GEN_PROGS_EXTENDED)
+ $(RM) -rf $(GPIOOUT)
+endef
+
+$(TEST_GEN_PROGS_EXTENDED): $(GPIOOBJ)
+
+$(GPIOOUT):
+ mkdir -p $@
+
+$(GPIOOBJ): $(GPIOOUT)
+ $(MAKE) OUTPUT=$(GPIOOUT) -C $(GPIODIR)
diff --git a/tools/testing/selftests/kselftest/prefix.pl b/tools/testing/selftests/kselftest/prefix.pl
index 31f7c2a..12a7f4c 100755
--- a/tools/testing/selftests/kselftest/prefix.pl
+++ b/tools/testing/selftests/kselftest/prefix.pl
@@ -1,4 +1,4 @@
-#!/usr/bin/perl
+#!/usr/bin/env perl
# SPDX-License-Identifier: GPL-2.0
# Prefix all lines with "# ", unbuffered. Command being piped in may need
# to have unbuffering forced with "stdbuf -i0 -o0 -e0 $cmd".
diff --git a/tools/testing/selftests/lib.mk b/tools/testing/selftests/lib.mk
index a5ce26d..0af84ad 100644
--- a/tools/testing/selftests/lib.mk
+++ b/tools/testing/selftests/lib.mk
@@ -1,6 +1,10 @@
# This mimics the top-level Makefile. We do it explicitly here so that this
# Makefile can operate with or without the kbuild infrastructure.
+ifneq ($(LLVM),)
+CC := clang
+else
CC := $(CROSS_COMPILE)gcc
+endif
ifeq (0,$(MAKELEVEL))
ifeq ($(OUTPUT),)
@@ -74,7 +78,8 @@
rsync -aq $(TEST_PROGS) $(TEST_PROGS_EXTENDED) $(TEST_FILES) $(OUTPUT); \
fi
@if [ "X$(TEST_PROGS)" != "X" ]; then \
- $(call RUN_TESTS, $(TEST_GEN_PROGS) $(TEST_CUSTOM_PROGS) $(OUTPUT)/$(TEST_PROGS)) ; \
+ $(call RUN_TESTS, $(TEST_GEN_PROGS) $(TEST_CUSTOM_PROGS) \
+ $(addprefix $(OUTPUT)/,$(TEST_PROGS))) ; \
else \
$(call RUN_TESTS, $(TEST_GEN_PROGS) $(TEST_CUSTOM_PROGS)); \
fi
diff --git a/tools/testing/selftests/net/devlink_port_split.py b/tools/testing/selftests/net/devlink_port_split.py
index 58bb7e9..834066d 100755
--- a/tools/testing/selftests/net/devlink_port_split.py
+++ b/tools/testing/selftests/net/devlink_port_split.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python3
+#!/usr/bin/env python3
# SPDX-License-Identifier: GPL-2.0
from subprocess import PIPE, Popen
diff --git a/tools/testing/selftests/net/forwarding/mirror_gre_vlan_bridge_1q.sh b/tools/testing/selftests/net/forwarding/mirror_gre_vlan_bridge_1q.sh
index c02291e..880e3ab 100755
--- a/tools/testing/selftests/net/forwarding/mirror_gre_vlan_bridge_1q.sh
+++ b/tools/testing/selftests/net/forwarding/mirror_gre_vlan_bridge_1q.sh
@@ -271,7 +271,7 @@
while ((RET == 0)); do
bridge fdb del dev $swp3 $h3mac vlan 555 master 2>/dev/null
- bridge fdb add dev $swp2 $h3mac vlan 555 master
+ bridge fdb add dev $swp2 $h3mac vlan 555 master static
sleep 1
fail_test_span_gre_dir $tundev ingress
diff --git a/tools/testing/selftests/net/forwarding/mirror_lib.sh b/tools/testing/selftests/net/forwarding/mirror_lib.sh
index 13db1cb..6406cd7 100644
--- a/tools/testing/selftests/net/forwarding/mirror_lib.sh
+++ b/tools/testing/selftests/net/forwarding/mirror_lib.sh
@@ -20,6 +20,13 @@
tc filter del dev $swp1 $direction pref 1000
}
+is_ipv6()
+{
+ local addr=$1; shift
+
+ [[ -z ${addr//[0-9a-fA-F:]/} ]]
+}
+
mirror_test()
{
local vrf_name=$1; shift
@@ -29,9 +36,17 @@
local pref=$1; shift
local expect=$1; shift
+ if is_ipv6 $dip; then
+ local proto=-6
+ local type="icmp6 type=128" # Echo request.
+ else
+ local proto=
+ local type="icmp echoreq"
+ fi
+
local t0=$(tc_rule_stats_get $dev $pref)
- $MZ $vrf_name ${sip:+-A $sip} -B $dip -a own -b bc -q \
- -c 10 -d 100msec -t icmp type=8
+ $MZ $proto $vrf_name ${sip:+-A $sip} -B $dip -a own -b bc -q \
+ -c 10 -d 100msec -t $type
sleep 0.5
local t1=$(tc_rule_stats_get $dev $pref)
local delta=$((t1 - t0))
diff --git a/tools/testing/selftests/powerpc/security/entry_flush.c b/tools/testing/selftests/powerpc/security/entry_flush.c
index 78cf914..68ce377 100644
--- a/tools/testing/selftests/powerpc/security/entry_flush.c
+++ b/tools/testing/selftests/powerpc/security/entry_flush.c
@@ -53,7 +53,7 @@ int entry_flush_test(void)
entry_flush = entry_flush_orig;
- fd = perf_event_open_counter(PERF_TYPE_RAW, /* L1d miss */ 0x400f0, -1);
+ fd = perf_event_open_counter(PERF_TYPE_HW_CACHE, PERF_L1D_READ_MISS_CONFIG, -1);
FAIL_IF(fd < 0);
p = (char *)memalign(zero_size, CACHELINE_SIZE);
diff --git a/tools/testing/selftests/powerpc/security/flush_utils.h b/tools/testing/selftests/powerpc/security/flush_utils.h
index 07a5eb3..7a3d602 100644
--- a/tools/testing/selftests/powerpc/security/flush_utils.h
+++ b/tools/testing/selftests/powerpc/security/flush_utils.h
@@ -9,6 +9,10 @@
#define CACHELINE_SIZE 128
+#define PERF_L1D_READ_MISS_CONFIG ((PERF_COUNT_HW_CACHE_L1D) | \
+ (PERF_COUNT_HW_CACHE_OP_READ << 8) | \
+ (PERF_COUNT_HW_CACHE_RESULT_MISS << 16))
+
void syscall_loop(char *p, unsigned long iterations,
unsigned long zero_size);
diff --git a/tools/testing/selftests/powerpc/security/rfi_flush.c b/tools/testing/selftests/powerpc/security/rfi_flush.c
index 7565fd7..f73484a 100644
--- a/tools/testing/selftests/powerpc/security/rfi_flush.c
+++ b/tools/testing/selftests/powerpc/security/rfi_flush.c
@@ -54,7 +54,7 @@ int rfi_flush_test(void)
rfi_flush = rfi_flush_orig;
- fd = perf_event_open_counter(PERF_TYPE_RAW, /* L1d miss */ 0x400f0, -1);
+ fd = perf_event_open_counter(PERF_TYPE_HW_CACHE, PERF_L1D_READ_MISS_CONFIG, -1);
FAIL_IF(fd < 0);
p = (char *)memalign(zero_size, CACHELINE_SIZE);
diff --git a/tools/testing/selftests/resctrl/Makefile b/tools/testing/selftests/resctrl/Makefile
index d585cc1..6bcee2e 100644
--- a/tools/testing/selftests/resctrl/Makefile
+++ b/tools/testing/selftests/resctrl/Makefile
@@ -1,5 +1,5 @@
CC = $(CROSS_COMPILE)gcc
-CFLAGS = -g -Wall
+CFLAGS = -g -Wall -O2 -D_FORTIFY_SOURCE=2
SRCS=$(wildcard *.c)
OBJS=$(SRCS:.c=.o)
diff --git a/tools/testing/selftests/resctrl/cache.c b/tools/testing/selftests/resctrl/cache.c
index 38dbf49..5922cc1 100644
--- a/tools/testing/selftests/resctrl/cache.c
+++ b/tools/testing/selftests/resctrl/cache.c
@@ -182,7 +182,7 @@ int measure_cache_vals(struct resctrl_val_param *param, int bm_pid)
/*
* Measure cache miss from perf.
*/
- if (!strcmp(param->resctrl_val, "cat")) {
+ if (!strncmp(param->resctrl_val, CAT_STR, sizeof(CAT_STR))) {
ret = get_llc_perf(&llc_perf_miss);
if (ret < 0)
return ret;
@@ -192,7 +192,7 @@ int measure_cache_vals(struct resctrl_val_param *param, int bm_pid)
/*
* Measure llc occupancy from resctrl.
*/
- if (!strcmp(param->resctrl_val, "cqm")) {
+ if (!strncmp(param->resctrl_val, CQM_STR, sizeof(CQM_STR))) {
ret = get_llc_occu_resctrl(&llc_occu_resc);
if (ret < 0)
return ret;
@@ -234,7 +234,7 @@ int cat_val(struct resctrl_val_param *param)
if (ret)
return ret;
- if ((strcmp(resctrl_val, "cat") == 0)) {
+ if (!strncmp(resctrl_val, CAT_STR, sizeof(CAT_STR))) {
ret = initialize_llc_perf();
if (ret)
return ret;
@@ -242,7 +242,7 @@ int cat_val(struct resctrl_val_param *param)
/* Test runs until the callback setup() tells the test to stop. */
while (1) {
- if (strcmp(resctrl_val, "cat") == 0) {
+ if (!strncmp(resctrl_val, CAT_STR, sizeof(CAT_STR))) {
ret = param->setup(1, param);
if (ret) {
ret = 0;
diff --git a/tools/testing/selftests/resctrl/cat_test.c b/tools/testing/selftests/resctrl/cat_test.c
index 5da4376..20823725 100644
--- a/tools/testing/selftests/resctrl/cat_test.c
+++ b/tools/testing/selftests/resctrl/cat_test.c
@@ -17,10 +17,10 @@
#define MAX_DIFF_PERCENT 4
#define MAX_DIFF 1000000
-int count_of_bits;
-char cbm_mask[256];
-unsigned long long_mask;
-unsigned long cache_size;
+static int count_of_bits;
+static char cbm_mask[256];
+static unsigned long long_mask;
+static unsigned long cache_size;
/*
* Change schemata. Write schemata to specified
@@ -136,7 +136,7 @@ int cat_perf_miss_val(int cpu_no, int n, char *cache_type)
return -1;
/* Get default cbm mask for L3/L2 cache */
- ret = get_cbm_mask(cache_type);
+ ret = get_cbm_mask(cache_type, cbm_mask);
if (ret)
return ret;
@@ -164,7 +164,7 @@ int cat_perf_miss_val(int cpu_no, int n, char *cache_type)
return -1;
struct resctrl_val_param param = {
- .resctrl_val = "cat",
+ .resctrl_val = CAT_STR,
.cpu_no = cpu_no,
.mum_resctrlfs = 0,
.setup = cat_setup,
diff --git a/tools/testing/selftests/resctrl/cqm_test.c b/tools/testing/selftests/resctrl/cqm_test.c
index c875615..271752e 100644
--- a/tools/testing/selftests/resctrl/cqm_test.c
+++ b/tools/testing/selftests/resctrl/cqm_test.c
@@ -16,10 +16,10 @@
#define MAX_DIFF 2000000
#define MAX_DIFF_PERCENT 15
-int count_of_bits;
-char cbm_mask[256];
-unsigned long long_mask;
-unsigned long cache_size;
+static int count_of_bits;
+static char cbm_mask[256];
+static unsigned long long_mask;
+static unsigned long cache_size;
static int cqm_setup(int num, ...)
{
@@ -86,7 +86,7 @@ static int check_results(struct resctrl_val_param *param, int no_of_bits)
return errno;
}
- while (fgets(temp, 1024, fp)) {
+ while (fgets(temp, sizeof(temp), fp)) {
char *token = strtok(temp, ":\t");
int fields = 0;
@@ -125,7 +125,7 @@ int cqm_resctrl_val(int cpu_no, int n, char **benchmark_cmd)
if (!validate_resctrl_feature_request("cqm"))
return -1;
- ret = get_cbm_mask("L3");
+ ret = get_cbm_mask("L3", cbm_mask);
if (ret)
return ret;
@@ -145,7 +145,7 @@ int cqm_resctrl_val(int cpu_no, int n, char **benchmark_cmd)
}
struct resctrl_val_param param = {
- .resctrl_val = "cqm",
+ .resctrl_val = CQM_STR,
.ctrlgrp = "c1",
.mongrp = "m1",
.cpu_no = cpu_no,
diff --git a/tools/testing/selftests/resctrl/fill_buf.c b/tools/testing/selftests/resctrl/fill_buf.c
index 79c611c9..51e5cf2 100644
--- a/tools/testing/selftests/resctrl/fill_buf.c
+++ b/tools/testing/selftests/resctrl/fill_buf.c
@@ -115,7 +115,7 @@ static int fill_cache_read(unsigned char *start_ptr, unsigned char *end_ptr,
while (1) {
ret = fill_one_span_read(start_ptr, end_ptr);
- if (!strcmp(resctrl_val, "cat"))
+ if (!strncmp(resctrl_val, CAT_STR, sizeof(CAT_STR)))
break;
}
@@ -134,7 +134,7 @@ static int fill_cache_write(unsigned char *start_ptr, unsigned char *end_ptr,
{
while (1) {
fill_one_span_write(start_ptr, end_ptr);
- if (!strcmp(resctrl_val, "cat"))
+ if (!strncmp(resctrl_val, CAT_STR, sizeof(CAT_STR)))
break;
}
diff --git a/tools/testing/selftests/resctrl/mba_test.c b/tools/testing/selftests/resctrl/mba_test.c
index 7bf8eaa..6449fbd 100644
--- a/tools/testing/selftests/resctrl/mba_test.c
+++ b/tools/testing/selftests/resctrl/mba_test.c
@@ -141,7 +141,7 @@ void mba_test_cleanup(void)
int mba_schemata_change(int cpu_no, char *bw_report, char **benchmark_cmd)
{
struct resctrl_val_param param = {
- .resctrl_val = "mba",
+ .resctrl_val = MBA_STR,
.ctrlgrp = "c1",
.mongrp = "m1",
.cpu_no = cpu_no,
diff --git a/tools/testing/selftests/resctrl/mbm_test.c b/tools/testing/selftests/resctrl/mbm_test.c
index 4700f74..ec6cfe0 100644
--- a/tools/testing/selftests/resctrl/mbm_test.c
+++ b/tools/testing/selftests/resctrl/mbm_test.c
@@ -114,7 +114,7 @@ void mbm_test_cleanup(void)
int mbm_bw_change(int span, int cpu_no, char *bw_report, char **benchmark_cmd)
{
struct resctrl_val_param param = {
- .resctrl_val = "mbm",
+ .resctrl_val = MBM_STR,
.ctrlgrp = "c1",
.mongrp = "m1",
.span = span,
diff --git a/tools/testing/selftests/resctrl/resctrl.h b/tools/testing/selftests/resctrl/resctrl.h
index 39bf59c..9dcc96e 100644
--- a/tools/testing/selftests/resctrl/resctrl.h
+++ b/tools/testing/selftests/resctrl/resctrl.h
@@ -28,6 +28,10 @@
#define RESCTRL_PATH "/sys/fs/resctrl"
#define PHYS_ID_PATH "/sys/devices/system/cpu/cpu"
#define CBM_MASK_PATH "/sys/fs/resctrl/info"
+#define L3_PATH "/sys/fs/resctrl/info/L3"
+#define MB_PATH "/sys/fs/resctrl/info/MB"
+#define L3_MON_PATH "/sys/fs/resctrl/info/L3_MON"
+#define L3_MON_FEATURES_PATH "/sys/fs/resctrl/info/L3_MON/mon_features"
#define PARENT_EXIT(err_msg) \
do { \
@@ -62,11 +66,16 @@ struct resctrl_val_param {
int (*setup)(int num, ...);
};
-pid_t bm_pid, ppid;
-int tests_run;
+#define MBM_STR "mbm"
+#define MBA_STR "mba"
+#define CQM_STR "cqm"
+#define CAT_STR "cat"
-char llc_occup_path[1024];
-bool is_amd;
+extern pid_t bm_pid, ppid;
+extern int tests_run;
+
+extern char llc_occup_path[1024];
+extern bool is_amd;
bool check_resctrlfs_support(void);
int filter_dmesg(void);
@@ -74,7 +83,7 @@ int remount_resctrlfs(bool mum_resctrlfs);
int get_resource_id(int cpu_no, int *resource_id);
int umount_resctrlfs(void);
int validate_bw_report_request(char *bw_report);
-bool validate_resctrl_feature_request(char *resctrl_val);
+bool validate_resctrl_feature_request(const char *resctrl_val);
char *fgrep(FILE *inf, const char *str);
int taskset_benchmark(pid_t bm_pid, int cpu_no);
void run_benchmark(int signum, siginfo_t *info, void *ucontext);
@@ -92,7 +101,7 @@ void tests_cleanup(void);
void mbm_test_cleanup(void);
int mba_schemata_change(int cpu_no, char *bw_report, char **benchmark_cmd);
void mba_test_cleanup(void);
-int get_cbm_mask(char *cache_type);
+int get_cbm_mask(char *cache_type, char *cbm_mask);
int get_cache_size(int cpu_no, char *cache_type, unsigned long *cache_size);
void ctrlc_handler(int signum, siginfo_t *info, void *ptr);
int cat_val(struct resctrl_val_param *param);
diff --git a/tools/testing/selftests/resctrl/resctrl_tests.c b/tools/testing/selftests/resctrl/resctrl_tests.c
index 425cc85..ac22696 100644
--- a/tools/testing/selftests/resctrl/resctrl_tests.c
+++ b/tools/testing/selftests/resctrl/resctrl_tests.c
@@ -73,7 +73,7 @@ int main(int argc, char **argv)
}
}
- while ((c = getopt(argc_new, argv, "ht:b:")) != -1) {
+ while ((c = getopt(argc_new, argv, "ht:b:n:p:")) != -1) {
char *token;
switch (c) {
@@ -85,13 +85,13 @@ int main(int argc, char **argv)
cqm_test = false;
cat_test = false;
while (token) {
- if (!strcmp(token, "mbm")) {
+ if (!strncmp(token, MBM_STR, sizeof(MBM_STR))) {
mbm_test = true;
- } else if (!strcmp(token, "mba")) {
+ } else if (!strncmp(token, MBA_STR, sizeof(MBA_STR))) {
mba_test = true;
- } else if (!strcmp(token, "cqm")) {
+ } else if (!strncmp(token, CQM_STR, sizeof(CQM_STR))) {
cqm_test = true;
- } else if (!strcmp(token, "cat")) {
+ } else if (!strncmp(token, CAT_STR, sizeof(CAT_STR))) {
cat_test = true;
} else {
printf("invalid argument\n");
@@ -161,7 +161,7 @@ int main(int argc, char **argv)
if (!is_amd && mbm_test) {
printf("# Starting MBM BW change ...\n");
if (!has_ben)
- sprintf(benchmark_cmd[5], "%s", "mba");
+ sprintf(benchmark_cmd[5], "%s", MBA_STR);
res = mbm_bw_change(span, cpu_no, bw_report, benchmark_cmd);
printf("%sok MBM: bw change\n", res ? "not " : "");
mbm_test_cleanup();
@@ -181,7 +181,7 @@ int main(int argc, char **argv)
if (cqm_test) {
printf("# Starting CQM test ...\n");
if (!has_ben)
- sprintf(benchmark_cmd[5], "%s", "cqm");
+ sprintf(benchmark_cmd[5], "%s", CQM_STR);
res = cqm_resctrl_val(cpu_no, no_of_bits, benchmark_cmd);
printf("%sok CQM: test\n", res ? "not " : "");
cqm_test_cleanup();
diff --git a/tools/testing/selftests/resctrl/resctrl_val.c b/tools/testing/selftests/resctrl/resctrl_val.c
index 520fea3..8df5578 100644
--- a/tools/testing/selftests/resctrl/resctrl_val.c
+++ b/tools/testing/selftests/resctrl/resctrl_val.c
@@ -221,8 +221,8 @@ static int read_from_imc_dir(char *imc_dir, int count)
*/
static int num_of_imcs(void)
{
+ char imc_dir[512], *temp;
unsigned int count = 0;
- char imc_dir[512];
struct dirent *ep;
int ret;
DIR *dp;
@@ -230,7 +230,25 @@ static int num_of_imcs(void)
dp = opendir(DYN_PMU_PATH);
if (dp) {
while ((ep = readdir(dp))) {
- if (strstr(ep->d_name, UNCORE_IMC)) {
+ temp = strstr(ep->d_name, UNCORE_IMC);
+ if (!temp)
+ continue;
+
+ /*
+ * iMC counters are named "uncore_imc_<n>", so advance the
+ * pointer to point at <n>. Note that sizeof(UNCORE_IMC) also
+ * counts the terminating null character, which stands in for
+ * the trailing underscore of "uncore_imc_", so that underscore
+ * need not be counted separately.
+ */
+ temp = temp + sizeof(UNCORE_IMC);
+
+ /*
+ * Some directories under "DYN_PMU_PATH" could have
+ * names like "uncore_imc_free_running", hence, check if
+ * first character is a numerical digit or not.
+ */
+ if (temp[0] >= '0' && temp[0] <= '9') {
sprintf(imc_dir, "%s/%s/", DYN_PMU_PATH,
ep->d_name);
ret = read_from_imc_dir(imc_dir, count);
@@ -282,9 +300,9 @@ static int initialize_mem_bw_imc(void)
* Memory B/W utilized by a process on a socket can be calculated using
* iMC counters. Perf events are used to read these counters.
*
- * Return: >= 0 on success. < 0 on failure.
+ * Return: = 0 on success. < 0 on failure.
*/
-static float get_mem_bw_imc(int cpu_no, char *bw_report)
+static int get_mem_bw_imc(int cpu_no, char *bw_report, float *bw_imc)
{
float reads, writes, of_mul_read, of_mul_write;
int imc, j, ret;
@@ -355,13 +373,18 @@ static float get_mem_bw_imc(int cpu_no, char *bw_report)
close(imc_counters_config[imc][WRITE].fd);
}
- if (strcmp(bw_report, "reads") == 0)
- return reads;
+ if (strcmp(bw_report, "reads") == 0) {
+ *bw_imc = reads;
+ return 0;
+ }
- if (strcmp(bw_report, "writes") == 0)
- return writes;
+ if (strcmp(bw_report, "writes") == 0) {
+ *bw_imc = writes;
+ return 0;
+ }
- return (reads + writes);
+ *bw_imc = reads + writes;
+ return 0;
}
void set_mbm_path(const char *ctrlgrp, const char *mongrp, int resource_id)
@@ -397,10 +420,10 @@ static void initialize_mem_bw_resctrl(const char *ctrlgrp, const char *mongrp,
return;
}
- if (strcmp(resctrl_val, "mbm") == 0)
+ if (!strncmp(resctrl_val, MBM_STR, sizeof(MBM_STR)))
set_mbm_path(ctrlgrp, mongrp, resource_id);
- if ((strcmp(resctrl_val, "mba") == 0)) {
+ if (!strncmp(resctrl_val, MBA_STR, sizeof(MBA_STR))) {
if (ctrlgrp)
sprintf(mbm_total_path, CON_MBM_LOCAL_BYTES_PATH,
RESCTRL_PATH, ctrlgrp, resource_id);
@@ -420,9 +443,8 @@ static void initialize_mem_bw_resctrl(const char *ctrlgrp, const char *mongrp,
* 1. If con_mon grp is given, then read from it
* 2. If con_mon grp is not given, then read from root con_mon grp
*/
-static unsigned long get_mem_bw_resctrl(void)
+static int get_mem_bw_resctrl(unsigned long *mbm_total)
{
- unsigned long mbm_total = 0;
FILE *fp;
fp = fopen(mbm_total_path, "r");
@@ -431,7 +453,7 @@ static unsigned long get_mem_bw_resctrl(void)
return -1;
}
- if (fscanf(fp, "%lu", &mbm_total) <= 0) {
+ if (fscanf(fp, "%lu", mbm_total) <= 0) {
perror("Could not get mbm local bytes");
fclose(fp);
@@ -439,7 +461,7 @@ static unsigned long get_mem_bw_resctrl(void)
}
fclose(fp);
- return mbm_total;
+ return 0;
}
pid_t bm_pid, ppid;
@@ -524,14 +546,15 @@ static void initialize_llc_occu_resctrl(const char *ctrlgrp, const char *mongrp,
return;
}
- if (strcmp(resctrl_val, "cqm") == 0)
+ if (!strncmp(resctrl_val, CQM_STR, sizeof(CQM_STR)))
set_cqm_path(ctrlgrp, mongrp, resource_id);
}
static int
measure_vals(struct resctrl_val_param *param, unsigned long *bw_resc_start)
{
- unsigned long bw_imc, bw_resc, bw_resc_end;
+ unsigned long bw_resc, bw_resc_end;
+ float bw_imc;
int ret;
/*
@@ -541,13 +564,13 @@ measure_vals(struct resctrl_val_param *param, unsigned long *bw_resc_start)
* Compare the two values to validate resctrl value.
* It takes 1sec to measure the data.
*/
- bw_imc = get_mem_bw_imc(param->cpu_no, param->bw_report);
- if (bw_imc <= 0)
- return bw_imc;
+ ret = get_mem_bw_imc(param->cpu_no, param->bw_report, &bw_imc);
+ if (ret < 0)
+ return ret;
- bw_resc_end = get_mem_bw_resctrl();
- if (bw_resc_end <= 0)
- return bw_resc_end;
+ ret = get_mem_bw_resctrl(&bw_resc_end);
+ if (ret < 0)
+ return ret;
bw_resc = (bw_resc_end - *bw_resc_start) / MB;
ret = print_results_bw(param->filename, bm_pid, bw_imc, bw_resc);
@@ -579,8 +602,8 @@ int resctrl_val(char **benchmark_cmd, struct resctrl_val_param *param)
if (strcmp(param->filename, "") == 0)
sprintf(param->filename, "stdio");
- if ((strcmp(resctrl_val, "mba")) == 0 ||
- (strcmp(resctrl_val, "mbm")) == 0) {
+ if (!strncmp(resctrl_val, MBA_STR, sizeof(MBA_STR)) ||
+ !strncmp(resctrl_val, MBM_STR, sizeof(MBM_STR))) {
ret = validate_bw_report_request(param->bw_report);
if (ret)
return ret;
@@ -674,15 +697,15 @@ int resctrl_val(char **benchmark_cmd, struct resctrl_val_param *param)
if (ret)
goto out;
- if ((strcmp(resctrl_val, "mbm") == 0) ||
- (strcmp(resctrl_val, "mba") == 0)) {
+ if (!strncmp(resctrl_val, MBM_STR, sizeof(MBM_STR)) ||
+ !strncmp(resctrl_val, MBA_STR, sizeof(MBA_STR))) {
ret = initialize_mem_bw_imc();
if (ret)
goto out;
initialize_mem_bw_resctrl(param->ctrlgrp, param->mongrp,
param->cpu_no, resctrl_val);
- } else if (strcmp(resctrl_val, "cqm") == 0)
+ } else if (!strncmp(resctrl_val, CQM_STR, sizeof(CQM_STR)))
initialize_llc_occu_resctrl(param->ctrlgrp, param->mongrp,
param->cpu_no, resctrl_val);
@@ -710,8 +733,8 @@ int resctrl_val(char **benchmark_cmd, struct resctrl_val_param *param)
/* Test runs until the callback setup() tells the test to stop. */
while (1) {
- if ((strcmp(resctrl_val, "mbm") == 0) ||
- (strcmp(resctrl_val, "mba") == 0)) {
+ if (!strncmp(resctrl_val, MBM_STR, sizeof(MBM_STR)) ||
+ !strncmp(resctrl_val, MBA_STR, sizeof(MBA_STR))) {
ret = param->setup(1, param);
if (ret) {
ret = 0;
@@ -721,7 +744,7 @@ int resctrl_val(char **benchmark_cmd, struct resctrl_val_param *param)
ret = measure_vals(param, &bw_resc_start);
if (ret)
break;
- } else if (strcmp(resctrl_val, "cqm") == 0) {
+ } else if (!strncmp(resctrl_val, CQM_STR, sizeof(CQM_STR))) {
ret = param->setup(1, param);
if (ret) {
ret = 0;
diff --git a/tools/testing/selftests/resctrl/resctrlfs.c b/tools/testing/selftests/resctrl/resctrlfs.c
index 19c0ec4..b57170f5 100644
--- a/tools/testing/selftests/resctrl/resctrlfs.c
+++ b/tools/testing/selftests/resctrl/resctrlfs.c
@@ -49,8 +49,6 @@ static int find_resctrl_mount(char *buffer)
return -ENOENT;
}
-char cbm_mask[256];
-
/*
* remount_resctrlfs - Remount resctrl FS at /sys/fs/resctrl
* @mum_resctrlfs: Should the resctrl FS be remounted?
@@ -205,16 +203,18 @@ int get_cache_size(int cpu_no, char *cache_type, unsigned long *cache_size)
/*
* get_cbm_mask - Get cbm mask for given cache
* @cache_type: Cache level L2/L3
- *
- * Mask is stored in cbm_mask which is global variable.
+ * @cbm_mask: cbm_mask returned as a string
*
* Return: = 0 on success, < 0 on failure.
*/
-int get_cbm_mask(char *cache_type)
+int get_cbm_mask(char *cache_type, char *cbm_mask)
{
char cbm_mask_path[1024];
FILE *fp;
+ if (!cbm_mask)
+ return -1;
+
sprintf(cbm_mask_path, "%s/%s/cbm_mask", CBM_MASK_PATH, cache_type);
fp = fopen(cbm_mask_path, "r");
@@ -334,7 +334,7 @@ void run_benchmark(int signum, siginfo_t *info, void *ucontext)
operation = atoi(benchmark_cmd[4]);
sprintf(resctrl_val, "%s", benchmark_cmd[5]);
- if (strcmp(resctrl_val, "cqm") != 0)
+ if (strncmp(resctrl_val, CQM_STR, sizeof(CQM_STR)))
buffer_span = span * MB;
else
buffer_span = span;
@@ -459,8 +459,8 @@ int write_bm_pid_to_resctrl(pid_t bm_pid, char *ctrlgrp, char *mongrp,
goto out;
/* Create mon grp and write pid into it for "mbm" and "cqm" test */
- if ((strcmp(resctrl_val, "cqm") == 0) ||
- (strcmp(resctrl_val, "mbm") == 0)) {
+ if (!strncmp(resctrl_val, CQM_STR, sizeof(CQM_STR)) ||
+ !strncmp(resctrl_val, MBM_STR, sizeof(MBM_STR))) {
if (strlen(mongrp)) {
sprintf(monitorgroup_p, "%s/mon_groups", controlgroup);
sprintf(monitorgroup, "%s/%s", monitorgroup_p, mongrp);
@@ -505,9 +505,9 @@ int write_schemata(char *ctrlgrp, char *schemata, int cpu_no, char *resctrl_val)
int resource_id, ret = 0;
FILE *fp;
- if ((strcmp(resctrl_val, "mba") != 0) &&
- (strcmp(resctrl_val, "cat") != 0) &&
- (strcmp(resctrl_val, "cqm") != 0))
+ if (strncmp(resctrl_val, MBA_STR, sizeof(MBA_STR)) &&
+ strncmp(resctrl_val, CAT_STR, sizeof(CAT_STR)) &&
+ strncmp(resctrl_val, CQM_STR, sizeof(CQM_STR)))
return -ENOENT;
if (!schemata) {
@@ -528,9 +528,10 @@ int write_schemata(char *ctrlgrp, char *schemata, int cpu_no, char *resctrl_val)
else
sprintf(controlgroup, "%s/schemata", RESCTRL_PATH);
- if (!strcmp(resctrl_val, "cat") || !strcmp(resctrl_val, "cqm"))
+ if (!strncmp(resctrl_val, CAT_STR, sizeof(CAT_STR)) ||
+ !strncmp(resctrl_val, CQM_STR, sizeof(CQM_STR)))
sprintf(schema, "%s%d%c%s", "L3:", resource_id, '=', schemata);
- if (strcmp(resctrl_val, "mba") == 0)
+ if (!strncmp(resctrl_val, MBA_STR, sizeof(MBA_STR)))
sprintf(schema, "%s%d%c%s", "MB:", resource_id, '=', schemata);
fp = fopen(controlgroup, "w");
@@ -615,26 +616,56 @@ char *fgrep(FILE *inf, const char *str)
* validate_resctrl_feature_request - Check if requested feature is valid.
* @resctrl_val: Requested feature
*
- * Return: 0 on success, non-zero on failure
+ * Return: True if the feature is supported, else false
*/
-bool validate_resctrl_feature_request(char *resctrl_val)
+bool validate_resctrl_feature_request(const char *resctrl_val)
{
- FILE *inf = fopen("/proc/cpuinfo", "r");
+ struct stat statbuf;
bool found = false;
char *res;
+ FILE *inf;
- if (!inf)
+ if (!resctrl_val)
return false;
- res = fgrep(inf, "flags");
+ if (remount_resctrlfs(false))
+ return false;
- if (res) {
- char *s = strchr(res, ':');
+ if (!strncmp(resctrl_val, CAT_STR, sizeof(CAT_STR))) {
+ if (!stat(L3_PATH, &statbuf))
+ return true;
+ } else if (!strncmp(resctrl_val, MBA_STR, sizeof(MBA_STR))) {
+ if (!stat(MB_PATH, &statbuf))
+ return true;
+ } else if (!strncmp(resctrl_val, MBM_STR, sizeof(MBM_STR)) ||
+ !strncmp(resctrl_val, CMT_STR, sizeof(CMT_STR))) {
+ if (!stat(L3_MON_PATH, &statbuf)) {
+ inf = fopen(L3_MON_FEATURES_PATH, "r");
+ if (!inf)
+ return false;
- found = s && !strstr(s, resctrl_val);
- free(res);
+ if (!strncmp(resctrl_val, CMT_STR, sizeof(CMT_STR))) {
+ res = fgrep(inf, "llc_occupancy");
+ if (res) {
+ found = true;
+ free(res);
+ }
+ }
+
+ if (!strncmp(resctrl_val, MBM_STR, sizeof(MBM_STR))) {
+ res = fgrep(inf, "mbm_total_bytes");
+ if (res) {
+ free(res);
+ res = fgrep(inf, "mbm_local_bytes");
+ if (res) {
+ found = true;
+ free(res);
+ }
+ }
+ }
+ fclose(inf);
+ }
}
- fclose(inf);
return found;
}
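
For reference, a rough stand-alone sketch of the detection strategy the rewritten validate_resctrl_feature_request() follows: probe the resctrl info directories with stat() and scan mon_features, rather than parsing CPU flags out of /proc/cpuinfo. The paths below assume resctrl is mounted at /sys/fs/resctrl and are illustrative, not the selftests' own path macros.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

#define INFO_L3_PATH	"/sys/fs/resctrl/info/L3"		/* allocation (CAT) */
#define INFO_MB_PATH	"/sys/fs/resctrl/info/MB"		/* allocation (MBA) */
#define MON_FEATURES	"/sys/fs/resctrl/info/L3_MON/mon_features"

/* mon_features lists one monitoring feature per line. */
static bool mon_feature_present(const char *feature)
{
	char line[256];
	bool found = false;
	FILE *fp = fopen(MON_FEATURES, "r");

	if (!fp)
		return false;

	while (fgets(line, sizeof(line), fp)) {
		line[strcspn(line, "\n")] = '\0';
		if (!strcmp(line, feature)) {
			found = true;
			break;
		}
	}
	fclose(fp);
	return found;
}

int main(void)
{
	struct stat sb;

	printf("L3 allocation : %s\n", stat(INFO_L3_PATH, &sb) ? "no" : "yes");
	printf("MB allocation : %s\n", stat(INFO_MB_PATH, &sb) ? "no" : "yes");
	printf("llc_occupancy : %s\n", mon_feature_present("llc_occupancy") ? "yes" : "no");
	printf("mbm_total     : %s\n", mon_feature_present("mbm_total_bytes") ? "yes" : "no");
	return 0;
}

Checking the mounted filesystem reflects what the running kernel actually exposes, which is why the patch also remounts resctrl before probing.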
diff --git a/tools/testing/selftests/seccomp/seccomp_bpf.c b/tools/testing/selftests/seccomp/seccomp_bpf.c
index 1b6c7d3..dc21dc4 100644
--- a/tools/testing/selftests/seccomp/seccomp_bpf.c
+++ b/tools/testing/selftests/seccomp/seccomp_bpf.c
@@ -1753,16 +1753,25 @@ TEST_F(TRACE_poke, getpid_runs_normally)
# define SYSCALL_RET_SET(_regs, _val) \
do { \
typeof(_val) _result = (_val); \
- /* \
- * A syscall error is signaled by CR0 SO bit \
- * and the code is stored as a positive value. \
- */ \
- if (_result < 0) { \
- SYSCALL_RET(_regs) = -_result; \
- (_regs).ccr |= 0x10000000; \
- } else { \
+ if ((_regs.trap & 0xfff0) == 0x3000) { \
+ /* \
+ * scv 0 system call uses -ve result \
+ * for error, so no need to adjust. \
+ */ \
SYSCALL_RET(_regs) = _result; \
- (_regs).ccr &= ~0x10000000; \
+ } else { \
+ /* \
+ * A syscall error is signaled by the \
+ * CR0 SO bit and the code is stored as \
+ * a positive value. \
+ */ \
+ if (_result < 0) { \
+ SYSCALL_RET(_regs) = -_result; \
+ (_regs).ccr |= 0x10000000; \
+ } else { \
+ SYSCALL_RET(_regs) = _result; \
+ (_regs).ccr &= ~0x10000000; \
+ } \
} \
} while (0)
# define SYSCALL_RET_SET_ON_PTRACE_EXIT
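
As background for the powerpc hunk above, a small user-space sketch of the two return conventions SYSCALL_RET_SET now distinguishes: the legacy sc entry reports errors as a positive code with the CR0.SO bit set, while the scv 0 entry (trap values 0x3000-0x300f) returns a plain negative errno. The fake_regs structure and the sc trap value used here are illustrative stand-ins, not kernel definitions.

#include <stdbool.h>
#include <stdio.h>

#define CR0_SO 0x10000000UL

struct fake_regs {
	unsigned long gpr3;	/* syscall return value register */
	unsigned long ccr;	/* condition register */
	unsigned long trap;	/* 0x3000..0x300f means an scv entry */
};

static void set_syscall_ret(struct fake_regs *regs, long result)
{
	bool is_scv = (regs->trap & 0xfff0) == 0x3000;

	if (is_scv) {
		/* scv: a negative errno goes through untouched */
		regs->gpr3 = result;
		return;
	}

	if (result < 0) {
		/* sc: store the positive error code and set CR0.SO */
		regs->gpr3 = -result;
		regs->ccr |= CR0_SO;
	} else {
		regs->gpr3 = result;
		regs->ccr &= ~CR0_SO;
	}
}

int main(void)
{
	struct fake_regs sc = { .trap = 0x0c00 }, scv = { .trap = 0x3000 };

	set_syscall_ret(&sc, -38);	/* -ENOSYS via the sc convention  */
	set_syscall_ret(&scv, -38);	/* -ENOSYS via the scv convention */
	printf("sc : r3=%lu SO=%d\n", sc.gpr3, !!(sc.ccr & CR0_SO));
	printf("scv: r3=%ld SO=%d\n", (long)scv.gpr3, !!(scv.ccr & CR0_SO));
	return 0;
}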
diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq_pie.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq_pie.json
index 1cda2e1..773c502 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq_pie.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq_pie.json
@@ -9,11 +9,11 @@
"setup": [
"$IP link add dev $DUMMY type dummy || /bin/true"
],
- "cmdUnderTest": "$TC qdisc add dev $DUMMY root fq_pie flows 65536",
- "expExitCode": "2",
+ "cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root fq_pie flows 65536",
+ "expExitCode": "0",
"verifyCmd": "$TC qdisc show dev $DUMMY",
- "matchPattern": "qdisc",
- "matchCount": "0",
+ "matchPattern": "qdisc fq_pie 1: root refcnt 2 limit 10240p flows 65536",
+ "matchCount": "1",
"teardown": [
"$IP link del dev $DUMMY"
]
diff --git a/tools/testing/selftests/tc-testing/tdc_batch.py b/tools/testing/selftests/tc-testing/tdc_batch.py
index 995f66c..35d5d94 100755
--- a/tools/testing/selftests/tc-testing/tdc_batch.py
+++ b/tools/testing/selftests/tc-testing/tdc_batch.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python3
+#!/usr/bin/env python3
"""
tdc_batch.py - a script to generate TC batch file
diff --git a/tools/testing/selftests/tc-testing/tdc_multibatch.py b/tools/testing/selftests/tc-testing/tdc_multibatch.py
index 5e72379..48e1f17 100755
--- a/tools/testing/selftests/tc-testing/tdc_multibatch.py
+++ b/tools/testing/selftests/tc-testing/tdc_multibatch.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python3
+#!/usr/bin/env python3
# SPDX-License-Identifier: GPL-2.0
"""
tdc_multibatch.py - a thin wrapper over tdc_batch.py to generate multiple batch
diff --git a/tools/testing/selftests/vm/Makefile b/tools/testing/selftests/vm/Makefile
index e63f316..2cf32e6 100644
--- a/tools/testing/selftests/vm/Makefile
+++ b/tools/testing/selftests/vm/Makefile
@@ -99,7 +99,7 @@
ifeq ($(CAN_BUILD_I386),1)
$(BINARIES_32): CFLAGS += -m32
$(BINARIES_32): LDLIBS += -lrt -ldl -lm
-$(BINARIES_32): %_32: %.c
+$(BINARIES_32): $(OUTPUT)/%_32: %.c
$(CC) $(CFLAGS) $(EXTRA_CFLAGS) $(notdir $^) $(LDLIBS) -o $@
$(foreach t,$(TARGETS),$(eval $(call gen-target-rule-32,$(t))))
endif
@@ -107,7 +107,7 @@
ifeq ($(CAN_BUILD_X86_64),1)
$(BINARIES_64): CFLAGS += -m64
$(BINARIES_64): LDLIBS += -lrt -ldl
-$(BINARIES_64): %_64: %.c
+$(BINARIES_64): $(OUTPUT)/%_64: %.c
$(CC) $(CFLAGS) $(EXTRA_CFLAGS) $(notdir $^) $(LDLIBS) -o $@
$(foreach t,$(TARGETS),$(eval $(call gen-target-rule-64,$(t))))
endif
diff --git a/virt/kvm/coalesced_mmio.c b/virt/kvm/coalesced_mmio.c
index e2c197f..6edfcf1 100644
--- a/virt/kvm/coalesced_mmio.c
+++ b/virt/kvm/coalesced_mmio.c
@@ -174,21 +174,36 @@ int kvm_vm_ioctl_unregister_coalesced_mmio(struct kvm *kvm,
struct kvm_coalesced_mmio_zone *zone)
{
struct kvm_coalesced_mmio_dev *dev, *tmp;
+ int r;
if (zone->pio != 1 && zone->pio != 0)
return -EINVAL;
mutex_lock(&kvm->slots_lock);
- list_for_each_entry_safe(dev, tmp, &kvm->coalesced_zones, list)
+ list_for_each_entry_safe(dev, tmp, &kvm->coalesced_zones, list) {
if (zone->pio == dev->zone.pio &&
coalesced_mmio_in_range(dev, zone->addr, zone->size)) {
- kvm_io_bus_unregister_dev(kvm,
+ r = kvm_io_bus_unregister_dev(kvm,
zone->pio ? KVM_PIO_BUS : KVM_MMIO_BUS, &dev->dev);
kvm_iodevice_destructor(&dev->dev);
+
+ /*
+ * On failure, unregister destroys all devices on the
+ * bus _except_ the target device, i.e. coalesced_zones
+ * has been modified. No need to restart the walk as
+ * there aren't any zones left.
+ */
+ if (r)
+ break;
}
+ }
mutex_unlock(&kvm->slots_lock);
+ /*
+ * Ignore the result of kvm_io_bus_unregister_dev(), from userspace's
+ * perspective, the coalesced MMIO is most definitely unregistered.
+ */
return 0;
}
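
A self-contained sketch (invented names, not KVM code) of the error-handling pattern introduced in kvm_vm_ioctl_unregister_coalesced_mmio(): when a teardown helper fails in a way that invalidates the rest of the list being walked, stop iterating, but still report success to the caller because the requested object has been removed either way.

#include <stdio.h>

struct zone {
	int id;
	struct zone *next;
};

/* Pretend this can fail (e.g. -ENOMEM) and that, on failure, it tears
 * down every remaining zone as a side effect. */
static int unregister_zone(struct zone *z)
{
	printf("unregister zone %d\n", z->id);
	return z->id == 2 ? -12 : 0;	/* simulate -ENOMEM on zone 2 */
}

static int unregister_matching(struct zone *head, int match)
{
	struct zone *z;

	for (z = head; z; z = z->next) {
		if (z->id != match)
			continue;
		/* On failure the list has already been torn down, so stop
		 * walking it; the target zone is gone in either case. */
		if (unregister_zone(z))
			break;
	}
	return 0;	/* from the caller's view, the zone is unregistered */
}

int main(void)
{
	struct zone c = { 3, NULL }, b = { 2, &c }, a = { 1, &b };

	return unregister_matching(&a, 2);
}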
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index ed4d2e3..f446c36 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2717,8 +2717,8 @@ static void grow_halt_poll_ns(struct kvm_vcpu *vcpu)
if (val < grow_start)
val = grow_start;
- if (val > halt_poll_ns)
- val = halt_poll_ns;
+ if (val > vcpu->kvm->max_halt_poll_ns)
+ val = vcpu->kvm->max_halt_poll_ns;
vcpu->halt_poll_ns = val;
out:
@@ -2797,7 +2797,8 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
goto out;
}
poll_end = cur = ktime_get();
- } while (single_task_running() && ktime_before(cur, stop));
+ } while (single_task_running() && !need_resched() &&
+ ktime_before(cur, stop));
}
prepare_to_rcuwait(&vcpu->wait);
@@ -4342,15 +4343,15 @@ int kvm_io_bus_register_dev(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
}
/* Caller must hold slots_lock. */
-void kvm_io_bus_unregister_dev(struct kvm *kvm, enum kvm_bus bus_idx,
- struct kvm_io_device *dev)
+int kvm_io_bus_unregister_dev(struct kvm *kvm, enum kvm_bus bus_idx,
+ struct kvm_io_device *dev)
{
int i, j;
struct kvm_io_bus *new_bus, *bus;
bus = kvm_get_bus(kvm, bus_idx);
if (!bus)
- return;
+ return 0;
for (i = 0; i < bus->dev_count; i++)
if (bus->range[i].dev == dev) {
@@ -4358,7 +4359,7 @@ void kvm_io_bus_unregister_dev(struct kvm *kvm, enum kvm_bus bus_idx,
}
if (i == bus->dev_count)
- return;
+ return 0;
new_bus = kmalloc(struct_size(bus, range, bus->dev_count - 1),
GFP_KERNEL_ACCOUNT);
@@ -4367,7 +4368,13 @@ void kvm_io_bus_unregister_dev(struct kvm *kvm, enum kvm_bus bus_idx,
new_bus->dev_count--;
memcpy(new_bus->range + i, bus->range + i + 1,
flex_array_size(new_bus, range, new_bus->dev_count - i));
- } else {
+ }
+
+ rcu_assign_pointer(kvm->buses[bus_idx], new_bus);
+ synchronize_srcu_expedited(&kvm->srcu);
+
+ /* Destroy the old bus _after_ installing the (null) bus. */
+ if (!new_bus) {
pr_err("kvm: failed to shrink bus, removing it completely\n");
for (j = 0; j < bus->dev_count; j++) {
if (j == i)
@@ -4376,10 +4383,8 @@ void kvm_io_bus_unregister_dev(struct kvm *kvm, enum kvm_bus bus_idx,
}
}
- rcu_assign_pointer(kvm->buses[bus_idx], new_bus);
- synchronize_srcu_expedited(&kvm->srcu);
kfree(bus);
- return;
+ return new_bus ? 0 : -ENOMEM;
}
struct kvm_io_device *kvm_io_bus_get_dev(struct kvm *kvm, enum kvm_bus bus_idx,
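
A simplified user-space sketch of the polling pattern touched in kvm_vcpu_block(): poll until a deadline, but bail out early when other work is pending, analogous to the added !need_resched() check. The other_work_pending() helper and the 2 ms window are invented for illustration.

#include <stdbool.h>
#include <stdio.h>
#include <time.h>

static bool other_work_pending(int iter)
{
	return iter >= 3;	/* stand-in for need_resched() */
}

static long long now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

int main(void)
{
	long long deadline = now_ns() + 2000000;	/* 2 ms poll window */
	bool woken = false;
	int iter = 0;

	do {
		/* ... check for the wakeup condition here ... */
		iter++;
	} while (!woken && !other_work_pending(iter) && now_ns() < deadline);

	printf("stopped polling after %d iterations\n", iter);
	return 0;
}

Without the pending-work check, the loop keeps spinning until the deadline even when the CPU has something better to do, which is exactly the situation the kernel change avoids.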
diff --git a/virt/lib/irqbypass.c b/virt/lib/irqbypass.c
index c9bb395..28fda42 100644
--- a/virt/lib/irqbypass.c
+++ b/virt/lib/irqbypass.c
@@ -40,21 +40,17 @@ static int __connect(struct irq_bypass_producer *prod,
if (prod->add_consumer)
ret = prod->add_consumer(prod, cons);
- if (ret)
- goto err_add_consumer;
-
- ret = cons->add_producer(cons, prod);
- if (ret)
- goto err_add_producer;
+ if (!ret) {
+ ret = cons->add_producer(cons, prod);
+ if (ret && prod->del_consumer)
+ prod->del_consumer(prod, cons);
+ }
if (cons->start)
cons->start(cons);
if (prod->start)
prod->start(prod);
-err_add_producer:
- if (prod->del_consumer)
- prod->del_consumer(prod, cons);
-err_add_consumer:
+
return ret;
}
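
Finally, a minimal sketch (invented producer/consumer types) of the rollback ordering the irqbypass change establishes in __connect(): the first registration step is undone only when the second step fails, rather than the cleanup labels being reachable from the success path as before.

#include <stdio.h>

struct producer { int fail_add_consumer; };
struct consumer { int fail_add_producer; };

static int add_consumer(struct producer *p, struct consumer *c)
{
	return p->fail_add_consumer ? -16 : 0;	/* simulate -EBUSY */
}

static int add_producer(struct consumer *c, struct producer *p)
{
	return c->fail_add_producer ? -16 : 0;
}

static void del_consumer(struct producer *p, struct consumer *c)
{
	printf("rolled back consumer registration\n");
}

static int do_connect(struct producer *p, struct consumer *c)
{
	int ret = add_consumer(p, c);

	if (!ret) {
		ret = add_producer(c, p);
		if (ret)
			del_consumer(p, c);	/* undo step one only if step two fails */
	}
	return ret;
}

int main(void)
{
	struct producer p = { 0 };
	struct consumer c = { 1 };	/* force the second step to fail */

	printf("do_connect returned %d\n", do_connect(&p, &c));
	return 0;
}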