pci-v6.14-changes
Merge tag 'pci-v6.14-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci

Pull PCI updates from Bjorn Helgaas:

 "Enumeration:

   - Batch sizing of multiple BARs while memory decoding is disabled
     instead of disabling/enabling decoding for each BAR individually;
     this optimizes virtualized environments where toggling decoding
     enable is expensive (Alex Williamson)

   - Add host bridge .enable_device() and .disable_device() hooks for
     bridges that need to configure things like Requester ID to StreamID
     mapping when enabling devices (Frank Li)

   - Extend struct pci_ecam_ops with .enable_device() and
     .disable_device() hooks so drivers that use pci_host_common_probe()
     instead of their own .probe() have a way to set the
     .enable_device() callbacks (Marc Zyngier)

   - Drop 'No bus range found' message so we don't complain when DTs
     don't specify the default 'bus-range = <0x00 0xff>' (Bjorn Helgaas)

   - Rename the drivers/pci/of_property.c struct of_pci_range to
     of_pci_range_entry to avoid confusion with the global of_pci_range
     in include/linux/of_address.h (Bjorn Helgaas)

  Driver binding:

   - Update resource request API documentation to encourage callers to
     supply a driver name when requesting resources (Philipp Stanner)

   - Export pci_intx_unmanaged() and pcim_intx() (always managed) so
     callers of pci_intx() (which is sometimes managed) can explicitly
     choose the one they need (Philipp Stanner)

   - Convert drivers from pci_intx() to always-managed pcim_intx() or
     never-managed pci_intx_unmanaged(): amd_sfh, ata (ahci, ata_piix,
     pata_rdc, sata_sil24, sata_sis, sata_uli, sata_vsc), bnx2x, bna,
     ntb, qtnfmac, rtsx, tifm_7xx1, vfio, xen-pciback (Philipp Stanner)

   - Remove pci_intx_unmanaged() since pci_intx() is now always
     unmanaged and pcim_intx() is always managed (Philipp Stanner)

  Error handling:

   - Unexport pcie_read_tlp_log() to encourage drivers to use PCI core
     logging rather than building their own (Ilpo Järvinen)

   - Move TLP Log handling to its own file (Ilpo Järvinen)

   - Store number of supported End-End TLP Prefixes always so we can
     read the correct number of DWORDs from the TLP Prefix Log (Ilpo
     Järvinen)

   - Read TLP Prefixes in addition to the Header Log in
     pcie_read_tlp_log() (Ilpo Järvinen)

   - Add pcie_print_tlp_log() to consolidate printing of TLP Header and
     Prefix Log (Ilpo Järvinen)

   - Quirk the Intel Raptor Lake-P PIO log size to accommodate vendor
     BIOSes that don't configure it correctly (Takashi Iwai)

  ASPM:

   - Save parent L1 PM Substates config so when we restore it along with
     an endpoint's config, the parent info isn't junk (Jian-Hong Pan)

  Power management:

   - Avoid D3 for Root Ports on TUXEDO Sirius Gen1 with old BIOS because
     the system can't wake up from suspend (Werner Sembach)

  Endpoint framework:

   - Destroy the EPC device in devm_pci_epc_destroy(), which previously
     didn't call devres_release() (Zijun Hu)

   - Finish virtual EP removal in pci_epf_remove_vepf(), which
     previously caused a subsequent pci_epf_add_vepf() to fail with
     -EBUSY (Zijun Hu)

   - Write BAR_MASK before iATU registers in pci_epc_set_bar() so we
     don't depend on the BAR_MASK reset value being larger than the
     requested BAR size (Niklas Cassel)

   - Prevent changing BAR size/flags in pci_epc_set_bar() to prevent
     reads from bypassing the iATU if we reduced the BAR size (Niklas
     Cassel)

   - Verify address alignment when programming iATU so we don't attempt
     to write bits that are read-only because of the BAR size, which
     could lead to directing accesses to the wrong address (Niklas
     Cassel)

   - Implement artpec6 pci_epc_features so we can rely on all drivers
     supporting it so we can use it in EPC core code (Niklas Cassel)

   - Check for BARs of fixed size to prevent endpoint drivers from
     trying to change their size (Niklas Cassel)

   - Verify that requested BAR size is a power of two when endpoint
     driver sets the BAR (Niklas Cassel)

  Endpoint framework tests:

   - Clear pci-epf-test dma_chan_rx, not dma_chan_tx, after freeing
     dma_chan_rx (Mohamed Khalfella)

   - Correct the DMA MEMCPY test so it doesn't fail if the Endpoint
     supports both DMA_PRIVATE and DMA_MEMCPY (Manivannan Sadhasivam)

   - Add pci-epf-test and pci_endpoint_test support for capabilities
     (Niklas Cassel)

   - Add Endpoint test for consecutive BARs (Niklas Cassel)

   - Remove redundant comparison from Endpoint BAR test because a > 1MB
     BAR can always be exactly covered by iterating with a 1MB buffer
     (Hans Zhang)

   - Move and convert PCI Endpoint tests from tools/pci to Kselftests
     (Manivannan Sadhasivam)

  Apple PCIe controller driver:

   - Convert StreamID mapping configuration from a bus notifier to the
     .enable_device() and .disable_device() callbacks (Marc Zyngier)

  Freescale i.MX6 PCIe controller driver:

   - Add Requester ID to StreamID mapping configuration when enabling
     devices (Frank Li)

   - Use DWC core suspend/resume functions for imx6 (Frank Li)

   - Add suspend/resume support for i.MX8MQ, i.MX8Q, and i.MX95
     (Richard Zhu)

   - Add DT compatible string 'fsl,imx8q-pcie-ep' and driver support for
     i.MX8Q series (i.MX8QM, i.MX8QXP, and i.MX8DXL) Endpoints (Frank Li)

   - Add DT binding for optional i.MX95 Refclk and driver support to
     enable it if the platform hasn't enabled it (Richard Zhu)

   - Configure PHY based on controller being in Root Complex or Endpoint
     mode (Frank Li)

   - Rely on dbi2 and iATU base addresses from DT via
     dw_pcie_get_resources() instead of hardcoding them (Richard Zhu)

   - Deassert apps_reset in imx_pcie_deassert_core_reset() since it is
     asserted in imx_pcie_assert_core_reset() (Richard Zhu)

   - Add missing reference clock enable or disable logic for IMX6SX,
     IMX7D, IMX8MM (Richard Zhu)

   - Remove redundant imx7d_pcie_init_phy() since
     imx7d_pcie_enable_ref_clk() does the same thing (Richard Zhu)

  Freescale Layerscape PCIe controller driver:

   - Simplify by using syscon_regmap_lookup_by_phandle_args() instead of
     syscon_regmap_lookup_by_phandle() followed by
     of_property_read_u32_array() (Krzysztof Kozlowski)

  Marvell MVEBU PCIe controller driver:

   - Add MODULE_DEVICE_TABLE() to enable module autoloading (Liao Chen)

  MediaTek PCIe Gen3 controller driver:

   - Use clk_bulk_prepare_enable() instead of separate
     clk_bulk_prepare() and clk_bulk_enable() (Lorenzo Bianconi)

   - Rearrange reset assert/deassert so they're both done in the
     *_power_up() callbacks (Lorenzo Bianconi)

   - Document that Airoha EN7581 requires PHY init and power-on before
     PHY reset deassert, unlike other MediaTek Gen3 controllers (Lorenzo
     Bianconi)

   - Move Airoha EN7581 post-reset delay from the en7581 clock .enable()
     method to mtk_pcie_en7581_power_up() (Lorenzo Bianconi)

   - Sleep instead of delay during Airoha EN7581 power-up, since this is
     a non-atomic context (Lorenzo Bianconi)

   - Skip PERST# assertion on Airoha EN7581 during probe and
     suspend/resume to avoid a hardware defect (Lorenzo Bianconi)

   - Enable async probe to reduce system startup time (Douglas Anderson)

  Microchip PolarFire PCIe controller driver:

   - Set up the inbound address translation based on whether the
     platform allows coherent or non-coherent DMA (Daire McNamara)

   - Update DT binding such that platforms are DMA-coherent by default
     and must specify 'dma-noncoherent' if needed (Conor Dooley)

  Mobiveil PCIe controller driver:

   - Convert mobiveil-pcie.txt to YAML and update 'interrupt-names' and
     'reg-names' (Frank Li)

  Qualcomm PCIe controller driver:

   - Add DT SM8550 and SM8650 optional 'global' interrupt for link
     events (Neil Armstrong)

   - Add DT 'compatible' strings for IPQ5424 PCIe controller (Manikanta
     Mylavarapu)

   - If 'global' IRQ is supported for detection of Link Up events, tell
     DWC core not to wait for link up (Krishna chaitanya chundru)

  Renesas R-Car PCIe controller driver:

   - Avoid passing stack buffer as resource name (King Dix)

  Rockchip PCIe controller driver:

   - Simplify clock and reset handling by using bulk interfaces (Anand
     Moon)

   - Pass typed rockchip_pcie (not void) pointer to
     rockchip_pcie_disable_clocks() (Anand Moon)

   - Return -ENOMEM, not success, when pci_epc_mem_alloc_addr() fails
     (Dan Carpenter)

  Rockchip DesignWare PCIe controller driver:

   - Use dll_link_up IRQ to detect Link Up and enumerate devices so
     users don't have to manually rescan (Niklas Cassel)

   - Tell DWC core not to wait for link up since the 'sys' interrupt is
     required and detects Link Up events (Niklas Cassel)

  Synopsys DesignWare PCIe controller driver:

   - Don't wait for link up in DWC core if driver can detect Link Up
     event (Krishna chaitanya chundru)

   - Update ICC and OPP votes after Link Up events (Krishna chaitanya
     chundru)

   - Always stop link in dw_pcie_suspend_noirq(), which is required at
     least for i.MX8QM to re-establish link on resume (Richard Zhu)

   - Drop racy and unnecessary LTSSM state check before sending
     PME_TURN_OFF message in dw_pcie_suspend_noirq() (Richard Zhu)

   - Add struct of_pci_range.parent_bus_addr for devices that need their
     immediate parent bus address, not the CPU address, e.g., to program
     an internal Address Translation Unit (iATU) (Frank Li)

  TI DRA7xx PCIe controller driver:

   - Simplify by using syscon_regmap_lookup_by_phandle_args() instead of
     syscon_regmap_lookup_by_phandle() followed by
     of_parse_phandle_with_fixed_args() or of_property_read_u32_index()
     (Krzysztof Kozlowski)

  Xilinx Versal CPM PCIe controller driver:

   - Add DT binding and driver support for Xilinx Versal CPM5
     (Thippeswamy Havalige)

  MicroSemi Switchtec management driver:

   - Add Microchip PCI100X device IDs (Rakesh Babu Saladi)

  Miscellaneous:

   - Move reset related sysfs code from pci.c to pci-sysfs.c where other
     similar code lives (Ilpo Järvinen)

   - Simplify reset_method_store() memory management by using __free()
     instead of explicit kfree() cleanup (Ilpo Järvinen)

   - Constify struct bin_attribute for sysfs, VPD, P2PDMA, and the IBM
     ACPI hotplug driver (Thomas Weißschuh)

   - Remove redundant PCI_VSEC_HDR and PCI_VSEC_HDR_LEN_SHIFT (Dongdong
     Zhang)

   - Correct documentation of the 'config_acs=' kernel parameter
     (Akihiko Odaki)"

* tag 'pci-v6.14-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci: (111 commits)
  PCI: Batch BAR sizing operations
  dt-bindings: PCI: microchip,pcie-host: Allow dma-noncoherent
  PCI: microchip: Set inbound address translation for coherent or non-coherent mode
  Documentation: Fix pci=config_acs= example
  PCI: Remove redundant PCI_VSEC_HDR and PCI_VSEC_HDR_LEN_SHIFT
  PCI: Don't include 'pm_wakeup.h' directly
  selftests: pci_endpoint: Migrate to Kselftest framework
  selftests: Move PCI Endpoint tests from tools/pci to Kselftests
  misc: pci_endpoint_test: Fix IOCTL return value
  dt-bindings: PCI: qcom: Document the IPQ5424 PCIe controller
  dt-bindings: PCI: qcom,pcie-sm8550: Document 'global' interrupt
  dt-bindings: PCI: mobiveil: Convert mobiveil-pcie.txt to YAML
  PCI: switchtec: Add Microchip PCI100X device IDs
  misc: pci_endpoint_test: Remove redundant 'remainder' test
  misc: pci_endpoint_test: Add consecutive BAR test
  misc: pci_endpoint_test: Add support for capabilities
  PCI: endpoint: pci-epf-test: Add support for capabilities
  PCI: endpoint: pci-epf-test: Fix check for DMA MEMCPY test
  PCI: endpoint: pci-epf-test: Set dma_chan_rx pointer to NULL on error
  PCI: dwc: Simplify config resource lookup
  ...
This commit is contained in commit 647d69605c.
@@ -81,8 +81,8 @@ device, the following commands can be used::

     # echo 0x104c > functions/pci_epf_test/func1/vendorid
     # echo 0xb500 > functions/pci_epf_test/func1/deviceid
-    # echo 16 > functions/pci_epf_test/func1/msi_interrupts
-    # echo 8 > functions/pci_epf_test/func1/msix_interrupts
+    # echo 32 > functions/pci_epf_test/func1/msi_interrupts
+    # echo 2048 > functions/pci_epf_test/func1/msix_interrupts


 Binding pci-epf-test Device to EP Controller

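The hunk above changes only the interrupt counts; for context, the configfs sequence the howto builds up looks like the following sketch (a config fragment that needs root, configfs, and a bound EPC, so it is not runnable standalone; the func1 directory name follows the document's own example):

```shell
# Create a pci_epf_test function instance and program its identity,
# using the updated maximum interrupt counts from the hunk above.
cd /sys/kernel/config/pci_ep/
mkdir -p functions/pci_epf_test/func1
echo 0x104c > functions/pci_epf_test/func1/vendorid
echo 0xb500 > functions/pci_epf_test/func1/deviceid
echo 32   > functions/pci_epf_test/func1/msi_interrupts
echo 2048 > functions/pci_epf_test/func1/msix_interrupts
```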
@@ -123,113 +123,83 @@ above::

 Using Endpoint Test function Device
 -----------------------------------

-pcitest.sh added in tools/pci/ can be used to run all the default PCI endpoint
-tests. To compile this tool the following commands should be used::
+Kselftest added in tools/testing/selftests/pci_endpoint can be used to run all
+the default PCI endpoint tests. To build the Kselftest for PCI endpoint
+subsystem, the following commands should be used::

         # cd <kernel-dir>
-        # make -C tools/pci
+        # make -C tools/testing/selftests/pci_endpoint

 or if you desire to compile and install in your system::

         # cd <kernel-dir>
-        # make -C tools/pci install
+        # make -C tools/testing/selftests/pci_endpoint INSTALL_PATH=/usr/bin install

-The tool and script will be located in <rootfs>/usr/bin/
+The test will be located in <rootfs>/usr/bin/

-pcitest.sh Output
-~~~~~~~~~~~~~~~~~
+Kselftest Output
+~~~~~~~~~~~~~~~~
 ::

-        # pcitest.sh
-        BAR tests
-
-        BAR0: OKAY
-        BAR1: OKAY
-        BAR2: OKAY
-        BAR3: OKAY
-        BAR4: NOT OKAY
-        BAR5: NOT OKAY
-
-        Interrupt tests
-
-        SET IRQ TYPE TO LEGACY: OKAY
-        LEGACY IRQ: NOT OKAY
-        SET IRQ TYPE TO MSI: OKAY
-        MSI1: OKAY
-        MSI2: OKAY
-        MSI3: OKAY
-        MSI4: OKAY
-        MSI5: OKAY
-        MSI6: OKAY
-        MSI7: OKAY
-        MSI8: OKAY
-        MSI9: OKAY
-        MSI10: OKAY
-        MSI11: OKAY
-        MSI12: OKAY
-        MSI13: OKAY
-        MSI14: OKAY
-        MSI15: OKAY
-        MSI16: OKAY
-        MSI17: NOT OKAY
-        MSI18: NOT OKAY
-        MSI19: NOT OKAY
-        MSI20: NOT OKAY
-        MSI21: NOT OKAY
-        MSI22: NOT OKAY
-        MSI23: NOT OKAY
-        MSI24: NOT OKAY
-        MSI25: NOT OKAY
-        MSI26: NOT OKAY
-        MSI27: NOT OKAY
-        MSI28: NOT OKAY
-        MSI29: NOT OKAY
-        MSI30: NOT OKAY
-        MSI31: NOT OKAY
-        MSI32: NOT OKAY
-        SET IRQ TYPE TO MSI-X: OKAY
-        MSI-X1: OKAY
-        MSI-X2: OKAY
-        MSI-X3: OKAY
-        MSI-X4: OKAY
-        MSI-X5: OKAY
-        MSI-X6: OKAY
-        MSI-X7: OKAY
-        MSI-X8: OKAY
-        MSI-X9: NOT OKAY
-        MSI-X10: NOT OKAY
-        MSI-X11: NOT OKAY
-        MSI-X12: NOT OKAY
-        MSI-X13: NOT OKAY
-        MSI-X14: NOT OKAY
-        MSI-X15: NOT OKAY
-        MSI-X16: NOT OKAY
-        [...]
-        MSI-X2047: NOT OKAY
-        MSI-X2048: NOT OKAY
-
-        Read Tests
-
-        SET IRQ TYPE TO MSI: OKAY
-        READ (      1 bytes): OKAY
-        READ (   1024 bytes): OKAY
-        READ (   1025 bytes): OKAY
-        READ (1024000 bytes): OKAY
-        READ (1024001 bytes): OKAY
-
-        Write Tests
-
-        WRITE (      1 bytes): OKAY
-        WRITE (   1024 bytes): OKAY
-        WRITE (   1025 bytes): OKAY
-        WRITE (1024000 bytes): OKAY
-        WRITE (1024001 bytes): OKAY
-
-        Copy Tests
-
-        COPY (      1 bytes): OKAY
-        COPY (   1024 bytes): OKAY
-        COPY (   1025 bytes): OKAY
-        COPY (1024000 bytes): OKAY
-        COPY (1024001 bytes): OKAY
+        # pci_endpoint_test
+        TAP version 13
+        1..16
+        # Starting 16 tests from 9 test cases.
+        # RUN pci_ep_bar.BAR0.BAR_TEST ...
+        # OK pci_ep_bar.BAR0.BAR_TEST
+        ok 1 pci_ep_bar.BAR0.BAR_TEST
+        # RUN pci_ep_bar.BAR1.BAR_TEST ...
+        # OK pci_ep_bar.BAR1.BAR_TEST
+        ok 2 pci_ep_bar.BAR1.BAR_TEST
+        # RUN pci_ep_bar.BAR2.BAR_TEST ...
+        # OK pci_ep_bar.BAR2.BAR_TEST
+        ok 3 pci_ep_bar.BAR2.BAR_TEST
+        # RUN pci_ep_bar.BAR3.BAR_TEST ...
+        # OK pci_ep_bar.BAR3.BAR_TEST
+        ok 4 pci_ep_bar.BAR3.BAR_TEST
+        # RUN pci_ep_bar.BAR4.BAR_TEST ...
+        # OK pci_ep_bar.BAR4.BAR_TEST
+        ok 5 pci_ep_bar.BAR4.BAR_TEST
+        # RUN pci_ep_bar.BAR5.BAR_TEST ...
+        # OK pci_ep_bar.BAR5.BAR_TEST
+        ok 6 pci_ep_bar.BAR5.BAR_TEST
+        # RUN pci_ep_basic.CONSECUTIVE_BAR_TEST ...
+        # OK pci_ep_basic.CONSECUTIVE_BAR_TEST
+        ok 7 pci_ep_basic.CONSECUTIVE_BAR_TEST
+        # RUN pci_ep_basic.LEGACY_IRQ_TEST ...
+        # OK pci_ep_basic.LEGACY_IRQ_TEST
+        ok 8 pci_ep_basic.LEGACY_IRQ_TEST
+        # RUN pci_ep_basic.MSI_TEST ...
+        # OK pci_ep_basic.MSI_TEST
+        ok 9 pci_ep_basic.MSI_TEST
+        # RUN pci_ep_basic.MSIX_TEST ...
+        # OK pci_ep_basic.MSIX_TEST
+        ok 10 pci_ep_basic.MSIX_TEST
+        # RUN pci_ep_data_transfer.memcpy.READ_TEST ...
+        # OK pci_ep_data_transfer.memcpy.READ_TEST
+        ok 11 pci_ep_data_transfer.memcpy.READ_TEST
+        # RUN pci_ep_data_transfer.memcpy.WRITE_TEST ...
+        # OK pci_ep_data_transfer.memcpy.WRITE_TEST
+        ok 12 pci_ep_data_transfer.memcpy.WRITE_TEST
+        # RUN pci_ep_data_transfer.memcpy.COPY_TEST ...
+        # OK pci_ep_data_transfer.memcpy.COPY_TEST
+        ok 13 pci_ep_data_transfer.memcpy.COPY_TEST
+        # RUN pci_ep_data_transfer.dma.READ_TEST ...
+        # OK pci_ep_data_transfer.dma.READ_TEST
+        ok 14 pci_ep_data_transfer.dma.READ_TEST
+        # RUN pci_ep_data_transfer.dma.WRITE_TEST ...
+        # OK pci_ep_data_transfer.dma.WRITE_TEST
+        ok 15 pci_ep_data_transfer.dma.WRITE_TEST
+        # RUN pci_ep_data_transfer.dma.COPY_TEST ...
+        # OK pci_ep_data_transfer.dma.COPY_TEST
+        ok 16 pci_ep_data_transfer.dma.COPY_TEST
+        # PASSED: 16 / 16 tests passed.
+        # Totals: pass:16 fail:0 xfail:0 xpass:0 skip:0 error:0
+
+Testcase 16 (pci_ep_data_transfer.dma.COPY_TEST) will fail for most of the DMA
+capable endpoint controllers due to the absence of the MEMCPY over DMA. For such
+controllers, it is advisable to skip this testcase using this
+command::
+
+        # pci_endpoint_test -f pci_ep_bar -f pci_ep_basic -v memcpy -T COPY_TEST -v dma

@@ -4830,7 +4830,7 @@
 				'1' – force enabled
 				'x' – unchanged
 			For example,
-			pci=config_acs=10x
+			pci=config_acs=10x@pci:0:0
 			would configure all devices that support
 			ACS to enable P2P Request Redirect, disable
 			Translation Blocking, and leave Source

@@ -17,11 +17,11 @@ description:
 properties:
   clocks:
     minItems: 3
-    maxItems: 4
+    maxItems: 5

   clock-names:
     minItems: 3
-    maxItems: 4
+    maxItems: 5

   num-lanes:
     const: 1

@@ -22,6 +22,7 @@ properties:
           - fsl,imx8mm-pcie-ep
           - fsl,imx8mq-pcie-ep
           - fsl,imx8mp-pcie-ep
+          - fsl,imx8q-pcie-ep
           - fsl,imx95-pcie-ep

   clocks:

@@ -74,6 +75,20 @@ allOf:
             - const: dbi2
             - const: atu

+  - if:
+      properties:
+        compatible:
+          enum:
+            - fsl,imx8q-pcie-ep
+    then:
+      properties:
+        reg:
+          maxItems: 2
+        reg-names:
+          items:
+            - const: dbi
+            - const: addr_space
+
   - if:
       properties:
         compatible:

@@ -103,13 +118,21 @@ allOf:
       properties:
         clocks:
           minItems: 4
           maxItems: 4
         clock-names:
           items:
             - const: pcie
             - const: pcie_bus
             - const: pcie_phy
             - const: pcie_aux
-    else:
+
+  - if:
+      properties:
+        compatible:
+          enum:
+            - fsl,imx8mm-pcie-ep
+            - fsl,imx8mp-pcie-ep
+    then:
+      properties:
+        clocks:
+          maxItems: 3

@@ -119,6 +142,20 @@ allOf:
             - const: pcie_bus
             - const: pcie_aux

+  - if:
+      properties:
+        compatible:
+          enum:
+            - fsl,imx8q-pcie-ep
+    then:
+      properties:
+        clocks:
+          maxItems: 3
+        clock-names:
+          items:
+            - const: dbi
+            - const: mstr
+            - const: slv
+
 unevaluatedProperties: false

@@ -40,10 +40,11 @@ properties:
       - description: PCIe PHY clock.
       - description: Additional required clock entry for imx6sx-pcie,
           imx6sx-pcie-ep, imx8mq-pcie, imx8mq-pcie-ep.
+      - description: PCIe reference clock.

   clock-names:
     minItems: 3
-    maxItems: 4
+    maxItems: 5

   interrupts:
     items:

@@ -127,7 +128,7 @@ allOf:
     then:
       properties:
         clocks:
-          minItems: 4
+          maxItems: 4
         clock-names:
           items:
             - const: pcie

@@ -140,11 +141,10 @@ allOf:
       compatible:
         enum:
           - fsl,imx8mq-pcie
-          - fsl,imx95-pcie
     then:
       properties:
         clocks:
-          minItems: 4
+          maxItems: 4
         clock-names:
           items:
             - const: pcie

@@ -200,6 +200,23 @@ allOf:
             - const: mstr
             - const: slv

+  - if:
+      properties:
+        compatible:
+          enum:
+            - fsl,imx95-pcie
+    then:
+      properties:
+        clocks:
+          maxItems: 5
+        clock-names:
+          items:
+            - const: pcie
+            - const: pcie_bus
+            - const: pcie_phy
+            - const: pcie_aux
+            - const: ref
+
 unevaluatedProperties: false

 examples:

@@ -1,52 +0,0 @@
-NXP Layerscape PCIe Gen4 controller
-
-This PCIe controller is based on the Mobiveil PCIe IP and thus inherits all
-the common properties defined in mobiveil-pcie.txt.
-
-Required properties:
-- compatible: should contain the platform identifier such as:
-  "fsl,lx2160a-pcie"
-- reg: base addresses and lengths of the PCIe controller register blocks.
-  "csr_axi_slave": Bridge config registers
-  "config_axi_slave": PCIe controller registers
-- interrupts: A list of interrupt outputs of the controller. Must contain an
-  entry for each entry in the interrupt-names property.
-- interrupt-names: It could include the following entries:
-  "intr": The interrupt that is asserted for controller interrupts
-  "aer": Asserted for aer interrupt when chip support the aer interrupt with
-         none MSI/MSI-X/INTx mode,but there is interrupt line for aer.
-  "pme": Asserted for pme interrupt when chip support the pme interrupt with
-         none MSI/MSI-X/INTx mode,but there is interrupt line for pme.
-- dma-coherent: Indicates that the hardware IP block can ensure the coherency
-  of the data transferred from/to the IP block. This can avoid the software
-  cache flush/invalid actions, and improve the performance significantly.
-- msi-parent : See the generic MSI binding described in
-  Documentation/devicetree/bindings/interrupt-controller/msi.txt.
-
-Example:
-
-        pcie@3400000 {
-                compatible = "fsl,lx2160a-pcie";
-                reg = <0x00 0x03400000 0x0 0x00100000   /* controller registers */
-                       0x80 0x00000000 0x0 0x00001000>; /* configuration space */
-                reg-names = "csr_axi_slave", "config_axi_slave";
-                interrupts = <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>, /* AER interrupt */
-                             <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>, /* PME interrupt */
-                             <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>; /* controller interrupt */
-                interrupt-names = "aer", "pme", "intr";
-                #address-cells = <3>;
-                #size-cells = <2>;
-                device_type = "pci";
-                apio-wins = <8>;
-                ppio-wins = <8>;
-                dma-coherent;
-                bus-range = <0x0 0xff>;
-                msi-parent = <&its>;
-                ranges = <0x82000000 0x0 0x40000000 0x80 0x40000000 0x0 0x40000000>;
-                #interrupt-cells = <1>;
-                interrupt-map-mask = <0 0 0 7>;
-                interrupt-map = <0000 0 0 1 &gic 0 0 GIC_SPI 109 IRQ_TYPE_LEVEL_HIGH>,
-                                <0000 0 0 2 &gic 0 0 GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH>,
-                                <0000 0 0 3 &gic 0 0 GIC_SPI 111 IRQ_TYPE_LEVEL_HIGH>,
-                                <0000 0 0 4 &gic 0 0 GIC_SPI 112 IRQ_TYPE_LEVEL_HIGH>;
-        };

@@ -0,0 +1,173 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pci/mbvl,gpex40-pcie.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Mobiveil AXI PCIe Host Bridge
+
+maintainers:
+  - Frank Li <Frank.Li@nxp.com>
+
+description:
+  Mobiveil's GPEX 4.0 is a PCIe Gen4 host bridge IP. This configurable IP
+  has up to 8 outbound and inbound windows for address translation.
+
+  NXP Layerscape PCIe Gen4 controller (Deprecated) based on Mobiveil's
+  GPEX 4.0.
+
+properties:
+  compatible:
+    enum:
+      - fsl,lx2160a-pcie
+      - mbvl,gpex40-pcie
+
+  reg:
+    items:
+      - description: PCIe controller registers
+      - description: Bridge config registers
+      - description: GPIO registers to control slot power
+      - description: MSI registers
+    minItems: 2
+
+  reg-names:
+    items:
+      - const: csr_axi_slave
+      - const: config_axi_slave
+      - const: gpio_slave
+      - const: apb_csr
+    minItems: 2
+
+  apio-wins:
+    $ref: /schemas/types.yaml#/definitions/uint32
+    description: |
+      number of requested APIO outbound windows
+      1. Config window
+      2. Memory window
+    default: 2
+    maximum: 256
+
+  ppio-wins:
+    $ref: /schemas/types.yaml#/definitions/uint32
+    description: number of requested PPIO inbound windows
+    default: 1
+    maximum: 256
+
+  interrupt-controller: true
+
+  "#interrupt-cells":
+    const: 1
+
+  interrupts:
+    minItems: 1
+    maxItems: 3
+
+  interrupt-names:
+    minItems: 1
+    maxItems: 3
+
+  dma-coherent: true
+
+  msi-parent: true
+
+required:
+  - compatible
+  - reg
+  - reg-names
+
+allOf:
+  - $ref: /schemas/pci/pci-host-bridge.yaml#
+  - if:
+      properties:
+        compatible:
+          enum:
+            - fsl,lx2160a-pcie
+    then:
+      properties:
+        reg:
+          maxItems: 2
+        reg-names:
+          maxItems: 2
+        interrupts:
+          minItems: 3
+        interrupt-names:
+          items:
+            - const: aer
+            - const: pme
+            - const: intr
+    else:
+      properties:
+        dma-coherent: false
+        msi-parent: false
+        interrupts:
+          maxItems: 1
+        interrupt-names: false
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+
+    pcie@b0000000 {
+        compatible = "mbvl,gpex40-pcie";
+        reg = <0xb0000000 0x00010000>,
+              <0xa0000000 0x00001000>,
+              <0xff000000 0x00200000>,
+              <0xb0010000 0x00001000>;
+        reg-names = "csr_axi_slave",
+                    "config_axi_slave",
+                    "gpio_slave",
+                    "apb_csr";
+        ranges = <0x83000000 0 0x00000000 0xa8000000 0 0x8000000>;
+        #address-cells = <3>;
+        #size-cells = <2>;
+        device_type = "pci";
+        apio-wins = <2>;
+        ppio-wins = <1>;
+        bus-range = <0x00 0xff>;
+        interrupt-controller;
+        #interrupt-cells = <1>;
+        interrupt-parent = <&gic>;
+        interrupts = <GIC_SPI 89 IRQ_TYPE_LEVEL_HIGH>;
+        interrupt-map-mask = <0 0 0 7>;
+        interrupt-map = <0 0 0 0 &pci_express 0>,
+                        <0 0 0 1 &pci_express 1>,
+                        <0 0 0 2 &pci_express 2>,
+                        <0 0 0 3 &pci_express 3>;
+    };
+
+  - |
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+
+    soc {
+        #address-cells = <2>;
+        #size-cells = <2>;
+
+        pcie@3400000 {
+            compatible = "fsl,lx2160a-pcie";
+            reg = <0x00 0x03400000 0x0 0x00100000   /* controller registers */
+                   0x80 0x00000000 0x0 0x00001000>; /* configuration space */
+            reg-names = "csr_axi_slave", "config_axi_slave";
+            ranges = <0x82000000 0x0 0x40000000 0x80 0x40000000 0x0 0x40000000>;
+            interrupts = <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>, /* AER interrupt */
+                         <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>, /* PME interrupt */
+                         <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>; /* controller interrupt */
+            interrupt-names = "aer", "pme", "intr";
+            #address-cells = <3>;
+            #size-cells = <2>;
+            device_type = "pci";
+            apio-wins = <8>;
+            ppio-wins = <8>;
+            dma-coherent;
+            bus-range = <0x00 0xff>;
+            msi-parent = <&its>;
+            #interrupt-cells = <1>;
+            interrupt-map-mask = <0 0 0 7>;
+            interrupt-map = <0000 0 0 1 &gic 0 0 GIC_SPI 109 IRQ_TYPE_LEVEL_HIGH>,
+                            <0000 0 0 2 &gic 0 0 GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH>,
+                            <0000 0 0 3 &gic 0 0 GIC_SPI 111 IRQ_TYPE_LEVEL_HIGH>,
+                            <0000 0 0 4 &gic 0 0 GIC_SPI 112 IRQ_TYPE_LEVEL_HIGH>;
+        };
+    };

@@ -50,6 +50,8 @@ properties:
       items:
         pattern: '^fic[0-3]$'

+  dma-coherent: true
+
   ranges:
     minItems: 1
     maxItems: 3

@@ -1,72 +0,0 @@
-* Mobiveil AXI PCIe Root Port Bridge DT description
-
-Mobiveil's GPEX 4.0 is a PCIe Gen4 root port bridge IP. This configurable IP
-has up to 8 outbound and inbound windows for the address translation.
-
-Required properties:
-- #address-cells: Address representation for root ports, set to <3>
-- #size-cells: Size representation for root ports, set to <2>
-- #interrupt-cells: specifies the number of cells needed to encode an
-  interrupt source. The value must be 1.
-- compatible: Should contain "mbvl,gpex40-pcie"
-- reg: Should contain PCIe registers location and length
-  Mandatory:
-  "config_axi_slave": PCIe controller registers
-  "csr_axi_slave": Bridge config registers
-  Optional:
-  "gpio_slave": GPIO registers to control slot power
-  "apb_csr": MSI registers
-
-- device_type: must be "pci"
-- apio-wins : number of requested apio outbound windows
-  default 2 outbound windows are configured -
-  1. Config window
-  2. Memory window
-- ppio-wins : number of requested ppio inbound windows
-  default 1 inbound memory window is configured.
-- bus-range: PCI bus numbers covered
-- interrupt-controller: identifies the node as an interrupt controller
-- #interrupt-cells: specifies the number of cells needed to encode an
-  interrupt source. The value must be 1.
-- interrupts: The interrupt line of the PCIe controller
-  last cell of this field is set to 4 to
-  denote it as IRQ_TYPE_LEVEL_HIGH type interrupt.
-- interrupt-map-mask,
-  interrupt-map: standard PCI properties to define the mapping of the
-  PCI interface to interrupt numbers.
-- ranges: ranges for the PCI memory regions (I/O space region is not
-  supported by hardware)
-  Please refer to the standard PCI bus binding document for a more
-  detailed explanation
-
-
-Example:
-++++++++
-        pcie0: pcie@a0000000 {
-                #address-cells = <3>;
-                #size-cells = <2>;
-                compatible = "mbvl,gpex40-pcie";
-                reg = <0xa0000000 0x00001000>,
-                      <0xb0000000 0x00010000>,
-                      <0xff000000 0x00200000>,
-                      <0xb0010000 0x00001000>;
-                reg-names = "config_axi_slave",
-                            "csr_axi_slave",
-                            "gpio_slave",
-                            "apb_csr";
-                device_type = "pci";
-                apio-wins = <2>;
-                ppio-wins = <1>;
-                bus-range = <0x00000000 0x000000ff>;
-                interrupt-controller;
-                interrupt-parent = <&gic>;
-                #interrupt-cells = <1>;
-                interrupts = < 0 89 4 >;
-                interrupt-map-mask = <0 0 0 7>;
-                interrupt-map = <0 0 0 0 &pci_express 0>,
-                                <0 0 0 1 &pci_express 1>,
-                                <0 0 0 2 &pci_express 2>,
-                                <0 0 0 3 &pci_express 3>;
-                ranges = < 0x83000000 0 0x00000000 0xa8000000 0 0x8000000>;
-
-        };

@@ -57,9 +57,10 @@ properties:

   interrupts:
     minItems: 8
-    maxItems: 8
+    maxItems: 9

   interrupt-names:
     minItems: 8
     items:
       - const: msi0
       - const: msi1

@@ -69,6 +70,7 @@ properties:
       - const: msi5
       - const: msi6
       - const: msi7
+      - const: global

   resets:
     minItems: 1

@@ -139,9 +141,10 @@ examples:
                      <GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH>,
                      <GIC_SPI 146 IRQ_TYPE_LEVEL_HIGH>,
                      <GIC_SPI 147 IRQ_TYPE_LEVEL_HIGH>,
-                     <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>;
+                     <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>;
         interrupt-names = "msi0", "msi1", "msi2", "msi3",
-                          "msi4", "msi5", "msi6", "msi7";
+                          "msi4", "msi5", "msi6", "msi7", "global";
         #interrupt-cells = <1>;
         interrupt-map-mask = <0 0 0 0x7>;
         interrupt-map = <0 0 0 1 &intc 0 0 0 149 IRQ_TYPE_LEVEL_HIGH>, /* int_a */

@@ -31,6 +31,10 @@ properties:
           - qcom,pcie-qcs404
           - qcom,pcie-sdm845
           - qcom,pcie-sdx55
+      - items:
+          - enum:
+              - qcom,pcie-ipq5424
+          - const: qcom,pcie-ipq9574
       - items:
           - const: qcom,pcie-msm8998
           - const: qcom,pcie-msm8996

@@ -17,6 +17,7 @@ properties:
     enum:
       - xlnx,versal-cpm-host-1.00
       - xlnx,versal-cpm5-host
+      - xlnx,versal-cpm5-host1

   reg:
     items:

@@ -18009,7 +18009,7 @@ M: Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>
 M: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
 L: linux-pci@vger.kernel.org
 S: Supported
-F: Documentation/devicetree/bindings/pci/mobiveil-pcie.txt
+F: Documentation/devicetree/bindings/pci/mbvl,gpex40-pcie.yaml
 F: drivers/pci/controller/mobiveil/pcie-mobiveil*

 PCI DRIVER FOR MVEBU (Marvell Armada 370 and Armada XP SOC support)

@@ -18033,7 +18033,6 @@ M: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
 L: linux-pci@vger.kernel.org
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
-F: Documentation/devicetree/bindings/pci/layerscape-pcie-gen4.txt
 F: drivers/pci/controller/mobiveil/pcie-layerscape-gen4.c

 PCI DRIVER FOR PLDA PCIE IP

@@ -18111,7 +18110,7 @@ F: Documentation/PCI/endpoint/*
 F: Documentation/misc-devices/pci-endpoint-test.rst
 F: drivers/misc/pci_endpoint_test.c
 F: drivers/pci/endpoint/
-F: tools/pci/
+F: tools/testing/selftests/pci_endpoint/

 PCI ENHANCED ERROR HANDLING (EEH) FOR POWERPC
 M: Mahesh J Salgaonkar <mahesh@linux.ibm.com>

@@ -361,7 +361,7 @@ void pci_determine_mem_io_space(struct pci_pbm_info *pbm)
 	int i, saw_mem, saw_io;
 	int num_pbm_ranges;

-	/* Corresponding generic code in of_pci_get_host_bridge_resources() */
+	/* Corresponds to generic devm_of_pci_get_host_bridge_resources() */

 	saw_mem = saw_io = 0;
 	pbm_ranges = of_get_property(pbm->op->dev.of_node, "ranges", &i);

@@ -1010,4 +1010,34 @@ DECLARE_PCI_FIXUP_SUSPEND(PCI_VENDOR_ID_AMD, 0x1668, amd_rp_pme_suspend);
 DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_AMD, 0x1668, amd_rp_pme_resume);
 DECLARE_PCI_FIXUP_SUSPEND(PCI_VENDOR_ID_AMD, 0x1669, amd_rp_pme_suspend);
 DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_AMD, 0x1669, amd_rp_pme_resume);
+
+/*
+ * Putting PCIe root ports on Ryzen SoCs with USB4 controllers into D3hot
+ * may cause problems when the system attempts wake up from s2idle.
+ *
+ * On the TUXEDO Sirius 16 Gen 1 with a specific old BIOS this manifests as
+ * a system hang.
+ */
+static const struct dmi_system_id quirk_tuxeo_rp_d3_dmi_table[] = {
+	{
+		.matches = {
+			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
+			DMI_EXACT_MATCH(DMI_BOARD_NAME, "APX958"),
+			DMI_EXACT_MATCH(DMI_BIOS_VERSION, "V1.00A00_20240108"),
+		},
+	},
+	{}
+};
+
+static void quirk_tuxeo_rp_d3(struct pci_dev *pdev)
+{
+	struct pci_dev *root_pdev;
+
+	if (dmi_check_system(quirk_tuxeo_rp_d3_dmi_table)) {
+		root_pdev = pcie_find_root_port(pdev);
+		if (root_pdev)
+			root_pdev->dev_flags |= PCI_DEV_FLAGS_NO_D3;
+	}
+}
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, 0x1502, quirk_tuxeo_rp_d3);
 #endif /* CONFIG_SUSPEND */

@@ -1987,7 +1987,7 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)

 	if (ahci_init_msi(pdev, n_ports, hpriv) < 0) {
 		/* legacy intx interrupts */
-		pci_intx(pdev, 1);
+		pcim_intx(pdev, 1);
 	}
 	hpriv->irq = pci_irq_vector(pdev, 0);

@@ -1725,7 +1725,7 @@ static int piix_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 	 * message-signalled interrupts currently).
 	 */
 	if (port_flags & PIIX_FLAG_CHECKINTR)
-		pci_intx(pdev, 1);
+		pcim_intx(pdev, 1);

 	if (piix_check_450nx_errata(pdev)) {
 		/* This writes into the master table but it does not

@@ -340,7 +340,7 @@ static int rdc_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 		return rc;
 	host->private_data = hpriv;

-	pci_intx(pdev, 1);
+	pcim_intx(pdev, 1);

 	host->flags |= ATA_HOST_PARALLEL_SCAN;

@@ -1316,7 +1316,7 @@ static int sil24_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)

 	if (sata_sil24_msi && !pci_enable_msi(pdev)) {
 		dev_info(&pdev->dev, "Using MSI\n");
-		pci_intx(pdev, 0);
+		pcim_intx(pdev, 0);
 	}

 	pci_set_master(pdev);

@@ -290,7 +290,7 @@ static int sis_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 	}

 	pci_set_master(pdev);
-	pci_intx(pdev, 1);
+	pcim_intx(pdev, 1);
 	return ata_host_activate(host, pdev->irq, ata_bmdma_interrupt,
 				 IRQF_SHARED, &sis_sht);
 }

@@ -221,7 +221,7 @@ static int uli_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 	}

 	pci_set_master(pdev);
-	pci_intx(pdev, 1);
+	pcim_intx(pdev, 1);
 	return ata_host_activate(host, pdev->irq, ata_bmdma_interrupt,
 				 IRQF_SHARED, &uli_sht);
 }

@@ -384,7 +384,7 @@ static int vsc_sata_init_one(struct pci_dev *pdev,
 	pci_write_config_byte(pdev, PCI_CACHE_LINE_SIZE, 0x80);

 	if (pci_enable_msi(pdev) == 0)
-		pci_intx(pdev, 0);
+		pcim_intx(pdev, 0);

 	/*
 	 * Config offset 0x98 is "Extended Control and Status Register 0"

@@ -489,7 +489,6 @@ static int en7581_pci_enable(struct clk_hw *hw)
 	      REG_PCI_CONTROL_PERSTOUT;
 	val = readl(np_base + REG_PCI_CONTROL);
 	writel(val | mask, np_base + REG_PCI_CONTROL);
-	msleep(250);

 	return 0;
 }

@@ -122,7 +122,7 @@ int amd_sfh_irq_init_v2(struct amd_mp2_dev *privdata)
 {
 	int rc;

-	pci_intx(privdata->pdev, true);
+	pcim_intx(privdata->pdev, true);

 	rc = devm_request_irq(&privdata->pdev->dev, privdata->pdev->irq,
 			      amd_sfh_irq_handler, 0, DRIVER_NAME, privdata);

@@ -248,7 +248,7 @@ static void amd_mp2_pci_remove(void *privdata)
 	struct amd_mp2_dev *mp2 = privdata;
 	amd_sfh_hid_client_deinit(privdata);
 	mp2->mp2_ops->stop_all(mp2);
-	pci_intx(mp2->pdev, false);
+	pcim_intx(mp2->pdev, false);
 	amd_sfh_clear_intr(mp2);
 }

@@ -311,7 +311,7 @@ static void amd_mp2_pci_remove(void *privdata)
 	sfh_deinit_emp2();
 	amd_sfh_hid_client_deinit(privdata);
 	mp2->mp2_ops->stop_all(mp2);
-	pci_intx(mp2->pdev, false);
+	pcim_intx(mp2->pdev, false);
 	amd_sfh_clear_intr(mp2);
 }

@@ -69,6 +69,9 @@
 #define PCI_ENDPOINT_TEST_FLAGS			0x2c
 #define FLAG_USE_DMA				BIT(0)

+#define PCI_ENDPOINT_TEST_CAPS			0x30
+#define CAP_UNALIGNED_ACCESS			BIT(0)
+
 #define PCI_DEVICE_ID_TI_AM654			0xb00c
 #define PCI_DEVICE_ID_TI_J7200			0xb00f
 #define PCI_DEVICE_ID_TI_AM64			0xb010

@@ -166,43 +169,47 @@ static void pci_endpoint_test_free_irq_vectors(struct pci_endpoint_test *test)
 	test->irq_type = IRQ_TYPE_UNDEFINED;
 }

-static bool pci_endpoint_test_alloc_irq_vectors(struct pci_endpoint_test *test,
+static int pci_endpoint_test_alloc_irq_vectors(struct pci_endpoint_test *test,
 						int type)
 {
-	int irq = -1;
+	int irq;
 	struct pci_dev *pdev = test->pdev;
 	struct device *dev = &pdev->dev;
-	bool res = true;

 	switch (type) {
 	case IRQ_TYPE_INTX:
 		irq = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_INTX);
-		if (irq < 0)
+		if (irq < 0) {
 			dev_err(dev, "Failed to get Legacy interrupt\n");
+			return irq;
+		}
+
 		break;
 	case IRQ_TYPE_MSI:
 		irq = pci_alloc_irq_vectors(pdev, 1, 32, PCI_IRQ_MSI);
-		if (irq < 0)
+		if (irq < 0) {
 			dev_err(dev, "Failed to get MSI interrupts\n");
+			return irq;
+		}
+
 		break;
 	case IRQ_TYPE_MSIX:
 		irq = pci_alloc_irq_vectors(pdev, 1, 2048, PCI_IRQ_MSIX);
-		if (irq < 0)
+		if (irq < 0) {
 			dev_err(dev, "Failed to get MSI-X interrupts\n");
+			return irq;
+		}
+
 		break;
 	default:
 		dev_err(dev, "Invalid IRQ type selected\n");
-	}
-
-	if (irq < 0) {
-		irq = 0;
-		res = false;
+		return -EINVAL;
 	}

 	test->irq_type = type;
 	test->num_irqs = irq;

-	return res;
+	return 0;
 }

 static void pci_endpoint_test_release_irq(struct pci_endpoint_test *test)

@@ -217,22 +224,22 @@ static void pci_endpoint_test_release_irq(struct pci_endpoint_test *test)
 	test->num_irqs = 0;
 }

-static bool pci_endpoint_test_request_irq(struct pci_endpoint_test *test)
+static int pci_endpoint_test_request_irq(struct pci_endpoint_test *test)
 {
 	int i;
-	int err;
+	int ret;
 	struct pci_dev *pdev = test->pdev;
 	struct device *dev = &pdev->dev;

 	for (i = 0; i < test->num_irqs; i++) {
-		err = devm_request_irq(dev, pci_irq_vector(pdev, i),
+		ret = devm_request_irq(dev, pci_irq_vector(pdev, i),
 				       pci_endpoint_test_irqhandler,
 				       IRQF_SHARED, test->name, test);
-		if (err)
+		if (ret)
 			goto fail;
 	}

-	return true;
+	return 0;

 fail:
 	switch (irq_type) {

@@ -252,7 +259,7 @@ fail:
 		break;
 	}

-	return false;
+	return ret;
 }

 static const u32 bar_test_pattern[] = {

@@ -277,16 +284,16 @@ static int pci_endpoint_test_bar_memcmp(struct pci_endpoint_test *test,
 	return memcmp(write_buf, read_buf, size);
 }

-static bool pci_endpoint_test_bar(struct pci_endpoint_test *test,
+static int pci_endpoint_test_bar(struct pci_endpoint_test *test,
 				  enum pci_barno barno)
 {
-	int j, bar_size, buf_size, iters, remain;
+	int j, bar_size, buf_size, iters;
 	void *write_buf __free(kfree) = NULL;
 	void *read_buf __free(kfree) = NULL;
 	struct pci_dev *pdev = test->pdev;

 	if (!test->bar[barno])
-		return false;
+		return -ENOMEM;

 	bar_size = pci_resource_len(pdev, barno);

@@ -301,28 +308,105 @@ static bool pci_endpoint_test_bar(struct pci_endpoint_test *test,

 	write_buf = kmalloc(buf_size, GFP_KERNEL);
 	if (!write_buf)
-		return false;
+		return -ENOMEM;

 	read_buf = kmalloc(buf_size, GFP_KERNEL);
 	if (!read_buf)
-		return false;
+		return -ENOMEM;

 	iters = bar_size / buf_size;
 	for (j = 0; j < iters; j++)
 		if (pci_endpoint_test_bar_memcmp(test, barno, buf_size * j,
 						 write_buf, read_buf, buf_size))
-			return false;
+			return -EIO;

-	remain = bar_size % buf_size;
-	if (remain)
-		if (pci_endpoint_test_bar_memcmp(test, barno, buf_size * iters,
-						 write_buf, read_buf, remain))
-			return false;
-
-	return true;
+	return 0;
 }

-static bool pci_endpoint_test_intx_irq(struct pci_endpoint_test *test)
+static u32 bar_test_pattern_with_offset(enum pci_barno barno, int offset)
+{
+	u32 val;
+
+	/* Keep the BAR pattern in the top byte. */
+	val = bar_test_pattern[barno] & 0xff000000;
+	/* Store the (partial) offset in the remaining bytes. */
+	val |= offset & 0x00ffffff;
+
+	return val;
+}
+
+static void pci_endpoint_test_bars_write_bar(struct pci_endpoint_test *test,
+					     enum pci_barno barno)
+{
+	struct pci_dev *pdev = test->pdev;
+	int j, size;
+
+	size = pci_resource_len(pdev, barno);
+
+	if (barno == test->test_reg_bar)
+		size = 0x4;
+
+	for (j = 0; j < size; j += 4)
+		writel_relaxed(bar_test_pattern_with_offset(barno, j),
+			       test->bar[barno] + j);
+}
+
+static int pci_endpoint_test_bars_read_bar(struct pci_endpoint_test *test,
+					   enum pci_barno barno)
+{
+	struct pci_dev *pdev = test->pdev;
+	struct device *dev = &pdev->dev;
+	int j, size;
+	u32 val;
+
+	size = pci_resource_len(pdev, barno);
+
+	if (barno == test->test_reg_bar)
+		size = 0x4;
+
+	for (j = 0; j < size; j += 4) {
+		u32 expected = bar_test_pattern_with_offset(barno, j);
+
+		val = readl_relaxed(test->bar[barno] + j);
+		if (val != expected) {
+			dev_err(dev,
+				"BAR%d incorrect data at offset: %#x, got: %#x expected: %#x\n",
+				barno, j, val, expected);
+			return -EIO;
+		}
+	}
+
+	return 0;
+}
+
+static int pci_endpoint_test_bars(struct pci_endpoint_test *test)
+{
+	enum pci_barno bar;
+	int ret;
+
+	/* Write all BARs in order (without reading). */
+	for (bar = 0; bar < PCI_STD_NUM_BARS; bar++)
+		if (test->bar[bar])
+			pci_endpoint_test_bars_write_bar(test, bar);
+
+	/*
+	 * Read all BARs in order (without writing).
+	 * If there is an address translation issue on the EP, writing one BAR
+	 * might have overwritten another BAR. Ensure that this is not the case.
+	 * (Reading back the BAR directly after writing can not detect this.)
+	 */
+	for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) {
+		if (test->bar[bar]) {
+			ret = pci_endpoint_test_bars_read_bar(test, bar);
+			if (ret)
+				return ret;
+		}
+	}
+
+	return 0;
+}
+
+static int pci_endpoint_test_intx_irq(struct pci_endpoint_test *test)
 {
 	u32 val;

@@ -334,16 +418,17 @@ static bool pci_endpoint_test_intx_irq(struct pci_endpoint_test *test)
 	val = wait_for_completion_timeout(&test->irq_raised,
 					  msecs_to_jiffies(1000));
 	if (!val)
-		return false;
+		return -ETIMEDOUT;

-	return true;
+	return 0;
 }

-static bool pci_endpoint_test_msi_irq(struct pci_endpoint_test *test,
+static int pci_endpoint_test_msi_irq(struct pci_endpoint_test *test,
 				      u16 msi_num, bool msix)
 {
-	u32 val;
 	struct pci_dev *pdev = test->pdev;
+	u32 val;
+	int ret;

 	pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_TYPE,
 				 msix ? IRQ_TYPE_MSIX : IRQ_TYPE_MSI);

@@ -354,9 +439,16 @@ static bool pci_endpoint_test_msi_irq(struct pci_endpoint_test *test,
 	val = wait_for_completion_timeout(&test->irq_raised,
 					  msecs_to_jiffies(1000));
 	if (!val)
-		return false;
+		return -ETIMEDOUT;

-	return pci_irq_vector(pdev, msi_num - 1) == test->last_irq;
+	ret = pci_irq_vector(pdev, msi_num - 1);
+	if (ret < 0)
+		return ret;
+
+	if (ret != test->last_irq)
+		return -EIO;
+
+	return 0;
 }

 static int pci_endpoint_test_validate_xfer_params(struct device *dev,

@@ -375,11 +467,10 @@
 	return 0;
 }

-static bool pci_endpoint_test_copy(struct pci_endpoint_test *test,
+static int pci_endpoint_test_copy(struct pci_endpoint_test *test,
 				   unsigned long arg)
 {
 	struct pci_endpoint_test_xfer_param param;
-	bool ret = false;
 	void *src_addr;
 	void *dst_addr;
 	u32 flags = 0;

@@ -398,17 +489,17 @@
 	int irq_type = test->irq_type;
 	u32 src_crc32;
 	u32 dst_crc32;
-	int err;
+	int ret;

-	err = copy_from_user(&param, (void __user *)arg, sizeof(param));
-	if (err) {
+	ret = copy_from_user(&param, (void __user *)arg, sizeof(param));
+	if (ret) {
 		dev_err(dev, "Failed to get transfer param\n");
-		return false;
+		return -EFAULT;
 	}

-	err = pci_endpoint_test_validate_xfer_params(dev, &param, alignment);
-	if (err)
-		return false;
+	ret = pci_endpoint_test_validate_xfer_params(dev, &param, alignment);
+	if (ret)
+		return ret;

 	size = param.size;

@@ -418,22 +509,21 @@

 	if (irq_type < IRQ_TYPE_INTX || irq_type > IRQ_TYPE_MSIX) {
 		dev_err(dev, "Invalid IRQ type option\n");
-		goto err;
+		return -EINVAL;
 	}

 	orig_src_addr = kzalloc(size + alignment, GFP_KERNEL);
 	if (!orig_src_addr) {
 		dev_err(dev, "Failed to allocate source buffer\n");
-		ret = false;
-		goto err;
+		return -ENOMEM;
 	}

 	get_random_bytes(orig_src_addr, size + alignment);
 	orig_src_phys_addr = dma_map_single(dev, orig_src_addr,
 					    size + alignment, DMA_TO_DEVICE);
-	if (dma_mapping_error(dev, orig_src_phys_addr)) {
+	ret = dma_mapping_error(dev, orig_src_phys_addr);
+	if (ret) {
 		dev_err(dev, "failed to map source buffer address\n");
-		ret = false;
 		goto err_src_phys_addr;
 	}

@@ -457,15 +547,15 @@
 	orig_dst_addr = kzalloc(size + alignment, GFP_KERNEL);
 	if (!orig_dst_addr) {
 		dev_err(dev, "Failed to allocate destination address\n");
-		ret = false;
+		ret = -ENOMEM;
 		goto err_dst_addr;
 	}

 	orig_dst_phys_addr = dma_map_single(dev, orig_dst_addr,
 					    size + alignment, DMA_FROM_DEVICE);
-	if (dma_mapping_error(dev, orig_dst_phys_addr)) {
+	ret = dma_mapping_error(dev, orig_dst_phys_addr);
+	if (ret) {
 		dev_err(dev, "failed to map destination buffer address\n");
-		ret = false;
 		goto err_dst_phys_addr;
 	}

@@ -498,8 +588,8 @@
 					 DMA_FROM_DEVICE);

 	dst_crc32 = crc32_le(~0, dst_addr, size);
-	if (dst_crc32 == src_crc32)
-		ret = true;
+	if (dst_crc32 != src_crc32)
+		ret = -EIO;

 err_dst_phys_addr:
 	kfree(orig_dst_addr);

@@ -510,16 +600,13 @@ err_dst_addr:

 err_src_phys_addr:
 	kfree(orig_src_addr);
-
-err:
 	return ret;
 }

-static bool pci_endpoint_test_write(struct pci_endpoint_test *test,
+static int pci_endpoint_test_write(struct pci_endpoint_test *test,
 				    unsigned long arg)
 {
 	struct pci_endpoint_test_xfer_param param;
-	bool ret = false;
 	u32 flags = 0;
 	bool use_dma;
 	u32 reg;

@@ -534,17 +621,17 @@
 	int irq_type = test->irq_type;
 	size_t size;
 	u32 crc32;
-	int err;
+	int ret;

-	err = copy_from_user(&param, (void __user *)arg, sizeof(param));
-	if (err != 0) {
+	ret = copy_from_user(&param, (void __user *)arg, sizeof(param));
+	if (ret) {
 		dev_err(dev, "Failed to get transfer param\n");
-		return false;
+		return -EFAULT;
 	}

-	err = pci_endpoint_test_validate_xfer_params(dev, &param, alignment);
-	if (err)
-		return false;
+	ret = pci_endpoint_test_validate_xfer_params(dev, &param, alignment);
+	if (ret)
+		return ret;

 	size = param.size;

@@ -554,23 +641,22 @@

 	if (irq_type < IRQ_TYPE_INTX || irq_type > IRQ_TYPE_MSIX) {
 		dev_err(dev, "Invalid IRQ type option\n");
-		goto err;
+		return -EINVAL;
 	}

 	orig_addr = kzalloc(size + alignment, GFP_KERNEL);
 	if (!orig_addr) {
 		dev_err(dev, "Failed to allocate address\n");
-		ret = false;
-		goto err;
+		return -ENOMEM;
 	}

 	get_random_bytes(orig_addr, size + alignment);

 	orig_phys_addr = dma_map_single(dev, orig_addr, size + alignment,
 					DMA_TO_DEVICE);
-	if (dma_mapping_error(dev, orig_phys_addr)) {
+	ret = dma_mapping_error(dev, orig_phys_addr);
+	if (ret) {
 		dev_err(dev, "failed to map source buffer address\n");
-		ret = false;
 		goto err_phys_addr;
 	}

@@ -603,24 +689,21 @@
 	wait_for_completion(&test->irq_raised);

 	reg = pci_endpoint_test_readl(test, PCI_ENDPOINT_TEST_STATUS);
-	if (reg & STATUS_READ_SUCCESS)
-		ret = true;
+	if (!(reg & STATUS_READ_SUCCESS))
+		ret = -EIO;

 	dma_unmap_single(dev, orig_phys_addr, size + alignment,
 			 DMA_TO_DEVICE);

 err_phys_addr:
 	kfree(orig_addr);
-
-err:
 	return ret;
 }

-static bool pci_endpoint_test_read(struct pci_endpoint_test *test,
+static int pci_endpoint_test_read(struct pci_endpoint_test *test,
 				   unsigned long arg)
 {
 	struct pci_endpoint_test_xfer_param param;
-	bool ret = false;
 	u32 flags = 0;
 	bool use_dma;
 	size_t size;

@@ -634,17 +717,17 @@
 	size_t alignment = test->alignment;
 	int irq_type = test->irq_type;
 	u32 crc32;
-	int err;
+	int ret;

-	err = copy_from_user(&param, (void __user *)arg, sizeof(param));
-	if (err) {
+	ret = copy_from_user(&param, (void __user *)arg, sizeof(param));
+	if (ret) {
 		dev_err(dev, "Failed to get transfer param\n");
-		return false;
+		return -EFAULT;
 	}

-	err = pci_endpoint_test_validate_xfer_params(dev, &param, alignment);
-	if (err)
-		return false;
+	ret = pci_endpoint_test_validate_xfer_params(dev, &param, alignment);
+	if (ret)
+		return ret;

 	size = param.size;

@@ -654,21 +737,20 @@

 	if (irq_type < IRQ_TYPE_INTX || irq_type > IRQ_TYPE_MSIX) {
 		dev_err(dev, "Invalid IRQ type option\n");
-		goto err;
+		return -EINVAL;
 	}

 	orig_addr = kzalloc(size + alignment, GFP_KERNEL);
 	if (!orig_addr) {
 		dev_err(dev, "Failed to allocate destination address\n");
-		ret = false;
-		goto err;
+		return -ENOMEM;
 	}

 	orig_phys_addr = dma_map_single(dev, orig_addr, size + alignment,
 					DMA_FROM_DEVICE);
-	if (dma_mapping_error(dev, orig_phys_addr)) {
+	ret = dma_mapping_error(dev, orig_phys_addr);
+	if (ret) {
 		dev_err(dev, "failed to map source buffer address\n");
-		ret = false;
 		goto err_phys_addr;
 	}

@@ -700,50 +782,51 @@
 			 DMA_FROM_DEVICE);

 	crc32 = crc32_le(~0, addr, size);
-	if (crc32 == pci_endpoint_test_readl(test, PCI_ENDPOINT_TEST_CHECKSUM))
-		ret = true;
+	if (crc32 != pci_endpoint_test_readl(test, PCI_ENDPOINT_TEST_CHECKSUM))
+		ret = -EIO;

 err_phys_addr:
 	kfree(orig_addr);
 err:
 	return ret;
 }

-static bool pci_endpoint_test_clear_irq(struct pci_endpoint_test *test)
+static int pci_endpoint_test_clear_irq(struct pci_endpoint_test *test)
 {
 	pci_endpoint_test_release_irq(test);
 	pci_endpoint_test_free_irq_vectors(test);
-	return true;
+
+	return 0;
 }

-static bool pci_endpoint_test_set_irq(struct pci_endpoint_test *test,
+static int pci_endpoint_test_set_irq(struct pci_endpoint_test *test,
 				      int req_irq_type)
 {
 	struct pci_dev *pdev = test->pdev;
 	struct device *dev = &pdev->dev;
+	int ret;

 	if (req_irq_type < IRQ_TYPE_INTX || req_irq_type > IRQ_TYPE_MSIX) {
 		dev_err(dev, "Invalid IRQ type option\n");
-		return false;
+		return -EINVAL;
 	}

 	if (test->irq_type == req_irq_type)
-		return true;
+		return 0;

 	pci_endpoint_test_release_irq(test);
 	pci_endpoint_test_free_irq_vectors(test);

-	if (!pci_endpoint_test_alloc_irq_vectors(test, req_irq_type))
-		goto err;
+	ret = pci_endpoint_test_alloc_irq_vectors(test, req_irq_type);
+	if (ret)
+		return ret;

-	if (!pci_endpoint_test_request_irq(test))
-		goto err;
+	ret = pci_endpoint_test_request_irq(test);
+	if (ret) {
+		pci_endpoint_test_free_irq_vectors(test);
+		return ret;
+	}

-	return true;
-
-err:
-	pci_endpoint_test_free_irq_vectors(test);
-	return false;
+	return 0;
 }

 static long pci_endpoint_test_ioctl(struct file *file, unsigned int cmd,

@@ -768,6 +851,9 @@ static long pci_endpoint_test_ioctl(struct file *file, unsigned int cmd,
 			goto ret;
 		ret = pci_endpoint_test_bar(test, bar);
 		break;
+	case PCITEST_BARS:
+		ret = pci_endpoint_test_bars(test);
+		break;
 	case PCITEST_INTX_IRQ:
 		ret = pci_endpoint_test_intx_irq(test);
 		break;

@@ -805,10 +891,24 @@ static const struct file_operations pci_endpoint_test_fops = {
 	.unlocked_ioctl = pci_endpoint_test_ioctl,
 };

+static void pci_endpoint_test_get_capabilities(struct pci_endpoint_test *test)
+{
+	struct pci_dev *pdev = test->pdev;
+	struct device *dev = &pdev->dev;
+	u32 caps;
+
+	caps = pci_endpoint_test_readl(test, PCI_ENDPOINT_TEST_CAPS);
+	dev_dbg(dev, "PCI_ENDPOINT_TEST_CAPS: %#x\n", caps);
+
+	/* CAP_UNALIGNED_ACCESS is set if the EP can do unaligned access */
+	if (caps & CAP_UNALIGNED_ACCESS)
+		test->alignment = 0;
+}
+
 static int pci_endpoint_test_probe(struct pci_dev *pdev,
 				   const struct pci_device_id *ent)
 {
-	int err;
+	int ret;
 	int id;
 	char name[24];
 	enum pci_barno bar;

@@ -847,24 +947,23 @@ static int pci_endpoint_test_probe(struct pci_dev *pdev,

 	dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(48));

-	err = pci_enable_device(pdev);
-	if (err) {
+	ret = pci_enable_device(pdev);
+	if (ret) {
 		dev_err(dev, "Cannot enable PCI device\n");
-		return err;
+		return ret;
 	}

-	err = pci_request_regions(pdev, DRV_MODULE_NAME);
-	if (err) {
+	ret = pci_request_regions(pdev, DRV_MODULE_NAME);
+	if (ret) {
 		dev_err(dev, "Cannot obtain PCI resources\n");
 		goto err_disable_pdev;
 	}

 	pci_set_master(pdev);

-	if (!pci_endpoint_test_alloc_irq_vectors(test, irq_type)) {
-		err = -EINVAL;
+	ret = pci_endpoint_test_alloc_irq_vectors(test, irq_type);
+	if (ret)
 		goto err_disable_irq;
-	}

 	for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) {
 		if (pci_resource_flags(pdev, bar) & IORESOURCE_MEM) {

@@ -879,7 +978,7 @@ static int pci_endpoint_test_probe(struct pci_dev *pdev,

 	test->base = test->bar[test_reg_bar];
 	if (!test->base) {
-		err = -ENOMEM;
+		ret = -ENOMEM;
 		dev_err(dev, "Cannot perform PCI test without BAR%d\n",
 			test_reg_bar);
 		goto err_iounmap;

@@ -889,7 +988,7 @@ static int pci_endpoint_test_probe(struct pci_dev *pdev,

 	id = ida_alloc(&pci_endpoint_test_ida, GFP_KERNEL);
 	if (id < 0) {
-		err = id;
+		ret = id;
 		dev_err(dev, "Unable to get id\n");
 		goto err_iounmap;
 	}

@@ -897,27 +996,28 @@ static int pci_endpoint_test_probe(struct pci_dev *pdev,
 	snprintf(name, sizeof(name), DRV_MODULE_NAME ".%d", id);
 	test->name = kstrdup(name, GFP_KERNEL);
 	if (!test->name) {
-		err = -ENOMEM;
+		ret = -ENOMEM;
 		goto err_ida_remove;
 	}

-	if (!pci_endpoint_test_request_irq(test)) {
-		err = -EINVAL;
+	ret = pci_endpoint_test_request_irq(test);
+	if (ret)
 		goto err_kfree_test_name;
-	}
+
+	pci_endpoint_test_get_capabilities(test);

 	misc_device = &test->miscdev;
 	misc_device->minor = MISC_DYNAMIC_MINOR;
 	misc_device->name = kstrdup(name, GFP_KERNEL);
 	if (!misc_device->name) {
-		err = -ENOMEM;
+		ret = -ENOMEM;
 		goto err_release_irq;
 	}
 	misc_device->parent = &pdev->dev;
 	misc_device->fops = &pci_endpoint_test_fops;

-	err = misc_register(misc_device);
-	if (err) {
+	ret = misc_register(misc_device);
+	if (ret) {
 		dev_err(dev, "Failed to register device\n");
 		goto err_kfree_name;
 	}

@@ -949,7 +1049,7 @@ err_disable_irq:
 err_disable_pdev:
 	pci_disable_device(pdev);

-	return err;
+	return ret;
 }

 static void pci_endpoint_test_remove(struct pci_dev *pdev)

@@ -204,7 +204,7 @@ static void qtnf_pcie_init_irq(struct qtnf_pcie_bus_priv *priv, bool use_msi)

 	if (!priv->msi_enabled) {
 		pr_warn("legacy PCIE interrupts enabled\n");
-		pci_intx(pdev, 1);
+		pcim_intx(pdev, 1);
 	}
 }

@@ -811,6 +811,8 @@ struct of_pci_range *of_pci_range_parser_one(struct of_pci_range_parser *parser,
 	else
 		range->cpu_addr = of_translate_address(parser->node,
 				parser->range + na);
+
+	range->parent_bus_addr = of_read_number(parser->range + na, parser->pna);
 	range->size = of_read_number(parser->range + parser->pna + na, ns);

 	parser->range += np;

@@ -410,7 +410,7 @@ int pci_enable_pasid(struct pci_dev *pdev, int features)
 	if (WARN_ON(pdev->pasid_enabled))
 		return -EBUSY;

-	if (!pdev->eetlp_prefix_path && !pdev->pasid_no_tlp)
+	if (!pdev->eetlp_prefix_max && !pdev->pasid_no_tlp)
 		return -EINVAL;

 	if (!pasid)
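The `eetlp_prefix_path` to `eetlp_prefix_max` rename above reflects a switch from a boolean "whole path supports End-End TLP Prefixes" flag to a count of supported prefixes; a nonzero count still means "usable", so the truthiness of the guard is preserved. A tiny sketch of that guard (field names mirror the hunk, the function is a local stand-in for the check inside `pci_enable_pasid()`):

```c
#include <assert.h>
#include <errno.h>

/*
 * Sketch: PASID needs either End-End TLP Prefix support on the whole
 * path (now expressed as a nonzero eetlp_prefix_max count) or the
 * pasid_no_tlp quirk.
 */
static int can_enable_pasid(unsigned int eetlp_prefix_max, int pasid_no_tlp)
{
	if (!eetlp_prefix_max && !pasid_no_tlp)
		return -EINVAL;
	return 0;
}
```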
@@ -635,30 +635,20 @@ static int dra7xx_pcie_unaligned_memaccess(struct device *dev)
 {
 	int ret;
 	struct device_node *np = dev->of_node;
-	struct of_phandle_args args;
+	unsigned int args[2];
 	struct regmap *regmap;

-	regmap = syscon_regmap_lookup_by_phandle(np,
-						 "ti,syscon-unaligned-access");
+	regmap = syscon_regmap_lookup_by_phandle_args(np, "ti,syscon-unaligned-access",
+						      2, args);
 	if (IS_ERR(regmap)) {
 		dev_dbg(dev, "can't get ti,syscon-unaligned-access\n");
 		return -EINVAL;
 	}

-	ret = of_parse_phandle_with_fixed_args(np, "ti,syscon-unaligned-access",
-					       2, 0, &args);
-	if (ret) {
-		dev_err(dev, "failed to parse ti,syscon-unaligned-access\n");
-		return ret;
-	}
-
-	ret = regmap_update_bits(regmap, args.args[0], args.args[1],
-				 args.args[1]);
+	ret = regmap_update_bits(regmap, args[0], args[1], args[1]);
 	if (ret)
 		dev_err(dev, "failed to enable unaligned access\n");

-	of_node_put(args.np);
-
 	return ret;
 }
@@ -671,18 +661,13 @@ static int dra7xx_pcie_configure_two_lane(struct device *dev,
 	u32 mask;
 	u32 val;

-	pcie_syscon = syscon_regmap_lookup_by_phandle(np, "ti,syscon-lane-sel");
+	pcie_syscon = syscon_regmap_lookup_by_phandle_args(np, "ti,syscon-lane-sel",
+							   1, &pcie_reg);
 	if (IS_ERR(pcie_syscon)) {
 		dev_err(dev, "unable to get ti,syscon-lane-sel\n");
 		return -EINVAL;
 	}

-	if (of_property_read_u32_index(np, "ti,syscon-lane-sel", 1,
-				       &pcie_reg)) {
-		dev_err(dev, "couldn't get lane selection reg offset\n");
-		return -EINVAL;
-	}
-
 	mask = b1co_mode_sel_mask | PCIE_B0_B1_TSYNCEN;
 	val = PCIE_B1C0_MODE_SEL | PCIE_B0_B1_TSYNCEN;
 	regmap_update_bits(pcie_syscon, pcie_reg, mask, val);
@@ -33,6 +33,7 @@
 #include <linux/pm_domain.h>
 #include <linux/pm_runtime.h>

+#include "../../pci.h"
 #include "pcie-designware.h"

 #define IMX8MQ_GPR_PCIE_REF_USE_PAD	BIT(9)
@@ -55,6 +56,22 @@
 #define IMX95_PE0_GEN_CTRL_3	0x1058
 #define IMX95_PCIE_LTSSM_EN	BIT(0)

+#define IMX95_PE0_LUT_ACSCTRL			0x1008
+#define IMX95_PEO_LUT_RWA			BIT(16)
+#define IMX95_PE0_LUT_ENLOC			GENMASK(4, 0)
+
+#define IMX95_PE0_LUT_DATA1			0x100c
+#define IMX95_PE0_LUT_VLD			BIT(31)
+#define IMX95_PE0_LUT_DAC_ID			GENMASK(10, 8)
+#define IMX95_PE0_LUT_STREAM_ID			GENMASK(5, 0)
+
+#define IMX95_PE0_LUT_DATA2			0x1010
+#define IMX95_PE0_LUT_REQID			GENMASK(31, 16)
+#define IMX95_PE0_LUT_MASK			GENMASK(15, 0)
+
+#define IMX95_SID_MASK				GENMASK(5, 0)
+#define IMX95_MAX_LUT				32
+
 #define to_imx_pcie(x)	dev_get_drvdata((x)->dev)

 enum imx_pcie_variants {
@@ -70,6 +87,7 @@ enum imx_pcie_variants {
 	IMX8MQ_EP,
 	IMX8MM_EP,
 	IMX8MP_EP,
+	IMX8Q_EP,
 	IMX95_EP,
 };
@@ -87,6 +105,7 @@ enum imx_pcie_variants {
  * workaround suspend resume on some devices which are affected by this errata.
  */
 #define IMX_PCIE_FLAG_BROKEN_SUSPEND		BIT(9)
+#define IMX_PCIE_FLAG_HAS_LUT			BIT(10)

 #define imx_check_flag(pci, val)	(pci->drvdata->flags & val)
@@ -103,6 +122,7 @@ struct imx_pcie_drvdata {
 	const char *gpr;
 	const char * const *clk_names;
 	const u32 clks_cnt;
+	const u32 clks_optional_cnt;
 	const u32 ltssm_off;
 	const u32 ltssm_mask;
 	const u32 mode_off[IMX_PCIE_MAX_INSTANCES];
@@ -111,19 +131,18 @@ struct imx_pcie_drvdata {
 	int (*init_phy)(struct imx_pcie *pcie);
 	int (*enable_ref_clk)(struct imx_pcie *pcie, bool enable);
 	int (*core_reset)(struct imx_pcie *pcie, bool assert);
+	const struct dw_pcie_host_ops *ops;
 };

 struct imx_pcie {
 	struct dw_pcie *pci;
 	struct gpio_desc *reset_gpiod;
-	bool link_is_up;
 	struct clk_bulk_data clks[IMX_PCIE_MAX_CLKS];
 	struct regmap *iomuxc_gpr;
 	u16 msi_ctrl;
 	u32 controller_id;
 	struct reset_control *pciephy_reset;
 	struct reset_control *apps_reset;
 	struct reset_control *turnoff_reset;
 	u32 tx_deemph_gen1;
 	u32 tx_deemph_gen2_3p5db;
 	u32 tx_deemph_gen2_6db;
@@ -139,6 +158,9 @@ struct imx_pcie {
 	struct device *pd_pcie_phy;
 	struct phy *phy;
 	const struct imx_pcie_drvdata *drvdata;
+
+	/* Ensure that only one device's LUT is configured at any given time */
+	struct mutex lock;
 };

 /* Parameters for the waiting for PCIe PHY PLL to lock on i.MX7 */
@@ -234,11 +256,11 @@ static void imx_pcie_configure_type(struct imx_pcie *imx_pcie)

 	id = imx_pcie->controller_id;

-	/* If mode_mask is 0, then generic PHY driver is used to set the mode */
+	/* If mode_mask is 0, generic PHY driver is used to set the mode */
 	if (!drvdata->mode_mask[0])
 		return;

-	/* If mode_mask[id] is zero, means each controller have its individual gpr */
+	/* If mode_mask[id] is 0, each controller has its individual GPR */
 	if (!drvdata->mode_mask[id])
 		id = 0;
@@ -375,14 +397,15 @@ static int pcie_phy_write(struct imx_pcie *imx_pcie, int addr, u16 data)

 static int imx8mq_pcie_init_phy(struct imx_pcie *imx_pcie)
 {
-	/* TODO: Currently this code assumes external oscillator is being used */
+	/* TODO: This code assumes external oscillator is being used */
 	regmap_update_bits(imx_pcie->iomuxc_gpr,
 			   imx_pcie_grp_offset(imx_pcie),
 			   IMX8MQ_GPR_PCIE_REF_USE_PAD,
 			   IMX8MQ_GPR_PCIE_REF_USE_PAD);
 	/*
-	 * Regarding the datasheet, the PCIE_VPH is suggested to be 1.8V. If the PCIE_VPH is
-	 * supplied by 3.3V, the VREG_BYPASS should be cleared to zero.
+	 * Per the datasheet, the PCIE_VPH is suggested to be 1.8V. If the
+	 * PCIE_VPH is supplied by 3.3V, the VREG_BYPASS should be cleared
+	 * to zero.
 	 */
 	if (imx_pcie->vph && regulator_get_voltage(imx_pcie->vph) > 3000000)
 		regmap_update_bits(imx_pcie->iomuxc_gpr,
@@ -393,13 +416,6 @@ static int imx8mq_pcie_init_phy(struct imx_pcie *imx_pcie)
 	return 0;
 }

-static int imx7d_pcie_init_phy(struct imx_pcie *imx_pcie)
-{
-	regmap_update_bits(imx_pcie->iomuxc_gpr, IOMUXC_GPR12, IMX7D_GPR12_PCIE_PHY_REFCLK_SEL, 0);
-
-	return 0;
-}
-
 static int imx_pcie_init_phy(struct imx_pcie *imx_pcie)
 {
 	regmap_update_bits(imx_pcie->iomuxc_gpr, IOMUXC_GPR12,
@@ -576,7 +592,7 @@ static int imx_pcie_attach_pd(struct device *dev)
 					DL_FLAG_PM_RUNTIME |
 					DL_FLAG_RPM_ACTIVE);
 	if (!link) {
-		dev_err(dev, "Failed to add device_link to pcie pd.\n");
+		dev_err(dev, "Failed to add device_link to pcie pd\n");
 		return -EINVAL;
 	}
@@ -589,7 +605,7 @@ static int imx_pcie_attach_pd(struct device *dev)
 					DL_FLAG_PM_RUNTIME |
 					DL_FLAG_RPM_ACTIVE);
 	if (!link) {
-		dev_err(dev, "Failed to add device_link to pcie_phy pd.\n");
+		dev_err(dev, "Failed to add device_link to pcie_phy pd\n");
 		return -EINVAL;
 	}
@@ -598,10 +614,9 @@ static int imx_pcie_attach_pd(struct device *dev)

 static int imx6sx_pcie_enable_ref_clk(struct imx_pcie *imx_pcie, bool enable)
 {
-	if (enable)
-		regmap_clear_bits(imx_pcie->iomuxc_gpr, IOMUXC_GPR12,
-				  IMX6SX_GPR12_PCIE_TEST_POWERDOWN);
-
+	regmap_update_bits(imx_pcie->iomuxc_gpr, IOMUXC_GPR12,
+			   IMX6SX_GPR12_PCIE_TEST_POWERDOWN,
+			   enable ? 0 : IMX6SX_GPR12_PCIE_TEST_POWERDOWN);
 	return 0;
 }
@@ -611,10 +626,10 @@ static int imx6q_pcie_enable_ref_clk(struct imx_pcie *imx_pcie, bool enable)
 		/* power up core phy and enable ref clock */
 		regmap_clear_bits(imx_pcie->iomuxc_gpr, IOMUXC_GPR1, IMX6Q_GPR1_PCIE_TEST_PD);
 		/*
-		 * the async reset input need ref clock to sync internally,
+		 * The async reset input need ref clock to sync internally,
 		 * when the ref clock comes after reset, internal synced
 		 * reset time is too short, cannot meet the requirement.
-		 * add one ~10us delay here.
+		 * Add a ~10us delay here.
 		 */
 		usleep_range(10, 100);
 		regmap_set_bits(imx_pcie->iomuxc_gpr, IOMUXC_GPR1, IMX6Q_GPR1_PCIE_REF_CLK_EN);
@@ -630,19 +645,20 @@ static int imx8mm_pcie_enable_ref_clk(struct imx_pcie *imx_pcie, bool enable)
 {
 	int offset = imx_pcie_grp_offset(imx_pcie);

-	if (enable) {
-		regmap_clear_bits(imx_pcie->iomuxc_gpr, offset, IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE);
-		regmap_set_bits(imx_pcie->iomuxc_gpr, offset, IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE_EN);
-	}
-
+	regmap_update_bits(imx_pcie->iomuxc_gpr, offset,
+			   IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE,
+			   enable ? 0 : IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE);
+	regmap_update_bits(imx_pcie->iomuxc_gpr, offset,
+			   IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE_EN,
+			   enable ? IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE_EN : 0);
 	return 0;
 }

 static int imx7d_pcie_enable_ref_clk(struct imx_pcie *imx_pcie, bool enable)
 {
-	if (!enable)
-		regmap_set_bits(imx_pcie->iomuxc_gpr, IOMUXC_GPR12,
-				IMX7D_GPR12_PCIE_PHY_REFCLK_SEL);
+	regmap_update_bits(imx_pcie->iomuxc_gpr, IOMUXC_GPR12,
+			   IMX7D_GPR12_PCIE_PHY_REFCLK_SEL,
+			   enable ? 0 : IMX7D_GPR12_PCIE_PHY_REFCLK_SEL);
 	return 0;
 }
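The `enable_ref_clk` hunks above all apply the same rework: a conditional pair of `regmap_set_bits()`/`regmap_clear_bits()` calls collapses into one unconditional `regmap_update_bits(reg, mask, enable ? ... : ...)`. A userspace sketch of the read-modify-write semantics being relied on, with a plain integer standing in for the regmap and a stand-in bit name:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Read-modify-write of only the bits in 'mask', mirroring what
 * regmap_update_bits() does to a hardware register.
 */
static uint32_t update_bits(uint32_t reg, uint32_t mask, uint32_t val)
{
	return (reg & ~mask) | (val & mask);
}

/* Stand-in for a bit like IMX6SX_GPR12_PCIE_TEST_POWERDOWN. */
#define TEST_POWERDOWN	(1u << 30)

/* enable: clear the powerdown bit; disable: set it - one call either way. */
static uint32_t set_ref_clk(uint32_t gpr, int enable)
{
	return update_bits(gpr, TEST_POWERDOWN,
			   enable ? 0 : TEST_POWERDOWN);
}
```

Because `update_bits` writes both polarities of the mask, the `enable ? 0 : mask` form handles set and clear symmetrically, which is what lets the driver drop the `if (enable)` branches.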
@@ -775,6 +791,7 @@ static void imx_pcie_assert_core_reset(struct imx_pcie *imx_pcie)
 static int imx_pcie_deassert_core_reset(struct imx_pcie *imx_pcie)
 {
 	reset_control_deassert(imx_pcie->pciephy_reset);
+	reset_control_deassert(imx_pcie->apps_reset);

 	if (imx_pcie->drvdata->core_reset)
 		imx_pcie->drvdata->core_reset(imx_pcie, false);
@@ -884,6 +901,7 @@ static int imx_pcie_start_link(struct dw_pcie *pci)

 		if (imx_pcie->drvdata->flags &
 		    IMX_PCIE_FLAG_IMX_SPEED_CHANGE) {
+
 			/*
 			 * On i.MX7, DIRECT_SPEED_CHANGE behaves differently
 			 * from i.MX6 family when no link speed transition
@@ -892,7 +910,6 @@ static int imx_pcie_start_link(struct dw_pcie *pci)
 			 * which will cause the following code to report false
 			 * failure.
 			 */
-
 			ret = imx_pcie_wait_for_speed_change(imx_pcie);
 			if (ret) {
 				dev_err(dev, "Failed to bring link up!\n");
@@ -908,13 +925,11 @@ static int imx_pcie_start_link(struct dw_pcie *pci)
 		dev_info(dev, "Link: Only Gen1 is enabled\n");
 	}

-	imx_pcie->link_is_up = true;
 	tmp = dw_pcie_readw_dbi(pci, offset + PCI_EXP_LNKSTA);
 	dev_info(dev, "Link up, Gen%i\n", tmp & PCI_EXP_LNKSTA_CLS);
 	return 0;

 err_reset_phy:
-	imx_pcie->link_is_up = false;
 	dev_dbg(dev, "PHY DEBUG_R0=0x%08x DEBUG_R1=0x%08x\n",
 		dw_pcie_readl_dbi(pci, PCIE_PORT_DEBUG0),
 		dw_pcie_readl_dbi(pci, PCIE_PORT_DEBUG1));
@@ -930,6 +945,184 @@ static void imx_pcie_stop_link(struct dw_pcie *pci)
 	imx_pcie_ltssm_disable(dev);
 }

+static int imx_pcie_add_lut(struct imx_pcie *imx_pcie, u16 rid, u8 sid)
+{
+	struct dw_pcie *pci = imx_pcie->pci;
+	struct device *dev = pci->dev;
+	u32 data1, data2;
+	int free = -1;
+	int i;
+
+	if (sid >= 64) {
+		dev_err(dev, "Invalid SID for index %d\n", sid);
+		return -EINVAL;
+	}
+
+	guard(mutex)(&imx_pcie->lock);
+
+	/*
+	 * Iterate through all LUT entries to check for duplicate RID and
+	 * identify the first available entry. Configure this available entry
+	 * immediately after verification to avoid rescanning it.
+	 */
+	for (i = 0; i < IMX95_MAX_LUT; i++) {
+		regmap_write(imx_pcie->iomuxc_gpr,
+			     IMX95_PE0_LUT_ACSCTRL, IMX95_PEO_LUT_RWA | i);
+		regmap_read(imx_pcie->iomuxc_gpr, IMX95_PE0_LUT_DATA1, &data1);
+
+		if (!(data1 & IMX95_PE0_LUT_VLD)) {
+			if (free < 0)
+				free = i;
+			continue;
+		}
+
+		regmap_read(imx_pcie->iomuxc_gpr, IMX95_PE0_LUT_DATA2, &data2);
+
+		/* Do not add duplicate RID */
+		if (rid == FIELD_GET(IMX95_PE0_LUT_REQID, data2)) {
+			dev_warn(dev, "Existing LUT entry available for RID (%d)", rid);
+			return 0;
+		}
+	}
+
+	if (free < 0) {
+		dev_err(dev, "LUT entry is not available\n");
+		return -ENOSPC;
+	}
+
+	data1 = FIELD_PREP(IMX95_PE0_LUT_DAC_ID, 0);
+	data1 |= FIELD_PREP(IMX95_PE0_LUT_STREAM_ID, sid);
+	data1 |= IMX95_PE0_LUT_VLD;
+	regmap_write(imx_pcie->iomuxc_gpr, IMX95_PE0_LUT_DATA1, data1);
+
+	data2 = IMX95_PE0_LUT_MASK; /* Match all bits of RID */
+	data2 |= FIELD_PREP(IMX95_PE0_LUT_REQID, rid);
+	regmap_write(imx_pcie->iomuxc_gpr, IMX95_PE0_LUT_DATA2, data2);
+
+	regmap_write(imx_pcie->iomuxc_gpr, IMX95_PE0_LUT_ACSCTRL, free);
+
+	return 0;
+}
+
+static void imx_pcie_remove_lut(struct imx_pcie *imx_pcie, u16 rid)
+{
+	u32 data2;
+	int i;
+
+	guard(mutex)(&imx_pcie->lock);
+
+	for (i = 0; i < IMX95_MAX_LUT; i++) {
+		regmap_write(imx_pcie->iomuxc_gpr,
+			     IMX95_PE0_LUT_ACSCTRL, IMX95_PEO_LUT_RWA | i);
+		regmap_read(imx_pcie->iomuxc_gpr, IMX95_PE0_LUT_DATA2, &data2);
+		if (FIELD_GET(IMX95_PE0_LUT_REQID, data2) == rid) {
+			regmap_write(imx_pcie->iomuxc_gpr,
+				     IMX95_PE0_LUT_DATA1, 0);
+			regmap_write(imx_pcie->iomuxc_gpr,
+				     IMX95_PE0_LUT_DATA2, 0);
+			regmap_write(imx_pcie->iomuxc_gpr,
+				     IMX95_PE0_LUT_ACSCTRL, i);
+
+			break;
+		}
+	}
+}
+
+static int imx_pcie_enable_device(struct pci_host_bridge *bridge,
+				  struct pci_dev *pdev)
+{
+	struct imx_pcie *imx_pcie = to_imx_pcie(to_dw_pcie_from_pp(bridge->sysdata));
+	u32 sid_i, sid_m, rid = pci_dev_id(pdev);
+	struct device_node *target;
+	struct device *dev;
+	int err_i, err_m;
+	u32 sid = 0;
+
+	dev = imx_pcie->pci->dev;
+
+	target = NULL;
+	err_i = of_map_id(dev->of_node, rid, "iommu-map", "iommu-map-mask",
+			  &target, &sid_i);
+	if (target) {
+		of_node_put(target);
+	} else {
+		/*
+		 * "target == NULL && err_i == 0" means RID out of map range.
+		 * Use 1:1 map RID to streamID. Hardware can't support this
+		 * because the streamID is only 6 bits.
+		 */
+		err_i = -EINVAL;
+	}
+
+	target = NULL;
+	err_m = of_map_id(dev->of_node, rid, "msi-map", "msi-map-mask",
+			  &target, &sid_m);
+
+	/*
+	 *   err_m    target
+	 *	0	NULL	RID out of range. Use 1:1 map RID to
+	 *			streamID, Current hardware can't
+	 *			support it, so return -EINVAL.
+	 *   != 0	NULL	msi-map does not exist, use built-in MSI
+	 *	0    != NULL	Get correct streamID from RID
+	 *   != 0    != NULL	Invalid combination
+	 */
+	if (!err_m && !target)
+		return -EINVAL;
+	else if (target)
+		of_node_put(target); /* Find streamID map entry for RID in msi-map */
+
+	/*
+	 * msi-map	iommu-map
+	 *   N		    N		DWC MSI Ctrl
+	 *   Y		    Y		ITS + SMMU, require the same SID
+	 *   Y		    N		ITS
+	 *   N		    Y		DWC MSI Ctrl + SMMU
+	 */
+	if (err_i && err_m)
+		return 0;
+
+	if (!err_i && !err_m) {
+		/*
+		 *	   Glue Layer
+		 *	   <==========>
+		 * ┌─────┐			   ┌──────────┐
+		 * │ LUT │ 6-bit streamID	   │	      │
+		 * │	 │────────────────────────►│   MSI    │
+		 * └─────┘   2-bit ctrl ID	   │	      │
+		 *	       ┌──────────────────►│	      │
+		 *  (i.MX95)   │		   │	      │
+		 *  00 PCIe0   │		   │	      │
+		 *  01 ENETC   │		   │	      │
+		 *  10 PCIe1   │		   │	      │
+		 *			 	   └──────────┘
+		 * The MSI glue layer auto adds 2 bits controller ID ahead of
+		 * streamID, so mask these 2 bits to get streamID. The
+		 * IOMMU glue layer doesn't do that.
+		 */
+		if (sid_i != (sid_m & IMX95_SID_MASK)) {
+			dev_err(dev, "iommu-map and msi-map entries mismatch!\n");
+			return -EINVAL;
+		}
+	}
+
+	if (!err_i)
+		sid = sid_i;
+	else if (!err_m)
+		sid = sid_m & IMX95_SID_MASK;
+
+	return imx_pcie_add_lut(imx_pcie, rid, sid);
+}
+
+static void imx_pcie_disable_device(struct pci_host_bridge *bridge,
+				    struct pci_dev *pdev)
+{
+	struct imx_pcie *imx_pcie;
+
+	imx_pcie = to_imx_pcie(to_dw_pcie_from_pp(bridge->sysdata));
+	imx_pcie_remove_lut(imx_pcie, pci_dev_id(pdev));
+}
+
 static int imx_pcie_host_init(struct dw_pcie_rp *pp)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
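The LUT helpers added above pack and unpack the `DATA1`/`DATA2` registers with `FIELD_PREP()`/`FIELD_GET()`. A standalone sketch of that packing with explicit shifts and masks (the mask values mirror the `GENMASK()` definitions in the hunk; the helper names are local stand-ins, not driver functions):

```c
#include <assert.h>
#include <stdint.h>

#define LUT_VLD		(1u << 31)	/* IMX95_PE0_LUT_VLD */
#define LUT_STREAM_ID	0x0000003fu	/* IMX95_PE0_LUT_STREAM_ID: GENMASK(5, 0) */
#define LUT_REQID	0xffff0000u	/* IMX95_PE0_LUT_REQID: GENMASK(31, 16) */
#define LUT_RID_MASK	0x0000ffffu	/* IMX95_PE0_LUT_MASK: GENMASK(15, 0) */

/* DATA1: valid bit plus the 6-bit streamID (DAC ID left at 0). */
static uint32_t lut_data1(uint8_t sid)
{
	return LUT_VLD | (sid & LUT_STREAM_ID);
}

/* DATA2: Requester ID in the top half, full match mask in the bottom. */
static uint32_t lut_data2(uint16_t rid)
{
	return ((uint32_t)rid << 16) | LUT_RID_MASK;
}

/* Equivalent of FIELD_GET(IMX95_PE0_LUT_REQID, data2). */
static uint16_t lut_rid(uint32_t data2)
{
	return (uint16_t)((data2 & LUT_REQID) >> 16);
}
```

Setting every bit of the low-half mask means the LUT entry matches the full 16-bit Requester ID, which is why `imx_pcie_add_lut()` comments it as "Match all bits of RID".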
@@ -946,6 +1139,11 @@ static int imx_pcie_host_init(struct dw_pcie_rp *pp)
 		}
 	}

+	if (pp->bridge && imx_check_flag(imx_pcie, IMX_PCIE_FLAG_HAS_LUT)) {
+		pp->bridge->enable_device = imx_pcie_enable_device;
+		pp->bridge->disable_device = imx_pcie_disable_device;
+	}
+
 	imx_pcie_assert_core_reset(imx_pcie);

 	if (imx_pcie->drvdata->init_phy)
@@ -966,7 +1164,9 @@ static int imx_pcie_host_init(struct dw_pcie_rp *pp)
 		goto err_clk_disable;
 	}

-	ret = phy_set_mode_ext(imx_pcie->phy, PHY_MODE_PCIE, PHY_MODE_PCIE_RC);
+	ret = phy_set_mode_ext(imx_pcie->phy, PHY_MODE_PCIE,
+			       imx_pcie->drvdata->mode == DW_PCIE_EP_TYPE ?
+				       PHY_MODE_PCIE_EP : PHY_MODE_PCIE_RC);
 	if (ret) {
 		dev_err(dev, "unable to set PCIe PHY mode\n");
 		goto err_phy_exit;
@@ -1033,9 +1233,31 @@ static u64 imx_pcie_cpu_addr_fixup(struct dw_pcie *pcie, u64 cpu_addr)
 	return cpu_addr - entry->offset;
 }

+/*
+ * In old DWC implementations, PCIE_ATU_INHIBIT_PAYLOAD in iATU Ctrl2
+ * register is reserved, so the generic DWC implementation of sending the
+ * PME_Turn_Off message using a dummy MMIO write cannot be used.
+ */
+static void imx_pcie_pme_turn_off(struct dw_pcie_rp *pp)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+	struct imx_pcie *imx_pcie = to_imx_pcie(pci);
+
+	regmap_set_bits(imx_pcie->iomuxc_gpr, IOMUXC_GPR12, IMX6SX_GPR12_PCIE_PM_TURN_OFF);
+	regmap_clear_bits(imx_pcie->iomuxc_gpr, IOMUXC_GPR12, IMX6SX_GPR12_PCIE_PM_TURN_OFF);
+
+	usleep_range(PCIE_PME_TO_L2_TIMEOUT_US/10, PCIE_PME_TO_L2_TIMEOUT_US);
+}
+
 static const struct dw_pcie_host_ops imx_pcie_host_ops = {
 	.init = imx_pcie_host_init,
 	.deinit = imx_pcie_host_exit,
+	.pme_turn_off = imx_pcie_pme_turn_off,
+};
+
+static const struct dw_pcie_host_ops imx_pcie_host_dw_pme_ops = {
+	.init = imx_pcie_host_init,
+	.deinit = imx_pcie_host_exit,
 };

 static const struct dw_pcie_ops dw_pcie_ops = {
@@ -1082,16 +1304,27 @@ static const struct pci_epc_features imx8m_pcie_epc_features = {
 	.align = SZ_64K,
 };

+static const struct pci_epc_features imx8q_pcie_epc_features = {
+	.linkup_notifier = false,
+	.msi_capable = true,
+	.msix_capable = false,
+	.bar[BAR_1] = { .type = BAR_RESERVED, },
+	.bar[BAR_3] = { .type = BAR_RESERVED, },
+	.bar[BAR_5] = { .type = BAR_RESERVED, },
+	.align = SZ_64K,
+};
+
 /*
- * BAR# | Default BAR enable | Default BAR Type | Default BAR Size | BAR Sizing Scheme
- * ================================================================================================
- * BAR0 | Enable | 64-bit | 1 MB | Programmable Size
- * BAR1 | Disable | 32-bit | 64 KB | Fixed Size
- *        BAR1 should be disabled if BAR0 is 64bit.
- * BAR2 | Enable | 32-bit | 1 MB | Programmable Size
- * BAR3 | Enable | 32-bit | 64 KB | Programmable Size
- * BAR4 | Enable | 32-bit | 1M | Programmable Size
- * BAR5 | Enable | 32-bit | 64 KB | Programmable Size
+ *      | Default  | Default | Default | BAR Sizing
+ * BAR# | Enable?  | Type    | Size    | Scheme
+ * =======================================================
+ * BAR0 | Enable   | 64-bit  |  1 MB   | Programmable Size
+ * BAR1 | Disable  | 32-bit  | 64 KB   | Fixed Size
+ *                (BAR1 should be disabled if BAR0 is 64-bit)
+ * BAR2 | Enable   | 32-bit  |  1 MB   | Programmable Size
+ * BAR3 | Enable   | 32-bit  | 64 KB   | Programmable Size
+ * BAR4 | Enable   | 32-bit  |  1 MB   | Programmable Size
+ * BAR5 | Enable   | 32-bit  | 64 KB   | Programmable Size
  */
 static const struct pci_epc_features imx95_pcie_epc_features = {
 	.msi_capable = true,
@@ -1118,7 +1351,6 @@ static int imx_add_pcie_ep(struct imx_pcie *imx_pcie,
 			   struct platform_device *pdev)
 {
 	int ret;
-	unsigned int pcie_dbi2_offset;
 	struct dw_pcie_ep *ep;
 	struct dw_pcie *pci = imx_pcie->pci;
 	struct dw_pcie_rp *pp = &pci->pp;
@@ -1128,28 +1360,6 @@ static int imx_add_pcie_ep(struct imx_pcie *imx_pcie,
 	ep = &pci->ep;
 	ep->ops = &pcie_ep_ops;

-	switch (imx_pcie->drvdata->variant) {
-	case IMX8MQ_EP:
-	case IMX8MM_EP:
-	case IMX8MP_EP:
-		pcie_dbi2_offset = SZ_1M;
-		break;
-	default:
-		pcie_dbi2_offset = SZ_4K;
-		break;
-	}
-
-	pci->dbi_base2 = pci->dbi_base + pcie_dbi2_offset;
-
-	/*
-	 * FIXME: Ideally, dbi2 base address should come from DT. But since only IMX95 is defining
-	 * "dbi2" in DT, "dbi_base2" is set to NULL here for that platform alone so that the DWC
-	 * core code can fetch that from DT. But once all platform DTs were fixed, this and the
-	 * above "dbi_base2" setting should be removed.
-	 */
-	if (device_property_match_string(dev, "reg-names", "dbi2") >= 0)
-		pci->dbi_base2 = NULL;
-
 	if (imx_check_flag(imx_pcie, IMX_PCIE_FLAG_SUPPORT_64BIT))
 		dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
@@ -1176,43 +1386,6 @@ static int imx_add_pcie_ep(struct imx_pcie *imx_pcie,
 	return 0;
 }

-static void imx_pcie_pm_turnoff(struct imx_pcie *imx_pcie)
-{
-	struct device *dev = imx_pcie->pci->dev;
-
-	/* Some variants have a turnoff reset in DT */
-	if (imx_pcie->turnoff_reset) {
-		reset_control_assert(imx_pcie->turnoff_reset);
-		reset_control_deassert(imx_pcie->turnoff_reset);
-		goto pm_turnoff_sleep;
-	}
-
-	/* Others poke directly at IOMUXC registers */
-	switch (imx_pcie->drvdata->variant) {
-	case IMX6SX:
-	case IMX6QP:
-		regmap_update_bits(imx_pcie->iomuxc_gpr, IOMUXC_GPR12,
-				   IMX6SX_GPR12_PCIE_PM_TURN_OFF,
-				   IMX6SX_GPR12_PCIE_PM_TURN_OFF);
-		regmap_update_bits(imx_pcie->iomuxc_gpr, IOMUXC_GPR12,
-				   IMX6SX_GPR12_PCIE_PM_TURN_OFF, 0);
-		break;
-	default:
-		dev_err(dev, "PME_Turn_Off not implemented\n");
-		return;
-	}
-
-	/*
-	 * Components with an upstream port must respond to
-	 * PME_Turn_Off with PME_TO_Ack but we can't check.
-	 *
-	 * The standard recommends a 1-10ms timeout after which to
-	 * proceed anyway as if acks were received.
-	 */
-pm_turnoff_sleep:
-	usleep_range(1000, 10000);
-}
-
 static void imx_pcie_msi_save_restore(struct imx_pcie *imx_pcie, bool save)
 {
 	u8 offset;
@@ -1236,7 +1409,6 @@ static void imx_pcie_msi_save_restore(struct imx_pcie *imx_pcie, bool save)
 static int imx_pcie_suspend_noirq(struct device *dev)
 {
 	struct imx_pcie *imx_pcie = dev_get_drvdata(dev);
-	struct dw_pcie_rp *pp = &imx_pcie->pci->pp;

 	if (!(imx_pcie->drvdata->flags & IMX_PCIE_FLAG_SUPPORTS_SUSPEND))
 		return 0;
@@ -1251,9 +1423,7 @@ static int imx_pcie_suspend_noirq(struct device *dev)
 		imx_pcie_assert_core_reset(imx_pcie);
 		imx_pcie->drvdata->enable_ref_clk(imx_pcie, false);
 	} else {
-		imx_pcie_pm_turnoff(imx_pcie);
-		imx_pcie_stop_link(imx_pcie->pci);
-		imx_pcie_host_exit(pp);
+		return dw_pcie_suspend_noirq(imx_pcie->pci);
 	}

 	return 0;
@@ -1263,7 +1433,6 @@ static int imx_pcie_resume_noirq(struct device *dev)
 {
 	int ret;
 	struct imx_pcie *imx_pcie = dev_get_drvdata(dev);
-	struct dw_pcie_rp *pp = &imx_pcie->pci->pp;

 	if (!(imx_pcie->drvdata->flags & IMX_PCIE_FLAG_SUPPORTS_SUSPEND))
 		return 0;
@@ -1275,6 +1444,7 @@ static int imx_pcie_resume_noirq(struct device *dev)
 		ret = imx_pcie_deassert_core_reset(imx_pcie);
 		if (ret)
 			return ret;
+
 		/*
 		 * Using PCIE_TEST_PD seems to disable MSI and powers down the
 		 * root complex. This is why we have to setup the rc again and
@@ -1283,17 +1453,12 @@ static int imx_pcie_resume_noirq(struct device *dev)
 		ret = dw_pcie_setup_rc(&imx_pcie->pci->pp);
 		if (ret)
 			return ret;
+		imx_pcie_msi_save_restore(imx_pcie, false);
 	} else {
-		ret = imx_pcie_host_init(pp);
+		ret = dw_pcie_resume_noirq(imx_pcie->pci);
 		if (ret)
 			return ret;
-		imx_pcie_msi_save_restore(imx_pcie, false);
-		dw_pcie_setup_rc(pp);
-
-		if (imx_pcie->link_is_up)
-			imx_pcie_start_link(imx_pcie->pci);
 	}
-	imx_pcie_msi_save_restore(imx_pcie, false);

 	return 0;
 }
@@ -1311,9 +1476,8 @@ static int imx_pcie_probe(struct platform_device *pdev)
 	struct device_node *np;
 	struct resource *dbi_base;
 	struct device_node *node = dev->of_node;
-	int ret;
+	int i, ret, req_cnt;
 	u16 val;
-	int i;

 	imx_pcie = devm_kzalloc(dev, sizeof(*imx_pcie), GFP_KERNEL);
 	if (!imx_pcie)
@@ -1325,11 +1489,17 @@ static int imx_pcie_probe(struct platform_device *pdev)

 	pci->dev = dev;
 	pci->ops = &dw_pcie_ops;
-	pci->pp.ops = &imx_pcie_host_ops;

 	imx_pcie->pci = pci;
 	imx_pcie->drvdata = of_device_get_match_data(dev);

+	mutex_init(&imx_pcie->lock);
+
+	if (imx_pcie->drvdata->ops)
+		pci->pp.ops = imx_pcie->drvdata->ops;
+	else
+		pci->pp.ops = &imx_pcie_host_dw_pme_ops;
+
 	/* Find the PHY if one is defined, only imx7d uses it */
 	np = of_parse_phandle(node, "fsl,imx7d-pcie-phy", 0);
 	if (np) {
@@ -1363,9 +1533,13 @@ static int imx_pcie_probe(struct platform_device *pdev)
 		imx_pcie->clks[i].id = imx_pcie->drvdata->clk_names[i];

 	/* Fetch clocks */
-	ret = devm_clk_bulk_get(dev, imx_pcie->drvdata->clks_cnt, imx_pcie->clks);
+	req_cnt = imx_pcie->drvdata->clks_cnt - imx_pcie->drvdata->clks_optional_cnt;
+	ret = devm_clk_bulk_get(dev, req_cnt, imx_pcie->clks);
 	if (ret)
 		return ret;
+	imx_pcie->clks[req_cnt].clk = devm_clk_get_optional(dev, "ref");
+	if (IS_ERR(imx_pcie->clks[req_cnt].clk))
+		return PTR_ERR(imx_pcie->clks[req_cnt].clk);

 	if (imx_check_flag(imx_pcie, IMX_PCIE_FLAG_HAS_PHYDRV)) {
 		imx_pcie->phy = devm_phy_get(dev, "pcie-phy");
@@ -1391,7 +1565,6 @@ static int imx_pcie_probe(struct platform_device *pdev)
 	switch (imx_pcie->drvdata->variant) {
 	case IMX8MQ:
 	case IMX8MQ_EP:
-	case IMX7D:
 		if (dbi_base->start == IMX8MQ_PCIE2_BASE_ADDR)
 			imx_pcie->controller_id = 1;
 		break;
@@ -1399,13 +1572,6 @@ static int imx_pcie_probe(struct platform_device *pdev)
 		break;
 	}

-	/* Grab turnoff reset */
-	imx_pcie->turnoff_reset = devm_reset_control_get_optional_exclusive(dev, "turnoff");
-	if (IS_ERR(imx_pcie->turnoff_reset)) {
-		dev_err(dev, "Failed to get TURNOFF reset control\n");
-		return PTR_ERR(imx_pcie->turnoff_reset);
-	}
-
 	if (imx_pcie->drvdata->gpr) {
 		/* Grab GPR config register range */
 		imx_pcie->iomuxc_gpr =
@@ -1484,6 +1650,7 @@ static int imx_pcie_probe(struct platform_device *pdev)
 		if (ret < 0)
 			return ret;
 	} else {
+		pci->pp.use_atu_msg = true;
 		ret = dw_pcie_host_init(&pci->pp);
 		if (ret < 0)
 			return ret;
@@ -1513,6 +1680,7 @@ static const char * const imx8mm_clks[] = {"pcie_bus", "pcie", "pcie_aux"};
 static const char * const imx8mq_clks[] = {"pcie_bus", "pcie", "pcie_phy", "pcie_aux"};
 static const char * const imx6sx_clks[] = {"pcie_bus", "pcie", "pcie_phy", "pcie_inbound_axi"};
 static const char * const imx8q_clks[] = {"mstr", "slv", "dbi"};
+static const char * const imx95_clks[] = {"pcie_bus", "pcie", "pcie_phy", "pcie_aux", "ref"};

 static const struct imx_pcie_drvdata drvdata[] = {
 	[IMX6Q] = {
@@ -1548,6 +1716,7 @@ static const struct imx_pcie_drvdata drvdata[] = {
 		.init_phy = imx6sx_pcie_init_phy,
 		.enable_ref_clk = imx6sx_pcie_enable_ref_clk,
 		.core_reset = imx6sx_pcie_core_reset,
+		.ops = &imx_pcie_host_ops,
 	},
 	[IMX6QP] = {
 		.variant = IMX6QP,
@@ -1565,6 +1734,7 @@ static const struct imx_pcie_drvdata drvdata[] = {
 		.init_phy = imx_pcie_init_phy,
 		.enable_ref_clk = imx6q_pcie_enable_ref_clk,
 		.core_reset = imx6qp_pcie_core_reset,
+		.ops = &imx_pcie_host_ops,
 	},
 	[IMX7D] = {
 		.variant = IMX7D,
@@ -1576,14 +1746,14 @@ static const struct imx_pcie_drvdata drvdata[] = {
 		.clks_cnt = ARRAY_SIZE(imx6q_clks),
 		.mode_off[0] = IOMUXC_GPR12,
 		.mode_mask[0] = IMX6Q_GPR12_DEVICE_TYPE,
-		.init_phy = imx7d_pcie_init_phy,
 		.enable_ref_clk = imx7d_pcie_enable_ref_clk,
 		.core_reset = imx7d_pcie_core_reset,
 	},
 	[IMX8MQ] = {
 		.variant = IMX8MQ,
 		.flags = IMX_PCIE_FLAG_HAS_APP_RESET |
-			 IMX_PCIE_FLAG_HAS_PHY_RESET,
+			 IMX_PCIE_FLAG_HAS_PHY_RESET |
+			 IMX_PCIE_FLAG_SUPPORTS_SUSPEND,
 		.gpr = "fsl,imx8mq-iomuxc-gpr",
 		.clk_names = imx8mq_clks,
 		.clks_cnt = ARRAY_SIZE(imx8mq_clks),
@@ -1621,15 +1791,19 @@ static const struct imx_pcie_drvdata drvdata[] = {
 	[IMX8Q] = {
 		.variant = IMX8Q,
 		.flags = IMX_PCIE_FLAG_HAS_PHYDRV |
-			 IMX_PCIE_FLAG_CPU_ADDR_FIXUP,
+			 IMX_PCIE_FLAG_CPU_ADDR_FIXUP |
+			 IMX_PCIE_FLAG_SUPPORTS_SUSPEND,
 		.clk_names = imx8q_clks,
 		.clks_cnt = ARRAY_SIZE(imx8q_clks),
 	},
 	[IMX95] = {
 		.variant = IMX95,
-		.flags = IMX_PCIE_FLAG_HAS_SERDES,
-		.clk_names = imx8mq_clks,
-		.clks_cnt = ARRAY_SIZE(imx8mq_clks),
+		.flags = IMX_PCIE_FLAG_HAS_SERDES |
+			 IMX_PCIE_FLAG_HAS_LUT |
+			 IMX_PCIE_FLAG_SUPPORTS_SUSPEND,
+		.clk_names = imx95_clks,
+		.clks_cnt = ARRAY_SIZE(imx95_clks),
+		.clks_optional_cnt = 1,
 		.ltssm_off = IMX95_PE0_GEN_CTRL_3,
 		.ltssm_mask = IMX95_PCIE_LTSSM_EN,
 		.mode_off[0] = IMX95_PE0_GEN_CTRL_1,
@@ -1678,6 +1852,14 @@ static const struct imx_pcie_drvdata drvdata[] = {
 		.epc_features = &imx8m_pcie_epc_features,
 		.enable_ref_clk = imx8mm_pcie_enable_ref_clk,
 	},
+	[IMX8Q_EP] = {
+		.variant = IMX8Q_EP,
+		.flags = IMX_PCIE_FLAG_HAS_PHYDRV,
+		.mode = DW_PCIE_EP_TYPE,
+		.epc_features = &imx8q_pcie_epc_features,
+		.clk_names = imx8q_clks,
+		.clks_cnt = ARRAY_SIZE(imx8q_clks),
+	},
 	[IMX95_EP] = {
 		.variant = IMX95_EP,
 		.flags = IMX_PCIE_FLAG_HAS_SERDES |
@ -1707,6 +1889,7 @@ static const struct of_device_id imx_pcie_of_match[] = {
|
|||
{ .compatible = "fsl,imx8mq-pcie-ep", .data = &drvdata[IMX8MQ_EP], },
|
||||
{ .compatible = "fsl,imx8mm-pcie-ep", .data = &drvdata[IMX8MM_EP], },
|
||||
{ .compatible = "fsl,imx8mp-pcie-ep", .data = &drvdata[IMX8MP_EP], },
|
||||
{ .compatible = "fsl,imx8q-pcie-ep", .data = &drvdata[IMX8Q_EP], },
|
||||
{ .compatible = "fsl,imx95-pcie-ep", .data = &drvdata[IMX95_EP], },
|
||||
{},
|
||||
};
|
||||
|
|
|
@@ -329,7 +329,6 @@ static int ls_pcie_probe(struct platform_device *pdev)
 	struct ls_pcie *pcie;
 	struct resource *dbi_base;
 	u32 index[2];
-	int ret;
 
 	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
 	if (!pcie)

@@ -355,16 +354,15 @@ static int ls_pcie_probe(struct platform_device *pdev)
 	pcie->pf_lut_base = pci->dbi_base + pcie->drvdata->pf_lut_off;
 
 	if (pcie->drvdata->scfg_support) {
-		pcie->scfg = syscon_regmap_lookup_by_phandle(dev->of_node, "fsl,pcie-scfg");
+		pcie->scfg =
+			syscon_regmap_lookup_by_phandle_args(dev->of_node,
+							     "fsl,pcie-scfg", 2,
+							     index);
 		if (IS_ERR(pcie->scfg)) {
 			dev_err(dev, "No syscfg phandle specified\n");
 			return PTR_ERR(pcie->scfg);
 		}
 
-		ret = of_property_read_u32_array(dev->of_node, "fsl,pcie-scfg", index, 2);
-		if (ret)
-			return ret;
-
 		pcie->index = index[1];
 	}
 

@@ -369,9 +369,22 @@ static int artpec6_pcie_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
 	return 0;
 }
 
+static const struct pci_epc_features artpec6_pcie_epc_features = {
+	.linkup_notifier = false,
+	.msi_capable = true,
+	.msix_capable = false,
+};
+
+static const struct pci_epc_features *
+artpec6_pcie_get_features(struct dw_pcie_ep *ep)
+{
+	return &artpec6_pcie_epc_features;
+}
+
 static const struct dw_pcie_ep_ops pcie_ep_ops = {
 	.init = artpec6_pcie_ep_init,
 	.raise_irq = artpec6_pcie_raise_irq,
+	.get_features = artpec6_pcie_get_features,
 };
 
 static int artpec6_pcie_probe(struct platform_device *pdev)

@@ -128,7 +128,8 @@ static int dw_pcie_ep_write_header(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
 }
 
 static int dw_pcie_ep_inbound_atu(struct dw_pcie_ep *ep, u8 func_no, int type,
-				  dma_addr_t cpu_addr, enum pci_barno bar)
+				  dma_addr_t cpu_addr, enum pci_barno bar,
+				  size_t size)
 {
 	int ret;
 	u32 free_win;

@@ -145,7 +146,7 @@ static int dw_pcie_ep_inbound_atu(struct dw_pcie_ep *ep, u8 func_no, int type,
 	}
 
 	ret = dw_pcie_prog_ep_inbound_atu(pci, func_no, free_win, type,
-					  cpu_addr, bar);
+					  cpu_addr, bar, size);
 	if (ret < 0) {
 		dev_err(pci->dev, "Failed to program IB window\n");
 		return ret;

@@ -222,20 +223,31 @@ static int dw_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
 	if ((flags & PCI_BASE_ADDRESS_MEM_TYPE_64) && (bar & 1))
 		return -EINVAL;
 
+	/*
+	 * Certain EPF drivers dynamically change the physical address of a BAR
+	 * (i.e. they call set_bar() twice, without ever calling clear_bar(), as
+	 * calling clear_bar() would clear the BAR's PCI address assigned by the
+	 * host).
+	 */
+	if (ep->epf_bar[bar]) {
+		/*
+		 * We can only dynamically change a BAR if the new BAR size and
+		 * BAR flags do not differ from the existing configuration.
+		 */
+		if (ep->epf_bar[bar]->barno != bar ||
+		    ep->epf_bar[bar]->size != size ||
+		    ep->epf_bar[bar]->flags != flags)
+			return -EINVAL;
+
+		/*
+		 * When dynamically changing a BAR, skip writing the BAR reg, as
+		 * that would clear the BAR's PCI address assigned by the host.
+		 */
+		goto config_atu;
+	}
+
 	reg = PCI_BASE_ADDRESS_0 + (4 * bar);
 
-	if (!(flags & PCI_BASE_ADDRESS_SPACE))
-		type = PCIE_ATU_TYPE_MEM;
-	else
-		type = PCIE_ATU_TYPE_IO;
-
-	ret = dw_pcie_ep_inbound_atu(ep, func_no, type, epf_bar->phys_addr, bar);
-	if (ret)
-		return ret;
-
-	if (ep->epf_bar[bar])
-		return 0;
-
 	dw_pcie_dbi_ro_wr_en(pci);
 
 	dw_pcie_ep_writel_dbi2(ep, func_no, reg, lower_32_bits(size - 1));

@@ -246,9 +258,21 @@ static int dw_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
 		dw_pcie_ep_writel_dbi(ep, func_no, reg + 4, 0);
 	}
 
-	ep->epf_bar[bar] = epf_bar;
 	dw_pcie_dbi_ro_wr_dis(pci);
 
+config_atu:
+	if (!(flags & PCI_BASE_ADDRESS_SPACE))
+		type = PCIE_ATU_TYPE_MEM;
+	else
+		type = PCIE_ATU_TYPE_IO;
+
+	ret = dw_pcie_ep_inbound_atu(ep, func_no, type, epf_bar->phys_addr, bar,
+				     size);
+	if (ret)
+		return ret;
+
+	ep->epf_bar[bar] = epf_bar;
+
 	return 0;
 }

@@ -436,18 +436,18 @@ int dw_pcie_host_init(struct dw_pcie_rp *pp)
 		return ret;
 
 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "config");
-	if (res) {
-		pp->cfg0_size = resource_size(res);
-		pp->cfg0_base = res->start;
-
-		pp->va_cfg0_base = devm_pci_remap_cfg_resource(dev, res);
-		if (IS_ERR(pp->va_cfg0_base))
-			return PTR_ERR(pp->va_cfg0_base);
-	} else {
-		dev_err(dev, "Missing *config* reg space\n");
+	if (!res) {
+		dev_err(dev, "Missing \"config\" reg space\n");
 		return -ENODEV;
 	}
 
+	pp->cfg0_size = resource_size(res);
+	pp->cfg0_base = res->start;
+
+	pp->va_cfg0_base = devm_pci_remap_cfg_resource(dev, res);
+	if (IS_ERR(pp->va_cfg0_base))
+		return PTR_ERR(pp->va_cfg0_base);
+
 	bridge = devm_pci_alloc_host_bridge(dev, 0);
 	if (!bridge)
 		return -ENOMEM;

@@ -530,8 +530,14 @@ int dw_pcie_host_init(struct dw_pcie_rp *pp)
 		goto err_remove_edma;
 	}
 
-	/* Ignore errors, the link may come up later */
-	dw_pcie_wait_for_link(pci);
+	/*
+	 * Note: Skip the link up delay only when a Link Up IRQ is present.
+	 * If there is no Link Up IRQ, we should not bypass the delay
+	 * because that would require users to manually rescan for devices.
+	 */
+	if (!pp->use_linkup_irq)
+		/* Ignore errors, the link may come up later */
+		dw_pcie_wait_for_link(pci);
 
 	bridge->sysdata = pp;

@@ -918,7 +924,7 @@ int dw_pcie_suspend_noirq(struct dw_pcie *pci)
 {
 	u8 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
 	u32 val;
-	int ret = 0;
+	int ret;
 
 	/*
 	 * If L1SS is supported, then do not put the link into L2 as some

@@ -927,25 +933,33 @@ int dw_pcie_suspend_noirq(struct dw_pcie *pci)
 	if (dw_pcie_readw_dbi(pci, offset + PCI_EXP_LNKCTL) & PCI_EXP_LNKCTL_ASPM_L1)
 		return 0;
 
 	if (dw_pcie_get_ltssm(pci) <= DW_PCIE_LTSSM_DETECT_ACT)
 		return 0;
 
-	if (pci->pp.ops->pme_turn_off)
+	if (pci->pp.ops->pme_turn_off) {
 		pci->pp.ops->pme_turn_off(&pci->pp);
-	else
+	} else {
 		ret = dw_pcie_pme_turn_off(pci);
+		if (ret)
+			return ret;
+	}
 
-	if (ret)
-		return ret;
-
-	ret = read_poll_timeout(dw_pcie_get_ltssm, val, val == DW_PCIE_LTSSM_L2_IDLE,
+	ret = read_poll_timeout(dw_pcie_get_ltssm, val,
+				val == DW_PCIE_LTSSM_L2_IDLE ||
+				val <= DW_PCIE_LTSSM_DETECT_WAIT,
 				PCIE_PME_TO_L2_TIMEOUT_US/10,
 				PCIE_PME_TO_L2_TIMEOUT_US, false, pci);
 	if (ret) {
+		/* Only log message when LTSSM isn't in DETECT or POLL */
 		dev_err(pci->dev, "Timeout waiting for L2 entry! LTSSM: 0x%x\n", val);
 		return ret;
 	}
 
+	/*
+	 * Per PCIe r6.0, sec 5.3.3.2.1, software should wait at least
+	 * 100ns after L2/L3 Ready before turning off refclock and
+	 * main power. This is harmless when no endpoint is connected.
+	 */
+	udelay(1);
+
 	dw_pcie_stop_link(pci);
 	if (pci->pp.ops->deinit)
 		pci->pp.ops->deinit(&pci->pp);

@@ -597,11 +597,12 @@ int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, int index, int type,
 }
 
 int dw_pcie_prog_ep_inbound_atu(struct dw_pcie *pci, u8 func_no, int index,
-				int type, u64 cpu_addr, u8 bar)
+				int type, u64 cpu_addr, u8 bar, size_t size)
 {
 	u32 retries, val;
 
-	if (!IS_ALIGNED(cpu_addr, pci->region_align))
+	if (!IS_ALIGNED(cpu_addr, pci->region_align) ||
+	    !IS_ALIGNED(cpu_addr, size))
 		return -EINVAL;
 
 	dw_pcie_writel_atu_ib(pci, index, PCIE_ATU_LOWER_TARGET,

@@ -970,7 +971,7 @@ static int dw_pcie_edma_irq_verify(struct dw_pcie *pci)
 {
 	struct platform_device *pdev = to_platform_device(pci->dev);
 	u16 ch_cnt = pci->edma.ll_wr_cnt + pci->edma.ll_rd_cnt;
-	char name[6];
+	char name[15];
 	int ret;
 
 	if (pci->edma.nr_irqs == 1)

@@ -330,6 +330,7 @@ enum dw_pcie_ltssm {
 	/* Need to align with PCIE_PORT_DEBUG0 bits 0:5 */
 	DW_PCIE_LTSSM_DETECT_QUIET = 0x0,
 	DW_PCIE_LTSSM_DETECT_ACT = 0x1,
+	DW_PCIE_LTSSM_DETECT_WAIT = 0x6,
 	DW_PCIE_LTSSM_L0 = 0x11,
 	DW_PCIE_LTSSM_L2_IDLE = 0x15,

@@ -379,6 +380,7 @@ struct dw_pcie_rp {
 	bool use_atu_msg;
 	int msg_atu_index;
 	struct resource *msg_res;
+	bool use_linkup_irq;
 };
 
 struct dw_pcie_ep_ops {

@@ -491,16 +493,13 @@ int dw_pcie_prog_outbound_atu(struct dw_pcie *pci,
 int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, int index, int type,
 			     u64 cpu_addr, u64 pci_addr, u64 size);
 int dw_pcie_prog_ep_inbound_atu(struct dw_pcie *pci, u8 func_no, int index,
-				int type, u64 cpu_addr, u8 bar);
+				int type, u64 cpu_addr, u8 bar, size_t size);
 void dw_pcie_disable_atu(struct dw_pcie *pci, u32 dir, int index);
 void dw_pcie_setup(struct dw_pcie *pci);
 void dw_pcie_iatu_detect(struct dw_pcie *pci);
 int dw_pcie_edma_detect(struct dw_pcie *pci);
 void dw_pcie_edma_remove(struct dw_pcie *pci);
 
-int dw_pcie_suspend_noirq(struct dw_pcie *pci);
-int dw_pcie_resume_noirq(struct dw_pcie *pci);
-
 static inline void dw_pcie_writel_dbi(struct dw_pcie *pci, u32 reg, u32 val)
 {
 	dw_pcie_write_dbi(pci, reg, 0x4, val);

@@ -678,6 +677,8 @@ static inline enum dw_pcie_ltssm dw_pcie_get_ltssm(struct dw_pcie *pci)
 }
 
 #ifdef CONFIG_PCIE_DW_HOST
+int dw_pcie_suspend_noirq(struct dw_pcie *pci);
+int dw_pcie_resume_noirq(struct dw_pcie *pci);
 irqreturn_t dw_handle_msi_irq(struct dw_pcie_rp *pp);
 int dw_pcie_setup_rc(struct dw_pcie_rp *pp);
 int dw_pcie_host_init(struct dw_pcie_rp *pp);

@@ -686,6 +687,16 @@ int dw_pcie_allocate_domains(struct dw_pcie_rp *pp);
 void __iomem *dw_pcie_own_conf_map_bus(struct pci_bus *bus, unsigned int devfn,
 				       int where);
 #else
+static inline int dw_pcie_suspend_noirq(struct dw_pcie *pci)
+{
+	return 0;
+}
+
+static inline int dw_pcie_resume_noirq(struct dw_pcie *pci)
+{
+	return 0;
+}
+
 static inline irqreturn_t dw_handle_msi_irq(struct dw_pcie_rp *pp)
 {
 	return IRQ_NONE;

@@ -389,6 +389,34 @@ static const struct dw_pcie_ops dw_pcie_ops = {
 	.stop_link = rockchip_pcie_stop_link,
 };
 
+static irqreturn_t rockchip_pcie_rc_sys_irq_thread(int irq, void *arg)
+{
+	struct rockchip_pcie *rockchip = arg;
+	struct dw_pcie *pci = &rockchip->pci;
+	struct dw_pcie_rp *pp = &pci->pp;
+	struct device *dev = pci->dev;
+	u32 reg, val;
+
+	reg = rockchip_pcie_readl_apb(rockchip, PCIE_CLIENT_INTR_STATUS_MISC);
+	rockchip_pcie_writel_apb(rockchip, reg, PCIE_CLIENT_INTR_STATUS_MISC);
+
+	dev_dbg(dev, "PCIE_CLIENT_INTR_STATUS_MISC: %#x\n", reg);
+	dev_dbg(dev, "LTSSM_STATUS: %#x\n", rockchip_pcie_get_ltssm(rockchip));
+
+	if (reg & PCIE_RDLH_LINK_UP_CHGED) {
+		val = rockchip_pcie_get_ltssm(rockchip);
+		if ((val & PCIE_LINKUP) == PCIE_LINKUP) {
+			dev_dbg(dev, "Received Link up event. Starting enumeration!\n");
+			/* Rescan the bus to enumerate endpoint devices */
+			pci_lock_rescan_remove();
+			pci_rescan_bus(pp->bridge->bus);
+			pci_unlock_rescan_remove();
+		}
+	}
+
+	return IRQ_HANDLED;
+}
+
 static irqreturn_t rockchip_pcie_ep_sys_irq_thread(int irq, void *arg)
 {
 	struct rockchip_pcie *rockchip = arg;

@@ -418,14 +446,29 @@ static irqreturn_t rockchip_pcie_ep_sys_irq_thread(int irq, void *arg)
 	return IRQ_HANDLED;
 }
 
-static int rockchip_pcie_configure_rc(struct rockchip_pcie *rockchip)
+static int rockchip_pcie_configure_rc(struct platform_device *pdev,
+				      struct rockchip_pcie *rockchip)
 {
+	struct device *dev = &pdev->dev;
 	struct dw_pcie_rp *pp;
+	int irq, ret;
 	u32 val;
 
 	if (!IS_ENABLED(CONFIG_PCIE_ROCKCHIP_DW_HOST))
 		return -ENODEV;
 
+	irq = platform_get_irq_byname(pdev, "sys");
+	if (irq < 0)
+		return irq;
+
+	ret = devm_request_threaded_irq(dev, irq, NULL,
+					rockchip_pcie_rc_sys_irq_thread,
+					IRQF_ONESHOT, "pcie-sys-rc", rockchip);
+	if (ret) {
+		dev_err(dev, "failed to request PCIe sys IRQ\n");
+		return ret;
+	}
+
 	/* LTSSM enable control mode */
 	val = HIWORD_UPDATE_BIT(PCIE_LTSSM_ENABLE_ENHANCE);
 	rockchip_pcie_writel_apb(rockchip, val, PCIE_CLIENT_HOT_RESET_CTRL);

@@ -435,8 +478,19 @@ static int rockchip_pcie_configure_rc(struct rockchip_pcie *rockchip)
 
 	pp = &rockchip->pci.pp;
 	pp->ops = &rockchip_pcie_host_ops;
+	pp->use_linkup_irq = true;
 
-	return dw_pcie_host_init(pp);
+	ret = dw_pcie_host_init(pp);
+	if (ret) {
+		dev_err(dev, "failed to initialize host\n");
+		return ret;
+	}
+
+	/* unmask DLL up/down indicator */
+	val = HIWORD_UPDATE(PCIE_RDLH_LINK_UP_CHGED, 0);
+	rockchip_pcie_writel_apb(rockchip, val, PCIE_CLIENT_INTR_MASK_MISC);
+
+	return ret;
 }
 
 static int rockchip_pcie_configure_ep(struct platform_device *pdev,

@@ -450,14 +504,12 @@ static int rockchip_pcie_configure_ep(struct platform_device *pdev,
 		return -ENODEV;
 
 	irq = platform_get_irq_byname(pdev, "sys");
-	if (irq < 0) {
-		dev_err(dev, "missing sys IRQ resource\n");
+	if (irq < 0)
 		return irq;
-	}
 
 	ret = devm_request_threaded_irq(dev, irq, NULL,
 					rockchip_pcie_ep_sys_irq_thread,
-					IRQF_ONESHOT, "pcie-sys", rockchip);
+					IRQF_ONESHOT, "pcie-sys-ep", rockchip);
 	if (ret) {
 		dev_err(dev, "failed to request PCIe sys IRQ\n");
 		return ret;

@@ -491,7 +543,8 @@ static int rockchip_pcie_configure_ep(struct platform_device *pdev,
 	pci_epc_init_notify(rockchip->pci.ep.epc);
 
 	/* unmask DLL up/down indicator and hot reset/link-down reset */
-	rockchip_pcie_writel_apb(rockchip, 0x60000, PCIE_CLIENT_INTR_MASK_MISC);
+	val = HIWORD_UPDATE(PCIE_RDLH_LINK_UP_CHGED | PCIE_LINK_REQ_RST_NOT_INT, 0);
+	rockchip_pcie_writel_apb(rockchip, val, PCIE_CLIENT_INTR_MASK_MISC);
 
 	return ret;
 }

@@ -553,7 +606,7 @@ static int rockchip_pcie_probe(struct platform_device *pdev)
 
 	switch (data->mode) {
 	case DW_PCIE_RC_TYPE:
-		ret = rockchip_pcie_configure_rc(rockchip);
+		ret = rockchip_pcie_configure_rc(pdev, rockchip);
 		if (ret)
 			goto deinit_clk;
 		break;

@@ -1569,6 +1569,8 @@ static irqreturn_t qcom_pcie_global_irq_thread(int irq, void *data)
 		pci_lock_rescan_remove();
 		pci_rescan_bus(pp->bridge->bus);
 		pci_unlock_rescan_remove();
+
+		qcom_pcie_icc_opp_update(pcie);
 	} else {
 		dev_WARN_ONCE(dev, 1, "Received unknown event. INT_STATUS: 0x%08x\n",
 			      status);

@@ -1703,6 +1705,10 @@ static int qcom_pcie_probe(struct platform_device *pdev)
 
 	platform_set_drvdata(pdev, pcie);
 
+	irq = platform_get_irq_byname_optional(pdev, "global");
+	if (irq > 0)
+		pp->use_linkup_irq = true;
+
 	ret = dw_pcie_host_init(pp);
 	if (ret) {
 		dev_err(dev, "cannot initialize host\n");

@@ -1716,7 +1722,6 @@ static int qcom_pcie_probe(struct platform_device *pdev)
 		goto err_host_deinit;
 	}
 
-	irq = platform_get_irq_byname_optional(pdev, "global");
 	if (irq > 0) {
 		ret = devm_request_threaded_irq(&pdev->dev, irq, NULL,
 						qcom_pcie_global_irq_thread,

@@ -75,6 +75,8 @@ int pci_host_common_probe(struct platform_device *pdev)
 
 	bridge->sysdata = cfg;
 	bridge->ops = (struct pci_ops *)&ops->pci_ops;
+	bridge->enable_device = ops->enable_device;
+	bridge->disable_device = ops->disable_device;
 	bridge->msi_domain = true;
 
 	return pci_host_probe(bridge);

@@ -1715,6 +1715,7 @@ static const struct of_device_id mvebu_pcie_of_match_table[] = {
 	{ .compatible = "marvell,kirkwood-pcie", },
 	{},
 };
+MODULE_DEVICE_TABLE(of, mvebu_pcie_of_match_table);
 
 static const struct dev_pm_ops mvebu_pcie_pm_ops = {
 	NOIRQ_SYSTEM_SLEEP_PM_OPS(mvebu_pcie_suspend, mvebu_pcie_resume)

@@ -26,7 +26,6 @@
 #include <linux/list.h>
 #include <linux/module.h>
 #include <linux/msi.h>
-#include <linux/notifier.h>
 #include <linux/of_irq.h>
 #include <linux/pci-ecam.h>
 

@@ -667,12 +666,16 @@ static struct apple_pcie_port *apple_pcie_get_port(struct pci_dev *pdev)
 	return NULL;
 }
 
-static int apple_pcie_add_device(struct apple_pcie_port *port,
-				 struct pci_dev *pdev)
+static int apple_pcie_enable_device(struct pci_host_bridge *bridge, struct pci_dev *pdev)
 {
 	u32 sid, rid = pci_dev_id(pdev);
+	struct apple_pcie_port *port;
 	int idx, err;
 
+	port = apple_pcie_get_port(pdev);
+	if (!port)
+		return 0;
+
 	dev_dbg(&pdev->dev, "added to bus %s, index %d\n",
 		pci_name(pdev->bus->self), port->idx);
 

@@ -698,12 +701,16 @@ static int apple_pcie_add_device(struct apple_pcie_port *port,
 	return idx >= 0 ? 0 : -ENOSPC;
 }
 
-static void apple_pcie_release_device(struct apple_pcie_port *port,
-				      struct pci_dev *pdev)
+static void apple_pcie_disable_device(struct pci_host_bridge *bridge, struct pci_dev *pdev)
 {
+	struct apple_pcie_port *port;
 	u32 rid = pci_dev_id(pdev);
 	int idx;
 
+	port = apple_pcie_get_port(pdev);
+	if (!port)
+		return;
+
 	mutex_lock(&port->pcie->lock);
 
 	for_each_set_bit(idx, port->sid_map, port->sid_map_sz) {

@@ -721,45 +728,6 @@ static void apple_pcie_release_device(struct apple_pcie_port *port,
 	mutex_unlock(&port->pcie->lock);
 }
 
-static int apple_pcie_bus_notifier(struct notifier_block *nb,
-				   unsigned long action,
-				   void *data)
-{
-	struct device *dev = data;
-	struct pci_dev *pdev = to_pci_dev(dev);
-	struct apple_pcie_port *port;
-	int err;
-
-	/*
-	 * This is a bit ugly. We assume that if we get notified for
-	 * any PCI device, we must be in charge of it, and that there
-	 * is no other PCI controller in the whole system. It probably
-	 * holds for now, but who knows for how long?
-	 */
-	port = apple_pcie_get_port(pdev);
-	if (!port)
-		return NOTIFY_DONE;
-
-	switch (action) {
-	case BUS_NOTIFY_ADD_DEVICE:
-		err = apple_pcie_add_device(port, pdev);
-		if (err)
-			return notifier_from_errno(err);
-		break;
-	case BUS_NOTIFY_DEL_DEVICE:
-		apple_pcie_release_device(port, pdev);
-		break;
-	default:
-		return NOTIFY_DONE;
-	}
-
-	return NOTIFY_OK;
-}
-
-static struct notifier_block apple_pcie_nb = {
-	.notifier_call = apple_pcie_bus_notifier,
-};
-
 static int apple_pcie_init(struct pci_config_window *cfg)
 {
 	struct device *dev = cfg->parent;

@@ -799,23 +767,10 @@ static int apple_pcie_init(struct pci_config_window *cfg)
 	return 0;
 }
 
-static int apple_pcie_probe(struct platform_device *pdev)
-{
-	int ret;
-
-	ret = bus_register_notifier(&pci_bus_type, &apple_pcie_nb);
-	if (ret)
-		return ret;
-
-	ret = pci_host_common_probe(pdev);
-	if (ret)
-		bus_unregister_notifier(&pci_bus_type, &apple_pcie_nb);
-
-	return ret;
-}
-
 static const struct pci_ecam_ops apple_pcie_cfg_ecam_ops = {
 	.init = apple_pcie_init,
+	.enable_device = apple_pcie_enable_device,
+	.disable_device = apple_pcie_disable_device,
 	.pci_ops = {
 		.map_bus = pci_ecam_map_bus,
 		.read = pci_generic_config_read,

@@ -830,7 +785,7 @@ static const struct of_device_id apple_pcie_of_match[] = {
 MODULE_DEVICE_TABLE(of, apple_pcie_of_match);
 
 static struct platform_driver apple_pcie_driver = {
-	.probe = apple_pcie_probe,
+	.probe = pci_host_common_probe,
 	.driver = {
 		.name = "pcie-apple",
 		.of_match_table = apple_pcie_of_match,

@@ -125,6 +125,8 @@
 
 #define MAX_NUM_PHY_RESETS		3
 
+#define PCIE_MTK_RESET_TIME_US		10
+
 /* Time in ms needed to complete PCIe reset on EN7581 SoC */
 #define PCIE_EN7581_RESET_TIME_MS	100
 

@@ -133,10 +135,18 @@ struct mtk_gen3_pcie;
 #define PCIE_CONF_LINK2_CTL_STS		(PCIE_CFG_OFFSET_ADDR + 0xb0)
 #define PCIE_CONF_LINK2_LCR2_LINK_SPEED	GENMASK(3, 0)
 
+enum mtk_gen3_pcie_flags {
+	SKIP_PCIE_RSTB = BIT(0), /* Skip PERST# assertion during device
+				  * probing or suspend/resume phase to
+				  * avoid hw bugs/issues.
+				  */
+};
+
 /**
  * struct mtk_gen3_pcie_pdata - differentiate between host generations
  * @power_up: pcie power_up callback
  * @phy_resets: phy reset lines SoC data.
+ * @flags: pcie device flags.
  */
 struct mtk_gen3_pcie_pdata {
 	int (*power_up)(struct mtk_gen3_pcie *pcie);

@@ -144,6 +154,7 @@ struct mtk_gen3_pcie_pdata {
 		const char *id[MAX_NUM_PHY_RESETS];
 		int num_resets;
 	} phy_resets;
+	u32 flags;
 };
 
 /**

@@ -438,22 +449,33 @@ static int mtk_pcie_startup_port(struct mtk_gen3_pcie *pcie)
 	val |= PCIE_DISABLE_DVFSRC_VLT_REQ;
 	writel_relaxed(val, pcie->base + PCIE_MISC_CTRL_REG);
 
-	/* Assert all reset signals */
-	val = readl_relaxed(pcie->base + PCIE_RST_CTRL_REG);
-	val |= PCIE_MAC_RSTB | PCIE_PHY_RSTB | PCIE_BRG_RSTB | PCIE_PE_RSTB;
-	writel_relaxed(val, pcie->base + PCIE_RST_CTRL_REG);
-
 	/*
-	 * Described in PCIe CEM specification sections 2.2 (PERST# Signal)
-	 * and 2.2.1 (Initial Power-Up (G3 to S0)).
-	 * The deassertion of PERST# should be delayed 100ms (TPVPERL)
-	 * for the power and clock to become stable.
+	 * Airoha EN7581 has a hw bug asserting/releasing PCIE_PE_RSTB signal
+	 * causing occasional PCIe link down. In order to overcome the issue,
+	 * PCIE_RSTB signals are not asserted/released at this stage and the
+	 * PCIe block is reset using en7523_reset_assert() and
+	 * en7581_pci_enable().
 	 */
-	msleep(100);
+	if (!(pcie->soc->flags & SKIP_PCIE_RSTB)) {
+		/* Assert all reset signals */
+		val = readl_relaxed(pcie->base + PCIE_RST_CTRL_REG);
+		val |= PCIE_MAC_RSTB | PCIE_PHY_RSTB | PCIE_BRG_RSTB |
+		       PCIE_PE_RSTB;
+		writel_relaxed(val, pcie->base + PCIE_RST_CTRL_REG);
 
-	/* De-assert reset signals */
-	val &= ~(PCIE_MAC_RSTB | PCIE_PHY_RSTB | PCIE_BRG_RSTB | PCIE_PE_RSTB);
-	writel_relaxed(val, pcie->base + PCIE_RST_CTRL_REG);
+		/*
+		 * Described in PCIe CEM specification revision 6.0.
+		 *
+		 * The deassertion of PERST# should be delayed 100ms (TPVPERL)
+		 * for the power and clock to become stable.
+		 */
+		msleep(PCIE_T_PVPERL_MS);
+
+		/* De-assert reset signals */
+		val &= ~(PCIE_MAC_RSTB | PCIE_PHY_RSTB | PCIE_BRG_RSTB |
+			 PCIE_PE_RSTB);
+		writel_relaxed(val, pcie->base + PCIE_RST_CTRL_REG);
+	}
 
 	/* Check if the link is up or not */
 	err = readl_poll_timeout(pcie->base + PCIE_LINK_STATUS_REG, val,

@@ -913,11 +935,20 @@ static int mtk_pcie_en7581_power_up(struct mtk_gen3_pcie *pcie)
 	u32 val;
 
 	/*
-	 * Wait for the time needed to complete the bulk assert in
-	 * mtk_pcie_setup for EN7581 SoC.
+	 * The controller may have been left out of reset by the bootloader
+	 * so make sure that we get a clean start by asserting resets here.
 	 */
-	mdelay(PCIE_EN7581_RESET_TIME_MS);
+	reset_control_bulk_assert(pcie->soc->phy_resets.num_resets,
+				  pcie->phy_resets);
+	reset_control_assert(pcie->mac_reset);
+
+	/* Wait for the time needed to complete the reset lines assert. */
+	msleep(PCIE_EN7581_RESET_TIME_MS);
 
 	/*
 	 * Unlike the other MediaTek Gen3 controllers, the Airoha EN7581
 	 * requires PHY initialization and power-on before PHY reset deassert.
 	 */
 	err = phy_init(pcie->phy);
 	if (err) {
 		dev_err(dev, "failed to initialize PHY\n");

@@ -940,17 +971,11 @@ static int mtk_pcie_en7581_power_up(struct mtk_gen3_pcie *pcie)
 	 * Wait for the time needed to complete the bulk de-assert above.
 	 * This time is specific for EN7581 SoC.
 	 */
-	mdelay(PCIE_EN7581_RESET_TIME_MS);
+	msleep(PCIE_EN7581_RESET_TIME_MS);
 
 	pm_runtime_enable(dev);
 	pm_runtime_get_sync(dev);
 
-	err = clk_bulk_prepare(pcie->num_clks, pcie->clks);
-	if (err) {
-		dev_err(dev, "failed to prepare clock\n");
-		goto err_clk_prepare;
-	}
-
 	val = FIELD_PREP(PCIE_VAL_LN0_DOWNSTREAM, 0x47) |
 	      FIELD_PREP(PCIE_VAL_LN1_DOWNSTREAM, 0x47) |
 	      FIELD_PREP(PCIE_VAL_LN0_UPSTREAM, 0x41) |

@@ -963,17 +988,22 @@ static int mtk_pcie_en7581_power_up(struct mtk_gen3_pcie *pcie)
 	      FIELD_PREP(PCIE_K_FINETUNE_MAX, 0xf);
 	writel_relaxed(val, pcie->base + PCIE_PIPE4_PIE8_REG);
 
-	err = clk_bulk_enable(pcie->num_clks, pcie->clks);
+	err = clk_bulk_prepare_enable(pcie->num_clks, pcie->clks);
 	if (err) {
 		dev_err(dev, "failed to prepare clock\n");
-		goto err_clk_enable;
+		goto err_clk_prepare_enable;
 	}
 
+	/*
+	 * Airoha EN7581 performs PCIe reset via clk callbacks since it has a
+	 * hw issue with PCIE_PE_RSTB signal. Add wait for the time needed to
+	 * complete the PCIe reset.
+	 */
+	msleep(PCIE_T_PVPERL_MS);
+
 	return 0;
 
-err_clk_enable:
-	clk_bulk_unprepare(pcie->num_clks, pcie->clks);
-err_clk_prepare:
+err_clk_prepare_enable:
 	pm_runtime_put_sync(dev);
 	pm_runtime_disable(dev);
 	reset_control_bulk_assert(pcie->soc->phy_resets.num_resets, pcie->phy_resets);

@@ -990,6 +1020,15 @@ static int mtk_pcie_power_up(struct mtk_gen3_pcie *pcie)
 	struct device *dev = pcie->dev;
 	int err;
 
+	/*
+	 * The controller may have been left out of reset by the bootloader
+	 * so make sure that we get a clean start by asserting resets here.
+	 */
+	reset_control_bulk_assert(pcie->soc->phy_resets.num_resets,
+				  pcie->phy_resets);
+	reset_control_assert(pcie->mac_reset);
+	usleep_range(PCIE_MTK_RESET_TIME_US, 2 * PCIE_MTK_RESET_TIME_US);
+
 	/* PHY power on and enable pipe clock */
 	err = reset_control_bulk_deassert(pcie->soc->phy_resets.num_resets, pcie->phy_resets);
 	if (err) {

@@ -1074,14 +1113,6 @@ static int mtk_pcie_setup(struct mtk_gen3_pcie *pcie)
 	 * counter since the bulk is shared.
 	 */
 	reset_control_bulk_deassert(pcie->soc->phy_resets.num_resets, pcie->phy_resets);
-	/*
-	 * The controller may have been left out of reset by the bootloader
-	 * so make sure that we get a clean start by asserting resets here.
-	 */
-	reset_control_bulk_assert(pcie->soc->phy_resets.num_resets, pcie->phy_resets);
-
-	reset_control_assert(pcie->mac_reset);
-	usleep_range(10, 20);
 
 	/* Don't touch the hardware registers before power up */
 	err = pcie->soc->power_up(pcie);

@@ -1231,10 +1262,12 @@ static int mtk_pcie_suspend_noirq(struct device *dev)
 		return err;
 	}
 
-	/* Pull down the PERST# pin */
-	val = readl_relaxed(pcie->base + PCIE_RST_CTRL_REG);
-	val |= PCIE_PE_RSTB;
-	writel_relaxed(val, pcie->base + PCIE_RST_CTRL_REG);
+	if (!(pcie->soc->flags & SKIP_PCIE_RSTB)) {
+		/* Assert the PERST# pin */
+		val = readl_relaxed(pcie->base + PCIE_RST_CTRL_REG);
+		val |= PCIE_PE_RSTB;
+		writel_relaxed(val, pcie->base + PCIE_RST_CTRL_REG);
+	}
 
 	dev_dbg(pcie->dev, "entered L2 states successfully");

@@ -1285,6 +1318,7 @@ static const struct mtk_gen3_pcie_pdata mtk_pcie_soc_en7581 = {
 		.id[2] = "phy-lane2",
 		.num_resets = 3,
 	},
+	.flags = SKIP_PCIE_RSTB,
};

@@ -1301,6 +1335,7 @@ static struct platform_driver mtk_pcie_driver = {
 		.name = "mtk-pcie-gen3",
 		.of_match_table = mtk_pcie_of_match,
 		.pm = &mtk_pcie_pm_ops,
+		.probe_type = PROBE_PREFER_ASYNCHRONOUS,
 	},
 };

@@ -107,7 +107,7 @@ static int rcar_pcie_parse_outbound_ranges(struct rcar_pcie_endpoint *ep,
 	}
 	if (!devm_request_mem_region(&pdev->dev, res->start,
 				     resource_size(res),
-				     outbound_name)) {
+				     res->name)) {
 		dev_err(pcie->dev, "Cannot request memory region %s.\n",
 			outbound_name);
 		return -EIO;

@@ -40,6 +40,10 @@
  * @irq_pci_fn: the latest PCI function that has updated the mapping of
  *		the MSI/INTX IRQ dedicated outbound region.
  * @irq_pending: bitmask of asserted INTX IRQs.
+ * @perst_irq: IRQ used for the PERST# signal.
+ * @perst_asserted: True if the PERST# signal was asserted.
+ * @link_up: True if the PCI link is up.
+ * @link_training: Work item to execute PCI link training.
  */
 struct rockchip_pcie_ep {
 	struct rockchip_pcie rockchip;

@@ -784,6 +788,7 @@ static int rockchip_pcie_ep_init_ob_mem(struct rockchip_pcie_ep *ep)
 					   SZ_1M);
 	if (!ep->irq_cpu_addr) {
 		dev_err(dev, "failed to reserve memory space for MSI\n");
+		err = -ENOMEM;
 		goto err_epc_mem_exit;
 	}
 

@@ -30,7 +30,7 @@ int rockchip_pcie_parse_dt(struct rockchip_pcie *rockchip)
 	struct platform_device *pdev = to_platform_device(dev);
 	struct device_node *node = dev->of_node;
 	struct resource *regs;
-	int err;
+	int err, i;
 
 	if (rockchip->is_rc) {
 		regs = platform_get_resource_byname(pdev,

@@ -69,55 +69,23 @@ int rockchip_pcie_parse_dt(struct rockchip_pcie *rockchip)
 	if (rockchip->link_gen < 0 || rockchip->link_gen > 2)
 		rockchip->link_gen = 2;
 
-	rockchip->core_rst = devm_reset_control_get_exclusive(dev, "core");
-	if (IS_ERR(rockchip->core_rst)) {
-		if (PTR_ERR(rockchip->core_rst) != -EPROBE_DEFER)
-			dev_err(dev, "missing core reset property in node\n");
-		return PTR_ERR(rockchip->core_rst);
-	}
+	for (i = 0; i < ROCKCHIP_NUM_PM_RSTS; i++)
+		rockchip->pm_rsts[i].id = rockchip_pci_pm_rsts[i];
 
-	rockchip->mgmt_rst = devm_reset_control_get_exclusive(dev, "mgmt");
-	if (IS_ERR(rockchip->mgmt_rst)) {
-		if (PTR_ERR(rockchip->mgmt_rst) != -EPROBE_DEFER)
-			dev_err(dev, "missing mgmt reset property in node\n");
-		return PTR_ERR(rockchip->mgmt_rst);
-	}
+	err = devm_reset_control_bulk_get_exclusive(dev,
+						    ROCKCHIP_NUM_PM_RSTS,
+						    rockchip->pm_rsts);
+	if (err)
+		return dev_err_probe(dev, err, "Cannot get the PM reset\n");
 
-	rockchip->mgmt_sticky_rst = devm_reset_control_get_exclusive(dev,
-								     "mgmt-sticky");
-	if (IS_ERR(rockchip->mgmt_sticky_rst)) {
-		if (PTR_ERR(rockchip->mgmt_sticky_rst) != -EPROBE_DEFER)
-			dev_err(dev, "missing mgmt-sticky reset property in node\n");
-		return PTR_ERR(rockchip->mgmt_sticky_rst);
-	}
+	for (i = 0; i < ROCKCHIP_NUM_CORE_RSTS; i++)
+		rockchip->core_rsts[i].id = rockchip_pci_core_rsts[i];
 
-	rockchip->pipe_rst = devm_reset_control_get_exclusive(dev, "pipe");
-	if (IS_ERR(rockchip->pipe_rst)) {
-		if (PTR_ERR(rockchip->pipe_rst) != -EPROBE_DEFER)
-			dev_err(dev, "missing pipe reset property in node\n");
-		return PTR_ERR(rockchip->pipe_rst);
-	}
-
-	rockchip->pm_rst = devm_reset_control_get_exclusive(dev, "pm");
-	if (IS_ERR(rockchip->pm_rst)) {
-		if (PTR_ERR(rockchip->pm_rst) != -EPROBE_DEFER)
-			dev_err(dev, "missing pm reset property in node\n");
-		return PTR_ERR(rockchip->pm_rst);
-	}
-
-	rockchip->pclk_rst = devm_reset_control_get_exclusive(dev, "pclk");
-	if (IS_ERR(rockchip->pclk_rst)) {
-		if (PTR_ERR(rockchip->pclk_rst) != -EPROBE_DEFER)
-			dev_err(dev, "missing pclk reset property in node\n");
-		return PTR_ERR(rockchip->pclk_rst);
-	}
-
-	rockchip->aclk_rst = devm_reset_control_get_exclusive(dev, "aclk");
-	if (IS_ERR(rockchip->aclk_rst)) {
-		if (PTR_ERR(rockchip->aclk_rst) != -EPROBE_DEFER)
-			dev_err(dev, "missing aclk reset property in node\n");
-		return PTR_ERR(rockchip->aclk_rst);
-	}
+	err = devm_reset_control_bulk_get_exclusive(dev,
+						    ROCKCHIP_NUM_CORE_RSTS,
+						    rockchip->core_rsts);
+	if (err)
+		return dev_err_probe(dev, err, "Cannot get the Core resets\n");
 
 	if (rockchip->is_rc)
 		rockchip->perst_gpio = devm_gpiod_get_optional(dev, "ep",

@@ -129,29 +97,10 @@ int rockchip_pcie_parse_dt(struct rockchip_pcie *rockchip)
 		return dev_err_probe(dev, PTR_ERR(rockchip->perst_gpio),
				     "failed to get PERST# GPIO\n");
 
-	rockchip->aclk_pcie = devm_clk_get(dev, "aclk");
-	if (IS_ERR(rockchip->aclk_pcie)) {
-		dev_err(dev, "aclk clock not found\n");
-		return PTR_ERR(rockchip->aclk_pcie);
-	}
-
-	rockchip->aclk_perf_pcie = devm_clk_get(dev, "aclk-perf");
-	if (IS_ERR(rockchip->aclk_perf_pcie)) {
-		dev_err(dev, "aclk_perf clock not found\n");
-		return PTR_ERR(rockchip->aclk_perf_pcie);
-	}
-
-	rockchip->hclk_pcie = devm_clk_get(dev, "hclk");
-	if (IS_ERR(rockchip->hclk_pcie)) {
-		dev_err(dev, "hclk clock not found\n");
-		return PTR_ERR(rockchip->hclk_pcie);
-	}
-
-	rockchip->clk_pcie_pm = devm_clk_get(dev, "pm");
-	if (IS_ERR(rockchip->clk_pcie_pm)) {
-		dev_err(dev, "pm clock not found\n");
-		return PTR_ERR(rockchip->clk_pcie_pm);
-	}
+	rockchip->num_clks = devm_clk_bulk_get_all(dev, &rockchip->clks);
+	if (rockchip->num_clks < 0)
+		return dev_err_probe(dev, rockchip->num_clks,
+				     "failed to get clocks\n");
 
 	return 0;
 }

@@ -169,23 +118,10 @@ int rockchip_pcie_init_port(struct rockchip_pcie *rockchip)
 	int err, i;
 	u32 regs;
 
-	err = reset_control_assert(rockchip->aclk_rst);
-	if (err) {
-		dev_err(dev, "assert aclk_rst err %d\n", err);
-		return err;
-	}
-
-	err = reset_control_assert(rockchip->pclk_rst);
-	if (err) {
-		dev_err(dev, "assert pclk_rst err %d\n", err);
-		return err;
-	}
-
-	err = reset_control_assert(rockchip->pm_rst);
-	if (err) {
-		dev_err(dev, "assert pm_rst err %d\n", err);
-		return err;
-	}
+	err = reset_control_bulk_assert(ROCKCHIP_NUM_PM_RSTS,
+					rockchip->pm_rsts);
+	if (err)
+		return dev_err_probe(dev, err, "Couldn't assert PM resets\n");
 
 	for (i = 0; i < MAX_LANE_NUM; i++) {
 		err = phy_init(rockchip->phys[i]);

@@ -195,47 +131,19 @@ int rockchip_pcie_init_port(struct rockchip_pcie *rockchip)
 		}
 	}
 
-	err = reset_control_assert(rockchip->core_rst);
+	err = reset_control_bulk_assert(ROCKCHIP_NUM_CORE_RSTS,
+					rockchip->core_rsts);
 	if (err) {
-		dev_err(dev, "assert core_rst err %d\n", err);
-		goto err_exit_phy;
-	}
-
-	err = reset_control_assert(rockchip->mgmt_rst);
-	if (err) {
-		dev_err(dev, "assert mgmt_rst err %d\n", err);
-		goto err_exit_phy;
-	}
-
-	err = reset_control_assert(rockchip->mgmt_sticky_rst);
-	if (err) {
-		dev_err(dev, "assert mgmt_sticky_rst err %d\n", err);
-		goto err_exit_phy;
-	}
-
-	err = reset_control_assert(rockchip->pipe_rst);
-	if (err) {
-		dev_err(dev, "assert pipe_rst err %d\n", err);
+		dev_err_probe(dev, err, "Couldn't assert Core resets\n");
 		goto err_exit_phy;
 	}
 
 	udelay(10);
 
-	err = reset_control_deassert(rockchip->pm_rst);
+	err = reset_control_bulk_deassert(ROCKCHIP_NUM_PM_RSTS,
+					  rockchip->pm_rsts);
 	if (err) {
-		dev_err(dev, "deassert pm_rst err %d\n", err);
-		goto err_exit_phy;
-	}
-
-	err = reset_control_deassert(rockchip->aclk_rst);
-	if (err) {
-		dev_err(dev, "deassert aclk_rst err %d\n", err);
-		goto err_exit_phy;
-	}
-
-	err = reset_control_deassert(rockchip->pclk_rst);
-	if (err) {
-		dev_err(dev, "deassert pclk_rst err %d\n", err);
+		dev_err(dev, "Couldn't deassert PM resets %d\n", err);
 		goto err_exit_phy;
 	}
 

@@ -275,31 +183,10 @@ int rockchip_pcie_init_port(struct rockchip_pcie *rockchip)
 		goto err_power_off_phy;
 	}
 
-	/*
-	 * Please don't reorder the deassert sequence of the following
-	 * four reset pins.
-	 */
-	err = reset_control_deassert(rockchip->mgmt_sticky_rst);
+	err = reset_control_bulk_deassert(ROCKCHIP_NUM_CORE_RSTS,
+					  rockchip->core_rsts);
 	if (err) {
-		dev_err(dev, "deassert mgmt_sticky_rst err %d\n", err);
-		goto err_power_off_phy;
-	}
-
-	err = reset_control_deassert(rockchip->core_rst);
-	if (err) {
-		dev_err(dev, "deassert core_rst err %d\n", err);
-		goto err_power_off_phy;
-	}
-
-	err = reset_control_deassert(rockchip->mgmt_rst);
-	if (err) {
-		dev_err(dev, "deassert mgmt_rst err %d\n", err);
-		goto err_power_off_phy;
-	}
-
-	err = reset_control_deassert(rockchip->pipe_rst);
-	if (err) {
-		dev_err(dev, "deassert pipe_rst err %d\n", err);
+		dev_err(dev, "Couldn't deassert Core reset %d\n", err);
 		goto err_power_off_phy;
 	}
 

@@ -375,50 +262,18 @@ int rockchip_pcie_enable_clocks(struct rockchip_pcie *rockchip)
 	struct device *dev = rockchip->dev;
 	int err;
 
-	err = clk_prepare_enable(rockchip->aclk_pcie);
-	if (err) {
-		dev_err(dev, "unable to enable aclk_pcie clock\n");
-		return err;
-	}
-
-	err = clk_prepare_enable(rockchip->aclk_perf_pcie);
-	if (err) {
-		dev_err(dev, "unable to enable aclk_perf_pcie clock\n");
-		goto err_aclk_perf_pcie;
-	}
-
-	err = clk_prepare_enable(rockchip->hclk_pcie);
-	if (err) {
-		dev_err(dev, "unable to enable hclk_pcie clock\n");
-		goto err_hclk_pcie;
-	}
-
-	err = clk_prepare_enable(rockchip->clk_pcie_pm);
-	if (err) {
-		dev_err(dev, "unable to enable clk_pcie_pm clock\n");
-		goto err_clk_pcie_pm;
-	}
+	err = clk_bulk_prepare_enable(rockchip->num_clks, rockchip->clks);
+	if (err)
+		return dev_err_probe(dev, err, "failed to enable clocks\n");
 
 	return 0;
-
-err_clk_pcie_pm:
-	clk_disable_unprepare(rockchip->hclk_pcie);
-err_hclk_pcie:
-	clk_disable_unprepare(rockchip->aclk_perf_pcie);
-err_aclk_perf_pcie:
-	clk_disable_unprepare(rockchip->aclk_pcie);
-	return err;
 }
 EXPORT_SYMBOL_GPL(rockchip_pcie_enable_clocks);
 
-void rockchip_pcie_disable_clocks(void *data)
+void rockchip_pcie_disable_clocks(struct rockchip_pcie *rockchip)
 {
-	struct rockchip_pcie *rockchip = data;
-
-	clk_disable_unprepare(rockchip->clk_pcie_pm);
-	clk_disable_unprepare(rockchip->hclk_pcie);
-	clk_disable_unprepare(rockchip->aclk_perf_pcie);
-	clk_disable_unprepare(rockchip->aclk_pcie);
+	clk_bulk_disable_unprepare(rockchip->num_clks, rockchip->clks);
 }
 EXPORT_SYMBOL_GPL(rockchip_pcie_disable_clocks);

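The Rockchip diff above folds seven hand-rolled reset_control_get()/assert() sequences into two table-driven reset_control_bulk arrays, so one failed lookup aborts the probe with a single dev_err_probe() instead of seven copies of the same error branch. A minimal userspace sketch of the same pattern (the `mini_*` names are hypothetical stand-ins modeling the kernel's reset_control_bulk API, not real kernel symbols):

```c
#include <stddef.h>

/* Stand-in for struct reset_control_bulk_data: one entry per named line. */
struct mini_bulk_data {
	const char *id;
	int asserted;
};

static const char *const pm_rst_names[] = { "pm", "pclk", "aclk" };
#define NUM_PM_RSTS (sizeof(pm_rst_names) / sizeof(pm_rst_names[0]))

/* Fill the ids once, as rockchip_pcie_parse_dt() now does in one loop. */
static void mini_bulk_init(struct mini_bulk_data *rsts, size_t num,
			   const char *const names[])
{
	for (size_t i = 0; i < num; i++) {
		rsts[i].id = names[i];
		rsts[i].asserted = 0;
	}
}

/* One call replaces N separate assert call sites. */
static int mini_bulk_assert(size_t num, struct mini_bulk_data *rsts)
{
	for (size_t i = 0; i < num; i++)
		rsts[i].asserted = 1;
	return 0;
}

static int mini_bulk_deassert(size_t num, struct mini_bulk_data *rsts)
{
	for (size_t i = 0; i < num; i++)
		rsts[i].asserted = 0;
	return 0;
}
```

The design point of the conversion is that adding or reordering a reset line becomes a one-line change in the name table rather than a new get/assert/deassert triple.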
@@ -11,9 +11,11 @@
 #ifndef _PCIE_ROCKCHIP_H
 #define _PCIE_ROCKCHIP_H
 
+#include <linux/clk.h>
 #include <linux/kernel.h>
 #include <linux/pci.h>
 #include <linux/pci-ecam.h>
+#include <linux/reset.h>
 
 /*
  * The upper 16 bits of PCIE_CLIENT_CONFIG are a write mask for the lower 16

@@ -309,22 +311,31 @@
 	(((c) << ((b) * 8 + 5)) & \
 	 ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b))
 
+#define ROCKCHIP_NUM_PM_RSTS	ARRAY_SIZE(rockchip_pci_pm_rsts)
+#define ROCKCHIP_NUM_CORE_RSTS	ARRAY_SIZE(rockchip_pci_core_rsts)
+
+static const char * const rockchip_pci_pm_rsts[] = {
+	"pm",
+	"pclk",
+	"aclk",
+};
+
+static const char * const rockchip_pci_core_rsts[] = {
+	"mgmt-sticky",
+	"core",
+	"mgmt",
+	"pipe",
+};
+
 struct rockchip_pcie {
 	void __iomem *reg_base;		/* DT axi-base */
 	void __iomem *apb_base;		/* DT apb-base */
 	bool legacy_phy;
 	struct phy *phys[MAX_LANE_NUM];
-	struct reset_control *core_rst;
-	struct reset_control *mgmt_rst;
-	struct reset_control *mgmt_sticky_rst;
-	struct reset_control *pipe_rst;
-	struct reset_control *pm_rst;
-	struct reset_control *aclk_rst;
-	struct reset_control *pclk_rst;
-	struct clk *aclk_pcie;
-	struct clk *aclk_perf_pcie;
-	struct clk *hclk_pcie;
-	struct clk *clk_pcie_pm;
+	struct reset_control_bulk_data pm_rsts[ROCKCHIP_NUM_PM_RSTS];
+	struct reset_control_bulk_data core_rsts[ROCKCHIP_NUM_CORE_RSTS];
+	struct clk_bulk_data *clks;
+	int num_clks;
 	struct regulator *vpcie12v;	/* 12V power supply */
 	struct regulator *vpcie3v3;	/* 3.3V power supply */
 	struct regulator *vpcie1v8;	/* 1.8V power supply */

@@ -358,7 +369,7 @@ int rockchip_pcie_init_port(struct rockchip_pcie *rockchip);
 int rockchip_pcie_get_phys(struct rockchip_pcie *rockchip);
 void rockchip_pcie_deinit_phys(struct rockchip_pcie *rockchip);
 int rockchip_pcie_enable_clocks(struct rockchip_pcie *rockchip);
-void rockchip_pcie_disable_clocks(void *data);
+void rockchip_pcie_disable_clocks(struct rockchip_pcie *rockchip);
 void rockchip_pcie_cfg_configuration_accesses(
 		struct rockchip_pcie *rockchip, u32 type);

@@ -30,11 +30,14 @@
 #define XILINX_CPM_PCIE_REG_IDRN_MASK	0x00000E3C
 #define XILINX_CPM_PCIE_MISC_IR_STATUS	0x00000340
 #define XILINX_CPM_PCIE_MISC_IR_ENABLE	0x00000348
-#define XILINX_CPM_PCIE_MISC_IR_LOCAL	BIT(1)
+#define XILINX_CPM_PCIE0_MISC_IR_LOCAL	BIT(1)
+#define XILINX_CPM_PCIE1_MISC_IR_LOCAL	BIT(2)
 
-#define XILINX_CPM_PCIE_IR_STATUS	0x000002A0
-#define XILINX_CPM_PCIE_IR_ENABLE	0x000002A8
-#define XILINX_CPM_PCIE_IR_LOCAL	BIT(0)
+#define XILINX_CPM_PCIE0_IR_STATUS	0x000002A0
+#define XILINX_CPM_PCIE1_IR_STATUS	0x000002B4
+#define XILINX_CPM_PCIE0_IR_ENABLE	0x000002A8
+#define XILINX_CPM_PCIE1_IR_ENABLE	0x000002BC
+#define XILINX_CPM_PCIE_IR_LOCAL	BIT(0)
 
 #define IMR(x) BIT(XILINX_PCIE_INTR_ ##x)

@@ -80,14 +83,21 @@
 enum xilinx_cpm_version {
 	CPM,
 	CPM5,
+	CPM5_HOST1,
 };
 
 /**
  * struct xilinx_cpm_variant - CPM variant information
  * @version: CPM version
+ * @ir_status: Offset for the error interrupt status register
+ * @ir_enable: Offset for the CPM5 local error interrupt enable register
+ * @ir_misc_value: A bitmask for the miscellaneous interrupt status
  */
 struct xilinx_cpm_variant {
 	enum xilinx_cpm_version version;
+	u32 ir_status;
+	u32 ir_enable;
+	u32 ir_misc_value;
 };
 
 /**

@@ -269,6 +279,7 @@ static void xilinx_cpm_pcie_event_flow(struct irq_desc *desc)
 {
 	struct xilinx_cpm_pcie *port = irq_desc_get_handler_data(desc);
 	struct irq_chip *chip = irq_desc_get_chip(desc);
+	const struct xilinx_cpm_variant *variant = port->variant;
 	unsigned long val;
 	int i;
 

@@ -279,11 +290,11 @@ static void xilinx_cpm_pcie_event_flow(struct irq_desc *desc)
 		generic_handle_domain_irq(port->cpm_domain, i);
 	pcie_write(port, val, XILINX_CPM_PCIE_REG_IDR);
 
-	if (port->variant->version == CPM5) {
-		val = readl_relaxed(port->cpm_base + XILINX_CPM_PCIE_IR_STATUS);
+	if (variant->ir_status) {
+		val = readl_relaxed(port->cpm_base + variant->ir_status);
 		if (val)
 			writel_relaxed(val, port->cpm_base +
-					    XILINX_CPM_PCIE_IR_STATUS);
+					    variant->ir_status);
 	}
 
 	/*

@@ -465,6 +476,8 @@ static int xilinx_cpm_setup_irq(struct xilinx_cpm_pcie *port)
  */
 static void xilinx_cpm_pcie_init_port(struct xilinx_cpm_pcie *port)
 {
+	const struct xilinx_cpm_variant *variant = port->variant;
+
 	if (cpm_pcie_link_up(port))
 		dev_info(port->dev, "PCIe Link is UP\n");
 	else

@@ -483,15 +496,15 @@ static void xilinx_cpm_pcie_init_port(struct xilinx_cpm_pcie *port)
 	 * XILINX_CPM_PCIE_MISC_IR_ENABLE register is mapped to
 	 * CPM SLCR block.
 	 */
-	writel(XILINX_CPM_PCIE_MISC_IR_LOCAL,
+	writel(variant->ir_misc_value,
 	       port->cpm_base + XILINX_CPM_PCIE_MISC_IR_ENABLE);
 
-	if (port->variant->version == CPM5) {
+	if (variant->ir_enable) {
 		writel(XILINX_CPM_PCIE_IR_LOCAL,
-		       port->cpm_base + XILINX_CPM_PCIE_IR_ENABLE);
+		       port->cpm_base + variant->ir_enable);
 	}
 
-	/* Enable the Bridge enable bit */
+	/* Set Bridge enable bit */
 	pcie_write(port, pcie_read(port, XILINX_CPM_PCIE_REG_RPSC) |
 		   XILINX_CPM_PCIE_REG_RPSC_BEN,
 		   XILINX_CPM_PCIE_REG_RPSC);

@@ -609,10 +622,21 @@ err_parse_dt:
 
 static const struct xilinx_cpm_variant cpm_host = {
 	.version = CPM,
+	.ir_misc_value = XILINX_CPM_PCIE0_MISC_IR_LOCAL,
 };
 
 static const struct xilinx_cpm_variant cpm5_host = {
 	.version = CPM5,
+	.ir_misc_value = XILINX_CPM_PCIE0_MISC_IR_LOCAL,
+	.ir_status = XILINX_CPM_PCIE0_IR_STATUS,
+	.ir_enable = XILINX_CPM_PCIE0_IR_ENABLE,
 };
 
+static const struct xilinx_cpm_variant cpm5_host1 = {
+	.version = CPM5_HOST1,
+	.ir_misc_value = XILINX_CPM_PCIE1_MISC_IR_LOCAL,
+	.ir_status = XILINX_CPM_PCIE1_IR_STATUS,
+	.ir_enable = XILINX_CPM_PCIE1_IR_ENABLE,
+};
+
 static const struct of_device_id xilinx_cpm_pcie_of_match[] = {

@@ -624,6 +648,10 @@ static const struct of_device_id xilinx_cpm_pcie_of_match[] = {
 		.compatible = "xlnx,versal-cpm5-host",
 		.data = &cpm5_host,
 	},
+	{
+		.compatible = "xlnx,versal-cpm5-host1",
+		.data = &cpm5_host1,
+	},
 	{}
 };

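The Xilinx CPM rework above replaces `version == CPM5` checks with per-variant register offsets, where a zero offset means "this variant has no such register". A small userspace model of that convention (the struct and offsets mirror the defines in the diff, but this is a simplified stand-in, not the kernel type):

```c
#include <stdint.h>

/* Simplified stand-in for struct xilinx_cpm_variant. */
struct cpm_variant {
	uint32_t ir_status;	/* 0 = register absent on this variant */
	uint32_t ir_enable;
	uint32_t ir_misc_value;
};

static const struct cpm_variant cpm_host = {
	.ir_misc_value = 0x2,		/* BIT(1), PCIE0_MISC_IR_LOCAL */
};

static const struct cpm_variant cpm5_host = {
	.ir_misc_value = 0x2,
	.ir_status = 0x2A0,		/* PCIE0_IR_STATUS */
	.ir_enable = 0x2A8,		/* PCIE0_IR_ENABLE */
};

static const struct cpm_variant cpm5_host1 = {
	.ir_misc_value = 0x4,		/* BIT(2), PCIE1_MISC_IR_LOCAL */
	.ir_status = 0x2B4,		/* PCIE1_IR_STATUS */
	.ir_enable = 0x2BC,		/* PCIE1_IR_ENABLE */
};

/* The event handler only touches the local-error registers if present. */
static int has_local_error_regs(const struct cpm_variant *v)
{
	return v->ir_status != 0;
}
```

Testing the data rather than the version enum is what lets the second CPM5 controller (host1) reuse the same handler with different offsets.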
@@ -7,20 +7,27 @@
  * Author: Daire McNamara <daire.mcnamara@microchip.com>
  */
 
+#include <linux/align.h>
+#include <linux/bits.h>
 #include <linux/bitfield.h>
 #include <linux/clk.h>
 #include <linux/irqchip/chained_irq.h>
 #include <linux/irqdomain.h>
+#include <linux/log2.h>
 #include <linux/module.h>
 #include <linux/msi.h>
 #include <linux/of_address.h>
 #include <linux/of_pci.h>
 #include <linux/pci-ecam.h>
 #include <linux/platform_device.h>
+#include <linux/wordpart.h>
 
 #include "../../pci.h"
 #include "pcie-plda.h"
 
+#define MC_MAX_NUM_INBOUND_WINDOWS	8
+#define MPFS_NC_BOUNCE_ADDR		0x80000000
+
 /* PCIe Bridge Phy and Controller Phy offsets */
 #define MC_PCIE1_BRIDGE_ADDR	0x00008000u
 #define MC_PCIE1_CTRL_ADDR	0x0000a000u

@@ -607,6 +614,91 @@ static void mc_disable_interrupts(struct mc_pcie *port)
 	writel_relaxed(GENMASK(31, 0), port->bridge_base_addr + ISTATUS_HOST);
 }
 
+static void mc_pcie_setup_inbound_atr(struct mc_pcie *port, int window_index,
+				      u64 axi_addr, u64 pcie_addr, u64 size)
+{
+	u32 table_offset = window_index * ATR_ENTRY_SIZE;
+	void __iomem *table_addr = port->bridge_base_addr + table_offset;
+	u32 atr_sz;
+	u32 val;
+
+	atr_sz = ilog2(size) - 1;
+
+	val = ALIGN_DOWN(lower_32_bits(pcie_addr), SZ_4K);
+	val |= FIELD_PREP(ATR_SIZE_MASK, atr_sz);
+	val |= ATR_IMPL_ENABLE;
+
+	writel(val, table_addr + ATR0_PCIE_WIN0_SRCADDR_PARAM);
+
+	writel(upper_32_bits(pcie_addr), table_addr + ATR0_PCIE_WIN0_SRC_ADDR);
+
+	writel(lower_32_bits(axi_addr), table_addr + ATR0_PCIE_WIN0_TRSL_ADDR_LSB);
+	writel(upper_32_bits(axi_addr), table_addr + ATR0_PCIE_WIN0_TRSL_ADDR_UDW);
+
+	writel(TRSL_ID_AXI4_MASTER_0, table_addr + ATR0_PCIE_WIN0_TRSL_PARAM);
+}
+
+static int mc_pcie_setup_inbound_ranges(struct platform_device *pdev,
+					struct mc_pcie *port)
+{
+	struct device *dev = &pdev->dev;
+	struct device_node *dn = dev->of_node;
+	struct of_range_parser parser;
+	struct of_range range;
+	int atr_index = 0;
+
+	/*
+	 * MPFS PCIe Root Port is 32-bit only, behind a Fabric Interface
+	 * Controller FPGA logic block which contains the AXI-S interface.
+	 *
+	 * From the point of view of the PCIe Root Port, there are only two
+	 * supported Root Port configurations:
+	 *
+	 * Configuration 1: for use with fully coherent designs; supports a
+	 * window from 0x0 (CPU space) to specified PCIe space.
+	 *
+	 * Configuration 2: for use with non-coherent designs; supports two
+	 * 1 GB windows to CPU space; one mapping CPU space 0 to PCIe space
+	 * 0x80000000 and a second mapping CPU space 0x40000000 to PCIe
+	 * space 0xc0000000. This cfg needs two windows because of how the
+	 * MSI space is allocated in the AXI-S range on MPFS.
+	 *
+	 * The FIC interface outside the PCIe block *must* complete the
+	 * inbound address translation as per MCHP MPFS FPGA design
+	 * guidelines.
+	 */
+	if (device_property_read_bool(dev, "dma-noncoherent")) {
+		/*
+		 * Always need same two tables in this case. Need two tables
+		 * due to hardware interactions between address and size.
+		 */
+		mc_pcie_setup_inbound_atr(port, 0, 0,
+					  MPFS_NC_BOUNCE_ADDR, SZ_1G);
+		mc_pcie_setup_inbound_atr(port, 1, SZ_1G,
+					  MPFS_NC_BOUNCE_ADDR + SZ_1G, SZ_1G);
+	} else {
+		/* Find any DMA ranges */
+		if (of_pci_dma_range_parser_init(&parser, dn)) {
+			/* No DMA range property - setup default */
+			mc_pcie_setup_inbound_atr(port, 0, 0, 0, SZ_4G);
+			return 0;
+		}
+
+		for_each_of_range(&parser, &range) {
+			if (atr_index >= MC_MAX_NUM_INBOUND_WINDOWS) {
+				dev_err(dev, "too many inbound ranges; %d available tables\n",
+					MC_MAX_NUM_INBOUND_WINDOWS);
+				return -EINVAL;
+			}
+			mc_pcie_setup_inbound_atr(port, atr_index, 0,
+						  range.pci_addr, range.size);
+			atr_index++;
+		}
+	}
+
+	return 0;
+}
+
 static int mc_platform_init(struct pci_config_window *cfg)
 {
 	struct device *dev = cfg->parent;

@@ -627,6 +719,10 @@ static int mc_platform_init(struct pci_config_window *cfg)
 	if (ret)
 		return ret;
 
+	ret = mc_pcie_setup_inbound_ranges(pdev, port);
+	if (ret)
+		return ret;
+
 	port->plda.event_ops = &mc_event_ops;
 	port->plda.event_irq_chip = &mc_event_irq_chip;
 	port->plda.events_bitmap = GENMASK(NUM_EVENTS - 1, 0);

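mc_pcie_setup_inbound_atr() above encodes each inbound window as `ilog2(size) - 1` in the size field and a 4K-aligned low address word. The arithmetic can be checked in isolation (userspace sketch with hypothetical `mini_`/`inbound_` helper names; only valid for power-of-two sizes, as in the driver):

```c
#include <stdint.h>

#define MINI_SZ_4K 0x1000u
#define MINI_SZ_1G 0x40000000u

/* ilog2 for power-of-two values, mirroring the kernel helper's result. */
static unsigned int mini_ilog2(uint64_t v)
{
	unsigned int r = 0;

	while (v >>= 1)
		r++;
	return r;
}

/* Low word of an inbound entry: 4K-aligned addr | size code (bits 6:1) | enable. */
static uint32_t inbound_srcaddr_param(uint64_t pcie_addr, uint64_t size)
{
	uint32_t atr_sz = mini_ilog2(size) - 1;
	uint32_t val = (uint32_t)pcie_addr & ~(MINI_SZ_4K - 1); /* ALIGN_DOWN(..., SZ_4K) */

	val |= atr_sz << 1;	/* FIELD_PREP(ATR_SIZE_MASK, atr_sz) */
	val |= 1u;		/* ATR_IMPL_ENABLE */
	return val;
}
```

For the 1 GB non-coherent bounce window at PCIe address 0x80000000 this yields a size code of 29 in bits 6:1 with the enable bit set.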
@@ -8,11 +8,14 @@
  * Author: Daire McNamara <daire.mcnamara@microchip.com>
  */
 
+#include <linux/align.h>
+#include <linux/bitfield.h>
 #include <linux/irqchip/chained_irq.h>
 #include <linux/irqdomain.h>
 #include <linux/msi.h>
 #include <linux/pci_regs.h>
 #include <linux/pci-ecam.h>
+#include <linux/wordpart.h>
 
 #include "pcie-plda.h"

@@ -502,8 +505,9 @@ void plda_pcie_setup_window(void __iomem *bridge_base_addr, u32 index,
 	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
 	       ATR0_AXI4_SLV0_TRSL_PARAM);
 
-	val = lower_32_bits(axi_addr) | (atr_sz << ATR_SIZE_SHIFT) |
-	      ATR_IMPL_ENABLE;
+	val = ALIGN_DOWN(lower_32_bits(axi_addr), SZ_4K);
+	val |= FIELD_PREP(ATR_SIZE_MASK, atr_sz);
+	val |= ATR_IMPL_ENABLE;
 	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
 	       ATR0_AXI4_SLV0_SRCADDR_PARAM);

@@ -518,13 +522,20 @@ void plda_pcie_setup_window(void __iomem *bridge_base_addr, u32 index,
 	val = upper_32_bits(pci_addr);
 	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
 	       ATR0_AXI4_SLV0_TRSL_ADDR_UDW);
 }
 EXPORT_SYMBOL_GPL(plda_pcie_setup_window);
 
+void plda_pcie_setup_inbound_address_translation(struct plda_pcie_rp *port)
+{
+	void __iomem *bridge_base_addr = port->bridge_addr;
+	u32 val;
+
+	val = readl(bridge_base_addr + ATR0_PCIE_WIN0_SRCADDR_PARAM);
+	val |= (ATR0_PCIE_ATR_SIZE << ATR0_PCIE_ATR_SIZE_SHIFT);
+	writel(val, bridge_base_addr + ATR0_PCIE_WIN0_SRCADDR_PARAM);
+	writel(0, bridge_base_addr + ATR0_PCIE_WIN0_SRC_ADDR);
+}
+EXPORT_SYMBOL_GPL(plda_pcie_setup_inbound_address_translation);
 
 int plda_pcie_setup_iomems(struct pci_host_bridge *bridge,
 			   struct plda_pcie_rp *port)

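The plda fix above swaps an open-coded `atr_sz << ATR_SIZE_SHIFT` for FIELD_PREP() over GENMASK(6, 1), which also clamps the value to the field instead of letting it overflow into neighbouring bits. A userspace model of the two macros (simplified 32-bit stand-ins, not the kernel implementations):

```c
#include <stdint.h>

/* 32-bit stand-in for GENMASK(h, l): bits h..l set. */
static uint32_t genmask32(unsigned int h, unsigned int l)
{
	return (uint32_t)((~0u >> (31 - h)) & ~((1u << l) - 1u));
}

/* Stand-in for FIELD_PREP(): shift val to the field's low bit, mask overflow. */
static uint32_t field_prep32(uint32_t mask, uint32_t val)
{
	/* mask & -mask isolates the lowest set bit, i.e. the field's LSB weight. */
	return (val * (mask & -mask)) & mask;
}
```

With `ATR_SIZE_MASK = genmask32(6, 1)` the largest encodable size code is 63; anything larger is masked off rather than corrupting bit 0 (the enable bit) or bits above 6.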
@@ -89,14 +89,15 @@
 
 /* PCIe AXI slave table init defines */
 #define ATR0_AXI4_SLV0_SRCADDR_PARAM	0x800u
-#define  ATR_SIZE_SHIFT			1
-#define  ATR_IMPL_ENABLE		1
+#define  ATR_SIZE_MASK			GENMASK(6, 1)
+#define  ATR_IMPL_ENABLE		BIT(0)
 #define ATR0_AXI4_SLV0_SRC_ADDR		0x804u
 #define ATR0_AXI4_SLV0_TRSL_ADDR_LSB	0x808u
 #define ATR0_AXI4_SLV0_TRSL_ADDR_UDW	0x80cu
 #define ATR0_AXI4_SLV0_TRSL_PARAM	0x810u
 #define  PCIE_TX_RX_INTERFACE		0x00000000u
 #define  PCIE_CONFIG_INTERFACE		0x00000001u
+#define  TRSL_ID_AXI4_MASTER_0		0x00000004u
 
 #define CONFIG_SPACE_ADDR_OFFSET	0x1000u

@@ -204,6 +205,7 @@ int plda_init_interrupts(struct platform_device *pdev,
 void plda_pcie_setup_window(void __iomem *bridge_base_addr, u32 index,
 			    phys_addr_t axi_addr, phys_addr_t pci_addr,
 			    size_t size);
+void plda_pcie_setup_inbound_address_translation(struct plda_pcie_rp *port);
 int plda_pcie_setup_iomems(struct pci_host_bridge *bridge,
 			   struct plda_pcie_rp *port);
 int plda_pcie_host_init(struct plda_pcie_rp *port, struct pci_ops *ops,

@@ -101,7 +101,7 @@ static inline void pcim_addr_devres_clear(struct pcim_addr_devres *res)
  * @bar: BAR the range is within
  * @offset: offset from the BAR's start address
  * @maxlen: length in bytes, beginning at @offset
- * @name: name associated with the request
+ * @name: name of the driver requesting the resource
  * @req_flags: flags for the request, e.g., for kernel-exclusive requests
  *
  * Returns: 0 on success, a negative error code on failure.

@@ -411,31 +411,12 @@ static inline bool mask_contains_bar(int mask, int bar)
 	return mask & BIT(bar);
 }
 
-/*
- * This is a copy of pci_intx() used to bypass the problem of recursive
- * function calls due to the hybrid nature of pci_intx().
- */
-static void __pcim_intx(struct pci_dev *pdev, int enable)
-{
-	u16 pci_command, new;
-
-	pci_read_config_word(pdev, PCI_COMMAND, &pci_command);
-
-	if (enable)
-		new = pci_command & ~PCI_COMMAND_INTX_DISABLE;
-	else
-		new = pci_command | PCI_COMMAND_INTX_DISABLE;
-
-	if (new != pci_command)
-		pci_write_config_word(pdev, PCI_COMMAND, new);
-}
-
 static void pcim_intx_restore(struct device *dev, void *data)
 {
 	struct pci_dev *pdev = to_pci_dev(dev);
 	struct pcim_intx_devres *res = data;
 
-	__pcim_intx(pdev, res->orig_intx);
+	pci_intx(pdev, res->orig_intx);
 }
 
 static struct pcim_intx_devres *get_or_create_intx_devres(struct device *dev)

@@ -472,10 +453,11 @@ int pcim_intx(struct pci_dev *pdev, int enable)
 		return -ENOMEM;
 
 	res->orig_intx = !enable;
-	__pcim_intx(pdev, enable);
+	pci_intx(pdev, enable);
 
 	return 0;
 }
+EXPORT_SYMBOL_GPL(pcim_intx);
 
 static void pcim_disable_device(void *pdev_raw)
 {

@@ -723,7 +705,7 @@ EXPORT_SYMBOL(pcim_iounmap);
  * pcim_iomap_region - Request and iomap a PCI BAR
  * @pdev: PCI device to map IO resources for
  * @bar: Index of a BAR to map
- * @name: Name associated with the request
+ * @name: Name of the driver requesting the resource
  *
  * Returns: __iomem pointer on success, an IOMEM_ERR_PTR on failure.
 *

@@ -790,7 +772,7 @@ EXPORT_SYMBOL(pcim_iounmap_region);
 * pcim_iomap_regions - Request and iomap PCI BARs (DEPRECATED)
 * @pdev: PCI device to map IO resources for
 * @mask: Mask of BARs to request and iomap
- * @name: Name associated with the requests
+ * @name: Name of the driver requesting the resources
 *
 * Returns: 0 on success, negative error code on failure.
 *

@@ -855,9 +837,9 @@ static int _pcim_request_region(struct pci_dev *pdev, int bar, const char *name,
 
 /**
 * pcim_request_region - Request a PCI BAR
- * @pdev: PCI device to requestion region for
+ * @pdev: PCI device to request region for
 * @bar: Index of BAR to request
- * @name: Name associated with the request
+ * @name: Name of the driver requesting the resource
 *
 * Returns: 0 on success, a negative error code on failure.
 *

@@ -874,9 +856,9 @@ EXPORT_SYMBOL(pcim_request_region);
 
 /**
 * pcim_request_region_exclusive - Request a PCI BAR exclusively
- * @pdev: PCI device to requestion region for
+ * @pdev: PCI device to request region for
 * @bar: Index of BAR to request
- * @name: Name associated with the request
+ * @name: Name of the driver requesting the resource
 *
 * Returns: 0 on success, a negative error code on failure.
 *

@@ -932,7 +914,7 @@ static void pcim_release_all_regions(struct pci_dev *pdev)
 /**
 * pcim_request_all_regions - Request all regions
 * @pdev: PCI device to map IO resources for
- * @name: name associated with the request
+ * @name: name of the driver requesting the resources
 *
 * Returns: 0 on success, negative error code on failure.
 *

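With __pcim_intx() removed above, both pcim_intx() and its devres restore path call pci_intx(), which toggles PCI_COMMAND_INTX_DISABLE (bit 10 of the PCI command register). The bit logic, modeled in userspace (the `MINI_`/`intx_apply` names are illustrative, not kernel symbols):

```c
#include <stdint.h>

#define MINI_PCI_COMMAND_INTX_DISABLE (1u << 10)

/* enable != 0 clears the disable bit (INTx on); enable == 0 sets it. */
static uint16_t intx_apply(uint16_t command, int enable)
{
	if (enable)
		return command & (uint16_t)~MINI_PCI_COMMAND_INTX_DISABLE;
	return command | MINI_PCI_COMMAND_INTX_DISABLE;
}
```

The devres record stores `orig_intx = !enable`, so replaying the saved value through the same function restores the command register to its pre-request state, as the round-trip below shows.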
@@ -44,6 +44,8 @@
 
 #define TIMER_RESOLUTION		1
 
+#define CAP_UNALIGNED_ACCESS		BIT(0)
+
 static struct workqueue_struct *kpcitest_workqueue;
 
 struct pci_epf_test {

@@ -74,6 +76,7 @@ struct pci_epf_test_reg {
 	u32	irq_type;
 	u32	irq_number;
 	u32	flags;
+	u32	caps;
 } __packed;
 
 static struct pci_epf_header test_header = {

@@ -251,7 +254,7 @@ static int pci_epf_test_init_dma_chan(struct pci_epf_test *epf_test)
 
 fail_back_rx:
 	dma_release_channel(epf_test->dma_chan_rx);
-	epf_test->dma_chan_tx = NULL;
+	epf_test->dma_chan_rx = NULL;
 
 fail_back_tx:
 	dma_cap_zero(mask);

@@ -328,8 +331,8 @@ static void pci_epf_test_copy(struct pci_epf_test *epf_test,
 	void *copy_buf = NULL, *buf;
 
 	if (reg->flags & FLAG_USE_DMA) {
-		if (epf_test->dma_private) {
-			dev_err(dev, "Cannot transfer data using DMA\n");
+		if (!dma_has_cap(DMA_MEMCPY, epf_test->dma_chan_tx->device->cap_mask)) {
+			dev_err(dev, "DMA controller doesn't support MEMCPY\n");
 			ret = -EINVAL;
 			goto set_status;
 		}

@@ -739,6 +742,20 @@ static void pci_epf_test_clear_bar(struct pci_epf *epf)
 	}
 }
 
+static void pci_epf_test_set_capabilities(struct pci_epf *epf)
+{
+	struct pci_epf_test *epf_test = epf_get_drvdata(epf);
+	enum pci_barno test_reg_bar = epf_test->test_reg_bar;
+	struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar];
+	struct pci_epc *epc = epf->epc;
+	u32 caps = 0;
+
+	if (epc->ops->align_addr)
+		caps |= CAP_UNALIGNED_ACCESS;
+
+	reg->caps = cpu_to_le32(caps);
+}
+
 static int pci_epf_test_epc_init(struct pci_epf *epf)
 {
 	struct pci_epf_test *epf_test = epf_get_drvdata(epf);

@@ -763,6 +780,8 @@ static int pci_epf_test_epc_init(struct pci_epf *epf)
 		}
 	}
 
+	pci_epf_test_set_capabilities(epf);
+
 	ret = pci_epf_test_set_bar(epf);
 	if (ret)
 		return ret;

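The new caps word above is a one-way capability advertisement from endpoint to host: the endpoint sets bits for what its controller supports, and the host tests them before choosing a transfer mode. A sketch of both sides (the CAP bit value comes from the diff; the helper names are illustrative):

```c
#include <stdint.h>

#define MINI_CAP_UNALIGNED_ACCESS (1u << 0)

/* Endpoint side: build the caps word from controller features. */
static uint32_t build_caps(int controller_has_align_addr)
{
	uint32_t caps = 0;

	if (controller_has_align_addr)
		caps |= MINI_CAP_UNALIGNED_ACCESS;
	return caps;
}

/* Host side: test a capability bit read back from the test-register BAR. */
static int cap_supported(uint32_t caps, uint32_t bit)
{
	return (caps & bit) != 0;
}
```

Because the register is defined as little-endian on the wire (hence the cpu_to_le32() in the driver), both sides agree on bit positions regardless of host endianness.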
@@ -60,26 +60,17 @@ struct pci_epc *pci_epc_get(const char *epc_name)
 	int ret = -EINVAL;
 	struct pci_epc *epc;
 	struct device *dev;
-	struct class_dev_iter iter;
 
-	class_dev_iter_init(&iter, &pci_epc_class, NULL, NULL);
-	while ((dev = class_dev_iter_next(&iter))) {
-		if (strcmp(epc_name, dev_name(dev)))
-			continue;
+	dev = class_find_device_by_name(&pci_epc_class, epc_name);
+	if (!dev)
+		goto err;
 
-		epc = to_pci_epc(dev);
-		if (!try_module_get(epc->ops->owner)) {
-			ret = -EINVAL;
-			goto err;
-		}
-
-		class_dev_iter_exit(&iter);
-		get_device(&epc->dev);
+	epc = to_pci_epc(dev);
+	if (try_module_get(epc->ops->owner))
 		return epc;
-	}
 
 err:
-	class_dev_iter_exit(&iter);
+	put_device(dev);
 	return ERR_PTR(ret);
 }
 EXPORT_SYMBOL_GPL(pci_epc_get);

@@ -609,10 +600,20 @@ EXPORT_SYMBOL_GPL(pci_epc_clear_bar);
 int pci_epc_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
 		    struct pci_epf_bar *epf_bar)
 {
-	int ret;
+	const struct pci_epc_features *epc_features;
+	enum pci_barno bar = epf_bar->barno;
 	int flags = epf_bar->flags;
+	int ret;
 
 	if (!pci_epc_function_is_valid(epc, func_no, vfunc_no))
+		return -EINVAL;
+
+	epc_features = pci_epc_get_features(epc, func_no, vfunc_no);
+	if (!epc_features)
 		return -EINVAL;
 
+	if (epc_features->bar[bar].type == BAR_FIXED &&
+	    (epc_features->bar[bar].fixed_size != epf_bar->size))
+		return -EINVAL;
+
+	if (!is_power_of_2(epf_bar->size))
+		return -EINVAL;
+
 	if ((epf_bar->barno == BAR_5 && flags & PCI_BASE_ADDRESS_MEM_TYPE_64) ||

@@ -942,7 +943,7 @@ void devm_pci_epc_destroy(struct device *dev, struct pci_epc *epc)
 {
 	int r;
 
-	r = devres_destroy(dev, devm_pci_epc_release, devm_pci_epc_match,
+	r = devres_release(dev, devm_pci_epc_release, devm_pci_epc_match,
 			   epc);
 	dev_WARN_ONCE(dev, r, "couldn't find PCI EPC resource\n");
 }

@@ -202,6 +202,7 @@ void pci_epf_remove_vepf(struct pci_epf *epf_pf, struct pci_epf *epf_vf)
 
 	mutex_lock(&epf_pf->lock);
 	clear_bit(epf_vf->vfunc_no, &epf_pf->vfunction_num_map);
+	epf_vf->epf_pf = NULL;
 	list_del(&epf_vf->list);
 	mutex_unlock(&epf_pf->lock);
 }
@@ -84,7 +84,7 @@ static int ibm_get_attention_status(struct hotplug_slot *slot, u8 *status);
 static void ibm_handle_events(acpi_handle handle, u32 event, void *context);
 static int ibm_get_table_from_acpi(char **bufp);
 static ssize_t ibm_read_apci_table(struct file *filp, struct kobject *kobj,
-				   struct bin_attribute *bin_attr,
+				   const struct bin_attribute *bin_attr,
 				   char *buffer, loff_t pos, size_t size);
 static acpi_status __init ibm_find_acpi_device(acpi_handle handle,
 			u32 lvl, void *context, void **rv);
@@ -98,7 +98,7 @@ static struct bin_attribute ibm_apci_table_attr __ro_after_init = {
 		.name = "apci_table",
 		.mode = S_IRUGO,
 	},
-	.read = ibm_read_apci_table,
+	.read_new = ibm_read_apci_table,
 	.write = NULL,
 };
 static struct acpiphp_attention_info ibm_attention_info =
@@ -353,7 +353,7 @@ read_table_done:
  * our solution is to only allow reading the table in all at once.
  */
 static ssize_t ibm_read_apci_table(struct file *filp, struct kobject *kobj,
-				   struct bin_attribute *bin_attr,
+				   const struct bin_attribute *bin_attr,
 				   char *buffer, loff_t pos, size_t size)
 {
 	int bytes_read = -EINVAL;
@@ -747,6 +747,7 @@ static int sriov_init(struct pci_dev *dev, int pos)
 	struct resource *res;
 	const char *res_name;
 	struct pci_dev *pdev;
+	u32 sriovbars[PCI_SRIOV_NUM_BARS];
 
 	pci_read_config_word(dev, pos + PCI_SRIOV_CTRL, &ctrl);
 	if (ctrl & PCI_SRIOV_CTRL_VFE) {
@@ -783,6 +784,10 @@ found:
 	if (!iov)
 		return -ENOMEM;
 
+	/* Sizing SR-IOV BARs with VF Enable cleared - no decode */
+	__pci_size_stdbars(dev, PCI_SRIOV_NUM_BARS,
+			   pos + PCI_SRIOV_BAR, sriovbars);
+
 	nres = 0;
 	for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) {
 		res = &dev->resource[i + PCI_IOV_RESOURCES];
@@ -796,7 +801,8 @@ found:
 			bar64 = (res->flags & IORESOURCE_MEM_64) ? 1 : 0;
 		else
 			bar64 = __pci_read_base(dev, pci_bar_unknown, res,
-						pos + PCI_SRIOV_BAR + i * 4);
+						pos + PCI_SRIOV_BAR + i * 4,
+						&sriovbars[i]);
 		if (!res->flags)
 			continue;
 		if (resource_size(res) & (PAGE_SIZE - 1)) {
@@ -190,7 +190,8 @@ EXPORT_SYMBOL_GPL(of_pci_get_devfn);
  *
  * Returns 0 on success or a negative error-code on failure.
  */
-int of_pci_parse_bus_range(struct device_node *node, struct resource *res)
+static int of_pci_parse_bus_range(struct device_node *node,
+				  struct resource *res)
 {
 	u32 bus_range[2];
 	int error;
@@ -207,7 +208,6 @@ int of_pci_parse_bus_range(struct device_node *node, struct resource *res)
 
 	return 0;
 }
-EXPORT_SYMBOL_GPL(of_pci_parse_bus_range);
 
 /**
  * of_get_pci_domain_nr - Find the host bridge domain number
@@ -302,8 +302,6 @@ EXPORT_SYMBOL_GPL(of_pci_check_probe_only);
  * devm_of_pci_get_host_bridge_resources() - Resource-managed parsing of PCI
  * host bridge resources from DT
  * @dev: host bridge device
- * @busno: bus number associated with the bridge root bus
- * @bus_max: maximum number of buses for this bridge
  * @resources: list where the range of resources will be added after DT parsing
  * @ib_resources: list where the range of inbound resources (with addresses
  * from 'dma-ranges') will be added after DT parsing
@@ -319,7 +317,6 @@ EXPORT_SYMBOL_GPL(of_pci_check_probe_only);
  * value if it failed.
  */
 static int devm_of_pci_get_host_bridge_resources(struct device *dev,
-			unsigned char busno, unsigned char bus_max,
 			struct list_head *resources,
 			struct list_head *ib_resources,
 			resource_size_t *io_base)
@@ -343,14 +340,15 @@ static int devm_of_pci_get_host_bridge_resources(struct device *dev,
 
 	err = of_pci_parse_bus_range(dev_node, bus_range);
 	if (err) {
-		bus_range->start = busno;
-		bus_range->end = bus_max;
+		bus_range->start = 0;
+		bus_range->end = 0xff;
 		bus_range->flags = IORESOURCE_BUS;
-		dev_info(dev, " No bus range found for %pOF, using %pR\n",
-			 dev_node, bus_range);
 	} else {
-		if (bus_range->end > bus_range->start + bus_max)
-			bus_range->end = bus_range->start + bus_max;
+		if (bus_range->end > 0xff) {
+			dev_warn(dev, " Invalid end bus number in %pR, defaulting to 0xff\n",
+				 bus_range);
+			bus_range->end = 0xff;
+		}
 	}
 	pci_add_resource(resources, bus_range);
 
@@ -597,7 +595,7 @@ static int pci_parse_request_of_pci_ranges(struct device *dev,
 	INIT_LIST_HEAD(&bridge->windows);
 	INIT_LIST_HEAD(&bridge->dma_ranges);
 
-	err = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, &bridge->windows,
+	err = devm_of_pci_get_host_bridge_resources(dev, &bridge->windows,
 						    &bridge->dma_ranges, &iobase);
 	if (err)
 		return err;
@@ -26,7 +26,7 @@ struct of_pci_addr_pair {
  * side and the child address is the corresponding address on the secondary
  * side.
  */
-struct of_pci_range {
+struct of_pci_range_entry {
 	u32 child_addr[OF_PCI_ADDRESS_CELLS];
 	u32 parent_addr[OF_PCI_ADDRESS_CELLS];
 	u32 size[OF_PCI_SIZE_CELLS];
@@ -101,7 +101,7 @@ static int of_pci_prop_bus_range(struct pci_dev *pdev,
 static int of_pci_prop_ranges(struct pci_dev *pdev, struct of_changeset *ocs,
 			      struct device_node *np)
 {
-	struct of_pci_range *rp;
+	struct of_pci_range_entry *rp;
 	struct resource *res;
 	int i, j, ret;
 	u32 flags, num;
@@ -161,7 +161,7 @@ out:
 	return ret;
 }
 
-static struct bin_attribute p2pmem_alloc_attr = {
+static const struct bin_attribute p2pmem_alloc_attr = {
 	.attr = { .name = "allocate", .mode = 0660 },
 	.mmap = p2pmem_alloc_mmap,
 	/*
@@ -180,14 +180,14 @@ static struct attribute *p2pmem_attrs[] = {
 	NULL,
 };
 
-static struct bin_attribute *p2pmem_bin_attrs[] = {
+static const struct bin_attribute *const p2pmem_bin_attrs[] = {
 	&p2pmem_alloc_attr,
 	NULL,
 };
 
 static const struct attribute_group p2pmem_group = {
 	.attrs = p2pmem_attrs,
-	.bin_attrs = p2pmem_bin_attrs,
+	.bin_attrs_new = p2pmem_bin_attrs,
 	.name = "p2pmem",
 };
 
@@ -13,6 +13,7 @@
  */
 
 #include <linux/bitfield.h>
+#include <linux/cleanup.h>
 #include <linux/kernel.h>
 #include <linux/sched.h>
 #include <linux/pci.h>
@@ -694,7 +695,7 @@ static ssize_t boot_vga_show(struct device *dev, struct device_attribute *attr,
 static DEVICE_ATTR_RO(boot_vga);
 
 static ssize_t pci_read_config(struct file *filp, struct kobject *kobj,
-			       struct bin_attribute *bin_attr, char *buf,
+			       const struct bin_attribute *bin_attr, char *buf,
 			       loff_t off, size_t count)
 {
 	struct pci_dev *dev = to_pci_dev(kobj_to_dev(kobj));
@@ -769,7 +770,7 @@ static ssize_t pci_read_config(struct file *filp, struct kobject *kobj,
 }
 
 static ssize_t pci_write_config(struct file *filp, struct kobject *kobj,
-				struct bin_attribute *bin_attr, char *buf,
+				const struct bin_attribute *bin_attr, char *buf,
 				loff_t off, size_t count)
 {
 	struct pci_dev *dev = to_pci_dev(kobj_to_dev(kobj));
@@ -837,9 +838,9 @@ static ssize_t pci_write_config(struct file *filp, struct kobject *kobj,
 
 	return count;
 }
-static BIN_ATTR(config, 0644, pci_read_config, pci_write_config, 0);
+static const BIN_ATTR(config, 0644, pci_read_config, pci_write_config, 0);
 
-static struct bin_attribute *pci_dev_config_attrs[] = {
+static const struct bin_attribute *const pci_dev_config_attrs[] = {
 	&bin_attr_config,
 	NULL,
 };
@@ -856,7 +857,7 @@ static size_t pci_dev_config_attr_bin_size(struct kobject *kobj,
 }
 
 static const struct attribute_group pci_dev_config_attr_group = {
-	.bin_attrs = pci_dev_config_attrs,
+	.bin_attrs_new = pci_dev_config_attrs,
 	.bin_size = pci_dev_config_attr_bin_size,
 };
 
@@ -887,8 +888,8 @@ pci_llseek_resource(struct file *filep,
  * callback routine (pci_legacy_read).
  */
 static ssize_t pci_read_legacy_io(struct file *filp, struct kobject *kobj,
-				  struct bin_attribute *bin_attr, char *buf,
-				  loff_t off, size_t count)
+				  const struct bin_attribute *bin_attr,
+				  char *buf, loff_t off, size_t count)
 {
 	struct pci_bus *bus = to_pci_bus(kobj_to_dev(kobj));
 
@@ -912,8 +913,8 @@ static ssize_t pci_read_legacy_io(struct file *filp, struct kobject *kobj,
 * callback routine (pci_legacy_write).
 */
 static ssize_t pci_write_legacy_io(struct file *filp, struct kobject *kobj,
-				   struct bin_attribute *bin_attr, char *buf,
-				   loff_t off, size_t count)
+				   const struct bin_attribute *bin_attr,
+				   char *buf, loff_t off, size_t count)
 {
 	struct pci_bus *bus = to_pci_bus(kobj_to_dev(kobj));
 
@@ -1003,8 +1004,8 @@ void pci_create_legacy_files(struct pci_bus *b)
 	b->legacy_io->attr.name = "legacy_io";
 	b->legacy_io->size = 0xffff;
 	b->legacy_io->attr.mode = 0600;
-	b->legacy_io->read = pci_read_legacy_io;
-	b->legacy_io->write = pci_write_legacy_io;
+	b->legacy_io->read_new = pci_read_legacy_io;
+	b->legacy_io->write_new = pci_write_legacy_io;
 	/* See pci_create_attr() for motivation */
 	b->legacy_io->llseek = pci_llseek_resource;
 	b->legacy_io->mmap = pci_mmap_legacy_io;
@@ -1099,7 +1100,7 @@ static int pci_mmap_resource_wc(struct file *filp, struct kobject *kobj,
 }
 
 static ssize_t pci_resource_io(struct file *filp, struct kobject *kobj,
-			       struct bin_attribute *attr, char *buf,
+			       const struct bin_attribute *attr, char *buf,
 			       loff_t off, size_t count, bool write)
 {
 #ifdef CONFIG_HAS_IOPORT
@@ -1142,14 +1143,14 @@ static ssize_t pci_resource_io(struct file *filp, struct kobject *kobj,
 }
 
 static ssize_t pci_read_resource_io(struct file *filp, struct kobject *kobj,
-				    struct bin_attribute *attr, char *buf,
+				    const struct bin_attribute *attr, char *buf,
 				    loff_t off, size_t count)
 {
 	return pci_resource_io(filp, kobj, attr, buf, off, count, false);
 }
 
 static ssize_t pci_write_resource_io(struct file *filp, struct kobject *kobj,
-				     struct bin_attribute *attr, char *buf,
+				     const struct bin_attribute *attr, char *buf,
 				     loff_t off, size_t count)
 {
 	int ret;
@@ -1210,8 +1211,8 @@ static int pci_create_attr(struct pci_dev *pdev, int num, int write_combine)
 	} else {
 		sprintf(res_attr_name, "resource%d", num);
 		if (pci_resource_flags(pdev, num) & IORESOURCE_IO) {
-			res_attr->read = pci_read_resource_io;
-			res_attr->write = pci_write_resource_io;
+			res_attr->read_new = pci_read_resource_io;
+			res_attr->write_new = pci_write_resource_io;
 			if (arch_can_pci_mmap_io())
 				res_attr->mmap = pci_mmap_resource_uc;
 		} else {
@@ -1292,7 +1293,7 @@ void __weak pci_remove_resource_files(struct pci_dev *dev) { return; }
  * writing anything except 0 enables it
  */
 static ssize_t pci_write_rom(struct file *filp, struct kobject *kobj,
-			     struct bin_attribute *bin_attr, char *buf,
+			     const struct bin_attribute *bin_attr, char *buf,
 			     loff_t off, size_t count)
 {
 	struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj));
@@ -1318,7 +1319,7 @@ static ssize_t pci_write_rom(struct file *filp, struct kobject *kobj,
 * device corresponding to @kobj.
 */
 static ssize_t pci_read_rom(struct file *filp, struct kobject *kobj,
-			    struct bin_attribute *bin_attr, char *buf,
+			    const struct bin_attribute *bin_attr, char *buf,
 			    loff_t off, size_t count)
 {
 	struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj));
@@ -1344,9 +1345,9 @@ static ssize_t pci_read_rom(struct file *filp, struct kobject *kobj,
 
 	return count;
 }
-static BIN_ATTR(rom, 0600, pci_read_rom, pci_write_rom, 0);
+static const BIN_ATTR(rom, 0600, pci_read_rom, pci_write_rom, 0);
 
-static struct bin_attribute *pci_dev_rom_attrs[] = {
+static const struct bin_attribute *const pci_dev_rom_attrs[] = {
 	&bin_attr_rom,
 	NULL,
 };
@@ -1372,7 +1373,7 @@ static size_t pci_dev_rom_attr_bin_size(struct kobject *kobj,
 }
 
 static const struct attribute_group pci_dev_rom_attr_group = {
-	.bin_attrs = pci_dev_rom_attrs,
+	.bin_attrs_new = pci_dev_rom_attrs,
 	.is_bin_visible = pci_dev_rom_attr_is_visible,
 	.bin_size = pci_dev_rom_attr_bin_size,
 };
@@ -1421,6 +1422,113 @@ static const struct attribute_group pci_dev_reset_attr_group = {
 	.is_visible = pci_dev_reset_attr_is_visible,
 };
 
+static ssize_t reset_method_show(struct device *dev,
+				 struct device_attribute *attr, char *buf)
+{
+	struct pci_dev *pdev = to_pci_dev(dev);
+	ssize_t len = 0;
+	int i, m;
+
+	for (i = 0; i < PCI_NUM_RESET_METHODS; i++) {
+		m = pdev->reset_methods[i];
+		if (!m)
+			break;
+
+		len += sysfs_emit_at(buf, len, "%s%s", len ? " " : "",
+				     pci_reset_fn_methods[m].name);
+	}
+
+	if (len)
+		len += sysfs_emit_at(buf, len, "\n");
+
+	return len;
+}
+
+static int reset_method_lookup(const char *name)
+{
+	int m;
+
+	for (m = 1; m < PCI_NUM_RESET_METHODS; m++) {
+		if (sysfs_streq(name, pci_reset_fn_methods[m].name))
+			return m;
+	}
+
+	return 0;	/* not found */
+}
+
+static ssize_t reset_method_store(struct device *dev,
+				  struct device_attribute *attr,
+				  const char *buf, size_t count)
+{
+	struct pci_dev *pdev = to_pci_dev(dev);
+	char *tmp_options, *name;
+	int m, n;
+	u8 reset_methods[PCI_NUM_RESET_METHODS] = {};
+
+	if (sysfs_streq(buf, "")) {
+		pdev->reset_methods[0] = 0;
+		pci_warn(pdev, "All device reset methods disabled by user");
+		return count;
+	}
+
+	if (sysfs_streq(buf, "default")) {
+		pci_init_reset_methods(pdev);
+		return count;
+	}
+
+	char *options __free(kfree) = kstrndup(buf, count, GFP_KERNEL);
+	if (!options)
+		return -ENOMEM;
+
+	n = 0;
+	tmp_options = options;
+	while ((name = strsep(&tmp_options, " ")) != NULL) {
+		if (sysfs_streq(name, ""))
+			continue;
+
+		name = strim(name);
+
+		/* Leave previous methods unchanged if input is invalid */
+		m = reset_method_lookup(name);
+		if (!m) {
+			pci_err(pdev, "Invalid reset method '%s'", name);
+			return -EINVAL;
+		}
+
+		if (pci_reset_fn_methods[m].reset_fn(pdev, PCI_RESET_PROBE)) {
+			pci_err(pdev, "Unsupported reset method '%s'", name);
+			return -EINVAL;
+		}
+
+		if (n == PCI_NUM_RESET_METHODS - 1) {
+			pci_err(pdev, "Too many reset methods\n");
+			return -EINVAL;
+		}
+
+		reset_methods[n++] = m;
+	}
+
+	reset_methods[n] = 0;
+
+	/* Warn if dev-specific supported but not highest priority */
+	if (pci_reset_fn_methods[1].reset_fn(pdev, PCI_RESET_PROBE) == 0 &&
+	    reset_methods[0] != 1)
+		pci_warn(pdev, "Device-specific reset disabled/de-prioritized by user");
+
+	memcpy(pdev->reset_methods, reset_methods, sizeof(pdev->reset_methods));
+	return count;
+}
+static DEVICE_ATTR_RW(reset_method);
+
+static struct attribute *pci_dev_reset_method_attrs[] = {
+	&dev_attr_reset_method.attr,
+	NULL,
+};
+
+static const struct attribute_group pci_dev_reset_method_attr_group = {
+	.attrs = pci_dev_reset_method_attrs,
+	.is_visible = pci_dev_reset_attr_is_visible,
+};
+
 static ssize_t __resource_resize_show(struct device *dev, int n, char *buf)
 {
 	struct pci_dev *pdev = to_pci_dev(dev);
@@ -23,7 +23,6 @@
 #include <linux/string.h>
 #include <linux/log2.h>
 #include <linux/logic_pio.h>
-#include <linux/pm_wakeup.h>
 #include <linux/device.h>
 #include <linux/pm_runtime.h>
 #include <linux/pci_hotplug.h>
@@ -1099,34 +1098,6 @@ static void pci_enable_acs(struct pci_dev *dev)
 	pci_write_config_word(dev, pos + PCI_ACS_CTRL, caps.ctrl);
 }
 
-/**
- * pcie_read_tlp_log - read TLP Header Log
- * @dev: PCIe device
- * @where: PCI Config offset of TLP Header Log
- * @tlp_log: TLP Log structure to fill
- *
- * Fill @tlp_log from TLP Header Log registers, e.g., AER or DPC.
- *
- * Return: 0 on success and filled TLP Log structure, <0 on error.
- */
-int pcie_read_tlp_log(struct pci_dev *dev, int where,
-		      struct pcie_tlp_log *tlp_log)
-{
-	int i, ret;
-
-	memset(tlp_log, 0, sizeof(*tlp_log));
-
-	for (i = 0; i < 4; i++) {
-		ret = pci_read_config_dword(dev, where + i * 4,
-					    &tlp_log->dw[i]);
-		if (ret)
-			return pcibios_err_to_errno(ret);
-	}
-
-	return 0;
-}
-EXPORT_SYMBOL_GPL(pcie_read_tlp_log);
-
 /**
  * pci_restore_bars - restore a device's BAR values (e.g. after wake-up)
  * @dev: PCI device to have its BARs restored
@@ -2059,6 +2030,28 @@ int __weak pcibios_enable_device(struct pci_dev *dev, int bars)
 	return pci_enable_resources(dev, bars);
 }
 
+static int pci_host_bridge_enable_device(struct pci_dev *dev)
+{
+	struct pci_host_bridge *host_bridge = pci_find_host_bridge(dev->bus);
+	int err;
+
+	if (host_bridge && host_bridge->enable_device) {
+		err = host_bridge->enable_device(host_bridge, dev);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
+static void pci_host_bridge_disable_device(struct pci_dev *dev)
+{
+	struct pci_host_bridge *host_bridge = pci_find_host_bridge(dev->bus);
+
+	if (host_bridge && host_bridge->disable_device)
+		host_bridge->disable_device(host_bridge, dev);
+}
+
 static int do_pci_enable_device(struct pci_dev *dev, int bars)
 {
 	int err;
@@ -2074,9 +2067,13 @@ static int do_pci_enable_device(struct pci_dev *dev, int bars)
 	if (bridge)
 		pcie_aspm_powersave_config_link(bridge);
 
+	err = pci_host_bridge_enable_device(dev);
+	if (err)
+		return err;
+
 	err = pcibios_enable_device(dev, bars);
 	if (err < 0)
-		return err;
+		goto err_enable;
 	pci_fixup_device(pci_fixup_enable, dev);
 
 	if (dev->msi_enabled || dev->msix_enabled)
@@ -2091,6 +2088,12 @@ static int do_pci_enable_device(struct pci_dev *dev, int bars)
 	}
 
 	return 0;
+
+err_enable:
+	pci_host_bridge_disable_device(dev);
+
+	return err;
+
 }
 
 /**
@@ -2274,6 +2277,8 @@ void pci_disable_device(struct pci_dev *dev)
 	if (atomic_dec_return(&dev->enable_cnt) != 0)
 		return;
 
+	pci_host_bridge_disable_device(dev);
+
 	do_pci_disable_device(dev);
 
 	dev->is_busmaster = 0;
@@ -3941,15 +3946,14 @@ EXPORT_SYMBOL(pci_release_region);
  * __pci_request_region - Reserved PCI I/O and memory resource
  * @pdev: PCI device whose resources are to be reserved
  * @bar: BAR to be reserved
- * @res_name: Name to be associated with resource.
+ * @name: name of the driver requesting the resource
  * @exclusive: whether the region access is exclusive or not
 *
 * Returns: 0 on success, negative error code on failure.
 *
- * Mark the PCI region associated with PCI device @pdev BAR @bar as
- * being reserved by owner @res_name. Do not access any
- * address inside the PCI regions unless this call returns
- * successfully.
+ * Mark the PCI region associated with PCI device @pdev BAR @bar as being
+ * reserved by owner @name. Do not access any address inside the PCI regions
+ * unless this call returns successfully.
 *
 * If @exclusive is set, then the region is marked so that userspace
 * is explicitly not allowed to map the resource via /dev/mem or
@@ -3959,13 +3963,13 @@ EXPORT_SYMBOL(pci_release_region);
 * message is also printed on failure.
 */
 static int __pci_request_region(struct pci_dev *pdev, int bar,
-				const char *res_name, int exclusive)
+				const char *name, int exclusive)
 {
 	if (pci_is_managed(pdev)) {
 		if (exclusive == IORESOURCE_EXCLUSIVE)
-			return pcim_request_region_exclusive(pdev, bar, res_name);
+			return pcim_request_region_exclusive(pdev, bar, name);
 
-		return pcim_request_region(pdev, bar, res_name);
+		return pcim_request_region(pdev, bar, name);
 	}
 
 	if (pci_resource_len(pdev, bar) == 0)
@@ -3973,11 +3977,11 @@ static int __pci_request_region(struct pci_dev *pdev, int bar,
 
 	if (pci_resource_flags(pdev, bar) & IORESOURCE_IO) {
 		if (!request_region(pci_resource_start(pdev, bar),
-				    pci_resource_len(pdev, bar), res_name))
+				    pci_resource_len(pdev, bar), name))
 			goto err_out;
 	} else if (pci_resource_flags(pdev, bar) & IORESOURCE_MEM) {
 		if (!__request_mem_region(pci_resource_start(pdev, bar),
-					  pci_resource_len(pdev, bar), res_name,
+					  pci_resource_len(pdev, bar), name,
 					  exclusive))
 			goto err_out;
 	}
@@ -3994,14 +3998,13 @@ err_out:
 * pci_request_region - Reserve PCI I/O and memory resource
 * @pdev: PCI device whose resources are to be reserved
 * @bar: BAR to be reserved
- * @res_name: Name to be associated with resource
+ * @name: name of the driver requesting the resource
 *
 * Returns: 0 on success, negative error code on failure.
 *
- * Mark the PCI region associated with PCI device @pdev BAR @bar as
- * being reserved by owner @res_name. Do not access any
- * address inside the PCI regions unless this call returns
- * successfully.
+ * Mark the PCI region associated with PCI device @pdev BAR @bar as being
+ * reserved by owner @name. Do not access any address inside the PCI regions
+ * unless this call returns successfully.
 *
 * Returns 0 on success, or %EBUSY on error. A warning
 * message is also printed on failure.
@@ -4011,9 +4014,9 @@ err_out:
 * when pcim_enable_device() has been called in advance. This hybrid feature is
 * DEPRECATED! If you want managed cleanup, use the pcim_* functions instead.
 */
-int pci_request_region(struct pci_dev *pdev, int bar, const char *res_name)
+int pci_request_region(struct pci_dev *pdev, int bar, const char *name)
 {
-	return __pci_request_region(pdev, bar, res_name, 0);
+	return __pci_request_region(pdev, bar, name, 0);
 }
 EXPORT_SYMBOL(pci_request_region);
 
@@ -4036,13 +4039,13 @@ void pci_release_selected_regions(struct pci_dev *pdev, int bars)
 EXPORT_SYMBOL(pci_release_selected_regions);
 
 static int __pci_request_selected_regions(struct pci_dev *pdev, int bars,
-					  const char *res_name, int excl)
+					  const char *name, int excl)
 {
 	int i;
 
 	for (i = 0; i < PCI_STD_NUM_BARS; i++)
 		if (bars & (1 << i))
-			if (__pci_request_region(pdev, i, res_name, excl))
+			if (__pci_request_region(pdev, i, name, excl))
 				goto err_out;
 	return 0;
 
@@ -4059,7 +4062,7 @@ err_out:
 * pci_request_selected_regions - Reserve selected PCI I/O and memory resources
 * @pdev: PCI device whose resources are to be reserved
 * @bars: Bitmask of BARs to be requested
- * @res_name: Name to be associated with resource
+ * @name: Name of the driver requesting the resources
 *
 * Returns: 0 on success, negative error code on failure.
 *
@@ -4069,9 +4072,9 @@ err_out:
 * DEPRECATED! If you want managed cleanup, use the pcim_* functions instead.
 */
 int pci_request_selected_regions(struct pci_dev *pdev, int bars,
-				 const char *res_name)
+				 const char *name)
 {
-	return __pci_request_selected_regions(pdev, bars, res_name, 0);
+	return __pci_request_selected_regions(pdev, bars, name, 0);
 }
 EXPORT_SYMBOL(pci_request_selected_regions);
 
@@ -4079,7 +4082,7 @@ EXPORT_SYMBOL(pci_request_selected_regions);
 * pci_request_selected_regions_exclusive - Request regions exclusively
 * @pdev: PCI device to request regions from
 * @bars: bit mask of BARs to request
- * @res_name: name to be associated with the requests
+ * @name: name of the driver requesting the resources
 *
 * Returns: 0 on success, negative error code on failure.
 *
@@ -4089,9 +4092,9 @@ EXPORT_SYMBOL(pci_request_selected_regions);
 * DEPRECATED! If you want managed cleanup, use the pcim_* functions instead.
 */
 int pci_request_selected_regions_exclusive(struct pci_dev *pdev, int bars,
-					   const char *res_name)
+					   const char *name)
 {
-	return __pci_request_selected_regions(pdev, bars, res_name,
+	return __pci_request_selected_regions(pdev, bars, name,
 					      IORESOURCE_EXCLUSIVE);
 }
 EXPORT_SYMBOL(pci_request_selected_regions_exclusive);
@@ -4114,12 +4117,11 @@ EXPORT_SYMBOL(pci_release_regions);
 /**
 * pci_request_regions - Reserve PCI I/O and memory resources
 * @pdev: PCI device whose resources are to be reserved
- * @res_name: Name to be associated with resource.
+ * @name: name of the driver requesting the resources
 *
- * Mark all PCI regions associated with PCI device @pdev as
- * being reserved by owner @res_name. Do not access any
- * address inside the PCI regions unless this call returns
- * successfully.
+ * Mark all PCI regions associated with PCI device @pdev as being reserved by
+ * owner @name. Do not access any address inside the PCI regions unless this
+ * call returns successfully.
 *
 * Returns 0 on success, or %EBUSY on error. A warning
 * message is also printed on failure.
@@ -4129,22 +4131,22 @@ EXPORT_SYMBOL(pci_release_regions);
 * when pcim_enable_device() has been called in advance. This hybrid feature is
 * DEPRECATED! If you want managed cleanup, use the pcim_* functions instead.
 */
-int pci_request_regions(struct pci_dev *pdev, const char *res_name)
+int pci_request_regions(struct pci_dev *pdev, const char *name)
 {
 	return pci_request_selected_regions(pdev,
-			((1 << PCI_STD_NUM_BARS) - 1), res_name);
+			((1 << PCI_STD_NUM_BARS) - 1), name);
 }
 EXPORT_SYMBOL(pci_request_regions);
 
 /**
 * pci_request_regions_exclusive - Reserve PCI I/O and memory resources
 * @pdev: PCI device whose resources are to be reserved
- * @res_name: Name to be associated with resource.
+ * @name: name of the driver requesting the resources
 *
 * Returns: 0 on success, negative error code on failure.
 *
 * Mark all PCI regions associated with PCI device @pdev as being reserved
- * by owner @res_name. Do not access any address inside the PCI regions
+ * by owner @name. Do not access any address inside the PCI regions
 * unless this call returns successfully.
 *
 * pci_request_regions_exclusive() will mark the region so that /dev/mem
@@ -4158,10 +4160,10 @@ EXPORT_SYMBOL(pci_request_regions);
 * when pcim_enable_device() has been called in advance. This hybrid feature is
 * DEPRECATED! If you want managed cleanup, use the pcim_* functions instead.
 */
-int pci_request_regions_exclusive(struct pci_dev *pdev, const char *res_name)
+int pci_request_regions_exclusive(struct pci_dev *pdev, const char *name)
 {
 	return pci_request_selected_regions_exclusive(pdev,
-			((1 << PCI_STD_NUM_BARS) - 1), res_name);
+			((1 << PCI_STD_NUM_BARS) - 1), name);
 }
 EXPORT_SYMBOL(pci_request_regions_exclusive);
 
@@ -4488,11 +4490,6 @@ void pci_disable_parity(struct pci_dev *dev)
 * @enable: boolean: whether to enable or disable PCI INTx
 *
 * Enables/disables PCI INTx for device @pdev
- *
- * NOTE:
- * This is a "hybrid" function: It's normally unmanaged, but becomes managed
- * when pcim_enable_device() has been called in advance. This hybrid feature is
- * DEPRECATED! If you want managed cleanup, use pcim_intx() instead.
 */
 void pci_intx(struct pci_dev *pdev, int enable)
 {
@@ -4505,15 +4502,10 @@ void pci_intx(struct pci_dev *pdev, int enable)
 	else
 		new = pci_command | PCI_COMMAND_INTX_DISABLE;
 
-	if (new != pci_command) {
-		/* Preserve the "hybrid" behavior for backwards compatibility */
-		if (pci_is_managed(pdev)) {
-			WARN_ON_ONCE(pcim_intx(pdev, enable) != 0);
-			return;
-		}
-
-		pci_write_config_word(pdev, PCI_COMMAND, new);
-	}
+	if (new == pci_command)
+		return;
+
+	pci_write_config_word(pdev, PCI_COMMAND, new);
 }
 EXPORT_SYMBOL_GPL(pci_intx);
 
@@ -5204,7 +5196,7 @@ static void pci_dev_restore(struct pci_dev *dev)
 }
 
 /* dev->reset_methods[] is a 0-terminated list of indices into this array */
-static const struct pci_reset_fn_method pci_reset_fn_methods[] = {
+const struct pci_reset_fn_method pci_reset_fn_methods[] = {
 	{ },
 	{ pci_dev_specific_reset, .name = "device_specific" },
 	{ pci_dev_acpi_reset, .name = "acpi" },
@@ -5215,129 +5207,6 @@ static const struct pci_reset_fn_method pci_reset_fn_methods[] = {
 	{ cxl_reset_bus_function, .name = "cxl_bus" },
 };
 
-static ssize_t reset_method_show(struct device *dev,
-				 struct device_attribute *attr, char *buf)
-{
-	struct pci_dev *pdev = to_pci_dev(dev);
-	ssize_t len = 0;
-	int i, m;
-
-	for (i = 0; i < PCI_NUM_RESET_METHODS; i++) {
-		m = pdev->reset_methods[i];
-		if (!m)
-			break;
-
-		len += sysfs_emit_at(buf, len, "%s%s", len ? " " : "",
-				     pci_reset_fn_methods[m].name);
-	}
-
-	if (len)
-		len += sysfs_emit_at(buf, len, "\n");
-
-	return len;
-}
-
-static int reset_method_lookup(const char *name)
-{
-	int m;
-
-	for (m = 1; m < PCI_NUM_RESET_METHODS; m++) {
-		if (sysfs_streq(name, pci_reset_fn_methods[m].name))
-			return m;
-	}
-
-	return 0;	/* not found */
-}
-
-static ssize_t reset_method_store(struct device *dev,
-				  struct device_attribute *attr,
-				  const char *buf, size_t count)
-{
-	struct pci_dev *pdev = to_pci_dev(dev);
-	char *options, *tmp_options, *name;
-	int m, n;
-	u8 reset_methods[PCI_NUM_RESET_METHODS] = { 0 };
-
-	if (sysfs_streq(buf, "")) {
-		pdev->reset_methods[0] = 0;
-		pci_warn(pdev, "All device reset methods disabled by user");
-		return count;
-	}
-
-	if (sysfs_streq(buf, "default")) {
-		pci_init_reset_methods(pdev);
-		return count;
-	}
-
-	options = kstrndup(buf, count, GFP_KERNEL);
-	if (!options)
-		return -ENOMEM;
-
-	n = 0;
-	tmp_options = options;
-	while ((name = strsep(&tmp_options, " ")) != NULL) {
-		if (sysfs_streq(name, ""))
-			continue;
||||
name = strim(name);
|
||||
|
||||
m = reset_method_lookup(name);
|
||||
if (!m) {
|
||||
pci_err(pdev, "Invalid reset method '%s'", name);
|
||||
goto error;
|
||||
}
|
||||
|
||||
if (pci_reset_fn_methods[m].reset_fn(pdev, PCI_RESET_PROBE)) {
|
||||
pci_err(pdev, "Unsupported reset method '%s'", name);
|
||||
goto error;
|
||||
}
|
||||
|
||||
if (n == PCI_NUM_RESET_METHODS - 1) {
|
||||
pci_err(pdev, "Too many reset methods\n");
|
||||
goto error;
|
||||
}
|
||||
|
||||
reset_methods[n++] = m;
|
||||
}
|
||||
|
||||
reset_methods[n] = 0;
|
||||
|
||||
/* Warn if dev-specific supported but not highest priority */
|
||||
if (pci_reset_fn_methods[1].reset_fn(pdev, PCI_RESET_PROBE) == 0 &&
|
||||
reset_methods[0] != 1)
|
||||
pci_warn(pdev, "Device-specific reset disabled/de-prioritized by user");
|
||||
memcpy(pdev->reset_methods, reset_methods, sizeof(pdev->reset_methods));
|
||||
kfree(options);
|
||||
return count;
|
||||
|
||||
error:
|
||||
/* Leave previous methods unchanged */
|
||||
kfree(options);
|
||||
return -EINVAL;
|
||||
}
|
||||
static DEVICE_ATTR_RW(reset_method);
|
||||
|
||||
static struct attribute *pci_dev_reset_method_attrs[] = {
|
||||
&dev_attr_reset_method.attr,
|
||||
NULL,
|
||||
};
|
||||
|
||||
static umode_t pci_dev_reset_method_attr_is_visible(struct kobject *kobj,
|
||||
struct attribute *a, int n)
|
||||
{
|
||||
struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj));
|
||||
|
||||
if (!pci_reset_supported(pdev))
|
||||
return 0;
|
||||
|
||||
return a->mode;
|
||||
}
|
||||
|
||||
const struct attribute_group pci_dev_reset_method_attr_group = {
|
||||
.attrs = pci_dev_reset_method_attrs,
|
||||
.is_visible = pci_dev_reset_method_attr_is_visible,
|
||||
};
|
||||
|
||||
/**
|
||||
* __pci_reset_function_locked - reset a PCI device function while holding
|
||||
* the @dev mutex lock.
|
||||
|
|
|
@@ -4,6 +4,8 @@
 
 #include <linux/pci.h>
 
+struct pcie_tlp_log;
+
 /* Number of possible devfns: 0.0 to 1f.7 inclusive */
 #define MAX_NR_DEVFNS 256
@@ -315,8 +317,10 @@ bool pci_bus_generic_read_dev_vendor_id(struct pci_bus *bus, int devfn, u32 *pl,
 int pci_idt_bus_quirk(struct pci_bus *bus, int devfn, u32 *pl, int rrs_timeout);
 
 int pci_setup_device(struct pci_dev *dev);
+void __pci_size_stdbars(struct pci_dev *dev, int count,
+			unsigned int pos, u32 *sizes);
 int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type,
-		    struct resource *res, unsigned int reg);
+		    struct resource *res, unsigned int reg, u32 *sizes);
 void pci_configure_ari(struct pci_dev *dev);
 void __pci_bus_size_bridges(struct pci_bus *bus,
 			    struct list_head *realloc_head);
@@ -547,6 +551,12 @@ struct aer_err_info {
 
 int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info);
 void aer_print_error(struct pci_dev *dev, struct aer_err_info *info);
+
+int pcie_read_tlp_log(struct pci_dev *dev, int where, int where2,
+		      unsigned int tlp_len, struct pcie_tlp_log *log);
+unsigned int aer_tlp_log_len(struct pci_dev *dev, u32 aercc);
+void pcie_print_tlp_log(const struct pci_dev *dev,
+			const struct pcie_tlp_log *log, const char *pfx);
 #endif	/* CONFIG_PCIEAER */
 
 #ifdef CONFIG_PCIEPORTBUS
@@ -565,6 +575,7 @@ void pci_dpc_init(struct pci_dev *pdev);
 void dpc_process_error(struct pci_dev *pdev);
 pci_ers_result_t dpc_reset_link(struct pci_dev *pdev);
 bool pci_dpc_recovered(struct pci_dev *pdev);
+unsigned int dpc_tlp_log_len(struct pci_dev *dev);
 #else
 static inline void pci_save_dpc_state(struct pci_dev *dev) { }
 static inline void pci_restore_dpc_state(struct pci_dev *dev) { }
@@ -766,6 +777,7 @@ struct pci_reset_fn_method {
 	int (*reset_fn)(struct pci_dev *pdev, bool probe);
 	char *name;
 };
+extern const struct pci_reset_fn_method pci_reset_fn_methods[];
 
 #ifdef CONFIG_PCI_QUIRKS
 int pci_dev_specific_reset(struct pci_dev *dev, bool probe);
@@ -797,7 +809,6 @@ static inline u64 pci_rebar_size_to_bytes(int size)
 struct device_node;
 
 #ifdef CONFIG_OF
-int of_pci_parse_bus_range(struct device_node *node, struct resource *res);
 int of_get_pci_domain_nr(struct device_node *node);
 int of_pci_get_max_link_speed(struct device_node *node);
 u32 of_pci_get_slot_power_limit(struct device_node *node,
@@ -813,12 +824,6 @@ int devm_of_pci_bridge_init(struct device *dev, struct pci_host_bridge *bridge);
 bool of_pci_supply_present(struct device_node *np);
 
 #else
-static inline int
-of_pci_parse_bus_range(struct device_node *node, struct resource *res)
-{
-	return -EINVAL;
-}
-
 static inline int
 of_get_pci_domain_nr(struct device_node *node)
 {
@@ -960,8 +965,6 @@ static inline pci_power_t acpi_pci_choose_state(struct pci_dev *pdev)
 extern const struct attribute_group aspm_ctrl_attr_group;
 #endif
 
-extern const struct attribute_group pci_dev_reset_method_attr_group;
-
 #ifdef CONFIG_X86_INTEL_MID
 bool pci_use_mid_pm(void);
 int mid_pci_set_power_state(struct pci_dev *pdev, pci_power_t state);
@@ -7,7 +7,7 @@ pcieportdrv-y := portdrv.o rcec.o
 obj-$(CONFIG_PCIEPORTBUS)	+= pcieportdrv.o bwctrl.o
 
 obj-y				+= aspm.o
-obj-$(CONFIG_PCIEAER)		+= aer.o err.o
+obj-$(CONFIG_PCIEAER)		+= aer.o err.o tlp.o
 obj-$(CONFIG_PCIEAER_INJECT)	+= aer_inject.o
 obj-$(CONFIG_PCIE_PME)		+= pme.o
 obj-$(CONFIG_PCIE_DPC)		+= dpc.o
@@ -665,12 +665,6 @@ static void pci_rootport_aer_stats_incr(struct pci_dev *pdev,
 	}
 }
 
-static void __print_tlp_header(struct pci_dev *dev, struct pcie_tlp_log *t)
-{
-	pci_err(dev, "  TLP Header: %08x %08x %08x %08x\n",
-		t->dw[0], t->dw[1], t->dw[2], t->dw[3]);
-}
-
 static void __aer_print_error(struct pci_dev *dev,
 			      struct aer_err_info *info)
 {
@@ -725,7 +719,7 @@ void aer_print_error(struct pci_dev *dev, struct aer_err_info *info)
 	__aer_print_error(dev, info);
 
 	if (info->tlp_header_valid)
-		__print_tlp_header(dev, &info->tlp);
+		pcie_print_tlp_log(dev, &info->tlp, dev_fmt("  "));
 
 out:
 	if (info->id && info->error_dev_num > 1 && info->id == id)
@@ -797,7 +791,7 @@ void pci_print_aer(struct pci_dev *dev, int aer_severity,
 			aer->uncor_severity);
 
 	if (tlp_header_valid)
-		__print_tlp_header(dev, &aer->header_log);
+		pcie_print_tlp_log(dev, &aer->header_log, dev_fmt("  "));
 
 	trace_aer_event(dev_name(&dev->dev), (status & ~mask),
 			aer_severity, tlp_header_valid, &aer->header_log);
@@ -1248,7 +1242,10 @@ int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info)
 
 		if (info->status & AER_LOG_TLP_MASKS) {
 			info->tlp_header_valid = 1;
-			pcie_read_tlp_log(dev, aer + PCI_ERR_HEADER_LOG, &info->tlp);
+			pcie_read_tlp_log(dev, aer + PCI_ERR_HEADER_LOG,
+					  aer + PCI_ERR_PREFIX_LOG,
+					  aer_tlp_log_len(dev, aercc),
+					  &info->tlp);
 		}
 	}
@@ -81,24 +81,47 @@ void pci_configure_aspm_l1ss(struct pci_dev *pdev)
 
 void pci_save_aspm_l1ss_state(struct pci_dev *pdev)
 {
+	struct pci_dev *parent = pdev->bus->self;
 	struct pci_cap_saved_state *save_state;
-	u16 l1ss = pdev->l1ss;
 	u32 *cap;
 
+	/*
+	 * If this is a Downstream Port, we never restore the L1SS state
+	 * directly; we only restore it when we restore the state of the
+	 * Upstream Port below it.
+	 */
+	if (pcie_downstream_port(pdev) || !parent)
+		return;
+
+	if (!pdev->l1ss || !parent->l1ss)
+		return;
+
 	/*
 	 * Save L1 substate configuration. The ASPM L0s/L1 configuration
 	 * in PCI_EXP_LNKCTL_ASPMC is saved by pci_save_pcie_state().
 	 */
-	if (!l1ss)
-		return;
-
 	save_state = pci_find_saved_ext_cap(pdev, PCI_EXT_CAP_ID_L1SS);
 	if (!save_state)
 		return;
 
 	cap = &save_state->cap.data[0];
-	pci_read_config_dword(pdev, l1ss + PCI_L1SS_CTL2, cap++);
-	pci_read_config_dword(pdev, l1ss + PCI_L1SS_CTL1, cap++);
+	pci_read_config_dword(pdev, pdev->l1ss + PCI_L1SS_CTL2, cap++);
+	pci_read_config_dword(pdev, pdev->l1ss + PCI_L1SS_CTL1, cap++);
+
+	if (parent->state_saved)
+		return;
+
+	/*
+	 * Save parent's L1 substate configuration so we have it for
+	 * pci_restore_aspm_l1ss_state(pdev) to restore.
+	 */
+	save_state = pci_find_saved_ext_cap(parent, PCI_EXT_CAP_ID_L1SS);
+	if (!save_state)
+		return;
+
+	cap = &save_state->cap.data[0];
+	pci_read_config_dword(parent, parent->l1ss + PCI_L1SS_CTL2, cap++);
+	pci_read_config_dword(parent, parent->l1ss + PCI_L1SS_CTL1, cap++);
 }
 
 void pci_restore_aspm_l1ss_state(struct pci_dev *pdev)
@@ -190,7 +190,7 @@ out:
 static void dpc_process_rp_pio_error(struct pci_dev *pdev)
 {
 	u16 cap = pdev->dpc_cap, dpc_status, first_error;
-	u32 status, mask, sev, syserr, exc, log, prefix;
+	u32 status, mask, sev, syserr, exc, log;
 	struct pcie_tlp_log tlp_log;
 	int i;
 
@@ -215,22 +215,18 @@ static void dpc_process_rp_pio_error(struct pci_dev *pdev)
 			 first_error == i ? " (First)" : "");
 	}
 
-	if (pdev->dpc_rp_log_size < 4)
+	if (pdev->dpc_rp_log_size < PCIE_STD_NUM_TLP_HEADERLOG)
 		goto clear_status;
-	pcie_read_tlp_log(pdev, cap + PCI_EXP_DPC_RP_PIO_HEADER_LOG, &tlp_log);
-	pci_err(pdev, "TLP Header: %#010x %#010x %#010x %#010x\n",
-		tlp_log.dw[0], tlp_log.dw[1], tlp_log.dw[2], tlp_log.dw[3]);
+	pcie_read_tlp_log(pdev, cap + PCI_EXP_DPC_RP_PIO_HEADER_LOG,
+			  cap + PCI_EXP_DPC_RP_PIO_TLPPREFIX_LOG,
+			  dpc_tlp_log_len(pdev), &tlp_log);
+	pcie_print_tlp_log(pdev, &tlp_log, dev_fmt(""));
 
-	if (pdev->dpc_rp_log_size < 5)
+	if (pdev->dpc_rp_log_size < PCIE_STD_NUM_TLP_HEADERLOG + 1)
 		goto clear_status;
 	pci_read_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_IMPSPEC_LOG, &log);
 	pci_err(pdev, "RP PIO ImpSpec Log %#010x\n", log);
 
-	for (i = 0; i < pdev->dpc_rp_log_size - 5; i++) {
-		pci_read_config_dword(pdev,
-			cap + PCI_EXP_DPC_RP_PIO_TLPPREFIX_LOG + i * 4, &prefix);
-		pci_err(pdev, "TLP Prefix Header: dw%d, %#010x\n", i, prefix);
-	}
 clear_status:
 	pci_write_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_STATUS, status);
 }
@@ -404,7 +400,9 @@ void pci_dpc_init(struct pci_dev *pdev)
 	if (!pdev->dpc_rp_log_size) {
 		pdev->dpc_rp_log_size =
 			FIELD_GET(PCI_EXP_DPC_RP_PIO_LOG_SIZE, cap);
-		if (pdev->dpc_rp_log_size < 4 || pdev->dpc_rp_log_size > 9) {
+		if (pdev->dpc_rp_log_size < PCIE_STD_NUM_TLP_HEADERLOG ||
+		    pdev->dpc_rp_log_size > PCIE_STD_NUM_TLP_HEADERLOG + 1 +
+					    PCIE_STD_MAX_TLP_PREFIXLOG) {
 			pci_err(pdev, "RP PIO log size %u is invalid\n",
 				pdev->dpc_rp_log_size);
 			pdev->dpc_rp_log_size = 0;
@@ -0,0 +1,115 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * PCIe TLP Log handling
+ *
+ * Copyright (C) 2024 Intel Corporation
+ */
+
+#include <linux/aer.h>
+#include <linux/array_size.h>
+#include <linux/pci.h>
+#include <linux/string.h>
+
+#include "../pci.h"
+
+/**
+ * aer_tlp_log_len - Calculate AER Capability TLP Header/Prefix Log length
+ * @dev: PCIe device
+ * @aercc: AER Capabilities and Control register value
+ *
+ * Return: TLP Header/Prefix Log length
+ */
+unsigned int aer_tlp_log_len(struct pci_dev *dev, u32 aercc)
+{
+	return PCIE_STD_NUM_TLP_HEADERLOG +
+	       ((aercc & PCI_ERR_CAP_PREFIX_LOG_PRESENT) ?
+		dev->eetlp_prefix_max : 0);
+}
+
+#ifdef CONFIG_PCIE_DPC
+/**
+ * dpc_tlp_log_len - Calculate DPC RP PIO TLP Header/Prefix Log length
+ * @dev: PCIe device
+ *
+ * Return: TLP Header/Prefix Log length
+ */
+unsigned int dpc_tlp_log_len(struct pci_dev *dev)
+{
+	/* Remove ImpSpec Log register from the count */
+	if (dev->dpc_rp_log_size >= PCIE_STD_NUM_TLP_HEADERLOG + 1)
+		return dev->dpc_rp_log_size - 1;
+
+	return dev->dpc_rp_log_size;
+}
+#endif
+
+/**
+ * pcie_read_tlp_log - read TLP Header Log
+ * @dev: PCIe device
+ * @where: PCI Config offset of TLP Header Log
+ * @where2: PCI Config offset of TLP Prefix Log
+ * @tlp_len: TLP Log length (Header Log + TLP Prefix Log in DWORDs)
+ * @log: TLP Log structure to fill
+ *
+ * Fill @log from TLP Header Log registers, e.g., AER or DPC.
+ *
+ * Return: 0 on success and filled TLP Log structure, <0 on error.
+ */
+int pcie_read_tlp_log(struct pci_dev *dev, int where, int where2,
+		      unsigned int tlp_len, struct pcie_tlp_log *log)
+{
+	unsigned int i;
+	int off, ret;
+	u32 *to;
+
+	memset(log, 0, sizeof(*log));
+
+	for (i = 0; i < tlp_len; i++) {
+		if (i < PCIE_STD_NUM_TLP_HEADERLOG) {
+			off = where + i * 4;
+			to = &log->dw[i];
+		} else {
+			off = where2 + (i - PCIE_STD_NUM_TLP_HEADERLOG) * 4;
+			to = &log->prefix[i - PCIE_STD_NUM_TLP_HEADERLOG];
+		}
+
+		ret = pci_read_config_dword(dev, off, to);
+		if (ret)
+			return pcibios_err_to_errno(ret);
+	}
+
+	return 0;
+}
+
+#define EE_PREFIX_STR " E-E Prefixes:"
+
+/**
+ * pcie_print_tlp_log - Print TLP Header / Prefix Log contents
+ * @dev: PCIe device
+ * @log: TLP Log structure
+ * @pfx: String prefix
+ *
+ * Prints TLP Header and Prefix Log information held by @log.
+ */
+void pcie_print_tlp_log(const struct pci_dev *dev,
+			const struct pcie_tlp_log *log, const char *pfx)
+{
+	char buf[11 * (PCIE_STD_NUM_TLP_HEADERLOG + ARRAY_SIZE(log->prefix)) +
+		 sizeof(EE_PREFIX_STR)];
+	unsigned int i;
+	int len;
+
+	len = scnprintf(buf, sizeof(buf), "%#010x %#010x %#010x %#010x",
+			log->dw[0], log->dw[1], log->dw[2], log->dw[3]);
+
+	if (log->prefix[0])
+		len += scnprintf(buf + len, sizeof(buf) - len, EE_PREFIX_STR);
+	for (i = 0; i < ARRAY_SIZE(log->prefix); i++) {
+		if (!log->prefix[i])
+			break;
+		len += scnprintf(buf + len, sizeof(buf) - len,
+				 " %#010x", log->prefix[i]);
+	}
+
+	pci_err(dev, "%sTLP Header: %s\n", pfx, buf);
+}
@@ -164,41 +164,67 @@ static inline unsigned long decode_bar(struct pci_dev *dev, u32 bar)
 
 #define PCI_COMMAND_DECODE_ENABLE	(PCI_COMMAND_MEMORY | PCI_COMMAND_IO)
 
+/**
+ * __pci_size_bars - Read the raw BAR mask for a range of PCI BARs
+ * @dev: the PCI device
+ * @count: number of BARs to size
+ * @pos: starting config space position
+ * @sizes: array to store mask values
+ * @rom: indicate whether to use ROM mask, which avoids enabling ROM BARs
+ *
+ * Provided @sizes array must be sufficiently sized to store results for
+ * @count u32 BARs.  Caller is responsible for disabling decode to specified
+ * BAR range around calling this function.  This function is intended to avoid
+ * disabling decode around sizing each BAR individually, which can result in
+ * non-trivial overhead in virtualized environments with very large PCI BARs.
+ */
+static void __pci_size_bars(struct pci_dev *dev, int count,
+			    unsigned int pos, u32 *sizes, bool rom)
+{
+	u32 orig, mask = rom ? PCI_ROM_ADDRESS_MASK : ~0;
+	int i;
+
+	for (i = 0; i < count; i++, pos += 4, sizes++) {
+		pci_read_config_dword(dev, pos, &orig);
+		pci_write_config_dword(dev, pos, mask);
+		pci_read_config_dword(dev, pos, sizes);
+		pci_write_config_dword(dev, pos, orig);
+	}
+}
+
+void __pci_size_stdbars(struct pci_dev *dev, int count,
+			unsigned int pos, u32 *sizes)
+{
+	__pci_size_bars(dev, count, pos, sizes, false);
+}
+
+static void __pci_size_rom(struct pci_dev *dev, unsigned int pos, u32 *sizes)
+{
+	__pci_size_bars(dev, 1, pos, sizes, true);
+}
+
 /**
  * __pci_read_base - Read a PCI BAR
  * @dev: the PCI device
  * @type: type of the BAR
  * @res: resource buffer to be filled in
  * @pos: BAR position in the config space
+ * @sizes: array of one or more pre-read BAR masks
  *
  * Returns 1 if the BAR is 64-bit, or 0 if 32-bit.
  */
 int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type,
-		    struct resource *res, unsigned int pos)
+		    struct resource *res, unsigned int pos, u32 *sizes)
 {
-	u32 l = 0, sz = 0, mask;
+	u32 l = 0, sz;
 	u64 l64, sz64, mask64;
-	u16 orig_cmd;
 	struct pci_bus_region region, inverted_region;
 	const char *res_name = pci_resource_name(dev, res - dev->resource);
 
-	mask = type ? PCI_ROM_ADDRESS_MASK : ~0;
-
-	/* No printks while decoding is disabled! */
-	if (!dev->mmio_always_on) {
-		pci_read_config_word(dev, PCI_COMMAND, &orig_cmd);
-		if (orig_cmd & PCI_COMMAND_DECODE_ENABLE) {
-			pci_write_config_word(dev, PCI_COMMAND,
-				orig_cmd & ~PCI_COMMAND_DECODE_ENABLE);
-		}
-	}
-
 	res->name = pci_name(dev);
 
 	pci_read_config_dword(dev, pos, &l);
-	pci_write_config_dword(dev, pos, l | mask);
-	pci_read_config_dword(dev, pos, &sz);
-	pci_write_config_dword(dev, pos, l);
+	sz = sizes[0];
 
 	/*
 	 * All bits set in sz means the device isn't working properly.
@@ -238,18 +264,13 @@ int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type,
 
 	if (res->flags & IORESOURCE_MEM_64) {
 		pci_read_config_dword(dev, pos + 4, &l);
-		pci_write_config_dword(dev, pos + 4, ~0);
-		pci_read_config_dword(dev, pos + 4, &sz);
-		pci_write_config_dword(dev, pos + 4, l);
+		sz = sizes[1];
 
 		l64 |= ((u64)l << 32);
 		sz64 |= ((u64)sz << 32);
 		mask64 |= ((u64)~0 << 32);
 	}
 
-	if (!dev->mmio_always_on && (orig_cmd & PCI_COMMAND_DECODE_ENABLE))
-		pci_write_config_word(dev, PCI_COMMAND, orig_cmd);
-
 	if (!sz64)
 		goto fail;
@@ -320,7 +341,11 @@ out:
 
 static void pci_read_bases(struct pci_dev *dev, unsigned int howmany, int rom)
 {
+	u32 rombar, stdbars[PCI_STD_NUM_BARS];
 	unsigned int pos, reg;
+	u16 orig_cmd;
+
+	BUILD_BUG_ON(howmany > PCI_STD_NUM_BARS);
 
 	if (dev->non_compliant_bars)
 		return;
@@ -329,10 +354,28 @@ static void pci_read_bases(struct pci_dev *dev, unsigned int howmany, int rom)
 	if (dev->is_virtfn)
 		return;
 
+	/* No printks while decoding is disabled! */
+	if (!dev->mmio_always_on) {
+		pci_read_config_word(dev, PCI_COMMAND, &orig_cmd);
+		if (orig_cmd & PCI_COMMAND_DECODE_ENABLE) {
+			pci_write_config_word(dev, PCI_COMMAND,
+				orig_cmd & ~PCI_COMMAND_DECODE_ENABLE);
+		}
+	}
+
+	__pci_size_stdbars(dev, howmany, PCI_BASE_ADDRESS_0, stdbars);
+	if (rom)
+		__pci_size_rom(dev, rom, &rombar);
+
+	if (!dev->mmio_always_on &&
+	    (orig_cmd & PCI_COMMAND_DECODE_ENABLE))
+		pci_write_config_word(dev, PCI_COMMAND, orig_cmd);
+
 	for (pos = 0; pos < howmany; pos++) {
 		struct resource *res = &dev->resource[pos];
 		reg = PCI_BASE_ADDRESS_0 + (pos << 2);
-		pos += __pci_read_base(dev, pci_bar_unknown, res, reg);
+		pos += __pci_read_base(dev, pci_bar_unknown,
+				       res, reg, &stdbars[pos]);
 	}
 
 	if (rom) {
@@ -340,7 +383,7 @@ static void pci_read_bases(struct pci_dev *dev, unsigned int howmany, int rom)
 		dev->rom_base_reg = rom;
 		res->flags = IORESOURCE_MEM | IORESOURCE_PREFETCH |
 				IORESOURCE_READONLY | IORESOURCE_SIZEALIGN;
-		__pci_read_base(dev, pci_bar_mem32, res, rom);
+		__pci_read_base(dev, pci_bar_mem32, res, rom, &rombar);
 	}
 }
@@ -2251,8 +2294,8 @@ static void pci_configure_relaxed_ordering(struct pci_dev *dev)
 
 static void pci_configure_eetlp_prefix(struct pci_dev *dev)
 {
-#ifdef CONFIG_PCI_PASID
 	struct pci_dev *bridge;
+	unsigned int eetlp_max;
 	int pcie_type;
 	u32 cap;
 
@@ -2264,15 +2307,19 @@ static void pci_configure_eetlp_prefix(struct pci_dev *dev)
 		return;
 
 	pcie_type = pci_pcie_type(dev);
+
+	eetlp_max = FIELD_GET(PCI_EXP_DEVCAP2_EE_PREFIX_MAX, cap);
+	/* 00b means 4 */
+	eetlp_max = eetlp_max ?: 4;
+
 	if (pcie_type == PCI_EXP_TYPE_ROOT_PORT ||
 	    pcie_type == PCI_EXP_TYPE_RC_END)
-		dev->eetlp_prefix_path = 1;
+		dev->eetlp_prefix_max = eetlp_max;
 	else {
 		bridge = pci_upstream_bridge(dev);
-		if (bridge && bridge->eetlp_prefix_path)
-			dev->eetlp_prefix_path = 1;
+		if (bridge && bridge->eetlp_prefix_max)
+			dev->eetlp_prefix_max = eetlp_max;
 	}
-#endif
 }
 
 static void pci_configure_serr(struct pci_dev *dev)
@@ -12,6 +12,7 @@
  * file, where their drivers can use them.
  */
 
+#include <linux/aer.h>
 #include <linux/align.h>
 #include <linux/bitfield.h>
 #include <linux/types.h>
@@ -5984,6 +5985,17 @@ SWITCHTEC_QUIRK(0x5552);	/* PAXA 52XG5 */
 SWITCHTEC_QUIRK(0x5536);	/* PAXA 36XG5 */
 SWITCHTEC_QUIRK(0x5528);	/* PAXA 28XG5 */
 
+#define SWITCHTEC_PCI100X_QUIRK(vid) \
+	DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_EFAR, vid, \
+		PCI_CLASS_BRIDGE_OTHER, 8, quirk_switchtec_ntb_dma_alias)
+SWITCHTEC_PCI100X_QUIRK(0x1001);	/* PCI1001XG4 */
+SWITCHTEC_PCI100X_QUIRK(0x1002);	/* PCI1002XG4 */
+SWITCHTEC_PCI100X_QUIRK(0x1003);	/* PCI1003XG4 */
+SWITCHTEC_PCI100X_QUIRK(0x1004);	/* PCI1004XG4 */
+SWITCHTEC_PCI100X_QUIRK(0x1005);	/* PCI1005XG4 */
+SWITCHTEC_PCI100X_QUIRK(0x1006);	/* PCI1006XG4 */
+
+
 /*
  * The PLX NTB uses devfn proxy IDs to move TLPs between NT endpoints.
  * These IDs are used to forward responses to the originator on the other
@@ -6233,8 +6245,9 @@ static void dpc_log_size(struct pci_dev *dev)
 		return;
 
 	if (FIELD_GET(PCI_EXP_DPC_RP_PIO_LOG_SIZE, val) == 0) {
-		pci_info(dev, "Overriding RP PIO Log Size to 4\n");
-		dev->dpc_rp_log_size = 4;
+		pci_info(dev, "Overriding RP PIO Log Size to %d\n",
+			 PCIE_STD_NUM_TLP_HEADERLOG);
+		dev->dpc_rp_log_size = PCIE_STD_NUM_TLP_HEADERLOG;
 	}
 }
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x461f, dpc_log_size);
@@ -6253,6 +6266,7 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x9a2b, dpc_log_size);
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x9a2d, dpc_log_size);
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x9a2f, dpc_log_size);
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x9a31, dpc_log_size);
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0xa72f, dpc_log_size);
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0xa73f, dpc_log_size);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0xa76e, dpc_log_size);
 #endif
@@ -1739,6 +1739,26 @@ static void switchtec_pci_remove(struct pci_dev *pdev)
 		.driver_data = gen, \
 	}
 
+#define SWITCHTEC_PCI100X_DEVICE(device_id, gen) \
+	{ \
+		.vendor     = PCI_VENDOR_ID_EFAR, \
+		.device     = device_id, \
+		.subvendor  = PCI_ANY_ID, \
+		.subdevice  = PCI_ANY_ID, \
+		.class      = (PCI_CLASS_MEMORY_OTHER << 8), \
+		.class_mask = 0xFFFFFFFF, \
+		.driver_data = gen, \
+	}, \
+	{ \
+		.vendor     = PCI_VENDOR_ID_EFAR, \
+		.device     = device_id, \
+		.subvendor  = PCI_ANY_ID, \
+		.subdevice  = PCI_ANY_ID, \
+		.class      = (PCI_CLASS_BRIDGE_OTHER << 8), \
+		.class_mask = 0xFFFFFFFF, \
+		.driver_data = gen, \
+	}
+
 static const struct pci_device_id switchtec_pci_tbl[] = {
 	SWITCHTEC_PCI_DEVICE(0x8531, SWITCHTEC_GEN3),  /* PFX 24xG3 */
 	SWITCHTEC_PCI_DEVICE(0x8532, SWITCHTEC_GEN3),  /* PFX 32xG3 */
@@ -1833,6 +1853,12 @@ static const struct pci_device_id switchtec_pci_tbl[] = {
 	SWITCHTEC_PCI_DEVICE(0x5552, SWITCHTEC_GEN5),  /* PAXA 52XG5 */
 	SWITCHTEC_PCI_DEVICE(0x5536, SWITCHTEC_GEN5),  /* PAXA 36XG5 */
 	SWITCHTEC_PCI_DEVICE(0x5528, SWITCHTEC_GEN5),  /* PAXA 28XG5 */
+	SWITCHTEC_PCI100X_DEVICE(0x1001, SWITCHTEC_GEN4),  /* PCI1001 16XG4 */
+	SWITCHTEC_PCI100X_DEVICE(0x1002, SWITCHTEC_GEN4),  /* PCI1002 12XG4 */
+	SWITCHTEC_PCI100X_DEVICE(0x1003, SWITCHTEC_GEN4),  /* PCI1003 16XG4 */
+	SWITCHTEC_PCI100X_DEVICE(0x1004, SWITCHTEC_GEN4),  /* PCI1004 16XG4 */
+	SWITCHTEC_PCI100X_DEVICE(0x1005, SWITCHTEC_GEN4),  /* PCI1005 16XG4 */
+	SWITCHTEC_PCI100X_DEVICE(0x1006, SWITCHTEC_GEN4),  /* PCI1006 16XG4 */
 	{0}
 };
 MODULE_DEVICE_TABLE(pci, switchtec_pci_tbl);
@@ -271,8 +271,8 @@ void pci_vpd_init(struct pci_dev *dev)
 }
 
 static ssize_t vpd_read(struct file *filp, struct kobject *kobj,
-			struct bin_attribute *bin_attr, char *buf, loff_t off,
-			size_t count)
+			const struct bin_attribute *bin_attr, char *buf,
+			loff_t off, size_t count)
 {
 	struct pci_dev *dev = to_pci_dev(kobj_to_dev(kobj));
 	struct pci_dev *vpd_dev = dev;
@@ -295,8 +295,8 @@ static ssize_t vpd_read(struct file *filp, struct kobject *kobj,
 }
 
 static ssize_t vpd_write(struct file *filp, struct kobject *kobj,
-			 struct bin_attribute *bin_attr, char *buf, loff_t off,
-			 size_t count)
+			 const struct bin_attribute *bin_attr, char *buf,
+			 loff_t off, size_t count)
 {
 	struct pci_dev *dev = to_pci_dev(kobj_to_dev(kobj));
 	struct pci_dev *vpd_dev = dev;
@@ -317,9 +317,9 @@ static ssize_t vpd_write(struct file *filp, struct kobject *kobj,
 
 	return ret;
 }
-static BIN_ATTR(vpd, 0600, vpd_read, vpd_write, 0);
+static const BIN_ATTR(vpd, 0600, vpd_read, vpd_write, 0);
 
-static struct bin_attribute *vpd_attrs[] = {
+static const struct bin_attribute *const vpd_attrs[] = {
 	&bin_attr_vpd,
 	NULL,
 };
@@ -336,7 +336,7 @@ static umode_t vpd_attr_is_visible(struct kobject *kobj,
 }
 
 const struct attribute_group pci_dev_vpd_attr_group = {
-	.bin_attrs = vpd_attrs,
+	.bin_attrs_new = vpd_attrs,
 	.is_bin_visible = vpd_attr_is_visible,
 };
@@ -1389,11 +1389,12 @@ static int vfio_ext_cap_len(struct vfio_pci_core_device *vdev, u16 ecap, u16 epo
 
 	switch (ecap) {
 	case PCI_EXT_CAP_ID_VNDR:
-		ret = pci_read_config_dword(pdev, epos + PCI_VSEC_HDR, &dword);
+		ret = pci_read_config_dword(pdev, epos + PCI_VNDR_HEADER,
+					    &dword);
 		if (ret)
 			return pcibios_err_to_errno(ret);
 
-		return dword >> PCI_VSEC_HDR_LEN_SHIFT;
+		return PCI_VNDR_HEADER_LEN(dword);
 	case PCI_EXT_CAP_ID_VC:
 	case PCI_EXT_CAP_ID_VC9:
 	case PCI_EXT_CAP_ID_MFVC:
@@ -16,10 +16,18 @@
 #define AER_CORRECTABLE			2
 #define DPC_FATAL			3
 
+/*
+ * AER and DPC capabilities TLP Logging register sizes (PCIe r6.2, sec 7.8.4
+ * & 7.9.14).
+ */
+#define PCIE_STD_NUM_TLP_HEADERLOG	4
+#define PCIE_STD_MAX_TLP_PREFIXLOG	4
+
 struct pci_dev;
 
 struct pcie_tlp_log {
-	u32 dw[4];
+	u32 dw[PCIE_STD_NUM_TLP_HEADERLOG];
+	u32 prefix[PCIE_STD_MAX_TLP_PREFIXLOG];
 };
 
 struct aer_capability_regs {
@@ -37,8 +45,6 @@ struct aer_capability_regs {
 	u16 uncor_err_source;
 };
 
-int pcie_read_tlp_log(struct pci_dev *dev, int where, struct pcie_tlp_log *log);
-
 #if defined(CONFIG_PCIEAER)
 int pci_aer_clear_nonfatal_status(struct pci_dev *dev);
 int pcie_aer_is_native(struct pci_dev *dev);
@@ -26,6 +26,7 @@ struct of_pci_range {
 		u64 bus_addr;
 	};
 	u64 cpu_addr;
+	u64 parent_bus_addr;
 	u64 size;
 	u32 flags;
 };
@@ -45,6 +45,10 @@ struct pci_ecam_ops {
 	unsigned int			bus_shift;
 	struct pci_ops			pci_ops;
 	int				(*init)(struct pci_config_window *);
+	int				(*enable_device)(struct pci_host_bridge *,
+							 struct pci_dev *);
+	void				(*disable_device)(struct pci_host_bridge *,
+							  struct pci_dev *);
 };
 
 /*
@@ -157,7 +157,7 @@ struct pci_epf {
 	struct device		dev;
 	const char		*name;
 	struct pci_epf_header	*header;
-	struct pci_epf_bar	bar[6];
+	struct pci_epf_bar	bar[PCI_STD_NUM_BARS];
 	u8			msi_interrupts;
 	u16			msix_interrupts;
 	u8			func_no;
@@ -174,7 +174,7 @@ struct pci_epf {
 	/* Below members are to attach secondary EPC to an endpoint function */
 	struct pci_epc		*sec_epc;
 	struct list_head	sec_epc_list;
-	struct pci_epf_bar	sec_epc_bar[6];
+	struct pci_epf_bar	sec_epc_bar[PCI_STD_NUM_BARS];
 	u8			sec_epc_func_no;
 	struct config_group	*group;
 	unsigned int		is_bound;
@@ -407,7 +407,7 @@ struct pci_dev {
					   supported from root to here */
#endif
	unsigned int	pasid_no_tlp:1;		/* PASID works without TLP Prefix */
-	unsigned int	eetlp_prefix_path:1;	/* End-to-End TLP Prefix */
+	unsigned int	eetlp_prefix_max:3;	/* Max # of End-End TLP Prefixes, 0=not supported */

	pci_channel_state_t error_state;	/* Current connectivity state */
	struct device	dev;			/* Generic device interface */

@@ -595,6 +595,8 @@ struct pci_host_bridge {
	u8 (*swizzle_irq)(struct pci_dev *, u8 *); /* Platform IRQ swizzler */
	int (*map_irq)(const struct pci_dev *, u8, u8);
	void (*release_fn)(struct pci_host_bridge *);
+	int (*enable_device)(struct pci_host_bridge *bridge, struct pci_dev *dev);
+	void (*disable_device)(struct pci_host_bridge *bridge, struct pci_dev *dev);
	void *release_data;
	unsigned int	ignore_reset_delay:1;	/* For entire hierarchy */
	unsigned int	no_ext_tags:1;		/* No Extended Tags */

@@ -2311,6 +2313,7 @@ static inline void pci_fixup_device(enum pci_fixup_pass pass,
					    struct pci_dev *dev) { }
#endif

int pcim_intx(struct pci_dev *pdev, int enabled);
int pcim_request_all_regions(struct pci_dev *pdev, const char *name);
void __iomem *pcim_iomap(struct pci_dev *pdev, int bar, unsigned long maxlen);
void __iomem *pcim_iomap_region(struct pci_dev *pdev, int bar,

@@ -533,7 +533,7 @@
#define  PCI_EXP_DEVSTA_TRPND	0x0020	/* Transactions Pending */
#define PCI_CAP_EXP_RC_ENDPOINT_SIZEOF_V1	12	/* v1 endpoints without link end here */
#define PCI_EXP_LNKCAP		0x0c	/* Link Capabilities */
-#define  PCI_EXP_LNKCAP_SLS	0x0000000f /* Supported Link Speeds */
+#define  PCI_EXP_LNKCAP_SLS	0x0000000f /* Max Link Speed (prior to PCIe r3.0: Supported Link Speeds) */
#define  PCI_EXP_LNKCAP_SLS_2_5GB 0x00000001 /* LNKCAP2 SLS Vector bit 0 */
#define  PCI_EXP_LNKCAP_SLS_5_0GB 0x00000002 /* LNKCAP2 SLS Vector bit 1 */
#define  PCI_EXP_LNKCAP_SLS_8_0GB 0x00000003 /* LNKCAP2 SLS Vector bit 2 */

@@ -665,6 +665,7 @@
#define  PCI_EXP_DEVCAP2_OBFF_MSG	0x00040000 /* New message signaling */
#define  PCI_EXP_DEVCAP2_OBFF_WAKE	0x00080000 /* Re-use WAKE# for OBFF */
#define  PCI_EXP_DEVCAP2_EE_PREFIX	0x00200000 /* End-End TLP Prefix */
+#define  PCI_EXP_DEVCAP2_EE_PREFIX_MAX	0x00c00000 /* Max End-End TLP Prefixes */
#define PCI_EXP_DEVCTL2		0x28	/* Device Control 2 */
#define  PCI_EXP_DEVCTL2_COMP_TIMEOUT	0x000f	/* Completion Timeout Value */
#define  PCI_EXP_DEVCTL2_COMP_TMOUT_DIS	0x0010	/* Completion Timeout Disable */

@@ -789,10 +790,11 @@
					    /* Same bits as above */
#define PCI_ERR_CAP		0x18	/* Advanced Error Capabilities & Ctrl*/
#define  PCI_ERR_CAP_FEP(x)	((x) & 0x1f) /* First Error Pointer */
-#define  PCI_ERR_CAP_ECRC_GENC	0x00000020 /* ECRC Generation Capable */
-#define  PCI_ERR_CAP_ECRC_GENE	0x00000040 /* ECRC Generation Enable */
-#define  PCI_ERR_CAP_ECRC_CHKC	0x00000080 /* ECRC Check Capable */
-#define  PCI_ERR_CAP_ECRC_CHKE	0x00000100 /* ECRC Check Enable */
+#define  PCI_ERR_CAP_ECRC_GENC	0x00000020 /* ECRC Generation Capable */
+#define  PCI_ERR_CAP_ECRC_GENE	0x00000040 /* ECRC Generation Enable */
+#define  PCI_ERR_CAP_ECRC_CHKC	0x00000080 /* ECRC Check Capable */
+#define  PCI_ERR_CAP_ECRC_CHKE	0x00000100 /* ECRC Check Enable */
+#define  PCI_ERR_CAP_PREFIX_LOG_PRESENT	0x00000800 /* TLP Prefix Log Present */
#define PCI_ERR_HEADER_LOG	0x1c	/* Header Log Register (16 bytes) */
#define PCI_ERR_ROOT_COMMAND	0x2c	/* Root Error Command */
#define  PCI_ERR_ROOT_CMD_COR_EN	0x00000001 /* Correctable Err Reporting Enable */

@@ -808,6 +810,7 @@
#define  PCI_ERR_ROOT_FATAL_RCV		0x00000040 /* Fatal Received */
#define  PCI_ERR_ROOT_AER_IRQ		0xf8000000 /* Advanced Error Interrupt Message Number */
#define PCI_ERR_ROOT_ERR_SRC	0x34	/* Error Source Identification */
+#define PCI_ERR_PREFIX_LOG	0x38	/* TLP Prefix LOG Register (up to 16 bytes) */

/* Virtual Channel */
#define PCI_VC_PORT_CAP1	0x04

@@ -1001,9 +1004,6 @@
#define PCI_ACS_CTRL		0x06	/* ACS Control Register */
#define PCI_ACS_EGRESS_CTL_V	0x08	/* ACS Egress Control Vector */

-#define PCI_VSEC_HDR		4	/* extended cap - vendor-specific */
-#define  PCI_VSEC_HDR_LEN_SHIFT	20	/* shift for length field */
-
/* SATA capability */
#define PCI_SATA_REGS		4	/* SATA REGs specifier */
#define PCI_SATA_REGS_MASK	0xF	/* location - BAR#/inline */

@@ -20,6 +20,7 @@
#define PCITEST_MSIX		_IOW('P', 0x7, int)
#define PCITEST_SET_IRQTYPE	_IOW('P', 0x8, int)
#define PCITEST_GET_IRQTYPE	_IO('P', 0x9)
+#define PCITEST_BARS		_IO('P', 0xa)
#define PCITEST_CLEAR_IRQ	_IO('P', 0x10)

#define PCITEST_FLAGS_USE_DMA	0x00000001

@@ -1 +0,0 @@
pcitest-y += pcitest.o

@@ -1,58 +0,0 @@
# SPDX-License-Identifier: GPL-2.0
include ../scripts/Makefile.include

bindir ?= /usr/bin

ifeq ($(srctree),)
srctree := $(patsubst %/,%,$(dir $(CURDIR)))
srctree := $(patsubst %/,%,$(dir $(srctree)))
endif

# Do not use make's built-in rules
# (this improves performance and avoids hard-to-debug behaviour);
MAKEFLAGS += -r

CFLAGS += -O2 -Wall -g -D_GNU_SOURCE -I$(OUTPUT)include

ALL_TARGETS := pcitest
ALL_PROGRAMS := $(patsubst %,$(OUTPUT)%,$(ALL_TARGETS))

SCRIPTS := pcitest.sh

all: $(ALL_PROGRAMS)

export srctree OUTPUT CC LD CFLAGS
include $(srctree)/tools/build/Makefile.include

#
# We need the following to be outside of kernel tree
#
$(OUTPUT)include/linux/: ../../include/uapi/linux/
	mkdir -p $(OUTPUT)include/linux/ 2>&1 || true
	ln -sf $(CURDIR)/../../include/uapi/linux/pcitest.h $@

prepare: $(OUTPUT)include/linux/

PCITEST_IN := $(OUTPUT)pcitest-in.o
$(PCITEST_IN): prepare FORCE
	$(Q)$(MAKE) $(build)=pcitest
$(OUTPUT)pcitest: $(PCITEST_IN)
	$(QUIET_LINK)$(CC) $(CFLAGS) $(LDFLAGS) $< -o $@

clean:
	rm -f $(ALL_PROGRAMS)
	rm -rf $(OUTPUT)include/
	find $(or $(OUTPUT),.) -name '*.o' -delete -o -name '\.*.cmd' -delete -o -name '\.*.d' -delete

install: $(ALL_PROGRAMS)
	install -d -m 755 $(DESTDIR)$(bindir); \
	for program in $(ALL_PROGRAMS); do \
		install $$program $(DESTDIR)$(bindir); \
	done; \
	for script in $(SCRIPTS); do \
		install $$script $(DESTDIR)$(bindir); \
	done

FORCE:

.PHONY: all install clean FORCE prepare

@@ -1,250 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/**
 * Userspace PCI Endpoint Test Module
 *
 * Copyright (C) 2017 Texas Instruments
 * Author: Kishon Vijay Abraham I <kishon@ti.com>
 */

#include <errno.h>
#include <fcntl.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>

#include <linux/pcitest.h>

static char *result[] = { "NOT OKAY", "OKAY" };
static char *irq[] = { "LEGACY", "MSI", "MSI-X" };

struct pci_test {
	char		*device;
	char		barnum;
	bool		legacyirq;
	unsigned int	msinum;
	unsigned int	msixnum;
	int		irqtype;
	bool		set_irqtype;
	bool		get_irqtype;
	bool		clear_irq;
	bool		read;
	bool		write;
	bool		copy;
	unsigned long	size;
	bool		use_dma;
};

static int run_test(struct pci_test *test)
{
	struct pci_endpoint_test_xfer_param param = {};
	int ret = -EINVAL;
	int fd;

	fd = open(test->device, O_RDWR);
	if (fd < 0) {
		perror("can't open PCI Endpoint Test device");
		return -ENODEV;
	}

	if (test->barnum >= 0 && test->barnum <= 5) {
		ret = ioctl(fd, PCITEST_BAR, test->barnum);
		fprintf(stdout, "BAR%d:\t\t", test->barnum);
		if (ret < 0)
			fprintf(stdout, "TEST FAILED\n");
		else
			fprintf(stdout, "%s\n", result[ret]);
	}

	if (test->set_irqtype) {
		ret = ioctl(fd, PCITEST_SET_IRQTYPE, test->irqtype);
		fprintf(stdout, "SET IRQ TYPE TO %s:\t\t", irq[test->irqtype]);
		if (ret < 0)
			fprintf(stdout, "FAILED\n");
		else
			fprintf(stdout, "%s\n", result[ret]);
	}

	if (test->get_irqtype) {
		ret = ioctl(fd, PCITEST_GET_IRQTYPE);
		fprintf(stdout, "GET IRQ TYPE:\t\t");
		if (ret < 0)
			fprintf(stdout, "FAILED\n");
		else
			fprintf(stdout, "%s\n", irq[ret]);
	}

	if (test->clear_irq) {
		ret = ioctl(fd, PCITEST_CLEAR_IRQ);
		fprintf(stdout, "CLEAR IRQ:\t\t");
		if (ret < 0)
			fprintf(stdout, "FAILED\n");
		else
			fprintf(stdout, "%s\n", result[ret]);
	}

	if (test->legacyirq) {
		ret = ioctl(fd, PCITEST_LEGACY_IRQ, 0);
		fprintf(stdout, "LEGACY IRQ:\t");
		if (ret < 0)
			fprintf(stdout, "TEST FAILED\n");
		else
			fprintf(stdout, "%s\n", result[ret]);
	}

	if (test->msinum > 0 && test->msinum <= 32) {
		ret = ioctl(fd, PCITEST_MSI, test->msinum);
		fprintf(stdout, "MSI%u:\t\t", test->msinum);
		if (ret < 0)
			fprintf(stdout, "TEST FAILED\n");
		else
			fprintf(stdout, "%s\n", result[ret]);
	}

	if (test->msixnum > 0 && test->msixnum <= 2048) {
		ret = ioctl(fd, PCITEST_MSIX, test->msixnum);
		fprintf(stdout, "MSI-X%u:\t\t", test->msixnum);
		if (ret < 0)
			fprintf(stdout, "TEST FAILED\n");
		else
			fprintf(stdout, "%s\n", result[ret]);
	}

	if (test->write) {
		param.size = test->size;
		if (test->use_dma)
			param.flags = PCITEST_FLAGS_USE_DMA;
		ret = ioctl(fd, PCITEST_WRITE, &param);
		fprintf(stdout, "WRITE (%7lu bytes):\t\t", test->size);
		if (ret < 0)
			fprintf(stdout, "TEST FAILED\n");
		else
			fprintf(stdout, "%s\n", result[ret]);
	}

	if (test->read) {
		param.size = test->size;
		if (test->use_dma)
			param.flags = PCITEST_FLAGS_USE_DMA;
		ret = ioctl(fd, PCITEST_READ, &param);
		fprintf(stdout, "READ (%7lu bytes):\t\t", test->size);
		if (ret < 0)
			fprintf(stdout, "TEST FAILED\n");
		else
			fprintf(stdout, "%s\n", result[ret]);
	}

	if (test->copy) {
		param.size = test->size;
		if (test->use_dma)
			param.flags = PCITEST_FLAGS_USE_DMA;
		ret = ioctl(fd, PCITEST_COPY, &param);
		fprintf(stdout, "COPY (%7lu bytes):\t\t", test->size);
		if (ret < 0)
			fprintf(stdout, "TEST FAILED\n");
		else
			fprintf(stdout, "%s\n", result[ret]);
	}

	fflush(stdout);
	close(fd);
	return (ret < 0) ? ret : 1 - ret; /* return 0 if test succeeded */
}

int main(int argc, char **argv)
{
	int c;
	struct pci_test *test;

	test = calloc(1, sizeof(*test));
	if (!test) {
		perror("Fail to allocate memory for pci_test\n");
		return -ENOMEM;
	}

	/* since '0' is a valid BAR number, initialize it to -1 */
	test->barnum = -1;

	/* set default size as 100KB */
	test->size = 0x19000;

	/* set default endpoint device */
	test->device = "/dev/pci-endpoint-test.0";

	while ((c = getopt(argc, argv, "D:b:m:x:i:deIlhrwcs:")) != EOF)
	switch (c) {
	case 'D':
		test->device = optarg;
		continue;
	case 'b':
		test->barnum = atoi(optarg);
		if (test->barnum < 0 || test->barnum > 5)
			goto usage;
		continue;
	case 'l':
		test->legacyirq = true;
		continue;
	case 'm':
		test->msinum = atoi(optarg);
		if (test->msinum < 1 || test->msinum > 32)
			goto usage;
		continue;
	case 'x':
		test->msixnum = atoi(optarg);
		if (test->msixnum < 1 || test->msixnum > 2048)
			goto usage;
		continue;
	case 'i':
		test->irqtype = atoi(optarg);
		if (test->irqtype < 0 || test->irqtype > 2)
			goto usage;
		test->set_irqtype = true;
		continue;
	case 'I':
		test->get_irqtype = true;
		continue;
	case 'r':
		test->read = true;
		continue;
	case 'w':
		test->write = true;
		continue;
	case 'c':
		test->copy = true;
		continue;
	case 'e':
		test->clear_irq = true;
		continue;
	case 's':
		test->size = strtoul(optarg, NULL, 0);
		continue;
	case 'd':
		test->use_dma = true;
		continue;
	case 'h':
	default:
usage:
		fprintf(stderr,
			"usage: %s [options]\n"
			"Options:\n"
			"\t-D <dev>		PCI endpoint test device {default: /dev/pci-endpoint-test.0}\n"
			"\t-b <bar num>	BAR test (bar number between 0..5)\n"
			"\t-m <msi num>	MSI test (msi number between 1..32)\n"
			"\t-x <msix num>	\tMSI-X test (msix number between 1..2048)\n"
			"\t-i <irq type>	\tSet IRQ type (0 - Legacy, 1 - MSI, 2 - MSI-X)\n"
			"\t-e			Clear IRQ\n"
			"\t-I			Get current IRQ type configured\n"
			"\t-d			Use DMA\n"
			"\t-l			Legacy IRQ test\n"
			"\t-r			Read buffer test\n"
			"\t-w			Write buffer test\n"
			"\t-c			Copy buffer test\n"
			"\t-s <size>		Size of buffer {default: 100KB}\n"
			"\t-h			Print this help message\n",
			argv[0]);
		return -EINVAL;
	}

	return run_test(test);
}

@@ -1,72 +0,0 @@
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0

echo "BAR tests"
echo

bar=0

while [ $bar -lt 6 ]
do
	pcitest -b $bar
	bar=`expr $bar + 1`
done
echo

echo "Interrupt tests"
echo

pcitest -i 0
pcitest -l

pcitest -i 1
msi=1

while [ $msi -lt 33 ]
do
	pcitest -m $msi
	msi=`expr $msi + 1`
done
echo

pcitest -i 2
msix=1

while [ $msix -lt 2049 ]
do
	pcitest -x $msix
	msix=`expr $msix + 1`
done
echo

echo "Read Tests"
echo

pcitest -i 1

pcitest -r -s 1
pcitest -r -s 1024
pcitest -r -s 1025
pcitest -r -s 1024000
pcitest -r -s 1024001
echo

echo "Write Tests"
echo

pcitest -w -s 1
pcitest -w -s 1024
pcitest -w -s 1025
pcitest -w -s 1024000
pcitest -w -s 1024001
echo

echo "Copy Tests"
echo

pcitest -c -s 1
pcitest -c -s 1024
pcitest -c -s 1025
pcitest -c -s 1024000
pcitest -c -s 1024001
echo

@@ -72,6 +72,7 @@ TARGETS += net/packetdrill
TARGETS += net/rds
TARGETS += net/tcp_ao
TARGETS += nsfs
+TARGETS += pci_endpoint
TARGETS += pcie_bwctrl
TARGETS += perf_events
TARGETS += pidfd

@@ -0,0 +1,2 @@
# SPDX-License-Identifier: GPL-2.0-only
pci_endpoint_test

@@ -0,0 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
CFLAGS += -O2 -Wl,-no-as-needed -Wall $(KHDR_INCLUDES)
LDFLAGS += -lrt -lpthread -lm

TEST_GEN_PROGS = pci_endpoint_test

include ../lib.mk

@@ -0,0 +1,4 @@
CONFIG_PCI_ENDPOINT=y
CONFIG_PCI_ENDPOINT_CONFIGFS=y
CONFIG_PCI_EPF_TEST=m
CONFIG_PCI_ENDPOINT_TEST=m

@@ -0,0 +1,221 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * Kselftest for PCI Endpoint Subsystem
 *
 * Copyright (c) 2022 Samsung Electronics Co., Ltd.
 *             https://www.samsung.com
 * Author: Aman Gupta <aman1.gupta@samsung.com>
 *
 * Copyright (c) 2024, Linaro Ltd.
 * Author: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
 */

#include <errno.h>
#include <fcntl.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>

#include "../../../../include/uapi/linux/pcitest.h"

#include "../kselftest_harness.h"

#define pci_ep_ioctl(cmd, arg)			\
({						\
	ret = ioctl(self->fd, cmd, arg);	\
	ret = ret < 0 ? -errno : 0;		\
})

static const char *test_device = "/dev/pci-endpoint-test.0";
static const unsigned long test_size[5] = { 1, 1024, 1025, 1024000, 1024001 };

FIXTURE(pci_ep_bar)
{
	int fd;
};

FIXTURE_SETUP(pci_ep_bar)
{
	self->fd = open(test_device, O_RDWR);

	ASSERT_NE(-1, self->fd) TH_LOG("Can't open PCI Endpoint Test device");
}

FIXTURE_TEARDOWN(pci_ep_bar)
{
	close(self->fd);
}

FIXTURE_VARIANT(pci_ep_bar)
{
	int barno;
};

FIXTURE_VARIANT_ADD(pci_ep_bar, BAR0) { .barno = 0 };
FIXTURE_VARIANT_ADD(pci_ep_bar, BAR1) { .barno = 1 };
FIXTURE_VARIANT_ADD(pci_ep_bar, BAR2) { .barno = 2 };
FIXTURE_VARIANT_ADD(pci_ep_bar, BAR3) { .barno = 3 };
FIXTURE_VARIANT_ADD(pci_ep_bar, BAR4) { .barno = 4 };
FIXTURE_VARIANT_ADD(pci_ep_bar, BAR5) { .barno = 5 };

TEST_F(pci_ep_bar, BAR_TEST)
{
	int ret;

	pci_ep_ioctl(PCITEST_BAR, variant->barno);
	EXPECT_FALSE(ret) TH_LOG("Test failed for BAR%d", variant->barno);
}

FIXTURE(pci_ep_basic)
{
	int fd;
};

FIXTURE_SETUP(pci_ep_basic)
{
	self->fd = open(test_device, O_RDWR);

	ASSERT_NE(-1, self->fd) TH_LOG("Can't open PCI Endpoint Test device");
}

FIXTURE_TEARDOWN(pci_ep_basic)
{
	close(self->fd);
}

TEST_F(pci_ep_basic, CONSECUTIVE_BAR_TEST)
{
	int ret;

	pci_ep_ioctl(PCITEST_BARS, 0);
	EXPECT_FALSE(ret) TH_LOG("Consecutive BAR test failed");
}

TEST_F(pci_ep_basic, LEGACY_IRQ_TEST)
{
	int ret;

	pci_ep_ioctl(PCITEST_SET_IRQTYPE, 0);
	ASSERT_EQ(0, ret) TH_LOG("Can't set Legacy IRQ type");

	pci_ep_ioctl(PCITEST_LEGACY_IRQ, 0);
	EXPECT_FALSE(ret) TH_LOG("Test failed for Legacy IRQ");
}

TEST_F(pci_ep_basic, MSI_TEST)
{
	int ret, i;

	pci_ep_ioctl(PCITEST_SET_IRQTYPE, 1);
	ASSERT_EQ(0, ret) TH_LOG("Can't set MSI IRQ type");

	for (i = 1; i <= 32; i++) {
		pci_ep_ioctl(PCITEST_MSI, i);
		EXPECT_FALSE(ret) TH_LOG("Test failed for MSI%d", i);
	}
}

TEST_F(pci_ep_basic, MSIX_TEST)
{
	int ret, i;

	pci_ep_ioctl(PCITEST_SET_IRQTYPE, 2);
	ASSERT_EQ(0, ret) TH_LOG("Can't set MSI-X IRQ type");

	for (i = 1; i <= 2048; i++) {
		pci_ep_ioctl(PCITEST_MSIX, i);
		EXPECT_FALSE(ret) TH_LOG("Test failed for MSI-X%d", i);
	}
}

FIXTURE(pci_ep_data_transfer)
{
	int fd;
};

FIXTURE_SETUP(pci_ep_data_transfer)
{
	self->fd = open(test_device, O_RDWR);

	ASSERT_NE(-1, self->fd) TH_LOG("Can't open PCI Endpoint Test device");
}

FIXTURE_TEARDOWN(pci_ep_data_transfer)
{
	close(self->fd);
}

FIXTURE_VARIANT(pci_ep_data_transfer)
{
	bool use_dma;
};

FIXTURE_VARIANT_ADD(pci_ep_data_transfer, memcpy)
{
	.use_dma = false,
};

FIXTURE_VARIANT_ADD(pci_ep_data_transfer, dma)
{
	.use_dma = true,
};

TEST_F(pci_ep_data_transfer, READ_TEST)
{
	struct pci_endpoint_test_xfer_param param = {};
	int ret, i;

	if (variant->use_dma)
		param.flags = PCITEST_FLAGS_USE_DMA;

	pci_ep_ioctl(PCITEST_SET_IRQTYPE, 1);
	ASSERT_EQ(0, ret) TH_LOG("Can't set MSI IRQ type");

	for (i = 0; i < ARRAY_SIZE(test_size); i++) {
		param.size = test_size[i];
		pci_ep_ioctl(PCITEST_READ, &param);
		EXPECT_FALSE(ret) TH_LOG("Test failed for size (%ld)",
					 test_size[i]);
	}
}

TEST_F(pci_ep_data_transfer, WRITE_TEST)
{
	struct pci_endpoint_test_xfer_param param = {};
	int ret, i;

	if (variant->use_dma)
		param.flags = PCITEST_FLAGS_USE_DMA;

	pci_ep_ioctl(PCITEST_SET_IRQTYPE, 1);
	ASSERT_EQ(0, ret) TH_LOG("Can't set MSI IRQ type");

	for (i = 0; i < ARRAY_SIZE(test_size); i++) {
		param.size = test_size[i];
		pci_ep_ioctl(PCITEST_WRITE, &param);
		EXPECT_FALSE(ret) TH_LOG("Test failed for size (%ld)",
					 test_size[i]);
	}
}

TEST_F(pci_ep_data_transfer, COPY_TEST)
{
	struct pci_endpoint_test_xfer_param param = {};
	int ret, i;

	if (variant->use_dma)
		param.flags = PCITEST_FLAGS_USE_DMA;

	pci_ep_ioctl(PCITEST_SET_IRQTYPE, 1);
	ASSERT_EQ(0, ret) TH_LOG("Can't set MSI IRQ type");

	for (i = 0; i < ARRAY_SIZE(test_size); i++) {
		param.size = test_size[i];
		pci_ep_ioctl(PCITEST_COPY, &param);
		EXPECT_FALSE(ret) TH_LOG("Test failed for size (%ld)",
					 test_size[i]);
	}
}
TEST_HARNESS_MAIN