Merge: crypto: iaa - Add Intel IAA Compression Accelerator crypto driver

MR: https://gitlab.com/redhat/centos-stream/src/kernel/centos-stream-9/-/merge_requests/3617

```
JIRA: https://issues.redhat.com/browse/RHEL-20145
Upstream Status: 19 commits merged into linux.git,
                 plus 1 RHEL-only commit with configs

Backport Intel Analytics Accelerator (IAA) Compression Accelerator
crypto driver with the upstream v6.7 code. All the commits apply
cleanly, no conflicts, no changes vs the upstream.

Signed-off-by: Vladis Dronov <vdronov@redhat.com>
```

Approved-by: Phil Auld <pauld@redhat.com>
Approved-by: Jerry Snitselaar <jsnitsel@redhat.com>
Approved-by: Herbert Xu <zxu@redhat.com>

Merged-by: Scott Weaver <scweaver@redhat.com>
This commit is contained in:
Scott Weaver 2024-02-22 19:56:51 -05:00
commit 5e66be03c9
31 changed files with 3973 additions and 31 deletions


@ -270,6 +270,12 @@ Description: Shows the operation capability bits displayed in bitmap format
correlates to the operations allowed. It's visible only
on platforms that support the capability.
What: /sys/bus/dsa/devices/wq<m>.<n>/driver_name
Date: Sept 8, 2023
KernelVersion: 6.7.0
Contact: dmaengine@vger.kernel.org
Description: Name of the driver to be bound to the wq.
What: /sys/bus/dsa/devices/engine<m>.<n>/group_id
Date: Oct 25, 2019
KernelVersion: 5.6.0


@ -0,0 +1,824 @@
.. SPDX-License-Identifier: GPL-2.0
=========================================
IAA Compression Accelerator Crypto Driver
=========================================
Tom Zanussi <tom.zanussi@linux.intel.com>
The IAA crypto driver supports compression/decompression compatible
with the DEFLATE compression standard described in RFC 1951, which is
the compression/decompression algorithm exported by this module.
The IAA hardware spec can be found here:
https://cdrdv2.intel.com/v1/dl/getContent/721858
The iaa_crypto driver is designed to work as a layer underneath
higher-level compression devices such as zswap.
Users can select IAA compress/decompress acceleration by specifying
one of the supported IAA compression algorithms in whatever facility
allows compression algorithms to be selected.
For example, a zswap device can select the IAA 'fixed' mode
represented by selecting the 'deflate-iaa' crypto compression
algorithm::
# echo deflate-iaa > /sys/module/zswap/parameters/compressor
This will tell zswap to use the IAA 'fixed' compression mode for all
compresses and decompresses.
Currently, there is only one compression mode available: 'fixed'
mode.
The 'fixed' compression mode implements the compression scheme
specified by RFC 1951 and is given the crypto algorithm name
'deflate-iaa'. (Because the IAA hardware has a 4k history-window
limitation, only buffers <= 4k, or that have been compressed using a
<= 4k history window, are technically compliant with the deflate spec,
which allows for a window of up to 32k. Because of this limitation,
the IAA fixed mode deflate algorithm is given its own algorithm name
rather than simply 'deflate').
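The hardware isn't needed to illustrate the window point: a raw DEFLATE stream produced with a 4k (2^12-byte) history window is still a valid RFC 1951 stream and is accepted by a standard decompressor using the full 32k window. A minimal software sketch using Python's zlib (illustrative only, unrelated to the IAA driver itself):

```python
import zlib

data = b"The quick brown fox jumps over the lazy dog. " * 200

# Raw DEFLATE (negative wbits) with a 4KB history window, i.e. 2**12 bytes.
comp = zlib.compressobj(6, zlib.DEFLATED, -12)
compressed = comp.compress(data) + comp.flush()

# A standard decompressor with the full 32KB window accepts the stream,
# since a smaller-window stream is still spec-compliant DEFLATE.
decomp = zlib.decompressobj(-15)
restored = decomp.decompress(compressed) + decomp.flush()

assert restored == data
print(len(data), "->", len(compressed), "bytes")
```

The reverse is not true in general: a stream that references back more than 4k cannot be handled by a 4k-window decompressor, which is why the driver uses the distinct 'deflate-iaa' name.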
Config options and other setup
==============================
The IAA crypto driver is available via menuconfig using the following
path::
Cryptographic API -> Hardware crypto devices -> Support for Intel(R) IAA Compression Accelerator
In the configuration file, the option is called CONFIG_CRYPTO_DEV_IAA_CRYPTO.
The IAA crypto driver also supports statistics, which are available
via menuconfig using the following path::
Cryptographic API -> Hardware crypto devices -> Support for Intel(R) IAA Compression -> Enable Intel(R) IAA Compression Accelerator Statistics
In the configuration file, the option is called CONFIG_CRYPTO_DEV_IAA_CRYPTO_STATS.
The following config options should also be enabled::
CONFIG_IRQ_REMAP=y
CONFIG_INTEL_IOMMU=y
CONFIG_INTEL_IOMMU_SVM=y
CONFIG_PCI_ATS=y
CONFIG_PCI_PRI=y
CONFIG_PCI_PASID=y
CONFIG_INTEL_IDXD=m
CONFIG_INTEL_IDXD_SVM=y
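These options can be checked against the running kernel's config (e.g. /boot/config-$(uname -r), or zcat /proc/config.gz). The helper below is illustrative, not part of any existing tool; it simply reports which of the required options are missing from a config text:

```python
REQUIRED = [
    "CONFIG_IRQ_REMAP=y",
    "CONFIG_INTEL_IOMMU=y",
    "CONFIG_INTEL_IOMMU_SVM=y",
    "CONFIG_PCI_ATS=y",
    "CONFIG_PCI_PRI=y",
    "CONFIG_PCI_PASID=y",
    "CONFIG_INTEL_IDXD=m",
    "CONFIG_INTEL_IDXD_SVM=y",
]

def missing_options(config_text, required=REQUIRED):
    """Return the required CONFIG_* settings not present in config_text."""
    present = set(line.strip() for line in config_text.splitlines())

    def satisfied(opt):
        name, _, val = opt.partition("=")
        # CONFIG_INTEL_IDXD may be built in (=y) rather than modular (=m).
        return opt in present or (val == "m" and f"{name}=y" in present)

    return [opt for opt in required if not satisfied(opt)]

# Typically fed from /boot/config-$(uname -r) or zcat /proc/config.gz.
sample = "CONFIG_IRQ_REMAP=y\nCONFIG_INTEL_IDXD=y\n"
print(missing_options(sample))
```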
IAA is one of the first Intel accelerator IPs that can work in
conjunction with the Intel IOMMU. Multiple modes exist for testing.
Based on the IOMMU configuration, there are 3 modes::
- Scalable
- Legacy
- No IOMMU
Scalable mode
-------------
Scalable mode supports Shared Virtual Memory (SVM or SVA). It is
entered when using the kernel boot commandline::
intel_iommu=on,sm_on
with VT-d turned on in BIOS.
With scalable mode, both shared and dedicated workqueues are available
for use.
For scalable mode, the following BIOS settings should be enabled::
Socket Configuration > IIO Configuration > Intel VT for Directed I/O (VT-d) > Intel VT for Directed I/O
Socket Configuration > IIO Configuration > PCIe ENQCMD > ENQCMDS
Legacy mode
-----------
Legacy mode is entered when using the kernel boot commandline::
intel_iommu=off
or VT-d is not turned on in BIOS.
If you have booted into Linux and are not sure whether VT-d is on, run
"dmesg | grep -i dmar". If no DMAR devices are enumerated, VT-d is
most likely not enabled.
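The check above can also be scripted; this is just the "dmesg | grep -i dmar" heuristic expressed as a function, with a made-up sample log for illustration:

```python
def vtd_enabled(dmesg_text):
    """Heuristic from the text above: VT-d is likely enabled if DMAR
    entries are enumerated in the boot log."""
    return any("dmar" in line.lower() for line in dmesg_text.splitlines())

# Illustrative log fragments, not real output.
sample = "[ 0.01] DMAR: IOMMU enabled\n[ 0.02] DMAR: Host address width 46\n"
print(vtd_enabled(sample))                   # log with DMAR lines
print(vtd_enabled("[ 0.01] ACPI: done\n"))   # no DMAR lines
```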
With legacy mode, only dedicated workqueues are available for use.
No IOMMU mode
-------------
No IOMMU mode is entered when using the kernel boot commandline::
iommu=off
With no IOMMU mode, only dedicated workqueues are available for use.
Usage
=====
accel-config
------------
When loaded, the iaa_crypto driver automatically creates a default
configuration and enables it, and assigns default driver attributes.
If a different configuration or set of driver attributes is required,
the user must first disable the IAA devices and workqueues, reset the
configuration, and then re-register the deflate-iaa algorithm with the
crypto subsystem by removing and reinserting the iaa_crypto module.
The :ref:`iaa_disable_script` in the 'Use Cases'
section below can be used to disable the default configuration.
See :ref:`iaa_default_config` below for details of the default
configuration.
More likely than not, however, and because of the complexity and
configurability of the accelerator devices, the user will want to
configure the device and manually enable the desired devices and
workqueues.
The userspace tool that helps with this is called accel-config. Using
accel-config to configure the device, or loading a previously saved
config, is highly recommended. The device can be controlled via sysfs
directly, but this comes with the warning that you should do so ONLY
if you know exactly what you are doing. The following sections will
not cover the sysfs interface but assume you will be using accel-config.
The :ref:`iaa_sysfs_config` section in the appendix below can be
consulted for the sysfs interface details if interested.
The accel-config tool along with instructions for building it can be
found here:
https://github.com/intel/idxd-config/#readme
Typical usage
-------------
In order for the iaa_crypto module to actually do any
compression/decompression work on behalf of a facility, one or more
IAA workqueues need to be bound to the iaa_crypto driver.
For instance, here's an example of configuring an IAA workqueue and
binding it to the iaa_crypto driver (note that device names are
specified as 'iax' rather than 'iaa' - this is because upstream still
has the old 'iax' device naming in place) ::
# configure wq1.0
accel-config config-wq --group-id=0 --mode=dedicated --type=kernel --name="iaa_crypto" --driver-name="crypto" iax1/wq1.0
# enable IAA device iax1
accel-config enable-device iax1
# enable wq1.0 on IAX device iax1
accel-config enable-wq iax1/wq1.0
Whenever a new workqueue is bound to or unbound from the iaa_crypto
driver, the available workqueues are 'rebalanced' such that work
submitted from a particular CPU is given to the most appropriate
workqueue available. Current best practice is to configure and bind
at least one workqueue for each IAA device, but as long as there is at
least one workqueue configured and bound to any IAA device in the
system, the iaa_crypto driver will work, albeit most likely not as
efficiently.
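The driver's actual rebalancing code isn't reproduced here, but the effect visible in the debug output later in this document (nr_cpus 160, nr_iaa 8, cpus_per_iaa 20) can be modeled as a block assignment of CPUs to devices. A hypothetical sketch, with a made-up function name:

```python
def assign_cpu_to_iaa(cpu, nr_cpus, nr_iaa):
    """Map a CPU to an IAA device by dividing the CPUs into
    contiguous blocks, one block per device."""
    cpus_per_iaa = (nr_cpus + nr_iaa - 1) // nr_iaa  # ceiling division
    return min(cpu // cpus_per_iaa, nr_iaa - 1)

# With 160 CPUs and 8 IAA devices, each device serves a block of 20 CPUs.
for cpu in (0, 19, 20, 159):
    print(f"cpu {cpu} -> iaa {assign_cpu_to_iaa(cpu, 160, 8)}")
```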
The IAA crypto algorithm is operational, and compression and
decompression operations are fully enabled, following the successful
binding of the first IAA workqueue to the iaa_crypto driver.
Similarly, the IAA crypto algorithm is not operational, and
compression and decompression operations are disabled, following the
unbinding of the last IAA workqueue from the iaa_crypto driver.
As a result, the IAA crypto algorithms, and thus the IAA hardware, are
only available when one or more workqueues are bound to the iaa_crypto
driver.
When there are no IAA workqueues bound to the driver, the IAA crypto
algorithms can be unregistered by removing the module.
Driver attributes
-----------------
There are a couple of user-configurable driver attributes that can be
used to configure various modes of operation. They're listed below,
along with their default values. To set any of these attributes, echo
the appropriate values to the attribute file located under
/sys/bus/dsa/drivers/crypto/
The attribute settings at the time the IAA algorithms are registered
are captured in each algorithm's crypto_ctx and used for all compress
and decompress operations when using that algorithm.
The available attributes are:
- verify_compress
Toggle compression verification. If set, each compress will be
internally decompressed and the contents verified, returning error
codes if unsuccessful. This can be toggled with 0/1::
echo 0 > /sys/bus/dsa/drivers/crypto/verify_compress
The default setting is '1' - verify all compresses.
- sync_mode
Select the mode used to wait for completion of each compress and
decompress operation.
The crypto async interface support implemented by iaa_crypto
provides an implementation that satisfies the interface but does
so in a synchronous manner - it fills and submits the IDXD
descriptor and then loops around waiting for it to complete before
returning. This isn't a problem at the moment, since all existing
callers (e.g. zswap) wrap any asynchronous callees in a
synchronous wrapper anyway.
The iaa_crypto driver does however provide true asynchronous
support for callers that can make use of it. In this mode, it
fills and submits the IDXD descriptor, then returns immediately
with -EINPROGRESS. The caller can then either poll for completion
itself (which requires caller-specific code that nothing in the
upstream kernel currently implements) or go to sleep and wait for an
interrupt signaling completion. The latter mode is supported by
current users in the kernel, such as zswap, via synchronous wrappers.
Although supported, this mode is significantly slower than the
previously mentioned synchronous mode, which does the polling in the
iaa_crypto driver.
This mode can be enabled by writing 'async_irq' to the sync_mode
iaa_crypto driver attribute::
echo async_irq > /sys/bus/dsa/drivers/crypto/sync_mode
Async mode without interrupts (caller must poll) can be enabled by
writing 'async' to it::
echo async > /sys/bus/dsa/drivers/crypto/sync_mode
The mode that does the polling in the iaa_crypto driver can be
enabled by writing 'sync' to it::
echo sync > /sys/bus/dsa/drivers/crypto/sync_mode
The default mode is 'sync'.
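The three sync_mode settings can be modeled in ordinary code: submit the work, then either busy-poll for completion ('sync'), return and poll on the caller's own schedule ('async'), or sleep until a completion signal ('async_irq'). The toy model below uses a thread as a stand-in for the device; all names are illustrative, not driver APIs:

```python
import threading
import time

def submit(work, done, result):
    """Stand-in for filling and submitting an IDXD descriptor."""
    def device():
        time.sleep(0.01)        # pretend the hardware is busy
        result.append(work * 2)
        done.set()              # analogous to a completion interrupt
    threading.Thread(target=device).start()

def run(work, sync_mode="sync"):
    done, result = threading.Event(), []
    submit(work, done, result)
    if sync_mode == "sync":
        while not done.is_set():    # driver busy-polls for completion
            pass
    elif sync_mode == "async":
        while not done.is_set():    # caller polls on its own schedule
            time.sleep(0.001)
    else:                           # 'async_irq': sleep until "interrupt"
        done.wait()
    return result[0]

print(run(21, "sync"), run(21, "async"), run(21, "async_irq"))
```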
.. _iaa_default_config:
IAA Default Configuration
-------------------------
When the iaa_crypto driver is loaded, each IAA device has a single
work queue configured for it, with the following attributes::
mode "dedicated"
threshold 0
size Total WQ Size from WQCAP
priority 10
type IDXD_WQT_KERNEL
group 0
name "iaa_crypto"
driver_name "crypto"
The devices and workqueues are also enabled and therefore the driver
is ready to be used without any additional configuration.
The default driver attributes in effect when the driver is loaded are::
sync_mode "sync"
verify_compress 1
In order to change either the device/work queue or driver attributes,
the enabled devices and workqueues must first be disabled. In order
to have the new configuration applied to the deflate-iaa crypto
algorithm, it needs to be re-registered by removing and reinserting
the iaa_crypto module. The :ref:`iaa_disable_script` in the 'Use
Cases' section below can be used to disable the default configuration.
Statistics
==========
If the optional debugfs statistics support is enabled, the IAA crypto
driver will generate statistics which can be accessed in debugfs at::
# ls -al /sys/kernel/debug/iaa-crypto/
total 0
drwxr-xr-x 2 root root 0 Mar 3 09:35 .
drwx------ 47 root root 0 Mar 3 09:35 ..
-rw-r--r-- 1 root root 0 Mar 3 09:35 max_acomp_delay_ns
-rw-r--r-- 1 root root 0 Mar 3 09:35 max_adecomp_delay_ns
-rw-r--r-- 1 root root 0 Mar 3 09:35 max_comp_delay_ns
-rw-r--r-- 1 root root 0 Mar 3 09:35 max_decomp_delay_ns
-rw-r--r-- 1 root root 0 Mar 3 09:35 stats_reset
-rw-r--r-- 1 root root 0 Mar 3 09:35 total_comp_bytes_out
-rw-r--r-- 1 root root 0 Mar 3 09:35 total_comp_calls
-rw-r--r-- 1 root root 0 Mar 3 09:35 total_decomp_bytes_in
-rw-r--r-- 1 root root 0 Mar 3 09:35 total_decomp_calls
-rw-r--r-- 1 root root 0 Mar 3 09:35 wq_stats
Most of the above statistics are self-explanatory. The wq_stats file
shows per-wq stats, a set for each IAA device and wq, in addition to
some global stats::
# cat wq_stats
global stats:
total_comp_calls: 100
total_decomp_calls: 100
total_comp_bytes_out: 22800
total_decomp_bytes_in: 22800
total_completion_einval_errors: 0
total_completion_timeout_errors: 0
total_completion_comp_buf_overflow_errors: 0
iaa device:
id: 1
n_wqs: 1
comp_calls: 0
comp_bytes: 0
decomp_calls: 0
decomp_bytes: 0
wqs:
name: iaa_crypto
comp_calls: 0
comp_bytes: 0
decomp_calls: 0
decomp_bytes: 0
iaa device:
id: 3
n_wqs: 1
comp_calls: 0
comp_bytes: 0
decomp_calls: 0
decomp_bytes: 0
wqs:
name: iaa_crypto
comp_calls: 0
comp_bytes: 0
decomp_calls: 0
decomp_bytes: 0
iaa device:
id: 5
n_wqs: 1
comp_calls: 100
comp_bytes: 22800
decomp_calls: 100
decomp_bytes: 22800
wqs:
name: iaa_crypto
comp_calls: 100
comp_bytes: 22800
decomp_calls: 100
decomp_bytes: 22800
Writing 0 to 'stats_reset' resets all the stats, including the
per-device and per-wq stats::
# echo 0 > stats_reset
# cat wq_stats
global stats:
total_comp_calls: 0
total_decomp_calls: 0
total_comp_bytes_out: 0
total_decomp_bytes_in: 0
total_completion_einval_errors: 0
total_completion_timeout_errors: 0
total_completion_comp_buf_overflow_errors: 0
...
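Since wq_stats output is plain "name: value" text grouped by device, it is easy to post-process. A sketch of a parser, assuming the layout shown above (for simplicity, per-wq entries are folded into their device's dict):

```python
def parse_wq_stats(text):
    """Parse wq_stats-style output into a dict of global stats plus
    a list of per-device stat dicts."""
    stats = {"global": {}, "devices": []}
    current = stats["global"]
    for line in text.splitlines():
        line = line.strip()
        if line == "iaa device:":
            current = {}                    # start a new device section
            stats["devices"].append(current)
        elif ":" in line:
            key, _, value = line.partition(":")
            value = value.strip()
            if value.isdigit():             # keep numeric counters only
                current[key.strip()] = int(value)
    return stats

sample = """global stats:
total_comp_calls: 100
iaa device:
id: 5
comp_calls: 100
"""
parsed = parse_wq_stats(sample)
print(parsed["global"], parsed["devices"])
```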
Use cases
=========
Simple zswap test
-----------------
For this example, the kernel should be configured according to the
dedicated mode options described above, and zswap should be enabled as
well::
CONFIG_ZSWAP=y
This is a simple test that uses deflate-iaa as the compressor for a
swap (zswap) device. It sets up the zswap device and then uses the
memory_madvise program listed below to forcibly swap out and in a
specified number of pages, demonstrating both compress and decompress.
The zswap test expects the work queues for each IAA device on the
system to be configured properly as a kernel workqueue with a
workqueue driver_name of "crypto".
The first step is to make sure the iaa_crypto module is loaded::
modprobe iaa_crypto
If the IAA devices and workqueues haven't previously been disabled and
reconfigured, then the default configuration should be in place and no
further IAA configuration is necessary. See :ref:`iaa_default_config`
below for details of the default configuration.
If the default configuration is in place, you should see the iaa
devices and wq0s enabled::
# cat /sys/bus/dsa/devices/iax1/state
enabled
# cat /sys/bus/dsa/devices/iax1/wq1.0/state
enabled
To demonstrate that the following steps work as expected, these
commands can be used to enable debug output::
# echo -n 'module iaa_crypto +p' > /sys/kernel/debug/dynamic_debug/control
# echo -n 'module idxd +p' > /sys/kernel/debug/dynamic_debug/control
Use the following commands to enable zswap::
# echo 0 > /sys/module/zswap/parameters/enabled
# echo 50 > /sys/module/zswap/parameters/max_pool_percent
# echo deflate-iaa > /sys/module/zswap/parameters/compressor
# echo zsmalloc > /sys/module/zswap/parameters/zpool
# echo 1 > /sys/module/zswap/parameters/enabled
# echo 0 > /sys/module/zswap/parameters/same_filled_pages_enabled
# echo 100 > /proc/sys/vm/swappiness
# echo never > /sys/kernel/mm/transparent_hugepage/enabled
# echo 1 > /proc/sys/vm/overcommit_memory
Now you can run the zswap workload you want to measure. For example,
using the memory_madvise code below, the following command will swap
in and out 100 pages::
./memory_madvise 100
Allocating 100 pages to swap in/out
Swapping out 100 pages
Swapping in 100 pages
Swapped out and in 100 pages
You should see something like the following in the dmesg output::
[ 404.202972] idxd 0000:e7:02.0: iaa_comp_acompress: dma_map_sg, src_addr 223925c000, nr_sgs 1, req->src 00000000ee7cb5e6, req->slen 4096, sg_dma_len(sg) 4096
[ 404.202973] idxd 0000:e7:02.0: iaa_comp_acompress: dma_map_sg, dst_addr 21dadf8000, nr_sgs 1, req->dst 000000008d6acea8, req->dlen 4096, sg_dma_len(sg) 8192
[ 404.202975] idxd 0000:e7:02.0: iaa_compress: desc->src1_addr 223925c000, desc->src1_size 4096, desc->dst_addr 21dadf8000, desc->max_dst_size 4096, desc->src2_addr 2203543000, desc->src2_size 1568
[ 404.202981] idxd 0000:e7:02.0: iaa_compress_verify: (verify) desc->src1_addr 21dadf8000, desc->src1_size 228, desc->dst_addr 223925c000, desc->max_dst_size 4096, desc->src2_addr 0, desc->src2_size 0
...
Now that basic functionality has been demonstrated, the defaults can
be erased and replaced with a different configuration. To do that,
first disable zswap::
# echo lzo > /sys/module/zswap/parameters/compressor
# swapoff -a
# echo 0 > /sys/module/zswap/parameters/accept_threshold_percent
# echo 0 > /sys/module/zswap/parameters/max_pool_percent
# echo 0 > /sys/module/zswap/parameters/enabled
Then run the :ref:`iaa_disable_script` in the 'Use Cases' section
below to disable the default configuration.
Finally turn swap back on::
# swapon -a
Following all that, the IAA device(s) can now be re-configured and
enabled as desired for further testing. Below is one example.
The zswap test expects the work queues for each IAA device on the
system to be configured properly as a kernel workqueue with a
workqueue driver_name of "crypto".
The below script automatically does that::
#!/bin/bash
echo "IAA devices:"
lspci -d:0cfe
echo "# IAA devices:"
lspci -d:0cfe | wc -l
#
# count iaa instances
#
iaa_dev_id="0cfe"
num_iaa=$(lspci -d:${iaa_dev_id} | wc -l)
echo "Found ${num_iaa} IAA instances"
#
# disable iaa wqs and devices
#
echo "Disable IAA"
for ((i = 1; i < ${num_iaa} * 2; i += 2)); do
echo disable wq iax${i}/wq${i}.0
accel-config disable-wq iax${i}/wq${i}.0
echo disable iaa iax${i}
accel-config disable-device iax${i}
done
echo "End Disable IAA"
#
# configure iaa wqs and devices
#
echo "Configure IAA"
for ((i = 1; i < ${num_iaa} * 2; i += 2)); do
accel-config config-wq --group-id=0 --mode=dedicated --wq-size=128 --priority=10 --type=kernel --name="iaa_crypto" --driver-name="crypto" iax${i}/wq${i}.0
done
echo "End Configure IAA"
#
# enable iaa wqs and devices
#
echo "Enable IAA"
for ((i = 1; i < ${num_iaa} * 2; i += 2)); do
echo enable iaa iax${i}
accel-config enable-device iax${i}
echo enable wq iax${i}/wq${i}.0
accel-config enable-wq iax${i}/wq${i}.0
done
echo "End Enable IAA"
When the workqueues are bound to the iaa_crypto driver, you should
see something similar to the following in dmesg output if you've
enabled debug output (echo -n 'module iaa_crypto +p' >
/sys/kernel/debug/dynamic_debug/control)::
[ 60.752344] idxd 0000:f6:02.0: add_iaa_wq: added wq 000000004068d14d to iaa 00000000c9585ba2, n_wq 1
[ 60.752346] iaa_crypto: rebalance_wq_table: nr_nodes=2, nr_cpus 160, nr_iaa 8, cpus_per_iaa 20
[ 60.752347] iaa_crypto: rebalance_wq_table: iaa=0
[ 60.752349] idxd 0000:6a:02.0: request_iaa_wq: getting wq from iaa_device 0000000042d7bc52 (0)
[ 60.752350] idxd 0000:6a:02.0: request_iaa_wq: returning unused wq 00000000c8bb4452 (0) from iaa device 0000000042d7bc52 (0)
[ 60.752352] iaa_crypto: rebalance_wq_table: assigned wq for cpu=0, node=0 = wq 00000000c8bb4452
[ 60.752354] iaa_crypto: rebalance_wq_table: iaa=0
[ 60.752355] idxd 0000:6a:02.0: request_iaa_wq: getting wq from iaa_device 0000000042d7bc52 (0)
[ 60.752356] idxd 0000:6a:02.0: request_iaa_wq: returning unused wq 00000000c8bb4452 (0) from iaa device 0000000042d7bc52 (0)
[ 60.752358] iaa_crypto: rebalance_wq_table: assigned wq for cpu=1, node=0 = wq 00000000c8bb4452
[ 60.752359] iaa_crypto: rebalance_wq_table: iaa=0
[ 60.752360] idxd 0000:6a:02.0: request_iaa_wq: getting wq from iaa_device 0000000042d7bc52 (0)
[ 60.752361] idxd 0000:6a:02.0: request_iaa_wq: returning unused wq 00000000c8bb4452 (0) from iaa device 0000000042d7bc52 (0)
[ 60.752362] iaa_crypto: rebalance_wq_table: assigned wq for cpu=2, node=0 = wq 00000000c8bb4452
[ 60.752364] iaa_crypto: rebalance_wq_table: iaa=0
.
.
.
Once the workqueues and devices have been enabled, the IAA crypto
algorithms are enabled and available. When the IAA crypto algorithms
have been successfully enabled, you should see the following dmesg
output::
[ 64.893759] iaa_crypto: iaa_crypto_enable: iaa_crypto now ENABLED
Now run the following zswap-specific setup commands to have zswap use
the 'fixed' compression mode::
echo 0 > /sys/module/zswap/parameters/enabled
echo 50 > /sys/module/zswap/parameters/max_pool_percent
echo deflate-iaa > /sys/module/zswap/parameters/compressor
echo zsmalloc > /sys/module/zswap/parameters/zpool
echo 1 > /sys/module/zswap/parameters/enabled
echo 0 > /sys/module/zswap/parameters/same_filled_pages_enabled
echo 100 > /proc/sys/vm/swappiness
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo 1 > /proc/sys/vm/overcommit_memory
Finally, you can now run the zswap workload you want to measure. For
example, using the code below, the following command will swap in and
out 100 pages::
./memory_madvise 100
Allocating 100 pages to swap in/out
Swapping out 100 pages
Swapping in 100 pages
Swapped out and in 100 pages
You should see something like the following in the dmesg output if
you've enabled debug output (echo -n 'module iaa_crypto +p' >
/sys/kernel/debug/dynamic_debug/control)::
[ 404.202972] idxd 0000:e7:02.0: iaa_comp_acompress: dma_map_sg, src_addr 223925c000, nr_sgs 1, req->src 00000000ee7cb5e6, req->slen 4096, sg_dma_len(sg) 4096
[ 404.202973] idxd 0000:e7:02.0: iaa_comp_acompress: dma_map_sg, dst_addr 21dadf8000, nr_sgs 1, req->dst 000000008d6acea8, req->dlen 4096, sg_dma_len(sg) 8192
[ 404.202975] idxd 0000:e7:02.0: iaa_compress: desc->src1_addr 223925c000, desc->src1_size 4096, desc->dst_addr 21dadf8000, desc->max_dst_size 4096, desc->src2_addr 2203543000, desc->src2_size 1568
[ 404.202981] idxd 0000:e7:02.0: iaa_compress_verify: (verify) desc->src1_addr 21dadf8000, desc->src1_size 228, desc->dst_addr 223925c000, desc->max_dst_size 4096, desc->src2_addr 0, desc->src2_size 0
[ 409.203227] idxd 0000:e7:02.0: iaa_comp_adecompress: dma_map_sg, src_addr 21ddd8b100, nr_sgs 1, req->src 0000000084adab64, req->slen 228, sg_dma_len(sg) 228
[ 409.203235] idxd 0000:e7:02.0: iaa_comp_adecompress: dma_map_sg, dst_addr 21ee3dc000, nr_sgs 1, req->dst 000000004e2990d0, req->dlen 4096, sg_dma_len(sg) 4096
[ 409.203239] idxd 0000:e7:02.0: iaa_decompress: desc->src1_addr 21ddd8b100, desc->src1_size 228, desc->dst_addr 21ee3dc000, desc->max_dst_size 4096, desc->src2_addr 0, desc->src2_size 0
[ 409.203254] idxd 0000:e7:02.0: iaa_comp_adecompress: dma_map_sg, src_addr 21ddd8b100, nr_sgs 1, req->src 0000000084adab64, req->slen 228, sg_dma_len(sg) 228
[ 409.203256] idxd 0000:e7:02.0: iaa_comp_adecompress: dma_map_sg, dst_addr 21f1551000, nr_sgs 1, req->dst 000000004e2990d0, req->dlen 4096, sg_dma_len(sg) 4096
[ 409.203257] idxd 0000:e7:02.0: iaa_decompress: desc->src1_addr 21ddd8b100, desc->src1_size 228, desc->dst_addr 21f1551000, desc->max_dst_size 4096, desc->src2_addr 0, desc->src2_size 0
In order to unregister the IAA crypto algorithms, and register new
ones using different parameters, any users of the current algorithm
should be stopped and the IAA workqueues and devices disabled.
In the case of zswap, remove the IAA crypto algorithm as the
compressor and turn off swap (to remove all references to
iaa_crypto)::
echo lzo > /sys/module/zswap/parameters/compressor
swapoff -a
echo 0 > /sys/module/zswap/parameters/accept_threshold_percent
echo 0 > /sys/module/zswap/parameters/max_pool_percent
echo 0 > /sys/module/zswap/parameters/enabled
Once zswap is disabled and no longer using iaa_crypto, the IAA wqs and
devices can be disabled.
.. _iaa_disable_script:
IAA disable script
------------------
The below script automatically does that::
#!/bin/bash
echo "IAA devices:"
lspci -d:0cfe
echo "# IAA devices:"
lspci -d:0cfe | wc -l
#
# count iaa instances
#
iaa_dev_id="0cfe"
num_iaa=$(lspci -d:${iaa_dev_id} | wc -l)
echo "Found ${num_iaa} IAA instances"
#
# disable iaa wqs and devices
#
echo "Disable IAA"
for ((i = 1; i < ${num_iaa} * 2; i += 2)); do
echo disable wq iax${i}/wq${i}.0
accel-config disable-wq iax${i}/wq${i}.0
echo disable iaa iax${i}
accel-config disable-device iax${i}
done
echo "End Disable IAA"
Finally, at this point the iaa_crypto module can be removed, which
will unregister the current IAA crypto algorithms::
rmmod iaa_crypto
memory_madvise.c (gcc -o memory_madvise memory_madvise.c)::
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <linux/mman.h>

#ifndef MADV_PAGEOUT
#define MADV_PAGEOUT 21 /* force pages out immediately */
#endif

#define PG_SZ 4096

int main(int argc, char **argv)
{
    int i, nr_pages = 1;
    int64_t *dump_ptr;
    char *addr, *a;
    int loop = 1;

    if (argc > 1)
        nr_pages = atoi(argv[1]);

    printf("Allocating %d pages to swap in/out\n", nr_pages);

    /* allocate pages */
    addr = mmap(NULL, nr_pages * PG_SZ, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    *addr = 1;

    /* initialize data in page to all '*' chars */
    memset(addr, '*', nr_pages * PG_SZ);

    printf("Swapping out %d pages\n", nr_pages);

    /* Tell kernel to swap it out */
    madvise(addr, nr_pages * PG_SZ, MADV_PAGEOUT);

    while (loop > 0) {
        /* Wait for swap out to finish */
        sleep(5);

        a = addr;

        printf("Swapping in %d pages\n", nr_pages);

        /* Access the page ... this will swap it back in again */
        for (i = 0; i < nr_pages; i++) {
            if (a[0] != '*') {
                printf("Bad data from decompress!!!!!\n");

                dump_ptr = (int64_t *)a;
                for (int j = 0; j < 100; j++) {
                    printf(" page %d data: %#llx\n", i, (unsigned long long)*dump_ptr);
                    dump_ptr++;
                }
            }

            a += PG_SZ;
        }

        loop--;
    }

    printf("Swapped out and in %d pages\n", nr_pages);

    return 0;
}
Appendix
========
.. _iaa_sysfs_config:
IAA sysfs config interface
--------------------------
Below is a description of the IAA sysfs interface, which as mentioned
in the main document, should only be used if you know exactly what you
are doing. Even then, there's no compelling reason to use it directly
since accel-config can do everything the sysfs interface can and in
fact accel-config is based on it under the covers.
The 'IAA config path' is /sys/bus/dsa/devices and contains
subdirectories representing each IAA device, workqueue, engine, and
group. Note that in the sysfs interface, the IAA devices are actually
named using iax e.g. iax1, iax3, etc. (Note that IAA devices are the
odd-numbered devices; the even-numbered devices are DSA devices and
can be ignored for IAA).
The 'IAA device bind path' is /sys/bus/dsa/drivers/idxd/bind and is
the file that is written to enable an IAA device.
The 'IAA workqueue bind path' is /sys/bus/dsa/drivers/crypto/bind and
is the file that is written to enable an IAA workqueue.
Similarly /sys/bus/dsa/drivers/idxd/unbind and
/sys/bus/dsa/drivers/crypto/unbind are used to disable IAA devices and
workqueues.
The basic sequence of commands needed to set up the IAA devices and
workqueues is:
For each device::
1) Disable any workqueues enabled on the device. For example, to
disable workqueues 0 and 1 on IAA device 3::
# echo wq3.0 > /sys/bus/dsa/drivers/crypto/unbind
# echo wq3.1 > /sys/bus/dsa/drivers/crypto/unbind
2) Disable the device. For example to disable IAA device 3::
# echo iax3 > /sys/bus/dsa/drivers/idxd/unbind
3) configure the desired workqueues. For example, to configure
workqueue 3 on IAA device 3::
# echo dedicated > /sys/bus/dsa/devices/iax3/wq3.3/mode
# echo 128 > /sys/bus/dsa/devices/iax3/wq3.3/size
# echo 0 > /sys/bus/dsa/devices/iax3/wq3.3/group_id
# echo 10 > /sys/bus/dsa/devices/iax3/wq3.3/priority
# echo "kernel" > /sys/bus/dsa/devices/iax3/wq3.3/type
# echo "iaa_crypto" > /sys/bus/dsa/devices/iax3/wq3.3/name
# echo "crypto" > /sys/bus/dsa/devices/iax3/wq3.3/driver_name
4) Enable the device. For example to enable IAA device 3::
# echo iax3 > /sys/bus/dsa/drivers/idxd/bind
5) Enable the desired workqueues on the device. For example, to
enable workqueues 0 and 1 on IAA device 3::
# echo wq3.0 > /sys/bus/dsa/drivers/crypto/bind
# echo wq3.1 > /sys/bus/dsa/drivers/crypto/bind


@ -0,0 +1,20 @@
.. SPDX-License-Identifier: GPL-2.0
=================================
IAA (Intel Analytics Accelerator)
=================================
IAA provides hardware compression and decompression via the crypto
API.
.. toctree::
:maxdepth: 1
iaa-crypto
.. only:: subproject and html
Indices
=======
* :ref:`genindex`


@ -0,0 +1,20 @@
.. SPDX-License-Identifier: GPL-2.0
==============
Crypto Drivers
==============
Documentation for crypto drivers that may need more involved setup and
configuration.
.. toctree::
:maxdepth: 1
iaa/index
.. only:: subproject and html
Indices
=======
* :ref:`genindex`


@ -111,6 +111,7 @@ available subsections can be seen below.
zorro
hte/index
dpll
crypto/index
.. only:: subproject and html


@ -9671,6 +9671,13 @@ S: Supported
F: drivers/dma/idxd/*
F: include/uapi/linux/idxd.h
INTEL IAA CRYPTO DRIVER
M: Tom Zanussi <tom.zanussi@linux.intel.com>
L: linux-crypto@vger.kernel.org
S: Supported
F: Documentation/driver-api/crypto/iaa/iaa-crypto.rst
F: drivers/crypto/intel/iaa/*
INTEL IDLE DRIVER
M: Jacob Pan <jacob.jun.pan@linux.intel.com>
M: Len Brown <lenb@kernel.org>


@ -4757,6 +4757,16 @@ static const struct alg_test_desc alg_test_descs[] = {
.decomp = __VECS(deflate_decomp_tv_template)
}
}
}, {
.alg = "deflate-iaa",
.test = alg_test_comp,
.fips_allowed = 1,
.suite = {
.comp = {
.comp = __VECS(deflate_comp_tv_template),
.decomp = __VECS(deflate_decomp_tv_template)
}
}
}, {
.alg = "dh",
.test = alg_test_kpp,


@ -3,3 +3,4 @@
source "drivers/crypto/intel/keembay/Kconfig"
source "drivers/crypto/intel/ixp4xx/Kconfig"
source "drivers/crypto/intel/qat/Kconfig"
source "drivers/crypto/intel/iaa/Kconfig"


@ -3,3 +3,4 @@
obj-y += keembay/
obj-y += ixp4xx/
obj-$(CONFIG_CRYPTO_DEV_QAT) += qat/
obj-$(CONFIG_CRYPTO_DEV_IAA_CRYPTO) += iaa/


@ -0,0 +1,19 @@
config CRYPTO_DEV_IAA_CRYPTO
tristate "Support for Intel(R) IAA Compression Accelerator"
depends on CRYPTO_DEFLATE
depends on INTEL_IDXD
default n
help
This driver supports acceleration for compression and
decompression with the Intel Analytics Accelerator (IAA)
hardware using the cryptographic API. If you choose 'M'
here, the module will be called iaa_crypto.
config CRYPTO_DEV_IAA_CRYPTO_STATS
bool "Enable Intel(R) IAA Compression Accelerator Statistics"
depends on CRYPTO_DEV_IAA_CRYPTO
default n
help
Enable statistics for the IAA compression accelerator.
These include per-device and per-workqueue statistics in
addition to global driver statistics.


@ -0,0 +1,12 @@
# SPDX-License-Identifier: GPL-2.0
#
# Makefile for IAA crypto device drivers
#
ccflags-y += -I $(srctree)/drivers/dma/idxd -DDEFAULT_SYMBOL_NAMESPACE=IDXD
obj-$(CONFIG_CRYPTO_DEV_IAA_CRYPTO) := iaa_crypto.o
iaa_crypto-y := iaa_crypto_main.o iaa_crypto_comp_fixed.o
iaa_crypto-$(CONFIG_CRYPTO_DEV_IAA_CRYPTO_STATS) += iaa_crypto_stats.o


@ -0,0 +1,173 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 Intel Corporation. All rights rsvd. */
#ifndef __IAA_CRYPTO_H__
#define __IAA_CRYPTO_H__
#include <linux/crypto.h>
#include <linux/idxd.h>
#include <uapi/linux/idxd.h>
#define IDXD_SUBDRIVER_NAME "crypto"
#define IAA_DECOMP_ENABLE BIT(0)
#define IAA_DECOMP_FLUSH_OUTPUT BIT(1)
#define IAA_DECOMP_CHECK_FOR_EOB BIT(2)
#define IAA_DECOMP_STOP_ON_EOB BIT(3)
#define IAA_DECOMP_SUPPRESS_OUTPUT BIT(9)
#define IAA_COMP_FLUSH_OUTPUT BIT(1)
#define IAA_COMP_APPEND_EOB BIT(2)
#define IAA_COMPLETION_TIMEOUT 1000000
#define IAA_ANALYTICS_ERROR 0x0a
#define IAA_ERROR_DECOMP_BUF_OVERFLOW 0x0b
#define IAA_ERROR_COMP_BUF_OVERFLOW 0x19
#define IAA_ERROR_WATCHDOG_EXPIRED 0x24
#define IAA_COMP_MODES_MAX 2
#define FIXED_HDR 0x2
#define FIXED_HDR_SIZE 3
#define IAA_COMP_FLAGS (IAA_COMP_FLUSH_OUTPUT | \
IAA_COMP_APPEND_EOB)
#define IAA_DECOMP_FLAGS (IAA_DECOMP_ENABLE | \
IAA_DECOMP_FLUSH_OUTPUT | \
IAA_DECOMP_CHECK_FOR_EOB | \
IAA_DECOMP_STOP_ON_EOB)
/* Representation of IAA workqueue */
struct iaa_wq {
struct list_head list;
struct idxd_wq *wq;
int ref;
bool remove;
struct iaa_device *iaa_device;
u64 comp_calls;
u64 comp_bytes;
u64 decomp_calls;
u64 decomp_bytes;
};
struct iaa_device_compression_mode {
const char *name;
struct aecs_comp_table_record *aecs_comp_table;
struct aecs_decomp_table_record *aecs_decomp_table;
dma_addr_t aecs_comp_table_dma_addr;
dma_addr_t aecs_decomp_table_dma_addr;
};
/* Representation of IAA device with wqs, populated by probe */
struct iaa_device {
struct list_head list;
struct idxd_device *idxd;
struct iaa_device_compression_mode *compression_modes[IAA_COMP_MODES_MAX];
int n_wq;
struct list_head wqs;
u64 comp_calls;
u64 comp_bytes;
u64 decomp_calls;
u64 decomp_bytes;
};
struct wq_table_entry {
struct idxd_wq **wqs;
int max_wqs;
int n_wqs;
int cur_wq;
};
#define IAA_AECS_ALIGN 32
/*
* Analytics Engine Configuration and State (AECS) contains parameters and
* internal state of the analytics engine.
*/
struct aecs_comp_table_record {
u32 crc;
u32 xor_checksum;
u32 reserved0[5];
u32 num_output_accum_bits;
u8 output_accum[256];
u32 ll_sym[286];
u32 reserved1;
u32 reserved2;
u32 d_sym[30];
u32 reserved_padding[2];
} __packed;
/* AECS for decompress */
struct aecs_decomp_table_record {
u32 crc;
u32 xor_checksum;
u32 low_filter_param;
u32 high_filter_param;
u32 output_mod_idx;
u32 drop_init_decomp_out_bytes;
u32 reserved[36];
u32 output_accum_data[2];
u32 out_bits_valid;
u32 bit_off_indexing;
u32 input_accum_data[64];
u8 size_qw[32];
u32 decomp_state[1220];
} __packed;
int iaa_aecs_init_fixed(void);
void iaa_aecs_cleanup_fixed(void);
typedef int (*iaa_dev_comp_init_fn_t) (struct iaa_device_compression_mode *mode);
typedef int (*iaa_dev_comp_free_fn_t) (struct iaa_device_compression_mode *mode);
struct iaa_compression_mode {
const char *name;
u32 *ll_table;
int ll_table_size;
u32 *d_table;
int d_table_size;
u32 *header_table;
int header_table_size;
u16 gen_decomp_table_flags;
iaa_dev_comp_init_fn_t init;
iaa_dev_comp_free_fn_t free;
};
int add_iaa_compression_mode(const char *name,
const u32 *ll_table,
int ll_table_size,
const u32 *d_table,
int d_table_size,
const u8 *header_table,
int header_table_size,
u16 gen_decomp_table_flags,
iaa_dev_comp_init_fn_t init,
iaa_dev_comp_free_fn_t free);
void remove_iaa_compression_mode(const char *name);
enum iaa_mode {
IAA_MODE_FIXED,
};
struct iaa_compression_ctx {
enum iaa_mode mode;
bool verify_compress;
bool async_mode;
bool use_irq;
};
extern struct list_head iaa_devices;
extern struct mutex iaa_devices_lock;
#endif

View File

@@ -0,0 +1,92 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2021 Intel Corporation. All rights rsvd. */
#include "idxd.h"
#include "iaa_crypto.h"
/*
* Fixed Huffman tables the IAA hardware requires to implement RFC-1951.
*/
static const u32 fixed_ll_sym[286] = {
0x40030, 0x40031, 0x40032, 0x40033, 0x40034, 0x40035, 0x40036, 0x40037,
0x40038, 0x40039, 0x4003A, 0x4003B, 0x4003C, 0x4003D, 0x4003E, 0x4003F,
0x40040, 0x40041, 0x40042, 0x40043, 0x40044, 0x40045, 0x40046, 0x40047,
0x40048, 0x40049, 0x4004A, 0x4004B, 0x4004C, 0x4004D, 0x4004E, 0x4004F,
0x40050, 0x40051, 0x40052, 0x40053, 0x40054, 0x40055, 0x40056, 0x40057,
0x40058, 0x40059, 0x4005A, 0x4005B, 0x4005C, 0x4005D, 0x4005E, 0x4005F,
0x40060, 0x40061, 0x40062, 0x40063, 0x40064, 0x40065, 0x40066, 0x40067,
0x40068, 0x40069, 0x4006A, 0x4006B, 0x4006C, 0x4006D, 0x4006E, 0x4006F,
0x40070, 0x40071, 0x40072, 0x40073, 0x40074, 0x40075, 0x40076, 0x40077,
0x40078, 0x40079, 0x4007A, 0x4007B, 0x4007C, 0x4007D, 0x4007E, 0x4007F,
0x40080, 0x40081, 0x40082, 0x40083, 0x40084, 0x40085, 0x40086, 0x40087,
0x40088, 0x40089, 0x4008A, 0x4008B, 0x4008C, 0x4008D, 0x4008E, 0x4008F,
0x40090, 0x40091, 0x40092, 0x40093, 0x40094, 0x40095, 0x40096, 0x40097,
0x40098, 0x40099, 0x4009A, 0x4009B, 0x4009C, 0x4009D, 0x4009E, 0x4009F,
0x400A0, 0x400A1, 0x400A2, 0x400A3, 0x400A4, 0x400A5, 0x400A6, 0x400A7,
0x400A8, 0x400A9, 0x400AA, 0x400AB, 0x400AC, 0x400AD, 0x400AE, 0x400AF,
0x400B0, 0x400B1, 0x400B2, 0x400B3, 0x400B4, 0x400B5, 0x400B6, 0x400B7,
0x400B8, 0x400B9, 0x400BA, 0x400BB, 0x400BC, 0x400BD, 0x400BE, 0x400BF,
0x48190, 0x48191, 0x48192, 0x48193, 0x48194, 0x48195, 0x48196, 0x48197,
0x48198, 0x48199, 0x4819A, 0x4819B, 0x4819C, 0x4819D, 0x4819E, 0x4819F,
0x481A0, 0x481A1, 0x481A2, 0x481A3, 0x481A4, 0x481A5, 0x481A6, 0x481A7,
0x481A8, 0x481A9, 0x481AA, 0x481AB, 0x481AC, 0x481AD, 0x481AE, 0x481AF,
0x481B0, 0x481B1, 0x481B2, 0x481B3, 0x481B4, 0x481B5, 0x481B6, 0x481B7,
0x481B8, 0x481B9, 0x481BA, 0x481BB, 0x481BC, 0x481BD, 0x481BE, 0x481BF,
0x481C0, 0x481C1, 0x481C2, 0x481C3, 0x481C4, 0x481C5, 0x481C6, 0x481C7,
0x481C8, 0x481C9, 0x481CA, 0x481CB, 0x481CC, 0x481CD, 0x481CE, 0x481CF,
0x481D0, 0x481D1, 0x481D2, 0x481D3, 0x481D4, 0x481D5, 0x481D6, 0x481D7,
0x481D8, 0x481D9, 0x481DA, 0x481DB, 0x481DC, 0x481DD, 0x481DE, 0x481DF,
0x481E0, 0x481E1, 0x481E2, 0x481E3, 0x481E4, 0x481E5, 0x481E6, 0x481E7,
0x481E8, 0x481E9, 0x481EA, 0x481EB, 0x481EC, 0x481ED, 0x481EE, 0x481EF,
0x481F0, 0x481F1, 0x481F2, 0x481F3, 0x481F4, 0x481F5, 0x481F6, 0x481F7,
0x481F8, 0x481F9, 0x481FA, 0x481FB, 0x481FC, 0x481FD, 0x481FE, 0x481FF,
0x38000, 0x38001, 0x38002, 0x38003, 0x38004, 0x38005, 0x38006, 0x38007,
0x38008, 0x38009, 0x3800A, 0x3800B, 0x3800C, 0x3800D, 0x3800E, 0x3800F,
0x38010, 0x38011, 0x38012, 0x38013, 0x38014, 0x38015, 0x38016, 0x38017,
0x400C0, 0x400C1, 0x400C2, 0x400C3, 0x400C4, 0x400C5
};
static const u32 fixed_d_sym[30] = {
0x28000, 0x28001, 0x28002, 0x28003, 0x28004, 0x28005, 0x28006, 0x28007,
0x28008, 0x28009, 0x2800A, 0x2800B, 0x2800C, 0x2800D, 0x2800E, 0x2800F,
0x28010, 0x28011, 0x28012, 0x28013, 0x28014, 0x28015, 0x28016, 0x28017,
0x28018, 0x28019, 0x2801A, 0x2801B, 0x2801C, 0x2801D
};
static int init_fixed_mode(struct iaa_device_compression_mode *mode)
{
struct aecs_comp_table_record *comp_table = mode->aecs_comp_table;
u32 bfinal = 1;
u32 offset;
/* Configure aecs table using fixed Huffman table */
comp_table->crc = 0;
comp_table->xor_checksum = 0;
offset = comp_table->num_output_accum_bits / 8;
comp_table->output_accum[offset] = FIXED_HDR | bfinal;
comp_table->num_output_accum_bits = FIXED_HDR_SIZE;
return 0;
}
int iaa_aecs_init_fixed(void)
{
int ret;
ret = add_iaa_compression_mode("fixed",
fixed_ll_sym,
sizeof(fixed_ll_sym),
fixed_d_sym,
sizeof(fixed_d_sym),
NULL, 0, 0,
init_fixed_mode, NULL);
if (!ret)
pr_debug("IAA fixed compression mode initialized\n");
return ret;
}
void iaa_aecs_cleanup_fixed(void)
{
remove_iaa_compression_mode("fixed");
}

File diff suppressed because it is too large

View File

@@ -0,0 +1,312 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2021 Intel Corporation. All rights rsvd. */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/delay.h>
#include <linux/smp.h>
#include <uapi/linux/idxd.h>
#include <linux/idxd.h>
#include <linux/dmaengine.h>
#include "../../dma/idxd/idxd.h"
#include <linux/debugfs.h>
#include <crypto/internal/acompress.h>
#include "iaa_crypto.h"
#include "iaa_crypto_stats.h"
static u64 total_comp_calls;
static u64 total_decomp_calls;
static u64 total_sw_decomp_calls;
static u64 max_comp_delay_ns;
static u64 max_decomp_delay_ns;
static u64 max_acomp_delay_ns;
static u64 max_adecomp_delay_ns;
static u64 total_comp_bytes_out;
static u64 total_decomp_bytes_in;
static u64 total_completion_einval_errors;
static u64 total_completion_timeout_errors;
static u64 total_completion_comp_buf_overflow_errors;
static struct dentry *iaa_crypto_debugfs_root;
void update_total_comp_calls(void)
{
total_comp_calls++;
}
void update_total_comp_bytes_out(int n)
{
total_comp_bytes_out += n;
}
void update_total_decomp_calls(void)
{
total_decomp_calls++;
}
void update_total_sw_decomp_calls(void)
{
total_sw_decomp_calls++;
}
void update_total_decomp_bytes_in(int n)
{
total_decomp_bytes_in += n;
}
void update_completion_einval_errs(void)
{
total_completion_einval_errors++;
}
void update_completion_timeout_errs(void)
{
total_completion_timeout_errors++;
}
void update_completion_comp_buf_overflow_errs(void)
{
total_completion_comp_buf_overflow_errors++;
}
void update_max_comp_delay_ns(u64 start_time_ns)
{
u64 time_diff;
time_diff = ktime_get_ns() - start_time_ns;
if (time_diff > max_comp_delay_ns)
max_comp_delay_ns = time_diff;
}
void update_max_decomp_delay_ns(u64 start_time_ns)
{
u64 time_diff;
time_diff = ktime_get_ns() - start_time_ns;
if (time_diff > max_decomp_delay_ns)
max_decomp_delay_ns = time_diff;
}
void update_max_acomp_delay_ns(u64 start_time_ns)
{
u64 time_diff;
time_diff = ktime_get_ns() - start_time_ns;
if (time_diff > max_acomp_delay_ns)
max_acomp_delay_ns = time_diff;
}
void update_max_adecomp_delay_ns(u64 start_time_ns)
{
u64 time_diff;
time_diff = ktime_get_ns() - start_time_ns;
if (time_diff > max_adecomp_delay_ns)
max_adecomp_delay_ns = time_diff;
}
void update_wq_comp_calls(struct idxd_wq *idxd_wq)
{
struct iaa_wq *wq = idxd_wq_get_private(idxd_wq);
wq->comp_calls++;
wq->iaa_device->comp_calls++;
}
void update_wq_comp_bytes(struct idxd_wq *idxd_wq, int n)
{
struct iaa_wq *wq = idxd_wq_get_private(idxd_wq);
wq->comp_bytes += n;
wq->iaa_device->comp_bytes += n;
}
void update_wq_decomp_calls(struct idxd_wq *idxd_wq)
{
struct iaa_wq *wq = idxd_wq_get_private(idxd_wq);
wq->decomp_calls++;
wq->iaa_device->decomp_calls++;
}
void update_wq_decomp_bytes(struct idxd_wq *idxd_wq, int n)
{
struct iaa_wq *wq = idxd_wq_get_private(idxd_wq);
wq->decomp_bytes += n;
wq->iaa_device->decomp_bytes += n;
}
static void reset_iaa_crypto_stats(void)
{
total_comp_calls = 0;
total_decomp_calls = 0;
total_sw_decomp_calls = 0;
max_comp_delay_ns = 0;
max_decomp_delay_ns = 0;
max_acomp_delay_ns = 0;
max_adecomp_delay_ns = 0;
total_comp_bytes_out = 0;
total_decomp_bytes_in = 0;
total_completion_einval_errors = 0;
total_completion_timeout_errors = 0;
total_completion_comp_buf_overflow_errors = 0;
}
static void reset_wq_stats(struct iaa_wq *wq)
{
wq->comp_calls = 0;
wq->comp_bytes = 0;
wq->decomp_calls = 0;
wq->decomp_bytes = 0;
}
static void reset_device_stats(struct iaa_device *iaa_device)
{
struct iaa_wq *iaa_wq;
iaa_device->comp_calls = 0;
iaa_device->comp_bytes = 0;
iaa_device->decomp_calls = 0;
iaa_device->decomp_bytes = 0;
list_for_each_entry(iaa_wq, &iaa_device->wqs, list)
reset_wq_stats(iaa_wq);
}
static void wq_show(struct seq_file *m, struct iaa_wq *iaa_wq)
{
seq_printf(m, " name: %s\n", iaa_wq->wq->name);
seq_printf(m, " comp_calls: %llu\n", iaa_wq->comp_calls);
seq_printf(m, " comp_bytes: %llu\n", iaa_wq->comp_bytes);
seq_printf(m, " decomp_calls: %llu\n", iaa_wq->decomp_calls);
seq_printf(m, " decomp_bytes: %llu\n\n", iaa_wq->decomp_bytes);
}
static void device_stats_show(struct seq_file *m, struct iaa_device *iaa_device)
{
struct iaa_wq *iaa_wq;
seq_puts(m, "iaa device:\n");
seq_printf(m, " id: %d\n", iaa_device->idxd->id);
seq_printf(m, " n_wqs: %d\n", iaa_device->n_wq);
seq_printf(m, " comp_calls: %llu\n", iaa_device->comp_calls);
seq_printf(m, " comp_bytes: %llu\n", iaa_device->comp_bytes);
seq_printf(m, " decomp_calls: %llu\n", iaa_device->decomp_calls);
seq_printf(m, " decomp_bytes: %llu\n", iaa_device->decomp_bytes);
seq_puts(m, " wqs:\n");
list_for_each_entry(iaa_wq, &iaa_device->wqs, list)
wq_show(m, iaa_wq);
}
static void global_stats_show(struct seq_file *m)
{
seq_puts(m, "global stats:\n");
seq_printf(m, " total_comp_calls: %llu\n", total_comp_calls);
seq_printf(m, " total_decomp_calls: %llu\n", total_decomp_calls);
seq_printf(m, " total_sw_decomp_calls: %llu\n", total_sw_decomp_calls);
seq_printf(m, " total_comp_bytes_out: %llu\n", total_comp_bytes_out);
seq_printf(m, " total_decomp_bytes_in: %llu\n", total_decomp_bytes_in);
seq_printf(m, " total_completion_einval_errors: %llu\n",
total_completion_einval_errors);
seq_printf(m, " total_completion_timeout_errors: %llu\n",
total_completion_timeout_errors);
seq_printf(m, " total_completion_comp_buf_overflow_errors: %llu\n\n",
total_completion_comp_buf_overflow_errors);
}
static int wq_stats_show(struct seq_file *m, void *v)
{
struct iaa_device *iaa_device;
mutex_lock(&iaa_devices_lock);
global_stats_show(m);
list_for_each_entry(iaa_device, &iaa_devices, list)
device_stats_show(m, iaa_device);
mutex_unlock(&iaa_devices_lock);
return 0;
}
static int iaa_crypto_stats_reset(void *data, u64 value)
{
struct iaa_device *iaa_device;
reset_iaa_crypto_stats();
mutex_lock(&iaa_devices_lock);
list_for_each_entry(iaa_device, &iaa_devices, list)
reset_device_stats(iaa_device);
mutex_unlock(&iaa_devices_lock);
return 0;
}
static int wq_stats_open(struct inode *inode, struct file *file)
{
return single_open(file, wq_stats_show, file);
}
static const struct file_operations wq_stats_fops = {
.open = wq_stats_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
DEFINE_DEBUGFS_ATTRIBUTE(wq_stats_reset_fops, NULL, iaa_crypto_stats_reset, "%llu\n");
int __init iaa_crypto_debugfs_init(void)
{
if (!debugfs_initialized())
return -ENODEV;
iaa_crypto_debugfs_root = debugfs_create_dir("iaa_crypto", NULL);
if (!iaa_crypto_debugfs_root)
return -ENOMEM;
debugfs_create_u64("max_comp_delay_ns", 0644,
iaa_crypto_debugfs_root, &max_comp_delay_ns);
debugfs_create_u64("max_decomp_delay_ns", 0644,
iaa_crypto_debugfs_root, &max_decomp_delay_ns);
debugfs_create_u64("max_acomp_delay_ns", 0644,
iaa_crypto_debugfs_root, &max_acomp_delay_ns);
debugfs_create_u64("max_adecomp_delay_ns", 0644,
iaa_crypto_debugfs_root, &max_adecomp_delay_ns);
debugfs_create_u64("total_comp_calls", 0644,
iaa_crypto_debugfs_root, &total_comp_calls);
debugfs_create_u64("total_decomp_calls", 0644,
iaa_crypto_debugfs_root, &total_decomp_calls);
debugfs_create_u64("total_sw_decomp_calls", 0644,
iaa_crypto_debugfs_root, &total_sw_decomp_calls);
debugfs_create_u64("total_comp_bytes_out", 0644,
iaa_crypto_debugfs_root, &total_comp_bytes_out);
debugfs_create_u64("total_decomp_bytes_in", 0644,
iaa_crypto_debugfs_root, &total_decomp_bytes_in);
debugfs_create_file("wq_stats", 0644, iaa_crypto_debugfs_root, NULL,
&wq_stats_fops);
debugfs_create_file("stats_reset", 0644, iaa_crypto_debugfs_root, NULL,
&wq_stats_reset_fops);
return 0;
}
void __exit iaa_crypto_debugfs_cleanup(void)
{
debugfs_remove_recursive(iaa_crypto_debugfs_root);
}
MODULE_LICENSE("GPL");

View File

@@ -0,0 +1,53 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 Intel Corporation. All rights rsvd. */
#ifndef __CRYPTO_DEV_IAA_CRYPTO_STATS_H__
#define __CRYPTO_DEV_IAA_CRYPTO_STATS_H__
#if defined(CONFIG_CRYPTO_DEV_IAA_CRYPTO_STATS)
int iaa_crypto_debugfs_init(void);
void iaa_crypto_debugfs_cleanup(void);
void update_total_comp_calls(void);
void update_total_comp_bytes_out(int n);
void update_total_decomp_calls(void);
void update_total_sw_decomp_calls(void);
void update_total_decomp_bytes_in(int n);
void update_max_comp_delay_ns(u64 start_time_ns);
void update_max_decomp_delay_ns(u64 start_time_ns);
void update_max_acomp_delay_ns(u64 start_time_ns);
void update_max_adecomp_delay_ns(u64 start_time_ns);
void update_completion_einval_errs(void);
void update_completion_timeout_errs(void);
void update_completion_comp_buf_overflow_errs(void);
void update_wq_comp_calls(struct idxd_wq *idxd_wq);
void update_wq_comp_bytes(struct idxd_wq *idxd_wq, int n);
void update_wq_decomp_calls(struct idxd_wq *idxd_wq);
void update_wq_decomp_bytes(struct idxd_wq *idxd_wq, int n);
#else
static inline int iaa_crypto_debugfs_init(void) { return 0; }
static inline void iaa_crypto_debugfs_cleanup(void) {}
static inline void update_total_comp_calls(void) {}
static inline void update_total_comp_bytes_out(int n) {}
static inline void update_total_decomp_calls(void) {}
static inline void update_total_sw_decomp_calls(void) {}
static inline void update_total_decomp_bytes_in(int n) {}
static inline void update_max_comp_delay_ns(u64 start_time_ns) {}
static inline void update_max_decomp_delay_ns(u64 start_time_ns) {}
static inline void update_max_acomp_delay_ns(u64 start_time_ns) {}
static inline void update_max_adecomp_delay_ns(u64 start_time_ns) {}
static inline void update_completion_einval_errs(void) {}
static inline void update_completion_timeout_errs(void) {}
static inline void update_completion_comp_buf_overflow_errs(void) {}
static inline void update_wq_comp_calls(struct idxd_wq *idxd_wq) {}
static inline void update_wq_comp_bytes(struct idxd_wq *idxd_wq, int n) {}
static inline void update_wq_decomp_calls(struct idxd_wq *idxd_wq) {}
static inline void update_wq_decomp_bytes(struct idxd_wq *idxd_wq, int n) {}
#endif // CONFIG_CRYPTO_DEV_IAA_CRYPTO_STATS
#endif

View File

@@ -1,7 +1,7 @@
ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=IDXD
obj-$(CONFIG_INTEL_IDXD) += idxd.o
idxd-y := init.o irq.o device.o sysfs.o submit.o dma.o cdev.o debugfs.o
idxd-y := init.o irq.o device.o sysfs.o submit.o dma.o cdev.o debugfs.o defaults.o
idxd-$(CONFIG_INTEL_IDXD_PERFMON) += perfmon.o

View File

@@ -67,11 +67,17 @@ static void idxd_config_bus_remove(struct device *dev)
idxd_drv->remove(idxd_dev);
}
static int idxd_bus_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
return add_uevent_var(env, "MODALIAS=" IDXD_DEVICES_MODALIAS_FMT, 0);
}
struct bus_type dsa_bus_type = {
.name = "dsa",
.match = idxd_config_bus_match,
.probe = idxd_config_bus_probe,
.remove = idxd_config_bus_remove,
.uevent = idxd_bus_uevent,
};
EXPORT_SYMBOL_GPL(dsa_bus_type);

View File

@@ -509,6 +509,7 @@ void idxd_wq_del_cdev(struct idxd_wq *wq)
static int idxd_user_drv_probe(struct idxd_dev *idxd_dev)
{
struct device *dev = &idxd_dev->conf_dev;
struct idxd_wq *wq = idxd_dev_to_wq(idxd_dev);
struct idxd_device *idxd = wq->idxd;
int rc;
@@ -536,6 +537,12 @@ static int idxd_user_drv_probe(struct idxd_dev *idxd_dev)
mutex_lock(&wq->wq_lock);
if (!idxd_wq_driver_name_match(wq, dev)) {
idxd->cmd_status = IDXD_SCMD_WQ_NO_DRV_NAME;
rc = -ENODEV;
goto wq_err;
}
wq->wq = create_workqueue(dev_name(wq_confdev(wq)));
if (!wq->wq) {
rc = -ENOMEM;
@@ -543,7 +550,7 @@ static int idxd_user_drv_probe(struct idxd_dev *idxd_dev)
}
wq->type = IDXD_WQT_USER;
rc = drv_enable_wq(wq);
rc = idxd_drv_enable_wq(wq);
if (rc < 0)
goto err;
@@ -558,7 +565,7 @@ static int idxd_user_drv_probe(struct idxd_dev *idxd_dev)
return 0;
err_cdev:
drv_disable_wq(wq);
idxd_drv_disable_wq(wq);
err:
destroy_workqueue(wq->wq);
wq->type = IDXD_WQT_NONE;
@@ -573,7 +580,7 @@ static void idxd_user_drv_remove(struct idxd_dev *idxd_dev)
mutex_lock(&wq->wq_lock);
idxd_wq_del_cdev(wq);
drv_disable_wq(wq);
idxd_drv_disable_wq(wq);
wq->type = IDXD_WQT_NONE;
destroy_workqueue(wq->wq);
wq->wq = NULL;

View File

@@ -0,0 +1,53 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2023 Intel Corporation. All rights rsvd. */
#include <linux/kernel.h>
#include "idxd.h"
int idxd_load_iaa_device_defaults(struct idxd_device *idxd)
{
struct idxd_engine *engine;
struct idxd_group *group;
struct idxd_wq *wq;
if (!test_bit(IDXD_FLAG_CONFIGURABLE, &idxd->flags))
return 0;
wq = idxd->wqs[0];
if (wq->state != IDXD_WQ_DISABLED)
return -EPERM;
/* set mode to "dedicated" */
set_bit(WQ_FLAG_DEDICATED, &wq->flags);
wq->threshold = 0;
/* only setting up 1 wq, so give it all the wq space */
wq->size = idxd->max_wq_size;
/* set priority to 10 */
wq->priority = 10;
/* set type to "kernel" */
wq->type = IDXD_WQT_KERNEL;
/* set wq group to 0 */
group = idxd->groups[0];
wq->group = group;
group->num_wqs++;
/* set name to "iaa_crypto" */
memset(wq->name, 0, WQ_NAME_SIZE + 1);
strscpy(wq->name, "iaa_crypto", WQ_NAME_SIZE + 1);
/* set driver_name to "crypto" */
memset(wq->driver_name, 0, DRIVER_NAME_SIZE + 1);
strscpy(wq->driver_name, "crypto", DRIVER_NAME_SIZE + 1);
engine = idxd->engines[0];
/* set engine group to 0 */
engine->group = idxd->groups[0];
engine->group->num_engines++;
return 0;
}

View File

@@ -161,6 +161,7 @@ int idxd_wq_alloc_resources(struct idxd_wq *wq)
free_hw_descs(wq);
return rc;
}
EXPORT_SYMBOL_NS_GPL(idxd_wq_alloc_resources, IDXD);
void idxd_wq_free_resources(struct idxd_wq *wq)
{
@@ -174,6 +175,7 @@ void idxd_wq_free_resources(struct idxd_wq *wq)
dma_free_coherent(dev, wq->compls_size, wq->compls, wq->compls_addr);
sbitmap_queue_free(&wq->sbq);
}
EXPORT_SYMBOL_NS_GPL(idxd_wq_free_resources, IDXD);
int idxd_wq_enable(struct idxd_wq *wq)
{
@@ -405,6 +407,7 @@ int idxd_wq_init_percpu_ref(struct idxd_wq *wq)
reinit_completion(&wq->wq_resurrect);
return 0;
}
EXPORT_SYMBOL_NS_GPL(idxd_wq_init_percpu_ref, IDXD);
void __idxd_wq_quiesce(struct idxd_wq *wq)
{
@@ -414,6 +417,7 @@ void __idxd_wq_quiesce(struct idxd_wq *wq)
complete_all(&wq->wq_resurrect);
wait_for_completion(&wq->wq_dead);
}
EXPORT_SYMBOL_NS_GPL(__idxd_wq_quiesce, IDXD);
void idxd_wq_quiesce(struct idxd_wq *wq)
{
@@ -421,6 +425,7 @@ void idxd_wq_quiesce(struct idxd_wq *wq)
__idxd_wq_quiesce(wq);
mutex_unlock(&wq->wq_lock);
}
EXPORT_SYMBOL_NS_GPL(idxd_wq_quiesce, IDXD);
/* Device control bits */
static inline bool idxd_is_enabled(struct idxd_device *idxd)
@@ -1266,7 +1271,7 @@ static void idxd_flush_pending_descs(struct idxd_irq_entry *ie)
tx = &desc->txd;
tx->callback = NULL;
tx->callback_result = NULL;
idxd_dma_complete_txd(desc, ctype, true);
idxd_dma_complete_txd(desc, ctype, true, NULL, NULL);
}
}
@@ -1350,7 +1355,7 @@ err_irq:
return rc;
}
int drv_enable_wq(struct idxd_wq *wq)
int idxd_drv_enable_wq(struct idxd_wq *wq)
{
struct idxd_device *idxd = wq->idxd;
struct device *dev = &idxd->pdev->dev;
@@ -1482,8 +1487,9 @@ err_map_portal:
err:
return rc;
}
EXPORT_SYMBOL_NS_GPL(idxd_drv_enable_wq, IDXD);
void drv_disable_wq(struct idxd_wq *wq)
void idxd_drv_disable_wq(struct idxd_wq *wq)
{
struct idxd_device *idxd = wq->idxd;
struct device *dev = &idxd->pdev->dev;
@@ -1503,6 +1509,7 @@ void drv_disable_wq(struct idxd_wq *wq)
wq->type = IDXD_WQT_NONE;
wq->client_count = 0;
}
EXPORT_SYMBOL_NS_GPL(idxd_drv_disable_wq, IDXD);
int idxd_device_drv_probe(struct idxd_dev *idxd_dev)
{

View File

@@ -22,7 +22,7 @@ static inline struct idxd_wq *to_idxd_wq(struct dma_chan *c)
void idxd_dma_complete_txd(struct idxd_desc *desc,
enum idxd_complete_type comp_type,
bool free_desc)
bool free_desc, void *ctx, u32 *status)
{
struct idxd_device *idxd = desc->wq->idxd;
struct dma_async_tx_descriptor *tx;
@@ -306,9 +306,15 @@ static int idxd_dmaengine_drv_probe(struct idxd_dev *idxd_dev)
return -ENXIO;
mutex_lock(&wq->wq_lock);
if (!idxd_wq_driver_name_match(wq, dev)) {
idxd->cmd_status = IDXD_SCMD_WQ_NO_DRV_NAME;
rc = -ENODEV;
goto err;
}
wq->type = IDXD_WQT_KERNEL;
rc = drv_enable_wq(wq);
rc = idxd_drv_enable_wq(wq);
if (rc < 0) {
dev_dbg(dev, "Enable wq %d failed: %d\n", wq->id, rc);
rc = -ENXIO;
@@ -327,7 +333,7 @@ static int idxd_dmaengine_drv_probe(struct idxd_dev *idxd_dev)
return 0;
err_dma:
drv_disable_wq(wq);
idxd_drv_disable_wq(wq);
err:
wq->type = IDXD_WQT_NONE;
mutex_unlock(&wq->wq_lock);
@@ -341,7 +347,7 @@ static void idxd_dmaengine_drv_remove(struct idxd_dev *idxd_dev)
mutex_lock(&wq->wq_lock);
__idxd_wq_quiesce(wq);
idxd_unregister_dma_channel(wq);
drv_disable_wq(wq);
idxd_drv_disable_wq(wq);
mutex_unlock(&wq->wq_lock);
}
@@ -353,6 +359,7 @@ static enum idxd_dev_type dev_types[] = {
struct idxd_device_driver idxd_dmaengine_drv = {
.probe = idxd_dmaengine_drv_probe,
.remove = idxd_dmaengine_drv_remove,
.desc_complete = idxd_dma_complete_txd,
.name = "dmaengine",
.type = dev_types,
};

View File

@@ -13,6 +13,7 @@
#include <linux/bitmap.h>
#include <linux/perf_event.h>
#include <linux/iommu.h>
#include <linux/crypto.h>
#include <uapi/linux/idxd.h>
#include "registers.h"
@@ -57,11 +58,23 @@ enum idxd_type {
#define IDXD_ENQCMDS_RETRIES 32
#define IDXD_ENQCMDS_MAX_RETRIES 64
enum idxd_complete_type {
IDXD_COMPLETE_NORMAL = 0,
IDXD_COMPLETE_ABORT,
IDXD_COMPLETE_DEV_FAIL,
};
struct idxd_desc;
struct idxd_device_driver {
const char *name;
enum idxd_dev_type *type;
int (*probe)(struct idxd_dev *idxd_dev);
void (*remove)(struct idxd_dev *idxd_dev);
void (*desc_complete)(struct idxd_desc *desc,
enum idxd_complete_type comp_type,
bool free_desc,
void *ctx, u32 *status);
struct device_driver drv;
};
@@ -159,6 +172,8 @@ struct idxd_cdev {
int minor;
};
#define DRIVER_NAME_SIZE 128
#define IDXD_ALLOCATED_BATCH_SIZE 128U
#define WQ_NAME_SIZE 1024
#define WQ_TYPE_SIZE 10
@@ -172,12 +187,6 @@ enum idxd_op_type {
IDXD_OP_NONBLOCK = 1,
};
enum idxd_complete_type {
IDXD_COMPLETE_NORMAL = 0,
IDXD_COMPLETE_ABORT,
IDXD_COMPLETE_DEV_FAIL,
};
struct idxd_dma_chan {
struct dma_chan chan;
struct idxd_wq *wq;
@@ -227,6 +236,8 @@ struct idxd_wq {
/* Lock to protect upasid_xa access. */
struct mutex uc_lock;
struct xarray upasid_xa;
char driver_name[DRIVER_NAME_SIZE + 1];
};
struct idxd_engine {
@@ -266,6 +277,8 @@ struct idxd_dma_dev {
struct dma_device dma;
};
typedef int (*load_device_defaults_fn_t) (struct idxd_device *idxd);
struct idxd_driver_data {
const char *name_prefix;
enum idxd_type type;
@@ -275,6 +288,7 @@ struct idxd_driver_data {
int evl_cr_off;
int cr_status_off;
int cr_result_off;
load_device_defaults_fn_t load_device_defaults;
};
struct idxd_evl {
@@ -374,6 +388,14 @@ static inline unsigned int evl_size(struct idxd_device *idxd)
return idxd->evl->size * evl_ent_size(idxd);
}
struct crypto_ctx {
struct acomp_req *req;
struct crypto_tfm *tfm;
dma_addr_t src_addr;
dma_addr_t dst_addr;
bool compress;
};
/* IDXD software descriptor */
struct idxd_desc {
union {
@@ -386,7 +408,10 @@ struct idxd_desc {
struct iax_completion_record *iax_completion;
};
dma_addr_t compl_dma;
struct dma_async_tx_descriptor txd;
union {
struct dma_async_tx_descriptor txd;
struct crypto_ctx crypto;
};
struct llist_node llnode;
struct list_head list;
int id;
@@ -413,6 +438,15 @@ enum idxd_completion_status {
#define idxd_dev_to_idxd(idxd_dev) container_of(idxd_dev, struct idxd_device, idxd_dev)
#define idxd_dev_to_wq(idxd_dev) container_of(idxd_dev, struct idxd_wq, idxd_dev)
static inline struct idxd_device_driver *wq_to_idxd_drv(struct idxd_wq *wq)
{
struct device *dev = wq_confdev(wq);
struct idxd_device_driver *idxd_drv =
container_of(dev->driver, struct idxd_device_driver, drv);
return idxd_drv;
}
static inline struct idxd_device *confdev_to_idxd(struct device *dev)
{
struct idxd_dev *idxd_dev = confdev_to_idxd_dev(dev);
@@ -614,6 +648,16 @@ static inline int idxd_wq_refcount(struct idxd_wq *wq)
return wq->client_count;
};
static inline void idxd_wq_set_private(struct idxd_wq *wq, void *private)
{
dev_set_drvdata(wq_confdev(wq), private);
}
static inline void *idxd_wq_get_private(struct idxd_wq *wq)
{
return dev_get_drvdata(wq_confdev(wq));
}
/*
* Intel IAA does not support batch processing.
* The max batch size of device, max batch size of wq and
@@ -646,6 +690,14 @@ static inline void idxd_wqcfg_set_max_batch_shift(int idxd_type, union wqcfg *wq
wqcfg->max_batch_shift = max_batch_shift;
}
static inline int idxd_wq_driver_name_match(struct idxd_wq *wq, struct device *dev)
{
return (strncmp(wq->driver_name, dev->driver->name, strlen(dev->driver->name)) == 0);
}
#define MODULE_ALIAS_IDXD_DEVICE(type) MODULE_ALIAS("idxd:t" __stringify(type) "*")
#define IDXD_DEVICES_MODALIAS_FMT "idxd:t%d"
int __must_check __idxd_driver_register(struct idxd_device_driver *idxd_drv,
struct module *module, const char *mod_name);
#define idxd_driver_register(driver) \
@@ -656,6 +708,24 @@ void idxd_driver_unregister(struct idxd_device_driver *idxd_drv);
#define module_idxd_driver(__idxd_driver) \
module_driver(__idxd_driver, idxd_driver_register, idxd_driver_unregister)
void idxd_free_desc(struct idxd_wq *wq, struct idxd_desc *desc);
void idxd_dma_complete_txd(struct idxd_desc *desc,
enum idxd_complete_type comp_type,
bool free_desc, void *ctx, u32 *status);
static inline void idxd_desc_complete(struct idxd_desc *desc,
enum idxd_complete_type comp_type,
bool free_desc)
{
struct idxd_device_driver *drv;
u32 status;
drv = wq_to_idxd_drv(desc->wq);
if (drv->desc_complete)
drv->desc_complete(desc, comp_type, free_desc,
&desc->txd, &status);
}
int idxd_register_bus_type(void);
void idxd_unregister_bus_type(void);
int idxd_register_devices(struct idxd_device *idxd);
@@ -663,6 +733,7 @@ void idxd_unregister_devices(struct idxd_device *idxd);
void idxd_wqs_quiesce(struct idxd_device *idxd);
bool idxd_queue_int_handle_resubmit(struct idxd_desc *desc);
void multi_u64_to_bmap(unsigned long *bmap, u64 *val, int count);
int idxd_load_iaa_device_defaults(struct idxd_device *idxd);
/* device interrupt control */
irqreturn_t idxd_misc_thread(int vec, void *data);
@@ -673,8 +744,8 @@ void idxd_unmask_error_interrupts(struct idxd_device *idxd);
/* device control */
int idxd_device_drv_probe(struct idxd_dev *idxd_dev);
void idxd_device_drv_remove(struct idxd_dev *idxd_dev);
int drv_enable_wq(struct idxd_wq *wq);
void drv_disable_wq(struct idxd_wq *wq);
int idxd_drv_enable_wq(struct idxd_wq *wq);
void idxd_drv_disable_wq(struct idxd_wq *wq);
int idxd_device_init_reset(struct idxd_device *idxd);
int idxd_device_enable(struct idxd_device *idxd);
int idxd_device_disable(struct idxd_device *idxd);
@@ -709,14 +780,11 @@ int idxd_wq_request_irq(struct idxd_wq *wq);
/* submission */
int idxd_submit_desc(struct idxd_wq *wq, struct idxd_desc *desc);
struct idxd_desc *idxd_alloc_desc(struct idxd_wq *wq, enum idxd_op_type optype);
void idxd_free_desc(struct idxd_wq *wq, struct idxd_desc *desc);
int idxd_enqcmds(struct idxd_wq *wq, void __iomem *portal, const void *desc);
/* dmaengine */
int idxd_register_dma_device(struct idxd_device *idxd);
void idxd_unregister_dma_device(struct idxd_device *idxd);
void idxd_dma_complete_txd(struct idxd_desc *desc,
enum idxd_complete_type comp_type, bool free_desc);
/* cdev */
int idxd_cdev_register(void);

View File

@@ -59,6 +59,7 @@ static struct idxd_driver_data idxd_driver_data[] = {
.evl_cr_off = offsetof(struct iax_evl_entry, cr),
.cr_status_off = offsetof(struct iax_completion_record, status),
.cr_result_off = offsetof(struct iax_completion_record, error_code),
.load_device_defaults = idxd_load_iaa_device_defaults,
},
};
@@ -745,6 +746,12 @@ static int idxd_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
goto err;
}
if (data->load_device_defaults) {
rc = data->load_device_defaults(idxd);
if (rc)
dev_warn(dev, "IDXD loading device defaults failed\n");
}
rc = idxd_register_devices(idxd);
if (rc) {
dev_err(dev, "IDXD sysfs setup failed\n");

View File

@@ -123,7 +123,7 @@ static void idxd_abort_invalid_int_handle_descs(struct idxd_irq_entry *ie)
list_for_each_entry_safe(d, t, &flist, list) {
list_del(&d->list);
idxd_dma_complete_txd(d, IDXD_COMPLETE_ABORT, true);
idxd_desc_complete(d, IDXD_COMPLETE_ABORT, true);
}
}
@@ -534,7 +534,7 @@ static void idxd_int_handle_resubmit_work(struct work_struct *work)
*/
if (rc != -EAGAIN) {
desc->completion->status = IDXD_COMP_DESC_ABORT;
idxd_dma_complete_txd(desc, IDXD_COMPLETE_ABORT, false);
idxd_desc_complete(desc, IDXD_COMPLETE_ABORT, false);
}
idxd_free_desc(wq, desc);
}
@@ -575,11 +575,11 @@ static void irq_process_pending_llist(struct idxd_irq_entry *irq_entry)
* and 0xff, which DSA_COMP_STATUS_MASK can mask out.
*/
if (unlikely(desc->completion->status == IDXD_COMP_DESC_ABORT)) {
idxd_dma_complete_txd(desc, IDXD_COMPLETE_ABORT, true);
idxd_desc_complete(desc, IDXD_COMPLETE_ABORT, true);
continue;
}
idxd_dma_complete_txd(desc, IDXD_COMPLETE_NORMAL, true);
idxd_desc_complete(desc, IDXD_COMPLETE_NORMAL, true);
} else {
spin_lock(&irq_entry->list_lock);
list_add_tail(&desc->list,
@ -618,11 +618,11 @@ static void irq_process_work_list(struct idxd_irq_entry *irq_entry)
* and 0xff, which DSA_COMP_STATUS_MASK can mask out.
*/
if (unlikely(desc->completion->status == IDXD_COMP_DESC_ABORT)) {
idxd_dma_complete_txd(desc, IDXD_COMPLETE_ABORT, true);
idxd_desc_complete(desc, IDXD_COMPLETE_ABORT, true);
continue;
}
idxd_dma_complete_txd(desc, IDXD_COMPLETE_NORMAL, true);
idxd_desc_complete(desc, IDXD_COMPLETE_NORMAL, true);
}
}


@ -61,6 +61,7 @@ struct idxd_desc *idxd_alloc_desc(struct idxd_wq *wq, enum idxd_op_type optype)
return __get_desc(wq, idx, cpu);
}
EXPORT_SYMBOL_NS_GPL(idxd_alloc_desc, IDXD);
void idxd_free_desc(struct idxd_wq *wq, struct idxd_desc *desc)
{
@ -69,6 +70,7 @@ void idxd_free_desc(struct idxd_wq *wq, struct idxd_desc *desc)
desc->cpu = -1;
sbitmap_queue_clear(&wq->sbq, desc->id, cpu);
}
EXPORT_SYMBOL_NS_GPL(idxd_free_desc, IDXD);
static struct idxd_desc *list_abort_desc(struct idxd_wq *wq, struct idxd_irq_entry *ie,
struct idxd_desc *desc)
@ -125,7 +127,8 @@ static void llist_abort_desc(struct idxd_wq *wq, struct idxd_irq_entry *ie,
spin_unlock(&ie->list_lock);
if (found)
idxd_dma_complete_txd(found, IDXD_COMPLETE_ABORT, false);
idxd_dma_complete_txd(found, IDXD_COMPLETE_ABORT, false,
NULL, NULL);
/*
* completing the descriptor will return desc to allocator and
@ -135,7 +138,8 @@ static void llist_abort_desc(struct idxd_wq *wq, struct idxd_irq_entry *ie,
*/
list_for_each_entry_safe(d, t, &flist, list) {
list_del_init(&d->list);
idxd_dma_complete_txd(found, IDXD_COMPLETE_ABORT, true);
idxd_dma_complete_txd(found, IDXD_COMPLETE_ABORT, true,
NULL, NULL);
}
}
@ -215,3 +219,4 @@ int idxd_submit_desc(struct idxd_wq *wq, struct idxd_desc *desc)
percpu_ref_put(&wq->wq_active);
return 0;
}
EXPORT_SYMBOL_NS_GPL(idxd_submit_desc, IDXD);
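
The descriptor-path symbols above are exported into the `IDXD` symbol namespace, so a consumer module such as iaa_crypto must import that namespace before it can link against them. A kernel-module fragment (not standalone code) sketching this:

```c
/* Kernel-module fragment: importing the IDXD symbol namespace is
 * required to resolve the EXPORT_SYMBOL_NS_GPL(..., IDXD) symbols. */
#include <linux/module.h>

MODULE_IMPORT_NS(IDXD);

/* idxd_alloc_desc(), idxd_submit_desc() and idxd_free_desc() can then
 * be called from this module's submission path. */
```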


@ -1259,6 +1259,39 @@ err:
static struct device_attribute dev_attr_wq_op_config =
__ATTR(op_config, 0644, wq_op_config_show, wq_op_config_store);
static ssize_t wq_driver_name_show(struct device *dev, struct device_attribute *attr, char *buf)
{
struct idxd_wq *wq = confdev_to_wq(dev);
return sysfs_emit(buf, "%s\n", wq->driver_name);
}
static ssize_t wq_driver_name_store(struct device *dev, struct device_attribute *attr,
const char *buf, size_t count)
{
struct idxd_wq *wq = confdev_to_wq(dev);
char *input, *pos;
if (wq->state != IDXD_WQ_DISABLED)
return -EPERM;
if (strlen(buf) > DRIVER_NAME_SIZE || strlen(buf) == 0)
return -EINVAL;
input = kstrndup(buf, count, GFP_KERNEL);
if (!input)
return -ENOMEM;
pos = strim(input);
memset(wq->driver_name, 0, DRIVER_NAME_SIZE + 1);
sprintf(wq->driver_name, "%s", pos);
kfree(input);
return count;
}
static struct device_attribute dev_attr_wq_driver_name =
__ATTR(driver_name, 0644, wq_driver_name_show, wq_driver_name_store);
static struct attribute *idxd_wq_attributes[] = {
&dev_attr_wq_clients.attr,
&dev_attr_wq_state.attr,
@ -1278,6 +1311,7 @@ static struct attribute *idxd_wq_attributes[] = {
&dev_attr_wq_occupancy.attr,
&dev_attr_wq_enqcmds_retries.attr,
&dev_attr_wq_op_config.attr,
&dev_attr_wq_driver_name.attr,
NULL,
};
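
The new `driver_name` attribute is writable only while the wq is disabled (the store handler returns -EPERM otherwise) and is how accel-config steers a wq toward a given sub-driver. An illustrative shell sketch, assuming a `wq0.0` device exists and that "crypto" is the intended sub-driver name:

```shell
# Illustrative only; the wq must be disabled for the write to succeed.
WQ=/sys/bus/dsa/devices/wq0.0
if [ -w "$WQ/driver_name" ]; then
    echo crypto > "$WQ/driver_name"   # driver the wq should bind to
    cat "$WQ/driver_name"
fi
```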


@ -31,6 +31,7 @@ enum idxd_scmd_stat {
IDXD_SCMD_WQ_IRQ_ERR = 0x80100000,
IDXD_SCMD_WQ_USER_NO_IOMMU = 0x80110000,
IDXD_SCMD_DEV_EVL_ERR = 0x80120000,
IDXD_SCMD_WQ_NO_DRV_NAME = 0x80200000,
};
#define IDXD_SCMD_SOFTERR_MASK 0x80000000


@ -0,0 +1 @@
# CONFIG_CRYPTO_DEV_IAA_CRYPTO is not set


@ -0,0 +1 @@
CONFIG_CRYPTO_DEV_IAA_CRYPTO=m


@ -0,0 +1 @@
# CONFIG_CRYPTO_DEV_IAA_CRYPTO_STATS is not set