JIRA: https://issues.redhat.com/browse/RHEL-56247
commit 844bc12e6da3e2d8ab0cd96d049fc695d5d8ba68
Author: Max Gurtovoy <mgurtovoy@nvidia.com>
Date: Wed Jun 19 20:11:52 2024 +0300
IB/core: add support for draining Shared receive queues
To avoid leakage for QPs associated with an SRQ, according to the IB spec
(section 10.3.1):
"Note, for QPs that are associated with an SRQ, the Consumer should take
the QP through the Error State before invoking a Destroy QP or a Modify
QP to the Reset State. The Consumer may invoke the Destroy QP without
first performing a Modify QP to the Error State and waiting for the Affiliated
Asynchronous Last WQE Reached Event. However, if the Consumer
does not wait for the Affiliated Asynchronous Last WQE Reached Event,
then WQE and Data Segment leakage may occur. Therefore, it is good
programming practice to teardown a QP that is associated with an SRQ
by using the following process:
- Put the QP in the Error State;
- wait for the Affiliated Asynchronous Last WQE Reached Event;
- either:
- drain the CQ by invoking the Poll CQ verb and either wait for CQ
to be empty or the number of Poll CQ operations has exceeded
CQ capacity size; or
- post another WR that completes on the same CQ and wait for this
WR to return as a WC;
- and then invoke a Destroy QP or Reset QP."
Catch the Last WQE Reached Event in the core layer during drain QP flow.
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
Link: https://lore.kernel.org/r/20240619171153.34631-2-mgurtovoy@nvidia.com
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Kamal Heib <kheib@redhat.com>
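For illustration, a minimal kernel-style sketch of the teardown sequence the quoted spec text prescribes, roughly what a ULP had to open-code before the core drain flow caught the event itself. It assumes the handler below is registered through qp_init_attr.event_handler at QP creation; the ulp_* names and the single-QP completion are hypothetical, while ib_modify_qp()/ib_poll_cq()/ib_destroy_qp() and IB_EVENT_QP_LAST_WQE_REACHED come from the verbs API:

#include <rdma/ib_verbs.h>
#include <linux/completion.h>

static DECLARE_COMPLETION(ulp_last_wqe);        /* illustrative: one QP only */

static void ulp_qp_event(struct ib_event *event, void *ctx)
{
        if (event->event == IB_EVENT_QP_LAST_WQE_REACHED)
                complete(&ulp_last_wqe);
}

static void ulp_teardown_srq_qp(struct ib_qp *qp, struct ib_cq *cq)
{
        struct ib_qp_attr attr = { .qp_state = IB_QPS_ERR };
        struct ib_wc wc;
        int budget = cq->cqe;   /* stop after at most CQ-capacity polls */

        /* 1. Put the QP into the Error state. */
        ib_modify_qp(qp, &attr, IB_QP_STATE);

        /* 2. Wait for the affiliated Last WQE Reached event (handler above). */
        wait_for_completion(&ulp_last_wqe);

        /* 3. Drain the CQ until it is empty or the poll budget is exhausted. */
        while (budget-- > 0 && ib_poll_cq(cq, 1, &wc) > 0)
                ;

        /* 4. Only now is it safe to destroy the QP without WQE leakage. */
        ib_destroy_qp(qp);
}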
JIRA: https://issues.redhat.com/browse/RHEL-56247
commit e57b0eef66849732e4ec28277f70e56220c53477
Author: Rohit Chavan <roheetchavan@gmail.com>
Date: Thu Aug 24 13:43:04 2023 +0530
RDMA/core: Fix repeated words in comments
Corrected the repeated words in the documentation of
rdma_replace_ah_attr() and ib_resolve_unicast_gid_dmac()
functions.
Signed-off-by: Rohit Chavan <roheetchavan@gmail.com>
Link: https://lore.kernel.org/r/20230824081304.408-1-roheetchavan@gmail.com
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Kamal Heib <kheib@redhat.com>
JIRA: https://issues.redhat.com/browse/RHEL-56247
commit ca60fd116c7ee1a4471a8ad0fe07cdfa57f24c11
Author: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Date: Wed Aug 2 02:00:23 2023 -0700
IB/core: Add more speed parsing in ib_get_width_and_speed()
When the Ethernet driver does not provide the number of lanes
in the __ethtool_get_link_ksettings() response, the function
ib_get_width_and_speed() does not take the 50G, 100G and 200G speeds
into consideration while calculating the IB width and speed.
Update the width and speed for the above netdev speeds.
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
Link: https://lore.kernel.org/r/1690966823-8159-1-git-send-email-selvin.xavier@broadcom.com
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Kamal Heib <kheib@redhat.com>
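A hedged sketch of the kind of fallback described above for when ethtool reports no lane count; the specific width/speed pairs are plausible assumptions, not necessarily the exact choices made by the patch:

#include <rdma/ib_verbs.h>
#include <linux/ethtool.h>

static void example_speed_fallback(u32 netdev_speed, u16 *speed, u8 *width)
{
        switch (netdev_speed) {
        case SPEED_50000:
                *width = IB_WIDTH_1X;
                *speed = IB_SPEED_HDR;  /* 50G: one HDR lane (assumed) */
                break;
        case SPEED_100000:
                *width = IB_WIDTH_2X;
                *speed = IB_SPEED_HDR;  /* 100G: two HDR lanes (assumed) */
                break;
        case SPEED_200000:
                *width = IB_WIDTH_4X;
                *speed = IB_SPEED_HDR;  /* 200G: four HDR lanes (assumed) */
                break;
        default:
                *width = IB_WIDTH_4X;   /* illustrative catch-all */
                *speed = IB_SPEED_QDR;
                break;
        }
}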
JIRA: https://issues.redhat.com/browse/RHEL-30146
commit 703289ce43f740b0096724300107df82d008552f
Author: Or Har-Toov <ohartoov@nvidia.com>
Date: Wed Sep 20 13:07:40 2023 +0300
IB/core: Add support for XDR link speed
Add the new IBTA speed XDR, the rate that was added to the InfiniBand spec
as part of XDR, supporting a signaling rate of 200Gb.
In order to report that value to rdma-core, add a new u32 field to the
query_port response.
Signed-off-by: Or Har-Toov <ohartoov@nvidia.com>
Reviewed-by: Mark Zhang <markzhang@nvidia.com>
Link: https://lore.kernel.org/r/9d235fc600a999e8274010f0e18b40fa60540e6c.1695204156.git.leon@kernel.org
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Kamal Heib <kheib@redhat.com>
JIRA: https://issues.redhat.com/browse/RHEL-23034
commit cb06b6b3f6cbc56c534587db2aac3e0958a4a314
Author: Haoyue Xu <xuhaoyue1@hisilicon.com>
Date: Fri Jul 21 17:20:52 2023 +0800
RDMA/core: Get IB width and speed from netdev
Previously, there was no way to query the number of lanes for a network
card, so the same netdev_speed would result in a fixed pair of width and
speed. As network card specifications become more diverse, such a fixed
mode is no longer suitable, so a method is needed to obtain the correct
width and speed based on the number of lanes.
This patch retrieves netdev lanes and speed from net_device and
translates them to IB width and speed.
Signed-off-by: Haoyue Xu <xuhaoyue1@hisilicon.com>
Signed-off-by: Luoyouming <luoyouming@huawei.com>
Signed-off-by: Junxian Huang <huangjunxian6@hisilicon.com>
Link: https://lore.kernel.org/r/20230721092052.2090449-1-huangjunxian6@hisilicon.com
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Kamal Heib <kheib@redhat.com>
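As a sketch of the idea, assuming the lane count and link speed are already known: the IB width follows directly from the number of lanes, and the IB speed follows from the per-lane rate (netdev speed divided by lanes). The helper names below are illustrative:

#include <rdma/ib_verbs.h>

static u8 example_lanes_to_ib_width(u32 lanes)
{
        switch (lanes) {
        case 1:  return IB_WIDTH_1X;
        case 2:  return IB_WIDTH_2X;
        case 4:  return IB_WIDTH_4X;
        case 8:  return IB_WIDTH_8X;
        case 12: return IB_WIDTH_12X;
        default: return IB_WIDTH_1X;    /* unknown: assume a single lane */
        }
}

static u16 example_per_lane_to_ib_speed(u32 per_lane_mbps)
{
        if (per_lane_mbps >= 100000)
                return IB_SPEED_NDR;    /* 100 Gb/s per lane */
        if (per_lane_mbps >= 50000)
                return IB_SPEED_HDR;    /* 50 Gb/s per lane */
        if (per_lane_mbps >= 25000)
                return IB_SPEED_EDR;    /* 25 Gb/s per lane */
        if (per_lane_mbps >= 14000)
                return IB_SPEED_FDR;    /* 14 Gb/s per lane */
        if (per_lane_mbps >= 10000)
                return IB_SPEED_QDR;    /* 10 Gb/s per lane (or FDR10) */
        if (per_lane_mbps >= 5000)
                return IB_SPEED_DDR;
        return IB_SPEED_SDR;            /* 2.5 Gb/s per lane */
}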
Bugzilla: https://bugzilla.redhat.com/2168936
commit 0afec5e9cea732cb47014655685a2a47fb180c31
Author: Yonatan Nachum <ynachum@amazon.com>
Date: Mon Jan 9 13:37:11 2023 +0000
RDMA/core: Fix ib block iterator counter overflow
When registering a new DMA MR after selecting the best aligned page size
for it, we iterate over the given sglist to split each entry into smaller
DMA blocks aligned to the selected page size.
In given circumstances, where the sg entry and page size fit certain sizes
and the sg entry is not aligned to the selected page size, the total size
of the aligned pages needed to cover the sg entry is >= 4GB. Under these
circumstances, while iterating page-aligned blocks, the counter responsible
for tracking how far we have advanced from the start of the sg entry
overflows, because its type is u32 and we pass 4GB in size. This can lead
to an infinite loop inside the iterator function, because the overflow
prevents the counter from ever becoming larger than the size of the sg
entry.
Fix the presented problem by changing the advancement condition to
eliminate overflow.
Backtrace:
[ 192.374329] efa_reg_user_mr_dmabuf
[ 192.376783] efa_register_mr
[ 192.382579] pgsz_bitmap 0xfffff000 rounddown 0x80000000
[ 192.386423] pg_sz [0x80000000] umem_length[0xc0000000]
[ 192.392657] start 0x0 length 0xc0000000 params.page_shift 31 params.page_num 3
[ 192.399559] hp_cnt[3], pages_in_hp[524288]
[ 192.403690] umem->sgt_append.sgt.nents[1]
[ 192.407905] number entries: [1], pg_bit: [31]
[ 192.411397] biter->__sg_nents [1] biter->__sg [0000000008b0c5d8]
[ 192.415601] biter->__sg_advance [665837568] sg_dma_len[3221225472]
[ 192.419823] biter->__sg_nents [1] biter->__sg [0000000008b0c5d8]
[ 192.423976] biter->__sg_advance [2813321216] sg_dma_len[3221225472]
[ 192.428243] biter->__sg_nents [1] biter->__sg [0000000008b0c5d8]
[ 192.432397] biter->__sg_advance [665837568] sg_dma_len[3221225472]
Fixes: a808273a49 ("RDMA/verbs: Add a DMA iterator to return aligned contiguous memory blocks")
Signed-off-by: Yonatan Nachum <ynachum@amazon.com>
Link: https://lore.kernel.org/r/20230109133711.13678-1-ynachum@amazon.com
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Kamal Heib <kheib@redhat.com>
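A standalone C demonstration of the wrap described above (not the kernel iterator itself), using the same numbers as the backtrace: 2 GiB aligned blocks over a 3 GiB sg entry. The fixed loop decides from the remaining length, which cannot overflow:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        const uint64_t sg_dma_len = 0xc0000000ULL; /* 3 GiB sg entry */
        const uint64_t block = 1ULL << 31;         /* 2 GiB block (pg_bit 31) */
        uint32_t advance = 0;                      /* u32, as in the broken iterator */
        int guard = 4;

        /* Broken pattern: advance a u32 by the block size and stop once it
         * reaches the entry length; it wraps at 4 GiB and never gets there. */
        while (advance < sg_dma_len && guard--)
                advance += block;       /* 0 -> 0x80000000 -> 0 -> ... */
        printf("u32 advance after 4 steps: 0x%x (never reaches 0x%llx)\n",
               (unsigned int)advance, (unsigned long long)sg_dma_len);

        /* Fixed pattern: decide from the remaining length instead. */
        uint64_t off = 0;
        for (;;) {
                printf("block at offset 0x%llx\n", (unsigned long long)off);
                if (sg_dma_len - off > block)
                        off += block;
                else
                        break;          /* last block of this sg entry */
        }
        return 0;
}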
Bugzilla: https://bugzilla.redhat.com/2168933
commit b300729b77b0b746c4f898332705672eb50d3297
Author: Dan Carpenter <dan.carpenter@oracle.com>
Date: Thu Sep 22 14:22:35 2022 +0300
RDMA/core: Clean up a variable name in ib_create_srq_user()
"&srq->pd->usecnt" and "&pd->usecnt" are different names for the same
reference count. Use "&pd->usecnt" consistently for both the increment
and decrement.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Link: https://lore.kernel.org/r/YyxFe3Pm0uzRuBkQ@kili
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Kamal Heib <kheib@redhat.com>
Bugzilla: https://bugzilla.redhat.com/2120668
commit 241f9a27e0fc0eaf23e3d52c8450f10648cd11f1
Author: Daisuke Matsuda <matsuda-daisuke@fujitsu.com>
Date: Wed Sep 21 17:08:43 2022 +0900
IB: Set IOVA/LENGTH on IB_MR in core/uverbs layers
Set 'iova' and 'length' on ib_mr in ib_uverbs and ib_core layers to let all
drivers have the members filled. Also, this commit removes redundancy in
the respective drivers.
Previously, commit 04c0a5fcfc ("IB/uverbs: Set IOVA on IB MR in uverbs
layer") changed the code to set 'iova', but seems to have missed 'length'
and the ib_core layer at that time.
Fixes: 04c0a5fcfc ("IB/uverbs: Set IOVA on IB MR in uverbs layer")
Signed-off-by: Daisuke Matsuda <matsuda-daisuke@fujitsu.com>
Link: https://lore.kernel.org/r/20220921080844.1616883-1-matsuda-daisuke@fujitsu.com
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Kamal Heib <kheib@redhat.com>
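A hedged sketch of the core-layer side of this change; the wrapper function is illustrative, while the field and op names follow struct ib_mr and struct ib_device_ops:

#include <rdma/ib_verbs.h>
#include <linux/err.h>

static struct ib_mr *example_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
                                         u64 virt_addr, int access_flags)
{
        struct ib_mr *mr;

        mr = pd->device->ops.reg_user_mr(pd, start, length, virt_addr,
                                         access_flags, NULL);
        if (IS_ERR(mr))
                return mr;

        mr->device = pd->device;
        mr->pd = pd;
        mr->iova = virt_addr;   /* previously set only in the uverbs layer */
        mr->length = length;    /* previously missed in the core layer */
        return mr;
}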
Bugzilla: https://bugzilla.redhat.com/2120662
commit e945c653c8e972d1b81a88e474d79f801b60213a
Author: Jason Gunthorpe <jgg@ziepe.ca>
Date: Mon Apr 4 12:26:42 2022 -0300
RDMA: Split kernel-only global device caps from uverbs device caps
Split out flags from ib_device::device_cap_flags that are only used
internally to the kernel into kernel_cap_flags that is not part of the
uapi. This limits the device_cap_flags to being the same bitmap that will
be copied to userspace.
This cleanly splits out the uverbs flags from the kernel flags to avoid
confusion in the flags bitmap.
Add some short comments describing what each of the kernel flags is
connected to. Remove unused kernel flags.
Link: https://lore.kernel.org/r/0-v2-22c19e565eef+139a-kern_caps_jgg@nvidia.com
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Kamal Heib <kheib@redhat.com>
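A hedged sketch of what the split looks like to a consumer, assuming the kernel-only bitmap lives in ib_device_attr as kernel_cap_flags as described above; the IBK_EXAMPLE_* values are purely illustrative, not the upstream flag list:

#include <rdma/ib_verbs.h>

enum example_kernel_cap_flags {
        IBK_EXAMPLE_LOCAL_DMA_LKEY = 1 << 0,    /* device exposes a local DMA lkey */
        IBK_EXAMPLE_UD_TSO         = 1 << 1,    /* kernel-only capability bit */
};

static bool example_has_local_dma_lkey(const struct ib_device *dev)
{
        /* kernel code now tests the kernel-only bitmap; device_cap_flags is
         * left as exactly the bits that get copied to userspace */
        return dev->attrs.kernel_cap_flags & IBK_EXAMPLE_LOCAL_DMA_LKEY;
}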
Bugzilla: https://bugzilla.redhat.com/2049447
Upstream-status: v5.15-rc1
commit 20da44dfe8eff5b61685e394dec690a5d9dc36ce
Author: Leon Romanovsky <leon@kernel.org>
Date: Fri Jul 23 14:39:51 2021 +0300
RDMA/mlx5: Drop in-driver verbs object creations
There is no real value in bypassing IB/core APIs for creating standard
objects with standard types. The open-coded variant didn't have any
restrack task management calls, which caused such objects to not be present
when running rdmatool.
Link: https://lore.kernel.org/r/f745590e5fb7d56f90fdb25f64ee3983ba17e1e4.1627040189.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Mohammad Kabat <mkabat@redhat.com>
Bugzilla: http://bugzilla.redhat.com/2056772
commit 7c4a539ec38f4ce400a0f3fcb5ff6c940fcd67bb
Author: Yajun Deng <yajun.deng@linux.dev>
Date: Thu Mar 3 10:42:32 2022 +0800
RDMA/core: Fix ib_qp_usecnt_dec() called when error
ib_destroy_qp() is called by ib_create_qp_user() on error; the former
contains ib_qp_usecnt_dec(), but ib_qp_usecnt_inc() was never called
beforehand.
So move ib_qp_usecnt_inc() into create_qp().
Fixes: d2b10794fc13 ("RDMA/core: Create clean QP creations interface for uverbs")
Link: https://lore.kernel.org/r/20220303024232.2847388-1-yajun.deng@linux.dev
Signed-off-by: Yajun Deng <yajun.deng@linux.dev>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Kamal Heib <kheib@redhat.com>
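A hedged, simplified sketch of why the increment has to live inside create_qp(): the late-failure unwind goes through ib_destroy_qp(), which (per the commit above) calls ib_qp_usecnt_dec(). The example_* names are hypothetical stand-ins for ib_core internals:

#include <rdma/ib_verbs.h>

static int example_post_create_step(struct ib_qp *qp);  /* hypothetical late step */

static int example_finish_qp_create(struct ib_qp *qp)
{
        ib_qp_usecnt_inc(qp);   /* taken inside the create path itself */

        if (example_post_create_step(qp)) {
                /* ib_destroy_qp() decrements the usecnts internally, which is
                 * only balanced because the increment above already happened */
                ib_destroy_qp(qp);
                return -EINVAL;
        }
        return 0;
}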
Bugzilla: http://bugzilla.redhat.com/2056772
commit a80501b89152adb29adc7ab943d75c7345f9a3fb
Author: Yajun Deng <yajun.deng@linux.dev>
Date: Wed Feb 23 15:49:01 2022 +0800
RDMA/core: Remove unnecessary statements
rdma_zalloc_drv_obj() in __ib_alloc_pd() already zeroes the pd, so there is
no need to assign NULL to members of the pd object.
uverbs_free_pd() already returns busy if pd->usecnt is set, so there is no
need to add a warning.
Link: https://lore.kernel.org/r/20220223074901.201506-1-yajun.deng@linux.dev
Signed-off-by: Yajun Deng <yajun.deng@linux.dev>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Kamal Heib <kheib@redhat.com>
Bugzilla: http://bugzilla.redhat.com/2056772
commit 32a88d16615c2be295571c29273c4ac94cb75309
Author: Maor Gottlieb <maorg@nvidia.com>
Date: Tue Jan 18 09:35:02 2022 +0200
RDMA/core: Set MR type in ib_reg_user_mr
Add missing assignment of MR type to IB_MR_TYPE_USER.
Fixes: 33006bd4f3 ("IB/core: Introduce ib_reg_user_mr")
Link: https://lore.kernel.org/r/be2e91bcd6e52dc36be289ae92f30d3a5cc6dcb1.1642491047.git.leonro@nvidia.com
Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Kamal Heib <kheib@redhat.com>
Bugzilla: http://bugzilla.redhat.com/2056770
commit 6cd7397d01c4a3e09757840299e4f114f0aa5fa0
Author: Leon Romanovsky <leon@kernel.org>
Date: Thu Nov 11 13:45:00 2021 +0200
RDMA/core: Set send and receive CQ before forwarding to the driver
Preset both the receive and send CQ pointers prior to calling the drivers,
and overwrite them again afterwards, until mlx4 is changed to not overwrite
ibqp properties.
This change is needed for mlx5 because, in case of QP creation failure, it
goes down the QP destroy path, which relies on proper CQ pointers.
BUG: KASAN: use-after-free in create_qp.cold+0x164/0x16e [mlx5_ib]
Write of size 8 at addr ffff8880064c55c0 by task a.out/246
CPU: 0 PID: 246 Comm: a.out Not tainted 5.15.0+ #291
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014
Call Trace:
dump_stack_lvl+0x45/0x59
print_address_description.constprop.0+0x1f/0x140
kasan_report.cold+0x83/0xdf
create_qp.cold+0x164/0x16e [mlx5_ib]
mlx5_ib_create_qp+0x358/0x28a0 [mlx5_ib]
create_qp.part.0+0x45b/0x6a0 [ib_core]
ib_create_qp_user+0x97/0x150 [ib_core]
ib_uverbs_handler_UVERBS_METHOD_QP_CREATE+0x92c/0x1250 [ib_uverbs]
ib_uverbs_cmd_verbs+0x1c38/0x3150 [ib_uverbs]
ib_uverbs_ioctl+0x169/0x260 [ib_uverbs]
__x64_sys_ioctl+0x866/0x14d0
do_syscall_64+0x3d/0x90
entry_SYSCALL_64_after_hwframe+0x44/0xae
Allocated by task 246:
kasan_save_stack+0x1b/0x40
__kasan_kmalloc+0xa4/0xd0
create_qp.part.0+0x92/0x6a0 [ib_core]
ib_create_qp_user+0x97/0x150 [ib_core]
ib_uverbs_handler_UVERBS_METHOD_QP_CREATE+0x92c/0x1250 [ib_uverbs]
ib_uverbs_cmd_verbs+0x1c38/0x3150 [ib_uverbs]
ib_uverbs_ioctl+0x169/0x260 [ib_uverbs]
__x64_sys_ioctl+0x866/0x14d0
do_syscall_64+0x3d/0x90
entry_SYSCALL_64_after_hwframe+0x44/0xae
Freed by task 246:
kasan_save_stack+0x1b/0x40
kasan_set_track+0x1c/0x30
kasan_set_free_info+0x20/0x30
__kasan_slab_free+0x10c/0x150
slab_free_freelist_hook+0xb4/0x1b0
kfree+0xe7/0x2a0
create_qp.part.0+0x52b/0x6a0 [ib_core]
ib_create_qp_user+0x97/0x150 [ib_core]
ib_uverbs_handler_UVERBS_METHOD_QP_CREATE+0x92c/0x1250 [ib_uverbs]
ib_uverbs_cmd_verbs+0x1c38/0x3150 [ib_uverbs]
ib_uverbs_ioctl+0x169/0x260 [ib_uverbs]
__x64_sys_ioctl+0x866/0x14d0
do_syscall_64+0x3d/0x90
entry_SYSCALL_64_after_hwframe+0x44/0xae
Fixes: 514aee660df4 ("RDMA: Globally allocate and release QP memory")
Link: https://lore.kernel.org/r/2dbb2e2cbb1efb188a500e5634be1d71956424ce.1636631035.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Kamal Heib <kheib@redhat.com>
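A hedged, simplified sketch of the core-side ordering this fix establishes; the wrapper is illustrative, while the member names follow struct ib_qp, struct ib_qp_init_attr and struct ib_device_ops:

#include <rdma/ib_verbs.h>

static int example_core_create_qp(struct ib_qp *qp,
                                  struct ib_qp_init_attr *attr,
                                  struct ib_udata *udata)
{
        /* Fill the CQ pointers before calling the driver, so a driver-internal
         * failure path that tears down the half-built QP can rely on them. */
        qp->send_cq = attr->send_cq;
        qp->recv_cq = attr->recv_cq;

        return qp->device->ops.create_qp(qp, attr, udata);
}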
Bugzilla: http://bugzilla.redhat.com/2056770
commit 067113d9db66f8a6cde6d9105404731b3b275202
Author: Mark Zhang <markzhang@nvidia.com>
Date: Tue Oct 26 11:43:03 2021 +0300
RDMA/core: Fix missed initialization of rdma_hw_stats::lock
alloc_and_bind() creates a new rdma_hw_stats structure but misses
initializing the mutex lock.
This causes debug kernel failures:
DEBUG_LOCKS_WARN_ON(lock->magic != lock)
WARNING: CPU: 4 PID: 64464 at kernel/locking/mutex.c:575 __mutex_lock+0x9c3/0x12b0
Call Trace:
fill_res_counter_entry+0x6ee/0x1020 [ib_core]
res_get_common_dumpit+0x907/0x10a0 [ib_core]
nldev_stat_get_dumpit+0x20a/0x290 [ib_core]
netlink_dump+0x451/0x1040
__netlink_dump_start+0x583/0x830
rdma_nl_rcv_msg+0x3f3/0x7c0 [ib_core]
rdma_nl_rcv+0x264/0x410 [ib_core]
netlink_unicast+0x433/0x700
netlink_sendmsg+0x707/0xbf0
sock_sendmsg+0xb0/0xe0
__sys_sendto+0x193/0x240
__x64_sys_sendto+0xdd/0x1b0
do_syscall_64+0x3d/0x90
entry_SYSCALL_64_after_hwframe+0x44/0xae
Instead of requiring all users to open-code initialization of the lock,
put it in the general rdma_alloc_hw_stats_struct() function and remove the
duplicates.
Fixes: c4ffee7c9b ("RDMA/netlink: Implement counter dumpit calback")
Link: https://lore.kernel.org/r/4a22986c4685058d2c735d91703ee7d865815bb9.1635237668.git.leonro@nvidia.com
Signed-off-by: Mark Zhang <markzhang@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Kamal Heib <kheib@redhat.com>
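A hedged sketch of centralizing the initialization in the allocator rather than in every caller; the allocation shape is simplified and the function name is illustrative:

#include <rdma/ib_verbs.h>
#include <linux/slab.h>
#include <linux/overflow.h>

static struct rdma_hw_stats *example_alloc_hw_stats(int num_counters)
{
        struct rdma_hw_stats *stats;

        stats = kzalloc(struct_size(stats, value, num_counters), GFP_KERNEL);
        if (!stats)
                return NULL;

        stats->num_counters = num_counters;
        mutex_init(&stats->lock);       /* the initialization alloc_and_bind() missed */
        return stats;
}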
Bugzilla: http://bugzilla.redhat.com/2056770
commit 0dc89684605e8b60af645988d3e0d80a57b6e937
Author: Aharon Landau <aharonl@nvidia.com>
Date: Fri Oct 8 15:24:31 2021 +0300
RDMA/counter: Add an is_disabled field in struct rdma_hw_stats
Add a bitmap to the rdma_hw_stats structure, in which each bit indicates
whether the corresponding counter is currently disabled. By default, hw
counters are enabled.
Link: https://lore.kernel.org/r/20211008122439.166063-6-markzhang@nvidia.com
Signed-off-by: Aharon Landau <aharonl@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Mark Zhang <markzhang@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Kamal Heib <kheib@redhat.com>
Bugzilla: http://bugzilla.redhat.com/2056770
commit 0a0800ce2a6a4d58d73f712f1c0780974877babf
Author: Mark Zhang <markzhang@nvidia.com>
Date: Fri Oct 8 15:24:30 2021 +0300
RDMA/core: Add a helper API rdma_free_hw_stats_struct
Add a new API rdma_free_hw_stats_struct to pair with
rdma_alloc_hw_stats_struct (which is also de-inlined).
This will be useful when there is more alloc/free work in the following
patches.
Link: https://lore.kernel.org/r/20211008122439.166063-5-markzhang@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Mark Zhang <markzhang@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Kamal Heib <kheib@redhat.com>
Bugzilla: http://bugzilla.redhat.com/2056769
commit 5507f67d08cdd947714647caa5c60f96b719fcb7
Author: Leon Romanovsky <leon@kernel.org>
Date: Tue Aug 3 21:20:37 2021 +0300
RDMA/core: Properly increment and decrement QP usecnts
The QP usecnts were incremented through the QP attributes structure while
being decremented through the QP itself. Rely on the ib_create_qp_user()
code that initializes all QP parameters prior to returning to the user,
and increment the usecnts exactly where destroy decrements them.
Link: https://lore.kernel.org/r/25d256a3bb1fc480b77d7fe439817b993de48610.1628014762.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Kamal Heib <kheib@redhat.com>
Bugzilla: http://bugzilla.redhat.com/2056769
commit 00a79d6b996d46e9077b0d02a19c87b99305b94a
Author: Leon Romanovsky <leon@kernel.org>
Date: Tue Aug 3 21:20:36 2021 +0300
RDMA/core: Configure selinux QP during creation
All QP creation flows called ib_create_qp_security(), but differently.
This caused the need to provide exclusion conditions for XRC_TGT, because
such a QP already had a selinux configuration call.
In order to fix it, move ib_create_qp_security() to the general QP
creation routine.
Link: https://lore.kernel.org/r/4d7cd6f5828aca37fb62283e6b126b73ab86b18c.1628014762.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Kamal Heib <kheib@redhat.com>
Bugzilla: http://bugzilla.redhat.com/2056769
commit 8da9fe4e4fa7d561df0f3fe65bfa6dbf78aa7590
Author: Leon Romanovsky <leon@kernel.org>
Date: Tue Aug 3 21:20:35 2021 +0300
RDMA/core: Reorganize create QP low-level functions
The low-level create QP function grew to be larger than any sensible
inline function should be. The inline attribute is not really needed for
that function, and it can be implemented as an exported symbol.
Link: https://lore.kernel.org/r/2c08709d86f876c3dfb77684357b2a939e570ca4.1628014762.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Kamal Heib <kheib@redhat.com>
Bugzilla: http://bugzilla.redhat.com/2056769
commit 20e2bcc4c2a8ede2fe6e335b9eea357bcfbe79bb
Author: Leon Romanovsky <leon@kernel.org>
Date: Tue Aug 3 21:20:34 2021 +0300
RDMA/core: Remove protection from wrong in-kernel API usage
ib_create_named_qp() is a kernel verb that is not used with user-supplied
attributes. In such a case, it is the ULP's responsibility to provide
valid QP attributes.
The in-kernel API shouldn't check them, exactly like other functions that
don't check device capabilities.
Link: https://lore.kernel.org/r/b9b9e981d1af148b750750196e686199dbbf61f8.1628014762.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Kamal Heib <kheib@redhat.com>
Bugzilla: http://bugzilla.redhat.com/2056769
commit 8fc3beebf623092e446f4c88fca1699c868ca86d
Author: Leon Romanovsky <leon@kernel.org>
Date: Tue Aug 3 21:20:33 2021 +0300
RDMA/core: Delete duplicated and unreachable code
ib_create_named_qp() is a kernel verb, and no kernel users exist that use
an XRC_INI QP. Hence such a QP path is not reachable. In addition, delete
duplicated assignments of QP attributes from the initialization structure.
Link: https://lore.kernel.org/r/1b4c0d1def5f8f6d26839e14d19da950cc4a0b05.1628014762.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Kamal Heib <kheib@redhat.com>
Bugzilla: http://bugzilla.redhat.com/2056769
commit 514aee660df493cd673154a6ba6bab745ec47b8c
Author: Leon Romanovsky <leon@kernel.org>
Date: Fri Jul 23 14:39:50 2021 +0300
RDMA: Globally allocate and release QP memory
Convert the QP object to follow the IB/core general allocation scheme.
That change allows us to make sure that restrack properly krefs the memory.
Link: https://lore.kernel.org/r/48e767124758aeecc433360ddd85eaa6325b34d9.1627040189.git.leonro@nvidia.com
Reviewed-by: Gal Pressman <galpress@amazon.com> #efa
Tested-by: Gal Pressman <galpress@amazon.com>
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com> #rdma and core
Tested-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Tested-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Kamal Heib <kheib@redhat.com>
In order to track SRQ resources, a new restrack object is initialized and
added to the resource tracking database.
Link: https://lore.kernel.org/r/0db71c409f24f2f6b019bf8797a8fed96fe7079c.1618753110.git.leonro@nvidia.com
Signed-off-by: Neta Ostrovsky <netao@nvidia.com>
Reviewed-by: Mark Zhang <markzhang@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Block comments should not use a trailing */ on a separate line and every
line of a block comment should start with an '*'.
Link: https://lore.kernel.org/r/1617783353-48249-7-git-send-email-liweihang@huawei.com
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
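For reference, a block comment in the style the patch description above enforces: every line carries a leading '*', and the closing delimiter trails the final line of text rather than sitting on a separate line:

/* An example block comment in the corrected style: each continuation
 * line starts with an '*', and the comment is closed at the end of the
 * last line of text instead of on a separate trailing line. */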
Local invalidate is also a kind of memory management operation, not only a
memory bind operation. Furthermore, as invalidate operations include both
local and remote, add a prefix to the prompt message to make it clearer.
Link: https://lore.kernel.org/r/1617698772-13871-1-git-send-email-liweihang@huawei.com
Signed-off-by: Yixian Liu <liuyixian@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Current code uses many different types when dealing with a port of a RDMA
device: u8, unsigned int and u32. Switch to u32 to clean up the logic.
This allows us to make (at least) the core view consistent and use the
same type. Unfortunately not all places can be converted. Many uverbs
functions expect port to be u8 so keep those places in order not to break
UAPIs. HW/Spec defined values must also not be changed.
With the switch to u32 we now can support devices with more than 255
ports. U32_MAX is reserved to make control logic a bit easier to deal
with. As a device with U32_MAX ports probably isn't going to happen any
time soon, this seems like a non-issue.
When a device with more than 255 ports is created uverbs will report the
RDMA device as having 255 ports as this is the max currently supported.
The verbs interface is not changed yet because the IBTA spec limits the
port size in too many places to be u8, and all applications that rely on
verbs won't be able to cope with this change. At this stage, we are
extending only the interfaces that use the vendor channel.
Once the limitation is lifted, mlx5 in switchdev mode will be able to have
thousands of SFs created by the device. As the only instance of an RDMA
device that reports more than 255 ports will be a representor device, and
it exposes itself as a RAW Ethernet only device, CM/MAD/IPoIB and other
ULPs aren't affected by this change, and their sysfs entries/interfaces
exposed to userspace can remain unchanged.
While here, clean up some alignment issues and remove unneeded sanity
checks (mainly in rdmavt).
Link: https://lore.kernel.org/r/20210301070420.439400-1-leon@kernel.org
Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
IB HCA ports start from 1. Use the IB core provided port iterator API to
avoid any assumption about the start port number.
Link: https://lore.kernel.org/r/20210127150010.1876121-11-leon@kernel.org
Signed-off-by: Parav Pandit <parav@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
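A small sketch of the iterator-based pattern described above; only rdma_for_each_port() and rdma_protocol_ib() are existing core helpers, the function around them is illustrative:

#include <rdma/ib_verbs.h>

static u32 example_count_ib_ports(struct ib_device *ib_dev)
{
        u32 port, n = 0;

        rdma_for_each_port(ib_dev, port) {
                /* port numbering starts at 1; the iterator hides that detail */
                if (rdma_protocol_ib(ib_dev, port))
                        n++;
        }
        return n;
}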
Commit 66f57b871e ("RDMA/restrack: Support all QP types") extends
ib_create_qp() to a named ib_create_named_qp(), which takes the caller's
name as argument, but it did not add the new argument description to the
function's kerneldoc.
make htmldocs warns:
./drivers/infiniband/core/verbs.c:1206: warning: Function parameter or member 'caller' not described in 'ib_create_named_qp'
Add a description for this new argument based on the description of the
same argument in other related functions.
Fixes: 66f57b871e ("RDMA/restrack: Support all QP types")
Link: https://lore.kernel.org/r/20201207173255.13355-1-lukas.bulwahn@gmail.com
Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
The latest changes in restrack name handling made it possible to simplify
the QP creation code to support all types of QPs.
For example, XRC QPs are now presented by rdmatool:
$ ibv_xsrq_pingpong &
$ rdma res show qp
link ibp0s9/1 lqpn 0 type SMI state RTS sq-psn 0 comm [ib_core]
link ibp0s9/1 lqpn 1 type GSI state RTS sq-psn 0 comm [ib_core]
link ibp0s9/1 lqpn 7 type UD state RTS sq-psn 0 comm [mlx5_ib]
link ibp0s9/1 lqpn 42 type XRC_TGT state INIT sq-psn 0 path-mig-state MIGRATED comm [ib_uverbs]
link ibp0s9/1 lqpn 43 type XRC_INI state INIT sq-psn 0 path-mig-state MIGRATED pdn 197 pid 419 comm ibv_xsrq_pingpong
Link: https://lore.kernel.org/r/20201117070148.1974114-4-leon@kernel.org
Reviewed-by: Mark Zhang <markz@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Drivers now expose two callbacks for address handle creation, one for
uverbs and one for kverbs. EFA only supports uverbs so the .create_ah
assignment can be removed. Fix the core code caller to check the proper
function pointer.
Link: https://lore.kernel.org/r/20201115103404.48829-3-galpress@amazon.com
Signed-off-by: Gal Pressman <galpress@amazon.com>
Acked-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Fix the error handling case to return the error code via PTR_ERR() instead
of 0.
Fixes: 51aab12631 ("RDMA/core: Get xmit slave for LAG")
Link: https://lore.kernel.org/r/20201016075845.129562-1-jingxiangfeng@huawei.com
Signed-off-by: Jing Xiangfeng <jingxiangfeng@huawei.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Drivers that need a uverbs AH should instead set the create_user_ah() op
similar to reg_user_mr(). MODIFY_AH and QUERY_AH cmds were never
implemented so are just deleted.
Link: https://lore.kernel.org/r/11-v1-caa70ba3d1ab+1436e-ucmd_mask_jgg@nvidia.com
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Separate IB_GID_TYPE_IB and IB_GID_TYPE_ROCE into two different values, so
that enum ib_gid_type will match the gid types of the new query GID table
API, which will be introduced in the following patches.
This change in enum ib_gid_type also requires changing enum
rdma_network_type by separating the RDMA_NETWORK_IB and
RDMA_NETWORK_ROCE_V1 values.
Link: https://lore.kernel.org/r/20200923165015.2491894-3-leon@kernel.org
Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
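A hedged sketch of the mapping after the split, with each GID type resolving to its own network type; the helper is illustrative (the real resolution also inspects the GID itself to pick IPv4 vs IPv6 for RoCE v2):

#include <rdma/ib_verbs.h>

static enum rdma_network_type example_gid_to_network(enum ib_gid_type gid_type)
{
        switch (gid_type) {
        case IB_GID_TYPE_IB:
                return RDMA_NETWORK_IB;
        case IB_GID_TYPE_ROCE:
                return RDMA_NETWORK_ROCE_V1;    /* no longer shares a value with IB */
        case IB_GID_TYPE_ROCE_UDP_ENCAP:
        default:
                return RDMA_NETWORK_IPV4;       /* or RDMA_NETWORK_IPV6 */
        }
}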
Use rdma_restrack_set_name() and rdma_restrack_parent_name() instead of
tricky uses of rdma_restrack_attach_task()/rdma_restrack_uadd().
This uniformly makes all restrack entries added using rdma_restrack_add().
Link: https://lore.kernel.org/r/20200922091106.2152715-6-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Have a single rdma_restrack_add() that adds an entry; there is no reason
to split the user/kernel here, as rdma_restrack_set_task() is responsible
for that difference.
This patch prepares the code for the future requirement of making restrack
mandatory for managing IB objects.
Link: https://lore.kernel.org/r/20200922091106.2152715-5-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Refactor the restrack code to make sure the kref inside the restrack entry
properly krefs the object in which it is embedded. This slight change is
needed for future conversions of MR and QP, which are refcounted before
the release and kfree.
The ideal flow from the ib_core perspective is as follows:
* Allocate ib_* structure with rdma_zalloc_*.
* Set everything that is known to ib_core to that newly created object.
* Initialize kref with restrack help
* Call to driver specific allocation functions.
* Insert into restrack DB
....
* Return and release restrack with restrack_put.
Largely this means a rdma_restrack_new() should be called near allocating
the containing structure.
Link: https://lore.kernel.org/r/20200922091106.2152715-4-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Leon Romanovsky says:
====================
IBTA declares speed as 16 bits, but kernel stores it in u8. This series
fixes in-kernel declaration while keeping external interface intact.
====================
Based on the mlx5-next branch at
git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux
due to dependencies.
* branch 'mlx5_active_speed':
RDMA: Fix link active_speed size
RDMA/mlx5: Delete duplicated mlx5_ptys_width enum
net/mlx5: Refactor query port speed functions
According to the IB spec, the active_speed size should be u16 and not u8
as before. Change it to allow further extensions in offered speeds.
Link: https://lore.kernel.org/r/20200917090223.1018224-4-leon@kernel.org
Signed-off-by: Aharon Landau <aharonl@mellanox.com>
Reviewed-by: Michael Guralnik <michaelgur@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Like any other verbs object, a CQ shouldn't fail during destroy, but
mlx5_ib didn't follow this contract when IB verbs objects are mixed with
DEVX. Such a mix causes a situation where the FW and the kernel are fully
interdependent on each side's reference counting.
Kernel verbs and drivers that don't have DEVX flows shouldn't fail.
Fixes: e39afe3d6d ("RDMA: Convert CQ allocations to be under core responsibility")
Link: https://lore.kernel.org/r/20200907120921.476363-7-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>