Including fixes from bluetooth. We didn't get netfilter or wireless PRs
this week, so next week's PR is probably going to be bigger. A healthy
dose of fixes for bugs introduced in the current release nonetheless.

Current release - regressions:

 - Bluetooth: always allow SCO packets for user channel
 - af_unix: fix memory leak in unix_dgram_sendmsg()
 - rxrpc:
     - remove redundant peer->mtu_lock causing lockdep splats
     - fix spinlock flavor issues with the peer record hash
 - eth: iavf: fix circular lock dependency with netdev_lock
 - net: use rtnl_net_dev_lock() in register_netdevice_notifier_dev_net();
   RDMA drivers register notifier after the device

Current release - new code bugs:

 - ethtool: fix ioctl confusing drivers about desired HDS user config
 - eth: ixgbe: fix media cage present detection for E610 device

Previous releases - regressions:

 - loopback: avoid sending IP packets without an Ethernet header
 - mptcp: reset connection when MPTCP opts are dropped after join

Previous releases - always broken:

 - net: better track kernel sockets lifetime
 - ipv6: fix dst ref loop on input in seg6 and rpl lw tunnels
 - phy: qca807x: use right value from DTS for DAC_DSP_BIAS_CURRENT
 - eth: enetc: number of error handling fixes
 - dsa: rtl8366rb: reshuffle the code to fix config / build issue with
   LED support

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

Merge tag 'net-6.14-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski.

* tag 'net-6.14-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (53 commits)
  net: ti: icss-iep: Reject perout generation request
  idpf: fix checksums set in idpf_rx_rsc()
  selftests: drv-net: Check if combined-count exists
  net: ipv6: fix dst ref loop on input in rpl lwt
  net: ipv6: fix dst ref loop on input in seg6 lwt
  usbnet: gl620a: fix endpoint checking in genelink_bind()
  net/mlx5: IRQ, Fix null string in debug print
  net/mlx5: Restore missing trace event when enabling vport QoS
  net/mlx5: Fix vport QoS cleanup on error
  net: mvpp2: cls: Fixed Non IP flow, with vlan tag flow defination.
  af_unix: Fix memory leak in unix_dgram_sendmsg()
  net: Handle napi_schedule() calls from non-interrupt
  net: Clear old fragment checksum value in napi_reuse_skb
  gve: unlink old napi when stopping a queue using queue API
  net: Use rtnl_net_dev_lock() in register_netdevice_notifier_dev_net().
  tcp: Defer ts_recent changes until req is owned
  net: enetc: fix the off-by-one issue in enetc_map_tx_tso_buffs()
  net: enetc: remove the mm_lock from the ENETC v4 driver
  net: enetc: add missing enetc4_link_deinit()
  net: enetc: update UDP checksum when updating originTimestamp field
  ...
commit 1e15510b71
@@ -2878,7 +2878,7 @@ F: drivers/pinctrl/nxp/

 ARM/NXP S32G/S32R DWMAC ETHERNET DRIVER
 M:	Jan Petrous <jan.petrous@oss.nxp.com>
 L:	NXP S32 Linux Team <s32@nxp.com>
 R:	s32@nxp.com
 S:	Maintained
 F:	Documentation/devicetree/bindings/net/nxp,s32-dwmac.yaml
 F:	drivers/net/ethernet/stmicro/stmmac/dwmac-s32.c

@@ -21922,10 +21922,13 @@ F: sound/soc/uniphier/

 SOCKET TIMESTAMPING
 M:	Willem de Bruijn <willemdebruijn.kernel@gmail.com>
 R:	Jason Xing <kernelxing@tencent.com>
 S:	Maintained
 F:	Documentation/networking/timestamping.rst
 F:	include/linux/net_tstamp.h
 F:	include/uapi/linux/net_tstamp.h
 F:	tools/testing/selftests/bpf/*/net_timestamping*
 F:	tools/testing/selftests/net/*timestamp*
 F:	tools/testing/selftests/net/so_txtime.c

 SOEKRIS NET48XX LED SUPPORT
@@ -2102,7 +2102,8 @@ static int btusb_send_frame(struct hci_dev *hdev, struct sk_buff *skb)
 		return submit_or_queue_tx_urb(hdev, urb);

 	case HCI_SCODATA_PKT:
-		if (hci_conn_num(hdev, SCO_LINK) < 1)
+		if (!hci_dev_test_flag(hdev, HCI_USER_CHANNEL) &&
+		    hci_conn_num(hdev, SCO_LINK) < 1)
 			return -ENODEV;

 		urb = alloc_isoc_urb(hdev, skb);

@@ -2576,7 +2577,8 @@ static int btusb_send_frame_intel(struct hci_dev *hdev, struct sk_buff *skb)
 		return submit_or_queue_tx_urb(hdev, urb);

 	case HCI_SCODATA_PKT:
-		if (hci_conn_num(hdev, SCO_LINK) < 1)
+		if (!hci_dev_test_flag(hdev, HCI_USER_CHANNEL) &&
+		    hci_conn_num(hdev, SCO_LINK) < 1)
 			return -ENODEV;

 		urb = alloc_isoc_urb(hdev, skb);
@@ -43,4 +43,10 @@ config NET_DSA_REALTEK_RTL8366RB
 	help
 	  Select to enable support for Realtek RTL8366RB.

+config NET_DSA_REALTEK_RTL8366RB_LEDS
+	bool "Support RTL8366RB LED control"
+	depends on (LEDS_CLASS=y || LEDS_CLASS=NET_DSA_REALTEK_RTL8366RB)
+	depends on NET_DSA_REALTEK_RTL8366RB
+	default NET_DSA_REALTEK_RTL8366RB
+
 endif
@@ -12,4 +12,7 @@ endif

 obj-$(CONFIG_NET_DSA_REALTEK_RTL8366RB) += rtl8366.o
 rtl8366-objs				:= rtl8366-core.o rtl8366rb.o
+ifdef CONFIG_NET_DSA_REALTEK_RTL8366RB_LEDS
+rtl8366-objs				+= rtl8366rb-leds.o
+endif
 obj-$(CONFIG_NET_DSA_REALTEK_RTL8365MB) += rtl8365mb.o
@@ -0,0 +1,177 @@ (new file: rtl8366rb-leds.c)
// SPDX-License-Identifier: GPL-2.0

#include <linux/bitops.h>
#include <linux/regmap.h>
#include <net/dsa.h>
#include "rtl83xx.h"
#include "rtl8366rb.h"

static inline u32 rtl8366rb_led_group_port_mask(u8 led_group, u8 port)
{
	switch (led_group) {
	case 0:
		return FIELD_PREP(RTL8366RB_LED_0_X_CTRL_MASK, BIT(port));
	case 1:
		return FIELD_PREP(RTL8366RB_LED_X_1_CTRL_MASK, BIT(port));
	case 2:
		return FIELD_PREP(RTL8366RB_LED_2_X_CTRL_MASK, BIT(port));
	case 3:
		return FIELD_PREP(RTL8366RB_LED_X_3_CTRL_MASK, BIT(port));
	default:
		return 0;
	}
}

static int rb8366rb_get_port_led(struct rtl8366rb_led *led)
{
	struct realtek_priv *priv = led->priv;
	u8 led_group = led->led_group;
	u8 port_num = led->port_num;
	int ret;
	u32 val;

	ret = regmap_read(priv->map, RTL8366RB_LED_X_X_CTRL_REG(led_group),
			  &val);
	if (ret) {
		dev_err(priv->dev, "error reading LED on port %d group %d\n",
			led_group, port_num);
		return ret;
	}

	return !!(val & rtl8366rb_led_group_port_mask(led_group, port_num));
}

static int rb8366rb_set_port_led(struct rtl8366rb_led *led, bool enable)
{
	struct realtek_priv *priv = led->priv;
	u8 led_group = led->led_group;
	u8 port_num = led->port_num;
	int ret;

	ret = regmap_update_bits(priv->map,
				 RTL8366RB_LED_X_X_CTRL_REG(led_group),
				 rtl8366rb_led_group_port_mask(led_group,
							       port_num),
				 enable ? 0xffff : 0);
	if (ret) {
		dev_err(priv->dev, "error updating LED on port %d group %d\n",
			led_group, port_num);
		return ret;
	}

	/* Change the LED group to manual controlled LEDs if required */
	ret = rb8366rb_set_ledgroup_mode(priv, led_group,
					 RTL8366RB_LEDGROUP_FORCE);
	if (ret) {
		dev_err(priv->dev, "error updating LED GROUP group %d\n",
			led_group);
		return ret;
	}

	return 0;
}

static int
rtl8366rb_cled_brightness_set_blocking(struct led_classdev *ldev,
				       enum led_brightness brightness)
{
	struct rtl8366rb_led *led = container_of(ldev, struct rtl8366rb_led,
						 cdev);

	return rb8366rb_set_port_led(led, brightness == LED_ON);
}

static int rtl8366rb_setup_led(struct realtek_priv *priv, struct dsa_port *dp,
			       struct fwnode_handle *led_fwnode)
{
	struct rtl8366rb *rb = priv->chip_data;
	struct led_init_data init_data = { };
	enum led_default_state state;
	struct rtl8366rb_led *led;
	u32 led_group;
	int ret;

	ret = fwnode_property_read_u32(led_fwnode, "reg", &led_group);
	if (ret)
		return ret;

	if (led_group >= RTL8366RB_NUM_LEDGROUPS) {
		dev_warn(priv->dev, "Invalid LED reg %d defined for port %d",
			 led_group, dp->index);
		return -EINVAL;
	}

	led = &rb->leds[dp->index][led_group];
	led->port_num = dp->index;
	led->led_group = led_group;
	led->priv = priv;

	state = led_init_default_state_get(led_fwnode);
	switch (state) {
	case LEDS_DEFSTATE_ON:
		led->cdev.brightness = 1;
		rb8366rb_set_port_led(led, 1);
		break;
	case LEDS_DEFSTATE_KEEP:
		led->cdev.brightness =
			rb8366rb_get_port_led(led);
		break;
	case LEDS_DEFSTATE_OFF:
	default:
		led->cdev.brightness = 0;
		rb8366rb_set_port_led(led, 0);
	}

	led->cdev.max_brightness = 1;
	led->cdev.brightness_set_blocking =
		rtl8366rb_cled_brightness_set_blocking;
	init_data.fwnode = led_fwnode;
	init_data.devname_mandatory = true;

	init_data.devicename = kasprintf(GFP_KERNEL, "Realtek-%d:0%d:%d",
					 dp->ds->index, dp->index, led_group);
	if (!init_data.devicename)
		return -ENOMEM;

	ret = devm_led_classdev_register_ext(priv->dev, &led->cdev, &init_data);
	if (ret) {
		dev_warn(priv->dev, "Failed to init LED %d for port %d",
			 led_group, dp->index);
		return ret;
	}

	return 0;
}

int rtl8366rb_setup_leds(struct realtek_priv *priv)
{
	struct dsa_switch *ds = &priv->ds;
	struct device_node *leds_np;
	struct dsa_port *dp;
	int ret = 0;

	dsa_switch_for_each_port(dp, ds) {
		if (!dp->dn)
			continue;

		leds_np = of_get_child_by_name(dp->dn, "leds");
		if (!leds_np) {
			dev_dbg(priv->dev, "No leds defined for port %d",
				dp->index);
			continue;
		}

		for_each_child_of_node_scoped(leds_np, led_np) {
			ret = rtl8366rb_setup_led(priv, dp,
						  of_fwnode_handle(led_np));
			if (ret)
				break;
		}

		of_node_put(leds_np);
		if (ret)
			return ret;
	}
	return 0;
}
@@ -27,11 +27,7 @@
 #include "realtek-smi.h"
 #include "realtek-mdio.h"
 #include "rtl83xx.h"
-
-#define RTL8366RB_PORT_NUM_CPU		5
-#define RTL8366RB_NUM_PORTS		6
-#define RTL8366RB_PHY_NO_MAX		4
-#define RTL8366RB_PHY_ADDR_MAX		31
+#include "rtl8366rb.h"

 /* Switch Global Configuration register */
 #define RTL8366RB_SGCR				0x0000
@@ -176,39 +172,6 @@
  */
 #define RTL8366RB_VLAN_INGRESS_CTRL2_REG	0x037f

-/* LED control registers */
-/* The LED blink rate is global; it is used by all triggers in all groups. */
-#define RTL8366RB_LED_BLINKRATE_REG		0x0430
-#define RTL8366RB_LED_BLINKRATE_MASK		0x0007
-#define RTL8366RB_LED_BLINKRATE_28MS		0x0000
-#define RTL8366RB_LED_BLINKRATE_56MS		0x0001
-#define RTL8366RB_LED_BLINKRATE_84MS		0x0002
-#define RTL8366RB_LED_BLINKRATE_111MS		0x0003
-#define RTL8366RB_LED_BLINKRATE_222MS		0x0004
-#define RTL8366RB_LED_BLINKRATE_446MS		0x0005
-
-/* LED trigger event for each group */
-#define RTL8366RB_LED_CTRL_REG			0x0431
-#define RTL8366RB_LED_CTRL_OFFSET(led_group)	\
-	(4 * (led_group))
-#define RTL8366RB_LED_CTRL_MASK(led_group)	\
-	(0xf << RTL8366RB_LED_CTRL_OFFSET(led_group))
-
-/* The RTL8366RB_LED_X_X registers are used to manually set the LED state only
- * when the corresponding LED group in RTL8366RB_LED_CTRL_REG is
- * RTL8366RB_LEDGROUP_FORCE. Otherwise, it is ignored.
- */
-#define RTL8366RB_LED_0_1_CTRL_REG		0x0432
-#define RTL8366RB_LED_2_3_CTRL_REG		0x0433
-#define RTL8366RB_LED_X_X_CTRL_REG(led_group)	\
-	((led_group) <= 1 ?			\
-	 RTL8366RB_LED_0_1_CTRL_REG :		\
-	 RTL8366RB_LED_2_3_CTRL_REG)
-#define RTL8366RB_LED_0_X_CTRL_MASK		GENMASK(5, 0)
-#define RTL8366RB_LED_X_1_CTRL_MASK		GENMASK(11, 6)
-#define RTL8366RB_LED_2_X_CTRL_MASK		GENMASK(5, 0)
-#define RTL8366RB_LED_X_3_CTRL_MASK		GENMASK(11, 6)
-
 #define RTL8366RB_MIB_COUNT			33
 #define RTL8366RB_GLOBAL_MIB_COUNT		1
 #define RTL8366RB_MIB_COUNTER_PORT_OFFSET	0x0050
@@ -244,7 +207,6 @@
 #define RTL8366RB_PORT_STATUS_AN_MASK		0x0080

 #define RTL8366RB_NUM_VLANS		16
-#define RTL8366RB_NUM_LEDGROUPS		4
 #define RTL8366RB_NUM_VIDS		4096
 #define RTL8366RB_PRIORITYMAX		7
 #define RTL8366RB_NUM_FIDS		8
@@ -351,46 +313,6 @@
 #define RTL8366RB_GREEN_FEATURE_TX	BIT(0)
 #define RTL8366RB_GREEN_FEATURE_RX	BIT(2)

-enum rtl8366_ledgroup_mode {
-	RTL8366RB_LEDGROUP_OFF			= 0x0,
-	RTL8366RB_LEDGROUP_DUP_COL		= 0x1,
-	RTL8366RB_LEDGROUP_LINK_ACT		= 0x2,
-	RTL8366RB_LEDGROUP_SPD1000		= 0x3,
-	RTL8366RB_LEDGROUP_SPD100		= 0x4,
-	RTL8366RB_LEDGROUP_SPD10		= 0x5,
-	RTL8366RB_LEDGROUP_SPD1000_ACT		= 0x6,
-	RTL8366RB_LEDGROUP_SPD100_ACT		= 0x7,
-	RTL8366RB_LEDGROUP_SPD10_ACT		= 0x8,
-	RTL8366RB_LEDGROUP_SPD100_10_ACT	= 0x9,
-	RTL8366RB_LEDGROUP_FIBER		= 0xa,
-	RTL8366RB_LEDGROUP_AN_FAULT		= 0xb,
-	RTL8366RB_LEDGROUP_LINK_RX		= 0xc,
-	RTL8366RB_LEDGROUP_LINK_TX		= 0xd,
-	RTL8366RB_LEDGROUP_MASTER		= 0xe,
-	RTL8366RB_LEDGROUP_FORCE		= 0xf,
-
-	__RTL8366RB_LEDGROUP_MODE_MAX
-};
-
-struct rtl8366rb_led {
-	u8 port_num;
-	u8 led_group;
-	struct realtek_priv *priv;
-	struct led_classdev cdev;
-};
-
-/**
- * struct rtl8366rb - RTL8366RB-specific data
- * @max_mtu: per-port max MTU setting
- * @pvid_enabled: if PVID is set for respective port
- * @leds: per-port and per-ledgroup led info
- */
-struct rtl8366rb {
-	unsigned int max_mtu[RTL8366RB_NUM_PORTS];
-	bool pvid_enabled[RTL8366RB_NUM_PORTS];
-	struct rtl8366rb_led leds[RTL8366RB_NUM_PORTS][RTL8366RB_NUM_LEDGROUPS];
-};
-
 static struct rtl8366_mib_counter rtl8366rb_mib_counters[] = {
 	{ 0, 0, 4, "IfInOctets" },
 	{ 0, 4, 4, "EtherStatsOctets" },
@@ -831,9 +753,10 @@ static int rtl8366rb_jam_table(const struct rtl8366rb_jam_tbl_entry *jam_table,
 	return 0;
 }

-static int rb8366rb_set_ledgroup_mode(struct realtek_priv *priv,
-				      u8 led_group,
-				      enum rtl8366_ledgroup_mode mode)
+/* This code is used also with LEDs disabled */
+int rb8366rb_set_ledgroup_mode(struct realtek_priv *priv,
+			       u8 led_group,
+			       enum rtl8366_ledgroup_mode mode)
 {
 	int ret;
 	u32 val;
@@ -850,144 +773,7 @@ static int rb8366rb_set_ledgroup_mode(struct realtek_priv *priv,
 	return 0;
 }

-static inline u32 rtl8366rb_led_group_port_mask(u8 led_group, u8 port)
-{
-	switch (led_group) {
-	case 0:
-		return FIELD_PREP(RTL8366RB_LED_0_X_CTRL_MASK, BIT(port));
-	case 1:
-		return FIELD_PREP(RTL8366RB_LED_X_1_CTRL_MASK, BIT(port));
-	case 2:
-		return FIELD_PREP(RTL8366RB_LED_2_X_CTRL_MASK, BIT(port));
-	case 3:
-		return FIELD_PREP(RTL8366RB_LED_X_3_CTRL_MASK, BIT(port));
-	default:
-		return 0;
-	}
-}
-
-static int rb8366rb_get_port_led(struct rtl8366rb_led *led)
-{
-	struct realtek_priv *priv = led->priv;
-	u8 led_group = led->led_group;
-	u8 port_num = led->port_num;
-	int ret;
-	u32 val;
-
-	ret = regmap_read(priv->map, RTL8366RB_LED_X_X_CTRL_REG(led_group),
-			  &val);
-	if (ret) {
-		dev_err(priv->dev, "error reading LED on port %d group %d\n",
-			led_group, port_num);
-		return ret;
-	}
-
-	return !!(val & rtl8366rb_led_group_port_mask(led_group, port_num));
-}
-
-static int rb8366rb_set_port_led(struct rtl8366rb_led *led, bool enable)
-{
-	struct realtek_priv *priv = led->priv;
-	u8 led_group = led->led_group;
-	u8 port_num = led->port_num;
-	int ret;
-
-	ret = regmap_update_bits(priv->map,
-				 RTL8366RB_LED_X_X_CTRL_REG(led_group),
-				 rtl8366rb_led_group_port_mask(led_group,
-							       port_num),
-				 enable ? 0xffff : 0);
-	if (ret) {
-		dev_err(priv->dev, "error updating LED on port %d group %d\n",
-			led_group, port_num);
-		return ret;
-	}
-
-	/* Change the LED group to manual controlled LEDs if required */
-	ret = rb8366rb_set_ledgroup_mode(priv, led_group,
-					 RTL8366RB_LEDGROUP_FORCE);
-	if (ret) {
-		dev_err(priv->dev, "error updating LED GROUP group %d\n",
-			led_group);
-		return ret;
-	}
-
-	return 0;
-}
-
-static int
-rtl8366rb_cled_brightness_set_blocking(struct led_classdev *ldev,
-				       enum led_brightness brightness)
-{
-	struct rtl8366rb_led *led = container_of(ldev, struct rtl8366rb_led,
-						 cdev);
-
-	return rb8366rb_set_port_led(led, brightness == LED_ON);
-}
-
-static int rtl8366rb_setup_led(struct realtek_priv *priv, struct dsa_port *dp,
-			       struct fwnode_handle *led_fwnode)
-{
-	struct rtl8366rb *rb = priv->chip_data;
-	struct led_init_data init_data = { };
-	enum led_default_state state;
-	struct rtl8366rb_led *led;
-	u32 led_group;
-	int ret;
-
-	ret = fwnode_property_read_u32(led_fwnode, "reg", &led_group);
-	if (ret)
-		return ret;
-
-	if (led_group >= RTL8366RB_NUM_LEDGROUPS) {
-		dev_warn(priv->dev, "Invalid LED reg %d defined for port %d",
-			 led_group, dp->index);
-		return -EINVAL;
-	}
-
-	led = &rb->leds[dp->index][led_group];
-	led->port_num = dp->index;
-	led->led_group = led_group;
-	led->priv = priv;
-
-	state = led_init_default_state_get(led_fwnode);
-	switch (state) {
-	case LEDS_DEFSTATE_ON:
-		led->cdev.brightness = 1;
-		rb8366rb_set_port_led(led, 1);
-		break;
-	case LEDS_DEFSTATE_KEEP:
-		led->cdev.brightness =
-			rb8366rb_get_port_led(led);
-		break;
-	case LEDS_DEFSTATE_OFF:
-	default:
-		led->cdev.brightness = 0;
-		rb8366rb_set_port_led(led, 0);
-	}
-
-	led->cdev.max_brightness = 1;
-	led->cdev.brightness_set_blocking =
-		rtl8366rb_cled_brightness_set_blocking;
-	init_data.fwnode = led_fwnode;
-	init_data.devname_mandatory = true;
-
-	init_data.devicename = kasprintf(GFP_KERNEL, "Realtek-%d:0%d:%d",
-					 dp->ds->index, dp->index, led_group);
-	if (!init_data.devicename)
-		return -ENOMEM;
-
-	ret = devm_led_classdev_register_ext(priv->dev, &led->cdev, &init_data);
-	if (ret) {
-		dev_warn(priv->dev, "Failed to init LED %d for port %d",
-			 led_group, dp->index);
-		return ret;
-	}
-
-	return 0;
-}
-
 /* This code is used also with LEDs disabled */
 static int rtl8366rb_setup_all_leds_off(struct realtek_priv *priv)
 {
 	int ret = 0;
@@ -1008,38 +794,6 @@ static int rtl8366rb_setup_all_leds_off(struct realtek_priv *priv)
 	return ret;
 }

-static int rtl8366rb_setup_leds(struct realtek_priv *priv)
-{
-	struct dsa_switch *ds = &priv->ds;
-	struct device_node *leds_np;
-	struct dsa_port *dp;
-	int ret = 0;
-
-	dsa_switch_for_each_port(dp, ds) {
-		if (!dp->dn)
-			continue;
-
-		leds_np = of_get_child_by_name(dp->dn, "leds");
-		if (!leds_np) {
-			dev_dbg(priv->dev, "No leds defined for port %d",
-				dp->index);
-			continue;
-		}
-
-		for_each_child_of_node_scoped(leds_np, led_np) {
-			ret = rtl8366rb_setup_led(priv, dp,
-						  of_fwnode_handle(led_np));
-			if (ret)
-				break;
-		}
-
-		of_node_put(leds_np);
-		if (ret)
-			return ret;
-	}
-	return 0;
-}
-
 static int rtl8366rb_setup(struct dsa_switch *ds)
 {
 	struct realtek_priv *priv = ds->priv;
@@ -0,0 +1,107 @@ (new file: rtl8366rb.h)
/* SPDX-License-Identifier: GPL-2.0+ */

#ifndef _RTL8366RB_H
#define _RTL8366RB_H

#include "realtek.h"

#define RTL8366RB_PORT_NUM_CPU		5
#define RTL8366RB_NUM_PORTS		6
#define RTL8366RB_PHY_NO_MAX		4
#define RTL8366RB_NUM_LEDGROUPS		4
#define RTL8366RB_PHY_ADDR_MAX		31

/* LED control registers */
/* The LED blink rate is global; it is used by all triggers in all groups. */
#define RTL8366RB_LED_BLINKRATE_REG		0x0430
#define RTL8366RB_LED_BLINKRATE_MASK		0x0007
#define RTL8366RB_LED_BLINKRATE_28MS		0x0000
#define RTL8366RB_LED_BLINKRATE_56MS		0x0001
#define RTL8366RB_LED_BLINKRATE_84MS		0x0002
#define RTL8366RB_LED_BLINKRATE_111MS		0x0003
#define RTL8366RB_LED_BLINKRATE_222MS		0x0004
#define RTL8366RB_LED_BLINKRATE_446MS		0x0005

/* LED trigger event for each group */
#define RTL8366RB_LED_CTRL_REG			0x0431
#define RTL8366RB_LED_CTRL_OFFSET(led_group)	\
	(4 * (led_group))
#define RTL8366RB_LED_CTRL_MASK(led_group)	\
	(0xf << RTL8366RB_LED_CTRL_OFFSET(led_group))

/* The RTL8366RB_LED_X_X registers are used to manually set the LED state only
 * when the corresponding LED group in RTL8366RB_LED_CTRL_REG is
 * RTL8366RB_LEDGROUP_FORCE. Otherwise, it is ignored.
 */
#define RTL8366RB_LED_0_1_CTRL_REG		0x0432
#define RTL8366RB_LED_2_3_CTRL_REG		0x0433
#define RTL8366RB_LED_X_X_CTRL_REG(led_group)	\
	((led_group) <= 1 ?			\
	 RTL8366RB_LED_0_1_CTRL_REG :		\
	 RTL8366RB_LED_2_3_CTRL_REG)
#define RTL8366RB_LED_0_X_CTRL_MASK		GENMASK(5, 0)
#define RTL8366RB_LED_X_1_CTRL_MASK		GENMASK(11, 6)
#define RTL8366RB_LED_2_X_CTRL_MASK		GENMASK(5, 0)
#define RTL8366RB_LED_X_3_CTRL_MASK		GENMASK(11, 6)

enum rtl8366_ledgroup_mode {
	RTL8366RB_LEDGROUP_OFF			= 0x0,
	RTL8366RB_LEDGROUP_DUP_COL		= 0x1,
	RTL8366RB_LEDGROUP_LINK_ACT		= 0x2,
	RTL8366RB_LEDGROUP_SPD1000		= 0x3,
	RTL8366RB_LEDGROUP_SPD100		= 0x4,
	RTL8366RB_LEDGROUP_SPD10		= 0x5,
	RTL8366RB_LEDGROUP_SPD1000_ACT		= 0x6,
	RTL8366RB_LEDGROUP_SPD100_ACT		= 0x7,
	RTL8366RB_LEDGROUP_SPD10_ACT		= 0x8,
	RTL8366RB_LEDGROUP_SPD100_10_ACT	= 0x9,
	RTL8366RB_LEDGROUP_FIBER		= 0xa,
	RTL8366RB_LEDGROUP_AN_FAULT		= 0xb,
	RTL8366RB_LEDGROUP_LINK_RX		= 0xc,
	RTL8366RB_LEDGROUP_LINK_TX		= 0xd,
	RTL8366RB_LEDGROUP_MASTER		= 0xe,
	RTL8366RB_LEDGROUP_FORCE		= 0xf,

	__RTL8366RB_LEDGROUP_MODE_MAX
};

#if IS_ENABLED(CONFIG_NET_DSA_REALTEK_RTL8366RB_LEDS)

struct rtl8366rb_led {
	u8 port_num;
	u8 led_group;
	struct realtek_priv *priv;
	struct led_classdev cdev;
};

int rtl8366rb_setup_leds(struct realtek_priv *priv);

#else

static inline int rtl8366rb_setup_leds(struct realtek_priv *priv)
{
	return 0;
}

#endif /* IS_ENABLED(CONFIG_NET_DSA_REALTEK_RTL8366RB_LEDS) */

/**
 * struct rtl8366rb - RTL8366RB-specific data
 * @max_mtu: per-port max MTU setting
 * @pvid_enabled: if PVID is set for respective port
 * @leds: per-port and per-ledgroup led info
 */
struct rtl8366rb {
	unsigned int max_mtu[RTL8366RB_NUM_PORTS];
	bool pvid_enabled[RTL8366RB_NUM_PORTS];
#if IS_ENABLED(CONFIG_NET_DSA_REALTEK_RTL8366RB_LEDS)
	struct rtl8366rb_led leds[RTL8366RB_NUM_PORTS][RTL8366RB_NUM_LEDGROUPS];
#endif
};

/* This code is used also with LEDs disabled */
int rb8366rb_set_ledgroup_mode(struct realtek_priv *priv,
			       u8 led_group,
			       enum rtl8366_ledgroup_mode mode);

#endif /* _RTL8366RB_H */
@@ -1279,6 +1279,8 @@ struct macb {
 	struct clk		*rx_clk;
 	struct clk		*tsu_clk;
 	struct net_device	*dev;
+	/* Protects hw_stats and ethtool_stats */
+	spinlock_t		stats_lock;
 	union {
 		struct macb_stats	macb;
 		struct gem_stats	gem;
@@ -1978,10 +1978,12 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)

 		if (status & MACB_BIT(ISR_ROVR)) {
 			/* We missed at least one packet */
+			spin_lock(&bp->stats_lock);
 			if (macb_is_gem(bp))
 				bp->hw_stats.gem.rx_overruns++;
 			else
 				bp->hw_stats.macb.rx_overruns++;
+			spin_unlock(&bp->stats_lock);

 			if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
 				queue_writel(queue, ISR, MACB_BIT(ISR_ROVR));
@@ -3102,6 +3104,7 @@ static struct net_device_stats *gem_get_stats(struct macb *bp)
 	if (!netif_running(bp->dev))
 		return nstat;

+	spin_lock_irq(&bp->stats_lock);
 	gem_update_stats(bp);

 	nstat->rx_errors = (hwstat->rx_frame_check_sequence_errors +
@@ -3131,6 +3134,7 @@ static struct net_device_stats *gem_get_stats(struct macb *bp)
 	nstat->tx_aborted_errors = hwstat->tx_excessive_collisions;
 	nstat->tx_carrier_errors = hwstat->tx_carrier_sense_errors;
 	nstat->tx_fifo_errors = hwstat->tx_underrun;
+	spin_unlock_irq(&bp->stats_lock);

 	return nstat;
 }
@@ -3138,12 +3142,13 @@ static struct net_device_stats *gem_get_stats(struct macb *bp)
 static void gem_get_ethtool_stats(struct net_device *dev,
 				  struct ethtool_stats *stats, u64 *data)
 {
-	struct macb *bp;
+	struct macb *bp = netdev_priv(dev);

-	bp = netdev_priv(dev);
+	spin_lock_irq(&bp->stats_lock);
 	gem_update_stats(bp);
 	memcpy(data, &bp->ethtool_stats, sizeof(u64)
 	       * (GEM_STATS_LEN + QUEUE_STATS_LEN * MACB_MAX_QUEUES));
+	spin_unlock_irq(&bp->stats_lock);
 }

 static int gem_get_sset_count(struct net_device *dev, int sset)
@@ -3193,6 +3198,7 @@ static struct net_device_stats *macb_get_stats(struct net_device *dev)
 		return gem_get_stats(bp);

 	/* read stats from hardware */
+	spin_lock_irq(&bp->stats_lock);
 	macb_update_stats(bp);

 	/* Convert HW stats into netdevice stats */
@@ -3226,6 +3232,7 @@ static struct net_device_stats *macb_get_stats(struct net_device *dev)
 	nstat->tx_carrier_errors = hwstat->tx_carrier_errors;
 	nstat->tx_fifo_errors = hwstat->tx_underruns;
 	/* Don't know about heartbeat or window errors... */
+	spin_unlock_irq(&bp->stats_lock);

 	return nstat;
 }
@@ -5097,6 +5104,7 @@ static int macb_probe(struct platform_device *pdev)
 		}
 	}
 	spin_lock_init(&bp->lock);
+	spin_lock_init(&bp->stats_lock);

 	/* setup capabilities */
 	macb_configure_caps(bp, macb_config);
@@ -167,6 +167,24 @@ static bool enetc_skb_is_tcp(struct sk_buff *skb)
 	return skb->csum_offset == offsetof(struct tcphdr, check);
 }

+/**
+ * enetc_unwind_tx_frame() - Unwind the DMA mappings of a multi-buffer Tx frame
+ * @tx_ring: Pointer to the Tx ring on which the buffer descriptors are located
+ * @count: Number of Tx buffer descriptors which need to be unmapped
+ * @i: Index of the last successfully mapped Tx buffer descriptor
+ */
+static void enetc_unwind_tx_frame(struct enetc_bdr *tx_ring, int count, int i)
+{
+	while (count--) {
+		struct enetc_tx_swbd *tx_swbd = &tx_ring->tx_swbd[i];
+
+		enetc_free_tx_frame(tx_ring, tx_swbd);
+		if (i == 0)
+			i = tx_ring->bd_count;
+		i--;
+	}
+}
+
 static int enetc_map_tx_buffs(struct enetc_bdr *tx_ring, struct sk_buff *skb)
 {
 	bool do_vlan, do_onestep_tstamp = false, do_twostep_tstamp = false;
@@ -279,9 +297,11 @@ static int enetc_map_tx_buffs(struct enetc_bdr *tx_ring, struct sk_buff *skb)
 	}

 	if (do_onestep_tstamp) {
-		u32 lo, hi, val;
-		u64 sec, nsec;
+		__be32 new_sec_l, new_nsec;
+		u32 lo, hi, nsec, val;
+		__be16 new_sec_h;
+		u8 *data;
+		u64 sec;

 		lo = enetc_rd_hot(hw, ENETC_SICTR0);
 		hi = enetc_rd_hot(hw, ENETC_SICTR1);
@@ -295,13 +315,38 @@ static int enetc_map_tx_buffs(struct enetc_bdr *tx_ring, struct sk_buff *skb)
 		/* Update originTimestamp field of Sync packet
 		 * - 48 bits seconds field
 		 * - 32 bits nanseconds field
+		 *
+		 * In addition, the UDP checksum needs to be updated
+		 * by software after updating originTimestamp field,
+		 * otherwise the hardware will calculate the wrong
+		 * checksum when updating the correction field and
+		 * update it to the packet.
 		 */
 		data = skb_mac_header(skb);
-		*(__be16 *)(data + offset2) =
-			htons((sec >> 32) & 0xffff);
-		*(__be32 *)(data + offset2 + 2) =
-			htonl(sec & 0xffffffff);
-		*(__be32 *)(data + offset2 + 6) = htonl(nsec);
+		new_sec_h = htons((sec >> 32) & 0xffff);
+		new_sec_l = htonl(sec & 0xffffffff);
+		new_nsec = htonl(nsec);
+		if (udp) {
+			struct udphdr *uh = udp_hdr(skb);
+			__be32 old_sec_l, old_nsec;
+			__be16 old_sec_h;
+
+			old_sec_h = *(__be16 *)(data + offset2);
+			inet_proto_csum_replace2(&uh->check, skb, old_sec_h,
+						 new_sec_h, false);
+
+			old_sec_l = *(__be32 *)(data + offset2 + 2);
+			inet_proto_csum_replace4(&uh->check, skb, old_sec_l,
+						 new_sec_l, false);
+
+			old_nsec = *(__be32 *)(data + offset2 + 6);
+			inet_proto_csum_replace4(&uh->check, skb, old_nsec,
+						 new_nsec, false);
+		}
+
+		*(__be16 *)(data + offset2) = new_sec_h;
+		*(__be32 *)(data + offset2 + 2) = new_sec_l;
+		*(__be32 *)(data + offset2 + 6) = new_nsec;

 		/* Configure single-step register */
 		val = ENETC_PM0_SINGLE_STEP_EN;
@@ -372,25 +417,20 @@ static int enetc_map_tx_buffs(struct enetc_bdr *tx_ring, struct sk_buff *skb)
 dma_err:
 	dev_err(tx_ring->dev, "DMA map error");
 
-	do {
-		tx_swbd = &tx_ring->tx_swbd[i];
-		enetc_free_tx_frame(tx_ring, tx_swbd);
-		if (i == 0)
-			i = tx_ring->bd_count;
-		i--;
-	} while (count--);
+	enetc_unwind_tx_frame(tx_ring, count, i);
 
 	return 0;
 }
 
-static void enetc_map_tx_tso_hdr(struct enetc_bdr *tx_ring, struct sk_buff *skb,
-				 struct enetc_tx_swbd *tx_swbd,
-				 union enetc_tx_bd *txbd, int *i, int hdr_len,
-				 int data_len)
+static int enetc_map_tx_tso_hdr(struct enetc_bdr *tx_ring, struct sk_buff *skb,
+				struct enetc_tx_swbd *tx_swbd,
+				union enetc_tx_bd *txbd, int *i, int hdr_len,
+				int data_len)
 {
 	union enetc_tx_bd txbd_tmp;
 	u8 flags = 0, e_flags = 0;
 	dma_addr_t addr;
+	int count = 1;
 
 	enetc_clear_tx_bd(&txbd_tmp);
 	addr = tx_ring->tso_headers_dma + *i * TSO_HEADER_SIZE;
@@ -433,7 +473,10 @@ static void enetc_map_tx_tso_hdr(struct enetc_bdr *tx_ring, struct sk_buff *skb,
 		/* Write the BD */
 		txbd_tmp.ext.e_flags = e_flags;
 		*txbd = txbd_tmp;
+		count++;
 	}
+
+	return count;
 }
 
 static int enetc_map_tx_tso_data(struct enetc_bdr *tx_ring, struct sk_buff *skb,
@@ -790,9 +833,9 @@ static int enetc_map_tx_tso_buffs(struct enetc_bdr *tx_ring, struct sk_buff *skb
 
 		/* compute the csum over the L4 header */
 		csum = enetc_tso_hdr_csum(&tso, skb, hdr, hdr_len, &pos);
-		enetc_map_tx_tso_hdr(tx_ring, skb, tx_swbd, txbd, &i, hdr_len, data_len);
+		count += enetc_map_tx_tso_hdr(tx_ring, skb, tx_swbd, txbd,
+					      &i, hdr_len, data_len);
 		bd_data_num = 0;
-		count++;
 
 		while (data_len > 0) {
 			int size;
@@ -816,8 +859,13 @@ static int enetc_map_tx_tso_buffs(struct enetc_bdr *tx_ring, struct sk_buff *skb
 			err = enetc_map_tx_tso_data(tx_ring, skb, tx_swbd, txbd,
 						    tso.data, size,
 						    size == data_len);
-			if (err)
+			if (err) {
+				if (i == 0)
+					i = tx_ring->bd_count;
+				i--;
+
 				goto err_map_data;
+			}
 
 			data_len -= size;
 			count++;
@@ -846,13 +894,7 @@ err_map_data:
 	dev_err(tx_ring->dev, "DMA map error");
 
 err_chained_bd:
-	do {
-		tx_swbd = &tx_ring->tx_swbd[i];
-		enetc_free_tx_frame(tx_ring, tx_swbd);
-		if (i == 0)
-			i = tx_ring->bd_count;
-		i--;
-	} while (count--);
+	enetc_unwind_tx_frame(tx_ring, count, i);
 
 	return 0;
 }
@@ -1901,7 +1943,7 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
 				enetc_xdp_drop(rx_ring, orig_i, i);
 				tx_ring->stats.xdp_tx_drops++;
 			} else {
-				tx_ring->stats.xdp_tx += xdp_tx_bd_cnt;
+				tx_ring->stats.xdp_tx++;
 				rx_ring->xdp.xdp_tx_in_flight += xdp_tx_bd_cnt;
 				xdp_tx_frm_cnt++;
 				/* The XDP_TX enqueue was successful, so we
@@ -3228,6 +3270,9 @@ static int enetc_hwtstamp_set(struct net_device *ndev, struct ifreq *ifr)
 		new_offloads |= ENETC_F_TX_TSTAMP;
 		break;
 	case HWTSTAMP_TX_ONESTEP_SYNC:
+		if (!enetc_si_is_pf(priv->si))
+			return -EOPNOTSUPP;
+
 		new_offloads &= ~ENETC_F_TX_TSTAMP_MASK;
 		new_offloads |= ENETC_F_TX_ONESTEP_SYNC_TSTAMP;
 		break;
@@ -672,7 +672,6 @@ err_link_init:
 err_alloc_msix:
 err_config_si:
 err_clk_get:
 	mutex_destroy(&priv->mm_lock);
-	free_netdev(ndev);
 
 	return err;
@@ -684,6 +683,7 @@ static void enetc4_pf_netdev_destroy(struct enetc_si *si)
 	struct net_device *ndev = si->ndev;
 
 	unregister_netdev(ndev);
+	enetc4_link_deinit(priv);
 	enetc_free_msix(priv);
 	free_netdev(ndev);
 }
@@ -832,6 +832,7 @@ static int enetc_set_coalesce(struct net_device *ndev,
 static int enetc_get_ts_info(struct net_device *ndev,
 			     struct kernel_ethtool_ts_info *info)
 {
+	struct enetc_ndev_priv *priv = netdev_priv(ndev);
 	int *phc_idx;
 
 	phc_idx = symbol_get(enetc_phc_index);
@@ -852,8 +853,10 @@ static int enetc_get_ts_info(struct net_device *ndev,
 			 SOF_TIMESTAMPING_TX_SOFTWARE;
 
 	info->tx_types = (1 << HWTSTAMP_TX_OFF) |
-			 (1 << HWTSTAMP_TX_ON) |
-			 (1 << HWTSTAMP_TX_ONESTEP_SYNC);
+			 (1 << HWTSTAMP_TX_ON);
+
+	if (enetc_si_is_pf(priv->si))
+		info->tx_types |= (1 << HWTSTAMP_TX_ONESTEP_SYNC);
 
 	info->rx_filters = (1 << HWTSTAMP_FILTER_NONE) |
 			   (1 << HWTSTAMP_FILTER_ALL);
@@ -109,10 +109,12 @@ static void gve_rx_reset_ring_dqo(struct gve_priv *priv, int idx)
 void gve_rx_stop_ring_dqo(struct gve_priv *priv, int idx)
 {
 	int ntfy_idx = gve_rx_idx_to_ntfy(priv, idx);
+	struct gve_rx_ring *rx = &priv->rx[idx];
 
 	if (!gve_rx_was_added_to_block(priv, idx))
 		return;
 
+	page_pool_disable_direct_recycling(rx->dqo.page_pool);
 	gve_remove_napi(priv, ntfy_idx);
 	gve_rx_remove_from_block(priv, idx);
 	gve_rx_reset_ring_dqo(priv, idx);
@@ -1983,7 +1983,7 @@ err:
 static void iavf_finish_config(struct work_struct *work)
 {
 	struct iavf_adapter *adapter;
-	bool netdev_released = false;
+	bool locks_released = false;
 	int pairs, err;
 
 	adapter = container_of(work, struct iavf_adapter, finish_config);
@@ -2012,19 +2012,22 @@ static void iavf_finish_config(struct work_struct *work)
 	netif_set_real_num_tx_queues(adapter->netdev, pairs);
 
 	if (adapter->netdev->reg_state != NETREG_REGISTERED) {
+		mutex_unlock(&adapter->crit_lock);
 		netdev_unlock(adapter->netdev);
-		netdev_released = true;
+		locks_released = true;
 		err = register_netdevice(adapter->netdev);
 		if (err) {
 			dev_err(&adapter->pdev->dev, "Unable to register netdev (%d)\n",
 				err);
 
 			/* go back and try again.*/
+			mutex_lock(&adapter->crit_lock);
 			iavf_free_rss(adapter);
 			iavf_free_misc_irq(adapter);
 			iavf_reset_interrupt_capability(adapter);
 			iavf_change_state(adapter,
 					  __IAVF_INIT_CONFIG_ADAPTER);
+			mutex_unlock(&adapter->crit_lock);
 			goto out;
 		}
 	}
@@ -2040,9 +2043,10 @@ static void iavf_finish_config(struct work_struct *work)
 	}
 
 out:
-	mutex_unlock(&adapter->crit_lock);
-	if (!netdev_released)
+	if (!locks_released) {
+		mutex_unlock(&adapter->crit_lock);
 		netdev_unlock(adapter->netdev);
+	}
 	rtnl_unlock();
 }
@@ -38,8 +38,7 @@ static int ice_eswitch_setup_env(struct ice_pf *pf)
 	if (ice_vsi_add_vlan_zero(uplink_vsi))
 		goto err_vlan_zero;
 
-	if (ice_cfg_dflt_vsi(uplink_vsi->port_info, uplink_vsi->idx, true,
-			     ICE_FLTR_RX))
+	if (ice_set_dflt_vsi(uplink_vsi))
 		goto err_def_rx;
 
 	if (ice_cfg_dflt_vsi(uplink_vsi->port_info, uplink_vsi->idx, true,
@@ -36,6 +36,7 @@ static void ice_free_vf_entries(struct ice_pf *pf)
 
 	hash_for_each_safe(vfs->table, bkt, tmp, vf, entry) {
 		hash_del_rcu(&vf->entry);
+		ice_deinitialize_vf_entry(vf);
 		ice_put_vf(vf);
 	}
 }
@@ -193,10 +194,6 @@ void ice_free_vfs(struct ice_pf *pf)
 			wr32(hw, GLGEN_VFLRSTAT(reg_idx), BIT(bit_idx));
 		}
 
-		/* clear malicious info since the VF is getting released */
-		if (!ice_is_feature_supported(pf, ICE_F_MBX_LIMIT))
-			list_del(&vf->mbx_info.list_entry);
-
 		mutex_unlock(&vf->cfg_lock);
 	}
@@ -1036,6 +1036,14 @@ void ice_initialize_vf_entry(struct ice_vf *vf)
 	mutex_init(&vf->cfg_lock);
 }
 
+void ice_deinitialize_vf_entry(struct ice_vf *vf)
+{
+	struct ice_pf *pf = vf->pf;
+
+	if (!ice_is_feature_supported(pf, ICE_F_MBX_LIMIT))
+		list_del(&vf->mbx_info.list_entry);
+}
+
 /**
  * ice_dis_vf_qs - Disable the VF queues
  * @vf: pointer to the VF structure
@@ -24,6 +24,7 @@
 #endif
 
 void ice_initialize_vf_entry(struct ice_vf *vf);
+void ice_deinitialize_vf_entry(struct ice_vf *vf);
 void ice_dis_vf_qs(struct ice_vf *vf);
 int ice_check_vf_init(struct ice_vf *vf);
 enum virtchnl_status_code ice_err_to_virt_err(int err);
@@ -3013,7 +3013,6 @@ static int idpf_rx_rsc(struct idpf_rx_queue *rxq, struct sk_buff *skb,
 	skb_shinfo(skb)->gso_size = rsc_seg_len;
 
 	skb_reset_network_header(skb);
-	len = skb->len - skb_transport_offset(skb);
 
 	if (ipv4) {
 		struct iphdr *ipv4h = ip_hdr(skb);
@@ -3022,6 +3021,7 @@ static int idpf_rx_rsc(struct idpf_rx_queue *rxq, struct sk_buff *skb,
 
 		/* Reset and set transport header offset in skb */
 		skb_set_transport_header(skb, sizeof(struct iphdr));
+		len = skb->len - skb_transport_offset(skb);
 
 		/* Compute the TCP pseudo header checksum*/
 		tcp_hdr(skb)->check =
@@ -3031,6 +3031,7 @@ static int idpf_rx_rsc(struct idpf_rx_queue *rxq, struct sk_buff *skb,
 
 		skb_shinfo(skb)->gso_type = SKB_GSO_TCPV6;
 		skb_set_transport_header(skb, sizeof(struct ipv6hdr));
+		len = skb->len - skb_transport_offset(skb);
 		tcp_hdr(skb)->check =
 			~tcp_v6_check(len, &ipv6h->saddr, &ipv6h->daddr, 0);
 	}
@@ -1122,7 +1122,7 @@ static bool ixgbe_is_media_cage_present(struct ixgbe_hw *hw)
 	 * returns error (ENOENT), then no cage present. If no cage present then
 	 * connection type is backplane or BASE-T.
 	 */
-	return ixgbe_aci_get_netlist_node(hw, cmd, NULL, NULL);
+	return !ixgbe_aci_get_netlist_node(hw, cmd, NULL, NULL);
 }
 
 /**
@@ -324,7 +324,7 @@ static const struct mvpp2_cls_flow cls_flows[MVPP2_N_PRS_FLOWS] = {
 		       MVPP2_PRS_RI_VLAN_MASK),
 	/* Non IP flow, with vlan tag */
 	MVPP2_DEF_FLOW(MVPP22_FLOW_ETHERNET, MVPP2_FL_NON_IP_TAG,
-		       MVPP22_CLS_HEK_OPT_VLAN,
+		       MVPP22_CLS_HEK_TAGGED,
 		       0, 0),
 };
@@ -564,6 +564,9 @@ static int esw_qos_vport_enable(struct mlx5_vport *vport, struct mlx5_esw_sched_
 		return err;
 
 	esw_qos_normalize_min_rate(parent->esw, parent, extack);
+	trace_mlx5_esw_vport_qos_create(vport->dev, vport,
+					vport->qos.sched_node->max_rate,
+					vport->qos.sched_node->bw_share);
 
 	return 0;
 }
@@ -591,8 +594,11 @@ static int mlx5_esw_qos_vport_enable(struct mlx5_vport *vport, enum sched_node_t
 	sched_node->vport = vport;
 	vport->qos.sched_node = sched_node;
 	err = esw_qos_vport_enable(vport, parent, extack);
-	if (err)
+	if (err) {
+		__esw_qos_free_node(sched_node);
 		esw_qos_put(esw);
+		vport->qos.sched_node = NULL;
+	}
 
 	return err;
 }
@@ -572,7 +572,7 @@ irq_pool_alloc(struct mlx5_core_dev *dev, int start, int size, char *name,
 	pool->min_threshold = min_threshold * MLX5_EQ_REFS_PER_IRQ;
 	pool->max_threshold = max_threshold * MLX5_EQ_REFS_PER_IRQ;
 	mlx5_core_dbg(dev, "pool->name = %s, pool->size = %d, pool->start = %d",
-		      name, size, start);
+		      name ? name : "mlx5_pcif_pool", size, start);
 	return pool;
 }
@@ -516,6 +516,19 @@ static int loongson_dwmac_acpi_config(struct pci_dev *pdev,
 	return 0;
 }
 
+/* Loongson's DWMAC device may take nearly two seconds to complete DMA reset */
+static int loongson_dwmac_fix_reset(void *priv, void __iomem *ioaddr)
+{
+	u32 value = readl(ioaddr + DMA_BUS_MODE);
+
+	value |= DMA_BUS_MODE_SFT_RESET;
+	writel(value, ioaddr + DMA_BUS_MODE);
+
+	return readl_poll_timeout(ioaddr + DMA_BUS_MODE, value,
+				  !(value & DMA_BUS_MODE_SFT_RESET),
+				  10000, 2000000);
+}
+
 static int loongson_dwmac_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 {
 	struct plat_stmmacenet_data *plat;
@@ -566,6 +579,7 @@ static int loongson_dwmac_probe(struct pci_dev *pdev, const struct pci_device_id
 
 	plat->bsp_priv = ld;
 	plat->setup = loongson_dwmac_setup;
+	plat->fix_soc_reset = loongson_dwmac_fix_reset;
 	ld->dev = &pdev->dev;
 	ld->loongson_id = readl(res.addr + GMAC_VERSION) & 0xff;
@@ -99,6 +99,7 @@ config TI_K3_AM65_CPSW_NUSS
 	select NET_DEVLINK
 	select TI_DAVINCI_MDIO
 	select PHYLINK
+	select PAGE_POOL
 	select TI_K3_CPPI_DESC_POOL
 	imply PHY_TI_GMII_SEL
 	depends on TI_K3_AM65_CPTS || !TI_K3_AM65_CPTS
@@ -474,26 +474,7 @@ static int icss_iep_perout_enable_hw(struct icss_iep *iep,
 static int icss_iep_perout_enable(struct icss_iep *iep,
 				  struct ptp_perout_request *req, int on)
 {
-	int ret = 0;
-
-	mutex_lock(&iep->ptp_clk_mutex);
-
-	if (iep->pps_enabled) {
-		ret = -EBUSY;
-		goto exit;
-	}
-
-	if (iep->perout_enabled == !!on)
-		goto exit;
-
-	ret = icss_iep_perout_enable_hw(iep, req, on);
-	if (!ret)
-		iep->perout_enabled = !!on;
-
-exit:
-	mutex_unlock(&iep->ptp_clk_mutex);
-
-	return ret;
+	return -EOPNOTSUPP;
 }
 
 static void icss_iep_cap_cmp_work(struct work_struct *work)
@@ -416,20 +416,25 @@ struct ipvl_addr *ipvlan_addr_lookup(struct ipvl_port *port, void *lyr3h,
 
 static noinline_for_stack int ipvlan_process_v4_outbound(struct sk_buff *skb)
 {
-	const struct iphdr *ip4h = ip_hdr(skb);
 	struct net_device *dev = skb->dev;
 	struct net *net = dev_net(dev);
-	struct rtable *rt;
 	int err, ret = NET_XMIT_DROP;
+	const struct iphdr *ip4h;
+	struct rtable *rt;
 	struct flowi4 fl4 = {
 		.flowi4_oif = dev->ifindex,
-		.flowi4_tos = inet_dscp_to_dsfield(ip4h_dscp(ip4h)),
 		.flowi4_flags = FLOWI_FLAG_ANYSRC,
 		.flowi4_mark = skb->mark,
-		.daddr = ip4h->daddr,
-		.saddr = ip4h->saddr,
 	};
 
+	if (!pskb_network_may_pull(skb, sizeof(struct iphdr)))
+		goto err;
+
+	ip4h = ip_hdr(skb);
+	fl4.daddr = ip4h->daddr;
+	fl4.saddr = ip4h->saddr;
+	fl4.flowi4_tos = inet_dscp_to_dsfield(ip4h_dscp(ip4h));
+
 	rt = ip_route_output_flow(net, &fl4, NULL);
 	if (IS_ERR(rt))
 		goto err;
@@ -488,6 +493,12 @@ static int ipvlan_process_v6_outbound(struct sk_buff *skb)
 	struct net_device *dev = skb->dev;
 	int err, ret = NET_XMIT_DROP;
 
+	if (!pskb_network_may_pull(skb, sizeof(struct ipv6hdr))) {
+		DEV_STATS_INC(dev, tx_errors);
+		kfree_skb(skb);
+		return ret;
+	}
+
 	err = ipvlan_route_v6_outbound(dev, skb);
 	if (unlikely(err)) {
 		DEV_STATS_INC(dev, tx_errors);
@@ -244,8 +244,22 @@ static netdev_tx_t blackhole_netdev_xmit(struct sk_buff *skb,
 	return NETDEV_TX_OK;
 }
 
+static int blackhole_neigh_output(struct neighbour *n, struct sk_buff *skb)
+{
+	kfree_skb(skb);
+	return 0;
+}
+
+static int blackhole_neigh_construct(struct net_device *dev,
+				     struct neighbour *n)
+{
+	n->output = blackhole_neigh_output;
+	return 0;
+}
+
 static const struct net_device_ops blackhole_netdev_ops = {
 	.ndo_start_xmit = blackhole_netdev_xmit,
+	.ndo_neigh_construct = blackhole_neigh_construct,
 };
 
 /* This is a dst-dummy device used specifically for invalidated
@@ -184,9 +184,11 @@ static const struct ethtool_ops nsim_ethtool_ops = {
 
 static void nsim_ethtool_ring_init(struct netdevsim *ns)
 {
+	ns->ethtool.ring.rx_pending = 512;
 	ns->ethtool.ring.rx_max_pending = 4096;
 	ns->ethtool.ring.rx_jumbo_max_pending = 4096;
 	ns->ethtool.ring.rx_mini_max_pending = 4096;
+	ns->ethtool.ring.tx_pending = 512;
 	ns->ethtool.ring.tx_max_pending = 4096;
 }
@@ -774,7 +774,7 @@ static int qca807x_config_init(struct phy_device *phydev)
 	control_dac &= ~QCA807X_CONTROL_DAC_MASK;
 	if (!priv->dac_full_amplitude)
 		control_dac |= QCA807X_CONTROL_DAC_DSP_AMPLITUDE;
-	if (!priv->dac_full_amplitude)
+	if (!priv->dac_full_bias_current)
 		control_dac |= QCA807X_CONTROL_DAC_DSP_BIAS_CURRENT;
 	if (!priv->dac_disable_bias_current_tweak)
 		control_dac |= QCA807X_CONTROL_DAC_BIAS_CURRENT_TWEAK;
@@ -179,9 +179,7 @@ static int genelink_bind(struct usbnet *dev, struct usb_interface *intf)
 {
 	dev->hard_mtu = GL_RCV_BUF_SIZE;
 	dev->net->hard_header_len += 4;
-	dev->in = usb_rcvbulkpipe(dev->udev, dev->driver_info->in);
-	dev->out = usb_sndbulkpipe(dev->udev, dev->driver_info->out);
-	return 0;
+	return usbnet_get_endpoints(dev, intf);
 }
 
 static const struct driver_info genelink_info = {
@@ -163,6 +163,8 @@ static struct afs_server *afs_install_server(struct afs_cell *cell,
 	rb_insert_color(&server->uuid_rb, &net->fs_servers);
 	hlist_add_head_rcu(&server->proc_link, &net->fs_proc);
 
+	afs_get_cell(cell, afs_cell_trace_get_server);
+
 added_dup:
 	write_seqlock(&net->fs_addr_lock);
 	estate = rcu_dereference_protected(server->endpoint_state,
@@ -442,6 +444,7 @@ static void afs_server_rcu(struct rcu_head *rcu)
 		  atomic_read(&server->active), afs_server_trace_free);
 	afs_put_endpoint_state(rcu_access_pointer(server->endpoint_state),
 			       afs_estate_trace_put_server);
+	afs_put_cell(server->cell, afs_cell_trace_put_server);
 	kfree(server);
 }
@@ -97,8 +97,8 @@ struct afs_server_list *afs_alloc_server_list(struct afs_volume *volume,
 				break;
 		if (j < slist->nr_servers) {
 			if (slist->servers[j].server == server) {
-				afs_put_server(volume->cell->net, server,
-					       afs_server_trace_put_slist_isort);
+				afs_unuse_server(volume->cell->net, server,
+						 afs_server_trace_put_slist_isort);
 				continue;
 			}
@@ -392,6 +392,8 @@ struct ucred {
 
 extern int move_addr_to_kernel(void __user *uaddr, int ulen, struct sockaddr_storage *kaddr);
 extern int put_cmsg(struct msghdr*, int level, int type, int len, void *data);
+extern int put_cmsg_notrunc(struct msghdr *msg, int level, int type, int len,
+			    void *data);
 
 struct timespec64;
 struct __kernel_timespec;
|
@ -1751,6 +1751,7 @@ static inline bool sock_allow_reclassification(const struct sock *csk)
|
|||
struct sock *sk_alloc(struct net *net, int family, gfp_t priority,
|
||||
struct proto *prot, int kern);
|
||||
void sk_free(struct sock *sk);
|
||||
void sk_net_refcnt_upgrade(struct sock *sk);
|
||||
void sk_destruct(struct sock *sk);
|
||||
struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority);
|
||||
void sk_free_unlock_clone(struct sock *sk);
|
||||
|
|
|
@ -174,6 +174,7 @@ enum yfs_cm_operation {
|
|||
EM(afs_cell_trace_get_queue_dns, "GET q-dns ") \
|
||||
EM(afs_cell_trace_get_queue_manage, "GET q-mng ") \
|
||||
EM(afs_cell_trace_get_queue_new, "GET q-new ") \
|
||||
EM(afs_cell_trace_get_server, "GET server") \
|
||||
EM(afs_cell_trace_get_vol, "GET vol ") \
|
||||
EM(afs_cell_trace_insert, "INSERT ") \
|
||||
EM(afs_cell_trace_manage, "MANAGE ") \
|
||||
|
@@ -182,6 +183,7 @@ enum yfs_cm_operation {
 	EM(afs_cell_trace_put_destroy,		"PUT destry") \
 	EM(afs_cell_trace_put_queue_work,	"PUT q-work") \
 	EM(afs_cell_trace_put_queue_fail,	"PUT q-fail") \
+	EM(afs_cell_trace_put_server,		"PUT server") \
 	EM(afs_cell_trace_put_vol,		"PUT vol   ") \
 	EM(afs_cell_trace_see_source,		"SEE source") \
 	EM(afs_cell_trace_see_ws,		"SEE ws    ") \
@@ -632,7 +632,8 @@ void __l2cap_chan_add(struct l2cap_conn *conn, struct l2cap_chan *chan)
 	    test_bit(FLAG_HOLD_HCI_CONN, &chan->flags))
 		hci_conn_hold(conn->hcon);
 
-	list_add(&chan->list, &conn->chan_l);
+	/* Append to the list since the order matters for ECRED */
+	list_add_tail(&chan->list, &conn->chan_l);
 }
 
 void l2cap_chan_add(struct l2cap_conn *conn, struct l2cap_chan *chan)
@@ -3771,7 +3772,11 @@ static void l2cap_ecred_rsp_defer(struct l2cap_chan *chan, void *data)
 	struct l2cap_ecred_conn_rsp *rsp_flex =
 		container_of(&rsp->pdu.rsp, struct l2cap_ecred_conn_rsp, hdr);
 
-	if (test_bit(FLAG_ECRED_CONN_REQ_SENT, &chan->flags))
+	/* Check if channel for outgoing connection or if it wasn't deferred
+	 * since in those cases it must be skipped.
+	 */
+	if (test_bit(FLAG_ECRED_CONN_REQ_SENT, &chan->flags) ||
+	    !test_and_clear_bit(FLAG_DEFER_SETUP, &chan->flags))
 		return;
 
 	/* Reset ident so only one response is sent */
@@ -2141,21 +2141,15 @@ int register_netdevice_notifier_dev_net(struct net_device *dev,
 					struct notifier_block *nb,
 					struct netdev_net_notifier *nn)
 {
-	struct net *net = dev_net(dev);
 	int err;
 
-	/* rtnl_net_lock() assumes dev is not yet published by
-	 * register_netdevice().
-	 */
-	DEBUG_NET_WARN_ON_ONCE(!list_empty(&dev->dev_list));
-
-	rtnl_net_lock(net);
-	err = __register_netdevice_notifier_net(net, nb, false);
+	rtnl_net_dev_lock(dev);
+	err = __register_netdevice_notifier_net(dev_net(dev), nb, false);
 	if (!err) {
 		nn->nb = nb;
 		list_add(&nn->list, &dev->net_notifier_list);
 	}
-	rtnl_net_unlock(net);
+	rtnl_net_dev_unlock(dev);
 
 	return err;
 }
@@ -4763,7 +4757,7 @@ use_local_napi:
 	 * we have to raise NET_RX_SOFTIRQ.
 	 */
 	if (!sd->in_net_rx_action)
-		__raise_softirq_irqoff(NET_RX_SOFTIRQ);
+		raise_softirq_irqoff(NET_RX_SOFTIRQ);
 }
 
 #ifdef CONFIG_RPS
@@ -653,6 +653,7 @@ static void napi_reuse_skb(struct napi_struct *napi, struct sk_buff *skb)
 	skb->pkt_type = PACKET_HOST;
 
 	skb->encapsulation = 0;
+	skb->ip_summed = CHECKSUM_NONE;
 	skb_shinfo(skb)->gso_type = 0;
 	skb_shinfo(skb)->gso_size = 0;
 	if (unlikely(skb->slow_gro)) {
@@ -282,6 +282,16 @@ efault:
 }
 EXPORT_SYMBOL(put_cmsg);
 
+int put_cmsg_notrunc(struct msghdr *msg, int level, int type, int len,
+		     void *data)
+{
+	/* Don't produce truncated CMSGs */
+	if (!msg->msg_control || msg->msg_controllen < CMSG_LEN(len))
+		return -ETOOSMALL;
+
+	return put_cmsg(msg, level, type, len, data);
+}
+
 void put_cmsg_scm_timestamping64(struct msghdr *msg, struct scm_timestamping_internal *tss_internal)
 {
 	struct scm_timestamping64 tss;
@@ -6033,11 +6033,11 @@ void skb_scrub_packet(struct sk_buff *skb, bool xnet)
 	skb->offload_fwd_mark = 0;
 	skb->offload_l3_fwd_mark = 0;
 #endif
+	ipvs_reset(skb);
 
 	if (!xnet)
 		return;
 
-	ipvs_reset(skb);
 	skb->mark = 0;
 	skb_clear_tstamp(skb);
 }
@@ -2246,6 +2246,7 @@ struct sock *sk_alloc(struct net *net, int family, gfp_t priority,
 		get_net_track(net, &sk->ns_tracker, priority);
 		sock_inuse_add(net, 1);
 	} else {
+		net_passive_inc(net);
 		__netns_tracker_alloc(net, &sk->ns_tracker,
 				      false, priority);
 	}
@@ -2270,6 +2271,7 @@ EXPORT_SYMBOL(sk_alloc);
 static void __sk_destruct(struct rcu_head *head)
 {
 	struct sock *sk = container_of(head, struct sock, sk_rcu);
+	struct net *net = sock_net(sk);
 	struct sk_filter *filter;
 
 	if (sk->sk_destruct)
@@ -2301,14 +2303,28 @@ static void __sk_destruct(struct rcu_head *head)
 	put_cred(sk->sk_peer_cred);
 	put_pid(sk->sk_peer_pid);
 
-	if (likely(sk->sk_net_refcnt))
-		put_net_track(sock_net(sk), &sk->ns_tracker);
-	else
-		__netns_tracker_free(sock_net(sk), &sk->ns_tracker, false);
-
+	if (likely(sk->sk_net_refcnt)) {
+		put_net_track(net, &sk->ns_tracker);
+	} else {
+		__netns_tracker_free(net, &sk->ns_tracker, false);
+		net_passive_dec(net);
+	}
 	sk_prot_free(sk->sk_prot_creator, sk);
 }
 
+void sk_net_refcnt_upgrade(struct sock *sk)
+{
+	struct net *net = sock_net(sk);
+
+	WARN_ON_ONCE(sk->sk_net_refcnt);
+	__netns_tracker_free(net, &sk->ns_tracker, false);
+	net_passive_dec(net);
+	sk->sk_net_refcnt = 1;
+	get_net_track(net, &sk->ns_tracker, GFP_KERNEL);
+	sock_inuse_add(net, 1);
+}
+EXPORT_SYMBOL_GPL(sk_net_refcnt_upgrade);
+
 void sk_destruct(struct sock *sk)
 {
 	bool use_call_rcu = sock_flag(sk, SOCK_RCU_FREE);
@@ -2405,6 +2421,7 @@ struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority)
 			 * is not properly dismantling its kernel sockets at netns
 			 * destroy time.
 			 */
+			net_passive_inc(sock_net(newsk));
 			__netns_tracker_alloc(sock_net(newsk), &newsk->ns_tracker,
 					      false, priority);
 		}
@@ -34,6 +34,7 @@ static int min_sndbuf = SOCK_MIN_SNDBUF;
 static int min_rcvbuf = SOCK_MIN_RCVBUF;
 static int max_skb_frags = MAX_SKB_FRAGS;
 static int min_mem_pcpu_rsv = SK_MEMORY_PCPU_RESERVE;
+static int netdev_budget_usecs_min = 2 * USEC_PER_SEC / HZ;
 
 static int net_msg_warn;	/* Unused, but still a sysctl */
@@ -587,7 +588,7 @@ static struct ctl_table net_core_table[] = {
 		.maxlen		= sizeof(unsigned int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= SYSCTL_ZERO,
+		.extra1		= &netdev_budget_usecs_min,
 	},
 	{
 		.procname	= "fb_tunnels_only_for_init_net",
@@ -6,6 +6,7 @@
 #include <linux/rtnetlink.h>
 #include <linux/ptp_clock_kernel.h>
 #include <linux/phy_link_topology.h>
+#include <net/netdev_queues.h>
 
 #include "netlink.h"
 #include "common.h"
@@ -771,6 +772,21 @@ int ethtool_check_ops(const struct ethtool_ops *ops)
 	return 0;
 }
 
+void ethtool_ringparam_get_cfg(struct net_device *dev,
+			       struct ethtool_ringparam *param,
+			       struct kernel_ethtool_ringparam *kparam,
+			       struct netlink_ext_ack *extack)
+{
+	memset(param, 0, sizeof(*param));
+	memset(kparam, 0, sizeof(*kparam));
+
+	param->cmd = ETHTOOL_GRINGPARAM;
+	dev->ethtool_ops->get_ringparam(dev, param, kparam, extack);
+
+	/* Driver gives us current state, we want to return current config */
+	kparam->tcp_data_split = dev->cfg->hds_config;
+}
+
 static void ethtool_init_tsinfo(struct kernel_ethtool_ts_info *info)
 {
 	memset(info, 0, sizeof(*info));
@@ -51,6 +51,12 @@ int ethtool_check_max_channel(struct net_device *dev,
 			      struct ethtool_channels channels,
 			      struct genl_info *info);
 int ethtool_check_rss_ctx_busy(struct net_device *dev, u32 rss_context);
+
+void ethtool_ringparam_get_cfg(struct net_device *dev,
+			       struct ethtool_ringparam *param,
+			       struct kernel_ethtool_ringparam *kparam,
+			       struct netlink_ext_ack *extack);
+
 int __ethtool_get_ts_info(struct net_device *dev, struct kernel_ethtool_ts_info *info);
 int ethtool_get_ts_info_by_phc(struct net_device *dev,
 			       struct kernel_ethtool_ts_info *info,
@@ -2059,8 +2059,8 @@ static int ethtool_get_ringparam(struct net_device *dev, void __user *useraddr)
 
 static int ethtool_set_ringparam(struct net_device *dev, void __user *useraddr)
 {
-	struct ethtool_ringparam ringparam, max = { .cmd = ETHTOOL_GRINGPARAM };
 	struct kernel_ethtool_ringparam kernel_ringparam;
+	struct ethtool_ringparam ringparam, max;
 	int ret;
 
 	if (!dev->ethtool_ops->set_ringparam || !dev->ethtool_ops->get_ringparam)
@@ -2069,7 +2069,7 @@ static int ethtool_set_ringparam(struct net_device *dev, void __user *useraddr)
 	if (copy_from_user(&ringparam, useraddr, sizeof(ringparam)))
 		return -EFAULT;
 
-	dev->ethtool_ops->get_ringparam(dev, &max, &kernel_ringparam, NULL);
+	ethtool_ringparam_get_cfg(dev, &max, &kernel_ringparam, NULL);
 
 	/* ensure new ring parameters are within the maximums */
 	if (ringparam.rx_pending > max.rx_max_pending ||
@@ -215,17 +215,16 @@ ethnl_set_rings_validate(struct ethnl_req_info *req_info,
 static int
 ethnl_set_rings(struct ethnl_req_info *req_info, struct genl_info *info)
 {
-	struct kernel_ethtool_ringparam kernel_ringparam = {};
-	struct ethtool_ringparam ringparam = {};
+	struct kernel_ethtool_ringparam kernel_ringparam;
 	struct net_device *dev = req_info->dev;
+	struct ethtool_ringparam ringparam;
 	struct nlattr **tb = info->attrs;
 	const struct nlattr *err_attr;
 	bool mod = false;
 	int ret;
 
-	dev->ethtool_ops->get_ringparam(dev, &ringparam,
-					&kernel_ringparam, info->extack);
-	kernel_ringparam.tcp_data_split = dev->cfg->hds_config;
+	ethtool_ringparam_get_cfg(dev, &ringparam, &kernel_ringparam,
+				  info->extack);
 
 	ethnl_update_u32(&ringparam.rx_pending, tb[ETHTOOL_A_RINGS_RX], &mod);
 	ethnl_update_u32(&ringparam.rx_mini_pending,
@@ -2457,14 +2457,12 @@ static int tcp_recvmsg_dmabuf(struct sock *sk, const struct sk_buff *skb,
 			 */
 			memset(&dmabuf_cmsg, 0, sizeof(dmabuf_cmsg));
 			dmabuf_cmsg.frag_size = copy;
-			err = put_cmsg(msg, SOL_SOCKET, SO_DEVMEM_LINEAR,
-				       sizeof(dmabuf_cmsg), &dmabuf_cmsg);
-			if (err || msg->msg_flags & MSG_CTRUNC) {
-				msg->msg_flags &= ~MSG_CTRUNC;
-				if (!err)
-					err = -ETOOSMALL;
+			err = put_cmsg_notrunc(msg, SOL_SOCKET,
+					       SO_DEVMEM_LINEAR,
+					       sizeof(dmabuf_cmsg),
+					       &dmabuf_cmsg);
+			if (err)
 				goto out;
-			}
 
 			sent += copy;
@@ -2518,16 +2516,12 @@ static int tcp_recvmsg_dmabuf(struct sock *sk, const struct sk_buff *skb,
 			offset += copy;
 			remaining_len -= copy;
 
-			err = put_cmsg(msg, SOL_SOCKET,
-				       SO_DEVMEM_DMABUF,
-				       sizeof(dmabuf_cmsg),
-				       &dmabuf_cmsg);
-			if (err || msg->msg_flags & MSG_CTRUNC) {
-				msg->msg_flags &= ~MSG_CTRUNC;
-				if (!err)
-					err = -ETOOSMALL;
+			err = put_cmsg_notrunc(msg, SOL_SOCKET,
+					       SO_DEVMEM_DMABUF,
+					       sizeof(dmabuf_cmsg),
+					       &dmabuf_cmsg);
+			if (err)
 				goto out;
-			}
 
 			atomic_long_inc(&niov->pp_ref_count);
 			tcp_xa_pool.netmems[tcp_xa_pool.idx++] = skb_frag_netmem(frag);
@@ -815,12 +815,6 @@ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
 
 	/* In sequence, PAWS is OK. */
 
-	/* TODO: We probably should defer ts_recent change once
-	 * we take ownership of @req.
-	 */
-	if (tmp_opt.saw_tstamp && !after(TCP_SKB_CB(skb)->seq, tcp_rsk(req)->rcv_nxt))
-		WRITE_ONCE(req->ts_recent, tmp_opt.rcv_tsval);
-
 	if (TCP_SKB_CB(skb)->seq == tcp_rsk(req)->rcv_isn) {
 		/* Truncate SYN, it is out of window starting
 		   at tcp_rsk(req)->rcv_isn + 1. */
@ -869,6 +863,10 @@ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
|
|||
if (!child)
|
||||
goto listen_overflow;
|
||||
|
||||
if (own_req && tmp_opt.saw_tstamp &&
|
||||
!after(TCP_SKB_CB(skb)->seq, tcp_rsk(req)->rcv_nxt))
|
||||
tcp_sk(child)->rx_opt.ts_recent = tmp_opt.rcv_tsval;
|
||||
|
||||
if (own_req && rsk_drop_req(req)) {
|
||||
reqsk_queue_removed(&inet_csk(req->rsk_listener)->icsk_accept_queue, req);
|
||||
inet_csk_reqsk_queue_drop_and_put(req->rsk_listener, req);
|
||||
|
|
|
@@ -262,10 +262,18 @@ static int rpl_input(struct sk_buff *skb)
 {
 	struct dst_entry *orig_dst = skb_dst(skb);
 	struct dst_entry *dst = NULL;
+	struct lwtunnel_state *lwtst;
 	struct rpl_lwt *rlwt;
 	int err;

-	rlwt = rpl_lwt_lwtunnel(orig_dst->lwtstate);
+	/* We cannot dereference "orig_dst" once ip6_route_input() or
+	 * skb_dst_drop() is called. However, in order to detect a dst loop, we
+	 * need the address of its lwtstate. So, save the address of lwtstate
+	 * now and use it later as a comparison.
+	 */
+	lwtst = orig_dst->lwtstate;
+
+	rlwt = rpl_lwt_lwtunnel(lwtst);

 	local_bh_disable();
 	dst = dst_cache_get(&rlwt->cache);

@@ -280,7 +288,9 @@ static int rpl_input(struct sk_buff *skb)
 	if (!dst) {
 		ip6_route_input(skb);
 		dst = skb_dst(skb);
-		if (!dst->error) {
+
+		/* cache only if we don't create a dst reference loop */
+		if (!dst->error && lwtst != dst->lwtstate) {
 			local_bh_disable();
 			dst_cache_set_ip6(&rlwt->cache, dst,
 					  &ipv6_hdr(skb)->saddr);
@@ -472,10 +472,18 @@ static int seg6_input_core(struct net *net, struct sock *sk,
 {
 	struct dst_entry *orig_dst = skb_dst(skb);
 	struct dst_entry *dst = NULL;
+	struct lwtunnel_state *lwtst;
 	struct seg6_lwt *slwt;
 	int err;

-	slwt = seg6_lwt_lwtunnel(orig_dst->lwtstate);
+	/* We cannot dereference "orig_dst" once ip6_route_input() or
+	 * skb_dst_drop() is called. However, in order to detect a dst loop, we
+	 * need the address of its lwtstate. So, save the address of lwtstate
+	 * now and use it later as a comparison.
+	 */
+	lwtst = orig_dst->lwtstate;
+
+	slwt = seg6_lwt_lwtunnel(lwtst);

 	local_bh_disable();
 	dst = dst_cache_get(&slwt->cache);

@@ -490,7 +498,9 @@ static int seg6_input_core(struct net *net, struct sock *sk,
 	if (!dst) {
 		ip6_route_input(skb);
 		dst = skb_dst(skb);
-		if (!dst->error) {
+
+		/* cache only if we don't create a dst reference loop */
+		if (!dst->error && lwtst != dst->lwtstate) {
 			local_bh_disable();
 			dst_cache_set_ip6(&slwt->cache, dst,
 					  &ipv6_hdr(skb)->saddr);
@@ -1514,11 +1514,6 @@ static int mptcp_nl_remove_subflow_and_signal_addr(struct net *net,
 		if (mptcp_pm_is_userspace(msk))
 			goto next;

-		if (list_empty(&msk->conn_list)) {
-			mptcp_pm_remove_anno_addr(msk, addr, false);
-			goto next;
-		}
-
 		lock_sock(sk);
 		remove_subflow = mptcp_lookup_subflow_by_saddr(&msk->conn_list, addr);
 		mptcp_pm_remove_anno_addr(msk, addr, remove_subflow &&
@@ -1199,6 +1199,8 @@ static inline void __mptcp_do_fallback(struct mptcp_sock *msk)
 		pr_debug("TCP fallback already done (msk=%p)\n", msk);
 		return;
 	}
+	if (WARN_ON_ONCE(!READ_ONCE(msk->allow_infinite_fallback)))
+		return;
 	set_bit(MPTCP_FALLBACK_DONE, &msk->flags);
 }

@@ -1142,7 +1142,6 @@ static enum mapping_status get_mapping_status(struct sock *ssk,
 	if (data_len == 0) {
 		pr_debug("infinite mapping received\n");
 		MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_INFINITEMAPRX);
-		subflow->map_data_len = 0;
 		return MAPPING_INVALID;
 	}

@@ -1286,18 +1285,6 @@ static void subflow_sched_work_if_closed(struct mptcp_sock *msk, struct sock *ssk)
 		mptcp_schedule_work(sk);
 }

-static bool subflow_can_fallback(struct mptcp_subflow_context *subflow)
-{
-	struct mptcp_sock *msk = mptcp_sk(subflow->conn);
-
-	if (subflow->mp_join)
-		return false;
-	else if (READ_ONCE(msk->csum_enabled))
-		return !subflow->valid_csum_seen;
-	else
-		return READ_ONCE(msk->allow_infinite_fallback);
-}
-
 static void mptcp_subflow_fail(struct mptcp_sock *msk, struct sock *ssk)
 {
 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);

@@ -1393,7 +1380,7 @@ fallback:
 		return true;
 	}

-	if (!subflow_can_fallback(subflow) && subflow->map_data_len) {
+	if (!READ_ONCE(msk->allow_infinite_fallback)) {
 		/* fatal protocol error, close the socket.
 		 * subflow_error_report() will introduce the appropriate barriers
 		 */

@@ -1772,10 +1759,7 @@ int mptcp_subflow_create_socket(struct sock *sk, unsigned short family,
 	 * needs it.
 	 * Update ns_tracker to current stack trace and refcounted tracker.
 	 */
-	__netns_tracker_free(net, &sf->sk->ns_tracker, false);
-	sf->sk->sk_net_refcnt = 1;
-	get_net_track(net, &sf->sk->ns_tracker, GFP_KERNEL);
-	sock_inuse_add(net, 1);
+	sk_net_refcnt_upgrade(sf->sk);
 	err = tcp_set_ulp(sf->sk, "mptcp");
 	if (err)
 		goto err_free;
@@ -795,16 +795,6 @@ static int netlink_release(struct socket *sock)

 	sock_prot_inuse_add(sock_net(sk), &netlink_proto, -1);

-	/* Because struct net might disappear soon, do not keep a pointer. */
-	if (!sk->sk_net_refcnt && sock_net(sk) != &init_net) {
-		__netns_tracker_free(sock_net(sk), &sk->ns_tracker, false);
-		/* Because of deferred_put_nlk_sk and use of work queue,
-		 * it is possible netns will be freed before this socket.
-		 */
-		sock_net_set(sk, &init_net);
-		__netns_tracker_alloc(&init_net, &sk->ns_tracker,
-				      false, GFP_KERNEL);
-	}
 	call_rcu(&nlk->rcu, deferred_put_nlk_sk);
 	return 0;
 }
@@ -504,12 +504,8 @@ bool rds_tcp_tune(struct socket *sock)
 			release_sock(sk);
 			return false;
 		}
-		/* Update ns_tracker to current stack trace and refcounted tracker */
-		__netns_tracker_free(net, &sk->ns_tracker, false);
-		sk->sk_net_refcnt = 1;
-		netns_tracker_alloc(net, &sk->ns_tracker, GFP_KERNEL);
-		sock_inuse_add(net, 1);
+		sk_net_refcnt_upgrade(sk);
 		put_net(net);
 	}
 	rtn = net_generic(net, rds_tcp_netid);
 	if (rtn->sndbuf_size > 0) {
@@ -360,7 +360,6 @@ struct rxrpc_peer {
 	u8			pmtud_jumbo;	/* Max jumbo packets for the MTU */
 	bool			ackr_adv_pmtud;	/* T if the peer advertises path-MTU */
 	unsigned int		ackr_max_data;	/* Maximum data advertised by peer */
-	seqcount_t		mtu_lock;	/* Lockless MTU access management */
 	unsigned int		if_mtu;		/* Local interface MTU (- hdrsize) for this peer */
 	unsigned int		max_data;	/* Maximum packet data capacity for this peer */
 	unsigned short		hdrsize;	/* header size (IP + UDP + RxRPC) */
@@ -810,9 +810,7 @@ static void rxrpc_input_ack_trailer(struct rxrpc_call *call, struct sk_buff *skb
 	if (max_mtu < peer->max_data) {
 		trace_rxrpc_pmtud_reduce(peer, sp->hdr.serial, max_mtu,
 					 rxrpc_pmtud_reduce_ack);
-		write_seqcount_begin(&peer->mtu_lock);
 		peer->max_data = max_mtu;
-		write_seqcount_end(&peer->mtu_lock);
 	}

 	max_data = umin(max_mtu, peer->max_data);
@@ -130,9 +130,7 @@ static void rxrpc_adjust_mtu(struct rxrpc_peer *peer, unsigned int mtu)
 		peer->pmtud_bad = max_data + 1;

 		trace_rxrpc_pmtud_reduce(peer, 0, max_data, rxrpc_pmtud_reduce_icmp);
-		write_seqcount_begin(&peer->mtu_lock);
 		peer->max_data = max_data;
-		write_seqcount_end(&peer->mtu_lock);
 	}
 }

@@ -408,13 +406,8 @@ void rxrpc_input_probe_for_pmtud(struct rxrpc_connection *conn, rxrpc_serial_t acked_serial)
 	}

 	max_data = umin(max_data, peer->ackr_max_data);
-	if (max_data != peer->max_data) {
-		preempt_disable();
-		write_seqcount_begin(&peer->mtu_lock);
-		peer->max_data = max_data;
-		write_seqcount_end(&peer->mtu_lock);
-		preempt_enable();
-	}
+	if (max_data != peer->max_data)
+		peer->max_data = max_data;

 	jumbo = max_data + sizeof(struct rxrpc_jumbo_header);
 	jumbo /= RXRPC_JUMBO_SUBPKTLEN;
@@ -235,7 +235,6 @@ struct rxrpc_peer *rxrpc_alloc_peer(struct rxrpc_local *local, gfp_t gfp,
 	peer->service_conns = RB_ROOT;
 	seqlock_init(&peer->service_conn_lock);
 	spin_lock_init(&peer->lock);
-	seqcount_init(&peer->mtu_lock);
 	peer->debug_id = atomic_inc_return(&rxrpc_debug_id);
 	peer->recent_srtt_us = UINT_MAX;
 	peer->cong_ssthresh = RXRPC_TX_MAX_WINDOW;

@@ -325,10 +324,10 @@ void rxrpc_new_incoming_peer(struct rxrpc_local *local, struct rxrpc_peer *peer)
 	hash_key = rxrpc_peer_hash_key(local, &peer->srx);
 	rxrpc_init_peer(local, peer, hash_key);

-	spin_lock_bh(&rxnet->peer_hash_lock);
+	spin_lock(&rxnet->peer_hash_lock);
 	hash_add_rcu(rxnet->peer_hash, &peer->hash_link, hash_key);
 	list_add_tail(&peer->keepalive_link, &rxnet->peer_keepalive_new);
-	spin_unlock_bh(&rxnet->peer_hash_lock);
+	spin_unlock(&rxnet->peer_hash_lock);
 }

 /*
@@ -478,6 +478,18 @@ static int rxperf_deliver_request(struct rxperf_call *call)
 		call->unmarshal++;
 		fallthrough;
 	case 2:
+		ret = rxperf_extract_data(call, true);
+		if (ret < 0)
+			return ret;
+
+		/* Deal with the terminal magic cookie. */
+		call->iov_len = 4;
+		call->kvec[0].iov_len	= call->iov_len;
+		call->kvec[0].iov_base	= call->tmp;
+		iov_iter_kvec(&call->iter, READ, call->kvec, 1, call->iov_len);
+		call->unmarshal++;
+		fallthrough;
+	case 3:
 		ret = rxperf_extract_data(call, false);
 		if (ret < 0)
 			return ret;
@@ -3337,10 +3337,7 @@ int smc_create_clcsk(struct net *net, struct sock *sk, int family)
 	 * which need net ref.
 	 */
 	sk = smc->clcsock->sk;
-	__netns_tracker_free(net, &sk->ns_tracker, false);
-	sk->sk_net_refcnt = 1;
-	get_net_track(net, &sk->ns_tracker, GFP_KERNEL);
-	sock_inuse_add(net, 1);
+	sk_net_refcnt_upgrade(sk);
 	return 0;
 }

@@ -1541,10 +1541,7 @@ static struct svc_xprt *svc_create_socket(struct svc_serv *serv,
 	newlen = error;

 	if (protocol == IPPROTO_TCP) {
-		__netns_tracker_free(net, &sock->sk->ns_tracker, false);
-		sock->sk->sk_net_refcnt = 1;
-		get_net_track(net, &sock->sk->ns_tracker, GFP_KERNEL);
-		sock_inuse_add(net, 1);
+		sk_net_refcnt_upgrade(sock->sk);
 		if ((error = kernel_listen(sock, 64)) < 0)
 			goto bummer;
 	}
@@ -1941,12 +1941,8 @@ static struct socket *xs_create_sock(struct rpc_xprt *xprt,
 		goto out;
 	}

-	if (protocol == IPPROTO_TCP) {
-		__netns_tracker_free(xprt->xprt_net, &sock->sk->ns_tracker, false);
-		sock->sk->sk_net_refcnt = 1;
-		get_net_track(xprt->xprt_net, &sock->sk->ns_tracker, GFP_KERNEL);
-		sock_inuse_add(xprt->xprt_net, 1);
-	}
+	if (protocol == IPPROTO_TCP)
+		sk_net_refcnt_upgrade(sock->sk);

 	filp = sock_alloc_file(sock, O_NONBLOCK, NULL);
 	if (IS_ERR(filp))
@@ -2102,6 +2102,7 @@ restart_locked:
 			goto out_sock_put;
 		}

+		sock_put(other);
 		goto lookup;
 	}

@@ -2,17 +2,54 @@
 # SPDX-License-Identifier: GPL-2.0

 import errno
+import os
 from lib.py import ksft_run, ksft_exit, ksft_eq, ksft_raises, KsftSkipEx
-from lib.py import EthtoolFamily, NlError
+from lib.py import CmdExitFailure, EthtoolFamily, NlError
 from lib.py import NetDrvEnv
+from lib.py import defer, ethtool, ip

-def get_hds(cfg, netnl) -> None:
+
+def _get_hds_mode(cfg, netnl) -> str:
     try:
         rings = netnl.rings_get({'header': {'dev-index': cfg.ifindex}})
     except NlError as e:
         raise KsftSkipEx('ring-get not supported by device')
     if 'tcp-data-split' not in rings:
         raise KsftSkipEx('tcp-data-split not supported by device')
+    return rings['tcp-data-split']
+
+
+def _xdp_onoff(cfg):
+    test_dir = os.path.dirname(os.path.realpath(__file__))
+    prog = test_dir + "/../../net/lib/xdp_dummy.bpf.o"
+    ip("link set dev %s xdp obj %s sec xdp" %
+       (cfg.ifname, prog))
+    ip("link set dev %s xdp off" % cfg.ifname)
+
+
+def _ioctl_ringparam_modify(cfg, netnl) -> None:
+    """
+    Helper for performing a hopefully unimportant IOCTL SET.
+    IOCTL does not support HDS, so it should not affect the HDS config.
+    """
+    try:
+        rings = netnl.rings_get({'header': {'dev-index': cfg.ifindex}})
+    except NlError as e:
+        raise KsftSkipEx('ring-get not supported by device')
+
+    if 'tx' not in rings:
+        raise KsftSkipEx('setting Tx ring size not supported')
+
+    try:
+        ethtool(f"--disable-netlink -G {cfg.ifname} tx {rings['tx'] // 2}")
+    except CmdExitFailure as e:
+        ethtool(f"--disable-netlink -G {cfg.ifname} tx {rings['tx'] * 2}")
+    defer(ethtool, f"-G {cfg.ifname} tx {rings['tx']}")
+
+
+def get_hds(cfg, netnl) -> None:
+    _get_hds_mode(cfg, netnl)
+

 def get_hds_thresh(cfg, netnl) -> None:
     try:

@@ -104,6 +141,103 @@ def set_hds_thresh_gt(cfg, netnl) -> None:
         netnl.rings_set({'header': {'dev-index': cfg.ifindex}, 'hds-thresh': hds_gt})
     ksft_eq(e.exception.nl_msg.error, -errno.EINVAL)

+
+def set_xdp(cfg, netnl) -> None:
+    """
+    Enable single-buffer XDP on the device.
+    When HDS is in "auto" / UNKNOWN mode, XDP installation should work.
+    """
+    mode = _get_hds_mode(cfg, netnl)
+    if mode == 'enabled':
+        netnl.rings_set({'header': {'dev-index': cfg.ifindex},
+                         'tcp-data-split': 'unknown'})
+
+    _xdp_onoff(cfg)
+
+
+def enabled_set_xdp(cfg, netnl) -> None:
+    """
+    Enable single-buffer XDP on the device.
+    When HDS is in "enabled" mode, XDP installation should not work.
+    """
+    _get_hds_mode(cfg, netnl)  # Trigger skip if not supported
+
+    netnl.rings_set({'header': {'dev-index': cfg.ifindex},
+                     'tcp-data-split': 'enabled'})
+    defer(netnl.rings_set, {'header': {'dev-index': cfg.ifindex},
+                            'tcp-data-split': 'unknown'})
+
+    with ksft_raises(CmdExitFailure) as e:
+        _xdp_onoff(cfg)
+
+
+def ioctl(cfg, netnl) -> None:
+    mode1 = _get_hds_mode(cfg, netnl)
+    _ioctl_ringparam_modify(cfg, netnl)
+    mode2 = _get_hds_mode(cfg, netnl)
+
+    ksft_eq(mode1, mode2)
+
+
+def ioctl_set_xdp(cfg, netnl) -> None:
+    """
+    Like set_xdp(), but we perturb the settings via the legacy ioctl.
+    """
+    mode = _get_hds_mode(cfg, netnl)
+    if mode == 'enabled':
+        netnl.rings_set({'header': {'dev-index': cfg.ifindex},
+                         'tcp-data-split': 'unknown'})
+
+    _ioctl_ringparam_modify(cfg, netnl)
+
+    _xdp_onoff(cfg)
+
+
+def ioctl_enabled_set_xdp(cfg, netnl) -> None:
+    """
+    Enable single-buffer XDP on the device.
+    When HDS is in "enabled" mode, XDP installation should not work.
+    """
+    _get_hds_mode(cfg, netnl)  # Trigger skip if not supported
+
+    netnl.rings_set({'header': {'dev-index': cfg.ifindex},
+                     'tcp-data-split': 'enabled'})
+    defer(netnl.rings_set, {'header': {'dev-index': cfg.ifindex},
+                            'tcp-data-split': 'unknown'})
+
+    with ksft_raises(CmdExitFailure) as e:
+        _xdp_onoff(cfg)
+
+
 def main() -> None:
     with NetDrvEnv(__file__, queue_count=3) as cfg:
         ksft_run([get_hds,

@@ -112,7 +246,12 @@ def main() -> None:
                   set_hds_enable,
                   set_hds_thresh_zero,
                   set_hds_thresh_max,
-                  set_hds_thresh_gt],
+                  set_hds_thresh_gt,
+                  set_xdp,
+                  enabled_set_xdp,
+                  ioctl,
+                  ioctl_set_xdp,
+                  ioctl_enabled_set_xdp],
                  args=(cfg, EthtoolFamily()))
     ksft_exit()
@@ -45,10 +45,9 @@ def addremove_queues(cfg, nl) -> None:

     netnl = EthtoolFamily()
     channels = netnl.channels_get({'header': {'dev-index': cfg.ifindex}})
-    if channels['combined-count'] == 0:
-        rx_type = 'rx'
-    else:
-        rx_type = 'combined'
+    rx_type = 'rx'
+    if channels.get('combined-count', 0) > 0:
+        rx_type = 'combined'

     expected = curr_queues - 1
     cmd(f"ethtool -L {cfg.dev['ifname']} {rx_type} {expected}", timeout=10)
@@ -9,7 +9,10 @@ TEST_FILES := ../../../../../Documentation/netlink/specs
 TEST_FILES += ../../../../net/ynl

 TEST_GEN_FILES += csum
+TEST_GEN_FILES += $(patsubst %.c,%.o,$(wildcard *.bpf.c))

 TEST_INCLUDES := $(wildcard py/*.py sh/*.sh)

 include ../../lib.mk
+
+include ../bpf.mk
@@ -0,0 +1,13 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#define KBUILD_MODNAME "xdp_dummy"
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+
+SEC("xdp")
+int xdp_dummy_prog(struct xdp_md *ctx)
+{
+	return XDP_PASS;
+}
+
+char _license[] SEC("license") = "GPL";