!489 [sync] sync master branch

From: @huangdengdui 
Reviewed-by: @li-huisong 
Signed-off-by: @li-huisong
openeuler-ci-bot 2023-11-23 10:21:58 +00:00 committed by Gitee
commit ddab303f2f
45 changed files with 4991 additions and 1 deletions

@ -0,0 +1,93 @@
From b3e2b303f964e5ad17af01a498ef8c1cdc32fbd6 Mon Sep 17 00:00:00 2001
From: Dongdong Liu <liudongdong3@huawei.com>
Date: Mon, 26 Jun 2023 20:43:04 +0800
Subject: [PATCH 351/366] config/arm: add HiSilicon HIP10
[ upstream commit 5b2a7f12edcaba0daab0154c9ab03430083cfd80 ]
Add support for the HiSilicon HIP10 platform.
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
Acked-by: Ruifeng Wang <ruifeng.wang@arm.com>
---
config/arm/arm64_hip10_linux_gcc | 16 ++++++++++++++++
config/arm/meson.build | 19 +++++++++++++++++++
2 files changed, 35 insertions(+)
create mode 100644 config/arm/arm64_hip10_linux_gcc
diff --git a/config/arm/arm64_hip10_linux_gcc b/config/arm/arm64_hip10_linux_gcc
new file mode 100644
index 0000000..2943e4a
--- /dev/null
+++ b/config/arm/arm64_hip10_linux_gcc
@@ -0,0 +1,16 @@
+[binaries]
+c = ['ccache', 'aarch64-linux-gnu-gcc']
+cpp = ['ccache', 'aarch64-linux-gnu-g++']
+ar = 'aarch64-linux-gnu-gcc-ar'
+strip = 'aarch64-linux-gnu-strip'
+pkgconfig = 'aarch64-linux-gnu-pkg-config'
+pcap-config = ''
+
+[host_machine]
+system = 'linux'
+cpu_family = 'aarch64'
+cpu = 'armv8-a'
+endian = 'little'
+
+[properties]
+platform = 'hip10'
diff --git a/config/arm/meson.build b/config/arm/meson.build
index 213324d..ef047e9 100644
--- a/config/arm/meson.build
+++ b/config/arm/meson.build
@@ -193,6 +193,16 @@ implementer_hisilicon = {
['RTE_MAX_LCORE', 1280],
['RTE_MAX_NUMA_NODES', 16]
]
+ },
+ '0xd03': {
+ 'march': 'armv8.5-a',
+ 'march_features': ['crypto', 'sve'],
+ 'flags': [
+ ['RTE_MACHINE', '"hip10"'],
+ ['RTE_ARM_FEATURE_ATOMICS', true],
+ ['RTE_MAX_LCORE', 1280],
+ ['RTE_MAX_NUMA_NODES', 16]
+ ]
}
}
}
@@ -309,6 +319,13 @@ soc_graviton2 = {
'numa': false
}
+soc_hip10 = {
+ 'description': 'HiSilicon HIP10',
+ 'implementer': '0x48',
+ 'part_number': '0xd03',
+ 'numa': true
+}
+
soc_kunpeng920 = {
'description': 'HiSilicon Kunpeng 920',
'implementer': '0x48',
@@ -381,6 +398,7 @@ cn10k: Marvell OCTEON 10
dpaa: NXP DPAA
emag: Ampere eMAG
graviton2: AWS Graviton2
+hip10: HiSilicon HIP10
kunpeng920: HiSilicon Kunpeng 920
kunpeng930: HiSilicon Kunpeng 930
n1sdp: Arm Neoverse N1SDP
@@ -403,6 +421,7 @@ socs = {
'dpaa': soc_dpaa,
'emag': soc_emag,
'graviton2': soc_graviton2,
+ 'hip10': soc_hip10,
'kunpeng920': soc_kunpeng920,
'kunpeng930': soc_kunpeng930,
'n1sdp': soc_n1sdp,
--
2.41.0.windows.2

@ -0,0 +1,56 @@
From af30b78f204788a5a82cc637b813a3b8bb66ae6b Mon Sep 17 00:00:00 2001
From: Huisong Li <lihuisong@huawei.com>
Date: Fri, 7 Jul 2023 18:40:53 +0800
Subject: [PATCH 352/366] net/hns3: fix non-zero weight for disabled TC
[ upstream commit 1abcdb3f247393a04703071452b560a77ab23c04 ]
By default, the hns3 PF driver enables one TC and allocates 100%
weight to it and 0% to the other, disabled TCs. However, the driver
then changes the weight of each disabled TC to 1% and writes it to
hardware so that all TCs work in DWRR mode. As a result, the total
weight of all TCs exceeds 100%. This operation is also redundant,
because the disabled TCs are never used. So this patch sets the
weight of every TC based on the user's configuration.
Fixes: 62e3ccc2b94c ("net/hns3: support flow control")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
---
drivers/net/hns3/hns3_dcb.c | 9 +--------
1 file changed, 1 insertion(+), 8 deletions(-)
diff --git a/drivers/net/hns3/hns3_dcb.c b/drivers/net/hns3/hns3_dcb.c
index af045b2..07b8c46 100644
--- a/drivers/net/hns3/hns3_dcb.c
+++ b/drivers/net/hns3/hns3_dcb.c
@@ -237,9 +237,9 @@ hns3_dcb_qs_weight_cfg(struct hns3_hw *hw, uint16_t qs_id, uint8_t dwrr)
static int
hns3_dcb_ets_tc_dwrr_cfg(struct hns3_hw *hw)
{
-#define DEFAULT_TC_WEIGHT 1
#define DEFAULT_TC_OFFSET 14
struct hns3_ets_tc_weight_cmd *ets_weight;
+ struct hns3_pg_info *pg_info;
struct hns3_cmd_desc desc;
uint8_t i;
@@ -247,13 +247,6 @@ hns3_dcb_ets_tc_dwrr_cfg(struct hns3_hw *hw)
ets_weight = (struct hns3_ets_tc_weight_cmd *)desc.data;
for (i = 0; i < HNS3_MAX_TC_NUM; i++) {
- struct hns3_pg_info *pg_info;
-
- ets_weight->tc_weight[i] = DEFAULT_TC_WEIGHT;
-
- if (!(hw->hw_tc_map & BIT(i)))
- continue;
-
pg_info = &hw->dcb_info.pg_info[hw->dcb_info.tc_info[i].pgid];
ets_weight->tc_weight[i] = pg_info->tc_dwrr[i];
}
--
2.41.0.windows.2

@ -0,0 +1,40 @@
From c7f8daafe6ec2cfde7af46e446c227f15b0eec7f Mon Sep 17 00:00:00 2001
From: Huisong Li <lihuisong@huawei.com>
Date: Tue, 11 Jul 2023 18:24:44 +0800
Subject: [PATCH 353/366] net/hns3: fix index to look up table in NEON Rx
[ upstream commit 6bec7c50be7a38c114680481f285976142df40d0 ]
In hns3_recv_burst_vec(), the indices used to get the packet length
and the data size are reversed. Fortunately, this doesn't affect
functionality because the NEON Rx only supports a single BD, in which
the packet length is equal to the data size. This patch fixes the
indices so that each field is read from the correct offset.
Fixes: a3d4f4d291d7 ("net/hns3: support NEON Rx")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
---
drivers/net/hns3/hns3_rxtx_vec_neon.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/hns3/hns3_rxtx_vec_neon.h b/drivers/net/hns3/hns3_rxtx_vec_neon.h
index 55d9bf8..a20a6b6 100644
--- a/drivers/net/hns3/hns3_rxtx_vec_neon.h
+++ b/drivers/net/hns3/hns3_rxtx_vec_neon.h
@@ -142,8 +142,8 @@ hns3_recv_burst_vec(struct hns3_rx_queue *__restrict rxq,
/* mask to shuffle from desc to mbuf's rx_descriptor_fields1 */
uint8x16_t shuf_desc_fields_msk = {
0xff, 0xff, 0xff, 0xff, /* packet type init zero */
- 22, 23, 0xff, 0xff, /* rx.pkt_len to rte_mbuf.pkt_len */
- 20, 21, /* size to rte_mbuf.data_len */
+ 20, 21, 0xff, 0xff, /* rx.pkt_len to rte_mbuf.pkt_len */
+ 22, 23, /* size to rte_mbuf.data_len */
0xff, 0xff, /* rte_mbuf.vlan_tci init zero */
8, 9, 10, 11, /* rx.rss_hash to rte_mbuf.hash.rss */
};
--
2.41.0.windows.2

@ -0,0 +1,35 @@
From f2d94f67f97a92cd142f1e7e6fa5106766acd08a Mon Sep 17 00:00:00 2001
From: Dengdui Huang <huangdengdui@huawei.com>
Date: Sat, 5 Aug 2023 16:36:23 +0800
Subject: [PATCH 354/366] net/hns3: fix VF default MAC modified when set failed
[ upstream commit ed7faab2a717347077d9e657fba010bb145a2b54 ]
When the VF fails to set the default MAC address,
"hw->mac.mac_addr" should not be updated.
Fixes: a5475d61fa34 ("net/hns3: support VF")
Cc: stable@dpdk.org
Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
---
drivers/net/hns3/hns3_ethdev_vf.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 6898a77..02fb4a8 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -250,6 +250,8 @@ hns3vf_set_default_mac_addr(struct rte_eth_dev *dev,
hns3_err(hw, "Failed to set mac addr(%s) for vf: %d",
mac_str, ret);
}
+ rte_spinlock_unlock(&hw->lock);
+ return ret;
}
rte_ether_addr_copy(mac_addr,
--
2.41.0.windows.2
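
A minimal, self-contained sketch of the corrected flow described in the patch
above (generic names; the real logic lives in hns3vf_set_default_mac_addr()
and sends the request to the PF over the VF mailbox): the cached address is
updated only after the request succeeds, otherwise the function unlocks and
returns early.

#include <rte_ether.h>
#include <rte_spinlock.h>

struct dev_ctx {                          /* hypothetical stand-in for hns3_hw */
	rte_spinlock_t lock;
	struct rte_ether_addr cached_mac; /* stand-in for hw->mac.mac_addr */
};

static int
set_default_mac(struct dev_ctx *ctx, const struct rte_ether_addr *new_mac,
		int (*ask_pf)(const struct rte_ether_addr *mac))
{
	int ret;

	rte_spinlock_lock(&ctx->lock);
	ret = ask_pf(new_mac);
	if (ret != 0) {
		/* the fix: bail out here so cached_mac keeps the old value */
		rte_spinlock_unlock(&ctx->lock);
		return ret;
	}
	rte_ether_addr_copy(new_mac, &ctx->cached_mac);
	rte_spinlock_unlock(&ctx->lock);
	return 0;
}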

@ -0,0 +1,35 @@
From 81f221e0c7e43eb37eda6e4ea8765a159fae9b08 Mon Sep 17 00:00:00 2001
From: Dengdui Huang <huangdengdui@huawei.com>
Date: Sat, 5 Aug 2023 16:36:24 +0800
Subject: [PATCH 355/366] net/hns3: fix error code for multicast resource
[ upstream commit c8cd885352d58bcfcc514770cb6068dd689d0dc3 ]
Return ENOSPC instead of EINVAL when the hardware
does not have enough multicast filtering resources.
Fixes: 7d7f9f80bbfb ("net/hns3: support MAC address related operations")
Cc: stable@dpdk.org
Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
---
drivers/net/hns3/hns3_common.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/hns3/hns3_common.c b/drivers/net/hns3/hns3_common.c
index a7b576a..51a1c68 100644
--- a/drivers/net/hns3/hns3_common.c
+++ b/drivers/net/hns3/hns3_common.c
@@ -384,7 +384,7 @@ hns3_set_mc_addr_chk_param(struct hns3_hw *hw,
hns3_err(hw, "failed to set mc mac addr, nb_mc_addr(%u) "
"invalid. valid range: 0~%d",
nb_mc_addr, HNS3_MC_MACADDR_NUM);
- return -EINVAL;
+ return -ENOSPC;
}
/* Check if input mac addresses are valid */
--
2.41.0.windows.2

@ -0,0 +1,51 @@
From 526759b4f78ecd42b217285c892a2e2e664192a2 Mon Sep 17 00:00:00 2001
From: Dengdui Huang <huangdengdui@huawei.com>
Date: Sat, 5 Aug 2023 16:36:25 +0800
Subject: [PATCH 356/366] net/hns3: fix flushing multicast MAC address
[ upstream commit 49d1ab205b033b6131fb895b5e4d9ebc14081e51 ]
According to the rte_eth_dev_set_mc_addr_list() API definition,
support flushing all multicast MAC addresses when mc_addr_set is
NULL or nb_mc_addr is zero.
Fixes: 7d7f9f80bbfb ("net/hns3: support MAC address related operations")
Cc: stable@dpdk.org
Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
---
drivers/net/hns3/hns3_common.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/drivers/net/hns3/hns3_common.c b/drivers/net/hns3/hns3_common.c
index 51a1c68..5dec62c 100644
--- a/drivers/net/hns3/hns3_common.c
+++ b/drivers/net/hns3/hns3_common.c
@@ -442,6 +442,7 @@ hns3_set_mc_mac_addr_list(struct rte_eth_dev *dev,
uint32_t nb_mc_addr)
{
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
struct rte_ether_addr *addr;
int cur_addr_num;
int set_addr_num;
@@ -449,6 +450,15 @@ hns3_set_mc_mac_addr_list(struct rte_eth_dev *dev,
int ret;
int i;
+ if (mc_addr_set == NULL || nb_mc_addr == 0) {
+ rte_spinlock_lock(&hw->lock);
+ ret = hns3_configure_all_mc_mac_addr(hns, true);
+ if (ret == 0)
+ hw->mc_addrs_num = 0;
+ rte_spinlock_unlock(&hw->lock);
+ return ret;
+ }
+
/* Check if input parameters are valid */
ret = hns3_set_mc_addr_chk_param(hw, mc_addr_set, nb_mc_addr);
if (ret)
--
2.41.0.windows.2
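
With this change an application can rely on the ethdev API contract to clear
the whole multicast filter list of an hns3 port; a minimal usage sketch
(assuming an already-configured port):

#include <stdio.h>
#include <rte_ethdev.h>

/* Flush every multicast MAC address previously installed on the port. */
static int
flush_mc_list(uint16_t port_id)
{
	int ret = rte_eth_dev_set_mc_addr_list(port_id, NULL, 0);

	if (ret != 0)
		fprintf(stderr, "flushing mc list on port %u failed: %d\n",
			port_id, ret);
	return ret;
}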

@ -0,0 +1,273 @@
From a5b54a960acbdd2c55f60577f7801af096ee84ba Mon Sep 17 00:00:00 2001
From: Chengwen Feng <fengchengwen@huawei.com>
Date: Sat, 5 Aug 2023 16:36:26 +0800
Subject: [PATCH 357/366] net/hns3: fix traffic management thread safety
[ upstream commit 69901040975bff8a38edfc47aee727cadc87d356 ]
The driver's TM (traffic management) info is kept in linked lists.
The following threads read and write this TM info:
1. main thread: invokes the rte_tm_xxx() API family to modify or read it.
2. interrupt thread: reads TM info during the reset-recovery process.
3. telemetry/proc-info thread: invokes the rte_eth_dev_priv_dump() API to
read TM info.
Currently, thread-safety protection of the TM info is implemented only for
the following operations:
1. some of the rte_tm_xxx() API implementations.
2. the reset-recovery process.
Thread-safety risks may exist in other scenarios, so fix this by:
1. making sure all rte_tm_xxx() API implementations are protected by
hw.lock.
2. making sure the rte_eth_dev_priv_dump() API implementation is protected
by hw.lock.
Fixes: c09c7847d892 ("net/hns3: support traffic management")
Fixes: e4cfe6bb9114 ("net/hns3: dump TM configuration info")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
---
drivers/net/hns3/hns3_dump.c | 8 +-
drivers/net/hns3/hns3_tm.c | 173 ++++++++++++++++++++++++++++++-----
2 files changed, 157 insertions(+), 24 deletions(-)
diff --git a/drivers/net/hns3/hns3_dump.c b/drivers/net/hns3/hns3_dump.c
index 7ecfca8..2dc44f2 100644
--- a/drivers/net/hns3/hns3_dump.c
+++ b/drivers/net/hns3/hns3_dump.c
@@ -918,6 +918,8 @@ hns3_eth_dev_priv_dump(struct rte_eth_dev *dev, FILE *file)
struct hns3_adapter *hns = dev->data->dev_private;
struct hns3_hw *hw = &hns->hw;
+ rte_spinlock_lock(&hw->lock);
+
hns3_get_device_basic_info(file, dev);
hns3_get_dev_feature_capability(file, hw);
hns3_get_rxtx_queue_info(file, dev);
@@ -927,8 +929,10 @@ hns3_eth_dev_priv_dump(struct rte_eth_dev *dev, FILE *file)
* VF only supports dumping basic info, feaure capability and queue
* info.
*/
- if (hns->is_vf)
+ if (hns->is_vf) {
+ rte_spinlock_unlock(&hw->lock);
return 0;
+ }
hns3_get_dev_mac_info(file, hns);
hns3_get_vlan_config_info(file, hw);
@@ -936,6 +940,8 @@ hns3_eth_dev_priv_dump(struct rte_eth_dev *dev, FILE *file)
hns3_get_tm_conf_info(file, dev);
hns3_get_flow_ctrl_info(file, dev);
+ rte_spinlock_unlock(&hw->lock);
+
return 0;
}
diff --git a/drivers/net/hns3/hns3_tm.c b/drivers/net/hns3/hns3_tm.c
index e1089b6..67402a7 100644
--- a/drivers/net/hns3/hns3_tm.c
+++ b/drivers/net/hns3/hns3_tm.c
@@ -1081,21 +1081,6 @@ hns3_tm_hierarchy_commit(struct rte_eth_dev *dev,
return -EINVAL;
}
-static int
-hns3_tm_hierarchy_commit_wrap(struct rte_eth_dev *dev,
- int clear_on_fail,
- struct rte_tm_error *error)
-{
- struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- int ret;
-
- rte_spinlock_lock(&hw->lock);
- ret = hns3_tm_hierarchy_commit(dev, clear_on_fail, error);
- rte_spinlock_unlock(&hw->lock);
-
- return ret;
-}
-
static int
hns3_tm_node_shaper_do_update(struct hns3_hw *hw,
uint32_t node_id,
@@ -1195,6 +1180,148 @@ hns3_tm_node_shaper_update(struct rte_eth_dev *dev,
return 0;
}
+static int
+hns3_tm_capabilities_get_wrap(struct rte_eth_dev *dev,
+ struct rte_tm_capabilities *cap,
+ struct rte_tm_error *error)
+{
+ struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ int ret;
+
+ rte_spinlock_lock(&hw->lock);
+ ret = hns3_tm_capabilities_get(dev, cap, error);
+ rte_spinlock_unlock(&hw->lock);
+
+ return ret;
+}
+
+static int
+hns3_tm_shaper_profile_add_wrap(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error)
+{
+ struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ int ret;
+
+ rte_spinlock_lock(&hw->lock);
+ ret = hns3_tm_shaper_profile_add(dev, shaper_profile_id, profile, error);
+ rte_spinlock_unlock(&hw->lock);
+
+ return ret;
+}
+
+static int
+hns3_tm_shaper_profile_del_wrap(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error)
+{
+ struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ int ret;
+
+ rte_spinlock_lock(&hw->lock);
+ ret = hns3_tm_shaper_profile_del(dev, shaper_profile_id, error);
+ rte_spinlock_unlock(&hw->lock);
+
+ return ret;
+}
+
+static int
+hns3_tm_node_add_wrap(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ int ret;
+
+ rte_spinlock_lock(&hw->lock);
+ ret = hns3_tm_node_add(dev, node_id, parent_node_id, priority,
+ weight, level_id, params, error);
+ rte_spinlock_unlock(&hw->lock);
+
+ return ret;
+}
+
+static int
+hns3_tm_node_delete_wrap(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ struct rte_tm_error *error)
+{
+ struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ int ret;
+
+ rte_spinlock_lock(&hw->lock);
+ ret = hns3_tm_node_delete(dev, node_id, error);
+ rte_spinlock_unlock(&hw->lock);
+
+ return ret;
+}
+
+static int
+hns3_tm_node_type_get_wrap(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ int *is_leaf,
+ struct rte_tm_error *error)
+{
+ struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ int ret;
+
+ rte_spinlock_lock(&hw->lock);
+ ret = hns3_tm_node_type_get(dev, node_id, is_leaf, error);
+ rte_spinlock_unlock(&hw->lock);
+
+ return ret;
+}
+
+static int
+hns3_tm_level_capabilities_get_wrap(struct rte_eth_dev *dev,
+ uint32_t level_id,
+ struct rte_tm_level_capabilities *cap,
+ struct rte_tm_error *error)
+{
+ struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ int ret;
+
+ rte_spinlock_lock(&hw->lock);
+ ret = hns3_tm_level_capabilities_get(dev, level_id, cap, error);
+ rte_spinlock_unlock(&hw->lock);
+
+ return ret;
+}
+
+static int
+hns3_tm_node_capabilities_get_wrap(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ struct rte_tm_node_capabilities *cap,
+ struct rte_tm_error *error)
+{
+ struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ int ret;
+
+ rte_spinlock_lock(&hw->lock);
+ ret = hns3_tm_node_capabilities_get(dev, node_id, cap, error);
+ rte_spinlock_unlock(&hw->lock);
+
+ return ret;
+}
+
+static int
+hns3_tm_hierarchy_commit_wrap(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ struct rte_tm_error *error)
+{
+ struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ int ret;
+
+ rte_spinlock_lock(&hw->lock);
+ ret = hns3_tm_hierarchy_commit(dev, clear_on_fail, error);
+ rte_spinlock_unlock(&hw->lock);
+
+ return ret;
+}
+
static int
hns3_tm_node_shaper_update_wrap(struct rte_eth_dev *dev,
uint32_t node_id,
@@ -1213,14 +1340,14 @@ hns3_tm_node_shaper_update_wrap(struct rte_eth_dev *dev,
}
static const struct rte_tm_ops hns3_tm_ops = {
- .capabilities_get = hns3_tm_capabilities_get,
- .shaper_profile_add = hns3_tm_shaper_profile_add,
- .shaper_profile_delete = hns3_tm_shaper_profile_del,
- .node_add = hns3_tm_node_add,
- .node_delete = hns3_tm_node_delete,
- .node_type_get = hns3_tm_node_type_get,
- .level_capabilities_get = hns3_tm_level_capabilities_get,
- .node_capabilities_get = hns3_tm_node_capabilities_get,
+ .capabilities_get = hns3_tm_capabilities_get_wrap,
+ .shaper_profile_add = hns3_tm_shaper_profile_add_wrap,
+ .shaper_profile_delete = hns3_tm_shaper_profile_del_wrap,
+ .node_add = hns3_tm_node_add_wrap,
+ .node_delete = hns3_tm_node_delete_wrap,
+ .node_type_get = hns3_tm_node_type_get_wrap,
+ .level_capabilities_get = hns3_tm_level_capabilities_get_wrap,
+ .node_capabilities_get = hns3_tm_node_capabilities_get_wrap,
.hierarchy_commit = hns3_tm_hierarchy_commit_wrap,
.node_shaper_update = hns3_tm_node_shaper_update_wrap,
};
--
2.41.0.windows.2
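
For illustration, a sketch of two public entry points that may now run
concurrently from different threads (for example the main lcore and the
telemetry/proc-info thread); with this patch the hns3 implementations of
both serialize internally on hw->lock:

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_tm.h>

static void
query_tm_and_dump(uint16_t port_id)
{
	struct rte_tm_capabilities cap;
	struct rte_tm_error error;

	/* typically called from the main thread */
	if (rte_tm_capabilities_get(port_id, &cap, &error) == 0)
		printf("port %u: up to %u TM levels\n", port_id,
		       cap.n_levels_max);

	/* typically called from the proc-info/telemetry thread */
	rte_eth_dev_priv_dump(port_id, stdout);
}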

@ -0,0 +1,107 @@
From c813bce4dfa2c99ec1ddc06cce3adff7b5f5fdef Mon Sep 17 00:00:00 2001
From: Chengwen Feng <fengchengwen@huawei.com>
Date: Sat, 5 Aug 2023 16:36:27 +0800
Subject: [PATCH 358/366] net/hns3: fix traffic management dump text alignment
[ upstream commit a73065bfea87385aa86d8ec2e7b65f68494c4f06 ]
Currently the dumped TM info is not aligned, as shown below:
- TM config info:
-- nb_leaf_nodes_max=64 nb_nodes_max=73
-- nb_shaper_profile=2 nb_tc_node=1 nb_queue_node=1
-- committed=0
shaper_profile:
id=800 reference_count=1 peak_rate=4000000Bps
id=801 reference_count=1 peak_rate=12000000Bps
port_node:
...
This patch fixes it; the new formatting is:
- TM config info:
-- nb_leaf_nodes_max=256 nb_nodes_max=265
-- nb_shaper_profile=2 nb_tc_node=1 nb_queue_node=1
-- committed=1
-- shaper_profile:
id=800 reference_count=0 peak_rate=4000000Bps
id=801 reference_count=0 peak_rate=12000000Bps
-- port_node:
...
Fixes: e4cfe6bb9114 ("net/hns3: dump TM configuration info")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
---
drivers/net/hns3/hns3_dump.c | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/drivers/net/hns3/hns3_dump.c b/drivers/net/hns3/hns3_dump.c
index 2dc44f2..b6e8b62 100644
--- a/drivers/net/hns3/hns3_dump.c
+++ b/drivers/net/hns3/hns3_dump.c
@@ -664,10 +664,10 @@ hns3_get_tm_conf_shaper_info(FILE *file, struct hns3_tm_conf *conf)
if (conf->nb_shaper_profile == 0)
return;
- fprintf(file, " shaper_profile:\n");
+ fprintf(file, "\t -- shaper_profile:\n");
TAILQ_FOREACH(shaper_profile, shaper_profile_list, node) {
fprintf(file,
- " id=%u reference_count=%u peak_rate=%" PRIu64 "Bps\n",
+ "\t id=%u reference_count=%u peak_rate=%" PRIu64 "Bps\n",
shaper_profile->shaper_profile_id,
shaper_profile->reference_count,
shaper_profile->profile.peak.rate);
@@ -681,8 +681,8 @@ hns3_get_tm_conf_port_node_info(FILE *file, struct hns3_tm_conf *conf)
return;
fprintf(file,
- " port_node:\n"
- " node_id=%u reference_count=%u shaper_profile_id=%d\n",
+ "\t -- port_node:\n"
+ "\t node_id=%u reference_count=%u shaper_profile_id=%d\n",
conf->root->id, conf->root->reference_count,
conf->root->shaper_profile ?
(int)conf->root->shaper_profile->shaper_profile_id : -1);
@@ -699,7 +699,7 @@ hns3_get_tm_conf_tc_node_info(FILE *file, struct hns3_tm_conf *conf)
if (conf->nb_tc_node == 0)
return;
- fprintf(file, " tc_node:\n");
+ fprintf(file, "\t -- tc_node:\n");
memset(tc_node, 0, sizeof(tc_node));
TAILQ_FOREACH(tm_node, tc_list, node) {
tidx = hns3_tm_calc_node_tc_no(conf, tm_node->id);
@@ -712,7 +712,7 @@ hns3_get_tm_conf_tc_node_info(FILE *file, struct hns3_tm_conf *conf)
if (tm_node == NULL)
continue;
fprintf(file,
- " id=%u TC%u reference_count=%u parent_id=%d "
+ "\t id=%u TC%u reference_count=%u parent_id=%d "
"shaper_profile_id=%d\n",
tm_node->id, hns3_tm_calc_node_tc_no(conf, tm_node->id),
tm_node->reference_count,
@@ -738,7 +738,7 @@ hns3_get_tm_conf_queue_format_info(FILE *file, struct hns3_tm_node **queue_node,
end_queue_id = (i + 1) * HNS3_PERLINE_QUEUES - 1;
if (end_queue_id > nb_tx_queues - 1)
end_queue_id = nb_tx_queues - 1;
- fprintf(file, " %04u - %04u | ", start_queue_id,
+ fprintf(file, "\t %04u - %04u | ", start_queue_id,
end_queue_id);
for (j = start_queue_id; j < nb_tx_queues; j++) {
if (j >= end_queue_id + 1)
@@ -767,8 +767,8 @@ hns3_get_tm_conf_queue_node_info(FILE *file, struct hns3_tm_conf *conf,
return;
fprintf(file,
- " queue_node:\n"
- " tx queue id | mapped tc (8 mean node not exist)\n");
+ "\t -- queue_node:\n"
+ "\t tx queue id | mapped tc (8 mean node not exist)\n");
memset(queue_node, 0, sizeof(queue_node));
memset(queue_node_tc, 0, sizeof(queue_node_tc));
--
2.41.0.windows.2

@ -0,0 +1,155 @@
From 7739ae6472f1dc986ce72d24ff3fcdd1a1eccc3f Mon Sep 17 00:00:00 2001
From: Huisong Li <lihuisong@huawei.com>
Date: Tue, 11 Jul 2023 18:24:45 +0800
Subject: [PATCH 359/366] net/hns3: fix order in NEON Rx
[ upstream commit 7dd439ed998c36c8d0204c436cc656af08cfa5fc ]
This patch reorders the NEON Rx code for better maintainability
and easier understanding.
Fixes: a3d4f4d291d7 ("net/hns3: support NEON Rx")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
---
drivers/net/hns3/hns3_rxtx_vec_neon.h | 78 +++++++++++----------------
1 file changed, 31 insertions(+), 47 deletions(-)
diff --git a/drivers/net/hns3/hns3_rxtx_vec_neon.h b/drivers/net/hns3/hns3_rxtx_vec_neon.h
index a20a6b6..1048b9d 100644
--- a/drivers/net/hns3/hns3_rxtx_vec_neon.h
+++ b/drivers/net/hns3/hns3_rxtx_vec_neon.h
@@ -180,19 +180,12 @@ hns3_recv_burst_vec(struct hns3_rx_queue *__restrict rxq,
bd_vld = vset_lane_u16(rxdp[2].rx.bdtype_vld_udp0, bd_vld, 2);
bd_vld = vset_lane_u16(rxdp[3].rx.bdtype_vld_udp0, bd_vld, 3);
- /* load 2 mbuf pointer */
- mbp1 = vld1q_u64((uint64_t *)&sw_ring[pos]);
-
bd_vld = vshl_n_u16(bd_vld,
HNS3_UINT16_BIT - 1 - HNS3_RXD_VLD_B);
bd_vld = vreinterpret_u16_s16(
vshr_n_s16(vreinterpret_s16_u16(bd_vld),
HNS3_UINT16_BIT - 1));
stat = ~vget_lane_u64(vreinterpret_u64_u16(bd_vld), 0);
-
- /* load 2 mbuf pointer again */
- mbp2 = vld1q_u64((uint64_t *)&sw_ring[pos + 2]);
-
if (likely(stat == 0))
bd_valid_num = HNS3_DEFAULT_DESCS_PER_LOOP;
else
@@ -200,20 +193,20 @@ hns3_recv_burst_vec(struct hns3_rx_queue *__restrict rxq,
if (bd_valid_num == 0)
break;
- /* use offset to control below data load oper ordering */
- offset = rxq->offset_table[bd_valid_num];
+ /* load 4 mbuf pointer */
+ mbp1 = vld1q_u64((uint64_t *)&sw_ring[pos]);
+ mbp2 = vld1q_u64((uint64_t *)&sw_ring[pos + 2]);
- /* store 2 mbuf pointer into rx_pkts */
+ /* store 4 mbuf pointer into rx_pkts */
vst1q_u64((uint64_t *)&rx_pkts[pos], mbp1);
+ vst1q_u64((uint64_t *)&rx_pkts[pos + 2], mbp2);
- /* read first two descs */
+ /* use offset to control below data load oper ordering */
+ offset = rxq->offset_table[bd_valid_num];
+
+ /* read 4 descs */
descs[0] = vld2q_u64((uint64_t *)(rxdp + offset));
descs[1] = vld2q_u64((uint64_t *)(rxdp + offset + 1));
-
- /* store 2 mbuf pointer into rx_pkts again */
- vst1q_u64((uint64_t *)&rx_pkts[pos + 2], mbp2);
-
- /* read remains two descs */
descs[2] = vld2q_u64((uint64_t *)(rxdp + offset + 2));
descs[3] = vld2q_u64((uint64_t *)(rxdp + offset + 3));
@@ -221,56 +214,47 @@ hns3_recv_burst_vec(struct hns3_rx_queue *__restrict rxq,
pkt_mbuf1.val[1] = vreinterpretq_u8_u64(descs[0].val[1]);
pkt_mbuf2.val[0] = vreinterpretq_u8_u64(descs[1].val[0]);
pkt_mbuf2.val[1] = vreinterpretq_u8_u64(descs[1].val[1]);
+ pkt_mbuf3.val[0] = vreinterpretq_u8_u64(descs[2].val[0]);
+ pkt_mbuf3.val[1] = vreinterpretq_u8_u64(descs[2].val[1]);
+ pkt_mbuf4.val[0] = vreinterpretq_u8_u64(descs[3].val[0]);
+ pkt_mbuf4.val[1] = vreinterpretq_u8_u64(descs[3].val[1]);
- /* pkt 1,2 convert format from desc to pktmbuf */
+ /* 4 packets convert format from desc to pktmbuf */
pkt_mb1 = vqtbl2q_u8(pkt_mbuf1, shuf_desc_fields_msk);
pkt_mb2 = vqtbl2q_u8(pkt_mbuf2, shuf_desc_fields_msk);
+ pkt_mb3 = vqtbl2q_u8(pkt_mbuf3, shuf_desc_fields_msk);
+ pkt_mb4 = vqtbl2q_u8(pkt_mbuf4, shuf_desc_fields_msk);
- /* store the first 8 bytes of pkt 1,2 mbuf's rearm_data */
- *(uint64_t *)&sw_ring[pos + 0].mbuf->rearm_data =
- rxq->mbuf_initializer;
- *(uint64_t *)&sw_ring[pos + 1].mbuf->rearm_data =
- rxq->mbuf_initializer;
-
- /* pkt 1,2 remove crc */
+ /* 4 packets remove crc */
tmp = vsubq_u16(vreinterpretq_u16_u8(pkt_mb1), crc_adjust);
pkt_mb1 = vreinterpretq_u8_u16(tmp);
tmp = vsubq_u16(vreinterpretq_u16_u8(pkt_mb2), crc_adjust);
pkt_mb2 = vreinterpretq_u8_u16(tmp);
+ tmp = vsubq_u16(vreinterpretq_u16_u8(pkt_mb3), crc_adjust);
+ pkt_mb3 = vreinterpretq_u8_u16(tmp);
+ tmp = vsubq_u16(vreinterpretq_u16_u8(pkt_mb4), crc_adjust);
+ pkt_mb4 = vreinterpretq_u8_u16(tmp);
- pkt_mbuf3.val[0] = vreinterpretq_u8_u64(descs[2].val[0]);
- pkt_mbuf3.val[1] = vreinterpretq_u8_u64(descs[2].val[1]);
- pkt_mbuf4.val[0] = vreinterpretq_u8_u64(descs[3].val[0]);
- pkt_mbuf4.val[1] = vreinterpretq_u8_u64(descs[3].val[1]);
-
- /* pkt 3,4 convert format from desc to pktmbuf */
- pkt_mb3 = vqtbl2q_u8(pkt_mbuf3, shuf_desc_fields_msk);
- pkt_mb4 = vqtbl2q_u8(pkt_mbuf4, shuf_desc_fields_msk);
-
- /* pkt 1,2 save to rx_pkts mbuf */
+ /* save packet info to rx_pkts mbuf */
vst1q_u8((void *)&sw_ring[pos + 0].mbuf->rx_descriptor_fields1,
pkt_mb1);
vst1q_u8((void *)&sw_ring[pos + 1].mbuf->rx_descriptor_fields1,
pkt_mb2);
+ vst1q_u8((void *)&sw_ring[pos + 2].mbuf->rx_descriptor_fields1,
+ pkt_mb3);
+ vst1q_u8((void *)&sw_ring[pos + 3].mbuf->rx_descriptor_fields1,
+ pkt_mb4);
- /* pkt 3,4 remove crc */
- tmp = vsubq_u16(vreinterpretq_u16_u8(pkt_mb3), crc_adjust);
- pkt_mb3 = vreinterpretq_u8_u16(tmp);
- tmp = vsubq_u16(vreinterpretq_u16_u8(pkt_mb4), crc_adjust);
- pkt_mb4 = vreinterpretq_u8_u16(tmp);
-
- /* store the first 8 bytes of pkt 3,4 mbuf's rearm_data */
+ /* store the first 8 bytes of packets mbuf's rearm_data */
+ *(uint64_t *)&sw_ring[pos + 0].mbuf->rearm_data =
+ rxq->mbuf_initializer;
+ *(uint64_t *)&sw_ring[pos + 1].mbuf->rearm_data =
+ rxq->mbuf_initializer;
*(uint64_t *)&sw_ring[pos + 2].mbuf->rearm_data =
rxq->mbuf_initializer;
*(uint64_t *)&sw_ring[pos + 3].mbuf->rearm_data =
rxq->mbuf_initializer;
- /* pkt 3,4 save to rx_pkts mbuf */
- vst1q_u8((void *)&sw_ring[pos + 2].mbuf->rx_descriptor_fields1,
- pkt_mb3);
- vst1q_u8((void *)&sw_ring[pos + 3].mbuf->rx_descriptor_fields1,
- pkt_mb4);
-
rte_prefetch_non_temporal(rxdp + HNS3_DEFAULT_DESCS_PER_LOOP);
parse_retcode = hns3_desc_parse_field(rxq, &sw_ring[pos],
--
2.41.0.windows.2

@ -0,0 +1,89 @@
From d967db92088afcb06e7b245109ff35288c8cd3fe Mon Sep 17 00:00:00 2001
From: Huisong Li <lihuisong@huawei.com>
Date: Tue, 11 Jul 2023 18:24:46 +0800
Subject: [PATCH 360/366] net/hns3: optimize free mbuf for SVE Tx
[ upstream commit 01a295b741603b9366366a665402a2667a29fcc3 ]
Currently, hns3 SVE Tx checks the valid bits of all descriptors
in a batch and then decides whether to release the corresponding
mbufs. Actually, once the valid bit of any descriptor in a batch
is not cleared, the driver does not need to scan the remaining
descriptors. If the SVE algorithm for this function is optimized
accordingly, the single-queue performance for 64B packets improves
by ~2% in txonly forwarding mode. If plain C code is used to scan
all descriptors instead, the performance improves by ~8%.
So this patch switches to the C implementation to improve SVE Tx
performance.
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
---
drivers/net/hns3/hns3_rxtx_vec_sve.c | 42 +---------------------------
1 file changed, 1 insertion(+), 41 deletions(-)
diff --git a/drivers/net/hns3/hns3_rxtx_vec_sve.c b/drivers/net/hns3/hns3_rxtx_vec_sve.c
index 6f23ba6..51d4bf3 100644
--- a/drivers/net/hns3/hns3_rxtx_vec_sve.c
+++ b/drivers/net/hns3/hns3_rxtx_vec_sve.c
@@ -337,46 +337,6 @@ hns3_recv_pkts_vec_sve(void *__restrict rx_queue,
return nb_rx;
}
-static inline void
-hns3_tx_free_buffers_sve(struct hns3_tx_queue *txq)
-{
-#define HNS3_SVE_CHECK_DESCS_PER_LOOP 8
-#define TX_VLD_U8_ZIP_INDEX svindex_u8(0, 4)
- svbool_t pg32 = svwhilelt_b32(0, HNS3_SVE_CHECK_DESCS_PER_LOOP);
- svuint32_t vld, vld2;
- svuint8_t vld_u8;
- uint64_t vld_all;
- struct hns3_desc *tx_desc;
- int i;
-
- /*
- * All mbufs can be released only when the VLD bits of all
- * descriptors in a batch are cleared.
- */
- /* do logical OR operation for all desc's valid field */
- vld = svdup_n_u32(0);
- tx_desc = &txq->tx_ring[txq->next_to_clean];
- for (i = 0; i < txq->tx_rs_thresh; i += HNS3_SVE_CHECK_DESCS_PER_LOOP,
- tx_desc += HNS3_SVE_CHECK_DESCS_PER_LOOP) {
- vld2 = svld1_gather_u32offset_u32(pg32, (uint32_t *)tx_desc,
- svindex_u32(BD_FIELD_VALID_OFFSET, BD_SIZE));
- vld = svorr_u32_z(pg32, vld, vld2);
- }
- /* shift left and then right to get all valid bit */
- vld = svlsl_n_u32_z(pg32, vld,
- HNS3_UINT32_BIT - 1 - HNS3_TXD_VLD_B);
- vld = svreinterpret_u32_s32(svasr_n_s32_z(pg32,
- svreinterpret_s32_u32(vld), HNS3_UINT32_BIT - 1));
- /* use tbl to compress 32bit-lane to 8bit-lane */
- vld_u8 = svtbl_u8(svreinterpret_u8_u32(vld), TX_VLD_U8_ZIP_INDEX);
- /* dump compressed 64bit to variable */
- svst1_u64(PG64_64BIT, &vld_all, svreinterpret_u64_u8(vld_u8));
- if (vld_all > 0)
- return;
-
- hns3_tx_bulk_free_buffers(txq);
-}
-
static inline void
hns3_tx_fill_hw_ring_sve(struct hns3_tx_queue *txq,
struct rte_mbuf **pkts,
@@ -457,7 +417,7 @@ hns3_xmit_fixed_burst_vec_sve(void *__restrict tx_queue,
uint16_t nb_tx = 0;
if (txq->tx_bd_ready < txq->tx_free_thresh)
- hns3_tx_free_buffers_sve(txq);
+ hns3_tx_free_buffers(txq);
nb_pkts = RTE_MIN(txq->tx_bd_ready, nb_pkts);
if (unlikely(nb_pkts == 0)) {
--
2.41.0.windows.2
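
A sketch (hypothetical types and names, not the actual hns3_tx_free_buffers())
of the plain C scan the commit message refers to: stop as soon as one
descriptor in the batch still has its VLD bit set, and only bulk-free the
mbufs once the whole batch is done.

#include <stdint.h>

struct tx_desc {            /* simplified descriptor: only the field we test */
	uint32_t valid_word;
};

static int
tx_batch_done(const struct tx_desc *desc, unsigned int nb_desc,
	      uint32_t vld_mask)
{
	unsigned int i;

	for (i = 0; i < nb_desc; i++) {
		if (desc[i].valid_word & vld_mask)
			return 0;   /* hardware still owns this descriptor */
	}
	return 1;                   /* all done: safe to bulk-free the mbufs */
}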

@ -0,0 +1,223 @@
From 133dbfed220120724a60a2b7deae5ec7d4c38301 Mon Sep 17 00:00:00 2001
From: Huisong Li <lihuisong@huawei.com>
Date: Tue, 11 Jul 2023 18:24:47 +0800
Subject: [PATCH 361/366] net/hns3: optimize rearm mbuf for SVE Rx
[ upstream commit d49b64477f246e53210488825fdd92ccf53fa184 ]
Use hns3_rxq_rearm_mbuf() instead of hns3_rxq_rearm_mbuf_sve()
to optimize the performance of SVE Rx.
In rxonly forwarding mode, the single-queue performance for 64B
packets is improved by ~15%.
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
---
drivers/net/hns3/hns3_rxtx_vec.c | 51 ---------------------------
drivers/net/hns3/hns3_rxtx_vec.h | 51 +++++++++++++++++++++++++++
drivers/net/hns3/hns3_rxtx_vec_sve.c | 52 ++--------------------------
3 files changed, 53 insertions(+), 101 deletions(-)
diff --git a/drivers/net/hns3/hns3_rxtx_vec.c b/drivers/net/hns3/hns3_rxtx_vec.c
index 153866c..5cdfa60 100644
--- a/drivers/net/hns3/hns3_rxtx_vec.c
+++ b/drivers/net/hns3/hns3_rxtx_vec.c
@@ -55,57 +55,6 @@ hns3_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
return nb_tx;
}
-static inline void
-hns3_rxq_rearm_mbuf(struct hns3_rx_queue *rxq)
-{
-#define REARM_LOOP_STEP_NUM 4
- struct hns3_entry *rxep = &rxq->sw_ring[rxq->rx_rearm_start];
- struct hns3_desc *rxdp = rxq->rx_ring + rxq->rx_rearm_start;
- uint64_t dma_addr;
- int i;
-
- if (unlikely(rte_mempool_get_bulk(rxq->mb_pool, (void *)rxep,
- HNS3_DEFAULT_RXQ_REARM_THRESH) < 0)) {
- rte_eth_devices[rxq->port_id].data->rx_mbuf_alloc_failed++;
- return;
- }
-
- for (i = 0; i < HNS3_DEFAULT_RXQ_REARM_THRESH; i += REARM_LOOP_STEP_NUM,
- rxep += REARM_LOOP_STEP_NUM, rxdp += REARM_LOOP_STEP_NUM) {
- if (likely(i <
- HNS3_DEFAULT_RXQ_REARM_THRESH - REARM_LOOP_STEP_NUM)) {
- rte_prefetch_non_temporal(rxep[4].mbuf);
- rte_prefetch_non_temporal(rxep[5].mbuf);
- rte_prefetch_non_temporal(rxep[6].mbuf);
- rte_prefetch_non_temporal(rxep[7].mbuf);
- }
-
- dma_addr = rte_mbuf_data_iova_default(rxep[0].mbuf);
- rxdp[0].addr = rte_cpu_to_le_64(dma_addr);
- rxdp[0].rx.bd_base_info = 0;
-
- dma_addr = rte_mbuf_data_iova_default(rxep[1].mbuf);
- rxdp[1].addr = rte_cpu_to_le_64(dma_addr);
- rxdp[1].rx.bd_base_info = 0;
-
- dma_addr = rte_mbuf_data_iova_default(rxep[2].mbuf);
- rxdp[2].addr = rte_cpu_to_le_64(dma_addr);
- rxdp[2].rx.bd_base_info = 0;
-
- dma_addr = rte_mbuf_data_iova_default(rxep[3].mbuf);
- rxdp[3].addr = rte_cpu_to_le_64(dma_addr);
- rxdp[3].rx.bd_base_info = 0;
- }
-
- rxq->rx_rearm_start += HNS3_DEFAULT_RXQ_REARM_THRESH;
- if (rxq->rx_rearm_start >= rxq->nb_rx_desc)
- rxq->rx_rearm_start = 0;
-
- rxq->rx_rearm_nb -= HNS3_DEFAULT_RXQ_REARM_THRESH;
-
- hns3_write_reg_opt(rxq->io_head_reg, HNS3_DEFAULT_RXQ_REARM_THRESH);
-}
-
uint16_t
hns3_recv_pkts_vec(void *__restrict rx_queue,
struct rte_mbuf **__restrict rx_pkts,
diff --git a/drivers/net/hns3/hns3_rxtx_vec.h b/drivers/net/hns3/hns3_rxtx_vec.h
index 2c8a919..a9a6774 100644
--- a/drivers/net/hns3/hns3_rxtx_vec.h
+++ b/drivers/net/hns3/hns3_rxtx_vec.h
@@ -94,4 +94,55 @@ hns3_rx_reassemble_pkts(struct rte_mbuf **rx_pkts,
return count;
}
+
+static inline void
+hns3_rxq_rearm_mbuf(struct hns3_rx_queue *rxq)
+{
+#define REARM_LOOP_STEP_NUM 4
+ struct hns3_entry *rxep = &rxq->sw_ring[rxq->rx_rearm_start];
+ struct hns3_desc *rxdp = rxq->rx_ring + rxq->rx_rearm_start;
+ uint64_t dma_addr;
+ int i;
+
+ if (unlikely(rte_mempool_get_bulk(rxq->mb_pool, (void *)rxep,
+ HNS3_DEFAULT_RXQ_REARM_THRESH) < 0)) {
+ rte_eth_devices[rxq->port_id].data->rx_mbuf_alloc_failed++;
+ return;
+ }
+
+ for (i = 0; i < HNS3_DEFAULT_RXQ_REARM_THRESH; i += REARM_LOOP_STEP_NUM,
+ rxep += REARM_LOOP_STEP_NUM, rxdp += REARM_LOOP_STEP_NUM) {
+ if (likely(i <
+ HNS3_DEFAULT_RXQ_REARM_THRESH - REARM_LOOP_STEP_NUM)) {
+ rte_prefetch_non_temporal(rxep[4].mbuf);
+ rte_prefetch_non_temporal(rxep[5].mbuf);
+ rte_prefetch_non_temporal(rxep[6].mbuf);
+ rte_prefetch_non_temporal(rxep[7].mbuf);
+ }
+
+ dma_addr = rte_mbuf_data_iova_default(rxep[0].mbuf);
+ rxdp[0].addr = rte_cpu_to_le_64(dma_addr);
+ rxdp[0].rx.bd_base_info = 0;
+
+ dma_addr = rte_mbuf_data_iova_default(rxep[1].mbuf);
+ rxdp[1].addr = rte_cpu_to_le_64(dma_addr);
+ rxdp[1].rx.bd_base_info = 0;
+
+ dma_addr = rte_mbuf_data_iova_default(rxep[2].mbuf);
+ rxdp[2].addr = rte_cpu_to_le_64(dma_addr);
+ rxdp[2].rx.bd_base_info = 0;
+
+ dma_addr = rte_mbuf_data_iova_default(rxep[3].mbuf);
+ rxdp[3].addr = rte_cpu_to_le_64(dma_addr);
+ rxdp[3].rx.bd_base_info = 0;
+ }
+
+ rxq->rx_rearm_start += HNS3_DEFAULT_RXQ_REARM_THRESH;
+ if (rxq->rx_rearm_start >= rxq->nb_rx_desc)
+ rxq->rx_rearm_start = 0;
+
+ rxq->rx_rearm_nb -= HNS3_DEFAULT_RXQ_REARM_THRESH;
+
+ hns3_write_reg_opt(rxq->io_head_reg, HNS3_DEFAULT_RXQ_REARM_THRESH);
+}
#endif /* HNS3_RXTX_VEC_H */
diff --git a/drivers/net/hns3/hns3_rxtx_vec_sve.c b/drivers/net/hns3/hns3_rxtx_vec_sve.c
index 51d4bf3..1251939 100644
--- a/drivers/net/hns3/hns3_rxtx_vec_sve.c
+++ b/drivers/net/hns3/hns3_rxtx_vec_sve.c
@@ -237,54 +237,6 @@ hns3_recv_burst_vec_sve(struct hns3_rx_queue *__restrict rxq,
return nb_rx;
}
-static inline void
-hns3_rxq_rearm_mbuf_sve(struct hns3_rx_queue *rxq)
-{
-#define REARM_LOOP_STEP_NUM 4
- struct hns3_entry *rxep = &rxq->sw_ring[rxq->rx_rearm_start];
- struct hns3_desc *rxdp = rxq->rx_ring + rxq->rx_rearm_start;
- struct hns3_entry *rxep_tmp = rxep;
- int i;
-
- if (unlikely(rte_mempool_get_bulk(rxq->mb_pool, (void *)rxep,
- HNS3_DEFAULT_RXQ_REARM_THRESH) < 0)) {
- rte_eth_devices[rxq->port_id].data->rx_mbuf_alloc_failed++;
- return;
- }
-
- for (i = 0; i < HNS3_DEFAULT_RXQ_REARM_THRESH; i += REARM_LOOP_STEP_NUM,
- rxep_tmp += REARM_LOOP_STEP_NUM) {
- svuint64_t prf = svld1_u64(PG64_256BIT, (uint64_t *)rxep_tmp);
- svprfd_gather_u64base(PG64_256BIT, prf, SV_PLDL1STRM);
- }
-
- for (i = 0; i < HNS3_DEFAULT_RXQ_REARM_THRESH; i += REARM_LOOP_STEP_NUM,
- rxep += REARM_LOOP_STEP_NUM, rxdp += REARM_LOOP_STEP_NUM) {
- uint64_t iova[REARM_LOOP_STEP_NUM];
- iova[0] = rxep[0].mbuf->buf_iova;
- iova[1] = rxep[1].mbuf->buf_iova;
- iova[2] = rxep[2].mbuf->buf_iova;
- iova[3] = rxep[3].mbuf->buf_iova;
- svuint64_t siova = svld1_u64(PG64_256BIT, iova);
- siova = svadd_n_u64_z(PG64_256BIT, siova, RTE_PKTMBUF_HEADROOM);
- svuint64_t ol_base = svdup_n_u64(0);
- svst1_scatter_u64offset_u64(PG64_256BIT,
- (uint64_t *)&rxdp[0].addr,
- svindex_u64(BD_FIELD_ADDR_OFFSET, BD_SIZE), siova);
- svst1_scatter_u64offset_u64(PG64_256BIT,
- (uint64_t *)&rxdp[0].addr,
- svindex_u64(BD_FIELD_OL_OFFSET, BD_SIZE), ol_base);
- }
-
- rxq->rx_rearm_start += HNS3_DEFAULT_RXQ_REARM_THRESH;
- if (rxq->rx_rearm_start >= rxq->nb_rx_desc)
- rxq->rx_rearm_start = 0;
-
- rxq->rx_rearm_nb -= HNS3_DEFAULT_RXQ_REARM_THRESH;
-
- hns3_write_reg_opt(rxq->io_head_reg, HNS3_DEFAULT_RXQ_REARM_THRESH);
-}
-
uint16_t
hns3_recv_pkts_vec_sve(void *__restrict rx_queue,
struct rte_mbuf **__restrict rx_pkts,
@@ -300,7 +252,7 @@ hns3_recv_pkts_vec_sve(void *__restrict rx_queue,
nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, HNS3_SVE_DEFAULT_DESCS_PER_LOOP);
if (rxq->rx_rearm_nb > HNS3_DEFAULT_RXQ_REARM_THRESH)
- hns3_rxq_rearm_mbuf_sve(rxq);
+ hns3_rxq_rearm_mbuf(rxq);
if (unlikely(!(rxdp->rx.bd_base_info &
rte_cpu_to_le_32(1u << HNS3_RXD_VLD_B))))
@@ -331,7 +283,7 @@ hns3_recv_pkts_vec_sve(void *__restrict rx_queue,
break;
if (rxq->rx_rearm_nb > HNS3_DEFAULT_RXQ_REARM_THRESH)
- hns3_rxq_rearm_mbuf_sve(rxq);
+ hns3_rxq_rearm_mbuf(rxq);
}
return nb_rx;
--
2.41.0.windows.2

@ -0,0 +1,242 @@
From 5e6c0f58eff79c06edf3638108c096e792b81a3b Mon Sep 17 00:00:00 2001
From: Huisong Li <lihuisong@huawei.com>
Date: Tue, 11 Jul 2023 18:24:48 +0800
Subject: [PATCH 362/366] net/hns3: optimize SVE Rx performance
[ upstream commit f1ad6decfbd44c3dc2d73dcda3fa8fb37b140186 ]
This patch optimizes SVE Rx performance in the following ways:
1> optimize the calculation of the number of valid BDs.
2> remove a temporary variable (key_fields).
3> parse some descriptor fields in C instead of with SVE
instructions.
4> prefetch descriptors in smaller steps.
In rxonly forwarding mode, the single-queue performance for 64B
packets is improved by ~40%.
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
---
drivers/net/hns3/hns3_rxtx_vec_sve.c | 137 ++++++---------------------
1 file changed, 27 insertions(+), 110 deletions(-)
diff --git a/drivers/net/hns3/hns3_rxtx_vec_sve.c b/drivers/net/hns3/hns3_rxtx_vec_sve.c
index 1251939..88b484d 100644
--- a/drivers/net/hns3/hns3_rxtx_vec_sve.c
+++ b/drivers/net/hns3/hns3_rxtx_vec_sve.c
@@ -20,40 +20,36 @@
#define BD_SIZE 32
#define BD_FIELD_ADDR_OFFSET 0
-#define BD_FIELD_L234_OFFSET 8
-#define BD_FIELD_XLEN_OFFSET 12
-#define BD_FIELD_RSS_OFFSET 16
-#define BD_FIELD_OL_OFFSET 24
#define BD_FIELD_VALID_OFFSET 28
-typedef struct {
- uint32_t l234_info[HNS3_SVE_DEFAULT_DESCS_PER_LOOP];
- uint32_t ol_info[HNS3_SVE_DEFAULT_DESCS_PER_LOOP];
- uint32_t bd_base_info[HNS3_SVE_DEFAULT_DESCS_PER_LOOP];
-} HNS3_SVE_KEY_FIELD_S;
-
static inline uint32_t
hns3_desc_parse_field_sve(struct hns3_rx_queue *rxq,
struct rte_mbuf **rx_pkts,
- HNS3_SVE_KEY_FIELD_S *key,
+ struct hns3_desc *rxdp,
uint32_t bd_vld_num)
{
+ uint32_t l234_info, ol_info, bd_base_info;
uint32_t retcode = 0;
int ret, i;
for (i = 0; i < (int)bd_vld_num; i++) {
/* init rte_mbuf.rearm_data last 64-bit */
rx_pkts[i]->ol_flags = RTE_MBUF_F_RX_RSS_HASH;
-
- ret = hns3_handle_bdinfo(rxq, rx_pkts[i], key->bd_base_info[i],
- key->l234_info[i]);
+ rx_pkts[i]->hash.rss = rxdp[i].rx.rss_hash;
+ rx_pkts[i]->pkt_len = rte_le_to_cpu_16(rxdp[i].rx.pkt_len) -
+ rxq->crc_len;
+ rx_pkts[i]->data_len = rx_pkts[i]->pkt_len;
+
+ l234_info = rxdp[i].rx.l234_info;
+ ol_info = rxdp[i].rx.ol_info;
+ bd_base_info = rxdp[i].rx.bd_base_info;
+ ret = hns3_handle_bdinfo(rxq, rx_pkts[i], bd_base_info, l234_info);
if (unlikely(ret)) {
retcode |= 1u << i;
continue;
}
- rx_pkts[i]->packet_type = hns3_rx_calc_ptype(rxq,
- key->l234_info[i], key->ol_info[i]);
+ rx_pkts[i]->packet_type = hns3_rx_calc_ptype(rxq, l234_info, ol_info);
/* Increment bytes counter */
rxq->basic_stats.bytes += rx_pkts[i]->pkt_len;
@@ -77,46 +73,16 @@ hns3_recv_burst_vec_sve(struct hns3_rx_queue *__restrict rxq,
uint16_t nb_pkts,
uint64_t *bd_err_mask)
{
-#define XLEN_ADJUST_LEN 32
-#define RSS_ADJUST_LEN 16
-#define GEN_VLD_U8_ZIP_INDEX svindex_s8(28, -4)
uint16_t rx_id = rxq->next_to_use;
struct hns3_entry *sw_ring = &rxq->sw_ring[rx_id];
struct hns3_desc *rxdp = &rxq->rx_ring[rx_id];
- struct hns3_desc *rxdp2;
- HNS3_SVE_KEY_FIELD_S key_field;
+ struct hns3_desc *rxdp2, *next_rxdp;
uint64_t bd_valid_num;
uint32_t parse_retcode;
uint16_t nb_rx = 0;
int pos, offset;
- uint16_t xlen_adjust[XLEN_ADJUST_LEN] = {
- 0, 0xffff, 1, 0xffff, /* 1st mbuf: pkt_len and dat_len */
- 2, 0xffff, 3, 0xffff, /* 2st mbuf: pkt_len and dat_len */
- 4, 0xffff, 5, 0xffff, /* 3st mbuf: pkt_len and dat_len */
- 6, 0xffff, 7, 0xffff, /* 4st mbuf: pkt_len and dat_len */
- 8, 0xffff, 9, 0xffff, /* 5st mbuf: pkt_len and dat_len */
- 10, 0xffff, 11, 0xffff, /* 6st mbuf: pkt_len and dat_len */
- 12, 0xffff, 13, 0xffff, /* 7st mbuf: pkt_len and dat_len */
- 14, 0xffff, 15, 0xffff, /* 8st mbuf: pkt_len and dat_len */
- };
-
- uint32_t rss_adjust[RSS_ADJUST_LEN] = {
- 0, 0xffff, /* 1st mbuf: rss */
- 1, 0xffff, /* 2st mbuf: rss */
- 2, 0xffff, /* 3st mbuf: rss */
- 3, 0xffff, /* 4st mbuf: rss */
- 4, 0xffff, /* 5st mbuf: rss */
- 5, 0xffff, /* 6st mbuf: rss */
- 6, 0xffff, /* 7st mbuf: rss */
- 7, 0xffff, /* 8st mbuf: rss */
- };
-
svbool_t pg32 = svwhilelt_b32(0, HNS3_SVE_DEFAULT_DESCS_PER_LOOP);
- svuint16_t xlen_tbl1 = svld1_u16(PG16_256BIT, xlen_adjust);
- svuint16_t xlen_tbl2 = svld1_u16(PG16_256BIT, &xlen_adjust[16]);
- svuint32_t rss_tbl1 = svld1_u32(PG32_256BIT, rss_adjust);
- svuint32_t rss_tbl2 = svld1_u32(PG32_256BIT, &rss_adjust[8]);
/* compile-time verifies the xlen_adjust mask */
RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_len) !=
@@ -126,30 +92,21 @@ hns3_recv_burst_vec_sve(struct hns3_rx_queue *__restrict rxq,
for (pos = 0; pos < nb_pkts; pos += HNS3_SVE_DEFAULT_DESCS_PER_LOOP,
rxdp += HNS3_SVE_DEFAULT_DESCS_PER_LOOP) {
- svuint64_t vld_clz, mbp1st, mbp2st, mbuf_init;
- svuint64_t xlen1st, xlen2st, rss1st, rss2st;
- svuint32_t l234, ol, vld, vld2, xlen, rss;
- svuint8_t vld_u8;
+ svuint64_t mbp1st, mbp2st, mbuf_init;
+ svuint32_t vld;
+ svbool_t vld_op;
/* calc how many bd valid: part 1 */
vld = svld1_gather_u32offset_u32(pg32, (uint32_t *)rxdp,
svindex_u32(BD_FIELD_VALID_OFFSET, BD_SIZE));
- vld2 = svlsl_n_u32_z(pg32, vld,
- HNS3_UINT32_BIT - 1 - HNS3_RXD_VLD_B);
- vld2 = svreinterpret_u32_s32(svasr_n_s32_z(pg32,
- svreinterpret_s32_u32(vld2), HNS3_UINT32_BIT - 1));
+ vld = svand_n_u32_z(pg32, vld, BIT(HNS3_RXD_VLD_B));
+ vld_op = svcmpne_n_u32(pg32, vld, BIT(HNS3_RXD_VLD_B));
+ bd_valid_num = svcntp_b32(pg32, svbrkb_b_z(pg32, vld_op));
+ if (bd_valid_num == 0)
+ break;
/* load 4 mbuf pointer */
mbp1st = svld1_u64(PG64_256BIT, (uint64_t *)&sw_ring[pos]);
-
- /* calc how many bd valid: part 2 */
- vld_u8 = svtbl_u8(svreinterpret_u8_u32(vld2),
- svreinterpret_u8_s8(GEN_VLD_U8_ZIP_INDEX));
- vld_clz = svnot_u64_z(PG64_64BIT, svreinterpret_u64_u8(vld_u8));
- vld_clz = svclz_u64_z(PG64_64BIT, vld_clz);
- svst1_u64(PG64_64BIT, &bd_valid_num, vld_clz);
- bd_valid_num /= HNS3_UINT8_BIT;
-
/* load 4 more mbuf pointer */
mbp2st = svld1_u64(PG64_256BIT, (uint64_t *)&sw_ring[pos + 4]);
@@ -159,65 +116,25 @@ hns3_recv_burst_vec_sve(struct hns3_rx_queue *__restrict rxq,
/* store 4 mbuf pointer into rx_pkts */
svst1_u64(PG64_256BIT, (uint64_t *)&rx_pkts[pos], mbp1st);
-
- /* load key field to vector reg */
- l234 = svld1_gather_u32offset_u32(pg32, (uint32_t *)rxdp2,
- svindex_u32(BD_FIELD_L234_OFFSET, BD_SIZE));
- ol = svld1_gather_u32offset_u32(pg32, (uint32_t *)rxdp2,
- svindex_u32(BD_FIELD_OL_OFFSET, BD_SIZE));
-
/* store 4 mbuf pointer into rx_pkts again */
svst1_u64(PG64_256BIT, (uint64_t *)&rx_pkts[pos + 4], mbp2st);
- /* load datalen, pktlen and rss_hash */
- xlen = svld1_gather_u32offset_u32(pg32, (uint32_t *)rxdp2,
- svindex_u32(BD_FIELD_XLEN_OFFSET, BD_SIZE));
- rss = svld1_gather_u32offset_u32(pg32, (uint32_t *)rxdp2,
- svindex_u32(BD_FIELD_RSS_OFFSET, BD_SIZE));
-
- /* store key field to stash buffer */
- svst1_u32(pg32, (uint32_t *)key_field.l234_info, l234);
- svst1_u32(pg32, (uint32_t *)key_field.bd_base_info, vld);
- svst1_u32(pg32, (uint32_t *)key_field.ol_info, ol);
-
- /* sub crc_len for pkt_len and data_len */
- xlen = svreinterpret_u32_u16(svsub_n_u16_z(PG16_256BIT,
- svreinterpret_u16_u32(xlen), rxq->crc_len));
-
/* init mbuf_initializer */
mbuf_init = svdup_n_u64(rxq->mbuf_initializer);
-
- /* extract datalen, pktlen and rss from xlen and rss */
- xlen1st = svreinterpret_u64_u16(
- svtbl_u16(svreinterpret_u16_u32(xlen), xlen_tbl1));
- xlen2st = svreinterpret_u64_u16(
- svtbl_u16(svreinterpret_u16_u32(xlen), xlen_tbl2));
- rss1st = svreinterpret_u64_u32(
- svtbl_u32(svreinterpret_u32_u32(rss), rss_tbl1));
- rss2st = svreinterpret_u64_u32(
- svtbl_u32(svreinterpret_u32_u32(rss), rss_tbl2));
-
/* save mbuf_initializer */
svst1_scatter_u64base_offset_u64(PG64_256BIT, mbp1st,
offsetof(struct rte_mbuf, rearm_data), mbuf_init);
svst1_scatter_u64base_offset_u64(PG64_256BIT, mbp2st,
offsetof(struct rte_mbuf, rearm_data), mbuf_init);
- /* save datalen and pktlen and rss */
- svst1_scatter_u64base_offset_u64(PG64_256BIT, mbp1st,
- offsetof(struct rte_mbuf, pkt_len), xlen1st);
- svst1_scatter_u64base_offset_u64(PG64_256BIT, mbp1st,
- offsetof(struct rte_mbuf, hash.rss), rss1st);
- svst1_scatter_u64base_offset_u64(PG64_256BIT, mbp2st,
- offsetof(struct rte_mbuf, pkt_len), xlen2st);
- svst1_scatter_u64base_offset_u64(PG64_256BIT, mbp2st,
- offsetof(struct rte_mbuf, hash.rss), rss2st);
-
- rte_prefetch_non_temporal(rxdp +
- HNS3_SVE_DEFAULT_DESCS_PER_LOOP);
+ next_rxdp = rxdp + HNS3_SVE_DEFAULT_DESCS_PER_LOOP;
+ rte_prefetch_non_temporal(next_rxdp);
+ rte_prefetch_non_temporal(next_rxdp + 2);
+ rte_prefetch_non_temporal(next_rxdp + 4);
+ rte_prefetch_non_temporal(next_rxdp + 6);
parse_retcode = hns3_desc_parse_field_sve(rxq, &rx_pkts[pos],
- &key_field, bd_valid_num);
+ &rxdp2[offset], bd_valid_num);
if (unlikely(parse_retcode))
(*bd_err_mask) |= ((uint64_t)parse_retcode) << pos;
--
2.41.0.windows.2

@ -0,0 +1,97 @@
From 9b13302cec30ec70d2aedcd024bde4db57bc8eaa Mon Sep 17 00:00:00 2001
From: Ke Zhang <ke1x.zhang@intel.com>
Date: Fri, 25 Mar 2022 08:35:55 +0000
Subject: [PATCH 363/366] app/testpmd: fix multicast address pool leak
[ upstream commit 68629be3a622ee53cd5b40c8447ae9b083ff3f6c ]
A multicast address pool is allocated for a port when
the mcast_addr testpmd commands are used.
When closing a port or stopping testpmd, this pool was
not freed, resulting in a leak.
This issue has been caught using ASan.
Free this pool when closing the port.
The ASan error info is as follows:
ERROR: LeakSanitizer: detected memory leaks
Direct leak of 192 byte(s)
0 0x7f6a2e0aeffe in __interceptor_realloc
(/lib/x86_64-linux-gnu/libasan.so.5+0x10dffe)
1 0x565361eb340f in mcast_addr_pool_extend
../app/test-pmd/config.c:5162
2 0x565361eb3556 in mcast_addr_pool_append
../app/test-pmd/config.c:5180
3 0x565361eb3aae in mcast_addr_add
../app/test-pmd/config.c:5243
Fixes: 8fff667578a7 ("app/testpmd: new command to add/remove multicast MAC addresses")
Cc: stable@dpdk.org
Signed-off-by: Ke Zhang <ke1x.zhang@intel.com>
Acked-by: Yuying Zhang <yuying.zhang@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@xilinx.com>
---
app/test-pmd/config.c | 19 +++++++++++++++++++
app/test-pmd/testpmd.c | 1 +
app/test-pmd/testpmd.h | 1 +
3 files changed, 21 insertions(+)
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 22c63e2..61dc56f 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -5364,6 +5364,25 @@ mcast_addr_pool_remove(struct rte_port *port, uint32_t addr_idx)
sizeof(struct rte_ether_addr) * (port->mc_addr_nb - addr_idx));
}
+int
+mcast_addr_pool_destroy(portid_t port_id)
+{
+ struct rte_port *port;
+
+ if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+ port_id == (portid_t)RTE_PORT_ALL)
+ return -EINVAL;
+ port = &ports[port_id];
+
+ if (port->mc_addr_nb != 0) {
+ /* free the pool of multicast addresses. */
+ free(port->mc_addr_pool);
+ port->mc_addr_pool = NULL;
+ port->mc_addr_nb = 0;
+ }
+ return 0;
+}
+
static int
eth_port_multicast_addr_list_set(portid_t port_id)
{
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 20134c5..6f59bd2 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -3284,6 +3284,7 @@ close_port(portid_t pid)
}
if (is_proc_primary()) {
+ mcast_addr_pool_destroy(pi);
port_flow_flush(pi);
port_flex_item_flush(pi);
port_action_handle_flush(pi);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index be7454a..54d3112 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -906,6 +906,7 @@ int port_flow_create(portid_t port_id,
int port_action_handle_query(portid_t port_id, uint32_t id);
void update_age_action_context(const struct rte_flow_action *actions,
struct port_flow *pf);
+int mcast_addr_pool_destroy(portid_t port_id);
int port_flow_destroy(portid_t port_id, uint32_t n, const uint32_t *rule);
int port_flow_flush(portid_t port_id);
int port_flow_dump(portid_t port_id, bool dump_all,
--
2.41.0.windows.2

@ -0,0 +1,40 @@
From 21f694a2c28879a863dc255e7800ee31aac5c068 Mon Sep 17 00:00:00 2001
From: Dengdui Huang <huangdengdui@huawei.com>
Date: Sun, 8 Oct 2023 14:46:19 +0800
Subject: [PATCH 364/366] app/testpmd: fix help string
[ upstream commit 42661fb8f18e52684d0d9f0d376017082fca45e0 ]
The command help string is missing 'mcast_addr add|remove'.
This patch adds it.
Fixes: 8fff667578a7 ("app/testpmd: new command to add/remove multicast MAC addresses")
Cc: stable@dpdk.org
Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
app/test-pmd/cmdline.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index bc770f3..ec8f385 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -504,6 +504,12 @@ static void cmd_help_long_parsed(void *parsed_result,
"mac_addr add port (port_id) vf (vf_id) (mac_address)\n"
" Add a MAC address for a VF on the port.\n\n"
+ "mcast_addr add (port_id) (mcast_addr)\n"
+ " Add a multicast MAC addresses on port_id.\n\n"
+
+ "mcast_addr remove (port_id) (mcast_addr)\n"
+ " Remove a multicast MAC address from port_id.\n\n"
+
"set vf mac addr (port_id) (vf_id) (XX:XX:XX:XX:XX:XX)\n"
" Set the MAC address for a VF from the PF.\n\n"
--
2.41.0.windows.2

@ -0,0 +1,152 @@
From c2f8baf727df5d43ba3e1366037d31bd6185b77d Mon Sep 17 00:00:00 2001
From: Dengdui Huang <huangdengdui@huawei.com>
Date: Sun, 8 Oct 2023 14:46:20 +0800
Subject: [PATCH 365/366] app/testpmd: add command to flush multicast MAC addresses
[ upstream commit ef8bd7d0b25abdcc425d4a7e399c66957b15b935 ]
Add a command to flush all multicast MAC addresses.
Usage:
mcast_addr flush <port_id> :
flush all multicast MAC addresses on port_id
Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
app/test-pmd/cmdline.c | 43 +++++++++++++++++++++
app/test-pmd/config.c | 18 +++++++++
app/test-pmd/testpmd.h | 1 +
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 7 ++++
4 files changed, 69 insertions(+)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index ec8f385..8facca3 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -510,6 +510,9 @@ static void cmd_help_long_parsed(void *parsed_result,
"mcast_addr remove (port_id) (mcast_addr)\n"
" Remove a multicast MAC address from port_id.\n\n"
+ "mcast_addr flush (port_id)\n"
+ " Flush all multicast MAC addresses on port_id.\n\n"
+
"set vf mac addr (port_id) (vf_id) (XX:XX:XX:XX:XX:XX)\n"
" Set the MAC address for a VF from the PF.\n\n"
@@ -11004,6 +11007,45 @@ cmdline_parse_inst_t cmd_mcast_addr = {
},
};
+/* *** FLUSH MULTICAST MAC ADDRESS ON PORT *** */
+struct cmd_mcast_addr_flush_result {
+ cmdline_fixed_string_t mcast_addr_cmd;
+ cmdline_fixed_string_t what;
+ uint16_t port_num;
+};
+
+static void cmd_mcast_addr_flush_parsed(void *parsed_result,
+ __rte_unused struct cmdline *cl,
+ __rte_unused void *data)
+{
+ struct cmd_mcast_addr_flush_result *res = parsed_result;
+
+ mcast_addr_flush(res->port_num);
+}
+
+static cmdline_parse_token_string_t cmd_mcast_addr_flush_cmd =
+ TOKEN_STRING_INITIALIZER(struct cmd_mcast_addr_result,
+ mcast_addr_cmd, "mcast_addr");
+static cmdline_parse_token_string_t cmd_mcast_addr_flush_what =
+ TOKEN_STRING_INITIALIZER(struct cmd_mcast_addr_result, what,
+ "flush");
+static cmdline_parse_token_num_t cmd_mcast_addr_flush_portnum =
+ TOKEN_NUM_INITIALIZER(struct cmd_mcast_addr_result, port_num,
+ RTE_UINT16);
+
+static cmdline_parse_inst_t cmd_mcast_addr_flush = {
+ .f = cmd_mcast_addr_flush_parsed,
+ .data = (void *)0,
+ .help_str = "mcast_addr flush <port_id> : "
+ "flush all multicast MAC addresses on port_id",
+ .tokens = {
+ (void *)&cmd_mcast_addr_flush_cmd,
+ (void *)&cmd_mcast_addr_flush_what,
+ (void *)&cmd_mcast_addr_flush_portnum,
+ NULL,
+ },
+};
+
/* vf vlan anti spoof configuration */
/* Common result structure for vf vlan anti spoof */
@@ -17867,6 +17909,7 @@ cmdline_parse_ctx_t main_ctx[] = {
(cmdline_parse_inst_t *)&cmd_set_port_meter_stats_mask,
(cmdline_parse_inst_t *)&cmd_show_port_meter_stats,
(cmdline_parse_inst_t *)&cmd_mcast_addr,
+ (cmdline_parse_inst_t *)&cmd_mcast_addr_flush,
(cmdline_parse_inst_t *)&cmd_set_vf_vlan_anti_spoof,
(cmdline_parse_inst_t *)&cmd_set_vf_mac_anti_spoof,
(cmdline_parse_inst_t *)&cmd_set_vf_vlan_stripq,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 61dc56f..af00078 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -5459,6 +5459,24 @@ mcast_addr_remove(portid_t port_id, struct rte_ether_addr *mc_addr)
mcast_addr_pool_append(port, mc_addr);
}
+void
+mcast_addr_flush(portid_t port_id)
+{
+ int ret;
+
+ if (port_id_is_invalid(port_id, ENABLED_WARN))
+ return;
+
+ ret = rte_eth_dev_set_mc_addr_list(port_id, NULL, 0);
+ if (ret != 0) {
+ fprintf(stderr,
+ "Failed to flush all multicast MAC addresses on port_id %u\n",
+ port_id);
+ return;
+ }
+ mcast_addr_pool_destroy(port_id);
+}
+
void
port_dcb_info_display(portid_t port_id)
{
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 54d3112..30c7177 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -1051,6 +1051,7 @@ void show_mcast_macs(portid_t port_id);
/* Functions to manage the set of filtered Multicast MAC addresses */
void mcast_addr_add(portid_t port_id, struct rte_ether_addr *mc_addr);
void mcast_addr_remove(portid_t port_id, struct rte_ether_addr *mc_addr);
+void mcast_addr_flush(portid_t port_id);
void port_dcb_info_display(portid_t port_id);
uint8_t *open_file(const char *file_path, uint32_t *size);
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index ecf89aa..c33c845 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1406,6 +1406,13 @@ filtered by port::
testpmd> mcast_addr remove (port_id) (mcast_addr)
+mcast_addr flush
+~~~~~~~~~~~~~~~~
+
+Flush all multicast MAC addresses on port_id::
+
+ testpmd> mcast_addr flush (port_id)
+
mac_addr add (for VF)
~~~~~~~~~~~~~~~~~~~~~
--
2.41.0.windows.2

@ -0,0 +1,33 @@
From d743e25356ecfda7dcfc029c4e6a5d46fd80bce1 Mon Sep 17 00:00:00 2001
From: Jie Hai <haijie1@huawei.com>
Date: Tue, 26 Sep 2023 18:04:05 +0800
Subject: [PATCH] maintainers: update for hns3 driver
[ upstream commit 5e4b7cad5119956df1ca9f0d22d1429399a5c818 ]
Dongdong Liu does not currently work on the hns3 PMD.
I will take over this work, so update the hns3 maintainers.
Signed-off-by: Jie Hai <haijie1@huawei.com>
---
MAINTAINERS | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/MAINTAINERS b/MAINTAINERS
index 7a28fec..7db6d4e 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -695,9 +695,8 @@ F: doc/guides/nics/enic.rst
F: doc/guides/nics/features/enic.ini
Hisilicon hns3
-M: Min Hu (Connor) <humin29@huawei.com>
+M: Jie Hai <haijie1@huawei.com>
M: Yisen Zhuang <yisen.zhuang@huawei.com>
-M: Lijun Ou <oulijun@huawei.com>
F: drivers/net/hns3/
F: doc/guides/nics/hns3.rst
F: doc/guides/nics/features/hns3.ini
--
2.41.0.windows.2


@ -0,0 +1,38 @@
From 0ba973a96681d5c5f85423176d63c14f8cbc1c25 Mon Sep 17 00:00:00 2001
From: Chengwen Feng <fengchengwen@huawei.com>
Date: Thu, 9 Feb 2023 01:25:33 +0000
Subject: [PATCH 367/394] telemetry: fix repeat display when callback don't
init dict
[ upstream commit ff50c4f9136781bae9089c596e0a12d113e1d474 ]
When a telemetry callback doesn't initialize the telemetry data
structure and returns a non-negative number, telemetry repeatedly
displays the last result. This patch zeroes the data structure to
avoid the problem.
Fixes: 6dd571fd07c3 ("telemetry: introduce new functionality")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Reviewed-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/telemetry/telemetry.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/telemetry/telemetry.c b/lib/telemetry/telemetry.c
index 52048de55c..2c12db20cb 100644
--- a/lib/telemetry/telemetry.c
+++ b/lib/telemetry/telemetry.c
@@ -332,7 +332,7 @@ output_json(const char *cmd, const struct rte_tel_data *d, int s)
static void
perform_command(telemetry_cb fn, const char *cmd, const char *param, int s)
{
- struct rte_tel_data data;
+ struct rte_tel_data data = {0};
int ret = fn(cmd, param, &data);
if (ret < 0) {
--
2.23.0
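
To illustrate the class of callback this change guards against: a handler that returns a non-negative value without touching the dict used to leave the previous command's contents in place, which is what the zero-initialization now prevents. A minimal sketch of a well-behaved callback (command name and handler are hypothetical, not part of the patch):

    #include <rte_common.h>
    #include <rte_telemetry.h>

    /* Hypothetical handler: always populate the rte_tel_data it is given
     * before returning success, so nothing stale can be re-displayed. */
    static int
    handle_example_status(const char *cmd __rte_unused,
                          const char *params __rte_unused,
                          struct rte_tel_data *d)
    {
            rte_tel_data_start_dict(d);
            rte_tel_data_add_dict_int(d, "status", 0);
            return 0;
    }

Such a handler would be hooked up with rte_telemetry_register_cmd("/example/status", handle_example_status, "Report example status.").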


@ -0,0 +1,50 @@
From 86aadc9fdf971e0f261572d01fe5fa7cbcfda385 Mon Sep 17 00:00:00 2001
From: Jerin Jacob <jerinj@marvell.com>
Date: Tue, 4 Apr 2023 12:25:25 +0530
Subject: [PATCH 368/394] net/hns3: fix build warning
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
[ upstream commit 60fe5c3cfc3c28952448d2163c4eb1d22d86ccac ]
An aarch64 gcc 12.2.0 build complains with the warning below [1].
Move the new_link initialization upwards to fix the warning.
[1]
drivers/net/hns3/hns3_ethdev.c: In function hns3_dev_link_update:
drivers/net/hns3/hns3_ethdev.c:2249:1:
warning: new_link may be used uninitialized [-Wmaybe-uninitialized]
Fixes: 64308555d5bf ("net/hns3: fix link status when port is stopped")
Cc: stable@dpdk.org
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Dongdong Liu <liudongdong3@huawei.com>
---
drivers/net/hns3/hns3_ethdev.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 6c3ae75c4d..ad595478a7 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2275,6 +2275,7 @@ hns3_dev_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
struct rte_eth_link new_link;
int ret;
+ memset(&new_link, 0, sizeof(new_link));
/* When port is stopped, report link down. */
if (eth_dev->data->dev_started == 0) {
new_link.link_autoneg = mac->link_autoneg;
@@ -2298,7 +2299,6 @@ hns3_dev_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
rte_delay_ms(HNS3_LINK_CHECK_INTERVAL);
} while (retry_cnt--);
- memset(&new_link, 0, sizeof(new_link));
hns3_setup_linkstatus(eth_dev, &new_link);
out:
--
2.23.0


@ -0,0 +1,44 @@
From e1aae46f2f2185c5d3b0d33a4db8452d9c5129b3 Mon Sep 17 00:00:00 2001
From: Jie Hai <haijie1@huawei.com>
Date: Fri, 27 Oct 2023 14:09:39 +0800
Subject: [PATCH 369/394] net/hns3: fix typo in function name
[ upstream commit 28ad38dd7403d64b3c0aa6dfd33e314bdce276c6 ]
This patch fixes a typo.
Fixes: c09c7847d892 ("net/hns3: support traffic management")
Cc: stable@dpdk.org
Signed-off-by: Jie Hai <haijie1@huawei.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
---
drivers/net/hns3/hns3_tm.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/hns3/hns3_tm.c b/drivers/net/hns3/hns3_tm.c
index 67402a700f..d969164014 100644
--- a/drivers/net/hns3/hns3_tm.c
+++ b/drivers/net/hns3/hns3_tm.c
@@ -739,7 +739,7 @@ hns3_tm_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
}
static void
-hns3_tm_nonleaf_level_capsbilities_get(struct rte_eth_dev *dev,
+hns3_tm_nonleaf_level_capabilities_get(struct rte_eth_dev *dev,
uint32_t level_id,
struct rte_tm_level_capabilities *cap)
{
@@ -818,7 +818,7 @@ hns3_tm_level_capabilities_get(struct rte_eth_dev *dev,
memset(cap, 0, sizeof(struct rte_tm_level_capabilities));
if (level_id != HNS3_TM_NODE_LEVEL_QUEUE)
- hns3_tm_nonleaf_level_capsbilities_get(dev, level_id, cap);
+ hns3_tm_nonleaf_level_capabilities_get(dev, level_id, cap);
else
hns3_tm_leaf_level_capabilities_get(dev, cap);
--
2.23.0


@ -0,0 +1,44 @@
From e21bdbf93b0ec692c86d9457a23acb3e3209243b Mon Sep 17 00:00:00 2001
From: Dengdui Huang <huangdengdui@huawei.com>
Date: Fri, 27 Oct 2023 14:09:40 +0800
Subject: [PATCH 370/394] net/hns3: fix unchecked Rx free threshold
[ upstream commit c1f0cd3a4c834c2e550370b6d31b6bcd456a15f9 ]
To reduce the frequency of updating the head pointer of the Rx queue,
the driver only updates this pointer when the number of processed
descriptors is greater than the Rx free threshold. If the Rx free
threshold is set to a value greater than or equal to the number of
descriptors in the Rx queue, the driver never updates this pointer.
As a result, the hardware cannot receive more packets.
This patch fixes it by adding an Rx free threshold check.
Fixes: bba636698316 ("net/hns3: support Rx/Tx and related operations")
Cc: stable@dpdk.org
Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
---
drivers/net/hns3/hns3_rxtx.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 4c79163e3f..208c725cd5 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1785,6 +1785,12 @@ hns3_rx_queue_conf_check(struct hns3_hw *hw, const struct rte_eth_rxconf *conf,
return -EINVAL;
}
+ if (conf->rx_free_thresh >= nb_desc) {
+ hns3_err(hw, "rx_free_thresh (%u) must be less than %u",
+ conf->rx_free_thresh, nb_desc);
+ return -EINVAL;
+ }
+
if (conf->rx_drop_en == 0)
hns3_warn(hw, "if no descriptors available, packets are always "
"dropped and rx_drop_en (1) is fixed on");
--
2.23.0
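
From the application side, the constraint the new check enforces is simply that rx_free_thresh stays strictly below the ring size passed to queue setup. A hedged configuration sketch (helper name and values are examples only):

    #include <rte_ethdev.h>
    #include <rte_mempool.h>

    /* Example Rx queue setup: rx_free_thresh must be less than nb_desc,
     * otherwise the head pointer is never updated and the queue stops
     * receiving, the condition hns3 now rejects with -EINVAL. */
    static int
    setup_rxq(uint16_t port_id, uint16_t qid, struct rte_mempool *pool)
    {
            const uint16_t nb_desc = 1024;
            struct rte_eth_rxconf rxconf = {
                    .rx_free_thresh = 64,   /* 64 < 1024: accepted */
                    .rx_drop_en = 1,
            };

            return rte_eth_rx_queue_setup(port_id, qid, nb_desc,
                                          rte_eth_dev_socket_id(port_id),
                                          &rxconf, pool);
    }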


@ -0,0 +1,64 @@
From 090826e4646db4a438336c5e9e879f2fa5a6e07a Mon Sep 17 00:00:00 2001
From: Huisong Li <lihuisong@huawei.com>
Date: Fri, 27 Oct 2023 14:09:41 +0800
Subject: [PATCH 371/394] net/hns3: fix crash for NEON and SVE
[ upstream commit 01843ab2f2fc8c3137258ec39b2cb6f62ba7b8a2 ]
The driver may fail to allocate mbufs in bulk for the Neon and SVE
paths when rearming mbufs. Currently, the driver keeps handling
packets even if no descriptors are available to receive packets at
that moment. As a result, the driver may hand mbufs with invalid data
to the application and access an illegal address, because the VLD bit
of the descriptor at the "rx_rearm_start" position is still set.
So the driver has to clear the VLD bit of this descriptor in this
scenario to avoid receiving packets on it later.
In addition, in this scenario the sum of "rx_rearm_nb" and
"rx_rearm_start" can be greater than the total descriptor number of
the Rx queue. So the index into rxq->sw_ring[] used to set the mbuf
pointer to NULL should also be fixed to avoid out-of-bounds memory
access.
Fixes: a3d4f4d291d7 ("net/hns3: support NEON Rx")
Fixes: f81a18f49152 ("net/hns3: fix mbuf leakage when RxQ started after reset")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
---
drivers/net/hns3/hns3_rxtx.c | 2 +-
drivers/net/hns3/hns3_rxtx_vec.h | 5 +++++
2 files changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 208c725cd5..3054d24080 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -51,7 +51,7 @@ hns3_rx_queue_release_mbufs(struct hns3_rx_queue *rxq)
}
}
for (i = 0; i < rxq->rx_rearm_nb; i++)
- rxq->sw_ring[rxq->rx_rearm_start + i].mbuf = NULL;
+ rxq->sw_ring[(rxq->rx_rearm_start + i) % rxq->nb_rx_desc].mbuf = NULL;
}
for (i = 0; i < rxq->bulk_mbuf_num; i++)
diff --git a/drivers/net/hns3/hns3_rxtx_vec.h b/drivers/net/hns3/hns3_rxtx_vec.h
index a9a6774294..9018e79c2f 100644
--- a/drivers/net/hns3/hns3_rxtx_vec.h
+++ b/drivers/net/hns3/hns3_rxtx_vec.h
@@ -106,6 +106,11 @@ hns3_rxq_rearm_mbuf(struct hns3_rx_queue *rxq)
if (unlikely(rte_mempool_get_bulk(rxq->mb_pool, (void *)rxep,
HNS3_DEFAULT_RXQ_REARM_THRESH) < 0)) {
+ /*
+ * Clear VLD bit for the first descriptor rearmed in case
+ * of going to receive packets later.
+ */
+ rxdp[0].rx.bd_base_info = 0;
rte_eth_devices[rxq->port_id].data->rx_mbuf_alloc_failed++;
return;
}
--
2.23.0
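
The out-of-bounds part of the fix is ordinary ring-index arithmetic: when the rearm region wraps past the end of the ring, the software-ring index must be taken modulo the descriptor count, which is what the added "% rxq->nb_rx_desc" does. A generic sketch of the same idea (standalone, not driver code):

    #include <stddef.h>
    #include <stdint.h>

    /* Clear 'n' entries of a ring of 'ring_size' slots starting at 'start',
     * wrapping around instead of running past the end of the array. */
    static void
    clear_ring_entries(void **ring, uint16_t ring_size,
                       uint16_t start, uint16_t n)
    {
            uint16_t i;

            for (i = 0; i < n; i++)
                    ring[(start + i) % ring_size] = NULL;
    }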


@ -0,0 +1,61 @@
From 9406299efd12990e91299d6abbe1b191d0360101 Mon Sep 17 00:00:00 2001
From: Dengdui Huang <huangdengdui@huawei.com>
Date: Fri, 27 Oct 2023 14:09:42 +0800
Subject: [PATCH 372/394] net/hns3: fix double stats for IMP and global reset
[ upstream commit c48e74370c5eafbe8db5c826a797344e4fdf8f49 ]
There is a stats counter for IMP and global resets in the PF driver.
The hns3 driver has the following two tasks to detect a reset event:
(1) interrupt handler task (A): triggered by an interrupt, it detects
which reset level occurred, and the reset service is executed
after 10us.
(2) polling task (B): it scans the reset source register to detect
whether the driver has to do a reset, and the reset service is
executed after a 3s delay.
Both tasks increment the counter for one reset by 1.
Task (A) increments it before doing the reset service. And in the reset
service, task (B) increments it if hw->reset.schedule is 'SCHEDULE_REQUESTED'.
Normally, this reset counter is incremented only once. Unfortunately,
it is incremented by 2 in the following case:
1. Task (B) detects the reset event, like IMP. hw->reset.schedule is
set to 'SCHEDULE_REQUESTED'.
2. Task (A) is triggered just before running the reset service of task (B).
Note: the reset counter is incremented by 1 at this moment, before the
reset service of task (A) runs. Additionally, the reset service of
task (B) is canceled in task (A) because the schedule status is
'SCHEDULE_REQUESTED'.
3. Then the reset service of task (A) is executed at last.
Note: the reset counter is incremented by 1 again in this step because
the schedule status is still 'SCHEDULE_REQUESTED'.
So this patch fixes it by setting the scheduling status to
'SCHEDULE_REQUESTED' in step 2.
Fixes: 2790c6464725 ("net/hns3: support device reset")
Cc: stable@dpdk.org
Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
---
drivers/net/hns3/hns3_intr.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/hns3/hns3_intr.c b/drivers/net/hns3/hns3_intr.c
index 57679254ee..51711244b5 100644
--- a/drivers/net/hns3/hns3_intr.c
+++ b/drivers/net/hns3/hns3_intr.c
@@ -2413,8 +2413,8 @@ hns3_schedule_reset(struct hns3_adapter *hns)
if (__atomic_load_n(&hw->reset.schedule, __ATOMIC_RELAXED) ==
SCHEDULE_DEFERRED)
rte_eal_alarm_cancel(hw->reset.ops->reset_service, hns);
- else
- __atomic_store_n(&hw->reset.schedule, SCHEDULE_REQUESTED,
+
+ __atomic_store_n(&hw->reset.schedule, SCHEDULE_REQUESTED,
__ATOMIC_RELAXED);
rte_eal_alarm_set(SWITCH_CONTEXT_US, hw->reset.ops->reset_service, hns);
--
2.23.0


@ -0,0 +1,74 @@
From 0593fced9d1946d55c95c8dea448217f0867faff Mon Sep 17 00:00:00 2001
From: Dengdui Huang <huangdengdui@huawei.com>
Date: Fri, 27 Oct 2023 14:09:43 +0800
Subject: [PATCH 373/394] net/hns3: remove reset log in secondary
[ upstream commit 5394df455749f60614a19d791d1d73c26b74dea1 ]
The reset event is checked and handled in the primary process, and the
secondary process doesn't check or display the reset log. There is a
patch that removes the check code for the secondary process, please
see commit a8f1f7cf1b42 ("net/hns3: fix crash when secondary process
access FW").
This patch removes the redundant reset log print.
Fixes: a8f1f7cf1b42 ("net/hns3: fix crash when secondary process access FW")
Cc: stable@dpdk.org
Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
---
drivers/net/hns3/hns3_ethdev.c | 11 +++++------
drivers/net/hns3/hns3_ethdev_vf.c | 11 +++++------
2 files changed, 10 insertions(+), 12 deletions(-)
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index ad595478a7..185f211591 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -5512,14 +5512,13 @@ hns3_is_reset_pending(struct hns3_adapter *hns)
enum hns3_reset_level reset;
/*
- * Check the registers to confirm whether there is reset pending.
- * Note: This check may lead to schedule reset task, but only primary
- * process can process the reset event. Therefore, limit the
- * checking under only primary process.
+ * Only primary can process can process the reset event,
+ * so don't check reset event in secondary.
*/
- if (rte_eal_process_type() == RTE_PROC_PRIMARY)
- hns3_check_event_cause(hns, NULL);
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return false;
+ hns3_check_event_cause(hns, NULL);
reset = hns3_get_reset_level(hns, &hw->reset.pending);
if (reset != HNS3_NONE_RESET && hw->reset.level != HNS3_NONE_RESET &&
hw->reset.level < reset) {
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 02fb4a84cf..003071c6ff 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -1796,14 +1796,13 @@ hns3vf_is_reset_pending(struct hns3_adapter *hns)
return false;
/*
- * Check the registers to confirm whether there is reset pending.
- * Note: This check may lead to schedule reset task, but only primary
- * process can process the reset event. Therefore, limit the
- * checking under only primary process.
+ * Only primary can process can process the reset event,
+ * so don't check reset event in secondary.
*/
- if (rte_eal_process_type() == RTE_PROC_PRIMARY)
- hns3vf_check_event_cause(hns, NULL);
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return false;
+ hns3vf_check_event_cause(hns, NULL);
reset = hns3vf_get_reset_level(hw, &hw->reset.pending);
if (hw->reset.level != HNS3_NONE_RESET && reset != HNS3_NONE_RESET &&
hw->reset.level < reset) {
--
2.23.0


@ -0,0 +1,165 @@
From c5628ce4a2c2203e172cd70e6d876bd215f650ed Mon Sep 17 00:00:00 2001
From: Dengdui Huang <huangdengdui@huawei.com>
Date: Fri, 27 Oct 2023 14:09:44 +0800
Subject: [PATCH 374/394] net/hns3: fix multiple reset detected log
[ upstream commit 5be38fc6c0fc7e54d0121bab2fe93a27b8e8f7ab ]
Currently, the driver proactively checks whether an interrupt exists
(by checking the reset registers) and schedules the related delayed
reset task. When a reset whose level is equal to or lower than the
current level is detected, there is no need to add the delayed task
and print logs.
This patch fixes it.
Fixes: 2790c6464725 ("net/hns3: support device reset")
Cc: stable@dpdk.org
Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
---
drivers/net/hns3/hns3_ethdev.c | 64 ++++++++++++++++++++--------------
1 file changed, 37 insertions(+), 27 deletions(-)
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 185f211591..8c96c8a964 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -124,42 +124,29 @@ hns3_pf_enable_irq0(struct hns3_hw *hw)
}
static enum hns3_evt_cause
-hns3_proc_imp_reset_event(struct hns3_adapter *hns, bool is_delay,
- uint32_t *vec_val)
+hns3_proc_imp_reset_event(struct hns3_adapter *hns, uint32_t *vec_val)
{
struct hns3_hw *hw = &hns->hw;
__atomic_store_n(&hw->reset.disable_cmd, 1, __ATOMIC_RELAXED);
hns3_atomic_set_bit(HNS3_IMP_RESET, &hw->reset.pending);
*vec_val = BIT(HNS3_VECTOR0_IMPRESET_INT_B);
- if (!is_delay) {
- hw->reset.stats.imp_cnt++;
- hns3_warn(hw, "IMP reset detected, clear reset status");
- } else {
- hns3_schedule_delayed_reset(hns);
- hns3_warn(hw, "IMP reset detected, don't clear reset status");
- }
+ hw->reset.stats.imp_cnt++;
+ hns3_warn(hw, "IMP reset detected, clear reset status");
return HNS3_VECTOR0_EVENT_RST;
}
static enum hns3_evt_cause
-hns3_proc_global_reset_event(struct hns3_adapter *hns, bool is_delay,
- uint32_t *vec_val)
+hns3_proc_global_reset_event(struct hns3_adapter *hns, uint32_t *vec_val)
{
struct hns3_hw *hw = &hns->hw;
__atomic_store_n(&hw->reset.disable_cmd, 1, __ATOMIC_RELAXED);
hns3_atomic_set_bit(HNS3_GLOBAL_RESET, &hw->reset.pending);
*vec_val = BIT(HNS3_VECTOR0_GLOBALRESET_INT_B);
- if (!is_delay) {
- hw->reset.stats.global_cnt++;
- hns3_warn(hw, "Global reset detected, clear reset status");
- } else {
- hns3_schedule_delayed_reset(hns);
- hns3_warn(hw,
- "Global reset detected, don't clear reset status");
- }
+ hw->reset.stats.global_cnt++;
+ hns3_warn(hw, "Global reset detected, clear reset status");
return HNS3_VECTOR0_EVENT_RST;
}
@@ -173,14 +160,12 @@ hns3_check_event_cause(struct hns3_adapter *hns, uint32_t *clearval)
uint32_t hw_err_src_reg;
uint32_t val;
enum hns3_evt_cause ret;
- bool is_delay;
/* fetch the events from their corresponding regs */
vector0_int_stats = hns3_read_dev(hw, HNS3_VECTOR0_OTHER_INT_STS_REG);
cmdq_src_val = hns3_read_dev(hw, HNS3_VECTOR0_CMDQ_SRC_REG);
hw_err_src_reg = hns3_read_dev(hw, HNS3_RAS_PF_OTHER_INT_STS_REG);
- is_delay = clearval == NULL ? true : false;
/*
* Assumption: If by any chance reset and mailbox events are reported
* together then we will only process reset event and defer the
@@ -189,13 +174,13 @@ hns3_check_event_cause(struct hns3_adapter *hns, uint32_t *clearval)
* from H/W just for the mailbox.
*/
if (BIT(HNS3_VECTOR0_IMPRESET_INT_B) & vector0_int_stats) { /* IMP */
- ret = hns3_proc_imp_reset_event(hns, is_delay, &val);
+ ret = hns3_proc_imp_reset_event(hns, &val);
goto out;
}
/* Global reset */
if (BIT(HNS3_VECTOR0_GLOBALRESET_INT_B) & vector0_int_stats) {
- ret = hns3_proc_global_reset_event(hns, is_delay, &val);
+ ret = hns3_proc_global_reset_event(hns, &val);
goto out;
}
@@ -224,10 +209,9 @@ hns3_check_event_cause(struct hns3_adapter *hns, uint32_t *clearval)
val = vector0_int_stats;
ret = HNS3_VECTOR0_EVENT_OTHER;
-out:
- if (clearval)
- *clearval = val;
+out:
+ *clearval = val;
return ret;
}
@@ -5505,6 +5489,32 @@ is_pf_reset_done(struct hns3_hw *hw)
return true;
}
+static void
+hns3_detect_reset_event(struct hns3_hw *hw)
+{
+ struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+ enum hns3_reset_level new_req = HNS3_NONE_RESET;
+ enum hns3_reset_level last_req;
+ uint32_t vector0_intr_state;
+
+ last_req = hns3_get_reset_level(hns, &hw->reset.pending);
+ vector0_intr_state = hns3_read_dev(hw, HNS3_VECTOR0_OTHER_INT_STS_REG);
+ if (BIT(HNS3_VECTOR0_IMPRESET_INT_B) & vector0_intr_state) {
+ __atomic_store_n(&hw->reset.disable_cmd, 1, __ATOMIC_RELAXED);
+ hns3_atomic_set_bit(HNS3_IMP_RESET, &hw->reset.pending);
+ new_req = HNS3_IMP_RESET;
+ } else if (BIT(HNS3_VECTOR0_GLOBALRESET_INT_B) & vector0_intr_state) {
+ __atomic_store_n(&hw->reset.disable_cmd, 1, __ATOMIC_RELAXED);
+ hns3_atomic_set_bit(HNS3_GLOBAL_RESET, &hw->reset.pending);
+ new_req = HNS3_GLOBAL_RESET;
+ }
+
+ if (new_req != HNS3_NONE_RESET && last_req < new_req) {
+ hns3_schedule_delayed_reset(hns);
+ hns3_warn(hw, "High level reset detected, delay do reset");
+ }
+}
+
bool
hns3_is_reset_pending(struct hns3_adapter *hns)
{
@@ -5518,7 +5528,7 @@ hns3_is_reset_pending(struct hns3_adapter *hns)
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return false;
- hns3_check_event_cause(hns, NULL);
+ hns3_detect_reset_event(hw);
reset = hns3_get_reset_level(hns, &hw->reset.pending);
if (reset != HNS3_NONE_RESET && hw->reset.level != HNS3_NONE_RESET &&
hw->reset.level < reset) {
--
2.23.0


@ -0,0 +1,229 @@
From 2bf782a351fe9e5bd7155e5be9548fa2569aa6dc Mon Sep 17 00:00:00 2001
From: Dengdui Huang <huangdengdui@huawei.com>
Date: Fri, 27 Oct 2023 14:09:45 +0800
Subject: [PATCH 375/394] net/hns3: fix IMP or global reset
[ upstream commit 1eee1ea75c0eadaea6dde368b289cf0acf6a1190 ]
Currently, when an IMP or global reset is detected, the vector0
interrupt is enabled before the reset process is completed.
At this moment, if the initialization of the IMP is not completed,
the vector0 interrupt may continue to be reported. In this
scenario, the IMP/global reset being performed by the driver
does not need to be interrupted. Therefore, for IMP and global
resets, the driver has to enable the interrupt after the end
of the reset.
The RAS interrupt is also shared with the vector0 interrupt.
When the interrupt is disabled, the RAS interrupt can still be
reported to the driver and the driver interrupt processing
function is also called. In this case, the interrupt status of
the IMP/global reset may still exist. Therefore, this patch also
adds a check of the new reset level, based on the reset level
priority, in the interrupt handler.
Fixes: 2790c6464725 ("net/hns3: support device reset")
Fixes: 3988ab0eee52 ("net/hns3: add abnormal interrupt process")
Cc: stable@dpdk.org
Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
---
drivers/net/hns3/hns3_ethdev.c | 88 ++++++++++++++++++++++++++++------
drivers/net/hns3/hns3_ethdev.h | 1 +
drivers/net/hns3/hns3_intr.c | 2 +
3 files changed, 77 insertions(+), 14 deletions(-)
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 8c96c8a964..0f201b8b99 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -215,6 +215,30 @@ hns3_check_event_cause(struct hns3_adapter *hns, uint32_t *clearval)
return ret;
}
+void
+hns3_clear_reset_event(struct hns3_hw *hw)
+{
+ uint32_t clearval = 0;
+
+ switch (hw->reset.level) {
+ case HNS3_IMP_RESET:
+ clearval = BIT(HNS3_VECTOR0_IMPRESET_INT_B);
+ break;
+ case HNS3_GLOBAL_RESET:
+ clearval = BIT(HNS3_VECTOR0_GLOBALRESET_INT_B);
+ break;
+ default:
+ break;
+ }
+
+ if (clearval == 0)
+ return;
+
+ hns3_write_dev(hw, HNS3_MISC_RESET_STS_REG, clearval);
+
+ hns3_pf_enable_irq0(hw);
+}
+
static void
hns3_clear_event_cause(struct hns3_hw *hw, uint32_t event_type, uint32_t regclr)
{
@@ -287,6 +311,34 @@ hns3_delay_before_clear_event_cause(struct hns3_hw *hw, uint32_t event_type, uin
}
}
+static bool
+hns3_reset_event_valid(struct hns3_hw *hw)
+{
+ struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+ enum hns3_reset_level new_req = HNS3_NONE_RESET;
+ enum hns3_reset_level last_req;
+ uint32_t vector0_int;
+
+ vector0_int = hns3_read_dev(hw, HNS3_VECTOR0_OTHER_INT_STS_REG);
+ if (BIT(HNS3_VECTOR0_IMPRESET_INT_B) & vector0_int)
+ new_req = HNS3_IMP_RESET;
+ else if (BIT(HNS3_VECTOR0_GLOBALRESET_INT_B) & vector0_int)
+ new_req = HNS3_GLOBAL_RESET;
+ if (new_req == HNS3_NONE_RESET)
+ return true;
+
+ last_req = hns3_get_reset_level(hns, &hw->reset.pending);
+ if (last_req == HNS3_NONE_RESET)
+ return true;
+
+ if (new_req > last_req)
+ return true;
+
+ hns3_warn(hw, "last_req (%u) less than or equal to new_req (%u) ignore",
+ last_req, new_req);
+ return false;
+}
+
static void
hns3_interrupt_handler(void *param)
{
@@ -299,6 +351,9 @@ hns3_interrupt_handler(void *param)
uint32_t ras_int;
uint32_t cmdq_int;
+ if (!hns3_reset_event_valid(hw))
+ return;
+
/* Disable interrupt */
hns3_pf_disable_irq0(hw);
@@ -327,7 +382,11 @@ hns3_interrupt_handler(void *param)
}
/* Enable interrupt if it is not cause by reset */
- hns3_pf_enable_irq0(hw);
+ if (event_cause == HNS3_VECTOR0_EVENT_ERR ||
+ event_cause == HNS3_VECTOR0_EVENT_MBX ||
+ event_cause == HNS3_VECTOR0_EVENT_PTP ||
+ event_cause == HNS3_VECTOR0_EVENT_OTHER)
+ hns3_pf_enable_irq0(hw);
}
static int
@@ -5489,7 +5548,7 @@ is_pf_reset_done(struct hns3_hw *hw)
return true;
}
-static void
+static enum hns3_reset_level
hns3_detect_reset_event(struct hns3_hw *hw)
{
struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
@@ -5501,11 +5560,9 @@ hns3_detect_reset_event(struct hns3_hw *hw)
vector0_intr_state = hns3_read_dev(hw, HNS3_VECTOR0_OTHER_INT_STS_REG);
if (BIT(HNS3_VECTOR0_IMPRESET_INT_B) & vector0_intr_state) {
__atomic_store_n(&hw->reset.disable_cmd, 1, __ATOMIC_RELAXED);
- hns3_atomic_set_bit(HNS3_IMP_RESET, &hw->reset.pending);
new_req = HNS3_IMP_RESET;
} else if (BIT(HNS3_VECTOR0_GLOBALRESET_INT_B) & vector0_intr_state) {
__atomic_store_n(&hw->reset.disable_cmd, 1, __ATOMIC_RELAXED);
- hns3_atomic_set_bit(HNS3_GLOBAL_RESET, &hw->reset.pending);
new_req = HNS3_GLOBAL_RESET;
}
@@ -5513,13 +5570,16 @@ hns3_detect_reset_event(struct hns3_hw *hw)
hns3_schedule_delayed_reset(hns);
hns3_warn(hw, "High level reset detected, delay do reset");
}
+
+ return new_req;
}
bool
hns3_is_reset_pending(struct hns3_adapter *hns)
{
+ enum hns3_reset_level new_req;
struct hns3_hw *hw = &hns->hw;
- enum hns3_reset_level reset;
+ enum hns3_reset_level last_req;
/*
* Only primary can process can process the reset event,
@@ -5528,17 +5588,17 @@ hns3_is_reset_pending(struct hns3_adapter *hns)
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return false;
- hns3_detect_reset_event(hw);
- reset = hns3_get_reset_level(hns, &hw->reset.pending);
- if (reset != HNS3_NONE_RESET && hw->reset.level != HNS3_NONE_RESET &&
- hw->reset.level < reset) {
- hns3_warn(hw, "High level reset %d is pending", reset);
+ new_req = hns3_detect_reset_event(hw);
+ last_req = hns3_get_reset_level(hns, &hw->reset.pending);
+ if (last_req != HNS3_NONE_RESET && new_req != HNS3_NONE_RESET &&
+ new_req < last_req) {
+ hns3_warn(hw, "High level reset %d is pending", last_req);
return true;
}
- reset = hns3_get_reset_level(hns, &hw->reset.request);
- if (reset != HNS3_NONE_RESET && hw->reset.level != HNS3_NONE_RESET &&
- hw->reset.level < reset) {
- hns3_warn(hw, "High level reset %d is request", reset);
+ last_req = hns3_get_reset_level(hns, &hw->reset.request);
+ if (last_req != HNS3_NONE_RESET && hw->reset.level != HNS3_NONE_RESET &&
+ hw->reset.level < last_req) {
+ hns3_warn(hw, "High level reset %d is request", last_req);
return true;
}
return false;
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index c85a6912ad..0e8d043704 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -1033,6 +1033,7 @@ void hns3_update_linkstatus_and_event(struct hns3_hw *hw, bool query);
void hns3vf_update_link_status(struct hns3_hw *hw, uint8_t link_status,
uint32_t link_speed, uint8_t link_duplex);
void hns3vf_update_push_lsc_cap(struct hns3_hw *hw, bool supported);
+void hns3_clear_reset_event(struct hns3_hw *hw);
const char *hns3_get_media_type_name(uint8_t media_type);
diff --git a/drivers/net/hns3/hns3_intr.c b/drivers/net/hns3/hns3_intr.c
index 51711244b5..ce8a28e2f9 100644
--- a/drivers/net/hns3/hns3_intr.c
+++ b/drivers/net/hns3/hns3_intr.c
@@ -2727,6 +2727,7 @@ hns3_reset_post(struct hns3_adapter *hns)
/* IMP will wait ready flag before reset */
hns3_notify_reset_ready(hw, false);
hns3_clear_reset_level(hw, &hw->reset.pending);
+ hns3_clear_reset_event(hw);
__atomic_store_n(&hns->hw.reset.resetting, 0, __ATOMIC_RELAXED);
hw->reset.attempts = 0;
hw->reset.stats.success_cnt++;
@@ -2775,6 +2776,7 @@ hns3_reset_fail_handle(struct hns3_adapter *hns)
struct timeval tv;
hns3_clear_reset_level(hw, &hw->reset.pending);
+ hns3_clear_reset_event(hw);
if (hns3_reset_err_handle(hns)) {
hw->reset.stage = RESET_STAGE_PREWAIT;
hns3_schedule_reset(hns);
--
2.23.0


@ -0,0 +1,165 @@
From 4828fd884f3d2abb70976414cc7a9e859001bb6d Mon Sep 17 00:00:00 2001
From: Dengdui Huang <huangdengdui@huawei.com>
Date: Fri, 27 Oct 2023 14:09:46 +0800
Subject: [PATCH 376/394] net/hns3: refactor interrupt state query
[ upstream commit c01ffb24a241a360361ed5c94a819824a8542f3f ]
The PF driver gets all interrupt states by reading three registers. This
logic is duplicated in many places, so this patch extracts a common
function to do this and improve maintainability.
Fixes: f53a793bb7c2 ("net/hns3: add more hardware error types")
Fixes: 3988ab0eee52 ("net/hns3: add abnormal interrupt process")
Cc: stable@dpdk.org
Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
---
drivers/net/hns3/hns3_ethdev.c | 57 +++++++++++++++++++---------------
1 file changed, 32 insertions(+), 25 deletions(-)
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 0f201b8b99..9966748835 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -57,6 +57,12 @@ enum hns3_evt_cause {
HNS3_VECTOR0_EVENT_OTHER,
};
+struct hns3_intr_state {
+ uint32_t vector0_state;
+ uint32_t cmdq_state;
+ uint32_t hw_err_state;
+};
+
#define HNS3_SPEEDS_SUPP_FEC (RTE_ETH_LINK_SPEED_10G | \
RTE_ETH_LINK_SPEED_25G | \
RTE_ETH_LINK_SPEED_40G | \
@@ -151,20 +157,23 @@ hns3_proc_global_reset_event(struct hns3_adapter *hns, uint32_t *vec_val)
return HNS3_VECTOR0_EVENT_RST;
}
+static void
+hns3_query_intr_state(struct hns3_hw *hw, struct hns3_intr_state *state)
+{
+ state->vector0_state = hns3_read_dev(hw, HNS3_VECTOR0_OTHER_INT_STS_REG);
+ state->cmdq_state = hns3_read_dev(hw, HNS3_VECTOR0_CMDQ_SRC_REG);
+ state->hw_err_state = hns3_read_dev(hw, HNS3_RAS_PF_OTHER_INT_STS_REG);
+}
+
static enum hns3_evt_cause
hns3_check_event_cause(struct hns3_adapter *hns, uint32_t *clearval)
{
struct hns3_hw *hw = &hns->hw;
- uint32_t vector0_int_stats;
- uint32_t cmdq_src_val;
- uint32_t hw_err_src_reg;
+ struct hns3_intr_state state;
uint32_t val;
enum hns3_evt_cause ret;
- /* fetch the events from their corresponding regs */
- vector0_int_stats = hns3_read_dev(hw, HNS3_VECTOR0_OTHER_INT_STS_REG);
- cmdq_src_val = hns3_read_dev(hw, HNS3_VECTOR0_CMDQ_SRC_REG);
- hw_err_src_reg = hns3_read_dev(hw, HNS3_RAS_PF_OTHER_INT_STS_REG);
+ hns3_query_intr_state(hw, &state);
/*
* Assumption: If by any chance reset and mailbox events are reported
@@ -173,41 +182,41 @@ hns3_check_event_cause(struct hns3_adapter *hns, uint32_t *clearval)
* RX CMDQ event this time we would receive again another interrupt
* from H/W just for the mailbox.
*/
- if (BIT(HNS3_VECTOR0_IMPRESET_INT_B) & vector0_int_stats) { /* IMP */
+ if (BIT(HNS3_VECTOR0_IMPRESET_INT_B) & state.vector0_state) { /* IMP */
ret = hns3_proc_imp_reset_event(hns, &val);
goto out;
}
/* Global reset */
- if (BIT(HNS3_VECTOR0_GLOBALRESET_INT_B) & vector0_int_stats) {
+ if (BIT(HNS3_VECTOR0_GLOBALRESET_INT_B) & state.vector0_state) {
ret = hns3_proc_global_reset_event(hns, &val);
goto out;
}
/* Check for vector0 1588 event source */
- if (BIT(HNS3_VECTOR0_1588_INT_B) & vector0_int_stats) {
+ if (BIT(HNS3_VECTOR0_1588_INT_B) & state.vector0_state) {
val = BIT(HNS3_VECTOR0_1588_INT_B);
ret = HNS3_VECTOR0_EVENT_PTP;
goto out;
}
/* check for vector0 msix event source */
- if (vector0_int_stats & HNS3_VECTOR0_REG_MSIX_MASK ||
- hw_err_src_reg & HNS3_RAS_REG_NFE_MASK) {
- val = vector0_int_stats | hw_err_src_reg;
+ if (state.vector0_state & HNS3_VECTOR0_REG_MSIX_MASK ||
+ state.hw_err_state & HNS3_RAS_REG_NFE_MASK) {
+ val = state.vector0_state | state.hw_err_state;
ret = HNS3_VECTOR0_EVENT_ERR;
goto out;
}
/* check for vector0 mailbox(=CMDQ RX) event source */
- if (BIT(HNS3_VECTOR0_RX_CMDQ_INT_B) & cmdq_src_val) {
- cmdq_src_val &= ~BIT(HNS3_VECTOR0_RX_CMDQ_INT_B);
- val = cmdq_src_val;
+ if (BIT(HNS3_VECTOR0_RX_CMDQ_INT_B) & state.cmdq_state) {
+ state.cmdq_state &= ~BIT(HNS3_VECTOR0_RX_CMDQ_INT_B);
+ val = state.cmdq_state;
ret = HNS3_VECTOR0_EVENT_MBX;
goto out;
}
- val = vector0_int_stats;
+ val = state.vector0_state;
ret = HNS3_VECTOR0_EVENT_OTHER;
out:
@@ -346,10 +355,8 @@ hns3_interrupt_handler(void *param)
struct hns3_adapter *hns = dev->data->dev_private;
struct hns3_hw *hw = &hns->hw;
enum hns3_evt_cause event_cause;
+ struct hns3_intr_state state;
uint32_t clearval = 0;
- uint32_t vector0_int;
- uint32_t ras_int;
- uint32_t cmdq_int;
if (!hns3_reset_event_valid(hw))
return;
@@ -358,16 +365,15 @@ hns3_interrupt_handler(void *param)
hns3_pf_disable_irq0(hw);
event_cause = hns3_check_event_cause(hns, &clearval);
- vector0_int = hns3_read_dev(hw, HNS3_VECTOR0_OTHER_INT_STS_REG);
- ras_int = hns3_read_dev(hw, HNS3_RAS_PF_OTHER_INT_STS_REG);
- cmdq_int = hns3_read_dev(hw, HNS3_VECTOR0_CMDQ_SRC_REG);
+ hns3_query_intr_state(hw, &state);
hns3_delay_before_clear_event_cause(hw, event_cause, clearval);
hns3_clear_event_cause(hw, event_cause, clearval);
/* vector 0 interrupt is shared with reset and mailbox source events. */
if (event_cause == HNS3_VECTOR0_EVENT_ERR) {
hns3_warn(hw, "received interrupt: vector0_int_stat:0x%x "
"ras_int_stat:0x%x cmdq_int_stat:0x%x",
- vector0_int, ras_int, cmdq_int);
+ state.vector0_state, state.hw_err_state,
+ state.cmdq_state);
hns3_handle_mac_tnl(hw);
hns3_handle_error(hns);
} else if (event_cause == HNS3_VECTOR0_EVENT_RST) {
@@ -378,7 +384,8 @@ hns3_interrupt_handler(void *param)
} else if (event_cause != HNS3_VECTOR0_EVENT_PTP) {
hns3_warn(hw, "received unknown event: vector0_int_stat:0x%x "
"ras_int_stat:0x%x cmdq_int_stat:0x%x",
- vector0_int, ras_int, cmdq_int);
+ state.vector0_state, state.hw_err_state,
+ state.cmdq_state);
}
/* Enable interrupt if it is not cause by reset */
--
2.23.0


@ -0,0 +1,348 @@
From fecdbdc4f7b3b0abace40e5070ab9803c8de850d Mon Sep 17 00:00:00 2001
From: Chengwen Feng <fengchengwen@huawei.com>
Date: Mon, 23 Oct 2023 02:29:39 +0000
Subject: [PATCH 377/394] app/testpmd: ease configuring all offloads
[ upstream commit 8f6c2a1209c31b401d0a8fc74e4b98b1f2d599dc ]
Extend the following commands to support configuring all offloads at once:
1. port config 0 rx_offload all on/off
2. port config 0 tx_offload all on/off
3. port 0 rxq 0 rx_offload all on/off
4. port 0 txq 0 tx_offload all on/off
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
app/test-pmd/cmdline.c | 112 +++++++++++---------
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 8 +-
2 files changed, 68 insertions(+), 52 deletions(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 8facca3c51..49152ec348 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -867,7 +867,7 @@ static void cmd_help_long_parsed(void *parsed_result,
"port config (port_id) udp_tunnel_port add|rm vxlan|geneve|ecpri (udp_port)\n\n"
" Add/remove UDP tunnel port for tunneling offload\n\n"
- "port config <port_id> rx_offload vlan_strip|"
+ "port config <port_id> rx_offload all|vlan_strip|"
"ipv4_cksum|udp_cksum|tcp_cksum|tcp_lro|qinq_strip|"
"outer_ipv4_cksum|macsec_strip|header_split|"
"vlan_filter|vlan_extend|scatter|"
@@ -875,7 +875,7 @@ static void cmd_help_long_parsed(void *parsed_result,
" Enable or disable a per port Rx offloading"
" on all Rx queues of a port\n\n"
- "port (port_id) rxq (queue_id) rx_offload vlan_strip|"
+ "port (port_id) rxq (queue_id) rx_offload all|vlan_strip|"
"ipv4_cksum|udp_cksum|tcp_cksum|tcp_lro|qinq_strip|"
"outer_ipv4_cksum|macsec_strip|header_split|"
"vlan_filter|vlan_extend|scatter|"
@@ -883,7 +883,7 @@ static void cmd_help_long_parsed(void *parsed_result,
" Enable or disable a per queue Rx offloading"
" only on a specific Rx queue\n\n"
- "port config (port_id) tx_offload vlan_insert|"
+ "port config (port_id) tx_offload all|vlan_insert|"
"ipv4_cksum|udp_cksum|tcp_cksum|sctp_cksum|tcp_tso|"
"udp_tso|outer_ipv4_cksum|qinq_insert|vxlan_tnl_tso|"
"gre_tnl_tso|ipip_tnl_tso|geneve_tnl_tso|"
@@ -892,7 +892,7 @@ static void cmd_help_long_parsed(void *parsed_result,
" Enable or disable a per port Tx offloading"
" on all Tx queues of a port\n\n"
- "port (port_id) txq (queue_id) tx_offload vlan_insert|"
+ "port (port_id) txq (queue_id) tx_offload all|vlan_insert|"
"ipv4_cksum|udp_cksum|tcp_cksum|sctp_cksum|tcp_tso|"
"udp_tso|outer_ipv4_cksum|qinq_insert|vxlan_tnl_tso|"
"gre_tnl_tso|ipip_tnl_tso|geneve_tnl_tso|macsec_insert"
@@ -16175,7 +16175,7 @@ cmdline_parse_token_string_t cmd_config_per_port_rx_offload_result_rx_offload =
cmdline_parse_token_string_t cmd_config_per_port_rx_offload_result_offload =
TOKEN_STRING_INITIALIZER
(struct cmd_config_per_port_rx_offload_result,
- offload, "vlan_strip#ipv4_cksum#udp_cksum#tcp_cksum#tcp_lro#"
+ offload, "all#vlan_strip#ipv4_cksum#udp_cksum#tcp_cksum#tcp_lro#"
"qinq_strip#outer_ipv4_cksum#macsec_strip#"
"header_split#vlan_filter#vlan_extend#"
"scatter#buffer_split#timestamp#security#"
@@ -16218,8 +16218,8 @@ cmd_config_per_port_rx_offload_parsed(void *parsed_result,
portid_t port_id = res->port_id;
struct rte_eth_dev_info dev_info;
struct rte_port *port = &ports[port_id];
- uint64_t single_offload;
uint16_t nb_rx_queues;
+ uint64_t offload;
int q;
int ret;
@@ -16230,25 +16230,29 @@ cmd_config_per_port_rx_offload_parsed(void *parsed_result,
return;
}
- single_offload = search_rx_offload(res->offload);
- if (single_offload == 0) {
- fprintf(stderr, "Unknown offload name: %s\n", res->offload);
- return;
- }
-
ret = eth_dev_info_get_print_err(port_id, &dev_info);
if (ret != 0)
return;
+ if (!strcmp(res->offload, "all")) {
+ offload = dev_info.rx_offload_capa;
+ } else {
+ offload = search_rx_offload(res->offload);
+ if (offload == 0) {
+ fprintf(stderr, "Unknown offload name: %s\n", res->offload);
+ return;
+ }
+ }
+
nb_rx_queues = dev_info.nb_rx_queues;
if (!strcmp(res->on_off, "on")) {
- port->dev_conf.rxmode.offloads |= single_offload;
+ port->dev_conf.rxmode.offloads |= offload;
for (q = 0; q < nb_rx_queues; q++)
- port->rx_conf[q].offloads |= single_offload;
+ port->rx_conf[q].offloads |= offload;
} else {
- port->dev_conf.rxmode.offloads &= ~single_offload;
+ port->dev_conf.rxmode.offloads &= ~offload;
for (q = 0; q < nb_rx_queues; q++)
- port->rx_conf[q].offloads &= ~single_offload;
+ port->rx_conf[q].offloads &= ~offload;
}
cmd_reconfig_device_queue(port_id, 1, 1);
@@ -16257,7 +16261,7 @@ cmd_config_per_port_rx_offload_parsed(void *parsed_result,
cmdline_parse_inst_t cmd_config_per_port_rx_offload = {
.f = cmd_config_per_port_rx_offload_parsed,
.data = NULL,
- .help_str = "port config <port_id> rx_offload vlan_strip|ipv4_cksum|"
+ .help_str = "port config <port_id> rx_offload all|vlan_strip|ipv4_cksum|"
"udp_cksum|tcp_cksum|tcp_lro|qinq_strip|outer_ipv4_cksum|"
"macsec_strip|header_split|vlan_filter|vlan_extend|"
"scatter|buffer_split|timestamp|security|"
@@ -16307,7 +16311,7 @@ cmdline_parse_token_string_t cmd_config_per_queue_rx_offload_result_rxoffload =
cmdline_parse_token_string_t cmd_config_per_queue_rx_offload_result_offload =
TOKEN_STRING_INITIALIZER
(struct cmd_config_per_queue_rx_offload_result,
- offload, "vlan_strip#ipv4_cksum#udp_cksum#tcp_cksum#tcp_lro#"
+ offload, "all#vlan_strip#ipv4_cksum#udp_cksum#tcp_cksum#tcp_lro#"
"qinq_strip#outer_ipv4_cksum#macsec_strip#"
"header_split#vlan_filter#vlan_extend#"
"scatter#buffer_split#timestamp#security#keep_crc");
@@ -16326,7 +16330,7 @@ cmd_config_per_queue_rx_offload_parsed(void *parsed_result,
portid_t port_id = res->port_id;
uint16_t queue_id = res->queue_id;
struct rte_port *port = &ports[port_id];
- uint64_t single_offload;
+ uint64_t offload;
int ret;
if (port->port_status != RTE_PORT_STOPPED) {
@@ -16347,16 +16351,20 @@ cmd_config_per_queue_rx_offload_parsed(void *parsed_result,
return;
}
- single_offload = search_rx_offload(res->offload);
- if (single_offload == 0) {
- fprintf(stderr, "Unknown offload name: %s\n", res->offload);
- return;
+ if (!strcmp(res->offload, "all")) {
+ offload = dev_info.rx_queue_offload_capa;
+ } else {
+ offload = search_rx_offload(res->offload);
+ if (offload == 0) {
+ fprintf(stderr, "Unknown offload name: %s\n", res->offload);
+ return;
+ }
}
if (!strcmp(res->on_off, "on"))
- port->rx_conf[queue_id].offloads |= single_offload;
+ port->rx_conf[queue_id].offloads |= offload;
else
- port->rx_conf[queue_id].offloads &= ~single_offload;
+ port->rx_conf[queue_id].offloads &= ~offload;
cmd_reconfig_device_queue(port_id, 1, 1);
}
@@ -16365,7 +16373,7 @@ cmdline_parse_inst_t cmd_config_per_queue_rx_offload = {
.f = cmd_config_per_queue_rx_offload_parsed,
.data = NULL,
.help_str = "port <port_id> rxq <queue_id> rx_offload "
- "vlan_strip|ipv4_cksum|"
+ "all|vlan_strip|ipv4_cksum|"
"udp_cksum|tcp_cksum|tcp_lro|qinq_strip|outer_ipv4_cksum|"
"macsec_strip|header_split|vlan_filter|vlan_extend|"
"scatter|buffer_split|timestamp|security|"
@@ -16594,7 +16602,7 @@ cmdline_parse_token_string_t cmd_config_per_port_tx_offload_result_tx_offload =
cmdline_parse_token_string_t cmd_config_per_port_tx_offload_result_offload =
TOKEN_STRING_INITIALIZER
(struct cmd_config_per_port_tx_offload_result,
- offload, "vlan_insert#ipv4_cksum#udp_cksum#tcp_cksum#"
+ offload, "all#vlan_insert#ipv4_cksum#udp_cksum#tcp_cksum#"
"sctp_cksum#tcp_tso#udp_tso#outer_ipv4_cksum#"
"qinq_insert#vxlan_tnl_tso#gre_tnl_tso#"
"ipip_tnl_tso#geneve_tnl_tso#macsec_insert#"
@@ -16641,8 +16649,8 @@ cmd_config_per_port_tx_offload_parsed(void *parsed_result,
portid_t port_id = res->port_id;
struct rte_eth_dev_info dev_info;
struct rte_port *port = &ports[port_id];
- uint64_t single_offload;
uint16_t nb_tx_queues;
+ uint64_t offload;
int q;
int ret;
@@ -16653,25 +16661,29 @@ cmd_config_per_port_tx_offload_parsed(void *parsed_result,
return;
}
- single_offload = search_tx_offload(res->offload);
- if (single_offload == 0) {
- fprintf(stderr, "Unknown offload name: %s\n", res->offload);
- return;
- }
-
ret = eth_dev_info_get_print_err(port_id, &dev_info);
if (ret != 0)
return;
+ if (!strcmp(res->offload, "all")) {
+ offload = dev_info.tx_offload_capa;
+ } else {
+ offload = search_tx_offload(res->offload);
+ if (offload == 0) {
+ fprintf(stderr, "Unknown offload name: %s\n", res->offload);
+ return;
+ }
+ }
+
nb_tx_queues = dev_info.nb_tx_queues;
if (!strcmp(res->on_off, "on")) {
- port->dev_conf.txmode.offloads |= single_offload;
+ port->dev_conf.txmode.offloads |= offload;
for (q = 0; q < nb_tx_queues; q++)
- port->tx_conf[q].offloads |= single_offload;
+ port->tx_conf[q].offloads |= offload;
} else {
- port->dev_conf.txmode.offloads &= ~single_offload;
+ port->dev_conf.txmode.offloads &= ~offload;
for (q = 0; q < nb_tx_queues; q++)
- port->tx_conf[q].offloads &= ~single_offload;
+ port->tx_conf[q].offloads &= ~offload;
}
cmd_reconfig_device_queue(port_id, 1, 1);
@@ -16681,7 +16693,7 @@ cmdline_parse_inst_t cmd_config_per_port_tx_offload = {
.f = cmd_config_per_port_tx_offload_parsed,
.data = NULL,
.help_str = "port config <port_id> tx_offload "
- "vlan_insert|ipv4_cksum|udp_cksum|tcp_cksum|"
+ "all|vlan_insert|ipv4_cksum|udp_cksum|tcp_cksum|"
"sctp_cksum|tcp_tso|udp_tso|outer_ipv4_cksum|"
"qinq_insert|vxlan_tnl_tso|gre_tnl_tso|"
"ipip_tnl_tso|geneve_tnl_tso|macsec_insert|"
@@ -16732,7 +16744,7 @@ cmdline_parse_token_string_t cmd_config_per_queue_tx_offload_result_txoffload =
cmdline_parse_token_string_t cmd_config_per_queue_tx_offload_result_offload =
TOKEN_STRING_INITIALIZER
(struct cmd_config_per_queue_tx_offload_result,
- offload, "vlan_insert#ipv4_cksum#udp_cksum#tcp_cksum#"
+ offload, "all#vlan_insert#ipv4_cksum#udp_cksum#tcp_cksum#"
"sctp_cksum#tcp_tso#udp_tso#outer_ipv4_cksum#"
"qinq_insert#vxlan_tnl_tso#gre_tnl_tso#"
"ipip_tnl_tso#geneve_tnl_tso#macsec_insert#"
@@ -16752,7 +16764,7 @@ cmd_config_per_queue_tx_offload_parsed(void *parsed_result,
portid_t port_id = res->port_id;
uint16_t queue_id = res->queue_id;
struct rte_port *port = &ports[port_id];
- uint64_t single_offload;
+ uint64_t offload;
int ret;
if (port->port_status != RTE_PORT_STOPPED) {
@@ -16773,16 +16785,20 @@ cmd_config_per_queue_tx_offload_parsed(void *parsed_result,
return;
}
- single_offload = search_tx_offload(res->offload);
- if (single_offload == 0) {
- fprintf(stderr, "Unknown offload name: %s\n", res->offload);
- return;
+ if (!strcmp(res->offload, "all")) {
+ offload = dev_info.tx_queue_offload_capa;
+ } else {
+ offload = search_tx_offload(res->offload);
+ if (offload == 0) {
+ fprintf(stderr, "Unknown offload name: %s\n", res->offload);
+ return;
+ }
}
if (!strcmp(res->on_off, "on"))
- port->tx_conf[queue_id].offloads |= single_offload;
+ port->tx_conf[queue_id].offloads |= offload;
else
- port->tx_conf[queue_id].offloads &= ~single_offload;
+ port->tx_conf[queue_id].offloads &= ~offload;
cmd_reconfig_device_queue(port_id, 1, 1);
}
@@ -16791,7 +16807,7 @@ cmdline_parse_inst_t cmd_config_per_queue_tx_offload = {
.f = cmd_config_per_queue_tx_offload_parsed,
.data = NULL,
.help_str = "port <port_id> txq <queue_id> tx_offload "
- "vlan_insert|ipv4_cksum|udp_cksum|tcp_cksum|"
+ "all|vlan_insert|ipv4_cksum|udp_cksum|tcp_cksum|"
"sctp_cksum|tcp_tso|udp_tso|outer_ipv4_cksum|"
"qinq_insert|vxlan_tnl_tso|gre_tnl_tso|"
"ipip_tnl_tso|geneve_tnl_tso|macsec_insert|"
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index c33c8456bf..50c45db6f7 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1772,7 +1772,7 @@ Enable or disable a per port Rx offloading on all Rx queues of a port::
testpmd> port config (port_id) rx_offload (offloading) on|off
* ``offloading``: can be any of these offloading capability:
- vlan_strip, ipv4_cksum, udp_cksum, tcp_cksum, tcp_lro,
+ all, vlan_strip, ipv4_cksum, udp_cksum, tcp_cksum, tcp_lro,
qinq_strip, outer_ipv4_cksum, macsec_strip,
header_split, vlan_filter, vlan_extend, scatter, timestamp, security,
keep_crc, rss_hash
@@ -1787,7 +1787,7 @@ Enable or disable a per queue Rx offloading only on a specific Rx queue::
testpmd> port (port_id) rxq (queue_id) rx_offload (offloading) on|off
* ``offloading``: can be any of these offloading capability:
- vlan_strip, ipv4_cksum, udp_cksum, tcp_cksum, tcp_lro,
+ all, vlan_strip, ipv4_cksum, udp_cksum, tcp_cksum, tcp_lro,
qinq_strip, outer_ipv4_cksum, macsec_strip,
header_split, vlan_filter, vlan_extend, scatter, timestamp, security,
keep_crc
@@ -1802,7 +1802,7 @@ Enable or disable a per port Tx offloading on all Tx queues of a port::
testpmd> port config (port_id) tx_offload (offloading) on|off
* ``offloading``: can be any of these offloading capability:
- vlan_insert, ipv4_cksum, udp_cksum, tcp_cksum,
+ all, vlan_insert, ipv4_cksum, udp_cksum, tcp_cksum,
sctp_cksum, tcp_tso, udp_tso, outer_ipv4_cksum,
qinq_insert, vxlan_tnl_tso, gre_tnl_tso,
ipip_tnl_tso, geneve_tnl_tso, macsec_insert,
@@ -1818,7 +1818,7 @@ Enable or disable a per queue Tx offloading only on a specific Tx queue::
testpmd> port (port_id) txq (queue_id) tx_offload (offloading) on|off
* ``offloading``: can be any of these offloading capability:
- vlan_insert, ipv4_cksum, udp_cksum, tcp_cksum,
+ all, vlan_insert, ipv4_cksum, udp_cksum, tcp_cksum,
sctp_cksum, tcp_tso, udp_tso, outer_ipv4_cksum,
qinq_insert, vxlan_tnl_tso, gre_tnl_tso,
ipip_tnl_tso, geneve_tnl_tso, macsec_insert,
--
2.23.0
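
In ethdev terms, the new "all" keyword simply expands to the full offload capability mask the device reports. A sketch of the equivalent application-level logic (illustrative only, not the testpmd implementation):

    #include <stdbool.h>
    #include <rte_ethdev.h>

    /* Enable or disable every per-port Rx offload the device reports,
     * which is what "port config <port_id> rx_offload all on|off" does. */
    static int
    set_all_rx_offloads(uint16_t port_id, struct rte_eth_conf *conf, bool on)
    {
            struct rte_eth_dev_info dev_info;
            int ret = rte_eth_dev_info_get(port_id, &dev_info);

            if (ret != 0)
                    return ret;
            if (on)
                    conf->rxmode.offloads |= dev_info.rx_offload_capa;
            else
                    conf->rxmode.offloads &= ~dev_info.rx_offload_capa;
            return 0;
    }

The Tx direction is symmetric, using txmode.offloads and dev_info.tx_offload_capa.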


@ -0,0 +1,95 @@
From 98fc655dcb21ac85c24a5f7f454a361ef37e2b07 Mon Sep 17 00:00:00 2001
From: Huisong Li <lihuisong@huawei.com>
Date: Tue, 31 Oct 2023 20:23:54 +0800
Subject: [PATCH 378/394] net/hns3: fix setting DCB capability
[ upstream commit ac61c444e647298dded80a2ab52966a2dbe22b68 ]
The "hw->capability" is set after querying firmware and version.
But the DCB capability of PF is set in other place.
So this patch moves setting DCB capability to the place where
all capabilities are set.
Fixes: ab2e2e344163 ("net/hns3: get device capability in primary process")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Jie Hai <haijie1@huawei.com>
---
drivers/net/hns3/hns3_cmd.c | 25 +++++++++++++++++++++++++
drivers/net/hns3/hns3_ethdev.c | 13 -------------
2 files changed, 25 insertions(+), 13 deletions(-)
diff --git a/drivers/net/hns3/hns3_cmd.c b/drivers/net/hns3/hns3_cmd.c
index ca1d3f1b8c..62c55f347f 100644
--- a/drivers/net/hns3/hns3_cmd.c
+++ b/drivers/net/hns3/hns3_cmd.c
@@ -525,6 +525,28 @@ hns3_build_api_caps(void)
return rte_cpu_to_le_32(api_caps);
}
+static void
+hns3_set_dcb_capability(struct hns3_hw *hw)
+{
+ struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+ struct rte_pci_device *pci_dev;
+ struct rte_eth_dev *eth_dev;
+ uint16_t device_id;
+
+ if (hns->is_vf)
+ return;
+
+ eth_dev = &rte_eth_devices[hw->data->port_id];
+ pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ device_id = pci_dev->id.device_id;
+
+ if (device_id == HNS3_DEV_ID_25GE_RDMA ||
+ device_id == HNS3_DEV_ID_50GE_RDMA ||
+ device_id == HNS3_DEV_ID_100G_RDMA_MACSEC ||
+ device_id == HNS3_DEV_ID_200G_RDMA)
+ hns3_set_bit(hw->capability, HNS3_DEV_SUPPORT_DCB_B, 1);
+}
+
static int
hns3_cmd_query_firmware_version_and_capability(struct hns3_hw *hw)
{
@@ -542,6 +564,9 @@ hns3_cmd_query_firmware_version_and_capability(struct hns3_hw *hw)
return ret;
hw->fw_version = rte_le_to_cpu_32(resp->firmware);
+
+ hns3_set_dcb_capability(hw);
+
/*
* Make sure mask the capability before parse capability because it
* may overwrite resp's data.
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 9966748835..022696d204 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2719,22 +2719,9 @@ static int
hns3_get_capability(struct hns3_hw *hw)
{
struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
- struct rte_pci_device *pci_dev;
struct hns3_pf *pf = &hns->pf;
- struct rte_eth_dev *eth_dev;
- uint16_t device_id;
int ret;
- eth_dev = &rte_eth_devices[hw->data->port_id];
- pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- device_id = pci_dev->id.device_id;
-
- if (device_id == HNS3_DEV_ID_25GE_RDMA ||
- device_id == HNS3_DEV_ID_50GE_RDMA ||
- device_id == HNS3_DEV_ID_100G_RDMA_MACSEC ||
- device_id == HNS3_DEV_ID_200G_RDMA)
- hns3_set_bit(hw->capability, HNS3_DEV_SUPPORT_DCB_B, 1);
-
ret = hns3_get_pci_revision_id(hw, &hw->revision);
if (ret)
return ret;
--
2.23.0


@ -0,0 +1,206 @@
From 607756d19e218e01a780551473e3f7c6f3851d45 Mon Sep 17 00:00:00 2001
From: Huisong Li <lihuisong@huawei.com>
Date: Tue, 31 Oct 2023 20:23:55 +0800
Subject: [PATCH 379/394] net/hns3: fix LRO offload to report
[ upstream commit a4b2c6815abd3e39daca2e2c93334b813e6a0be4 ]
Some network engines, like part of HIP09, may not support LRO
offload, but this offload capability is still reported to the user.
So this patch determines whether the driver reports this capability
based on the capabilities queried from firmware.
In addition, some network engines, like HIP08, always support LRO
offload and their firmware doesn't report this capability. So this
patch has to move the code that gets the revision ID to an earlier
stage and set default capabilities for these network engines based
on the revision ID.
Fixes: ab2e2e344163 ("net/hns3: get device capability in primary process")
Fixes: f5ed7d99cf45 ("net/hns3: extract common function to obtain revision ID")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Jie Hai <haijie1@huawei.com>
---
drivers/net/hns3/hns3_cmd.c | 17 ++++++++++++++++-
drivers/net/hns3/hns3_cmd.h | 1 +
drivers/net/hns3/hns3_common.c | 5 +++--
drivers/net/hns3/hns3_dump.c | 3 ++-
drivers/net/hns3/hns3_ethdev.c | 8 ++++----
drivers/net/hns3/hns3_ethdev.h | 1 +
drivers/net/hns3/hns3_ethdev_vf.c | 8 ++++----
drivers/net/hns3/hns3_rxtx.c | 3 +++
8 files changed, 34 insertions(+), 12 deletions(-)
diff --git a/drivers/net/hns3/hns3_cmd.c b/drivers/net/hns3/hns3_cmd.c
index 62c55f347f..a5c4c11dc8 100644
--- a/drivers/net/hns3/hns3_cmd.c
+++ b/drivers/net/hns3/hns3_cmd.c
@@ -513,6 +513,8 @@ hns3_parse_capability(struct hns3_hw *hw,
hns3_set_bit(hw->capability, HNS3_DEV_SUPPORT_TM_B, 1);
if (hns3_get_bit(caps, HNS3_CAPS_FC_AUTO_B))
hns3_set_bit(hw->capability, HNS3_DEV_SUPPORT_FC_AUTO_B, 1);
+ if (hns3_get_bit(caps, HNS3_CAPS_GRO_B))
+ hns3_set_bit(hw->capability, HNS3_DEV_SUPPORT_GRO_B, 1);
}
static uint32_t
@@ -547,6 +549,19 @@ hns3_set_dcb_capability(struct hns3_hw *hw)
hns3_set_bit(hw->capability, HNS3_DEV_SUPPORT_DCB_B, 1);
}
+static void
+hns3_set_default_capability(struct hns3_hw *hw)
+{
+ hns3_set_dcb_capability(hw);
+
+ /*
+ * The firmware of the network engines with HIP08 do not report some
+ * capabilities, like GRO. Set default capabilities for it.
+ */
+ if (hw->revision < PCI_REVISION_ID_HIP09_A)
+ hns3_set_bit(hw->capability, HNS3_DEV_SUPPORT_GRO_B, 1);
+}
+
static int
hns3_cmd_query_firmware_version_and_capability(struct hns3_hw *hw)
{
@@ -565,7 +580,7 @@ hns3_cmd_query_firmware_version_and_capability(struct hns3_hw *hw)
hw->fw_version = rte_le_to_cpu_32(resp->firmware);
- hns3_set_dcb_capability(hw);
+ hns3_set_default_capability(hw);
/*
* Make sure mask the capability before parse capability because it
diff --git a/drivers/net/hns3/hns3_cmd.h b/drivers/net/hns3/hns3_cmd.h
index 3f2bb4fd29..79a8c1edad 100644
--- a/drivers/net/hns3/hns3_cmd.h
+++ b/drivers/net/hns3/hns3_cmd.h
@@ -323,6 +323,7 @@ enum HNS3_CAPS_BITS {
HNS3_CAPS_RAS_IMP_B,
HNS3_CAPS_RXD_ADV_LAYOUT_B = 15,
HNS3_CAPS_TM_B = 19,
+ HNS3_CAPS_GRO_B = 20,
HNS3_CAPS_FC_AUTO_B = 30,
};
diff --git a/drivers/net/hns3/hns3_common.c b/drivers/net/hns3/hns3_common.c
index 5dec62cbfb..6b1aeaa41b 100644
--- a/drivers/net/hns3/hns3_common.c
+++ b/drivers/net/hns3/hns3_common.c
@@ -70,8 +70,7 @@ hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
RTE_ETH_RX_OFFLOAD_SCATTER |
RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
- RTE_ETH_RX_OFFLOAD_RSS_HASH |
- RTE_ETH_RX_OFFLOAD_TCP_LRO);
+ RTE_ETH_RX_OFFLOAD_RSS_HASH);
info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
@@ -99,6 +98,8 @@ hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
if (hns3_dev_get_support(hw, PTP))
info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
+ if (hns3_dev_get_support(hw, GRO))
+ info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
info->rx_desc_lim = (struct rte_eth_desc_lim) {
.nb_max = HNS3_MAX_RING_DESC,
diff --git a/drivers/net/hns3/hns3_dump.c b/drivers/net/hns3/hns3_dump.c
index b6e8b621f5..8d4c4d0a3b 100644
--- a/drivers/net/hns3/hns3_dump.c
+++ b/drivers/net/hns3/hns3_dump.c
@@ -104,7 +104,8 @@ hns3_get_dev_feature_capability(FILE *file, struct hns3_hw *hw)
{HNS3_DEV_SUPPORT_RAS_IMP_B, "RAS IMP"},
{HNS3_DEV_SUPPORT_TM_B, "TM"},
{HNS3_DEV_SUPPORT_VF_VLAN_FLT_MOD_B, "VF VLAN FILTER MOD"},
- {HNS3_DEV_SUPPORT_FC_AUTO_B, "FC AUTO"}
+ {HNS3_DEV_SUPPORT_FC_AUTO_B, "FC AUTO"},
+ {HNS3_DEV_SUPPORT_GRO_B, "GRO"}
};
uint32_t i;
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 022696d204..2d4af9f3ea 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2722,10 +2722,6 @@ hns3_get_capability(struct hns3_hw *hw)
struct hns3_pf *pf = &hns->pf;
int ret;
- ret = hns3_get_pci_revision_id(hw, &hw->revision);
- if (ret)
- return ret;
-
ret = hns3_query_mac_stats_reg_num(hw);
if (ret)
return ret;
@@ -4582,6 +4578,10 @@ hns3_init_pf(struct rte_eth_dev *eth_dev)
/* Get hardware io base address from pcie BAR2 IO space */
hw->io_base = pci_dev->mem_resource[2].addr;
+ ret = hns3_get_pci_revision_id(hw, &hw->revision);
+ if (ret)
+ return ret;
+
/* Firmware command queue initialize */
ret = hns3_cmd_init_queue(hw);
if (ret) {
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index 0e8d043704..668f141e32 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -888,6 +888,7 @@ enum hns3_dev_cap {
HNS3_DEV_SUPPORT_TM_B,
HNS3_DEV_SUPPORT_VF_VLAN_FLT_MOD_B,
HNS3_DEV_SUPPORT_FC_AUTO_B,
+ HNS3_DEV_SUPPORT_GRO_B,
};
#define hns3_dev_get_support(hw, _name) \
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 003071c6ff..ba4fe13c01 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -762,10 +762,6 @@ hns3vf_get_capability(struct hns3_hw *hw)
{
int ret;
- ret = hns3_get_pci_revision_id(hw, &hw->revision);
- if (ret)
- return ret;
-
if (hw->revision < PCI_REVISION_ID_HIP09_A) {
hns3_set_default_dev_specifications(hw);
hw->intr.mapping_mode = HNS3_INTR_MAPPING_VEC_RSV_ONE;
@@ -1418,6 +1414,10 @@ hns3vf_init_vf(struct rte_eth_dev *eth_dev)
/* Get hardware io base address from pcie BAR2 IO space */
hw->io_base = pci_dev->mem_resource[2].addr;
+ ret = hns3_get_pci_revision_id(hw, &hw->revision);
+ if (ret)
+ return ret;
+
/* Firmware command queue initialize */
ret = hns3_cmd_init_queue(hw);
if (ret) {
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 3054d24080..8b7c469685 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -3125,6 +3125,9 @@ hns3_config_gro(struct hns3_hw *hw, bool en)
struct hns3_cmd_desc desc;
int ret;
+ if (!hns3_dev_get_support(hw, GRO))
+ return 0;
+
hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_GRO_GENERIC_CONFIG, false);
req = (struct hns3_cfg_gro_status_cmd *)desc.data;
--
2.23.0
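
On the application side, the practical consequence is that RTE_ETH_RX_OFFLOAD_TCP_LRO should only be requested when the port actually reports it, since after this patch hns3 advertises LRO based on firmware capability and revision rather than unconditionally. A minimal sketch (hypothetical helper name):

    #include <rte_ethdev.h>

    /* Keep LRO in the requested Rx offload mask only if the port reports it. */
    static uint64_t
    rx_offloads_with_optional_lro(uint16_t port_id, uint64_t wanted)
    {
            struct rte_eth_dev_info dev_info;

            if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
                    return wanted & ~RTE_ETH_RX_OFFLOAD_TCP_LRO;
            if (!(dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TCP_LRO))
                    wanted &= ~RTE_ETH_RX_OFFLOAD_TCP_LRO;
            return wanted;
    }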


@ -0,0 +1,100 @@
From 52f35192771e5a62412a5fcaebd0d694355efdfa Mon Sep 17 00:00:00 2001
From: Jie Hai <haijie1@huawei.com>
Date: Tue, 31 Oct 2023 20:23:56 +0800
Subject: [PATCH 380/394] net/hns3: fix some return values
[ upstream commit 08159599978f7f7eb6c4aaed7c290e33b8bc3d64 ]
1. Change the return type of hns3_get_imissed_stats_num() to 'uint16_t'.
2. Add error checks for some return values.
Fixes: fcba820d9b9e ("net/hns3: support flow director")
Cc: stable@dpdk.org
Signed-off-by: Jie Hai <haijie1@huawei.com>
---
drivers/net/hns3/hns3_ethdev_vf.c | 5 ++++-
drivers/net/hns3/hns3_fdir.c | 2 +-
drivers/net/hns3/hns3_stats.c | 15 ++++++++++-----
3 files changed, 15 insertions(+), 7 deletions(-)
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index ba4fe13c01..db1a30aff0 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -2162,8 +2162,11 @@ hns3vf_reinit_dev(struct hns3_adapter *hns)
*/
if (pci_dev->kdrv == RTE_PCI_KDRV_IGB_UIO ||
pci_dev->kdrv == RTE_PCI_KDRV_UIO_GENERIC) {
- if (hns3vf_enable_msix(pci_dev, true))
+ ret = hns3vf_enable_msix(pci_dev, true);
+ if (ret != 0) {
hns3_err(hw, "Failed to enable msix");
+ return ret;
+ }
}
rte_intr_enable(pci_dev->intr_handle);
diff --git a/drivers/net/hns3/hns3_fdir.c b/drivers/net/hns3/hns3_fdir.c
index c80fa59e63..d100e58d10 100644
--- a/drivers/net/hns3/hns3_fdir.c
+++ b/drivers/net/hns3/hns3_fdir.c
@@ -978,7 +978,7 @@ int hns3_fdir_filter_program(struct hns3_adapter *hns,
rule->key_conf.spec.src_port,
rule->key_conf.spec.dst_port, ret);
else
- hns3_remove_fdir_filter(hw, fdir_info, &rule->key_conf);
+ ret = hns3_remove_fdir_filter(hw, fdir_info, &rule->key_conf);
return ret;
}
diff --git a/drivers/net/hns3/hns3_stats.c b/drivers/net/hns3/hns3_stats.c
index c2e692a2c5..9a1e8935e5 100644
--- a/drivers/net/hns3/hns3_stats.c
+++ b/drivers/net/hns3/hns3_stats.c
@@ -771,7 +771,7 @@ hns3_mac_stats_reset(struct hns3_hw *hw)
return 0;
}
-static int
+static uint16_t
hns3_get_imissed_stats_num(struct hns3_adapter *hns)
{
#define NO_IMISSED_STATS_NUM 0
@@ -993,7 +993,7 @@ hns3_imissed_stats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
struct hns3_adapter *hns = dev->data->dev_private;
struct hns3_hw *hw = &hns->hw;
struct hns3_rx_missed_stats *imissed_stats = &hw->imissed_stats;
- int imissed_stats_num;
+ uint16_t imissed_stats_num;
int cnt = *count;
char *addr;
uint16_t i;
@@ -1170,7 +1170,7 @@ hns3_imissed_stats_name_get(struct rte_eth_dev *dev,
{
struct hns3_adapter *hns = dev->data->dev_private;
uint32_t cnt = *count;
- int imissed_stats_num;
+ uint16_t imissed_stats_num;
uint16_t i;
imissed_stats_num = hns3_get_imissed_stats_num(hns);
@@ -1539,8 +1539,13 @@ hns3_stats_init(struct hns3_hw *hw)
return ret;
}
- if (!hns->is_vf)
- hns3_mac_stats_reset(hw);
+ if (!hns->is_vf) {
+ ret = hns3_mac_stats_reset(hw);
+ if (ret) {
+ hns3_err(hw, "reset mac stats failed, ret = %d", ret);
+ return ret;
+ }
+ }
return hns3_tqp_stats_init(hw);
}
--
2.23.0


@ -0,0 +1,47 @@
From eacae1d1b2f0d8765dfa14839e88005d7e1eeb73 Mon Sep 17 00:00:00 2001
From: Jie Hai <haijie1@huawei.com>
Date: Tue, 31 Oct 2023 20:23:57 +0800
Subject: [PATCH 381/394] net/hns3: fix some error logs
[ upstream commit fdafdca875eafe36950542cbfbdb21b01b371081 ]
This patch fixes some error logs.
Fixes: 62e3ccc2b94c ("net/hns3: support flow control")
Cc: stable@dpdk.org
Signed-off-by: Jie Hai <haijie1@huawei.com>
---
drivers/net/hns3/hns3_dcb.c | 2 +-
drivers/net/hns3/hns3_flow.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/hns3/hns3_dcb.c b/drivers/net/hns3/hns3_dcb.c
index 07b8c46a81..2831d3dc62 100644
--- a/drivers/net/hns3/hns3_dcb.c
+++ b/drivers/net/hns3/hns3_dcb.c
@@ -1082,7 +1082,7 @@ hns3_dcb_map_cfg(struct hns3_hw *hw)
ret = hns3_pg_to_pri_map(hw);
if (ret) {
- hns3_err(hw, "pri_to_pg mapping fail: %d", ret);
+ hns3_err(hw, "pg_to_pri mapping fail: %d", ret);
return ret;
}
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index d5c9c22633..da17fa6e69 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -927,7 +927,7 @@ hns3_parse_sctp(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ITEM_MASK,
item,
- "Only support src & dst port in SCTP");
+ "Only support src & dst port & v-tag in SCTP");
if (sctp_mask->hdr.src_port) {
hns3_set_bit(rule->input_set, INNER_SRC_PORT, 1);
rule->key_conf.mask.src_port =
--
2.23.0


@ -0,0 +1,61 @@
From fd44bf6577c48ed17419db18ef1a87620fa936ec Mon Sep 17 00:00:00 2001
From: Jie Hai <haijie1@huawei.com>
Date: Tue, 31 Oct 2023 20:23:58 +0800
Subject: [PATCH 382/394] net/hns3: keep set/get algo key functions local
[ upstream commit 4d996f3b2a1dcce2fff59a0a9490c04480e4c805 ]
The functions "hns3_rss_set_algo_key()" and "hns3_rss_get_algo_key()"
are the inner interfaces to set hardware. Driver already had an API,
"hns3_update_rss_algo_key()", to export and to update RSS algo or key.
So above two innter interface don't export.
Fixes: 7da415d27d88 ("net/hns3: use hardware config to report hash key")
Cc: stable@dpdk.org
Signed-off-by: Jie Hai <haijie1@huawei.com>
---
drivers/net/hns3/hns3_rss.c | 4 ++--
drivers/net/hns3/hns3_rss.h | 4 ----
2 files changed, 2 insertions(+), 6 deletions(-)
diff --git a/drivers/net/hns3/hns3_rss.c b/drivers/net/hns3/hns3_rss.c
index 6126512bd7..9bb8426256 100644
--- a/drivers/net/hns3/hns3_rss.c
+++ b/drivers/net/hns3/hns3_rss.c
@@ -283,7 +283,7 @@ static const struct {
* rss_generic_config command function, opcode:0x0D01.
* Used to set algorithm and hash key of RSS.
*/
-int
+static int
hns3_rss_set_algo_key(struct hns3_hw *hw, uint8_t hash_algo,
const uint8_t *key, uint8_t key_len)
{
@@ -324,7 +324,7 @@ hns3_rss_set_algo_key(struct hns3_hw *hw, uint8_t hash_algo,
return 0;
}
-int
+static int
hns3_rss_get_algo_key(struct hns3_hw *hw, uint8_t *hash_algo,
uint8_t *key, uint8_t key_len)
{
diff --git a/drivers/net/hns3/hns3_rss.h b/drivers/net/hns3/hns3_rss.h
index 415430a399..9d182a8025 100644
--- a/drivers/net/hns3/hns3_rss.h
+++ b/drivers/net/hns3/hns3_rss.h
@@ -190,10 +190,6 @@ bool hns3_check_rss_types_valid(struct hns3_hw *hw, uint64_t types);
int hns3_set_rss_tuple_by_rss_hf(struct hns3_hw *hw, uint64_t rss_hf);
int hns3_set_rss_tuple_field(struct hns3_hw *hw, uint64_t tuple_fields);
int hns3_get_rss_tuple_field(struct hns3_hw *hw, uint64_t *tuple_fields);
-int hns3_rss_set_algo_key(struct hns3_hw *hw, uint8_t hash_algo,
- const uint8_t *key, uint8_t key_len);
-int hns3_rss_get_algo_key(struct hns3_hw *hw, uint8_t *hash_algo,
- uint8_t *key, uint8_t key_len);
uint64_t hns3_rss_calc_tuple_filed(uint64_t rss_hf);
int hns3_update_rss_algo_key(struct hns3_hw *hw, uint8_t hash_algo,
uint8_t *key, uint8_t key_len);
--
2.23.0


@ -0,0 +1,43 @@
From f99da9ff1fd939d98025625bca3986054f00592e Mon Sep 17 00:00:00 2001
From: Jie Hai <haijie1@huawei.com>
Date: Tue, 31 Oct 2023 20:23:59 +0800
Subject: [PATCH 383/394] net/hns3: fix uninitialized hash algo value
[ upstream commit 177cf5c93f9ac86d8a2b817115ef1e979023414c ]
This patch initializes "hash_algo" to zero to avoid using it
uninitialized.
Fixes: e3069658da9f ("net/hns3: reimplement hash flow function")
Cc: stable@dpdk.org
Signed-off-by: Jie Hai <haijie1@huawei.com>
---
drivers/net/hns3/hns3_rss.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/hns3/hns3_rss.c b/drivers/net/hns3/hns3_rss.c
index 9bb8426256..eeeca71a5c 100644
--- a/drivers/net/hns3/hns3_rss.c
+++ b/drivers/net/hns3/hns3_rss.c
@@ -771,7 +771,7 @@ hns3_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
{
struct hns3_adapter *hns = dev->data->dev_private;
struct hns3_hw *hw = &hns->hw;
- uint8_t hash_algo;
+ uint8_t hash_algo = 0;
int ret;
rte_spinlock_lock(&hw->lock);
@@ -993,7 +993,7 @@ hns3_update_rss_algo_key(struct hns3_hw *hw, uint8_t hash_func,
{
uint8_t rss_key[HNS3_RSS_KEY_SIZE_MAX] = {0};
bool modify_key, modify_algo;
- uint8_t hash_algo;
+ uint8_t hash_algo = 0;
int ret;
modify_key = (key != NULL && key_len > 0);
--
2.23.0


@ -0,0 +1,159 @@
From e9c4dc9a6488e7dfccba0e24c9e8606beea7e91b Mon Sep 17 00:00:00 2001
From: Jie Hai <haijie1@huawei.com>
Date: Thu, 2 Nov 2023 16:20:12 +0800
Subject: [PATCH 384/394] ethdev: clarify RSS related fields usage
[ upstream commit bae3cfa520a7205d63752c506d51e832d4944180 ]
In rte_eth_dev_rss_hash_conf_get(), "rss_key_len" should be greater
than or equal to the "hash_key_size" obtained from the
rte_eth_dev_info_get() API, and "rss_key" should point to a buffer of
at least "hash_key_size" bytes. If these requirements are not met,
the query result is unreliable.
In rte_eth_dev_rss_hash_update() or rte_eth_dev_configure(),
"rss_key_len" indicates the length in bytes of the array pointed to
by "rss_key"; it should be equal to "hash_key_size" if "rss_key" is
not NULL.
This patch rewrites the comments on the fields of "rte_eth_rss_conf"
and on "RTE_ETH_HASH_FUNCTION_DEFAULT", checks "rss_key_len" at the
ethdev level, and documents these changes.
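As an illustration only (not part of this patch), a minimal sketch of a
query that satisfies these rules; the helper name, the 64-byte key
buffer and the error handling are assumptions:

    #include <errno.h>
    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Illustrative helper: query the RSS configuration with a key buffer
     * sized from hash_key_size, as required above. */
    static int
    dump_rss_conf(uint16_t port_id)
    {
        struct rte_eth_dev_info dev_info;
        struct rte_eth_rss_conf rss_conf = {0};
        uint8_t key[64]; /* assumed large enough for this port */
        int ret;

        ret = rte_eth_dev_info_get(port_id, &dev_info);
        if (ret != 0)
            return ret;
        if (dev_info.hash_key_size > sizeof(key))
            return -EINVAL;

        /* rss_key_len must be >= hash_key_size for a reliable query. */
        rss_conf.rss_key = key;
        rss_conf.rss_key_len = dev_info.hash_key_size;

        ret = rte_eth_dev_rss_hash_conf_get(port_id, &rss_conf);
        if (ret != 0)
            return ret;

        printf("rss_hf=0x%" PRIx64 " key_len=%u\n",
               rss_conf.rss_hf, rss_conf.rss_key_len);
        return 0;
    }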
Signed-off-by: Jie Hai <haijie1@huawei.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
lib/ethdev/rte_ethdev.c | 32 ++++++++++++++++++++++++++++++++
lib/ethdev/rte_ethdev.h | 33 ++++++++++++++++++---------------
lib/ethdev/rte_flow.h | 1 +
3 files changed, 51 insertions(+), 15 deletions(-)
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 132e3d8dc7..f8f111ba6d 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -1620,6 +1620,16 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
goto rollback;
}
+ if (dev_conf->rx_adv_conf.rss_conf.rss_key != NULL &&
+ dev_conf->rx_adv_conf.rss_conf.rss_key_len != dev_info.hash_key_size) {
+ RTE_ETHDEV_LOG(ERR,
+ "Ethdev port_id=%u invalid RSS key len: %u, valid value: %u\n",
+ port_id, dev_conf->rx_adv_conf.rss_conf.rss_key_len,
+ dev_info.hash_key_size);
+ ret = -EINVAL;
+ goto rollback;
+ }
+
/*
* Setup new number of Rx/Tx queues and reconfigure device.
*/
@@ -4205,6 +4215,14 @@ rte_eth_dev_rss_hash_update(uint16_t port_id,
return -ENOTSUP;
}
+ if (rss_conf->rss_key != NULL &&
+ rss_conf->rss_key_len != dev_info.hash_key_size) {
+ RTE_ETHDEV_LOG(ERR,
+ "Ethdev port_id=%u invalid RSS key len: %u, valid value: %u\n",
+ port_id, rss_conf->rss_key_len, dev_info.hash_key_size);
+ return -EINVAL;
+ }
+
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rss_hash_update, -ENOTSUP);
return eth_err(port_id, (*dev->dev_ops->rss_hash_update)(dev,
rss_conf));
@@ -4214,7 +4232,9 @@ int
rte_eth_dev_rss_hash_conf_get(uint16_t port_id,
struct rte_eth_rss_conf *rss_conf)
{
+ struct rte_eth_dev_info dev_info = { 0 };
struct rte_eth_dev *dev;
+ int ret;
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
dev = &rte_eth_devices[port_id];
@@ -4226,6 +4246,18 @@ rte_eth_dev_rss_hash_conf_get(uint16_t port_id,
return -EINVAL;
}
+ ret = rte_eth_dev_info_get(port_id, &dev_info);
+ if (ret != 0)
+ return ret;
+
+ if (rss_conf->rss_key != NULL &&
+ rss_conf->rss_key_len < dev_info.hash_key_size) {
+ RTE_ETHDEV_LOG(ERR,
+ "Ethdev port_id=%u invalid RSS key len: %u, should not be less than: %u\n",
+ port_id, rss_conf->rss_key_len, dev_info.hash_key_size);
+ return -EINVAL;
+ }
+
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rss_hash_conf_get, -ENOTSUP);
return eth_err(port_id, (*dev->dev_ops->rss_hash_conf_get)(dev,
rss_conf));
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index c555ecb840..03799bafa9 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -506,24 +506,27 @@ struct rte_vlan_filter_conf {
/**
* A structure used to configure the Receive Side Scaling (RSS) feature
* of an Ethernet port.
- * If not NULL, the *rss_key* pointer of the *rss_conf* structure points
- * to an array holding the RSS key to use for hashing specific header
- * fields of received packets. The length of this array should be indicated
- * by *rss_key_len* below. Otherwise, a default random hash key is used by
- * the device driver.
- *
- * The *rss_key_len* field of the *rss_conf* structure indicates the length
- * in bytes of the array pointed by *rss_key*. To be compatible, this length
- * will be checked in i40e only. Others assume 40 bytes to be used as before.
- *
- * The *rss_hf* field of the *rss_conf* structure indicates the different
- * types of IPv4/IPv6 packets to which the RSS hashing must be applied.
- * Supplying an *rss_hf* equal to zero disables the RSS feature.
*/
struct rte_eth_rss_conf {
- uint8_t *rss_key; /**< If not NULL, 40-byte hash key. */
+ /**
+ * In rte_eth_dev_rss_hash_conf_get(), the *rss_key_len* should be
+ * greater than or equal to the *hash_key_size* which get from
+ * rte_eth_dev_info_get() API. And the *rss_key* should contain at least
+ * *hash_key_size* bytes. If not meet these requirements, the query
+ * result is unreliable even if the operation returns success.
+ *
+ * In rte_eth_dev_rss_hash_update() or rte_eth_dev_configure(), if
+ * *rss_key* is not NULL, the *rss_key_len* indicates the length of the
+ * *rss_key* in bytes and it should be equal to *hash_key_size*.
+ * If *rss_key* is NULL, drivers are free to use a random or a default key.
+ */
+ uint8_t *rss_key;
uint8_t rss_key_len; /**< hash key length in bytes. */
- uint64_t rss_hf; /**< Hash functions to apply - see below. */
+ /**
+ * Indicates the type of packets or the specific part of packets to
+ * which RSS hashing is to be applied.
+ */
+ uint64_t rss_hf;
};
/*
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 1031fb246b..039d09e0a9 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -2974,6 +2974,7 @@ struct rte_flow_query_count {
* Hash function types.
*/
enum rte_eth_hash_function {
+ /** DEFAULT means driver decides which hash algorithm to pick. */
RTE_ETH_HASH_FUNCTION_DEFAULT = 0,
RTE_ETH_HASH_FUNCTION_TOEPLITZ, /**< Toeplitz */
RTE_ETH_HASH_FUNCTION_SIMPLE_XOR, /**< Simple XOR */
--
2.23.0


@ -0,0 +1,207 @@
From 597270b32229f1c39f29cd6b0d07203850bd975b Mon Sep 17 00:00:00 2001
From: Jie Hai <haijie1@huawei.com>
Date: Thu, 2 Nov 2023 16:20:13 +0800
Subject: [PATCH 385/394] ethdev: set and query RSS hash algorithm
[ upstream commit 34ff088cc24159c9fa6e61242efb76d0289b4e37 ]
Currently, rte_eth_rss_conf supports configuring and querying the RSS
hash functions, the RSS key and its length, but not the RSS hash
algorithm.
The structure ``rte_eth_dev_info`` is extended with a new field
"rss_algo_capa". Drivers are responsible for reporting this
capability, and RSS hash algorithm configurations can be validated
against it. The default value of "rss_algo_capa" is
RTE_ETH_HASH_ALGO_CAPA_MASK(DEFAULT) if a driver does not report it.
The structure ``rte_eth_rss_conf`` is extended with a new field
"algorithm", which represents the RSS hash algorithm to apply. If the
"algorithm" value used for configuration is invalid, drivers should
report an error.
To check whether drivers report a valid "algorithm", it is set to the
default value before querying in rte_eth_dev_rss_hash_conf_get().
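As an illustration only (not part of this patch), a minimal sketch of
how an application could use the new fields; the helper name and the
choice of rss_hf are assumptions:

    #include <errno.h>
    #include <rte_ethdev.h>

    /* Illustrative helper: apply an RSS hash algorithm only if the driver
     * advertises it in rss_algo_capa. */
    static int
    set_rss_algorithm(uint16_t port_id, enum rte_eth_hash_function algo)
    {
        struct rte_eth_dev_info dev_info;
        struct rte_eth_rss_conf rss_conf = {0};
        int ret;

        ret = rte_eth_dev_info_get(port_id, &dev_info);
        if (ret != 0)
            return ret;

        /* Ethdev rejects algorithms outside rss_algo_capa, so check first. */
        if ((dev_info.rss_algo_capa & RTE_ETH_HASH_ALGO_TO_CAPA(algo)) == 0)
            return -ENOTSUP;

        rss_conf.rss_hf = dev_info.flow_type_rss_offloads;
        rss_conf.algorithm = algo;
        /* rss_key is NULL: drivers may keep using a default or random key. */
        return rte_eth_dev_rss_hash_update(port_id, &rss_conf);
    }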
Signed-off-by: Jie Hai <haijie1@huawei.com>
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
lib/ethdev/rte_ethdev.c | 25 +++++++++++++++++++++++++
lib/ethdev/rte_ethdev.h | 29 +++++++++++++++++++++++++++++
lib/ethdev/rte_flow.c | 1 -
lib/ethdev/rte_flow.h | 19 ++-----------------
4 files changed, 56 insertions(+), 18 deletions(-)
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index f8f111ba6d..ec06bd3a9c 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -1422,6 +1422,7 @@ int
rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
const struct rte_eth_conf *dev_conf)
{
+ enum rte_eth_hash_function algorithm;
struct rte_eth_dev *dev;
struct rte_eth_dev_info dev_info;
struct rte_eth_conf orig_conf;
@@ -1630,6 +1631,17 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
goto rollback;
}
+ algorithm = dev_conf->rx_adv_conf.rss_conf.algorithm;
+ if ((size_t)algorithm >= CHAR_BIT * sizeof(dev_info.rss_algo_capa) ||
+ (dev_info.rss_algo_capa & RTE_ETH_HASH_ALGO_TO_CAPA(algorithm)) == 0) {
+ RTE_ETHDEV_LOG(ERR,
+ "Ethdev port_id=%u configured RSS hash algorithm (%u)"
+ "is not in the algorithm capability (0x%" PRIx32 ")\n",
+ port_id, algorithm, dev_info.rss_algo_capa);
+ ret = -EINVAL;
+ goto rollback;
+ }
+
/*
* Setup new number of Rx/Tx queues and reconfigure device.
*/
@@ -3507,6 +3519,7 @@ rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info)
dev_info->min_mtu = RTE_ETHER_MIN_LEN - RTE_ETHER_HDR_LEN -
RTE_ETHER_CRC_LEN;
dev_info->max_mtu = UINT16_MAX;
+ dev_info->rss_algo_capa = RTE_ETH_HASH_ALGO_CAPA_MASK(DEFAULT);
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
diag = (*dev->dev_ops->dev_infos_get)(dev, dev_info);
@@ -4223,6 +4236,16 @@ rte_eth_dev_rss_hash_update(uint16_t port_id,
return -EINVAL;
}
+ if ((size_t)rss_conf->algorithm >= CHAR_BIT * sizeof(dev_info.rss_algo_capa) ||
+ (dev_info.rss_algo_capa &
+ RTE_ETH_HASH_ALGO_TO_CAPA(rss_conf->algorithm)) == 0) {
+ RTE_ETHDEV_LOG(ERR,
+ "Ethdev port_id=%u configured RSS hash algorithm (%u)"
+ "is not in the algorithm capability (0x%" PRIx32 ")\n",
+ port_id, rss_conf->algorithm, dev_info.rss_algo_capa);
+ return -EINVAL;
+ }
+
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rss_hash_update, -ENOTSUP);
return eth_err(port_id, (*dev->dev_ops->rss_hash_update)(dev,
rss_conf));
@@ -4258,6 +4281,8 @@ rte_eth_dev_rss_hash_conf_get(uint16_t port_id,
return -EINVAL;
}
+ rss_conf->algorithm = RTE_ETH_HASH_FUNCTION_DEFAULT;
+
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rss_hash_conf_get, -ENOTSUP);
return eth_err(port_id, (*dev->dev_ops->rss_hash_conf_get)(dev,
rss_conf));
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 03799bafa9..911b9e03ab 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -503,6 +503,33 @@ struct rte_vlan_filter_conf {
uint64_t ids[64];
};
+/**
+ * Hash function types.
+ */
+enum rte_eth_hash_function {
+ /** DEFAULT means driver decides which hash algorithm to pick. */
+ RTE_ETH_HASH_FUNCTION_DEFAULT = 0,
+ RTE_ETH_HASH_FUNCTION_TOEPLITZ, /**< Toeplitz */
+ RTE_ETH_HASH_FUNCTION_SIMPLE_XOR, /**< Simple XOR */
+ /**
+ * Symmetric Toeplitz: src, dst will be replaced by
+ * xor(src, dst). For the case with src/dst only,
+ * src or dst address will xor with zero pair.
+ */
+ RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ,
+ /**
+ * Symmetric Toeplitz: L3 and L4 fields are sorted prior to
+ * the hash function.
+ * If src_ip > dst_ip, swap src_ip and dst_ip.
+ * If src_port > dst_port, swap src_port and dst_port.
+ */
+ RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ_SORT,
+ RTE_ETH_HASH_FUNCTION_MAX,
+};
+
+#define RTE_ETH_HASH_ALGO_TO_CAPA(x) RTE_BIT32(x)
+#define RTE_ETH_HASH_ALGO_CAPA_MASK(x) RTE_BIT32(RTE_ETH_HASH_FUNCTION_ ## x)
+
/**
* A structure used to configure the Receive Side Scaling (RSS) feature
* of an Ethernet port.
@@ -527,6 +554,7 @@ struct rte_eth_rss_conf {
* which RSS hashing is to be applied.
*/
uint64_t rss_hf;
+ enum rte_eth_hash_function algorithm; /**< Hash algorithm. */
};
/*
@@ -1820,6 +1848,7 @@ struct rte_eth_dev_info {
/** Device redirection table size, the total number of entries. */
uint16_t reta_size;
uint8_t hash_key_size; /**< Hash key size in bytes */
+ uint32_t rss_algo_capa; /** RSS hash algorithms capabilities */
/** Bit mask of RSS offloads, the bit offset also means flow type */
uint64_t flow_type_rss_offloads;
struct rte_eth_rxconf default_rxconf; /**< Default Rx configuration */
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index a93f68abbc..e11c08baae 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -14,7 +14,6 @@
#include <rte_string_fns.h>
#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>
-#include "rte_ethdev.h"
#include "rte_flow_driver.h"
#include "rte_flow.h"
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 039d09e0a9..d560cc7dcd 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -38,6 +38,8 @@
#include <rte_l2tpv2.h>
#include <rte_ppp.h>
+#include "rte_ethdev.h"
+
#ifdef __cplusplus
extern "C" {
#endif
@@ -2970,23 +2972,6 @@ struct rte_flow_query_count {
uint64_t bytes; /**< Number of bytes through this rule [out]. */
};
-/**
- * Hash function types.
- */
-enum rte_eth_hash_function {
- /** DEFAULT means driver decides which hash algorithm to pick. */
- RTE_ETH_HASH_FUNCTION_DEFAULT = 0,
- RTE_ETH_HASH_FUNCTION_TOEPLITZ, /**< Toeplitz */
- RTE_ETH_HASH_FUNCTION_SIMPLE_XOR, /**< Simple XOR */
- /**
- * Symmetric Toeplitz: src, dst will be replaced by
- * xor(src, dst). For the case with src/dst only,
- * src or dst address will xor with zero pair.
- */
- RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ,
- RTE_ETH_HASH_FUNCTION_MAX,
-};
-
/**
* RTE_FLOW_ACTION_TYPE_RSS
*
--
2.23.0


@ -0,0 +1,36 @@
From 5c2aa37412339dac879a2c945262b840cbc627a2 Mon Sep 17 00:00:00 2001
From: Jie Hai <haijie1@huawei.com>
Date: Thu, 2 Nov 2023 16:20:14 +0800
Subject: [PATCH 386/394] net/hns3: report RSS hash algorithms capability
[ upstream commit 36b0b4fdeb64e92ffa8df617e8fdd3ed52923510 ]
The hns3 driver should report the RSS hash algorithm capability so
that the RSS hash algorithm can be updated through
rte_eth_dev_rss_hash_update() or rte_eth_dev_configure().
Signed-off-by: Jie Hai <haijie1@huawei.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
---
drivers/net/hns3/hns3_common.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/net/hns3/hns3_common.c b/drivers/net/hns3/hns3_common.c
index 6b1aeaa41b..7a49f0d11d 100644
--- a/drivers/net/hns3/hns3_common.c
+++ b/drivers/net/hns3/hns3_common.c
@@ -133,6 +133,10 @@ hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
info->reta_size = hw->rss_ind_tbl_size;
info->hash_key_size = hw->rss_key_size;
info->flow_type_rss_offloads = HNS3_ETH_RSS_SUPPORT;
+ info->rss_algo_capa = RTE_ETH_HASH_ALGO_CAPA_MASK(DEFAULT) |
+ RTE_ETH_HASH_ALGO_CAPA_MASK(TOEPLITZ) |
+ RTE_ETH_HASH_ALGO_CAPA_MASK(SIMPLE_XOR) |
+ RTE_ETH_HASH_ALGO_CAPA_MASK(SYMMETRIC_TOEPLITZ);
info->default_rxportconf.burst_size = HNS3_DEFAULT_PORT_CONF_BURST_SIZE;
info->default_txportconf.burst_size = HNS3_DEFAULT_PORT_CONF_BURST_SIZE;
--
2.23.0


@ -0,0 +1,102 @@
From 551ff5a491295b17551d81f5c77a5167abc766fc Mon Sep 17 00:00:00 2001
From: Huisong Li <lihuisong@huawei.com>
Date: Thu, 2 Nov 2023 16:20:15 +0800
Subject: [PATCH 387/394] net/hns3: support setting and querying RSS hash
function
[ upstream commit 9913a55d37f7a80c143de3c5eb4ba39f266291cb ]
Support setting and querying RSS hash function by ethdev ops.
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
Signed-off-by: Jie Hai <haijie1@huawei.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
---
drivers/net/hns3/hns3_rss.c | 47 +++++++++++++++++++++----------------
1 file changed, 27 insertions(+), 20 deletions(-)
diff --git a/drivers/net/hns3/hns3_rss.c b/drivers/net/hns3/hns3_rss.c
index eeeca71a5c..15feb26043 100644
--- a/drivers/net/hns3/hns3_rss.c
+++ b/drivers/net/hns3/hns3_rss.c
@@ -646,14 +646,14 @@ hns3_dev_rss_hash_update(struct rte_eth_dev *dev,
if (ret)
goto set_tuple_fail;
- if (key) {
- ret = hns3_rss_set_algo_key(hw, hw->rss_info.hash_algo,
- key, hw->rss_key_size);
- if (ret)
- goto set_algo_key_fail;
- /* Update the shadow RSS key with user specified */
+ ret = hns3_update_rss_algo_key(hw, rss_conf->algorithm, key, key_len);
+ if (ret != 0)
+ goto set_algo_key_fail;
+
+ if (rss_conf->algorithm != RTE_ETH_HASH_FUNCTION_DEFAULT)
+ hw->rss_info.hash_algo = hns3_hash_func_map[rss_conf->algorithm];
+ if (key != NULL)
memcpy(hw->rss_info.key, key, hw->rss_key_size);
- }
hw->rss_info.rss_hf = rss_hf;
rte_spinlock_unlock(&hw->lock);
@@ -769,7 +769,13 @@ int
hns3_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
struct rte_eth_rss_conf *rss_conf)
{
+ const uint8_t hash_func_map[] = {
+ [HNS3_RSS_HASH_ALGO_TOEPLITZ] = RTE_ETH_HASH_FUNCTION_TOEPLITZ,
+ [HNS3_RSS_HASH_ALGO_SIMPLE] = RTE_ETH_HASH_FUNCTION_SIMPLE_XOR,
+ [HNS3_RSS_HASH_ALGO_SYMMETRIC_TOEP] = RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ,
+ };
struct hns3_adapter *hns = dev->data->dev_private;
+ uint8_t rss_key[HNS3_RSS_KEY_SIZE_MAX] = {0};
struct hns3_hw *hw = &hns->hw;
uint8_t hash_algo = 0;
int ret;
@@ -777,26 +783,27 @@ hns3_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
rte_spinlock_lock(&hw->lock);
ret = hns3_rss_hash_get_rss_hf(hw, &rss_conf->rss_hf);
if (ret != 0) {
+ rte_spinlock_unlock(&hw->lock);
hns3_err(hw, "obtain hash tuples failed, ret = %d", ret);
- goto out;
+ return ret;
+ }
+
+ ret = hns3_rss_get_algo_key(hw, &hash_algo, rss_key, hw->rss_key_size);
+ if (ret != 0) {
+ rte_spinlock_unlock(&hw->lock);
+ hns3_err(hw, "obtain hash algo and key failed, ret = %d", ret);
+ return ret;
}
+ rte_spinlock_unlock(&hw->lock);
- /* Get the RSS Key required by the user */
+ /* Get the RSS Key if user required. */
if (rss_conf->rss_key && rss_conf->rss_key_len >= hw->rss_key_size) {
- ret = hns3_rss_get_algo_key(hw, &hash_algo, rss_conf->rss_key,
- hw->rss_key_size);
- if (ret != 0) {
- hns3_err(hw, "obtain hash algo and key failed, ret = %d",
- ret);
- goto out;
- }
+ memcpy(rss_conf->rss_key, rss_key, hw->rss_key_size);
rss_conf->rss_key_len = hw->rss_key_size;
}
+ rss_conf->algorithm = hash_func_map[hash_algo];
-out:
- rte_spinlock_unlock(&hw->lock);
-
- return ret;
+ return 0;
}
/*
--
2.23.0


@ -0,0 +1,76 @@
From 0984219ef3fb85833458c14cdd99d9918febb22b Mon Sep 17 00:00:00 2001
From: Jie Hai <haijie1@huawei.com>
Date: Thu, 2 Nov 2023 16:20:16 +0800
Subject: [PATCH 388/394] app/procinfo: fix RSS info
[ upstream commit 33079eccf5c1a99af722fe168d8465f602bc98b2 ]
The show-port command should show RSS info (rss_key, len and rss_hf).
However, the information is shown only when rss_conf.rss_key is not
NULL. Since no memory is allocated for rss_conf.rss_key, rss_key is
always NULL and the RSS info is never shown. This patch fixes it.
Fixes: 8a37f37fc243 ("app/procinfo: add --show-port")
Cc: stable@dpdk.org
Signed-off-by: Jie Hai <haijie1@huawei.com>
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
Acked-by: Reshma Pattan <reshma.pattan@intel.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
---
app/proc-info/main.c | 21 ++++++++++++---------
1 file changed, 12 insertions(+), 9 deletions(-)
diff --git a/app/proc-info/main.c b/app/proc-info/main.c
index 0cc01e3dad..de7c3b4b27 100644
--- a/app/proc-info/main.c
+++ b/app/proc-info/main.c
@@ -132,6 +132,8 @@ struct desc_param {
static struct desc_param rx_desc_param;
static struct desc_param tx_desc_param;
+#define RSS_HASH_KEY_SIZE 64
+
/* display usage */
static void
proc_info_usage(const char *prgname)
@@ -823,6 +825,7 @@ show_port(void)
struct rte_eth_fc_conf fc_conf;
struct rte_ether_addr mac;
struct rte_eth_dev_owner owner;
+ uint8_t rss_key[RSS_HASH_KEY_SIZE];
/* Skip if port is not in mask */
if ((enabled_port_mask & (1ul << i)) == 0)
@@ -981,17 +984,17 @@ show_port(void)
printf("\n");
}
+ rss_conf.rss_key = rss_key;
+ rss_conf.rss_key_len = dev_info.hash_key_size;
ret = rte_eth_dev_rss_hash_conf_get(i, &rss_conf);
if (ret == 0) {
- if (rss_conf.rss_key) {
- printf(" - RSS\n");
- printf("\t -- RSS len %u key (hex):",
- rss_conf.rss_key_len);
- for (k = 0; k < rss_conf.rss_key_len; k++)
- printf(" %x", rss_conf.rss_key[k]);
- printf("\t -- hf 0x%"PRIx64"\n",
- rss_conf.rss_hf);
- }
+ printf(" - RSS\n");
+ printf("\t -- RSS len %u key (hex):",
+ rss_conf.rss_key_len);
+ for (k = 0; k < rss_conf.rss_key_len; k++)
+ printf(" %x", rss_conf.rss_key[k]);
+ printf("\t -- hf 0x%"PRIx64"\n",
+ rss_conf.rss_hf);
}
#ifdef RTE_LIB_SECURITY
--
2.23.0


@ -0,0 +1,59 @@
From a70e268e9425c17da66e1063dc6d11a30b0a81bc Mon Sep 17 00:00:00 2001
From: Jie Hai <haijie1@huawei.com>
Date: Thu, 2 Nov 2023 16:20:17 +0800
Subject: [PATCH 389/394] app/procinfo: adjust format of RSS info
[ upstream commit 66d4bacc39fb765051594669c33aab4f5d0f9d6c ]
This patch splits the RSS key length and value into two lines, removes
the spaces between RSS key bytes, and adds a line break between the
RSS key and the RSS hf.
Before the adjustment, RSS info is shown as:
- RSS
-- RSS len 40 key (hex): 6d 5a 56 da 25 5b e c2 41 67 \
25 3d 43 a3 8f b0 d0 ca 2b cb ae 7b 30 b4 77 cb 2d \
a3 80 30 f2 c 6a 42 b7 3b be ac 1 fa -- hf 0x0
and after:
- RSS info
-- key len : 40
-- key (hex) : 6d5a56da255b0ec24167253d43a38fb0d0c \
a2bcbae7b30b477cb2da38030f20c6a42b73bbeac01fa
-- hash function : 0x0
Fixes: 8a37f37fc243 ("app/procinfo: add --show-port")
Cc: stable@dpdk.org
Signed-off-by: Jie Hai <haijie1@huawei.com>
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
Acked-by: Reshma Pattan <reshma.pattan@intel.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
---
app/proc-info/main.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/app/proc-info/main.c b/app/proc-info/main.c
index de7c3b4b27..55bfbcaa9c 100644
--- a/app/proc-info/main.c
+++ b/app/proc-info/main.c
@@ -988,12 +988,13 @@ show_port(void)
rss_conf.rss_key_len = dev_info.hash_key_size;
ret = rte_eth_dev_rss_hash_conf_get(i, &rss_conf);
if (ret == 0) {
- printf(" - RSS\n");
- printf("\t -- RSS len %u key (hex):",
+ printf(" - RSS info\n");
+ printf("\t -- key len : %u\n",
rss_conf.rss_key_len);
+ printf("\t -- key (hex) : ");
for (k = 0; k < rss_conf.rss_key_len; k++)
- printf(" %x", rss_conf.rss_key[k]);
- printf("\t -- hf 0x%"PRIx64"\n",
+ printf("%02x", rss_conf.rss_key[k]);
+ printf("\n\t -- hash function : 0x%"PRIx64"\n",
rss_conf.rss_hf);
}
--
2.23.0


@ -0,0 +1,284 @@
From 811392906150ad09a2502b1d40f87cf48faec751 Mon Sep 17 00:00:00 2001
From: Jie Hai <haijie1@huawei.com>
Date: Thu, 2 Nov 2023 16:20:18 +0800
Subject: [PATCH 390/394] ethdev: get RSS algorithm names
[ upstream commit 92628e2b04923c098128acdb173ab25953162ef8 ]
This patch adds a new API, rte_eth_dev_rss_algo_name(), to get the
name of an RSS algorithm, and documents it.
Example:
testpmd> show port 0 rss-hash algorithm
RSS algorithm:
toeplitz
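As an illustration only (not part of this patch), a minimal usage
sketch of the new helper; the surrounding function is an assumption:

    #include <stdio.h>
    #include <rte_ethdev.h>

    static void
    print_rss_algo(uint16_t port_id)
    {
        struct rte_eth_rss_conf rss_conf = {0};

        /* rss_key stays NULL: only hash types and algorithm are queried. */
        if (rte_eth_dev_rss_hash_conf_get(port_id, &rss_conf) == 0)
            printf("RSS algorithm: %s\n",
                   rte_eth_dev_rss_algo_name(rss_conf.algorithm));
    }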
Signed-off-by: Jie Hai <haijie1@huawei.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
app/test-pmd/cmdline.c | 29 +++++++++++++++++----
app/test-pmd/config.c | 29 +++++++--------------
app/test-pmd/testpmd.h | 2 +-
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 4 +--
lib/ethdev/rte_ethdev.c | 25 ++++++++++++++++++
lib/ethdev/rte_ethdev.h | 16 ++++++++++++
lib/ethdev/version.map | 3 +++
7 files changed, 81 insertions(+), 27 deletions(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 49152ec348..cdf943162b 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -174,8 +174,8 @@ static void cmd_help_long_parsed(void *parsed_result,
" by masks on port X. size is used to indicate the"
" hardware supported reta size\n\n"
- "show port (port_id) rss-hash [key]\n"
- " Display the RSS hash functions and RSS hash key of port\n\n"
+ "show port (port_id) rss-hash [key | algorithm]\n"
+ " Display the RSS hash functions, RSS hash key and RSS hash algorithms of port\n\n"
"clear port (info|stats|xstats|fdir) (port_id|all)\n"
" Clear information for port_id, or all.\n\n"
@@ -3150,15 +3150,17 @@ struct cmd_showport_rss_hash {
cmdline_fixed_string_t rss_hash;
cmdline_fixed_string_t rss_type;
cmdline_fixed_string_t key; /* optional argument */
+ cmdline_fixed_string_t algorithm; /* optional argument */
};
static void cmd_showport_rss_hash_parsed(void *parsed_result,
__rte_unused struct cmdline *cl,
- void *show_rss_key)
+ __rte_unused void *data)
{
struct cmd_showport_rss_hash *res = parsed_result;
- port_rss_hash_conf_show(res->port_id, show_rss_key != NULL);
+ port_rss_hash_conf_show(res->port_id,
+ !strcmp(res->key, "key"), !strcmp(res->algorithm, "algorithm"));
}
cmdline_parse_token_string_t cmd_showport_rss_hash_show =
@@ -3173,6 +3175,8 @@ cmdline_parse_token_string_t cmd_showport_rss_hash_rss_hash =
"rss-hash");
cmdline_parse_token_string_t cmd_showport_rss_hash_rss_key =
TOKEN_STRING_INITIALIZER(struct cmd_showport_rss_hash, key, "key");
+static cmdline_parse_token_string_t cmd_showport_rss_hash_rss_algo =
+ TOKEN_STRING_INITIALIZER(struct cmd_showport_rss_hash, algorithm, "algorithm");
cmdline_parse_inst_t cmd_showport_rss_hash = {
.f = cmd_showport_rss_hash_parsed,
@@ -3189,7 +3193,7 @@ cmdline_parse_inst_t cmd_showport_rss_hash = {
cmdline_parse_inst_t cmd_showport_rss_hash_key = {
.f = cmd_showport_rss_hash_parsed,
- .data = (void *)1,
+ .data = NULL,
.help_str = "show port <port_id> rss-hash key",
.tokens = {
(void *)&cmd_showport_rss_hash_show,
@@ -3201,6 +3205,20 @@ cmdline_parse_inst_t cmd_showport_rss_hash_key = {
},
};
+static cmdline_parse_inst_t cmd_showport_rss_hash_algo = {
+ .f = cmd_showport_rss_hash_parsed,
+ .data = NULL,
+ .help_str = "show port <port_id> rss-hash algorithm",
+ .tokens = {
+ (void *)&cmd_showport_rss_hash_show,
+ (void *)&cmd_showport_rss_hash_port,
+ (void *)&cmd_showport_rss_hash_port_id,
+ (void *)&cmd_showport_rss_hash_rss_hash,
+ (void *)&cmd_showport_rss_hash_rss_algo,
+ NULL,
+ },
+};
+
/* *** Configure DCB *** */
struct cmd_config_dcb {
cmdline_fixed_string_t port;
@@ -17899,6 +17917,7 @@ cmdline_parse_ctx_t main_ctx[] = {
(cmdline_parse_inst_t *)&cmd_tunnel_udp_config,
(cmdline_parse_inst_t *)&cmd_showport_rss_hash,
(cmdline_parse_inst_t *)&cmd_showport_rss_hash_key,
+ (cmdline_parse_inst_t *)&cmd_showport_rss_hash_algo,
(cmdline_parse_inst_t *)&cmd_config_rss_hash_key,
(cmdline_parse_inst_t *)&cmd_cleanup_txq_mbufs,
(cmdline_parse_inst_t *)&cmd_dump,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index af00078108..9d7b10548e 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1636,24 +1636,7 @@ rss_config_display(struct rte_flow_action_rss *rss_conf)
printf(" %d", rss_conf->queue[i]);
printf("\n");
- printf(" function: ");
- switch (rss_conf->func) {
- case RTE_ETH_HASH_FUNCTION_DEFAULT:
- printf("default\n");
- break;
- case RTE_ETH_HASH_FUNCTION_TOEPLITZ:
- printf("toeplitz\n");
- break;
- case RTE_ETH_HASH_FUNCTION_SIMPLE_XOR:
- printf("simple_xor\n");
- break;
- case RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ:
- printf("symmetric_toeplitz\n");
- break;
- default:
- printf("Unknown function\n");
- return;
- }
+ printf(" function: %s\n", rte_eth_dev_rss_algo_name(rss_conf->func));
printf(" RSS key:\n");
if (rss_conf->key_len == 0) {
@@ -3077,7 +3060,7 @@ port_rss_reta_info(portid_t port_id,
* key of the port.
*/
void
-port_rss_hash_conf_show(portid_t port_id, int show_rss_key)
+port_rss_hash_conf_show(portid_t port_id, int show_rss_key, int show_rss_algo)
{
struct rte_eth_rss_conf rss_conf = {0};
uint8_t rss_key[RSS_HASH_KEY_LENGTH];
@@ -3127,8 +3110,16 @@ port_rss_hash_conf_show(portid_t port_id, int show_rss_key)
printf("RSS disabled\n");
return;
}
+
+ if (show_rss_algo) {
+ printf("RSS algorithm:\n %s\n",
+ rte_eth_dev_rss_algo_name(rss_conf.algorithm));
+ return;
+ }
+
printf("RSS functions:\n");
rss_types_display(rss_hf, TESTPMD_RSS_TYPES_CHAR_NUM_PER_LINE);
+
if (!show_rss_key)
return;
printf("RSS key:\n");
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 30c7177630..d19deeff4a 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -1021,7 +1021,7 @@ int set_queue_rate_limit(portid_t port_id, uint16_t queue_idx, uint16_t rate);
int set_vf_rate_limit(portid_t port_id, uint16_t vf, uint16_t rate,
uint64_t q_msk);
-void port_rss_hash_conf_show(portid_t port_id, int show_rss_key);
+void port_rss_hash_conf_show(portid_t port_id, int show_rss_key, int show_rss_algo);
void port_rss_hash_key_update(portid_t port_id, char rss_type[],
uint8_t *hash_key, uint8_t hash_key_len);
int rx_queue_id_is_invalid(queueid_t rxq_id);
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 50c45db6f7..a81296d2ba 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -233,9 +233,9 @@ size is used to indicate the hardware supported reta size
show port rss-hash
~~~~~~~~~~~~~~~~~~
-Display the RSS hash functions and RSS hash key of a port::
+Display the RSS hash functions and RSS hash key or RSS hash algorithm of a port::
- testpmd> show port (port_id) rss-hash [key]
+ testpmd> show port (port_id) rss-hash [key | algorithm]
clear port
~~~~~~~~~~
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index ec06bd3a9c..289fe45e6c 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -196,6 +196,17 @@ enum {
STAT_QMAP_RX
};
+static const struct {
+ enum rte_eth_hash_function algo;
+ const char *name;
+} rte_eth_dev_rss_algo_names[] = {
+ {RTE_ETH_HASH_FUNCTION_DEFAULT, "default"},
+ {RTE_ETH_HASH_FUNCTION_SIMPLE_XOR, "simple_xor"},
+ {RTE_ETH_HASH_FUNCTION_TOEPLITZ, "toeplitz"},
+ {RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ, "symmetric_toeplitz"},
+ {RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ_SORT, "symmetric_toeplitz_sort"},
+};
+
int
rte_eth_iterator_init(struct rte_dev_iterator *iter, const char *devargs_str)
{
@@ -4288,6 +4299,20 @@ rte_eth_dev_rss_hash_conf_get(uint16_t port_id,
rss_conf));
}
+const char *
+rte_eth_dev_rss_algo_name(enum rte_eth_hash_function rss_algo)
+{
+ const char *name = "Unknown function";
+ unsigned int i;
+
+ for (i = 0; i < RTE_DIM(rte_eth_dev_rss_algo_names); i++) {
+ if (rss_algo == rte_eth_dev_rss_algo_names[i].algo)
+ return rte_eth_dev_rss_algo_names[i].name;
+ }
+
+ return name;
+}
+
int
rte_eth_dev_udp_tunnel_port_add(uint16_t port_id,
struct rte_eth_udp_tunnel *udp_tunnel)
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 911b9e03ab..09a546a48b 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -4396,6 +4396,22 @@ int
rte_eth_dev_rss_hash_conf_get(uint16_t port_id,
struct rte_eth_rss_conf *rss_conf);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice.
+ *
+ * Get the name of RSS hash algorithm.
+ *
+ * @param rss_algo
+ * Hash algorithm.
+ *
+ * @return
+ * Hash algorithm name or 'UNKNOWN' if the rss_algo cannot be recognized.
+ */
+__rte_experimental
+const char *
+rte_eth_dev_rss_algo_name(enum rte_eth_hash_function rss_algo);
+
/**
* Add UDP tunneling port for a type of tunnel.
*
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index f593f64ea9..1867016054 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -265,6 +265,9 @@ EXPERIMENTAL {
rte_eth_tx_descriptor_dump;
rte_eth_dev_is_valid_rxq;
rte_eth_dev_is_valid_txq;
+
+ # added in 23.11
+ rte_eth_dev_rss_algo_name;
};
INTERNAL {
--
2.23.0


@ -0,0 +1,36 @@
From fdf0043acae2d1df5aff874133c92ff224ad3de1 Mon Sep 17 00:00:00 2001
From: Jie Hai <haijie1@huawei.com>
Date: Thu, 2 Nov 2023 16:20:19 +0800
Subject: [PATCH 391/394] app/procinfo: show RSS hash algorithm
[ upstream commit 130c5a4ba0ca06c921f8a5b52b43e469250a3ea8 ]
Display the RSS hash algorithm with the show-port command, as below.
- RSS info
-- hash algorithm : toeplitz
Signed-off-by: Jie Hai <haijie1@huawei.com>
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
Acked-by: Reshma Pattan <reshma.pattan@intel.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
---
app/proc-info/main.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/app/proc-info/main.c b/app/proc-info/main.c
index 55bfbcaa9c..d2f78278d5 100644
--- a/app/proc-info/main.c
+++ b/app/proc-info/main.c
@@ -996,6 +996,8 @@ show_port(void)
printf("%02x", rss_conf.rss_key[k]);
printf("\n\t -- hash function : 0x%"PRIx64"\n",
rss_conf.rss_hf);
+ printf("\t -- hash algorithm : %s\n",
+ rte_eth_dev_rss_algo_name(rss_conf.algorithm));
}
#ifdef RTE_LIB_SECURITY
--
2.23.0


@ -0,0 +1,102 @@
From 5e315791df0bcdaa3383e14e7b93a5297fe0b49e Mon Sep 17 00:00:00 2001
From: Huisong Li <lihuisong@huawei.com>
Date: Fri, 3 Nov 2023 18:27:57 +0800
Subject: [PATCH 392/394] ethdev: add maximum Rx buffer size
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
[ upstream commit 75c7849a9dcca356985fdb87f2d11cae135dfb1a ]
The "min_rx_bufsize" in struct rte_eth_dev_info stands for the minimum
Rx buffer size supported by hardware. Actually, some engines also have
the maximum Rx buffer specification, like, hns3, i40e and so on. If mbuf
data room size in mempool is greater then the maximum Rx buffer size
per descriptor supported by HW, the data size application used in each
mbuf is just as much as the maximum Rx buffer size instead of the whole
data room size.
So introduce maximum Rx buffer size which is not enforced just to
report user to avoid memory waste. In addition, fix the comment for
the "min_rx_bufsize" to make it be more specific.
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
app/test-pmd/config.c | 2 ++
lib/ethdev/rte_ethdev.c | 8 ++++++++
lib/ethdev/rte_ethdev.h | 10 +++++++++-
3 files changed, 19 insertions(+), 1 deletion(-)
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 9d7b10548e..fbb0cabf3d 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -848,6 +848,8 @@ port_infos_display(portid_t port_id)
}
printf("Minimum size of RX buffer: %u\n", dev_info.min_rx_bufsize);
+ if (dev_info.max_rx_bufsize != UINT32_MAX)
+ printf("Maximum size of RX buffer: %u\n", dev_info.max_rx_bufsize);
printf("Maximum configurable length of RX packet: %u\n",
dev_info.max_rx_pktlen);
printf("Maximum configurable size of LRO aggregated packet: %u\n",
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 289fe45e6c..4702515240 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -2126,6 +2126,7 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
struct rte_eth_dev *dev;
struct rte_eth_dev_info dev_info;
struct rte_eth_rxconf local_conf;
+ uint32_t buf_data_size;
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
dev = &rte_eth_devices[port_id];
@@ -2162,6 +2163,12 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
return -ENOSPC;
}
mbp_buf_size = rte_pktmbuf_data_room_size(mp);
+ buf_data_size = mbp_buf_size - RTE_PKTMBUF_HEADROOM;
+ if (buf_data_size > dev_info.max_rx_bufsize)
+ RTE_ETHDEV_LOG(DEBUG,
+ "For port_id=%u, the mbuf data buffer size (%u) is bigger than "
+ "max buffer size (%u) device can utilize, so mbuf size can be reduced.\n",
+ port_id, buf_data_size, dev_info.max_rx_bufsize);
if (mbp_buf_size < dev_info.min_rx_bufsize +
RTE_PKTMBUF_HEADROOM) {
RTE_ETHDEV_LOG(ERR,
@@ -3531,6 +3538,7 @@ rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info)
RTE_ETHER_CRC_LEN;
dev_info->max_mtu = UINT16_MAX;
dev_info->rss_algo_capa = RTE_ETH_HASH_ALGO_CAPA_MASK(DEFAULT);
+ dev_info->max_rx_bufsize = UINT32_MAX;
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
diag = (*dev->dev_ops->dev_infos_get)(dev, dev_info);
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 09a546a48b..2880a55890 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1825,7 +1825,15 @@ struct rte_eth_dev_info {
uint16_t min_mtu; /**< Minimum MTU allowed */
uint16_t max_mtu; /**< Maximum MTU allowed */
const uint32_t *dev_flags; /**< Device flags */
- uint32_t min_rx_bufsize; /**< Minimum size of Rx buffer. */
+ /** Minimum Rx buffer size per descriptor supported by HW. */
+ uint32_t min_rx_bufsize;
+ /**
+ * Maximum Rx buffer size per descriptor supported by HW.
+ * The value is not enforced, information only to application to
+ * optimize mbuf size.
+ * Its value is UINT32_MAX when not specified by the driver.
+ */
+ uint32_t max_rx_bufsize;
uint32_t max_rx_pktlen; /**< Maximum configurable length of Rx pkt. */
/** Maximum configurable size of LRO aggregated packet. */
uint32_t max_lro_pkt_size;
--
2.23.0


@ -0,0 +1,30 @@
From 51ce4165992b99416a89951c403b9ed1907ff67c Mon Sep 17 00:00:00 2001
From: Huisong Li <lihuisong@huawei.com>
Date: Fri, 3 Nov 2023 18:27:59 +0800
Subject: [PATCH 393/394] net/hns3: report maximum buffer size
[ upstream commit a276af95fa52ea4e97d173f6f0afe6cdec6949ba ]
This patch reports the maximum Rx buffer size supported by the hardware.
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
---
drivers/net/hns3/hns3_common.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/net/hns3/hns3_common.c b/drivers/net/hns3/hns3_common.c
index 7a49f0d11d..0d6b2c65af 100644
--- a/drivers/net/hns3/hns3_common.c
+++ b/drivers/net/hns3/hns3_common.c
@@ -59,6 +59,7 @@ hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
info->max_tx_queues = hw->tqps_num;
info->max_rx_pktlen = HNS3_MAX_FRAME_LEN; /* CRC included */
info->min_rx_bufsize = HNS3_MIN_BD_BUF_SIZE;
+ info->max_rx_bufsize = HNS3_MAX_BD_BUF_SIZE;
info->max_mtu = info->max_rx_pktlen - HNS3_ETH_OVERHEAD;
info->max_lro_pkt_size = HNS3_MAX_LRO_SIZE;
info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
--
2.23.0


@ -0,0 +1,250 @@
From fc4a8dfe7b91702f2930957840a51796ffb12c2d Mon Sep 17 00:00:00 2001
From: Dengdui Huang <huangdengdui@huawei.com>
Date: Sat, 11 Nov 2023 09:59:14 +0800
Subject: [PATCH 394/394] net/hns3: fix mailbox sync
[ upstream commit be3590f54d0e415c23d4ed6ea55d967139c3ad10 ]
Currently, the hns3 VF driver uses the following state to match a
response with its request for synchronous mailbox messages between
the VF and the PF:
1. req_msg_data, which consists of the message code and subcode and
is used to match the request and the response.
2. head, the number of messages the VF has sent successfully.
3. tail, the number of responses the VF has received successfully.
4. lost, the number of VF sends that have timed out.
'head', 'tail' and 'lost' are updated dynamically during the
communication.
There is an issue where all sync mailbox messages fail to be sent
forever in the following case:
1. VF sends message A,
then head=UINT32_MAX-1, tail=UINT32_MAX-3, lost=2.
2. VF sends message B,
then head=UINT32_MAX, tail=UINT32_MAX-2, lost=2.
3. VF sends message C; it times out because no response arrives
within 500ms,
then head=0, tail=0, lost=2.
Note: tail is assigned to head if tail > head according to the
current code logic. From then on, all subsequent sync mailbox
messages fail to be sent.
Using the fields 'lost', 'tail' and 'head' is overly complicated.
The code and subcode of the sync mailbox request already act as a
matching tag that identifies which response belongs to which
request.
This patch drops these fields and solves the issue as follows: when
handling a response message, the req_msg_data of the request and the
response is compared to decide whether the expected sync mailbox
response has been received.
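As an illustration only (not part of this patch), a standalone sketch
of the tag comparison the new scheme relies on; the function names are
assumptions, and the packing mirrors the code below:

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative only: pack code/subcode into the tag used for matching. */
    static inline uint32_t
    mbx_msg_tag(uint16_t code, uint16_t subcode)
    {
        return (uint32_t)code << 16 | subcode;
    }

    /* A response is accepted only if its tag equals the pending request's. */
    static inline bool
    mbx_resp_matches(uint32_t req_msg_data, uint16_t code, uint16_t subcode)
    {
        return req_msg_data == mbx_msg_tag(code, subcode);
    }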
Fixes: 463e748964f5 ("net/hns3: support mailbox")
Cc: stable@dpdk.org
Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
Signed-off-by: Jie Hai <haijie1@huawei.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
---
drivers/net/hns3/hns3_cmd.c | 3 --
drivers/net/hns3/hns3_mbx.c | 81 ++++++-------------------------------
drivers/net/hns3/hns3_mbx.h | 10 -----
3 files changed, 13 insertions(+), 81 deletions(-)
diff --git a/drivers/net/hns3/hns3_cmd.c b/drivers/net/hns3/hns3_cmd.c
index a5c4c11dc8..2c1664485b 100644
--- a/drivers/net/hns3/hns3_cmd.c
+++ b/drivers/net/hns3/hns3_cmd.c
@@ -731,9 +731,6 @@ hns3_cmd_init(struct hns3_hw *hw)
hw->cmq.csq.next_to_use = 0;
hw->cmq.crq.next_to_clean = 0;
hw->cmq.crq.next_to_use = 0;
- hw->mbx_resp.head = 0;
- hw->mbx_resp.tail = 0;
- hw->mbx_resp.lost = 0;
hns3_cmd_init_regs(hw);
rte_spinlock_unlock(&hw->cmq.crq.lock);
diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c
index 8e0a58aa02..f1743c195e 100644
--- a/drivers/net/hns3/hns3_mbx.c
+++ b/drivers/net/hns3/hns3_mbx.c
@@ -40,23 +40,6 @@ hns3_resp_to_errno(uint16_t resp_code)
return -EIO;
}
-static void
-hns3_mbx_proc_timeout(struct hns3_hw *hw, uint16_t code, uint16_t subcode)
-{
- if (hw->mbx_resp.matching_scheme ==
- HNS3_MBX_RESP_MATCHING_SCHEME_OF_ORIGINAL) {
- hw->mbx_resp.lost++;
- hns3_err(hw,
- "VF could not get mbx(%u,%u) head(%u) tail(%u) "
- "lost(%u) from PF",
- code, subcode, hw->mbx_resp.head, hw->mbx_resp.tail,
- hw->mbx_resp.lost);
- return;
- }
-
- hns3_err(hw, "VF could not get mbx(%u,%u) from PF", code, subcode);
-}
-
static int
hns3_get_mbx_resp(struct hns3_hw *hw, uint16_t code, uint16_t subcode,
uint8_t *resp_data, uint16_t resp_len)
@@ -67,7 +50,6 @@ hns3_get_mbx_resp(struct hns3_hw *hw, uint16_t code, uint16_t subcode,
struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
struct hns3_mbx_resp_status *mbx_resp;
uint32_t wait_time = 0;
- bool received;
if (resp_len > HNS3_MBX_MAX_RESP_DATA_SIZE) {
hns3_err(hw, "VF mbx response len(=%u) exceeds maximum(=%d)",
@@ -93,20 +75,14 @@ hns3_get_mbx_resp(struct hns3_hw *hw, uint16_t code, uint16_t subcode,
hns3_dev_handle_mbx_msg(hw);
rte_delay_us(HNS3_WAIT_RESP_US);
- if (hw->mbx_resp.matching_scheme ==
- HNS3_MBX_RESP_MATCHING_SCHEME_OF_ORIGINAL)
- received = (hw->mbx_resp.head ==
- hw->mbx_resp.tail + hw->mbx_resp.lost);
- else
- received = hw->mbx_resp.received_match_resp;
- if (received)
+ if (hw->mbx_resp.received_match_resp)
break;
wait_time += HNS3_WAIT_RESP_US;
}
hw->mbx_resp.req_msg_data = 0;
if (wait_time >= mbx_time_limit) {
- hns3_mbx_proc_timeout(hw, code, subcode);
+ hns3_err(hw, "VF could not get mbx(%u,%u) from PF", code, subcode);
return -ETIME;
}
rte_io_rmb();
@@ -132,7 +108,6 @@ hns3_mbx_prepare_resp(struct hns3_hw *hw, uint16_t code, uint16_t subcode)
* we get the exact scheme which is used.
*/
hw->mbx_resp.req_msg_data = (uint32_t)code << 16 | subcode;
- hw->mbx_resp.head++;
/* Update match_id and ensure the value of match_id is not zero */
hw->mbx_resp.match_id++;
@@ -185,7 +160,6 @@ hns3_send_mbx_msg(struct hns3_hw *hw, uint16_t code, uint16_t subcode,
req->match_id = hw->mbx_resp.match_id;
ret = hns3_cmd_send(hw, &desc, 1);
if (ret) {
- hw->mbx_resp.head--;
rte_spinlock_unlock(&hw->mbx_resp.lock);
hns3_err(hw, "VF failed(=%d) to send mbx message to PF",
ret);
@@ -254,41 +228,10 @@ hns3_handle_asserting_reset(struct hns3_hw *hw,
hns3_schedule_reset(HNS3_DEV_HW_TO_ADAPTER(hw));
}
-/*
- * Case1: receive response after timeout, req_msg_data
- * is 0, not equal resp_msg, do lost--
- * Case2: receive last response during new send_mbx_msg,
- * req_msg_data is different with resp_msg, let
- * lost--, continue to wait for response.
- */
-static void
-hns3_update_resp_position(struct hns3_hw *hw, uint32_t resp_msg)
-{
- struct hns3_mbx_resp_status *resp = &hw->mbx_resp;
- uint32_t tail = resp->tail + 1;
-
- if (tail > resp->head)
- tail = resp->head;
- if (resp->req_msg_data != resp_msg) {
- if (resp->lost)
- resp->lost--;
- hns3_warn(hw, "Received a mismatched response req_msg(%x) "
- "resp_msg(%x) head(%u) tail(%u) lost(%u)",
- resp->req_msg_data, resp_msg, resp->head, tail,
- resp->lost);
- } else if (tail + resp->lost > resp->head) {
- resp->lost--;
- hns3_warn(hw, "Received a new response again resp_msg(%x) "
- "head(%u) tail(%u) lost(%u)", resp_msg,
- resp->head, tail, resp->lost);
- }
- rte_io_wmb();
- resp->tail = tail;
-}
-
static void
hns3_handle_mbx_response(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req)
{
+#define HNS3_MBX_RESP_CODE_OFFSET 16
struct hns3_mbx_resp_status *resp = &hw->mbx_resp;
uint32_t msg_data;
@@ -298,12 +241,6 @@ hns3_handle_mbx_response(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req)
* match_id to its response. So VF could use the match_id
* to match the request.
*/
- if (resp->matching_scheme !=
- HNS3_MBX_RESP_MATCHING_SCHEME_OF_MATCH_ID) {
- resp->matching_scheme =
- HNS3_MBX_RESP_MATCHING_SCHEME_OF_MATCH_ID;
- hns3_info(hw, "detect mailbox support match id!");
- }
if (req->match_id == resp->match_id) {
resp->resp_status = hns3_resp_to_errno(req->msg[3]);
memcpy(resp->additional_info, &req->msg[4],
@@ -319,11 +256,19 @@ hns3_handle_mbx_response(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req)
* support copy request's match_id to its response. So VF follows the
* original scheme to process.
*/
+ msg_data = (uint32_t)req->msg[1] << HNS3_MBX_RESP_CODE_OFFSET | req->msg[2];
+ if (resp->req_msg_data != msg_data) {
+ hns3_warn(hw,
+ "received response tag (%u) is mismatched with requested tag (%u)",
+ msg_data, resp->req_msg_data);
+ return;
+ }
+
resp->resp_status = hns3_resp_to_errno(req->msg[3]);
memcpy(resp->additional_info, &req->msg[4],
HNS3_MBX_MAX_RESP_DATA_SIZE);
- msg_data = (uint32_t)req->msg[1] << 16 | req->msg[2];
- hns3_update_resp_position(hw, msg_data);
+ rte_io_wmb();
+ resp->received_match_resp = true;
}
static void
diff --git a/drivers/net/hns3/hns3_mbx.h b/drivers/net/hns3/hns3_mbx.h
index c378783c6c..4a328802b9 100644
--- a/drivers/net/hns3/hns3_mbx.h
+++ b/drivers/net/hns3/hns3_mbx.h
@@ -93,21 +93,11 @@ enum hns3_mbx_link_fail_subcode {
#define HNS3_MBX_MAX_RESP_DATA_SIZE 8
#define HNS3_MBX_DEF_TIME_LIMIT_MS 500
-enum {
- HNS3_MBX_RESP_MATCHING_SCHEME_OF_ORIGINAL = 0,
- HNS3_MBX_RESP_MATCHING_SCHEME_OF_MATCH_ID
-};
-
struct hns3_mbx_resp_status {
rte_spinlock_t lock; /* protects against contending sync cmd resp */
- uint8_t matching_scheme;
-
/* The following fields used in the matching scheme for original */
uint32_t req_msg_data;
- uint32_t head;
- uint32_t tail;
- uint32_t lost;
/* The following fields used in the matching scheme for match_id */
uint16_t match_id;
--
2.23.0


@ -1,6 +1,6 @@
Name: dpdk
Version: 21.11
Release: 57
Release: 59
Packager: packaging@6wind.com
URL: http://dpdk.org
%global source_version 21.11
@ -377,6 +377,50 @@ Patch6347: 0347-net-hns3-add-FDIR-VLAN-match-mode-runtime-config.patch
Patch6348: 0348-doc-fix-kernel-patch-link-in-hns3-guide.patch
Patch6349: 0349-doc-fix-syntax-in-hns3-guide.patch
Patch6350: 0350-doc-fix-number-of-leading-spaces-in-hns3-guide.patch
Patch6351: 0351-config-arm-add-HiSilicon-HIP10.patch
Patch6352: 0352-net-hns3-fix-non-zero-weight-for-disabled-TC.patch
Patch6353: 0353-net-hns3-fix-index-to-look-up-table-in-NEON-Rx.patch
Patch6354: 0354-net-hns3-fix-VF-default-MAC-modified-when-set-failed.patch
Patch6355: 0355-net-hns3-fix-error-code-for-multicast-resource.patch
Patch6356: 0356-net-hns3-fix-flushing-multicast-MAC-address.patch
Patch6357: 0357-net-hns3-fix-traffic-management-thread-safety.patch
Patch6358: 0358-net-hns3-fix-traffic-management-dump-text-alignment.patch
Patch6359: 0359-net-hns3-fix-order-in-NEON-Rx.patch
Patch6360: 0360-net-hns3-optimize-free-mbuf-for-SVE-Tx.patch
Patch6361: 0361-net-hns3-optimize-rearm-mbuf-for-SVE-Rx.patch
Patch6362: 0362-net-hns3-optimize-SVE-Rx-performance.patch
Patch6363: 0363-app-testpmd-fix-multicast-address-pool-leak.patch
Patch6364: 0364-app-testpmd-fix-help-string.patch
Patch6365: 0365-app-testpmd-add-command-to-flush-multicast-MAC-addre.patch
Patch6366: 0366-maintainers-update-for-hns3-driver.patch
Patch6367: 0367-telemetry-fix-repeat-display-when-callback-don-t-init-dict.patch
Patch6368: 0368-net-hns3-fix-build-warning.patch
Patch6369: 0369-net-hns3-fix-typo-in-function-name.patch
Patch6370: 0370-net-hns3-fix-unchecked-Rx-free-threshold.patch
Patch6371: 0371-net-hns3-fix-crash-for-NEON-and-SVE.patch
Patch6372: 0372-net-hns3-fix-double-stats-for-IMP-and-global-reset.patch
Patch6373: 0373-net-hns3-remove-reset-log-in-secondary.patch
Patch6374: 0374-net-hns3-fix-multiple-reset-detected-log.patch
Patch6375: 0375-net-hns3-fix-IMP-or-global-reset.patch
Patch6376: 0376-net-hns3-refactor-interrupt-state-query.patch
Patch6377: 0377-app-testpmd-ease-configuring-all-offloads.patch
Patch6378: 0378-net-hns3-fix-setting-DCB-capability.patch
Patch6379: 0379-net-hns3-fix-LRO-offload-to-report.patch
Patch6380: 0380-net-hns3-fix-some-return-values.patch
Patch6381: 0381-net-hns3-fix-some-error-logs.patch
Patch6382: 0382-net-hns3-keep-set-get-algo-key-functions-local.patch
Patch6383: 0383-net-hns3-fix-uninitialized-hash-algo-value.patch
Patch6384: 0384-ethdev-clarify-RSS-related-fields-usage.patch
Patch6385: 0385-ethdev-set-and-query-RSS-hash-algorithm.patch
Patch6386: 0386-net-hns3-report-RSS-hash-algorithms-capability.patch
Patch6387: 0387-net-hns3-support-setting-and-querying-RSS-hash-function.patch
Patch6388: 0388-app-procinfo-fix-RSS-info.patch
Patch6389: 0389-app-procinfo-adjust-format-of-RSS-info.patch
Patch6390: 0390-ethdev-get-RSS-algorithm-names.patch
Patch6391: 0391-app-procinfo-show-RSS-hash-algorithm.patch
Patch6392: 0392-ethdev-add-maximum-Rx-buffer-size.patch
Patch6393: 0393-net-hns3-report-maximum-buffer-size.patch
Patch6394: 0394-net-hns3-fix-mailbox-sync.patch
Patch1000: 1000-add-sw_64-support-not-upstream-modified.patch
Patch1001: 1001-add-sw_64-support-not-upstream-new.patch
@ -535,6 +579,56 @@ strip -g $RPM_BUILD_ROOT/lib/modules/%{kern_devel_ver}/extra/dpdk/igb_uio.ko
/usr/sbin/depmod
%changelog
* Mon Nov 20 2023 huangdengdui <huangdengui@huawei.com> - 21.11-59
Sync some patches from upstream; the modifications are as follows:
- net/hns3: fix mailbox sync
- net/hns3: report maximum buffer size
- ethdev: add maximum Rx buffer size
- app/procinfo: show RSS hash algorithm
- ethdev: get RSS algorithm names
- app/procinfo: adjust format of RSS info
- app/procinfo: fix RSS info
- net/hns3: support setting and querying RSS hash function
- net/hns3: report RSS hash algorithms capability
- ethdev: set and query RSS hash algorithm
- ethdev: clarify RSS related fields usage
- net/hns3: fix uninitialized hash algo value
- net/hns3: keep set/get algo key functions local
- net/hns3: fix some error logs
- net/hns3: fix some return values
- net/hns3: fix LRO offload to report
- net/hns3: fix setting DCB capability
- app/testpmd: ease configuring all offloads
- net/hns3: refactor interrupt state query
- net/hns3: fix IMP or global reset
- net/hns3: fix multiple reset detected log
- net/hns3: remove reset log in secondary
- net/hns3: fix double stats for IMP and global reset
- net/hns3: fix crash for NEON and SVE
- net/hns3: fix unchecked Rx free threshold
- net/hns3: fix typo in function name
- net/hns3: fix build warning
- telemetry: fix repeat display when callback don't init dict
* Fri Oct 27 2023 huangdengdui <huangdengui@huawei.com> - 21.11-58
Sync some patches from upstream; the modifications are as follows:
- maintainers: update for hns3 driver
- app/testpmd: add command to flush multicast MAC addresses
- app/testpmd: fix help string
- app/testpmd: fix multicast address pool leak
- net/hns3: optimize SVE Rx performance
- net/hns3: optimize rearm mbuf for SVE Rx
- net/hns3: optimize free mbuf for SVE Tx
- net/hns3: fix order in NEON Rx
- net/hns3: fix traffic management dump text alignment
- net/hns3: fix traffic management thread safety
- net/hns3: fix flushing multicast MAC address
- net/hns3: fix error code for multicast resource
- net/hns3: fix VF default MAC modified when set failed
- net/hns3: fix index to look up table in NEON Rx
- net/hns3: fix non-zero weight for disabled TC
- config/arm: add HiSilicon HIP10
* Wed Aug 30 2023 herengui <herengui@kylinsec.com.cn> - 21.11-57
- Add support for sw_64