!270 [sync] PR-268: Sync some patches for bonding PMD and testpmd

From: @openeuler-sync-bot 
Reviewed-by: @wu-changsheng 
Signed-off-by: @wu-changsheng
This commit is contained in:
openeuler-ci-bot 2022-11-16 14:10:19 +00:00 committed by Gitee
commit e547f6503d
11 changed files with 1062 additions and 1 deletions

From 304a7bf032352999131c0b3e28c585610000990e Mon Sep 17 00:00:00 2001
From: Maxime Coquelin <maxime.coquelin@redhat.com>
Date: Tue, 15 Nov 2022 12:06:06 +0800
Subject: app/testpmd: revert MAC update in checksum forwarding
[ upstream commit 9b4ea7ae77faa8f8aba8c7510c821f75d7863b16 ]
This patch reverts
commit 10f4620f02e1 ("app/testpmd: modify mac in csum forwarding"),
as checksum forwarding is expected to only perform checksum
operations and not also overwrite the source and destination MAC
addresses. This way, checksum offloading can be tested with real
traffic without breaking broadcast packets.
Fixes: 10f4620f02e1 ("app/testpmd: modify mac in csum forwarding")
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Aman Singh <aman.deep.singh@intel.com>
---
app/test-pmd/csumonly.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 0177284d9c..206968d37a 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -887,10 +887,6 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
* and inner headers */
eth_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
- rte_ether_addr_copy(&peer_eth_addrs[fs->peer_addr],
- &eth_hdr->dst_addr);
- rte_ether_addr_copy(&ports[fs->tx_port].eth_addr,
- &eth_hdr->src_addr);
parse_ethernet(eth_hdr, &info);
l3_hdr = (char *)eth_hdr + info.l2_len;
--
2.23.0

From 44f34b117cb446f9dce03e683942a40a8a04436c Mon Sep 17 00:00:00 2001
From: Huisong Li <lihuisong@huawei.com>
Date: Tue, 15 Nov 2022 12:06:07 +0800
Subject: net/bonding: fix bond4 drop valid MAC packets
[ upstream commit 2176782ec87589927e1b13737b60ee8be28d76af ]
Currently, by default, bond4 first tries to enable allmulti and
falls back to enabling promiscuous mode if enabling allmulti fails.
On reception, whether unicast and multicast packets are dropped
depends on which mode has been enabled on the bonding interface.
In fact, if the destination MAC address of a packet is in the
mac_addrs array of the bonding interface, the packet should not be
dropped. However, only the default MAC address is currently checked,
which causes packets whose destination MAC was added via
'.mac_addr_add' to be dropped.
Fixes: 68218b87c184 ("net/bonding: prefer allmulti to promiscuous for LACP")
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
drivers/net/bonding/rte_eth_bond_pmd.c | 33 +++++++++++++++++++-------
1 file changed, 25 insertions(+), 8 deletions(-)
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index ab1196e505..f1e7b6459a 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -271,6 +271,24 @@ bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t slave_port) {
return 0;
}
+static bool
+is_bond_mac_addr(const struct rte_ether_addr *ea,
+ const struct rte_ether_addr *mac_addrs, uint32_t max_mac_addrs)
+{
+ uint32_t i;
+
+ for (i = 0; i < max_mac_addrs; i++) {
+ /* skip zero address */
+ if (rte_is_zero_ether_addr(&mac_addrs[i]))
+ continue;
+
+ if (rte_is_same_ether_addr(ea, &mac_addrs[i]))
+ return true;
+ }
+
+ return false;
+}
+
static inline uint16_t
rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
bool dedicated_rxq)
@@ -331,8 +349,9 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
/* Remove packet from array if:
* - it is slow packet but no dedicated rxq is present,
* - slave is not in collecting state,
- * - bonding interface is not in promiscuous mode:
- * - packet is unicast and address does not match,
+ * - bonding interface is not in promiscuous mode and
+ * packet address isn't in mac_addrs array:
+ * - packet is unicast,
* - packet is multicast and bonding interface
* is not in allmulti,
*/
@@ -342,12 +361,10 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
bufs[j])) ||
!collecting ||
(!promisc &&
- ((rte_is_unicast_ether_addr(&hdr->dst_addr) &&
- !rte_is_same_ether_addr(bond_mac,
- &hdr->dst_addr)) ||
- (!allmulti &&
- rte_is_multicast_ether_addr(&hdr->dst_addr)))))) {
-
+ !is_bond_mac_addr(&hdr->dst_addr, bond_mac,
+ BOND_MAX_MAC_ADDRS) &&
+ (rte_is_unicast_ether_addr(&hdr->dst_addr) ||
+ !allmulti)))) {
if (hdr->ether_type == ether_type_slow_be) {
bond_mode_8023ad_handle_slow_pkt(
internals, slaves[idx], bufs[j]);
--
2.23.0
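
The lookup this patch introduces can be sketched standalone. The struct and helper below are simplified stand-ins for the DPDK types (rte_ether_addr, rte_is_zero_ether_addr, rte_is_same_ether_addr), not the real API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Simplified stand-in for struct rte_ether_addr. */
struct ether_addr {
    uint8_t bytes[6];
};

/* Same shape as the is_bond_mac_addr() helper the patch adds: scan the
 * bonding port's MAC table, skipping unused (all-zero) slots. */
bool bond_mac_match(const struct ether_addr *ea,
                    const struct ether_addr *tbl, uint32_t n)
{
    static const struct ether_addr zero = { {0} };
    uint32_t i;

    for (i = 0; i < n; i++) {
        if (memcmp(&tbl[i], &zero, sizeof(zero)) == 0)
            continue; /* skip zero address, as in the patch */
        if (memcmp(ea, &tbl[i], sizeof(*ea)) == 0)
            return true;
    }
    return false;
}
```

With this check in the Rx drop condition, a unicast packet is kept whenever its destination matches any configured address, not only the default one.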

From 6ca88723b7df208ffa5c43fdfda06381103e488a Mon Sep 17 00:00:00 2001
From: Huisong Li <lihuisong@huawei.com>
Date: Tue, 15 Nov 2022 12:06:08 +0800
Subject: net/bonding: fix slave device Rx/Tx offload configuration
[ upstream commit fdbc4e7704a7de0f41f72d4f5337b0eddaa81991 ]
Normally, the Rx/Tx offload capability of the bonding interface is
the intersection of the capabilities of all slave devices, and the
Rx/Tx offload configuration of a slave device comes from the bonding
interface. But there is a risk that a slave device retains previous
offload configurations that are not within the offload configuration
of the bonding interface.
Fixes: 57b156540f51 ("net/bonding: fix offloading configuration")
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Min Hu (Connor) <humin29@huawei.com>
---
drivers/net/bonding/rte_eth_bond_pmd.c | 17 ++++-------------
1 file changed, 4 insertions(+), 13 deletions(-)
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index f1e7b6459a..2bf28b829d 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1762,20 +1762,11 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
slave_eth_dev->data->dev_conf.link_speeds =
bonded_eth_dev->data->dev_conf.link_speeds;
- slave_eth_dev->data->dev_conf.txmode.offloads |=
- bonded_eth_dev->data->dev_conf.txmode.offloads;
-
- slave_eth_dev->data->dev_conf.txmode.offloads &=
- (bonded_eth_dev->data->dev_conf.txmode.offloads |
- ~internals->tx_offload_capa);
-
- slave_eth_dev->data->dev_conf.rxmode.offloads |=
- bonded_eth_dev->data->dev_conf.rxmode.offloads;
-
- slave_eth_dev->data->dev_conf.rxmode.offloads &=
- (bonded_eth_dev->data->dev_conf.rxmode.offloads |
- ~internals->rx_offload_capa);
+ slave_eth_dev->data->dev_conf.txmode.offloads =
+ bonded_eth_dev->data->dev_conf.txmode.offloads;
+ slave_eth_dev->data->dev_conf.rxmode.offloads =
+ bonded_eth_dev->data->dev_conf.rxmode.offloads;
nb_rx_queues = bonded_eth_dev->data->nb_rx_queues;
nb_tx_queues = bonded_eth_dev->data->nb_tx_queues;
--
2.23.0

From a31eaf3090f26f73fa3996487d9bde36418dbcd9 Mon Sep 17 00:00:00 2001
From: Huisong Li <lihuisong@huawei.com>
Date: Tue, 15 Nov 2022 12:06:09 +0800
Subject: app/testpmd: fix MAC header in csum forward engine
[ upstream commit 008834b91ac9a9e4ea982e5d2a4526d1b90a8d18 ]
The MLX5 SR-IOV Tx engine will not transmit an Ethernet frame if the
destination MAC address matches the local port address. The frame is
either looped back to Rx or dropped, depending on the port
configuration.
An application running over an MLX5 SR-IOV port therefore cannot
transmit a packet polled from the Rx queue as is; the packet's
Ethernet destination address must be changed.
Add a new run-time configuration parameter to the `csum` forwarding
engine to control MAC address handling:
testpmd> csum mac-swap on|off <port_id>
`mac-swap on` replaces the MAC addresses.
`mac-swap off` keeps the Ethernet header unchanged.
Fixes: 9b4ea7ae77fa ("app/testpmd: revert MAC update in checksum forwarding")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
---
app/test-pmd/cmdline.c | 50 +++++++++++++++++++++++++++++++++++++++++
app/test-pmd/csumonly.c | 6 +++++
app/test-pmd/testpmd.c | 5 +++--
app/test-pmd/testpmd.h | 3 ++-
4 files changed, 61 insertions(+), 3 deletions(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 8d4a88bb85..9e0e725913 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -4836,6 +4836,55 @@ cmdline_parse_inst_t cmd_csum_tunnel = {
},
};
+struct cmd_csum_mac_swap_result {
+ cmdline_fixed_string_t csum;
+ cmdline_fixed_string_t parse;
+ cmdline_fixed_string_t onoff;
+ portid_t port_id;
+};
+
+static void
+cmd_csum_mac_swap_parsed(void *parsed_result,
+ __rte_unused struct cmdline *cl,
+ __rte_unused void *data)
+{
+ struct cmd_csum_mac_swap_result *res = parsed_result;
+
+ if (port_id_is_invalid(res->port_id, ENABLED_WARN))
+ return;
+ if (strcmp(res->onoff, "on") == 0)
+ ports[res->port_id].fwd_mac_swap = 1;
+ else
+ ports[res->port_id].fwd_mac_swap = 0;
+}
+
+static cmdline_parse_token_string_t cmd_csum_mac_swap_csum =
+ TOKEN_STRING_INITIALIZER(struct cmd_csum_mac_swap_result,
+ csum, "csum");
+static cmdline_parse_token_string_t cmd_csum_mac_swap_parse =
+ TOKEN_STRING_INITIALIZER(struct cmd_csum_mac_swap_result,
+ parse, "mac-swap");
+static cmdline_parse_token_string_t cmd_csum_mac_swap_onoff =
+ TOKEN_STRING_INITIALIZER(struct cmd_csum_mac_swap_result,
+ onoff, "on#off");
+static cmdline_parse_token_num_t cmd_csum_mac_swap_portid =
+ TOKEN_NUM_INITIALIZER(struct cmd_csum_mac_swap_result,
+ port_id, RTE_UINT16);
+
+static cmdline_parse_inst_t cmd_csum_mac_swap = {
+ .f = cmd_csum_mac_swap_parsed,
+ .data = NULL,
+ .help_str = "csum mac-swap on|off <port_id>: "
+ "Enable/Disable forward mac address swap",
+ .tokens = {
+ (void *)&cmd_csum_mac_swap_csum,
+ (void *)&cmd_csum_mac_swap_parse,
+ (void *)&cmd_csum_mac_swap_onoff,
+ (void *)&cmd_csum_mac_swap_portid,
+ NULL,
+ },
+};
+
/* *** ENABLE HARDWARE SEGMENTATION IN TX NON-TUNNELED PACKETS *** */
struct cmd_tso_set_result {
cmdline_fixed_string_t tso;
@@ -17699,6 +17748,7 @@ cmdline_parse_ctx_t main_ctx[] = {
(cmdline_parse_inst_t *)&cmd_csum_set,
(cmdline_parse_inst_t *)&cmd_csum_show,
(cmdline_parse_inst_t *)&cmd_csum_tunnel,
+ (cmdline_parse_inst_t *)&cmd_csum_mac_swap,
(cmdline_parse_inst_t *)&cmd_tso_set,
(cmdline_parse_inst_t *)&cmd_tso_show,
(cmdline_parse_inst_t *)&cmd_tunnel_tso_set,
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 206968d37a..d8cb8c89aa 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -887,6 +887,12 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
* and inner headers */
eth_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
+ if (ports[fs->tx_port].fwd_mac_swap) {
+ rte_ether_addr_copy(&peer_eth_addrs[fs->peer_addr],
+ &eth_hdr->dst_addr);
+ rte_ether_addr_copy(&ports[fs->tx_port].eth_addr,
+ &eth_hdr->src_addr);
+ }
parse_ethernet(eth_hdr, &info);
l3_hdr = (char *)eth_hdr + info.l2_len;
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 2be92af9f8..ff9eabbcb7 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -4160,10 +4160,11 @@ init_port(void)
"rte_zmalloc(%d struct rte_port) failed\n",
RTE_MAX_ETHPORTS);
}
- for (i = 0; i < RTE_MAX_ETHPORTS; i++)
+ for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
+ ports[i].fwd_mac_swap = 1;
ports[i].xstats_info.allocated = false;
- for (i = 0; i < RTE_MAX_ETHPORTS; i++)
LIST_INIT(&ports[i].flow_tunnel_list);
+ }
/* Initialize ports NUMA structures */
memset(port_numa, NUMA_NO_CONFIG, RTE_MAX_ETHPORTS);
memset(rxring_numa, NUMA_NO_CONFIG, RTE_MAX_ETHPORTS);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index ab6642585e..442f97ce3d 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -247,7 +247,8 @@ struct rte_port {
struct rte_ether_addr *mc_addr_pool; /**< pool of multicast addrs */
uint32_t mc_addr_nb; /**< nb. of addr. in mc_addr_pool */
uint8_t slave_flag : 1, /**< bonding slave port */
- bond_flag : 1; /**< port is bond device */
+ bond_flag : 1, /**< port is bond device */
+ fwd_mac_swap : 1; /**< swap packet MAC before forward */
struct port_flow *flow_list; /**< Associated flows. */
struct port_indirect_action *actions_list;
/**< Associated indirect actions. */
--
2.23.0

From 97b384c9ecb993ea111bd7648a0aac9127917d22 Mon Sep 17 00:00:00 2001
From: Huisong Li <lihuisong@huawei.com>
Date: Tue, 15 Nov 2022 12:06:10 +0800
Subject: app/testpmd: update bond port configurations when add slave
[ upstream commit 76376bd9cd491fb0ca9c0b78346cee0ca7c4a351 ]
Some capabilities (like rx_offload_capa and tx_offload_capa) of a
bonding device are zero in dev_info when no slave is added, and are
updated when a new slave device is added.
Capabilities that update dynamically can introduce problems if not
handled properly. For example, reconfig() is called to initialize the
bonding port configuration when a bonding device is created, and the
global tx_mode is assigned to dev_conf.txmode.
DEV_TX_OFFLOAD_MBUF_FAST_FREE, the default value of the global
tx_mode.offloads in testpmd, is then removed from the bonding device
configuration because the reported offload capability is zero. As a
result, this offload is not set on the bonding device.
Generally, the port configuration of a bonding device must be within
the intersection of the capabilities of all its slave devices. If the
original port configuration is kept, capabilities removed by adding a
new slave may cause a failure when the bonding device is
re-initialized.
So the port configuration of the bonding device needs to be updated
to reflect the added and removed capabilities. This also helps to
ensure consistency between testpmd and the bonding device.
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Reviewed-by: Min Hu (Connor) <humin29@huawei.com>
---
app/test-pmd/testpmd.c | 40 ++++++++++++++++++++++++++++++++++++++++
app/test-pmd/testpmd.h | 3 ++-
2 files changed, 42 insertions(+), 1 deletion(-)
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index ff9eabbcb7..32098d4701 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2778,6 +2778,41 @@ fill_xstats_display_info(void)
fill_xstats_display_info_for_port(pi);
}
+/*
+ * Some capabilities (like, rx_offload_capa and tx_offload_capa) of bonding
+ * device in dev_info is zero when no slave is added. And its capability
+ * will be updated when add a new slave device. So adding a slave device need
+ * to update the port configurations of bonding device.
+ */
+static void
+update_bonding_port_dev_conf(portid_t bond_pid)
+{
+#ifdef RTE_NET_BOND
+ struct rte_port *port = &ports[bond_pid];
+ uint16_t i;
+ int ret;
+
+ ret = eth_dev_info_get_print_err(bond_pid, &port->dev_info);
+ if (ret != 0) {
+ fprintf(stderr, "Failed to get dev info for port = %u\n",
+ bond_pid);
+ return;
+ }
+
+ if (port->dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+ port->dev_conf.txmode.offloads |=
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
+ /* Apply Tx offloads configuration */
+ for (i = 0; i < port->dev_info.max_tx_queues; i++)
+ port->tx_conf[i].offloads = port->dev_conf.txmode.offloads;
+
+ port->dev_conf.rx_adv_conf.rss_conf.rss_hf &=
+ port->dev_info.flow_type_rss_offloads;
+#else
+ RTE_SET_USED(bond_pid);
+#endif
+}
+
int
start_port(portid_t pid)
{
@@ -2842,6 +2877,11 @@ start_port(portid_t pid)
return -1;
}
+ if (port->bond_flag == 1 && port->update_conf == 1) {
+ update_bonding_port_dev_conf(pi);
+ port->update_conf = 0;
+ }
+
/* configure port */
diag = eth_dev_configure_mp(pi, nb_rxq + nb_hairpinq,
nb_txq + nb_hairpinq,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 442f97ce3d..480dc3fa34 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -248,7 +248,8 @@ struct rte_port {
uint32_t mc_addr_nb; /**< nb. of addr. in mc_addr_pool */
uint8_t slave_flag : 1, /**< bonding slave port */
bond_flag : 1, /**< port is bond device */
- fwd_mac_swap : 1; /**< swap packet MAC before forward */
+ fwd_mac_swap : 1, /**< swap packet MAC before forward */
+ update_conf : 1; /**< need to update bonding device configuration */
struct port_flow *flow_list; /**< Associated flows. */
struct port_indirect_action *actions_list;
/**< Associated indirect actions. */
--
2.23.0
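
The update step this patch performs can be sketched in isolation. The flag value and function names below are illustrative assumptions, not testpmd's actual update_bonding_port_dev_conf() or the real RTE_ETH_* constants:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative flag; the real RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE value
 * is defined by ethdev, not reproduced here. */
#define TX_OFFLOAD_MBUF_FAST_FREE (1ULL << 0)

/* After a slave is added, the bonding port's capabilities become
 * non-zero, so an offload dropped by the earlier reconfig can be
 * re-enabled if the port now advertises it. */
uint64_t update_tx_offloads(uint64_t configured, uint64_t tx_capa)
{
    if (tx_capa & TX_OFFLOAD_MBUF_FAST_FREE)
        configured |= TX_OFFLOAD_MBUF_FAST_FREE;
    return configured;
}

/* The patch also clips the configured RSS hash types to what the
 * device reports in flow_type_rss_offloads. */
uint64_t clip_rss_hf(uint64_t rss_hf, uint64_t supported)
{
    return rss_hf & supported;
}
```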

From ecfa2e7054530f4a1eb9118a30a9bc6439b29bd8 Mon Sep 17 00:00:00 2001
From: Raja Zidane <rzidane@nvidia.com>
Date: Tue, 15 Nov 2022 12:06:11 +0800
Subject: app/testpmd: fix GENEVE parsing in checksum mode
[ upstream commit 993677affe391be8bb390c2625bc3d8bb857f0a5 ]
The csum FWD mode parses any received packet to set mbuf offloads for
the transmit burst, mainly in the checksum/TSO areas.
In the case of a tunnel header, the csum FWD tries to detect known
tunnels by their standard definition using the header's data, and
falls back to checking the packet type in the mbuf to see whether the
Rx port driver has already marked the packet as a tunnel.
In the fallback case, the csum FWD assumes the tunnel is VXLAN and
parses it as VXLAN.
When the GENEVE tunnel was added to the known tunnels in csum, its
parsing attempt was wrongly located after the packet type detection,
causing the csum FWD to parse the GENEVE header as VXLAN when the Rx
port set the tunnel packet type.
Remove the fallback to VXLAN, and log an error for unrecognized
tunnels when no tunnel was parsed successfully.
Fixes: c10a026c3b03 ("app/testpmd: introduce vxlan parsing function in csum fwd engine")
Cc: stable@dpdk.org
Signed-off-by: Raja Zidane <rzidane@nvidia.com>
Acked-by: Aman Singh <aman.deep.singh@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
app/test-pmd/csumonly.c | 15 +++++++++------
1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index d8cb8c89aa..7c4c04be26 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -257,8 +257,7 @@ parse_gtp(struct rte_udp_hdr *udp_hdr,
/* Parse a vxlan header */
static void
parse_vxlan(struct rte_udp_hdr *udp_hdr,
- struct testpmd_offload_info *info,
- uint32_t pkt_type)
+ struct testpmd_offload_info *info)
{
struct rte_ether_hdr *eth_hdr;
@@ -266,8 +265,7 @@ parse_vxlan(struct rte_udp_hdr *udp_hdr,
* default vxlan port (rfc7348) or that the rx offload flag is set
* (i40e only currently)
*/
- if (udp_hdr->dst_port != _htons(RTE_VXLAN_DEFAULT_PORT) &&
- RTE_ETH_IS_TUNNEL_PKT(pkt_type) == 0)
+ if (udp_hdr->dst_port != _htons(RTE_VXLAN_DEFAULT_PORT))
return;
update_tunnel_outer(info);
@@ -914,8 +912,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE;
goto tunnel_update;
}
- parse_vxlan(udp_hdr, &info,
- m->packet_type);
+ parse_vxlan(udp_hdr, &info);
if (info.is_tunnel) {
tx_ol_flags |=
RTE_MBUF_F_TX_TUNNEL_VXLAN;
@@ -927,6 +924,12 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
RTE_MBUF_F_TX_TUNNEL_GENEVE;
goto tunnel_update;
}
+ /* Always keep last. */
+ if (unlikely(RTE_ETH_IS_TUNNEL_PKT(
+ m->packet_type) != 0)) {
+ TESTPMD_LOG(DEBUG, "Unknown tunnel packet. UDP dst port: %hu",
+ udp_hdr->dst_port);
+ }
} else if (info.l4_proto == IPPROTO_GRE) {
struct simple_gre_hdr *gre_hdr;
--
2.23.0
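
The detection order after this fix can be sketched as a small classifier. This is an assumption-laden sketch: ports are in host byte order, the constants are the IANA default ports, and the VXLAN-GPE branch is omitted for brevity:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

enum tunnel { TUN_NONE, TUN_VXLAN, TUN_GENEVE, TUN_UNKNOWN };

/* Match only on well-known UDP destination ports; never fall back to
 * "assume VXLAN" just because the Rx driver set a tunnel packet type. */
enum tunnel classify_udp_tunnel(uint16_t dst_port, bool rx_marked_tunnel)
{
    if (dst_port == 4789)       /* VXLAN, RFC 7348 */
        return TUN_VXLAN;
    if (dst_port == 6081)       /* GENEVE, RFC 8926 */
        return TUN_GENEVE;
    /* "Always keep last": only report an unknown tunnel here */
    return rx_marked_tunnel ? TUN_UNKNOWN : TUN_NONE;
}
```

The TUN_UNKNOWN branch corresponds to the new debug log for packets the Rx driver marked as tunnel but no parser recognized.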

From 179fb7a7246a835dbf3fb0449faa506214468b5f Mon Sep 17 00:00:00 2001
From: Xiaoyun Li <xiaoyun.li@intel.com>
Date: Tue, 15 Nov 2022 12:06:12 +0800
Subject: net: add UDP/TCP checksum in mbuf segments
[ upstream commit d178f693bbfe07506d6e3e23a3ce9c34ee554444 ]
Add functions that call rte_raw_cksum_mbuf() to calculate the IPv4/IPv6
UDP/TCP checksum of an mbuf, which can span multiple segments.
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Aman Singh <aman.deep.singh@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Sunil Pai G <sunil.pai.g@intel.com>
---
lib/net/rte_ip.h | 186 +++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 186 insertions(+)
diff --git a/lib/net/rte_ip.h b/lib/net/rte_ip.h
index c575250852..534f401d26 100644
--- a/lib/net/rte_ip.h
+++ b/lib/net/rte_ip.h
@@ -400,6 +400,65 @@ rte_ipv4_udptcp_cksum(const struct rte_ipv4_hdr *ipv4_hdr, const void *l4_hdr)
return cksum;
}
+/**
+ * @internal Calculate the non-complemented IPv4 L4 checksum of a packet
+ */
+static inline uint16_t
+__rte_ipv4_udptcp_cksum_mbuf(const struct rte_mbuf *m,
+ const struct rte_ipv4_hdr *ipv4_hdr,
+ uint16_t l4_off)
+{
+ uint16_t raw_cksum;
+ uint32_t cksum;
+
+ if (l4_off > m->pkt_len)
+ return 0;
+
+ if (rte_raw_cksum_mbuf(m, l4_off, m->pkt_len - l4_off, &raw_cksum))
+ return 0;
+
+ cksum = raw_cksum + rte_ipv4_phdr_cksum(ipv4_hdr, 0);
+
+ cksum = ((cksum & 0xffff0000) >> 16) + (cksum & 0xffff);
+
+ return (uint16_t)cksum;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Compute the IPv4 UDP/TCP checksum of a packet.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @param ipv4_hdr
+ * The pointer to the contiguous IPv4 header.
+ * @param l4_off
+ * The offset in bytes to start L4 checksum.
+ * @return
+ * The complemented checksum to set in the L4 header.
+ */
+__rte_experimental
+static inline uint16_t
+rte_ipv4_udptcp_cksum_mbuf(const struct rte_mbuf *m,
+ const struct rte_ipv4_hdr *ipv4_hdr, uint16_t l4_off)
+{
+ uint16_t cksum = __rte_ipv4_udptcp_cksum_mbuf(m, ipv4_hdr, l4_off);
+
+ cksum = ~cksum;
+
+ /*
+ * Per RFC 768: If the computed checksum is zero for UDP,
+ * it is transmitted as all ones
+ * (the equivalent in one's complement arithmetic).
+ */
+ if (cksum == 0 && ipv4_hdr->next_proto_id == IPPROTO_UDP)
+ cksum = 0xffff;
+
+ return cksum;
+}
+
/**
* Validate the IPv4 UDP or TCP checksum.
*
@@ -426,6 +485,38 @@ rte_ipv4_udptcp_cksum_verify(const struct rte_ipv4_hdr *ipv4_hdr,
return 0;
}
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Verify the IPv4 UDP/TCP checksum of a packet.
+ *
+ * In case of UDP, the caller must first check if udp_hdr->dgram_cksum is 0
+ * (i.e. no checksum).
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @param ipv4_hdr
+ * The pointer to the contiguous IPv4 header.
+ * @param l4_off
+ * The offset in bytes to start L4 checksum.
+ * @return
+ * Return 0 if the checksum is correct, else -1.
+ */
+__rte_experimental
+static inline int
+rte_ipv4_udptcp_cksum_mbuf_verify(const struct rte_mbuf *m,
+ const struct rte_ipv4_hdr *ipv4_hdr,
+ uint16_t l4_off)
+{
+ uint16_t cksum = __rte_ipv4_udptcp_cksum_mbuf(m, ipv4_hdr, l4_off);
+
+ if (cksum != 0xffff)
+ return -1;
+
+ return 0;
+}
+
/**
* IPv6 Header
*/
@@ -538,6 +629,68 @@ rte_ipv6_udptcp_cksum(const struct rte_ipv6_hdr *ipv6_hdr, const void *l4_hdr)
return cksum;
}
+/**
+ * @internal Calculate the non-complemented IPv6 L4 checksum of a packet
+ */
+static inline uint16_t
+__rte_ipv6_udptcp_cksum_mbuf(const struct rte_mbuf *m,
+ const struct rte_ipv6_hdr *ipv6_hdr,
+ uint16_t l4_off)
+{
+ uint16_t raw_cksum;
+ uint32_t cksum;
+
+ if (l4_off > m->pkt_len)
+ return 0;
+
+ if (rte_raw_cksum_mbuf(m, l4_off, m->pkt_len - l4_off, &raw_cksum))
+ return 0;
+
+ cksum = raw_cksum + rte_ipv6_phdr_cksum(ipv6_hdr, 0);
+
+ cksum = ((cksum & 0xffff0000) >> 16) + (cksum & 0xffff);
+
+ return (uint16_t)cksum;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Process the IPv6 UDP or TCP checksum of a packet.
+ *
+ * The IPv6 header must not be followed by extension headers. The layer 4
+ * checksum must be set to 0 in the L4 header by the caller.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @param ipv6_hdr
+ * The pointer to the contiguous IPv6 header.
+ * @param l4_off
+ * The offset in bytes to start L4 checksum.
+ * @return
+ * The complemented checksum to set in the L4 header.
+ */
+__rte_experimental
+static inline uint16_t
+rte_ipv6_udptcp_cksum_mbuf(const struct rte_mbuf *m,
+ const struct rte_ipv6_hdr *ipv6_hdr, uint16_t l4_off)
+{
+ uint16_t cksum = __rte_ipv6_udptcp_cksum_mbuf(m, ipv6_hdr, l4_off);
+
+ cksum = ~cksum;
+
+ /*
+ * Per RFC 768: If the computed checksum is zero for UDP,
+ * it is transmitted as all ones
+ * (the equivalent in one's complement arithmetic).
+ */
+ if (cksum == 0 && ipv6_hdr->proto == IPPROTO_UDP)
+ cksum = 0xffff;
+
+ return cksum;
+}
+
/**
* Validate the IPv6 UDP or TCP checksum.
*
@@ -565,6 +718,39 @@ rte_ipv6_udptcp_cksum_verify(const struct rte_ipv6_hdr *ipv6_hdr,
return 0;
}
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Validate the IPv6 UDP or TCP checksum of a packet.
+ *
+ * In case of UDP, the caller must first check if udp_hdr->dgram_cksum is 0:
+ * this is either invalid or means no checksum in some situations. See 8.1
+ * (Upper-Layer Checksums) in RFC 8200.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @param ipv6_hdr
+ * The pointer to the contiguous IPv6 header.
+ * @param l4_off
+ * The offset in bytes to start L4 checksum.
+ * @return
+ * Return 0 if the checksum is correct, else -1.
+ */
+__rte_experimental
+static inline int
+rte_ipv6_udptcp_cksum_mbuf_verify(const struct rte_mbuf *m,
+ const struct rte_ipv6_hdr *ipv6_hdr,
+ uint16_t l4_off)
+{
+ uint16_t cksum = __rte_ipv6_udptcp_cksum_mbuf(m, ipv6_hdr, l4_off);
+
+ if (cksum != 0xffff)
+ return -1;
+
+ return 0;
+}
+
/** IPv6 fragment extension header. */
#define RTE_IPV6_EHDR_MF_SHIFT 0
#define RTE_IPV6_EHDR_MF_MASK 1
--
2.23.0
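
The arithmetic shared by the helpers above can be shown standalone: fold the 32-bit one's-complement sum into 16 bits, then apply the RFC 768 zero-checksum rule. The function names are illustrative, not part of the DPDK API, and this sketch folds twice to cover the carry an arbitrary 32-bit input can produce:

```c
#include <assert.h>
#include <stdint.h>

/* Fold a 32-bit one's-complement sum into 16 bits, as the
 * __rte_*_udptcp_cksum_mbuf() helpers do after adding the raw payload
 * sum to the pseudo-header sum. */
uint16_t fold_cksum(uint32_t sum)
{
    sum = (sum >> 16) + (sum & 0xffff);
    sum = (sum >> 16) + (sum & 0xffff);  /* the first add may carry again */
    return (uint16_t)sum;
}

/* RFC 768 rule applied by the public rte_*_udptcp_cksum_mbuf()
 * wrappers: complement the folded sum, and transmit a computed UDP
 * checksum of zero as all ones. */
uint16_t finalize_udp_cksum(uint16_t folded)
{
    uint16_t cksum = (uint16_t)~folded;
    return cksum == 0 ? 0xffff : cksum;
}
```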

From 2f89f906acfed6fe476f84875bbe1f2c53b8f31a Mon Sep 17 00:00:00 2001
From: Xiaoyun Li <xiaoyun.li@intel.com>
Date: Tue, 15 Nov 2022 12:06:13 +0800
Subject: app/testpmd: add SW L4 checksum in multi-segments
[ upstream commit e6b9d6411e91be7289409238f05ad1c09e8a0d05 ]
The csum forwarding mode only supports software UDP/TCP checksum
calculation for single-segment packets when hardware offload is not
enabled.
This patch enables software UDP/TCP checksum calculation over
multiple segments.
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Tested-by: Sunil Pai G <sunil.pai.g@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
app/test-pmd/csumonly.c | 41 ++++++++++++++++++++++++++---------------
1 file changed, 26 insertions(+), 15 deletions(-)
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 7c4c04be26..10aab3431b 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -96,12 +96,13 @@ struct simple_gre_hdr {
} __rte_packed;
static uint16_t
-get_udptcp_checksum(void *l3_hdr, void *l4_hdr, uint16_t ethertype)
+get_udptcp_checksum(struct rte_mbuf *m, void *l3_hdr, uint16_t l4_off,
+ uint16_t ethertype)
{
if (ethertype == _htons(RTE_ETHER_TYPE_IPV4))
- return rte_ipv4_udptcp_cksum(l3_hdr, l4_hdr);
+ return rte_ipv4_udptcp_cksum_mbuf(m, l3_hdr, l4_off);
else /* assume ethertype == RTE_ETHER_TYPE_IPV6 */
- return rte_ipv6_udptcp_cksum(l3_hdr, l4_hdr);
+ return rte_ipv6_udptcp_cksum_mbuf(m, l3_hdr, l4_off);
}
/* Parse an IPv4 header to fill l3_len, l4_len, and l4_proto */
@@ -458,7 +459,7 @@ parse_encap_ip(void *encap_ip, struct testpmd_offload_info *info)
* depending on the testpmd command line configuration */
static uint64_t
process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
- uint64_t tx_offloads)
+ uint64_t tx_offloads, struct rte_mbuf *m)
{
struct rte_ipv4_hdr *ipv4_hdr = l3_hdr;
struct rte_udp_hdr *udp_hdr;
@@ -466,6 +467,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
struct rte_sctp_hdr *sctp_hdr;
uint64_t ol_flags = 0;
uint32_t max_pkt_len, tso_segsz = 0;
+ uint16_t l4_off;
/* ensure packet is large enough to require tso */
if (!info->is_tunnel) {
@@ -508,9 +510,15 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) {
ol_flags |= RTE_MBUF_F_TX_UDP_CKSUM;
} else {
+ if (info->is_tunnel)
+ l4_off = info->l2_len +
+ info->outer_l3_len +
+ info->l2_len + info->l3_len;
+ else
+ l4_off = info->l2_len + info->l3_len;
udp_hdr->dgram_cksum = 0;
udp_hdr->dgram_cksum =
- get_udptcp_checksum(l3_hdr, udp_hdr,
+ get_udptcp_checksum(m, l3_hdr, l4_off,
info->ethertype);
}
}
@@ -525,9 +533,14 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
else if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) {
ol_flags |= RTE_MBUF_F_TX_TCP_CKSUM;
} else {
+ if (info->is_tunnel)
+ l4_off = info->l2_len + info->outer_l3_len +
+ info->l2_len + info->l3_len;
+ else
+ l4_off = info->l2_len + info->l3_len;
tcp_hdr->cksum = 0;
tcp_hdr->cksum =
- get_udptcp_checksum(l3_hdr, tcp_hdr,
+ get_udptcp_checksum(m, l3_hdr, l4_off,
info->ethertype);
}
#ifdef RTE_LIB_GSO
@@ -555,7 +568,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
/* Calculate the checksum of outer header */
static uint64_t
process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
- uint64_t tx_offloads, int tso_enabled)
+ uint64_t tx_offloads, int tso_enabled, struct rte_mbuf *m)
{
struct rte_ipv4_hdr *ipv4_hdr = outer_l3_hdr;
struct rte_ipv6_hdr *ipv6_hdr = outer_l3_hdr;
@@ -609,12 +622,9 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
/* do not recalculate udp cksum if it was 0 */
if (udp_hdr->dgram_cksum != 0) {
udp_hdr->dgram_cksum = 0;
- if (info->outer_ethertype == _htons(RTE_ETHER_TYPE_IPV4))
- udp_hdr->dgram_cksum =
- rte_ipv4_udptcp_cksum(ipv4_hdr, udp_hdr);
- else
- udp_hdr->dgram_cksum =
- rte_ipv6_udptcp_cksum(ipv6_hdr, udp_hdr);
+ udp_hdr->dgram_cksum = get_udptcp_checksum(m, outer_l3_hdr,
+ info->l2_len + info->outer_l3_len,
+ info->outer_ethertype);
}
return ol_flags;
@@ -962,7 +972,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
/* process checksums of inner headers first */
tx_ol_flags |= process_inner_cksums(l3_hdr, &info,
- tx_offloads);
+ tx_offloads, m);
/* Then process outer headers if any. Note that the software
* checksum will be wrong if one of the inner checksums is
@@ -970,7 +980,8 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
if (info.is_tunnel == 1) {
tx_ol_flags |= process_outer_cksums(outer_l3_hdr, &info,
tx_offloads,
- !!(tx_ol_flags & RTE_MBUF_F_TX_TCP_SEG));
+ !!(tx_ol_flags & RTE_MBUF_F_TX_TCP_SEG),
+ m);
}
/* step 3: fill the mbuf meta data (flags and header lengths) */
--
2.23.0

From e6b89f7ed49494302ef1e9cd852281c808f5b14f Mon Sep 17 00:00:00 2001
From: Kevin Liu <kevinx.liu@intel.com>
Date: Tue, 15 Nov 2022 12:06:14 +0800
Subject: app/testpmd: fix L4 checksum in multi-segments
[ upstream commit 7dc92d17298d8fd05a912606f02a094566ec0b3f ]
When forwarding packets in checksum mode, testpmd needs to calculate
the checksum of each layer's protocol.
In process_inner_cksums(), when parsing tunnel packets, the inner L4
offset should be outer_l2_len + outer_l3_len + l2_len + l3_len.
In process_outer_cksums(), when parsing tunnel packets, the outer L4
offset should be outer_l2_len + outer_l3_len.
Fixes: e6b9d6411e91 ("app/testpmd: add SW L4 checksum in multi-segments")
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
Acked-by: Yuying Zhang <yuying.zhang@intel.com>
Acked-by: Aman Singh <aman.deep.singh@intel.com>
---
app/test-pmd/csumonly.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 10aab3431b..47856dd70a 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -511,7 +511,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
ol_flags |= RTE_MBUF_F_TX_UDP_CKSUM;
} else {
if (info->is_tunnel)
- l4_off = info->l2_len +
+ l4_off = info->outer_l2_len +
info->outer_l3_len +
info->l2_len + info->l3_len;
else
@@ -534,7 +534,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
ol_flags |= RTE_MBUF_F_TX_TCP_CKSUM;
} else {
if (info->is_tunnel)
- l4_off = info->l2_len + info->outer_l3_len +
+ l4_off = info->outer_l2_len + info->outer_l3_len +
info->l2_len + info->l3_len;
else
l4_off = info->l2_len + info->l3_len;
@@ -623,7 +623,7 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
if (udp_hdr->dgram_cksum != 0) {
udp_hdr->dgram_cksum = 0;
udp_hdr->dgram_cksum = get_udptcp_checksum(m, outer_l3_hdr,
- info->l2_len + info->outer_l3_len,
+ info->outer_l2_len + info->outer_l3_len,
info->outer_ethertype);
}
--
2.23.0
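The offset arithmetic this patch corrects can be sketched as a standalone model. The struct and helpers below are illustrative stand-ins (field names mirror testpmd's `struct testpmd_offload_info`, but this is not the testpmd code itself): for tunnel packets, the inner L4 header sits past *both* the outer and inner L2/L3 headers, and the bug was using `l2_len` where `outer_l2_len` was intended.

```c
#include <assert.h>
#include <stdint.h>

/* Minimal stand-in for testpmd's header-length bookkeeping
 * (illustrative; field names mirror struct testpmd_offload_info). */
struct hdr_lens {
	uint16_t outer_l2_len; /* outer Ethernet */
	uint16_t outer_l3_len; /* outer IPv4/IPv6 */
	uint16_t l2_len;       /* inner L2 (includes tunnel header) or sole L2 */
	uint16_t l3_len;       /* inner IPv4/IPv6 */
	int is_tunnel;
};

/* Offset of the inner L4 header: for tunnel packets it must skip
 * BOTH outer headers, which is exactly the fix in
 * process_inner_cksums(). */
static uint16_t inner_l4_off(const struct hdr_lens *h)
{
	if (h->is_tunnel)
		return h->outer_l2_len + h->outer_l3_len +
		       h->l2_len + h->l3_len;
	return h->l2_len + h->l3_len;
}

/* Offset of the outer L4 header (e.g. a tunnel's UDP header),
 * matching the fix in process_outer_cksums(). */
static uint16_t outer_l4_off(const struct hdr_lens *h)
{
	return h->outer_l2_len + h->outer_l3_len;
}
```

With the pre-patch formula, a 14-byte outer Ethernet header and a larger inner L2 length would land the checksum computation at the wrong byte offset whenever `outer_l2_len != l2_len`; the helpers above always anchor on the outer lengths first.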


@@ -0,0 +1,73 @@
From c90a36013ccaeeb3baf258e4e23120253faee7aa Mon Sep 17 00:00:00 2001
From: Huisong Li <lihuisong@huawei.com>
Date: Tue, 15 Nov 2022 12:06:15 +0800
Subject: net/bonding: fix mbuf fast free handling
[ upstream commit b4924c0db589b5d4698abfab3ce60978d9df518b ]
The RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE offload can't be used in bonding
mode Broadcast and mode 8023AD. Currently, the bonding driver forcibly
removes it from dev->data->dev_conf.txmode.offloads and reports success
in bond_ethdev_configure(). But rte_eth_dev_configure() still fails to
execute, because the Tx offload validation in eth_dev_validate_offloads()
rejects the request first. So this patch moves the adjustment of the Tx
offloads to the stage of adding a slave device, so that the bonding device
reports the correct txmode offload capabilities.
Fixes: 18c41457cbae ("net/bonding: fix mbuf fast free usage")
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
---
drivers/net/bonding/rte_eth_bond_api.c | 5 +++++
drivers/net/bonding/rte_eth_bond_pmd.c | 11 -----------
2 files changed, 5 insertions(+), 11 deletions(-)
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index b74477128a..1235573bf2 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -544,6 +544,11 @@ __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
return ret;
}
+ /* Bond mode Broadcast & 8023AD don't support MBUF_FAST_FREE offload. */
+ if (internals->mode == BONDING_MODE_8023AD ||
+ internals->mode == BONDING_MODE_BROADCAST)
+ internals->tx_offload_capa &= ~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
+
bonded_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf &=
internals->flow_type_rss_offloads;
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 2bf28b829d..29871cf8a3 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -3600,7 +3600,6 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
const char *name = dev->device->name;
struct bond_dev_private *internals = dev->data->dev_private;
struct rte_kvargs *kvlist = internals->kvlist;
- uint64_t offloads;
int arg_count;
uint16_t port_id = dev - rte_eth_devices;
uint32_t link_speeds;
@@ -3652,16 +3651,6 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
}
}
- offloads = dev->data->dev_conf.txmode.offloads;
- if ((offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
- (internals->mode == BONDING_MODE_8023AD ||
- internals->mode == BONDING_MODE_BROADCAST)) {
- RTE_BOND_LOG(WARNING,
- "bond mode broadcast & 8023AD don't support MBUF_FAST_FREE offload, force disable it.");
- offloads &= ~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
- dev->data->dev_conf.txmode.offloads = offloads;
- }
-
link_speeds = dev->data->dev_conf.link_speeds;
/*
* The default value of 'link_speeds' is zero. From its definition,
--
2.23.0
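The essence of the fix is *when* the unsupported offload is masked out: at slave-add time the bond adjusts the capabilities it advertises, so the generic validation in rte_eth_dev_configure() never sees an offload the bond cannot honour. A minimal sketch of that ordering follows; the constants and mode values are placeholders, not the real rte_ethdev/bonding definitions.

```c
#include <assert.h>
#include <stdint.h>

/* Placeholder flag and mode values for illustration only. */
#define TX_OFFLOAD_MBUF_FAST_FREE (UINT64_C(1) << 16)
enum bond_mode {
	MODE_ROUND_ROBIN = 0,
	MODE_BROADCAST = 3,
	MODE_8023AD = 4,
};

struct bond_private {
	enum bond_mode mode;
	uint64_t tx_offload_capa; /* capabilities reported to applications */
};

/* Sketch of the fix: when a slave is added, mask the unsupported
 * offload out of the advertised Tx capabilities for the modes that
 * cannot honour it. Validation at configure time then compares the
 * application's requested offloads against an already-correct mask,
 * instead of the driver silently dropping the flag too late. */
static void slave_add_fixup_capa(struct bond_private *internals)
{
	if (internals->mode == MODE_8023AD ||
	    internals->mode == MODE_BROADCAST)
		internals->tx_offload_capa &= ~TX_OFFLOAD_MBUF_FAST_FREE;
}
```

The removed bond_ethdev_configure() hunk above did the equivalent masking on the *requested* offloads, but only after generic validation had already compared the request against the unmodified capability mask and failed.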


@@ -1,6 +1,6 @@
Name: dpdk
Version: 21.11
-Release: 24
+Release: 25
Packager: packaging@6wind.com
URL: http://dpdk.org
%global source_version 21.11
@@ -212,6 +212,16 @@ Patch9191: 0191-net-bonding-add-link-speeds-configuration.patch
Patch9192: 0192-net-bonding-call-Tx-prepare-before-Tx-burst.patch
Patch9193: 0193-net-bonding-fix-MTU-set-for-slaves.patch
Patch9194: 0194-app-testpmd-remove-jumbo-offload-related-code.patch
Patch9195: 0195-app-testpmd-revert-MAC-update-in-checksum-forwarding.patch
Patch9196: 0196-net-bonding-fix-bond4-drop-valid-MAC-packets.patch
Patch9197: 0197-net-bonding-fix-slave-device-Rx-Tx-offload-configura.patch
Patch9198: 0198-app-testpmd-fix-MAC-header-in-csum-forward-engine.patch
Patch9199: 0199-app-testpmd-update-bond-port-configurations-when-add.patch
Patch9200: 0200-app-testpmd-fix-GENEVE-parsing-in-checksum-mode.patch
Patch9201: 0201-net-add-UDP-TCP-checksum-in-mbuf-segments.patch
Patch9202: 0202-app-testpmd-add-SW-L4-checksum-in-multi-segments.patch
Patch9203: 0203-app-testpmd-fix-L4-checksum-in-multi-segments.patch
Patch9204: 0204-net-bonding-fix-mbuf-fast-free-handling.patch
Summary: Data Plane Development Kit core
Group: System Environment/Libraries
@@ -354,6 +364,20 @@ strip -g $RPM_BUILD_ROOT/lib/modules/%{kern_devel_ver}/extra/dpdk/igb_uio.ko
/usr/sbin/depmod
%changelog
* Wed Nov 16 2022 chenjiji <chenjiji09@163.com> - 21.11-25
Sync some patches for bonding PMD and testpmd. The patches
are as follows:
- app/testpmd: revert MAC update in checksum forwarding
- net/bonding: fix bond4 drop valid MAC packets
- net/bonding: fix slave device Rx/Tx offload configuration
- app/testpmd: fix MAC header in csum forward engine
- app/testpmd: update bond port configurations when add slave
- app/testpmd: fix GENEVE parsing in checksum mode
- net: add UDP/TCP checksum in mbuf segments
- app/testpmd: add SW L4 checksum in multi-segments
- app/testpmd: fix L4 checksum in multi-segments
- net/bonding: fix mbuf fast free handling
* Tue Nov 15 2022 jiangheng <jiangheng14@huawei.com> - 21.11-24
- proc-info: add gazelle-proc-info support in dpdk