CN118445088A - A method, system, device, equipment and medium for processing network messages - Google Patents
- Publication number
- CN118445088A (application CN202410895786.1A)
- Authority
- CN
- China
- Prior art keywords
- data
- processing
- network message
- bit width
- message data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/30—Arrangements for executing machine instructions, e.g. instruction decode
- G06F9/38—Concurrent instruction execution, e.g. pipeline or look ahead
- G06F9/3867—Concurrent instruction execution, e.g. pipeline or look ahead using instruction pipelines
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
- Detection And Prevention Of Errors In Transmission (AREA)
Abstract
The invention discloses a method, a system, a device, equipment and a medium for processing network messages, relating to the field of communication technology. Whether the first network message data carries large-bit-width data wider than the checksum bit width or small-bit-width data narrower than it, bit-width grouping can be performed based on the checksum bit width, improving bit-width processing capability and meeting the requirements of network cards running at hundreds of gigabits. After grouping, the grouped first network message data undergoes parallel check processing based on the computing units and the grouping data to obtain the individual check data. The parallel processing capability of the FPGA is exploited and a parallel checksum processing scheme is adopted, so data processing efficiency is improved compared with serial processing on a CPU. Moreover, the checksum computation is pipelined, which further improves efficiency compared with the traditional serial accumulation approach to checksum processing. Performing the processing on the FPGA saves CPU resources and increases network bandwidth.
Description
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a method, a system, an apparatus, a device, and a medium for processing a network packet.
Background
A smart network interface card is a high-performance network card dedicated to network data processing. Built around a custom chip, high-speed network interfaces and strong software support, it provides faster, safer and more reliable network connectivity and data transmission for data centers and enterprise networks.
Traditional smart network cards mostly rely on a central processing unit (CPU) plus a field-programmable gate array (FPGA) for data processing. Most data-center workloads, including computing tasks and various infrastructure tasks, are completed on the CPU. However, as data-processing demands grow, CPU computing power has hit a bottleneck and Moore's law is gradually failing. The CPU spends a great deal of time on these computations and its resources are tied up, so network bandwidth cannot be improved during network data processing and data-processing efficiency drops.
Therefore, how to save CPU resources while improving network bandwidth and the efficiency of network data processing is a technical problem to be solved by those skilled in the art.
Disclosure of Invention
The invention aims to provide a method, a system, a device, equipment and a medium for processing network messages, so as to solve the problem that data processing in traditional smart network cards takes a long time on the CPU and occupies CPU resources, with the result that network bandwidth cannot be improved during network data processing and data-processing efficiency is reduced.
In order to solve the above technical problems, the present invention provides a method for processing a network packet, including:
receiving first network message data and input data bit width;
performing bit width grouping processing on the first network message data according to the bit width of the input data and the bit width of the checksum to obtain the number of computing units and grouping data; the grouping data is the number of groups in which the network message data in the computing unit are located;
Performing parallel checksum processing on the grouped first network message data according to the computing unit and the grouped data to obtain each piece of verification data corresponding to the first network message data; the parallel checksum processing mode is a processing mode of performing checksum processing on network message data in each computing unit by utilizing the parallel processing characteristic of the field programmable gate array, and obtaining the verification data in a pipeline processing mode among the computing units;
And accumulating the check data to obtain final check data corresponding to the first network message data.
In one aspect, when the bit width of the input data is greater than or equal to the bit width of the checksum, performing bit width grouping processing on the first network packet data according to the bit width of the input data and the bit width of the checksum to obtain the number of computing units and grouping data, including:
Dividing the first network message data according to the bit width of the input data to obtain processed second network message data;
determining the dividing number of the first computing unit according to the checksum bit width and the current second network message data;
dividing the dividing number of the first computing unit into groups of two data each, and taking the resulting number of groups as the grouping data of the first computing unit;
dividing the grouping data of the first computing unit into groups of two data each, and taking the resulting number of groups as the grouping data of the second computing unit;
And so on until the grouping data of the Nth computing unit is 2 groups, so as to obtain the number of each computing unit and the corresponding grouping data, wherein N is a positive integer.
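As a minimal sketch of the wide-input branch above, assuming both bit widths are powers of two (e.g. a 512-bit input word and a 16-bit checksum; the function name and Python form are illustrative, not the patented implementation):

```python
def grouping_plan(input_width: int, checksum_width: int):
    """Bit-width grouping for input_width >= checksum_width.

    The data word is first split into checksum-width segments (the dividing
    number of the first computing unit); each computing unit then pairs the
    groups of the previous stage, halving the group count until only 2
    groups remain.
    """
    assert input_width >= checksum_width and input_width % checksum_width == 0
    segments = input_width // checksum_width  # dividing number of the first unit
    plan = []                                 # grouping data, one entry per unit
    groups = segments
    while groups > 2:
        groups //= 2                          # pair adjacent groups of two
        plan.append(groups)
    return segments, plan
```

For a 512-bit beat and a 16-bit checksum, `grouping_plan(512, 16)` yields 32 segments and the grouping-data sequence 16, 8, 4, 2: each computing unit halves the group count until two groups remain.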
On the other hand, when the bit width of the input data is smaller than the bit width of the checksum, performing bit width grouping processing on the first network message data according to the bit width of the input data and the bit width of the checksum to obtain the number of computing units and grouping data, including:
Dividing the first network message data according to the bit width of the input data to obtain processed second network message data;
determining the merging number of the second network message data according to the relation between the checksum bit width and the input data bit width so as to meet the condition that the bit width data of the merged second network message data is the same as the checksum bit width;
determining the dividing number of the first computing unit according to the checksum bit width and the second network message data after current combination;
dividing the dividing number of the first computing unit into groups of two data each, and taking the resulting number of groups as the grouping data of the first computing unit;
dividing the grouping data of the first computing unit into groups of two data each, and taking the resulting number of groups as the grouping data of the second computing unit;
And so on until the grouping data of the Nth computing unit is 2 groups, so as to obtain the number of each computing unit and the corresponding grouping data, wherein N is a positive integer.
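The narrow-input branch first merges consecutive input words up to the checksum width before the same halving applies. A sketch under the assumptions that the widths are powers of two and that words pack high-to-low within a segment (the packing order is not specified in the text):

```python
def merge_small_words(words, input_width: int, checksum_width: int):
    """Merge checksum_width // input_width consecutive narrow words into one
    checksum-wide segment, so that the merged bit width matches the checksum
    bit width.  High-to-low packing is an assumption, not stated behavior."""
    assert input_width < checksum_width and checksum_width % input_width == 0
    k = checksum_width // input_width          # merging number
    assert len(words) % k == 0
    segments = []
    for i in range(0, len(words), k):
        seg = 0
        for w in words[i:i + k]:
            seg = (seg << input_width) | w     # concatenate narrow words
        segments.append(seg)
    return segments
```

For example, four 8-bit words merged toward a 16-bit checksum yield two 16-bit segments, which then feed the same pairwise grouping as the wide-input branch.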
On the other hand, the input data bit width is related to the number of calculation units as follows:
dividing the bit width of the input data by the bit width of the checksum to obtain first data;
Processing the first data with a power-of-two (base-2 logarithm) operation to obtain second data;
the second data is taken as the number of calculation units.
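The stated relation amounts to a division followed by a base-2 logarithm. A sketch, reading the "power of 2 algorithm" as log2 (an interpretation; names are illustrative):

```python
from math import log2

def unit_count(input_width: int, checksum_width: int) -> int:
    """Number of pipelined computing units per the stated relation:
    first data  = input bit width / checksum bit width,
    second data = log2(first data)."""
    first = input_width // checksum_width
    # widths are assumed to be powers of two so the logarithm is exact
    assert first >= 1 and (first & (first - 1)) == 0
    return int(log2(first))
```

So a 512-bit input with a 16-bit checksum gives log2(32) = 5, counting the stages that reduce the full word down to a single checksum-width value.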
On the other hand, the parallel checksum processing is performed on the first network message data after grouping according to the computing unit and the grouping data to obtain each piece of verification data corresponding to the first network message data, including:
Taking the grouping data of the first computing unit as the unit number of the first computing sub-units of the first computing unit;
performing checksum processing on the two groups of data in each first computing subunit to obtain first intermediate data corresponding to each first computing subunit;
placing first intermediate data corresponding to each first computing subunit into a second computing unit;
Dividing the number of the first intermediate data into two groups of data to obtain the group number as the group data of the second calculation unit;
taking the grouping data of the second computing unit as the unit number of the second computing sub-units of the second computing unit;
performing checksum processing on the two groups of data in each second computing subunit to obtain second intermediate data corresponding to each second computing subunit;
and so on, until the number of Nth computing subunits in the Nth computing unit is two; the Nth intermediate data of the corresponding Nth computing subunits serve as the check data of the current second network message data, and the check data of each second network message data are obtained by grouping the first network message data according to the input data bit width.
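The staged pairwise reduction above can be mimicked in software as a binary tree of checksum additions, one stage per computing unit, with each stage running conceptually in parallel on the FPGA. A 16-bit, Internet-style one's-complement checksum is assumed here for illustration:

```python
MASK16 = 0xFFFF  # assumed 16-bit checksum width

def csum_add(a: int, b: int) -> int:
    """One's-complement addition with end-around carry fold."""
    s = a + b
    return (s & MASK16) + (s >> 16)

def tree_checksum(segments):
    """Pairwise (tree) reduction mirroring the pipelined computing units:
    each stage sums adjacent pairs; log2(n) stages reduce n checksum-width
    segments to a single check value."""
    assert segments and (len(segments) & (len(segments) - 1)) == 0
    level = segments
    while len(level) > 1:
        # one computing unit: every subunit adds one pair of groups
        level = [csum_add(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

Because one's-complement addition is associative and commutative, the tree order produces the same result as serial accumulation, which is what makes the parallel/pipelined arrangement correct.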
In another aspect, the checksum processing of the nth intermediate data of the nth computing subunit includes:
Adding the two groups of data of the Nth computing subunit to obtain first initial data; wherein, the bit width of the first initial data is larger than the bit width of the input data;
taking the highest bit data of the first initial data as carry data;
adding the carry data to the data in the preset number of bits of the first initial data to obtain the Nth intermediate data; the preset number of bits is counted backwards from the last bit of the first initial data according to the input data bit width.
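A single computing-subunit step as described above, assuming a 16-bit width for concreteness (the sum of two groups is one bit wider; its top bit is folded back as the carry):

```python
def subunit_add(a: int, b: int, width: int = 16) -> int:
    """One computing-subunit step: add the two groups (first initial data,
    at most one bit wider than the inputs), take the highest bit as the
    carry data, and add it back onto the low `width` bits."""
    mask = (1 << width) - 1
    first_initial = a + b              # up to width+1 bits
    carry = first_initial >> width     # highest-bit data
    return (first_initial & mask) + carry
```

For example, `subunit_add(0xFFFF, 0x0001)` wraps the carry around to give `0x0001`, the standard end-around-carry behavior of a one's-complement checksum.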
On the other hand, the first network message data is obtained by data screening of the first initial network message data, and specifically includes:
Acquiring a current bus protocol;
splitting the first initial network message data according to the current bus protocol to obtain two paths of first initial network message data;
outputting the first initial network message data of one path;
Performing signal generation processing on the first initial network message data of the other path according to the current bus protocol to obtain a detection packet start mark;
and carrying out data selection processing according to the first initial network message data of the other path corresponding to the detection packet start mark to obtain first network message data.
On the other hand, when the current bus protocol is a multi-channel transmission bus protocol, performing signal generation processing on the first initial network message data of the other path according to the current bus protocol to obtain a detection packet start flag, including:
acquiring a valid signal for receiving the first initial network message data, a back-end receive-ready signal and a register's last-beat signal;
determining the level signal at the next clock rising edge according to the level relation among the valid signal, the back-end receive-ready signal and the register's last-beat signal;
Inverting (NOT logic) the level signal at the next clock rising edge to obtain a first level signal;
and performing AND logic on the first level signal and the valid signal to obtain the detection packet start mark.
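One plausible software reading of this start-of-packet detection for an AXI-Stream-like handshake (valid / ready / last): a registered in-frame level is derived from the handshake, inverted, and ANDed with valid. This is a cycle-level sketch of the described behavior, not the actual RTL:

```python
def detect_sof(beats):
    """Cycle-level sketch of the detection packet start mark.
    `beats` is a list of (valid, ready, last) tuples, one per clock.
    An 'in_frame' register is set on the first accepted beat and cleared
    when a last beat is accepted; SOF = valid AND NOT in_frame."""
    in_frame = 0                                  # registered level
    sof_marks = []
    for valid, ready, last in beats:
        sof_marks.append(valid & (1 - in_frame))  # NOT then AND, per the text
        if valid and ready:                       # beat accepted this clock
            in_frame = 0 if last else 1           # level at next rising edge
    return sof_marks
```

With a three-beat packet followed by a single-beat packet, the mark asserts exactly on the first valid beat of each packet.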
On the other hand, the detection packet start flag is further carried by a private signal, and specifically includes:
carrying out private signal processing on the first network message data in advance to obtain a carrying mark corresponding to the first network message data;
and taking the carrying mark as the detection packet start mark.
On the other hand, according to the first initial network message data of the other path corresponding to the detection packet start mark, performing data selection processing to obtain first network message data, including:
When the detection packet start mark of the other path's first initial network message data is 1, obtaining the message data, within that path's first initial network message data, that do not participate in the parallel checksum processing;
setting the message data that do not participate in the parallel checksum processing to 0 and merging them back into that path's first initial network message data, completing the data selection to obtain the first network message data;
When the detection packet start mark of the other path's first initial network message data is not 1, leaving that path's first initial network message data unprocessed to obtain the first network message data.
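The data-selection step can be sketched as zeroing the excluded byte lanes of a beat; the lane count (a 512-bit beat = 64 lanes) and the bytes-object representation are illustrative assumptions:

```python
def mask_excluded_bytes(beat: bytes, exclude, width_bytes: int = 64) -> bytes:
    """When the packet-start mark is 1, set the byte lanes that do not
    participate in the parallel checksum processing to 0 and merge the
    result back into the beat.  `exclude` is a set of lane indices."""
    assert len(beat) == width_bytes
    return bytes(0 if i in exclude else b for i, b in enumerate(beat))
```

Zeroing (rather than dropping) the excluded lanes keeps the beat's bit width unchanged, so the downstream grouping and checksum tree need no special case for the first beat of a packet.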
On the other hand, before the accumulation processing is performed on each check data to obtain the final check data corresponding to the first network message data, the method further comprises the following steps:
Obtaining an output signal of a control subunit corresponding to the computing unit;
Outputting the intermediate data of each corresponding computing unit when the output signal of each control subunit is valid, so as to obtain the current check data;
and, when the output signal of a control subunit is invalid, registering the intermediate data of the computing subunit and outputting it after the control register delays for a preset number of clock cycles.
On the other hand, the output signal of the control subunit is determined by a combinational logic circuit; the combinational logic circuit comprises a first NOT-OR gate (an OR gate with one inverting input), a second NOT-OR gate, an AND gate and a register;
acquiring an input valid signal for receiving the first initial network message data, an output valid signal, an input back-end receive-ready signal and an output back-end receive-ready signal;
the port of the input valid signal is connected to the OR input of the first NOT-OR gate and to the first input of the AND gate;
the port of the output valid signal is connected to the inverting input of the second NOT-OR gate and is driven by the output of the register;
the port of the output back-end receive-ready signal is connected to the OR input of the second NOT-OR gate; the output of the second NOT-OR gate is connected to the inverting input of the first NOT-OR gate and to the second input of the AND gate, and drives the input back-end receive-ready signal;
the output of the first NOT-OR gate is connected to the input of the register;
and the output of the AND gate provides the output signal of the control subunit.
On the other hand, the accumulating processing is carried out on each check data to obtain the final check data corresponding to the first network message data, which comprises the following steps:
Acquiring a detection packet start mark corresponding to the first network message data;
If the detection packet start mark is 1, storing the check data corresponding to the plurality of second network message data until the frame tail mark bit of the first network message data is 1, and ending the storage;
And accumulating the stored check data to obtain the final check data.
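The frame-level accumulation between the packet-start mark and the frame-tail mark might look like the following sketch (a 16-bit checksum and 1-asserted flag encodings are assumed from the description):

```python
def accumulate_frame(beat_checks, sof_flags, eof_flags) -> int:
    """From the beat whose packet-start mark is 1 up to the beat whose
    frame-tail mark is 1, store each beat's check data and fold them
    together with end-around-carry addition to get the final check data."""
    mask = 0xFFFF                      # assumed checksum bit width

    def cadd(a, b):
        s = a + b
        return (s & mask) + (s >> 16)  # end-around carry

    total, storing = 0, False
    for chk, sof, eof in zip(beat_checks, sof_flags, eof_flags):
        if sof:
            storing, total = True, 0   # new frame: reset the store
        if storing:
            total = cadd(total, chk)
        if eof:
            storing = False            # frame-tail mark ends the storage
    return total
```

Folding with the same end-around-carry addition as the per-beat tree keeps the final value identical to a fully serial checksum over the whole frame.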
On the other hand, after the accumulation processing is performed on each check data to obtain the final check data corresponding to the first network message data, the method further comprises the following steps:
And carrying out synchronous processing on the first initial network message data and the final check data of one path so as to send the first initial network message data and the final check data to an upper computer.
In order to solve the technical problem, the invention also provides a processing method of the network message, which comprises the following steps:
acquiring third network message data, input data bit width and message command information sent by an upper computer;
Synchronizing the third network message data with the message command information to obtain synchronized third network message data;
Performing bit width grouping processing on the synchronized third network message data according to the bit width of the input data and the bit width of the checksum to obtain the number of computing units and grouping data; the grouping data is the number of groups in which the network message data in the computing unit are located;
performing parallel checksum processing on the grouped third network message data according to the calculation unit and the grouping data to obtain each piece of verification data corresponding to the third network message data; the parallel checksum processing mode is a processing mode of performing checksum processing on network message data in each computing unit by utilizing the parallel processing characteristic of the field programmable gate array, and obtaining the verification data in a pipeline processing mode among the computing units;
accumulating the verification data to obtain final verification data corresponding to the third network message data; and inserting the final check data into the replacement processing to obtain replaced check data.
On the one hand, the method for synchronizing the third network message data with the message command information to obtain the synchronized third network message data comprises the following steps:
Acquiring an enabling signal zone bit, an initial checksum zone bit and checksum position information of message command information;
And synchronizing the enable signal flag bit, the initial checksum flag bit, the checksum position information and the third network message data to be converted into side information of a bus protocol, so as to obtain the synchronized third network message data.
On the other hand, inserting the final check data into the replacement process to obtain the replaced check data includes:
And if the enable signal flag bit is valid, replacing initial check data corresponding to the third network message data with final check data according to the checksum position information.
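The insertion/replacement step can be sketched as a conditional two-byte overwrite at the checksum position; the big-endian byte order and the function signature are assumptions for illustration:

```python
def insert_checksum(frame: bytes, checksum: int, pos: int,
                    enable: bool = True) -> bytes:
    """If the enable-signal flag is valid, replace the two bytes at the
    checksum position with the final check data (16-bit, big-endian
    assumed); otherwise pass the frame through unchanged."""
    if not enable:
        return frame
    out = bytearray(frame)
    out[pos] = (checksum >> 8) & 0xFF      # high byte
    out[pos + 1] = checksum & 0xFF         # low byte
    return bytes(out)
```

Gating on the enable flag matches the described behavior: when the flag is invalid, the initial check data already present in the third network message data is left in place.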
In order to solve the technical problem, the invention also provides a processing system of the network message, which comprises a first terminal device, a switch and a second terminal device;
the first terminal equipment is used for controlling the sending end to process the fourth network message data to obtain corresponding check data, wherein the processing process of the fourth network message data is obtained by the processing method of the network message;
the switch is used for receiving the fourth network message data and the corresponding check data to transmit to the second terminal equipment;
The second terminal equipment is used for controlling the receiving end to receive the fourth network message data and process it to obtain the corresponding new check data, so that it can be conveniently transmitted to the upper computer; the processing of the fourth network message data received from the switch in the second terminal device is obtained by the above processing method of the network message.
In order to solve the technical problem, the present invention further provides a processing device for a network packet, including:
the first receiving module is used for receiving the first network message data and the input data bit width;
The first processing module is used for carrying out bit width grouping processing on the first network message data according to the bit width of the input data and the bit width of the checksum so as to obtain the number of computing units and grouping data; the grouping data is the number of groups in which the network message data in the computing unit are located;
The second processing module is used for carrying out parallel checksum processing on the grouped first network message data according to the calculation unit and the grouping data to obtain each piece of verification data corresponding to the first network message data; the parallel checksum processing mode is a processing mode of performing checksum processing on network message data in each computing unit by utilizing the parallel processing characteristic of the field programmable gate array, and obtaining the verification data in a pipeline processing mode among the computing units;
And the third processing module is used for carrying out accumulation processing on each check data to obtain final check data corresponding to the first network message data.
In order to solve the technical problem, the present invention further provides a processing device for a network packet, including:
the acquisition module is used for acquiring third network message data, input data bit width and message command information sent by the upper computer;
The fourth processing module is used for carrying out synchronous processing on the third network message data and the message command information to obtain the synchronous third network message data;
The fifth processing module is used for carrying out bit width grouping processing on the synchronized third network message data according to the bit width of the input data and the bit width of the checksum so as to obtain the number of the computing units and grouping data; the grouping data is the number of groups in which the network message data in the computing unit are located;
The sixth processing module is used for carrying out parallel checksum processing on the grouped third network message data according to the calculation unit and the grouping data to obtain each piece of verification data corresponding to the third network message data; the parallel checksum processing mode is a processing mode of performing checksum processing on network message data in each computing unit by utilizing the parallel processing characteristic of the field programmable gate array, and obtaining the verification data in a pipeline processing mode among the computing units;
the seventh processing module is used for accumulating each check data to obtain final check data corresponding to the third network message data; and inserting the final check data into the replacement processing to obtain replaced check data.
In order to solve the technical problem, the present invention further provides a processing device for a network packet, including:
a memory for storing a computer program;
and the processor is used for realizing the steps of the processing method of the network message when executing the computer program.
In order to solve the above technical problem, the present invention further provides a computer readable storage medium, where a computer program is stored, where the computer program when executed by a processor implements the steps of the method for processing a network packet as described above.
The invention provides a processing method of a network message, which comprises the steps of receiving first network message data and input data bit width; performing bit width grouping processing on the first network message data according to the bit width of the input data and the bit width of the checksum to obtain the number of computing units and grouping data; the grouping data is the number of groups in which the network message data in the computing unit are located; performing parallel checksum processing on the grouped first network message data according to the computing unit and the grouped data to obtain each piece of verification data corresponding to the first network message data; the parallel checksum processing mode is a processing mode of performing checksum processing on network message data in each computing unit by utilizing the parallel processing characteristic of the field programmable gate array, and obtaining the verification data in a pipeline processing mode among the computing units; and accumulating the check data to obtain final check data corresponding to the first network message data.
The invention has the advantage that, whether the first network message data carries large-bit-width data wider than the checksum bit width or small-bit-width data narrower than it, bit-width grouping can be performed based on the checksum bit width, improving bit-width processing capability, meeting the requirements of hundreds-of-gigabit network cards and increasing the flexibility of bit-width processing. After grouping, the grouped first network message data undergoes parallel check processing based on the computing units and the grouping data to obtain the check data; the parallel processing capability of the FPGA is exploited and a parallel checksum processing scheme is adopted, so data-processing efficiency is improved compared with serial processing on a CPU. In addition, the checksum computation is pipelined, which further improves efficiency compared with the traditional serial accumulation of checksum processing. Performing the processing on the FPGA saves CPU resources and improves network bandwidth.
Secondly, bit-width grouping is performed for the different size relations between the checksum bit width and the input data bit width: network message data of larger bit width is reduced to the checksum bit width before the subsequent checksum processing, while network message data of smaller bit width is merged up to the checksum bit width. At the same time, the number of computing units in the pipelined computation is determined, so that the check data obtained in the subsequent checksum processing is more accurate, and the flexibility of the bit-width processing is improved. From the relation between the number of computing units and the input data bit width, the number of computing units in the pipelined computation can be known clearly during data processing, so that the second network message data are connected accurately. The parallel checksum processing is based on the parallel operation of the computing subunits within a computing unit, each computing subunit producing its corresponding result data, which facilitates the pipelined hand-off of the check data between computing units. Exploiting the parallel processing capability of the FPGA and adopting a parallel checksum processing scheme improves data-processing efficiency compared with serial processing on a CPU; at the same time, the pipelined processing adopted when longer network message data cannot be processed all at once improves the accuracy of the check data.
In addition, the invention also provides a processing system of the network message, a processing method, a device, equipment and a medium of the network message, which have the same beneficial effects as the processing method of the network message.
Drawings
For a clearer description of the embodiments of the present invention, the drawings required by the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; other drawings can be obtained from them by those of ordinary skill in the art without inventive effort.
Fig. 1 is a flowchart of a method for processing a network message applied to a receiving end field programmable gate array module according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a receiving end field programmable gate array module according to an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of another receiving-end field programmable gate array module according to an embodiment of the present invention;
FIG. 4 is a block diagram of a combinational logic circuit of a control subunit according to an embodiment of the present invention;
fig. 5 is a flowchart of a method for processing a network packet applied to a sender field programmable gate array module according to an embodiment of the present invention;
fig. 6 is a block diagram of a transmitting end field programmable gate array module according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a parallel checksum processing module of another sender field programmable gate array module according to an embodiment of the present invention;
FIG. 8 is a block diagram of a network message processing system according to an embodiment of the present invention;
fig. 9 is a schematic port diagram of a receiving end field programmable gate array module according to an embodiment of the present invention;
fig. 10 is a schematic port diagram of a transmitting end field programmable gate array module according to an embodiment of the present invention;
Fig. 11 is a block diagram of a processing device for a network packet applied to a receiving end field programmable gate array module according to an embodiment of the present invention;
Fig. 12 is a block diagram of a processing device for a network packet applied to a sender field programmable gate array module according to an embodiment of the present invention;
Fig. 13 is a block diagram of a processing device for a network packet according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without making any inventive effort are within the scope of the present invention.
The core of the invention is to provide a method, system, device, equipment and medium for processing network messages, so as to solve the problem that data processing in a traditional intelligent network card takes a long time on the CPU and occupies CPU resources, so that the network bandwidth cannot be improved during network data processing and the data processing efficiency is reduced.
In order to better understand the aspects of the present invention, the present invention will be described in further detail with reference to the accompanying drawings and detailed description.
In a traditional intelligent network card, an acceleration engine for the corresponding service is added alongside data processing in order to release server CPU computing power, provide more CPU capacity for computing tasks, or implement specific functions through an accelerator while maintaining line-speed data forwarding. However, the CPU still plays the role of main computing element, so CPU resources remain occupied. As for the network bandwidth: when the CPU occupancy is high, the running data processing processes already saturate the CPU, and there may be no CPU resources available, so the CPU has no capacity left for the transmission work of network data processing; the network connection therefore becomes slower, the network bandwidth cannot be improved, and the data processing efficiency is reduced. The processing method for network data provided by the invention can solve these technical problems.
Fig. 1 is a flowchart of a method for processing a network message applied to a receiving end field programmable gate array module according to an embodiment of the present invention, as shown in fig. 1, where the method includes:
S11: receiving first network message data and input data bit width;
Specifically, the receiving end field programmable gate array module is the receiving end of an FPGA module, and an interface of the receiving end receives the first network message data under the serial port module of the FPGA. Based on the received network message data, the receiving end outputs the original data and the check value so that they can conveniently be sent to an upper computer with a CPU for processing. The data processing is mainly completed on the FPGA; that is, the data processing work of the CPU is offloaded to the FPGA to relieve the CPU's computing load and save CPU resources.
It should be noted that the receiving end FPGA in the embodiment of the present invention may be a device or chip such as a data processing unit (DPU), an intelligent network card, a common network card or a switch, and the usage scenario is not limited. The receiving end and transmitting end field programmable gate array modules mentioned in the embodiments merely illustrate how the network message processing is implemented in such a scenario, and may also be applied in the scenario components mentioned above, which is not limited herein. In addition, the checksum offloading mentioned in this embodiment may equally be an offloading of other data processing, which is not limited herein.
The receiving end in this embodiment is a module for receiving network message data based on the configuration in the FPGA chip, where the receiving end and the transmitting end may be based on the same FPGA chip, or based on the receiving ends and the transmitting ends corresponding to the two FPGA chips on the terminal device, which is not limited herein, and may be set according to practical situations.
The first network message data may be data sent by another end-to-end terminal device through the sending end of the FPGA module, or may be data sent by an upper computer, which is not limited herein. In addition, the first network message data may be unprocessed data directly transmitted from the FPGA module transmitting end of the other terminal device or issued by the host computer, or may be network message data after data processing based on the foregoing, which is not limited herein, and may be set and processed according to actual situations.
The input data bit width is the number of bits the intelligent network card can transmit in one clock cycle; the larger the bit number, the more data can be transmitted at a time. It is obtained from the input network message data and is currently typically 8 bit, 16 bit, 64 bit, 128 bit, 256 bit, 512 bit, and so on. This embodiment does not limit the bit width to a specific value; it may be set and acquired according to the actual situation.
S12: performing bit width grouping processing on the first network message data according to the bit width of the input data and the bit width of the checksum to obtain the number of computing units and grouping data;
The grouping data is the number of groups in which the network message data in the computing unit are located;
It can be understood that the checksum bit width is set based on the checksum processing. Checksum processing is a technology for detecting errors during packet transmission: the transmitting end calculates the sum of all bytes in the data packet and appends the result to the tail of the packet; the receiving end recalculates the sum of all bytes with the same algorithm and compares the result with the checksum at the tail of the packet. In this embodiment, the checksums obtained by the transmitting end and the receiving end are sent to the upper computer, and the upper computer finally judges whether a transmission error exists.
In the checksum processing, the data to be checked is treated as a sequence of numbers in units of 16 bits, and the binary data is summed. This 16-bit unit is the checksum bit width mentioned in this embodiment.
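As a minimal Python sketch of this 16-bit-unit summation (the function name and the big-endian byte order are illustrative assumptions, not taken from the patent; the actual design computes the sum in FPGA hardware):

```python
def checksum16_sum(data: bytes) -> int:
    """Serially sum a byte string in 16-bit (checksum-bit-width) units.

    Illustrative reference behaviour only; the patent's design computes
    the same sum in parallel on an FPGA.
    """
    if len(data) % 2:            # pad to a whole number of 16-bit units
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]   # big-endian 16-bit word
    while total >> 16:           # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return total
```

For example, summing the words 0x0001 and 0x0002 yields 3.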
And performing bit width grouping processing on the first network message data according to the bit width of the input data and the bit width of the checksum to obtain the number of the computing units and grouping data. The bit width grouping processing is to divide the first network message data into a plurality of network message fields by considering the relation between the bit width of the input data and the bit width of the checksum.
Since the data length of the first network message data is long, it is divided based on the input data bit width into multiple groups of network message data, each with a bit width equal to the input data bit width; the grouped network message data is then further grouped based on the input data bit width and the checksum bit width to obtain several computing units and the number of groups in which the network message data of one computing unit is located.
As for the grouping performed on the input data bit width and the checksum bit width: if the input data bit width is larger than the checksum bit width, the corresponding bit width data is larger, and the network message data corresponding to the input data bit width must be split a second time to obtain data of the same width as the checksum bit width. If the input data bit width is smaller than the checksum bit width, the corresponding bit width data is smaller, and several pieces of network message data of the input data bit width must be merged to obtain data of the same width as the checksum bit width.
Although the FPGA can realize parallel processing, the computing units in this embodiment cannot process all the data simultaneously, so a stepped pipeline method is adopted: multiple computing units process the network message data after bit width processing, the first step belonging to the first computing unit. The first computing unit processes the bit-width-grouped network message data once and passes the processed data to the second computing unit of the second step, and so on, until the Nth computing unit produces 1 piece of data. As for the grouping data in this embodiment, in view of the parallel processing characteristic of the FPGA, the network message data after bit width grouping processing is divided into multiple groups in the first computing unit, and these groups are processed in parallel during the subsequent checksum processing, which improves the data processing to a certain extent.
For example, with an input data bit width of 512 bits, a checksum bit width of 16 bits and first network message data of 1024 bits, grouping the first network message data by the input data bit width yields two groups; both groups cannot be handled in one pass, so one group is processed at a time. When the earlier group of network messages is processed, that group of network message data (512 bits) is grouped by the 16-bit checksum bit width into 32 groups, that is, 32 pieces of 16-bit data. These 32 pieces of 16-bit data are placed in the first computing unit; during checksum processing, the 32 pieces are checked in pairs to obtain 16 groups, that is, 16 result data are obtained in the first computing unit. The 16 result data are placed in the second computing unit and checked in pairs to obtain 8 groups, that is, 8 result data, and so on, until 1 result data is finally obtained, which serves as the check data of the current (first-group) 512-bit network message data.
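The two-by-two ladder in this example can be sketched in Python as a pairwise tree reduction (a hypothetical software model; in the patent each stage is a hardware computing unit whose groups run in parallel):

```python
def tree_checksum(words):
    """Reduce a list of 16-bit words to one check value by pairing
    adjacent words at each stage (one stage per computing unit)."""
    while len(words) > 1:
        nxt = []
        for a, b in zip(words[0::2], words[1::2]):
            s = a + b
            nxt.append((s & 0xFFFF) + (s >> 16))  # fold the carry back in
        words = nxt
    return words[0]

# A 512-bit group is 32 sixteen-bit words: 32 -> 16 -> 8 -> 4 -> 2 -> 1.
```

With 32 input words the reduction takes five stages, matching the five computing units given later for a 512-bit input data bit width.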
S13: performing parallel checksum processing on the grouped first network message data according to the computing unit and the grouped data to obtain each piece of verification data corresponding to the first network message data;
the parallel checksum processing mode is a processing mode of performing checksum processing on network message data in each computing unit by utilizing the parallel processing characteristic of the field programmable gate array, and obtaining the verification data in a pipeline processing mode among the computing units;
It should be noted that, the first network packet data after grouping is a generic name of the network packet data in each computing unit, and the first network packet data after grouping is combined with the computing unit and the packet data to perform parallel checksum processing to obtain corresponding check data.
Taking one group of network message data and the first computing unit as an example: the first computing unit contains 32 pieces of 16-bit data, which the pairwise checksum processing divides into 16 groups of data, that is, the grouping data is 16. The parallel processing uses the parallel processing characteristic of the FPGA to process the 16 groups in parallel and obtain 16 result data; during this parallel processing, the checksum mode adds the two data of each of the 16 groups. The 16 result data are then placed in the second computing unit to realize pipeline processing: in the second computing unit the pairwise checksum processing yields 8 groups of data, that is, the grouping data is 8, and parallel processing of the 8 groups yields 8 result data. From the first computing unit to the second computing unit a pipeline processing mode is realized, and in the same way 1 result data is finally obtained.
In traditional checksum processing, the first 16-bit datum is added to the second, the superimposed result is then added to the next 16-bit datum, and so on; the whole process is a serial accumulation with a long data processing cycle. This embodiment instead adopts the parallel, pipelined computing mode to improve data processing efficiency when the chip cannot process all the data simultaneously.
A checksum is a value computed over a group of data items for verification purposes in the fields of data processing and data communication; in calculating the checksum, the data or other strings are treated as numbers, thereby ensuring the integrity and accuracy of the data. The checksum processing in this embodiment may use a common checksum algorithm, such as parity check, Hamming code check, 16-bit cyclic redundancy check (Cyclic Redundancy Check, CRC16) or CRC32, or other checksum algorithms, which are not limited herein.
In addition, the checksum processing treats each pair of characters as a 16-bit integer; if the sum exceeds 16 bits, the carry must be added back into the result data during the checksum processing.
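The carry handling for a single pair can be sketched as follows (a hypothetical helper name; the end-around-carry behaviour is what the paragraph above describes):

```python
def add16_with_carry(a: int, b: int) -> int:
    """Add two 16-bit values; if the sum exceeds 16 bits, add the
    carry back into the low 16 bits of the result (end-around carry)."""
    s = a + b
    return (s & 0xFFFF) + (s >> 16)
```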
S14: and accumulating the check data to obtain final check data corresponding to the first network message data.
It can be understood that the check data are accumulated; here the accumulation is over the check data corresponding to the multiple pieces of network message data obtained by splitting the first network message data by the input data bit width. For example, for the two groups of network messages obtained from 1024 bits, parallel checksum processing of each group yields two pieces of check data, and accumulating the two check data gives the final check data of the first network message data.
Specifically, the accumulation process may be direct addition, or may be performed after the validity of the check data is determined, which is not limited herein.
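As a minimal sketch of the direct-addition variant of this accumulation (function name is an assumption; validity checking before accumulation, as the paragraph above allows, is omitted):

```python
def accumulate_checks(check_values):
    """Accumulate the per-group check values of the split first network
    message data into one final check value, folding any carries back
    into the 16-bit checksum width."""
    total = sum(check_values)
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return total
```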
The embodiment of the invention provides a method for processing a network message: receiving first network message data and an input data bit width; performing bit width grouping processing on the first network message data according to the input data bit width and the checksum bit width to obtain the number of computing units and grouping data, the grouping data being the number of groups in which the network message data in a computing unit are located; performing parallel checksum processing on the grouped first network message data according to the computing units and the grouping data to obtain each piece of check data corresponding to the first network message data, the parallel checksum processing being a mode that performs checksum processing on the network message data in each computing unit using the parallel processing characteristic of the field programmable gate array and obtains the check data through pipeline processing between the computing units; and accumulating the check data to obtain the final check data corresponding to the first network message data. Large bit width data larger than the checksum bit width, or small bit width data smaller than the checksum bit width, in the first network message data can both be bit-width grouped based on the checksum bit width, which improves the bit width processing capability, meets hundred-gigabit-class network card requirements and improves the flexibility of bit width processing. After grouping, the grouped first network message data undergoes parallel check processing based on the computing units and the grouping data to obtain the check data; by using the parallel processing characteristic of the FPGA and a parallel checksum processing mode, the data processing efficiency is improved compared with the serial processing of a CPU.
In addition, the checksum processing adopts a pipeline processing mode, which further improves data processing efficiency compared with the accumulation of the traditional serial checksum processing. Processing on the FPGA saves CPU resources and improves the network bandwidth.
In some embodiments, the bit width grouping processing differs with the input data bit width. When the input data bit width is greater than or equal to the checksum bit width, performing bit width grouping processing on the first network message data according to the input data bit width and the checksum bit width to obtain the number of computing units and grouping data includes:
Dividing the first network message data according to the bit width of the input data to obtain processed second network message data;
determining the dividing number of the first computing unit according to the checksum bit width and the current second network message data;
dividing the division number of the first computing unit into groups of two to obtain a group number, which is used as the grouping data of the first computing unit;
dividing the grouping data of the first computing unit into groups of two to obtain a group number, which is used as the grouping data of the second computing unit;
and so on, until the grouping data of the Nth computing unit is a single group of two data items, so as to obtain the number of computing units and the corresponding grouping data, where N is a positive integer.
Specifically, the first network message data is divided according to the input data bit width to obtain multiple pieces of second network message data: because the input data bit width is limited, the longer first network message data cannot be processed at once and must be divided by the input data bit width, that is, the bit width of the second network message data is set according to the input data bit width. After one piece of second network message data is processed, the next piece is processed. Taking one piece of second network message data as an example, the current second network message is processed according to the checksum bit width to obtain the division number of the first computing unit (the bit width of the second network message data divided by the checksum bit width), and the second network message data is placed in the first computing unit for the subsequent checksum processing. Before the checksum processing, since the checksum is computed over pairs of network message data, the data is grouped again according to the division number of the first computing unit, that is, divided into groups of two to form the grouping data of the first computing unit.
Continuing the example of the above embodiment, the second network message data has a bit width of 512 bits and the checksum bit width is 16 bits, so the division number of the first computing unit is 32, that is, 32 pieces of 16-bit data obtained from the 512-bit network message data. Dividing 32 into groups of two yields 16 groups, which serve as the grouping data of the first computing unit, and the two pieces of second network message data within each of the 16 groups undergo the subsequent checksum processing. The 16 result data obtained in the first computing unit are placed in the second computing unit to obtain its division number, which divided into groups of two gives the grouping data of the second computing unit (8 groups), and so on; when the grouping data of the Nth computing unit is a single group of two data items, the 1 result data obtained by the checksum processing serves as the check data of the current second network message data, where N is a positive integer.
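The division steps above can be sketched as follows (function and variable names are assumptions; the point is the halving sequence of grouping data per computing unit):

```python
def unit_group_counts(input_width: int, checksum_width: int = 16):
    """Grouping data per computing unit when the input data bit width
    is at least the checksum bit width: the first unit's division
    number is input_width / checksum_width, and pairing into groups
    of two halves the count at each unit until one result remains."""
    assert input_width >= checksum_width and input_width % checksum_width == 0
    divisions = input_width // checksum_width   # 16-bit words in unit 1
    counts = []
    while divisions > 1:
        divisions //= 2                         # groups of two
        counts.append(divisions)
    return counts   # e.g. 512-bit input -> [16, 8, 4, 2, 1]
```

The length of the returned list is the number of computing units N (five for a 512-bit input data bit width).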
Fig. 2 is a schematic structural diagram of a receiving end field programmable gate array module according to an embodiment of the present invention. As shown in fig. 2, after the first network message data is received, multiple pieces of second network message data are obtained. In the computing units that follow, the current second network message data is first divided in the first computing unit; the result data obtained by checksum calculation in the first computing unit is put into the second computing unit for checksum calculation, and so on; the result data finally produced by the Nth computing unit is used as check data and stored in an accumulation unit, so that after the next second network message data is processed, the final accumulation yields the final check data of the first network message data.
In connection with fig. 2, the interfaces on the left and right sides of the receiving end are used for interacting with other modules. The pins of the five interfaces on the upper left are, from top to bottom, data (data), byte modifier (keep), last beat signal (last), valid signal (valid) and back-end receive-ready signal (ready); to distinguish signals sent from the host to the slave, the pin names are, from top to bottom, s_tdata, s_tkeep, s_tlast, s_tvalid and s_tready. The signals passed from the internal signal generating module of the receiving end to the data selection module are, from top to bottom, the registered start of packet (sop), data, byte modifier, last beat signal, valid signal and back-end receive-ready signal; since the host and slave directions need not be distinguished here, the pin names are sop, data, keep, last, valid and ready. The transmission pins between the data selection module and the computing units, and between the transmission units, likewise need not distinguish the master and slave directions. In addition, fig. 2 shows an input data bit width of 512 bits, a corresponding number of computing units N=5, a number of control subunits N+1=6, 16 first computing subunits in the first computing unit, 8 second computing subunits in the second computing unit, and so on, down to 1 Nth computing subunit in the Nth computing unit.
The pins of the five interfaces on the upper right of the receiving end are, from top to bottom, data (data), byte modifier (keep), last beat signal (last), valid signal (valid) and back-end receive-ready signal (ready); since these are sent by the receiving end, the pin names are, from top to bottom, m_tdata, m_tkeep, m_tlast, m_tvalid and m_tready, so as to distinguish the sending and receiving directions.
The pins of the three interfaces on the lower right of the receiving end are, from top to bottom, check data (value), valid signal (valid) and back-end receive-ready signal (ready); since they carry the result data of the parallel checksum processing sent by the receiving end, the pin names are, from top to bottom, csum_value, csum_tvalid and csum_tready.
For an input data bit width greater than or equal to the checksum bit width, this embodiment performs bit width grouping processing: network message data with a larger bit width is reduced to the checksum bit width before the subsequent data checksum processing, and the number of computing units in the pipeline computing mode is determined at the same time, so that the check data obtained by the subsequent checksum processing is more accurate while the flexibility of the bit width processing is improved.
In some embodiments, when the input data bit width is smaller than the checksum bit width, performing bit width grouping processing on the first network message data according to the input data bit width and the checksum bit width to obtain the number of computing units and grouping data includes:
Dividing the first network message data according to the bit width of the input data to obtain processed second network message data;
determining the merging number of the second network message data according to the relation between the checksum bit width and the input data bit width so as to meet the condition that the bit width data of the merged second network message data is the same as the checksum bit width;
determining the dividing number of the first computing unit according to the checksum bit width and the second network message data after current combination;
dividing the division number of the first computing unit into groups of two to obtain a group number, which is used as the grouping data of the first computing unit;
dividing the grouping data of the first computing unit into groups of two to obtain a group number, which is used as the grouping data of the second computing unit;
and so on, until the grouping data of the Nth computing unit is a single group of two data items, so as to obtain the number of computing units and the corresponding grouping data, where N is a positive integer.
Specifically, considering that the bit width of the input data is smaller than the bit width of the checksum, the second network message data corresponding to the bit width of the input data needs to be combined to obtain the network message data with the same bit width as the checksum.
Taking an input data bit width of 8 bits and a checksum bit width of 16 bits as an example, with first network message data of 1024 bits, division yields 8-bit pieces of second network message data. The merge count of the second network message data is determined from the relation between the checksum bit width and the input data bit width, that is, how many pieces of second network message data must be merged to equal the checksum bit width; for the values in this example, every two pieces of second network message data are merged to obtain the merged second network message data.
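A minimal sketch of this merging step, assuming the narrow inputs arrive most-significant-first (the function name, argument names and byte order are illustrative assumptions):

```python
def merge_words(inputs, input_width: int = 8, checksum_width: int = 16):
    """Merge narrow input words into checksum-bit-width words; with an
    8-bit input width, every two consecutive inputs form one 16-bit word."""
    merge_count = checksum_width // input_width   # inputs per merged word
    words = []
    for i in range(0, len(inputs), merge_count):
        word = 0
        for piece in inputs[i:i + merge_count]:
            word = (word << input_width) | piece  # shift in the next piece
        words.append(word)
    return words
```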
The number of the first calculation units, the number of the calculation units, and the determination process of the packet data corresponding to each calculation unit, which are performed subsequently, are the same as those of the foregoing embodiment, and will not be described in detail herein, but only refer to the foregoing embodiment.
Fig. 3 is a schematic structural diagram of another receiving end field programmable gate array module according to an embodiment of the present invention. As shown in fig. 3, after the first network message data is received, multiple pieces of second network message data are obtained and combined by a merge processing module. The merged second network message data is divided in the first computing unit; the result data obtained by checksum calculation in the first computing unit is put into the second computing unit for checksum calculation, and the result data finally produced by the Nth computing unit is stored in the accumulation unit as check data, so that the final accumulation over the pieces of second network message data yields the final check data of the first network message data. It can be understood that fig. 3 only adds the merge processing module after the data selection module and is otherwise identical to fig. 2 described above.
The merging process in this embodiment may be performed after the signal generation in fig. 3 and before the computing units, or before the data selection module; its position is not limited here and may be configured according to the actual situation.
For an input data bit width smaller than the checksum bit width, this embodiment performs bit width merging processing: network message data with a smaller bit width is expanded to the checksum bit width before the subsequent data checksum processing, and the number of computing units in the pipeline computing mode is determined at the same time, so that the check data obtained by the subsequent checksum processing is more accurate while the flexibility of the bit width processing is improved.
In some embodiments, the input data bit width is related to the number of computational units as follows:
dividing the bit width of the input data and the bit width of the checksum to obtain first data;
expressing the first data as a power of 2 (that is, taking its base-2 logarithm) to obtain second data;
the second data is taken as the number of calculation units.
Specifically, for the relation between the number of computing units and the input data bit width, take a checksum bit width of 16 bits as an example: when the input data bit width is 512 bits, the number of computing units is 5; when it is 256 bits, the number is 4; when it is 128 bits, the number is 3; and when it is 1024 bits, the number is 6.
Accordingly, the formulas are as follows:
512 bit = 2^5 * 16 bit;
256 bit = 2^4 * 16 bit;
128 bit = 2^3 * 16 bit;
1024 bit = 2^6 * 16 bit.
Combining these formulas, the input data bit width is divided by the checksum bit width to obtain the first data (taking 512 bits as an example, 512 bit / 16 bit = 32); the first data is then expressed as a power of 2 to obtain the second data (32 = 2^5, so the second data is 5), and the second data is taken as the number of computing units.
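This relation can be written as a short sketch (the function name is an assumption; it simply computes N = log2(input width / checksum width) as derived above):

```python
import math

def num_computing_units(input_width: int, checksum_width: int = 16) -> int:
    """Number of pipeline computing units N, where
    input_width = 2**N * checksum_width."""
    first = input_width // checksum_width        # the "first data"
    n = int(math.log2(first))                    # the "second data"
    # sanity check: the width must really be 2**N times the checksum width
    assert (1 << n) * checksum_width == input_width
    return n
```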
The relation between the number of computing units and the input data bit width provided by this embodiment makes clear how many computing units the pipeline computing mode uses during data processing, so that the second network message data are handled accurately.
In some embodiments, in step S13, performing parallel checksum processing on the first network packet data after grouping according to the calculation unit and the grouping data to obtain each piece of verification data corresponding to the first network packet data, including:
Taking the grouping data of the first computing unit as the unit number of the first computing sub-units of the first computing unit;
performing checksum processing on the two groups of data in each first computing subunit to obtain first intermediate data corresponding to each first computing subunit;
placing first intermediate data corresponding to each first computing subunit into a second computing unit;
dividing the first intermediate data into groups of two to obtain a group number, which is used as the grouping data of the second computing unit;
taking the grouping data of the second computing unit as the unit number of the second computing sub-units of the second computing unit;
performing checksum processing on the two groups of data in each second computing subunit to obtain second intermediate data corresponding to each second computing subunit;
and so on, until the Nth computing unit contains a single Nth computing subunit whose input is two pieces of data; the Nth intermediate data of that subunit serves as the check data of the current second network message data, and the check data of each second network message data are obtained in the same way after the first network message data is grouped according to the input data bit width.
It will be appreciated that the parallel checksum processing is performed on the network message data within the computing units. In connection with the above embodiment, taking the first computing unit as an example, its grouping data is already known explicitly, so the grouping data (e.g. 16 groups) is taken as the number (16) of first computing subunits of the first computing unit. Each of the 16 first computing subunits stores two groups of data; the output data of each first computing subunit is its corresponding intermediate datum, giving 16 first intermediate data. The first intermediate data corresponding to each first computing subunit are placed into the second computing unit, and the group count obtained by dividing the first intermediate data into groups of two serves as the grouping data (8 groups) of the second computing unit. The grouping data of the second computing unit is taken as the number (8) of second computing subunits; checksum processing of the two groups of data in each second computing subunit yields the corresponding second intermediate data, i.e. 8 second intermediate data; and so on, until the Nth computing subunit of the Nth computing unit holds two groups of data, whose processing yields a single Nth intermediate datum that is taken as the check data of the current second network message data.
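The staged halving described above can be sketched in software as a reduction tree; this is an illustrative model under the 16-bit checksum assumption, where `csum_add` stands in for the per-subunit checksum processing (its carry handling is detailed in a later embodiment):

```python
def csum_add(a, b):
    """Per-subunit checksum add on two 16-bit values, folding the carry
    back into the low 16 bits so the result stays 16 bits wide."""
    c = a + b
    return (c & 0xFFFF) + (c >> 16)

def tree_checksum(words):
    """One pass through the computing units: each stage halves the number
    of values until a single 16-bit check value remains.
    len(words) is assumed to be a power of two (e.g. 32 for a 512-bit beat)."""
    while len(words) > 1:
        words = [csum_add(words[i], words[i + 1])
                 for i in range(0, len(words), 2)]
    return words[0]
```

In hardware each `while` iteration corresponds to one computing unit and each `csum_add` call to one computing subunit, all subunits of a stage operating in parallel.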
Note that the Nth intermediate data, Nth computing subunit and Nth computing unit in this embodiment are merely labels and do not indicate the count of each item. In addition, the check data of each second network message data are the pieces of check data obtained after the first network message data is grouped according to the input data bit width.
The parallel checksum processing of this embodiment is based on parallel processing by the computing subunits within a computing unit, with the checksum processing performed in each computing subunit yielding its corresponding result data; this facilitates pipelining of the check data between the computing units. By exploiting the parallel processing characteristic of the FPGA, this parallel checksum processing mode improves data processing efficiency compared with the serial processing of a CPU; at the same time, the pipeline processing mode adopted where longer network message data cannot all be processed simultaneously improves the accuracy of the check data.
In some embodiments, the checksum processing of the nth intermediate data of the nth computing subunit includes:
Adding the two groups of data of the Nth computing subunit to obtain first initial data; wherein, the bit width of the first initial data is larger than the bit width of the input data;
taking the highest bit data of the first initial data as carry data;
adding the carry data and the data at the preset bit positions of the first initial data to obtain the Nth intermediate data; the preset bit number is the number of bits counted back from the last bit of the first initial data according to the input data bit width.
Specifically, taking one computing subunit as an example, the two groups of data in the computing subunit, each 16 bits wide, are added to obtain the first initial data. Adding the two data can produce an overflow carry, so the highest bit of the first initial data is taken as the carry data; for example, the bit width of the first initial data is 17 bits. The carry data is then added to the low bits of the first initial data to obtain the corresponding intermediate data, ensuring the intermediate data is likewise 16 bits.
Input data a and b are defined, both 16 bits wide. Adding a and b gives the first initial data c, whose bit width is 17 bits: c = a + b. The output intermediate data is c[15:0] + c[16]; that is, the carry data c[16] is added to the data c[15:0] at the preset bit positions of the first initial data to obtain the intermediate data of the computing subunit.
For example, the decimal addition 3 + 3 = 6 becomes 11 + 11 = 110 in binary, but the final result must keep the same number of digits as the addends, so the low two digits "10" are added to the most significant digit "1" of 110 to give "11", which is the final intermediate data.
In the specific checksum processing provided in this embodiment, the carry problem is considered and the carry is folded back into the corresponding intermediate data, so that effective information is retained rather than lost.
In some embodiments, the first network packet data in step S11 is obtained by performing data screening on the first initial network packet data, and specifically includes:
Acquiring a current bus protocol;
splitting the first initial network message data according to the current bus protocol to obtain two paths of first initial network message data;
outputting the first initial network message data of one path;
Performing signal generation processing on the first initial network message data of the other path according to the current bus protocol to obtain a detection packet start mark;
And carrying out data selection processing according to the first initial network message data of the other path corresponding to the detection packet start mark to obtain the first network message data.
It will be appreciated that, as shown in fig. 2, the first network message data sits after the data selection module and before the computing units; that is, the first network message data is obtained by performing data screening on the first initial network message data. The specific screening process must be combined with the bus protocol: first, splitting processing is performed on the first initial network message data according to the current bus protocol to obtain two paths of first initial network message data. The splitting in this embodiment is actually a copy operation, implemented by the forking module in fig. 2, which splits one standard data stream into two identical data streams, writing one path into a first-in first-out (First In First Out, FIFO) module and outputting the other path directly. In particular, a fork function may be employed, which allocates resources to a new process, such as space to store data and code, and then copies all values of the original process into the new process. This splitting is implemented in the forking module (Fork). Based on the forking module, the first initial network message data of one path is output directly, while the first initial network message data of the other path undergoes the subsequent parallel checksum processing. It should be noted that the forking module completes the copying of the data, but the valid and ready signals controlling the data are asynchronous processes.
Further, signal generation processing is performed on the first initial network message data of the other path according to the current bus protocol to obtain the detection packet start flag (sop). Besides being based on the current bus protocol, the signal generation processing here may also be carried out with private signals. The sop flag serves as a marker in the subsequent data selection and accumulation processes, so that the data can be processed accurately.
Data selection processing is then performed on the first initial network message data of the other path according to the sop flag to obtain the first network message data. The data selection in this embodiment accounts for the fact that part of the data in the first initial network message data need not undergo checksum processing, so that part can be masked, thereby reducing the amount of calculation.
In this embodiment, the first network message data is obtained by performing data selection processing on the first initial network message data: once the sop flag is generated, part of the data of the first initial network message data is selected based on the sop flag, so that calculation during the subsequent checksum processing is convenient and simple, and computing power is saved.
In some embodiments, considering a generation process of a sop flag, when a current bus protocol is a multi-channel transmission bus protocol, performing signal generation processing on first initial network message data of another path according to the current bus protocol to obtain a detection packet start flag, including:
acquiring an effective signal for receiving first initial network message data, a rear-end receiving preparation signal and a last beat signal of a register;
determining a level signal of a rising edge of a next clock according to a level signal relation among the valid signal, the rear-end receiving preparation signal and a last beat signal of the register;
performing NOT-logic processing on the level signal at the next clock rising edge to obtain a first level signal;
and performing AND logic processing on the first level signal and the effective signal to obtain a detection packet start mark.
Specifically, as shown in fig. 2, the current bus protocol is the multi-channel transmission bus protocol (Advanced eXtensible Interface, AXI), an on-chip bus with high performance, high bandwidth and low latency. Its address/control and data phases are separated and unaligned data transmission is supported; in burst transmission only the first address is needed; separate read and write data channels are supported; outstanding transfers and out-of-order access are supported; and timing convergence is easier to achieve.
This embodiment is based on the valid signal (tvalid), the back-end receive-ready signal (tready) and the register last-beat signal (tlast) corresponding to reception of the first initial network message data, together with the handshake mechanism of the AXI bus protocol: the tvalid signal is the master telling the slave that the transmitted data is valid, tready is the slave telling the master that it is ready to receive, and the tlast signal is the master telling the slave that this transfer is the end of the burst, i.e. the register last-beat signal.
The level signal of the next clock rising edge is determined according to the level signal relation among the three signals, specifically:
when the above three signals are all at high level, the in-packet signal (in_packet) for sop is set to 0 at the next clock rising edge; and when the sop flag is at high level and the tready signal is at high level, the in-packet signal (in_packet) is set to 1 at the next clock rising edge.
NOT-logic processing is performed on the level signal at the next clock rising edge to obtain the first level signal, and AND-logic processing of the first level signal with the valid signal (tvalid) yields the detection packet start flag (sop). The specific formula is:
sop = !in_packet_q && tvalid;
where in_packet is the in-packet signal for sop, in_packet_q is its registered value at the clock edge, and tvalid is the valid signal; the first level signal is !in_packet_q; "!" denotes NOT logic; "&&" denotes AND logic.
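The flag generation above can be mimicked cycle by cycle in software. This is an illustrative model, not the RTL itself; it assumes the tlast condition takes priority when a single-beat packet starts and ends in the same cycle:

```python
def sop_stream(tvalid, tready, tlast):
    """Cycle-accurate sketch of the sop generator: in_packet_q is the
    registered value of in_packet, updated on each rising clock edge."""
    in_packet_q = 0
    sops = []
    for v, r, l in zip(tvalid, tready, tlast):
        sop = (not in_packet_q) and v       # sop = !in_packet_q && tvalid
        sops.append(int(sop))
        # next-state logic from the text:
        if v and r and l:                   # all three high -> packet ends
            in_packet_q = 0
        elif sop and r:                     # sop accepted -> inside a packet
            in_packet_q = 1
    return sops
```

For a three-beat packet followed by a new packet, sop is high only on the first beat of each packet, as the formula intends.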
In some embodiments, the detection packet start flag is further carried by a private signal, specifically including:
carrying out private signal processing on the first network message data in advance to obtain a carrying mark corresponding to the first network message data;
The carrying flag is used as a detection packet start flag.
It can be understood that, besides the sop signal obtained from the AXI bus protocol as above, the sop signal can also be obtained by carrying a private signal. A private signal is a user-defined signal marking mode: the flag can be produced by another bus, or carried additionally when the first network message data is processed; it may be obtained through a register flag or in other ways, which are not limited here. The carried flag is used as the sop signal.
The sop mark generation process provided by the embodiment provides convenience for the parallel checksum processing process of the subsequent calculation unit, and also provides data markability processing for data selection so as to save calculation force.
In some embodiments, the header portion of the network message data need not participate in the parallel checksum processing, so that portion must be masked to reduce its processing. Performing data selection processing on the first initial network message data of the other path according to the detection packet start flag to obtain the first network message data includes:
When the detection packet start mark of the first initial network message data of the other path is 1, obtaining message data which does not participate in parallel checksum processing and corresponds to the first initial network message data of the other path;
setting the message data which does not participate in parallel checksum processing to be 0 so as to be combined into the first initial network message data of the other path, and completing data selection processing to obtain the first network message data;
When the detection packet start mark of the first initial network message data of the other path is not 1, the first initial network message data of the other path is not processed so as to obtain the first network message data.
Specifically, when the sop flag is 1, the message data of the other path of first initial network message data that does not participate in the checksum processing is obtained and set to 0; the zeros then take part in the subsequent calculation without affecting the result, and this part of the data is guaranteed to be output unchanged. It can be understood that the network message data not participating in the parallel checksum processing can be identified from the data in the computing subunits of the corresponding computing unit, or by a preset data address, which is not limited here. Naturally, if an entire input-data-bit-width's worth of network message data in a computing subunit is set to 0, no subsequent processing is needed for it; if it is not wholly set to 0, the network message data at the preset data addresses is processed as zeroed data, without affecting the result data.
When the detection packet start flag of the other path of first initial network message data is not 1, that path is left unprocessed to obtain the first network message data; the data is unchanged and output as-is.
Masking the network message data that need not participate in the parallel checksum processing improves the data processing efficiency and the computing power of the FPGA.
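A sketch of the masking step, where `skip_indices` is a hypothetical stand-in for whichever header word positions a given protocol excludes; it works because 0 is the identity of the end-around-carry addition, so the masked words cannot change the checksum result:

```python
def mask_non_checksum_words(words, skip_indices):
    """Set to 0 the 16-bit words that must not join the checksum.
    skip_indices is illustrative: the positions could equally come from a
    preset data address or from the computing-subunit layout, per the text."""
    return [0 if i in skip_indices else w for i, w in enumerate(words)]
```

Feeding the masked list into the checksum pipeline gives the same result as summing only the participating words, while the original data stream is forwarded unchanged on the other path.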
In some embodiments, the pipeline speed of the computing units in the pipeline computing mode may cause a data processing conflict: the computation of the computing units for the current second network message data has not yet finished when the next second network message data starts to be processed in the first computing unit. Therefore, before accumulating the respective check data to obtain the final check data corresponding to the first network message data, the method further includes:
Obtaining an output signal of a control subunit corresponding to the computing unit;
outputting the intermediate data of each corresponding computing unit when the output signals of the control subunits are valid, so as to obtain the current check data;
and under the condition that the output signals of the control subunits are invalid, registering the intermediate data of the calculation subunits, and outputting the intermediate data of the calculation subunits after the control register delays for a preset clock period.
Specifically, the output signals of the control subunits corresponding to the computing units are obtained; as shown in fig. 2, each computing unit corresponds to one control subunit. When the output signal of a control subunit is valid, the current pipeline mode is normal and the intermediate data of the computing unit is output directly. If the output signal of the control subunit is invalid, the current pipeline mode is abnormal and the data processing pipeline is congested, so the data must be registered and output after the register delays a preset clock period. Considering that the accumulation unit also corresponds to one control subunit, the number of control subunits is one more than the number of computing units, i.e. N+1.
The generation of the output signal of the control subunit may be implemented in software or by a hardware logic circuit; the specific logic circuit is not limited and may be one logic circuit or a combination of several logic circuits.
In some embodiments, the output signal of the control subunit is generated by a combinational logic circuit comprising a first NOT-OR gate, a second NOT-OR gate, an AND gate and a register;
acquiring the input valid signal for receiving the first initial network message data, the output valid signal, the input back-end receive-ready signal and the output back-end receive-ready signal;
the port of the input valid signal serves as the OR input of the first NOT-OR gate and as the first input of the AND gate;
the port of the output valid signal serves as the NOT input of the second NOT-OR gate and as the output of the register;
the port of the output back-end receive-ready signal serves as the OR input of the second NOT-OR gate; the output of the second NOT-OR gate serves as the NOT input of the first NOT-OR gate and as the second input of the AND gate, and outputs the input back-end receive-ready signal;
the output of the first NOT-OR gate serves as the input of the register;
and the output of the AND gate outputs the output signal of the control subunit.
The control subunit evaluates the combinational logic of these four signals, the input valid signal, the output valid signal, the input back-end receive-ready signal and the output back-end receive-ready signal, to determine whether its output level signal is valid.
The input valid signal of the first control subunit corresponding to the first computing unit is obtained through the data selection module, and the receive-ready signal is generated by the control subunit itself so as to be conveniently passed back to the data selection module.
Fig. 4 is a block diagram of the combinational logic circuit of a control subunit according to an embodiment of the present invention. As shown in fig. 4, it comprises a first NOT-OR gate 1, a second NOT-OR gate 3, an AND gate 4 and a register 2. The port of the input valid signal (in_valid) serves as the OR input of the first NOT-OR gate 1 and as the first input of the AND gate 4; the port of the output valid signal (out_valid) serves as the NOT input of the second NOT-OR gate 3 and as the output of the register 2; the port of the output back-end receive-ready signal (out_ready) serves as the OR input of the second NOT-OR gate 3; the output of the second NOT-OR gate 3 serves as the NOT input of the first NOT-OR gate 1 and as the second input of the AND gate 4, and outputs the input back-end receive-ready signal (in_ready); the output of the first NOT-OR gate 1 serves as the input of the register 2; and the output of the AND gate 4 outputs the output signal of the control subunit.
The specific formulas of the combinational logic circuit are as follows:
in_ready = (!out_valid) || out_ready; here the combinational logic is evaluated and the intermediate data can be output directly;
out_valid <= (!in_ready) || in_valid; here the register outputs one beat after the combinational logic is evaluated;
output signal of the control subunit = in_ready && in_valid; here denoting AND logic.
The output-signal generation under the combinational logic circuit of the control subunit provided by this embodiment ensures that the processing of network message data in the pipeline computing mode does not jam and stays in order.
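The three formulas can be exercised with a small cycle model. This is an illustrative sketch of one control subunit, assuming the reconstructed register equation `out_valid <= (!in_ready) || in_valid` implied by the gate connections:

```python
class PipelineStageCtrl:
    """Sketch of one control subunit: a registered out_valid plus the
    combinational in_ready = !out_valid || out_ready."""

    def __init__(self):
        self.out_valid = 0

    def in_ready(self, out_ready):
        # second NOT-OR gate: combinational, no clock involved
        return (not self.out_valid) or out_ready

    def clock(self, in_valid, out_ready):
        """One rising edge; returns (accept, in_ready), where
        accept = in_ready && in_valid is the AND-gate output."""
        rdy = self.in_ready(out_ready)
        accept = rdy and in_valid
        # first NOT-OR gate feeding the register:
        # out_valid <= (!in_ready) || in_valid
        self.out_valid = int((not rdy) or in_valid)
        return int(accept), int(rdy)
```

Driving it shows the intended behavior: a beat is accepted while the stage is empty, a stalled downstream (out_ready low) deasserts in_ready on the next beat so the pipeline holds its data, and raising out_ready lets the beat drain and a new one enter.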
In some embodiments, performing accumulation processing on each check data to obtain final check data corresponding to the first network packet data, including:
Acquiring a detection packet start mark corresponding to the first network message data;
if the detection packet start mark is 1, storing the check data corresponding to the plurality of second network message data until the frame tail mark bit of the first network message data is 1, and ending the storage;
And accumulating the stored check data to obtain final check data.
Specifically, as shown in fig. 2, the computing units before the accumulation unit have calculated the check data corresponding to each input data bit width; the check data from the parallel checksum processing, taken per input data bit width, must then be accumulated, i.e. the check data of the plurality of second network message data are accumulated to obtain the final check data of the first network message data. When accumulation is needed, the check data of the second network message data are stored from when the sop flag bit is 1, i.e. each calculation result is stored, until the frame tail flag bit is 1, at which point storage ends. The stored check data are accumulated to obtain the final check data, which is output to the FIFO module for caching.
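The accumulation can be sketched with the same end-around-carry add used by the computing subunits; the function below is illustrative, operating on the per-beat check values already stored between the sop flag and the frame-tail flag:

```python
def accumulate_checks(check_values):
    """Fold the per-beat check data (one 16-bit value per input-width group
    of the first network message) into the final check data, reusing the
    end-around-carry add so the result stays 16 bits wide."""
    total = 0
    for v in check_values:
        c = total + v
        total = (c & 0xFFFF) + (c >> 16)
    return total
```

Because the end-around-carry addition is associative, accumulating the per-beat results in arrival order yields the same final check data as summing all words of the message at once.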
In this embodiment, accumulation processing is performed to ensure the integrity and accuracy of the first network packet data.
In some embodiments, as shown in fig. 2, after the FIFO module caches the final check data, that is, after accumulating each check data to obtain the final check data corresponding to the first network packet data, the method further includes:
and synchronously processing the first initial network message data and the final check data of one path to send to an upper computer.
That is, in this embodiment the first initial network message data of one path and the final check data are synchronized, corresponding to the final output operation of the receiving end, so as to be sent to the upper computer. The synchronization processing can carry a binding relationship so that the data can be conveniently transmitted to the upper computer, which carries out the subsequent data processing work.
Further, the present invention also provides a method for processing network messages applied to a sender field programmable gate array module. Fig. 5 is a flowchart of a method for processing network messages applied to a sender field programmable gate array module; as shown in fig. 5, the method includes:
S21: acquiring third network message data, input data bit width and message command information sent by an upper computer;
S22: synchronizing the third network message data with the message command information to obtain synchronized third network message data;
S23: performing bit width grouping processing on the synchronized third network message data according to the bit width of the input data and the bit width of the checksum to obtain the number of computing units and grouping data;
The grouping data is the number of groups in which the network message data in the computing unit are located;
S24: performing parallel checksum processing on the grouped third network message data according to the calculation unit and the grouping data to obtain each piece of verification data corresponding to the third network message data;
the parallel checksum processing mode is a processing mode of performing checksum processing on network message data in each computing unit by utilizing the parallel processing characteristic of the field programmable gate array, and obtaining the verification data in a pipeline processing mode among the computing units;
S25: accumulating the verification data to obtain final verification data corresponding to the third network message data; and inserting the final check data into the replacement processing to obtain replaced check data.
Specifically, the third network message data is obtained; it is mainly the original network message data sent by the upper computer. The input data bit width was introduced in the above embodiments; please refer to the above embodiments of the processing method for network messages applied to the receiving-end field programmable gate array module, which are not repeated here. The message command information sent by the upper computer refers to command information on how to process the network message data, such as an enable signal, a start signal, an end signal and an instruction valid signal. Fig. 6 is a block diagram of a sender field programmable gate array module according to an embodiment of the present invention; as shown in fig. 6, the information corresponding to the lower-left pins of the synchronization module is the message command information.
In connection with fig. 6, the interfaces on the left and right sides of the sender interact with other modules. The pins of the five interfaces at the upper left are, from top to bottom, data (data), byte modifier (keep), register last-beat signal (last), valid signal (valid) and back-end receive-ready signal (ready); since they are transmitted to the sender from other modules, the pin names correspond, from top to bottom, to s_tdata, s_tkeep, s_tlast, s_tvalid and s_tready. The signals from the synchronization module to the forking module (Fork) inside the sender are, from top to bottom, data, byte modifier, register last-beat signal, enable signal flag bit, start checksum flag bit, checksum position signal, valid signal and back-end receive-ready signal; no transmission direction needs to be distinguished, so the corresponding pin names are data, keep, last, enable, start, offset, valid and ready. The forking module sends one path of data to the first-in first-out module and the other path to the parallel checksum processing module. The signals from the first-in first-out module to the insertion-replacement module are, from top to bottom, data, byte modifier, register last-beat signal, enable signal flag bit, start checksum flag bit, checksum position signal, valid signal and back-end receive-ready signal, with corresponding pin names data, keep, last, enable, start, offset, valid and ready. The signals from the parallel checksum processing module to the insertion-replacement module are, from top to bottom, the check data, the valid signal and the back-end receive-ready signal, with corresponding pin names csum_value, valid and ready.
For the data output by the insertion-replacement module, the corresponding interface signals from top to bottom are: data (data), byte modifier (keep), register last-beat signal (last), valid signal (valid) and back-end receive-ready signal (ready); since they are transmitted from the sender, the pin names correspond, from top to bottom, to m_tdata, m_tkeep, m_tlast, m_tvalid and m_tready, so as to distinguish the transmitting and receiving directions.
In addition, the message command information is input at the lower-left corner of the sender and comprises, from top to bottom, the enable signal flag bit, the start checksum flag bit, the checksum position signal, the valid signal and the back-end receive-ready signal; to distinguish them from the other interface signals, the corresponding pin names are cmd_enable, cmd_start, cmd_offset, cmd_valid and cmd_ready. The synchronization module synchronizes the third network message data with the message command information to facilitate the subsequent parallel checksum processing of the data. After the third network message data is synchronized, a forking module (Fork) is needed to split it into two identical paths of data, one sent to the FIFO (first-in first-out) module and the other to the parallel checksum processing module. The splitting process of the Fork module, the FIFO module and the parallel checksum processing module are the same as in the embodiments of the processing method for network messages applied to the receiving-end field programmable gate array module, and are not repeated here.
Fig. 7 is a schematic diagram of the parallel checksum processing module of another sender field programmable gate array module according to an embodiment of the present invention. As shown in fig. 7, the checksum processing module obtains the check data through the signal generation module, the data selection module and the computing units, while the control subunits perform pipeline control of the computing units; the whole process, from the check data to the accumulation processing of the accumulation unit, is the same as in the embodiments of the processing method for network messages applied to the receiving-end field programmable gate array module and is not repeated here. In addition, the input signals of the parallel checksum processing module also include an enable signal flag bit (enable), and the input data bit width is 512 bits.
After the final check data is obtained, this embodiment adopts a synchronization mechanism for the insertion-replacement processing: the insertion-replacement module is responsible for replacing the checksum in the message with the calculated final check data according to the valid signal of the message command information, producing network message data with the replaced check value, which is then sent through optical fibers to other terminal equipment, a switch or the like.
The processing method of the network message applied to the field programmable gate array module of the transmitting end provided by the embodiment of the invention carries out synchronous processing on the third network message data and the message command information to obtain the synchronized third network message data; performing bit width grouping processing on the synchronized third network message data according to the bit width of the input data and the bit width of the checksum to obtain the number of computing units and grouping data; performing parallel checksum processing on the grouped third network message data according to the calculation unit and the grouping data to obtain each piece of verification data corresponding to the third network message data; accumulating the verification data to obtain final verification data corresponding to the third network message data; and inserting the final check data into the replacement processing to obtain replaced check data. And the large bit width data larger than the checksum bit width or the small bit width data smaller than the checksum bit width under the first network message data can be subjected to bit width grouping processing based on the checksum bit width so as to improve the bit width processing capacity, meet the network card requirements of hundreds of G and improve the flexibility of bit width processing. After grouping, the first network message data after grouping is subjected to parallel check processing based on the calculation unit and the grouping data to obtain check data, wherein the parallel processing characteristic of the FPGA is utilized, and a parallel processing check sum processing mode is adopted, so that the data processing efficiency is improved compared with the serial processing of a CPU. 
In addition, the checksum processing adopts a pipeline processing mode, which further improves data processing efficiency compared with the accumulation processing of the traditional serial checksum mode. Performing the processing on the FPGA saves CPU resources and improves network bandwidth. Moreover, the transmitting end adopts a synchronous-asynchronous-synchronous mechanism to realize its sending framework and data processing framework.
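The bit width grouping, parallel checksum, and carry-fold accumulation described above can be sketched in software as follows. This is a minimal Python model of a standard 16-bit one's-complement (Internet) checksum as in RFC 1071, not the patent's RTL: on the FPGA the per-group additions run in parallel compute units and the accumulation is pipelined, whereas this model is necessarily serial. All names are illustrative.

```python
def parallel_checksum(data: bytes) -> int:
    """One's-complement (Internet) checksum over 16-bit groups.

    Models the described flow: the input is split into checksum-width
    (16-bit) groups, the groups are summed (in hardware, by parallel
    compute units), carries are folded back, and the result is
    complemented.
    """
    if len(data) % 2:
        data += b"\x00"                       # pad odd-length input
    # Bit width grouping: one 16-bit group per compute lane.
    groups = [int.from_bytes(data[i:i + 2], "big")
              for i in range(0, len(data), 2)]
    total = sum(groups)                       # accumulation stage
    while total >> 16:                        # fold carries into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return (~total) & 0xFFFF                  # one's complement

# Example: checksum over the first 8 bytes of an IPv4-style header
print(hex(parallel_checksum(b"\x45\x00\x00\x3c\x1c\x46\x40\x00")))  # → 0x5e7d
```

Because one's-complement addition is associative, the per-group sums can be computed in any order and merged in an adder tree, which is what makes the FPGA parallel/pipelined arrangement equivalent to the serial loop above.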
In some embodiments, synchronizing the third network packet data with the packet command information to obtain synchronized third network packet data includes:
Acquiring an enable signal flag bit, an initial checksum flag bit, and checksum position information from the message command information;
And synchronizing the enable signal flag bit, the initial checksum flag bit, the checksum position information, and the third network message data, converting them into sideband information of the bus protocol, so as to obtain the synchronized third network message data.
As shown in fig. 6, the enable signal flag bit (enable), the initial checksum flag bit (start), and the checksum position information (offset) are synchronized with the third network message data and converted into sideband information of the bus protocol, so as to obtain the synchronized third network message data. enable indicates whether the checksum in the original message is to be replaced by the calculated checksum data (transmitted using the csum interfaces in the figure). start indicates the position, counted from the frame header, at which checksum calculation begins. offset indicates the position of the original message checksum. The command is transmitted over a standard handshake protocol, whereas the network data packet uses the AXI-Stream (axis) protocol; one packet of data corresponds to one command, and the two need to be synchronized. The result of the synchronization is that the command information is translated into sideband information of the axis protocol.
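The pairing of one command with one packet can be modeled as follows (an illustrative Python sketch, not the patent's implementation; the tuple attached to each packet plays the role of the axis-protocol sideband information, analogous to an AXI-Stream TUSER field):

```python
from dataclasses import dataclass

@dataclass
class Command:
    enable: int   # 1 = replace the checksum in the outgoing frame
    start: int    # byte offset where checksum coverage begins
    offset: int   # byte offset of the checksum field itself

def synchronize(packets, commands):
    """Pair each packet with its command, yielding the packet plus the
    command fields carried as sideband information alongside the data."""
    for pkt, cmd in zip(packets, commands):
        yield pkt, (cmd.enable, cmd.start, cmd.offset)

# One packet, one command: the command rides along as sideband info
synced = list(synchronize([b"frame0"], [Command(enable=1, start=14, offset=24)]))
print(synced[0][1])  # → (1, 14, 24)
```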
The synchronization processing provided by the embodiment of the invention facilitates the subsequent parallel checksum processing.
In some embodiments, performing insertion and replacement processing on the final verification data to obtain the replaced verification data includes:
And if the enable signal flag bit is valid, replacing initial check data corresponding to the third network message data with final check data according to the checksum position information.
It can be understood that the replacement is performed according to enable and offset: if enable is 1, the initial checksum in the original message is replaced by the final check data; if enable is 0, no replacement is performed. The replacement position is determined by the offset value. The original message data in this module is obtained from the FIFO module, and each packet of the network data message corresponds to one calculated checksum value.
The insertion and replacement processing provided in this embodiment is used to implement synchronization of the check data and the third network packet data.
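The insertion and replacement step can be sketched as follows (an illustrative Python sketch, assuming a 16-bit checksum field located at byte position `offset`; the function name is hypothetical, not from the patent):

```python
def insert_checksum(frame: bytes, csum: int, enable: int, offset: int) -> bytes:
    """Replace the 16-bit checksum field at `offset` with the newly
    computed value when `enable` is 1; otherwise pass the frame through
    unchanged."""
    if not enable:
        return frame
    return frame[:offset] + csum.to_bytes(2, "big") + frame[offset + 2:]

# With enable=1, the two bytes at offset 2..3 are overwritten
print(insert_checksum(b"\x00\x01\xaa\xbb\x04", 0x1234, 1, 2).hex())  # → 0001123404
```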
Further, the present invention also provides a system for processing network messages, and fig. 8 is a block diagram of a system for processing network messages provided in an embodiment of the present invention, where, as shown in fig. 8, the processing system includes a first terminal device 5, a switch 6, and a second terminal device 7;
the first terminal device 5 is configured to control the sending end to process the fourth network packet data to obtain the corresponding check data, where the fourth network packet data is processed by the above-described processing method of the network message applied to the transmitting-end field programmable gate array module;
a switch 6, configured to receive the fourth network packet data and the corresponding check data for transmission to the second terminal device 7;
The second terminal device 7 is configured to control the receiving end to receive the fourth network packet data, and to process the fourth network packet data to obtain corresponding new check data to be sent to the upper computer; in the second terminal device 7, the fourth network packet data received from the switch 6 is processed by the above-described processing method of the network message applied to the receiving-end field programmable gate array module.
It can be understood that the application scenario in this embodiment is not limited to packet data transmission between two terminal devices and a switch; it may also be an end-to-end data transmission process, in which case the processing bit width capability of the transmission endpoints needs to be kept the same. In addition, regarding the transmitting-end field programmable gate array module of the first terminal device and the receiving-end field programmable gate array module of the second terminal device, this does not mean that the first terminal device includes only a transmitting-end module or that the second terminal device includes only a receiving-end module. Here, only one direction of data transmission is described, from the transmitting end of the first terminal device through the switch to the receiving end of the second terminal device; both a transmitting-end field programmable gate array module and a receiving-end field programmable gate array module may exist in the first terminal device or the second terminal device.
For the description of the processing system of the network message provided by the present invention, please refer to the above method embodiments; it is not repeated herein, and the system has the same beneficial effects as the above method for processing the network message.
Fig. 9 is a schematic diagram of the ports of the receiving-end field programmable gate array module according to an embodiment of the present invention. As shown in fig. 9, the data interface uses the standard AXI-Stream protocol for transmission, and the command interface uses the standard valid-ready handshake protocol for transmission. For the receiving end, the received network message data is the input, and the original data and the check value are the outputs; these two parts are subsequently sent to an upper computer with a CPU for processing. The interfaces on the left and right sides have the same meaning as those in fig. 3; refer to the embodiment of fig. 3, which is not repeated herein. Fig. 10 is a schematic diagram of the ports of the transmitting-end field programmable gate array module. As shown in fig. 10, for the transmitting end, the original network message data and the calculation-related commands (both from the upper computer with a CPU) are the inputs, and the network message data with the calculated replacement check value is the output; the calculated data is transmitted through an optical fiber. The interfaces on the left and right sides have the same meaning as those in fig. 6; refer to the embodiment of fig. 6, which is not repeated herein.
The invention further discloses a processing device of the network message applied to the receiving end field programmable gate array module, which corresponds to the method, and fig. 11 is a structural diagram of the processing device of the network message applied to the receiving end field programmable gate array module. As shown in fig. 11, the apparatus includes:
A first receiving module 11, configured to receive first network packet data and an input data bit width;
A first processing module 12, configured to perform bit width grouping processing on the first network packet data according to the input data bit width and the checksum bit width to obtain the number of computing units and grouping data; the grouping data is the number of groups in which the network message data in the computing unit are located;
The second processing module 13 is configured to perform parallel checksum processing on the grouped first network packet data according to the computing unit and the grouped data to obtain each piece of verification data corresponding to the first network packet data; the parallel checksum processing mode is a processing mode of performing checksum processing on network message data in each computing unit by utilizing the parallel processing characteristic of the field programmable gate array, and obtaining the verification data in a pipeline processing mode among the computing units;
and the third processing module 14 is configured to perform accumulation processing on each check data to obtain final check data corresponding to the first network packet data.
Since the embodiments of the device portion correspond to the above embodiments, the embodiments of the device portion are described with reference to the embodiments of the method portion, and are not described herein.
For the description of the processing device for the network message applied to the receiving-end field programmable gate array module provided by the invention, please refer to the above method embodiments; it is not repeated herein, and the device has the same beneficial effects as the above processing method of the network message applied to the receiving-end field programmable gate array module.
Corresponding to the above method, the invention further discloses a processing device for the network message applied to the transmitting-end field programmable gate array module, and fig. 12 is a structural diagram of this device. As shown in fig. 12, the apparatus includes:
the acquiring module 15 is configured to acquire third network message data, an input data bit width, and message command information sent by the upper computer;
the fourth processing module 16 is configured to synchronize the third network packet data with the packet command information to obtain synchronized third network packet data;
a fifth processing module 17, configured to perform bit width grouping processing on the synchronized third network packet data according to the input data bit width and the checksum bit width to obtain the number of computing units and grouping data; the grouping data is the number of groups in which the network message data in the computing unit are located;
A sixth processing module 18, configured to perform parallel checksum processing on the grouped third network packet data according to the computing unit and the grouping data to obtain each piece of verification data corresponding to the third network packet data; the parallel checksum processing mode is a processing mode of performing checksum processing on network message data in each computing unit by utilizing the parallel processing characteristic of the field programmable gate array, and obtaining the verification data in a pipeline processing mode among the computing units;
A seventh processing module 19, configured to perform accumulation processing on each piece of check data to obtain the final check data corresponding to the third network packet data, and to perform insertion and replacement processing on the final check data to obtain the replaced check data.
Since the embodiments of the device portion correspond to the above embodiments, the embodiments of the device portion are described with reference to the embodiments of the method portion, and are not described herein.
For the description of the processing device for the network message applied to the transmitting-end field programmable gate array module provided by the invention, please refer to the above method embodiments; it is not repeated herein, and the device has the same beneficial effects as the above processing method of the network message applied to the transmitting-end field programmable gate array module.
Fig. 13 is a block diagram of a processing device for a network packet according to an embodiment of the present invention, where, as shown in fig. 13, the processing device for a network packet includes:
A memory 21 for storing a computer program;
The processor 22 is configured to implement the above-mentioned processing method of the network message applied to the receiving end field programmable gate array module and the above-mentioned processing method of the network message applied to the transmitting end field programmable gate array module when executing the computer program.
The processing device for the network message provided in this embodiment may include, but is not limited to, a tablet computer, a notebook computer, a desktop computer, or the like.
Processor 22 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 22 may be implemented in hardware in at least one of a digital signal processor (Digital Signal Processor, DSP), an FPGA, and a programmable logic array (Programmable Logic Array, PLA). The processor 22 may also include a main processor and a coprocessor; the main processor, also called a CPU, is a processor for processing data in an awake state, while the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 22 may be integrated with a graphics processor (Graphics Processing Unit, GPU), which is responsible for rendering and drawing the content that the display screen is required to display. In some embodiments, the processor 22 may also include an artificial intelligence (Artificial Intelligence, AI) processor for processing computing operations related to machine learning.
Memory 21 may include one or more computer-readable storage media, which may be non-transitory. Memory 21 may also include high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In this embodiment, the memory 21 is at least used for storing a computer program 211, which, after being loaded and executed by the processor 22, can implement the relevant steps of the above-mentioned processing method for a network message applied to a receiving-end field programmable gate array module and the above-mentioned processing method for a network message applied to a transmitting-end field programmable gate array module disclosed in any of the foregoing embodiments. In addition, the resources stored in the memory 21 may further include an operating system 212, data 213, and the like, and the storage manner may be transient or permanent. The operating system 212 may include Windows, Unix, Linux, and the like. The data 213 may include, but is not limited to, data related to the above-mentioned processing methods.
In some embodiments, the processing device of the network message may further include a display screen 23, an input/output interface 24, a communication interface 25, a power supply 26, and a communication bus 27.
Those skilled in the art will appreciate that the structure shown in fig. 13 does not constitute a limitation of the processing device of the network message, which may include more or fewer components than those illustrated.
The processor 22 invokes the instructions stored in the memory 21 to implement the method for processing a network message applied to the receiving end field programmable gate array module and the method for processing a network message applied to the transmitting end field programmable gate array module provided in any of the above embodiments.
For the description of the network message processing device provided by the invention, please refer to the above method embodiments; it is not repeated herein, and the device has the same beneficial effects as the above method for processing a network message applied to the receiving-end field programmable gate array module and the method for processing a network message applied to the transmitting-end field programmable gate array module.
Further, the present invention also provides a computer readable storage medium, on which a computer program is stored, which when executed by the processor 22 implements the steps of the above-mentioned processing method for a network message applied to a receiving end field programmable gate array module and the processing method for a network message applied to a transmitting end field programmable gate array module.
It will be appreciated that the methods of the above embodiments, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored on a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium and used to perform all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disk, or various other media capable of storing program code.
For the introduction of the computer-readable storage medium provided by the present invention, please refer to the above method embodiments; it is not repeated herein, and the medium has the same advantages as the above method for processing a network message applied to the receiving-end field programmable gate array module and the method for processing a network message applied to the transmitting-end field programmable gate array module.
The method, system, device, equipment, and medium for processing a network message provided by the invention have been described in detail above. In this description, each embodiment is described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and identical or similar parts of the embodiments may be referred to each other. For the device disclosed in the embodiments, since it corresponds to the method disclosed in the embodiments, its description is relatively brief, and relevant points can be found in the description of the method section. It should be noted that it will be apparent to those skilled in the art that the present invention may be modified and practiced without departing from the spirit of the present invention.
It should also be noted that in this specification, relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises that element.
Claims (22)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410895786.1A CN118445088B (en) | 2024-07-05 | 2024-07-05 | Processing method, system, device, equipment and medium of network message |
Publications (2)
Publication Number | Publication Date |
---|---|
CN118445088A true CN118445088A (en) | 2024-08-06 |
CN118445088B CN118445088B (en) | 2024-10-29 |
Family
ID=92312790
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410895786.1A Active CN118445088B (en) | 2024-07-05 | 2024-07-05 | Processing method, system, device, equipment and medium of network message |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118445088B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103970692A (en) * | 2013-01-25 | 2014-08-06 | 北京旋极信息技术股份有限公司 | RapidIO serial data processing method |
CN113626405A (en) * | 2021-07-09 | 2021-11-09 | 济南浪潮数据技术有限公司 | HDFS network data transmission optimization method, system, terminal and storage medium |
CN114499757A (en) * | 2022-01-07 | 2022-05-13 | 锐捷网络股份有限公司 | Method and device for generating checksum and electronic equipment |
CN115052055A (en) * | 2022-08-17 | 2022-09-13 | 北京左江科技股份有限公司 | Network message checksum unloading method based on FPGA |
US20220416939A1 (en) * | 2020-03-09 | 2022-12-29 | Huawei Technologies Co., Ltd. | Data transmission method and communication apparatus |
CN116318529A (en) * | 2022-09-09 | 2023-06-23 | 新华三信息安全技术有限公司 | A message processing method and network security equipment |
CN116633968A (en) * | 2023-05-04 | 2023-08-22 | 郑州恒达智控科技股份有限公司 | An FPGA-based industrial control system and method |
CN117793038A (en) * | 2023-12-14 | 2024-03-29 | 北京百度网讯科技有限公司 | Message processing method, device, electronic equipment, computer-readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN118445088B (en) | 2024-10-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12222881B2 | Logical physical layer interface specification support for PCIe 6.0, CXL 3.0, and UPI 3.0 protocols | |
CN111327603B (en) | Data transmission method, device and system | |
CN112948295B (en) | A system and method for high-speed data packet transmission between FPGA and DDR based on AXI4 bus | |
CN118295956B (en) | Control method, bare chip and system in bare chip-to-bare chip transmission | |
CN108462620B (en) | A Gigabit SpaceWire Bus System | |
CN116961696A (en) | Dual-mode module communication method and device, electronic equipment and storage medium | |
EP4550158A1 (en) | Method for link transition in universal serial bus and system thereof | |
CN102394720A (en) | Information safety checking processor | |
CN117687889B (en) | Performance test device and method for memory expansion equipment | |
CN116685959A (en) | Logical physical layer interface specification supporting PCIE 6.0, CXL 3.0 and UPI 3.0 protocols | |
CN118445088A (en) | A method, system, device, equipment and medium for processing network messages | |
CN104009823B (en) | Dislocation detection and error correction circuit in a kind of SerDes technologies | |
CN111966623A (en) | Method for real-time full-duplex reliable communication between MCU and multiple FPGAs by using SPI | |
CN101136855B (en) | Asynchronous clock data transmission device and method | |
CN116155843A (en) | A PYNQ-based spiking neural network chip data communication method and system | |
CN101446887B (en) | Method, device and system for original language processing | |
CN112104537B (en) | Communication controller | |
EP4550701A1 (en) | Method for link transition in universal serial bus and system thereof | |
CN119847973B (en) | Data transmission method, data transmission device, system on chip and storage medium | |
CN118158300B (en) | HDLC protocol-based communication method and electronic equipment | |
CN115687197B (en) | Data receiving module, data receiving method, circuit, chip and related equipment | |
CN114726482B (en) | SPI data transmission method | |
JP2643089B2 (en) | Error detection and recovery system in parallel / serial bus | |
CN117093129A (en) | High-compatibility parallel ADC data acquisition and transmission system | |
Wang et al. | A PCIe-based Hardware Acceleration Architecture of the Communication Protocol Stack |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |