Changeset 970

Timestamp: Mar 22, 2011, 4:03:11 PM
Message: Background from Ken and Nigel
Location: docs/PACT2011
Files: 2 edited

\section{Background} \label{section:background}

This section provides a brief overview of XML and of traditional and parallel XML processing technology, and describes the key design and performance aspects of successive generations of the Parabix parallel XML processing technology.

In 1998, the W3C officially adopted XML as a standard platform-independent data interchange format. The defining characteristics of XML are that it can represent virtually any type of information through the use of self-describing markup tags and that it can easily store semi-structured data in a descriptive fashion. XML markup encodes a description of an XML document's storage layout and logical structure. Because XML is intended to be ubiquitous, XML markup tags are often verbose by design \cite{TR:XML}; this is especially evident in markup-dense SOAP and WSDL documents. For example, a typical XML file could be:

\begin{figure}[h]
{
\scriptsize
\begin{verbatim}
<widget>
  <name>Bitoniau</name>
  <manufacturer>ABC</manufacturer>
  <price>$19.95</price>
  ...
</widget>
\end{verbatim}
}
\caption{Simple XML Document}\label{fig:sample_xml}
\end{figure}

XML files tend to be classified as either ``documents'' or ``data''.
XML-Documents are usually designed to be human-readable, such as Figure \ref{fig:sample_xml}; XML-Data files are intended to be parsed by machines and omit any ``human-friendly'' formatting, such as the use of whitespace and descriptive ``natural language'' naming schemes. Although the XML specification does not distinguish between XML for ``documents'' and XML for ``data'' \cite{TR:XML}, the latter in particular requires an XML parser in order to make use of the information within. The role of an XML parser is to transform the text-based XML data into an application-ready format.
%For example, an XML parser for a web browser may take an XML file, apply a style sheet to it, and display it to the end user in an attractive yet informative way; an XML database parser may take an XML file and construct indexes and/or compress the tree into a proprietary format to provide the end user with efficient relational, hierarchical, and/or object-based query access to it.

\subsection{Traditional XML Parsers}

Traditional XML parsers are sequential byte-at-a-time parsers. Using this approach, an XML parser processes a source document by serially scanning through it in a top-down manner. Each character of text is read to distinguish between XML-specific markup, such as an opening angle bracket `<', and the data held within the document. As the parser moves through the source document, it alternates between markup scanning and data validation operations.
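As a concrete sketch of this byte-at-a-time pattern, consider scanning for the next opening angle bracket. The loop below is a hypothetical scanner written for illustration (not taken from Expat or Xerces); it inspects one byte per iteration and pays at least one conditional branch per byte examined.

```cpp
#include <cstddef>

// Hypothetical byte-at-a-time scan: return the index of the next '<'
// at or after `pos`, or `len` if none is found. Each iteration examines
// a single byte and costs at least one conditional branch.
std::size_t scan_to_open_angle(const char* buf, std::size_t len, std::size_t pos) {
    while (pos < len && buf[pos] != '<') {
        ++pos;
    }
    return pos;
}
```

Each iteration of such a loop yields only one bit of information, match or no match, regardless of how wide the processor's registers are.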
At each processing step, the parser scans the source document for the expected markup and either locates it or reports an error and terminates. In other words, traditional XML parsers are complex finite-state machines that use per-character comparisons to transition between data and metadata states. Each state transition indicates the context in which to interpret the subsequent characters. Unfortunately, textual data tends to consist of variable-length items in generally unpredictable patterns \cite{Cameron2010}; thus any character could be a state transition until deemed otherwise. Two such parsers are Expat and Xerces-C. Both are C/C++ based open-source XML parsers. Expat was originally released in 1998; it is currently used in Mozilla Firefox and Open Office \cite{expat}. Xerces-C was released in 1999 and is the foundation of the Apache XML project \cite{xerces}.

The major disadvantage of sequential XML parsers is that every character requires at least one conditional branch, and each scanning step yields only one bit of information: whether or not the character in question matches. On commodity processors, SIMD registers are commonly 128 bits wide; with the advent of Intel's Sandy Bridge Advanced Vector Extensions (AVX) \cite{Firasta2008}, 256-bit registers are now available. Yet traditional XML parsers work with only 8 bits at a time, so there is often a dramatic mismatch between the available processor resources and the amount of information actually computed \cite{CameronHerdyLin2008}. Moreover, branch mispredictions have been shown to degrade performance in proportion to the markup density of the source document (i.e., the proportion of XML markup to XML data) \cite{CameronHerdyLin2008}.

\subsection{Parallel XML Parsing}

Parallel XML processing generally comes in one of two forms: multithreading and SIMD. Multithreaded XML parsers exploit parallelism by first quickly preparsing the XML file to locate the key markup entities and determine the best workload distribution with which to process the file across $n$ cores \cite{ZhangPanChiu09}. SIMD XML parsers leverage SIMD registers to overcome the performance limitations of the sequential paradigm and its inherently data-dependent branch misprediction rates \cite{Cameron2010}. However, textual data tends to consist of variable-length items in generally unpredictable patterns \cite{Cameron2010}. Our approach is based on parallel bit stream processing technology: byte-oriented character data is first transposed to eight parallel bit streams, one for each bit position within the character code units (bytes).
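The transposition step can be sketched in scalar code as follows; this is an illustrative reference version only, since the actual Parabix implementation performs the transposition with SIMD instructions, and the bit-numbering convention (stream 0 holding the most significant bit of each byte) is an assumption made for this sketch.

```cpp
#include <cstdint>
#include <cstddef>

// Transpose a block of up to 64 bytes into eight 64-bit basis bit streams.
// basis[k] holds, at bit position i, bit (7 - k) of byte i, so basis[0]
// is the stream of most significant bits.
void transpose(const unsigned char* bytes, std::size_t n, uint64_t basis[8]) {
    for (int k = 0; k < 8; ++k) basis[k] = 0;
    for (std::size_t i = 0; i < n && i < 64; ++i) {
        for (int k = 0; k < 8; ++k) {
            if (bytes[i] & (1u << (7 - k))) {
                basis[k] |= (uint64_t(1) << i);  // set bit i of stream k
            }
        }
    }
}
```

For example, transposing the single byte `<` (0x3C, binary 00111100) sets bit 0 of streams 2 through 5 and leaves streams 0, 1, 6, and 7 clear.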
Loading bit stream data into SIMD registers of width $W$ (e.g., 64-bit, 128-bit, 256-bit) then allows $W$ consecutive code units to be represented and processed at once. Bitwise logic and shift operations, bit scans, population counts and other bit-based operations are then used to carry out the work in parallel \cite{CameronLin2009}. Two generations of Parabix SIMD XML parsers, Parabix1 and Parabix2, are built on this technology.

\subsubsection{Parabix1}

Our first generation parallel bitstream XML parser, Parabix1, employs a less conventional application of SIMD technology, representing text as parallel bitstreams whose bits are in one-to-one correspondence with the bytes of the character stream. A transposition step first transforms sequential byte stream data into eight basis bitstreams, one for the bits of each byte.
Bitwise logical combinations of these basis bitstreams can then be used to classify bytes in various ways, while the bit scan operations common to commodity processors can be used for fast sequential scanning. At a high level, Parabix1 processes source XML in a functionally equivalent manner to a traditional parser: it moves sequentially through the source document, maintaining a single cursor scanning position throughout the parse.
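As an illustrative sketch of these two operations (not the actual Parabix1 code, and assuming the MSB-first basis-stream convention), the positions of opening angle brackets can be computed from the basis bitstreams with a handful of boolean operations, and a single bit scan then jumps directly to the next such position:

```cpp
#include <cstdint>

// Given eight basis bit streams (basis[0] = most significant bit of each
// byte), compute a marker stream with a 1 at each '<' (0x3C = 00111100).
uint64_t open_angle(const uint64_t basis[8]) {
    return ~basis[0] & ~basis[1] &  basis[2] &  basis[3]
         &  basis[4] &  basis[5] & ~basis[6] & ~basis[7];
}

// A single bit scan finds the next marker at or after `pos` (pos in [0, 63]);
// up to 64 byte positions are examined in one step. Uses the GCC/Clang
// count-trailing-zeros intrinsic.
int scan_to_first(uint64_t markers, int pos) {
    uint64_t m = markers & ~((uint64_t(1) << pos) - 1);  // clear bits below pos
    return m ? __builtin_ctzll(m) : 64;
}
```

For the input bytes "aaa<", the marker stream has a single 1 at bit 3, and one bit scan lands the cursor there, replacing a three-iteration character loop.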
However, the scanning operation itself is accelerated significantly, leading to dramatic performance improvements, since bit scan operations can perform up to a general register width (32-bit, 64-bit) of finite state transitions per clock cycle. This approach has recently been applied to Unicode transcoding and XML parsing to good effect, with research prototypes showing substantial speed-ups over even the best byte-at-a-time alternatives \cite{CameronHerdyLin2008, CameronLin2009, Cameron2010}.

\subsubsection{Parabix2}

In our second generation XML parser, Parabix2, we replace the sequential bit-scan-based parsing of Parabix1 with a parallel parsing method based on bitstream addition. Unlike the single-cursor approach of Parabix1 (and, conceptually, of the traditional sequential approach), Parabix2 processes multiple cursor positions in parallel. To deal with these parallel cursors, additional categories of bitstreams are introduced. Marker bitstreams represent positions of interest in the parsing of a source data stream \cite{Cameron2010}; the appearance of a 1 at a position in a marker bitstream could, for example, denote the starting position of an XML tag in the data stream. In general, the set of bit positions in a marker bitstream may be considered the current parsing positions of multiple parses taking place in parallel throughout the source data stream. A further aspect of the parallel method is that the conditional branch statements used to identify syntax errors at each parsing position are eliminated. Instead, error bitstreams identify the positions of parsing or well-formedness errors during the parsing process; error positions are gathered and processed in a final post-processing step. Hence, Parabix2 offers additional parallelism over Parabix1 in the form of multiple-cursor parsing, and also significantly reduces the branch misprediction penalty.
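The multi-cursor advance via bitstream addition can be illustrated as follows. This is a simplified 64-bit sketch of the technique, not the actual Parabix2 code: given a set of cursors and a stream marking the characters to scan through, a single addition carries every cursor past its run of marked positions simultaneously, with no per-cursor branching.

```cpp
#include <cstdint>

// ScanThru: advance every cursor in `cursors` past the contiguous run of
// 1 bits in `scan` that begins at that cursor's position. All cursors
// move in one addition -- no per-cursor loop and no conditional branch.
uint64_t scan_thru(uint64_t cursors, uint64_t scan) {
    return (cursors + scan) & ~scan;
}
```

For example, with runs of name characters at bit positions 1--3 and 8--10 (scan = 0x70E) and cursors at positions 1 and 8 (cursors = 0x102), both cursors land just past their runs, at positions 4 and 11 (result 0x810), in a single step.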
% Expat 2.0.1, Xerces-C++ 3.1.1 (SAX2), and Parabix2. All three parsers are C/C++ based, event-driven, stream-oriented XML parsers.
% Assume the reader doesn't know much about XML parsers.
% Need gory details on byte-at-a-time parsers. Pictures.
% Xerces: explain overall data flow and control flow for these parsers. Briefly highlight inefficiencies.
% Talk about the usage of SIMD instructions and how they might help. Lead on to briefly describe the key technology behind Parabix.