# Changeset 1292

Timestamp:
Aug 8, 2011, 2:23:58 PM
Message:

Minor edits.

Location:
docs/PACT2011
Files:
3 edited

\subsection{XML}

In 1998, the W3C officially adopted XML as a standard. The defining characteristics of XML are that it can represent virtually any type of information through the use of self-describing markup tags and can easily store semi-structured data in a descriptive fashion. XML markup encodes a description of an XML document's storage layout and logical structure. Because XML was intended to be human-readable, XML markup tags are often verbose by design \cite{TR:XML}.

\begin{figure}[h] \end{figure}

% can't represent in verbose and not really sure if the google auto-translater is correct
XML files can be classified as ``document-oriented'' or ``data-oriented'' \cite{DuCharme04}. Document-oriented XML is designed for human readability, as shown in Figure \ref{fig:sample_xml}; data-oriented XML files are intended to be parsed by machines and omit ``human-friendly'' formatting techniques, such as the use of whitespace and descriptive ``natural language'' naming schemes. Although the XML specification itself does not distinguish between ``XML for documents'' and ``XML for data'' \cite{TR:XML}, the latter often requires the use of an XML parser to extract the information within. The role of an XML parser is to transform the text-based XML data into application-ready data.

%For example, an XML parser for a web browser may take an XML file, apply a style sheet to it, and display it to the end user in an attractive yet informative way; an XML database parser may take an XML file and construct indexes and/or compress the tree into a proprietary format to provide the end user with efficient relational, hierarchical, and/or object-based query access to it.

\subsection{Traditional XML Parsers}

% However, textual data tends to consist of variable-length items in generally unpredictable patterns \cite{Cameron2010}.
Traditional XML parsers process XML sequentially, a single byte at a time. Following this approach, an XML parser processes a source document serially, from the first to the last byte of the source file. Each character of the source text is examined in turn to distinguish between XML-specific markup, such as an opening angle bracket `<', and the content held within the document. The character that the parser is currently processing is commonly referred to using the concept of a current cursor position.
As the parser moves the cursor through the source document, the parser alternates between markup scanning and data validation and processing operations. At each processing step, the parser scans the source document and either locates the expected markup or reports an error condition and terminates. In other words, traditional XML parsers operate as complex finite-state machines that use byte comparisons to transition between data and metadata states. Each state transition indicates the context in which to interpret the subsequent characters.
Unfortunately, textual data tends to consist of variable-length items sequenced in generally unpredictable patterns \cite{Cameron2010}; thus any character could be a state transition until deemed otherwise.

Expat and Xerces-C are popular byte-at-a-time sequential parsers. Both are C/C++ based and open source. Expat was originally released in 1998; it is currently used in Mozilla Firefox and provides the core functionality of many additional XML processing tools \cite{expat}. Xerces-C was released in 1999 and is the foundation of the Apache XML project \cite{xerces}.

\subsection{Parallel XML Parsing}

In general, parallel XML acceleration methods come in one of two forms: multithreaded approaches and SIMD-based techniques. Multithreaded XML parsers take advantage of multiple cores via a number of strategies. Common strategies include preparsing the XML file to locate key partitioning points \cite{ZhangPanChiu09} and speculative p-DFAs \cite{ZhangPanChiu09}. SIMD XML parsers leverage the SIMD registers to overcome the performance limitations of the sequential byte-at-a-time processing model and its inherently data-dependent branch misprediction rates. Further, data-parallel SIMD instructions allow the processor to perform the same operation on multiple pieces of data simultaneously. The Parabix1 and Parabix2 parsers studied in this paper fall under the SIMD classification. The Parabix parser versions studied are described in further detail in Section \ref{section:parabix}.

%\subsection{SIMD Operations}