2%%%%%%%%%%%%%%%%%%%%%%% file typeinst.tex %%%%%%%%%%%%%%%%%%%%%%%%%
3%
4% This is the LaTeX source for the instructions to authors using
5% the LaTeX document class 'llncs.cls' for contributions to
6% the Lecture Notes in Computer Sciences series.
7% http://www.springer.com/lncs       Springer Heidelberg 2006/05/04
8%
9% It may be used as a template for your own input - copy it
10% to a new file with a new name and use it as the basis
11% for your article.
12%
13% NB: the document class 'llncs' has its own and detailed documentation, see
14% ftp://ftp.springer.de/data/pubftp/pub/tex/latex/llncs/latex2e/llncsdoc.pdf
15%
16%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
17
18
19\documentclass[runningheads,a4paper]{llncs}
20\usepackage{multirow}
21\usepackage{amssymb}
22\setcounter{tocdepth}{3}
23\usepackage{graphicx}
24
25\usepackage{url}
26\urldef{\mailsa}\path|{alfred.hofmann, ursula.barth, ingrid.haas, frank.holzwarth,|
27\urldef{\mailsb}\path|anna.kramer, leonie.kunz, christine.reiss, nicole.sator,|
28\urldef{\mailsc}\path|erika.siebert-cole, peter.strasser, lncs}@springer.com|   
29\newcommand{\keywords}[1]{\par\addvspace\baselineskip
30\noindent\keywordname\enspace\ignorespaces#1}
31
32\begin{document}
33
34\mainmatter  % start of an individual contribution
35
36
37
38
39% first the title is needed
40\title{Parallel Scanning with Bitstream Addition: An XML Case Study}
41
42% a short form should be given in case it is too long for the running head
43\titlerunning{Parallel Scanning with Bitstream Addition}
44
45% the name(s) of the author(s) follow(s) next
46%
47% NB: Chinese authors should write their first names(s) in front of
48% their surnames. This ensures that the names appear correctly in
49% the running heads and the author index.
50%
51\author{Robert D. Cameron
52\thanks{}
53\and Ehsan Amiri \and Kenneth S. Herdy \and Dan Lin \and Thomas C. Shermer \and Fred P. Popowich}
54
55%
\authorrunning{Cameron {\em et al.}}
57
58% the affiliations are given next; don't give your e-mail address
59% unless you accept that it will be published
60\institute{Simon Fraser University, Surrey, BC, Canada\\
61\email{\{cameron, eamiri, ksherdy, lindanl, shermer, popowich\}@cs.sfu.ca}
62}
63
64\maketitle
65
66
67\begin{abstract}
68A parallel scanning method using the concept of bitstream addition is
69introduced and studied in application to the problem of XML
70parsing and well-formedness checking.   
71% The method parallelizes
72% finite-state transitions, using carry propagation to achieve up
73% to $W$ transitions with each $W$-bit binary addition operation.
74On processors supporting $W$-bit addition operations,
75the method can perform up to $W$ finite state transitions per instruction.
76The method is based on the concept of parallel bitstream technology,
77in which parallel streams of bits are formed such that each stream
78comprises bits in one-to-one correspondence with the character
79code units of a source data stream.    Parsing routines are initially
80prototyped in Python using its native support for unbounded
81integers to represent arbitrary-length  bitstreams.  A compiler
82then translates the Python code into low-level C-based implementations.
83These low-level implementations take advantage of
84the SIMD (single-instruction multiple-data) capabilities of commodity
85processors to yield a dramatic speed-up over
86traditional alternatives employing byte-at-a-time parsing.
87\keywords{SIMD text processing, parallel bitstream technology, XML, parsing}
88\end{abstract}
89
90
91\section{Introduction}
92
93
94
95Traditional byte-at-a-time parsing technology is increasingly
96mismatched to the capabilities of modern processors.   Current
97commodity processors generally possess 64-bit general purpose registers
98as well as 128-bit SIMD registers, with 256-bit registers now
99appearing.   General purpose processing on graphics processors
100can make available 512-bit or wider registers.   Parsing models
101based on the traditional loading and processing of 8 bits at a time
102would seem to be greatly underutilizing processor resources.
103
104Unfortunately, parsing is hard to parallelize.   Indeed, in their seminal
105report outlining the landscape of parallel computing research,
106researchers from Berkeley identified the finite state machine
107methods underlying parsing and lexical processing as the hardest
108of the "13 dwarves" to parallelize, concluding at one point that
109"nothing helps." \cite{Asanovic:EECS-2006-183}   SIMD methods, in particular, would seem to
110be ill-suited to parsing, because textual data streams are seldom organized in
111convenient 16-byte blocks, tending to consist instead of
112variable-length items in generally unpredictable patterns.
Nevertheless, there has been some notable work, such as that of
Scarpazza in applying the multicore and SIMD capabilities of the
Cell/BE processor to regular expression matching \cite{Scarpazza:2009}.
Intel has also signalled the importance of accelerated string
processing to its customers through the introduction of new string processing
instructions in the SSE 4.2 instruction set extension, demonstrating
how those features may be used to advantage in activities such as
XML parsing \cite{XMLSSE42}.
121
122Our research has been exploring a promising alternative approach, however, based on
123the concept of {\em parallel bit streams} \cite{Cameron2009,PPoPP08,CameronHerdyLin2008}.   
124In this approach, byte streams
125are first sliced into eight basis bit streams, one for each
126bit position within the byte.  Bit stream $i$ thus comprises
127the $i$th bit of each byte.   Using 128-bit SIMD registers, then,
128bitwise logic operations on these basis bit streams allows
129byte classification operations to be carried out in parallel
130for 128 bytes at a time.  For example, consider a character class
bit stream \verb:[<]: using 1 bits to mark the positions of
opening angle brackets in a byte stream.  This stream may be
computed as a logical combination of the basis bit streams using
only seven bitwise logical operations per 128 bytes.
135
136Based on this approach, our prior work has shown how parallel
137bit streams may be used to accelerate XML parsing by further
138taking advantage of processor {\em bit scan} instructions, commonly
found within commodity processors \cite{CameronHerdyLin2008}.
140On current Intel or AMD processors, for example, these instructions
141allow one to determine the position of the first 1 bit in a group of 64
142in a single instruction.   Using these techniques, our Parabix 1
parser demonstrated considerable acceleration of XML
144parsing in statistics gathering \cite{CameronHerdyLin2008} as well
145as GML to SVG conversion \cite{Herdy2008}.
146
147In this paper, we further increase the parallelism in our methods
148by introducing a new parallel scanning primitive using bitstream
149addition.   In essence, multiple 1 bits in a marker stream
150identify current scanning positions for multiple instances
151of a particular syntactic context within a byte stream.
152These multiple marker positions may each be independently
153advanced in parallel using addition and masking. 
154The net result is a new scanning primitive that
155allows multiple instances
156of syntactic elements to be parsed simultaneously.   For example,
157in dense XML markup, one might find several instances of particular
158types of markup tags within a given 64-byte block of text; parallel
159addition on 64-bit words allows all such instances to be processed at once.
160
161
162Other efforts to accelerate XML parsing include the use of custom
163XML chips \cite{Leventhal2009}, FPGAs \cite{DaiNiZhu2010}, and
164multithread/multicore speedups based on fast preparsing \cite{ZhangPanChiu09}.
165
166The remainder of this paper is organized as follows.
167Section 2 reviews the basics of parallel bitstream technology
168and introduces our new parallel scanning primitive.
169Section 3 illustrates how this primitive may be used
170in the lexical processing of XML references including the
171parallel identification of errors.   Section 4 goes on
172to consider the more complex task of XML tag processing
173that goes beyond mere tokenization.
174Building on these methods, Section 5 describes how to
175construct a
176complete solution to the problem of XML parsing and
177well-formedness checking, in order
178to gauge the applicability and power of the techniques.
179Section \ref{sec:compile} then considers
180the translation of high-level operations on unbounded bitstreams into
181equivalent low-level code using SIMD intrinsics in the C programming
182language.
Performance results are presented in Section 7, comparing
our generated implementations with
a number of existing alternatives.
186The paper concludes with
187comments on the current status of the work and directions for
188further research.
189
190\section{The Parallel Bitstream Method}\label{sec:parabit}
191
192\begin{figure}[b]
193\begin{center}
194\begin{tabular}{cr}\\
195source data $\vartriangleleft$ & \verb`----173942---654----1----49731----321--`\\
196$B_7$ & \verb`.......................................`\\
197$B_6$ & \verb`.......................................`\\
198$B_5$ & \verb`111111111111111111111111111111111111111`\\
199$B_4$ & \verb`....111111...111....1....11111....111..`\\
200$B_3$ & \verb`1111...1..111...1111.1111.1...1111...11`\\
201$B_2$ & \verb`1111.1..1.1111111111.11111.1..1111...11`\\
202$B_1$ & \verb`.....11..1...1.............11.....11...`\\
203$B_0$ & \verb`11111111..111.1.111111111.111111111.111`\\
204\verb:[0-9]: & \verb`....111111...111....1....11111....111..`\\
205\end{tabular}
206\end{center}
207\caption{Basis and Character-Class Bitstreams}
208\label{fig:inputstreams}
209\end{figure}
210
211\subsection{Fundamentals}
212
213A bitstream is simply a sequence of $0$s and $1$s, where there is one such bit in the bitstream for each character in a source data stream.
214For parsing, and other text processing tasks, we need to consider multiple properties of characters at different stages during the parsing process. A bitstream can be associated with each of these properties, and hence there will be multiple (parallel) bitstreams associated with a source data stream of characters \cite{Cameron2009,PPoPP08}.
215
216The starting point for bitstream methods are \emph{basis} bitstreams
217and their use in determining \emph{character-class} bitstreams.
218The $k$th basis bitstream $B_k$ consists of the $k$th bit (0-based, starting at the LSB)
219of each character in the source data stream;
220thus each $B_k$ is dependent on the encoding of the source characters (ASCII, UTF-8, UTF-16, etc.).
221Given these basis bit streams, it is then possible to combine them
222using bitwise logic in order to compute character-class
223bit streams, that is, streams that identify the positions at which characters belonging
224to a particular class occur.  For example, the character class bitstream
225$D=$\verb:[0-9]: marks with $1$s the positions at which decimal digits
226occur.    These bitstreams are illustrated in Figure \ref{fig:inputstreams},
227for an example source data stream consisting of digits and hyphens.
228This figure also illustrates some of our conventions for figures:  the left triangle $\vartriangleleft$ after
229``source data'' indicates that all streams are read from right to left
230(i.e., they are in little-endian notation).  We also use hyphens
in the input stream to represent any character that is not relevant to a character
232class under consideration, so that relevant characters stand out.
233Furthermore, the $0$ bits in the bitstreams are represented by periods,
234so that the $1$ bits stand out.
235
236
237Transposition of source data to basis bit streams and calculation
238of character-class streams in this way is an overhead on parallel bit
239stream applications, in general.   However, using the SIMD
240capabilities of current commodity processors, these operations are quite
241fast, with an amortized overhead of about 1 CPU cycle per byte for
242transposition and less than 1 CPU cycle per byte for all the character
243classes needed for XML parsing \cite{CameronHerdyLin2008}.
244Improved instruction sets using parallel extract operations or
245inductive doubling techniques may further reduce this overhead significantly \cite{CameronLin2009,HilewitzLee2006}.
246
247Beyond the bitwise logic needed for character class determination,
248we also need \emph{upshifting} to deal with sequential combination.
249The upshift $n(S)$ of a bitstream $S$ is obtained by shifting the bits in $S$ one position forward,
250then placing a $0$ bit in the starting position of the bitstream; $n$ is meant to be mnemonic of ``next''.
251In $n(S)$, the last bit of $S$ may be eliminated or retained for error-testing purposes.
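
As a concrete illustration of the prototyping style used throughout this paper,
the following sketch represents each unbounded bitstream as a Python integer
whose bit $i$ corresponds to position $i$ of the source data.  The
transposition shown is a naive loop rather than the fast SIMD transposition
discussed above, and the function names are illustrative only; they are not
taken from the Parabix code base.
\begin{verbatim}
# Sketch only: bitstreams as unbounded Python integers,
# with bit i corresponding to position i of the source data.

def basis_bits(data):
    """Transpose a byte string into basis bitstreams B[0]..B[7]."""
    B = [0] * 8
    for i, byte in enumerate(data):
        for k in range(8):
            if (byte >> k) & 1:
                B[k] |= 1 << i
    return B

def digits(B):
    """Character class [0-9]: high nybble 0011, low nybble <= 9."""
    high = ~B[7] & ~B[6] & B[5] & B[4]
    low = ~B[3] | (~B[2] & ~B[1])
    return high & low

def upshift(S):
    """n(S): move every bit one position forward."""
    return S << 1

B = basis_bits(b"----173942---654----1----49731----321--")
D = digits(B)      # 1 bits at the positions of decimal digits
\end{verbatim}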
252
253\subsection{A Parallel Scanning Primitive}
254
255In this section, we introduce the principal new feature of the paper,
256a parallel scanning method based on bitstream addition.   Key to this
257method is the concept of {\em marker} bitstreams. 
258Marker bitstreams are used to represent positions of interest in the
259scanning or parsing of a source data stream.
260The appearance of a 1 at a position in a marker bitstream could, for example, denote
261the starting position of an XML tag in the data stream.   In general, the set of
262bit positions in a marker bitstream may be considered to be the current
263parsing positions of multiple parses taking place in parallel throughout
264the source data stream.
265
266Figure \ref{fig:scan1} illustrates the basic concept
267underlying parallel parsing with bitstream addition.
268As with the previous figures, all streams are shown in little-endian
269representation, with streams reading from right-to-left.
270The first row shows a source data stream that includes three
271spans of digits, $13840$, $1139845$, and $127$, with other nondigit characters shown
272as hyphens.  The second row specifies the parsing problem
273using a marker bitstream $M_0$ to mark three initial
274marker positions at the start of each span of digits.
275The parallel parsing task is to move each
276of the three markers forward through the corresponding spans of
277digits to the immediately following positions.
278
279\begin{figure}[tbh]
280\begin{center}
281\begin{tabular}{l@{}lr}\\
282\multicolumn{2}{l}{source data $\vartriangleleft$}
283                          & \verb`--721----5489311-----04831------`\\
284$M_0$ &                   & \verb`....1..........1.........1......`\\
285$D$   & $= \verb`[0..9]`$ & \verb`..111....1111111.....11111......`\\
286$M_1$ & $ = M_0 + D$      & \verb`.1......1...........1...........`
287\end{tabular}
288\end{center}
289\caption{Bitstream addition}
290\label{fig:scan1}
291\end{figure}
292
293The third row of Figure \ref{fig:scan1}
294shows the derived character-class bitstream $D$ identifying
295positions of all digits in the source stream. 
296The fourth row then illustrates the key concept: marker movement
297is achieved by binary addition of the marker and character
class bitstreams.  As a marker 1 bit is added to
299a span of 1s, each 1 in the span becomes 0, generating
300a carry to add to the next position to the left. 
301For each span, the process terminates at the left end
302of the span, generating a 1 bit in the immediately
303following position.   In this way, binary addition produces the marker bitstream
304$M_1$, with each of the three markers
305moved independently through their respective spans of digits to the position at the end.
306
307However, the simple addition technique shown in Figure \ref{fig:scan1}
308does not account for digits in the source stream that do
309not play a role in a particular scanning operation.
310Figure \ref{fig:scan2} shows an example and how this
311may be resolved.   The source data stream is again shown in row 1,
312and the marker bitstream defining the initial marker positions
for the parallel parsing tasks is shown in row 2.
314Row 3 again contains the character class bitstream for digits $D$.
315Row 4 shows the result of bitstream addition, in which
316marker bits are advanced, but additional bits not
317involved in the scan operation are included in the result.
However, these are easily removed in row 5, by masking
off any bits that remain in the digit bitstream; such positions can never
be marker positions resulting from a scan.
321
322\begin{figure}[tbh]
323\begin{center}
324\begin{tabular}{l@{}lr}\\
325\multicolumn{2}{l}{source data $\vartriangleleft$}     
326                                                        & \verb`--134--31--59127---3--3474--`\\
327$M_0$ &                                                 & \verb`....1.........1..........1..`\\
328$D$   & $= \verb`[0..9]`$ & \verb`..111..11..11111...1..1111..`\\
329$M_1$ & $= M_0 + D$       & \verb`.1.....11.1....1...1.1......`\\
330$M_2$ & $= (M_0 + D) \wedge \neg D$ & \verb`.1........1..........1......`
331\end{tabular}
332\end{center}
333\caption{Parallel Scan Using Addition and Mask}
334\label{fig:scan2}
335\end{figure}
336
337The addition and masking technique allows matching of
338the regular expression \verb:[0-9]*: for any reasonable
339(conflict-free) set of initial markers specified in $M_0$.
340A conflict occurs when a span from one marker would run
341into another marker position.   However, such conflicts
342do not occur with the normal methods of marker bitstream
343formation, in which unique syntactic features of
344the input stream are used to specify the initial marker
345positions.
346
347In the remainder of this paper, the notation $s(M, C)$
348denotes the operation to scan
349from an initial set of marker positions $M$ through
350the spans of characters belonging to a character class $C$ found at each position.
\[s(M, C) = (M + C)  \wedge \neg C\]
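
In the unbounded-integer prototyping style illustrated earlier, this primitive
is a one-line function.  The following sketch applies it to the streams of
Figure \ref{fig:scan2}; the \verb:stream: helper, which converts the
dot-and-one notation of the figures, is illustrative only, and reverses each
string because the figures display position 0 at the right.
\begin{verbatim}
# Sketch of the scan primitive on unbounded integer bitstreams.
def scan(M, C):
    """Advance each marker in M through its span of class-C positions."""
    return (M + C) & ~C

def stream(s):
    # The figures are little-endian: the rightmost character is position 0.
    return sum(1 << i for i, c in enumerate(reversed(s)) if c == '1')

D  = stream('..111..11..11111...1..1111..')   # the D row of the example above
M0 = stream('....1.........1..........1..')   # the M_0 row (initial markers)
M2 = scan(M0, D)        # reproduces the M_2 row: markers past their digit spans
\end{verbatim}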
352
353
354\section{XML Scanning and Parsing}
355\label{sec:errorstream}
356
357We now consider how the parallel scanning primitive can
358be applied to the following problems in scanning and
359parsing of XML structures:  (1) parallel scanning of XML decimal character references,
360and (2) parallel parsing of XML start tags.
361The grammar of these structures is shown in Figure \ref{fig:xmlgrmr}.
362
363\begin{figure}[tbh]
364\begin{center}
365\begin{tabular}{rcl}
366DecRef & ::=   &        '\verb:&#:' Digit+ '\verb:;:'  \\
367Digit  & ::=   &         \verb:[0-9]:\\
368STag         &  ::=   &        '\verb:<:' Name (WS  Attribute)* WS? '\verb:>:'  \\
369Attribute & ::=   &        Name WS? '=' WS? AttValue \\
370AttValue  &           ::=   &        `\verb:":' \verb:[^<"]*: `\verb:":' $|$ ``\verb:':'' \verb:[^<']*: ``\verb:':'' \\
371%DQuoted & ::= & \verb:[^<"]*:  \\
372%SQuoted & ::= & \verb:[^<']*:
373\end{tabular}
374\end{center}
375\caption{XML Grammar: Decimal Character References and Start Tags}
376\label{fig:xmlgrmr}
377\end{figure}
378
379\begin{figure}[b]
380\begin{center}
381\begin{tabular}{l@{}lr}\\
382\multicolumn{2}{l}{source data $\vartriangleright$}     
383                                         & \verb`-&#978;-&9;--&#;--&#13!-`\\
384$M_0$ &                                  & \verb`.1......1....1....1.....`\\
385$M_1$ & $ = n(M_0)$                      & \verb`..1......1....1....1....`\\
386$E_0$ & $ = M_1 \wedge \neg $\verb:[#]:  & \verb`.........1..............`\\
387$M_2$ & $ = n(M_1 \wedge \neg  E_0)$     & \verb`...1...........1....1...`\\
388$E_1$ & $ = M_2 \wedge \neg  D$          & \verb`...............1........`\\
389$M_3$ & $ = s(M_2 \wedge \neg  E_1, D)$  & \verb`......1...............1.`\\
390$E_2$ & $ = M_3 \wedge \neg  $\verb:[;]: & \verb`......................1.`\\
391$M_4$ & $ = M_3 \wedge \neg  E_2$        & \verb`......1.................`\\
392$E $  & $= E_0 \, | \, E_1 \, | \, E_2$  & \verb`.........1.....1......1.`
393\end{tabular}
394\end{center}
395\caption{Parsing Decimal References}
396\label{fig:decref}
397\end{figure}
398
399Figure \ref{fig:decref} shows the parallel parsing of
400decimal references together with error checking.
401The source data includes four instances of potential
402decimal references beginning with the \verb:&: character.
403Of these, only the first one is legal according to
the decimal reference syntax; the other three instances
405are in error.   These references may be parsed in
406parallel as follows.  The
407starting marker bitstream $M_0$ is formed from the \verb:[&]:
408character-class bitstream as shown in the second row.  The next row shows the
409result of the marker advance operation $n(M_0)$ to
410produce the new marker bitstream $M_1$.  At this point,
411a hash mark is required, so the first error bitstream $E_0$ is
412formed using a bitwise ``and'' operation combined with negation,
413to indicate violations of this condition.
414Marker bitstream $M_2$ is then defined as those positions
415immediately following any $M_1$ positions not in error.
416In the following row, the condition that at least
417one digit is required is checked to produce error bitstream $E_1$.
418A parallel scan operation is then applied through the
419digit sequences as shown in the next row to produce
420marker bitstream $M_3$.  The final error bitstream $E_2$ is
421produced to identify any references without a
422closing semicolon.
423In the penultimate row, the final marker bitstream $M_4$ marks the
424positions of all fully-checked decimal references, while the
425last row defines a unified error bitstream $E$ 
426indicating the positions of all detected errors.
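
Expressed in the same prototyping style, the entire computation of
Figure \ref{fig:decref} is a short sequence of bitstream operations.  The
following sketch is illustrative only, assuming the \verb:upshift: and
\verb:scan: helpers of the earlier sketches together with character-class
bitstreams for \verb:&:, \verb:#:, \verb:;: and \verb:[0-9]:.
\begin{verbatim}
# Sketch of the decimal reference parsing steps; Amp, Hash, Semi and
# Digit are character-class bitstreams for '&', '#', ';' and [0-9].
def parse_decimal_refs(Amp, Hash, Semi, Digit):
    M0 = Amp                      # every '&' is a potential reference
    M1 = upshift(M0)
    E0 = M1 & ~Hash               # '#' required here
    M2 = upshift(M1 & ~E0)
    E1 = M2 & ~Digit              # at least one digit required
    M3 = scan(M2 & ~E1, Digit)    # scan through the digit sequence
    E2 = M3 & ~Semi               # ';' required here
    M4 = M3 & ~E2                 # fully checked decimal references
    E  = E0 | E1 | E2             # all detected errors
    return M4, E
\end{verbatim}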
427
428
429One question that may arise is: how are marker bitstreams initialized?   In general,
430this is an important problem, and dependent on the task at hand.   
431In the XML parsing context,
432we rely on an important property of well-formed
433XML: after an initial filtering pass to identify
434XML comments, processing instructions and CDATA
435sections, every remaining \verb:<: in the
436file must be the initial character of a start,
437end or empty element tag, and every remaining \verb:&:
438must be the initial character of a general entity
439or character reference. These assumptions permit easy creation of
440marker bitstreams for XML tags and XML references.
441
442The parsing of XML start tags is a richer problem, involving
443sequential structure of attribute-value pairs as shown in Figure \ref{fig:xmlgrmr}.
444Using the bitstream addition technique, our method
445is to start with the opening angle bracket of all tags as
446the initial marker bitstream for parsing the tags in parallel,
447advance through the element name and then use an iterative
448process to move through attribute-value pairs.
449
450Figure \ref{fig:stag-ex}
451illustrates the parallel parsing of three XML start tags.
452The figure omits determination
453of error bitstreams, processing of single-quoted attribute values and handling
454of empty element tags, for simplicity.  In this figure, the first
455four rows show the source data and three character class bitstreams:
456$N$ for characters permitted in XML names,
457$W$ for whitespace characters,
458and $Q$ for characters permitted within a double-quoted attribute value string. 
459
460\begin{figure*}[tbh]
461\begin{center}\footnotesize
462
463\begin{tabular}{lr}\\
464source data $\vartriangleright$ & \verb`--<e a= "137">---<el2 a="17" a2="3379">---<x>--`\\
465$N = $ name chars & \verb`11.1.1...111..111.111.1..11..11..1111..111.1.11`\\
466$W = $ white space & \verb`....1..1.............1......1..................`\\
467$Q = \neg$\verb:[">]: & \verb`11.11111.111.1111.111111.11.1111.1111.1111.1111`\\
468\\
469$M_0$ & \verb`..1..............1........................1....`\\
470$M_1 = n(M_0)$ & \verb`...1..............1........................1...`\\
471$M_{0,7} = s(M_1, N)$ & \verb`....1................1......................1..`\\
472$M_{0,8} = s(M_{0,7}, W) \wedge \neg$\verb:[>]: & \verb`.....1................1........................`\\
473\\
$M_{1,1} = s(M_{0,8}, N)$ & \verb`......1................1.......................`\\
475$M_{1,2} = s(M_{1,1}, W) \wedge$\verb:[=]: & \verb`......1................1.......................`\\
476$M_{1,3} = n(M_{1,2})$ & \verb`.......1................1......................`\\
$M_{1,4} = s(M_{1,3}, W) \wedge$\verb:["]: & \verb`........1...............1......................`\\
478$M_{1,5} = n(M_{1,4})$ & \verb`.........1...............1.....................`\\
479$M_{1,6} = s(M_{1,5}, Q) \wedge$\verb:["]: & \verb`............1..............1...................`\\
480$M_{1,7} = n(M_{1,6})$ & \verb`.............1..............1..................`\\
481$M_{1,8} = s(M_{1,7}, W) \wedge \neg$\verb:[>]: & \verb`.............................1.................`\\
482\\
483$M_{2,1} = s(M_{1,8}, N)$ & \verb`...............................1...............`\\
484$M_{2,2} = s(M_{2,1}, W) \wedge$\verb:[=]: & \verb`...............................1...............`\\
485$M_{2,3} = n(M_{2,2})$ & \verb`................................1..............`\\
486$M_{2,4} = s(M_{2,3}, W) \wedge$\verb:["]: & \verb`................................1..............`\\
487$M_{2,5} = n(M_{2,4})$ & \verb`.................................1.............`\\
488$M_{2,6} = s(M_{2,5}, Q) \wedge$\verb:["]: & \verb`.....................................1.........`\\
489$M_{2,7} = n(M_{2,6})$ & \verb`......................................1........`\\
490$M_{2,8} = s(M_{2,7}, W) \wedge \neg$\verb:[>]: & \verb`...............................................`
491\end{tabular}
492\end{center}
493\caption{Start Tag Parsing}
494\label{fig:stag-ex}
495\end{figure*}
496
497
498The parsing process is illustrated in the remaining rows of the
499figure.    Each successive row shows the set of parsing markers as they
500advance in parallel using bitwise logic and addition.
501Overall, the sets of marker transitions can be divided into three groups.
502
503The first group
504$M_0$ through $M_{0,8}$ shows the initiation of parsing for each of the
505 tags through the
506opening angle brackets and  the element names, up to the first
attribute name, if present.  Note that there are no attribute names
508in the final tag shown, so the corresponding marker becomes zeroed
509out at the closing angle bracket.
510Since $M_{0,8}$ is not all $0$s, the parsing continues.
511
512The second group of marker transitions
$M_{1,1}$ through $M_{1,8}$ deals with the parallel parsing of the first attribute-value
514pair of the remaining tags.
515After these operations, there are no more attributes
516in the first tag, so its corresponding marker becomes zeroed out.
However, $M_{1, 8}$ is not all $0$s, as the second tag still has an unparsed attribute-value pair.
518Thus, the parsing continues.
519
The third group of marker transitions $M_{2,1}$ through $M_{2,8}$ deals with the parsing of
521the second attribute-value pair of this tag.  The
522final transition to $M_{2,8}$ shows the zeroing out of all remaining markers
523once two iterations of attribute-value processing have taken place.
524Since $M_{2,8}$ is all $0$s, start tag parsing stops.
525
526The implementation of start tag processing uses a while loop that
527terminates when the set of active markers becomes zero,
528i.e.\  when some $M_{k, 8} = 0$.
529Considered
530as an iteration over unbounded bitstreams, all start tags in the document
531are processed in parallel, using a number of iterations equal to the maximum
532number of attribute-value pairs in any one tag in the document.   
533However, in block-by-block processing, the cost of iteration is considerably reduced; the iteration for
534each block only requires as many steps as there are attribute-value pairs
535overlapping the block.
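
The corresponding prototype code is a short loop.  The following sketch is
again illustrative only, assuming the \verb:upshift: and \verb:scan: helpers
of the earlier sketches; error streams, single-quoted values and empty
element tags are omitted, as in Figure \ref{fig:stag-ex}.
\begin{verbatim}
# Sketch of start tag parsing; M0 marks the opening '<' of each start
# tag, while N, W, Q, Eq, DQ and Gt are character-class bitstreams for
# name characters, whitespace, double-quoted string characters, '=',
# '"' and '>' respectively.  Marker recording is omitted in this sketch.
def parse_start_tags(M0, N, W, Q, Eq, DQ, Gt):
    M1 = upshift(M0)               # positions just after each '<'
    M7 = scan(M1, N)               # scan through the element names
    M8 = scan(M7, W) & ~Gt         # drop tags closed by '>' here
    while M8 != 0:                 # one iteration per attribute-value pair
        M1 = scan(M8, N)           # attribute name
        M2 = scan(M1, W) & Eq      # '='
        M3 = upshift(M2)
        M4 = scan(M3, W) & DQ      # opening '"'
        M5 = upshift(M4)
        M6 = scan(M5, Q) & DQ      # closing '"'
        M7 = upshift(M6)
        M8 = scan(M7, W) & ~Gt     # any further attributes?
\end{verbatim}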
536
537
538
539
540%\subsection{Name Scans}
541%To illustrate the scanning of the name found in an XML start tag,
542%let us consider a sequence that might be found in an HTML file,
543%\verb:<div id="myid">:,
544%which is shown as the source data stream in Figure \ref{fig:stag-scan}.
545
546%\begin{figure}[tbh]
547%\begin{center}
548%\begin{tabular}{cr}\\
549%source data & \verb:<div id="myid">:\\
550%$M_0$ & \verb`1..............`\\
551%$C_0$ & \verb`.111.11...1111.`\\
552%$M_1 = n(M_0)$ & \verb`.1.............`\\
553%$M_2 = s(M_1, D_0) \wedge \neg C_0$ & \verb`....1.........`\\
554%lastline & \verb`...............`
555%\end{tabular}
556%\end{center}
557%\caption{Scanning Names}
558%\label{fig:stag-scan}
559%\end{figure}
560
561%If we set our initial marker bitstream according to the procedure outlined in our discussion of marker bitstream initialization, we %obtain the bitstream $M_0$.
562%According to the grammar in Figure \ref{fig:stag-grmr}, we can then look for a \verb:Name: in an \verb:STag: after we have found a %5\verb:<:.
563%So, $M_1$ is the marker bitstream for the starting position of our name.
564%Although we do not know the length of the name, the $C_0$ bit vector can easily be set to $1$ for the characters that can be contained %in a name.
565%We can then use the scan function in a manner similar to how it was used in Figure \ref{fig:scan2} to scan through the entire name to %identify its end position.
566
567Following the pattern shown here, the remaining syntactic
568features of XML markup can similarly be parsed with
bitstream-based methods.   One complexity is that the
parsing of comments,
CDATA sections and processing instructions must be
performed first to determine those regions of text
within which ordinary XML markup is not parsed (i.e.,
within each of these types of construct).   This is handled
by first parsing these structures and
then forming a {\em mask bitstream}, that is, a stream that
577identifies spans of text to be excluded from parsing
578(comment and CDATA interiors, parameter text to processing instructions).
579
580
581\section{XML Well-Formedness}
582
583In this section, we consider the full application of the parsing techniques
584of the previous section to the problem of XML well-formedness checking \cite{TR:XML}.
585This application is useful as a well-defined and commercially significant
586example to assess the overall applicability of parallel bit stream techniques. 
587To what extent can the well-formedness requirements of XML be
588completely discharged using parallel bitstream techniques?
589Are those techniques worthwhile in every instance, or
590do better alternatives exist for certain requirements?
591For those requirements that cannot be fully implemented
592using parallel bitstream technology alone, what
593preprocessing support can be offered by parallel bit stream
594technology to the discharge of these requirements in other ways?
595We address each of these questions in this section,
596and look not only at the question of well-formedness, but also at
597the identification of error positions in documents that
598are not well-formed.
599
600
601%\subsection{Error and Error-Check Bitstreams}
602
603Most of the requirements of XML well-formedness checking
604can be implemented using two particular types of computed
605bitstream: {\em error bitstreams}, introduced in the previous section, and {\em error-check bitstreams}.
Recall that an error bitstream is a stream marking the location of definite errors in accordance with
607a particular requirement.  For example, the
608$E_0$, $E_1$, and $E_2$ bitstreams as computed during parsing of
609decimal character references in Figure \ref{fig:decref}
are error bitstreams.  In such streams, 1 bits mark definite errors and 0 bits mark the
611absence of error according to the requirement.   
612Thus the complete absence of errors according to the
613requirements listed may be determined by forming the
614bitwise logical ``or'' of these bitstreams and confirming
that the resulting value is zero. An error-check bitstream is one
616that marks potential errors to be further checked in
617some fashion during post-bitstream processing.   
An example is the bitstream marking the start positions
of CDATA sections.   This stream is computed during
bitstream processing to identify opening
\verb:<![: sequences, but each marked position must
subsequently be checked for the complete opening
delimiter \verb:<![CDATA[:.
624
625In typical documents, most of these error-check streams will be quite sparse
626or even zero.   Many of the error conditions could
627actually be fully implemented using bitstream techniques,
628but at the cost of a number of additional logical and shift
629operations.   In general, however, the conditions are
630easier and more efficient to check one-at-a-time using
631multibyte comparisons on the original source data stream.
632With very sparse streams, it is very unlikely that
633multiple instances occur within any given block, thus
634eliminating the benefit of parallel evaluation of the logic.
635
636The requirement for name checking merits comment.   XML
637names may use a wide range of Unicode character values.
638It is too expensive to check every instance of an XML name
639against the full range of possible values.   However, it is
640possible and quite inexpensive to use parallel bitstream
641techniques to verify that any ASCII characters within a name
642are indeed legal name start characters or name characters.
643Furthermore, the characters that may legally follow a
644name in XML are confined to the ASCII range.  This makes
645it useful to define a name scan character class to include all the legal ASCII characters
646for names as well as all non-ASCII characters. 
A namecheck character class bitstream will then be defined to identify non-ASCII
characters found within name scans.   In most documents
649this bitstream will be all $0$s; even in documents with substantial
650internationalized content, the tag and attribute names used
651to define the document schema tend to be confined to the
652ASCII repertoire.   In the case that this bitstream is nonempty,
653the positions of all 1 bits in this bitstream denote characters
654that need to be individually validated.
655
656Attribute names within a single XML start tag or empty
657element tag must be unique.  This requirement could be
658implemented using one of several different approaches. Standard
659approaches include: sequential search, symbol lookup, and Bloom filters
660\cite{DaiNiZhu2010}.
661
662In general, the use of error-check bitstreams is a straightforward,
663convenient and reasonably efficient mechanism for
664checking the well-formedness requirements.
665
666%\subsection{Tag Matching}
667
668Except for empty element tags, XML tags come in pairs with
669names that must be matched.   To discharge this requirement,
670we form a bitstream consisting of the disjunction of three
671bitstreams formed during parsing: the bitstream marking the
672positions of start or empty tags (which have a common
673initial structure), the bitstream marking tags that end using
674the empty tag syntax (``\verb:/>:''), and the bitstream
675marking the occurrences of end tags.   In post-bitstream
676processing, we iterate through this computed bitstream
677and match tags using an iterative stack-based approach.
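
The following sketch outlines this post-bitstream step.  It is illustrative
only: the exact marked positions, the bit iteration and the name extraction
helper are assumptions of the sketch, and are far less efficient than the bit
scan techniques used in practice.
\begin{verbatim}
# Sketch of stack-based tag matching.  starts marks the '<' of start or
# empty element tags, empty_closes marks "/>" occurrences, and end_tags
# marks the '<' of end tags; name_at is an illustrative helper.
def bit_positions(S):
    p = 0
    while S:
        if S & 1:
            yield p
        S >>= 1
        p += 1

def name_at(data, p):
    q = p + (2 if data[p + 1:p + 2] == b'/' else 1)
    r = q
    while r < len(data) and data[r] not in b' \t\r\n/>':
        r += 1
    return data[q:r]

def match_tags(data, starts, empty_closes, end_tags):
    stack, errors = [], []
    for p in bit_positions(starts | empty_closes | end_tags):
        if (starts >> p) & 1:
            stack.append(name_at(data, p))      # open a new element
        elif (empty_closes >> p) & 1:
            stack.pop()                         # "/>" closes the current element
        elif not stack or stack.pop() != name_at(data, p):
            errors.append(p)                    # mismatched or extra end tag
    return errors
\end{verbatim}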
678
679%\subsection{Document Structure}
680
681An XML document consists of a single root element with recursively
682defined structure together with material in the document
prolog and epilog.  Verifying this top-level structure and
684the structure of the prolog and epilog material is not
685well suited to parallel bitstream techniques, in particular, nor
686to any form of parallelism, in general.  In essence, the
prolog and epilog materials occur once per document instance.
Thus the requirements to check this top-level structure
for well-formedness are relatively minor, with an overhead
that is quite low for sufficiently large files.
691
692%\subsection{Summary}
693
694Overall, parallel bitstream techniques are quite well-suited to
695verification problems such as XML well-formedness checking. 
696Many of the character validation and syntax checking requirements
697can be conveniently and efficiently implemented using error streams.
698Other requirements are also supported by the computation of
699error-check streams for simple post-bitstream processing or
composite streams over which iterative stack-based procedures
701can be defined for checking recursive syntax.
702
703\section{Compilation to Block-Based Processing} 
704\label{sec:compile}
705While a Python implementation of the techniques described in the previous section works on unbounded bitstreams, a corresponding
706C implementation needs to process an input stream in blocks of size equal to the
707SIMD register width of the processor it runs on.
708So, to convert Python code into C, the key question becomes how
709to transfer information from one block to the next one.
The answer lies in the use of {\em carry bits}, the collection of carries resulting from bitstream additions.
711 
In fact, in the methods we have outlined, all
the information flow between blocks for parallel bit stream
714calculations can be modeled using carry bits.   The parallel
715scanning primitive uses only addition and bitwise logic.
716Since the logic operations do not require information flow
across block boundaries, the information flow is entirely
accounted for by the carry.   Carry bits can also be used to
719capture the information flow associated with upshift
720operations, which move information forward one position
721in the file.   In essence, an upshift by one position for
722a bitstream is equivalent to the addition of the stream
723to itself; the bit shifted out in an upshift is in this
case equivalent to the carry generated by the addition.
725The only other information flow requirement in the
726calculation of parallel bit streams occurs with the
727bitstream subtractions that are used to calculate span streams.
728In this case, the information flow is based on borrows
729generated, which can be handled in the same way as carries.
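
As a concrete illustration, the following sketch performs the scan operation
block by block with an explicit carry, using 64-bit blocks; the block width
and names are illustrative only, since the actual implementations operate on
SIMD registers.
\begin{verbatim}
# Sketch of block-by-block scanning with explicit carry propagation,
# shown with 64-bit blocks; the Parabix implementations use SIMD
# registers, so the block width and names here are illustrative only.
BLOCK_BITS = 64
BLOCK_MASK = (1 << BLOCK_BITS) - 1

def scan_block(M, C, carry_in):
    total = M + C + carry_in
    carry_out = total >> BLOCK_BITS       # carried into the next block
    return (total & ~C) & BLOCK_MASK, carry_out

def scan_all_blocks(M_blocks, C_blocks):
    result, carry = [], 0
    for M, C in zip(M_blocks, C_blocks):
        R, carry = scan_block(M, C, carry)
        result.append(R)
    return result
\end{verbatim}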
730
731Properly determining, initializing and inserting carry bits
732into a block-by-block implementation of parallel bit stream
733code is a task too tedious for manual implementation.
734We have thus developed compiler technology to automatically
735insert declarations, initializations and carry save/restore
736operations into appropriate locations when translating
737Python operations on unbounded bit streams into the
738equivalent low-level C code implemented on a block-by-block
basis.  Our current compiler toolkit is capable of inserting
carry logic using a variety of strategies, including
simulated carry bit processing with SIMD registers as
well as carry-flag processing using the processor general
743purpose registers and ALU.   Details are beyond the
744scope of this paper, but are described in the on-line
745source code repository at parabix.costar.sfu.ca.
746
747\section{Performance Results}
748
749In this section, we compare the performance of our \verb:xmlwf:
750implementation using the Parabix2 technology described above with several
751other implementations.
752These include the original \verb:xmlwf:
753distributed as an example application of the \verb:expat: XML
754parser,  implementations based on the widely used Xerces
755open source parser using both SAX and DOM interfaces,
756and an implementation using our prior Parabix 1 technology with
757bit scan operations. 
758
759Table \ref{XMLDocChars} 
760shows the document characteristics of the XML instances selected for this performance study,
761including both document-oriented and data-oriented XML files.
762The jawiki.xml and dewiki.xml XML files are document-oriented XML instances of Wikimedia books, written in Japanese and German, respectively. The remaining files are data-oriented.  The roads.gml file is an instance of Geography Markup Language (GML),
763a modeling language for geographic information systems as well as an open interchange format for geographic transactions on the Internet \cite{GML04}.  The po.xml file is an example of purchase order data, while the soap.xml file contains a large SOAP message.
764Markup density is defined as the ratio of the total markup contained within an XML file to the total XML document size.
765This metric is reported for each document.
766
767\begin{table*}[tbh]
768\begin{center}
769\begin{tabular}{|c||r|r|r|r|r|}
770\hline
771File Name               & dewiki.xml            & jawiki.xml            & roads.gml     & po.xml        & soap.xml \\ \hline   
772File Type               & document      & document      & data  & data  & data   \\ \hline     
773File Size (kB)          & 66240                 & 7343                  & 11584         & 76450         & 2717 \\ \hline
774Markup Item Count       & 406792                & 74882                 & 280724        & 4634110       & 18004 \\ \hline               
775Attribute Count         & 18808                 & 3529                  & 160416        & 463397        & 30001\\ \hline
776Avg. Attribute Size     & 8                     & 8                     & 6             & 5             & 9\\ \hline
777Markup Density          & 0.07                  & 0.13                  & 0.57          & 0.76          & 0.87  \\ \hline
778
779\end{tabular}
780\end{center}
781 \caption{XML Document Characteristics} 
782 \label{XMLDocChars} 
783\end{table*}
784
785Table \ref{parsers-cpb} shows performance measurements for the
786various \verb:xmlwf: implementations applied to the
787test suite.   Measurements are made on a single core of an
788Intel Core 2 system running a stock 64-bit Ubuntu 10.10 operating system,
789with all applications compiled with llvm-gcc 4.4.5 optimization level 3.
790Measurements are reported in CPU cycles per input byte of
791the XML data files in each case.
792The first row shows the performance of the Xerces C parser
793using the tree-building DOM interface. 
794Note that the performance
795varies considerably depending on markup density.  Note also that
796the DOM tree construction overhead is substantial and unnecessary
797for XML well-formedness checking.  Using the event-based SAX interface
798to Xerces gives much better results as shown in the
799second row.   
800The third row shows the best performance of our byte-at-a-time
801parsers, using the original  \verb:xmlwf: based on expat.
802
803The remaining rows of Table \ref{parsers-cpb} show performance
of parallel bit stream implementations.  The first of these rows shows
805the performance of our Parabix 1 implementation using
bit scan instructions.   It shows a substantial speed-up
over the byte-at-a-time parsers in every case; note also that
808the performance advantage increases with increasing markup
809density, as expected.   The last two rows show different versions of
810the \verb:xmlwf: implemented based on the Parabix 2 technology
811as discussed in this paper.   They differ in the carry handling
812strategy, with the ``simd\_add'' row referring to carry
813computations performed with simulated calculation of
814propagated and generated carries using SIMD operations, while the
815``adc64'' row refers to an implementation directly employing
816the processor carry flags and add-with-carry instructions on
81764-bit general registers.  In both cases, the overall
818performance is quite impressive, with the increased
819parallelism of parallel bit scans clearly paying off in
820improved performance for dense markup.
821
822
823\begin{table}[thb]
824\begin{center}
825\begin{tabular}{|c|c||c|c|c|c|c|}
826\hline
827Parser Class & Parser & dewiki.xml  & jawiki.xml    & roads.gml  & po.xml & soap.xml  \\ \hline 
828
829\multirow{3}{*}{Byte-at-a-time} & Xerces (DOM)    &    37.921   &    40.559   &    72.78   &    105.497   &    125.929  \\ \cline{2-7} 
830& Xerces (SAX)   &     19.829   &    24.883   &    33.435   &    46.891   &    57.119      \\ \cline{2-7}
831& expat      &  12.639   &    16.535   &    32.717   &    42.982   &    51.468      \\ \hline 
832\multirow{3}{*}{Parallel Bit Stream} & Parabix1   &    8.313   &    9.335   &     13.345   &    16.136   &      19.047 \\ \cline{2-7}
833& Parabix2 (simd\_add)   &      6.103   &    6.445   &    8.034   &    8.685   &    9.53 \\ \cline{2-7} 
834& Parabix2 (adc64)       &      5.123   &    5.996   &    6.852   &    7.648   &    8.275 \\ \hline
835 \end{tabular}
836\end{center}
837 \caption{Parser Performance (Cycles Per Byte)} 
838\label{parsers-cpb} 
839\end{table}
840 
841%gcc (simd\_add)    &   6.174   &       6.405   &       7.948   &       8.565   &       9.172 \\ \hline
842%llvm (simd\_add)   &   6.104   &       6.335   &       8.332   &       8.849   &       9.811 \\ \hline
843%gcc (adc64)        &   9.23   &        9.921   &       10.394   &      10.705   &      11.751 \\ \hline
844%llvm (adc64)       &   5.757   &       6.142   &       6.763   &       7.424   &       7.952 \\ \hline
845%gcc (SAHFLAHF)    &    7.951   &       8.539   &       9.984   &       10.219   &      11.388 \\ \hline
846%llvm(SAHFLAHF)    &    5.61   &        6.02   &        6.901   &       7.597   &       8.183 \\ \hline
847 
848
849
850
851\section{Conclusion}
852
853In application to the problem of XML parsing and well-formedness
854checking, the method of parallel parsing with bitstream addition
855is effective and efficient.   Using only bitstream addition
856and bitwise logic, it is possible to handle all of the
857character validation, lexical recognition and parsing problems
858except for the recursive aspects of start and end tag matching.
859Error checking is elegantly supported through the use of error
860streams that eliminate separate if-statements to check for
861errors with each byte.   The techniques are generally very
efficient, particularly when markup density is high.   However, for some
863conditions that occur rarely and/or require complex combinations
864of upshifting and logic, it may be better to define simpler
865error-check streams that require limited postprocessing using
866byte matching techniques.
867
868The techniques have been implemented and assessed for present-day commodity processors employing current SIMD technology.
As processor advances bring improved instruction sets and increases
870in width of SIMD registers, the relative advantages of the
871techniques over traditional byte-at-a-time sequential
parsing methods are likely to increase substantially.
Of particular benefit to this method would be instruction set modifications
that provide more convenient carry propagation for long
bitstream arithmetic.
876
877A significant challenge to the application of these techniques
878is the difficulty of programming.   The method of prototyping
879on unbounded bitstreams has proven to be of significant value
880in our work.   Using the prototyping language as input to
881a bitstream compiler has also proven effective in generating
882high-performance code.   Nevertheless, direct programming
883with bitstreams is still a specialized skill; our future
884research includes developing yet higher level tools to
885generate efficient bitstream implementations from grammars,
886regular expressions and other text processing formalisms.
887
888
889\bibliographystyle{plain}
890\bibliography{xmlperf}
891
892
893\end{document}