\section{Parabix}
\label{section:reserach}
%Describe key technology behind Parabix
%Introduce SIMD;
%Highlight which SSE instructions are important
%Talk about each pass in the parser; How SSE is used in every phase...
%Benefits of SSE in each phase.


% Extract section 2.2 and merge into 3.   Add a new subsection
% in section 2 saying a bit about SIMD.   Say a bit about pure SIMD vertical
% operations and then mention the pack operations that allow
% us to implement transposition efficiently in parallel.
% Also note that the SIMD registers support bitwise logic across
% their full width and that this is extensively used in our work.
%
% Also, it could be good to have a small excerpt of a byte-at-a-time
% scanning loop for XML, e.g., extracted from Xerces in section 2.1.
% Just a few lines showing the while loop - Linda can tell you the file.
%

% This section focuses on the


% With this method, byte-oriented character data is first transposed to eight parallel bit streams, one for each bit position within the character code units (bytes). These bit streams are then loaded into SIMD registers of width $W$ (e.g., 64-bit, 128-bit, 256-bit, etc.). This allows $W$ consecutive code units to be represented and processed at once. Bitwise logic and shift operations, bit scans, population counts and other bit-based operations are then used to carry out the work in parallel \cite{CameronLin2009}.

% The results of \cite{CameronHerdyLin2008} showed that Parabix, the predecessor of Parabix2, was dramatically faster than both Expat 2.0.1 and Xerces-C++ 2.8.0.
% It is our expectation that Parabix2 will outperform both Expat 2.0.1 and Xerces-C++ 3.1.1 in terms of energy consumption per source XML byte.
% This expectation is based on the relatively-branchless code composition of Parabix2 and the more-efficient utilization of last-level cache resources.
% The authors of \cite{bellosa2001, bircher2007, bertran2010} indicate that such factors have a considerable effect on overall energy consumption.
% Hence, one of the foci in our study is the manner in which straight-line SIMD code influences energy usage.


\subsection{Parabix1}

% Our first-generation parallel bitstream XML parser---Parabix1---employs a less conventional use of SIMD technology to represent text as parallel bitstreams. Bits of each stream are in one-to-one correspondence with the bytes of a character stream. A transposition step first transforms sequential byte stream data into eight basis bitstreams, one for the bits at each position within a byte.

At a high level, Parabix1 processes source XML in a manner functionally equivalent to a traditional parser. That is, Parabix1 moves sequentially through the source document, maintaining a single cursor position throughout the parsing process. Where Parabix1 differs from a traditional parser is that it scans for key markup characters using a set of basis bitstreams.
A bitstream is simply a sequence of $0$s and $1$s, with one bit in the bitstream for each character in the source data stream. A basis bitstream holds the transposed values of one bit position within the textual XML data.
In other words, a source character consisting of $M$ bits can be represented with $M$ bitstreams, and
by utilizing $M$ SIMD registers of width $W$, it is possible to scan through $W$ characters in parallel.
The register width $W$ is 64 bits for MMX, 128 bits for SSE, and 256 bits for AVX.
Figure \ref{fig:inputstreams} presents an example of how we represent 8-bit ASCII characters using eight bitstreams. $B_0 \ldots B_7$ are the individual bitstreams. The $0$ bits in the bitstreams are represented by periods, so that the $1$ bits stand out.

\begin{figure}[h]
\begin{center}
\begin{tabular}{cr}\\
source data $\vartriangleright$ & \verb`<t1>abc</t1><t2/>`\\
$B_0$ & \verb`..1.1.1.1.1....1.`\\
$B_1$ & \verb`...1.11.1..1..111`\\
$B_2$ & \verb`11.1...111.111.11`\\
$B_3$ & \verb`1..1...11..11..11`\\
$B_4$ & \verb`1111...1.111111.1`\\
$B_5$ & \verb`11111111111111111`\\
$B_6$ & \verb`.1..111..1...1...`\\
$B_7$ & \verb`.................`\\
\end{tabular}
\end{center}
\caption{Parallel Bitstream Example}
\label{fig:inputstreams}
\end{figure}

In order to represent the byte-oriented character data as parallel bitstreams, the source data is first loaded in sequential order and converted into its transposed representation through a series of packs, shifts, and bitwise operations.
Using the SIMD capabilities of current commodity processors, this transposition incurs an amortized overhead of about 1 CPU cycle per byte \cite{CameronHerdyLin2008}. When parsing, we need to consider multiple properties of characters at different stages of the process. The basis bitstreams can be combined using bitwise logic to compute character-class bitstreams; that is, streams that identify the positions at which characters belonging to a specific character class occur. For example, an ASCII character is an open angle bracket ``<'' if and only if $B_2 \land B_3 \land B_4 \land B_5 = 1$ and the remaining basis bitstreams are $0$ at the same position. Once these character-class bitstreams are created, the bit-scan operations common to commodity processors can be used for sequential markup scanning and data validation. A common operation in all XML parsers is identifying start tags (``<'') and their accompanying end tags (``/>'' or ``>'', depending on whether or not the element is an empty element tag).

\begin{figure}[h]
\begin{center}
\begin{tabular}{lr}\\
source data $\vartriangleright$ & \verb`<t1>abc<t1/><t2/>`\\
% $N =$ name chars & \verb`.11.111.11...11..`\\
$M_0 = 1$ & \verb`1................`\\
$M_1 = \texttt{advance}(M_0)$ & \verb`.1...............`\\
$M_2 = \texttt{bitscan}('>')$ & \verb`...1.............`\\
$M_3 = \texttt{advance}(M_2)$ & \verb`....1............`\\
$M_4 = \texttt{bitscan}('<')$ & \verb`.......1.........`\\
$M_5 = \texttt{bitscan}('/')$ & \verb`..........1......`\\
$M_6 = \texttt{advance}(M_5)$ & \verb`...........1.....`\\
$M_7 = \texttt{bitscan}('<')$ & \verb`.............1...`\\
$M_8 = \texttt{bitscan}('/')$ & \verb`...............1.`\\
$M_9 = \texttt{advance}(M_8)$ & \verb`................1`\\
% $M_2 \lor M_6 \lor M_9$    & \verb`...1.......1....1`\\
\end{tabular}
\end{center}
\caption{Parabix1 Start and End Tag Identification}
\label{fig:Parabix1StarttagExample}
\end{figure}


Unlike in traditional parsers, these sequential operations are accelerated significantly, since a single bit-scan operation can effect up to $W$ finite-state transitions per clock cycle. This approach has recently been applied to Unicode transcoding and XML parsing to good effect, with research prototypes showing substantial speed-ups over even the best of byte-at-a-time alternatives \cite{CameronHerdyLin2008, CameronLin2009, Cameron2010}.




% In section 3, we should try to explain a bit more detail of the
% operation.   Under Parabix 1, a little bit on transposition
% and calculation of the [<] bitstream would be good, perhaps
% using the examples from the 2010 Technical Report or EuroPar submission.


\subsection{Parabix2}

% Under Parabix 2 a little discussion of bitwise addition for
% scanning, perhaps again excerpted from the TR/EuroPar submission
% would be good.

%In Parabix2, we replace the sequential single-cursor parsing using bit scan instructions with a parallel parsing method using bitstream addition. Unlike the single cursor approach of Parabix1 and conceptually of traditional sequential approach, in Parabix2 multiple cursors positions are processed in parallel.

In Parabix2, we replace the sequential single-cursor parsing and its bit-scan instructions with a parallel parsing method based on bitstream addition. Unlike the single-cursor approach of Parabix1 (and, conceptually, of all sequential XML parsers), Parabix2 processes multiple cursors in parallel. For example, using the source data from Figure \ref{fig:Parabix1StarttagExample}, Figure \ref{fig:Parabix2StarttagExample} shows how Parabix2 identifies and moves each of the start tag markers forward to its corresponding end tag. As in Parabix1, we assume that $N$ (the name-character stream) has been computed from the basis bitstreams, and likewise that $M_0$ marks the position of every opening angle bracket.

\begin{figure}[h]
\begin{center}
\begin{tabular}{lr}\\
source data $\vartriangleright$ & \verb`<t1>abc<t1/><t2/>`\\
$N =$ name chars & \verb`.11.111.11...11..`\\
$M_0 = [<]$ & \verb`1......1....1....`\\
$M_1 = \texttt{advance}(M_0)$ & \verb`.1......1....1...`\\
$M_2 = \texttt{scanto}('/','>')$ & \verb`...1......1....1.`\\
$M_3 = \texttt{scanto}('>')$ & \verb`...1.......1....1`\\
\end{tabular}
\end{center}
\caption{Parabix2 Start and End Tag Identification}
\label{fig:Parabix2StarttagExample}
\end{figure}

In general, the set of bit positions in a marker bitstream may be considered to be the current parsing positions of multiple parses taking place in parallel throughout the source data stream. A further aspect of the parallel method is that the conditional branch statements used to identify syntax errors at each parsing position are eliminated. Although we do not show it in the prior examples, error bitstreams can be used to identify any well-formedness errors found during the parsing process. Error positions are gathered and processed in a final post-processing step. Hence, Parabix2 offers additional parallelism over Parabix1 in the form of multiple-cursor parsing, and it also significantly reduces the branch misprediction penalty.
