\documentclass[pageno]{jpaper}

%replace XXX with the submission number you are given from the PACT submission site.
\newcommand{\pactsubmissionnumber}{XXX}

\usepackage[normalem]{ulem}
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage{tikz}
\usepackage{pgfplots}

\begin{document}

\title{Bitwise Data Parallelism in Regular Expression Matching}


\date{}
\maketitle

\thispagestyle{empty}


\begin{abstract}
\input{abstract}
\end{abstract}
\section{Introduction}

The use of regular expressions to search texts for occurrences
of string patterns has a long history and
remains a pervasive technique throughout computing applications today.
% {\em a brief history}
The origins of regular expression matching date back to automata theory
developed by Kleene in the 1950s \cite{kleene1951}.
Thompson \cite{thompson1968} is credited with the first construction to convert regular expressions
to nondeterministic finite automata (NFA).
Following Thompson's approach, a regular expression of length $m$ is converted
to an NFA with $O(m)$ states. It is then possible to search a text of length $n$ using the
NFA in worst-case $O(mn)$ time. Often, a more efficient choice
is to convert an NFA into a DFA. A DFA has a single active state at any time
throughout the matching process and
hence it is possible to search a text of length $n$ in $O(n)$ time\footnote{It is well
known that the conversion of an NFA to an equivalent DFA may result
in state explosion. That is, the number of resultant DFA states may increase exponentially.}.

A significant proportion of the research in fast regular expression matching can be
regarded as the ``quest for efficient automata'' \cite{navarro98fastand}.
In \cite{baeza1992new}, Baeza-Yates and Gonnet
presented a new approach to string search based on bit-level parallelism.
This technique takes advantage of the intrinsic parallelism of bitwise operations
within a computer word.
Given a $w$-bit word, the number of operations that a string search algorithm
performs can be reduced by a factor of $w$.
Using this fact, the Shift-Or algorithm simulates an NFA using
bitwise operations and achieves $O(\frac{nm}{w})$ worst-case time \cite{navarro2000}.
A disadvantage of the Shift-Or approach
is an inability to skip input characters.
Simple string matching algorithms,
such as the Boyer-Moore family of algorithms \cite{boyer1977fast,horspool1980practical}, skip input characters
to achieve sublinear times in the average case.
% Backward Dawg Matching (BDM) string matching algorithms \cite{crochemore1994text}
% based on suffix automata are able to skip characters.
The Backward Nondeterministic Dawg Matching (BNDM) pattern matching algorithm \cite{wu1992fast}
combines the bit-parallel advantages of the Shift-Or approach
with the ability to skip characters. %character skipping property of BDM algorithms.
The nrgrep pattern matching tool
is based on the BNDM algorithm. Prior to the bitwise
data parallel approach presented herein, nrgrep
was by far the fastest grep tool
for matching complex patterns, and achieved similar performance
to that of the fastest existing string
matching tools for simple patterns \cite{navarro2000}.

There has been considerable recent
interest in accelerating regular expression matching
on parallel hardware
such as multicore processors (CPUs),
general purpose graphics processing units (GPGPUs),
field-programmable gate arrays (FPGAs),
and specialized architectures such as
the Cell Broadband Engine (Cell BE). % FPGA results (synthesis of patterns into logic circuits) vs. memory based approaches (STTs in memory)
%CPU
Scarpazza and Braudaway \cite{scarpazza2008fast} demonstrated that
text processing algorithms that exhibit irregular memory access patterns
can be efficiently executed on multicore hardware.
In related work, Pasetto et al. presented a flexible tool that
performs small-ruleset regular expression matching at a rate of
2.88 Gbps per chip on Intel Xeon E5472 hardware \cite{pasetto2010}.
Naghmouchi et al. \cite{scarpazza2011top,naghmouchi2010} demonstrated that the Aho-Corasick (AC)
string matching algorithm \cite{aho1975} is well suited for parallel
implementation on multicore CPUs, GPGPUs and the Cell BE.
On each platform, both thread-level parallelism (cores) and data-level parallelism
(SIMD units) were leveraged for performance.
Salapura et al. \cite{salapura2012accelerating} advocated the use of vector-style processing for regular expressions
in business analytics applications and leveraged the SIMD hardware available
on multicore processors to achieve a speedup of greater than 1.8 over a
range of data sizes of interest.
%Cell
In \cite{scarpazza2008}, Scarpazza and Russell presented a SIMD tokenizer
that delivered 1.00--1.78 Gbps on a single
Cell BE chip and extended this approach for emulation on the Intel Larrabee
instruction set \cite{scarpazza2009larrabee}.
On the Cell BE, Scarpazza \cite{scarpazza2009cell} described a pattern matching
implementation that delivered a throughput of 40
Gbps for a small dictionary of approximately 100 patterns and a throughput of 3.3--3.4
Gbps for a larger dictionary of thousands of patterns. Iorio and van Lunteren \cite{iorio2008}
presented a string matching implementation for automata that achieved
4 Gbps on the Cell BE.
% GPU
In more recent work, Tumeo et al. \cite{tumeo2010efficient} presented a chunk-based
implementation of the AC algorithm for
accelerating string matching on GPGPUs. Lin et al. proposed
the Parallel Failureless Aho-Corasick (PFAC)
algorithm to accelerate pattern matching on GPGPU hardware and
achieved 143 Gbps throughput, 14.74 times faster
than the AC algorithm performed on a four-core
multicore processor using OpenMP \cite{lin2013accelerating}.

Whereas the existing approaches to parallelization have been
based on adapting traditional sequential algorithms to emergent
parallel architectures, we introduce both a new algorithmic
approach and its implementation on SIMD and GPGPU architectures.
This approach relies on a bitwise data parallel view of text
streams as well as a surprising use of addition to match
runs of characters in a single step.  The closest previous
work is that underlying bit-parallel XML parsing using 128-bit SSE2 SIMD
technology together with a parallel scanning primitive also
based on addition \cite{cameron2011parallel}.
However, in contrast to the deterministic, longest-match
scanning associated with the ScanThru primitive of that
work, we introduce here a new primitive MatchStar
that can be used in full generality for nondeterministic
regular expression matching.   We also introduce a long-stream
addition technique involving a further application of MatchStar
that enables us to scale the technique to $n$-bit addition
in $\lceil\log_{64}{n}\rceil$ steps.   We ultimately apply this technique,
for example, to perform
synchronized 4096-bit addition on GPGPU warps of 64 threads.

The bit-parallel data streams used in our approach also bear a
strong superficial resemblance to the bit-parallelism
used for NFA state transitions in the classical algorithms of
Wu and Manber \cite{wu1992agrep}, Baeza-Yates and Gonnet \cite{baeza1992new},
and Navarro and Raffinot \cite{navarro1998bit}.
However, those algorithms use bit-parallelism in a fundamentally
different way: representing all possible current NFA states
as a bit vector and performing parallel transitions to a new
set of states using table lookups and bitwise logic.    Whereas
our approach can match multiple characters per step, bit-parallel
NFA algorithms proceed through the input one byte at a time.
Nevertheless, the agrep \cite{wu1992agrep} and
nrgrep \cite{navarro2000} programs implemented using these techniques remain
among the strongest competitors in regular expression matching
performance, so we include them in our comparative evaluation.

The remainder of this paper is organized as follows.
Section \ref{sec:grep} briefly describes regular expression
notation and the grep problem.
Section \ref{sec:bitwise} presents our basic algorithm and MatchStar
primitive using a model of arbitrary-length bit-parallel data streams.
Section \ref{sec:blockwise} discusses the block-by-block
implementation of our techniques, including the long-stream
addition techniques for 256-bit addition with AVX2 and
4096-bit addition with GPGPU SIMT.
Section \ref{sec:SSE2} describes our overall SSE2 implementation
and carries out a performance study in comparison with
existing grep implementations.
Given the dramatic variation in grep performance across
different implementation techniques, expressions and data sets,
Section \ref{sec:analysis} considers a comparison between
the bit-stream and NFA approaches from a theoretical perspective.
Section \ref{sec:AVX2} then examines and demonstrates
the scalability of our
bitwise data-parallel approach in moving from
128-bit to 256-bit SIMD on the Intel Haswell architecture.
To further investigate scalability, Section \ref{sec:GPU}
addresses the implementation of our matcher using
groups of 64 threads working together SIMT-style on a GPGPU system.
Section \ref{sec:Concl} concludes the paper with a discussion of results and
areas for future work.

\section{Regular Expression Notation and Grep}\label{sec:grep}

We follow common POSIX notation for regular expressions.
A regular expression specifies a set of strings through
a pattern notation.   Individual characters normally
stand for themselves, unless they are one of the
special characters \verb:*+?[{\(|^$.: that serve as metacharacters
of the notation system.  Thus the regular expression \verb:cat:
is a pattern for the set consisting of the single 3-character
string ``\verb:cat:''.   The special characters must be escaped
with a backslash to prevent interpretation as a metacharacter; thus
\verb:\$: represents the dollar sign and \verb:\\\\: represents
the string consisting of two backslash characters.
Character class bracket expressions are pattern elements
that allow any character in a given class to be used in a particular
context.  For example, \verb:[@#%]: is a regular expression
that stands for any of the three given symbols.  Contiguous
ranges of characters may be specified using hyphens;
for example \verb:[0-9]: for digits and \verb:[A-Za-z0-9_]:
for any alphanumeric character or underscore.  If the
caret character immediately follows the opening bracket,
the class is negated; thus \verb:[^0-9]: stands for
any character except a digit.  The period metacharacter
\verb:.: stands for the class of all characters.

Consecutive pattern elements stand for strings formed by
concatenation; thus \verb:[cd][ao][tg]: stands for the
set of 8 three-letter strings ``\verb:cat:'' through ``\verb:dog:''.
The alternation operator \verb:|: allows a pattern to be
defined to have two alternative forms; thus \verb:cat|dog:
matches either ``\verb:cat:'' or ``\verb:dog:''.  Concatenation
takes precedence over alternation, but parentheses may be
used to change this; thus \verb:(ab|cd)[0-9]: stands for any
digit following one of the two prefixes ``\verb:ab:'' or ``\verb:cd:''.

Repetition operators may be appended to a pattern to specify
a variable number of occurrences of that pattern.
The Kleene star \verb:*: specifies zero-or-more occurrences
of the previous pattern, while \verb:+: specifies one-or-more
occurrences.  Thus \verb:[a-z][a-z]*: and \verb:[a-z]+:
both specify the same set: strings of at least one lower-case
letter.  The postfix operator \verb:?: specifies an optional
component, i.e., zero-or-one occurrence of strings matching
the element.  Specific bounds may be given within braces:
\verb:(ab){3}: specifies the string ``\verb:ababab:'',
\verb:[0-9A-Fa-f]{2,4}: specifies strings of two, three
or four hexadecimal digits, and \verb:[A-Z]{4,}: specifies
strings of at least 4 consecutive capital letters.

The grep program searches a file for lines containing matches
to a regular expression using any of the above notations.
In addition, the pattern elements \verb:^: and \verb:$:
may be used to match respectively the beginning or the
end of a line.  In line-based tools such as grep, \verb:.:
matches any character except newlines; matches cannot extend
over lines.
Normally, grep prints all matching
lines to its output.  However, grep programs typically
allow a command line flag such as \verb:-c: to specify
that only a count of matching lines be produced; we use
this option in our experimental evaluation to focus
our comparisons on the performance of the underlying matching
algorithms.

\section{Matching with Bit-Parallel Data Streams}\label{sec:bitwise}

Whereas the traditional approaches to regular expression matching
using NFAs, DFAs or backtracking all rely on a byte-at-a-time
processing model, the approach we introduce in this paper is based
on quite a different concept:  a data-parallel approach to simultaneous
processing of data stream elements.  Indeed, our most abstract model
is that of unbounded data parallelism: processing all elements of
the input data stream simultaneously.   In essence, data streams are viewed
as (very large) integers.   The fundamental operations are bitwise
logic, stream shifting and long-stream addition.

Depending on the available parallel processing resources, an actual
implementation may divide an input stream into blocks and process
the blocks sequentially.   Within each block all elements of the
input stream are processed together, relying on the availability of
bitwise logic and addition scaled to the block size.   On commodity
Intel and AMD processors with 128-bit SIMD capabilities (SSE2),
we typically process input streams 128 bytes at a time.
In this
case, we rely on the Parabix tool chain \cite{lin2012parabix}
to handle the details of compilation to block-by-block processing.
On the
latest processors supporting the 256-bit AVX2 SIMD operations,
we also use the Parabix tool chain, but substitute a parallelized
long-stream addition technique to avoid the sequential chaining
of four 64-bit additions.
Our GPGPU implementation uses scripts to modify the output
of the Parabix tools, effectively dividing the input into blocks
of 4K bytes.
We have also adapted our long-stream addition technique
to perform 4096-bit additions using 64 threads working in lock-step
SIMT fashion.

\begin{figure}[tbh]
\begin{center}
\begin{tabular}{cr}\\
input data  & \verb`a453z--b3z--az--a12949z--ca22z7--`\\
$B_7$ & \verb`.................................`\\
$B_6$ & \verb`1...1..1.1..11..1.....1..11..1...`\\
$B_5$ & \verb`111111111111111111111111111111111`\\
$B_4$ & \verb`.1111...11...1...111111....1111..`\\
$B_3$ & \verb`....111..111.111...1.1111....1.11`\\
$B_2$ & \verb`.11..11...11..11....1..11.....111`\\
$B_1$ & \verb`...11..111...1....1...1..1.1111..`\\
$B_0$ & \verb`1.11.11.1.111.1111.1.1.1111...111`\\
\verb:[a]: & \verb`1...........1...1.........1......`\\
\verb:[z9]: & \verb`....1....1...1.....1.11......1...`\\
\verb:[0-9]: & \verb`.111....1........11111.....11.1..`
\end{tabular}

\end{center}
\caption{Basis and Character Class Streams}
\label{fig:streams}
\end{figure}

A key concept in this streaming approach is the derivation of bit streams
that are parallel to the input data stream, i.e., in one-to-one
correspondence with the data element positions of the input
streams.   Typically, the input stream is a byte stream comprising
the 8-bit character code units of a particular encoding such
as extended ASCII, ISO-8859-1 or UTF-8.   However, the method may also
easily be used with wider code units such as the 16-bit code units of
UTF-16.   In the case of a byte stream, the first step is to transpose
the byte stream into eight parallel bit streams, such that bit stream
$i$ comprises the $i^\text{th}$ bit of each byte.   These streams form
a set of basis bit streams from which many other parallel bit
streams can be calculated, such as character class bit
streams, in which bit $j$ specifies
whether character $j$ of the input stream is in the class
or not.  Figure \ref{fig:streams} shows an example of an
input byte stream in ASCII, the eight basis bit streams of the
transposed representation, and the character class bit streams
\verb:[a]:,
\verb:[z9]:, and
\verb:[0-9]:
that may be computed from the basis bit streams using bitwise logic.
Zero bits are marked with periods ({\tt .}) so that the one bits stand out.
Transposition and character class construction are straightforward
using the Parabix tool chain \cite{lin2012parabix}.
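
As a concrete illustration, the \verb:[0-9]: stream of Figure \ref{fig:streams}
can be computed from the basis streams with a handful of bitwise operations.
The following C++ sketch is our own illustration, not code from the Parabix
library; it computes one 64-bit block of the class stream, assuming basis
streams \verb:b[0]: through \verb:b[7]: with \verb:b[i]: holding bit $i$
(bit 0 least significant) of each byte, as in Figure \ref{fig:streams}.
{\small
\begin{verbatim}
#include <cstdint>

// Sketch: [0-9] class stream for one 64-bit block of ASCII input.
// ASCII digits 0x30-0x39 have high nibble 0011 and low nibble <= 9.
uint64_t digit_class(const uint64_t b[8]) {
    uint64_t high = ~b[7] & ~b[6] & b[5] & b[4]; // bits 7..4 == 0011
    uint64_t low  = ~b[3] | (~b[2] & ~b[1]);     // low nibble 0-9
    return high & low;
}
\end{verbatim}
}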

\begin{figure}[tbh]
\begin{center}
\begin{tabular}{cr}\\
input data  & \verb`a453z--b3z--az--a12949z--ca22z7--`\\
$M_1$ & \verb`.1...........1...1.........1.....`\\
$M_2$ & \verb`.1111........1...111111....111...`\\
$M_3$ & \verb`.....1........1.....1.11......1..`
\end{tabular}

\end{center}
\caption{Marker Streams in Matching {\tt a[0-9]*[z9]}}
\label{fig:streams2}
\end{figure}

\paragraph*{Marker Streams.}  Now consider how bit-parallel data
streams can be used in regular expression matching.   Consider
the problem of searching the input stream of Figure \ref{fig:streams}
to find occurrences of strings matching
the regular expression \verb:a[0-9]*[z9]:.
Note that this is an ambiguous regular expression, which could match
texts such as \verb:a12949z: in multiple ways.
The matching process involves the concept of {\em marker streams}, that
is, streams that mark the positions of current matches during the
overall process.  In this case there are three marker streams computed
during the match process, namely,
$M_1$ representing match positions after an initial \verb:a:
character has been found, $M_2$ representing positions
reachable from positions marked by $M_1$ by further matching zero or
more digits (\verb:[0-9]*:), and finally $M_3$, the stream
marking positions after a final \verb:z: or \verb:9: has been found.
Leaving aside for the moment the details of how these streams are
computed, Figure \ref{fig:streams2} shows what each
of these streams should be for our example matching problem.
Our convention is that a marker stream contains a 1 bit
at the next character position to be matched, that is,
immediately past the last position that was matched.
Note that all three matches from the third occurrence of \verb:a:
are correctly marked in $M_3$.

\paragraph*{MatchStar.}
MatchStar takes a marker bitstream and a character class bitstream as input.  It returns all positions that can be reached by advancing the marker bitstream zero or more times through the character class bitstream.

\begin{figure}[tbh]
\begin{center}
\begin{tabular}{cr}\\
input data  & \verb`a453z--b3z--az--a12949z--ca22z7--`\\
$M_1$ & \verb`.1...........1...1.........1.....`\\
$D = \text{\tt [0-9]}$ & \verb`.111....1........11111.....11.1..`\\
$T_0 = M_1 \wedge D$ & \verb`.1...............1.........1.....`\\
$T_1 = T_0 + D$ & \verb`....1...1.............1......11..`\\
$T_2 = T_1 \oplus D$ & \verb`.1111............111111....111...`\\
$M_2 = T_2 \, | \, M_1$ & \verb`.1111........1...111111....111...`
\end{tabular}

\end{center}
\caption{$M_2 = \text{MatchStar}(M_1, D)$}
\label{fig:matchstar}
\end{figure}


Figure \ref{fig:matchstar} illustrates the MatchStar method. In this figure,
it is important to note that our bitstreams are shown in natural left-to-right order reflecting the
conventional presentation of our character data input.   However, this reverses the normal
order of presentation when considering bitstreams as numeric values.  The key point here is
that when we perform bitstream addition, we will show bit movement from left-to-right.
For example, $\verb:111.: + \verb:1...: = \verb:...1:$.

The first row of the figure is the input data;
the second and third rows are the input bitstreams: the initial marker position bitstream and the
character class bitstream for digits derived from the input data.

In the first operation ($T_0$), marker positions that cannot be advanced are temporarily removed from consideration by masking off marker positions that are not character class positions.  Next, the temporary marker bitstream is added to the character class bitstream.
The addition produces 1s in three types of positions: immediately following a run of character class positions that spanned one or more marker positions, at any character class position that was unaffected by the addition (and is not part of the desired output), and at any marker position that was not the first in its run of character class positions.  Any character class position that has a 0 in $T_1$ was affected by the addition and is part of the desired output.  These positions are obtained, and the undesired 1 bits removed, by XORing with the character class stream.  $T_2$ is now missing only the marker positions that were removed in the first step, as well as the marker positions that remained 1s in $T_1$.  The
output marker stream is obtained by ORing $T_2$ with the initial marker stream.

In general, given a marker stream $M$ and a character class stream $C$,
the operation of MatchStar is defined by the following equation.
\[\text{MatchStar}(M, C) = (((M \wedge C) + C)  \oplus C) | M\]
Given a set of initial marker positions, the result stream marks
all possible positions that can be reached by 0 or more occurrences
of characters in class $C$ from each position in $M$.
MatchStar differs from ScanThru of the Parabix tool chain in that it
finds all matches, not just the longest match.  This is necessary
for general matching involving possibly ambiguous regular
expressions.

\input{compilation}

\input{re-Unicode}

\section{Block-at-a-Time Processing}\label{sec:blockwise}

The unbounded stream model of the previous section must of course
be translated into an implementation that proceeds block-at-a-time for
realistic application.  In this, we primarily rely on the Pablo
compiler of the Parabix toolchain \cite{lin2012parabix}.  Given input
statements expressed as arbitrary-length bitstream equations, Pablo
produces block-at-a-time C++ code that initializes and maintains all the necessary
carry bits for each of the additions and shifts involved in the
bitstream calculations.

In the present work, our principal contribution to the Parabix tool
chain is to incorporate the technique of long-stream addition described below.
Otherwise, we were able to use Pablo directly in compiling our
SSE2 and AVX2 implementations.   Our GPGPU implementation required
some scripting to modify the output of the Pablo compiler for our
purpose.

\paragraph*{Long-Stream Addition.}  The maximum word size for
addition on commodity processors is typically 64 bits.  In order
to implement long-stream addition for block sizes of 256 bits or larger,
a method for propagating carries through the individual stages of
64-bit addition is required.  However, the normal technique of
sequential addition using add-with-carry instructions, for example,
is far from ideal.

We use the following general model using SIMD methods for constant-time
long-stream addition up to 4096 bits.   Related GPGPU solutions have been
independently developed \cite{Crovella2012};
however, our model is intended to be a more broadly applicable abstraction.
We assume the availability of the following SIMD/SIMT operations
operating on vectors of $f$ 64-bit fields.
\begin{itemize}
\item \verb#simd<64>::add(X, Y)#: vertical SIMD addition of corresponding 64-bit fields
in two vectors to produce a result vector of $f$ 64-bit fields,
\item  \verb#simd<64>::eq(X, -1)#: comparison of the 64-bit fields
of \verb:X: each with the constant value -1 (all bits 1), producing
an $f$-bit mask value,
\item  \verb#hsimd<64>::mask(X)#: gathering the high bit of each 64-bit
field into a single compressed $f$-bit mask value,
\item normal bitwise logic operations on $f$-bit masks, and
\item  \verb#simd<64>::spread(x)#: distributing the bits of
an $f$-bit mask, one bit each, to the $f$ 64-bit fields of a vector.
\end{itemize}

In this model, the \verb#hsimd<64>::mask(X)# and
\verb#simd<64>::spread(x)# operations model the minimum
communication requirements between the parallel processing units
(SIMD lanes or SIMT processors).    In essence, we just need
the ability to quickly send and receive 1 bit of information
per parallel unit.    The \verb#hsimd<64>::mask(X)# operation
gathers 1 bit from each of the processors to a central resource.
After calculations on the gathered bits are performed, we then
just need an operation to invert the communication, i.e.,
sending 1 bit each from the central processor to each of
the parallel units.   There are a variety of ways in which
these facilities may be implemented depending on the
underlying architecture; details of our AVX2 and GPGPU implementations
are presented later.

Given these operations, our method for long-stream addition of
two $f \times 64$ bit values \verb:X: and \verb:Y: is the following.
\begin{enumerate}
\item Form the vector of 64-bit sums of \verb:X: and \verb:Y:.
\[\text{\tt R} = \text{\tt simd<64>::add(X, Y)} \]

\item Extract the $f$-bit masks of \verb:X:, \verb:Y: and \verb:R:.
\[\text{\tt x} = \text{\tt hsimd<64>::mask(X)} \]
\[\text{\tt y} = \text{\tt hsimd<64>::mask(Y)} \]
\[\text{\tt r} = \text{\tt hsimd<64>::mask(R)} \]

\item Compute an $f$-bit mask of carries generated for each of the
64-bit additions of \verb:X: and \verb:Y:.
\[\text{\tt c} = (\text{\tt x} \wedge \text{\tt y}) \vee ((\text{\tt x} \vee \text{\tt y}) \wedge \neg \text{\tt r})\]

\item Compute an $f$-bit mask of all fields of {\tt R} that will overflow with
an incoming carry bit.  This is the {\em bubble mask}.
\[\text{\tt b} = \text{\tt simd<64>::eq(R, -1)}\]

\item Determine an $f$-bit mask identifying the fields of {\tt R} that need to be
incremented to produce the final sum.  Here we find a new application of
MatchStar.
\[\text{\tt i} = \text{\tt MatchStar(c*2, b)}\]

This is the key step.  The mask {\tt c} of outgoing carries must be
shifted one position ({\tt c*2}) so that each outgoing carry bit becomes associated
with the next digit.  At the incoming position, the carry will
increment the 64-bit digit.   However, if this digit is all ones (as
signalled by the corresponding bit of bubble mask {\tt b}), then the addition
will generate another carry.  In fact, if there is a sequence of
digits that are all ones, then the carry must bubble through
each of them.   This is just MatchStar.

\item Compute the final result {\tt Z}.
\[\text{\tt Z} = \text{\tt simd<64>::add(R, simd<64>::spread(i))}\]

\end{enumerate}
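
The following scalar C++ sketch simulates these six steps over an array of
$f$ 64-bit fields.  It is our model of the computation, with the SIMD mask
and spread operations emulated by loops, rather than a tuned implementation.
{\small
\begin{verbatim}
#include <cstdint>
#include <cstddef>

static inline uint64_t match_star(uint64_t m, uint64_t c) {
    return (((m & c) + c) ^ c) | m;
}

// Long-stream addition of two F x 64-bit values;
// field 0 is the least significant.
template <std::size_t F>
void long_stream_add(const uint64_t (&X)[F],
                     const uint64_t (&Y)[F],
                     uint64_t (&Z)[F]) {
    uint64_t R[F];
    uint64_t x = 0, y = 0, r = 0, b = 0;      // f-bit masks
    for (std::size_t i = 0; i < F; ++i) {
        R[i] = X[i] + Y[i];                   // simd<64>::add(X, Y)
        x |= (X[i] >> 63) << i;               // hsimd<64>::mask(X)
        y |= (Y[i] >> 63) << i;               // hsimd<64>::mask(Y)
        r |= (R[i] >> 63) << i;               // hsimd<64>::mask(R)
        b |= (uint64_t)(R[i] == ~0ULL) << i;  // simd<64>::eq(R, -1)
    }
    uint64_t c = (x & y) | ((x | y) & ~r);    // outgoing carries
    uint64_t inc = match_star(c << 1, b);     // fields to increment
    for (std::size_t i = 0; i < F; ++i)       // simd<64>::spread(inc)
        Z[i] = R[i] + ((inc >> i) & 1);
}
\end{verbatim}
}
Applied to the 8-bit-field example of Figure \ref{fig:longadd}, this
computation reproduces the tabulated values of {\tt c}, {\tt b}, {\tt i}
and {\tt Z}.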
\begin{figure}
\begin{center}
\begin{tabular}{c||r|r|r|r|r|r|r|r||}\cline{2-9}
{\tt X} & {\tt 19} & {\tt 31} & {\tt BA} & {\tt 4C} & {\tt 3D} & {\tt 45} & {\tt 21} & {\tt F1} \\ \cline{2-9}
{\tt Y} & {\tt 22} & {\tt 12} & {\tt 45} & {\tt B3} & {\tt E2} & {\tt 16} & {\tt 17} & {\tt 36} \\ \cline{2-9}
{\tt R} & {\tt 3B} & {\tt 43} & {\tt FF} & {\tt FF} & {\tt 1F} & {\tt 5B} & {\tt 38} & {\tt 27} \\ \cline{2-9}
{\tt x} & {\tt 0} & {\tt 0} & {\tt 1} & {\tt 0} & {\tt 0} & {\tt 0} & {\tt 0} & {\tt 1} \\ \cline{2-9}
{\tt y} & {\tt 0} & {\tt 0} & {\tt 0} & {\tt 1} & {\tt 1} & {\tt 0} & {\tt 0} & {\tt 0} \\ \cline{2-9}
{\tt r} & {\tt 0} & {\tt 0} & {\tt 1} & {\tt 1} & {\tt 0} & {\tt 0} & {\tt 0} & {\tt 0} \\ \cline{2-9}
{\tt c} & {\tt 0} & {\tt 0} & {\tt 0} & {\tt 0} & {\tt 1} & {\tt 0} & {\tt 0} & {\tt 1} \\ \cline{2-9}
{\tt c*2} & {\tt 0} & {\tt 0} & {\tt 0} & {\tt 1} & {\tt 0} & {\tt 0} & {\tt 1} & {\tt 0} \\ \cline{2-9}
{\tt b} & {\tt 0} & {\tt 0} & {\tt 1} & {\tt 1} & {\tt 0} & {\tt 0} & {\tt 0} & {\tt 0} \\ \cline{2-9}
{\tt i} & {\tt 0} & {\tt 1} & {\tt 1} & {\tt 1} & {\tt 0} & {\tt 0} & {\tt 1} & {\tt 0} \\ \cline{2-9}
{\tt Z} & {\tt 3B} & {\tt 44} & {\tt 0} & {\tt 0} & {\tt 1F} & {\tt 5B} & {\tt 39} & {\tt 27} \\ \cline{2-9}
\end{tabular}
\end{center}
\caption{Long Stream Addition}\label{fig:longadd}
\end{figure}

Figure \ref{fig:longadd} illustrates the process, using 8-bit fields
rather than 64-bit fields
and showing all field values in hexadecimal notation.  Note that
two of the individual 8-bit additions produce carries, while two
others produce {\tt FF} values that generate bubble bits.  The
net result is that four of the original 8-bit sums must be
incremented to produce the long stream result.

A slight extension to the process produces a long-stream full adder
that can be used in chained addition.  In this case, the
adder must take an additional carry-in bit
{\tt p} and produce a carry-out bit {\tt q}.
This may be accomplished by incorporating {\tt p}
in calculating the increment mask in the low bit position,
and then extracting the carry-out {\tt q} from the high bit position.
\[\text{\tt i} = \text{\tt MatchStar(c*2+p, b)}\]
\[\text{\tt q} = \text{\tt i >> f}\]
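
In terms of the scalar sketch above, and assuming $f < 64$ so that the
masks fit in a single machine word, this is a two-line change to the
increment-mask computation:
{\small
\begin{verbatim}
// Full-adder variant: carry-in p (0 or 1) enters at the low bit of
// the shifted carry mask; carry-out q emerges at bit position F.
uint64_t inc = match_star((c << 1) | p, b);
uint64_t q   = (inc >> F) & 1;
\end{verbatim}
}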

As described subsequently, we use a two-level long-stream addition technique
in both our AVX2 and GPGPU implementations.  In principle, one can extend
the technique to additional levels.  Using 64-bit adders throughout,
$\lceil\log_{64}{n}\rceil$ steps are needed for $n$-bit addition.
A three-level scheme could coordinate
64 groups each performing 4096-bit long additions in a two-level structure.
However, whether there are reasonable architectures that can support fine-grained
SIMT-style coordination at this level is an open question.

Using the methods outlined, it is quite conceivable that instruction
set extensions to support long-stream addition could be added for
future SIMD and GPGPU processors.   Given the fundamental nature
of addition as a primitive and its particular application to regular
expression matching as shown herein, it seems reasonable to expect
such instructions to become available.    Alternatively, it may
be worthwhile to simply ensure that the \verb#hsimd<64>::mask(X)# and
\verb#simd<64>::spread(X)# operations are efficiently supported.


\input{sse2}

\input{analysis}

\input{avx2}


\section{GPGPU Implementation}\label{sec:GPU}

To further assess the scalability of our regular expression matching
using bit-parallel data streams, we implemented a GPGPU version
in OpenCL.
We arranged for 64 work groups, each having 64 threads.
The work group size and the number of work groups were chosen
to provide the best occupancy as calculated by the AMD APP Profiler.
Input files are divided in a data-parallel fashion among
the 64 work groups.  Each work group carries out the regular
expression matching operations 4096 bytes at a time using SIMT
processing.   Although the GPGPU
does not directly support the mask and spread operations required
by our long-stream addition model,
we are able to simulate them using shared memory.
Each thread maintains
its own carry and bubble values in shared memory and performs
synchronized updates with the other threads using a six-step
parallel-prefix style process.  Others have implemented
long-stream addition on the GPGPU using similar techniques,
as noted previously.

We performed our test on an AMD A10-6800K APU machine.
On AMD Fusion systems, the input buffer is allocated in
pinned memory to take advantage of the zero-copy memory regions,
where data can be read directly into this region by the CPU
and also accessed by the GPGPU for further processing. Therefore,
the expensive data transfer time required by traditional
discrete GPGPUs is avoided, and we compare only the kernel execution
time with our SSE2 and AVX2 implementations, as shown in Figure
\ref{fig:SSE-AVX-GPU}. The GPGPU version gives a 30\% to 55\% performance
improvement over the SSE2 version and a 10\% to 40\% performance
improvement over the AVX2 version. Although we process
64 work groups with 4096 bytes each at a time, rather than 128 bytes
at a time with SSE2 or 256 bytes at a time with AVX2, the performance
improvement is less than 55\%. The first reason is hardware
limitations: our kernel occupancy is limited by register usage,
and not all the work groups can be scheduled at the same time.
The second reason is that the long-stream addition implemented
on the GPGPU is more expensive than the implementations on SSE2 or AVX2.
Another important reason is control flow: when a possible
match is found in one thread, the rest of the threads in the
same work group have to execute the same instructions for
further processing rather than jump to the next block with a
simple IF test. Therefore, the performance on different
regular expressions is dependent on the number of
long-stream addition operations and the total number of matches
for a given input.

\begin{figure}
\begin{center}
\begin{tikzpicture}
\begin{axis}[
xtick=data,
ylabel=Running Time (ms per megabyte),
xticklabels={@,Date,Email,URIorEmail,HexBytes},
tick label style={font=\tiny},
enlarge x limits=0.15,
%enlarge y limits={0.15, upper},
ymin=0,
legend style={at={(0.5,-0.15)},
anchor=north,legend columns=-1},
ybar,
bar width=7pt,
]
\addplot
file {data/ssetime.dat};
\addplot
file {data/avxtime.dat};
\addplot
file {data/gputime.dat};

\legend{SSE2,AVX2,GPGPU}
\end{axis}
\end{tikzpicture}
\end{center}
\caption{Running Time}\label{fig:SSE-AVX-GPU}
\end{figure}


\input{conclusion}



%\appendix
%\section{Appendix Title}

%This is the text of the appendix, if you need one.


This research was supported by grants from the Natural Sciences and Engineering Research Council of Canada and
MITACS, Inc.

\bibliographystyle{IEEEtranS}
\bibliography{reference}

\end{document}