\section{SIMD Scalability}\label{sec:AVX2}


Although commodity processors have provided 128-bit SIMD operations for
more than a decade, the extension to 256-bit integer SIMD operations
has just recently taken place with the availability of AVX2
instructions in Intel Haswell architecture chips as of mid 2013.
This provides an excellent opportunity to assess the scalability
of the bitwise data-parallel approach to regular expression matching.

For the most part, adapting the Parabix tool chain to the new AVX2
instructions was straightforward.  This mostly involved regenerating
library functions using the new AVX2 intrinsics.  There were minor
issues in the core transposition algorithm because the doublebyte-to-byte
pack instructions are confined to independent operations within the two
128-bit lanes of each 256-bit register.

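One way to handle this (a sketch of the general workaround rather than
the Parabix library code itself) is to perform the lane-confined pack
and then restore sequential order with a cross-lane permutation:
\begin{verbatim}
#include <immintrin.h>

/* Pack the high bytes of the 16-bit fields of two 256-bit registers.
   _mm256_packus_epi16 packs each 128-bit lane independently, leaving
   the 64-bit chunks in the order {a.lo, b.lo, a.hi, b.hi}; the final
   permute restores the sequential order {a.lo, a.hi, b.lo, b.hi}. */
static inline __m256i pack_high_bytes(__m256i a, __m256i b) {
  __m256i a_hi = _mm256_srli_epi16(a, 8);
  __m256i b_hi = _mm256_srli_epi16(b, 8);
  __m256i lanewise = _mm256_packus_epi16(a_hi, b_hi);
  return _mm256_permute4x64_epi64(lanewise, 0xD8);  /* selects 0,2,1,3 */
}
\end{verbatim}
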
\paragraph*{AVX2 256-Bit Addition}
\begin{figure}[tbh]

\begin{center} \small
\begin{verbatim}
bitblock_t spread(uint64_t bits) {
  uint64_t s = 0x0000200040008001 * bits;
  uint64_t t = s & 0x0001000100010001;
  return _mm256_cvtepu16_epi64(_mm_cvtsi64_si128(t));
}
\end{verbatim}
\end{center}
\caption{AVX2 256-bit Spread}
\label{fig:AVX2spread}

\end{figure}

Bitstream addition at the 256-bit block size was implemented using the
long-stream addition technique.  The AVX2 instruction set directly
supports the \verb#hsimd<64>::mask(X)# operation using
the \verb#_mm256_movemask_pd# intrinsic, extracting
the required 4-bit mask directly from the 256-bit vector.
The \verb#hsimd<64>::spread(X)# operation is slightly more
problematic, requiring a short sequence of instructions
to convert the computed 4-bit increment mask back
into a vector of four 64-bit values.  One method is to
use the AVX2 broadcast instruction to make four copies
of the mask to be spread, followed by appropriate
bit manipulation.  Another uses multiplication to
first spread the mask bits to 16-bit fields, as shown in
Figure \ref{fig:AVX2spread}: the multiplier has 1 bits at positions
0, 15, 30 and 45, so that bit $i$ of the mask lands at bit position
$16i$ after masking, and the four 16-bit fields are then zero-extended
to 64-bit fields.

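For concreteness, the following is a minimal sketch of these two
operations, assuming \verb#bitblock_t# is \verb#__m256i# and that
\verb#mask# gathers the high bit of each 64-bit field; the names and
the broadcast-based spread shown here are illustrative rather than the
Parabix library definitions:
\begin{verbatim}
#include <immintrin.h>
#include <stdint.h>

/* Gather the high (sign) bit of each 64-bit field into a 4-bit mask. */
static inline uint32_t mask_64(__m256i X) {
  return (uint32_t) _mm256_movemask_pd(_mm256_castsi256_pd(X));
}

/* Broadcast-based spread: copy the 4-bit mask into every 64-bit field,
   isolate bit i in field i, then shift it down to yield 0 or 1. */
static inline __m256i spread_64(uint64_t bits) {
  __m256i copies = _mm256_set1_epi64x((long long) bits);
  __m256i bit_i = _mm256_and_si256(copies, _mm256_set_epi64x(8, 4, 2, 1));
  return _mm256_srlv_epi64(bit_i, _mm256_set_epi64x(3, 2, 1, 0));
}
\end{verbatim}
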
We also compiled new versions of the {\tt egrep} and {\tt nrgrep} programs
using the {\tt -march=core-avx2} flag in case the compiler was able
to vectorize some of the code.

\begin{figure}
\begin{center}
\begin{tikzpicture}
\begin{axis}[
xtick=data,
ylabel=AVX2 Instruction Reduction,
xticklabels={@,Date,Email,URIorEmail,HexBytes},
tick label style={font=\tiny},
enlarge x limits=0.15,
%enlarge y limits={0.15, upper},
ymin=0,
legend style={at={(0.5,-0.15)},
anchor=north,legend columns=-1},
ybar,
bar width=7pt,
]
\addplot
file {data/sse2-avx2-instr-red-bitstreams.dat};
\addplot
file {data/sse2-avx2-instr-red-nrgrep112.dat};
\addplot
file {data/sse2-avx2-instr-red-gre2p.dat};

\legend{bitstreams,nrgrep,gre2p}
\end{axis}
\end{tikzpicture}
\end{center}
\caption{Instruction Reduction}\label{fig:AVXInstrReduction}
\end{figure}

Figure \ref{fig:AVXInstrReduction} shows the reduction in instruction
count achieved for each of the applications.  Working at a block
size of 256 bytes at a time rather than 128 bytes at a time,
the bitstreams implementation scaled remarkably well, with reductions
in instruction count of more than a factor of two in each case.
Although a factor of two would seem an outside limit, we attribute the
additional gain to greater instruction efficiency.  AVX2 instructions
use a non-destructive three-operand form instead of the destructive
two-operand form of SSE2.  In the two-operand form, binary instructions
must always use one of the source registers as a destination register.
As a result, the SSE2 object code contains many data movement operations
that are unnecessary with the AVX2 instruction set.

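The effect can be seen in a small illustrative example (not taken from
the Parabix-generated code): compiled for an SSE2-only target, the
128-bit function below typically requires an extra \verb#movdqa# copy,
since both source registers remain live across the first operation,
while the AVX2 version can place each result in a third register:
\begin{verbatim}
#include <immintrin.h>

__m128i sse2_style(__m128i a, __m128i b) {
  __m128i t = _mm_and_si128(a, b);    /* pand overwrites one source,   */
  __m128i u = _mm_xor_si128(a, b);    /* yet a and b are both needed,  */
  return _mm_or_si128(t, u);          /* forcing a register copy.      */
}

__m256i avx2_style(__m256i a, __m256i b) {
  __m256i t = _mm256_and_si256(a, b); /* vpand writes a third register */
  __m256i u = _mm256_xor_si256(a, b); /* so no copies are needed.      */
  return _mm256_or_si256(t, u);
}
\end{verbatim}
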
As expected, there was no observable reduction in instruction
count for the recompiled {\tt egrep} and {\tt nrgrep} applications.


\begin{figure}
\begin{center}
\begin{tikzpicture}
\begin{axis}[
xtick=data,
ylabel=AVX2 Speedup,
xticklabels={@,Date,Email,URIorEmail,HexBytes},
tick label style={font=\tiny},
enlarge x limits=0.15,
%enlarge y limits={0.15, upper},
ymin=0,
legend style={at={(0.5,-0.15)},
anchor=north,legend columns=-1},
ybar,
bar width=7pt,
]
\addplot
file {data/sse2-avx2-speedup-bitstreams.dat};
\addplot
file {data/sse2-avx2-speedup-nrgrep112.dat};
\addplot
file {data/sse2-avx2-speedup-gre2p.dat};

\legend{bitstreams,nrgrep,gre2p}
\end{axis}
\end{tikzpicture}
\end{center}
\caption{AVX2 Speedup}\label{fig:AVXSpeedup}
\end{figure}

As shown in Figure \ref{fig:AVXSpeedup}, the reduction in
instruction count was reflected in a significant speed-up
of the bitstreams implementation.  However, the speed-up was
considerably less than expected.
The bitstreams code on AVX2 suffered a substantial
reduction in instructions per cycle compared to the SSE2
implementation, possibly indicating
that our grep implementation has become memory-bound.
Nevertheless, the overall results on our AVX2 machine were quite encouraging,
demonstrating very good scalability of the bitwise data-parallel approach.