\section{Multi-threaded Parabix}
A general problem with improving performance through multicore parallelism is the
accompanying increase in energy cost. As discussed in previous sections, Parabix,
which applies SIMD-based techniques, not only achieves better performance but also
consumes less energy. Moreover, by using multiple cores, we can further improve the
performance of Parabix while keeping its energy consumption at roughly the same level.

A typical approach to parallelizing software, data parallelism, requires nearly
independent data. However, the structure of XML documents makes them difficult to
partition cleanly for data parallelism. Several approaches have been proposed to
address this problem. One introduces a preparsing phase to help partition the XML
document \cite{dataparallel}; the goal of this preparsing is to determine the tree
structure of the document so that it can guide the full parsing in the next phase.
Another data-parallel algorithm is ParDOM \cite{Shah:2009}. It first builds partial
DOM node trees for each data segment and then links them, using preorder numbers
assigned to each start element to determine the ordering among siblings and a stack
to manage the parent-child relationships between elements.

These data-parallel approaches introduce considerable overhead to resolve the data
dependencies between segments. Therefore, instead of partitioning the data and
assigning different segments to different cores, we propose a pipeline parallelism
strategy that partitions the process into several stages and lets each core work on
a single stage.

The interface between stages is implemented using a circular array, where each entry
consists of all ten data structures for one segment, as listed in
Table \ref{pass_structure}. Each thread keeps an index into the array ($I_N$), which
is compared with the index ($I_{N-1}$) kept by the previous thread before the segment
is processed. If $I_N$ is smaller than $I_{N-1}$, thread $N$ can start processing
segment $I_N$; otherwise the thread keeps reading $I_{N-1}$ until $I_{N-1}$ is larger
than $I_N$. The time spent repeatedly loading the value of $I_{N-1}$ and comparing it
with $I_N$ will later be referred to as stall time. When a thread finishes processing
a segment, it increments its index by one.

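The following is a minimal C++ sketch of this synchronization scheme. It is
illustrative only, not the actual Parabix implementation; all names
(\texttt{SegmentEntry}, \texttt{ring}, \texttt{index\_of}, \texttt{process\_stage},
\texttt{NUM\_SEGMENTS}) are hypothetical. Each pipeline thread spins on the index
published by its predecessor before processing a segment and publishes its own index
when it finishes. A possible layout of \texttt{SegmentEntry} is sketched after
Table \ref{pass_structure}.
\begin{verbatim}
#include <atomic>
#include <cstddef>

constexpr std::size_t NUM_SEGMENTS = 64;   // circular-array capacity (assumed)

struct SegmentEntry { /* the ten per-segment data structures (see the table) */ };

SegmentEntry ring[NUM_SEGMENTS];           // shared circular array of segments
std::atomic<std::size_t> index_of[4];      // I_N published by each stage thread

// Placeholder for running one stage's passes on one segment.
void process_stage(int stage, SegmentEntry &seg) { /* ... */ }

// Body of pipeline thread N (N = 0 for Stage 1, ..., N = 3 for Stage 4).
void stage_thread(int N, std::size_t total_segments) {
  for (std::size_t I_N = 0; I_N < total_segments; ++I_N) {
    if (N > 0) {
      // Stall: keep reading I_{N-1} until it is larger than I_N.
      while (index_of[N - 1].load(std::memory_order_acquire) <= I_N) {
        /* spin -- this is the stall time */
      }
    }
    process_stage(N, ring[I_N % NUM_SEGMENTS]);
    // Finished segment I_N: increment the published index by one.
    index_of[N].store(I_N + 1, std::memory_order_release);
  }
}
// (A complete implementation would also keep Stage 1 from running more than
//  NUM_SEGMENTS ahead of Stage 4, so that live entries are not overwritten.)
\end{verbatim}
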
\begin{table*}[t]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
       & & \multicolumn{10}{|c|}{Data Structures}\\ \hline
Stage  & Pass            & srcbuf & basis\_bits & u8   & lex   & scope & ctCDPI & ref    & tag    & xml\_names & check\_streams\\ \hline
Stage1 &fill\_buffer     & write  &             &      &       &       &        &        &        &            &               \\
       &s2p              & read   & write       &      &       &       &        &        &        &            &               \\
       &classify\_bytes  &        & read        &      & write &       &        &        &        &            &               \\ \hline
Stage2 &validate\_u8     &        & read        & write&       &       &        &        &        &            &               \\
       &gen\_scope       &        &             &      & read  & write &        &        &        &            &               \\
       &parse\_CtCDPI    &        &             &      & read  & read  & write  &        &        &            & write         \\
       &parse\_ref       &        &             &      & read  & read  & read   & write  &        &            &               \\ \hline
Stage3 &parse\_tag       &        &             &      & read  & read  & read   &        & write  &            &               \\
       &validate\_name   &        &             & read & read  &       & read   & read   & read   & write      & write         \\
       &gen\_check       &        &             & read & read  & read  & read   &        & read   & read       & write         \\ \hline
Stage4 &postprocessing   & read   &             &      & read  &       & read   & read   &        &            & read          \\ \hline
\end{tabular}
\end{center}
\caption{Relationship between Each Pass and Data Structures}
\label{pass_structure}
\end{table*}

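To make the table concrete, the sketch below expands the opaque \texttt{SegmentEntry}
of the earlier sketch into one possible layout, with one field per data structure in
Table \ref{pass_structure}. The \texttt{Buffer} type and the field comments are
illustrative assumptions, not the actual Parabix-XML definitions; each comment names
the pass that writes that structure, following the table.
\begin{verbatim}
#include <cstdint>
#include <vector>

// Placeholder type: a buffer holding one segment's worth of data or
// parallel bit streams, packed into 64-bit blocks (assumed representation).
using Buffer = std::vector<uint64_t>;

struct SegmentEntry {
  Buffer srcbuf;         // raw source bytes          (written by fill_buffer)
  Buffer basis_bits;     // basis bit streams         (written by s2p)
  Buffer u8;             // UTF-8 streams             (written by validate_u8)
  Buffer lex;            // lexical item streams      (written by classify_bytes)
  Buffer scope;          // scope streams             (written by gen_scope)
  Buffer ctCDPI;         // comment/CDATA/PI streams  (written by parse_CtCDPI)
  Buffer ref;            // reference streams         (written by parse_ref)
  Buffer tag;            // tag streams               (written by parse_tag)
  Buffer xml_names;      // name streams              (written by validate_name)
  Buffer check_streams;  // well-formedness checks    (parse_CtCDPI, validate_name,
                         //                            gen_check)
};
\end{verbatim}
Keeping all ten structures together in one entry means a segment's intermediate
results remain available to every later stage until the entry is recycled.
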
Figure \ref{multithread_perf} compares the XML well-formedness checking performance
of the multi-threaded Parabix with that of the single-threaded version. The
multi-threaded Parabix is more than twice as fast, running at 2.7 cycles per input
byte on the \SB{} machine.

\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{plots/performance.pdf}
\end{center}
\caption{Processing Time (y axis: CPU cycles per byte)}
\label{multithread_perf}
\end{figure}

Figure \ref{power} shows the average power consumed by the multi-threaded Parabix in
comparison with the single-threaded version. Because it runs four threads and keeps
all the cores busy at the same time, the multi-threaded Parabix draws considerably
more power than the single-threaded version. However, its energy consumption is about
the same, because the multi-threaded Parabix needs less processing time. In fact, as
shown in Figure \ref{energy}, parsing soap.xml with the multi-threaded Parabix
consumes less energy than with the single-threaded version.

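This trade-off follows directly from the relationship between energy, power, and
processing time; as a back-of-the-envelope formulation (not a measurement),
\[
E_{\mathrm{byte}} \;=\; P_{\mathrm{avg}} \times t_{\mathrm{byte}}
\;=\; P_{\mathrm{avg}} \times \frac{\mbox{cycles per byte}}{f_{\mathrm{clock}}},
\]
so a speedup of more than two can offset a comparable increase in average power,
leaving the energy per byte roughly unchanged or slightly reduced.
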
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{plots/power.pdf}
\end{center}
\caption{Average Power (watts)}
\label{power}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{plots/energy.pdf}
\end{center}
\caption{Energy Consumption (nJ per byte)}
\label{energy}
\end{figure}
