\section{Multi-threaded Parabix}
The general problem with addressing performance through multicore parallelism is its increasing energy cost. As discussed in the previous sections, Parabix, which applies SIMD-based techniques, not only achieves better performance but also consumes less energy. Moreover, by using multiple cores we can further improve the performance of Parabix while keeping its energy consumption at the same level.

A typical approach to parallelizing software, data parallelism, requires nearly independent data. However, the nature of XML files makes them difficult to partition cleanly for data parallelism. Several approaches have been used to address this problem. A preparsing phase has been proposed to help partition the XML document \cite{dataparallel}; the goal of this preparsing is to determine the tree structure of the XML document so that it can be used to guide the full parsing in the next phase. Another data-parallel algorithm is ParDOM \cite{Shah:2009}. It first builds partial DOM node trees for each data segment and then links them, using preorder numbers assigned to each start element to determine the ordering among siblings and a stack to manage the parent-child relationships between elements.

Data parallelism approaches introduce considerable overhead to resolve the data dependencies between segments. Therefore, instead of partitioning the data into segments and assigning different segments to different cores, we propose a pipeline parallelism strategy that partitions the process into several stages and assigns each stage to one core.

The interface between stages is implemented using a circular array, where each entry consists of all ten data structures for one segment, as listed in Table \ref{pass_structure}. Each thread keeps an index into the array ($I_N$), which is compared with the index ($I_{N-1}$) kept by the previous thread before processing a segment. If $I_N$ is smaller than $I_{N-1}$, thread $N$ can start processing segment $I_N$; otherwise, the thread keeps reading $I_{N-1}$ until $I_{N-1}$ becomes larger than $I_N$. The time spent repeatedly loading the value of $I_{N-1}$ and comparing it with $I_N$ is referred to later as the stall time. When a thread finishes processing a segment, it increments its index by one.
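A minimal sketch of this handoff is shown below; it assumes a C++ implementation with one spin-waiting thread per stage, and the names and sizes (\verb|SegmentEntry|, \verb|stage_index|, \verb|process_segment|, \verb|RING_SIZE|) are illustrative rather than taken from the Parabix source.
\begin{verbatim}
// Sketch only: illustrative names, not the actual Parabix code.
#include <atomic>

const int NUM_STAGES = 4;      // Stage1..Stage4
const int RING_SIZE  = 16;     // circular-array size (arbitrary here)

struct SegmentEntry { /* the ten per-segment data structures */ };

SegmentEntry ring[RING_SIZE];
std::atomic<long> stage_index[NUM_STAGES];  // I_1..I_4 in the text

// Placeholder for the passes one stage runs on one segment.
void process_segment(int stage, SegmentEntry &entry) { /* ... */ }

void run_stage(int stage, long total_segments) {
  for (long i = 0; i < total_segments; ++i) {
    // Stall: spin until the previous thread's index exceeds ours,
    // i.e. until I_N < I_{N-1}.  Time spent here is the stall time.
    if (stage > 0)
      while (stage_index[stage - 1].load() <= i) { }
    process_segment(stage, ring[i % RING_SIZE]);
    // Finished segment i: increment this stage's index by one.
    stage_index[stage].store(i + 1);
  }
}
\end{verbatim}
In a complete implementation the first stage would also have to wait for the last stage before reusing a ring entry; that wrap-around check is omitted here for brevity.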
\begin{table*}[t]
{
\centering
\footnotesize
\begin{center}
\begin{tabular}{|c|@{~}c@{~}|c|@{~}c@{~}|c@{~}|@{~}c@{~}|c|@{~}c@{~}|c|@{~}c@{~}|c|@{~}c@{~}|}
\hline
       & & \multicolumn{10}{|c|}{Data Structures}\\ \hline
       &                & data\_buffer& basis\_bits & u8   & lex   & scope & ctCDPI & ref    & tag    & xml\_names & err\_streams\\ \hline
Stage1 &read\_data      & write       &             &      &       &       &        &        &        &            &               \\
       &transposition   & read        & write       &      &       &       &        &        &        &            &               \\
       &classification  &             & read        &      & write &       &        &        &        &            &               \\ \hline
Stage2 &validate\_u8    &             & read        & write&       &       &        &        &        &            &               \\
       &gen\_scope      &             &             &      & read  & write &        &        &        &            &               \\
       &parse\_CtCDPI   &             &             &      & read  & read  & write  &        &        &            & write         \\
       &parse\_ref      &             &             &      & read  & read  & read   & write  &        &            &               \\ \hline
Stage3 &parse\_tag      &             &             &      & read  & read  & read   &        & write  &            &               \\
       &validate\_name  &             &             & read & read  &       & read   & read   & read   & write      & write         \\
       &gen\_check      &             &             & read & read  & read  & read   &        & read   & read       & write         \\ \hline
Stage4 &postprocessing  & read        &             &      & read  &       & read   & read   &        &            & read          \\ \hline
\end{tabular}
\end{center}
\caption{Data structures read and written by each pass}
\label{pass_structure}
}
\end{table*}

Figure \ref{multithread_perf} shows the XML well-formedness checking performance of the multi-threaded Parabix in comparison with the single-threaded version. The multi-threaded Parabix is more than twice as fast, running at 2.7 cycles per input byte on the \SB{} machine.

Figure \ref{power} shows the average power consumed by the multi-threaded Parabix in comparison with the single-threaded version. By running four threads and using all the cores at the same time, the multi-threaded Parabix draws considerably more power than the single-threaded version. However, its energy consumption is about the same, because it needs less processing time; in fact, as shown in Figure \ref{energy}, parsing soap.xml with the multi-threaded Parabix consumes less energy than with the single-threaded version.
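This follows directly from the relation between the measured quantities: assuming, for illustration, a fixed clock frequency $f$, the energy per byte is
\[
E_{\mathrm{byte}} \;=\; P_{\mathrm{avg}} \times t_{\mathrm{byte}} \;=\; P_{\mathrm{avg}} \times \frac{C_{\mathrm{byte}}}{f},
\]
where $C_{\mathrm{byte}}$ is the measured cycles per input byte; whenever the increase in average power is smaller than the reduction in cycles per byte, the energy per byte decreases.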
\begin{figure}
\subfigure[Performance (Cycles / Byte)]{
\includegraphics[width=0.32\textwidth]{plots/performance.pdf}
\label{performance}
}
\subfigure[Avg. Power Consumption]{
\includegraphics[width=0.32\textwidth]{plots/power.pdf}
\label{power}
}
\subfigure[Avg. Energy Consumption (nJ / Byte)]{
\includegraphics[width=0.32\textwidth]{plots/energy.pdf}
\label{energy}
}
\caption{Multithreaded Parabix}
\label{multithread_perf}
\end{figure}