# Changeset 1348 for docs/HPCA2012/09-pipeline.tex

Timestamp:
Aug 23, 2011, 1:02:30 AM
Message:

new abstract for new intro

File:
1 edited

 r1347 Moreover, using multiple cores, we can further improve the performance of Parabix while keeping energy consumption at the same level. A typical approach to parallelizing software, data parallelism, requires nearly independent data. However, the structure of XML files makes them difficult to partition cleanly for data parallelism. Several approaches have been used to address this problem. A preparsing phase has been proposed to help partition the XML document \cite{dataparallel}. The goal of this preparsing is to determine the tree structure of the XML document so that it can be used to guide the full parsing in the next phase. Another data-parallel algorithm is ParDOM \cite{Shah:2009}. It first builds partial DOM node tree structures for each data segment and then links them, using preorder numbers assigned to each start element to determine the ordering among siblings and a stack to manage the parent-child relationships between elements. Data parallelism approaches introduce substantial overhead to resolve the data dependencies between segments.
Therefore, instead of partitioning the data into segments and assigning different data segments to different cores, we propose a pipeline parallelism strategy that partitions the process into several stages and lets each core work on a single stage. The interface between stages is implemented using a circular array, where each entry consists of all ten data structures for one segment, as listed in Table \ref{pass_structure}. Each thread keeps an index into the array ($I_N$), which is compared with the index ($I_{N-1}$) kept by the previous thread before processing a segment. If $I_N$ is smaller than $I_{N-1}$, thread $N$ can start processing segment $I_N$; otherwise, the thread keeps reading $I_{N-1}$ until $I_{N-1}$ is larger than $I_N$. The time consumed by repeatedly loading the value of $I_{N-1}$ and comparing it with $I_N$ will later be referred to as stall time. When a thread finishes processing a segment, it increments its index by one.
\begin{table*}[t]