# Changeset 2505 for docs/Working/icXML/multithread.tex

Timestamp:
Oct 19, 2012, 6:56:35 PM
Message:

more edits

File:
1 edited

## docs/Working/icXML/multithread.tex

\section{Leveraging Pipeline Parallelism}

As discussed in section \ref{background:xerces}, Xerces can be considered a complex finite-state machine, the hardest type of application to parallelize and process efficiently \cite{Asanovic:EECS-2006-183}. However, \icXML{} provides logical layers between modules, which naturally enables pipeline parallel processing.

In the pipeline model, each thread is in charge of a group of modules. A straightforward division is to take advantage of the layer between the \PS{} and the \MP{}. In this case, the first thread $T_1$ reads 16k of XML input $I$ at a time, produces the content, symbol and URI streams, and stores them in a pre-allocated shared data structure $S$.
The second thread $T_2$ consumes $S$, performs well-formedness and grammar-based validation, and then generates the output $O$.

The shared data structure is implemented as a ring buffer, in which every entry contains an independent set of data streams. In the example of Figures \ref{threads_timeline1} and \ref{threads_timeline2}, the ring buffer has four entries. A lock-free mechanism ensures that each entry can be read or written by only one thread at a time. In Figure \ref{threads_timeline1}, the processing time of $T_1$ is longer than that of $T_2$; thus $T_2$ always waits for $T_1$ to write to the shared memory. Figure \ref{threads_timeline2} illustrates the opposite scenario, in which $T_1$ is faster and must wait for $T_2$ to finish reading the shared data before it can reuse the memory space.
To understand the performance improvement that can be achieved by this pipeline model,