Selected Publications
Below is a list of selected publications. To view all publications, use the "View all publications" button; the full bibliography in BibTeX format is available via the "Download bibliography" button.
2015
Adornes, Daniel; Griebler, Dalvan; Ledur, Cleverson; Fernandes, Luiz G.: A Unified MapReduce Domain-Specific Language for Distributed and Shared Memory Architectures. In: The 27th International Conference on Software Engineering & Knowledge Engineering (SEKE), pp. 6, Knowledge Systems Institute Graduate School, Pittsburgh, USA, 2015.

Abstract: MapReduce is a suitable and efficient parallel programming pattern for big data analysis. In recent years, many frameworks and languages have implemented this pattern to achieve high performance in data mining applications, particularly on distributed memory architectures (e.g., clusters). Meanwhile, modern processors offer powerful parallel processing on single machines (e.g., multi-core), so these applications may also exploit parallelism at this architectural level. This paper targets code reuse and programming effort reduction, since current solutions do not provide a single interface for these two architectural levels. We therefore propose a unified domain-specific language, together with transformation rules, that generates code for Hadoop and Phoenix++, selected as state-of-the-art MapReduce implementations for distributed and shared memory architectures, respectively. Our solution reduces programming effort by 41.84% up to 95.43% without significant performance losses (below a 3% threshold) compared to Hadoop and Phoenix++.
2014
Griebler, Dalvan; Adornes, Daniel; Fernandes, Luiz G.: Performance and Usability Evaluation of a Pattern-Oriented Parallel Programming Interface for Multi-Core Architectures. In: The 26th International Conference on Software Engineering & Knowledge Engineering (SEKE), pp. 25-30, Knowledge Systems Institute Graduate School, Vancouver, Canada, 2014.

Abstract: Multi-core architectures have increased the power of parallelism by coupling many cores in a single chip. This makes it even more complex for developers to exploit the available parallelism and deliver high-performance, scalable programs. To address these challenges, we propose DSL-POPP (Domain-Specific Language for Pattern-Oriented Parallel Programming), which embeds the pattern-based approach in the programming interface as an alternative to reduce the effort of parallel software development while achieving good performance in some applications. In this paper, we evaluate the usability and performance of the master/slave pattern and compare it to the Pthreads library. Experiments show that the master/slave interface of DSL-POPP reduces programming effort by up to 50% without significantly affecting performance.
2013
Griebler, Dalvan; Fernandes, Luiz G.: Towards a Domain-Specific Language for Patterns-Oriented Parallel Programming. In: Programming Languages - 17th Brazilian Symposium (SBLP), pp. 105-119, Springer Berlin Heidelberg, Brasilia, Brazil, 2013.

Abstract: Pattern-oriented programming has been used in parallel code development for many years. During this time, several tools (mainly frameworks and libraries) have offered patterns based on programming primitives or templates. Implementing patterns with those tools usually requires human expertise to correctly set up communication and synchronization among processes. In this work, we propose a Domain-Specific Language for pattern-oriented parallel programming (DSL-POPP). This approach offers a higher level of programming abstraction in which communication and synchronization among processes are hidden from programmers. We compensate for the reduced programming flexibility by allowing combined and/or nested parallel patterns (i.e., parallelism in levels), enabling the design of more complex parallel applications. We conclude by presenting an experiment in which we develop a parallel application exploiting combined and nested parallel patterns to demonstrate the main properties of DSL-POPP.