2012
Griebler, Dalvan. Proposta de uma Linguagem Específica de Domínio de Programação Paralela Orientada a Padrões Paralelos: Um Estudo de Caso Baseado no Padrão Mestre/Escravo para Arquiteturas Multi-Core. Masters Thesis, Faculdade de Informática - PPGCC - PUCRS, Porto Alegre, Brazil, 2012.

@mastersthesis{GRIEBLER:DM:12,
title = {Proposta de uma Linguagem Específica de Domínio de Programação Paralela Orientada a Padrões Paralelos: Um Estudo de Caso Baseado no Padrão Mestre/Escravo para Arquiteturas Multi-Core},
author = {Dalvan Griebler},
url = {http://tede.pucrs.br/tde_busca/arquivo.php?codArquivo=4265},
year = {2012},
date = {2012-03-01},
address = {Porto Alegre, Brazil},
school = {Faculdade de Informática - PPGCC - PUCRS},
abstract = {This work proposes a Domain-Specific Language for Parallel Patterns Oriented Parallel Programming (LED-PPOPP). Its main purpose is to reduce the effort necessary to develop parallel programs by guiding developers through patterns implemented by the language interface. The idea is to exploit this approach while avoiding large performance losses in the applications. Patterns are specialized, previously studied solutions to recurring problems. Thus, parallel patterns offer a higher abstraction level for organizing algorithms in the exploitation of parallelism, and they can be easily learned by inexperienced programmers and software engineers. This work carried out a case study based on the Master/Slave pattern, focusing on the parallelization of algorithms for multi-core architectures. The implementation was validated through experiments evaluating both the programming effort required to write code in LED-PPOPP and the performance achieved by the automatically generated parallel code. The results show a significant reduction in parallel programming effort compared to using the Pthreads library. Additionally, the final performance of the parallelized algorithms confirms that parallelization with LED-PPOPP does not incur significant losses compared to parallelization with OpenMP in most of the experiments carried out.},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
2011
|
Raeder, Mateus; Griebler, Dalvan; Baldo, Lucas; Fernandes, Luiz G. Performance Prediction of Parallel Applications with Parallel Patterns Using Stochastic Methods. In: Sistemas Computacionais (WSCAD-SSC), XII Simpósio em Sistemas Computacionais de Alto Desempenho, pp. 1-13, IEEE, Espírito Santo, Brasil, 2011.

@inproceedings{RAEDER:WSCAD:11,
title = {Performance Prediction of Parallel Applications with Parallel Patterns Using Stochastic Methods},
author = {Mateus Raeder and Dalvan Griebler and Lucas Baldo and Luiz G. Fernandes},
url = {https://doi.org/10.1109/WSCAD-SSC.2011.18},
doi = {10.1109/WSCAD-SSC.2011.18},
year = {2011},
date = {2011-10-01},
booktitle = {Sistemas Computacionais (WSCAD-SSC), XII Simpósio em Sistemas Computacionais de Alto Desempenho},
pages = {1-13},
publisher = {IEEE},
address = {Espírito Santo, Brasil},
abstract = {One of the main problems in the high performance computing area is the difficulty of defining the best strategy to parallelize an application. In this context, the use of analytical methods to evaluate the performance behavior of such applications is an interesting alternative that can help identify the best implementation strategies. In this work, the Stochastic Automata Network formalism is adopted to model and evaluate the performance of parallel applications, especially those developed for cluster-of-workstations platforms. The methodology is based on the construction of generic models describing classical parallel implementation schemes, such as Master/Slave, Parallel Phases, Pipeline, and Divide and Conquer. These models are adapted to represent real applications through the definition of input parameter values. Finally, to verify the accuracy of the adopted technique, comparisons with results from real application implementations are presented.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Griebler, Dalvan; Raeder, Mateus; Fernandes, Luiz Gustavo. Padrões e Frameworks de Programação Paralela em Ambientes Multi-Core. In: Escola Regional de Alto Desempenho (ERAD-RS), pp. 2, Sociedade Brasileira de Computação (SBC), Porto Alegre, RS, BR, 2011.

@inproceedings{GRIEBLER:ERAD:11,
title = {Padrões e Frameworks de Programação Paralela em Ambientes Multi-Core},
author = {Dalvan Griebler and Mateus Raeder and Luiz Gustavo Fernandes},
url = {https://gmap.pucrs.br/dalvan/papers/2011/CR_ERAD_2011.pdf},
year = {2011},
date = {2011-03-01},
booktitle = {Escola Regional de Alto Desempenho (ERAD-RS)},
pages = {2},
publisher = {Sociedade Brasileira de Computação (SBC)},
address = {Porto Alegre, RS, BR},
abstract = {In recent years, the workstation and server market has gradually increased the number of cores and processors, bringing parallelism into their programming, which has increased the complexity of dealing with this kind of hardware. In this scenario, mechanisms are needed that can provide scalability and allow the exploitation of parallelism on these architectures, known as multi-core. It is not enough for such multiprocessor architectures to be available if they are not properly exploited. Debugging, race conditions, synchronization of threads or processes, and control of data access are examples of critical factors in programming these parallel environments. New ways of abstracting the complexity of dealing with these systems are being studied so that parallel programming becomes less complex for software developers. Parallel patterns have been the subject of ongoing studies aiming to standardize this kind of programming.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}