2015
@article{larcc:openebula_openstack:REABTIC:15,
title = {Understanding, Discussing and Analyzing the OpenNebula's and OpenStack's IaaS Management Layers},
author = {Demétrius Roveda and Adriano Vogel and Dalvan Griebler},
url = {http://larcc.setrem.com.br/wp-content/uploads/2017/04/ROVEDA_REABTIC_2015A.pdf},
doi = {10.5281/zenodo.59467},
year = {2015},
date = {2015-08-01},
journal = {Revista Eletrônica Argentina-Brasil de Tecnologias da Informação e da Comunicação (REABTIC)},
volume = {1},
number = {3},
pages = {15},
publisher = {SETREM},
address = {Três de Maio, Brazil},
abstract = {The OpenNebula and OpenStack tools have been used by large corporations and research centers to implement IaaS clouds. The management layer is an important element for users and administrators because it deals with resource monitoring, development support, orchestration, and integration with other cloud platforms and services. The goal of this paper is to discuss and analyze the differences in the management layer, pointing out advantages and disadvantages of the tools. The results demonstrated that OpenNebula is more restricted and focused on simplicity in almost all comparisons, while OpenStack is fragmented, complex, and robust.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@inproceedings{ADORNES:SEKE:15,
title = {A Unified MapReduce Domain-Specific Language for Distributed and Shared Memory Architectures},
author = {Daniel Adornes and Dalvan Griebler and Cleverson Ledur and Luiz G. Fernandes},
url = {http://dx.doi.org/10.18293/SEKE2015-204},
doi = {10.18293/SEKE2015-204},
year = {2015},
date = {2015-07-01},
booktitle = {The 27th International Conference on Software Engineering & Knowledge Engineering},
pages = {6},
publisher = {Knowledge Systems Institute Graduate School},
address = {Pittsburgh, USA},
abstract = {MapReduce is a suitable and efficient parallel programming pattern for big data analysis. In recent years, many frameworks/languages have implemented this pattern to achieve high performance in data mining applications, particularly for distributed memory architectures (e.g., clusters). Nevertheless, the processor industry is now able to offer powerful processing on single machines (e.g., multi-core). Thus, these applications may address parallelism at another architectural level. The target problems of this paper are code reuse and programming effort reduction, since current solutions do not provide a single interface to deal with these two architectural levels. Therefore, we propose a unified domain-specific language in conjunction with transformation rules for code generation for Hadoop and Phoenix++. We selected these frameworks as state-of-the-art MapReduce implementations for distributed and shared memory architectures, respectively. Our solution achieves a programming effort reduction from 41.84% up to 95.43% without significant performance losses (below the threshold of 3%) compared to Hadoop and Phoenix++.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{LEDUR:ERAD:15,
title = {Uma Linguagem Específica de Domínio com Geração de Código Paralelo para Visualização de Grandes Volumes de Dados},
author = {Cleverson Ledur and Dalvan Griebler and Luiz Gustavo Fernandes and Isabel Manssour},
url = {https://gmap.pucrs.br/dalvan/papers/2015/CR_ERAD_PG_2015.pdf},
year = {2015},
date = {2015-04-01},
booktitle = {Escola Regional de Alto Desempenho (ERAD-RS)},
pages = {139-140},
publisher = {Sociedade Brasileira de Computação (SBC)},
address = {Gramado, RS, BR},
abstract = {This paper presents an analysis of domain-specific languages for creating visualizations. It then proposes a new domain-specific language for generating visualizations of massive amounts of data, parallelizing not only the generation of and interaction with the visualization, but also the preprocessing of the raw data.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@article{larcc:security_IaaS_tools:REABTIC:15,
title = {Uma Avaliação Comparativa dos Mecanismos de Segurança nas Ferramentas OpenStack, OpenNebula e CloudStack},
author = {Demétrius Roveda and Adriano Vogel and Samuel Souza and Dalvan Griebler},
url = {http://larcc.setrem.com.br/wp-content/uploads/2017/04/ROVEDA_REABTIC_2015.pdf},
doi = {10.5281/zenodo.59478},
year = {2015},
date = {2015-03-01},
journal = {Revista Eletrônica Argentina-Brasil de Tecnologias da Informação e da Comunicação (REABTIC)},
volume = {1},
number = {4},
pages = {15},
publisher = {SETREM},
address = {Três de Maio, Brazil},
abstract = {The IaaS service model is gaining attention due to its importance to the cloud computing environment; it is responsible for simplifying access to and management of high-end processing and storage systems, besides being the base that allows the outsourcing of the upper layers, PaaS and SaaS. The IaaS cloud tools are responsible for controlling the virtual infrastructure as well as the environment security, which is an important characteristic for cloud applications, since the system can be integrated with public clouds through the Internet. In this paper, the goals are to evaluate and compare the security layer, from the administrator's point of view, of three open source IaaS tools: OpenStack, OpenNebula, and CloudStack. Considering the security layer from Dukaric's taxonomy, the results showed that all the tools have an equivalent security level; however, there is evidence that not all the security features found in the tools fit the taxonomy description.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@inproceedings{hiperfcloud:nas_bech_openstack_opennebula:ERAD:15,
title = {Em Direção à Comparação do Desempenho das Aplicações Paralelas nas Ferramentas OpenStack e OpenNebula},
author = {Carlos A. F. Maron and Dalvan Griebler and Adriano Vogel and Claudio Schepke},
url = {http://larcc.setrem.com.br/wp-content/uploads/2017/03/MARON_ERAD_2015.pdf},
year = {2015},
date = {2015-03-01},
booktitle = {15th Escola Regional de Alto Desempenho do Estado do Rio Grande do Sul (ERAD/RS)},
pages = {205-208},
publisher = {Sociedade Brasileira de Computação},
address = {Gramado, RS, Brazil},
abstract = {Cloud computing infrastructure has become an alternative for running high-performance applications. However, performance can be degraded by the virtualization layer and by the action of cloud management tools. The goal of this work was to compare the performance of applications on OpenStack and OpenNebula. The results showed a significant difference between the tools, in favor of OpenNebula.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
2014
@inproceedings{hiperfcloud:isolation_bechs_openstack_opennebula:ERRC:14,
title = {Avaliação e Comparação do Desempenho das Ferramentas OpenStack e OpenNebula},
author = {Carlos A. F. Maron and Dalvan Griebler and Adriano Vogel and Claudio Schepke},
url = {http://larcc.setrem.com.br/wp-content/uploads/2017/03/MARON_ERRC_2014.pdf},
year = {2014},
date = {2014-11-01},
booktitle = {12th Escola Regional de Redes de Computadores (ERRC)},
pages = {1-5},
publisher = {Sociedade Brasileira de Computação},
address = {Canoas},
abstract = {Cloud computing is increasingly present in corporate infrastructures. As a consequence, several tools are being created to assist in the administration of cloud resources. The goal of this work is to evaluate the impact that the OpenStack and OpenNebula tools (deployed in a private cloud environment) can have on the performance of the memory, storage, network, and processor subsystems. The results obtained show that OpenStack performs significantly better in the storage tests, while OpenNebula performed better in the remaining tests.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{GRIEBLER:SEKE:14,
title = {Performance and Usability Evaluation of a Pattern-Oriented Parallel Programming Interface for Multi-Core Architectures},
author = {Dalvan Griebler and Daniel Adornes and Luiz G. Fernandes},
url = {https://gmap.pucrs.br/dalvan/papers/2014/CR_SEKE_2014.pdf},
year = {2014},
date = {2014-07-01},
booktitle = {The 26th International Conference on Software Engineering & Knowledge Engineering},
pages = {25-30},
publisher = {Knowledge Systems Institute Graduate School},
address = {Vancouver, Canada},
abstract = {Multi-core architectures have increased the power of parallelism by coupling many cores in a single chip. This makes it even more complex for developers to exploit the available parallelism in order to provide high-performance, scalable programs. To address these challenges, we propose DSL-POPP (Domain-Specific Language for Pattern-Oriented Parallel Programming), which embeds the pattern-based approach in the programming interface as an alternative to reduce the effort of parallel software development and achieve good performance in some applications. In this paper, the objective is to evaluate the usability and performance of the master/slave pattern and compare it to the Pthreads library. Moreover, experiments have shown that the master/slave interface of DSL-POPP reduces up to 50% of the programming effort, without significantly affecting performance.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{larcc:evaluation_openstack_opnnebula:ERAD:14,
title = {Comparação das Ferramentas OpenNebula e OpenStack em Nuvem Composta de Estações de Trabalho},
author = {Carlos A. F. Maron and Dalvan Griebler and Claudio Schepke},
url = {http://larcc.setrem.com.br/wp-content/uploads/2017/03/MARON_ERAD_2014.pdf},
year = {2014},
date = {2014-03-01},
booktitle = {14th Escola Regional de Alto Desempenho do Estado do Rio Grande do Sul (ERAD/RS)},
pages = {173-176},
publisher = {Sociedade Brasileira de Computação},
address = {Alegrete, RS, Brazil},
abstract = {Cloud computing tools for the IaaS service model, such as OpenNebula and OpenStack, are usually deployed in large data centers. The goal of this work is to investigate and compare their behavior in a more restricted environment, such as one composed of workstations. The results showed that OpenNebula has the advantage in the main characteristics evaluated.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{gmap:RUI:PDP:14,
title = {Evaluating the Impact of Transactional Characteristics on the Performance of Transactional Memory Applications},
author = {Fernando Rui and Márcio Castro and Dalvan Griebler and Luiz Gustavo Fernandes},
url = {https://doi.org/10.1109/PDP.2014.57},
doi = {10.1109/PDP.2014.57},
year = {2014},
date = {2014-02-01},
booktitle = {22nd Euromicro International Conference on Parallel, Distributed, and Network-Based Processing},
pages = {93-97},
publisher = {IEEE},
address = {Torino, Italy},
series = {PDP'14},
abstract = {Transactional Memory (TM) is reputed by many researchers to be a promising solution to ease parallel programming on multicore processors. This model provides the scalability of fine-grained locking while avoiding common issues of traditional mechanisms, such as deadlocks. During these almost twenty years of research, several TM systems and benchmarks have been proposed. However, TM is not yet widely adopted by the scientific community to develop parallel applications due to unanswered questions in the literature, such as "how to identify if a parallel application can exploit TM to achieve better performance?" or "what are the reasons for the poor performance of some TM applications?". In this work, we contribute to answering those questions through a comparative evaluation of a set of TM applications on four different state-of-the-art TM systems. Moreover, we identify some of the most important TM characteristics that directly impact the performance of TM applications. Our results can be useful to identify opportunities for optimizations.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
2013
@inproceedings{larcc:iaas_survey:ERRC:13,
title = {Computação em Nuvem: Análise Comparativa de Ferramentas Open Source para IaaS},
author = {Bruna Thomé and Eduardo Hentges and Dalvan Griebler},
url = {http://larcc.setrem.com.br/wp-content/uploads/2017/03/THOME_ERRC_2013.pdf},
year = {2013},
date = {2013-11-01},
booktitle = {11th Escola Regional de Redes de Computadores (ERRC)},
pages = {4},
publisher = {Sociedade Brasileira de Computação},
address = {Porto Alegre, RS, Brazil},
abstract = {This paper aims to study, present, and compare the main open source cloud computing tools. The concept of cloud computing is increasingly present in computer networks. The difficulty lies not only in deploying a cloud, but also in choosing the most appropriate tool. Thus, this work studied the following tools: Eucalyptus, OpenNebula, OpenQRM, OpenStack, CloudStack, Ubuntu Enterprise Cloud, Abiquo, Convirt, Apache Virtual Lab, and Nimbus. For each of them, the characteristics, functionalities, and modes of operation were considered, highlighting the most suitable scenario for each one.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{GRIEBLER:SBLP:13,
title = {Towards a Domain-Specific Language for Patterns-Oriented Parallel Programming},
author = {Dalvan Griebler and Luiz G. Fernandes},
url = {http://dx.doi.org/10.1007/978-3-642-40922-6_8},
doi = {10.1007/978-3-642-40922-6_8},
year = {2013},
date = {2013-10-01},
booktitle = {Programming Languages - 17th Brazilian Symposium - SBLP},
volume = {8129},
pages = {105-119},
publisher = {Springer Berlin Heidelberg},
address = {Brasilia, Brazil},
series = {Lecture Notes in Computer Science},
abstract = {Pattern-oriented programming has been used in parallel code development for many years now. During this time, several tools (mainly frameworks and libraries) proposed the use of patterns based on programming primitives or templates. The implementation of patterns using those tools usually requires human expertise to correctly set up communication/synchronization among processes. In this work, we propose the use of a Domain Specific Language to create pattern-oriented parallel programs (DSL-POPP). This approach has the advantage of offering a higher programming abstraction level in which communication/synchronization among processes is hidden from programmers. We compensate the reduction in programming flexibility offering the possibility to use combined and/or nested parallel patterns (i.e., parallelism in levels), allowing the design of more complex parallel applications. We conclude this work presenting an experiment in which we develop a parallel application exploiting combined and nested parallel patterns in order to demonstrate the main properties of DSL-POPP.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{GRIEBLER:ERAD:13,
title = {DSL-POPP: Linguagem Específica de Domínio para Programação Paralela Orientada a Padrões},
author = {Dalvan Griebler and Luiz Gustavo Fernandes},
url = {https://gmap.pucrs.br/dalvan/papers/2013/CR_ERAD_2013.pdf},
year = {2013},
date = {2013-03-01},
booktitle = {Escola Regional de Alto Desempenho (ERAD-RS)},
pages = {2},
publisher = {Sociedade Brasileira de Computação (SBC)},
address = {Porto Alegre, RS, BR},
abstract = {This work proposes to guide the programmer toward developing programs oriented to parallel patterns which, implemented in the interface of a domain-specific language, help reduce programming effort without compromising application performance. Experimental results with the master/slave pattern showed good performance in the parallelized algorithms.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
2012
@mastersthesis{GRIEBLER:DM:12,
title = {Proposta de uma Linguagem Específica de Domínio de Programação Paralela Orientada a Padrões Paralelos: Um Estudo de Caso Baseado no Padrão Mestre/Escravo para Arquiteturas Multi-Core},
author = {Dalvan Griebler},
url = {http://tede.pucrs.br/tde_busca/arquivo.php?codArquivo=4265},
year = {2012},
date = {2012-03-01},
address = {Porto Alegre, Brazil},
school = {Faculdade de Informática - PPGCC - PUCRS},
abstract = {This work proposes a Domain-Specific Language for Parallel Patterns Oriented Parallel Programming (LED-PPOPP). Its main purpose is to decrease the amount of effort necessary to develop parallel programs, offering a way to guide developers through patterns which are implemented by the language interface. The idea is to exploit this approach while avoiding large performance losses in the applications. Patterns are specialized, previously studied solutions used to solve a frequent problem. Thus, parallel patterns offer a higher abstraction level to organize the algorithms in the exploitation of parallelism, and they can be easily learned by inexperienced programmers and software engineers. This work carried out a case study based on the Master/Slave pattern, focusing on the parallelization of algorithms for multi-core architectures. The implementation was validated through experiments to evaluate the programming effort to write code in LED-PPOPP and the performance achieved by the automatically generated parallel code. The obtained results let us conclude that a significant reduction in parallel programming effort occurred in comparison to the use of the Pthreads library. Additionally, the final performance of the parallelized algorithms confirms that parallelization with LED-PPOPP does not bring significant losses relative to parallelization with OpenMP in most of the experiments carried out.},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
2011
@inproceedings{RAEDER:WSCAD:11,
title = {Performance Prediction of Parallel Applications with Parallel Patterns Using Stochastic Methods},
author = {Mateus Raeder and Dalvan Griebler and Lucas Baldo and Luiz G. Fernandes},
url = {https://doi.org/10.1109/WSCAD-SSC.2011.18},
doi = {10.1109/WSCAD-SSC.2011.18},
year = {2011},
date = {2011-10-01},
booktitle = {Sistemas Computacionais (WSCAD-SSC), XII Simpósio em Sistemas Computacionais de Alto Desempenho},
pages = {1-13},
publisher = {IEEE},
address = {Espírito Santo, Brasil},
abstract = {One of the main problems in the high performance computing area is the difficulty of defining the best strategy to parallelize an application. In this context, the use of analytical methods to evaluate the performance behavior of such applications is an interesting alternative and can help to identify the best implementation strategies. In this work, the Stochastic Automata Network formalism is adopted to model and evaluate the performance of parallel applications, especially those developed for cluster-of-workstations platforms. The methodology is based on the construction of generic models describing classical parallel implementation schemes such as Master/Slave, Parallel Phases, Pipeline, and Divide and Conquer. Those models are adapted to represent real applications through the definition of input parameter values. Finally, aiming to verify the accuracy of the adopted technique, comparisons with results from real application implementations are presented.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
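The entry above predicts performance from stochastic models. A far simpler relative of that idea (invented here for illustration; the paper uses the much richer Stochastic Automata Network formalism, and the states and probabilities below are made up) is computing the steady-state distribution of a small Markov chain by power iteration, e.g. to estimate the long-run utilization of a slave process:

```python
# Steady-state of a tiny discrete-time Markov chain by power iteration.
# Illustrative sketch only; states and transition probabilities are invented.

def steady_state(P, iters=1000):
    # Start from the uniform distribution and repeatedly multiply by the
    # transition matrix P until the distribution stops changing.
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# States of a hypothetical slave: 0 = idle, 1 = busy.
P = [[0.3, 0.7],   # from idle: stay idle / become busy
     [0.4, 0.6]]   # from busy: become idle / stay busy
pi = steady_state(P)
# pi[1] is the long-run probability the slave is busy (its utilization).
```

For a two-state chain this matches the closed form pi_busy = p / (p + q) with p = P(idle to busy) and q = P(busy to idle).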
@inproceedings{GRIEBLER:ERAD:11,
title = {Padrões e Frameworks de Programação Paralela em Ambientes Multi-Core},
author = {Dalvan Griebler and Mateus Raeder and Luiz Gustavo Fernandes},
url = {https://gmap.pucrs.br/dalvan/papers/2011/CR_ERAD_2011.pdf},
year = {2011},
date = {2011-03-01},
booktitle = {Escola Regional de Alto Desempenho (ERAD-RS)},
pages = {2},
publisher = {Sociedade Brasileira de Computação (SBC)},
address = {Porto Alegre, RS, BR},
abstract = {In recent years, the workstation and server market has been gradually increasing the number of cores and processors, bringing parallelism into their programming and thereby increasing the complexity of dealing with this kind of hardware. In this scenario, mechanisms that provide scalability and allow the exploitation of parallelism on these architectures, known as multi-core, need to be available. It is not enough for such multiprocessor architectures to exist if they are not properly exploited. Debugging, race conditions, thread or process synchronization, and data access control are examples of critical factors in programming these parallel environments. New ways of abstracting the complexity of dealing with these systems are being studied so that parallel programming becomes less complex for software developers. Parallel patterns have been the subject of constant study with the aim of standardizing this programming.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
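The abstract above names race conditions and thread synchronization as critical factors in parallel programming. A minimal sketch of the standard remedy (the shared counter and thread counts are invented for illustration) is guarding a shared update with a lock so concurrent increments cannot interleave:

```python
# Guarding a shared counter with a lock; illustrative sketch only.
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:  # without the lock, read-modify-write could interleave
            counter += 1

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is deterministically 4 * 10000 = 40000 thanks to the lock
```

Without the lock, the final value could fall anywhere below 40000 depending on thread interleaving, which is exactly the kind of hazard the abstract argues higher-level patterns should hide from developers.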