2017
|
| Araujo, Gabriell; Ledur, Cleverson; Griebler, Dalvan; Fernandes, Luiz G. Exploração do Paralelismo em Algoritmos de Mineração de Dados com Pthreads, OpenMP, FastFlow, TBB e Phoenix++ Inproceedings In: Escola Regional de Alto Desempenho (ERAD-RS), pp. 4, Sociedade Brasileira de Computação (SBC), Ijuí, RS, BR, 2017. @inproceedings{ARAUJO:ERAD:17,
title = {Exploração do Paralelismo em Algoritmos de Mineração de Dados com Pthreads, OpenMP, FastFlow, TBB e Phoenix++},
author = {Gabriell Araujo and Cleverson Ledur and Dalvan Griebler and Luiz G. Fernandes},
url = {https://gmap.pucrs.br/dalvan/papers/2017/CR_ERAD_IC_Araujo_2017.pdf},
year = {2017},
date = {2017-04-01},
booktitle = {Escola Regional de Alto Desempenho (ERAD-RS)},
pages = {4},
publisher = {Sociedade Brasileira de Computação (SBC)},
address = {Ijuí, RS, BR},
abstract = {Com o objetivo de introduzir algoritmos de mineração de dados paralelos na DSL GMaVis, foram paralelizadas quatro aplicações com cinco interfaces de programação paralela. Este trabalho apresenta a comparação destas interfaces, a fim de avaliar qual oferece maior desempenho e produtividade de código. Os resultados demonstram que é possível atingir menor número de linhas de código e bom desempenho com OpenMP e FastFlow.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Aiming to introduce parallel data mining algorithms into the GMaVis DSL, four applications were parallelized with five parallel programming interfaces. This work compares these interfaces in order to assess which one offers the best performance and coding productivity. The results show that OpenMP and FastFlow achieve a smaller number of lines of code along with good performance. |
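As a concrete illustration of the loop-level style compared in this entry, the fragment below sketches how the hot loop of a K-means-like data mining kernel can be parallelized with OpenMP. It is a generic, hypothetical sketch (the function and variable names are ours, not the paper's), assuming the usual point-to-nearest-centroid assignment step; the other interfaces compared above express the same computation through their own runtime abstractions rather than a pragma.

#include <omp.h>
#include <vector>
#include <limits>

// Hypothetical kernel: assign each point to its nearest centroid.
// Each iteration is independent, so the outer loop parallelizes safely.
void assign_points(const std::vector<std::vector<double>> &points,
                   const std::vector<std::vector<double>> &centroids,
                   std::vector<int> &membership) {
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < (long)points.size(); ++i) {
        double best = std::numeric_limits<double>::max();
        int best_c = 0;
        for (std::size_t c = 0; c < centroids.size(); ++c) {
            double dist = 0.0;
            for (std::size_t d = 0; d < points[i].size(); ++d) {
                const double diff = points[i][d] - centroids[c][d];
                dist += diff * diff;              // squared Euclidean distance
            }
            if (dist < best) { best = dist; best_c = (int)c; }
        }
        membership[i] = best_c;                   // independent write, no race
    }
}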
| Baum, Willian; Maron, Carlos A. F.; Griebler, Dalvan; Schepke, Claudio Caracterização do Desempenho de Aplicações Pipeline em Instâncias KVM e LXC de uma Nuvem CloudStack Inproceedings In: 17th Escola Regional de Alto Desempenho do Estado do Rio Grande do Sul (ERAD/RS), pp. 267-270, Sociedade Brasileira de Computação, Ijuí, RS, Brazil, 2017. @inproceedings{hiperfcloud:parsec_pipeline:ERAD:17,
title = {Caracterização do Desempenho de Aplicações Pipeline em Instâncias KVM e LXC de uma Nuvem CloudStack},
author = {Willian Baum and Carlos A. F. Maron and Dalvan Griebler and Claudio Schepke},
url = {http://larcc.setrem.com.br/wp-content/uploads/2017/03/BAUM_ERAD_2017.pdf},
year = {2017},
date = {2017-04-01},
booktitle = {17th Escola Regional de Alto Desempenho do Estado do Rio Grande do Sul (ERAD/RS)},
pages = {267-270},
publisher = {Sociedade Brasileira de Computação},
address = {Ijuí, RS, Brazil},
abstract = {Nuvens computacionais são uma alternativa para a computação de alto desempenho. Este artigo avalia o desempenho de aplicações estruturadas com o padrão pipeline em uma implantação de nuvem CloudStack com instâncias do tipo LXC e KVM. Foram testadas as aplicações Ferret e Dedup da suíte PARSEC, bem como constatado uma diferença significativa no Dedup. Na média geral, para esta aplicação a instância LXC é 40,19% melhor que a KVM.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Computational clouds are an alternative for high-performance computing. This paper evaluates the performance of applications structured with the pipeline pattern on a CloudStack cloud deployment with LXC and KVM instances. The Ferret and Dedup applications from the PARSEC suite were tested, and a significant difference was observed for Dedup: on average, the LXC instance is 40.19% better than KVM for this application. |
| Löff, Júnior; Griebler, Dalvan; Ledur, Cleverson; Fernandes, Luiz G. Explorando a Flexibilidade e o Desempenho da Biblioteca FastFlow com o Padrão Paralelo Farm Inproceedings In: Escola Regional de Alto Desempenho (ERAD-RS), pp. 4, Sociedade Brasileira de Computação (SBC), Ijuí, RS, BR, 2017. @inproceedings{LOFF:ERAD:17,
title = {Explorando a Flexibilidade e o Desempenho da Biblioteca FastFlow com o Padrão Paralelo Farm},
author = {Júnior Löff and Dalvan Griebler and Cleverson Ledur and Luiz G. Fernandes},
url = {https://gmap.pucrs.br/dalvan/papers/2017/CR_ERAD_IC_Loff_2017.pdf},
year = {2017},
date = {2017-04-01},
booktitle = {Escola Regional de Alto Desempenho (ERAD-RS)},
pages = {4},
publisher = {Sociedade Brasileira de Computação (SBC)},
address = {Ijuí, RS, BR},
abstract = {O paralelismo é uma tarefa para especialistas, onde o desafio é utilizar abstrações que ofereçam a flexibilidade e expressividade necessária para atingir o melhor desempenho. Este artigo visa explorar variações na implementação do padrão Farm utilizando a biblioteca FastFlow nos algoritmos K-means (domínio da mineração de dados) e Mandelbrot Set (domínio da matemática). Concluímos que o padrão Farm oferece boa flexibilidade e bom desempenho.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Parallelism is a task for specialists, where the challenge is to use abstractions that offer the flexibility and expressiveness needed to reach the best performance. This paper explores variations in the implementation of the Farm pattern using the FastFlow library in the K-means (data mining domain) and Mandelbrot Set (mathematics domain) algorithms. We conclude that the Farm pattern offers good flexibility and good performance. |
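For readers unfamiliar with the pattern varied in this entry, the sketch below shows the overall shape of a FastFlow farm in the classic emitter/worker/collector style, assuming the pre-3.0 ff_farm<> interface documented in the FastFlow tutorials. It is a minimal, generic illustration with a placeholder computation, not code from the paper, which explores variations of exactly this structure.

#include <ff/farm.hpp>   // classic FastFlow farm building blocks
#include <vector>
using namespace ff;

// Emitter: produces the stream of tasks (plain integers here).
struct Emitter : ff_node {
    void *svc(void *) {
        for (long i = 1; i <= 1000; ++i) ff_send_out((void *)i);
        return EOS;                       // close the stream
    }
};

// Worker: replicated stage; each task is processed independently.
struct Worker : ff_node {
    void *svc(void *task) {
        volatile long acc = 0;            // placeholder computation
        for (long k = 0; k < (long)task; ++k) acc += k;
        return task;                      // forward the item downstream
    }
};

// Collector: gathers results coming from all workers.
struct Collector : ff_node {
    void *svc(void *) { return GO_ON; }
};

int main() {
    std::vector<ff_node *> workers;
    for (int i = 0; i < 4; ++i) workers.push_back(new Worker);
    ff_farm<> farm;
    farm.add_emitter(new Emitter);
    farm.add_workers(workers);
    farm.add_collector(new Collector);
    return farm.run_and_wait_end();       // blocks until EOS reaches the collector
}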
| Vogel, Adriano; Griebler, Dalvan; Fernandes, Luiz Gustavo Proposta de Implementação de Grau de Paralelismo Adaptativo em uma DSL para Paralelismo de Stream Inproceedings In: Escola Regional de Alto Desempenho (ERAD-RS), pp. 4, Sociedade Brasileira de Computação (SBC), Ijuí, RS, BR, 2017. @inproceedings{VOGEL:ERAD:17,
title = {Proposta de Implementação de Grau de Paralelismo Adaptativo em uma DSL para Paralelismo de Stream},
author = {Adriano Vogel and Dalvan Griebler and Luiz Gustavo Fernandes},
url = {https://gmap.pucrs.br/dalvan/papers/2017/CR_ERAD_PG_Vogel_2017.pdf},
year = {2017},
date = {2017-04-01},
booktitle = {Escola Regional de Alto Desempenho (ERAD-RS)},
pages = {4},
publisher = {Sociedade Brasileira de Computação (SBC)},
address = {Ijuí, RS, BR},
abstract = {A classe de aplicações de stream possuem características únicas, como variação nas entradas/saídas e execuções por períodos indefinidos. Este paradigma é utilizado com intuito de diminuir os tempos de execução e aumentar a vazão das aplicações. Nesse estudo é proposto o suporte adaptativo do grau de paralelismo de stream na DSL (Domain-Specific Language) SPar.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
The class of stream applications has unique characteristics, such as variation in inputs/outputs and executions that run for indefinite periods. This paradigm is used to reduce execution times and increase application throughput. This study proposes adaptive support for the degree of stream parallelism in the SPar DSL (Domain-Specific Language). |
| Maliszewski, Anderson M.; Vogel, Adriano; Griebler, Dalvan; Schepke, Claudio Desempenho das Operações de Criar e Deletar Instâncias KVM Simultâneas em Nuvens CloudStack e OpenStack Inproceedings In: 17th Escola Regional de Alto Desempenho do Estado do Rio Grande do Sul (ERAD/RS), pp. 283-286, Sociedade Brasileira de Computação, Ijuí, RS, Brazil, 2017. @inproceedings{hiperfcloud:management_operations:ERAD:17,
title = {Desempenho das Operações de Criar e Deletar Instâncias KVM Simultâneas em Nuvens CloudStack e OpenStack},
author = {Anderson M. Maliszewski and Adriano Vogel and Dalvan Griebler and Claudio Schepke},
url = {http://larcc.setrem.com.br/wp-content/uploads/2017/03/MALISZEWSKI_ERAD_2017.pdf},
year = {2017},
date = {2017-04-01},
booktitle = {17th Escola Regional de Alto Desempenho do Estado do Rio Grande do Sul (ERAD/RS)},
pages = {283-286},
publisher = {Sociedade Brasileira de Computação},
address = {Ijuí, RS, Brazil},
abstract = {Plataformas de gerenciamento IaaS como OpenStack e CloudStack, são implantadas para criação de nuvens privadas. O desempenho é importante pois impacta no tempo de disponibilização de recursos. Este artigo avalia o desempenho do gerenciamento das plataformas. Os resultados mostram uma diferença média de 66,3% na criação de instâncias. Na exclusão das instâncias, houve a diferença de 28,6%, sendo ambos resultados favoráveis ao OpenStack},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
IaaS management platforms such as OpenStack and CloudStack are deployed to create private clouds. Performance is important because it impacts the time needed to make resources available. This paper evaluates the management performance of these platforms. The results show an average difference of 66.3% for instance creation and of 28.6% for instance deletion, both results in favor of OpenStack. |
| Filho, Renato B. H.; Griebler, Dalvan; Ledur, Cleverson; Fernandes, Luiz G. Avaliando a Produtividade e o Desempenho da DSL SPar em uma Aplicação de Detecção de Pistas Inproceedings In: Escola Regional de Alto Desempenho (ERAD-RS), pp. 4, Sociedade Brasileira de Computação (SBC), Ijuí, RS, BR, 2017. @inproceedings{FILHO:ERAD:17,
title = {Avaliando a Produtividade e o Desempenho da DSL SPar em uma Aplicação de Detecção de Pistas},
author = {Renato B. H. Filho and Dalvan Griebler and Cleverson Ledur and Luiz G. Fernandes},
url = {https://gmap.pucrs.br/dalvan/papers/2017/CR_ERAD_IC_Hoffmann.pdf},
year = {2017},
date = {2017-04-01},
booktitle = {Escola Regional de Alto Desempenho (ERAD-RS)},
pages = {4},
publisher = {Sociedade Brasileira de Computação (SBC)},
address = {Ijuí, RS, BR},
abstract = {A linguagem de domínio específico SPar, embarcada na linguagem C++, fornece através de anotações uma alternativa para explorar o paralelismo de stream em arquiteturas multi-núcleo. Neste artigo, o objetivo é demonstrar indicadores de desempenho e produtividade em uma aplicação de detecção de pistas. Os resultados comprovaram que a SPar apresentou maior produtividade e bom desempenho.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
SPar, a domain-specific language embedded in C++, provides through annotations an alternative for exploiting stream parallelism on multi-core architectures. In this paper, the goal is to present performance and productivity indicators for a lane detection application. The results confirm that SPar delivered higher productivity and good performance. |
| Mesquita, Cassiano E.; Ledur, Cleverson; Griebler, Dalvan; Fernandes, Luiz G. Proposta de uma Plataforma para Experimentos de Software em Programação Paralela Inproceedings In: Escola Regional de Alto Desempenho (ERAD-RS), pp. 4, Sociedade Brasileira de Computação (SBC), Ijuí, RS, BR, 2017. @inproceedings{MESQUITA:ERAD:17,
title = {Proposta de uma Plataforma para Experimentos de Software em Programação Paralela},
author = {Cassiano E. Mesquita and Cleverson Ledur and Dalvan Griebler and Luiz G. Fernandes},
url = {https://gmap.pucrs.br/dalvan/papers/2017/CR_ERAD_PG_Mesquita_2017.pdf},
year = {2017},
date = {2017-04-01},
booktitle = {Escola Regional de Alto Desempenho (ERAD-RS)},
pages = {4},
publisher = {Sociedade Brasileira de Computação (SBC)},
address = {Ijuí, RS, BR},
abstract = {Este artigo propõe uma plataforma web para simplificar a avaliação de interfaces de programação paralela. A ideia central é identificar as dificuldades enfrentadas por potenciais desenvolvedores a fim de propor melhorias que irão reduzir o esforço na paralelização de aplicações. A plataforma prevista é composta de uma interface web, implementada com linguagens PHP e Javascript.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
This paper proposes a web platform to simplify the evaluation of parallel programming interfaces. The central idea is to identify the difficulties faced by prospective developers in order to propose improvements that will reduce the effort of parallelizing applications. The envisioned platform consists of a web interface implemented with the PHP and Javascript languages. |
 | Griebler, Dalvan; Danelutto, Marco; Torquati, Massimo; Fernandes, Luiz Gustavo SPar: A DSL for High-Level and Productive Stream Parallelism Journal Article doi In: Parallel Processing Letters, vol. 27, no. 01, pp. 1740005, 2017. @article{GRIEBLER:PPL:17,
title = {SPar: A DSL for High-Level and Productive Stream Parallelism},
author = {Dalvan Griebler and Marco Danelutto and Massimo Torquati and Luiz Gustavo Fernandes},
url = {http://dx.doi.org/10.1142/S0129626417400059},
doi = {10.1142/S0129626417400059},
year = {2017},
date = {2017-03-01},
urldate = {2017-03-01},
journal = {Parallel Processing Letters},
volume = {27},
number = {01},
pages = {1740005},
publisher = {World Scientific},
abstract = {This paper introduces SPar, an internal C++ Domain-Specific Language (DSL) that supports the development of classic stream parallel applications. The DSL uses standard C++ attributes to introduce annotations tagging the notable components of stream parallel applications: stream sources and stream processing stages. A set of tools process SPar code (C++ annotated code using the SPar attributes) to generate FastFlow C++ code that exploits the stream parallelism denoted by SPar annotations while targeting shared memory multi-core architectures. We outline the main SPar features along with the main implementation techniques and tools. Also, we show the results of experiments assessing the feasibility of the entire approach as well as SPar’s performance and expressiveness.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
This paper introduces SPar, an internal C++ Domain-Specific Language (DSL) that supports the development of classic stream parallel applications. The DSL uses standard C++ attributes to introduce annotations tagging the notable components of stream parallel applications: stream sources and stream processing stages. A set of tools processes SPar code (C++ annotated code using the SPar attributes) to generate FastFlow C++ code that exploits the stream parallelism denoted by SPar annotations while targeting shared memory multi-core architectures. We outline the main SPar features along with the main implementation techniques and tools. Also, we show the results of experiments assessing the feasibility of the entire approach as well as SPar’s performance and expressiveness. |
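To make the annotation style concrete, here is a small, hypothetical SPar-annotated loop written in the attribute vocabulary the SPar papers describe (spar::ToStream, spar::Stage, spar::Input, spar::Output, spar::Replicate). A standard C++ compiler simply ignores (or warns about) the unknown attributes; the SPar toolchain is what rewrites the annotated region into FastFlow code. The example and its uppercasing logic are ours, not taken from the article.

#include <cctype>
#include <iostream>
#include <string>

int main() {
    std::string line;
    // Stream region: each iteration produces one stream item (a text line).
    [[spar::ToStream, spar::Input(line)]]
    while (std::getline(std::cin, line)) {
        [[spar::Stage, spar::Input(line), spar::Output(line), spar::Replicate(4)]]
        {   // replicated middle stage: transform each item independently
            for (char &c : line)
                c = (char)std::toupper((unsigned char)c);
        }
        [[spar::Stage, spar::Input(line)]]
        {   // final stage: consume the results
            std::cout << line << '\n';
        }
    }
    return 0;
}

When processed by the SPar compiler, the replicated stage would be mapped onto FastFlow workers on a shared-memory multi-core machine, following the generation flow described in the abstract above.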
| Vogel, Adriano; Griebler, Dalvan; Schepke, Claudio; Fernandes, Luiz Gustavo An Intra-Cloud Networking Performance Evaluation on CloudStack Environment Inproceedings doi In: 25th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP), pp. 5, IEEE, St. Petersburg, Russia, 2017. @inproceedings{larcc:intra-cloud_networking_cloudstack:PDP:17,
title = {An Intra-Cloud Networking Performance Evaluation on CloudStack Environment},
author = {Adriano Vogel and Dalvan Griebler and Claudio Schepke and Luiz Gustavo Fernandes},
url = {http://ieeexplore.ieee.org/document/7912689/},
doi = {10.1109/PDP.2017.40},
year = {2017},
date = {2017-03-01},
booktitle = {25th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP)},
pages = {5},
publisher = {IEEE},
address = {St. Petersburg, Russia},
series = {PDP'17},
abstract = {Infrastructure-as-a-Service (IaaS) is a cloud on-demand commodity built on top of virtualization technologies and managed by IaaS tools. In this scenario, performance is a relevant matter because a set of aspects may impact and increase the system overhead.Specific on the network, the use of virtualized capabilities may cause performance degradation (eg.,latency, throughput). The goal of this paper is to contribute to networking performance evaluation, providing new insights for private IaaS clouds. To achieve our goal, we deploy CloudStack environments and conduct experiments with different configurations and techniques. The research findings demonstrate that KVM-based cloud instances have small network performance degradation regarding throughput (about 0.2% for coarse-grained and 6.8% for fine-grained messages) while container-based instances have even better results. On the other hand, the KVM instances present worst latency (about 12.4% on coarse-grained and two times more on fine-grained messages w.r.t. native environment) and better in container-based instances, where the performance results are close to the native environment. Furthermore, we demonstrate a performance optimization of applications running on KVM.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Infrastructure-as-a-Service (IaaS) is a cloud on-demand commodity built on top of virtualization technologies and managed by IaaS tools. In this scenario, performance is a relevant matter because a set of aspects may impact and increase the system overhead. Specifically on the network, the use of virtualized capabilities may cause performance degradation (e.g., latency, throughput). The goal of this paper is to contribute to networking performance evaluation, providing new insights for private IaaS clouds. To achieve our goal, we deploy CloudStack environments and conduct experiments with different configurations and techniques. The research findings demonstrate that KVM-based cloud instances have small network performance degradation regarding throughput (about 0.2% for coarse-grained and 6.8% for fine-grained messages), while container-based instances have even better results. On the other hand, the KVM instances present worse latency (about 12.4% on coarse-grained and twice as much on fine-grained messages w.r.t. the native environment), whereas container-based instances perform better, with results close to the native environment. Furthermore, we demonstrate a performance optimization of applications running on KVM. |
2016
|
 | Maron, Carlos A. F.; Griebler, Dalvan; Schepke, Claudio; Fernandes, Luiz Gustavo Desempenho de OpenStack e OpenNebula em Estações de Trabalho: Uma Avaliação com Microbenchmarks e NPB Journal Article doi In: Revista Eletrônica Argentina-Brasil de Tecnologias da Informação e da Comunicação (REABTIC), vol. 1, no. 6, pp. 15, 2016. @article{larcc:nas_workstations:REABTIC:16,
title = {Desempenho de OpenStack e OpenNebula em Estações de Trabalho: Uma Avaliação com Microbenchmarks e NPB},
author = {Carlos A. F. Maron and Dalvan Griebler and Claudio Schepke and Luiz Gustavo Fernandes},
url = {http://larcc.setrem.com.br/wp-content/uploads/2017/03/MARON_REABTIC_2016.pdf},
doi = {10.5281/zenodo.345597},
year = {2016},
date = {2016-12-01},
journal = {Revista Eletrônica Argentina-Brasil de Tecnologias da Informação e da Comunicação (REABTIC)},
volume = {1},
number = {6},
pages = {15},
publisher = {SETREM},
address = {Três de Maio, Brazil},
abstract = {IaaS (Infrastructure as a Service) clouds provide on-demand computing resources (i.e, memory, networking, storage and processing unit) for running applications. Studies that evaluate the IaaS cloud performance are limited to the virtualization layer and ignore the impact of management tools analysis. In contrast, our research investigates the impact of them in order to identify if there are influences or differences between OpenStack and OpenNebula. We used intensive workloads (microbenchmarks) and scientific parallel applications. Statistically, the results demonstrated that OpenNebula was 11.07% better using microbenchmarks and 8.41% with scientific parallel applications.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
IaaS (Infrastructure as a Service) clouds provide on-demand computing resources (i.e., memory, networking, storage, and processing units) for running applications. Studies that evaluate IaaS cloud performance are limited to the virtualization layer and ignore the impact of the management tools. In contrast, our research investigates their impact in order to identify whether there are influences or differences between OpenStack and OpenNebula. We used intensive workloads (microbenchmarks) and scientific parallel applications. Statistically, the results demonstrated that OpenNebula was 11.07% better with the microbenchmarks and 8.41% better with the scientific parallel applications. |
| Maron, Carlos A. F.; Vogel, Adriano; Benedetti, Vera L. L.; Shubeita, Fauzi; Schepke, Claudio; Griebler, Dalvan Panorama Geral e Resultados do Projeto HiPerfCloud Inproceedings In: 15th Jornada de Pesquisa SETREM, pp. 4, SETREM, Três de Maio, Brazil, 2016. @inproceedings{larcc:hiperfcloud:JP:16,
title = {Panorama Geral e Resultados do Projeto HiPerfCloud},
author = {Carlos A. F. Maron and Adriano Vogel and Vera L. L. Benedetti and Fauzi Shubeita and Claudio Schepke and Dalvan Griebler},
url = {http://larcc.setrem.com.br/wp-content/uploads/2017/03/HiPerfCloud_JP_SETREM_2016.pdf},
year = {2016},
date = {2016-10-01},
booktitle = {15th Jornada de Pesquisa SETREM},
pages = {4},
publisher = {SETREM},
address = {Três de Maio, Brazil},
abstract = {O projeto HiPerfCloud, em andamento no LARCC da Faculdade SETREM, desenvolve pesquisas em nível de infraestrutura em nuvens computacionais. O objetivo do projeto é analisar o impacto que aplicações científicas de alto desempenho sofrem quando executadas em nuvens privadas e avaliar as tecnologias de implantação envolvidas. As publicações de artigos em eventos nacionais e internacionais do projeto tem colaborado com o estado da arte da área. As descobertas recentes apontaram que aspectos de infraestrutura, rede e virtualização, exercem influência no desempenho de aplicações executadas em nuvem, enquanto as ferramentas de IaaS possuem contrastes em relação ao gerenciamento (escalonamento, disponibilidade, segurança).},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
The HiPerfCloud project, ongoing at LARCC at Faculdade SETREM, carries out infrastructure-level research on computational clouds. The goal of the project is to analyze the impact suffered by high-performance scientific applications when they run on private clouds and to evaluate the deployment technologies involved. The project's publications at national and international venues have contributed to the state of the art in the area. Recent findings have shown that infrastructure aspects, networking, and virtualization influence the performance of applications running in the cloud, while IaaS tools contrast with each other regarding management (scheduling, availability, security). |
 | Pieper, Ricardo; Griebler, Dalvan; Lovato, Adalberto Towards a Software as a Service for Biodigestor Analytics Journal Article doi In: Revista Eletrônica Argentina-Brasil de Tecnologias da Informação e da Comunicação (REABTIC), vol. 1, no. 5, pp. 15, 2016. @article{larcc:saas_analytics:REABTIC:16,
title = {Towards a Software as a Service for Biodigestor Analytics},
author = {Ricardo Pieper and Dalvan Griebler and Adalberto Lovato},
url = {http://larcc.setrem.com.br/wp-content/uploads/2017/04/PIEPER_REABTIC_2016.pdf},
doi = {10.5281/zenodo.345587},
year = {2016},
date = {2016-08-01},
journal = {Revista Eletrônica Argentina-Brasil de Tecnologias da Informação e da Comunicação (REABTIC)},
volume = {1},
number = {5},
pages = {15},
publisher = {SETREM},
address = {Três de Maio, Brazil},
abstract = {The field of machine learning is becoming even more important in the last years. The ever-increasing amount of data and complexity of computational problems challenges the currently available technology. Meanwhile, anaerobic digesters represent a good alternative for renewable energy production in Brazil. However, performing efficient and accurate predictions/analytics while completely abstracting machine learning details from end-users might not be a simple task to achieve. Usually, such tools are made for a specific scenario and may not fit with particular and general needs. Our goal was to create a SaaS for biogas data analytics by using a neural network. Therefore, an open source, cloud-enabled SaaS (Software as a Service) was developed and deployed in LARCC (Laboratory of Advanced Researches on Cloud Computing) at SETREM. The results have shown the SaaS application is able to perform predictions. The neural network's accuracy is not significantly worse than a state-of-the-art implementation, and its training speed is faster. The user interface demonstrates to be intuitive, and the predictions were accurate when providing the training algorithm with sufficient data. In addition, the file processing and network training time were good enough under traditional workload conditions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
The field of machine learning has become even more important in recent years. The ever-increasing amount of data and the complexity of computational problems challenge the currently available technology. Meanwhile, anaerobic digesters represent a good alternative for renewable energy production in Brazil. However, performing efficient and accurate predictions/analytics while completely abstracting machine learning details from end-users might not be a simple task to achieve. Usually, such tools are made for a specific scenario and may not fit particular and general needs. Our goal was to create a SaaS for biogas data analytics by using a neural network. Therefore, an open source, cloud-enabled SaaS (Software as a Service) was developed and deployed at LARCC (Laboratory of Advanced Researches on Cloud Computing) at SETREM. The results have shown that the SaaS application is able to perform predictions. The neural network's accuracy is not significantly worse than that of a state-of-the-art implementation, and its training speed is faster. The user interface proved to be intuitive, and the predictions were accurate when the training algorithm was provided with sufficient data. In addition, the file processing and network training times were good enough under traditional workload conditions. |
 | Barth, Andréia; Wolfer, Camila; Lovato, Adalberto; Griebler, Dalvan Avaliação da Irradiação Solar como Fonte de Energia Renovável no Noroeste do Estado do Rio Grande do Sul Através de Uma Rede Neural Journal Article doi In: Revista Eletrônica Argentina-Brasil de Tecnologias da Informação e da Comunicação (REABTIC), vol. 1, no. 5, pp. 15, 2016. @article{larcc:neural_networks:REABTIC:16,
title = {Avaliação da Irradiação Solar como Fonte de Energia Renovável no Noroeste do Estado do Rio Grande do Sul Através de Uma Rede Neural},
author = {Andréia Barth and Camila Wolfer and Adalberto Lovato and Dalvan Griebler},
url = {http://larcc.setrem.com.br/wp-content/uploads/2018/02/ANDREIA_CAMILA_REABTIC_2016.pdf},
doi = {10.5281/zenodo.345585},
year = {2016},
date = {2016-08-01},
journal = {Revista Eletrônica Argentina-Brasil de Tecnologias da Informação e da Comunicação (REABTIC)},
volume = {1},
number = {5},
pages = {15},
publisher = {SETREM},
address = {Três de Maio, RS, Brazil},
abstract = {Solar irradiation is one of the cleanest renewable energy sources of nowadays. In this work, the goal was to implement a neural network capable of evaluating the solar irradiation in the Northwest region of Rio Grande do Sul. In case, this assessment targets meteorological data, from January to April 2015. The network Perceptron was implemented and trained using MATLAB software. The results have indicated that the system obtained a highly accurate and that the region is a good enough place for stemmed energy production of solar irradiation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Solar irradiation is one of the cleanest renewable energy sources available nowadays. In this work, the goal was to implement a neural network capable of evaluating the solar irradiation in the Northwest region of Rio Grande do Sul. The assessment targets meteorological data from January to April 2015. A Perceptron network was implemented and trained using the MATLAB software. The results indicated that the system achieved high accuracy and that the region is a suitable place for energy production derived from solar irradiation. |
| Griebler, Dalvan Domain-Specific Language & Support Tool for High-Level Stream Parallelism PhD Thesis Faculdade de Informática - PPGCC - PUCRS, 2016. @phdthesis{GRIEBLER:PHD:16,
title = {Domain-Specific Language & Support Tool for High-Level Stream Parallelism},
author = {Dalvan Griebler},
url = {http://tede2.pucrs.br/tede2/handle/tede/6776},
year = {2016},
date = {2016-06-01},
address = {Porto Alegre, Brazil},
school = {Faculdade de Informática - PPGCC - PUCRS},
abstract = {Stream-based systems are representative of several application domains including video, audio, networking, graphic processing, etc. Stream programs may run on different kinds of parallel architectures (desktop, servers, cell phones, and supercomputers) and represent significant workloads on our current computing systems. Nevertheless, most of them are still not parallelized. Moreover, when new software has to be developed, programmers often face a trade-off between coding productivity, code portability, and performance. To solve this problem, we provide a new Domain-Specific Language (DSL) that naturally/on-the-fly captures and represents parallelism for stream-based applications. The aim is to offer a set of attributes (through annotations) that preserves the program's source code and is not architecture-dependent for annotating parallelism. We used the C++ attribute mechanism to design a ``textitde-facto'' standard C++ embedded DSL named SPar. However, the implementation of DSLs using compiler-based tools is difficult, complicated, and usually requires a significant learning curve. This is even harder for those who are not familiar with compiler technology. Therefore, our motivation is to simplify this path for other researchers (experts in their domain) with support tools (our tool is CINCLE) to create high-level and productive DSLs through powerful and aggressive source-to-source transformations. In fact, parallel programmers can use their expertise without having to design and implement low-level code. The main goal of this thesis was to create a DSL and support tools for high-level stream parallelism in the context of a programming framework that is compiler-based and domain-oriented. Thus, we implemented SPar using CINCLE. SPar supports the software developer with productivity, performance, and code portability while CINCLE provides sufficient support to generate new DSLs. Also, SPar targets source-to-source transformation producing parallel pattern code built on top of FastFlow and MPI. Finally, we provide a full set of experiments showing that SPar provides better coding productivity without significant performance degradation in multi-core systems as well as transformation rules that are able to achieve code portability (for cluster architectures) through its generalized attributes.},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Stream-based systems are representative of several application domains including video, audio, networking, graphic processing, etc. Stream programs may run on different kinds of parallel architectures (desktop, servers, cell phones, and supercomputers) and represent significant workloads on our current computing systems. Nevertheless, most of them are still not parallelized. Moreover, when new software has to be developed, programmers often face a trade-off between coding productivity, code portability, and performance. To solve this problem, we provide a new Domain-Specific Language (DSL) that naturally/on-the-fly captures and represents parallelism for stream-based applications. The aim is to offer a set of attributes (through annotations) that preserves the program's source code and is not architecture-dependent for annotating parallelism. We used the C++ attribute mechanism to design a de-facto standard C++ embedded DSL named SPar. However, the implementation of DSLs using compiler-based tools is difficult, complicated, and usually requires a significant learning curve. This is even harder for those who are not familiar with compiler technology. Therefore, our motivation is to simplify this path for other researchers (experts in their domain) with support tools (our tool is CINCLE) to create high-level and productive DSLs through powerful and aggressive source-to-source transformations. In fact, parallel programmers can use their expertise without having to design and implement low-level code. The main goal of this thesis was to create a DSL and support tools for high-level stream parallelism in the context of a programming framework that is compiler-based and domain-oriented. Thus, we implemented SPar using CINCLE. SPar supports the software developer with productivity, performance, and code portability while CINCLE provides sufficient support to generate new DSLs. Also, SPar targets source-to-source transformation producing parallel pattern code built on top of FastFlow and MPI. Finally, we provide a full set of experiments showing that SPar provides better coding productivity without significant performance degradation in multi-core systems as well as transformation rules that are able to achieve code portability (for cluster architectures) through its generalized attributes. |
| Griebler, Dalvan Domain-Specific Language & Support Tool for High-Level Stream Parallelism PhD Thesis Computer Science Department - University of Pisa, 2016. @phdthesis{GRIEBLER:PHD_PISA:16,
title = {Domain-Specific Language & Support Tool for High-Level Stream Parallelism},
author = {Dalvan Griebler},
url = {https://gmap.pucrs.br/dalvan/papers/2016/thesis_dalvan_UNIPI_2016.pdf},
year = {2016},
date = {2016-04-01},
address = {Pisa, Italy},
school = {Computer Science Department - University of Pisa},
abstract = {Stream-based systems are representative of several application domains including video, audio, networking, graphic processing, etc. Stream programs may run on different kinds of parallel architectures (desktop, servers, cell phones, and supercomputers) and represent significant workloads on our current computing systems. Nevertheless, most of them are still not parallelized. Moreover, when new software has to be developed, programmers often face a trade-off between coding productivity, code portability, and performance. To solve this problem, we provide a new Domain-Specific Language (DSL) that naturally/on-the-fly captures and represents parallelism for stream-based applications. The aim is to offer a set of attributes (through annotations) that preserves the program's source code and is not architecture-dependent for annotating parallelism. We used the C++ attribute mechanism to design a ``textitde-facto'' standard C++ embedded DSL named SPar. However, the implementation of DSLs using compiler-based tools is difficult, complicated, and usually requires a significant learning curve. This is even harder for those who are not familiar with compiler technology. Therefore, our motivation is to simplify this path for other researchers (experts in their domain) with support tools (our tool is CINCLE) to create high-level and productive DSLs through powerful and aggressive source-to-source transformations. In fact, parallel programmers can use their expertise without having to design and implement low-level code. The main goal of this thesis was to create a DSL and support tools for high-level stream parallelism in the context of a programming framework that is compiler-based and domain-oriented. Thus, we implemented SPar using CINCLE. SPar supports the software developer with productivity, performance, and code portability while CINCLE provides sufficient support to generate new DSLs. Also, SPar targets source-to-source transformation producing parallel pattern code built on top of FastFlow and MPI. Finally, we provide a full set of experiments showing that SPar provides better coding productivity without significant performance degradation in multi-core systems as well as transformation rules that are able to achieve code portability (for cluster architectures) through its generalized attributes.},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Stream-based systems are representative of several application domains including video, audio, networking, graphic processing, etc. Stream programs may run on different kinds of parallel architectures (desktop, servers, cell phones, and supercomputers) and represent significant workloads on our current computing systems. Nevertheless, most of them are still not parallelized. Moreover, when new software has to be developed, programmers often face a trade-off between coding productivity, code portability, and performance. To solve this problem, we provide a new Domain-Specific Language (DSL) that naturally/on-the-fly captures and represents parallelism for stream-based applications. The aim is to offer a set of attributes (through annotations) that preserves the program's source code and is not architecture-dependent for annotating parallelism. We used the C++ attribute mechanism to design a de-facto standard C++ embedded DSL named SPar. However, the implementation of DSLs using compiler-based tools is difficult, complicated, and usually requires a significant learning curve. This is even harder for those who are not familiar with compiler technology. Therefore, our motivation is to simplify this path for other researchers (experts in their domain) with support tools (our tool is CINCLE) to create high-level and productive DSLs through powerful and aggressive source-to-source transformations. In fact, parallel programmers can use their expertise without having to design and implement low-level code. The main goal of this thesis was to create a DSL and support tools for high-level stream parallelism in the context of a programming framework that is compiler-based and domain-oriented. Thus, we implemented SPar using CINCLE. SPar supports the software developer with productivity, performance, and code portability while CINCLE provides sufficient support to generate new DSLs. Also, SPar targets source-to-source transformation producing parallel pattern code built on top of FastFlow and MPI. Finally, we provide a full set of experiments showing that SPar provides better coding productivity without significant performance degradation in multi-core systems as well as transformation rules that are able to achieve code portability (for cluster architectures) through its generalized attributes. |
| Bairros, Gildomiro; Griebler, Dalvan; Fernandes, Luiz Gustavo Proposta de Suporte a Elasticidade Automática em Nuvem para uma Linguagem Específica de Domínio Inproceedings In: Escola Regional de Alto Desempenho (ERAD-RS), pp. 197-198, Sociedade Brasileira de Computação (SBC), São Leopoldo, RS, BR, 2016. @inproceedings{BAIRROS:ERAD:16,
title = {Proposta de Suporte a Elasticidade Automática em Nuvem para uma Linguagem Específica de Domínio},
author = {Gildomiro Bairros and Dalvan Griebler and Luiz Gustavo Fernandes},
url = {https://gmap.pucrs.br/dalvan/papers/2016/CR_ERAD_PG__2016.pdf},
year = {2016},
date = {2016-04-01},
booktitle = {Escola Regional de Alto Desempenho (ERAD-RS)},
pages = {197-198},
publisher = {Sociedade Brasileira de Computação (SBC)},
address = {São Leopoldo, RS, BR},
abstract = {Este artigo apresenta uma proposta de desenvolvimento de um middleware para prover elasticidade para aplicações desenvolvidas com uma linguagem específica de domínio voltada para o paralelismo de stream. O middleware atuará a nível de PaaS e colocará instruções de elasticidade de forma transparente ao desenvolvedor, fazendo o parser do código e injetando automaticamente as instruções de elasticidade.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
This paper presents a proposal for developing a middleware to provide elasticity for applications written with a domain-specific language targeted at stream parallelism. The middleware will operate at the PaaS level and will place elasticity instructions transparently to the developer, parsing the code and automatically injecting the elasticity instructions. |
| Vogel, Adriano; Maron, Carlos A. F.; Griebler, Dalvan; Schepke, Claudio Medindo o Desempenho de Implantações de OpenStack, CloudStack e OpenNebula em Aplicações Científicas Inproceedings In: 16th Escola Regional de Alto Desempenho do Estado do Rio Grande do Sul (ERAD/RS), pp. 279-282, Sociedade Brasileira de Computação, São Leopoldo, RS, Brazil, 2016. @inproceedings{hiperfcloud:nas_all:ERAD:16,
title = {Medindo o Desempenho de Implantações de OpenStack, CloudStack e OpenNebula em Aplicações Científicas},
author = {Adriano Vogel and Carlos A. F. Maron and Dalvan Griebler and Claudio Schepke},
url = {http://larcc.setrem.com.br/wp-content/uploads/2017/03/VOGEL_ERAD_2016.pdf},
year = {2016},
date = {2016-04-01},
booktitle = {16th Escola Regional de Alto Desempenho do Estado do Rio Grande do Sul (ERAD/RS)},
pages = {279-282},
publisher = {Sociedade Brasileira de Computação},
address = {São Leopoldo, RS, Brazil},
abstract = {Ambientes de nuvem possibilitam a execução de aplicações sob demanda e são uma alternativa para aplicações científicas. O desempenho é um dos principais desafios, devido ao uso da virtualização que induz perdas e variações. O objetivo do trabalho foi implantar ambientes de nuvem privada com diferentes ferramentas de IaaS, medindo o desempenho de aplicações paralelas. Consequentemente, os resultados apresentaram poucos contrastes.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Cloud environments enable on-demand application execution and are an alternative for scientific applications. Performance is one of the main challenges, since virtualization introduces losses and variations. The goal of this work was to deploy private cloud environments with different IaaS tools and measure the performance of parallel applications. The results showed few contrasts between the tools. |
| Maron, Carlos A. F.; Griebler, Dalvan; Fernandes, Luiz Gustavo Em Direção à um Benchmark de Workload Sintético para Paralelismo de Stream em Arquiteturas Multicore Inproceedings In: Escola Regional de Alto Desempenho (ERAD-RS), pp. 171-172, Sociedade Brasileira de Computação (SBC), São Leopoldo, RS, BR, 2016. @inproceedings{MARON:ERAD:16,
title = {Em Direção à um Benchmark de Workload Sintético para Paralelismo de Stream em Arquiteturas Multicore},
author = {Carlos A. F. Maron and Dalvan Griebler and Luiz Gustavo Fernandes},
url = {https://gmap.pucrs.br/dalvan/papers/2016/CR_ERAD_PG_2016.pdf},
year = {2016},
date = {2016-04-01},
booktitle = {Escola Regional de Alto Desempenho (ERAD-RS)},
pages = {171-172},
publisher = {Sociedade Brasileira de Computação (SBC)},
address = {São Leopoldo, RS, BR},
abstract = {O processamento de fluxos contínuos de dados (stream) está provocando novos desafios na exploração de paralelismo. Suítes clássicas de benchmarks não exploram totalmente os aspectos de stream, focando-se em problemas de natureza científica e execução finita. Para endereçar este problema em ambientes de memória compartilhada, o trabalho propõe um benchmark de workload sintético voltado para paralelismo stream},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
The processing of continuous data flows (streams) is raising new challenges for the exploitation of parallelism. Classic benchmark suites do not fully cover stream aspects, focusing on problems of a scientific nature with finite execution. To address this problem in shared-memory environments, this work proposes a synthetic-workload benchmark targeted at stream parallelism. |
| Vogel, Adriano; Griebler, Dalvan; Maron, Carlos A. F.; Schepke, Claudio; Fernandes, Luiz Gustavo Private IaaS Clouds: A Comparative Analysis of OpenNebula, CloudStack and OpenStack Inproceedings doi In: 24th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP), pp. 672-679, IEEE, Heraklion Crete, Greece, 2016. @inproceedings{larcc:IaaS_private:PDP:16,
title = {Private IaaS Clouds: A Comparative Analysis of OpenNebula, CloudStack and OpenStack},
author = {Adriano Vogel and Dalvan Griebler and Carlos A. F. Maron and Claudio Schepke and Luiz Gustavo Fernandes},
url = {http://ieeexplore.ieee.org/document/7445407/},
doi = {10.1109/PDP.2016.75},
year = {2016},
date = {2016-02-01},
booktitle = {24th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP)},
pages = {672-679},
publisher = {IEEE},
address = {Heraklion Crete, Greece},
series = {PDP'16},
abstract = {Despite the evolution of cloud computing in recent years, the performance and comprehensive understanding of the available private cloud tools are still under research. This paper contributes to an analysis of the Infrastructure as a Service (IaaS) domain by mapping new insights and discussing the challenges for improving cloud services. The goal is to make a comparative analysis of OpenNebula, OpenStack and CloudStack tools, evaluating their differences on support for flexibility and resiliency. Also, we aim at evaluating these three cloud tools when they are deployed using a mutual hypervisor (KVM) for discovering new empirical insights. Our research results demonstrated that OpenStack is the most resilient and CloudStack is the most flexible for deploying an IaaS private cloud. Moreover, the performance experiments indicated some contrasts among the private IaaS cloud instances when running intensive workloads and scientific applications.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Despite the evolution of cloud computing in recent years, the performance and comprehensive understanding of the available private cloud tools are still under research. This paper contributes to an analysis of the Infrastructure as a Service (IaaS) domain by mapping new insights and discussing the challenges for improving cloud services. The goal is to make a comparative analysis of OpenNebula, OpenStack and CloudStack tools, evaluating their differences on support for flexibility and resiliency. Also, we aim at evaluating these three cloud tools when they are deployed using a mutual hypervisor (KVM) for discovering new empirical insights. Our research results demonstrated that OpenStack is the most resilient and CloudStack is the most flexible for deploying an IaaS private cloud. Moreover, the performance experiments indicated some contrasts among the private IaaS cloud instances when running intensive workloads and scientific applications. |
2015
|
 | Adornes, Daniel; Griebler, Dalvan; Ledur, Cleverson; Fernandes, Luiz G. Coding Productivity in MapReduce Applications for Distributed and Shared Memory Architectures Journal Article doi In: International Journal of Software Engineering and Knowledge Engineering, vol. 25, no. 10, pp. 1739-1741, 2015. @article{ADORNES:IJSEKE:15,
title = {Coding Productivity in MapReduce Applications for Distributed and Shared Memory Architectures},
author = {Daniel Adornes and Dalvan Griebler and Cleverson Ledur and Luiz G. Fernandes},
url = {http://dx.doi.org/10.1142/S0218194015710096},
doi = {10.1142/S0218194015710096},
year = {2015},
date = {2015-12-01},
urldate = {2015-12-01},
journal = {International Journal of Software Engineering and Knowledge Engineering},
volume = {25},
number = {10},
pages = {1739-1741},
publisher = {World Scientific},
abstract = {MapReduce was originally proposed as a suitable and efficient approach for analyzing and processing large amounts of data. Since then, many researches contributed with MapReduce implementations for distributed and shared memory architectures. Nevertheless, different architectural levels require different optimization strategies in order to achieve high-performance computing. Such strategies in turn have caused very different MapReduce programming interfaces among these researches. This paper presents some research notes on coding productivity when developing MapReduce applications for distributed and shared memory architectures. As a case study, we introduce our current research on a unified MapReduce domain-specific language with code generation for Hadoop and Phoenix++, which has achieved a coding productivity increase from 41.84% and up to 94.71% without significant performance losses (below 3%) compared to those frameworks.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
MapReduce was originally proposed as a suitable and efficient approach for analyzing and processing large amounts of data. Since then, many research efforts have contributed MapReduce implementations for distributed and shared memory architectures. Nevertheless, different architectural levels require different optimization strategies in order to achieve high-performance computing. Such strategies in turn have led to very different MapReduce programming interfaces among these works. This paper presents some research notes on coding productivity when developing MapReduce applications for distributed and shared memory architectures. As a case study, we introduce our current research on a unified MapReduce domain-specific language with code generation for Hadoop and Phoenix++, which has achieved a coding productivity increase ranging from 41.84% up to 94.71% without significant performance losses (below 3%) compared to those frameworks. |
| Ledur, Cleverson; Griebler, Dalvan; Manssour, Isabel; Fernandes, Luiz G. Towards a Domain-Specific Language for Geospatial Data Visualization Maps with Big Data Sets Inproceedings doi In: ACS/IEEE International Conference on Computer Systems and Applications, pp. 8, IEEE, Marrakech, Marrocos, 2015. @inproceedings{LEDUR:AICCSA:15,
title = {Towards a Domain-Specific Language for Geospatial Data Visualization Maps with Big Data Sets},
author = {Cleverson Ledur and Dalvan Griebler and Isabel Manssour and Luiz G. Fernandes},
url = {http://dx.doi.org/10.1109/AICCSA.2015.7507178},
doi = {10.1109/AICCSA.2015.7507178},
year = {2015},
date = {2015-11-01},
booktitle = {ACS/IEEE International Conference on Computer Systems and Applications},
pages = {8},
publisher = {IEEE},
address = {Marrakech, Marrocos},
series = {AICCSA'15},
abstract = {Data visualization is an alternative for representing information and helping people gain faster insights. However, the programming/creating of a visualization for large data sets is still a challenging task for users with low-level of software development knowledge. Our goal is to increase the productivity of experts who are familiar with the application domain. Therefore, we proposed an external Domain-Specific Language (DSL) that allows massive input of raw data and provides a small dictionary with suitable data visualization keywords. Also, we implemented it to support efficient data filtering operations and generate HTML or Javascript output code files (using Google Maps API). To measure the potential of our DSL, we evaluated four types of geospatial data visualization maps with four different technologies. The experiment results demonstrated a productivity gain when compared to the traditional way of implementing (e.g., Google Maps API, OpenLayers, and Leaflet), and efficient algorithm implementation.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Data visualization is an alternative for representing information and helping people gain faster insights. However, programming/creating a visualization for large data sets is still a challenging task for users with a low level of software development knowledge. Our goal is to increase the productivity of experts who are familiar with the application domain. Therefore, we proposed an external Domain-Specific Language (DSL) that allows massive input of raw data and provides a small dictionary with suitable data visualization keywords. Also, we implemented it to support efficient data filtering operations and to generate HTML or Javascript output code files (using the Google Maps API). To measure the potential of our DSL, we evaluated four types of geospatial data visualization maps with four different technologies. The experiment results demonstrated a productivity gain compared to the traditional way of implementing such visualizations (e.g., with the Google Maps API, OpenLayers, and Leaflet), along with an efficient algorithm implementation. |
| Vogel, Adriano; Maron, Carlos A. F.; Benedetti, Vera L. L.; Shubeita, Fauzi; Schepke, Claudio; Griebler, Dalvan HiPerfCloud: Um Projeto de Alto Desempenho em Nuvem Inproceedings In: 14th Jornada de Pesquisa SETREM, pp. 4, SETREM, Três de Maio, Brazil, 2015. @inproceedings{larcc:hiperfcloud:JP:15,
title = {HiPerfCloud: Um Projeto de Alto Desempenho em Nuvem},
author = {Adriano Vogel and Carlos A. F. Maron and Vera L. L. Benedetti and Fauzi Shubeita and Claudio Schepke and Dalvan Griebler},
url = {http://larcc.setrem.com.br/wp-content/uploads/2017/03/HiPerfCloud_JP_SETREM_2015.pdf},
year = {2015},
date = {2015-10-01},
booktitle = {14th Jornada de Pesquisa SETREM},
pages = {4},
publisher = {SETREM},
address = {Três de Maio, Brazil},
abstract = {Computação em nuvem é uma necessidade real para os ambientes de pesquisas e empresas. Embora bastante usada e estudada, ela ainda traz diversos desafios. Um deles é a obtenção de alto desempenho, sendo o principal foco do projeto HiPerfCloud. Esta é uma tarefa complexa, pois é preciso combinar tecnologias, avaliar modelos de implantação e usar soluções adequadas. Este artigo irá apresentar o projeto de pesquisa, seus principais objetivos e os principais resultados alcançados até o momento. Além disso, demonstrar as perspectivas da pesquisa no projeto.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Cloud computing is a real necessity for research environments and companies. Although widely used and studied, it still brings several challenges. One of them is achieving high performance, which is the main focus of the HiPerfCloud project. This is a complex task, since it is necessary to combine technologies, evaluate deployment models, and use suitable solutions. This paper presents the research project, its main goals, and the main results achieved so far, as well as the perspectives for research within the project. |
| Griebler, Dalvan; Danelutto, Marco; Torquati, Massimo; Fernandes, Luiz G. An Embedded C++ Domain-Specific Language for Stream Parallelism Inproceedings doi In: Parallel Computing: On the Road to Exascale, Proceedings of the International Conference on Parallel Computing, pp. 317-326, IOS Press, Edinburgh, Scotland, UK, 2015. @inproceedings{GRIEBLER:PARCO:15,
title = {An Embedded C++ Domain-Specific Language for Stream Parallelism},
author = {Dalvan Griebler and Marco Danelutto and Massimo Torquati and Luiz G. Fernandes},
url = {http://dx.doi.org/10.3233/978-1-61499-621-7-317},
doi = {10.3233/978-1-61499-621-7-317},
year = {2015},
date = {2015-09-01},
booktitle = {Parallel Computing: On the Road to Exascale, Proceedings of the International Conference on Parallel Computing},
pages = {317-326},
publisher = {IOS Press},
address = {Edinburgh, Scotland, UK},
series = {ParCo'15},
abstract = {This paper proposes a new C++ embedded Domain-Specific Language (DSL) for expressing stream parallelism by using standard C++11 attributes annotations. The main goal is to introduce high-level parallel abstractions for developing stream based parallel programs as well as reducing sequential source code rewriting. We demonstrated that by using a small set of attributes it is possible to produce different parallel versions depending on the way the source code is annotated. The performances of the parallel code produced are comparable with those obtained by manual parallelization.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
This paper proposes a new C++ embedded Domain-Specific Language (DSL) for expressing stream parallelism by using standard C++11 attribute annotations. The main goal is to introduce high-level parallel abstractions for developing stream-based parallel programs as well as reducing sequential source code rewriting. We demonstrated that by using a small set of attributes it is possible to produce different parallel versions depending on the way the source code is annotated. The performance of the parallel code produced is comparable with that obtained by manual parallelization. |
| Roveda, Demétrius; Vogel, Adriano; Maron, Carlos A. F.; Griebler, Dalvan; Schepke, Claudio Analisando a Camada de Gerenciamento das Ferramentas CloudStack e OpenStack para Nuvens Privadas Inproceedings In: 13th Escola Regional de Redes de Computadores (ERRC), pp. 8, Sociedade Brasileira de Computação, Passo Fundo, Brazil, 2015. @inproceedings{larcc:cloudstack_openstack:ERRC:15,
title = {Analisando a Camada de Gerenciamento das Ferramentas CloudStack e OpenStack para Nuvens Privadas},
author = {Demétrius Roveda and Adriano Vogel and Carlos A. F. Maron and Dalvan Griebler and Claudio Schepke},
url = {http://larcc.setrem.com.br/wp-content/uploads/2017/04/ROVEDA_ERRC_2015.pdf},
year = {2015},
date = {2015-09-01},
booktitle = {13th Escola Regional de Redes de Computadores (ERRC)},
pages = {8},
publisher = {Sociedade Brasileira de Computação},
address = {Passo Fundo, Brazil},
abstract = {The management layer is one of the most important elements of the IaaS service model in private cloud administration tools, since it offers users/clients on-demand infrastructure resources and handles the administrative aspects of the cloud. The goal of this paper is to analyze the management interface of the CloudStack and OpenStack tools. The study showed that the tools manage the cloud in distinct ways: OpenStack proved to be more robust and complex, while CloudStack is more centralized and provides a more complete and intuitive graphical interface.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
The management layer is one of the most important elements of the IaaS service model in private cloud administration tools, since it offers users/clients on-demand infrastructure resources and handles the administrative aspects of the cloud. The goal of this paper is to analyze the management interface of the CloudStack and OpenStack tools. The study showed that the tools manage the cloud in distinct ways: OpenStack proved to be more robust and complex, while CloudStack is more centralized and provides a more complete and intuitive graphical interface. |
 | Roveda, Demétrius; Vogel, Adriano; Griebler, Dalvan Understanding, Discussing and Analyzing the OpenNebula's and OpenStack's IaaS Management Layers Journal Article doi In: Revista Eletrônica Argentina-Brasil de Tecnologias da Informação e da Comunicação (REABTIC), vol. 1, no. 3, pp. 15, 2015. @article{larcc:openebula_openstack:REABTIC:15,
title = {Understanding, Discussing and Analyzing the OpenNebula's and OpenStack's IaaS Management Layers},
author = {Demétrius Roveda and Adriano Vogel and Dalvan Griebler},
url = {http://larcc.setrem.com.br/wp-content/uploads/2017/04/ROVEDA_REABTIC_2015A.pdf},
doi = {10.5281/zenodo.59467},
year = {2015},
date = {2015-08-01},
journal = {Revista Eletrônica Argentina-Brasil de Tecnologias da Informação e da Comunicação (REABTIC)},
volume = {1},
number = {3},
pages = {15},
publisher = {SETREM},
address = {Três de Maio, Brazil},
abstract = {The OpenNebula and OpenStack tools have been used by large corporations and research centers to implement IaaS clouds. The management layer is an important element for users and administrators because it deals with resource monitoring, development support, orchestration, and integration with other cloud platforms and services. The goal of this paper is to discuss and analyze the differences in the management layer in order to point out advantages and disadvantages of each tool. The results demonstrated that OpenNebula is more restricted and focused on simplicity in almost all comparisons, while OpenStack is fragmented, complex, and robust.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
The OpenNebula and OpenStack tools have been used by large corporations and research centers to implement IaaS clouds. The management layer is an important element for users and administrators because it deals with resource monitoring, development support, orchestration, and integration with other cloud platforms and services. The goal of this paper is to discuss and analyze the differences in the management layer in order to point out advantages and disadvantages of each tool. The results demonstrated that OpenNebula is more restricted and focused on simplicity in almost all comparisons, while OpenStack is fragmented, complex, and robust. |
| Adornes, Daniel; Griebler, Dalvan; Ledur, Cleverson; Fernandes, Luiz G. A Unified MapReduce Domain-Specific Language for Distributed and Shared Memory Architectures Inproceedings doi In: The 27th International Conference on Software Engineering & Knowledge Engineering, pp. 6, Knowledge Systems Institute Graduate School, Pittsburgh, USA, 2015. @inproceedings{ADORNES:SEKE:15,
title = {A Unified MapReduce Domain-Specific Language for Distributed and Shared Memory Architectures},
author = {Daniel Adornes and Dalvan Griebler and Cleverson Ledur and Luiz G. Fernandes},
url = {http://dx.doi.org/10.18293/SEKE2015-204},
doi = {10.18293/SEKE2015-204},
year = {2015},
date = {2015-07-01},
booktitle = {The 27th International Conference on Software Engineering & Knowledge Engineering},
pages = {6},
publisher = {Knowledge Systems Institute Graduate School},
address = {Pittsburgh, USA},
abstract = {MapReduce is a suitable and efficient parallel programming pattern for big data analysis processing. In recent years, many frameworks/languages have implemented this pattern to achieve high performance in data mining applications, particularly for distributed memory architectures (e.g., clusters). Nevertheless, the processor industry is now able to offer powerful processing on single machines (e.g., multi-core). Thus, these applications may address parallelism at another architectural level. The target problems of this paper are code reuse and programming effort reduction, since current solutions do not provide a single interface to deal with these two architectural levels. Therefore, we propose a unified domain-specific language in conjunction with transformation rules for code generation for Hadoop and Phoenix++. We selected these frameworks as state-of-the-art MapReduce implementations for distributed and shared memory architectures, respectively. Our solution achieves a programming effort reduction ranging from 41.84% up to 95.43% without significant performance losses (below the threshold of 3%) compared to Hadoop and Phoenix++.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
MapReduce is a suitable and efficient parallel programming pattern for big data analysis processing. In recent years, many frameworks/languages have implemented this pattern to achieve high performance in data mining applications, particularly for distributed memory architectures (e.g., clusters). Nevertheless, the processor industry is now able to offer powerful processing on single machines (e.g., multi-core). Thus, these applications may address parallelism at another architectural level. The target problems of this paper are code reuse and programming effort reduction, since current solutions do not provide a single interface to deal with these two architectural levels. Therefore, we propose a unified domain-specific language in conjunction with transformation rules for code generation for Hadoop and Phoenix++. We selected these frameworks as state-of-the-art MapReduce implementations for distributed and shared memory architectures, respectively. Our solution achieves a programming effort reduction ranging from 41.84% up to 95.43% without significant performance losses (below the threshold of 3%) compared to Hadoop and Phoenix++. |
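The abstract does not reproduce the DSL's syntax, so the sketch below only illustrates, in plain C++ with std::thread, the map (per-chunk counting) and reduce (merge) phases of a shared-memory word count, i.e., the kind of computation such a unified MapReduce interface is meant to abstract away. All names in it are hypothetical and it makes no claim about the proposed language itself.

// Illustrative only: the MapReduce pattern on shared memory, hand-coded.
#include <algorithm>
#include <iostream>
#include <map>
#include <sstream>
#include <string>
#include <thread>
#include <vector>

using Counts = std::map<std::string, long>;

// Map phase: count words in one chunk of the input lines.
static Counts map_chunk(const std::vector<std::string> &chunk) {
    Counts local;
    for (const auto &line : chunk) {
        std::istringstream in(line);
        std::string word;
        while (in >> word) ++local[word];
    }
    return local;
}

int main() {
    std::vector<std::string> lines;
    for (std::string l; std::getline(std::cin, l);) lines.push_back(l);

    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<Counts> partial(workers);
    std::vector<std::thread> pool;

    // Split the input into roughly equal chunks, one per worker.
    const size_t step = (lines.size() + workers - 1) / workers;
    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&, w] {
            const size_t begin = std::min(lines.size(), w * step);
            const size_t end = std::min(lines.size(), begin + step);
            partial[w] = map_chunk({lines.begin() + begin, lines.begin() + end});
        });
    }
    for (auto &t : pool) t.join();

    // Reduce phase: merge the per-worker dictionaries.
    Counts total;
    for (const auto &p : partial)
        for (const auto &kv : p) total[kv.first] += kv.second;

    for (const auto &kv : total) std::cout << kv.first << ' ' << kv.second << '\n';
    return 0;
}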
| Ledur, Cleverson; Griebler, Dalvan; Fernandes, Luiz Gustavo; Manssour, Isabel Uma Linguagem Específica de Domínio com Geração de Código Paralelo para Visualização de Grandes Volumes de Dados Inproceedings In: Escola Regional de Alto Desempenho (ERAD-RS), pp. 139-140, Sociedade Brasileira de Computação (SBC), Gramado, RS, BR, 2015. @inproceedings{LEDUR:ERAD:15,
title = {Uma Linguagem Específica de Domínio com Geração de Código Paralelo para Visualização de Grandes Volumes de Dados},
author = {Cleverson Ledur and Dalvan Griebler and Luiz Gustavo Fernandes and Isabel Manssour},
url = {https://gmap.pucrs.br/dalvan/papers/2015/CR_ERAD_PG_2015.pdf},
year = {2015},
date = {2015-04-01},
booktitle = {Escola Regional de Alto Desempenho (ERAD-RS)},
pages = {139-140},
publisher = {Sociedade Brasileira de Computação (SBC)},
address = {Gramado, RS, BR},
abstract = {This paper presents an analysis of domain-specific languages for creating visualizations. It then proposes a new domain-specific language for generating visualizations of massive amounts of data, parallelizing not only the generation of and interaction with the visualization, but also the preprocessing of the raw data.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
This paper presents an analysis of domain-specific languages for creating visualizations. It then proposes a new domain-specific language for generating visualizations of massive amounts of data, parallelizing not only the generation of and interaction with the visualization, but also the preprocessing of the raw data. |
 | Roveda, Demétrius; Vogel, Adriano; Souza, Samuel; Griebler, Dalvan Uma Avaliação Comparativa dos Mecanismos de Segurança nas Ferramentas OpenStack, OpenNebula e CloudStack Journal Article doi In: Revista Eletrônica Argentina-Brasil de Tecnologias da Informação e da Comunicação (REABTIC), vol. 1, no. 4, pp. 15, 2015. @article{larcc:security_IaaS_tools:REABTIC:15,
title = {Uma Avaliação Comparativa dos Mecanismos de Segurança nas Ferramentas OpenStack, OpenNebula e CloudStack},
author = {Demétrius Roveda and Adriano Vogel and Samuel Souza and Dalvan Griebler},
url = {http://larcc.setrem.com.br/wp-content/uploads/2017/04/ROVEDA_REABTIC_2015.pdf},
doi = {10.5281/zenodo.59478},
year = {2015},
date = {2015-03-01},
journal = {Revista Eletrônica Argentina-Brasil de Tecnologias da Informação e da Comunicação (REABTIC)},
volume = {1},
number = {4},
pages = {15},
publisher = {SETREM},
address = {Três de Maio, Brazil},
abstract = {The IaaS service model is gaining attention due to its importance to the cloud computing environment: it is responsible for simplifying the access to and management of high-end processing and storage systems, besides being the base that allows the outsourcing of the upper layers, PaaS and SaaS. The IaaS cloud tools are responsible for controlling the virtual infrastructure as well as the environment's security, which is an important characteristic for cloud applications, since the system can be integrated with public clouds through the Internet. In this paper, the goal is to evaluate and compare the security layer, from the administrator's point of view, of three open source IaaS tools: OpenStack, OpenNebula, and CloudStack. Considering the security layer from the Dukaric taxonomy, the results show that all the tools have an equivalent security level; however, there is evidence that not all the security features found in the tools fit the taxonomy description.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
The IaaS service model is gaining attention due to its importance to the cloud computing environment: it is responsible for simplifying the access to and management of high-end processing and storage systems, besides being the base that allows the outsourcing of the upper layers, PaaS and SaaS. The IaaS cloud tools are responsible for controlling the virtual infrastructure as well as the environment's security, which is an important characteristic for cloud applications, since the system can be integrated with public clouds through the Internet. In this paper, the goal is to evaluate and compare the security layer, from the administrator's point of view, of three open source IaaS tools: OpenStack, OpenNebula, and CloudStack. Considering the security layer from the Dukaric taxonomy, the results show that all the tools have an equivalent security level; however, there is evidence that not all the security features found in the tools fit the taxonomy description. |
| Maron, Carlos A. F.; Griebler, Dalvan; Vogel, Adriano; Schepke, Claudio Em Direção à Comparação do Desempenho das Aplicações Paralelas nas Ferramentas OpenStack e OpenNebula Inproceedings In: 15th Escola Regional de Alto Desempenho do Estado do Rio Grande do Sul (ERAD/RS), pp. 205-208, Sociedade Brasileira de Computação, Gramado, RS, Brazil, 2015. @inproceedings{hiperfcloud:nas_bech_openstack_opennebula:ERAD:15,
title = {Em Direção à Comparação do Desempenho das Aplicações Paralelas nas Ferramentas OpenStack e OpenNebula},
author = {Carlos A. F. Maron and Dalvan Griebler and Adriano Vogel and Claudio Schepke},
url = {http://larcc.setrem.com.br/wp-content/uploads/2017/03/MARON_ERAD_2015.pdf},
year = {2015},
date = {2015-03-01},
booktitle = {15th Escola Regional de Alto Desempenho do Estado do Rio Grande do Sul (ERAD/RS)},
pages = {205-208},
publisher = {Sociedade Brasileira de Computação},
address = {Gramado, RS, Brazil},
abstract = {Cloud computing infrastructure has become an alternative for running high-performance applications. However, performance can be degraded by the virtualization layer and by the actions of the cloud administration tools. The goal of this work was to compare application performance on OpenStack and OpenNebula. The results showed a significant difference between the tools, in favor of OpenNebula.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Cloud computing infrastructure has become an alternative for running high-performance applications. However, performance can be degraded by the virtualization layer and by the actions of the cloud administration tools. The goal of this work was to compare application performance on OpenStack and OpenNebula. The results showed a significant difference between the tools, in favor of OpenNebula. |
2014
|
| Maron, Carlos A. F.; Griebler, Dalvan; Vogel, Adriano; Schepke, Claudio Avaliação e Comparação do Desempenho das Ferramentas OpenStack e OpenNebula Inproceedings In: 12th Escola Regional de Redes de Computadores (ERRC), pp. 1-5, Sociedade Brasileira de Computação, Canoas, 2014. @inproceedings{hiperfcloud:isolation_bechs_openstack_opennebula:ERRC:14,
title = {Avaliação e Comparação do Desempenho das Ferramentas OpenStack e OpenNebula},
author = {Carlos A. F. Maron and Dalvan Griebler and Adriano Vogel and Claudio Schepke},
url = {http://larcc.setrem.com.br/wp-content/uploads/2017/03/MARON_ERRC_2014.pdf},
year = {2014},
date = {2014-11-01},
booktitle = {12th Escola Regional de Redes de Computadores (ERRC)},
pages = {1-5},
publisher = {Sociedade Brasileira de Computação},
address = {Canoas},
abstract = {Cloud computing is increasingly present in corporate infrastructures. As a result, several tools have been created to help administer cloud resources. The goal of this work is to evaluate the impact that the OpenStack and OpenNebula tools (deployed in a private cloud environment) can have on the performance of the memory, storage, network, and processor subsystems. The results show that OpenStack performs significantly better in the storage tests, while OpenNebula performs better in the remaining tests.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Cloud computing is increasingly present in corporate infrastructures. As a result, several tools have been created to help administer cloud resources. The goal of this work is to evaluate the impact that the OpenStack and OpenNebula tools (deployed in a private cloud environment) can have on the performance of the memory, storage, network, and processor subsystems. The results show that OpenStack performs significantly better in the storage tests, while OpenNebula performs better in the remaining tests. |
| Griebler, Dalvan; Adornes, Daniel; Fernandes, Luiz G. Performance and Usability Evaluation of a Pattern-Oriented Parallel Programming Interface for Multi-Core Architectures Inproceedings In: The 26th International Conference on Software Engineering & Knowledge Engineering, pp. 25-30, Knowledge Systems Institute Graduate School, Vancouver, Canada, 2014. @inproceedings{GRIEBLER:SEKE:14,
title = {Performance and Usability Evaluation of a Pattern-Oriented Parallel Programming Interface for Multi-Core Architectures},
author = {Dalvan Griebler and Daniel Adornes and Luiz G. Fernandes},
url = {https://gmap.pucrs.br/dalvan/papers/2014/CR_SEKE_2014.pdf},
year = {2014},
date = {2014-07-01},
booktitle = {The 26th International Conference on Software Engineering & Knowledge Engineering},
pages = {25-30},
publisher = {Knowledge Systems Institute Graduate School},
address = {Vancouver, Canada},
abstract = {Multi-core architectures have increased the power of parallelism by coupling many cores in a single chip. This makes it even more complex for developers to exploit the available parallelism in order to provide high-performance, scalable programs. To address these challenges, we propose DSL-POPP (Domain-Specific Language for Pattern-Oriented Parallel Programming), which embeds the pattern-based approach in the programming interface as an alternative to reduce the effort of parallel software development and achieve good performance in some applications. In this paper, the objective is to evaluate the usability and performance of the master/slave pattern and compare it to the Pthreads library. Moreover, experiments have shown that the master/slave interface of DSL-POPP reduces the programming effort by up to 50% without significantly affecting the performance.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Multi-core architectures have increased the power of parallelism by coupling many cores in a single chip. This makes it even more complex for developers to exploit the available parallelism in order to provide high-performance, scalable programs. To address these challenges, we propose DSL-POPP (Domain-Specific Language for Pattern-Oriented Parallel Programming), which embeds the pattern-based approach in the programming interface as an alternative to reduce the effort of parallel software development and achieve good performance in some applications. In this paper, the objective is to evaluate the usability and performance of the master/slave pattern and compare it to the Pthreads library. Moreover, experiments have shown that the master/slave interface of DSL-POPP reduces the programming effort by up to 50% without significantly affecting the performance. |
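For context, the sketch below shows the kind of hand-written master/slave boilerplate that an interface like DSL-POPP aims to hide (std::thread is used here instead of raw Pthreads purely for brevity). It is an assumed illustration of the pattern, not the DSL's syntax or its generated code.

// Hand-coded master/slave skeleton: the master splits the work, the slaves
// compute partial results, and the master combines them.
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const int n_slaves = 4;
    std::vector<double> data(1000000, 1.0);     // workload distributed by the master
    std::vector<double> partial(n_slaves, 0.0); // one result slot per slave
    std::vector<std::thread> slaves;

    // Master: hand one block of the data to each slave.
    const size_t block = data.size() / n_slaves;
    for (int id = 0; id < n_slaves; ++id) {
        slaves.emplace_back([&, id] {
            const size_t begin = id * block;
            const size_t end = (id == n_slaves - 1) ? data.size() : begin + block;
            partial[id] = std::accumulate(data.begin() + begin, data.begin() + end, 0.0);
        });
    }

    // Master: wait for the slaves and combine their partial results.
    for (auto &t : slaves) t.join();
    const double total = std::accumulate(partial.begin(), partial.end(), 0.0);
    std::cout << "sum = " << total << '\n';
    return 0;
}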
| Maron, Carlos A. F.; Griebler, Dalvan; Schepke, Claudio Comparação das Ferramentas OpenNebula e OpenStack em Nuvem Composta de Estações de Trabalho Inproceedings In: 14th Escola Regional de Alto Desempenho do Estado do Rio Grande do Sul (ERAD/RS), pp. 173-176, Sociedade Brasileira de Computação, Alegrete, RS, Brazil, 2014. @inproceedings{larcc:evaluation_openstack_opnnebula:ERAD:14,
title = {Comparação das Ferramentas OpenNebula e OpenStack em Nuvem Composta de Estações de Trabalho},
author = {Carlos A. F. Maron and Dalvan Griebler and Claudio Schepke},
url = {http://larcc.setrem.com.br/wp-content/uploads/2017/03/MARON_ERAD_2014.pdf},
year = {2014},
date = {2014-03-01},
booktitle = {14th Escola Regional de Alto Desempenho do Estado do Rio Grande do Sul (ERAD/RS)},
pages = {173-176},
publisher = {Sociedade Brasileira de Computação},
address = {Alegrete, RS, Brazil},
abstract = {Cloud computing tools for the IaaS service model, such as OpenNebula and OpenStack, are usually deployed in large processing centers. The goal of this work is to investigate and compare their behavior in a more constrained environment composed of workstations. The results showed that OpenNebula has the advantage in the main characteristics evaluated.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Cloud computing tools for the IaaS service model, such as OpenNebula and OpenStack, are usually deployed in large processing centers. The goal of this work is to investigate and compare their behavior in a more constrained environment composed of workstations. The results showed that OpenNebula has the advantage in the main characteristics evaluated. |
| Rui, Fernando; Castro, Márcio; Griebler, Dalvan; Fernandes, Luiz Gustavo Evaluating the Impact of Transactional Characteristics on the Performance of Transactional Memory Applications Inproceedings doi In: 22nd Euromicro International Conference on Parallel, Distributed, and Network-Based Processing, pp. 93-97, IEEE, Torino, Italy, 2014. @inproceedings{gmap:RUI:PDP:14,
title = {Evaluating the Impact of Transactional Characteristics on the Performance of Transactional Memory Applications},
author = {Fernando Rui and Márcio Castro and Dalvan Griebler and Luiz Gustavo Fernandes},
url = {https://doi.org/10.1109/PDP.2014.57},
doi = {10.1109/PDP.2014.57},
year = {2014},
date = {2014-02-01},
booktitle = {22nd Euromicro International Conference on Parallel, Distributed, and Network-Based Processing},
pages = {93-97},
publisher = {IEEE},
address = {Torino, Italy},
series = {PDP'14},
abstract = {Transactional Memory (TM) is reputed by many researchers to be a promising solution to ease parallel programming on multicore processors. This model provides the scalability of fine-grained locking while avoiding common issues of traditional mechanisms, such as deadlocks. During these almost twenty years of research, several TM systems and benchmarks have been proposed. However, TM is not yet widely adopted by the scientific community to develop parallel applications due to unanswered questions in the literature, such as "how to identify if a parallel application can exploit TM to achieve better performance?" or "what are the reasons for the poor performance of some TM applications?". In this work, we contribute to answering those questions through a comparative evaluation of a set of TM applications on four different state-of-the-art TM systems. Moreover, we identify some of the most important TM characteristics that directly impact the performance of TM applications. Our results can be useful to identify opportunities for optimizations.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Transactional Memory (TM) is reputed by many researchers to be a promising solution to ease parallel programming on multicore processors. This model provides the scalability of fine-grained locking while avoiding common issues of traditional mechanisms, such as deadlocks. During these almost twenty years of research, several TM systems and benchmarks have been proposed. However, TM is not yet widely adopted by the scientific community to develop parallel applications due to unanswered questions in the literature, such as "how to identify if a parallel application can exploit TM to achieve better performance?" or "what are the reasons for the poor performance of some TM applications?". In this work, we contribute to answering those questions through a comparative evaluation of a set of TM applications on four different state-of-the-art TM systems. Moreover, we identify some of the most important TM characteristics that directly impact the performance of TM applications. Our results can be useful to identify opportunities for optimizations. |
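As a minimal taste of the TM programming model discussed here, the sketch below uses GCC's experimental transactional memory extension (__transaction_atomic, built with -fgnu-tm). This is only an assumed illustration of the model; it is not one of the TM systems evaluated in the paper.

// Concurrent counter protected by transactions instead of a lock.
// Build with: g++ -fgnu-tm -pthread tm_counter.cpp
#include <iostream>
#include <thread>
#include <vector>

long counter = 0; // shared state updated inside transactions

void worker(int iters) {
    for (int i = 0; i < iters; ++i) {
        __transaction_atomic {   // conflicting updates are detected and retried
            ++counter;
        }
    }
}

int main() {
    std::vector<std::thread> threads;
    for (int t = 0; t < 4; ++t) threads.emplace_back(worker, 100000);
    for (auto &t : threads) t.join();
    std::cout << counter << '\n'; // expected: 400000
    return 0;
}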
2013
|
| Thomé, Bruna; Hentges, Eduardo; Griebler, Dalvan Computação em Nuvem: Análise Comparativa de Ferramentas Open Source para IaaS Inproceedings In: 11th Escola Regional de Redes de Computadores (ERRC), pp. 4, Sociedade Brasileira de Computação, Porto Alegre, RS, Brazil, 2013. @inproceedings{larcc:iaas_survey:ERRC:13,
title = {Computação em Nuvem: Análise Comparativa de Ferramentas Open Source para IaaS},
author = {Bruna Thomé and Eduardo Hentges and Dalvan Griebler},
url = {http://larcc.setrem.com.br/wp-content/uploads/2017/03/THOME_ERRC_2013.pdf},
year = {2013},
date = {2013-11-01},
booktitle = {11th Escola Regional de Redes de Computadores (ERRC)},
pages = {4},
publisher = {Sociedade Brasileira de Computação},
address = {Porto Alegre, RS, Brazil},
abstract = {This paper aims to study, present, and compare the main open source cloud computing tools. The concept of cloud computing is increasingly present in computer networks, and the difficulty lies not only in deploying a cloud but also in choosing the most appropriate tool. Therefore, this work studied the following tools: Eucalyptus, OpenNebula, OpenQRM, OpenStack, CloudStack, Ubuntu Enterprise Cloud, Abiquo, Convirt, Apache Virtual Lab, and Nimbus. For each of them, the characteristics, features, and modes of operation were considered, highlighting the most suitable scenario for each one.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
This paper aims to study, present, and compare the main open source cloud computing tools. The concept of cloud computing is increasingly present in computer networks, and the difficulty lies not only in deploying a cloud but also in choosing the most appropriate tool. Therefore, this work studied the following tools: Eucalyptus, OpenNebula, OpenQRM, OpenStack, CloudStack, Ubuntu Enterprise Cloud, Abiquo, Convirt, Apache Virtual Lab, and Nimbus. For each of them, the characteristics, features, and modes of operation were considered, highlighting the most suitable scenario for each one. |
| Griebler, Dalvan; Fernandes, Luiz G. Towards a Domain-Specific Language for Patterns-Oriented Parallel Programming Inproceedings doi In: Programming Languages - 17th Brazilian Symposium - SBLP, pp. 105-119, Springer Berlin Heidelberg, Brasilia, Brazil, 2013. @inproceedings{GRIEBLER:SBLP:13,
title = {Towards a Domain-Specific Language for Patterns-Oriented Parallel Programming},
author = {Dalvan Griebler and Luiz G. Fernandes},
url = {http://dx.doi.org/10.1007/978-3-642-40922-6_8},
doi = {10.1007/978-3-642-40922-6_8},
year = {2013},
date = {2013-10-01},
booktitle = {Programming Languages - 17th Brazilian Symposium - SBLP},
volume = {8129},
pages = {105-119},
publisher = {Springer Berlin Heidelberg},
address = {Brasilia, Brazil},
series = {Lecture Notes in Computer Science},
abstract = {Pattern-oriented programming has been used in parallel code development for many years now. During this time, several tools (mainly frameworks and libraries) proposed the use of patterns based on programming primitives or templates. The implementation of patterns using those tools usually requires human expertise to correctly set up communication/synchronization among processes. In this work, we propose the use of a Domain-Specific Language to create pattern-oriented parallel programs (DSL-POPP). This approach has the advantage of offering a higher programming abstraction level in which communication/synchronization among processes is hidden from programmers. We compensate for the reduction in programming flexibility by offering the possibility of using combined and/or nested parallel patterns (i.e., parallelism in levels), allowing the design of more complex parallel applications. We conclude this work by presenting an experiment in which we develop a parallel application exploiting combined and nested parallel patterns in order to demonstrate the main properties of DSL-POPP.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Pattern-oriented programming has been used in parallel code development for many years now. During this time, several tools (mainly frameworks and libraries) proposed the use of patterns based on programming primitives or templates. The implementation of patterns using those tools usually requires human expertise to correctly set up communication/synchronization among processes. In this work, we propose the use of a Domain-Specific Language to create pattern-oriented parallel programs (DSL-POPP). This approach has the advantage of offering a higher programming abstraction level in which communication/synchronization among processes is hidden from programmers. We compensate for the reduction in programming flexibility by offering the possibility of using combined and/or nested parallel patterns (i.e., parallelism in levels), allowing the design of more complex parallel applications. We conclude this work by presenting an experiment in which we develop a parallel application exploiting combined and nested parallel patterns in order to demonstrate the main properties of DSL-POPP. |
| Griebler, Dalvan; Fernandes, Luiz Gustavo DSL-POPP: Linguagem Específica de Domínio para Programação Paralela Orientada a Padrões Inproceedings In: Escola Regional de Alto Desempenho (ERAD-RS), pp. 2, Sociedade Brasileira de Computação (SBC), Porto Alegre, RS, BR, 2013. @inproceedings{GRIEBLER:ERAD:13,
title = {DSL-POPP: Linguagem Específica de Domínio para Programação Paralela Orientada a Padrões},
author = {Dalvan Griebler and Luiz Gustavo Fernandes},
url = {https://gmap.pucrs.br/dalvan/papers/2013/CR_ERAD_2013.pdf},
year = {2013},
date = {2013-03-01},
booktitle = {Escola Regional de Alto Desempenho (ERAD-RS)},
pages = {2},
publisher = {Sociedade Brasileira de Computação (SBC)},
address = {Porto Alegre, RS, BR},
abstract = {The purpose of this work is to lead the programmer to develop programs oriented to parallel patterns, which, implemented in the interface of a domain-specific language, help reduce the programming effort without compromising application performance. Experimental results with the master/slave pattern showed good performance for the parallelized algorithms.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
The purpose of this work is to lead the programmer to develop programs oriented to parallel patterns, which, implemented in the interface of a domain-specific language, help reduce the programming effort without compromising application performance. Experimental results with the master/slave pattern showed good performance for the parallelized algorithms. |
2012
|
| Griebler, Dalvan Proposta de uma Linguagem Específica de Domínio de Programação Paralela Orientada a Padrões Paralelos: Um Estudo de Caso Baseado no Padrão Mestre/Escravo para Arquiteturas Multi-Core Masters Thesis Faculdade de Informática - PPGCC - PUCRS, Porto Alegre, Brazil, 2012. @mastersthesis{GRIEBLER:DM:12,
title = {Proposta de uma Linguagem Específica de Domínio de Programação Paralela Orientada a Padrões Paralelos: Um Estudo de Caso Baseado no Padrão Mestre/Escravo para Arquiteturas Multi-Core},
author = {Dalvan Griebler},
url = {http://tede.pucrs.br/tde_busca/arquivo.php?codArquivo=4265},
year = {2012},
date = {2012-03-01},
address = {Porto Alegre, Brazil},
school = {Faculdade de Informática - PPGCC - PUCRS},
abstract = {This work proposes a Domain-Specific Language for Parallel Patterns Oriented Parallel Programming (LED-PPOPP). Its main purpose is to decrease the amount of effort necessary to develop parallel programs, offering a way to guide developers through patterns which are implemented by the language interface. The idea is to exploit this approach while avoiding large performance losses in the applications. Patterns are specialized solutions, previously studied and used to solve frequent problems. Thus, parallel patterns offer a higher abstraction level to organize the algorithms in the exploitation of parallelism, and they can also be easily learned by inexperienced programmers and software engineers. This work carried out a case study based on the Master/Slave pattern, focusing on the parallelization of algorithms for multi-core architectures. The implementation was validated through experiments that evaluate the programming effort to write code in LED-PPOPP and the performance achieved by the automatically generated parallel code. The obtained results let us conclude that a significant reduction in the parallel programming effort occurred in comparison to the use of the Pthreads library. Additionally, the final performance of the parallelized algorithms confirms that parallelization with LED-PPOPP does not bring significant losses relative to parallelization using OpenMP in most of the experiments carried out.},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
This work proposes a Domain-Specific Language for Parallel Patterns Oriented Parallel Programming (LED-PPOPP). Its main purpose is to decrease the amount of effort necessary to develop parallel programs, offering a way to guide developers through patterns which are implemented by the language interface. The idea is to exploit this approach while avoiding large performance losses in the applications. Patterns are specialized solutions, previously studied and used to solve frequent problems. Thus, parallel patterns offer a higher abstraction level to organize the algorithms in the exploitation of parallelism, and they can also be easily learned by inexperienced programmers and software engineers. This work carried out a case study based on the Master/Slave pattern, focusing on the parallelization of algorithms for multi-core architectures. The implementation was validated through experiments that evaluate the programming effort to write code in LED-PPOPP and the performance achieved by the automatically generated parallel code. The obtained results let us conclude that a significant reduction in the parallel programming effort occurred in comparison to the use of the Pthreads library. Additionally, the final performance of the parallelized algorithms confirms that parallelization with LED-PPOPP does not bring significant losses relative to parallelization using OpenMP in most of the experiments carried out. |
2011
|
| Raeder, Mateus; Griebler, Dalvan; Baldo, Lucas; Fernandes, Luiz G. Performance Prediction of Parallel Applications with Parallel Patterns Using Stochastic Methods Inproceedings doi In: Sistemas Computacionais (WSCAD-SSC), XII Simpósio em Sistemas Computacionais de Alto Desempenho, pp. 1-13, IEEE, Espírito Santo, Brasil, 2011. @inproceedings{RAEDER:WSCAD:11,
title = {Performance Prediction of Parallel Applications with Parallel Patterns Using Stochastic Methods},
author = {Mateus Raeder and Dalvan Griebler and Lucas Baldo and Luiz G. Fernandes},
url = {https://doi.org/10.1109/WSCAD-SSC.2011.18},
doi = {10.1109/WSCAD-SSC.2011.18},
year = {2011},
date = {2011-10-01},
booktitle = {Sistemas Computacionais (WSCAD-SSC), XII Simpósio em Sistemas Computacionais de Alto Desempenho},
pages = {1-13},
publisher = {IEEE},
address = {Espírito Santo, Brasil},
abstract = {One of the main problems in the high performance computing area is the difficulty of defining the best strategy to parallelize an application. In this context, the use of analytical methods to evaluate the performance behavior of such applications seems to be an interesting alternative and can help to identify the best implementation strategies. In this work, the Stochastic Automata Network formalism is adopted to model and evaluate the performance of parallel applications, especially those developed for cluster-of-workstations platforms. The methodology used is based on the construction of generic models to describe classical parallel implementation schemes, like Master/Slave, Parallel Phases, Pipeline, and Divide and Conquer. Those models are adapted to represent cases of real applications through the definition of input parameter values. Finally, aiming to verify the accuracy of the adopted technique, some comparisons with results from real application implementations are presented.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
One of the main problems in the high performance computing area is the difficulty of defining the best strategy to parallelize an application. In this context, the use of analytical methods to evaluate the performance behavior of such applications seems to be an interesting alternative and can help to identify the best implementation strategies. In this work, the Stochastic Automata Network formalism is adopted to model and evaluate the performance of parallel applications, especially those developed for cluster-of-workstations platforms. The methodology used is based on the construction of generic models to describe classical parallel implementation schemes, like Master/Slave, Parallel Phases, Pipeline, and Divide and Conquer. Those models are adapted to represent cases of real applications through the definition of input parameter values. Finally, aiming to verify the accuracy of the adopted technique, some comparisons with results from real application implementations are presented. |
| Griebler, Dalvan; Raeder, Mateus; Fernandes, Luiz Gustavo Padrões e Frameworks de Programação Paralela em Ambientes Multi-Core Inproceedings In: Escola Regional de Alto Desempenho (ERAD-RS), pp. 2, Sociedade Brasileira de Computação (SBC), Porto Alegre, RS, BR, 2011. @inproceedings{GRIEBLER:ERAD:11,
title = {Padrões e Frameworks de Programação Paralela em Ambientes Multi-Core},
author = {Dalvan Griebler and Mateus Raeder and Luiz Gustavo Fernandes},
url = {https://gmap.pucrs.br/dalvan/papers/2011/CR_ERAD_2011.pdf},
year = {2011},
date = {2011-03-01},
booktitle = {Escola Regional de Alto Desempenho (ERAD-RS)},
pages = {2},
publisher = {Sociedade Brasileira de Computação (SBC)},
address = {Porto Alegre, RS, BR},
abstract = {In recent years, the workstation and server market has gradually increased the number of cores and processors, bringing parallelism into programming and thereby increasing the complexity of dealing with this kind of hardware. In this scenario, mechanisms are needed that can provide scalability and allow parallelism to be exploited on these architectures, known as multi-core. It is not enough for such multiprocessor architectures to be available if they are not properly exploited. Debugging, race conditions, thread or process synchronization, and data access control are examples of critical factors in programming these parallel environments. New ways of abstracting the complexity of dealing with these systems are being studied so that parallel programming becomes less complex for software developers. Parallel patterns have been the subject of constant study with the aim of standardizing programming.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
In recent years, the workstation and server market has gradually increased the number of cores and processors, bringing parallelism into programming and thereby increasing the complexity of dealing with this kind of hardware. In this scenario, mechanisms are needed that can provide scalability and allow parallelism to be exploited on these architectures, known as multi-core. It is not enough for such multiprocessor architectures to be available if they are not properly exploited. Debugging, race conditions, thread or process synchronization, and data access control are examples of critical factors in programming these parallel environments. New ways of abstracting the complexity of dealing with these systems are being studied so that parallel programming becomes less complex for software developers. Parallel patterns have been the subject of constant study with the aim of standardizing programming.