2016
@inproceedings{hiperfcloud:nas_all:ERAD:16,
title = {Medindo o Desempenho de Implantações de OpenStack, CloudStack e OpenNebula em Aplicações Científicas},
author = {Adriano Vogel and Carlos A. F. Maron and Dalvan Griebler and Claudio Schepke},
url = {http://larcc.setrem.com.br/wp-content/uploads/2017/03/VOGEL_ERAD_2016.pdf},
year = {2016},
date = {2016-04-01},
booktitle = {16th Escola Regional de Alto Desempenho do Estado do Rio Grande do Sul (ERAD/RS)},
pages = {279-282},
publisher = {Sociedade Brasileira de Computação},
address = {São Leopoldo, RS, Brazil},
abstract = {Ambientes de nuvem possibilitam a execução de aplicações sob demanda e são uma alternativa para aplicações científicas. O desempenho é um dos principais desafios, devido ao uso da virtualização que induz perdas e variações. O objetivo do trabalho foi implantar ambientes de nuvem privada com diferentes ferramentas de IaaS, medindo o desempenho de aplicações paralelas. Consequentemente, os resultados apresentaram poucos contrastes.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{MARON:ERAD:16,
title = {Em Direção à um Benchmark de Workload Sintético para Paralelismo de Stream em Arquiteturas Multicore},
author = {Carlos A. F. Maron and Dalvan Griebler and Luiz Gustavo Fernandes},
url = {https://gmap.pucrs.br/dalvan/papers/2016/CR_ERAD_PG_2016.pdf},
year = {2016},
date = {2016-04-01},
booktitle = {Escola Regional de Alto Desempenho (ERAD-RS)},
pages = {171-172},
publisher = {Sociedade Brasileira de Computação (SBC)},
address = {São Leopoldo, RS, Brazil},
abstract = {O processamento de fluxos contínuos de dados (stream) está provocando novos desafios na exploração de paralelismo. Suítes clássicas de benchmarks não exploram totalmente os aspectos de stream, focando-se em problemas de natureza científica e de execução finita. Para endereçar este problema em ambientes de memória compartilhada, este trabalho propõe um benchmark de workload sintético voltado para o paralelismo de stream.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{larcc:IaaS_private:PDP:16,
title = {Private IaaS Clouds: A Comparative Analysis of OpenNebula, CloudStack and OpenStack},
author = {Adriano Vogel and Dalvan Griebler and Carlos A. F. Maron and Claudio Schepke and Luiz Gustavo Fernandes},
url = {http://ieeexplore.ieee.org/document/7445407/},
doi = {10.1109/PDP.2016.75},
year = {2016},
date = {2016-02-01},
booktitle = {24th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP)},
pages = {672-679},
publisher = {IEEE},
address = {Heraklion, Crete, Greece},
series = {PDP'16},
abstract = {Despite the evolution of cloud computing in recent years, the performance and comprehensive understanding of the available private cloud tools are still under research. This paper contributes to an analysis of the Infrastructure as a Service (IaaS) domain by mapping new insights and discussing the challenges for improving cloud services. The goal is to make a comparative analysis of OpenNebula, OpenStack and CloudStack tools, evaluating their differences on support for flexibility and resiliency. Also, we aim at evaluating these three cloud tools when they are deployed using a mutual hypervisor (KVM) for discovering new empirical insights. Our research results demonstrated that OpenStack is the most resilient and CloudStack is the most flexible for deploying an IaaS private cloud. Moreover, the performance experiments indicated some contrasts among the private IaaS cloud instances when running intensive workloads and scientific applications.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
2015
@article{ADORNES:IJSEKE:15,
title = {Coding Productivity in MapReduce Applications for Distributed and Shared Memory Architectures},
author = {Daniel Adornes and Dalvan Griebler and Cleverson Ledur and Luiz G. Fernandes},
url = {http://dx.doi.org/10.1142/S0218194015710096},
doi = {10.1142/S0218194015710096},
year = {2015},
date = {2015-12-01},
journal = {International Journal of Software Engineering and Knowledge Engineering},
volume = {25},
number = {10},
pages = {1739-1741},
publisher = {World Scientific},
abstract = {MapReduce was originally proposed as a suitable and efficient approach for analyzing and processing large amounts of data. Since then, many research efforts have contributed MapReduce implementations for distributed and shared memory architectures. Nevertheless, different architectural levels require different optimization strategies in order to achieve high-performance computing. Such strategies in turn have led to very different MapReduce programming interfaces among these implementations. This paper presents some research notes on coding productivity when developing MapReduce applications for distributed and shared memory architectures. As a case study, we introduce our current research on a unified MapReduce domain-specific language with code generation for Hadoop and Phoenix++, which has achieved a coding productivity increase from 41.84% up to 94.71% without significant performance losses (below 3%) compared to those frameworks.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@inproceedings{LEDUR:AICCSA:15,
title = {Towards a Domain-Specific Language for Geospatial Data Visualization Maps with Big Data Sets},
author = {Cleverson Ledur and Dalvan Griebler and Isabel Manssour and Luiz G. Fernandes},
url = {http://dx.doi.org/10.1109/AICCSA.2015.7507178},
doi = {10.1109/AICCSA.2015.7507178},
year = {2015},
date = {2015-11-01},
booktitle = {ACS/IEEE International Conference on Computer Systems and Applications},
pages = {8},
publisher = {IEEE},
address = {Marrakech, Morocco},
series = {AICCSA'15},
abstract = {Data visualization is an alternative for representing information and helping people gain faster insights. However, the programming/creating of a visualization for large data sets is still a challenging task for users with low-level of software development knowledge. Our goal is to increase the productivity of experts who are familiar with the application domain. Therefore, we proposed an external Domain-Specific Language (DSL) that allows massive input of raw data and provides a small dictionary with suitable data visualization keywords. Also, we implemented it to support efficient data filtering operations and generate HTML or Javascript output code files (using Google Maps API). To measure the potential of our DSL, we evaluated four types of geospatial data visualization maps with four different technologies. The experiment results demonstrated a productivity gain when compared to the traditional way of implementing (e.g., Google Maps API, OpenLayers, and Leaflet), and efficient algorithm implementation.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{larcc:hiperfcloud:JP:15,
title = {HiPerfCloud: Um Projeto de Alto Desempenho em Nuvem},
author = {Adriano Vogel and Carlos A. F. Maron and Vera L. L. Benedetti and Fauzi Shubeita and Claudio Schepke and Dalvan Griebler},
url = {http://larcc.setrem.com.br/wp-content/uploads/2017/03/HiPerfCloud_JP_SETREM_2015.pdf},
year = {2015},
date = {2015-10-01},
booktitle = {14th Jornada de Pesquisa SETREM},
pages = {4},
publisher = {SETREM},
address = {Três de Maio, Brazil},
abstract = {Computação em nuvem é uma necessidade real para os ambientes de pesquisas e empresas. Embora bastante usada e estudada, ela ainda traz diversos desafios. Um deles é a obtenção de alto desempenho, sendo o principal foco do projeto HiPerfCloud. Esta é uma tarefa complexa, pois é preciso combinar tecnologias, avaliar modelos de implantação e usar soluções adequadas. Este artigo irá apresentar o projeto de pesquisa, seus principais objetivos e os principais resultados alcançados até o momento. Além disso, demonstrar as perspectivas da pesquisa no projeto.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{GRIEBLER:PARCO:15,
title = {An Embedded C++ Domain-Specific Language for Stream Parallelism},
author = {Dalvan Griebler and Marco Danelutto and Massimo Torquati and Luiz G. Fernandes},
url = {http://dx.doi.org/10.3233/978-1-61499-621-7-317},
doi = {10.3233/978-1-61499-621-7-317},
year = {2015},
date = {2015-09-01},
booktitle = {Parallel Computing: On the Road to Exascale, Proceedings of the International Conference on Parallel Computing},
pages = {317-326},
publisher = {IOS Press},
address = {Edinburgh, Scotland, UK},
series = {ParCo'15},
abstract = {This paper proposes a new C++ embedded Domain-Specific Language (DSL) for expressing stream parallelism by using standard C++11 attributes annotations. The main goal is to introduce high-level parallel abstractions for developing stream based parallel programs as well as reducing sequential source code rewriting. We demonstrated that by using a small set of attributes it is possible to produce different parallel versions depending on the way the source code is annotated. The performances of the parallel code produced are comparable with those obtained by manual parallelization.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{larcc:cloudstack_openstack:ERRC:15,
title = {Analisando a Camada de Gerenciamento das Ferramentas CloudStack e OpenStack para Nuvens Privadas},
author = {Demétrius Roveda and Adriano Vogel and Carlos A. F. Maron and Dalvan Griebler and Claudio Schepke},
url = {http://larcc.setrem.com.br/wp-content/uploads/2017/04/ROVEDA_ERRC_2015.pdf},
year = {2015},
date = {2015-09-01},
booktitle = {13th Escola Regional de Redes de Computadores (ERRC)},
pages = {8},
publisher = {Sociedade Brasileira de Computação},
address = {Passo Fundo, Brazil},
abstract = {A camada de gerenciamento é um dos elementos mais importantes para o modelo de serviço IaaS nas ferramentas de administração de nuvem privada. Isso porque oferece aos usuários/clientes os recursos de infraestrutura sob demanda e controla questões administrativas da nuvem. Nesse artigo, o objetivo é realizar uma análise da interface de gerenciamento das ferramentas CloudStack e OpenStack. Com o estudo realizado, constatou-se que as ferramentas têm gerenciamento distinto. No entanto, o OpenStack se mostrou mais robusto e complexo, enquanto o CloudStack é mais centralizado e possui uma interface gráfica mais completa e intuitiva.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@article{larcc:openebula_openstack:REABTIC:15,
title = {Understanding, Discussing and Analyzing the OpenNebula's and OpenStack's IaaS Management Layers},
author = {Demétrius Roveda and Adriano Vogel and Dalvan Griebler},
url = {http://larcc.setrem.com.br/wp-content/uploads/2017/04/ROVEDA_REABTIC_2015A.pdf},
doi = {10.5281/zenodo.59467},
year = {2015},
date = {2015-08-01},
journal = {Revista Eletrônica Argentina-Brasil de Tecnologias da Informação e da Comunicação (REABTIC)},
volume = {1},
number = {3},
pages = {15},
publisher = {SETREM},
address = {Três de Maio, Brazil},
abstract = {The OpenNebula and OpenStack tools have been used by large corporations and research centers to implement IaaS clouds. The management layer is an important element for the user and the administrator because it deals with resource monitoring, development support, orchestration, and integration with other cloud platforms and services. The goal of this paper is to discuss and analyze the differences in the management layer, pointing out the advantages and disadvantages of each tool. The results demonstrated that OpenNebula is more restrictive and focused on simplicity in almost all comparisons, while OpenStack is fragmented, complex, and robust.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@inproceedings{ADORNES:SEKE:15,
title = {A Unified MapReduce Domain-Specific Language for Distributed and Shared Memory Architectures},
author = {Daniel Adornes and Dalvan Griebler and Cleverson Ledur and Luiz G. Fernandes},
url = {http://dx.doi.org/10.18293/SEKE2015-204},
doi = {10.18293/SEKE2015-204},
year = {2015},
date = {2015-07-01},
booktitle = {The 27th International Conference on Software Engineering & Knowledge Engineering},
pages = {6},
publisher = {Knowledge Systems Institute Graduate School},
address = {Pittsburgh, USA},
abstract = {MapReduce is a suitable and efficient parallel programming pattern for big data analysis. In recent years, many frameworks/languages have implemented this pattern to achieve high performance in data mining applications, particularly for distributed memory architectures (e.g., clusters). Nevertheless, the processor industry is now able to offer powerful processing on single machines (e.g., multi-core). Thus, these applications may address parallelism at another architectural level. The target problems of this paper are code reuse and programming effort reduction, since current solutions do not provide a single interface to deal with these two architectural levels. Therefore, we propose a unified domain-specific language in conjunction with transformation rules for code generation for Hadoop and Phoenix++. We selected these frameworks as state-of-the-art MapReduce implementations for distributed and shared memory architectures, respectively. Our solution achieves a programming effort reduction from 41.84% up to 95.43% without significant performance losses (below the threshold of 3%) compared to Hadoop and Phoenix++.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{LEDUR:ERAD:15,
title = {Uma Linguagem Específica de Domínio com Geração de Código Paralelo para Visualização de Grandes Volumes de Dados},
author = {Cleverson Ledur and Dalvan Griebler and Luiz Gustavo Fernandes and Isabel Manssour},
url = {https://gmap.pucrs.br/dalvan/papers/2015/CR_ERAD_PG_2015.pdf},
year = {2015},
date = {2015-04-01},
booktitle = {Escola Regional de Alto Desempenho (ERAD-RS)},
pages = {139-140},
publisher = {Sociedade Brasileira de Computação (SBC)},
address = {Gramado, RS, Brazil},
abstract = {Este artigo apresenta uma análise sobre linguagens específicas de domínio para a criação de visualizações. Ao final, propõe uma nova linguagem específica de domínio para geração de visualizações de quantidades massivas de dados, paralelizando não só a geração e a interação da visualização, mas também o pré-processamento dos dados brutos.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@article{larcc:security_IaaS_tools:REABTIC:15,
title = {Uma Avaliação Comparativa dos Mecanismos de Segurança nas Ferramentas OpenStack, OpenNebula e CloudStack},
author = {Demétrius Roveda and Adriano Vogel and Samuel Souza and Dalvan Griebler},
url = {http://larcc.setrem.com.br/wp-content/uploads/2017/04/ROVEDA_REABTIC_2015.pdf},
doi = {10.5281/zenodo.59478},
year = {2015},
date = {2015-03-01},
journal = {Revista Eletrônica Argentina-Brasil de Tecnologias da Informação e da Comunicação (REABTIC)},
volume = {1},
number = {4},
pages = {15},
publisher = {SETREM},
address = {Três de Maio, Brazil},
abstract = {The IaaS service model is gaining attention due to its importance to the cloud computing environment: it is responsible for simplifying the access and management of high-end processing and storage systems, besides being the base that allows the outsourcing of the upper layers, PaaS and SaaS. The IaaS cloud tools are responsible for controlling the virtual infrastructure as well as the environment security, which is an important characteristic for cloud applications, since the system can be integrated with public clouds through the Internet. In this paper, the goals are to evaluate and compare the security layer, from the administrator's point of view, of three open source IaaS tools: OpenStack, OpenNebula and CloudStack. Considering the security layer from the Dukaric taxonomy, the results showed that all the tools have an equivalent security level; however, there is evidence that not all the security features found in the tools fit the taxonomy description.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@inproceedings{hiperfcloud:nas_bech_openstack_opennebula:ERAD:15,
title = {Em Direção à Comparação do Desempenho das Aplicações Paralelas nas Ferramentas OpenStack e OpenNebula},
author = {Carlos A. F. Maron and Dalvan Griebler and Adriano Vogel and Claudio Schepke},
url = {http://larcc.setrem.com.br/wp-content/uploads/2017/03/MARON_ERAD_2015.pdf},
year = {2015},
date = {2015-03-01},
booktitle = {15th Escola Regional de Alto Desempenho do Estado do Rio Grande do Sul (ERAD/RS)},
pages = {205-208},
publisher = {Sociedade Brasileira de Computação},
address = {Gramado, RS, Brazil},
abstract = {A infraestrutura de Computação em Nuvem vem sendo uma alternativa à execução de aplicações de alto desempenho. No entanto, o desempenho pode ser prejudicado devido à camada de virtualização e à ação das ferramentas de administração de nuvem. O objetivo deste trabalho foi comparar o desempenho de aplicações em OpenStack e OpenNebula. Os resultados apresentaram diferença significativa entre as ferramentas, favorável ao OpenNebula.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
2014
@inproceedings{hiperfcloud:isolation_bechs_openstack_opennebula:ERRC:14,
title = {Avaliação e Comparação do Desempenho das Ferramentas OpenStack e OpenNebula},
author = {Carlos A. F. Maron and Dalvan Griebler and Adriano Vogel and Claudio Schepke},
url = {http://larcc.setrem.com.br/wp-content/uploads/2017/03/MARON_ERRC_2014.pdf},
year = {2014},
date = {2014-11-01},
booktitle = {12th Escola Regional de Redes de Computadores (ERRC)},
pages = {1-5},
publisher = {Sociedade Brasileira de Computação},
address = {Canoas},
abstract = {A computação em nuvem está cada vez mais presente nas infraestruturas corporativas. Por causa disso, diversas ferramentas estão sendo criadas para auxiliar na administração dos recursos na nuvem. O objetivo deste trabalho é avaliar o impacto que as ferramentas OpenStack e OpenNebula (implantadas em um ambiente de nuvem privado) podem causar no desempenho dos sistemas de memória, armazenamento, rede, e processador. Os resultados obtidos mostram que o desempenho no OpenStack é significativamente melhor nos testes do sistema de armazenamento, enquanto que no OpenNebula o restante dos testes foram melhores.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
|
| Griebler, Dalvan; Adornes, Daniel; Fernandes, Luiz G. Performance and Usability Evaluation of a Pattern-Oriented Parallel Programming Interface for Multi-Core Architectures Inproceedings In: The 26th International Conference on Software Engineering & Knowledge Engineering, pp. 25-30, Knowledge Systems Institute Graduate School, Vancouver, Canada, 2014. @inproceedings{GRIEBLER:SEKE:14,
title = {Performance and Usability Evaluation of a Pattern-Oriented Parallel Programming Interface for Multi-Core Architectures},
author = {Dalvan Griebler and Daniel Adornes and Luiz G. Fernandes},
url = {https://gmap.pucrs.br/dalvan/papers/2014/CR_SEKE_2014.pdf},
year = {2014},
date = {2014-07-01},
booktitle = {The 26th International Conference on Software Engineering & Knowledge Engineering},
pages = {25-30},
publisher = {Knowledge Systems Institute Graduate School},
address = {Vancouver, Canada},
abstract = {Multi-core architectures have increased the power of parallelism by coupling many cores in a single chip. This becomes even more complex for developers to exploit the available parallelism in order to provide high performance scalable programs. To address these challenges, we propose the DSL-POPP (Domain-Specific Language for Pattern-Oriented Parallel Programming), which links the pattern-based approach in the programming interface as an alternative to reduce the effort of parallel software development, and achieve good performance in some applications. In this paper, the objective is to evaluate the usability and performance of the master/slave pattern and compare it to the Pthreads library. Moreover, experiments have shown that the master/slave interface of the DSL-POPP reduces up to 50% of the programming effort, without significantly affecting the performance.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
|
| Maron, Carlos A. F.; Griebler, Dalvan; Schepke, Claudio Comparação das Ferramentas OpenNebula e OpenStack em Nuvem Composta de Estações de Trabalho Inproceedings In: 14th Escola Regional de Alto Desempenho do Estado do Rio Grande do Sul (ERAD/RS), pp. 173-176, Sociedade Brasileira de Computação, Alegrete, RS, Brazil, 2014. @inproceedings{larcc:evaluation_openstack_opnnebula:ERAD:14,
title = {Comparação das Ferramentas OpenNebula e OpenStack em Nuvem Composta de Estações de Trabalho},
author = {Carlos A. F. Maron and Dalvan Griebler and Claudio Schepke},
url = {http://larcc.setrem.com.br/wp-content/uploads/2017/03/MARON_ERAD_2014.pdf},
year = {2014},
date = {2014-03-01},
booktitle = {14th Escola Regional de Alto Desempenho do Estado do Rio Grande do Sul (ERAD/RS)},
pages = {173-176},
publisher = {Sociedade Brasileira de Computação},
address = {Alegrete, RS, Brazil},
abstract = {Cloud computing tools for the IaaS service model, such as OpenNebula and OpenStack, are typically deployed in large data centers. The goal of this work is to investigate and compare their behavior in a more constrained environment, such as one composed of workstations. The results showed that OpenNebula has the advantage in the main characteristics evaluated.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
|
| Rui, Fernando; Castro, Márcio; Griebler, Dalvan; Fernandes, Luiz Gustavo Evaluating the Impact of Transactional Characteristics on the Performance of Transactional Memory Applications Inproceedings doi In: 22nd Euromicro International Conference on Parallel, Distributed, and Network-Based Processing, pp. 93-97, IEEE, Torino, Italy, 2014. @inproceedings{gmap:RUI:PDP:14,
title = {Evaluating the Impact of Transactional Characteristics on the Performance of Transactional Memory Applications},
author = {Fernando Rui and Márcio Castro and Dalvan Griebler and Luiz Gustavo Fernandes},
url = {https://doi.org/10.1109/PDP.2014.57},
doi = {10.1109/PDP.2014.57},
year = {2014},
date = {2014-02-01},
booktitle = {22nd Euromicro International Conference on Parallel, Distributed, and Network-Based Processing},
pages = {93-97},
publisher = {IEEE},
address = {Torino, Italy},
series = {PDP'14},
abstract = {Transactional Memory (TM) is reputed by many researchers to be a promising solution to ease parallel programming on multicore processors. This model provides the scalability of fine-grained locking while avoiding common issues of traditional mechanisms, such as deadlocks. During these almost twenty years of research, several TM systems and benchmarks have been proposed. However, TM is not yet widely adopted by the scientific community to develop parallel applications due to unanswered questions in the literature, such as "how to identify if a parallel application can exploit TM to achieve better performance?" or "what are the reasons of poor performances of some TM applications?". In this work, we contribute to answer those questions through a comparative evaluation of a set of TM applications on four different state-of-the-art TM systems. Moreover, we identify some of the most important TM characteristics that impact directly the performance of TM applications. Our results can be useful to identify opportunities for optimizations.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
|
2013
|
| Thomé, Bruna; Hentges, Eduardo; Griebler, Dalvan Computação em Nuvem: Análise Comparativa de Ferramentas Open Source para IaaS Inproceedings In: 11th Escola Regional de Redes de Computadores (ERRC), pp. 4, Sociedade Brasileira de Computação, Porto Alegre, RS, Brazil, 2013. @inproceedings{larcc:iaas_survey:ERRC:13,
title = {Computação em Nuvem: Análise Comparativa de Ferramentas Open Source para IaaS},
author = {Bruna Thomé and Eduardo Hentges and Dalvan Griebler},
url = {http://larcc.setrem.com.br/wp-content/uploads/2017/03/THOME_ERRC_2013.pdf},
year = {2013},
date = {2013-11-01},
booktitle = {11th Escola Regional de Redes de Computadores (ERRC)},
pages = {4},
publisher = {Sociedade Brasileira de Computação},
address = {Porto Alegre, RS, Brazil},
abstract = {This paper aims to study, present, and compare the main open source cloud computing tools. The concept of cloud computing is increasingly present in computer networks. The difficulty lies not only in deploying a cloud, but also in choosing the most appropriate tool. Thus, this work studied the following tools: Eucalyptus, OpenNebula, OpenQRM, OpenStack, CloudStack, Ubuntu Enterprise Cloud, Abiquo, Convirt, Apache Virtual Lab, and Nimbus. For each of them, we considered their characteristics, functionality, and modes of operation, highlighting the scenario best suited to each one.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
|
| Griebler, Dalvan; Fernandes, Luiz G. Towards a Domain-Specific Language for Patterns-Oriented Parallel Programming Inproceedings doi In: Programming Languages - 17th Brazilian Symposium - SBLP, pp. 105-119, Springer Berlin Heidelberg, Brasilia, Brazil, 2013. @inproceedings{GRIEBLER:SBLP:13,
title = {Towards a Domain-Specific Language for Patterns-Oriented Parallel Programming},
author = {Dalvan Griebler and Luiz G. Fernandes},
url = {https://doi.org/10.1007/978-3-642-40922-6_8},
doi = {10.1007/978-3-642-40922-6_8},
year = {2013},
date = {2013-10-01},
booktitle = {Programming Languages - 17th Brazilian Symposium - SBLP},
volume = {8129},
pages = {105-119},
publisher = {Springer Berlin Heidelberg},
address = {Brasilia, Brazil},
series = {Lecture Notes in Computer Science},
abstract = {Pattern-oriented programming has been used in parallel code development for many years now. During this time, several tools (mainly frameworks and libraries) proposed the use of patterns based on programming primitives or templates. The implementation of patterns using those tools usually requires human expertise to correctly set up communication/synchronization among processes. In this work, we propose the use of a Domain Specific Language to create pattern-oriented parallel programs (DSL-POPP). This approach has the advantage of offering a higher programming abstraction level in which communication/synchronization among processes is hidden from programmers. We compensate the reduction in programming flexibility offering the possibility to use combined and/or nested parallel patterns (i.e., parallelism in levels), allowing the design of more complex parallel applications. We conclude this work presenting an experiment in which we develop a parallel application exploiting combined and nested parallel patterns in order to demonstrate the main properties of DSL-POPP.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
|
| Griebler, Dalvan; Fernandes, Luiz Gustavo DSL-POPP: Linguagem Específica de Domínio para Programação Paralela Orientada a Padrões Inproceedings In: Escola Regional de Alto Desempenho (ERAD-RS), pp. 2, Sociedade Brasileira de Computação (SBC), Porto Alegre, RS, BR, 2013. @inproceedings{GRIEBLER:ERAD:13,
title = {DSL-POPP: Linguagem Específica de Domínio para Programação Paralela Orientada a Padrões},
author = {Dalvan Griebler and Luiz Gustavo Fernandes},
url = {https://gmap.pucrs.br/dalvan/papers/2013/CR_ERAD_2013.pdf},
year = {2013},
date = {2013-03-01},
booktitle = {Escola Regional de Alto Desempenho (ERAD-RS)},
pages = {2},
publisher = {Sociedade Brasileira de Computação (SBC)},
address = {Porto Alegre, RS, BR},
abstract = {The purpose of this work is to lead the programmer to develop programs oriented to parallel patterns which, implemented in the interface of a domain-specific language, help reduce programming effort without compromising application performance. Experimental results with the master/slave pattern showed good performance for the parallelized algorithms.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
|
2012
|
| Griebler, Dalvan Proposta de uma Linguagem Específica de Domínio de Programação Paralela Orientada a Padrões Paralelos: Um Estudo de Caso Baseado no Padrão Mestre/Escravo para Arquiteturas Multi-Core Masters Thesis Faculdade de Informática - PPGCC - PUCRS, Porto Alegre, Brazil, 2012. @mastersthesis{GRIEBLER:DM:12,
title = {Proposta de uma Linguagem Específica de Domínio de Programação Paralela Orientada a Padrões Paralelos: Um Estudo de Caso Baseado no Padrão Mestre/Escravo para Arquiteturas Multi-Core},
author = {Dalvan Griebler},
url = {http://tede.pucrs.br/tde_busca/arquivo.php?codArquivo=4265},
year = {2012},
date = {2012-03-01},
address = {Porto Alegre, Brazil},
school = {Faculdade de Informática - PPGCC - PUCRS},
abstract = {This work proposes a Domain-Specific Language for Parallel Patterns Oriented Parallel Programming (LED-PPOPP). Its main purpose is to provide a way to decrease the amount of effort necessary to develop parallel programs, offering a way to guide developers through patterns which are implemented by the language interface. The idea is to exploit this approach avoiding large performance losses in the applications. Patterns are specialized solutions, previously studied, and used to solve a frequent problem. Thus, parallel patterns offer a higher abstraction level to organize the algorithms in the exploitation of parallelism. They also can be easily learned by inexperienced programmers and software engineers. This work carried out a case study based on the Master/Slave pattern, focusing on the parallelization of algorithms for multi-core architectures. The implementation was validated through experiments to evaluate the programming effort to write code in LED-PPOPP and the performance achieved by the parallel code automatically generated. The obtained results let us conclude that a significant reduction in the parallel programming effort occurred in comparison to the Pthreads library utilization. Additionally, the final performance of the parallelized algorithms confirms that the parallelization with LED-PPOPP does not bring on significant losses compared to parallelization using OpenMP in most of the experiments carried out.},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
|
2011
|
| Raeder, Mateus; Griebler, Dalvan; Baldo, Lucas; Fernandes, Luiz G. Performance Prediction of Parallel Applications with Parallel Patterns Using Stochastic Methods Inproceedings doi In: Sistemas Computacionais (WSCAD-SSC), XII Simpósio em Sistemas Computacionais de Alto Desempenho, pp. 1-13, IEEE, Espírito Santo, Brasil, 2011. @inproceedings{RAEDER:WSCAD:11,
title = {Performance Prediction of Parallel Applications with Parallel Patterns Using Stochastic Methods},
author = {Mateus Raeder and Dalvan Griebler and Lucas Baldo and Luiz G. Fernandes},
url = {https://doi.org/10.1109/WSCAD-SSC.2011.18},
doi = {10.1109/WSCAD-SSC.2011.18},
year = {2011},
date = {2011-10-01},
booktitle = {Sistemas Computacionais (WSCAD-SSC), XII Simpósio em Sistemas Computacionais de Alto Desempenho},
pages = {1-13},
publisher = {IEEE},
address = {Espírito Santo, Brasil},
abstract = {One of the main problems in the high performance computing area is the difficulty to define the best strategy to parallelize an application. In this context, the use of analytical methods to evaluate the performance behavior of such applications seems to be an interesting alternative and can help to identify the best implementation strategies. In this work, the Stochastic Automata Network formalism is adopted to model and evaluate the performance of parallel applications, specially developed for clusters of workstations platforms. The methodology used is based on the construction of generic models to describe classical parallel implementation schemes, like Master/Slave, Parallel Phases, Pipeline and Divide and Conquer. Those models are adapted to represent cases of real applications through the definition of input parameters values. Finally, aiming to verify the accuracy of the adopted technique, some comparisons with real applications implementation results are presented.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
|
| Griebler, Dalvan; Raeder, Mateus; Fernandes, Luiz Gustavo Padrões e Frameworks de Programação Paralela em Ambientes Multi-Core Inproceedings In: Escola Regional de Alto Desempenho (ERAD-RS), pp. 2, Sociedade Brasileira de Computação (SBC), Porto Alegre, RS, BR, 2011. @inproceedings{GRIEBLER:ERAD:11,
title = {Padrões e Frameworks de Programação Paralela em Ambientes Multi-Core},
author = {Dalvan Griebler and Mateus Raeder and Luiz Gustavo Fernandes},
url = {https://gmap.pucrs.br/dalvan/papers/2011/CR_ERAD_2011.pdf},
year = {2011},
date = {2011-03-01},
booktitle = {Escola Regional de Alto Desempenho (ERAD-RS)},
pages = {2},
publisher = {Sociedade Brasileira de Computação (SBC)},
address = {Porto Alegre, RS, BR},
abstract = {In recent years, the workstation and server market has gradually increased the number of cores and processors, bringing parallelism into their programming and thereby increasing the complexity of dealing with this kind of hardware. In this scenario, mechanisms are needed that can provide scalability and allow the exploitation of parallelism on these architectures, known as multi-core. It is not enough for such multiprocessor architectures to be available if they are not properly exploited. Debugging, race conditions, thread or process synchronization, and data access control are examples of critical factors in programming these parallel environments. New ways of abstracting the complexity of dealing with these systems are being studied, so that parallel programming becomes less complex for software developers. Parallel patterns have been the subject of constant study with the aim of standardizing this programming.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
|