Search results

1 – 5 of 5
Article
Publication date: 3 October 2016

Chi-Chung Chen, Li Ping Shen, Chien-Feng Huang and Bao-Rong Chang

Abstract

Purpose

The purpose of this paper is to propose a new population-based metaheuristic optimization algorithm, assimilation-accommodation mixed continuous ant colony optimization (ACACO), to improve the accuracy of Takagi-Sugeno-Kang-type fuzzy systems design.

Design/methodology/approach

The original N solution vectors in ACACO are sorted and categorized into three groups according to their ranks. The Research Learning scheme provides local search capability for the best-ranked group. The Basic Learning scheme uses the ant colony optimization (ACO) technique to move the worst-ranked group toward the best solution. The assimilation, accommodation and mutation operations of the Mutual Learning scheme let the middle-ranked group exchange and accommodate partial information between groups and search for information globally. Only the N top-performing solutions are retained after each iteration of learning.
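To make the three-group iteration concrete, here is a minimal Python sketch, assuming a real-valued minimization problem; the Gaussian perturbations, mixing rule, mutation rate and group sizes are illustrative stand-ins, not the paper's exact operators.

```python
import numpy as np

def acaco_sketch(objective, dim, n_solutions=30, iterations=100, seed=0):
    """Illustrative skeleton of the ACACO iteration described above.

    The Research/Basic/Mutual Learning operators are simplified stand-ins
    (Gaussian local search, ACO-like sampling around the best solution,
    and crossover-style mixing); the paper's exact operators differ.
    """
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1.0, 1.0, size=(n_solutions, dim))

    for _ in range(iterations):
        # Sort and categorize the N solutions into three groups by rank.
        pop = pop[np.argsort([objective(x) for x in pop])]
        third = n_solutions // 3
        best, middle, worst = pop[:third], pop[third:-third], pop[-third:]

        # Research Learning: local search around the best-ranked group.
        candidates = [best + rng.normal(0.0, 0.05, size=best.shape)]

        # Mutual Learning: assimilation/accommodation-style mixing of the
        # middle group with best-group members, plus occasional mutation.
        partners = best[rng.integers(0, len(best), size=len(middle))]
        mixed = 0.5 * (middle + partners)
        mutate = rng.random(mixed.shape) < 0.1
        mixed[mutate] += rng.normal(0.0, 0.3, size=mixed.shape)[mutate]
        candidates.append(mixed)

        # Basic Learning: ACO-style sampling pulling the worst group
        # toward the current best solution.
        candidates.append(best[0] + rng.normal(0.0, 0.2, size=worst.shape))

        # Keep only the N top-performing solutions for the next iteration.
        merged = np.vstack([pop] + candidates)
        pop = merged[np.argsort([objective(x) for x in merged])][:n_solutions]

    return pop[0]

# Usage: minimize the sphere function in 5 dimensions.
best = acaco_sketch(lambda x: float(np.sum(x ** 2)), dim=5)
```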

Findings

The proposed algorithm outperforms some previously reported ACO algorithms on fuzzy system design with the same number of rules. Performance comparisons with various previously published neural fuzzy systems also show its superiority, even when it uses fewer fuzzy rules than those systems.

Research limitations/implications

Future work will consider the application of the proposed ACACO to the recurrent fuzzy network.

Originality/value

The originality of this work lies in combining the assimilation-accommodation theory of the psychologist Jean Piaget with continuous ACO to propose a new population-based optimization algorithm whose superiority is demonstrated.

Details

Engineering Computations, vol. 33 no. 7
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 28 October 2014

Bao Rong Chang, Hsiu-Fen Tsai, Chi-Ming Chen and Chien-Feng Huang

Abstract

Purpose

The transition from physical servers to a virtualized infrastructure server encounters crucial problems such as server consolidation, virtual machine (VM) performance, workload density, total cost of ownership (TCO) and return on investment (ROI). To solve these problems, the purpose of this paper is to analyze virtualized cloud servers together with shared storage and to estimate the consolidation ratio and TCO/ROI in server virtualization.

Design/methodology/approach

This paper introduces five distinct virtualized cloud computing servers (VCCSs) and provides an appropriate assessment of the five well-known hypervisors built into them. The proposed methodology gives insight into the problem of transitioning from physical servers to a virtualized infrastructure server.
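The paper itself relies on the VMware calculator for these estimates; purely as an illustration of the arithmetic involved, here is a hypothetical back-of-the-envelope sketch of consolidation ratio and TCO/ROI. All cost figures and formulas below are invented placeholders, not the paper's model.

```python
def consolidation_ratio(n_physical: int, n_hosts: int) -> float:
    # e.g. 20 physical servers consolidated onto 4 hosts -> 5:1
    return n_physical / n_hosts

def tco(capex: float, annual_opex: float, years: int = 3) -> float:
    # Total cost of ownership over the evaluation period.
    return capex + annual_opex * years

def roi(baseline_tco: float, virtualized_tco: float) -> float:
    # Net savings relative to the cost of the virtualized deployment.
    return (baseline_tco - virtualized_tco) / virtualized_tco

# Hypothetical numbers: 20 physical servers vs. 4 hosts running 5 VMs each.
physical = tco(capex=20 * 3000.0, annual_opex=20 * 1200.0)
virtual = tco(capex=4 * 8000.0 + 5000.0, annual_opex=4 * 1500.0)  # hosts + licences
print(consolidation_ratio(20, 4), roi(physical, virtual))
```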

Findings

VM performance is nearly the same across hypervisors, but the estimates of VM density and TCO/ROI differ considerably among them. As a result, the authors recommend the ESX server hypervisor for schemes that need higher ROI and lower TCO. Alternatively, Proxmox VE is the second choice for those who want to minimize the initial investment while retaining a good console management interface.

Research limitations/implications

In the performance analysis, instead of ESX 5.0, the authors adopted ESXi 5.0, which is free software with limited functionality; it lacks full ESX server features such as distributed resource scheduling, high availability, consolidated backup, fault tolerance and disaster recovery. Moreover, this paper does not discuss the security of VCCSs, which concerns access control and cryptography in VMs and is left to future work.

Practical implications

In the process of virtualizing the network, ESX/ESXi restricts which brands of physical network card a VM can detect; only certain cards, for instance Intel and Broadcom models, are supported. Versions ESXi 5.0.0 and above also support parts of the Realtek series (Realtek 8186, Realtek 8169 and Realtek 8111E).

Originality/value

Precisely assessing hypervisors for server/desktop virtualization is a hard question that must be dealt with crisply before deploying new IT with a VCCS on site. The authors used the VMware calculator and developed an approach covering server/desktop consolidation, virtualization performance, VM density, TCO and ROI. As a result, this paper presents a comprehensive approach to analyzing five well-known hypervisors and gives recommendations to help IT managers choose the right solution for server virtualization.

Details

Engineering Computations, vol. 31 no. 8
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 1 August 2016

Chin-Fu Kuo, Yung-Feng Lu and Bao-Rong Chang

Abstract

Purpose

The purpose of this paper is to investigate the scheduling problem of real-time jobs executing on a dynamic voltage scaling (DVS) processor. The jobs must complete their executions by their deadlines, and the energy consumption must also be minimized.

Design/methodology/approach

A two-phase energy-efficient scheduling algorithm is proposed to solve the scheduling problem for real-time jobs. In the off-line phase, the maximum instantaneous total density and the instantaneous total density (ITD) are derived to set the processor speed for each time instant, and the derived speeds are saved for run time. In the on-line phase, the authors set the processor speed according to the derived speeds and set a timer to expire at the end time instant of the speed in use.
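A rough sketch of the off-line phase, using a common textbook definition of job density (required cycles spread over the job's feasible window); the paper's exact ITD formulation and speed derivation may differ.

```python
from dataclasses import dataclass

@dataclass
class Job:
    arrival: float   # release time
    deadline: float
    cycles: float    # worst-case execution cycles

def density(job: Job) -> float:
    # Assumed density notion: cycles spread uniformly over the window.
    return job.cycles / (job.deadline - job.arrival)

def offline_speed_schedule(jobs: list[Job]) -> list[tuple[float, float, float]]:
    """For each segment between consecutive arrival/deadline instants, set
    the speed to the instantaneous total density of the jobs active in that
    segment. Returns (start, end, speed) triples saved for run time."""
    points = sorted({j.arrival for j in jobs} | {j.deadline for j in jobs})
    schedule = []
    for start, end in zip(points, points[1:]):
        speed = sum(density(j) for j in jobs
                    if j.arrival <= start and end <= j.deadline)
        schedule.append((start, end, speed))
    return schedule

# On-line phase (conceptually): at each segment start, set the processor to
# the precomputed speed and arm a timer for the segment's end instant.
jobs = [Job(0.0, 10.0, 20.0), Job(2.0, 6.0, 8.0)]
for start, end, speed in offline_speed_schedule(jobs):
    print(f"[{start}, {end}): run at speed {speed:.2f}")
```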

Findings

When the DVS processor executes a job at a proper speed, the energy consumption of the system can be minimized.

Research limitations/implications

This paper does not consider jobs with precedence constraints; these can be explored in future work.

Practical implications

Experimental results of the proposed schemes are presented to show their effectiveness.

Originality/value

The experimental results show that the proposed scheduling algorithm, ITD, achieves energy savings and keeps the processor fully utilized.

Details

Engineering Computations, vol. 33 no. 6
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 28 October 2014

Chien-Feng Huang, Tsung-Nan Hsieh, Bao Rong Chang and Chih-Hsiang Chang

Abstract

Purpose

Stock selection has long been identified as a challenging task. This line of research is highly contingent upon reliable stock ranking for successful portfolio construction. The purpose of this paper is to employ methods from computational intelligence (CI) to solve this problem more effectively.

Design/methodology/approach

The authors develop a risk-adjusted strategy that improves upon previous stock selection models through two main risk measures: downside risk and variation in returns. Moreover, the authors employ the genetic algorithm to optimize model parameters and select input variables simultaneously.
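As an illustration of the downside-risk measure used in this paper, the following sketch computes maximum drawdown from a return series and folds it into a toy fitness score of the kind a genetic algorithm could optimize; the penalty weighting is a placeholder, not the paper's exact formulation.

```python
import numpy as np

def max_drawdown(returns: np.ndarray) -> float:
    """Largest peak-to-trough decline of the cumulative equity curve."""
    equity = np.cumprod(1.0 + returns)
    running_peak = np.maximum.accumulate(equity)
    return float(np.max(1.0 - equity / running_peak))

def risk_adjusted_fitness(returns: np.ndarray, penalty: float = 1.0) -> float:
    # Illustrative GA fitness: reward mean return, penalize the downside-risk
    # proxy (maximum drawdown). The weighting is an assumption.
    return float(np.mean(returns)) - penalty * max_drawdown(returns)

# Usage: score a hypothetical portfolio's monthly return series.
rng = np.random.default_rng(1)
monthly = rng.normal(0.01, 0.05, size=60)
print(risk_adjusted_fitness(monthly))
```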

Findings

It is found that the proposed risk-adjusted methodology via maximum drawdown significantly outperforms the benchmark and improves the previous model in the performance of stock selection.

Research limitations/implications

Future work will consider an extensive study of the risk-adjusted model using other risk measures, such as Value at Risk and Block Maxima. The authors also intend to use financial data from other countries, if available, to assess whether the method is generally applicable and robust across different environments.

Practical implications

The authors expect this risk-adjusted model to advance CI research in financial engineering and to provide a promising solution to stock selection in practice.

Originality/value

The originality of this work is that maximum drawdown is successfully incorporated into the CI-based stock selection model, whose effectiveness is validated with strong statistical evidence.

Details

Engineering Computations, vol. 31 no. 8
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 1 August 2016

Bao-Rong Chang, Hsiu-Fen Tsai, Yun-Che Tsai, Chin-Fu Kuo and Chi-Chung Chen

Abstract

Purpose

The purpose of this paper is to integrate and optimize a multiple big data processing platform with the features of high performance, high availability and high scalability in a big data environment.

Design/methodology/approach

First, the integration of Apache Hive, Cloudera Impala and BDAS Shark makes the platform support SQL-like queries. Next, users access a single interface, and the proposed optimizer automatically selects the big data warehouse platform with the best performance. Finally, the distributed memory storage system Memcached, incorporated into the distributed file system Apache HDFS, is employed to cache query results. Therefore, when users issue the same SQL command, the same result is returned rapidly from the cache system instead of repeating the search in the big data warehouse and taking a longer time to retrieve.
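A minimal sketch of the caching layer described above, with a plain dictionary standing in for Memcached so the example runs without a cache server; the SQL normalization and key scheme are assumptions, not the paper's implementation.

```python
import hashlib

class QueryCacheFront:
    """Hash the SQL text, check the cache first, and only fall through to
    the big data warehouse (Hive/Impala/Shark in the paper) on a miss."""

    def __init__(self, warehouse_query):
        self.cache = {}                      # stand-in for a Memcached client
        self.warehouse_query = warehouse_query

    def query(self, sql: str):
        # Normalize and hash the SQL so repeated commands map to one key.
        key = hashlib.sha256(sql.strip().lower().encode()).hexdigest()
        if key in self.cache:                # repeated SQL: serve from cache
            return self.cache[key]
        result = self.warehouse_query(sql)   # slow path: query the warehouse
        self.cache[key] = result
        return result

# Usage with a dummy warehouse backend.
front = QueryCacheFront(lambda sql: f"rows for: {sql}")
front.query("SELECT * FROM logs")   # miss: executes in the warehouse
front.query("select * from logs")   # hit: served from the cache
```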

Findings

As a result, the proposed approach significantly improves overall performance and dramatically reduces search time when querying a database, especially for highly repeated SQL commands under multi-user mode.

Research limitations/implications

Currently, Shark’s latest stable version, 0.9.1, does not support the latest versions of Spark and Hive. In addition, this series of software supports only Oracle JDK7; using Oracle JDK8 or OpenJDK causes serious errors, and some of the software will not run.

Practical implications

One problem with this system is that some blocks go missing when too many blocks are stored in one result (about 100,000 records). Another problem is that sequential writes into the in-memory cache waste time.

Originality/value

When the remaining memory capacity on each server is 2 GB or less, Impala and Shark incur heavy page swapping, causing extremely low performance. When the data scale grows larger, JVM I/O exceptions may occur and crash the program. However, when the remaining memory capacity is sufficient, Shark is faster than Hive and Impala; Impala’s memory consumption lies between those of Shark and Hive, and that amount of remaining memory suffices for Impala’s maximum performance. In this study, each server allocates 20 GB of memory for cluster computing, and the remaining-memory critical points are set at Level 1: 3 percent (0.6 GB), Level 2: 15 percent (3 GB) and Level 3: 75 percent (15 GB). The program automatically selects Hive when remaining memory is below 15 percent, Impala from 15 to 75 percent, and Shark above 75 percent.
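Since the selection rule above is stated numerically, it transcribes almost directly into code; the handling of the exact 15 and 75 percent boundaries is an assumption, as the abstract does not specify ties.

```python
def select_platform(remaining_memory_fraction: float) -> str:
    """Platform selection rule as reported above: Hive below 15 percent
    remaining memory, Impala from 15 to 75 percent, Shark above 75 percent.
    Boundary handling at exactly 0.15 and 0.75 is an assumption."""
    if remaining_memory_fraction < 0.15:
        return "Hive"
    if remaining_memory_fraction <= 0.75:
        return "Impala"
    return "Shark"

# With 20 GB allocated per server: 0.6 GB remaining -> Hive,
# 3 GB -> Impala, 15 GB -> Impala, 16 GB -> Shark.
for remaining_gb in (0.6, 3.0, 15.0, 16.0):
    print(remaining_gb, select_platform(remaining_gb / 20.0))
```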
