Shravan Kumar Bandari, V.V. Mani and A. Drosopoulos
Abstract
Purpose
The purpose of this paper is to study the performance of generalized frequency division multiplexing (GFDM) in some frequency selective fading channels. The exact symbol error rate (SER) expressions in Hoyt (Nakagami-q) and Weibull-v fading channels are derived. A GFDM transceiver simulation test bed is provided to validate the obtained analytical expressions.
Design/methodology/approach
Modern cellular systems demand higher data rates, very low-latency transmissions and sensors with ultra-low-power consumption. Current fourth-generation (4G) cellular systems cannot meet these emerging demands of future mobile communication systems. To address this requirement, GFDM, a novel multi-carrier modulation technique, has been proposed to satisfy the needs of fifth-generation technology. GFDM is a block-based transmission method in which pulse shaping is applied circularly to individual subcarriers. Unlike traditional orthogonal frequency division multiplexing, GFDM transmits multiple symbols per subcarrier. The authors use the probability density function approach to derive the final analytical expressions.
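As a concrete illustration of the block-based modulation described above, the following Python/NumPy sketch builds one GFDM block by circularly shifting a raised-cosine-like prototype filter and modulating each subcarrier. The function names, the K = 8 / M = 5 block size and the filter construction are illustrative assumptions, not the paper's exact transceiver.

```python
import numpy as np

def rc_pulse(K, M, alpha):
    """Raised-cosine-like prototype filter of length N = K*M (illustrative design)."""
    N = K * M
    t = (np.arange(N) - N / 2) / K  # time normalized to the subsymbol spacing
    # small epsilon guards the 0/0 point of the raised-cosine formula
    h = np.sinc(t) * np.cos(np.pi * alpha * t) / (1.0 - (2.0 * alpha * t) ** 2 + 1e-12)
    return h / np.linalg.norm(h)

def gfdm_modulate(d, g):
    """Map a K x M symbol block to one GFDM block via circular pulse shaping:
    x[n] = sum_{k,m} d[k,m] * g[(n - m*K) mod N] * exp(j*2*pi*k*n/K)."""
    K, M = d.shape
    N = K * M
    n = np.arange(N)
    x = np.zeros(N, dtype=complex)
    for k in range(K):
        carrier = np.exp(2j * np.pi * k * n / K)
        for m in range(M):
            x += d[k, m] * np.roll(g, m * K) * carrier  # circular shift of the prototype
    return x

# Example: K = 8 subcarriers, M = 5 subsymbols, QPSK data, roll-off alpha = 0.3
K, M, alpha = 8, 5, 0.3
d = (np.random.choice([-1, 1], (K, M)) + 1j * np.random.choice([-1, 1], (K, M))) / np.sqrt(2)
x = gfdm_modulate(d, rc_pulse(K, M, alpha))
print(x.shape)  # (40,) -> one GFDM block of N = K*M samples
```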
Findings
A detailed analysis of GFDM performance under Hoyt (Nakagami-q), Weibull-v and log-normal shadowing fading channels is presented. Exact analytical formulae were derived that support the simulations carried out by the authors and by other researchers. The exact dependence of the SER on the fading parameters and on the roll-off factor α of the raised-cosine pulse-shaping filter was determined.
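The analytical SER expressions are validated against simulation in the paper; a minimal Monte Carlo cross-check for a single QPSK symbol stream over flat Hoyt (Nakagami-q) fading could look like the sketch below. The Hoyt gain is generated from two independent Gaussians with unequal variances (q being their ratio); the function names and parameter values are assumptions, and this is not the authors' full GFDM test bed.

```python
import numpy as np

rng = np.random.default_rng(0)

def hoyt_gain(q, size):
    """Complex channel gain whose envelope is Hoyt (Nakagami-q) distributed, E[|h|^2] = 1."""
    s1 = np.sqrt(1.0 / (1.0 + q ** 2))  # in-phase standard deviation
    s2 = q * s1                         # quadrature standard deviation (q = s2/s1)
    return s1 * rng.standard_normal(size) + 1j * s2 * rng.standard_normal(size)

def qpsk_ser_hoyt(q, snr_db, n_sym=200_000):
    """Monte Carlo SER of Gray-mapped QPSK over flat Hoyt fading with ideal equalization."""
    bits = rng.integers(0, 2, (n_sym, 2))
    s = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)  # unit-energy QPSK
    h = hoyt_gain(q, n_sym)
    n0 = 10 ** (-snr_db / 10)  # Es = 1
    noise = np.sqrt(n0 / 2) * (rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym))
    z = (h * s + noise) / h    # per-symbol zero-forcing equalization
    err = (np.sign(z.real) != np.sign(s.real)) | (np.sign(z.imag) != np.sign(s.imag))
    return err.mean()

print(qpsk_ser_hoyt(q=0.5, snr_db=15))  # compare against the closed-form SER expression
```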
Practical implications
Development and fabrication of high-performance GFDM systems under fading channel conditions.
Originality/value
Theoretical support to simulated system performance.
Abbas Rezaeian, Mona Mansoori and Amin Khajehdezfuly
Abstract
Purpose
Top-seat angle connection is known as one of the usual uncomplicated beam-to-column joints used in steel structures. This article investigates the fire performance of welded top-seat angle connections.
Design/methodology/approach
A finite element (FE) model, including nonlinear contact interactions, high-temperature properties of steel, and material and geometric nonlinearities, was created to perform the fire performance analysis. The FE model was verified by comparing its simulation results with test data. Using the verified model, 24 steel-framed top-seat angle connection assemblies were modeled, and parametric studies were performed to study the influence of critical factors on the performance of steel beams and their welded angle joints.
Findings
The results obtained from the parametric studies illustrate that decreasing the gap size and the top angle size and increasing the top angle thickness affect the fire behavior of top-seat angle joints and decrease the beam deflection by about 16% at temperatures beyond 570 °C. Also, the fire-resistance rating of the beam with a seat angle stiffener increases by about 15% compared to beams with or without a web stiffener. Failure of the beam occurs when the deflection exceeds span/30 at temperatures beyond 576 °C. The results also show that load type, load ratio and axial stiffness level significantly control the fire performance of the beam with top-seat angle connections in semi-rigid steel frames.
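A minimal sketch of the kind of temperature-dependent checks behind such a study is given below: it interpolates an elevated-temperature yield-strength reduction curve in the style of EN 1993-1-2 (values quoted from memory and purely illustrative; verify against the code before use) and applies the span/30 runaway-deflection criterion reported in the findings. The beam dimensions in the example are hypothetical.

```python
import numpy as np

# Illustrative effective-yield-strength reduction factors for carbon steel at elevated
# temperature, in the style of EN 1993-1-2 (verify against the code before use).
TEMP_C   = np.array([20, 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200])
KY_THETA = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 0.78, 0.47, 0.23, 0.11, 0.06, 0.04, 0.02, 0.0])

def yield_reduction(temp_c):
    """Linear interpolation of the yield-strength reduction factor k_y,theta."""
    return np.interp(temp_c, TEMP_C, KY_THETA)

def deflection_failure(deflection_mm, span_mm):
    """Runaway-deflection criterion cited in the findings: failure when deflection > span/30."""
    return deflection_mm > span_mm / 30.0

# Example: a hypothetical 6 m beam at 600 C with 230 mm mid-span deflection
print(yield_reduction(600.0))             # ~0.47 of the ambient yield strength
print(deflection_failure(230.0, 6000.0))  # True: 230 mm > 200 mm limit
```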
Originality/value
Development of design methodologies for these joints and the connected beams under fire conditions is delayed in current building codes due to the lack of an adequate understanding of the fire behavior of steel beams with welded top-seat angle connections.
Huseyin Saglik, Airong Chen and Rujin Ma
Abstract
Purpose
Beginners and even experienced analysts have difficulties in completing structural fire analyses due to numerical problems such as convergence errors and singularities, and they have to spend a lot of time making repetitive changes to the model. The aim of this article is to highlight the advantages of an explicit solver, which can eliminate these difficulties in finite element analyses involving highly nonlinear contacts, initial clearances between modeled parts and large deflections caused by high temperature. This article provides important information, especially for researchers and engineers who are new to structural fire analysis.
Design/methodology/approach
The finite element method is utilized to achieve these purposes. First, a comparative study between implicit and explicit solvers is conducted using Abaqus. Then, a validation study is carried out to illustrate the explicit procedure, using sequentially coupled heat transfer and structural analyses.
Findings
Explicit analysis offers an easier solution than implicit analysis for modeling multi-bolted connections under high temperatures. An optimum mesh density for bolted connections is presented to reflect realistic structural behavior. The presented explicit procedure with the proposed mesh density is used to validate an experimental study on a multi-bolted splice connection under the ISO 834 standard fire curve, and good agreement is achieved.
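For reference, the ISO 834 standard fire curve mentioned above is given by T(t) = 20 + 345 log10(8t + 1), with t in minutes and T in °C. A short sketch that tabulates it, for example to drive the heat-transfer step of a sequentially coupled analysis, is shown below; the Abaqus amplitude definition itself is not shown.

```python
import numpy as np

def iso834_temperature(t_min):
    """ISO 834 standard fire gas temperature in deg C after t_min minutes (ambient 20 C)."""
    return 20.0 + 345.0 * np.log10(8.0 * np.asarray(t_min, dtype=float) + 1.0)

# Tabulate the curve that drives the heat-transfer step of a sequentially coupled analysis
t = np.array([0, 15, 30, 60, 90, 120])  # minutes
for ti, Ti in zip(t, iso834_temperature(t)):
    print(f"{ti:5.0f} min -> {Ti:7.1f} C")
```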
Originality/value
What makes the study valuable is that the points to be considered in structural fire analysis are examined, providing a guide that future researchers can benefit from, especially for the modeling and analysis of multi-bolted connections in finite element software under high temperatures. The article can help shorten and even eliminate the iterative debugging phases, a problematic and very time-consuming process for many researchers.
Dessy Harisanty, Kathleen Lourdes Ballesteros Obille, Nove E. Variant Anna, Endah Purwanti and Fitri Retrialisca
Abstract
Purpose
This study aims to investigate the performance analysis, science mapping and future direction of artificial intelligence (AI) technology, applications, tools and software used to preserve, curate and predict the historical value of cultural heritage.
Design/methodology/approach
This study uses the bibliometric research method and the Scopus database to gather data. The keywords used are “artificial intelligence” and “cultural heritage,” yielding 718 records spanning 2001 to 2023. The data are restricted to the years 2001–2023, are in English and encompass all document types, including conference papers, articles, book chapters, lecture notes, reviews and editorials.
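A hedged sketch of how such a search and filter could be reproduced is shown below; the Scopus advanced-search string and the CSV column names are assumptions based on common Scopus exports, not the authors' exact query.

```python
import pandas as pd

# One possible Scopus advanced-search string reproducing the reported constraints
# (the authors' exact query is not given in the abstract).
QUERY = ('TITLE-ABS-KEY("artificial intelligence" AND "cultural heritage") '
         'AND PUBYEAR > 2000 AND PUBYEAR < 2024 AND LANGUAGE(english)')

def filter_export(csv_path: str) -> pd.DataFrame:
    """Re-apply the year/language filters to a Scopus CSV export (column names assumed)."""
    df = pd.read_csv(csv_path)
    df = df[df["Year"].between(2001, 2023)]
    df = df[df["Language of Original Document"] == "English"]
    return df

# records = filter_export("scopus_export.csv")   # hypothetical export file
# print(len(records))                            # should be close to the 718 records reported
```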
Findings
The performance analysis shows that research on the use of AI to aid the preservation of cultural heritage has been ongoing since 2001 and continues to grow. The countries contributing to this research include Italy, China, Greece, Spain and the UK, with Italy being the most prolific in terms of authored works. The research primarily falls under the disciplines of computer science, mathematics, engineering, social sciences, and arts and humanities, in that order. Document types mainly consist of articles and conference proceedings. In the science mapping process, five clusters have been identified. These clusters are labeled according to the contributions of AI tools, software, apps and technology to cultural heritage preservation and include “conservation assessment,” “exhibition and visualization,” “software solutions,” “virtual exhibition” and “metadata and database.” The future direction of research lies in extended reality, which integrates virtual reality (VR), augmented reality (AR) and mixed reality (MR); virtual restoration and preservation; 3D printing; as well as the utilization of robotics, drones and the Internet of Things (IoT) for mapping, conserving and monitoring historical and cultural heritage sites.
Practical implications
Cultural heritage institutions can use this result as a source for developing AI-based strategic planning for curating, preserving, protecting and presenting cultural heritage. Researchers and academics will gain insight into and a deeper understanding of the research trends and can use the interdisciplinarity of AI and cultural heritage to expand collaboration.
Social implications
This study helps to reveal the trend and evolution of research on AI and cultural heritage. The findings will also help fill the knowledge gap in research on AI and cultural heritage.
Originality/value
Some similar bibliometric studies have been conducted; however, there are still few studies on the contribution of AI to cultural heritage preservation from a wider perspective. The value of this study lies in the identified clusters in which AI is used to preserve, curate, present and assess cultural heritage.
Andreas D. Theocharis, Vasilis P. Charalampakos, Anastasios Drosopoulos and John Milias‐Argitis
Abstract
Purpose
The purpose of this paper is to develop a linearized equivalent electrical circuit of a photovoltaic generator. This circuit is appropriate for confronting problems such as numerical instability, increased computational time and the nonlinear/non-canonical form of system equations that arise when a photovoltaic system is modelled, either with differential equations or with equivalent resistive circuits generated by electromagnetic transient software packages for power system studies.
Design/methodology/approach
The proposed technique is based on the nonlinear and well-tested i_pv–v_pv equations, which are, however, used in an alternative mathematical manner. Applying the Newton-Raphson algorithm to the i_pv–v_pv equations decouples the i_pv and v_pv quantities in each time step of a digital simulation. This decoupling is represented by a linearized equivalent electrical circuit.
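A minimal sketch of this idea, assuming a simplified single-diode i_pv–v_pv model (series and shunt resistances omitted, illustrative parameters), is shown below: one Newton-Raphson linearization around the previous operating point yields a Norton-type equivalent, i.e. a current source in parallel with a conductance, for the current time step. The paper's exact equations and parameter values may differ.

```python
import numpy as np

# Simplified single-diode model (series/shunt resistances omitted for clarity):
#   i(v) = Iph - I0 * (exp(v / (n * Ns * Vt)) - 1)
IPH, I0, N_IDEAL, NS, VT = 5.0, 1e-9, 1.3, 60, 0.02585  # illustrative module parameters

def i_pv(v):
    return IPH - I0 * (np.exp(v / (N_IDEAL * NS * VT)) - 1.0)

def di_dv(v):
    return -I0 / (N_IDEAL * NS * VT) * np.exp(v / (N_IDEAL * NS * VT))

def norton_equivalent(v_k):
    """Newton-Raphson linearization around v_k:
       i ~ i(v_k) + di/dv(v_k) * (v - v_k) = Ieq - Geq * v,
    i.e. a current source Ieq in parallel with a conductance Geq for this time step."""
    g_eq = -di_dv(v_k)              # positive conductance
    i_eq = i_pv(v_k) + g_eq * v_k
    return i_eq, g_eq

i_eq, g_eq = norton_equivalent(v_k=30.0)  # operating point from the previous time step
print(i_eq, g_eq)
```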
Findings
Applying nodal analysis to equivalent resistive circuits using the proposed equivalent photovoltaic generator circuit leads to a system model based on linear algebraic equations, in contrast to the nonlinear models that normally result when a nonlinear i_pv–v_pv equation is used. In addition, with the proposed scheme, the regular systematic methods of circuit analysis are fully capable of deriving the differential equations of a photovoltaic system in standard form, thus avoiding the time-consuming solution process of nonlinear models.
Originality/value
In this paper, a new method of using the i_pv–v_pv characteristic equations is proposed, which remarkably simplifies photovoltaic system modeling. Moreover, a very important practical application is that, using this methodology, one can develop a photovoltaic generator element in electromagnetic transient programs for power system analysis, which is of great value to power engineers involved in photovoltaic system modeling.
Paolo Manghi, Michele Artini, Claudio Atzori, Alessia Bardi, Andrea Mannocci, Sandro La Bruzzo, Leonardo Candela, Donatella Castelli and Pasquale Pagano
Abstract
Purpose
The purpose of this paper is to present the architectural principles and the services of the D-NET software toolkit. D-NET is a framework where designers and developers find the tools for constructing and operating aggregative infrastructures (systems for aggregating data sources with heterogeneous data models and technologies) in a cost-effective way. Designers and developers can select from a variety of D-NET data management services, can configure them to handle data according to given data models, and can construct autonomic workflows to obtain personalized aggregative infrastructures.
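As a purely illustrative sketch of the aggregation pattern described above, the following Python outline collects records from heterogeneous sources, maps them to a common data model and pushes them to an index; the class and function names are hypothetical and do not reflect D-NET's actual API.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Record:
    identifier: str
    metadata: dict

def aggregate(sources: Iterable[Callable[[], Iterable[dict]]],
              to_common_model: Callable[[dict], Record],
              index: Callable[[Record], None]) -> int:
    """Harvest raw records from heterogeneous sources, map them to a common
    data model and push them to an index; returns the number of records processed."""
    count = 0
    for harvest in sources:
        for raw in harvest():
            index(to_common_model(raw))
            count += 1
    return count

# Usage: plug in per-source harvesters (e.g. OAI-PMH clients), a mapping function
# for the chosen common data model, and an indexing backend.
```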
Design/methodology/approach
The paper provides a definition of aggregative infrastructures, sketching their architecture and components, as inspired by real-case examples. It then describes the limits of current solutions, whose shortcomings lie in the realization and maintenance costs of such complex software. It proposes D-NET as an optimal solution for designers and developers willing to realize aggregative infrastructures, presenting the D-NET architecture and services and drawing a parallel with those of aggregative infrastructures. Finally, real cases of D-NET are presented to showcase the above statement.
Findings
The D-NET software toolkit is a general-purpose service-oriented framework in which designers can construct customized, robust, scalable, autonomic aggregative infrastructures in a cost-effective way. D-NET is today adopted by several EC projects, national consortia and communities to create customized infrastructures in diverse application domains, and other organizations are enquiring about or experimenting with its adoption. Its customizability and extensibility make D-NET a suitable candidate for creating aggregative infrastructures that mediate between different scientific domains and therefore support multi-disciplinary research.
Originality/value
D-NET is the first general-purpose framework of this kind. Other solutions are available in the literature, but they focus on specific use cases and therefore suffer from limited re-use in different contexts. Due to its maturity, D-NET can also be used by third-party organizations not necessarily involved in the software's design and maintenance.