Leonardo Candela, Donatella Castelli and Pasquale Pagano
Abstract
Purpose
The aim of this paper is to discuss how new technologies for supporting scientific research will possibly influence the librarians' work.
Design/methodology/approach
The discussion is conducted in a context that takes into account the emergence of e‐infrastructures as means to realise a new model of producing, using and sharing information resources and even to change the concept of information resource itself. At the core of this innovation there are virtual research environments, i.e. evolved versions of the current “research libraries”.
Findings
These environments provide scientists with collaborative and customised workspaces that support the production and exchange of results around the globe in a cost‐efficient manner. The experience gained with these innovative research environments within the D4Science project is reported.
Originality/value
On the basis of this experience, possible professional profiles are suggested for librarians working in these new evolved “research libraries”.
Yannis Tzitzikas, Carlo Allocca, Chryssoula Bekiari, Yannis Marketakis, Pavlos Fafalios, Martin Doerr, Nikos Minadakis, Theodore Patkos and Leonardo Candela
Abstract
Purpose
Marine species data are scattered across a series of heterogeneous repositories and information systems. No single repository can claim to have all marine species data. Moreover, information on marine species is made available through different formats and protocols. The purpose of this paper is to provide models and methods for integrating such information for publishing, browsing or querying it. Aiming at providing a valid and reliable knowledge ground for enabling semantic interoperability of marine species data, in this paper the authors motivate a top-level ontology, called MarineTLO, and discuss its use for creating MarineTLO-based warehouses.
Design/methodology/approach
In this paper the authors introduce a set of motivating scenarios that highlight the need for a top-level ontology. The authors then describe the main data sources (Fisheries Linked Open Data, ECOSCOPE, WoRMS, FishBase and DBpedia) that are used as a basis for constructing the MarineTLO.
Findings
The paper discusses the exploitation of MarineTLO for the construction of a warehouse. Furthermore, a series of uses of the MarineTLO-based warehouse is reported.
Originality/value
In this paper the authors describe the design of a top-level ontology for the marine domain able to satisfy the need for maintaining integrated sets of facts about marine species, thus assisting ongoing research on biodiversity. Apart from the ontology, the authors also elaborate on the mappings required for building integrated warehouses.
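As a hedged illustration of the kind of schema mapping a top-level-ontology warehouse performs, the sketch below normalizes records from two heterogeneous sources into one shared vocabulary before integration. All field names, mappings and identifier values are illustrative, not the actual MarineTLO terms or WoRMS/FishBase export formats.

```python
# Hypothetical source-specific records (shapes loosely modelled on
# taxonomic registries; identifiers are made up for illustration).
worms_record = {"scientificname": "Thunnus albacares", "rank": "Species", "AphiaID": 1001}
fishbase_record = {"Species": "Thunnus albacares", "FBname": "Yellowfin tuna", "SpecCode": 2002}

def to_common_model(record, mapping, source):
    """Rename source-specific fields to the shared (top-level) vocabulary."""
    out = {"source": source}
    for src_field, tlo_field in mapping.items():
        if src_field in record:
            out[tlo_field] = record[src_field]
    return out

# One mapping per source: source field -> shared field.
WORMS_MAP = {"scientificname": "scientific_name", "AphiaID": "source_id"}
FISHBASE_MAP = {"Species": "scientific_name", "FBname": "common_name", "SpecCode": "source_id"}

warehouse = [
    to_common_model(worms_record, WORMS_MAP, "WoRMS"),
    to_common_model(fishbase_record, FISHBASE_MAP, "FishBase"),
]

# After mapping, records about the same species can be grouped by the shared key.
by_species = {}
for rec in warehouse:
    by_species.setdefault(rec["scientific_name"], []).append(rec)
```

The point of the shared vocabulary is that grouping, browsing and querying only ever touch the common field names, regardless of which source a record came from.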
Paolo Manghi, Michele Artini, Claudio Atzori, Alessia Bardi, Andrea Mannocci, Sandro La Bruzzo, Leonardo Candela, Donatella Castelli and Pasquale Pagano
Abstract
Purpose
The purpose of this paper is to present the architectural principles and the services of the D-NET software toolkit. D-NET is a framework where designers and developers find the tools for constructing and operating aggregative infrastructures (systems for aggregating data sources with heterogeneous data models and technologies) in a cost-effective way. Designers and developers can select from a variety of D-NET data management services, can configure them to handle data according to given data models, and can construct autonomic workflows to obtain personalized aggregative infrastructures.
Design/methodology/approach
The paper provides a definition of aggregative infrastructures, sketching their architecture and components as inspired by real-case examples. It then describes the limits of current solutions, whose shortcomings lie in the realization and maintenance costs of such complex software. It then proposes D-NET as an optimal solution for designers and developers willing to realize aggregative infrastructures. The D-NET architecture and services are presented, drawing a parallel with those of aggregative infrastructures. Finally, real cases of D-NET are presented to showcase the statement above.
Findings
The D-NET software toolkit is a general-purpose service-oriented framework where designers can construct customized, robust, scalable, autonomic aggregative infrastructures in a cost-effective way. D-NET is today adopted by several EC projects, national consortia and communities to create customized infrastructures under diverse application domains, and other organizations are enquiring about, or experimenting with, its adoption. Its customizability and extendibility make D-NET a suitable candidate for creating aggregative infrastructures mediating between different scientific domains and therefore supporting multi-disciplinary research.
Originality/value
D-NET is the first general-purpose framework of this kind. Other solutions are available in the literature but focus on specific use-cases and therefore suffer from limited re-use in different contexts. Due to its maturity, D-NET can also be used by third-party organizations not necessarily involved in the software design and maintenance.
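The abstract describes selecting configurable data-management services and composing them into workflows. A minimal analogue of that idea (not D-NET's actual API; all stage names are invented for illustration) is a pipeline of small, configurable stages applied in sequence:

```python
# Stand-ins for configurable data-management services: each stage
# takes a list of records and returns a list of records.
def harvest(records):
    # In a real aggregative infrastructure this would pull from an
    # external data source; here it just passes records through.
    return list(records)

def transform(records, field_map):
    # Rename fields according to a per-source configuration.
    return [{field_map.get(k, k): v for k, v in r.items()} for r in records]

def index(records, store):
    # Persist records into a (here: in-memory) index.
    store.extend(records)
    return records

def run_workflow(stages, records):
    # A workflow is just an ordered composition of configured stages.
    for stage in stages:
        records = stage(records)
    return records

store = []
workflow = [
    harvest,
    lambda rs: transform(rs, {"dc:title": "title"}),
    lambda rs: index(rs, store),
]
run_workflow(workflow, [{"dc:title": "A record"}])
```

Swapping or reconfiguring stages, rather than rewriting the pipeline, is what makes this composition style cost-effective for building many infrastructures from one toolkit.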
Johann Van Wyk, Theo Bothma and Marlene Holmner
Abstract
Purpose
The purpose of this article is to give an overview of the development of a Virtual Research Environment (VRE) conceptual model for the management of research data at a South African university.
Design/methodology/approach
The research design of this article consists of empirical and non-empirical research. The non-empirical part consists of a critical literature review that synthesises the strengths, weaknesses (limitations) and omissions of VRE models identified in the literature in order to develop a conceptual VRE model. As part of the critical literature review, concepts were clarified and possible applications of VREs in research lifecycles and research data lifecycles were explored. The empirical part focused on the practical application of this model. This part of the article follows an interpretivist paradigm and a qualitative research approach, using case studies as the method of inquiry. Case studies with a positivist perspective were selected through purposive sampling, and inferences were drawn from the sample to design and test a conceptual VRE model and to investigate the management of research data through a VRE. The investigation was conducted through a process of participatory action research (PAR) and included semi-structured interviews and participant observation as data collection techniques. Findings were evaluated through formative and summative evaluation.
Findings
The article presents a VRE conceptual model, with identified generic component layers and components that could potentially be applied and used in different research settings/disciplines. The article also reveals the role that VREs play in the successful management of research data throughout the research lifecycle. Guidelines for setting up a conceptual VRE model are offered.
Practical implications
This article assisted in clarifying and validating the various components of a conceptual VRE model that could be used in different research settings and disciplines for research data management.
Originality/value
This article confirms/validates generic layers and components that would be needed in a VRE by synthesising these in a conceptual model in the context of a research lifecycle and presents guidelines for setting up a conceptual VRE model.
Abstract
The determination of the torsional stiffness and shear stresses in a multi‐webbed cylinder is quite straightforward. However, if the number of webs is large (say, five or more) the calculations become tedious. This Note gives a rapid, approximate solution by utilizing the physical conception of spreading the internal webs out into an equivalent uniform web medium.
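The equivalent-web idealization builds on standard thin-walled closed-section torsion theory. As a hedged sketch of the background relations (the textbook Bredt formulas, not the Note's own derivation), for a single cell under torque $T$:

```latex
% Shear flow and torsion constant for a single-cell thin-walled section;
% A_m is the area enclosed by the wall mid-line, t the wall thickness,
% S the mid-line perimeter.
q = \frac{T}{2 A_m},
\qquad
J = \frac{4 A_m^{2}}{\oint \dfrac{ds}{t}}
  = \frac{4 A_m^{2}\, t}{S} \quad (t \text{ uniform}).
```

For many closely spaced internal webs, smearing them into a continuous medium of equivalent thickness lets multi-cell relations of this kind be applied once, instead of solving the tedious system of per-cell shear-flow equations.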
Veljko Potkonjak, Kosta Jovanović, Owen Holland and James Uhomoibhi
Abstract
Purpose
The purpose of this paper is to present an improved concept of software‐based laboratory exercises, namely a Virtual Laboratory for Engineering Sciences (VLES).
Design/methodology/approach
The implementation of distance learning and e‐learning in engineering sciences (such as Mechanical and Electrical Engineering) still lags far behind current practice in narrative disciplines (Economics, Management, etc.). This is because education in technical disciplines requires laboratory exercises that provide skill acquisition and hands‐on experience. To overcome this problem for distance‐learning developers and practitioners, a new modular and hierarchically organized approach is needed.
Findings
The concept involves simulation models to emulate system dynamics, full virtual reality to provide visualization, advanced social‐clubbing to ensure proper communication, and an AI tutor to supervise the lab work. Its modularity and hierarchical organization offer the possibility of applying the concept to practically any engineering field: the higher level provides the general framework – it treats lab workplaces as objects regardless of the technical field they come from, and provides communication and supervision – while the lower level deals with particular workplaces. Improved student motivation is expected.
Originality/value
The proposed concept aims rather high, thus making the work truly challenging. With the current level of information and communication technologies, some of the required features can only be achieved with difficulty; however, the rapid growth of the relevant technologies supports the eventual practicality of the concept. This paper is not intended to present any final results, solutions, or experience. The idea is to promote the concept, identify problems, propose guidelines, and possibly open a discussion.
Paolo Manghi, Claudio Atzori, Michele De Bonis and Alessia Bardi
Abstract
Purpose
Several online services offer functionalities to access information from “big research graphs” (e.g. Google Scholar, OpenAIRE, Microsoft Academic Graph), which correlate scholarly/scientific communication entities such as publications, authors, datasets, organizations, projects, funders, etc. Depending on the target users, access can vary from searching and browsing content to the consumption of statistics for monitoring and provision of feedback. Such graphs are populated over time as aggregations of multiple sources and therefore suffer from major entity-duplication problems. Although the deduplication of graphs is a well-known and current problem, existing solutions are dedicated to specific scenarios, operate on flat collections or address local topology-driven challenges, and therefore cannot be re-used in other contexts.
Design/methodology/approach
This work presents GDup, an integrated, scalable, general-purpose system that can be customized to address deduplication over arbitrarily large information graphs. The paper presents its high-level architecture and its implementation as a service used within the OpenAIRE infrastructure system, and reports figures from real-case experiments.
Findings
GDup provides the functionalities required to deliver a fully-fledged entity deduplication workflow over a generic input graph. The system offers out-of-the-box Ground Truth management, acquisition of feedback from data curators and algorithms for identifying and merging duplicates, to obtain an output disambiguated graph.
Originality/value
To our knowledge, GDup is the only system in the literature that offers an integrated and general-purpose solution for the deduplication of graphs while targeting big-data scalability issues. GDup is today one of the key modules of the OpenAIRE infrastructure production system, which monitors Open Science trends on behalf of the European Commission, national funders and institutions.
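The generic workflow the paper describes (identify candidate duplicates, then merge them into disambiguated entities) can be sketched in miniature as pairwise matching followed by union-find grouping. This is an illustrative sketch of the technique, not GDup's actual implementation or API; the records and the similarity rule are invented for the example.

```python
from itertools import combinations

def normalize(title):
    """Crude matching key: lowercase, keep only alphanumerics."""
    return "".join(c for c in title.lower() if c.isalnum())

class UnionFind:
    """Groups record ids transitively: if a~b and b~c, then a, b, c merge."""
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])  # path compression
        return self.parent[x]
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

# Toy input "graph" nodes: publication records keyed by id.
records = {
    "r1": "The D-NET Software Toolkit",
    "r2": "The D-NET software toolkit",   # duplicate differing only in case
    "r3": "GDup: deduplication of big graphs",
}

uf = UnionFind()
for a, b in combinations(records, 2):
    if normalize(records[a]) == normalize(records[b]):
        uf.union(a, b)

# Each group of ids sharing a representative is one disambiguated entity.
groups = {}
for rid in records:
    groups.setdefault(uf.find(rid), []).append(rid)
```

A production system would replace the equality test with configurable similarity functions and blocking to avoid comparing all pairs, and would carry a Ground Truth of confirmed matches, but the identify-then-merge shape of the workflow is the same.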