Luciano Bohn, Clea Beatriz Macagnan and Clóvis Antônio Kronbauer
Abstract
Purpose
In 2020, the IFRS Foundation’s public consultation on Sustainability Reporting provided an opportunity for stakeholders to share their opinions on the Foundation’s proposals. This paper aims to analyze the comment letters that would legitimize the IFRS Foundation to institutionalize the International Sustainability Standards Board (ISSB).
Design/methodology/approach
This study used Python to develop a model for analyzing all 577 submissions that the IFRS Foundation received, using a combination of quantitative and qualitative content analysis methods.
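The paper does not disclose its coding scheme, so the following is only a minimal sketch of how a quantitative stance-coding step for comment letters might look in Python; the keyword lists, sample letters and the `classify_stance` helper are hypothetical illustrations, not the authors' model.

```python
from collections import Counter

# Hypothetical stance keywords -- illustrative only; the study's actual
# quantitative and qualitative coding scheme is not described in the abstract.
SUPPORT_TERMS = {"support", "welcome", "agree", "endorse"}
OPPOSE_TERMS = {"oppose", "disagree", "concern", "object"}

def classify_stance(letter: str) -> str:
    """Code a comment letter as 'support', 'oppose' or 'mixed/neutral'
    by counting stance keywords (a simple quantitative step)."""
    words = [w.strip(".,;:") for w in letter.lower().split()]
    support = sum(w in SUPPORT_TERMS for w in words)
    oppose = sum(w in OPPOSE_TERMS for w in words)
    if support > oppose:
        return "support"
    if oppose > support:
        return "oppose"
    return "mixed/neutral"

# Toy submissions standing in for the 577 real comment letters.
letters = [
    "We strongly support the creation of a sustainability standards board.",
    "We oppose the proposal; the Foundation lacks sustainability expertise.",
    "We welcome and agree with the direction of this consultation.",
]
tally = Counter(classify_stance(letter) for letter in letters)
share_support = tally["support"] / len(letters)
print(tally, f"{share_support:.0%}")
```

In a real pipeline this keyword pass would only flag candidate stances; the qualitative step (human reading of each letter's arguments) would then confirm or correct the coding.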
Findings
Support for the creation of the ISSB was not unanimous but reached 68%. Key supporting arguments were that the IFRS Foundation could harmonize sustainability reporting standards by leveraging its expertise in setting accounting standards, and use its existing relationships to enforce sustainability reporting. Key counterarguments were: the IFRS Foundation lacks expertise in the areas of sustainability and climate; sustainability reporting should be integrated into financial reporting rather than being disclosed separately; the proposals were limited in scope (single materiality, focus on investors’ information needs and climate change centrism); and the IFRS Foundation should aim to endorse already established frameworks instead.
Practical implications
A consensus between supporters and critics was the need to make sustainability reporting mandatory. Endorsed by IOSCO, the ISSB released its inaugural standards, focusing on climate-related disclosures, effective from 2024 in jurisdictions that choose to adopt them.
Originality/value
The findings show that the establishment of the ISSB by the IFRS Foundation only partially fulfilled the demand for the harmonization of sustainability reporting standards. As a result, broader and non-investor-centric sustainability information may continue to be reported under alternative frameworks.
Carmela Occhipinti, Antonio Carnevale, Luigi Briguglio, Andrea Iannone and Piercosma Bisconti
Abstract
Purpose
The purpose of this paper is to present the conceptual model of an innovative methodology (SAT) to assess the social acceptance of technology, especially focusing on artificial intelligence (AI)-based technology.
Design/methodology/approach
After a review of the literature, this paper presents the main features that distinguish SAT from current methods, namely a four-bubble approach and a mix of qualitative and quantitative techniques that assess technology as a socio-technical system. Each bubble determines the social variability of a cluster of values: User-Experience Acceptance, Social Disruptiveness, Value Impact and Trust.
Findings
The methodology is still under development and requires further specification and validation. Accordingly, the findings of this paper belong to the research discussion: they highlight the importance of assessing and forecasting the acceptance of technology in advance, and of building design strategies that foster sustainable and ethical technology adoption.
Social implications
Once the SAT method is validated, it could constitute a useful tool, with societal implications, for helping users, markets and institutions to appraise and determine the co-implications of technology and socio-cultural contexts.
Originality/value
New AI applications flood today’s users and markets, often without a clear understanding of their risks and impacts. In the European context, regulation (the EU AI Act) and guidance (the EU Ethics Guidelines for Trustworthy AI) try to fill this normative gap. The SAT method seeks to integrate the risk-based assessment of AI with an assessment of the perceptive-psychological and socio-behavioural aspects of its social acceptability.