Prompt engineering techniques for semantic enhancement in business process models
Abstract
Purpose
The term domain knowledge refers to the part of the world investigated by a specific discipline, including its taxonomy, vocabulary, concepts, theories, research methods and standards of justification. Our approach improves the quality of business process models (BPMs) by exploiting the domain knowledge provided by large language models (LLMs). Among these models, ChatGPT stands out as a notable example of an LLM capable of providing in-depth domain knowledge. Existing approaches are limited by their lack of domain coverage, which hinders their ability to fully capture and represent the domain's knowledge. To overcome this limitation, we exploit the knowledge embedded in GPT-3.5. Our approach does not ask GPT-3.5 to create a visual representation; instead, it asks the model to suggest missing concepts, thus helping the modeler improve his/her model. GPT-3.5 may then refine its suggestions based on feedback from the modeler.
Design/methodology/approach
We initiate the semantic quality enhancement of a BPM by first extracting its crucial elements, including pools, lanes, activities and artifacts, along with their corresponding relationships, such as lanes associated with pools, activities belonging to each lane and artifacts associated with each activity. These data are systematically gathered and structured into ArrayLists, a form of organized collection that allows for efficient data manipulation and retrieval. Once we have this structured data, our methodology involves creating a series of prompts based on each data element. We adopt three prompting approaches: zero-shot, few-shot and chain-of-thought (CoT) prompts. Each type of prompting is designed to interact with the OpenAI language model in a unique way, aiming to elicit a diverse array of suggestions. As we apply these prompting techniques, the OpenAI model processes each prompt and returns a list of suggestions tailored to that specific element of the BPM. Our approach operates independently of any specific notation and offers semi-automation, allowing modelers to select from a range of suggested options.
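As a minimal illustration of this pipeline (a sketch, not the authors' implementation), the following Java fragment assembles extracted elements into ArrayLists, builds a zero-shot prompt and sends it to the OpenAI chat completions endpoint. The element values, the prompt wording and the model name (gpt-3.5-turbo) are illustrative assumptions.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;

public class BpmPromptSketch {
    public static void main(String[] args) throws Exception {
        // Elements extracted from the BPM (values here are illustrative only).
        List<String> lanes = new ArrayList<>(List.of("Reception", "Medical staff"));
        List<String> activities = new ArrayList<>(List.of("Register patient", "Assign room"));
        List<String> artifacts = new ArrayList<>(List.of("Patient record"));

        // Zero-shot prompt: no worked examples, only the task plus the extracted model content.
        String prompt = "Given a hospitalization process with lanes " + lanes
                + ", activities " + activities + " and artifacts " + artifacts
                + ", suggest domain activities and artifacts that appear to be missing.";

        // Request body for the chat completions API (gpt-3.5-turbo is an assumption).
        String body = "{\"model\": \"gpt-3.5-turbo\", \"messages\": [{\"role\": \"user\", \"content\": \""
                + prompt.replace("\"", "\\\"") + "\"}]}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/chat/completions"))
                .header("Authorization", "Bearer " + System.getenv("OPENAI_API_KEY"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        // The JSON response carries the model's suggestions; the modeler then
        // decides which suggested concepts to add to the BPM.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}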
Findings
This study demonstrates the significant potential of prompt engineering techniques for enhancing the semantic quality of BPMs when integrated with LLMs such as ChatGPT. Our analysis of model activity richness and model artifact richness across different prompt techniques and input configurations reveals that carefully tailored prompts lead to more complete BPMs. This research is a step toward further exploration and optimization of LLMs in BPM development.
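For reference, such richness measures are commonly computed as coverage ratios against a domain ontology; one plausible formulation is sketched below, although the authors' exact definitions may differ.

\[
\mathrm{ActivityRichness} = \frac{\lvert A_{\mathrm{BPM}} \cap A_{\mathrm{onto}} \rvert}{\lvert A_{\mathrm{onto}} \rvert},
\qquad
\mathrm{ArtifactRichness} = \frac{\lvert F_{\mathrm{BPM}} \cap F_{\mathrm{onto}} \rvert}{\lvert F_{\mathrm{onto}} \rvert}
\]

where \(A_{\mathrm{BPM}}\) and \(F_{\mathrm{BPM}}\) denote the sets of activities and artifacts in the model, and \(A_{\mathrm{onto}}\) and \(F_{\mathrm{onto}}\) those defined in the domain ontology.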
Research limitations/implications
The main limitation is the domain ontology on which we rely to evaluate the semantic completeness of the enhanced BPM. In our future work, the modeler will have the option to ask for synonyms, hyponyms, hypernyms or keywords. This feature will facilitate the replacement of existing concepts, improving not only the completeness of the BPM but also the clarity and specificity of its concepts.
Practical implications
To demonstrate our methodology, we take the "Hospitalization" process as an illustrative example. Within the scope of our research, we present a selected set of instructions pertaining to chain-of-thought and few-shot prompting. Due to presentation constraints and the extensive nature of the instructions, we have not included every detail within the body of this paper; however, they can be found in the previously mentioned GitHub link. Two appendices are given at the end: Appendix 1 describes the different prompt instructions, and Appendix 2 presents the application of the instructions to our example.
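To give a flavour of what such an instruction might look like, the sketch below constructs a hypothetical few-shot prompt for the "Hospitalization" process. The worked examples are invented for illustration and are not taken from the authors' published instructions.

public class FewShotPromptSketch {
    public static void main(String[] args) {
        // Two solved examples precede the real query so the model imitates the format.
        String fewShotPrompt = String.join("\n",
                "Process: Loan application. Activities: [Receive application, Check credit].",
                "Missing activities: [Approve loan, Notify applicant].",
                "Process: Online order. Activities: [Receive order, Ship order].",
                "Missing activities: [Process payment, Send invoice].",
                "Process: Hospitalization. Activities: [Register patient, Assign room].",
                "Missing activities:");
        System.out.println(fewShotPrompt);
    }
}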
Originality/value
In our research, we rely on the application domain knowledge provided by ChatGPT (GPT-3.5) to enhance the semantic quality of BPMs. Typically, the semantic quality of a BPM suffers when the modeler lacks domain knowledge. To address this issue, our approach employs three prompt engineering methods designed to extract accurate domain knowledge. By utilizing these methods, we can identify and propose missing concepts, such as activities and artifacts. This not only ensures a more comprehensive representation of the business process but also contributes to the overall improvement of the model's semantic quality, leading to more effective and accurate business process management.
Keywords
Acknowledgements
The authors acknowledge the Arab Open University for funding this work.
Citation
Ayad, S. and Alsayoud, F. (2024), "Prompt engineering techniques for semantic enhancement in business process models", Business Process Management Journal, Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/BPMJ-02-2024-0108
Publisher
Emerald Publishing Limited
Copyright © 2024, Emerald Publishing Limited