Korea Computer Congress 2023(KCC 2023)

2023-06-17 news workshop review

김서희 (Seo Hee, Kim)
Undergraduate

Designing the Future of Computing for the Next 50 Years: Leading Together from the Past 50 Years to a New Era

From June 18th to 20th, 2023, the Korea Computer Congress took place at the Ramada Plaza Jeju Hotel. This highly anticipated conference brought together researchers, professionals, and enthusiasts in the field of computing.

Notably, the prestigious LIFS lab made a significant contribution to the event, with four papers accepted in the artificial intelligence domain.


Among the talented researchers from LIFS lab, students Choi Ari, Gu Yeri, and Park Yerin had the honor of presenting their papers during the Oral session. Their research showcased innovative advancements in artificial intelligence and offered valuable insights into emerging technologies and applications.

Additionally, student Park Jeewon’s paper was selected for presentation during the Poster session. This session provided an interactive platform for researchers to engage with attendees and exchange ideas in a more informal setting.

Here is a brief description of each student’s paper:


Choi Ari’s paper explores similar-case retrieval by analyzing criminal lower-court judgments. Because the models showed little consensus, a human evaluation was attempted, but agreement among the human evaluators was also unclear. Evaluators regarded the weapon and method of murder as important factors, yet perceived the motive as even more crucial. Motives such as revenge, grudge, monetary gain, and mental instability were categorized from the crime facts in the judgments, and the study constructed a dataset of sentences and keywords describing these motives. The KLUE (bert-base) model performed best on the sentence data, while the Random Forest model performed best on the keyword data. Transformer-based models trained on extensive data appear preferable for reaching human-level performance in keyword extraction. Future research will focus on enabling meaningful similar-case searches by considering various crime methods in addition to motives.
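
As an illustration of the two model families mentioned above, the following is a minimal sketch assuming the public klue/bert-base checkpoint and scikit-learn’s Random Forest; the motive labels, example sentence, and keyword strings are hypothetical placeholders, not data from the paper.

```python
# Minimal sketch of the two classifier families: a KLUE BERT sentence
# classifier and a Random Forest over keyword features. All labels and
# example texts below are illustrative placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer

# Sentence data: classify motive sentences with a klue/bert-base head
# (in practice this head would be fine-tuned on the constructed dataset).
tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "klue/bert-base", num_labels=4  # e.g. revenge, grudge, monetary gain, mental instability
)
inputs = tokenizer("피고인은 금전적 이득을 노리고 범행을 저질렀다.", return_tensors="pt")
with torch.no_grad():
    motive_logits = model(**inputs).logits

# Keyword data: bag-of-keywords features fed to a Random Forest classifier.
keyword_docs = ["원한 보복", "금전 채무", "정신질환 충동"]  # hypothetical keyword strings
labels = [0, 2, 3]
features = CountVectorizer().fit_transform(keyword_docs)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(features, labels)
```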


Park Yerin’s study proposes the “GPT-3.5 & Annotator-in-the-loop” approach to efficiently build a crime-facts information extraction dataset. Using this method, the authors achieved time and cost savings of over 90%, and found that providing specific conditions and examples improves the accuracy of GPT-3.5’s prompted outputs.

Compared with a previous study, fine-tuning GPT-3.5 on this dataset could yield even better results. The team plans to use the dataset to build a crime-domain-specific model and to automate crime-facts information extraction tasks in the future.
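
To illustrate the prompting pattern described above, here is a minimal sketch assuming the OpenAI chat completions API (openai>=1.0 Python client); the prompt wording, extraction fields, example passage, and the human_review helper are hypothetical stand-ins, not the paper’s actual prompts or pipeline.

```python
# Sketch of "conditions + example" prompting of GPT-3.5 for crime-facts
# extraction, followed by a human annotator reviewing the draft output.
from openai import OpenAI  # assumes an OPENAI_API_KEY environment variable

client = OpenAI()

crime_fact_text = "피고인은 2022. 1. 1. 서울에서 둔기로 피해자 A를 폭행하였다."  # placeholder passage

prompt = (
    "다음 범죄사실에서 피해자, 범행 장소, 범행 도구를 JSON으로 추출하라.\n"
    "조건: 해당 항목이 없으면 null로 표시한다.\n"
    '예시 입력: "피고인은 부산에서 흉기로 피해자 B를 위협하였다."\n'
    '예시 출력: {"피해자": "B", "범행 장소": "부산", "범행 도구": "흉기"}\n'
    f"입력: {crime_fact_text}"
)

draft = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

def human_review(record: str) -> str:
    """Hypothetical annotator-in-the-loop step: a person checks and corrects the draft."""
    return record

final_record = human_review(draft)  # only reviewed records enter the dataset
```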


Gu Yeri’s paper proposes a system for extracting argument structures to assist the logical review process in criminal investigations. To this end, the authors develop the Toulmin+ argument model, suited to analyzing argument structures in Korean legal judgments, and construct a new corpus tagged with argument components and their relationships. Argument components are classified with a pre-trained Korean BERT model trained on the constructed dataset, and two modules, a multiple-choice model and a BERT-based natural language inference (NLI) model, classify the relationships between components. Experimental results confirm that the proposed transformer-based architecture successfully extracts argument structures from legal text, and the automatically extracted structures align with manually analyzed argument patterns in the judgments.
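
A minimal sketch of such a two-stage pipeline is shown below, assuming the public klue/bert-base checkpoint; the label sets, example sentences, and the reuse of the same checkpoint for both stages are illustrative assumptions, since the paper’s fine-tuned models and tagged corpus are not reproduced here.

```python
# Two-stage sketch: (1) classify each sentence as an argument component,
# (2) classify the relation between two components as a sentence pair.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")

# Stage 1: argument component classification (e.g. claim vs. premise);
# this head would be fine-tuned on the tagged corpus in practice.
component_model = AutoModelForSequenceClassification.from_pretrained(
    "klue/bert-base", num_labels=2
)
enc = tokenizer("피고인의 자백은 신빙성이 있다.", return_tensors="pt")
with torch.no_grad():
    component_logits = component_model(**enc).logits

# Stage 2: relation classification over a pair of components,
# formulated NLI-style (e.g. support / attack / none).
relation_model = AutoModelForSequenceClassification.from_pretrained(
    "klue/bert-base", num_labels=3
)
pair = tokenizer("피고인의 자백은 신빙성이 있다.", "따라서 공소사실이 인정된다.", return_tensors="pt")
with torch.no_grad():
    relation_logits = relation_model(**pair).logits
```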


Park Jeewon’s paper evaluates the reasoning process of GPT-3.5 and GPT-4 on LSAT problem-solving using two different prompts. With the Zero-shot + CoT prompt, GPT-3.5’s performance deteriorated in all sections. Analysis of GPT’s problem-solving process showed that prompt design strongly influences how the model is used, and identified error cases in which GPT fails to meet the conditions specified in the prompt. The study suggests that these errors leave room to improve GPT’s reasoning capabilities; future research will therefore focus on developing more effective prompts to overcome the model’s limitations and apply GPT in the legal domain.
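
As a rough illustration of the prompt comparison described above, the sketch below assumes the OpenAI chat completions API; the logical-reasoning item and the exact prompt wording are made-up placeholders, not the prompts or LSAT items evaluated in the paper.

```python
# Compare a plain zero-shot prompt with a Zero-shot + CoT prompt
# on GPT-3.5 and GPT-4 for a single (made-up) reasoning item.
from openai import OpenAI  # assumes an OPENAI_API_KEY environment variable

client = OpenAI()

item = (
    "All judges in city X have passed the bar exam. Kim is a judge in city X.\n"
    "Question: Which statement must be true?\n"
    "(A) Kim has passed the bar exam. (B) Kim has never failed the bar exam."
)

prompts = {
    "zero-shot": item + "\nAnswer with a single letter.",
    "zero-shot+CoT": item + "\nLet's think step by step, then answer with a single letter.",
}

for model_name in ("gpt-3.5-turbo", "gpt-4"):
    for prompt_name, prompt in prompts.items():
        reply = client.chat.completions.create(
            model=model_name,
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content
        print(model_name, prompt_name, reply[:80])  # the paper scores accuracy per LSAT section
```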

The papers presented by these dedicated students from LIFS lab not only demonstrated their expertise and commitment but also highlighted the lab’s exceptional research contributions. Their work undoubtedly added significant value to the Korea Computer Congress 2023 (KCC 2023), fostering intellectual discussions and pushing the boundaries of computing knowledge and innovation.