UniPar: A Unified LLM-Based Framework for Parallel and Accelerated Code Translation in HPC

Introduces UniPar, an evaluation framework for assessing how well large language models translate between parallel programming paradigms; fine-tuning and compiler-guided repair lift compilation success to 69% and functional correctness to 33%.

Published on September 15, 2025

Abstract

This research introduces UniPar, an evaluation framework for assessing how large language models translate between parallel programming languages. The team targeted translations among serial code, CUDA, and OpenMP, testing models such as GPT-4o-mini and LLaMA-3.3-70B-Instruct. Their approach combined fine-tuning, hyperparameter tuning, and compiler-guided repair, raising compilation success from 46% to 69% and functional correctness from 15% to 33%. They also created the PARATRANS dataset, which covers serial-to-parallel and cross-paradigm code transformations, and released both the code and the data publicly.
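The abstract itself contains no code, but a small example makes the translation targets concrete. Below is an illustrative SAXPY loop, written for this summary rather than taken from the PARATRANS dataset, expressed in the three paradigms UniPar covers: serial C, OpenMP, and a CUDA kernel. The function names are our own; it should build with something like nvcc -Xcompiler -fopenmp saxpy.cu.

#include <cstdio>
#include <cstdlib>

// Serial baseline: y[i] = a * x[i] + y[i]
void saxpy_serial(int n, float a, const float *x, float *y) {
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

// OpenMP translation: the iterations are independent, so the serial
// loop maps directly onto a parallel-for.
void saxpy_openmp(int n, float a, const float *x, float *y) {
    #pragma omp parallel for
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

// CUDA translation: each thread handles one loop iteration.
__global__ void saxpy_cuda(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x = (float *)malloc(n * sizeof(float));
    float *y = (float *)malloc(n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy_serial(n, 2.0f, x, y);   // y[i] == 4.0f afterwards
    saxpy_openmp(n, 2.0f, x, y);   // y[i] == 6.0f afterwards

    // Run the CUDA variant on device copies of the arrays.
    float *dx, *dy;
    cudaMalloc(&dx, n * sizeof(float));
    cudaMalloc(&dy, n * sizeof(float));
    cudaMemcpy(dx, x, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, y, n * sizeof(float), cudaMemcpyHostToDevice);
    saxpy_cuda<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
    cudaMemcpy(y, dy, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", y[0]);   // expect 8.0

    cudaFree(dx); cudaFree(dy);
    free(x); free(y);
    return 0;
}

Even on a loop this simple, the CUDA version must introduce an explicit bounds check, a launch configuration, and host/device memory transfers, which is why compilation success and functional correctness are measured separately in the paper's evaluation.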
