
Zorse

Open slh1109 opened this issue 1 year ago • 5 comments

Project description

This project aims to collect a dataset of production COBOL and associated mainframe languages (JCL, REXX, PL/I) which Large Language Models (LLMs) can be fine-tuned on. It also aims to develop an evaluation suite to measure LLMs' ability to comprehend, explain, and write COBOL. This project will:

  • Improve the utility of LLMs in the mainframe domain, helping engineers maintain mainframe applications
  • Create an industry-standard benchmark to track LLM performance on COBOL over time

Dataset

The dataset should be composed of high-quality, permissively licensed COBOL code. The code should be representative of production COBOL applications and should be cleaned of any personally identifiable information (PII).
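As a sketch of what a PII-cleaning pass over collected sources might look like (the regex patterns and the `redact()` helper are illustrative assumptions, not part of the proposal):

```python
# Minimal sketch of a PII-scrubbing pass for collected COBOL sources.
# The patterns below are illustrative; a production pipeline would use a
# much richer detector (names, account numbers, dataset names, etc.).
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(source: str) -> str:
    """Replace matches of each PII pattern with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        source = pattern.sub(f"[REDACTED-{label}]", source)
    return source

sample = "      * Contact: jane.doe@example.com, 555-123-4567"
print(redact(sample))  # both the email and the phone number are replaced
```

Regex-only scrubbing is a floor, not a ceiling; comment blocks and literals in production COBOL often carry PII that only manual review or an NER pass will catch.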

Evaluation Suite

The evaluation suite should comprise a series of tasks that quantitatively measure an arbitrary LLM's ability to read and write COBOL. BloopAI's COBOLEval benchmark, a translation to COBOL of OpenAI's widely used HumanEval benchmark, can serve as a foundation for the suite.
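HumanEval-style benchmarks typically report pass@k: the probability that at least one of k sampled completions passes a task's unit tests. Assuming the suite adopts the same metric (the source does not specify), a minimal sketch of the standard unbiased estimator:

```python
# Unbiased pass@k estimator used by HumanEval-style benchmarks:
# given n samples per task, c of which pass the unit tests,
# pass@k = 1 - C(n-c, k) / C(n, k).
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k samples drawn from n passes."""
    if n - c < k:
        return 1.0  # too few failures to fill k slots: success guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 10 generations per task, 3 of which compile and pass tests
print(round(pass_at_k(10, 3, 1), 2))  # 0.3
```

Per-task results would come from compiling each completion (e.g. with GnuCOBOL) and running the task's tests; the estimator is then averaged across tasks.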

Statement on alignment with Open Mainframe Project Mission and Vision statements

Enable the mainframe to be more consumable by developers with a transparent experience in leveraging the value propositions of the mainframe.

Are there similar/related projects out there?

None that we are aware of for mainframe languages. Software Heritage archives decommissioned software systems of all languages.

External dependencies (including licenses)

https://github.com/BloopAI/COBOLEval (MIT)

Sponsor from TAC

Joe Bostian

Proposed Project Stage

Sandbox

License and contribution guidelines

unknown

Current or desired source control repository

GitHub

Initial committers

tbd

Infrastructure requests

tbd

Communication channels

email, Google Docs, Zoom meetings

Website

none/tbd

Release methodology and mechanics

tbd

Social media accounts

none/tbd

Community size and any existing sponsorship

Initial team of around a dozen: John Mertic [email protected]; "Ed Airey" [email protected]; Elpida Tzortzatos [email protected]; Jim Porell [email protected]; Joseph Bostian [email protected]; Leonard Santalucia [email protected]; Louis Knight-Webb [email protected]; Per Kroll [email protected]; Venkatauday Balabhadrapatruni [email protected]; Goran Begic [email protected]; Gabriel Gordon-Hall [email protected]; Stephen Hodges [email protected]

slh1109 avatar Apr 17 '24 18:04 slh1109

My take:

  1. I do not think we would, or should, build an LLM from scratch for COBOL. There are a LOT of open source models out there that can write COBOL to some extent. Leveraging one as a starting point / foundation for our needs is the way to go. bloop.ai has an open source model that has been trained on GnuCOBOL.
  2. The primary goal for this community, IMHO, is not to build a model but to gather or create good quality IBM Enterprise COBOL code that can be used to fine-tune an open source model that we pick. This code should have clear ownership and provenance so that we can prove the model has been trained on code with the right IP and licenses. Models will evolve; the data and the mechanics of how to fine-tune or train a model will remain. Hence, the question that needs to be answered is: what is the path to getting Enterprise COBOL data of high enough quality to fine-tune an existing LLM?
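The ownership/provenance requirement above can be sketched as a per-file training record. The JSONL shape and every field name here are illustrative assumptions, not a project spec, and the repo URL is hypothetical:

```python
# Sketch of a provenance-tagged fine-tuning record: each collected COBOL
# file carries the metadata needed to audit IP and license claims later.
import hashlib
import json

code = "       IDENTIFICATION DIVISION.\n       PROGRAM-ID. HELLO.\n"

record = {
    "content": code,
    "language": "COBOL",
    "license": "Apache-2.0",                            # must be verifiable
    "source_repo": "https://example.com/acme/billing",  # hypothetical URL
    "sha256": hashlib.sha256(code.encode()).hexdigest(),  # dedup / audit trail
}

# One record per line in a JSONL training file
print(json.dumps(record))
```

Keeping the content hash alongside the license claim makes it cheap to deduplicate the corpus and to prove, file by file, what the model was fine-tuned on.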

venkatzhub avatar Apr 18 '24 02:04 venkatzhub

Agree with @venkatzhub

IBM has Granite LLMs under the Apache 2.0 license, but the training data is fairly limited, with only 727 COBOL programs as compared to 4M+ C++ programs. Further, there is currently no coverage of PL/I, HLASM, REXX, JCL, et al.

markbsigler avatar May 14 '24 19:05 markbsigler

@venkatzhub I'm reading your email but will reply here to maintain the trail.

IBM references their Project CodeNet, which has a detailed spreadsheet on each language and the quantity of accepted submissions, and notes that the code was sourced from two Japanese coding-challenge websites. It's overwhelmingly C++ and Python.

markbsigler avatar May 14 '24 20:05 markbsigler

Thanks @markbsigler!

venkatzhub avatar May 14 '24 20:05 venkatzhub

Project approved on 2024-07-11

jmertic avatar Jul 25 '24 17:07 jmertic

Rescheduled for 8/14

slandath avatar Jul 09 '25 15:07 slandath

During the Zorse annual review, Gabriel recommended archiving the project, as he no longer has the resources to move it forward. Discussion ensued, and the TAC agreed to socialize the project with the community at the SHARE event next week in hopes of finding new maintainers.

Will revisit at the September 11, 2025 meeting.

slandath avatar Aug 14 '25 17:08 slandath

@pleia2 to bring to the IBM AI folks to review.

jmertic avatar Aug 28 '25 17:08 jmertic

@pleia2 has there been any interest from IBM on this one?

jmertic avatar Sep 18 '25 13:09 jmertic

Vote in progress to move to Emeritus

jmertic avatar Nov 18 '25 13:11 jmertic

Approved

slandath avatar Dec 10 '25 15:12 slandath