
Support for JLama

stephanj opened this issue 1 year ago · 2 comments

Jlama is a modern Java library for running many LLMs. It is built on Java 21 and uses the Panama Vector API for fast inference.

https://github.com/langchain4j/langchain4j-examples/tree/main/jlama-examples

Chat model

ChatLanguageModel model = JlamaChatModel.builder()
                    .modelName("tjake/TinyLlama-1.1B-Chat-v1.0-Jlama-Q4")
                    .temperature(0.0f) // Force the same output every run
                    .build();

Streaming model

StreamingChatLanguageModel model = JlamaStreamingChatModel.builder()
                    .modelName("tjake/TinyLlama-1.1B-Chat-v1.0-Jlama-Q4")
                    .temperature(0.0f) // Force the same output every run
                    .build();
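
For completeness, here is how the streamed tokens could be consumed. This is a minimal sketch against the langchain4j streaming callback API; the prompt is just an example. Note that Jlama relies on the incubating Panama Vector API, so the JVM has to be started with --add-modules jdk.incubator.vector --enable-preview.

import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.model.StreamingResponseHandler;
import dev.langchain4j.model.output.Response;

// JVM flags required by Jlama's use of the Panama Vector API (Java 21):
//   --add-modules jdk.incubator.vector --enable-preview
model.generate("Why is the sky blue?", new StreamingResponseHandler<AiMessage>() {
    @Override
    public void onNext(String token) {
        System.out.print(token); // each token is printed as it arrives
    }

    @Override
    public void onComplete(Response<AiMessage> response) {
        System.out.println(); // the complete AiMessage is available via response.content()
    }

    @Override
    public void onError(Throwable error) {
        error.printStackTrace();
    }
});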

stephanj · Aug 13 '24 11:08

Claude Sonnet 3.5 suggestion:

To support JLama in your DevoxxGenie IntelliJ plugin, you'll need to make several changes to your existing codebase. Here's a step-by-step guide on how to integrate JLama:

1. Add the JLama dependency: first, add the JLama dependency to your build.gradle file:
dependencies {
    implementation 'dev.langchain4j:langchain4j-jlama:0.24.0'
}
2. Create a new JLamaChatModelFactory: create a new class called JLamaChatModelFactory in the com.devoxx.genie.chatmodel.jlama package (a short usage sketch follows the class):
package com.devoxx.genie.chatmodel.jlama;

import com.devoxx.genie.chatmodel.ChatModelFactory;
import com.devoxx.genie.model.ChatModel;
import com.devoxx.genie.model.LanguageModel;
import com.devoxx.genie.model.enumarations.ModelProvider;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.chat.StreamingChatLanguageModel;
import dev.langchain4j.model.jlama.JlamaChatModel;
import dev.langchain4j.model.jlama.JlamaStreamingChatModel;
import org.jetbrains.annotations.NotNull;

import java.time.Duration;
import java.util.List;

public class JLamaChatModelFactory implements ChatModelFactory {

    @Override
    public ChatLanguageModel createChatModel(@NotNull ChatModel chatModel) {
        return JlamaChatModel.builder()
            .modelName(chatModel.getModelName())
            .temperature(chatModel.getTemperature().floatValue())
            .maxTokens(chatModel.getMaxTokens())
            .timeout(Duration.ofSeconds(chatModel.getTimeout()))
            .build();
    }

    @Override
    public StreamingChatLanguageModel createStreamingChatModel(@NotNull ChatModel chatModel) {
        return JlamaStreamingChatModel.builder()
            .modelName(chatModel.getModelName())
            .temperature(chatModel.getTemperature().floatValue())
            .timeout(Duration.ofSeconds(chatModel.getTimeout()))
            .build();
    }

    @Override
    public List<LanguageModel> getModels() {
        return List.of(
            LanguageModel.builder()
                .provider(ModelProvider.JLama)
                .modelName("tjake/TinyLlama-1.1B-Chat-v1.0-Jlama-Q4")
                .displayName("TinyLlama 1.1B Chat")
                .apiKeyUsed(false)
                .inputCost(0)
                .outputCost(0)
                .contextWindow(2048)
                .build()
        );
    }
}
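
As a quick smoke test, the factory could be exercised like this. A sketch only: the ChatModel setters mirror the ones used in the test further down, and setTimeout is assumed to take seconds, since the factory wraps the value in Duration.ofSeconds:

ChatModel chatModel = new ChatModel();
chatModel.setModelName("tjake/TinyLlama-1.1B-Chat-v1.0-Jlama-Q4");
chatModel.setTemperature(0.0);
chatModel.setMaxTokens(256);
chatModel.setTimeout(120); // assumed to be seconds, matching Duration.ofSeconds in the factory

ChatLanguageModel model = new JLamaChatModelFactory().createChatModel(chatModel);
System.out.println(model.generate("Say hello in one sentence."));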
3. Update the ModelProvider enum: add JLama to the ModelProvider enum in com.devoxx.genie.model.enumarations.ModelProvider (a fuller sketch follows the snippet):
public enum ModelProvider {
    // ... existing providers
    JLama("JLama");
    
    // ... rest of the enum implementation
}
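
For context, a minimal sketch of what the complete enum could look like; the existing entries and the name field are assumptions about the current code:

public enum ModelProvider {
    OpenAI("OpenAI"),   // assumed existing entry
    Ollama("Ollama"),   // assumed existing entry
    JLama("JLama");

    private final String name;

    ModelProvider(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }
}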
4. Update ChatModelFactoryProvider: add JLama to the ChatModelFactoryProvider in com.devoxx.genie.chatmodel.ChatModelFactoryProvider (a lookup sketch follows):
public class ChatModelFactoryProvider {
    private static final Map<String, Supplier<ChatModelFactory>> FACTORY_SUPPLIERS = Map.of(
        // ... existing providers
        ModelProvider.JLama.getName(), JLamaChatModelFactory::new
    );
    
    // ... rest of the class implementation
}
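
Assuming the provider exposes a lookup by provider name, the new entry would be resolved along these lines (the method name is illustrative, not necessarily the existing one):

// requires: import java.util.Optional;
public static Optional<ChatModelFactory> getFactoryByProvider(String providerName) {
    return Optional.ofNullable(FACTORY_SUPPLIERS.get(providerName))
                   .map(Supplier::get);
}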
5. Update DevoxxGenieSettingsService: add JLama-related methods to the DevoxxGenieSettingsService interface:
public interface DevoxxGenieSettingsService {
    // ... existing methods

    String getJLamaModelUrl();
    void setJLamaModelUrl(String url);
}
6. Update DevoxxGenieStateService: implement the new JLama-related methods in the DevoxxGenieStateService class (a retrieval sketch follows):
@State(
    name = "com.devoxx.genie.ui.SettingsState",
    storages = @Storage("DevoxxGenieSettingsPlugin.xml")
)
public final class DevoxxGenieStateService implements PersistentStateComponent<DevoxxGenieStateService>, DevoxxGenieSettingsService {
    // ... existing fields and methods

    private String jlamaModelUrl = "http://localhost:8080"; // Default URL

    @Override
    public String getJLamaModelUrl() {
        return jlamaModelUrl;
    }

    @Override
    public void setJLamaModelUrl(String url) {
        this.jlamaModelUrl = url;
    }

    // ... rest of the class implementation
}
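
Callers can then read the configured URL through the standard application-level service lookup, assuming the state service is registered at the application level (as the test below also assumes):

String jlamaUrl = ApplicationManager.getApplication()
        .getService(DevoxxGenieStateService.class)
        .getJLamaModelUrl();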
7. Update LLMProvidersComponent: add JLama to the UI settings in com.devoxx.genie.ui.settings.llm.LLMProvidersComponent (a sketch of the createTextWithLinkButton helper follows the class):
public class LLMProvidersComponent extends AbstractSettingsComponent {
    // ... existing fields

    @Getter
    private final JTextField jlamaModelUrlField = new JTextField(stateService.getJLamaModelUrl());

    // ... existing methods

    @Override
    public JPanel createPanel() {
        // ... existing panel setup

        addSection(panel, gbc, "Local LLM Providers");
        // ... existing local providers
        addSettingRow(panel, gbc, "JLama URL", createTextWithLinkButton(jlamaModelUrlField, "https://github.com/tjake/Jlama"));

        // ... rest of the panel setup
    }

    // ... rest of the class implementation
}
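
The createTextWithLinkButton helper used above is presumably already part of the settings component; if not, a minimal version could look like this (the layout details are illustrative):

// requires: javax.swing.*, java.awt.BorderLayout
private JComponent createTextWithLinkButton(JTextField textField, String url) {
    JPanel panel = new JPanel(new BorderLayout());
    panel.add(textField, BorderLayout.CENTER);
    JButton linkButton = new JButton("?");
    linkButton.setToolTipText(url);
    linkButton.addActionListener(e -> com.intellij.ide.BrowserUtil.browse(url)); // open the project page in a browser
    panel.add(linkButton, BorderLayout.EAST);
    return panel;
}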
8. Update LLMProvidersConfigurable: update the LLMProvidersConfigurable class to handle the JLama settings (a sketch of the isFieldModified helper follows the class):
public class LLMProvidersConfigurable implements Configurable {
    // ... existing methods

    @Override
    public boolean isModified() {
        DevoxxGenieSettingsService settings = DevoxxGenieSettingsServiceProvider.getInstance();

        boolean isModified = false;

        // ... existing checks

        isModified |= isFieldModified(llmSettingsComponent.getJlamaModelUrlField(), settings.getJLamaModelUrl());

        return isModified;
    }

    @Override
    public void apply() {
        DevoxxGenieSettingsService settings = DevoxxGenieSettingsServiceProvider.getInstance();

        // ... existing applications

        settings.setJLamaModelUrl(llmSettingsComponent.getJlamaModelUrlField().getText());

        // ... rest of the method implementation
    }

    @Override
    public void reset() {
        DevoxxGenieSettingsService settings = DevoxxGenieSettingsServiceProvider.getInstance();

        // ... existing resets

        llmSettingsComponent.getJlamaModelUrlField().setText(settings.getJLamaModelUrl());

        // ... rest of the method implementation
    }
}
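
The isFieldModified helper referenced in isModified is assumed to exist already; if not, it can be as simple as:

private boolean isFieldModified(JTextField field, String storedValue) {
    return !field.getText().trim().equals(storedValue);
}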
9. Update ChatModelProvider: add JLama to the ChatModelProvider class (a usage sketch follows):
public class ChatModelProvider {
    private final Map<ModelProvider, ChatModelFactory> factories = new HashMap<>();

    public ChatModelProvider() {
        // ... existing factories
        factories.put(ModelProvider.JLama, new JLamaChatModelFactory());
    }

    // ... rest of the class implementation
}
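
A model would then be created per request roughly like this (getChatLanguageModel is an illustrative name, not necessarily the existing accessor):

public ChatLanguageModel getChatLanguageModel(ChatModel chatModel, ModelProvider provider) {
    ChatModelFactory factory = factories.get(provider);
    if (factory == null) {
        throw new IllegalArgumentException("No chat model factory registered for provider: " + provider);
    }
    return factory.createChatModel(chatModel);
}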
10. Create JLamaChatModelFactoryTest: create a new test class for JLama in the test directory:
package com.devoxx.genie.chatmodel.jlama;

import com.devoxx.genie.chatmodel.AbstractLightPlatformTestCase;
import com.devoxx.genie.model.ChatModel;
import com.devoxx.genie.model.LanguageModel;
import com.devoxx.genie.ui.settings.DevoxxGenieStateService;
import com.intellij.openapi.application.ApplicationManager;
import com.intellij.testFramework.ServiceContainerUtil;
import dev.langchain4j.model.chat.ChatLanguageModel;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

import java.util.List;

import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

public class JLamaChatModelFactoryTest extends AbstractLightPlatformTestCase {

    @BeforeEach
    public void setUp() throws Exception {
        super.setUp();
        DevoxxGenieStateService settingsStateMock = mock(DevoxxGenieStateService.class);
        when(settingsStateMock.getJLamaModelUrl()).thenReturn("http://localhost:8080");
        ServiceContainerUtil.replaceService(ApplicationManager.getApplication(), DevoxxGenieStateService.class, settingsStateMock, getTestRootDisposable());
    }

    @Test
    public void createChatModel() {
        JLamaChatModelFactory factory = new JLamaChatModelFactory();
        ChatModel chatModel = new ChatModel();
        chatModel.setModelName("tjake/TinyLlama-1.1B-Chat-v1.0-Jlama-Q4");
        chatModel.setTemperature(0.7);
        chatModel.setMaxTokens(100);

        ChatLanguageModel result = factory.createChatModel(chatModel);

        assertThat(result).isNotNull();
    }

    @Test
    public void getModels() {
        JLamaChatModelFactory factory = new JLamaChatModelFactory();
        List<LanguageModel> models = factory.getModels();

        assertThat(models).isNotEmpty();
        assertThat(models).hasSize(1);
        assertThat(models.get(0).getModelName()).isEqualTo("tjake/TinyLlama-1.1B-Chat-v1.0-Jlama-Q4");
    }
}

These changes will integrate JLama into the DevoxxGenie IntelliJ plugin. Remember to update the documentation and any relevant UI elements to reflect the new JLama option, handle any exceptions that may occur when using JLama, and give the user appropriate feedback.

stephanj · Aug 13 '24 11:08

So I tried doing this, and the first issue was that I needed to upgrade the plugin to run on IntelliJ 2024.2+, which ships with Java 21.

This got me further, but then the plugin classloader failed on one of the dependencies and I ran out of time... Will try again

tjake · Oct 05 '24 03:10