
feat: add support for OpenAI

Open · GregoMac1 opened this issue 1 year ago · 4 comments

GregoMac1 avatar Sep 30 '24 18:09 GregoMac1

Thanks for the feedback! I'm working on it 🫑

GregoMac1 avatar Oct 02 '24 00:10 GregoMac1

@fmaclen Just to be sure: should we get rid of all the 'connected' or 'disconnected' logic? For example, in the settings page, should the status badge be removed?

GregoMac1 avatar Oct 04 '24 14:10 GregoMac1

should we get rid of all the 'connected' or 'disconnected' logic?

@GregoMac1 let's keep that functionality only in the Settings view; it's still useful to know if we are correctly connected to an Ollama server.

fmaclen avatar Oct 05 '24 09:10 fmaclen

Remaining tasks:

  • [x] Match types between both APIs (that should fix the linting errors).
  • [x] Check if the OpenAI strategy works correctly with a real API key.
  • [x] Add missing translations for es, ja, and tr.
  • [x] Fix existing tests.
  • [x] Add new tests.

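For the "match types between both APIs" task, one way to reconcile the two connectors is a single model interface that both map into. This is a hedged sketch only; `Model`, `fromOpenAI`, and `fromOllama` are illustrative names, not hollama's actual types.

```typescript
// A unified shape both API strategies can produce.
interface Model {
	name: string;
	api: 'ollama' | 'openai';
	parameterSize?: string; // only reported by Ollama (e.g. "9B")
}

// Map an OpenAI model id (e.g. "gpt-3.5-turbo") into the shared shape.
function fromOpenAI(id: string): Model {
	return { name: id, api: 'openai' };
}

// Map an Ollama tag entry into the shared shape.
function fromOllama(name: string, parameterSize?: string): Model {
	return { name, api: 'ollama', parameterSize };
}

const models: Model[] = [fromOpenAI('gpt-3.5-turbo'), fromOllama('codegemma:latest', '9B')];
console.log(models.map((m) => `${m.name} [${m.api}]`).join(', '));
```

With one type in play, the list rendering and the linter no longer need per-API branches.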
GregoMac1 avatar Oct 07 '24 22:10 GregoMac1

The only bug I found was that after choosing an OpenAI model (e.g. gpt-3.5-turbo) and submitting a prompt, I was expecting it to show up in the Recently used models list. It did show up later after I left the Sessions page and returned to it.

@GregoMac1 I'm still seeing this issue.

fmaclen avatar Oct 18 '24 22:10 fmaclen

Here's a list of test cases:

  • Fetching data from OpenAI with a correct API Key
  • Trying to fetch data with an incorrect API key
  • Trying to fetch data with a network connection error
  • Can't send any fetch requests without an apiKey or baseUrl set
  • Check the sorting/order of the models list is correct
  • Once an OpenAI model is used it gets added to the "recently used" list
  • An OpenAI model can be used in a session, and the model is saved to localStorage for that specific session
  • In FieldModelSelect:
    • Check only the gpt filtered models are available
    • Check that Ollama models have an ollama badge
    • Check that OpenAI models have an openai badge

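The FieldModelSelect checks above boil down to a filter-and-sort over the OpenAI list. A minimal sketch, assuming the filter is a simple `gpt` prefix match (the actual predicate in hollama may differ):

```typescript
// Keep only gpt-prefixed models from an OpenAI /v1/models response
// and sort them lexicographically for the dropdown.
function gptModels(ids: string[]): string[] {
	return ids.filter((id) => id.startsWith('gpt')).sort();
}

// Non-chat models like whisper-1 or dall-e-3 should be dropped.
console.log(gptModels(['whisper-1', 'gpt-4', 'dall-e-3', 'gpt-3.5-turbo']));
```

The badge checks then reduce to asserting each rendered option carries the `api` of its source connector.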
fmaclen avatar Oct 22 '24 14:10 fmaclen

The only bug I found was that after choosing an OpenAI model (e.g. gpt-3.5-turbo) and submitting a prompt, I was expecting it to show up in the Recently used models list. It did show up later after I left the Sessions page and returned to it.

@fmaclen This should be fixed now!

I've fixed the remaining bugs. I'm going to continue with the tests now.

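The behavior the fix implies is that the chosen model moves to the front of the "recently used" list as soon as the prompt is submitted, so the Sessions page re-renders immediately rather than on the next visit. A hypothetical sketch (`markRecentlyUsed` is an illustrative helper, not hollama's code):

```typescript
// Move the model to the front of the recently-used list,
// dropping any earlier occurrence and capping the list length.
function markRecentlyUsed(recent: string[], model: string, max = 5): string[] {
	return [model, ...recent.filter((m) => m !== model)].slice(0, max);
}

console.log(markRecentlyUsed(['gpt-4', 'codegemma:latest'], 'gpt-3.5-turbo'));
```

In a Svelte app this would typically be a store update, which triggers the re-render the original report found missing.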
GregoMac1 avatar Oct 22 '24 17:10 GregoMac1

Deploying hollama with Cloudflare Pages

Latest commit: 330658e
Status: ✅ Deploy successful!
Preview URL: https://18201545.hollama.pages.dev
Branch Preview URL: https://94-support-openai-api-format.hollama.pages.dev

View logs

@GregoMac1 I get these 2 tests failing:

openai.test.ts:56:2 › OpenAI Integration › models list is sorted correctly
session.test.ts:720:2 › Session › handles errors when fetching models

This one I'm not entirely sure why it's failing, ~~looks like a race condition~~ (it wasn't, here's why it fails): trace.zip

1) openai.test.ts:56:2 › OpenAI Integration › models list is sorted correctly ────────────────────

    Test timeout of 5000ms exceeded.

    Error: expect(locator).toHaveText(expected)

    Locator: locator('div[role="option"]').first()
    Expected string: "gpt-3.5-turbo"
    Received string: " codegemma:latest 9B  "
    Call log:
      - expect.toHaveText with timeout 5000ms
      - waiting for locator('div[role="option"]').first()
      -   locator resolved to <div role="option" id="yfbQhZM0We" aria-selected="fal…>…</div>
      -   unexpected value " codegemma:latest 9B  "
      -   locator resolved to <div role="option" id="yfbQhZM0We" aria-selected="fal…>…</div>
      -   unexpected value " codegemma:latest 9B  "

      61 |
      62 | 		const modelOptions = page.locator('div[role="option"]');
    > 63 | 		await expect(modelOptions.nth(0)).toHaveText('gpt-3.5-turbo');
         | 		                                  ^
      64 | 		await expect(modelOptions.nth(1)).toHaveText('gpt-4');
      65 | 	});
      66 |

        at /Users/winston/projects/hollama/tests/openai.test.ts:63:37

This next test actually broke with the changes we made to Sessions: trace.zip

I suspect that when you were running the tests locally you didn't have an Ollama server running at the same time; if you do, you'll see the error I'm getting in the flash alert.

I think the issue is that we were mocking **/api/tags before and now we need to mock **/api/chat.

  2) session.test.ts:720:2 › Session › handles errors when fetching models ─────────────────────────

    Test timeout of 5000ms exceeded.

    Error: expect(locator).toBeVisible()

    Locator: locator('ol[data-sonner-toaster] li').filter({ hasText: 'Can\'t connect to Ollama server' })
    Expected: visible
    Received: hidden
    Call log:
      - expect.toBeVisible with timeout 5000ms
      - waiting for locator('ol[data-sonner-toaster] li').filter({ hasText: 'Can\'t connect to Ollama server' })


      741 | 		await expect(
      742 | 			page.locator('ol[data-sonner-toaster] li', { hasText: "Can't connect to Ollama server" })
    > 743 | 		).toBeVisible();
          | 		  ^
      744 | 	});
      745 |
      746 | 	test('can navigate out of session during completion', async ({ page }) => {

        at /Users/winston/projects/hollama/tests/session.test.ts:743:5
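The mock swap matters because with only `**/api/tags` intercepted, a request to `/api/chat` falls through to the real Ollama server (which fails when none is running). This simplified suffix check stands in for Playwright's actual glob matching of `page.route` patterns:

```typescript
// Does any registered mock pattern cover this request URL?
// Playwright's page.route does the real glob matching in the suite;
// a suffix check is enough to illustrate the gap.
function isMocked(patterns: string[], url: string): boolean {
	return patterns.some((p) => url.endsWith(p.replace('**', '')));
}

const before = ['**/api/tags'];
const after = ['**/api/tags', '**/api/chat'];

console.log(isMocked(before, 'http://localhost:11434/api/chat')); // false: falls through
console.log(isMocked(after, 'http://localhost:11434/api/chat')); // true: intercepted
```

Adding a `**/api/chat` route alongside the existing `**/api/tags` mock keeps the test deterministic regardless of whether a local Ollama server is running.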

fmaclen avatar Oct 26 '24 02:10 fmaclen

:tada: This PR is included in version 0.20.0 :tada:

The release is available as a GitHub release.

Your semantic-release bot :package::rocket:

fmaclen avatar Oct 28 '24 15:10 fmaclen