
connery in openGPTs

Open hchenphd opened this issue 11 months ago • 1 comment

I configured Connery with Gmail, but when I access it from OpenGPTs, the output is an error. The log shows: "runner:start: {"type":"all-exceptions-filter","message":"Not Found"}"

runner:start: [4:01:34 AM] Found 0 errors. Watching for file changes.
runner:start:
runner:start: [Nest] 4386 - 02/29/2024, 4:01:35 AM LOG [NestFactory] Starting Nest application...
runner:start: [Nest] 4386 - 02/29/2024, 4:01:35 AM LOG [InstanceLoader] AppModule dependencies initialized +15ms
runner:start: [Nest] 4386 - 02/29/2024, 4:01:35 AM LOG [InstanceLoader] HttpModule dependencies initialized +0ms
runner:start: [Nest] 4386 - 02/29/2024, 4:01:35 AM LOG [InstanceLoader] ConfigHostModule dependencies initialized +0ms
runner:start: [Nest] 4386 - 02/29/2024, 4:01:35 AM LOG [InstanceLoader] TerminusModule dependencies initialized +1ms
runner:start: [Nest] 4386 - 02/29/2024, 4:01:35 AM LOG [InstanceLoader] ConfigModule dependencies initialized +0ms
runner:start: [Nest] 4386 - 02/29/2024, 4:01:35 AM LOG [InstanceLoader] ConfigModule dependencies initialized +0ms
runner:start: [Nest] 4386 - 02/29/2024, 4:01:35 AM LOG [InstanceLoader] HealthModule dependencies initialized +0ms
runner:start: [Nest] 4386 - 02/29/2024, 4:01:35 AM LOG [InstanceLoader] SharedModule dependencies initialized +0ms
runner:start: [Nest] 4386 - 02/29/2024, 4:01:35 AM LOG [InstanceLoader] AdminApiModule dependencies initialized +1ms
runner:start: [Nest] 4386 - 02/29/2024, 4:01:35 AM LOG [InstanceLoader] ClientsApiModule dependencies initialized +0ms
runner:start: [Nest] 4386 - 02/29/2024, 4:01:35 AM LOG [RoutesResolver] HealthController {/health}: +8ms
runner:start: [Nest] 4386 - 02/29/2024, 4:01:35 AM LOG [RouterExplorer] Mapped {/health, GET} route +2ms
runner:start: [Nest] 4386 - 02/29/2024, 4:01:35 AM LOG [RoutesResolver] PluginsController {/}: +0ms
runner:start: [Nest] 4386 - 02/29/2024, 4:01:35 AM LOG [RouterExplorer] Mapped {/v1/admin/plugins/refresh, GET} route +0ms
runner:start: [Nest] 4386 - 02/29/2024, 4:01:35 AM LOG [RoutesResolver] ActionsController {/}: +0ms
runner:start: [Nest] 4386 - 02/29/2024, 4:01:35 AM LOG [RouterExplorer] Mapped {/v1/actions, GET} route +1ms
runner:start: [Nest] 4386 - 02/29/2024, 4:01:35 AM LOG [RouterExplorer] Mapped {/v1/actions/:actionId, GET} route +0ms
runner:start: [Nest] 4386 - 02/29/2024, 4:01:35 AM LOG [RouterExplorer] Mapped {/v1/actions/identify, POST} route +0ms
runner:start: [Nest] 4386 - 02/29/2024, 4:01:35 AM LOG [RouterExplorer] Mapped {/v1/actions/:actionId/run, POST} route +1ms
runner:start: [Nest] 4386 - 02/29/2024, 4:01:35 AM LOG [RouterExplorer] Mapped {/v1/actions/specs/openapi, GET} route +0ms
runner:start: [Nest] 4386 - 02/29/2024, 4:01:35 AM LOG [RouterExplorer] Mapped {/v1/actions/specs/openai-functions, GET} route +0ms
runner:start: [Nest] 4386 - 02/29/2024, 4:01:35 AM LOG [RoutesResolver] PluginsController {/}: +0ms
runner:start: [Nest] 4386 - 02/29/2024, 4:01:35 AM LOG [RouterExplorer] Mapped {/v1/plugins, GET} route +1ms
runner:start: [Nest] 4386 - 02/29/2024, 4:01:35 AM LOG [RouterExplorer] Mapped {/v1/plugins/:pluginId, GET} route +0ms
runner:start: [Nest] 4386 - 02/29/2024, 4:01:35 AM LOG [RoutesResolver] ToolsController {/}: +0ms
runner:start: [Nest] 4386 - 02/29/2024, 4:01:35 AM LOG [RouterExplorer] Mapped {/, GET} route +1ms
runner:start: [Nest] 4386 - 02/29/2024, 4:01:35 AM LOG [RouterExplorer] Mapped {/v1/verify-access, GET} route +0ms
runner:start: [Nest] 4386 - 02/29/2024, 4:01:35 AM LOG [NestApplication] Nest application successfully started +2ms
runner:start: ✅ Runner is successfully started on port 4201
runner:start: {"type":"all-exceptions-filter","message":"Not Found"}

hchenphd avatar Feb 29 '24 04:02 hchenphd

@hchenphd, can you please provide the steps to reproduce?

I'm most interested in:

  • Environment (Do you set up and run both OpenGPTs and Connery locally?)
  • Steps to set up (Which versions/branches did you set up the OpenGPTs/Connery/Gmail plugin from?)

machulav avatar Feb 29 '24 13:02 machulav

Hi, could this issue be reopened? I’m experiencing the same problem while trying to communicate between the runner and my OpenGPTs client:

{"type":"all-exceptions-filter","message":"Not Found"}

The Connery runner is hosted on Codespaces with the Gmail plugin. I used the CONNERY_RUNNER_URL without a trailing slash in my local OpenGPTs setup, along with the CONNERY_RUNNER_API_KEY provided by the runner.
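For reference, a tiny shell sketch of the trailing-slash point above (the URL is a placeholder, not a real Codespaces host):

```shell
# Sketch with a placeholder URL: the runner URL should not end with a
# slash, and "${var%/}" strips one if present before the value is used.
CONNERY_RUNNER_URL="https://example.app.github.dev/"
CONNERY_RUNNER_URL="${CONNERY_RUNNER_URL%/}"
echo "$CONNERY_RUNNER_URL"
```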

curl -X GET CONNERY_RUNNER_URL -H "x-api-key: CONNERY_RUNNER_API_KEY" works fine and returns:

{"status":"success","data":{"message":"Welcome to the Connery Runner API 👋"}}

Thanks

drzagor avatar Oct 19 '24 10:10 drzagor

Sorry, I need to provide additional information. I am currently using an older version of Connery, v0.0.7, after testing v0.0.8 and v0.3.5. The Gmail plugin is connery-io/[email protected]. OpenGPTs is running locally using Docker Compose. It has worked before (I successfully sent a webpage summary to an email address). I must say, it’s frustrating now!

drzagor avatar Oct 19 '24 10:10 drzagor

Hi @drzagor, thank you for reaching out. I’m sorry to hear about your frustration with the SDK.

We recently made significant changes to align the SDK with our new strategy, resulting in breaking changes. The previous approach using the Runner is no longer supported. Instead, the latest SDK architecture wraps each plugin as a standalone server (Plugin Server), which can be hosted anywhere, including serverless environments like AWS Lambda. Each Plugin Server exposes a REST API, allowing interaction and action execution.

At the moment, OpenGPTs is not supported in the new SDK, as we haven’t observed sufficient demand in that area. However, you can still use the plugins packaged with the latest SDK version alongside OpenGPTs by creating a custom LangChain wrapper and calling the actions via the Plugin Server REST API.
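As a rough illustration of that suggestion, here is a minimal Python sketch of a helper that calls a Plugin Server action over HTTP. The endpoint path (`/api/v1/actions/{id}/run`), the `x-api-key` header, and the `{"input": ...}` payload shape are assumptions for illustration, not the documented Plugin Server API — check the server’s OpenAPI spec for the real routes. A function like `run_action` could then be registered as a custom LangChain tool inside OpenGPTs.

```python
# Hypothetical sketch of calling a Connery Plugin Server action via REST.
# The URL path, header name, and payload shape are assumptions; consult
# the Plugin Server's OpenAPI spec for the actual contract.
import json
import urllib.request


def build_run_request(base_url: str, action_id: str,
                      params: dict, api_key: str):
    """Assemble the URL, headers, and JSON body for running an action."""
    url = f"{base_url.rstrip('/')}/api/v1/actions/{action_id}/run"
    headers = {
        "Content-Type": "application/json",
        "x-api-key": api_key,  # same header style the old Runner used
    }
    body = json.dumps({"input": params}).encode("utf-8")
    return url, headers, body


def run_action(base_url: str, action_id: str,
               params: dict, api_key: str) -> dict:
    """POST the action input to the Plugin Server and parse the response."""
    url, headers, body = build_run_request(base_url, action_id,
                                           params, api_key)
    req = urllib.request.Request(url, data=body, headers=headers,
                                 method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Keeping the request assembly separate from the network call makes the wrapper easy to unit-test without a live server.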

Please note that the SDK is still in beta and has no major version yet. As such, there may be occasional small breaking changes until we release the major version.

Thank you for your understanding.

machulav avatar Oct 19 '24 13:10 machulav

Thank you for the detailed response and for clarifying the recent changes to the SDK. I appreciate your advice on how to continue using the plugins with OpenGPTs through a custom LangChain wrapper. I understand the challenges during the beta phase and look forward to future updates.

drzagor avatar Oct 19 '24 14:10 drzagor