
Add Anthropic Claude documentation

Open james2doyle opened this issue 6 months ago • 3 comments

james2doyle avatar May 05 '25 22:05 james2doyle

Looks like Claude works as well here. Tried it out the same way as I did with Gemini

james2doyle avatar May 05 '25 22:05 james2doyle

This one is really weird. The last time I tried Claude myself, it required strict message ordering, which this plugin doesn't support. So either they've loosened their restrictions or I accidentally implemented it recently. However, I'll have to test it myself within the next few weeks before merging.
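For context, a minimal sketch of the ordering rule in question, as the Anthropic Messages API has historically enforced it: the first message must come from `user`, and roles must strictly alternate between `user` and `assistant`. The checker below is purely illustrative, not part of the plugin.

```python
def is_strictly_ordered(messages):
    """Check the user/assistant alternation that Claude's API expects."""
    if not messages or messages[0]["role"] != "user":
        return False
    for prev, cur in zip(messages, messages[1:]):
        if prev["role"] == cur["role"]:
            return False
    return True

ok = [
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
    {"role": "user", "content": "Explain message ordering."},
]
bad = [
    {"role": "user", "content": "Hi"},
    {"role": "user", "content": "Are you there?"},  # two user turns in a row
]
print(is_strictly_ordered(ok))   # True
print(is_strictly_ordered(bad))  # False
```

A plugin that can emit consecutive same-role messages (e.g. two user turns with no assistant reply in between) would trip exactly this rule.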

yaroslavyaroslav avatar May 06 '25 10:05 yaroslavyaroslav

Sounds good. No rush

james2doyle avatar May 06 '25 15:05 james2doyle

This PR has been automatically marked as stale due to inactivity. It will be closed if no further activity occurs.

github-actions[bot] avatar May 23 '25 01:05 github-actions[bot]

Nope, this is not working.

History is completely broken; more than that, using it breaks the whole service.

The root cause is yet to be found, but the overall picture is that there are actually two data flows in the plugin between the Rust and Python parts. One is plain character streaming that is presented on the ST side; the other stores the whole response content once it finishes, to send it back to the server on the next iteration.

So currently on my side it successfully streams text but fails to save any of it. That's because there are two separate chunks of code responsible for extracting content from the response stream, and the latter one is failing.
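To illustrate the failure mode described above, here is a toy, hypothetical sketch of the two data flows: one path pushes characters to the editor as they arrive, a second path accumulates the full response for the conversation history. If the two paths use different extraction code, text can display correctly yet never be saved.

```python
def stream_chunks():
    # Stand-in for the provider's streaming response.
    yield from ["Hel", "lo, ", "world"]

def run_turn(history):
    shown = []          # flow 1: chars pushed to the editor as they arrive
    full_response = []  # flow 2: accumulated to store in the history
    for chunk in stream_chunks():
        shown.append(chunk)          # display path
        full_response.append(chunk)  # storage path (the one that was failing)
    history.append({"role": "assistant", "content": "".join(full_response)})
    return "".join(shown)

history = []
print(run_turn(history))       # Hello, world
print(history[-1]["content"])  # Hello, world  (saved for the next request)
```

When the storage path's extractor doesn't understand the provider's chunk format, `history` ends up empty or malformed even though the display looked fine.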

On top of that, tooling is broken too. Claude is capable of calling a tool, but the plugin responds in an OpenAI-ish way, so Claude always fails to accept the result.
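The mismatch is concrete: the two APIs expect tool results in different message shapes. The shapes below follow the public OpenAI and Anthropic docs; the ids and the value are made up for illustration.

```python
# OpenAI-style: the result goes back as a dedicated "tool"-role message
# referencing the tool call id from the assistant's tool_calls.
openai_tool_result = {
    "role": "tool",
    "tool_call_id": "call_123",
    "content": "42",
}

# Anthropic-style: the result goes back inside a *user* message, as a
# "tool_result" content block referencing the assistant's tool_use id.
anthropic_tool_result = {
    "role": "user",
    "content": [
        {"type": "tool_result", "tool_use_id": "toolu_123", "content": "42"},
    ],
}

print(openai_tool_result["role"])      # tool
print(anthropic_tool_result["role"])   # user
```

Sending the first shape to Claude fails, since the Anthropic API has no `tool` role at all.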

yaroslavyaroslav avatar May 29 '25 18:05 yaroslavyaroslav

That's too bad. I have noticed that a lot of the services that provide OpenAI-compatible endpoints don't actually follow the OpenAI API specification closely enough.

For example, Cloudflare doesn't respond to the /models endpoint, so you can't tell what models it offers. But Gemini does.
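For reference, the OpenAI spec answers `GET /v1/models` with `{"object": "list", "data": [{"id": ...}, ...]}`; providers that skip this endpoint leave clients with no way to discover models. A minimal sketch of parsing that list (the payload is a canned example, not a live call):

```python
import json

def model_ids(models_json: str):
    """Extract model ids from an OpenAI-style GET /v1/models response."""
    payload = json.loads(models_json)
    return [m["id"] for m in payload.get("data", [])]

sample = '{"object": "list", "data": [{"id": "gemini-2.0-flash"}, {"id": "gemini-1.5-pro"}]}'
print(model_ids(sample))  # ['gemini-2.0-flash', 'gemini-1.5-pro']
```

A live probe would issue the GET against the provider's base URL with the API key in the `Authorization: Bearer` header; an endpoint that 404s here is one easy compatibility check.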

What do you think the solution is here? It seems like a lot of the other tools use "adapters" to provide smoother access to models on different platforms.

It would be nice to support other platforms, given how expensive OpenAI can be.

What do you think?

james2doyle avatar May 29 '25 20:05 james2doyle

Yeah, an adapter is actually what I tried to implement in the Rust part. And it works in some ways: plain mode (the old-fashioned completions API) and the OpenAI one are both implemented as adapters.

But unfortunately I haven't accomplished this task well enough yet, so the endpoints are still not separated cleanly.

Also, there's a suggestion in the Rust-part repo of this plugin to use a third-party Rust lib for the whole network layer, which I think is the best option available.

However, I have too little time to implement any of that for now, and for the next half a year or so it will be the same.

So my main efforts will be focused on stabilizing the existing features, and maybe replacing the input panel with a kind of custom output panel (similar to what the GitSavvy plugin has).

Also, I find it a bit annoying to develop a cross-platform tool by myself, especially when it comes to agentic stuff. A lot of things that could just be a Process.exec with a proper tool preinstalled have to be implemented from scratch, and as if that weren't enough, they have to be implemented using the somewhat limited Sublime Text API.

So I think in the foreseeable future I'll drift toward a platform-specific agentic CLI tool like aider or similar.

PS: "I" as in me, the developer, not this plugin.

yaroslavyaroslav avatar Jun 01 '25 12:06 yaroslavyaroslav

That all makes sense. I've implemented a couple of MCPs and they are time-consuming just to test, let alone to write all the actual code.

I was going to suggest aider too. They have a python API for writing tools that wrap it.

Would that be a whole new plugin though? Sublime-Aider?

james2doyle avatar Jun 01 '25 17:06 james2doyle

Yeah, I agree with your conclusion about MCP stuff.

> I was going to suggest aider too. They have a python API for writing tools that wrap it.

The main reason I chose to reimplement the core of the plugin in Rust is not some lunacy of mine about Rust itself, but because the ST plugin ecosystem has pretty strict limitations when it comes to any dependencies, and it doesn't seem likely to get better in the foreseeable future, even when ST updates the Python 3.8 plugin engine to 3.13 later this year. So the main reason to switch was a desire to reach the rich crate ecosystem.

The issue with the ST plugin ecosystem lies in the design of Package Control itself. So aider's Python API makes no sense here; bash commands with plain-text output do, though.

> Would that be a whole new plugin though? Sublime-Aider?

I don't see much value in an aider/ST integration. Aider is pretty nice and useful by itself. The last time I tried it, I had no issues keeping it and ST working side by side on the same project.

However, I'm personally more a fan of the approach of OpenAI's codex: the infinite loop of an LLM calling POSIX CLI tools on a local machine to interactively discover, and then modify, the project's code according to the task it was given. Exactly this is implemented and working pretty well on the latest develop commit; the UI is pretty crappy, though.

So I'm considering pushing this approach further, and this is where all of my concerns arise:

  1. I use only one operating system.
  2. I have a ton of tools installed, which can be used as-is instead of being poorly reimplemented by me in Python.
  3. The main thing that is annoyingly missing for me in ST is a built-in side-by-side differ to check what the LLM has patched during the session; currently I'm switching back and forth between SM and ST for that, but only out of desperation.

So if I find enough courage and free time, next I'd start some kind of agentic utility with a Dash/SnippetLab-like UI and a free-floating window, IDE-independent. And it'll most likely be Mac-only, since that's what I have.
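The codex-style loop described above can be sketched in a few lines. This is a toy: the "model" is a canned stub, where a real implementation would call an LLM API and parse its proposed command out of the response.

```python
import subprocess

def fake_model(transcript):
    # Stand-in for an LLM call: first inspect the project, then stop.
    return "echo src/main.rs" if len(transcript) == 0 else None

def agent_loop(max_steps=5):
    transcript = []
    for _ in range(max_steps):
        cmd = fake_model(transcript)
        if cmd is None:  # model decided the task is complete
            break
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        # Feed the command output back so the next call can react to it.
        transcript.append((cmd, result.stdout.strip()))
    return transcript

print(agent_loop())  # [('echo src/main.rs', 'src/main.rs')]
```

The appeal of this design is exactly the point in the list above: the LLM reuses whatever CLI tools are already installed instead of having everything reimplemented inside the editor's plugin API.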

yaroslavyaroslav avatar Jun 02 '25 21:06 yaroslavyaroslav

I actually used a build system to call aider:

{
	"working_dir": "$project_path",
	"interactive": true,
	"shell_cmd": "aider --no-pretty --no-fancy-input --file $file",
	"variants": [
		{
			"name": "File",
			"working_dir": "$project_path",
			"interactive": true,
			"cmd": ["aider", "--no-pretty", "--no-fancy-input", "--file", "$file"]
		},
		{
			"name": "Read",
			"working_dir": "$project_path",
			"interactive": true,
			"cmd": ["aider", "--no-pretty", "--no-fancy-input", "--read", "$file"]
		},
		{
			"name": "Load",
			"working_dir": "$project_path",
			"interactive": true,
			"cmd": ["aider", "--no-pretty", "--no-fancy-input", "--load", "$file"]
		},
		{
			"name": "Selection Message",
			"target": "exec_and_replace_selection",
			"interactive": true,
			"shell_cmd": "aider --no-pretty --no-fancy-input --message \"SELECTION\""
		}
	]
}

I needed to use a plugin to pass the selection to the command, but it works alright. I haven't had a lot of luck with the --watch flag, but that is an option as well.
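A hypothetical sketch of the core of such a helper plugin: substitute the editor selection into the build command's `SELECTION` placeholder before executing it. A real Sublime Text plugin would subclass `Default.exec.ExecCommand` and read the selection from the active view; only the pure substitution step is shown here, with `shlex.quote` guarding against shell injection.

```python
import shlex

def fill_selection(shell_cmd: str, selection: str) -> str:
    # The placeholder sits inside double quotes in the build system above,
    # so replace the quoted token with a safely quoted selection.
    return shell_cmd.replace('"SELECTION"', shlex.quote(selection))

cmd = 'aider --no-pretty --no-fancy-input --message "SELECTION"'
print(fill_selection(cmd, "refactor this function"))
# aider --no-pretty --no-fancy-input --message 'refactor this function'
```

The custom `exec_and_replace_selection` target would then hand the filled-in command to the normal exec machinery.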

james2doyle avatar Jun 02 '25 22:06 james2doyle

What was the issue with the --watch command for you?

For me, the main issue with aider in December was the generally unreliable patching tool: applied patches left me with code broken at the syntax level. And this wasn't just an aider issue back then; every tool I tried patched unreliably.

So since we're here: if you're using aider with Claude models, is there some preferable patch format to force the model to respond with?

PS: is the newly released interactive build feature actually useful with aider?

yaroslavyaroslav avatar Jun 02 '25 22:06 yaroslavyaroslav

I just found it was clunky to use. I typically don’t want the entire file to be sent to the context. I also found the lack of MCP support to be challenging. There is an open PR for MCP support but it has missed multiple releases.

I also found the patching to be hit or miss. I also find the way it deals with git and commits to be strange.

I haven't been using Claude with aider because I have a small budget for it. I accidentally used my whole budget last month because of bad usage on my part. I usually use ChatWise with Claude.

I think interactive is perfect for aider. I can just open it up and run a message, or add the current file to the context and then enter a follow-up message.

james2doyle avatar Jun 02 '25 23:06 james2doyle

> I just found it was clunky to use. I typically don’t want the entire file to be sent to the context. I also found the lack of MCP support to be challenging. There is an open PR for MCP support but it has missed multiple releases.
>
> I also found the patching to be hit or miss. I also find the way it deals with git and commits to be strange.

Same for me; this was the reason I stopped using it heavily.

I really encourage you to give codex a try with o4-mini. I can barely see myself working without it since I tried it.

It's pretty close to Zed's very recent agentic release, but I find it more accurate and effective (and thus cheaper) than Zed's implementation.

yaroslavyaroslav avatar Jun 03 '25 10:06 yaroslavyaroslav

I have tried it, but the problem is that it uses the global system Node.js. So if you have a project that uses nvm to control the Node version, and that version is lower than the one codex needs, it won't work properly...

I'm sure there is a workaround but I didn't try to find one. I'll revisit it.

I've been playing with goose and opencode and those are both decent. I've been working on my own opinionated tool as well but it's not ready yet

james2doyle avatar Jun 03 '25 15:06 james2doyle