Add SmartTrend pipe
SmartTrend is a pipe created for the Developer Program. It acts as a Twitter engagement assistant: it finds relevant tweets and suggests replies to help the user engage more easily.
https://github.com/user-attachments/assets/74dcc66d-c244-48b8-a5fd-63c8df113b01
/claim #1191 /closes #1191
@giraffekey is attempting to deploy a commit to the louis030195's projects Team on Vercel.
A member of the Team first needs to authorize it.
https://cap.so/s/d55sm9sy0vqd1hs
I think this is an issue with different setups. I tried running the LinkedIn AI assistant to see how it works and got the error:
[chrome-route] chrome launch failed with error: Error: failed to connect to chrome debug port after all attempts
Mine works if you install puppeteer with npm i puppeteer, but I don't think that's very portable. I'm not sure how to automate these installations.
I added a button that installs Chrome before the Connect button is enabled.
https://cap.so/s/zrs787221s3qv6f
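For reference, a minimal sketch of what that install step could look like using the @puppeteer/browsers package; the function name, cache location, and "stable" channel here are my assumptions, not necessarily what the pipe does:

```ts
import os from "node:os";
import path from "node:path";
import {
  Browser,
  detectBrowserPlatform,
  install,
  resolveBuildId,
} from "@puppeteer/browsers";

// Download Chrome for Testing into a local cache so the pipe doesn't
// depend on a system-wide Chrome or a full `npm i puppeteer`.
async function installChrome(): Promise<string> {
  const platform = detectBrowserPlatform();
  if (!platform) throw new Error("unsupported platform");

  // "stable" resolves to the latest stable Chrome for Testing build.
  const buildId = await resolveBuildId(Browser.CHROME, platform, "stable");

  // Hypothetical cache location; the pipe may store it elsewhere.
  const cacheDir = path.join(os.homedir(), ".screenpipe", "browsers");

  const installed = await install({ browser: Browser.CHROME, buildId, cacheDir });
  return installed.executablePath;
}
```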
not enough user feedback, please show what the bot is doing, like write some status text idk
does not work in general, waited a bit, nothing happened
https://cap.so/s/n1mn2yxspxehbgq
Should now connect to the already running browser instance instead of downloading Chrome for Testing. Added status text for the processes and toasts for any caught errors.
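Something like this, sketched with puppeteer-core (the port and browserURL are assumptions based on Chrome's default remote-debugging setup):

```ts
import puppeteer from "puppeteer-core";

// Attach to a Chrome instance that is already running with
// --remote-debugging-port=9222, instead of downloading a new browser.
async function connectToRunningChrome() {
  return puppeteer.connect({
    browserURL: "http://127.0.0.1:9222",
    defaultViewport: null, // keep whatever window size the user has
  });
}
```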
1. does not compile
2. Error: dlopen(/Users/louisbeaumont/.screenpipe/pipes/smarttrend/node_modules/lmdb/build/Release/lmdb.node, 0x0001): tried: '/Users/louisbeaumont/.screenpipe/pipes/smarttrend/node_modules/lmdb/build/Release/lmdb.node' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e' or 'arm64')), '/System/Volumes/Preboot/Cryptexes/OS/Users/louisbeaumont/.screenpipe/pipes/smarttrend/node_modules/lmdb/build/Release/lmdb.node' (no such file), '/Users/louisbeaumont/.screenpipe/pipes/smarttrend/node_modules/lmdb/build/Release/lmdb.node' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e' or 'arm64'))
Should be fixed.
https://cap.so/s/ds7rtm2rs18vwp3
does not work
It can only connect to Chrome if the remote debugging port is open. Added code to open Chrome with the correct args.
https://cap.so/s/kgkb08pphckkebq
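Roughly, the launch step is of this shape (a sketch with Node's child_process; the port number and the flags beyond --remote-debugging-port are assumptions):

```ts
import { spawn } from "node:child_process";

// Start Chrome with the remote debugging port open so puppeteer-core
// can attach afterwards. `chromePath` would come from the install step.
function launchChromeWithDebugPort(chromePath: string, port = 9222) {
  const child = spawn(
    chromePath,
    [
      `--remote-debugging-port=${port}`,
      "--no-first-run",
      "--no-default-browser-check",
    ],
    { detached: true, stdio: "ignore" } // let Chrome outlive the pipe process
  );
  child.unref();
  return child;
}
```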
The only time it fails silently like that is when the AI model can't be detected.
It should work correctly with aiProviderType set to "openai" and openaiApiKey being set. It's also supposed to work with Ollama.
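As a sketch of how that provider switch could look (assuming the @ai-sdk/openai and the community ollama-ai-provider packages; the settings field names follow the ones above, and the model ids are placeholders):

```ts
import { createOpenAI } from "@ai-sdk/openai";
import { createOllama } from "ollama-ai-provider";

// Pick a model for the ai package based on the pipe settings.
function getModel(settings: {
  aiProviderType: string;
  openaiApiKey?: string;
  aiModel?: string;
}) {
  if (settings.aiProviderType === "openai") {
    const openai = createOpenAI({ apiKey: settings.openaiApiKey });
    return openai(settings.aiModel ?? "gpt-4o"); // model id is a placeholder
  }
  // Otherwise assume a local Ollama instance on the default port.
  const ollama = createOllama();
  return ollama(settings.aiModel ?? "llama3.1"); // model id is a placeholder
}
```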
https://cap.so/s/wn5pt3tgv6y2n0v
It turns out screenpipe cloud works differently than other OpenAI providers. I'm not able to use the ai package's streamText and streamObject functions with it. I've implemented a fix to make text streaming work by parsing the errors, but streaming structured data still only works with the normal OpenAI provider, which means tweet suggestions are not currently working.
This code works with openai.com but does not work with screenpipe cloud:
import { streamObject } from "ai";
import { z } from "zod";

const schema = z.object({
  tweetId: z.string(),
  handle: z.string(),
  reason: z.string(),
  reply: z.string(),
});

const { elementStream } = await streamObject({
  model,
  output: "array",
  schema,
  prompt: "...",
});
It should work with screenpipe cloud now, but it might sometimes fail if the AI returns incorrect schema, since it doesn't support object streaming.
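For context, the text-parsing workaround is roughly of this shape (a sketch only: it assumes streamText from the ai package and the zod schema shown earlier; the real parsing in the pipe may be more defensive):

```ts
import { streamText, type LanguageModel } from "ai";
import { z } from "zod";

// Fallback for providers without object streaming: ask for plain JSON,
// collect the text stream, then validate against the same zod schema
// that streamObject would have used.
async function streamArrayAsText<T>(
  model: LanguageModel,
  prompt: string,
  schema: z.ZodType<T>
): Promise<T[]> {
  const { textStream } = await streamText({
    model,
    prompt: `${prompt}\nRespond only with a JSON array matching the expected schema.`,
  });

  let raw = "";
  for await (const chunk of textStream) {
    raw += chunk;
  }

  // The fragile part: malformed JSON or a wrong shape throws here,
  // which is why suggestions can still fail with screenpipe cloud.
  return z.array(schema).parse(JSON.parse(raw));
}
```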
you mean screenpipe-cloud does not support object streaming?
@Gmin2 can you fix that? will give tip
https://cap.so/s/1s3zr27j73w9ek5
that's great, starts to become really good, few issues:
- it grabs focus on the Chrome window and distracts me from work, the LinkedIn agent usually acted completely in the background?
- clicking on a post often does not find the tweet
- stop bot does not do anything, or there is no user feedback (needs a loading state or something), and it's slow
- maybe needs more settings for frequency, time, etc.
- should let me give a system prompt and/or use my style from my usual X replies
logs
I'll try to use it more daily to see if it's useful
@Gmin2 can you fix that? will give tip
sure @louis030195, can you share the output when you run screenpipe cloud with structured outputs? (cc @giraffekey)
It doesn't give output. What happens is it hangs for a while, then stops the stream without having ever returned anything. When I use generateObject instead of streamObject I get a "max retries reached" error. Maybe it's trying to call an endpoint that doesn't exist.
Investigating 🕵️
any update on this?
would it make sense to merge a first version in coming days for part of the bounty and iterate from user feedback?
I'm ready for merge. I consider the first version of this already complete (unless there are any more caveats).
https://cap.so/s/wnpy118svjrdznq
does not work
Does screenpipe cloud w/ gpt-4 work?
I used OpenAI directly
Hm. Both are working for me.
I don't think I changed any of the API model code since the last time you ran it.
I fixed something but I doubt it was related.