Ryan Barrett
Added a custom HTTP tunnel ^ through the app itself; it works ok (not great, just ok) for manual commands. The local Python shell still isn't connected, though.
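For reference, a minimal sketch of what an endpoint like that might look like, assuming Flask and pymemcache; the route, the missing auth check, and the host are hypothetical, not the actual implementation:

```python
# Hypothetical HTTP "tunnel" endpoint: relays simple memcached commands
# over HTTP so they can be run with curl. Not the real Bridgy Fed code.
from flask import Flask, abort, request
from pymemcache.client.base import Client

app = Flask(__name__)
memcache = Client(('10.126.144.3', 11211))  # Memorystore IP from above

@app.post('/admin/memcache')
def memcache_tunnel():
    # a real version would need an admin auth check here!
    key = request.values['key']
    op = request.values.get('op', 'get')
    if op == 'get':
        return repr(memcache.get(key))
    elif op == 'delete':
        return repr(memcache.delete(key))
    abort(400)
```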
This is getting more important: it's making manual datastore changes unreliable or outright unusable.
This has the most comprehensive set of options I've seen so far for connecting to Memorystore from outside GCP: https://blog.stackademic.com/connect-to-google-cloud-memorystore-redis-from-a-local-machine-and-use-redis-in-next-js-5e5a534d45b6

Currently trying the IAP TCP forwarding route: https://cloud.google.com/iap/docs/tcp-forwarding-overview
So one catch here is that ideally I'd set up the tunnel directly to the Memorystore instance, not to a generic Compute Engine VM. That seems uncommon, but maybe possible?...
Progress, maybe: changed `--network` and added `--dest-group`:

```
gcloud compute start-iap-tunnel 10.126.144.3 11211 \
  --region=us-central1 \
  --network=default \
  --dest-group=memorystore-memcached \
  --local-host-port=localhost:11211
...
ERROR: (gcloud.compute.start-iap-tunnel) While checking if a connection can...
```
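If `start-iap-tunnel` won't talk to the Memorystore IP directly, the fallback would be an SSH tunnel through a small VM on the same VPC, something like this (VM name and zone are hypothetical):

```
gcloud compute ssh tunnel-vm --zone=us-central1-a --tunnel-through-iap -- \
  -N -L 11211:10.126.144.3:11211
```

Then point the local client at localhost:11211.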
Other ideas:
* add a custom memcached tunnel to atproto-hub
* ssh into atproto-hub and run the Python REPL there, https://cloud.google.com/appengine/docs/flexible/debugging-an-instance#connecting_to_the_instance
One workaround/hack for this would be a way to clear individual ndb objects from memcache. It wouldn't help with in-memory caches, but maybe it's still worthwhile...? Not sure.
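Something like this might work, as a rough sketch; it leans on google-cloud-ndb's private `_cache.global_cache_key` helper and a made-up kind/id, so treat it as an assumption rather than a supported API:

```python
# Sketch: evict one entity from ndb's global cache (memcached) so the
# next read falls through to the datastore. Untested; uses private API.
from google.cloud import ndb
from google.cloud.ndb import _cache
from google.cloud.ndb.global_cache import MemcacheCache
from pymemcache.client.base import Client

ndb_client = ndb.Client()
cache = MemcacheCache(Client(('10.126.144.3', 11211)))  # IP from above

with ndb_client.context(global_cache=cache):
    key = ndb.Key('Object', 'some-id')  # hypothetical kind and id
    # serialize the key the same way ndb does when it writes the cache
    cache.delete([_cache.global_cache_key(key._key)])
```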
Here's how Gemini says to do this:

> Core Concept: Shared VPC for Cross-Project Connectivity
>
> Memorystore for Redis instances are deployed within a VPC network. To allow a...
Current instance is almost all from one user, [did:plc:v35n6kafnti65tv44uuwtbmn](https://fed.brid.gy/bsky/did:plc:v35n6kafnti65tv44uuwtbmn). Example: `at://did:plc:v35n6kafnti65tv44uuwtbmn/app.bsky.feed.post/3lcvbt6idwm2w#delete`

```json
{
  "objectType": "activity",
  "verb": "delete",
  "id": "at://did:plc:v35n6kafnti65tv44uuwtbmn/app.bsky.feed.post/3lcvbt6idwm2w#delete",
  "actor": "did:plc:v35n6kafnti65tv44uuwtbmn",
  "object": "at://did:plc:v35n6kafnti65tv44uuwtbmn/app.bsky.feed.post/3lcvbt6idwm2w"
}
```

Looks like a valid delete...
This one at least seems like someone mass-deleting their old posts, which we probably want to allow, and bridge. So, maybe we rate-limit per user and gradually spread...
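A rough sketch of what that could look like, with made-up names and limits: a token bucket per DID, where overflow gets deferred and retried later instead of dropped, so a mass delete gets spread out over time:

```python
# Hypothetical per-user rate limiter for deletes: each DID gets a token
# bucket; when it's empty, the caller defers the delete (e.g. re-enqueues
# the task with a delay) rather than dropping it.
import time
from collections import defaultdict

RATE = 10 / 60   # sustained deletes/sec per user (assumed)
BURST = 20       # burst allowance (assumed)

_buckets = defaultdict(lambda: {'tokens': BURST, 'last': time.time()})

def allow_delete(did):
    bucket = _buckets[did]
    now = time.time()
    bucket['tokens'] = min(BURST,
                           bucket['tokens'] + (now - bucket['last']) * RATE)
    bucket['last'] = now
    if bucket['tokens'] >= 1:
        bucket['tokens'] -= 1
        return True
    return False  # defer: retry this delete later instead of dropping it
```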