omnivore
Improving Self-Hosting and Removing 3rd Party dependencies.
@Podginator is attempting to deploy a commit to the omnivore Team on Vercel.
A member of the Team first needs to authorize it.
This is awesome, it looks like it's taking shape! I might try it out this weekend.
Are there contributions from the community that you can think of that would be helpful for you?
Does it mean I would be able to deploy open-source Omnivore to Vercel, and be able to use this great app even after the shutdown? 🙏
If this gets worked out I'll add a template for easy self-hosting with Coolify
Are you guys planning to add a docker container to self-host Omnivore?
Looking forward to a successful completion of this pr and thanks to the community for all their work.
Is there also a plan to rebuild the Android APK to allow specifying a server URL?
You should be able to do this today. If you log out there's a self-hosting option on the android App.
Also the iOS App.
I think there are a few bugs in both that need ironing out. In the Android version you seemingly can't add links currently.
@Podginator thanks for leading this effort. I'm sure this is really important to a lot of people in the community. How can we help push this along faster?
Even though I changed the URL, the login still redirects to localhost. Anyway, thanks for your contribution @Podginator
Have you changed it both in the env file and in the docker compose file? For the web it needs to be changed too.
Yeah, I did. I guess I need to change the code as well? I even rebuilt the images. Actually
Hey, I made a PR which was merged into main (https://github.com/omnivore-app/omnivore/pull/4110) but didn't make it into the App Store. It solves an intermittent crash when using a self-hosted backend.
Was able to get an instance up on an arm64 machine. As mentioned above, content-fetch works fine on arm64. The only issue I had was imageproxy: in order to get it to work, I had to update the version to this one, which has arm64 binaries compiled: https://github.com/willnorris/imageproxy/pkgs/container/imageproxy/223956220?tag=main
Everything else seemed to work fine, although I did not test the email stuff as I don't use that feature.
I also face this issue on my Linux VPS.
There should be no need to change the code. I have recently updated the guide to include the IMAGE_PROXY_URL and CLIENT_URL parameters. Did you ensure that these were also changed, as well as the ones in the docker-compose file for the web? These are the things that write the redirect URL, and so may be causing issues.
I run mine on a Linux server, so that shouldn't be causing any issues.
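For reference, the kind of settings meant here look roughly like this (the domain and values are illustrative, and the variable names should be double-checked against the self-hosting guide):

```
# .env consumed by docker compose — illustrative values only
CLIENT_URL=https://read.example.com              # public URL of the web frontend
IMAGE_PROXY_URL=https://read.example.com/images  # public URL of imageproxy
```

The same public URL also needs to be reflected in the web service's environment in the docker-compose file, since that is what ends up in the login redirect.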
I managed to get it running with a TLD after rebuilding all the images. But I still face an issue when pointing to https://read.mydomain.com/api, where I get the following error:
Cannot GET /api/
I made an NGINX custom location with the following parameters:
Is it because you've pointed it to https, but your nginx config specifies http as per your screenshot?
With the nginx config, the HTTPS endpoint is proxied over HTTP to the specific API docker container. If I change the endpoint itself to http I get no connection at all. At the moment my login and the content fetch are working, but the content has no images, and saving URLs via read.mydomain.com/api/save?url={url} is not working. So I assume that the connections to the image-proxy and the API container are not working.
I'll preface this by saying that I changed the NGINX to run on port 80 rather than SSL.
The Cannot GET /api/ makes sense: there is no route /api. The fact that you're getting that Express error shows that the API is indeed working. Going there myself I also get Cannot GET /api, but the rest of my application works fine.
When changing the URL, are you including /api in the places where you give the API address? This will cause the URL to become /api/api/graphql rather than /api/graphql.
If you want to continue using that, you can add the following in the NGINX config, which will convert it back to a single /api.
location /api {
rewrite ^/api/api/(.*)$ /api/$1 break;
proxy_pass http://omnivore_backend;
}
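To illustrate what that rewrite rule does — collapse an accidentally doubled /api prefix back to a single one — here is the same substitution sketched with Python's `re` module (nginx itself uses PCRE, but the pattern behaves the same here):

```python
import re

def collapse_api(path: str) -> str:
    # Equivalent of: rewrite ^/api/api/(.*)$ /api/$1 break;
    return re.sub(r"^/api/api/(.*)$", r"/api/\1", path)

print(collapse_api("/api/api/graphql"))  # -> /api/graphql
print(collapse_api("/api/save"))         # no doubled prefix, left unchanged
```

Paths that already have a single /api prefix pass through untouched, so the rule is safe to leave in place even for clients that send the correct URL.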
As for imageproxy, I'm not sure what's happening there. For me this just worked off the bat, but I did have to include /images in the environment variable: IMAGE_PROXY_URL=http://localhost/images
If you are using subdomains instead, you can modify the nginx config to include this. In the meantime I would just use this.
Feel free to reach out on Discord too, we can try further debugging there.
Thanks for the tip with the environment variable, images work now :) As for the API issue, I am trying to use a browser extension called "Omnivore List Popup". The standard URL for the API in the settings is:
https://api-prod.omnivore.app/api/graphql
I tried using the following URL:
https://read.mydomain.com/api/graphql
which is not working. Any idea why? For the other issue I faced (the Cannot GET /api/), I tried to save pages via the bookmarklet:
javascript:void(open('https://read.mydomain.com/api/save?url='+encodeURIComponent(location.href),'Omnivore'))
Still not working. I should say that I didn't have the /api/api issue you mentioned in the last comment.
I've figured out the extension issue.
The extension has the following in its manifest to allow cross-site calls.
When changing the URL to yours, there will be a CORS issue when trying to call it.
To fix this, you can do one of the following:
Either download the source code and, in the manifest file, add your URL to the array found here: https://github.com/herrherrmann/omnivore-list-popup/blob/27d20f951642ccb8c0f578d9ab05681878470af0/src/manifest.chrome.json#L18
Alternatively you can download the extension using something like https://chromewebstore.google.com/detail/chrome-extension-source-v/jifpbeccnghkjeaalbbjmodiffmgedin and modify the manifest yourself.
After installing the unpacked extension, your contents will load.
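For illustration, the edit to the manifest amounts to adding your own origin next to the official one in the extension's host permission list — roughly along these lines (key name as in the Chrome MV3 manifest format; the exact shape of the array in that file may differ):

```json
{
  "host_permissions": [
    "https://api-prod.omnivore.app/*",
    "https://read.mydomain.com/*"
  ]
}
```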
As for the other problem, I think I have an idea of how to fix it, but I am unable to test right now.
In your NGINX add the following:
location /api/save {
proxy_pass http://omnivore_web;
}
Then reload.
I think the problem might be that this is a route for the frontend application, but nginx is directing it to the backend. You can see that we had a similar problem in the past here:
location /api/client/auth {
proxy_pass http://omnivore_web;
}
Give that a try and let me know. If that solves it, I will add it to the official config.
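Putting the pieces from this thread together, the relevant nginx locations would look roughly like this (upstream names omnivore_web and omnivore_backend as used above; treat this as a sketch, not the official config):

```nginx
# Frontend routes that must NOT be sent to the API container.
# nginx picks the longest matching prefix, so these win over /api below.
location /api/client/auth { proxy_pass http://omnivore_web; }
location /api/save        { proxy_pass http://omnivore_web; }

# Everything else under /api goes to the backend.
location /api {
    rewrite ^/api/api/(.*)$ /api/$1 break;  # collapse accidental /api/api prefixes
    proxy_pass http://omnivore_backend;
}
```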
I couldn't try the extension at the moment, but the solution with the nginx location worked. Saving by URL is now working properly!
It has been confirmed that for the Android app's clipping function, even if a self-hosted URL is configured, only the official Omnivore URL will be used to connect.
Search for "ApolloClient.Builder" in the app code, then modify that code to use the self-hosted setting, just like networker.kt does (....."authenticatedApolloClient() = ApolloClient.Builder().serverUrl(serverUrl))....). Rebuild and package the APK; then, with the self-hosted service, the Android app will be able to add and share links.
The latest updates on your projects. Learn more about Vercel for Git ↗︎
| Name | Status | Preview | Comments | Updated (UTC) |
|---|---|---|---|---|
| omnivore-demo | ✅ Ready (Inspect) | Visit Preview | 💬 Add feedback | Dec 8, 2024 3:31pm |
| omnivore-prod | ✅ Ready (Inspect) | Visit Preview | 💬 Add feedback | Dec 8, 2024 3:31pm |
Hi everyone, very happy to see the level of engagement here! Unfortunately this is all quite technical, so I was wondering if there was some README or roadmap somewhere that listed the current feasibility of self-hosting, gotchas, maturity, etc., as well as how to import the data we exported from the original server. I surely can't be the only one who has the skills to run a Docker-ready system but lacks the knowledge to follow this (very active) PR closely. And thank you all very much!
Your best bet is probably to join us in the Omnivore discord, specifically the #self-hosting channel. We have some guides there.
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
2 out of 3 committers have signed the CLA.
✅ Podginator
✅ m1xxos
❌ weeebdev
You have signed the CLA already but the status is still pending? Let us recheck it.
Will this ever happen?


