
Browser support?

Open mattlenz opened this issue 6 years ago • 16 comments

Compiling into an app with webpack, I get the following "Module not found" errors for Node built-in modules:

- dgram: imported at node_modules/dns-socket/index.js
- dns: imported at node_modules/k-rpc-socket/index.js
- net: imported at node_modules/discovery-swarm/index.js

Is browser support planned for Hypermerge -- or maybe I have missed some configuration?

AFAIK hypercore can work in the browser.

mattlenz avatar Mar 19 '18 23:03 mattlenz

I recently dealt with the same problem. You can modify your webpack configuration like this:

  // Some libraries import Node modules but don't use them in the browser.
  // Tell Webpack to provide empty mocks for them so importing them works.
  node: {
    dgram: 'empty',
    fs: 'empty',
    net: 'empty',
    tls: 'empty',
    child_process: 'empty',
    dns: 'empty'
  },

Here is the webpack documentation:

https://webpack.js.org/configuration/node/

jimpick avatar Mar 20 '18 03:03 jimpick

Thanks @jimpick -- compiles now but with a warning that one require statement is the result of an expression.

Now the next trick will be a replacement for discovery-swarm that works in the browser. It may be possible with https://github.com/mafintosh/signalhub and https://github.com/mafintosh/webrtc-swarm

mattlenz avatar Apr 04 '18 04:04 mattlenz

Not sure what happened with those links: https://github.com/mafintosh/signalhub and https://github.com/mafintosh/webrtc-swarm

millette avatar Jun 27 '18 22:06 millette

Thanks @millette -- fixed those links!

mattlenz avatar Jun 29 '18 05:06 mattlenz

Any update on this? Really interested in running this in the browser

jackmac92 avatar Sep 15 '19 19:09 jackmac92

Hi! I don't think anyone is working on this at the moment, but I'm curious why you'd want this? If your users are downloading your application in a browser, by definition they're accessing a server. If they're communicating via WebRTC, they can both reach a signalling server to communicate.

A much, much, much simpler solution (for the browser), if you already have users who are online and able to reach the same server, is just to write a very simple websocket echo server. It only takes a few dozen lines of code to implement something passable and reasonably scalable for small applications, and it has all the same availability characteristics.
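The core of such an echo/relay server can be sketched in a few lines. The function name `relay` and the bare-object clients below are illustrative; wiring this up to a real WebSocket library such as `ws` is the only transport-specific part:

```javascript
// Sketch of the heart of a WebSocket "echo"/relay server: every message a
// peer sends is forwarded to every other connected peer. The broadcast step
// is a pure function so the policy (who receives what) is easy to audit;
// `clients` is whatever collection the WebSocket library exposes.
function relay(sender, clients, data) {
  const delivered = [];
  for (const client of clients) {
    if (client === sender) continue;       // never echo back to the sender
    if (client.readyState !== 1) continue; // 1 === OPEN in the WebSocket API
    client.send(data);
    delivered.push(client);
  }
  return delivered;
}
```

With the `ws` package this would hang off the connection handler, roughly `wss.on('connection', ws => ws.on('message', data => relay(ws, wss.clients, data)))`, plus whatever rate limiting or authentication the deployment needs.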

I might work on this at some point because I already have a local-first application I want to add cloud-support to so that I can share content with people who haven't installed the tool, so it's not like I think it's useless or anything, I just am genuinely interested in better understanding why this has come up a few times.

pvh avatar Sep 16 '19 18:09 pvh

Well, the web is still the de-facto way of distribution for many applications (besides apps that are native or built with Electron or react-native). With the reliance on SQLite in the current state of hypermerge we're somewhat limited, but generally speaking, there are ways to realize at least the serverless-ish network within the browser with hyperswarm-proxy: https://github.com/RangerMauve/hyperswarm-proxy

What I'm currently attempting is an architecture where feed replications happen 'natively' in local applications, but integration with web-based apps happens via Hyperswarm/WebRTC and W3C standards (using HTTP, but within a decentralized swarm). The idea is to decouple the API from Hypermerge's replication mechanism, and create an SDK that works in most modern browsers.

Maybe that's of use for some folks.

jankaszel avatar Oct 21 '19 11:10 jankaszel

The geut folks are working on something similar for wireline. I think the projected path forward for web applications is a server-side/non-web component that runs the hypermerge backend, and does storage & sync, and a thin client that works like the renderer in pushpin.

Speaking to the meta-point, which is that browsers are what people use... yes. That's basically the problem. Browsers are a poor solution for building local-first software, because they can't be trusted with it. The browser will delete PWAs due to cache pressure, there's no way to enumerate what you have installed on your machine, and it's very difficult to tell whether an application will survive the loss of internet.

Couple that with the technical shortcomings of the browser, and you will understand why we wound up writing Electron apps.

There are certainly folks trying to build browser-based P2P software but for the most-part it winds up being a very complicated way of implementing a WebSocket echo server.

I'd be happy to talk more about this, but I think it's important to enumerate what your goals are before choosing a platform. There are certainly projects whose goals are compatible with being browser-first, but I don't believe PushPin/hypermerge currently qualify.

pvh avatar Oct 21 '19 16:10 pvh

Speaking to the meta-point, which is that browsers are what people use... yes. That's basically the problem. Browsers are a poor solution for building local-first software, because they can't be trusted with it. The browser will delete PWAs due to cache pressure, there's no way to enumerate what you have installed on your machine, and it's very difficult to tell whether an application will survive the loss of internet.

@pvh, As an aspiring local-first app developer, can you help me understand this better? I've read your paper but don't understand why PWAs are not suitable.

As context about where I'm coming from: I do worry about fragility of browsers as a platform for decentralised apps, but I think perhaps a bigger problem right now is getting noticed at all by the general public. Many people don't really think about not storing data in the cloud as a possibility -- at least not while retaining features they've got used to. If they did, I think that would create demand which would make it harder for large companies to break decentralised apps by changing the browser platform. I think it's easier to get public attention with a PWA than with an electron application -- if only because many people can't use electron apps at all because they don't have a desktop machine.

About PWAs: First, if I understand this right -- and quite possibly I don't -- it seems that as far as Apple is concerned, for example, if you "add it to the home screen", app data isn't supposed to be deleted:

https://news.ycombinator.com/item?id=22686602 https://webkit.org/blog/10218/full-third-party-cookie-blocking-and-more/#:~:text=A%20Note%20On%20Web%20Applications%20Added%20to%20the%20Home%20Screen

What I gather from that admittedly confusing communication is that, if you "install" it (add it to home screen), Apple has publicly committed itself to not deleting the data: i.e. that it's not merely a cache. And if I understand you and what "add it to the home screen" means (both are in doubt :-) ), a user can "enumerate" what they have installed by looking at their home screen. As I understand it, they also have a similar concept of installing a PWA on a phone. I think I've also seen the idea of installing as a way for users to communicate an intent of data durability to vendors other than Apple, though I don't have a link to hand. I think that was part of a larger web standards effort, perhaps involving some people at Google? Finally, at least for open source apps, it's not so hard to have confidence it will survive loss of internet if it advertises itself as such. In fact I would think the same is almost as true of closed source apps if they advertise themselves as such?

Right now I don't see my ~/.mozilla directory as a trustworthy durable data store. But if users could choose to replicate that data in a useable way over WebRTC (see below) to their other devices -- not necessarily web browsers -- and to servers, that could no longer be true, and the situation could improve over time if Mozilla (and even Google!) see real apps using this model.

There are certainly folks trying to build browser-based P2P software but for the most-part it winds up being a very complicated way of implementing a WebSocket echo server.

As a spare-time developer, I don't think I'm keen to operate what sounds like some sort of semi-public proxy (websocket echo server): I don't want to pay for the resources, nor be responsible for restricting access and dealing with 'abuse'. On the other hand, operating a signalling server doesn't sound quite so much of a headache. Have I misunderstood the issues there? I do also wonder if there could be an effort to establish public signalling servers that could be used as a shared resource by open source webrtc apps.

Thanks for your fantastic work on this! :-)

jjlee avatar Apr 13 '20 11:04 jjlee

The notion of a community WebRTC signalling server is an interesting one... I don't know exactly what that would look like, though. I also think the recent development in Brave to add IPFS support is promising. An Automerge-over-IPFS prototype might be a good experiment.

I'm going to offer two conflicting perspectives in response to your questions -- a pragmatic one and an ideological one.

The pragmatic response is, "hey yeah, go for it". In other words, if you think folks will use your software in a browser and that by adopting some of the ideas in Local-First software you'll be making your programs better, then you have my support 100%! That said, again pragmatically, my own experiences with WebRTC suggest that its reliability is pretty low without a TURN (traffic proxy) server. I think understanding WebRTC as an effective way to lower bandwidth costs for video conferencing applications is more accurate than thinking about it as a true P2P solution.

The ideological response is that the ideal of "local-first" software is that it should work in every place and under every kind of network condition for as long as possible. It's true that PWAs have tried to provide some of those properties, but my experience actually using PWAs in the field has been pretty negative. I've frequently had applications I tried to rely on simply fail to load, and while we could chalk that up to either developer or operator error, my diagnosis is that a "browser" is inherently ill-suited by design to the problem of reliably storing data on your local machine. It's just not a priority.

At Ink & Switch, we ran a fairly substantial experiment in creating a local-first software platform we called farm but it is very much prototype-ware and not really fit for real-world usage. Sadly I have not published anything public about it though it's somewhere on my guilty-backlog of TODO projects.

On the whole, folks building local-first software help the cause, particularly when they also report their experiences to platform vendors. Given that there are a number of unsolved problems out there, the best thing you can do is to think about what's important to you and your users, focus on making those parts work, and as Robin Sloan would say, "work with the garage door open". Be open about the struggles, share your successes, and inspire more people.

pvh avatar Apr 13 '20 15:04 pvh

my own experiences with WebRTC suggest that its reliability is pretty low without a TURN (traffic proxy) server

I hesitate to drag this issue further off topic, but: TURN-less WebRTC is unreliable because of NAT configuration, right? Perhaps you are pointing to some specific technical flaws of WebRTC/signalling/ICE/STUN, but if any such flaws were fixed, the WebRTC stack would still be unreliable between any given pair of peers because some NAT configurations prevent NAT traversal. Any solution to that presumably involves building some kind of network of peers. Disregarding the impurity of signalling, are you saying there is a fundamental reason that a reliable peer network can't be built on WebRTC as currently deployed in Chrome and Firefox? And if it's instead a practical reason, that reason can't be WebRTC's problem with NAT, because all P2P efforts face that same problem?

my diagnosis is that a "browser" is inherently ill-suited to the problem of reliably storing data on your local machine by design. It's just not a priority.

I think we have a legitimate disagreement there about the future of this -- thanks for explaining your point of view. Time will tell, but if you're proven wrong, as you say it will be because of efforts like yours and others.

jjlee avatar Apr 14 '20 19:04 jjlee

I hope I am wrong! In time, if we can prove viability for these approaches, platforms will adapt to absorb them. Still, I'm quite confident that efforts to write local-first software that is browser-first instead of local-first will struggle to deliver on local-first principles in the short term.

I've searched far and wide for evidence that people use PWAs and come up short. The closest I've seen is a few people saying they prefer the Twitter PWA client to the native app, but obviously Twitter is just a thin client. In the cases where I've tried to adopt other people's PWAs they've consistently failed me "in the field". I get on an airplane and discover documents are missing. I'm using them on public transit and they don't load.

As to WebRTC, well... there are essential challenges with NAT traversal, incidental complexities that come from the cumbersome implementation of WebRTC, and limitations as a result of the architectural design of WebRTC. At the end of the day, WebRTC-backed applications require a signaling server to do peer discovery, and that thing has to be always available, so if you're going to do that, my honest and direct advice is just run a simple websocket echo server and focus on local caching of data.

All this said, you don't have to take my word for it. I'm speaking with two or three year out-of-date experience, but it's echoed by all kinds of folk I've talked to. I also know for a fact there are multiple teams at Google working on trying to fix and replace WebRTC for all the reasons I've outlined above. (Maybe some of them have been successful! That would be great.) In any case, if you DO go after WebRTC and browser support please, please keep me abreast of your work and conclusions whether positive or negative. There's genuinely nothing that would make me happier than being wrong here. If it doesn't work out, I promise not to say "I told you so" and will happily share whatever else I've learned in the meantime.

Best of luck!

pvh avatar Apr 14 '20 23:04 pvh

Maybe my two cents on this would be of interest.

I originally posted on this thread when I learned that neither hyperswarm nor hyperwell run in the browser (hyperswarm does, now, with a WebRTC bridge). When running JavaScript apps anywhere (within browsers, Electron apps, or just plainly with Node.js), it is tempting to assume that local-first apps should run anywhere, too. Collaboration environments such as Google Docs and Trello (the local-first apps paper provides good examples) provide great real-time collaboration if you're online, but I presume it's easy to confuse (distributed) real-time collaboration (CRDTs) with actual local-first apps. However, it can be pragmatically as well as economically beneficial for developers to use decentralized real-time collaboration, which might explain the recent popularity of CRDTs.

After putting some thought and work into both local-first apps themselves and the architecture behind such systems, I would conclude with the following:

  • Local-first apps are somewhat different in their usability from web apps. Web apps tend to be volatile (despite local storage) and are scheduled arbitrarily by the browser's sandbox. Local-first apps, however, require some daemon- or server-like functionality, as they also provide data availability to you and others (unless you use external seeding facilities). The dilemma is: as long as your apps are of a volatile nature, you will need some other long-running process providing data to others while the app is not running, and that process might end up on somebody's server anyway.
  • If you mix apps running locally with web apps in the same system, you're putting a strain on a minority of nodes in the network and, thus, decrease data distribution and homogeneity of the network. In my experiments I even had to redesign an architecture and use a WebSocket bridge that takes some load from apps running locally in order to maintain performance. Folks getting started with local-first apps and log-based storage (and hence, sparse replication) should get some insights on possible issues with large-scale distributed networks. Some examples I can recommend are Gnutella, BitTorrent, and even the old Skype network. While I think hypermerge is a fantastic piece of software, it doesn't solve architectural issues for you.
  • The web does not provide the foundation for proper distributed networking such as a DHT like hyperswarm or even something like Beaker's PeerSocket API. @pvh has put it quite well in a comment above, saying that most web-based solutions quite likely will use a WebSocket echo server while there are no genuine alternatives for distributed networking.

Quite certainly, the above points don't make it easier to make people adopt local-first apps. Folks use Google Docs because it's free and they don't have to download something, and (most) people won't use some alternative just because it's labeled as local-first software... sadly.

jankaszel avatar Apr 15 '20 14:04 jankaszel

Well put, @falafeljan! I would just add that I think web support is absolutely essential but should come second. Literally, I think we should focus on "local-first".

Related, I also believe that users will absolutely download and install software, it's just a question of adoption and onboarding. The vast majority of applications on your phone are downloaded and installed. On desktop, most users have installed all kinds of software from Spotify to Slack to VS Code, but notably all of them have a browser experience too.

pvh avatar Apr 15 '20 15:04 pvh

Sorry falafeljan, I probably asked the wrong question -- but the answers here, including yours, were helpful to me anyway!

So, I'm not asking for more info here about why hypermerge is as it is, but just for the record: I should have asked instead about what prevents a degenerate form of local-first (which perhaps you wouldn't label as such at all) from working in PWAs -- something like: CRDTs, but aiming for reliable syncing only with local network and optionally with central server for backup. I see that as trying to help get a local-first foot in the door, not fully achieving all the ideals.

That's what I'm trying to figure out the best way to do for my little proto-app. I do see that "uptime" is a fundamental issue with any decentralised app, and especially so on the web platform (though I didn't follow what you said about websocket bridge, I'm curious about that). I still don't know enough yet to understand exactly why WebRTC isn't sufficient for building mesh networks -- but I don't expect to figure that out fully here and certainly take your experience seriously and have some sense how that ambitious task is full of frustrating and gnarly details.

By the way: I suspect the only place I see this differently is about future browser development and possibly about which of web and native is chicken and which is egg -- hope I didn't come across as "hey why are you doing all this work and research, just use the web dude"!

jjlee avatar Apr 26 '20 14:04 jjlee

pvh said, ages ago: I just am genuinely interested in better understanding why this has come up a few times

Datapoint:

  • Working on a quantified-self type app. Actually not much collaboration planned, but CRDTs appealing anyway to support local data ownership and offline usage, while avoiding annoying conflicts
  • For various reasons I imagine getting started with native, then trying PWAs later -- on the theory (for which I lack any data) that even having to "install" a PWA is lower-barrier than installing a native mobile or desktop app. Despite starting with native, I'm thinking about it up front, trying to get some sense of where I'm going, hoping not to go down too many wrong paths for my goals.
  • I see just syncing on local network between desktop and mobile as a useful place to start
  • Later: Central server as backup, and as a way of read-only sharing with wider world via plain old web. I don't have in mind hypermerge in the browser for the latter, rather just keeping the CRDTs server-side for that and rendering more or less static pages.
  • I could see myself maybe running a signalling server if that helped, but I feel myself losing enthusiasm thinking about running an echo server. Partly that's about operational hassle (abuse?), partly it's about "getting a foot in the door" with decentralised apps, partly I don't want that to be the only way such that all styles of deployment for the app depend on it (I'd be fine with it being part of one means of deployment -- if it weren't for my other objections to it), partly it just doesn't sound like fun
  • Problem: App code and resources not fully downloaded when you go offline. I don't regard this as a blocker for me -- I'm more concerned with data ownership and, as far as PWAs are concerned, low barrier to adoption, than fully reliable offline operation for all modes of deployment. I defer to your experience, but I also have some perhaps misplaced faith that at least 'installed' PWAs will do that reliably at least at some point.
  • Problem: over-complicated architecture: I guess the relevant question for me is "complicated compared to what?". As a newbie and given that the whole area is "new", I'm still muddling through the maze of possible implementation and architectural choices. And as a spare-time project, architectural/implementation complexity is bad, but so is trying to implement too much of my own infrastructure -- hence being tempted by automerge, despite not yet even understanding all that it does. My plan for now is to start with native, without hypermerge, with something hand-rolled and very simple: maybe something like just Automerge.save/load for persistence, Automerge.Connection plus maybe noise-peer for syncing, and no discovery. But I'm certainly interested in anything web-based that fits with some of my goals without giving myself a big infrastructure project I won't finish, so I'll watch this space (both this issue and what other people are up to). I'd better stop cluttering up this issue and get back to my project!

jjlee avatar Apr 26 '20 14:04 jjlee