
3D

shashi opened this issue 9 years ago • 6 comments

I think https://github.com/dcjones/Gadfly.jl/issues/520 should start off with a 3D Compose API.

Rough plan could be:

  1. Extend (and possibly simplify) measures.jl to support 3D
  2. 4x4 MatrixTransforms (it's probably also advisable to separate the 2D and 3D transform types so that the existing 2D stuff continues to work)
  3. Add 3D form primitives: Polyhedron, Cuboid, Sphere, Cone etc.
  4. Write an OpenGL backend
  5. Write a WebGL backend
  6. Invent 3D GoG with Gadfly :rocket:
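
Steps 1–3 might look something like this (a hypothetical sketch only; none of these names exist in Compose today):

```julia
# Hypothetical sketch: parameterizing points and transforms by the
# dimension N lets 2D and 3D share code while keeping the types distinct.

struct Point{N}
    coords::NTuple{N,Float64}   # in absolute units, e.g. mm
end

# 2D transforms stay 3x3 homogeneous matrices; 3D ones become 4x4.
struct MatrixTransform{N}
    M::Matrix{Float64}          # (N+1) x (N+1) homogeneous matrix
end

# A 3D form primitive, analogous to Compose's existing 2D forms.
struct Sphere
    center::Point{3}
    radius::Float64
end

# Apply a homogeneous transform to a point of the same dimension.
function apply(t::MatrixTransform{N}, p::Point{N}) where N
    v = t.M * [collect(p.coords); 1.0]
    Point{N}(Tuple(v[1:N] ./ v[N+1]))
end
```

The dimension parameter is what would keep the current 2D API intact: existing code would only ever see `Point{2}` and 3x3 transforms.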

shashi avatar Dec 23 '14 06:12 shashi

This is a solid plan. I don't think any of the basic semantics of Compose are inherently 2D (maybe stuff like hstack, vstack, etc.). What I'm unsure of is whether the 3D API should be an entirely separate but equivalent API: that is, whether there should be a Compose3D, or whether there is enough shareable code that it's worth extending Compose itself.

dcjones avatar Dec 23 '14 07:12 dcjones

Would a WebGL backend mean it was possible to get 3D plots embedded in IJulia notebooks?

johansigfrids avatar Dec 23 '14 13:12 johansigfrids

The main work will be to abstract OpenGL in a good way and to implement a scene graph. If that step is not done right, things will be inflexible and slow. It's always sad if you have to say about a package: "You want to draw more than 300 objects? Sorry, you need a 'real' graphics library for that!" ;)

The good news is that I'm quite close to having a generic mesh API, which could be used for this purpose. But there is still the question of how we map Compose's API to the mesh API without sacrificing performance. If we only draw a few kinds of primitives, we can use OpenGL's instanced drawing, which leaves us with the task of uploading the per-primitive transformations in a smart way. Transparency is also a big issue, which is still unsolved in my rendering library. As a reference, here is something I might like to implement at some point: http://de.slideshare.net/acbess/order-independent-transparency-presentation
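
To illustrate the instanced-drawing idea on the CPU side (a conceptual sketch with made-up names, not my actual mesh API): you batch primitives by type, so each type costs one draw call plus a buffer of per-instance transforms.

```julia
# Conceptual sketch of batching for instanced drawing. Each group below
# would become: bind the mesh for that type, upload its transforms, then
# issue a single instanced draw call.

struct SphereForm; radius::Float64; end
struct CubeForm;   side::Float64;   end

function batch(prims::Vector, transforms::Vector)
    groups = Dict{DataType,Vector{Int}}()
    for (i, p) in enumerate(prims)
        push!(get!(groups, typeof(p), Int[]), i)
    end
    # Map each primitive type to the transforms of its instances.
    Dict(T => transforms[idx] for (T, idx) in groups)
end
```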

Concerning WebGL: http://stackoverflow.com/questions/6555752/some-questions-about-webgl Wrapping WebGL seems a little cumbersome and inefficient. The only reason I see for WebGL support is to inline the graphics in IJulia.

I will soon have a prototype of an IDE, a little similar to IJulia but implemented in Julia and OpenGL, which can inline 3D and 2D graphics. Why do I prefer this? It's all implemented in one language, has no bottlenecks, and can scale to huge datasets, which I find crucial for scientific computing. So I'd much rather put work into that before putting work into something that can't be used for bigger projects.

All in all, I'd be happy to help you use my mesh API to draw the primitives. I just hope this doesn't lead to yet another OpenGL library, adding to the fragmentation of OpenGL libraries in Julia.

SimonDanisch avatar Dec 23 '14 14:12 SimonDanisch

I am interested in this topic, too, and have long wondered about implementing a ComposeGL backend.

But even if Gadfly can hand stuff off to OpenGL, I'm a bit concerned about the performance of the rest of Gadfly/Compose. I haven't had time to come back to some of the optimization work I started right after JuliaCon, but last I checked there were pretty bad bottlenecks all over Gadfly & Compose.

I worry that some of this is pretty fundamental to the design. For example, IIUC there is (or rather was, the last time I looked at Gadfly's code) no way to plot two columns of a matrix as two lines unless you first copy each element into a DataFrame, using a string tag to specify whether the element is from line1 or line2. Especially if there is going to be any kind of dialog between OpenGL and Gadfly (say for hit testing, zooming, etc.), those kinds of inefficiencies could be pretty deadly.
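
Concretely, the reshaping I mean looks roughly like this (a sketch with a named tuple standing in for the DataFrame; the column names are illustrative):

```julia
# Every element of a two-column matrix is copied into "long" format,
# paired with a string tag whose only job is to distinguish the lines.

function to_long(m::Matrix{Float64})
    n = size(m, 1)
    (x    = repeat(1:n, 2),            # row index, once per column
     y    = vec(m),                    # both columns stacked
     line = repeat(["line1", "line2"], inner = n))
end
```

For a large matrix that copy, plus the per-element string tags, is exactly the kind of overhead that hurts once OpenGL needs the data quickly.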

timholy avatar Dec 23 '14 15:12 timholy

@dcjones I think it's worth extending Compose with 3D instead of making a separate Compose3D (as you can see, points 1, 2, and 3 are about adding the extra dimension), and there will definitely be shared code. With the right type abstractions, we can make sure the 3D code doesn't change the current 2D API beyond recognition. It would be nice for Compose to actually become what Base.Graphics wanted to be (this comment!). At least to begin with, we can put this stuff inside a Compose.ThreeD module or something.

@johansigfrids that's the idea

@SimonDanisch I think the main advantage of Compose is that it is denotative. In some sense, the Context tree generated by Compose is itself the scene graph. And yes, it is good to support a small number of primitives using OpenGL's instanced drawing. I have little idea about these *GL languages and their performance issues, though. I think it's OK for 3D support in Compose to start off slow and at least allow creating static diagrams. On another note, an IDE in OpenGL will be really cool!!

@timholy I am pretty optimistic that many of the inefficiencies in Gadfly can be ironed out, especially if you take a crack at it. I think there is a lot of low-hanging fruit as it is, both in Gadfly and Compose (globals, e.g.?). Having seen your 3D visualizations of neurons from the JuliaCon talk (they are awesome), I get what you mean by "inefficiencies could be pretty deadly". As for the API for plotting a two-column matrix, I suppose that can be sorted out as well... I am going to take a look.

Note that I opened this issue in the tongue-in-cheek spirit of https://github.com/dcjones/Gadfly.jl/issues/520, but I believe with sufficient hacking Compose can become a viable (and pleasant) way to do 3D in Julia.

shashi avatar Dec 23 '14 16:12 shashi

@shashi

Yeah, I know that Compose is basically a scene graph; I'm just wondering whether it's a good fit for OpenGL.

I suspect that it isn't, but I actually don't know Compose very well. It's just better to consider the challenges from the start, so that it doesn't turn out to be a wasted effort in the end ;)

But we might be able to work well together anyhow. I'm working on an API that uses instanced rendering and the like to lay out mesh instances.

So you always have some sort of layout description, a transformation description and a primitive that gets spread around in space.

After you've composed that, you can directly manipulate the data on the GPU without re-composing the whole thing. That would ease the problem of long composition times.

This works very well with OpenGL, as rendering is very fast if you have the data in VRAM and then just modify small parts of it afterwards.

That's how I implemented vector fields, for example. The arrow is just a mesh, the layout is implicitly defined by the velocity volume, and the rotation is defined by the direction of the velocity vector. My text API also works much like this, so for example changing the line height is very efficient, as it just changes one float input to the OpenGL shader.
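
The per-instance data for such a vector field looks roughly like this on the CPU side (a conceptual sketch only; in the real thing this lives in GPU buffers, not Julia arrays):

```julia
# One instance record per velocity sample: the arrow mesh itself is
# shared, and only position, direction, and scale vary per instance.

struct ArrowInstance
    position::NTuple{3,Float64}   # layout: where the arrow mesh goes
    direction::NTuple{3,Float64}  # rotation: unit velocity direction
    scale::Float64                # arrow length ~ velocity magnitude
end

function instances(positions, velocities)
    map(positions, velocities) do p, v
        mag = sqrt(sum(abs2, v))
        dir = mag == 0 ? (0.0, 0.0, 1.0) : (v[1]/mag, v[2]/mag, v[3]/mag)
        ArrowInstance(p, dir, mag)
    end
end
```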

I'm not very far along with the high-level API, but I'm closing in on the low-level challenges... I might take some time in the next few days to describe my design concepts, and then we can discuss whether these things are usable in this context ;)

SimonDanisch avatar Dec 23 '14 21:12 SimonDanisch