Bezier-rs: Add calculations for area and centroid of subpaths
Closes #1634
- Improvements to the existing intersection algorithm for Bezier curves:
  - `minimum_separation` filtering was only applied to the intersection of two different Bezier curves. It is now applied to the self-intersection of a single Bezier curve too. The corresponding web demo is updated as well.
  - There is a filter that removes any t-value that is very near an endpoint, regardless of whether the intersecting curves are consecutive or not. A new function named `all_self_intersections` (equivalent to `self_intersections`) was created wherein the check runs for consecutive curves only.
  - A new function named `all_intersections` (equivalent to `intersections`) was added so that it returns pairs of t-values.
- Add code for doing polynomial math:
  - Add a new struct `Polynomial` in bezier-rs representing a polynomial and implementing common operations like addition, multiplication, differentiation, integration, etc. (a rough sketch of such a type follows this list).
  - Add a function `parametric_polynomial` to convert a Bezier curve to a parametric polynomial.
- Add code for area and centroid:
  - Add functions `area` and `centroid` to `Subpath`.
  - Add web demos for the above.
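For illustration, here is a rough, hypothetical sketch of what a dense-coefficient polynomial type with differentiation and integration might look like. This is not the actual `Polynomial` struct added in this PR; the name `Poly` and its methods are illustrative only.

```rust
// Hypothetical sketch, not the bezier-rs `Polynomial` struct.
// Coefficients are stored lowest degree first: coefficients[i] multiplies x^i.
struct Poly {
    coefficients: Vec<f64>,
}

impl Poly {
    // Evaluate at x using Horner's method.
    fn eval(&self, x: f64) -> f64 {
        self.coefficients.iter().rev().fold(0.0, |acc, &c| acc * x + c)
    }

    // d/dx: the coefficient of x^i becomes i * c_i at degree i - 1.
    fn derivative(&self) -> Poly {
        let coefficients = self
            .coefficients
            .iter()
            .enumerate()
            .skip(1)
            .map(|(i, &c)| i as f64 * c)
            .collect();
        Poly { coefficients }
    }

    // Antiderivative with the constant of integration set to zero.
    fn antiderivative(&self) -> Poly {
        let mut coefficients = vec![0.0];
        coefficients.extend(
            self.coefficients
                .iter()
                .enumerate()
                .map(|(i, &c)| c / (i + 1) as f64),
        );
        Poly { coefficients }
    }
}
```

A cubic Bezier's x(t) and y(t) components are each cubic polynomials in t, which is the kind of representation a `parametric_polynomial`-style conversion would produce.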
Implementation Notes: Area and centroid are calculated by Green's Theorem using a line integral. The vector fields used are as follows:
- Area: F = 0i + xj
- X coordinate of centroid: F = -xyi + 0j
- Y coordinate of centroid: F = 0i + xyj
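For intuition, here is a minimal numerical sketch of evaluating these line integrals along a sampled closed parametric curve. It is not the bezier-rs implementation; the function name, plain tuple types, and segment-midpoint sampling are illustrative only.

```rust
// Approximate area and centroid of a closed parametric curve (t in [0, 1])
// by evaluating the line integrals from Green's Theorem over sampled segments.
// Returns (area, (centroid_x, centroid_y)).
fn area_and_centroid(curve: impl Fn(f64) -> (f64, f64), samples: usize) -> (f64, (f64, f64)) {
    let mut area = 0.0;     // ∮ x dy    = ∬ 1 dA
    let mut moment_x = 0.0; // ∮ -x·y dx = ∬ x dA
    let mut moment_y = 0.0; // ∮  x·y dy = ∬ y dA
    let (mut x0, mut y0) = curve(0.0);
    for i in 1..=samples {
        let (x1, y1) = curve(i as f64 / samples as f64);
        // Chord midpoint and deltas; for the area term this reduces to the shoelace formula.
        let (xm, ym) = ((x0 + x1) / 2.0, (y0 + y1) / 2.0);
        let (dx, dy) = (x1 - x0, y1 - y0);
        area += xm * dy;
        moment_x += -xm * ym * dx;
        moment_y += xm * ym * dy;
        x0 = x1;
        y0 = y1;
    }
    (area, (moment_x / area, moment_y / area))
}
```

For a unit circle traced counterclockwise this converges to an area of about π with the centroid at the circle's center; a clockwise path flips the sign of the area while the centroid ratios are unaffected.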
This looks promising!
Would you be able to add these to the web demos so I could test it out? It's in `website/other/bezier-rs-demos`.
And having a Graphite node for each would be helpful too, but let's focus on the web demos first.
Nice work on this.
This has become much harder than anticipated. While trying to make the demo, it constantly panics due to an overflow caused by a left shift within `rustnomial`. On looking deeper, I realized that it assumes `usize` is 8 bytes, but in the wasm environment it is only 4 bytes.
I will probably have to look for another library or implement it manually. Another option might be to use `f32` instead of `f64` as the generic parameter for `Polynomial`, though we would lose some precision; the same problem does not occur when using `f32`.
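A small sketch of the platform difference being described; the shift amount here is arbitrary and only stands in for whatever rustnomial computes internally.

```rust
fn main() {
    // 64 on x86_64 hosts, but 32 on wasm32 targets.
    println!("usize is {} bits", usize::BITS);

    // A shift that is fine when usize is 64 bits wide overflows when it is
    // only 32 bits wide; checked_shl surfaces that as None instead of a
    // debug-build panic.
    match 1usize.checked_shl(40) {
        Some(value) => println!("1 << 40 = {value}"),
        None => println!("shift overflows on this target"),
    }
}
```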
Oh wow, that's a rather unexpected roadblock indeed. Sorry you've had to run into that :(
I don't have much knowledge of how complex it is to reimplement things, but I will say that having Bezier-rs remain dependency-free is a goal. If it's not horrendously difficult to go that route, that may be preferable. (Otherwise, we'll want to put this under a feature flag so the extra dependency is optional.)
For now, I have chosen a different library: `poly_it`. I have also used `ArrayVec` to reduce allocations, and I changed the vector field so as to reduce the calculations required. In the meantime, I will try to implement the required parts of this library within bezier-rs to remove the dependency.
The demo works perfectly for non-intersecting subpaths but does start to behave wildly for self-intersecting subpaths. Does the use-case of this function require it to work with self-intersecting paths?
Seems to work great! That's handy having the web demo to play with it.
> The demo works perfectly for non-intersecting subpaths but does start to behave wildly for self-intersecting subpaths. Does the use-case of this function require it to work with self-intersecting paths?
Yes, we do ideally want to support that if it's not too challenging. For consistency, it should probably use the same fill rule as that used in the Poisson-Disk Points function for its handling of self-intersection.
I seem to be finding some rather unexpected and potentially incorrect behavior with the centroid calculation, can you try and see if there's something wrong with it?
Open subpath: *(screenshot)*
Closed subpath: *(screenshot)*
> I seem to be finding some rather unexpected and potentially incorrect behavior with the centroid calculation, can you try and see if there's something wrong with it?
I did notice them. It is primarily because the number of intersections comes out to be odd due to the extra filtering. The number of intersections should always be even, since an intersection in one curve means an intersection in the other curve too. But the functions already present in the bezier-rs library only provide the t-value for one of them, which is why I had to make the new function `all_self_intersections`. If you keep the error and `minimum_separation` low enough, it should not create such problems.
I am also trying to find a better way of filtering, as most of the filtering mechanisms are designed with only one point in mind.
Another plan could be to return `None` if the number of intersections comes out to be odd.
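A minimal sketch of that last idea, assuming the caller collects all self-intersection t-values of a subpath into one flat list; the function name is hypothetical.

```rust
// Every self-intersection contributes two t-values (one on each of the two
// curve segments involved), so after filtering the total count should be
// even. An odd count means a point was dropped, so signal that with None.
fn checked_self_intersection_t_values(t_values: Vec<f64>) -> Option<Vec<f64>> {
    if t_values.len() % 2 != 0 {
        return None;
    }
    Some(t_values)
}
```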
Do you think it might be possible to perturb the error and minimum_separation repeatedly until the number of intersections turns out even? (But give up after some maximum number of tries and just accept the incorrect result that we have now.)
> Do you think it might be possible to perturb the error and minimum_separation repeatedly until the number of intersections turns out even? (But give up after some maximum number of tries and just accept the incorrect result that we have now.)
That could be possible. Actually, this is a good idea. I will try to implement this and check the results. But before doing that, I have a feeling that `minimum_separation` is actually useless for this use case, so I want to first try removing those filters.
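A sketch of that retry idea under stated assumptions: `find_t_values` stands in for whatever intersection routine is being used, and the perturbation factors and attempt limit are arbitrary choices for illustration.

```rust
// Nudge `error` and `minimum_separation` until the number of collected
// t-values comes out even, giving up after a few attempts and accepting
// the possibly-incorrect result.
fn intersections_with_retry(
    mut find_t_values: impl FnMut(f64, f64) -> Vec<f64>,
    mut error: f64,
    mut minimum_separation: f64,
) -> Vec<f64> {
    const MAX_ATTEMPTS: usize = 8;
    let mut result = find_t_values(error, minimum_separation);
    for _ in 0..MAX_ATTEMPTS {
        if result.len() % 2 == 0 {
            break;
        }
        error *= 0.5;
        minimum_separation *= 0.5;
        result = find_t_values(error, minimum_separation);
    }
    result
}
```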
> But before doing that, I have a feeling that `minimum_separation` is actually useless for this use case, so I want to first try removing those filters.

Removing `minimum_separation` was a bad idea.
> Do you think it might be possible to perturb the error and minimum_separation repeatedly until the number of intersections turns out even?
I now realize that sometimes two or more t-values may get skipped from different locations along the curve, so the result is wrong but still has an even number of points. So I doubt we can completely rely on this method.
@Keavon So currently I have kept the filters the same. The only changes I made were that it now calculates both intersection t-values in pairs, and I also changed when the `minimum_separation` filtering takes place. Doing this after combining all points from all curves of the subpath seems to give better results.
Oh, and don't mind the output of Area in the Bezier web demo. I did that for debugging purposes.
Ok, sounds great. Is there anything else you think is needed before this merges? There's also the aspect of adding nodes to expose these features, but there are probably several other bezier-rs features missing nodes as well. Maybe waiting for a separate PR to add these two, as well as others, would be a good approach (or in this PR if you're inclined; either is fine!).
> Is there anything else you think is needed before this merges?

At least for this PR, I was thinking of adding documentation for the new `polynomial.rs` file.
@Keavon After a deep investigation, the algorithm for finding intersections is finally fixed. In fact, the error and `minimum_separation` need not even be small. It now works perfectly with the default error and `minimum_separation`, and it still provides a much more stable centroid when crossing an intersection or when two points are too close to each other.
> I changed when the `minimum_separation` filtering takes place. Doing this after combining all points from all curves of the subpath seems to give better results.

Please forget the above conclusion. The real reason the intersection algorithm was not working was the following:
- The `minimum_separation` filtering helps remove repeated t-values, but this filter was not originally applied to the self-intersection of a Bezier curve; it was only applied to the intersection of two different Bezier curves.
- While calculating intersections, the endpoints of consecutive Bezier curves also come up as intersections (since they are connected, after all). To remove those, there was originally a filter that removes any t-value very near an endpoint, regardless of whether the intersecting curves are consecutive or not. I have changed it so that the check runs for consecutive curves only (sketched below).
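A sketch of the second point's rule; the helper name, parameters, and the wrap-around adjacency test are illustrative, not the actual bezier-rs code.

```rust
// Endpoints of consecutive curve segments in a closed subpath always touch,
// so near-endpoint t-values should only be discarded when the two segments
// involved are actually consecutive.
fn keep_intersection(
    t_pair: [f64; 2],
    segment_a: usize,
    segment_b: usize,
    segment_count: usize,
    endpoint_tolerance: f64,
) -> bool {
    let consecutive = (segment_a + 1) % segment_count == segment_b
        || (segment_b + 1) % segment_count == segment_a;
    let near_endpoint = t_pair
        .iter()
        .any(|&t| t < endpoint_tolerance || t > 1.0 - endpoint_tolerance);
    // Discard only the spurious shared-endpoint case.
    !(consecutive && near_endpoint)
}
```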
I will work on documentation from here on.
Great investigative work! And I presume these changes will also partly apply to the other algorithms, so they'll hopefully benefit from the added stability too?
> And I presume these changes will also partly apply to the other algorithms, so they'll hopefully benefit from the added stability too?
The changes from the first point have been propagated to the other algorithms.
But the changes from the second point have not. To make those changes, I will first have to convert the functions to work with sequences of pairs of t-values instead of sequences of single t-values. Currently the function naming scheme is also all over the place: some functions return only one t-value, some return pairs of t-values, some do the filtering and some don't. I plan on fixing these in a separate PR, since I feel like this PR is already doing more than what should be done in a single PR.
~There is still a problem if two points are exactly at the same position, although getting an irregularity now requires more precision. I am thinking of a method to resolve this.~
@Keavon I am not able to reproduce this. I am starting to think that maybe I used a version from before the fix to get into this state. Anyway, if finding an irregularity requires too much work and precision, then I think we should skip this. You can test the demo now and see if you are satisfied with the current results.
The documentation is done and this PR is ready to be reviewed.