
Systematic use of coord_t conversions, enabling to change unit size

Open Piezoid opened this issue 2 years ago • 14 comments

This is a first shot at enabling unit size to be changed globally at compile time.

  • More use of user-defined literals,
  • Refactored or annotated magic numbers,
  • New constants that don't scale with the unit size: INT_EPSILON = 10 (arc precision and snapping) and INT_PRECISION_COMP = 10000 (the length of normed vectors, used to minimize rounding error during operations like rotations).
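To illustrate the distinction, here is a minimal sketch with assumed definitions — `INT10POW_PER_MM`, the `_mm` literal, and `pow10` below are modeled on the PR's description, not copied from it. Dimensioned constants scale with the unit size, while the new raw-unit constants do not:

```cpp
#include <cstdint>

using coord_t = std::int64_t;

// Assumed definition: base-10 exponent of integer units per millimeter.
constexpr int INT10POW_PER_MM = 3; // 3 => base unit is 1 um

constexpr coord_t pow10(int e) { return e <= 0 ? 1 : 10 * pow10(e - 1); }

// Scale-dependent: the literal resolves through the exponent, so changing
// INT10POW_PER_MM rescales every dimensioned constant at compile time.
constexpr coord_t operator""_mm(unsigned long long v)
{
    return static_cast<coord_t>(v) * pow10(INT10POW_PER_MM);
}

// Scale-independent: a fixed number of integer steps, whatever they measure.
constexpr coord_t INT_EPSILON = 10;
```

With INT10POW_PER_MM raised to 4, 1_mm would evaluate to 10000 while INT_EPSILON stays at 10 raw units.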

There should be no change of behavior, since the constants remain identical, with one exception: INT_PRECISION_COMP is used in some places where the constant was previously lower (see commit "scale (or not) magic numbers related to precision and snapping").

Piezoid avatar Aug 07 '22 19:08 Piezoid

I had a discussion with @casperlamboo just today about this library https://mpusz.github.io/units/ and seeing this PR from you makes me very happy. But at first glance (on my phone) I think you're trying to solve something here which could also be done with the mpusz units library (which is subject to C++23 ISO standardization already.)

That should take care of unit and prefix conversions at compile time, and I'm all in favor of adding those kinds of checks.

Are you familiar with the mpusz units library, and can you tell me if I made a correct assumption that you are trying to solve something similar here? If not, what is the difference?

Didn't look at the code yet.

Also tagged @rburema because he is probably also interested in this.

jellespijker avatar Aug 08 '22 17:08 jellespijker

@Piezoid before you spend time answering, let me first actually read the code and the intent of this PR, because my previous comment might have been a bit premature and I think you might be solving something different than what I first assumed.

I will put my phone away now and enjoy the evening first. 😉

jellespijker avatar Aug 08 '22 18:08 jellespijker

Are you familiar with the mpusz units library, and can you tell me if I made a correct assumption that you are trying to solve something similar here? If not, what is the difference?

I'm not familiar with this library. But I would say no. The objective of this PR is to make the fixed-point precision variable at compile time and have all the constants in the code change automatically. The usage of user-defined literals for units in addition to _mu (_mm, _cm) is just a convenience and is orthogonal to this issue.

mpusz' library could indeed handle that last bit about units. However, I don't know if it can work with fixed-point numbers the way CuraEngine does.

Usually, fixed-point representations divide the result of multiplications by the scaling factor. For example, representing millimeters with a scaling factor of 1000, an area computation looks like: [1.234 mm * 0.500 mm] = 1234 * 500 / 1000 = 617 = [0.617 mm²] -> the scaling factor for areas is kept at 1000. Cura doesn't use real fixed-point numbers; instead, everything is in µm and areas are in µm². This is simpler, and faster when the scaling factor is not a power of 2. However, it creates an excess of precision which can lead to overflows. This PR keeps Cura's "pseudo-fixed point" system, but enables changing the scaling factor (or base unit).
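The difference can be sketched as follows (illustrative code, not taken from CuraEngine; `SCALE`, `mul_fixed` and `mul_cura` are made-up names):

```cpp
#include <cstdint>

using coord_t = std::int64_t;
constexpr coord_t SCALE = 1000; // raw units per mm

// True fixed point: divide the product back down, so areas keep the
// same scaling factor (1000) as lengths.
constexpr coord_t mul_fixed(coord_t a, coord_t b) { return a * b / SCALE; }

// Cura-style pseudo fixed point: no division; lengths are um and areas
// are simply um^2. Faster, but the excess precision overflows sooner.
constexpr coord_t mul_cura(coord_t a, coord_t b) { return a * b; }
```

For 1.234 mm × 0.500 mm, mul_fixed(1234, 500) yields 617 (0.617 mm² at scale 1000), while mul_cura yields 617000 µm² for the same inputs.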

For example, you can increase INT10POW_PER_MM from 3 to 4 and have Cura do all the computations in tenths of micrometers (0.1 µm). Technically this could allow printing a house or aiming an ion beam deposition machine without manually scaling the input and the output back and forth. A more practical improvement is that all the constants go through the same scaling code: there is no longer the implicit assumption that 1000 means 1 mm. During testing, this makes it possible to track overflows and precision issues.
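As a rough feel for the overflow headroom involved, here is some back-of-the-envelope arithmetic only (the function names are made up, assuming a 64-bit coord_t and a base-10 units-per-mm exponent like INT10POW_PER_MM):

```cpp
#include <cmath>
#include <cstdint>
#include <limits>

// Largest representable linear coordinate, expressed in mm.
double max_length_mm(int int10pow_per_mm)
{
    const double units_per_mm = std::pow(10.0, int10pow_per_mm);
    return static_cast<double>(std::numeric_limits<std::int64_t>::max()) / units_per_mm;
}

// Largest length whose square (an area in raw units^2) still fits in 64
// bits. Relevant because Cura keeps areas in squared base units.
double max_squarable_length_mm(int int10pow_per_mm)
{
    const double units_per_mm = std::pow(10.0, int10pow_per_mm);
    return std::sqrt(static_cast<double>(std::numeric_limits<std::int64_t>::max())) / units_per_mm;
}
```

At the default exponent of 3 (µm base unit), a squared length already overflows past roughly 3 km of linear size; each extra digit of precision divides that headroom by ten.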

At the current scale factor, I have seen some cylindrical models slice with a number of beads that varies around the perimeter. Increasing the scale factor made this unevenness disappear (see the cylinder screenshot). Also, near-intersections are less of a problem with a higher scaling factor. For example, with 0.1 µm units, the model in Ultimaker/Cura#12984 can be sliced without hitting CuraEngine/src/SkeletalTrapezoidation.cpp:335: void cura::SkeletalTrapezoidation::computeSegmentCellRange(vd_t::cell_type &, cura::Point &, cura::Point &, vd_t::edge_type *&, vd_t::edge_type *&, const std::vector<Point> &, const std::vector<Segment> &): Assertion '! (v0 == to && v1 == from)' failed.

To summarize, this is about adding some flexibility for testing and future-proofing. Also, having all the dimensional constants indexed by a few commits can be useful :slightly_smiling_face:

Edit: spelling, sample image, and

I will put my phone away now and enjoy the evening first. :wink:

good evening!

Piezoid avatar Aug 08 '22 18:08 Piezoid

Unit Test Results

25 tests ±0   25 :heavy_check_mark: ±0   10s :stopwatch: -1s
 1 suites ±0    0 :zzz: ±0
 1 files  ±0    0 :x: ±0

Results for commit 4d81c520. ± Comparison against base commit 36968e30.

:recycle: This comment has been updated with latest results.

github-actions[bot] avatar Sep 01 '22 12:09 github-actions[bot]

Uh? The CI built origin/main (75c3d060), not the PR branch (4d81c52). @jellespijker

Piezoid avatar Sep 01 '22 12:09 Piezoid

Also what I see: https://github.com/Ultimaker/CuraEngine/runs/8134929036?check_suite_focus=true

Resolving deltas: 100% (21/21), done.
  From https://github.com/Ultimaker/CuraEngine
   * [new ref]         75c3d060df97e294cb586053f7f9a3e257107384 -> origin/main
Determining the checkout info
Checking out the ref
  /usr/bin/git checkout --progress --force -B main refs/remotes/origin/main
  Switched to a new branch 'main'
  branch 'main' set up to track 'origin/main'.
/usr/bin/git log -1 --format='%H'
'75c3d060df97e294cb586053f7f9a3e257107384'

rburema avatar Sep 01 '22 13:09 rburema

However, I see that it goes right with the updated workflows for the other PRs, so I'll try to open and close this like @jellespijker told me to (to restart the unit tests).

rburema avatar Sep 01 '22 13:09 rburema

I think that the issue is the pull_request_target event source:

This event runs in the context of the base of the pull request, rather than in the context of the merge commit, as the pull_request event does.

Piezoid avatar Sep 01 '22 13:09 Piezoid

Yes I think that was a remnant of an old attempt to fix it.

We only need to distinguish between PRs from forks and internal PRs in our reusable version-workflow.

I updated the unit test workflow. Just to be sure, I will close and reopen this PR.

Sorry for being our guinea pig

jellespijker avatar Sep 01 '22 13:09 jellespijker

😞 we have a failing test, but the workflow seems to work again.

Although the results aren't published

Warning: This action is running on a pull_request event for a fork repository. It cannot do anything useful like creating check runs or pull request comments. To run the action on fork repository pull requests, see https://github.com/EnricoMi/publish-unit-test-result-action/blob/v1.20/README.md#support-fork-repositories-and-dependabot-branches

jellespijker avatar Sep 01 '22 13:09 jellespijker

Sorry for being our guinea pig

No worries, I pushed because I saw some activity in the workflows and wanted to give it a try.

It fails as expected, which confirms the source commit.

Piezoid avatar Sep 01 '22 13:09 Piezoid

We actually love this PR and something like this has been on our wishlist for a while now. Let us know when it is ready to be merged.

jellespijker avatar Sep 01 '22 16:09 jellespijker

We actually love this PR and something like this has been on our wishlist for a while now. Let us know when it is ready to be merged.

@jellespijker Thanks!

Well, I think it's ready, even if I occasionally find missing units on literals when debugging my branch with increased precision. Although, I'm unsure if this PR can achieve complete exhaustiveness in a reasonable time.

I'm open to renaming suggestions for constants and functions in Coord_t.h.

Piezoid avatar Sep 07 '22 13:09 Piezoid

I ended up adding to this PR the reimplementation of the serialization of coord_t into decimal strings; I couldn't deal with the old code there (this supersedes #1584). Besides, it is faster than calling sprintf: the duration of the export phase was reduced by 40% (torture test toaster scaled at 150%, export phase duration reduced from 21.8 s to 13.1 s).
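The rewritten serializer itself isn't shown here, but the underlying idea can be sketched like this (hypothetical code, not the PR's implementation; it assumes a fixed 1000 units per mm and ignores the INT64_MIN edge case):

```cpp
#include <cstdint>
#include <string>

using coord_t = std::int64_t;

// Emit a micrometer coordinate as a millimeter decimal string using plain
// integer arithmetic, instead of formatting a double through sprintf("%f").
std::string coord_to_mm_string(coord_t c)
{
    constexpr coord_t SCALE = 1000; // raw units per mm
    std::string out;
    if (c < 0) { out += '-'; c = -c; } // (ignores the INT64_MIN edge case)
    out += std::to_string(c / SCALE);  // integral part
    const coord_t frac = c % SCALE;
    if (frac != 0)
    {
        out += '.';
        // Left-pad the fractional part to 3 digits, then strip trailing zeros.
        const std::string digits = std::to_string(frac);
        out.append(3 - digits.size(), '0');
        out += digits;
        while (out.back() == '0') out.pop_back();
    }
    return out;
}
```

Avoiding the float round-trip is where the speedup would come from: the exporter only ever needs a fixed, known number of fractional digits.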

The tests pass with an integer unit ranging from 1 nm to 0.1 mm (INT10POW_PER_MM 1 to 6). This doesn't mean that Cura is usable at these scales; it will suffer from rounding issues and overflows very quickly. For example, measuring the prime tower volume in 1 nm³ units will certainly overflow.

I'm experimenting on my dev branch with using more floating point in computations to solve this. The method is to have all irrational functions (like vSize) return a floating point value and mechanically push it down until it is converted back to a coord_t for insertion into a Polygon. Some of these changes are very extensive; for example, they led to converting the line widths to floating point inside the Beading strategies, from top to bottom. From a performance standpoint it doesn't make sense on a modern CPU to use integer computations exclusively: the integer ALU execution ports are probably saturated by address computations while the FPU ports sit mostly unused. Conversions from/to floating point can have surprising effects and need special care, but doing them intentionally reduces the number of implicit truncations.

This is very much a WIP and some things are currently broken, but it slices every "failed to slice" model from the bug tracker without Lukáš Hejl's patches (#1731, though they should definitely be merged!). It also improves the issues of wandering seams and walls transitioning back and forth on even contours (like in the screenshot above). (The published code is a linearization of my dev branches that I use for bisecting during tests; thus it contains different intents and may be rebased at any time.)

Piezoid avatar Sep 13 '22 10:09 Piezoid

No support for <span> in gcc 9. :disappointed: I guess I could use <range/v3/view/span.hpp>?

Piezoid avatar Sep 27 '22 10:09 Piezoid

I guess I could use <range/v3/view/span.hpp>?

Even if range-v3 is included in the cmake dependencies, I could not include it: fatal error: 'range/v3/view/span.hpp' file not found. This is the list of include paths passed to the compiler:

-I./include
-I./tests/../include
-isystem $HOME/.conan/data/spdlog/1.10.0/_/_/package/808f890cf60c33f93ca0487b4edb472493ec000d/include
-isystem $HOME/.conan/data/fmt/9.0.0/_/_/package/290cacd6edfcde8615e4e76e8a93b473347b0321/include
-isystem $HOME/.conan/data/arcus/5.2.0-alpha/piezo/testing/package/ce11c40a6970dfa7b6e489d3e84271fb2bc7f688/include
-isystem $HOME/.conan/data/protobuf/3.21.4/_/_/package/22d9ad5d81e60c43a5f5b504fd8e70c9bf2de2cf/include
-isystem $HOME/.conan/data/zlib/1.2.12/_/_/package/24d596ecc3e7cfef35630620092c5615473ba82a/include
-isystem $HOME/.conan/data/clipper/6.4.2/_/_/package/44b32a5143880d5efe1678c0bc6f3bd000783e6f/include

No ranges-v3 here...

I also added a commit fixing the scaling of the meshfix_maximum_extrusion_area_deviation setting, which was wrong in the constructor of Simplify. That might explain why you had to adjust profiles (it was divided by 1000).

Piezoid avatar Sep 30 '22 15:09 Piezoid

Hi @Piezoid, sorry that this PR is taking such a long time to get merged, even though we have wanted something like this for a long time. We have been pressed for time for a while now, but we are blocking our calendar for at least an hour a week as a team to collectively pick up more PRs.

I dibs this one. Devs see CURA-9775.

jellespijker avatar Oct 21 '22 08:10 jellespijker

Did you do a new conan install . --build=missing --update && conan install . --build=missing --update -s build_type=Debug again? Because range-v3 should be installed then.

jellespijker avatar Oct 21 '22 08:10 jellespijker

I would love to see the conversion functions written a bit more generically; I think there are too many specializations atm. Maybe something like this:

#include <cmath>
#include <concepts>
#include <cstddef>
#include <type_traits>

template<class T>
concept is_number = std::integral<T> || std::floating_point<T>;

static constexpr std::integral auto INTEGER_MM_FACTOR = 1000;

template<size_t dimension = 1>
constexpr std::floating_point auto to_mm(const is_number auto& number) noexcept
{
    if constexpr (dimension == 0)
    {
        return 1.;
    }
    // remove_cvref_t: decltype(number) is a const reference, which would
    // otherwise make the is_integral check always false.
    else if constexpr (std::is_integral_v<std::remove_cvref_t<decltype(number)>>)
    {
        constexpr std::floating_point auto rounding = 0.5;
        return number / std::pow(INTEGER_MM_FACTOR, dimension) + std::copysign(rounding, number);
    }
    else
    {
        return number / std::pow(INTEGER_MM_FACTOR, dimension);
    }
}

Which could then be used as:

coord_t lvalue_coordt = 12345678;
auto p_mm = to_mm<>(lvalue_coordt);
auto p_mm_rvalue = to_mm<>(12345678);
auto p_mm2 = to_mm<2>(lvalue_coordt);
auto p_mm2_rvalue = to_mm<2>(12345678);

The suggestion shows the same performance as your code, but adds more flexibility imo.

https://quick-bench.com/q/uu0jfBz57K5gVqLtAHZTbLcScoc

With the use of concepts and constraints we can easily add a specialization for points:

#include <cmath>
#include <concepts>
#include <cstddef>
#include <type_traits>

template<class T>
concept is_number = std::integral<T> || std::floating_point<T>;

// Nested `requires` clauses so the concept actually constrains the member
// types (a bare `{ is_number<...> };` only checks the expression is valid).
template<class T>
concept is_point_2d = requires(T p){
    requires is_number<decltype(p.X)>;
    requires is_number<decltype(p.Y)>;
};

template<class T>
concept is_point_3d = requires(T p){
    requires is_number<decltype(p.x)>;
    requires is_number<decltype(p.y)>;
    requires is_number<decltype(p.z)>;
};

template<class T>
concept is_point = is_point_2d<T> || is_point_3d<T>;

static constexpr std::integral auto INTEGER_MM_FACTOR = 1000;

template<size_t dimension = 1>
constexpr std::floating_point auto to_mm(const is_number auto& number)
{
    if constexpr (dimension == 0)
    {
        return 1.;
    }
    // remove_cvref_t: decltype(number) is a const reference, which would
    // otherwise make the is_integral check always false.
    else if constexpr (std::is_integral_v<std::remove_cvref_t<decltype(number)>>)
    {
        constexpr std::floating_point auto rounding = 0.5;
        return number / std::pow(INTEGER_MM_FACTOR, dimension) + std::copysign(rounding, number);
    }
    else
    {
        return number / std::pow(INTEGER_MM_FACTOR, dimension);
    }
}

template<size_t dimension = 1>
constexpr is_point auto to_mm(const is_point auto& point)
{
    // Strip the const& so the braced construction below uses a value type.
    using point_t = std::remove_cvref_t<decltype(point)>;
    if constexpr (is_point_2d<point_t>)
    {
        return point_t{ to_mm<dimension>(point.X), to_mm<dimension>(point.Y) };
    }
    else
    {
        return point_t{ to_mm<dimension>(point.x), to_mm<dimension>(point.y), to_mm<dimension>(point.z) };
    }
}

In order for that to work, however, we do need to modernize class Point3 and class Point, but you get the idea.

What is your opinion about such a change? We can help out here by opening a PR against your branch. Suggestion__Contributes_to_CURA-9775_Systematic_use_of_coord_t_conversions,_enabling_to_ch.patch.txt

jellespijker avatar Oct 21 '22 11:10 jellespijker

I would love to see the conversion functions written a bit more generically; I think there are too many specializations atm. Maybe something like this:

I'll look into that (I will probably not be available enough in the forthcoming weeks).

I've been using a generic version on my branch that adds floating point computations: Coord_t.h and IntPoint.h. It's more of a playground in constant evolution, but I could back port some of it.

Your version brings some improvement that I'll merge and back port to this PR.

Piezoid avatar Oct 21 '22 11:10 Piezoid

I will probably not be available enough in the forthcoming weeks

Are you open to accepting PR's from us to help this PR forward?

jellespijker avatar Oct 23 '22 07:10 jellespijker

I will probably not be available enough in the forthcoming weeks

Are you open to accepting PR's from us to help this PR forward?

Yes, PRs are welcome!

Edit: some minor observations on the generic code:

  • Whether the trunc->round offset is applied should depend on the return type rather than on the argument.
  • It can be useful to have a separate function for rounding floating point to coord_t (for example for use in VoronoiUtils).
  • Multiplying number * (1 / power) can be marginally faster than a double division.
  • I'd rather have a coordinate_t concept that restricts to coord_t | std::floating_point; this would catch some uses of narrower int types, which are more subject to overflows.
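Putting those observations together, a sketch under assumed names (coord_t at 1000 units per mm; none of this is code from either branch):

```cpp
#include <cmath>
#include <cstdint>

using coord_t = std::int64_t;

constexpr double UNITS_PER_MM = 1000.0;
// Reciprocal of the scaling factor: a multiplication instead of a division.
constexpr double MM_PER_UNIT = 1.0 / UNITS_PER_MM;

// Integer -> floating point mm: no rounding offset; the floating point
// return type represents the value as precisely as it can.
inline double to_mm(coord_t c) { return static_cast<double>(c) * MM_PER_UNIT; }

// Floating point mm -> integer units: the return type truncates, so this
// is where round-to-nearest belongs. std::llround rounds halfway cases
// away from zero, like the copysign(0.5, x) offset discussed above.
inline coord_t mm_to_coord(double mm) { return std::llround(mm * UNITS_PER_MM); }
```

The point is that the rounding offset lives only in the conversion whose return type is integral, while the float-returning direction stays exact.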

Piezoid avatar Oct 23 '22 11:10 Piezoid

I'm not familiar with this library. But I would say no. The objective of this PR is to make the fixed-point precision variable at compile time and have all the constants in the code change automatically. The usage of user-defined literals for units in addition to _mu (_mm, _cm) is just a convenience and is orthogonal to this issue.

mpusz' library could indeed handle that last bit about units. However, I don't know if it can work with fixed-point numbers the way CuraEngine does.

Usually, fixed-point representations divide the result of multiplications by the scaling factor. For example, representing millimeters with a scaling factor of 1000, an area computation looks like: [1.234 mm * 0.500 mm] = 1234 * 500 / 1000 = 617 = [0.617 mm²] -> the scaling factor for areas is kept at 1000. Cura doesn't use real fixed-point numbers; instead, everything is in µm and areas are in µm². This is simpler, and faster when the scaling factor is not a power of 2. However, it creates an excess of precision which can lead to overflows. This PR keeps Cura's "pseudo-fixed point" system, but enables changing the scaling factor (or base unit).

For example, you can increase INT10POW_PER_MM from 3 to 4 and have Cura do all the computations in tenths of micrometers (0.1 µm). Technically this could allow printing a house or aiming an ion beam deposition machine without manually scaling the input and the output back and forth. A more practical improvement is that all the constants go through the same scaling code: there is no longer the implicit assumption that 1000 means 1 mm. During testing, this makes it possible to track overflows and precision issues.

I personally think that the usage of mp-units would also solve this, while saving us from implementing this type handling ourselves. Especially since it should be part of the C++23 standard.

The following code is based on me playing around a bit with the definition of points and polygons: https://github.com/Ultimaker/CuraEngine/compare/main...polygonamory#

// (Aliases such as `u` and `unit` come from the linked branch.)
using base_length = units::isq::si::micrometre; // The base length unit, use a different prefix for more or less precision

template<class T>
concept Scalar = std::is_integral_v<T> || std::is_integral_v<typename T::rep>;

template<Scalar Tp, std::size_t Nm = 2>
using Point = std::array<Tp, Nm>;

// Length unit specialization of Point
template<Scalar Tp, std::size_t Nm = 2>
using Position = Point<units::isq::si::length<unit::base_length, Tp>, Nm>;

auto point_1 = Position<std::int64_t>{ 2000 * u::mm, 5 * u::km };
auto point_2 = Position<std::int64_t>{ 2 * u::m, 40 * u::um };

auto point_3 = point_1 + point_2;
EXPECT_EQ(point_3.at(0), 4 * u::m);

We discussed this PR in the team, however, and we came to the conclusion that we will first focus on systematic usage of the coord_t type throughout the code (devs see CURA-9802). This ticket is on our backlog and will be picked up one of these weeks as part of our Technical Improvements epic.

Once we use coord_t consistently throughout the code, unit-wise calculation and scaling, such as you cover in this ticket or as I suggest with the help of mp-units, will be revisited as a discussion (devs see CURA-9802). Both approaches have merit imo, and both would greatly reduce bug-prone behavior like what you're fixing in PR Ultimaker/CuraEngine#1738.

jellespijker avatar Nov 11 '22 11:11 jellespijker

We discussed this PR in the team, however, and we came to the conclusion that we will first focus on systematic usage of the coord_t type throughout the code (devs see CURA-9802).

If that can help, I have a few commits substituting s/int/coord_t/ in (I hope) the right places: d65bdad2, ead72835, 70590f56

Piezoid avatar Nov 11 '22 13:11 Piezoid

@Piezoid just want to explicitly say: You're appreciated!

thx

jellespijker avatar Nov 11 '22 18:11 jellespijker

Just to make sure you are aware: reducing Cura's internal 64-bit fixed-point numbers from 1 µm to 10 nm units DOES reduce the theoretical maximal representable distance from the entire solar system (the heliopause is ~18 billion km) to only the orbital radius of the Earth (~150 million km). So this change might make Cura unusable for some interplanetary-scale print jobs.

DerGenaue avatar Dec 18 '22 16:12 DerGenaue