libopenshot
Implement blend modes (#320)
This pull request implements blend modes as per #320.
I tested it manually and it seems to work OK. I'd like to add some automated tests, but I'd need some guidance to do it (I am not a C++ programmer).
Thanks for the great software!
@nilp0inter Thanks for submitting this! I think it's a great idea. Some initial thoughts, I'll follow up with inline comments afterwards, to address specifics.
One minor factor here is this note from the Qt 5.5 docs:
> When the paint device is a `QImage`, the image format must be set to `Format_ARGB32_Premultiplied` or `Format_ARGB32` for the composition modes to have any effect. For performance the premultiplied version is the preferred format.
Our images, notably, are not any type of `Format_ARGB32`; they're `Format_RGBA8888_Premultiplied`. Now, you say you tested this and it works (and I believe you), which is explained by the fact that in the Qt 5.15 docs, that paragraph changes to:
> Several composition modes require an alpha channel in the source or target images to have an effect. For optimal performance the image format `Format_ARGB32_Premultiplied` is preferred.
So it appears that at some point between Qt 5.5 and 5.15, they removed that limitation. Whatever that cutoff point is, AIUI these modes won't work with any older Qt version.
That's not necessarily a dealbreaker — we can always just document the limitation, but it's something we should be aware of. And probably try to figure out where the version cutoff is.
I initially went into the archived docs to check whether `QPainter` composition mode enum values are defined the same in all Qt versions. It looks like they are, in Qt 6 and as far back in Qt 5 as matters to us. (Qt 5.5 is already older than we support.) But I'll write more about that in my review comments.
@nilp0inter
Re: tests... tests would be good, for sure, if you're willing to work on them. They certainly don't have to be exhaustive, but at least a couple that confirm the basic modes are working as expected would be good for peace of mind.
(This PR may also crater coverage without them... though I don't actually think so, since all of the enum values are accessed by `add_property_choice_json()` calls. The only two lines of source you add that might not be covered by existing tests (and even they might be) are these two:
https://github.com/OpenShot/libopenshot/blob/0d1ae39c4fc1bc9d505cdfab70014e7996a9ead7/src/Clip.cpp#L1027-L1028
Anyway, as far as writing tests, the existing `tests/Clip.cpp` and `tests/ChromaKey.cpp` can be used as templates. Our unit test framework is Catch2, which is really simple to use. My 'primer' on it (which was targeted towards existing developers switching from our previous framework, but also works well enough as a general intro) is in the repo wiki.
My knee-jerk thought is that the tests should go into `tests/Clip.cpp`, since they're testing a new Clip property. However, that property is only testable in the context of a Timeline object with multiple Clips, and in order to test it you'd need to write tests that look more like the ones in `tests/Timeline.cpp`. (The argument could even be made that they should actually be Timeline tests, since the property is essentially meaningless for a single Clip object. So I can see the logic in that position, too. I wouldn't object to either test placement.)
Wherever they get put, in spirit the tests would be a hybrid of the existing Timeline and ChromaKey tests.
Actually, for starters you'd want to test any conversion method(s) you add to the Clip class, since it's easy to whip one of those off. If there are any issues with the conversion, they'll be the source of hard-to-find bugs later on. (And there you have it, the entire argument for unit-testing condensed into an easy-to-swallow pill form.)
For the blend tests, you'd start off like the `tests/Timeline.cpp` cases "two-track video" or "Effect order":

- Construct a Timeline object.
- Put some Clips on it, with overlapping Positions but on different Layers (tracks).
- Set/adjust the `Clip.blend` member for at least one Clip. Clip properties are stored as `openshot::Keyframe` objects; you fix the value at certain points on the Timeline by inserting one or more `openshot::Point(x, y)` values corresponding to `(frame number, property value)`. The value will be interpolated between those points to form a curve, which can be adjusted using additional properties on the Keyframe's fixed Points. Only a single, simple numeric value is supported for each property, represented by the `y` axis values of the Keyframe curve. (Color properties are a special case consisting of three separate Keyframe objects, one for each color channel; if alpha is supported it's a separate property.) (In fact, the Keyframe class — via Point, via Coordinate, which is the basic simple `(x, y)` pair without additional parameters — can only store `float`s. The JSON description for each property allows further constraints, like the "source" (or destination) datatype for that property's values and the min/max range of values accepted.)
- Call `Timeline::GetFrame(number)` to produce composited frame(s).
- Confirm that the results are as expected.
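In Catch2 form, those steps might be sketched roughly as follows. This is only an illustration: the `blend` Keyframe name is the hypothetical property name from this discussion, `"test.mp4"` is a placeholder media path, and the `Timeline` constructor arguments should be checked against the real headers before use.

```cpp
// Sketch of the test steps above. NOT verified against the merged API:
// `top.blend` is the hypothetical new Keyframe property, and "test.mp4"
// is a placeholder for whatever test media the suite actually uses.
#include <catch2/catch.hpp>
#include <QPainter>
#include "Clip.h"
#include "Timeline.h"

using namespace openshot;

TEST_CASE("blend property composites overlapping clips", "[libopenshot][timeline]")
{
    // 1. Construct a Timeline object (1280x720 @ 30fps)
    Timeline t(1280, 720, Fraction(30, 1), 44100, 2, LAYOUT_STEREO);

    // 2. Two Clips with overlapping Positions, on different Layers
    Clip bottom("test.mp4");
    bottom.Position(0.0);
    bottom.Layer(0);
    Clip top("test.mp4");
    top.Position(0.0);
    top.Layer(1);

    // 3. Fix the (hypothetical) blend Keyframe with a single Point at
    //    frame 1; the value holds for the whole clip via interpolation
    top.blend.AddPoint(Point(1, QPainter::CompositionMode_Multiply));

    t.AddClip(&bottom);
    t.AddClip(&top);
    t.Open();

    // 4. Produce a composited frame
    auto frame = t.GetFrame(1);

    // 5. Confirm the results (a real test would examine pixels here)
    CHECK(frame != nullptr);
    t.Close();
}
```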
I would strongly suggest constructing test data more like the ChromaKey tests, though:
- Rather than importing "live" test data, use either Frames filled with solid color, or Frames containing simple drawings. (Like a 640×360 red square centered in a transparent 1280×720 frame, that kind of thing.) It'll make the expected results of the blends predictable and simple to determine.
- Testing Clip compositing on the Timeline is a little trickier than Effect testing. You can just feed Frames directly to an Effect to modify, but that won't work on the Timeline. You can fake it with the DummyReader, though.

  See `tests/DummyReader.cpp` for examples, but basically if you create a CacheMemory object, shove Frames into it, and hand it to DummyReader, it will return those Frames when its `GetFrame()` is called.

  So if you pass that DummyReader instance to the `Clip()` constructor, it'll set the output frames for the Clip. It should be safe to use the same DummyReader with multiple Clips (if not, that's a bug to report), which would provide a good, easy way to test the basic blend features: create a few Clips with the same Frames, set different `blend` parameters, and confirm that this has the expected effect on the images produced when you call `Timeline::GetFrame()`.

  You unfortunately can't use the QtImageReader to supply still images for Clips, because I'm now noticing that the only option for constructing one is to give it the path to a disk file as a string. There's no constructor that takes a QImage, even though the first thing it does is load the supplied path into a QImage, which it stores and uses to construct its output Frames.

  IOW, not having a `QtImageReader(QImage)` constructor is dumb, and we should fix that.
- For confirming the results, definitely prefer the method used in the ChromaKey tests: use `Frame::GetImage()` to retrieve the output `QImage`, call its `QImage::pixelColor(x, y)` method to examine individual pixels, and compare the value returned to a `QColor()` of the expected value.

  The alternatives are either direct byte-array spelunking (testing each individual channel in turn), or `CheckPixel()`, which is... bad.

  There's a stream output operator definition at the top of `tests/ChromaKey.cpp` that lets Catch2 display QColor values, which becomes useful when tests fail. With it you get messages like:

  ```
  tests/ChromaKey.cpp:69: FAILED: CHECK( pix_e == expected ) with expansion: QColor(0, 204, 0, 255) == QColor(0, 192, 0, 255)
  ```

  Without it, that's:

  ```
  tests/ChromaKey.cpp:69: FAILED: CHECK( pix_e == expected ) with expansion: {?} == {?}
  ```

  (With `CheckPixel` it's almost as unhelpful:)

  ```
  tests/FFmpegReader.cpp:95: FAILED: CHECK( f->CheckPixel(10, 112, 21, 191, 84, 255, 5) == true ) with expansion: false == true
  ```
You could just copy-paste the code to where you need it, or (preferably) we can move it into a header file. Includes living in the tests directory might require a slight include-path adjustment in CMake, but I can take care of that.
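Putting the DummyReader and pixel-checking suggestions together, a blend test might be sketched like this. Treat it as a sketch under assumptions: the `blend` property name is hypothetical, and the cache-taking `DummyReader` constructor and other signatures should be confirmed against the current headers.

```cpp
// Sketch: feed identical solid-color Frames to two Clips via DummyReader,
// blend them on a Timeline, and verify one composited pixel.
// ASSUMPTIONS: `top.blend` is the hypothetical new property, and
// DummyReader's cache-taking constructor matches tests/DummyReader.cpp.
#include <memory>
#include <catch2/catch.hpp>
#include <QColor>
#include <QPainter>
#include "CacheMemory.h"
#include "Clip.h"
#include "DummyReader.h"
#include "Frame.h"
#include "Timeline.h"

using namespace openshot;

TEST_CASE("Multiply blend of red over green", "[libopenshot][clip][blend]")
{
    // Build caches of solid-color Frames for the DummyReaders to serve
    CacheMemory red_cache, green_cache;
    for (int64_t n = 1; n <= 30; n++) {
        red_cache.Add(std::make_shared<Frame>(n, 1280, 720, "#ff0000"));
        green_cache.Add(std::make_shared<Frame>(n, 1280, 720, "#00ff00"));
    }
    DummyReader red_reader(Fraction(30, 1), 1280, 720, 44100, 2, 1.0, &red_cache);
    DummyReader green_reader(Fraction(30, 1), 1280, 720, 44100, 2, 1.0, &green_cache);

    // Same Position, different Layers; blend mode only on the top Clip
    Clip bottom(&green_reader);
    bottom.Layer(0);
    Clip top(&red_reader);
    top.Layer(1);
    top.blend.AddPoint(1, QPainter::CompositionMode_Multiply);

    Timeline t(1280, 720, Fraction(30, 1), 44100, 2, LAYOUT_STEREO);
    t.AddClip(&bottom);
    t.AddClip(&top);
    t.Open();

    // Multiply of pure red and pure green should come out black
    auto img = t.GetFrame(1)->GetImage();
    CHECK(img->pixelColor(640, 360) == QColor("#000000"));
    t.Close();
}
```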
I have no idea why the CI jobs are all failing, but it's some sort of Github Actions issue. They're not even getting far enough into the process to load the repo code, never mind compile it. I'm sure it'll clear up shortly.
> I have no idea why the CI jobs are all failing
We seem to be back in business.
Codecov Report
Merging #791 (30053c9) into develop (cfca6e7) will increase coverage by 0.12%. The diff coverage is 87.87%.

:exclamation: Current head 30053c9 differs from pull request most recent head 56bd955. Consider uploading reports for the commit 56bd955 to get more accurate results.
```diff
@@            Coverage Diff             @@
##           develop     #791      +/-   ##
===========================================
+ Coverage    48.92%   49.04%   +0.12%
===========================================
  Files          184      184
  Lines        15815    15733      -82
===========================================
- Hits          7738     7717      -21
+ Misses        8077     8016      -61
```
| Impacted Files | Coverage Δ | |
|---|---|---|
| src/Clip.h | 62.50% <ø> (ø) | |
| src/FFmpegWriter.cpp | 60.69% <50.00%> (-0.71%) | :arrow_down: |
| src/Clip.cpp | 45.55% <90.00%> (+1.27%) | :arrow_up: |
| src/Timeline.cpp | 42.47% <100.00%> (-0.18%) | :arrow_down: |
| src/FFmpegReader.cpp | 68.17% <0.00%> (-5.60%) | :arrow_down: |
| src/AudioReaderSource.cpp | 0.86% <0.00%> (-1.36%) | :arrow_down: |
| src/MagickUtilities.cpp | 95.65% <0.00%> (-0.19%) | :arrow_down: |
| ... and 30 more | | |
Thanks @nilp0inter for working on this. Really look forward to seeing this added as a new feature.
Thank you @ferdnyc for the thorough review, it is really appreciated. I'll be back working on this over the weekend.
@nilp0inter Now that I've finally fixed all of the dumb typos and Qt versioning vagaries that were preventing it from passing CI, my `unit-test-tools` branch is up at PR #795. So if you do find time to work on unit tests, it takes care of both of the things I wrote about the other day regarding unit testing:
- The `QColor()` stream output operator is moved into a header file, so that any test class can `#include` it. (It also adds a `QSize` operator to go with the `QColor()` one, not that you're likely to need it.) Nothing more is required, other than `#include "test_utils.h"`, for Catch2 to display any `QColor()` and `QSize()` values used in comparisons when a test fails.
- There's now a `QtImageReader(const QImage&)` overload to the QtImageReader constructor, which provides a much simpler means of creating `Clip` objects that generate `Frame`s which have a particular, predefined `QImage` as their video component. Should make testing of compositing a lot simpler and more predictable.
(One caveat: when I say "`Frame`s which have a particular, predefined `QImage` as their video component", that's subject to the image scaling that gets applied throughout libopenshot. If you want the `Frame`s that come out of `Timeline::GetFrame()` to contain the same image you passed in to `QtImageReader()`, make sure to create that `Timeline` with the same dimensions your image has. Otherwise you'll discover that things are getting resized on you.)
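For example, setting up a still-image Clip with the new overload might look like the sketch below. It assumes the overload takes the image as its sole constructor argument, per the description above; check PR #795 for the actual signature.

```cpp
// Sketch: use the new QtImageReader(const QImage&) overload (from the
// unit-test-tools branch, PR #795) to build a test fixture. The overload's
// exact signature is assumed, not verified against the merged code.
#include <QColor>
#include <QImage>
#include "Clip.h"
#include "QtImageReader.h"
#include "Timeline.h"

using namespace openshot;

void make_blend_fixture()
{
    // A solid blue 1280x720 image as the Clip's video source
    QImage blue(1280, 720, QImage::Format_RGBA8888_Premultiplied);
    blue.fill(QColor("#0000ff"));

    QtImageReader reader(blue);
    Clip clip(&reader);

    // Match the Timeline to the image dimensions, so frames from
    // Timeline::GetFrame() contain the image unscaled
    Timeline t(1280, 720, Fraction(30, 1), 44100, 2, LAYOUT_STEREO);
    t.AddClip(&clip);
}
```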
Any update on this one? Would be a great feature!
@jeffski, thank you for the reminder. I had completely forgotten about this. I am updating my branch and working on the proposed changes right now.
@ferdnyc , I've changed the code according to your comments (thank you very much for the detailed explanation, btw) and tested it manually using openshot-qt. Everything appears to be working.
Regarding the tests, I'd like to give it a try, and will do in the following days.
Please comment if you see something wrong with the current implementation.
Merge conflicts have been detected on this PR, please resolve.