
Increased test run time from 2.5.0 to 2.5.1

Irubataru opened this issue 3 years ago · 9 comments

Rubberduck version information

Version 2.5.1.5557
OS: Microsoft Windows NT 10.0.19041.0, x64
Host Product: Microsoft Office x64
Host Version: 16.0.13029.20308
Host Executable: EXCEL.EXE

Description

For the past couple of months I have noticed that my Rubberduck installation has been running slower and slower. This kind of slowdown is generally hard to document and pin down, so I thought I'd at least provide some evidence. Initially I assumed it was simply my project growing in complexity, but I tested that hypothesis by timing my unit test runs. At the moment I have:

v2.5.0: Runtime 2:06 - 2:12
v2.5.1: Runtime 3:26 - 3:48

I haven't been able to find a good way to measure the other slowdowns I am experiencing, but this is a place to start. I'll see if I can create an MWE at some point. The other symptoms are mostly the UI being slow; when it hangs, it hangs badly enough that Windows pushes the window to the background because it considers the application unresponsive. For example: if I open the unit testing window, run a parse, wait until Task Manager reports no activity in Excel, and then click the "Run" menu item, Rubberduck freezes for about 10-15 seconds before the dropdown menu becomes active and I can click something.

Logfiles
RubberduckLog - 2.5.0.txt
RubberduckLog - 2.5.1.txt

Additional context

The project I am currently testing has 400 module and class files, 50 test modules, and 475 tests. It's about 87k lines of code.

If I export the runtimes from the unit test system, the individual tests always report running faster on 2.5.1 than on 2.5.0. From that report the new version should have been faster by 5 seconds, but it is in fact slower by over a minute, so maybe there is something in the test setup/teardown. I have also timed it with a stopwatch to verify that the timing function isn't wrong, and I get the same times.

Irubataru · Aug 26 '20 14:08

I'm assuming the tests are not grouped by outcome, correct?

The execution time is measured while the actual test method is running - the timer is stopped as soon as the test finishes, so any additional apparent runtime is overhead from updating the Test Explorer UI.

Thinking perhaps we should update the UI at every x% of tests executed, rather than after every test.
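A minimal sketch of that mechanism (hypothetical names only, not Rubberduck's actual TestEngine code) showing why the exported per-test durations can sum to less than the wall-clock run: only the test body sits inside the timed region, while the per-test UI refresh happens outside it.

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

// Hypothetical result type for the sketch.
public sealed record TestResult(string Name, TimeSpan Duration);

public static class TestRunnerSketch
{
    // `updateUi` stands in for whatever refreshes the Test Explorer per result.
    public static void Run(IEnumerable<(string Name, Action Body)> tests,
                           Action<TestResult> updateUi)
    {
        foreach (var (name, body) in tests)
        {
            var timer = Stopwatch.StartNew();
            body();      // only this region counts toward the reported runtime
            timer.Stop();

            // Real wall-clock time the user waits for, but invisible in the
            // exported per-test durations.
            updateUi(new TestResult(name, timer.Elapsed));
        }
    }
}
```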

retailcoder · Aug 26 '20 16:08

Yes, they are grouped by location in all cases.

In general though, the Rubberduck UI seems to slow down significantly with project size. For some things that is understandable (there is more to render), but there are also cases, such as the example of clicking the "Run" button, where I don't really see why that would be the case. I assume these things would improve immensely if (and when) you spin the language server out into a separate process. I love Rubberduck to death, but I have to admit that it is a bit frustrating to develop in a somewhat sluggish environment, and it would be great if it were possible to identify why it slows down in scenarios where one wouldn't (naively) expect it to.

Irubataru · Aug 26 '20 16:08

In the new version I also have issues where Excel crashes while running unit tests. The general symptom is that the test explorer is running but the UI isn't updating, and after a while Excel crashes. It mostly happens if things have been running for a while; if I reopen Excel and run the tests immediately, it hasn't crashed yet. I experience Excel crashes once or twice a day, but since I have no way of knowing whether that is due to Excel or Rubberduck I haven't really mentioned it here (the logs don't say anything either). However, since I started using the new version I get crashes while it runs tests, so there might be at least some connection. I'm not sure; Excel isn't exactly the world's most stable program. Do you know if there is any way to check what type of error occurred? For instance, maybe it's running out of memory.
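One general way to check, sketched below under the assumption of .NET Framework on Windows (this is a Windows facility, not something Rubberduck exposes): process crashes are recorded in the Application event log under the source "Application Error" (event ID 1000), including the faulting module and the exception code, which distinguishes e.g. an access violation from an out-of-memory condition.

```csharp
using System;
using System.Diagnostics;

public static class CrashLogPeek
{
    public static void Main()
    {
        // "Application Error" entries describe process crashes; filter for Excel.
        var log = new EventLog("Application");
        foreach (EventLogEntry entry in log.Entries)
        {
            if (entry.Source == "Application Error" &&
                entry.Message.Contains("EXCEL.EXE"))
            {
                Console.WriteLine($"{entry.TimeGenerated}: {entry.Message}");
            }
        }
    }
}
```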

Irubataru · Aug 27 '20 10:08

> Yes, they are grouped by location in all cases.

For what it's worth, I noticed tests only run smoothly when grouped by category, not by outcome or by location. My 60 tests run in 1 sec when grouped by category, 18 sec when grouped by location, and 39 sec when grouped by outcome.

gicif · Aug 27 '20 12:08

So I have run a couple more tests, two more in each version to be precise. Once I grouped by category, and once I grouped by location but filtered out all outcomes, meaning that no tests would show in the test explorer either before or after the run. Here is what I found (the runtimes are not directly comparable to what I first reported, because I changed the code a bit and that probably also changed the times a bit):

v2.5.0 (by category): Runtime 3:32
v2.5.1 (by category): Runtime 10:40

v2.5.0 (filtered): Runtime 2:18
v2.5.1 (filtered): Runtime 2:17

So it seems (probably as you expected) that it is the UI updating that is the culprit.

Btw, I made a gif of what it looks like when I click the Run button in the test explorer after it is done parsing (and I have waited for Task Manager to show 0 activity): RunLag (gif attached)

Irubataru · Aug 27 '20 14:08

Yeah, it sounds like we need to take a hard look at the TestExplorer updating its UI and the TestEngine dispatching results... The UI updates should not affect the measured test runtime in any significant way. There might be shenanigans to be had with batching updates, but that sounds like a lot of trouble ...

Vogel612 · Aug 27 '20 14:08

Just as a further tip, if you set a filter for the tests that filters away all tests, the tests run the fastest. Afterwards, you can remove the filter and see the results.

This also indicates that we really should batch updates to the UI.
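A minimal sketch of that batching idea, reusing the hypothetical names from the earlier sketch (batchSize is an assumed knob, not an existing Rubberduck setting): buffer the results and push them to the UI every batchSize tests, plus once at the end, so the UI refresh cost is paid far less often.

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

// Hypothetical result type, as in the earlier sketch.
public sealed record TestResult(string Name, TimeSpan Duration);

public static class BatchedDispatchSketch
{
    public static void Run(IEnumerable<(string Name, Action Body)> tests,
                           Action<IReadOnlyList<TestResult>> updateUi,
                           int batchSize = 25)
    {
        var pending = new List<TestResult>();
        foreach (var (name, body) in tests)
        {
            var timer = Stopwatch.StartNew();
            body();
            timer.Stop();
            pending.Add(new TestResult(name, timer.Elapsed));

            if (pending.Count >= batchSize)
            {
                // One UI refresh for the whole batch; pass a copy so the
                // buffer can be cleared and reused safely.
                updateUi(pending.ToArray());
                pending.Clear();
            }
        }
        if (pending.Count > 0) { updateUi(pending.ToArray()); } // flush the rest
    }
}
```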

MDoerner · Aug 27 '20 14:08

> Just as a further tip, if you set a filter for the tests that filters away all tests, the tests run the fastest. Afterwards, you can remove the filter and see the results.

That's what I did for the second of the new tests (the ones marked filtered). But yeah, that is what I will start doing ^^

Irubataru · Aug 27 '20 14:08

> I'm assuming the tests are not grouped by outcome, correct?
>
> The execution time is measured while the actual test method is running - the timer is stopped as soon as the test finishes, so any additional apparent runtime is overhead from updating the Test Explorer UI.
>
> Thinking perhaps we should update the UI at every x% of tests executed, rather than after every test.

The testing GUI is horrendously inefficient and slows down quickly with the number of tests. I would like to support the idea of adding a setting for the number of tests between GUI updates.

For example, 175 fairly basic tests take 3.5-4 min with GUI updates vs. 2 sec with the GUI disabled. Attached is my test file: RD Testing Testing.xls

pchemguy · Oct 23 '21 14:10