feat(output plugin): Add new output plugin to support Apache IoTDB
- [x] Updated associated README.md.
- [x] Wrote appropriate unit tests.
- [x] Pull request title or commits are in conventional commit format
resolves #
Added an output plugin to support Apache IoTDB. Modified "go.mod" to add the external dependency iotdb-client-go. Added a README for the Apache IoTDB output plugin in English and Chinese. Information about the relevant database: Apache IoTDB GitHub, Apache IoTDB client for Golang.
By the way, the unit tests of the IoTDB output plugin require an externally running database, on 'localhost:6667' by default.
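For clarity, here is a minimal sketch of how such a test could guard against a missing local instance, using Telegraf's usual skip-in-short-mode pattern; the test body is only a placeholder:

```go
package iotdb

import "testing"

// TestIntegrationInserts expects an IoTDB instance on localhost:6667,
// so it is skipped when only the short (unit) tests are requested.
func TestIntegrationInserts(t *testing.T) {
	if testing.Short() {
		t.Skip("Skipping integration test in short mode")
	}
	// ... connect to localhost:6667 and verify the inserts ...
}
```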
Related issue #11561
Thank you @citrusreticulata for the nice and fast update! This is a huge improvement already. I have some more comments and suggestions, but I feel we are coming closer with the PR. Please also convert your integration test cases to use containers instead of relying on IoTDB being installed. Also switch to table-based tests (see comment) and clearly separate them into integration tests and short tests.
Thank you for your appreciation and quick reply! I will make the modifications suggested above.
I don't know why, but ALL integration tests failed on my local machine.
Not just for the iotdb output plugin:

But also for other output plugins. For example, mongodb:

They're all stuck at Start().
This makes it difficult for me to do local debugging.
I don't know why testcontainers didn't work.
The CI error log says:
```text
2022/08/02 05:51:17 Starting container id: e50fcba886ae image: testcontainers/ryuk:0.3.3
2022/08/02 05:51:18 Waiting for container id e50fcba886ae image: testcontainers/ryuk:0.3.3
2022/08/02 05:51:18 Container is ready id: e50fcba886ae image: testcontainers/ryuk:0.3.3
2022/08/02 05:51:28 Starting container id: 49995306a977 image: apache/iotdb:0.13.0-node
2022/08/02 05:51:29 Waiting for container id 49995306a977 image: apache/iotdb:0.13.0-node
--- FAIL: TestIntegrationInserts (71.85s)
    iotdb_test.go:275:
        Error Trace: /home/circleci/project/plugins/outputs/iotdb/iotdb_test.go:275
        Error:       Received unexpected error:
                     container failed to start: failed to start container: context deadline exceeded
        Test:        TestIntegrationInserts
        Messages:    failed to start IoTDB container
FAIL
FAIL    github.com/influxdata/telegraf/plugins/outputs/iotdb    71.911s
```
So I reused the original test method, that is, assuming there is a running instance locally, just like clickhouse, opentsdb, or other plugins that don't use testcontainers.
I know it's not CI-friendly and testing is a little more troublesome than with testcontainers, but I really can't resolve the abnormal testcontainers behavior on my local machines (see last comment).
For the testcontainers: the first time you use it, it will take some time as the system needs to download the Docker container. I tested locally, and reverting 27806bc7d179ca3604e19facbea52ff8a7fc9adb plus commenting out the wait.ForLog statement (line 271) makes the test pass.
The problem is probably that you copied the testcontainers setup from somewhere else. This setup waits both for the port to become available (wait.ForListeningPort(...)) and for a log message containing "Waiting for connections" (wait.ForLog(...)). That message is not emitted by the IoTDB container (I think it's from postgres), and thus the test waits forever...
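A minimal sketch of a container setup that waits only on the listening port, using testcontainers-go directly (the image tag and port come from the CI log above; the helper name and everything else are assumptions, not the plugin's actual test code):

```go
package iotdb

import (
	"context"
	"testing"

	"github.com/stretchr/testify/require"
	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
)

// startIoTDBContainer is a hypothetical helper that starts an IoTDB container
// and waits only for port 6667 to accept connections.
func startIoTDBContainer(t *testing.T) testcontainers.Container {
	ctx := context.Background()
	req := testcontainers.ContainerRequest{
		Image:        "apache/iotdb:0.13.0-node",
		ExposedPorts: []string{"6667/tcp"},
		// Wait for the port only; IoTDB never prints "Waiting for connections",
		// so a wait.ForLog on that string would block until the context deadline.
		WaitingFor: wait.ForListeningPort("6667/tcp"),
	}
	container, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: req,
		Started:          true,
	})
	require.NoError(t, err, "failed to start IoTDB container")
	return container
}
```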
This is really exciting news. I'll try it. Thank you very much.
For reference, here are my pimped and extended tests... iotdb_test.go.txt
Thanks for your test cases! I'm working on this and I will add some of what you uploaded. And I will use an anonymous function to keep creating a new client short, like this:
```go
{ // TestCase 01, normal config
    name: "01 test normal config",
    cli:  func() *IoTDB { s := newIoTDB(); return s }(),
    // ...... Other parts are omitted
},
```
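For context, here is a sketch of how such an entry could sit inside a table-driven test loop; the test name, the expectError field, and the Init() call are assumptions for illustration, not the actual test file:

```go
package iotdb

import (
	"testing"

	"github.com/stretchr/testify/require"
)

func TestClientConfig(t *testing.T) {
	tests := []struct {
		name        string
		cli         *IoTDB
		expectError bool
	}{
		{ // TestCase 01, normal config
			name: "01 test normal config",
			cli:  func() *IoTDB { s := newIoTDB(); return s }(),
		},
		// ... other cases are omitted ...
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			err := tt.cli.Init() // hypothetical validation step on the configured client
			if tt.expectError {
				require.Error(t, err)
			} else {
				require.NoError(t, err)
			}
		})
	}
}
```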
Can you please start from my test file and add what you think is necessary? The way I structured the test makes it easily extensible and easy to find issues.
After the last commit, the code structure is nearly the same as yours. Here are the differences:
- I use an anonymous function to call `newIoTDB()` instead of creating an instance directly, which is shorter and easier to compare. Therefore, I didn't use variables like `uintConversion` or `timestampPrecision` to specify the specific configuration of the client.
- As I said in this comment, `TestMetricConversion()` tests two kinds of conversion. Therefore, the bool variable `needModify` is retained, but its name is `doTestModify` now. That means some test cases are relevant to modify testing and others are not; just setting `TagsList` to nil can't completely solve this problem. But your code structure has been integrated, making the code more readable and concise. Thanks a lot for that.
- The current test cases make full use of yours; more situations and configurations are covered. Thanks a lot for that.
> I use an anonymous function to call `newIoTDB()` instead of creating an instance directly, which is shorter and easier to compare. Therefore, I didn't use variables like `uintConversion` or `timestampPrecision` to specify the specific configuration of the client.
Yes, but it means that the tests will change as soon as `newIoTDB` changes. Please add at least one test with a fixed setting!
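For illustration, a fixed-setting case could pin the configuration with a struct literal instead of going through newIoTDB(); the field names and values below are assumptions about the plugin struct, not its confirmed definition:

```go
{ // a case whose configuration will not drift when newIoTDB() defaults change
    name: "fixed setting",
    cli: &IoTDB{
        // Assumed field names and values; adjust to the real plugin struct.
        Host:               "localhost",
        Port:               "6667",
        ConvertUint64To:    "int64_clip",
        TimestampPrecision: "nanosecond",
    },
},
```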
> As I said in this comment, `TestMetricConversion()` tests two kinds of conversion. Therefore, the bool variable `needModify` is retained, but its name is `doTestModify` now. That means some test cases are relevant to modify testing and others are not; just setting `TagsList` to nil can't completely solve this problem. But your code structure has been integrated, making the code more readable and concise. Thanks a lot for that.
It doesn't matter whether they are relevant or not: your code calls `modifyRecordsWithTags()` unconditionally in `writeRecordsWithTags()`, therefore please reproduce this in the tests! You want to test your actual functionality, not some arbitrary cases.
I designed it this way to do unit testing. IoTDB is likely to support other tag-processing methods in the future, in which case `modifyRecordsWithTags()` will show different behavior.
Dividing the tests into two parts via the variable `doTestModify` makes sure the programmer focuses on one function at a time.
In fact, they test different functions separately.
If there are x cases for `convertMetricsToRecordsWithTags()` and y cases for `modifyRecordsWithTags()`, you have to use x*y cases to cover them without unit testing, but x+y cases are enough with unit testing.
So maybe dividing this group of tests into two functions is better for understanding... I'd better do this.
I have solved the readability problem with the last test function. All CI tests passed. And, following the actual functionality, I divided the metric-conversion tests into two unit tests, which is reasonable and matches the actual handling process. I think it's time for the final review.
Thank you @srebhan very much for your support and help these days. We have had nearly 100 conversations and gradually improved the code to meet the standards and become excellent, which is inseparable from your help. I am very grateful for this! I also learned a lot from these reviews.
> I designed it this way to do unit testing. IoTDB is likely to support other tag-processing methods in the future, in which case `modifyRecordsWithTags()` will show different behavior.
That is exactly the case where we want failing tests! If the behavior changes, we want to know, as this might break existing use cases. That's my point: your processing will always run both, so your tests (at least some of them) should also test this!
All right. Tests covering the entire conversion, plus failure test cases, will be added in the next commit.
And one merge conflict to be resolved...
Emm... I think the "go.sum" conflict is not for me to solve. I ran go mod tidy, so "go.sum" is generated automatically; I think I'd better not edit it manually. Maybe it's because some other PR didn't run go mod tidy or added more dependencies.
The conflict is that there is a line in the master branch that does not exist in this branch:
github.com/stretchr/testify v1.7.2/go.mod h1:R6va5+xMeoiuVRoj+gSkQ7d3FALtqAAGI1FQKckRals=
I think it's because someone forgot to run go mod tidy or more dependencies were added.
But in either case, I think I'd better not modify this file manually, because it's generated automatically. If I delete the line, some plugins in master may break, but if I keep it, the CI tests raise errors (I tried a few minutes ago).
And when I run go mod tidy on the master branch locally, this line is not removed...
To be honest, I don't know how to deal with this conflict.
Problem solved!
I merged upstream master into the apache-iotdb branch, then ran go mod tidy and make docs.
The conflict is resolved, and the PR is ready to merge now.
Thanks a lot for the team's help and support!
Download PR build artifacts for linux_amd64.tar.gz, darwin_amd64.tar.gz, and windows_amd64.zip. Downloads for additional architectures and packages are available below.
:warning: This pull request increases the Telegraf binary size by 1.06 % for linux amd64 (new size: 150.0 MB, nightly size 148.5 MB)