Optimized Privilege Evaluation [DRAFT]
Description
This is a preview of a possible implementation for #3870: Optimized Privilege Evaluation.
The major purpose is to give an initial impression on the approach and to facilitate review. However, more implementation will be necessary.
- Category: Enhancement
- Why these changes are required?
Performance tests indicate that the OpenSearch security layer adds a noticeable overhead to the indexing throughput of an OpenSearch cluster. The overhead may vary depending on the number of indices, the use of aliases, the number of roles and the size of the user object. The goal of these changes is to improve privilege evaluation performance and to make it less dependent on the number of indices, etc.
- What is the old behavior before changes and new behavior after changes?
The main behavior will not change. However, I would like to discuss whether this opportunity can be used to get rid of special behaviors which seem to be obscure and mostly useless.
Issues Resolved
- #3870
Testing
- At the moment, there is a unit test at org.opensearch.security.privileges.ActionPrivilegesTest
- However, as this is a draft, it is likely that some of the existing integration tests will fail.
Check List
- [ ] New functionality includes testing
- [ ] New functionality has been documented
- [x] Commits are signed per the DCO using --signoff
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license. For more information on following Developer Certificate of Origin and signing off your commits, please check here.
Please have a look at the approaches. I would be very interested in your opinions.
As mentioned above, the implementation is not complete yet. The code contains a couple of TODO comments to indicate what work needs to be done.
I would also like to discuss whether a couple of things are really necessary or whether there might be a chance to simplify the implementation by abolishing them.
These are:
- At the moment, only `RoleV7` and `ActionGroupV7` configurations are supported by the class `ActionPrivileges`. I am wondering whether there are any plans to get rid of the `*V6` configurations at some point in time. The V6 suffix still stems from ODFE support for Elasticsearch 6. Are there really still OpenSearch users on this configuration? Additionally, the use of two impl classes per config type makes many unsafe casts of the generic `SecurityDynamicConfiguration` class necessary (see the sketch after this list). If there were only one impl class per config type, the APIs could be designed in a much safer way.
- In config.yml, the semantics of roles.yml can be changed by setting `multi_rolespan_enabled` to `false`. The OpenSearch docs do not mention this flag. In my perception, there is no real use for this setting except maintaining backwards compatibility. However, for OpenSearch, the default has always been `true` since its inception. Are there really users who have it set to `false`?
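To illustrate the first point, here is a rough sketch of the kind of unchecked cast that having two impl classes per config type forces on API consumers. This is a hypothetical, simplified helper, not code from this PR, and the accessor name used for the runtime check is an assumption:

```java
import org.opensearch.security.securityconf.impl.SecurityDynamicConfiguration;
import org.opensearch.security.securityconf.impl.v7.RoleV7;

// Hypothetical illustration: because a SecurityDynamicConfiguration<?> may carry either V6 or V7
// entries, consumers have to narrow it with an unchecked cast that the compiler cannot verify.
// With a single impl class per config type, APIs could be typed as
// SecurityDynamicConfiguration<RoleV7> from the start.
final class ConfigCasts {
    @SuppressWarnings("unchecked")
    static SecurityDynamicConfiguration<RoleV7> asRoleV7(SecurityDynamicConfiguration<?> config) {
        // Assumed runtime check; the actual accessor may differ
        if (!RoleV7.class.equals(config.getImplementingClass())) {
            throw new IllegalArgumentException("Expected a RoleV7 configuration, got " + config.getImplementingClass());
        }
        return (SecurityDynamicConfiguration<RoleV7>) config;
    }
}
```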
I have also started to work on the micro benchmarks as discussed in #3903. The generally accepted standard for micro benchmarks in Java is the JMH framework. However, this is licensed as GPL v2 with classpath exception: https://github.com/openjdk/jmh/blob/master/LICENSE Is the inclusion of a dependency with such a license acceptable in OpenSearch?
@cwperks Can you look into this?
We're looking into this and will get back with an answer.
> However, this is licensed as GPL v2 with classpath exception: https://github.com/openjdk/jmh/blob/master/LICENSE Is the inclusion of a dependency with such a license acceptable in OpenSearch?
The feedback we received was that the code can be used only for internal operations. Since JMH usage will be part of Open-source security, my understanding is that this is not approved.
Codecov Report
Attention: Patch coverage is 86.11247% with 284 lines in your changes missing coverage. Please review.
Project coverage is 71.41%. Comparing base (1c898dc) to head (3064614). Report is 79 commits behind head on main.
Additional details and impacted files
@@ Coverage Diff @@
## main #4380 +/- ##
==========================================
+ Coverage 69.90% 71.41% +1.51%
==========================================
Files 320 334 +14
Lines 21688 22491 +803
Branches 3460 3580 +120
==========================================
+ Hits 15160 16062 +902
+ Misses 4734 4637 -97
+ Partials 1794 1792 -2
| Files with missing lines | Coverage Δ | |
|---|---|---|
| ...ty/configuration/ConfigurationLoaderSecurity7.java | 70.24% <100.00%> (+0.24%) | :arrow_up: |
| ...ecurity/configuration/ConfigurationRepository.java | 76.51% <100.00%> (-3.41%) | :arrow_down: |
| ...urity/configuration/PrivilegesInterceptorImpl.java | 59.77% <100.00%> (+2.45%) | :arrow_up: |
| ...va/org/opensearch/security/configuration/Salt.java | 100.00% <ø> (ø) | |
| ...rity/configuration/SystemIndexSearcherWrapper.java | 91.52% <100.00%> (+0.14%) | :arrow_up: |
| ...nsearch/security/dlic/rest/api/RolesApiAction.java | 95.83% <100.00%> (-2.21%) | :arrow_down: |
| ...org/opensearch/security/filter/SecurityFilter.java | 65.87% <100.00%> (ø) | |
| ...rity/privileges/ExpressionEvaluationException.java | 100.00% <100.00%> (ø) | |
| ...ch/security/privileges/PitPrivilegesEvaluator.java | 96.15% <100.00%> (-0.15%) | :arrow_down: |
| ...es/PrivilegesConfigurationValidationException.java | 100.00% <100.00%> (ø) | |
| ... and 40 more | | |
@cwperks @peternied @DarshitChanpura @scrawfor99
Just FYI:
I worked a bit on the micro benchmarking part of this issue. As JMH was out due to its license, I reviewed other frameworks. It is remarkable that in most cases the descriptions of these frameworks say "rather use JMH instead of this framework".
Anyway, I tried out https://github.com/noconnor/JUnitPerf because the idea of using JUnit infrastructure seemed to be nice. The big downside of JUnitPerf is that it does not work well together with parameterized JUnit tests.
See here for an example:
https://github.com/opensearch-project/security/blob/004df3bbdc69514f0c95acd2a1653a01e71758b9/src/performanceTest/java/org/opensearch/security/privileges/PrivilegesEvaluatorPerformanceTest.java
The high number of very similar methods is caused by the lack of parameter support - in the end, we need to test quite a few different dimensions (like number of indices, number of roles, etc.) on the same operation.
As I was really keen on getting some broader results, I went down the "roll your own" path and quickly threw together some naive micro benchmarking code. This is just a temporary thing, and thus very messy, but it gives me some numbers. See here:
https://github.com/opensearch-project/security/blob/004df3bbdc69514f0c95acd2a1653a01e71758b9/src/performanceTest/java/org/opensearch/security/privileges/PrivilegesEvaluatorPeformanceTest2.java
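For context, the hand-rolled approach essentially boils down to a warm-up phase followed by timed iterations. The following is a minimal, self-contained sketch of that pattern; the class name, iteration counts, and placeholder operation are illustrative and not taken from the actual test code:

```java
// Minimal hand-rolled micro benchmark loop: warm up the JIT first, then measure throughput
// over a fixed number of timed iterations. Illustrative only; the real performance test in the
// PR is considerably more elaborate.
final class NaiveBenchmark {
    static double operationsPerSecond(Runnable operation, int warmupIterations, int measuredIterations) {
        for (int i = 0; i < warmupIterations; i++) {
            operation.run();
        }
        long start = System.nanoTime();
        for (int i = 0; i < measuredIterations; i++) {
            operation.run();
        }
        long elapsedNanos = System.nanoTime() - start;
        return measuredIterations / (elapsedNanos / 1_000_000_000.0);
    }

    public static void main(String[] args) {
        // Placeholder operation standing in for something like privilegesEvaluator.evaluate(...)
        double ops = operationsPerSecond(() -> Math.sqrt(42), 100_000, 1_000_000);
        System.out.println("Throughput: " + ops + " ops/s");
    }
}
```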
So, I ran some tests and here are some preliminary results.
Micro benchmark test results
Disclaimer
Generally, the real world meaningfulness of micro benchmarks is limited. On a full real cluster, this can look totally different due to:
- Proportion of effects to other time consuming operations
- Effects caused by garbage collection, thread synchronization or JIT
- Different hardware which can sustain constant CPU load much better than the consumer system I used to run the benchmarks
On the other hand, micro benchmarks make some tests so much easier. For micro benchmarking, a `Metadata` instance with 100000 indices can be mocked within a few seconds. In contrast, creating that many indices on a real cluster would take much, much longer.
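For illustration, building such a `Metadata` instance in memory looks roughly like the following sketch. It uses the standard OpenSearch builder APIs; the helper class, index names, and settings are placeholders and not code from this PR:

```java
import org.opensearch.Version;
import org.opensearch.cluster.metadata.IndexMetadata;
import org.opensearch.cluster.metadata.Metadata;
import org.opensearch.common.settings.Settings;

// Sketch (assumed test-helper code): build a Metadata instance with a large number of indices
// directly in memory. This takes seconds, whereas creating that many indices on a real cluster
// would take far longer.
final class TestMetadata {
    static Metadata withIndices(int numberOfIndices) {
        Metadata.Builder builder = Metadata.builder();
        for (int i = 0; i < numberOfIndices; i++) {
            builder.put(
                IndexMetadata.builder("index_" + i)
                    .settings(Settings.builder().put(IndexMetadata.SETTING_VERSION_CREATED, Version.CURRENT))
                    .numberOfShards(1)
                    .numberOfReplicas(0)
            );
        }
        return builder.build();
    }
}
```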
Full cluster benchmarks are also coming up, but these are still in the works.
Scope
The micro benchmarks were applied to the following code:
https://github.com/opensearch-project/security/blob/004df3bbdc69514f0c95acd2a1653a01e71758b9/src/performanceTest/java/org/opensearch/security/privileges/PrivilegesEvaluatorPeformanceTest2.java#L501-L512
For comparison, we also applied the micro benchmarks to the following code on the old code base:
https://github.com/nibix/security/blob/300d138578ef853071d649d647335d8430320f14/src/performanceTest/java/org/opensearch/security/privileges/PrivilegesEvaluatorPeformanceTest2.java#L502-L510
Due to refactorings, the code looks different. However, what happens under the hood is effectively the same.
Additionally, some further code changes were necessary to make `PrivilegesEvaluator` independent of dependencies like `ClusterService` in order to make it truly unit-testable and benchmarkable. I first tried to use Mockito to mock `ClusterService` instances but had to learn that the performance characteristics of Mockito are so bad that it is unsuitable for micro benchmarking.
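As a rough illustration of that decoupling idea (the class and method names below are made up for this sketch and are not the actual signatures in this PR): instead of holding on to a `ClusterService`, the evaluator only receives a supplier of the current cluster state, which a benchmark can satisfy with a pre-built `ClusterState`.

```java
import java.util.function.Supplier;
import org.opensearch.cluster.ClusterState;

// Sketch of the decoupling idea (hypothetical names): instead of requiring a full ClusterService,
// which is heavy to construct and slow to mock, the evaluator only receives a Supplier of the
// current cluster state. A benchmark can back this with a pre-built, in-memory ClusterState.
class ExamplePrivilegesEvaluator {
    private final Supplier<ClusterState> clusterStateSupplier;

    ExamplePrivilegesEvaluator(Supplier<ClusterState> clusterStateSupplier) {
        this.clusterStateSupplier = clusterStateSupplier;
    }

    boolean isIndexKnown(String index) {
        // Placeholder logic standing in for the actual privilege checks against the index metadata
        return clusterStateSupplier.get().metadata().hasIndex(index);
    }
}
```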
As we only look at the evaluate() method, DLS/FLS evaluation is disabled for this scope.
Tested dimensions
Action requests
We tested privilege evaluation with three different actions (a sketch of how such requests can be constructed follows after this list):
- `indices:data/write/bulk[s]` with `BulkShardRequest`
  - with 10 bulk items
  - with 1000 bulk items
- `indices:data/write/bulk` with `BulkRequest`
- `indices:data/read/search` with `SearchRequest`
  - with an index pattern that matches 2% of all indices (randomized)
  - with an index pattern that matches 20% of all indices (randomized)
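For reference, requests of the kinds listed above can be constructed roughly like this in a test. The index names are examples, and `BulkShardRequest` instances are not built directly; OpenSearch creates them internally when it splits a `BulkRequest` into per-shard requests:

```java
import java.util.Map;
import org.opensearch.action.bulk.BulkRequest;
import org.opensearch.action.index.IndexRequest;
import org.opensearch.action.search.SearchRequest;

// Sketch of the request types exercised by the benchmark (index names are examples).
final class BenchmarkRequests {
    static BulkRequest bulkRequest(int items) {
        BulkRequest request = new BulkRequest();
        for (int i = 0; i < items; i++) {
            request.add(new IndexRequest("index_a0").source(Map.of("field", "value-" + i)));
        }
        return request;
    }

    static SearchRequest searchRequest(String indexPattern) {
        // e.g. "index_a*" to match a subset of the indices on the cluster
        return new SearchRequest(indexPattern);
    }
}
```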
Number of indices on cluster
We tested with these indices:
- 10 indices: `index_a0`, `index_a1`, `index_b0`, `index_b1`, `index_c0`, `index_c1`, ..., `index_e0`, `index_e1`
- 30 indices: `index_a0`, ..., `index_a5`, ..., `index_e0`, ..., `index_e5`
- 100 indices: `index_a0`, ..., `index_a19`, ..., `index_e0`, ..., `index_e19`
- 300 indices
- 1000 indices
- 3000 indices
- 10000 indices
- 30000 indices
- 100000 indices
Different user configurations
- A user with full privileges (using `*` for `index_permissions` and `cluster_permissions`)
- A user with a single role giving CRUD permissions on `index_a*` and `index_b*`
- A user with 20 roles giving CRUD permissions individually on `index_a0`, `index_a1`, ...
- A user with 40 roles in total: 20 roles giving READ permissions individually on `index_a0`, `index_a1`, ... and 20 more roles giving WRITE permissions on the same indices
- A user with a single role which uses a regex index pattern with a user attribute. This is interesting because it makes certain pre-computations impossible and requires re-evaluating the index patterns for each request.
Results
The raw result data can be found here: https://docs.google.com/spreadsheets/d/1Hd6pZFICTeplXIun3UpEplANAwQDE0jXbqLnnJz61AI/edit?usp=sharing
In the charts below, dashed lines indicate the performance of the old privilege evaluation code on a particular combination of test dimensions. Solid lines with the same color indicate the performance of the new code with the same test dimensions. The x-axis represents the number of indices on the cluster; the y-axis represents the throughput in operations per second.
bulk[s], BulkShardRequest
The performance of BulkShardRequests is the most interesting factor on clusters doing heavy ingestion. A single bulk request will be broken down by individual indices and shards, resulting in quite a few BulkShardRequests for which privilege evaluation needs to be done in parallel; thus, performance characteristics here have a high impact.
The privilege evaluation for the top level BulkRequest is less interesting because it is just an index-independent cluster privilege, which is easy to evaluate. Still, we will also review this below.
Requests with 10 items
Requests with 1000 items
Observation
The performance of the old code degrades with the increasing number of indices. Starting with 30000 indices, we have a method call latency which is > 10 ms. This is where users on ingestion heavy clusters often start to experience performance issues and the method calls start to show up in the hot thread dumps.
In contrast, the throughput of the new code stays constant, independent of the number of indices. It can be seen that the number of roles still has quite an effect on the throughput. But here we talk about time differences below 0.1 ms, which should not be significant in a real world cluster.
bulk, BulkRequest
The top level bulk action is a cluster action, so it does not require considering the indices on a cluster.
Observation
As expected, performance is independent of number of indices, both on the new implementation and on the old implementation. However, the new implementation improves throughput by a factor between 2 and 3.
search, SearchRequest
Search operations become interesting when there are monitoring/alerting solutions issuing search requests on broad index patterns in short time intervals.
Search with search patterns that match 2% of the indices
Search with search patterns that match 20% of the indices
Observation
Both the old and new code degrade with the growing number of indices. Profiling shows that this is mostly not due to privilege evaluation, but due to the index pattern expression resolution.
However, the new code retains method call latencies below 20 ms even on clusters with 100000 indices. The old code, however, takes up to 5 seconds for a single method call on clusters with 100000 indices.
See the following chart for a zoomed in section of the 2% of indices case for 10000-100000 indices:
@nibix thank you for the update. The results of bulk indexing in a cluster with a large number of indices are great to see, and I like how the test cases isolate PrivilegeEvaluation for performance testing. Is there any work remaining before labeling this PR ready for review?
@cwperks
> Is there any work remaining before labeling this PR ready for review?
Yes. Actually, I was also about to ping you regarding this.
- The heap used by the de-normalized data structure needs to be capped (see https://github.com/opensearch-project/security/issues/3870#issuecomment-2222509679)
- We need to come to a conclusion on the breaking changes (see https://github.com/opensearch-project/security/issues/4493 and https://github.com/opensearch-project/security/issues/4495). At the moment, this only supports `RoleV7`. Also, `multi_rolespan_enabled: false` is unsupported. Of course, it would be nice to have the benefits already available for OpenSearch 2, but that would mean either additional effort or the acceptance of some breaking changes. I have sketched a way to retire `RoleV6` without breaking changes in https://github.com/opensearch-project/security/pull/4546. However, at the moment I am not sure what a solution that supports `multi_rolespan_enabled: false` could look like in the new code.
- We are also working on actual full cluster benchmarks, which will give concrete figures on the actual throughput improvements.
Note: This leaves the DLS/FLS implementation unchanged for the moment. Thus, part of the problematic performance characteristics will remain. However, I would still like to get this merged and then add DLS/FLS support in a further PR to keep things (relatively) small.
Benchmark results
After having shared the micro benchmark results before, I can now share results of benchmarks performed on an actual cluster. This brings the following benefits:
- It puts the performance gains into the perspective of the performance characteristics of the whole software. This way, it becomes apparent whether the performance gains actually have significance or whether they just "vanish" behind other dominating performance characteristics.
- We can also test the performance gains for DLS. With micro benchmarks, this is not possible as DLS is not implemented as an isolated piece of code, but is performed distributed over the cluster during a search operation.
Disclaimer
The usual benchmarking disclaimer: Of course, these figures can only represent very specific scenarios. Other scenarios can look quite different, as there are so many variables which can be different. Yet, especially the "slow parts" of the benchmark results can give one an impression where real performance issues are.
Test environment
We ran the tests on an OpenSearch cluster hosted using Kubernetes. The host machine for the K8S nodes was a physical server with an AMD EPYC 7401P CPU. We ran an OpenSearch cluster with 3 nodes, each node had 64 GB of RAM, 32 GB of Java Heap, and a cpu: 8 limit configured in K8S. The version of OpenSearch was a recent snapshot of main.
Tested dimensions
Operations
We benchmarked the following operations:
- Ingestion with the REST `_bulk` API into a single index with 8 parallel clients
  - with 10 bulk items per request
  - with 1000 bulk items per request
- Search requests with the REST `_search` API with 16 parallel clients
  - on exactly one index
  - on an index expression matching 2% of the indices on the cluster
  - on an index expression matching 20% of the indices on the cluster
Number of indices on cluster
We tested with these indices:
- 10 indices: `index_a0`, `index_a1`, `index_b0`, `index_b1`, `index_c0`, `index_c1`, ..., `index_e0`, `index_e1`
- 30 indices: `index_a0`, ..., `index_a5`, ..., `index_e0`, ..., `index_e5`
- 100 indices: `index_a0`, ..., `index_a19`, ..., `index_e0`, ..., `index_e19`
- 300 indices
- 1000 indices
- 3000 indices
- 10000 indices
User configurations
We tested with differently configured users to find out about the effects of complexity of roles, DLS rules, etc. The users were:
- A user identified by the super admin certificate. Using this certificate, most of the privilege evaluation code is by-passed. This gives us a rough "upper limit" for the possible performance.
- A user with full privileges (using `*` for `index_permissions` and `cluster_permissions`)
- A user with a single role giving CRUD permissions on `index_a*` and `index_b*`
- A user with 20 roles giving CRUD permissions individually on `index_a0`, `index_a1`, ...
- A user with 40 roles in total: 20 roles giving READ permissions individually on `index_a0`, `index_a1`, ... and 20 more roles giving WRITE permissions on the same indices
- A user with a single role giving CRUD permissions on `index_a*` and `index_b*`, and additionally a simple DLS query
Test results
A commented summary of the results follows in the upcoming sections.
The raw test results can be also reviewed at https://docs.google.com/spreadsheets/d/16VRr9B2bPTyyS_T-IZobUG3C0uJxnXRxlbmsdH_MeCg/edit?usp=sharing
Indexing throughput
Bulk size 10
In this and the following charts, the dashed lines represent OpenSearch with the standard security plugin. The solid lines represent OpenSearch with the security plugin with the optimized privilege evaluation code.
The blue line represents requests authenticated by the super admin certificate, which by-passes most of privilege evaluation. Thus, the blue line forms a kind of "hull curve"; it can be seen as a rough theoretical maximum from a security plugin POV.
The green lines represent users with full privileges. Yellow lines represent users with limited privileges - the darker the yellow, the more complex the role configuration.
The benchmark results for the standard security plugin show a clear performance decline starting at about 300 indices on the cluster. This is caused by the privilege evaluation code resolving the index patterns in the roles configuration against all cluster indices for each request. At 1000 indices, we only get roughly 60% of the throughput that was observed for 10 indices.
Additionally, it can be seen that growing role complexity has a clear effect on throughput. More complex roles show a significantly lower throughput.
On the other hand, the optimized security plugin shows a mostly constant throughput, independent of the number of indices. There is a slight decline starting from 3000 indices. The reasons for this are unknown so far. Yet, these values are still well above the standard plugin.
Bulk size 1000
Larger bulk sizes shift the place where the gap between standard and optimized plugin performance opens a bit to the right. Here we see a strong performance decline in the standard security plugin starting from 1000 indices.
The optimized security plugin exhibits better performance in all cases, though. Additionally, the performance decline that was visible for the bulk: 10 case is not really visible here any more.
The charts show a low throughput for the full privileges user both for the standard plugin and for the optimized plugin in the indices: 10 case. However, I would guess that this is some artifact of the used benchmarking process and is not a real performance characteristic.
Search throughput
Search on single index
The search throughput graphs introduce one further user: users symbolized by the purple line have a document-level access restriction implemented with a DLS query in one role.
Like for the bulk indexing, the standard plugin shows declining performance with an increasing number of indices. Also here, the complexity of the roles configuration has a very significant effect on the throughput. Especially the user with DLS exhibits heavy performance degradation. With 300 indices, the DLS user shows less than half the throughput of the full privileges user of the standard plugin.
On the other hand, the optimized plugin shows mostly constant performance characteristics, independent of the number of indices. Also the DLS user does not show any significant degradation of performance.
Search on 2% of indices
When an index pattern comes into play, both the standard plugin and the optimized plugin show a performance degradation with a growing number of indices. This is due to the necessary resolution of the index pattern against all indices on the cluster.
The blue line - the super admin user - shows that there is quite a gap (about 20%, growing with a higher number of indices) between the theoretically possible throughput and the optimized plugin. This is likely due to the code in https://github.com/nibix/security/blob/main/src/main/java/org/opensearch/security/resolver/IndexResolverReplacer.java which still leaves some room for improvement.
Still, the optimized plugin delivers a clearly higher throughput in all cases.
Especially the DLS performance has strongly improved. For 1000 indices, we see a throughput of 1035 operations per second on the optimized plugin. The standard plugin just manages 53 operations per second with a service time of 300 ms per request. With 10000 indices, DLS gets virtually unusable on the standard plugin with just 0.6 operations per second and a service time of 16 seconds. The optimized plugin still delivers 99 operations per second.
Search on 20% of indices
With a search operation on 20% of indices, the index resolution performance gets so dominant that significant performance gains are no longer visible - except for DLS, which still shows very strong improvements. Using DLS with the standard plugin delivers a throughput of 7 operations per second already for just 1000 indices (service time 2.2 seconds). The optimized plugin still delivers a throughput of 113 operations per second (service time 176 ms).
The blue line - the admin cert user - shows that there is still room for improvement, though.
@nibix The results from the benchmarks are very promising. Should the index pattern resolver be optimized as a follow-up?
Side note: This is not the end of the story - there's still significant potential for performance improvements. I will file issues about that soon.
Note: The patch coverage is only 84.5%, which is a bit low for my taste: https://github.com/opensearch-project/security/pull/4380#issuecomment-2197145658
However, this low figure is mostly due to indentation changes in DLS/FLS classes caused by newly necessary exception handlers. Thus, the low coverage of these classes leaks into this patch.
I have a couple of ideas on how to improve coverage for these classes, but I guess that this is outside of the scope of this PR.
> Side note: This is not the end of the story - there's still significant potential for performance improvements. I will file issues about that soon.
I look forward to seeing the new issues filed. Great detective work @nibix !
@nibix Can you rebase with the latest from main? I don't have any further comments on this PR. IMO I'd like to get this PR merged to main in the near future for a couple of reasons: 1) A 3.0.0 snapshot can be produced where we can perform additional testing and 2) allow other PRs to be rebased on top of this PR.
An additional review can/will be done for this change when backported to 2.x.
@cwperks I just pushed some final fixes and a rebased branch.
In order to be able to move forward, I have also resolved this comment about setting LEGACY_HEADERS_UNNECESSARY_AS_OF to the correct release version of this feature: https://github.com/opensearch-project/security/pull/4380#pullrequestreview-2374642450
However, we have to keep in mind to adapt this version in the backport and in the main branch as soon as the release version is known.
@reta @willyborankin would love to have your inputs on this as well on this major enhancement by @nibix
Apologies, I am struggling to keep up with reviews. I think we have a sufficient number of eyes already on this change. @peternied I think only your approval is pending.
Please don't wait on my approval, I'm in the thick of another release. It looks like we've got 3 maintainer approvals; IMO that seems like more than enough of a reason to merge. If there is a specific question or thread that you'd like me to follow up on, I can take a targeted look.
What do you think @nibix?
@peternied
> What do you think @nibix?
For me personally, that's fine.
@nibix Can you open a backport to 2.x? I'd add the backport label, but I'm fairly confident that this will require a manual backport ;)
Backport in #4898