Automated tests for policy evaluation & traffic attribution
As Subgraph Firewall's complexity and scope grow, we need automated tests for firewall policy evaluation and traffic source identification more than ever. This will help us clearly define and articulate the policy logic, as well as the limits of SGFW.
Test objectives:
Verifying policy decisions in as many different cases as possible:
Create or generate test-case allow/deny rules, simulate connections/packets of different types, and have the policy code evaluate each packet against the test rules. A test passes if the policy decision matches the expected result.
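A table-driven harness along these lines could exercise the policy logic. This is a minimal sketch: the Rule/Packet fields, the wildcard matching, and the first-match-wins semantics are illustrative assumptions, not fw-daemon's actual data model.

```python
# Sketch of table-driven policy-evaluation tests (illustrative model,
# not fw-daemon's real rule structures).
from dataclasses import dataclass
from fnmatch import fnmatch

ALLOW, DENY = "ALLOW", "DENY"

@dataclass
class Rule:
    verdict: str      # ALLOW or DENY
    host: str         # hostname pattern, may contain wildcards
    port: int         # 0 matches any port

@dataclass
class Packet:
    host: str
    port: int

def evaluate(rules, pkt, default=DENY):
    """Return the verdict of the first matching rule (first-match wins)."""
    for r in rules:
        if fnmatch(pkt.host, r.host) and r.port in (0, pkt.port):
            return r.verdict
    return default

# Test table: (rules, packet, expected verdict)
CASES = [
    ([Rule(ALLOW, "example.com", 443)], Packet("example.com", 443), ALLOW),
    ([Rule(ALLOW, "example.com", 443)], Packet("example.com", 80), DENY),
    ([Rule(DENY, "*.ads.net", 0), Rule(ALLOW, "*", 0)],
     Packet("tracker.ads.net", 443), DENY),
]

for rules, pkt, expected in CASES:
    got = evaluate(rules, pkt)
    assert got == expected, f"{pkt}: expected {expected}, got {got}"
print("all policy cases passed")
```

Generating rule/packet pairs programmatically (rather than hand-writing the table) would let us cover many more combinations cheaply.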
Accurately attributing traffic origin:
Simulate traffic from:
- Processes (/proc)
- Proxy ports (Tor, i2p, ssh socks5 proxy)
- Sandboxes (oz-daemon, clearnet bridge, ...)
A test passes if fw-daemon's identified traffic origin matches the expected result.
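For the /proc-based case, the core attribution step can be tested in isolation by feeding canned /proc/net/tcp contents to the lookup code. A sketch under assumptions: the two-stage lookup (local endpoint -> socket inode -> pid) mirrors the usual Linux approach, but the sample data and the faked inode->pid index are stand-ins for scanning /proc/<pid>/fd.

```python
# Sketch: map an outbound connection's local endpoint to a socket inode
# via /proc/net/tcp contents, then to a pid via a (faked) inode->pid index.
import socket, struct

def parse_proc_tcp(text):
    """Yield (local_ip, local_port, inode) from /proc/net/tcp contents."""
    for line in text.splitlines()[1:]:
        fields = line.split()
        if len(fields) < 10:
            continue
        hexip, hexport = fields[1].split(":")
        # /proc/net/tcp stores IPv4 addresses as little-endian hex
        ip = socket.inet_ntoa(struct.pack("<I", int(hexip, 16)))
        yield ip, int(hexport, 16), int(fields[9])

def attribute(proc_tcp, inode_to_pid, local_ip, local_port):
    """Return the pid owning the socket bound to (local_ip, local_port)."""
    for ip, port, inode in parse_proc_tcp(proc_tcp):
        if ip == local_ip and port == local_port:
            return inode_to_pid.get(inode)
    return None

# Canned /proc/net/tcp data: one socket on 127.0.0.1:8080, inode 4242
SAMPLE = """\
  sl  local_address rem_address   st tx_queue rx_queue tr tm->when retrnsmt   uid  timeout inode
   0: 0100007F:1F90 00000000:0000 0A 00000000:00000000 00:00000000 00000000  1000        0 4242
"""

assert attribute(SAMPLE, {4242: 1234}, "127.0.0.1", 8080) == 1234
print("attribution test passed")
```

Using canned input keeps the test deterministic and independent of what is actually running on the test machine.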
Another thing to test: the rule parser, i.e., validating that rules read from disk match the expected in-memory representation.
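Parser tests can compare each parsed rule field by field against an expected structure. In this sketch the "VERDICT|host:port" line syntax is a made-up stand-in for fw-daemon's actual on-disk rule format.

```python
# Sketch of rule-parser tests against a hypothetical "VERDICT|host:port"
# on-disk syntax (not fw-daemon's real format).
def parse_rule(line):
    verdict, target = line.strip().split("|", 1)
    host, _, port = target.partition(":")
    return {"verdict": verdict, "host": host, "port": int(port or 0)}

# Test table: (line as read from disk, expected parsed structure)
CASES = [
    ("ALLOW|example.com:443", {"verdict": "ALLOW", "host": "example.com", "port": 443}),
    ("DENY|*.ads.net:",       {"verdict": "DENY",  "host": "*.ads.net",  "port": 0}),
]

for line, expected in CASES:
    got = parse_rule(line)
    assert got == expected, f"{line!r}: got {got}"
print("parser cases passed")
```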
Yet more testing: address globbing / wildcards evaluated against traffic, with the expected filter matches.
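The globbing cases are a natural fit for a match/no-match table. Here fnmatch stands in for whatever matcher SGFW actually uses; the point is listing each pattern, address, and expected outcome side by side.

```python
# Sketch of address-globbing tests: wildcard patterns vs. observed
# hostnames/addresses, with expected match outcomes.
from fnmatch import fnmatch

CASES = [
    ("*.example.com", "www.example.com", True),
    ("*.example.com", "example.com",     False),  # bare domain does not match the wildcard
    ("10.0.*",        "10.0.3.7",        True),
    ("10.0.*",        "10.1.3.7",        False),
]

for pattern, addr, expected in CASES:
    assert fnmatch(addr, pattern) == expected, (pattern, addr)
print("glob cases passed")
```

Edge cases like the bare-domain one above are exactly where a real matcher's behavior tends to surprise, so they belong in the table explicitly.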
Extensive testing of TLSGuard: all versions and various types of TLS handshakes, including injection of TLS alerts, session resumption, and other behavior we may not have anticipated.
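One building block for such tests is crafting raw TLS records and checking how they are classified. A sketch: the record-layer constants are from RFC 5246/8446, but the validation policy around them is an illustrative assumption, not TLSGuard's actual checks.

```python
# Sketch of TLSGuard-style record checks: classify raw TLS records by
# content type and reject malformed or unexpected ones.
import struct

CONTENT_TYPES = {20: "change_cipher_spec", 21: "alert",
                 22: "handshake", 23: "application_data"}

def classify_record(data):
    """Parse one TLS record header; raise ValueError on malformed input."""
    if len(data) < 5:
        raise ValueError("truncated record header")
    ctype, major, minor, length = struct.unpack("!BBBH", data[:5])
    if ctype not in CONTENT_TYPES:
        raise ValueError(f"unknown content type {ctype}")
    if (major, minor) not in {(3, 1), (3, 2), (3, 3), (3, 4)}:
        raise ValueError(f"unexpected version {major}.{minor}")
    return CONTENT_TYPES[ctype], length

# A handshake record with TLS 1.2 wire version and a 4-byte body
rec = b"\x16\x03\x03\x00\x04" + b"\x01\x00\x00\x00"
assert classify_record(rec) == ("handshake", 4)

# An injected alert record should be recognized as such, so a harness
# can then assert that TLSGuard reacts to it as expected.
alert = b"\x15\x03\x03\x00\x02" + b"\x02\x28"
assert classify_record(alert) == ("alert", 2)
print("TLS record cases passed")
```

A fuller harness would drive crafted records of every version and handshake type through TLSGuard itself and assert on its verdicts.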
Also: onioncircuits-style tracking of Tor circuits, checked against the SOCKS proxy's attempt to force stream isolation; this could be automated.
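The isolation property itself can be stated as an automated check. This sketch models the property only: a simulated SOCKS layer assigns a circuit per isolation key (here, the SOCKS username), and the test asserts that distinct keys never share a circuit. It is not Tor's or SGFW's real circuit bookkeeping.

```python
# Sketch of a stream-isolation check against a simulated SOCKS proxy
# (models the property to verify, not real circuit tracking).
import itertools

class FakeSocksProxy:
    def __init__(self):
        self._circuits = {}
        self._ids = itertools.count(1)

    def connect(self, isolation_key, dest):
        """Return the circuit id used for this (key, dest) connection."""
        if isolation_key not in self._circuits:
            self._circuits[isolation_key] = next(self._ids)
        return self._circuits[isolation_key]

proxy = FakeSocksProxy()
c1 = proxy.connect("app-firefox", "example.com:443")
c2 = proxy.connect("app-mail", "example.com:443")
c3 = proxy.connect("app-firefox", "other.net:80")

assert c1 != c2, "different isolation keys must not share a circuit"
assert c1 == c3, "same isolation key may reuse its circuit"
print("stream isolation cases passed")
```

In an end-to-end version, the fake proxy would be replaced by observing real circuit assignments (as onioncircuits does) while the SOCKS layer applies the isolation keys.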