mdns-sd
Memory leak or unbounded growth in ServiceDaemon
In our use of this library we encountered a memory issue that eventually caused the OS to stop our process. After some investigation, the issue appears to be in ServiceDaemon::browse. I've created the following minimal reproducible example to investigate.
Simply watching the memory with watch -n 0.5 "ps -o pid,rss,vsz,command ax | grep minimal_test" shows a constant uptick in memory usage. Further investigation of our original program and this minimal reproducible example with heaptrack is what led me to explore the ServiceDaemon.
Simple heaptrack output from running heaptrack minimal_test sync for 5-10 seconds. Heaptrack writes a tar.gz which can be viewed in heaptrack-gui for further investigation.
heaptrack stats:
allocations: 39608
leaked allocations: 332
temporary allocations: 21468
The test here has both sync and async variants (the async one using the stream). We use the stream in our code, hence its inclusion. Both show ongoing memory growth.
I added the following print log::error!("Timer Count : {}", zc.timers.len()); to the ServiceDaemon run (inside the loop) and can see very rapid growth of the timers container. I believe there may be an issue with clearing out-of-date timers.
Minimal reproducible test
use futures_util::StreamExt;
use mdns_sd::{ServiceDaemon, ServiceEvent};
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

#[tokio::main]
async fn main() {
    println!("Starting simplified mDNS monitor for _roger._tcp.local.");
    println!("Press Ctrl+C to exit.");

    // Create a single counter for discovered services
    let record_count = Arc::new(AtomicUsize::new(0));

    // Create a single daemon that will live for the entire program
    let mdns = match ServiceDaemon::new() {
        Ok(mdns) => mdns,
        Err(e) => {
            eprintln!("Failed to create mdns daemon: {:?}", e);
            return;
        }
    };

    // The service name we want to monitor
    let service_name = "_roger._tcp.local.";

    // Start browsing for the service
    let receiver = match mdns.browse(service_name) {
        Ok(receiver) => receiver,
        Err(e) => {
            eprintln!("Failed to browse mdns service: {:?}", e);
            return;
        }
    };

    // Clone for use in task
    let count_clone = record_count.clone();

    // Spawn a task to print stats every 5 seconds
    tokio::spawn(async move {
        loop {
            tokio::time::sleep(tokio::time::Duration::from_secs(5)).await;
            println!(
                "Total records found so far: {}",
                count_clone.load(Ordering::Relaxed)
            );
        }
    });

    // Main receiver stream - continues until program is interrupted
    // Use `record_count` clone for the task so we still have the original
    let task_record_count = record_count.clone();
    let _task = tokio::spawn(async move {
        let mut stream = receiver.stream();
        while let Some(event) = stream.next().await {
            match event {
                ServiceEvent::ServiceResolved(info) => {
                    let count = task_record_count.fetch_add(1, Ordering::Relaxed) + 1;
                    println!("Service found: {} (Total: {})", info.get_hostname(), count);
                    println!("Service details: {:?}", info);
                }
                ServiceEvent::SearchStopped(name) => {
                    println!("Search stopped for {}", name);
                    return;
                }
                other_event => {
                    println!("Received other event: {:?}", other_event);
                }
            }
        }
    });

    // Wait for Ctrl+C
    match tokio::signal::ctrl_c().await {
        Ok(()) => {
            println!("Received Ctrl+C, shutting down");
        }
        Err(e) => {
            eprintln!("Error waiting for Ctrl+C: {}", e);
        }
    }

    // Shutdown cleanly
    if let Err(e) = mdns.stop_browse(service_name) {
        eprintln!("Error stopping browse: {:?}", e);
    }
    if let Err(e) = mdns.shutdown() {
        eprintln!("Error shutting down mdns daemon: {:?}", e);
    }

    println!(
        "Exiting. Final count: {}",
        record_count.load(Ordering::Relaxed)
    );
}
Thank you for opening the issue and the detailed info! I tried the minimal test code, running it inside a Docker container of Ubuntu Linux 24.04. The watch output did not increase for some reason, but heaptrack did report leaked memory. (I had trouble using heaptrack-gui, and heaptrack-print did not print meaningful backtraces.) I will take a closer look.
@keepsimple1 Apologies, you are correct that my original test would not grow in memory. I've updated my original post with a new test and a note about the Zeroconf timers, which I believe is the source of the issue. Our test was running on a system with many devices under this service name, likely resulting in retries and many records.
It's also possible our usage is unexpected for the library?
It's unclear to me whether heaptrack counts the timer issue as a "leak", given the lack of removal code. I don't believe there is a leak in the traditional sense, just a container that can keep growing quickly.
No worries. I have a couple of questions:
I added the following print log::error!("Timer Count : {}", zc.timers.len()); to the ServiceDaemon run (inside the loop) and can see very rapid growth of the timers container. I believe there may be an issue with clearing out-of-date timers.
I tried out your new version of the test code, but didn't see the Timer Count error print yet. Is it because I don't have many devices in my local network? Could you paste the log you saw with the Timer Count?
Currently we remove the timers at this place: https://github.com/keepsimple1/mdns-sd/blob/3e3aec91d8d5f14a8ff86b7e6635162766084ac8/src/service_daemon.rs#L569C24-L569C42
So timers would be cleaned up unless they are added faster than removed.
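For readers following along, that cleanup is the usual min-heap pattern: pop entries off the top of the heap until the earliest remaining timer is still in the future. A minimal sketch of that pattern (not the library's exact code), using the BinaryHeap<Reverse<u64>> type mentioned later in this thread:

use std::cmp::Reverse;
use std::collections::BinaryHeap;

/// Pop every timer whose firing time (in millis) is at or before `now`.
/// `Reverse` turns the default max-heap into a min-heap, so the earliest
/// timer sits at the top.
fn drain_expired(timers: &mut BinaryHeap<Reverse<u64>>, now: u64) -> Vec<u64> {
    let mut fired = Vec::new();
    while let Some(Reverse(t)) = timers.peek().copied() {
        if t > now {
            break; // earliest remaining timer is still in the future
        }
        timers.pop();
        fired.push(t);
    }
    fired
}

fn main() {
    let mut timers = BinaryHeap::new();
    for t in [1_000u64, 2_500, 4_000] {
        timers.push(Reverse(t));
    }
    // Only timers due at or before t = 3_000 ms are removed.
    assert_eq!(drain_expired(&mut timers, 3_000), vec![1_000, 2_500]);
    assert_eq!(timers.len(), 1);
}

If new timers are pushed faster than this drain runs (e.g. one per received record on a busy network), the heap grows even though nothing is "leaked" in the traditional sense.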
It's also possible our usage is unexpected for the library?
It should not. This library is intended for use in any environment. If it has a problem in this case, we will fix it :)
we encountered a memory issue eventually causing the OS to stop our process
What OS is it? and what's the message (error) when the OS stopped the process? Thanks!
[2025-04-01T14:39:35Z ERROR mdns_sd::service_daemon] Timer Count : 533871
[2025-04-01T14:39:35Z ERROR mdns_sd::service_daemon] Timer Count : 536992
[2025-04-01T14:39:35Z ERROR mdns_sd::service_daemon] Timer Count : 533870
[2025-04-01T14:39:35Z ERROR mdns_sd::service_daemon] Timer Count : 536991
I had put the print right above this inside the loop: https://github.com/keepsimple1/mdns-sd/blob/3e3aec91d8d5f14a8ff86b7e6635162766084ac8/src/service_daemon.rs#L548
I think it is creating more timers than it cleans up. We have a site with many systems reporting on that service name. Maybe multiple devices are a requirement for reproducibility?
Ubuntu 20.04. I don't have that error but I will search for it and get back to you.
Thanks! The timer count does look very high. Probably because there are many DNS records received like you mentioned. I've created a PR #338 with a change to clean up old timers. Would you try it out to see if it helps?
Another thing we could do is to only store records in the cache the client is interested in (active browsing).
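Purely to illustrate that idea (the function and types here are hypothetical, not the library's API): an incoming record would only be inserted into the cache if its name matches a service type that currently has an active browse.

use std::collections::HashSet;

/// Hypothetical filter: keep an incoming record only if some actively-browsed
/// service type is a suffix of its name (e.g. "printer._roger._tcp.local."
/// matches an active browse of "_roger._tcp.local.").
fn is_interesting(record_name: &str, active_browses: &HashSet<String>) -> bool {
    active_browses.iter().any(|ty| record_name.ends_with(ty.as_str()))
}

fn main() {
    let mut active = HashSet::new();
    active.insert("_roger._tcp.local.".to_string());

    assert!(is_interesting("printer._roger._tcp.local.", &active));
    assert!(!is_interesting("other._ipp._tcp.local.", &active));
}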
Unfortunately no luck in our setup: really high timer counts after just a few seconds. I believe it did reduce the growth, but not enough for us to use.
[2025-04-03T14:38:15Z ERROR mdns_sd::service_daemon] Timer Count (remove-timers): 27392
I updated the PR to skip the DNS records that are not actively queried (except already existing). Would you please try and see if it is helpful? Thanks!
Interesting results with that one. My print is still in the same spot but I'm getting 2 different values:
[2025-04-04T16:10:18Z ERROR mdns_sd::service_daemon] Timer Count (remove-timers): 31060
[2025-04-04T16:10:18Z ERROR mdns_sd::service_daemon] Timer Count (remove-timers): 31060
[2025-04-04T16:10:18Z ERROR mdns_sd::service_daemon] Timer Count (remove-timers): 343
[2025-04-04T16:10:18Z ERROR mdns_sd::service_daemon] Timer Count (remove-timers): 343
[2025-04-04T16:10:18Z ERROR mdns_sd::service_daemon] Timer Count (remove-timers): 32723
[2025-04-04T16:10:18Z ERROR mdns_sd::service_daemon] Timer Count (remove-timers): 343
[2025-04-04T16:10:18Z ERROR mdns_sd::service_daemon] Timer Count (remove-timers): 343
[2025-04-04T16:10:18Z ERROR mdns_sd::service_daemon] Timer Count (remove-timers): 32723
The 343 seems to have stabilized a bit, but the 32723 was again ticking up quite rapidly. I should mention our usage has 2 completely separate ServiceDaemon instances in use for different service names, which explains the separate print values.
That's interesting! Did you observe the same memory issue eventually? And, does the issue happen on both a server (responder with registered services) and a client (without registered services)?
I've updated the PR to include some memory stats in the GetMetrics command. Is it possible for you to use get_metrics to retrieve the stats and see if any of the numbers stand out?
Meantime, I will check more around the timers.
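For reference, polling those stats from the client side could look roughly like this. It is only a sketch and assumes get_metrics() hands back a channel receiver carrying a HashMap<String, i64> snapshot (which matches the metrics map logged below):

use mdns_sd::ServiceDaemon;
use std::{thread, time::Duration};

fn main() {
    let mdns = ServiceDaemon::new().expect("failed to create daemon");
    let _browse = mdns.browse("_roger._tcp.local.").expect("failed to browse");

    // Print the daemon's internal counters once a minute.
    loop {
        thread::sleep(Duration::from_secs(60));
        match mdns.get_metrics() {
            Ok(rx) => match rx.recv() {
                Ok(metrics) => println!("mDNS metrics: {:?}", metrics),
                Err(e) => eprintln!("metrics channel closed: {}", e),
            },
            Err(e) => eprintln!("get_metrics failed: {:?}", e),
        }
    }
}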
The memory does in fact keep growing, yes. It takes a while to crash, but I have not waited for it. I'm not sure I follow what you mean by server and client; I've just been using the minimal test case I shared.
I will try the get_metrics and share results.
[2025-04-07T19:28:57Z ERROR mdns_sd::service_daemon] Timer Count (remove-timers): 1622, Service Name:
[2025-04-07T19:28:57Z ERROR mdns_sd::service_daemon] Timer Count (remove-timers): 1622, Service Name:
[2025-04-07T19:28:57Z ERROR mdns_sd::service_daemon] Timer Count (remove-timers): 1622, Service Name:
[2025-04-07T19:28:58Z ERROR mdns_sd::service_daemon] Timer Count (remove-timers): 1621, Service Name:
[2025-04-07T19:28:58Z ERROR mdns_sd::service_daemon] Timer Count (remove-timers): 1621, Service Name:
[2025-04-07T19:28:58Z ERROR mdns_sd::service_daemon] Timer Count (remove-timers): 1621, Service Name:
[2025-04-07T19:28:58Z ERROR mdns_sd::service_daemon] Timer Count (remove-timers): 1621, Service Name:
[2025-04-07T19:28:59Z ERROR mdns_sd::service_daemon] Timer Count (remove-timers): 1622, Service Name:
[2025-04-07T19:28:59Z INFO ::discovery] mDNS Metrics: {"cached-ptr": 0, "dns-registry-probe": 0, "cached-srv": 3, "cache-refresh-srv": 0, "browse": 1922, "known-answer-suppression": 0, "dns-registry-name-change": 0, "cached-txt": 3, "dns-registry-timer": 0, "cached-subtype": 0, "respond": 328803, "cache-refresh-ptr": 0, "cache-refresh-addr": 0, "cached-nsec": 4, "dns-registry-active": 0, "cached-addr": 20}
[2025-04-07T19:28:59Z ERROR mdns_sd::service_daemon] Timer Count (remove-timers): 1622, Service Name:
[2025-04-07T19:28:59Z ERROR mdns_sd::service_daemon] Timer Count (remove-timers): 1622, Service Name:
[2025-04-07T19:28:59Z ERROR mdns_sd::service_daemon] Timer Count (remove-timers): 1622, Service Name:
[2025-04-07T19:29:00Z ERROR mdns_sd::service_daemon] Timer Count (remove-timers): 1621, Service Name:
[2025-04-07T19:29:00Z ERROR mdns_sd::service_daemon] Timer Count (remove-timers): 1621, Service Name:
[2025-04-07T19:29:00Z ERROR mdns_sd::service_daemon] Timer Count (remove-timers): 1621, Service Name:
[2025-04-07T19:29:00Z ERROR mdns_sd::service_daemon] Timer Count (remove-timers): 1621, Service Name:
[2025-04-07T19:29:00Z ERROR::rest::axum_rest] Request failed: 500 Internal Server Error
[2025-04-07T19:29:01Z ERROR mdns_sd::service_daemon] Timer Count (remove-timers): 1622, Service Name:
[2025-04-07T19:29:01Z INFO ::discovery] mDNS Metrics: {"cached-ptr": 0, "dns-registry-probe": 0, "cached-srv": 3, "cache-refresh-srv": 0, "browse": 1924, "known-answer-suppression": 0, "dns-registry-name-change": 0, "cached-txt": 3, "dns-registry-timer": 0, "cached-subtype": 0, "respond": 329133, "cache-refresh-ptr": 0, "cache-refresh-addr": 0, "cached-nsec": 4, "dns-registry-active": 0, "cached-addr": 20}
This is the end of a log (running for about 1.5 hours); I believe this was at its highest point. It would then drop back down to tens of timers.
I don't think the tool is working as expected anymore on this branch. I'm not discovering any of the devices I expect (I do with v0.13.3).
I've merged the PR #338 and it automatically closed this issue. I just re-opened it.
I saw your earlier version of the comment requesting a new release with the current changes, and I thought it makes sense. So I released 0.13.6 before I saw the current comment.
I don't think the tool is working as expected anymore on this branch. I'm not discovering any of the devices I expect (I do with v0.13.3).
Did you mean 0.13.5 did not work (not discovering any devices), or just the current main (which would also be 0.13.6)?
Sorry I should have been more clear in my communications. I've integrated 0.13.6 into our system (not the limited testing app) and I am seeing the expected mdns records. I'm also not seeing rapid memory growth. I'm going to run this test for a few days and track memory usage. I will update this thread with my findings.
We very much appreciate the quick fix and release!
Not a problem! Happy to know that!
Unfortunately, it seems we still have an issue. I ran a memory test and saw much slower but sustained memory growth over several hours (running v0.13.6). This prompted me to once again log some timer counts in the code and run 2 more tests. These were run on 2 different networks.
Test1 v0.13.6 (without extra logging, just tracking memory externally)
The following code was added at line 680 of service_daemon.rs (it also needs std::fs::OpenOptions and std::io::Write in scope):
let log_file_name = {
    let service_types = if !zc.service_queriers.is_empty() {
        zc.service_queriers
            .keys()
            .map(|s| s.replace(".", "_"))
            .collect::<Vec<_>>()
            .join("_")
    } else if !zc.my_services.is_empty() {
        zc.my_services
            .keys()
            .map(|s| s.replace(".", "_"))
            .collect::<Vec<_>>()
            .join("_")
    } else {
        "unknown_service_type".to_string()
    };
    format!(
        "service_daemon_{}_{}.log",
        std::process::id(),
        service_types
    )
};

// Log the number of timers and the current timestamp to a file
if let Ok(mut file) = OpenOptions::new()
    .create(true)
    .append(true)
    .open(&log_file_name)
{
    let now = current_time_millis();
    if let Err(e) = writeln!(file, "{},{}", now, zc.timers.len()) {
        log::error!("Failed to write to log file: {}", e);
    }
} else {
    log::error!("Failed to open log file: {}", log_file_name);
}
Test2: images from a small network with ~8 devices reporting mDNS records.
Test3: images from a larger network with 30+ devices reporting.
The difference in trends in the one image is interesting; I don't have an explanation for that yet.
Thanks for the details!
Test1 v0.13.6 (without extra logging, just tracking memory externally)
Is the "Memory Usage over Time" graph showing the total memory used by the program that uses the mdns-sd lib, or is it already isolated to the mdns-sd lib only?
While trying to understand these graphs, I went back to earlier logs and have a question about the log in this comment: https://github.com/keepsimple1/mdns-sd/issues/337#issuecomment-2784443653
The metrics collected in that log are all low except "browse": 1924 and "respond": 329133. As each browse creates a new flume channel, those would also contribute to the memory footprint. I am wondering why browse() was called so many times during that period, and whether the program will continue to call browse() more over time.
And just a note: timers is defined as BinaryHeap<Reverse<u64>>, which should have roughly the memory footprint of a Vec<u64> with some small constant overhead. Even with 200K timers, the timers themselves use maybe <= 2 MB. Of course, more importantly, a high number of timers indicates that there are high numbers of other data structures too.
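As a back-of-the-envelope check of that estimate (not a measurement of the library itself): each Reverse<u64> entry is 8 bytes in the heap's backing Vec, so 200K timers come to about 1.6 MB.

use std::cmp::Reverse;
use std::mem::size_of;

fn main() {
    // Each entry of BinaryHeap<Reverse<u64>> is just a u64 in the backing Vec.
    assert_eq!(size_of::<Reverse<u64>>(), 8);

    let entries: usize = 200_000;
    let bytes = entries * size_of::<Reverse<u64>>();
    // Prints "200000 timers ~ 1.6 MB" (ignoring the Vec's spare capacity).
    println!("{} timers ~ {:.1} MB", entries, bytes as f64 / 1_000_000.0);
}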
Here is our exact production usage of the ServiceDaemon. Might be easier to understand this way.
) {
    let mut mdns = match ServiceDaemon::new() {
        Ok(mdns) => mdns,
        Err(e) => {
            log::error!("Failed to create mdns daemon: {:?}", e);
            return;
        }
    };
    log::info!(
        "Starting browsing service for {:?} with a pulse rate of {:?} seconds",
        service_name,
        query_interval_seconds
    );
    loop {
        log::debug!(
            "{service_name:?} Browsing for service: {}",
            service_name.as_str()
        );
        // Browse for a service type.
        let receiver = match mdns.browse(service_name.as_str()) {
            Ok(receiver) => Some(receiver),
            Err(e) => {
                log::error!("Failed to browse mdns service: {:?}", e);
                None
            }
        };
        if let Some(receiver) = receiver {
            let state_clone = state.clone();
            let sensor_serializer_clone = sensor_serializer.clone();
            let sn = service_name.clone();
            let task = tokio::spawn(async move {
                let mut stream = receiver.stream();
                while let Some(event) = stream.next().await {
                    match event {
                        ServiceEvent::ServiceResolved(info) => {
                            let sensors = &mut *state_clone.lock().await;
                            let serializer = &mut *sensor_serializer_clone.lock().await;
                            process_service(sensors, &info, serializer);
                            log::debug!("Service processed on {sn:?}: {:?}", &info.get_hostname());
                        }
                        ServiceEvent::SearchStopped(service_name) => {
                            log::debug!("Search stopped on {service_name}");
                            return;
                        }
                        other_event => {
                            // We likely discover all the sensors initially here before the service is resolved
                            log::debug!(
                                "Received other service event on {sn:?}: {:?}",
                                &other_event
                            );
                        }
                    }
                }
            });
            log::debug!(
                "Browsing for service: {} started, waiting then stopping",
                service_name.as_str()
            );
            // Wait then stop browsing (during this time, responses should be coming in)
            tokio::time::sleep(query_interval_seconds).await;
            // Stop browsing, shouldn't block execution
            log::debug!("Stopping browse for {}", service_name.as_str());
            // Try to stop browsing, if it fails retry if possible, otherwise restart the daemon
            while let Err(e) = mdns.stop_browse(service_name.as_str()) {
                log::error!(
                    "Failed to stop mDNS browsing service: {:?}. Error: {:?}",
                    service_name,
                    e
                );
                if let mdns_sd::Error::Again = e {
                    log::debug!("Retrying to stop mDNS browsing service: {:?}", service_name);
                    continue;
                } else {
                    log::error!(
                        "Failed to stop mDNS browse: {:?}. Cannot retry, shutting down mDNS daemon.",
                        service_name
                    );
                    // Try to shutdown the mDNS daemon, if it fails, abort the task and start a new mDNS daemon.
                    if let Err(e) = mdns.shutdown() {
                        log::error!(
                            "Failed to shutdown mDNS daemon: {:?}. Shutting down thread and restarting mDNS deamon.",
                            e
                        );
                        task.abort();
                    }
                    // Try to create a new mDNS daemon, if it fails, log the error and stop discovery service.
                    mdns = match ServiceDaemon::new() {
                        Ok(mdns) => mdns,
                        Err(e) => {
                            log::error!(
                                "Critical failure in creating new mDNS daemon: {:?}. Shutting down discovery service.",
                                e
                            );
                            return;
                        }
                    };
                }
            }
            log::debug!(
                "Stopped browse for {}, restarting...",
                service_name.as_str()
            );
        }
    }
}
FWIW I've only ever seen the error-handling prints after stop_browse once, and that computer was completely out of memory at the time.
The memory graph in my previous post was captured externally, so it shows our whole application's memory.
I will try to do something similar locally and see if I can find anything.
I tried to run a program similar to your code in https://github.com/keepsimple1/mdns-sd/issues/337#issuecomment-2794240225, but could not reproduce the issue yet. But I wanted to try something related to stop_browse: I opened PR #344 to remove related cache entries when stop_browse is called. Would you please give it a try if possible?
I also noticed something in your earlier graph:
it seems that every 2 minutes there is a downturn. This 2-minute interval matches the TTL of address records and SRV records: https://github.com/keepsimple1/mdns-sd/blob/4192b1a9bdfa8cc003e2971b3a0abead481e2deb/src/service_info.rs#L20
I was wondering if, in this case, SRV and ADDR records are removed from the cache but others are not removed yet. This is another reason I wanted to try the new PR #344.
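For context, the TTL expiry being referred to is just the rule that a cached record is dropped once its age exceeds its TTL. A minimal sketch of that check (not the library's code), using the 2-minute SRV/address TTL from the link above:

/// A cached record is considered expired once `ttl` seconds have elapsed since
/// it was received. Times are in milliseconds, matching the
/// `current_time_millis()` logging used earlier in this thread.
fn is_expired(received_at_ms: u64, ttl_secs: u32, now_ms: u64) -> bool {
    now_ms >= received_at_ms + u64::from(ttl_secs) * 1_000
}

fn main() {
    let received = 0;
    let ttl = 120; // the 2-minute SRV/address TTL referenced above

    assert!(!is_expired(received, ttl, 119_000)); // still fresh
    assert!(is_expired(received, ttl, 120_000)); // at/after TTL: eligible for removal
}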
Thanks!
I have had good success with this latest build. See the memory plot below. I think we could use this in production while we monitor further. Please let me know if you plan a release with these changes.
Thanks for testing! This is awesome! I've merged the changes and published a new release 0.13.7. If any problems, please let me know.
@clarkholdham just wanted to circle back, is there anything else not addressed for this issue? If not, I will close this issue soon. Thanks!
Sorry, we were performing all kinds of tests and trying to narrow down issues. We currently believe that your latest build does NOT continue to grow in memory. We've run simplified tests that show memory usage climb (likely the addition of timers/retries) and then hold constant for quite some time.
However, while testing we started using valgrind and discovered there may be some issues with the daemon cleanup. We saw some issues in the valgrind output of our program related to the mdns-sd call stack. This led me to run the following code with valgrind (per your docs).
This may warrant another ticket if you wish to investigate further separately. valgrind --leak-check=full ./target is the command we used to produce the output below. I get similar output regardless of whether SERVICE_TYPE has devices on the network or not.
FWIW instrumentation like this can produce false positives. I apologize if this is the case.
use mdns_sd::{ServiceDaemon, ServiceEvent};

const SERVICE_TYPE: &str = "_roger3242._tcp.local.";

fn main() {
    // Create a daemon
    let mdns = ServiceDaemon::new().expect("Failed to create daemon");

    // Browse for a service type.
    let receiver = mdns.browse(SERVICE_TYPE).expect("Failed to browse");

    // Receive the browse events in sync or async. Here is
    // an example of using a thread. Users can call `receiver.recv_async().await`
    // if running in async environment.
    std::thread::spawn(move || {
        while let Ok(event) = receiver.recv() {
            match event {
                ServiceEvent::ServiceResolved(info) => {
                    println!("Resolved a new service: {}", info.get_fullname());
                }
                other_event => {
                    println!("Received other event: {:?}", &other_event);
                }
            }
        }
    });

    // Gracefully shutdown the daemon.
    std::thread::sleep(std::time::Duration::from_secs(1));
    mdns.shutdown().unwrap();
}
Output
valgrind --leak-check=full ./target/debug/mdns_test_2
==2707901== Memcheck, a memory error detector
==2707901== Copyright (C) 2002-2022, and GNU GPL'd, by Julian Seward et al.
==2707901== Using Valgrind-3.22.0 and LibVEX; rerun with -h for copyright info
==2707901== Command: ./target/debug/mdns_test_2
==2707901==
Received other event: SearchStarted("_roger3242._tcp.local. on 22 interfaces [fe80::2e0:4cff:fe78:5503 (enx00e04c785503), fe80::9ce5:afff:fe7e:a124 (veth69a66c0), 100.94.81.84 (tailscale0), fe80::6c04:1bff:fe91:cb30 (docker0), fe80::e0de:2aff:fe81:87dc (veth9934698), 172.17.0.1 (docker0), fe80::9cfa:b5ff:fefd:6c6e (veth731e47d), 172.21.0.1 (br-bb2313ac0ab3), 172.20.0.1 (br-586b36faabda), fe80::5479:23ff:fe7d:177f (br-bef61c9823f7), fe80::e054:a7ff:fe5b:5bfc (vetha5354dc), 172.18.0.1 (br-a42913eedc09), fe80::9e37:9899:f66c:24b7 (tailscale0), 10.48.20.202 (enx00e04c785503), fe80::f432:f4ff:feb6:97ab (br-a42913eedc09), 172.22.0.1 (br-bef61c9823f7), fe80::18d8:24ff:fe39:8cf2 (br-bb2313ac0ab3), fe80::b8d2:60ff:fee6:c6fc (vethc3dc327), fe80::2c5b:93ff:fee8:624f (vethf1e1acf), fe80::c02e:b0ff:fedf:f3cd (vethc669b66), 172.19.0.1 (br-b6014ac16729), fd7a:115c:a1e0::b501:5154 (tailscale0)]")
==2707901==
==2707901== HEAP SUMMARY:
==2707901== in use at exit: 43,215 bytes in 126 blocks
==2707901== total heap usage: 9,654 allocs, 9,528 frees, 24,492,394 bytes allocated
==2707901==
==2707901== 8 bytes in 1 blocks are possibly lost in loss record 1 of 49
==2707901== at 0x4846828: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==2707901== by 0x285A75: alloc::raw_vec::finish_grow (in /home/choldham/workspaces/ouster-solutions-agent/target/debug/mdns_test_2)
==2707901== by 0x285C2C: alloc::raw_vec::RawVecInner<A>::grow_amortized (in /home/choldham/workspaces/ouster-solutions-agent/target/debug/mdns_test_2)
==2707901== by 0x1271B5: alloc::raw_vec::RawVecInner<A>::reserve::do_reserve_and_handle (in /home/choldham/workspaces/ouster-solutions-agent/target/debug/mdns_test_2)
==2707901== by 0x22C2F9: alloc::vec::Vec<T,A>::append_elements (mod.rs:2566)
==2707901== by 0x1D1570: spec_extend<u8, alloc::alloc::Global> (spec_extend.rs:61)
==2707901== by 0x1D1570: extend_from_slice<u8, alloc::alloc::Global> (mod.rs:3025)
==2707901== by 0x1D1570: alloc::string::String::push_str (string.rs:1127)
==2707901== by 0x1CC976: <alloc::string::String as core::fmt::Write>::write_str (string.rs:3086)
==2707901== by 0x293F94: core::fmt::write (mod.rs:1214)
==2707901== by 0x1B9D89: core::fmt::Formatter::write_fmt (mod.rs:1669)
==2707901== by 0x13D123: <mdns_sd::service_daemon::Counter as core::fmt::Display>::fmt (service_daemon.rs:136)
==2707901== by 0x13CC02: <T as alloc::string::ToString>::to_string (string.rs:2687)
==2707901== by 0x157E09: mdns_sd::service_daemon::Zeroconf::increase_counter (service_daemon.rs:2490)
==2707901==
==2707901== 8 bytes in 1 blocks are possibly lost in loss record 2 of 49
==2707901== at 0x4846828: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==2707901== by 0x285A75: alloc::raw_vec::finish_grow (in /home/choldham/workspaces/ouster-solutions-agent/target/debug/mdns_test_2)
==2707901== by 0x285C2C: alloc::raw_vec::RawVecInner<A>::grow_amortized (in /home/choldham/workspaces/ouster-solutions-agent/target/debug/mdns_test_2)
==2707901== by 0x1271B5: alloc::raw_vec::RawVecInner<A>::reserve::do_reserve_and_handle (in /home/choldham/workspaces/ouster-solutions-agent/target/debug/mdns_test_2)
==2707901== by 0x22C2F9: alloc::vec::Vec<T,A>::append_elements (mod.rs:2566)
==2707901== by 0x1D1570: spec_extend<u8, alloc::alloc::Global> (spec_extend.rs:61)
==2707901== by 0x1D1570: extend_from_slice<u8, alloc::alloc::Global> (mod.rs:3025)
==2707901== by 0x1D1570: alloc::string::String::push_str (string.rs:1127)
==2707901== by 0x1CC976: <alloc::string::String as core::fmt::Write>::write_str (string.rs:3086)
==2707901== by 0x293F94: core::fmt::write (mod.rs:1214)
==2707901== by 0x1B9D89: core::fmt::Formatter::write_fmt (mod.rs:1669)
==2707901== by 0x13D0C1: <mdns_sd::service_daemon::Counter as core::fmt::Display>::fmt (service_daemon.rs:134)
==2707901== by 0x13CC02: <T as alloc::string::ToString>::to_string (string.rs:2687)
==2707901== by 0x157E09: mdns_sd::service_daemon::Zeroconf::increase_counter (service_daemon.rs:2490)
==2707901==
==2707901== 17 bytes in 1 blocks are possibly lost in loss record 8 of 49
==2707901== at 0x4846828: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==2707901== by 0x285A75: alloc::raw_vec::finish_grow (in /home/choldham/workspaces/ouster-solutions-agent/target/debug/mdns_test_2)
==2707901== by 0x285C2C: alloc::raw_vec::RawVecInner<A>::grow_amortized (in /home/choldham/workspaces/ouster-solutions-agent/target/debug/mdns_test_2)
==2707901== by 0x1271B5: alloc::raw_vec::RawVecInner<A>::reserve::do_reserve_and_handle (in /home/choldham/workspaces/ouster-solutions-agent/target/debug/mdns_test_2)
==2707901== by 0x22C2F9: alloc::vec::Vec<T,A>::append_elements (mod.rs:2566)
==2707901== by 0x1D1570: spec_extend<u8, alloc::alloc::Global> (spec_extend.rs:61)
==2707901== by 0x1D1570: extend_from_slice<u8, alloc::alloc::Global> (mod.rs:3025)
==2707901== by 0x1D1570: alloc::string::String::push_str (string.rs:1127)
==2707901== by 0x1CC976: <alloc::string::String as core::fmt::Write>::write_str (string.rs:3086)
==2707901== by 0x293F94: core::fmt::write (mod.rs:1214)
==2707901== by 0x1B9D89: core::fmt::Formatter::write_fmt (mod.rs:1669)
==2707901== by 0x13D154: <mdns_sd::service_daemon::Counter as core::fmt::Display>::fmt (service_daemon.rs:137)
==2707901== by 0x13CC02: <T as alloc::string::ToString>::to_string (string.rs:2687)
==2707901== by 0x157E09: mdns_sd::service_daemon::Zeroconf::increase_counter (service_daemon.rs:2490)
==2707901==
==2707901== 17 bytes in 1 blocks are possibly lost in loss record 9 of 49
==2707901== at 0x4846828: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==2707901== by 0x285A75: alloc::raw_vec::finish_grow (in /home/choldham/workspaces/ouster-solutions-agent/target/debug/mdns_test_2)
==2707901== by 0x285C2C: alloc::raw_vec::RawVecInner<A>::grow_amortized (in /home/choldham/workspaces/ouster-solutions-agent/target/debug/mdns_test_2)
==2707901== by 0x1271B5: alloc::raw_vec::RawVecInner<A>::reserve::do_reserve_and_handle (in /home/choldham/workspaces/ouster-solutions-agent/target/debug/mdns_test_2)
==2707901== by 0x22C2F9: alloc::vec::Vec<T,A>::append_elements (mod.rs:2566)
==2707901== by 0x1D1570: spec_extend<u8, alloc::alloc::Global> (spec_extend.rs:61)
==2707901== by 0x1D1570: extend_from_slice<u8, alloc::alloc::Global> (mod.rs:3025)
==2707901== by 0x1D1570: alloc::string::String::push_str (string.rs:1127)
==2707901== by 0x1CC976: <alloc::string::String as core::fmt::Write>::write_str (string.rs:3086)
==2707901== by 0x293F94: core::fmt::write (mod.rs:1214)
==2707901== by 0x1B9D89: core::fmt::Formatter::write_fmt (mod.rs:1669)
==2707901== by 0x13D185: <mdns_sd::service_daemon::Counter as core::fmt::Display>::fmt (service_daemon.rs:138)
==2707901== by 0x13CC02: <T as alloc::string::ToString>::to_string (string.rs:2687)
==2707901== by 0x157E09: mdns_sd::service_daemon::Zeroconf::increase_counter (service_daemon.rs:2490)
==2707901==
==2707901== 18 bytes in 1 blocks are possibly lost in loss record 10 of 49
==2707901== at 0x4846828: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==2707901== by 0x285A75: alloc::raw_vec::finish_grow (in /home/choldham/workspaces/ouster-solutions-agent/target/debug/mdns_test_2)
==2707901== by 0x285C2C: alloc::raw_vec::RawVecInner<A>::grow_amortized (in /home/choldham/workspaces/ouster-solutions-agent/target/debug/mdns_test_2)
==2707901== by 0x1271B5: alloc::raw_vec::RawVecInner<A>::reserve::do_reserve_and_handle (in /home/choldham/workspaces/ouster-solutions-agent/target/debug/mdns_test_2)
==2707901== by 0x22C2F9: alloc::vec::Vec<T,A>::append_elements (mod.rs:2566)
==2707901== by 0x1D1570: spec_extend<u8, alloc::alloc::Global> (spec_extend.rs:61)
==2707901== by 0x1D1570: extend_from_slice<u8, alloc::alloc::Global> (mod.rs:3025)
==2707901== by 0x1D1570: alloc::string::String::push_str (string.rs:1127)
==2707901== by 0x1CC976: <alloc::string::String as core::fmt::Write>::write_str (string.rs:3086)
==2707901== by 0x293F94: core::fmt::write (mod.rs:1214)
==2707901== by 0x1B9D89: core::fmt::Formatter::write_fmt (mod.rs:1669)
==2707901== by 0x13D1B6: <mdns_sd::service_daemon::Counter as core::fmt::Display>::fmt (service_daemon.rs:139)
==2707901== by 0x13CC02: <T as alloc::string::ToString>::to_string (string.rs:2687)
==2707901== by 0x157E09: mdns_sd::service_daemon::Zeroconf::increase_counter (service_daemon.rs:2490)
==2707901==
==2707901== 22 bytes in 1 blocks are possibly lost in loss record 12 of 49
==2707901== at 0x4846828: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==2707901== by 0x29142D: alloc (alloc.rs:99)
==2707901== by 0x29142D: alloc_impl (alloc.rs:195)
==2707901== by 0x29142D: allocate (alloc.rs:257)
==2707901== by 0x29142D: try_allocate_in<alloc::alloc::Global> (raw_vec.rs:477)
==2707901== by 0x29142D: with_capacity_in<alloc::alloc::Global> (raw_vec.rs:423)
==2707901== by 0x29142D: with_capacity_in<u8, alloc::alloc::Global> (raw_vec.rs:194)
==2707901== by 0x29142D: with_capacity_in<u8, alloc::alloc::Global> (mod.rs:803)
==2707901== by 0x29142D: to_vec<u8, alloc::alloc::Global> (slice.rs:159)
==2707901== by 0x29142D: to_vec<u8, alloc::alloc::Global> (slice.rs:108)
==2707901== by 0x29142D: to_vec_in<u8, alloc::alloc::Global> (slice.rs:502)
==2707901== by 0x29142D: clone<u8, alloc::alloc::Global> (mod.rs:3276)
==2707901== by 0x29142D: <alloc::string::String as core::clone::Clone>::clone (string.rs:2213)
==2707901== by 0x15B573: mdns_sd::service_daemon::Zeroconf::exec_command_browse (service_daemon.rs:2691)
==2707901== by 0x158BB1: mdns_sd::service_daemon::Zeroconf::exec_command (service_daemon.rs:2546)
==2707901== by 0x1402D8: mdns_sd::service_daemon::ServiceDaemon::run (service_daemon.rs:619)
==2707901== by 0x13F2B8: mdns_sd::service_daemon::ServiceDaemon::daemon_thread (service_daemon.rs:505)
==2707901== by 0x13E21F: mdns_sd::service_daemon::ServiceDaemon::new::{{closure}} (service_daemon.rs:209)
==2707901== by 0x16D9A5: std::sys::backtrace::__rust_begin_short_backtrace (backtrace.rs:154)
==2707901== by 0x1B8E7A: std::thread::Builder::spawn_unchecked_::{{closure}}::{{closure}} (mod.rs:561)
==2707901== by 0x1DB780: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once (unwind_safe.rs:272)
==2707901== by 0x17E560: std::panicking::try::do_call (panicking.rs:557)
==2707901== by 0x1C89AA: __rust_try (in /home/choldham/workspaces/ouster-solutions-agent/target/debug/mdns_test_2)
==2707901==
==2707901== 24 bytes in 1 blocks are possibly lost in loss record 14 of 49
==2707901== at 0x4846828: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==2707901== by 0x285A75: alloc::raw_vec::finish_grow (in /home/choldham/workspaces/ouster-solutions-agent/target/debug/mdns_test_2)
==2707901== by 0x285C2C: alloc::raw_vec::RawVecInner<A>::grow_amortized (in /home/choldham/workspaces/ouster-solutions-agent/target/debug/mdns_test_2)
==2707901== by 0x1271B5: alloc::raw_vec::RawVecInner<A>::reserve::do_reserve_and_handle (in /home/choldham/workspaces/ouster-solutions-agent/target/debug/mdns_test_2)
==2707901== by 0x22C2F9: alloc::vec::Vec<T,A>::append_elements (mod.rs:2566)
==2707901== by 0x1D1570: spec_extend<u8, alloc::alloc::Global> (spec_extend.rs:61)
==2707901== by 0x1D1570: extend_from_slice<u8, alloc::alloc::Global> (mod.rs:3025)
==2707901== by 0x1D1570: alloc::string::String::push_str (string.rs:1127)
==2707901== by 0x1CC976: <alloc::string::String as core::fmt::Write>::write_str (string.rs:3086)
==2707901== by 0x293F94: core::fmt::write (mod.rs:1214)
==2707901== by 0x1B9D89: core::fmt::Formatter::write_fmt (mod.rs:1669)
==2707901== by 0x13D1E7: <mdns_sd::service_daemon::Counter as core::fmt::Display>::fmt (service_daemon.rs:140)
==2707901== by 0x13CC02: <T as alloc::string::ToString>::to_string (string.rs:2687)
==2707901== by 0x157E09: mdns_sd::service_daemon::Zeroconf::increase_counter (service_daemon.rs:2490)
==2707901==
==2707901== 66 bytes in 3 blocks are possibly lost in loss record 24 of 49
==2707901== at 0x4846828: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==2707901== by 0x29043D: alloc (alloc.rs:99)
==2707901== by 0x29043D: alloc_impl (alloc.rs:195)
==2707901== by 0x29043D: allocate (alloc.rs:257)
==2707901== by 0x29043D: try_allocate_in<alloc::alloc::Global> (raw_vec.rs:477)
==2707901== by 0x29043D: with_capacity_in<alloc::alloc::Global> (raw_vec.rs:423)
==2707901== by 0x29043D: with_capacity_in<u8, alloc::alloc::Global> (raw_vec.rs:194)
==2707901== by 0x29043D: with_capacity_in<u8, alloc::alloc::Global> (mod.rs:803)
==2707901== by 0x29043D: with_capacity<u8> (mod.rs:483)
==2707901== by 0x29043D: convert_while_ascii (str.rs:644)
==2707901== by 0x29043D: alloc::str::<impl str>::to_lowercase (str.rs:385)
==2707901== by 0x1BCFF7: mdns_sd::dns_cache::DnsCache::add_or_update (dns_cache.rs:230)
==2707901== by 0x152349: mdns_sd::service_daemon::Zeroconf::handle_response (service_daemon.rs:2034)
==2707901== by 0x14E177: mdns_sd::service_daemon::Zeroconf::handle_read (service_daemon.rs:1749)
==2707901== by 0x151555: mdns_sd::service_daemon::Zeroconf::handle_poller_events (service_daemon.rs:1945)
==2707901== by 0x13FE62: mdns_sd::service_daemon::ServiceDaemon::run (service_daemon.rs:582)
==2707901== by 0x13F2B8: mdns_sd::service_daemon::ServiceDaemon::daemon_thread (service_daemon.rs:505)
==2707901== by 0x13E21F: mdns_sd::service_daemon::ServiceDaemon::new::{{closure}} (service_daemon.rs:209)
==2707901== by 0x16D9A5: std::sys::backtrace::__rust_begin_short_backtrace (backtrace.rs:154)
==2707901== by 0x1B8E7A: std::thread::Builder::spawn_unchecked_::{{closure}}::{{closure}} (mod.rs:561)
==2707901== by 0x1DB780: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once (unwind_safe.rs:272)
==2707901==
==2707901== 148 bytes in 1 blocks are possibly lost in loss record 27 of 49
==2707901== at 0x4846828: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==2707901== by 0x24C02C: alloc::alloc::alloc (alloc.rs:99)
==2707901== by 0x24C16C: alloc::alloc::Global::alloc_impl (alloc.rs:195)
==2707901== by 0x24C3E8: <alloc::alloc::Global as core::alloc::Allocator>::allocate (alloc.rs:257)
==2707901== by 0x24E791: do_alloc<alloc::alloc::Global> (alloc.rs:15)
==2707901== by 0x24E791: hashbrown::raw::RawTableInner::new_uninitialized (mod.rs:1534)
==2707901== by 0x24EB1B: hashbrown::raw::RawTableInner::fallible_with_capacity (mod.rs:1572)
==2707901== by 0x20DEBD: prepare_resize<alloc::alloc::Global> (mod.rs:2633)
==2707901== by 0x20DEBD: resize_inner<alloc::alloc::Global> (mod.rs:2829)
==2707901== by 0x20DEBD: reserve_rehash_inner<alloc::alloc::Global> (mod.rs:2719)
==2707901== by 0x20DEBD: hashbrown::raw::RawTable<T,A>::reserve_rehash (mod.rs:1045)
==2707901== by 0x224E84: hashbrown::raw::RawTable<T,A>::reserve (mod.rs:993)
==2707901== by 0x2183FD: hashbrown::raw::RawTable<T,A>::find_or_find_insert_slot (mod.rs:1206)
==2707901== by 0x2398DD: find_or_find_insert_slot<alloc::string::String, flume::Sender<mdns_sd::service_daemon::ServiceEvent>, std::hash::random::RandomState, alloc::alloc::Global, alloc::string::String> (map.rs:1810)
==2707901== by 0x2398DD: hashbrown::map::HashMap<K,V,S,A>::insert (map.rs:1790)
==2707901== by 0x1B2793: std::collections::hash::map::HashMap<K,V,S>::insert (map.rs:1169)
==2707901== by 0x15B5FD: mdns_sd::service_daemon::Zeroconf::exec_command_browse (service_daemon.rs:2691)
==2707901==
==2707901== 212 bytes in 1 blocks are possibly lost in loss record 31 of 49
==2707901== at 0x4846828: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==2707901== by 0x24C02C: alloc::alloc::alloc (alloc.rs:99)
==2707901== by 0x24C16C: alloc::alloc::Global::alloc_impl (alloc.rs:195)
==2707901== by 0x24C3E8: <alloc::alloc::Global as core::alloc::Allocator>::allocate (alloc.rs:257)
==2707901== by 0x24E791: do_alloc<alloc::alloc::Global> (alloc.rs:15)
==2707901== by 0x24E791: hashbrown::raw::RawTableInner::new_uninitialized (mod.rs:1534)
==2707901== by 0x24EB1B: hashbrown::raw::RawTableInner::fallible_with_capacity (mod.rs:1572)
==2707901== by 0x205A3E: prepare_resize<alloc::alloc::Global> (mod.rs:2633)
==2707901== by 0x205A3E: resize_inner<alloc::alloc::Global> (mod.rs:2829)
==2707901== by 0x205A3E: reserve_rehash_inner<alloc::alloc::Global> (mod.rs:2719)
==2707901== by 0x205A3E: hashbrown::raw::RawTable<T,A>::reserve_rehash (mod.rs:1045)
==2707901== by 0x225294: hashbrown::raw::RawTable<T,A>::reserve (mod.rs:993)
==2707901== by 0x236F91: reserve<alloc::string::String, alloc::vec::Vec<alloc::boxed::Box<dyn mdns_sd::dns_parser::DnsRecordExt, alloc::alloc::Global>, alloc::alloc::Global>, std::hash::random::RandomState, alloc::alloc::Global> (map.rs:1102)
==2707901== by 0x236F91: hashbrown::rustc_entry::<impl hashbrown::map::HashMap<K,V,S,A>>::rustc_entry (rustc_entry.rs:45)
==2707901== by 0x1B2123: std::collections::hash::map::HashMap<K,V,S>::entry (map.rs:854)
==2707901== by 0x1BD13B: mdns_sd::dns_cache::DnsCache::add_or_update (dns_cache.rs:232)
==2707901== by 0x152349: mdns_sd::service_daemon::Zeroconf::handle_response (service_daemon.rs:2034)
==2707901==
==2707901== 212 bytes in 1 blocks are possibly lost in loss record 32 of 49
==2707901== at 0x4846828: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==2707901== by 0x24C02C: alloc::alloc::alloc (alloc.rs:99)
==2707901== by 0x24C16C: alloc::alloc::Global::alloc_impl (alloc.rs:195)
==2707901== by 0x24C3E8: <alloc::alloc::Global as core::alloc::Allocator>::allocate (alloc.rs:257)
==2707901== by 0x24E791: do_alloc<alloc::alloc::Global> (alloc.rs:15)
==2707901== by 0x24E791: hashbrown::raw::RawTableInner::new_uninitialized (mod.rs:1534)
==2707901== by 0x24EB1B: hashbrown::raw::RawTableInner::fallible_with_capacity (mod.rs:1572)
==2707901== by 0x205A3E: prepare_resize<alloc::alloc::Global> (mod.rs:2633)
==2707901== by 0x205A3E: resize_inner<alloc::alloc::Global> (mod.rs:2829)
==2707901== by 0x205A3E: reserve_rehash_inner<alloc::alloc::Global> (mod.rs:2719)
==2707901== by 0x205A3E: hashbrown::raw::RawTable<T,A>::reserve_rehash (mod.rs:1045)
==2707901== by 0x225294: hashbrown::raw::RawTable<T,A>::reserve (mod.rs:993)
==2707901== by 0x236F91: reserve<alloc::string::String, alloc::vec::Vec<alloc::boxed::Box<dyn mdns_sd::dns_parser::DnsRecordExt, alloc::alloc::Global>, alloc::alloc::Global>, std::hash::random::RandomState, alloc::alloc::Global> (map.rs:1102)
==2707901== by 0x236F91: hashbrown::rustc_entry::<impl hashbrown::map::HashMap<K,V,S,A>>::rustc_entry (rustc_entry.rs:45)
==2707901== by 0x1B2123: std::collections::hash::map::HashMap<K,V,S>::entry (map.rs:854)
==2707901== by 0x1BD0ED: mdns_sd::dns_cache::DnsCache::add_or_update (dns_cache.rs:235)
==2707901== by 0x152349: mdns_sd::service_daemon::Zeroconf::handle_response (service_daemon.rs:2034)
==2707901==
==2707901== 261 bytes in 21 blocks are possibly lost in loss record 33 of 49
==2707901== at 0x4846828: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==2707901== by 0x29142D: alloc (alloc.rs:99)
==2707901== by 0x29142D: alloc_impl (alloc.rs:195)
==2707901== by 0x29142D: allocate (alloc.rs:257)
==2707901== by 0x29142D: try_allocate_in<alloc::alloc::Global> (raw_vec.rs:477)
==2707901== by 0x29142D: with_capacity_in<alloc::alloc::Global> (raw_vec.rs:423)
==2707901== by 0x29142D: with_capacity_in<u8, alloc::alloc::Global> (raw_vec.rs:194)
==2707901== by 0x29142D: with_capacity_in<u8, alloc::alloc::Global> (mod.rs:803)
==2707901== by 0x29142D: to_vec<u8, alloc::alloc::Global> (slice.rs:159)
==2707901== by 0x29142D: to_vec<u8, alloc::alloc::Global> (slice.rs:108)
==2707901== by 0x29142D: to_vec_in<u8, alloc::alloc::Global> (slice.rs:502)
==2707901== by 0x29142D: clone<u8, alloc::alloc::Global> (mod.rs:3276)
==2707901== by 0x29142D: <alloc::string::String as core::clone::Clone>::clone (string.rs:2213)
==2707901== by 0x1D4868: <if_addrs::Interface as core::clone::Clone>::clone (lib.rs:42)
==2707901== by 0x13FC4A: mdns_sd::service_daemon::ServiceDaemon::run (service_daemon.rs:543)
==2707901== by 0x13F2B8: mdns_sd::service_daemon::ServiceDaemon::daemon_thread (service_daemon.rs:505)
==2707901== by 0x13E21F: mdns_sd::service_daemon::ServiceDaemon::new::{{closure}} (service_daemon.rs:209)
==2707901== by 0x16D9A5: std::sys::backtrace::__rust_begin_short_backtrace (backtrace.rs:154)
==2707901== by 0x1B8E7A: std::thread::Builder::spawn_unchecked_::{{closure}}::{{closure}} (mod.rs:561)
==2707901== by 0x1DB780: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once (unwind_safe.rs:272)
==2707901== by 0x17E560: std::panicking::try::do_call (panicking.rs:557)
==2707901== by 0x1C89AA: __rust_try (in /home/choldham/workspaces/ouster-solutions-agent/target/debug/mdns_test_2)
==2707901== by 0x1B8A7D: try<(), core::panic::unwind_safe::AssertUnwindSafe<std::thread::{impl#0}::spawn_unchecked_::{closure#1}::{closure_env#0}<mdns_sd::service_daemon::{impl#1}::new::{closure_env#4}, ()>>> (panicking.rs:520)
==2707901== by 0x1B8A7D: catch_unwind<core::panic::unwind_safe::AssertUnwindSafe<std::thread::{impl#0}::spawn_unchecked_::{closure#1}::{closure_env#0}<mdns_sd::service_daemon::{impl#1}::new::{closure_env#4}, ()>>, ()> (panic.rs:358)
==2707901== by 0x1B8A7D: std::thread::Builder::spawn_unchecked_::{{closure}} (mod.rs:559)
==2707901==
==2707901== 271 bytes in 22 blocks are possibly lost in loss record 35 of 49
==2707901== at 0x4846828: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==2707901== by 0x29142D: alloc (alloc.rs:99)
==2707901== by 0x29142D: alloc_impl (alloc.rs:195)
==2707901== by 0x29142D: allocate (alloc.rs:257)
==2707901== by 0x29142D: try_allocate_in<alloc::alloc::Global> (raw_vec.rs:477)
==2707901== by 0x29142D: with_capacity_in<alloc::alloc::Global> (raw_vec.rs:423)
==2707901== by 0x29142D: with_capacity_in<u8, alloc::alloc::Global> (raw_vec.rs:194)
==2707901== by 0x29142D: with_capacity_in<u8, alloc::alloc::Global> (mod.rs:803)
==2707901== by 0x29142D: to_vec<u8, alloc::alloc::Global> (slice.rs:159)
==2707901== by 0x29142D: to_vec<u8, alloc::alloc::Global> (slice.rs:108)
==2707901== by 0x29142D: to_vec_in<u8, alloc::alloc::Global> (slice.rs:502)
==2707901== by 0x29142D: clone<u8, alloc::alloc::Global> (mod.rs:3276)
==2707901== by 0x29142D: <alloc::string::String as core::clone::Clone>::clone (string.rs:2213)
==2707901== by 0x1D4868: <if_addrs::Interface as core::clone::Clone>::clone (lib.rs:42)
==2707901== by 0x1447B2: mdns_sd::service_daemon::Zeroconf::new (service_daemon.rs:997)
==2707901== by 0x13F26E: mdns_sd::service_daemon::ServiceDaemon::daemon_thread (service_daemon.rs:503)
==2707901== by 0x13E21F: mdns_sd::service_daemon::ServiceDaemon::new::{{closure}} (service_daemon.rs:209)
==2707901== by 0x16D9A5: std::sys::backtrace::__rust_begin_short_backtrace (backtrace.rs:154)
==2707901== by 0x1B8E7A: std::thread::Builder::spawn_unchecked_::{{closure}}::{{closure}} (mod.rs:561)
==2707901== by 0x1DB780: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once (unwind_safe.rs:272)
==2707901== by 0x17E560: std::panicking::try::do_call (panicking.rs:557)
==2707901== by 0x1C89AA: __rust_try (in /home/choldham/workspaces/ouster-solutions-agent/target/debug/mdns_test_2)
==2707901== by 0x1B8A7D: try<(), core::panic::unwind_safe::AssertUnwindSafe<std::thread::{impl#0}::spawn_unchecked_::{closure#1}::{closure_env#0}<mdns_sd::service_daemon::{impl#1}::new::{closure_env#4}, ()>>> (panicking.rs:520)
==2707901== by 0x1B8A7D: catch_unwind<core::panic::unwind_safe::AssertUnwindSafe<std::thread::{impl#0}::spawn_unchecked_::{closure#1}::{closure_env#0}<mdns_sd::service_daemon::{impl#1}::new::{closure_env#4}, ()>>, ()> (panic.rs:358)
==2707901== by 0x1B8A7D: std::thread::Builder::spawn_unchecked_::{{closure}} (mod.rs:559)
==2707901==
==2707901== 280 bytes in 1 blocks are possibly lost in loss record 37 of 49
==2707901== at 0x4846828: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==2707901== by 0x24C02C: alloc::alloc::alloc (alloc.rs:99)
==2707901== by 0x24C16C: alloc::alloc::Global::alloc_impl (alloc.rs:195)
==2707901== by 0x24C3E8: <alloc::alloc::Global as core::alloc::Allocator>::allocate (alloc.rs:257)
==2707901== by 0x24E791: do_alloc<alloc::alloc::Global> (alloc.rs:15)
==2707901== by 0x24E791: hashbrown::raw::RawTableInner::new_uninitialized (mod.rs:1534)
==2707901== by 0x24EB1B: hashbrown::raw::RawTableInner::fallible_with_capacity (mod.rs:1572)
==2707901== by 0x204B5D: prepare_resize<alloc::alloc::Global> (mod.rs:2633)
==2707901== by 0x204B5D: resize_inner<alloc::alloc::Global> (mod.rs:2829)
==2707901== by 0x204B5D: reserve_rehash_inner<alloc::alloc::Global> (mod.rs:2719)
==2707901== by 0x204B5D: hashbrown::raw::RawTable<T,A>::reserve_rehash (mod.rs:1045)
==2707901== by 0x2249A4: hashbrown::raw::RawTable<T,A>::reserve (mod.rs:993)
==2707901== by 0x21787D: hashbrown::raw::RawTable<T,A>::find_or_find_insert_slot (mod.rs:1206)
==2707901== by 0x23AA55: find_or_find_insert_slot<alloc::string::String, i64, std::hash::random::RandomState, alloc::alloc::Global, alloc::string::String> (map.rs:1810)
==2707901== by 0x23AA55: hashbrown::map::HashMap<K,V,S,A>::insert (map.rs:1790)
==2707901== by 0x1B25F6: std::collections::hash::map::HashMap<K,V,S>::insert (map.rs:1169)
==2707901== by 0x157EAC: mdns_sd::service_daemon::Zeroconf::increase_counter (service_daemon.rs:2494)
==2707901==
==2707901== 304 bytes in 1 blocks are possibly lost in loss record 38 of 49
==2707901== at 0x484D953: calloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==2707901== by 0x40145AB: calloc (rtld-malloc.h:44)
==2707901== by 0x40145AB: allocate_dtv (dl-tls.c:370)
==2707901== by 0x40145AB: _dl_allocate_tls (dl-tls.c:629)
==2707901== by 0x4937616: allocate_stack (allocatestack.c:429)
==2707901== by 0x4937616: pthread_create@@GLIBC_2.34 (pthread_create.c:655)
==2707901== by 0x277851: std::sys::pal::unix::thread::Thread::new (thread.rs:84)
==2707901== by 0x1B8465: std::thread::Builder::spawn_unchecked_ (mod.rs:600)
==2707901== by 0x1B794F: std::thread::Builder::spawn_unchecked (mod.rs:467)
==2707901== by 0x1B916D: std::thread::Builder::spawn (mod.rs:400)
==2707901== by 0x13DA09: mdns_sd::service_daemon::ServiceDaemon::new (service_daemon.rs:207)
==2707901== by 0x12A1A9: mdns_test_2::main (main.rs:7)
==2707901== by 0x127F1A: core::ops::function::FnOnce::call_once (function.rs:250)
==2707901== by 0x12B0AD: std::sys::backtrace::__rust_begin_short_backtrace (backtrace.rs:154)
==2707901== by 0x12A450: std::rt::lang_start::{{closure}} (rt.rs:195)
==2707901==
==2707901== 304 bytes in 1 blocks are possibly lost in loss record 39 of 49
==2707901== at 0x484D953: calloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==2707901== by 0x40145AB: calloc (rtld-malloc.h:44)
==2707901== by 0x40145AB: allocate_dtv (dl-tls.c:370)
==2707901== by 0x40145AB: _dl_allocate_tls (dl-tls.c:629)
==2707901== by 0x4937616: allocate_stack (allocatestack.c:429)
==2707901== by 0x4937616: pthread_create@@GLIBC_2.34 (pthread_create.c:655)
==2707901== by 0x277851: std::sys::pal::unix::thread::Thread::new (thread.rs:84)
==2707901== by 0x128FCD: std::thread::Builder::spawn_unchecked_ (mod.rs:600)
==2707901== by 0x128555: std::thread::Builder::spawn_unchecked (mod.rs:467)
==2707901== by 0x1284F4: spawn<mdns_test_2::main::{closure_env#0}, ()> (mod.rs:400)
==2707901== by 0x1284F4: std::thread::spawn (mod.rs:730)
==2707901== by 0x12A26E: mdns_test_2::main (main.rs:15)
==2707901== by 0x127F1A: core::ops::function::FnOnce::call_once (function.rs:250)
==2707901== by 0x12B0AD: std::sys::backtrace::__rust_begin_short_backtrace (backtrace.rs:154)
==2707901== by 0x12A450: std::rt::lang_start::{{closure}} (rt.rs:195)
==2707901== by 0x26EFA6: call_once<(), (dyn core::ops::function::Fn<(), Output=i32> + core::marker::Sync + core::panic::unwind_safe::RefUnwindSafe)> (function.rs:284)
==2707901== by 0x26EFA6: do_call<&(dyn core::ops::function::Fn<(), Output=i32> + core::marker::Sync + core::panic::unwind_safe::RefUnwindSafe), i32> (panicking.rs:557)
==2707901== by 0x26EFA6: try<i32, &(dyn core::ops::function::Fn<(), Output=i32> + core::marker::Sync + core::panic::unwind_safe::RefUnwindSafe)> (panicking.rs:520)
==2707901== by 0x26EFA6: catch_unwind<&(dyn core::ops::function::Fn<(), Output=i32> + core::marker::Sync + core::panic::unwind_safe::RefUnwindSafe), i32> (panic.rs:358)
==2707901== by 0x26EFA6: {closure#1} (rt.rs:174)
==2707901== by 0x26EFA6: do_call<std::rt::lang_start_internal::{closure_env#1}, isize> (panicking.rs:557)
==2707901== by 0x26EFA6: try<isize, std::rt::lang_start_internal::{closure_env#1}> (panicking.rs:520)
==2707901== by 0x26EFA6: catch_unwind<std::rt::lang_start_internal::{closure_env#1}, isize> (panic.rs:358)
==2707901== by 0x26EFA6: std::rt::lang_start_internal (rt.rs:174)
==2707901==
==2707901== 408 bytes in 1 blocks are possibly lost in loss record 40 of 49
==2707901== at 0x4846828: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==2707901== by 0x24C02C: alloc::alloc::alloc (alloc.rs:99)
==2707901== by 0x24C16C: alloc::alloc::Global::alloc_impl (alloc.rs:195)
==2707901== by 0x24C3E8: <alloc::alloc::Global as core::alloc::Allocator>::allocate (alloc.rs:257)
==2707901== by 0x24E791: do_alloc<alloc::alloc::Global> (alloc.rs:15)
==2707901== by 0x24E791: hashbrown::raw::RawTableInner::new_uninitialized (mod.rs:1534)
==2707901== by 0x24EB1B: hashbrown::raw::RawTableInner::fallible_with_capacity (mod.rs:1572)
==2707901== by 0x205A3E: prepare_resize<alloc::alloc::Global> (mod.rs:2633)
==2707901== by 0x205A3E: resize_inner<alloc::alloc::Global> (mod.rs:2829)
==2707901== by 0x205A3E: reserve_rehash_inner<alloc::alloc::Global> (mod.rs:2719)
==2707901== by 0x205A3E: hashbrown::raw::RawTable<T,A>::reserve_rehash (mod.rs:1045)
==2707901== by 0x225294: hashbrown::raw::RawTable<T,A>::reserve (mod.rs:993)
==2707901== by 0x236F91: reserve<alloc::string::String, alloc::vec::Vec<alloc::boxed::Box<dyn mdns_sd::dns_parser::DnsRecordExt, alloc::alloc::Global>, alloc::alloc::Global>, std::hash::random::RandomState, alloc::alloc::Global> (map.rs:1102)
==2707901== by 0x236F91: hashbrown::rustc_entry::<impl hashbrown::map::HashMap<K,V,S,A>>::rustc_entry (rustc_entry.rs:45)
==2707901== by 0x1B2123: std::collections::hash::map::HashMap<K,V,S>::entry (map.rs:854)
==2707901== by 0x1BD18D: mdns_sd::dns_cache::DnsCache::add_or_update (dns_cache.rs:234)
==2707901== by 0x152349: mdns_sd::service_daemon::Zeroconf::handle_response (service_daemon.rs:2034)
==2707901==
==2707901== 408 bytes in 1 blocks are possibly lost in loss record 41 of 49
==2707901== at 0x4846828: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==2707901== by 0x24C02C: alloc::alloc::alloc (alloc.rs:99)
==2707901== by 0x24C16C: alloc::alloc::Global::alloc_impl (alloc.rs:195)
==2707901== by 0x24C3E8: <alloc::alloc::Global as core::alloc::Allocator>::allocate (alloc.rs:257)
==2707901== by 0x24E791: do_alloc<alloc::alloc::Global> (alloc.rs:15)
==2707901== by 0x24E791: hashbrown::raw::RawTableInner::new_uninitialized (mod.rs:1534)
==2707901== by 0x24EB1B: hashbrown::raw::RawTableInner::fallible_with_capacity (mod.rs:1572)
==2707901== by 0x205A3E: prepare_resize<alloc::alloc::Global> (mod.rs:2633)
==2707901== by 0x205A3E: resize_inner<alloc::alloc::Global> (mod.rs:2829)
==2707901== by 0x205A3E: reserve_rehash_inner<alloc::alloc::Global> (mod.rs:2719)
==2707901== by 0x205A3E: hashbrown::raw::RawTable<T,A>::reserve_rehash (mod.rs:1045)
==2707901== by 0x225294: hashbrown::raw::RawTable<T,A>::reserve (mod.rs:993)
==2707901== by 0x236F91: reserve<alloc::string::String, alloc::vec::Vec<alloc::boxed::Box<dyn mdns_sd::dns_parser::DnsRecordExt, alloc::alloc::Global>, alloc::alloc::Global>, std::hash::random::RandomState, alloc::alloc::Global> (map.rs:1102)
==2707901== by 0x236F91: hashbrown::rustc_entry::<impl hashbrown::map::HashMap<K,V,S,A>>::rustc_entry (rustc_entry.rs:45)
==2707901== by 0x1B2123: std::collections::hash::map::HashMap<K,V,S>::entry (map.rs:854)
==2707901== by 0x1BD1DF: mdns_sd::dns_cache::DnsCache::add_or_update (dns_cache.rs:233)
==2707901== by 0x152349: mdns_sd::service_daemon::Zeroconf::handle_response (service_daemon.rs:2034)
==2707901==
==2707901== 625 bytes in 14 blocks are possibly lost in loss record 42 of 49
==2707901== at 0x4846828: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==2707901== by 0x285CD5: alloc::raw_vec::RawVecInner<A>::try_allocate_in (in /home/choldham/workspaces/ouster-solutions-agent/target/debug/mdns_test_2)
==2707901== by 0x242AAD: alloc::raw_vec::RawVecInner<A>::with_capacity_in (raw_vec.rs:423)
==2707901== by 0x2404E6: with_capacity_in<u8, alloc::alloc::Global> (raw_vec.rs:194)
==2707901== by 0x2404E6: with_capacity_in<u8, alloc::alloc::Global> (mod.rs:803)
==2707901== by 0x2404E6: <T as alloc::slice::hack::ConvertVec>::to_vec (slice.rs:159)
==2707901== by 0x1CB11C: to_vec<u8, alloc::alloc::Global> (slice.rs:108)
==2707901== by 0x1CB11C: to_vec_in<u8, alloc::alloc::Global> (slice.rs:502)
==2707901== by 0x1CB11C: to_vec<u8> (slice.rs:477)
==2707901== by 0x1CB11C: to_owned<u8> (slice.rs:885)
==2707901== by 0x1CB11C: to_owned (str.rs:211)
==2707901== by 0x1CB11C: from (string.rs:2880)
==2707901== by 0x1CB11C: <str as alloc::string::ToString>::to_string (string.rs:2795)
==2707901== by 0x1BC8AF: mdns_sd::dns_cache::DnsCache::add_or_update (dns_cache.rs:198)
==2707901== by 0x152349: mdns_sd::service_daemon::Zeroconf::handle_response (service_daemon.rs:2034)
==2707901== by 0x14E177: mdns_sd::service_daemon::Zeroconf::handle_read (service_daemon.rs:1749)
==2707901== by 0x151555: mdns_sd::service_daemon::Zeroconf::handle_poller_events (service_daemon.rs:1945)
==2707901== by 0x13FE62: mdns_sd::service_daemon::ServiceDaemon::run (service_daemon.rs:582)
==2707901== by 0x13F2B8: mdns_sd::service_daemon::ServiceDaemon::daemon_thread (service_daemon.rs:505)
==2707901== by 0x13E21F: mdns_sd::service_daemon::ServiceDaemon::new::{{closure}} (service_daemon.rs:209)
==2707901==
==2707901== 3,120 bytes in 1 blocks are possibly lost in loss record 45 of 49
==2707901== at 0x4846828: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==2707901== by 0x24C02C: alloc::alloc::alloc (alloc.rs:99)
==2707901== by 0x24C16C: alloc::alloc::Global::alloc_impl (alloc.rs:195)
==2707901== by 0x24C3E8: <alloc::alloc::Global as core::alloc::Allocator>::allocate (alloc.rs:257)
==2707901== by 0x24E791: do_alloc<alloc::alloc::Global> (alloc.rs:15)
==2707901== by 0x24E791: hashbrown::raw::RawTableInner::new_uninitialized (mod.rs:1534)
==2707901== by 0x24EB1B: hashbrown::raw::RawTableInner::fallible_with_capacity (mod.rs:1572)
==2707901== by 0x20864E: prepare_resize<alloc::alloc::Global> (mod.rs:2633)
==2707901== by 0x20864E: resize_inner<alloc::alloc::Global> (mod.rs:2829)
==2707901== by 0x20864E: reserve_rehash_inner<alloc::alloc::Global> (mod.rs:2719)
==2707901== by 0x20864E: hashbrown::raw::RawTable<T,A>::reserve_rehash (mod.rs:1045)
==2707901== by 0x2248D4: hashbrown::raw::RawTable<T,A>::reserve (mod.rs:993)
==2707901== by 0x21884D: hashbrown::raw::RawTable<T,A>::find_or_find_insert_slot (mod.rs:1206)
==2707901== by 0x23A564: find_or_find_insert_slot<usize, if_addrs::Interface, std::hash::random::RandomState, alloc::alloc::Global, usize> (map.rs:1810)
==2707901== by 0x23A564: hashbrown::map::HashMap<K,V,S,A>::insert (map.rs:1790)
==2707901== by 0x1B264A: std::collections::hash::map::HashMap<K,V,S>::insert (map.rs:1169)
==2707901== by 0x145BB9: mdns_sd::service_daemon::Zeroconf::add_poll_impl (service_daemon.rs:1134)
==2707901==
==2707901== 8,240 bytes in 1 blocks are possibly lost in loss record 47 of 49
==2707901== at 0x4846828: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==2707901== by 0x24C02C: alloc::alloc::alloc (alloc.rs:99)
==2707901== by 0x24C16C: alloc::alloc::Global::alloc_impl (alloc.rs:195)
==2707901== by 0x24C3E8: <alloc::alloc::Global as core::alloc::Allocator>::allocate (alloc.rs:257)
==2707901== by 0x24E791: do_alloc<alloc::alloc::Global> (alloc.rs:15)
==2707901== by 0x24E791: hashbrown::raw::RawTableInner::new_uninitialized (mod.rs:1534)
==2707901== by 0x24EB1B: hashbrown::raw::RawTableInner::fallible_with_capacity (mod.rs:1572)
==2707901== by 0x20FC4E: prepare_resize<alloc::alloc::Global> (mod.rs:2633)
==2707901== by 0x20FC4E: resize_inner<alloc::alloc::Global> (mod.rs:2829)
==2707901== by 0x20FC4E: reserve_rehash_inner<alloc::alloc::Global> (mod.rs:2719)
==2707901== by 0x20FC4E: hashbrown::raw::RawTable<T,A>::reserve_rehash (mod.rs:1045)
==2707901== by 0x224CE4: hashbrown::raw::RawTable<T,A>::reserve (mod.rs:993)
==2707901== by 0x218B2D: hashbrown::raw::RawTable<T,A>::find_or_find_insert_slot (mod.rs:1206)
==2707901== by 0x23ACC7: find_or_find_insert_slot<if_addrs::Interface, mdns_sd::service_info::DnsRegistry, std::hash::random::RandomState, alloc::alloc::Global, if_addrs::Interface> (map.rs:1810)
==2707901== by 0x23ACC7: hashbrown::map::HashMap<K,V,S,A>::insert (map.rs:1790)
==2707901== by 0x1B2766: std::collections::hash::map::HashMap<K,V,S>::insert (map.rs:1169)
==2707901== by 0x144898: mdns_sd::service_daemon::Zeroconf::new (service_daemon.rs:997)
==2707901==
==2707901== LEAK SUMMARY:
==2707901== definitely lost: 0 bytes in 0 blocks
==2707901== indirectly lost: 0 bytes in 0 blocks
==2707901== possibly lost: 14,973 bytes in 77 blocks
==2707901== still reachable: 28,242 bytes in 49 blocks
==2707901== suppressed: 0 bytes in 0 blocks
==2707901== Reachable blocks (those to which a pointer was found) are not shown.
==2707901== To see them, rerun with: --leak-check=full --show-leak-kinds=all
==2707901==
==2707901== For lists of detected and suppressed errors, rerun with: -s
==2707901== ERROR SUMMARY: 21 errors from 21 contexts (suppressed: 0 from 0)
Thanks for getting back to me and for the details!
This may warrant another ticket if you wish to investigate further separately.
Yes I think that's a good idea. Please open a new issue and reference this. I will go through the valgrind report and check it out.
I tried out valgrind with your test program and saw similar output. It looks like valgrind reports "possibly lost" because the daemon thread did not get a chance to free the memory completely before the main thread terminated.
So these seem to be false positives. To verify that, I added a 1-second sleep in the main thread before it exits, to give the daemon thread more time to free memory.
<snip>
mdns.shutdown().unwrap();
println!("sleep 1 second before exit");
std::thread::sleep(std::time::Duration::from_secs(1));
I re-ran valgrind, and there are no more "possibly lost" blocks:
# valgrind --leak-check=full target/debug/mdns-valgrind
==606== Memcheck, a memory error detector
==606== Copyright (C) 2002-2022, and GNU GPL'd, by Julian Seward et al.
==606== Using Valgrind-3.22.0 and LibVEX; rerun with -h for copyright info
==606== Command: target/debug/mdns-valgrind
==606==
Received other event: SearchStarted("_roger3242._tcp.local. on 1 interfaces [172.17.0.2 (eth0)]")
sleep 1 second before exit
==606==
==606== HEAP SUMMARY:
==606== in use at exit: 0 bytes in 0 blocks
==606== total heap usage: 163 allocs, 163 frees, 120,764 bytes allocated
==606==
==606== All heap blocks were freed -- no leaks are possible
==606==
==606== For lists of detected and suppressed errors, rerun with: -s
==606== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
Closing now as the original memory issue is fixed. The most recent valgrind memory report seems to be false positives based on my testing (see details above). If not, please open a new issue. Thanks!