MQTT-TLS
stack footprint??
Hi there, I'm just wondering what the memory footprint looks like for this library while sending. I've been getting memory issues (not hard errors, but corrupted data) when I try to send messages over 2-4 KB. I know that I have 20+ KB of free RAM, but on the Boron I only have 6 KB of stack space. I see that the buffer itself is declared on the heap, but it would seem that it gets loaded onto the stack at some point during the send. Is this the case? If so, what should I be looking at for a max message size?
Thanks, Colin
Hi, I think a 2-4 KB message size is too big. You could check/debug the TLS sequence by enabling these in "MQTT-TLS/src/mbedtls/config.h":
#define MBEDTLS_DEBUG_C
#define MBEDTLS_MEMORY_DEBUG
#define MBEDTLS_SSL_DEBUG_ALL
hirotakaster - thanks for the quick reply :) I'm running this on an embedded device, so it's pretty difficult to get at stderr output. Honestly, I wouldn't know where to begin troubleshooting this; I think it'd be wise to just break my messages into smaller chunks. What would be the approximate overhead cost of 20 500-byte messages vs. one 10 KB message?
My manager suggested some sort of paid arrangement where we pay to get the lib updated to be able to send 10 KB messages. Let me know if you're interested in that; it would probably be a few hundred USD if you can get it working.
Thanks again, Colin
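For the chunking approach mentioned above, a minimal splitting helper could look like the sketch below. This is hypothetical code, not part of the MQTT-TLS library; the function name and chunk size are my own, and a real version would also need to tag chunks so the subscriber can reassemble them.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Split a large payload into pieces no bigger than maxChunk bytes, so each
// publish stays well under MQTT_MAX_PACKET_SIZE. Hypothetical helper, not a
// library API; reassembly metadata (sequence numbers, terminator) is omitted.
std::vector<std::string> chunkPayload(const std::string& payload, size_t maxChunk) {
    std::vector<std::string> chunks;
    for (size_t off = 0; off < payload.size(); off += maxChunk) {
        // substr clamps the count at the end of the string, so the last
        // chunk is simply whatever remains
        chunks.push_back(payload.substr(off, std::min(maxChunk, payload.size() - off)));
    }
    return chunks;
}
```

Each chunk could then be sent through client.publish() in a loop; the receiver would need a sequence number or terminator in the topic or payload to put them back together, which this sketch leaves out.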
I checked with my Argon; about a 9 KB MQTT message size is available.
- replace config-mini.h with config.h
- update:
#define MQTT_MAX_PACKET_SIZE 9728
#define MBEDTLS_SSL_MAX_CONTENT_LEN 9728
- build and flash.
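As a rough idea of why these settings are RAM-hungry: mbedTLS keeps separate input and output record buffers sized from MBEDTLS_SSL_MAX_CONTENT_LEN, so raising it to 9728 costs roughly 20 KB of heap before the MQTT packet buffer is even counted. A back-of-envelope sketch (the per-buffer headroom constant is my assumption, not the exact mbedTLS value):

```cpp
// Rough RAM estimate for mbedTLS TLS record buffers. mbedTLS allocates one
// input and one output buffer, each MBEDTLS_SSL_MAX_CONTENT_LEN plus some
// per-record headroom for headers/MAC/padding.
const int kRecordHeadroom = 325;  // assumed headroom per buffer, not an exact constant

int tlsBufferRam(int maxContentLen) {
    return 2 * (maxContentLen + kRecordHeadroom);  // input + output buffers
}
// tlsBufferRam(9728) is about 20 KB; tlsBufferRam(2048) is under 5 KB.
```

Under these assumptions, a 9728-byte content length alone eats a large share of the Boron's free RAM, which would fit the corrupted-data symptoms seen with big messages.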
10 KB per message is too big for apps; I think it won't work. You could check the device memory usage on the Particle Console VITALS tab (https://docs.particle.io/tutorials/diagnostics/device-vitals/#memory-usage). Here is sample source code for a 9 KB message.
#include <MQTT-TLS.h>

void callback(char* topic, byte* payload, unsigned int length);

#define AMAZON_IOT_ROOT_CA_PEM \
"-----BEGIN CERTIFICATE-----\r\n" \
" Root CA \r\n" \
"-----END CERTIFICATE----- "
const char amazonIoTRootCaPem[] = AMAZON_IOT_ROOT_CA_PEM;

#define CLIENT_KEY_CRT_PEM \
"-----BEGIN CERTIFICATE-----\r\n" \
" Your Certificate \r\n" \
"-----END CERTIFICATE----- "
const char clientKeyCrtPem[] = CLIENT_KEY_CRT_PEM;

#define CLIENT_KEY_PEM \
"-----BEGIN RSA PRIVATE KEY-----\r\n" \
" Your Private Key \r\n" \
"-----END RSA PRIVATE KEY----- "
const char clientKeyPem[] = CLIENT_KEY_PEM;

MQTT client("IoT core server", 8883, callback);

// receive message
void callback(char* topic, byte* payload, unsigned int length) {
    Serial.write(payload, length);
    Serial.print(":");
    Serial.println(length);

    if (strncmp("RED", (const char*)payload, strlen("RED")) == 0)
        RGB.color(255, 0, 0);
    else if (strncmp("GREEN", (const char*)payload, strlen("GREEN")) == 0)
        RGB.color(0, 255, 0);
    else if (strncmp("BLUE", (const char*)payload, strlen("BLUE")) == 0)
        RGB.color(0, 0, 255);
    else
        RGB.color(255, 255, 255);
    delay(1000);
}

#define ONE_DAY_MILLIS (24 * 60 * 60 * 1000)
unsigned long lastSync = millis();

void setup() {
    Serial.begin(9600);
    RGB.control(true);

    // enable TLS: set the root CA, client certificate, and private key
    client.enableTls(amazonIoTRootCaPem, sizeof(amazonIoTRootCaPem),
                     clientKeyCrtPem, sizeof(clientKeyCrtPem),
                     clientKeyPem, sizeof(clientKeyPem));
    Serial.println("tls enable");

    // connect to the server
    client.connect("sparkclient");

    // publish/subscribe
    if (client.isConnected()) {
        Serial.println("client connected");
        client.publish("outTopic/message", "hello world");
        client.subscribe("inTopic/message");
    }
}

void loop() {
    // re-sync the on-board clock once a day; TLS certificate validation
    // needs an accurate time
    if (millis() - lastSync > ONE_DAY_MILLIS) {
        Particle.syncTime();
        lastSync = millis();
    }
    if (client.isConnected()) {
        client.loop();
    }
    delay(200);
}
A max of about 9500 bytes per MQTT message could work with AWS IoT, but this source code only does MQTT message pub/sub. The Particle Console VITALS data is: memory usage: 93%, raw memory used: 137 KB, raw memory available: 147 KB.
I think there is almost no free memory left for application code running on the device.
We have this config working in production: #define MQTT_MAX_PACKET_SIZE 2048
We played with this a bit, and while I don't have the old commits to reference, I'm certain I hit intermittent memory instability at just over 3 KB. Due to some other processes, backing off to 2 KB was stable and convenient. I found this by watching free-memory output across multiple test runs and then testing for extended periods in the field.
I agree with Hiro; there is a hard limit on the free memory available to application code.
As for the overhead cost, there are two considerations: bytes of overhead and time to transmit. We saw a definite impact on throughput, but for us this was due to the latency in the serial delivery of "10x 500b packets": we lost time both in the ACK and in preparing the next 500-byte packet. MQTT framing itself should be really efficient (roughly 20-40 bytes of overhead per message, but check the spec).
In a nutshell, 2048-byte packets gave about 3-4x the throughput of Particle.publish(), and I imagine 9500 bytes would be another 2x or more...
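To put a number on the MQTT framing overhead discussed above: per the MQTT 3.1.1 spec, a QoS 0 PUBLISH costs one fixed-header byte, a 1-4 byte "remaining length" varint, and a 2-byte topic-length prefix plus the topic string itself. A quick sketch (the 16-character topic length is just an example, matching "outTopic/message"; QoS > 0 would add a 2-byte packet identifier):

```cpp
// Bytes needed to encode MQTT's variable-length "remaining length" field
// (7 bits of length per byte, per the MQTT 3.1.1 spec).
int remainingLengthBytes(int n) {
    int bytes = 0;
    do { n /= 128; ++bytes; } while (n > 0);
    return bytes;
}

// Per-message framing overhead of a QoS 0 PUBLISH: fixed-header byte +
// remaining-length varint + 2-byte topic length prefix + topic itself.
int publishOverhead(int topicLen, int payloadLen) {
    int remaining = 2 + topicLen + payloadLen;  // topic prefix + topic + payload
    return 1 + remainingLengthBytes(remaining) + 2 + topicLen;
}
```

With a 16-character topic, publishOverhead(16, 500) and publishOverhead(16, 10000) both come to 21 bytes, so chunking a 10 KB payload into twenty 500-byte messages adds only about 400 bytes of MQTT framing. The real cost of chunking is the extra TLS records and TCP round trips per publish, which matches the throughput observation above.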