Fleet API self-usage / run own Fleet Telemetry Server
I have read that the Fleet API is free for the vehicle owner, with some sane limits. Isn't it possible for TeslaLogger to use my own Fleet API developer credentials? I guess it requires self-hosting a daemon that listens for connections from Tesla's servers, but that should not be an issue, at least for some of the users who run TeslaLogger in Docker. Or is getting a Fleet API developer account somehow limited to big developer companies only?
I don't support self-hosted developer instances myself, but I'm happy if anybody contributes a pull request.
You just need to forward the telemetry data to TelemetryConnection.handleMessage(string content):
https://github.com/bassmaster187/TeslaLogger/blob/22ff7e35a11ed1acac1a3f86a8c6975be0d5b0de/TeslaLogger/TelemetryConnection.cs#L229
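If the telemetry records arrive over some other transport (MQTT, ZeroMQ, ...), the bridge into TeslaLogger can stay very small. A minimal C# sketch, assuming only the method shape named above; the bridge class, the callback name and the visibility of handleMessage are assumptions, not existing TeslaLogger code:

```csharp
using System.Text;

namespace TeslaLogger
{
    // Hypothetical bridge: wire your transport's receive callback to IncomingPayload
    // and let the existing TelemetryConnection parser do the rest.
    public class TelemetryBridge
    {
        private readonly TelemetryConnection connection;

        public TelemetryBridge(TelemetryConnection connection)
        {
            this.connection = connection;
        }

        // Call this from the MQTT/ZeroMQ/... receive handler.
        public void IncomingPayload(byte[] payload)
        {
            // fleet-telemetry dispatchers deliver the record as JSON text
            string json = Encoding.UTF8.GetString(payload);
            connection.handleMessage(json); // may need to be made public/internal
        }
    }
}
```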
Kafka is not the right dispatcher for a small environment, but there is a pull request for an MQTT dispatcher. For a small number of cars that should be enough.
https://github.com/teslamotors/fleet-telemetry/pull/220
What I can do is separate the connection to my telemetry server from the parser, so it is much easier for you. But that will happen in January, because I have a bunch of work to do until everything is working.
After everything is done, I can provide you with the latest config I am sending to the vehicles to get the same result.
Maybe you can find a small team and share the todos?
@Adminius
I've found somebody who knows how to securely install Cloudflare Tunnel with Docker Compose. He will help us add the Tesla proxy stuff and so on. Let's see how it goes.
Cloudflare is IMHO the best solution for such a setup, because it handles certificates automatically, no ports need to be opened in the local router, and so on.
I can double-check what you are doing.
I'll try to strip Kafka etc. from the telemetry server and add the MQTT datastore from erwin314. I hope this will run as a service on a Raspberry Pi. It would be nice if TeslaLogger could support MQTT as a source for the telemetry data.
Additionally, I'm planning to add the authorization callback handler and the public key URL to the public port, plus some local web UI to register the application and do the authorization. So everything should be contained in a single service.
I'll let you know when I have something working.
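For reference, the public key URL mentioned above is the well-known path Tesla checks during third-party registration: https://&lt;your domain&gt;/.well-known/appspecific/com.tesla.3p.public-key.pem. A hedged C# sketch of a stand-alone server for it (the plan above is to do this in Go; the port, the file location and the content type here are assumptions, and TLS termination is left to a reverse proxy):

```csharp
using System.IO;
using System.Net;

// Hypothetical stand-alone server for the public key; not the Go implementation
// planned above.
class PublicKeyServer
{
    const string WellKnownPath = "/.well-known/appspecific/com.tesla.3p.public-key.pem";

    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://*:8080/"); // put TLS (reverse proxy) in front of this
        listener.Start();

        while (true)
        {
            HttpListenerContext ctx = listener.GetContext();
            if (ctx.Request.Url.AbsolutePath == WellKnownPath)
            {
                byte[] pem = File.ReadAllBytes("public-key.pem"); // assumed location
                ctx.Response.ContentType = "text/plain";
                ctx.Response.OutputStream.Write(pem, 0, pem.Length);
            }
            else
            {
                ctx.Response.StatusCode = 404;
            }
            ctx.Response.Close();
        }
    }
}
```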
It is in fact quite easy to run the fleet-telemetry server on a Raspi:
- Check out the Tesla fleet-telemetry repository on the Raspi
- Merge the erwin314 MQTT changes (more on that later)
- Remove Kafka; it does not compile on a Raspberry Pi or any other 32-bit system
- Run 'go build' in the cmd folder to get a >25 MB (!) binary
- Create a config.json with 'mqtt' settings. Use 'mqtt' in the records/V array, optionally also in records/alerts and records/errors
- Run './cmd'
- To have it run automatically, install it as a Linux service (see the sketch below)
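A hedged sketch of such a service unit, assuming the repository was built under /home/pi/fleet-telemetry and the binary kept its default name cmd; adjust the paths and user to your setup:

```ini
# /etc/systemd/system/fleet-telemetry.service
[Unit]
Description=Tesla fleet-telemetry server
After=network-online.target

[Service]
User=pi
WorkingDirectory=/home/pi/fleet-telemetry/cmd
ExecStart=/home/pi/fleet-telemetry/cmd/cmd -config /home/pi/fleet-telemetry/cmd/config.json
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl enable --now fleet-telemetry`.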
I'm not happy with the MQTT implementation from erwin314, because he splits each received data record into single values and drops the original created_at timestamp. As far as I understand, TeslaLogger gets forwarded the original data record converted to JSON. I'll try to change that.
I'll create a fork on GitHub when I'm finished. Maybe I can finally have GitHub build a .deb package to install it all as a Linux service.
Currently I send the configuration with some quick hacks. I'll create a separate service for handling the authorization callback, token refresh and sending of the configuration to the vehicle.
Yes, you need to set up the proxy command as well, for the auth token and for sending the config.
https://github.com/mgerczuk/fleet-telemetry-raspi has the compilable source, still with the single-value MQTT.
When you use Let's Encrypt with Apache, a valid config.json looks like this:
```json
{
  "host": "",
  "port": 4444,
  "log_level": "debug",
  "logger": {
    "verbose": true
  },
  "mqtt": {
    "broker": "<your mqtt server>:1883",
    "client_id": "client-1",
    "topic_base": "telemetry",
    "qos": 1,
    "retained": false,
    "connect_timeout_ms": 30000,
    "publish_timeout_ms": 1000
  },
  "records": {
    "alerts": ["mqtt"],
    "errors": ["mqtt"],
    "V": ["mqtt"]
  },
  "tls": {
    "server_cert": "/etc/letsencrypt/live/<your domain name>/fullchain.pem",
    "server_key": "/etc/letsencrypt/live/<your domain name>/privkey.pem"
  }
}
```
Either place the config.json in the folder with the cmd executable or specify its location with ./cmd -config path-to-json.config. You may rename the cmd executable, of course.
Now you "only" have to open 4444 in your router and instruct your Tesla to send the telemetry data to <your-domain-name>:4444 :-)
> Yes, you need to set up the proxy command as well, for the auth token and for sending the config.
I'll write my own "proxy", since something has to periodically refresh the tokens and I want a more user-friendly way of updating the config. Of course that means I will have to adapt to possible Tesla API changes, but I hope they won't happen too often.
And I've tasted blood coding with Go, so it'll be fun to write!
I have separated the connection to my telemetry server from the parser, so it is now very easy for you to exchange the transport protocol and just feed the parser via handleMessage().
Next benefit: it's now very easy to write some unit tests.
Let me know if you need more help
https://github.com/bassmaster187/TeslaLogger/commit/28a52dd1b30065504648a970c485486b45f30030
I'm waiting for Tesla confirmation
https://github.com/yvolchkov/tesla-fleet-helper
two weeks :-)
> I'm waiting for Tesla confirmation
> https://github.com/yvolchkov/tesla-fleet-helper
What is the Cloudflare tunnel for? Does it just enable the Docker setup to be used behind NAT as well? So if I want to host it on my web server with a public IP (in Docker or directly), I don't need it? Can I then just follow your script to generate the keys, host them and register at Tesla?
Cloudflare Tunnel creates a secure connection to your server without opening ports or creating certificates. The script requires Cloudflare Tunnel; if you don't want to use it, you have to find another way.
For someone who has already set up the Tesla HTTP Proxy in Home Assistant, does it serve the same purpose as the Tesla-Fleet-Helper?
> For someone who has already set up the Tesla HTTP Proxy in Home Assistant, does it serve the same purpose as the Tesla-Fleet-Helper?
to @jjjasont: If you have already gone through the Tesla HTTP Proxy setup in Home Assistant, you should have all the necessary keys, yes. The idea of fleet-helper is to automate the process of generating the keys and registering them as a third-party application. The registration process is complex and requires many steps; I tried to make it as simple as possible with this script.
Also, the Home Assistant proxy goes further and implements the actual proxy itself, while with fleet-helper this is still work in progress. Once I and @Adminius get confirmation from Tesla that our third-party app is registered, we will work on integrating the Tesla proxy into TeslaLogger. In other words, Fleet-Helper is a prerequisite for these efforts.
> What is the Cloudflare tunnel for? Does it just enable the Docker setup to be used behind NAT as well? So if I want to host it on my web server with a public IP (in Docker or directly), I don't need it? Can I then just follow your script to generate the keys, host them and register at Tesla?
to @marki555: the script was developed specifically for use with Cloudflare, because that is the easiest way and doesn't require a VPS. I could add an option to skip the Cloudflare step, but the user would then have to provide valid SSL certificates and make sure that port 443 is available, as Tesla requires HTTPS for public-key.pem.
What do you need Tesla confirmation for? I followed https://developer.tesla.com/docs/fleet-api/getting-started/what-is-fleet-api and https://developer.tesla.com/docs/fleet-api/authentication/third-party-tokens and got my tokens without any delay.
The Cloudflare tunnel looks interesting! But I guess it costs money?
P.S. My Raspi service https://github.com/mgerczuk/fleet-telemetry-raspi runs smoothly. The configuration tool https://github.com/mgerczuk/fleet-telemetry-config is barely working and looks really awful. If someone wants to help with the HTML...?
Maybe you can also use tesla-fleet-helper to send the configuration to the car.
> But I guess it costs money?
It does not. The Cloudflare Tunnel free tier is way more than needed for our purposes.
> What do you need Tesla confirmation for?
One of the sources of inspiration for my script were these instructions. And the guide says: "Once this is submitted, Tesla will process the CSR and update your account on the backend accordingly. It may take a few weeks to process". Maybe that is outdated; I will have to check. Maybe everything is working already and there is no need to wait for any further confirmation.
Actually, let me try your fleet-telemetry-raspi. I guess we should sync up offline and join forces.
> to @marki555: the script was developed specifically for use with Cloudflare, because that is the easiest way and doesn't require a VPS. I could add an option to skip the Cloudflare step, but the user would then have to provide valid SSL certificates and make sure that port 443 is available, as Tesla requires HTTPS for public-key.pem.
Yes, maybe separate the creation of the public/private key and the app from the Cloudflare tunnel/hosting (so it would just create the keys, pause and ask the user to manually copy the keys to the correct web server for the domain, and then continue with checking whether the key is hosted and with the next steps).
I tried to follow the script manually, and I have created the keys and hosted them on my subdomain via HTTPS. However, the next step is a little misleading, as it says to just create a developer account and provide a client_id. As far as I understand, I also had to create an app on the developer portal and provide the client_id/secret of the app, not of the developer account itself (the confusion may have arisen from the instructions you linked in your previous step, which are from a year ago; the process is a little different now).
The script then showed the response from Tesla, but I'm not sure how to tell whether it was successful (the scr, issuer and ca fields are null).
> Yes, maybe separate the creation of the public/private key and the app from the Cloudflare tunnel/hosting (so it would just create the keys, pause and ask the user to manually copy the keys to the correct web server for the domain, and then continue with checking whether the key is hosted and with the next steps).
Creating the keys is just two lines in bash; I don't see any point in separating that from the script. Furthermore, what you are describing here is pretty much a manual step-by-step process. IMO doing that defeats the purpose of the script: if you can do what you described above, you can do everything manually.
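For reference, the "two lines" are presumably something like the following openssl calls: an EC key pair on the prime256v1 (NIST P-256) curve, with the public part exported as the public-key.pem that has to be hosted on your domain. The file names are just examples; check the current Fleet API docs for the required curve:

```bash
openssl ecparam -name prime256v1 -genkey -noout -out private-key.pem
openssl ec -in private-key.pem -pubout -out public-key.pem
```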
Since yesterday I have telemetry running on my local server. tesla-fleet-helper mentioned earlier doesn't work as expected because of mTLS challenges.
At the moment we did a manual installation with an open port. Now I can focus on the connection between the local telemetry server and TeslaLogger, and @yvolchkov will rewrite the script/helper for easier installation.
The good thing: you do not have to wait for Tesla approval anymore :)
> tesla-fleet-helper mentioned earlier doesn't work as expected because of mTLS challenges.
That's not entirely correct, though: it still does work. However, the plan we had for telemetry crashed hard against mTLS. There is still a chance that we can make it work with CF tunnels; port forwarding shall be the last resort in case we fail. Alternatively, tunnel support will be added later, depending on the complexity of the effort.
Status update: I was able to connect TeslaLogger with my local telemetry server via the ZeroMQ protocol. Now I have to clean up my code and do some more testing.
One more update: starting today I will test my local_telemetry TL version on a daily basis. Driving/charging/Sentry work as expected. ~~To verify: renewing tokens. I'm not sure why MQTT doesn't work any more... hm~~
Next steps:
- save additional data (that is not available/used by the official logger) into the database (e.g. EnergyRemaining)
- test commands (wake up, sentry on/off and so on)
- script for easy telemetry installation
- script to send the car configuration (?)
- merge into the main repository
#1452
I've been running my own telemetry server since December to check stability. Yesterday I started using PR #1452 for gathering this data into TL. As of now, both my cars send signals to the private telemetry server and to teslalogger.de.
I'm no expert, but am I correct to assume that UpdateTeslaTokenFromRefreshTokenFromFleetAPI(string refresh_token) in webhelper.cs needs to be changed to fully use our own application registration? I.e. refresh directly with our own client_id instead of using teslalogger.de as the endpoint. An alternative would be to turn off token refresh and do it outside TL.
> I'm no expert, but am I correct to assume that UpdateTeslaTokenFromRefreshTokenFromFleetAPI(string refresh_token) in webhelper.cs needs to be changed to fully use our own application registration? I.e. refresh directly with our own client_id instead of using teslalogger.de as the endpoint. An alternative would be to turn off token refresh and do it outside TL.
Yes. What I could do is add a special method for UpdateTeslaTokenFromRefreshTokenFromFleetAPI that gets a new access token if you provide your own client id.
The code for getting the token directly is pretty similar; maybe the easiest solution is to check app.config for a "ClientId" setting and then use it in an if-statement in the method. This value could also be used to enable commands.
All commands work as long as you are on the local Tesla command proxy.
If you provide the client id in teslalogger.exe.config, TeslaLogger will take care of getting a new access token from the refresh token when it expires:
```xml
<setting name="TelemetryClientID" serializeAs="String">
    <value>xxxxxxxxxxxx-xxxx-xxxx-xxxxxxxxxxxx</value>
</setting>
```
https://github.com/bassmaster187/TeslaLogger/commit/b5da22a263e4645cc4a59959e2346aeb72498826
I couldn't get the current code working at home (maybe user error, logging stopped), but my own similar code works. Edit: works fine, I had to restart the web container. The documentation calls for this URL when refreshing tokens: https://fleet-auth.prd.vn.cloud.tesla.com/oauth2/v3/token, but I guess https://auth.tesla.com/oauth2/v3/token also works.
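For anyone wiring this up themselves, the refresh behind that setting is the standard OAuth refresh-token grant. A hedged C# sketch, with the endpoint as discussed above; error handling and persisting the rotated refresh token are left out:

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

// Minimal sketch of the refresh-token exchange with your own client id.
// Whether you use auth.tesla.com or fleet-auth.prd.vn.cloud.tesla.com,
// check the current Fleet API docs.
public static class FleetTokenRefresh
{
    public static async Task<string> RefreshAsync(string clientId, string refreshToken)
    {
        using (var http = new HttpClient())
        {
            var form = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                { "grant_type", "refresh_token" },
                { "client_id", clientId },        // your TelemetryClientID
                { "refresh_token", refreshToken } // the token stored by TeslaLogger
            });

            var response = await http.PostAsync("https://auth.tesla.com/oauth2/v3/token", form);
            response.EnsureSuccessStatusCode();

            // The JSON answer contains access_token, refresh_token and expires_in;
            // parse it with whatever JSON library TeslaLogger already uses.
            return await response.Content.ReadAsStringAsync();
        }
    }
}
```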
I used password.php to seed the database with the initial token and refresh token, and afterwards had to change fleetApi to 1 in the database manually. Maybe password_fleet.php could be used to seed the database initially, by removing the read-only restriction when an own ClientID is used.
I will still be subscribing to support your good work! For me it is only the higher frequency and the nerd factor driving this, not the price.