
Help for a Dummy

Open HPhome opened this issue 8 months ago • 4 comments

Hello, I am a solar installer and computer system technician and install Sungrow PV systems with my brother-in-law. Unfortunately, my programming skills are very rudimentary. I have read many posts here, but I am not sure whether my problem can be solved with GoSungrow.

I currently have two Sungrow inverters with 19kWp in operation with an iHomeManager. I also have a Raspberry Pi 3B running Home Assistant (HA native installation). I have solved the integration via ModbusTCP (https://github.com/mkaiser/Sungrow-SHx-Inverter-Modbus-Home-Assistant/wiki/FAQ:-How-to-install). This works well.

I have the following task. In April of this year, the EEG law was changed in Germany. This means that you now receive a feed-in tariff of 14 cents, but the energy supply companies are also allowed to switch off the feed-in if the hourly average electricity prices on the EPEX Spot Market become negative. A prime example of this was 22.6.2025, when the electricity price was negative between 8 a.m. and 6 p.m. (see https://www.energymarket.solutions/day-ahead-borsenpreise/). In other words, our customers would not receive a feed-in tariff for the entire production time on that day.

It is still possible to have your system billed voluntarily in accordance with the new EEG law. In other words, instead of the permanent 8-cent continuous feed-in tariff (old EEG Act), you can opt for the higher tariff, but with disconnection. In addition, all systems that undergo any changes automatically fall under the new EEG Act.

This raises the following questions for me:

  1. Is it worthwhile for our customers to switch, or to make a change to the system?
  2. For some customers, the PV modules can be installed either facing south or facing east-west (e.g. on flat roofs). Facing south gives the maximum yield; facing east-west extends the yield into the morning and evening hours and thus achieves a higher self-consumption rate. This raises the question of what effect this will have under the new EEG law. So what I need is a crawler that reads the feed-in data for the customer systems from the iSolarCloud and stores it in a file in CSV format. I also need the same crawler to read the data from the EPEX spot market (which is not the issue here).

My questions are:

  1. Can I use GoSungrow for this?
  2. Does it run on my Raspberry Pi with the native HA installation?
  3. Which registers do I have to query to get the feed-in data depending on the feed-in time?

I hope my questions/requests are not too specific and are still legitimate, even though they go beyond the pure troubleshooting that other users raise here. Otherwise, please ignore or delete my message. But I would be more than happy to receive an answer.

Greetings to all who are spending their time on GoSungrow, HP

HPhome avatar Jun 30 '25 08:06 HPhome

Maybe. It depends on what you mean by "crawler". If you mean some kind of automatic discovery (like a web crawler), I don't think that's possible. But, if you mean "I have a list of usernames and passwords for different installations - can I iterate over those with GoSungrow?" then the answer may well be "yes".

Another possibility. I know my installer can view my system to make sure it's working. I assume you can do the same with all the systems you've installed so, when I talk about a "list of usernames and passwords", it may also be possible to iterate over a list of PSIDs with just your installer credentials.

My advice is to leave Home Assistant out of it. Instead, take a look at gosungrow-fetch which explains how I use GoSungrow on a daily basis. A script calls GoSungrow to login to iSolarCloud, then pulls down the metrics I'm interested in, which get exported to CSV.

If I want to see what my inverter is doing now then I use the GoSungrow iPad app. But for everything else, I use cron to run the script described in gosungrow-fetch every day at around 3am to download "yesterday's" data, at 5-minute resolution (ie 288 rows in the CSV file). I load the CSV into a database.

Follow the steps in the gosungrow-fetch README up to where you are told to run:

$ GoSungrow show ps list

If you get a whole bunch of PSIDs for all the sites you've installed then that will prove the theory. You can extract the unique PSIDs (either by hand or by writing some code), and then iterate those PSIDs. That would approximate "crawling".
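To make "iterate those PSIDs" concrete, here is a minimal sketch. It assumes you have already extracted a list of PSIDs by hand from the `GoSungrow show ps list` output; the PSID values and the metric ID are placeholders, and the per-day command shape is based on the `show point data` example discussed later in this thread, so verify it against your own installation before enabling the actual call.

```python
# Sketch only: iterate a hand-maintained list of PSIDs and shell out to
# GoSungrow once per site. The PSIDs and the metric ID below are
# hypothetical placeholders -- substitute values from your own
# "GoSungrow show ps list" output.
import subprocess

psids = ["1234567", "7654321"]  # hypothetical PSIDs

def fetch_for_psid(psid: str, day: str) -> list[str]:
    """Build the GoSungrow command for one site and one day.

    The command shape is an assumption based on the gosungrow-fetch
    example quoted in this thread."""
    start = f"{day}0000"  # midnight, YYYYMMDDhhmm
    end = f"{day}2359"    # 23:59 on the same day
    return ["GoSungrow", "show", "point", "data", start, end, "5",
            f"{psid}.p83072"]  # hypothetical metric ID

for psid in psids:
    cmd = fetch_for_psid(psid, "20250630")
    # subprocess.run(cmd, check=True)  # uncomment once the command is verified
    print(" ".join(cmd))
```

Extracting the PSID list itself stays a manual (or lightly scripted) step; the loop is the "crawling" part.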

Does that help?


People in Australia keep talking about using Smart meters to turn off solar production entirely.

I would not care at all about losing my pathetic AUD0.05/kWh feed-in tariff. I also would not care about my export being curtailed. But I draw the line at having all the production turned off just so my house creates demand.

The only way I'll consider that is if I'm not charged for any of that demand (ie turning off the solar system simultaneously turns off the meter; turning the solar back on re-enables the meter). Any notion of turning off production while still billing for consumption is just plain old-fashioned thievery.

Paraphraser avatar Jun 30 '25 10:06 Paraphraser

Hi MickMake, you are fast! I definitely hadn't expected an answer in that short time. Thanks for that, as well as for the time you invested in writing the software and then letting others participate. In this day and age, something like this is anything but a matter of course and deserves everyone's respect!

The only way I'll consider that is if I'm not charged for any of that demand (ie turning off the solar system simultaneously turns off the meter; turning the solar back on re-enables the meter). Any notion of turning off production while still billing for consumption is just plain old-fashioned thievery.

Yes, I'm with you. In Germany, however, only the feed-in is regulated. Self-consumption is still covered by the electricity from the solar system. Anything else would also be theft in my eyes.

The amount of the feed-in tariff is also just a supplement. My customers and I see it that way too. The real value is in self-consumption. But why should you do without the tariff? It's also around €50/month for a 10kWp system, which I would rather donate to a social cause than give to an energy supply company. That's why our customers are already asking themselves whether this could be optimized by changing the feed-in tariff. For me, it's probably more a case of liking to tinker: I like to see what's possible and always push my limits. In addition, anyone who can afford a solar system is not really dependent on it.

I had used the word crawler. What I meant was that I wanted to read the historical data on solar feed-in for each system from the iSolarCloud. I have access to my customers' solar systems and data, and they agree to me using them. After all, they also benefit from my efforts. Ultimately, I want to iterate over the days for each solar system and then save the feed-in data with a timestamp to a file. I must have used the word crawler incorrectly (shame on me). If that works, then I'm already helped. I have not yet looked at what gosungrow-fetch can do, but it sounds like exactly what I'm looking for.

If gosungrow-fetch is not limited to the last day and 5-minute resolution, then that should be exactly what I want. Great tip. If I have another question, I'll get back to you.

Thanks a lot, HP

HPhome avatar Jun 30 '25 13:06 HPhome

Hi @HPhome,

I'm a Sungrow partner and here's a possible answer to your queries.

Prerequisite

  1. With partner API access, we can see end customers' Sungrow plant configuration and historical data.
  2. The end customer needs to add us as an administrator or viewer of their Sungrow plant.

Once the above has been done, we can use the partner API to iterate through the list of PSIDs and pull each plant's historical data.

In Australia, we can also curtail solar PV generation by setting the export limit to zero. I also have two hybrid inverters, which are much more involved to configure and set up. You will find that having two hybrids on the same plant causes issues with reading your load and curtailment data.

Let me know if you have more queries. Happy to help.

rcmlee99uts avatar Jun 30 '25 22:06 rcmlee99uts

@HPhome - I think you have misunderstood - I am "Paraphraser" not "MickMake".

MickMake is the owner of this repo and the author of GoSungrow.

I'm the owner of the gosungrow-fetch repo, and the author of the compiling/updating GoSungrow gist.

There's some history behind my involvement. I used to have a SolaX inverter. SolaX cloud data is in the form of daily CSVs with 5-minute-interval time-series. All my analysis tools were geared around the basic fact of 5 minutes and 288 rows per day.

In early 2023 I replaced the SolaX with a Sungrow. Rather than reinvent the wheel, I decided to try to continue the time-series. I found GoSungrow and figured out how to make it do what I wanted. gosungrow-fetch contains the script I wrote which calls GoSungrow to do the work.

Jumping slightly ahead and answering one of your questions, I am fairly sure that GoSungrow can fetch data for any period that you like. The parameters I use are:

$ GoSungrow show point data 202506300000 202506302359 5 «metricID» ...

In words, from 00:00 on 2025-06-30, through 23:59 on 2025-06-30, at 5 minute resolution, fetch the metrics listed on the end of the command.

I have never actually tried more than a day so I can't tell you whether there are any inherent limits. Neither have I tried different resolutions so I can't say how fine/coarse you can go. You'll have to experiment.

One of the reasons I chose 23:59 as the end point rather than 00:00 on the next day is because some people reported "spikes" across midnight. Using 23:59 avoided that, albeit at the price of the last observation on each day being only 4 minutes. But the sun ain't shining then so it isn't an issue. I mention this because, if you try to pull down data for a week or a month then you might see spikes across midnight transitions.
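If you want to pull down a longer period one day at a time (rather than gamble on a single multi-day request producing midnight spikes), a small helper can generate the per-day start/end strings in the `YYYYMMDDhhmm` format used above, ending each day at 23:59. This is just a date-arithmetic sketch; the GoSungrow command it feeds is the one quoted earlier, and the metric ID remains whatever applies to your plant.

```python
# Sketch: generate per-day (start, end) timestamp pairs in the
# YYYYMMDDhhmm format that "GoSungrow show point data" expects,
# ending each day at 23:59 to sidestep the midnight "spikes".
from datetime import date, timedelta

def day_ranges(first: date, last: date):
    """Yield (start, end) strings for every day from first to last inclusive."""
    d = first
    while d <= last:
        stamp = d.strftime("%Y%m%d")
        yield f"{stamp}0000", f"{stamp}2359"
        d += timedelta(days=1)

for start, end in day_ranges(date(2025, 6, 1), date(2025, 6, 3)):
    # «metricID» stands for whichever metric(s) you are fetching.
    print(f"GoSungrow show point data {start} {end} 5 «metricID»")
```

One call per day also keeps each CSV at the familiar 288 rows, which makes downstream sanity checks easy.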

Returning to the history, around the end of November 2023 everything turned to custard. See issue #101. It seems that Sungrow decided to change the API keys and encrypt API communications. At the time it looked like every single implementation of GoSungrow had been broken by those changes (but issue #134 now makes me wonder whether there might be more to it).

This was all occurring right around the time that MickMake seemed to disappear in the sense of no longer responding to issues or updating GoSungrow. That is still the situation.

The community came together. In particular, @triamazikamno did all the work to come up with a patch for the encryption problem. I can't now remember who figured out and contributed the new API keys but the information will be buried somewhere in the issues on this repo.

Prior to all of this, I had written the compiling GoSungrow gist in Feb 2023. This was shortly after my SolaX was replaced with the Sungrow. I can't remember why I needed to compile the program, rather than just download the compiled assets, but there must have been a reason.

I'm inherently lazy. I hate having to reinvent wheels so I write things down as I go. I often turn my notes into Gists. I take that extra step as a way of "paying it forward". MickMake has written GoSungrow; I'm using it; I need to compile it; the how-to isn't written down anywhere that I can find it; I figure it out; maybe other people will be helped by my notes; here's a gist...

When everything turned to custard in late 2023, the gist seemed like a good place to consolidate what we (the community) were learning as we went along, so I started dividing the gist into "parts" to deal with the various use-cases. Nobody asked me to do that. It just seemed like a good idea.

From what I can tell, most people use GoSungrow in its Docker-container form as a Home Assistant add-on. However, I got some requests to explain how I was running GoSungrow from the command line, which is why I put up the gosungrow-fetch repo in September last year.

I really hope that MickMake will resurface and resume regular maintenance and enhancement of GoSungrow, starting with the triamazikamno encryption mods. He is also much better placed than I am to field questions.

One of my first questions would be, "is there a way to get GoSungrow to export CSV?" because my sungrowToCSV program is a hack.

In the meantime, the only reason I respond to questions is because:

  1. I have a "watch" on this repo so I get the emails when issues are opened;
  2. I know that MickMake isn't likely to respond (I'll shut up as soon as he resurfaces - he has knowledge while I merely have guesswork);
  3. I like to think that "activity" indicates "being used" which, hopefully, makes it more likely that MickMake will resurface; and
  4. GoSungrow is a very useful tool, which I use, so I like to "pay it forward".

That's why our customers are already asking themselves whether this could be optimized by changing the feed-in tariff.

One of the things I've noticed (at least down thisaway) is that governments are very quick to offer incentives to motivate some desired behaviour, and then equally as quick to remove those incentives, claiming they've "done their job" or "can no longer be afforded" or both. In short, my (jaundiced) view is that jumping to the new EEG just to gain a few cents would likely turn out to be a fool's errand, particularly if you wound up being curtailed more than you predicted.


The only reason I queried "crawler" was to make sure I understood your goal. I don't think "crawler" is a technical term with a well-defined meaning. It's more of a general concept.


If you use the gosungrow-fetch approach (ie GoSungrow's show point data command) then each row already has a timestamp denoting the start of the interval (5 minutes in my case). You won't need to add one.

What I don't know is whether "instantaneous" metrics (eg voltage) are the observed value at the start or end of the interval, or an average across the interval. I also don't know how "accumulated" metrics are aligned.

I suspect that everything is "as at the start of the interval" but I have not been able to prove that. Someone with more knowledge may be able to help.

Also, given that "time" is fairly flexible when you're talking about the time an inverter knows, data-transmission delays, time of ingestion into iSolarCloud, actual frequency at which inverters log data, and so on, the "start of an interval" is probably a bit rubbery. It's more likely to be either forcibly aligned to the nearest minute (or 30 seconds or whatever) on ingestion, or selected as the "nearest" observation to the start of the requested reporting interval (5 minutes for me). This might be something you need to resolve if you are trying to match disparate data-sets with precision.
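If you do need to join the iSolarCloud series against another data set (EPEX prices, say), one pragmatic workaround for that "rubbery" start-of-interval is to snap every timestamp to the nearest interval boundary before matching. This is a generic rounding sketch, not anything GoSungrow or iSolarCloud does for you:

```python
# Sketch: snap a timestamp to the nearest 5-minute boundary so two
# independently-timestamped series can be joined on a common grid.
from datetime import datetime, timedelta

def snap(ts: datetime, minutes: int = 5) -> datetime:
    """Round ts to the nearest multiple of `minutes`."""
    step = minutes * 60
    seconds = (ts - ts.min).total_seconds()  # seconds since datetime.min
    rounded = round(seconds / step) * step
    return ts.min + timedelta(seconds=rounded)

print(snap(datetime(2025, 6, 30, 8, 2, 41)))  # -> 2025-06-30 08:05:00
```

Whether nearest-boundary rounding is the right policy (versus flooring to the interval start) depends on how your other data set defines its intervals, so check that before committing.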

Another thing I don't know is how time is calculated for timestamps in reports or what assumptions are made. When I use a date-time string like 202506300000 then I'm expressing an intention in my local time. The timestamps in what comes back also seem to be in my local time. But if it really was local time then on days when daylight-saving transitions occur, I'd expect to see time jumps in the form of additional or fewer rows.

I always get 288 rows with no jumps. Rain, hail or shine.

The only "weirdness" occurs when I go onto daylight saving (ie from UTC+10 to UTC+11). In both 2023 and 2024, there are 12 rows (ie exactly one hour) timestamped 02:25:00 through 03:20:00, where most of the fields are replaced with "--" which I take to be some representation of NULL. The only two non-null values are EnergyTotal (.p2) and FromGrid (.p83102) which repeat the values from the 02:20:00 row. You can see that as "padding" for the hour that disappears but the actual jump (legal definition) is at 02:00 so how that becomes 02:25 is a mystery.

My inverter was installed in early 2023 so I only have the 2023 and 2024 "on" DST transitions to work with but that pattern is pretty specific.

Then, if you see the null rows in an "on" jump as padding, how is an "off" jump being dealt with? I can't spot any inconsistencies so I'm assuming some kind of aggregation is occurring to smooth-out the extra hour.

The DST "on" jumps are not the only examples of nulls. I can find a total of 134 rows across 866 daily files (249409 total rows) which is ~0.05% incidence. The other nulls do occur in runs but there's no obvious pattern (time of day or month, number of null rows in sequence, all seem random). Only the two DST "on" jumps to date are consistent.

I could explain the 288 rows around DST jumps by assuming that both GoSungrow (ie my machine) and the server (augateway) agree on wallclock time so, given my data request is always being presented the "next day", the server is just interpreting the request as relative to "now", rather than absolute time. But whether that's true? 🤷

Plus, taking a much higher-level view, I don't know where date calculations are being done. I can't think of any reason why GoSungrow would pick 02:25 as the start of an "on" jump so I'm assuming iSolarCloud but I don't know. I haven't drilled deeply enough to know whether GoSungrow places fetch requests "as is" (ie in local time) or converts to UTC. I also can't think of any reason why GoSungrow would occasionally stick nulls into the data so I'm assuming iSolarCloud is the source of those.

I've decided to ignore all this on the basis that the sun ain't shining at the relevant times so it isn't going to affect my calculations. However, even though 0.05% is pretty low, you're going to be fetching a lot of data, so nulls are bound to appear in your results, so you'll have to deal with them.
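As a starting point for "dealing with them", here is a small sketch that converts the "--" placeholders to proper nulls while loading a CSV. The column names are hypothetical (use whatever headers your files actually have), and the sample mimics the pattern described above where some columns carry values while others are "--".

```python
# Sketch: read a GoSungrow-derived CSV and convert the "--" placeholders
# (seen in the DST and random null rows described above) to None.
# Column names below are hypothetical examples only.
import csv
import io

sample = """timestamp,EnergyTotal,FromGrid,PVPower
2024-10-06 02:20:00,1234.5,10.2,0.0
2024-10-06 02:25:00,1234.5,10.2,--
"""

def load(fh):
    """Yield each CSV row as a dict, with "--" mapped to None."""
    for row in csv.DictReader(fh):
        yield {k: (None if v == "--" else v) for k, v in row.items()}

rows = list(load(io.StringIO(sample)))
print(rows[1]["PVPower"])  # -> None
```

From there, whether you drop null rows, interpolate, or carry the last value forward is a policy decision that depends on what you are calculating.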

I hope this helps rather than hinders your efforts. Sounds like an interesting problem you are trying to solve.

Paraphraser avatar Jul 01 '25 03:07 Paraphraser