
Feedback from users

markaren opened this issue 6 years ago · 8 comments

Hi, I would very much like to hear from you, the person reading this. Why are you interested in this project? Have you tried it? What is unclear, and what can be improved?

Note that it is also possible to chat on Gitter!

markaren avatar Feb 15 '19 08:02 markaren

This is a very interesting project! What is missing, and what could perhaps be the "killer application", would be if one could wrap the client again as an FMU. The client could then be imported into any FMI-importing tool. Use cases would be distributed co-simulation, or wrapping 32-bit FMUs as 64-bit FMUs ...

chrbertsch avatar Jun 13 '19 19:06 chrbertsch

Thanks for the input @chrbertsch. It's an interesting idea. While manageable, I think wrapping the client as an FMU is quite hard. Since the FMU is static, it must be compiled against a target FMU running on the server. I guess one could modify the XML to take the IP address as an input, so the wrapper would not be tied to a particular host machine.

Or perhaps we could supply an FMU without a modelDescription.xml, to be populated by the users themselves. I should think a little about this. Regardless, before I potentially act on this, I plan to add FMI export capabilities to FMI4j (https://github.com/NTNU-IHB/FMI4j/issues/26). That module could then potentially be re-used for something like this.
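
For illustration, the endpoint idea could look roughly like this in the wrapper's modelDescription.xml. Sketch only; the variable name and default value are made up:

```xml
<!-- Sketch: expose the remote endpoint as a string parameter, so the same
     wrapper FMU can point at any server. Names here are made up. -->
<ScalarVariable name="proxy.endpoint" valueReference="0"
                causality="parameter" variability="fixed">
  <String start="127.0.0.1:9090"/>
</ScalarVariable>
```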

markaren avatar Jun 14 '19 05:06 markaren

I would be interested to know the performance overhead of calling an FMU through FMU-proxy compared to calling the FMU "natively" using FMI4cpp in a C++ app. I'm thinking of implementing a tool that could simulate connected ME/CS FMUs regardless of architecture and OS. I guess having a system of connected ME FMUs that are called via gRPC does have a noticeable performance overhead?

Thank you, Jan

jnjaeschke avatar Jun 27 '19 07:06 jnjaeschke

Yes, there is some overhead. I did some tests on a subset of the fmi-crosscheck FMUs. See below:

[image: benchmark results for the tested fmi-crosscheck FMUs, including a "no. calls" column and timings]

Edit 1: JVM server and client. The client is a normal laptop with an i7; the server is a desktop with a two-year-old i7. Edit 2: The baseline uses FMI4j, which is about 10-15% slower than, say, FMI4cpp.
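
For reference, the measurement boils down to timing repeated do_step calls. A minimal sketch of such a harness in Kotlin; the Slave interface here is a stand-in, not FMI4j's actual API:

```kotlin
// Minimal timing harness; assumes nothing about the slave's transport.
// `Slave` is a stand-in interface, not FMI4j's actual API.
interface Slave {
    fun doStep(stepSize: Double): Boolean
}

// Returns the elapsed wall-clock time in milliseconds for numSteps steps.
fun benchmark(slave: Slave, stepSize: Double, numSteps: Int): Long {
    val start = System.nanoTime()
    repeat(numSteps) { check(slave.doStep(stepSize)) }
    return (System.nanoTime() - start) / 1_000_000
}
```

Running this once against an in-process slave and once against a gRPC-backed one gives the per-call overhead directly.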

markaren avatar Jun 27 '19 08:06 markaren

Wow, that was quick, thank you! :)

Is the number in the "no. calls" column the 'absolute' number of function calls (say, do_step, get values, set values, ...) or just the number of steps (do_step calls)? If the set/get value RPC calls are included: is there a separate call for each variable/value reference, or are they bundled?

Actually, I thought (and hoped) that gRPC would do better... did you try the flatbuffers option?

I did notice that it is faster to send a few large messages than many small ones, though. I'm not sure whether that is due to the protobuf conversion or to the cost of making the actual call.

Do you have any experience with the streaming feature of gRPC? One could think of a bidirectional stream for controlling the simulation (sending do_step, getting values). That way you reduce the number of separate calls over the wire to one. On the other hand, you lose the "RPC" functionality of having dedicated methods.
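
Roughly something like this; a sketch only, the message and service names are made up and not FMU-proxy's actual schema:

```proto
syntax = "proto3";

// Sketch: one long-lived bidirectional stream instead of one RPC per step.
message StepCommand {
  double step_size = 1;                      // advance the slave by this much
  repeated uint32 read_value_references = 2; // values to return after the step
}

message StepData {
  int32 status = 1;
  repeated double values = 2; // ordered as in read_value_references
}

service Simulation {
  // Opened once per simulation run: commands go out, results come
  // back on the same connection.
  rpc Run (stream StepCommand) returns (stream StepData);
}
```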

jnjaeschke avatar Jun 27 '19 08:06 jnjaeschke

"no. calls" is just do_step, but I have been meaning to "upgrade" the RPC to include set/get/do_step in a single call, so a benchmark that also sets and gets values should then show similar performance.
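
Something along these lines; illustrative only, not the current schema:

```proto
syntax = "proto3";

// Sketch: set + do_step + get collapsed into a single round trip.
message TransactionRequest {
  string instance_id = 1;
  repeated uint32 set_value_references = 2; // written before stepping
  repeated double set_values = 3;
  double step_size = 4;
  repeated uint32 get_value_references = 5; // read after stepping
}

message TransactionResponse {
  int32 status = 1;
  repeated double values = 2;
}

service FmuService {
  rpc Transaction (TransactionRequest) returns (TransactionResponse);
}
```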

flatbuffers does not exist for gRPC Java, and it can't be enabled in gRPC C++, as that would break compatibility with the other languages.

I have tried the stream API before, but I found no use for it in this project. E.g. the list of ScalarVariables could be a stream, but I reckon using a stream is only beneficial when the amount of data to receive is huge.

markaren avatar Jun 27 '19 08:06 markaren

But were there calls to set/get during the benchmarking, or was it just do_step?

I agree, for getting the list of variables etc. it's much better to use one single large message. But a stream could be useful when there are multiple messages that are not yet available at the time of the RPC call, such as logging:

In a gRPC-based application I am working on, I am using a server stream for logging (the stream is started on instantiate and runs in a separate thread until freeInstance; whenever the logging callback is invoked, a new message is pushed onto the stream).
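
A minimal sketch of that pattern using grpc-java's StreamObserver; a plain String stands in for a generated log-message type, and the class name is made up:

```kotlin
import io.grpc.stub.StreamObserver
import java.util.concurrent.LinkedBlockingQueue
import java.util.concurrent.TimeUnit

// Sketch: decouple the FMU logging callback from the gRPC stream
// via a queue, so logging never blocks on a slow client.
class LogStreamer {
    private val queue = LinkedBlockingQueue<String>()
    @Volatile private var running = false

    // Called from the FMU logging callback: enqueue and return.
    fun onLog(message: String) = queue.put(message)

    // Started on instantiate; drains the queue into the stream from a
    // separate thread until stop() is called on freeInstance.
    fun start(responseObserver: StreamObserver<String>) {
        running = true
        Thread {
            while (running) {
                queue.poll(100, TimeUnit.MILLISECONDS)?.let(responseObserver::onNext)
            }
            responseObserver.onCompleted()
        }.start()
    }

    fun stop() { running = false }
}
```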

Thinking of that, this could also be useful for communication during the simulation: start the stream at the start of the simulation, send do_step requests / receive data while it runs, and stop it after the simulation has finished. But I guess for ModelExchange this still gets very messy...

jnjaeschke avatar Jun 27 '19 08:06 jnjaeschke

Just do_step, unfortunately.

Using a stream for logging sounds like a good idea. FMU-proxy currently offers no way of retrieving log data.

markaren avatar Jun 27 '19 08:06 markaren