How to expose a Python package to C# using Python.NET vs ZeroMQ or other options

Question:

I am developing an application written in Python 3, composed of a Python library/package (which contains the core functionality) and a Python application that provides a CLI shell and handles user commands.

In addition, the functionality contained within the Python package must be exposed to existing GUI applications written in C# (using the Microsoft .NET Framework).

I’ve done a fair bit of research into how this might be done and have come up with a few potential solutions.

  1. Use Python.NET to embed a Python interpreter in the C# application, import my Python package, and call the desired methods/attributes. I haven't been able to get this working in MonoDevelop myself yet, but it seems to be a popular option, despite there not being much documentation for my use case.
  2. Compile my Python library into a DLL using CFFI's embedding mode. This option seems like it would not take a lot of work, but it's hard to see how I would maintain my interfaces/what I am exposing to someone using the DLL from C#. There also doesn't seem to be much documentation supporting this use case.
  3. Create a small Python application which imports my Python package and exposes its functionality via ZeroMQ or gRPC. This seems to be the most flexible option, with ample documentation; however, I am concerned about latency, as this tool is ultimately used for hardware control.
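To make option 3 concrete, here is a rough sketch of what such a wrapper could look like, assuming pyzmq is installed (`pip install pyzmq`); `get_status` is a stand-in for a real function from my package, and the endpoint name is just illustrative:

```python
import json

# Stand-in for the real package; in practice: from mypackage import get_status
def get_status(channel):
    return {"channel": channel, "ok": True}

# Dispatch table kept separate from the transport, so the exposed
# interface is explicit and easy to test without opening a socket.
HANDLERS = {"get_status": get_status}

def handle_request(raw):
    """Decode a JSON request, invoke the named handler, encode a JSON reply."""
    req = json.loads(raw)
    handler = HANDLERS.get(req.get("method"))
    if handler is None:
        return json.dumps({"error": "unknown method: %r" % req.get("method")})
    return json.dumps({"result": handler(**req.get("params", {}))})

def serve(endpoint="tcp://*:5555"):
    import zmq  # pyzmq; imported here so handle_request() has no transport dependency
    sock = zmq.Context().socket(zmq.REP)
    sock.bind(endpoint)
    while True:  # strict request/reply loop
        sock.send_string(handle_request(sock.recv_string()))

# serve()  # blocks; run this in the wrapper process
```

The C# side would then open a REQ socket (e.g. with NetMQ) and exchange the same JSON strings; the dispatch table doubles as the list of what is exposed.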

Note: I am not well versed in C# and will be doing the majority of the development on Linux.

I’m really looking for feedback on which option will provide the best balance between a clean interface to my library and low latency/good performance (emphasis on the latter).

Asked By: Reginald Marr


Answers:

You say your Python application has a CLI, so another potential option is to have your C# application interact with it via the command line.

You would need to expose the Python functionality via command-line arguments (which you might already be doing anyway), and your Python application would need to return its results as JSON, which is probably the easiest format to consume from C#.
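A minimal sketch of that shape (the `mytool` command, `read-temp` subcommand, and stub values are made up for illustration):

```python
import argparse
import json

# Stand-in for a function from the real package
def read_temperature(channel):
    return {"channel": channel, "temperature_c": 21.5}  # stub value

def run(argv):
    """Parse CLI arguments and return the result as a JSON string for stdout."""
    parser = argparse.ArgumentParser(prog="mytool")
    sub = parser.add_subparsers(dest="command", required=True)
    temp = sub.add_parser("read-temp")
    temp.add_argument("--channel", type=int, required=True)
    args = parser.parse_args(argv)
    if args.command == "read-temp":
        return json.dumps(read_temperature(args.channel))

# From a shell:  mytool read-temp --channel 3
# The C# side launches the process and parses stdout with any JSON library.
# print(run(sys.argv[1:]))  # in the real entry point
```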

It all depends on how complicated the interaction between your C# GUI and the Python application needs to be, though.

Answered By: Emrah Diril

The Target: latency under ~ 10 [ms] for SuT ( System-under-Test ) stability?

Thanks for the details added about a rather wide range of latency ceilings ~ 10 .. 100 [ms], plus this:

…this is actually replacing something that was previously implemented in C. The idea is that if the interface layer of the library and the CLI are implemented in Python, it would be easier for users to build on the core functionality for their use case. Some of the more demanding control loops may have to be implemented as a static C or Rust library, which we would call into from Python. In any case, the top layer is still implemented in Python, which will have to interface with C#.
( = the most important takeaway from here: the need to understand both the Costs of the wished-to-have ease of user extensions & architecture refactoring, and who pays those Costs )


Before we even start a search for the solution:

For this to be done safely & professionally, you will most probably appreciate the following, so as not to repeat the common errors of uninformed decisions. These general remarks source from heaps of first-hand experience with crafting a system with a control loop under ~ 80 [us].

Map your control system – both its internal eco-system ( resources ) & its exo-system ( interactions with the outer world ).


Next comes the Architecture :

Without due understanding of toys, no one can decide about The Right-enough Architecture.

Understanding the landscape of devices in a latency-motivated design requires us first to know each component ( to read + test + benchmark it, including its jitter/wander envelope(s) under the ( over )loaded conditions of the System-under-Test ). Not knowing this leads to nothing but a blind, facts-unsupported belief that our SuT will never ever headbang into the wall of reality – a belief that will prove itself wrong, typically at the least pleasant moment.

That is irreversibly wrong and bad practice, as all the costs accrued so far have already been burnt…

Knowing & testing is a core step before sketching the architecture – where details matter ( ref.: how much does one lose in h2d/d2h latencies [us]? Why are these principal costs so weakly reported? Does that mean those costs do not exist? No. They do exist, and your control loops will pay them each and every time… so better to know about all such hidden costs, paid in the TimeDOMAIN, well beforehand… before the Architecture gets designed and drafted. )
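Such benchmarking need not be elaborate to be useful. A stdlib-only sketch for measuring a call's latency distribution (the stub workload stands in for one full round-trip of the real system; re-run it while the SuT is under load to see the jitter/wander envelope):

```python
import statistics
import time

def measure_latency(fn, n=1000):
    """Call fn() n times; return (p50, p95, worst) latency in microseconds."""
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1e6)
    cuts = statistics.quantiles(samples, n=100)  # 99 percentile cut points
    return cuts[49], cuts[94], max(samples)

# Stub workload; replace with one request/reply round-trip of the real transport
p50, p95, worst = measure_latency(lambda: sum(range(100)))
```

The gap between the median and the worst case is exactly the part that a mean-only benchmark hides, and the part a control loop pays for.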


Do not hesitate to go Distributed ( where reasonably supported ) :

Learn from the NASA Apollo mission design
– it was deeply distributed
and
– proper engineering helped to reach the Moon
– it saved both the National Pride and the lives of those first, and so far the only, Extra-Terrestrians
( credits to Ms. Margaret HAMILTON‘s wisdom in defining her design rules, and to her changing of minds about the proper engineering of the many control-loop systems’ coordination strategies )

Either ZeroMQ ( zmq – a mature, composable, well-scaling architecture of principally distributed many-to-many behaviours, built atop a set of a few Trivial Scalable Formal Communication Pattern Archetypes ) or its younger, lighter-weight sister nanomsg, co-fathered by Martin SUSTRIK, may help one a lot to compose a smart macro-system. Individual components’ strengths ( or monopolies, having no substitute ) can be interconnected into a still-within-latency-thresholds stable, priority-aware macro-system, for which one cannot in principle ( or does not want to, for other reasons – economy of costs, time-to-market and legal constraints being the first ones at hand ) design a monolithic all-in-one system.
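Whatever transport wins, the wire format between the C# GUI and the Python side can stay trivial and transport-agnostic – the same JSON frames travel equally well over ZeroMQ, nanomsg, or anything else. A minimal framing sketch (the field names are illustrative, not any standard):

```python
import itertools
import json

_req_id = itertools.count(1)  # monotonically increasing request ids

def encode_request(method, **params):
    """Build one JSON request frame carrying a fresh id."""
    return json.dumps({"id": next(_req_id), "method": method, "params": params})

def decode_request(raw):
    """Split a request frame into (id, method, params)."""
    frame = json.loads(raw)
    return frame["id"], frame["method"], frame.get("params", {})

def encode_reply(req_id, result=None, error=None):
    """Echo the request id back so replies can be matched even out of order."""
    return json.dumps({"id": req_id, "result": result, "error": error})
```

Echoing the `id` is what lets a priority-aware broker reorder traffic while the GUI still correlates each reply with its request.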

While at first glance this may sound like complicating the problem, one may soon realise that it serves the very opposite :

  • burning no fuel ( yes, investors’ money ) on just another re-inventing of the wheel(s…)
  • using industry-proven tools most often improves reliability, sure, if using ’em right…
  • performance scaling may come as a nice side-effect, not as a panic of a too late to re-factor nightmare

not to mention the positive benefits of such tools’ independent evolution and their further extensions.

My system was in a similar dilemma – C# not being a way for me for a second ( a closed-source app dependency was too expensive, if not fatal, for our success ).

  • CLI: called a remote keyboard, this was the exact example of splitting away the first Python part, where remote could be read as a trans-atlantic keyboard
  • ML: was the least latency-controlled element in town, so fusing was needed
  • core-App: was extended, using an industry-standard DLL, into a system without letting it know that ( only the stripped-off core logic remained in place; everything else went distributed, so as to minimise all the control loops’ latencies and to handle the different levels of priorities )
  • non-blocking add-ons: were off-loaded from the core-App
  • core-App-(1+N)-Hot-Standby-Shading: was introduced into the originally monolithic C/S exo-system

Is there any need to add more in favour of going rather Distributed and independent of the original Vendor-Lock-in?

Having chosen – at the cost of sweat, tears and blood – to start with ZeroMQ in its days of the mature v2.x, I regret not a single hour of having done so, and cannot imagine meeting all of the above without it.

Answered By: user3666197