cross platform IPC

I'm looking for suggestions on possible IPC mechanisms that are:

  • Cross-platform (Win32 and Linux at least)
  • Simple to implement in C++ as well as the most common scripting languages (Perl, Ruby, Python, etc.)
  • Finally, simple to use from a programming point of view!

What are my options? I'm programming under Linux, but I'd like what I write to be portable to other OSes in the future. I've thought about using sockets, named pipes, or something like DBus.

Does anyone have any advice?





1:



In terms of speed, the best cross-platform IPC mechanism will be pipes.


That assumes, however, that you want cross-platform IPC on the same machine.


If you want to be able to talk to processes on remote machines, you'll want to look at using sockets instead.


Luckily, if you're talking about TCP at least, sockets and pipes behave in pretty much the same way.


While the APIs for setting them up and connecting them are different, they both just act like streams of data.
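To make that concrete, here is a minimal Python sketch (Python chosen only for brevity; `socket.socketpair` stands in for any connected socket, so the read side looks exactly like reading from a pipe):

```python
import socket

# A connected pair of sockets; each end is just a byte stream,
# exactly like the two ends of a pipe.
a, b = socket.socketpair()

a.sendall(b"hello over a stream")
a.close()  # closing signals end-of-stream to the reader

# Read until end-of-stream, the same loop you would use on a pipe.
chunks = []
while True:
    data = b.recv(4096)
    if not data:
        break
    chunks.append(data)
b.close()

print(b"".join(chunks))  # b'hello over a stream'
```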
The difficult part, however, is not the communication channel, but the messages you pass over it.


You really want to look at something that will perform verification and parsing for you.

I recommend looking at Google's Protocol Buffers.

You basically create a spec file that describes the object you want to pass between processes, and there is a compiler that generates code in a number of different languages for reading and writing objects that match the spec.

It's much easier (and less bug-prone) than trying to come up with a messaging protocol and parser yourself.
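For illustration, a spec file might look like this (the message and field names below are invented for the example):

```protobuf
// message.proto -- hypothetical spec for a request passed between processes
syntax = "proto3";

message JobRequest {
  string job_name = 1;
  int32  priority = 2;
  repeated string input_files = 3;
}
```

Running `protoc --cpp_out=. --python_out=. message.proto` then generates matching C++ and Python classes for reading and writing it.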


2:


For C++, check out Boost IPC.
You can probably create or find some bindings for the scripting languages as well.

Otherwise, if it's really important to be able to interface with scripting languages, your best bet is simply to use files, pipes, or sockets, or even a higher-level abstraction like HTTP.


3:


Why not D-Bus? It's a very simple message passing system that runs on almost all platforms and is designed for robustness.

It's supported by pretty much every scripting language at this point: http://freedesktop.org/wiki/Software/dbus.


4:


You might want to try YAMI; it's very simple yet functional, portable, and comes with bindings for a few languages.


5:


How about Facebook's Thrift?
Thrift is a software framework for scalable cross-language services development.

It combines a software stack with a code generation engine to build services that work efficiently and seamlessly between C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa, Smalltalk, and OCaml.


6:


I think you'll want something based on sockets.

If you want RPC rather than just IPC, I would suggest something like XML-RPC or SOAP, which runs over HTTP and can be used from any language.
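As a sketch of how little code that takes, here is a self-contained Python example using only the standard library's `xmlrpc` modules (the `add` function and the port-0 auto-assignment are just for demonstration):

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Expose a function over HTTP; any XML-RPC client (Perl, Ruby, C++, ...)
# can call it, on this machine or a remote one.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.server_address[1]
server.register_function(lambda a, b: a + b, "add")

threading.Thread(target=server.serve_forever, daemon=True).start()

# Call it from a client; here in the same process, purely for demonstration.
proxy = ServerProxy(f"http://127.0.0.1:{port}")
result = proxy.add(2, 3)
print(result)  # 5
server.shutdown()
```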


7:


If you're willing to try something a little different, there's the ICE platform from ZeroC.

It's open source, and is supported on pretty much every OS you can think of, as well as having language support for C++, C#, Java, Ruby, Python and PHP.

Finally, it's very easy to drive (the language mappings are tailored to fit naturally into each language).

It's also fast and efficient.

There's even a cut-down version for devices.


8:


It doesn't get simpler than pipes, which are supported on every OS I know of and can be accessed from pretty much every language. Check out this tutorial.
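For example, in Python an anonymous pipe is just a couple of lines (shown within one process for brevity; normally the two ends would be split across a fork or handed to a child process):

```python
import os

# An anonymous pipe: read_fd and write_fd are the two ends of a
# one-way byte stream.
read_fd, write_fd = os.pipe()

os.write(write_fd, b"ping")
os.close(write_fd)  # close the write end so the reader sees end-of-file

data = os.read(read_fd, 1024)
os.close(read_fd)
print(data)  # b'ping'
```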


9:


Distributed computing is usually complex and you are well advised to use existing libraries or frameworks instead of reinventing the wheel.

Previous posters have already enumerated a couple of these libraries and frameworks.

Depending on your needs, you can pick either a very low-level mechanism (like sockets) or a high-level framework (like CORBA).

There cannot be a generic "use this" answer.

You need to educate yourself about distributed programming; then you will find it much easier to pick the right library or framework for the job. There is a widely used C++ framework for distributed computing called ACE, and the CORBA ORB TAO (which is built upon ACE).

There are very good books about ACE (http://www.cs.wustl.edu/~schmidt/ACE/), so you might take a look.

Take care!


10:


YAMI - Yet Another Messaging Infrastructure is a lightweight messaging and networking framework.


11:


TCP sockets to localhost, FTW.
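A minimal sketch of that in Python (the one-shot echo server and the port-0 auto-assignment are just for demonstration):

```python
import socket
import threading

# A throwaway localhost TCP server that echoes one message back.
listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]

def serve_once():
    conn, _ = listener.accept()
    with conn:
        conn.sendall(conn.recv(1024))

threading.Thread(target=serve_once, daemon=True).start()

# Any process on the machine, in any language, can connect to this port.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)

print(reply)  # b'hello'
```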


12:


I would use TCP sockets, as others have suggested. But to add something different: if you want a portable, easy-to-use, multi-language, LGPL-licensed solution, I would recommend ZeroMQ.

ZeroMQ + Protocol Buffers (which others have already mentioned) is a powerful combination! ZeroMQ is amazingly fast and simple.

It's suitable for both simple and complex systems/architectures.

You'll need to try it to believe it.


13:


You might want to check out openbinder.


14:


Python has a pretty good IPC library: see https://docs.python.org/2/library/ipc.html.


15:


I suggest using the plibsys C library.

It is very simple, lightweight and cross-platform.

Released under the LGPL.

It provides:
  • named system-wide shared memory regions (System V, POSIX and Windows implementations);
  • named system-wide semaphores for access synchronization (System V, POSIX and Windows implementations);
  • named system-wide shared buffer implementation based on the shared memory and semaphore;
  • sockets (TCP, UDP, SCTP) with IPv4 and IPv6 support (UNIX and Windows implementations).
It is an easy-to-use library with quite good documentation.

As it is written in C, you can easily write bindings for it from scripting languages. If you need to pass large data sets between processes (especially if speed is essential), it is better to use shared memory to pass the data itself and sockets to notify a process that the data is ready.

You can do it as follows:
  • a process puts the data into a shared memory segment and sends a notification via a socket to another process; as a notification usually is very small the time overhead is minimal;
  • another process receives the notification and reads the data from the shared memory segment; after that it sends a notification that the data was read back to the first process so it can feed more data.
This approach can be implemented in a cross-platform fashion.
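A rough single-process sketch of that pattern in Python (using the standard library's `multiprocessing.shared_memory` instead of plibsys, with a `socketpair` standing in for the notification socket between two real processes):

```python
import socket
from multiprocessing import shared_memory

# Writer side: put the payload in shared memory, then send a tiny
# notification (here just the payload length) over a socket.
notify_tx, notify_rx = socket.socketpair()

payload = b"a large data set" * 4
shm = shared_memory.SharedMemory(create=True, size=len(payload))
shm.buf[:len(payload)] = payload
notify_tx.sendall(len(payload).to_bytes(4, "big"))

# Reader side: wait for the notification, then read straight from
# shared memory; only the 4-byte notification crossed the socket.
size = int.from_bytes(notify_rx.recv(4), "big")
reader = shared_memory.SharedMemory(name=shm.name)
data = bytes(reader.buf[:size])

reader.close()
shm.close()
shm.unlink()
notify_tx.close()
notify_rx.close()

print(data == payload)  # True
```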


16:


Google protobufs are a really bad idea when you want easy-to-maintain, debuggable code.

It's too easy for people to abuse it and use it to pollute your code.

The proto files are nice, but they're basically the same thing as a structure header file, and the code it generates is complete crap, making you wonder whether it's really a covert tool to sabotage software projects rather than automate them.

After you use it for a while, it's almost impossible to remove it from your code.

You are better off just using a header file of fixed-format structures that are easily debugged.

If you really need compression, switch to an address/data mapping for filling structures remotely; then packets are just a bundle of address/data pairs, a structure that is also very easy to automate with your own Perl scripts producing code that is human-readable and debuggable.
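For comparison, the fixed-format-structure approach this answer advocates can be sketched with Python's `struct` module (the field layout here is invented for illustration):

```python
import struct

# A fixed-format message header: the format string plays the role of the
# "header file of fixed-format structures" -- trivial to inspect in a
# debugger or hex dump.
HEADER = struct.Struct("<IHH")  # message id (u32), type (u16), payload length (u16)

payload = b"temperature=21.5"
packet = HEADER.pack(42, 7, len(payload)) + payload

# The receiving side decodes with the same fixed layout.
msg_id, msg_type, length = HEADER.unpack_from(packet)
body = packet[HEADER.size:HEADER.size + length]

print(msg_id, msg_type, body)  # 42 7 b'temperature=21.5'
```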


