I'm looking for suggestions on possible IPC mechanisms that are:
- Cross-platform (Win32 and Linux at least)
- Simple to implement in C++ as well as the most common scripting languages (Perl, Ruby, Python, etc.)
- Finally, simple to use from a programming point of view!
What are my options? I'm programming under Linux, but I'd like what I write to be portable to other OSes in the future. I've thought about using sockets, named pipes, or something like DBus.
Does anyone have any advice?
That assumes, however, that you want cross-platform IPC on the same machine.
If you want to be able to talk to processes on remote machines, you'll want to look at using sockets instead.
Luckily, if you're talking about TCP at least, sockets and pipes behave in pretty much the same way.
While the APIs for setting them up and connecting them are different, they both just act like streams of data.
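For example, in Python both a connected socket and a pipe end up exposing the same "write bytes on one end, read bytes on the other" model (a minimal sketch):

```python
import os
import socket

# A connected pair of sockets -- each end behaves like a
# bidirectional byte stream.
a, b = socket.socketpair()
a.sendall(b"hello over a socket")
print(b.recv(1024))  # b'hello over a socket'

# An anonymous pipe -- the same stream-of-bytes model,
# only unidirectional.
r, w = os.pipe()
os.write(w, b"hello over a pipe")
print(os.read(r, 1024))  # b'hello over a pipe'

a.close(); b.close()
os.close(r); os.close(w)
```

Only the setup differs; once the channel exists, the reading and writing code is interchangeable.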
The difficult part, however, is not the communication channel, but the messages you pass over it.
You really want to look at something that will perform verification and parsing for you.
I recommend looking at Google's Protocol Buffers.
You basically create a spec file that describes the object you want to pass between processes, and there is a compiler that generates code in a number of different languages for reading and writing objects that match the spec.
It's much easier (and less bug-prone) than trying to come up with a messaging protocol and parser yourself.
You can probably create or find some bindings for the scripting languages as well.
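As a sketch of what such a spec file looks like (the message and field names here are invented for illustration):

```protobuf
// sensor.proto -- run through protoc to generate readers/writers.
syntax = "proto3";

message SensorReading {
  string sensor_id = 1;  // which sensor produced the value
  double value     = 2;  // the measurement itself
  int64  timestamp = 3;  // seconds since the epoch
}
```

Running `protoc --cpp_out=. --python_out=. sensor.proto` then gives you matching serialization code on both sides of the channel, so the C++ process and the Python process agree on the wire format by construction.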
Otherwise, if it's really important to be able to interface with scripting languages, your best bet is simply to use files, pipes, or sockets, or even a higher-level abstraction like HTTP.
D-Bus is supported by pretty much every scripting language at this point: http://freedesktop.org/wiki/Software/dbus
Thrift is a software framework for scalable cross-language services development.
It combines a software stack with a code generation engine to build services that work efficiently and seamlessly between C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa, Smalltalk, and OCaml.
If you want RPC rather than just IPC, I would suggest something like XML-RPC/SOAP, which runs over HTTP and can be used from any language.
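A minimal sketch of that approach using only Python's standard library (the method name and the use of a loopback address are arbitrary choices for the example):

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Server side: expose a function over HTTP on an ephemeral port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: call it like a local function. Any language with an
# XML-RPC library (Perl, Ruby, C++, ...) could make the same call.
proxy = ServerProxy(f"http://127.0.0.1:{port}")
print(proxy.add(2, 3))  # 5

server.shutdown()
```

Because the transport is plain HTTP, the two processes can just as easily be on different machines.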
It's open source, and is supported on pretty much every OS you can think of, as well as having language support for C++, C#, Java, Ruby, Python and PHP.
Finally, it's very easy to drive (the language mappings are tailored to fit naturally into each language).
It's also fast and efficient.
There's even a cut-down version for devices.
Previous posters have already enumerated a couple of these libraries and frameworks.
Depending on your needs you can pick either a very low level (like sockets) or high level framework (like CORBA).
There cannot be a generic "use this" answer.
You need to educate yourself about distributed programming, and then you will find it much easier to pick the right library or framework for the job. There is a widely used C++ framework for distributed computing called ACE, and the CORBA ORB TAO (which is built upon ACE).
There are very good books about ACE, so you might take a look: http://www.cs.wustl.edu/~schmidt/ACE/
ZeroMQ + Protocol Buffers (which others have already mentioned as well) is a powerful combination! ZeroMQ is amazingly fast and simple.
Suitable for simple and complex systems/architectures.
You'll need to try it to believe it.
It is very simple, lightweight and cross-platform.
Released under the LGPL, it provides:
- named system-wide shared memory regions (System V, POSIX and Windows implementations);
- named system-wide semaphores for access synchronization (System V, POSIX and Windows implementations);
- named system-wide shared buffer implementation based on the shared memory and semaphore;
- sockets (TCP, UDP, SCTP) with IPv4 and IPv6 support (UNIX and Windows implementations).
As it is written in C, you can easily make bindings for scripting languages. If you need to pass large data sets between processes (especially if speed is essential), it is better to use shared memory to pass the data itself and sockets to notify a process that the data is ready.
You can do it as follows:
- a process puts the data into a shared memory segment and sends a notification via a socket to another process; as the notification is usually very small, the time overhead is minimal;
- the other process receives the notification and reads the data from the shared memory segment; after that, it sends back a notification that the data was read, so the first process can feed more data.
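The two steps above can be sketched with Python's standard library; the variable names are illustrative, and a real C implementation would use the System V/POSIX shared-memory calls directly:

```python
import socket
from multiprocessing import shared_memory

# Writer: put a large payload into a named shared memory segment,
# then send only a tiny notification over the socket.
payload = b"large data set " * 1000
shm = shared_memory.SharedMemory(create=True, size=len(payload))
shm.buf[:len(payload)] = payload

writer_sock, reader_sock = socket.socketpair()
writer_sock.sendall(shm.name.encode() + b"\n")  # just the segment name

# Reader: the notification is small; the bulk data never
# touches the socket at all.
name = reader_sock.recv(256).strip().decode()
view = shared_memory.SharedMemory(name=name)
data = bytes(view.buf[:len(payload)])
assert data == payload

# Acknowledge, so the writer knows it can reuse the segment.
reader_sock.sendall(b"done\n")
print(writer_sock.recv(16))  # b'done\n'

view.close()
shm.close()
shm.unlink()
writer_sock.close(); reader_sock.close()
```

The socket carries a few bytes per transfer regardless of how large the payload grows, which is exactly why this pattern pays off for big data sets.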
It's too easy for people to abuse it and use it to pollute your code.
The proto files are nice, but it's basically the same thing as a structure header file, and the code it generates is complete crap, making you wonder if it's really a covert tool to sabotage software projects instead of automating them.
After you use it for a while, it's almost impossible to remove it from your code.
You are better off just using a header file of fixed-format structures that are easily debugged.
If you really need compression, switch to an address/data mapping for filling in structures remotely...
Then packets are just a bundle of address/data pairs...
It's also a structure that is very easy to automate with your own Perl scripts that produce code that is human-readable and debuggable.
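A sketch of that address/data-pair idea (the field widths and byte order here are invented for illustration, not taken from any particular protocol):

```python
import struct

# Each packet entry is a fixed-format (address, data) pair:
# a 32-bit address and a 32-bit value, little-endian.
PAIR = struct.Struct("<II")

def pack_pairs(pairs):
    """Bundle address/data pairs into one flat packet."""
    return b"".join(PAIR.pack(addr, value) for addr, value in pairs)

def unpack_pairs(packet):
    """Recover the pairs; trivially inspectable with a hex dump."""
    return [PAIR.unpack_from(packet, off)
            for off in range(0, len(packet), PAIR.size)]

packet = pack_pairs([(0x1000, 42), (0x1004, 7)])
print(unpack_pairs(packet))  # [(4096, 42), (4100, 7)]
```

Because every entry has the same fixed layout, a small script in any language can generate both the packer and the matching C header.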