
Saturday, September 06, 2008

Interfacing with external programs

Say you have a program. A popular program (at least in some circles) that other people want to write add-ons for. Your program does some job, and in some processing steps you want to be able to execute external code and then proceed according to the result returned by that code – similar to the way CGI code is executed when an HTTP server processes an HTTP request.

The other day I was trying to enumerate all possible ways to do that. I found the following solutions:

  • Running an external program. Data can be passed via the program’s standard input and read from its standard output – just like when a CGI program is launched from an HTTP server (see the sketch after this list). Simple, stable (it is easy to protect against malfunctioning add-ons), but quite slow.
  • Third-party DLL. Relatively simple, very fast, but can seriously destabilize the whole product. Complicated to upgrade the DLL (the main application must be shut down to upgrade an add-on).
  • [D]COM[+]. Not my bag of Swedish … sorry, wrong movie. Definitely not a path I’d like to pursue. Unstable. Leads to problems that nobody seems to be able to troubleshoot.
  • Windows messages. Messy. Plus, the main program runs as a service while the add-on might not.
  • TCP. Implement the add-on as a text-processing TCP/IP service (another HTTP server, if we continue the CGI analogy). Interesting idea, but not very simple to implement. Fast when both are running on the same machine. Flexible – each side can be shut down and upgraded independently, and processing can be distributed over several computers. Complicated to configure when multiple add-ons are installed (each must be configured to use a different port). Firewall and antivirus software may cause problems.
  • Drop folder. The main app drops a file into some folder and waits for the result file. Clumsy, but possibly faster than the external-program solution (the add-on can be running all the time), simple to implement and very stable.
  • Message queues (as in MSMQ). Interesting, but possibly too complicated for most customers to install and manage.
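
For illustration, here is a minimal sketch of the first option: the main program launches the add-on and talks to it over redirected standard input and output. The add-on name (addon.exe), the one-shot request/response model and the error handling are simplified placeholders, not a finished implementation.

```delphi
// Minimal sketch: run an add-on and exchange data over stdin/stdout
// (the CGI-style model). 'addon.exe' and the request text are placeholders.
program RunAddonViaStdIO;

{$APPTYPE CONSOLE}

uses
  Windows, SysUtils;

function RunAddon(const ExeName: string; const Request: AnsiString): AnsiString;
var
  sa: TSecurityAttributes;
  inRead, inWrite, outRead, outWrite: THandle;
  si: TStartupInfo;
  pi: TProcessInformation;
  written, bytesRead: DWORD;
  buf: array[0..4095] of AnsiChar;
  chunk: AnsiString;
  cmd: string;
begin
  Result := '';
  sa.nLength := SizeOf(sa);
  sa.lpSecurityDescriptor := nil;
  sa.bInheritHandle := True;
  // One pipe for the add-on's stdin, one for its stdout.
  Win32Check(CreatePipe(inRead, inWrite, @sa, 0));
  Win32Check(CreatePipe(outRead, outWrite, @sa, 0));
  // The host's ends of the pipes must not be inherited by the child.
  SetHandleInformation(inWrite, HANDLE_FLAG_INHERIT, 0);
  SetHandleInformation(outRead, HANDLE_FLAG_INHERIT, 0);
  FillChar(si, SizeOf(si), 0);
  si.cb := SizeOf(si);
  si.dwFlags := STARTF_USESTDHANDLES;
  si.hStdInput := inRead;
  si.hStdOutput := outWrite;
  si.hStdError := outWrite;
  cmd := ExeName;
  UniqueString(cmd); // CreateProcess may modify the command line buffer
  Win32Check(CreateProcess(nil, PChar(cmd), nil, nil, True, 0, nil, nil, si, pi));
  // The child owns its ends now; close them in the host.
  CloseHandle(inRead);
  CloseHandle(outWrite);
  // Send the request and signal end-of-input by closing the write end.
  WriteFile(inWrite, Request[1], Length(Request), written, nil);
  CloseHandle(inWrite);
  // Read the add-on's answer until it closes its stdout.
  while ReadFile(outRead, buf, SizeOf(buf), bytesRead, nil) and (bytesRead > 0) do begin
    SetString(chunk, PAnsiChar(@buf), bytesRead);
    Result := Result + chunk;
  end;
  WaitForSingleObject(pi.hProcess, INFINITE);
  CloseHandle(outRead);
  CloseHandle(pi.hProcess);
  CloseHandle(pi.hThread);
end;

begin
  Writeln(string(RunAddon('addon.exe', 'data to process'#13#10)));
end.
```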

And now to the main point of my ramblings: what did I miss? Are there more possibilities? If you have any idea how to approach my problem from a different direction, leave me a comment. Don’t mention SOAP, BTW; it is implicitly included in the “add-on as a TCP server” solution.

And thanks!

19 comments:

  1. There's not that much left, I suppose...

    If you really want to remain in control, you'd need to use some kind of "safe" scripting language and enforce that add-ons be written entirely using that language. The main app would then load the script and interpret it in a controlled manner. But that may turn out to be a bit of an overkill, depending on your app type.

    Anyway, I'd probably go with DLLs in practice. If there was a legitimate concern that a DLL might destabilize the main app, I could always sandbox it into a separate process and then use any appropriate communication channel to marshal data between it and the main app.

    As for upgrade problems with DLLs, I see two possible solutions:

    1. The main app provides a way to unload the specific DLL. If sandboxed, the child process is simply terminated instead.

    2. The main app (or the sandbox process) never loads the DLL directly, but rather its copy. Then, when the original DLL gets replaced via simple copy-and-overwrite, the main app can detect that, unload the old DLL (or terminate the sandbox process), make a copy of the new one and finally reload it.
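
    A rough sketch of the second idea, assuming the add-on exports a single Process function; the export name, the calling convention and the shadow-copy naming scheme are invented for the example:

    ```delphi
    // Hypothetical shadow-copy loader: the original DLL is never locked,
    // so it can be replaced with a simple copy-and-overwrite upgrade.
    uses
      Windows, SysUtils;

    type
      TProcessFunc = function(Input: PAnsiChar): PAnsiChar; stdcall;

    var
      AddonLib: HMODULE = 0;
      AddonProcess: TProcessFunc = nil;
      LoadedStamp: Integer = -1;

    procedure ReloadAddonIfChanged(const OriginalDll: string);
    var
      stamp: Integer;
      shadowDll: string;
    begin
      stamp := FileAge(OriginalDll);
      if stamp = -1 then
        Exit; // DLL not found - keep whatever is loaded
      if (AddonLib <> 0) and (stamp = LoadedStamp) then
        Exit; // unchanged - keep the currently loaded copy
      if AddonLib <> 0 then begin
        FreeLibrary(AddonLib); // unload the old copy
        AddonLib := 0;
        AddonProcess := nil;
      end;
      // Load a shadow copy so the original file stays writable.
      shadowDll := ChangeFileExt(OriginalDll, '.loaded.dll');
      Win32Check(CopyFile(PChar(OriginalDll), PChar(shadowDll), False));
      AddonLib := LoadLibrary(PChar(shadowDll));
      if AddonLib = 0 then
        RaiseLastOSError;
      @AddonProcess := GetProcAddress(AddonLib, 'Process');
      LoadedStamp := stamp;
    end;
    ```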

  2. I don't want to stay completely in control, so I don't need a scripting language. Still, I do agree it should be included in my list.

    Sandboxing DLLs is definitely a good way to implement extensibility if you want to deploy a stable product. I wanted to keep my writing short, so I skipped that.

    Making a copy of the 3rd party DLL is definitely something that should be considered. I never thought of that before so thanks for the idea!

  3. COM: Not sure I agree with your COM observations. OLE/ODBC/ADO are examples of fairly flexible and reasonably robust application integration mechanisms. Yes, there are pitfalls – but COM done right works very well.

    Publish/Subscribe: There are protocols other than MQ that are simpler and, not least, cheaper.

    Database Publish/Subscribe: Use a local database with triggers as the mediator. Allows asynchronous, parallel offloading to other processes or even machines, as well as data integrity control and sanitization, with very low risk.

  4. Hmm, the TCP approach is definitely interesting. I was thinking of doing something similar:
    Have a computationally intensive core program (and it should be cross-platform – that means FreePascal) with a Shoes UI (http://shoooes.net/).
    Using an Indy client/server should help in the development.

    Also, why is running an external program so slow?
    (If pipe redirection is a problem, you can always just create an input file, have the external program read it, and then create an output file.)

    Have you thought about using
    mailslots (http://msdn.microsoft.com/en-us/library/aa365576(VS.85).aspx)
    named pipes (http://msdn.microsoft.com/en-us/library/aa365590(VS.85).aspx), or
    memory-mapped files (http://msdn.microsoft.com/en-us/library/aa366556(VS.85).aspx) instead of TCP for interprocess communication?
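
    For example, a bare-bones named-pipe version of the add-on interface; the pipe name, buffer size and the "processing" are placeholders, and a real add-on would need threads or overlapped I/O plus proper error handling:

    ```delphi
    // Add-on side: serve requests over a named pipe (message mode).
    uses
      Windows, SysUtils;

    const
      PipeName = '\\.\pipe\MyAddon'; // placeholder name
      BufSize  = 4096;

    procedure ServeRequests;
    var
      pipe: THandle;
      buf: array[0..BufSize - 1] of AnsiChar;
      bytesRead, bytesWritten: DWORD;
      request, response: AnsiString;
    begin
      pipe := CreateNamedPipe(PipeName, PIPE_ACCESS_DUPLEX,
        PIPE_TYPE_MESSAGE or PIPE_READMODE_MESSAGE or PIPE_WAIT,
        1, BufSize, BufSize, 0, nil);
      if pipe = INVALID_HANDLE_VALUE then
        RaiseLastOSError;
      while True do begin
        // Wait for the main application to connect and send one request.
        if not ConnectNamedPipe(pipe, nil) and
           (GetLastError <> ERROR_PIPE_CONNECTED) then
          Break;
        if ReadFile(pipe, buf, BufSize, bytesRead, nil) then begin
          SetString(request, PAnsiChar(@buf), bytesRead);
          response := request; // echo back - stand-in for real processing
          WriteFile(pipe, response[1], Length(response), bytesWritten, nil);
        end;
        FlushFileBuffers(pipe);
        DisconnectNamedPipe(pipe);
      end;
      CloseHandle(pipe);
    end;

    // Main application side: one call sends a request and receives the answer.
    function AskAddon(const Request: AnsiString): AnsiString;
    var
      answer: array[0..BufSize - 1] of AnsiChar;
      answerLen: DWORD;
    begin
      Win32Check(CallNamedPipe(PipeName, PAnsiChar(Request), Length(Request),
        @answer, SizeOf(answer), answerLen, NMPWAIT_WAIT_FOREVER));
      SetString(Result, PAnsiChar(@answer), answerLen);
    end;
    ```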

  5. The interface will vary greatly depending on the project, who the end consumer is, and what type of additional processing might be necessary.

    For one of my commercial applications, I implemented "plugins" directly into the process by creating a simple job file and invoking a batch file to perform that job. When something needed to be added to the process, it was as simple as editing the batch file to adjust for the necessary work. This allowed me, for instance, to add virus checking to my inbound email processing job.
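
    Something along these lines; the file names, the batch file name and the job format are all made up for the example:

    ```delphi
    // Hypothetical job-file plugin hook: write the job, let a batch file do
    // the add-on work, then pick up the result file.
    uses
      Windows, SysUtils, Classes;

    function RunJob(const JobData: string): string;
    var
      list: TStringList;
      si: TStartupInfo;
      pi: TProcessInformation;
      cmd: string;
    begin
      list := TStringList.Create;
      try
        // 1. Describe the job in a simple text file.
        list.Text := JobData;
        list.SaveToFile('job.txt');
        // 2. Let the (easily editable) batch file perform the job.
        cmd := 'cmd.exe /c process_job.bat job.txt result.txt';
        UniqueString(cmd);
        FillChar(si, SizeOf(si), 0);
        si.cb := SizeOf(si);
        Win32Check(CreateProcess(nil, PChar(cmd), nil, nil, False, 0, nil, nil, si, pi));
        WaitForSingleObject(pi.hProcess, INFINITE);
        CloseHandle(pi.hProcess);
        CloseHandle(pi.hThread);
        // 3. Read back whatever the batch file produced.
        list.LoadFromFile('result.txt');
        Result := list.Text;
      finally
        list.Free;
      end;
    end;
    ```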

  6. Anonymous (03:11)

    You must be using the wrong TCP tools. I find adding TCP interfaces to be pretty darn simple myself.

  7. @Lars:

    RE: COM: Let's agree to disagree, OK?

    RE: Database Publish/Subscribe: A good idea, thanks!

  8. @ajasja:

    RE: "Also, why is running an external program so slow?"

    It is not. Running an external program a million times a day is slow. There may be situations where the external filter must be called many times a minute.

    RE: Mailslots, named pipes, and memory mapped files.

    Definitely worth putting on the list. I have never used mailslots myself, so I can be excused for forgetting them, but named pipes and MMF are used a lot in my own interprocess work. There is a problem with MMF – it requires that both sides follow some protocol that is not inherently enforced by the transfer medium. Because of that, there is a higher risk of implementation problems.
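
    To illustrate the point: one common mitigation (names and sizes below are invented for the example) is a small versioned header at the start of the shared block, so both sides can at least verify that they agree on the layout:

    ```delphi
    uses
      Windows, SysUtils;

    const
      MappingName = 'Local\MyAddonSharedBlock'; // placeholder name
      MagicValue  = $41444F4E;                  // ASCII codes for 'A','D','O','N'

    type
      // Shared by both executables; the header lets each side check that the
      // other one follows the same (versioned) layout.
      TSharedHeader = packed record
        Magic   : Cardinal; // must equal MagicValue
        Version : Cardinal; // bump on every incompatible change
        DataSize: Cardinal; // number of valid bytes in Data
      end;
      PSharedBlock = ^TSharedBlock;
      TSharedBlock = packed record
        Header: TSharedHeader;
        Data  : array[0..4095] of Byte;
      end;

    function OpenSharedBlock(out Mapping: THandle): PSharedBlock;
    begin
      Mapping := CreateFileMapping(INVALID_HANDLE_VALUE, nil, PAGE_READWRITE,
        0, SizeOf(TSharedBlock), MappingName);
      if Mapping = 0 then
        RaiseLastOSError;
      Result := MapViewOfFile(Mapping, FILE_MAP_ALL_ACCESS, 0, 0, 0);
      if Result = nil then
        RaiseLastOSError;
    end;

    function BlockIsCompatible(Block: PSharedBlock): Boolean;
    begin
      Result := (Block^.Header.Magic = MagicValue) and (Block^.Header.Version = 1);
    end;
    ```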

  9. @Xepol:

    ICS, of course.

  10. Anonymous (08:52)

    Just for completeness' sake: "Dynamic Data Exchange" (DDE).
    See http://msdn.microsoft.com/en-us/library/ms648711.aspx

  11. Anonymous (13:07)

    Another possibility, when the messages in either direction are not that large, could be shared memory between two processes; quite simple to implement, works across window station (winsta) boundaries, and there are some ready-made libraries on the web.

  12. @Ritsaert Hornstra:
    Yup, that's basically what DDE does as well (but it seems to be poorly implemented).

  13. Anonymous (06:20)

    @gabr – yep, MMF sprang to mind also. The protocol issue can be handled by encapsulating the protocol in an API.


    A variation on the drop folder theme is SMTP/POP3 – send requests in the form of emails to an address known to be "listening" for requests, and collect results via a mailbox set up to receive them.
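
    Roughly, with Indy; the server name, addresses, credentials and the single-message handling are placeholders:

    ```delphi
    // Hypothetical mail-based round trip: send the request as an email,
    // then poll a mailbox for the reply.
    uses
      SysUtils, IdSMTP, IdPOP3, IdMessage;

    procedure SendRequest(const RequestText: string);
    var
      smtp: TIdSMTP;
      msg: TIdMessage;
    begin
      smtp := TIdSMTP.Create(nil);
      msg := TIdMessage.Create(nil);
      try
        msg.From.Address := 'mainapp@example.com';
        msg.Recipients.EMailAddresses := 'addon@example.com'; // the "listening" address
        msg.Subject := 'job request';
        msg.Body.Text := RequestText;
        smtp.Host := 'mail.example.com';
        smtp.Connect;
        try
          smtp.Send(msg);
        finally
          smtp.Disconnect;
        end;
      finally
        msg.Free;
        smtp.Free;
      end;
    end;

    function FetchFirstResult: string;
    var
      pop: TIdPOP3;
      msg: TIdMessage;
    begin
      Result := '';
      pop := TIdPOP3.Create(nil);
      msg := TIdMessage.Create(nil);
      try
        pop.Host := 'mail.example.com';
        pop.Username := 'mainapp';
        pop.Password := 'secret';
        pop.Connect;
        try
          if pop.CheckMessages > 0 then begin
            pop.Retrieve(1, msg); // first waiting result, for simplicity
            Result := msg.Body.Text;
            pop.Delete(1);
          end;
        finally
          pop.Disconnect;
        end;
      finally
        msg.Free;
        pop.Free;
      end;
    end;
    ```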

  14. DDE is a little finicky, but there is an old Delphi component library (Django) that works pretty well with DDE. DDE is officially deprecated, though. We could have tens of thousands of content links updating in Excel in "real time" using that kit.

    Gabr, if you could indulge my curiosity – I want to understand why you won't consider COM. I mean, COM was, after all, created with this specific purpose in mind – right? Which problems in particular put COM off your list?

  15. @Jolyon – Sure thing, but one must remember that the external add-on need not be written in Delphi.

    @Lars - Just a long history of reading about COM-related problems...

  16. Anonymous (08:20)

    Yes, COM does have a long history of problems. It does have a learning curve. Other techniques, like memory mapped files, may seem simpler to get running. But on the other hand, COM does give you a lot of advantages. You get an interface definition that makes cross-language calls easy. You do not have to use DCOM, but if you decide in the future it would be nice to distribute the apps over more machines, it is there for you. You also don't need to use COM+, but if you decide later that it would be convenient to run the COM server app as a service, it is there for you.
    And believe me, COM-based communication is not inherently unreliable; I have been using DCOM and COM+ services on large-scale systems (100+ machines, 1000+ services). Yes, you will also encounter troubles, but that will happen with any of the methods you mention. At least with COM, there is a good chance that others will have had these problems earlier and that solutions are posted on the web.

  17. Anonymous (11:59)

    In the last large system I worked on, I used HTTP as the transport between two systems (using the Indy components). It took a little while to write the library code, but it has been incredibly successful. We also used RESTful-style URLs and architecture, but that may be overkill for a small project.


    HTTP is a well-designed and well-implemented protocol, and it pretty much covers everything you need:
    - Security
    - Encryption
    - Compression
    - Caching

    HTTP libraries are written in every language, so add-ons are easy, and it's nicely decoupled. For data transfer, I use the HTTP Content-Type header to allow different representations (binary ClientDataSet packets, or XML) for flexibility.

    Being forced to think statelessly avoids loads of bugs and makes everything nice and resilient, and debugging with Fiddler is worth its weight in gold!
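
    As a minimal sketch of the add-on side (Indy 10 assumed; the /process URL, the port and the parameter name are invented for the example):

    ```delphi
    // Hypothetical HTTP add-on: the main application GETs/POSTs the data,
    // the add-on answers with the processed result.
    uses
      SysUtils, IdContext, IdCustomHTTPServer, IdHTTPServer;

    type
      TAddonServer = class
      private
        FServer: TIdHTTPServer;
        procedure HandleCommandGet(AContext: TIdContext;
          ARequestInfo: TIdHTTPRequestInfo; AResponseInfo: TIdHTTPResponseInfo);
      public
        constructor Create(APort: Integer);
        destructor Destroy; override;
      end;

    constructor TAddonServer.Create(APort: Integer);
    begin
      inherited Create;
      FServer := TIdHTTPServer.Create(nil);
      FServer.DefaultPort := APort;
      FServer.OnCommandGet := HandleCommandGet; // in Indy 10 this also fires for POST
      FServer.Active := True;
    end;

    destructor TAddonServer.Destroy;
    begin
      FServer.Free;
      inherited;
    end;

    procedure TAddonServer.HandleCommandGet(AContext: TIdContext;
      ARequestInfo: TIdHTTPRequestInfo; AResponseInfo: TIdHTTPResponseInfo);
    var
      request: string;
    begin
      if ARequestInfo.Document = '/process' then begin
        request := ARequestInfo.Params.Values['data']; // illustrative parameter name
        AResponseInfo.ContentType := 'text/plain';
        AResponseInfo.ContentText := UpperCase(request); // stand-in for real work
      end
      else
        AResponseInfo.ResponseNo := 404;
    end;
    ```

    The main application can then talk to it with a plain TIdHTTP Get or Post call, and set the Content-Type header to pick the representation it wants.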

  18. Internal scripting engine (Lua, ActiveScript, ScriptPascal).

  19. sx2008 (18:15)

    COM is not so bad if you do it correctly. I think it's the best technique for plugins.
    But COM has a very long learning curve. To build a flexible plugin system you need at least one year of experience with COM.
    Distributed COM sounds promising, but you will always run into some serious trouble with Windows permissions.
