Scaling and performance
The primary gating factor for the speed of your data flows is I/O: the time it takes to retrieve data from sources, or to push changes out to targets, greatly overshadows the processing time inside your AssemblyLine (AL). Furthermore, negotiating connections can be very costly, especially when security handshaking is involved.
For example, imagine a Server Mode-based AL that receives incoming protocol requests from clients, like the HTTP Server solution you built in the last exercise. Each request received causes a service AssemblyLine to be launched to carry out the actual work. If this called AL has to initialize its components, then the turnaround for each request will be at least as long as the sum of all connection times.
Three ways to alleviate this situation are:
- Have the main Server Mode AL do the actual processing instead of dispatching it, thereby avoiding the need to set up and tear down connections for each request;
- Design the service AssemblyLine(s) so that they can be invoked in Manual/Cycle Mode. Manual/Cycle Mode causes the service AL to initialize when the AssemblyLine Function Component (AL FC) initializes. Furthermore, the AL FC drives the service AL only a single cycle for each call. In this way, the service AL acts just like a component of the calling AssemblyLine and must be built to accommodate this behavior;
- Use Global Connector Pooling. This feature is described in the Security Directory Integrator v7.2 Reference Guide and allows you to define a pool of Connectors that are initialized at Server start-up and shared between AssemblyLines as needed.
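The principle behind the third option, paying the connection cost once at start-up and reusing those connections across requests, can be sketched generically. The following Python class and names are purely illustrative and are not SDI's Connector pooling API:

```python
import queue

class ConnectionPool:
    """Generic connection pool: connections are created once at start-up
    and reused across requests, avoiding per-request setup/teardown."""

    def __init__(self, create_connection, size=4):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            # Pay the (expensive) connection cost up front, once per slot.
            self._pool.put(create_connection())

    def acquire(self, timeout=None):
        # Blocks until a pooled connection is free.
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        # Return the connection for the next request to reuse.
        self._pool.put(conn)

# Illustrative "expensive" factory (stands in for an LDAP/JDBC connect).
connect_count = 0
def make_connection():
    global connect_count
    connect_count += 1
    return {"id": connect_count}

pool = ConnectionPool(make_connection, size=2)
for _ in range(10):           # ten "requests"
    conn = pool.acquire()
    # ... use the connection to service the request ...
    pool.release(conn)
# Only 2 connections were ever created, regardless of request count.
```

The point of the sketch is the cost profile: ten requests are served with only two connection handshakes, which is the same saving Global Connector Pooling provides for shared Connectors.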
Another way to improve performance is to divide heavy processing tasks across multiple simultaneous AssemblyLines. Take for instance a migration task where, instead of having one AssemblyLine work with the entire source data set, you launch multiple ALs that each handle a subset. Since initialization parameters can be passed into an AssemblyLine when it is called, a single AL can be developed and then started multiple times, each instance receiving a filter parameter that controls the range of data it should process.
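The partitioning idea can be illustrated generically in Python, with threads standing in for parallel AL instances and a simple range predicate standing in for the filter parameter (all names here are hypothetical, not SDI constructs):

```python
import threading

# Full source data set (stands in for a directory or database).
source = list(range(100))

def make_filter(lo, hi):
    """Build the 'filter parameter' an AL instance would receive at start-up."""
    return lambda record: lo <= record < hi

def service_al(filter_param, results, lock):
    """One AL instance: processes only the records its filter selects."""
    processed = [r * 2 for r in source if filter_param(r)]  # dummy work
    with lock:
        results.extend(processed)

results, lock = [], threading.Lock()
workers = 4
chunk = len(source) // workers
threads = [
    threading.Thread(
        target=service_al,
        args=(make_filter(i * chunk, (i + 1) * chunk), results, lock),
    )
    for i in range(workers)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
# All 100 records are processed, split evenly across 4 parallel instances.
```

Each instance touches only its own slice of the data, so no coordination is needed beyond choosing non-overlapping ranges up front.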
Yet another technique is to incorporate a message bus in the solution, as described in the previous section. This approach has been used with great success by some of IBM’s largest clients.
In cases where processing speed is hampered by unstable network links or systems with low availability, you can deploy additional ALs as background tasks that synchronize hard-to-reach data to local high-speed stores. Particularly when implementing real-time services, this technique can help ensure satisfactory response times for client requests.
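The pattern of a background synchronizer feeding a local store, with the real-time service reading only the local copy, can be sketched as follows. This is a generic illustration with hypothetical names, not SDI code; the `fetch_remote_snapshot` function stands in for a Connector reading a slow or unreliable source:

```python
import threading
import time

def fetch_remote_snapshot():
    """Stand-in for reading a slow/unreliable remote system."""
    time.sleep(0.05)  # simulated network latency
    return {"user1": "active", "user2": "disabled"}

local_store = {}
store_lock = threading.Lock()

def sync_task(stop_event, interval=0.1):
    """Background 'AL': keeps a local high-speed copy of the remote data."""
    while not stop_event.is_set():
        snapshot = fetch_remote_snapshot()
        with store_lock:
            local_store.update(snapshot)
        stop_event.wait(interval)

def handle_request(key):
    """Real-time service: answers from the local store, never the slow link."""
    with store_lock:
        return local_store.get(key)

stop = threading.Event()
syncer = threading.Thread(target=sync_task, args=(stop,), daemon=True)
syncer.start()
time.sleep(0.2)                 # allow the first sync cycle to complete
answer = handle_request("user1")
stop.set()
syncer.join()
```

Client response time is now bounded by a local lookup rather than by the remote link, at the cost of the data being only as fresh as the last sync cycle.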
Parent topic:
Hardening your Integration Solutions