Scaling and performance

The primary gating factor for the speed of your data flows is I/O: the time it takes to retrieve data from sources, or to push changes out to targets, greatly overshadows processing time inside your AssemblyLine (AL). Furthermore, negotiating connections can also be very costly, especially when security handshaking is involved.

For example, imagine a Server Mode-based AL that receives incoming protocol requests from clients, like the HTTP Server solution you built in the last exercise. Each request received causes a service AssemblyLine to be launched to carry out the actual work. If this called AL has to initialize its components on every launch, then the turnaround for each request will be at least as long as the sum of all its connection times.

There are several ways to alleviate this situation.

Another way to improve performance is to divide heavy processing tasks across multiple simultaneous AssemblyLines. Take, for instance, a migration task: instead of having one AssemblyLine work through the entire source data set, you launch multiple ALs that each handle a subset. Since initialization parameters can be passed into an AssemblyLine when it is called, a single AL can be developed and then started multiple times, with a filter parameter controlling the range of data each instance should process.
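The partitioning pattern described above can be sketched as follows. This is an illustrative Python sketch, not TDI API: `run_assembly_line`, the data set, and the filter scheme are all hypothetical stand-ins for a parameterized AL started once per partition.

```python
# Range-partitioned parallel processing: several workers each handle a slice
# of the source data instead of one worker scanning the whole set.
from concurrent.futures import ThreadPoolExecutor

DATA = list(range(100))  # stands in for the source data set

def run_assembly_line(partition, num_partitions):
    # Each "AL instance" receives a filter parameter selecting its subset:
    # here, every num_partitions-th record, starting at offset `partition`.
    subset = [rec for rec in DATA if rec % num_partitions == partition]
    return sum(subset)  # stands in for the per-record work

NUM_PARTITIONS = 4
with ThreadPoolExecutor(max_workers=NUM_PARTITIONS) as pool:
    results = list(pool.map(lambda p: run_assembly_line(p, NUM_PARTITIONS),
                            range(NUM_PARTITIONS)))

# The partitions cover the whole set exactly once, so the combined result
# matches a single full scan.
assert sum(results) == sum(DATA)
```

The key design point is that the filter must partition the data cleanly: every record belongs to exactly one instance, so no work is duplicated or skipped.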

Yet another technique is to incorporate a message bus in the solution, as described in the previous section. This approach has been used with great success by some of IBM’s largest clients.
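The decoupling a message bus provides can be illustrated with an in-process queue. This is a minimal sketch only: a real deployment would use an actual broker (for example, JMS or MQ), and the message contents and handler below are hypothetical.

```python
# Bus-style decoupling: a producer publishes work to a queue and a consumer
# processes it asynchronously, so slow downstream work never blocks intake.
import queue
import threading

bus = queue.Queue()
processed = []

def consumer():
    while True:
        msg = bus.get()
        if msg is None:                    # sentinel: shut down the consumer
            break
        processed.append(msg.upper())      # stands in for the target-side work

worker = threading.Thread(target=consumer)
worker.start()

for m in ["create user", "modify group"]:
    bus.put(m)                             # the producing side returns at once

bus.put(None)                              # signal shutdown
worker.join()
```

Because the producer only enqueues and returns, request intake stays fast even when the consuming side is temporarily slow or offline.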

In cases where processing speed is hampered by unstable network links or low-availability systems, additional ALs can be deployed as background tasks that synchronize hard-to-reach data to local, high-speed stores. Particularly when implementing real-time services, this technique helps ensure satisfactory response times for client requests.
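The local-store technique can be sketched like this. Everything here is an assumed stand-in: `slow_source` simulates an unreliable remote system, and the dictionary cache stands in for a local high-speed store refreshed by a background task.

```python
# Background synchronization to a local store: client lookups are served from
# a local cache, while a separate task refreshes that cache from a slow source.
import threading
import time

cache = {}

def slow_source():
    time.sleep(0.05)  # simulated network latency / flaky remote system
    return {"alice": "active", "bob": "disabled"}

def refresh_cache():
    # In a real deployment this would run periodically; one cycle for the demo.
    cache.update(slow_source())

def lookup(user):
    # Client requests never touch the slow source directly.
    return cache.get(user)

sync = threading.Thread(target=refresh_cache)
sync.start()
sync.join()

assert lookup("alice") == "active"
```

The trade-off is freshness: clients see data as of the last successful sync, which is usually acceptable when the alternative is blocking on an unreachable system.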


Parent topic:

Hardening your Integration Solutions