
Security Directory Integrator (SDI) v7.2


Reference Guide

  1. Connectors
  2. Connector availability
  3. Active Directory Change Detection Connector
  4. AssemblyLine Connector
  5. Axis Easy Web Service Server Connector
  6. Axis2 Web Service Server Connector
  7. CCMDB Connector
  8. Command line Connector
  9. Database Connector
  10. Deployed Assets Connector
  11. Direct TCP/URL scripting
  12. Domino/Lotus Notes Connectors
  13. TIM DSMLv2 Connector
  14. DSMLv2 SOAP Connector
  15. DSMLv2 SOAP Server Connector
  16. EIF Connector
  17. File Connector
  18. File Management Connector
  19. HTTP Client Connector
  20. HTTP Server Connector
  21. IBM Security Access Manager Connector
  22. ITIM Agent Connector
  23. IBM MQ Connector
  24. JDBC Connector
  25. JMS Password Store Connector
  26. JMX Connector
  27. JNDI Connector
  28. LDAP Connector
  29. LDAP Group Members Connector
  30. LDAP Server Connector
  31. Log Connector
  32. Lotus Notes Connector
  33. Mailbox Connector
  34. Memory Queue Connector (MemQueue)
  35. Memory Stream Connector
  36. Properties Connector
  37. RAC Connector
  38. RDBMS Change Detection Connector
  39. SCIM Connector
  40. Script Connector
  41. Server Notifications Connector
  42. Simple Tpae IF Connector
  43. SNMP Connector
  44. SNMP Server Connector
  45. Sun Directory Change Detection Connector
  46. System Queue Connector
  47. System Store Connector
  48. TADDM Change Detection Connector
  49. TADDM Connector
  50. TCP Connector
  51. TCP Server Connector
  52. Timer Connector
  53. Tpae IF Change Detection Connector
  54. Tpae IF Connector
  55. URL Connector
  56. Web Service Receiver Server Connector
  57. Windows Users and Groups Connector
  58. Java Class Function Component
  59. Parser Function Component
  60. Parsers


Introduction

To work with the examples that complement this manual, download the necessary files from the installation package.

To access these example files, go to the examples directory in the installation directories (usually referred to as TDI_install_dir/).

Examples provided with Security Directory Integrator are provided as-is, and carry no official IBM support.


Connectors

Connector availability

Presented here is a list of all Connector Interfaces included with SDI. The Connector Interface is the part of the Connector that implements the actual logic to communicate with the Data Source it handles. You can also create your own Connector Interfaces if needed; the AssemblyLine wraps them so they are available as AssemblyLine Connectors. Before Connectors can be meaningfully deployed in an AssemblyLine, they need to be configured. A number of Connectors have different parameter sets depending on the Mode they are set to; this implies that, for example, a parameter which is significant in Iterator mode may not be necessary, and therefore not present, in the list of parameters in AddOnly mode. All of the following AssemblyLine Connectors have access to the methods described in...

...in addition to the methods and properties of the Connector Interface. For documentation of the methods, see the JavaDocs (from the CE, choose Help -> Welcome -> JavaDocs).

Connector Interfaces

For a list of Supported Modes, see Legend for the Supported Mode columns. For each Connector Interface listed, see the documentation outlined in this chapter.

Legend for the Supported Mode columns

Connector re-use

When a Connector is instantiated, usually it allocates a certain amount of resources to communicate with a particular system (connection objects, session objects, result set, and so forth). When multiple Connectors of the same type are connected to the same system, often it is reasonable to share the underlying resources. This means that a single connection to the given system will be re-used by multiple Connectors.

SDI allows Connector re-use to happen within an AssemblyLine. For a given AssemblyLine you have the option to re-use an already configured Connector from the same AssemblyLine.

With regards to the SDI Server, when re-using a Connector, a single physical Connector object is instantiated and a number of logical Connectors share it.

With regards to configuration, Connector re-use is a master-slave relation: the re-used ("master") Connector has a full connection and parser configuration, and all re-using Connectors hold references to the master Connector. All re-using Connectors share the connection and parser settings of the Connector they re-use. Although connection and parser settings are fixed for re-using Connectors, certain other features are configured separately (any parameter that is not configured separately is inherited from the master Connector):

Generally, a Connector can be re-used in the same mode (except for Iterator and Server) without any problem. This means that, for example, we can safely re-use a Connector in Lookup mode as many times as we wish.

A problem can potentially arise when a Connector is re-used in different modes. The shared physical Connector object is initialized and terminated only once. So the Connector's initialization and termination procedure must be common for all supported modes.

Following is a list of SDI Connectors which can be re-used in different modes:

Any Connector not in this list cannot be re-used in the same AssemblyLine, either because it makes no sense or because the Connector's internal logic does not allow it.

To configure a Connector for re-use in an AssemblyLine, refer to the Security Directory Integrator v7.2 Users Guide. In the configured AssemblyLine, re-used Connectors show up with their names prepended with '@'.


Active Directory Change Detection Connector

The Active Directory Change Detection Connector (hereafter referred to as ADCD Connector) is a specialized instance of the LDAP Connector. It reports changed Active Directory objects so that other repositories can be synchronized with Active Directory.

The LDAP protocol is used for retrieving changed objects.

When run, the Connector reports the object changes necessary to synchronize other repositories with Active Directory, regardless of whether these changes occurred while the Connector was offline or are happening while the Connector is online and operating.

This connector also supports Delta Tagging, at the Entry level only.

The ADCD Connector operates in Iterator mode.

Tracking changes in Active Directory

Active Directory does not provide a Changelog as IBM Security Directory Server and some other LDAP Servers do.

The ADCD Connector uses the uSNChanged Active Directory attribute to detect changed objects.

Each Active Directory object has an uSNChanged attribute that corresponds to a directory-global USN (Update Sequence Number) object. Whenever an Active Directory object is created, modified or deleted, the global sequence object value is increased, and the new value is assigned to the object's uSNChanged attribute.

On each AssemblyLine iteration (each call of the Connector's getNextEntry() method), the Connector delivers a single object that has changed in Active Directory. It delivers changed Active Directory objects as they are, with all their current attributes, and also reports the type of object change: whether the object was updated (added or modified) or deleted. The Connector does not report which attributes have changed in the object, nor the type of attribute change.

Synchronization state is kept by the Connector and saved in the User Property Store: after each reported changed object, the Connector saves the USN number necessary to continue from the correct place in case of interruption and restart. When started, the ADCD Connector reads this USN value, stored from the most recent ADCD Connector session, from SDI's User Property Store.
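As a rough sketch of this mechanism (all names here are hypothetical; a real Connector issues LDAP searches against Active Directory and persists the USN in the User Property Store rather than scanning an in-memory list):

```javascript
// Minimal simulation of uSNChanged-based change detection.
// A real ADCD Connector issues an LDAP search of the form
// (uSNChanged>=X); here a plain array stands in for the directory.
function changesSince(directory, savedUsn) {
  return directory
    .filter(obj => obj.uSNChanged >= savedUsn)
    .sort((a, b) => a.uSNChanged - b.uSNChanged); // chronological order
}

// Stand-in for saving state to the User Property Store: the USN to
// resume from is one past the highest uSNChanged reported so far.
function nextUsn(reported) {
  return reported.length ? reported[reported.length - 1].uSNChanged + 1 : null;
}
```

After an interruption, calling changesSince() again with the saved USN picks up the remaining changes in the same chronological order.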

MSDN provides information about tracking changes in Active Directory, including polling for changes by using the uSNChanged attribute.

Deleted objects in Active Directory

When an object is deleted from the directory, Active Directory performs the following steps:

Tombstones (deleted objects) are garbage-collected some time after the deletion takes place. Two settings on the "cn=Directory Service,cn=Windows NT,cn=Services,cn=Configuration,dc=ForestRootDomain" object determine when and which tombstones are deleted:

The above specifics imply the following requirements for synchronization processes that have to handle deleted objects:

Moved objects in Active Directory

When an object is moved from one location of the Active Directory tree to another, its distinguishedName attribute changes. When this object change is detected based on the new increased value of the object's uSNChanged attribute, this change looks like any other modify operation - there is no information about the object's old distinguished name.

A synchronization process that must handle moved objects properly should use the objectGUID attribute, which does not change when objects are moved. A search by objectGUID in the repository being synchronized will locate the proper object; the old and new distinguished names can then be compared to check whether the object has been moved.
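A minimal sketch of this move-detection approach, assuming the synchronized repository can be looked up by objectGUID (the repository shape and function names here are hypothetical):

```javascript
// Detect whether a reported change is a move: look up the object in the
// synchronized repository by objectGUID and compare distinguished names.
// The repository is modeled as a Map keyed by objectGUID.
function detectMove(repository, changedEntry) {
  const existing = repository.get(changedEntry.objectGUID);
  if (!existing) return { known: false, moved: false };
  return {
    known: true,
    moved: existing.distinguishedName !== changedEntry.distinguishedName,
    oldDN: existing.distinguishedName,
  };
}
```

When moved is true, the synchronized repository can rename or relocate its copy of the object instead of treating the change as an unrelated modify.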

Use objectGUID as the object identifier

When tracking changes in Active Directory, the objectGUID attribute should be used as the object identifier, not the LDAP distinguished name, because the distinguished name is lost when an object is deleted or moved in Active Directory. The objectGUID attribute is always preserved; it never changes and can be used to identify an object.

When the ADCD Connector reports that an entry is changed, a search by objectGUID value should be performed in the other repository to locate the object that has to be modified or deleted. This means that the objectGUID attribute should be synchronized and stored into the other repository.

Change detection

Change detection mechanism

The ADCD Connector detects and reports changed objects following the chronology of the uSNChanged attribute values: changed objects with lower uSNChanged values will be reported before changed objects with higher uSNChanged values.

The Connector executes an LDAP query of type (uSNChanged>=X), where X is the USN number that represents the current synchronization state. Sort and Page LDAP v3 controls are used with the search operation, providing the chronology of changes and the ability to process large result sets. The Show Deleted LDAP v3 request control (OID "1.2.840.113556.1.4.417") is used to specify that search results should include deleted objects as well.

The ADCD Connector consecutively reports all changed objects regardless of interruptions, regardless of when it is started and stopped and whether the changes happened while the Connector was online or offline. Synchronization state is kept by the Connector and saved in the User Property Store - after each reported changed object the Connector saves the USN number necessary to continue from the correct place in case of interruption and restart.

The Connector will signal end of data and stop (according to the timeout value) when there are no more changes to report.

When there are no more changed Active Directory objects to retrieve, the Active Directory Connector cycles, waiting for a new object change in Active Directory. The Sleep Interval parameter specifies the number of seconds between two successive polls when the Connector waits for new changes. The Connector loops until a new Active Directory object is retrieved or the timeout (specified by the Timeout parameter) expires. If the timeout expires, the Active Directory Connector returns a null Entry, indicating there are no more Entries to return. If a new Active Directory object is retrieved, it is processed as previously described, and the new Entry is returned by the Active Directory Connector.
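The polling behaviour described above can be sketched as follows (a simplified model: sleepInterval and timeout mirror the Sleep Interval and Timeout parameters, source is a hypothetical change feed, and the loop counts elapsed time instead of actually sleeping):

```javascript
// Sketch of the poll loop: check for a new change every sleepInterval
// seconds until either a change appears or timeout seconds elapse,
// in which case null is returned to signal "no more Entries".
function pollForChange(source, sleepInterval, timeout) {
  let waited = 0;
  while (waited < timeout) {
    const entry = source();
    if (entry !== undefined) return entry; // new change: return it
    waited += sleepInterval;               // a real Connector sleeps here
  }
  return null; // timeout expired: end of data
}
```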

Older versions of the Connector reported both added and modified entries as updated. The current Connector differentiates between add and modify and reports each operation separately (for details, see Offline and Paged results cases).

The ADCD Connector delivers changed Active Directory objects as they are, with all their current attributes. It does not determine which object attributes have changed, nor how many times an object has been modified. All intermediate changes to an object are irrevocably lost. Each object reported by the Active Directory Connector represents the cumulative effect of all changes performed to that object. The Active Directory Connector, however, recognizes the type of object change that has to be performed on the replicated data source and reports whether the object must be updated or deleted in the replicated data source.

Note: The Connector can retrieve only objects and attributes that you have permission to read. It does not retrieve an object or attribute that you do not have permission to read, even if it exists in Active Directory; in such a case the ADCD Connector acts as if the object or attribute does not exist in Active Directory.

Offline and Paged results cases

When the Connector is offline, or when Paged results is enabled and an initial search request has been made but the page containing the modified entry has not yet been retrieved, multiple changes made to that entry are merged. In other words, the Connector receives only one entry containing the results of all operations that have been applied to it.

In these cases, when an entry is added and then deleted in Active Directory, the Connector will report a "delete" operation for an entry that was never added to the repository being synchronized with Active Directory. This is not a serious restriction, because SDI v7.2's Delete Connector mode first checks whether the entry to be deleted exists; if it does not, the "On No Match" hook is called, and this is where you can place code to handle or ignore such unnecessary deletes.

Another scenario is when an entry is added and then modified. In that case the Connector will report an "add" operation for that entry, and the entry will contain all the changes made to it after the add.

In all other cases the returned entry will contain all the changes and will be tagged with the last operation made to it.
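These merging rules can be summarized in a small function over the chronological list of operations applied to one entry (an illustrative sketch, not Connector code):

```javascript
// Compute the single change the Connector reports when several
// operations on one entry are merged (offline or un-retrieved page
// case). ops is the chronological list: "add", "modify", "delete".
function mergedChangeType(ops) {
  const last = ops[ops.length - 1];
  if (last === 'delete') return 'delete'; // add+delete still reports "delete"
  if (ops[0] === 'add') return 'add';     // add + later modifies => "add"
  return last;                            // otherwise the last operation
}
```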

Using the Active Directory Change Detection Connector

Each entry delivered by the Connector contains the changeType attribute, whose value is "add" (for newly created objects), "modify" (for modified objects) or "delete" (for deleted Active Directory objects). Each entry also contains two attributes that represent the objectGUID value:

If you need to detect and handle moved or deleted objects, you must use the objectGUID value as the object identifier instead of the LDAP distinguished name. The LDAP distinguished name changes when an object is moved or deleted, while the objectGUID attribute always remains unchanged. Store the objects' objectGUID attribute in the replicated data source and search by this attribute to locate objects.

Note: Deleted objects in Active Directory live for a configurable period of time (60 days by default), after which they are completely removed. To avoid missing deletions, perform incremental synchronizations more frequently.

The ADCD Connector can be interrupted at any time during the synchronization process. It saves the state of the synchronization process in the User Property Store of SDI (after each Entry retrieval), and the next time the Connector is started, it continues the synchronization from the point at which it was interrupted.

Authentication of the Connector to the directory

Different versions of the LDAP protocol support different authentication methods. LDAP v2 supports three: Anonymous, Simple and Kerberos v4. LDAP v3 supports Anonymous, Simple and SASL authentication. The AD Change Detection Connector supports Anonymous, Simple and SASL authentication.

Error flows

Configuration

The Connector needs the following parameters:

Note: Changing the Timeout or Sleep Interval value automatically adjusts its peer to a valid value (for example, when the timeout is greater than the sleep interval, the value that was not edited is adjusted to be in line with the other). The adjustment is made when the field editor loses focus.


AssemblyLine Connector

AssemblyLines are often called as compound functions from other AssemblyLines. Setting up a call to perform a specific task and mapping in and out parameters can be tedious in a scripting environment. To ease the integration of AssemblyLines into a workflow, the AssemblyLine Connector provides a standard and familiar way of doing this; it wraps much of the scripting involved in executing an AssemblyLine. The AssemblyLine Connector uses the AssemblyLine manual cycle mode for inline execution; internally it uses the AssemblyLine Function Component to do its work.

The AssemblyLine Connector supports Iterator mode only, except when calling another AssemblyLine which supports AssemblyLine Operations. See "AssemblyLine Operations" in Security Directory Integrator v7.2 Users Guide and Appendix E. Creating new components using Adapters for more information.

The server-server capability made possible by using this Connector addresses security concerns when managers want SDI developers to access connected systems, but not to access the operational parameters of the Connector - or to impact its availability by deploying the new function on the same physical server.

Configuration

The Connector needs the following parameters:

Using the Connector

The AssemblyLine Connector iterates on the result set from the target AssemblyLine which is always run synchronously in manual cycle mode by the AssemblyLine Connector. The target AssemblyLine can be local to the thread or on a remote server by use of the Server API.

Note that most of the functionality is implemented in the AssemblyLine Function Component, so the AssemblyLine Connector simply redirects any errors that occur.

Attribute Mapping (Schema) and modes

The AssemblyLineConnector dynamically reports its available connector modes (for example, Iterator) based on the available operations in the target AssemblyLine. The target AssemblyLine can define any operation name which will appear in the connector's mode drop-down list. Any operation/mode that is not a standard mode name will implicitly use CallReply mode internally (that is, the UI changes to the CallReply equivalent layout and the queryReply method is invoked on the AssemblyLineConnector). To further aid in development of custom connectors, the AssemblyLine connector gives the operation names listed below special significance. The operation names are the same as the function names for the ScriptConnector and also the same names as the ConnectorInterface method names.
Table 1. Operation Names
Computed Mode   Required Operations
Iterator        getNextEntry, selectEntries
AddOnly         putEntry
Lookup          findEntry
Update          findEntry, modEntry, putEntry
Delete          findEntry, deleteEntry
CallReply       queryReply
N/A             initialize, terminate

The last two, initialize and terminate, are optional operations, but they are invoked if present.

When one or more of these are present, the AssemblyLine Connector will compute supported modes based on the operations and the target AssemblyLine is said to be in adapter mode. The difference between normal mode and adapter mode is how the AssemblyLine connector calls the target AssemblyLine's operations.

If a mode ends up in an internal connector interface method with no corresponding operation defined, an exception is thrown. The exception to this rule is CallReply mode, which is the default for all non-standard modes.

As an example, if the target AssemblyLine implements findEntry as an operation, the UI will show Lookup as an available mode. When the AssemblyLine Connector is called by the AssemblyLine, it forwards the "native" methods (for example, findEntry) directly to the target AssemblyLine by invoking the findEntry operation. Another example is Delete, where the AssemblyLine Connector invokes findEntry followed by deleteEntry to perform a delete operation. In normal mode (for example, when the target AssemblyLine defines the DeleteUser operation), the AssemblyLine Connector would simply invoke the DeleteUser operation, leaving the entire delete operation up to the target AssemblyLine. Although the target AssemblyLine can define standard modes as operations, this is not recommended, as some operations will simply not function correctly because they require more than one operation to complete the mode operation (like Delete and Update, which call findEntry before deleting or updating).
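The mode computation implied by Table 1 can be sketched as follows (an illustrative model of the rule, not the actual Connector implementation):

```javascript
// Operations required for each computed mode, per Table 1.
const REQUIRED = {
  Iterator: ['getNextEntry', 'selectEntries'],
  AddOnly: ['putEntry'],
  Lookup: ['findEntry'],
  Update: ['findEntry', 'modEntry', 'putEntry'],
  Delete: ['findEntry', 'deleteEntry'],
  CallReply: ['queryReply'],
};

// A mode is supported when the adapter-mode AssemblyLine defines
// every operation that the mode requires.
function supportedModes(definedOps) {
  const ops = new Set(definedOps);
  return Object.keys(REQUIRED).filter(mode =>
    REQUIRED[mode].every(op => ops.has(op)));
}
```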

When the AssemblyLine Connector invokes an operation in an adapter mode AssemblyLine it will pass the result from its output attribute map to the target AssemblyLine's work entry. In cases where a link criteria is required, the AssemblyLine Connector adds the com.ibm.di.server.SearchCriteria instance object to the op-entry of the target AssemblyLine as search. The target AssemblyLine can retrieve this object by calling task.getOpEntry().getObject("search").

The result from the target AssemblyLine is always communicated back in the work entry. This entry becomes the conn entry of the AssemblyLine Connector which is then subjected to its input attribute map. One exception to this rule is when the resulting work entry contains an attribute named "conn". When this attribute is present, the AssemblyLine connector will disregard all attributes in the returned work entry and use the conn attribute as the result from the operation. The conn entry can contain any number of Entry objects. This is typically used when findEntry returns either null or more than one entry. If the conn attribute has no values it is the equivalent of returning null, which will cause the on-no-match hook to be called. When the conn attribute has more than one value, the AssemblyLine Connector will add all entries to its multiple-found array so that the AssemblyLine triggers the on multiple found hook and makes the duplicate entries available using the getFindEntryCount() and getNextFindEntry() methods. If the conn attribute contains objects that are not of type com.ibm.di.entry.Entry an error will be thrown.
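The conn-attribute rules above can be sketched as follows (an illustration only; plain objects stand in for com.ibm.di.entry.Entry, and the returned strings name the resulting hook behaviour):

```javascript
// Interpret the value list of the "conn" attribute returned in the
// target AssemblyLine's work entry.
function interpretConn(connValues) {
  if (connValues.length === 0) return 'on-no-match';   // like returning null
  if (connValues.length === 1) return 'single-match';  // normal result
  return 'on-multiple-found';                          // duplicates available via
                                                       // getFindEntryCount()/getNextFindEntry()
}
```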

When the target AssemblyLine's operations are invoked through the native connector interface methods, for example, getnext, selectentries, and so on, the AssemblyLine connector provides a work entry with predefined attribute names. These attributes are:

Conversely, when the target AssemblyLine returns data for these methods or operations, it must also return an entry with these attribute names.

If the AssemblyLine Connector's target AssemblyLine has no operations defined, the AssemblyLine Connector reports Iterator as its only mode. The AssemblyLine Connector will not invoke operations on the target AssemblyLine, but simply invokes executeCycle(work) on the target AssemblyLine to get the next input entry.

Schema Discovery

The schema for an AssemblyLine Connector can be retrieved after the Connector has been configured with the correct target AssemblyLine, and the mode to use has been chosen. In order to facilitate discovering the schema the following considerations apply:

  1. The target AL is checked to see if it has an operation that exactly matches the Mode chosen when you configured the AL Connector. If this name is, for example, "myCleverOps", then an operation called "myCleverOps" is looked up; if found, its schema is returned to the AL Connector.
  2. If the name is a derived name (like Iterator, AddOnly, and so on), the same is done, even though the actual operations called are the ones listed in Table 1. If an operation with the derived name does not exist, the schema from the actual operation called is used, if present.

If neither of the two preceding steps yields a match (for example, because you use an unknown operation), the schema is retrieved from an operation called "querySchema". This operation is never called; it is only used to define a schema that can be retrieved in the AL Connector.

A value of "*" will map all attributes.

AssemblyLine Parameters

The target AssemblyLine can be passed a Task Control Block (TCB) as a parameter. This parameter is runtime generated and the AssemblyLine Connector will use this to pass parameters to the target AssemblyLine. This is an alternative to providing parameters through the target AL's "Published AssemblyLine Initialize Parameters" Operation, in conjunction with the AssemblyLine Parameters parameter.


Axis Easy Web Service Server Connector

The Axis Easy Web Service Server Connector is part of the SDI Web Services suite. It is a simplified version of the Web Service Receiver Server Connector in that it internally instantiates, configures and uses the AxisSoapToJava and AxisJavaToSoap FCs.

Note: Due to limitations of the Axis library used by this component only WSDL (http://www.w3.org/TR/wsdl) version 1.1 documents are supported. Furthermore, the supported message exchange protocol is SOAP 1.1.

The functionality provided is the same as if you chain and configure these FCs in an AssemblyLine which hosts the Web Service Receiver Server Connector. When using this Connector you forgo the possibility of hooking custom processing before parsing the SOAP request and after serializing the SOAP response, that is, you are tied to the processing and binding provided by Axis, but you gain simplicity of setup and use.

The Axis Easy Web Service Server Connector operates in Server mode only.

AssemblyLines support an Operation Entry (op-entry). The op-entry has an attribute $operation that contains the name of the current operation executed by the AssemblyLine. In order to process different web service operations easier, the Axis Easy Web Service Server Connector will set the $operation attribute of the op-entry.

The Axis Easy Web Service Server Connector supports generation of a WSDL file according to the input and output schema of the AssemblyLine. As in SDI v7.2 AssemblyLines support multiple operations, the WSDL generation can result in a web service definition with multiple operations. There are some rules about naming the operations:

This Connector's configuration is relatively simple. The Connector parses the incoming SOAP request, stores it (along with HTTP specific data) into the event Entry and then presents this Entry to the AssemblyLine for Attribute mapping. When the work Entry (now storing the Java™ representation of the SOAP response) is returned to the Connector in the Response phase, the Connector serializes the response and returns it to the Web Service client.

When this Connector receives a SOAP request, the Connector parses it and sets the $operation attribute of the op-entry. The name of the operation is determined by the name of the element nested in the Body element of the SOAP envelope. For parsing the SOAP messages, a SAX parser is used, which adds less performance overhead than a DOM parser.

There are several types of SOAP messages:

Hosting a WSDL file

The Axis Easy Web Service Server Connector provides the "wsdlRequested" Connector Attribute to the AssemblyLine.

If an HTTP request arrives and the requested HTTP resource ends with "?WSDL" then the Connector sets the value of the "wsdlRequested" Attribute to true and reads the contents of the file specified by the WSDL File parameter into the "soapResponse" Connector Attribute; otherwise the value of this Attribute is set to false.

This Attribute's value thus allows you to distinguish between pure SOAP requests and HTTP requests for the WSDL file. The AssemblyLine can use a Branch Component to execute only the appropriate piece of logic - (1) when a request for the WSDL file has been received, then the AssemblyLine could perform some optional logic or read a different WSDL file and send it back to the web service client, or just rely on default processing; (2) when a SOAP request has been received the AssemblyLine will handle the SOAP request. Alternatively, you could program the system.skipEntry(); call at an appropriate place (in a script component, in a hook in the first Connector in the AssemblyLine, etc.) to skip further processing and go directly to the Response channel processing.
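The distinction between WSDL requests and SOAP requests can be sketched as a small helper (hypothetical name; whether the actual Connector matches "?WSDL" case-insensitively is an assumption here):

```javascript
// Decide whether an incoming HTTP request asks for the WSDL file
// or carries a SOAP request, mirroring the "wsdlRequested" logic:
// the requested HTTP resource ends with "?WSDL".
function isWsdlRequest(requestedResource) {
  return requestedResource.toUpperCase().endsWith('?WSDL');
}
```

In an AssemblyLine, a Branch Component would test the "wsdlRequested" Attribute itself; this helper only illustrates the rule the Connector applies.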

It is the responsibility of the AssemblyLine to provide the necessary response to a SOAP request.

The Connector implements a public method:

public String readFile (String aFileName) throws IOException;

This method can be used from SDI JavaScript in a script component to read the contents of a WSDL file on the local file system. The AssemblyLine can then return the contents of the WSDL file in the "soapResponse" Attribute, and thus to the web service client, when a request for the WSDL has been received.

Schema

Input Schema

Table 2. Axis Easy Web Service Server Connector Input Schema
Attribute Value
host Type is String. Contains the name of the host to which the request is sent. This parameter is set only if "wsdlRequested" is false.
requestObjArray The soapRequest represented as an array of objects; converts java.lang.String to Object[] with the perform method of SoapToJava function component.
requestedResource The requested HTTP resource.
soapAction The SOAP action HTTP header. This parameter is set only if "wsdlRequested" is false.
soapFault If a SOAP error occurs, an org.apache.axis.AxisFault is stored in this attribute.
soapRequest The SOAP request in text/XML or DOMElement format. This parameter is set only if "wsdlRequested" is false.
soapResponse The SOAP response message. If wsdlRequested is true, then soapResponse is set to the contents of the WSDL file.
wsdlRequested This parameter is true if a WSDL file is requested and false otherwise.
http.username This attribute is used only when HTTP basic authentication is enabled. The value is the username of the client connected.
http.password This attribute is used only when HTTP basic authentication is enabled. The value is the password of the client connected.

Output Schema

Table 3. Axis Easy Web Service Server Connector Output Schema
Attribute Value
responseContentType The content type of the response.
responseObjArray The SOAP response represented as an array of objects; the soapResponse Attribute gets its value from here, using the JavaToSoap Function Component to convert Object[] to java.lang.String.
soapFault If a SOAP error occurs, an org.apache.axis.AxisFault is stored in this attribute.
soapResponse The SOAP response message. If wsdlRequested is true, then soapResponse is set to the contents of the WSDL file.
wsdlRequested This parameter is true if a WSDL file is requested and false otherwise.
http.credentialsValid This attribute is used only when HTTP basic authentication is enabled. Its syntax is boolean; if true, client authentication is successful. It is the responsibility of the AssemblyLine to set this parameter's value when HTTP basic authentication is used.

Configuration

Parameters

The Generate WSDL button runs the WSDL generation utility.

The WSDL Generation utility takes as input the name of the WSDL file to generate and the URL of the provider of the web service (the web service location). This utility extracts the input and output parameters of the AssemblyLine in which the Connector is embedded and uses that information to generate the WSDL parts of the input and output WSDL messages. It is mandatory that for each Entry Attribute in the "Initial Work Entry" and "Result Entry" Schema the "Native Syntax" column be filled in with the Java type of the Attribute (for example, "java.lang.String"). The WSDL file generated by this utility can then be manually edited.

The operation style of the SOAP Operation defined in the generated WSDL is "rpc".

The WSDL generation utility cannot generate a <types...>...</types> section for complex types in the WSDL.

Connector Operation

For an overview of the Axis Easy Web Service Server Connector Attributes, used to exchange information to and from the HTTP/SOAP request, see Schema.

This Connector parses the incoming SOAP request message and stores the Java™ representation of the SOAP request in the "requestObjArray" Connector Attribute. The Connector is capable of parsing both Document-style and RPC-style SOAP messages as well as generating (a) Document-style SOAP response messages, (b) RPC-style SOAP response messages and (c) SOAP Fault response messages. The style of the message generated is determined by the WSDL specified by the WSDL File Connector parameter.

The Connector is capable of parsing SOAP request messages and generating SOAP response messages which contain values of complex types which are defined in the <types> section of the WSDL document. In order to do that this Connector requires that (1) the Complex Types Connector parameter contains the names of all Java classes that implement the complex types used as request and response parameters to the SOAP operation and that (2) these Java classes' class files are located in the Java class path of SDI.

If during parsing the SOAP request an Exception is thrown by the parsing code, then the Connector generates a SOAP Fault Object (org.apache.axis.AxisFault) and stores it in the "soapFault" Connector Attribute.

This Connector is capable of parsing and generating SOAP response messages encoded using both "literal" encoding and SOAP Section 5 encoding. The encoding of the SOAP response message generated is determined by the WSDL specified by the WSDL File Connector parameter.

At the end of AssemblyLine processing, in the Response channel phase, this Connector requires that the Java representation (Object[]) of the SOAP response message be mapped out in the "responseObjArray" Attribute of the work Entry. The Connector then serializes the SOAP response message, wraps it in an HTTP response and returns it to the web service client.
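The Response channel contract above can be sketched as follows. The work stub and the element values are illustrative only; the real Object[] must match the parts of the output WSDL message, and the soapFault/responseObjArray attribute names come from the Schema tables.

```javascript
// Hedged sketch: either let a stored SOAP Fault pass through, or map out
// the Object[] the Connector serializes as the SOAP response.
// "work" is a stub for the SDI work Entry.
var work = {
  attrs: { "soapFault": null },
  getAttribute: function (n) { return this.attrs[n]; },
  setAttribute: function (n, v) { this.attrs[n] = v; }
};

if (work.getAttribute("soapFault") != null) {
  // A fault was stored during request parsing; the Connector returns it.
} else {
  // Hypothetical return values, one per part of the output message.
  work.setAttribute("responseObjArray", ["OK", 42]);
}
```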


Axis2 Web Service Server Connector

The Axis2 Web Service Server Connector can be used to provide a SOAP web service, which is accessible via HTTP/HTTPS.

The logic of such a service is supposed to be implemented as an SDI AssemblyLine, thus leveraging existing SDI components.

The Connector is named after the underlying Axis2 Java™ library: http://ws.apache.org/axis2/.

Both WSDL 1.1 (http://www.w3.org/TR/wsdl/) and WSDL 2.0 (http://www.w3.org/TR/wsdl20/) documents are supported.

Both SOAP 1.1 and SOAP 1.2 protocols are supported. Only literal SOAP messages can be used; encoded SOAP messages are not supported. This is a limitation of the underlying Axis2 library (version 1.4.0.1).

The Axis2 Web Service Server Connector supports Server Mode only.

Comparison between Axis1 and Axis2 components

Generally, there are only a few cases in which you should use Axis1 components:

In all other cases the Axis2 components should be used because they:

SOAP encoding support

The binding in a WSDL1.1 document describes how the service is bound to a messaging protocol, particularly the SOAP messaging protocol. A WSDL SOAP binding can be either a Remote Procedure Call (RPC) style binding or a document style binding. A SOAP binding can also have an encoded use or a literal use. This gives you four style/use models:

  1. RPC/encoded
  2. RPC/literal
  3. Document/encoded
  4. Document/literal

See http://www.ibm.com/developerworks/webservices/library/ws-whichwsdl/.

Support of style/use models in the SDI Axis2 components is as follows:

  1. RPC/encoded is not supported due to limitations of the Axis2 library. The RPC-encoded binding is not compliant with WS-I Basic Profile (http://www.ws-i.org/Profiles/BasicProfile-1.1.html#Consistency_of_style_Attribute).
  2. RPC/literal - supported.
  3. Document/encoded is not supported; this is not a practical limitation, since the model is virtually unused and, in addition, is not WS-I compliant.
  4. Document/literal - supported.

In WSDL 2.0 everything is similar to the document/literal model (all messages are defined directly using a type language, such as XML Schema), so there is no problem for the SDI Axis2 components. As for RPC calls, WSDL 2.0 defines a set of rules for designing messages suitable for them. See http://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/74bae690-0201-0010-71a5-9da49f4a53e2.

Popularity of the RPC/encoded model

All major frameworks for web services support Document/literal messages. Most of the popular frameworks also have some support for RPC/encoded, so developers can still use it to create encoded-only services. As a result, it is hard to estimate how many web services in production use work only with SOAP-encoded messages. However, there is a tendency to move away from RPC/encoded towards Document/literal, because the SOAP encoding specification does not guarantee 100% interoperability and there are vendor deviations in the implementations of RPC/encoded.

Here are some references about encoded support in some popular frameworks:

Alternatives

If you need to use the RPC/encoded model, the old web services suite can be used. Also, if you have more information about the service and the SOAP messages, a solution can be created using the HTTP Components and the XML Parser.

Using the Connector

The Axis2 Web Service Server Connector is designed for a "WSDL first" way of development. This means that the Connector requires a WSDL document describing the web service, so that it knows how clients expect the web service to behave. The implementation of the web service then must stick to the model outlined by the WSDL. (An alternative would be to implement the logic first and have the Connector produce an appropriate WSDL for that implementation.) The reason for this design choice is to make it easy for SDI to fit into an existing communication model by conforming to an already established WSDL description.

For situations where an existing Assembly Line needs to be exposed through a web service interface, SDI offers some basic WSDL generation functionality (see WSDL Generation).

A WSDL document can describe multiple interfaces (or port types in WSDL 1.1 terms). Each interface groups a set of operations. One instance of the Axis2 Web Service Server Connector can be used to implement just a single interface. To help the AssemblyLine logic distinguish between different operations, the Connector passes the name of the operation (the local part of the qualified name) in the $operation Attribute of the Operational Entry (op-entry). For more information on AssemblyLine Operations and the Operational Entry see SDI v7.2 Users Guide.
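To illustrate, here is a minimal sketch of dispatching on the operation name. The opEntry stub and the operation names are hypothetical; only the $operation attribute name comes from the text above.

```javascript
// Hedged sketch: branch the AssemblyLine logic on the operation name.
// "opEntry" is a stub standing in for the SDI Operational Entry; the
// operation names are purely illustrative.
var opEntry = { "$operation": "getStockQuote" };

var result;
switch (opEntry["$operation"]) {
  case "getStockQuote":
    result = "handle the getStockQuote operation";
    break;
  case "listSymbols":
    result = "handle the listSymbols operation";
    break;
  default:
    result = "unknown operation";
}
```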

Notes:

  1. The SOAP version depends on the client: If the client sends a SOAP 1.1 request, the Connector will send back a SOAP 1.1 response. If the client sends a SOAP 1.2 request, the Connector will send back a SOAP 1.2 response. The SOAP version settings from the WSDL document are ignored.
  2. The Connector does not perform XML Schema validation of incoming or outgoing SOAP messages.
  3. The Connector does SOAP processing only on HTTP POST requests. Other HTTP requests are left for the Assembly Line logic to handle.
  4. The Connector will generate a SOAP response only as an answer to a SOAP request: If the HTTP request does not contain a body, the Connector will not generate a SOAP response.
  5. The Assembly Line can override the response for all requests by specifying an http.body Output Attribute. The Connector will not generate a SOAP response if the Assembly Line provides an overriding http.body Attribute.
  6. For special cases, we can configure the logging level of the underlying Axis2 library in the Log4j configuration file of the SDI Server (etc/log4j.properties).

Supported Message Exchange Patterns

The Axis2 Web Service Server Connector supports the following message exchange patterns (described in WSDL 2.0 terms):

For more information on message exchange patterns see: http://www.w3.org/TR/wsdl20-adjuncts/#patterns http://www.w3.org/TR/wsdl#_porttypes

Note: When the server does not generate a SOAP response, it still sends an HTTP response back to the client. In that case the HTTP response body will not contain a SOAP message.

SOAP Faults

We can instruct the Axis2 Web Service Server Connector to generate a SOAP fault in response to a client's request.

This can be achieved using the following Connector attributes:

See Schema for a detailed description of these and other attributes.

For more information on SOAP faults see: http://www.w3.org/TR/soap12-part1/#soapfault http://www.w3.org/TR/2000/NOTE-SOAP-20000508/#_Toc478383507

SOAP Headers

The Connector provides access to the SOAP header of the SOAP request for analysis, in case of special or advanced use.

It also allows user-defined SOAP headers to be included in the response.

Note that any user-defined SOAP headers affect both normal SOAP messages and SOAP faults.

See section Schema for a detailed description of the attributes.

The HTTP Transport Layer

The Axis2 Web Service Server Connector uses the HTTP Server Connector as its HTTP transport.

In special, advanced cases we can take advantage of the control that the HTTP Server Connector provides over the HTTP request and the HTTP response.

We can analyze the HTTP headers of the request and set the HTTP headers of the response.

We can even override the whole HTTP body of a response. The Axis2 Web Service Server Connector parses SOAP messages out of HTTP POST requests only. HTTP GET requests are not processed by the SOAP engine, and you are free to implement your own logic in such cases - for example, returning a WSDL document if an HTTP GET request arrives with a URI that ends with "?wsdl".

WSDL Generation

The Axis2 WS Server Connector requires a WSDL document in order to function. If you have a working AssemblyLine but no WSDL document, you can use SDI to generate one, using an instance of this Connector. The service name in the generated WSDL document will be set to the name of the AssemblyLine.

Note that the WSDL generation functionality is aimed at novice users as a quick start. If you have some web service expertise, we strongly recommend that you design the WSDL document yourself or at least thoroughly inspect the generated WSDL document before putting it into production use.

To generate a WSDL file:

  1. Add an instance of the Axis2 WS Server Connector to the AssemblyLine for which you want to generate a WSDL document.
  2. Fill in the WSDL Output to Filename, Web Service provider URL and WSDL Version parameters.
  3. Press the Generate WSDL button.

Schema

Input Schema

See the documentation of the HTTP Server Connector for transport related attributes.

We can add attributes such as http.content-type and http.content-length to the Input Map and use these parameters of the SOAP request in the logic of the AssemblyLine.

Another useful attribute is http.method, which holds the type of request received by the server (GET or POST). Since the connector parses only POST requests, the value of this attribute can be checked and in case of a GET request a specific return value set (for example the WSDL document describing the service or an HTML document).
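That check might be sketched as follows. The work stub, the http.requestURI attribute name, the readWsdl() helper and the document contents are all hypothetical stand-ins; only http.method and http.body are attributes named in this chapter.

```javascript
// Hedged sketch: answer GET requests for the WSDL yourself and leave POST
// requests to the SOAP engine.
function readWsdl() {
  // Hypothetical helper returning the WSDL document for the service.
  return "<description><!-- WSDL 2.0 document for the service --></description>";
}

var work = {
  attrs: { "http.method": "GET", "http.requestURI": "/StockQuote?wsdl" },
  getAttribute: function (n) { return this.attrs[n]; },
  setAttribute: function (n, v) { this.attrs[n] = v; }
};

if (work.getAttribute("http.method") === "GET" &&
    /\?wsdl$/.test(work.getAttribute("http.requestURI"))) {
  // Overriding http.body suppresses SOAP response generation.
  work.setAttribute("http.body", readWsdl());
}
```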

http.SOAPAction is another significant HTTP header for web services. It is set in the HTTP binding of the SOAP message, and its value is a URI. Some SOAP bindings do not require a SOAPAction and omit this attribute.

A SOAP Message Embedded in an HTTP Request:

POST /StockQuote HTTP/1.1
Host: www.stockquoteserver.com
Content-Type: text/xml; charset="utf-8"
Content-Length: nnnn
SOAPAction: "Some-URI"

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
    <soapenv:Body>
	...
    </soapenv:Body>
</soapenv:Envelope>

See the Schema section of the Axis2 WS Client Function Component for a description of WSDL-specific Attributes, such as the incoming message.

Output Schema

See the documentation of the HTTP Server Connector for transport related attributes.

The HTTP attributes can be used not only to set characteristics of the SOAP response, but also to alter the behavior of the AssemblyLine. For instance, when the attribute http.body is mapped in the Output Map of the Connector, its value is set directly as the SOAP response and the Axis2 engine is not used to generate one. A similar technique is used in the first of the shipped examples: there, the value of http.method is checked and, in the case of a GET request, the http.body attribute is set to the contents of the WSDL file describing the service. If a POST request is received, a SOAP response is assembled and sent.

Another useful HTTP attribute is http.status. It can be mapped in the Output Map of the Connector and its value set according to the AssemblyLine logic. This way we can modify the status of the HTTP response that the server sends: set "200" for OK, "403" for Forbidden, "404" for Not Found, and so forth.
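As a minimal sketch, setting the status from script logic might look like this; the work stub and the lookup outcome are illustrative only.

```javascript
// Hedged sketch: choose the HTTP status according to AssemblyLine logic.
// "work" is a stub for the work Entry; resourceFound is an illustrative
// outcome of whatever lookup the AssemblyLine performed.
var work = { attrs: {}, setAttribute: function (n, v) { this.attrs[n] = v; } };

var resourceFound = false;
work.setAttribute("http.status", resourceFound ? "200" : "404");
```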

See the Axis2 WS Client Function Component Schema section for a description of WSDL-specific Attributes, such as the incoming message.

Configuration

The Axis2 Web Service Server Connector has the following parameters:

WSDL File Generator Parameters

The following parameters are related to WSDL file generation (they are not used at runtime):

Security and Authentication

Encryption

The Axis2 Web Service Server Connector supports transport level security by the use of SSL/TLS.

To turn SSL on, set the Use SSL Connector parameter to true.

To turn SSL client authentication on, set the Require Client Authentication Connector parameter to true.

For more information on Connector parameters, see Configuration.

Authentication

By default the Axis2 Web Service Server Connector has HTTP basic authentication disabled.

To turn HTTP basic authentication on, set the HTTP Basic Authentication Connector parameter to true. Also set the Auth Realm to the name of the authentication realm - the client will be prompted to authenticate against that realm.

For more information on Connector parameters see Configuration.

The following Connector Attributes are related to HTTP basic authentication: Input Schema

Output Schema

Note that the actual authentication logic must be implemented in the associated Assembly Line, for example by verifying client credentials against a database or an LDAP server.
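One way such logic might look is sketched below. The input/output objects and the hard-coded credential store are stand-ins; a real AssemblyLine would verify the credentials against a database or an LDAP server. The http.username, http.password and http.credentialsValid attribute names come from the Schema tables.

```javascript
// Hedged sketch: validate HTTP basic authentication credentials in the
// AssemblyLine and map out http.credentialsValid.
var input = { "http.username": "alice", "http.password": "secret" };

// Hypothetical credential store, purely for illustration.
var credentials = { "alice": "secret" };

var valid = credentials[input["http.username"]] === input["http.password"];
var output = { "http.credentialsValid": valid };
```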

Authorization

We can implement custom authorization logic in the Connector's associated AssemblyLine, based on the username (see Authentication) provided by the Connector.

See also

The example in TDI_install_dir/examples/axis2_web_services.


CCMDB Connector

Note: This connector is deprecated and will be removed in a future version of SDI.

Use the CCMDB Connector to read, write, delete, or search configuration items (CIs) and relationships between them in the IBM Tivoli® Change and Configuration Management Database (CCMDB).

Overview of CCMDB Connector

The CCMDB is an integrated productivity tool and database that helps you manage, audit, and coordinate the change and configuration management processes using user interfaces and workflow that are designed to facilitate cross-silo cooperation. CCMDB includes a database that serves as a logical aggregation of many databases, providing critical information about IT infrastructure resources, including key attributes, their configurations, and their physical and logical relationships to other infrastructure resources.

The CCMDB Connector uses JDBC to connect to the database and supports only the DB2® database. This connector uses the queries.xml configuration file, which contains static SQL statements, to retrieve data from and store data into the database.

In the CCMDB Connector, a hierarchical data source schema is used to represent information. The schema depends on the specified artifact type such as actual configuration item or relationship, and the specified class type.

For more information about CCMDB, see http://publib.boulder.ibm.com/infocenter/tivihelp/v10r1/topic/com.ibm.ccmdb.doc_7.1.1/overview/c_ccmdb_overview.html.

Architecture of CCMDB Connector

The CCMDB data layer contains three data spaces that hold configuration items, process artifacts, and the relationships between these objects. This data layer provides a dependency mapping of the discovered environment and a specification of authorized configuration items that define:

The CCMDB Connector supports the Common Data Model (CDM) across all three data spaces. The CDM is a logical information model that is used to support the sharing of consistent data definitions and the exchange of data between Security management products concerning managed resources and components of a customer's business environment. The following figure depicts the three data spaces of the CCMDB solution, its interoperability, and its relationship to other data structures such as process artifacts.

For more information about CDM, see http://publib.boulder.ibm.com/infocenter/tivihelp/v10r1/index.jsp?topic=/com.ibm.taddm.doc_7.2/SDKDevGuide/c_cmdbsdk_understandingdatamodel.html.

The CCMDB Connector works with configuration items stored in the Actual CIs data space. Actual CIs are a subset of configuration items and relationships of the Discovered CIs data space, which is copied to the Actual CIs data space. In the Actual CIs data space, the system deals with subsets of the data from Discovered CIs data space. The system still contains data to perform all the process management and service management capabilities such as CI auditing as part of configuration management or change management.

Data representation modes

The CCMDB Connector supports the following two modes of data representation:

We can select the data representation mode to be used by the CCMDB Connector in the Configuration Editor.

IdML mode

In the IdML mode, all attributes are represented with their CDM names capitalized and are prefixed with cdm:. All relationships contain two parts, namely a relationship and a related item. The related class carries information about the relationship direction. Thus, the sys.ComputerSystem's relationship runson changes to cdm-rel:runsOn . cdm-src:sys.OperatingSystem. The first part, cdm-rel:runsOn, describes the relationship, as indicated by the prefix cdm-rel. The second part represents the related item of class sys.OperatingSystem; its prefix is cdm-src if the related item is the source of the relationship.

In the IdML mode:

Native mode

In the native mode, all configuration items and relationships are represented according to the internal data model. The connector does not generate GUIDs for configuration items. It relies on the Id values specified in the input data.

Schema comparison

Refer to the following example data structure, in native mode and IdML mode, for an operating system with the installed software.
IdML mode:
{
"@ClassType": "sys.OperatingSystem",
"@Guid": "46E8E8DE2946319190AF5B70BDCF4A60",
"cdm:Guid": "46E8E8DE2946319190AF5B70BDCF4A60",
"cdm:CreatedBy": "system",
"cdm:DisplayName": "192.168.1.1",
"cdm:LastModifiedBy": "system",
"cdm:LastModifiedTime": 1.222355157147E12,
"cdm:OSConfidence": 15.0,
"cdm:OSName": "D-Link embedded",
"cdm-rel:installedOn": {
	"cdm-trg:sys.ComputerSystem": {
		"@ClassType": "sys.ComputerSystem",
		"@Guid": "272A0D2B9DE73C8A9C86740263CD39FA",
		"cdm:Guid": "272A0D2B9DE73C8A9C86740263CD39FA",
		"cdm:Fqdn": "192.168.1.1",
		"cdm:Name": "192",
		"cdm:Signature": "192.168.1.1",
		"cdm:Type": "ComputerSystem",
		"cdm:ContextIp": "NULL_CONTEXT",
		"cdm:CreatedBy": "system",
		"cdm:DisplayName": "192.168.1.1",
		"cdm:LastModifiedBy": "system",
		"cdm:LastModifiedTime": "1.222355157147E12"
	}
},.....
Native mode:
{
"actciname": "192.168.1.1",
"actcinum": "192.168.1.1~22335",
"bidiflag": 3.0,
"changeby": "SYSTEM",
"changedate": "2008-09-25 11:05:57.0",
"classification": "SYS.OPERATINGSYSTEM",
"classstructureid": "1695",
"createdby": "system",
"displayname": "192.168.1.1",
"guid": "46E8E8DE2946319190AF5B70BDCF4A60",
"hasld": 0,
"langcode": "EN",
"lastmodifiedby": "system",
"lastmodifiedtime": 1.222355157147E12,
"lastscandt": "2008-09-25 11:05:57.0",
"osconfidence": 15.0,
"osname": "D-Link embedded",
"installedon": {
	"relation": {
		"ancestorci": "192.168.1.1",
		"relationnum": "RELATION.INSTALLEDON",
		"sourceci": "192.168.1.1~22335",
		"sourceciguid": 
         "46E8E8DE2946319190AF5B70BDCF4A60",
		"swapped": "1",
		"targetci": "192.168.1.1~22333",
		"targetciguid": 
         "272A0D2B9DE73C8A9C86740263CD39FA",
		"target": {
			"actciname": "192.168.1.1",
			"actcinum": "192.168.1.1~22333",
			"changeby": "SYSTEM",
			"changedate": "2008-09-25 11:05:57.0",
			"classstructureid": "1625",
			"guid": "272A0D2B9DE73C8A9C86740263CD39FA",
			"hasld": "0",
			"langcode": "EN",
			"lastscandt": "2008-09-25 11:05:57.0",
			"fqdn": "192.168.1.1",
			"name": "192",
			"signature": "192.168.1.1",
			"type": "ComputerSystem",
			"bidiflag": "3.0",
			"contextip": "NULL_CONTEXT",
			"createdby": "system",
			"displayname": "192.168.1.1",
			"lastmodifiedby": "system",
			"lastmodifiedtime": "1.222355157147E12"
		}
	}
},...

We can switch between these two modes of data representation using the IdML Mode option in the Configuration Editor. To check the structure of the current schema, use the Query Schema function of the CCMDB Connector.

Refer to the following example data structure, in native mode and IdML mode, for the installedOn relationship.
IdML mode:
{
"cdm-rel:installedOn": {
	"cdm-src:sys.zOS.ZOS": {
		"@ClassType": "sys.zOS.ZOS",
		"@Guid": "4E043C3C223B38B9AC647FE699B83365",
		"cdm:Guid": "4E043C3C223B38B9AC647FE699B83365",
		"cdm:CreatedBy": "system",
		"cdm:DisplayName": "OMO1",
		"cdm:Label": "OMO1-SYSPLEXO",
		"cdm:LastModifiedBy": "administrator",
		"cdm:LastModifiedTime": "1.176320919296E12",
		"cdm:SourceToken": "OMO1-ZOS",
		"cdm:FQDN": "PTHOMO1.PERTHAPC.AU.IBM.COM",
		"cdm:Name": "PTHOMO1.PERTHAPC.AU.IBM.COM",
		"cdm:OSName": "OMO1",
		"cdm:VersionString": "01.08.00",
		"cdm:IPLParmDataset": "SYS8.IPLPARM",
		"cdm:IPLParmDevice": "E81A",
		"cdm:IPLParmMember": "LOAD00",
		"cdm:IPLParmVolume": "$$SR81",
		"cdm:IPLTime": "1.174551129E12",
		"cdm:JESNode": "PTHAPO0",
		"cdm:MasterCatalogDataset": 
         "CATALOG.MASTER.SYSPLEXO",
		"cdm:MasterCatalogVolume": "O$SY01",
		"cdm:NetID": "AUIBMQXP",
		"cdm:NetidSSCP": "AUIBMQXP.OMO1CDRM",
		"cdm:PrimaryJES": "JES2",
		"cdm:ProcessCapacityUnits": "3.0",
		"cdm:ProcessingCapacity": "52.0",
		"cdm:SMFID": "OMO1",
		"cdm:SSCP": "OMO1CDRM",
		"cdm:SysResVolume": "$$SR81"
	},
	"cdm-trg:sys.zOS.ZVMGuest": {
		"@ClassType": "sys.zOS.ZVMGuest",
		"@Guid": "A97257B6DA5434E69F2C47D34FC115ED",
		"cdm:Guid": "A97257B6DA5434E69F2C47D34FC115ED",
		"cdm:Name": "PTHOMO1",
		"cdm:ProcessCapacityUnits": "3.0",
		"cdm:ProcessingCapacity": "52.0",
		"cdm:Type": "IpDevice",
		"cdm:VMID": "PTHOMO1-PTHVM8",
		"cdm:CreatedBy": "administrator",
		"cdm:DisplayName": "PTHOMO1",
		"cdm:Label": "PTHOMO1-PTHVM8",
		"cdm:LastModifiedBy": "administrator",
		"cdm:LastModifiedTime": "1.176317254312E12",
		"cdm:SourceToken": 
         "PTHOMO1-PTHVM8-ES64-PTHES6-VMGuest"
	}
}
}
Native mode:
{
"ancestorci": "1625",
"relationnum": "RELATION.INSTALLEDON",
"sourceci": "OMO1-SYSPLEXO~1828",
"sourceciguid": "4E043C3C223B38B9AC647FE699B83365",
"swapped": 0,
"targetci": "PTHOMO1-PTHVM8~1829",
"targetciguid": "A97257B6DA5434E69F2C47D34FC115ED",
"source": {
	"actciname": "OMO1-SYSPLEXO",
	"actcinum": "OMO1-SYSPLEXO~1828",
	"changeby": "ADMINISTRATOR",
	"changedate": "2007-04-11 15:48:39.0",
	"classstructureid": "1761",
	"guid": "4E043C3C223B38B9AC647FE699B83365",
	"lastscandt": "2007-04-11 15:48:39.0",
	"createdby": "system",
	"displayname": "OMO1",
	"label": "OMO1-SYSPLEXO",
	"lastmodifiedby": "administrator",
	"lastmodifiedtime": "1.176320919296E12",
	"sourcetoken": "OMO1-ZOS",
	"fqdn": "PTHOMO1.PERTHAPC.AU.IBM.COM",
	"name": "PTHOMO1.PERTHAPC.AU.IBM.COM",
	"osname": "OMO1",
	"versionstring": "01.08.00",
},
"target": {
	"actciname": "PTHOMO1-PTHVM8",
	"actcinum": "PTHOMO1-PTHVM8~1829",
	"changeby": "ADMINISTRATOR",
	"changedate": "2007-04-11 14:47:34.0",
	"classstructureid": "1995",
	"guid": "A97257B6DA5434E69F2C47D34FC115ED",
	"lastscandt": "2007-04-11 14:47:34.0",
	"name": "PTHOMO1",
	"type": "IpDevice",
	"createdby": "administrator",
	"displayname": "PTHOMO1",
	"label": "PTHOMO1-PTHVM8",
	"lastmodifiedby": "administrator",
	"lastmodifiedtime": "1.176317254312E12",
	"sourcetoken": 
       "PTHOMO1-PTHVM8-ES64-PTHES6-VMGuest"
}
}

Notes:

  1. In this example data structure, a few system attributes are skipped for simplicity.
  2. In native mode, the schema has more attributes (for example, relationnum, swapped, and so on) than in IdML mode, because in IdML mode relationships carry only their source and target items.

Operation modes of CCMDB Connector

The CCMDB Connector supports AddOnly, Delete, Update, Iterator, and Lookup operation modes. The following table shows the supported modes of operation for various artifact types and schema.
Mode Actual CI (Native schema) Actual CI (IdML schema) Relationship (Native schema) Relationship (IdML schema)

Iterator Yes Yes Yes Yes
Lookup Yes Yes Yes No
AddOnly Yes Yes Yes Yes
Update Yes Yes No No
Delete Yes Yes Yes Yes

Note: For more details on connector operation modes, attribute mapping, and link criteria, see the "General Concepts" chapter of the SDI v7.2 Users Guide.

AddOnly mode

In the AddOnly mode of operation, an actual configuration item or relationship is created from the available attributes in the Entry and inserted into the database. To add a configuration item or relationship, in the Configuration Editor, set up the attribute mapping to specify values for the attributes.

The following tables describe the operations in AddOnly mode.
Table 4. Actual configuration item
Artifact Operation
Root configuration item Adds the item to the database if it does not exist; otherwise throws an exception.
Relationships Adds the relationship.
Related configuration items Skips items that already exist; adds the non-existent items.

Table 5. Relationship
Artifact Operation
Relationship Adds the specified relation to the database if it does not exist, else throws an exception.
Related configuration items Skips items that already exist; adds the non-existent items.

Update mode

In the Update mode, an actual configuration item or relationship is updated in the database using the available attributes in the Entry. To update a configuration item or relationship, in the Configuration Editor, specify the link criteria and set up the output attribute mapping to provide a value for the attribute to be updated.

The following table describes the operations in Update mode.
Table 6. Actual configuration item
Artifact Operation
Root configuration item Updates the item
Relationship Overwrites the relationship
Related configuration items Skips items that already exist; adds the non-existent items

Delete mode

In the Delete mode, using the available attributes in the Entry, an actual configuration item or relationship is deleted from the database. To delete a configuration item or a relationship, in the Configuration Editor, specify the link criteria and input attribute mapping for the attribute.

The following tables describe the operations in Delete mode.
Table 7. Actual configuration item
Artifact Operation
Root configuration item Deletes the specified item from the database
Relationships Deletes the relationship from the database
Related configuration items Skips the delete operation
Table 8. Relationship
Artifact Operation
Relationship Deletes the specified relationship from the database
Related configuration items Skips the delete operation

Iterator mode

In Iterator mode, the CCMDB Connector is used to read actual configuration items and their relationships for the specified class type in the Configuration Editor.

Lookup mode

In the Lookup mode, the CCMDB Connector is used to search for matching items. Based on the required configuration item or relationship class, an Entry of that type is constructed according to the selection criteria passed into the method invocation.

Note: The IdML schema does not define any attributes at relation level.

Examples

Go to the TDI_install_dir/examples/CCMDBConnector directory of your SDI installation.


Command line Connector

The command line Connector enables you to read the output from a command line or pipe data to a command line's standard input. Every command argument is separated by a space character, and quotes are ignored. The command is run on the local machine.

Note: You do not get a separate shell, so redirection characters ( | > and so forth) do not work. To use redirection, make a shell-script (UNIX) or batch command (DOS) with a suitable set of parameters. For example, on a Windows system, type

cmd /c dir
to list the contents of a directory.

The Connector supports Iterator and AddOnly mode, as well as CallReply mode.

In Iterator and AddOnly mode, the command specified by the Command Line parameter is issued to the target system during Connector initialization, which implies it will only be issued once for the whole AssemblyLine lifetime.

However, in CallReply mode, the command is issued to the target system on each iteration of the AssemblyLine, after Output Attribute Mapping (call phase), and before Input Attribute Mapping (reply phase). In this mode, you must provide the command to be executed in an attribute called command.line; after it has executed you will find the output result in an attribute called command.output.

If a Parser is attached to the Command Line Connector, the output result will be parsed.

Native-encoded output on some operating systems

When you use the Command Line Connector to run a program on a Windows operating system, the output from the program might be encoded using a DOS code page. This can cause unexpected results, because Windows programs usually use a Windows code page. Because a DOS code page is different from a Windows code page, it might be necessary to set the Character Encoding in the Command Line Connector's Parser to the correct DOS code page for your region; for example: cp850.

Also see Character Encoding conversion.

Some words on quoting

On Linux/Unix systems, this Connector can attempt to deal with the quoting of parameters that may contain lexically important characters. When the parameter Use sh is checked, SDI uses the sh program (the standard Linux shell) to run the command line, and sh handles quoting as you expect. If you do not have sh on your operating system, do not check this box.

Without sh, the Command Line Connector on a UNIX/Linux platform does not handle a command line with a quoted parameter correctly. For example, the command:

Report -view fileView -raw -where "releaseName = 'ibmdi_60' and nuPathName like 'src/com/ibm/di%' " 

should pass the phrase "releaseName = 'ibmdi_60' and nuPathName like 'src/com/ibm/di%' " as a single parameter, but it does not. The reason is that SDI uses the Java™ Runtime exec() method, which splits the command at spaces and ignores all quoting. Checking Use sh (when possible) solves this problem.
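
The difference can be observed directly. In this sketch (UNIX-only, since it relies on sh), the same command line is run once split naively at whitespace, as Runtime exec(String) does, and once through sh -c, which honors the quotes:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class QuotingDemo {
    // Collect the first line a process writes to stdout.
    public static String firstLine(Process p) throws Exception {
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line = r.readLine();
            p.waitFor();
            return line;
        }
    }

    // Naive execution: the whole string is split at spaces, quotes and all,
    // which is how Runtime.exec(String) tokenizes a command line.
    public static String naive(String cmd) throws Exception {
        return firstLine(Runtime.getRuntime().exec(cmd.split(" ")));
    }

    // What "Use sh" does: hand the unparsed line to sh for proper quoting.
    public static String viaSh(String cmd) throws Exception {
        return firstLine(Runtime.getRuntime().exec(new String[] { "sh", "-c", cmd }));
    }

    public static void main(String[] args) throws Exception {
        String cmd = "echo 'a b'";
        System.out.println(naive(cmd)); // 'a b'  (the quotes survive as argument text)
        System.out.println(viaSh(cmd)); // a b    (sh strips the quotes)
    }
}
```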

Configuration

The Connector needs the following parameters:

Examples

Refer to the TDI_install_dir/examples/commandLine_connector directory of your SDI installation.

See also

Remote Command Line Function Component, z/OS TSO/E Command Line Function Component


Database Connector

The Database Connector is a simplified version of the JDBC Connector, intended to provide a more user-friendly interface for configuring a database connection. All advanced parameters are removed from the Connector configuration form; only the most commonly used parameters are available.

For behavior, Connector Modes, and so forth, see the appropriate sections of the JDBC Connector. Specifically, for information on where to place driver libraries in order to set up a connection to a database system, see Understanding JDBC Drivers; note, however, that the Database Connector automates some of the process of creating a JDBC URL.

Configuration

The Database Connector requires the following parameters:


Deployed Assets Connector

The Deployed Assets Connector is used for data integration with the IBM Tivoli® Asset Management for IT database. IBM Tivoli Asset Management for IT helps you manage your IT environment effectively and efficiently.

The Deployed Assets Connector uses JDBC to connect to the database for efficient data transformation and validation. The key functions of the Deployed Assets Connector are:

Note: Only the DB2® database is currently supported.

Using the Deployed Assets Connector

This section provides information about the architecture, supported operation modes, and running Deployed Assets Connector.

Architecture of Deployed Assets Connector

In the Deployed Assets Connector, deployed IT asset data is presented hierarchically. For example, an asset of type Computers can have simple attributes such as Domainname, Biosname, and Ramsize. It can also have one or more related CPUs, as represented in the following hierarchical data model.

{
"Assetclass": "COMPUTER",
"Biosdate": "1998-03-26 00:00:00.0",
"Biosname": "Intel 4A4LL0X0.10A.0027.P09",
"Biospnp": 0,
"Biosversion": "4A4LL0X0.10A.0027.P09",
"Changedate": "2005-02-03 14:23:16.0",
"Createdate": "2005-02-03 14:23:16.0",
"Description": "KLINGON_EMPIRE administrator enterprise 00400542C346 192.168.100.95/24",
"Domainname": "enterprise",
"Hwdetectiontool": "Maximo Discovery 4.5",
"Hwlastscandate": "2003-06-26 16:52:00.0",
"Logonname": "administrator",
"Makemodel": "Dell Dimension XPS D266",
"Manufacturer": "Dell",
"Nodename": "KLINGON_EMPIRE 00400542C346",
"Ramdescription": "Total Ram",
"Ramsize": 64.00,
"Ramtotalslots": 0,
"Ramunit": "MB",
"Ramunusedslots": 0,
"Serialnumber": "NB7V7",
"Smbios": 0,
"Supportssnmp": 1,
"Supportswmi": 1,
"Swdetectiontool": "Maximo Discovery 4.5",
"Systemrole": "Network PC",
"CPUList": {
	"Processor": {
		"Changedate": "2005-02-03 14:23:17.0",
		"Createdate": "2005-02-03 14:23:17.0",
		"Currspeed": "0.00",
		"Makemodel": "Pentium II",
		"Manufacturer": "Intel",
		"Maxspeed": "266.00",
		"Serialnumber": "Source ID: 963",
		"Speedunit": "MHz",
		"CPUVariant": {
			"Processorname": "Pentium II",
			"Processorvar": "Pentium II",
		},
		"manufacturer-info": {
			"Manufacturername": "Intel",
			"Manufacturervar": "Intel",
		
		}
	}
},
....
}

Operation modes of Deployed Assets Connector

The Deployed Assets Connector supports the following operation modes:

AddOnly mode

In AddOnly mode, you can use the Deployed Assets Connector to add deployed asset data to the database. For the specified asset type (Computers, Network Devices, or Network Printers), map the attributes in the Output Map of the Connector.

Iterator mode

In Iterator mode, use the Deployed Assets Connector to read root-level assets and their relationships for the specified asset type, such as Computers, Network Devices, or Network Printers.

Lookup mode

In Lookup mode, you can use the Deployed Assets Connector to search for matching assets. To search for an asset, specify the attributes of the selected asset type as the Link Criteria. For example, to find an asset with NodeID 100, set NodeID equals 100 as the Link Criteria.

Delete mode

In Delete mode, you can use the Deployed Assets Connector to delete existing assets from the database. To delete an asset, specify the attributes of the selected asset type as the Link Criteria.

For more details on using the Attribute Map and Link Criteria, see the "General Concepts" chapter of the SDI v7.2 Users Guide.

Configuration

The parameters of Deployed Assets Connector are:

Examples

Go to the TDI_install_dir/examples/DPAConnector directory of your SDI installation.


Direct TCP/URL scripting

You might want to access URL objects or TCP ports directly, without using the Connectors. The following code is an example that can be put in your Prolog:

TCP

// This example creates a TCP connection to www.example_page_only.com
// and asks for a bad page

var tcp = new java.net.Socket ( "www.example_page_only.com", 80 );
var inp = new java.io.BufferedReader ( new java.io.InputStreamReader
		( tcp.getInputStream() ) );
var out = new java.io.BufferedWriter ( new java.io.OutputStreamWriter
		( tcp.getOutputStream() ) );

task.logmsg ("Connected to server");

// Ask for a bad page
out.write ("GET /smucky\r\n");
out.write ("\r\n");

// When using buffered writers always call flush to make sure data
// is sent on the connection
out.flush ();

task.logmsg ("Wait for response");
var response = inp.readLine ();

task.logmsg ( "Server said: " + response );

URL

// This example uses the java.net.URL object instead of the raw
// TCP socket object

var url = new java.net.URL("http://www.example_page_only.com");
var stream = url.openStream();

var inp = new java.io.BufferedReader ( new java.io.InputStreamReader
		( stream ) );
var str;
while ( ( str = inp.readLine() ) != null ) {
	task.logmsg ( str );
}


Domino/Lotus Notes Connectors

Before connecting to a Domino® Server or a Lotus® Notes® system, it helps to understand which types of connections ("Session types" in Lotus Notes terminology) are possible. For these Connectors to operate, you must install a Domino/Lotus Notes client library; the choice of client library (also see Post Install Configuration) hinges on which Session Type is required.

Session types

Post Install Configuration

There are a few configuration steps which must be performed after SDI has been installed so that the Lotus® Notes/Domino Connectors can run.

Verify that the version of the Domino® Server or Lotus Notes® client that the Connector will be used with is supported.

When a Connector is deployed on a Notes client machine, these steps need to be performed only on the Notes client machine, not on the Domino Server machine to which the Notes client is connected.

When a Connector is deployed on a Domino Server machine, these steps need to be performed on that Domino Server machine.

Lotus provides a Java™ library called Notes.jar, which provides interfaces to native calls that access the Domino Server (possibly over the network). It can be found in the folder where the Domino Server or the Lotus Notes client is installed (for example, C:\Lotus\Domino or C:\Lotus\Notes). Different settings are required depending on whether you create a Local Client Session or a Local Server Session, because the binaries differ between Lotus Domino and Lotus Notes.

If you create a Local Client Session, perform the following:

If you create a Local Server Session, perform the following:

Note: Because of the way the Notes API is implemented, only one of the two jars, ncso.jar or Notes.jar, can exist in TDI_install_dir/jars/3rdparty/IBM. If both jars are present, the behavior of the Connector is unpredictable.

If you create an IIOP Session, perform the following:

Native API call threading

When an AssemblyLine (containing Connectors) is executed by the SDI Server, it runs in a single thread, and only the AssemblyLine thread accesses the AssemblyLine Connectors. The initialization of the Notes® API, selecting entries, iterating through the entries, and the termination of the Connector are performed by one worker thread.

A requirement of the Notes API is that, when a local session is used, each thread that executes Notes API functions must initialize the NotesThread object before calling any Notes API functions. The Config Editor GUI threads do not initialize the NotesThread object, which causes a Notes exception.

There are several ways to initialize the NotesThread object. The way the Connectors do it is to call the NotesThread.sinitThread() method when a local session is created.

That is why all Lotus® Notes and Domino® Connectors use their own internal thread to initialize the Notes runtime and to call all the Notes API functions. The internal thread is created and started on Connector initialization and is stopped when the Connector is terminated. The Connector delegates the execution of all native Notes API calls to this internal thread. The internal thread itself waits for and executes requests for native Notes API calls sent by other threads.

This implementation enables the Connectors to support the Config Editor GUI functionality and multithreaded access in general. The Lotus Notes Connector initializes the Notes runtime if a local session is created.
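
The delegation pattern described above can be sketched in plain Java (illustrative only; the actual Connectors use their own internal mechanism, and the thread name here is hypothetical): one long-lived internal thread performs every call on behalf of all other threads, so per-thread initialization needs to happen only once, on that thread.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ApiThreadDelegator {
    // One internal thread executes all "native API" requests, mirroring how
    // the Notes/Domino Connectors funnel Notes API calls through one thread.
    private final ExecutorService apiThread = Executors.newSingleThreadExecutor(r -> {
        Thread t = new Thread(r, "notes-api-thread");
        t.setDaemon(true);
        return t;
    });

    // Any caller thread submits work; it always runs on the internal thread,
    // which could safely perform the one-time NotesThread initialization.
    public String callOnApiThread() throws Exception {
        return apiThread.submit(() -> Thread.currentThread().getName()).get();
    }

    public void shutdown() {
        apiThread.shutdown();
    }
}
```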

The ncso.jar file

In order to use IIOP sessions, the SDI Lotus® Notes/Domino components require the presence of the ncso.jar file.

From SDI 6.1 onward, ncso.jar is no longer shipped with the SDI product. You must provide this file manually for the SDI Lotus/Domino components to function properly.

However, the ncso.jar file is shipped with the Domino® Server. Take this file from the Domino installation (usually <Domino_root>\Data\domino\java\ncso.jar on Windows platforms) and place it in the SDI_root\jars\3rdparty\IBM folder, so that the SDI Server loads it on initialization. Because ncso.jar is not provided as part of the SDI v7.2 installation, some existing SDI 6.0 functionality changes as follows.

Server aspects

The "-v" command-line option

The SDI Server provides the "-v" command-line option which displays the versions of all SDI components. Since the ncso.jar file will not be provided as part of the SDI installation, if ncso.jar is not taken from the Domino server or Lotus Notes® installation, messages like the following will be displayed (The components which do not rely on the ncso.jar have their versions displayed properly):

ibmdi.DominoUsersConnector:
com.ibm.di.connector.dominoUsers.DominoUsersConnector: 
2006-03-03: CTGDIS001E The version number of the Connector is undefined

The Server API getServerInfo method

The Server API provides a method to request version information about SDI components (Session.getServerInfo). If version information is requested via the Server API about any of the Connectors that rely on ncso.jar, and this jar is not taken from the Domino server or Lotus Notes installation, an error is thrown. For example, if the local Server API is accessed through a script like this:

session.getServerInfo().getInstalledConnectors()

the following error is displayed:

18:16:12  CTGDKD258E Could not retrieve version info for class 
'com.ibm.di.connector.DominoChangeDetectionConnector'.: 
java.lang.NoClassDefFoundError: lotus.domino.NotesException

Running an AssemblyLine, IIOP Session

AssemblyLines which use a Connector (which uses an IIOP session) will fail to execute with a NoClassDefFoundError exception, if the ncso.jar file is not taken from the Domino Server or Lotus Notes installation.

Reported component availability

Component version table

This is the table with the versions of all installed SDI components (available from the context-menu option Show Installed Components on a server visible in the Servers panel). This table fails to display component versions for any of the Notes/Domino Connectors if neither Notes.jar nor ncso.jar is taken from the Domino/Notes installation.

Component combo box

The Insert new object box (activated by choosing Insert Component..., where available) displays all existing SDI Connector modes (not only the supported ones) for the Notes/Domino Connectors if neither Notes.jar nor ncso.jar is taken from the Domino/Notes installation.

"Input Map" connection to the data source

Attempting a connection to the data source from the Input Map tab for any of the Notes/Domino Connectors displays an error that the Connector could not be loaded if the jar library is not taken from the Notes/Domino installation, regardless of which session type is created.

"Classes" folder

This folder, located at TDI_Install_Folder/classes, contains user-provided class files that must be loaded by the system class loader.

In order for a class file to be loaded by the system class loader, the class file needs to be copied to this folder. If the class is inside a Java™ package, then the class file must be put in the corresponding folder under the "classes" folder. For example, if a class file is contained in a Java package named com.ibm.di.classes, then the class file must be put inside the TDI_Install_Folder/classes/com/ibm/di/classes folder.

Only class files in the "classes" folder are loaded. That means that if a jar file is located in this folder, it will not be loaded at all. If the classes are packaged in a jar file, then these classes need to be extracted from the jar file into the "classes" folder.

Domino Change Detection Connector

The Domino® Change Detection Connector enables SDI v7.2 to detect when changes have occurred to a database maintained on a Lotus® Domino server. The Domino Change Detection Connector retrieves changes that occur in a database (NSF file) on a Domino Server. It reports changed Domino documents so that other repositories can be synchronized with Lotus Domino.

Notes:

  1. Due to the Lotus Notes® architecture, this Connector requires native libraries (for both session types, IIOP as well as LocalClient) and is therefore supported only on Windows platforms. The path to the local client libraries, as well as to your Domino server installation, must be added to the PATH variable in the SDI Server startup script, ibmdisrv.bat.
  2. Refer to Supported session types by Connector for an overview of which session types are possible with this Connector.

When running, the Connector reports the object changes necessary to synchronize other repositories with a Domino database, regardless of whether these changes occurred while the Connector was offline or are happening while it runs.

The Domino Change Detection Connector operates in Iterator mode, and reports document changes at the Entry level only.

On each AssemblyLine iteration the Domino Change Detection Connector delivers a single document object which has changed in the Domino database. The Connector delivers the changed Domino document objects as they are, with all their current items and also reports the type of object change - whether the document was added, modified or deleted. The Connector does not report which items have changed in this document or the type of item change. After the Connector retrieves a document change, it parses it and copies all the document items to a new Entry object as Entry Attributes. This Entry object is then returned by the Connector.

This connector supports Delta Tagging at the Entry level only.

This Connector can be used in conjunction with the IBM Password Synchronization plug-ins. For more information about installing and configuring the IBM Password Synchronization plug-ins, see the SDI v7.2 Password Synchronization Plug-ins Guide.

The Connector stores the state of the synchronization locally, on the SDI v7.2 machine. When started, it continues from the last synchronization point and reports all changes after this point, including changes that happened while the Connector was offline.

Note: Changed documents are not delivered in chronological order, or in any other particular order, unless you check the "Deliver Sorted" checkbox in the configuration screen. Refer to Sorting for more information. Without this option, documents changed later can be delivered before documents changed earlier, and vice versa.

The Connector signals end of data and stops when there are no more changes to report. It can, however, be configured not to exit when all changes have been reported, but to stay alive and repeatedly poll Domino for changes.

Using the Connector

Document identification

The Domino Change Detection Connector retrieves the Universal ID (UnID) of Domino documents. Use the UnID value to track document changes reported by the Connector.

For example, when a deleted document is reported, use its UnID value to look up the object that must be deleted in the repository you are synchronizing with. If you are synchronizing Domino users (Person documents), you might need to find out when a user is renamed. When a user is renamed (the FullName item of the Person document is changed), the Connector reports this as a "modify" operation. When you look up objects in the other repository by UnID, you can find the original object, read its old FullName attribute, compare it against the new FullName value, and determine that the user has been renamed.

Deleted documents

Documents that are deleted from a Domino database can be tracked by "deletion stub" objects. Deletion stubs provide the Universal ID and Note ID of the deleted document, but nothing more. That is why when the Connector comes across a deleted document, it returns an Entry which does not contain any document items, but only the following Entry Attributes added by the Connector itself:

Minimal synchronization interval

There is a parameter for each database called "Remove documents not modified in the last x days". Deletion stubs older than this value are removed. If you are interested in processing deleted documents, you must synchronize (run the Connector) at intervals shorter than the value of this parameter.

On both Domino R8.0 and Domino R8.5, this parameter can be accessed from the Lotus Domino Administrator: open the database, choose File -> Replication -> Options for this Application... from the menu, then select Space Savers; the parameter is called Remove documents not modified in the last x days.

The default value of this parameter is 90 days.

Switching to a database replica

UnIDs are the same across replicas of the same database. This allows you to switch to another replica of the Domino database in case the original database is corrupted or not available.

Document timestamps, however, differ between replicas. That is why, when a switch to a replica is done, you must perform a full synchronization (use a new key for "Iterator State Key" and set the "Start At" parameter to "Start Of Data"). This will possibly report many document additions and deletions that have already been applied to the other repository, but it guarantees that no updates are missed.

Structure of the Entries returned by the Connector

All items contained in a document are mapped to Entry Attributes with their original item names.

All date values are returned as java.util.Date objects.

The following Entry Attributes are added by the Connector itself (their values are not available as document items):

The $$UNID and $$NoteID Attributes

The Universal ID (UnID) is the value that uniquely identifies a Domino document. All replicas of the document have the same UnID and the UnID is not changed when the document is modified. This value should be used for tracking objects during synchronization. The Universal ID value is mapped to the $$UNID Attribute of Entry objects delivered by the Connector. The value of the $$UNID Attribute is a string of 32 characters, each one representing a hexadecimal digit (0-9, A-F).

The Connector also returns the NoteID document values. This value is unique only in the context of the current database (a replica of this document will in general have a different NoteID). The Connector delivers the NoteID through the $$NoteID Entry's Attribute. The value of this Attribute is a string containing up to 8 hexadecimal characters.
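
The two identifier formats can be checked mechanically. These hypothetical validators (not part of the Connector API; the NoteID check accepts both letter cases, an assumption since the text does not specify case) encode the rules above:

```java
public class DominoIdFormats {
    // $$UNID: exactly 32 hexadecimal characters (0-9, A-F).
    public static boolean isUnid(String value) {
        return value != null && value.matches("[0-9A-F]{32}");
    }

    // $$NoteID: up to 8 hexadecimal characters (case assumed flexible).
    public static boolean isNoteId(String value) {
        return value != null && value.matches("[0-9A-Fa-f]{1,8}");
    }
}
```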

The $$ChangeType Attribute

An Attribute named $$ChangeType is added to all Entries returned by the Domino Change Detection Connector.

The value of the $$ChangeType Attribute can be one of:

Synchronization state values

Several values are saved into the System Store and represent the current synchronization state. The Connector reads these values on startup and continues reporting changes from the right place.

Regardless of the mode in which the Connector is run, two synchronization state values are stored in the User Property Store. These two values are stored in an Entry object as Attributes with the following names and meanings:

When the Connector is run, in addition to storing values in the User Property Store it creates (if not already created) a Connector-specific table in the System Store. The name of this table is the concatenation of "domch_" and the value of the Iterator State Key Connector parameter. This Connector-specific table stores values with the following characteristics:

The Connector-specific table is cleared each time the Connector successfully completes a synchronization session.

For each instance of the Domino Change Detection Connector executed on the same SDI Server there is a different Connector-specific table in the System Store.

Accessing the Connector synchronization state

While the Connector is offline, you can access the "since" datetime that will be used on the next Connector run. This datetime is stored in the User Property Store.

This is how you can get the datetime value for the next synchronization:

var syncTime = system.getPersistentObject("dcd_sync");
var sinceDateTimeAttribute = syncTime.getAttribute("SYNC_TIME");
var sinceDateTime = sinceDateTimeAttribute.getValue(0);
if (sinceDateTime.getClass().getName().equals("java.util.Date")) {
	main.logmsg("Start date: " + sinceDateTime);
} else {
	main.logmsg("Start date: Start Of Data");
}

"dcd_sync" is the value specified by the Iterator Store Key Connector parameter.

This is how you can set a start datetime for the next synchronization:

var syncTime = system.newEntry();
syncTime.setAttribute("SYNC_TIME", new java.util.Date()); //current time
syncTime.setAttribute("SYNC_CHECK_DOCS", new java.lang.Boolean("false"));
system.setPersistentObject("dcd_sync", syncTime);

Filtering entries

No filtering of documents is performed in this version of the Connector. All database documents that have been created, modified or deleted are reported by the Connector.

If you need filtering, you must implement it yourself by scripting in the Connector Hooks.

Sorting

The changed documents can be delivered sorted by the date they were last modified. To enable this, check the "Deliver Sorted" checkbox in the configuration screen.

Note: Sorting comes with a performance penalty in terms of memory usage and CPU time, so consider carefully whether you really need it.

Domino Server system time is used

The Domino Change Detection Connector uses the timestamp of last modification to detect changes in a Domino database. The Connector state includes timestamp values taken from the Domino Server system clock. That is why changing the Domino Server system time while the Connector is running, or between Connector runs, might result in incorrect Connector operation: changes missed or repeated, incorrect change type reported, and so on.

Processing very large Domino databases (.nsf files)

The Connector might need a large amount of physical memory, for example, when working on very large databases containing 1,000,000 documents or more, especially when performing a full synchronization. This is because the Connector keeps all retrieved document UnIDs in memory for the duration of the synchronization session. For example, 512 MB of physical memory should be enough to process a database that contains about 1,000,000 changed documents (provided that no other memory-consuming processes are running). If this amount of memory is unavailable, you can increase the memory available to SDI.

Also, be mindful of the "Deliver Sorted" parameter - enabling this could have a major performance impact.

API

The Domino Change Detection Connector supports two methods specific to it, as follows:

/**
 * This method saves the synchronization state of the Connector.
 * This method should be called by users (using a script component) whenever
 * they want to save the synchronization state.
 *
 * @throws Exception if the synchronization fails.
 */
public void saveStateKey() throws Exception;

/**
 * Skip the current document. Use this method to skip problem documents when
 * the Connector would otherwise die with an exception.
 * <p>
 * For example, use the following script in the "Default On Error" hook of
 * the Connector:
 *
 * <pre>
 * thisConnector.connector.skipCurrentDocument();
 * </pre>
 * </p>
 *
 * @throws Exception
 *             If the Notes thread is not running or the Notes thread
 *             encounters an error while processing the command.
 */
public void skipCurrentDocument() throws Exception;

Required Setup of the SDI

See the section Supported session types by Connector and the sections below regarding required libraries and possible library conflicts.

Required Domino Setup

Required Domino Server tasks

The Connector requires that the following Domino Server tasks be started on the Domino Server:

If these Domino Server tasks are not started on the server, the Connector will fail.

Required privileges

The Domino Change Detection Connector creates two sessions to the Domino Server: a session through the local Notes client code using the local User ID file, and a remote IIOP session using an internet user account (the same Domino user can be used for both sessions, but this is not required). The accounts used for these sessions must have the following privileges:

Configuration

The Domino Change Detection Connector provides the following parameters:

Troubleshooting the Domino Change Detection Connector

  1. Problem: When you run an AssemblyLine that uses the Domino Change Detection Connector, the AssemblyLine fails with the following exception: NotesException: Could not get IOR from Domino Server: ... where <domino_server_ip> is the IP address of the Domino Server you are trying to access, that is, the value of the Domino Server IP address Connector parameter.

    Solution: This exception indicates that the HTTP Web Server task on the Domino Server is not running. Start the HTTP Web Server task on the Domino Server you are trying to access and then start the AssemblyLine again.

  2. Problem: When you run an AssemblyLine that uses the Domino Change Detection Connector, just after you enter the User ID password at the password prompt the AssemblyLine fails with the following exception: NotesException: Could not open Notes session: org.omg.CORBA.COMM_FAILURE: java.net.ConnectException: Connection refused: connect Host: <domino_server_ip> Port: XXXXX vmcid: 0x0 minor code: 1 completed: No where <domino_server_ip> is the IP address of the Domino Server you are trying to access, that is, the value of the Domino Server IP address Connector parameter.

    Solution: This exception indicates that the DIIOP Server task on the Domino Server is not running. Start the DIIOP Server task on the Domino Server you are trying to access and then start the AssemblyLine again.

    Another reason for this message is that the fully qualified host name of the Lotus Domino server is not correctly set (for example, it was left as localhost or 127.0.0.1). To solve this problem start "Domino Admin", open the server used, go to the Configuration tab and edit the "Server->Current Server document". In this document under the Basic tab you must add the correct value for "Fully qualified Internet host name", save the document and restart the server.

  3. Problem: While the Domino Change Detection Connector is retrieving changes the following exception occurs: Exception in thread "main" java.lang.OutOfMemoryError

    Solution: This exception indicates that the memory available to the SDI Java™ Virtual Machine (the JVM maximum heap size) is insufficient. In general, the Java Virtual Machine does not use all the available memory. You can increase the memory available to the SDI JVM by following this procedure:

    Edit ibmdisrv.bat file in the SDI install directory to change the heap memory size parameters (-Xms and -Xmx). Refer to the SDI v7.2 Problem Determination Guide for more details.

    Note: -Xms is the initial heap size in bytes and -Xmx is the maximum heap size in bytes. Set these values according to your needs.

  4. Problem: The Connector reports all database documents as deleted although they are not deleted.

    Solution: The user of the local User ID file is not given the necessary privileges on the database polled for changes. Give the necessary user rights as described in Required privileges.

  5. Problem: When you run an AssemblyLine that uses the Domino Change Detection Connector, the following exception occurs: java.lang.UnsatisfiedLinkError: <Security Directory Integrator_install_folder>\libs\domchdet.dll: Can't find dependent libraries where <Security Directory Integrator_install_folder> is the folder where SDI is installed.

    Note: If you run the SDI Server from the command prompt, then before this exception message is printed, a popup dialog box appears saying "This application has failed to start because nNOTES.dll was not found. Re-installing the application may fix this problem."

    Solution: This exception message as well as the popup dialog box are displayed because the Connector is unable to locate the Lotus Notes dynamic-link libraries. Most probably the path to the Lotus Notes directory specified in ibmditk.bat or in ibmdisrv.bat is either incorrect or not specified at all. That is why you should verify that the Lotus Notes directory specified in the PATH environment variable in both ibmditk.bat and ibmdisrv.bat is correct. For more information please see Required Setup of the SDI.

  6. Problem: Some of the documents contain invalid data and cause the Connector to throw an exception and stop. These documents comprise only a small fraction of the whole database. You want to skip the problem documents and continue iterating.

    Solution: Override the "Default On Error" hook of the Connector. Use the skipCurrentDocument() method to increment the internal document counter of the Connector so that it skips the problem document. Also use the system.skipEntry() method to instruct the AssemblyLine to skip the current cycle – the Connector failed to read the document so it has no meaningful data to provide for this cycle.

    The script that you put in the "Default On Error" hook should distinguish between non-fatal errors (for example, a document contains an invalid field) and fatal errors (for example, the Domino server is not running). You should not let the Connector continue iterating after a fatal error occurs. Here is a sample script for the "Default On Error" hook. The script skips only documents that contain an invalid date:

    var ex = error.getObject("exception");
    var goOn = false;
    if (ex != null) {
    	if (ex.getMessage().indexOf("Invalid date") != -1) {
    		goOn = true;
    	}
    }
    if (goOn) {
    	thisConnector.connector.skipCurrentDocument();
    	system.skipEntry();
    } else {
    	throw "Fatal error: "+error;
    }
  7. Problem: When you run an AssemblyLine that uses the Domino Change Detection Connector, the AssemblyLine fails with the following exception:
    java.lang.Exception: CTGDJE010E Connector was unable to initialize local Notes session to Domino Server. 
    Exception encountered: java.lang.Exception: Native call SECKFMSwitchToIDFile failed with error: code 259, 'File does not exist'.
    Solution: Detailed information for this problem can be found in the section Post Install Configuration, in the paragraphs "If you create Local Client Session" and "If you create IIOP Session".

Compatibility

Refer to the section Supported session types by Connector for information on how this connector should be set up with the necessary libraries, and about interactions with other Domino/Lotus Notes Connectors.

See also

Accessing Java Session Objects, Accessing documents using Java classes.

Domino Users Connector

The Domino® Users Connector provides access to Lotus® Domino user accounts and the means for managing them. With it, you can perform the following actions:

Currently, the Connector does not support recertifying users.

The Domino Server accessed can be on a remote server, or on the local machine.

It operates in Iterator, Lookup, AddOnly, Update and Delete modes, and enables the following operations to be performed:

This Connector can be used in conjunction with the IBM Password Synchronization plug-ins. For more information about installing and configuring the IBM Password Synchronization plug-ins, please see the SDI v7.2 Password Synchronization Plug-ins Guide.

Note: The Domino Users Connector requires Lotus Notes® release 7.0, 8.0, or 8.5.x; and Lotus Domino Server version 7.0, 8.0, or 8.5.x.

Deployment and connection to Domino server

Refer to the section Supported session types by Connector for more information about required libraries setup and possible library conflicts.

To authenticate the local server connection, Domino requires the user's short name and internet password (both are Connector parameters).

Configuration

The Connector needs the following parameters:

Parameter migration from earlier versions

SDI v7.2 introduces a Session Type parameter for this Connector, as part of a harmonization with the other Lotus Notes/Domino Connectors. The Session Type parameter covers functionality previously configured through the Authentication Mechanism parameter:

Security

To allow SDI to access the Domino Server, you might have to enable it through Domino Administrator -> Configuration -> Current Server Document -> Security -> Java/COM Restrictions. The user account you have configured SDI to use must belong to a group listed under Run restricted Java/Javascript/COM and Run unrestricted Java/Javascript/COM.

Configure encryption between the Domino Server and a client

When the Domino Users Connector is running on a Notes client machine, there is communication going on between the Notes client machine and the Domino Server machine.

Port encryption in Domino and/or Notes can be used to encrypt this communication. Two options are available:

Authentication

The Domino Users Connector impersonates a Domino user in order to access the Domino Directory (Names and Address Book database).

The Domino Users Connector supports two authentication mechanisms - Internet Password authentication and Notes ID file based authentication.

Authorization

The Domino Server uses the Access Control Lists of the Domino Directory (Names and Address Book database) to verify that the Domino user that the Connector uses actually has the right to access the required database, document or field.

If the Connector is used to change the FirstName or LastName (or both) of a Domino user, then the Access Control Lists of databases to which the user had access before the renaming must be updated manually, so that the new user name is recognized.

Using the Domino Users Connector

Iterator mode

The Connector iterates through the Person documents of the Name and Address Book database. All Person documents (matching the filter, if a filter is set) are delivered as Entry objects, and all document items, except attachments, are transformed into Entry attributes.

Along with the attributes corresponding to the Person document items, the Entry returned by the Connector contains some extra (derived) attributes for which values are calculated by the Connector. Here is the list of the derived attributes:

Lookup mode

In Lookup mode, the Connector performs searches for user documents, and the type of search depends on the value of the Use full-text search parameter:

When simple link criteria are used, you can use both canonical (CN=UserName/O=Org) and abbreviated (UserName/Org) name values to specify the user's FullName. The Connector automatically processes and converts the value you specified, if necessary.
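To illustrate the relationship between the two formats, here is a minimal sketch of the kind of conversion the Connector performs automatically. The function name is hypothetical and not part of the Connector API:

```javascript
// Hypothetical illustration: convert a canonical Notes name to its
// abbreviated form by stripping the component prefixes (CN=, OU=, O=).
// "CN=UserName/O=Org" becomes "UserName/Org"; an already abbreviated
// name is returned unchanged.
function abbreviateNotesName(name) {
    return name
        .split("/")
        .map(function (part) { return part.replace(/^[A-Za-z]+=/, ""); })
        .join("/");
}
```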

When advanced link criteria are used, you must specify the user's FullName in the correct format, which is:

AddOnly mode

The AddOnly mode always adds a new Person document in the Name and Address Book database. The add process accepts whatever attributes are provided by the Attribute Mapping; however, for correct user processing by Domino, the attribute names must match the Item names Domino operates with. Because the Connector operates with users only, it always sets the attributes Type and Form to the value Person, overriding any values set for these attributes during the Attribute Mapping process. The LastName Domino user attribute is required for successful creation of a Person document. The HTTPPassword attribute is not required, but if present its value is automatically hashed by the Connector.

The Connector can also register the new user, based on a fixed schema of attributes. The table below specifies these attributes and the Connector behavior according to their presence or absence in the conn Entry, and their values:
Attribute name Type Required for registration? Value
REG_Perform Boolean Yes If set to true the Connector performs user registration.

If this attribute is missing, or its value is false, the Connector does not perform user registration, regardless of the presence and the values of the other REG_ Attributes.

REG_IdFile String Yes Contains the full path of the ID file to be registered. For example, c:\\newuserdata\\newuser.id
REG_UserPw String No The user's password.
REG_Server String No The name of the server containing the user's mail file.

If the Attribute is missing, the value will be obtained from the current Connector's Domino Session.

When the Connector is running on a Notes client machine and is registering a user, this Attribute must be specified in order to create a mail file on the server for the newly registered user.

REG_CertifierIDFile String Yes The full file path to the certifier ID file.
REG_CertPassword String Yes The password for the certifier ID file.

Note: If the certifier password is wrong when registering users, a popup window is displayed. Ensure that the Certifier password is correctly specified.

REG_Forward String No The forwarding domain for the user's mail file.
REG_AltOrgUnit Vector of <String> No Alternate names for the organizational unit to use when creating the ID file.
REG_AltOrgUnitLang Vector of <String> No Alternate language names for the organizational unit to use when creating the ID file.
REG_CreateMailDb Boolean/String No true – Creates a mail database;

false – Does not create a mail database; it is created during setup.

If this attribute is missing, a default value of false is assumed. If this attribute is true, the MailFile attribute must be mapped to a valid path.

REG_MailTemplateFile String No The filename of a Notes template database, which the Connector will use to create the user mail file. If this Attribute does not exist the default mail template is used.
REG_MailTemplateServer String No The IP address or hostname of the Domino server machine on which the mail template database (specified by "REG_MailTemplateFile") resides. If this Attribute does not exist the local Domino server machine is used.
REG_MailDbInherit Boolean/String No true - the user mail database to be created will inherit any changes to the mail template database design;

false - the user mail database to be created will not inherit any changes to the mail template database design.

If this Attribute is missing, a default value of "false" will be assumed.

REG_StoreIDInAddressBook Boolean/String No

true - stores the ID file in the server's Domino Directory;

false - does not store the ID file in the server's Domino Directory.

If this Attribute is missing, a default value of "false" is used.

REG_Expiration Date No The expiration date to use when creating the ID file. If the attribute is missing, or its value is null, a default value of the current date + 2 years is used.
REG_IDType Integer/String No The type of ID file to create: 0 - create a flat ID; 1 - create a hierarchical ID; 2 - create an ID that depends on whether the certifier ID is flat or hierarchical.

If the attribute is missing, a default value of 2 is used.

REG_IsNorthAmerican Boolean/String No true – the ID file is North American;

false – the ID file is not North American.

If this attribute is missing, a default value of true is used.

REG_MinPasswordLength Integer/String No The REG_MinPasswordLength value defines the minimum password length required for subsequent changes to the password by the user. The password used when the user registers is not restricted by the REG_MinPasswordLength value.

If this attribute is missing, a default value of 0 is used.

REG_OrgUnit String No The organizational unit to use when creating the ID file. If this attribute is missing, a default value of " " is used.
REG_RegistrationLog String No The log file to use when creating the ID file. If this attribute is missing, a default value of " " is used.

REG_RegistrationServer String No The server to use when creating the ID file. This attribute is used only when the created ID is stored in the server Domino Directory, or when a mail database is created for the new user.

The attributes for which the Required for registration field is set to Yes are required for successful user registration. Along with these REG_ Attributes, the LastName Domino user attribute is also required for successful user registration.

If REG_Perform is set to true and any of the other attributes required for registration are missing, the Connector throws an Exception with a message explaining the problem.
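The registration rules above can be sketched as a small validation routine. This is an illustration only, assuming the conn Entry is represented as a plain object; the helper and its name are not part of the Connector API:

```javascript
// Attributes the table above marks as required for registration,
// plus the LastName Domino user attribute.
var REQUIRED_FOR_REGISTRATION = ["REG_IdFile", "REG_CertifierIDFile", "REG_CertPassword", "LastName"];

// Hypothetical helper: returns the names of required registration
// attributes missing from the entry. If REG_Perform is absent or not
// true, no registration is attempted, so nothing is reported missing.
function missingRegistrationAttributes(entry) {
    if (entry["REG_Perform"] !== true) {
        return [];
    }
    return REQUIRED_FOR_REGISTRATION.filter(function (name) {
        return entry[name] === undefined || entry[name] === null;
    });
}
```

With this rule set, an Entry that sets REG_Perform to true but omits, say, REG_CertPassword would be reported as incomplete, which mirrors the exception the Connector throws in that situation.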

Update mode

In Update mode, the following happens:

  1. A search for the Entry to be updated is performed in Domino.
  2. If an Entry is not found, an AddOnly operation is performed as described in the AddOnly mode (including user registration if the necessary REG_ Attributes are supplied).
  3. If the Entry is found, a modify operation is performed.

When modifying a user, the Domino Users Connector always modifies its Person document in the Name and Address Book database with the attributes provided. The modify process accepts whatever Attributes are provided by the Attribute Mapping; however, for correct user processing by Domino, the Attribute names must match the Item names Domino operates with. See List of Domino user attributes (or Person document items) for a (possibly incomplete) list of Domino user properties.

As the Connector operates with users only, it does not modify the attributes Type and Form (their value must be Person) regardless of the Attribute Mapping process. If the HTTPPassword attribute is specified, its value is automatically hashed by the Connector.

In the process of modifying users, the Domino Users Connector provides options to disable and enable users. A user is disabled by adding the user's name to a specified Deny List only group (consult the Domino documentation for information on Deny List only groups. Go to http://www.lotus.com/products/domdoc.nsf, and click the Lotus Domino Document Manager 3.5 link). A user is enabled by removing the user's name from all Deny List only groups.

The Connector performs user disabling or enabling depending on the presence in the conn Entry, and the values of the following Entry attributes:

The Connector can perform user registration on modify too. To determine whether or not to perform registration, the same rules apply as in the AddOnly mode. The same schema of attributes is used and all REG_ Attributes have the same meaning.

If the REG_ Attributes determine that registration is performed, the following cases might happen:

Notes:

  1. When registering users on modify, turn off the Compute Changes Connector option. When turned on, the Compute Changes function might clear attributes required in certain variants of user registration, and this results in registration failure.
  2. When registering users on modify, you must know beforehand what the user's FullName will be after registration, and you must provide the FullName attribute in the conn Entry with this value (probably constructed by scripting). This is not very convenient and requires deep knowledge of the Domino registration process. Without setting the expected user's FullName beforehand, however, you risk registering a new user instead of the existing one.
  3. When registering users on modify, you must provide the attribute FirstName in the conn Entry with the value of the FirstName of the user you need to register. If the FirstName attribute is not provided, you risk creating a new user.
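As a hedged illustration of note 2, the expected FullName in canonical form can be constructed from the name parts and the certifier's organizational components. The helper below is hypothetical, and the organizational components depend entirely on your certifier ID:

```javascript
// Hypothetical sketch: build the canonical FullName expected after
// registration, e.g. "CN=John Smith/O=IBM". The orgComponents array
// (e.g. ["OU=Dev", "O=IBM"]) must match your certifier ID.
function expectedFullName(firstName, lastName, orgComponents) {
    return ["CN=" + firstName + " " + lastName].concat(orgComponents).join("/");
}
```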

Delete mode

For user deletion, the Connector uses the Domino Administration Process.

The Connector posts Delete in Address Book requests in the Administration Requests Database. Each request of type Delete in Address Book, when processed by the Domino Administration Process, triggers the whole chain of posting and processing administration requests and actions performed by the Administration Process to delete a user. The result of posting a Delete in Address Book administration request is the same as manually deleting a user through the Domino Administrator. In particular:

The Connector enables tuning of each single user deletion it initiates. The parameters that can be configured are:

The delete parameters described previously have default values that can also be changed through APIs provided by the Domino Users Connector. Each time an instance of the Domino Users Connector is created (in particular on each AssemblyLine start), the parameters have the following default values:

If the default values fit the type of deletion you want, then no special configuration for the deletion is needed; you only need to specify the correct link criteria in the Delete Connector.

You can, however, use the APIs provided by the Domino Users Connector to change these default values at runtime (using scripting):

The default values for the delete parameters are used in all deletions performed by the Connector, until another change in their values is made, or the Connector instance (object) is destroyed.

The following are possible scenarios that use these methods:

Another method to manipulate the delete parameters is to provide the following attributes in the conn Entry:

The use of the DEL_DeleteMailFile and DEL_DeleteGroupName attributes in the conn Entry overrides the default values of the corresponding delete parameters for the current deletion only.

Setting the DEL_DeleteMailFile and DEL_DeleteGroupName attributes in the conn Entry can be done through scripting in the Before Delete hook. Adding attributes by scripting might not be very convenient, so you might prefer to use the default delete parameters values and the APIs that change them.

List of Domino user attributes (or Person document items)

The following is a list (possibly incomplete) of Domino user document items, which are understood or processed by Domino when the server operates with users. For more information on these Items, consult the Lotus Domino documentation.

The same names must be used for Entry attribute names when performing Add, Modify, Delete or Lookup operations with the Connector.

Domino Server for Unix/Linux

For the Domino Users Connector with Domino Server for Unix/Linux, you must update the ibmditk and ibmdisrv scripts. Add the following two lines to each script, after the PATH definition and before the startup line:

LD_LIBRARY_PATH=Domino_binary_folder
export LD_LIBRARY_PATH

where Domino_binary_folder is the folder containing Domino native libraries, for example, /opt/lotus/notes/latest/sunspa for Solaris, and /opt/lotus/notes/latest/linux for Linux.

Start SDI with the Domino user (do not use root). The Domino user is called notes unless it is changed during the installation of the Domino Server.

Examples

Go to the TDI_install_dir/examples/DominoUserConnector directory of your SDI installation.

See also

Domino AdminP Connector,

Lotus Notes Connector,

Register Users in Domino,

Register Users in Domino using Java™.

Domino AdminP Connector

The Domino® Administration Process is a program that automates many routine administrative tasks. For example, if you delete a user account, the Administration Process locates that user's name in the Domino Directory and removes it, locates and removes the user's name from ACLs, and makes any other necessary deletions for that user. When you put a request in the Domino Administration Requests database the process carries out all required actions.

Overview

The Domino AdminP Connector is a special version of the Lotus® Notes® Connector, enhanced with the capability to sign fields while adding a document to the Domino database. In comparison with the Lotus Notes Connector, the Database parameter is not visible (it is always set to admin4.nsf, the Domino Administration Requests database), and there is a new parameter: Admin Request Type.

The Domino AdminP Connector supports the following Connector modes:

Admin requests signing

Domino Administration requests need to be signed before they are added to the admin4.nsf database in order to be further processed by the Administration process. When you sign a document, a unique portion of your user ID is attached to the signed note to identify you as the author. Otherwise the following error appears:

All of the required fields in the request have not been signed. 

Cause of error - An unauthorized person or a non-Domino program edited a posted request. 
This indicates a failed security attack.

Special coding in the Domino AdminP Connector ensures that all items of the Lotus Domino Document are being signed before the Document is saved.

Note: Even the Lotus Domino administrator must have the rights to "Run Unrestricted methods & operations" in order to be able to sign documents. This can be accomplished using the Domino Administrator by adding that account, for example, administrator/IBM, to the Server -> Security -> Run Unrestricted methods & operations list.

Schema

The Domino AdminP Connector has a set of predefined Administration request schemas. They are described in its configuration file, tdi.xml, in a similar manner to the input/output schemas defined in all other Connectors. However, there are two differences:

Currently, there are only two types of Admin requests bundled in the configuration of this Connector. They have the following definitions:

Rename User
Table 12. Rename User Schema
Attribute Description
Form The form of the request. Should be "AdminRequest".
ProxyAction Corresponds to the id of the request made. For RenameUser the id is "118".
ProxyAuthor The author of the request, who must have administrative privileges (for example, CN=administrator/O=IBM).
ProxyNameList The Domino user's name to be modified.
ProxyNewWebFirstName The new first name of the user.
ProxyNewWebLastName The new last name of the user.
ProxyNewWebMI The new middle name of the user.
ProxyNewWebName A new UserName to be added.
ProxyProcess The process of the request. Should be "AdminP".
ProxyServer The target server. Typically "*".
ProxyWebNameChangeExpires The expiration period of the request. The default is 21 days.
Type Type of the request. Should be "AdminRequest".
Rename Group
Table 13. Rename Group Schema
Attribute Description
Form The form of the request. Should be "AdminRequest".
ProxyAction Corresponds to the id of the request made. For RenameGroup the id is "40".
ProxyAuthor The author of the request, who must have administrative privileges (for example, CN=administrator/O=IBM).
ProxyNameList The Domino group's name to be modified.
ProxyNewGroupName The new name of the group.
ProxyProcess The process of the request. Should be "AdminP".
ProxyServer The target server. Typically "*".
ProxyWebNameChangeExpires The expiration period of the request. The default is 21 days.
Type Type of the request. Should be "AdminRequest".

These Schemas are returned when you perform a Discover Attributes action in the Configuration Editor. The third schema, "All", has no attributes defined.

The "Rename User" and "Rename Group" schemas define the necessary fields to rename a user or group in a Domino Directory with samples of the values needed. The other schema ("All") is empty and used for any other type of requests; in order to use these you must add new schemas with valid attributes and corresponding new dropdown items.

Configuration

The Connector needs the following parameters:

See also

Domino Users Connector,

Lotus Notes Connector,

Java™ API.

Lotus Notes Connector

The Lotus® Notes® Connector provides access to Lotus Domino® databases.

It enables you to do the following tasks:

Note: The Lotus Notes Connector requires Lotus Notes release 7.0 or higher.

Known limitations

For Lotus Notes Connector using Local Client or Local Server modes only: you might not be able to use the SDI Config Editor to connect to your Notes database. Sometimes, the Notes Connector prompts the user for a password even though the Notes Connector provides it to the Notes APIs. The prompt is written to standard output, and input from the user is read from standard input. This prompting is performed by the Notes API and is outside the control of SDI:

When the Session Type is LocalClient, you can start your Notes or Designer client and permit other applications to use its connection by setting a flag in the File -> Security -> User Security panel; click Security Basics, and select Don't prompt for a password from other Lotus Notes-based programs (reduces security) under "Your Login and Password Settings". In this case, the Notes Connector (that is, the Notes API) ignores the provided password and reuses the current session established by the Notes or Designer client. The Notes or Designer client must be running to enable SDI to reuse its session.

Note: You can switch to using DIIOP mode to configure the AssemblyLines and switch back to Local Client or Local Server mode when you run the AssemblyLine through SDI Server.

Session types

The following session types are supported (also refer to Supported session types by Connector for more information regarding libraries, setups and incompatibilities with other Domino Connectors):

Connect with IIOP

The Connector can use IIOP to communicate with a Domino server. To establish an IIOP session with a Domino server, the Connector needs the IOR string that locates the IIOP process on the server.

When you configure the Notes Connector, specify a hostname and, optionally, a port number where the server is located. This hostname:port string is actually the address of the Domino server's http service, from which the Connector retrieves the IOR string. The IOR string is then used to create the IIOP session with the server's IIOP service (diiop). The http service is needed only for the discovery of the IOR string: the Connector requests a document called /diiop_ior.txt from the Domino http server, which is expected to contain the IOR string. You can replace the hostname:port specification with the IOR string itself, bypassing this first step and the dependency on the http server. The diiop_ior.txt file is typically located in the data/domino/html directory in your Domino server installation directory. Check the Web configuration in the Lotus Administrator for the exact location.

To verify the first step, go to the following URL: http://hostname:port/diiop_ior.txt where hostname is the hostname, and port is the port number of your Domino server. You receive a document containing IOR: followed by a long string of characters. If you get a response similar to this, the first step is verified. If this fails, check that the HTTP configuration on the server enables anonymous access, and verify that the http process is running.
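A minimal sketch of the second part of this step - pulling the IOR string out of the retrieved document - might look as follows (the function name is illustrative, not part of the Connector API):

```javascript
// Hypothetical helper: extract the IOR string from the body of
// /diiop_ior.txt. Returns null if no IOR string is found.
function extractIOR(documentText) {
    var match = documentText.match(/IOR:[0-9a-fA-F]+/);
    return match ? match[0] : null;
}
```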

Note: When configuring an IIOP session, in order to browse available databases in the configuration of the Connector in the Config Editor, the Domino server must support this and allow the various controls to be populated with lists of available databases, views, forms and agents. The Domino server setting to make databases available for browsing is located under Server document -> Internet Protocols -> HTTP tab -> Allow HTTP clients to browse databases. It must be set to Yes and the Domino server must be restarted.

Configuration

The Connector needs the following parameters:

UNID Support

UNID is the universally unique ID of a Notes document - it is unique even across database replicas.

The Notes API does not allow the UNID to be used directly in a search filter passed to the Notes/Domino search functions. That is why the option of using the UNID in search criteria is accomplished in the following way:

The described functionality does not cause compatibility issues with earlier versions, because the Notes API does not allow UNIDs in formulas/search filters.
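For reference, a UNID is a 32-character hexadecimal string; a simple format check (an illustrative helper, not part of the Connector) looks like this:

```javascript
// Hypothetical check: a Notes UNID is 32 hexadecimal characters.
function isUNID(value) {
    return /^[0-9A-Fa-f]{32}$/.test(value);
}
```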

Support of RichText attributes

Lotus Domino documents contain items of type lotus.domino.RichTextItem. Traditionally, when read, such items are received in the attribute map of an Entry as plain text and, vice versa, written as plain text back to the Domino document. The Connector supports additional functionality for this kind of item:

Example scripts

Creating a RichTextItem
var rti = NotesConnectorName.connector.getDominoSession().getDatabase("", <database_name>).createDocument().createRichTextItem("Body");
var header = NotesConnectorName.connector.getDominoSession().createRichTextStyle();

header.setBold(lotus.domino.RichTextStyle.YES);
header.setColor(lotus.domino.RichTextStyle.COLOR_YELLOW);
header.setEffects(lotus.domino.RichTextStyle.EFFECTS_SHADOW);
header.setFont(lotus.domino.RichTextStyle.FONT_ROMAN);
header.setFontSize(14);
	    
rti.appendStyle(header);
rti.appendText("Sample text which will be formatted with the above style.");

work.setAttribute("Body", rti);
Extracting attachment from a RichTextItem
var doc = NotesConnectorName.connector.getDominoSession().getDatabase("", <database_name>).getAllDocuments().getFirstDocument();
if (doc.hasEmbedded()) {
	var body = doc.getFirstItem("Body");
	var rtnav = body.createNavigator();
	if (rtnav.findFirstElement(lotus.domino.RichTextItem.RTELEM_TYPE_FILEATTACHMENT)) {
		do {
			var att = rtnav.getElement();
			var path = "c:\\Files\\" + att.getSource();
			att.extractFile(path);
			main.logmsg(path + " extracted");
		} while (rtnav.findNextElement());
	} else {
		main.logmsg("No attachments");
	}
} else {
	main.logmsg("No attachments or embedded objects");
}

For more information and examples see the Domino documentation at: http://www-128.ibm.com/developerworks/lotus/documentation/dominodesigner/

RichText limitations

The lotus.domino.RichTextItem class is not serializable, and items of this type cannot be transferred through RMI. This causes a java.io.NotSerializableException when an Object of this type is accessed through the Remote Server APIs. Once a non-serializable object gets into the Entry, the whole Entry becomes non-serializable.

Note: This also applies to the lotus.domino.DateTime class.

This serialization limitation also prevents lotus.domino.RichTextItems from being transferred between different Domino databases. For this reason, a RichTextItem can be added or updated only with another RichTextItem from the same database, or with a RichTextItem created by a script using the Domino API, again for the same database.

Setting quota and file ownership

The Connector can only directly manipulate Lotus Notes database entries, not database properties. However, database quotas can be set by means of scripting, using a Script Component and a configured Lotus Notes Connector. The sample code below sets a file size quota and writes the desired mail owner access to the ACL.

//NotesIterator is the NotesConnector name in the AssemblyLine
var db = NotesIterator.connector.getDominoDatabase(null);
//uses the public getDominoDatabase(...) method of the NotesConnector class; 
//giving null for method parameter will return the database configured in the Connector
main.logmsg("Old quota: " + db.getSizeQuota()); 
//should print the old database size quota
db.setSizeQuota(5000); 
//sets the size quota to 5000KB
main.logmsg("New quota: " + db.getSizeQuota()); 
//will print the new database size quota in kilobytes, i.e. 5000
var acl = db.getACL();	
//get the database access control list
var ACLEntry = acl.createACLEntry("DesiredNotesUser", lotus.domino.ACL.LEVEL_MANAGER);
//create new ACL Entry
ACLEntry.setUserType(lotus.domino.ACLEntry.TYPE_PERSON);	//set user type equal to Person
acl.save();	
//save the access control list

Security

To have SDI access your Domino server, you must enable it through Domino Administrator -> Security -> IIOP restriction. The user account you configured for the SDI to use must belong to a group listed under Run restricted Java/Javascript and Run unrestricted Java/Javascript.

The Domino Web server must be configured to enable anonymous access. If not, the current version of the Notes Connector cannot connect to the Domino IIOP server.

Note: If you want to encrypt the HTTPPassword field of a Notes Address Book, add the following code to the AssemblyLine:

var pwd = "Mypassword";
var v = dom.connector.getDominoSession().evaluate("@Password(\"" + pwd + "\")");
ret.value = v.elementAt(0);
This code uses Domino's password encryption routines to encrypt the variable pwd. It can be used anywhere that you want to encrypt a string using the @Password function that Domino provides. A good place to use this code is in the Output Map for the HTTPPassword attribute.

See also

Wikipedia on Lotus® Notes®.


TIM DSMLv2 Connector

This Connector is used in solutions which require communication with IBM Security Identity Manager.

The ITIM server provides a communication interface which uses an ITIM-proprietary version of DSMLv2. This ITIM-proprietary version of DSMLv2 does not fully comply with the DSMLv2 specification; hence the Connector name - TIM DSMLv2 Connector.

This Connector is used for both:

using the ITIM-proprietary DSMLv2 communication interface.

The version of ITIM supported is 5.0 and higher.

The Directory Services Markup Language v1.0 (DSMLv1) enables the representation of directory structural information as an XML document. DSMLv2 goes further, providing a method for expressing directory queries and updates (and the results of these operations) as XML documents.

Note: This Connector is specially designed for use with ITIM; for generic use, use the DSMLv2Soap Connector and/or the DSMLv2SoapServer Connector instead.

The TIM DSMLv2 Connector connects to an IBM Security Identity Manager Server repository using DSML over HTTP.

The Connector connects to the DSMLv2 ITIM event handler (introduced in ITIM 4.5) that allows the import of data into ITIM with ITIM acting as a DSMLv2 server. Therefore, only ITIM Server 4.5 and above is supported. The TIM DSMLv2 Connector uses the ITIM DSML JNDI driver "dsml2.jar", to connect to and interact with the ITIM Server. Deployment of the DSMLv2 Connector uses JNDI queries to interact with the ITIM repository.

The Connector supports the AddOnly, Delete, Iterator, Lookup and Update modes.

Skip Lookup in Update and Delete mode

The TIM DSMLv2 Connector supports the Skip Lookup general option in Update or Delete mode. When it is selected, no search is performed prior to actual update and delete operations. It requires a Name parameter (for example, $dn for LDAP) to be specified in order to operate properly.

Using the Connector with ITIM Server

When connecting to an ITIM Server, the following URL should be specified in the TIM DSMLv2 Connector: http://<ITIM_Server_host:ITIM_Server_port>/enrole/dsml2_event_handler; for example, "http://192.168.113.12:9080/enrole/dsml2_event_handler".

The following limitations apply to TIM DSMLv2 Connector modes when interacting with ITIM Server:

When interacting with ITIM Server, all JNDI queries and filters, used either from the GUI or in scripting (in Advance Search Criteria, for example) must be enclosed in brackets, for example "(uid=user1)".
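The bracket requirement can be sketched with a small helper. The function name below is illustrative, not part of the SDI API; it simply normalizes a filter before it is used against the ITIM Server:

```javascript
// Hypothetical helper: ensure a JNDI/LDAP-style filter is enclosed in
// brackets, as required when the TIM DSMLv2 Connector talks to ITIM Server.
function ensureBracketed(filter) {
  var f = filter.trim();
  if (f.charAt(0) !== "(" || f.charAt(f.length - 1) !== ")") {
    return "(" + f + ")";
  }
  return f;
}
```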

HTTPS (SSL) Support

In order to use a secure HTTPS connection to the DSMLv2 Server, the provider URL specified must begin with "https://" and the server's certificate must be included in SDI's trust store.

Configuration

The TIM DSMLv2 Connector needs the following parameters:

See also

ITIM Agent Connector,

DSML Identity Feed.


DSMLv2 SOAP Connector

The DSMLv2 SOAP Connector implements the DSMLv2 standard. The Connector is able to:

Supported Connector Modes

The Connector mode determines the type of DSML operation the Connector requests. The DSMLv2 SOAP Connector supports the following modes:

Extended Operations

In CallReply mode, the DSMLv2 SOAP Connector can send DSMLv2 extended operations. Extended operations are identified by their Operation Identifier (OIDs). For example, the OID of the extended operation for retrieving a part of the log file of the IBM Security Directory Server is 1.3.18.0.2.12.22.

Extended operations can also have a value property, which is a data structure containing input data for the corresponding operation. The value property of the extended operation must be Basic Encoding Rules (BER) encoded and then base-64 encoded in the DSMLv2 message. The user of the DSMLv2 SOAP Connector is responsible only for BER encoding the value property. The Connector will automatically base-64 encode the data when creating the DSMLv2 message.

Two classes are used for BER encoding and decoding: BEREncoder and BERDecoder, located in the com.ibm.asn1 package.

The following example illustrates sending a DSMLv2 extended operation request and the processing of the response:

  1. Place the following script code in Output Map for attribute dsml.extended.requestvalue:
    enc = new Packages.com.ibm.asn1.BEREncoder();
    serverFile = 1;  //slapdErrors log file
    
    nFirstLine = new java.lang.Integer(7200);  
    nLastLine = new java.lang.Integer(7220);
    
    seq_nr = enc.encodeSequence();
    enc.encodeEnumeration(serverFile);   
       
    enc.encodeInteger(nFirstLine);
    enc.encodeInteger(nLastLine);
    
    enc.endOf(seq_nr);
    var myByte = enc.toByteArray();
    
    ret.value = myByte;
  2. Place the following script code in the After CallReply hook of the Connector:
    var ba = conn.getAttribute("dsml.response").getValue(0);
    bd = new Packages.com.ibm.asn1.BERDecoder(ba);
    
    main.logmsg("SLAPD log file:");
    main.logmsg(new java.lang.String(bd.decodeOctetString()));

SOAPAction Header

The DSMLv2 SOAP Connector by default sends an empty SOAPAction header. The OASIS DSMLv2 standard states: "Each SOAP request body contains a single batchRequest. A SOAP node SHOULD indicate in the 'SOAPAction' header field the element name of the top-level element in the <body> of the SOAP request." It is valid for this header to be empty, but it should also be possible to set it. Additionally, some vendors have made the header mandatory in their DSML definitions (Sun is an example; see http://docs.oracle.com/cd/E19261-01/820-2765/6nebir7ld/index.html ).

If needed, you can set the SOAPAction header yourself by means of the SOAPAction Header parameter.

Configuration

The DSMLv2 SOAP Connector uses the following parameters:


DSMLv2 SOAP Server Connector

The DSMLv2 SOAP Server Connector listens for DSMLv2 requests over HTTP. Once it receives the request, the Connector parses the request and sends the parsed request to the AssemblyLine workflow for processing. The result is sent back to the client over HTTP.

The DSMLv2 SOAP Server Connector is able to:

Extended operations

The DSMLv2 SOAP Server Connector supports extended operations. The value property of the extended operation is automatically base-64 decoded from the DSMLv2 message. You must then decode this value using Basic Encoding Rules (BER). You must also BER encode the responseValue property represented by the dsml.response Entry Attribute. The Connector automatically base-64 encodes the data when creating and sending the DSMLv2 response.

We can use the following two helper classes to BER encode and decode data:

Note: The schema of the extended operations cannot be automatically determined by the Connector. There is no metadata that describes the structure of an extended operation request.

The following example illustrates an extended operation request to return a part of the IBM Security Directory Server log:

var name = work.getString("dsml.extended.requestname");
var ba = work.getAttribute("dsml.extended.requestvalue").getValue(0);

decoder = new Packages.com.ibm.asn1.BERDecoder(ba);
iSequence = decoder.decodeSequence();
fileNumber = decoder.decodeEnumeration();
firstLine = decoder.decodeIntegerAsInt();
lastLine = decoder.decodeIntegerAsInt();

main.logmsg("Operation: " + name);
main.logmsg("File: " + fileNumber);
main.logmsg("First line: " + firstLine);
main.logmsg("Last line: " + lastLine);

// send the response, assuming this sample string is the log file content 
var str = new java.lang.String("Apr 13 16:18:18 2005  Entry cn=chavdar kovachev,o=ibm,c=us already exists.");

enc = new Packages.com.ibm.asn1.BEREncoder();
enc.encodeOctetString(str.getBytes());
myByte = enc.toByteArray();

work.setAttribute("dsml.response", myByte);
work.setAttribute("dsml.responseName", "1.3.18.0.2.12.23");
work.setAttribute("dsml.resultdescr", "success");

Configuration

The DSMLv2 SOAP Server Connector uses the following parameters:


EIF Connector

SDI uses the capabilities of the Event Integration Facility (EIF) to integrate with enterprise systems like Netcool/OMNIbus and IBM Tivoli® Enterprise Console. EIF enables SDI to create and send alerts and status information that can be recognized by the Netcool/OMNIbus Event Management system as events.

The EIF Connector allows SDI to both send and receive EIF event messages, facilitating bi-directional communication with EIF-capable systems like TEC and Netcool/OMNIbus.

Sending is done using the Connector AddOnly mode, while reading events is handled by Iterator mode.

Introduction to IBM Tivoli Netcool/OMNIbus

IBM Tivoli® Netcool®/OMNIbus is a service level management (SLM) system that collects enterprise-wide event information from many different network data sources and presents a simplified view of this information to operators and administrators.

This information can then be:

Tivoli Netcool/OMNIbus can also consolidate information from different domain-limited network management platforms in remote locations. By working in conjunction with existing management systems and applications, Tivoli Netcool/OMNIbus minimizes deployment time and enables employees to use their existing network management skills.

Tivoli Netcool/OMNIbus tracks alert information in a high-performance, in-memory database and presents information of interest to specific users through individually configurable filters and views. Tivoli Netcool/OMNIbus automation functions can perform intelligent processing on managed alerts.

The IBM Tivoli Netcool/OMNIbus documentation Web site is available at http://publib.boulder.ibm.com/infocenter/tivihelp/v8r1/index.jsp?toc=/com.ibm.netcool_OMNIbus.doc/toc.xml

Introduction to Tivoli Enterprise Console

The IBM Tivoli® Enterprise Console (TEC) product is a rule-based event management application that integrates system, network, database, and application management to help ensure the optimal availability of an organization's IT services.

The Tivoli Enterprise Console® product:

The Tivoli Enterprise Console product helps you effectively process the high volume of events in an IT environment by:

Refer to the IBM Tivoli Enterprise Console User's Guide Version 3.9, SC32-1235, for more information about this product and its components.

Introduction to the Event Integration Facility

The IBM Tivoli® Event Integration Facility (EIF) is an event distribution and integration point for the event console. With the Tivoli Event Integration Facility toolkit, we can build event adapters and integrate them into the IBM Tivoli Enterprise Console® environment.

IBM Tivoli Enterprise Console adapters are the integration link. Adapters collect events, perform local filtering, translate relevant events to the proper format for the event console, and forward these events to the event server. A variety of adapters for systems, Tivoli software applications, and third-party applications are available. To monitor a source (such as a third-party or custom application) that is not supported by an existing adapter, we can use Tivoli Event Integration Facility to create an adapter for the source.

We can use Tivoli Event Integration Facility to:

Refer to the IBM Tivoli Event Integration Facility User's Guide Version 3.8, GC32-0691-01, for more information.

Schema

The EIF Connector schema will be retrieved from the Netcool® gateway mapping file if it is specified in the Schema File (eifSchemaFile) configuration parameter. See Configuration.

Iterator mode

When the EIF Connector is in Iterator mode, it will feed the AssemblyLine with entries that comply with the following structure:
Attribute name Value
className String
slotname String
slotname String
...

where slotname is the name of the slot specified in the received event.

AddOnly mode

When the EIF Connector is in AddOnly mode, it sends an event to a remote system. The event to be sent is specified as an Entry. The Connector expects the entry provided to it to comply with the following structure:
Attribute name Value
className String
slotname String
slotname String
...
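To make the structure concrete, here is a hypothetical sketch that renders such an entry in the "class;slot=value;...;END" textual layout often associated with EIF events. The Connector itself builds and transmits the real event, so this is illustration only; the class and slot names are made up:

```javascript
// Illustrative only: render an AddOnly-mode entry (className plus one
// attribute per event slot) in an EIF-like textual layout.
function entryToEifString(entry) {
  var parts = [];
  for (var name in entry) {
    if (name === "className") continue;     // className is the event class
    parts.push(name + "='" + entry[name] + "'");  // each other attribute is a slot
  }
  return entry.className + ";" + parts.join(";") + ";END";
}
```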

Configuration

The EIF Connector expects the following parameters:


File Connector

The File Connector, which was previously known as File System Connector, is a transport connector that requires a Parser to operate. The File Connector reads and writes files available on the system it runs on. Concurrent usage of a file can be controlled by means of a locking mechanism.

Note: This Connector can only be used in Iterator or AddOnly mode, or for equivalent operations in Passive state.

Configuration

The Connector needs the following parameters:

See also

URL Connector.


File Management Connector

The File Management Connector reads and modifies file system structures and file system metadata available on the system it runs on. More specifically, it can create, find, and delete files and directories.

This connector can iterate over a directory structure starting from a user-defined location and return the discovered files (directories). Furthermore, it is able to locate, rename, and delete files (and directories) and move or copy them to other locations on the file system. It can also be used to create empty files and directories.

This connector does not operate on the actual contents of files. To read the file contents, we can couple it with a File Connector in a Connector-driven loop, creating an AssemblyLine that reads the content of all files (or a subset of a specific type, say IdML files) from a provided folder and its subfolders.

Using the Connector

Traversing a Directory Structure

When iterating over a directory structure or searching for a particular file or directory, the connector recursively traverses the specified tree. In Iterator mode, all discovered files and directories are returned, whereas in Lookup mode, only the file that matches the Link Criteria is returned. Use the On Multiple Entries hook for multiple matches.

The Start directory is set in the Directory Path parameter in the configuration tab of the Connector.

Note: Directory Path accepts a UNC path and mapped network drives as a valid start directory in the Windows OS.

To limit the number of traversed files and directories, we can specify the depth of the iteration. If the Depth parameter is left blank, the connector traverses all subdirectories of the start directory. A value of zero means that it iterates only the start directory. Positive values indicate how deep the connector can go in the directory structure.

For example, if the Depth parameter has a value of 1, only the start directory and level1 subdirectories are iterated.

/startDirectory
		|
		|--- /dir1
		|		|
		|		|--- file1
		|		|--- file2
		|
		|--- /dir2
				|
				|--- /dir3
						|
						|--- file3
						|--- file4
				^		^		^
	  			Lvl-0 Lvl-1 Lvl-2

In this case, the connector returns dir1, file1, file2, dir2 and dir3, but file3 and file4 are not returned because they have a depth of two.
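The depth semantics above can be sketched in plain JavaScript, using an object as a mock directory tree (directories are nested objects, files are null). The traverse function is a hypothetical illustration, not the Connector's code:

```javascript
// Mock of the tree in the figure above.
var tree = {
  dir1: { file1: null, file2: null },
  dir2: { dir3: { file3: null, file4: null } }
};

// Depth-limited traversal: depth 0 iterates only the start directory's
// entries; each increment descends one more subdirectory level.
function traverse(tree, depth) {
  var found = [];
  function walk(node, level) {
    for (var name in node) {
      found.push(name);
      if (node[name] !== null && level < depth) {
        walk(node[name], level + 1);   // descend into a subdirectory
      }
    }
  }
  walk(tree, 0);
  return found;
}
```

With Depth set to 1, traverse(tree, 1) yields dir1, file1, file2, dir2 and dir3, matching the example: file3 and file4 are excluded because they lie at depth two.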

In addition, the Depth parameter can be used to avoid infinite recursion caused by symbolic links. For example, this problem can occur if you have a child subdirectory linked to its parent. Without the Depth parameter, the connector iterates the structure indefinitely, until it is limited by the underlying operating system. By setting a fixed depth to the directory iteration, we can guarantee that such recursions are limited.

The Filter parameter can be used to reduce the set of returned files and directories. By default, the provided filter uses Glob expression syntax, but for advanced usage, regular expressions are also supported. To switch between these behaviors, you need to enable or disable the Use Regular Expression parameter in the configuration of the Connector.

A glob pattern is specified as a string and is matched against other strings, such as directory or file names. Glob syntax follows a number of simple rules:

Here are some examples of glob syntax:

The description for Regular Expressions in Java™ can be found here: http://java.sun.com/developer/technicalArticles/releases/1.4regex/
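As a sketch of the two basic glob rules ("*" matches any sequence of characters, "?" matches exactly one character), here is a minimal glob-to-regex conversion. The Connector's full glob support may cover more constructs (such as character classes) than this illustration handles:

```javascript
// Minimal glob-to-regex sketch: "*" -> ".*", "?" -> ".", everything else
// matched literally (regex metacharacters escaped).
function globToRegex(glob) {
  var out = "";
  for (var i = 0; i < glob.length; i++) {
    var c = glob.charAt(i);
    if (c === "*")      out += ".*";
    else if (c === "?") out += ".";
    else                out += c.replace(/[.\\+()\[\]^$|{}]/g, "\\$&");
  }
  return new RegExp("^" + out + "$");   // anchor: the whole name must match
}
```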

Symbolic Links

The connector detects symbolic links (but not hard links) and if found, sets the isSymbolicLink attribute in the Input Map to true.

Furthermore, if a directory is a symbolic link and option Follow Symbolic Links is enabled, the connector iterates the content of this directory.

Note: The connector does not detect symbolic links on Windows; this is a limitation of the Java 1.6 virtual machine on Windows.

Providing fullPath in Link Criteria

The fullPath attribute is the unique identifier for every file or directory. If this attribute is provided in the Link Criteria and the "equal" match condition is used, the connector skips searching the directory tree and tries to match this file or directory with the rest of the Link Criteria. Using this attribute reduces the time needed for searching.

Example: If the start directory path of the connector is set to /user and its Link Criteria consist of attribute fullPath equal to /user/home/file.txt and isDirectory equal to false, then there is only one file that can match these conditions. Therefore, the Connector does not search the directory tree and just checks if the specified file matches the rest of the criteria. If so, an Entry containing details for this file is returned.

Updating a file or directory

In Update mode, the connector can modify three general attributes of an Entry: fullPath, parent, and name (in that priority order, fullPath having the highest). However, only one of them can be changed in a single modification; if more than one is provided, the connector takes the one with the highest priority. Providing content along with those attributes instructs the connector to put the provided content in the file being updated. If only content is provided in the Output Map (no fullPath, parent or name), the existing content of the file is overwritten.

Note: If the connector is provided only with the content attribute (no fullPath or name) and the Link Criteria does not match anything, no file is created and an error is returned.

If only the name attribute is changed, the connector performs a local rename instead of a move operation.

Here is a breakdown of possible file and directory updates:

Notes:

  1. If the Keep Original option is selected, a copy operation is performed instead of a move or rename.
  2. If a directory is copied or moved and it is a symbolic link or contains a symbolic link, each symbolic link is invalidated.

Force Deleting files and directories

In Delete mode, the File Management Connector deletes the discovered file or directory. However, if it is a read-only file or a non-empty directory, the delete operation fails. In this case, we can have two options to remove the file or directory:

Note: When Force Delete is enabled and the directory being deleted is a symbolic link or contains symbolic links, the forced deletion removes not only each symbolic link but also the actual directory the link refers to. The Follow Symbolic Links option is not considered when forcing a directory deletion.

Create empty files and directories

In AddOnly mode, the connector can create an empty file or directory. In addition, it creates all missing directories in the path of the new file or directory.

If the isDirectory attribute is not provided in the Output Map of the Connector, the Create File check box is used to specify whether a new file or directory must be created. In addition, the fully qualified name of the file or directory must also be provided. Since files are characterized by attributes fullPath, parent and name, the following options exist:

The content attribute can be used to provide initial content for that file. If the content is a String object, the charSet attribute specifies which character set is used for serialization of this String.
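A small sketch of why the character set matters, using Node's Buffer rather than the SDI/Java APIs: the same String serializes to different bytes depending on the chosen character set, which is what the charSet attribute controls.

```javascript
// "héllo" is 5 characters, but its byte length depends on the encoding.
var content = "héllo";
var utf8Bytes = Buffer.from(content, "utf8");     // é becomes two bytes
var latin1Bytes = Buffer.from(content, "latin1"); // é becomes one byte
```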

The final attribute supported in AddOnly mode is isReadOnly. The File Management Connector cannot create symbolic links or hidden files or directories because these operations depend on the underlying platform or file system.

Schema

Input Schema

Output Schema

Configuration

The File Management Connector has the following parameters:

Examples

Go to the TDI_install_dir/examples/FileManagementConnector directory of your SDI installation.

Generic Log Adapter Connector

Introduction

Note: This connector is deprecated and will be removed in a future version of SDI.

The Generic Log Adapter Connector processes log files and transforms them to Common Base Event (CBE) objects, which are then fed into the AssemblyLine.

It uses Generic Log Adapter (GLA) technology, part of IBM Autonomic Computing Toolkit, to process log files and transform their contents into Common Base Event format. The IBM Redbook A Practical Guide to the IBM Autonomic Computing Toolkit contains information on how to configure and use Generic Log Adapter.

Adapter configuration file

An adapter configuration file, prepared externally using the Adapter Configuration Editor Eclipse plug-in, is used in conjunction with the Generic Log Adapter Connector. The editor provides the tooling to create the specific parser rules that the Generic Log Adapter Connector uses at runtime to create Common Base Event objects; that is, the configuration file contains information about the log file to be processed, and all CBE objects are later created from it. The logic for parsing the objects is also implemented in the adapter configuration file. The Eclipse GLA plug-in includes several examples of such configuration files, made to process some well-known application log files. We can also create our own configuration files using the Eclipse user interface.

In both cases, whether using an already created configuration file or creating a new one, note that a specially made outputter called TDIOutputter needs to be configured. This is required because, when the CBE objects are created, the TDIOutputter sends them to the Generic Log Adapter Connector (which can then send them into the AssemblyLine using the ordinary mapping mechanism).

Using more than one outputter in the configuration file

It is not possible to use more than one TDIOutputter. If two or more TDIOutputters are configured in the adapter configuration file, an Exception is thrown when the Generic Log Adapter Connector tries to get the correlation ID from the TDIOutputter. With more than one TDIOutputter configured, the Generic Log Adapter Connector which uses the configuration file does not know which of the multiple TDIOutputters to use; it is not possible to get the CBE objects from the correct TDIOutputter.

However, it is possible to define more than one outputter as long as it is not a TDIOutputter. For example, you could combine the TDIOutputter with a FileOutputter. This will cause all CBE objects to be sent (and saved) both to a file and to the Generic Log Adapter Connector.

Configuration

To configure the Generic Log Adapter Connector you must have a valid adapter configuration file. The path to the file must be set in the Connector Config File Path parameter. The configuration file is checked for validity; if it is not a valid adapter configuration file, an Exception is thrown.

Configuring the TDIOutputter

To configure the adapter file to use the TDIOutputter, use Eclipse's GLA user interface (the Eclipse GLA plug-in). Below is a description of how to configure the outputter using the Eclipse user interface:

  1. Open the adapter configuration file for editing. Now the Eclipse plug-in is showing the contents of the configuration file.
  2. Go to Adapter -> Configuration -> Context Instance.
  3. Right click on Context Instance.
  4. Choose Add -> Logging Agent Outputter. Now you are able to see and configure the outputter.
  5. For the outputter type choose undeclared.
  6. Type a description of your choosing in the Description field.
  7. Right click on the Outputter and choose Add -> Property.
  8. Name the property "tdi_correlation_id".
  9. For the value of this property use an arbitrary and unique value which will become the correlation ID of the Generic Log Adapter Connector using this configuration file.
  10. If no value is filled in, the TDIOutputter uses a default value and registers any Connector which attempts to start its adapter configuration file.
  11. Go to Adapter -> Contexts -> Basic Context Implementation and right click over it.
  12. Choose add -> Logging Agent Outputter.
  13. Fill the name and description fields.
  14. In the Executable Class field enter "com.ibm.di.connector.gla.TDIOutputter".
  15. Make sure the role is set to outputter.
  16. In the Role version field add a number (for example: 1.0.0).
  17. For the unique ID click browse and choose the outputter you just made in steps 2 - 7.
  18. Now you have configured the outputter and you are ready to use this adapter configuration file with the Generic Log Adapter Connector.

Using the Connector

To configure the Generic Log Adapter Connector you must have a valid adapter configuration file. The path to the file must be set in the Connector Config File Path parameter.

When the Generic Log Adapter Connector starts, a GLA instance is started in a separate thread inside the Connector. Starting the adapter in a separate thread makes it possible to start iterating through the entries before GLA has completed processing the entire log file. When the Connector receives CBE objects it stores them in a queue, which orders the elements in FIFO (first-in-first-out) manner. When there are no elements in the queue and the Connector wants to take an element from it, the queue does not return a null value but waits until an element is available (that is, it blocks). On the other side of the queue, when it is full and an element needs to be added, the operation blocks until space is available in the queue.

Conditions like end-of-data, GLA adapter errors etc. are handled by special messages in the queue, enabling the Connector to work in the manner expected of an SDI Connector.

When iterating, CBE objects are read one by one from the queue, and delivered to the Generic Log Adapter Connector. The CBE object itself is stored into an Attribute called "rawCBEObject" of the work Entry. The CBE attributes are also set in the work Entry.

In order to be able to handle the situation when more than one Connector instance is running simultaneously a mechanism to send the correct CBE objects to the correct Generic Log Adapter Connector is required. This is achieved by using a unique correlation ID parameter in the TDIOutputter configuration. Before a Generic Log Adapter Connector starts the adapter configuration file it gets the correlation ID from the TDIOutputter configuration (the Connector actually parses the adapter configuration file). Then it registers it in an internal TDIOutputter table. When the TDIOutputter is ready to send the generated CBE object it gets its correlation ID and takes the Generic Log Adapter Connector which is registered with this ID in the table.

Schema

The unprocessed, raw CBE object read from the TDIOutputter queue is available in the following attribute, ready to be mapped into the work entry:
Attribute Name Description
$rawCBE This attribute holds a single CBE object which is a result of the processed application log file. The number of the CBE objects depends on the configuration of the parser in the adapter configuration file.

The remaining attributes follow the specification as outlined in the output map schema definition in the documentation for the CBE Parser.

See also

The example demonstrating the processing of a DB2® log file, in the TDI_install_dir/examples/glaconnector directory,

RAC Connector,

CBE Function Component,

GLA Users Guide.


HTTP Client Connector

The HTTP Client Connector enables greater control over HTTP sessions than the URL Connector provides. With the HTTP Client Connector we can set HTTP headers and body using predefined attributes. Also, any data returned by the server is available to the user as attributes.

This Connector supports secure connections using the SSL protocol when so requested by the server, for example when accessing a server using the 'https://' prefix in an URL. If client-side certificates are required by the server, you will need to add these to the SDI truststore, and configure the truststore in global.properties or solution.properties. More information about this can be found in the SDI v7.2 Installation and Administrator Guide, in the section named "Client SSL configuration of SDI components".

Note: The HTTP Client Connector does not support the Advanced Link Criteria (see "Advanced link criteria" in SDI v7.2 Users Guide).

Modes

The HTTP client Connector can be used in four different AssemblyLine modes. These are:

Lookup Mode

In Lookup mode we can dynamically change the request URL by setting the search criteria as follows:

Special attributes

When using the Connector in Iterator or Lookup mode the following set of attributes or properties is returned in the Connector ("conn") entry:

When using the Connector in AddOnly mode the Connector transmits any attribute whose name begins with http. as an HTTP header. Thus, to set the content type for a request, name the attribute http.content-type and provide the value as usual. One special attribute is http.body, which can contain a string or any java.io.InputStream or java.io.Reader subclass.

For all modes the Connector always sets the http.responseCode and http.responseMsg attributes. In AddOnly mode this is special because the conn object being passed to the Connector is the object being populated with these attributes. To access these you must obtain the value in the Connector's After Add hook.

Character Encoding

The HTTP Client Connector uses internally the HTTP Parser to parse the input and output streams of the created socket to the specified URL. The default character encoding used for this is ISO-8859-1.

If the HTTP Client Connector has a configured Parser, then this parser is used to write the http.body attribute using its specified character encoding; or, if not specified, the default character encoding of the platform is used.

To explicitly specify the character encoding of the http.body attribute, use the Content Type parameter of the HTTP Client Connector. For more information see Configuration.

Configuration

The Connector has the following parameters:

Select a Parser from the Parser pane by clicking the top-left Select Parser button. If specified, this Parser is used to generate the http.body content when sending data. The Parser gets an entry containing those attributes whose names do not begin with http. Also, this Parser (if specified) gets the http.body for additional parsing when receiving data. However, do not specify system:/Parsers/ibmdi.HTTP, because a message body does not contain another message.

Examples

In the attribute map we can use the following assignments to post the contents of a file to the HTTP server:

// Attribute assignment for "http.body" 
ret.value = new java.io.FileInputStream ("myfile.txt"); 

// Attribute assignment for "http.content-type" 
ret.value = "text/plain";

The Connector computes the http.content-length attribute for you. There is no need to specify this attribute.


HTTP Server Connector

SDI provides an HTTP Server Connector that listens for incoming HTTP connections and acts like an HTTP server. Once it receives a request, it parses the request and sends the parsed request to the AssemblyLine workflow for processing. The result is sent back to the HTTP client. By default, the returned result has a content-type of "text/html".

The Connector supports Server and Iterator Modes. Server mode is the recommended mode:

If a Parser is specified, the connector processes post requests and parses the contents using the specified Parser; get requests do not use the Parser. If a post request is received and no Parser is specified, the contents of the post data is returned as an attribute (postdata) in the returned entry.

The HTTP Server Connector uses ibmdi.HTTP as internal Parser if no Parser is specified.

The Connector parses URL requests and populates an entry in the following manner:

http://localhost:8888/path?p1=v1&p2=v2

http.method :	'GET'
http.Host   :	'localhost:8888'
http.base   :	'/path'
http.qs.p1  :	'v1'
http.qs.p2  :	'v2'
http://localhost:8888/?p1=v1&p2=v2

http.method :	'GET'
http.Host   :	'localhost:8888'
http.base   :	'/'
http.qs.p1  :	'v1'
http.qs.p2  :	'v2'
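The mapping above can be sketched in plain JavaScript. This is illustration only, not the Connector's internal parser; the function name is made up:

```javascript
// Map a method, Host header and raw request path to the http.* attributes
// shown in the examples above.
function parseRequest(method, host, rawPath) {
  var attrs = { "http.method": method, "http.Host": host };
  var qPos = rawPath.indexOf("?");
  attrs["http.base"] = qPos < 0 ? rawPath : rawPath.substring(0, qPos);
  if (qPos >= 0) {
    var pairs = rawPath.substring(qPos + 1).split("&");
    for (var i = 0; i < pairs.length; i++) {
      var kv = pairs[i].split("=");
      attrs["http.qs." + kv[0]] = kv[1];   // one attribute per query parameter
    }
  }
  return attrs;
}
```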

If a post request is used then it is expected that the requestor is sending data on the connection as well. Depending on the value for the Parser parameter the Connector performs the following actions:

The session with the HTTP client is closed when the Connector receives a getNext request from the AssemblyLine and there is no more data to fetch - for example, when the Parser has returned a null value, or on the second call to getNext if no Parser is present.

Connector structure and workflow

The HTTP Server Connector receives HTTP requests from HTTP clients and sends HTTP responses back. As mentioned above, the default content-type header is set to "text/html"; you can override that by setting the Entry attribute http.content-type to the appropriate value before the Connector returns the result to the client.
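
The content-type override described above can be sketched as follows (plain JavaScript; the work Entry is modeled as a plain object, and buildJsonResponse is a hypothetical helper, not part of the Connector API):

```javascript
// Sketch: override the default "text/html" content type before the
// Connector returns the result. 'work' stands in for the AssemblyLine
// work Entry; in SDI you would set these as Entry attributes instead.
function buildJsonResponse(work, payload) {
  work['http.body'] = JSON.stringify(payload);
  work['http.content-type'] = 'application/json';
  return work;
}
```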

After the AssemblyLine initializes the HTTP Server Connector, it calls the getNextClient() method of the Connector. This method blocks until a client request arrives. When a request is received (and Server Mode is selected), the Connector creates a new instance of itself, which is handed over to the AssemblyLine that subsequently spawns a new AssemblyLine thread for that Connector instance. This design feature provides the ability to process each Event in a separate thread, which allows the HTTP Server Connector to process several HTTP events in parallel. The AssemblyLine then calls the getNextEntry() method on this new Connector instance in the new thread. Each Entry returned by the getNextEntry() call represents an individual HTTP request from the HTTP client. The Connector's replyEntry(Entry conn) method is called for each Entry returned from getNextEntry() to send to the client the corresponding HTTP response.

Connector Client Authentication

The parameter HTTP Basic Authentication governs whether client authentication will be mandated for HTTP clients accessing this connector over the network.

There are two different ways to implement HTTP Basic Authentication with the HTTP Server Connector:

  1. Using an Authentication Connector

    This is a mechanism for compatibility with the old HTTP EventHandler (which is no longer present in SDI v7.2). The Auth Connector parameter specifies an SDI Connector that is used in Lookup Mode, with the username and password from the HTTP Basic Authentication data as the Link Criteria:

    • If the lookup returns an Entry, the authentication is considered successful and the HTTP Server Connector proceeds with processing the client's request.
    • If the lookup cannot find an Entry, the client is not authenticated and the request will not be processed.

  2. Script authentication

    This mechanism requires a certain amount of coding, but it is more powerful and lets you implement authentication through your own scripting. It can only be used when the Auth Connector parameter is null or empty.

    The Connector makes the username and password values available in the "After GetNext" Hook through its public getUserName() and getPassword() methods. It is your responsibility to implement the authentication mechanism, and the authentication code must be placed in the "After GetNext" Hook. Call the Connector's rejectClientAuthentication() method from the AssemblyLine hook if authentication is not successful. Consider the following example authentication script:

    var httpServerConn = thisConnector.connector;
    var username = httpServerConn.getUserName();
    var password = httpServerConn.getPassword();
    
    //perform verification here
    successful = true; 
    
    if (!successful) {
    	httpServerConn.rejectClientAuthentication();
    }
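
For reference, HTTP Basic Authentication carries the credentials base64-encoded in the Authorization header; getUserName() and getPassword() return the already-decoded values. Decoding such a header yourself looks like this (a plain JavaScript sketch, not part of the Connector API):

```javascript
// Decode an HTTP Basic Authentication header value such as
// "Basic bWFyeWw6c2VjcmV0" into its username and password parts.
function decodeBasicAuth(headerValue) {
  const prefix = 'Basic ';
  if (!headerValue || !headerValue.startsWith(prefix)) {
    return null; // not Basic authentication
  }
  const decoded = Buffer.from(headerValue.slice(prefix.length), 'base64')
                        .toString('utf8');
  const sep = decoded.indexOf(':'); // the username may not contain ':'
  if (sep < 0) return null;
  return { username: decoded.slice(0, sep), password: decoded.slice(sep + 1) };
}
```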

Chunked Transfer Encoding

When the parameter Chunked Transfer Encoding is enabled, the Connector writes the HTTP body as a series of chunks.

When chunked encoding is used, you are responsible for calling the Connector's putEntry(entry) method for each chunk - the value of the "http.body" Attribute of the Entry provided is sent as an HTTP chunk. The Connector's replyEntry(entry) method is automatically called by the AssemblyLine at the end of the iteration - it writes the last chunk of data (if the "http.body" Attribute is present) and closes the chunk sequence.

When a Parser is specified for the HTTP Server Connector, the stream returned by the Parser is what is sent as an HTTP chunk on each putEntry(entry) or replyEntry(entry) call.
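
For reference, each chunk on the wire follows the HTTP/1.1 chunked transfer coding: the chunk size in hexadecimal, CRLF, the data, CRLF, with a zero-length chunk terminating the sequence. A sketch of the framing (illustrative only; byte counting assumes single-byte characters):

```javascript
// Frame one body fragment as an HTTP/1.1 chunk: hex length, CRLF, data, CRLF.
// Assumes single-byte characters; a real server counts bytes, not chars.
function encodeChunk(data) {
  return data.length.toString(16) + '\r\n' + data + '\r\n';
}

// The final replyEntry() call ends the sequence with a zero-length chunk.
function encodeLastChunk() {
  return '0\r\n\r\n';
}
```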

Configuration

The Connector needs the following parameters:

Note: You can select a Parser from the Parser configuration pane; click the Inherit from: button in the bottom right corner when the Parser pane is active.

Connector Schema

Listed below are all the Attributes supported by the HTTP Server Connector.

Input Attributes

Output Attributes


IBM Security Access Manager Connector

Introduction

The SDI v7.2 Connector for IBM Security Access Manager enables the provisioning and management of IBM Security Access Manager User accounts, Groups, Policies, Domains, SSO Resources, SSO Resource Groups, and SSO User Credentials to external applications (with respect to IBM Security Access Manager). The Connector uses the IBM Security Access Manager Java™ API.

The key features and benefits of the Connector are:

Note: The Connector uses the TAM 6 Java API to manipulate the attributes of the targeted TAM objects. It therefore supports TAM 6.0 and TAM 6.1 only; TAM 5.1 cannot be supported because of JRE support restrictions for the TAM 5.1 Runtime Environment (RTE).

SSL communication with the TAM Server is supported.

Connector Modes

The Connector supports the Lookup, Iterator, Update, AddOnly, and Delete modes. Refer to Using the Connector for specific usage of the various modes.

Skip Lookup in Update and Delete mode

The TAM Connector supports the Skip Lookup general option in Update or Delete mode. When it is selected, no search is performed prior to actual update and delete operations.

Valid Link Criteria must be present, that is, the mandatory attribute must be defined in the Link Criteria of the Connector, as defined in the tables of mandatory attributes under the Update Mode and Delete Mode sections respectively.

Configuration

Before attempting to use the Connector in an AssemblyLine, IBM Security Access Manager version 6.x must be installed on the target machine, and the IBM Security Access Manager Java™ Runtime Environment (JRTE) must be installed on the same machine as SDI.

Configure the IBM Security Access Manager Java Run Time

The Connector makes use of the IBM Security Access Manager Java API and therefore the IBM Security Access Manager Runtime for Java must be installed on the SDI machine. For information on how to install and configure IBM Security Access Manager Runtime for Java on the SDI machine, refer to the IBM Security Access Manager Installation Guide.

When entering the parameters to the configuration utility (pdjrtecfg):

Configure secure communication to the IBM Security Access Manager policy server

To configure secure communication between SDI and IBM Security Access Manager policy server and authorization server, and for SDI to become an authorized IBM Security Access Manager Java application, run the SvrSslCfg utility on the SDI machine.

For example, from the TDI_install_dir/jvm/jre/bin directory, enter the following command (as one line). The command must be run with SDI's own Java executable:

/opt/IBM/TDI/V7.2/jvm/jre/bin/java com.tivoli.pd.jcfg.SvrSslCfg -action config -admin_id sec_master
		-admin_pwd password -appsvr_id appsvr -host TAM_host_name -mode remote -port 999
		-policysvr policy_svr:7135:1 -authzsvr auth_svr:7136:1 -cfg_file cfg_file_name 
		-key_file keyfile_name -cfg_action create

For complete information on the SvrSslCfg utility, refer to the IBM Security Access Manager Authorization Java Classes Developer Reference (specifically Appendix A).

Configure SSL

The following steps allow you to optionally create a new self-signed certificate, and configure SDI to use the certificate:

  1. Open the SDI Configuration Editor.
  2. Select KeyManager from the Toolbar. The IBM Key Manager tool opens.
  3. Select Key Database File then New.
  4. Select "JKS" as the Key database type.
  5. Enter an appropriate File Name and an appropriate Location. Click OK.
  6. Enter a Password. Enter the password again to confirm. Click OK.
  7. In the Key database content section, select Personal Certificates. Click New Self-Signed.

    Note: Alternatively, an existing certificate can be used. To do this, click Export/Import to import the appropriate certificate.

  8. Enter an appropriate Key Label, an appropriate Organization, and any other appropriate information. Click OK.
  9. Close IBM Key Manager.
  10. In the SDI Configuration Editor, select Browse Server Stores, then click Open for the Server Store you wish to configure, usually Default.tdiserver. Double-click Solution-Properties; the solution properties table opens.
  11. Locate the javax.net.ssl.trustStore parameter. Enter the value of Key database File created in step 5 above.
  12. Locate the javax.net.ssl.trustStorePassword parameter. Enter the value of the Password entered in step 6 above.
  13. Locate the javax.net.ssl.trustStoreType parameter. Enter "jks".
  14. Locate the javax.net.ssl.keyStore parameter. Enter the value of Key database File created in step 5 above.
  15. Locate the javax.net.ssl.keyStorePassword parameter. Enter the value of the Password entered in step 6 above.
  16. Locate the javax.net.ssl.keyStoreType parameter. Enter "jks".
  17. Click Close to close the solution properties table. The changes to the solution properties are saved in the relevant solution.properties file.
  18. Close SDI Configuration Editor.

Refer to SDI v7.2 Installation and Administrator Guide for more information on configuring SSL.

Configure the Connector

The SDI Connector for IBM Security Access Manager can be added directly into an AssemblyLine. The following section lists the available configuration parameters.

Using the Connector

This section describes how to use the Connector in each of the supported SDI Connector modes. The section also describes the SDI Entry schema supported by the Connector.

Note: When the Connector executes in the AssemblyLine, an IBM Security Access Manager Context is created in the Connector's Initialize method. For performance reasons (so that a Context is not created for every IBM Security Access Manager Connector instance), the Connector should be cached (pooled) within the AssemblyLine. The caching of a Connector within the AssemblyLine can be configured within SDI; refer to the SDI v7.2 Users Guide for more information.

When the Connector is configured to manipulate TAM Policy objects, special consideration is required when supplying attribute values in the work entry that feeds the Connector in AddOnly or Update mode. The policy object attributes are grouped into sets of related policy items, and each set requires values for all of its attributes in order to update or apply any individual attribute of that policy item. For example, when manipulating the policy item Account Expiry Date, you must supply values for each of the attributes AcctExpDateEnforced, AcctExpDateUnlimited, and AcctExpDate. If you then wish to modify any of these attributes, you must again supply values for all three attributes as well as the UserName attribute.

The following table defines the Policy items and their attribute groupings.
Table 14. Policy Items
Policy item Set of Required Policy Entry Attributes
Account Expiry Date AcctExpDateEnforced, AcctExpDateUnlimited, AcctExpDate
Account Disable Time AcctDisableTimeEnforced, AcctDisableTimeUnlimited, AcctDisableTime
Account Password Spaces PwdSpacesAllowedEnforced, PwdSpacesAllowed
Account Maximum Password Age MaxPwdAgeEnforced, MaxPwdAge
Account Maximum Repeat Characters MaxPwdRepCharsEnforced, MaxPwdRepChars
Account Minimum Alphabetic Characters MinPwdAlphasEnforced, MinPwdAlphas
Account Minimum Non-Alphabetic Characters MinPwdNonAlphasEnforced, MinPwdNonAlphas
Account Time Of Day Access TodAccessEnforced, AccessibleDays, AccessStartTime, AccessEndTime, AccessTimezone
Account Minimum Password Length MinPwdLenEnforced, MinPwdLen
Account Maximum Failed Login Attempts MaxFailedLoginsEnforced, MaxFailedLogins
Account Maximum Concurrent Web Sessions MaxConcWebSessionsEnforced, MaxConcWebSessions, MaxConcWebSessionsUnlimited, MaxConcWebSessionsDisplaced
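
The grouping rule in Table 14 means that an update touching any attribute of a policy item must supply the whole set for that item. A small sketch of that validation (illustrative; attribute names are taken from Table 14, the work entry is modeled as a plain object, and missingPolicyAttributes is a hypothetical helper):

```javascript
// Attribute sets per policy item (a subset of Table 14).
const POLICY_SETS = {
  'Account Expiry Date':
    ['AcctExpDateEnforced', 'AcctExpDateUnlimited', 'AcctExpDate'],
  'Account Minimum Password Length':
    ['MinPwdLenEnforced', 'MinPwdLen']
};

// If a work entry touches any attribute of a policy item, it must supply
// the complete set for that item, plus UserName to identify the account.
function missingPolicyAttributes(workEntry) {
  const missing = [];
  for (const attrs of Object.values(POLICY_SETS)) {
    const touched = attrs.some(a => a in workEntry);
    if (touched) {
      for (const a of attrs) {
        if (!(a in workEntry)) missing.push(a);
      }
      if (!('UserName' in workEntry)) missing.push('UserName');
    }
  }
  return missing;
}
```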

AddOnly Mode

When deployed in AddOnly mode, the Connector can create a range of data in the IBM Security Access Manager database. The Connector should be added to the Flow section of an SDI AssemblyLine. The Output Map must define a mapping for the following attributes; these attributes can also be retrieved by querying the Connector Schema.

Notes:

  1. Attributes marked with an asterisk (*) are mandatory.
  2. For a detailed description of all attributes, please refer to Connector Input Attribute Details.
  3. Keep in mind the caveats on manipulating Policy items and their required Policy Entry attributes as stipulated in Table 14.

Table 15. Attributes by Entry Type in AddOnly Mode
Entry Type Attribute
User UserName*
RegistryUID*
FirstName*
LastName*
Description
Password*
IsAccountValid
IsPasswordValid
IsSSOUser
NoPasswordPolicyOnCreate
MaxFailedLogins
MaxConcWebSessions
Groups (Multivalued attribute) - the User must not already be a member of the Group
Group GroupName*
RegistryGID*
CommonName
Description
ObjectContainer
Users (Multivalued attribute) - the Group must not already contain the User
Policy UserName*
AcctExpDateEnforced
AcctExpDateUnlimited
AcctExpDate
AcctDisableTimeEnforced
AcctDisableTimeUnlimited
AcctDisableTimeInterval
PwdSpacesAllowedEnforced
PwdSpacesAllowed
MaxPwdAgeEnforced
MaxPwdAge
MaxPwdRepCharsEnforced
MaxPwdRepChars
MinPwdAlphas
MinPwdNonAlphasEnforced
MinPwdNonAlphas
TodAccessEnforced
AccessibleDays
AccessStartTime
AccessEndTime
AccessTimezone
MinPwdLenEnforced
MinPwdLen
MaxFailedLoginsEnforced
MaxFailedLogins
MaxConcWebSessions
MaxConcWebSessionsEnforced
MaxConcWebSessionsUnlimited
MaxConcWebSessionsDisplaced
Domain DomainName*
Description
SSO Credentials UserName*
ResourceName*
ResourceType*
ResourceUser*
ResourcePassword*
SSO Resource SSOResourceName*
Description
SSO Resource Group SSOResourceGroupName*
Description
SSOResources (Multivalued attribute)

The Connector does not support duplicate or multiple entries. Only one entry should be supplied to the Connector at a time.

Update Mode

When deployed in Update mode, the Connector can modify existing data in the IBM Security Access Manager database. The Connector should be added to the Flow section of an SDI AssemblyLine. The Output Map must define a mapping for the following attributes; these attributes can also be retrieved by querying the Connector Schema.

When importing users/groups during an update:

Keep in mind the caveats on manipulating Policy items and their required Policy Entry attributes as stipulated in Table 14.

Attributes marked with an asterisk (*) are mandatory.
Table 16. Attributes by Entry Type in Update Mode
Entry Type Attribute
User UserName*
Description
Password
IsAccountValid
IsPasswordValid
IsSSOUser
MaxFailedLogins
MaxConcWebSessions
Groups (Multivalued attribute)
Group GroupName*
Description
ReplaceUsersOnUpdate
Users (Multivalued attribute)
Policy UserName*
AcctExpDateEnforced
AcctExpDateUnlimited
AcctExpDate
AcctDisableTimeEnforced
AcctDisableTimeUnlimited
AcctDisableTimeInterval
PwdSpacesAllowedEnforced
PwdSpacesAllowed
MaxPwdAgeEnforced
MaxPwdAge
MaxPwdRepCharsEnforced
MaxPwdRepChars
MinPwdAlphas
MinPwdAlphasEnforced
MinPwdNonAlphasEnforced
MinPwdNonAlphas
TodAccessEnforced
AccessEndTime
AccessibleDays
AccessStartTime
AccessTimezone
MinPwdLenEnforced
MinPwdLen
MaxFailedLoginsEnforced
MaxFailedLogins
MaxConcWebSessions
MaxConcWebSessionsEnforced
MaxConcWebSessionsUnlimited
MaxConcWebSessionsDisplaced
Domain DomainName*
Description
SSO Credentials UserName*
ResourceName*
ResourceType*
ResourceUser
ResourcePassword
SSO Resource Not Supported
SSO Resource Group SSOResourceGroupName*
SSOResources (Multivalued attribute)

Additionally, any mandatory fields mentioned above should be defined in the Link Criteria of the Connector. The Link Criteria is required by the AssemblyLine, since the AssemblyLine invokes the Connector's findEntry() method to verify the existence of the given user. The value of the attribute, as defined in the Link Criteria, must match the value of the element present in the Output Map.

The only operator supported for Link Criteria is an equals exact match. Wildcard search criteria are not supported. The Connector does not support duplicate or multiple entries. Only one entry should be supplied to the Connector at a time.

Delete Mode

When deployed in Delete mode, the Connector is able to delete existing data from the IBM Security Access Manager database. The Connector should be added to the Flow section of an AssemblyLine.

Attributes marked with an asterisk (*) are mandatory.
Table 17. Attributes by Entry Type in Delete Mode
Entry Type Attribute
User UserName*
Group GroupName*
Policy UserName*
Domain DomainName*
SSO Credentials UserName*
ResourceName*
ResourceType*
SSO Resource SSOResourceName*
SSO Resource Group SSOResourceGroupName*

The mandatory attribute must be defined in the Link Criteria of the Connector. The Link Criteria is required by the AssemblyLine, since the AssemblyLine will invoke the Connector's findEntry() method to verify the existence of the given user.

The only operator supported for Link Criteria is an equals exact match. Wildcard search criteria are not supported. The Connector does not support duplicate or multiple entries. Only one entry should be supplied to the Connector at a time.

Lookup Mode

When deployed in Lookup mode, the Connector is able to obtain all details of the required IBM Security Access Manager data. The Connector should be added to the Flow section of an AssemblyLine. The mandatory attribute must be defined in the Link Criteria of the Connector.

Attributes marked with an asterisk (*) are mandatory.
Table 18. Attributes by Entry Type in Lookup Mode
Entry Type Attribute
User UserName*
Group GroupName*
Policy UserName*
Domain DomainName*
SSO Credentials UserName*
ResourceName*
ResourceType*
SSO Resource SSOResourceName*
SSO Resource Group SSOResourceGroupName*

The Connector's findEntry() method is the main code executed. The only operator supported for Link Criteria is an equals exact match. Wildcard search criteria are not supported.

The Connector does not support duplicate or multiple entries. The Connector will return only one entry at a time.

Iterator Mode

When deployed in Iterator mode, the Connector is able to retrieve the details of each data entry in the IBM Security Access Manager database, in turn, and make those details available to the AssemblyLine.

When deployed in this mode, the SDI AssemblyLine first calls the Connector's selectEntries() method to obtain and cache a list of all data entries in the IBM Security Access Manager database. If the entry type is User or Group and a filter attribute was provided, the list contains only the filtered entries. The AssemblyLine then calls the Connector's getNextEntry() method, which maintains a pointer to the current name cached in the list.
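
The selectEntries()/getNextEntry() interplay described above can be sketched as a cached list with an advancing pointer (illustrative only; makeIterator and fetchAllEntries are hypothetical stand-ins, not the SDI API):

```javascript
// Minimal model of Iterator mode: selectEntries() caches the full result
// list once; getNextEntry() walks it and returns null when exhausted.
function makeIterator(fetchAllEntries) {
  let cache = null;
  let position = 0;
  return {
    selectEntries() {
      cache = fetchAllEntries(); // one query; results are cached
      position = 0;
    },
    getNextEntry() {
      if (cache === null || position >= cache.length) return null;
      return cache[position++]; // advance the pointer into the cached list
    }
  };
}
```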

Wildcards are supported for the filter attribute of User and Group entry types only:

Troubleshooting

Problems may be experienced for any of the following reasons:

Connector Input Attribute Details

This section details the attributes for connector input.

User

Table 19. Connector Input Attributes
Attribute Description Example Default
UserName The User Name maryl
RegistryUID The LDAP User Distinguished Name (DN) cn=mary,o=companyabc,c=au
FirstName The User's First Name Mary
LastName The User's Last Name Lou
Description A Description Contractor
Password User's password

(If the 'NoPasswordPolicyOnCreate' attribute is set to FALSE, the password must conform to the current password policy in IBM Security Access Manager.)

m3ry10u
IsAccountValid TRUE to activate the account. FALSE to leave the account inactive. TRUE or FALSE TRUE
IsPasswordValid Set to FALSE if user is to change the password on next login. TRUE to remain unchanged. TRUE or FALSE TRUE
IsPDUser TAM PD User flag. TRUE or FALSE
IsSSOUser TRUE to enable Single Sign-on capabilities for this user. FALSE to disable. TRUE or FALSE FALSE
NoPasswordPolicyOnCreate FALSE will enforce the password policy on the "Password" attribute, so the password is checked against the password policy settings when it is first created. TRUE will not enforce the password policy on the password when it is created. TRUE or FALSE TRUE
MaxFailedLogins Set the maximum number of failed logins a user can have before the account is disabled. 8 10
MaxConcWebSessions Set the maximum number of concurrent web sessions allowed 3 0
Groups (Multivalued attribute) This is a multi-valued attribute. Please refer to the SDI v7.2 Users Guide about how to set multi-valued attributes. Any Group listed in this attribute should already exist as a valid group in IBM Security Access Manager. Groups1 -> itSpecialists Groups2 -> programmers
ReplaceGroupsOnUpdate In Update mode, if this attribute is set to TRUE, the user is removed from all of the groups of which the user is currently a member. The user is then added as a member of each of the groups supplied as values in the Groups attribute.

If this attribute is set to FALSE, then during modification the groups that currently contain the user are modified to add or delete that user in accordance with each of the Groups attribute value's operation. As a result, if the Groups attribute value operation is set to AttributeValue.AV_ADD, the user will be added to the group. If the Group attribute value operation is set to AttributeValue.AV_DELETE, the user will be removed from the group.

The ReplaceGroupsOnUpdate flag is ignored in Add mode. The flag is also ignored in Update mode if the update reverts to an Add operation when the user is not found to be an IBM Security Access Manager user.

TRUE or FALSE TRUE
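
The two ReplaceGroupsOnUpdate behaviors described above can be sketched as follows (plain JavaScript; the AttributeValue.AV_ADD / AV_DELETE operations are modeled as strings, and updateMembership is a hypothetical helper):

```javascript
// Model of ReplaceGroupsOnUpdate: replace the membership wholesale (TRUE)
// or merge per-value operations against current membership (FALSE).
// Each supplied value carries an operation: 'AV_ADD' or 'AV_DELETE'
// (standing in for AttributeValue.AV_ADD / AttributeValue.AV_DELETE).
function updateMembership(currentGroups, groupValues, replaceOnUpdate) {
  if (replaceOnUpdate) {
    // TRUE: drop all current memberships; use only the supplied groups.
    return groupValues.map(v => v.name);
  }
  // FALSE: apply each value's operation against the current memberships.
  const result = new Set(currentGroups);
  for (const v of groupValues) {
    if (v.op === 'AV_ADD') result.add(v.name);
    else if (v.op === 'AV_DELETE') result.delete(v.name);
  }
  return [...result];
}
```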

Group

Table 20. Group Attributes
Attribute Description Example
GroupName The Group Name programmers
RegistryGID The LDAP Group DistinguishedName (DN) cn=programmers, cn=SecurityGroups, secAuthority=Default
CommonName The LDAP Common Name (CN) programmers
Description The Group Description Fulltime Programmers
IsPDGroup TAM PD Group Flag. TRUE or FALSE
ObjectContainer TAM Object Container
Users This is a multi-valued attribute. Please refer to the SDI v7.2 Users Guide about how to set multi-valued attributes. Any user listed in this attribute should already exist as a valid user in IBM Security Access Manager. Users1 -> maryl   Users2 -> johnd
ReplaceUsersOnUpdate In Update mode, this attribute provides a boolean flag to indicate how the membership of the group is modified. If it is set to TRUE, all members of the group are removed and the list of users supplied as values in the Users attribute replaces the removed users.

If this Attribute is set to FALSE, then during modification, the users of the group are modified in accordance with the User attribute value's operation. As a result, if the User attribute value operation is set to AttributeValue.AV_ADD, the user will be added as a member of the group. If the User attribute value operation is set to AttributeValue.AV_DELETE, the user will be deleted from the group's membership.

The default value is TRUE.

The ReplaceUsersOnUpdate flag is ignored in Add mode. The flag is also ignored in Update mode if the update reverts to an Add operation when the group is not found to be an IBM Security Access Manager group.

TRUE or FALSE

Policy

Table 21. Policy Attributes
Attribute Description Example
UserName The User Name the policy will be set for. Must be a valid IBM Security Access Manager user. maryl
AcctExpDateEnforced If TRUE then enforce the Account Expiration Date. TRUE or FALSE
AcctExpDateUnlimited If TRUE then set the Account Expiration Date to be unlimited. TRUE or FALSE
AcctExpDate Sets the expiry date for the user account

The attribute must be of type java.util.Date or java.lang.String. If a String value is provided, the required date string format is "yyyyMMdd", where 'yyyy' is the four-digit year, 'MM' is the two-digit month, and 'dd' is the two-digit day; for example, 20091231 is the value for the date 31 December 2009.

Refer to the IBM Security Access Manager Java™ API Reference.
AcctDisableTimeEnforced If TRUE then enforce the Account Disable Time. TRUE or FALSE
AcctDisableTimeUnlimited If TRUE then set the Account Disable Time to be unlimited. TRUE or FALSE
AcctDisableTimeInterval Set the Account Disable Time Interval. Refer to the IBM Security Access Manager Java API Reference.
PwdSpacesAllowedEnforced If TRUE enforce the value of the 'PwdSpacesAllowed' attribute. TRUE or FALSE
PwdSpacesAllowed If TRUE allow spaces in the password. TRUE or FALSE
MaxPwdAgeEnforced If TRUE enforce the Maximum Password Age value. TRUE or FALSE
MaxPwdAge Sets the Maximum Password Age. Refer to the IBM Security Access Manager Java API Reference.
MaxPwdRepCharsEnforced If TRUE enforce the Maximum Password Repeatable characters number. TRUE or FALSE
MaxPwdRepChars Sets the Maximum Password Repeatable Characters. 5
MinPwdAlphasEnforced If TRUE enforce the Minimum number of alphabetic characters required. TRUE or FALSE
MinPwdAlphas Sets the Minimum number of alphabetic characters required. 6
MinPwdNonAlphasEnforced If TRUE enforce the Minimum number of non-alphabetic characters required. TRUE or FALSE
MinPwdNonAlphas Sets the Minimum number of non-alphabetic characters required. 3
TodAccessEnforced If TRUE enforce the access times set for the user. TRUE or FALSE
AccessibleDays Sets the days accessible for the user account. Refer to the IBM Security Access Manager Java API Reference.
AccessStartTime Sets the access start time for the user account. Refer to the IBM Security Access Manager Java API Reference.
AccessEndTime Sets the access end time for the user account. Refer to the IBM Security Access Manager Java API Reference.
AccessTimezone Sets the time zone for the user account. Refer to the IBM Security Access Manager Java API Reference.
MinPwdLenEnforced If TRUE enforce the Minimum Password Length. TRUE or FALSE
MinPwdLen Sets the Minimum Password Length. 8
MaxFailedLoginsEnforced If TRUE then enforce the Maximum Failed Login setting. TRUE or FALSE
MaxFailedLogins Sets the Maximum Failed Logins for the user. 8
MaxConcWebSessions Set the maximum number of concurrent web sessions allowed. 3
MaxConcWebSessionsEnforced If TRUE then enforce the Maximum Concurrent Web Sessions setting. TRUE or FALSE
MaxConcWebSessionsUnlimited If TRUE then the maximum concurrent web sessions policy is set to "unlimited". TRUE or FALSE
MaxConcWebSessionsDisplaced If TRUE then the maximum concurrent web sessions policy is set to "displace". TRUE or FALSE
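
The "yyyyMMdd" string form required for AcctExpDate (see Table 21) can be produced as follows (a plain JavaScript sketch; toAcctExpDate is a hypothetical helper, not part of the Connector API):

```javascript
// Format a calendar date as the "yyyyMMdd" string expected by AcctExpDate,
// e.g. 31 December 2009 -> "20091231".
function toAcctExpDate(year, month, day) {
  const mm = String(month).padStart(2, '0'); // two-digit month
  const dd = String(day).padStart(2, '0');   // two-digit day
  return String(year) + mm + dd;
}
```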

Domain

Table 22. Domain Attributes
Attribute Description Example
DomainName The name of the domain MyDomain
Description The Domain description Sample domain name

SSO Credentials

Table 23. SSO Credentials Attributes
Attribute Description Example
UserName The name of the user the credentials will be set for maryl
ResourceName The SSO Resource Name. (Must be a valid IBM Security Access Manager SSO Resource entry). myResource1
ResourceType Specifies whether this resource is a single resource or a resource group. "Web Resource" and "Resource Group" are the only allowable values.
ResourceUser Sets the Resource User Name marylou
ResourcePassword Sets the User Name Password for the specified resource b1ddy4

SSO Resource

Table 24. SSO Resource Attributes
Attribute Description Example
SSOResourceName The Single sign-on Resource Name MyResource1
Description The Description Development Server 1

SSO Resource Group

Table 25. SSO Resource Group Attributes
Attribute Description Example
SSOResourceGroupName The Single sign-on Resource Group Name MyResourceGroup1
Description The Description All Development Servers
SSOResources This is a multi-valued attribute. Please refer to the SDI v7.2 Users Guide about how to set multi-valued attributes. Any SSO Resources listed in this attribute should already exist as a valid SSO Resource in IBM Security Access Manager. SSOResources1 -> myResource1 SSOResources2 -> myResource2

See also

Access Manager for e-business

Attribute merge behavior

In older versions of SDI, the IBM Security Directory Server Changelog Connector merged the Attributes of the changelog Entry with the changed Attributes of the actual Directory Entry. This created issues because the attributes that had changed could not be detected. The SDI v7.2 version of the Connector has logic to address these situations, configured by the Merge Mode parameter. The modes are:

Delta tagging is supported in all merge modes and entries can be transferred between different LDAP servers without much scripting.


Differences between changelog on distributed TDS and z/OS TDS

Note: The z/OS® operating system is not supported in SDI v7.2.

There are some differences in the way the changes to password policy operational attributes are logged to cn=changelog in IBM Security Directory Server on z/OS and in Distributed IBM Security Directory Server (which runs on other platforms). The currently known differences in behavior are listed below:

  1. Modify of userpassword

    A modify operation to change the userpassword will remove attributes such as pwdfailuretime, pwdreset, pwdaccountlockedtime, pwdgraceusetime and pwdexpirationswarned from an entry in the directory. It will also update the pwdchangedtime.

    Distributed TDS records these updates in the LDIF along with the replace of the userpassword value in the changelog entry.

    z/OS TDS only records the replace of the userpassword in the LDIF, omitting the generated deletion of the operational attributes.

    A password change can also conditionally update the pwdhistory attribute of an entry. We know that this change is not logged in z/OS TDS. Although we have no test data to show that it is indeed logged in Distributed TDS, we suspect it is.

  2. Password value in the changelog LDIF

    z/OS TDS suppresses the actual value (for security reasons) and instead displays the value as "userpassword: *ComeAndGetIt*".

    Distributed TDS shows the userpassword value as is. Note that we only have test output where password encryption is not being used, and thus the actual password is displayed "in the clear". If password encryption is active, probably the tagged, encrypted value is shown.

  3. Add of a user entry

    An add operation of a user entry containing a password will conditionally add the pwdreset attribute with a value of true if the effective policy for the user indicates this to be the case for new entries.

    Distributed TDS includes "PWDRESET: true" in the changelog entries LDIF for the add, but z/OS TDS does not.

  4. Authentication via a grace login

    When a password is expired, but "grace" logins are allowed, authentication (via either a bind or compare operation) succeeds and an additional value of the attribute pwdgraceusetime is added to the user entry. Distributed TDS records this as a single value added to the entry. z/OS TDS records this as a replace of the entire set of values for the pwdgraceusetime attribute, listing all the old values and the one new one.

Configuration

The Connector needs the following parameters:

Note: Changing the Timeout or Sleep Interval value will automatically adjust its peer to a valid value (for example, when the timeout is greater than the sleep interval, the value that was not edited is adjusted to be in line with the other). The adjustment is made when the field editor loses focus.

See also

Enable change log on IBM Security Directory Server, LDAP Connector, Active Directory Change Detection Connector, Sun Directory Change Detection Connector, z/OS LDAP Changelog Connector.


ITIM Agent Connector

The ITIM Agent Connector uses the IBM Security Identity Manager's JNDI driver to connect to ITIM Agents (the JNDI driver uses the DAML protocol). Thus the ITIM Agent Connector is able to connect to all ITIM Agents that support the DAML protocol.

The Connector itself does not understand the particular schema of the ITIM Agent it is connected to - it provides the basic functionality to create, read, update and delete JNDI entries.

The ITIM Agent Connector supports the Iterator, Lookup, AddOnly, Update and Delete modes.

This Connector uses the client library enroleagent.jar from the ITIM 5.1 release.

Set up SSL for the ITIM Agent Connector

Since the enroleagent.jar client library uses JSSE (a Java™-based keystore/truststore) for SSL authentication, you must now specify the SSL-related certificate details in global.properties or solution.properties; previous versions of the ITIM Agent Connector required you to specify the certificate name in the "CA Certificate File" parameter. First, import the ITIM Agent's certificate into the SDI truststore.

For example, the following command imports the servercertificate.der file into tim.jks.

keytool -import -file servercertificate.der -keystore tim.jks

After you import the certificate, specify this truststore in the "server authentication" section of the global.properties or solution.properties file.

## server authentication

javax.net.ssl.trustStore=E:\IBMDirectoryIntegrator\tim.jks
{protect}-javax.net.ssl.trustStorePassword=<jks_keystore_password>
javax.net.ssl.trustStoreType=jks

Note: The "CA Certificate File" property of the ITIM Agent Connector is no longer present; the certificates in the JKS truststore specified in global.properties or solution.properties are used instead.

Configuration

The Connector needs the following parameters:

Known Issues

The Connector has been briefly tested with a few ITIM Agents. Some lookup issues have been detected that result from constraints of the underlying Agents' implementations:

Sometimes simple JNDI searches might not return the expected results. For example, if you are using the Windows 2000 Agent, the JNDI search for the Guest user account "(eruid=Guest)" might return more than one Entry; or when you are using the Red Hat Linux Agent the search for the "root" group "(erLinuxGroupName=root)" returns an empty result set.

A work-around for these cases is to use an extended search filter in which the object class is specified: "(&(eruid=<value>)(objectclass=<classname>))". So for the Windows 2000 Agent the search would look like "(&(eruid=Guest)(objectclass=erW2KAccount))", and for the Red Hat Linux Agent the search filter should be "(&(eruid=root)(objectclass=erLinuxGroup))".

This work-around does not work for all lookup issues, for example the search for the Windows "Administrators" group (Windows 2000 Agent) - "(erW2KGroupName=Administrators)" returns an empty result set. The extended search filter "(&(eruid=Administrators)(objectclass=erW2KGroup))" returns an empty result set too.

When you encounter a lookup problem:

  1. Make sure you are using the latest version of the Agent.
  2. Try the work-around described above.
  3. If the work-around doesn't work, examine the schema of the Agent for other attributes that can be used for Entry identification.

Here are a few examples for how other attributes from the Agent schema can be used for Entry identification:

See also

DAML/DSML Protocol, TIM DSMLv2 Connector.


IBM MQ Connector

The IBM MQ Connector is a specialized instance of the JMS Connector.


JDBC Connector

The JDBC Connector provides database access to a variety of systems. To reach a system using JDBC you need a JDBC driver from the system provider, typically delivered with the product as a jar or zip file. These files must be on your classpath or copied to the jars/ directory of your SDI installation; otherwise you may get cryptic messages like "Unable to load T2 native library", indicating that the driver was not found on the classpath.

You will also need to find out which of the classes in this jar or zip file implements the JDBC driver; this information goes into the JDBC Driver parameter.

The JDBC Connector also provides multi-line input fields for the SELECT, INSERT, UPDATE and DELETE statements. When configured, the JDBC connector will use the value for any of these instead of its own auto-generated statement. The value is a template expanded by the parameter substitution module that yields a complete SQL statement. The template has access to the connector configuration as well as the searchcriteria and conn objects. The work object is not available for substitution, since the connector does not know what work contains. Additional provider parameters are also supported in the connector configuration.
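
For example, a custom SQL Select template might look like the following (the {config.jdbcTable} reference expands to the configured table name; the status column is a hypothetical illustration):

```
SELECT * FROM {config.jdbcTable} WHERE status = 'active'
```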

The JDBC Connector supports the following modes: AddOnly, Update, Delete, Lookup, Iterator, Delta.

In principle this Connector can handle secure connections using the SSL protocol, but driver-specific configuration steps may be required to set up SSL support. Refer to the manufacturer's driver documentation for details.

Connector structure and workflow

The JDBC connector makes a connection to the specified data source during connector initialization. While making the connection, any extra provider parameters are checked for and, if specified, set. The auto-commit flag is also handled and set during connection initialization.

The JDBC connector builds SQL statements internally using a predefined mapping table. The connector flow behaves the same way as other connectors in AddOnly, Update, Delete, Iterator and Lookup modes.

In addition, this Connector supports Delta mode; the delta functionality for the JDBC connector is handled by the ALComponent (a generic building block common to all Connectors). The ALComponent will do a lookup and apply the delta Entry to a target Entry before doing an update, and then decide what the correct database operation must be. The Connector will then use the SQL statements for add, modify or delete, corresponding to what the operation is.

Understanding JDBC Drivers

In order for the JDBC Connector to access a relational database, it needs access to a driver: a set of subroutines or methods contained in a Java™ class library. This library must be present in the CLASSPATH of SDI; otherwise SDI will not be able to load the library when initializing the Connector, and hence will be unable to talk to the relational database (RDBMS). A good way to install a JDBC driver library so that SDI can use it is to copy it into the TDI_install_dir/jars directory, or a directory of your choosing subordinate to this, for example TDI_install_dir/jars/local.

Notes:

  1. Some drivers may contain native code, typically presented in .dll or .so files – these need to be added to the PATH variable in order for SDI to pick them up at run time.
  2. Be aware of duplicate class names. If your libraries contain classes that duplicate classes in any of the other libraries in the CLASSPATH, it is undefined which class will be loaded.
  3. The library should be readable by all users.
  4. The applications wishing to use the library must be restarted after installing the library (Configuration Editor, SDI Servers).

There are four fundamental ways of accessing an RDBMS through JDBC (these are often referred to as driver types):

  1. Drivers that implement the JDBC API as a mapping to another data access API, such as Open Database Connectivity (ODBC). Drivers of this type are generally dependent on a native library, which limits their portability. The JDBC-ODBC Bridge driver is an example of a Type 1 driver; this driver is generally part of the JVM, so it does not need to be specified separately on the SDI classpath.

    To configure ODBC, see Specifying ODBC database paths.

    Note: The JDBC-ODBC bridge may be present in any of the different platform-dependent JVMs that IBM ships with the product. However, IBM supports the JDBC-ODBC bridge on Windows platforms only. In addition, performance is likely to be sub-optimal compared to a dedicated, native ("Type 4") driver. Commercial ODBC/JDBC bridges are available; if you need a JDBC-ODBC bridge, consider purchasing a commercially available one. See also the JDBC-ODBC bridge drivers discussion at http://java.sun.com/products/jdbc/driverdesc.html.

  2. Drivers that are written partly in the Java programming language and partly in native code. The drivers use a native client library specific to the data source to which they connect. Again, because of the native code, their portability is limited.
  3. Drivers that use a pure Java client and communicate with a middleware server using a database-independent protocol. The middleware server then communicates the client's requests to the data source.
  4. Drivers that are pure Java and implement the network protocol for a specific data source. The client connects directly to the data source.

With the exception of the JDBC-ODBC bridge on Windows, only Type 4 drivers are used with SDI. The other types are discussed as well, in the context of each of the supported databases, for a better understanding.

JDBC Type 3 and Type 4 drivers use a network protocol to communicate with their back ends. This usually implies a TCP/IP connection; ordinarily this is a straight TCP/IP socket, but if the driver supports it, it can be a Secure Sockets Layer (SSL) connection.

Note: When working with custom prepared statements, make sure that the JDBC used driver is compliant with JDBC 3.0. There is a known issue with IBM solidDB® 6.5, since the driver implements only JDBC 2.0. If the Use custom SQL prepared statements option is enabled when working with this database, a java.lang.NullPointerException will be thrown.

Connect to DB2

The IBM driver for JDBC and SQLJ bundled with SDI was obtained from http://www-306.ibm.com/software/data/db2/java. It is JDBC 1.2, JDBC 2.0, JDBC 2.1 and JDBC 3.0 compliant.

Information about the JDBC driver for IBM DB2® is available online; a starting point and example for configuration purposes is the section on "How JDBC applications connect to a data source" in the DB2 Developer documentation. This driver may or may not suit your purpose.

Driver Licensing

This driver does not need further licensing for DB2 database systems (that is, the appropriate license file, db2jcc_license_cu.jar, is already included), except DB2 for zSeries and iSeries®. For the driver to be able to communicate with the latter two systems you need to obtain the DB2 Connect™ product and copy its license file, db2jcc_license_cisuz.jar, to the jars/3rdparty/IBM directory. In addition, since this driver is a fat client with natively compiled code (.dll/.so), the DB2 Connect install path needs to be added to the PATH variable for these libraries to be used.

Based on the JDBC driver architecture DB2 JDBC drivers are divided into four types.

  1. DB2 JDBC Type 1

    This is a DB2 ODBC (not JDBC) driver, which you connect to using a JDBC-ODBC bridge driver. This driver is essentially no longer used.

    A JDBC Type 1 driver can be used with JDBC 1.2, JDBC 2.0, and JDBC 2.1.

    To configure ODBC, see Specifying ODBC database paths.

  2. DB2 JDBC Type 2

    The DB2 JDBC Type 2 driver is quite popular and is often referred to as the app driver. The name comes from the notion that this driver performs a native connect through a local DB2 client to a remote database, and from its package name (COM.ibm.db2.jdbc.app.*).

    In other words, you have to have a DB2 client installed on the machine where the application that is making the JDBC calls runs. The JDBC Type 2 driver is a combination of Java and native code, and will therefore usually yield better performance than a Java-only Type 3 or Type 4 implementation.

    This driver's implementation uses a Java layer that is bound to the native platform C libraries. Programmers using the J2EE programming model will gravitate to the Type 2 driver as it provides top performance and complete function. It is also certified for use on J2EE servers.

    The implementation class name for this type of driver is com.ibm.db2.jdbc.app.DB2Driver.

    The JDBC Type 2 drivers can be used to support JDBC 1.2, JDBC 2.0, and JDBC 2.1.

  3. DB2 JDBC Type 3

    The JDBC Type 3 driver is a pure Java implementation that must talk to middleware that provides a DB2 JDBC Applet Server. This driver was designed to enable Java applets to access DB2 data sources. An application using this driver can talk to another machine where a DB2 client has been installed.

    The JDBC Type 3 driver is often referred to as the net driver, appropriately named after its package name (COM.ibm.db2.jdbc.net.*).

    The implementation class name for this type of driver is com.ibm.db2.jdbc.net.DB2Driver.

    The JDBC Type 3 driver can be used with JDBC 1.2, JDBC 2.0, and JDBC 2.1.

  4. DB2 JDBC Type 4

    The JDBC Type 4 driver is also a pure Java implementation. An application using a JDBC Type 4 driver does not need to interface with a DB2 client for connectivity because this driver comes with Distributed Relational Database Architecture™ Application Requester (DRDA® AR) functionality built into the driver.

    The implementation class name for this type of driver is com.ibm.db2.jcc.DB2Driver.

    The latest version of this driver (9.1) supports SSL connections; this requires setting a property in the Extra Provider Parameters field. For more information see http://publib.boulder.ibm.com/infocenter/db2luw/v9/topic/com.ibm.db2.udb.apdv.java.doc/doc/rjvdsprp.htm.

    Note that the target database must be set up such that it accepts incoming SSL connections.
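
    As an illustration (the host, port and database names below are hypothetical), a Type 4 connection typically uses a JDBC URL of the following form, with com.ibm.db2.jcc.DB2Driver as the driver class; to request SSL, the JCC sslConnection property can be supplied in the Extra Provider Parameters field:

    ```
    jdbc:db2://db2host.example.com:50000/SAMPLE
    sslConnection=true
    ```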

If the JDBC Connector's query schema throws an exception, or the Add/Update action on JDBC tables fails for BLOB data types, contact your database administrator and request that the required stored procedure for retrieving the schema be installed. For more information about accessing DB2 from Java, see also Overview of Java Development in DB2 UDB for Linux, UNIX, and Windows.

Connect to Informix Dynamic Server

If you install the Informix® Client SDK, you will also install Informix ODBC drivers which allow you to use a JDBC-ODBC bridge driver. This driver is not recommended for production use. To configure ODBC, see Specifying ODBC database paths.

However, we recommend you use the Informix JDBC driver, version 3.0. It is a pure-Java (Type 4) driver, which provides enhanced support for distributed transactions and is optimized to work with IBM WebSphere® Application Server.

It consists of a set of interfaces and classes written in the Java programming language. Included in the driver is Embedded SQL/J which supports embedded SQL in Java.

The implementation class for this driver is com.informix.jdbc.IfxDriver. For information about how to install the Informix driver, see http://publib.boulder.ibm.com/infocenter/idshelp/v111/index.jsp?topic=/com.ibm.conn.doc/jdbc_install.htm
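
As an illustration (the host, port, database and server names are hypothetical), an Informix Type 4 JDBC URL generally takes the following form:

```
jdbc:informix-sqli://ifxhost.example.com:1526/stores_demo:INFORMIXSERVER=ol_myserver
```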

Connect to Oracle

Based on the JDBC driver architecture the following types of drivers are available from Oracle.

  1. Oracle JDBC Type 1

    This is an Oracle ODBC (not JDBC) driver, that you connect to using a JDBC-ODBC bridge driver. Oracle does supply an ODBC driver, but does not supply a bridge driver. Instead, you can use the default JDBC-ODBC bridge that is part of the JVM, or get one of the JDBC-ODBC bridge drivers from http://java.sun.com/products/jdbc/drivers.html. This configuration works fine, but a JDBC Type 2 or Type 4 driver will offer more features and will be faster.

    To configure ODBC, see Specifying ODBC database paths.

  2. Oracle JDBC Type 2

    There are two flavors of the Type 2 driver.

    • JDBC OCI client-side driver

      This driver uses Java native methods to call entrypoints in an underlying C library. That C library, called OCI (Oracle Call Interface), interacts with an Oracle database. The JDBC OCI driver requires an Oracle client installation of the same version as the driver. The use of native methods makes the JDBC OCI driver platform specific. Oracle supports Solaris, Windows, and many other platforms. This means that the Oracle JDBC OCI driver is not appropriate for Java applets, because it depends on a C library. Starting from Version 10.1.0, the JDBC OCI driver is available for installation with the OCI Instant Client feature, which does not require a complete Oracle client-installation. Please refer to the Oracle Call Interface for more information.

    • JDBC Server-Side Internal driver

      This driver uses Java native methods to call entrypoints in an underlying C library. That C library is part of the Oracle server process and communicates directly with the internal SQL engine inside Oracle. The driver accesses the SQL engine by using internal function calls and thus avoiding any network traffic. This allows your Java code to run on the server to access the underlying database in the fastest possible manner. It can only be used to access the same database.

  3. Oracle JDBC Type 4

    Again, there are two flavors of the Type 4 driver.

    • JDBC Thin client-side driver

      This driver uses Java to connect directly to Oracle. It implements Oracle's SQL*Net Net8 and TTC adapters using its own TCP/IP based Java socket implementation. The JDBC Thin client-side driver does not require Oracle client software to be installed, but does require the server to be configured with a TCP/IP listener. Because it is written entirely in Java, this driver is platform-independent. The JDBC Thin client-side driver can be downloaded into any browser as part of a Java application. (Note that if running in a client browser, that browser must allow the applet to open a Java socket connection back to the server.)

      This is the most commonly-used driver. In general, unless you need OCI-specific features, such as support for non-TCP/IP networks, use the JDBC Thin driver.

      The implementation class for this driver currently is oracle.jdbc.driver.OracleDriver.

    • JDBC Thin server-side driver

      This driver uses Java to connect directly to Oracle. This driver is used internally within the Oracle database, and it offers the same functionality as the JDBC Thin client-side driver, but runs inside an Oracle database and is used to access remote databases. Because it is written entirely in Java, this driver is platform-independent. There is no difference in your code between using the Thin driver from a client application or from inside a server.
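
As an illustration for the commonly used JDBC Thin client-side driver (the host, port, SID and service names are hypothetical), the JDBC URL generally takes one of these forms:

```
jdbc:oracle:thin:@orahost.example.com:1521:ORCL
jdbc:oracle:thin:@//orahost.example.com:1521/orcl.example.com
```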

For more information about accessing Oracle from Java, see also Java, JDBC & Database Web Services, and the Oracle JDBC FAQ.

Connect to SQL Server

The Microsoft SQL Server 2008 driver for JDBC supports the JDBC 1.22, JDBC 2.0 and JDBC 3.0 specifications. It is a Type 4 driver.

The implementation class for this driver is com.microsoft.sqlserver.jdbc.SQLServerDriver. It is contained in the driver file sqljdbc.jar, typically obtained from the MS SQL Server 2008 installation, at <Microsoft SQL Server 2008-Install-Dir>\sqljdbc_1.1.1501.101_enu\sqljdbc_1.1\enu\sqljdbc.jar.
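
As an illustration (the host and database names are hypothetical; 1433 is the usual SQL Server port), a JDBC URL for the Microsoft driver generally takes the following form:

```
jdbc:sqlserver://sqlhost.example.com:1433;databaseName=Northwind
```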

You can also use other third-party drivers to connect to Microsoft SQL Server.

The jTDS JDBC 3.0 driver distributed under the GNU LGPL is a good choice. This is a Type 4 driver and supports Microsoft SQL Server 6.5, 7, 2000, and 2005. jTDS is 100% JDBC 3.0 compatible, supporting forward-only and scrollable/updateable ResultSets, concurrent (completely independent) Statements and implementing all the DatabaseMetaData and ResultSetMetaData methods. It can be downloaded freely from http://jtds.sourceforge.net. More information about this driver is available from the Web site.

Connect to Sybase Adaptive Server

The jConnect for JDBC driver by Sybase provides high performance native access (Type 4) to the complete family of Sybase products including Adaptive Server Enterprise, Adaptive Server Anywhere, Adaptive Server IQ, and Replication Server.

jConnect for JDBC is an implementation of the Java JDBC standard; it supports JDBC 1.22 and JDBC 2.0, plus limited compliance with JDBC 3.0. It provides Java developers with native database access in multi-tier and heterogeneous environments. You can download jConnect for JDBC quickly, without a previous client installation, for use with thin-client Java applications such as SDI.

The implementation class name for this driver is com.sybase.jdbc3.jdbc.SybDriver.

You can also use other third-party drivers to connect to Sybase.

The jTDS JDBC 3.0 driver distributed under the GNU LGPL is a good choice. This is a Type 4 driver and supports Sybase 10, 11, 12 and 15.1. jTDS is 100% JDBC 3.0 compatible, supporting forward-only and scrollable/updateable ResultSets, concurrent (completely independent) Statements and implementing all the DatabaseMetaData and ResultSetMetaData methods. It can be downloaded freely from http://jtds.sourceforge.net. More information about this driver is available from the Web site.

Connect to Derby

Derby is a relational database, modeled after IBM DB2 and written entirely in Java. This database product, as well as its drivers, is bundled with SDI. The network driver is a Type 4 driver: pure Java code.

The implementation class name for this driver is org.apache.derby.jdbc.ClientDriver.

Refer to the Derby Developer's Guide, Conventions for specifying the database paths, for more information about how to construct your JDBC URLs when using Derby.
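
As an illustration (the database name is hypothetical; 1527 is the Derby Network Server default port), JDBC URLs for the embedded and network drivers generally look like:

```
jdbc:derby:myDB;create=true
jdbc:derby://localhost:1527/myDB;create=true
```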

Connect to IBM solidDB

IBM solidDB is a relational in-memory database that offers enhanced performance compared to Derby. Thus, it can be used as System Store instead of the Derby database, to boost the performance of the components relying on it.

The driver provided by IBM solidDB is Type 4 (completely implemented in Java). It can be obtained from the database installation, from SolidDB_install_dir/jdbc/SolidDriver2.0.jar.

Detailed information on IBM solidDB can be found at http://publib.boulder.ibm.com/infocenter/soliddb/v6r3/index.jsp.

Note: The driver for IBM solidDB is not JDBC 3.0 compliant, but implements JDBC 2.0 only. This may cause problems if you use Custom Prepared Statements.

Specifying ODBC database paths

When you use ODBC connectivity through the JDBC-ODBC bridge (supported on Windows systems only) you can specify a database or file path for the ODBC driver to use, if the ODBC driver permits. This type of configuration avoids having to define a data source name for each database or file path your Connector uses.

Schema

In Iterator and Lookup modes the JDBC Connector schema depends on the metadata information read from the database for the table name specified. If no table name is given the schema is retrieved using the SQL Select/Lookup statements (if defined; see Customizing select, insert, update and delete statements).

In AddOnly, Delete, Update and Delta modes the JDBC Connector schema depends on the metadata information read from the database for the table name specified.

Configuration

The Connector needs the following parameters:

Link Criteria configuration

Link criteria specified in the Connector's configuration for Lookup, Delete, Update and Delta modes are used to specify the WHERE clause in the SQL queries used to interact with the database.

The SDI operand Equal is translated to the equal sign ( = ) in the SQL query, while the Contains, Start With and End With operators are mapped to the like operator.
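
As a sketch of this operator mapping (illustrative JavaScript, not the connector's actual implementation; the function name and operator labels are hypothetical), a single Link Criteria item could be translated like this:

```javascript
// Illustrative only: translate one Link Criteria item into a SQL WHERE
// condition. The connector's real code also handles quoting, negation
// and data types; this sketch covers just the operator mapping.
function criterionToSql(name, match, value) {
    switch (match) {
        case "equals":      return name + " = '" + value + "'";
        case "contains":    return name + " LIKE '%" + value + "%'";
        case "starts with": return name + " LIKE '" + value + "%'";
        case "ends with":   return name + " LIKE '%" + value + "'";
        default: throw new Error("unsupported match operator: " + match);
    }
}
```

For example, criterionToSql("cn", "equals", "john doe") yields cn = 'john doe', while the contains operator yields cn LIKE '%john doe%'.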

Skip Lookup in Update or Delete mode

The JDBC Connector supports the Skip Lookup general option in Update or Delete mode. When it is selected, no search is performed prior to actual update and delete operations. Special code in the Connector retrieves the proper number of entries affected when doing update or delete.

Customizing select, insert, update and delete statements

Overview

The JDBC connector has the ability to expand a SQL template before executing any of its SQL operations. There are five operations where the templates can be used. These operations are:
Table 29. SQL Operations

Operation  Connector Parameter name  Description                                                                   Mode(s)
SELECT     SQL Select                Used in Iterator mode (no search criteria).                                   Iterator
INSERT     SQL Insert                Used when adding an entry to the data source.                                 Update, AddOnly, Delta
UPDATE     SQL Update                Used when modifying an existing entry in the data source.                     Update, Delta
DELETE     SQL Delete                Used when deleting an existing entry in the data source.                      Delete, Delta
LOOKUP     SQL Lookup                A SELECT statement with a WHERE clause. Used when searching the data source.  Lookup, Delete, Update

If the template for a given operation is not defined (that is, it is null or empty), the JDBC connector will use its own internal template.

When there is a template defined for an operation, the template must generate a complete and valid SQL statement. The template can reference the standard parameter substitution objects (for example, mc, config, work, Connector), as well as the JDBC schema for the table configured for the connector and a few other convenience objects.

Note: The template for the LOOKUP operation can contain a WHERE clause filtering the elements that will be returned by the query. But when the connector is in Lookup, Update or Delete mode the Link Criteria parameter is mandatory, as it is used to assemble a WHERE clause for the executed query. If Link Criteria is omitted an exception will be thrown:

java.lang.Exception: CTGDIS143E No criteria can be built from input (no link criteria specified).
    at com.ibm.di.server.SearchCriteria.buildCriteria(Unknown Source)
Therefore, if a configuration uses a WHERE clause in the LOOKUP template, you must still provide Link Criteria even though they will not be needed; the connector simply ignores them and uses the template query. To save you from adding unneeded "dummy" conditions in the Link Criteria, check the option Build criteria from custom script and leave the displayed script area empty.

Metadata Object

The information about JDBC field types is provided as an Entry object named metadata. Each attribute in the metadata Entry object corresponds to a field name and the value will be that field's corresponding type. For example, a table with the following definition:

CREATE TABLE SAMPLE (
    name varchar(255),
    age numeric(10)
)

could be referenced in the following manner, during parameter substitution:

{javascript<<EOF
    metadata = params.get("metadata");
    if (metadata.getAttribute("name").equals("varchar"))
        return "some sql statement";
    else
        return "some other sql statement";
EOF
}

Link Object (Link Criteria)

The LinkCriteria values are available in the link object. The link object is an array of link criteria items. Each item has fields that define the link criteria according to configuration. If the configured link criteria is defined as cn equals john doe then the template could access this information with the following substitution expressions:

link[0].name > "cn"
link[0].match > "="
link[0].value > "john doe"
link[0].negate > false

A complete template for a SELECT operation could look like this:

SELECT * FROM {config.jdbcTable} WHERE {link[0].name} = '{link[0].value}'

Convenience Objects

Generating the WHERE clause or the list of column names is not easy without resorting to JavaScript code. As a convenience, the JDBC Connector makes available the column names that would have been used in an UPDATE and INSERT statement as columns; this does not apply to SELECT and LOOKUP statements. This value is a comma-delimited list of column names. The textual WHERE clause is available as "whereClause" to simplify operations. Below is an example of how to use both:

SELECT {columns} from {config.jdbcTable} WHERE {whereClause}

for example, SELECT a,b,c from TABLE-A WHERE a > 1 AND b = 2
Table 30. Information available for different statements

Object       SELECT  LOOKUP  INSERT  DELETE  UPDATE
config       yes     yes     yes     yes     yes
Connector    yes     yes     yes     yes     yes
metadata     no      maybe   maybe   yes     yes
conn         no      no      yes     yes     yes
columns      no      no      yes     yes     yes
link         no      yes     no      yes     yes
whereClause  no      yes     no      yes     yes

Option to turn off Prepared Statements in the JDBC connector

The JDBC connector uses PreparedStatement objects to efficiently execute SQL statements on a connected RDBMS server. However, there may be cases where the JDBC driver does not support PreparedStatements. As a fallback mechanism, a config parameter (jdbcPreparedStatement, labelled Use prepared statements in the configuration panel) is available in the configuration of the JDBC connector. This parameter is a Boolean flag that indicates whether the JDBC connector should use PreparedStatements. If it is set, the connector will use PreparedStatement and will fall back to normal Statements (java.sql.Statement) in case of an exception. If it is not set, the JDBC connector will use normal Statements when executing SQL queries. This checkbox gives an SDI solution developer a way to handle problems caused by the use of PreparedStatements. The checkbox is set by default, meaning that the JDBC connector will use PreparedStatement.

The findEntry, putEntry, deleteEntry and modEntry methods of the JDBC connector check the value of the usePreparedStatement flag to determine whether to use PreparedStatements or Statements. If a connector config does not have this flag (as in an older version of the config), its value defaults to true. This ensures that there are no migration issues or impact.

Custom Prepared Statements

When you use the JDBC connector without custom SQL statements, it uses Prepared Statements internally for faster access to the JDBC target database. The connector builds simple SQL prepared statements to perform the connector operations (for example, SELECT * from TABLE WHERE x = ?) and then uses the JDBC API to provide values for the placeholders (the question marks) in the statement. This makes it easy to provide a variety of Java™ objects to the database without having to do complex string encoding of values.

Every now and then, the user needs to override the SQL statements the JDBC connector creates. This is where the SQL Insert/Update/etc. configuration parameters come into play. The user can specify the exact SQL statement used for a specific operation. The SQL statement is provided to the JDBC driver as a standalone statement, which basically means that the SQL statement contains values for columns used in the statement. This also means that the user must build the statement including values for columns in the configuration itself, including any complex string encoding of values.

With the custom prepared statements feature, the user can now use proper prepared statements. This will make custom statements much easier since it removes the encoding requirement. Also, if the statement does not change between calls, the prepared statement is reused, which results in faster execution.

The JDBC connector has a parameter named Use custom prepared statements. This checkbox enables or disables custom prepared statements and is false by default. When it is enabled you must use the proper syntax in the custom SQL field. While you can still use constants in the SQL statement, you must either properly escape any question marks in the statement or provide an expression for the prepared statement placeholder. Type "?" or use Ctrl-<space> to bring up the code completion helper, which lets you choose from the list of attributes you have in the output map as well as other common expressions. Once you have chosen an expression and pressed Enter, the editor inserts that string between two question marks. Also note the syntax highlighting, which provides feedback as to what is being interpreted as a placeholder expression:

Select * from table where modified_date > ?{javascript return new java.util.Date()}
	and something_else < ?{conn.a}

Note that it is also possible to use expressions in the statement where you normally don't have placeholders. Prepared statements can also be provided by means of an API; sometimes this may be the easiest option to use. See APIs to allow specification of Prepared Statements for more information. If you use the JDBC Connector with a JDBC 2.0 driver, in particular with IBM solidDB®, also see Connecting to IBM solidDB.

Background

To more fully explain the need for the above functionality, consider the following:

The current format for custom SQL statements requires the user to enter a complete SQL statement. Often, this can be tedious, and nearly impossible if the values are complex or binary in nature. An option is present in the JDBC connector (Use custom prepared statements) where the user can toggle between prepared statement mode and the plain string mode. Prepared statement mode applies a different syntax to the custom SQL statements; when it is chosen, the string is not subjected to the substitution that plain string mode performs.

As an example let's use a simple SELECT statement to illustrate the problem with building a complete SQL statement:

	SELECT * FROM TABLE_NAME WHERE modified_date > 03/04/09

This statement contains a WHERE clause filtering on modified_date being greater than a given date. The example shows the problematic nature of complete SQL statements: is the date March 4th, 2009 or April 3rd, 2009? There are other cases where building a complete SQL statement becomes even more problematic.
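To make the ambiguity concrete, here is a small illustrative sketch (ordinary JavaScript, not SDI code; parseDate is a hypothetical helper) showing how the same string yields two different dates depending on the assumed format:

```javascript
// Hypothetical helper: parse "xx/yy/zz" under an assumed format.
function parseDate(str, format) {
    var p = str.split("/").map(Number);
    if (format === "MM/DD/YY") return new Date(2000 + p[2], p[0] - 1, p[1]);
    if (format === "DD/MM/YY") return new Date(2000 + p[2], p[1] - 1, p[0]);
}

// The same literal is March 4th under one format and April 3rd under the other.
parseDate("03/04/09", "MM/DD/YY"); // March 4, 2009
parseDate("03/04/09", "DD/MM/YY"); // April 3, 2009
```

A prepared statement placeholder sidesteps this entirely, because the value is passed as a typed object rather than as text to be re-parsed.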

With the Use custom prepared statements option, the user can use a slightly modified SQL prepared statement. A SQL prepared statement in JDBC terms is a complete SQL statement with placeholders for values. The placeholder is the question mark and is replaced with a value at runtime.

	SELECT * FROM TABLE_NAME WHERE modified_date > ?

However, the JDBC connector needs to know which value to provide for each placeholder. To make the prepared statement as syntactically correct as possible while also providing the ability to specify which values are provided at runtime, the prepared statement syntax is slightly modified:

	SELECT * FROM TABLE_NAME WHERE modified_date > {expression}

This is not valid prepared statement syntax, but the JDBC connector parses this string and replaces "{expression}" with a single question mark before executing the statement. The "{expression}" is an SDI expression that provides the value for the prepared statement placeholder. The text field editors for the custom SQL statements provide additional functionality to aid you in building the statement.
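As an illustration of the parsing step described above (a sketch in plain JavaScript, not the Connector's actual implementation; the work.lastRun expression is an assumed example), splitting such a statement into a standard prepared statement plus the list of value expressions might look like this:

```javascript
// Replace each {expression} with a "?" placeholder and collect the
// expressions that will supply the values at runtime.
function toPreparedSql(sql) {
    var expressions = [];
    var prepared = sql.replace(/\{([^}]*)\}/g, function (m, expr) {
        expressions.push(expr);
        return "?";
    });
    return { sql: prepared, expressions: expressions };
}

var r = toPreparedSql("SELECT * FROM TABLE_NAME WHERE modified_date > {work.lastRun}");
// r.sql         → "SELECT * FROM TABLE_NAME WHERE modified_date > ?"
// r.expressions → ["work.lastRun"]
```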

Note: When custom prepared statements are used, the user must also provide the WHERE clause where applicable.

Additional JDBC Connector functions

Apart from the standard functions exposed by all Connectors, this Connector exposes several other functions you can use in your scripts. You can call them using the special variable thisConnector, for example, thisConnector.commit(); when called from any scripting location in the Connector.

The above functions do not interfere with the normal flow of entries and attribute mappings for the Connector.

API to disable or enable parameter substitution

The preceding subsections describe the parameter substitution the JDBC Connector applies to the insert, update, and delete SQL statements it executes. In certain cases this causes problems, because your customized SQL can end up with substrings (starting with a "{" and ending with a "}") that the parameter substitution mechanism acts upon when it should not. The JDBC Connector therefore exposes an API so you can disable or enable parameter substitution for the SQL statements it executes.

	/**
	 * Sets the enableParamSubstitute parameter.
	 */
	public void setParameterSubstitution(boolean val)
	{
		enableParamSubstitute = val;
	}

	/**
	 * Returns the value of the enableParamSubstitute parameter.
	 */
	public boolean getParameterSubstitution()
	{
		return enableParamSubstitute;
	}

An alternative to using this API to avoid unwanted parameter substitution is using escape characters.

The escape character is "\". If a "\" directly precedes a {ArgumentIndex} or {TDIReference} token (that is, \{ArgumentIndex} or \{TDIReference}), the parameter substitution does not take place; instead, the escape character is removed and the token is passed through unchanged. For example, \{TDIReference} becomes simply {TDIReference} after processing.
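The escaping rule can be sketched as follows (plain JavaScript, for illustration only; the substitute function and its values argument are hypothetical, not the Connector's API):

```javascript
// "{name}" is substituted from values; "\{name}" passes through literally
// with the escape character removed.
function substitute(sql, values) {
    return sql.replace(/(\\?)\{([^}]*)\}/g, function (m, esc, name) {
        return esc === "\\" ? "{" + name + "}" : values[name];
    });
}

substitute("WHERE a = {1} AND b = '\\{1}'", { "1": "42" });
// → "WHERE a = 42 AND b = '{1}'"
```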

APIs to allow specification of Prepared Statements

For power-users, it may be easier to just use an API to specify the correct Prepared Statement that they would like to use and also all the values that should be used. The following new methods have been added to the JDBCConnector:

public PreparedStatement setPreparedModifyStatement(String preparedSql) 
public PreparedStatement setPreparedDeleteStatement(String preparedSql)
public PreparedStatement setPreparedInsertStatement(String preparedSql)
public PreparedStatement setPreparedFindStatement(String preparedSql)
public PreparedStatement setPreparedSelectStatement(String preparedSql)

With these methods, you can write code like the following, for example to do a special select:

	ps = thisConnector.connector.setPreparedSelectStatement("Select * from tableName where fieldName = ? and field2 = ?");
	ps.setInt(1, someValue);
	ps.setObject(2, someObject);

The Javadocs for the methods give more examples.

Timestamps

If you want to store a timestamp value containing both a date and a time, you must make sure you provide an object of type java.sql.Timestamp, as in this Attribute Mapping:

ret.value = new java.sql.Timestamp(new java.util.Date().getTime());

The java.sql.Timestamp type can also come in handy if storing DATE fields in tables causes trouble, for example the Oracle error ORA-01830: date format picture ends before converting entire input string. Normally, if you try to store date/time values in the form of strings, the Date Format parameter comes into play to convert the string into the DATE type the underlying database expects; if there is a mismatch between this parameter and your string-formatted date/time value, problems ensue.

To troubleshoot your problem:

Padding

Traditionally, the JDBC Connector would pad data to be added in the CHAR datatype column if the length of data was less than the column width. This was the default behavior and there was no option for configuring the padding.

With the advent of the UTF-8 character set, this could result in unexpected behavior, since the Connector was not able to determine the exact byte length of UTF-8 data. This, in turn, resulted in an indeterminate amount of whitespace being added; the data length became larger than the column width, and the database threw an exception.

To get around this problem, you can optionally disable padding for the various operations performed by the Connector, namely Insert, Update and Lookup operations, in AddOnly, Lookup, Update, Delete and Delta modes. See the Configuration section for the parameters that select this functionality.

For UTF-8 data the padding should be disabled. For Latin-1 characters the padding can be enabled or disabled.
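The underlying issue can be demonstrated in a few lines (illustrative JavaScript, not the Connector's code): padding by character count can exceed a byte-limited column width when the data contains multi-byte UTF-8 characters.

```javascript
// Pad by character count, the way CHAR-column padding traditionally works.
function padToWidth(value, width) {
    while (value.length < width) value += " ";
    return value;
}

var padded = padToWidth("héllo", 8);      // 8 characters...
Buffer.byteLength(padded, "utf8");        // ...but 9 bytes, since "é" is 2 bytes in UTF-8
```

A database that enforces the column width in bytes would reject the padded value even though its character count fits, which is why disabling padding is the safe choice for UTF-8 data.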

Calling Stored Procedures

The JDBC Connector's "getConnection()" method gives you access to the JDBC Connection object created when the connector has successfully initialized.

In other words, if your JDBC connector is named DBconn in your AL,

var con = DBconn.getConnector().getConnection();

will give you access to the JDBC Connection object (an instance of java.sql.Connection).

Note: When called from anywhere inside the connector itself, you can also use the thisConnector variable.

Here is a code example illustrating how to invoke a stored procedure on that database:

// Stored procedure call
var command = "{call DBName.dbo.spProcedureName(?,?)}";

try {
    var cstmt = con.prepareCall(command);

    // Assign IN parameters (use positional placement)
    cstmt.setString(1, "Christian");
    cstmt.setString(2, "Chateauvieux");

    cstmt.execute();

    cstmt.close();
    // Security Directory Integrator will close the connection, but you might want to force a close now.
    DBconn.close();
}
catch(e) {
    main.logmsg(e);
}

SQL Databases: column names with special characters

If you have columns with special characters in their names and use the AddOnly or Update modes:

  1. Go to the attribute map of the Update or AddOnly Connector
  2. Rename the Connector attribute (not the work attribute!) from name-with-dash to "name-with-dash" (add quotes).

Whether you need this functionality can depend on the JDBC driver you are using, but standard MS Access 2000 has this problem.

Using prepared statements

This section describes how the Connector creates SQL queries. You can skip this section unless you are curious about the internals.

For a database, the Connector uses prepared statements or dynamic query depending on the situation:

On Multiple Entries

See Appendix B. AssemblyLine Flow Diagrams for more information about what happens when a Connector has a link criteria returning multiple entries.

For the JDBC Connector in Delete or Update mode, if you have used the setCurrent() method of the Connector and not added extra logic, all entries matching the link-criteria are deleted or updated.

Additional built-in reconnect rules

The JDBC Connector takes advantage of the Reconnect engine that is part of SDI v7.2. In addition to the standard behavior this engine provides, the JDBC Connector has a number of additional built-in rules. The Connector specific built-in rules will perform a reconnect if a java.sql.SQLException is thrown and the exception contains the following messages, evaluated using Regular Expressions:

These rules are visible in the Connection Errors pane in the Connector's configuration.

JMS message flow

Everything sent and received by the JMS Connector is a JMS message. The JMS Connector converts the SDI Entry object into a JMS message and vice versa. Each JMS message contains predefined JMS headers, user defined properties and some kind of body that is either text, a byte array or a serialized Java™ object.

The JMS Connector provides a method that can greatly facilitate communication with the JMS bus: acknowledge(). The acknowledge() method explicitly acknowledges all of the JMS session's consumed messages when Auto Acknowledge is unchecked. By invoking the Connector's acknowledge(), you acknowledge all messages consumed by the session to which the message was delivered. Calls to acknowledge() are ignored when Auto Acknowledge is checked.

Careful thought must be given to the acknowledgement of received messages. As described, the best approach is not to use Auto Acknowledge in the JMS Connector, but rather to insert a Script Connector right after the JMS Connector in the AssemblyLine that invokes the JMS Connector's acknowledge() method. This keeps the window between saving the relevant message information to the System Store and notifying the JMS queue as small as possible. If a failure occurs in this window, the message is received once more.

Conversely, relying on Auto Acknowledge creates a window that lasts from the point at which the message is retrieved from the queue (and acknowledged) until the message contents mapped into the entry are secured in the System Store. If a failure occurs in this window, the message is lost, which can be a greater problem.

Note: There can be a problem when configuring the JMS Connector in the Config Editor while Auto Acknowledge is on: when going through the process of schema discovery using either Schema->Connect->GetNext or Quick Discover from the Input Map, a message is grabbed and consumed (that is, gone from the input queue). This may be an unintended side effect. To avoid it, turn Auto Acknowledge off before schema detection, but remember to switch it back on again afterwards if that is the desired behavior.

IBM WebSphere MQ and JMS/non-JMS consumers of messages

When the JMS Connector sends messages to IBM WebSphere® MQ, it is capable of sending these messages in two different modes depending on the client which will read these messages:

By default the Connector sends messages intended to be read by non-JMS clients. The major difference between the two modes is that when messages are intended for non-JMS clients, the JMS properties are ignored; thus a subsequent lookup on these properties will not find a match.

In order to switch to the "intended to be read by JMS clients" mode, the "Specific Driver Attributes" parameter value must contain the following line (apart from any other attributes specified): mq_nonjms=false

JMS message types

The JMS environment enables you to send different types of data on the JMS bus. This Connector recognizes three of those types, referred to as Text Message, Bytes Message and Object Message. The most interoperable strategy is to use Text Message (for example, jms.usetextmessages=true) so that applications other than SDI can read messages generated by the JMS Connector.

When you communicate with other SDI servers over a JMS bus, the Bytes Message provides a very simple way to send an entire Entry object to the recipient. This is particularly useful when the Entry object contains special Java™ objects that are not easy to represent as text. Most Java objects provide a toString() method that returns a string representation, but the opposite conversion is rarely available, and toString() does not always return very useful information. For example, the following is the string representation of a byte array:

"[B@<memory-address>"

Text message

A text message carries a body of text. The format of the text itself is undefined so it can be virtually anything. When you send or receive messages of this type the Connector does one of two things depending on whether you have specified a Parser:

var str = work.getString ("message"); 
task.logmsg ("Received the following text: " + str );

If you expect to receive text messages in various formats (XML, LDIF, CSV ...), leave the Parser parameter blank and determine for yourself what format the text message is in. When you know the format, you can use the system.parseObject(parserName, data) syntax to do the parsing for you:

var str = work.getString ("message");
// code to determine format
if ( isLDIF )
	e = system.parseObject( "ibmdi.LDIF", str );
else if ( isCSV )
	e = system.parseObject ( "ibmdi.CSV", str );
else
	e = system.parseObject ( "ibmdi.XML", str );

// Dump parsed entry to logfile
task.dumpEntry ( e );

The Use Textmessage flag determines whether the Connector must use this method when sending a message.

Object message

An object message is a message containing a serialized Java object. A serialized Java object is a Java object that has been converted into a byte stream in a specific format, which makes it possible for the receiver to resurrect the object at the other end. Testing shows that this works as long as the Java class libraries are available at both ends. Typically, a java.lang.String object causes no problems, but other Java objects might. For this reason, the JMS Connector does not generate object messages, but it is able to receive them. When you receive an object message, the Connector returns two attributes:

var obj = work.getObject ("java.object"); 
obj.anyMethodDefinedForTheObject (); 

The Connector can only receive these messages; it never sends them.

Bytes message

A bytes message is a message carrying an arbitrary array of bytes. The JMS Connector generates this type of message when the Use Textmessage flag is false: the Connector takes the provided Entry, serializes it into a byte array, and sends the message as a bytes message. When receiving a bytes message, the Connector first attempts to deserialize the byte array into an Entry object. If that fails, the byte array is returned in the message attribute. You must access the byte array using the getObject method on your work or conn entry.

var ba = work.getObject ("message"); 
for ( i = 0; i < ba.length; i++) 	
			task.logmsg ( "Next byte: " + ba [ i ] );  

This type of message is generated only if Use Textmessage is false (not checked).

Iterator mode

A message selector is a String that contains an expression. The syntax of the expression is based on a subset of the SQL92 conditional expression syntax. The message selector in the following example selects any message that has a NewsType property that is set to the value 'Sports' or 'Opinion':

NewsType = 'Sports' OR NewsType = 'Opinion'
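A sketch of how such a selector string could be assembled from simple name/value criteria (plain JavaScript, illustrative only; the Connector builds its selector internally from the Link Criteria):

```javascript
// Join simple equality criteria into a JMS message selector (SQL92 subset).
function buildSelector(criteria, operator) {
    return criteria.map(function (c) {
        return c.name + " = '" + c.value + "'";
    }).join(" " + operator + " ");
}

buildSelector([{ name: "NewsType", value: "Sports" },
               { name: "NewsType", value: "Opinion" }], "OR");
// → "NewsType = 'Sports' OR NewsType = 'Opinion'"
```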

Lookup mode

The Connector supports Lookup mode where the user can search for matching messages in a JMS Queue (Topic (Pub/Sub) is not supported by Lookup mode).

The Link Criteria specifies the JMS headers and properties for selecting matching messages on a queue.

For the advanced link criteria you must conform to the Message Selection specification as described in the JMS specification (http://java.sun.com/products/jms). The JMS Connector reuses the SQL filter specification (JMS message selection is a subset of SQL92) to build the message selection string. Turn on debug mode to view the generated message filter string.

There are basically two ways to perform a Lookup:

Decide which to use by setting the Lookup Removes flag in the Connector configuration. For Topic connections the Lookup Removes flag does not apply, as messages on topics are always removed when a subscriber receives them. However, Lookup mode heeds the Durable Subscriber flag, in which case the JMS server holds any messages sent on a topic while you are disconnected.

The JMS Connector works in the same way as other Connectors in that you can specify a maximum number of entries to return in your AssemblyLine settings. To ensure that a Lookup retrieves only a single message, set Max duplicate entries returned = 1 in the AssemblyLine settings; this retrieves one matching entry at a time regardless of the number of matching messages in the JMS queue.

Since the JMS bus is asynchronous, the JMS Connector provides parameters that determine when a Lookup must stop looking for messages. Two parameters tell the Connector how many times it queries the JMS queue and how long it waits for new messages during each query. Specifying 10 for the retry count and 1000 for the timeout causes the Connector to query the JMS queue ten times, waiting 1 second for new messages each time. If no messages are received during this interval, the Connector returns. If the Connector receives a message during a query, it continues to check for additional messages (this time without any timeout) until the queue returns no more messages or the received message count reaches the Max duplicate entries returned limit defined by the AssemblyLine. The effect is that a Lookup operation retrieves only those messages that are available at that moment.
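The retry/timeout behavior described above can be sketched like this (plain JavaScript with a stubbed receive function standing in for the JMS receive call; not the Connector's actual implementation):

```javascript
// Query the queue up to retryCount times; once messages start arriving,
// drain without waiting until the queue is empty or maxEntries is reached.
function lookup(receive, retryCount, timeout, maxEntries) {
    var found = [];
    for (var i = 0; i < retryCount && found.length === 0; i++) {
        var msg = receive(timeout);            // wait up to the timeout
        while (msg !== null && found.length < maxEntries) {
            found.push(msg);
            msg = receive(0);                  // drain without waiting
        }
    }
    return found;                              // only messages available right now
}

// Stub: the first query times out (null), then two messages are available.
var queued = [null, "msg1", "msg2"];
lookup(function () { return queued.length ? queued.shift() : null; }, 10, 1000, 5);
// → ["msg1", "msg2"]
```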

AddOnly mode

In this mode, on each AssemblyLine iteration the JMS Connector sends an entry to the JMS server. If a Topic is used the message is published and if a Queue is used the message is queued.

Call/Reply mode

In this mode the Connector has two attribute maps, both Input and Output. When the AssemblyLine invokes the Connector, an Output map operation is performed, followed by an Input map operation. There is a method in the JMS Connector called queryReply() which uses the class QueueRequestor. The QueueRequestor constructor is given a non-transacted QueueSession and a destination Queue. It creates a TemporaryQueue for the responses and provides a request() method that sends the request message and waits for its reply.

JMS headers and properties

A JMS message consists of headers, properties and the body. Headers are accessed differently than properties and were not available in previous versions. In this version you can specify how to deal with headers and properties.

JMS headers

JMS headers are predefined named values that are present in all messages (although the value might be null). The following is a list of JMS header names this Connector supports:

These headers are all set by the provider and might be acted upon by the JMS driver for outgoing messages. In the configuration screen you can specify that you want all headers returned as attributes, or specify a list of those of interest. All headers are named using the prefix jms.. Also note that JMS header names always start with the string JMS, which means that you must never use property names starting with jms.JMS, as they can be interpreted as headers.

Depending on the operation mode, the JMS Connector sets the following additional properties to its conn Entry.

JMS properties

In previous versions of this Connector, all JMS properties were copied between the Entry object and the JMS Message. In this release you can refine this behavior by telling the Connector to return all user-defined properties as attributes, or by specifying a list of properties of interest. All properties are prefixed with jms. to separate them from other attributes. If you leave the list of properties blank and uncheck the JMS Properties As Attributes flag, you get the same behavior as in previous versions. Both JMS headers and JMS properties can be set by the user. If you use the backwards-compatible mode, you must set the entry properties in the Before Add hook, as in:

conn.setProperty ( "jms.MyProperty", "Some Value" ); 

If you either check the JMS Properties As Attributes flag or specify a list of properties, you must provide the JMS properties as attributes. One way to do that is to add attributes using the jms. prefix in your attribute map. For example, adding a jms.MyProperty attribute map results in a JMS property named MyProperty.
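The prefix convention can be illustrated with a small sketch (plain JavaScript; jmsPropertyName is a hypothetical helper, and the Connector performs this mapping internally):

```javascript
// Map a "jms."-prefixed attribute name to the JMS property it represents.
function jmsPropertyName(attributeName) {
    var prefix = "jms.";
    return attributeName.indexOf(prefix) === 0
        ? attributeName.substring(prefix.length)
        : null;    // not a JMS property attribute
}

jmsPropertyName("jms.MyProperty"); // → "MyProperty"
jmsPropertyName("uid");            // → null
```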

Configuration

The Connector name is JMS Pub/Sub Connector, and it needs the following parameters:

A Parser can be selected from the Parser... pane; once in this pane, choose a parser by clicking the bottom-right Inheritance button. If a Parser is specified, a JMS text message is parsed using this Parser. The Parser works with messages received by the JMS Connector and is used to generate a text message when the JMS Connector sends a message.

Examples

Go to the TDI_install_dir/examples/SoniqMQ directory of your SDI installation.

SDI v7.2 comes with an example of a JMS script driver for Sonic MQ. This sample demonstrates how the SDI JMS components (JMS Connector, System Queue) can use the SonicMQ server as a JMS provider.

In directory TDI_install_dir/examples/was_jms_ScriptDriver you will find an example that demonstrates how to use the WebSphere® Default JMS provider with the JMS Connector and the JMS Script Driver.

External System Configuration

The configuration of external JMS systems accessed by this Connector is not specific to this Connector; any such system must be configured as it would be for any other JMS client.

Troubleshooting

In systems where two or more IBM WebSphere® MQ servers residing on different platforms exchange messages, the transmitted messages may be received corrupted.

For example, consider a scenario with two MQ servers: one MQ server on a z/OS® platform sending messages to another MQ server on the Linux platform. If the messages received from the Linux MQ server are incorrect, this might be due to a character set conversion, since the default character sets of the z/OS and Linux platforms are different. Here are some possible solutions when dealing with such an issue (in descending order, from most to least preferable):

Note: The z/OS operating system is not supported in SDI v7.2.

  1. Encode the messages using the z/OS MQ Queue Manager's character set before sending them to the z/OS MQ server
  2. Configure the z/OS MQ Queue Manager to use the same character set as the expected messages
  3. Use the following workaround with the correct z/OS and Linux character sets:

    1. Map the message attribute with an advanced mapping.
    2. Use the following script:
      ret.value = new java.lang.String(conn.getString("message").getBytes(z/OS_charset), Linux_charset);
    3. Run the configuration.

    Note: This workaround is only applicable for the described scenario. In systems with more than two MQ servers a more complex decoding of the messages may be needed.


JMS Password Store Connector

In previous releases, this Connector was known as the MQe Password Store Connector. In SDI v7.2, it is called the JMS Password Store Connector; the name was changed because it can now make use of SDI's JMS Driver pluggable architecture. This means that the Connector can connect not only to the MQe Queue Manager but also to the IBM WebSphere® MQ Queue Manager out of the box. In addition, it can connect to any user-provided Queue Manager as long as you provide the JMS Driver for establishing the connection.

The JMS Password Store Connector supports Iterator mode only.

The JMS Password Store can use PKCS7 encryption to sign and encrypt the password change notification messages before it sends them to the JMS Password Store Connector.

Notes:

  1. For more information about installing and configuring the IBM Password Synchronization plug-ins, please see the SDI v7.2 Password Synchronization Plug-ins Guide.
  2. SDI v7.2 components can be deployed to take advantage of MQe Mini-Certificate authenticated access. To use these MQe features, it is necessary to download and install IBM WebSphere MQ Everyplace® 2.0.1.7 (or higher) and IBM WebSphere MQ Everyplace Server Support ES06. Use of certificate-authenticated access prevents an anonymous MQe client Queue Manager or application from submitting a change password request to the JMS Password Store Connector.
  3. IBM MQ Everyplace does not support IP Version 6 addressing; as a consequence, the JMS Password Store Connector can only reach MQe using traditional IPv4 addresses.
  4. IBM MQ Everyplace is deprecated in this version of SDI, and will be removed in a future version. A suitable lightweight message queue will be provided at that time.

The JMS Password Store Connector supports receiving messages from multiple password stores.

Connector Workflow

The following is the JMS Password Store Connector workflow:

  1. The JMS Password Store Connector requests a message from a predefined queue on either its local MQe Queue Manager (if using the MQe JMS Driver) or the external Queue Manager (if using any other than the MQe JMS Driver). The messages are retrieved using the JMS interface.
  2. The retrieved message is verified and/or decrypted (this step is optional).
  3. The message is parsed and an Entry object is created. The attributes of this Entry object represent the user ID, the password values and the type of password update.
  4. This newly created Entry object is passed to the SDI AssemblyLine.

On initialization, the JMS Password Store Connector performs the following actions:

On getting a password update message, the Connector can operate in one of three modes:

By default, the Connector automatically acknowledges every message it receives from the Queue Manager's JMS queue. However, you can change this behavior by deselecting the Auto Acknowledge parameter; in that case, you are responsible for message acknowledgements yourself, by calling the Connector's acknowledge() method at appropriate places in the AssemblyLine. Each time you call the Connector's acknowledge() method, you acknowledge all messages delivered so far by the Connector.

Force transfer of accumulated messages from the JMS Password Store with MQe

Accumulated messages in an MQe-based Password Store are not automatically transferred to SDI. To force transmission of such accumulated messages, use the Storage notification server(s) parameter of the JMS Password Store Connector and the "mqe.notify.port" parameter of the JMS Password Store.

Here is an explanation:

When the JMS Password Store is used with MQe, there are two MQe Queue Managers involved - one on the Password Store side and the other on the SDI side. On the Password Store side a remote MQe queue is configured, which points to a local MQe queue on the SDI side.

Messages are transferred only when both Queue Managers are operational. When SDI is not running, the Queue Manager of the Password Store accumulates arriving messages. Normally MQe does not automatically detect when a remote Queue Manager goes operational. So when the Directory Integrator goes back online, the accumulated messages are not transferred until a new message arrives in the Password Store.

There is a special feature which allows the JMS Password Store Connector to "pull" accumulated messages from the Password Store. This feature is configured by the Storage notification server(s) parameter of the Connector and the "mqe.notify.port" parameter of the JMS Password Store. When the Connector initializes, it sends a notification to the Password Store to start sending accumulated messages. Note that currently there is no "push" alternative, that is, the Password Store does not periodically check if SDI is running.

Message security

From a security point of view, the JMS Password Store Connector can receive messages in the following three modes:

Using PKCS7 encapsulation is optional and turned off by default. If you want to use PKCS7, configure both the JMS Password Store and the JMS Password Store Connector to use it. When PKCS7 is used, however, the older PKI encryption is not allowed, because PKCS7 itself provides encryption.

PKCS7 Encryption support

The JMS Password Store can use PKCS7 encryption to sign and encrypt the password change notification messages before it sends them to the JMS Password Store Connector. The use of PKCS7 encapsulation is optional; by default it is turned off. Both signing and encryption need certificates in order to function. Usage of PKCS7 is incompatible with the older PKI-based encryption mechanisms available in older versions of SDI.

With the PKCS7 option activated, the Connector verifies each received message: it compares the signer certificate with those in its trust store, and in case of a match verifies the message signature. If signature verification succeeds, the Connector accepts the message and decrypts it with the private key from its own certificate.

Note: If PKCS7 is to be used, then both the JMS Password Store Connector and the JMS Password Store (all of them, if multiple Stores are used) need to be set up to use PKCS7. If only one side is configured to use PKCS7, an error occurs.

The certificates are stored in a .jks file. The Connector has a .jks file and the JMS Password Store has another .jks file.

Signing of messages

Signing is used to verify that the sender of a message is who it claims to be.

In this particular scenario the JMS Password Store Connector needs to verify that the sender of a password change notification message is actually a trusted JMS Password Store.

It is possible to have several password stores sending messages to a single JMS Password Store Connector. In this case the Connector must be configured so that its .jks file contains the public keys of each of the trusted password stores.

Encryption of messages

Encryption is achieved by having the password store use the public key of the Connector to encrypt the message. Then the Connector uses its private key to decrypt the message.

Certificate management

A .jks file is required in order to be able to work with the PKCS7 functionality. It must contain not only the JMS Password Store Connector's certificate, but also the certificates of all the password stores that send messages to it.

The JMS Password Store Connector's certificate is a self-signed personal certificate, whose private key is used to decrypt the messages from the password store.

The password stores' certificates are trusted signer certificates, supplied from each JMS Password Store's .jks file. Every received message is then verified: the public key attached to it is compared with those available in the .jks file. If a match is found, the message signature is verified against the certificate and the message is then decrypted using the Connector's own private key.

Certificate structure

Certificates are stored in a .jks file. The Connector has a .jks file and the password store has another, corresponding, .jks file. The two .jks files need to contain the following so that PKCS7 can be used:

Create certificates

The primary tool used to handle .jks files is ikeyman.exe, which is available with every JVM distributed with SDI.

It can be found in TDI_install_dir\jvm\jre\bin, where TDI_install_dir is the installation directory of SDI. Below are the steps to follow in order to create the required keystore/truststore .jks files.

  1. Creating a .jks file

    To create a new .jks file, click Key Database File -> New and choose JKS together with the desired name and file path. You will be asked to enter a password. Remember it; it has to be provided later when setting up the components. You will need to create at least two such files: one for the MQePasswordStore and another for the JMS Password Store Connector.

  2. Creating a certificate

    To create a new certificate click on the drop-down menu above the list of certificates and choose Personal Certificates. Next, click on New Self-Signed... and enter the appropriate information.

  3. Transferring certificates

    The last step is to add the newly created self-signed certificates from the MQePasswordStore's .jks file to the JMS Password Store Connector's, and vice versa. For this purpose you have to extract the certificate as DER binary data: click Extract Certificate... and then choose Data Type -> DER Binary data. Save it to an appropriate location with the desired name and open the other .jks file. Click Add... and find the file with the extracted DER data. (Note: you must have selected the Signer Certificates list before adding the new certificate.)

Note: The implementation of PKCS7 in SDI v7.2 does not support certificates that are secured with an additional password except the one set for the .jks file.

Example usage

The following example demonstrates how the JMS Password Store Connector can be configured to work with the configured JMS Password Store, described in SDI v7.2 Password Synchronization Plug-ins Guide. Parameter PKCS7 is checked - meaning that the PKCS7 encryption/certification option is enabled.

The path to the .jks file, parameter PKCS7 Key Store File, is C:\dev\di611_061025a\certs\mqeconnpkcs7.jks. It must contain the Connector's self-signed certificate as well as the trusted signer certificate of the JMS Password Store (please refer to Create certificates for more information about creating the necessary certificates). In our case the parameter MQeConnector Certificate Alias is specified as "mqeconn".

For the needs of our example we need to create the two .jks files - 'mqepkcs7.jks' and 'mqeconnpkcs7.jks'. The steps are as follows:

  1. Open iKeyman.exe and click on Key Database File-> New...
  2. Select the desired location of the file. For the example described above, save the .jks file under C:\dev\di611_061025a\certs with the name mqeconnpkcs7.jks. After you press OK, you are asked to enter a password. To keep compatibility with the other data in the example, enter "secret" as the password.
  3. The next step is to create the JMS Password Store Connector's certificate itself. For this purpose select Personal Certificates from the drop-down menu and click New Self-Signed... The Key Label is the alias of the certificate in the .jks file. Set it to "mqeconn". The other options can be left with the default values.
  4. Extract the just created self-signed certificate "mqeconn" as DER data in the same folder: C:\dev\di611_061025a\certs. Choose a name that corresponds to the certificate itself (for example, mqeconn). This file will be used later to import the JMS Password Store Connector's certificate in the .jks file of a JMS Password Store.
  5. Repeat the steps from 1 to 4, but this time the location of the .jks file is: C:\Program Files\IBM\DiPlugins\mqepkcs7.jks and the password again: "secret". For Key Label of the JMS Password Store certificate set the value to "mqestore" and extract it as mqestore.der in the same directory: C:\Program Files\IBM\DiPlugins\.
  6. Both created .jks files must exchange their certificates. Since the mqepkcs7.jks file is open, first import the DER binary data that was extracted from mqeconnpkcs7.jks. Select Signer Certificates from the drop-down list and click Add... In the window that pops up, select "Binary DER data" as the Data type and then browse to the location C:\dev\di611_061025a\certs, where the .der file is saved. Select the mqeconn.der file and click "OK". A label for the imported certificate is required. To avoid confusion it is advisable to give it the same alias as in the other .jks file, in this case mqeconn, because this value must be given in the properties file of the JMS Password Store for the property "pkcs7MqeConnectorCertificateAlias".
  7. The same procedure must be performed on the mqeconnpkcs7.jks file (the key store holding the necessary certificates for the Connector). First open the .jks file by clicking Key Database File-> Open... and navigating to the exact location. If you followed the instructions precisely, the path to the required file should be C:\dev\di611_061025a\certs. You are prompted for the password again. Afterwards repeat step 6 with the new parameters: the location is C:\Program Files\IBM\DiPlugins and the certificate name is mqestore.der. For convenience, name it "mqestore" again. With this step the example is completed.

Schema

The JMS Password Store Connector constructs SDI Entry objects with the following fixed attribute structure (schema):

Configuration

JMS drivers

IBM WebSphere MQ Everyplace driver

In order to use MQe as the JMS provider for the JMS Password Store Connector, the JMS Server Type config parameter must be set to "IBMMQE", and the "systemqueue.jmsdriver.name" property in global.properties or solution.properties must be set to "com.ibm.di.systemqueue.driver.IBMMQe".

The IBM WebSphere® MQ Everyplace® driver has one parameter:

For example, if the JMS Password Store Connector needs to be configured to use MQe, then the following line must be put in global.properties or solution.properties:

systemqueue.jmsdriver.param.mqe.file.ini=TDI_install_folder/MQePWStore/pwstore_server.ini

This is the default location where the MQe Configuration utility creates the MQe initialization file.

Note: In order to be able to use MQe as the JMS provider for the JMS Password Store Connector an MQe Queue Manager needs to be created. This can be done using the MQe Configuration utility bundled with SDI; for more information, see "MQe Configuration Utility" in SDI v7.2 Installation and Administrator Guide.

IBM WebSphere MQ driver

In order to use IBM WebSphere MQ as the JMS provider for the JMS Password Store Connector the JMS Server Type config parameter must be set to "IBMMQ".

The IBM WebSphere MQ driver has the following parameters:

For specific configuration of the IBM WebSphere MQ server, please refer to its documentation.


JMX Connector

The JMX Connector uses the JMX 1.2 and JMX Remote API 1.0 specifications. It only uses standard JMX features.

The JMX Connector can listen to, and report, either local or remote JMX notifications, depending on how it is configured.

When the AssemblyLine starts, the JMX Connector is initialized. On initialization, the Connector determines whether it will report local or remote notifications based on the Connector parameters (the Connector cannot report both local and remote notifications in a single run). Then, the Connector gets either a local or a remote reference to the respective MBean Server and registers for the desired JMX notifications specified in a Connector parameter.

In the getNextEntry() method, the Connector blocks the AssemblyLine while waiting for notifications. When a notification is received, the getNextEntry() method of the Connector returns an Entry (which contains the notification details) to the AssemblyLine.

Notifications that are received between successive getNextEntry() calls are buffered, so that no notifications are lost. If there are buffered notifications when getNextEntry() is called, the Connector returns the first buffered notification immediately without blocking the AssemblyLine.
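The buffering behavior can be sketched with a minimal queue. This is a plain JavaScript illustration with hypothetical names; the real Connector blocks the AssemblyLine thread while waiting, which this sketch does not model:

```javascript
// Minimal sketch of the JMX Connector's notification buffering:
// notifications arriving between getNextEntry() calls are queued,
// and getNextEntry() drains the queue before it would ever block.
class NotificationBuffer {
  constructor() { this.queue = []; }

  // Called by the JMX notification listener callback.
  onNotification(notification) { this.queue.push(notification); }

  // Returns the oldest buffered notification immediately, or null
  // where the real Connector would block and wait for the next one.
  getNextEntry() {
    return this.queue.length > 0 ? this.queue.shift() : null;
  }
}

const buf = new NotificationBuffer();
buf.onNotification({ type: "jmx.attribute.change", seq: 1 });
buf.onNotification({ type: "jmx.attribute.change", seq: 2 });
console.log(buf.getNextEntry().seq); // 1  (oldest first, none lost)
console.log(buf.getNextEntry().seq); // 2
console.log(buf.getNextEntry());     // null (would block here)
```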

This Connector operates in Iterator mode only.

Connector Schema

The JMX Connector makes the following Attributes available (Input Attribute Map):

Configuration

The JMX Connector is capable of using the SSL protocol on the connection. If the remote JMX system accepts only SSL connections, the JMX Connector will automatically establish an SSL connection provided that a trust store is configured properly. This means that appropriate values have to be set for the javax.net.ssl.trustStore, javax.net.ssl.trustStorePassword and javax.net.ssl.trustStoreType properties in global.properties or solution.properties.
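For example, the trust store might be configured in global.properties or solution.properties as follows (the path, password and type values below are placeholders; the property names are the standard JSSE ones named above):

```properties
javax.net.ssl.trustStore=C:\certs\mytruststore.jks
javax.net.ssl.trustStorePassword=secret
javax.net.ssl.trustStoreType=jks
```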


JNDI Connector

The JNDI Connector provides access to a variety of JNDI services; it uses the javax.naming and javax.naming.directory packages to work with different directory services. To reach a specific system, you must install the JNDI driver for that system, for example com.sun.jndi.ldap.LdapCtxFactory for LDAP. The driver is typically distributed as one or more jar or zip files. Place these files where the Java™ runtime can reach them, for example, in the TDI_install_dir/lib/ext directory.

This Connector supports Delta Tagging at the Attribute level. This means that if a previous Connector in the AssemblyLine has provided Delta information at the Attribute level, the JNDI Connector can use it to make the changes needed in the target JNDI directory.

When using the JNDI Connector for querying an LDAP Server, a SizeLimitExceededException may occur if the number of entries satisfying the search criteria is greater than the maximum limit set by the LDAP Server. To work around this situation, either increase the LDAP Server's maximum result limit, or set the java.naming.batchsize provider parameter to some value smaller than the maximum limit of the server. For more information on the java.naming.batchsize parameter refer to: http://java.sun.com/products/jndi/tutorial/ldap/search/batch.html

Configuration

The Connector needs the following parameters:

Setting the Modify operation

The JNDI Connector has a way to set a modify operation value when the Connector is in Modify mode. We can also use the simple connector interface to directly add, remove or replace attribute values and attributes instead of setting the modify operation.

There is no Config Editor provided to set the modify operation. You must manually add the operation value to each attribute in the work entry of the JNDI connector in Modify mode using the following interface:

Calling the Modify Interface

Adding a value to an attribute

public void addAttributeValue(String moddn, String modattr, String modval)
		throws Exception

where:

For example, if you want to add "cn=bob" to the members attribute of "cn=mygroup" you use the method as such:

thisConnector.connector.addAttributeValue("cn=mygroup","members","cn=bob");

An Exception is thrown when the underlying modify operation fails.

Replacing the attribute value

public void replaceAttributeValue(String moddn, String modattr, String modval)
		throws Exception

where:

For example, if you want to replace the members attribute of "cn=mygroup" with "cn=bob" only, you use the method as such:

thisConnector.connector.replaceAttributeValue("cn=mygroup","members","cn=bob");

An Exception is thrown when the underlying modify operation fails.

Removing attribute

public void removeAttribute(String moddn, String modattr)
		throws Exception

where:

For example, if you want to remove the members attribute of "cn=mygroup" you use the method as such:

thisConnector.connector.removeAttribute("cn=mygroup","members");

An Exception is thrown when the underlying modify operation fails.

Removing a certain attribute value from an attribute

public void removeAttributeValue(String moddn, String modattr, String modval)
		throws Exception

where:

An Exception is thrown when the underlying modify operation fails.

modify operation

modify operation can be set per Modify request. It causes the modify operation for all attributes in the modify request entry to be set to the proper modify operation value. Property values and matching modify operation values:

Property value (String)    modify operation value
delete                     com.ibm.di.entry.Attribute.ATTRIBUTE_DELETE
add                        com.ibm.di.entry.Attribute.ATTRIBUTE_ADD
replace                    com.ibm.di.entry.Attribute.ATTRIBUTE_REPLACE

This property can be set at any time while the Connector is running by setting the property modOperation from the scripts:

conn.setProperty("modOperation","delete");

Note: This property does not affect the behavior of any of the interfaces defined above. However, it does overwrite the existing modify operation set by com.ibm.di.entry.Attribute.setOper(char operation)

Skip Lookup in Update and Delete mode

The JNDI Connector supports the Skip Lookup general option in Update or Delete mode. When it is selected, no search is performed prior to actual update and delete operations. It requires a name parameter (for example, $dn for LDAP) to be specified in order to operate properly.


LDAP Connector

The LDAP Connector provides access to a variety of LDAP-based systems. The Connector supports both LDAP version 2 and 3, and is layered on top of JNDI connectivity.

This Connector can be used in conjunction with the IBM Password Synchronization plug-ins. For more information about installing and configuring the IBM Password Synchronization plug-ins, please see the SDI v7.2 Password Synchronization Plug-ins Guide.

Note that, unlike with most Connectors, when inserting an object into an LDAP directory you must specify the object class attribute and the $dn attribute as well as other attributes. The following code example, if inserted in the Prolog, defines an objectClass attribute that we can use later.

// This variable used to set the object class attribute 
var objectClass = system.newAttribute ("objectclass"); 
objectClass.addValue ("top"); 
objectClass.addValue ("person"); 
objectClass.addValue ("inetorgperson");
objectClass.addValue ("organizationalPerson");

Then your LDAP Connectors can have an attribute called objectclass with the following assignment:

ret.value = objectClass

To see what kind of attributes the person class has, see http://java.sun.com/products/jndi/tutorial/ldap/schema/object.html

You see that you must supply an sn and cn attribute in your Update or Add Connector.

In the LDAP Connector, you also need the $dn attribute that corresponds to the distinguished name. When building $dn in the Attribute Map, assuming an attribute in the work object called iuid, you typically have code like the following fragment:

var tuid = work.getString("iuid"); 
ret.value = "uid=" + tuid + ",ou=people,o=example_name.com";
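When splicing raw values into a DN string like this, LDAP special characters in the value should be escaped. A small helper sketch, assuming RFC 4514 escaping rules (the escapeDnValue function is hypothetical, not part of the SDI scripting API):

```javascript
// Escape a value for use inside an LDAP distinguished name per
// RFC 4514: backslash-escape , + " \ < > ; plus a leading #
// and leading/trailing spaces.
function escapeDnValue(value) {
  let s = String(value).replace(/([,+"\\<>;])/g, "\\$1");
  if (s.startsWith("#")) s = "\\" + s;
  s = s.replace(/^ /, "\\ ").replace(/ $/, "\\ ");
  return s;
}

const tuid = "smith, john";
const dn = "uid=" + escapeDnValue(tuid) + ",ou=people,o=example_name.com";
console.log(dn); // uid=smith\, john,ou=people,o=example_name.com
```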

Notes:

  1. The two special attributes, $dn and objectclass, are usually not included in modifications in Update mode unless you want to move entries in addition to updating them.
  2. If you cannot connect to your directory, make sure the Use SSL flag in the Configuration is set according to what the directory expects.
  3. When doing a Lookup, we can use $dn as the Connector attribute, to look up using the distinguished name. Do not specify a Simple Link Criteria using both $dn and other attributes; in this case a simple lookup will be done with the DN using an Equals comparison.
  4. Certain servers have a size limit parameter to stop you from selecting all their data. This can be a nuisance as your Iterator only returns the first n entries. Some servers, for example, Netscape/iPlanet, enable you to exceed the size limit if you are authenticated as a manager.
  5. Those servers that return their whole directory in one go (for example, non-paged search) typically cause memory problems on the client side. See Handling memory problems in the LDAP Connector.
  6. When Connector Flags contains the value deleteEmptyStrings, then for each attribute, the LDAP Connector removes empty string values. This possibly leaves the attribute with no values (for example, empty value set). If an attribute has an empty value set then a modify operation deletes the attribute from the entry in the directory. An add operation never includes an empty attribute since this is not permitted. Otherwise, modify entry replaces the attribute values.
  7. When performing a rootdse search in Lookup mode using the "baselevel" search scope, you must add a Link Criteria specifying that the value of objectClass is * (objectClass equals *) and leave the Search Base field blank. In Iterator mode the same thing is achieved by leaving the Search Base blank and setting the Search Filter to "objectClass=*".
  8. When performing a normal search in Lookup mode using the "baselevel" search scope, you need to add a valid Link Criteria in accordance with the specified Search Base (for example, Search Base: cn=MyName,o=MyOrganization,c=MyCountry ; Link Criteria: sn equals MySurName).
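The deleteEmptyStrings behavior described in note 6 can be sketched as follows (a plain JavaScript illustration; the function name and return shape are hypothetical):

```javascript
// Sketch of the deleteEmptyStrings flag (note 6): empty string
// values are stripped, and an attribute left with no values is
// deleted on modify, while a non-empty attribute is replaced.
function planModify(name, values) {
  const kept = values.filter((v) => v !== "");
  return kept.length === 0
    ? { attribute: name, op: "delete" }
    : { attribute: name, op: "replace", values: kept };
}

console.log(planModify("description", ["", "primary site", ""]));
// replace with [ 'primary site' ] -- empty strings removed
console.log(planModify("description", ["", ""]));
// delete -- the empty value set removes the attribute entirely
```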

Detect and handle modrdn operation

Some changelog connectors (the IDS Changelog Connector and Sun Directory Change Detection Connector) can detect modrdn operations as the underlying LDAP servers' changelogs provide it. When this happens the Changelog Connector tags the Entry with the modify operation. The changelog attributes contain the "newrdn" attribute when the operation is modrdn. The LDAP Connector detects in its modEntry method if the "newrdn" attribute exists and if so, it replaces the rdn in the target $dn with the new value and does a context rename operation.

Note: LDAP configurations in Delta mode before SDI v7.0 treated the modrdn operation as generic and did not handle it at all. Now they handle it as modify. Also, such configurations will rename $dn if the "newrdn" attribute is provided.

Configuration

The Connector needs the following parameters; not all parameters are available or visible in all modes:

Virtual List View Control

In order to use the Virtual List View Control in SDI v7.2, the JNDI/LDAP Booster Pack from Sun Microsystems needs to be downloaded (http://java.sun.com/products/jndi/downloads/index.html). After downloading the Booster Pack the "ldapbp.jar" contained in the pack needs to be copied to the TDI_install_dir\jars folder before starting SDI. If the Virtual List View control is used, but the "ldapbp.jar" is unavailable, the AssemblyLine will fail with a corresponding error message.

Handling memory problems in the LDAP Connector

Some servers return the whole search result in one go (for example, a non-paged search), and this typically causes memory problems. It might look as if SDI leaks memory, but that is just because it is processing the entries from the server while the server continues to pour more and more entries into it.

LDAP servers such as Active Directory support the Paged Search extension that enables you to retrieve a page (the number of objects to return at a time), and this is the preferred way to handle big return sets (see the Page Size parameter for more info on this). We can always test if a server supports the paged search by clicking the button to the right of the Page Size parameter in the LDAP Connector Configuration tab.

If the Page Size parameter is not supported, you might have a problem, since there is little a client can do when being overwhelmed by the Server. Here are some workarounds:

Built-in rules for reconnect functionality

The Connector has implemented logic for reconnect processing as of SDI v6.1.1 fixpack 2. The Connector-specific built-in rules make it possible to perform a reconnect if javax.naming.ServiceUnavailableException is thrown, regardless of the message.

In versions of SDI before v7.1, the Connector had a loop that tried 10 times to establish the initial connection. This was to work around a problem with some servers, but it had the side effect that a failure to establish the initial connection could take a very long time if the server was down. From v7.1 onwards, this "loop" is moved into reconnect rules instead. This way you may specify if a reconnect attempt should take place, and also how many times it should be tried. For compatibility with earlier versions, initial re-connection is enabled.

Searching against an SDBM backend on z/OS

Note: The z/OS® operating system is not supported in SDI v7.2.

When using the LDAP Connector for searches against an SDBM backend on z/OS, you need to consider the following:

  1. When an LDAP Connector in Iterator mode is used to get a list of user profiles on a z/OS SDBM (LDAP) service, by default only the DN Attribute is returned. Other attributes are not returned even with a "*" attribute specified in the input map. This is a known limitation of the LDAP Connector (it was not originally intended for this). To retrieve all the attributes, construct the AssemblyLine such that you use the LDAP Connector first in Iterator mode to retrieve the DN, and subsequently use the LDAP Connector in Lookup mode with Link Criteria using the DN (that is, Link Criteria set to "$dn EQUAL $$dn").

    Note: Here a "presence" filter is used in the Iterator Connector's configuration (Config Tab-> Search Filter) to determine the scope of DNs to retrieve, and a subsequent equivalence filter is used in the Link Criteria in an LDAP Connector in Lookup mode.

  2. There are 3 user profiles for which the Iterator/Lookup flow does not work with an SDBM backend on z/OS:

    • $dn 'racfid=irrmulti,profiletype=user,sysplex=sysb'
    • $dn 'racfid=irrsitec,profiletype=user,sysplex=sysb'
    • $dn 'racfid=irrcerta,profiletype=user,sysplex=sysb'

    The lookup may get the following error on these user profiles: 'ICH30001I UNABLE TO LOCATE USER' or 'ICH31005I NO ENTRIES MEET SEARCH CRITERIA'. This happens because these users are not real users and therefore should not be the subject of searches. The SDBM backend will do a "listuser" under the covers that issues the request in uppercase and therefore, will not find the profiles. This is expected behavior.

LDAP Connector methods (API)

This section describes some of the methods available in the LDAP Connector. The exhaustive API reference is in the JavaDocs; they can be viewed by choosing Help -> Welcome screen, JavaDocs link in the Config Editor.

LDAP compare

public boolean compare(String compdn, String attname, String attvalue) 
		throws Exception

where

If the values are equal, true is returned; otherwise, false is returned. For example, if you wanted to determine if the userpassword attribute for cn=joe,o=ibm was equal to secret, use the method: compare("cn=joe,o=ibm", "userpassword", "secret").

Adding a value to an attribute

This method adds a given value to an attribute:

public void addAttributeValue(String moddn, String modattr, String modval) 
		throws Exception

where

For example, if you want to add cn=bob to the members attribute of cn=mygroup, use the method: addAttributeValue("cn=mygroup", "members", "cn=bob")

A java.lang.Exception is thrown when the underlying modify operation fails.

Replacing an attribute value

This method replaces a given value for an attribute:

public void replaceAttributeValue(String moddn, String modattr, String modval) 
		throws Exception

where

For example, if you want to replace the members attribute of cn=mygroup with only cn=bob, use the method: replaceAttributeValue("cn=mygroup", "members", "cn=bob")

A java.lang.Exception is thrown when the underlying modify operation fails.

Removing an attribute value

This method removes a given value from an attribute:

public void removeAttributeValue(String moddn, String modattr, String modval) 
		throws Exception

where

For example, if you want to remove the value cn=bob from the attribute members in the DN cn=mygroup, use the method: removeAttributeValue("cn=mygroup", "members", "cn=bob")

A java.lang.Exception is thrown when the underlying modify operation fails.

Removing all attribute values

This method removes all values for a given attribute:

public void removeAllAttributeValues(String moddn, String modattr) 
		throws Exception

where

For example, if you want to remove all values of the members attribute of cn=mygroup, use the method: removeAllAttributeValues("cn=mygroup", "members")

A java.lang.Exception is thrown when the underlying modify operation fails.

Flag in Config Editor for default action for attribute add or replace

In the LDAP Connector Config Editor there is a checkbox named Add Attributes (instead of replace). This option changes the default behavior of the LDAP Connector when it modifies an entry.

If this checkbox is checked, the LDAP Connector sets the constraint DirContext.ADD_ATTRIBUTE. If this checkbox is not checked, the LDAP Connector sets the constraint DirContext.REPLACE_ATTRIBUTE.

By setting DirContext.ADD_ATTRIBUTE constraint for the LDAP connection, you add new values to any attribute that goes through the AssemblyLine. This might mean that the same value gets repeatedly added to the entry if not used carefully. This might also result in an exception if the attribute in question is single-valued. If DirContext.REPLACE_ATTRIBUTE is set, the behavior is the same as the old LDAP Connector (default behavior), that is, all values for the attribute are replaced by whatever might be in the work entry.
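The effect of the two constraints on a multi-valued attribute such as a group member list can be sketched as follows (a plain JavaScript illustration of the semantics, not the DirContext API itself):

```javascript
// Sketch of DirContext.ADD_ATTRIBUTE vs DirContext.REPLACE_ATTRIBUTE
// semantics for a multi-valued attribute such as a group member list.
function applyModification(existingValues, workValues, addAttribute) {
  if (addAttribute) {
    // ADD_ATTRIBUTE: append the new values, keeping the old ones.
    // (Re-adding an existing value, or adding to a single-valued
    // attribute, would make a real server raise an error.)
    return existingValues.concat(workValues);
  }
  // REPLACE_ATTRIBUTE (default): the work entry values win outright.
  return workValues.slice();
}

const members = ["cn=alice", "cn=carol"];
console.log(applyModification(members, ["cn=bob"], true));
// [ 'cn=alice', 'cn=carol', 'cn=bob' ]  -- member added, others kept
console.log(applyModification(members, ["cn=bob"], false));
// [ 'cn=bob' ]                          -- whole member list replaced
```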

You typically want this flag set when you are handling groups. If you want to add a member (a value) to a group (an attribute), you do not want to delete all the other values.

The old behavior was to replace the attribute with the new value. This behavior remains the default.

Note: This property can be set at any time while the Connector is running by setting the property addAttribute from your scripts. Use something similar to the following command:

work.setProperty("addAttribute", true)

Note: This property does not affect the behavior of the addAttributeValue and replaceAttributeValue methods described previously.

Rebind

The LDAP Connector has a rebind() method which facilitates building advanced solutions like virtual directories and other solutions that map incoming authentication requests (using any of the supported protocols) to LDAP. See the JavaDocs for more information.

Skip Lookup in Update and Delete mode

The LDAP Connector supports the Skip Lookup general option in Update or Delete mode. When it is selected, no search is performed prior to actual update and delete operations. It requires a $dn parameter to be specified in order to operate properly.


LDAP Group Members Connector

Use the LDAP Group Members Connector to retrieve the members of LDAP groups. This component returns the user entries of group members, and not the group entries themselves. We can access information about the containing group and the parent/ancestor groups through properties.

Overview of LDAP Group Members Connector

The LDAP Group Members Connector supports Iterator mode, returning the user entries of LDAP group members. This Connector also supports nested groups. The LDAP Group Members Connector extends the LDAP Connector.

Handling large Active Directory groups

The LDAP Group Members Connector automatically handles large Active Directory groups.

How this works

This Connector provides the requisite attribute name syntax for Active Directory servers to iterate the group member list. When the Active Directory returns a fragment of the member list, the returned attribute name is encoded with the range of members returned. For example, if a group entry contains 700 members, and the first read returns the member;range=0-499 attribute, the range part indicates the portion of the membership list, which is being returned as attribute values. The LDAP Group Members Connector retains this information and processes the next batch of members when the current range is completed by requesting the member;range=500-999 attribute. When the last fragment of the member list is returned, the attribute name is encoded with "*" character for the end range, for example: member;range=500-*
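The range bookkeeping described above can be sketched as follows (plain JavaScript; the helper names are hypothetical):

```javascript
// Sketch of Active Directory ranged-retrieval bookkeeping: parse an
// attribute name such as "member;range=0-499" and compute the
// attribute name to request for the next fragment of the member list.
function parseRange(attrName) {
  const m = attrName.match(/^(.+);range=(\d+)-(\d+|\*)$/);
  if (!m) return null;
  return {
    attribute: m[1],
    start: Number(m[2]),
    end: m[3] === "*" ? null : Number(m[3]),
    last: m[3] === "*", // "*" marks the final fragment
  };
}

function nextRangeAttribute(attrName) {
  const r = parseRange(attrName);
  if (!r || r.last) return null; // nothing more to fetch
  const size = r.end - r.start + 1; // server's fragment size, e.g. 500
  return `${r.attribute};range=${r.end + 1}-${r.end + size}`;
}

console.log(nextRangeAttribute("member;range=0-499")); // member;range=500-999
console.log(nextRangeAttribute("member;range=500-*")); // null (final fragment)
```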

LDAP group entry

The LDAP Group Members Connector processes LDAP group entries based on the following criteria:

Data source schema

In Iterator mode, the LDAP Group Members Connector reads user entries from the connected LDAP server. The data source schema depends on the object class of the read entry and can be detected by connecting and reading an entry, for example, by using the Connect and Read Next buttons of an Input or Output Map. For detailed information about how to operate a Connector in Iterator mode, see the "Iterator mode" section in SDI v7.2 Users Guide.

The LDAP Group Members Connector also returns the following properties for each returned entry.
groupHierarchy
    Contains an array of GroupEntry (described in the following section), with index 0 being the containing group and the rest being its ancestors in the order of nesting.

group
    The SDI entry object of the group that contains the current member in its member list. Regardless of the objectClass of the group entry, it contains a member attribute with the distinguished name of the returned group member. This entry is tagged with delta operation codes to make it suitable for use in synchronization; for example, the LDAP Connector in Delta mode can be used to incrementally update the list of group members.

groupEntry
    The GroupEntry for the user which is currently being iterated.

The group entry object, which is returned by the LDAP Group Members Connector, has the following schema definition.

	public static class GroupEntry {
		Entry entry;
		String dn;
		Attribute groupMembers;
		int groupIndex;
		ArrayList<String> nestedGroups;
		
		public String getGroupDN();
		public Entry getGroupEntry();
		public Attribute getMembers();
		public boolean hasMoreMembers();
	}

For more information about LDAP v3 schema, see http://www.ietf.org/rfc/rfc2256.txt.

Configuration

The LDAP Group Members Connector is based on the LDAP Connector; therefore, the configuration parameters of the LDAP Connector also apply to the LDAP Group Members Connector. The configuration parameter that is specific to the LDAP Group Members Connector is described in this section.


LDAP Server Connector

The LDAP Server Connector accepts an LDAP connection request from an LDAP client on a well-known port set up in the configuration (usually 389). The LDAP Server Connector only operates in Server mode, and spawns a copy of itself to take care of any accepted connection until the connection is closed by the LDAP client.

This Connector can be used in conjunction with the IBM Password Synchronization plug-ins. For more information about installing and configuring the IBM Password Synchronization plug-ins, please see the SDI v7.2 Password Synchronization Plug-ins Guide.

Each LDAP message received on the connection drives one cycle of the LDAP Server Connector logic. The main thread returns to listening for similar LDAP requests from other LDAP clients. At this point, Attribute Mapping will take place, and the appropriate attributes like the LDAP Operation should be mapped into the work object.

The rest of the AssemblyLine will be executed, and when the cycle reaches the Response channel the return message is built from Attributes mapped out, and sent back to the client. If it was an LDAP search command, the user will call the add method to build the data structure that is to be sent back to the client. The LDAP Server Connector goes back to listening for the next LDAP command on the existing connection.

The value of the LDAP operation is provided in the LDAP.operation attribute in the LDAP Server Connector conn entry, which should be mapped into the work entry for further processing (along with any other required attributes). Legal values are SEARCH, BIND, UNBIND, COMPARE, ADD, DELETE, MODIFY, and MODIFYRDN. The LDAP message provides a number of attributes for the specified LDAP operation.

Scripting

The part of the AssemblyLine that follows the LDAP Server Connector must do work to determine the desired outcome of the LDAP message. The basic LDAP operations (SEARCH, BIND, UNBIND, COMPARE, ADD, DELETE, MODIFY, and MODIFYRDN) are provided as values in the LDAP Server Connector scripting environment to facilitate scripting, for example, if LDAP.operation equals BIND. The user code sends search result entries to the client by calling the add(entry) method in the LDAP Server Connector. The entry must be formatted with legal LDAP attribute names plus the special attribute $dn (the distinguished name of the entry).
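The branching on LDAP.operation described above can be sketched as follows. This is an illustrative stand-in, not the SDI API: the work object and the add callback here are plain JavaScript substitutes for the SDI work entry and the Connector's add(entry) method, and the numeric status values are standard LDAP result codes.

```javascript
// Illustrative stand-in for AssemblyLine logic that branches on the
// LDAP operation and returns search results. "work" and "add" are
// plain JavaScript substitutes for the SDI work entry and the LDAP
// Server Connector's add(entry) method.
function handleLdapMessage(work, add) {
  switch (work["LDAP.operation"]) {
    case "SEARCH":
      // Each result entry must use legal LDAP attribute names plus
      // the special $dn attribute (the entry's distinguished name).
      add({ "$dn": "cn=John Doe,ou=people,o=example", cn: "John Doe" });
      work["ldap.status"] = 0;              // 0 = success
      break;
    case "BIND":
      work["ldap.status"] = 0;              // accept the bind
      break;
    default:
      work["ldap.status"] = 53;             // 53 = unwillingToPerform
      work["ldap.errormessage"] = "Operation not supported";
  }
}
```

In a real AssemblyLine this logic would typically live in a Script component between the Connector's input and Response channel phases.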

Returning the LDAP message return values

The user-provided code in the AssemblyLine responds to each request by setting the ldap.status, ldap.matcheddn and ldap.errormessage entry attributes. ldap.matcheddn and ldap.errormessage are optional.

In the Response channel phase of the AssemblyLine, the LDAP Server Connector formats and returns some of the attributes of the work entry. These are:

Error handling

The LDAP Server Connector terminates the connection and records an error if the received message does not conform to the LDAP v3 format.

Note: The LDAP Server Connector does not perform any validation on the incoming attributes. Any operation or parameter value is therefore accepted.

Configuration

The Connector needs the following parameters:


Log Connector

The Log Connector is very different from other connectors as it has no notion of a source or target system. This connector was written exclusively to give you alternative access to the SDI logging features; see "Logging and debugging" in the SDI v7.2 Installation and Administrator Guide.

Introduction

The Log Connector enables you to use logging utilities in a simpler way, requiring less scripting. The connector can be inserted at any point in the AssemblyLine and enabled/disabled dynamically. Prior to the introduction of this connector you would have to add script code that invoked the AssemblyLine's log object. Using this log object would add log messages to all loggers associated with the AssemblyLine's log object. Similarly, adding a logger to the AssemblyLine (using the AssemblyLine logging configuration screen) would also merge SDI's internal logging to the log output channel causing the log to fill up with possibly unwanted log messages. The Log Connector gives you explicit control of all messages written to the log channel.

This Connector supports AddOnly mode only.

Schema

The schema for the Log Connector is flat with the following predefined attributes:
Attribute Description
message The message to be logged.
level Optional level of the logged message.
exception Optional java.lang.Exception

When exception is present, the level parameter is ignored and LogInterface.error(msg, exception) is called.

When both exception and level are absent, LogInterface.info(msg) is called.

Configuration

The connector configuration for the Log Connector will display the form associated with the selected logger component.

An alternative way to define loggers is to use the built-in logging feature and a LogConfigItem configuration object. This method bypasses Log4J's lookup in the log4j.properties file; see Creating additional Loggers for more information.

Logger configuration screen

Apache Log4J Loggers

The Apache Log4j logging utility is bundled with SDI. Log4j uses a properties file to configure loggers that are used by SDI. Obtaining a logger via the Log4J API requires a category name, which matches a logger definition in the log4j.properties file. By default, SDI configures Log4J to use solution dir/etc/log4j.properties; see SDI v7.2 Installation and Administrator Guide for example configurations.

Layouts

With most loggers we can specify the layout of the output log. You have a choice of the following layouts:

Patterns

Many loggers use a pattern to format the output string that goes to the log (Pattern.ConversionPattern). The format of this pattern is documented in the respective logging utility's documentation. Below is an incomplete listing of some of the more useful conversion characters that can be used in those patterns.

A pattern is a string that contains special constructs that are substituted by a computed value. Use a percentage sign (%) followed by one of the following characters to insert computed values:
Character Effect
m Used to output the log message from the caller.
c Used to output the category of the logging event. The category conversion specifier can optionally be followed by a precision specifier, which is a decimal constant in brackets.
d Used to output the date of the logging event. The date conversion specifier may be followed by a date format specifier enclosed between braces. For example, %d{HH:mm:ss,SSS} or %d{dd MMM yyyy HH:mm:ss,SSS}. If no date format specifier is given then ISO8601 format is assumed.
F Used to output the file name where the logging request was issued.
n The platform-specific end-of-line character(s).
p The priority of the logged event.
t The name of the Thread that generated the log event (for example, the AssemblyLine name).
% Outputs a single percent sign.
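How such a conversion pattern expands can be sketched with a small formatter for a subset of the characters above (%m, %p, %t, %n, %%). This is a hypothetical re-implementation for illustration, not the Log4J code.

```javascript
// Minimal sketch of conversion-pattern expansion for a subset of the
// characters listed above. Hypothetical illustration, not Log4J itself.
function formatPattern(pattern, event) {
  return pattern.replace(/%(.)/g, function (match, ch) {
    switch (ch) {
      case "m": return event.message;      // the log message
      case "p": return event.priority;     // priority (level) of the event
      case "t": return event.threadName;   // originating thread name
      case "n": return "\n";               // platform EOL (simplified)
      case "%": return "%";                // literal percent sign
      default:  return match;              // unknown character: leave as-is
    }
  });
}
```

For example, the pattern "%p [%t] %m%n" applied to an event with priority INFO, thread name AssemblyLines.Main and message "started" yields the line "INFO [AssemblyLines.Main] started" followed by a newline.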

Category based configuration

The category based configuration logger will delegate creating the logger to Log4J. Log4j will use the log4j.properties file to find a match for the category and return a logger as defined in the properties file.

ConsoleAppender

The console appender writes to the standard output/error streams. Parameters are:

CustomAppender

The custom appender is available when custom appenders have been defined in one or more java properties. Properties that start with custom.appender. are expected to have a value that specify an appender class that implements com.ibm.di.log.CustomAppenderInterface. Parameters are:

DailyRollingFileAppender

The daily rolling file appender rotates the log file every day. When the output file is rolled it is given a name consisting of the base name plus a date pattern string (for example, filename.yyyy-MM-dd). Parameters are:

FileAppender

The file appender writes messages to an output file. Parameters are:

NTEventLog

The NT event logger writes messages to the Windows NT event log. Parameters are:

FileRollerAppender

The file roller will rotate its logs every day using a sequence number of 1 through Number of backup files. Parameters are:

SystemLogAppender

The system log appender writes to log files found under system_logs/{ConfigId}/{AL,EH}_X (where X is the name of the AL/EH being run). Parameters are:

SyslogAppender

The syslog appender writes to a syslogd daemon. The syslogd daemon is the standard logging utility on most UNIX systems. Parameters are:

Java Util Loggers

These loggers are part of the standard Java VM. The layouts in java util loggers are termed formatters. SDI includes support for the following formatters:

Category based configuration

The category based configuration logger will delegate creating the logger to Java Util Logging. JUL will use its lib/logging.properties file to find a match for the category and return a logger as defined in the properties file.

FileHandler

The file appender writes messages to an output file. If the "limit" parameter sets an upper limit of the file's size, the log file is rotated when it reaches the maximum size and a new file is created to continue logging. Parameters are:

JLOG Loggers

JLOG loggers are bundled with SDI. The layouts in JLOG are termed formatters. SDI includes support for the following formatters:

Category based configuration

The category based configuration logger will delegate creating the logger to JLOG. JLOG will use its jlog.properties file to find a match for the category and return a logger as defined in the properties file.

FileHandler

The file appender writes messages to an output file. Parameters are:


Lotus Notes Connector

See Lotus Notes Connector, in the combined Lotus® Notes® section.


Mailbox Connector

The Mailbox Connector provides access to internet mailboxes (POP3 or IMAP). The Mailbox Connector can be used in AddOnly, Iterator, Lookup, Update and Delete modes. The Mailbox Connector uses predefined attribute names for the headers that are used most often. If you need more than this, use the mail.message property to retrieve the native message object.

On initialization, the Connector gets all available mail messages from the mailbox on the server and stores them into an internal Connector buffer. Later the Connector retrieves the messages one by one on each getNextEntry() call; that is, on each Iteration. When all the messages from the buffer have been retrieved, the parameter Poll Interval governs what happens next; see Configuration. This is different from earlier implementations of this Connector.

If the IMAP protocol is specified the Mailbox Connector registers for notifications for messages added and messages removed from the mailbox on the server. When a notification that a message has been added to the mailbox is received, the Connector adds this message to its internal buffer. If a notification that a message has been removed from the mailbox is received, the Connector removes this message from its internal buffer.

For all supported modes except AddOnly (Iterator, Update, Lookup, Delete), the Mailbox Connector iterates on all folders in the mailbox if no folder is specified in the configuration. Otherwise, there is the option to iterate on subfolders of the supplied folder. In both cases we can specify a comma-separated list of folders to be excluded when browsing the mailbox.

Notes:

  1. Only one connection per user ID is supported. If the user fails to disconnect when using the schema tab, and then runs the AssemblyLine, this results in a connection refused error.
  2. The Mailbox Connector does not support the Advanced Link Criteria (see "Advanced link criteria" in SDI v7.2 Users Guide).

Schema

Input Map

The Mailbox Connector uses the following predefined attributes and properties, which are available in the Input Map:

Output Map

The Mailbox Connector gets the following Attributes from the work Entry (Output Attribute Map):

Using the Connector

Iterator Mode

In this mode the Connector iterates through all messages from the specified folder (INBOX by default). Each message is translated into an entry and its attributes are made available for the next step of the flow.

Depending on the backend mail server you are connecting to, you might not be able to interact directly with the body of a message. This is because the server supports multi-body parts, as opposed to a single one. In this case all the body parts can be accessed as a multi-valued attribute (for example,

work.getAttribute("mail.bodyparts").getValue(N).getContent();

where N is the number of the message body, the number zero indicating the first message body). In case the server does not provide multi-body parts then the message body will be in the attribute mail.body.

If no folder is specified, then the Connector iterates through all mailbox folders. A parameter allows certain folders to be skipped when iterating; the names of these folders are entered in a text box, separated by commas. Another parameter, defined as a checkbox, indicates whether the Connector should also iterate through the subfolders of the specified folder. Note that when POP3 is chosen this option is not available, since the POP3 provider supplies a single folder - "INBOX". If the Mail Folder parameter is left empty, the Connector will iterate on the messages in the INBOX folder.

Lookup Mode

In this mode the retrieved Entry(s) are based on the LinkCriteria defined for the connector. In case no message is found, an exception is thrown. This mode is similar to Iterator mode, but here we cannot iterate through the messages in the mail folders unless you have LinkCriteria defined. The attributes mapped in the work entry will be filled in if a single message is found. If more than one entry is returned then the AssemblyLine execution will stop; we can work around this by providing logic for cases like this in the 'On Multiple Entries' Hook.

When the Mailbox Connector is used in Lookup mode the only searchable headers are:

Delete Mode

In this mode the entry(s) returned are based on the defined LinkCriteria. If one message is found then it is deleted. If no messages are found then either the 'On No Match' Hook is called (if defined) or an exception is thrown and execution stops. If more than one message is found then either the 'On Multiple Entries' Hook is called (if defined) or an exception is thrown.

When the Mailbox Connector is used in Delete mode the only searchable headers are:

AddOnly Mode

AddOnly mode is used for putting messages into a specified mailbox folder. For this purpose first configure the mail server, mail credentials, protocol type (only IMAP) and the folder in which the new messages will be delivered.

This mode can only be used with IMAP protocol since the POP3 protocol does not support appending of messages. For more information about the restrictions of the POP3 provider for the Java™ Mail API refer to: http://java.sun.com/products/javamail/javadocs/com/sun/mail/pop3/package-summary.html.

In case the folder defined in the configuration of the Connector does not exist, the parameter "createFolder" is taken into account: if it is checked and the supplied folder is not present, a new folder with that name is created. New messages are delivered to the Connector in an attribute called mail.addMessage; this attribute holds an Object of type javax.mail.Message, or an array of that type. The Connector connects to the mailbox and appends the message(s) to the specified folder.

Update Mode

In update mode the Mailbox Connector is able to make changes to the flags of a message in the specified mail folder. The supported flags are: Answered, Deleted, Draft, Recent and Seen. These parameters are passed to the Connector as Attributes in its output map. Flags and therefore the corresponding Attributes are of Boolean type. The Flags can be manipulated through the javax.mail.Message.setFlag(...) method.

You specify in the work entry which flag should be updated and the new value. In case the message store does not support the flag that you want to update, a message is logged containing the flag attribute for which the operation failed. Afterwards the connector continues with the updates of the other flags.

When the Mailbox Connector is used in Update mode the only searchable headers are:

Configuration


Memory Queue Connector (MemQueue)

The Memory Queue (MemQueue) Connector provides a connector-like functionality to read and write to the memory queue feature (aka. MemBufferQ). This is an alternative to writing script to access a memory queue and is an extension of the Memory Queue Function Component (function component).

The objects used to communicate between components are not persistent and are not capable of handling large return sets (for example, large data returned by an ldapsearch operation). In order to solve this problem, an internal threadsafe memory queue can be used as a communications data structure between AL components. It can contain embedded logic that triggers whenever the buffer is x% full/empty or data is available.

There can be multiple readers and writers for the same queue. Every writer has to obtain a lock before adding data, and has to release the lock before a reader can access the queue. Connectors in Iterator mode have a parameter that determines when the read lock is released - after a single read, on AL cycle end, or on Connector close.

This Connector supports AddOnly and Iterator modes only.

Notes:

  1. Because of the non-persistent nature of this Connector, we recommend that you use the System Queue Connector instead, because that Connector relies on the underlying Java™ Messaging Service (JMS) functionality with persistent object storage.
  2. When the Memory Queue Connector is in Iterator mode it reads from the configured queue. If that queue does not exist it is created. If you don't want this behavior, you need to set the system property tdi.memq.create.queue.default=false; in this case SDI will behave like previous versions, which implies that when the queue does not exist, an exception is thrown in Iterator mode.

This Connector can also be used in connection with MemQueue pipes set up from JavaScript, although it is important to note that a MemQueue pipe created by the MemQueue Connector will be terminated when the Connector closes.

The Memory queue buffer is a FIFO type of data structure, where adding and reading can occur simultaneously. It works as a pipe where additions happen at one end and reading happens at the other end and reading removes the data from queue.

The Memory queue buffer provides overflow storage using the System Store when a threshold value is reached, which is a function of the runtime memory available.

Memory queue components

High level workflow

The Memory queue Buffer is a queue of pages containing objects. When a particular threshold (the "watermark") is reached, a new thread is created that starts writing to another buffer of pages; when a page is full, it transfers the page either to the main queue or to the System Store. When a page is read from the main queue, one page is transferred from the System Store to the main queue and deleted from the System Store.
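The watermark-and-paging workflow described above can be sketched as a simplified, single-threaded model. This is illustrative only: the real component uses a separate writer thread and the System Store for overflow, whereas here the overflow store is just an in-memory array and all names are hypothetical.

```javascript
// Simplified single-threaded model of the paged memory queue described
// above. "overflowStore" stands in for the System Store; the real
// implementation uses a separate thread. All names are hypothetical.
function PagedQueue(watermark, pageSize) {
  this.main = [];            // main queue of pages
  this.overflowStore = [];   // stand-in for System Store pages
  this.current = [];         // page currently being filled
  this.watermark = watermark;
  this.pageSize = pageSize;
}
PagedQueue.prototype.write = function (obj) {
  this.current.push(obj);
  if (this.current.length === this.pageSize) {
    // A full page goes to the main queue until the watermark is
    // reached, and to the overflow store after that.
    if (this.main.length < this.watermark) this.main.push(this.current);
    else this.overflowStore.push(this.current);
    this.current = [];
  }
};
PagedQueue.prototype.readPage = function () {
  var page = this.main.shift();
  // When a page leaves the main queue, pull one page back in from the
  // overflow store (deleting it there), as the workflow describes.
  if (this.overflowStore.length > 0) this.main.push(this.overflowStore.shift());
  return page;
};
```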

Configuration

Accessing the Memory Queue programmatically

The Memory Queue can be accessed directly from JavaScript, not only through the Connector.

  1. To create new pipe - There are two methods for this.

    1. Paging disabled - newPipe(String instName,String pipeName,int watermark) // Does not require any DB related entries
    2. Paging enabled - newPipe(String instName,String pipeName,int watermark,int pagesize) // Requires DB initialization

    An example script with paging enabled:

    var memQ = system.newPipe("inst", "Q1", 1000, 10);
    memQ.initDB(dbName, jdbcLogin, jdbcPassword, tableName);  // Required to initialize the DB
    memQ.write(conn);
  2. getPipe(String instName,String pipeName)
  3. purgeQueue()

    An example script would look something like this:

    var q =system.getPipe("Inst1","Q1") ;
    q.purgeQueue();
  4. deletePipe(String pipeName)

    Example:

    var q =system.getPipe("Inst1","Q1");
    q.deletePipe();

The following is an example script to read from the Memory Queue using API calls:

var memQ=system.getPipe( "inst","Q1") ; 
var size=memQ.size();

for (var count = 0; count < size; count++) {
	main.logmsg(memQ.read());
}


Memory Stream Connector

The Memory Stream Connector can read from or write to any Java™ stream, but is most often used to write into memory, where the formatted data can be retrieved later. The allocated buffer is retrieved/accessed as needed.

Note: The memory stream is confined to the local JVM, so it's not possible to interchange data with a task running in another JVM; be it on the same machine or a different one.

The Connector can only operate in Iterator mode, AddOnly mode, or Passive state. The behavior of the Connector depends on the way it has been initialized.

Note: Do not reinitialize unless you want to start reading from or writing to another data stream. If you want to use the Connector Interface object, see The Connector Interface object. This Connector has an additional method, the getDataBuffer() method.

Configuration


Properties Connector

SDI solutions are packaged into one or more SDI configuration files (XML format) that contain the settings for end point connections, data flow and a host of other features. Although a configuration file can hold everything you need to create a solution, you often may need to use data sources external to the configuration file to modify the behavior of the configuration, such as standard Java™ properties, SDI external properties and SDI System Store properties.

Property stores are used to hold configuration information in the format of key=value. The Properties connector is used to work with such stores, performing operations of reading/writing of properties and encryption/decryption of certain property values. The familiar global.properties and solution.properties are examples of such property stores.

Individual property stores can be encrypted with individual Certificates, by means of the Property Key and Encrypt parameters. This allows a certificate that is different from the server certificate to be used for encrypting and decrypting both properties in the file, and the entire file if wanted. This may be useful when multiple developers are working on a project, and credentials cannot be shared.

This Connector uses an internal memory buffer to hold all properties in a properties file. The Connector can also be used to access the JavaVM system properties object.

The Connector supports Iterator, AddOnly, Update, Lookup and Delete mode.

Configuration

The Properties Connector uses the following parameters:

Using the Connector

The Property Connector is used to connect to standard .property files, Java™ Properties or the System Store User Property Store. It provides encryption/decryption of the stores being read/written.

The typical behavior of this connector is to connect to a .property file specified by its URL. This can be achieved by setting the collection parameter of the connector, and constitutes "User-Defined" properties.

However, we can also access the system-defined property stores: JVM ("java") properties, User Property Store, global.properties and solution.properties. In order to do this, you need to set the collectionType property of the connector. It is not exposed in the configuration screen but can be set with the following script (for example, put in the Prolog -> Before Initialize hook):

Note: These property collections are those that show up in the "Properties" folder in the Config Browser for a given configuration file. These can be modified using the Config Editor, and this may make it unnecessary to use this Connector to access or alter any of the properties in these property collections at runtime.

All of these stores are shared within the same JavaVM, which means that an AssemblyLine writing to the System Store will affect all other AssemblyLines in the JavaVM reading from the same store.

All of the properties in the global and solution stores are propagated to the Java property store by the SDI server at startup, in that order. The point to make here is that the global and solution stores can now be discretely addressed and modifications to these files, if permitted, can also be made. Each property store is given a name that is unique within the confines of a configuration instance. If an SDI server runs multiple configuration instances, they will share the Java, global, solution and all System-Store property stores (for example, system), but all others are local to the configuration instance.

Note: When using the Connector to deal with external properties, the Auto-Rewrite parameter should be set to true if you want to automatically write back encrypted properties without calling an explicit "commit".

The link criteria for the Properties Connector can only be a single criteria in the form 'key equals keyvalue', where keyvalue is the key value to be found. More advanced searches are not possible.
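That single-criterion lookup amounts to a plain key match against the property map, which can be sketched like this (an illustrative model with a hypothetical function name, not the Connector's implementation):

```javascript
// Illustrative model of the Properties Connector lookup: a single
// "key equals keyvalue" criterion against a property map. The function
// name is hypothetical; this is not the Connector's implementation.
function lookupProperty(props, keyvalue) {
  if (Object.prototype.hasOwnProperty.call(props, keyvalue)) {
    return { key: keyvalue, value: props[keyvalue] };
  }
  return null;  // no match - comparable to the On No Match situation
}
```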

Properties File Format

# comment
' comment
// comment
!include filename
!merge filename

[{protect}-]keyword <colon | equals> [{encr}]value

Notes:

  1. The optional {protect}- prefix indicates that the value either is or should be encrypted. When the value starts with the character sequence {encr} it means that the value is already encrypted.
  2. "!include" reads an external file/URL with properties which are written unconditionally to the current property map.
  3. "!merge" reads an external file/URL with properties which are written to the current property map if the property does not already exist (non-destructive write).
  4. SDI currently uses the equal sign "=" or colon ":" as the separator in key/value pairs property files, whichever is first. Using equal signs or colons in property names and property values is therefore not supported. The property file key/value separator in SDI Version 6.0 and earlier was only the ":" character; therefore, property files migrated from V6.0 and earlier may require editing.
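The line forms above, including the {protect}- prefix and the "= or :, whichever is first" separator rule from note 4, can be sketched with a small line parser. This is an illustrative sketch of the basic forms only, not SDI's actual parser.

```javascript
// Illustrative sketch of parsing one line of the properties format:
// comment detection, the optional {protect}- prefix, and the
// "= or :, whichever is first" separator rule. Not SDI's actual code.
function parsePropertyLine(line) {
  var t = line.trim();
  if (t === "" || t[0] === "#" || t[0] === "'" || t.indexOf("//") === 0) {
    return null;                                // blank line or comment
  }
  var protect = t.indexOf("{protect}-") === 0;  // value is/should be encrypted
  if (protect) t = t.substring("{protect}-".length);
  var eq = t.indexOf("="), colon = t.indexOf(":");
  // The separator is "=" or ":", whichever comes first in the line.
  var sep = (eq === -1) ? colon : (colon === -1 ? eq : Math.min(eq, colon));
  if (sep === -1) throw new Error("Malformed property line: " + line);
  var value = t.substring(sep + 1).trim();
  return {
    key: t.substring(0, sep).trim(),
    protect: protect,
    encrypted: value.indexOf("{encr}") === 0,   // value already encrypted
    value: value
  };
}
```

Note that, per the separator rule, a line such as a:b=c parses as key "a" with value "b=c", which is why equal signs and colons cannot appear in property names.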

Syntax checking is used on properties files that are read in directly by the Properties Connector, the SDI Server and the CE. If any nonblank line does not adhere to the properties file format, an Exception will be thrown.

Headers in the Property file

The first one or two lines in a Property File will be lines beginning with this String

##{PropertiesConnector}

This signifies that this line is a header that is rewritten every time the Property File is written.

The first line will look like this

##{PropertiesConnector} savedBy=user, saveDate=date

where user is the name of the user that saved the file and date is the date the file was saved.

If the Property Key parameter was specified when writing the file, the next line will look like this

##{PropertiesConnector} encryptionKey=keyAlias

where keyAlias is the value of the Property Key parameter.
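Recognizing and splitting such a header line can be sketched as follows (illustrative only; not SDI's actual code):

```javascript
// Illustrative sketch: recognize a Properties Connector header line
// and split its "name=value" fields. Not SDI's actual code.
var HEADER = "##{PropertiesConnector}";
function parseHeader(line) {
  if (line.indexOf(HEADER) !== 0) return null;  // not a header line
  var fields = {};
  line.substring(HEADER.length).split(",").forEach(function (part) {
    var kv = part.trim().split("=");
    if (kv.length === 2) fields[kv[0]] = kv[1];  // e.g. savedBy=user
  });
  return fields;
}
```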


RAC Connector

Introduction

Note: This connector is deprecated and will be removed in a future version of SDI.

"RAC" stands for Remote Agent Controller; however, the current name for this technology is Agent Controller.

The Agent Controller is a server that enables client applications to interact with agents under its domain: http://help.eclipse.org/helios/index.jsp?topic=%2Forg.eclipse.tptp.platform.agentcontroller.doc.user%2Fconcepts%2Fac%2Fc_ac_ovr.html

A Generic Log Adapter (GLA) transforms proprietary log and trace data to the Common Base Event format (http://www.ibm.com/developerworks/library/specification/ws-cbe/). The rationale for a Generic Log Adapter is that reading log files is messy and making parsers for all types of logs is not scalable and one tends to customize anyway. A GLA can act as an agent of an Agent Controller so that clients can monitor remote application logs.

More information about Agent Controller and Generic Log Adapter can be found on http://www.eclipse.org/tptp/home/documents/index.php.

The RAC Connector can read data from and write data to RAC:

Configuration

The Connector's title, shown in the Configuration panel is "RAC Connector". Parameters are:

Using the Connector

AddOnly Mode

Post-install Configuration for AddOnly Mode

The AddOnly mode of the RAC Connector requires that the binaries of the Agent Controller (.dll, .so) are available to the dynamic library loader of the operating system. The preferred way to achieve this is to include the binaries folder of the Agent Controller in the PATH environment variable on Windows platforms, and in the LD_LIBRARY_PATH environment variable on Linux platforms. This can be done either globally or just for the process of the SDI Server. For example:

When operating in AddOnly mode, the first RAC Connector on the SDI Server registers a Logging Agent with the local Agent Controller.

All Common Base Event objects received from the AssemblyLine, are serialized as XML and written to the Logging Agent. The Logging Agent stays operational as long as the master process of the SDI server is alive. During its lifetime it can be monitored by clients even if the Connector which registered it has already closed. When the SDI server stops (or crashes), however, the Agent Controller (RAC) terminates the SDI Logging Agent's registration.

The Connector will wait a specified amount of time for a monitoring client to arrive before starting to write data to the Logging Agent. In particular, it can wait forever. This is specified by the Wait to be monitored Connector parameter. When a client starts monitoring the agent, the agent starts transferring data to the Agent Controller. The Agent Controller then sends the data to the client.

Waiting happens before each Connector write attempt.

If the waiting time expires and there is still no monitoring client, the Connector throws an Exception. However, if a client starts monitoring the agent while the Connector is waiting, the waiting is interrupted and the agent starts transferring data to the Agent Controller.

Depending on the Wait to be monitored Connector parameter value the Connector could potentially wait indefinitely for a client to start monitoring the agent. This would cause the entire AssemblyLine to block indefinitely. Precisely for this reason the following Connector method is available to you:

public boolean isLogging();

This method returns true if there is a client monitoring/listening for data from this Connector and false otherwise. This method is accessible through JavaScript and can be invoked on the Connector object (that is, thisConnector.isLogging()).

We can use this method in order to detect whether the Connector will block when the AssemblyLine execution reaches the Connector. If blocking is not desirable, but losing data is unacceptable, then you could implement a solution which temporarily stores the data into a queue (possibly the SDI Memory Queue) when the isLogging() method returns false.

Iterator Mode

In Iterator mode the RAC Connector acts as a client of a remote Agent Controller. It connects to the Agent Controller to obtain a handle to the Logging Agent, whose name is specified in the Connector's configuration. After that the Connector starts monitoring the Logging Agent. During the monitoring, the Connector receives data produced by the Logging Agent.

Data reception is handled asynchronously by the Agent Controller client library and queued there. The Connector is notified when data reception occurs, and when the Connector reads from the queue a buffer is received with the incoming binary data. The queue is blocking, so the Connector will wait if no data is available and the data processor will wait if there is no free space in the queue.

The received binary data contains a CommonBaseEvent object serialized as XML in UTF-8 encoding. In addition, the CommonBaseEvent is decoded from the buffer and made available to the Connector in the Input Map.

If there is no active agent with the specified name when the Connector contacts the Agent Controller, the Connector waits until such an agent is registered.

If at some point the agent gets deregistered (while the Connector is listening for events), the Connector will wait for another agent with the same name to appear. Essentially the Connector never stops unless its connection to the Agent Controller fails.

The Connector exposes a method, which provides access to the Common Base Event object obtained by the Connector on the current AssemblyLine iteration (the last event, processed by the 'getNextEntry' method of the Connector):

public CommonBaseEvent getCurrentCbeObject();

Schema

The connector internally uses the CBE Function Component, and uses that particular FC's schema.

See also

Agent Controller: overview, architecture, administration and configuration
TPTP Data Collection Framework: how to develop agents and clients using Java/C++
Monitoring an application with logging agents
Log and Trace Analyzer, Generic Log Adapter Connector


RDBMS Change Detection Connector

The RDBMS Change Detection Connector enables SDI to detect when changes have occurred in specific RDBMS tables. Currently, setup scenarios are provided for tables in Oracle, DB2®, MS SQL, Informix® and Sybase databases.

RDBMSes have no common mechanism for informing the outside world of changes taking place in a given database table. To address this shortcoming, SDI assumes that some RDBMS mechanism (such as a trigger or stored procedure) maintains a separate change table containing one record per modified record in the target table. Sequence numbers are also maintained by the same mechanism.

Similar to an LDAP Change Detection Connector, the RDBMS Change Detection Connector reads a change table structured in a specific format that enables the Connector to propagate changes to other systems. The format is the same as that used by IBM DB2 Information Integrator (version 8), giving SDI users the option of using DB2II to create such tables, or of creating them in some other manner. The RDBMS Change Detection Connector keeps track of a sequence number so that it reports only changes made since the last iteration through the change table.

The RDBMS Change Detection Connector uses JDBC to connect to a specific RDBMS table. See the JDBC Connector for more information about JDBC driver issues.

The RDBMS Change Detection Connector only operates in Iterator mode.

This connector supports Delta Tagging at the Entry level only.

The RDBMS Change Detection Connector reads specific fields to determine new changes in the change table (see Change table format). It reads the next change table record, or discovers the first one. If it finds no data in the change table, it checks whether it has exceeded the maximum wait time; if so, it returns null to signal the end of the iteration. If it finds no data but has not exceeded the maximum wait time, it waits a specific number of seconds (Poll Interval) and then reads the next change table record.
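
That polling loop can be modeled as follows (a simplified sketch; readNextChange stands in for the change-table read, and where the real Connector sleeps for Poll Interval seconds this model only counts the elapsed time):

```javascript
// Simplified model of the Connector's iteration loop: read the next
// change record, or poll until either data appears or the maximum
// wait time is exceeded (returning null ends the iteration).
function nextEntry(readNextChange, maxWait, pollInterval) {
    var waited = 0;
    while (true) {
        var record = readNextChange();
        if (record !== null) {
            return record;        // deliver the change record
        }
        if (waited >= maxWait) {
            return null;          // signal end of iteration
        }
        waited += pollInterval;   // sleep(pollInterval) in the real Connector
    }
}
```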

When the Connector finds data in the change table, it increments and updates the nextchangelog number in the User Property Store (an area of the System Store tailored for this type of persistent information).

For each Entry returned, control information (counters, operation, time/date) is moved into Entry properties. All non-control information fields in the change table are copied as-is to the Entry as attributes. The Entry object's operation (as returned by getOperation) is set to the corresponding changelog operation (Add, Delete or Modify).
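
Downstream AssemblyLine logic can then branch on the reported operation. For example (a sketch only; the stub Entry below mimics just the getOperation() call, and the operation strings are placeholders for illustration):

```javascript
// Stub Entry carrying a changelog operation, mimicking getOperation().
var entry = {
    operation: "delete",
    getOperation: function () { return this.operation; }
};

// Decide what to do in the target system for each change type.
function actionFor(e) {
    switch (e.getOperation()) {
        case "add":    return "insert into target";
        case "delete": return "remove from target";
        case "modify": return "update target";
        default:       return "ignore";
    }
}
```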

In principle this Connector can handle secure connections using the SSL protocol, but driver-specific configuration steps may be required to set up SSL support. Refer to the manufacturer's driver documentation for details.

Configuration

The Connector needs the following parameters:

Change table format

This example change table captures the changes from a table containing the fields NAME and EMAIL. Elements in bold are common to all change tables. The syntax of this example is for Oracle.

IBMSNAP_COMMITSEQ is used as the changelog number.
IBMSNAP_OPERATION takes one of the values I (Insert), U (Update) or D (Delete).
CREATE TABLE "SYSTEM"."CCDCHANGELOG"
(
IBMSNAP_COMMITSEQ   RAW(10)   NOT NULL,
IBMSNAP_INTENTSEQ   RAW(10)   NOT NULL,
IBMSNAP_OPERATION   CHAR(1)   NOT NULL,
IBMSNAP_LOGMARKER   DATE      NOT NULL,
NAME   VARCHAR2 ( 80 )  NOT NULL,
EMAIL   VARCHAR2 ( 80 ) 
);

The RDBMS Change Detection Connector does not work if the ibmsnap_commitseq column name used internally by the connector does not exactly match the actual column name in the database. This is only an issue when case-sensitivity is turned on for data objects in the database the RDBMS Change Detection Connector is iterating on.

To handle this, the column name is externalized as a connector configuration parameter, giving the DBA an easy way to set ibmsnap_commitseq with the same case as used in the database table. However, this parameter is not visible in the connector's configuration tab; you must set it manually in the Before Initialize Hook of the RDBMS Change Detection Connector. This enables multiple RDBMS Change Detection Connectors to each have their own copy of the column name value for the change table they iterate on. For example,

myConn.connector.setParam("rdbms.chlog.col","IBMSNAP_COMMITSEQ");

sets the name of the ibmsnap_commitseq column to the literal value IBMSNAP_COMMITSEQ. Otherwise, the default is lowercase.

Create change tables in DB2

The following example creates triggers in a DB2® database to maintain the change table as described previously:

connect to your_db

drop table email
drop table ccdemail

create table email ( \
	name varchar(80), \
	email varchar(80) \
)

create table ccdemail ( \
	ibmsnap_commitseq integer, \
	ibmsnap_intentseq integer, \
	ibmsnap_logmarker date, \
	ibmsnap_operation char, \
	name varchar(80), \
	email varchar(80) \
)

drop sequence ccdemail_seq
create sequence ccdemail_seq

create trigger t_email_ins after insert on email referencing new as n \
	for each row mode db2sql \
		 INSERT INTO ccdemail VALUES (nextval for ccdemail_seq, 0, 
			CURRENT_DATE, 'I', n.name, n.email )

create trigger t_email_del after delete on email referencing old as n \
	for each row mode db2sql \
		 INSERT INTO ccdemail VALUES (nextval for ccdemail_seq, 0, 
			CURRENT_DATE, 'D', n.name, n.email )

create trigger t_email_upd after update on email referencing new as n \
	for each row mode db2sql \
		 INSERT INTO ccdemail VALUES (nextval for ccdemail_seq, 0, 
			CURRENT_DATE, 'U', n.name, n.email )
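
To smoke-test the triggers above, insert a row into email and inspect ccdemail (statement syntax follows the DB2 example above; your_db and the table names are the placeholders from that script):

```sql
-- exercise the INSERT trigger, then look at the captured change record
insert into email values ('John Doe', 'jdoe@example.com')
select * from ccdemail
```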

Create change tables in Oracle

Given that your username is "ORAID", this example change table will capture the changes from a table containing the fields NAME and EMAIL. Boldfaced elements are common to all change tables; they are extra control information that ends up as Entry properties.

-- Create source email table in Oracle.
-- This will be the table that the RDBMS Change Detection Connector will detect changes on.
CREATE TABLE ORAID.EMAIL
(
 NAME VARCHAR2(80),
 EMAIL VARCHAR2(80)
); 
-- Sequence generators used for Intentseq and commitseq
CREATE SEQUENCE ORAID.SGENERATOR001
MINVALUE 100 INCREMENT BY 1 ORDER;

CREATE SEQUENCE ORAID.SGENERATOR002
MINVALUE 100 INCREMENT BY 1 ORDER;

-- create change table and index for email table
CREATE TABLE ORAID.CCDEMAIL
(
 IBMSNAP_COMMITSEQ   RAW(10)   NULL,
 IBMSNAP_INTENTSEQ   RAW(10)   NOT NULL,
 IBMSNAP_OPERATION   CHAR(1)   NOT NULL,
 IBMSNAP_LOGMARKER   DATE      NOT NULL,
 NAME VARCHAR2( 80 ),
EMAIL VARCHAR2( 80 )
);

CREATE UNIQUE INDEX ORAID.IXCCDEMAIL ON ORAID.CCDEMAIL
(
IBMSNAP_INTENTSEQ
);

-- create TRIGGER to capture INSERTs into email
CREATE TRIGGER  ORAID.EMAIL_INS_TRIG
AFTER INSERT ON ORAID.EMAIL
FOR EACH ROW BEGIN INSERT INTO ORAID.CCDEMAIL
( NAME,
  EMAIL,
  IBMSNAP_COMMITSEQ,
  IBMSNAP_INTENTSEQ,
  IBMSNAP_OPERATION,
  IBMSNAP_LOGMARKER )
 VALUES (
  :NEW.NAME,
  :NEW.EMAIL,
  LPAD(TO_CHAR(ORAID.SGENERATOR001.NEXTVAL),20,'0'),
  LPAD(TO_CHAR(ORAID.SGENERATOR002.NEXTVAL),20,'0'),
  'I',
  SYSDATE);END;


-- create TRIGGER to capture DELETE ops on email
CREATE TRIGGER  ORAID.EMAIL_DEL_TRIG
AFTER DELETE ON ORAID.EMAIL
FOR EACH ROW BEGIN INSERT INTO ORAID.CCDEMAIL
( NAME,
  EMAIL,
  IBMSNAP_COMMITSEQ,
  IBMSNAP_INTENTSEQ,
  IBMSNAP_OPERATION,
  IBMSNAP_LOGMARKER)
 VALUES 
( :OLD.NAME,
  :OLD.EMAIL,
  LPAD(TO_CHAR(ORAID.SGENERATOR001.NEXTVAL),20,'0'),
  LPAD(TO_CHAR(ORAID.SGENERATOR002.NEXTVAL),20,'0'),
  'D',
  SYSDATE);END;


-- create TRIGGER to capture UPDATEs on email
CREATE TRIGGER  ORAID.EMAIL_UPD_TRIG
AFTER UPDATE ON ORAID.EMAIL
FOR EACH ROW BEGIN INSERT INTO ORAID.CCDEMAIL
( NAME,
  EMAIL,
  IBMSNAP_COMMITSEQ,
  IBMSNAP_INTENTSEQ,
  IBMSNAP_OPERATION,
  IBMSNAP_LOGMARKER )
 VALUES (
  :NEW.NAME,
  :NEW.EMAIL,
  LPAD(TO_CHAR(ORAID.SGENERATOR001.NEXTVAL),20,'0'),
  LPAD(TO_CHAR(ORAID.SGENERATOR002.NEXTVAL),20,'0'),
  'U',
  SYSDATE);END;

Create change table and triggers in MS SQL

-- Source table msid.email. 
-- This will be the table that the RDBMS Change Detection Connector will detect changes on.
CREATE TABLE msid.email
(
 NAME   VARCHAR (80),
 EMAIL   VARCHAR (80)
 );

-- CCD table to capture changes. The RDBMS Change Detection Connector uses the CCD table to capture
-- all the changes in the source table. This table needs to be created in the following format.
CREATE TABLE msid.ccdemail
(
 IBMSNAP_MSTMSTMP timestamp, 
 IBMSNAP_COMMITSEQ   BINARY(10)   NOT NULL,
 IBMSNAP_INTENTSEQ   BINARY(10)   NOT NULL,
 IBMSNAP_OPERATION   CHAR(1)      NOT NULL,
 IBMSNAP_LOGMARKER   DATETIME     NOT NULL,
 NAME   VARCHAR (80),
 EMAIL   VARCHAR (80) 
);

You also need to create triggers to capture the insert, update and delete operations performed on the email table.

CREATE TRIGGER  msid.email_ins_trig ON msid.email
FOR INSERT AS
BEGIN
 INSERT INTO msid.ccdemail
(NAME,
 EMAIL,
 IBMSNAP_COMMITSEQ,
 IBMSNAP_INTENTSEQ,
 IBMSNAP_OPERATION,
 IBMSNAP_LOGMARKER )
 SELECT
 NAME,
 EMAIL,
  @@DBTS,
 @@DBTS,
 'I',
 GETDATE() FROM inserted
END;

Note: @@DBTS returns the value of the current timestamp data type for the current database. This timestamp is guaranteed to be unique in the database.

-- creating DELETE trigger to capture delete operations on email table
CREATE TRIGGER  msid.email_del_trig ON msid.email
FOR DELETE AS 
BEGIN
 INSERT INTO msid.ccdemail
(
 NAME,
 EMAIL,
 IBMSNAP_COMMITSEQ,
 IBMSNAP_INTENTSEQ,
 IBMSNAP_OPERATION,
 IBMSNAP_LOGMARKER
)
 SELECT
 NAME,
 EMAIL,
 @@DBTS,
 @@DBTS,
 'D',
 GETDATE() FROM deleted
END;

-- creating UPDATE trigger to capture update operations on email table
CREATE TRIGGER  msid.email_upd_trig ON msid.email
FOR UPDATE AS 
BEGIN
 INSERT INTO msid.ccdemail
(
 NAME,
 EMAIL,
 IBMSNAP_COMMITSEQ,
 IBMSNAP_INTENTSEQ,
 IBMSNAP_OPERATION,
 IBMSNAP_LOGMARKER
)
 SELECT
 NAME,
 EMAIL,
 @@DBTS,
 @@DBTS,
 'U',
 GETDATE() FROM inserted
END;

Create change table and triggers in Informix®

-- Create Source table infxid.email. This will be the table that the RDBMS Change Detection Connector
-- will detect changes on.
CREATE TABLE infxid.email
(
NAME VARCHAR(80),
EMAIL VARCHAR(80)
);

-- create ccdemail table to capture DML operations on email table
CREATE TABLE infxid.ccdemail
(
IBMSNAP_COMMITSEQ   CHAR(10)   NOT NULL, 
IBMSNAP_INTENTSEQ   CHAR(10) NOT NULL, 
IBMSNAP_OPERATION   CHAR(1)   NOT NULL, 
IBMSNAP_LOGMARKER   DATETIME YEAR TO FRACTION(5) NOT NULL, 
NAME   VARCHAR(80), 
EMAIL   VARCHAR(80)
);

--Create sequence generators
CREATE SEQUENCE infxid.SG1
MINVALUE 100 INCREMENT BY 1;
CREATE SEQUENCE infxid.SG2
MINVALUE 100 INCREMENT BY 1;

-- procedure to capture INSERTs into email table
CREATE PROCEDURE infxid.email_ins_proc
(
 NNAME  VARCHAR(80),
 NEMAIL  VARCHAR(80)
)
DEFINE VARHEX CHAR(256);
 
 INSERT INTO infxid.ccdemail
(NAME,
 EMAIL,
 IBMSNAP_COMMITSEQ,
 IBMSNAP_INTENTSEQ,
 IBMSNAP_OPERATION,
 IBMSNAP_LOGMARKER )
 VALUES 
(NNAME,
 NEMAIL,
 infxid.SG1.NEXTVAL,
 infxid.SG2.NEXTVAL,
 'I',
 CURRENT YEAR TO FRACTION(5));END PROCEDURE;

-- now create the trigger for INSERTs into ccdemail
CREATE TRIGGER  infxid.email_ins_trig
 INSERT ON infxid.email
 REFERENCING NEW AS NEW FOR EACH ROW( EXECUTE PROCEDURE 
 infxid.email_ins_proc
( NEW.NAME,
  NEW.EMAIL
) );

-- create procedure to capture DELETEs on email table
CREATE PROCEDURE infxid.email_del_proc
(
 ONAME  VARCHAR(80),
 OEMAIL  VARCHAR(80)
)

 INSERT INTO infxid.ccdemail
(NAME,
 EMAIL,
 IBMSNAP_COMMITSEQ,
 IBMSNAP_INTENTSEQ,
 IBMSNAP_OPERATION,
 IBMSNAP_LOGMARKER )
 VALUES 
(ONAME,
 OEMAIL,
 infxid.SG1.NEXTVAL,
 infxid.SG2.NEXTVAL,
 'D',
 CURRENT YEAR TO FRACTION(5));END PROCEDURE;

-- create DELETE trigger
CREATE TRIGGER  infxid.email_del_trig
 DELETE ON infxid.email
 REFERENCING OLD AS OLD FOR EACH ROW( EXECUTE PROCEDURE 
 infxid.email_del_proc
(OLD.NAME,
 OLD.EMAIL
) );

-- create PROCEDURE to capture updates
CREATE PROCEDURE infxid.email_upd_proc
(
 NNAME  VARCHAR(80),
 NEMAIL  VARCHAR(80)
)
 INSERT INTO infxid.ccdemail
(NAME, 
 EMAIL, 
 IBMSNAP_COMMITSEQ,
 IBMSNAP_INTENTSEQ,
 IBMSNAP_OPERATION,
 IBMSNAP_LOGMARKER)
 VALUES 
(NNAME,
 NEMAIL,
 infxid.SG1.NEXTVAL,
 infxid.SG2.NEXTVAL,
 'U',
 CURRENT YEAR TO FRACTION(5));END PROCEDURE;

-- create TRIGGER to capture UPDATES
CREATE TRIGGER  infxid.email_upd_trig
 UPDATE ON infxid.email
 REFERENCING NEW AS NEW OLD AS OLD FOR EACH ROW( EXECUTE PROCEDURE 
 infxid.email_upd_proc
(NEW.NAME,
 NEW.EMAIL
 ) );

Create change table and triggers for SYBASE

-- Create Source table sybid.email. 
-- This will be the table that the RDBMS Change Detection Connector will detect changes on.
CREATE TABLE sybid.EMAIL
(
 NAME   VARCHAR (80),
 EMAIL   VARCHAR (80)
)

-- Create CCD table to capture changes on email table
CREATE TABLE sybid.CCDEMAIL
(
 IBMSNAP_TMSTMP TIMESTAMP,
 IBMSNAP_COMMITSEQ  NUMERIC(10)   IDENTITY,
 IBMSNAP_INTENTSEQ   BINARY(10)   NOT NULL,
 IBMSNAP_OPERATION   CHAR(1)      NOT NULL,
 IBMSNAP_LOGMARKER   DATETIME     NOT NULL,
 NAME   VARCHAR(80),
 EMAIL   VARCHAR(80) 
)

-- Create TRIGGER to capture INSERTs on email table
CREATE TRIGGER  sybid.EMAIL_INS_TRIG ON sybid.EMAIL
FOR INSERT AS
BEGIN
 INSERT INTO sybid.CCDEMAIL
(NAME,
 EMAIL,
 IBMSNAP_INTENTSEQ,
 IBMSNAP_OPERATION,
 IBMSNAP_LOGMARKER )
 SELECT
 NAME,
 EMAIL,
@@DBTS,
 'I',
 GETDATE() FROM inserted
END

NOTE: @@DBTS is a special database variable that yields the next database timestamp value.

-- create TRIGGER to capture DELETE ops on EMAIL table
CREATE TRIGGER  sybid.EMAIL_DEL_TRIG ON sybid.EMAIL
FOR DELETE AS 
BEGIN
 INSERT INTO sybid.CCDEMAIL
(
 NAME,
 EMAIL,
 IBMSNAP_INTENTSEQ,
 IBMSNAP_OPERATION,
 IBMSNAP_LOGMARKER
)
 SELECT
 NAME,
 EMAIL,
 @@DBTS,
 'D',
 GETDATE() FROM deleted
END

-- create TRIGGER to capture UPDATEs on email
CREATE TRIGGER  sybid.EMAIL_UPD_TRIG ON sybid.EMAIL
FOR UPDATE AS 
BEGIN
  DECLARE @COUNTER INT 
  SELECT @COUNTER=COUNT(*) FROM deleted 
  IF @COUNTER>1 
  BEGIN 
  DECLARE @NAME  VARCHAR ( 80 )
  DECLARE @EMAIL  VARCHAR ( 80 )
  DECLARE insertedrows CURSOR FOR SELECT * FROM inserted 
  OPEN insertedrows 
  WHILE 1=1 BEGIN 
  FETCH insertedrows INTO 
  @NAME, 
  @EMAIL 
   IF @@fetch_status<>0 BREAK 
  ELSE INSERT INTO sybid.CCDEMAIL
 (
  NAME,
  EMAIL,
  IBMSNAP_INTENTSEQ,
  IBMSNAP_OPERATION,
  IBMSNAP_LOGMARKER
 )
 VALUES
 (
  @NAME,
  @EMAIL,
  @@DBTS,
 'U',
 GETDATE()
 )
 END
  DEALLOCATE insertedrows 
 END ELSE INSERT INTO sybid.CCDEMAIL(
  NAME,
  EMAIL,
  IBMSNAP_INTENTSEQ,
  IBMSNAP_OPERATION,
  IBMSNAP_LOGMARKER
 )
 SELECT
  I.NAME,
  I.EMAIL,
  @@DBTS,
 'U',
 GETDATE() FROM inserted I
 END

Example

An example is provided in the directory TDI_install_dir/examples/RDBMS. It demonstrates the ability of the RDBMS Change Detection Connector to detect changes to a table in a remote database. This example is designed to work with IBM DB2® only.


SCIM Connector

The System for Cross-Domain Identity Management (SCIM) protocol is an application-level, REST protocol for provisioning and managing identity data on the web. The protocol supports creation, modification, retrieval, and discovery of core identity resources, which are users and groups, and also custom resource extensions.

The SCIM Connector implements the SCIM Protocol by using JavaScript and an HTTP Client Connector.
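
For orientation, a SCIM user resource looks like the following JSON (shown here in SCIM 2.0 form per RFC 7643; the attribute values are placeholders, and the schema URN differs if your deployment uses SCIM 1.1):

```json
{
  "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
  "userName": "jdoe",
  "name": { "givenName": "Jane", "familyName": "Doe" },
  "emails": [{ "value": "jdoe@example.com", "primary": true }],
  "active": true
}
```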

Configuration

The SCIM Connector uses the following parameters:

The following parameters are available under the Advanced section:


Script Connector

The Script Connector enables you to write your own Connector in JavaScript.

A Script Connector must implement a few functions to operate. If you plan to use it for iteration purposes only (that is, reading, not searching or updating), you can operate with two functions only. If you plan to use it as a fully qualified Connector, you must implement all functions. The functions take no parameters; data is passed between the hosting Connector and the script through predefined objects. One of these predefined objects is the result object, which is used to communicate status information. Upon entry into any of these functions, its status field is set to normal, which causes the hosting Connector to continue calls. Signaling end-of-input or an error is done by setting the status and message fields of this object. Two other script objects are defined upon function entry: the entry object and the search object.

Note: When you modify a Script Connector or Parser, the script is copied from the Library where it is stored into your configuration file. This lets you customize the script, with the caveat that newer Library versions are not known to the AssemblyLine.

One workaround is to remove the old Script Connector from the AssemblyLine and reintroduce it.

For a generic container, you write the Script Connector yourself in JavaScript, and it provides the modes you write into it. See "JavaScript Connector" in SDI v7.2 Users Guide.

For a list of Supported Modes, see Legend for the Supported Mode columns.

In Script-based Connectors, a potential source of problems exists if you make direct Java™ calls into the same libraries as SDI uses. A new version of SDI might have updated libraries (with different semantics), or you might have upgraded your own libraries since the last time you used your Connector.

Predefined script objects

main

The Config Instance (RS object) that is running.

task

The AssemblyLine this Connector is a part of.

system

A UserFunctions object.

The result object

The config object

This object gives you access to the configuration of this AssemblyLine component and its Input and Output schema. Note that the getSchema() method of this object takes a single Boolean parameter: true returns the Input Schema, while false returns the Output Schema.

The entry object

The entry object corresponds to the conn Entry for a Connector (or Function, when scripting an FC.)

See The Entry object for more details.

The search object

The search object gives you access to the searchCriteria object (built based on Link Criteria settings.) See The Search (criteria) object for more details.

The connector object

A reference to this Connector.

This could be useful, for example, when returning multiple Entries found in the findEntry() function, with code similar to this (foundEntries is a placeholder for the Entries located through the search object):

	function findEntry() {
		connector.clearFindEntries();
		// Use the search object to find Entries, then register each one
		for (var i = 0; i < foundEntries.length; i++) {
			connector.addFindEntry(foundEntries[i]);
		}
		if (connector.getFindEntryCount() == 1)
			result.setStatus(1);
		else
			result.setStatus(0);
	}

Functions

The following functions can be implemented by the Script Connector. Even though some functions might never be called, it is recommended that you implement each of them with error-signaling code that notifies the caller that the function is unsupported.

According to the various modes, these are the minimum required functions you need to implement:
Table 31. Required functions
Mode       Functions you must implement
Iterator   selectEntries(), getNextEntry()
AddOnly    putEntry()
Lookup     findEntry()
Delete     findEntry(), deleteEntry()
Update     findEntry(), putEntry(), modEntry()
CallReply  queryReply()
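
A minimal Iterator-mode skeleton looks like the following. The result and entry objects are normally predefined by the hosting Connector; they are stubbed here, together with a stand-in data array, purely so the sketch is self-contained, and the "end" status value is an assumed end-of-input marker for illustration (see "The result object" for the actual signaling values):

```javascript
// Stubs for the predefined script objects (supplied by the hosting
// Connector at run time; redefined here only to make the sketch runnable).
var result = { status: "normal", message: "" };
var entry = {
    attrs: {},
    setAttribute: function (name, value) { this.attrs[name] = value; }
};

// Stand-in data source; a real Connector would open a connection here.
var data = ["alice", "bob"];
var position = 0;

// Called once when iteration starts.
function selectEntries() {
    position = 0;
}

// Called for each AssemblyLine cycle; fills the entry object or
// signals end-of-input through the result object.
function getNextEntry() {
    if (position >= data.length) {
        result.status = "end"; // assumed end-of-input marker
        return;
    }
    entry.setAttribute("uid", data[position]);
    position++;
}
```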

Configuration

The Connector needs the following parameters:

Examples

Navigate to the TDI_install_dir/examples/script_connector directory of your SDI installation.


Server Notifications Connector

The Server Notifications Connector is an interface to the SDI notification system. It listens for and reports Server API notifications, and can also issue them. The Connector provides the ability to monitor various processes taking place in the SDI Server, such as AssemblyLine start and stop events, as well as to issue custom server notifications.

The Server Notifications Connector supports the Iterator and AddOnly modes.

Iterator Mode

Depending on how it is configured, the Server Notifications Connector in Iterator mode can listen to and report either local or remote Server API notifications, but not both during the same Connector session.

The "Local" connection type should be used when the Connector is run in the same JVM as the SDI Server which sends notifications.

The "Remote" connection type should be used when the Connector connects to a remote SDI Server running in a different JVM.

AddOnly Mode

The Connector in AddOnly mode sends Server API custom (that is, user-defined) notifications through either the local or the remote Server API session, but not both during the same Connector session.

The "Local" connection type should be used when the Connector is run in the same JVM as the SDI Server which sends notifications.

The "Remote" connection type should be used when the Connector connects to a remote SDI Server running in a different JVM.

The data needed for creating the notification objects is retrieved from the conn Entry passed to the Connector by the AssemblyLine. The Connector looks for fixed-name Attributes in this Entry, retrieves their values, builds the notification object using these values and emits this notification object through the Server API. For more information about the fixed-name Attributes please see the "Schema" section.

Since each Server API notification also causes a corresponding JMX notification to be emitted, the Server Notifications Connector in AddOnly mode also indirectly sends JMX notifications. For more information about the details of custom notifications please see section Schema.

Encryption

The Server Notifications Connector provides the option to use Secure Sockets Layer (SSL) when the connection type is set to remote. If the remote SDI server accepts SSL connections only, the Server Notifications Connector automatically establishes an SSL connection provided that a trust store on the local SDI Server is configured properly. When SSL is used, the Connector uses a Server API SSL session, which runs RMI over SSL.

Trust store

A trust store on the local SDI Server is needed because, when the remote SDI Server fires a notification, a new SSL connection to the local SDI Server is created; for this new SSL session to be established, the local SDI Server must trust (through its trust store) the remote SDI Server's SSL certificate. A trust store is configured by setting appropriate values for the javax.net.ssl.trustStore, javax.net.ssl.trustStorePassword and javax.net.ssl.trustStoreType properties in the global.properties or solution.properties file.
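
In global.properties or solution.properties, that configuration takes the following form (the path, password, and store type shown are placeholder values for illustration):

```properties
javax.net.ssl.trustStore=path/to/truststore.jks
javax.net.ssl.trustStorePassword=changeit
javax.net.ssl.trustStoreType=jks
```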

Authentication

SSL Authentication

The Server Notifications Connector is capable of authenticating by using a client SSL certificate. This is only possible when the remote SDI Server API is configured to use SSL and to require clients to possess SSL client certificates. A trust store must be configured properly on the local SDI server.

Username and Password Authentication

The Server Notifications Connector is capable of using the Server API username and password authentication mechanism. The desired username and password can be set as a Connector parameter, in which case the Connector will use the Server API username and password authentication mechanism. If SSL is used and a username and password have been supplied as Connector parameters, then the Connector will use the supplied username and password and not an SSL client certificate to authenticate to the remote SDI Server.

Configuration

The Server Notifications Connector uses the following parameters:

Schema

Iterator mode

The Server Notifications Connector in Iterator mode sets the following Attributes in the Input Attribute Map:

AddOnly mode

The Server Notifications Connector in AddOnly mode expects to receive the following Attributes from the Output Attribute Map:


Simple Tpae IF Connector

Introduction

The Tivoli® Process Automation Engine (Tpae), also known as Base Services, is a collection of core Java™ classes and is used as a base to build Java applications. The Integration Framework, a Tpae feature, contains standard integration objects (Object Structures and interfaces) and outbound/inbound objects. The Simple Tpae IF Connector connects SDI to the Tpae Integration Framework to exchange information.

The Simple Tpae IF Connector reads from and writes to the Integration Framework. It supports the Maximo® Business Object (MBO), and data is processed through an integration object; the connector uses the MBO layer to validate imported and exported objects. The Simple Tpae IF Connector can be used in various AssemblyLine modes such as Iterator, AddOnly, Update, Lookup, and Delete.

Tivoli Process Automation Engine

The benefit of the Tpae architecture is that core function used by many Java™ applications does not need to be coded in each application; each application depends on the base classes to provide core function rather than coding it itself. The Tpae layer is middleware and is not used directly as an application by the end user. Key functions include:

A few Tpae applications are:

An application, which is built on Tpae, uses the base system security tools for user, role, and group management. Key features of Tpae are:

Integration Framework

The Integration Framework (IF) is a set of applications that facilitates integration between the system and framework applications. IF is part of the base Tivoli® Process Automation Engine and is available in all major products that use Tpae, for example MAM and SRM.

The IF is a part of Tpae. It is an XML-based integration framework and supports both XML and delimited files. IF allows synchronization and integration of data between an external system and applications that use the Tpae common architecture and run under an application server. Using IF, we can exchange data synchronously and asynchronously, using various communication protocols.

IF provides a set of outbound (channels) and inbound (services) integration interfaces. It supports multiple communication methods, such as:

Tpae Integration Framework

We can use the following IF services to integrate Tpae product and the external systems:

IF allows data to flow in and out of applications (MAM or CCMDB), and data to flow in and out of external systems. The Simple Tpae IF Connector uses the IF feature to integrate data.

Maximo Business Object

The Maximo® Business Object (MBO) defines a set of fields and business rules and updates one or more Maximo database tables. If multiple object structures use the same MBO, each structure definition repeats these details.

The IF uses MBOs to extract data from and load data into the underlying tables. The MBO enforces one or more business rules on the data being received; if the rules cannot be applied successfully, the MBO fails to perform the required operation, for example, changing the status of a Purchase Order or inserting a new workflow process. The MBO layer is used when integrating data with Tpae.

MIF Object Structure

The MIF Object Structure (MOS) is made up of one or more subrecords, which make up the content of an integration message sent to or received from an external system. Each subrecord contains fields from the MBO. The MBO and the corresponding subrecord have the same name. A MOS can include any number of subrecords. The Object Structures can be hierarchical, representing a parent-child relationship between pairs of subrecords in the Object Structure. The topmost MBO is called a primary object or root MBO.

The Purchase Order predefined Object Structure

An Object Structure is the common data layer that IF uses for outbound and inbound application data processing. We can use the message content of a single Object Structure to support both inbound and outbound message processing. Standard Service and REST APIs (see Figure 1) do not go through the Object Structure layer.

Note: In Maximo® 6, the Object Structure was known as the Integration Object. In Maximo 7, the Object Structure is called the Integration Object Structure or MIF Object Structure (MOS).

Using the Connector

The Simple Tpae IF Connector uses XML over HTTP to integrate data with IF.

Using Object Structure Services

The Object Structure Services provide capabilities to perform the following operations over a specified Object Structure:

The Simple Tpae IF Connector supports the operations indicated in the preceding list. Object Structure Services, which can be used for create, read, update, and delete operations, are the default behavior of the connector. These services are managed by the Object Structure application. Maximo® comes with several predefined Object Structures, for example, MXASSET:
Table 32. MXASSET example
MBO            Parent Object   Location Path         Relationship
ASSET          -               ASSET                 -
ASSETMETER     ASSET           ASSET/ASSETMETER      INT_ASSETMETER
ASSETUSERCUST  ASSET           ASSET/ASSETUSERCUST   ASSETUSERCUST
ASSETSPEC      ASSET           ASSET/ASSETSPEC       ASSETSPECCLASS

When reading assets, the connector receives a structure, similar to the following XML file, where the relationships are represented as nested elements:

<ASSET>
	<ASSETNUM>7112</ASSETNUM>
	-
	<ASSETMETER>
		<METERNAME>RUNHOURS</METERNAME>
		-
	</ASSETMETER>
	<ASSETMETER>
		<METERNAME>KILOMETERS</METERNAME>
		-
	</ASSETMETER>
	-
</ASSET>

Using Enterprise Services

The Enterprise Service associates an operation and an Object Structure. The Enterprise Service defines the operations that are performed on the specified Object Structure. These operations are requested from Maximo as XML messages over HTTP. The Enterprise Service provides the following functions:

Using Enterprise Services for integration requires configuration of queues and external system information. These services are managed by the Enterprise Services application.

External systems are managed by the External Systems application. An external system identifies a specific external application involved in outbound or inbound data synchronization with Maximo. It defines all the Enterprise services available to the external application. The following figure illustrates the basic concept:

Tpae Enterprise Services example

Note: Enterprise Services parameters are enabled and can be used by the connector only if you specify the External System parameter. In this case, Enterprise Services are used for integration instead of Object Structure services.

MBO parameter

The Simple Tpae IF Connector works only with flat entries. An Object Structure is composed of several MBOs arranged in hierarchy. Therefore, the connector can work with only one MBO at a time. In the Configuration Editor, you need to specify the MBO name in the MBO field. If this parameter is not defined, the connector works with the root MBO of the Object Structure. The selected MBO has to be a part of the specified Object Structure.

For example, the predefined Object Structure MXASSET is composed of MBOs such as ASSET, ASSETMETER, ASSETUSERCUST, and ASSETSPEC.

Use the MBO parameter with the following syntax:

<Top-Level MBO>[@<Child MBO Level 1>[@<Child MBO Level 2>[@<Child MBO Level N>]]] 

Example:
Value of MBO parameter   Selected MBO
ASSET                    ASSET
ASSET@ASSETMETER         ASSETMETER
ASSET@ASSETSPEC          ASSETSPEC

The selected MBO is used in all connector modes.

Use the TpaeIFConnector.getMboList() method to retrieve a list of available MBOs in the specified Object Structure. For more information about this method, see the Javadocs.

Connector Modes

The Simple Tpae IF Connector operates in various modes such as Iterator, AddOnly, Update, Lookup, and Delete.

Iterator Mode

In the Iterator mode, the connector sends a Query XML request to the IF server and receives a Query XML response. For example, Maximo returns the following XML as a result of a query operation on the predefined MXASSET Object Structure:

<ASSET>
	<ASSETNUM>7111</ASSETNUM>
	<BUDGETCOST>1000.0</BUDGETCOST>
	<ASSETSPEC>
		<ASSETATTRID>RAMSIZE</ASSETATTRID>
		<MEASUREUNITID>MBYTE</MEASUREUNITID>
		<NUMVALUE>512.0</NUMVALUE>
	</ASSETSPEC>
	<ASSETSPEC>
		<ALNVALUE />
		<ASSETATTRID>DISKSIZE</ASSETATTRID>
		<MEASUREUNITID>GBYTE</MEASUREUNITID>
		<NUMVALUE>100.0</NUMVALUE>
	</ASSETSPEC>
	<ASSETSPEC>
		<ASSETATTRID>PROSPEED</ASSETATTRID>
		<MEASUREUNITID>GHZ</MEASUREUNITID>
		<NUMVALUE>1.5</NUMVALUE>
	</ASSETSPEC>
</ASSET>
<ASSET>
	<ASSETNUM>7115</ASSETNUM>
	<BUDGETCOST>1500.0</BUDGETCOST>
	<ASSETSPEC>
		<ASSETATTRID>RAMSIZE</ASSETATTRID>
		<MEASUREUNITID>MBYTE</MEASUREUNITID>
		<NUMVALUE>2048.0</NUMVALUE>
	</ASSETSPEC>
	<ASSETSPEC>
		<ALNVALUE />
		<ASSETATTRID>DISKSIZE</ASSETATTRID>
		<MEASUREUNITID>GBYTE</MEASUREUNITID>
		<NUMVALUE>250.0</NUMVALUE>
	</ASSETSPEC>
	<ASSETSPEC>
		<ASSETATTRID>PROSPEED</ASSETATTRID>
		<MEASUREUNITID>GHZ</MEASUREUNITID>
		<NUMVALUE>3.2</NUMVALUE>
	</ASSETSPEC>
</ASSET>

The query returns two assets, each with three asset specifications. The resulting Entry object depends on the value defined for the MBO parameter.

If the MBO parameter is ASSET, the result is two Entry objects with the following attribute names and values:
Entry   ASSETNUM   BUDGETCOST
1       7111       1000.0
2       7115       1500.0

If the MBO parameter is ASSET@ASSETSPEC, the result is six Entry objects with the following attribute names and values:
Entry   ASSETNUM   BUDGETCOST   ASSETSPEC@ASSETATTRID   ASSETSPEC@MEASUREUNITID   ASSETSPEC@NUMVALUE
1       7111       1000.0       RAMSIZE                 MBYTE                     512.0
2       7111       1000.0       DISKSIZE                GBYTE                     100.0
3       7111       1000.0       PROSPEED                GHZ                       1.5
4       7115       1500.0       RAMSIZE                 MBYTE                     2048.0
5       7115       1500.0       DISKSIZE                GBYTE                     250.0
6       7115       1500.0       PROSPEED                GHZ                       3.2

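The flattening shown in the tables above can be sketched in standalone JavaScript. The nested object below stands in for the parsed hierarchical Query XML response; the flatten function is illustrative, not the connector's internal code:

```javascript
// Sketch: how one hierarchical ASSET record becomes several flat entries
// when the MBO parameter is ASSET@ASSETSPEC. Hypothetical data structure.
function flatten(asset, childName) {
    var entries = [];
    asset[childName].forEach(function (child) {
        var entry = {};
        Object.keys(asset).forEach(function (k) {
            if (k !== childName) { entry[k] = asset[k]; } // parent attributes repeat
        });
        Object.keys(child).forEach(function (k) {
            entry[childName + "@" + k] = child[k]; // child attributes get the MBO prefix
        });
        entries.push(entry);
    });
    return entries;
}

var asset = {
    ASSETNUM: "7111",
    BUDGETCOST: "1000.0",
    ASSETSPEC: [
        { ASSETATTRID: "RAMSIZE", MEASUREUNITID: "MBYTE", NUMVALUE: "512.0" },
        { ASSETATTRID: "DISKSIZE", MEASUREUNITID: "GBYTE", NUMVALUE: "100.0" }
    ]
};
var flat = flatten(asset, "ASSETSPEC"); // one flat entry per ASSETSPEC child
```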
Query Criteria in Iterator mode

The Connector uses the Query criteria parameter only in Iterator mode, to filter the result set of the iteration.

Note: Select query values from the top two levels of MBOs from an Object Structure. For example, attributes from ASSET, ASSETSPEC or ASSETMETER MBOs.

AddOnly mode

When adding Entries by using the Simple Tpae IF Connector, specify the attributes marked as Required. Tpae also accepts empty strings. If any of these attributes are missing, the connector throws an exception and the add operation fails.

Note: When adding child MBOs, ensure that the parent exists in the IF.

MBO parameter in AddOnly mode

If the MBO parameter targets the root MBO of the Object Structure, the connector uses the CREATE Enterprise Service parameter.

If the MBO parameter targets a child MBO at any level of the Object Structure, the connector uses the UPDATE Enterprise Service parameter. Provide the key attributes of all MBOs up to the root of the Object Structure, referencing existing records, except for the MBO targeted by the MBO parameter, which is the one to be created.

For example, the predefined Object Structure MXASSET exposes the ASSET and the ASSETMETER MBOs. Therefore, to create a meter for an asset, specify a work entry with the following minimum attributes:
Attribute Name           Value
ASSETNUM                 1001
SITEID                   BEDFORD
ASSETMETER@METERNAME     RUNHOURS

ASSETNUM and SITEID identify an existing asset, and ASSETMETER@METERNAME is the name of the new meter.
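The required-attribute check described above can be mirrored in a standalone JavaScript sketch. The attribute names come from the MXASSET example; the checkRequired function is hypothetical and only models the connector's behavior of failing when a Required attribute is absent:

```javascript
// Hypothetical sketch of the connector's required-attribute check.
// Not the connector's actual code.
function checkRequired(entry, required) {
    required.forEach(function (name) {
        if (!(name in entry)) {
            throw new Error("Missing required attribute: " + name);
        }
    });
    return true; // Tpae also accepts empty strings, so "" passes the check
}

var workEntry = {
    "ASSETNUM": "1001",                // identifies the existing asset
    "SITEID": "BEDFORD",
    "ASSETMETER@METERNAME": "RUNHOURS" // name of the new meter
};
checkRequired(workEntry, ["ASSETNUM", "SITEID", "ASSETMETER@METERNAME"]);
```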

Update mode

When modifying entries with the Simple Tpae IF Connector, specify only the attributes marked as Required. If any of the attributes are missing, the connector throws an exception and the modification fails.

Note: If you change the unique key of an MBO in the output map, the connector overwrites your value with the original key value read from Maximo, and that original value is used to modify the MBO. A debug message also informs you that unique attributes cannot be changed.

Delete mode

When deleting Entries by using the Simple Tpae IF Connector, specify only the attributes marked as Required. If any of these attributes are missing, the connector throws an exception and the deletion fails.

Note: Even when all unique attributes are specified, the deletion might fail because of relationships between the Maximo objects.

MBO parameter in Delete mode

If the MBO parameter targets the root MBO or child MBO of the Object Structure, the connector uses the SYNC Enterprise Service parameter. Provide the key attributes of all MBOs, up to the root MBO of the Object Structure, with a reference to the existing records.

For example, the predefined MXASSET Object Structure exposes ASSET and ASSETMETER MBOs. Therefore, to delete an asset meter, provide a work entry with the following attributes:
Attribute Name           Value
ASSETNUM                 11430
SITEID                   BEDFORD
ASSETMETER@METERNAME     RUNHOURS

ASSETNUM and SITEID identify an existing asset, and ASSETMETER@METERNAME identifies the meter to be deleted.

Lookup mode

To find a specific record in Maximo, provide the Link Criteria with attributes that uniquely identify the record.

Note: Unique attributes for a selected MBO are marked as Required in the Configuration Editor.

Example:

The following attributes uniquely identify an asset in Maximo.
Attribute Name Value
ASSETNUM 1001
SITEID BEDFORD

The following attributes uniquely identify an asset meter in Maximo.
Attribute Name Value
ASSETNUM 1001
SITEID BEDFORD
ASSETMETER@METERNAME RUNHOURS

The Simple Tpae IF Connector supports only Link Criteria of type AND, and the following match operators:

Schema

The schema of the returned entries depends on the selected Object Structure and MBO. Each MBO has a set of unique attributes that need to be specified when creating, updating, or deleting it. These attributes are marked as Required in the Configuration Editor.

Error handling

The Simple Tpae IF Connector handles all exceptions that occur through the normal server hooks. If a failure cannot be handled, the corresponding AssemblyLine Error hook is started. The following exceptions are unique to this connector:

If an Assembly line with a Simple Tpae IF Connector fails, you can retrieve additional information about the error as follows:

  1. Add the following code in the Default On Error hook. The example assumes the Connector is named mxConn:
    task.logmsg("ERROR", "An exception occurred.");
    mxConn.connector.extractMaximoException(error);
    task.dumpEntry(error);
  2. When an exception occurs, the following message is displayed:
    19:31:44  CTGDIS003I *** Start dumping Entry
    19:31:44  	Operation: generic 
    19:31:44  	Entry attributes:
    19:31:44   exception (replace):	'com.ibm.di.connector.maximo.exception.
               MxConnHttpException: response: 404 - Not Found'
    19:31:44   targetUrl (replace):	'http://9.156.6.14/meaweb/schema
               /service/MXPersonService.xsd'
    19:31:44  class (replace):	'com.ibm.di.connector.maximo.
              exception.MxConnHttpException'
    19:31:44  operation (replace):	'update'
    19:31:44  status (replace):	'fail'
    19:31:44  connectorname (replace):	'AddPerson'
    19:31:44  body (replace):	'Error 404: BMXAA1513E - 
              Cannot obtain resource /meaweb/schema/service/MXPersonService.xsd.'
    19:31:44  responseCode (replace):	'404.0'
    19:31:44  responseMessage (replace):	'Not Found'
    19:31:44  message (replace):	'The HTTP server did not returned "HTTP OK".'
    19:31:44  CTGDIS004I *** Finished dumping Entry

    Note: The task.dumpEntry(error) prints information about the error.

Configure external systems

Generating XML schema definition

When using the Connector for the first time, perform the following steps:

  1. Log on to Maximo® as an administrator with authority to perform system configuration tasks.
  2. From the Go To menu on the Navigation toolbar, select Integration -> Object Structures to open the Object Structures application.
  3. Repeat the following steps for each Object Structure you are going to use:

    1. On the List tab, search for the name of the Object Structure, for example, MXASSET.

      To search, open the Filter and type the name of the Object Structure, or a partial name, in the Filter field of the Object Structure column. Then press ENTER.

    2. Click the Object Structure name to open the record for the Object Structure.
    3. From the Select Action menu, select Generate Schema/View XML.

      A message box opens, asking if you want to generate a schema for each operation.

    4. Click OK. The View XML dialog box opens.
    5. Click OK to return to the List tab.

Configuration

The Simple Tpae IF Connector parameters are:

Examples

Go to the TDI_install_dir/examples/SimpleTpaeIFConnector directory of your SDI installation.


SNMP Connector

This Connector listens for SNMP traps sent on the network and returns an entry with the name and values for all elements in an SNMP PDU.

Notes:

  1. In Client mode, a request is retried 5 times with increasing intervals: the waiting period doubles on every retry, starting at 5 seconds. A timeout occurs if no answer is received.
  2. If you want to send SNMP Traps, the system.snmpTrap() method is available.
  3. The SNMP Connector does not support the Advanced Link Criteria (see "Advanced link criteria" in SDI v7.2 Users Guide).
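Note 1 describes an exponential backoff: five retries with the wait doubling from 5 seconds. The schedule can be sketched in standalone JavaScript; the function is illustrative, not part of the Connector API:

```javascript
// Sketch: the retry schedule from note 1. Five retries, with the waiting
// period doubling each time, starting at 5 seconds.
function retrySchedule(retries, initialSeconds) {
    var waits = [];
    var wait = initialSeconds;
    for (var i = 0; i < retries; i++) {
        waits.push(wait);
        wait *= 2; // the waiting period doubles on every retry
    }
    return waits;
}

var schedule = retrySchedule(5, 5); // waits, in seconds, before each retry
```

With these parameters the waits are 5, 10, 20, 40, and 80 seconds, after which a timeout occurs if no answer was received.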

Configuration

The Connector needs the following parameters:

Note: Link Criteria are treated differently for this Connector. In Lookup mode, the connector performs a get request that returns the oid/value pair for the requested oid. The Link Criteria specifies the oid, as well as the server, port, and version. An example link criterion might be "oid" = "1.1.1.1.1.1.1".

Examples

Go to the TDI_install_dir/examples/snmpTrap directory of your SDI installation.


SNMP Server Connector

The SNMP Server Connector supports SNMP v1. SNMP v2 is supported without the SNMP v2 authentication and encryption features.

The Connector does not support SNMP TRAP messages.

The SNMP Server Connector operates in server mode only. The transport protocol it uses is UDP and not TCP. UDP is an unreliable transport protocol, and SSL cannot run on top of an unreliable transport protocol. That is why the Connector cannot use SSL to protect the transport layer.

The SNMP Server Connector (contrary to other Connectors in Server Mode) uses DatagramSockets. That is why there is no notion of connection. The SNMP Server Connector uses a single DatagramSocket which receives SNMP packets from many different SNMP managers on the network.

In the getNextClient() method, the socket blocks on the receive() method until an SNMP packet is received. Then, the Connector creates a new instance of itself, passes the received packet to the child Connector and returns the child Connector.

The getNextEntry() method extracts the SNMP request packet attributes and sets them in the conn Entry, ready for Input Attribute Mapping.

The replyEntry() method extracts the Attributes from the conn Entry and creates an SNMP response packet and returns it to the client; the conn Entry should be populated using Output Attribute Mapping.

The replyEntry() method uses the parent Connector's DatagramSocket to send back the response. Since the parent Connector's DatagramSocket is shared among all child Connectors the access to the DatagramSocket is synchronized.

Connector Schema

The SNMP Server Connector makes the following Attributes available for Input Attribute Mapping:

Configuration

The SNMP Server Connector uses the following parameters:


Sun Directory Change Detection Connector

The Sun Directory Change Detection Connector is a specialized instance of the LDAP Connector; this connector was previously called the Netscape/iPlanet Changelog Connector.

In Sun/iPlanet Directory Server 5.0, the format of the changelog was modified to a proprietary format. In earlier versions of iPlanet Directory Server, the change log was accessible through LDAP. Now the changelog is intended for internal use by the server only. If you have applications that must read the changelog, you will need to use the iPlanet Retro Change Log Plug-in for compatibility with earlier versions.

Since it is not always possible to run the Sun/iPlanet Directory Server in Retro Changelog mode, the Connector is able to run in two different Delivery Modes:

  1. Changelog mode - in this mode the Connector will iterate through the changelog (enabled by the iPlanet Retro Change Log Plug-in) and after delivering all Entries it will poll for new changes or use change notifications
  2. Realtime mode - in this mode, only changes received as notifications will be delivered and offline changes will be lost. The Connector will not use the changelog in this mode. This delivery mode is necessary for Sun/Netscape/iPlanet Servers that do not support a changelog

This Connector supports Delta Tagging, in two different operation modes:

The Connector will detect modrdn operations in the Server's changelog, see Detect and handle modrdn operation for more information.

Attribute merge behavior

In older versions of SDI, the Sun Directory Change Detection Connector merged Attributes of the changelog Entry with the changed Attributes of the actual Directory Entry. This caused issues because the attributes that had changed could not be detected. The SDI v7.2 version of the Connector has logic to address these situations, configured by the Merge Mode parameter. The modes are:

Delta tagging is supported in all merge modes and entries can be transferred between different LDAP servers without much scripting.

Note that in Realtime mode, when the LDAP search base is different from "cn=changelog", the Connector cannot determine which attributes of the Directory Entry changed, so the output entry is the same regardless of the Merge Mode parameter value. When the server supports a changelog and the search base is set to "cn=changelog", the output entry in Realtime mode is merged according to the chosen Merge Mode.

Configuration

The Connector needs the following parameters:

Note: Changing the Timeout or Sleep Interval value automatically adjusts its peer to a valid value (for example, when the timeout is greater than the sleep interval, the value that was not edited is adjusted to be in line with the other). The adjustment is done when the field editor loses focus.


System Queue Connector

Introduction

The System Queue provides a subsystem similar to Java™ Message Service (JMS) for SDI. It is designed for storing and forwarding general messages and SDI Entry objects between SDI Servers and AssemblyLines.

The System Queue Connector is the mechanism for AssemblyLines to interface with the System Queue. To learn more about the System Queue and its configuration, refer to the System Queue section in the SDI v7.2 Installation and Administrator Guide.

The System Queue Connector can be used with AssemblyLines in Iterator and AddOnly modes:

Note: If two JMS clients retrieve messages from the same JMS queue simultaneously, an error might occur. Avoid solutions which use several instances of the System Queue Connector retrieving messages from the same JMS queue simultaneously. However, an instance of the System Queue Connector writing to a queue and another instance of the Connector reading from that same queue at the same time is acceptable.

The System Queue Connector uses the Server API to access the System Queue. The Connector uses both the local and remote interfaces of the Server API, allowing the Connector to operate on an SDI System Queue running on a remote computer. The Connector's ability to operate on a remote computer, coupled with the System Queue's capability to connect to remote JMS servers, results in the ability to use some quite complex deployment scenarios. For example: an SDI server and a System Queue Connector deployed on machine A, working through the remote Server API with the SDI server and System Queue on machine B, which in turn interface with a JMS server deployed on machine C.

In SDI v7.2, ActiveMQ is used as the default System Queue and is enabled by the install process. Default settings of ActiveMQ can be altered in the <tdi install dir>/etc/activemq.xml file.

Configuration

The System Queue Connector uses the following parameters:

Security, Authentication and Authorization

Encryption

When the connection type is set to remote and the remote SDI server is configured to use Secure Sockets Layer (SSL), then the System Queue Connector uses SSL, provided that a trust store on the local SDI Server is configured properly. When SSL is used, the Connector uses a Server API SSL session, which runs RMI over SSL.

Note: Of the standard JMS Drivers, only the driver for MQ supports SSL out of the box. The MQe JMS Driver only works with a local Queue Manager; this is mandated by the MQe architecture. The JMS Script Driver is a generic driver that supports whatever the corresponding user-provided JavaScript supports.

Authentication

Username and password authentication

The System Queue Connector can use the remote Server API username and password authentication. The Connector does not implement any authentication itself. The username and password supplied to the Server API are configured as Connector configuration parameters.

SSL certificate-based authentication

The System Queue Connector is capable of authenticating by using a client SSL certificate. This is only possible when the remote SDI Server API is configured to use SSL and to require clients to possess SSL client certificates. A trust store must be configured properly on the local SDI server.

If SSL is used and a user name and password have been supplied as Connector parameters then the Connector will use the supplied user name and password and not the SSL client certificate to authenticate to the remote SDI Server.

Authorization

The Server API authorization mechanism is applied to the Server API session that the System Queue Connector establishes to the SDI Server. With the SDI v7.2 Server API, once the System Queue Connector is authenticated, it can use the SDI System Queue.

See also

"System Queue" in SDI v7.2 Installation and Administrator Guide,

Usage example SystemQueue_SonicMQ_example.xml using Sonic MQ in TDI_install_dir/examples/SonicMQ,

Usage example SystemQConn_jmsScriptDriver_example.xml using the WebSphere® Default JMS Provider in TDI_install_dir/examples/was_jms_ScriptDriver.


System Store Connector

The System Store Connector provides access to the underlying System Store. The primary use of the System Store Connector is to store Entry objects in the System Store tables. However, you can also use the connector to connect to an external Derby, DB2® 9.7, Oracle, Microsoft SQL Server, or IBM solidDB® database, not just the database configured as the System Store. Each Entry object is identified by a unique value called the key attribute.

The System Store Connector creates a new table in a specified database if one does not already exist. If you iterate on a non-existing table, the (empty) table is created, and the Iterator returns no values.

The System Store Connector uses the following SQL statements to create a table and set the primary key constraint on the table (Derby syntax):

"CREATE TABLE {0} (ID VARCHAR(VARCHAR_LENGTH) NOT NULL, ENTRY BLOB ); 
ALTER TABLE {0} ADD CONSTRAINT {0}_PRIMARY Primary Key (ID);"

This connector provides preset SQL statements for a number of popular databases, and you can modify them as you see fit. For other databases, you must enter your own equivalent SQL statements (multiple ones, if required) in the Create Table Statement parameter. The parameter cannot be empty; if it is, an exception is thrown.
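As a sketch of how such a statement template expands, the following standalone JavaScript substitutes the {0} table-name placeholder and the VARCHAR_LENGTH value into the Derby statement shown above. The table name IDI_ENTRIES and the helper function are hypothetical, used only for illustration:

```javascript
// Sketch: expand the Create Table Statement template for a hypothetical
// table named IDI_ENTRIES with the default VARCHAR_LENGTH of 512.
function expandTemplate(template, tableName, varcharLength) {
    return template
        .replace(/\{0\}/g, tableName)                  // {0} is the table name
        .replace(/VARCHAR_LENGTH/g, String(varcharLength));
}

var template = "CREATE TABLE {0} (ID VARCHAR(VARCHAR_LENGTH) NOT NULL, ENTRY BLOB ); " +
               "ALTER TABLE {0} ADD CONSTRAINT {0}_PRIMARY Primary Key (ID);";
var sql = expandTemplate(template, "IDI_ENTRIES", 512);
```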

Notes:

  1. The VARCHAR_LENGTH value is picked up from the com.ibm.di.store.varchar.length property set in the Properties Store (TDI-P). The default VARCHAR_LENGTH is 512. You can change this value by setting com.ibm.di.store.varchar.length in the Properties Store.
  2. Another property, tdi.pesconnector.return.wrapped.entry, exists for SDI 6.0 compatibility. If you define this property in the SDI global.properties file and set it to true, then SDI reverts to its earlier behavior where, for example, the findEntry() method (used by the system in Iterator, Lookup and Update modes) returns an Entry object of the format: [ENTRY: <Instance of Entry object containing Attributes passed by user>]. In SDI 6.0, in order to obtain the original passed attributes, you would need to write JavaScript code something like this:
    var e = conn.getAttribute("ENTRY");
    at some appropriate place, after which e contains the Attributes originally passed in when writing to the System Store. You could do this in the Input Attribute Map Hook, where you would have to carefully map the Attributes in e to the work entry, or use a Script Component after this Connector to unpack the composite entry attribute in the work entry using the aforementioned JavaScript example (substitute work for conn).

    In SDI v7.2, by default the entry is unwrapped, and therefore all attributes passed by you are now directly available as attributes in the Entry. The above scripting is no longer needed (unless you set the tdi.pesconnector.return.wrapped.entry property to true).

  3. The System Store Connector operates in the following modes: AddOnly, Update, Delete, Iterate, Lookup. However, AddOnly, Update and Delete operations are not permitted on the Delta Tables and Property store tables.

The Connector supports both simple and advanced Link Criteria.

This Connector, like the JDBC Connector it is based upon, can in principle handle secure connections using the SSL protocol. However, driver-specific configuration steps may be required to set up the SSL support. Refer to the manufacturer's driver documentation for details.

Configuration

The System Store Connector requires the following parameters.

Using the Connector

The System Store Connector provides access to the tables created in the System Store. The System Store can be located on any DB server for which a JDBC driver is available. Furthermore, if the System Store uses Derby, it can be configured to run in either embedded (inside the SDI Server process) or networked mode. The connector is able to resolve globally defined parameters to obtain a connection to the default System Store. To configure a connection to a different database, at least the following parameters must be explicitly provided: Database, Username, Password, or JDBC Driver.

The correct way to specify the database and JDBC Driver for different configurations of System Store is given below.

Note: The examples are specific to Windows.

See also

Connector usage examples in: TDI_install_dir/examples/systore, and TDI_install_dir/examples/SystemStore; The chapter on System Store in SDI v7.2 Installation and Administrator Guide.


TADDM Change Detection Connector

Use the TADDM Change Detection Connector to communicate with IBM Tivoli® Application Dependency Discovery Manager (TADDM) database for directly detecting changes using TADDM Java™ APIs (TADDM SDK).

Introduction to TADDM Change Detection Connector

TADDM is a management tool that supports configuration items (CIs) and relationships between them. This configuration information is collected through periodic automatic discoveries, which can scan the entire application infrastructure of a business organization. The collected data includes deployed software components, physical servers, network devices, virtual LAN and host data.

After the initial scan is performed, all subsequent discoveries detect any new changes that have occurred in the infrastructure. The TADDM Change Detection Connector retrieves these changes directly, without scanning the entire TADDM database.

The TADDM Change Detection Connector is based on TADDM Connector and shares the following common features:

The TADDM Change Detection Connector operates only in Iterator mode.

TADDM change detection

The TADDM Change Detection Connector detects changes for a specified time interval, for example, from 01 Jan 2010 00:00:00 to 10 May 2010 02:00:00. If the interval is not specified, the connector detects changes from the earliest possible date 01 Jan 1970 00:00:00 and returns everything since then.

In the TADDM Change Detection Connector, the changes are retrieved at discrete points in time. If the waiting period is 180 seconds, the reported changes are shown at the end of this interval, when compared to its beginning. Therefore, if a configuration item is created and updated several times, the output shows a single create and a single update event. Because the content of configuration items is updated at the end of the waiting interval, both the events show the same attributes. If a configuration item is created and subsequently deleted, no event is returned because it did not exist at either end of the waiting interval. If you require a more fine-grained detection, reduce the value of the Sleep Interval parameter.
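The end-of-interval semantics described above can be sketched in standalone JavaScript: only the net difference between the start and the end of the waiting interval is reported, so an item created and then deleted inside the interval produces no event. The data and the function name are hypothetical:

```javascript
// Sketch: net change events between two snapshots of configuration items
// (id -> content), as observed at the start and end of a waiting interval.
function netChanges(before, after) {
    var events = [];
    Object.keys(after).forEach(function (id) {
        if (!(id in before)) {
            events.push({ id: id, op: "create" }); // appeared during the interval
        } else if (before[id] !== after[id]) {
            events.push({ id: id, op: "update" }); // content changed by interval end
        }
    });
    Object.keys(before).forEach(function (id) {
        if (!(id in after)) {
            events.push({ id: id, op: "delete" }); // gone by interval end
        }
    });
    return events;
}

// CI "b" was created and deleted inside the interval, so it yields no event.
var events = netChanges({ a: "v1" }, { a: "v2", c: "v1" });
```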

Delta tagging support

The TADDM Change Detection Connector provides Delta tagging at the Entry level. This connector sets only the Entry operation depending on the change type.

For example:

*** Start dumping Entry
	Operation: delete
	Entry attributes:
	guid (replace):	'30EFCB75FDAF3B3F92274803BAE6FB01'
	$classType (replace):	'sys.linux.LinuxUnitaryComputerSystem'
	$id (replace):	'30EFCB75FDAF3B3F92274803BAE6FB01'

In this example, a deleted model object and its Guid are shown. For added or modified objects, the attribute list is shown, as in the following example, which uses the IdML mode.

*** Start dumping Entry
	Operation: add
	Entry attributes:
	$classType:	'sys.ComputerSystem'
	$id:		'CBEEDF3618633CDFA56039E39AE833FF',
	cdm:Guid:	'CBEEDF3618633CDFA56039E39AE833FF',
	cdm:Signature:	'testSignature',
	cdm:CreatedBy:	'administrator',
	cdm:LastModifiedTime:	1279790297569,
	cdm:DisplayName:	'testDisplayName'

The Entry operation can be retrieved or set as shown in the following script:

var entryOperation = work.getOperation();	//get entry operation

work.setOperation("modify");			//set entry operation
work.setOp("m");

For more information about Delta tagging and other Delta features, see the "Delta Features" section in SDI v7.2 Users Guide.

Data source schema of TADDM Change Detection Connector

This section describes input schema of TADDM Change Detection Connector.

Input schema

The following table lists attributes of the Input Map.

Note: Not all attributes are present in all situations.
Table 34. Input Schema
Attribute Name and Description:

$cycle
    Prevents cycles (loops) in the hierarchical Entry model.
$id
    Holds a unique identifier for the item, for example, the TADDM Guid or the IdML ID.
$classType
    Holds the CDM/TADDM class name of the read item.
cdm:ManagedSystemName and managedSystemName formats
    Explicit attributes. Both formats can be used, depending on the IdML mode.
cdm-rel:installedOn.cdm-trg:sys.ComputerSystem and parent formats
    Implicit attributes. Both formats can be used, depending on the IdML mode.
$mss and its children
    Holds MSS information, if available and if its parameter option is enabled.
$domain
    Holds Domain attributes, which can prove useful in an enterprise TADDM infrastructure.
ext:attrName and cdm-ext:attrName formats
    Extended attributes. Holds additional non-CDM flat data; both formats are supported.

Configuration

This section describes the parameters of TADDM Change Detection Connector.


TADDM Connector

Use the TADDM Connector to communicate with IBM Tivoli® Application Dependency Discovery Manager (TADDM) server using TADDM Java™ APIs (TADDM SDK).

Introduction to TADDM Connector

TADDM is a management tool that supports configuration items (CIs) and relationships between them. This configuration information is collected through periodic automatic discoveries, which can scan the entire application infrastructure of a business organization. The collected data includes deployed software components, physical servers, network devices, virtual LAN and host data. In addition, TADDM provides a centralized topology view of the discovered data, change and version tracking, and report generation function.

TADDM allows other applications to load data into and retrieve data from its database using one of the following techniques:

The TADDM Connector uses the Common Data Model (CDM), an IBM data integration initiative, to represent the transferred data. See Asset Integration Suite.

The TADDM Connector supports AddOnly, Delete, Iterator, Lookup, Update, and Delta operation modes.

TADDM data representation model

TADDM uses a cyclic hierarchical model for representing its information. For example, the OperatingSystem item can have simple attributes such as Name, Release, and FQDN along with a related ComputerSystem item, where the operating system is installed. The ComputerSystem item can have one or more related CPUs as child attributes as graphically depicted in the following hierarchical data model.

OperatingSystem
	Name=Windows
	Release=1.2.3
	FQDN=www.myfqdn.com
	ComputerSystem (Parent)
		SerialNumber=1234567
		Manufacturer=IBM
		CPU (Parent)
			CPUSpeed=2500000000
			IndexOrder=5
		CPU (Parent)
			CPUSpeed=2500000000
			IndexOrder=5	
			ExternalCache=1000

To support the TADDM data model, the TADDM Connector uses the hierarchical capabilities of the SDI Entry.

In the preceding data structure, the attributes of the OperatingSystem item (for example, Name) and the ComputerSystem item are child attributes of the Entry. The CPUs are child attributes of ComputerSystem, and so on. For more information about attribute arrangement, see the Schema comparison section.

Common Data Model

The logical model for representing items and their relationships in an infrastructure is called the Common Data Model (CDM). This model defines the supported item types and relationships (or class types), and the ways they can be linked. In addition, it specifies the attributes supported by each class type and its naming rules.

The attributes of an item provide information about it. For example, the ComputerSystem item holds its serial number and manufacturer details. These two attributes have simple values such as numbers or strings, and are called explicit attributes.

Implicit attributes are used to represent a relationship in an item. As shown in the following figure, the OperatingSystem is installed on a ComputerSystem: the two share an installedOn relationship, as defined by CDM, with OperatingSystem as its source and ComputerSystem as its target. The relationship in the OperatingSystem item is represented by its Parent attribute, and the ComputerSystem item has an OSInstalled implicit attribute.

Common Data Model

Each naming rule of a class type is composed of one or more implicit and explicit attributes and defines a unique way to identify items. To register an item in the IdML consumer, provide enough of its attributes to satisfy at least one of the naming rules of the item, so that the resource is uniquely identified and is not rejected. The attributes in a naming rule are also called identifying attributes. In the example shown above, the identifying attributes of the OperatingSystem are Name (explicit) and Parent (implicit). Both Name and the ComputerSystem attributes are required for unique identification.
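The naming-rule check can be sketched in standalone JavaScript: an item can be registered when its attributes cover every attribute of at least one rule. The rule data below is taken from the OperatingSystem example above; the function and the Parent value are hypothetical:

```javascript
// Sketch: does a set of attributes satisfy at least one naming rule?
// Each rule is a list of attribute names that must all be present.
function satisfiesNamingRule(attrs, rules) {
    return rules.some(function (rule) {
        return rule.every(function (name) { return name in attrs; });
    });
}

// OperatingSystem example: Name (explicit) plus Parent (implicit) identify it.
var ok = satisfiesNamingRule(
    { Name: "Windows", Parent: "ComputerSystem-1" }, // hypothetical values
    [["Name", "Parent"]]
);
```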

Data representation formats

This section describes various data representation formats used by the products.

TADDM model

TADDM has both implicit and explicit attributes, configuration items, and relationships between them. The attribute names start with a lowercase letter, except for names that begin with two or more capital letters. For example, Name changes into name, while OSInstalled stays unchanged.
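A standalone JavaScript sketch of that naming rule follows; the function is illustrative and only covers the two cases described in the text:

```javascript
// Sketch: TADDM attribute-name rule. Lowercase the first letter unless
// the name begins with two or more capital letters.
function taddmName(cdmName) {
    if (cdmName.length >= 2 &&
        cdmName[0] === cdmName[0].toUpperCase() &&
        cdmName[1] === cdmName[1].toUpperCase() &&
        cdmName[1] !== cdmName[1].toLowerCase()) {
        return cdmName; // e.g. OSInstalled stays unchanged
    }
    return cdmName.charAt(0).toLowerCase() + cdmName.slice(1); // Name -> name
}
```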

TADDM uses the following naming format for the class type:

The CDM names are used for the class types, if there is a collision when the short names are used. For example, the short name SSLSettings can correspond to both app.SSLSettings and app.lotus.SSLSettings. In this case, the CDM names are used.

IT registry model

The IT registry model supports only explicit attribute names, except for implicit naming attributes. In this model, items are registered in a single operation. For example, to register an OperatingSystem, you need to provide its naming context (the ComputerSystem) along with the explicit attributes. The registration is achieved using ad hoc attributes, which can hold either the unique identifier of the other item (Guid) or its identifying attributes.

Implicit non-identifying attributes, however, are not supported; their meaning is expressed by adding relationships between the items.

IdML book format

The IdML books are XML files that hold information for items and relationships in the context of a particular Management Software System. The attribute and class names match the CDM model.

The IdML book format has no implicit attributes. In this model, the information is stored in a file: the explicit attributes of the ComputerSystem item are added as an XML element, followed by the explicit attributes of the OperatingSystem, and an XML element is provided for the relationship between them. The content of the entire IdML book must be parsed before it can be imported into another system. This task can be delegated to bulk loading utilities.

The following table lists the differences among the three data representation models:
Model         Items and Relationships   Correct attribute and class names   Explicit attributes   Implicit attributes
TADDM         Yes                       No                                  Yes                   Yes
IT registry   Yes                       Yes                                 Yes                   Limited
IdML          Yes                       Yes                                 Yes                   No

Implementing unified schema in TADDM

The TADDM Connector uses a new schema that unifies the various data formats described in the previous section. The TADDM Connector uses the solutions described in the following sections to resolve the basic differences and problems.

Explicit attribute names and class types

All names are changed to match their corresponding CDM versions and follow this naming convention:

Implicit attributes

Each implicit attribute name consists of the relationship type, the related class type, and the direction of the relationship. For example, the OSRunning implicit attribute of sys.ComputerSystem corresponds to a runsOn relationship to a sys.OperatingSystem item with a reversed direction, that is, from OperatingSystem to ComputerSystem.

Identifiable data

As shown in the example data structure, the TADDM data (CDM format) is hierarchical and linked. If you read an item, for example a CPU, it might be linked to a ComputerSystem, which in turn is linked to an OperatingSystem. So, to read only one type of item, you might retrieve many attributes. If you need only the CPU, set the Depth parameter to 1 and retrieve its explicit attributes; the implicit attributes then contain only the Guid of the ComputerSystem. This data is not identifiable if you choose to store it in IdML mode.

The ComputerSystem referenced by the CPU then has only its Guid and does not comply with any of the naming rules. To solve this problem, additional processing is done for the new schema: each retrieved item is validated against its naming rules and is skipped if it does not match any of them. For example, the ComputerSystem is skipped, and if the CPU is identified only in its context, the CPU is skipped as well. Alternatively, if the CPU has other identifying attributes, it stays in the retrieved result. We can also increase the value of the Depth parameter, which allows the ComputerSystem item to be uniquely defined.

The unified schema requires additional processing such as name transformations, naming rule retrieval, and validation. Therefore, the TADDM Connector offers it as an optional feature called IdML Mode.

Data representation modes

The TADDM Connector supports the following two modes of data representation:

Use the native mode if you want to work with TADDM directly, for example, registering data read from a CSV file. To export all computer systems to an IdML file, use the IdML mode (the IdML Connector is used in this scenario).

Note: The modes described in this section are not related to the standard SDI connector operation modes such as AddOnly or Iterator. The TADDM Connector has a switch that modifies its schema, irrespective of the actual SDI mode.

IdML mode

In the IdML mode, all explicit attributes are represented with their CDM names capitalized and prefixed with cdm:. For example, the attribute managedSystemName becomes cdm:ManagedSystemName. Implicit attributes consist of two parts: a relationship and a related class. The related class carries the information about the relationship direction. Thus, the sys.ComputerSystem attribute OSRunning changes to cdm-rel:runsOn . cdm-src:sys.OperatingSystem (for clarity, the two parts are separated here with additional spaces). The first part, cdm-rel:runsOn, describes the relationship, as seen in the prefix cdm-rel. The second part represents the related class type, sys.OperatingSystem, and its prefix cdm-src indicates that the related item is the source of the relationship.

In the reversed scenario, sys.OperatingSystem has an implicit attribute called Parent. In the unified model, this attribute changes to cdm-rel:runsOn . cdm-trg:sys.ComputerSystem.

In the IdML mode:

Native mode

In the native mode, there are no modifications to the class types and attribute names. The only addition to the standard TADDM data is that all implicit attributes of an item are added as child attributes of $implicit. For example, an implicit attribute OSInstalled becomes $implicit.OSInstalled.

Schema comparison

The following example data structure, in native mode and IdML mode, shows an operating system with installed software.
IdML mode Native mode
{
    cdm:Name=Windows
    cdm:Release=3.3.4
    cdm-rel:installedOn {
        cdm-trg:sys.ComputerSystem {
            cdm:Signature=12345
            cdm:Manufacturer=IBM
            cdm:Fqdn=www.sample.com
            cdm-rel:contains {
                cdm-trg:sys.CPU {
                    cdm:IndexOrder=2
                    cdm:ExternalCache=1000
                    cdm:CPUSpeed=2500000000
                }
                cdm-trg:sys.CPU {
                    cdm:IndexOrder=1
                    cdm:CPUSpeed=1500000000
                }
            }
        }
        cdm-src:app.SoftwareInstallation {
            cdm:ProductName=Notes
            cdm:ManufacturerName=IBM
            cdm:InstalledLocation=C:\notes
        }
    }
}
{
    name=Windows
    release=3.3.4
    $implicit {
        parent {
            signature=12345
            manufacturer=IBM
            fqdn=www.sample.com
            $implicit {
                CPU {
                    indexOrder=2
                    externalCache=1000
                    CPUSpeed=2500000000
                }
                CPU {
                    indexOrder=1
                    CPUSpeed=1500000000
                }
            }
        }
        softwareInstallation {
            productName=Notes
            manufacturerName=IBM
            installationLocation=C:\notes
        }
    }
}

Note: In this example data structure, a few system attributes are skipped. For more information about system attributes, see the System attributes section.

We can switch between these two modes of data representation using the IdML Mode option in the Configuration Editor. To check the structure of the current schema, use the Query Schema function of the TADDM Connector.

System attributes

The TADDM Connector supports three system attributes that are used in different situations. When reading the data from TADDM, each returned object (root object and related objects) has the $classType attribute that holds its type. If IdML mode is enabled, the CDM class type is used, for example, sys.ComputerSystem. In native mode, the TADDM class type is used, for example, com.collation.platform.model.topology.sys.ComputerSystem. In AddOnly mode, the attribute can be used for changing the registered item type at run time. For more details, see the Creating new model objects section.

The $id attribute holds the unique identifier of the processed item. When reading data from TADDM, it holds the Guid of the item (the same value as the guid/cdm:Guid attribute). When writing data to TADDM that comes from another system (for example, an IdML book), the $id attribute holds the corresponding ID attribute of the IdML element (a simple identifier without a special format). In AddOnly mode, the attribute is used for resolving cycles and preventing data duplication.

The $cycle attribute is used when a loop in the TADDM data is detected. For example, if you list the content of the first item, you get the second item through a forward implicit attribute, and the second item links back to the first item, causing a loop. Such situations are resolved by checking whether the current item is already present higher in the hierarchy path. If it is found, only its $id is kept, as the value of the $cycle attribute.

Using the Connector

Basic configuration

The TADDM Connector depends on TADDM Java™ API for communicating with a TADDM server. Therefore, you need to specify the path of TADDM SDK in the Configuration Editor. This SDK includes JAR libraries, configuration files, and documentation for the TADDM model.

The host name of the TADDM server must be specified along with a user name and password. If the port number on which the TADDM server listens is not specified, the value of the com.collation.api.port property from the taddm-sdk/etc/collation.properties file is used. The default port number is 9530. For SSL connections, the value of the com.collation.api.ssl.port property is used.

We can manually enter a class type, or click the Select button in the Configuration Editor to select a TADDM supported class type. If you select the IdML Mode check box, the listed types are prefixed with cdm:; in native mode, the types are not prefixed. Leaving the Class Type field blank results in an iteration over all items in the TADDM database, regardless of type. For more details, see the Reading configuration items and relationships from TADDM section.

If you select the Configuration Item resource type from the Artifact Type selection list in the Configuration Editor, only the item class types are listed when you click the Select button. If the Relationship resource type is selected, only the known relationships are displayed. This filter option is supported only in native mode. In IdML mode, the relationships are not supported, and the Artifact Type field is disabled.
{
    guid=D234B679309DCE
    hostname=www.sample.com
    VMID=12
    $implicit {
        hostSystem {
            guid=6859GA5934B1
            signature=4530093
            serialNumber=12345
            $implicit {
                CPU {
                    indexOrder=12
                    CPUSpeed=10000
                    guid=0BCA35EF1
                }
            }
        }
     }
}
The Depth parameter is used to specify the level of relationships to be traversed when reading model objects from TADDM. It is not set by default, and unlimited-depth queries are performed. To get only the related data you need, set a value for this parameter.

For example, consider the ComputerSystem data shown in the data structure. It uses the native schema syntax, but the same rules apply for the IdML version. Here are the possible situations:

  • Negative depth - error is returned by the TADDM Connector.
  • Zero depth - only the guid of the queried item is returned. In this case, only the item guid=D234B679309DCE.

    Note: System attributes are also available.

  • Depth is set to 1 - all explicit attributes of the ComputerSystem item are returned, and also the guid of the related ComputerSystem up to guid=6859GA5934B1.
  • Depth is set to 2 - all attributes of the first ComputerSystem are returned whereas the second has only its explicit ones. For the $implicit.hostSystem.$implicit.CPU attribute, only its guid is returned - everything up to guid=0BCA35EF1.
  • Depth is set to 3 or higher - all of the data in this example is returned.

MSS support

The TADDM Connector supports specifying a Management Software System (MSS) Guid when working with TADDM. Using the SDI user interface, we can list all the available MSSs and select the required Guid. See the Configuration section for user interface details.

When reading data from TADDM, for example, in Iterator or Lookup mode, only the items registered by the selected MSS are retrieved or searched.

When adding new items to TADDM (AddOnly mode), the MSS indicates to TADDM that the item is being registered by that specific MSS. If you subsequently read this item back, the MSS data is listed in the Entry, provided the MSS Information check box is selected in the Configuration Editor.

Note: For more information about MSS, see the Retrieving additional attributes section.

Querying TADDM

When you specify the class type, the TADDM Connector retrieves the items of that type. The TADDM Connector forms an MQL query, which is run against the TADDM database. MQL is TADDM's own query language for retrieving data from its database. MQL does not support all SQL features, but is useful when you need to limit the number of returned items. The following query is constructed when only the class type of the item is provided.

	SELECT * FROM class-type

If depth is 0, the generated query is modified as shown.

	SELECT guid FROM class-type

MQL also supports complex WHERE clauses and INNER JOIN. In Iterator mode, the TADDM Connector uses these capabilities through the MQL Select field in the Configuration Editor. If you provide a query in this field, it overrides the Class Type parameter. The following example query returns all OperatingSystem items with the name "Windows".

	SELECT * FROM OperatingSystem WHERE OSName == 'Windows'

A complex query can, for example, return the names of all Db2Server items whose ports match those of the OracleInstance items, where each Db2Server additionally runs on a ComputerSystem manufactured by IBM.

	SELECT Db2Server.* FROM Db2Server, OracleInstance
	WHERE Db2Server.port == OracleInstance.port
		AND Db2Server.host.manufacturer contains 'IBM'

This query uses implicit attributes (note that the $implicit prefix is not used in MQL) and an inner join. For more information about the query language, see the TADDM MQL documentation.

The MQL Select parameter supports parameter substitution. The available objects that can be used are:

In addition, the MQL Select field supports auto-complete capability and the Link Criteria dialog box transfers the condition into the regular MQL statement.

The behavior of the MQL Select field depends on the IdML mode. When IdML mode is enabled, an additional filtering feature is turned on: we can specify both class types and attribute names (implicit and explicit) as defined by the unified schema described in the IdML mode section.

	SELECT cdm:Signature FROM cdm:sys\.ComputerSystem
	WHERE cdm-rel:virtualizes.cdm-trg:sys\.ComputerSystem.cdm:NumCPUs == '3'

This query returns the signatures of all virtual computer systems present on a computer system with 3 CPUs.

Note: All class type names are escaped, for example, cdm:sys\.ComputerSystem. This is important because in SDI the dot is a separator between attributes; if the dot is not escaped, the name is interpreted as if ComputerSystem were a subelement of the sys element.

The Fetch Size parameter indicates the size of the buffer used when reading data from TADDM. By changing the size, we can fine-tune the performance of your SDI solution. For example, if the connector is to read 100 items (with depth 1) from the TADDM server, and the Fetch Size is more than 100, then a single bulk request is performed and all the data is retrieved.

Retrieving additional attributes

This section describes the additional attributes that TADDM Connector provides.
Table 35. Extra Attributes
Attributes Description
Domain Attributes These attributes contain details of the TADDM domain to which the read item belongs. These attributes are available only when the query is against an enterprise TADDM server. The attributes can be used to distinguish the TADDM server that provides the data from all the other servers in the enterprise architecture.

When present, the domain attributes, for example, port or hostname, can be found under the $domain attribute.

Extended Attributes These attributes can be attached to each item and allow storing information outside the CDM model without breaking the consistency of the item. For example, there might be an extended attribute used as a comment, holding a short textual description of the item read. In IdML mode, such attributes are prefixed with cdm-ext:, while in native mode they are prefixed only with ext:.
Explicit Relationships This feature is supported only in IdML mode. The TADDM has the following two types of relationships:

  • Implicit type - defined by implicit attributes of the read items.
  • Explicit type - added between existing items.

Note: Because implicit and explicit relationships can overlap, before adding an explicit relationship to the returned Entry, the TADDM Connector checks whether the same relation is already present as an implicit type. This check protects the Entry against data duplication and keeps it consistent. The explicit relationships comply with the configured retrieval depth; if following an explicit relationship would exceed the depth limit, the relationship is ignored.

MSS Information If this option is used, the MSS information is returned under the $mss system attribute. The information includes details for each MSS that registered the item read. Thus, there can be more than one MSS element. The following example shows the attribute with two MSS in IdML mode.
{
    $mss {
       cdm:process.ManagementSoftwareSystem {
           cdm:MSSName=MSS1
           cdm:Guid=12BA843..2FF
       }
       cdm:process.ManagementSoftwareSystem {
           cdm:Guid=12BA843..2FF
           cdm:Hostname=www.sample.com
           cdm:ProductName=TDI
           cdm:ManufacturerName=IBM
       }
    }
}

Searching for specific model objects

In the Lookup mode of operation, the TADDM Connector searches for a particular item using the TADDM MQL capabilities.

We can search for items either providing link criteria in the Configuration Editor or building advanced criteria from the custom script. The formula for creating the resulting MQL query is:

	SELECT * FROM class-type WHERE criteria

If the value of the Depth parameter is 0, the guid attribute is used instead of '*'.

Since MQL offers extended criteria capabilities, all SDI link criteria match conditions are supported and mapped to their MQL synonyms.

Similar to the MQL Select parameter, when IdML Mode is enabled, the criteria in Lookup mode support unified schema names. Remember to escape the '.' in CDM class types, such as app.db.db2.Db2Server, sys.CPU, and so on.

Reading configuration items and relationships from TADDM

You need to specify the type of item to be read by the TADDM Connector. TADDM exposes items in a table-like structure, where the class type of the item denotes the table name. See the Querying TADDM section for details.

The TADDM Connector supports working without an explicitly configured class type in both Iterator and Lookup modes and traverses all TADDM items. Using this function, we can either retrieve all items (Iterator mode) or perform a general search over the whole TADDM data (Lookup mode).

Because this behavior is achieved through multiple queries against the configured TADDM server (one for each type), if any of the queries fails, the TADDM Connector does not exit, but logs a debug message and continues with the next query.

When reading all items from the TADDM database with IdML mode enabled, the TADDM Connector might exit prematurely with the following exception: CTGDJN095E Unable to get the IdML compliant name of attribute 'attr-name' from class 'class-name'. For more details, see the Troubleshooting TADDM Connector section.

Deleting model objects

Use the Delete mode of operation to remove unwanted items and relationships from TADDM. The connector performs a Lookup operation and then tries to delete the discovered item. For the delete operation, the TADDM API requires only the guid of the item, so the connector uses the $id system attribute.

When multiple matching items are found, the On Multiple Entries hook is started, and we can iterate over the discovered items and delete them. Alternatively, we can add all the $id values of the found items to the first discovered item and provide it to the connector. When the connector detects multiple IDs, they are all deleted in a single TADDM request.

The hook script is as follows:

// Collect the $id of every duplicate Entry into the first one, so that
// the connector deletes all matching items in a single TADDM request.
var first = thisConnector.getFirstDuplicateEntry();

var next = thisConnector.getNextDuplicateEntry();
while (next != null)
{
	var id = next.getString("$id");
	first.addAttributeValue("$id", id);
	next = thisConnector.getNextDuplicateEntry();
}

// Provide the merged Entry to the connector.
thisConnector.setCurrent(first);

Creating new model objects

In the AddOnly mode of operation, the TADDM Connector creates objects in TADDM. Due to the CDM naming rules, TADDM treats object creation and object update as the same process: if the provided identifying attributes match an existing TADDM object, that object's attributes are updated.

To register an item, you need to provide its identifying attributes; this also applies to the related items. Alternatively, instead of supplying identifying attributes, we can set the $classType and $id attributes of the related item. This data is not sufficient for creating such a model object, but the TADDM Connector attempts to look up the existing model object by its guid. If such an object is found, it is added to the created model object. For this guid lookup, only the $classType and $id attributes are considered. Using this feature, we can add an explicit relationship between two existing items in TADDM.

The $classType system attribute can be modified at run time and enables the TADDM Connector to switch the type of the created item. We can also use the $cycle system attribute in AddOnly mode. This attribute helps to avoid duplicating the identifying attributes of a related item if it is already specified higher in the hierarchy path of the current Entry. Set the unique identifier of the item as the value of the $cycle attribute so that all its information is retrieved from its original location.

When the new model object is registered in TADDM, its Guid is returned through the conn Entry. The value can be retrieved by using the following scripting code in the After Add hook:

var guid = conn.getString("$id");

Design-time naming rules validation

The TADDM Connector provides support for validation of attributes, mapped in its Output Map, against the naming rules of the chosen CDM class type.

Note: The TADDM Connector must be configured to connect to TADDM and have a specified class type. Otherwise, an error is logged in the Error Log view.

Click the Validate button in the Output Map of the Connector in the Configuration Editor to validate all attributes and display the results in the Problems view. There is a message for each of the naming rules of the specified class. If at least one naming rule is satisfied, all messages in the Problems view are marked as information. If none is satisfied, the messages are marked as errors. Each message shows which attributes need to be added or removed to satisfy the rule.

Updating an existing model object

In the Update operation mode, we can update existing model objects. In TADDM, the object creation and update processes are the same; refer to the Creating new model objects section for more details. However, there are some additional specifics to be considered when updating an existing model object.

If you try to update a single-valued implicit attribute that does not exist, TADDM creates the attribute. If it exists, the attribute is updated.
Original Entry Update Entry Result Entry
{
   $classType=sys.ComputerSystem
   Guid=A027...2ECB
   displayName=777-888
   signature=777-888
   $implicit {
      OSRunning{
      $classType=sys.OperatingSystem
      $id=3F62F...B5AD
      guid=3F62...B5AD
      displayName=976063427
      FQDN=fqdn/976063428
      }
   }
}
{
  $implicit {
   OSRunning{
   displayName=976063429
   }
 }
}
{
   $classType=sys.ComputerSystem
   Guid=A027...2ECB
   displayName=777-888
   signature=777-888
   $implicit {
      OSRunning{
      $classType=sys.OperatingSystem
      guid=3F62...B5AD
      displayName=976063429
      FQDN=fqdn/976063428
      }
   }
}

If you want to update one of the values of a multi-valued implicit attribute, you need to provide its guid in the Output Map. Else, a new value is added instead of updating an existing value.

The update Entry with Guid provided is as follows.
Original Entry Update Entry Result Entry
{
   $classType=sys.ComputerSystem
   Guid=A027...2ECB
   displayName=777-888
   signature=777-888
   $implicit {
      OSInstalled{
      $classType=sys.OperatingSystem
      guid=3F62...B5AD
      displayName=976063427
      FQDN=fqdn/976063428
      }
   }
}
{
  $implicit {
    OSInstalled{
    guid=3F62...B5AD
    displayName=976063429
  }
 }
}
{
   $classType=sys.ComputerSystem
   Guid=A027...2ECB
   displayName=777-888
   signature=777-888
   $implicit {
      OSInstalled{
      $classType=sys.OperatingSystem
      guid=3F62...B5AD
      displayName=976063429
      FQDN=fqdn/976063428
      }
   }
}

The update Entry with no Guid provided is as follows.
Original Entry Update Entry Result Entry
{
   $classType=sys.ComputerSystem
   Guid=A027...2ECB
   displayName=777-888
   signature=777-888
   $implicit {
      OSInstalled{
      $classType=sys.OperatingSystem
      guid=3F62...B5AD
      displayName=976063427
      FQDN=fqdn/976063428
      }
   }
}
{
  $implicit {
    OSInstalled{
    displayName=976063429
    FQDN=fqdn/976063429
  }
 }
}
{
   $classType=sys.ComputerSystem
   Guid=A027...2ECB
   displayName=777-888
   signature=777-888
   $implicit {
      OSInstalled{
      $classType=sys.OperatingSystem
      guid=3F62...B5AD
      displayName=976063427
      FQDN=fqdn/976063428
      }
      OSInstalled{
      $classType=sys.OperatingSystem
      guid= 6030...E574
      displayName=976063429
      FQDN=fqdn/976063429
      }
   }
}

In addition, the TADDM Connector allows removing an explicit attribute from an existing model object by updating its value to null. To enable this, change the default null behavior to Return null value in the Output Map (click the More button in the Configuration Editor).

Note: The TADDM Connector does not support the Compute Changes feature, because hierarchical Entries cannot be compared correctly.

Delta mode support

In Delta mode, the TADDM Connector receives specific delta information. Based on the received information, the connector performs add, update, or delete operation on the provided Entry.

Note: You need to set the link criteria to update or delete Entries. For example, the criterion $id equals $$id, along with a delete-tagged Entry, results in the deletion of the Entry in TADDM.

Data source schema of TADDM Connector

This section describes the input schema and output schema of the TADDM Connector.

Input schema

The following table lists attributes of the Input Map.

Note: Not all of the attributes are present in all situations.
Table 36. Input Schema
Attribute Name Description
$cycle Prevents cycles (loops) in the hierarchical Entry model.
$id Holds a unique identifier for the item. For example, TADDM Guid or IdML ID.
$classType Holds the CDM/TADDM class name of the read item.
cdm:ManagedSystemName and managedSystemName formats Explicit attributes

Both the formats can be used depending on the IdML mode.

cdm-rel:installedOn.cdm-trg:sys.ComputerSystem and parent formats Implicit attributes

Can be used depending on the IdML mode.

$mss and its children Holds MSS information, if available and if the MSS Information option is enabled.
$domain Holds domain attributes, which can be useful in an enterprise TADDM infrastructure.
ext:attrName and cdm-ext:attrName formats Extended attributes

Holds additional non-CDM flat data and both the formats are supported.

Output schema

The following table lists attributes of the Output Map.

Note: Not all of the attributes are present in all situations.
Table 37. Output Schema
Attribute Name Description
$cycle Prevents cycles (loops) in the hierarchical Entry model.
$id Holds a unique identifier for the item. For example, TADDM Guid or IdML ID.
$classType Holds the CDM/TADDM class name of the read item.
cdm:ManagedSystemName and managedSystemName formats Explicit attributes

Both the formats can be used depending on the IdML mode.

cdm-rel:installedOn.cdm-trg:sys.ComputerSystem and parent formats Implicit attributes

Can be used depending on the IdML mode.

Post-installation tasks

The TADDM Connector depends on the TADDM SDK. The TADDM API JAR files are not shipped with SDI. You need to perform the following tasks before you start using the TADDM Connector; otherwise, you are limited to the unsupported modes such as Server and CallReply, and the list of supported class types cannot be retrieved:

  1. Copy the TADDM SDK compressed archive file from the TADDM server and extract it to the system where SDI is running. The TADDM SDK compressed file can be found at taddm-home/dist/sdk.

    The directory structure created for the TADDM 7.2 SDK is as follows:

    /sdk
     |--/adaptor - contains TADDM Discovery Library Adaptor 1.0
     |--/bin - contains useful shell scripts and batch files
     |--/dla - contains IBM Discovery Library IdML Certification Tool
     |--/doc - contains English pdfs and other documentation files 
     |--/etc - configuration properties 
     |--/examples - samples directory 
     |--/lib - server and client runtime libraries 
     |--/log - contains runtime logs
     |--/schema - contains the XML schema
  2. Copy the following TADDM API JAR files to the class path of SDI.

    • taddm-sdk/lib/taddm-api-client.jar
    • taddm-sdk/lib/platform-model.jar

    For TADDM 7.1.2, the JAR files are at taddm-sdk/clientlib. Copy the files to the /jars directory of SDI, to TDI_install_dir/jars/3rdparty/IBM, or to the location specified by the com.ibm.di.loader.userjars property in solution.properties.

Note: Only one TADDM SDK can be used by all TADDM Connectors running on an SDI server. This limitation is imposed by the TADDM SDK, which requires its path to be set as a system property. We cannot have two TADDM Connectors working with different versions of TADDM servers simultaneously, for example, TADDM 7.1 and TADDM 7.2.

Troubleshooting TADDM Connector

This section provides information about the issues you might encounter as you use TADDM Connector, and describes how to troubleshoot them.

Throwing AccessException when using TADDM Connector

Problem

This exception is thrown when another component sets a restrictive security manager before the TADDM Connector is started for the first time. The TADDM Connector relies on an all-permissive security manager and is unable to function properly.

Solution

Perform one of the following tasks:

To grant full permissions:

  1. Edit the TDI_install_dir/jvm/jre/lib/security/java.policy file.
  2. Add the following entry:
       grant {
             permission java.security.AllPermission;
       };

Note: This change can be made to another Java™ policy file, if it is set up correctly. For details, see Policy Files.

Throwing exception when reading data from TADDM in IdML mode

Problem

This problem is due to an inconsistency in the storage of implicit attributes in TADDM. Normally, each TADDM implicit attribute contains the name of the relationship it corresponds to. When this information is missing, the TADDM Connector cannot construct the correct IdML model of the read data and an exception is thrown. This problem is observed only for the Parent implicit attribute of the meta.UserDataMeta and process.CompositeAttributeDef classes.

Solution

We can switch to native mode to avoid the conversion. To switch to native mode, clear the IdML Mode check box in the Configuration Editor.

This exception is thrown when performing an iteration or lookup over all items in TADDM. The IdML mode is required for working with other CDM-aware components, for example, the IdML Connector. If you must remain in IdML mode, override the Iterator Error/Lookup Error hook in the configuration of the TADDM Connector.

Configuration

This section describes the parameters of TADDM Connector. For basic configuration information, see the Basic configuration section.


TCP Connector

The TCP Connector is a transport Connector that uses TCP sockets. We can use the TCP Connector in Iterator and AddOnly modes only.

Iterator Mode

When in Iterator mode, the TCP Connector waits for incoming TCP calls on a specific port. When a connection is established, the getnext method returns an entry with the following properties:

The in and out objects can be used to read and write data to or from the TCP connection. For example, we can do the following to implement a simple echo server (put the code in the After GetNext Hook):

var ins = conn.getProperty("in");   // BufferedReader for the connection
var outs = conn.getProperty("out"); // BufferedWriter for the connection
var str = ins.readLine();
outs.write("You said==>" + str + "<==");
outs.flush(); // flush the BufferedWriter so the data is actually sent

Because you are using a BufferedWriter, it is important to call outs.flush() to make sure data is actually sent out over the connection.

If you specify a Parser, then the BufferedReader is passed to the Parser, which in turn reads and interprets data sent on the stream. The returned entry then includes any attributes assigned by the Parser as well as the properties listed previously (socket, in, and out).

If the TCP Connector is configured in Listen Mode=true then the connection is closed between each call to the getnext method. If Listen Mode=false the connection to the remote host is kept open for as long as the TCP Connector is active (for example, until the AssemblyLine stops).

Note: The Listen Mode parameter in this connector should not be confused with the behavior of the TCP Server Connector, which is better suited for accepting incoming TCP requests (multiple concurrent ones, if necessary). The functionality associated with Listen Mode=true is deprecated and will be removed in a future version of the connector, after which the connector can be configured and used for outgoing connections only.

AddOnly Mode

When the TCP Connector works in this mode, the default implementation is to write entries in their string form, which is seldom useful. Typically, you specify a Parser or use the Override Add hook to perform specific output. In the Override Add hook you access the in or out objects by calling the Connector Interface's getReader() and getWriter() methods, for example:

// "in" is a reserved word in JavaScript, so use different variable names
var ins = mytcpconnector.connector.getReader();
var outs = mytcpconnector.connector.getWriter();

We can also use the Before Add and After Add hooks to insert headers or footers around the output from your Parser.

Configuration

We can select a Parser for this Connector from the Parser pane by clicking the top-left Select Parser button.


TCP Server Connector

This Connector supports Server and Iterator modes only.

In Server mode, this Connector waits for incoming TCP connections on a specified port and spawns a new thread to handle the incoming request. When the new thread has started, the original Server mode Connector goes back to listening mode. When the newly created thread has completed, the thread stops and the TCP connection is closed.

In Iterator mode, the Connector is single-threaded, in that it waits for a connection on the IP address of the local machine and the port specified. Once the connection is received, the Connector will generate Entries based on received data until the Client closes the connection.

Configuration

Connector Schema

The Connector makes the following properties available in the Input Attribute Map:

The TCP Server Connector does not use its Output Attribute Map - it just closes the Connection to the client application when done.

The tcp.inputstream and tcp.outputstream Attribute values are meant to be used via scripting in the AssemblyLine to read the client request and write the response respectively.


Timer Connector

The Timer Connector waits for a specified time; then it returns from sleep and resumes the AssemblyLine, that is, it starts a new cycle. This Connector runs in Iterator mode only.

On attribute mapping, there is one attribute we can map into the work entry: a timestamp, of type java.util.Date. It contains the time when the cycle started.

Using Delta functionality with this Connector probably does not make much sense.

Configuration

This Connector needs the following parameters:


Tpae IF Change Detection Connector

Use the Tpae IF Change Detection Connector to receive change notifications from Tivoli® Process Automation Engine (Tpae) Integration Framework.

Overview of Tpae IF Change Detection Connector

The Tpae IF Change Detection Connector receives change notifications on a configurable TCP port for HTTP requests, from Maximo® based systems. The Maximo server is configured to send change notifications as HTTP POST requests.

The Tpae IF Change Detection Connector returns hierarchical entries and supports only the Server mode. When a client connects on the specified port, the AssemblyLine operates in Iterator mode, enabling the connector to simultaneously handle many clients that are sending change notifications.

Architecture of Tpae IF Change Detection Connector

The root MBO, along with its child MBOs, is sent in the HTTP body of the request. Refer to the following change notification example for the predefined Object Structure MXPERSON:

<PublishMXPERSON xmlns="http://www.ibm.com/maximo"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 creationDateTime="2010-08-26T10:50:36+03:00"
 transLanguage="EN" baseLanguage="EN"
 messageID="1282809036703888707"
 maximoVersion="7 1 20090627-0754 V7115-149" event="1">
	<MXPERSONSet>
		<PERSON action="Replace">
			<DISPLAYNAME changed="1">Fred Rogers</DISPLAYNAME>
			<FIRSTNAME changed="1">Fred2</FIRSTNAME>
			<ADDRESSLINE1>179 Woodtree Lane</ADDRESSLINE1>
			<CITY>Arlington</CITY>
			...
			<PHONE>
				<ISPRIMARY>1</ISPRIMARY>
				<PHONEID>145833</PHONEID>
				<PHONENUM>(617) 643-1933</PHONENUM>
				<TYPE>WORK</TYPE>
			</PHONE>
			<EMAIL>
				<EMAILADDRESS>fred.rogers@warpspeed.net</EMAILADDRESS>
				<EMAILID>146115</EMAILID>
				<ISPRIMARY>1</ISPRIMARY>
				<TYPE>WORK</TYPE>
			</EMAIL>
		</PERSON>
	</MXPERSONSet>
</PublishMXPERSON>

The XML representation of the root MBO is then parsed using the XML Parser to obtain only the root MBOs. For example, the PublishMXPERSON element can have elements MXPERSONSet and PERSON as shown in the XSD schema definition.

Maximo MBO

The resulting Entry is as shown in the following figure.

Entry parsed from Maximo MBO

The returned Entry contains all child MBOs, such as e-mail address and phone number as indicated in the preceding figure.

Note: Change notifications are sent only for changes to the root MBO attributes and structure. The changes can be:

Changes to attributes of a child MBO do not trigger change notifications and are not tagged with an attribute operation in the work Entry.

Delta tagging

The Tpae IF Change Detection Connector provides delta tagging at the Entry and Attribute levels. The connector tags Entries as change notifications are received and parsed. Delta tagging at the Attribute level applies only to the modified and added attributes of the root MBO.

The following output indicates the delta tags for the example Entry illustrated in the previous section.

{
	"#type": "modify",
	"#count": 11,
	"PERSON": {
		"#type": "replace",
		"#count": 0,
		"@xmlns": "http://www.ibm.com/maximo",
		"@action": "Replace",
		"FIRSTNAME": {
			"#type": "modify",
			"#count": 1,
			"@changed": "1",
			"#": "Fred"
			},
		"DISPLAYNAME": {
			"#type": "modify",
			"#count": 1,
			"@changed": "1",
			"#": "Fred Rogers"
		},
		"ADDRESSLINE1": [
			"#type": "replace",
			"#count": 1,
			"#": "179 Woodtree Lane"
		],
		"PHONE": {
			"#type": "replace",
			"#count": 0,
			"ISPRIMARY": [
				"#type": "replace",
				"#count": 1,
				"#": "1"
			],
			"PHONEID": [
				"#type": "replace",
				"#count": 1,
				"#": "145833"
			],
			"PHONENUM": [
				"#type": "replace",
				"#count": 1,
				"#": "(617) 643-1933"
			],
			"TYPE": [
				"#type": "replace",
				"#count": 1,
				"#": "WORK"
			]
		},
		"EMAIL": {
			"#type": "replace",
			"#count": 0,
			"EMAILADDRESS": [
				"#type": "replace",
				"#count": 1,
				"#": "fred.rogers@warpspeed.net"
			],
			"EMAILID": [
				"#type": "replace",
				"#count": 1,
				"#": "146115"
			],
			"ISPRIMARY": [
				"#type": "replace",
				"#count": 1,
				"#": "1"
			],
			"TYPE": [
				"#type": "replace",
				"#count": 1,
				"#": "WORK"
			]
		}
	}
}

The action property, which holds the Maximo® operation name, is used to determine the Entry operation. The action can be Add, Delete, Replace, or Change. The Replace and Change actions are interpreted as modify Entry operations.
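As a plain-JavaScript illustration of this mapping (outside SDI; the function name is our own, not part of the Connector API):

```javascript
// Map a Maximo action value to the SDI Entry operation, as described
// above: Replace and Change are both interpreted as "modify".
function entryOperationForAction(action) {
  switch (action) {
    case "Add":
      return "add";
    case "Delete":
      return "delete";
    case "Replace":
    case "Change":
      return "modify";
    default:
      return "generic"; // unknown or missing action
  }
}
```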

Maximo server configuration

Create HTTP end points

  1. Log on to Maximo® as an administrator.
  2. From the Go To menu on the navigation toolbar, select Integration -> End Points to open the End Points application.
  3. From the Select Action menu, select Add/Modify Handlers or click New End Point.
  4. Provide a valid name for the new end point.
  5. Select the HTTP handler and specify the values for the following parameters:

    1. USERNAME - user name to connect to the remote computer
    2. PASSWORD - password associated with the user name
    3. CONNECTIONTIMEOUT - timeout when connecting
    4. READTIMEOUT - timeout when reading
    5. URL - URL of the file on the remote system. For example, http://9.156.6.14/test.xml
    6. HTTPMETHOD - choose POST request
    7. HTTPEXIT - name of the custom Java™ class

For more information about end points and handlers, see the End Points and Handlers chapter of Maximo Asset Management 7.1 Integration Guide or the documentation of your Maximo based product.

Assigning HTTP end points

You need to assign the created HTTP handler to an external system or to a publish channel as described in the following procedure:

  1. Log on to Maximo as an administrator.
  2. From the Go To menu on the navigation toolbar, select Integration -> External Systems to open the External Systems application.
  3. Select an external system that is enabled.
  4. Specify the HTTP End point name in the End Point field.

    Note: All publish channels use this end point.

  5. Optional: Go to the Publish Channels tab and provide end points for the specified publish channels.

Note: Enable each publish channel for which you want to receive change notifications for its Object Structure.

Enable event listeners

An event listener listens for changes in the root MBO of an Object Structure. When a change is detected, an outbound message is created and added to the outbound queue. Enable the event listener for the selected publish channel using the following procedure:

  1. Log on to Maximo as an administrator.
  2. From the Go To menu on the navigation toolbar, select Integration → Publish Channels to open the Publish Channels application.
  3. Select the publish channel corresponding to your Object Structure. For example, MXPERSONInterface for the MXPERSON Object Structure.
  4. From the Select Action menu, select Enable Event Listener if the Enable Listener? checkbox is not selected.

Use the following steps to find the name of the publish channel for a particular Object Structure:

  1. From the Go To menu on the navigation toolbar, select Integration → Publish Channels to open the Publish Channels application.
  2. Click the Advanced Search button and specify the name of the Object Structure.

    A message is displayed if there are no publish channels for the specified Object Structure.

Configure cron task

The handlers for each publish channel are started using a cron task, configured to run at specified intervals. The change notification messages use the sqout queue by default. The default settings direct the JMSQSEQCONSUMER cron task to poll the outbound queue (sqout), and the sequential inbound queue (sqin). Activate the applicable instances of the cron task (SEQQIN and SEQQOUT) to avoid unprocessed inbound and outbound messages in the queue. You can change the cron task run interval, if required.

To configure the cron task:

  1. Log on to Maximo as an administrator.
  2. From the Go To menu on the navigation toolbar, select System Configuration -> Platform Configuration -> Cron Task Setup to open the Cron Task Setup application.
  3. Select the JMSQSEQCONSUMER cron task.
  4. Ensure that the SEQQOUT instance, which is used to process outbound messages, is enabled.
  5. Change the cron task run interval, if required.

    The default is 30 seconds, which means this cron task checks the output queue every 30 seconds for unprocessed messages. The unprocessed messages are sent to the specified handler for the publish channel.

Note: All queue names and cron task instances mentioned in this procedure are the default names in Maximo Asset Management 7.1. Other systems might have been configured to use different queues and cron tasks. If you are unable to find some of the referred objects, consult your system administrator.

Configuration

This section describes the configuration parameters of Tpae IF Change Detection Connector.

Examples

Go to the TDI_install_dir/examples/TpaeIFCDConnector directory of your SDI installation.


Tpae IF Connector

The Tivoli® Process Automation Engine (Tpae), also known as Base Services, is a collection of core Java™ classes and is used as a base to build Java applications. The Integration Framework, a Tpae feature, contains standard integration objects (Object Structures and interfaces) and outbound/inbound objects. The Tpae IF Connector connects SDI to the Tpae Integration Framework to exchange information.

The Tpae IF Connector reads from and writes to the Integration Framework. It supports Maximo® Business Objects (MBOs), which are processed through an integration object. This Connector uses the MBO layer for validating imported or exported objects.

The Tpae IF Connector can work with hierarchical Entries and is based on Simple Tpae IF Connector. The Tpae IF Connector can be used in various AssemblyLine modes such as Iterator, AddOnly, Update, Lookup, and Delete.

Using the Connector

The Tpae IF Connector is associated with Simple Tpae IF Connector and works with hierarchical Entries. It supports integration through Object Structure Services and Enterprise Services. The Connector reads or writes a complete Object Structure and does not require any MBO to be specified.

The following table provides a comparison of Simple Tpae IF Connector and Tpae IF Connector.
Table 38. Tpae Connectors differences
Criteria Simple Tpae IF Connector Tpae IF Connector
Data format Flat Entries Hierarchical Entries
Link Criteria Supports only equals and not equals match operators Supports all match operators

Supports hierarchical Link Criteria names. Only attributes from the top two levels of MBO can be specified

Schema Shows schema for the selected MBO Shows hierarchical schema for the selected Object Structure*
Services Supports Object Structure Services and Enterprise Services for the Create, Update, Delete, and Query operations Supports Object Structure Services and Enterprise Services for the Sync and Query operations.

Note: Services with the Sync operation encapsulate Create, Add, and Delete operations.

Iterator mode

In the Iterator mode, the Tpae IF Connector sends a Query XML request to the IF server and receives a Query XML response. For example, Maximo® returns the following XML response as a result of a query operation on the predefined MXASSET Object Structure.

<ASSET>
	<ASSETNUM>7111</ASSETNUM>
	<BUDGETCOST>1000.0</BUDGETCOST>
	<ASSETSPEC>
		<ASSETATTRID>RAMSIZE</ASSETATTRID>
		<MEASUREUNITID>MBYTE</MEASUREUNITID>
		<NUMVALUE>512.0</NUMVALUE>
		...
	</ASSETSPEC>
	<ASSETSPEC>
		<ASSETATTRID>DISKSIZE</ASSETATTRID>
		<MEASUREUNITID>GBYTE</MEASUREUNITID>
		<NUMVALUE>100.0</NUMVALUE>
		...
	</ASSETSPEC>
	<ASSETSPEC>
		<ASSETATTRID>PROSPEED</ASSETATTRID>
		<MEASUREUNITID>GHZ</MEASUREUNITID>
		<NUMVALUE>1.5</NUMVALUE>
		...
	</ASSETSPEC>
	...
</ASSET>

The above XML representation is translated into an SDI Entry object as follows:

Hierarchical Entry object from Maximo

The returned Entry contains all child MBOs (the three asset specifications). We can use the toString() method of Entry to view a complete string representation of this hierarchy.

Query Criteria in Iterator mode

The Tpae IF Connector uses the Query criteria parameter only in Iterator mode to filter the results set of the iteration.

Note: Select query values from the top two levels of MBOs from an Object Structure. For example, attributes from ASSET, ASSETSPEC or ASSETMETER MBOs.

AddOnly, Update, and Delete modes

The Tpae IF Connector uses both Object Structure Services and Enterprise Services to modify MBOs. A single configuration parameter determines which service is used. The Object Structure Service or Enterprise Service offers functions that encapsulate the Create, Update, and Delete operations. In Update mode, the Connector checks the attribute operations and sets appropriate actions in the resulting XML payload sent to Tpae. If an attribute is not tagged with an Add, Modify, or Delete operation, no action is set.
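A plain-JavaScript sketch of the mapping from attribute operation tags to the action values placed in the outgoing XML (the function name is ours; untagged nodes receive no action):

```javascript
// Translate the delta operation of an MBO node into the "action"
// attribute set on the corresponding XML element. A node without an
// Add, Modify, or Delete tag gets no action.
function xmlActionForDeltaType(deltaType) {
  switch (deltaType) {
    case "add":
      return "Add";
    case "modify":
      return "Change";
    case "delete":
      return "Delete";
    default:
      return null; // not tagged: no action is set
  }
}
```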

The following example shows a delta-tagged Entry that is to be modified, and the corresponding request sent to the Tpae IF server.
{
  "#type": "generic",
  "#count": 7,
  "PERSON": {
    "#type": "replace",
    "#count": 0,
    "PERSONID": [
	"#type": "replace",
	"#count": 1,
	"#replace": "JOHN"
    ],
    "FIRSTNAME": [
	"#type": "replace",
	"#count": 1,
	"#replace": "John"
    ],
    "LASTNAME": [
	"#type": "replace",
	"#count": 1,
	"#replace": "Jones"
    ],
    "PHONE": {
	"#type": "add",
	"#count": 0,
	"PHONENUM": [
	    "#type": "replace",
	    "#count": 1,
	    "#replace": "0888776455"
		],
	"TYPE": [
	    "#type": "replace",
	    "#count": 1,
	    "#replace": "WORK"
	],
	"ISPRIMARY": [
	    "#type": "replace",
	    "#count": 1,
	    "#replace": "1"
	]
	},
    "PHONE": {
	"#type": "delete",
	"#count": 0,
	"PHONENUM": [
	    "#type": "replace",
	    "#count": 1,
	    "#replace": "555244458"
	]
    },
    "EMAIL": {
	"#type": "replace",
	"#count": 0,
	"EMAILADDRESS": [
	    "#type": "replace",
	    "#count": 1,
	    "#replace": "jjones@mail.com"
		]
	}
    }
}

<?xml version='1.0' encoding='UTF-8'?>
<SyncMXPERSON
    xmlns="http://www.ibm.com/maximo"
    creationDateTime="2010-11-05T16:44:20+02:00"
    transLanguage="EN"
    messageID="1288968260482"
    maximoVersion="7 1 Harrier 072 7100-001">
    <MXPERSONSet>
	<PERSON action="Change">
	   <PERSONID>JOHN</PERSONID>
	   <FIRSTNAME>John</FIRSTNAME>
	   <LASTNAME>Jones</LASTNAME>
	   <PHONE action="Add">
	     <PHONENUM>0888776455</PHONENUM>
	     <TYPE>WORK</TYPE>
	     <ISPRIMARY>1</ISPRIMARY>
	   </PHONE>
	   <PHONE action="Delete">
	     <PHONENUM>555244458</PHONENUM>
	   </PHONE>
	   <EMAIL>
	     <EMAILADDRESS>
			jjones@mail.com
	     </EMAILADDRESS>
	     <ISPRIMARY>1</ISPRIMARY>
	   </EMAIL>
	</PERSON>
    </MXPERSONSet>
</SyncMXPERSON>




In AddOnly and Update modes, specify the values for all unique attributes of MBOs that are being added or updated. Attributes also support empty strings as values. In Delete mode, provide the values for all unique attributes of the root MBO. If any unique key is missing, the Connector throws an exception and the operation fails.
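That rule can be captured in a small validation sketch in plain JavaScript (the helper name is ours; empty strings count as provided values, absent keys do not):

```javascript
// Check that every unique key attribute is present on the entry.
// An empty string is a valid value; an absent key is an error.
function checkUniqueKeys(entry, uniqueKeys) {
  var missing = [];
  for (var i = 0; i < uniqueKeys.length; i++) {
    if (!(uniqueKeys[i] in entry)) {
      missing.push(uniqueKeys[i]);
    }
  }
  if (missing.length > 0) {
    throw new Error("Missing unique key(s): " + missing.join(", "));
  }
}
```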

In Update mode, the Tpae IF Connector supports the Skip Lookup function, which allows you to skip querying the Object Structure before updating or deleting it. Every query operation sends an HTTP request over the network, so enabling Skip Lookup improves performance when updating or deleting multiple Entries.

Note: In Update mode, when Skip Lookup is disabled and the unique key of an MBO is changed in the output map, the value is overwritten with the original value read from Maximo, and the MBO is modified; a debug message informs you that unique attributes cannot be changed. When Skip Lookup is enabled, the MBO is considered new and is added instead.

Lookup mode

To find a specific record in Maximo, provide the Link Criteria with attributes that uniquely identify the record.

Example

The following attributes uniquely identify an asset in Maximo.
Attribute Name Value
ASSET.ASSETNUM 1001
ASSET.SITEID BEDFORD

The following attributes uniquely identify an asset meter in Maximo.
Attribute Name Value
ASSET.ASSETNUM 1001
ASSET.SITEID BEDFORD
ASSET.ASSETMETER@METERNAME RUNHOURS

To find an Entry, we can use the different match operators. If a Lookup matches multiple Entries and the On Multiple Entries hook is not provided, an error is thrown.

The Tpae IF Connector supports only Link Criteria of type AND. The supported match operators are listed in the following table:
Table 40. Match operators
Match operator Details
equals When used with attributes of type string, the strings are compared lexicographically.

For example: apple is less than carrot, since a is before c.

less than
less or equals
greater than
greater or equals
contains Use only with attributes of type string.
starts with
ends with
not equals

Note: An exception is thrown when more than two criteria are specified for an attribute.

The Link Criteria selects attributes from the top two levels of MBOs from a specific Object Structure hierarchy. Therefore, we can specify a hierarchical Link Criteria such as:
ASSET.SITEID equals BEDFORD
ASSET.ASSETUSERCUST.PERSONID equals MAXADMIN
ASSET.ASSETMETER.METERNAME contains HOURS

In the above example, both ASSETUSERCUST and ASSETMETER are first-level child MBOs of the ASSET MBO.
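The comparison behavior behind these operators can be sketched in plain JavaScript (the helper is ours, not the Connector's API; string comparison is lexicographic as in the table above):

```javascript
// Evaluate a single Link Criteria match operator against string values.
// String operands are compared lexicographically.
function matches(value, operator, target) {
  switch (operator) {
    case "equals":            return value === target;
    case "not equals":        return value !== target;
    case "less than":         return value < target;
    case "less or equals":    return value <= target;
    case "greater than":      return value > target;
    case "greater or equals": return value >= target;
    case "contains":          return value.indexOf(target) !== -1;
    case "starts with":       return value.indexOf(target) === 0;
    case "ends with":
      return value.length >= target.length &&
             value.lastIndexOf(target) === value.length - target.length;
    default:
      throw new Error("Unsupported operator: " + operator);
  }
}
```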

Error handling

The Tpae IF Connector handles all exceptions that occur through the normal server hooks. If a failure cannot be handled, the corresponding AssemblyLine Error hook is started. The following exceptions are unique to this Connector:

If an AssemblyLine with a Tpae IF Connector fails, we can retrieve additional information about the error as follows:

  1. Add the following code in the Default On Error hook. Name the Connector as mxConn:
    task.logmsg("ERROR", "An exception occurred.");
    mxConn.connector.extractMaximoException(error);
    task.dumpEntry(error);
  2. When an exception occurs, the following message is displayed:
    19:31:44  CTGDIS003I *** Start dumping Entry
    19:31:44  	Operation: generic 
    19:31:44  	Entry attributes:
    19:31:44  	exception (replace):	'com.ibm.di.connector.
               maximo.exception.MxConnHttpException: 
               response: 404 - Not Found'
    19:31:44  	targetUrl (replace):	'http://9.156.6.14/meaweb/schema
               /service/MXPersonService.xsd'
    19:31:44  	class (replace):	'com.ibm.di.connector.maximo.exception
              .MxConnHttpException'
    19:31:44  operation (replace):	'update'
    19:31:44  status (replace):	'fail'
    19:31:44  connectorname (replace):	'AddPerson'
    19:31:44  body (replace):	'Error 404: BMXAA1513E - Cannot obtain 
              resource /meaweb/schema/service/MXPersonService.xsd.'
    19:31:44  responseCode (replace):	'404.0'
    19:31:44  responseMessage (replace):	'Not Found'
    19:31:44  message (replace):	'The HTTP server did not returned "HTTP OK".'
    19:31:44  CTGDIS004I *** Finished dumping Entry

    Note: The task.dumpEntry(error) prints information about the error.

External system configuration

Generating XML schema definition

When using the Connector for the first time, perform the following steps:

  1. Log on to Maximo® as an administrator to perform system configuration tasks.
  2. From the Go To menu on the navigation toolbar, select Integration -> Object Structures to open the Object Structures application.
  3. Repeat the following steps for each Object Structure you are going to use:

    1. On the List tab, search for the name of the Object Structure, for example, MXASSET.

      To search, open the Filter and type the name of the Object Structure, or a partial name, in the Filter field of the Object Structure column. Press ENTER.

    2. Click the Object Structure name to open the record for the Object Structure.
    3. From the Select Action menu, select Generate Schema/View XML.
    4. Click OK. The View XML dialog box is displayed.
    5. Click OK to return to the List tab.

Creating Enterprise Service

The Tpae IF Connector supports both Object Structure Services and Enterprise Services. If you specify the External System parameter, you need to provide names of the Enterprise Services for the following parameters:

Tpae IF Connector uses these parameters to query or modify an Object Structure using Enterprise Services instead of Object Structure Services.

To create an Enterprise Service for an Object Structure through a specified external system:

  1. Log on to Maximo as an administrator to perform system configuration tasks.
  2. From the Go To menu on the Navigation toolbar, select Integration -> Object Structures to open the Object Structures application.
  3. Click New Enterprise Service to create an Enterprise Service.
  4. Provide details for the following parameters:

    1. Enterprise Service - unique name for the Enterprise Service.
    2. Adapter - name of the adapter, which is used by Enterprise Service. The default name is Maximo.
    3. Object Structure - name of the Object Structure associated with the Enterprise Service.
    4. Operation - indicates the type of operation. The default operation is Sync. The Sync option includes Create, Delete, and Update functions. For Tpae IF Connector, we can create an Enterprise Service only for Query or Sync operations.

  5. Save your Enterprise Service.
  6. From the Go To menu on the Navigation toolbar, select Integration -> External Systems to open the External Systems application.
  7. Select the external system and its Enterprise Services tab.
  8. Click New row and type the name of the newly created Enterprise Service.

Examples

Go to the TDI_install_dir/examples/TpaeIFConnector directory of your SDI installation.


URL Connector

The URL Connector is a transport Connector that requires a Parser to operate. The Connector opens a stream specified by a URL.

Note: The URL Connector does not work through a firewall that enforces a proxy server unless the correct proxy server is configured for the Connector.

This Connector supports AddOnly and Iterator modes.

The Connector, in principle, can handle secure communications using the SSL protocol, but it may require driver–specific configuration steps in order to set up SSL support.


Configuration

The Connector needs the following parameters:

From the Parser configuration pane, we can select a Parser to operate upon the stream. You select a parser by clicking on the bottom-right Inherit from: button.

Supported URL protocol

The supported URL protocols are:


Web Service Receiver Server Connector

The Web Service Receiver Server Connector is part of the SDI Web Services suite.

Note: Due to limitations of the Axis library used by this component, only WSDL (http://www.w3.org/TR/wsdl) version 1.1 documents are supported. Furthermore, the supported message exchange protocol is SOAP 1.1.

This Connector is basically an HTTP Server specialized for servicing SOAP requests over HTTP. It operates in Server mode only.

AssemblyLines support an Operation Entry (op-entry). The op-entry has an attribute $operation that contains the name of the current operation executed by the AssemblyLine. To make processing different web service operations easier, the Web Service Receiver Server Connector sets the $operation attribute of the op-entry.

The Web Service Receiver Server Connector supports generation of a WSDL file according to the input and output schema of the AssemblyLine. Because AssemblyLines in SDI v7.2 support multiple operations, the WSDL generation can result in a web service definition with multiple operations. There are some rules about naming the operations:

Hosting a WSDL file

The Web Service Receiver Server Connector provides the "wsdlRequested" Connector Attribute to the AssemblyLine.

If an HTTP request arrives and the requested HTTP resource ends with "?WSDL" then the Connector sets the value of the "wsdlRequested" Attribute to true; otherwise the value of this Attribute is set to false.

This Attribute's value tells the AssemblyLine whether the request received is a SOAP request or a request for a WSDL file, and allows the AssemblyLine to distinguish between the two. The AssemblyLine can use a branch component to execute only the required piece of logic: (1) when a request for the WSDL file has been received, the AssemblyLine can read a WSDL file and send it back to the web service client; (2) when a SOAP request has been received, the AssemblyLine handles the SOAP request. Alternatively, you could place a system.skipEntry(); call at an appropriate point (in a script component, in a hook in the first Connector in the AssemblyLine, and so on) to skip further processing.
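The test the Connector applies can be sketched in plain JavaScript (the helper name is ours; treating the "?WSDL" suffix case-insensitively is an assumption):

```javascript
// Decide whether an incoming request asks for the WSDL document rather
// than carrying a SOAP payload, by checking whether the requested HTTP
// resource ends with "?WSDL". Case-insensitive matching is an assumption.
function isWsdlRequest(requestedResource) {
  if (!requestedResource) {
    return false;
  }
  var upper = requestedResource.toUpperCase();
  var suffix = "?WSDL";
  return upper.length >= suffix.length &&
         upper.indexOf(suffix, upper.length - suffix.length) !== -1;
}
```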

It is the responsibility of the AssemblyLine to provide the necessary response to either a SOAP request or a request for a WSDL file.

The Connector implements a public method:

public String readFile (String aFileName) throws IOException;

This method can be used from SDI JavaScript in a script component to read the contents of a WSDL file on the local file system. The AssemblyLine can then return the contents of the WSDL in the "soapResponse" Attribute, and thus to the web service client in case a request for the WSDL was received.
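A hypothetical sketch of such a script component in JavaScript (the connector object is mocked here and the file name is illustrative; inside SDI, connector would be the Connector Interface exposing readFile()):

```javascript
// Mocked stand-in for the Connector Interface; in SDI this object
// is provided by the Web Service Receiver Server Connector.
var connector = {
  readFile: function (fileName) {
    return "<definitions/>"; // stand-in for the real WSDL contents
  }
};

// Return the WSDL contents when the WSDL was requested; otherwise
// delegate to the normal SOAP handling logic.
function soapResponseFor(wsdlRequested, handleSoap) {
  if (wsdlRequested) {
    return connector.readFile("service.wsdl"); // file name is illustrative
  }
  return handleSoap();
}
```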

Schema

Input Schema

Table 41. Web Service Receiver Server Connector Input Schema
Attribute Value
host Type is String. Contains the name of the host to which the request is sent. This parameter is set only if "wsdlRequested" is false.
requestedResource The requested HTTP resource.
soapAction The SOAP action HTTP header. This parameter is set only if "wsdlRequested" is false.
soapRequest The SOAP request in text/XML or DOMElement format. This parameter is set only if "wsdlRequested" is false.
wsdlRequested This parameter is true if a WSDL file is requested and false otherwise.
http.username This attribute is used only when HTTP basic authentication is enabled. The value is the username of the client connected.
http.password This attribute is used only when HTTP basic authentication is enabled. The value is the password of the client connected.

Output Schema

Table 42. Web Service Receiver Server Connector Output Schema
Attribute Value
responseContentType The response type. The default response type is "text/xml". It is used with SOAP messages.
soapResponse The SOAP response message. If wsdlRequested is true, then soapResponse is set to the contents of the WSDL file.
http.credentialsValid This attribute is used only when HTTP basic authentication is enabled. When the client provides a username and password for HTTP Basic Authentication, this attribute must be set to true or false (this is not done by the Connector; it is done by the AssemblyLine using the Connector). If the value is true, the client is authenticated correctly and access is granted. If the value is false, the user is not authenticated and an HTTP "Not Authorized" response is returned.

Configuration

Parameters

The Generate WSDL button runs the WSDL generation utility.

The WSDL Generation utility takes as input the name of the WSDL file to generate and the URL of the provider of the web service (the web service location). This utility extracts the input and output parameters of the AssemblyLine in which the Connector is embedded and uses that information to generate the WSDL parts of the input and output WSDL messages. It is mandatory that for each Entry Attribute in the "Initial Work Entry" and "Result Entry" Schema the "Native Syntax" column be filled in with the Java type of the Attribute (for example, "java.lang.String"). The WSDL file generated by this utility can then be manually edited.

The operation style of the SOAP Operation defined in the generated WSDL is rpc.

The WSDL generation utility cannot generate a <types...>...</types> section for complex types in the WSDL.

Connector Operation

The Web Service Receiver Server Connector stores the following information from the HTTP/SOAP request into Attributes of the Connector's conn entry, ready to be mapped into the work entry (also see Schema):

When reaching the Response stage of the AssemblyLine, this Connector requires the SOAP response message in text XML form or as DOMElement from the "soapResponse" Attribute of the work Entry to be mapped out:

The Connector then wraps the SOAP response message into an HTTP response and returns it to the web service client.


Windows Users and Groups Connector

The Windows Users and Groups Connector (in older versions of SDI this was called the NT4 Connector) operates on the Windows NT security database. It deals with Windows users and groups, the two basic entities of the Windows NT security database. This Connector can both read and modify the Windows NT security database on the local Windows machine, on the Primary Domain Controller machine, and on the Primary Domain Controller machine of another domain.

Note: This Connector is dependent on a Windows NT API, and only works on the Windows platform.

The Connector is designed to connect to the Windows NT4 and Windows 2000 SAM databases through the Win32 API for Windows NT and Windows 2000/2003 user and group accounts. You can connect to a Windows 2000 SAM database, but the Connector only reads or writes attributes that are compatible with NT4 (in other words, the Windows Users and Groups Connector has a predefined and static attribute map table consisting of NT4 attributes). Windows 2000/2003 native attributes or user-defined attributes are therefore not supported by this Connector.

See Windows Users and Groups Connector functional specifications and software requirements for a full functional specification of the Connector, architecture description as well as hardware and software requirements.

Preconditions

To successfully run the Windows Users and Groups Connector and obtain all of its functionality, the Connector must be run in a process owned by a user who is a member of the local Administrators group and who has logon privileges to the domain controller and other domains (if accessed). This precondition can be omitted if the UserName and Password parameters of the Connector are set to specify an account that meets the requirements stated above.

The Windows Users and Groups Connector is designed and implemented to work in the following modes:

Note: This Connector does not support Advanced Link Criteria (see "Advanced link criteria" in SDI v7.2 Users Guide).

Configuration

The Connector needs the following parameters:

Constructing Link Criteria

Construct Link Criteria when using the Windows Users and Groups Connector in Lookup, Update and Delete modes. The Connector supports only Link Criteria that uniquely identify a single entry. The format is strict, and passing Link Criteria that do not match this format results in the following exception:

Unsupported Link Criteria structure.

The following is the Link Criteria structure that must be used, depending on Entry Type:

Other

User and Group account names

Global groups and domain users (which can be members of a local group on a machine that is not a domain controller) are retrieved in, and must be accessed using, the following format:

DOMAIN_NAME\GLOBAL_GROUP_NAME,DOMAIN_NAME\USER_NAME
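A script that post-processes such values can split them into their domain and account parts. The helper below is a hypothetical illustration in plain JavaScript and is not part of the Connector API:

```javascript
// Hypothetical helper: split a DOMAIN_NAME\ACCOUNT_NAME value into
// its domain and account parts.
function parseQualifiedName(value) {
    var sep = value.indexOf("\\");
    if (sep < 0) {
        // No domain qualifier: treat the value as a plain account name
        return { domain: null, account: value };
    }
    return { domain: value.substring(0, sep), account: value.substring(sep + 1) };
}

// A comma-separated member list in the format shown above
var members = "SALES\\Managers,SALES\\jdoe".split(",").map(parseQualifiedName);
```

The domain and account names used here are made up for the example.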

Create a new user

When creating a new user with the Windows Users and Groups Connector, if any of the following attributes are omitted or assigned a null value, they are automatically assigned a default value as follows:

Setting user password

Remember that a user password value cannot be retrieved; Windows stores it in a format that cannot be read back. If an AssemblyLine copies users from one Windows machine to another, you must set the Password attribute value manually.

When adding a user, passing the Password attribute with no value results in creating a user with an empty password.

When modifying a user, passing the Password attribute with no value results in retaining the old password.
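The two rules above can be summarized in a small decision function. This is a sketch of the documented behavior for illustration only, not Connector code:

```javascript
// Sketch of the documented Password attribute semantics: an empty
// value means an empty password on Add, but retains the old password
// on Modify. A non-empty value always sets the password.
function passwordEffect(operation, passwordValue) {
    if (passwordValue == null || passwordValue === "") {
        // No value passed with the Password attribute
        return operation === "add" ? "empty password" : "old password retained";
    }
    return "password set";
}
```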

Setting user Primary Group/global groups membership

All Domain Users must be members of their Primary Groups. This means that the value set in the PrimaryGroup attribute must be present in the GlobalGroups attribute. If there is no value for the PrimaryGroup attribute then it will be set to "Domain Users".
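Before an Add operation, you could enforce this rule in script. The following function is a hypothetical sketch of the documented defaulting and membership rule, not Connector code:

```javascript
// Hypothetical pre-Add check for the rule above: apply the
// "Domain Users" default and keep PrimaryGroup inside GlobalGroups.
function normalizePrimaryGroup(primaryGroup, globalGroups) {
    if (primaryGroup == null || primaryGroup === "") {
        primaryGroup = "Domain Users"; // default applied when no value is given
    }
    if (globalGroups.indexOf(primaryGroup) < 0) {
        // The PrimaryGroup value must be present among the global groups
        globalGroups.push(primaryGroup);
    }
    return { primaryGroup: primaryGroup, globalGroups: globalGroups };
}
```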

Operating with groups

There are certain groups that are predefined and special for Windows, and certain operations are not enabled on these groups: deleting them, renaming them, and modifying some of their attributes. Any attempt to perform such an invalid operation on any of these groups results in an exception being thrown.

Here is the list of these groups, structured by Windows installations:

Domain Controller:

Non-Domain Controller:

Character sets

Unicode is supported.

Examples

Navigate to the TDI_install_dir/examples/NT4 directory of your SDI installation.

Windows Users and Groups Connector functional specifications and software requirements

The Windows Users and Groups Connector implements Windows Users and Groups database access for both user and group management on Windows systems according to Windows definitions and restrictions as outlined below. For additional background information, see Overview of Users and Groups and Managing local and remote Users and Groups.

Functionality

Extract user and group data

The Windows Users and Groups Connector reads both user and group information from the Windows Users and Groups repository, including group and user metadata as well as relationship information (for example, a user's group memberships and a group's group memberships). The Connector reads both local and domain user or group data. Data is read from Windows, then organized and provided in the containers expected by SDI.

Add user and group data

The Windows Users and Groups Connector adds user information to both local machines and domain controllers, and it adds group information to both local machines and domain controllers. When operating with a domain controller, the Connector can create both local and global groups. When operating with a machine that is not a domain controller, the Connector can only create local groups, according to security restrictions set by Windows.

Modify group membership

The Windows Users and Groups Connector modifies group membership for both local and global groups. In accordance with Windows NT security restrictions, members can be assigned to groups as follows:

Modify user and group data

The Windows Users and Groups Connector modifies user and group properties on both local machines and domain controllers. When connected to a domain controller, the Connector is able to modify the properties of both local and global groups.

Delete user and group data

The Windows Users and Groups Connector can remove users from both local machines and domain controllers, and it can remove local groups from both local machines and domain controllers. When operating with a domain controller, the Connector can remove both local and global groups.



Parser Function Component

The Parser FC wraps a Parser into an AssemblyLine Component, such that it can be inserted anywhere in the AssemblyLine data flow.

Multiple instances of Parser FCs could aid in decoding two or more layers of protocol.

Configuration

A Parser Function Component also has a Parser tab. Using the Parser tab, you can select and configure the Parser you want to use to interpret or generate data stream records.

z/OS LDAP Changelog Connector

Attribute merge behavior

In older versions of SDI, in the z/OS® LDAP Changelog Connector merging occurs between Attributes of the changelog Entry and changed Attributes of the actual Directory Entry. This creates issues because we cannot detect the attributes that have changed. The SDI v7.2 version of the Connector has logic to address these situations, configured by a parameter: Merge Mode. The modes are:

Delta tagging is supported in all merge modes and entries can be transferred between different LDAP servers without much scripting.

Configuration

The Connector needs the following parameters:

When you change the Timeout or Sleep Interval value, its peer is automatically adjusted to a valid value (for example, when the timeout is greater than the sleep interval, the value that was not edited is adjusted to be in line with the other). The adjustment is made when the field editor loses focus.
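The peer adjustment can be sketched as follows, assuming that a timeout greater than the sleep interval is the invalid state that triggers the adjustment (the function and field names here are illustrative, not part of the Connector):

```javascript
// Sketch of the peer adjustment described above: when timeout exceeds
// sleepInterval, the value that was NOT edited is pulled into line.
function adjustPeers(editedField, timeout, sleepInterval) {
    if (timeout > sleepInterval) {
        if (editedField === "timeout") {
            sleepInterval = timeout; // the non-edited field follows
        } else {
            timeout = sleepInterval;
        }
    }
    return { timeout: timeout, sleepInterval: sleepInterval };
}
```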

See also

LDAP Connector,

Active Directory Change Detection Connector,

IBM Security Directory Server Changelog Connector,

Sun Directory Change Detection Connector.


Parsers

Parsers are used in conjunction with a stream-based Connector to interpret or generate the content that travels over the Connector's byte stream.

Parsers cooperate with their calling Connectors in discovering the schema of the underlying data stream when you press Discover Schema.

When the byte stream you are trying to parse does not match the chosen Parser, you get a sun.io.MalformedInputException error. For example, the error message can show up when using the Schema tab to browse a file.

Base Parsers

Character Encoding conversion

SDI is written in Java™, which in turn supports Unicode (double-byte) character sets. When you work with strings and characters in AssemblyLines and Connectors, they are always assumed to be in Unicode. Most Connectors provide some means of specifying the Character Encoding to be used. When you read from text files on the local system, Java has already established a default Character Encoding conversion that depends on the platform you are running on.

The SDI Server has the -n command line option, which specifies the character set of Config files it will use when writing new ones; it also embeds this character set designator in the file so that it can correctly interpret the file when reading it back in later.

However, occasionally you read or write data from or to text files in which information is encoded in a different Character Encoding (this could happen if you are reading a file created on a machine running a different operating system). The Connectors that require a Parser usually accept a Character Set parameter in the Parser configuration. If set, this parameter must specify one of the accepted conversion tables found in the Java runtime, as governed by the IANA Charset Registry. If this parameter is not set, most Parsers use the local character set. Some Parsers might have specific default character sets. See the information about individual Parsers in this guide.
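To see why the Character Set parameter matters, consider the same two bytes decoded under two different charset names. Node.js Buffer is used here purely for illustration; a Parser performs the equivalent conversion through the Java runtime's charset tables:

```javascript
// The same bytes decode differently under different charset names.
var bytes = Buffer.from([0xC3, 0xA9]);   // UTF-8 encoding of "é"
var asUtf8 = bytes.toString("utf8");     // correct charset: one character
var asLatin1 = bytes.toString("latin1"); // wrong charset: two characters
```

With the wrong charset the data is not lost, but every multi-byte character is silently turned into garbage, which is usually noticed only much later in the data flow.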

Some files, when UTF-8, UTF-16 or UTF-32 encoded, may contain a Byte Order Marker (BOM) at the beginning of the file. The purpose of the BOM is to help identify the algorithm used for encoding the InputStream into characters. A BOM is the encoding of the character 0xFEFF and can be used as a signature for the encoding used. The SDI File Connector does not recognize a BOM. Also, these SDI Parsers do not recognize a BOM:

If you try to read a file with a BOM, and the Parser does not know how to handle this, then in order to avoid returning unusable data, you should add this code to, for example, the Before Selection Hook of the connector:

	var bom = thisConnector.connector.getParser().getReader().read(); // skip the BOM = 65279
	if (bom != -1 && bom != 65279) {
		// make sure that we are skipping the BOM and not any other meaningful character
		throw "Invalid BOM";
	}

This code will read and skip the BOM, assuming that you have specified the correct character set for the parser. This workaround is only needed if the Parser does not recognize or process the BOM, or a skip of the BOM is needed in general.
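The same check can be illustrated on already-decoded text: if the first character is U+FEFF, drop it; otherwise leave the data untouched. This stand-alone sketch mirrors the Hook above but works on a string rather than the Parser's Reader:

```javascript
// Stand-alone sketch of the BOM check on already-decoded text:
// drop a leading U+FEFF (65279), leave everything else untouched.
function stripBom(text) {
    if (text.length > 0 && text.charCodeAt(0) === 0xFEFF) {
        return text.substring(1); // skip only the BOM character
    }
    return text;
}
```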

Some care must be taken with the HTTP protocol; see HTTP Parser, section Character sets/Encoding about character sets encoding in the description of the HTTP Parser for more details.

AssemblyLine Function Component

The AssemblyLine Function Component (AL FC) wraps the calling of another AssemblyLine into a Component, with some controls on how the other AssemblyLine is executed and what to do with a possible result.

The AL FC uses the Server API to call and manage the ALs. The component establishes a server connection to the Server API through RMI and creates a session with the server.

Using the FC

This FC provides a handler object for calling and managing AssemblyLines on either the local or a remote Server.

You configure this FC by choosing the AL to call, the Server on which this AL is defined and should run (blank or "local" indicating that the AL runs on the same Server that is running the FC), as well as the Config Instance that the AL belongs to. Again, a blank parameter value means that this AL is in the same Config Instance as the one containing the FC itself.

You also choose the Execution Mode (see "AL Cycle Mode" in SDI v7.2 Users Guide for more information). Although there are three Execution Modes (Run and wait for completion, Run in background and Manual cycle mode), the first two options are the standard methods of starting an AL from script with or without calling the AL join() method.

These first two modes cause the target AL to run on its own (stand alone) in its own thread. The third mode, cycle mode, means that the target AL is controlled by the FC which will execute it one cycle at a time for each time the FC is invoked. When the FC runs an AssemblyLine in stand-alone mode, the FC keeps a reference to the target AL - just like you get when you call main.startAL(). The FC can also return the status of the running/terminated ALs. You obtain this status by calling the FC's perform() method with a null or empty Entry parameter. The returned Entry object contains the reference to the target AL in an attribute called "value". If you pass a null value to the FC, the return value is the actual reference to the target AL (again, like making a main.startAL() call).

You can also call the FC with specific string command values to obtain info about the target AL:
perform("target") returns the object reference of the target AL.
perform("active") returns either "active", "aborted" or "terminated" depending on the target AL status.
perform("error") returns the java.lang.Exception object when the status is "aborted".
perform("result") returns the current result Entry object.
perform("stop") tries to terminate an active target AL, and will throw an error if the call does not succeed.

Note that if you have specified the "Run and wait for completion" Execution Mode, each call to perform() starts the target AL and returns the complete status for the execution (for example, the reference to the target as well as the status and error object). In this case, the initialize() method does NOT start the target AL as it does in all other cases. When the FC is called in this mode with an Entry object, the Entry object can contain one or more of the above keywords in an attribute called command (as described in the list above, concatenated in a comma-separated list). The returned Entry object is then populated with the corresponding values. So, rather than calling perform() several times with each desired command, you can create an Entry with all keywords in the command attribute and make a single call to perform():

var e = system.newEntry(); 
e.setAttribute("command", "target, status"); 
// In this example, fc references a Function Interface.
// If this was an AL Function instead, then fc.callreply(e)
// would be done.
var res = fc.perform(e); 
task.logmsg("The status is: " + res.getString("status")); 

When the FC runs an AL in manual mode, each call with an Entry object causes one cycle to be executed in the target AL. The returned Entry object is the work entry result at the end of the cycle. When the target AL has completed, a null entry is returned. If the cycle execution causes an error, then that error is re-thrown by the FC (so you should use a try-catch block in your script).
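A manual-mode driver loop therefore looks like the following sketch, where fc stands in for a configured AL FC. Here fc is a stub so that the example is self-contained; in a real script it would be the Function Component itself:

```javascript
// Driver loop for Manual cycle mode. The fc object below is a stub
// standing in for a configured AL FC: each perform(entry) call
// returns one cycle's work entry, then null when the target AL ends.
var fc = (function () {
    var cycles = [{ cn: "user1" }, { cn: "user2" }];
    return {
        perform: function (entry) { return cycles.shift() || null; }
    };
})();

var results = [];
var res;
while ((res = fc.perform({})) != null) {
    // One target-AL cycle per call; a cycle error would be re-thrown
    // here, so real scripts should wrap this call in try-catch.
    results.push(res.cn);
}
```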

The target AL can be supplied with parameters, in two different ways.

The Query ("Quick Discovery") button in the FC Input and Output Map tabs will try the following methods for determining the schema of the AL to be called:

  1. If the Operation parameter is set the FC will get any attributes that are defined in the Input and Output maps of that operation.
  2. If the AL has a defined schema (AL Call/Return tab), then this will be used.
  3. Otherwise the FC examines the Input and Output maps of all Connectors in the AL to be called in order to "guess" its schema.

In the target AL you can define an operation called "querySchema"; if it is defined, the Input and Output attributes of this operation are used to supply the AL FC (or the AssemblyLine Connector) with the schema.

Passing parameters to AssemblyLines

The following example shows how to pass parameters to different AssemblyLines using the AssemblyLine Function Component.

  1. In the Before Call Hook of the AssemblyLine Function Component, declare a TCB as shown:
    tcb = thisComponent.getFunction().getTCB(); 
    // gets the current TCB of the running AssemblyLine. 
    tcb.setALSetting("dn", work.getString("$dn")); 
    // dn is the attribute name that TCB holds. The second parameter in 
    // tcb.setALSetting(<>,<>) is the actual value that needs to be passed to the target AssemblyLine. 
  2. In the Configuration Editor, select the Use TCB Attributes checkbox.
  3. At the target AssemblyLine, for a mapped variable (input map), use the following code to retrieve the passed parameter:
    var mydn = task.getConfigStr("dn"); 
    //task.getConfigStr(<transfer_name>), the transfer_name should match the name used in Step 1.

See also

AssemblyLine Connector,

Appendix E. Creating new components using Adapters.

Java Class Function Component

SDI v7.2 allows you to use Java™ objects in your script code to perform specific operations not provided directly by SDI. Because calling methods of Java objects when the Java object must be constructed and parameters mapped to proper classes can be difficult, the Java class Function Component makes using Java objects in your scripts easier. The Java Class Function Component allows you to choose a Java class and method through the Config Editor and performs the conversion and mapping of parameters to the method.

Schema

The schema for the Java™ Class Function Component is dynamic and reflects the chosen Java class and method. The Function Component also performs dynamic conversion of parameters to match the signature of the target Java class/method.

Parameter Conversion

Parameter conversion is performed for the most common types. However, it is beyond the scope of this FC to provide conversion for all potential Java class objects. For unsupported objects you must explicitly create these before invoking the Java Class Function Component. Below is a table of objects that the Java Class Function Component will recognize for parameter conversions.
Parameter type Notes®
Integer Both object and primitive type
Long Both object and primitive type
Double Both object and primitive type
Float Both object and primitive type
Short Both object and primitive type
Byte Both object and primitive type
Character Both object and primitive type
Boolean Both object and primitive type
Date Only conversion from default date format as defined by DateFormat
String

In addition to these types, the Java Class Function Component will also attempt conversion into primitive arrays and java.util.Collection objects.
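To illustrate the kind of coercion involved (this only sketches the idea for a few of the listed types; the FC itself performs the conversion through Java reflection against the chosen method's signature):

```javascript
// Illustrative coercion of a string value to a declared parameter
// type name. The type names mirror the table above; the function
// itself is hypothetical, not part of the Function Component.
function coerce(value, typeName) {
    switch (typeName) {
        case "java.lang.Integer":
        case "int":
            return parseInt(value, 10);
        case "java.lang.Double":
        case "double":
            return parseFloat(value);
        case "java.lang.Boolean":
        case "boolean":
            return value === "true";
        default:
            return String(value); // fall back to a plain string
    }
}
```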

Configuration

The Java™ Class Function Component uses the following parameters:

This Attribute is a java.lang.String if you select the Return result as String checkbox in the Config tab; otherwise it is a bytearray.

Scripted Function Component

Like Connectors and Parsers, SDI allows you to fully program a Function Component using scripting. This is done by means of the template that the Scripted FC provides.

The script for the Scripted Function Component runs in a separate JavaScript engine. This means that the script cannot access any variables that are available, or have been set, in the normal Hooks of an AssemblyLine.