- Overview
- New for administrators
- Administrators: Deploying a preview guide to users
- Accessibility features for Connections
- Directory path conventions
- Deployment options
- Connections system requirements
- Connections support statements
- Worksheet for installing Connections
- Connections release notes
- Small deployment
- Medium deployment
- Large deployment
- Install
- New
- The installation process
- Pre-installation tasks
- Prepare to configure the LDAP directory
- Create the Cognos administrator account
- Install IBM WAS
- Access Windows network shares
- Set up federated repositories
- Create databases
- Create multiple database instances
- Register the DB2 product license key
- Create a dedicated DB2 user
- Configure the DB2 databases for Unicode
- Create databases with the database wizard
- Create databases with SQL scripts
- Configure a FileNet database for SQL Server or Oracle
- Enable NO FILE SYSTEM CACHING for DB2 on System z
- Populate the Profiles database with LDAP data
- Configure Tivoli Directory Integrator (TDI)
- Add source data to the Profiles database
- Configure the Manager designation in user profiles
- Supplemental user data for Profiles
- Install Cognos Business Intelligence
- Install WAS for Cognos Business Intelligence
- Install the database client for Cognos Transformer
- Install required patches on the Cognos BI Server system
- Install Cognos Business Intelligence components
- Configure Cognos Business Intelligence after installation
- Federating the Cognos server to the Deployment Manager
- Validating the Cognos server installation
- Before installing Connections
- Install Connections
- Install as a non-root user
- Install Connections 4.5
- Install in console mode
- Install silently
- Modify the installation in interactive mode
- Modify the installation using response file
- Modify the installation in console mode
- Post-installation tasks
- Mandatory post-installation tasks
- Review the JVM heap size
- Configure IBM HTTP Server
- Configure the Home page administrator
- Enable Search dictionaries
- Copying Search conversion tools to local nodes
- Create the initial Search index
- Configure file downloads through the HTTP Server
- Configure Cognos Business Intelligence
- Configure Connections Content Manager for Libraries
- Configure Connections Content Manager with a new FileNet deployment
- Configure Connections Content Manager with an existing FileNet deployment
- Confirm changes made in post-installation tasks
- Uninstalling Connections
- Remove applications
- Uninstalling a deployment
- Uninstalling in console mode
- Uninstalling using response file
- Uninstalling: delete databases with the database wizard
- Uninstalling databases using response file
- Uninstalling: Manually drop databases
- Reverting Common, Connections-proxy, and WidgetContainer Applications for Uninstallation
- Uninstalling Cognos Business Intelligence server
- Updating and migrating
- Prepare Connections for maintenance
- Back up Connections
- Saving your customizations
- Prepare to migrate the media gallery
- Migrating Cognos Business Intelligence
- Quickr migration tools for migrating places to Connections
- Migrating to Connections 4.5
- Rolling back a migration
- Update Connections 4.5
Overview
Connections includes the following applications:
- Activities: Share work related to a project goal.
- Blogs: Online journals.
- Bookmarks: Social bookmarking tool, previously known as Dogear.
- Communities: Members participate in activities and forums, and share blogs, bookmarks, feeds, and files.
- Files: Common repository for uploaded files.
- Forums: Brainstorm and collect feedback.
- Home page: Central location for the latest updates.
- Libraries: Work with drafts, reviews, and publishing in Communities.
- Profiles: Directory of the people in your organization.
- Metrics: Statistics on how people use Connections applications.
- Wikis: Share and coauthor information.
New for administrators
- Install and configure
- Administration
Connections:
- Lock the default frequency with which email digests from Connections applications are sent to users by configuring the frequencyLocked property in notification-config.xml.
Activities:
- The EventLogPurgeJob task deletes old entries from the Activities events log. This task helps maintain performance and keeps the log from becoming too large. By default, it runs daily at 2 AM. You can specify the time that the log is purged and set the properties that define which entries can be deleted.
Communities:
- You can increase the maximum number of communities displayed in Communities user views by adding a configuration setting to LotusConnections-config.xml.
- If the Communities catalog index becomes corrupted or is not being refreshed properly, you can restore the index by deleting the existing index data and waiting for the next scheduled crawl.
- By adding search to the list of modes in the Linked Library widget definition, you can enable search for content in linked libraries.
Wikis:
- The Wikis administrator can delete draft wiki pages created by other users. This capability is useful after a user leaves the organization.
- Create your own message for users by customizing the Wikis welcome page.
Files:
- You can now use a Files administrative command to obtain the ID of a community library using its community uuid value. The format of the command is:
FilesLibraryService.getByExternalContainerId(string community_id)
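For example, you might run the command from a wsadmin session on the Deployment Manager; this is a sketch only, and the administrator credentials and community uuid shown are placeholders:
cd profile_root/bin
./wsadmin.sh -lang jython -username wsadmin -password password
execfile("filesAdmin.py")
FilesLibraryService.getByExternalContainerId("community_uuid")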
Profiles:
- You can now exclude nicknames when adding or updating user profiles by specifying a new name.expansion property in the tdi-profiles-config.xml file.
- You can use the new mapToNameTable property in the profiles-types.xml file to specify an additional givenname or surname value for use with Profiles directory search.
- You can integrate the Profiles business card with your web application by mapping an LDAP distinguished name using the DN parameter, in addition to the previously available user ID and email mapping options.
- The hashEmail extended attribute can be added to the map_dbrepos_from_source.properties file or profileExtension table in the tdi-profiles-config.xml file to support Profiles users in conjunction with the Microsoft Outlook Social Connector.
See Use the Connections Desktop Plug-ins for Microsoft Windows.
News:
- Use the NewsActivityStreamService.updateApplicationRegistrationForEmailDigest command to update a registered third-party application to enable it for email digest functionality.
- Use the NewsEmailDigestService.refreshDefaultEmailPrefsFromConfig() command to refresh updates to the default email preferences specified in notification-config.xml without the need for a server restart.
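For example, the refresh command takes no arguments and can be run from a wsadmin session after loading the News administration script (a sketch; start wsadmin as usual for your deployment):
execfile("newsAdmin.py")
NewsEmailDigestService.refreshDefaultEmailPrefsFromConfig()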
Search:
- Use the new, optional parameter for the SearchService.startBackgroundIndex command to run social analytics indexing jobs at the end of the background indexing operation.
- You can configure post-filtering for the ECM service by enabling or disabling a property in the Search configuration file. Post-filtering is enabled by default.
- Use the SearchService.startBackgroundSandIndex command to create a background index for the social analytics service. Use this command to run background indexing for social analytics without having to run a full index crawl.
- By adding search to the list of modes in the Linked Library widget definition, you can enable search for content in linked libraries using FileNet P8 5.2.
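For example, the background indexing commands can be run from a wsadmin session after loading the Search administration script. The following sketch is illustrative only: the directory paths and application list are placeholders, and the exact parameter list should be verified against the SearchService command reference for your release.
execfile("searchAdmin.py")
SearchService.startBackgroundIndex("/opt/IBM/Connections/data/local/search/background", "all_configured")
SearchService.startBackgroundSandIndex("/opt/IBM/Connections/data/local/search/sandBackground")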
- Application Development
The Connections API documentation is now available in the IBM Social Business Development wiki.
- Troubleshooting and Support
Administrators: Deploying a preview guide to users
The Connections preview guide provides...
- Overview of new applications
- Important changes from the previous release
- Familiar applications that remain the same
- Links to product tours, reference cards, and product documentation
- A few key productivity tips
You can download the preview guide from the Connections wiki at http://www-10.lotus.com/ldd/lcwiki.nsf. There are two files available to you:
- An Adobe PDF file, ready for emailing, printing, or distributing to your organization.
- An IBM Symphony ODT file that can be customized for your organization; for example, you can add contact information for your Help Desk. This file includes instructions in blue text for customizing information. Remember to remove these instructions before rolling out the file to your organization.
Accessibility features
- Keyboard-only operation
- Interfaces that are commonly used by screen readers
- Keys that are discernible by touch but do not activate just by touching them
- Industry-standard devices for ports and connectors
- The attachment of alternative input and output devices
- Screen magnification
Accessibility is optimized when using a Microsoft Windows 7 64-bit client, Microsoft Windows Server 2003 or later, Firefox 17 ESR, and JAWS 12 or later.
Interface information
To display a person's business card, hover over a person's name, and then press Ctrl + Enter to open the business card. Press Tab to set focus to the first element in the business card.
For JAWS users: To activate buttons in the user interface, press the Enter key, even when JAWS announces to use the space bar.
If the application user interface contains a View Demo link, you can download the target video in accessible MP4 format by clicking View Demo and then clicking Download the video in accessible format.
For administrators installing the product
For optimal accessibility when installing Connections, follow the instructions to install the product in console mode.
Related accessibility information
The accessible version of this product documentation is available on the Connections wiki.
- The accessible version of the documentation is available in HTML format to give users the maximum opportunity to apply screen-reader software technology.
- Images in the accessible version of the documentation are provided with alternative text so that users with vision impairments can understand the contents of the images.
IBM and accessibility
See the IBM Human Ability and Accessibility Center for more information about the commitment that IBM has to accessibility.
Directory path conventions
The following variables represent the default installation root directories.

app_server_root: IBM WAS installation directory
- AIX: /usr/IBM/WebSphere/AppServer
- IBM i: /QIBM/UserData/WebSphere/AppServer/V8/ND
- Linux: /opt/IBM/WebSphere/AppServer
- Windows: drive:\IBM\WebSphere\AppServer

profile_root: WAS profile directory (for example, the dmgr profile)
- AIX: /usr/IBM/WebSphere/AppServer/profiles/profile_name
- IBM i: /QIBM/UserData/WebSphere/AppServer/V8/ND/profiles/profile_name
- Linux: /opt/IBM/WebSphere/AppServer/profiles/profile_name
- Windows: drive:\IBM\WebSphere\AppServer\profiles\profile_name

ibm_http_server_root: IBM HTTP Server installation directory
- AIX: /usr/IBM/HTTPServer
- IBM i: /www
- Linux: /opt/IBM/HTTPServer
- Windows: drive:\IBM\HTTPServer

connections_root: Connections installation directory
- AIX or Linux: /opt/IBM/Connections
- IBM i: /QIBM/ProdData/IBM/Connections
- Windows: drive:\IBM\Connections

local_data_directory_root: Local content stores
- AIX or Linux: /opt/IBM/Connections/data/local
- IBM i: /QIBM/UserData/IBM/Connections/data/local
- Windows: drive:\IBM\Connections\data\local

shared_data_directory_root: Shared content stores
- AIX or Linux: /opt/IBM/Connections/data/shared
- IBM i: /QIBM/UserData/IBM/Connections/data/shared
- Windows: drive:\IBM\Connections\data\shared\

IM_root: Installation Manager installation directory
- AIX: /opt/IBM/InstallationManager
- IBM i: /QIBM/ProdData/InstallationManager/eclipse
- Linux: /opt/IBM/InstallationManager
- Windows: drive:\IBM\Installation Manager

shared_resources_root: Shared resources directory
- AIX or Linux: /opt/IBM/IMShared
- IBM i: /QIBM/ProdData/IBM/IMShared
- Windows: drive:\Program Files (x86)\IBM\IMShared

db2_root: DB2 database installation directory
- AIX: /usr/IBM/db2/version
- Linux: /opt/ibm/db2/version
- Windows: drive:\IBM\SQLLIB\version

oracle_root: Oracle installation directory
- AIX or Linux: /home/oracle/oracle/product/version/db_1
- Windows: drive:\oracle\product\version\db_1

sql_server_root: Microsoft SQL Server installation directory
- Windows: drive:\Microsoft SQL Server, where drive is C or D

Cognos_BI_install_path: IBM Cognos BI Server installation directory
- AIX or Linux: /opt/IBM/CognosBI
- IBM i: Refer to AIX; IBM Cognos BI Server does not support IBM i.
- Windows: drive:\IBM\Cognos

Cognos_Transformer_install_path: Cognos Transformer installation directory
- AIX or Linux: /opt/IBM/CognosTF
- IBM i: Refer to AIX; IBM Cognos Transformer does not support IBM i.
- Windows: drive:\IBM\Cognos
You can specify the installation directory in the cognos-setup.properties file during installation.
Deployment options
A network deployment can consist of a single server that hosts all Connections applications or two or more sets of clustered servers that share the workload. You must configure an additional system with WAS Network Deployment Manager.
IBM Cognos Business Intelligence is an optional component in the deployment. If used, Cognos must be federated to the same Deployment Manager as the Connections servers. However, Cognos servers cannot be configured within a Connections cluster.
A network deployment provides the administrator with a central management facility and ensures that users have constant access to data. It balances the workload between servers, improves server performance, and helps maintain performance as the number of users increases. The added reliability also requires a larger number of systems and experienced administrative personnel to manage them.
When you are installing Connections, you have three deployment options:
- Small deployment
- Install all Connections applications on a single node in a single cluster. This option is the simplest deployment but has limited flexibility and does not allow individual applications to be scaled up. All the applications run within a single JVM.
The diagram depicts a topology with up to 8 servers. If you install the servers on shared systems, you do not need to deploy 8 separate systems. Figure 1. Small deployment
- Medium deployment
- Install a subset of applications in separate clusters. Connections provides three predefined cluster names shared among all of its applications. Use this option to distribute applications according to your usage expectations. For instance, you might anticipate higher loads for the Profiles application and install it in its own cluster, while other applications could be installed in a different cluster. This option allows you to maximize the use of available hardware and system resources to suit your needs. Figure 2. Medium deployment
- Large deployment
- Install each application in its own cluster. Connections provides a predefined cluster name for each application. This option provides the best performance in terms of scalability and availability options but also requires more system resources. In most cases, you should install the News and Home page applications in the same cluster. Figure 3. Large deployment
In a multi-node cluster, configure network share directories as shared content stores. When using NFS, use NFS v4 because NFS v3 lacks advanced locking capability. When using Microsoft SMB Protocol for file-sharing, use the UNC file-naming convention; for example: \\machine-name\share-name.
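For example, on Linux an NFS v4 share might be mounted as the shared content store as follows; the host name, export, and mount point are placeholders:
mount -t nfs4 nfsserver.example.com:/export/connections /opt/IBM/Connections/data/shared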
You can assign various combinations of applications to clusters in many different ways, depending on your usage and expectations.
The number of JVMs required for each cluster depends on the user population and workload. For failover, you must have two JVMs per application, or two nodes for each cluster, scaled horizontally. Horizontal scaling refers to running multiple JVMs for an application, with each JVM on its own WAS instance; vertical scaling refers to running multiple JVMs for the same application on a single WAS instance. Vertical scaling is not officially supported in Connections; in any case, it is typically not needed unless your server has several CPUs.
For performance and security reasons, consider using a proxy server in the deployment.
IBM Cognos Business Intelligence does not have to be deployed before you install the Metrics application. Even if you do not plan to deploy Cognos now, install the Metrics application so that events are recorded in the Metrics database and are available when Cognos is deployed to provide reports.
For added security when you plan to run third-party OpenSocial gadgets, such as those from iGoogle, configure locked domains. Locked domains are required to isolate these gadgets from access to your intranet and SSO information. The basic configuration of locked domains is as follows:
- A second top-level domain that is not in your SSO domain. For example, if your organization's SSO domain is example.com, you will require a distinct top-level domain, such as example-modules.com.
- A wildcard SSL certificate for this domain name.
No additional server instances are required for the basic configuration.
Connections system requirements
See the Detailed system requirements for Connections web page.
If you deploy Cognos Transformer on a non-Windows system and connect to data located in a Microsoft SQL Server database, install the Progress DataDirect Connect for ODBC driver; it is the only ODBC driver that IBM Cognos supports for that configuration. The driver is not free. To avoid the cost of the licensed driver, install Transformer on a Windows system: either install both Cognos Business Intelligence and Transformer on Windows, or leave Cognos Business Intelligence on the non-Windows system and install only Transformer on Windows. Refer to the related technote for more information.
Worksheet for installing Connections
While installing and configuring Connections, it can be difficult to remember all the user IDs, passwords, server names, and other information that you need during and after installation. Print out and use this worksheet to record that data.
LDAP server details
- LDAP server type and version. Example: IBM Domino 8.5
- Primary host name. Example: domino_ldap.example.com
- Port. Example: 389
- Bind distinguished name. Example: cn=lcadmin,ou=People,dc=example,dc=com
- Bind password
- Certificate mapping
- Certificate filter
- Login attribute. Example: mail or uid
WAS details
- WAS version: 8.0.0.5 or higher
- Installation location. For example: C:\IBM\WebSphere\AppServer
- Update installer location. For example: C:\IBM\WebSphere\UpdateInstaller
- Administrator ID. For example: wsadmin
- Administrator password
- WAS URL. For example: http://was.example.com:9060/ibm/console
- WAS secure URL. For example: https://was.example.com:9043/ibm/console
- WAS host name
- HTTP transport port
- HTTPS transport port
- SOAP connector port
- Run application server as a service? (True/False)
Database details
- Database type and version. For example: Oracle Database 10g Enterprise Edition Release 2 10.2.0.4
- Database instance or service name
- Database server host name. For example: database.example.com
- Port. The default values are: DB2 = 50000; Oracle = 1521; MS SQL Server = 1433
- JDBC driver fully qualified file path. For example: C:\IBM\SQLLIB
- Database client name and version. For example: MS SQL Server Management Studio Express v9.0.2
- Database client user ID. Default: db2admin
- Database client user password
- DB2 administrators group (Windows only). Default: DB2ADMNS
- DB2 users group (Windows only). Default: DB2USERS
- Activities database: server host name, port number, database name (default: OPNACT), application user ID, application user password
- Blogs database: server host name, port number, database name (default: BLOGS), application user ID, application user password
- Cognos database: server host name, port number, database name (default: COGNOS), application user ID, application user password
- Communities database: server host name, port number, database name (default: SNCOMM), application user ID, application user password
- Dogear database: server host name, port number, database name (default: DOGEAR), application user ID, application user password
- Files database: server host name, port number, database name (default: FILES), application user ID, application user password
- Forums database: server host name, port number, database name (default: FORUM), application user ID, application user password
- Home page database: server host name, port number, database name (default: HOMEPAGE), application user ID, application user password
- Metrics database: server host name, port number, database name (default: METRICS), application user ID, application user password
- Mobile database: server host name, port number, database name (default: MOBILE), application user ID, application user password
- Profiles database: server host name, port number, database name (default: PEOPLEDB), application user ID, application user password
- Wikis database: server host name, port number, database name (default: WIKIS), application user ID, application user password
Tivoli Directory Integrator (TDI) details
- TDI installation location. For example: C:\IBM\TDI\
- TDI version. For example: 7.1 fix pack 2 or higher
- Solutions Directory path. For example: C:\IBM\TDISOL\TDI
LDAP-Profiles mapping details
This table is derived from the map_dbrepos_from_source.properties file.
Profiles database attribute | LDAP attribute (example) | Profiles database column
alternateLastname | null | PROF_ALTERNATE_LAST_NAME
bldgId | null | PROF_BUILDING_IDENTIFIER
blogUrl | null | PROF_BLOG_URL
calendarUrl | null | PROF_CALENDAR_URL
countryCode | c | PROF_ISO_COUNTRY_CODE
courtesyTitle | null | PROF_COURTESY_TITLE
deptNumber | null | PROF_DEPARTMENT_NUMBER
description | description | PROF_DESCRIPTION
displayName | cn | PROF_DISPLAY_NAME
distinguishedName | $dn | PROF_SOURCE_UID
email | mail | PROF_MAIL
employeeNumber | employeenumber | PROF_EMPLOYEE_NUMBER
employeeTypeCode | employeetype | PROF_EMPLOYEE_TYPE
experience | null | PROF_EXPERIENCE
faxNumber | facsimiletelephonenumber | PROF_FAX_TELEPHONE_NUMBER
floor | null | PROF_FLOOR
freeBusyUrl | null | PROF_FREEBUSY_URL
givenName | givenName | PROF_GIVEN_NAME
givenNames | givenName |
groupwareEmail | null | PROF_GROUPWARE_EMAIL
guid | (JavaScript function: {func_map_from_GUID}) | PROF_GUID
ipTelephoneNumber | null | PROF_IP_TELEPHONE_NUMBER
isManager | null | PROF_IS_MANAGER
jobResp | null | PROF_JOBRESPONSIBILITIES
loginId | employeenumber | PROF_LOGIN and PROF_LOGIN_LOWER
logins | | PROF_LOGIN
managerUid | $manager_uid (a lookup of the manager's UID using the DN in the manager field) | PROF_MANAGER_UID
mobileNumber | mobile | PROF_MOBILE
nativeFirstName | null | PROF_NATIVE_FIRST_NAME
nativeLastName | null | PROF_NATIVE_LAST_NAME
officeName | physicaldeliveryofficename | PROF_PHYSICAL_DELIVERY_OFFICE
orgId | ou | PROF_ORGANIZATION_IDENTIFIER
pagerId | null | PROF_PAGER_ID
pagerNumber | null | PROF_PAGER
pagerServiceProvider | null | PROF_PAGER_SERVICE_PROVIDER
pagerType | null | PROF_PAGER_TYPE
preferredFirstName | null | PROF_PREFERRED_FIRST_NAME
preferredLanguage | preferredlanguage | PROF_PREFERRED_LANGUAGE
preferredLastName | null | PROF_PREFERRED_LAST_NAME
profileType | null | PROF_TYPE
secretaryUid | $secretaryUid (a lookup of the secretary's UID using the DN in the secretary field) | PROF_SECRETARY_UID
shift | null | PROF_SHIFT
surname | sn | PROF_SURNAME
surnames | sn | PROF_SURNAME
telephoneNumber | telephonenumber | PROF_TELEPHONE_NUMBER
timezone | null | PROF_TIMEZONE
title | null | PROF_TITLE
uid | (JavaScript function: {func_map_to_db_UID}) | PROF_UID
workLocationCode | postallocation | PROF_WORK_LOCATION
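For reference, entries in map_dbrepos_from_source.properties use a simple profiles-attribute=LDAP-attribute format, with JavaScript functions in braces. The following lines are a hedged sketch of that format, not a complete file:
displayName=cn
email=mail
surname=sn
guid={func_map_from_GUID}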
Connections details
- Connections installation location. For example: C:\IBM\Connections
- Response file directory path. For example: C:\IBM\Connections\InstallResponse.txt
- DNS host name. For example: connections.example.com
- Choose: DNS MX Records or Java Mail Session?
- DNS MX Records only: local mail domain. For example: example.com
- Java Mail Session only: DNS server name or SMTP relay host. For example: dns.example.com; relayhost.example.com
- Domain name for the Reply-to email address
- Suffix or prefix for the Reply-to email address
- Server that receives Reply-to emails
- User name and password for that server
- URLs and ports for administrator and user access. You can look up the URLs for each application in the text files that the installation wizard generates; these files are located under the connections_root directory.
- Activities: server name, cluster member name, URL (for example: http://www.example.com:9080/activities), secure URL (for example: https://www.example.com:9446/activities), statistics files directory path, content files directory path
- Blogs: server name, cluster member name, URL (for example: http://www.example.com:9080/blogs), secure URL (for example: https://www.example.com:9446/blogs), upload files directory path
- Bookmarks: server name, cluster member name, URL (for example: http://www.example.com:9080/dogear), secure URL (for example: https://www.example.com:9446/dogear), favicon files directory path
- Communities: server name, cluster member name, URL (for example: http://www.example.com:9080/communities), secure URL (for example: https://www.example.com:9446/communities), statistics files directory path, discussion forum content directory path
- Files: server name, cluster member name, URL (for example: http://www.example.com:9080/files), secure URL (for example: https://www.example.com:9446/files), content store directory path
- Forums: server name, cluster member name, URL (for example: http://www.example.com:9080/forums), secure URL (for example: https://www.example.com:9446/forums), content store directory path
- Home page: server name, cluster member name, URL (for example: http://www.example.com:9080/homepage), secure URL (for example: https://www.example.com:9446/homepage), content store directory path
- Metrics: server name, cluster member name
- Moderation: server name, cluster member name, URL (for example: http://www.example.com:9080/moderation), secure URL (for example: https://www.example.com:9446/moderation)
- Profiles: server name, cluster member name, URL (for example: http://www.example.com:9080/profiles), secure URL (for example: https://www.example.com:9446/profiles), statistics files directory path, cache directory path
- Search: server name, cluster member name, dictionary directory path, index directory path
- Wikis: server name, cluster member name, URL (for example: http://www.example.com:9080/wikis), secure URL (for example: https://www.example.com:9446/wikis), content directory path
IBM HTTP Server
- IBM HTTP Server installation location. For example: C:\IBM\HTTPServer\
- IBM HTTP Server version. For example: V7.0 fix pack 21
- IBM HTTP Server httpd.conf file directory path. For example: C:\IBM\HTTPServer\conf\
- Web server definition name. For example: webserver1
- Web server plugin-cfg.xml file directory path. For example: C:\IBM\HTTPServer\Plugins\config\webserver1\
- IBM HTTP Server host name
- IBM HTTP Server fully qualified host name
- IBM HTTP Server IP address
- IBM HTTP Server communication port. For example: 80
- IBM HTTP Server administration port. For example: 8008
- Run IBM HTTP Server as a service? (Y/N)
- Run IBM HTTP administration as a service? (Y/N)
- IBM HTTP Server administrator ID
- IBM HTTP Server administrator password
Cognos BI Server and Transformer
For the Cognos BI Server and Transformer details, refer to the cognos-setup.properties file.
- Installation location for the BI server, such as:
- AIX or Linux: /opt/IBM/CognosBI
- Windows: C:\IBM\Cognos
- Installation location for the Transformer server, such as:
- AIX or Linux: /opt/IBM/CognosTF
- Windows: C:\Program Files (x86)\IBM\Cognos
- Administrator ID
- Administrator password
Connections Content Manager
- connectionsAdmin alias. For multiple cells, the alias must be defined with the same user name and password.
- filenetAdmin alias. Stores the object store administrator credentials used to access the FileNet Collaboration Services URL.
- FileNet Collaboration Services URL (HTTP). For example: http://fncs_server:port/dm
- FileNet Collaboration Services URL (HTTPS). For example: https://fncs_server:port/dm
- FileNet Collaboration Services anonymous role
- Global configuration data (GCD) admins
- Object store admins
- FileNet P8 domain name. For example: ICDomain
- Object store name. For example: ICObjectStore
Libraries
- Libraries server name
- Libraries cluster member name
- Libraries FileNet object store database: server host name, port number, database name (default: FNOS), application user ID, application user password
- Libraries FileNet global configuration database: server host name, port number, database name (default: FNGCD), application user ID, application user password
Connections release notes
The release notes for Connections 4.5 explain compatibility, installation, and other getting-started issues.
Contents
Description
Connections 4 introduces metrics for Communities. Metrics data is available for the entire product as well as for individual communities. Metrics employs the analytic capabilities of the IBM Cognos Business Intelligence server, which is provided as part of the Connections installation to support the collection of metrics data.
Announcement
The Connections 4.5 announcement notification describes the following information. Additional overview information is available at http://www-03.ibm.com/software/products/us/en/conn.
- Detailed product description, including a description of new function
- Product-positioning statement
- Packaging and ordering details
- International compatibility information
System requirements
For information about hardware and software compatibility, see the Connections system requirements topic.
Install Connections 4.5
For step-by-step installation instructions, refer to the Installing section of the product documentation.
After the mandatory tasks are completed, go to IBM Fix Central to obtain the latest iFixes and apply them by using "Update Connections 4.5" to ensure that the deployment has the latest set of software fixes.
Known problems
Known problems are documented in the form of individual technotes in the Connections Support Portal. As problems are discovered and resolved, the IBM Support team updates the knowledge base. By searching the knowledge base, you can quickly find workarounds or solutions to problems.
The following links launch customized queries of the live Support knowledge base:
- All known problems for Connections 4.5
- Activities
- Blogs
- Bookmarks
- Communities
- Files
- Forums
- Home page
- Mobile
- News
- Profiles
- Search
- Wikis
- Installation
- Connections Plugin for Lotus Notes
- Connections Plugin for Microsoft Office
- Connections Plugin for Microsoft Outlook
- Connections Plugin for Microsoft Windows Explorer
- Connections Plugin for WebSphere Portal
- Connections APIs
This image represents a sample topology for a Connections installation. This topology shows a clustered deployment on one node, called Node 1. The image shows one cell, Cell A, containing two systems, System A and System B. Each system has WebSphere Application Server (WAS) installed. System A has the Deployment Manager installed. System B shows Cluster A, along with Node 1 and Server A. Server A has all Connections applications. Outside of the systems, other deployment components are listed: DBMS, TDI, LDAP server, network file share, IBM HTTP Server, and Cognos.
This image represents a sample topology for a Connections installation. This topology shows a clustered deployment on one node, called Node 1. The three systems that are displayed have WebSphere Application Server (WAS) installed, and two of the systems are connected by three clusters.
The image shows one cell, Cell A, containing three systems: System A, System B, and System C. System A has the Deployment Manager installed. Systems B and C each have one node installed, Node 1, and each has three servers installed on Node 1. System B has servers A, C, and E installed; System C has servers B, D, and F installed. In System B, Server A has Activities installed, Server C has Communities installed, and Server E has the other Connections applications installed. In System C, Server B has Activities installed, Server D has Communities installed, and Server F has the other Connections applications installed.
The diagram also shows three clusters, Cluster A, Cluster B, and Cluster C, extending between the different servers on Systems B and C. The servers and clusters are all installed on Node 1. Outside of the systems, other deployment components are listed: DBMS, TDI, LDAP server, network file share, IBM HTTP Server, and Cognos.
This image represents a sample topology for a Connections installation. This topology shows a clustered deployment on two nodes, Node 1 and Node 2. All three systems that are displayed have WebSphere Application Server (WAS) installed.
The image shows one cell, Cell A, containing three systems: System A, System B, and System C. System A has the Deployment Manager installed. Systems B and C each have one node installed: Node 1 on System B and Node 2 on System C.
System B has four servers, A, C, E, and X, installed on Node 1. System C also has four servers, B, D, F, and Y, installed on Node 2. In System B, Server A has Activities installed, Server C has Blogs installed, Server E has Communities installed, and Server X has the other Connections applications installed. In System C, Server B has Activities installed, Server D has Blogs installed, Server F has Communities installed, and Server Y has the other Connections applications installed.
The diagram also shows four clusters, an Activities cluster, a Blogs cluster, a Communities cluster, and an Other cluster, extending between the different servers on Systems B and C. Outside of the systems, other deployment components are listed: DBMS, TDI, LDAP server, network file share, IBM HTTP Server, and Cognos.
Install
New Connections 4.5 install
- Installation of Connections Content Manager for the Libraries application on AIX, Linux, and Windows.
- Updates to configuring Connections Content Manager for Libraries
- Migration tools for Lotus Quickr for Portal and Lotus Quickr for Domino places.
- Updated system requirements (new support)
- OAuth2 support for Connections Content Manager (same cell)
- Configuring Interoperability mode is no longer mandatory when setting up federated repositories and enabling single sign-on for Domino.
Migrating to this release
- After migration to Connections 4.5, you can reuse content stores from 4.0.
- Connections 4.5 is the first release on the IBM i operating system, so no migration is necessary for IBM i.
The installation process
Installing Connections in a production environment involves several procedures to deploy the different components of the environment.
- Review the software and hardware requirements for the systems that will host Connections.
- Download IBM Installation Manager version 1.5.3 or later.
Recent releases of Installation Manager offer a 64-bit version; however, the 64-bit version is not compatible with the Connections package, so you must use the 32-bit version of Installation Manager when installing Connections.
- Install the required software, choosing a supported product in each case:
- WAS
- LDAP directory
- Database server
- TDI
- IBM Cognos (optional)
- To use mail notification, ensure that you have the SMTP and DNS details of your mail infrastructure available at installation time.
- Prepare the LDAP directory, install WAS, and create databases
- Install Connections and, optionally, Connections Content Manager.
You can rerun the installation program to add Connections Content Manager after an initial installation of Connections only, or you can install both at the same time.
- Complete the post-installation tasks that apply to your configuration. For example, map the installed applications to IBM HTTP Server.
Pre-installation tasks
If you are migrating from a prior release of Connections, do not complete the tasks for creating databases or populating the Profiles database. The migration process handles those tasks automatically.
Special consideration is required when restoring backed up content:
- Content can be restored only to the same major/minor release from which it was backed up. For example, Code Refresh (CR) releases for Connections 4.5 are considered part of the Connections 4.5 release.
- Content cannot be restored to Connections 4.5 if it was backed up on Connections 4.0 (CRx).
- Content cannot be restored to a later release if it was backed up on Connections 4.5 (CRx).
- The /provision subdirectory in the shared content folder can be restored only to the exact version on which it was backed up. It is generally not safe to restore the /provision subdirectory to a version with a CR or an iFix different from the version from which the directory was backed up.
To use Connections Content Manager, configure Connections and FileNet with the same WebSphere federated repositories. When you install Connections, you provide a user name and password for a system user account that handles feature-to-feature communication. The Connections installer also creates a J2C authentication alias named connectionsAdmin; this alias is populated with the specified user and maps that user to a set of application roles.
For many advanced security scenarios (especially Tivoli Access Manager and Siteminder) and in cases where an existing FileNet server is used with Connections Content Manager, the connectionsAdmin user should be located in an LDAP directory or a common directory that is available to all services.
Connections Content Manager is not supported on IBM i. To use the Connections Metrics application on IBM i, install DB2 and Cognos Business Intelligence on an AIX server.
Prepare to configure the LDAP directory
To ensure that the Profiles population wizard can return the maximum number of records from your LDAP directory, set the Size Limit parameter in your LDAP configuration to match the number of users in the directory. For example, if your directory has 100,000 users, set this parameter to 100000. If you cannot set the Size Limit parameter, you could run the wizard multiple times. Alternatively, you could write a JavaScript function to split the original LDAP search filter, then run the collect_dns_iterate.bat file, and finally run the populate_from_dns_files.bat file.
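For example, an original filter such as (objectClass=inetOrgPerson) might be split by the first character of uid into several smaller searches, each processed with collect_dns_iterate.bat before populate_from_dns_files.bat is run once at the end; the object class and attribute shown are placeholders for your own schema:
(&(objectClass=inetOrgPerson)(uid=a*))
(&(objectClass=inetOrgPerson)(uid=b*))
(&(objectClass=inetOrgPerson)(uid=c*))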
To prepare to configure your LDAP directory with IBM WAS...
- Identify LDAP attributes to use for the following roles.
If no corresponding attribute exists, create one. You can use an attribute for multiple purposes. For example, you can use the mail attribute to perform the login and messaging tasks.
- Display name: The cn LDAP attribute is used to display a person's name in the product user interface. Ensure that the value you use in the cn attribute is suitable for use as a display name.
- Log in: Determine which attribute or attributes to use to log in to Connections, for example, uid. The values of the login name attribute must be unique in the LDAP directory.
- Messaging: Determine which attribute to use to define the email address of a person. The email address must be unique in the LDAP directory. If a person does not have an email address and does not have an LDAP attribute that represents the email address, that person cannot receive notifications.
- Global unique identifier (GUID): Determine which attribute to use as the unique identifier of each person and group in the organization. This value must be unique across the organization.
- Collect the following information about your LDAP directory before configuring it for WAS:
- Directory Type
- Primary host name
- Port
- Bind distinguished name
- Bind password
- Certificate mapping
- Certificate filter, if applicable.
- LDAP entity types or classes. Identifies and selects LDAP object classes. For example, select the LDAP inetOrgPerson object class for the Person Account entity, or the LDAP groupOfUniqueNames object class for the Group entity.
- Search base. Identifies and selects the distinguished name (DN) of the LDAP subtree as the search scope. For example, select o=ibm.com to allow all directory objects underneath this subtree node to be searched.
For example: Group, OrgContainer, PersonAccount, or inetOrgPerson.
Create the Cognos administrator account
Create a new user, or select an existing user in the LDAP directory to serve as the administrator of the IBM Cognos BI Server component (you will add the administrator credentials to a configuration script when you deploy Cognos Business Intelligence).
The Cognos administrator account must reside in the same LDAP directory used by Connections.
If you will use an existing LDAP account, take note of the user name and password. For example, if your organization already has a Cognos deployment, you might choose to use the same administrator account with Connections.
If an acceptable account does not exist already, create it now; again, note the credentials for use later.
The Cognos administrator account is specified in the cognos.admin.username setting of the cognos-setup.properties file.
The Cognos administrator user name should use the value of the LDAP attribute to be used for the User lookup field. For example, if the value for User lookup is (uid=${userID}), then use the value of the uid attribute of the account as user name.
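For example, the administrator entries in cognos-setup.properties might look like the following sketch. The password property name and values are assumptions; check the comments in the copy of the file that is shipped with your release:
cognos.admin.username=cognosadmin
cognos.admin.password=changeit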
Install IBM WAS
Install IBM WAS.
WAS Network Deployment is provided with Connections.
To establish an environment with one Deployment Manager and one or more managed nodes, choose one of the following installation options. The Connections installation wizard creates server instances that require each node to have an application server, so ensure that each node has an application server when you install WAS.
- Deployment Manager and one node on the same system
- Deployment Manager and nodes on separate systems. You can deploy one node on the same system as the dmgr but you must use separate systems for all other nodes in a cluster.
- The heap size of the Deployment Manager might need to be increased beyond the default values if an out-of-memory condition is experienced while installing Connections.
Notes for
IBM i :
- Make sure the offering tag of the WAS 8 response file includes both core.feature and ejbdeploy when installing WAS.
- After installing WAS 8, change WAS 8 to use the 64-bit version of JVM. Refer to the managesdk command for more information.
- Use dmgr as the Deployment Manager profile name.
- Specify DmgrHost in lowercase for the manageprofiles or addNode command.
- Java 2 security is not supported
Java 2 security is disabled by default in WebSphere and must remain disabled for Connections.
- In the WAS admin console, click Security > Global security.
- In the Java 2 security section, ensure that the Use Java 2 security to restrict application access to local resources option is not selected.
- Logout on HTTP Session Expiration must be disabled
This WebSphere feature, disabled by default, is only applicable to single web applications that do not integrate with other web applications through single sign-on. Since Connections is a set of tightly integrated web applications and not a single web application, this setting cannot be applied. If you have customized security in WebSphere, ensure that this WebSphere setting is not enabled as follows:
- In the WAS admin console, click Security > Global security.
- Under Custom properties, make sure that com.ibm.ws.security.web.logoutOnHTTPSessionExpire is either not listed or set to false.
To install and configure WAS...
- Install WAS Network Deployment.
Enable security when the installation wizard requests it. The administrative user ID that you create must be unique and must not exist in the LDAP repository that you plan to federate.
- Apply the available fix packs.
See the Connections system requirements topic for details.
- Configure WAS to communicate with the LDAP directory.
Perform this step on the Dmgr admin console.
Configure the LDAP for Cognos separately.
See the Configuring support for LDAP authentication for Cognos Business Intelligence topic.
- Configure Application Security after you have completely installed WAS ND.
Perform this step on the Dmgr admin console.
- Add further nodes, if required, to the cell. For each node to add to the cell...
- Log on to the node as the wasadmin user, and run...
cd WAS_HOME/profiles/profile/bin
./addNode.sh DmgrHost DmgrSoapPort -username AdminUserId -password AdminPwd
- Repeat this step for each additional node that you want to add to the cell; see the example command after these steps.
- Synchronize all the nodes.
- Confirm the changes that you made. Then set up single sign-on (SSO).
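For example, with a Deployment Manager host of dmgr.example.com and the default SOAP connector port of 8879 (the profile name, host name, and credentials are placeholders):
cd /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin
./addNode.sh dmgr.example.com 8879 -username wasadmin -password password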
Create an administrator for WAS
Use an existing FileNet deployment for Connections Content Manager: If you install Connections Content Manager or integrate with FileNet, you must use an administrative account that is a recognized account in both your Connections environment and your FileNet environment.
If you use an existing FileNet deployment, you must ensure that the Connections administrator is a valid user account on the FileNet system. The easiest way to do this is to make the Connections administrator a user in a shared LDAP directory.
If you use the Connections installation to install a new FileNet deployment for use with Connections Content Manager, having the Connections administrator be a user in both Connections and FileNet occurs automatically. With the new FileNet deployment option, both Connections and FileNet share directory configuration in a single WebSphere cell.
- Restart the Dmgr and then log into the dmgr again.
- Click Users and Groups > Administrative user roles and then click Add.
- Select Administrator from the Roles option and then search for a user.
- Select the target user and use the move arrow button to add that user's name to the Mapped to role option.
Ensure that this user ID does not have spaces in the name.
- Click OK and then click Save.
- Log out of the dmgr.
- Restart the dmgr and the nodes.
- Log into the dmgr using the new administrator credentials.
Access Windows network shares
Configure a user account to access network shares in a Connections deployment on the Microsoft Windows operating system.
This task applies only to deployments of Connections environments where the data is located on network file shares, and where you have installed WAS on Microsoft Windows and configured it to run as a service.
When WAS runs as a Windows service, it uses the local system account to log in with null credentials. When WAS tries to access a Connections network share using Universal Naming Convention (UNC) mapping, the access request fails because the content share is accessible only to valid user IDs.
When using a Windows service to start WAS, you must use UNC mapping; you cannot use drive letters to reference network shares.
To resolve this problem, configure the WAS service login attribute to log in with a user account that is authorized to access the content share.
To configure the WAS service...
- Click Start > Control Panel and select Administrative Tools > Services.
- Open the service for the first node in the list of WAS services.
- Click the Log On tab and select This account.
- Enter a user account name or click Browse to search for a user account.
- Enter the account password, and then confirm the password.
- Click OK to save your changes and click OK again to return to the Services window.
- Stop and restart the service.
- Repeat steps 3-7 for each node.
Your corporate password policy might require that you change this login attribute periodically. If so, remember to update this service configuration. Otherwise, your access to network shares might fail.
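As an alternative to the Services console, the logon account can also be set from an elevated command prompt with the Windows sc utility. In this sketch the service name and account are placeholders, and the space after each equal sign is required:
sc config "IBMWAS80Service - nodeagent" obj= "DOMAIN\wasservice" password= "AccountPassword"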
Set up federated repositories
Use federated repositories with IBM WAS to manage and secure user and group identities.
Complete the steps described in Prepare to configure the LDAP directory.
You can configure the user directory for Connections to be populated with users from more than one LDAP directory.
Ensure that you meet the following guidelines for entity-object class mapping:
- If you are using IBM Tivoli Directory Server, decide whether the deployment will rely on the LDAP groupOfNames or groupOfUniqueNames object class for group entities. WAS uses groupOfNames by default. In most cases, you need to delete this default mapping and create a new mapping for group entities using the LDAP groupOfUniqueNames object class.
- If you are using the groupOfUniqueNames object class for group entities, use the uniqueMember attribute for the group member attribute.
- If you are using the groupOfNames object class group entities, use the member attribute for the group member attribute.
To set up federated repositories in WAS...
- Log on to the Dmgr console:
http://DmgrHost:9060/ibm/console
- Click Security > Global Security.
- Select Federated Repositories from the Available realm definitions field, and then click Configure.
- If installing Connections Content Manager, set the realm name to defaultWIMFileBasedRealm.
- Click Add Base entry to Realm and then, on the Repository reference page, click Add Repository.
- On the New page, type a repository identifier, such as myFavoriteRepository into the Repository identifier field.
- Specify the LDAP directory that you are using in the Directory type field.
The following Directory type options correspond to the LDAP directories that Connections supports:
- IBM Tivoli Directory Server: IBM Tivoli Directory Server 6.1, 6.2, or 6.3
- z/OS Integrated Security Services LDAP Server
- IBM Lotus Domino: IBM Lotus Domino 8.0 or later, 8.5 or later
- Novell Directory Services: eDirectory 8.8
- Sun Java System Directory Server: Sun Java System Directory Server 7
- Windows Active Directory: Microsoft Active Directory 2008
- Microsoft Active Directory Application Mode: Microsoft Active Directory Application Mode, referred to as Active Directory Lightweight Directory Services (AD LDS) in Windows Server 2008
- Type the host name of the primary LDAP directory server in the Primary host name field. The host name is either an IP address or a DNS name.
- If your directory does not allow LDAP attributes to be searched anonymously, provide values for the Bind distinguished name and Bind password fields. For example, the Domino LDAP directory does not allow anonymous access, so if you are using a Domino directory, you must specify the user name and password with administrative level access in these fields.
- Enter the login attribute or attributes to use for authentication in the Login properties field.
Separate multiple attributes with a semicolon. For example: uid;mail.
If you are using Active Directory and you use an email address as the login, specify mail as the value for this property. If you use the samAccountName attribute as the login, specify uid as the value for this property.
- Click Apply and then click Save.
- On the Repository reference page, the following fields represent the LDAP attribute type and value pairs for the base element in the realm and the LDAP repository. (The type and value pair are separated by an equal sign (=), for example: o=example. These can be the same value when a single LDAP repository is configured for the realm or can be different in a multiple LDAP repository configuration.)
- Distinguished name of a base entry that uniquely identifies this set of entries in the realm
- Identifies entries in the realm. For example, on a Domino LDAP server: cn=john doe, o=example.
- Distinguished name of a base entry in this repository
- Identifies entries in the LDAP directory. For example, cn=john doe, o=example.
This value defines the location in the LDAP directory information tree from which the LDAP search begins. The entries beneath it in the tree can also be accessed by the LDAP search. In other words, the search base entry is the top node of a subtree which consists of many possible entries beneath it. For example, the search base entry could be o=example and one of the entries underneath this search base could be cn=john doe, o=example.
For defined flat groups in the Domino directory, enter a blank character in this field.
- Click Apply and then click Save.
- Click OK to return to the Federated Repositories page.
- In the Repository Identifier column, click the link for the repository or repositories that you just added.
- In the Additional Properties area, click the LDAP entity types link.
- Click the Group entity type and modify the object classes mapping. You can also edit the Search bases and Search filters fields, if necessary. Enter LDAP parameters that are suitable for your LDAP directory.
You can accept the default object classes value for Group. However, if you are using Domino, change the value to dominoGroup.
- Click Apply and then click Save.
- Click the PersonAccount entity type and modify the default object classes mapping. You can also edit the Search bases and Search filters fields, if necessary. Enter LDAP parameters that are suitable for your LDAP directory. Click Apply, and then click Save to save this setting.
If you are using a Domino LDAP, replace the default mapping with dominoPerson and dominoGroup object classes for person account and group entities.
- In the navigation links at the beginning of the page, click the name of the repository that you have just modified to return to the Repository page.
- If your applications rely on group membership from LDAP...
Notes:
- Click the Group attribute definition link in the Additional Properties area, and then click the Member attributes link.
- Click New to create a group attribute definition.
- Enter group membership values in the Name of member attribute and Object class fields.
- Click Apply and then click Save.
If you have already accepted the default groupOfNames value for Group, then you can also accept the default value for Member.
- If you changed objectclass for Group to dominoGroup earlier, you must add dominoGroup to the definition of Member.
- If you do not configure the group membership attribute, then the group member attribute is used when you search group membership. If you need to enable searches of nested group membership, then configure the group membership attribute.
- Consider an example of group membership attribute for using Activities: the Member attribute type is used by the groupOfNames object class, and the uniqueMember attribute type is used by groupOfUniqueNames.
- To support more than one LDAP directory, repeat steps 8-22 for each additional LDAP directory.
- Add Base Entry to Realm for each of the repositories added.
- Set the new repository as the current repository:
- Click Global Security in the navigation links at the beginning of the page.
- Select Federated Repositories from the Available realm definitions field, and then click Set as current.
- Enable login security on WAS:
The administrative user name and password are now required because you set up security on WAS.
- Select the Administrative Security and Application Security check boxes. For better performance, clear the Java 2 security check box.
- Click Apply and then click Save.
- Create an administrator for WAS:
Notes:
- Restart the dmgr and then log into the dmgr again.
- Click Users and Groups > Administrative user roles and then click Add.
- Select Administrator from the Roles box and then search for a user.
- Select the target user and click the arrow to move the user name to the Mapped to role box.
- Click OK and then click Save.
- Log out of the dmgr.
- Restart the dmgr and the nodes.
- Log into the dmgr using the new administrator credentials.
- Ensure that this user ID does not have spaces in the name.
- Set a primary administrative user:
- Click Security > Global Security.
- Select Federated Repositories from the Available realm definitions field, and then click Configure.
- Enter the user name that you mapped in the previous step in the Primary administrative user name box.
- Click Apply and then click Save.
- Log out of the dmgr and restart WAS.
- When WAS is running again, log in to the Integrated Solutions Console using the primary administrative user name and password.
- Test the new configuration by adding some LDAP users to the WAS with administrative roles.
- If you are using SSL for LDAP, add a signer certificate to your trust store by completing the following steps:
- From the WAS admin console, select SSL Certificate and key management > Key Stores and certificates > CellDefaultTrustStore > Signer Certificates > Retrieve from port.
- Type the DNS name of the LDAP directory in the Host field.
- Type the secure LDAP port in the Port field (typically 636).
- Type an alias name, such as LDAPSSLCertificate, in the Alias field.
- Click Apply and then click Save.
- To enable SSO, prepare the WAS environment...
- From the WAS admin console, select...
Security | Global security | Web and SIP security | Single sign-on (SSO) | Enabled | Interoperability Mode (optional) | Web inbound security attribute propagation
- Return to the Global security page and click...
Web and SIP security | General settings | Use available authentication data when an unprotected URI is accessed | Apply | Save
- Verify that users in the LDAP directory have been successfully added to the repository:
- From the WAS admin console, select Users and Groups > Manage Users.
- In the Search by field, enter a user name that you know exists in the LDAP directory and click Search. If the search succeeds, you have partial verification that the repository is configured correctly; however, this check does not verify the groups that a user belongs to. Note that if you keep the default Search by value of User ID, you must specify a known uid from the LDAP directory in the search field.
Results
You have configured WAS to use a federated repository.
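You can also test the same search base and filter values directly against the LDAP directory with an ldapsearch client. The following command is a sketch only: the host name, bind DN, search base, and filter are placeholders for your own values, and the flags shown follow the IBM Tivoli Directory Server client (with the OpenLDAP client, add -x for simple authentication).
ldapsearch -h ldap.example.com -p 389 -D "cn=bind user,o=example" -w bindPassword -b "o=example" "(&(objectclass=inetOrgPerson)(uid=jsmith))" cn mail uid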
Choosing login values
Determine which LDAP attribute or attributes you want to use to log in to Connections.
The following scenarios are supported:
- Single LDAP attribute with a single value
- For example: uid=jsmith.
- Multiple LDAP attributes, each with a single value
- To specify multiple attributes, separate them with a semicolon when you enter them in the Login properties field (while adding the repository to IBM WAS). For example, where uid=jsmith and mail=jsmith@example.com, you would enter: uid; mail.
- Single LDAP attribute with multiple values
- For example, mail is the login attribute and it accepts two different email addresses: an intranet address and an extranet address.
For example: mail=jsmith@myCompany.com or mail=jsmith@example.com.
- Multiple LDAP attributes, each with multiple values
- For example: uid=jsmith or uid=john_smith and mail=jsmith@example.com or mail=john_smith@example.com or mail=jsmith@MyCompany.com.
- Multiple LDAP directories
- For example: One LDAP directory uses uid as the login attribute and the other uses mail. You must repeat the steps in Set up federated repositories for each LDAP directory.
Multi-valued attributes
You can map multiple values to common attributes such as uid or mail.
If, for example, you mapped the following attributes for a user called Sample User, all three values for the user are populated in the PROFILE_LOGIN table in the Profiles database:
- mail=suser@example.com
- mail=sample_user@example.com
- mail=user_sample@example.com
A similar example for the uid property would have the following attributes:
- uid=suser
- uid=sampleuser
- uid=user_sample
By default, the population wizard allows you to choose only one attribute for logins, so you cannot select both mail and uid. You can, however, write a custom function that combines multiple attributes.
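To confirm which login values were stored for a user after population, you can query the Profiles database directly. The following query is a sketch only: it assumes a DB2 Profiles database with the default EMPINST schema, and suser is the example uid from the list above.
-- Sketch: list the login values stored for the example user suser
SELECT L.PROF_LOGIN FROM EMPINST.PROFILE_LOGIN L JOIN EMPINST.EMPLOYEE E ON E.PROF_KEY = L.PROF_KEY WHERE E.PROF_UID = 'suser';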
Custom attributes
The Profiles population wizard populates uid and mail during the population process but maps the loginID attribute to null. You can specify a custom attribute if your directory uses a unique login attribute other than, for example, uid or mail. The login value can be based on any attribute that you have defined in your repository. You can specify that attribute by setting loginID=attribute when you populate the Profiles database.
The following sample extract from the profiles-config.xml file shows the standard login attributes:
<loginAttributes>
  <loginAttribute>uid</loginAttribute>
  <loginAttribute>email</loginAttribute>
  <loginAttribute>loginId</loginAttribute>
</loginAttributes>
The value for the loginID attribute is stored in the PROF_LOGIN column of the EMPLOYEE table in the Profiles database.
See the Mapping fields manually topic.
Use Profiles or LDAP as the repository
The default login attributes that are defined in the profiles-config.xml file are uid, email, and loginID.
If you change the default Connections configuration to use the LDAP directory as the user repository, WAS maps uid as the login default.
Specify the global ID attribute for users and groups
Determine which attribute to use as the unique identifier of each person and group in the organization. This identifier must be unique across the organization.
By default, WAS reserves the following attributes as unique identifiers for the following LDAP directory servers:
- IBM Tivoli Directory Server: ibm-entryUUID
- Microsoft Active Directory: objectGUID
If you are using Active Directory, remember that the sAMAccountName attribute has a 20-character limit; other IDs used by Connections have a 256-character limit.
- IBM Domino Enterprise Server: dominoUNID
If the bind ID for the Domino LDAP does not have sufficient manager access to the Domino directory, the Virtual Member Manager (VMM) does not return the correct attribute type for the Domino schema query; the DN is returned as the VMM ID. To override VMM's default ID setting, add the following line to the <config:attributeConfiguration> section of the wimconfig.xml file:
<config:externalIdAttributes name="dominoUNID"/>
- Sun Java System Directory Server: nsuniqueid
- Novell eDirectory Server: GUID
- Custom ID: If your organization already uses a unique identifier for each user and group, you can configure Connections to use that identifier.
The wimconfig.xml file is stored in the following location:
/usr/IBM/WebSphere/AppServer/profiles/profile_name/config/cells/cell_name/wim/config
IBM recommends that you do not allow the GUID of a user to change. If you change the GUID, the user will not have access to their data unless you re-synchronize the LDAP and Profiles database with the new GUID. When you change the GUID and run the sync_all_dns batch file, the user's GUID is initially changed in the Profiles database, and then propagated to the other components using the user life cycle commands. Be sure when you are running sync_all_dns that an unchanged field is used as the hash.
See Managing user data using Profiles administrative commands.
Specify a custom ID attribute for users or groups
Specify custom global unique ID attributes to identify users and groups in the LDAP directory. This is an optional task.
By default, Connections looks for LDAP attributes to use as the global unique IDs (guids) to identify users and groups in the LDAP directory. The identifiers assigned by LDAP directory servers are usually unique for any LDAP entry instance. If the user information is deleted and re-added, or exported and imported into another LDAP directory, the guid changes. Changes like this are usually implemented when employees change status, a directory record is deleted and added again, or when user data is ported across directories.
When the guid of a user changes, you must synchronize the LDAP with the Profiles database before that user logs in again. Otherwise, the user will have two accounts in Connections and the user's previous content will appear to be lost as it is associated with the previous guid. If you assign a fixed attribute to each record, you can minimize the possibility of accidentally introducing dual accounts for a user in Connections.
The custom ID attribute must be chosen carefully and must have certain properties as follows:
- The ID must be static and unique. It must not be reassigned across users and groups in the directory.
- The custom ID chosen must be globally unique across all time.
In other words, the value of an ID must not be assigned to one user today and then a different user sometime in the future. The ID is used to reference a user for security and access control, so reuse of an ID may accidentally grant a user permissions to content previously available to a prior user with the same ID.
- The custom ID chosen must be stable for a particular user over the life of that user.
That is, a user should not have one ID today and a different ID at some time in the future. Changing a user's ID might result in loss of access to content and references to that user reporting that the user is not found or no longer exists.
- These requirements generally make a login name (such as jsmith) or an email address (jsmith@example.com) a poor choice for a custom ID.
Because these attributes are frequently recycled as different users join and leave an organization, jsmith might not reference the same user today and in the future. Because this attribute is used in access control lists, using a login name, email address, or similar recycled attribute might result in a future user getting access to a current user's private communities and content.
Connections stores that jsmith has access to content if jsmith is used as the value of the ID. Who jsmith refers to might change over time; if it does, a new user might unintentionally get access to content because a prior jsmith had access.
- The ID must not exceed 256 characters in length. To achieve faster search results, use a fixed-length attribute for the ID.
If you are planning to install the Files or Wikis application, the ID cannot exceed 252 characters in length. If you are planning to install Connections Content Manager, the string representation of the ID cannot exceed 507 bytes. Values of IDs are compared frequently, so you should choose reasonably compact values for performance. The lengths of default ID values range from approximately 16 to 36 bytes.
- The ID must have a one-to-one mapping per directory object. You cannot use an attribute with multiple values as a unique ID.
As long as the value is stable over time and not reused, a good choice for a custom ID might be a global employee or customer ID that you generate and assign to individuals in your directory. An LDAP's GUID, the default for the ID attribute, might or might not be a good choice, depending on how it is populated in your directory and how your organization uses LDAP. If you frequently delete and then recreate LDAP entries for the same user but want the old and new entry to represent the same user, you might need to specify a custom ID. Other considerations for choosing a custom ID are as follows:
- Avoid using attributes containing family names or other information that can change due to personal events.
An employee's family name might change as a result of a change in marital status or other reasons. Such a change obviously does not affect the employee's security role, but it would have the unintended effect of causing the employee's ID to change, leading to a loss of access.
- Avoid using attributes containing a work group name or other information that can change due to organizational events. A work group's name or reporting structure might change, but this does not necessarily affect the work group's or its members' security role and certainly does not mean the user should be a new user in the system. So these events should not impact ID values, and such attributes are not good ID attribute choices.
- The template or procedure to recreate or restore users and groups must ensure identical ID values, including the case of characters and the use of any filler spaces. Most LDAP servers can be configured to be case-insensitive and to ignore filler spaces, but this is not always the case.
- ID values for distinct users and groups should differ by more than just the case of characters. Depending on the LDAP server and your configuration, your system might reject an entry if it contains the same value of an existing entry except for characters in different cases. Or it might add a system-generated prefix to make the new entry distinguishable. Either way leads to undesirable results.
The wimconfig.xml file governs a single ID attribute for all supported objects such as users, groups, and organizations in WAS. You can use the LotusConnections-config.xml file to override the ID attribute in the wimconfig.xml file. For example, you could use the wimconfig.xml file to specify the ibm-entryUUID attribute as the ID Key attribute for users and groups in all applications running on WAS, and then modify the LotusConnections-config.xml file to specify the employeeID as the ID Key attribute for Connections applications.
Also refer to Manage users for best practices for keeping the Connections membership tables up-to-date with the changes that occur in your corporate directory.
You can change the default setting to use a custom ID to identify users and groups in the directory.
A custom ID must meet the following requirements:
- The ID must be static and unique. It must not be reassigned across users and groups in the directory.
- The ID must not exceed 256 characters in length. To achieve faster search results, use a fixed-length attribute for the ID.
If you are planning to install the Files or Wikis application, the ID cannot exceed 252 characters in length.
- The ID must have a one-to-one mapping per directory object. You cannot use an attribute with multiple values as a unique ID.
To specify a custom attribute as the unique ID for users or groups...
- From the VMM_HOME/model directory, open the wimxmlextension.xml file. If no file with this name exists, create one.
VMM_HOME is the directory where the Virtual Member Manager files are located. This location is set to either the wim.home system property or the user.install.root/config/cells/local.cell/wim directory.
- Add the definitions of the new property types and the entity types to which they apply. Ensure that the XML is well-formed and conforms to the schema defined in wimschema.xsd.
- To select a single ID attribute for both users and groups, use the following sample XML, which defines a new property type called enterpriseID and adds this property type to the PersonAccount and Group entity types:
<?xml version="1.0" encoding="UTF-8"?> <sdo:datagraph xmlns:sdo="commonj.sdo" xmlns:wim="http://www.example.com/websphere/wim"> <wim:schema> <wim:propertySchema nsURI="http://www.example.com/websphere/wim" dataType="STRING" multiValued="false" propertyName="enterpriseID"> <wim:applicableEntityTypeNames>PersonAccount</wim:applicableEntityTypeNames> </wim:propertySchema> <wim:propertySchema nsURI="http://www.example.com/websphere/wim" dataType="STRING" multiValued="false" propertyName="enterpriseID"> <wim:applicableEntityTypeNames>Group</wim:applicableEntityTypeNames> </wim:propertySchema> </wim:schema> </sdo:datagraph>- To use two different ID attributes, one for users and a different one for groups, use the following sample XML, which defines a property type called customUserID and adds it to the PersonAccount entity type, and also defines a property type called customGroupID and adds it to the Group entity type:
<?xml version="1.0" encoding="UTF-8"?> <sdo:datagraph xmlns:sdo="commonj.sdo" xmlns:wim="http://www.example.com/websphere/wim"> <wim:schema> <wim:propertySchema nsURI="http://www.example.com/websphere/wim" dataType="STRING" multiValued="false" propertyName="customUserID"> <wim:applicableEntityTypeNames>PersonAccount</wim:applicableEntityTypeNames> </wim:propertySchema> <wim:propertySchema nsURI="http://www.example.com/websphere/wim" dataType="STRING" multiValued="false" propertyName="customGroupID"> <wim:applicableEntityTypeNames>Group</wim:applicableEntityTypeNames> </wim:propertySchema> </wim:schema> </sdo:datagraph>The customUserID and customGroupID properties are not related to the properties of the login ID.
- Add the new property types to each repository adapter.
Open the wimconfig.xml file in a text editor.
PROFILE_HOME/config/cells/cell_name/wim/config
- Find and edit the <config:attributeConfiguration> element, adding one of the following texts:
- To use a single ID attribute for both users and groups, using a property called enterpriseID, add the following text:
<config:attributeConfiguration>
  <config:externalIdAttributes name="enterpriseID" syntax="String"/>
</config:attributeConfiguration>
- To use two different ID attributes, one for users and the other for groups, add the following text:
<config:attributeConfiguration>
  <config:attributes name="userPassword" propertyName="password"/>
  <config:attributes name="customUserID" propertyName="customUserID"/>
  <config:attributes name="customGroupID" propertyName="customGroupID"/>
  <config:propertiesNotSupported name="homeAddress"/>
  <config:propertiesNotSupported name="businessAddress"/>
</config:attributeConfiguration>
- Save and close the wimconfig.xml file.
If you specified different ID attributes for users and groups, complete the steps in the Configuring the custom ID attribute for users or groups topic in the Post-installation tasks section of the product documentation. The steps in that task configure Connections to use the custom ID attributes specified in this task.
When you map fields in the Profiles database, ensure that you add the custom ID attribute to the PROF_GUID field in the EMPLOYEE table.
See the Mapping fields manually topic.
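To confirm the mapping after the Profiles database is populated, you can spot-check the PROF_GUID values directly. The following query is a sketch only; it assumes a DB2 Profiles database with the default EMPINST schema used by Profiles.
-- Sketch: confirm that PROF_GUID holds the custom ID values
SELECT PROF_UID, PROF_GUID FROM EMPINST.EMPLOYEE FETCH FIRST 5 ROWS ONLY;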
Create databases
Create databases for the applications that you plan to install. You can use the database wizard or run the SQL scripts that are provided with Connections.
Each Connections application requires its own database, except Moderation, News, and Search. The Moderation application does not have an associated database or content store, while the News and Search applications share the Home page database.
The database wizard automates the process of creating databases for the applications that you plan to install. It is a more reliable method for creating databases because it validates the databases as you create them.
Consult your database documentation for detailed information about preparing your databases.
You must have already created and started a database instance before you can create databases. If you install the databases for Connections Content Manager, a new feature in the 4.5 release, two databases are created: the Global Configuration Database and the Object Store.
Complete the procedures that are appropriate for the deployment:
Create multiple database instances
Create multiple instances of a database for a more versatile database environment.
This is an optional procedure. If you need to have only one database instance (in Oracle terminology, one database), you can skip this task.
(Windows only) Complete the following steps for each instance that you plan to create:
- Create a new user and add it to the Administrators group.
If you are using DB2, add the new user to the DB2ADMNS group as well.
- Remove the user account from the Users group.
- Grant rights to the new user:
- Click Start > Run and enter secpol.msc.
- Expand Local Policies and click User Rights Assignment.
- Open each of the following rights, click Add User or Group, and add the new user:
- Act as part of the operating system
- Adjust memory quotas for a process (also called Increase quotas for a process on some versions of Windows)
- Create a token object
- Debug programs
- Lock pages in memory
- Log on as a service
- Replace a process level token
The new account uses the local system as the domain.
A database environment with multiple instances provides several benefits:
- The ability to use different instances for development and production.
- Restricted access to sensitive information.
- An optimized configuration for each instance.
For example, if you need to make changes to one of the instances, you can restart just that instance instead of restarting the whole system. Similarly, if you need to take an instance offline, only the databases that are hosted on that instance are unavailable during the outage, while your other databases are unaffected.
Multiple instances require additional system resources.
To create multiple instances of a database...
Choose your database type:
- DB2 Notes:
- For each instance that you create, log in as the instance owner before creating the instance.
- Use the DB2 Command Line Processor to enter commands.
- After creating the instance, add the instance to the user environment variable. The instance is then visible in the DB2 Control Center.
- AIX:
An instance called db2inst1 is created during DB2 installation.
- Create a group for DB2:
mkgroup db2iadm1
- Create a user for DB2:
mkuser groups=db2iadm1 db2instN
where db2instN is the name of a user. DB2 prompts you to enter a password for the user. Repeat this step to create enough users to match the number of database instances.
- Create DB2 instances:
Log in as the root user and go to /opt/IBM/db2/V9.5/instance.
./db2icrt -u db2instN db2instN
where db2instN is the name of a user and also the name of an instance. Repeat this step to create enough instances to match the number of databases.
- Set the port number of the instance:
Edit the /etc/services file and add the following line:
db2c_instance_name instance_port/tcp
where instance_name is the name of the instance and instance_port is the port number of that instance. Repeat this step for each instance.
- Set the communication protocols for the instance:
db2 update database manager configuration using svcename db2c_instance_name
db2set DB2COMM=tcpip
db2stop
db2start
Repeat this step for each instance.
- Edit your firewall configuration to allow the new instances to communicate through their listening ports.
- Linux:
An instance called db2inst1 is created during DB2 installation, along with three users: db2inst1, db2fenc1, and dasusr1.
- Create groups for DB2:
groupadd -g 999 db2iadm1
groupadd -g 998 db2fadm1
groupadd -g 997 dasadm1
- Create users for DB2 in the db2iadm1 group:
useradd -u 1100 -g db2iadm1 -m -d /home/db2instN db2instN -p db2instX
where db2instN is the name of a user and db2instX is the password for that user. Create enough users to match the number of database instances.
- Create the db2fenc1 user for DB2 in the db2fadm1 group:
useradd -u 1101 -g db2fadm1 -m -d /home/db2fenc1 db2fenc1 -p db2instX
- Create the dasusr1 user for DB2 in the dasadm1 group:
useradd -u 1102 -g dasadm1 -m -d /home/dasadm1 dasusr1 -p db2instX
- Create new DB2 instances:
Log in as the root user and go to /opt/ibm/db2/V9.5/instance.
./db2icrt -u db2fenc1 db2instN
Create enough instances to match the number of databases.
- Set the port number of the instance:
Edit the /etc/services file and add the following line:
db2c_instance_name instance_port/tcp
where instance_name is the name of the instance and instance_port is the port number of that instance. Repeat this step for each instance.
- Log in as the database instance and set the communication protocols for the instance:
su - db2instN
db2 update database manager configuration using svcename db2c_instance_name
db2set DB2COMM=tcpip
db2stop
db2start
Repeat this step for each instance.
- Edit your firewall configuration to allow the new instances to communicate through their listening ports.
- Windows:
- Create an instance by running the following command:
db2icrt instance_name -s ese -u db2_admin_user
where instance_name is the name of the instance and db2_admin_user is the user account for that instance.
- Set the port number of the instance:
Edit the C:\WINDOWS\system32\drivers\etc\services file and add the following line:
db2c_instance_name instance_port/tcp
- Set the current instance parameter:
set DB2INSTANCE=instance_name
- Set the communication protocols for the instance:
db2 update database manager configuration using svcename db2c_instance_name
db2set DB2COMM=npipe,tcpip
db2stop
db2start
- Edit your firewall configuration to allow the new instances to communicate through their listening ports.
- Oracle:
Each database is a database instance.
Use the Oracle Database Configuration Assistant (DBCA) to create a new Oracle database:
- Open the DBCA tool:
- AIX or Linux:
- Change login user to oracle
- $ export ORACLE_HOME=...
- $ export PATH=$PATH:$ORACLE_HOME/bin
- $ export DISPLAY=hostname:displaynumber.screennumber
where hostname:displaynumber.screennumber represents the client system, monitor number, and window number. For example: localhost:0.0
- $ dbca &
- Windows:
- Click Start
- Select Oracle > Oracle_home_name > Configuration and Migration Tools > Database Configuration Assistant.
where Oracle_home_name is the Oracle home on your system. For example: OraDB10g_Home1.
- On the Operations page, accept the default option to Create a database and click Next.
- On the Database Templates page, accept the General Purpose default option and click Next.
- On the Database Identification page, enter LSCONN in the Global Database Name and SID fields and click Next.
- On the Management Options page, accept the default option to Configure the database with Enterprise Manager and click Next.
- On the Database Credentials page, enter the database password and click Next.
- On the Storage Options page, accept the File System storage option and click Next.
- On the Database File Locations page, accept the Database File Locations from Template default option and click Next.
- On the Recovery Configuration page, accept the Specify Flash Recovery Area default option and click Next.
- On the Database Content page, accept the defaults and click Next.
- On the Initialization Parameters page, click the Character Sets tab and select the Use Unicode (AL32UTF8) option. Click Next.
- On the Database Content page, accept the defaults and click Next.
- On the Creation Options page, accept the Create Database default option and click Next.
- SQL Server
- Run the SQL Server installation wizard.
On the Instance Name panel of the installation wizard, select Named instance, and then specify a new instance name in the field.
- Edit your firewall configuration to allow the new instances to communicate through their listening ports.
- Ensure that Named Pipes is enabled in the SQL Server Network Configuration for all instances. Refer to your SQL Server documentation.
- Use the same collation that you are using for the application databases; that is, Latin1_General_BIN. Ensure that the ancillary databases, such as the master, model, tempdb, and msdb databases, use that collation (a quick check is sketched after this list).
- For Authentication mode, use Mixed Mode (Windows Authentication and SQL Server Authentication).
- If you receive any warnings or errors from the System Configuration Check dialog, correct them before continuing with the SQL Server instance installation.
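A quick way to confirm the collation requirement mentioned in this list is to run the following queries in SQL Server Management Studio. They are a sketch only; they report the current settings without changing them.
-- Sketch: report the server collation and the collation of each database
SELECT SERVERPROPERTY('Collation') AS ServerCollation;
SELECT name, collation_name FROM sys.databases;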
When you create multiple database instances, install the databases on each instance. If you are using the database wizard to install the databases, prepare and run the wizard once for each instance. If you are using the scripts to install the databases, run the scripts once for each instance.
Register the DB2 product license key
Register the DB2 product license key for the version of DB2 that is included with Connections.
Only perform this procedure if you are using the version of DB2 that was included with Connections. If you installed Connections and DB2 from the product DVD, the license key was already provided.
If you used DB2 with an earlier version of Connections, your installation of DB2 is already registered and you can skip this task.
Install DB2 before beginning this task but do not create any application databases until after you have completed this task.
To register the DB2 product license key...
- Navigate to the IBM Passport Advantage web site and log in.
If you installed Connections and DB2 from the product DVD, the license key was already provided. You can skip Steps 1-3 and begin at Step 4.
- Choose Find by Part Number and search for part number CI71NML.
- Download the part and extract the DB2_ESE_Restricted_QS_Activation_10.1.zip file, making a note of the download location.
- Log into DB2 using an ID with SYSADM authority.
- Open a command prompt, change to the directory where the license file is stored, and run the following command:
On the DVD image, the license is stored in the DB2.License directory.
db2licm -a path_to_lic_file/db2ese_o.lic
where path_to_lic_file is the directory to which you extracted the db2ese_o.lic file.
- Verify that the license is registered by running the following command:
db2_install_dir/adm/db2licm -l
If the license is correctly registered, the details of your DB2 installation are displayed.
- Restart DB2.
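For reference, the full registration sequence on Linux might look like the following. The directory names are examples only and depend on where you extracted the license file and where DB2 is installed; run db2stop and db2start as the instance owner.
cd /tmp/DB2_license // Example directory containing the extracted db2ese_o.lic file
/opt/IBM/db2/V10.1/adm/db2licm -a db2ese_o.lic // Register the license
/opt/IBM/db2/V10.1/adm/db2licm -l // Verify the registration
db2stop
db2start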
Create your Connections application databases.
Create a dedicated DB2 user
Create a dedicated IBM DB2 database user named lcuser with restricted privileges.
Perform this task to create a DB2 database user, called lcuser, with a limited set of privileges. The scripts that are provided with Connections grant the appropriate rights to lcuser and are written with the assumption that the user name is lcuser. Always use lowercase characters for this user name.
To create a dedicated DB2 database user named lcuser...
Choose your operating system:
- AIX or Linux:
- Log into the DB2 server as the root user, and then type the following command to create a new user:
useradd -u 1004 -g db2iadm1 -m -d /db2home/lcuser lcuser
echo "lcuser:password" | chpasswd
where password is the new password for the new user. The commands assume that your DB2 users group is db2iadm1 and that your home directory for DB2 is db2home. If these values are different in your environment, modify the commands accordingly.
- Windows
- Click Start > Control Panel and select User Account > Add or Remove User Accounts > Create a New Account.
- Enter LCUSER for the name of the new account. The account type should be administrator.
- Click the newly created account, and then click Create a Password to assign a password to the new account.
- Right-click Computer and select Manage from the menu.
- Select Configuration > Local Users and Groups > Users, right-click LCUSER, and then select Properties.
- On the Member Of tab, click Add and enter DB2USERS in the Enter the object names to select field.
- Click Check Names and then click OK.
- Click OK again to save your changes.
If the DB2USERS group is not found, extended security for DB2 on Windows might not be enabled.
See the DB2 documentation for information about extended Windows security using the DB2ADMNS and DB2USERS groups.
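If you prefer the command line, the following sketch performs roughly the same steps from an elevated Windows command prompt; the password is a placeholder, and the commands assume that DB2 extended security (and therefore the DB2USERS group) is enabled.
REM Sketch: create the lcuser account and add it to the required groups
net user lcuser yourPassword /add
net localgroup Administrators lcuser /add
net localgroup DB2USERS lcuser /add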
Configure the DB2 databases for unicode
You must configure each DB2 database used in the Connections deployment for unicode.
Configuring the DB2 databases for unicode ensures that DB2 tools such as export and import do not corrupt unicode data.
Perform the following steps on each DB2 instance in the deployment:
- Stop any WebSphere server connected to the DB2 database you are configuring.
- Log in to the DB2 server as the DB2 instance owner.
- Open the DB2 command window.
- Run the following commands:
db2set DB2CODEPAGE=1208 db2stop force db2start
- Run the following commands to check the new configuration:
db2set
This command should return DB2CODEPAGE=1208. If it does not, the instance is not configured correctly; repeat Step 4.
Create databases with the database wizard
Use the database wizard to create databases for the Connections applications. You must be logged in with the database administrator account.
Prepare the database wizard
Before you can use the wizard to create databases for your Connections deployment, prepare the database server.
Ensure that you have given the necessary permissions to the user IDs that need to log in to the database system and access the Connections Wizards directory.
Notes:
- If you are planning to create multiple database instances, prepare and run the database wizard once for each instance.
(DB2 only) Create a dedicated IBM DB2 database user named lcuser.
See the Creating a dedicated DB2 user topic.
(Oracle only) Ensure that the Statement cache size for the data sources on WAS is no larger than 50. A higher value could lead to Out Of Memory errors on the application server instance.
(AIX only) If you are downloading the wizard, the TAR program available by default with AIX does not handle path lengths longer than 100 characters. To overcome this restriction, use the GNU file archiving program instead. This program is an open source package that IBM distributes through the AIX Toolbox for Linux Applications at the IBM AIX Toolbox web site. Download and install the GNU-compatible TAR package. You do not need to install the RPM Package Manager because it is provided with AIX.
After you have installed the GNU-compatible TAR program, change to the directory where you downloaded the Connections TAR file, and enter the following command to extract the files from it:
gtar -xvf Lotus_Connections_wizard_aix.tar
This command creates a directory named after the wizard.
(AIX only) Download and install the following packages from the AIX Toolbox for Linux Applications webpage:
gtk2-2.10.6, pango-1.14.5, fontconfig-2.4.2, pkg-config-0.19, libjpeg-6b, freetype2-2.3.9, expat-2.0.1, zlib-1.2.3, xft-2.1.6, xcursor-1.1.7, glib-1.2.10, glib2-2.12.4, atk-1.12.3, gettext-0.10.40, libpng-1.2.32, and libtiff-3.8.2
Some of these packages have dependencies on other packages. The AIX package installer alerts you to any additional packages that might be required.
To prepare the database wizard...
- Log in to your database server as the root user or system administrator.
- (AIX/Linux only) Grant display authority to all users by running the following commands under the root user or system administrator:
xhost + // Grant display authority to other users
If granting display authority to all users is a security concern for you, change the command to grant display authority to a specific user or users.
echo $DISPLAY // Echo the value of DISPLAY under the root user
- (AIX/Linux only) Ensure that the current user is qualified or else switch to a qualified user by running the following commands:
- DB2
su - db2inst1 // db2inst1 is the default DB2 administrator
export DISPLAY=hostname:displaynumber.screennumber
where hostname:displaynumber.screennumber represents the client system, monitor number, and window number. For example: localhost:0.0
xclock // Display the clock, confirming that the current user has display authority and can run the wizard
// Press Ctrl + C to close the clock and return to the command prompt
- Oracle
Before running the database wizard, create an Oracle database instance.
su - oracle // oracle is the Oracle database administrator
export DISPLAY=hostname:displaynumber.screennumber
xclock //Display the clock, confirming that the current user has display authority and can run the wizard
// Press Ctrl + C to close the clock and return to the command prompt
where hostname:displaynumber.screennumber represents the client system, monitor number, and window number. For example: localhost:0.0
If you can see the xclock application running after issuing the xclock command, then you have permission to run the database wizard. If you cannot see the xclock application, run the xhost + command as root user and then run the su command.
- Start the database instance:
Run the database commands under the user account that has administrative access to the database.
- AIX or Linux:
Windows:
Windows registers most database instances as a service. You can start or stop a database service manually if necessary.
- DB2
- Log in to the Control Center.
- In Object View, right-click the database instance.
- In the menu, click Start to start the database manager.
- Oracle
- Open the Windows Services panel: Click Start > All Programs > Administrative Tools > Services.
- Right-click the Oracle service.
- From the menu, click Start to start the database service.
- SQL Server
- Open SQL Server Management Studio.
- Connect to the database instance.
- Start the database instance from the studio.
- Copy the Wizards directory from the Connections installation media to the system that hosts the database server.
Notes:
- If you have multiple instances, exit from the current instance and repeat this step for each instance.
- (AIX/Linux only) Ensure that users other than root have permission to access the Connections Wizards directory.
- (DB2 only)
See the Set the current instance environment variables topic in the DB2 information center.
Use the database wizard
Use the database wizard to create databases for the Connections applications that you plan to install.
Before using the wizard for the first time, you must complete the steps described in the Preparing the database wizard topic.
When you are creating a database either with the database wizard or SQL scripts, you must log into the system where the database is hosted with the database administrator account. The default values for DB2 are db2admin on Microsoft Windows, and db2inst1 on Linux and AIX. For Oracle, the default value on AIX/Linux is oracle, and system administrator on Windows. For SQL Server, the default value is the system administrator.
Oracle and SQL Server connect to Connections databases with the user accounts that are configured during database creation. The passwords of those user accounts are defined later in this task.
(Oracle only) Ensure that the Statement cache size for the data sources on WAS is no larger than 50. A higher value could lead to Out Of Memory errors on the application server instance.
(DB2 only) If you use only one database instance and if that instance includes other databases besides Connections, configure the numdb parameter to match the total number of databases on the instance.
- If you migrated from Connections 4.0, the numdb parameter was set to 14, the maximum number of Connections 4.0 databases. If the instance has additional databases, increase the value of the numdb parameter to match the total number of databases on the instance. To change the parameter:
db2 UPDATE DBM CFG USING NUMDB nn
where nn is the total number of databases on the instance (a quick check is sketched after these notes).
- Before removing (or dropping) a database, stop Connections first to ensure that no database connection is in use; otherwise you will not drop the user and the database removal will not occur.
- If you run dbWizard.bat but the database wizard does not launch, check whether you have 32-bit DB2 installed. You need to have 64-bit DB2 on a 64-bit system.
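To check the current numdb setting before updating it, you can display the database manager configuration. The following commands are a sketch only; the value 20 is an example.
db2 get dbm cfg | grep -i numdb // Display the current NUMDB value (on Windows, pipe to findstr /i numdb instead)
db2 UPDATE DBM CFG USING NUMDB 20 // Example: allow up to 20 concurrently active databases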
DB2 uses a user account called lcuser. If you are creating a DB2 database with SQL scripts, you must manually create the lcuser account on your operating system and then run the appGrants.sql script to grant the appropriate privileges to the lcuser account. When you use the database wizard, this script runs automatically.
See the Creating a dedicated DB2 user topic.
Notes:
- If you are using Linux on IBM System z with the DASD driver, the SQL scripts are located in the connections.s390.sql/application_subdirectory directory of the Connections setup directory or installation media.
- If you are using Linux on IBM System z with the SCSI driver, back up the connections.s390.sql directory and rename the connections.sql directory to connections.s390.sql.
(AIX only) Download and install the following packages from the AIX Toolbox for Linux Applications webpage:
gtk2-2.10.6, pango-1.14.5, fontconfig-2.4.2, pkg-config-0.19, libjpeg-6b, freetype2-2.3.9, expat-2.0.1, zlib-1.2.3, xft-2.1.6, xcursor-1.1.7, glib-1.2.10, glib2-2.12.4, atk-1.12.3, gettext-0.10.40, libpng-1.2.32, and libtiff-3.8.2
Some of these packages have dependencies on other packages. The AIX package installer alerts you to any additional packages that might be required.
Use the Connections database wizard to create, update, and remove databases.
You can review the scripts that the wizard executes by looking in the connections.sql directory in the installation media. On DB2, the commands are shown in the log that the wizard creates. On Oracle and SQL Server, the log shows the results of the commands.
To create databases with the wizard...
- (DB2 on Windows 2008 64-bit.) On Windows 2008, you must perform DB2 administration tasks with full administrator privileges.
- Logged in as the instance owner, open a command prompt and change to the DB2 bin directory. For example: C:\IBM\SQLLIB\BIN.
- Enter the following command: db2cwadmin.bat. This command opens the DB2 command line processor while also setting your DB2 privileges.
- From the Connections Wizards directory, open the following file to launch the wizard:
- AIX: ./dbWizard.sh
- Linux: ./dbWizard.sh
- Windows: dbWizard.bat
- Click Next to continue.
- Select the option to Create a database and click Next.
- Enter the details of the database you wish to create and then click Next:
- Select a database type.
- Select the location of the database.
- Specify a database instance.
- Select an application and click Next.
Notes:
- If you are creating databases in this task, only applications that have not already been installed to a database instance are available. If you are updating databases, you can only choose applications that are already installed.
- The News and Search databases are contained in the Home page database.
- The Metrics application has some additional requirements:
- If you select the Metrics application, also select the IBM Cognos application.
If you have already deployed Cognos components and have a Cognos Content Store available, you do not need to create another.
- If you do create a Cognos Content Store, only the container is created now; the tables are created when you start the Cognos BI Server for the first time.
- You do not need a dedicated database server for Cognos or for the Metrics application; you can host the Metrics database and the Cognos Content Store on the same database server as the other Connections databases.
- Even if you do not plan to deploy Cognos yet, you should create the Metrics database and the Cognos Content Store now so that Connections can begin collecting event data immediately.
- Connections Content Manager requires two databases: Global Configuration Database and Object Store.
- (Oracle and SQL Server databases only) Enter the password for the databases and then click Next. Choose one of the following options:
- Use the same password for all applications.
Enter the password in the Password and Confirm password fields.
- Create different passwords for each application.
Enter a different password for each application database, and confirm the password in the confirm field.
- (SQL Server only) Specify the location of the database files and then click Next. Choose one of the following options:
- Use the same database file location for all applications. Enter the location of the database or click Browse to choose a location.
- Use different database file locations for each application. For each application, enter the location of the database file or click Browse to choose a location.
- Review the Pre Configuration Task Summary to ensure that the values you entered on previous pages in the wizard are correct.
To make a change, click Back to edit the value. Click Create to begin creating databases. To preview each SQL command before it is executed by the wizard, click Show detailed database commands. If you choose to save the commands, you must have write-access to the folder you choose to save them in.
- Review the Post Configuration Task Summary panel and, if necessary, click View Log to open the log file.
- Click Finish to exit the wizard.
If the wizard returns an error indicating that it failed to create the Metrics database or encountered errors while creating it, see the following IBM technote for assistance: Metrics database creation with error message and Cognos cube build failed.
Use the database wizard using response file
Run the database wizard using response file when you need an identical installation on multiple servers.
Ensure that the wizard has created the response.properties file in the user_settings/lcWizard/response/dbWizard directory.
To create a response file, run the wizard in standard mode and specify that you would like to create a response file. You can modify the existing response file or create your own, using a text editor.
See the Database wizard response file topic.
(DB2 only) If you use only one database instance and if that instance includes other databases besides Connections, configure the numdb parameter to match the total number of databases on the instance.
- If you migrated from Connections 4.0, the numdb parameter was set to 14, the maximum number of Connections 4.0 databases. If the instance has additional databases, increase the value of the numdb parameter to match the total number of databases on the instance. To change the parameter:
db2 UPDATE DBM CFG USING NUMDB nn
where nn is the total number of databases on the instance.
- Before removing (or dropping) a database, stop Connections first to ensure that no database connection is in use; otherwise you will not drop the user and the database removal will not occur.
- If you run dbWizard.bat but the database wizard does not launch, check whether you have 32-bit DB2 installed. You need to have 64-bit DB2 on a 64-bit system.
(Oracle only) Ensure that the Statement cache size for the data sources on WAS is no larger than 50. A higher value could lead to Out Of Memory errors on the application server instance.
To create databases using response file...
- (DB2 on Windows 2008 64-bit.) On Windows 2008, you must perform DB2 administration tasks with full administrator privileges.
- Logged in as the instance owner, open a command prompt and change to the DB2 bin directory. For example: C:\IBM\SQLLIB\BIN.
- Enter the following command: db2cwadmin.bat. This command opens the DB2 command line processor while also setting your DB2 privileges.
- From a command prompt, change to the directory where the wizard is located.
- Launch the wizard by running the following command:
- AIX: ./dbWizard.sh -silent response_file
- Linux: ./dbWizard.sh -silent response_file
- Windows: dbWizard.bat -silent response_file
where response_file is the file path to the response file.
If the path to the response_file contains a space, this parameter must be enclosed in double quotation marks (").
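For example, on Linux the command might look like the following; the directory and file names are placeholders for your own wizard location and response file.
cd /opt/IBM/ConnectionsWizards
./dbWizard.sh -silent "/opt/IBM/ConnectionsWizards/response/dbWizard_response.properties"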
After the wizard has finished, check the log file in the dbUser_home/lcWizard/log/dbWizard directory for messages. The log file name uses the time as a postfix. For example: dbConfig_20110308_202501.log.
The database wizard response file
The Connections database wizard can record your input in a response file used for silent installations.
You can start the wizard from a command prompt and then pass the response file in as a parameter. The wizard uses the values in the response file rather than requiring you to interact with it.
There is a sample response file called dbWizard_response.properties in the Wizards/samples directory in the Connections set-up directory or installation media.
The response.properties file collects a specific set of values. Those values are described in the following table:
Table 14. Typical properties of the response.properties file
- dbtype
Value: db2 | oracle | sqlserver
Description: The database system to use. Choose from IBM DB2, Oracle, or Microsoft SQL Server.
- dbInstance
Value: database_instance_name
Description: The instance name of the database to use. For example:
- DB2 (DB2 on Windows)
- db2inst1 (DB2 on AIX or Linux)
- orcl (Oracle)
- \\ (SQL Server); the first '\' is an escape character.
- dbHome
Value: database_location
Description: File path to the database. If you encounter an Invalid database instance error, the file path to the database might be incorrect. If the dbHome value is, for example, /home/oracle/oracle/product/10.2.0/db_1/, you must remove the final / character; this limitation applies only to Oracle databases. On Windows, you need to add an escape character '\'. For example, activities.filepath=C\:\\SQLSERVER.
- action
Value: create | delete | upgrade
Description: The action performed by the wizard.
- dbVersion
Value: DB2: 9 or 10 | Oracle: 11 | SQL Server: 10
Description: The major version number of the database type.
- applications
Value: activities, blogs, cognos, communities, dogear, files, forum, homepage, libraries, metrics, mobile, profiles, wikis
Description: The Connections applications for which the wizard creates databases. Use a comma (,) character to separate multiple applications.
If you are creating Oracle or SQL Server databases, you must add the additional properties described in the following table:
- <application>.password
Value: password for the application database
Description: Password for the applications. The passwords are removed from the response file after the wizard has finished processing.
- <application>.filepath (SQL Server only)
Value: file path to the directory where database files are stored
Description: File path to the database file location. On Windows, you must add an escape character '\'. For example, activities.filepath=C\:\\SQLSERVER.
If you are upgrading databases and a JDBC connection is needed, you must add the additional properties described in the following table:
- port
Recommended value: DB2 default is 50000; Oracle default is 1521; SQL Server default is 1433
Description: Database server port for starting JDBC.
- administrator
Recommended value: DB2 default on Windows is db2admin; DB2 default on AIX/Linux is db2inst1; Oracle default is system; SQL Server default is sa
Description: Database administrator account for starting JDBC.
- adminPassword
Description: Database administrator password for starting JDBC.
- jdbcLibPath (SQL Server only)
Description: JDBC library path for starting JDBC. On Windows, you must add an escape character '\'. For example, jdbcLibPath=C\:\\sqljdbc4.jar.
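The following is an illustrative sketch of a response file for a DB2 deployment, assembled from the properties described above. The instance name, path, and application list are examples only; compare it with the sample dbWizard_response.properties file in the Wizards/samples directory and adjust the values for your environment.
dbtype=db2
dbInstance=db2inst1
dbHome=/home/db2inst1
action=create
dbVersion=10
applications=activities,blogs,cognos,communities,dogear,files,forum,homepage,metrics,mobile,profiles,wikis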
Create databases with SQL scripts
Create Connections databases using the SQL scripts that are provided on the installation media.
Using the SQL scripts to create databases for Connections takes longer than using the wizard and does not validate the databases, but might be necessary in some circumstances.
Create IBM DB2 databases manually
Create IBM DB2 databases with SQL scripts instead of using the Connections database wizard.
Use this procedure if you do not want to use the database wizard to create your databases.
The SQL scripts are located in a compressed file called connections.sql.zip|tar, located in the IBM_Connections_Install/IBMConnections/connections.sql directory of the Connections set-up directory or installation media. Extract this file before proceeding. When extracted, the SQL scripts are located in the IBMConnections/connections.sql/application_subdirectory directory of the Connections set-up directory or installation media, where application_subdirectory is the directory that contains the SQL scripts for each application.
If you are using AIX, see the note in the Preparing the database wizard topic about decompressing TAR files. Notes:
- If you are using Linux on IBM System z with the DASD driver, the SQL scripts are located in the IBM_Connections_Install_s390/IBMConnections/connections.s390.sql directory.
- If you are using Linux on IBM System z with the SCSI driver, back up the connections.s390.sql directory and rename the connections.sql directory to connections.s390.sql.
If the database server and Connections are installed on different systems, copy the SQL scripts to the system that hosts the database server.
(AIX only) Configure the AIX system that hosts the DB2 databases to use the enhanced journaled file system (JFS2), which supports file sizes larger than 2 GB. To enable large files in the JFS system...
- In the SMIT tool, select System Storage Management > File Systems > Add / Change / Show / Delete File Systems.
- Select the file system type that you want to use and specify other characteristics as required. If you use a Journaled File System, set the Large File Enabled setting to true.
See the AIX documentation for more options. Notes:
When you are creating a database either with the database wizard or SQL scripts, you must log into the system where the database is hosted with the database administrator account. The default values for DB2 are db2admin on Microsoft Windows, and db2inst1 on Linux and AIX. For Oracle, the default value on AIX/Linux is oracle, and system administrator on Windows. For SQL Server, the default value is the system administrator.
Perform this task for each Connections application that you are installing.
To capture the output of each command to a log file, append the following parameter to each command: >> /file_path/db_application.log
where file_path is the full path to the log file and application is the name of the log file. For example:
db2 -tvf createDb.sql >> /home/db2inst1/db_activities.log
Ensure that you have write permissions for the directories and log files.
To create the application databases...
- (Only required if the database server and Connections are installed on different systems.) Copy the Connections SQL scripts to the DB2 database system. Authorize a user ID that can create the databases.
- Log in to the DB2 database system with the user ID of the owner of the database instance. The user ID must have privileges to create a database, a tablespace, tables, and indexes. Notes:
- If you created multiple database instances, specify the user ID for the first instance.
- The default administrative ID for Microsoft Windows is db2admin.
- Start the DB2 command line processor in command mode and enter the following command:
db2start
- For Home page and Profiles, change to the directory where the SQL scripts for each application are stored, and then enter the following command to run the script:
db2 -tvf createDb.sql
- For Home page, run the following script:
db2 -tvf initData.sql
- For Activities, Communities, Blogs, Bookmarks, Files, Forums, Mobile, and Wikis, change to the directory where the SQL scripts for each application are stored, and then enter the following command to run the script:
db2 -td@ -vf createDb.sql
The SQL scripts for Bookmarks are stored in the dogear directory.
- Run the following command to grant access privileges to the lcuser account for the Home page and Profiles databases:
db2 -tvf application_subdirectory/appGrants.sql
- Run the following command to grant access privileges to the lcuser account for the Activities, Communities, Blogs, Bookmarks, Files, Forums, Mobile, and Wikis databases:
db2 -td@ -vf application_subdirectory/appGrants.sql
- Run the following commands to generate statistics for the Home page database:
db2 -tvf application_subdirectory/reorg.sql
db2 -tvf application_subdirectory/updateStats.sql
- Run the following commands to create Calendar tables in the Communities database:
db2 -td@ -vf communities/calendar-createDb.sql
db2 -td@ -vf communities/calendar-appGrants.sql
- To use the Metrics application, run the following commands to create the Metrics and Cognos databases:
db2 -td@ -vf metrics/createDb.sql
db2 -td@ -vf metrics/appGrants.sql
db2 -td@ -vf cognos/createDb.sql
db2 -td@ -vf cognos/appGrants.sql
The first two of these commands create the Metrics database and the following two commands create the Cognos database. The Cognos database tables are created when you start the Cognos BI Server for the first time.
- To use Connections Content Manager (CCM), run the following commands to create the CCM databases:
db2 -td@ -vf libraries.gcd/createDb.sql
db2 -td@ -vf libraries.gcd/appGrants.sql
db2 -td@ -vf libraries.os/createDb.sql
db2 -td@ -vf libraries.os/appGrants.sql
These commands create the two databases that Connections Content Manager requires: the Global Configuration Database and the Object Store.
- Close the DB2 command line processor.
- When you install Connections, the JDBC configuration page of the installation wizard asks you to provide a user ID and password for the Application User. The user ID that you specify on that page must have read and write access to the database. You can provide the user ID of an administrative user or you can create a dedicated user ID with fewer privileges.
See the Creating a dedicated DB2 user topic for more information.
(DB2 for Linux on System z only.) To improve database performance, enable the NO FILE SYSTEM CACHING option.
See the Enabling NO FILE SYSTEM CACHING for DB2 on System z topic.
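After the scripts complete, you can spot-check the results from the DB2 command line processor. The following commands are a sketch only: HOMEPAGE is used as a typical name for the Home page database, but the actual database names are defined in each application's createDb.sql script.
db2 list database directory // Confirm that the new databases are cataloged
db2 connect to HOMEPAGE // Example: connect to the Home page database
db2 list tables for all // List the tables that were created
db2 connect reset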
Create Oracle databases manually
Create Oracle databases with SQL scripts instead of using the Connections database wizard.
Follow this procedure if you do not want to use the database wizard to create your databases.
The SQL scripts are located in a compressed file called connections.sql.zip|tar, located in the IBM_Connections_Install/IBMConnections/connections.sql directory of the Connections set-up directory or installation media. Extract this file before proceeding. When extracted, the SQL scripts are located in the IBMConnections/connections.sql/application_subdirectory directory of the Connections set-up directory or installation media, where application_subdirectory is the directory that contains the SQL scripts for each application.
If the database server and Connections are installed on different systems, copy the SQL scripts to the system that hosts the database server.
You must specify the Unicode AL32UTF8 character set.
This task describes how to use SQL scripts to create Oracle databases for Connections applications. Complete this task only if you do not want to use the database wizard.
To capture the output of each command to a log file, run the following commands before starting this task:
sql> spool on
sql> spool output_file
where output_file is the full path and name of the file where the output is captured.
When you have completed this task, run the following command: sql> spool off
To manually create the application database tables...
- Log in with the same user ID that you used to install the Oracle database system.
- Create an Oracle user ID with system database administrator privileges used to manage the database tables. Alternatively, use an existing ID that has administrative privileges, such as SYS.
- Set the ORACLE_SID.
If you created multiple databases, specify the database on which to install the tables by providing the SID for that database.
- Run SQL Plus by entering the following command:
sqlplus /NOLOG
- Log in as an administrator with the sysdba role by entering the following command:
connect as sysdba
If not logged in as sysdba, the statistics gathering job for the Bookmarks database is not created or correctly scheduled. As a result, database performance is impacted.
- Enter the Oracle user ID and password.
- For each application, change to that application's SQL scripts directory and enter the following command to create the application's database tables:
@application_subdirectory/createDb.sql password
Notes:
- Repeat this step for each Connections application that you plan to install.
- Begin the command with the @ symbol.
- The createDB script creates a dedicated user ID for the JDBC connector for an application database. Later, when you run the Connections installation wizard, you must provide the user ID that you specify in this step. You can specify one of the following default user IDs:
- Activities: OAUSER
- Blogs: BLOGSUSER
- Bookmarks: DOGEARUSER
- Cognos: COGNOS
- Communities: SNCOMMUSER
- Files: FILESUSER
- Forums: DFUSER
- Global Configuration Database: FNGCDUSER (Connections Content Manager)
- Home page: HOMEPAGEUSER
- Metrics: METRICSUSER
- Mobile: MOBILEUSER
- Object Store: FNOSUSER (Connections Content Manager)
- Profiles: PROFUSER
- Wikis: WIKISUSER
- Each of these default user IDs has a narrower set of privileges than an administrative user ID.
- You can change the passwords for these database users later in Oracle Enterprise Manager Console. If you change the passwords there, also change them in the J2C authentication alias settings in the WAS admin console.
- If you plan to install the Metrics application, you can create the database now but the tables are not created until you start the Cognos BI Server for the first time.
- (Communities only.) Run the following commands:
@application_subdirectory/calendar-createDb.sql
@application_subdirectory/calendar-appGrants.sql
- (Dogear only.) Run the following command:
@application_subdirectory/createHistogramStatsJob.sql
- This script creates a job to collect histogram statistics.
- You must run this command while logged in with the SYS ID.
- (Home page only.) Run the following command:
@application_subdirectory/initData.sql
- Run the following command to grant access privileges for each application:
@application_subdirectory/appGrants.sql
- Close the SQL Plus window.
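For reference, a complete SQL*Plus session for a single application might look like the following sketch. The spool path, the subdirectory name (activities), and the password are placeholders, not required values.
sqlplus /NOLOG
SQL> spool on
SQL> spool /tmp/createDb_activities.log
SQL> connect as sysdba
SQL> @activities/createDb.sql appUserPassword
SQL> @activities/appGrants.sql
SQL> spool off
SQL> exit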
Create SQL Server databases manually
Create Microsoft SQL Server databases with SQL scripts instead of using the Connections database wizard.
Follow this procedure if you do not want to use the database wizard to create your databases.
The SQL scripts are located in a compressed file called connections.sql.zip|tar, located in the IBM_Connections_Install/IBMConnections/connections.sql directory of the Connections set-up directory or installation media. Extract this file before proceeding. When extracted, the SQL scripts are located in the IBMConnections/connections.sql/application_subdirectory directory of the Connections set-up directory or installation media, where application_subdirectory is the directory that contains the SQL scripts for each application.
If the database server and Connections are installed on different systems, copy the SQL scripts to the system that hosts the database server.
Before beginning the task, decide whether to use SQL Server with or without an instance name, and with or without an A-Record Alias.
If you installed SQL Server with a default instance, you do not need to supply details of the sql_server_instance_name. For example, in a default instance
- The name of the server is ServerA.
- You configured the default instance when setting up SQL Server.
- Use only the server name.
Alternatively, in an instancename example:
- ServerB is the name of the server
- You configured the instancename as Connections when setting up SQL Server.
- Use the ServerB\Connections naming format.
Finally, where the A-Record is specified as an Alias for SQL Server:
- ServerC is the name of the server
- You configured the default instance when setting up SQL Server.
- You created an A-Record to use as an alias for a new SQL Server called ServerC.
- Use the name of the new A-Record. For example, use A-Record-Name\sqlserver_server_instance_name.
This task describes how to use SQL scripts to create SQL Server databases for Connections applications.
Download the Microsoft JDBC Driver 4.0 for SQL Server. Connections uses the sqljdbc4.jar file.
To capture the output of each command to a log file, append the following parameter to each command:
>> \file_path\db_application.log
where file_path is the full path to the log file and application is the name of the application. For example:
sqlcmd >> \home\admin_user\lc_logs\db_activities.log
where sqlcmd represents the full command with its parameters and admin_user is the logged-in user. Ensure that you have write permissions for the directories and log files.
To create the application database tables...
- Configure SQL Server account mode and Windows Authentication mode:
- Create a SQL Server Account such as lcuser.
- Apply sysadmin permissions.
- Configure Local Account Mode:
- Create a local account, such as lcuser, on the system that is hosting SQL Server.
- Add the local account to SQL Server with sysadmin permissions.
- Add the local account to the Local Administrators group.
You must specify these credentials later as parameters of the -U and -P flags for the sqlcmd command.
- Create a directory on the SQL Server system where you can store the application databases.
Later, you specify this directory as the value of the filepath variable (-v filepath=) for the sqlcmd command.
- Create a SQL Server user ID with system database administrator privileges to manage the database tables, or use an existing ID that has administrative privileges, such as sa.
You specify these credentials later as parameters of the -U and -P flags for the sqlcmd command.
- Perform the following steps once per application to create each database:
- Change to the directory to which you copied the database creation scripts for the application.
- Enter the following command to create the application database table:
If your database server has multiple SQL Server instances, add the following parameter as the first parameter in each command:
-S sqlserver_server_name\sqlserver_server_instance_name
sqlcmd -U admin_user -P admin_password -i "createDb.sql" -v filepath="path_to_db" password="password_for_application_user"
where
- admin_user and admin_password are the credentials for the user ID that you created in a previous step or an existing ID with administrative privileges.
- path_to_db is the directory in which the created database is stored.
- password_for_application_user is the password for each application database.
- The database user IDs are named as follows:
- Activities: OAUSER
- Blogs: BLOGSUSER
- Bookmarks: DOGEARUSER
- Cognos: COGNOSUSER
- Communities: SNCOMMUSER
- Files: FILESUSER
- Forums: DFUSER
- Global Configuration Database: FNGCDUSER (Connections Content Manager)
- Home page: HOMEPAGEUSER
- Metrics: METRICSUSER
- Mobile: MOBILEUSER
- Object Store: FNOSUSER (Connections Content Manager)
- Profiles: PROFUSER
- Wikis: WIKISUSER
Specify the password to be associated with this user ID.
When you run the installation wizard, you are asked to provide a user ID for the JDBC provider. Specify the user ID created by the database creation script and the password that you defined in this step.
You can change the passwords for these database users later in SQL Server Management Studio. If you change the passwords there, also change them in the J2C authentication alias in the WAS admin console.
If you plan to install the Metrics application, you can create the database now but the tables are not created until you start the Cognos BI Server for the first time.
Example for SQL Server Account Mode:
sqlcmd -S sql_server_name\sql_server_instance_name -U sql_server_account -P sql_server_account_password -i "createDb.sql" -v filepath="sql_server_data_path" password="password_for_application_user"
Example for Local Account Mode:
sqlcmd -S sql_server_name\sql_server_instance_name -U servername\local_account -P local_account_password -i "createDb.sql" -v filepath="sql_server_data_path" password="password_for_application_user"
...where...
- sql_server_account and sql_server_account_password are the credentials for SQL Server. These credentials do not apply for Windows Local Account or Windows Domain Account.
- servername\local_account are the credentials for the user ID.
- sql_server_data_path is the directory in which the created database is stored.
- (Home page only) Perform the following steps for the Home page application:
- Change to the directory to which you copied the database creation scripts for this application.
- Enter the following command to create the application database table:
sqlcmd -U admin_user -P admin_password -i initData.sql
- (Communities only) Run the following commands:
sqlcmd -U admin_user -P admin_password -i calendar-createDb.sql
sqlcmd -U admin_user -P admin_password -i calendar-appGrants.sql
- Perform the following steps to grant access privileges for the applications:
- Change to the directory to which you copied the database creation scripts for each application.
- Enter the following command:
sqlcmd -U admin_user -P admin_password -i appGrants.sql
See Microsoft SQL Server web site.
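For reference, a complete run for a single application in SQL Server Account Mode might look like the following sketch. The directory name, instance name, credentials, and paths are placeholders for illustration only.
cd C:\connections.sql\activities
sqlcmd -S ServerA\Connections -U lcuser -P lcuser_password -i "createDb.sql" -v filepath="D:\ConnectionsDBs" password="appUserPassword" >> C:\lc_logs\db_activities.log
sqlcmd -S ServerA\Connections -U lcuser -P lcuser_password -i "appGrants.sql" >> C:\lc_logs\db_activities.log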
Configure a FileNet database for SQL Server or Oracle
You must enable XA transactions on every Microsoft SQL Server or Oracle server that hosts a Content Platform Engine database.
Configure the JDBC distributed transaction components for SQL Server
You must enable XA transactions on every Microsoft SQL Server that will have a Content Platform Engine database. To enable XA transactions for Content Platform Engine databases:
- Download the Microsoft SQL Server JDBC Driver that is referenced in the IBM FileNet P8 Hardware and Software Requirements document for Content Platform Engine SQL Server databases.
Installation procedures for JDBC settings can vary by release.
See the Microsoft website for full details.
- Copy the sqljdbc_xa.dll from the JDBC installation directory to the binn folder of the instance, although a pre-2.0 version of the driver also functions correctly from the tools\binn folder. For the 32-bit version of Microsoft SQL Server, use the sqljdbc_xa.dll file in the x86 folder. For the 64-bit version of Microsoft SQL Server, use the sqljdbc_xa.dll file in the x64 folder. For example, you need to copy:
- C:\Microsoft JDBC Driver 4.0 for SQL Server\sqljdbc_4.0\enu\xa\x86\sqljdbc_xa.dll into C:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn;
- C:\Microsoft JDBC Driver 4.0 for SQL Server\sqljdbc_4.0\enu\xa\x64\sqljdbc_xa.dll into C:\Program Files\Microsoft SQL Server\100\Tools\Binn
Both the 32-bit and 64-bit DLLs need to be copied.
- Log into the SQL Server Management Studio as the sa administrator or as a user with equivalent permissions and execute the database script xa_install.sql on the master database on every SQL Server instance that will participate in distributed transactions.
Use SQL Server database credentials, not Windows credentials, to log in. Windows Integrated Logon to SQL Server is not supported with IBM FileNet P8. This script installs sqljdbc_xa.dll as an extended stored procedure and creates the SqlJDBCXAUser role in the Master database.
- Click New Query.
- Copy the content of C:\Microsoft JDBC Driver 4.0 for SQL Server\sqljdbc_4.0\enu\xa\xa_install.sql into the query window.
- Click Execute to run the scripts.
- Add each database account that Content Platform Engine uses to access SQL Server to the SqlJDBCXAUser role. This action grants permissions to those accounts to participate in distributed transactions with the JDBC driver as follows:
- Click Security > FNGCDUSER, and then right-click FNGCDUSER and choose Properties.
- Select the User Mapping tab in the open window.
- Select FNGCD and master databases.
- Select SQLJDBCXAUser in the list.
- Repeat the procedure for the FNOSUSER account that Content Platform Engine uses to access SQL Server:
- Click Security > FNOSUSER, and then right-click FNOSUSER and choose Properties.
- Select the User Mapping tab in the open window.
- Select FNOS and master database.
- Select SQLJDBCXAUser in the list.
- Click Administrative Tools > Component Services > Computers > My Computer > Distributed Transaction Coordinator > Local DTC, and then right-click Local DTC and select Properties. On the Security tab, select Enable XA Transactions.
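As an alternative to the Management Studio steps, the same role memberships can usually be granted with Transact-SQL. This is a sketch only; it assumes that xa_install.sql has already created the SqlJDBCXAUser role and that the FNGCDUSER and FNOSUSER logins are mapped as users in the master database.
-- Add the Content Platform Engine database accounts to the SqlJDBCXAUser role in master
USE master;
EXEC sp_addrolemember 'SqlJDBCXAUser', 'FNGCDUSER';
EXEC sp_addrolemember 'SqlJDBCXAUser', 'FNOSUSER';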
Configure Oracle XA transactions
Configure Oracle XA transactions for Content Platform Engine by running several Oracle SQL scripts. To configure XA transactions:
- Log into the Oracle database as either SYSOPER or SYSDBA.
- Locate and run the initxa.sql script in the ORACLE_HOME\javavm\install directory.
- If the script fails to run because the database memory space is too small, locate and run the initjvm.sql script in the ORACLE_HOME\javavm\install directory. Additional memory-related parameters might need to be adjusted to successfully run this script.
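For example, a typical session might look like the following sketch, assuming that operating system authentication is available and that ORACLE_HOME is set (the ? character is SQL*Plus shorthand for ORACLE_HOME):
sqlplus / as sysdba
SQL> @?/javavm/install/initxa.sql
SQL> -- Run initjvm.sql first only if initxa.sql fails because of memory limits
SQL> @?/javavm/install/initjvm.sql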
Enable NO FILE SYSTEM CACHING for DB2 on System z
When your operating system is Linux on System z, enable the NO FILE SYSTEM CACHING option for IBM DB2 databases to improve performance.
- Enabling the NO FILE SYSTEM CACHING option on an unsupported device could cause your database to become inaccessible. Ensure that your file system supports the NO FILE SYSTEM CACHING option and that it meets the requirements for creating table spaces without file system caching.
- Create a backup copy of the DB2 database using native database tools.
- If the database server and Connections are installed on different systems, copy the SQL scripts to the system that hosts the database server.
- The SQL scripts for DB2 for Linux on System z are located in the connections.s390.sql/application_subdirectory directory of the Connections set-up directory or installation media, where application_subdirectory is the directory that contains the SQL scripts for each application.
- You can enable the NO FILE SYSTEM CACHING option for the Activities, Communities, Forums, and Profiles databases only.
When you create DB2 databases for Connections under Linux on System z, the Connections database wizard and the createDb.sql script create table spaces with the FILE SYSTEM CACHING option enabled. If you are storing DB2 table spaces on devices where Direct I/O (DIO) is enabled, such as Small Computer System Interface (SCSI) disks that use Fibre Channel Protocol (FCP), you can improve database performance by enabling the NO FILE SYSTEM CACHING option.
To enable the NO FILE SYSTEM CACHING option...
- Log in to the DB2 database system with the user ID of the owner of the database instance. The user ID must have privileges to create a database, a table space, tables, and indexes.
If you created multiple database instances, specify the user ID for the first instance.
- Enable the NO FILE SYSTEM CACHING option for the Activities table space...
CONNECT TO OPNACT
ALTER TABLESPACE OAREGTABSPACE NO FILE SYSTEM CACHING
CONNECT RESET
- Enable the NO FILE SYSTEM CACHING option for the Communities table space...
CONNECT TO SNCOMM
ALTER TABLESPACE SNCOMMREGTABSPACE NO FILE SYSTEM CACHING
ALTER TABLESPACE DFREGTABSPACE NO FILE SYSTEM CACHING
CONNECT RESET
- Enable the NO FILE SYSTEM CACHING option for the Forums table space...
CONNECT TO FORUM
ALTER TABLESPACE DFREGTABSPACE NO FILE SYSTEM CACHING
CONNECT RESET
- Enable the NO FILE SYSTEM CACHING option for the Profiles table space...
CONNECT TO PEOPLEDB
ALTER TABLESPACE USERSPACE4K NO FILE SYSTEM CACHING
ALTER TABLESPACE TEMPSPACE4K NO FILE SYSTEM CACHING
ALTER TABLESPACE USERSPACE32K NO FILE SYSTEM CACHING
ALTER TABLESPACE TEMPSPACE32K NO FILE SYSTEM CACHING
CONNECT RESET
- Close the DB2 command line processor.
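To confirm that the change took effect, you can review the table space attributes with a DB2 snapshot. This optional check is a sketch for the Activities database; the snapshot output typically includes a File system caching attribute for each table space, which should now report No for the altered table spaces.
db2 connect to OPNACT
db2 get snapshot for tablespaces on OPNACT
db2 connect reset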
Populate the Profiles database with LDAP data
Create the Profiles database before you install Connections. The installer connects to the Profiles database during the installation of Connections; nothing is written to the database during installation, but the database must exist. IBM recommends populating the database with user data before installation, although you can perform user population after, or in parallel with, the installation, depending on the size of your LDAP directory.
Configure TDI
Configure IBM Tivoli Directory Integrator (TDI) to synchronize and exchange information between the Profiles database, PEOPLEDB, and the LDAP directory.
Some of the information below assumes that you are using the Profiles population wizard.
As an alternative to the procedure below, you can manually run various Profiles tasks by using the appropriate scripts in the TDI Solution directory.
To configure TDI...
- Install Tivoli Directory Integrator.
When prompted for the location of the TDI solution directory, select...
Do not specify. Use the current working directory at startup time.
At the end of the installation process, clear the check box...
Start the Configuration editor
After you have configured TDI, update it with the recommended fix packs.
- Make the database available to TDI...
The following information assumes that the database server is on a separate system.
- DB2
Copy db2jcc.jar and db2jcc_license_cu.jar from DB2_HOST:/path/to/db2_install/java to TDI_HOST:/tmp.
The wizard will prompt for this location and copy the files into the TDI subdirectory...
jvm/jre/lib/ext
For example, if you installed TDI on a Linux system in...
/opt/IBM/TDI/V7.1
...the path would be...
/opt/IBM/TDI/V7.1/jvm/jre/lib/ext
This is the case regardless of the database provider.
If you use only the manual procedure, you must copy the files into this directory yourself.
- Oracle
Copy ojdbc6.jar from ORACLE_HOST:/path/oracle_install/jdbc/lib to TDI_HOST:/tmp.
The wizard will prompt for this location and copy the files into...
/path/to/tdi_install/jvm/jre/lib/ext
- SQL Server
Download the SQL Server JDBC 4.0 driver from the Microsoft web site and follow the instructions to extract the driver files. Connections uses sqljdbc4.jar.
Paste the files into a temporary location on the system where TDI is installed. The wizard will prompt for this location and copy the files into...
/path/to/tdi_install/jvm/jre/lib/ext
If the database is hosted on a separate system, copy the database JAR file to the system hosting TDI.
The jvm/jre/lib/ext directory is on the TDI classpath, but in rare circumstances it may not be close enough to the beginning of the path. If TDI throws an exception that seems to be Java related, try putting the database JAR files in...
the jars\3rdparty\others directory of the TDI installation
- Edit the ibmdisrv file to increase runtime memory and disable the JIT compiler.
To increase the runtime memory, add the two -Xms256M -Xmx1024M space-separated arguments to the Java invocation command; to disable the JIT compiler, add the -Xnojit argument as follows.
On Linux systems the file name is ibmdisrv. On Windows systems the file name is ibmdisrv.bat. On both systems the file is located in the main TDI directory.
- AIX or Linux:ibmdisrv
After you add the new arguments, the Java invocation command is similar to the following:
"$TDI_JAVA_PROGRAM" -Xms256M -Xmx1024M $TDI_MIXEdmgrODE_FLAG -Xnojit -cp "$TDI_HOME_DIR/IDILoader.jar" "$LOG_4J" com.ibm.di.loader.ServerLauncher "$@" &
- Windows: ibmdisrv.bat
After you add the new arguments, the Java invocation command is similar to the following:
"%TDI_JAVA_PROGRAM%" -Xms256M -Xmx1024M -Xnojit -classpath "%TDI_HOME_DIR%\IDILoader.jar" %ENV_VARIABLES% com.ibm.di.loader.ServerLauncher %*
- (AIX or Linux only.) In the TDI solution directory, execute the chmod +x *.sh command to ensure that the script files are executable.
- (AIX or Linux only.) Ensure that there is a localhost entry in the /etc/hosts file. For example:
127.0.0.1 localhost
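A quick, optional sanity check of the TDI setup on AIX or Linux is shown in the following sketch. The installation path /opt/IBM/TDI/V7.1 and the solution directory path are assumptions; adjust them for your environment.
ls /opt/IBM/TDI/V7.1/jvm/jre/lib/ext          # confirm the database driver JAR files were copied here
grep "Xmx1024M" /opt/IBM/TDI/V7.1/ibmdisrv    # confirm the memory arguments were added
ls -l /opt/IBM/tdisol/TDI/*.sh                # confirm the script files are executable
grep localhost /etc/hosts                     # confirm the localhost entry exists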
Introduction to IBM TDI
Connections uses IBM TDI to transform, move, and synchronize data from your LDAP directories to the Profiles database.
AssemblyLines
The main tool within TDI is the AssemblyLine, which...
- Processes LDAP data such as entries, records, items, and objects.
- Transforms it
- Outputs it to the Profiles database
When you import data from multiple LDAP directories, the AssemblyLine processes, transforms, and combines all the source data before outputting it.
How data is organized can differ greatly from system to system. For example, databases usually store information in records with a fixed number of fields. Directories usually work with variable objects called entries, and other systems use messages or key-value pairs.
AssemblyLine connectors
Connectors are designed so that you do not need to deal with the technical details of working with various data stores, systems, services, or transports. Each type of connector uses a specific protocol or API to handle the details of data source access. You can create your own connectors to support different functions or use the connectors that are provided with Connections.
Work entries
TDI collects and stores all types of information in a Java data container called a work entry. The data values are kept in objects called Attributes that the work entry holds and manages. AssemblyLine components process the information in the work entry by joining in additional data, verifying content, computing new attributes and values, as well as changing existing ones, until the data is ready for delivery to the Profiles database. TDI internal attribute mapping, business rules, and transformation logic do not need to deal with type conflicts.
Attribute mapping
Attribute Maps are your instructions on which attributes are brought into the AssemblyLine during input, or included in output operations. An AssemblyLine is designed and optimized for working with one item at a time, such as one data record, one directory entry, or one registry key. To perform multiple updates or multiple deletes, you must write AssemblyLine scripts.
Add source data to the Profiles database
Populate the Profiles database with information from the source server by using the Profiles population wizard or by populating the database manually.
Use the Profiles population wizard
The population wizard populates only those entries where the value for surname is not null. You can run the population wizard before, during, or after installing Connections.
- Create a Profiles database
- Install TDI
- Install an LDAP directory.
- If desired, create a response file.
- Log into the system where TDI is installed as the root user or system administrator.
- Grant display authority to all users...
// Grant display authority to other users
xhost +
If granting display authority to all users is a security concern for you, change the command to grant display authority to a specific user or users.
// Echo the value of DISPLAY under the root user
echo $DISPLAY
- Copy the Wizards directory from the Connections installation media to the system where TDI is installed.
For Windows, if you are installing from disk or ISO, change the permissions for the Wizards folder from Read Only to Write or the population wizard will fail.
- Execute script...
cd /path/to/Wizards
./populationWizard.sh
If the wizard does not run correctly, you might need to edit populationWizard.sh and set the JRE/JVM path. The default is...
jvm/linux/jre/bin
- On the Welcome page of the wizard, click Launch Information Center to open the Connections Information Center in a browser window. Click Next to continue.
- Select Default settings or, if you are resuming an earlier session, click...
Last successful default settings
- Enter the location of TDI and then click Next.
- Select a database type and click Next.
- Enter the database information...
- Host name
- Name of the database host.
- Port
- Communications port for connecting to the database. Defaults: DB2 50000, Oracle 1521, SQL Server 1433.
- Database name
- Default: PEOPLEDB. There is no default Oracle database name; enter the name of the database instance instead.
- JDBC driver library path
- Path to the JDBC driver files on the host system. DB2: db2jcc.jar and db2jcc_license_cu.jar in IBM/DB2/v9.7/SQLLIB/java. Oracle: ojdbc6.jar in oracle/product/11.2.0/db_1/jdbc/lib. SQL Server: download the SQL Server JDBC 4.0 driver from the Microsoft web site; Connections uses sqljdbc4.jar.
- User ID
- Database user with write access to the Profiles database. DB2 default: LCUSER. Oracle and SQL Server default: PROFUSER. These user names are created automatically when you create the database.
- Password
- Enter your password.
- Enter the following properties for the LDAP server, and then click Next:
- LDAP server name
- The host name or IP address of the LDAP server.
- LDAP server port
- The default port is 389. If SSL is selected, the default port is 636.
- Use SSL communication
- Select the check box to enable SSL.
- For SSL, create a truststore file...
- Start the iKeyman utility by running the following file:
TDI_Install_directory/jvm/jre/bin/ikeyman
where TDI_Install_directory is the directory where TDI is installed.
On the Windows 7 and Windows 2008 operating systems, right-click ikeyman.exe and select Run as administrator.
- Click Key Database File from the menu bar and then click New.
- Select JKS or PKCS12 as the key database type.
- Save the new file to an appropriate location and click OK.
- Enter a password in the Password Prompt dialog box and then confirm the password. Click OK.
You need this password when you use the Profiles population wizard.
- Exit the iKeyman utility.
The Profiles population wizard can use the new truststore file to communicate with your LDAP server in SSL handshaking mode. It can also use the file when fetching data from your LDAP. The Profiles population wizard downloads the LDAP server certificates from your LDAP directory for you.
- If you selected SSL when you entered the LDAP properties, you are asked to enter the following keystore properties:
- Truststore file
- File where trusted server certificates are stored. Used when SSL handshaking is performed.
- Keystore password
- Password to access the keystore.
- Keystore type
- Format of the trusted server certificate. Currently only JKS and PKCS12 are supported in Java.
If the LDAP server certificate is not in the truststore, a message appears that asks you to permanently accept the certificate in the truststore file. If you do not accept it, the wizard cannot connect to the LDAP server with SSL and does not continue with the population task.
Ensure that the global.properties file in TDI is configured with the truststore file name, password, and type that you just created. For more information...
- Enter the authentication details for the Bind distinguished name (DN) and Bind password, and then click Next.
The Profiles population wizard does not support anonymous binding for LDAP. To populate the Profiles database using anonymous binding, you must populate the database manually.
- Enter the details of the Base distinguished name (LDAP user search base) and LDAP user search filter, and then click Next.
- Map LDAP attributes or JS Functions to the Profiles database fields.
- For each user in the LDAP, TDI will create a row in the database, mapping each LDAP attribute or JavaScript function to the corresponding column in the database.
The wizard automatically validates each mapping. To change the default mapping, select the required LDAP attributes or JavaScript functions and create or modify the field.
- The uid, guid, dn, surname, and displayName attributes are always required.
- You can use the Group By filter in Metrics to categorize the metrics report by a particular user attribute. Metrics defines the Group By attributes by default as country, organization, and title. You can also configure the Metrics report after populating the database.
- If you are prompted to supply a profile type value, see the Profile-types topic for available options.
- You can choose to run the following additional tasks:
- Countries
- Add country data to each profile.
- Departments
- Add department data to each profile.
- Organizations
- Add organization data to each profile.
- Employee types
- Add employee-type data to each profile.
- Work locations
- Add location data to each profile.
- Mark managers
- Select Yes if you want to mark the profiles of each manager.
For all the entries in this list (except Mark managers), prepare corresponding CSV files with the required information. An Employee Types CSV file might include...
regular=IBM Employee
manager=IBM Manager
You can edit profiles-config.xml to specify whether you want to display the code or the value, where regular or manager are the employee type codes stored in LDAP and IBM Employee or IBM Manager are the values.
- To see the input file format of the optional tasks, examine the CSV files in...
Wizards/TDIPopulation/TDISOL/OS/samples
...where OS is your operating system
- Countries task: isocc_sample.csv
- Departments task: deptinfo_sample.csv
- Organizations task: orginfo_sample.csv
- Employee types task: emptype_sample.csv
- Work locations task: workloc_sample.csv
- Review the Summary page to ensure that the information you entered in the previous panels is correct. Click Configure to begin populating the database.
- Review the message on the Result page. If necessary, click View log to examine the log in detail. Click Finish to exit the wizard.
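If you prefer a command-line alternative to iKeyman, a JKS truststore can usually be built with the keytool utility that ships with the TDI JRE. This is a sketch only: the alias, file names, and password are placeholders, and it assumes that you have already exported the LDAP server certificate to a file.
TDI_Install_directory/jvm/jre/bin/keytool -import -alias myldap -file /tmp/ldap-server-cert.der -keystore /opt/IBM/ldap-truststore.jks -storetype JKS -storepass mypassword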
Use the Profiles population wizard using response file
You can run the Profiles population wizard in silent mode to populate the Profiles database.
When you run the Profiles population wizard in silent mode, it creates the map_dbrepos_from_source.properties file, located in the Wizards\TDIPopulation\platform\TDI directory, and updates this file with data from the mappings.properties file.
When you use the Profiles population wizard in interactive mode, the wizard creates a response file called tdisettings.properties in the Wizards\TDIPopulation directory. You can modify the existing response file or create a new one. It also creates a mappings.properties file, which contains properties very similar to those in map_dbrepos_from_source.properties file.
If you need to configure multiple systems with Profiles data, you can run the wizard in silent mode.
You can also modify the mappings files manually.
See the Mapping fields manually topic.
(AIX only) If you are downloading the wizard, the TAR program available by default with AIX does not handle path lengths longer than 100 characters. To overcome this restriction, use the GNU file archiving program instead. This program is an open source package that IBM distributes through the AIX Toolbox for Linux Applications at the IBM AIX Toolbox web site. Download and install the GNU-compatible TAR package. You do not need to install the RPM Package Manager because it is provided with AIX.
After you have installed the GNU-compatible TAR program, change to the directory where you downloaded the Connections TAR file, and enter the following command to extract the files from it:
gtar -xvf Lotus_Connections_wizard_aix.tar
This command creates a directory named after the wizard.
To run the Profiles population wizard in silent mode...
- Log in to your database server as the root user or system administrator.
- (AIX/Linux only) Grant display authority to all users by running the following commands under the root user or system administrator:
xhost + // Grant display authority to other users
If granting display authority to all users is a security concern for you, change the command to grant display authority to a specific user or users.
echo $DISPLAY
- Ensure that the Profiles population wizard has created the tdisettings.properties response file in the TDIPopulation directory.
- Launch the wizard in silent mode:
cd TDIPopulation
populationWizard.sh -silent response_file \
[-mappingFile mapping_file] \
[-dbPassword db_password] \
[-ldapPassword ldap_password] \
[-sslPassword ssl_password] \
[-help | -? | /help | /? | -usage]
where response_file is the full path to the tdisettings.properties response file, mapping_file is the full path to the mappings.properties file, dbPassword is the password for the Profiles database, ldapPassword is the password for the bind user in the LDAP directory, and sslPassword is the password for the SSL key store.
If you do not specify a mapping file, the default mapping file for your LDAP directory type is used. These mapping files are located in the Wizards/TDIPopulation directory, where you can edit the file for your LDAP directory type. The following table lists the mappings files for applicable LDAP directory types:
The following list shows the mapping file for each supported LDAP directory type:
- IBM Lotus Domino: defaultMapping_domino.properties
- IBM Tivoli Directory Server: defaultMapping_tivoli.properties
- Microsoft Active Directory Application Mode: defaultMapping_adam.properties
- Windows Server 2003 Active Directory: defaultMapping_ad.properties
- Novell Directory Services: defaultMapping_nds.properties
- Sun ONE: defaultMapping_sun.properties
The parameters for running the population wizard in silent mode are described in the following list:
- responseFile (required): full path to the tdisettings.properties response file. After you run the population wizard successfully, the tdisettings.properties response file is stored in the Wizards\TDIPopulation directory in the Connections set-up directory.
- mappingFile (optional): full path to the mappings.properties file. The mappings.properties file is stored in the Wizards\TDIPopulation directory in the Connections set-up directory. If you do not specify a different file with the -mappingFile parameter, the wizard uses this file to map properties to the LDAP directory.
- dbPassword (optional): database password. Overrides the database password in the response file. If you do not specify the database password here, you must specify it in the response file.
- ldapPassword (optional): LDAP password. Overrides the LDAP password in the response file. If you do not specify the LDAP password here, you must specify it in the response file.
- sslPassword (optional): SSL key store password. Overrides the SSL key store password in the response file. If you do not specify the SSL password here, you must specify it in the response file.
After the wizard has finished, check the log file in the directory for messages:
<user home>/lcwizard/log/tdi/
The log file name uses the time as a suffix. For example:
tdi_20090912_163536.log
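For example, a complete silent invocation on Linux might look like the following; every path and password shown is a placeholder for your own values.
cd /opt/install/Wizards/TDIPopulation
./populationWizard.sh -silent /opt/install/Wizards/TDIPopulation/tdisettings.properties \
 -mappingFile /opt/install/Wizards/TDIPopulation/mappings.properties \
 -dbPassword dbPassw0rd -ldapPassword ldapPassw0rd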
The tdisettings.properties file
When you run the Profiles population wizard, you can record your selections in two response files: a tdisettings.properties file and a mapping file.
After running the Profiles population wizard in interactive mode, you can repeat the same configuration in silent mode by starting the wizard from the command line and passing the response files in as an argument. The wizard uses the values in the response files rather than requiring you to interact with it.
The tdisettings.properties file collects the values that are described in the following list.
- db.hostname: Host name of the database server.
- db.jdbcdriver: Location of the JDBC driver. Example: C\:\\IBM\\SQLLIB\\java (the extra "\" symbol is an escape character).
- db.name: Name of the Profiles database. Default: PEOPLEDB.
- db.password: Password for connecting to the database. Required if you do not specify -dbPassword as a command parameter.
- db.port: Database server port for invoking JDBC. DB2 default: 50000. Oracle default: 1521. SQL Server default: 1433.
- db.type: Database type: db2, oracle, or sqlserver.
- db.user: Name of the database user, such as lcuser. Example: lcuser
- ldap.dn.base: LDAP distinguished name search base. Example: dc=example,dc=com
- ldap.enable.ssl: Boolean value (yes or no) that determines whether SSL is enabled. If the value of this property is yes, also provide values for the ssl.keyStore, ssl.password, and ssl.type properties.
- ldap.filter: Filter for the LDAP search. Example: (&(uid=*)(objectclass=inetOrgPerson))
- ldap.hostname: Host name of the LDAP server.
- ldap.password: Password for connecting to the LDAP directory.
- ldap.port: Communications port of the LDAP server. Default: 389, or 636 for SSL.
- ldap.user: Distinguished name of the LDAP administrative user.
- ssl.keyStore: File path to the keystore. Required only if the ldap.enable.ssl property is set to yes.
- ssl.password: SSL password. Required only if the ldap.enable.ssl property is set to yes.
- ssl.type: SSL standard, JKS or PKCS12. Required only if the ldap.enable.ssl property is set to yes.
- task.list: Tasks that the Profiles population wizard can perform. You can choose from the following options: LDAP_OPTIONAL_TASK_MARK_MANAGER, LDAP_OPTIONAL_TASK_FILL_COUNTRIES, LDAP_OPTIONAL_TASK_FILL_DEPARTMENT, LDAP_OPTIONAL_TASK_FILL_ORGANIZATION, LDAP_OPTIONAL_TASK_FILL_EMPLOYEE, and LDAP_OPTIONAL_TASK_FILL_WORK_LOCATION. To run multiple tasks, separate the tasks with commas. Example: LDAP_OPTIONAL_TASK_MARK_MANAGER, LDAP_OPTIONAL_TASK_FILL_COUNTRIES
- task.country.csv: File path to the isocc.csv file. Required if you specify LDAP_OPTIONAL_TASK_FILL_COUNTRIES in the task.list property. Example: C\:\\build\\isocc.csv (the extra "\" symbol is an escape character).
- task.department.csv: File path to the deptinfo.csv file. Required if you specify LDAP_OPTIONAL_TASK_FILL_DEPARTMENT in the task.list property. Example: C\:\\build\\deptinfo.csv
- task.empoyeetype.csv: File path to the emptype.csv file. Required if you specify LDAP_OPTIONAL_TASK_FILL_EMPLOYEE in the task.list property. Example: C\:\\build\\emptype.csv
- task.organization.csv: File path to the orginfo.csv file. Required if you specify LDAP_OPTIONAL_TASK_FILL_ORGANIZATION in the task.list property. Example: C\:\\build\\orginfo.csv
- task.worklocation.csv: File path to the workloc.csv file. Required if you specify LDAP_OPTIONAL_TASK_FILL_WORK_LOCATION in the task.list property. Example: C\:\\build\\workloc.csv
- TDI.dir: Installation location of TDI. Example: C\:\\IBM\\TDI\\V7.1
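The following fragment illustrates what a minimal tdisettings.properties might contain for a DB2 and IBM Tivoli Directory Server deployment without SSL. The host names, bind DN, and other values are placeholders; the file that the wizard generates contains the values that you entered interactively.
db.type=db2
db.hostname=db.example.com
db.port=50000
db.name=PEOPLEDB
db.user=lcuser
db.jdbcdriver=C\:\\IBM\\SQLLIB\\java
ldap.hostname=ldap.example.com
ldap.port=389
ldap.enable.ssl=no
ldap.user=cn=admin,dc=example,dc=com
ldap.dn.base=dc=example,dc=com
ldap.filter=(&(uid=*)(objectclass=inetOrgPerson))
task.list=LDAP_OPTIONAL_TASK_MARK_MANAGER
TDI.dir=C\:\\IBM\\TDI\\V7.1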
Manually populating the Profiles database
Instead of using the Profiles population wizard, you can manually populate the database.
You can populate the Profiles database manually, as described here, or with the help of the population wizard as described in the Using the Profiles population wizard topic. You might choose to populate the database manually to take advantage of functionality that the wizard does not provide, such as anonymous LDAP access, support for large data sets, and property configuration beyond what the wizard offers, for example alternate source options.
Additional and related information about configuration and mapping properties may be available in the Using the Profiles population wizard topic.
Before starting this task, you must complete the steps in the Mapping fields manually topic to set up the mapping file.
(AIX only). An AIX limitation causes a file naming error when you extract the tdisol.tar archive. The system renames the profile-links.xsd to profile-links.xs. To resolve this issue, use the GNU Tar program, version 1.14 or higher, to extract the archive. Download the program from ftp://ftp.gnu.org/gnu/tar/ and install it as the default tar utility in the path. The default location for GNU Tar is /usr/local/bin.
The internal name of the Profiles database is PEOPLEDB.
After installing the Profiles database and defining mapping and validation, complete the following steps to populate the Profiles database:
- Update profiles_tdi.properties to specify values for the following properties.
To locate this file, extract the tdisol.tar|zip file from the tdisol directory in your Connections installation media. After extraction, the file is located in...
tdisol.tar|zip/tdisol/TDI
Alternatively, change to the TDI solution directory that you created in the Configuring TDI topic. The file is located in...
TDI_solution_directory/TDI
The following list contains properties that you must review. Edit any property values that require editing for your configuration.
- source_ldap_url
- Universal resource locator of the LDAP directory. Enables programs to access the LDAP directory. Use the following syntax to specify the value:
source_ldap_url=ldap://myldap.enterprise.example.com:389
- source_ldap_user_login
- If you cannot use Anonymous search, a user login name is required. Use the following syntax to specify the value:
source_ldap_user_login=uid=wpsbind,cn=users,l=Bedford Falls,st=New York,c=US,ou=Enterprise,o=Sales Division,dc=example,dc=com
- source_ldap_user_password
- If you cannot use anonymous search, a user password is required, along with user login name. Use the following syntax to specify the value:
{protect}-source_ldap_user_password=wpsbind
TDI automatically encrypts any properties that have the {protect} prefix. If you do not want to encrypt these properties, remove the {protect} prefix.
- source_ldap_search_base
- A portion of the LDAP DN that must be part of all entries processed. This base usually contains the expected organization (o) value, such as source_ldap_search_base=o=ibm.com. Use the following syntax to specify the value:
source_ldap_search_base=l=Bedford Falls,st=New York,c=US,ou=Enterprise,o=Sales Division,dc=example,dc=com
- source_ldap_search_filter
- A search filter to further refine the entries used. A typical value might be source_ldap_search_filter=cn=*. Use the following syntax to specify the value:
source_ldap_search_filter=(&(uid=*)(objectclass=inetOrgPerson))
- source_ldap_use_ssl
- Required only if you are using SSL to authenticate. Specifies whether to use Secure Sockets Layer for the connection. Options are true or false.
- dbrepos_jdbc_driver
- JDBC driver used to access the Profiles database repository. The default value of the properties file references the DB2 database provided with Profiles as follows:
dbrepos_jdbc_driver=com.ibm.db2.jcc.DB2Driver
If you are using DB2, you do not need to modify this value. If you are using an Oracle database, change the value to reference an Oracle driver. The following values are examples:
dbrepos_jdbc_driver=oracle.jdbc.driver.OracleDriver
dbrepos_jdbc_driver=oracle.jdbc.pool.OracleConnectionPoolDataSource
If you are using SQL Server, change the value to reference the SQL Server driver. The following value is an example:
dbrepos_jdbc_driver=com.microsoft.sqlserver.jdbc.SQLServerDriver
- dbrepos_jdbc_url
- Universal resource locator of the database that you created. This value specifies the peopledb database, and must include the port number. For example:
- DB2: jdbc:db2://localhost:50000/peopledb
- Oracle: jdbc:oracle:thin:@localhost:1521:PEOPLEDB
- SQL Server: jdbc:sqlserver://enterprise.example.com:1433;DatabaseName=PEOPLEDB
- dbrepos_username
- The user name used to authenticate to the database that you created. Use the following syntax to specify the value:
dbrepos_username=<db_admin_id>
- dbrepos_password
- The password used to authenticate to the database that you created. Use the following syntax to specify the value:
{protect}-dbrepos_password=act1vities
You can provide values for additional properties if necessary; see the TDI solution properties for Profiles topic for details.
- Ensure that you have completed the steps in the Mapping fields manually task before continuing.
- Run the ./collect_dns.sh or collect_dns.bat script to create a file containing the distinguished names (DNs) to be processed from the source LDAP directory.
Before starting the script, ensure that you have completed the steps in the Mapping fields manually task.
If the script does not run, you might need to enable its Executable attribute by running the chmod command first. The Executable attribute of a script can become disabled after the script is copied from a read-only medium such as DVD.
The new file is named collect.dns by default but you can rename it if necessary. If you change the file name, update the source_ldap_collect_dns_file parameter in profiles_tdi.properties.
After the script runs, it creates a log file called ibmdi.log in the tdisol.tar|zip/tdisol/TDI directory. Examine this file to find out whether any errors occurred during the process.
- Populate the database repository from the source LDAP directory by running the ./populate_from_dn_file.sh or populate_from_dn_file.bat script.
Depending on how many records you are processing, this step could take many hours. For example, 5,000 records might take a few minutes, while half a million records could take over 12 hours. Tivoli Directory Integrator prints a message to the screen after every 1,000 iterations to inform you of its progress.
If a failure occurs during processing, such as loss of the network connection to the LDAP directory server, start processing the names from where it was interrupted. Examine the PopulateDBFromDNFile.log file in the logs subdirectory to find out which distinguished name was last successfully processed. The ibmdi.log file also tracks the tasks that you run. Edit the collect.dns file to remove all entries up to and including the last successfully processed entry. Start the task again. You can repeat this step as many times as necessary until all the distinguished names are processed.
- If you are setting the PROF_IS_MANAGER field based on PROF_MANAGER_UID references in other employee records, run the ./mark_managers.sh or mark_managers.bat script.
Manager identification is not performed as part of the previous record population step because it must run across all the records and it is possible that the initial record population step does not complete in a single pass for large organizations.
If the manager designation was not part of the source records for your data set, you can run this task to analyze all the records after population. This task will take each user record and see if it is referenced as the manager for any other users. If yes, the user will be marked as a manager. If not, the user will be marked as not a manager. If you need to use this process to set this profile attribute, you will also need to run it periodically to perform updates.
- Run additional and optional scripts to populate additional fields. For example, run the Country code script ./fill_country.sh or fill_country.bat to populate the Country table from the isocc.csv file. Other scripts include the following:
- Work location code script ./fill_workloc.sh or fill_workloc.bat
- Organization codes script ./fill_organization.sh or fill_organization.bat
- Employee type code script ./fill_emp_type.sh or fill_emp_type.bat
- Department code script ./fill_department.sh or fill_department.bat
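Putting the manual steps together, a typical end-to-end run from the TDI solution directory on AIX or Linux looks like the following sketch; the solution directory path is a placeholder, and the optional fill scripts are needed only if you prepared the corresponding CSV files.
cd /opt/IBM/tdisol/TDI
./collect_dns.sh                  # collect distinguished names from the LDAP directory into collect.dns
./populate_from_dn_file.sh        # populate PEOPLEDB from the collected DNs
./mark_managers.sh                # optional: set PROF_IS_MANAGER from manager references
./fill_country.sh                 # optional: load country data from isocc.csv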
TDI solution properties for Profiles
Connections maps LDAP, database, and other properties with TDI configuration parameters.
The properties described in this topic are found in the supplied profiles_tdi.properties file.
The TDI parameter column in the tables contains the name of the parameter in the LDAP connector.
See the TDI product documentation for more information.
You can find additional information about LDAP properties at ibm.com and other sites.
All file paths specified are relative to the TDI solution directory.
The following properties are associated with the LDAP directory that is used as the source for the data. If you want to use a source other than LDAP, see Manually populating the Profiles database.
Table 20. LDAP Properties
- source_ldap_url (TDI parameter: LDAP URL hostname and LDAP URL Port)
- Required. The LDAP web address used to access the source LDAP system. The port is required and is typically 389 for non-SSL connections. Express this value in the form ldap://host:port. For example: ldap://myservername.com:389
If using the population wizard, this property is configured using the LDAP server name and LDAP server port on the LDAP server connection page. The LDAP query constructed from the source URL, search base, and search filter is stored in a source url property, which can be used to segment the Profiles database user set during synchronization. Using different but equivalent values for this property (for example, referencing the LDAP server by IP address instead of DNS name) is not advised. The default value is...
ldap://localhost:389
- source_ldap_use_ssl (TDI parameter: LDAP URL Use SSL connection)
- Required if you are using SSL to authenticate. Set to either true or false. Set to true if you are using SSL (for example, if you are using port 636 in the LDAP URL). The default value is false. If using the population wizard, this property is configured with the Use SSL communication checkbox on the LDAP server connection page.
- source_ldap_user_login (TDI parameter: Login user name)
- Login user name used for authentication. You can leave this blank if no authentication is required. If using the population wizard, this property is configured in the Bind distinguished name (DN) field on the LDAP authentication properties page.
- source_ldap_user_password (TDI parameter: Login password)
- Login password used for authentication. Leave this blank if no authentication is required. The value is encrypted in the file the next time it is loaded. If using the population wizard, this property is configured in the Bind password field on the LDAP authentication properties page.
- source_ldap_search_base or source_ldap_user_search_base (TDI parameter: Search Base)
- The search base (the location from where the search begins) of the iterating directory. The search begins at this point in the LDAP directory structure and searches all records underneath. Most directories require a search base, and it must be a valid distinguished name. Some directory services allow you to specify a blank string, which defaults to whatever the server is configured to do. A default value is not specified. If using the population wizard, this property is configured in the LDAP user search base field on the LDAP page.
- source_ldap_search_filter or source_ldap_user_search_filter (TDI parameter: Search Filter)
- Search filter used when iterating the directory. This determines which objects are included or excluded in the search. If the search base and the specified search filter do not allow you to adequately construct your search set, use the source_ldap_required_dn_regex property. Take care when specifying search filters because they can affect the performance of the directory being searched; the directory server schema being queried can also impact performance. A default value is not specified. If using the population wizard, this is the LDAP user search filter field on the LDAP authentication properties page.
- source_ldap_sort_page_size (TDI parameter: Page size)
- If specified, the LDAP Connector tries to use paged mode search. Paged mode causes the directory server to return a specific number of entries (called pages) instead of all entries in one chunk. Not all directory servers support this option. The default value is 0, which indicates that paged mode is disabled. This parameter is not configurable when using the population wizard.
- source_ldap_authentication_method (TDI parameter: Authentication Method)
- Options include the following:
- Anonymous: Minimal security.
- Simple: Use a login user name and password to authenticate. It is treated as anonymous if no user name and password are provided.
- CRAM-MD5: Challenge/Response Authentication Mechanism using Message Digest 5. Provides reasonable security against various attacks, including replay.
- SASL: Adds authentication support to connection-based protocols. Specify parameters for this type of authentication using the Extra Provider Parameters option.
This parameter is not configurable when using the population wizard.
- source_ldap_collect_dns_file
- Name of the file that the collect_dns.bat/sh process uses to collect distinguished names (DNs) from the source, and that the populate_from_dn_file.bat/sh process then uses during population to look up entries to add to the database repository. This file can also be constructed by hand to populate an explicit set of users. The default value is collect.dns. This parameter is not configurable when using the population wizard.
- source_ldap_escape_dns
- Indicates that special characters have not been escaped properly and identifies them so the processor can find those characters and escape them. Special characters are:
- , (comma)
- = (equals)
- + (plus)
- < (less than)
- > (greater than)
- # (number sign)
- ; (semicolon)
- \ (backslash)
- " (quotation mark)
The backslash is used to escape special characters. A plus sign is represented by \+ and a backslash is represented by \\. If your distinguished names contain these special characters and you receive errors when running the collect_dns/populate_from_dn_file process, set this property to true so that the characters are escaped. The default value is false. This parameter is not configurable when using the population wizard.
- source_ldap_required_dn_regex
- Limits the distinguished names (DNs) that are processed by providing a regular expression that must be matched. If the regular expression is not matched, that particular record is skipped. Although the search filter property gives some flexibility, you can use a more powerful regular expression if the filter is not sufficient. A default value is not specified. This parameter is not configurable when using the population wizard.
- source_ldap_sort_attribute (TDI parameter: Sort Attribute)
- Server-side sorting. This parameter instructs the LDAP server to sort entries matching the search base on the specified field name. Server-side sorting is an LDAP extension; the iterating directory must support this sorting extension. A default value is not specified. This parameter is not configurable when using the population wizard.
- source_ldap_iterate_with_filter
- Use this property if the size of the data to be retrieved from LDAP exceeds the search limit of the LDAP server. For example, if your search parameters would return 250K records but your LDAP server only allows 100K to be returned at a time, this parameter must be used. If the data is too large, an LDAP size limit exceeded error message is generated. To configure this mechanism, see the Populating a large user set topic. When set to true, this property specifies that the default iteration AssemblyLine uses the collect_ldap_dns_generator.js file to iterate over a set of LDAP search bases and filters. This configuration setting replaces the sync_all_dns_forLarge and collect_dns_iterate scripts used in earlier releases. This parameter is not configurable when using the population wizard. The default value is false.
- source_ldap_binary_attributes (TDI parameter: Binary Attributes)
- By default, this property is set internally to GUID, objectGUID, objectSid, sourceObjectGUID. Any additional values specified in the property are appended to the list. This parameter is not configurable when using the population wizard. The default value is GUID.
- source_ldap_time_limit_seconds (TDI parameter: Time Limit)
- Maximum number of seconds that can be used when searching for entries; 0 = no limit. This parameter is not configurable when using the population wizard. The default value is 0.
- source_ldap_map_functions_file
- Location of any referenced function mappings. When using the population wizard, the functions shown in the mapping dialog are read from and written to this file. The default value is profiles_functions.js.
- source_ldap_logfile
- In addition to the standard logs/ibmdi.log file, output from the populate_from_dn_file.bat or populate_from_dn_file.sh task is written to this file. This parameter is not configurable when using the population wizard. The default value is logs/PopulateDBFromSource.log.
- source_ldap_compute_function_for_givenName
- Connections allows JavaScript functions that set values of common LDAP fields such as cn, sn, and givenName to execute before Connections performs its mapping. For example, sn or givenName can be parsed from cn (common name). This parameter is not configurable when using the population wizard. A default value is not specified.
- source_ldap_compute_function_for_sn
- Connections allows JavaScript functions that set values of common LDAP fields such as cn, sn, and givenName to execute before Connections performs its mapping. For example, sn or givenName can be parsed from cn (common name). This parameter is not configurable when using the population wizard. A default value is not specified.
source_ldap_collect_updates_file This property is no longer used. source_ldap_manager_lookup_field This property is no longer used. source_ldap_secretary_lookup_field This property is no longer used.
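For example, to process only the entries under a particular branch of the directory, the source_ldap_required_dn_regex property described above could be set as follows; the DN pattern is purely illustrative, so adapt it to your own directory structure:
source_ldap_required_dn_regex=.*,ou=employees,dc=example,dc=com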
Many properties in the TDI LDAP connector are not mapped to Profiles TDI properties. To configure properties other than those listed here, consider using an alternate source repository and creating your own specialized configuration. Use the LDAP iterator and connectors provided with the TDI solution directory as a starting point; see Using a custom source repository connector for more information.
The following properties are associated with the Profiles database repository.
The following properties must be set in profiles_tdi.properties, even if developing your own assembly lines using the connectors provided in the Profiles TDI solution directory. These properties are not configured in the Connector panels, but rather in profiles_tdi.properties.
Table 21. Profiles Database Properties
Property TDI parameter Definition dbrepos_jdbc_driver JDBC Driver Required. The JDBC driver implementation class name used to access the Profiles database repository. For DB2, the default is com.ibm.db2.jcc.DB2Driver. For example: dbrepos_jdbc_driver=com.ibm.db2.jcc.DB2Driver
For Oracle, the default is oracle.jdbc.driver.OracleDriver. For example:
dbrepos_jdbc_driver=oracle.jdbc.driver.OracleDriver
If you are using a Microsoft SQL Server database, change the value to reference a SQL Server driver, for example:
dbrepos_jdbc_driver=com.microsoft.sqlserver.jdbc.SQLServerDriver
This corresponds to the JDBC driver path in the population wizard. If not using the wizard, this library must be present in the CLASSPATH of TDI, or TDI cannot load the library when initializing the Connector and cannot communicate with the Relational Database (RDBMS).
To install a JDBC driver library so that TDI can use it, copy it into the TDI_install_dir/jars directory, or a subdirectory such as TDI_install_dir/jars/local.
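For example, on Linux the DB2 JDBC driver could be copied into the solution's jars directory as follows; the DB2 installation path and jar file name are assumptions, so adjust them for your environment:
cp /opt/ibm/db2/V10.1/java/db2jcc4.jar TDI_install_dir/jars/local/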
dbrepos_jdbc_url JDBC URL Required. JDBC web address used to access the Profiles database repository. Modify the hostname portion and port number to reference your server information. You can find this information by accessing the WAS Administration Console (http://yourhost:9060), and then selecting... Resources | JDBC | Data sources | profiles
Unless you are using the wizard, the default value uses DB2 syntax with a localhost host name. If DB2 is not on the same system as the TDI solution directory, update the URL with the correct host name. If you are using an Oracle database, use the following syntax:
dbrepos_jdbc_url=jdbc:oracle:thin:@<host_name>:1521:orcl
If you are using a SQL Server database, use the following syntax:
dbrepos_jdbc_url=jdbc:sqlserver://<host_name>:1433;databaseName=PEOPLEDB
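For reference, the DB2 form of the URL typically looks like the following, assuming the default DB2 port 50000 and the PEOPLEDB database name; verify both values for your environment:
dbrepos_jdbc_url=jdbc:db2://your_db_host:50000/PEOPLEDB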
dbrepos_username User name Required. User name under which the database tables that are part of the Profiles database repository are accessed. dbrepos_password Password Required. Password associated with the user name under which the database tables that are part of the Profiles database repository are accessed. dbrepos_mark_manager_if_referenced This property is no longer used.
The following properties are associated with the task that monitors the Profiles employee draft table for changes and transmits them through a DSML v2 connector.
Property TDI parameter Definition monitor_changes_dsml_server_authentication Type of authentication used by the DSML server update requests. Options include the following:
HTTP basic authentication Allows a web browser, or other client program, to provide credentials, in the form of a user name and password, when making a request. Anonymous Minimal security. monitor_changes_dsml_server_url Required if you are transmitting user changes back to the source repository. Web address of the DSML server to which the DSML update requests should be sent. monitor_changes_dsml_server_username Required if you are transmitting user changes back to the source repository. User name used for authentication to the DSML server. monitor_changes_dsml_server_password Required if you are transmitting user changes back to the source repository. Password used for authentication to the DSML server to which the DSML update requests are sent. monitor_changes_map_functions_file Path to the file containing mapping functions for mapping from a changed database field to a source (for example, LDAP) field. This is only needed if changes made to the source based on database repository field changes are not mapped one-to-one. You can use the same file that you use to map from source to database repository fields, assuming the functions are named appropriately. monitor_changes_sleep_interval Polling interval, in seconds, between checks for additional changes when no changes exist.
The following properties are associated with the TDI processing that reads an LDAP directory change log (Microsoft Active Directory or IBM Tivoli Directory Server) and subsequently updates the database repository with those changes.
Property TDI parameter Definition ad_changelog_ldap_url LDAP web address used to access the LDAP system that was updated. For example: ldap://host:port
ad_changelog_ldap_user_login Login user name to use to authenticate with an LDAP system that has been updated. You can leave this blank if no authentication is needed. ad_changelog_ldap_user_password Login password to use to authenticate with an LDAP system that has been updated. You can leave this blank if no authentication is needed. The value will be encrypted in the file the next time it is loaded. ad_changelog_ldap_search_base ad_changelog_ldap_use_ssl Defines whether or not to use SSL in authenticating with an LDAP system that was updated. The options are true and false. ad_changelog_timeout ad_changelog_sleep_interval Polling interval, in seconds, between checks for additional changes when no changes exist. ad_changelog_use_notifications Indicates whether to use changelog notifications rather than polling. If true, the ad_changelog_sleep_interval is not applicable because polling is not used. The options are true and false. ad_changelog_ldap_page_size ad_changelog_start_at Change number in the Active Directory changelog to start at. Typically this is an integer, while the special value EOD means start at the end of the changelog. ad_changelog_ldap_required_dn_regex tds_changelog_ldap_authentication_method Authentication Method Authentication method used to connect to LDAP to read records. Options include the following:
Anonymous Minimal security. Simple Login user name and password to authenticate. Treated as anonymous if no user name and password are provided. CRAM-MD5 Challenge/Response Authentication Mechanism using Message Digest 5. Provides reasonable security against various attacks, including replay. SASL Simple Authentication and Security Layer. Adds authentication support to connection-based protocols. Specify parameters for this type of authentication using the Extra Provider Parameters option. tds_changelog_ldap_changelog_base ChangelogBase Changelog base to use when iterating through the changes. This is typically cn=changelog. tds_changelog_ldap_time_limit_seconds Time Limit Searching for entries must take no more than this number of seconds; 0 = no limit.
tds_changelog_ldap_url LDAP URL LDAP web address used to access the LDAP system that was updated. For example: ldap://host:port
tds_changelog_ldap_use_ssl Use SSL Defines whether or not to use SSL in authenticating with an LDAP system that was updated. The options are true and false. tds_changelog_ldap_user_login Login user name Login user name to use to authenticate with an LDAP system that has been updated. You can leave this blank if no authentication is needed. tds_changelog_ldap_user_password Login password Login password to use to authenticate with an LDAP system that has been updated. You can leave this blank if no authentication is needed. The value will be encrypted in the file the next time it is loaded. tds_changelog_sleep_interval Polling interval, in seconds, between checks for additional changes when no changes exist. tds_changelog_start_at_changenumber Change number in the Tivoli Directory Server changelog to start at. Typically this is an integer, while the special EOD value means start at the end of the changelog. tds_changelog_use_notifications Indicates whether to use changelog notifications rather than polling. If true, the tds_changelog_sleep_interval is not applicable because polling is not used. The options are true and false.
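As an illustration only, a minimal set of Tivoli Directory Server changelog settings in profiles_tdi.properties might look like the following; the host, port, and polling interval shown here are assumptions to adapt to your deployment:
tds_changelog_ldap_url=ldap://ldap.example.com:389
tds_changelog_ldap_changelog_base=cn=changelog
tds_changelog_start_at_changenumber=EOD
tds_changelog_sleep_interval=30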
The following properties are available in profiles_tdi.properties and are associated with TDI debug activities.
The debug properties enable TDI debugging for an entire assembly line. In addition, enabling debug_update_profile, which enables debugging for the commands that use the Profiles Connector, also enables Java logging for the following packages.
- log4j.logger.com.ibm.lconn.profiles.api.tdi=ALL
- log4j.logger.com.ibm.lconn.profiles.internal.service=ALL
- log4j.logger.java.sql=ALL
The following properties are not configurable when using the population wizard.
Property TDI parameter Definition sync_all_dns For information about sync_all_dns, see Understanding how the sync_all_dns process works.
debug_managers Flag that instructs TDI to log additional debug information for the following command(s). The options are true and false. To enable, set as debug_managers=true. This property maps as follows:
debug_managers: mark_managers. The default setting is false.
debug_photos Flag that instructs TDI to log additional debug information for the following command(s). The options are true and false.
This property maps as follows:
debug_photos: load_photos_from_files, dump_photos_to_files. The default setting is false.
debug_pronounce Flag that instructs TDI to log additional debug information for the following commands. The options are true and false. This property applies to the following commands: load_pronounce_from_files, dump_pronounce_to_files. The default setting is false.
debug_fill_codes Flag that instructs TDI to log additional debug information for the following commands. The options are true and false. This property applies to the following commands: fill_country, fill_department, fill_emp_type, fill_organization, fill_workloc. The default setting is false.
debug_draft Flag that instructs TDI to log additional debug information for the following commands. The options are true and false. This property applies to the following commands: process_draft_updates, reset_draft_iterator_state, set_draft_iterator_count. The default setting is false.
debug_update_profile Flag that instructs TDI to log additional debug information for the following commands. The options are true and false. This property applies to the following commands: populate_from_dn_file, delete_or_inactivate_employees, populate_from_xml_file, process_ad_changes, process_tds_changes. The default setting is false.
debug_collect Flag that instructs TDI to log additional debug information for the following command. The options are true and false. This property applies to the following command: collect_dns. The default setting is false.
debug_special Flag that instructs TDI to log additional debug information. The options are true and false. This property is currently not used by any command. The default setting is false.
trace_profile_tdi_javascript Enable generation of an internal JavaScript trace file. Options are OFF, FATAL, ERROR, WARN, INFO, DEBUG, TRACE, and ALL (values are not case-sensitive). The default setting is OFF.
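For example, to capture additional detail while running a population, the relevant flags could be enabled in profiles_tdi.properties as follows; the particular combination shown is illustrative only:
debug_update_profile=true
debug_collect=true
trace_profile_tdi_javascript=DEBUG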
Batch files for processing Profiles data
Connections provides several batch files that automate the collection and processing of LDAP data for the Profiles database.
Batch file functions
The name of each batch file ends with the .sh suffix for the IBM AIX/Linux operating systems and with the .bat suffix for the Microsoft Windows operating system.
The following list describes each batch file and its functions; a sketch of a typical population sequence follows the list. You can search for more information about these files in the help topics.
- clearLock
- Delete the lock file that is generated by the sync_all_dns batch file.
- collect_dns
- Create a file called collect.dns that contains the distinguished names from the LDAP directory. This batch file is used in the first step of the process to populate the Profiles database.
- delete_or_inactivate_employees
- Deactivate employee records in the Profiles database. The records are not removed from the Profiles database but are set to an inactive state and the employee login and mail address values are removed. These changes are propagated to the member and login tables in the databases of installed applications. The records to be deactivated are defined in the delete_or_inactivate_employees.in file. To remove users from only the Profiles database, change the value of the sync_delete_or_inactivate property in profiles_tdi.properties to delete.
You must manually create the delete_or_inactivate_employees.in file. Use the following format for adding entries:
$dn:cn=Any User3,cn=Users,l=WestfordFVT,st=Massachusetts,c=US,ou=Lotus,o=Software Group,dc=ibm,dc=com
uid:Any User3
- dump_photos_to_files
- Copy all the photos from the PHOTO table in the Profiles database to a folder on the local system called dump_photos. This batch file also creates a local file called collect_photos.in that contains the UID and URL of each photo.
- dump_pronounce_to_files
- Copy all the pronunciation files from the PRONUNCIATION table in the Profiles database to a folder on the local system called dump_pronounce. This batch file also creates a local file called collect_pronounce.in that contains the UID and URL of each pronunciation file.
- fill_country
- Populate the COUNTRY table in the Profiles database from the isocc.csv file.
- fill_department
- Populate the DEPARTMENT table in the Profiles database from the deptinfo.csv file.
- fill_emp_type
- Populate the EMP_TYPE table in the Profiles database from the emptype.csv file.
- fill_organization
- Populate the ORGANIZATION table in the Profiles database from the orginfo.csv file.
- fill_workloc
- Populate the WORKLOC table in the Profiles database from the workloc.csv file.
- load_photos_from_files
- Load all the photos from the dump_photos folder on the local system to the PHOTO table in the Profiles database. This batch file reads the collect_photos.in file and the dump_photos folder that you created with the dump_photos_to_files batch file. This batch file loads photos only for people who are already recorded in the database.
- load_pronounce_from_files
- Load all the pronunciation files from the dump_pronounce folder on the local system to the PRONUNCIATION table in the Profiles database. This batch file reads the collect_pronounce.in file and the dump_pronounce folder that you created with the dump_pronounce_to_files batch file. This batch file loads pronunciation files only for people who are already recorded in the database.
- mark_managers
- Set the PROF_IS_MANAGER field in the Profiles database, based on the value of the PROF_MANAGER_UID field in the employee records.
- populate_from_dn_file
- Populate the Profiles database from the source LDAP directory. This batch file reads the collect.dns data file that you created with the collect_dns batch file. The batch file also updates existing employee records in the Profiles database.
- process_ad_changes
- Synchronize LDAP directory changes with the Profiles database when your LDAP directory type is Microsoft Active Directory. This batch file is stored in the solution-dir/Samples directory.
For more information, go to the Active Directory Change Detection Connector topic in the TDI information center.
For information about permissions, go to the How to poll for object attribute changes topic on the Microsoft Support website.
The sync_all_dns script is recommended when you want to synchronize changes in the LDAP directory with the Profiles database.
- process_draft_updates
- Synchronize changes from the Profiles database back to the LDAP directory.
- process_tds_changes
- Synchronize LDAP directory changes with the Profiles database when your LDAP directory type is IBM Tivoli Directory Server. This batch file is stored in the solution-dir/Samples directory.
The sync_all_dns script is recommended when you want to synchronize changes in the LDAP directory with the Profiles database.
- sync_all_dns
- Update the Profiles database to capture changes to the LDAP directory. This synchronization process includes updates to employee records and additions and deletions of records.
- tdienv
- Set the correct environment for IBM TDI. This batch file sets the path to the TDI program, the TDI host, and the TDI port. If you installed TDI to a custom location, modify the path to that location before using this batch file.
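The following sketch shows a typical initial population sequence on AIX or Linux, assuming that you run the scripts from the TDI solution directory (use the .bat equivalents on Microsoft Windows):
# Collect distinguished names from the LDAP directory into collect.dns
./collect_dns.sh
# Populate or update the Profiles database from collect.dns
./populate_from_dn_file.sh
# Optionally, mark managers based on the PROF_MANAGER_UID references
./mark_managers.sh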
Populate a large user set
Populate the Profiles database from an LDAP directory with a large user population.
In very large organizations, the number of users in the LDAP directory can exceed the capacity of the standard TDI assembly lines for Profiles. To overcome this restriction, you can populate the database by using manual TDI assembly lines. You cannot use the Profiles population wizard.
For related information and details, see TDI help.
Limits with large user sets
The LDAP administrator can change the LDAP size limit. The capacity of the standard assembly lines provided with Connections is 100,000 users. In some cases, you can modify the maximum number of entries returned from the LDAP or adjust the source_ldap_page_size parameter in profiles_tdi.properties. For example, set the parameter to the maximum number of records your LDAP repository will return, using the following sample statement:
source_ldap_page_size=1000
If you receive the following error, adjust the source_ldap_page_size parameter in profiles_tdi.properties:
LDAP: error code 4 - Sizelimit Exceeded
If neither of these alternatives is successful, use a special set of assembly lines to populate your Profiles database from your LDAP directory.
Alternative population process
If you have a very large set of data, set the source_ldap_iterate_with_filter property in profiles_tdi.properties to true. This uses the collect_ldap_dns_generator.js file to retrieve search criteria for a batch of records. The batch is always smaller than the limit of the LDAP retrieval.
The collect_ldap_dns_generator.js file constructs a search filter with a portion of UIDs but does not modify the search base. It is data-specific, so you need to modify it for your own deployment. Modify suppliesSearchBase() or suppliesSearchFilter(), depending on which mechanism is used in the LDAP retrieval.
If one of the filters is changed to return true (in the supplied file, suppliesSearchBase returns true), the corresponding function, either getNextSearchBase() or getNextSearchFilter(), is called in iterations. Each time the function is called it returns a string with the next search base or filter to use. When it reaches the end of the batch, it returns null.
In the sample file, the UID is examined over a range of its first characters. The process first uses some special characters and then examines the first two characters of the UID string, for example aa*, ab*, and so on. After it reaches zz*, it returns null and the collect_dns assembly line stops processing. You can then run populate_from_dn_file.
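For illustration, the following is a minimal sketch of the generator pattern, using the search-filter variant rather than the search-base variant that the supplied file uses; the uid attribute and the two-character batching are assumptions to adapt to your own data:
// Minimal sketch of a collect_ldap_dns_generator.js-style generator (filter variant).
// The uid attribute and two-character batches are assumptions; adapt them to your data.
var chars = "abcdefghijklmnopqrstuvwxyz";
var i = 0;
var j = 0;
var done = false;

function suppliesSearchBase() {
    return false; // this sketch does not vary the search base
}

function suppliesSearchFilter() {
    return true; // filters are generated in batches instead
}

function getNextSearchFilter() {
    if (done) {
        return null; // tells the collect_dns assembly line to stop
    }
    var filter = "(uid=" + chars.charAt(i) + chars.charAt(j) + "*)";
    j++;
    if (j >= chars.length) {
        j = 0;
        i++;
        if (i >= chars.length) {
            done = true;
        }
    }
    return filter;
}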
Map fields manually
To populate the Profiles database with data from the enterprise LDAP directory, map the content of the fields in the database to the fields in the LDAP directory.
Edit the map_dbrepos_from_source.properties file to map fields between the Profiles database and the LDAP directory. Open the profiles_functions.js file to see the options for the different mapping functions. You can add your own functions if necessary.
When you run the Profiles population wizard in interactive mode, it generates two property files in the Wizards\TDIPopulation directory: a tdisetting.properties file and a mappings.properties file. The properties in mappings.properties are very similar to those in map_dbrepos_from_source.properties.
To map fields...
- On the system hosting your TDI installation, create a subdirectory in which to store the TDI solution directory. Make sure that the file path does not contain spaces. Do not, for example, create the subdirectory in the Program Files directory in Microsoft Windows.
- Copy the tdisol compressed file from the TDISOL directory of the Connections installation media to the system where you installed TDI.
- Using appropriate tools, extract the tdisol file to the directory that you created in Step 1. This process creates a TDI solution directory called TDI.
- Move the OS directory that you copied earlier to the jvm/jre/lib/ext subdirectory of the TDI directory. Rename the directory to TDI. This directory becomes the TDI solution directory.
- From the TDI solution directory, open the tdienv.bat or tdienv.sh file in a text editor. Ensure that the path to the TDI installation directory is specified correctly in the TDIPATH variable. If the path is not correct, edit the TDIPATH environment variable.
Other scripts in the solution directory use the tdienv.bat or tdienv.sh file to locate the TDI files.
- AIX or Linux:
The default value for TDIPATH is:
export TDIPATH=/opt/IBM/TDI/V7.1
- Windows:
The default value for TDIPATH is:
SET TDIPATH=C:\IBM\TDI\V7.1
- Edit the properties files to define the mapping between the LDAP directory and the Profiles database. Consider using LDAP viewer software to help you map the fields. To define the mappings that are used when populating the Profiles database from the enterprise directory:
- From the TDI directory, open the map_dbrepos_from_source.properties file in a text editor.
- Add or modify the field values. Any values that you omit or set to null are not populated in the database. You can modify the values in one of the following ways:
- 1:1 mapping
- If one field in the Profiles database matches one field in the enterprise directory, type the name of the field in the Profiles database and set it equal to the associated source database LDAP property. For example:
bldgId=buildingname
- Complex mapping
- If there is a more complex relationship between the fields in the Profiles database and enterprise directory, such as the content of the property in the enterprise LDAP directory must be split into multiple fields in the Profiles database, use a JavaScript function to define the relationship. Define the function in the profiles_functions.js file and wrap the name of the JavaScript function in braces {}. Begin function names with func_ so that you can more easily identify them. For example:
bldgId={func_map_to_db_bldgId}
See Example complex mapping of Profiles data for an example of complex mapping. Notes:
- The uid, guid, dn, surname, and displayName attributes are always required.
See Table 26 for a list of the default values for the fields.
- Open the tdi-profiles-config.xml file. After the IBM TDI Solution files are extracted, the file is located in the following directory:
TDI/conf/LotusConnections-config
- Modify the file to configure the extension attribute, specifying the property's name and mapping from the source. Use the following parameters:
Table 25. Custom extension attribute parameters
Parameter Description
extensionId ID of the extension attribute. Required.
sourceKey Name of the attribute from the source. Required.
For example, to add a simple attribute called spokenLangs, the configuration would look like the following extract from the tdi-profiles-config.xml file:
<simpleAttribute extensionId="spokenLangs" sourceKey="spokenLang"/>
The formatting between the tdi-profiles-config.xml and profiles-config.xml files is compatible, so you can copy and paste configuration information between the files. For the extension to be displayed in the user interface, the modifications must be made in profiles-config.xml.
For more information, see Extension properties in the data model in the Customizing Profiles section.
To leverage the custom attribute in the Profiles user interface or REST API, configure the application per the instructions in the Customizing Profiles section. For a detailed example that uses custom attributes, see Creating a simple profile data model and template customization.
- Save your changes to the tdi-profiles-config.xml file.
- Write a JavaScript function that combines different attributes from your LDAP directory to map a customized extension attribute for the Profiles database (a minimal sketch follows these steps):
- Add the extension attribute function definition in the map_dbrepos_from_source.properties file, using the following format:
extattr.spokenLangs={func_map_to_langs}
The extensionAttribute name must match the specified extensionId in the tdi-profiles-config.xml extension attribute definition.
- Add a new func_map_to_db_extensionAttribute JavaScript function in the TDISolution\TDI\profiles_functions.js file. Write logic for the function that specifies the new extension attribute mapping.
- Repeat these steps for each JavaScript function.
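For illustration, a minimal sketch of such a function is shown below; the LDAP attribute names preferredLanguage and secondaryLanguage are assumptions, and the real logic depends on your directory schema:
function func_map_to_langs(propertyName) {
    // propertyName identifies the property being mapped; not used in this sketch
    // Combine two assumed single-valued LDAP attributes into one value
    var primary = work.getString("preferredLanguage");
    var secondary = work.getString("secondaryLanguage");
    if (primary == null) {
        return secondary;
    }
    if (secondary == null) {
        return primary;
    }
    return primary + ";" + secondary;
}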
The properties in the map_dbrepos_from_source.properties file have the default values defined in the following table. Many of them are null. You must determine which LDAP properties to map to your database fields and edit this file to specify values that apply to your configuration. Any values that you omit or set to null are not populated in the database.
See Attribute mapping for Profiles for a table of additional attribute mapping field values.
Table 26. Default values for properties in the map_dbrepos_from_source.properties file
TDI property Default LDAP attribute mapping
- alternateLastname: null
- bldgId: null
- blogUrl: null
- calendarUrl: null
- countryCode: c
- courtesyTitle: null
- deptNumber: null
- description: null
- displayName: cn. Required.
- distinguishedName: $dn. Required. By default the TDI property distinguishedName is mapped to the $dn function, which executes a DN lookup based on the directory type.
- employeeNumber: employeenumber
- employeeTypeCode: employeetype
- experience: null
- faxNumber: facsimiletelephonenumber
- floor: null
- freeBusyUrl: null
- givenName: givenName
- givenNames: gn
- groupwareEmail: null
- guid: See the note following this table for information about mapping the guid, uid, and loginId properties. Required.
- ipTelephoneNumber: null
- isManager: null
- jobResp: null
- loginId: See the note following this table for information about mapping the guid, uid, and loginId properties.
- logins: null
- managerUid: $manager_uid. This property represents a lookup of the UID of the manager using the Distinguished Name in the manager field.
- mobileNumber: mobile
- nativeFirstName: null
- nativeLastName: null
- officeName: physicaldeliveryofficename
- orgId: ou
- pagerId: null
- pagerNumber: null
- pagerServiceProvider: null
- pagerType: null
- preferredFirstName: null
- preferredLanguage: preferredlanguage
- preferredLastName: null
- prof_type: Common types are customer, employee, and contractor. See the Profile-types topic for details. See the note following this table for information about adding profile types.
- secretaryUid: null
- shift: null
- surname: sn. The Search application expects to find this in the Profiles database. Required.
- surnames: sn
- telephoneNumber: telephonenumber
- timezone: null
- title: null
- uid: See the note following this table for information about mapping the guid, uid, and loginId properties. Required.
- workLocationCode: postallocation
Mapping the guid, uid, and loginId: The guid property identifies the global unique ID of a user. This property's value is created by the LDAP directory and is unique, complex, and never changes. It is essential in that it maps each user's Connections data to their user ID when the Profiles database is used as the user repository. The mapping of the guid property must be handled differently depending on the type of LDAP directory that you are using:
If you edited the wimconfig.xml or LotusConnections-config.xml file to use a custom global unique ID, be sure to specify that custom ID here. If you specify an attribute here other than the default, edit the federated directory configuration to match the guid used here.
- Microsoft Active Directory
guid={function_map_from_objectGUID} You must use a JavaScript function to define the value for Active Directory because objectGUID is stored in Active Directory as a binary value, but is mapped to guid, which is stored as a string in the Profiles database. Also, the samAccountName property used by Active Directory has a 20 character limit, as opposed to the 256 character limit of the other IDs used by Connections.
- IBM Lotus Domino
guid={function_map_from_dominoUNID}
- IBM Directory Server
guid=ibm-entryUuid
- Sun Java System Directory Server
guid=nsUniqueID
- Novell eDirectory
guid={function_map_from_GUID}
See also Specify the global ID attribute for users and groups and Specify a custom ID attribute for users or groups. If the custom ID is different for users and groups, see Configure the custom ID attribute for users or groups. The uid property, not to be confused with the guid property, defines the unique ID of a user. This property differs from a guid in that it is the organization-specific permanent identifier for a user – often a login ID or some value based on the user's employee code. The uid is a critical field in the Profiles database. By default, this property links a given person's user record back to LDAP data. The value you map to uid must meet the following requirements:
In Active Directory, although there is often a UID field available, this field is not always the best choice for mapping to uid because it is not guaranteed to be present for all entries. A better choice is samAccountName because it usually exists for all entries. Other values are also acceptable, as long as they meet the requirements. Notes:
- It must be present in every entry that is to be added to the database.
- It must be unique.
- In a multi-LDAP environment, it must be unique across LDAP directories.
- It must be 256 characters or fewer in length.
- If you are mapping the uid from an LDAP field, specify the name of the field. However, if you need to parse it from the distinguished name and it is in the DN in the form of uid=value, use the following mapping function:
{func_map_to_db_UID}
- Use the isManager and managerUid properties to set up the organizational structure of the organization. The isManager field determines whether the current person is a manager or not. You must assign a Y (Yes) or N (No) value to this property for each entry. Y identifies the person as a manager. The managerUid identifies the UID of the current person's manager. By default, managerUid is mapped to $manager_uid, which represents a lookup of the UID of the manager (using the Distinguished Name contained in the LDAP manager field). If a user's manager information is not contained in the $manager_uid field, you should adjust the mapping accordingly. These two properties work together to identify manager/employee relationships and create a report-to chain out of individual user entries.
- If users intend to log into Profiles using a single-valued user name other than the value specified in the uid or email properties, map that user name value to the loginId property. To do so...
- Set the loginId property in the map_dbrepos_from_source.properties file equal to the LDAP property to use as the login ID. For example, if you want to use employeeNumber as the login property, edit the property value as follows:
loginId=employeeNumber
If you have more than one additional login ID (such as a long and a short form user ID) and you want to allow users to log in with either of their login IDs, you can populate multiple additional login IDs by using one of the following settings:
logins=multiValuedLdapAttribute
or
logins={function_to_get_multiple_ldap_values}
For more information, see the TDI product documentation.
Adding profile types:
Connections supports the ability to classify a profile using a profile type. The profile type allows the application to provide the set of properties that are intended for a given profile object.
Example complex mapping of Profiles data
This example contains sample complex mapping using Javascript functions to define mapping between the LDAP directory and the Profiles database.
When manually mapping Profiles fields you can perform 1:1 mapping as described in the Mapping fields manually parent topic, or complex mapping as described here.
The following examples progress in their complexity. The examples are designed to convey mapping options, not necessarily to provide practical use cases. Sample code is annotated to help further describe its intent and functional behavior.
Example 1:
This first example returns a single value, namely the employee number value obtained from the LDAP with the string _ACME appended to show a simple programmatic change.
Normally, the following line would be entered in the map_dbrepos_from_source.properties file to copy the LDAPemployeeNumber value:
employeeNumber=LDAPemployeeNumber
However, the resulting value cannot be modified to add additional information, such as an organization name. To use the func_employeeNumber_simple function to append an organization name such as _ACME, the following line could be entered:
employeeNumber={func_employeeNumber_simple}
Notice that the function name is surrounded by curly braces; thus the return value of the function is a string that is copied into the Profiles database as the employee number (employeeNumber).
The following func_employeeNumber_simple function definition needs to be added to the profiles_functions.js file in the TDI solution directory.
function func_employeeNumber_simple( propertyName) {
  // propertyName would be 'employeeNumber'; not used
  var propertyNameStr = propertyName.toString();
  // assume no error getting value
  var result = work.getString("LDAPemployeeNumber") + "_ACME";
  return result;
}
Note that the function references a work entry named LDAPemployeeNumber in the line var result = work.getString("LDAPemployeeNumber") + "_ACME";. This is the name of the employee number attribute in the LDAP.
Enter the following line in map_dbrepos_from_source.properties prior to the line where the function is called, seen as follows:
xxx=LDAPemployeeNumber
employeeNumber={func_employeeNumber_simple}
The purpose of the xxx=LDAPemployeeNumber line is simply to make LDAPemployeeNumber available in the work entry.
Example 2:
This example shows how to return multiple values, namely the employee number value and the correlating employee's short name value, a Domino attribute that also uniquely identifies a person. This example assumes that you want users to be able to log in using their LDAPemployeeNumber and an LDAP attribute named shortName, in addition to the standard uid and email attributes. You also want all user login values to appear in the Profiles database EMPLOYEE or PROFILE_LOGIN tables. If there were only one attribute, you could use the loginId= property in map_dbrepos_from_source.properties, but there are two, so that method is not available.
The LDAPemployeeNumber and shortName attributes are single value attributes.
You need to create a function that returns a list containing the employeeNumber and shortName; in this example the function is created in profiles_functions.js, as usual, as func_loginNames_ext. To simplify this example, the uid and email attributes are not shown in the list.
...
xxx=employeeNumber
yyy=shortName
logins={func_loginNames_ext}
Because of the logins={func_loginNames_ext} line, the list returned by the following function becomes the logins value and is added to the PROFILE_LOGIN table. While typically not needed for a valid login, this method illustrates the mapping technique.
function func_loginNames_ext( propertyName) {
  var propertyNameStr = propertyName.toString(); // ='logins'; not used in later statement
  var result = work.getAttribute("givenName"); // get any single valued defined attribute
  if (result == null) {
    result = "no_work_element";
  } else {
    result = result.clone(); // required
    result.removeValue(0); // remove givenName (not required here)
    var result0 = work.getString("employeeNumber"); // assume no error, 1 value
    result.setValue( 0, result0);
    var result1 = work.getString("shortName"); // assume no error, single value
    result.setValue( 1, result1);
  }
  return result;
}
Example 3:
In this final example, the mapping pertains to a multivalue list of surnames using the sn attribute. To each surname in the returned list, we append the string Jr. to demonstrate a simple change.
The needed properties in map_dbrepos_from_source.properties are as follows.
surname=sn
surnames={func_surNames_Acme}
In this example, the LDAP sn field contains a list of surnames.
The surname=sn line causes the first entry in the resulting list to be stored in the surname field in the Profiles user table.
The surnames={func_surNames_Acme} line causes the list returned by the following func_surNames_Acme function to be placed in the Profiles SURNAME table, where it is available for name search.
function func_surNames_Acme( propertyName) {
  var propertyNameStr = propertyName.toString(); // ='surnames'; not used in later statement
  var result = work.getAttribute("sn"); // get the sn list
  if (result == null) {
    result = "no sn work element"; // return bogus value.
    // See the function func_compute_givenName() in
    // profiles_functions.js for a more realistic approach
  } else {
    result = result.clone();
    var len = result.size();
    for (var i = 0; i < len; i++) {
      var val = result.getValue(i);
      if (!(val instanceof java.lang.String)) {
        val = java.lang.String.valueOf(val);
      }
      val = val + " Jr."; // append "Jr."
      result.setValue(i, val); // update value
    }
  }
  return result;
}
In the function func_surNames_Acme( propertyName) { line, the propertyName parameter value is the name of the property that appears in the map_dbrepos_from_source.properties file on the line where the function is referenced. This makes it possible to use the same function for a number of properties. In this example, the value is surnames.
In the var result = work.getAttribute("sn"); line, an attribute object is obtained. In this example, the argument for getAttribute() must be sn to obtain the list of surnames.
The result = "no sn work element"; line is simply a test for sn not being available.
Given that the list is available, we clone it with the result = result.clone(); line to avoid changing the entry list that belongs to TDI.
We next iterate through the list, verifying that each value is a string. This is a best practice even though it is unnecessary in this particular example.
The result.getValue(i); line gets the next item in the list; this represents the R element of CRUD.
The result.setValue(i, val); line shows how to modify a value; this represents the U element of CRUD.
In the previous example 2, the setValue method was used to perform both these functions.
To demonstrate how the delete (D) element of CRUD would work, let's limit the surnames list to five names by adding the following Javascript after the for (var i = 0; i < len; i++) line:
if (i > 4) { result.removeValue(i); len--; i--; continue; }
Attribute mapping for Profiles
When the Profiles directory service is enabled, Connections relies on the Profiles database to provide user data such as user name, ID, and email.
The internal name of the Profiles database is PEOPLEDB.
The following table shows the mapping relationships between Profiles, the Profiles directory service, Virtual Member Manager, and LDAP.
Profiles database column Profiles Directory Service Virtual Member Manager LDAP
PROF_GUID ID uniqueId UUID/GUID/UNID (defined in RFC4122)
PROF_DISPLAY_NAME Name cn/displayName cn/displayName
PROF_MAIL mail/ibm-primaryEmail mail/ibm-primaryEmail
PROF_SOURCE_UID DN uniqueName DN
PROF_UID UID UID UID or samAccountName (in MS Active Directory uid is mapped to samAccountName)
PROF_LOGIN LOGIN Login attributes other than UID and mail LDAP login attributes other than UID and mail
The following table shows the population functions that are used in TDI scripts to populate ID into PROF_GUID.
LDAP implementations LDAP attribute type names LDAP syntax TDI scripts with functions
IBM Lotus Domino Server dominoUNID Directory String (in Byte String Format) {function_map_from_dominoUNID}
Novell eDirectory Server GUID Octet String (in Binary Format) {function_map_from_GUID}
Microsoft Active Directory Server/Service objectGUID Octet String (in Binary Format) {function_map_from_objectGUID}
IBM Tivoli Directory Server ibm-entryUUID Directory String (in Canonical Format) n/a
Sun Java Directory Server nsuniqueid Directory String (in Canonical Format) n/a
Configure the Manager designation in user profiles
When you map manager data in the Profiles database, you can mark manager profiles and also create report-to chains.
Each profile contains a manager_uid field which stores the uid value of that person's manager. This information is used to build the Reports To display widget in the Profiles user interface.
For information about the manager_uid field, see Mapping fields manually.
Additionally, the isManager field (which equates to the Mark manager mapping task in the Profiles population wizard) is used to mark the user profile as being a manager. This information is used to build the People Managed display widget in the Profiles user interface. A Y or N attribute is assigned to an employee to indicate whether the employee is listed as a manager of other employees.
You can set the isManager field as described in the Mapping fields manually topic (using either 1:1 mapping or function mapping) or by running the Mark managers task (using the population wizard or by running the mark_managers.bat or mark_managers.sh script).
If you are setting the isManager field using a 1:1 mapping, ensure that you have specified how to set the field in the map_dbrepos_from_source.properties file. For example, if your LDAP has an ismanager field that is set to a value of Y or N, your map_dbrepos_from_source.properties file could specify the following property:
PROF_IS_MANAGER=ismanager
If the manager information is supplied directly from the source, the Mark managers task is not necessary.
The Mark managers task iterates through the profiles and checks whether each profile is referenced as the manager of any other profile. If so, it marks that profile as a manager; otherwise, the profile is marked as not a manager.
For information about configuring the display of the Reports To and People Managed widgets for your organization, see Configure the reporting structure feature.
Supplemental user data for Profiles
You can supplement user profiles data using a mapping table, profiles_tdi.properties, and CSV files.
Map user data
You can map additional user data to supplemental tables within the Profiles database and then display that data in a user's profile.
When the LDAP directory provides a code or abbreviation for a particular setting, the supplemental table can provide extra data. For example, an employeeType of P in the LDAP directory might correspond to Permanent. If the employee-type table is populated with data such as p;permanent, this extra data can be displayed in the profile. The profiles_tdi.properties file stores the settings that determine how the files are formatted.
These properties are supplied in profiles_tdi.properties. The file path specified is relative to the TDI solution directory.
This step is mandatory if one or more entities have been selected as the Group By filter in Metrics. Otherwise, when you categorize the Metrics report by that entity, the report shows an unknown value instead of the descriptive name of the entity in the chart. Metrics has three default Group By attributes: country, organization, and title. The country and organization attributes are in the supported list. The mapping task for Profiles maps your user data to the following entities:
- Fill countries
- Add country data to each profile. Run the fill_country.sh or fill_country.bat script to populate the COUNTRY table.
- Fill departments
- Add department data to each profile. Run the fill_department.sh or fill_department.bat script.
- Fill organization
- Add organization data to each profile. Run the fill_organization.sh or fill_organization.bat script.
- Fill employee types
- Add employee-type data to each profile. Run the fill_emp_type.sh or fill_emp_type.bat script.
- Fill work locations
- Add location data to each profile. Run the fill_workloc.sh or fill_workloc.bat script.
CSV files
A CSV (comma separated value) file is required as input for each of these tasks.
The following properties pertain to the CSV files used by these tasks:
country_table_csv_separator=;
country_table_csv_file=isocc.csv
department_table_csv_separator=;
department_table_csv_file=deptinfo.csv
emp_type_table_csv_separator=;
emp_type_table_csv_file=emptype.csv
organization_table_csv_separator=;
organization_table_csv_file=orginfo.csv
workloc_table_csv_separator=;
workloc_table_csv_file=workloc.csv
The separator character separates the different tokens in each line. The second property is the name of the file, relative to the solution directory.
The first token is the code. The next attributes are read in order for each additional field. No other fields are required.
The data that can be populated in these tables is usually provided as two values per line: code;description. For the workloc code, the values can be code;addr1;addr2;city;state;zip. For example: WSF;FIVE TECHNOLOGY PARK DR;;WESTFORD;MA;01886-3141.
Fields that you do not require in your mapping can be omitted from the file; this example uses only one addr field.
The default file name for each code is shown in the following list:
- countryCode
- isocc.csv
- deptNumber
- deptinfo.csv
- orgId
- orginfo.csv
- employeeTypeCode
- emptype.csv
- workLocationCode
- workloc.csv
Sample CSV file
This sample shows some lines from the isocc.csv file, which can be used to fill countries data:
ad;Andorra, Principality of
ae;United Arab Emirates
af;Afghanistan, Islamic State of
ag;Antigua and Barbuda
ai;Anguilla
al;Albania
am;Armenia
an;Netherlands Antilles
ao;Angola
aq;Antarctica
ar;Argentina
You can find more sample CSV files in the wizard_files_directory/TDIPopulation/TDISOL/aix|lin|win/samples directory, where wizard_files_directory is the location of the various wizard files that you downloaded or received on disk, and aix|lin|win is the AIX, Linux, or Microsoft Windows version of the directory.
Install Cognos Business Intelligence
IBM Cognos Business Intelligence collects, manages, and displays statistical information for the Metrics application in Connections. Installing Cognos Business Intelligence involves installing IBM WAS Network Deployment, plus a database client, in addition to the Cognos components.
This documentation assumes you are deploying Cognos Business Intelligence before you install Connections. If you choose to deploy Cognos later, you can ignore any Cognos-related validation errors during the Connections installation process and then return to this section later for instructions on deploying the Cognos components. Even if you are not deploying Cognos now, you can create the raw Metrics database and install the Metrics application so that Connections can immediately begin capturing event data for later use.
If you deploy IBM Cognos Transformer on a non-Windows system and connect to data located in a Microsoft SQL Server database, install the "Progress DataDirect Connect for ODBC" driver; it is the only ODBC driver that IBM Cognos supports for such a configuration. The driver is not free, so if you want to avoid the cost of the licensed driver, you need to install Transformer on a Windows system. You have two options:
- To install both Cognos Business Intelligence and Transformer on the Windows system
- To leave Cognos Business Intelligence on a non-Windows system and install Transformer on Windows. Refer to the technote for detailed information.
Connections requires a customized version of Cognos Business Intelligence, which is installed using the provided script. You cannot use previously deployed Cognos Business Intelligence components with the Metrics application. For best performance, use a separate computer for the customized version of Cognos Business Intelligence.
To install Cognos Business Intelligence and its supporting software...
Install WAS for Cognos Business Intelligence
Install IBM WAS Network Deployment on the computer that will host IBM Cognos Business Intelligence, so that the server can be managed by the Deployment Manager used with Connections.
The WebSphere node hosting the Cognos BI server will be federated to the same dedicated Deployment Manager as your Connections deployment. Do not use a WebSphere node that has already been federated to a different Deployment Manager.
- Install WAS Network Deployment as described in the WAS Version 8.0 information center.
This server should be installed using the Application server profile; you will federate it to the Deployment Manager in a later task.
If you will use a server where the appropriate version of WAS Network Deployment is already deployed, you do not have to install it again; just create a new Application server profile for Cognos Business Intelligence as follows:
IBM AIX, Linux
manageprofiles.sh -create -templatepath WAS_install_root/profileTemplates/default -adminUserName was_admin_name -adminPassword was_admin_password -profileName cognosProfile -nodeName CognosNode01
Windows
manageprofiles.bat -create -templatepath WAS_install_root\profileTemplates\default -adminUserName was_admin_name -adminPassword was_admin_password -profileName cognosProfile -nodeName CognosNode01
You can verify that the installation was successful by reviewing the WAS log stored in Cognos_WAS_node_profile/logs/Cognos_server_name/SystemOut.log; for example, on Windows:
C:\IBM\WebSphere\AppServer\profiles\cognosProfile\logs\cognos_server\SystemOut.log
- On the newly installed server, set the system clock to match the time (and time zone) on the Deployment Manager’s computer.
The node’s clock must be synced to within 1 minute of the Deployment Manager’s clock to ensure that federation does not fail in a later task.
Install the database client for Cognos Transformer
Install a database client on the computer where you will host IBM Cognos Business Intelligence. The Cognos Transformer component requires the use of a database client to ensure proper access to the database server for PowerCube generation and reporting.
Install the DB2 database client for Cognos Transformer
Install the IBM DB2 database client on the computer where you will host IBM Cognos Business Intelligence.
Before you install the database client, the database server must be installed and running, and you must have created the Metrics database and the Cognos Content Store database.
- Install the database client on the server where you will deploy Cognos Business Intelligence, using the instructions provided with your database product.
For instructions on installing the IBM DB2 client, see the Installation methods for IBM data server clients in the DB2 information center.
- (DB2 on Linux only) Ensure that the DB2 client library is properly sourced:
- Open the /etc/ld.so.conf file for editing.
- Add the DB2_installation_directory/lib32 library path to the file. Use the path where DB2 is installed; for example:
/usr/X11R6/lib64/Xaw3d
/usr/X11R6/lib64
/usr/lib64/Xaw3d
/usr/X11R6/lib/Xaw3d
/usr/X11R6/lib
/usr/lib/Xaw3d
/usr/x86_64-suse-linux/lib
/usr/local/lib
/opt/kde3/lib
/lib64
/lib
/usr/lib64
/usr/lib
/usr/local/lib64
/opt/kde3/lib64
/opt/ibm/db2/V10.1/lib32
include /etc/ld.so.conf.d/*.conf
- Save and close the file.
- Run the ldconfig command to regenerate dynamic link libraries (DLLs).
- Create a connection between the client and the database.
First, connect the client to the node (the DB2 server) where the database is hosted by completing the following steps.
- Log on to the client computer as a user with at least DB2 SYSADM authority.
You cannot catalog a node with root authority.
- Change to the home directory of the instance.
The following steps refer to this directory as INSTHOME.
- (AIX or Linux) Set up the instance environment by running the startup script:
bash, Bourne, or Korn shell
. INSTHOME/sqllib/db2profile
C shell
source INSTHOME/sqllib/db2cshrc
where INSTHOME indicates the home directory of the instance.
- Open the DB2 command line processor:
AIX or Linux
db2
Windows
db2cmd db2
- Catalog the DB2 server by entering the following commands in the command line processor:
db2 => catalog tcpip node Node_name remote Host_name|IP_address server Service_name|DB2_Instance_Port
db2 => terminate
Where:
- Node_name indicates a local nickname (for example, the short host name) that you can set for the DB2 server.
- Host_name|IP_address indicates the host name or IP address of the server instance where the database resides (both IPv4 and IPv6 addresses are accepted).
- Service_name|DB2_Instance_Port indicates the service name or DB2 instance port used for the connection (typically port 50000 for DB2).
For example, catalog the server as node “db2node” with the IP address "2001:DB8:0:0:0:0:0:0" on port 50000 as follows:
db2 => catalog tcpip node db2node remote 2001:DB8:0:0:0:0:0:0 server 50000
DB20000I The CATALOG TCPIP NODE command completed successfully.
DB21056W Directory changes may not be effective until the directory cache is refreshed.
db2 => terminate
DB20000I The TERMINATE command completed successfully.
Now connect the client to the DB2 database itself by completing the following steps.
- Run the following commands in the DB2 command line processor:
db2 => catalog database Database_name as Database_alias at node Node_name
db2 => connect to Database_alias user Metrics_db_user
db2 => terminate
Where Node_name is the same local nickname that you used to connect to the DB2 server earlier in this step, and the database information matches the values specified in the cognos-setup.properties file that is used during installation:
- Database_name indicates the name of the Metrics database. The name should be METRICS when it is created by the Connections database wizard (the metrics.db.name setting; the default value is METRICS).
- Database_alias indicates the alias for the Metrics database (the metrics.db.local setting; the default value is METRICS).
- Metrics_db_user indicates the user name that will be used to access the database (the metrics.db.user setting; the default value is LCUSER).
For example, catalog the METRICS database as "METRICS" on the server "db2node" as the user "LCUSER" and then test the connection with the following commands:
db2 => catalog database METRICS as METRICS at node db2node
DB20000I The CATALOG DATABASE command completed successfully.
DB21056W Directory changes may not be effective until the directory cache is refreshed.
db2 => connect to METRICS user LCUSER
Database Connection Information
Database server = DB2 10.1.0
SQL authorization ID = LCUSER
Local database alias = METRICS
db2 => terminate
DB20000I The TERMINATE command completed successfully.
For more information on connecting the DB2 client to the DB2 server and database (including optional arguments), see Configure client-to-server connections using the command line processor in the DB2 information center.
Install the Oracle database client for Cognos Transformer
Install the Oracle database client on the computer where you will host IBM Cognos Business Intelligence.
Before you install the database client, the database server must be installed and running, and you must have created the Metrics database and the Cognos Content Store database.
For troubleshooting information on configuring the Oracle database client for use with Cognos, see the IBM technote, Resolving Oracle connection errors.
- Install the standard 32-bit database client on the server where you will deploy Cognos Business Intelligence; for information see Install Oracle Database Client.
Be sure to install the standard 32-bit client rather than the Instant Client, which is not supported by Cognos. If you installed the 64-bit client, you must uninstall it before installing the 32-bit client.
- Edit the Oracle_client_install_path/network/admin/tnsnames.ora file and add the following TNS settings into the file:
The TNS setting on the Oracle client should look like the example that follows:
Local_tns_name =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = Oracle_database_server_host_name)(PORT = Port))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = Database_service_name)
    )
  )
...where...
- Local_tns_name is the user-defined TNS alias for the remote Oracle database instance. It needs to match the value of the metrics.db.local.name setting in the cognos-setup.properties file that will be used during the Cognos Business Intelligence installation.
- Oracle_database_server_host_name is the host name of the server hosting the Oracle database server; for example: oradb.example.com.
- Port is the port on which the Oracle database server is listening; typically port 1521.
- Database_service_name is the database service name; for example orcl.
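For illustration only, a filled-in entry might look like the following; the alias, host name, and service name shown here are placeholder values that must match your own Oracle server and your metrics.db.local.name setting:
METRICS =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = oradb.example.com)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
    )
  )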
- Verify that the client can connect to the Metrics database. In the bin folder of ORACLE_HOME, run sqlplus [db_user]/[password]@[Local_tns_name] (see the sample session after this list), where:
- db_user is the Metrics database user.
- password is the password of db_user.
- Local_tns_name is the TNS alias that you just created.
If the settings are correct, you connect to the database successfully.
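For example, assuming the hypothetical TNS alias METRICS from the previous sample and a Metrics user named METRICSUSER, the check might look like this:
cd $ORACLE_HOME/bin
./sqlplus METRICSUSER/password@METRICS
If the settings are correct, SQL*Plus reports a successful connection to the Oracle database.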
Install the SQL Server database driver for Cognos Transformer
Install the Microsoft SQL Server database driver on the computer where you will host IBM Cognos Business Intelligence.
For Windows
On Microsoft Windows 2008, Cognos supports the OLE DB connection type for SQL Server databases. There is no need to install a database client for SQL Server, because Windows has the necessary drivers for the OLE DB connection built in. When using the default database instance for SQL Server, leave the metrics.db.local.name property blank in the cognos-setup.properties file, so it looks like the following example:
metrics.db.local.name=
For Linux and AIX
On Linux and AIX, Cognos supports the ODBC connection type for SQL Server databases. The only ODBC driver supported by Cognos is Progress DataDirect Connect for ODBC. The driver is not free; to avoid the cost of the licensed driver, consider deploying Cognos Transformer on Windows. Refer to this technote for detailed information.
- You must install the 32-bit DataDirect driver on the server where you will deploy Cognos Transformer. The detailed installation instructions are in Installation on Unix and Linux on the DataDirect Web site.
- Create a file named .odbc.ini, for example /root/.odbc.ini. You can copy a template .odbc.ini from the directory where the DataDirect ODBC driver is installed.
- Edit the .odbc.ini file to configure the SQL Server data source as follows:
- In the [ODBC Data Sources] section, define a data source name and driver for the Metrics database, such as metricsds.
- In the [ODBC] section, specify the ODBC root directory and whether Driver Manager tracing is enabled.
- Create a [metricsds] section like the following sample and define the connection details in it.
Here is an example of the .odbc.ini configuration with the metricsds data source:
[ODBC Data Sources]
metricsds=DataDirect 7.1 SQL Server Wire Protocol

[ODBC]
IANAAppCodePage=4
InstallDir=/opt/Progress/DataDirect/Connect_for_ODBC_71
Trace=0
TraceFile=odbctrace.out
TraceDll=/opt/Progress/DataDirect/Connect_for_ODBC_71/lib/ivtrc27.so

[metricsds]
Driver=/opt/Progress/DataDirect/Connect_for_ODBC_71/lib/ivsqls27.so
Description=DataDirect 7.1 SQL Server Wire Protocol
Database=metrics
HostName=lwptsthink68.cn.ibm.com
PortNumber=1511
Where:
- Database indicates the name of the Metrics database. The name is determined by the Connections database wizard (METRICS by default).
- HostName is the host name of the SQL Server machine.
- PortNumber is the port of the database instance where the Metrics database was created.
- To verify that DataDirect and the data source are configured correctly, follow these steps:
- Set the appropriate library path environment variable to specify the location of the ODBC libraries for your operating system:
- Linux: LD_LIBRARY_PATH
- AIX: LIBPATH
For example, on Linux:
export LD_LIBRARY_PATH=/opt/Progress/DataDirect/Connect_for_ODBC_71/lib/
- Set the ODBCINI environment variable as follows:
export ODBCINI=/root/.odbc.ini
- Go to the demo subdirectory in the DataDirect installation directory and run the demoodbc command, supplying the Metrics database password of your SQL Server as the -pwd value:
cd /opt/Progress/DataDirect/Connect_for_ODBC_71/samples/demo
./demoodbc -uid metricsuser -pwd **** metricsds
If you receive the following message, your ODBC driver works and can connect to your Metrics database (the "EMP" error does not matter):
./demoodbc DataDirect Technologies, Inc. ODBC Sample Application.
./demoodbc: will connect to data source 'metricsds' as user 'metricsuser/password1'.
......SQLExecute has Failed.  RC=-1
SQLSTATE = S0002
NATIVE ERROR = 208
MSG = [DataDirect][ODBC SQL Server Wire Protocol driver][Microsoft SQL Server]Invalid object name 'EMP'.
Install required patches on the Cognos BI Server system
Before installing IBM Cognos Business Intelligence and its related software, prepare the server environment by applying required patches to the operating system and other third-party components.
Any required patches must be installed before you attempt to deploy the Cognos server.
- Review IBM technote 7022463, Cognos BI 10.1.1 Software Environments - Required Patches.
Open Motif libraries are still required (as mentioned in the technote) for headless Linux systems.
- Install the patches specified for your server environment.
- Restart the server to make sure all patches take effect.
Install Cognos Business Intelligence components
Install IBM Cognos Business Intelligence on the computer where you previously installed IBM WAS Network Deployment and the database client. The Cognos product consists of two components (Cognos BI Server and Cognos Transformer); install both components as part of this deployment.
The following conditions must be satisfied to ensure that the Cognos Business Intelligence components install correctly:
- The Cognos Content Store database must have been created on the database server, and that database server must be running. Refer to Create databases.
- The current server must reside within the same domain as the Connections servers so you can enable SSO (Single Sign-On) as required for generating reports.
- The user running the Cognos installation script must have permissions to use the database client.
Installing Cognos Business Intelligence requires two Cognos packages (BI Server and Transformer) in addition to the scripts provided in the Connections kit.
The installation packages for Cognos Business Intelligence components are available in the Connections kit as separate downloads.
- IBM AIX: Cognos BI Server package: IBM Cognos Business Intelligence Server 64-bit 10.1.1 AIX Multilingual. Cognos Transformer package: IBM Cognos Business Intelligence Transformer 10.1.1 AIX Multilingual.
- Linux: Cognos BI Server package: IBM Cognos Business Intelligence Server 64-bit 10.1.1 Linux x86 Multilingual. Cognos Transformer package: IBM Cognos Business Intelligence Transformer 10.1.1 Linux x86 Multilingual.
- Windows: Cognos BI Server package: IBM Cognos Business Intelligence Server 64-bit 10.1.1 Windows Multilingual. Cognos Transformer package: IBM Cognos Business Intelligence Transformer 10.1.1 Windows Multilingual.
- zLinux (System z): Cognos BI Server package: IBM Cognos Business Intelligence Server 64-bit 10.1.1 Linux on System z Multilingual (CI5W5ML). Cognos Transformer package: IBM Cognos Business Intelligence Transformer 10.1.1 Linux on System z Multilingual (CI2QHML).
- Create a shared network folder where Cognos Transformer can publish metrics data (in the form of PowerCubes) for reports to access.
The shared network folder is used when you cluster multiple Cognos servers; you can use any name you want for the folder. Record this location so you can assign it to the cognos.cube.path property when you set up the cognos-setup.properties file in step 6.
The folder is delegated to Cognos only for storing PowerCubes, so it must be either a new folder or an existing empty folder. A hypothetical example of creating and sharing such a folder follows this step.
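As a minimal sketch only (the directory path, export options, and host name below are assumptions, not values from this guide), creating and sharing an empty PowerCube folder on a Linux Cognos server over NFS might look like this:
mkdir -p /opt/cognos_cubes
# ensure the user that runs Cognos Transformer can write to this folder
echo "/opt/cognos_cubes cognos-node2.example.com(rw,sync)" >> /etc/exports
exportfs -ra
On Windows, you would create an equivalent network share instead.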
- Prepare the Cognos BI Server package:
- Download the Cognos BI Server package to a temporary location on the server.
- Create a directory to contain the expanded package; for example:
- AIX or Linux: /opt/biserver_10.1.1
- Windows: C:\biserver_10.1.1
- Expand the package into the new directory.
- Prepare the Cognos Transformer package:
- Download the Cognos Transformer package to a temporary location on the server.
- Create a directory to contain the expanded package; for example:
- AIX or Linux: /opt/transformer_10.1.1
- Windows: C:\transformer_10.1.1
Due to potential file name conflicts, the Transformer package cannot share a directory with the BI Server package. Extracting the packages to the same location will overwrite libraries and result in a non-functioning server.
- Expand the package into the new directory.
- Prepare the Cognos server setup package:
CognosConfig.zip or CognosConfig.tar can be found in the /Cognos folder within the Connections product media.
- Download the CognosConfig package to a temporary location on the server.
- Create a directory to contain the expanded package; for example:
- AIX or Linux: /opt/CognosSetup
- Windows: C:\CognosSetup
- Expand the package into the new directory.
- Set up the JDBC driver:
- Locate the type 4 JDBC driver provided by your database server product.
- Copy the JDBC driver to the following location (an example copy command follows this step): /CognosSetup/BI-Customization/JDBC
To let Cognos BI connect to SQL Server, you need to install the Microsoft JDBC Driver 4.0 for SQL Server. That driver package provides two versions of the JDBC JAR: sqljdbc.jar and sqljdbc4.jar. When both are present in Lotus_Connections_Install/Cognos/BI-Customization/JDBC, the connection to SQL Server fails because the setup appears to default to sqljdbc.jar while sqljdbc4.jar is required. Delete sqljdbc.jar so that cognos-setup works.
If you are working with DB2 V10, make sure to use JDBC driver V10.1 FP1 or later. You can find the driver at the DB2 JDBC Driver Versions technote.
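For example, with a DB2 client installed in a typical Linux location (the source path here is an assumption; adjust it to your actual client directory), copying the driver might look like this:
cp /opt/ibm/db2/V10.1/java/db2jcc4.jar /opt/CognosSetup/BI-Customization/JDBC/
cp /opt/ibm/db2/V10.1/java/db2jcc_license_cu.jar /opt/CognosSetup/BI-Customization/JDBC/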
- Prepare the cognos-setup.properties file by filling in a value for each property.
The cognos-setup.properties file is located in the CognosSetup directory where you expanded the Cognos server setup package in step 4. This file provides values that are used during installation; each property setting in the file is accompanied by a description.
Some database-related properties in the file are used to build connections to the Metrics and Cognos databases. The same property names are shared by all supported database types (DB2, Oracle, and SQL Server), so a property can have a different meaning depending on the database type. Refer to the table for your database type and fill in these properties carefully.
Table 30. DB2-related properties in the cognos-setup.properties file. Property descriptions for DB2, also showing names and values.
- cognos.db.type: db2. Indicates the DB2 database type.
- cognos.db.host: [DB2_server_hostname]:[Port]. Host name of the DB2 server and the port of the database instance that contains the Cognos database.
- cognos.db.name: COGNOS. Created by the Connections database installer.
- cognos.db.user: LCUSER. The dedicated DB2 database user required by the Connections database installer.
- cognos.db.password: [password]. The password of LCUSER.
- metrics.db.type: db2. Indicates the DB2 database type.
- metrics.db.host: [DB2_server_hostname]:[Port]. Host name of the DB2 server and the port of the database instance that contains the Metrics database.
- metrics.db.name: METRICS. Created by the Connections database installer.
- metrics.db.local.name: [Database_alias]. The Metrics database alias created earlier in Install the DB2 database client for Cognos Transformer.
- metrics.db.user: LCUSER. The dedicated DB2 database user required by the Connections database installer.
- metrics.db.password: [password]. The password of LCUSER.
Oracle-related properties in the cognos-setup.properties file. Property descriptions for Oracle, also showing names and values.
- cognos.db.type: oracle. Indicates the Oracle database type.
- cognos.db.host: [Oracle_server_hostname]:[Port]. Host name and port of the Oracle database server. Port is the port on which the Oracle database server is listening, typically 1521.
- cognos.db.name: [Database_service_name]. The service name of the database that contains the Cognos database. Specify the service name even if the service name and SID differ on this database; otherwise the installation script fails during database connection validation. Run the show parameter service_names command on the Oracle database to detect the service name of the database.
Note: Using the service name allows the installation to succeed, but Cognos actually uses the SID to connect to the content store database. Therefore, if the service name and SID are not the same, manually change the service name to the SID in the Cognos configuration tools after installation.
- cognos.db.user: COGNOS. Created by the Connections database installer.
- cognos.db.password: [password]. The password of the database user COGNOS.
- metrics.db.type: oracle. Indicates the Oracle database type.
- metrics.db.host: [Oracle_server_hostname]:[Port]. Host name and port of the Oracle database server. Port is the port on which the Oracle database server is listening, typically 1521.
- metrics.db.name: [Database_service_name]. The service name of the database that contains the Metrics database. Specify the service name even if the service name and SID differ on this database; otherwise the installation script fails during database connection validation.
- metrics.db.local.name: [Local_tns_name]. The TNS alias created earlier in Install the Oracle database client for Cognos Transformer.
- metrics.db.user: METRICSUSER. Created by the Connections database installer.
- metrics.db.password: [password]. The password of the database user METRICSUSER.
SQL Server-related properties in the cognos-setup.properties file. Property descriptions for SQL Server, also showing names and values.
- cognos.db.type: sqlserver. Indicates the SQL Server database type.
- cognos.db.host: [SQLSERVER_hostname]:[Port]. Host name of the SQL Server machine and the port of the database instance that contains the Cognos database.
- cognos.db.name: COGNOS. Created by the Connections database installer.
- cognos.db.user: COGNOSUSER. Created by the Connections database installer.
- cognos.db.password: [password]. The password of the database user COGNOSUSER.
- metrics.db.type: sqlserver. Indicates the SQL Server database type.
- metrics.db.host: [SQLSERVER_hostname]:[Port]. Host name of the SQL Server machine and the port of the database instance that contains the Metrics database.
- metrics.db.name: METRICS. Created by the Connections database installer.
- metrics.db.local.name: [Database_instance_name]. The name of the database instance that contains the Metrics database. If the Metrics database was created in the SQL Server default instance, leave metrics.db.local.name empty. If the Metrics database was created in a named instance (for example, DBSERVER1), set metrics.db.local.name to DBSERVER1.
- metrics.db.user: METRICSUSER. Created by the Connections database installer.
- metrics.db.password: [password]. The password of the database user METRICSUSER.
Note the following additional settings in this file (a sample DB2-based fragment follows these notes):
- The shared network folder is represented by the cognos.cube.path property; set this property to the name you used for the shared network folder that you created in step 1.
- The Cognos administrator account specified for the cognos.admin.username setting must represent a valid LDAP user – this is not the WebSphere administrator. You should have already set up a user in the LDAP for this purpose; for information see Create the Cognos administrator.
The Cognos administrator's username cannot contain a space.
- Any passwords stored in this file will be removed after the script finishes running; if you need to run the script again you will need to insert the passwords before the next run. If you prefer, you can omit the passwords from the properties file and supply them on the command line when you run the script, as shown in step 9.
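As an illustrative sketch only, a DB2-based fragment of cognos-setup.properties might look like the following; every value shown (host names, port, paths, IDs, passwords) is a placeholder, and the real file contains additional properties that are documented inline:
cognos.db.type=db2
cognos.db.host=db2host.example.com:50000
cognos.db.name=COGNOS
cognos.db.user=LCUSER
cognos.db.password=cognos_db_password
metrics.db.type=db2
metrics.db.host=db2host.example.com:50000
metrics.db.name=METRICS
metrics.db.local.name=METRICS
metrics.db.user=LCUSER
metrics.db.password=metrics_db_password
cognos.cube.path=/opt/cognos_cubes
cognos.admin.username=cognosadmin
cognos.admin.password=cognos_admin_password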
- (RedHat Linux 6 64-bit systems) Run the following command to preload libraries needed for the setup scripts used in the next step:
export LD_PRELOAD=/usr/lib64/libfreebl3.so
- (SuSE 11 zLinux systems) Create a symbolic link to the libXm.so.3 library:
The openmotif22-libs-32bit-2.2.4-138.18.1 package includes libXm.so.4, while the Cognos installer program (issetup) links with libXm.so.3 and encounters an error if it is not available. Prevent this error by creating a symbolic link to libXm.so.3 with the following command:
ln -s /usr/lib/libXm.so.4 /usr/lib/libXm.so.3
- (Microsoft SQL Server) If you are using Microsoft SQL Server as your database management system, configure it to support remote connections before running the cognos-setup.bat|sh script, or the database validation will fail.
For information on enabling remote connections in SQL Server, see the Microsoft site.
- Run the cognos-setup.sh script to install the Cognos BI Server and Transformer components.
Any of the properties specified in the cognos-setup.properties file can be passed in as parameters when you run this script. In particular, you might want to supply passwords using this method rather than adding them into the properties file because they will be deleted from the file after it runs. For any properties you supply at run time, use the following syntax:
cognos-setup.sh -property_name1=property_value1 -property_name2=property_value2
For example, the properties file contains settings for four passwords. You can pass all of them in at run time by including them on the command line as shown:
cognos-setup.sh -was.local.admin.password=WASadmin_pwd -cognos.admin.password=CognosAdmin_pwd -cognos.db.password=CognosContentStore_pwd -metrics.db.password=Metrics_DB_pwd
Output from this operation is stored in the /CognosSetup/cognos-setup.log.
If you encounter an error when running the cognos-setup.bat|sh script, correct the error and run the script again before proceeding to the next step. For example, if you provided an incorrect password in the cognos-setup.properties file, modify the password and then run the cognos-setup.sh script again. Remember that if you added passwords to the properties file, they were deleted after the previous run, and must be provided again. If you encounter other problems, see Troubleshooting Cognos Business Intelligence components for possible solutions.
- Now run the cognos-configure.sh script to configure the Cognos server and set up database connections.
Any of the properties specified in this script can be passed in as parameters when you run this script, using the syntax shown in the previous step.
Output from this operation is stored in the /CognosSetup/cognos-configure.log.
If you encounter an error when running the cognos-configure.sh script, correct the error and run the script again before proceeding to the next step. For example, if you provided an incorrect password in the cognos-setup.properties file, modify the password and then run the cognos-configure.sh script again. Remember that if you added passwords to the properties file, they were deleted after the previous run, and must be provided again. If you encounter other problems, see Troubleshooting Cognos Business Intelligence components for possible solutions.
- (RedHat Linux 6 64-bit systems) Add the LD_PRELOAD variable to the JVM environment variable list of the Cognos server. The LD_PRELOAD environment variable needs to be set again after every Linux system restart; to handle this automatically, add the variable to the JVM environment variable list as follows:
- Start server1 of the WAS where you deployed the Cognos BI.
- Log into the administrative console of the WAS.
- Navigate to Servers > Server Types > WebSphere application servers.
- Click the cognos_server link.
- Click Java and Process Management > Process definition > Environment Entries.
- Click New to add the following entry:
LD_PRELOAD = /usr/lib64/libfreebl3.so
- Stop the Cognos server and server1 if they are running. You will start the Cognos server after federating it to the Deployment Manager in the next task.
In the future, whenever you want to stop the Cognos server, stop the WAS hosting it; wait at least one full minute to ensure that all of the Cognos processes (AIX or Linux: cgsServer.sh and CAM_LPSvr processes; Windows: cgsLauncher.exe and CAM_LPSvr processes) have completely stopped before you attempt to start the server again.
Results
The Cognos BI Server and Transformer components are installed into the directories specified in the cognos-setup.properties file; for example:
- Cognos BI Server: cognos.biserver.install.path
- AIX or Linux: /opt/IBM/CognosBI
- Windows: C:\IBM\CognosBI
- Cognos Transformer: cognos.transformer.install.path
- AIX or Linux: /opt/IBM/CognosTF
- Windows: C:\IBM\CognosTF
During the installation, you might see the following message: ERROR: The system cannot find the file specified. You can ignore this message because it will not block installation or cause any issue.
During the installation, you might see the following message in the log file: ERROR: ld.so: object '/usr/lib64/libfreebl3.so' from LD_PRELOAD cannot be preloaded: ignored. You can ignore this message because it will not block installation or cause any issue.
You can verify that the components were installed successfully by reviewing the installation logs at the following locations:
- Cognos BI Server installation log: Cognos_BI_install_path/logs/cogserver.log
- AIX or Linux: /opt/IBM/CognosBI/logs/cogserver.log
- Windows: C:\IBM\CognosBI\logs\cogserver.log
- Cognos Transformer installation log: Cognos_Transformer_install_path/logs/cogserver.log:
- AIX or Linux: /opt/IBM/CognosTF/logs/cogserver.log
- Windows: C:\IBM\CognosTF\logs\cogserver.log
The cognos-setup.properties file contains two sign-ons: the Cognos administrator sign-on and the Metrics database sign-on. During installation these sign-ons are written into the PowerCube model files on the Cognos Transformer server, where they are used for PowerCube generation. If you need to change either sign-on after installation, perform the following steps:
- Update the cognos-setup.properties file with the new signon information.
- Run transformer-logon-set.bat|sh, which is located in the CognosSetup directory. The script needs the Cognos BI server URL as a parameter; pass it in at run time by including it on the command line as shown (an example invocation follows these steps):
transformer-logon-set.bat|sh -cognos.server.url=http(s)://<cognos_bi_server_domain>:<cognos_bi_server_port>/<cognos_bi_server_contextroot>
Output from this operation is stored in the /CognosSetup/transformer-logon-set.log file. If you encounter an error when running the script, correct the error and run the script again.
- If you set the removePassword flag to false instead of accepting the default value of true, passwords are not removed from the properties file and you can simply continue to run the next command.
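For example, assuming a hypothetical Cognos BI host name and port and the default context root of cognos, the invocation on Linux might look like this:
./transformer-logon-set.sh -cognos.server.url=http://cognos.example.com:9080/cognos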
Configure Cognos Business Intelligence after installation
After installation, you need to perform further configuration tasks to ensure that IBM Cognos Business Intelligence works.
Before performing these configuration steps, the following components must have been installed:
- Database client for Cognos Transformer. Refer to Install the database client for Cognos Transformer.
- Cognos Business Intelligence components.
Cognos Business Intelligence Server and Cognos Transformer need to communicate with a database client to set up the Cognos content store and to build the Cognos PowerCube. A set of specific settings (appropriate to each operating system and database type) must be specified before the Cognos components can work properly. The following topics detail the configuration steps according to the supported database types.
DB2 instructions
- (AIX only) Make sure the locale on AIX is set to en_US.ISO8859-1 or en_US.ISO8859-15. The locale setting is required by Cognos Transformer for building PowerCubes.
- Run the following command first before manually running the script to build the cube:
export LC_ALL=en_US.ISO8859-1
- Use the following command to edit the cron jobs to set the locale for scheduled PowerCube generation tasks:
crontab -e
For example, change both the daily-refresh.sh and weekly-rebuild.sh cron jobs from:
05 00 * * 1-6 /opt/IBM/Cognos/metricsmodel/daily-refresh.sh
05 00 * * 0 /opt/IBM/Cognos/metricsmodel/weekly-rebuild.sh
to:
05 00 * * 1-6 export LC_ALL=en_US.ISO8859-1; /opt/IBM/Cognos/metricsmodel/daily-refresh.sh
05 00 * * 0 export LC_ALL=en_US.ISO8859-1; /opt/IBM/Cognos/metricsmodel/weekly-rebuild.sh
- (AIX only) Copy libdb2.a from the lib32 folder under <db2client_home_directory> to the <transformer_installation_directory>/bin folder.
Cognos Transformer is a 32-bit application, but the DB2 client is 64-bit. This incompatibility can cause issues when Transformer accesses the database. Associated errors that might occur include UDA-SQL-0569 Unable to load the driver manager library (libdb2.a(shr.o)) and cannot load module shr.o. To resolve this problem, copy the 32-bit compatible library provided by the DB2 client to the Transformer location.
- (RedHat Linux only) Make sure LD_PRELOAD is set to /usr/lib/libfreebl3.so before building a cube. Run the following command first before manually running the script to build the cube:
export LD_PRELOAD=/usr/lib/libfreebl3.so
- Use the following command to edit the cron jobs to set the LD_PRELOAD for scheduled cube generation tasks:
crontab -e
For example, change both the daily-refresh.sh and weekly-rebuild.sh cron jobs from:
05 00 * * 1-6 /opt/IBM/Cognos/metricsmodel/daily-refresh.sh
05 00 * * 0 /opt/IBM/Cognos/metricsmodel/weekly-rebuild.sh
to:
05 00 * * 1-6 export LD_PRELOAD=/usr/lib/libfreebl3.so; /opt/IBM/Cognos/metricsmodel/daily-refresh.sh
05 00 * * 0 export LD_PRELOAD=/usr/lib/libfreebl3.so; /opt/IBM/Cognos/metricsmodel/weekly-rebuild.sh
Be careful about the path of the libfreebl3.so file. In the earlier step, the 64-bit libfreebl3.so in the /usr/lib64/ folder was used when installing the Cognos components. In this step, use the 32-bit version in the /usr/lib/ folder.
- (Windows) No special configuration is needed, except to configure the job scheduler for Cognos Transformer on Windows.
Oracle instructions
- (AIX only) Make sure the locale on AIX is set to en_US.ISO8859-1 or en_US.ISO8859-15. The locale setting is required by Cognos Transformer for building a PowerCube.
- Execute the following command first before manually running the script to build a cube.
export LC_ALL=en_US.ISO8859-1
- Use the following command to edit the cron jobs to set the locale for scheduled cube generation tasks:
crontab -e
For example, change both the daily-refresh.sh and weekly-rebuild.sh cron jobs from:
05 00 * * 1-6 /opt/IBM/Cognos/metricsmodel/daily-refresh.sh
05 00 * * 0 /opt/IBM/Cognos/metricsmodel/weekly-rebuild.sh
to:
05 00 * * 1-6 export LC_ALL=en_US.ISO8859-1; /opt/IBM/Cognos/metricsmodel/daily-refresh.sh
05 00 * * 0 export LC_ALL=en_US.ISO8859-1; /opt/IBM/Cognos/metricsmodel/weekly-rebuild.sh
- (AIX or Linux) Define the ORACLE_HOME, TNS_ADMIN, and LIBPATH (on AIX) or LD_LIBRARY_PATH (on Linux) JVM variables for the Cognos server. Add these variables to the JVM environment variable list:
- Login to the WAS admin console of the Cognos BI Server.
- Click Servers > Server Types > WebSphere application servers.
- Click the link of the Cognos server.
- Click Java and Process Management > Process definition > Environment Entries.
- Add or edit entries as needed, such as:
ORACLE_HOME=/u01/app/oracle/product/11.2.0/client_1
TNS_ADMIN=ORACLE_HOME/network/admin
AIX: LIBPATH=Cognos_BI_install_path/bin64:/u01/app/oracle/product/11.2.0/client_1/lib
Linux: LD_LIBRARY_PATH=Cognos_BI_install_path/bin64:/u01/app/oracle/product/11.2.0/client_1/lib
- (RedHat Linux only) Follow the same steps as previously described to add an additional entry:
LD_PRELOAD = /usr/lib64/libfreebl3.so
- (AIX or Linux) Copy the following files to the Cognos_Transformer_install_path/bin directory (example copy commands follow this list). AIX:
- Oracle_client_install_path/lib/libclntsh.so
- Oracle_client_install_path/lib/libnnz11.so
Linux:
If you are not logged in as root, change the permissions on the two files using chmod 755.
- Oracle_client_install_path/lib/libclntsh.so.11.1
- Oracle_client_install_path/lib/libnnz11.so
If you are using a different version of the Oracle client, you should find similar files (named for the version) in the same location and can use those files instead.
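For example, on Linux with the Oracle client installed under the path used elsewhere in this guide and Cognos Transformer in its default location (both paths are assumptions; adjust them for your environment), the copy might look like this:
cp /u01/app/oracle/product/11.2.0/client_1/lib/libclntsh.so.11.1 /opt/IBM/CognosTF/bin/
cp /u01/app/oracle/product/11.2.0/client_1/lib/libnnz11.so /opt/IBM/CognosTF/bin/
chmod 755 /opt/IBM/CognosTF/bin/libclntsh.so.11.1 /opt/IBM/CognosTF/bin/libnnz11.so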
- (AIX or Linux) Make sure ORACLE_HOME is set before building a cube.
- Run the following command first before manually running the script to build cube. For example:
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/client_1
- Use the following command to edit the cron jobs to set the ORACLE_HOME for scheduled cube generation tasks:
crontab -e
For example, change both the daily-refresh.sh and weekly-rebuild.sh cron jobs from:
05 00 * * 1-6 /opt/IBM/Cognos/metricsmodel/daily-refresh.sh
05 00 * * 0 /opt/IBM/Cognos/metricsmodel/weekly-rebuild.sh
to the following for AIX:
05 00 * * 1-6 export ORACLE_HOME=/u01/app/oracle/product/11.2.0/client_1; export LC_ALL=en_US.ISO8859-1; /opt/IBM/Cognos/metricsmodel/daily-refresh.sh
05 00 * * 0 export ORACLE_HOME=/u01/app/oracle/product/11.2.0/client_1; export LC_ALL=en_US.ISO8859-1; /opt/IBM/Cognos/metricsmodel/weekly-rebuild.sh
and to the following for Linux:
05 00 * * 1-6 export ORACLE_HOME=/u01/app/oracle/product/11.2.0/client_1; /opt/IBM/Cognos/metricsmodel/daily-refresh.sh
05 00 * * 0 export ORACLE_HOME=/u01/app/oracle/product/11.2.0/client_1; /opt/IBM/Cognos/metricsmodel/weekly-rebuild.sh
- (Windows) No special configuration is needed, except to configure the job scheduler for Cognos Transformer on Windows.
SQL Server instructions
- (AIX only) Make sure the locale on AIX is set to en_US.ISO8859-1 or en_US.ISO8859-15. The locale setting is required by Cognos Transformer for building PowerCubes. Run the following command first before manually running the script to build a cube:
export LC_ALL=en_US.ISO8859-1
- (AIX or Linux) Define the ODBCINI and LD_LIBRARY_PATH (on Linux) or LIBPATH (on AIX) JVM variables for the Cognos server. Add these variables to the JVM environment variable list as follows:
- Log into the WAS admin console of the Cognos BI Server.
- Click Servers > Server Types > WebSphere application servers.
- Click the link of the Cognos server.
- Click Java and Process Management > Process definition > Environment Entries.
- Add or edit entries as needed, such as:
ODBCINI=/root/.odbc.ini
AIX: LIBPATH=Cognos_BI_install_path/bin64:/opt/Progress/DataDirect/Connect_for_ODBC_71/lib/
Linux: LD_LIBRARY_PATH=Cognos_BI_install_path/bin64:/opt/Progress/DataDirect/Connect_for_ODBC_71/lib/
- Restart the Cognos BI Server.
- (RedHat Linux only) Follow the same steps as previously described to add an additional entry:
LD_PRELOAD = /usr/lib64/libfreebl3.so
- (AIX or Linux) Make sure the ODBCINI and LD_LIBRARY_PATH (on Linux) or LIBPATH (on AIX) environment variables are set before building a cube.
- Run the following commands first before manually running the script to build a cube. For example:
export ODBCINI=/root/.odbc.ini
AIX: export LIBPATH=/opt/Progress/DataDirect/Connect_for_ODBC_71/lib/
Linux: export LD_LIBRARY_PATH=/opt/Progress/DataDirect/Connect_for_ODBC_71/lib/
- Use the following command to edit the cron jobs to set ODBCINI and LD_LIBRARY_PATH (on Linux) or LIBPATH (on AIX) for scheduled cube generation tasks:
crontab -e
For example, change both the daily-refresh.sh and weekly-rebuild.sh cron jobs from:
05 00 * * 1-6 /opt/IBM/Cognos/metricsmodel/daily-refresh.sh
05 00 * * 0 /opt/IBM/Cognos/metricsmodel/weekly-rebuild.sh
to the following for AIX:
05 00 * * 1-6 export LC_ALL=en_US.ISO8859-1; export ODBCINI=/root/.odbc.ini; export LIBPATH=/opt/Progress/DataDirect/Connect_for_ODBC_71/lib/; /opt/IBM/Cognos/metricsmodel/daily-refresh.sh
05 00 * * 0 export LC_ALL=en_US.ISO8859-1; export ODBCINI=/root/.odbc.ini; export LIBPATH=/opt/Progress/DataDirect/Connect_for_ODBC_71/lib/; /opt/IBM/Cognos/metricsmodel/weekly-rebuild.sh
and to the following for Linux:
05 00 * * 1-6 export ODBCINI=/root/.odbc.ini; export LD_LIBRARY_PATH=/opt/Progress/DataDirect/Connect_for_ODBC_71/lib/; /opt/IBM/Cognos/metricsmodel/daily-refresh.sh
05 00 * * 0 export ODBCINI=/root/.odbc.ini; export LD_LIBRARY_PATH=/opt/Progress/DataDirect/Connect_for_ODBC_71/lib/; /opt/IBM/Cognos/metricsmodel/weekly-rebuild.sh
- (Windows) No special configuration is needed, except to configure the job scheduler for Cognos Transformer on Windows.
Federating the Cognos server to the Deployment Manager
The computer hosting the IBM Cognos Business Intelligence server must be federated to the same dedicated Deployment Manager used by Connections.
Before attempting to federate the Cognos node to the Deployment Manager, make sure that:
- The Deployment Manager is running.
- The Cognos server is stopped (if you started it after installation, stop it now by stopping the IBM WAS hosting it).
- The system clock on the Cognos server is set to within 1 minute of the time (and time zone) of the system clock on the Deployment Manager.
- The Deployment Manager and the Cognos server are either both registered in the DNS or are referenced in each other’s etc/hosts file.
- For 64-bit Red Hat only: server1 of the Cognos profile is stopped.
This task involves working on the newly installed Cognos Business Intelligence server to add the node to the Deployment Manager, and then working on the Deployment Manager to create virtual ports for the new node.
- On the Deployment Manager, disable Java 2 security:
- Log into the Integrated Solutions Console as the WebSphere administrator.
- Click Security > Global Security
- Look under Java 2 security and clear the selection for Use Java 2 security to restrict application access to local resources.
- Click Apply.
- Click OK.
- Use the computer hosting the Cognos components to federate the new node to the Deployment Manager:
- On the computer hosting Cognos, open a command window and navigate to the /bin directory in the WAS installation root; for example:
- IBM AIX, Linux: /opt/IBM/WebSphere/AppServer/bin
- Windows: C:\IBM\WebSphere\AppServer\bin
- Run the addNode command to federate the node to the Deployment Manager as in the following example:
addNode.bat|sh DmgrHost_name dmgr_port -profileName Cognos_profile_name -includeapps -username dmgr_admin_user_name -password dmgr_admin_password
where:
- DmgrHost_name is the fully qualified host name of the Deployment Manager.
- dmgr_port is the SOAP_CONNECTOR port that the Deployment Manager is listening on; the default port is 8879.
You can determine the port by checking the SOAP_CONNECTOR_ADDRESS value in the following file on the computer hosting the Deployment Manager:
WAS_install_root/profiles/dmgr_Name/properties/portdef.props; for example on Windows:
C:\IBM\WebSphere\AppServer\profiles\Dmgr01\properties\portdef.props
- Cognos_profile_name is the profile name where Cognos Business Intelligence is installed (in case multiple profiles are installed on the same server).
If you only installed one WebSphere profile on the server, you can omit this parameter.
If you are not sure of the Cognos profile name, you can determine it by looking at the directory name where it was installed; for example on Linux:
/opt/IBM/WebSphere/AppServer/profiles/Cognos_profile_name
- dmgr_admin_user_name is the user name of the WebSphere administrator account on the Deployment Manager, and dmgr_admin_password is the associated password.
Make sure you specify the -includeapps parameter as shown in the example. For example:
addNode.bat lc40.example.com 8879 -profileName AppSrv01 -includeapps -username wasadmin -password my_WASadmin_pwd
- If Cognos is installed on the same system where Connections will be installed, you must specify the Cognos profile with the -profileName argument.
- Synchronize the new node to the Deployment Manager:
- On the navigation tree, click System Administration > Nodes.
- In the nodes table, click the checkbox that precedes the new node (the Cognos server).
- Click the Synchronize button in the table and wait for the operation to finish before proceeding to the next step.
Synchronization might take several minutes to complete; be sure to allow sufficient time before restarting the Cognos server in the next task.
- Check the virtual host settings on the Deployment Manager and verify that the Cognos server port is included.
For more information and a workaround, see the technote: SRVE0255E: A WebGroup/Virtual Host to handle /p2pd/servlet/dispatch has not been defined.
- Start the Cognos server.
Validating the Cognos server installation
Verify that the IBM Cognos BI server and Cognos Transformer server are correctly installed.
Make sure the Cognos server is running.
For configuration tips and information on troubleshooting the Cognos Business Intelligence installation, see Troubleshooting Cognos Business Intelligence.
- Locate the cognos-installation-verify.sh script in the directory where you expanded the CognosConfig.zip or CognosConfig.tar when you installed Cognos Business Intelligence components as part of the pre-installation task.
- Edit the cognos-setup.properties file and verify that it contains the appropriate values for each property. If all passwords were removed from this file the last time it was used, you must either add the passwords again, or pass them in from the command line when you run the cognos-installation-verify.sh script (see the example after these steps).
- In Configure Cognos Business Intelligence after installation, find the commands that must run before manually running the cube-build script, and execute them.
- Run the cognos-installation-verify.sh script.
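For example, if the passwords were removed from the properties file, you can supply whichever ones the script needs on the command line, using the same syntax as the setup scripts (the values shown are placeholders):
./cognos-installation-verify.sh -cognos.admin.password=CognosAdmin_pwd -cognos.db.password=CognosContentStore_pwd -metrics.db.password=Metrics_DB_pwd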
Results
Output from this operation is stored in the /CognosSetup/cognos-verify.log file; the status of the Cognos BI server and Transformer server is displayed at the end of this log. If you encounter a parameter validation error when running the cognos-installation-verify.sh script, correct the error and run the script again.
If you deployed Cognos Business Intelligence as a prerequisite step to installing Connections, continue to the next section, Install Connections. Complete the Cognos configuration tasks after installing Connections.
If you originally installed Connections without first deploying Cognos Business Intelligence and are now deploying the Cognos server, skip the Connections installation topics (which you have already completed) and instead go directly to Configure Cognos Business Intelligence.
Before installing
Check the Release notes for late-breaking issues.
Install all the required fixes for WAS that are listed in the Connections Software Requirements web page.
If you previously installed Installation Manager, update it to V. 1.5.3 or higher.
For more information, go to the Installation Manager updates web page.
Use the same user account to install Installation Manager and Connections.
Connections installation presents three options for the type of deployment that you can install.
The Connections installation process supports the creation of new server instances and clusters. Do not use existing clusters to deploy Connections.
You can install Connections with either root or non-root accounts on AIX/Linux, or administrator or non-administrator accounts on Microsoft Windows.
Complete the Pre-installation tasks.
If you are migrating from Connections 4.0, you need to complete only the following tasks:
- Prepare to configure the LDAP directory
- Install IBM WAS if you are installing on the same host as 4.0.
- Set up federated repositories
- Do not complete Pre-installation tasks for creating databases or populating the Profiles database. The migration process handles those tasks separately.
- Connections 4.5 is the first release for IBM i, so no migration is needed.
Install IBM WAS Network Deployment (Application Server option) on each node. Connections is installed on the system where the WAS Deployment Manager is installed. Back up the profile_root/Dmgr01 directory.
Configure WAS to communicate with the LDAP directory. For more information, see the Set up federated repositories topic.
Prepare directories to use as content stores. You need to provide shared content stores on network share devices and local content stores on each node. Both shared and local content stores must be accessible using the same path from all nodes and from the Deployment Manager.
Set the system clocks on the Deployment Manager and the nodes to within 1 minute of each other. If these system clocks are further than 1 minute apart, you might experience synchronization errors.
Copy the JDBC files for your database type to the dmgr and then from the dmgr to each node. Place the copied files in the same location on each node as their location on the dmgr; an example copy command follows the table below. If, for example, you copied the db2jcc4.jar file from the C:\IBM\SQLLIB directory on the dmgr, place the copy in the C:\IBM\SQLLIB directory on each node. Make sure these JDBC drivers are in the same location on every node.
See the following table to determine which files to copy:
- DB2: db2jcc4.jar and db2jcc_license_cu.jar
- DB2 on IBM i:
All applications except Metrics:
- jt400.jar
Metrics:
- db2jcc4.jar
- db2jcc_license_cu.jar
- Oracle: ojdbc6.jar. Ensure that you are using the latest version of the ojdbc6.jar file.
- SQL Server: sqljdbc4.jar
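As a sketch only (the host name and directory below are placeholders; keep the driver path identical on the dmgr and every node), copying the DB2 driver files from a Linux dmgr to a node might look like this:
scp /opt/IBM/SQLLIB/java/db2jcc4.jar node1.example.com:/opt/IBM/SQLLIB/java/
scp /opt/IBM/SQLLIB/java/db2jcc_license_cu.jar node1.example.com:/opt/IBM/SQLLIB/java/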
If you are going to use a trusted SSL certificate, ensure that it is available before you begin the installation.
If you do not plan to deploy IBM Cognos Business Intelligence now to support metrics, you can still install the Metrics application along with the other Connections applications. This enables Connections to begin collecting event data immediately and store it in the Metrics database for use when Cognos is available to provide reports.
(Microsoft Windows) You must use an administrator account to install Connections on Windows. If you are installing on Windows Server 2008, you must use a local administrator account. If you use a domain administrator account, the installation might fail.
(Linux only) If you receive an error message after attempting to start Installation Manager, you might need to install additional 32-bit libraries. For more information about Installation Manager errors, go to the Unable to install Installation Manager on RHEL 6.0/6.1 (64-bit) webpage.
Ensure that the directory paths that you enter contain no spaces.
Ensure that the Open File Descriptor limit is 8192. For information about setting the file limit, go to the Installation error messages topic and search for error code CLFRP0042E.
(AIX only) Installation Manager requires additional libraries for the AIX operating system. For more information, go to the Required filesets on AIX for Installation Manager webpage.
(AIX only) If Installation Manager hangs while being installed on your system, you might need to update your version of the software. For more information, read the Installation Manager hangs on 64-bit AIX systems technote.
(AIX only) If you are downloading Installation Manager, the TAR program available by default with AIX does not handle path lengths longer than 100 characters. To overcome this restriction, use the GNU file archiving program instead. This program is an open source package that IBM distributes through the AIX Toolbox for Linux Applications at the IBM AIX Toolbox website. Download and install the GNU-compatible TAR package. You do not need to install the RPM Package Manager because it is provided with AIX. After installing the GNU-compatible compression program, change to the directory where you downloaded the Connections tar file. Enter the following command to extract the files from the archive:
gtar -xvf Lotus_Connections_wizard_aix.tar
This command creates a directory named after Installation Manager.
Establish naming conventions for nodes, servers, clusters, and web servers. Use a worksheet to record the user IDs, passwords, server names, and other information that you need during and after installation.
Linux libraries
The complete list of Linux libraries required for deploying Connections 4.5.
Linux
Ensure that you have installed the following Linux packages and libraries.
Notes: Ensure that the GTK library is available on your system. Even when you are installing on a 64-bit system, you still need the 32-bit version of the GTK library.
If you are using silent mode or console mode to install Connections, you do not need these libraries.
- compat-libstdc++-33.x86_64
- libcanberra-gtk2.i686
- PackageKit-gtk-module
- gtk2.i686
- compat-libstdc++-33.i686
- compat-libstdc++-296
- compat-libstdc++
- libXtst.i686
- libpam.so.0
Cognos
If you plan to install Cognos, you also need the libraries listed in the Cognos BI 10.1.1 Software Environments - Required Patches technote.
Both 32-bit and 64-bit versions are required.
Install Connections
A Connections clustered deployment includes...
- WAS nodes:
- One node with IBM WAS ND dmgr installed.
- WAS nodes federated into the Dmgr cell.
- Database server
- LDAP server.
- IBM HTTP Server
Install as a non-root user
By default, only root users have the necessary permissions to install a Connections deployment. The non-root user must be the same user who installed IBM WAS.
To grant install permissions to a non-root user...
- Complete prerequisite tasks.
- Edit...
LC_SETUP/IM/aix/install.ini
...and on the second line of the file, change "admin" to "nonadmin"
- Grant permissions to non-root user...
- app_server_root (RWX):
chgrp -R non-root_user_group app_server_root
chmod -R g+wrx app_server_root
chown -R non-root_ID:group app_server_root
- LC_SETUP (RWX):
chgrp -R non-root_user_group LC_SETUP
chmod -R g+wrx LC_SETUP
chown -R non-root_ID:group LC_SETUP
- connections_root (RWX):
chgrp -R non-root_user_group connections_root
chmod -R g+wrx connections_root
chown -R non-root_ID:group connections_root
- IM_root (RWX):
chgrp -R non-root_user_group IM_root
chmod -R g+wrx IM_root
chown -R non-root_ID:group IM_root
- shared_resources_root (RWX):
chgrp -R non-root_user_group shared_resources_root
chmod -R g+wrx shared_resources_root
chown -R non-root_ID:group shared_resources_root
- /var/ibm/InstallationManager (RWX):
chmod -R ugo+rwx /var/ibm/InstallationManager
chown -R non-root_ID:group /var/ibm/InstallationManager
Grant permissions to /var/ibm/InstallationManager only if the root user installed Installation Manager. A consolidated example of these commands follows this list.
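As a consolidated sketch only, assuming hypothetical directory paths, a non-root user of lcuser, and a group of lcadmins, the grants above could be applied in one pass:
# Substitute your actual app_server_root, LC_SETUP, connections_root, IM_root, and shared_resources_root paths.
for dir in /opt/IBM/WebSphere/AppServer /opt/LC_SETUP /opt/IBM/Connections /opt/IBM/InstallationManager /opt/IBM/IMShared; do
  chgrp -R lcadmins "$dir"        # give the non-root group ownership
  chmod -R g+wrx "$dir"           # grant the group read, write, and execute
  chown -R lcuser:lcadmins "$dir"
done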
- Install Connections using either the wizard, the console, or a silent installation method.
On the Windows operating system, the user must be a member of the administrator group.
Install Connections 4.5
- Download Connections 4.5 and copy the installation files to the system hosting the Deployment Manager.
- Complete prerequisite tasks
- On each node, stop any running instances of WAS and WebSphere node agents.
Do not stop Cognos Business Intelligence node agents or servers.
- Start WAS Network Deployment Manager.
- From the Connections setup directory, start Connections launchpad
cd LC_SETUP
./launchpad.sh
The launchpad needs a web browser to run. If your system does not have a web browser, you can install Connections in console mode, or you can install with a response file...
cd Connections_install/IM/myOS
./install.sh -input response.xml
- Click Install Connections 4.5 and then click Launch the Connections 4.5 install wizard.
Click Welcome to open the documentation link.
- In the Select packages to install window, select the packages to install and click Next to continue.
- Accept the default setting for Show all versions.
- If you are using an earlier version of Installation Manager than 1.5.3, the 1.5.3 package is selected in this window.
- Click Check for Other Versions and Extensions to search for updates to Installation Manager.
- Review and accept the license agreement
- Location of shared directories for Installation Manager.
- Set Shared Resources directory.
Resources shared by multiple packages.
- Set Installation Manager directory.
Resources unique to packages.
- Click Next.
- Choose either...
- Use the existing package group
- Create a new package group
- Set the installation directory for Connections.
You can accept the default directory location, enter a new directory name, or click Browse to select an existing directory. Click Next.
The path can consist only of letters (a-z, A-Z), numbers (0-9), and underscores (_).
- Confirm the applications to install and click Next.
The wizard always installs the Home page, News, and Search applications. To use media gallery widgets in the Communities application, install the Files application. Media gallery widgets store photo and video files in the Files database. Even if you are not configuring Cognos yet, install Metrics now so that application data is captured from the moment that Connections is deployed. Metrics captures the deployment data whereas Cognos is used for viewing data reports. If you install Metrics at a later stage, you will not have any data reports for the period before you installed Metrics.
- Connections 4.5: Install all Connections applications.
- Activities: Collaborate with colleagues.
- Blogs: Write personal perspectives about projects.
- Communities: Interact with people on shared projects.
- Bookmarks: Bookmark important web sites.
- Files: Share files among users.
- Forums: Discuss projects and exchange information.
- Connections Content Manager: Manage files using advanced sharing and draft review in Communities. If you choose to install Connections Content Manager, Communities is selected automatically, because the two applications work together. Connections Content Manager appears under Add-on.
- Metrics: Identify and analyze usage and trends.
- Mobile: Access Connections from mobile devices.
- Moderation: Forum and community owners can moderate the content of forums.
- Profiles: Find people in the organization.
- Wikis: Create content for your website.
- Enter the details of your WAS environment:
- Select the WAS installation location that contains the Deployment Manager. For example...
/opt/IBM/WebSphere/AppServer
- Enter the properties of the WAS Dmgr:
- Deployment Manager profile: Name of the dmgr to use for Connections. The wizard automatically detects any available dmgrs.
- Host name: Name of the dmgr host server.
- Administrator ID: The administrative ID of the dmgr. Set to the connectionsAdmin J2C authentication alias, which is mapped to the following J2EE roles: dsx-admin, widget-admin, and search-admin. Also used by the service integration bus. To use security management software such as Tivoli Access Manager or SiteMinder, the ID specified here must exist in the LDAP directory. This user account can be an LDAP or local repository user.
- Administrator Password: The password for the administrative ID of the dmgr.
- SOAP port number: The wizard automatically detects this value.
- Click Validate to verify the dmgr information that you entered and that application security is enabled on WAS.
If the verification fails, Installation Manager displays an error message.
The validation process checks the number of open files that are supported by your system. If the value for this parameter, known as the Open File Descriptor limit, is too low, a file open error, memory allocation failure, or connection establishment error could occur. If one of these errors occurs, exit the installation wizard and increase the open file limit before restarting the wizard. To set the file limit, go to the Installation error messages topic and search for error code CLFRP0042E. The recommended value for Connections is 8192.
- When the verification test is successful, click Next.
The wizard creates file dmInfo.properties to record details of the cell, node, and server.
- Configure the Connections Content Manager deployment option. This panel displays only if you chose to install the Connections Content Manager feature.
Refer to Configure Connections Content Manager to find the post-installation tasks that you must perform to get CCM up and running.
- Select Existing Deployment to use an existing FileNet deployment for Connections Content Manager:
- Enter the user name and password of the FileNet Object Store administrator for the deployment that the following URLs point to:
- Enter the HTTP URL for the FileNet Collaboration Services server such as:
http://fncs.example.com:80/dm
- Enter the HTTPS URL for the FileNet Collaboration Services server such as:
https://fncs.example.com:443/dm
- Select New Deployment to install a new FileNet deployment to use for Connections Content Manager:
- Enter the location of the FileNet installer packages. The three FileNet installers (Content Platform Engine, Content Platform Engine Client, and FileNet Collaboration Services) must be placed in the same folder; see the example listing after the table. The package names are as follows:
- AIX: Content Platform Engine: 5.2.0-P8CE-AIX.BIN; Content Platform Engine Client: 5.2.0-P8CE-CLIENT-AIX.BIN; FileNet Collaboration Services: FNCS-2.0.0.0-AIX.bin
- Linux: Content Platform Engine: 5.2.0-P8CE-LINUX.BIN; Content Platform Engine Client: 5.2.0-P8CE-CLIENT-LINUX.BIN; FileNet Collaboration Services: FNCS-2.0.0.0-Linux.bin
- Windows: Content Platform Engine: 5.2.0-P8CE-WIN.EXE; Content Platform Engine Client: 5.2.0-P8CE-CLIENT-WIN.EXE; FileNet Collaboration Services: FNCS-2.0.0.0-WIN.exe
- zLinux: Content Platform Engine: 5.2.0-P8CE-ZLINUX.BIN; Content Platform Engine Client: 5.2.0-P8CE-CLIENT-ZLINUX.BIN; FileNet Collaboration Services: FNCS-2.0.0.0-zLinux.bin
For the Linux platform, at least 3 GB of free disk space is needed under the /tmp folder for the Connections 4.5 CCM installation; otherwise the installation fails.
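For example, on Linux a correctly prepared folder (the directory name here is a placeholder) contains exactly these three installers:
ls /opt/FileNetInstallers
5.2.0-P8CE-LINUX.BIN  5.2.0-P8CE-CLIENT-LINUX.BIN  FNCS-2.0.0.0-Linux.bin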
- Configure your topology.
If you return to this page from a later page in the installation wizard, your settings are still present but not visible. To change any settings, you must enter all of the information again. If you do not want to change your initial settings, click Next.
The applications for Connections Content Manager will not be shown if you have chosen to use an existing FileNet deployment.
- Small deployment:
- Select the Small deployment topology.
- Enter a Cluster name for the topology.
- Select a Node.
- Click Next.
- Medium deployment:
- Select the Medium deployment topology.
- Select the default value or enter a Cluster name for each application or for groups of applications. For example, use Cluster1 for Activities, Communities, and Forums.
Installation Manager creates servers and clusters when required.
- Select a Node for each cluster. Accept the predefined node or select a different node.
These nodes host application server instances that serve Connections applications. You can assign multiple nodes to a cluster, where each node is a server member of that cluster.
- Enter a Server member name for the selected node. Choose the default or enter a custom name.
If you enter a custom server member name, the name must be unique across all nodes in the deployment.
- Click Next.
- Large deployment:
- Select the Large deployment topology.
- Enter a Cluster name for each application.
Installation Manager creates servers and clusters when required.
- Select a Node for each cluster. Accept the predefined node or select a different node.
These nodes host application server instances that serve Connections applications. You can assign multiple nodes to a cluster, where each node is a server member of that cluster.
- Enter a Server member name for the selected node. Choose the default or enter a custom name.
If you enter a custom server member name, the name must be unique across all nodes in the deployment.
- Click Next.
- Enter the database information:
If you return to this page from a later page in the installation wizard, your settings are still present but not visible. To change any settings, you must enter all of the information again. If you do not want to change your initial settings, click Next.
The Connections Content Manager databases will not be shown if you have chosen to use an existing FileNet deployment.
Database information for Global Configuration Data and Object Store must be set correctly, otherwise installation will fail.
- Specify whether the installed applications use the same database server or instance: Select Yes or No.
If allowed by your database configuration, you can select multiple database instances as well as different database servers.
- Select a Database type from one of the following options:
- IBM DB2 Universal Database™
- Oracle Enterprise Edition
- Microsoft SQL Server Enterprise Edition
- Enter the Database server host name. For example:
appserver.enterprise.example.com
If your installed applications use different database servers, enter the database host name for each application.
- Enter the Port number of the database server. The default values are: 50000 for DB2, 1521 for Oracle, and 1433 for SQL Server.
If your installed applications use different database servers or instances, enter the port number for each database server or instance.
- Enter the JDBC driver location. For example:
- AIX:
/usr/IBM/WebSphere/AppServer/lib
- Linux:
/opt/IBM/WebSphere/AppServer/lib
- Windows:
C:\IBM\WebSphere\Appserver\lib
- Ensure that the following JDBC driver libraries are present in the JDBC directory:
- DB2
- db2jcc4.jar and db2jcc_license_cu.jar
Ensure that your user account has the necessary permissions to access the DB2 JDBC files.
- Oracle
- ojdbc6.jar
- SQL Server
- Download the SQL Server JDBC 4 driver from the Microsoft website to a local directory and enter that directory name in the JDBC driver library field. The directory must not contain the sqljdbc.jar file, only the sqljdbc4.jar file. Even though the data source is configured to use the sqljdbc4.jar file, an exception occurs if both files are present in the same directory.
- Enter the User ID and Password for each database. If each database uses the same user credentials, select the Use the same password for all applications check box and then enter the user ID and password for the first database in the list.
If your database type is Oracle, you must connect to the database with the user ID that you used when you created the application database.
- Click Validate to verify your database settings. If the validation fails, check your database settings. When the validation succeeds, click Next.
Installation Manager tests your database connection with the database values that you supplied. You can change the database configuration later in the WAS admin console.
You can usually continue even if the validation fails, because you can change the database settings from the WebSphere Application Server administrative console after installation. However, you cannot continue with incorrect information for the Connections Content Manager databases: the installer performs database operations during installation, and incorrect database information causes the installation to fail.
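Before you click Validate, you can optionally confirm from the Deployment Manager system that the database port is reachable and that the JDBC driver files are in place. The following is a minimal sketch, assuming a Linux shell and DB2 with the example host and default port from this step; substitute your own host, port, and JDBC driver path.
# Hypothetical values; replace with your own database host, port, and JDBC driver location
DB_HOST=appserver.enterprise.example.com
DB_PORT=50000   # 1521 for Oracle, 1433 for SQL Server
JDBC_DIR=/opt/IBM/WebSphere/AppServer/lib
# Check that the database port is reachable from the Deployment Manager
if timeout 5 bash -c "cat < /dev/null > /dev/tcp/$DB_HOST/$DB_PORT"; then
  echo "Port $DB_PORT on $DB_HOST is reachable"
else
  echo "Cannot reach $DB_HOST:$DB_PORT - check the firewall and the database listener"
fi
# Confirm that the required DB2 JDBC driver libraries are present and readable
ls -l "$JDBC_DIR"/db2jcc4.jar "$JDBC_DIR"/db2jcc_license_cu.jar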
- Specify your Cognos configuration details as explained in the table, and then click Validate to verify your connection.
- The IBM Cognos configuration panel appears only if you chose to install the Metrics application earlier in this task.
- Ensure that Cognos Business Intelligence Server is running because the wizard pings the server during the validation process.
- If you choose to install Metrics but do not want to set up Cognos now, you can select the Do Later option to continue.
- If you decide to deploy Cognos later, be sure to update the J2C alias and update the web addresses for the Cognos service entry in LotusConnections-config.xml as described in the Troubleshooting Cognos validation problems topic.
See also the technote How to change metrics settings to integrate with the newly installed Cognos BI.
- Administrator user ID
- Type the user name of the administrator account that you selected for Cognos Business Intelligence. This user must be included in the LDAP directory used with Connections.
- Administrator password
- Type the password for the Cognos administrator.
- Name
- Click the Load node info button to retrieve the list of available nodes and profiles, then click the arrow and select the WebSphere profile that hosts the Cognos BI server. The profile you select here must match the profile you specified as the was.profile.name in the cognos-setup.properties file.
- Host name
- This is a non-editable field that is populated when you select a profile in the Name field. The associated host name for each profile or node can help you choose the correct node where the Cognos BI Server is running.
- Server name
- There might be multiple servers installed on the same computer as the Cognos BI server; click the arrow and select the instance that represents the Cognos server. This value must match what you specified as the cognos.was.server.name in the cognos-setup.properties file. A default value of cognos_server was assigned in the properties file, so unless you changed that value, use it now.
- Web context root
- The context root determines which requests are delegated to the Cognos application for processing (any request beginning with this string is handled by Cognos). This value must match the cognos.contextroot specified in the cognos-setup.properties file. A default value of cognos was assigned in the properties file, so unless you changed that value, use it now.
- Locations of the content stores. All nodes in a cluster must have read-write access to shared content. Both shared and local content stores must be accessible using the same path from all nodes and from the dmgr. Each content store is represented by a corresponding WebSphere variable that is further defined as shared or local. Local content is node-specific.
If you are migrating from Connections 4.0, you must reuse your existing content stores in 4.5 in order to maintain data integrity.
- Enter the location of the Shared content store. The shared content store usually resides in a shared repository that grants read-write access to the dmgr and all the nodes.
Use one of the following methods to create a shared data directory (an NFS sketch follows these steps):
- Network-based file shares (for example: NFS, SMB/Samba, and so on)
- Storage area network drives (SAN)
- If you are using a shared-file system on Microsoft Windows, specify the file location using the Universal Naming Convention (UNC) format. For example:
\\server_name\share_name
(Windows only) If you use Remote Desktop Connection to map shared folder drives, ensure that you use the same session to start the node agents. Otherwise, the shared drives might be invisible to the nodes.
- Enter the location of the Local content store.
- Click Validate to verify that the account that you are using to install Connections has write access to the content store.
- Click Next.
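As an illustration of the network-based file share option, the following sketch shows an NFS export mounted at the same path on the dmgr and on every node. The host name nfs.example.com, the export path, and the mount point are hypothetical examples; follow your own storage standards in production.
# On the NFS server (hypothetical host nfs.example.com): export the shared directory
mkdir -p /export/connections/shared
echo "/export/connections/shared *(rw,sync,no_root_squash)" >> /etc/exports
exportfs -ra
# On the dmgr and on every node: mount the export at the same path
mkdir -p /opt/IBM/Connections/data/shared
mount -t nfs nfs.example.com:/export/connections/shared /opt/IBM/Connections/data/shared
# Verify that the installing account has write access from each system
touch /opt/IBM/Connections/data/shared/.write_test && rm /opt/IBM/Connections/data/shared/.write_test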
- Select a Notification solution.
Notifications are email messages to users about new information and events in your Connections deployment.
- Enable Notification only.
Use notifications but without the ReplyTo capability.
- Enable Notification and ReplyTo.
Use notifications and the ReplyTo capability. To use ReplyTo, your mail server must be able to receive all the replies and funnel them into a single inbox. IBM Connections connects to the mail server using the IMAP protocol.
- None.
Do not use a notification solution in your Connections deployment. You can configure notifications after installation.
- Select and specify a mail server solution and then click Next.
WebSphere Java Mail Session:
Use a single mail server for all notifications. Select this option if you can access an SMTP server directly using the host name.
Complete the following fields to identify the mail server to use for sending email:
- Host name of SMTP messaging server
- Enter the host name or IP address of the preferred SMTP mail server.
- This SMTP server requires authentication
- Select the check box to force authentication when mail is sent from this server.
- User ID
- If the SMTP server requires authentication, enter the user ID.
- Password
- If the SMTP server requires authentication, enter the user password.
- Encrypt outgoing mail traffic to the SMTP messaging server using SSL
- Select this check box if you want to encrypt outgoing mail to the SMTP server.
- Port
- Accept the default port of 25, or enter port 465 if you are using SSL.
- DNS MX Records:
Use information from DNS to determine which mail servers to use. Select this option if you use a DNS server to access the SMTP messaging server.
- Messaging domain name
- Enter the name or IP address of the messaging domain.
- Choose a specific DNS server
- Select this check box if you want to specify a unique SMTP server.
- DNS server for the messaging servers query
- Enter the host name or IP address of the DNS server.
- DNS port used for the messaging servers query
- Enter the port number that is used for sending queries using the messaging server.
- This SMTP server requires authentication
- Select the check box to force authentication when notification mail is sent from this server.
- User ID
- If SMTP authentication is required, enter the administrator user ID for the SMTP server.
- Password
- If SMTP authentication is required, enter the password for the administrator user of the SMTP server.
- Encrypt outgoing mail traffic to the SMTP messaging server using SSL
- Select the check box if you want to use the Secure Sockets Layer (SSL) when connecting to the SMTP server.
- Port
- Specify the port number to use for the SMTP server connection. The default port number for the SMTP protocol is 25. The default port number for SMTP over SSL is 465.
- If you click Do not enable Notification, Installation Manager skips the rest of this step. You can configure notification later.
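Before continuing, you can optionally verify the mail settings from the Deployment Manager system. This is a sketch assuming a Linux shell with the dig utility available; smtp.example.com is a hypothetical SMTP host, and mail.example.com is the example messaging domain used below.
# WebSphere Java Mail Session option: check that the SMTP host accepts connections on port 25 (or 465 for SSL)
timeout 5 bash -c "cat < /dev/null > /dev/tcp/smtp.example.com/25" && echo "SMTP port reachable"
# DNS MX Records option: confirm that MX records resolve for the messaging domain
dig +short MX mail.example.com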
- If you selected the Notification and ReplyTo option, configure the ReplyTo email settings.
Connections uses a unique ReplyTo address to identify both the person who replied to a notification and the event or item that triggered the notification.
- Enter a domain name. For example:
mail.example.com
This domain name is used to build the ReplyTo address. The address consists of the suffix or prefix, a unique key, and the domain name.
- The reply email address is given a unique ID by the system. You can customize the address by adding a prefix or suffix, using a maximum of 28 characters. This extra information is useful if the domain name is already in use for other purposes. Select one of the following options:
- None
- Use the ID generated by the system.
- Prefix
- Enter a prefix in the Example field.
- Suffix
- Enter a suffix in the Example field.
As you select an option, the wizard creates an example of the address, combining your selection with the ID generated by the system. For example:
- unique_id@domain
- prefix_unique_id@domain
- unique_id_suffix@domain
- Specify the details of the mail file to which ReplyTo emails are sent:
- Server
- The domain where your mail server is located. For example:
replyTo.mail.example.com
- User ID
- The user account for the mail server. The user ID and password are credentials that Connections will use to poll the inbox on the mail server to retrieve the replies and process the content. Connections connects to the mail server using IMAP.
- Password
- Password for the user account. The user ID and password are credentials that Connections will use to poll the inbox on the mail server to retrieve the replies and process the content. Connections connects to the mail server using IMAP.
- Click Next.
You can modify the ReplyTo settings after installation. To edit the domain name and prefix or suffix, edit news-config.xml. To edit the server and authentication details, log in to the WAS admin console and navigate to the Mail Sessions page, where you can edit the configuration.
- Review the information that you have entered. To revise your selections, click Back. To finalize the installation, click Next.
- Review the result of the installation. Click Finish to exit the installation wizard.
- Restart the Deployment Manager:
cd WAS_HOME/profiles/Dmgr01/bin
./stopManager.sh
./startManager.sh
- Start all the federated nodes by running the startNode command. On each node, run...
cd profile_root/bin
./startNode.sh
- Log in to the dmgr console to perform a full synchronization of all nodes.
System administration | Nodes | nodes | Full Resynchronize
Wait until the dmgr copies all the application EAR files to the installedApps directory on each of the nodes. This process can take up to 30 minutes. To verify that the dmgr has distributed the application EAR files to the nodes, check the SystemOut.log file of each node agent. The default path to the SystemOut.log file on a node is...
profile_root/logs/nodeagent
Look for a message such as the following example:
ADMA7021I: Distribution of application application_name completed successfully
...where application_name is the name of a Connections application.
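Instead of opening each SystemOut.log manually, you can search the node agent logs for the distribution messages. A quick sketch, assuming a Linux shell, where profile_root is the same placeholder used above:
# Count how many applications the dmgr has distributed to this node so far
grep -c "Distribution of application" profile_root/logs/nodeagent/SystemOut.log
# Show the individual distribution messages
grep "completed successfully" profile_root/logs/nodeagent/SystemOut.log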
- Restart the Deployment Manager.
- Start all your Connections clusters:
Servers | Clusters | WebSphere Application server clusters | Connections clusters | Start
If you installed a cluster with multiple Search nodes, create the initial index.
- If you are installing a non-English language deployment, enable Search dictionaries.
- The index is ready when the INDEX.READY and CRAWLING_VERSION files are present in the index directory.
If some applications do not start, the file-copying process might not have completed. Wait a few minutes and start the applications.
If the Connections applications are installed on different clusters, start the clusters in the following order:
- News cluster
- Profiles cluster
- Search cluster
- Dogear cluster
- Communities cluster
- Activities cluster
- Blogs cluster
- Files cluster
- Forums cluster
- Wikis cluster
- Home page cluster
- Metrics cluster
- Mobile cluster
- Moderation cluster
- Connections Content Manager cluster
Results
The installation wizard has installed Connections in a network deployment.
To confirm that the installation was successful, open the log files in...
connections_root/logs
Each Connections application that you installed has a log file, using the following naming format:
application_nameInstall.log
...where application_name is the name of a Connections application. Search for the words error or exception to check whether any errors or exceptions occurred during installation.
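For example, the following commands, assuming a Linux shell, search all of the application installation logs at once; the case-insensitive pattern catches both Error and Exception entries:
cd connections_root/logs
# List any lines that mention errors or exceptions, with line numbers, across all install logs
grep -inE "error|exception" *Install.log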
To view the log file for system events that occurred during the installation, open date_time.xml, where date_time represents the date and time of the installation. The file is located by default in the following directory:
- root user: /var/ibm/InstallationManager/logs
- non-root user: /home/user/var/ibm/InstallationManager/logs, where user is the non-root user name
- Windows Server 2008 64-bit: C:\ProgramData\IBM\Installation Manager\logs
Complete the post-installation tasks that are relevant to your installation.
Access network shares:
If you installed WAS on Microsoft Windows and configured it to run as a service, ensure that you can access network shares.
Install in console mode
Non-graphical install. Useful on systems that do not have a video card.
- Download Connections 4.5 and copy the installation files to the system hosting the Deployment Manager.
- Complete prerequisite tasks
- On each node, stop any running instances of WAS and WebSphere node agents.
Do not stop Cognos Business Intelligence node agents or servers.
- Start WAS Network Deployment Manager.
- Log on to the Deployment Manager system and start Installation Manager in console mode:
cd /opt/IBM/InstallationManager/eclipse/tools
./imcl -c
- To install both Installation Manager and Connections, run the console installation script. The script calls a response file that contains information about the repositories for both Installation Manager and Connections:
cd IBM_Connections_Install/IM/aix/
./install.console.sh
- To run the console installer in a specific language, add the -nl option:
cd IBM_Connections_Install/IM/aix/tools/
./imcl -c -nl language_code
...where language_code is the two-letter code for a language, such as fr for French.
- In the console window, specify the Connections repository:
If you chose the option to run the install.console.sh file above, you can skip this step.
- Type P to edit preferences.
- Type 1 to specify repositories.
- Type D to add a repository.
- Type the repository path for Connections 4.5, for example: Lotus_Connections/repository.xml.
- Type A to save the repository information.
- Type C to return to the console window.
- In the Select packages to install step, type the appropriate number to select the package, and then type N to proceed.
Connections Content Manager is not supported on IBM i. Although it appears as an option in the console interface, you cannot select it.
- Review the license agreement.
- Set the shared resources directory for Installation Manager.
This directory holds resources shared by multiple packages.
- Set the Installation Manager installation directory.
This directory holds resources unique to each package.
- Locations of the package group for Installation Manager and the installation directory for Connections:
The wizard automatically detects the Connections package group. To accept the default location for the Connections installation directory, type N. To specify a new directory name, type M and enter the new directory name and path.
- Select the applications to install and then type N to proceed.
The wizard always installs the Home page, News, and Search applications. If you clear the selections of the Home page, News, or Search applications, the wizard will exit. To use media gallery widgets in the Communities application, install the Files application. Media gallery widgets store photo and video files in the Files database. Even if you are not configuring Cognos yet, install Metrics now so that application data is captured from the moment that Connections is deployed. Metrics captures the deployment data whereas Cognos is used for viewing data reports. If you install Metrics at a later stage, you will not have any data reports for the period before you installed Metrics.
- Connections 4.5
- Install all Connections applications.
- Activities
- Collaborate with colleagues.
- Blogs
- Write personal perspectives about projects.
- Communities
- Interact with people on shared projects.
- Bookmarks
- Bookmark important web sites.
- Files
- Share files among users.
- Forums
- Discuss projects and exchange information.
- Content Manager
- Manage files using advanced sharing and draft review in Communities. If you install Content Manager, also select Communities; otherwise the installation will quit.
- Metrics
- Identify and analyze usage and trends.
- Mobile
- Access Connections from mobile devices.
- Moderation
- Forum and community owners can moderate the content of forums.
- Profiles
- Find people in the organization.
- Wikis
- Create content for your website.
- Enter the details of your WAS environment:
- Select the WAS installation location that contains the Deployment Manager.
Note the default path to the WAS installation:
/usr/IBM/WebSphere/AppServer
- Enter the properties of the WAS Dmgr:
- Deployment Manager profile
- Name of the dmgr to use for Connections. The wizard automatically detects any available dmgrs.
- Host name
- Name of the host dmgr server.
- Administrator ID
- The administrative ID of the dmgr. Set to the connectionsAdmin J2C authentication alias, which is mapped to the following J2EE roles: dsx-admin, widget-admin, and search-admin. Also used by the service integration bus. To use security management software such as Tivoli Access Manager or SiteMinder, the ID specified here must exist in the LDAP directory. This user account needs to be both a WAS administrative user and an LDAP user.
- Administrator Password
- The password for the administrative ID of the dmgr.
- SOAP port number
- The wizard automatically detects this value.
- Press Enter to verify the dmgr information that you entered.
The verification process also checks that application security is enabled on WAS. If the verification fails, Installation Manager displays an error message.
The validation process checks the number of open files that are supported by your system. If the value for this parameter, known as the Open File Descriptor limit, is too low, a file open error, memory allocation failure, or connection establishment error could occur. If one of these errors occurs, exit the installation wizard and increase the open file limit before restarting the wizard. To set the file limit, go to the Installation error messages topic and search for error code CLFRP0042E. The recommended value for Connections is 8192.
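On AIX or Linux, you can check and raise the Open File Descriptor limit for the current shell before restarting the wizard. A minimal sketch; making the change permanent usually involves an entry in /etc/security/limits.conf (the wasadmin user name below is hypothetical):
# Display the current open file limit for this shell
ulimit -n
# Raise it to the recommended value for Connections, then restart the wizard from this shell
ulimit -n 8192
# Example of a permanent setting on Linux (hypothetical user name):
# echo "wasadmin - nofile 8192" >> /etc/security/limits.conf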
- If the verification check is successful, type N to proceed. If verification fails, press B to reenter the required information.
The wizard creates a file named dmInfo.file to record details of the cell, node, and server.
- Configure the Connections Content Manager deployment option.
This panel only displays if you chose to install the Connections Content Manager feature.
- Select 1 to use a new FileNet deployment for Connections Content Manager.
- Enter the location of the FileNet installer packages. The three FileNet installers (Content Platform Engine, FileNet Collaboration Services, and Content Platform Engine Client) must be placed in the same folder.
- Press Enter to validate that the correct installers can be found.
- Select 2 to use an existing FileNet deployment for Connections Content Manager.
- Enter the user ID of the FileNet Object Store administrator.
- Enter the password for the FileNet Object Store administrator.
- Enter the HTTP URL for the FileNet Collaboration Services server such as:
http://fncs.example.com:80/dm
- Enter the HTTPS URL for the FileNet Collaboration Services server, such as:
https://fncs.example.com:443/dm
- Press Enter to validate. You can continue even if the validation fails; if you are sure that the information is correct, the FileNet server might simply be unavailable at the moment. Type N to continue, or type B to reenter the information.
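If the validation fails, you can check from the Deployment Manager system whether the FileNet Collaboration Services URLs respond at all, for example with curl; the host names below are the same example values shown above:
# Check the HTTP endpoint; a 200 or 302 response code suggests the server is reachable
curl -s -o /dev/null -w "%{http_code}\n" http://fncs.example.com:80/dm
# Check the HTTPS endpoint; -k skips certificate verification for a quick reachability test
curl -k -s -o /dev/null -w "%{http_code}\n" https://fncs.example.com:443/dm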
- Configure your topology.
For more information about each option, refer to Deployment options.
If you chose to use an existing FileNet deployment for Connections Content Manager, the Connections Content Manager application is not shown in this topology panel.
- Small deployment:
- Type 1 to select the Small deployment topology.
- Enter a Cluster name for the topology.
- Select a Node.
- Enter a Server member name for the node.
- Type N to proceed.
- Medium deployment:
- Type 2 to select the Medium deployment topology.
- Select the default value or enter a Cluster name for each application or for groups of applications. For example, use Cluster1 for Activities, Communities, and Forums.
Installation Manager creates servers and clusters when required.
- Select a Node for each cluster. Accept the predefined node or select a different node.
These nodes host application server instances that serve Connections applications. You can assign multiple nodes to a cluster, where each node is a server member of that cluster.
- Enter a Server member name for the selected node. Choose the default or enter a custom name.
If you enter a custom server member name, the name must be unique across all nodes in the deployment.
- The topology specified is displayed. To re-specify any details, type the number that corresponds to the application; for example, type 1 for Activities.
- Type N to proceed.
- Large deployment:
- Type 3 to select the Large deployment topology.
- Enter a Cluster name for each application.
Installation Manager creates servers and clusters when required.
- Select a Node for each cluster. Accept the predefined node or select a different node.
These nodes host application server instances that serve Connections applications. You can assign multiple nodes to a cluster, where each node is a server member of that cluster.
- Enter a Server member name for the selected node. Choose the default or enter a custom name.
If you enter a custom server member name, the name must be unique across all nodes in the deployment.
- The topology specified is displayed. To re-specify any details, type the number that corresponds to the application; for example, type 1 for Activities.
- Type N to proceed.
- Enter the database information:
The Connections Content Manager databases will not be shown if you have chosen to use an existing FileNet deployment. Connections Content Manager is not supported on IBM i. Database information for Global Configuration Data and Object Store must be set correctly; otherwise, installation will fail.
- Specify whether the installed applications use the same database server or instance: Type 1 to specify that the applications use same database server or instance; type 2 to specify that they use different database servers or instances.
If allowed by your database configuration, you can select multiple database instances as well as different database servers.
- Select a Database type from one of the following options: If installing on Windows, Linux, or AIX:
- IBM DB2 Universal Database
- Oracle Enterprise Edition
- Microsoft SQL Server Enterprise Edition
- Enter the Database server host name. For example:
appserver.enterprise.example.com
If your installed applications use different database servers, enter the database host name for each application.
- Enter the Port number of the database server. The default values are: 50000 for DB2, 1521 for Oracle, and 1433 for SQL Server.
If your installed applications use different database servers or instances, enter the port number for each database server or instance. Database Name and Port number do not apply to DB2 for IBM i, so you can ignore them and keep the default values. For Metrics, because the Metrics database is DB2 on AIX, you must provide the Database Name and Port number so that the installer configures them correctly.
- Enter the JDBC driver location. For example:
/usr/IBM/WebSphere/AppServer/lib
- Ensure that the following JDBC driver libraries are present in the JDBC directory:
- DB2
- db2jcc4.jar and db2jcc_license_cu.jar
Ensure that your user account has the necessary permissions to access the DB2 JDBC files.
- Oracle
- ojdbc6.jar
- SQL Server
- Download the SQL Server JDBC 4 driver from the Microsoft website to a local directory and enter that directory name in the JDBC driver library field. The directory must not contain the sqljdbc.jar file, only the sqljdbc4.jar file. Even though the data source is configured to use the sqljdbc4.jar file, an exception occurs if both files are present in the same directory.
- Enter the User ID and Password for each database. If each database uses the same user credentials, confirm the Use the same password for all applications question and then enter the user ID and password for the first database in the list.
If your database type is Oracle, you must connect to the database with the user ID that you used when you created the application database.
- If you need to make changes, type the number that corresponds to the application to change. Alternatively, type R to reset all the database specifications to their default values.
- Press Enter to verify your database settings. If the validation fails, check your database settings.
Installation Manager tests your database connection with the database values that you supplied. You can change the database configuration later in the WAS admin console.
You can usually continue even if the validation fails, because you can change the database settings from the WebSphere Application Server administrative console after installation. However, you cannot continue with incorrect information for the Connections Content Manager databases: the installer performs database operations during installation, and incorrect database information causes the installation to fail.
- If the verification check is successful, type N to proceed. If verification fails, press B to reenter the required information.
- Specify your Cognos configuration.
The IBM Cognos configuration panel appears only if you chose to install the Metrics application earlier in this task.
- Select when to configure the Cognos application.
- Type 1 to configure it after installation completes as described in Configure Cognos Business Intelligence after installation.
- Type 2 to configure it now.
- Enter the LDAP user ID for the Cognos administrator.
- Enter the password for the Cognos administrator.
- Select the node where the Cognos BI Server is installed.
- Type N to proceed.
- Enter the web context root.
- Press Enter to validate the configuration.
- Type N to proceed.
See Troubleshooting Cognos validation problems if any issues arise.
- Locations of the content stores.
Shared content must be read/write accessible by all nodes in a cluster. Both shared and local content stores must be accessible using the same path from all nodes and the dmgr. Local content is node-specific. Each content store is represented by a corresponding WebSphere variable that is further defined as shared or local. If you are migrating from Connections 4.0, you must reuse your existing content stores in 4.5 in order to maintain data integrity.
- Enter the location of the Shared content store.
The shared content store usually resides in a shared repository that grants read-write access to the dmgr and all the nodes.
Use one of the following methods to create a shared data directory:
- Network-based file shares (for example: NFS, SMB/Samba, and so on)
- Storage area network drives (SAN)
- If you are using a shared-file system on Microsoft Windows, specify the file location using the Universal Naming Convention (UNC) format. For example:
\\server_name\share_name
(Windows only) If you use Remote Desktop Connection to map shared folder drives, ensure that you use the same session to start the node agents. Otherwise, the shared drives might be invisible to the nodes.
- Enter the location of the Local content store.
- Press Enter to verify that the account that you are using to install Connections has write access to the content store.
- Type N to proceed.
- Select a Notification solution.
- Enable Notification only.
Use notifications but without the ReplyTo capability.
- Enable Notification and ReplyTo.
Use notifications and the ReplyTo capability. To use ReplyTo, your mail server must be able to receive all the replies and funnel them into a single inbox. IBM Connections connects to the mail server using the IMAP protocol.
- None.
Do not use a notification solution in your Connections deployment. You can configure notifications after installation.
- If you chose a mail notification option, select and specify a mail server solution.
WebSphere Java Mail Session:
Use a single mail server for all notifications. Select this option if you can access an SMTP server directly using the host name.
Identify the mail server to use for sending email:
- Host name of SMTP messaging server
- Enter the host name or IP address of the preferred SMTP mail server.
- This SMTP server requires authentication
- Enter Y to force authentication when mail is sent from this server.
- User ID
- If the SMTP server requires authentication, enter the user ID.
- Password
- If the SMTP server requires authentication, enter the user password.
- Encrypt outgoing mail traffic to the SMTP messaging server using SSL
- To encrypt outgoing mail to the SMTP server, press Y.
- Port
- Press Enter to accept the default port of 25, or enter 465 if you are using SSL.
DNS MX Records:
Use information from DNS to determine which mail servers to use. Select this option if you use a DNS server to access the SMTP messaging server.
- Messaging domain name
- Enter the name or IP address of the messaging domain.
- Choose a specific DNS server
- To specify a unique SMTP server, press Y.
- DNS server for the messaging servers query
- Enter the host name or IP address of the DNS server.
- DNS port used for the messaging servers query
- Enter the port number that is used for sending queries using the messaging server.
- This SMTP server requires authentication
- Enter Y to force authentication when mail is sent from this server.
- User ID
- If SMTP authentication is required, enter the administrator user ID for the SMTP server.
- Password
- If SMTP authentication is required, enter the password for the administrator user of the SMTP server.
- Encrypt outgoing mail traffic to the SMTP messaging server using SSL
- To encrypt outgoing mail to the SMTP server, press Y.
- Port
- Press Enter to accept the default port of 25, or enter 465 if you are using SSL.
- If you specify Do not enable Notification, Installation Manager skips the rest of this step. You can configure notification later.
- If you selected the Notification and ReplyTo option, configure the ReplyTo email settings.
Connections uses a unique ReplyTo address to identify both the person who replied to a notification and the event or item that triggered the notification.
- Enter a domain name. For example: mail.example.com.
This domain name is used to build the ReplyTo address. The address consists of the suffix or prefix, a unique key, and the domain name.
- The reply email address is given a unique ID by the system. You can customize the address by adding a prefix or suffix, using a maximum of 28 characters. This extra information is useful if the domain name is already in use for other purposes. Select one of the following options:
- None
- Use the ID generated by the system.
- Prefix
- Enter a prefix in the Example field.
- Suffix
- Enter a suffix in the Example field.
As you select an option, the wizard creates an example of the address, combining your selection with the ID generated by the system. For example:
- unique_id@domain
- prefix_unique_id@domain
- unique_id_suffix@domain
- Specify the details of the mail file to which ReplyTo emails are sent:
- Server
- The domain where your mail server is located. For example:
replyTo.mail.example.com
- User ID
- The user account for the mail server. The user ID and password are credentials that Connections will use to poll the inbox on the mail server to retrieve the replies and process the content. Connections connects to the mail server using IMAP.
- Password
- Password for the user account. The user ID and password are credentials that Connections will use to poll the inbox on the mail server to retrieve the replies and process the content. Connections connects to the mail server using IMAP.
- Type N to proceed.
You can modify the ReplyTo settings after installation. To edit the domain name and prefix or suffix, edit news-config.xml. To edit the server and authentication details, log in to the WAS admin console and navigate to the Mail Sessions page, where you can edit the configuration.
- Review the information that you entered. To revise your selections, press B. To continue installing, press N.
- To install the product, press I. To generate a response file, press G.
- Review the result of the installation. Press F to exit the installation wizard.
- Restart the Deployment Manager.
cd WAS_HOME/profiles/Dmgr01/bin
./stopManager.sh
./startManager.sh
- Start all the federated nodes by running the startNode command. On each node, run...
cd profile_root/bin
./startNode.sh
- Log in to the dmgr console to perform a full synchronization of all nodes.
System administration | Nodes | nodes | Full Resynchronize
Wait until the dmgr copies all the application EAR files to the installedApps directory on each of the nodes. This process can take up to 30 minutes. To verify that the dmgr has distributed the application EAR files to the nodes, check the SystemOut.log file of each node agent. The default path to the SystemOut.log file on a node is...
profile_root/logs/nodeagent
Look for a message such as the following example:
ADMA7021I: Distribution of application application_name completed successfully
...where application_name is the name of a Connections application.
- Restart the Deployment Manager.
- Start all your Connections clusters:
Servers | Clusters | WebSphere Application server clusters | Connections clusters | Start
If you installed a cluster with multiple Search nodes, create the initial index.
- If you are installing a non-English language deployment, enable Search dictionaries.
- The index is ready when the INDEX.READY and CRAWLING_VERSION files are present in the index directory.
If some applications do not start, the file-copying process might not have completed. Wait a few minutes and start the applications.
If the Connections applications are installed on different clusters, start the clusters in the following order:
- News cluster
- Profiles cluster
- Search cluster
- Dogear cluster
- Communities cluster
- Activities cluster
- Blogs cluster
- Files cluster
- Forums cluster
- Wikis cluster
- Home page cluster
- Metrics cluster
- Mobile cluster
- Moderation cluster
- Connections Content Manager cluster
Results
The installation wizard has installed Connections in a network deployment.
To confirm that the installation was successful, review...
connections_root/logs
Each Connections application that you installed has a log file, using the following naming format:
application_nameInstall.log
...where application_name is the name of a Connections application. Search for the words error or exception to check whether any errors or exceptions occurred during installation. To view the log file for system events that occurred during the installation, open date_time.xml, where date_time represents the date and time of the installation. The file is located by default in the following directory:
- root user: /var/ibm/InstallationManager/logs
- non-root user: /home/user/var/ibm/InstallationManager/logs, where user is the non-root user name
Complete the post-installation tasks that are relevant to your installation.
Access network shares:
If you installed WAS on Microsoft Windows and configured it to run as a service, ensure that you can access network shares.
Install silently
Silent installation is a tool to simplify the installation process in enterprises that need multiple, identical instances of Connections.
Silent installation uses installation parameters in a response file to install identical Connections profiles on different computers. To specify silent installation parameters you can edit the default response file provided with Connections, or create a new file.
In addition to silently installing Connections, you can use the silent installation process to modify, update, or uninstall Connections.
Install Connections using response file
Use a silent installation to perform an identical installation of Connections on multiple systems.
cd <Connections_installer>/IM/<your_platform>
./install --launcher.ini silent-install.ini -log /path/to/mylog.xml -acceptLicense
...where /path/to/mylog.xml is the path and name of the log file.
To prevent errors caused by using the wrong version of Installation Manager, remove the following line from the default response file:
<offering id='com.ibm.cic.agent' version='1.5.3000.20120531_1954' profile='Installation Manager' features='agent_core,agent_jre' installFixes='none'/>
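For example, you can remove that line from a copy of the default response file with a one-line edit. This sketch assumes a Linux shell with GNU sed and the default file name LC.rsp:
# Delete the Installation Manager offering line from the response file in place
sed -i "/offering id='com.ibm.cic.agent'/d" LC.rsp
# Confirm that the line is gone (expect 0)
grep -c "com.ibm.cic.agent" LC.rsp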
To create a customized version of the default response file, run the installation wizard in interactive mode.
./install --launcher.ini silent-install.ini -log root/mylogs/mylogfile.xml -acceptLicense
Using a response file for your intended deployment, you can install Connections on multiple systems without needing to interact with the installation wizard.
To perform a silent installation with response file...
cd IM_ROOT/eclipse/tools
./imcl -input /path/to/file.rsp -log /path/to/install_log.xml -acceptLicense
The default name of the response file is...
connections_root/LC.rsp
Compare the following examples to your environment:
./imcl -input <CE_HOME>/silentResponseFile/LC.rsp -log /mylog/silent_install_log.xml -acceptLicense
Results
Installation Manager writes the result of the installation command to the log file specified with the -log parameter.
If the installation is successful, the log files are empty. For example:
<?xml version="1.0" encoding="UTF-8"?>
<result>
</result>
The log file contains an error element if the operation was not completed successfully. A successful installation adds a value of 0 to the log file; an unsuccessful installation adds a positive integer. The log file for Installation Manager records the values that you entered when you ran Installation Manager in interactive mode. To review the log file for Installation Manager, open date_time.xml, where date_time represents the date and time of the installation. The file is located by default in the following directory:
- root user: /var/ibm/InstallationManager/logs
- non-root user: user_home/var/ibm/InstallationManager/logs where user_home is the non-root user account directory
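A quick way to check the result log from a script, assuming a Linux shell and the log path used in the earlier example:
# An empty result element indicates success; an error element indicates failure
if grep -q "<error" /mylog/silent_install_log.xml; then
  echo "Silent installation reported errors - inspect the log"
else
  echo "No errors recorded in the silent installation log"
fi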
To check the complete details of the installation, open each of the log files in connections_root/logs. Each Connections application that you installed has a log file, using the following naming format:
applicationInstallog.txt
...where application is the name of a Connections application.
Complete any applicable post-installation tasks.
Install Connections and Installation Manager using response file
Use a silent installation to perform an identical installation of Connections and Installation Manager on multiple systems.
This task assumes that Installation Manager is not installed on your system.
Ensure that you complete all the prerequisite tasks that are relevant for your environment.
However, do not complete the prerequisite tasks that relate to Installation Manager.
Edit the default response file to suit your environment.
For more information, refer to Use the default response file.
Use a response file for your intended deployment, install Connections on multiple systems without needing to interact with the installation wizard.
To perform a silent installation...
- Open a command prompt and go to the location of the silent installation file. The file is stored in the IBM_Connections_set-up/IBM_Connections_install/IM/OS directory, where IBM_Connections_set-up is the directory or media where the Connections installation files are located and OS represents your operating system
To change the paths to the response file and log file, edit the lc_install.ini file. The file is located in the IBM_Connections_set-up/IBM_Connections_install/IM/OS directory.
- Run the silent installation script:
- root user: ./installc -input response_file -log log_file -acceptLicense
- non-root user: ./userinstc -input response_file -log log_file -acceptLicense
where response_file is the full path and name of the response file and log_file is the full path and name of the log file. The default name of the file is LC.rsp.
By default, the response file is stored in the IBM_Connections_set-up/IBM_Connections_install/IBMConnections directory on the installation media.
Results
Installation Manager writes the result of the installation command to the log file specified with the -log parameter.
If the installation is successful, the log files are empty. For example:
<?xml version="1.0" encoding="UTF-8"?>
<result>
</result>
The log file contains an error element if the operation was not completed successfully. A successful installation adds a value of 0 to the log file; an unsuccessful installation adds a positive integer. The log file for Installation Manager records the values that you entered when you ran Installation Manager in interactive mode. To review the log file for Installation Manager, open date_time.xml, where date_time represents the date and time of the installation. The file is located by default in the following directory:
- root user: /var/ibm/InstallationManager/logs
- non-root user: user_home/var/ibm/InstallationManager/logs where user_home is the non-root user account directory
To check the complete details of the installation, open each of the log files in connections_root/logs. Each Connections application that you installed has a log file, using the following naming format:
applicationInstallog.txt
...where application is the name of a Connections application.
Complete any applicable post-installation tasks.
Refer to Post-Installation tasks for more information.
The default response file
Response files provide input parameters for silent installations of Connections.
The default response file for AIX, Linux, and IBM i is called LC.rsp and is located in the Connections set-up directory or installation media. You can edit this file and use it as input for a silent installation. IBM does not provide a default response file on Windows because it is more convenient to generate the file yourself.
For information about generating a response file, see the Creating a response file topic.
Instead of generating a new response file, you can edit the default response file that is provided with the product. However, if you edit the default response file, you need to add encrypted passwords to the file.
Use the default response file
Use the default response file to specify silent installation parameters for your environment.
Encrypt your administrator passwords.
Connections on IBM i uses the same installation code framework as other platforms, and some database settings for other platforms do not apply to IBM i; you can ignore them and keep the default values, such as the user.application_Name.dbName and user.application_Name.dbPort data keys. For Metrics, because the Metrics database is DB2 on AIX, you must provide the user.metrics.dbName and user.metrics.dbPort data keys so that the installer configures them correctly. You must also set the value of the user.metrics.dbType data key to db2.
Silent installation uses the parameters in a response file to install the same Connections profile on multiple computers.
If you are silently installing Connections as a non-root user in an AIX or Linux environment, you must specify that parameter in the silent-install.ini file.
- Navigate to the connections_root directory and open the LC.rsp response file.
- Specify your installation parameters.
- Add the encrypted passwords to the relevant elements of the response file.
The following example shows the elements for the Activities passwords:
<data key='user.activities.adminuser.password' value='encrypted_password'/>
<data key='user.activities.dbUserPassword' value='encrypted_password'/>
...where encrypted_password is the password after you encrypted it.
- Change the default WAS administrator name from wasadmin if your administrator name is different.
- Save your changes.
- If you are performing the silent installation as a non-root user on AIX or Linux systems...
- Open the silent-install.ini file for editing from the following location:
- AIX: IBM_Connections_set-up/IBM_Connections_install/IM/aix/silent-install.ini
- Linux: IBM_Connections_set-up/IBM_Connections_install/IM/linux/silent-install.ini
- Linux on System z: IBM_Connections_set-up/IBM_Connections_install_s390/IM/zlinux/silent-install.ini
where IBM_Connections_set-up is the Connections set-up directory or installation media.
- In the second line of the file, change admin to nonadmin.
- Save and close the file.
Create a response file
Use a response file to install, modify, update, or uninstall Connections without user interaction.
You can create a response file by using Installation Manager or by editing the file that is provided with the product.
Ensure that Installation Manager is installed.
To ensure that the response file captures the details of your SSL certificates, start IBM WAS.
The default location of a response file that you generate is the connections_root/silentResponseFile directory.
Instead of creating your own response file, you can edit the file that is provided with the product. The file is in the IBM_Connections_set-up/IBM_Connections_install/IBMConnections directory. However, this default file is applicable only for installation. The response files for modifying, updating, rolling back, and uninstalling the product are based on the response file for installation. Before you create a response file for any of those procedures, you must first run the silent installation procedure.
For more information about creating response files with Installation Manager, go to the Recording a response file with Installation Manager webpage.
This task describes the procedure to generate a response file for the following procedures:
- Install Connections
- Modify an existing installation by adding or removing Connections applications
- Updating an existing installation by installing a fix pack
- Rolling back an update
- Uninstalling Connections
For each procedure, run a simulated instance of the Installation Manager and record your input to a response file. Later, you can run a silent command that uses this response file as an input parameter.
Default response files on AIX or Linux:
- Install
- LC.rsp
- Modify - Add
- LC_modify_add.rsp
- Modify - Remove
- LC_modify_remove.rsp
- Update
- LC_update.rsp
- Roll back
- LC_rollback.rsp
- Uninstall
- LC_uninstall.rsp
To create a response file...
- Open a command prompt and go to the IM_root/eclipse directory.
- Ensure that the IBM_Connections_set-up/IBM_Connections_install/IM/OS/skip directory allows write access, where IBM_Connections_set-up is the directory or media where the Connections installation files are located, and OS represents your operating system.
- Run the command to record a response file. This command uses the -skipInstall agentDataLocation argument, which records the installation commands without installing Connections. Substitute your own file name and path for the response file. Verify that the file paths that you enter exist because Installation Manager does not create directories for the response file.
- ./IBMIM -record /response_files/install_product.xml -skipInstall agentDataLocation
Windows: IBMIM.exe -record responseFile.rsp -skipInstall agentDataLocation
where agentDataLocation is the file path to the skip directory, which stores Installation Manager data files.
- The -log option is not available when recording a response file.
- Use quotation marks around file paths that contain spaces.
- You can use the same agentDataLocation parameter in the next recording session to update, modify, roll back or uninstall Connections. However, if you want to record a new installation, you must specify a new agentDataLocation parameter.
- Enter the required information in the Installation Manager.
- To install a new deployment, open File > Preferences, and enter the path to the Connections repository; for example: C:\build\IBM_Connections_Install\IBMConnections. Click Install and enter the required information as if you were installing the product.
- To modify an existing installation, click Modify and enter the required information.
- To add applications, select the applications to add in the Application Selection pane.
Ensure that all the currently installed applications are also selected.
- To remove applications, clear the check boxes of the applications to remove in the Application Selection pane.
- To update an existing installation, click Update and enter the required information.
- To roll back an update, click Rollback and enter the required information.
- To uninstall Connections, click Uninstall and enter the required information.
In the Connections Content Manager panel, validation can be skipped if the environment does not have the actual FileNet installation binaries. After generating the response file, the administrator must edit it manually to supply the correct paths:
- <data key='user.ccm.ce.installer.path' value='xxx'/>
- <data key='user.ccm.fncs.installer.path' value='xxx'/>
- <data key='user.ccm.ceclient.installer.path' value='xxx'/>
- Close the Installation Manager window.
- Confirm that the new response file is present.
Use the response file to silently install, modify, update, roll back, or uninstall Connections.
If you are running Installation Manager as a non-administrator and plan to use the response file to install the product on another user's system, you must change the file paths in your response file from absolute paths to relative paths.
Create encrypted passwords for a response file
Add encrypted passwords to your edited version of the default response file.
You can create a response file using Installation Manager or by editing the file that is provided with the product.
When you edit the default response file to suit your own environment, create encrypted passwords and add them to the file. Create encrypted passwords for both WebSphere Application Server and your databases.
To create encrypted passwords for a response file...
- Go to the IBM_Connections_set-up/IBM_Connections_install/IM/OS/tools directory, where OS is your operating system.
- Run the following command:
./imutilsc encryptString Password -silent -noSplash
- Add the encrypted password to the relevant line in the response file. You usually need to enter passwords for both the WAS administrator and the database user. For example:
<data key='user.activities.adminuser.password' value='encrypted_password'/>
<data key='user.activities.dbUserPassword' value='encrypted_password'/>
...where encrypted_password is the value generated by the command.
You might also need to change the default WAS administrator name from wasadmin if your administrator name is different.
- Repeat these steps for each unique password.
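The encryption and substitution steps can also be scripted. The sketch below assumes a Linux shell, GNU sed, a response file at /path/to/LC.rsp, and that imutilsc prints only the encrypted string; the ENCRYPTED_WAS_PASSWORD placeholder is a hypothetical marker that you would add to your copy of the response file beforehand.
cd IBM_Connections_set-up/IBM_Connections_install/IM/OS/tools
# Encrypt the clear-text password and capture the output (verify the output format first)
ENCRYPTED=$(./imutilsc encryptString MyPassw0rd -silent -noSplash)
# Replace the placeholder marker in the response file with the encrypted value
sed -i "s|ENCRYPTED_WAS_PASSWORD|$ENCRYPTED|g" /path/to/LC.rsp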
Use the response file to silently install, modify, update, roll back, or uninstall Connections.
Directory paths for Installation Manager
Installation Manager uses default directory paths for its installation files.
Purpose
This topic describes the default directory paths for Installation Manager.
Installation Manager
Each instance of Installation Manager must have its own installation directory and agent data directory.
The directories where Installation Manager is installed are determined by the type of user account that you used to install the product.
Any changes that you make to an installation of Installation Manager that you installed with a root or administrator account do not affect an installation of Installation Manager that you installed with a non-root or non-administrator account. The reverse is also true.
The following table indicates the location of the relevant directories.
Table 36. Default installation directories for Installation Manager
- Default installation directory
- Root/Administrator: /opt/IBM/InstallationManager/eclipse
- Non-root/non-administrator: /<user>/IBM/InstallationManager/eclipse
- Eclipse log file
- Root/Administrator: /var/ibm/InstallationManager/pluginState/.metadata
- Non-root/non-administrator: /<user>/var/ibm/InstallationManager/pluginState/.metadata
- Default agent data location (for more information about agent data, go to the Agent data location page in the Installation Manager information center)
- Root/Administrator: /var/ibm/InstallationManager
- Non-root/non-administrator: /<user>/var/ibm/InstallationManager
To find the location of Installation Manager...
- AIX or Linux:
- Open the /etc/.ibm/registry/InstallationManager.dat file.
- Examine the location entry. For example, location=/var/ibm/InstallationManager.
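For example, assuming a Linux shell:
# Print the Installation Manager location recorded in the registry file
grep "^location=" /etc/.ibm/registry/InstallationManager.dat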
Modify the installation in interactive mode
Modify the deployment of Connections by adding or removing applications.
Use the Modify function of the Installation Manager to add or remove Connections applications.
To modify your installation...
- Run the following commands:
cd IM_ROOT
./launcher
- From the Installation Manager menu, click File > Preferences.
- Click Repositories.
- In the Repositories area, select the repositories to modify.
- Click OK to save your selections.
- Click Modify.
- Select Connections and click Next.
- In the Application Selection page, choose the applications you want to add or remove and then click Next.
- Add applications: Select the check boxes of any applications that are not already installed and that you want to add to the deployment.
- Remove applications: Clear the check boxes of any installed applications that you want to remove from the deployment.
All installed applications are selected by default.
The Home page, News, and Search applications are required and cannot be removed.
- Enter the administrative ID and password of the Deployment Manager.
Set to the connectionsAdmin J2C authentication alias, which is mapped to the following J2EE roles: dsx-admin, widget-admin, and search-admin. Also used by the service integration bus. To use security management software such as Tivoli Access Manager or SiteMinder, the ID specified here must exist in the LDAP directory.
- Configure your topology:
The panel described in this step appears only if you selected new applications to install.
The applications for Connections Content Manager will not be shown if you have chosen to use an existing FileNet deployment.
If you select an existing cluster on which to deploy applications, the nodes in that cluster are fixed and cannot be modified.
- Small deployment:
- Select the Small deployment topology.
- Enter a Cluster name for the topology.
- Select a Node.
- Click Next.
- Medium deployment:
- Select the Medium deployment topology.
- Select the default value or enter a Cluster name for each application or for groups of applications. For example, use Cluster1 for Activities, Communities, and Forums.
Installation Manager creates servers and clusters when required.
- Select a Node for each cluster. Accept the predefined node or select a different node.
These nodes host application server instances that serve Connections applications. You can assign multiple nodes to a cluster, where each node is a server member of that cluster.
- Enter a Server member name for the selected node. Choose the default or enter a custom name.
If you enter a custom server member name, the name must be unique across all nodes in the deployment.
- Click Next.
- Large deployment:
- Select the Large deployment topology.
- Enter a Cluster name for each application.
Installation Manager creates servers and clusters when required.
- Select a Node for each cluster. Accept the predefined node or select a different node.
These nodes host application server instances that serve Connections applications. You can assign multiple nodes to a cluster, where each node is a server member of that cluster.
- Enter a Server member name for the selected node. Choose the default or enter a custom name.
If you enter a custom server member name, the name must be unique across all nodes in the deployment.
- Click Next.
- Enter the database information.
The panel described in this step appears only if you selected new applications to install and if the new applications require database configuration.
The Connections Content Manager databases will not be shown if you have chosen to use an existing FileNet deployment.
Database information for Global Configuration Data and Object Store must be set correctly or installation will fail.
- Specify whether the installed applications use the same database server or instance: Select Yes or No.
If allowed by your database configuration, you can select multiple database instances as well as different database servers.
- Select a Database type from one of the following options:
- IBM DB2 Universal Database
- Oracle Enterprise Edition
- Microsoft SQL Server Enterprise Edition
- Enter the Database server host name. For example:
appserver.enterprise.example.com
If your installed applications use different database servers, enter the database host name for each application.
- Enter the Port number of the database server. The default values are: 50000 for DB2, 1521 for Oracle, and 1433 for SQL Server.
If your installed applications use different database servers or instances, enter the port number for each database server or instance.
- Enter the JDBC driver location. For example:
- AIX:
/usr/IBM/WebSphere/AppServer/lib
- Linux:
/opt/IBM/WebSphere/AppServer/lib
- Windows:
C:\IBM\WebSphere\Appserver\lib
- Ensure that the following JDBC driver libraries are present in the JDBC directory:
- DB2
- db2jcc4.jar and db2jcc_license_cu.jar
Ensure that your user account has the necessary permissions to access the DB2 JDBC files.
- Oracle
- ojdbc6.jar
- SQL Server
- Download the SQL Server JDBC 4 driver from the Microsoft website to a local directory and enter that directory name in the JDBC driver library field. The directory must not contain the sqljdbc.jar file, only the sqljdbc4.jar file. Even though the data source is configured to use the sqljdbc4.jar file, an exception occurs if both files are present in the same directory.
- Enter the User ID and Password for each database. If each database uses the same user credentials, select the Use the same password for all applications check box and then enter the user ID and password for the first database in the list.
If your database type is Oracle, you must connect to the database with the user ID that you used when you created the application database.
- Click Validate to verify your database settings. If the validation fails, check your database settings. When the validation succeeds, click Next.
Installation Manager tests your database connection with the values that you supplied. You can usually continue even if the validation fails, because you can change the database settings later in the WAS admin console. However, you cannot continue with incorrect information for the Connections Content Manager databases: database operations run during installation, and incorrect database information causes the installation to fail.
- In the summary panel, confirm your selection and click Modify.
- When the modification process is complete, restart the Deployment Manager and all the nodes.
Wait until the dmgr copies all the application EAR files to the installedApps directory on each of the nodes. This process can take up to 30 minutes. To verify that the dmgr has distributed the application EAR files to the nodes, check...
profile_root/logs/nodeagent/SystemOut.log
Look for a message such as the following example:
ADMA7021I: Distribution of application application_name completed successfully
...where application_name is the name of a Connections application.
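If you prefer to check from the command line, the following minimal Python sketch scans a node agent's SystemOut.log for the distribution message quoted above. The profile path is only an example and must be replaced with your node's profile_root.
# Sketch: scan a node agent SystemOut.log for the application distribution message.
# The profile path below is an example; replace it with your node's profile_root.
log_path = "/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/logs/nodeagent/SystemOut.log"
with open(log_path, errors="replace") as log:
    for line in log:
        if "ADMA7021I" in line:
            print(line.rstrip())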
To confirm that the installation was successful, review...
connections_root/logs
Each Connections application that you installed has a log file, using the following naming format:
application_nameInstall.log
...where application_name is the name of a Connections application. Search for the words error or exception to check whether any errors or exceptions occurred during installation.
Results
Installation Manager writes the result of the installation command to the log file specified with the -log parameter.
If the installation is successful, the log file contains an empty result element. For example:
<?xml version="1.0" encoding="UTF-8"?> <result> </result>
The log file contains an error element if the operation was not completed successfully. A successful installation adds a value of 0 to the log file. An unsuccessful installation adds a positive integer to the log file.
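If you want to inspect the result log programmatically, the following minimal Python sketch parses the XML and reports whether an error element is present. The log path is an example; pass the file that you specified with the -log parameter.
# Sketch: inspect an Installation Manager result log for error elements.
# The path is an example; use the file you specified with the -log parameter.
import xml.etree.ElementTree as ET

LOG_FILE = "/mylog/silent_install_log.xml"
root = ET.parse(LOG_FILE).getroot()
errors = root.findall(".//error")
if errors:
    print("Errors recorded in the result log:")
    for err in errors:
        print(" ", (err.text or "").strip())
else:
    print("No error elements found; the operation completed successfully.")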
The log file for Installation Manager records the values that you entered when you ran Installation Manager in interactive mode. To review the log file for Installation Manager, open date_time.xml, where date_time represents the date and time of the installation. The file by default is in the following directory:
- root user: /var/ibm/InstallationManager/logs
- non-root user: user_home/var/ibm/InstallationManager/logs where user_home is the non-root user account directory
To check the complete details of the installation, open each of the log files in connections_root/logs. Each Connections application that you installed has a log file, using the following naming format:
applicationInstallog.txt
...where application is the name of a Connections application.
Modify the installation using response file
Modify the deployment of Connections by adding or removing applications in silent mode.
With the help of a response file, use the Modify function of the Installation Manager to add or remove Connections applications.
To modify your installation in silent mode...
Add applications in silent mode
Add applications to the deployment of Connections without using the installation wizard.
Create a response file for this task by running a simulated modification. Response files are provided for silent installations on AIX, Linux, and IBM i. For silent installations on Microsoft Windows, refer to the topics Installing in console mode and Modifying the installation in console mode.
Instead of generating a new response file, you can edit the default response file that is provided with the product. However, if you edit the default response file, you need to add encrypted passwords to the file.
A silent modification uses a response file to automate the addition of applications to the deployment.
To perform a silent modification...
- Open a command prompt and navigate to the IM_ROOT/eclipse/tools directory.
- Enter the following command:
./imcl -input response_file -log log_file -acceptLicense
The IM_ROOT/eclipse directory contains a similar file called IBMIM.exe but that file is not suitable for silent installation.
where response_file is the full path and name of the response file and log_file is the full path and name of the log file. The default name of the response file is LC.rsp. By default, the response file is in the following directory:
connections_root.
Compare the following example to your environment:
./imcl -input <CE_HOME>/silentResponseFile/LC.rsp -log /mylog/silent_install_log.xml -acceptLicense
Results
Installation Manager writes the result of the installation command to the log file specified with the -log parameter.
If the installation is successful, the log file contains an empty result element. For example:
<?xml version="1.0" encoding="UTF-8"?> <result> </result>
The log file contains an error element if the operation was not completed successfully. A successful installation adds a value of 0 to the log file. An unsuccessful installation adds a positive integer to the log file.
The log file for Installation Manager records the values that you entered when you ran Installation Manager in interactive mode. To review the log file for Installation Manager, open date_time.xml, where date_time represents the date and time of the installation. The file by default is in the following directory:
- root user: /var/ibm/InstallationManager/logs
- non-root user: user_home/var/ibm/InstallationManager/logs where user_home is the non-root user account directory
To check the complete details of the installation, open each of the log files in connections_root/logs. Each Connections application that you installed has a log file, using the following naming format:
applicationInstallog.txt
...where application is the name of a Connections application.
Remove applications using response file
Silently remove applications from the deployment of Connections.
Create a response file for this task by running a simulated modification. Response files are provided for silent installations on AIX, Linux, and IBM i. For silent installations on Microsoft Windows, refer to the topics Installing in console mode and Modifying the installation in console mode.
Instead of generating a new response file, you can edit the default response file that is provided with the product. However, if you edit the default response file, you need to add encrypted passwords to the file.
A silent modification uses a response file to automate the removal of applications from the deployment.
To perform a silent modification...
- Open a command prompt and navigate to the IM_ROOT/eclipse/tools directory.
- Enter the following command:
./imcl -input response_file -log log_file -acceptLicense
The IM_ROOT/eclipse directory contains a similar file called IBMIM.exe but that file is not suitable for silent installation.
where response_file is the full path and name of the response file and log_file is the full path and name of the log file. The default name of the response file is LC.rsp. By default, the response file is in the following directory:
connections_root
Compare the following examples to your environment:
./imcl -input <CE_HOME>/silentResponseFile/LC_modify_remove_linux.rsp -log /mylog/silent_install_log.xml -acceptLicense
Results
Installation Manager writes the result of the installation command to the log file specified with the -log parameter.
If the installation is successful, the log file contains an empty result element. For example:
<?xml version="1.0" encoding="UTF-8"?> <result> </result>
The log file contains an error element if the operation was not completed successfully. A successful installation adds a value of 0 to the log file. An unsuccessful installation adds a positive integer to the log file.
The log file for Installation Manager records the values that you entered when you ran Installation Manager in interactive mode. To review the log file for Installation Manager, open date_time.xml, where date_time represents the date and time of the installation. The file by default is in the following directory:
- root user: /var/ibm/InstallationManager/logs
- non-root user: user_home/var/ibm/InstallationManager/logs where user_home is the non-root user account directory
To check the complete details of the installation, open each of the log files in connections_root/logs. Each Connections application that you installed has a log file, using the following naming format:
applicationInstallog.txt
...where application is the name of a Connections application.
Modify the installation in console mode
Use console mode to modify the deployment of Connections by adding or removing applications.
Use Installation Manager in console mode to add or remove Connections applications. This method is convenient if you cannot or do not want to use the interactive mode.
To modify your installation...
- On each node, stop any running instances of WAS and WebSphere node agents.
Do not stop Cognos Business Intelligence node agents or servers.
- Start WAS Network Deployment Manager.
- Start Installation Manager in console mode:
cd IM_ROOT/eclipse/tools
./imcl -c
- Type 3 to begin modifying the deployment.
- In the Select packages to modify step, select Connections and then type N to proceed.
- Select the applications to add or remove and then type N.
- Add applications: Type the numbers corresponding to applications that are not already installed and that you want to add to the deployment.
- Remove applications: Type the numbers corresponding to installed applications that you want to remove from the deployment. The Home page, News, and Search applications are required and cannot be removed.
Connections Content Manager is not supported on IBM i. Although it appears as an option in the console interface, you cannot select it.
All installed applications are selected by default.
- Enter the administrative ID and password of the Deployment Manager.
Set to the connectionsAdmin J2C authentication alias, which is mapped to the following J2EE roles: dsx-admin, widget-admin, and search-admin. This ID is also used by the service integration bus. To use security management software such as Tivoli Access Manager or SiteMinder, the ID specified here must exist in the LDAP directory.
- Configure your topology.
For more information about each option, refer to Deployment options.
If you chose to use an existing FileNet deployment for Connections Content Manager, the Connections Content Manager application does not appear in this topology panel.
- Small deployment:
- Type 1 to select the Small deployment topology.
- Enter a Cluster name for the topology.
- Select a Node.
- Enter a Server member name for the node.
- Type N to proceed.
- Medium deployment:
- Type 2 to select the Medium deployment topology.
- Select the default value or enter a Cluster name for each application or for groups of applications. For example, use Cluster1 for Activities, Communities, and Forums.
Installation Manager creates servers and clusters when required.
- Select a Node for each cluster. Accept the predefined node or select a different node.
These nodes host application server instances that serve Connections applications. You can assign multiple nodes to a cluster, where each node is a server member of that cluster.
- Enter a Server member name for the selected node. Choose the default or enter a custom name.
If you enter a custom server member name, the name must be unique across all nodes in the deployment.
- The topology specified is displayed. To re-specify any details, type the number that corresponds to the application; for example, type 1 for Activities.
- Type N to proceed.
- Large deployment:
- Type 3 to select the Large deployment topology.
- Enter a Cluster name for each application.
Installation Manager creates servers and clusters when required.
- Select a Node for each cluster. Accept the predefined node or select a different node.
These nodes host application server instances that serve Connections applications. You can assign multiple nodes to a cluster, where each node is a server member of that cluster.
- Enter a Server member name for the selected node. Choose the default or enter a custom name.
If you enter a custom server member name, the name must be unique across all nodes in the deployment.
- The topology specified is displayed. To re-specify any details, type the number that corresponds to the application; for example, type 1 for Activities.
- Type N to proceed.
The Connections Content Manager databases are not shown if you have chosen to use an existing FileNet deployment. Connections Content Manager is not supported on IBM i.
Database information for Global Configuration Data and Object Store must be set correctly, otherwise installation will fail. Enter the database information:
- Specify whether the installed applications use the same database server or instance: Type 1 to specify that the applications use same database server or instance; type 2 to specify that they use different database servers or instances.
If allowed by your database configuration, you can select multiple database instances as well as different database servers.
Connections on IBM i uses the same installation framework as other platforms, so some of the database terminology for other platforms does not apply to IBM i; you can ignore those fields and keep the default values. You can consider a database instance to be a DB2 for IBM i server. If you choose to install Metrics, make sure that you select 2, because the Metrics database must be DB2 on AIX.
- Select a Database type from one of the following options (if installing on Windows, Linux, or AIX):
- IBM DB2 Universal Database
- Oracle Enterprise Edition
- Microsoft SQL Server Enterprise Edition
- Enter the Database server host name. For example:
appserver.enterprise.example.com
If your installed applications use different database servers, enter the database host name for each application.
- Enter the Port number of the database server. The default values are: 50000 for DB2, 1521 for Oracle, and 1433 for SQL Server.
If your installed applications use different database servers or instances, enter the port number for each database server or instance.
Database name and Port number do not apply to DB2 for IBM i, so you can ignore them and keep the default values. For Metrics, because the Metrics database is DB2 on AIX, you must provide the Database name and port number so that the installer can configure them correctly.
- Enter the JDBC driver location. For example:
- AIX:
/usr/IBM/WebSphere/AppServer/lib
- IBM i:
/QIBM/ProdData/HTTP/Public/jt400/lib
Metrics on IBM i uses DB2 on AIX. If you choose to install Metrics, you are prompted to enter the Metrics JDBC driver location. You can use the DB2 JDBC driver provided by WAS from the following location:
/QIBM/ProdData/WebSphere/AppServer/V8/ND/deploytool/itp/plugins/com.ibm.datatools.db2_2.1.102.v20120412_2209/driver
- Linux:
/opt/IBM/WebSphere/AppServer/lib
- Windows:
C:\IBM\WebSphere\Appserver\lib
- Ensure that the following JDBC driver libraries are present in the JDBC directory:
- DB2
- db2jcc4.jar and db2jcc_license_cu.jar
Ensure that your user account has the necessary permissions to access the DB2 JDBC files.
- Oracle
- ojdbc6.jar
- SQL Server
- Download the SQL Server JDBC 4 driver from the Microsoft website to a local directory and enter that directory name in the JDBC driver library field. The directory must not contain the sqljdbc.jar file, only the sqljdbc4.jar file. Even though the data source is configured to use the sqljdbc4.jar file, an exception occurs if both files are present in the same directory.
- Enter the User ID and Password for each database. If each database uses the same user credentials, confirm the Use the same password for all applications question and then enter the user ID and password for the first database in the list.
If your database type is Oracle, you must connect to the database with the user ID that you used when you created the application database.
- If you need to make changes, type the number that corresponds to the application to change. Alternatively, type R to reset all the database specifications to their default values.
- Press Enter to verify your database settings. If the validation fails, check your database settings. When the validation succeeds, type N to proceed.
Installation Manager tests your database connection with the values that you supplied. You can usually continue even if the validation fails, because you can change the database settings later in the WAS admin console. However, you cannot continue with incorrect information for the Connections Content Manager databases: database operations run during installation, and incorrect database information causes the installation to fail.
- Review the information that you have entered.
To revise your selections, press B. To finish modifying, press M.
- Review the result of the installation. Press F to exit the installation wizard.
- Restart the Deployment Manager:
cd WAS_HOME/profiles/Dmgr01/bin
./stopManager.sh
./startManager.sh
- For each node, start the node agent:
cd profile_root/bin
./startNode.sh
- Log in to the dmgr console to perform a full synchronization of all nodes.
System administration | Nodes | nodes | Full Resynchronize
Wait until the dmgr copies all the application EAR files to the installedApps directory on each of the nodes. This process can take up to 30 minutes. To verify that the dmgr has distributed the application EAR files to the nodes, check...
profile_root/logs/nodeagent/SystemOut.log
Look for a message such as the following example:
ADMA7021I: Distribution of application application_name completed successfully
...where application_name is the name of a Connections application.
- Start all your Connections clusters:
Servers | Clusters | WebSphere Application server clusters | Connections clusters | Start
If you installed a cluster with multiple Search nodes, create the initial index.
- If you are installing a non-English language deployment, enable Search dictionaries.
- The index is ready when the INDEX.READY and CRAWLING_VERSION files are present in the index directory (a scripted check is sketched after the cluster list below).
If some applications do not start, the file-copying process might not have completed. Wait a few minutes and start the applications.
If the Connections applications are installed on different clusters, start the clusters in the following order:
- News cluster
- Profiles cluster
- Search cluster
- Dogear cluster
- Communities cluster
- Activities cluster
- Blogs cluster
- Files cluster
- Forums cluster
- Wikis cluster
- Home page cluster
- Metrics cluster
- Mobile cluster
- Moderation cluster
- Connections Content Manager cluster
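As referenced in the Search note above, the following minimal Python sketch reports whether the initial Search index is ready by checking for the INDEX.READY and CRAWLING_VERSION marker files. The index directory shown is only an illustrative path; use the index location configured for your deployment.
# Sketch: report whether the Search index marker files exist.
# The directory below is an example; use your deployment's Search index location.
import os

index_dir = "/opt/IBM/Connections/data/local/search/index"
markers = ("INDEX.READY", "CRAWLING_VERSION")
ready = all(os.path.exists(os.path.join(index_dir, name)) for name in markers)
print("Search index ready" if ready else "Search index not ready yet")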
Post-installation tasks
After installation, you need to perform further tasks to ensure an efficient deployment. After running the wizards to install applications and create databases, check which of the following additional tasks you need to complete.
Tasks to be completed
Some post-installation tasks are mandatory while others are optional and depend on the deployment choices.
After you complete the mandatory post-installation tasks, update the deployment with the latest fixes.
For more information, see the Update Connections 4.5 topic.
Each post-installation task is described in separate topics:
Mandatory post-installation tasks
Complete the following post-installation tasks.
After you complete the mandatory post-installation tasks, update the deployment with the latest fixes.
For more information, see the Update Connections 4.5 topic.
Review the JVM heap size
Review the size of the Java Virtual Machine heap and adjust it, if necessary, to avoid out-of-memory errors or to suit your hardware capabilities.
If you selected the Small or Medium deployment option when you installed Connections, Installation Manager set the Maximum Heap Size of the Java Virtual Machine (JVM) on each application server. This setting is designed to avoid out-of-memory errors.
Review the heap size on each server to ensure that you are allocating enough memory for Connections but also to ensure that you are not allocating more memory than the physical capabilities of the systems where the JVMs are deployed.
Whether you installed a Small, Medium, or Large deployment of Connections, you should review the JVM heap sizes in the deployment and make adjustments, if necessary.
To review the JVM heap size...
- Log into the WAS admin console and select Servers > Server Type > WebSphere application servers.
- Click server, where server is the name of a Connections server. You might have several servers in the deployment, so you might need to repeat these steps for each server.
- In the Server Infrastructure area, click Java and Process Management and then click Process Definition > Java Virtual Machine.
- Review the Maximum heap size. Installation Manager sets the following default values:
- Small deployment: Maximum Heap Size of 2506 MB.
- Medium deployment: Maximum Heap Size of 2506 MB.
- Large deployment: Each application, except News and Search, has a default Heap size of 256 MB. The News and Search applications have a default Heap size of 784 MB.
Ensure that you are not allocating more memory than the physical capacity of the system where the JVM is installed.
- Adjust the current values of the heap size up or down to suit the needs of the deployment and your hardware capabilities.
- Click OK and then click Save.
- Repeat these steps for any additional servers in the deployment.
For more information about tuning the JVM, see the Connections 4.0 Performance Tuning Guide in the Community Articles section of the wiki.
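To review the values across many servers at once, one option is a short wsadmin Jython script run against the Deployment Manager (for example, wsadmin.sh -lang jython -f listHeaps.py). This is only a sketch of that approach, not a procedure from this guide; the AdminConfig object is supplied by wsadmin itself.
# Sketch: list the maximum heap size configured for each server's JVM.
# Run with wsadmin in Jython mode; this lists every server in the cell,
# including node agents and the deployment manager.
servers = AdminConfig.list('Server').splitlines()
for server in servers:
    name = AdminConfig.showAttribute(server, 'name')
    jvms = AdminConfig.list('JavaVirtualMachine', server).splitlines()
    for jvm in jvms:
        heap = AdminConfig.showAttribute(jvm, 'maximumHeapSize')
        print('%s maximumHeapSize = %s MB' % (name, heap))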
Configure IBM HTTP Server
Configure IBM HTTP Server to manage web requests to Connections.
You must re-configure FileNet Collaboration Services with the base URL of the HTTP server when you configure an HTTP server or change the URL used by end users to access Connections.
See Configure FileNet Collaboration Services for the Connections Content Manager for more information on re-configuring FileNet Collaboration Services.
The following two settings need to be specified in the httpd.conf file or else Connections cannot be accessed through the web server:
LoadModule was_ap22_module /opt/IBM/WebSphere/Plugins/bin/64bits/mod_was_ap22_http.so
WebSpherePluginConfig /opt/IBM/WebSphere/Plugins/config/*NAME OF WEB SERVER*/plugin-cfg.xml
When you have successfully installed Connections to run on WAS, you can configure IBM HTTP Server to handle web traffic by completing the following tasks:
Define IBM HTTP Server
Define IBM HTTP Server to manage web connections.
Install web server plug-ins for IBM HTTP Server, if they are not already installed.
For more information, go to the Install web server plug-ins web site.
Connections uses a web server as the entry point for all the applications.
This procedure describes how to create a web server using the Integrated Solutions Console. There are other ways to create the web server.
See the WAS information center for more information.
To define IBM HTTP Server...
- Start the IBM HTTP Administration Server:
- AIX:
cd /usr/IBM/HTTPServer/bin
./adminctl start
- Windows:
Open the Services window in the Windows Control Panel and verify that the IBM HTTP Administration Server service is started. If this service is not running, start it.
- Log in to the WAS admin console on the Deployment Manager and select System administration | Nodes | Add Node | Unmanaged node | Next.
- Specify the properties of the node by providing values in the following fields:
- Name
- Enter the name of the node.
- Host Name
- Enter the fully qualified DNS host name for IBM HTTP Server. For example: webserver.example.com.
- Platform
- Select the operating system type that hosts your IBM HTTP Server.
- Click OK and then click Save.
- Select Servers > Server Types > Web servers and click New. Provide values for the following fields:
- Select node
- Select the node specified in Step 4.
- Server name
- Enter the name of your web server. The default value is webserver1.
- Type
- Select IBM HTTP Server.
- Click Next. Select the default web server template and click Next.
- On the Enter the properties for the new web server page, check the paths and make adjustments if necessary, and then enter the user name and password specified when you installed IBM HTTP Server. Confirm the password and click Next.
- Confirm the creation of the new web server. Click Finish and then click Save.
- Synchronize all the nodes.
- Select Servers > Server Types > Web servers and click the link to your web server. Click Generate Plug-in.
- Select the check box for your web server. Click Propagate Plug-in.
- Select Servers > Server Types > Web servers and click the link to your web server. Click Plug-in properties and then click Copy to Web Server key store directory. If the plugin-key.kdb file is on a different system from the IBM HTTP Server system, copy it manually from the WAS system to the IBM HTTP Server system.
- Restart IBM HTTP Server.
Complete the steps in the Configuring IBM HTTP Server for SSL topic.
Configure IBM HTTP Server to handle file downloads from the Files and Wikis applications.
For information on this configuration, see the Configuring Files and Wikis downloads topic.
Configure IBM HTTP Server for SSL
Configure IBM HTTP Server to use the SSL protocol.
To support SSL, create a self-signed certificate and then configure IBM HTTP Server for SSL traffic. If you use this certificate in production, users might receive warning messages from their browsers. In a typical production deployment, you would use a certificate from a trusted certificate authority.
To configure IBM HTTP Server for SSL...
- Create a key file.
- Start the iKeyman user interface.
For more information, see Starting the Key Management utility in the IBM HTTP Server information center.
- Click Key Database File in the main user interface, then click New. Select CMS for the Key database type. IBM HTTP Server does not support database types other than CMS.
Enter a name for the new key file. For example, hostname-key.kdb. Click OK.
- Enter your password in the Password Prompt dialog box, and confirm the password. Select Stash the password to a file and then click OK. The new key database should display in the iKeyman utility with default signer certificates. Ensure that there is a functional, non-expiring signer certificate for each of your personal certificates.
- Create a self-signed certificate:
- Start the iKeyman user interface.
- Click Key Database File and then click Open.
- Enter your key file name in the Open dialog box and click OK.
- In the Password Prompt dialog box, enter your password and click OK.
- Click Personal Certificates in the Key Database content frame, and then click New Self-Signed.
- Enter the required information about the key file, your web server, and organization in the dialog box.
- Click OK.
Save the new self-signed certificate with a unique file name; do not overwrite the default Plugin-key.kdb file because that file might be accessed by other applications.
- Stop IBM HTTP Server.
- Log in to the WAS admin console for the Deployment Manager and select Servers > Server types > Web servers.
- From the list of web servers, click the web server that you defined for this profile.
- On the Configuration page for this web server, click Edit for the Configuration file name field. This action opens the httpd.conf configuration file on the Deployment Manager.
- Add the following text to the end of the configuration file:
LoadModule ibm_ssl_module modules/mod_ibm_ssl.so
<IfModule mod_ibm_ssl.c>
Listen 0.0.0.0:443
<VirtualHost *:443>
ServerName server_name
#DocumentRoot C:\IBM\HTTPServer\htdocs
SSLEnable
</VirtualHost>
</IfModule>
SSLDisable
Keyfile "path_to_key_file"
SSLStashFile "path_to_stash_file"
where:
- server_name is the host name of the IBM HTTP Server.
- path_to_key_file is the path to the key file that you created with the iKeyman utility.
- path_to_stash_file is the path to the associated stash file.
For example, where key_file is the name that you have given to your key file and stash file:
- AIX:
- Keyfile: /usr/IBM/keyfiles/key_file.kdb
- SSLStashFile: /usr/IBM/keyfiles/key_file.sth
- Linux:
- Keyfile: /opt/IBM/keyfiles/key_file.kdb
- SSLStashFile: /opt/IBM/keyfiles/key_file.sth
- Windows:
- Keyfile: C:\IBM\keyfiles\key_file.kdb
- SSLStashFile: C:\IBM\keyfiles\key_file.sth
- Click Apply and then click OK.
- Restart IBM HTTP Server to apply the changes.
- Test the new configuration: Open a web browser and ensure that you can successfully reach https://server_name. You might be prompted to accept the self-signed certificate on your browser.
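If you prefer to test from the command line rather than a browser, the following minimal Python sketch performs a TLS handshake against the server. The host name is a placeholder, and certificate verification is disabled only because this procedure uses a self-signed certificate.
# Sketch: confirm that the web server accepts TLS connections on port 443.
# Replace the host name with your IBM HTTP Server. Verification is disabled
# here because the certificate in this procedure is self-signed.
import socket
import ssl

host = "webserver.example.com"
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE

with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print("TLS handshake succeeded; negotiated cipher:", tls.cipher())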
Results
Connections users can access applications through the SSL protocol.
If you receive an error message about failing to load a GSK library (libgsk7ssl.so), install the libgsk7ssl.so GSK library.
For more information, go to the following Support page: Failure attempting to load GSK library when using SSL with IBM HTTP Server.
For more information about securing web communications, go to the WAS information center.
For more information about the key store and setting up the IBM HTTP Server, see the Secure communications topic in the WAS information center. The key file can be shared between two web servers, thus providing failover capability.
Map applications to IBM HTTP Server
Map Connections applications to IBM HTTP Server.
Complete this task if you installed and configured IBM HTTP Server before installing Connections.
If you plan to configure a reverse proxy, see the Configuring a reverse caching proxy topic.
If you installed and configured IBM HTTP Server after installing Connections, your Connections applications are automatically mapped to the web server. However, if you installed and configured IBM HTTP Server before installing Connections, you must manually map the applications.
To map your Connections applications to IBM HTTP Server and regenerate the plugin...
- Open the WAS admin console on the system where you installed the Deployment Manager.
- Select Applications > Application Types > WebSphere enterprise applications.
- Map a Connections application to IBM HTTP Server:
This step instructs you to select webserver1. Ensure that you have defined this web server before you attempt to complete these steps.
- Select application > Manage Modules, where application is a Connections application.
- In the Clusters and Servers box, select the cluster and server on which you installed the application. If necessary, use the Ctrl key to select both targets.
- Select the check boxes for all the modules and click Apply.
- Review the Server details and ensure that both servers are listed there. Click OK and then click Save.
- Repeat this step for each Connections application.
- From the WAS admin console, select Servers > Server Types > Web servers and then click the web server (webserver1).
- Click Generate Plug-in.
- Click your web server again and then click Propagate Plug-in.
If you have trouble propagating the plug-in on Linux, restart IBM HTTP Server using the following commands:
./adminctl start
./apachectl -k stop
./apachectl -k start
- Stop and restart the web server.
- Synchronize the nodes.
- Restart all IBM Connections clusters.
- Restart the Deployment Manager.
To verify that the mappings are correct, complete the steps in the Verifying application mappings topic.
Test the mappings: open a web browser and try to access each of the applications by specifying the web address using the following syntax:
http://hostname/application_name
where hostname is the host name of the web server to which you mapped the application and application_name is the name of the application. Do not specify the port number.
Verify application mappings
Verify that Connections applications are mapped to your webserver.
If you installed and configured IBM HTTP Server after installing Connections, your Connections applications are automatically mapped to the web server. However, if you installed and configured IBM HTTP Server before installing Connections, you must manually map the applications. To verify whether the mappings exist...
- From the WAS admin console, select Servers > Server Types > Web servers and then click the web server (webserver1).
- Click Generate Plug-in.
- Click your web server again and then click Propagate Plug-in.
If you have trouble propagating the plug-in on Linux, restart IBM HTTP Server using the following commands:
./adminctl start
./apachectl -k stop
./apachectl -k start
- Wait until a confirmation message is displayed; for example:
PLGC0062I: The plug-in configuration file is propagated from /opt/IBM/WebSphere/AppServer/profiles/Dmgr01/config/cells/servernameCell01/nodes/webserver1/servers/webserver1/plugin-cfg.xml to /opt/IBM/HTTPServer/Plugins/config/webserver1/plugin-cfg.xml.
The message identifies where the plugin-cfg.xml file is on the system that hosts IBM HTTP Server. In this example, the file path is: /opt/IBM/HTTPServer/Plugins/config/webserver1/plugin-cfg.xml.
- Log on to the system that hosts IBM HTTP Server and open the plugin-cfg.xml file.
- Verify that the URIs for the installed Connections applications are present. For example:
<UriGroup Name="default_host_Cluster1_URIs">
<Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/activities/*"/>
<Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/activities/quickrpicker/*"/>
<Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/communities/*"/>
<Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/communities/calendar/*"/>
<Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/communities/recomm/*"/>
<Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/forums/*"/>
<Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/metrics/service/*"/>
<Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/metrics/*"/>
<Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/profiles/*"/>
<Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/profiles/seedlist/*"/>
</UriGroup>
If the Connections URIs are not present, complete the steps in the Mapping applications to IBM HTTP Server topic.
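To automate this check, the following minimal Python sketch parses the propagated plugin-cfg.xml and reports whether a few expected context roots are present. The file path and the list of context roots are only examples; adjust them to your deployment and the applications you installed.
# Sketch: look for expected Connections context roots in plugin-cfg.xml.
# The path and the expected list are examples; adjust for your deployment.
import xml.etree.ElementTree as ET

PLUGIN_CFG = "/opt/IBM/HTTPServer/Plugins/config/webserver1/plugin-cfg.xml"
expected = ["/activities/*", "/communities/*", "/forums/*", "/profiles/*"]

uris = {uri.get("Name") for uri in ET.parse(PLUGIN_CFG).iter("Uri")}
for name in expected:
    print(name, "found" if name in uris else "MISSING")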
Add certificates to the WebSphere trust store
Import a self-signed IBM HTTP Server certificate into the default trust store of IBM WAS.
Before you complete this procedure, ensure that IBM HTTP Server is configured to support SSL.
This topic describes the procedure to configure certificates in a deployment with one web server.
To establish trusted server to server communication for Connections, import signer certificates from IBM HTTP Server into the WAS default trust store.
There are different types of certificates used. This procedure describes how to import a self-signed certificate. You can also import a certificate that you purchased from a third-party Certificate Authority. To help decide a key file strategy for your environment, go to the IBM HTTP Server information center.
To import a public certificate from IBM HTTP Server to the default trust store in IBM WAS...
- Log into the IBM WAS admin console and select Security > SSL Certificate and key management > Key stores and certificates.
- Click CellDefaultTrustStore.
- Click Signer Certificates.
- Click Retrieve from port.
- Enter the Host name, SSL Port, and Alias of the web server.
- Click Retrieve Signer Information and then click OK. The root certificate is added to the list of signer certificates.
- If using Tivoli Access Manager or other proxies, also repeat steps 4-6 for your Tivoli Access Manager or other proxy servers.
Results
If your configuration changes aren't successful, ensure that you have applied the instructions to configure a default personal certificate.
Verify that users can create a private community and add other widgets, such as Activities, Blogs, Dogear, and so on, to it. Ensure that there are no errors when these widgets are added. If problems are reported, consult the Communities SystemOut.log file.
The proxy-config.tpl file allows a proxy to work with self-signed certificates. This is true for an out-of-the-box deployment but for improved security you should set the value of the unsigned_ssl_certificate_support property to false when the deployment is ready for production.
Ensure that you are ready to renew your certificate before it expires. WAS provides a utility for monitoring certificates.
For more information, refer to Configure certificate expiration monitoring in the WAS information center.
Determine which files to compress
If you are not compressing content with the IBM WAS Edge components or a similar device, configure the IBM HTTP Server to compress certain types of content to improve browser performance.
This is an optional configuration. You do not need to perform this procedure if you are compressing content elsewhere in your network. Compression requires a significant amount of CPU; you must monitor resource availability if you choose to use this option.
The directives discussed here do not compress images, but do compress JavaScript.
To specify which types of files to compress...
- Using a text editor, open the httpd.conf file. By default, the file is stored in:
- AIX: /usr/IBM/HTTPServer/conf
- Linux: /opt/IBM/HTTPServer/conf
- Windows: C:\IBM\HTTPServer\conf
- Find the following entry in the configuration file:
LoadModule deflate_module modules/mod_deflate.so
If this entry is not present, add it.
- Add the following statements to compress multiple content types used by Connections:
#Only the specified MIME types will be compressed.
AddOutputFilterByType DEFLATE application/atom+xml
AddOutputFilterByType DEFLATE application/atomcat+xml
AddOutputFilterByType DEFLATE application/javascript
AddOutputFilterByType DEFLATE application/json
AddOutputFilterByType DEFLATE application/octet-stream
AddOutputFilterByType DEFLATE application/x-javascript
AddOutputFilterByType DEFLATE application/xhtml+xml
AddOutputFilterByType DEFLATE application/xml
AddOutputFilterByType DEFLATE text/css
AddOutputFilterByType DEFLATE text/html
AddOutputFilterByType DEFLATE text/javascript
AddOutputFilterByType DEFLATE text/plain
AddOutputFilterByType DEFLATE text/xml
AddOutputFilterByType DEFLATE text/xsl
- Add the following statement to specifically indicate that image files and binaries must not be compressed to prevent web browser hangs:
# Ensures that images and executable binaries are not compressed
SetEnvIfNoCase Request_URI \\.(?:gif|jpe?g|png|exe)$ no-gzip dont-vary
- Add the following statement to ensure that proxy servers do not modify the User Agent header needed by the previous statements:
# Ensure that proxies do not deliver the wrong content
Header append Vary User-Agent env=!dont-vary
If the following line is commented out, remove the commenting from it:
LoadModule headers_module modules/mod_headers.so
- Save and close the configuration file.
- Restart IBM HTTP Server.
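To verify the compression configuration after the restart, one option is a small request script such as the following minimal Python sketch. The URL is a placeholder for a page served through your IBM HTTP Server; for MIME types listed above, the response should report gzip encoding.
# Sketch: request a page with Accept-Encoding: gzip and check the response header.
# The URL is a placeholder; point it at a text or HTML resource on your web server.
import urllib.request

url = "http://webserver.example.com/homepage"
req = urllib.request.Request(url, headers={"Accept-Encoding": "gzip"})
with urllib.request.urlopen(req) as resp:
    print("Status:", resp.status)
    print("Content-Encoding:", resp.headers.get("Content-Encoding", "(none)"))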
Update web addresses in IBM HTTP Server
Update the web addresses that IBM HTTP Server uses to access Connections applications.
If you installed and configured IBM HTTP Server after installing Connections, your Connections applications are automatically mapped to the web server. However, if you installed and configured IBM HTTP Server before installing Connections, you must manually map the applications.
Before continuing with this task, map the application modules to IBM HTTP Server.
For more information, refer to Map applications to IBM HTTP Server.
If you are using the Files, Wikis, or Libraries applications, configure IBM HTTP Server to handle file downloads from those applications as described in Configure file downloads through the HTTP server.
If you do not install a web server such as IBM HTTP Server, users must include the correct port number in the web address that they use to access the application. When you use a web server, users can access the applications without using port numbers.
By default, the web address that you enter to access Connections applications includes the port number for each application. To avoid using port numbers, update the web addresses by editing LotusConnections-config.xml. IBM HTTP Server can then redirect requests to the appropriate port for each application.
For more information about editing configuration files, refer to Edit configuration files.
To update the web addresses to your Connections applications...
- Stop WAS.
- Check out LotusConnections-config.xml. The file is stored by default in the following directory:
- AIX: /usr/IBM/WebSphere/AppServer/profiles/profile_name/config/cells/cell_name/LotusConnections-config
- Linux: /opt/IBM/WebSphere/AppServer/profiles/profile_name/config/cells/cell_name/LotusConnections-config
- Windows: C:\IBM\WebSphere\AppServer\profiles\profile_name\config\cells\cell_name\LotusConnections-config
- For each application, update the web addresses specified in the href and ssl_href properties:
<sloc:href>
<sloc:hrefPathPrefix>/application</sloc:hrefPathPrefix>
<sloc:static href="http://webserver:port" ssl_href="https://webserver:port">
<sloc:interService href="https://webserver:port">
</sloc:href>
where
- webserver is the domain name of IBM HTTP Server, such as webserver.example.com.
- port is the default port number of the application. Remove the port number when you specify a web server.
- application is the name of a Connections application.
Each href attribute in LotusConnections-config.xml is case-sensitive and must specify a fully-qualified domain name. For example, to update the web address for Communities, add the following specifications to the file:
<sloc:href>
<sloc:hrefPathPrefix>/communities</sloc:hrefPathPrefix>
<sloc:static href="http://webserver.example.com"
ssl_href="https://webserver.example.com">
<sloc:interService href="https://webserver.example.com">
</sloc:href>
To use a reverse proxy, the web addresses defined in this file must be updated to match the appropriate proxy server URLs. Go to the Connections wiki for more information about deployment scenarios, including how to configure a reverse proxy.
- Save and check in LotusConnections-config.xml.
- Synchronize the nodes.
- Log on to each application to ensure that the web addresses in the navigation bar are correct.
Results
You can access each application without needing to specify a port number.
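As a quick sanity check after editing the file, the following minimal Python sketch scans a copy of LotusConnections-config.xml for href or ssl_href values that still carry an explicit port number. The file name assumes a copy in the current directory; adjust the path for your cell.
# Sketch: flag href/ssl_href values in LotusConnections-config.xml that still
# include an explicit port. The file path is an assumption; adjust as needed.
import re

CONFIG = "LotusConnections-config.xml"
text = open(CONFIG, encoding="utf-8").read()
leftover = sorted(set(re.findall(r'(?:ssl_)?href="https?://[^"/]+:\d+[^"]*"', text)))

if leftover:
    print("href values that still specify a port:")
    for item in leftover:
        print(" ", item)
else:
    print("No explicit port numbers found in href/ssl_href values.")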
Configure IBM HTTP Server on IBM i
Configure IBM HTTP Server to manage web requests to Connections.
Once you have successfully installed Connections to run on WAS, you can configure the IBM HTTP Server to handle web traffic by completing the following tasks:
- Define the IBM HTTP Server on IBM i to manage web connections.
- Map your Connections applications on IBM i to the IBM HTTP Server and generate the plug-in.
- Configure IBM HTTP Server for SSL on IBM i to use the SSL protocol.
- Add a certificate to the WebSphere trust store for IBM i: import a self-signed IBM HTTP Server certificate into the default trust store of IBM WAS.
- Update the web addresses in the IBM HTTP Server for IBM i. The IBM HTTP Server uses these web addresses to access Connections applications.
After all these steps are completed, restart the HTTP Server, application servers, nodes, and Dmgr. Then access https://hostname/homepage.
Define IBM HTTP Server on IBM i
Define the IBM HTTP Server to manage web connections.
Install web server plug-ins on IBM i for the IBM HTTP Server, if they are not already installed.
For more information, refer to Install web server plug-ins.
Connections uses a web server as the entry point for all the applications. This procedure describes how to create a web server using the manageprofiles command. There are other ways to create the web server. Refer to the IBM WAS information center for more information.
To define the IBM HTTP Server...
- From an IBM i command line, run the following command to start the QShell Interpreter:
QSH
- Run the cd shell command, specifying the WAS installation location. For example:
cd /QIBM/ProdData/WebSphere/AppServer/V8/ND/bin
- Run the manageprofiles Qshell command to create an http profile, as follows:
manageprofiles -create -profileName myHttpProfile -templatePath http
The myHttpProfile variable is the name of the profile.
- Create an HTTP Server instance and associate it with the http profile as follows:
- Open a browser to the URL http://system_hostname:2001/HTTPAdmin.
- Create an HTTP Server instance as described in Step 2: Create an HTTP Server instance of the topic Get started with the IBM Web Administration for i interface.
- Continue to associate the HTTP Server instance with the http profile that you created in step 3.
- From the browser page http://system_hostname:2001/HTTPAdmin, scroll down and click WAS, and then navigate to HTTP_Server_instance_name > WAS.
- On the General tab page, select WAS, V8.0.0.* ND.
- Select myHttpProfile from the WebSphere profile drop down list and then click Apply.
Results
Complete the steps in the Map applications to IBM HTTP Server on IBM i topic. Map your Connections applications to IBM HTTP Server and generate the plug-in.
Parent topic: Configure IBM HTTP Server on IBM i
Map applications to IBM HTTP Server on IBM i
Map Connections applications to IBM HTTP Server.
Be sure that you defined IBM HTTP Server as described in Define IBM HTTP Server on IBM i.
To map your Connections applications to IBM HTTP Server and generate the plug-in...
- Copy the configureIHS_myHttpProfile script from the http profile to the dmgr /bin directory of the Connections cell. For example, the source directory is /QIBM/UserData/WebSphere/AppServer/V8/ND/profiles/myHttpProfile/config/IHS_myHttpProfile/ and the target directory is /QIBM/UserData/WebSphere/AppServer/V8/ND/profiles/dmgr/bin/.
The myHttpProfile variable is the name of the http profile created in step 3 of Define IBM HTTP Server on IBM i.
- Make sure that the dmgr server is running; restart the dmgr if needed before performing the next step.
- Run the script that you copied in step 1 from QSH. For example:
- From an IBM i command line, run QSH and then run the following commands:
cd /QIBM/UserData/WebSphere/AppServer/V8/ND/profiles/dmgr/bin/
configureIHS_myHttpProfile
- To finish the command, enter the user name and password of the dmgr.
- Synchronize all the nodes from the WAS Integrated Solutions Console on the system where you installed the Deployment Manager.
- On the ISC, select Servers > Server Types > Web servers and then select the check box before IHS_myHttpProfile.
- Click Generate Plug-in and then Propagate Plug-in to generate and propagate the plug-in file to the web server.
- Click IHS_myHttpProfile > Plug-in properties > Copy to Web server key store directory.
- From Servers > Server Types > Web servers > IHS_myHttpProfile > Remote Web server management, enter a User profile and password of the IBM i LPAR where HTTP Server is configured, and then click Apply and Save.
- Restart IBM HTTP Server from http://<system_hostname>:2001/HTTPAdmin or run:
ENDTCPSVR SERVER(*HTTP) HTTPSVR(myHttpProfile)
STRTCPSVR SERVER(*HTTP) HTTPSVR(myHttpProfile)
- Restart all IBM Connections clusters and the Deployment Manager.
Parent topic: Configure IBM HTTP Server on IBM i
Configure IBM HTTP Server for SSL on IBM i
Configure IBM HTTP Server to use the SSL protocol.
To support SSL, create a self-signed certificate and then configure IBM HTTP Server for SSL traffic. If you use this certificate in production, users might receive warning messages from their browsers. In a typical production deployment, you would use a certificate from a trusted certificate authority.
To configure IBM HTTP Server for SSL, complete the following main procedures:
- Configure IBM HTTP Server for SSL using the IBM Web Administration for IBM i.
- Associate the system certificate with HTTP Server on Digital Certificate Manager.
- Restart IBM HTTP Server to apply the changes.
- Configure HTTP Server for SSL using the IBM Web Administration for IBM i as follows:
- Open a browser to the URL http://<system_hostname>:2001/HTTPAdmin.
- Click the Manage tab.
- Click the HTTP Servers subtab.
- Select your HTTP Server from the Server list, for example: myHttpProfile.
- Select Global configuration from the Server area list.
- Expand Server Properties.
- Click Virtual Hosts.
- Click the Name-based tab in the form.
- Click Add under the Named virtual hosts table.
- Select or enter an IP address in the IP address column, for example 10.1.2.3.
The IP address 10.1.2.3 used in this scenario is associated with the IBM i system host name <system_hostname> and registered by a Domain Name Server (DNS). You will need to choose a different IP address and host name. The IBM Web Administration for i interface provides the IP addresses used by your IBM i server in the IP Address list; however, you will need to provide the host name associated with the address you choose.
- Enter a port number in the Port column, such as: 443.
Specify a port number other than the one currently being used for your HTTP Server to maintain an SSL and non-SSL Web site.
- Click Add under the Virtual host containers table in the Named host column.
This is a table within the Named virtual hosts table in the Named host column.
- Enter the fully qualified server hostname for the virtual host in the Server name column, such as: <system_hostname>
Make sure the server hostname you enter is fully qualified and associated with the IP address you selected.
- Enter a document root for the virtual host index file or welcome file in the Document root column, such as: /www/myHttpProfile
You are specifying a document root that will be created later in this procedure. Remember the document root you have entered; you will be asked to enter the document root again when creating a new directory.
- Click Continue and then click OK.
- Set up Listen directive for virtual host as follows:
- Expand Server Properties.
- Click General Server Configuration.
- Click the General Settings tab in the form.
- Click Add under the Server IP addresses and ports to listen on table.
- Select the IP address you entered for the virtual host in the IP address column, such as: 10.1.2.3.
- Enter the port number you entered for the virtual host in the Port column, such as: 443
- Click Continue and then click OK.
- Enable SSL for the virtual host as follows:
- Select the virtual host from the Server area list, such as: Virtual Host *:443
- Expand Server Properties.
- Click Security.
- Click the SSL with Certificate Authentication tab in the form.
- Select Enable SSL under SSL.
- Select QIBM_HTTP_SERVER_[server_name] from the Server certificate application name list, for example: QIBM_HTTP_SERVER_myHttpProfile
Remember the name of the server certificate. You will need to select it again in the Digital Certificate Manager.
- Select Do not request client certificate for connection under Client certificates when establishing the connection.
- Click OK. The HTTPS_PORT field provides a specific environment variable value that is passed to CGI programs. This field is not used in this scenario.
- Associate system certificate with HTTP Server on Digital Certificate Manager as follows:
- Open a browser to the URL http://<system_hostname>:2001/QIBM/ICSS/Cert/Admin/qycucm1.ndm/main0.
- Create or renew the local CA. If there is no local CA, create it first; refer to Set up certificates for the first time for details.
After creating a private local CA, the Create a Certificate Authority (CA) task no longer appears in the navigation panel. To renew an expired local CA, perform the steps later in this procedure.
After the local CA is valid, create a new certificate and then assign it to the IBM HTTP Server:
- Select the Local CA and enter a password.
- Select Manage local CA > Renew.
- Renew the CA cert and extend the expire period.
- Create a new certificate in the *System keystore as follows:
- Select a keystore *system.
- Enter the password for *system keystore.
- Select Create certificate.
- Select Server or Client certificate.
- Select the local CA as the current CA that will use its CA certificate to sign the new certificate.
- Enter the required information for the new certificate.
- Click Continue.
- Select Applications for the newly created certificate, such as QIBM_HTTP_SERVER_myHttpProfile, and then click Continue to finish.
- If the local CA is valid and a certificate signed by the local CA already exists, you can update the certificate assigned to the application by selecting Manage Applications > Update certificate assignment.
- Restart IBM HTTP Server to apply the changes.
Results
To test the new configuration, open a web browser and ensure that you can successfully reach https://<server_name>. You might be prompted to accept the self-signed certificate on your browser. Connections users can access applications through the SSL protocol.
Parent topic: Configure IBM HTTP Server on IBM i
Add certificates to the WebSphere trust store for IBM i
Import a self-signed IBM HTTP Server certificate into the default trust store of IBM WAS.
Before you complete this procedure, ensure that IBM HTTP Server is configured to support SSL.
To establish trusted server to server communication for Connections, import signer certificates from IBM HTTP Server into the WAS default trust store.
There are different types of certificates used. This procedure describes how to import a self-signed certificate. You can also import a certificate that you purchased from a third-party Certificate Authority. To help decide a key file strategy for your environment, go to the IBM HTTP Server information center.
To import a public certificate from IBM HTTP Server to the default trust store in IBM WAS...
- Log into the IBM WAS admin console and select Security > SSL Certificate and key management > Key stores and certificates.
- Click CellDefaultTrustStore.
- Click Signer Certificates.
- Click Retrieve from port.
- Enter the Host name, SSL Port, and Alias of the web server.
- Click Retrieve Signer Information and then click OK. The root certificate is added to the list of signer certificates.
- If you are using Tivoli Access Manager or another proxy, repeat steps 4-6 for those servers as well.
Results
If your configuration changes aren't successful, ensure that you have applied the instructions to configure a default personal certificate.
Verify that users can create a private community and add other widgets, such as Activities, Blogs, Dogear, and so on, to it. Ensure that there are no errors when these widgets are added. If problems are reported, consult the Communities SystemOut.log file.
The proxy-config.tpl file allows a proxy to work with self-signed certificates. This is true for an out-of-the-box deployment but for improved security you should set the value of the unsigned_ssl_certificate_support property to false when the deployment is ready for production.
Ensure that you are ready to renew your certificate before it expires. WAS provides a utility for monitoring certificates.
For more information, refer to Configure certificate expiration monitoring in the WAS information center.
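As an optional command-line check (assuming the openssl client is available; the host name is a placeholder), you can inspect the certificate that IBM HTTP Server presents, including its validity dates, before you import it and again when planning certificate renewal:
# Show the subject, issuer, and validity dates of the web server certificate
echo | openssl s_client -connect webserver.example.com:443 -servername webserver.example.com 2>/dev/null | openssl x509 -noout -subject -issuer -dates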
Update web addresses in IBM HTTP Server for IBM i
Update the web addresses that IBM HTTP Server uses to access Connections applications on the IBM i operating system.
If you installed and configured IBM HTTP Server after installing Connections, your Connections applications are automatically mapped to the web server. However, if you installed and configured IBM HTTP Server before installing Connections, you must manually map the applications.
Before continuing with this task, map the application modules to IBM HTTP Server. See Map applications to IBM HTTP Server on IBM i.
If you are using the Files, Wikis, or Libraries application, configure IBM HTTP Server to handle file downloads from those applications as described in Configure file downloads through the HTTP server.
If you do not install a web server such as IBM HTTP Server, users must include the correct port number in the web address that they use to access the application. When you use a web server, users can access the applications without using port numbers.
By default, the web address that you enter to access Connections applications includes the port number for each application. To avoid using port numbers, update the web addresses by editing LotusConnections-config.xml. IBM HTTP Server can then redirect requests to the appropriate port for each application.
For more information about editing configuration files, refer to Edit configuration files.
To update the web addresses to your Connections applications...
- Stop WAS.
- Check out LotusConnections-config.xml. The file is stored by default in the /QIBM/UserData/WebSphere/AppServer/V8/ND/profiles/dmgr/config/cells/cell_name/LotusConnections-config directory.
- For each application, update the web addresses specified in the href and ssl_href properties:
<sloc:href>
  <sloc:hrefPathPrefix>/application</sloc:hrefPathPrefix>
  <sloc:static href="http://webserver:port" ssl_href="https://webserver:port">
  <sloc:interService href="https://webserver:port">
</sloc:href>
where
- webserver is the domain name of IBM HTTP Server, such as webserver.example.com.
- port is the default port number of the application. Remove the port number when you specify a web server.
- application is the name of a Connections application.
Each href attribute in LotusConnections-config.xml is case-sensitive and must specify a fully-qualified domain name. For example, to update the web address for Communities, add the following specifications to the file:
<sloc:href>
<sloc:hrefPathPrefix>/communities</sloc:hrefPathPrefix>
<sloc:static href="http://webserver.example.com"
ssl_href="https://webserver.example.com">
<sloc:interService href="https://webserver.example.com">
</sloc:href>
To use a reverse proxy, the web addresses defined in this file must be updated to match the appropriate proxy server URLs. Refer to the Connections wiki for more information about deployment scenarios, including how to configure a reverse proxy.
- Save and check in LotusConnections-config.xml.
- Synchronize the nodes.
- Log on to each application to ensure that the web addresses in the navigation bar are correct.
Results
You can access each application without needing to specify a port number.
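To spot-check the new addresses from the command line (assuming curl is available; the host name is a placeholder), request an application URL without a port number and confirm that the web server answers:
# Expect an HTTP response (typically a redirect to the login page) rather than a connection error
curl -I http://webserver.example.com/communities
curl -kI https://webserver.example.com/communities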
Configure the Home page administrator
Create an administrator for Home page so that you can make changes to the application such as adding and removing widgets.
Connections administrators must be dedicated users. Their only purpose should be application administration.
Only a Home page administrator can add, remove, enable, or disable widgets on the Home page.
For more information, see the Administering the Home page from the user interface topic.
You can also create global administrators for any of the applications, for the purpose of managing content.
For more information, see the Administering application content topic.
To configure administrative access to the Home page application...
- Log in to the WAS admin console on the Deployment Manager.
- Select Applications > Application Types > WebSphere enterprise applications.
- Click the link to the Home page application.
- Click the Security role to user/group mapping link.
- Select the check box for the admin role and then click Map Users.
- In the Search String box, type the name of the person whom you would like to set as an administrator, and then click Search. If the user name exists in the LDAP directory, it is found and displayed in the Available box.
- Select the name from the Available box and then move it into the Selected column by clicking the move arrow.
- Repeat the search and selection steps to add more users to the administrative role.
- Click OK.
- From the Enterprise Applications > <application> > Security role to user/group mapping page, click OK and then click Save.
- Synchronize and restart all your WAS instances.
Enable Search dictionaries
During installation, only the English language dictionary is enabled by default. When your organization spans multiple geographies and multiple languages, you need to enable the relevant language dictionaries for the deployment to ensure that Search returns optimum results for users.
For non-English deployments, enabling multilingual support for Search is a mandatory post-installation step that needs to be performed before you start your Connections Search server for the first time. Without multiple dictionary support, for languages other than English, Search only returns results where there is an exact match between the search term and content term. Enabling multiple dictionaries ensures better quality search results when your user base is multilingual.
For information about how to enable multilingual support, see Configure dictionaries for Search.
Copying Search conversion tools to local nodes
To enable full indexing of data, copy the Search conversion tools to local nodes.
Perform this task only on nodes in the Search cluster. If you added a node to an existing cluster, as described in the Adding a node to a cluster topic, complete this task only if the new node is a member of the Search cluster. References to nodes in the steps of this task apply only to nodes in the Search cluster.
Steps 1-3 and 6-7 are required for all supported operating systems. However, if the deployment has only one node, skip steps 1-3. Steps 4-5 are required only if you are using the AIX or Linux operating system.
The Search conversion tools index Files and Wiki attachments. The tools work best when they are available locally on each node. However, when Connections was installed, the conversion tools were deployed on a network share. Therefore, you must copy the tools to each node in the Search cluster.
To copy Search conversion tools to local nodes...
- Identify the nodes in the Search cluster.
- Log in to the Integrated Solutions Console and click...
Servers | Clusters | WebSphere application server clusters | cluster_name
...where cluster_name is the name of the Search cluster.
- In the Additional Properties area, expand Cluster members and then click Details.
- In the table of cluster members, make a note of the nodes that host the cluster members.
- Copy the shared_data_directory_root/search/stellent directory from the shared content folder to a local directory on each node.
Use exactly the same path on each node. The following path is an example only and might be different on your operating system:
/opt/IBM/Connections/data/local/search/stellent
The new directory contains the exporter executable file.
- On the Deployment Manager, update the FILE_CONTENT_CONVERSION WebSphere variable to point to the exporter file in the local directory on each node. For example:
/opt/IBM/Connections/data/local/search/stellent/dcs/oiexport/exporter
The exporter file must be in the same file path on all nodes.
- Back up the setupCmdLine.sh file on each node. This file is in the app_server_root/AppServer/bin directory.
- Add the following text to the end of the setupCmdLine.sh file on each node (a consolidated sketch of these additions follows this procedure):
- export PATH=$PATH:SearchBinariesHome/dcs/oiexport
where SearchBinariesHome is the path to the directory specified in Step 2.
- Choose the option for your operating system:
- AIX: export LIBPATH=$LIBPATH:SearchBinariesHome/dcs/oiexport
- Linux: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:SearchBinariesHome/dcs/oiexport
- Restart all node agents.
- Restart WAS on each node.
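For reference, a minimal sketch of the lines appended to setupCmdLine.sh on a Linux node might look like the following. The local path is the example used earlier; defining the path once as a shell variable is an optional convenience, and on AIX you export LIBPATH instead of LD_LIBRARY_PATH:
# Appended to app_server_root/AppServer/bin/setupCmdLine.sh on each Search node
SearchBinariesHome=/opt/IBM/Connections/data/local/search/stellent
export PATH=$PATH:$SearchBinariesHome/dcs/oiexport
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$SearchBinariesHome/dcs/oiexport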
Create the initial Search index
When you install Connections, Search indexing is automatically configured. To create the initial Search index, all you need to do is wait for one of the default indexing tasks to run.
When you are installing a non-English language deployment, you must enable the relevant language dictionaries for the deployment before creating the initial Search index. Without multiple dictionary support, for languages other than English, Search only returns results where there is an exact match between the search term and the content term. Enabling multiple dictionaries ensures better quality search results when your user base is multilingual.
For more information, see Configure dictionaries for Search.
Initial index creation occurs when any scheduled indexing task fires and an index does not yet exist. As part of the initial index creation process, the index is automatically rolled out to the secondary nodes in the deployment. Each node running the Search application must have the Search index stored locally on the node's file system. Because there are multiple indexes in a clustered environment, they must all be kept in synchronization with each other.
The Search index directory is defined by the IBM WAS variable SEARCH_INDEX_DIR. You can change the location of the index by editing this variable.
For more information, see Changing the location of the Search index.
After the initial index has been built and optimized, the contents of the index directory are copied to a staging folder. When the newly-built index is successfully posted, JMS messages are broadcast so that each node automatically downloads the index from the staging folder and loads it. The index management tables are populated at the same time. For Search to function properly, the initial index must have completed successfully and it must be deployed to all nodes.
Do not stop the deployment until the index has been copied to all nodes. If the server is stopped during this process, the index will not be successfully rolled out to all nodes. In this event, you need to manually copy the index from the staging location to the other nodes.
You can change the location of the Search index staging folder by editing the WAS variable, SEARCH_INDEX_SHARED_COPY_LOCATION.
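After the first scheduled indexing task has run, you can confirm from the command line that the index was created and rolled out. This is a sketch only; the path below is an example and must be adjusted to wherever SEARCH_INDEX_DIR points in your deployment:
# Run on each node in the Search cluster: the directory should exist and contain index files
ls -l /opt/IBM/Connections/data/local/search/index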
Configure file downloads through the HTTP Server
You can make downloading files from the Files, Wikis, and Library applications more efficient by configuring an IBM HTTP Server to handle most of the download process instead of the WAS. It is strongly recommended that you configure production deployments this way. Configuring downloading for Files, Wikis, and Libraries is not supported on IBM i.
Install an IBM HTTP Server in your WAS environment.
See the topic Configuring IBM HTTP Server for information.
In network deployments, Files and Wikis data must be stored on a shared file system, as described in the topic Deployment options. Connections Content Manager Libraries make use of an optional file cache on the file system for serving files through the HTTP server. All IBM HTTP Servers in the deployment must have read access to the files, and all WASs must have write access.
If you choose not to configure the IBM HTTP Server to download files, configure the WAS to transfer data synchronously instead of asynchronously in order to avoid errors related to using too much memory.
See the tech note Excessive native memory use in IBM WAS for instructions.
In the default deployment with an IBM HTTP Server, file download requests are passed from the IBM HTTP Server to the WAS. The WAS accesses the binary files in a data directory on the file system and returns them to the IBM HTTP Server, which passes them to the browser.
This approach is inefficient in deployments where large numbers of users download files, because the WAS thread pool is limited, tuned for short-lived transactions, and optimized for J2EE requests rather than file downloads. Slow transfers, for example users in different geographies downloading a 2 MB file at 2 KB per second, tie up threads for long periods, which makes it impractical to tune the thread pool properly and can even force you to create a dedicated cluster just to handle downloads.
Configuring the IBM HTTP Server to download the binary files instead makes downloading far more efficient, because IBM HTTP Server is designed specifically for serving files. WAS continues to perform tasks such as security checking and cache validation, while the IBM HTTP Server handles the downloads.
To configure this environment, you install an add-on module to the IBM HTTP Server. As in typical deployments, download requests are passed from the IBM HTTP Server to the WAS. But instead of responding with the binary data, the WAS only adds a special header to its response. The add-on module recognizes the header and directs the IBM HTTP Server to download the binary data.
This configuration requires making the Files and Wikis data directories, and optionally a content cache directory from Connections Content Manager, available to the IBM HTTP Server using an alias. This creates a security concern, so configure the access control at the IBM HTTP Server level. After you configure security, access to the data through the HTTP Server is denied unless a specific variable is set. Requests to the applications on WAS are then configured to set the variable. In other words, only requests passing through WAS are able to access the data directory, with WAS acting as the authorizer.
If you use the add-on module you must use an IBM HTTP Server address for the Connections inter-service URL.
See the topic Troubleshooting inter-server communication for information on setting an inter-service URL.
Do the following tasks to configure IBM HTTP Server downloading:
- Install the Connections Files or Wikis applications.
- On the server that you installed Connections on, navigate to the connections_root/plugins/ihs/mod_ibm_local_redirect/platform directory to find the module file (mod_ibm_local_redirect.so) appropriate to your IBM HTTP Server operating system. These are the platform directories:
- /aix_ppc32-ap20
- /aix_ppc32-ap22
- /aix_ppc64-ap22
- /linux390-ap20
- /linux390-ap22
- /linux_ia32-ap20
- /linux_ia32-ap22
- /linux_ppc64-ap22
- /linuxs390_x64-ap22
- /linux_x64-ap22
- /win_ia32-ap20
- /win_ia32-ap22
For example, on Linux computers:
/IBM/Connections/plugins/ihs/mod_ibm_local_redirect/linux_ia32-ap20/mod_ibm_local_redirect.so
- On all supported platforms, you can use these modules whether you installed IBM HTTP Server from the 32-bit or 64-bit supplemental package, because the IBM HTTP Server process is 32-bit in both cases and requires 32-bit modules.
See this support document for more information on this topic. For IBM HTTP Server 6.1.x releases, use the ap20 versions; for version 7.x releases use the ap22 version.
- Copy the module to the appropriate directory location on your IBM HTTP Server. By default, modules are located in the ibm_http_server_root/modules directory.
- Open the IBM HTTP Server httpd.conf file (in the ibm_http_server_root/conf directory by default) and add the following statements to load the ibm_local_redirect_module, and the required mod_env environment variable module:
LoadModule ibm_local_redirect_module path_to_module/mod_ibm_local_redirect.so
For example:
LoadModule ibm_local_redirect_module modules/mod_ibm_local_redirect.so
LoadModule env_module path_to_mod_env/mod_env.so
For example:
LoadModule env_module modules/mod_env.so
By default, the mod_env module is installed in the /modules directory. It might already be loaded, or its LoadModule line might be commented out; in that case, remove the comment character to load it.
- Do one of the following, according to your IBM HTTP Server operating system:
- Windows: Give the IBM HTTP Server user READ access to the data directory root. For optimal security, do not give the user WRITE access.
- AIX or Linux: Give the IBM HTTP Server user READ and EXECUTE access to the data directory root.
You can find the data_directory_root path in the files-config.xml or wikis-config.xml file, in the file.storage.rootDirectory attribute. This attribute will contain either the path itself, or a WAS variable whose value is the path. If it contains a variable, you can find the path by opening the WAS console, clicking Environment > WebSphere Variables, and finding the variable. For example, if the element's value is ${FILES_CONTENT_DIR}, find FILES_CONTENT_DIR in the console to find the path.
See the topic Changing configuration property values for information on opening the files-config.xml or wikis-config.xml file.
For Connections Content Manager, the data directory is a shared cache directory available to both the HTTP Server and the Libraries servers (or FileNet Collaboration Services). For Connections Content Manager, the data_directory_root refers to this cache location and will be configured in step 11.
In some situations, granting access at the data directory root may not work for you. For example, where the value of FILES_CONTENT_DIR is \\server\Shared\files\upload, giving Read access to the user there isn't useful because they don't have any rights to the share. Instead, give the user Read access at the share point of \\server\Shared.
- On all virtual hosts in the same domain as Files, Wikis, or Libraries, including both HTTP and HTTPS, do the following to expose the data directory root:
- Open the httpd.conf file.
- Add the following to create an alias for the data directory root:
alias /<alias> "data_directory_root"
For example, if the Files data directory root is /opt/IBM/Connections/data/shared/files (on Linux), the following line creates the alias files_content for that directory:
alias /files_content /opt/IBM/Connections/data/shared/files
A similar example for Wikis:
alias /wikis_content /opt/IBM/Connections/data/shared/wikis
A similar example for Libraries:
alias /library_content_cache /opt/IBM/Connections/data/shared/ccmcache
You must create the directory used in this step.
- Do not use the application context root (/files or /wikis or /dm by default) as part of the alias, but you can use any other value. For example, use /files_content, but not /files/content. The application context root is the path part of the application URL, for example the application context root of a Files application with the URL www.my.enterprise.com/files is /files. You can see the value in the files.href.prefix property in LotusConnections-config.xml.
See the topic Changing common configuration property values for information on opening the configuration file.
- Include quotes around the file path on Windows computers, and always use forward slashes, for example: "C:/IBM/Connections/Data/Files"
- The example assumes the HTTP server is on the same computer as Connections. If the HTTP server is on a different computer (as is common), specify the data directory using the network share path appropriate to your environment. For example, use a UNC network share format such as: alias /files_content "//server/sharename/Files"
- In the httpd.conf file, add these lines after the lines you added in Step 6, to make the alias more secure:
<Directory " data_directory_root"> Order Deny,Allow Deny from all Allow from env=REDIRECT_FILES_CONTENT or REDIRECT_WIKIS_CONTENT or REDIRECT_LIBRARIES_CONTENT </Directory>For example:<Directory "/opt/IBM/Connections/data/shared/files"> Order Deny,Allow Deny from all Allow from env=REDIRECT_FILES_CONTENT </Directory> <Directory "/opt/IBM/Connections/data/shared/wikis"> Order Deny,Allow Deny from all Allow from env=REDIRECT_WIKIS_CONTENT </Directory> <Directory "/opt/IBM/Connections/data/shared/ccmcache"> Order Deny,Allow Deny from all Allow from env=REDIRECT_LIBRARIES_CONTENT </Directory>
- This secures the data by only allowing requests where REDIRECT_FILES_CONTENT or REDIRECT_WIKIS_CONTENT or REDIRECT_LIBRARIES_CONTENT is specified. Use any environment variable you want, as long as it is not already in the IBM HTTP Server environment.
- The example assumes the HTTP server is on the same computer as Connections. If the HTTP server is on a different computer (as is common), specify the data directory using the network share path appropriate to your environment. For example, use a UNC network share format such as: <Directory "//server/sharename/Files">
- In the httpd.conf file, add these lines after the lines you added in Step 7, to enable the module for Files or Wikis:
<Location application_context_root>
IBMLocalRedirect On
IBMLocalRedirectKeepHeaders X-LConn-Auth,Cache-Control,Content-Type,Content-Disposition,Last-Modified,ETag,Content-Language,Set-Cookie
SetEnv FILES_CONTENT or WIKIS_CONTENT or LIBRARIES_CONTENT true
</Location>
For example:
<Location /files>
IBMLocalRedirect On
IBMLocalRedirectKeepHeaders X-LConn-Auth,Cache-Control,Content-Type,Content-Disposition,Last-Modified,ETag,Content-Language,Set-Cookie
SetEnv FILES_CONTENT true
</Location>
<Location /wikis>
IBMLocalRedirect On
IBMLocalRedirectKeepHeaders X-LConn-Auth,Cache-Control,Content-Type,Content-Disposition,Last-Modified,ETag,Content-Language,Set-Cookie
SetEnv WIKIS_CONTENT true
</Location>
<Location /dm>
IBMLocalRedirect On
IBMLocalRedirectKeepHeaders X-LConn-Auth,Cache-Control,Content-Type,Content-Disposition,Last-Modified,ETag,Content-Language,Set-Cookie
SetEnv LIBRARIES_CONTENT true
</Location>
- The application_context_root value is the last part of the application URL, for example the application context root of a Files application with the URL www.my.enterprise.com/files is /files. This is /files, /wikis or /dm by default, but can be changed during post-installation steps. You can see the value in the files.href.prefix property in LotusConnections-config.xml.
See the topic Changing common configuration property values for information on opening the configuration file.
- Specifying IBMLocalRedirectKeepHeaders instructs the plug-in to keep the specified headers from the application server instead of recomputing them. This is critical because the applications set directives such as Content-Type and Content-Disposition that the IBM HTTP Server would not otherwise know about.
- If your environment requires additional headers (for example for a proxy cache), you can add them to the comma-delimited IBMLocalRedirectKeepHeaders list to ensure that the module retains them during redirection.
- Header names must be comma-delimited with no space before or after commas. Also, all header names must be on one line regardless of how many there are.
- The SetEnv value sets the token that the data directory requires to be accessible. It must match the value after REDIRECT_ that you set in Allow from env= in Step 7. For example, if you set REDIRECT_FILES_CONTENT in Step 7, this value must be SetEnv FILES_CONTENT true.
- You can think of this as a lock and key mechanism: only requests that go through the Files, Wikis or Library applications get a key, and the applications ensure that only authorized users can unlock particular files.
- Do the following to test that the IBM HTTP Server is configured properly and securely:
- Restart the IBM HTTP Server. Make sure it loads properly and there are no log errors about loading modules or configuration. If there are problems, make sure the load module and configuration directives do not contain typos.
- Try to access the alias directory directly at http://host/alias (and https://host/alias if SSL is enabled) and make sure you are denied permission. If you can access the directory, make sure that the Order Deny,Allow; Deny from all; and Allow from env directives from Step 7 are all there. (A command-line sketch of this check follows the troubleshooting notes at the end of this procedure.)
- Access the application and download a file to make sure it functions. The module is not yet enabled.
- Check out the files-config.xml or wikis-config.xml file using the steps in the topic Changing configuration property values, and specify the following property attributes:
<download>
  <modIBMLocalRedirect enabled="true" hrefPathPrefix="/alias" />
</download>
The alias must have a forward slash in front of it.
- Connections Content Manager only: Use the task Configure FileNet Collaboration Services to set values for the variables cdhc_isEnabled, cdhc_urlPath, cdhc_rootPath, and cdhc_guardHeader, where:
- cdhc_urlPath must be the name of the alias specified in step 6.
- cdhc_rootPath must correspond to the data_directory_root used in step 7.
- cdhc_guardHeader must correspond to the variable used in the SetEnv command in step 8.
For example:
cdhc_urlPath=library_content_cache
cdhc_rootPath=/opt/IBM/Connections/data/shared/ccmcache
cdhc_guardHeader=LIBRARIES_CONTENT
- In the httpd.conf file, append the Requestheader via HTTP Server so that cached items will appear in the shared directory as follows:
- Specify RequestHeader append LIBRARIES_CONTENT true in the httpd.conf where LIBRARIES_CONTENT is the name you set for the cdhc_guardHeader.
- Restart Files or Wikis or Libraries.
- Download a file to make sure it works.
- Do the following to test whether the IBM HTTP Server is downloading the files:
- Open the httpd.conf file and add # characters to comment out the last line in the <Directory> element, for example:
<Directory " data_directory_root"> Order Deny,Allow Deny from all #Allow from env=REDIRECT_FILES_CONTENT or REDIRECT_WIKIS_CONTENT or REDIRECT_LIBRARIES_CONTENT </Directory>For example:<Directory "/opt/IBM/Connections/data/shared/files"> Order Deny,Allow Deny from all #Allow from env=REDIRECT_FILES_CONTENT </Directory> <Directory "/opt/Connections/data/shared/wikis"> Order Deny,Allow Deny from all #Allow from env=REDIRECT_WIKIS_CONTENT </Directory> <Directory "/opt/IBM/Connections/data/shared/ccmcache"> Order Deny,Allow Deny from all #Allow from env=REDIRECT_LIBRARIES_CONTENT </Directory>
- Save the file.
- Try to download a file from Files or Wikis. You should be denied. Test over both HTTP and HTTPS protocols (if HTTPS is enabled).
- Open the httpd.conf file and remove the # characters from the last line specified in Step a.
Check the standard IBM HTTP Server error and request logs for any problems.
- If you get a permission denied error trying to download a file, IBM HTTP Server might not have access to the content. You can temporarily disable security on the directory, and ensure you can access it directly first, then re-enable security. Note that you can tell if WebSphere or IBM HTTP Server is encountering an issue by the error page displayed, and by the path. If IBM HTTP Server is having a problem with the module invoked, the path will include /<alias>.
- If you get log errors about loading the module, make sure that it is only loaded once, that you have selected the correct binary, and that you are on a supported platform.
- If it works for HTTP but not HTTPS (or vice versa), make sure that the configuration lines are in a global context or in each virtual host, depending on your setup.
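Assuming curl is available, the deny-by-default behavior of the alias can also be checked quickly from the command line. The host name, alias, and file name below are placeholders:
# Direct requests to the alias must be refused (expect HTTP 403 Forbidden)
curl -I http://webserver.example.com/files_content/somefile
# Repeat over HTTPS if SSL is enabled
curl -kI https://webserver.example.com/files_content/somefile
Downloads through the Files, Wikis, or Libraries user interface should continue to work, because the SetEnv directive in the application's Location block supplies the token that the Directory block requires.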
Configure Cognos Business Intelligence
Configure your IBM Cognos Business Intelligence environment to work with Connections.
This configuration task assumes you have already installed the Cognos BI server. If you chose to deploy Cognos after installing Connections, be sure you have installed Cognos properly as described in Install Cognos Business Intelligence components.
After you have installed Cognos Business Intelligence, configure the environment by completing the following tasks:
Apply fix packs to update the Cognos server
Apply available fix packs to the Cognos Business Intelligence server to provide important product corrections. These fixes must be applied after the initial installation of Cognos Business Intelligence for this version of Connections.
Install Cognos Business Intelligence and federate the server to the Connections Deployment Manager as explained in Install Cognos Business Intelligence in the Pre-installation section of this documentation.
For additional information on updating Cognos Business Intelligence with fix packs, see Install Fix Packs in the Cognos information center.
Download the latest fix pack and apply it to the Cognos server. At a minimum, you will need to apply the fix pack, but you should check with IBM Fix Central in case additional fix packs were made available after this documentation was published. Fix packs are cumulative; when you install the latest fix pack, it includes updates from all previous fix packs.
- Stop all IBM WAS instances except the Deployment Manager, which must be running to complete step 6, and verify that all Cognos services have stopped before proceeding to the next step.
After stopping the servers, wait at least one full minute to ensure that all Cognos processes have stopped:
- IBM AIX or Linux: cgsServer.sh and CAM_LPSvr processes
- Windows: cgsLauncher.exe and CAM_LPSvr processes
- Download fix packs for Cognos Business Intelligence 10.1.1 from IBM Fix Central.
Table 37. Fix packs required for a new deployment of IBM Cognos for Connections
- AIX: 10.1.1-BA-CBI-AIX64-FP001
- Linux: 10.1.1-BA-CBI-Linuxi38664-FP001
- Windows: 10.1.1-BA-CBI-Win64-FP001
- zLinux: 10.1.1-BA-CBI-zLinux64-FP001
- Review the fix pack prerequisites and make sure the deployment satisfies all requirements.
- Download the appropriate compressed tar file for your operating system using the links in the "Download Package" section.
Some browsers might change a downloaded file’s type from .tar.gz to a file type not recognized by the operating system. To correct this, change the file type back to .tar.gz after the download is complete. Using Download Director will prevent inadvertent renaming of files at download.
- Expand the downloaded fix pack.
AIX or Linux
- Open a command prompt or terminal window.
- Change to the directory where you downloaded the fix pack.
- Run the following command to expand the package using GNU Zip (gzip) and GNU Tar (tar):
gunzip -c fix_pack_file_name.tar.gz | tar xvf -
Windows
- Open a command prompt.
- Change to the directory where you downloaded the fix pack.
- Expand the package using your file compress and decompress utility (if you are using WinZip, select the option "Use folder names" to retain the package’s folder structure).
- Apply the fix pack (a consolidated Linux sketch follows this procedure):
AIX or Linux:
- Change to the directory where you expanded the fix pack.
- Change to the following subdirectory:
- AIX: /aix64h
- Linux: /linuxi38664h
- zLinux: /zlinux64h
- Run the following command: ./issetup
If you do not use XWindow, run an unattended installation as explained in Set Up an Unattended Installation Using a File From an Installation on Another Computer in the Cognos information center.
- Follow the instructions in the installation wizard to install the fix pack in the same location as your existing IBM Cognos components.
This installation location is the path specified for the cognos.biserver.install.path property in the cognos-setup.properties file.
Windows:
- Change to the directory where you expanded the fix pack.
- Change to the \win64h subdirectory.
- Run the following command: issetup.exe
- Follow the instructions in the installation wizard to install the fix pack in the same location as your existing IBM Cognos components.
This installation location is the path specified for the cognos.biserver.install.path property in the cognos-setup.properties file.
- Generate a new Cognos BI Server EAR file with the fix pack:
- Locate the cognos-setup-update.sh script in the directory where you expanded the CognosConfig.zip or CognosConfig.tar when you installed Cognos Business Intelligence components as part of the pre-install task.
- Edit the cognos-setup.properties file and verify that it contains the appropriate values for each property.
All passwords were removed from this file the last time it was used, so you must either add the passwords again, or pass them in from the command line when you run the cognos-setup-update.sh script.
- Run the cognos-setup-update.sh script.
- Apply the new EAR file to the Cognos server by running an update in the WebSphere Integrated Solutions Console:
- On the Deployment Manager, log in to the Integrated Solutions Console as the WebSphere administrator.
- Click Applications > Application types > WebSphere enterprise applications.
- In the list of applications, select the Cognos application and click the Update button in the table.
- Browse to the newly built EAR file residing in the Cognos_BI_Server_install_path directory, and click Next.
- Complete the remaining screens by accepting the default values and clicking Next.
- Click Finish to complete the update.
- Set the JAVA_HOME variable by changing to the WAS_install_root/bin directory and running:
./setupCmdLine.sh
- Update the Cognos server configuration as follows:
- Start the Cognos server. Verify that all processes are started by accessing the Cognos BI dispatch URL. If startup is complete, the Public Folders directory is present.
- Locate the cognos-configure-update.sh script in the directory where you expanded the CognosConfig.zip|tar file when you installed Cognos Business Intelligence components.
- Edit the cognos-setup.properties file and verify that it contains the appropriate values for each property. All passwords were removed from this file the last time that it was used, so you must either add the passwords again, or pass them in from the command line when you run the cognos-configure-update.sh script.
- Run the cognos-configure-update.sh script.
Output from this operation is stored in the /CognosSetup/cognos-configure.log file.
If you encounter an error when running the cognos-configure-update.sh script, correct the error and run the script again before proceeding to the next task.
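For reference, on a Linux system the download, expand, and apply steps earlier in this procedure might look like the following sketch. The download directory and fix pack file name are examples only; use your own locations and the latest fix pack from IBM Fix Central:
# Expand the downloaded fix pack
cd /tmp/cognos-fixpack
gunzip -c 10.1.1-BA-CBI-Linuxi38664-FP001.tar.gz | tar xvf -
# Apply it with the interactive installer, pointing it at the existing
# Cognos BI installation location (cognos.biserver.install.path)
cd linuxi38664h
./issetup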
Granting access to global metrics
Configure the metrics-report-run security role to grant users the authority to view and interact with global metrics.
Other than administrators, only the users assigned to the metrics-report-run role can access global metrics. Every user assigned to the metrics-report-run role must be added to the IBMConnectionsMetricsAdmin group as described in Configure the IBMConnectionsMetricsAdmin role on Cognos.
- On the Deployment Manager, log in to the Integrated Solutions Console as the WebSphere administrator.
- In the navigation tree, click Applications > WebSphere enterprise applications > Metrics > Security role to user/group mapping.
- In the roles table, select the check box next to the metrics-report-run role.
- Still in the table, click Map Users or Map Groups.
Use Map Users to add individual users to the role; use Map Groups to add user groups to the role.
- Add one or more users or groups to the metrics-report-run role.
For best results, limit access to a small set of users whose jobs require them to view the most recent metrics. Granting this level of access to a large number of users slows performance, because update requests are processed in sequence.
- Click OK.
- In the navigation tree, click Applications > WebSphere enterprise applications > Common > Security role to user/group mapping.
- In the roles table, select the check box next to the metrics-report-run role.
- Still in the table, click Map Users or Map Groups.
- Add the users or groups that are mapped to the metrics-report-run role of Metrics (step 5).
- Click OK.
- Save the change to the master configuration by clicking the Save link in the "Messages" box at the beginning of the page.
- Synchronize all nodes in the cell to the Deployment Manager, and then restart the node agents:
- On the navigation tree, click System Administration > Nodes.
- Click the Full Resynchronize button in the table.
- Return to the navigation tree and click System Administration > Node agents.
- In the nodes table, click the box in front of each node.
- Click the Restart button in the table.
Granting access to community metrics
Configure the community-metrics-run security role to grant users the authority to view community metrics using static reports.
Other than administrators, only the users assigned to the community-metrics-run role can access community metrics for the communities they own. Users with this level of access see static reports, which can be refreshed by clicking the Update button in the Metrics user interface. You can map this role to everyone, or to a subset of the user population. For example, you can gradually provide the community metrics feature to the user population by mapping this role to a small group first, and then adding more users to the role over time.
- On the Deployment Manager, log in to the Integrated Solutions Console as the WebSphere administrator.
- In the navigation tree, click Applications > WebSphere enterprise applications > Metrics > Security role to user/group mapping.
- In the roles table, select the check box next to the community-metrics-run role.
- Still in the table, click the Map Users button or the Map Groups button.
Use Map Users to add individual users to the role; use Map Groups to add user groups to the role.
- Add one or more users or groups to the community-metrics-run role.
- Click OK.
- Save the change to the master configuration by clicking the Save link in the "Messages" box at the beginning of the page.
- Add the same users or groups to the community-metrics-run role of the Communities application.
- Synchronize all nodes in the cell to the Deployment Manager, and then restart the node agents:
- On the navigation tree, click System Administration > Nodes.
- Click the Full Resynchronize button in the table.
- Return to the navigation tree and click System Administration > Node agents.
- In the nodes table, click the box in front of each node.
- Click the Restart button in the table.
By default, the community-metrics-run role is already assigned to All Authenticated in Application's Realm for both the Metrics and Communities applications, which means that every community owner has the authority to run community metrics for their own communities. Therefore, if you want to grant access only to specific users or groups, first remove All Authenticated in Application's Realm from the Metrics and Communities applications.
Configure the IBMConnectionsMetricsAdmin role on Cognos
Configure the IBMConnectionsMetricsAdmin role in Cognos Business Intelligence to ensure that the Metrics administrator has access to features and reports.
The default custom authentication provider is configured automatically in Cognos during installation and configuration. The name of the custom authentication provider is the value specified in the cognos.namespace setting of the cognos-setup.properties file. When configuring the IBMConnectionsMetricsAdmin role, you must be logged in using the Cognos administrator account specified in the cognos.admin.username setting of the cognos-setup.properties file. If the Cognos administrator cannot view and add other users, consult your LDAP administrator.
After you have configured LDAP authentication for Cognos Business Intelligence, configure the IBMConnectionsMetricsAdmin role so that specified LDAP users can access Cognos features. In particular, you will want to add the following users to this role:
- The user assigned to the Cognos administrator account
The Cognos administrator is the primary person responsible for configuring Cognos features and reports.
- All users who have been assigned to the admin role for Connections
Anyone tasked with administering the Connections deployment should have access to Cognos features to ensure they can manage the full deployment as needed.
- All users who have been assigned to the metrics-report-run role
Users who have been authorized to run global metrics reports require access to Cognos before they can work with the reports.
- Set cognos.admin.username as an administrator for WAS as follows:
- Start the Deployment Manager and then log in to its admin console.
- Click Users and Groups > Administrative user roles and then click Add.
- Select Administrator from Roles and then search for the user cognos.admin.username, which is specified in cognos-setup.properties file.
- Select the target user and click the move button to move the user name to the Mapped to role field.
- Click OK and then click Save.
- Log out of the dmgr.
- Restart the dmgr and the nodes.
- Restart the Cognos server.
- Log in to the dmgr as cognos.admin.username. Make sure that this user can search for users and groups in the WAS admin console.
- Use a browser to navigate to the Cognos deployment with the following address (a command-line check follows this procedure):
http://Host_Name:Port/Context_Root/servlet/dispatch/ext
where:
- Host_Name is the fully qualified host name of the Cognos server; for example, host.example.com. This value is specified in the was.fqdn.hostname property in the cognos-setup.properties file used for installing the server.
- Port is the port that the Cognos server is listening on.
- Context_Root is the context root to which you installed the Cognos server; for example, cognos. This value is specified in the cognos.contextroot property in the cognos-setup.properties file; its default value is "cognos".
- Log in to Cognos using the Cognos administrator account that you set up previously.
- On the next page, click Launch and then select IBM Cognos Administration.
- Select the Security tab.
- On the Directory page, select Cognos from the list.
- Add users to the IBMConnectionsMetricsAdmin role:
- Locate the IBMConnectionsMetricsAdmin role and click the More button that follows it.
By default the list displays 15 roles at a time. To see more roles, use the arrow keys to scroll through the list or edit the number of entries displayed at one time.
- Click the Set properties icon.
- In the properties window, click the Members tab, and then click Add.
- In the Add window, click Show users in the list.
- Select the directory named with the value specified in cognos.namespace in cognos-setup.properties file from the Directory list.
- Select all users who require administrator access to Cognos Business Intelligence, and click Add to add them to the role.
Use the Search button to search for a particular user. Remember to add at least the following users:
- The Cognos administrator
- All Connections administrators
- All users assigned to the metrics-report-run role
If a folder icon displays next to a user’s name and you cannot select that user, this might indicate that Cognos is treating the user as a folder instead of as a user. For instructions on correcting this problem, see Troubleshooting the Cognos BI Server.
- Click OK to save the change.
- Limit access to the System Administrators role by removing Everyone from the members list:
- Back in the Cognos roles list, locate the System Administrators role and click the More button that follows it.
- Click the Set properties icon.
- In the properties window, click the Members tab.
- In the Members window, select Everyone, and then click Remove to delete it from the list of members.
- Click OK to save the change.
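If you prefer to confirm from the command line that the Cognos dispatcher is reachable (assuming curl is available; Host_Name, Port, and Context_Root are the placeholders described earlier in this procedure):
# Expect an HTTP response rather than a connection error
curl -I "http://Host_Name:Port/Context_Root/servlet/dispatch/ext"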
Configure PowerCube refresh schedules
By default, IBM Cognos Transformer refreshes each PowerCube with incremental updates once each day, and replaces the cube’s data for the current month once a week. These jobs are scheduled by default but you might need to modify the schedules to avoid conflicts with other activities.
When scheduling the refresh jobs, keep the following issues in mind:
- These jobs should run at times when the Cognos system usage is relatively low, to minimize the impact on normal usage. You should adjust these times based on your system's usage pattern.
- The weekly refresh should run only once every week; since it will take longer to complete, you should schedule it for the time when system usage is lowest (for example, on the weekend).
- The daily refresh should run early in the morning (for example, just after midnight), so users can see the latest metrics for the previous day.
- On the computer hosting Cognos Transformer, schedule the PowerCube updates, making sure the schedules for the daily and weekly jobs do not collide:
- AIX or Linux:
Edit the cron jobs in the system crontab (see the sketch after this list).
- Windows:
Edit the MetricsCubeDailyRefresh job to ensure it does not collide with the weekly refresh. You can modify the job’s properties in the Task Scheduler Library; for more information see the next topic.
- After the refresh completes, run the build-all script to make sure that the metrics are loaded successfully. The build-all script is located in the <Transformer install path>/metricsmodel directory.
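A rough sketch of what the two crontab entries might look like on Linux follows. The script names and Transformer path are placeholders, not the actual names created by the installer; keep whatever entries already exist in the crontab and adjust only the schedule fields:
# m h dom mon dow  command
# Daily incremental refresh shortly after midnight
15 0 * * *  /opt/IBM/CognosTF/metricsmodel/daily-refresh.sh
# Weekly rebuild early Sunday morning, when usage is lowest
30 2 * * 0  /opt/IBM/CognosTF/metricsmodel/weekly-rebuild.sh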
Configure the job scheduler for Cognos Transformer on Windows
Rather than store administrative credentials in a script, you can add them to the job properties of the IBM Cognos Transformer to enable scheduled tasks on Microsoft Windows.
Finish configuring the job scheduler to run the Transformer periodically by adding the Windows Administrator credentials to the MetricsCubeDailyRefresh scheduler job.
- Click Start > Control Panel > Administrative Tools and click Task Scheduler.
- In the navigation tree, click Task Scheduler Library.
- In the list of scheduled tasks, click MetricsCubeDailyRefresh to view its properties.
- In the MetricsCubeDailyRefresh Properties window, open the General tab and click the Change User or Group button.
- In the Select user or group dialog box, type administrator in the Enter the object name to select field, and then click the Check Names button.
The Task Scheduler compiles a list of accounts with administrative access on this Windows server.
- When the Task Scheduler dialog box prompts you to enter the user account information to run this task, select the Windows Administrator user name to associate with the MetricsCubeDailyRefresh task, type the password for that account, and then click OK.
The selected administrator account now has the authority to run the MetricsCubeDailyRefresh task.
- Click OK to save the change and close the MetricsCubeDailyRefresh Properties window.
- Repeat these steps on the scheduled task MetricsCubeWeeklyRebuild.
- Close the Task Scheduler.
Configure Cognos Business Intelligence to use IBM HTTP Server
IBM Cognos Business Intelligence uses the same IBM HTTP Server as Connections, but you must configure Cognos Business Intelligence to work with that server.
After you have configured IBM HTTP Server as described in the section Configure IBM HTTP Server, complete these tasks to configure the Cognos BI Server and the Cognos Transformer components to use HTTP:
To ensure that community metrics runs smoothly, one HTTP Server setting needs to be changed. In the plugin-cfg.xml file of the HTTP Server, change the ServerIOTimeout value for the Cognos server. The plugin-cfg.xml file is located in <WAS root>\profiles\Dmgr01\config\cells\<CellName>\nodes\<HTTPNodeName>\servers\<HTTPServerName>. Search for the Cognos server-related section and modify ServerIOTimeout. By default the value is set to 60 (seconds), which is not enough in most situations. The recommended value is 300 seconds (5 minutes). Save the file, synchronize the nodes, and restart the entire Connections environment.
Check LotusConnections-config.xml to ensure the web addresses specified in the href and ssl_href of the Cognos application have been updated to the HTTP server.
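To locate the current ServerIOTimeout value before editing it, a quick check might look like the following sketch (the WAS root, cell, node, and server names are placeholders from the path above):
grep -n "ServerIOTimeout" /opt/IBM/WebSphere/AppServer/profiles/Dmgr01/config/cells/CellName/nodes/HTTPNodeName/servers/HTTPServerName/plugin-cfg.xml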
Configure Cognos BI Server to use HTTP
Configure the Cognos BI Server to use the IBM HTTP Server that operates with the Connections deployment.
Use the Cognos Configuration tool to specify settings that enable Cognos BI Server to operate with IBM HTTP Server.
The Cognos Configuration tool provides a graphical user interface. If your IBM AIX or Linux server does not support a graphical user interface, see the topic Configure HTTP manually for Cognos BI Server for instructions on configuring these settings manually.
- Start the Cognos Configuration Tool:
- Navigate to the /bin64 directory of the Cognos BI server installation directory. For example:
/opt/IBM/CognosBI/bin64/
- Start the Cognos Configuration tool by running the following command:
- AIX, Linux: ./cogconfig.sh
- Windows: cogconfigw.exe
- Expand Local Configuration > Environment and edit the URLs for the following properties by removing any reference to ports 908x and replacing them with port 80:
The URLs must be updated to point to the HTTP server's host name and port number. The port number must be included even if it's the standard port 80.
- Gateway Settings
In this section, change only the "Dispatch URIs for gateway" attribute.
- Other URI Settings
In this section, change only the "Dispatcher URI for external applications" attribute.
You might need to update the Gateway URI and Controller URI in the Gateway settings to point to the HTTP server instead of 'localhost' if the problem described in this technote occurs.
- Save your changes.
- Exit the Cognos Configuration tool, making sure to select No at the following prompt: The service 'IBM Cognos' is not running on the local computer. Before you can use it your computer must start the service. Do you want to start this service before exiting?
- Restart the Cognos server:
- Stop the IBM WAS that hosts the Cognos server.
- Wait at least 1 full minute to ensure that all Cognos processes have stopped (a command-line check follows this procedure):
- AIX or Linux: cgsServer.sh and CAM_LPSvr processes
- Windows: cgsLauncher.exe and CAM_LPSvr processes
- Start WAS.
- Start the Cognos server.
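On AIX or Linux, a quick way to confirm that the Cognos processes named above have stopped before starting WAS again (sketch only; an empty result means no processes remain):
ps -ef | grep -E "cgsServer.sh|CAM_LPSvr" | grep -v grep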
Configure HTTP manually for Cognos BI Server
If your IBM Cognos Business Intelligence server runs on IBM AIX or Linux and does not provide a graphical user interface, you can configure HTTP settings manually.
You can configure HTTP by adding a component to the Cognos BI Server’s cogstartup.xml file and then customizing it for the deployment.
For more information, see the Cognos information center.
- On the computer where you installed Cognos Business Intelligence, navigate to the /configuration directory within the installation location of the BI Server component (specified by the cognos.biserver.install.path property in the cognos-setup.properties file); for example:
- AIX or Linux: /opt/IBM/CognosBI/configuration
- Windows: C:\IBM\Cognos\configuration
- Make a backup copy of the cogstartup.xml file.
- Open the working copy of the cogstartup.xml file for editing.
- In the file, locate the following parameters and update their URI port references from 90xx to port 80:
In this file, the URLs must be updated to point to the HTTP server's host name and port number. The port number must be included even if it's the standard port 80.
- <crn:parameter name="gatewayDispatcherURIList">
- <crn:parameter name="sdk">
After your changes, those parameters should look like the ones that follow:
<crn:parameter name="gatewayDispatcherURIList" opaque="true">
  <crn:value xsi:type="cfg:sortedArray">
    <crn:item xsi:type="xsd:anyURI" order="0">http://cognos.example.com:80/cognos/servlet/dispatch/ext</crn:item>
  </crn:value>
</crn:parameter>
<crn:parameter name="sdk">
  <crn:value xsi:type="xsd:anyURI">http://cognos.example.com:80/cognos/servlet/dispatch</crn:value>
</crn:parameter>
- Save and close the file.
- Validate the modified file:
- Set JAVA_HOME to the WAS_install_path/java directory.
- Run the configuration script:
- AIX or Linux: ./cogconfig.sh -config
- Windows: cogconfig.bat -config
- Check the Cognos_BI_Server_install_path/logs/cogconfig_response.csv file for a success message.
- Restart the Cognos server:
- Stop the IBM WAS that hosts the Cognos server.
- Wait at least 1 full minute to ensure that all Cognos processes have stopped:
- AIX or Linux: cgsServer.sh and CAM_LPSvr processes
- Windows: cgsLauncher.exe and CAM_LPSvr processes
- Start WAS.
- Start the Cognos server.
Configure Cognos Transformer to use HTTP
Configure the Cognos Transformer to use the IBM HTTP Server that operates with the Connections deployment.
Use the Cognos Configuration tool to specify settings that enable Cognos Transformer to operate with IBM HTTP Server.
The Cognos Configuration tool provides a graphical user interface. If your IBM AIX or Linux server does not support a graphical user interface, see the topic Configure HTTP manually for Cognos Transformer for instructions on configuring these settings manually.
- Set the JAVA_HOME variable:
- Navigate to the WAS_install_root/bin directory. For example:
- IBM AIX or Linux: /opt/IBM/WebSphere/AppServer/bin
- Windows: C:\IBM\WebSphere\AppServer\bin
- Run the following command:
- AIX or Linux: setupCmdLine.sh
- Windows: setupCmdLine.bat
- Set environment variables to point to the Cognos BI Server’s /bin directory by running the following command:
By default, the Transformer’s environment variables point to its own directory, so you must change them to point to the BI Server’s directory.
- AIX: export LIBPATH=/opt/IBM/CognosBI/bin64/
- Linux: export LD_LIBRARY_PATH=/opt/IBM/CognosBI/bin64/
- Start the Cognos Configuration Tool:
- Navigate to the /bin directory of the Cognos Transformer installation directory. For example:
- AIX, Linux: /opt/IBM/CognosTF/bin/
- Windows: C:\IBM\CognosTF\bin
- Start the Cognos Configuration tool by running the following command:
- AIX, Linux: ./cogconfig.sh
- Windows: cogconfigw.exe
- Expand Local Configuration > Environment and edit the URLs for the following properties by removing any reference to ports 908x and replacing them with port 80:
The URLs must be updated to point to the HTTP server's host name and port number. The port number must be included even if it's the standard port 80.
- Gateway Settings
- Other URI Settings
- Save your changes.
- Exit the Cognos Configuration tool.
You do not need to restart the Transformer component.
Configure HTTP manually for Cognos Transformer
If your IBM Cognos Business Intelligence server runs on IBM AIX or Linux and does not provide a graphical user interface, you can configure HTTP settings manually.
You can configure HTTP by adding a component to the Cognos Transformer’s cogstartup.xml file and then customizing it for the deployment.
For more information, see the Cognos information center.
- On the computer where you installed Cognos Business Intelligence, navigate to the /configuration directory within the installation location of the Transformer component (specified by the cognos.transformer.install.path property in the cognos-setup.properties file); for example:
- IBM AIX or Linux: /opt/IBM/Cognos/configuration
- Windows: C:\Program Files (x86)\IBM\Cognos\configuration
- Make a backup copy of the cogstartup.xml file.
- Open the working copy of the cogstartup.xml file for editing.
- In the file, locate the following parameters and update their URI port references from 90xx to port 80:
In this file, the URLs must be updated to point to the HTTP server's host name and port number. The port number must be included even if it's the standard port 80.
- <crn:parameter name="gateway">
- <crn:parameter name="sdk">
After your changes, those parameters should look like the ones that follow:
<crn:parameter name="gateway">
  <crn:value xsi:type="xsd:anyURI">http://cognos.example.com:80/cognos/servlet/dispatch</crn:value>
</crn:parameter>
<crn:parameter name="sdk">
  <crn:value xsi:type="xsd:anyURI">http://cognos.example.com:80/cognos/servlet/dispatch</crn:value>
</crn:parameter>
- Save and close the file.
- Validate the modified file:
- Set the path as shown for your operating system:
- AIX: LIBPATH=Cognos_BI_Server_install_path/bin64
- Linux: LD_LIBRARY_PATH=Cognos_BI_Server_install_path/bin64
- Windows: PATH=Cognos_BI_Server_install_path/bin64;%PATH%
- Set JAVA_HOME to the WAS_install_path/java directory.
- Run the configuration script:
- AIX or Linux: ./cogconfig.sh -config
- Windows: cogconfig.bat -config
- Check the Cognos_Transformer_install_path/logs/cogconfig_response.csv file for a success message.
You do not need to restart the Transformer component.
Configure Cognos for SSL
Set up IBM Cognos to handle secure https URLs.
To enable IBM Cognos components to use an SSL-enabled Web server, you must have copies of the trusted root certificate (the certificate of the root Certificate Authority which signed the Web server certificate) and all other certificates that make up the chain of trust for the Web server's certificate. These certificates must be in Base64-encoded ASCII (PEM) or DER format, and must not be self-signed, because self-signed certificates will not be trusted by IBM Cognos components.
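If you do not already have the certificate files on hand, one possible way to view the chain that the SSL-enabled web server presents is to query it with openssl. This is an illustration only; openssl is not part of the Cognos or Connections tooling, and webserver.example.com is a placeholder for your web server's host name:
openssl s_client -connect webserver.example.com:443 -showcerts </dev/null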
- Import certificates into IBM Cognos Transformer Trust Store.
- Open a command prompt on the machine where Transformer is installed and change to the <transformer installation>\bin directory. For example:
- IBM AIX, Linux: /opt/IBM/CognosTF/bin/
- Windows: C:\IBM\CognosTF\bin
- Repeat the following command for each certificate (root and intermediate-level certificates):
Replace CA_certificate_fileName with the correct filenames of the Root and Intermediate-level certificates.
- AIX or Linux: ./ThirdPartyCertificateTool.sh -T -i -r CA_certificate_fileName -D ../configuration/signkeypair -p password
- Windows: ThirdPartyCertificateTool.bat -T -i -r CA_certificate_fileName -D ..\configuration\signkeypair -p password
The trust store password should have been set by your administrator; the default is NoPassWordSet. If the ThirdPartyCertificateTool is unable to locate a valid JRE, set the JAVA_HOME environment variable to the Java™ Runtime Environment (JRE) that the product is configured to use. For example:
- AIX or Linux: export JAVA_HOME=/usr/java/jre
- Windows: set JAVA_HOME=<Cognos_Installation>\bin\jre\6.0
- AIX or Linux: ./ThirdPartyCertificateTool.sh -T -i -r hostname_Certificate.cer -D ../configuration/signkeypair -p NoPassWordSet
- Windows: ThirdPartyCertificateTool.bat -T -i -r c:\hostname_Certificate.cer -D ..\configuration\signkeypair -p NoPassWordSet
- Configure Cognos Transformer and BI to use HTTPS.
- Configure Cognos Transformer as follows:
- Set the JAVA_HOME variable: Navigate to the WAS_install_root/bin directory. For example:
- IBM AIX or Linux: /opt/IBM/WebSphere/AppServer/bin
- Windows: C:\IBM\WebSphere\AppServer\bin
- Run the following command:
- AIX or Linux: setupCmdLine.sh
- Windows: setupCmdLine.bat
- (AIX or Linux) Set environment variables to point to the Cognos BI Server’s /bin directory by running the following command. By default, the Transformer’s environment variables point to its own directory, so you must change them to point to the BI Server’s directory:
- AIX: export LIBPATH=/opt/IBM/CognosBI/bin64/
- Linux: export LD_LIBRARY_PATH=/opt/IBM/CognosBI/bin64/
- Start the Cognos Transformer Configuration Tool: Navigate to the /bin directory of the Cognos Transformer installation directory. For example:
- IBM AIX, Linux: /opt/IBM/CognosTF/bin/
- Windows: C:\IBM\CognosTF\bin
- Start the Cognos Configuration tool by running the following command:
- AIX, Linux: ./cogconfig.sh
- Windows: cogconfigw.exe
- Expand Local Configuration > Environment and edit the URLs for the following properties by replacing the http URLs with https URLs.
- Gateway Settings
- Other URI Settings
The URLs must be updated to point to the HTTP server's host name and port number. The port number must be included even if it is the standard port 80.
- Save your changes.
- Exit the Cognos Configuration tool. You do not need to restart the Transformer component.
- Configure Cognos BI to use https as follows:
- Start the Cognos Configuration Tool by navigating to the /bin64 directory of the Cognos BI server installation directory. For example:
- IBM AIX, Linux: /opt/IBM/CognosBI/bin64/
- Windows: C:\IBM\CognosBI\bin64
- Start the Cognos Configuration tool by running the following command:
- AIX, Linux: ./cogconfig.sh
- Windows: cogconfigw.exe
- Expand Local Configuration > Environment to edit the URLs for the following properties by replacing http URLs with https URLs.
The URLs must be updated to point to the HTTP server's host name and port number. The port number must be included even if it is the standard port 80.
In the Gateway Settings section, change only the Dispatch URIs for gateway attribute.
In the Other URI Settings section, change only the Dispatcher URI for external applications attribute.
- Save your changes.
- Exit the Cognos Configuration tool, making sure to select No at the following prompt:
The service 'IBM Cognos' is not running on the local computer. Before you can use it your computer must start the service. Do you want to start this service before exiting?
- Restart the Cognos server.
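After these changes, the edited gateway and dispatcher URIs typically resemble the following example. The host name is a placeholder, and port 443 is an assumption for the SSL port of your HTTP server; use the values of your own SSL-enabled web server:
https://cognos.example.com:443/cognos/servlet/dispatch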
Configure Connections Content Manager for Libraries
This section is required if you installed the Connections Content Manager add-on during the initial installation. The configuration process depends on whether you are setting up an existing FileNet deployment or have installed a new one.
This section assumes you have already performed the Connections Content Manager preinstall and installation tasks required for making Libraries available in your Connections implementation.
You can select to use an existing FileNet deployment for Connections Content Manager or install a new FileNet deployment to use for Connections Content Manager as the following topics describe:
- Pre-installation tasks
To use Connections Content Manager, configure Connections and FileNet with the same WebSphere federated repositories. When Connections is installed, the installer provides a user name and password for a system user account created by the installer to handle feature-to-feature communication. The Connections installer also creates a J2C authentication alias name connectionsAdmin. This alias is filled with the specified user and maps that user to a set of application roles.
- Installing Connections
The Connections Content Manager deployment option only displays if you chose to install the Connections Content Manager feature during installation.
Configure Connections Content Manager with a new FileNet deployment
Use these tasks to set up your new installation of FileNet to work with Connections Content Manager.
Configure Libraries automatically
You can configure Libraries automatically using the ccmDomain tool to create a P8 domain, GCD, Object Store, and AddOns.
If the login properties in your LDAP configuration are set to a value other than uid, or if uid is not the first value in the list, then the first <loginAttribute> in the <loginAttributes> section of profiles-config.xml must match the value that FileNet uses to look up a user. By default the uid value is used, so if the security principal for FileNet is not uid, you must modify profiles-config.xml to move the attribute that matches the principal to the first position in the <loginAttributes> section. For example, if email is used as the principal, the <loginAttributes> section should look like this:
<loginAttributes> <loginAttribute>email</loginAttribute> <loginAttribute>uid</loginAttribute> <loginAttribute>loginId</loginAttribute> </loginAttributes>
If you encounter a Transaction is ended due to timeout message, you can modify the transaction timeout as follows:
- From the WAS admin console, click Servers > Server Types > WebSphere application servers > server1 > [Container Settings] Container Services > Transaction Service, where server1 stands for the server running the FileNet application.
- Click the Configuration tab, and set the Maximum transaction timeout parameter value to at least 600 (seconds).
- Click Apply and then click Save.
After you create the object store, make sure to change the value back to the default.
If you have installed FileNet Content Platform Engine Fix pack 1 before creating the object store and the auto-upgrade has finished successfully, remove the -Dibm.filenet.security.vmmProvider.waltzImpl=true JVM argument in FileNet server as follows:
- From the WAS admin console, click Servers > Server Types > WebSphere application servers > server1 > Java Process Management > Process Definition > Java Virtual Machine where server1 stands for the server running the FileNet application.
- In Generic JVM Arguments, remove this JVM argument: -Dibm.filenet.security.vmmProvider.waltzImpl=true.
- Click Apply and then click Save.
- Restart the server running the FileNet application.
Make sure the CCM shared file system is readable and writable before running the tool.
- To create a P8 domain and Global Configuration Data (GCD), perform the following steps:
- Locate the ccmDomainTool automation tool under the <CE_HOME>\addons\ccm\ccmDomainTool.
<CE_HOME> typically means the directory where Connections is installed, such as: /opt/IBM/Connections.
- Start the server where the Connections Content Manager is deployed or start the Connections Content Manager cluster.
- (Non-Windows only) Set the execute permission by running the command: chmod 755 *.
- Create the P8 domain and GCD as follows:
- For Windows platform, run the command: createGCD.bat
- For non-Windows platforms, run the command: ./createGCD.sh
You will be required to enter the Connections administrator password twice. Near the end of the script, when you are prompted to enter another username and password, reenter the Connections administrator account. This second user name/password is used to register two Quickr addons.
- To create an Object Store and AddOns, perform the following steps:
- Find the ccmDomainTool automation tool under the <CE_HOME>\addons\ccm\ccmDomainTool directory.
- Make sure you already have created the domain and GCD with ccmDomainTool or manually.
- Create the Object Store as follows:
- For Windows platform, run the command: createObjectStore.bat
- For non-Windows platforms, run the command: ./createObjectStore.sh
You will be required to input the Connections administrator password (which is the FileNet Domain administrator).
Results
Once these inputs have been collected, the program returns the SID value. Refer to Generate SID values for more information.
If the scripts fail, fix the problem, drop the GCD and Object Store databases, clean out everything under the CCM shared file system (<shared content store>/ccm), and then rerun the scripts. For examples of possible causes of script failure, refer to Troubleshooting the ccmDomain tool.
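For illustration only, on a DB2 deployment the cleanup before rerunning the scripts might look like the following sketch. The database names GCDDB and OSDB and the shared content store path are placeholders; substitute the names and path used in your environment:
db2 drop database GCDDB
db2 drop database OSDB
rm -rf /opt/IBM/ConnectionsShared/ccm/*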
Generate Security identifiers to be used in configuration
You need to generate Security Identifiers (SID) values for installing Connections Content Manager with a new FileNet deployment or an existing FileNet deployment.
A Security Identifier (SID) is an internal ID used within Connections Content Manager and FileNet. This internal ID is used to reference a user in some areas of configuration. Specifically, you will need SIDs when setting up anonymous access and configuring indexing. Follow these instructions if you chose to install Connections Content Manager with a new FileNet deployment.
- Find the tool under the <CE_HOME>\addons\ccm\ccmDomainTool.
- Run the following command to generate the SID value for an associated username:
- Windows: generateSID.bat
- Linux, AIX: generateSID.sh
You will be asked for the following inputs:
- Domain admin user password
- Username you want to generate the SID for
Once these inputs have been collected, the program will return the SID value.
Configure FileNet to be an Activity Stream producer with a new deployment
You can set up FileNet to send events to the Connections Activity Stream.
- Open the Administration Console for Content Platform Engine (ACCE) on the FileNet system with a web browser and login with the administrator's username and password by accessing the following URL:
ACCE supports a subset of the browsers supported by Connections. While some operations in Administration Console for Content Platform Engine might work with other browsers, if an error is encountered, make sure you are running a supported browser.
See the system requirements for supported software for FileNet Content Engine. Click Administrative Console for Content Engine in the By component column, view Prerequisites and check Web Browser support.
http://server:port/acce where server is the host name of the server where the Content Platform Engine application (by default, FileNetEngine) is deployed, and port is the port number the server is listening on. For example: http://cpe.example.com:9080/acce
- Expand the Object Stores node in the navigation tree, right-click the existing object store you want to configure, and then click Open. The name of the object store, if the createObjectStore.sh|.bat command line tool was used to create it, is ICObjectStore.
- Once the object store has opened, click Search to open the Search page.
- On the Simple Search tab page, select Collaboration Configuration from the Select From Table dropdown.
- From the Select Columns list select the asterisk (*). Use the move button to place "*" into the Selected pane, and then click Search.
- Click OK on the Message window that displays. A single row is returned.
- Click the result link in the ID column to open it for viewing and editing:
- In the results view tab, click the Properties inner tab.
- Scroll down and set the following configuration properties, which are required for Content Platform Engine activity stream generation to function properly.
Table 38. Configuration properties for Addons
- Activity Stream Retrieval URL: URL for FileNet Collaboration Services. Enter this value exactly as shown, including the braces {}: {ecm_files}
- Activity Stream HTTP Endpoint URL: Base URL for Connections, for example https://connections.example.com or https://connections.example.com:9443. Must use HTTPS. Do not add an extra slash at the end of the URL. This should use the host and port of your HTTP server. To test Activity Stream without the HTTP server, this must be the port of the application server hosting the Connections News application. HTTP server configuration is a mandatory post-installation step.
- Activity Stream Gadget URL: Fully hard-coded Gadget URL. Enter this URL exactly as shown, including the text {connections}: {connections}/resources/web/com.ibm.social.ee/ConnectionsEE.xml
- Config 1: Holds the password for the Connections user defined in the Config 2 property and will be encrypted after input.
- Config 2: Holds the login name of a Connections user and will be encrypted after input. This user must be in the trustedExternalApplication role on the Widget Container application in Connections. By default, the Connections administrator has these privileges and may be used here.
- Activity Stream Extended Settings: Set as follows:
- Click the action menu associated with the Property Value for this entry and select Display or Edit Value.
- Sequentially enter the following five entries by placing each of the strings in the Enter a string value field and clicking Add for each entry. When finished, click OK.
- activityStreamRetrievalURL={0}/atom/library/{1}%3B{2}/{3}/{4}/entry
- activityStreamAnonymousRetrievalURL={0}/atom/anonymous/library/{1}%3B{2}/{3}/{4}/entry
- activityStreamOauthRetrievalURL={0}/atom/oauth/library/{1}%3B{2}/{3}/{4}/entry
- activityStreamFileLinkURL={0}/atom/library/{1}/document/{2}/media/{3}
- activityStreamNullifyActionableURL={0}/connections/opensocial/basic/rest/activitystreams/@me/@all/@all/{1}
- Download Count Ignored User Ids: A multi-valued property (MVP) string that holds the SIDs of users whose content downloads will not be counted. This list must include the user used by Connections to index Connections Content Manager libraries into Connections search. By default, for a new FileNet deployment, this user is the same as the Connections administrative user. For an existing FileNet deployment, this is the administrative user you provided during the installation of Connections Content Manager. This user is referenced by the FileNet Admin J2C authentication alias as configured in the WAS admin console under Security > Global security > Java Authentication and Authorization Service > J2C authentication data. Use the task Generate SID values to find the SID for the user and enter the value here.
- Download Count Anonymous User Ids: An MVP string that holds the SIDs of users whose content downloads will be counted as anonymous.
- Activity Stream Ignored Users Ids: An MVP string that holds the SIDs of users whose activities will not be added to the feed.
- After editing properties, click Save and then Close.
Set up anonymous access for a new FileNet deployment
If anonymous access has not been enabled for IBM FileNet Collaboration Services 2.0, enable it now. Connections requires anonymous access to be set in FileNet for public communities.
IBM FileNet Collaboration Services implements anonymous access with a designated user that is used only for this purpose. The user should be a system-type user that is not used by a real person. The user ID does not need, and should not have, any particular privileges on the object store beyond what is given by the installation guide. This user's access control records will determine what level of access is given to anonymous users. Consequently, choose a functional ID that is reserved for this purpose and that does not have special access.
Configuring an anonymous user is required if users will be accessing Connections communities anonymously. In some cases, such as when desktop single sign-on is enabled, or when roles in the Communities application have been restricted to limit access to authenticated users, setting up anonymous access for FileNet is optional. Refer to Roles for information on restricting access to anonymous users in communities.
The display name of the user used in this role might appear in some supplemental user interfaces, so a user account or functional ID should be chosen with a suitable display name matching the purpose of this account, for instance, Anonymous User. Do not choose the administrative account ID. Follow these steps to configure anonymous access:
- Log into the WAS admin console that hosts your FileNet server with the FileNet Collaboration Services application.
- Click Applications > WebSphere enterprise applications > fncs > User RunAs roles.
- Select the Anonymous role and enter the username and password of the LDAP user designated for the anonymous access role.
- Click Apply and then click OK to save.
- Open the Administration Console for Content Platform Engine (ACCE) and expand the Object Stores node on the side navigation tree.
- Right-click ICObjectStore, the object you want to configure, and then click Open.
- Select Search, select Collaboration Configuration in the Select From Table dropdown menu, and then click OK.
- From the Select Columns list, select the asterisk (*). Use the move button to place (*) into the Selected pane, and then click Search. A single result object displays after clicking OK for any popup warnings.
- Click the object and then click Properties.
- On the Properties tab, click the Property Value cell for Download Count Anonymous User Ids, which displays a dropdown menu.
- Select Edit list, add the user into the list, and then select it from the dropdown menu. The user should be the same user you provided for the User RunAs roles in the WAS admin console in step 2; however, the SID of the user must be provided instead of the username. To understand how SID values are created, refer to Generate SID values.
- Click OK.
Setting an LDAP group to be domain administrator instead of a specific user
Select or create a group in LDAP to act as IBM FileNet domain or Object Store administrators or both, and add any desired user for this administrator role into this group.
You should consult your LDAP documentation for complete information about LDAP groups.
This task involves both a domain administrator and an object store administrator.
Follow these instructions to set an LDAP group to be domain administrator in FileNet:
For the domain administrator
- Log into the Administrative Console for Content Engine (ACCE)
- Click the Security tab and then click Add.
- Enter the name of the group in the Search text field and click Search.
- Select the group from the Available Users and Groups pane and then use the move button to add it to the Selected Users and Groups pane.
- Set the Apply to field to This object and all children.
- Set Permission group to Full Control.
- Click OK and then click Save.
For the object store administrator
Follow these instructions to set an LDAP group to be object store administrator in FileNet:
- Log into ACCE.
- Click ICObjectStore in the navigation pane under Object Stores.
- Click the Security tab and then click Add.
- Enter the name of the group in the Search text field and click Search.
- Select the group from the Available Users and Groups pane and then use the move button to add it to the Selected Users and Groups pane.
- Set the Apply to field to This object and all children.
- Set Permission group to Full Control.
- Click OK and then click Save.
Configure Connections Content Manager with an existing FileNet deployment
Use these tasks to set up your existing implementation of IBM FileNet to work with Connections Content Manager.
To use an existing FileNet deployment with Connections Content Manager, the deployment must use IBM Virtual Member Manager (VMM), with the Waltz services enabled through a JVM argument, as its directory service provider. If it does not, contact IBM services to migrate the directory service provider to IBM VMM.
Ensure that you have installed the required add-ons to your object store before adding the library widget to a community. For an existing installation of Connections with IBM FileNet, the connectionsAdmin user defined in your FileNet system and the filenetAdmin user defined in your Connections system must be available in the directory configuration of both FileNet and Connections.
For an existing FileNet system, ensure that single sign-on (SSO) has been configured between your FileNet and Connections servers. WebSphere LTPA SSO is recommended.
See Configure SSO between IBM FileNet and Connections for more information.
Configure Profile and Community membership lookups for FileNet
You must deploy both IBM FileNet and Connections with the same WebSphere federated repositories. In other words, the Connections cell security configuration must be pointing to the same LDAP directory that your IBM FileNet is configured to use, with identical configuration options.
IBM FileNet P8 must be configured to use Connections as a source of directory information. This enables Connections to use community members and community owners as if they were groups in the access control model of FileNet. This step is required for the FileNet server selected during the Connections Content Manager installation. If you choose to install a new deployment of FileNet with Connections Content Manager, this step is automated by the installation. If you use an existing FileNet server, you must ensure the directory configuration for FileNet uses the Virtual Member Manager directory provider and then perform the following steps to configure FileNet to use your Connections server for directory information.
For more information about the Virtual Member Manager directory provider, refer to IBM Virtual Member Manager.
Once FileNet has been configured to use Connections for directory information, the FileNet server cannot operate without the Connections Communities application in operation and responding to requests. If this step is performed prior to installing Connections, the Connections Content Manager installation might report a warning when validating the connection to FileNet.
- Cross-certify the two domains/cells for SSO by configuring LTPA / SSO between the Connections and FileNet domains as described in Configure Single Sign On.
Ensure that the same domain name is configured for both domains. In addition, exchange LTPA keys by exporting from the Connections cell to the FileNet cell as described here.
Exporting LTPA keys is done from the WebSphere Integrated Solutions Console for Connections, while importing LTPA keys is done from the WebSphere Integrated Solutions Console for FileNet.
Ensure that for both cells, the interoperability mode and LTPA V1 and V2 cookie names are the same. You can find these values using WebSphere Integrated Solutions Console to navigate to Security > Global security > Web and SIP security > Single sign-on (SSO) .
- Configure JVM properties on the FileNet server as follows:
- Log into WebSphere Integrated Solutions console that hosts your existing FileNet Content Platform Engine server.
- Check your login properties in Global security > Federated repositories > <your_LDAP_Name>.
- Make note of the first value from the login properties field, such as uid. This value will be used later in setting a JVM argument.
- Click Application Servers > <Server Name> > Process definition > Java Virtual Machine .
- In the generic JVM arguments field, add the following code if it is not present already:
-DenableWaltzIdConversion=true -Dibm.filenet.security.vmmProvider.waltzImpl=true -Dcom.ibm.connections.directory.services.j2ee.security.principal=<login_property_value_from_previous_step>
If the login properties field contains multiple values, such as uid;mail, only the first value in the list should be used.
- Click OK to save the changes.
- Configure Waltz and Sonata on the FileNet WebSphere cell. This step configures directory.services.xml, directory.services.xsd, sonata.services.xml, and sonata.services.xsd, and creates a J2C authentication alias to allow FileNet to connect to Connections for directory information. Extract the Waltz archive (see the following paths) to your FileNet server and follow the Readme.txt file to configure Waltz and Sonata on your FileNet cell.
- For Windows: <connections_root>\addons\ccm\waltz\waltz.jar
- For Linux: <connections_root>/addons/ccm/waltz.tar
If you have installed FileNet Content Platform Engine Fix Pack 1 and the auto-upgrade has completed successfully, then after you have completed the Waltz configuration, you can remove the -Dibm.filenet.security.vmmProvider.waltzImpl=true argument.
- Restart dmgr and the FileNet application server.
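As an illustration, if uid is the first value in your login properties field, the complete set of arguments added to the generic JVM arguments field in step 2 would read as follows:
-DenableWaltzIdConversion=true -Dibm.filenet.security.vmmProvider.waltzImpl=true -Dcom.ibm.connections.directory.services.j2ee.security.principal=uid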
Generate Security identifiers for an existing FileNet deployment
You need to generate Security Identifiers (SID) values for installing Connections Content Manager with an existing FileNet deployment.
A Security Identifier (SID) is an internal ID used within Connections Content Manager and FileNet. This internal ID is used to reference a user in some areas of configuration. Specifically, you will need SIDs when setting up anonymous access and configuring indexing. Follow these instructions if you chose to install Connections Content Manager with an existing FileNet deployment.
- On the Connections system, create a directory to hold the JARs you will copy from the FileNet side. For example:
- Windows: C:\CPEFiles
- Linux, AIX: /CPEFiles
- Copy the Jace.jar and log4j.jar files from the FileNet Content Platform Engine deployment.
- Place these JAR files into a subdirectory called lib in the directory you previously created. For example:
- Windows: C:\CPEFiles\lib
- Linux, AIX: /CPEFiles/lib
- Edit and save the generateSID.bat|.sh script file by updating the line containing the CE_HOME variable to refer to the directory you created in step #1. For example:
- Windows: set CE_HOME=C:/CPEFiles
- Linux, AIX: export CE_HOME=/CPEFiles
- Open a command line console and change directory to <CE_HOME>.
- Set the JAVA_HOME environment variable if it has not already been defined. For example:
- Windows: set JAVA_HOME=C:\Program Files (x86)\IBM\WebSphere\AppServer\java
- Linux, AIX: export JAVA_HOME=/opt/IBM/WebSphere/AppServer/java
- Run the following command to generate the SID value for an associated username:
- Windows: generateSID.bat
- Linux, AIX: generateSID.sh
You will be asked for the following inputs:
- Hostname the Content Platform Engine application (FileNetEngine) is running on
- Port number the Content Platform Engine application (FileNetEngine) is listening on
- Domain admin username
- Domain admin password
- Username you want to generate the SID for
Once these inputs have been collected, the program will return the SID value.
Prepare an object store to be used by Connections
You must prepare an object store so that it can be used by Connections.
The following procedure details how to prepare a new Object Store to be used by Connections. In particular, this topic discusses how to manually configure default security on the object store so users have the appropriate initial permissions, including permissions to download public content, add comments, like documents, and other common operations. The topic also covers the installation of FileNet add-ons, which include metadata properties used by Connections and event listeners required for new functions such as Document Approval.
- Register with Global Configuration Data (GCD) as follows:
For Linux, AIX:
- Open a command console.
- Run the command: export JAVA_HOME=/opt/IBM/WebSphere/AppServer/java/jre where /opt/IBM/WebSphere/AppServer is the location where WebSphere Application server is installed
- Change to the FileNet Collaboration Services (FNCS) installation folder: cd /opt/IBM/FNCS
- Run the following command to register the Quickr addons with the Global Configuration Database (GCD): ./addon.sh /opt/IBM/FNCS/CE_API http://127.0.0.1:9080/wsi/FNCEWS40MTOM/ where 127.0.0.1:9080 is the server/port that the Content Platform Engine is installed on.
For Windows:
- Open a command console.
- Run the command: set JAVA_HOME=C:\IBM\WebSphere\AppServer\java\jre where C:\IBM\WebSphere\AppServer is the location where WebSphere Application server is installed
- Change to the FileNet Collaboration Services (FNCS) installation folder: cd C:\IBM\FNCS
- Run the following command to register the Quickr addons with the Global Configuration Database (GCD): .\addon.bat C:\IBM\FNCS\CE_API http://127.0.0.1:9080/wsi/FNCEWS40MTOM/ where 127.0.0.1:9080 is the server/port that the Content Platform Engine is installed on.
- Log into Administration Console for Content Platform Engine (ACCE)
- To open the administration console...
http://mycehost:port/acce
...where...
- mycehost is the name of the server where Content Platform Engine is deployed.
- port is the WSI port that is used by the web application on the server where Content Platform Engine is deployed.
In a highly available environment, use the load-balanced, virtual name for the mycehost:port, for example:
http://virtual_server/acce
- If you receive a prompt to block potentially unsafe components from being run, click No.
- If you receive a prompt asking you if you want to run this application, click Run. Optionally, select the option to always trust content from this publisher.
- Enter your FileNet administrator user name and password.
- On the navigation panel that displays, expand Object Stores and select the object store you will work with.
Before installing the Add-ons, ensure that the following steps 4 through 9 have been performed to configure the proper permission settings. CAUTION: This object store must not have #AUTHENTICATED-USERS on any access list prior to performing these instructions. #AUTHENTICATED-USERS must not have default access to the object store. Granting #AUTHENTICATED-USERS default access, or leaving the default access empty when creating the object store, effectively grants #AUTHENTICATED-USERS read access to all content in the object store and bypasses access controls set by communities.
- Click the Security tab and then click Add to add #AUTHENTICATED-USERS principal with the following permissions settings:
- In the popup dialog, click Search.
- In the Available Users and Groups pane, select #AUTHENTICATED-USERS, and click the move button to place it into the Selected Users and Groups pane.
- For the Apply to dropdown menu, select This object only.
- Under Permission group select Use object store.
- Click OK and then click Save.
- In the Object Store navigation panel, update the permissions on the following Class Definitions:
Clicking on the class opens the class definition panel.
- Object Store > Data Design > Classes
- Custom Object
- Document
- Folder
- Object Store > Data Design > Classes > Other Classes
- Abstract Persistable
- Abstract Queue Entry
- Choice List
- Recovery Bin
- Recovery Item
- Referential Containment Relationship
- Task
- Click the Security tab and then click Add to add #AUTHENTICATED-USERS principal with the following permissions settings:
- In the popup dialog, click Search.
- In the Available Users and Groups pane, select #AUTHENTICATED-USERS, and click the move button to place it into the Selected Users and Groups pane.
- For the Apply to dropdown menu, select This object and all children.
- Under Permission group, check Create instance and View all properties, but deselect read permissions.
- Click OK and then click Save.
- Click Close to close the class definition panel.
- Set default instance permissions on the Choice List class. In the Object Store navigation panel, go to Object Store > Data Design > Classes > Other Classes > Choice List.
- Click Default Instance Security tab of the Choice List class definition panel.
- In the popup dialog, click Search.
- In the Available Users and Groups pane, select #AUTHENTICATED-USERS, and click the move button to place it into the Selected Users and Groups pane.
- For the Apply to dropdown menu, select This object and all children.
- Under Permission group, check View all properties, but deselect read permissions.
- Click OK and then click Save.
- Click Close to close the class definition panel.
- Set default instance permissions on the Task Relationship class as follows. In the Object Store navigation panel, go to Object Store > Data Design > Classes > Other Classes > Task Relationship.
- Click Default Instance Security tab of the Task Relationship class definition panel.
- In the popup dialog, click Search.
- In the Available Users and Groups pane, select #AUTHENTICATED-USERS, and click the move button to place it into the Selected Users and Groups pane.
- For the Apply to dropdown menu, select This object and all children.
- Under Permission group, check View all properties, but deselect read permissions.
- Click OK and then click Save.
- Click Close to close the class definition panel
- Set default instance permissions on the Property Template class for each of the eight Content Engine data types to grant #AUTHENTICATED-USERS the View all properties right on PropertyTemplates that are created by AddOns. These permissions should be set to inherit to all subclasses (InheritableDepth=-1, or This object and all children in the Apply To dropdown if performing these steps manually via FEM/ACCE). The classes that must be modified are as follows:
- PropertyTemplateBinary
- PropertyTemplateBoolean
- PropertyTemplateDateTime
- PropertyTemplateFloat64
- PropertyTemplateId
- PropertyTemplateInteger32
- PropertyTemplateObject
- PropertyTemplateString
- In the Object Store panel, click Actions and then select Install Add-on Features. Ensure all the following add-ons are selected and click OK:
- 5.2.0 Base Application Extensions
- 5.2.0 Base Content Engine Extensions
- 5.2.0 Custom Role Extensions
- 5.2.0 Social Collaboration Base Extensions
- 5.2.0 Social Collaboration Document Review Extensions
- 5.2.0 Social Collaboration Notification Extensions
- 5.2.0 Social Collaboration Role Extensions
- 5.2.0 Social Collaboration Search Indexing Extensions
- 5.2.0 TeamSpace Extensions
- IBM FileNet Services for Lotus Quickr 1.1 Extensions
- IBM FileNet Services for Lotus Quickr 1.1 Supplemental Metadata
- Click OK to close message popup.
Configure FileNet to be an Activity Stream producer
You can set up FileNet to send events to the Connections Activity Stream.
- Open the Administration Console for Content Platform Engine (ACCE) on the FileNet system with a web browser and login with the administrator's username and password by accessing the following URL:
ACCE supports a subset of the browsers supported by Connections. While some operations in Administration Console for Content Platform Engine might work with other browsers, if an error is encountered, make sure you are running a supported browser.
See the system requirements for supported software for FileNet Content Engine. Click Administrative Console for Content Engine in the By component column, view Prerequisites and check Web Browser support.
http://server:port/acce where server is the host name of the server where the Content Platform Engine application (by default, FileNetEngine) is deployed, and port is the port number the server is listening on. For example: http://cpe.example.com:9080/acce
- Expand the Object Stores node in the navigation tree, right-click the existing object store you want to configure, and then click Open. The name of the object store, if the createObjectStore.sh|.bat command line tool was used to create it, is ICObjectStore.
- Once the object store has opened, click Search to open the Search page.
- On the Simple Search tab page, select Collaboration Configuration from the Select From Table dropdown.
- From the Select Columns list select the asterisk (*). Use the move button to place "*" into the Selected pane, and then click Search.
- Click OK on the Message window that displays. A single row is returned.
- Click the result link in the ID column to open it for viewing and editing:
- In the results view tab, click the Properties inner tab.
- Scroll down and set the following configuration properties, which are required for Content Platform Engine activity stream generation to function properly.
Table 39. Configuration properties for Addons
- Activity Stream Retrieval URL: URL for FileNet Collaboration Services. Enter this value exactly as shown, including the braces {}: {ecm_files}
- Activity Stream HTTP Endpoint URL: Base URL for Connections, for example https://connections.example.com or https://connections.example.com:9443. Must use HTTPS. Do not add an extra slash at the end of the URL. This should use the host and port of your HTTP server. To test Activity Stream without the HTTP server, this must be the port of the application server hosting the Connections News application. HTTP server configuration is a mandatory post-installation step.
- Activity Stream Gadget URL: Fully hard-coded Gadget URL. Enter this URL exactly as shown, including the text {connections}: {connections}/resources/web/com.ibm.social.ee/ConnectionsEE.xml
- Config 1: Holds the password for the Connections user defined in the Config 2 property and will be encrypted after input.
- Config 2: Holds the login name of a Connections user and will be encrypted after input. This user must be in the trustedExternalApplication role on the Widget Container application in Connections. By default, the Connections administrator has these privileges and may be used here.
- Activity Stream Extended Settings: Set as follows:
- Click the action menu associated with the Property Value for this entry and select Display or Edit Value.
- Sequentially enter the following five entries by placing each of the strings in the Enter a string value field and clicking Add for each entry. When finished, click OK.
- activityStreamRetrievalURL={0}/atom/library/{1}%3B{2}/{3}/{4}/entry
- activityStreamAnonymousRetrievalURL={0}/atom/anonymous/library/{1}%3B{2}/{3}/{4}/entry
- activityStreamOauthRetrievalURL={0}/atom/oauth/library/{1}%3B{2}/{3}/{4}/entry
- activityStreamFileLinkURL={0}/atom/library/{1}/document/{2}/media/{3}
- activityStreamNullifyActionableURL={0}/connections/opensocial/basic/rest/activitystreams/@me/@all/@all/{1}
- Download Count Ignored User Ids: A multi-valued property (MVP) string that holds the SIDs of users whose content downloads will not be counted. This list must include the user used by Connections to index Connections Content Manager libraries into Connections search. By default, for a new FileNet deployment, this user is the same as the Connections administrative user. For an existing FileNet deployment, this is the administrative user you provided during the installation of Connections Content Manager. This user is referenced by the FileNet Admin J2C authentication alias as configured in the WAS admin console under Security > Global security > Java Authentication and Authorization Service > J2C authentication data. Use the task Generate SID values to find the SID for the user and enter the value here.
- Download Count Anonymous User Ids: An MVP string that holds the SIDs of users whose content downloads will be counted as anonymous.
- Activity Stream Ignored Users Ids: An MVP string that holds the SIDs of users whose activities will not be added to the feed.
- After editing properties, click Save and then Close.
Configure WebSphere SSL for Activity Stream
Configure WebSphere SSL for Activity Stream when you are installing FileNet into an existing Connections 4.5 deployment.
The following steps are not needed for the new FileNet deployment scenario because Connections and FileNet are in the same cell.
If you are installing FileNet into an existing Connections 4.5 deployment, this procedure must be completed: open the WAS administrative console on your FileNet system and install the certificate from your Connections system.
- In a browser, open the WAS administrative console for your FileNet Collaboration Services server; then click Security > SSL certificate and key management > Key stores and certificates > CellDefaultTrustStore > Signer certificates.
In a configuration that employs Tivoli Access Manager (TAM), you must import the certificates from the TAM server.
- Click Retrieve from port.
- Enter the hostname and SSL port that your Connections HTTP server is listening on, and then enter an alias name for this certificate.
- Click Retrieve signer information.
- Review the retrieved signer information and click OK.
- Click Save.
- Restart the application server.
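As a scripted alternative to the console steps above, you can retrieve the signer with the wsadmin tool. This is a sketch only; the host, port, and alias are placeholders, and you should verify the command parameters against your WAS version:
AdminTask.retrieveSignerFromPort('[-keyStoreName CellDefaultTrustStore -host connections-http.example.com -port 443 -certificateAlias connectionsHttpSigner]')
AdminConfig.save()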
Set up anonymous access
If anonymous access has not been enabled for IBM FileNet Collaboration Services 2.0, enable it now. Connections requires anonymous access to be set in FileNet for public communities.
IBM FileNet Collaboration Services (FNCS) implements anonymous access with a designated user that is used only for this purpose. The user should be a system-type user that is not used by a real person. The user ID does not need, and should not have, any particular privileges on the object store beyond what is given by the installation guide. This user's access control records will determine what level of access is given to anonymous users. Consequently, choose a functional ID that is reserved for this purpose and that does not have special access.
Configuring an anonymous user is required if users will be accessing Connections communities anonymously. In some cases, such as when desktop single sign-on is enabled, or when roles in the Communities application have been restricted to limit access to authenticated users, setting up anonymous access for FileNet is optional. Refer to Roles for information on restricting access to anonymous users in communities.
The display name of the user used in this role might appear in some supplemental user interfaces, so a user account or functional ID should be chosen with a suitable display name matching the purpose of this account, for instance, Anonymous User. Do not choose the administrative account ID. Follow these steps to configure anonymous access:
- Log into the WAS admin console that hosts your FileNet server with the FileNet Collaboration Services application.
- Click Applications > WebSphere enterprise applications > fncs > User RunAs roles.
- Select the Anonymous role and enter the username and password of the LDAP user designated for the anonymous access role.
- Click Apply and then click OK to save. FileNet Collaboration Services now reads custom settings from a file that is bundled into the application .ear file during configuration and deployment.
- To add properties, the administrator needs to edit the <FNCS_HOME>\configmanager\profiles\fncs-sitePrefs.properties file, where FNCS_HOME is the FNCS installation directory, before running the configuration wizard.
- Add the following properties to the end of the fncs-sitePrefs.properties file, after the comments, and save it:
anonymousAccessEnabled=true
enablePropertySheetTemplateMinMax=true
- Open Administration Console for Content Platform Engine (ACCE), open the Object Store tab, and select Search in the menu tree.
- Select Collaboration Configuration in the Select From Table drop-down and then click OK. A single object is displayed.
- Right-click the object and then click Properties.
- On the Properties tab, click the Property Value cell for Download Count Anonymous User Ids, which displays a drop-down menu.
- Select Edit list, add the user into the list, and then select it from the dropdown menu. The user should be the same user you provided for the User RunAs roles in the WAS admin console in step 2; however, the SID of the user must be provided instead of the username. To understand how SID values are created, refer to Generate SID values.
- Click OK.
- Follow the steps in Configure FileNet Collaboration Services to build and deploy the FileNet Collaboration Services application.
- Apply the authentication filter as described in Configure web resources and virus scan properties. The authentication filter also is needed to enable the antivirus feature.
- Validate anonymous access by using a Connections library without logging in, or by going to /dm/atom/anonymous/libraries/feed on the FNCS server.
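For a quick command-line check of anonymous access (an illustration only; the host and port are placeholders for your FNCS server), request the anonymous feed and confirm that it returns without prompting for credentials:
curl -k https://fncs.example.com:9443/dm/atom/anonymous/libraries/feed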
Optional post-installation tasks
Complete the post-installation tasks that are relevant to the deployment.
Add a node to a cluster
Add a node to an existing cluster.
You must already have a cluster with at least one member.
Ensure that you installed IBM WAS Network Deployment (Application Server option) on the new node.
If you are adding a node to a Search cluster, do not use these instructions. Instead, use the instructions in the Adding an additional Search node to a cluster topic.
Although the IBM Cognos Business Intelligence server is managed by the same Deployment Manager as Connections, you cannot add that node to the Connections cluster.
To add a node to a cluster...
- Add a node to the dmgr cell:
- Log on to the new node and run...
cd WAS_HOME/profiles/profile_name/bin
addNode DmgrHost dmgr_port -username uid -password pwd -localusername localuid -localpassword localpwd
...where...
- DmgrHost is the host name of the Deployment Manager
- dmgr_port is the SOAP port of the Deployment Manager (the default is 8879)
- uid and pwd are the user name and password of the dmgr administrator
- localuid and localpwd are the user name and password of the WAS administrator of the node
- Open the addNode.log file and confirm that the node was successfully added to the dmgr cell. The file is stored in the following location:
WAS_HOME/profiles/profile_name/log/addNode.log
- Copy the relevant JDBC files from the dmgr node to this node, placing them in the same location as the JDBC files on the dmgr. If, for example, you copied the db2jcc.jar file from the C:\IBM\SQLLIB directory on the dmgr, you must copy the same file to the C:\IBM\SQLLIB directory on this node.
See the following table to determine which files to copy:
Table 40. JDBC files
- DB2: db2jcc.jar, db2jcc_license_cu.jar
- Oracle: ojdbc6.jar
- SQL Server: sqljdbc4.jar
- Ensure that the shared folders that are used for the application content stores in the cluster are accessible from the new node: from the new node, try to access the shared directories.
- Add additional members to an existing Connections cluster:
- Log on to the Deployment Manager Integration Solutions Console.
- Click Servers > Clusters > cluster_name > Cluster members > New. Specify the following information about the new cluster member:
- Member name
- The name of the server instance created for the cluster. The dmgr creates a server instance with this name.
Each member name in the same cluster must be unique. The Integration Solutions Console prevents you from reusing the same member name in a cluster.
- Select node
- The node where the server instance is located.
Click Add Member to add this member to the cluster member list.
- Click Next to go to the summary page where you can examine detailed information about this cluster member. Click Finish to complete this step or click Previous to modify the settings.
- Click Save to save the configuration.
- Click Servers > Clusters > cluster_name > Cluster members. In the member list, click the new member that you added in the previous step.
- On the detailed configuration page, click Ports to expand the port information of the member. Make a note of the WC_defaulthost and WC_defaulthost_secure port numbers. For example, the WC_defaulthost port number is typically 9084, while the WC_defaulthost_secure port number is typically 9447.
- Click Environment > Virtual Hosts > default_host > Host Aliases > New. Enter the following information for the host alias for the WC_defaulthost port:
- Host name
- The IP address or DNS host name of the node where the new member is located.
- Port:
- The port number for WC_defaulthost. For example, 9084.
Click OK to complete the virtual host configuration.
- Click Save to save the configuration.
- Repeat the previous two sub-steps to add the host alias for the WC_defaulthost_secure port.
- Click System administration > Nodes.
- In the node list page, select all the nodes where the target cluster members are located and then click Synchronize.
Configure IBM HTTP Server to connect to this node.
For more information, see the Configuring IBM HTTP Server and Defining IBM HTTP Server for a node topics.
Repeat this task for each new node to add to a cluster.
(AIX or Linux only) If you installed the Search application on the new node, you need to copy the Search conversion tools to local nodes and configure the path variables to point to that application.
For more information, refer to Copying Search conversion tools to local nodes.
Create Search work managers for the newly added node.
For more information, refer to Create work managers for Search.
If you experience interoperability failure, you might be running two servers on the same host with the same name. This problem can cause the Search and News applications to fail.
For more information, go to the NameNotFoundException from JNDI lookup operation web page.
Configure a reverse caching proxy
Configure a reverse proxy that directs all traffic to your Connections deployment to a single server.
This is an optional configuration. It is recommended for optimal performance, especially if users are accessing Connections from a wide area network (WAN).
Ensure that you have installed IBM WebSphere Edge Components which is supplied with WAS Network Deployment.
For more information, go to the WebSphere Edge Components information center.
You must also have completed the basic configuration of WebSphere Edge Components, set up a target backend server, and created an administrator account.
The IBM WAS Edge components provide a caching proxy used to optimize the deployment. Edge components are provided with the WAS Network Deployment software.
A reverse proxy configuration intercepts browser requests, forwards them to the appropriate content host, caches the returned data, and delivers that data to the browser. The proxy delivers requests for the same content directly from the cache, which is much quicker than retrieving it again from the content host. Information can be cached depending on when it will expire, how large the cache should be, and when the information should be updated.
This topic describes how to configure the Edge components to optimize the performance of Connections.
- Open the ibmproxy.conf configuration file for the Edge components in a text editor. The file is stored in:
- AIX or Linux: /etc/
- Windows: C:\IBM\edge\cp\etc\en_US\
- Make the following edits to the file:
- In the SendRevProxyName Directive section, add or enable the following rule:
SendRevProxyName yes
- In the PureProxy Directive section, add or enable the following rule:
PureProxy off
- In the SSL Directives section, add or enable the following rules:
SSLEnable On
SSLCaching On
- In the Keyring Directive section, add or enable the following rules:
KeyRing C:\ProxyKey\proxykey.kdb
KeyRingStash C:\ProxyKey\proxykey.sth
- In the URL Rewriting rules section, add the following reverse pass rules:
ReversePass http://httpserver/* http://proxyserver/*
ReversePass https://httpserver/* https://proxyserver/*
where httpserver is the host name of the HTTP server. The HTTP server is usually IBM HTTP Server, but could be a load balancer or another proxy, depending on the deployment. proxyserver is the host name of the proxy server.
You can specify * in the URL (to indicate that all URLs for the server can be passed) only if Connections is the only application installed on the server. Alternatively, you can use a more specific URL such as...
http://httpserver/connections/*
You can use more than one ReversePass rule if you need to specify different servers for each component.
- Also in the Mapping Rules section, add the following proxy rules:
Proxy /* http://httpserver:80/*
Proxy /* https://httpserver:443/*
- Set the CacheTimeMargin rule to zero seconds. The CacheTimeMargin rule defines the margin within which a document that is about to expire is not cached; setting this rule to zero disables that calculation and forces all documents to be cached, regardless of their expiry date. This setting is required for Blogs caching to function properly; it does not negatively affect the other applications.
CacheTimeMargin 0 seconds
- Prevent the validation of a cache object from sending multiple requests for the same resource to the backend server by setting the KeepExpired rule to on. An expired or stale copy of the resource will be returned for the brief time that the resource is being updated on the proxy.
KeepExpired On
- In the Method Directives section, add the following methods:
Enable CONNECT
Enable PUT
Enable DELETE
- Add the following rule to the CacheQueries Directives section:
CacheQueries PUBLIC
- Configure the proxy to allow large file uploads by editing and uncommenting the LimitRequestBody directive:
LimitRequestBody n M
where n is the maximum file size in MB. For example: LimitRequestBody 50 M allows a file size of up to 50 MB.
- Save and close the ibmproxy.conf file.
- Update the dynamicHosts attribute in LotusConnections-config.xml to reflect the URL of the proxy server:
<dynamicHosts enabled="true"> <host href="http://proxy.example.com" ssl_href="https://proxy.example.com"/> </dynamicHosts>
The dynamic hosts setting does not affect interservice URLs. Therefore, even when the proxy server is enabled, Connections still routes internal communication between the applications through their own interservice URLs. You can force this internal traffic to be routed over the proxy server by updating the interservice URLs to use the proxy server.
Each href attribute in LotusConnections-config.xml is case-sensitive and must specify a fully-qualified domain name.
- Using iKeyman, extract certificates from Connections and add them to the proxy server key database:
Be sure to use the iKeyman tool that comes with the HTTP server, because iKeyman is not included with the proxy.
- Open the Connections kdb file and extract the certificates.
- Open the kdb file on the proxy server and add the certificates that you extracted from Connections.
For more information about iKeyman, go to the Importing and exporting keys topic in the IBM HTTP Server information center.
- Restart the Edge server.
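Taken together, the edits in this procedure produce an ibmproxy.conf fragment along the following lines. The host names, key file paths, and upload limit are placeholders taken from the examples above; adapt them to your deployment:
SendRevProxyName yes
PureProxy off
SSLEnable On
SSLCaching On
KeyRing C:\ProxyKey\proxykey.kdb
KeyRingStash C:\ProxyKey\proxykey.sth
ReversePass http://httpserver/* http://proxyserver/*
ReversePass https://httpserver/* https://proxyserver/*
Proxy /* http://httpserver:80/*
Proxy /* https://httpserver:443/*
CacheTimeMargin 0 seconds
KeepExpired On
Enable CONNECT
Enable PUT
Enable DELETE
CacheQueries PUBLIC
LimitRequestBody 50 M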
Enable locked domains for OpenSocial
Assuming that you have completed the server setup previously described, to enable locked domains in Connections, specify an additional attribute in LotusConnections-config.xml to ensure that only the ConnectionsOpensocial application is mapped to the locked domain host. For added security, only the ConnectionsCommon.ear should be mapped to the locked host. Although no SSO tokens will be flowing from the host, this extra precaution limits exposure of your Connections infrastructure to potentially malicious gadgets.
For more information about locked domains refer to Understanding and configuring locked domains on the IBM Social Business Toolkit wiki.
For example, this configuration could look like the following sample:
- Add the new attribute to LotusConnections-config.xml by completing the following steps:
- Start the wsadmin tool.
- Access the Connections configuration file:
execfile("<$WAS_HOME>/profiles/<dmgrGR>/config/bin_lc_admin/ connectionsConfig.py")If you are prompted to specify which server to connect to, enter 1. This information is not used by the wsadmin client when you are making configuration changes.- Check out the Connections configuration :
LCConfigService.checkOutConfig("/working_directory", "cell_name")...where...
- working_directory is the temporary working directory where the configuration XML and XSD files are copied to. The files are kept in this working directory while you change them.
- cell_name is the name of the WAS cell hosting the Connections application. This argument is case sensitive. If you do not know the cell name, you can determine it by entering the following command in the wsadmin command processor: print AdminControl.getCell(), for example:
LCConfigService.checkOutConfig("/temp","Cell01")- From the temporary directory where you checked out the Connections configuration files to, open LotusConnections-config.xml in a text editor.
- Add this attribute to LotusConnections-config.xml.
<sloc:serviceReference bootstrapHost="{locked.host.name}" bootstrapPort="2809" clusterName="" enabled="true" serviceName="opensocialLocked" ssl_enabled="true"> <sloc:href> <sloc:hrefPathPrefix>/connections/opensocial</sloc:hrefPathPrefix> <sloc:static href="http://{locked.host.name.authority/http}" ssl_href="https://{locked.host.name.authority/https}"/> <sloc:interService href="https://{locked.host.name.authority/https}"/> </sloc:href> </sloc:serviceReference>
- Save LotusConnections-config.xml.
- Check in the changed configuration property:
LCConfigService.checkInConfig()
- To deploy the changes, synchronize the nodes: synchAllNodes()
- Restart your Connections server.
<sloc:serviceReference bootstrapHost="hern120w.dyn.webahead.renovations.com" bootstrapPort="2809" clusterName="" enabled="true" serviceName="opensocialLocked" ssl_enabled="true"> <sloc:href> <sloc:hrefPathPrefix>/connections/opensocial</sloc:hrefPathPrefix> <sloc:static href="http://hern120w.locked.com:9080" ssl_href="https://hernw120.locked.com:9443"/> <sloc:interService href="https://hern120w.dyn.webahead.renovations.com:9443"/> </sloc:href> </sloc:serviceReference>
Separating the Common, Connections-proxy, and WidgetContainer applications from the News cluster
Run the ConfigEngine script to configure Connections to separate critical user interface components to a new cluster. Doing so provides high availability and failover capability for the Connections Web user interface features.
Install all of the applications that you need, and make sure that all of your applications work correctly after installation.
Understand the following applications to better evaluate whether you want to separate them into a dedicated cluster in your particular environment.
To separate the Common application, the WidgetContainer application, or both from the cluster where the News repository is located (referred to as the News cluster in this procedure)...
- The Common application is responsible for serving static content, including images, CSS, and JavaScript, to all Connections applications. Additionally, as an OSGi container, it is the component that enables Social Mail.
- The WidgetContainer application renders gadgets, proxies requests to remote services, and handles REST requests. Because it proxies requests, it can be resource-intensive.
- After installation, if the cluster that hosts these applications fails, some Web user interface elements will not render.
The Connections-proxy EAR is separated at the same time. The new cluster, node, and server names will be the same as those for the Common application.
- Add new nodes if needed as described in Installing IBM WAS.
- Stop all clusters in the Connections deployment.
- On the deployment manager, open a command prompt, and then change to the following directory on the WAS hosting the Connections server:
- On AIX or Linux:
connections_root/ConfigEngine
- On Microsoft Windows:
connections_root\ConfigEngine
- Enter the following command, if needed, to run the script that removes the Common application from the News cluster. In addition, the script removes the associated dynacache objects:
- On AIX or Linux:
./ConfigEngine.sh remove-common-ear -DWasUserid=<was_admin_username> -DWasPassword=<was_admin_password> > /tmp/remove-common-ear.log 2>&1
- On Microsoft Windows:
ConfigEngine.bat remove-common-ear -DWasUserid=<was_admin_username> -DWasPassword=<was_admin_password> > C:\remove-common-ear.log 2>&1
For example, on the Microsoft Windows operating system, you would enter the following command:
ConfigEngine.bat remove-common-ear -DWasUserid=wasadmin -DWasPassword=yourpassword > C:\remove-common-ear.log 2>&1
- Check for a success message in the log file.
- Enter the following command, if needed, to run the script that removes the WidgetContainer application from the News cluster. In addition, the script removes the associated dynacache objects:
- On AIX or Linux:
./ConfigEngine.sh remove-widgetcontainer-ear -DWasUserid=<was_admin_username> -DWasPassword=<was_admin_password> > /tmp/remove-widgetcontainer-ear.log 2>&1
- On Microsoft Windows:
ConfigEngine.bat remove-widgetcontainer-ear -DWasUserid=<was_admin_username> -DWasPassword=<was_admin_password> > C:\remove-widgetcontainer-ear.log 2>&1
For example, on the Microsoft Windows operating system, you would enter the following command:
ConfigEngine.bat remove-widgetcontainer-ear -DWasUserid=wasadmin -DWasPassword=yourpassword > C:\remove-widgetcontainer-ear.log 2>&1
- Check for a success message in the log file.
- Change to the ./properties directory, and then open and edit the wkplc_comp.properties file with the following settings for the Common application, the WidgetContainer application, or both (a sample follows the property descriptions):
Table 41. Property values for commonear
- commonear.ClusterName: Desired cluster name for Common.
- commonear.FirstNodeName: Name of the first node of the cluster.
- commonear.SecondaryNodesNames: Comma-delimited list of secondary node names.
- commonear.<FirstNodeName>.ServerName: Name of the server on the first node. Note that <FirstNodeName> must be an actual node name.
- commonear.<SecondaryNodesNames1>.ServerName: Name of the server on secondary node 1. Note that <SecondaryNodesNames1> must be an actual node name.
- commonear.<SecondaryNodesNamesN>.ServerName: Name of the server on secondary node n. Note that <SecondaryNodesNamesN> must be an actual node name.
Table 42. Property values for the widgetcontainer
- widgetcontainer.ClusterName: Desired cluster name for the WidgetContainer.
- widgetcontainer.FirstNodeName: Name of the first node of the cluster.
- widgetcontainer.SecondaryNodesNames: Comma-delimited list of secondary node names.
- widgetcontainer.<FirstNodeName>.ServerName: Name of the server on the first node. Note that <FirstNodeName> must be an actual node name.
- widgetcontainer.<SecondaryNodesNames1>.ServerName: Name of the server on secondary node 1. Note that <SecondaryNodesNames1> must be an actual node name.
- widgetcontainer.<SecondaryNodesNamesN>.ServerName: Name of the server on secondary node n. Note that <SecondaryNodesNamesN> must be an actual node name.
Ensure that the server names that you select are not already in use.
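For example, a minimal sketch of these properties for a hypothetical two-node deployment could look like the following; the cluster, node, and server names are placeholders for the values in your own cell:
commonear.ClusterName=CommonCluster
commonear.FirstNodeName=Node1
commonear.SecondaryNodesNames=Node2
commonear.Node1.ServerName=CommonServer1
commonear.Node2.ServerName=CommonServer2
widgetcontainer.ClusterName=WidgetContainerCluster
widgetcontainer.FirstNodeName=Node1
widgetcontainer.SecondaryNodesNames=Node2
widgetcontainer.Node1.ServerName=WidgetServer1
widgetcontainer.Node2.ServerName=WidgetServer2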
- Enter the following command, if needed, to run the script that creates a new cluster and deploys the Common application onto it. In addition, the script will create associated dynacache objects, set the cluster as bus member of the ConnectionsBus, and update the cluster name in the LotusConnections-config.xml for the Common application's service entry:
- On AIX or Linux:
./ConfigEngine.sh install-common-ear -DWasUserid=<was_admin_username> -DWasPassword=<was_admin_password> > /tmp/install-common-ear.log 2>&1
- On Microsoft Windows:
ConfigEngine.bat install-common-ear -DWasUserid=<was_admin_username> -DWasPassword=<was_admin_password> > C:\install-common-ear.log 2>&1
For example, on the Microsoft Windows operating system, you would enter the following command:
ConfigEngine.bat install-common-ear -DWasUserid=wasadmin -DWasPassword=yourpassword > C:\install-common-ear.log 2>&1
- Check for a success message in the log file.
- Enter the following command, if needed, to run the script that creates a new cluster and deploys the WidgetContainer application onto it. In addition, the script will create associated dynacache objects, set the cluster as bus member of the ConnectionsBus, and update the cluster name in the LotusConnections-config.xml for the WidgetContainer application's service entry:
- On AIX or Linux:
./ConfigEngine.sh install-widgetcontainer-ear -DWasUserid=<was_admin_username> -DWasPassword=<was_admin_password> > /tmp/install-widgetcontainer-ear.log 2>&1
- On Microsoft Windows:
ConfigEngine.bat install-widgetcontainer-ear -DWasUserid=<was_admin_username> -DWasPassword=<was_admin_password> > C:\install-widgetcontainer-ear.log 2>&1
For example, on the Microsoft Windows operating system, you would enter the following command:
ConfigEngine.bat install-widgetcontainer-ear -DWasUserid=wasadmin -DWasPassword=yourpassword > C:\install-widgetcontainer-ear.log 2>&1
- Check for a success message in the log file.
- Configure IHS to refresh the application mapping according to the instructions contained in the topic Mapping applications to IBM HTTP Server.
If you have not yet deployed IHS, the ports for the servers that host the Common (webresources) and WidgetContainer (opensocial) applications, as specified in the profile_root/config/cells/cell_name/LotusConnections-config/LotusConnections-config.xml file, still reference the old ports and must be updated manually to the new ports. The new HTTP port and secure HTTP port can be found in LC_HOME/new_server.properties. For example, this entry in LotusConnections-config.xml is for the Common application:
<sloc:serviceReference clusterName="CommonCluster" enabled="true" serviceName="webresources" ssl_enabled="true"> <sloc:href> <sloc:hrefPathPrefix>/connections/resources</sloc:hrefPathPrefix> <sloc:static href="http://yourserver.renovations.com:9081" ssl_href="https://yourserver.renovations.com:9444"/> <sloc:interService href="https://yourserver.renovations.com:9444"/> </sloc:href> </sloc:serviceReference>
- Restart Connections.
- Restart IHS.
Configure the custom ID attribute for users or groups
Configure Connections to use custom ID attributes to identify users and groups in the LDAP directory.
- If you specified a single ID attribute for both users and groups, you do not need to complete this task. This task is required only if you specified a custom ID attribute for users or groups in the Specifying a custom ID for users or groups topic.
- Ensure that you have completed the steps to specify different ID attributes for users and groups in the Specifying a custom ID for users or groups topic.
You can change the default setting to use a custom ID to identify users and groups in the directory. To configure Connections to use the custom ID attribute specified earlier...
- Add the new attribute to LotusConnections-config.xml. To do so...
- Start the wsadmin tool.
- Access the Connections configuration file:
execfile("<$WAS_HOME>/profiles/<dmgrGR>/config/bin_lc_admin/connectionsConfig.py")
If you are prompted to specify which server to connect to, enter 1. This information is not used by the wsadmin client when you are making configuration changes.
- Check out the Connections configuration:
LCConfigService.checkOutConfig("/working_directory", "cell_name")
where:
- working_directory is the temporary working directory to which the configuration XML and XSD files are copied. The files are kept in this working directory while you change them.
- cell_name is the name of the IBM WAS cell hosting the Connections application. This argument is case sensitive. If you do not know the cell name, you can determine it by entering the following command in the wsadmin command processor:
print AdminControl.getCell()
For example:
LCConfigService.checkOutConfig("/temp","Cell01")
- From the temporary directory to which you checked out the Connections configuration files, open LotusConnections-config.xml in a text editor.
- Add the new custom properties to LotusConnections-config.xml. For example:
<sloc:serviceReference serviceName="directory" ... custom_user_id_attribute="customUserID" custom_group_id_attribute="customGroupID"/>
- Save LotusConnections-config.xml.
- Check in the changed configuration property:
LCConfigService.checkInConfig()
- To deploy the changes, synchronize the nodes: synchAllNodes()
- Stop and restart the WAS instance hosting Connections.
Configure Moderation
Configure moderation so that moderators can review content for blogs, forums, and community files from a central interface.
If the moderation application is installed and enabled, you can define a person or set of people who review and approve the content that users add to certain applications before it is published, or who review and act on content that has been flagged as problematic. Moderated content can be:
- Blog entries and comments. These can be in stand-alone blogs or community blogs.
- Forum posts and replies. These can be in stand-alone forums or community forums.
- Files and comments within a community.
To enable moderation, follow these steps:
- Install the Moderation application as part of the Connections installation or migration.
- Assign users to the role of global moderator so they can moderate content from a central interface.
- Configure moderation in the contentreview-config.xml file if you want to enable moderation features such as owner moderation or assign content reviewers for flagged content.
Designating global moderators
Assign users to the global-moderator role so that they can moderate content for blogs, forums, and community files from a central interface.
In order for a user to moderate content, they must be assigned the global-moderator role for the Moderation, Communities, and Common applications, and for the applications that you want moderated, which can be Blogs, Forums, and Files.
To map users to a global moderator role...
- Map a user to a global-moderator role:
- Expand Applications > Application Types, and then select WebSphere enterprise applications.
Find and click the link to the application to configure.
- Click Security role to user/group mapping.
- Select the check box for the global-moderator role, and then click Map users.
- In the Search String box, type the name of the user to assign to the role, and then click Search. If the user exists in the directory, it is displayed in the Available list.
- Select the user or group name from the Available box, and then move it into the Selected column by clicking the move button.
- Click OK.
- Click OK, and then click Save to save the changes.
- Synchronize and restart all your WAS instances.
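If you prefer to script the mapping instead of using the console, WAS also supports role mapping through the wsadmin AdminApp.edit command. The following Jython sketch assumes a hypothetical enterprise application name of Moderation and a user ID of moderator1; adjust both for your deployment and repeat the edit for each application that you want moderated:
# map moderator1 to the global-moderator role of the Moderation application
AdminApp.edit('Moderation', ['-MapRolesToUsers', [['global-moderator', 'No', 'No', 'moderator1', '']]])
# persist the change to the master configuration
AdminConfig.save()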
Configure J2C Aliases for the moderation proxy service
Configure J2C Aliases so that community owners can moderate their community Blogs, Forums, and Files applications.
Moderation actions are performed by a moderation API. Community owners cannot access that API, so Connections handles their moderation requests through a proxy service. The proxy service uses J2C Aliases to pass the requests. Proxy service alias users must be in the global-moderator roles of the appropriate applications, and they must be able to log in to Connections. By default, the proxy service uses the connectionsAdmin J2C Alias provided during installation. That user is mapped to the global-moderator roles for Blogs, Forums, Files, Moderation, and Communities by the installation program, and can log in to Connections. However, you can create different moderation aliases for each of the different supported applications. You can create the following aliases:
- For Blogs create an alias called moderationBlogsAlias.
- For Files create an alias called moderationFilesAlias.
- For Forums create an alias called moderationForumAlias.
The different applications recognize these specific aliases. You can map any users to these aliases, but all users must be in the global-moderator roles of the appropriate application, and they must be able to log in to Connections. For example, the moderationBlogsAlias user must be in the global-moderator role for Blogs. See Roles.
The proxy service logs its actions, so if the alias users (other than the connectionsAdmin user) are used only for this purpose, the log is easier to read.
To create moderation aliases and then map them to a global moderator role...
- Create a moderation alias:
- From the IBM WAS admin console, expand Security, and then click Global security.
- In the Authentication area, expand Java Authentication and Authorization Service, and click J2C authentication data.
- Click New.
- Name the alias, for example moderationFilesAlias.
- Type the name and password of a user for the alias.
- Click OK.
- Map an alias user to a global-moderator role:
- Expand Applications > Application Types, and then select WebSphere enterprise applications.
Find and click the link to the application to configure.
- Click Security role to user/group mapping.
- Select the check box for the global-moderator role, and then click Map users.
- In the Search String box, type the name of the user to assign to the role, and then click Search. If the user exists in the directory, it is displayed in the Available list.
- Select the user or group name from the Available box, and then move it into the Selected column.
- Click OK.
- Click OK, and then click Save to save the changes.
- Synchronize and restart all your WAS instances.
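The same alias can also be created from the wsadmin client by adding a JAASAuthData entry to the cell security configuration. The following Jython sketch uses placeholder credentials and assumes a single Security object in the cell; adapt it before use:
# locate the cell Security object (there is normally exactly one)
secId = AdminConfig.list('Security').splitlines()[0]
# create the alias entry with its user ID and password
AdminConfig.create('JAASAuthData', secId, [['alias', 'moderationFilesAlias'], ['userId', 'moderationUser'], ['password', 'moderationPassword']])
# persist the change to the master configuration
AdminConfig.save()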
Manage content moderation and flagged content
Edit configuration property settings in the contentreview-config.xml file to enable moderation and to specify which moderators should receive email notification when content requires moderation. Restart your applications to see the changes.
To edit configuration files, use the wsadmin client.
Configure Connections using scripts accessed with the wsadmin client. These scripts use the connectionsConfig object available in WAS wsadmin client to interact with the Connections configuration file, which is named contentreview-config.xml.
The properties in the contentreview-config.xml file cannot be edited by using the updateConfig command or displayed by using the showConfig command. Instead, check out the configuration file by using the checkOutContentReviewConfig command, and then edit the property values by opening the checked-out file from the temporary directory in a text editor. After editing the file, save it in Unicode format, check it back in by using the checkInContentReviewConfig command, and then restart the application servers to see the changes.
If moderation is enabled, moderators can review and approve blog comments and entries, forum posts, and community files from a central location. You can configure who can review and approve content with a setting in the contentreview-config.xml file as follows:
- If ownerModerate=true in contentreview-config.xml, a blog, forum, or community owner can moderate content for a blog, forum, or community they own. In addition, content an owner creates is published directly, without requiring moderation.
- If ownerModerate=false in contentreview-config.xml, only users assigned the J2EE moderator role in the WAS console can manage content on the site.
For information on assigning users to the moderator role, see the topic Roles.
You can also configure the flag inappropriate content application for Blogs and Forums to specify categories for what type of content to flag, and to specify designated reviewers who will receive email notifications when content is flagged. There are two default categories for inappropriate content: Legal issue and Human resources issue. You can edit those categories, add new ones, or remove all categories. The file is also configured with placeholders for the email addresses of designated reviewers. Change those to actual email addresses for users assigned the moderator role who can review flagged content.
When you enable moderation, users cannot upload thumbnail images in Media Gallery widgets.
To change moderation configuration settings...
- Start the wsadmin client from the following directory of the system on which you installed the Deployment Manager:
WAS_HOME/profiles/Dmgr01/bin
where WAS_HOME is the WAS installation directory and Dmgr01 is the Deployment Manager profile directory, typically dmgr01. For example, on Windows:
C:\IBM\WebSphere\AppServer\profiles\Dmgr01\bin
You must start the wsadmin client from this specific directory because the Jython files for the product are stored there. If you try to start the client from a different directory, the execfile() command that you subsequently call to initialize the administration environment for a Connections component does not work correctly.
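For example, on AIX or Linux you might start the client with a command similar to the following (the administrator ID and password are placeholders for your own credentials); on Microsoft Windows, use wsadmin.bat with the same arguments:
./wsadmin.sh -lang jython -username wasadmin -password password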
- Use the wsadmin client to access and check out the Connections configuration file:
- Access the Connections configuration file
execfile("connectionsConfig.py")
If you are prompted to specify a service to connect to, type 1 to select the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file by using a local file path, select the node where the file is stored. This information is not used by the wsadmin client when you are making configuration changes.
- Check out the Connections content configuration file
LCConfigService.checkOutContentReviewConfig("working_directory","cell_name")
where:
- working_directory is the temporary working directory to which the configuration XML and XSD files are copied and are stored while you make changes to them. Use forward slashes to separate directories in the file path, even if you are using the Microsoft Windows operating system.
- cell_name is the name of the WAS cell hosting the Connections application. This argument is case-sensitive, so type it with care. To get the cell name, from wsadmin:
print AdminControl.getCell()
For example:
LCConfigService.checkOutContentReviewConfig("/opt/temp","Cell01")
- From the temporary directory to which you just checked out the Connections configuration files, open the contentreview-config.xml file in a Unicode text editor using the encoding mode of UTF-8 without BOM.
Editing the file with a standard text editor that does not support Unicode could corrupt the file.
- To configure settings for managing content in the pre-moderated state, that is, before it is published or when it is updated, set the following options for each application:
- contentApproval
- Set to "true" to require moderation for the specified application. By default this is set to "false". When the setting is set to false, moderation is not automatically enforced for an application, but moderation API command and filters still work. Moderators can still perform moderation tasks.
- ownerModerate
- Set to "true" to specify that blog owners and community owners can moderate content in blogs or communities they own. By default this is set to "false" so that only users assigned the J2EE moderator role in the WAS console can moderate content.
In the following example, moderation is enforced for blogs so that all content must be approved by a moderator before it is published or updated in a blog. Each blog owner can moderate content for the blogs they own.
<serviceConfiguration> <service id="blogs"> <contentApproval enabled="true"> <ownerModerate enabled="true" /> </contentApproval>
- To configure settings for managing content in the post-moderated state, that is, content that is flagged by a user after it is published, set the following options for each application:
- contentFlagging
- Set to "true" to require moderation for flagged content. By default this is set to "false". When the setting is set to false, this means the end user can't flag content from the user interface or using an API command. Blogs Moderation API and filters will still work. Moderators can still perform moderation tasks. Files and Forums API commands will return errors.
- ownerModerate
- Set to "true" to specify that forum users can moderate their own flagged content. By default this is set to "false" so that only users assigned the J2EE moderator role in the WAS console can moderate.
For information on assigning users to the moderator role, see the topic Roles.
In the following example, flagging is enabled for forums and each forum owner can moderate flagged content for the forums they own. Issue categorization is enabled so that users can select a category when flagging content. If ten users flag a forum post, it is automatically quarantined and removed from the forum.
- IssueCategorization
- Set to "true" to display a list of categories so that users can choose one when flagging content. By default this is set to "false."
This feature is not available for Files.
- automaticQuarantine
- Forums only. Set to "true" and specify an integer as a value for threshold. When a forum post is flagged the number of times specified for the threshold value, the post is automatically quarantined and removed from the forum. By default this is set to "false."
- flagCategory
- To make a <flagCategory> element available for an application, first define it with a unique id and descriptions in the desired languages to the <flaggedCategories> section of the configuration file, then add it to the <IssueCategorization> section for the application.
- reviewer email
- To add designated reviewers who will receive notification email when content is flagged, replace the placeholder email names for each category with the email addresses of designated reviewers who are assigned the moderator role.
You can also configure a group email here, but each member of the group must be assigned the moderator role.
<service id="forums"> <contentApproval enabled="true"> <ownerModerate enabled="true" /> </contentApproval> <contentFlagging enabled="true"> <ownerModerate enabled="true" /> <automaticQuarantine enabled="true" threshold="10" /> <issueCategorization enabled="true"> <flagCategory id="001"> <reviewer email="reviewer1@acme.com" /> <reviewer email="reviewer2@acme.com" /> </flagCategory> <flagCategory id="002"> <reviewer email="reviewer2@acme.com" /> <reviewer email="reviewer3@acme.com" /> </flagCategory> </issueCategorization> </contentFlagging>- To add a category for flagged content, add a new <flagCategory> element with a unique id and descriptions in the desired languages to the <flaggedCategories> section of the configuration file.
The fastest way to add a content category is to copy an existing <flagCategory> element, paste it into the file and edit the ID and descriptions in the required languages. For example, to add a content category for "Offensive Language" add the following:
<flagCategory> <id>003</id> <description xml:lang="en">Offensive Language</description> <description xml:lang="fr">French equivalent</description> <description xml:lang="it">Italian equivalent</description> </flagCategory>
Note that the new ID is "003". This ID must be unique. As this example shows, you can also add language statements and provide translated strings for category names.
- To specify who should receive email notification of content awaiting moderation or flagged content that needs review, replace the placeholder email names in the following section with the email addresses of users assigned the moderator role for that service.
<moderator email="moderator3@acme.com" /> <moderator email="moderator4@acme.com" />
- Save your changes to the contentreview-config.xml file.
- Check the file back in, using the command:
LCConfigService.checkInContentReviewConfig(<temp-dir>,"<cell-name>")
During the check-in process, validation is performed to ensure that there are no XML syntax errors in the file.
- Restart the affected applications to see the changes.
Configure moderation for communities
Configure moderation for communities so that owners can review and manage the blog, file, and forum content directly in the community. Owners can control what content is added by members (pre-moderation) and remove anything that might be considered inappropriate in your organization (post-moderation).
To edit configuration files, use the wsadmin client.
See Starting the wsadmin client for details.
Configure Connections using scripts accessed with the wsadmin client. These scripts use the connectionsConfig object available in WAS wsadmin client to interact with the Connections configuration file, which is named contentreview-config.xml.
The properties in the contentreview-config.xml file cannot be edited using the updateConfig command nor displayed using showConfig. Instead, check out the configuration file using the checkOutContentReviewConfig command, and then edit the property values by opening the checked out property file from the temporary directory using a text editor. After editing the property file, save the file in Unicode format and check the file back in using the checkInContentReviewConfig command and restart the application servers to see the changes.
When owner moderation is enabled for communities, community owners can access moderation options for their community by opening the community and selecting Community Actions > Moderate Community. Community moderators can manage only the content of communities that they own.
Administrators configure the following section of the contentreview-config.xml file to set community moderation:
<commModerationConfiguration> <preModeration> <forceForAllCommunities enabled=boolean value of true or false /> <enabledByCreation enabled="true" /> </preModeration> <postModeration> <forceForAllCommunities enabled=boolean value of true or false /> <enabledByCreation enabled="true" /> </postModeration> </commModerationConfiguration> <service id="blogs"> <contentApproval enabled=boolean value of true or false> <ownerModerate enabled=boolean value of true or false/> </contentApproval> <contentFlagging enabled=boolean value of true or false> <ownerModerate enabled=boolean value of true or false/> <service id="files"> <contentApproval enabled=boolean value of true or false> <ownerModerate enabled=boolean value of true or false/> </contentApproval> <contentFlagging enabled=boolean value of true or false> <ownerModerate enabled=boolean value of true or false/> <service id="forums"> <contentApproval enabled=boolean value of true or false> <ownerModerate enabled=boolean value of true or false/> </contentApproval> <contentFlagging enabled=boolean value of true or false> <ownerModerate enabled=boolean value of true or false/>
where:
- preModeration
- Community owners must approve all content before it can be posted.
- postModeration
- Viewers can flag content.
- forceForAllCommunities
- Set to "true" to require moderation for communities. By default this attribute is set to "false". When the setting is set to false, moderation is not automatically required for a community, but moderation API command and filters still work. Moderators can still perform moderation tasks.
- enabledByCreation
- Determines whether the moderation check boxes in the Start a Community form are selected when a user clicks Start a Community.
- contentApproval
- Set to "true" to require moderation for the specified application. By default this attribute is set to "false". When the setting is set to false, moderation is not automatically enforced for an application, but moderation API command and filters still work. Moderators can still perform moderation tasks.
- contentFlagging
- Set to "true" to require moderation for flagged content. By default this attribute is set to "false". When the setting is set to false, the user cannot flag content from the user interface or using an API command. Blogs Moderation API and filters still work. Moderators can still perform moderation tasks. Files and Forums API commands returns errors.
If you upgraded Connections from release 2.5 to release 3.0 or higher, the default for Blogs is "true" for compatibility reasons.
- ownerModerate
- Must be set to "true" to specify that community owners can moderate the content in communities that they own, otherwise it is set to "false". If contentFlagging or contentApproval for a service is set to true, then ownerModerate must be set to true. If contentFlagging or contentApproval for a service is set to false, then ownerModerate must be set to false.
To change community moderation configuration settings...
- Start the wsadmin client from the following directory of the system on which you installed the Deployment Manager:
WAS_HOME/profiles/Dmgr01/bin
where WAS_HOME is the WAS installation directory and Dmgr01 is the Deployment Manager profile directory, typically dmgr01. For example, on Windows:
C:\IBM\WebSphere\AppServer\profiles\Dmgr01\bin
You must start the wsadmin client from this specific directory because the Jython files for the product are stored there. If you try to start the client from a different directory, the execfile() command that you subsequently call to initialize the administration environment for a Connections component does not work correctly.
See the topic Starting the wsadmin client.
- Use the wsadmin client to access and check out the Connections configuration file:
- Access the Connections configuration file
execfile("connectionsConfig.py")If you are prompted to specify a service to connect to, type 1 to select the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file by using a local file path, select the node where the file is stored. This information is not used by the wsadmin client when you are making configuration changes.
- Check out the Connections content configuration file
LCConfigService.checkOutContentReviewConfig("working_directory","cell_name")where:
For example:
- working_directory is the temporary working directory to which the configuration XML and XSD files are copied and are stored while you make changes to them. Use forward slashes to separate directories in the file path, even if you are using the Microsoft Windows operating system.
- cell_name is the name of the WAS cell hosting the Connections application. This argument is case-sensitive, so type it with care. To get cell name, from wsadmin:
print AdminControl.getCell()
LCConfigService.checkOutContentReviewConfig("/opt/temp","Cell01")Windows:
LCConfigService.checkOutContentReviewConfig("c:/temp","Cell01")- From the temporary directory to which you just checked out the Connections configuration files, open the contentreview-config.xml file in a Unicode text editor.
Editing the file with a standard text editor that does not support Unicode could corrupt the file.
- Determine if you want to give owners the ability to turn pre-approval on and off, or if you want to force owners to approve content.
- Let owners turn off pre-approval. Sample code:
<preModeration> <forceForAllCommunities enabled="false"> ..... </preModeration>
In the user interface, owners can clear the Owners must approve all content (...) check box.
- Force owners to pre-approve content. Sample code:
<preModeration> <forceForAllCommunities enabled="true"> ..... </preModeration>
In the user interface, owners cannot clear the Owners must approve all content (...) check box.
- After you decide whether or not you want to force owners to pre-approve content, you should select the specific content that requires pre-approval.
- Blog content requires the owner's approval. Sample code:
<service id="blogs"> <contentApproval enabled="true"> <ownerModerate enabled="true"> </contentApproval>
If forceForAllCommunities is set to false, owners can clear the Owners must approve all content (Blogs) check box. If forceForAllCommunities is set to true, the check box cannot be cleared and is gray.
- Files content requires the owner's approval. Sample code:
<service id="files"> <contentApproval enabled="true"> <ownerModerate enabled="true"> </contentApproval>
If forceForAllCommunities is set to false, owners can clear the Owners must approve all content (Files) check box. If forceForAllCommunities is set to true, the check box cannot be cleared and is gray.
- Forums content requires the owner's approval. Sample code:
<service id="Forums"> <contentApproval enabled="true"> <ownerModerate enabled="true"> </contentApproval>
If forceForAllCommunities is set to false, owners can clear the Owners must approve all content (Forums) check box. If forceForAllCommunities is set to true, the check box cannot be cleared and is gray.
- Blogs and Files require the owner's approval. Forums does not. Sample code:
<service id="blogs"> <contentApproval enabled="true"> <ownerModerate enabled="true"> </contentApproval> <service id="files"> <contentApproval enabled="true"> <ownerModerate enabled="true"> </contentApproval> <service id="Forums"> <contentApproval enabled="false"> <ownerModerate enabled="true"> </contentApproval>
If forceForAllCommunities is set to false, owners can clear the Owners must approve all content (Blogs, Files) check box. If forceForAllCommunities is set to true, the check box cannot be cleared and is gray.
- Files requires the owner's approval. Blogs content is moderated in the community, but only the global moderator can approve it. The community owner cannot manage content or change the moderation options of Blogs in his community. Sample code:
<service id="blogs"> <contentApproval enabled="true"> <ownerModerate enabled="false"> <service id="files"> <contentApproval enabled="true"> <ownerModerate enabled="true">
If forceForAllCommunities is set to false, owners can clear the Owners must approve all content (Files) check box. If forceForAllCommunities is set to true, the check box cannot be cleared and is gray.
- Files requires the owner's approval. Blogs content does not require any approval; all content is published directly without approval. Sample code:
<service id="blogs"> <contentApproval enabled="false"> <ownerModerate enabled="false"> <service id="files"> <contentApproval enabled="true"> <ownerModerate enabled="true">
If forceForAllCommunities is set to false, owners can clear the Owners must approve all content (Files) check box. If forceForAllCommunities is set to true, the check box cannot be cleared and is gray.
- Determine if you want to give owners the ability to let people flag inappropriate content. You can let owners decide if they want to include this ability in their community, or you can force them to offer this ability in their community.
- Let owners turn off flagging. Sample code:
<postModeration> <forceForAllCommunities enabled="false"> ..... </postModeration>
In the user interface, owners can clear the Viewers can flag inappropriate content (...) check box.
- Force owners to let viewers flag content. Sample code:
<postModeration> <forceForAllCommunities enabled="true"> ..... </postModeration>
In the user interface, owners cannot clear the Viewers can flag inappropriate content (...) check box.
- After you decide if you want to force owners to let viewers flag content, select the specific widget where they can flag content.
- Blog content can be flagged as inappropriate by viewers. Sample code:
<service id="blogs"> <contentFlagging enabled="true"> <ownerModerate enabled="true"/>
If forceForAllCommunities is set to false, owners can clear the Viewers can flag inappropriate content (Blogs) check box. If forceForAllCommunities is set to true, the check box cannot be cleared; the check box and text are gray.
- Files content can be flagged as inappropriate by viewers. Sample code:
<service id="files"> <contentFlagging enabled="true"> <ownerModerate enabled="true"/>
If forceForAllCommunities is set to false, owners can clear the Viewers can flag inappropriate content (Files) check box. If forceForAllCommunities is set to true, the check box cannot be cleared; the check box and text are gray.
- Forums content can be flagged as inappropriate by viewers. Sample code:
<service id="Forums"> <contentFlagging enabled="true"> <ownerModerate enabled="true"/>
If forceForAllCommunities is set to false, owners can clear the Viewers can flag inappropriate content (Forums) check box. If forceForAllCommunities is set to true, the check box cannot be cleared; the check box and text are gray.
- Files and Forums content can be flagged as inappropriate by viewers. Blogs cannot. Sample code:
<service id="blogs"> <contentFlagging enabled="false"> <ownerModerate enabled="false"/> <service id="files"> <contentFlagging enabled="true"> <ownerModerate enabled="true"/> <service id="Forums"> <contentFlagging enabled="true"> <ownerModerate enabled="true"/>
If forceForAllCommunities is set to false, owners can clear the Viewers can flag inappropriate content (Files, Forums) check box. If forceForAllCommunities is set to true, the check box cannot be cleared; the check box and text are gray.
- Files content can be flagged as inappropriate by viewers. Blogs content can be flagged as inappropriate by viewers, but only the global moderator can take action. The community owner cannot manage content or change the moderation options of Blogs in his community. Sample code:
<service id="blogs"> <contentFlagging enabled="true"> <ownerModerate enabled="false"> <service id="files"> <contentFlagging enabled="true"> <ownerModerate enabled="true">
If forceForAllCommunities is set to false, owners can clear the Viewers can flag inappropriate content (Files) check box. If forceForAllCommunities is set to true, the check box cannot be cleared; the check box and text are gray.
- Files content can be flagged as inappropriate by viewers. Blogs content cannot be flagged as inappropriate. The community owner cannot manage flagged content or change the flag moderation options of Blogs in his community. Sample code:
<service id="blogs"> <contentFlagging enabled="false"> <ownerModerate enabled="false"> <service id="files"> <contentFlagging enabled="true"> <ownerModerate enabled="true">
If forceForAllCommunities is set to false, owners can clear the Viewers can flag inappropriate content (Files) check box. If forceForAllCommunities is set to true, the check box cannot be cleared; the check box and text are gray.
Uninstalling Connections
Uninstall Connections.
Remove applications
Remove selected applications from Connections.
If you no longer need to keep certain applications, you can remove them from the deployment. You cannot remove core applications such as Home page, News, and Search.
To remove selected applications from Connections...
- Start the IBM WAS Deployment Manager.
- Stop all instances of WAS, including node agents, in the deployment.
- To uninstall a component of Connections, start the Installation Manager and click Modify.
- Select Connections and click Next.
- Clear the check boxes of the applications to remove and then click Next.
- Enter the details of your WAS environment and then click Next.
- In the Summary page, verify that the details are correct.
- Click Modify to begin removing applications.
- When the process is complete, restart the Deployment Manager.
- Restart all instances of WAS, including node agents.
- Synchronize the nodes.
- To check the details of the procedure, review...
connections_root/logs
Each Connections application that you uninstalled has a log file that uses the following naming format:
application_nameUninstall.log
...where application_name is the name of a Connections application.
Uninstalling a deployment
Deleting Connections data files makes the original deployment unrecoverable. If you plan to reinstall Connections and use your old data, do not delete the data files.
You do not need to remove Installation Manager files. These files might be associated with other IBM applications.
To uninstall Connections...
- Start the IBM WAS Deployment Manager.
- Stop all instances of WAS, including node agents, in the deployment.
- To uninstall a deployment of Connections, start the Installation Manager and click Uninstall.
- Select Connections and click Next.
- Clear the check boxes of the applications to remove.
- Enter the details of your WAS environment and then click Next.
- In the Summary page, verify that the details are correct.
- Click Uninstall to begin removing applications.
- When the process is complete, restart the Deployment Manager.
- Restart all instances of WAS, including node agents.
- Synchronize the nodes.
- To check the details of the procedure, open the log files in...
connections_root/logs
Each Connections application that you uninstalled has a log file that uses the following naming format:
application_nameUninstall.log
...where application_name is the name of a Connections application.
- If you plan to reinstall Connections, remove the following files:
Except where noted, remove these files from the system that hosts the Deployment Manager.
Because some of these files might be used by other programs, you might not be able to remove all of the following files.
- Connections installation files: connections_root
If you did not install Connections in the default directory, delete the directory where you installed the product.
- Connections shared data: Delete the directories specified for shared data when you installed Connections.
- Connections local data: On each node, delete the directories specified for local data when you installed Connections.
Deleting Connections data files makes the original deployment unrecoverable. If you plan to reinstall Connections and use your old data, do not delete the data files.
- Connections configuration files: Delete the profile_root/config/cells/cell_name/LotusConnections-config directory, where cell_name is the name of your WAS cell.
- If it is present, delete the registry.xml file from the profile_root/config/cells/cell_name directory.
- Delete all .py files from the profiles/profile_name/bin directory on the Deployment Manager server. The directory is in the following location:
- AIX or Linux: /opt/IBM/WebSphere/AppServer/profiles/profile_name/bin
- IBM i: /QIBM/userData/websphere/appserver/V8/nd/profiles/dmgr/bin
- Windows: drive:\IBM\WebSphere\AppServer\profiles\profile_name\bin
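For example, on AIX or Linux with a hypothetical Deployment Manager profile named Dmgr01, the files could be removed with the following command:
rm /opt/IBM/WebSphere/AppServer/profiles/Dmgr01/bin/*.py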
Results
You uninstalled Connections.
If you plan to reinstall Connections at some point, refer to Upgrading and Migrating.
Uninstalling in console mode
Uninstall Connections in console mode. This method is convenient if you cannot or do not want to use the graphical mode.
Use console mode to uninstall the product in a non-graphical environment. This mode is useful when you have to uninstall Connections on a system that does not have a video card.
In steps where you enter custom information, such as server details, you can type P at any time to return to the previous input choice in that step. However, you cannot type P to return to a previous step.
Uninstalling Connections uses Installation Manager to manage the installation process.
To uninstall Connections in console mode...
- Start the IBM WAS Deployment Manager.
- Stop all instances of WAS, including node agents, in the deployment.
- Start the installation wizard: change to the IM_ROOT/eclipse/tools directory and run the following command:
./imcl -c
- Type P.
- Select the Connections package group and then type N to proceed.
- Select the applications to remove and then type N to proceed.
Connections Content Manager and Connections Mail uninstall are not supported on IBM i. Although they appear as options in the console interface, you cannot select them.
- Type U to start removing applications.
- To check the details of the procedure, open the log files in...
connections_root/logs
Each Connections application that you uninstalled has a log file that uses the following naming format:
application_nameUninstall.log
- If you plan to reinstall Connections, remove the following files:
Except where noted, remove these files from the system that hosts the Deployment Manager.
Because some of these files might be used by other programs, you might not be able to remove all of the following files.
- Connections installation files: connections_root
If you did not install Connections in the default directory, delete the directory where you installed the product.
- Connections shared data: Delete the directories specified for shared data when you installed Connections.
- Connections local data: On each node, delete the directories specified for local data when you installed Connections.
Deleting Connections data files makes the original deployment unrecoverable. If you plan to reinstall Connections and use your old data, do not delete the data files.
- Connections configuration files: Delete the profile_root/config/cells/cell_name/LotusConnections-config directory, where cell_name is the name of your WAS cell.
- If it is present, delete the registry.xml file from the profile_root/config/cells/cell_name directory.
- Delete all .py files from the profiles/profile_name/bin directory on the Deployment Manager server. The directory is in the following location:
- AIX or Linux: /opt/IBM/WebSphere/AppServer/profiles/profile_name/bin
- IBM i: /QIBM/userData/websphere/appserver/V8/nd/profiles/dmgr/bin
- Windows: drive:\IBM\WebSphere\AppServer\profiles\profile_name\bin
Results
Uninstalling using response file
To run Installation Manager in silent mode, run the following command from the eclipse subdirectory in the directory where you installed Installation Manager:
- Windows: IBMIMc.exe --launcher.ini silent-install.ini -input <response file path and name> -log <log file path and name>
- Linux or UNIX: ./IBMIM --launcher.ini silent-install.ini -input <response file path and name> -log <log file path and name>
Windows does not support using IBMIM.exe for silent installation. Use IBMIMc.exe for silent installation. For example:
- Windows: IBMIMc.exe --launcher.ini silent-install.ini -input <LC_HOME>\silentResponseFile\LC_uninstall.rsp -log .\silent_uninstall_log.xml
- Linux or UNIX: ./IBMIM --launcher.ini silent-install.ini -input <LC_HOME>/silentResponseFile/LC_uninstall_linux.rsp -log ./silent_uninstall_log.xml
The Installation Manager has an initialization or .ini file, silent-install.ini, that includes default values for the arguments. An example of a silent-install.ini file is as follows:
-accessRights admin -vm C:\\IBM\Installation Manager\eclipse\jre_5.0.2.sr5_20070511\jre\bin\java.exe -nosplash --launcher.suppressErrors -silent -vmargs -Xquickstart -Xgcpolicy:gencon
Results
When the uninstallation is successful, check...
<LC_HOME>/logs/<feature>uninstall.log
...for a return status of "0". An unsuccessful operation returns a non-zero number. When the Installation Manager installer is run, it reads the response file and optionally writes a log file to the directory that you specified. If you specified a log file and directory, the log file is empty when the operation is successful, for example:
<?xml version="1.0" encoding="UTF-8"?> <result> </result>
The log file contains an error element if the operation did not complete successfully. A log file for Installation Manager is also available. The default locations for the Installation Manager log file are as follows:
- For Windows as an administrator: C:\Documents and Settings\All Users\Application Data\IBM\Installation Manager\logs
- For Windows as a non-administrator: C:\Documents and Settings\<my id>\Application Data\IBM\Installation Manager\logs
- Linux or UNIX: /var/ibm/InstallationManager/logs
Uninstalling using response file on IBM i
Uninstall Connections in silent mode on IBM i.
Edit the default response file that is provided with the product. If you edit the default response file, you must add encrypted passwords to the file.
For more information, see the Creating encrypted passwords for a response file topic.
A silent modification uses a response file to automate the removal of applications from the deployment. To perform a silent modification...
- Customize the response file LC_uninstall_linux.rsp under /QIBM/ProdData/IBM/Connections/silentResponseFile. You need to add encrypted passwords to the file.
For more information, see the Creating encrypted passwords for a response file topic.
- Open a command prompt and navigate to the IM_root/eclipse/tools directory.
- Enter the following command:
imcl -input response_file -log log_file
where response_file is the full path and name of the response file and log_file is the full path and name of the log file. The default name of the response file is LC_uninstall_linux.rsp. On IBM i, the response file is in the connections_root/silentResponseFile directory by default.
- Compare the following examples to your environment:
IBM i: imcl -input /QIBM/ProdData/IBM/Connections/silentResponseFile/LC_uninstall_linux.rsp -log /mylog/silent_uninstall_log.xml
Installation Manager writes the result of the command to the log file specified with the -log parameter. To check the complete details of the procedure, open each of the log files in connections_root/logs. Each Connections application that you uninstalled has a log file that uses the following naming format:
application_nameUninstall.log
...where application_name is the name of a Connections application. If the operation is successful, the log files are empty. For example:
<?xml version="1.0" encoding="UTF-8"?> <result> </result>
The log file contains an error element if the operation did not complete successfully. A successful operation adds a value of 0 to the log file. An unsuccessful operation adds a positive integer to the log file. The log file for Installation Manager records the values that you entered when you ran Installation Manager in interactive mode. To review the log file for Installation Manager, open date_time.xml, where date_time represents the date and time of the operation. By default, the file is in the following directory:
IBM i: /QIBM/UserData/InstallationManager/logs
Uninstalling: delete databases with the database wizard
Use the database wizard to delete databases.
To delete databases with the database wizard...
- Log in as the database administrator, using the account that you created when you installed the database.
- From the Connections wizards directory, run the following script file to launch the wizard:
- AIX: ./dbWizard.sh
- Linux: ./dbWizard.sh
- Windows: dbWizard.bat
- On the Welcome panel, click Launch Information Center to open the Connections product documentation in a browser window. Click Next to continue.
- Select the option to delete a database, and click Next.
- Specify the relevant database information, and then click Next:
- Select a database type.
- Select the location of the database.
- Specify a database instance.
The database instance that you specify must already exist on your system. For more detail about the database information, refer to Create databases using SQL scripts.
- Select the application databases to delete and click Next.
Application databases that are not installed are greyed out.
- Review the Pre-Configuration Task Summary to ensure that the values you entered on previous panels are correct. To make a change, click Back to edit the value. Click Delete to begin deleting databases.
- Review the Post Configuration Task Summary panel and, if necessary, click View Log to open the log file. Click Finish to exit the wizard.
Uninstalling databases using response file
Remove databases with the database wizard using response file.
Ensure that the wizard has created the response.properties file in the user_settings/lcWizard/response/dbWizard directory.
For more information, see the The database wizard response file topic.
To create a response file, run the wizard in standard mode and specify that you would like to create a response file. You can modify the existing response file or create your own, using a text editor.
To remove databases using response file...
- (DB2 on Windows 2008 64-bit.) On Windows 2008, you must perform DB2 administration tasks with full administrator privileges.
- Log in as the instance owner, open a command prompt, and change to the DB2 bin directory. For example: C:\IBM\SQLLIB\BIN.
- Enter the following command: db2cwadmin.bat. This command opens the DB2 command line processor while also setting your DB2 privileges.
- From the command prompt, change to the directory where the wizard is located.
- Launch the wizard by running the following command:
- AIX: ./dbWizard.sh -silent response_file
- Linux: ./dbWizard.sh -silent response_file
- Windows: dbWizard.bat -silent response_file
where response_file is the file path to the response file.
If the path to the response_file contains a space, enclose this parameter in double quotation marks ("), as shown in the example after these steps.
After the wizard has finished, check the log file in the Lotus_Connections_set-up_directory/Wizards/DBWizard directory for messages. The log file name uses the time as a postfix. For example: dbConfig_20101228_202501.log.
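For example, on Windows, a silent database removal where the response file path contains a space might look like the following (the path shown is only an illustration):
dbWizard.bat -silent "C:\Connections wizards\response.properties"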
Uninstalling: delete databases on IBM i
Use CL commands to delete databases on the IBM i platform.
Ensure that all database connections are stopped, such as applications, db2 clients, and so on.
To remove databases using CL commands on the IBM i platform... If you still need to delete Metrics and Cognos databases on AIX, refer to the topic Uninstalling: Manually drop databases.
- Log in to the IBM i server where the databases are located with the user profile that was used to create the databases. The *ALLOBJ and *SECADM special authorities are required to delete the database schemas.
- From the command line, run the following commands to remove the database schemas manually:
DLTLIB ACTIVITIES
DLTLIB CALENDAR
DLTLIB DOGEAR
DLTLIB BLOGS
DLTLIB SNCOMM
DLTLIB FILES
DLTLIB FORUM
DLTLIB HOMEPAGE
DLTLIB MOBILE
DLTLIB EMPINST
DLTLIB WIKIS
Make sure that the library CALENDAR is deleted before removing the library SNCOMM. For more details about the DLTLIB command, refer to Delete library (DLTLIB).
After these commands have finished, check every database schema by running the following commands. A library no longer exists if you receive the following message: Cannot find object to match specified name.
WRKLIB ACTIVITIES
WRKLIB CALENDAR
WRKLIB DOGEAR
WRKLIB BLOGS
WRKLIB SNCOMM
WRKLIB FILES
WRKLIB FORUM
WRKLIB HOMEPAGE
WRKLIB MOBILE
WRKLIB EMPINST
WRKLIB WIKIS
Uninstalling: Manually drop databases
After uninstalling a Connections application, you can drop any related databases by using the database wizard or by following this manual procedure.
- Ensure that all Connections database instances are running.
- Disconnect any open applications from the database.
- If the database server and IBM WAS are on different systems, copy the database scripts to the system that hosts the database server.
If you prefer not to use the database wizard, use this procedure to manually drop DB2, Oracle, or Microsoft SQL Server databases.
The Wizards directory is located in the Connections set-up directory or installation media.
Complete the following steps for your database type:
- DB2:
- Log in to the database server with an authorized administrator account.
The default administrator account is db2inst1 on AIX/Linux, and db2admin on Windows.
- Run the following command for Communities:
db2 -td@ -vf calendar-dropDb.sql
- For Home page and Profiles, run the following command:
db2 -tvf dropDb.sql
- Run the following command for all the other applications:
db2 -td@ -vf dropDb.sql
The SQL scripts are located in the following directory:
- AIX or Linux: Wizards/connections.sql/application_subdirectory/db2
If you are using Linux on IBM System z with the DASD driver, the SQL scripts are located in the IBM_Connections_Install_s390/IBMConnections/connections.s390.sql directory.
If you are using Linux on IBM System z with the SCSI driver, back up the connections.s390.sql directory and rename the connections.sql directory to connections.s390.sql.
- Windows: Wizards\connections.sql\application_subdirectory\db2
where application_subdirectory is the directory for a Connections application.
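For example, to drop the Activities database on Linux, you might run the following from the Wizards directory, assuming the Activities scripts are in the activities subdirectory (a sketch; adjust the subdirectory for other applications):
cd connections.sql/activities/db2
db2 -td@ -vf dropDb.sql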
- Oracle:
- Log in to the database server with an authorized administrator account.
The default administrator account is oracle.
- Set the ORACLE_SID.
- Run SQL Plus by typing the following command:
sqlplus /NOLOG
- Type the following command to log in as an administrator with the sysdba role:
connect as sysdba
- Run the following command for Communities:
- AIX or Linux: @Wizards/connections.sql/communities/oracle/calendar-dropDb.sql
- Windows: @Wizards\connections.sql\communities\oracle\calendar-dropDb.sql
- Run the following command for the other applications:
- AIX or Linux: @Wizards/connections.sql/application_subdirectory/oracle/dropDb.sql
- Windows: @Wizards\connections.sql\application_subdirectory\oracle\dropDb.sql
where application_subdirectory is the directory for a Connections application.
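For example, a complete SQL*Plus session that drops the Activities database objects might look like the following sketch, assuming OS authentication as the oracle user and that the Activities scripts are in the activities subdirectory:
sqlplus /NOLOG
connect / as sysdba
@Wizards/connections.sql/activities/oracle/dropDb.sql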
- SQL Server:
- Open a command prompt.
- Run the following command for Communities:
sqlcmd -U admin_user -P admin_password -i Wizards\connections.sql\communities\sqlserver\calendar-dropDb.sql
If your SQL Server database has multiple instances, add the following line as the first parameter of the command:
-S sqlserver_server_name\sqlserver_server_instance_name
where sqlserver_server_name is the name of the SQL Server database, and sqlserver_server_instance_name is the name of each database instance.
- Run the following command for the other applications:
sqlcmd -U admin_user -P admin_password -i Wizards\connections.sql\application_subdirectory\sqlserver\dropDb.sql
where application_subdirectory is the directory for a Connections application.
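For example, to drop the Communities databases on a named SQL Server instance, the full command might look like the following sketch (the server name, instance name, and credentials are placeholders):
sqlcmd -S dbserver01\CONNECTIONS -U sa -P passw0rd -i Wizards\connections.sql\communities\sqlserver\calendar-dropDb.sql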
Reverting Common, Connections-proxy, and WidgetContainer Applications for Uninstallation
The Connections Installation Manager program requires that you revert the Common and WidgetContainer applications to the News cluster for uninstallation if you previously separated those applications into their own clusters.
When you revert the Common application, the Connections-proxy EAR is reverted at the same time. The new cluster, node, and server names are the same as those of the Common application.
- Stop all clusters in the Connections deployment.
- On the deployment manager, open a command prompt, and then change to the following directory on the WAS system that hosts the Connections server:
- On AIX or Linux: /opt/IBM/Connections/ConfigEngine
- On Microsoft Windows: C:\IBM\Connections\ConfigEngine
- Enter the following command to run the script that reverts the Common application to the News cluster:
- On AIX or Linux:
./ConfigEngine.sh revert-common-ear -DWasUserid=<was_admin_username> -DWasPassword=<was_admin_password> > /tmp/revert-common-ear.log 2>&1
- On Microsoft Windows:
ConfigEngine.bat revert-common-ear -DWasUserid=<was_admin_username> -DWasPassword=<was_admin_password> > C:\revert-common-ear.log 2>&1
For example, on the Microsoft Windows operating system, you would enter the following command:
ConfigEngine.bat revert-common-ear -DWasUserid=wasadmin -DWasPassword=yourpassword > C:\revert-common-ear.log 2>&1
- Enter the following command to run the script that reverts the WidgetContainer application to the News cluster:
- On AIX or Linux:
./ConfigEngine.sh revert-widgetcontainer-ear -DWasUserid=<was_admin_username> -DWasPassword=<was_admin_password> > /tmp/revert-widgetcontainer-ear.log 2>&1
- On Microsoft Windows:
ConfigEngine.bat revert-widgetcontainer-ear -DWasUserid=<was_admin_username> -DWasPassword=<was_admin_password> > C:\revert-widgetcontainer-ear.log 2>&1
For example, on the Microsoft Windows operating system, you would enter the following command:
ConfigEngine.bat revert-widgetcontainer-ear -DWasUserid=wasadmin -DWasPassword=yourpassword > C:\revert-widgetcontainer-ear.log 2>&1
- Proceed with the uninstallation.
Uninstalling Cognos Business Intelligence server
You uninstall Cognos by uninstalling the Cognos Business Intelligence server and Cognos Transformer and removing the powercube refresh scheduler.
- Stop the Cognos server from the application server list on the IBM WebSphere Deployment Manager.
- Remove the Cognos server node from the Deployment Manager by navigating to System administration > Nodes, selecting the node that contains Cognos, and clicking Remove Node.
Do not remove the node if other servers on this node should remain in the Deployment Manager cell. If this node should remain in the cell, use a new node for the Cognos server that will be used with Connections 4.5.
- Uninstall the Cognos application on the IBM WebSphere Deployment Manager, if it still exists, by navigating to Applications > Application Types > WebSphere enterprise applications, selecting Cognos, and then clicking Uninstall.
- On the machine where the Cognos application is installed, remove the Cognos server instance in WAS by running the following command from the $WAS_HOME/bin folder:
wsadmin -lang jython -conntype NONE -c "AdminServerManagement.deleteServer('NODE_NAME', 'cognos_server')"
where NODE_NAME is the name of the node that contains the cognos_server. Also, change cognos_server to the actual server name if it has been changed.
- Make sure that the following Cognos processes have stopped before proceeding to the next step.
These processes should have stopped within a few minutes after the Cognos application stopped.
- IBM AIX or Linux: cgsServer.sh and CAM_LPSvr processes
- Windows: cgsLauncher.exe and CAM_LPSvr processes
- From the operating system command line, change to the Cognos_BI_install_path/uninstall directory.
- At the command prompt:
- Windows: uninst -u
- UNIX or Linux with XWindows: ./uninst -u
- UNIX or Linux without XWindows: ./uninstnx -u -s
- Then follow the prompts to complete the IBM Cognos Business Intelligence Server uninstallation.
- Delete the folder where Cognos Business Intelligence Server was installed.
- From the operating system command line, change to the Cognos_Transformer_install_path/uninstall directory.
- At the command prompt:
- UNIX or Linux with XWindows: ./uninst -u
- UNIX or Linux without XWindows: ./uninstnx -u -s
- Then follow the prompts to complete the IBM Cognos Business Intelligence Transformer uninstallation.
- Delete the folder where Cognos Business Intelligence Transformer was installed.
- Delete all the files from the path where the powercube file was stored.
- Remove the powercube refresh scheduler as follows:
- Windows:
schtasks /delete /tn MetricsCubeDailyRefresh /f
schtasks /delete /tn MetricsCubeWeeklyRebuild /f
- UNIX or Linux:
crontab -e
Edit the opened file to remove the lines that contain the following paths, and then save the change:
Cognos_Transformer_install_path/metricsmodel/daily-refresh.sh
or
Cognos_Transformer_install_path/metricsmodel/weekly-rebuild.sh
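For example, the crontab entries to remove might look like the following sketch; the schedule fields shown are illustrative, and only the script paths matter:
0 1 * * * Cognos_Transformer_install_path/metricsmodel/daily-refresh.sh
0 2 * * 0 Cognos_Transformer_install_path/metricsmodel/weekly-rebuild.sh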
Update and migrating
Update or migrate Connections to the latest point release.
About migrating
Migrate Connections version 4.0 to version 4.5 using built-in wizards and scripts to move your data, configuration settings, and databases.
This topic applies to all 4.0 CR2 and later releases of version 4.0, which means you need to upgrade the deployment to 4.0 CR2 before performing the migration.
If you have any customized header, footer, theme, or CSS files, you might need to update those customizations manually.
For more information, see the Saving your customizations topic.
You must delete any Search-related data from your Connections 4.0 content stores, such as indexes and statistics. The 4.5 installation generates new Search-related data.
About updating
Update Connections 4.5 with the latest interim fixes or fix packs.
After updating, you might need to reconfigure IBM HTTP Server.
For more information, see the Configuring IBM HTTP Server topic.
Prepare Connections for maintenance
When you migrate or update Connections, inform users about any planned outages.
If you are migrating from version 4.0, ensure that your systems meet the requirements for Connections 4.5.
For more information, see the Connections system requirements topic.
Add Location and ErrorDocument stanzas to the httpd.conf file before migrating. When the deployment is offline, users are directed to a maintenance page.
To bring down the product in preparation for updating or migrating...
- Inform users of the planned outage with details of when the maintenance work begins and how long it is scheduled to last. You can send email notifications to community members or post a message to an area of the product used to provide site status information.
- Perform one of the following steps:
- Stop the IBM HTTP Server – ensure that no other applications are using the IBM HTTP Server.
- Keep the web server running but prevent user access to the deployment during the migration or update. To accomplish this, set up a maintenance page and create a rewrite rule in the httpd.conf configuration file for the IBM HTTP Server to redirect requests for Connections:
- Create an HTML document notifying users of the server maintenance window. The maintenance page can inform users that Connections is temporarily unavailable because of scheduled maintenance work. Point to the maintenance page by using these ErrorDocument statements:
ErrorDocument 401 /upgrading.htm
ErrorDocument 403 /upgrading.htm
- Add the following element to the httpd.conf file to block all non-authorized IP addresses from reaching the server and to send the user to the upgrading.htm page:
<Location / >
Order Deny,Allow
Deny from all
Allow from your.ip.address
Allow from ip.address.of.each.machine.in.deployment
</Location>
You must have an Allow element for every instance of WAS in the deployment.
When using plug-ins with Connections, this approach returns a 403 HTTP response code ("Forbidden") to clients. The browser displays the maintenance page defined by the ErrorDocument directive; however, the Connections plug-ins cannot handle the 403 response code and display a user ID and password prompt, and supplying credentials keeps failing. To avoid this, you can use the following alternative approach, which returns a 500 HTTP response code ("Internal Server Error"); with this approach the plug-ins do not prompt the user for credentials. Add the following lines to the end of the httpd.conf file:
LoadModule rewrite_module modules/mod_rewrite.so
RewriteEngine on
RewriteCond %{REMOTE_HOST} !^127.0.0.1
RewriteCond %{REMOTE_HOST} !^192.168.157.139
RewriteCond %{REMOTE_HOST} !^192.168.157.140
RewriteRule !^/upgrading.htm$ /upgrading.htm [L,R=500]
ErrorDocument 500 /upgrading.htm
Again, exclude all IP addresses of servers and desktops that should still have access to the environment during maintenance. Using RewriteCond lines as in the example, exclude every instance of WAS in the deployment. Comment out these lines when maintenance is finished, and then restart the HTTP server. When the migration or update is complete, remove the Location and ErrorDocument stanzas from the httpd.conf file.
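A minimal upgrading.htm maintenance page might look like the following sketch; adapt the wording and styling to your organization:
<html>
<head><title>Scheduled maintenance</title></head>
<body>
<p>Connections is temporarily unavailable because of scheduled maintenance work. Please try again later.</p>
</body>
</html>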
Back up Connections
Before migrating or updating Connections, back up your databases and applications.
Follow these steps to back up your Connections deployment. You can use this backup to restore your existing deployment if the update or migration fails. This procedure backs up your entire deployment; you cannot use it to back up individual applications within Connections.
- Stop the IBM WAS instances that are hosting Connections.
- Using native database tools, back up the databases. If the update or migration fails, use this backup to restore the databases.
For more information about backing up Connections data, see the Backing up and restoring data topic.
- Back up the WAS Deployment Manager profile directory: profile_root/Dmgr01. For example: D:\WebSphere\AppServer\profiles\dmgr.
- Back up your Connections deployment.
- Create a back-up of the Connections installation directory: connections_root.
- Create a back-up of the WAS profile directory: profile_root
If Connections applications are deployed on separate profiles, archive each profile.
- Create a back-up of the profileRegistry.xml file, located under app_server_root/properties.
- Back up the local and shared data directories:
- Back up the Shared Resources directory:
- AIX or Linux: shared_resources_root
- Windows: shared_resources_root
- Back up the Installation Manager data directory:
This step is necessary only if you are planning an in-place migration of Connections; that is, where you use the same systems to host the new deployment.
- AIX or Linux: /var/ibm/InstallationManager
- non-root user: /home/user/var/ibm/Installation Manager where user is the account name of the non-root user.
- Windows: C:\ProgramData\IBM\Installation Manager
CAUTION: The Installation Manager's shared data directory needs to be backed up as well. If you need to go back to the older Connections installation after an upgrade, a conflict might occur if the IIM shared folder has not been backed up and restored in parallel. IIM relies on the data inside the shared folder to work correctly. If the version of Connections that IIM thinks is installed differs from the version of Connections that was restored on the file system, IIM will not function correctly. Back up the following folders (an example archive command is shown after these steps):
- Windows 2008:
- C:\IBM\Installation Manager
- C:\IBM\SSPShared
- C:\ProgramData\IBM\Installation Manager
- Windows 2003:
- C:\Documents and Settings\All Users\Application Data\IBM\Installation Manager
- C:\IBM\Installation Manager
- C:\IBM\SSPShared
- Linux:
- /opt/IBM/InstallationManager
- /var/ibm/InstallationManager
- /opt/IBM/SSPShared
- AIX:
- /var/ibm/InstallationManager
- /usr/IBM/InstallationManager
- /usr/IBM/SSPShared
- Back up any customized configuration files.
For more information, see the Saving your customizations topic.
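For example, on Linux you might archive the Installation Manager directories listed above with a command like the following sketch (the backup destination is an assumption):
tar -czf /backup/im-backup.tar.gz /opt/IBM/InstallationManager /var/ibm/InstallationManager /opt/IBM/SSPShared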
Saving your customizations
Before updating or migrating Connections, back up and make notes of your customizations.
Only some of the customizations that you made to Connections 4.0 application files and custom fields are preserved by the migration tool (the lc-export and lc-import commands). Ensure that you reapply your customizations after migration.
Customized files and themes
The update and migration processes change several configuration files, including files that you customized. Customized files can include header and footer HTML files, CSS and JSP files, themes, and several other files that are listed in this topic. Custom CSS files are preserved in the customization directory but update them to match the new One UI version implemented in this release.
For more information about the One UI standard used in Connections 4.5, refer to the ICS UI Developer Guide:
http://infolib.lotus.com/resources/oneui/3.0/docPublic/index.htm
You must manually migrate the items that are marked as requiring manual migration in the following list:
- User interface (customized CSS, JSP, and HTML; labels and strings). Automatically migrated: Partially. Migrate manually: Yes. Customized string files are preserved in the customization directory, but you might need to rename some string files to match new Java and Dojo package names in Connections 4.5. Custom CSS files are preserved in the customization directory, but update them to match the new One UI version in Connections. For more information, refer to the ICS UI Developer Guide: http://infolib.lotus.com/resources/oneui/3.0/docPublic/index.htm
- Template configuration. Automatically migrated: Partially. Migrate manually: See the post-migration steps. Manual post-migration steps may be required; see the Post-migration steps for profile types and profile policies topic.
- Header and footer. Automatically migrated: No. Migrate manually: Yes. The customized header and footer are preserved in the customization directory, which is not migrated. You must update your customized header and footer to match the new layout and functionality of Connections 4.5.
- Email notification templates. Automatically migrated: No. Migrate manually: Yes. Email notification templates have been translated from JSP to FreeMarker. For more information, see the Customize notifications topic. Merge your existing customizations with the new notification templates in the profile_root/Dmgr01/config/cells/cell_name/LotusConnections-config/notifications directory. If a global customization exists for all emails, then you only need to customize the shared resources. If JSP templates were customized in the 4.0 customization folders, you can remove them.
- Blog themes. Automatically migrated: Partially. Migrate manually: Yes.
- Community themes. Automatically migrated: Yes. Migrate manually: Redefine any custom themes. The default Communities themes are migrated automatically; however, you must redefine any custom theme.
- Business cards. Automatically migrated: Yes. Migrate manually: Partially. The Links and Actions attributes of business cards are migrated when the LotusConnections-config.xml and profiles-config.xml files are migrated. Link definitions defined as attributes of each service in the LCC.xml file are migrated to the new LCC.xml file. The photo and contact information in business cards is migrated from the profiles-config.xml file in 4.0 to the new businessCardInfo.ftl FreeMarker Template Language file in 4.5. Other elements in the profiles-config.xml file are migrated to the new profiles-config.xml file. Manual steps may also be required to migrate profile customizations for business cards from 4.0, as described in the Post-migration steps for profile types and profile policies topic.
- Security role mappings. Automatically migrated: No. Migrate manually: Yes. Redefine your security role mappings. For more information about security role mapping, see the Security role to user or group mapping topic in the WAS Information Center.
- service-location.xsd. Automatically migrated: Partially. Migrate manually: Yes. Customized extended service names are not migrated. Copy any customized XSD elements to the 4.5 service-location.xsd file.
- profiles-policy.xml. Automatically migrated: No. Migrate manually: Yes. Copy the 4.0 version of the profiles-policy.xml file to the 4.5 deployment, overwriting the 4.5 version of the file. For more information, see the Post-migration steps for profile types and profile policies topic.
- validation.xml. Automatically migrated: No. Migrate manually: Yes. Redefine customized Profile field validation settings.
- JavaScript. Automatically migrated: No. Migrate manually: Yes. The file path and contents of JavaScript string customizations have changed and require manual migration. For more information about the locations of JavaScript files, see the Customize strings sourced in JavaScript topic. The ui-extensions framework is deprecated in 4.5, so you must manually migrate your 4.0 JavaScript customizations. For more information, see the Extend JavaScript in Connections topic.
- Connections Connector for Lotus Quickr. Automatically migrated: No. Migrate manually: Reinstall the connector. The Connections Connector for Lotus Quickr was updated for Connections 4.5. You must obtain the 4.5 version from the catalog.
- Server whitelist for publishing file attachments from Activities to Lotus Quickr. Automatically migrated: No. Migrate manually: Redefine the whitelist.
Prepare to migrate the media gallery
Before you migrate your media gallery from Connections 4.0 to 4.5, you must prepare your terms and conditions agreement and any third-party video player that you use. Both features are optional, so you might not have used them.
If you created an optional terms and conditions agreement for the media gallery or used a custom video player in 4.0, then follow these instructions:
- Copy the string bundle files associated with the terms and conditions agreement from the customization_dir/strings directory that you used in Connections 4.0.
- If you were using a custom video player in Connections 4.0, back up the original enterprise application that you used. If you used the same naming conventions as in the 4.0 documentation, back up the CustomVideoPlayer enterprise application (ear).
- Select the ear in the WAS admin console.
- Click Export.
- Save the ear to the file system.
- If you were using a custom field renderer in Connections 4.0, back up the original enterprise application that you used. If you used the same naming conventions as in the Connections 4.0 documentation, back up the CustomRenderers enterprise application (ear).
- Select the ear in the WAS admin console.
- Click Export.
- Save the ear to the file system.
Migrating Cognos Business Intelligence
You can use an existing content store or create a new content store to migrate Cognos Business Intelligence for Connections from 4.0 to 4.5.
To migrate Cognos Business Intelligence from Connections 4.0 to 4.5, you need to install a new Cognos Business Intelligence server instead of using the old one; however, the old Cognos database can be reused for the new installation if you want. In the steps in this topic that mention installing Connections 4.5, be sure to follow the Connections 4.5 guide to install the new Cognos Business Intelligence server.
Use an existing Cognos content store database
You can use an existing content store to migrate Cognos Business Intelligence for Connections from 4.0 to 4.5.
To use the existing Cognos content store database from Connections 4.0, perform the following steps:
- Back up the Connections 4.0 Cognos content store database.
- Back up the customized reports on the Cognos Business Intelligence server. If no customized report exists, you can skip this step.
- Open a browser and log into the Cognos dashboard page as Cognos administrator.
- Create a new folder titled CustomizeReportsBackup under Public Folders > IBMConnectionsMetrics to keep a backup of all customized reports.
- Copy all customized reports and folders to the CustomizeReportsBackup folder. Make note of the original location for these reports and folders for later use.
- Make note of the changes that were made to 'jobtemplate', 'jobtemplate1', 'jobtemplate2', 'jobtemplate3', 'jobtemplate4' and 'jobtemplate5' under Public Folders > IBMConnectionsMetrics > Metrics > static. If customized reports were added to a Community, or reports were removed from a Community for performance tuning, you will need this list of report changes for these job templates.
- Remove the old LDAP namespace and shut down the Connections 4.0 Cognos BI server.
- Enable anonymous access to the Connections 4.0 Cognos BI server and remove the LDAP namespace configuration. There are two ways to enable anonymous access.
- Run the Cognos Configuration tool.
- Or, if your Cognos server runs on AIX or Linux and does not provide a graphical user interface, you can enable anonymous access manually.
You can delete the LDAP namespace configuration either by using the Cognos Configuration tool or by manually removing the LDAP configuration section from the Cognos configuration file.
Restart the Cognos server after you have performed the previous steps.
- Open a browser and log into the Cognos dashboard page as anonymous.
- On the Cognos dashboard page, click Launch and then select IBM Cognos Administration.
- Select the Security tab and then select Users, Groups, and Roles in the list.
- On the Directory page, delete the LDAP namespace from the list.
- Shut down the Connections 4.0 Cognos BI server and make sure that the Cognos content store database is not in use, because it will be used by the new Connections 4.5 Cognos server.
- Update the Connections 4.0 Cognos BI server configuration so that it no longer uses this Cognos content store database. There are two ways to do this:
- Run the Cognos Configuration tool. Click Data Access > Content Manager > IBM Cognos Content Store within the Cognos Configuration tool, and change the Database name or Service name to an invalid value, such as Cognos_obsolete. Then save the change.
- Or, if your Cognos server runs on AIX or Linux and does not provide a graphical user interface, you need to do this manually. Open the Cognos configuration file cogstartup.xml in the cognos_biserver_install_path/configuration directory with a text editor, and find the instance named IBM Cognos Content Store. Change the value for the name or servicename parameter within it to an invalid value, such as Cognos_obsolete. Then save the change.
- Uninstall Cognos BI. This is recommended if you are installing Cognos BI in place. It is not necessary if you are installing Cognos BI on new hardware.
- Install the new Cognos Business Intelligence server.
During the process of creating databases for Connections 4.5, do not create the Cognos database. Use the Connections 4.0 Cognos content store database information when preparing the cognos-setup.properties file for the Cognos Business Intelligence components installation. If an old Cognos node already exists on the Connections 4.5 Deployment Manager before you federate the newly installed Cognos server to it, remove the old Cognos node first, uninstall the Cognos application if it exists in the application list, and then federate the newly installed Cognos server to the Deployment Manager.
- Install Connections 4.5.
For more information, see Migrating to Connections 4.5. Return to this page to continue with step 8 after this step is done.
- Configure Cognos BI as described in Configure Cognos Business Intelligence.
- Run the build-all.bat|sh script under cognos_transformer_install_path/metricsmodel to rebuild the powercube on the newly installed Cognos Transformer server.
All the history data in the Metrics database must be restored from the database backup file before you run the build-all.bat|sh script. To do this, you can restore all history data to the Connections 4.0 Metrics database, and then use the database migration tool to migrate the data to the Connections 4.5 Metrics database.
- Restore customized reports and redo the job template change on the new BI server. If no customization was made, you can skip this step:
- Open a browser and log into the new Connections 4.5 Cognos dashboard page as the Cognos administrator.
- Copy all customized reports and folders from the Public Folders > IBMConnectionsMetrics > CustomizeReportsBackup folder to the original location noted in step 2.
- Redo any change to 'jobtemplate', 'jobtemplate1', 'jobtemplate2', 'jobtemplate3', 'jobtemplate4' and 'jobtemplate5' under Public Folders > IBMConnectionsMetrics > Metrics > static according to the information noted in step 3.
- Confirm that the customized reports for both global and community are able to work well in the Metrics interface.
To confirm the community report, you need to choose a community and perform the report update for this community to generate the new version reports.
- Delete the Public Folders > IBMConnectionsMetrics > CustomizeReportsBackup folder.
- Remove the old dispatcher for the Connections 4.0 Cognos BI server on the newly installed Connections 4.5 Cognos BI server.
- Make sure you have logged into the Connections 4.5 Cognos dashboard page as the Cognos administrator.
- On Cognos dashboard page, click Launch and then select IBM Cognos Administration.
- Select the Configuration tab and then select Dispatchers and Services in the list.
- On the Configuration page, delete the old dispatcher URL for Connections 4.0 Cognos BI server if it exists.
Use a newly created Cognos content store database
You can use a new content store to migrate Cognos Business Intelligence for Connections from 4.0 to 4.5.
To use a new Cognos content store database, perform the following steps:
- Prepare the customized reports on the Connections 4.0 Business Intelligence server as follows. If no customized report exists, you can skip this step.
- Open a browser and log into the Cognos dashboard page as the Cognos administrator.
- Create a new folder titled CustomizeReportsBackup under Public Folders > IBMConnectionsMetrics to keep a backup of all customized reports.
- Copy all customized reports and folders to the CustomizeReportsBackup folder. Make note of the original location for these reports and folders for later use.
- Make note of the changes that were made to 'jobtemplate', 'jobtemplate1', 'jobtemplate2', 'jobtemplate3', 'jobtemplate4' and 'jobtemplate5' under Public Folders > IBMConnectionsMetrics > Metrics > static. If customized reports were added to a Community, or reports were removed from a Community for performance tuning, you will need this list of report changes for these job templates.
- Export the customized reports and community reports from the Connections 4.0 BI server as follows. If no customization was made, you can skip this step.
- On the Cognos dashboard page, click Launch and then select IBM Cognos Administration.
- Select the Configuration tab and then select Content Administration in the list.
- On the Administration page, click New Export in the list tool bar to create an 'Export' object.
- Specify MigrationPkg for the Name attribute in the opened page and then click Next.
- Select Select public folders and directory content for Deployment method, and then click Next.
- In the Public folders content section, click Add to open the Select entries(Navigate) page.
- Locate the CustomizeReportsBackup and StaticReports folders under Public Folders > IBMConnectionsMetrics. If no customized report was made, skip the CustomizeReportsBackup folder.
- Select the check box for these two folders.
- Click Add to add them to the Selected entries list.
- Click OK to save the edit.
- Return to the Select public folders content page.
- Select the check boxes for the two folders just added in the list in the Public folders content section.
- In the Options section, select Include report output versions and select Replace existing entries for Conflict resolution and then click Next.
- Leave the items for Directory content unchecked and then click Next.
- Select Include access permissions and Apply to new and existing entries for Access permissions, keep the default option for the other sections and then click Next.
- Keep the default option for Deployment archive and Encryption and then click Next to go to the Review the summary page.
- Click Next to go to the Select an action page.
- Select Save and run once, then click Finish.
- Select Now and click Run.
- You might see a message that asks you to confirm running the export. Select View the details of this export after closing this dialog, and then click OK to start the export.
- In the View run history details page, you can click the Refresh link to refresh the status of the export after you wait for a while. When the export completes, the status changes to Succeeded.
- Go to the cognos_biserver_install_path/deployment folder to locate a new zip file named MigrationPkg.
- Uninstall Cognos BI. This is recommended if you are installing Cognos BI in place. It is not necessary if you are installing Cognos BI on new hardware.
- Install the new Cognos Business Intelligence server.
During the process of creating databases for Connections 4.5, do not create the Cognos database. Use the Connections 4.0 Cognos content store database information when preparing the cognos-setup.properties file for the Cognos Business Intelligence components installation. If an old Cognos node already exists on the Connections 4.5 Deployment Manager before you federate the newly installed Cognos server to it, remove the old Cognos node first, uninstall the Cognos application if it exists in the application list, and then federate the newly installed Cognos server to the Deployment Manager.
- Install Connections 4.5.
For more information, see Migrating to Connections 4.5. Return to this page to continue with step 7 after this step is done.
- Configure Cognos BI as described in Configure Cognos Business Intelligence.
- After the Connections 4.5 installation completes, run the build-all.bat|sh script under Cognos_transformer_install_path/metricsmodel to rebuild the powercube on the newly installed Cognos Transformer server.
All the history data in the Metrics database must be restored from the database backup file before you run the build-all.bat|sh script. To do this, you can restore all history data to the Connections 4.0 Metrics database, and then use the database migration tool to migrate the data to the Connections 4.5 Metrics database.
- Import the customized reports and community reports to the Connections 4.5 Cognos BI server.
- Copy the MigrationPkg.zip created in step 3 to the cognos_biserver_install_path/deployment folder on the new Connections 4.5 BI server.
- Open a browser and log into the new Connections 4.5 Cognos dashboard page as Cognos administrator.
- On Cognos dashboard page, click Launch and then select IBM Cognos Administration.
- Select the Configuration tab then Content Administration.
- On the Administration page, click New Import on the tool bar to create an 'Import' object.
- Select MigrationPkg for Deployment archive in the newly opened page and then click Next.
- Use the default name and then click Next.
- Select the CustomizeReportsBackup and StaticReports in the Public folders content section, keep the default options for other items, and then click Next. If no customized report was made, skip the folder CustomizeReportsBackup.
- Keep the default options for this page and then click Next to go to the Review the summary page.
- Click Next to go to the Select an action page.
- Select Save and run once, then click Finish.
- Select Now, keep the other options on this page unchanged and then click Run.
- If you receive a message that asks you to confirm running the import, select the View the details of this import after closing this dialog option and then click OK to start the import.
- In the View run history details page, you can click the Refresh link to refresh the status of the import after you wait for a while. When the import completes, the status changes to Succeeded.
- Restore the customized reports and redo the job template changes on the new BI server. If no customization was made, you can skip this step:
- Open a browser and log into the new Connections 4.5 Cognos dashboard page as Cognos administrator.
- Copy all customized reports and folders from the Public Folders > IBMConnectionsMetrics > CustomizeReportsBackup folder to the original location noted in step 1.
- Redo the changes to 'jobtemplate', 'jobtemplate1', 'jobtemplate2', 'jobtemplate3', 'jobtemplate4' and 'jobtemplate5' under Public Folders > IBMConnectionsMetrics > Metrics > static according to the information noted in step 3.
- Confirm that the customized reports for both global and community can work well in the Metrics interface.
To confirm the community report, you need to choose a community and do the report update for this community to generate the new version reports.
- Delete Public Folders > IBMConnectionsMetrics > CustomizeReportsBackup.
Quickr migration tools for migrating places to Connections
There are migration tools available for migrating Lotus Quickr 8.5.1 for Domino and Quickr 8.5 for Portal places into Connections communities.
Migrating Quickr for Portal places to Connections Content Manager libraries
Use the Quickr for Portal migration tool to migrate contents from Lotus Quickr for WebSphere Portal 8.5 to Connections Content Manager.
The following conditions apply when migrating Lotus Quickr content into Connections:
- All draft contents must be complete before migration.
- The migration tool does not support migrating the multiple selection property from Quickr.
- After migration, the timestamp and creator of new communities in Connections will be the current time and the admin account that actually runs the migration. The migrated content itself will preserve the timestamp and updater information from Quickr.
- Users should not add place IDs from the Quickr user interface, but should instead retrieve place IDs via the migration tool using the -a command.
- Quickr workflows that have been completed will not be preserved. All workflows are expected to be completed before migration.
The Quickr for Portal migration tool needs to migrate membership from Quickr to a Connections Community; however, the Connections Community API requires a unique ID when adding a member. The migration tool uses TDI to generate a flat file that maps the DN to the unique ID. When you have generated the files successfully, you can copy them to the Quickr server and perform additional configuration. Make sure that you have created the correct mapping for every external user in Quickr; otherwise, errors will be reported during migration.
A group member who was a manager will be downgraded to an ordinary member in the community after migration.
Libraries in Quickr places, including folders, documents, history versions, and comments can be migrated to Connections Communities. Existing Lotus Quickr Linked Libraries and Libraries that have been integrated with Connections also can be migrated.
Quickr blogs, wikis, and components other than libraries cannot be migrated.
- Generate the mapping between the DN and the Connections unique ID. Before performing this task, you should know how to populate the Profiles database and where your Connections unique ID comes from. You can get the wizard for populating Profiles databases from the Connections installation package. All TDI-related files are in the Wizards\TDIPopulation directory.
- The unique ID can come directly from an LDAP attribute, with no conversion needed; for example, from ibm-entryUuid for Tivoli LDAP by default.
- The unique ID can also be converted from an LDAP attribute by calling existing TDI JavaScript functions, which you can copy from Wizards\TDIPopulation\win\TDI\profiles_functions.js. For example, call function_map_from_objectGUID to convert from objectGUID for Active Directory LDAP by default, or call function_map_from_dominoUNID to convert from dominoUNID for Domino LDAP by default.
Make sure to save the DN-to-unique-ID mapping file with the "utf-8" character set.
The following procedure uses an Active Directory LDAP as an example. If you are using Tivoli LDAP, add ibm-entryUuid instead of objectGUID in step e, and then choose CSV Parser instead of Script Parser in step f. If you do not use the default LDAP attribute as the Connections unique ID, refer to these steps and change them to suit your Connections implementation. Generate two files, one for users and one for groups. You can specify the LDAP search filter for users and groups respectively in step g.
- Set up your TDI development environment as described in Set up your development environment.
- Import the Connections TDI project to the TDI configuration editor. The configuration file is WizardsCopy\TDIPopulation\win\TDI\profiles_tdi.xml. You can make a copy of the Wizards directory and work on this copy.
- Create collect_dns_uid_For_AD and collect_dns_uid_flow_For_AD by copying and pasting the existing collect_dns and collect_dns_flow.
- Change collect_dns_uid_For_AD to call collect_dns_uid_flow_For_AD.
- In collect_dns_uid_flow_For_AD, add a new mapping by clicking Add then entering objectGUID as name.
- On the Parser tab:
- Select Script Parser, click Edit Script, and then copy the function function_map_from_objectGUID from profiles_functions.js.
- Change work.getAttribute("objectGUID") to entry.getAttribute("objectGUID").
- Change the function writeEntry() as follows:
function writeEntry () {
  out.write (entry.getString("$dn"));
  out.write (";");
  out.write (function_map_from_objectGUID());
  out.newLine();
}
- Click Advanced and set Character Encoding to UTF8. If the directory is Domino Native, and Connections also connects to the same directory, then the writeEntry function should be similar to the following to convert "," to "/" for the person dn and trim "CN=" from the group dn. Also add an attribute named "objectclass" in the Output Map.
function writeEntry () {
  var type = entry.getString("objectclass");
  if (type == "dominoPerson") {
    var person_dn = entry.getString("$dn");
    var index = person_dn.indexOf(",");
    if (index != -1) {
      person_dn = person_dn.replace(/,/g, "/");
    }
    out.write (person_dn);
    out.write (";");
    out.write (function_map_from_dominoUNID());
    out.newLine();
  }
  if (type == "dominoGroup") {
    var group_dn = entry.getString("$dn");
    var index = group_dn.indexOf("CN=");
    if (index == 0) {
      group_dn = group_dn.substring(3);
    }
    out.write (group_dn);
    out.write (";");
    out.write (function_map_from_dominoUNID());
    out.newLine();
  }
}
- Configure Wizards\TDIPopulation\win\TDI\profiles_tdi.properties with the LDAP server information, such as source_ldap_url, source_ldap_user_login, source_ldap_user_password, source_ldap_search_base, and source_ldap_search_filter.
- Create Wizards\TDIPopulation\win\TDI\collect_dns_uid_For_AD.bat by copying from Wizards\TDIPopulation\win\TDI\collect_dns.bat, and then change it to call the new assembly line collect_dns_uid_For_AD.
- Run collect_dns_uid_For_AD.bat in the command console. The generated collect.dn file includes the mapping from DN to unique ID, as the following example shows. You can rename the file as you want.
CN=John Smith1,OU=Users,OU=region,OU=yourcompany,O=Sales Group,DC=company,DC=sales,DC=companyname,DC=com;05A7B8F2-1E24-4F0D-B02F-BDB223613EE5
CN=John Smith1,OU=Users,OU=region,OU=yourcompany,O=Sales Group,DC=company,DC=sales,DC=companyname,DC=com;CEABB8F9-D2A0-4754-B9FD-E2DA162D705B
......
- Add "DN;UniqueID" at the beginning of the mapping file.
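After you add the header, the finished mapping file should look similar to the following (entries abbreviated from the example above):
DN;UniqueID
CN=John Smith1,OU=Users,OU=region,OU=yourcompany,O=Sales Group,DC=company,DC=sales,DC=companyname,DC=com;05A7B8F2-1E24-4F0D-B02F-BDB223613EE5
......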
- Select the Connections Communities administrative user. The Connections Communities administrative user account will be used during the migration. To find out which account is the administrative user, open the IBM WebSphere Application Server administrative console on the Connections server and navigate to Applications > Application Types > WebSphere enterprise applications > Communities > Security role to user/group mapping > admin to see all Communities administrative users listed in the Mapped users column.
- Give the FileNet administrator role the privilege for the file content migration as follows:
- Access FileNet ACCE via...
http://yourfilenet:port/acce
- Click the ICObjectStore tab. ICObjectStore is created by Connections.
- Click the Security tab and select an administrative user, for example wasadmin, and then click Edit to set the permissions for the selected administrative user.
- Select This object and All children instead of the default value This object only from the Apply to dropdown list.
- Select Modify certain system properties, and then click OK and Save.
The following are examples of the properties that can be modified by editing the migration.properties file:
- isMigrationVersionsForDoc: If you want to migrate all historical versions of a document, set this property to true; if not, set it to false.
- IntegratedIDsFileName: The Quickr places-to-Connections communities ID mapping file, to support -m migration.
- placeIDsFileName: The Quickr places ID file, to support -s migration.
- Quickr, FileNet, and Connections server details, and Connections database details where Quickr for Portal has already been integrated with Connections, for example:
#Quickr server information
quickrHost=quickr.mycompany.com
quickrPort=10040
quickrAdmin=wpsadmin
quickrPassword=password
#FileNet server information
filenetHost=filenet.mycompany.com
filenetPort=9081
filenetAdmin=wasadmin
filenetPassword=password
filenetObjName=ICObjectStore
#Connections server information
ConectionsHost=connections.mycompany.com
ConectionsPort=9444
ConectionsAdmin=wasadmin
ConectionsPassword=password
#This is only needed if Quickr has been integrated with Connections with Quickr widget
#ConnectionsDBHost=connectionsdb.mycompany.com
#Connections database server information
#ConnectionsDBPort=50000
#ConnectionsDBAdmin=LCUSER
#ConnectionsDBPassword=password
#Please select your Connections database type
#ConnectionsDBBrand=DB2
#ConnectionsDBBrand=Oracle
#ConnectionsDBBrand=SQLServer
##Quickr database server information
#Please specify your database server information for schema community
QuickrDBSchemaCommunityHost=quickrdb.schema_community.mycompany.com
QuickrDBSchemaCommunityPort=50000
QuickrDBSchemaCommunityAdmin=db2admin
QuickrDBSchemaCommunityPassword=password
#Please specify your database server information for schema jcr
QuickrDBSchemaJCRHost=quickrdb.schema_jcr.mycompany.com
QuickrDBSchemaJCRPort=50000
QuickrDBSchemaJCRAdmin=db2admin
QuickrDBSchemaJCRPassword=password
#Please specify your database server information for schema release
QuickrDBSchemaRELEASEHost=quickrdb.schema_release.mycompany.com
QuickrDBSchemaRELEASEPort=50000
QuickrDBSchemaRELEASEAdmin=db2admin
QuickrDBSchemaRELEASEPassword=password
#Please select your Connections database type
QuickrDBBrand=DB2
#QuickrDBBrand=Oracle
- DocTypePropMappingSubClassProp: You can provide a document types/properties mapping (saved as documentType_properties_mapping.xml) to customize how those values are mapped during migration. For example:
<?xml version="1.0" encoding="UTF-8"?>
<doctypes>
  <doctype QJName="docType1" FNName="docType1">
    <property QJName='New Text' QJType='text' FNName='New_Text' FNType='String'/>
    <property QJName='New Multi-line Text' QJType='String' FNName='New_Multi_line_Text' FNType='String'/>
    <property QJName='New Number' QJType='double' FNName='New_Number' FNType='Float'/>
    <property QJName='New Person' QJType='string' FNName='New_Person' FNType='String'/>
    <property QJName='New Single Selection' QJType='string' FNName='New_Single_Selection' FNType='String'/>
    <property QJName='New Date' QJType='dateTime' FNName='New_Date' FNType='DateTime'/>
    <property QJName='New Time' QJType='long' FNName='New_Time' FNType='DateTime'/>
    <property QJName='New Date and Time' QJType='dateTime' FNName='New_Date_and_Time' FNType='DateTime'/>
    <property QJName='New URL' QJType='string' FNName='New_URL' FNType='String'/>
  </doctype>
  <doctype QJName="docType2" FNName="docType2">
    <property QJName='New Text' QJType='text' FNName='New_Text' FNType='String'/>
    <property QJName='New Multi-line Text' QJType='String' FNName='New_Multi_line_Text' FNType='String'/>
    <property QJName='New Number' QJType='double' FNName='New_Number' FNType='Float'/>
    <property QJName='New Person' QJType='string' FNName='New_Person' FNType='String'/>
  </doctype>
</doctypes>
- Put the migration tool (migration4quickrj.zip) on the Connections Content Manager server or the Quickr server. After unzipping it, you will see the properties files, the execution script files, and the migration tool with the libraries on which it depends.
- To start the migration, open a command prompt (on Windows, click Start > Run), navigate to the directory where you unzipped migration4quickrj.zip, and enter one of the following commands, depending on your operating system. For example:
The migration command options are as follows:
- On Windows entering run.bat -a retrieves all places.
- On Linux/AIX entering ./run.sh -i retrieves Quickr places that have been integrated with Connections via the Quickr widget.
-a retrieves all places from Quickr with place ID and name, and saves them to the not_integrated_place_IDs.properties file.
-i retrieves Quickr places that have been integrated with Connections via the Quickr widget, lists the Connections communities ID, places ID, and name, and saves them to the integrated_place_and_community_IDs.properties file.
-r migrates all places. If no integration exists between Lotus Quickr and Connections via the Quickr widget, this command migrates all places into Connections as new communities. If integration does exist between Lotus Quickr and Connections via the Quickr widget, the command migrates places that have not been integrated with Connections as new communities.
-e migrates all places into an existing community. If no integration exists between Lotus Quickr and Connections via the Quickr widget, this command migrates all places into existing Connections communities by specifying the source place IDs and target community IDs from the configuration file. If integration does exist between Lotus Quickr and Connections via the Quickr widget, the command migrates places that have not been integrated with Connections as new communities by specifying the source place IDs and target community IDs from the configuration file.
-s migrates places not integrated with Connections by place ID. By default, place IDs are stored in the not_integrated_place_IDs.properties configuration file.
-m migrates places integrated with Connections into existing Connections communities by place ID and community ID. By default, place IDs are stored in the integrated_place_and_community_IDs.properties configuration file.
You should first migrate linked libraries that have been previously integrated with Connections via a library widget. If Lotus Quickr is integrated with Connections via a widget before the migration, you need to disable the widgets in Connections after the migration. The Communities widgets can be disabled by editing the Connections widgets-config.xml file to comment out or remove the widget definition.
- Open the widgets-config.xml file under <was_home>\AppServer\profiles\<app_name>\config\cells\<cell_name>\LotusConnections-config.
- Search for the primaryWidget settings and then either comment them out or disable them by setting them to false, similar to the following example:
<widgetDef bundleRefId="quickrCommunityLibrary_res" defId="LinkedQuickrCommunityLib" description="LinkedQuickrCommunityLibDesc" primaryWidget="false"
You might consider running parallel migrations by making multiple copies of the migration tool to perform batch migration according to different place IDs.
- Check that places, libraries, documents, folders, document versions, members, and comments have migrated as anticipated.
- Check the impact on the public community name to make sure that the public community name is unique.
- For a private community that might happen to have the same name as a newly migrated community, check that the migration tool migrates the content into the correct Content Manager library; that is, that the content was not incorrectly migrated into the previously existing private community instead of into the library newly migrated from Quickr.
Prepare to migrate Quickr for Domino places to Connections Content Manager
You need to perform several tasks to get ready to migrate IBM Lotus Quickr for Domino places to Connections Content Manager.
- In your notes.ini file, perform the following steps:
- Modify the jar file path for the newly required jar files as follows:
QuickplaceAdmin=CN=qp/OU=QP/O=MyCompany
$h_MailDomain=lpfdesktop.us.mycompany.com
JavaUserClassesExt=QPJC1,QPJC2,QPJC3,QPJC4,QPJC5,QPJC6,QPJC7,QPJC8,QPJC9,QPJC10,QPJC11,QPJC12,QPJC13,QPJC14,QPJC15,QPJC16,QPJC17,QPJC18,QPJC19,QPJC20,QPJC21,QPJC22,QPJC23,QPJC24
QPJC1=D:\DEVQD853\DOMINO\quickplace.jar
QPJC2=D:\DEVQD853\DOMINO\log4j-1.2.14.jar
QPJC3=D:\DEVQD853\DOMINO\xsp\proxy\WEB-INF\lib\commons-httpclient-3.0.1.jar
QPJC4=D:\DEVQD853\DOMINO\xsp\proxy\WEB-INF\lib\commons-codec-1.3-minus-mp.jar
QPJC5=D:\DEVQD853\DOMINO\xsp\shared\lib\commons-logging.jar
QPJC6=D:\DEVQD853\DOMINO\abdera-core-0.4.0-incubating.jar
QPJC7=D:\DEVQD853\DOMINO\abdera-i18n-0.4.0-incubating.jar
QPJC8=D:\DEVQD853\DOMINO\abdera-parser-0.4.0-incubating.jar
QPJC9=D:\DEVQD853\DOMINO\axiom-impl-1.2.5.jar
QPJC10=D:\DEVQD853\DOMINO\axiom-api-1.2.5.jar
QPJC11=D:\DEVQD853\DOMINO\jaxen-1.1.1.jar
QPJC12=D:\DEVQD853\DOMINO\poi-3.6.jar
QPJC13=D:\DEVQD853\DOMINO\commons-io-1.4.jar
QPJC14=D:\DEVQD853\DOMINO\commons-fileupload-1.2.jar
QPJC15=D:\DEVQD853\DOMINO\odfdom.jar
QPJC16=D:\DEVQD853\DOMINO\poi-ooml-3.6-20091214.jar
QPJC17=D:\DEVQD853\DOMINO\poi-ooml-schemas-3.6-20091214.jar
QPJC18=D:\DEVQD853\DOMINO\xmlbeans-2.3.0.jar
QPJC19=D:\DEVQD853\DOMINO\dom4j-1.6.1.jar
QPJC20=D:\DEVQD853\DOMINO\Jace.jar
QPJC21=D:\DEVQD853\DOMINO\stax-api.jar
QPJC22=D:\DEVQD853\DOMINO\xlxpScanner.jar
QPJC23=D:\DEVQD853\DOMINO\xlxpScannerUtils.jar
QPJC24=D:\DEVQD853\DOMINO\abdera-client-0.4.0-incubating.jar
ServerTasksAT3=qptool placecatalog -push -a,qptool deadmail -cleanup
- Add the following configuration parameters for communicating with FileNet and Connections. The setting for the Connections URL should be ConnectionsServerURL and must use https.
ConnectionsServerURL=https://icserver.mycompany.com:9444
FilenetURL=http://fnserver.mycompany.com:9081
ObjectStore=ICObjectStore
TopTargetFolderForMigration=/C1bTeamspaces
- In the qpconfig.xml file, add the migration configuration item as the child element of <server_setting> item as follows:
<migration>
  <person_mapping_file></person_mapping_file>
  <group_mapping_file></group_mapping_file>
  <person_ldap_dump_file>D:\LDAP_dump\collect_AD213_All_User.dns</person_ldap_dump_file>
  <group_ldap_dump_file>D:\LDAP_dump\collect_AD213_All_Groups.dns</group_ldap_dump_file>
  <ccm_owner_role>Editor</ccm_owner_role>
  <expand_external_groups enabled="true" max_depth="3" number_limit="10" />
  <special_char_encoding_mode>underline</special_char_encoding_mode>
</migration>
The following elements in the qpconfig.xml file are configurable:
- <person_mapping_file> This element is an optional configuration parameter that indicates the location of the person mapping file, which contains text lines of the form A;B, where A is the user dn value in the Quickr for Domino LDAP server, B is the mapped user dn value in the CCM LDAP server, ";" is the separator. This setting covers the case where Quickr for Domino connects to Domino LDAP/Native as the directory, and CCM connects to non-Domino LDAP as directory. For example, the following snippet of the person mapping file contains the person mapping relationship from Domino LDAP to Active Directory LDAP:
CN=Domino Testuser25,o=Salesforce,c=US;CN=John Smith25,OU=Users,OU=Westerly,OU=MyCompany,O=Sales Group,DC=mycompany,DC=salesforce,DC=mycm,DC=com
CN=Domino Testuser26,o=Salesforce,c=US;CN=John Smith26,OU=Users,OU=Westerly,OU=MyCompany,O=Sales Group,DC=mycompany,DC=salesforce,DC=mycm,DC=com
The person_mapping_file needs to be saved in UTF-8 encoding.
- <group_mapping_file> This element is an optional configuration parameter that indicates the location of the group mapping file, which contains text lines of the form A;B, where A is the original group dn value in QD LDAP server, B is the mapped group dn value in CCM LDAP server, ";" is the separator. This setting covers the case where Quickr for Domino connects to Domino LDAP/Native as directory and the CCM connects to non-Domino LDAP as directory. For example, the following snippet of the group mapping file contains the group mapping relationship from Domino LDAP to Active Directory LDAP:
CN=Group-C-1;CN=Group-C-1,OU=Groups,OU=Westerly,OU=MyCompany,O=Sales Group,DC=mycompany,DC=salesforce,DC=mycm,DC=com
CN=Group-C-2;CN=Group-C-2,OU=Groups,OU=Westerly,OU=MyCompany,O=Sales Group,DC=mycompany,DC=salesforce,DC=mycm,DC=com
The group_mapping_file needs to be saved in UTF-8 encoding.
- <person_ldap_dump_file> This element is a required configuration parameter that indicates the location of the person LDAP dump file, which contains text lines of the form A;B, where A is the user dn value, B is the user uid required by the CCM ACL REST API, and ";" is the separator. For example, the following snippet of the person LDAP dump file contains the user dn --> user uid mapping relation for Domino LDAP:
CN=Domino Testuser101,o=SalesGroup,c=US;111EDD84-D82F-7300-4825-7887002CA75D
CN=Domino Testuser102,o=SalesGroup,c=US;837EF755-86DD-52C5-4825-7887002CA767
The person_ldap_dump_file needs to be saved in UTF-8 encoding.
- <group_ldap_dump_file> This element is a required configuration parameter that indicates the location of the group LDAP dump file, which contains text lines of the form A;B, where A is the group dn value, B is the group uid required by the CCM ACL REST API, and ";" is the separator. For example, the following snippet of the group LDAP dump file contains the group dn --> group uid mapping relation for Domino LDAP:
CN=Group-A-6;742DCEB2-FC1E-FDC7-4825-78870035D0A3
CN=Group-A-7;42D3E14B-CEF7-6F1C-4825-78870035D0AC
The group_ldap_dump_file needs to be saved in UTF-8 encoding.
- <ccm_owner_role> This element is an optional configuration parameter that specifies how to set the role element for the owner in the REST API Atom feed in the post request sent to Connections Content Manager. Currently, Editor is the only acceptable value for this parameter.
- <expand_external_groups> This element specifies whether an owner group should be expanded into individual person owners. If a Quickr for Domino place has a group owner but this configuration option is not enabled, an exception is thrown and the place is skipped and not migrated. The max_depth attribute specifies how many levels of a nested group can be processed during group expansion. The number_limit attribute specifies how many members should be returned from the group, including nested groups.
- <special_char_encoding_mode> This element specifies how to encode special characters in a file name. Only two encoding mechanisms are supported, urlencoding and underline. Any other value is ignored, and any file whose name contains special characters is not migrated. The special characters \ / : * ? " < > | [ ] are substituted according to the specified encoding mechanism. If urlencoding is specified, the special characters are encoded with URL encoding, as follows:
Special character | Substitution value
\ | %5c
/ | %2f
: | %3a
* | %2a
? | %3f
" | %22
< | %3c
> | %3e
| | %7c
[ | %5b
] | %5d
If underline is specified, then the special characters will be substituted with "_".
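For example (the file name is hypothetical), an attachment named plan:2013*draft?.doc would be stored with the following name, depending on the mode:
urlencoding: plan%3a2013%2adraft%3f.doc
underline: plan_2013_draft_.doc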
- <combination_document_name> After migration, the attachment name is used as the title in Connections Content Manager for a document created by an upload, an import, or a Microsoft Office-based form in Quickr, so that the download function works correctly. To preserve the original document title from Quickr, you can set <combination_document_name enabled="true"/> in the <migration> section of qpconfig.xml. With this attribute enabled, the Quickr for Domino migration tool combines the document title and attachment name, and then moves the result to Connections Content Manager.
- <domino_native enabled="true"/> If the directory of Quickr for Domino is Domino Native, enable this attribute.
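For reference, the following is a minimal sketch (values are illustrative only) of how these two optional settings might appear inside the <migration> element described above:
<migration>
  ...
  <combination_document_name enabled="true"/>
  <domino_native enabled="true"/>
</migration>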
- Modify the java.policy file as follows. The file is located at:
<Domino installation dir>\jvm\lib\security
The following entries need to be appended to the end of the file:
grant codeBase "file:${notes.binary}/lib/Jace.jar" {
  permission java.security.AllPermission;
};
grant codeBase "file:${notes.binary}/lib/quickplace.jar" {
  permission java.security.AllPermission;
};
- To be able to modify the created time and owner properties of newly created documents, the superuser that is specified when running the qptool migration command needs to be granted all permission access rights. You can modify the FileNet Object Store properties as follows:
- Log in to the Administration Console for Content Platform Engine (ACCE).
- On the navigation panel that displays, expand Object Stores and select the object store you will work with.
- Click the Security tab and highlight the user specified when running the qptool migration command in the Available Users and Groups pane.
- Click Edit to open the Edit Permissions dialog box.
- Ensure that the Permission Type value is Allow, the Apply to value is This object and all children, and the Level value is Custom.
- Select All system properties in the Rights list.
- Click OK to complete the update.
- Generate the mapping from DN to Connections unique ID. Before performing this task, you should know how the Profiles database was populated and where your Connections unique ID comes from. You can get the wizard for populating Profiles databases from the Connections installation package. All TDI-related files are in the Wizards\TDIPopulation directory.
- The unique ID can come directly from an LDAP attribute, with no conversion needed; for example, from ibm-entryUuid for Tivoli LDAP by default.
- The unique ID can also be converted from an LDAP attribute by calling existing TDI JavaScript functions. You can copy them from Wizards\TDIPopulation\win\TDI\profiles_functions.js; for example, call function_map_from_objectGUID to convert from objectGUID for AD LDAP by default, or call function_map_from_dominoUNID to convert from dominoUNID for Domino LDAP by default.
The following procedure uses an Active Directory LDAP as an example. If you are using Tivoli LDAP, add ibm-entryUuid instead of objectGUID in step e, and then choose CSV Parser instead of Script Parser in step f. If you do not use the default LDAP attribute as the Connections unique ID, refer to these steps and change them to suit your Connections implementation. Generate two files, one for users and one for groups. You can specify the LDAP search filter for users and groups respectively in step g.
- Set up your TDI development environment as described in Set up your development environment.
- Import the Connections TDI project into the TDI configuration editor. You can make a copy of the Wizards directory and work on this copy; the configuration file is then WizardsCopy\TDIPopulation\win\TDI\profiles_tdi.xml.
- Create collect_dns_uid_For_AD and collect_dns_uid_flow_For_AD by copying and pasting the existing collect_dns and collect_dns_flow.
- Change collect_dns_uid_For_AD to call collect_dns_uid_flow_For_AD.
- In collect_dns_uid_flow_For_AD, add a new mapping by clicking Add then entering objectGUID as name.
- On the Parser tab:
- Select Script Parser, click Edit Script, and then copy the function function_map_from_objectGUID from profiles_functions.js.
- Change work.getAttribute("objectGUID") to entry.getAttribute("objectGUID").
- Change the function writeEntry() as follows:
function writeEntry () {
  out.write (entry.getString("$dn"));
  out.write (";");
  out.write (function_map_from_objectGUID());
  out.newLine();
}
- Click Advanced and set Character Encoding to UTF8. If the directory is Domino Native, and Connections also connects to the same directory, then the writeEntry function should be similar to the following to convert "," to "/" for person dn and trim "CN=" from group dn. Also add an attribute in the Output Map named "objectclass".
function writeEntry () {
  var type = entry.getString("objectclass");
  if(type == "dominoPerson") {
    var person_dn = entry.getString("$dn");
    var index = person_dn.indexOf(",");
    if(index != -1) {
      person_dn = person_dn.replace(/,/g, "/");
    }
    out.write (person_dn);
    out.write (";");
    out.write (function_map_from_dominoUNID());
    out.newLine();
  }
  if(type == "dominoGroup") {
    var group_dn = entry.getString("$dn");
    var index = group_dn.indexOf("CN=");
    if(index == 0) {
      group_dn = group_dn.substring(3);
    }
    out.write (group_dn);
    out.write (";");
    out.write (function_map_from_dominoUNID());
    out.newLine();
  }
}
- Configure Wizards\TDIPopulation\win\TDI\profiles_tdi.properties with the LDAP server information, such as source_ldap_url, source_ldap_user_login, source_ldap_user_password, source_ldap_search_base, and source_ldap_search_filter.
- Create Wizards\TDIPopulation\win\TDI\collect_dns_uid_For_AD.bat by copying Wizards\TDIPopulation\win\TDI\collect_dns.bat, and then change it to call the new assembly line collect_dns_uid_For_AD.
- Run collect_dns_uid_For_AD.bat in the command console. The generated collect.dn file includes the mapping from DN to unique ID, as the following example shows. You can rename the file as you want.
CN=John Smith1,OU=Users,OU=region,OU=yourcompany,O=Sales Group,DC=company,DC=sales,DC=companyname,DC=com;05A7B8F2-1E24-4F0D-B02F-BDB223613EE5
CN=John Smith1,OU=Users,OU=region,OU=yourcompany,O=Sales Group,DC=company,DC=sales,DC=companyname,DC=com;CEABB8F9-D2A0-4754-B9FD-E2DA162D705B
...
- Add "DN;UniqueID" at the beginning of the mapping file.
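For example, after you add the header line, the beginning of the mapping file looks similar to the following (entries taken from the sample output above):
DN;UniqueID
CN=John Smith1,OU=Users,OU=region,OU=yourcompany,O=Sales Group,DC=company,DC=sales,DC=companyname,DC=com;05A7B8F2-1E24-4F0D-B02F-BDB223613EE5
...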
- Grant the authenticated users the permission to view the folder description after migration as follows:
- Open FileNet Enterprise Engine Content Manager administration tool, not the ACCE tool.
- Navigate to the Object Store you are working on, and then click Other Classes > Custom Object > Quickr Folder Properties.
- Right-click Quickr Folder Properties, select Properties, and then open the Default Instance Security tab.
- Click Add to open the Select Users and Groups window.
- In Search Criteria, enter auth and click Find.
- Select AUTHENTICATED-USERS and then click OK to grant the authenticated users group the permission to view this object.
Migrating Quickr for Domino places to Connections Content Manager libraries
Use the Quickr for Domino migration tool to migrate Quickr places to Connections Content Manager (CCM) libraries.
Determine which user to use for the migration. The user must come from LDAP, must be able to access both Connections and FileNet successfully, and should be an administrator of FileNet.
The following items are covered by the migration tool:
- A Quickr for Domino place will be migrated as a community in Connections.
- Quickr for Domino place membership will be migrated to the community membership.
- A Quickr for Domino room will be migrated to CCM as a folder; the hierarchy will be preserved.
- A Quickr for Domino folder will be migrated to CCM as a folder; the hierarchy will be preserved.
- Quickr for Domino documents created from uploads, imports, pages, simple custom forms, and Microsoft Office forms will be migrated to CCM.
- The creator, last editor, created timestamp, and last updated timestamp information will be migrated with the documents and folders.
- Versions and comments will be migrated together with the documents.
- Simple custom forms will be migrated to CCM as document classes.
- ACLs (Access Control Lists) will be migrated to CCM along with associated documents, folders, and rooms.
The following items are not covered by the migration tool:
- Wiki and Blog places.
- Task, Calendar, forum, list, link, or custom library.
- HTML forms and the documents based on HTML forms.
- Drafts. All drafts need to be completed before migration.
- Content in Trash.
- Local members.
- Workflow state. All workflows need to be completed before migration.
- Workflow actions: reply, complete, delete (classify).
Run the qptool migration command from the Domino console, for example:
load qptool migration -u <user> -pw <password> (-a | -p <place list> | -i <file>) [-o <file>]
The options are as follows:
- -? Print out this usage message.
- -u <user> User name of the privileged account to execute the migration.
- -pw <password> Password of the privileged account to execute the migration.
- -a Perform the migration for all places on the server.
- -p <name> <name> Space-separated list of places to perform the migration for.
- -i <file> Input file specifying places to be migrated.
- -o <file> Output file. (Default: qptool.migration.xml)
You can create an input XML file to batch migrate your places by using the -i parameter, similar to the following example:
- Create C:\BatchMigratePlaces.xml.
- Replace "yourservername.ibm.com", "yourplacename1", and "yourplacename2" with your Quickr for Domino server name and place name.
- You can also create more <place> sections as needed to migrate more places.
- Enter the following command in the Domino console: load qptool migration -i C:\BatchMigratePlaces.xml -u username -pw password.
<?xml version="1.0"?> <service> <servers> <server> <hostname>yuzcdlpc.ah.mycompany.com</hostname> <places> <place> <name>a0319b</name> <action_status action="migration"> </action_status> </place> <place> <name>a0320a</name> <action_status action="migration"> </action_status> </place> </places> </server> </servers> </service>
Results
Information about a migration will be written to the Domino Server Console and also added to the <action_status> element in the qptool.migrate.xml file for each migrated place, for example:
<action_status> succeeded (or failed with the fail reason) </action_status>
Migrating customized forms
You can migrate forms you have customized in IBM Lotus Quickr places to Connections Content Manager.
The customized forms that can be migrated are those based on the Simple Form template. Microsoft Office-based and imported HTML-based forms cannot be migrated. Pages created from a Microsoft Office form will be migrated with the Microsoft document as an attachment, but pages based on an HTML form will be ignored during the migration process.
- Get a list of all customized forms from the desired Quickr places to be migrated by running the qptool getcustomizedforms command from the command line. For example, to generate a list of customized forms for the place_0328 place:
D:\QD853\Domino\nqptool.exe getcustomizedforms -p place_0328 -o d:\output.xml
- Edit the output.xml file to select the customized forms you want to migrate by setting their migrate flags to "true". You also need to set the migrate configuration item of the containing room element to "true". The customized form migrate setting has two levels, room level and form level, with the default setting set to "false" for both levels. The room level migrate setting has a higher priority than the form level setting.
In the generated XML file containing the customized forms list, select the customized forms to be migrated. For example, if you want to migrate customized Form1, you first need to enable the migrate setting for its containing room, and then enable the migrate setting for the customizedForm Form1 as follows:
<places>
  <place>
    <name>place_0328</name>
    <room migrate="true" name="Main.nsf">
      <customizedForm description="" migrate="false" name="CForm2" unid="59A3C9B7945A8A64A00257B3C00F2CB4">
        <customizedField dataType="h_TextInput" descriptionText="plain text instruction" displayName="Plain Text1" name="c_PlainText1"/>
        ....
      </customizedForm>
      ...
      <customizedForm description="" migrate="true" name="Form1" unid="54AE15B7945A8A6D00257B3C00F2F05">
        <customizedField dataType="h_TextInput" descriptionText="Plain Text Instructions" displayName="Plain Text Title" name="c_PlainText1"/>
        ...
      </customizedForm>
    </room>
  ...
Customized form Form1 will be migrated, but CForm2 will not be migrated. If the room level setting is "false", none of the forms in the room will be migrated, even if you set a form to "true".
- Complete the migration of the selected customized forms by running the qptool migratecustomizedforms command using the output.xml file as an input parameter, for example:
D:\QD853\Domino\nqptool.exe migratecustomizedforms -i d:\output.xml -u wpsadmin -pw passw0rd
The console output will show which customized forms have been successfully migrated, and how many customized forms are migrated for each room and place.
- Migrate normal pages and pages created based on selected customized forms by running the migration command. This command migrates normal pages and pages created based on successfully migrated customized forms for the specified place.
D:\QD853\Domino\nqptool.exe migration -p place_0328 -u wpsadmin -pw passw0rd
Migrating to Connections 4.5
Migrate a production installation of Connections 4.0 to Connections 4.5.
Ensure that your environment meets the hardware and software requirements for Connections 4.5.
For more information, see the Connections system requirements topic.
If you have a version of Connections that is earlier than version 4.0, you must migrate it to version 4.0 before migrating to version 4.5.
If you plan to install the new Metrics application, IBM recommends that you deploy IBM Cognos Business Intelligence before installing Connections 4.5. However, you can defer deploying Cognos and still install Metrics.
For more information, see the Install Cognos Business Intelligence and Configure Cognos Business Intelligence topics.
If you are deploying version 4.5 on a different system than version 4.0, you do not need to uninstall Connections 4.0.
If possible, set up a test environment and simulate the migration process. Correct the cause of any errors that occur and then migrate your production environment.
This topic applies to 4.0 CR2 and later releases of version 4.0; you must upgrade the deployment to at least 4.0 CR2 before performing the migration.
There are several procedures required to migrate the deployment. Your migration strategy determines which procedures you need to follow.
A side-by-side migration strategy minimizes the downtime of your production environment but costs more in terms of hardware resources.
An in-place strategy minimizes costs but causes more downtime. It is similar to the side-by-side strategy except that you do not need to deploy new hardware.
Whatever strategy you decide to follow, you must complete the following steps:
- Install IBM WAS version 8.0.0.5 or higher and check for any related fix packs.
For more information, go to the Install maintenance packages, interim fixes, fix packs, and refresh packs topic in the WAS information center.
- Advise users about any possible outages.
For more information, see the Preparing Connections for maintenance topic.
- Back up your current deployment, including customized files and settings.
For more information, see the Backing up Connections and Saving your customizations topics.
- Stop Connections 4.0.
- Export Connections 4.0 artifacts.
For more information, see the Exporting application artifacts from Connections 4.0 topic.
- Migrate your Connections 4.0 data.
For more information, see the Migrating data from Connections 4.0 databases topic.
- Update your databases to version 4.5.
For more information, see the Updating 4.0 databases topic.
- Uninstall Connections 4.0. This is recommended if you are installing version 4.5 in-place. It is not necessary if you are installing version 4.5 on new hardware.
- Install Connections 4.5.
For more information, see the Installing Connections 4.5 for migration topic.
- Complete the relevant Pre-installation tasks.
For more information, see the Pre-installation tasks topic.
Some of the Pre-installation tasks describe how to create databases or populate the Profiles database. You do not need to complete those particular tasks because the migration process automatically completes them.
- Migrate your content stores.
For more information, see the Content store migration topic.
Reuse the extracted file content that is stored at the location pointed to by the EXTRACTED_FILE_STORE WebSphere variable. You can achieve this by copying the contents of the EXTRACTED_FILE_STORE location on the 4.0 system to the EXTRACTED_FILE_STORE location on the 4.5 system as part of the content store migration.
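For example, on Linux you might copy the extracted file content with a command similar to the following sketch, where both paths are placeholders for the directories that the EXTRACTED_FILE_STORE variable points to on the 4.0 and 4.5 systems:
cp -Rp /mnt/ic40_shared/extractedfile/. /mnt/ic45_shared/extractedfile/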
- Import Connections 4.0 artifacts.
For more information, see the Importing application artifacts to Connections 4.5 topic.
Perform any of the following applicable tasks as needed; in particular take note of the Post-Migration tasks:
Export application artifacts from Connections 4.0
Export application artifacts, such as configuration data, from your 4.0 deployment.
To export application data from your 4.0 deployment...
- Perform a full synchronization of all the nodes in the cluster where Connections 4.0 is deployed.
Click System administration > Nodes, select the nodes, and then click Full Resynchronize.
The migration tool does not migrate custom fields in Connections configuration files. Nor does it migrate the validation.xml file. This file is needed by the Struts validation framework and is accessed when Connections starts. To ensure that your custom fields and files are usable after upgrading, see the Saving your customizations and Post-migration tasks topics.
- Open a command prompt on the version 4.0 system, change to the migration directory and run the following command:
The migration tool does not migrate the content stores.
For more information about migrating the content stores, see the Content store migration topic.
./migration.sh lc-export
- The lc-export command exports the following data:
- Configuration files in the LotusConnections-config directory. You can find this directory in the following location:
- profile_root/config/cells/dmgr_cell_name/LotusConnections-config
- Properties files in the connections_root directory
The exported data is stored in the migration/work directory. Check the log file to validate the export. The log file is stored in the system user's home directory and uses the following naming format:
lc-migration-yyyyMMdd_HHmm_ss.log
For example:
/root/lc-migration-20101215_1534_26.log
- You can reuse the 4.0 content stores or copy them to your new content store.
For more information, see the Content store migration topic.
- Back up the migration directory to a location outside your 4.0 deployment.
Continue with the next task in the migration process.
Migrating data from Connections 4.0 databases
Learn how to migrate data from your Connections 4.0 databases.
To migrate your data, you can choose from two different strategies:
- Side-by-side
- Create new 4.0 databases on a separate system and transfer your existing 4.0 data to that system. Then update the new databases to version 4.5. This strategy requires more time and resources than an in-place migration but means that you can test the update while you continue to use your 4.0 databases.
For more information, see the Migrating data side-by-side topic.
- In-place
- Instead of migrating your 4.0 data, update the databases on the same system as the earlier version. This strategy saves time and hardware resources.
For more information, see the Migrating data in-place topic.
You can update databases with the Connections database wizard or with the SQL scripts that are provided with the product.
Perform the tasks that apply to the deployment:
Migrating 4.0 data side-by-side
Migrate your Connections 4.0 data in a side-by-side procedure so that your 4.0 data remains intact.
(DB2 only) If you use only one database instance and if that instance includes other databases besides Connections, configure the numdb parameter to match the total number of databases on the instance.
For more information, go to the numdb webpage in the DB2 information center. Notes:
- If you migrated from Connections 4.0, the numdb parameter was set to 14, the maximum number of Connections 4.0 databases. If the instance has additional databases, increase the value of the numdb parameter to match the total number of databases on the instance. To change the parameter:
db2 UPDATE DBM CFG USING NUMDB nn
where nn is a number of databases.
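For example, if the instance hosts the 14 Connections databases plus two additional databases, you might set the value to 16:
db2 UPDATE DBM CFG USING NUMDB 16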
- Before removing (or dropping) a database, stop Connections first to ensure that no database connection is in use; otherwise the user will not be dropped and the database removal will not occur.
- If you run dbWizard.bat but the database wizard does not launch, check whether you have 32-bit DB2 installed. You need to have 64-bit DB2 on a 64-bit system.
(Oracle only) Ensure that the Statement cache size for the data sources on WAS is no larger than 50. A higher value could lead to Out Of Memory errors on the application server instance.
The 4.5 database wizard includes the necessary updates for the 4.0 databases: it updates the Homepage and Metrics database schemas to the latest 4.0 level and then upgrades all databases to the 4.5 schema. It is not necessary to manually apply the 4.0 schema upgrades.
Transfer data from your Connections 4.0 databases, described here as the source databases, to the new 4.0 databases, described here as the target databases. When the data transfer is complete and you validate the new databases, you can update them to version 4.5.
You can continue to use your 4.0 databases until you are ready to move to Connections 4.5. However, any data that you generate after the database update is not migrated to the new environment.
Run the predbxfer40.sql and postdbxfer40.sql scripts from the 4.5 GA build, not the 4.0 build.
To update the databases...
- Using the Connections 4.0 database wizard, create target 4.0 databases on a separate system from your 4.0 source databases. The new databases host your data for migration to the 4.5 deployment.
If the 4.0 database wizard is not on the system that hosts the target databases, copy it from the system hosting Connections 4.0.
- (DB2 on Windows 2008 64-bit.) On Windows 2008, you must perform DB2 administration tasks with full administrator privileges.
- Logged in as the instance owner, open a command prompt and change to the DB2 bin directory. For example: C:\IBM\SQLLIB\BIN.
- Enter the following command: db2cwadmin.bat. This command opens the DB2 command line processor while also setting your DB2 privileges.
- Prepare the target 4.0 databases to accept data from the source 4.0 databases. Remove constraints from the target databases by running the following SQL scripts. Repeat the following procedures for each application that you are migrating.
Notes:
- Run these SQL scripts before transferring data to the target database.
- Run each script from the same directory that you use to create the target database.
- Connections uses the following database libraries which are already on the target database server:
- DB2
- db2jcc.jar
- db2jcc_license_cu.jar
- Oracle
- ojdbc6.jar
- SQL Server
- sqljdbc4.jar
- DB2:
- Log in as the instance owner. The default owner on AIX/Linux is db2inst1. On Windows, the default is db2admin.
- For each application, change to the directory where the relevant SQL file is stored.
- Enter the appropriate commands for each application, as shown in the following table.
Table 49. DB2 commands for removing constraints
Application | Directory | DB2 commands
Activities | /connections.sql/activities/db2 | db2 -tvf predbxfer40.sql
Blogs | /connections.sql/blogs/db2 | db2 -td@ -vf predbxfer40.sql
Bookmarks | /connections.sql/dogear/db2 | db2 -td@ -vf predbxfer40.sql
Communities | /connections.sql/communities/db2 | db2 -td@ -vf predbxfer40.sql
  db2 -td@ -vf calendar-predbxfer40.sql
Files | /connections.sql/files/db2 | db2 -td@ -vf predbxfer40.sql
Forum | /connections.sql/forum/db2 | db2 -tvf predbxfer40.sql
Home page | /connections.sql/homepage/db2 | db2 -tvf predbxfer40.sql
Metrics | /connections.sql/metrics/db2 | db2 -td@ -vf predbxfer40.sql
Mobile | /connections.sql/mobile/db2 | db2 -td@ -vf predbxfer40.sql
Profiles | /connections.sql/profiles/db2 | db2 -tvf predbxfer40.sql
Wikis | /connections.sql/wikis/db2 | db2 -td@ -vf predbxfer40.sql
- Oracle:
- For each application, change to the directory containing the relevant SQL file.
- Enter the following commands:
- sqlplus /NOLOG
- conn system/password@SID
- @SQL_script.sql
...where
- password is the password for the user system.
- SID is the Oracle System Identifier for Connections.
- SQL_script refers to a SQL script from the following table.
Table 50. Oracle commands for removing constraints
Application | Directory | Oracle commands
Activities | /connections.sql/activities/oracle | @predbxfer40.sql
Blogs | /connections.sql/blogs/oracle | @predbxfer40.sql
Bookmarks | /connections.sql/dogear/oracle | @predbxfer40.sql
Communities | /connections.sql/communities/oracle | @predbxfer40.sql
  @calendar-predbxfer40.sql
Files | /connections.sql/files/oracle | @predbxfer40.sql
Forum | /connections.sql/forum/oracle | @predbxfer40.sql
Home page | /connections.sql/homepage/oracle | @predbxfer40.sql
Metrics | /connections.sql/metrics/oracle | @predbxfer40.sql
Mobile | /connections.sql/mobile/oracle | @predbxfer40.sql
Profiles | /connections.sql/profiles/oracle | @predbxfer40.sql
Wikis | /connections.sql/wikis/oracle | @predbxfer40.sql
- SQL Server
- Log in as the database administrator.
- For each application, change to the directory containing the relevant SQL file.
- Enter the commands shown in the following table:
In these commands, password is the password for the SQL Server user sa.
If your database server has multiple SQL Server instances, add the following line as the first parameter to each command in the table: -S sqlserver_server_name\sqlserver_server_instance_name
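For example, assuming a hypothetical server named dbserver01 with an instance named SQLINST1, the Activities command from the following table would become:
sqlcmd -S dbserver01\SQLINST1 -U sa -P password -i "predbxfer40.sql"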
Table 51. SQL Server commands for removing constraints
Application | Directory | SQL Server commands
Activities | /connections.sql/activities/sqlserver | sqlcmd -U sa -P password -i "predbxfer40.sql"
Blogs | /connections.sql/blogs/sqlserver | sqlcmd -U sa -P password -i "predbxfer40.sql"
Bookmarks | /connections.sql/dogear/sqlserver | sqlcmd -U sa -P password -i "predbxfer40.sql"
Communities | /connections.sql/communities/sqlserver | sqlcmd -U sa -P password -i "predbxfer40.sql"
  sqlcmd -U sa -P password -i "calendar-predbxfer40.sql"
Files | /connections.sql/files/sqlserver | sqlcmd -U sa -P password -i "predbxfer40.sql"
Forum | /connections.sql/forum/sqlserver | sqlcmd -U sa -P password -i "predbxfer40.sql"
Home page | /connections.sql/homepage/sqlserver | sqlcmd -U sa -P password -i "predbxfer40.sql"
Metrics | /connections.sql/metrics/sqlserver | sqlcmd -U sa -P password -i "predbxfer40.sql"
Mobile | /connections.sql/mobile/sqlserver | sqlcmd -U sa -P password -i "predbxfer40.sql"
Profiles | /connections.sql/profiles/sqlserver | sqlcmd -U sa -P password -i "predbxfer40.sql"
Wikis | /connections.sql/wikis/sqlserver | sqlcmd -U sa -P password -i "predbxfer40.sql"
- Using the Connections database transfer tool, transfer data to the target databases:
- Create a directory called DBT_HOME on the target database server. This directory temporarily stores transferred data.
- Be sure to use the new version of the dbt.jar file and copy it from the connections_root\ConfigEngine\lib directory to the DBT_HOME directory on the target database server. Notes:
- Connections does not support GNU Java.
- Use the Java Runtime Environment (JRE) under the Wizards directory in the installation media. Update your PATH variable to point to this JRE, using the instructions for your operating system. For example, the relative path to the JRE on the Microsoft Windows operating system might be Wizards\jvm\win\jre. For the AIX or Linux operating systems, the relative path might be Wizards/jvm/aix/jre and Wizards/jvm/linux/jre.
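For example, on Linux you might point the PATH variable at this JRE with a command similar to the following sketch, where /mnt/media is a placeholder for the location of the installation media:
export PATH=/mnt/media/Wizards/jvm/linux/jre/bin:$PATH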
- Create an XML configuration file under the DBT_HOME directory and add the following content. A filled-in sample is shown after the parameter descriptions that follow.
<dbTransfer xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <database role="source" driver="JDBC_driver" url="JDBC_url" userId="database_admin" schema="application_db_schema_name" dbType="dbType"/>
  <database role="target" driver="JDBC_driver" url="JDBC_url" userId="database_admin" schema="application_db_schema_name" dbType="dbType"/>
</dbTransfer>
where
JDBC_driver is one of the following types:
- DB2: com.ibm.db2.jcc.DB2Driver
- Oracle: oracle.jdbc.driver.OracleDriver
- SQL Server: com.microsoft.sqlserver.jdbc.SQLServerDriver
JDBC_url is one of the following types:
- DB2: jdbc:db2://host_IP:port/application_database_name
You can try to increase the speed of the migration process by adapting the URL as follows:
DB2: jdbc:db2://host_IP:port/application_database_name:streamBufferSize=2097152;progressiveStreaming=1;
- Oracle: jdbc:oracle:thin:@host_IP:port:SID
- SQL Server: jdbc:sqlserver://host_IP:port;databaseName=application_database_name
where
- host_IP is the IP address of the database server.
- port is the port number of the server.
- SID is the Oracle System Identifier for Connections.
- application_database_name is one of the following values:
- Activities: OPNACT
- Blogs: BLOGS
- Communities: SNCOMM
- Dogear: DOGEAR
- Files: FILES
- Forum: FORUM
- Home page: HOMEPAGE
- Metrics: METRICS
- Profiles: PEOPLEDB
- Wikis: WIKIS
database_admin is the user ID of the database administrator.
application_db_schema_name is one of the following values:
- Activities: ACTIVITIES
- Blogs: BLOGS
- Communities: SNCOMM and CALENDAR
To migrate Communities data, the dbt command needs to be run twice; once for the SNCOMM schema, and the second time for the CALENDAR schema.
- Dogear: DOGEAR
- Files: FILES
- Forum: FORUM
- Home page: HOMEPAGE
- Metrics: METRICS
- Profiles: EMPINST
- Wikis: WIKIS
dbType is one of the following values:
- DB2: DB2
- Oracle: oracle
- SQL Server: sqlserver2005
The JDBC driver, however, is SQL Server 2008 as indicated in the next step.
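Putting these values together, a configuration file for transferring the Activities database between two DB2 servers might look similar to the following sketch; the host names, port, and administrator ID are placeholders:
<dbTransfer xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <database role="source" driver="com.ibm.db2.jcc.DB2Driver" url="jdbc:db2://sourcehost.example.com:50000/OPNACT" userId="db2inst1" schema="ACTIVITIES" dbType="DB2"/>
  <database role="target" driver="com.ibm.db2.jcc.DB2Driver" url="jdbc:db2://targethost.example.com:50000/OPNACT" userId="db2inst1" schema="ACTIVITIES" dbType="DB2"/>
</dbTransfer>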
- Prepare the JDBC driver of the target databases:
- DB2:
- Use the JDBC driver on the target database server.
- Oracle:
- Use the JDBC driver on the target database server.
- Ensure that the Oracle driver on your system has the same version number as your Oracle database server. Connections does not support the Oracle 10.2.0.1 JDBC driver.
- SQL Server
- Download the SQL Server 2008 JDBC 2.0 driver from the Microsoft website
- To perform the data transfer, run the dbt.jar file:
Remove the lines for the database systems that are not in the deployment.
- Linux:
"JAVA_HOME/bin/java" -cp DBT_HOME/dbt.jar: DB2_HOME/java/db2jcc.jar: DB2_HOME/java/db2jcc_license_cu.jar: ORACLE_HOME/jdbc/lib/ojdbc6.jar: SQLSERVER_DRIVER_PATH: com.ibm.wps.config.db.transfer.CmdLineTransfer -logDir DBT_HOME/logs -xmlfile DBT_HOME/dbt_config_file_name -sourcepassword source_db_password -targetpassword target_db_password
where
- JAVA_HOME is the path to the Java JDK.
- dbt_config_file_name is the name of the XML configuration file you created for the dbt.jar file
- logs is the directory where log files are stored. Create the logs directory before running this file
- DB2_HOME is the path to the DB2 installation directory.
- ORACLE_HOME is the path to the Oracle installation directory.
- SQLSERVER_DRIVER_PATH is the path to the sqljdbc4.jar JDBC driver.
- Windows:
"JAVA_HOME/bin/java" -cp DBT_HOME/dbt.jar; DB2_HOME/java/db2jcc.jar; DB2_HOME/java/db2jcc_license_cu.jar; ORACLE_HOME/jdbc/lib/ojdbc6.jar; SQLSERVER_DRIVER_PATH; com.ibm.wps.config.db.transfer.CmdLineTransfer -logDir DBT_HOME/logs -xmlfile DBT_HOME/dbt_config_file_name -sourcepassword source_db_password -targetpassword target_db_password
where
- JAVA_HOME is the path to the Java JDK.
- dbt_config_file_name is the name of the XML configuration file you created for the dbt.jar file
- logs is the directory where log files are stored. Create the logs directory before running this file
- DB2_HOME is the path to the DB2 installation directory.
- ORACLE_HOME is the path to the Oracle installation directory.
- SQLSERVER_DRIVER_PATH is the path to the sqljdbc4.jar JDBC driver.
When the transfer is complete, you can restart your 4.0 deployment to minimize service downtime.
Data that is generated after restarting the 4.0 environment is not migrated.
- Reapply constraints to the target databases:
- DB2:
- Log in as the instance owner.
- For each application, change to the directory where the relevant SQL file is stored.
- Enter the appropriate commands for each application, as shown in the following table.
Table 52. DB2 commands for reapplying constraints
Application | Directory | DB2 commands
Activities | /connections.sql/activities/db2 | db2 -tvf postdbxfer40.sql
  db2 -td@ -vf clearScheduler.sql
Blogs | /connections.sql/blogs/db2 | db2 -td@ -vf postdbxfer40.sql
Bookmarks | /connections.sql/dogear/db2 | db2 -td@ -vf postdbxfer40.sql
Communities | /connections.sql/communities/db2 | db2 -td@ -vf postdbxfer40.sql
  db2 -td@ -vf clearScheduler.sql
  db2 -td@ -vf calendar-postdbxfer40.sql
Files | /connections.sql/files/db2 | db2 -td@ -vf postdbxfer40.sql
  db2 -td@ -vf clearScheduler.sql
Forum | /connections.sql/forum/db2 | db2 -tvf postdbxfer40.sql
  db2 -td@ -vf clearScheduler.sql
Home page | /connections.sql/homepage/db2 | db2 -tvf postdbxfer40.sql
  db2 -tvf clearScheduler.sql
Metrics | /connections.sql/metrics/db2 | db2 -td@ -vf postdbxfer40.sql
  db2 -td@ -vf clearScheduler.sql
Mobile | /connections.sql/mobile/db2 | db2 -td@ -vf postdbxfer40.sql
  db2 -td@ -vf clearScheduler.sql
Profiles | /connections.sql/profiles/db2 | db2 -tvf postdbxfer40.sql
  db2 -tvf clearScheduler.sql
Wikis | /connections.sql/wikis/db2 | db2 -td@ -vf postdbxfer40.sql
  db2 -td@ -vf clearScheduler.sql
- Oracle:
- For each application, change to the directory containing the relevant SQL file.
- Enter the following commands:
- sqlplus /NOLOG
- conn system/password@SID
- @SQL_script.sql
where
- password is the password for the user system.
- SID is the Oracle System Identifier for Connections.
- SQL_script refers to a SQL script from the following table.
Table 53. Oracle commands for reapplying constraints
Application | Directory | Oracle commands
Activities | /connections.sql/activities/oracle | @postdbxfer40.sql
  @clearScheduler.sql
Blogs | /connections.sql/blogs/oracle | @postdbxfer40.sql
Bookmarks | /connections.sql/dogear/oracle | @postdbxfer40.sql
Communities | /connections.sql/communities/oracle | @postdbxfer40.sql
  @clearScheduler.sql
  @calendar-postdbxfer40.sql
Files | /connections.sql/files/oracle | @postdbxfer40.sql
  @clearScheduler.sql
Forum | /connections.sql/forum/oracle | @postdbxfer40.sql
  @clearScheduler.sql
Home page | /connections.sql/homepage/oracle | @postdbxfer40.sql
  @clearScheduler.sql
Metrics | /connections.sql/metrics/oracle | @postdbxfer40.sql
  @clearScheduler.sql
Mobile | /connections.sql/mobile/oracle | @postdbxfer40.sql
  @clearScheduler.sql
Profiles | /connections.sql/profiles/oracle | @postdbxfer40.sql
  @clearScheduler.sql
Wikis | /connections.sql/wikis/oracle | @postdbxfer40.sql
  @clearScheduler.sql
- SQL Server
- Log in as the database administrator.
- For each application, change to the directory containing the relevant SQL file.
- Enter the commands shown in the following table:
In these commands, password is the password for the SQL Server user sa.
If your database server has multiple SQL Server instances, add the following line as the first parameter to each command in the table:
-S sqlserver_server_name\sqlserver_server_instance_name
Table 54. SQL Server commands for reapplying constraints
Application | Directory | SQL Server commands
Activities | /connections.sql/activities/sqlserver | sqlcmd -U sa -P password -i "postdbxfer40.sql"
  sqlcmd -U sa -P password -i "clearScheduler.sql"
Blogs | /connections.sql/blogs/sqlserver | sqlcmd -U sa -P password -i "postdbxfer40.sql"
Bookmarks | /connections.sql/dogear/sqlserver | sqlcmd -U sa -P password -i "postdbxfer40.sql"
Communities | /connections.sql/communities/sqlserver | sqlcmd -U sa -P password -i "postdbxfer40.sql"
  sqlcmd -U sa -P password -i "clearScheduler.sql"
  sqlcmd -U sa -P password -i "calendar-postdbxfer40.sql"
Files | /connections.sql/files/sqlserver | sqlcmd -U sa -P password -i "postdbxfer40.sql"
  sqlcmd -U sa -P password -i "clearScheduler.sql"
Forum | /connections.sql/forum/sqlserver | sqlcmd -U sa -P password -i "postdbxfer40.sql"
  sqlcmd -U sa -P password -i "clearScheduler.sql"
Home page | /connections.sql/homepage/sqlserver | sqlcmd -U sa -P password -i "postdbxfer40.sql"
  sqlcmd -U sa -P password -i "clearScheduler.sql"
Metrics | /connections.sql/metrics/sqlserver | sqlcmd -U sa -P password -i "postdbxfer40.sql"
  sqlcmd -U sa -P password -i "clearScheduler.sql"
Mobile | /connections.sql/mobile/sqlserver | sqlcmd -U sa -P password -i "postdbxfer40.sql"
  sqlcmd -U sa -P password -i "clearScheduler.sql"
Profiles | /connections.sql/profiles/sqlserver | sqlcmd -U sa -P password -i "postdbxfer40.sql"
  sqlcmd -U sa -P password -i "clearScheduler.sql"
Wikis | /connections.sql/wikis/sqlserver | sqlcmd -U sa -P password -i "postdbxfer40.sql"
  sqlcmd -U sa -P password -i "clearScheduler.sql"
- (Profiles only.) Run the following commands to update the database sequence for DB2 or Oracle target databases:
- DB2
Run the following commands on the 4.0 source database:
SELECT NEXT VALUE FOR EMPINST.CHG_EMP_DRAFT_SEQ AS CHG_EMP_DRAFT_SEQ FROM SYSIBM.SYSDUMMY1;
SELECT NEXT VALUE FOR EMPINST.EMP_DRAFT_SEQ AS EMP_DRAFT_SEQ FROM SYSIBM.SYSDUMMY1;
SELECT NEXT VALUE FOR EMPINST.EXT_DRAFT_SEQ AS EXT_DRAFT_SEQ FROM SYSIBM.SYSDUMMY1;
Run the following commands on the 4.0 target database:
ALTER SEQUENCE EMPINST.CHG_EMP_DRAFT_SEQ RESTART WITH query_result;
ALTER SEQUENCE EMPINST.EMP_DRAFT_SEQ RESTART WITH query_result;
ALTER SEQUENCE EMPINST.EXT_DRAFT_SEQ RESTART WITH query_result;
- Oracle
Run the following commands on the 4.0 source database:
SELECT EMPINST.EXT_DRAFT_SEQ.NEXTVAL AS EXT_DRAFT_SEQ FROM DUAL;
SELECT EMPINST.EMP_DRAFT_SEQ.NEXTVAL AS EMP_DRAFT_SEQ FROM DUAL;
SELECT EMPINST.CHG_EMP_DRAFT_SEQ1.NEXTVAL AS CHG_EMP_DRAFT_SEQ1 FROM DUAL;
SELECT EMPINST.CHG_EMP_DRAFT_SEQ2.NEXTVAL AS CHG_EMP_DRAFT_SEQ2 FROM DUAL;
Run the following commands on the 4.0 target database:
DROP SEQUENCE EMPINST.EXT_DRAFT_SEQ;
CREATE SEQUENCE EMPINST.EXT_DRAFT_SEQ START WITH query_result;
DROP SEQUENCE EMPINST.EMP_DRAFT_SEQ;
CREATE SEQUENCE EMPINST.EMP_DRAFT_SEQ START WITH query_result;
DROP SEQUENCE EMPINST.CHG_EMP_DRAFT_SEQ1;
CREATE SEQUENCE EMPINST.CHG_EMP_DRAFT_SEQ1 START WITH query_result;
DROP SEQUENCE EMPINST.CHG_EMP_DRAFT_SEQ2;
CREATE SEQUENCE EMPINST.CHG_EMP_DRAFT_SEQ2 START WITH query_result;
where query_result is the result of the corresponding SELECT command that you ran on the 4.0 database.
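For example (the value 51823 is purely illustrative), if the SELECT for EMPINST.EMP_DRAFT_SEQ on the source database returned 51823, you would run the following on the DB2 target database:
ALTER SEQUENCE EMPINST.EMP_DRAFT_SEQ RESTART WITH 51823;
or the following on the Oracle target database:
DROP SEQUENCE EMPINST.EMP_DRAFT_SEQ;
CREATE SEQUENCE EMPINST.EMP_DRAFT_SEQ START WITH 51823;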
- (Metrics only.) Run the following commands to update the database sequence for DB2, Oracle, or SQL Server target databases:
- DB2
Run the following commands on the 4.0 source database:
SELECT NEXT VALUE FOR METRICS.ID_VALUES AS ID_VALUES_SEQ FROM SYSIBM.SYSDUMMY1;
SELECT MAX(ID) FROM METRICS.F_TRX_EVENTS;
Run the following commands on the 4.0 target database:
ALTER SEQUENCE METRICS.ID_VALUES RESTART WITH query_result;
where query_result is the result of the corresponding SELECT command that you ran on the 4.0 database.
ALTER TABLE METRICS.F_TRX_EVENTS ALTER COLUMN ID RESTART WITH query_result;
where query_result is the result of the corresponding SELECT command that you ran on the 4.0 database.
- Oracle
Run the following command on the 4.0 source database:
SELECT METRICS.ID_VALUES.NEXTVAL AS ID_VALUES_SEQ FROM DUAL;
Run the following commands on the 4.0 target database:
DROP SEQUENCE METRICS.ID_VALUES;
CREATE SEQUENCE "METRICS"."ID_VALUES" START WITH query_result INCREMENT BY 1 NOMAXVALUE NOCYCLE CACHE 20;
GRANT SELECT, ALTER ON "METRICS"."ID_VALUES" TO METRICSUSER_ROLE;
where query_result is the result of the corresponding SELECT command that you ran on the 4.0 database.
- SQL Server
Run the following command on the 4.0 source database:
exec METRICS.GETNEWSEQVAL_ID_VALUES;
Run the following commands on the 4.0 target database:
DROP TABLE [METRICS].[ID_VALUES];
CREATE TABLE [METRICS].[ID_VALUES]
(
[SEQID] [BIGINT] IDENTITY(query_result,1) NOT NULL,
[SEQVAL] [VARCHAR](1) NULL
);
ALTER TABLE [METRICS].[ID_VALUES] ADD CONSTRAINT [ID_VALUES_PK] PRIMARY KEY ([SEQID]);
GRANT DELETE,INSERT,SELECT,UPDATE ON "METRICS"."ID_VALUES" TO METRICSUSER;
where query_result is the result of the corresponding EXEC command that you ran on the 4.0 database.
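For example (the value 98765 is purely illustrative), if the EXEC command returned 98765, the CREATE TABLE statement on the target database would read:
CREATE TABLE [METRICS].[ID_VALUES] ( [SEQID] [BIGINT] IDENTITY(98765,1) NOT NULL, [SEQVAL] [VARCHAR](1) NULL );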
Check that all the databases are working correctly. If you find errors, resolve the problem and repeat this task.
Update the new databases to Connections version 4.5.
For more information, see the Updating 4.0 databases topic.
(DB2 for Linux on System z only.) To improve database performance, enable the NO FILE SYSTEM CACHING option.
For more information, see the Enabling NO FILE SYSTEM CACHING for DB2 on System z topic.
Migrating 4.0 data in-place
Prepare to update your Connections 4.0 databases to version 4.5.
Updating databases in-place overwrites your existing databases. Ensure that you have backed up your databases before beginning the update. If the update fails, you can minimize downtime by restoring the backup.
When you use the in-place strategy to update the databases to Connections 4.5, you can no longer use your Connections 4.0 deployment. Moreover, this strategy increases the system downtime because you must stop the 4.0 deployment.
The in-place strategy is a good option when you have limited system resources because it minimizes the system resources required. Instead of migrating your data, you simply update the Connections 4.0 databases. To keep your 4.0 databases, use the side-by-side data migration strategy instead.
Update 4.0 databases
Update Connections 4.0 databases to version 4.5 in an existing database environment.
There are two methods for updating a database: using the Connections database wizard or the SQL scripts provided with the product.
- Database wizard
- The wizard is a faster procedure and also validates the update.
For more information, see the Updating 4.0 databases with the wizard topic.
- SQL scripts
- By using the SQL scripts provided with the product, you can examine all the commands applied to your database.
For more information, see the Updating 4.0 databases manually topic.
Perform the tasks that apply to the deployment:
Update 4.0 databases with the wizard
Update your Connections 4.0 databases by using the database wizard.
Before applying updates, back up your databases.
For more information, refer to Back up Connections.
Do not use the database wizard on a system that does not have UTF-8 encoding. Running the database wizard on a system where the default encoding is not UTF-8 corrupts non-ASCII characters. To discover the default encoding on the system where you intend to run the wizard...
- Download the EncodingCheck.class file from the Non-ASCII characters might not be displayed correctly technote.
- Change to the directory where the EncodingCheck.class file is located.
- Type java -cp . EncodingCheck and press Enter.
The output of the command indicates the default encoding on the system:
charset: charset encoding: encoding lang: lang region: region
If the charset value is not UTF-8, update your databases manually instead of using the wizard.
For more information, see the Update 4.0 databases manually topic.
If you use different database instances for the Home page and Profiles database, run the wizard first on the instance that hosts the Home page database.
You should update the Home page database manually. By using the manual method, you can back up the Home page database after each step. This precaution is useful because updating the Home page database, if it is large, can take considerably more time than the other databases. If you choose this option, update the Home page database before updating any other database. After updating the Home page database, you can use the wizard to update the remaining databases.
For more information about manually updating databases, see the Updating 4.0 databases manually topic.
(SQL Server only) Ensure that the name of the Home page database instance is not null. If it is null, rename it to HOMEPAGE.
(DB2 only) If you use only one database instance and if that instance includes other databases besides Connections, configure the numdb parameter to match the total number of databases on the instance.
For more information, go to the numdb webpage in the DB2 information center. Notes:
- If you migrated from Connections 4.0, the numdb parameter was set to 14, the maximum number of Connections 4.0 databases. If the instance has additional databases, increase the value of the numdb parameter to match the total number of databases on the instance. To change the parameter:
db2 UPDATE DBM CFG USING NUMDB nn
where nn is a number of databases.
- Before removing (or dropping) a database, stop Connections first to ensure that no database connection is in use; otherwise the user will not be dropped and the database removal will not occur.
- If you run dbWizard.bat but the database wizard does not launch, check whether you have 32-bit DB2 installed. You need to have 64-bit DB2 on a 64-bit system.
(Oracle only) Ensure that the Statement cache size for the data sources on WAS is no larger than 50. A higher value could lead to Out Of Memory errors on the application server instance.
(SQL Server only) Ensure that Named Pipes is enabled in the SQL Server Network Configuration for all instances.
For more information, refer to your SQL Server documentation.
Follow these steps to update your Connections 4.0 databases to Connections 4.5 databases.
As an alternative to using the wizard, you can update the databases manually.
For more information, see the Updating 4.0 databases manually topic.
To update your databases with the database wizard...
- Stop the IBM WAS instances that are hosting Connections 4.0.
- If the database servers and Connections are on different systems, copy the 4.5 database wizard to the system that hosts your Connections databases.
- Log in as the database administrator.
- (DB2 on Windows 2008 64-bit.) On Windows 2008, you must perform DB2 administration tasks with full administrator privileges.
- Logged in as the instance owner, open a command prompt and change to the DB2 bin directory. For example: C:\IBM\SQLLIB\BIN.
- Enter the following command: db2cwadmin.bat. This command opens the DB2 command line processor while also setting your DB2 privileges.
- Change to the directory where the database wizard is stored. The default location is the Wizards directory on the installation media.
(AIX/Linux only) Ensure that the database administrator has all permissions for the Connections Wizards directory.
- Enter the following command and then click Next:
- AIX or Linux: ./dbWizard.sh
- Windows: dbWizard.bat
- Select the Upgrade task and click Next.
- Specify the database type, database instance, and the installation location, and then click Next. The wizard detects the current database version.
- Select the databases to update and click Next:
- Activities: OPNACT
- Blogs: BLOGS
- Bookmarks: DOGEAR
- Communities: SNCOMM
- Files: FILES
- Forum: FORUM
- Home page: HOMEPAGE (In Connections 4.0 Code Refresh 2, you must run an SQL script to upgrade the Home page database. If you have not run this script, the database wizard runs it for you.)
- Metrics: METRICS (Connections 4.5 uses the same version of Cognos as 4.0.)
- Profiles: PEOPLEDB
- Wikis: WIKIS
The database wizard disables the selection of any applications that were not released in Connections 4.0. If any application databases were created in an earlier release of Connections than 4.0, update that database by using the Connections 4.0 database wizard.
- (SQL Server only) Enter the location of the data files for the Home Page application.
- (SQL Server only) Enter the password for the Home Page database user.
- (Oracle only) Enter the password for the Communities database user.
- Enter the connection information for the Profiles database user.
- Enter the Oracle SID in the Database name field.
- This information is used during the migration of the Home page database to copy data from the Profiles database.
- If you use different instances to host the Home page and Profiles databases, this step is displayed only when you run the wizard on the instance that hosts the Home page database.
- If you are updating the Home Page database, provide the port, administrator ID, and administrator password for the database.
- Review the Pre Configuration Task Summary to ensure that the values you entered are correct. To change any values, click Back to edit the value. To continue, click Update.
Click Show detailed database commands to display the commands. To save the commands, ensure that the user who is running the database wizard has write access to the destination folder. Click Execute to run the commands.
- After the update task finishes, review the Post Configuration Task Summary. Click Finish to exit the wizard.
- Run the database wizard again to create databases for Connections Content Manager.
For more information, see the Creating databases topic.
(DB2 for Linux on System z only.) To improve database performance, enable the NO FILE SYSTEM CACHING option.
For more information, see the Enabling NO FILE SYSTEM CACHING for DB2 on System z topic.
If an error occurs when upgrading the database, check the log in the <db_user_home>/lcWizard/log/dbWizard folder to determine the cause.
Update 4.0 databases manually
Manually update Connections 4.0 databases to version 4.5 in an existing IBM WAS and database environment. Complete the task that is applicable to the deployment:
Update 4.0 DB2 databases manually
Manually update Connections 4.0 databases to version 4.5 in an existing IBM WAS and DB2 database environment.
Before applying updates, back up your databases.
For more information, see Back up Connections.
Make sure to configure the DB2 databases for unicode so that DB2 tools like export and import do not corrupt unicode data.
Ensure that you have installed and configured all supporting software for version 4.5.
(DB2 only) If you use only one database instance and if that instance includes other databases besides Connections, configure the numdb parameter to match the total number of databases on the instance.
For more information, go to the numdb webpage in the DB2 information center. Notes:
- If you migrated from Connections 4.0, the numdb parameter was set to 14, the maximum number of Connections 4.0 databases. If the instance has additional databases, increase the value of the numdb parameter to match the total number of databases on the instance. To change the parameter:
db2 UPDATE DBM CFG USING NUMDB nn
where nn is a number of databases.
- Before removing (or dropping) a database, stop Connections first to ensure that no database connection is in use; otherwise the user will not be dropped and the database removal will not occur.
- If you run dbWizard.bat but the database wizard does not launch, check whether you have 32-bit DB2 installed. You need to have 64-bit DB2 on a 64-bit system.
This topic describes how to manually update Connections version 4.0 databases to version 4.5. Use this procedure if you want an alternative to using the database wizard to update your databases. Notes:
- This topic applies to all releases of version 4.0
- Use the Java Runtime Environment (JRE) under the Wizards directory in the installation media. Update your PATH variable to point to this JRE, using the instructions for your operating system. For example, the relative path to the JRE on the Microsoft Windows operating system might be Wizards\jvm\win\jre. For the AIX or Linux operating systems, the relative path might be Wizards/jvm/aix/jre and Wizards/jvm/linux/jre.
- Connections does not support GNU Java.
- You need to use a database administrator ID to run the Java migration utilities described in this task.
- After running each command, examine the output of the command for error messages. If you find errors, resolve them before continuing with the update process.
- To improve readability, some commands and file paths in this topic are displayed on separate lines. Ignore these formatting conventions when entering the commands.
To update databases manually...
- Log in to the WAS admin console on your Deployment Manager.
- Go to Applications > Application types > WebSphere enterprise Applications.
- Stop all Connections applications.
- (DB2 on Windows 2008 64-bit.) On Windows 2008, you must perform DB2 administration tasks with full administrator privileges.
- Logged in as the instance owner, open a command prompt and change to the DB2 bin directory. For example: C:\IBM\SQLLIB\BIN.
- Enter the following command: db2cwadmin.bat. This command opens the DB2 command line processor while also setting your DB2 privileges.
- Log in as the database administrator.
- For each application, change to the directory where the SQL scripts are stored and then enter the commands for that application.
To capture the output of each command to a log file, append the following parameter to each command: >> /file_path/db_application.log
where file_path is the full path to the log file and application is the name of the application. For example:
db2 -tvf createDb.sql >> /home/db2inst1/db_activities.log
Ensure that you have write permissions for the directories and log files.
- Activities: Wizards/connections.sql/activities/db2
- db2 -td@ -vf upgrade-40-45.sql
- Blogs: Wizards/connections.sql/blogs/db2
- db2 -td@ -vf upgrade-40-45.sql
- Bookmarks: Wizards/connections.sql/dogear/db2
- db2 -td@ -vf upgrade-40-45.sql
- Communities: Wizards/connections.sql/communities/db2
- db2 -td@ -vf upgrade-40-45.sql
- db2 -td@ -vf calendar-upgrade-40-45.sql
- Files: Wizards/connections.sql/files/db2
- db2 -td@ -vf upgrade-40-45.sql
- Forum: Wizards/connections.sql/forum/db2
- db2 -td@ -vf upgrade-40-45.sql
- db2 -td@ -vf appGrants.sql
- Home page: Wizards/connections.sql/homepage/db2
- If you are on Connections 4.0 or 4.0 CR1, first run this script to upgrade the database to 4.0 CR2: db2 -tvf upgrade-40-40CR2.sql
- Then, from 4.0 CR2 or later, run this script to upgrade to 4.5: db2 -tvf upgrade-40-45.sql
From a command prompt, change to the Wizards directory and enter the following text as a command on a single line:
AIX or Linux:
jvm/OS/jre/bin/java -Dfile.encoding=UTF-8 -Xmx1024m -classpath jdbc_library_location/db2jcc.jar: jdbc_library_location/db2jcc_license_cu.jar: lib/lc.dbmigration.default.jar: lib/commons-logging-1.0.4.jar: lib/news.common.jar: lib/news.migrate.jar com.ibm.lconn.news.migration.next45.NewsMigrationFrom40To45 -dburl jdbc:db2://dbHost:dbPort/HOMEPAGE -dbuser dbUser -dbpassword dbPassword > java.out.log 2>&1
where OS is the operating system on which the database is hosted. The heap size must be at least 1024 MB.
Windows:
jvm\win\jre\bin\java -Dfile.encoding=UTF-8 -Xmx1024m -classpath jdbc_library_location\db2jcc.jar; jdbc_library_location\db2jcc_license_cu.jar; lib\lc.dbmigration.default.jar; lib\commons-logging-1.0.4.jar; lib\news.common.jar; lib\news.migrate.jar com.ibm.lconn.news.migration.next45.NewsMigrationFrom40To45 -dburl jdbc:db2://dbHost:dbPort/HOMEPAGE -dbuser dbUser -dbpassword dbPassword > java.out.log 2>&1
where
- jdbc_library_location is the location of your JDBC driver
- dbHost is the name of the system hosting your database
- dbPort is the communications port of the database
- dbUser is the database administrator ID
- dbPassword is the administrator password
- db2 -tvf appGrants.sql
- db2 -tvf post-java-migration-40-45.sql
- Metrics: Wizards/connections.sql/metrics/db2
- If you are on Connections 4.0, 4.0 CR1, or 4.0 CR2, first run this script to upgrade the database to 4.0 CR3: db2 -tvf upgrade-40-40CR3.sql
- Then, from 4.0 CR3 or later, run this script to upgrade to 4.5: db2 -td@ -vf upgrade-40-45.sql
- Mobile: Wizards/connections.sql/mobile/db2
- db2 -td@ -vf upgrade-40-45.sql
- Profiles: Wizards/connections.sql/profiles/db2
- db2 -tvf upgrade-40-45.sql
- Wikis: Wizards/connections.sql/wikis/db2
- db2 -td@ -vf upgrade-40-45.sql
- Connections Content Manager: Wizards/connections.sql/libraries.gcd/db2
- db2 -td@ -vf createDb.sql
- db2 -td@ -vf appGrants.sql
Then change to Wizards/connections.sql/libraries.os/db2 and run:
- db2 -td@ -vf createDb.sql
- db2 -td@ -vf appGrants.sql
Check that all the databases are working correctly.
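As a quick sanity check, you can connect to each updated database and then disconnect; for example (database name and credentials are placeholders):
db2 connect to HOMEPAGE user dbUser using dbPassword
db2 connect reset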
(DB2 for Linux on System z only.) To improve database performance, enable the NO FILE SYSTEM CACHING option.
For more information, see the Enabling NO FILE SYSTEM CACHING for DB2 on System z topic.
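As an illustration only, the option is enabled per tablespace with an ALTER TABLESPACE statement; the tablespace name below is a placeholder, and the linked topic describes the actual procedure for the Connections tablespaces:
db2 "ALTER TABLESPACE USERSPACE1 NO FILE SYSTEM CACHING"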
Update 4.0 Oracle databases manually
Manually update Connections 4.0 databases to version 4.5 in an existing IBM WAS and Oracle database environment.
Before applying updates, back up your databases.
For more information, see Back up Connections.
Ensure that you have installed and configured all supporting software for version 4.5.
(Oracle only) Ensure that the Statement cache size for the data sources on WAS is no larger than 50. A higher value could lead to Out Of Memory errors on the application server instance.
(Oracle only) Connections databases use SMALLFILE tablespaces, which have a size limit of 2^22 blocks. With 8 KB blocks, this limit is approximately 32 GB. If you anticipate needing more space than this, add additional tablespace files to individual databases. For detailed information, refer to your Oracle documentation.
This topic describes how to manually update Connections version 4.0 databases to version 4.5. Use this procedure if you want an alternative to using the database wizard to update your databases. Notes:
- This topic applies to all releases of version 4.0.
- Use the Java Runtime Environment (JRE) under the Wizards directory in the installation media. Update your PATH variable to point to this JRE, using the instructions for your operating system. For example, the relative path to the JRE on the Microsoft Windows operating system might be Wizards\jvm\win\jre. For the AIX and Linux operating systems, the relative paths might be Wizards/jvm/aix/jre and Wizards/jvm/linux/jre, respectively.
- Connections does not support GNU Java.
- You need to use a database administrator ID to run the Java migration utilities described in this task.
- After running each command, examine the output of the command for error messages. If you find errors, resolve them before continuing with the update process.
- To improve readability, some commands and file paths in this topic are displayed on separate lines. Ignore these formatting conventions when entering the commands.
To update databases manually...
- Log in to the WAS admin console on your Deployment Manager.
- Go to Applications > Application types > WebSphere enterprise Applications.
- Stop all Connections applications.
- Ensure that the Oracle driver on your system has the same version number as your Oracle database server.
- Change to the directory containing the scripts.
- For each application, run the appropriate scripts by entering them in SQL*Plus as the database administrator:
sqlplus / as sysdba
To capture the output of each command to a log file, run the following commands before starting this task:
sql> spool on
sql> spool output_file
where output_file is the full path and name of the file where the output is captured.
When you have completed this task, run the following command: sql> spool off
To manually create the application database tables...
- Activities: Wizards/connections.sql/activities/oracle
- @upgrade-40-45.sql
- Blogs: Wizards/connections.sql/blogs/oracle
- @upgrade-40-45.sql
- Bookmarks: Wizards/connections.sql/dogear/oracle
- @upgrade-40-45.sql
- Communities: Wizards/connections.sql/communities/oracle
- @upgrade-40-45.sql
- @calendar-upgrade-40-45.sql password
- Files: Wizards/connections.sql/files/oracle
- @upgrade-40-45.sql
- Forum: Wizards/connections.sql/forum/oracle
- @upgrade-40-45.sql
- @appGrants.sql
- Home page: Wizards/connections.sql/homepage/oracle
- If you are on Connections 4.0 or 4.0 CR1, first run this script to upgrade the database to 4.0 CR2: @upgrade-40-40CR2.sql
- Then, from 4.0 CR2 or later, run this script to upgrade to 4.5: @upgrade-40-45.sql
From a command prompt, change to the Wizards directory and enter the following text as a single command:
AIX or Linux:
jvm/OS/jre/bin/java -Dfile.encoding=UTF-8 -Xmx1024m -classpath jdbc_library_location/ojdbc6.jar: lib/lc.dbmigration.default.jar: lib/commons-logging-1.0.4.jar: lib/news.common.jar: lib/news.migrate.jar com.ibm.lconn.news.migration.next45.NewsMigrationFrom40To45 -dburl jdbc:oracle:thin:@//dbHost:dbPort/ServiceName | -dburl jdbc:oracle:thin:@dbHost:dbPort:SID -dbuser dbUser -dbpassword dbPassword > java.out.log 2>&1
where OS is the operating system on which the database is hosted. The heap size must be at least 1024 MB.
Windows:
jvm\win\jre\bin\java -Dfile.encoding=UTF-8 -Xmx1024m -classpath jdbc_library_location\ojdbc6.jar; lib\lc.dbmigration.default.jar; lib\commons-logging-1.0.4.jar; lib\news.common.jar; lib\news.migrate.jar com.ibm.lconn.news.migration.next45.NewsMigrationFrom40To45 -dburl jdbc:oracle:thin:@//dbHost:dbPort/ServiceName | -dburl jdbc:oracle:thin:@dbHost:dbPort:SID -dbuser dbUser -dbpassword dbPassword > java.out.log 2>&1
where
- jdbc_library_location is the location of your JDBC driver
- dbHost is the name of the system hosting your database
- dbPort is the communications port for the database
- dbUser is the database administrator ID
- dbPassword is the administrator password
Enter the appropriate dburl parameter depending on whether you are using SERVICE_NAME or SID.
- @appGrants.sql
- @post-java-migration-40-45.sql
- Metrics: Wizards/connections.sql/metrics/oracle
- @upgrade-40-45.sql
- Mobile: Wizards/connections.sql/mobile/oracle
- @upgrade-40-45.sql
- Profiles: Wizards/connections.sql/profiles/oracle
- @upgrade-40-45.sql
- Wikis: Wizards/connections.sql/wikis/oracle
- @upgrade-40-45.sql
- Connections Content Manager: Wizards/connections.sql/libraries.gcd/oracle
- @createDb.sql password
- @appGrants.sql
Then change to Wizards/connections.sql/libraries.os/oracle and run:
- @createDb.sql password
- @appGrants.sql
Check that all the databases are working correctly.
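For reference, the two -dburl forms used in the Home page Java migration command (SERVICE_NAME or SID) might look like the following; the host name, port, service name, and SID are illustrative:
-dburl jdbc:oracle:thin:@//dbhost.example.com:1521/CONNSVC
-dburl jdbc:oracle:thin:@dbhost.example.com:1521:CONN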
Update 4.0 SQL Server databases manually
Manually update Connections 4.0 databases to version 4.5 in an existing IBM WAS and Microsoft SQL Server database environment.
Before applying updates, back up your databases.
For more information, see Back up Connections.
Ensure that you have installed and configured all supporting software for version 4.5.
Ensure that the name of the Home page database instance is not null. If it is null, rename it to HOMEPAGE.
Ensure that Named Pipes is enabled in the SQL Server Network Configuration for all instances.
For more information, refer to your SQL Server documentation.
This topic describes how to manually update Connections version 4.0 databases to version 4.5. Use this procedure if you want an alternative to using the database wizard to update your databases. Notes:
- This topic applies to all releases of version 4.0.
- Use the Java Runtime Environment (JRE) under the Wizards directory in the installation media. Update your PATH variable to point to this JRE, using the instructions for your operating system. For example, the relative path to the JRE on the Microsoft Windows operating system might be Wizards\jvm\win\jre. For the AIX and Linux operating systems, the relative paths might be Wizards/jvm/aix/jre and Wizards/jvm/linux/jre, respectively.
- Connections does not support GNU Java.
- You need to use a database administrator ID to run the Java migration utilities described in this task.
- After running each command, examine the output of the command for error messages. If you find errors, resolve them before continuing with the update process.
- To improve readability, some commands and file paths in this topic are displayed on separate lines. Ignore these formatting conventions when entering the commands.
To update databases manually...
- Log in to the WAS admin console on your Deployment Manager.
- Go to Applications > Application types > WebSphere enterprise Applications.
- Stop all Connections applications.
- Log in as the database administrator and change to the directory containing the scripts. The relative path is shown in the step for each application.
- For each application, run the appropriate scripts by entering the commands shown in the following list. In these commands, dbPassword is the password for the SQL Server user named sa. If your database server has multiple SQL Server instances installed, add the following text as the first parameter to each command:
-S sqlserver_server_name\sqlserver_server_instance_name
where
- sqlserver_server_name is the name of your SQL Server database server
- sqlserver_server_instance_name is the name of your current instance
To capture the output of each command to a log file, append the following parameter to each command:
>> \file_path\db_application.log
where file_path is the full path to the log file and application is the name of the application. For example:
sqlcmd >> \home\admin_user\lc_logs\db_activities.log
where sqlcmd is a command with parameters and admin_user is the logged-in user. Ensure that you have write permissions for the directories and log files.
- Activities: Wizards\connections.sql\activities\sqlserver
- sqlcmd -U dbUser -P dbPassword -i upgrade-40-45.sql
where
- dbUser is the database user ID
- dbPassword is the administrator password
This script generates a message that states Changing any part of an object name could break scripts and stored procedures. You can safely ignore the message.
- Blogs: Wizards\connections.sql\blogs\sqlserver
- sqlcmd -U dbUser -P dbPassword -i upgrade-40-45.sql
- Bookmarks: Wizards\connections.sql\dogear\sqlserver
- sqlcmd -U dbUser -P dbPassword -i upgrade-40-45.sql
- Communities: Wizards\connections.sql\communities\sqlserver
- sqlcmd -U dbUser -P dbPassword -i upgrade-40-45.sql
- sqlcmd -U dbUser -P dbPassword -i calendar-upgrade-40-45.sql
- Files: Wizards\connections.sql\files\sqlserver
- sqlcmd -U dbUser -P dbPassword -i upgrade-40-45.sql
- Forum: Wizards\connections.sql\forum\sqlserver
- sqlcmd -U dbUser -P dbPassword -i upgrade-40-45.sql
- sqlcmd -U dbUser -P dbPassword -i appGrants.sql
- Home page: Wizards\connections.sql\homepage\sqlserver
- If you are on Connections 4.0 or 4.0 CR1, first run this script to upgrade the database to 4.0 CR2: sqlcmd -U dbUser -P dbPassword -i upgrade-40-40CR2.sql
- Then, from 4.0 CR2 or later, run this script to upgrade to 4.5: sqlcmd -U dbUser -P dbPassword -i upgrade-40-45.sql
From a command prompt, change to the Wizards directory and enter the following text as a single command:
jvm\win\jre\bin\java -Dfile.encoding=UTF-8 -Xmx1024m -classpath jdbc_library_location\sqljdbc4.jar; lib\lc.dbmigration.default.jar; lib\commons-logging-1.0.4.jar; lib\news.common.jar; lib\news.migrate.jar com.ibm.lconn.news.migration.next45.NewsMigrationFrom40To45 -dburl jdbc:sqlserver://dbHost:dbPort;databaseName=HOMEPAGE -dbuser dbUser -dbpassword dbPassword > java.out.log 2>&1
- sqlcmd -U dbUser -P dbPassword -i appGrants.sql
- sqlcmd -U dbUser -P dbPassword -i post-java-migration-40-45.sql
- Metrics: Wizards\connections.sql\metrics\sqlserver
- If you are on Connections 4.0, 4.0 CR1, or 4.0 CR2, first run this script to upgrade the database to 4.0 CR3: sqlcmd -U dbUser -P dbPassword -i upgrade-40-40CR3.sql
- Then, from 4.0 CR3 or later, run this script to upgrade to 4.5: sqlcmd -U dbUser -P dbPassword -i upgrade-40-45.sql
- Mobile: Wizards\connections.sql\mobile\sqlserver
- sqlcmd -U dbUser -P dbPassword -i upgrade-40-45.sql
- Profiles: Wizards\connections.sql\profiles\sqlserver
- sqlcmd -U dbUser -P dbPassword -i upgrade-40-45.sql
This script generates a message that states Changing any part of an object name could break scripts and stored procedures. You can safely ignore the message.
- Wikis: Wizards\connections.sql\wikis\sqlserver
- sqlcmd -U dbUser -P dbPassword -i upgrade-40-45.sql
- Connections Content Manager: Wizards\connections.sql\libraries.gcd\sqlserver
- sqlcmd -U dbUser -P dbPassword -i "createDb.sql" -v filepath="path_to_db" password="password_for_FNGCDUSER"
- sqlcmd -U dbUser -P dbPassword -i appGrants.sql
Then change to Wizards\connections.sql\libraries.os\sqlserver and run:
- sqlcmd -U dbUser -P dbPassword -i "createDb.sql" -v filepath="path_to_db" password="password_for_FNOSUSER"
- sqlcmd -U dbUser -P dbPassword -i appGrants.sql
Check that all the databases are working correctly.
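For reference, a command that targets a named instance with the -S parameter, and a quick connectivity check, might look like the following (server, instance, and credential values are placeholders):
sqlcmd -S dbserver\CONNINST -U dbUser -P dbPassword -i upgrade-40-45.sql
sqlcmd -U dbUser -P dbPassword -Q "SELECT name FROM sys.databases"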
Restore databases
Restore your databases if you need to roll back a failed update.
Use the database utilities provided by your database vendor to roll back your databases.
For more information, refer to the vendor's product documentation.
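For example, on DB2 a database restore might look like the following (the database name, backup path, and timestamp are placeholders); the equivalent Oracle and SQL Server restore procedures are described in those products' documentation:
db2 RESTORE DATABASE HOMEPAGE FROM /backups TAKEN AT 20130501120000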
Uninstalling a deployment before migration
Uninstall a deployment of Connections before migrating to a later version.
If you are installing Connections 4.5 side-by-side with your current deployment, you do not need to complete this task. However, if you are installing 4.5 on the same systems as 4.0, IBM recommends that you uninstall the 4.0 deployment. You can migrate your 4.0 data to 4.5.
For more information, see the Migrating to Connections 4.5 topic.
Back up your current deployment.
For more information, see the Backing up Connections topic.
Notes:
- Deleting Connections data files makes the original deployment unrecoverable. If you plan to reinstall Connections and use your old data, do not delete the data files.
- You do not need to remove Installation Manager files. These files might be associated with other IBM applications.
To uninstall Connections...
- Start the IBM WAS Deployment Manager.
- Stop all instances of WAS, including node agents, in the deployment.
- To uninstall a deployment of Connections, start the Installation Manager and then click Uninstall.
- Select the Connections package group and click Next.
- Click Uninstall to begin uninstalling.
- When the process is complete, restart the Deployment Manager.
- Restart all instances of WAS, including node agents.
- Synchronize the nodes.
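You can synchronize from the dmgr console (System administration > Nodes > Full Resynchronize). As a sketch of a scripted alternative, you can invoke the NodeSync MBean with wsadmin; the node name and credentials below are placeholders:
wsadmin.sh -lang jython -username wasadmin -password password -c "AdminControl.invoke(AdminControl.completeObjectName('type=NodeSync,node=Node01,*'), 'sync')"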
- To check the details of the procedure, open the log files in...
connections_root/logs
Each Connections application that you uninstalled has a log file that uses the following naming format:
application_nameUninstall.log
- If you plan to install Connections 4.5 on the same server as the 4.0 deployment, remove the following files: Notes:
- Except where noted, remove these files from the system that hosts the Deployment Manager.
- Because some of these files might be used by other programs, it is possible that you are not allowed to remove all of the following files.
- You do not need to remove Installation Manager files. These files might be associated with other IBM applications.
- Connections installation files: connections_root
If you did not install Connections in the default directory, delete the directory where you installed the product.
- Connections shared and local content stores. Before removing this data, migrate your 4.0 content stores.
For more information see the Content store migration topic.
- Connections configuration files: Delete the profile_root/config/cells/cell_name/LotusConnections-config directory, where cell_name is the name of your WAS cell.
- If it is present, delete the registry.xml file from the profile_root/config/cells/cell_name directory.
- Delete all .py files from the /opt/IBM/WebSphere/AppServer/profiles/profile_name/bin directory on the deployment manager server.
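On AIX or Linux, for example, these deletions might look like the following; the connections_root path, cell name, and profile name are placeholders for your own values:
rm -rf /opt/IBM/Connections
rm -rf profile_root/config/cells/cell_name/LotusConnections-config
rm -f profile_root/config/cells/cell_name/registry.xml
rm -f /opt/IBM/WebSphere/AppServer/profiles/profile_name/bin/*.py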
Results
You uninstalled Connections.
If you plan to reinstall Connections at some point, refer to Upgrading and Migrating.
Install Connections 4.5 for migration
You can see all the part numbers for this release on the Download Connections 4.5 page.
To install Connections, run the Installation Manager wizard on the system where the Deployment Manager is installed.
This installation is not supported for IBM i. To install Connections on IBM i, refer to the topics Install in console mode or Install silently.
If an error occurs during installation, Installation Manager cancels the installation and rolls back the installation files. Installation errors are usually caused by environment problems such as insufficient disk space, privilege issues, or corruption of a WebSphere profile. If your installation is canceled...
- Identify and resolve the error that caused the cancellation. After canceling the installation, Installation Manager displays an error message with an error code. You can look up the error code in the Installation error messages topic or check the log files.
- Restore the Deployment Manager profile from your backup.
- Delete the connections_root directory.
- Start this task again.
To install Connections...
- Satisfy prerequisites
- On each node, stop any running instances of WAS and WebSphere node agents.
Do not stop Cognos Business Intelligence node agents or servers.
- Start WAS Network Deployment Manager.
- Copy the installation files to the system hosting the Deployment Manager.
- From the Connections setup directory, run the following file to start the Connections launchpad:
LC_SETUP/launchpad.sh
The launchpad needs a web browser to run. If your system does not have a web browser, take one of the following actions:
- Install a web browser.
- Install Connections using response file.
For more information, see the Installing silently topic.
- Start Installation Manager manually:
- Open a command prompt.
- Change to the Connections_install/IM/OS directory, where OS is your operating system.
- Enter ./install.sh -input response.xml.
- Click Install Connections 4.5 and then click Launch the Connections 4.5 install wizard.
Click Welcome to open the documentation link.
- In the Select packages to install window, select the packages to install and click Next to continue. Notes:
- Accept the default setting for Show all versions.
- If you are using an earlier version of Installation Manager than 1.5.3, the 1.5.3 package is selected in this window.
- Click Check for Other Versions and Extensions to search for updates to Installation Manager.
- Review and accept the license agreement
- Location of shared directories for Installation Manager.
- Set Shared Resources directory.
Resources shared by multiple packages.
- Set Installation Manager directory.
Resources unique to packages.
- Choose to Use the existing package group or Create a new package group.
If you are using the wizard for the first time, the Use the existing package group option is not available.
- Set installation directory for Connections.
You can accept the default directory location, enter a new directory name, or click Browse to select an existing directory. Click Next. The path can consist only of letters (a-z, A-Z), numbers (0-9), and underscores (_).
- Confirm the applications to install and click Next. You can select from the following options:
- The wizard always installs the Home page, News, and Search applications.
- To use media gallery widgets in the Communities application, install the Files application. Media gallery widgets store photo and video files in the Files database.
- Even if you are not configuring Cognos yet, install Metrics now so that application data is captured from the moment that Connections is deployed. Metrics captures the deployment data whereas Cognos is used for viewing data reports. If you install Metrics at a later stage, you will not have any data reports for the period before you installed Metrics.
Option Description Connections 4.5 Install all Connections applications. Activities Collaborate with colleagues. Blogs Write personal perspectives about projects. Communities Interact with people on shared projects. Bookmarks Bookmark important web sites. Files Share files among users. Forums Discuss projects and exchange information. Connections Content Manager Manage files using advanced sharing and draft review in Communities. Metrics Identify and analyze usage and trends. Mobile Access Connections from mobile devices. Moderation Forum and community owners can moderate the content of forums. Profiles Find people in the organization. Wikis Create content for your website.
Connections Content Manager appears under Add-on. If you choose to install Connections Content Manager, Communities is selected automatically because the two applications work together.
- Enter the details of your WAS environment:
The wizard creates the dmInfo.properties file to record details of the cell, node, and server.
- Select the WAS installation location that contains the Deployment Manager.
Note the default path to the WAS installation:
- AIX: /usr/IBM/WebSphere/AppServer
- Linux: /opt/IBM/WebSphere/AppServer
- Windows: C:\Program Files (x86)\IBM\WebSphere\AppServer
- Enter the properties of the WAS Dmgr:
Deployment Manager profile Name of the dmgr to use for Connections. The wizard automatically detects any available dmgrs. Host name Name of the host dmgr server. Administrator ID The administrative ID of the dmgr. Set to the connectionsAdmin J2C authentication alias, which is mapped to the following J2EE roles: dsx-admin, widget-admin, and search-admin. Also used by the service integration bus. To use security management software such as Tivoli Access Manager or SiteMinder, the ID specified here must exist in the LDAP directory. This user account can be an LDAP or local repository user. Administrator Password The password for the administrative ID of the dmgr. SOAP port number The wizard automatically detects this value.
- Click Validate to verify the dmgr information that you entered and that application security is enabled on WAS. If the verification fails, Installation Manager displays an error message.
The validation process checks the number of open files that are supported by your system. If the value for this parameter, known as the Open File Descriptor limit, is too low, a file open error, memory allocation failure, or connection establishment error could occur. If one of these errors occurs, exit the installation wizard and increase the open file limit before restarting the wizard. To set the file limit, go to the Installation error messages topic and search for error code CLFRP0042E. The recommended value for Connections is 8192.
- When the verification test is successful, click Next.
- Configure the Connections Content Manager deployment option. This panel displays only if you chose to install the Connections Content Manager feature.
Refer to Configure Connections Content Manager to find the post-installation tasks that you must perform to get CCM up and running.
- Select Existing Deployment to use an existing FileNet deployment for Connections Content Manager:
- Enter the user name and password of the FileNet Object Store administrator for the deployment that the following URLs point to:
- Enter the HTTP URL for the FileNet Collaboration Services server such as:
http://fncs.example.com:80/dm
- Enter the HTTPS URL for the FileNet Collaboration Services server such as:
https://fncs.example.com:443/dm
- Select New Deployment to install a new FileNet deployment to use for Connections Content Manager:
- Enter the location of the FileNet installer packages. The three FileNet installers (Content Platform Engine, Content Platform Engine Client, and FileNet Collaboration Services) must be placed in the same folder. The package names are as follows:
Platform: Content Platform Engine Content Platform Engine Client FileNet Collaboration Services AIX: 5.2.0-P8CE-AIX.BIN 5.2.0-P8CE-CLIENT-AIX.BIN FNCS-2.0.0.0-AIX.bin Linux: 5.2.0-P8CE-LINUX.BIN 5.2.0-P8CE-CLIENT-LINUX.BIN FNCS-2.0.0.0-Linux.bin Windows: 5.2.0-P8CE-WIN.EXE 5.2.0-P8CE-CLIENT-WIN.EXE FNCS-2.0.0.0-WIN.exe zLinux: 5.2.0-P8CE-ZLINUX.BIN 5.2.0-P8CE-CLIENT-ZLINUX.BIN FNCS-2.0.0.0-zLinux.bin
For the Linux platform, at least 3 GB of free disk space is needed under the /tmp folder for the Connections 4.5 CCM installation; otherwise the installation fails.
- Configure your topology.
If you return to this page from a later page in the installation wizard, your settings are still present but not visible. To change any settings, you must enter all of the information again. If you do not want to change your initial settings, click Next.
The applications for Connections Content Manager will not be shown if you have chosen to use an existing FileNet deployment.
- Small deployment:
- Select the Small deployment topology.
- Enter a Cluster name for the topology.
- Select a Node.
- Click Next.
- Medium deployment:
- Select the Medium deployment topology.
- Select the default value or enter a Cluster name for each application or for groups of applications. For example, use Cluster1 for Activities, Communities, and Forums.
Installation Manager creates servers and clusters when required.
- Select a Node for each cluster. Accept the predefined node or select a different node.
These nodes host application server instances that serve Connections applications. You can assign multiple nodes to a cluster, where each node is a server member of that cluster.
- Enter a Server member name for the selected node. Choose the default or enter a custom name.
If you enter a custom server member name, the name must be unique across all nodes in the deployment.
- Click Next.
- Large deployment:
- Select the Large deployment topology.
- Enter a Cluster name for each application.
Installation Manager creates servers and clusters when required.
- Select a Node for each cluster. Accept the predefined node or select a different node.
These nodes host application server instances that serve Connections applications. You can assign multiple nodes to a cluster, where each node is a server member of that cluster.
- Enter a Server member name for the selected node. Choose the default or enter a custom name.
If you enter a custom server member name, the name must be unique across all nodes in the deployment.
- Click Next.
- Enter the database information:
If you return to this page from a later page in the installation wizard, your settings are still present but not visible. To change any settings, you must enter all of the information again. If you do not want to change your initial settings, click Next.
The Connections Content Manager databases will not be shown if you have chosen to use an existing FileNet deployment.
Database information for Global Configuration Data and Object Store must be set correctly; otherwise, the installation fails.
- Specify whether the installed applications use the same database server or instance: Select Yes or No.
If allowed by your database configuration, you can select multiple database instances as well as different database servers.
- Select a Database type from one of the following options:
- IBM DB2 Universal Database
- Oracle Enterprise Edition
- Microsoft SQL Server Enterprise Edition
- Enter the Database server host name. For example:
appserver.enterprise.example.com
If your installed applications use different database servers, enter the database host name for each application.
- Enter the Port number of the database server. The default values are: 50000 for DB2, 1521 for Oracle, and 1433 for SQL Server.
If your installed applications use different database servers or instances, enter the port number for each database server or instance.
- Enter the JDBC driver location. For example:
- AIX:
/usr/IBM/WebSphere/AppServer/lib
- Linux:
/opt/IBM/WebSphere/AppServer/lib
- Windows:
C:\IBM\WebSphere\AppServer\lib
- Ensure that the following JDBC driver libraries are present in the JDBC directory:
- DB2
- db2jcc4.jar and db2jcc_license_cu.jar
Ensure that your user account has the necessary permissions to access the DB2 JDBC files.
- Oracle
- ojdbc6.jar
- SQL Server
- Download the SQL Server JDBC 4 driver from the Microsoft website to a local directory and enter that directory name in the JDBC driver library field. The directory must not contain the sqljdbc.jar file, only the sqljdbc4.jar file. Even though the data source is configured to use the sqljdbc4.jar file, an exception occurs if both files are present in the same directory.
- Enter the User ID and Password for each database. If each database uses the same user credentials, select the Use the same password for all applications check box and then enter the user ID and password for the first database in the list.
If your database type is Oracle, you must connect to the database with the user ID that you used when you created the application database.
- Click Validate to verify your database settings. If the validation fails, check your database settings. When the validation succeeds, click Next.
Installation Manager tests your database connection with the database values that you supplied. You can change the database configuration later in the WAS admin console.
You can usually continue even if the validation fails because you can change the database settings from the WebSphere Application Server administrative console afterward. However, you cannot continue with incorrect information for the Connections Content Manager databases: database operations run during installation, so incorrect values for those databases cause the installation to fail.
- Specify your Cognos configuration details as explained in the table, and then click Validate to verify your connection.
- The IBM Cognos configuration panel appears only if you chose to install the Metrics application earlier in this task.
- Ensure that Cognos Business Intelligence Server is running because the wizard pings the server during the validation process.
- If you choose to install Metrics but do not want to set up Cognos now, select the Do Later option to continue.
- If you decide to deploy Cognos later, be sure to update the J2C alias and update the web addresses for the Cognos service entry in LotusConnections-config.xml as described in the Troubleshooting Cognos validation problems topic.
See also the technote How to change metrics settings to integrate with the newly installed Cognos BI.
Option Description Administrator user ID Type the user name of the administrator account that you selected for Cognos Business Intelligence. This user must be included in the LDAP directory used with Connections. Administrator password Type the password for the Cognos administrator. Name Click the Load node info button to retrieve the list of available nodes and profiles, then click the arrow and select the WebSphere profile that hosts the Cognos BI server. The profile you select here must match the profile you specified as the was.profile.name in the cognos-setup.properties file. Host name
This is a non-editable field that is populated when you select a profile in the Name field. The host name that is associated with each profile/node can help you choose the correct node where the Cognos BI Server is running. Server name There might be multiple servers installed on the same computer as the Cognos BI server; click the arrow and select the instance that represents the Cognos server. This value must match what you specified as the cognos.was.server.name in the cognos-setup.properties file. A default value of cognos_server was assigned in the properties file, so unless you changed that value, use it now. Web context root The context root determines which requests will be delegated to the Cognos application for processing (any request beginning with this string will be handled by Cognos). This value must match the cognos.contextroot specified in the cognos-setup.properties file. A default value of cognos was assigned in the properties file, so unless you changed that value, use it now.
- Locations of the content stores. All nodes in a cluster must have read-write access to shared content. Both shared and local content stores must be accessible using the same path from all nodes and from the dmgr. Each content store is represented by a corresponding WebSphere variable that is further defined as shared or local. Local content is node-specific.
If you are migrating from Connections 4.0, you must reuse your existing content stores in 4.5 in order to maintain data integrity.
- Enter the location of the Shared content store. The shared content store usually resides in a shared repository that grants read-write access to the dmgr and all the nodes.
Use one of the following methods to create a shared data directory:
- Network-based file shares (for example: NFS, SMB/Samba, and so on)
- Storage area network drives (SAN)
- If you are using a shared-file system on Microsoft Windows, specify the file location using the Universal Naming Convention (UNC) format. For example: \\server_name\share_name.
(Windows only) If you use Remote Desktop Connection to map shared folder drives, ensure that you use the same session to start the node agents. Otherwise, the shared drives might be invisible to the nodes.
- Enter the location of the Local content store.
- Click Validate to verify that the account that you are using to install Connections has write access to the content store.
- Click Next.
- Select a Notification solution.
- Enable Notification only.
Use notifications but without the ReplyTo capability.
- Enable Notification and ReplyTo.
Use notifications and the ReplyTo capability. To use ReplyTo, your mail server must be able to receive all the replies and funnel these replies into a single inbox. Connections connects to the mail server using the IMAP protocol.
- None.
Do not use a notification solution in your Connections deployment. You can configure notifications after installation.
- Select and specify a mail server solution and then click Next.
- WebSphere Java Mail Session:
Use a single mail server for all notifications. Select this option if you can access an SMTP server directly using the host name.
Complete the following fields to identify the mail server to use for sending email:
- Host name of SMTP messaging server
- Enter the host name or IP address of the preferred SMTP mail server.
- This SMTP server requires authentication
- Select the check box to force authentication when mail is sent from this server.
- User ID
- If the SMTP server requires authentication, enter the user ID.
- Password
- If the SMTP server requires authentication, enter the user password.
- Encrypt outgoing mail traffic to the SMTP messaging server using SSL
- Select this check box if you want to encrypt outgoing mail to the SMTP server.
- Port
- Accept the default port of 25, or enter port 465 if you are using SSL.
- DNS MX Records:
Use information from DNS to determine which mail servers to use. Select this option if you use a DNS server to access the SMTP messaging server.
- Messaging domain name
- Enter the name or IP address of the messaging domain.
- Choose a specific DNS server
- Select this check box if you want to specify a unique SMTP server.
- DNS server for the messaging servers query
- Enter the host name or IP address of the DNS server.
- DNS port used for the messaging servers query
- Enter the port number that is used for sending queries using the messaging server.
- This SMTP server requires authentication
- Select the check box to force authentication when notification mail is sent from this server.
- User ID
- If SMTP authentication is required, enter the administrator user ID for the SMTP server.
- Password
- If SMTP authentication is required, enter the password for the administrator user of the SMTP server.
- Encrypt outgoing mail traffic to the SMTP messaging server using SSL
- Select the check box if you want to use the Secure Sockets Layer (SSL) when connecting to the SMTP server.
- Port
- Specify the port number to use for the SMTP server connection. The default port number for the SMTP protocol is 25. The default port number for SMTP over SSL is 465.
- If you click Do not enable Notification, Installation Manager skips the rest of this step. You can configure notification later.
- If you selected the Notification and ReplyTo option, configure the ReplyTo email settings. Connections uses a unique ReplyTo address to identify both the person who replied to a notification and the event or item that triggered the notification.
- Enter a domain name. For example: mail.example.com.
This domain name is used to build the ReplyTo address. The address consists of the suffix or prefix, a unique key, and the domain name.
- The reply email address is given a unique ID by the system. You can customize the address by adding a prefix or suffix, using a maximum of 28 characters. This extra information is useful if the domain name is already in use for other purposes. Select one of the following options:
None Use the ID generated by the system. Prefix Enter a prefix in the Example field. Suffix Enter a suffix in the Example field. As you select an option, the wizard creates an example of the address, combining your selection with the ID generated by the system. For example:
- unique_id@domain
- prefix_unique_id@domain
- unique_id_suffix@domain
- Specify the details of the mail file to which ReplyTo emails are sent:
- Server
- The domain where your mail server is located. For example:
replyTo.mail.example.com
- User ID
- The user account for the mail server. The user ID and password are credentials that Connections will use to poll the inbox on the mail server to retrieve the replies and process the content. Connections connects to the mail server using IMAP.
- Password
- Password for the user account. The user ID and password are credentials that Connections will use to poll the inbox on the mail server to retrieve the replies and process the content. Connections connects to the mail server using IMAP.
- Click Next.
You can modify the ReplyTo settings after installation. To edit the domain name and prefix or suffix, edit news-config.xml.
- Review the information that you have entered. To revise your selections, click Back. To finalize the installation, click Next.
- Review the result of the installation. Click Finish to exit the installation wizard.
- Restart the Deployment Manager:
cd WAS_HOME/profiles/Dmgr01/bin
./stopManager.sh
./startManager.sh
- For each node:
cd profile_root/bin
./startNode.sh
- Log in to the dmgr console to perform a full synchronization of all nodes.
System administration | Nodes | nodes | Full Resynchronize
Wait until the dmgr copies all the application EAR files to the installedApps directory on each of the nodes. This process can take up to 30 minutes. To verify that the dmgr has distributed the application EAR files to the nodes, check the SystemOut.log file of each node agent. The default path to the SystemOut.log file on a node is...
profile_root/logs/nodeagent
Look for a message such as the following example:
ADMA7021I: Distribution of application application_name completed successfully
...where application_name is the name of a Connections application.
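For example, on AIX or Linux you can check a node agent log for that message with a command such as the following (path as shown above):
grep ADMA7021I profile_root/logs/nodeagent/SystemOut.log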
- Restart the Deployment Manager.
- Start all your Connections clusters:
Servers | Clusters | WebSphere Application server clusters | Connections clusters | Start
If you installed a cluster with multiple Search nodes, create the initial index.
- If you are installing a non-English language deployment, enable Search dictionaries.
- The index is ready when the INDEX.READY and CRAWLING_VERSION files are present in the index directory.
- If some applications do not start, the file-copying process might not have completed. Wait a few minutes and start the applications.
If the Connections applications are installed on different clusters, start the clusters in the following order:
- News cluster
- Profiles cluster
- Search cluster
- Dogear cluster
- Communities cluster
- Activities cluster
- Blogs cluster
- Files cluster
- Forums cluster
- Wikis cluster
- Homepage cluster
- Metrics cluster
- Mobile cluster
- Moderation cluster
- Connections Content Manager cluster
Results
The installation wizard has installed Connections in a network deployment.
To confirm that the installation was successful, open the log files in...
connections_root/logs
Each Connections application that you installed has a log file, using the following naming format:
application_nameInstall.log
...where application_name is the name of a Connections application. Search for the words error or exception to check whether any errors or exceptions occurred during installation.
To view the log file for system events that occurred during the installation, open date_time.xml, where date_time represents the date and time of the installation. The file is located by default in the following directory:
- root user: /var/ibm/InstallationManager/logs
- non-root user: /home/user/var/ibm/Installation Manager/logs where user is the non-root user name
- Windows Server 2008 64-bit: C:\ProgramData\IBM\Installation Manager\logs
Complete the post-installation tasks that are relevant to your installation.
Access network shares:
If you installed WAS on Microsoft Windows and configured it to run as a service, ensure that you can access network shares.
Before installing for migration
Verify that all necessary prerequisite conditions are complete before installing Connections.
Prerequisites
- Check the Release notes for late-breaking issues.
- Install all the required fixes for WAS that are listed in the Connections Software Requirements web page.
- If you previously installed Installation Manager, update it to version 1.5.3 or later.
For more information, go to the Installation Manager updates web page.
Use the same user account to install Installation Manager and Connections.
- Connections installation presents three options for the type of deployment that you can install.
- The Connections installation process supports the creation of new server instances and clusters. Do not use existing clusters to deploy Connections.
- You can install Connections with either root or non-root accounts on AIX/Linux, or administrator or non-administrator accounts on Microsoft Windows.
- Complete the Pre-installation tasks.
If you are migrating from Connections 4.0, you need to complete only the following tasks:
- Prepare to configure the LDAP directory
- Install IBM WAS if you are installing on the same host as 4.0.
- Set up federated repositories
- Do not complete the Pre-installation tasks for creating databases or populating the Profiles database. The migration process handles those tasks separately. Connections 4.5 is the first release for IBM i, so no migration is needed on that platform.
- Install IBM WAS Network Deployment (Application Server option) on each node. Connections is installed on the system where WAS Deployment Manager is installed.
- Back up the profile_root/Dmgr01 directory.
- Configure WAS to communicate with the LDAP directory.
- Prepare directories to use as content stores. You need to provide shared content stores on network share devices and local content stores on each node. Both shared and local content stores must be accessible using the same path from all nodes and from the Deployment Manager.
- Set the system clocks on the Deployment Manager and the nodes to within 1 minute of each other. If these system clocks are further than 1 minute apart, you might experience synchronization errors.
- Copy the JDBC files for your database type to the Dmgr and then from the dmgr to each node. Place the copied files in the same location on each node as their locations on the dmgr. If, for example, you copied the db2jcc4.jar file from the C:\IBM\SQLLIB directory on the dmgr, place the copy in the C:\IBM\SQLLIB directory on each node.
See the following table to determine which files to copy:
Database type JDBC files
- DB2: db2jcc4.jar and db2jcc_license_cu.jar
- DB2 on IBM i: All applications except Metrics: jt400.jar. Metrics: db2jcc4.jar and db2jcc_license_cu.jar
- Oracle: ojdbc6.jar. Ensure that you are using the latest version of the ojdbc6.jar file.
- SQL Server: sqljdbc4.jar
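For example, on Linux you might copy the DB2 driver files to a node with scp; the DB2 installation path and node host name below are assumptions about your environment:
scp /opt/ibm/db2/V10.1/java/db2jcc4.jar node1.example.com:/opt/ibm/db2/V10.1/java/
scp /opt/ibm/db2/V10.1/java/db2jcc_license_cu.jar node1.example.com:/opt/ibm/db2/V10.1/java/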
- If you are going to use a trusted SSL certificate, ensure that it is available before you begin the installation.
- If you do not plan to deploy IBM Cognos Business Intelligence now to support metrics, you can still install the Metrics application along with the other Connections applications. This enables Connections to begin collecting event data immediately and store it in the Metrics database for use when Cognos is available to provide reports.
- (Microsoft Windows) You must use an administrator account to install Connections on Windows. If you are installing on Windows Server 2008, you must use a local administrator account. If you use a domain administrator account, the installation might fail.
- (Linux only) If you receive an error message after attempting to start Installation Manager, you might need to install additional 32-bit libraries.
For more information about Installation Manager errors, go to the Unable to install Installation Manager on RHEL 6.0/6.1 (64-bit) webpage.
- Ensure that the Open File Descriptor limit is 8192.
For information about setting the file limit, go to the Installation error messages topic and search for error code CLFRP0042E.
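On AIX or Linux, for example, you can check the current limit and raise it for the current shell as follows; making the change permanent is typically done in the system limits configuration, which varies by platform:
ulimit -n
ulimit -n 8192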
- (AIX only) Installation Manager requires additional libraries for the AIX operating system.
- (AIX only) If Installation Manager hangs while being installed on your system, you might need to update your version of the software.
- (AIX only) If you are downloading Installation Manager, the TAR program available by default with AIX does not handle path lengths longer than 100 characters. To overcome this restriction, use the GNU file archiving program instead. This program is an open source package that IBM distributes through the AIX Toolbox for Linux Applications at the IBM AIX Toolbox website. Download and install the GNU-compatible TAR package. You do not need to install the RPM Package Manager because it is provided with AIX.
After installing the GNU-compatible compression program, change to the directory where you downloaded the Connections tar file. Enter the following command to extract the files from the file:
gtar -xvf Lotus_Connections_wizard_aix.tar
This command creates a directory named after Installation Manager.
Establish naming conventions for nodes, servers, clusters, and web servers.
Use a worksheet to record the user IDs, passwords, server names, and other information that you need during and after installation.
Install Linux libraries
The complete list of Linux libraries required for deploying Connections 4.5.
Linux
Ensure that you have installed the following Linux packages and libraries. Notes:
- Ensure that the GTK library is available on your system. Even when you are installing on a 64-bit system, you still need the 32-bit version of the GTK library.
- If you are using a response file or console mode to install Connections, you do not need these libraries.
- compat-libstdc++-33.x86_64
- libcanberra-gtk2.i686
- PackageKit-gtk-module
- gtk2.i686
- compat-libstdc++-33.i686
- compat-libstdc++-296
- compat-libstdc++
- libXtst.i686
- libpam.so.0
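On Red Hat Enterprise Linux, for example, most of these packages can be installed with yum, assuming your system has access to the appropriate repositories; the pam.i686 package is an assumption about where libpam.so.0 comes from on your distribution:
yum install gtk2.i686 libcanberra-gtk2.i686 PackageKit-gtk-module libXtst.i686 compat-libstdc++-33.i686 compat-libstdc++-33.x86_64 pam.i686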
Cognos
If you plan to install Cognos, you also need the libraries listed in the Cognos BI 10.1.1 Software Environments - Required Patches technote.
Both 32-bit and 64-bit versions are required.
Content store migration
Migrate your 4.0 content stores to your Connections 4.5 deployment.
When creating content stores for Connections 4.5, you can reuse the 4.0 content stores, using either of two methods: use the 4.0 directories in the 4.5 deployment or create new 4.5 directories and copy the 4.0 content to them.
Reuse the extracted file content that is stored at the location pointed to by the EXTRACTED_FILE_STORE WebSphere variable. You can achieve this by copying the contents of the EXTRACTED_FILE_STORE location on the 4.0 system to the EXTRACTED_FILE_STORE location on the 4.5 system as part of the content store migration.
Reusing content stores
Reuse your 4.0 content stores in your Connections 4.5 deployment.
While installing Connections 4.5, you are prompted to specify the locations of the 4.5 content stores. Instead of creating new directories, you can specify the 4.0 content stores so that you can reuse that content in your 4.5 deployment.
You must delete any Search-related data from your Connections 4.0 content stores, such as indexes and statistics. The 4.5 installation generates new Search-related data.
- Delete the data in content stores that are related to the Search application.
- Use the following table as a guide to deleting data that is recreated in the Connections 4.5 deployment:
Table 57. Content stores that must be deleted
Content Store Location
local_content_store/news/search/index
local_content_store/profiles/cache
local_content_store/search/backup
local_content_store/search/index
shared_content_store/news/search/indexReplication
shared_content_store/search/dictionary
shared_content_store/search/stellent/dcs/oiexport/exporter
where shared_content_store is a content store on a shared storage device on a network and local_content_store is content that is stored on your local system.
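For example, on AIX or Linux the deletions might look like the following; substitute your own content store paths for the placeholders:
rm -rf local_content_store/news/search/index
rm -rf local_content_store/profiles/cache
rm -rf local_content_store/search/backup
rm -rf local_content_store/search/index
rm -rf shared_content_store/news/search/indexReplication
rm -rf shared_content_store/search/dictionary
rm -rf shared_content_store/search/stellent/dcs/oiexport/exporter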
Copying content stores
Copy data from your Connections 4.0 content stores to your 4.5 deployment.
Ensure that you created new content stores while installing Connections 4.5.
You can reuse content stores from your Connections 4.0 deployment by copying the data to the new content stores in your 4.5 deployment. You do not need to copy local content stores because those directories are recreated by the 4.5 deployment.
This topic applies to 4.0 CR2 and later releases of version 4.0; you must upgrade the deployment to at least 4.0 CR2 before performing the migration.
To copy data from your Connections 4.0 content stores...
- Locate the content stores in your Connections 4.0 deployment.
- Copy the 4.0 data to the corresponding content store in your Connections 4.5 deployment.
If you are doing a side-by-side migration, you may need to rename the profiles and statistics subdirectories after copying to reflect the node and cluster names on the target system.
- Use the following table as a guide to organizing the content stores: Notes:
- The table excludes content that is generated by the Search application, such as indexes and statistics.
- If you created additional content stores, copy those content stores as well as the default content stores in the table. For Activities, for example, the content stores that you must copy are defined in the objectStore element of the oa-config.xml file.
Table 58. Content stores that must be copied
Content Store Location
shared_content_store/audit
shared_content_store/activities/content
shared_content_store/activities/statistics
shared_content_store/blogs/upload
shared_content_store/communities/statistics
shared_content_store/customization
shared_content_store/dogear/favorite
shared_content_store/files/upload
shared_content_store/forums/content
shared_content_store/profiles/statistics
shared_content_store/wikis/upload
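For example, on AIX or Linux you might copy one of these stores with cp, preserving permissions and timestamps; the source and target paths are placeholders for your 4.0 and 4.5 shared content stores:
cp -Rp /connections40_shared/files/upload/. /connections45_shared/files/upload/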
Import application artifacts to Connections 4.5
Import application artifacts, including configuration data and properties files, into your new deployment.
Ensure that you completed the steps in the Exporting application artifacts from Connections 4.0 topic.
To import application data into your Connections 4.5 deployment...
- Start the Deployment Manager in your Connections 4.5 deployment.
- Copy the migration directory that you backed up from your 4.0 deployment to the connections_root directory in your 4.5 deployment.
Be sure not to overwrite the 4.5 migration directory with the 4.0 migration directory. Copy only the 4.0 migration/work directory into the existing migration directory on the 4.5 system.
- Import your 4.0 data. Open a command prompt, change to the migration directory on the Deployment Manager node in your 4.5 deployment, and run the following command:
- AIX or Linux:
./migration.sh lc-import
-DdmgrUserid=dm_admin
-DdmgrPassword=dm_password
- Windows:
migration.bat lc-import
-DdmgrUserid=dm_admin
-DdmgrPassword=dm_password
- where dm_admin is the administrative user ID for WAS Deployment Manager and dm_password is the user password.
Check the log file to validate the import. The log file is stored in the system user's home directory, and uses the following naming format:
lc-migration-yyyyMMdd_HHmm_ss.log For example:
- AIX or Linux:
/root/lc-migration-20110215_1534_26.log
- Windows:
C:\Documents and Settings\Administrator\lc-migration-20110215_1534_26.log
- Windows Server 2008:
C:\Users\Administrator\lc-migration-20110215_1534_26.log
- If the deployment uses multiple nodes, synchronize the nodes.
Complete the steps in the Post-migration tasks topic.
If the migration fails, restore the Deployment Manager profile and complete the steps in this task again. If you customized your 4.0 environment, reapply your customizations to the relevant XML files.
If your existing deployment uses Quickr Connector and you want to continue using it, install and configure Quickr Connector after installing Connections 4.5. The migration tool does not handle the Quickr Connector configuration files.
Post-migration tasks
After migrating to Connections 4.5, you need to perform further tasks to ensure that your new deployment is complete.
Ensure that you have completed any required post-installation tasks.
Ensure that you backed up customized files from your Connections 4.0 deployment.
After updating or migrating Connections, you must manually update any custom fields and customized files that could not be automatically updated or migrated.
To finalize the migration process...
- Reapply the customizations that you used in version 4.0.
- Migrate any JSP, CSS, and string customizations.
- Verify that your Blogs themes are present in 4.5. If not, manually update them.
- Update your customized Community themes.
- Copy the 4.0 version of the profiles-policy.xml file to the 4.5 deployment, overwriting the 4.5 version of the file.
- Copy the customized XSD elements of the 4.0 service-location.xsd file to the 4.5 version of the file.
- Redefine customized Profiles fields in the validation.xml file.
- Migrate your 4.0 JavaScript customizations.
- Reapply any proxy configurations, if necessary.
- Required: Delete your pre-migration search indexes and create new indexes.
- Synchronize the member database tables for each Connections application with the data in the user directory.
You must have a web server configured for Connections before attempting to synchronize Profiles and the LDAP directory.
- If you used Connections Connectors in version 4.0, such as Lotus Quickr, re-install them.
You must obtain the 4.0 version of Connections Connector for Lotus Quickr from the IBM Collaboration Solutions Catalog.
- If you defined a server whitelist in version 4.0 for publishing file attachments from Activities to Lotus Quickr, redefine it.
- If you changed the root URL of any Connections application, and if the old and new URLs point to the same web server, redirect requests to the new URL:
- Open the httpd.conf file in a text editor. The file is located in the ibm_http_server_root/conf directory.
- Uncomment the following line:
LoadModule rewrite_module modules/mod_rewrite.so
- Add the following statements (a quick curl check of the redirect appears after this procedure):
- The lines referring to weblogs redirect all requests for the pre-migration URL of https://blog301.example.com/weblogs/* to the post-migration URL of https://blog40.example.com/newblogs/*. Substitute your own URLs as appropriate.
- The lines referring to "bookmarklet" redirect the path of the bookmarklet feature so that it continues to work after migration. Use the exact URLs shown.
RewriteEngine on
RewriteRule /weblogs/(.*) https://blog40.example.com/newblogs/$1 [R,L]
RewriteCond %{REQUEST_URI} /(.*)/bookmarklet/(.*)
RewriteCond %{REQUEST_URI} !^/connections/bookmarklet/(.*)
RewriteRule ^/(.*)/bookmarklet/(.*) /connections/bookmarklet/$2 [noescape,L,R]
If the web server uses SSL, add the equivalent rules inside the SSL virtual host, for example:
Listen 0.0.0.0:443
<VirtualHost *:443>
RewriteEngine on
RewriteRule /weblogs/(.*) https://blog40.example.com/newblogs/$1 [R,L]
ServerName blog40.example.com
SSLEnable
RewriteCond %{REQUEST_URI} /(.*)/bookmarklet/(.*)
RewriteCond %{REQUEST_URI} !^/connections/bookmarklet/(.*)
RewriteRule ^/(.*)/bookmarklet/(.*) /connections/bookmarklet/$2 [noescape,L,R]
</VirtualHost>
SSLDisable
- If you installed and configured IBM HTTP Server before installing Connections 4.5, map the host name of the deployment in LotusConnections-config.xml.
- Remove the Location and ErrorDocument stanzas if you added them to the httpd.conf file before migrating.
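A quick way to spot-check the redirect configured earlier in this procedure, assuming the example host names shown above and a placeholder blog path (myblog); a 302 response whose Location header points to the new URL indicates that the rewrite rule is active:
curl -k -I https://blog40.example.com/weblogs/myblog
The -k option skips certificate validation, which is convenient when the SSL virtual host uses a self-signed certificate.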
After you complete the mandatory post-installation tasks, update the deployment with the latest fixes.
Create the Search index after migrating or updating
When you update or migrate Connections from a previous release, you must recreate the Search index. This task is mandatory when migrating from Connections 4.0 to 4.5.
When you follow the steps described in the following procedure, Search functionality is not available to users.
You must delete any Search-related data from your Connections 4.0 content stores, such as indexes and statistics. The 4.5 installation generates new Search-related data.
During the indexing update process, documents are first written to a cache table in the HOMEPAGE database and then written to each index across the nodes. When a new index needs to be built, the database cache is skipped, and the crawling and indexing process writes directly to the index directory on the node that is performing the indexing task.
When the indexing task is finished, the Search application copies the index to all nodes that host the Search application. If some of those nodes were not running during indexing, you can copy the index manually.
Before copying the index, verify that the INDEX.READY and CRAWLING_VERSION files are present in the index directory.
- Stop all the nodes that are running the Search application. If there are existing search indexes on these nodes, delete them by performing the steps described in Delete the index.
- To ensure that status updates and calendar events are included in the indexing task, restore the full set of default indexing tasks. Go to the Restoring the default scheduled tasks for Search topic and follow the instructions for running the SearchService.resetAllTasks() command.
- Start all the Search nodes in the cluster.
- Recreate the index by completing one of the following steps:
- Create a one-off task that indexes all the installed Connections applications in the deployment (a wsadmin sketch follows this procedure).
- Wait for the next scheduled indexing task to run.
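A minimal wsadmin sketch for the one-off indexing task, assuming the SearchService commands described in the Search administration documentation (searchAdmin.py) and placeholder credentials; run it on the Deployment Manager node:
cd /opt/IBM/WebSphere/AppServer/profiles/Dmgr01/bin
./wsadmin.sh -lang jython -user wasadmin -password wasadminpwd
# At the wsadmin prompt, load the Search commands and index all configured applications:
execfile("searchAdmin.py")
SearchService.indexNow("all_configured")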
You can tell that the index is built on the indexing node when the INDEX.READY and CRAWLING_VERSION files are present in the index directory. The Search index directory is defined by the IBM WAS variable SEARCH_INDEX_DIR.
After the index is built, the next phase is index roll-out. During this phase, the files in the index directory are automatically copied to the Search staging folder, which is defined by the WAS variable SEARCH_INDEX_SHARED_COPY_LOCATION. The files in the Search staging folder are then copied to each index folder on the remaining nodes.
Do not stop the deployment until the index has been copied to all nodes. If the server is stopped during this process, the index will not be successfully rolled out to all nodes. In this event, you need to manually copy the index from the staging location to the other nodes.
Synchronize files shared with communities
After upgrading, you must synchronize files that have been shared with communities.
After upgrading the Connections applications and databases, but before any users have access, you must synchronize files shared with communities.
To run the command FilesDataIntegrityService.syncAllCommunityShares() follow the steps detailed in the topic Run Files administrative commands.
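A minimal wsadmin sketch, assuming the Files administration commands (filesAdmin.py) and placeholder credentials; run it on the Deployment Manager node:
cd /opt/IBM/WebSphere/AppServer/profiles/Dmgr01/bin
./wsadmin.sh -lang jython -user wasadmin -password wasadminpwd
# At the wsadmin prompt, load the Files commands and synchronize community shares:
execfile("filesAdmin.py")
FilesDataIntegrityService.syncAllCommunityShares()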
Post-migration steps for profile types and profile policies
In migrating to this release, there are significant changes to Profiles configuration that should be validated. In prior releases, the layout of the profile and the underlying data model were defined in the profiles-config.xml file. In this release, the data model for profile-type definitions has been moved into a dedicated profiles-types.xml file and the rules for presentation of a profile have been moved into a set of FreeMarker template files.
It is important to understand how the default migration for each file works to validate the results for your environment:
- profiles-types.xml file
This file was introduced in Connections 4.0 and identifies the set of properties associated with each profile-record based on its associated profile-type.
A profile-type declaration is generated from the previous data in the profiles-config.xml file using the following procedure:
- For each apiModel element previously defined in the profiles-config.xml, a corresponding profile-type is declared with the associated properties. A property is identified as editable or hidden in the new file based on the original definition.
- For each profileLayout element previously defined in the profiles-config.xml, a corresponding profile-type is declared or merged with the definition derived from the apiModel. A property is identified as editable, hidden, or rich-text based on the original definition.
If the deployment associates a profile-record with a profile-type that was not previously declared in profiles-config.xml, you must manually define your type in the generated profiles-types.xml to make the system aware of the properties to be associated with that type. Finally, it is possible that you reference profile-type identifiers in either the profiles-policy.xml or widgets-config.xml file that were not previously declared in profiles-config.xml. You must manually declare these profile-type definitions in the generated profiles-types.xml to enumerate the set of properties to be leveraged at run-time.
For more information, see Profile-types.
- User interface template files (profileEdit.ftl, profileDetails.ftl, searchResults.ftl, businessCardInfo.ftl)
A set of template files now control the rendering of a profile record in the user interface.
Each template file is generated based on previous layout definitions present in the prior release profiles-config.xml file.
- profileDetails.ftl: Generated using the profileLayout elements from the prior profiles-config.xml file. It controls rendering of attributes on the main profile page.
- profileEdit.ftl: Generated using the profileLayout elements from the prior profiles-config.xml file. It controls rendering of the input controls when you edit your profile.
- searchResults.ftl: Generated using the searchResultsLayout elements from the prior profiles-config.xml file. It controls rendering of profiles in the directory search and report-to chain application views.
- businessCardInfo.ftl: Generated using the businessCardLayout elements from the prior profiles-config.xml file. It controls the display of profile properties on the business card.
The migrated template files should preserve the behavior of the previous release's layout definitions, but it is recommended that you review each generated template file and use the features of the FreeMarker template language to simplify the result. If multiple profile-type layouts were defined, each migrated file contains a set of if-elseif-else logic to handle profile-type specific rendering. Because rendering semantics were often common across profile-type layout definitions, the migrated file may contain redundant content that can be removed or cleaned up.
- LotusConnections-config.xml
If your Profiles customization included custom strings, ensure that your custom resource bundle is properly registered in LotusConnections-config.xml and that you have manually applied your custom resource bundle to the target deployment.
- profiles-policy.xml
If you modified this file in prior releases, you must manually update it again with the same changes, or copy and replace the profiles-policy.xml file in the target release.
Post-migration steps for the media gallery terms and conditions
After you migrate Connections 4.0 to Connections 4.5, perform the following steps to update your terms and conditions agreement.
You need to get the string bundle files that you copied in Preparing to migrate the media gallery.
- Create a jar file that declares an OSGI extension. To see an example, download this sample JAR file by right-clicking the following link: com.ibm.lconn.mediagallery.web.resources.examples_1.0.0.jar
- Place your string bundle properties files that you copied from Connections 4.0 in the _properties folder of the OSGI bundle jar (a jar repackaging sketch follows this procedure).
- If the string bundle in the _properties folder is not named tc.properties, update the file attribute of the dojoResourceModule element in the plugin.xml file in the jar from step 1 with the name of your string bundle.
- Update the following items in the media gallery widget definition in widgets-config.xml.
<item name="tc_bundle" value="my.custom.TermsConditions.messages" />
<item name="tc_key" value="tc" />
<item name="tc_checkbox" value="false" />
where:
- The tc_bundle value must be the name of the object that is set in the file, which is referenced by the bind attribute of the net.jazz.ajax.dojoModuleBinding in the OSGI bundle. In this case, the value is my.custom.TermsConditions.messages.
- The value of tc_key is the key used in the properties file located in the _properties directory of the OSGI bundle jar, for example...
com.ibm.lconn.mediagallery.web.resources.example.jar
- The value attribute of the tc_checkbox item can be true or false. If true, users must check a box in the agreement before they can access the media gallery.
See Using the widgets-config.xml file for Communities.
- Place the jar from step 1 on the Connections server in the provision\webresources directory and then restart the common ear.
The provision directory is typically...
CE_HOME/data/shared/provision
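A hedged sketch of adding your backed-up string bundle to the sample jar, assuming Linux, the sample jar in the current directory, and a backed-up properties file at /backup/connections40/tc.properties (adjust names and paths to your environment):
mkdir -p _properties
cp /backup/connections40/tc.properties _properties/
# jar uf updates the bundle in place, adding _properties/tc.properties as an entry
jar uf com.ibm.lconn.mediagallery.web.resources.examples_1.0.0.jar _properties/tc.properties
After updating the jar, place it in the provision\webresources directory and restart the common ear as described in the final step above.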
Additional post-migration steps for the media gallery
After you migrate Connections 4.0 to Connections 4.5, perform the following tasks to update your custom video player and your custom field renderer.
After migration you can update the media gallery custom video player and your custom field renderer.
- Updating the media gallery custom video player
You need the custom video player enterprise application from Connections 4 that you backed up and saved in the Preparing to migrate the media gallery topic procedure.
- Create a jar file that declares an OSGI extension. To see an example, download this sample JAR file by right-clicking the following link: com.ibm.lconn.mediagallery.web.resources.examples_1.0.0.jar
For additional information about creating an IBM WebSphere Application Server OSGi extension using the plugin.xml, see the WAS information center.
- Replace the contents of the playVideo method in the resources/VideoPlayer.js file in com.ibm.lconn.mediagallery.web.resources.examples_1.0.0.jar with the contents of the CustomVideoPlayer.ear/CustomVideoPlayerWeb.war/custom-player.js file.
- In JavaScript, make sure that the following variable is set to the name of a constructor function (Dojo class) that creates an object with a single member function:
quickr.lw.config.media.videoPlayer
For example:
myCustomVideoPlayer_playVideo = function (node, store, item, width, height) {
    // Include DOM code to render the video player here
};
function myCustomVideoPlayer() {
    // Initialize member variables if necessary
}
myCustomVideoPlayer.prototype.playVideo = myCustomVideoPlayer_playVideo;
- If your quickr.lw.config.media.videoPlayer variable had a value different from my.custom.VideoPlayer, then make the following updates in the com.ibm.lconn.mediagallery.web.resources.examples_1.0.0.jar file:
- In resources/VideoPlayer.js, change all references to my.custom.VideoPlayer to the value used in your CustomVideoPlayer.ear/CustomVideoPlayerWeb.war/custom-player.js file.
- In plugin.xml, change the bind attribute of the dojoModuleBinding element to the name of the class used in resources/VideoPlayer.js. Update the value attribute of the alias element if the package name of the class is different from my.custom to the package name of the class defined in resources/VideoPlayer.js.
- Finally, set the configuration variable to the name of your constructor:
quickr.lw.config.media.videoPlayer = "myCustomVideoPlayer";
- Place the jar on the Connections server in the provision\webresources directory and then restart the common ear.
The provision directory is typically...
CE_HOME/data/shared/provision
- Updating your custom field renderer
After you migrate to this Connections release, perform the following tasks to update your custom field renderer. You need the custom field renderer enterprise application from Connections 4 that you backed up and saved in the Preparing to migrate the media gallery topic procedure.
- Create a jar file that declares an OSGI extension. To see an example, download this sample JAR file by right-clicking the com.ibm.lconn.mediagallery.web.resources.examples_1.0.0.jar link.
For additional information about creating an IBM WebSphere Application Server OSGi extension using the plugin.xml, see the WAS information center.
- Copy your JavaScript classes out of CustomRenderers.ear\CustomRenderersWeb.war.
- Create JavaScript class files in the resources directory of the com.ibm.lconn.mediagallery.web.resources.examples_1.0.0.jar similar to the provided my.custom.renderer.SearchTermType.
- If your custom type class has a name different from my.custom.renderer.SearchTermType, then make the following updates in the com.ibm.lconn.mediagallery.web.resources.examples_1.0.0.jar file:
- In plugin.xml, change the bind attribute of the dojoModuleBinding element to the name of the class used in your JavaScript class located in the resources directory of the jar.
- If the package name of the class is different from my.custom, update the value attribute of the alias element to the package name of your JavaScript class.
- Place the jar on the Connections server in the provision\webresources directory and then restart the common ear.
The provision directory is typically...
CE_HOME/data/shared/provision
Rolling back a migration
If a migration fails, you can roll back your environment to the previous one.
Ensure that you have a back-up copy of your installation environment.
Rolling back your Connections environment ensures that you have a clean environment before attempting the migration again.
To roll back your Connections environment...
- Restore your databases from the backup that you made. Use your native database tools.
- Restore your Connections installation directory from the backup.
- Restore your WAS profile directory and the profileRegistry.xml file from the back-up.
- Restore the WAS Deployment Manager profile directory: profile_root/Dmgr01. For example: D:\WebSphere\AppServer\profiles\dmgr.
- If you are performing the migration on the same system as your 4.0 deployment, restore the Installation Manager data directory from your backup.
Update Connections 4.5
Update Connections 4.5 with interim fixes or fix packs.
Fixes are available for download from the IBM Fix Central web site.
Updates include the following types of fixes:
- Interim fix
- A noncumulative fix that fixes a single issue.
- Cumulative Refresh (CR)
- Similar to an interim fix, a CR also updates the application EAR files.
- Fix pack
- A cumulative fix that contains multiple interim fixes and other identified updates.
Use the update wizard to install interim fixes and CRs but use Installation Manager to install fix packs. You must always update Connections from the Deployment Manager and then synchronize nodes to propagate the update. Each fix pack and interim fix contains complete instructions for installation.
The following topics describe how to update your Connections environment.
Downloading fixes
Download fix packs and interim fixes from the IBM support web site.
List all the fixes that you have already installed by completing the following steps:
- Change to the connections_root/updateInstaller directory.
- Run the following command:
updateSilent.sh -installDir connections_root -fix -applications application_name
where application_name is one of the following Connections applications:
Use a comma or semicolon to delimit multiple applications. If you do not provide this variable, all installed fixes are listed. (A hedged example invocation follows the list of application names.)
- activities
- blogs
- communities
- dogear
- files
- forums
- homepage
- metrics
- mobile
- moderation
- news
- profiles
- search
- wikis
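For example, a hedged invocation on Linux, assuming the deployment is installed under /opt/IBM/Connections:
cd /opt/IBM/Connections/updateInstaller
./updateSilent.sh -installDir /opt/IBM/Connections -fix -applications activities,communities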
The Connections update installer does not install fixes for IBM Cognos.
For IBM i, use updateSilent_OS400.sh to list all the fixes. You can get this file from the CR1 package in updateInstaller.zip.
To download fixes, including interim fixes and fix packs...
- Go to the Fix Central web site.
- Select Lotus from the product group menu, Connections from the Product menu, your currently Installed version, your Platform, and then click Continue.
- Use one of the available search methods to identify the fix that you wish to install.
Select Recommended to see a list of recommended fixes for your release.
- Follow the online instructions to download the fix to a temporary directory.
- Extract the contents of the fix file and then copy the extracted files to the following directory:
- AIX or Linux: connections_root/update/fixes
- Windows: connections_root\update\fixes
- IBM i: connections_root/updateInstaller/fixes
If a fixes subdirectory does not already exist in the update directory, create it. You need to specify this directory when you install fixes.
Set the WAS_HOME environment variable
Set an environment variable that points to the WAS installation directory.
Complete this task only if you are installing interim fixes. If you are installing fix packs, you do not need to complete this task.
The update wizard is programmed to access the WAS installation by reading the WAS_HOME environment variable in the system path.
To set the WAS_HOME environment variable on the system that hosts the Deployment Manager...
- Open a command prompt and navigate to the WebSphere/AppServer/bin directory.
- Execute the following script (a verification sketch for Linux follows these options):
- Linux:
./setupCmdLine.sh
- Windows:
setupCmdLine.bat
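A minimal verification sketch for Linux, assuming the default installation path; source the script so that the variable is set in your current shell, then confirm it:
. /opt/IBM/WebSphere/AppServer/bin/setupCmdLine.sh
echo $WAS_HOME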
This task is not applicable to deployments on the AIX operating system.
For IBM i, the environment variable is WAS_PROD_HOME. To set it:
- Start Qshell by entering STRQSH in the command line.
- Use export to set the environment variable, for example:
export WAS_PROD_HOME=/QIBM/ProdData/WebSphere/AppServer/V8/ND
Install fixes as a non-root user
Grant permissions to a non-root user to install fixes.
This task applies only to Connections deployments on AIX or Linux.
By default, only root users have the necessary permissions to install fixes for a Connections deployment. You can permit non-root users to install fixes by changing their permissions to access certain data directories.
To grant the necessary permissions to a non-root user...
- Create a non-root user.
- Create a home directory for the new non-root user.
- Open a command prompt and grant the appropriate permissions to the non-root user by running the commands shown in the following table:
Table 59. Non-root user permissions
- WAS_HOME (RWX): chown -R non-root_user app_server_root
- connections_root (RWX): chown -R non-root_user connections_root
- data_directory_root (RWX): chown -R non-root_user data_directory_root
- path/tmp/efixes (RWX): chown -R non-root_user tmp/efixes
where non-root_user is the account ID of the new non-root user and path is the path to the efixes directory. Notes:
- The execute permission that you grant for the data_directory_root directory is intended specifically for the search/dcs/stellent directory.
- Verify that the /tmp/efixes directory already exists before running the chown command.
Results
When you have granted the necessary permissions, the non-root user can install interim fixes and fix packs. If different non-root users intend to install fixes, you must first delete any files that might remain in the download directories from earlier fix installations.
Example
Grant permissions to a new non-root user who wants to install a fix pack for a Connections deployment on Linux:
- Create a non-root user account called fix_installer.
- Create a home directory for the fix_installer user account.
- Open a command prompt and run the following commands:
- chown -R fix_installer /opt/IBM
In this example, the opt/IBM directory contains both the app_server_root and connections_root directories.
- chown -R fix_installer /usr/IBM
If the /usr/IBM directory does not exist, create it.
- Advise the new non-root user to log in and then download and install the latest fixes for Connections.
Install interim fixes
Use the update wizard in interactive mode or with a response file to install interim fixes.
Prerequisites
Ensure that you have met the following prerequisites:
- You have downloaded interim fixes, as described in the Downloading fixes topic.
- The WAS_HOME environment variable has been set.
- You have RWX permissions for the Deployment Manager profile directory that hosts Connections applications.
- (AIX only) You have installed all the required libraries.
- You have backed up any customizations you have made.
See Saving your customizations.
IBM i supports only silent mode.
There are two modes in which you can run the update wizard:
- Interactive mode
- Confirm each step of the process. This mode is useful when you use the wizard for the first time or if you are updating a single installation
- Silent mode
- Use a series of commands and parameters to launch and run the wizard. Silent mode is useful when you are updating multiple installations of Connections
Install interim fixes in interactive mode
Install interim fixes with the update wizard in interactive mode.
For information about prerequisites, see the Installing interim fixes topic.
An interim fix is a noncumulative fix that resolves a single issue. This topic describes the steps to install an interim fix only; it does not include information about how to prepare the production environment before installing the fix.
By using the update wizard, you can interact with each step in the procedure.
You can install multiple fixes in this procedure.
To install an interim fix in interactive mode...
A WAS administrative user ID or password that contains a space is not supported. Use a WAS administrative user ID and password without spaces to install fixes.
- Stop all the clusters that host Connections applications.
- Configure the command line:
- Open a command prompt and navigate to the WebSphere/AppServer/bin directory.
- Run the following script:
- AIX or Linux: ./setupCmdLine.sh
If the command is not working, consider these alternatives:
- . ./setupCmdLine.sh
- . setupCmdLine.sh
- . /opt/IBM/WebSphere/AppServer/profiles/Dmgr01/bin/setupCmdLine.sh
- Windows: setupCmdLine.bat
- When updating to Connections 4.5 CR1, you need to replace the updateInstaller directory and then run chmod on the new update installer files. For example:
- Navigate to the connections_root directory, for example /opt/IBM/Connections.
- Change to the updateInstaller directory: cd /opt/IBM/Connections/updateInstaller.
- Run chmod ugo+x *
- Start the installation wizard from the updateInstaller directory in the connections_root directory and run the following script:
- AIX or Linux: ./updateWizard.sh
- Windows: updateWizard.bat
- On the Welcome page, click Next to continue.
- Enter the location of the fixes in the Fix location field, or click Browse to navigate to the location of the fixes and then click Next. The update wizard scans the location for fixes.
- Select the check boxes of the fixes to install and then click Next.
- Confirm whether you backed up any customizations that you made to the Connections interface. This input does not validate any such backup; it is just a reminder to consider backing up any customizations because updates to the deployment could overwrite your customizations.
- Enter the WAS Deployment Manager administrator ID and password for each application and then click Next.
- Review the information that you entered. To edit your input, click Back. To start the update, click Next. The installation process can take up to 10 minutes to complete.
- Review the result of the update. Click Finish to exit the wizard.
- Remove temporary WAS directories such as the WAS_HOME/profiles/AppSrv01/temp directory.
- Perform a full synchronization to push the update to all nodes.
System administration | Nodes | nodes | Full Resynchronize
Results
The log files that are created by the wizard are located under the connections_root/version/log directory. Check the <timestamp>_<fix name>_<feature name>_install.log file for the string "Build Successful" near the end of the file. If it is present, the fixes were installed successfully. Otherwise, uninstall the fix and then install it again.
Your Connections deployment was updated. To check the logs, go to the IM_ROOT directory and open the applicationUpdate.log file, where application is the name of a Connections application.
Install interim fixes using response file
Install interim fixes with the update wizard using response file.
For information about prerequisites, see the Installing interim fixes topic.
An interim fix is a noncumulative fix that resolves a single issue. This topic describes the steps to install an interim fix only; it does not include information about how to prepare the production environment before installing the fix. You can install multiple fixes at a time.
For information about additional command options, see the updateSilent command topic.
To install an interim fix using response file...
- Stop all the clusters that host IBM Connections applications.
- From the updateInstaller directory under the connections_root directory open a command prompt and enter the following commands (without the carriage returns):
- AIX or Linux:
chmod +x updateSilent.sh
./updateSilent.sh -fix -installDir connections_root -fixDir fix_file_location -install -fixes APAR_number_of_fix -wasUserId AdminUserId -wasPassword AdminPassword -featureCustomizationBackedUp backup_status -homepageDBSchemaUpdatesHaveBeenCompleted yes/no
- Windows:
updateSilent.bat -fix -installDir connections_root -fixDir fix_file_location -install -fixes APAR_number_of_fix -wasUserId AdminUserId -wasPassword AdminPassword -featureCustomizationBackedUp backup_status -homepageDBSchemaUpdatesHaveBeenCompleted yes/no
- IBM i:
updateSilent_OS400.sh -fix -installDir connections_root -fixDir fix_file_location -install -fixes APAR_number_of_fix -wasUserId AdminUserId -wasPassword AdminPassword -featureCustomizationBackedUp backup_status
where
- fix_file_location is the directory containing the downloaded fixes
- APAR_number_of_fix is the APAR number of the fix (such as LO36338)
- AdminUserId is the administrative user name for the WAS Deployment Manager. A user ID or password that contains a space is not supported; use a WAS administrative user ID and password without spaces to install fixes.
- AdminPassword is the password for that user
- backup_status confirms whether you backed up any customizations that you made to the Connections interface. The possible values are yes|no. This parameter does not validate any such backup; it is just a reminder to consider backing up any customizations because updates to the deployment could overwrite your customizations.
- homepageDBSchemaUpdatesHaveBeenCompleted acknowledges that the Homepage database schema must be updated as part of the CR2 upgrade; this parameter is required. The possible values are yes|no. The database schema update can be performed at any time during the CR2 upgrade maintenance window, but the database must be upgraded before you restart the server with CR2 applied.
If you do not know the APAR number of the fix, the fix JAR filename contains it. It is the string starting with "LO" and followed by 5 digits. Example: For fix JAR 4.0.0.0-IC-News-IFLO12345.jar, the APAR is LO12345. For example:
./updateSilent.sh -installDir /opt/IBM/Connections \
-fix \
-fixDir /opt/IBM/Connections/update/fixes \
-install \
-fixes LO36338 LO34499 LO34327 LO35077 LO34966 \
-wasUserId wasadmin \
-wasPassword wasadmin \
-featureCustomizationBackedUp yes
- Remove the content of temporary WAS directories such as the app_server_root/profiles/AppSrv01/temp directory.
- After installing the fixes, perform a full synchronization to push the updates to all nodes.
System administration | Nodes | nodes | Full Resynchronize
Results
The log files that are created by the wizard are located under the connections_root/version/log directory.
updateSilent command
Use the updateSilent command to run the update wizard using response file.
Purpose
The updateSilent command:
- Installs fixes
- Uninstalls fixes
- Reports on the current state of applied fixes
The updateSilent command was called the updateLC command in previous releases of Connections.
updateSilent.sh (AIX or Linux) or updateSilent.bat (Windows)
For IBM i: updateSilent_OS400.sh
Parameters
- -?
- Displays command usage information.
- /?
- Displays command usage information.
- -fix
- Identifies the update as an interim fix update.
- -fixDetails
- Instructs the command to display interim fix detail information.
- -fixDir <directory>
- Specifies the fully qualified directory to which you downloaded the interim fixes. The recommended directory is connections_root/update/fixes.
- -fixes <fix1> <fix2>
- Specifies a list of space-delimited interim fixes to install or uninstall.
- -help
- Displays command usage information.
- /help
- Displays command usage information.
- -install
- Installs the update.
- -installDir <directory>
- Specifies the fully qualified installation root of the Connections product. By default, this directory is connections_root.
If you are applying an interim fix to applications in a cluster, apply the fix to the first node and then do a full synchronization to push the fix to the other nodes.
System administration | Nodes | nodes | Full Resynchronize
- -uninstall
- Uninstalls the identified fix.
- -uninstallAll
- Uninstalls all applied interim fixes.
- -usage
- Displays command usage information.
- -wasPassword <password>
- Required to install or uninstall. Identifies the succeeding text as a WAS Deployment Manager administrative user password.
- -wasUserId <AdminUserId>
- Required to install or uninstall. Specifies the user ID of the WAS administrative user.
- -featureCustomizationBackedUp <backup_status>
- Confirms whether you backed up any customizations that you made to the Connections interface. The possible values are yes|no. This parameter does not validate any such backup; it is just a reminder to consider backing up any customizations because updates to the deployment could overwrite your customizations.
Syntax
Use the specified syntax to perform the following common tasks:
- To display command usage information:
updateSilent -help | -? | /help | /? | -usage
- To process a fix:
updateSilent -installDir <connections_root> \
-fix \
-fixDir <connections_root/update/fixes> \
-install | -uninstall | -uninstallAll \
-fixes <space-delimited list of fixes> \
-wasUserId <AdminUserId> -wasPassword <AdminPwd> \
[-configProperties "property file name and path"] \
[-fixDetails] \
-featureCustomizationBackedUp yes
- To display a list of applied fixes:
updateSilent -fix -installDir <connections_root>
- To display a list of available fixes:
updateSilent -fix -installDir <connections_root> -fixDir <connections_root/update/fixes>
Examples
The following examples demonstrate how to perform common tasks with the updateSilent command. They assume the following conditions:
- The location of the update wizard is: C:\IBM\Connections\update
- The Connections installation root is: C:\IBM\Connections
- The fix repository is: C:\IBM\Connections\updateInstaller\fixes
The examples include carriage returns after each parameter to make the example easier to read. When using the command, do not add carriage returns after the parameters.
To install a collection of interim fixes:
C:\IBM\Connections\updateInstaller updateSilent -fix -installDir "C:\IBM\Connections" -fixDir "C:\IBM\Connections\updateInstaller\fixes" -install -fixes Fix1 Fix2 -wasUserId wsadmin -wasPassword wspwd -featureCustomizationBackedUp yes
To install a collection of interim fixes and display interim fix details:
C:\IBM\Connections\updateInstaller updateSilent -fix -installDir "C:\IBM\Connections" -fixDir "C:\IBM\Connections\updateInstaller\fixes" -install -fixes Fix1 Fix2 -fixDetails -wasUserId wsadmin -wasPassword wspwd -featureCustomizationBackedUp yes
To uninstall a collection of interim fixes:
C:\IBM\Connections\updateInstaller updateSilent -fix -installDir "C:\IBM\Connections" -fixDir "C:\IBM\Connections\updateInstaller\fixes" -uninstall -fixes Fix1 Fix2 -wasUserId wsadmin -wasPassword wspwd -featureCustomizationBackedUp yes
To display a list of interim fixes:
C:\IBM\Connections\updateInstaller updateSilent -fix -installDir "C:\IBM\Connections"
To display a list of interim fixes available in the repository:
C:\IBM\Connections\updateInstaller updateSilent -fix -installDir "C:\IBM\Connections" -fixDir "C:\IBM\Connections\updateInstaller\fixes"
AIX libraries
The following 64-bit AIX libraries are required for updating Connections 4.5.
Install the following RPMs to install the current version of the GTK:
atk-1.12.3-2.aix5.2.ppc.rpm
cairo-1.8.8-1.aix5.2.ppc.rpm
expat-2.0.1-1.aix5.2.ppc.rpm
fontconfig-2.4.2-1.aix5.2.ppc.rpm
freetype2-2.3.9-1.aix5.2.ppc.rpm
gettext-0.10.40-6.aix5.1.ppc.rpm
glib2-2.12.4-2.aix5.2.ppc.rpm
gtk2-2.10.6-4.aix5.2.ppc.rpm
libjpeg-6b-6.aix5.1.ppc.rpm
libpng-1.2.32-2.aix5.2.ppc.rpm
libtiff-3.8.2-1.aix5.2.ppc.rpm
pango-1.14.5-4.aix5.2.ppc.rpm
xcursor-1.1.7-3.aix5.2.ppc.rpm
xft-2.1.6-5.aix5.1.ppc.rpm
xrender-0.9.1-3.aix5.2.ppc.rpm
zlib-1.2.3-3.aix5.1.ppc.rpm
pixman-0.12.0-3.aix5.2.ppc.rpm
For more details, go to the Prepare AIX systems for installation webpage in the WAS information center.
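A hedged way to check for missing packages before you update, assuming the AIX Toolbox rpm command is available; the package names below are derived from the RPM file names above:
for p in atk cairo expat fontconfig freetype2 gettext glib2 gtk2 libjpeg libpng libtiff pango xcursor xft xrender zlib pixman; do
  rpm -q $p >/dev/null 2>&1 || echo "missing: $p"
done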
Install fix packs
Install fix packs with the Installation Manager.
Fix packs contain multiple interim fixes for your Connections installation.
You must use Installation Manager to install fix packs.
Prerequisites
Ensure that you have met the following prerequisites:
- You backed up your files.
For more information, see the Backing up Connections topic.
- You downloaded the latest fix pack.
See the Downloading fixes topic for more information.
- You backed up any customizations that you made.
See Saving your customizations.
Install a fix pack in interactive mode
Install a fix pack to update Connections.
Important information about the user account:
- Use the same operating system user account to install this fix pack that was used to install Connections 4.5. If that account is no longer available, recreate it with the same user name and password and use it to install the fix pack.
- If you installed Connections 4.5 as a root user, you cannot install this fix pack as a non-root user.
- Windows: Use an operating system user account that is in the same Administrators group as the account that was used to install Connections 4.5.
- Ensure that you have downloaded and extracted the latest fix pack.
See the Downloading fixes topic for more information.
- You must run Installation Manager on the system where the Deployment Manager is installed.
- If an error occurs during installation, Installation Manager cancels the installation and rolls back the installation files. Installation errors are usually caused by environment problems such as insufficient disk space, privilege issues, or corruption of a WebSphere profile. If your installation is cancelled...
- Identify and resolve the error that caused the cancellation. After cancelling the installation, Installation Manager displays an error message with an error code. You can look up the error code in the Installation error messages topic or check the log files.
- Start this task again.
To install the fix pack...
- Stop all clusters and node agents in the deployment but leave the Deployment Manager running.
- From the IM_ROOT directory, run the file to start the Installation Manager:
- AIX or Linux: ./launcher
- Windows: launcher.exe
On Windows Server 2008, the launcher.exe file is in the IM_ROOT\eclipse directory.
- From the Installation Manager menu, click File > Preferences.
- Click Repositories > Add Repository.
- Enter the full path to the repository.config file of the fix pack that you downloaded and then click OK. For example: C:\IBM\Connections\update\fixes\LC_version_Fixpack\repository.config. Installation Manager indicates whether it can connect to the repository.
- Click OK to save the new repository.
- Click Update.
- The package group panel appears. Click Next to continue.
- Select the fix packs to install and click Next.
- Review and accept the license agreement
- Ensure that all the applications are selected and click Next.
All of the installed applications are selected by default. If you add any of the unselected applications, those applications will be installed. If you clear any of the selected applications, those applications will be uninstalled.
- Enter the administrative ID and password of the Deployment Manager.
Set this to the connectionsAdmin J2C authentication alias, which is mapped to the following J2EE roles: dsx-admin, widget-admin, and search-admin. It is also used by the service integration bus. To use security management software such as Tivoli Access Manager or SiteMinder, the ID specified here must exist in the LDAP directory. For more information, see the Switching to unique administrator IDs for system level communication topic.
- Configure Connections Content Manager deployment option. This panel only displays if you chose to install the Connections Content Manager feature.
- Select Existing Deployment to use an existing FileNet deployment with Connections Content Manager:
- Enter the user name and password of the FileNet Object Store administrator for the server that the next two URLs point to:
- Enter the HTTP URL for the FileNet Collaboration Services server such as:
http://fncs.example.com:80/dm
- Enter the HTTPS URL for the FileNet Collaboration Services server, such as:
https://fncs.example.com:443/dm
- Select New Deployment to install a new FileNet deployment to use with Connections Content Manager.
- Enter the location of the FileNet installer packages. The three FileNet installers (Content Platform Engine, FileNet Collaboration Services, and Content Platform Engine Client) must be placed in the same folder.
- Configure your topology: Notes:
- The panel described in this step appears only if you selected new applications to install.
- The applications for Connections Content Manager will not be shown if you have chosen to use an existing FileNet deployment.
- If you select an existing cluster on which to deploy applications, the nodes in that cluster are fixed and cannot be modified.
- Small deployment:
- Select the Small deployment topology.
- Enter a Cluster name for the topology.
- Select a Node.
- Click Next.
- Medium deployment:
- Select the Medium deployment topology.
- Select the default value or enter a Cluster name for each application or for groups of applications. For example, use Cluster1 for Activities, Communities, and Forums.
Installation Manager creates servers and clusters when required.
- Select a Node for each cluster. Accept the predefined node or select a different node.
These nodes host application server instances that serve Connections applications. You can assign multiple nodes to a cluster, where each node is a server member of that cluster.
- Enter a Server member name for the selected node. Choose the default or enter a custom name.
If you enter a custom server member name, the name must be unique across all nodes in the deployment.
- Click Next.
- Large deployment:
- Select the Large deployment topology.
- Enter a Cluster name for each application.
Installation Manager creates servers and clusters when required.
- Select a Node for each cluster. Accept the predefined node or select a different node.
These nodes host application server instances that serve Connections applications. You can assign multiple nodes to a cluster, where each node is a server member of that cluster.
- Enter a Server member name for the selected node. Choose the default or enter a custom name.
If you enter a custom server member name, the name must be unique across all nodes in the deployment.
- Click Next.
- Enter the database information.
The panel described in this step appears only if you selected new applications to install and if the new applications require database configuration.
The Connections Content Manager databases will not be shown if you have chosen to use an existing FileNet deployment.
Database information for Global Configuration Data and Object Store must be set correctly or installation will fail.
- Specify whether the installed applications use the same database server or instance: Select Yes or No.
If allowed by your database configuration, you can select multiple database instances as well as different database servers.
- Select a Database type from one of the following options:
- IBM DB2 Universal Database
- Oracle Enterprise Edition
- Microsoft SQL Server Enterprise Edition
- Enter the Database server host name. For example:
appserver.enterprise.example.com
If your installed applications use different database servers, enter the database host name for each application.
- Enter the Port number of the database server. The default values are: 50000 for DB2, 1521 for Oracle, and 1433 for SQL Server.
If your installed applications use different database servers or instances, enter the port number for each database server or instance.
- Enter the JDBC driver location. For example:
/usr/IBM/WebSphere/AppServer/lib
- Ensure that the following JDBC driver libraries are present in the JDBC directory:
- DB2
- db2jcc4.jar and db2jcc_license_cu.jar
Ensure that your user account has the necessary permissions to access the DB2 JDBC files.
- Oracle
- ojdbc6.jar
- SQL Server
- Download the SQL Server JDBC 4 driver from the Microsoft website to a local directory and enter that directory name in the JDBC driver library field. The directory must not contain the sqljdbc.jar file, only the sqljdbc4.jar file. Even though the data source is configured to use the sqljdbc4.jar file, an exception occurs if both files are present in the same directory.
- Enter the User ID and Password for each database. If each database uses the same user credentials, select the Use the same password for all applications check box and then enter the user ID and password for the first database in the list.
If your database type is Oracle, you must connect to the database with the user ID that you used when you created the application database.
- Click Validate to verify your database settings. If the validation fails, check your database settings. When the validation succeeds, click Next.
Installation Manager tests your database connection with the database values that you supplied. You can change the database configuration later in the WAS admin console.
Usually you can continue even if the validation fails because you can change the database settings from the WebSphere Application Server administrative console afterwards. However, you cannot continue if you entered incorrect information for the Connections Content Manager databases: the installer performs database operations during installation, and incorrect database information causes the installation to fail, so you must provide correct information for the Connections Content Manager databases.
- Review the summary information. Click Back to change the information or click Update to install the selected fix packs.
- When the installation is complete, restart the clusters and node agents and synchronize the nodes.
Results
Your Connections deployment has been updated. To check the logs, go to connections_root/logs and open the applicationUpdate.log file, where application is the name of a Connections application. If you added new applications, check the applicationInstall.log file as well.
Install a fix pack using response file
Silently install a fix pack to update Connections.
Important information about the user account:
- Use the same operating system user account to install this fix pack that was used to install Connections 4.5. If that account is no longer available, recreate it with the same user name and password and use it to install the fix pack.
- If you installed Connections 4.5 as a root user, you cannot install this fix pack as a non-root user.
- Windows: Use an operating system user account that is in the same Administrators group as the account that was used to install Connections 4.5.
- Ensure that you have downloaded and extracted the latest fix pack.
See the Downloading fixes topic for more information.
- You must run Installation Manager on the system where the Deployment Manager is installed.
Create a response file for this task by running a simulated modification.
Response files are provided for silent installations on AIX, Linux, and IBM i. Instead of generating a new response file, you can edit the default response file that is provided with the product. However, if you edit the default response file, you need to add encrypted passwords to the file.
For more information, see the Creating encrypted passwords for a response file topic.
- If an error occurs during installation, Installation Manager cancels the installation and rolls back the installation files. Installation errors are usually caused by environment problems such as insufficient disk space, privilege issues, or corruption of a WebSphere profile. If your installation is cancelled...
- Identify and resolve the error that caused the cancellation. After cancelling the installation, Installation Manager displays an error message with an error code. You can look up the error code in the Installation error messages topic or check the log files.
- Start this task again.
A silent update uses a response file to automate the installation of a fix pack.
To change the paths to the response file and log file, edit the generate_other_responsefile.sh file.
To perform a silent update...
- Stop all clusters and node agents in the deployment but leave the Deployment Manager running.
- Change to the IM_ROOT/eclipse directory.
- Run the following command:
- AIX or Linux: ./IBMIM --launcher.ini silent-install.ini -input fixpack_root/LC_update.rsp -log ./update.log
- Windows: IBMIMc.exe --launcher.ini silent-install.ini -input fixpack_root\LC_update.rsp -log .\update.log
If you stored the response file in a location that is different from the default location, change the path in the command to point to the file.
- When the installation is complete, restart the clusters and node agents and synchronize the nodes.
Check the update.log file for errors and repeat this task if necessary.
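For reference, a hedged example of the command in this procedure on Linux, assuming Installation Manager under /opt/IBM/InstallationManager and the fix pack extracted to /opt/fixpack (adjust paths to your environment):
cd /opt/IBM/InstallationManager/eclipse
./IBMIM --launcher.ini silent-install.ini -input /opt/fixpack/LC_update.rsp -log /tmp/update.log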
Install a fix pack in console mode
Use console mode to install a fix pack to update Connections.
Important information about the user account:
- Use the same operating system user account to install this fix pack that was used to install Connections 4.5. If that account is no longer available, recreate it with the same user name and password and use it to install the fix pack.
- If you installed Connections 4.5 as a root user, you cannot install this fix pack as a non-root user.
- Windows: Use an operating system user account that is in the same Administrators group as the account that was used to install Connections 4.5.
- Ensure that you have downloaded and extracted the latest fix pack.
See the Downloading fixes topic for more information.
- You must run Installation Manager on the system where the Deployment Manager is installed.
- If an error occurs during installation, Installation Manager cancels the installation and rolls back the installation files. Installation errors are usually caused by environment problems such as insufficient disk space, privilege issues, or corruption of a WebSphere profile. If your installation is cancelled...
- Identify and resolve the error that caused the cancellation. After cancelling the installation, Installation Manager displays an error message with an error code. You can look up the error code in the Installation error messages topic or check the log files.
- Start this task again.
To install a fix pack in console mode...
- Stop all clusters and node agents in the deployment but leave the Deployment Manager running.
- Change to the IM_ROOT/eclipse/tools directory.
- Start Installation Manager by running the following command:
- AIX or Linux: ./imcl -c
- Windows: imcl.exe -c
- In the console window:
- Choose P to edit preferences.
- Choose 1. Repositories.
- Choose D. Add Repository.
- Specify the fix pack repository location that you have downloaded, such as C:\Users\Administrator\Desktop\IBM_Connections_4.5.0.1_Windows\IBMConnections\repository.config.
- Choose A to apply these changes, and then return to the main menu.
- In the console main menu, choose Update and then select Connections as the package group to update.
- Ensure that all the applications are selected and type N to proceed.
All of the installed applications are selected by default. If you add any of the unselected applications, those applications will be installed. If you clear any of the selected applications, those applications will be uninstalled.
- Enter the administrative ID and password of the Deployment Manager.
Set to the connectionsAdmin J2C authentication alias, which is mapped to the following J2EE roles: dsx-admin,
widget-admin , andsearch-admin . Also used by the service integration bus. To use security management software such as Tivoli Access Manager or SiteMinder, the ID specified here must exist in the LDAP directory.For more information, see the Switching to unique administrator IDs for system level communication topic.
- Configure the Connections Content Manager deployment option. This panel only displays if you chose to install the Connections Content Manager feature while installing the fix pack.
- Select 1 to use a new FileNet deployment for Connections Content Manager.
- Enter the FileNet installer packages location.
The three FileNet installers...
- Content Platform Engine
- FileNet Collaboration Services
- Content Platform Engine Client
...need to be placed into the same folder. Press Enter to validate whether the correct installers could be found.
- Select 2 to use an existing FileNet deployment for Connections Content Manager.
- Enter the User Id for the FileNet Object Store administrator username.
- Enter a password for the FileNet Object Store administrator password.
- Enter the HTTP URL for the FileNet Collaboration Services server such as:
http://fncs.example.com:80/dm
- Enter the HTTPS URL for the FileNet Collaboration Services server, such as:
https://fncs.example.com:443/dm
- Press Enter to validate.
- You can continue even if the validation fails. If you are sure that the correct information was entered, the FileNet server might not be available; type N to continue. Otherwise, type B to reenter the information.
- Configure your topology.
The panel described in this step appears only if you selected new applications to install.
The Connections Content Manager applications do not display if you have chosen to use an existing FileNet deployment.
- Small deployment:
- Type 1 to select the Small deployment topology.
- Enter a Cluster name for the topology.
- Select a Node.
- Enter a Server member name for the node.
- Type N to proceed.
- Medium deployment:
- Type 2 to select the Medium deployment topology.
- Select the default value or enter a Cluster name for each application or for groups of applications. For example, use Cluster1 for Activities, Communities, and Forums.
Installation Manager creates servers and clusters when required.
- Select a Node for each cluster. Accept the predefined node or select a different node.
These nodes host application server instances that serve Connections applications. You can assign multiple nodes to a cluster, where each node is a server member of that cluster.
- Enter a Server member name for the selected node. Choose the default or enter a custom name.
If you enter a custom server member name, the name must be unique across all nodes in the deployment.
- The topology specified is displayed. To re-specify any details, type the number that corresponds to the application; for example, type 1 for Activities.
- Type N to proceed.
- Large deployment:
- Type 3 to select the Large deployment topology.
- Enter a Cluster name for each application.
Installation Manager creates servers and clusters when required.
- Select a Node for each cluster. Accept the predefined node or select a different node.
These nodes host application server instances that serve Connections applications. You can assign multiple nodes to a cluster, where each node is a server member of that cluster.
- Enter a Server member name for the selected node. Choose the default or enter a custom name.
If you enter a custom server member name, the name must be unique across all nodes in the deployment.
- The topology specified is displayed. To re-specify any details, type the number that corresponds to the application; for example, type 1 for Activities.
- Type N to proceed.
- Specify your Cognos configuration as follows:
The IBM Cognos configuration panel appears only if you chose to install the Metrics application earlier in this task.
- Select when to configure Cognos application:
- Type 1 to configure it after installation as described in Configuring Cognos Business Intelligence after installation.
- Type 2 to configure it now.
- Enter the LDAP user ID for the Cognos administrator.
- Enter the password for the Cognos administrator.
- Select the node where the Cognos BI Server is installed.
- Enter the web context root.
- Press Enter to validate the configuration. If validation failed, type B to reenter the information.
- Type N to proceed.
- Enter the database information.
The panel described in this step appears only if you selected new applications to install and if the new applications require database configuration.
The Connections Content Manager databases do not display if you have chosen to use an existing FileNet deployment.
- Specify whether the installed applications use the same database server or instance: Type 1 to specify that the applications use same database server or instance; type 2 to specify that they use different database servers or instances.
If allowed by your database configuration, you can select multiple database instances as well as different database servers.
- Select a Database type. If you are installing on Windows, Linux, or AIX, the options are:
- IBM DB2 Universal Database
- Oracle Enterprise Edition
- Microsoft SQL Server Enterprise Edition
- Enter the Database server host name. For example:
appserver.enterprise.example.com
If your installed applications use different database servers, enter the database host name for each application.
- Enter the Port number of the database server. The default values are 50000 for DB2, 1521 for Oracle, and 1433 for SQL Server.
If your installed applications use different database servers or instances, enter the port number for each database server or instance. Database Name and Port number do not apply to DB2 for IBM i, so you can ignore them and keep the default values. For Metrics, because the Metrics database is DB2 on AIX, you must provide the Database Name and Port number so that the installer can configure them correctly.
- Enter the JDBC driver location. For example:
/usr/IBM/WebSphere/AppServer/lib
- Ensure that the following JDBC driver libraries are present in the JDBC directory:
- DB2
- db2jcc4.jar and db2jcc_license_cu.jar
Ensure that your user account has the necessary permissions to access the DB2 JDBC files.
- Oracle
- ojdbc6.jar
- SQL Server
- Download the SQL Server JDBC 2 driver from the Microsoft website to a local directory and enter that directory name in the JDBC driver library field. The directory must not contain the sqljdbc.jar file, only the sqljdbc4.jar file. Even though the data source is configured to use the sqljdbc4.jar file, an exception occurs if both files are present in the same directory.
- Enter the User ID and Password for each database. If each database uses the same user credentials, confirm the Use the same password for all applications question and then enter the user ID and password for the first database in the list.
If your database type is Oracle, you must connect to the database with the user ID that you used when you created the application database.
- If you need to make changes, type the number that corresponds to the application to change. Alternatively, type R to reset all the database specifications to their default values.
- Press Enter to verify your database settings. If the validation fails, check and correct your database settings.
Installation Manager tests your database connection with the database values that you supplied. You can change the database configuration later in the WAS admin console.
You can usually continue even if the validation fails, because you can change the database settings from the WebSphere Application Server administrative console afterward. However, you cannot continue if you entered incorrect information for the Connections Content Manager database: the installation performs database operations against that database, and incorrect information causes the installation to fail. You must therefore provide correct information for the Connections Content Manager database.
- If the verification check is successful, type N to proceed. If verification fails, press B to reenter the required information.
- Review the information that you entered. To revise your selections, press B. To install the update, press U.
- When the installation is complete, restart the clusters and node agents and synchronize the nodes.
Results
Your Connections deployment has been updated. To check the logs, go to connections_root/logs and open the applicationUpdate.log file, where application is the name of a Connections application. If you added new applications, check the applicationInstall.log file as well.
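For example, you could scan these logs for failure messages from a command line, as in the following sketch; the connections_root path shown is an assumption for your environment:
# search the application update and install logs for failure messages (path is an assumption)
grep -liE "error|fail" /opt/IBM/Connections/logs/*.log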
Synchronize nodes
Synchronize all the nodes in a cluster.
Ensure that synchronization is enabled.
For more information, see the Enabling and disabling synchronization topic.
After updating or changing the deployment, you usually need to synchronize those changes to the nodes in the deployment.
To synchronize the nodes...
- Log in to the WAS admin console for the Deployment Manager and click System Administration > Nodes.
- Select the check boxes for the nodes and click Full Resynchronize.
- Click Save.
- Restart WAS.
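As an alternative to the administrative console, you can synchronize a single node from that node's command line with the syncNode utility. The following is a minimal sketch; the profile path, Deployment Manager host name, SOAP port, and credentials are assumptions for your environment, and the node agent must be stopped before syncNode runs:
# stop the node agent, synchronize the node with the Deployment Manager, then restart the node agent
/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin/stopNode.sh -username wasadmin -password password
/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin/syncNode.sh dmgr.example.com 8879 -username wasadmin -password password
/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin/startNode.sh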
Enable and disable synchronization
Enable or disable the synchronization of nodes in a deployment of Connections.
You can enable the following types of synchronization from the Deployment Manager:
- Automatic synchronization – Updates occur on a schedule. This type of synchronization is enabled by default in network deployments.
- Startup synchronization – Updates occur each time the server is started.
To enable or disable synchronization, from WAS dmgr console...
- Go to...
System Administration | Node agents | nodeagent | node | Additional Properties | File synchronization service
Select or clear check boxes...
- Automatic synchronization
- Startup synchronization
- Click Save.
- Click System Administration > Node agents.
- Select the check box of the node for which you are enabling or disabling synchronization, and click Restart.
If you are turning synchronization on or off for more than one node, perform this step for each node.
- Restart the Deployment Manager.
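If you prefer the command line, you can restart a node agent and the Deployment Manager with the stopNode/startNode and stopManager/startManager utilities. This is only a sketch; the profile paths and credentials shown are assumptions for a typical default installation:
# restart the node agent on each node whose synchronization settings you changed
/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin/stopNode.sh -username wasadmin -password password
/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin/startNode.sh
# restart the Deployment Manager on the Deployment Manager system
/opt/IBM/WebSphere/AppServer/profiles/Dmgr01/bin/stopManager.sh -username wasadmin -password password
/opt/IBM/WebSphere/AppServer/profiles/Dmgr01/bin/startManager.sh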
Uninstalling fixes
If the installation of a fix pack or interim fix fails, you can restore your Connections environment to its previous state.
Use the update wizard to uninstall interim fixes. There are two modes in which you can run the wizard:
- Interactive mode
- You must confirm each step of the process.
- Silent mode
- You use a series of commands and parameters to launch and run the wizard.
IBM i supports only Silent mode.
Uninstalling interim fixes in interactive mode
If the interim fix that you installed is not working, you can uninstall it using the update wizard in interactive mode.
Ensure that you have met the following prerequisites:
- You have restored your databases
- The WAS_HOME environment variable has been set for Linux and Windows
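For example, you could set WAS_HOME in the session that will run the wizard, as in the following sketch; the installation paths shown are common defaults and are assumptions for your environment:
# AIX or Linux (assumed default WAS installation path)
export WAS_HOME=/opt/IBM/WebSphere/AppServer
# Windows (run in the same command prompt that starts the wizard)
set WAS_HOME=C:\IBM\WebSphere\AppServer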
To uninstall interim fixes with the update wizard in interactive mode...
- From the updateInstaller directory under the connections_root directory, run the following script:
- AIX or Linux: ./updateWizard.sh
- Windows: updateWizard.bat
- On the Welcome panel, click Next to continue.
- On this panel, choose to uninstall fixes, and then click Next to continue.
- Select the check boxes of the interim fixes that you wish to uninstall, and then click Next.
- Enter the WAS Deployment Manager administrator user ID and password and click Next.
- Review the information that you have entered. To make changes, click Back. To start uninstalling, click Next.
- Review the result of the update. Click Finish to exit the wizard.
Results
At least two logs are created by the wizard under the connections_root/version/log directory:
- Date_Time_ifix name_application name_uninstall.log
- Date_Time_ifix name_uninstall.log
Check the <timestamp>_<fix name>_<feature name>_uninstall.log file for "Build Successful" near the end of the file. If it is present, the fixes were uninstalled successfully; otherwise, uninstall the fix again.
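For example, on AIX or Linux you could confirm the result with a command similar to the following; the connections_root path is an assumption for your environment:
# list the uninstall logs that contain the success marker (path is an assumption)
grep -l "Build Successful" /opt/IBM/Connections/version/log/*_uninstall.log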
Uninstalling interim fixes using response file
If the interim fix that you installed is not working, you can uninstall it with the update wizard by using a response file.
Ensure that you verified the following prerequisites:
- You restored your databases
- The WAS_HOME environment variable is set for Linux and Windows
- The WAS_PROD_HOME environment variable is set for IBM i.
To uninstall an interim fix using a response file...
- Open a command prompt and enter the following command:
updateSilent.sh|exe -installDir fix_file_location -fix -uninstall -fixes fix1_id fix2_id -wasUserId AdminUserId -wasPassword AdminPwd -featureCustomizationBackedUp <yes|no>
where:
- fix_file_location is the directory containing the downloaded interim fix.
- fix1_id and fix2_id are the labels of the interim fixes to remove. Repeat this parameter as often as required.
- AdminUserId is the user ID for WAS Deployment Manager.
- AdminPwd is the password for WAS Deployment Manager.
- <yes|no> is the value of the featureCustomizationBackedUp parameter; this parameter indicates whether you have backed up your customizations, but it does not validate the backup.
- For IBM i, use updateSilent_OS400.sh as follows:
updateSilent_OS400.sh -installDir connections_root -fix -uninstall -fixes fix1_id fix2_id -wasUserId AdminUserId -wasPassword AdminPwd -featureCustomizationBackedUp <yes|no>
- For IBM i, if you want to uninstall the fix for Wikis (LO75060-IC4500-CR01-Wikis.jar), use the following example as a guide:
updateSilent_OS400.sh -installDir /qibm/ProdData/IBM/connections -fix -uninstall -fixes LO75060-IC4500-CR01-Wikis -wasUserId wasadmin -wasPassword password -featureCustomizationBackedUp yes
For IBM i:
- To remove the fix for News, remove the fixes for Proxy, Help, Container, and Common in sequence first.
- To remove the fix for Mobile, remove the fix for MobileAdmin first.
Results
The log files that are created by the wizard are located under the connections_root/version/log directory. Check the <timestamp>_<fix name>_<feature name>_uninstall.log file for "Build Successful" near the end of the file. If it is present, the fixes were uninstalled successfully; otherwise, uninstall the fix again.
Rolling back a fix pack in interactive mode
If the fix pack that you installed is not working, you can uninstall it in interactive mode using Installation Manager.
Ensure that you have restored your databases.
You can also use a silent method to uninstall fix packs.
For more information, see the Rolling back a fix pack using response file topic.
To uninstall fix packs in interactive mode...
- Stop all clusters in the deployment.
- From the IM_ROOT directory, run the following file to start Installation Manager:
- AIX or Linux: ./launcher
- Windows 32-bit: launcher.exe
- Click Roll Back.
- Select the fix packs to uninstall. Ensure that Connections is shown in the Package Group Name column and that the correct update version is shown under Installed Packages and Fixes. Click Next.
- Select the target check box and click Next.
- Enter the administrative ID and password of the Deployment Manager. Click Validate to verify the information that you entered and that application security is enabled on WAS. If the verification fails, Installation Manager displays an error message.
- Review the summary information. Click Roll Back to uninstall the selected fix pack.
- Review the result of the update. Click Finish to exit the wizard.
- When the uninstallation is complete, synchronize all the nodes and restart all the clusters.
Results
To check the details of the uninstallation, review...
connections_root/logs
Each Connections application that you uninstalled has a log file, using the following naming format:
application_nameUninstall.log
...where application_name is the name of a Connections application. If you added new applications, check the application_nameInstall.log file as well. To check the details of updated applications, open...
application_nameUpdate.log
Rolling back a fix pack using response file
If the fix pack that you installed is not working, you can uninstall it using response file.
Create a response file for this task by running a simulated rollback.
Instead of generating a new response file, you can edit the default response file that is provided with the product. However, if you edit the default response file, you need to add encrypted passwords to the file.
For more information, see the Creating encrypted passwords for a response file topic.
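For example, assuming the Installation Manager command-line tool is available, you can generate an encrypted value to paste into the response file with the imcl encryptString command; the path and password shown are placeholders:
# generate an encrypted password string for use in the response file (path is an assumption)
cd /opt/IBM/InstallationManager/eclipse/tools
./imcl encryptString myWasAdminPassword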
A silent rollback uses a response file to automate the removal of a fix pack.
To change the paths to the response file and log file, edit the lc_install.ini file.
To perform a silent rollback...
- Stop all clusters and node agents in the deployment but leave the Deployment Manager running.
- Change to the Installation Manager directory. The default location is:
- AIX or Linux: /opt/IBM/InstallationManager/eclipse
- Windows: C:\IBM\Installation Manager\eclipse
- Run the following command to start Installation Manager:
- AIX or Linux: ./IBMIM --launcher.ini silent-install.ini -input connections_root/silentResponseFile/rollback_file.rsp -log ./rollback.log
- Windows: .\IBMIMc.exe --launcher.ini silent-install.ini -input connections_root\silentResponseFile\rollback_file.rsp -log .\rollback.log
where rollback_file is the name of your response file. The default rollback response file is lc_rollback.rsp.
- When the installation is complete, restart the clusters and node agents and synchronize the nodes.
Results
Check the rollback.log file for errors and repeat this task if necessary.
Rolling back a fix pack in console mode
Use console mode to uninstall a fix pack.
If the fix pack that you installed is not working, you can roll back the deployment to the previous configuration.
To uninstall a fix pack in console mode...
- Stop all clusters and node agents in the deployment but leave the Deployment Manager running.
- Change to the IM_ROOT/eclipse/tools directory.
- Start Installation Manager by running the following command:
- AIX or Linux: ./imcl.sh -c
- Windows 64-bit: imcl.exe -c
- In the console window, type 4.
- Select the fix pack to roll back and then type N to proceed.
- Enter the WAS administrator password, and then press Enter to validate. If validation succeeds, type N to proceed.
- Review the information that you entered. To revise your selections, press B. To roll back the update, press R.
- When the rollback finishes, type F to exit Installation Manager.
Results
To check the details of the uninstallation, review...
connections_root/logs
Each Connections application that you uninstalled has a log file, using the following naming format:
application_nameUninstall.log
...where application_name is the name of a Connections application. If you added new applications, check the application_nameInstall.log file as well. To check the details of updated applications, open...
application_nameUpdate.log