- Connections 4.5 CR1 Part 3: Administering Home page, Libraries, Metrics, Mobile, News repository, Profiles, Search, and Wikis
- Administer the Home page
- Administer Libraries
- Library concepts and terminology
- Run Library administrative queries and commands
- Create document types
- Permissions and access management
- Libraries and the Activity Stream
- Configure FileNet Collaboration Services for the Connections Content Manager
- Change ConnectionsAdmin to be an LDAP user
- Change the Bootstrap admin password
- Configure Library widget options and defaults
- Add Linked Libraries
- Administer Metrics
- Administer Mobile
- Administer the News repository
- Access the News configuration file
- News administrative commands
- Synchronize News data with other applications
- Manage scheduled tasks for the News repository
- Configure database clean-up for the News repository
- Reallocating and load balancing users according to mail domain
- Purge compromised reply-to IDs
- Register third-party applications
- Administer microblogs
- Activity stream search
- Administer Profiles
- Run Profiles administrative commands
- Change Profiles configuration property values
- Tivoli Directory Integrator commands
- Add supplemental content to Profiles
- Developing custom Tivoli Directory Integrator assembly lines for Profiles
- Manage profile content
- Configure profile features
- Configure widgets in Profiles
- Configure Profiles events
- Manage Profiles scheduled tasks
- Administer cache
- Manage the Profiles search operation
- Monitoring statistics and metrics for Profiles
- Configure the vCard export application for Profiles
- Make photos cachable in secure environments
- Enable the use of pronunciation files in an HTTPS environment
- Manage the Profiles event log
- Configure advanced settings in Profiles
- Administer Search
- Access the Search configuration environment
- Apply property changes for Search
- Manage the Search index
- The indexing process
- Create Search indexes
- Index settings
- Verify Search
- Configure scheduled tasks
- Run one-off tasks
- Retrieve file content
- Purge content from the index
- Reindexing content
- Delete persisted seedlist data
- Delete the index
- Listing indexing nodes
- Remove a node from the index management table
- Backup and restore
- Configure file attachment indexing settings
- Configure the number of crawling threads
- Configure the number of indexing threads
- Manage the Search application
- Administer the social analytics service
- Social analytic relationships
- Listing social analytics scheduled tasks
- Add scheduled tasks for the social analytics service
- Run one-off social analytics scheduled tasks
- Tuning social analytics indexing
- Create a background index for the social analytics service
- Configure global properties for the social analytics service
- Excluding specific users from the social analytics service
- Add an additional Search node to a cluster
- Create work managers for Search
- Update Search work manager settings
- Reloading the Search application
- Configure page persistence settings
- Avoiding unnecessary full search crawls
- Maximum seedlist page size for a service
- Add third-party search options to the search control
- Set the timeout for seedlist requests
- Excluding inactive users from search results
- Configure post-filtering for community libraries
- Viewing and collecting Search metrics
- SearchCellConfig commands
- SearchService commands
- Administer Wikis
- Change Wikis configuration property values
- Run Wikis administrative commands
- Back up Wikis data
- Restrict attachment file types in Wikis
- Set maximum sizes on media, pages, and attachments
- Set maximum sizes on libraries
- Work with Wikis policies
- Viewing Wikis library information
- Filter library lists
- Print library information
- Disable wiki page versioning
- Delete wikis from the system
- Delete draft wiki pages
- Find the location of a stored attachment
- Displaying file attachments inline
- Search Engine Optimization (SEO) for Wikis
Connections 4.5 CR1 Part 3: Administering Home page, Libraries, Metrics, Mobile, News repository, Profiles, Search, and Wikis
Administer the Home page
You can administer site-wide settings for the Home page from...
- Home page administration user interface
- wsadmin client
Home page administration information is contained in the widget catalog, which resides in the HOMEPAGE database. The catalog defines the widgets that have been deployed in the Home page. It also specifies whether the widgets are enabled or disabled, and sets out any prerequisites that the widgets have on Connections applications. The catalog is administered using the Administration view in the Home page user interface.
You can also perform a number of administrative tasks for the Home page from the wsadmin client, for example, synchronizing member IDs between the Home page and the LDAP directory.
Administer the Home page from the user interface
Users with the Home page administrator role have access to an additional Administration view on their Home page. When you are assigned this role, you can see an Administration option in the navigation sidebar, under the My Page option.
From the Administration view you can:
- Add custom widgets for use on the Home page
- Enable and disable widgets
- Enable and disable the My Page view
The Home page caches the XML descriptors of widgets in memory on startup. You can refresh the version of the XML descriptor cached in memory for a particular widget by selecting the widget from the Enabled widgets list in the Administration view and clicking Refresh cache. When you refresh the cache, the Home page fetches the latest version of the XML descriptor for the selected widget and updates the memory cache. This option is typically only used for third-party widgets at development time.
Changes made using the administration user interface occur in real time and are immediately reflected in the Home page user interface.
Administer Home page widgets
Add OpenSocial gadgets to extend Home page functionality. Add them from the Administration view and then enable them for use.
The widgets that you add to the Home page can be based on the iWidget specification, which uses technology based on JavaScript, XML, HTML, and CSS, or the OpenSocial gadget specification. The widget files are stored on an HTTP server.
Gadgets added to Connections need to be based on the OpenSocial gadget specification, which surfaces gadgets in an OpenSocial container that interacts with an OAuth 2.0 protected service.
The widget files can be bundled as EAR applications and deployed on IBM WebSphere Application Server. They can also be hosted in LAMP, .NET, and other environments.
You can add two types of third-party widget to the Home page:
- Opened by default
- Displays by default, but can be removed or hidden. Can be moved to a different location. For example, the To Do List widget that displays in the Updates view is opened by default.
- Optional
- Available for users to add to their Home page if they select it from the widget catalog. Widgets can be added, removed, hidden, or moved by Home page users.
Any widget can be used as a default or optional widget.
Third-party widgets can be added to the side column in the Updates view and to any column in the My Page view.
Configure Home page widgets
Ensure that the widgets that you add to the catalog are from trusted sources. The widget catalog allows you to administer OpenSocial gadgets for the entire Connections system including...
- Experiences and Share Dialog gadgets
- gadgets and iWidgets specific to the Home page
To add widgets via the Home page, you must be logged in as an administrator. If you do not see an Administration option under the My Page option in the navigation sidebar on the Home page, you are not configured as an administrator of the Home page application.
To add gadgets deployed externally, such as iGoogle gadgets, configure locked domains. Locked domains isolate semi-trusted gadgets and prevent them from accessing SSO tokens, or from using DOM access to the parent page of the gadget iFrame to forward sensitive data to external sites.
For more information on locked domains, refer to Enable locked domains.
Add a widget to the widget catalog
- Open the Administration view.
- Click Add another widget.
- Specify whether you are adding a widget that is based on the iWidget specification or the OpenSocial gadget specification.
To allow users to be able to integrate applications such as Facebook, Twitter, and LinkedIn into their Connections client experience, select OpenSocial gadget. Selecting OpenSocial gadget opens the Gadget settings with Activity stream or Share dialog subform, on which you need to specify the following additional options:
- For Security, select either a Trusted or a Restricted security type.
- Trusted gadgets can access a wider selection of container APIs and they can interact with Connections data. Even though a gadget is marked as "trusted", its data access is still isolated from SSO data when used in combination with locked domains.
- Select Use SSO token for trusted enterprise gadgets. Selecting this feature disables all of the security provided by locked domains for this particular gadget.
- In general, gadgets should be written to utilize OAuth unless they are speaking to a "legacy" system that has not been OAuth enabled. For 4.0, all of the Connections APIs are now OAuth enabled. Currently SSO is not a standard OpenSocial feature, but specific to the Common Rendering Engine (CRE). Some IBM Containers, such as Notes Social Edition, do not support the SSO feature at all at the time of this writing.
- Restricted gadgets are limited in terms of the OpenSocial features.
For example, they cannot pop up modal dialogs. In general, you should always scope gadget feature access as narrowly as possible. Use this setting if the gadget does not require the additional page API features provided to trusted gadgets.
You should not put any external (restricted) gadgets into Connections if you have not configured locked domains.
- For UI integration points select where the gadget should be inserted in the user interface:
- Show in Share dialog
- Show for Activity stream events
You can select both.
Your gadget will display after the Connections gadgets in the Share dialog.
- Select the Server access via proxy preference...
- Only outside the intranet prevents the gadget from accessing your intranet servers. Unless specifically configured, the system will deem a server to be part of the intranet if it is part of the WebSphere SSO domain. If this scoping is too narrow, it can be expanded via additional configuration settings.
- All servers allows the gadget to access URLs in your intranet as well as outside it.
- Custom rule defined for this gadget in the proxy-policy.dynamic file enforces settings defined in the policy file.
For more information refer to Configure per-host proxy access rules for OpenSocial gadgets.
- In the Service Mappings section, create a new mapping between an OAuth client, such as Facebook or Twitter, and its associated service, or edit or remove an existing mapping.
OAuth clients must be pre-configured via wsadmin commands. This setting just manages the association of the clients with individual gadgets.
- Click Add Mapping to create a new mapping between an OAuth service and an OAuth client. In the fields that display, select an OAuth Client name and enter the associated Service Name.
- Select an existing mapping in the map list and then click Edit Mapping to update the mapping.
- Select an existing mapping in the map list and then click Delete Mapping to remove that mapping from the list.
For more information about OAuth tokens, refer to Configure OAuth for custom gadgets.
- Enter a name for the widget in the Widget Title field.
- Enter a short description of the widget in the Description field.
- Enter the web address for the XML widget descriptor in the URL Address field.
This address must be an absolute web address.
- Enter the widget location in the Secure URL Address field.
This address must be an absolute web address.
- Enter the web address for an icon to associate with the widget in the Icon URL field.
This image is used to represent the widget when it is docked in the widget palette.
- Enter the location of an icon to display for the widget in the Icon Secure URL field.
This icon displays when the widget is docked in the content palette.
- Select Use Connections specific tags to indicate if the widget deployment descriptor uses specific Connections tags to represent the URLs of the Connections applications.
- To display the widget in the My Page view, select Display in the My Page view.
You must display the widget in the My Page view or the Updates view, or both.
- To display the widget in the Updates view, select Display in the Updates view.
You must display the widget in the My Page view or the Updates view, or both.
- To display the widget when users open the Home page for the first time, select Opened by default.
The widget will not display for existing users, but it will be available from the widget palette so that they can add it whenever they want.
- To enable multiple instances of the widget to be used, select Multiple widgets.
Each widget instance has its own properties.
For example, if you are using a widget that displays bookmarks for a specific tag, you can enable multiple instances of the widget so that you can follow different tags in each widget.
This setting is only applicable for iWidgets. Only one instance of an OpenSocial gadget may be loaded at a time.
- If there are specific Connections applications that must be included in your deployment for the widget to function correctly, select the required applications in the Prerequisites area.
When a Connections application is selected as a prerequisite but that application is not installed, the widget does not display on the Home page. Gadgets that declare dependencies this way act as though they are "disabled" for the purposes of whitelisting unless all of their dependencies are met.
- Click Save.
If you are adding widgets that are hosted on third-party servers, then you might need to update your proxy configuration.
Enable Home page widgets
You need to enable widgets before they can display on the Home page or elsewhere in Connections.
The Home page administration user interface lists the current status of widgets in the widgets catalog, and allows you to enable and disable widgets as needed. All changes made in the user interface, including enabling and disabling widgets, take immediate effect without the need for application restart.
To change a widget's status to enabled...
- Open the Administration view.
- In the Disabled widgets area, select the widget to enable and click Enable.
Disable Home page widgets
You can disable widgets from displaying on the Home page or elsewhere in Connections when you no longer want them to be available to users.
Setting a widget's status to disabled means that it no longer displays on the Home page. The Home page administration user interface lists the current status of widgets in the widget catalog, and allows the administrator to enable and disable widgets as needed. All changes made in the user interface, including enabling and disabling widgets, take immediate effect without the need for application restart.
To change a widget's status to disabled...
- Open the Administration view.
- In the Enabled widgets area, select the widget to disable and click Disable.
Edit Home page widgets
Edit a widget to change its name, description, or deployment descriptor.
You can edit a widget to update the icon associated with the widget when it is docked, specify whether the widget deployment descriptor uses particular Connections tags to represent the URLs of Connections applications, and select Connections applications as prerequisites for the widget.
For system widgets that are installed as part of Connections, you can only change the name of the widget.
To edit a widget...
- Open the Administration view.
- Select the widget to edit and click Edit.
- Make the necessary changes in the Edit widget form and then click Save.
Remove Home page widgets
If a third-party widget is no longer used or needed, you can remove it from the widget catalog. You cannot remove Connections widgets from the catalog; you can only disable them.
To remove a third-party widget from the widget catalog...
- Open the Administration page.
- Select the widget to remove, and then click Remove.
Enable and disable the My Page view
The My Page view is disabled by default. If you have migrated from a previous version of the product, the view is enabled by default.
To enable or disable the My Page view...
- Open the Administration view.
- Select Enabled or Disabled from the Widget page is currently list.
Administer the Home page using the wsadmin client
There are some administrative functions that you must perform using the wsadmin client; for example, accessing the configuration files for the Getting Started wizard and forcing the Getting Started view to be the default view for Home page users.
Home page administrative commands
HomepageCellConfig commands
- HomepageCellConfig.checkInGettingstartedConfig("working_directory", "cell_name")
Check in the Getting Started wizard configuration files.
working_directory and cell_name must match the values specified during checkout.
- HomepageCellConfig.checkOutGettingstartedConfig("working_directory","cell_name")
Check out Getting Started wizard configuration files.
- working_directory is the temporary working directory to which the configuration XML and XSD files are copied and are stored while you make changes to them. Use forward slashes to separate directories in the file path, even if you are using the Microsoft Windows operating system.
AIX, Linux, and IBM i only: The directory must grant write permissions or the command will not run successfully.
- cell_name is the name of the WAS cell hosting the Connections application. This argument is case-sensitive, so type it with care. To determine the cell name, run the following wsadmin command:
print AdminControl.getCell()
For example:
- AIX/Linux: HomepageCellConfig.checkOutGettingstartedConfig("/opt/act/temp","Cell01")
- IBM i: HomepageCellConfig.checkOutGettingstartedConfig("/act/temp","Cell01")
- Windows: HomepageCellConfig.checkOutGettingstartedConfig("c:/act/temp","Cell01")
HomepagePersonService commands
- HomepagePersonService.resetWelcomeFlagAllMembers()
Force the Getting Started view to be the default Home page view for all users.
- HomepagePersonService.resetWelcomeFlagMemberByEmail(String email)
Force the Getting Started view to be the default Home page view for the user specified by email address.
For example:
HomepagePersonService.resetWelcomeFlagMemberByEmail("jsmith@example.com")- HomepagePersonService.resetWelcomeFlagMemberByLoginName(String loginName)
Force the Getting Started view to be the default Home page view for the user specified by login name.
For example:
HomepagePersonService.resetWelcomeFlagMemberByLoginName("Joe Smith")- HomepagePersonService.resetWelcomeFlagBatchMembersByEmail(String fileName)
Force the Getting Started view to be the default Home page view for the users listed in the specified text file. Define the people by adding one person's email per line.
For example:
HomepagePersonService.resetWelcomeFlagBatchMembersByEmail("/opt/Homepage/emails.txt")- HomepagePersonService.resetWelcomeFlagBatchMembersByLoginName(String fileName)
Force the Getting Started view to be the default Home page view for the users listed in the specified text file. Define the people by adding one person's login name per line.
For example:
HomepagePersonService.resetWelcomeFlagBatchMembersByLoginName("/opt/Homepage/logins.txt")
Force the Getting Started view to be the default Home page view
To use administrative commands, use the wsadmin client.
When you first open the Home page, the Getting Started view is displayed by default. Users can select a check box in the Getting Started view to prevent it from being displayed each time they log in. However, you can use an administrative command to force it to be the default view for all users or for a subset of users. You might want to set this view as the default if, for example, you added an important enterprise-wide message to the page that you want people to read.
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- Start the Home page Jython script interpreter.
- Access the Home page configuration files.
execfile("homepageAdmin.py")If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Force the Getting Started view to be displayed for the specified users:
- HomepagePersonService.resetWelcomeFlagAllMembers()
Force the Getting Started view to be the default Home page view for all users.
- HomepagePersonService.resetWelcomeFlagMemberByEmail(String email)
Force the Getting Started view to be the default Home page view for the user specified by email address.
For example:
HomepagePersonService.resetWelcomeFlagMemberByEmail("jsmith@example.com")- HomepagePersonService.resetWelcomeFlagMemberByLoginName(String loginName)
Force the Getting Started view to be the default Home page view for the user specified by login name.
For example:
HomepagePersonService.resetWelcomeFlagMemberByLoginName("Joe Smith")- HomepagePersonService.resetWelcomeFlagBatchMembersByEmail(String fileName)
Force the Getting Started view to be the default Home page view for the users listed in the specified text file. Define the people by adding one person's email per line.
For example:
HomepagePersonService.resetWelcomeFlagBatchMembersByEmail("/opt/Homepage/emails.txt")- HomepagePersonService.resetWelcomeFlagBatchMembersByLoginName(String fileName)
Force the Getting Started view to be the default Home page view for the users listed in the specified text file. Define the people by adding one person's login name per line.
For example:
HomepagePersonService.resetWelcomeFlagBatchMembersByLoginName("/opt/Homepage/logins.txt")
Administer Libraries
A Library provides the services of an Enterprise Content Management (ECM) system to Communities. To enable the use of Libraries, make some configuration changes to Connections and to each ECM system that you plan to use.
Two types of Library widgets are available in Communities: Libraries, and Linked Libraries. By default, Libraries are enabled and Linked Libraries are disabled. When users create a Library widget, the library is created on the ECM system that is configured to work with Connections. When users create a Linked Library widget in a Community, they specify a server then select an existing library from that server.
To reuse existing content from your ECM server, use the Linked Library widget. If you need a new place to store content and collaborate within your community, use the Library widget.
The Library widget feature is installed during the installation of Connections. Although you do not need to do any configuration, keep in mind that changes to the file library-config.xml also affect the Library widget, except for useSSO and showLegacyLibraryMessage.
After you configure your systems for Linked Library, users can add Linked Library widgets to their Communities. Files and folders within a Library are stored and managed on the ECM system, independently of Connections. As a result, users who have access to the Community must also have access to the ECM system before they can use the Library. Unless you configure single sign-on, users must authenticate to the Connections system and to each ECM system separately.
The following list shows the ECM systems that are available for each type of library.
- Linked Library: IBM FileNet P8 5.2.0 or later with IBM FileNet Collaboration Services v2.0, OR IBM DB2 Content Manager 8.4.3 with FixPack 1 or later with IBM Content Manager Services for Lotus Quickr
- Library: IBM FileNet P8 5.2.0 or later with IBM FileNet Collaboration Services v2.0
Library concepts and terminology
Knowledge of FileNet concepts can help with extension and customization of your deployment. Connections Content Manager uses the FileNet P8 Platform to store and retrieve documents in libraries.
- Object
Objects are the managed elements in FileNet and in CCM.
- Document
- Folder
- Teamspace
- Recovery Bin
- Recovery Item
- Abstract Comment and subclasses
- Custom Role Base and subclasses
- Document Review
- Download Record
- Follower
- Recommendation (Like)
- Summary Data
- Tag
- Activity Stream Queue Entry
- Seedlist Entry
- Collaboration Configuration
The object types extend FileNet to provide special capabilities that are exposed in CCM. Users of CCM do not always directly interact with all types of objects. Instead, most objects are created as a side effect of liking, downloading, following, commenting on, or performing other actions with documents.
- Add-on
A FileNet add-on is a group of object types, event listeners, and other objects that provide extra capabilities to FileNet.
When you use the CCM configuration tools to create a FileNet system, the definitions for the object types that Connections uses are imported into FileNet. The following set of add-ons is added to the new object store by the object store creation tool, createObjectStore:
- 5.2.0 Base Application Extensions
- 5.2.0 Base Content Engine Extensions
- 5.2.0 Custom Role Extensions
- 5.2.0 Social Collaboration Base Extensions
- 5.2.0 Social Collaboration Document Review Extensions
- 5.2.0 Social Collaboration Notification Extensions
- 5.2.0 Social Collaboration Role Extensions
- 5.2.0 Social Collaboration Search Indexing Extensions
- 5.2.0 TeamSpace Extensions
- IBM FileNet Services for Lotus Quickr 1.1 Extensions
- IBM FileNet Services for Lotus Quickr 1.1 Supplemental Metadata
- Document
- A document is a special type of object with a version history and a binary. A document is a file in CCM. Documents can have special behaviors through draft approval or other event listeners. Documents can have metadata and access control. In CCM, documents have a feed of comments, list of tags, and a list of likes. All of these feeds are added to Documents by using the standard extension mechanisms in FileNet with custom object classes. Documents have other standard metadata, such as last modified date, which are provided by the core FileNet platform.
- Library
Libraries are sets of documents and folders that are grouped with common configuration settings. The common configuration settings include access settings and draft approval options. Libraries can be added to communities to provide document management capabilities by using CCM.
Internal to FileNet, a library is represented by the Teamspace class, a subclass of Folder in FileNet. The Teamspace class is part of a common object model that is shared with IBM Content Navigator and other IBM content applications.
Community owners can create libraries in CCM by adding a Library widget to a community from the customization shelf. When a library is created, an object of class Teamspace is automatically created in FileNet to represent the library. Community owners mark a library for deletion by removing a Library widget from a community. Deleting a library does not remove the associated Teamspace object and its content from FileNet, but removes the library from the view of CCM and marks the corresponding teamspace object for deletion in FileNet.
Instances of the Teamspace class can also be created manually in FileNet, or through other applications such as IBM Content Navigator. Content that is stored in these Teamspaces is created outside of Connections. This content is not added to the Connections search index and does not inherit permissions from communities.
Libraries that are created by adding a widget to a community inherit permissions from the community membership list and are added into the Connections search index. Community owners are administrators of the library created in that community and have broad powers to add and delete content anywhere in the library. Community members have at least read access to non-draft documents in the library. Also, their access can be customized through the edit mode on the library widget by the community owners or by settings on individual folders and documents.
Administrators who administer multiple FileNet applications should note that libraries that are created by Connections are created in the FileNet object store that is configured for Connections (by default ICObjectStore) under the folder ClbTeamspaces.
- Storage area
A storage area is a device or logical place used to store document binary files. With CCM, content stores can be database or file system based. Storage areas are not directly available to community users.
By using the configuration tools that are provided with CCM to create a FileNet system, a single file system-based storage area is created by using the ccm subdirectory of the Connections shared content directory for its storage. Use file system-based storage areas for new object stores because they scale independently of database size limitations and make better use of the file system.
- Object store
An object store is a repository for a collection of objects. Each object store has its own logical database; object stores can share an actual database but are isolated by schema. Each object store also has its own set of tables for storing information about objects. Object stores have configuration for default security permissions that are applied to all objects in that store. Object stores also have a default storage area, and have configuration for the types of objects available in that store.
The Library widget always creates new libraries in a single particular object store. Libraries (FileNet Teamspace) in existing and other object stores can be used with the Linked Library widget.
By using the configuration tools that are provided with CCM to create a FileNet system, a single object store is created (named ICObjectStore by default). Also, add-ons are installed, a single storage area is created, and default permissions are set with the createObjectStore command.
- Domain
A domain is a logical grouping of object stores, storage areas, and other resources that are served by a common set of servers. All objects in a domain share directory configuration and some global security configuration. Each FileNet domain has its own global configuration database (GCD) which contains this configuration.
By using the configuration tools that are provided with CCM to create a FileNet system, a single domain is created with the createGCD command.
Run Library administrative queries and commands
Run administrative commands from the Administration Console for Content Platform Engine (ACCE) to query for and modify objects in the FileNet Content Platform Engine.
Running administrative commands requires you to start ACCE. ACCE supports a subset of the browsers that are supported by Connections. Although some operations in ACCE work with non-supported browsers, for best results, use ACCE with a supported browser.
See ACCE requirements, and in the By Component column for Content Manager, click Administrative Console for Content Engine.
This application is available from the following sources:
- connections_root/acce, if the FileNet server is installed with CCM
For example: http://connections.example.com/acce
- filenet_root/acce, if an existing FileNet server is used by CCM
For example: http://filenet.example.com/acce
To access administration for library content...
- Log on to the administrative console for Content Platform Engine
- From the administrative console, select...
Domain > Object Stores > ICObjectStore.
ICObjectStore is the default name of the object store created when CCM is installed.
- Choose Select from table: Teamspace.
- Choose all columns in the Select Columns section and click the arrow to move the columns from Columns to Selected.
- Enter a query criterion.
For example:
WHERE ClbTeamspaceState=2
If you exclude the WHERE clause from this query, then all teamspaces (Connections libraries) are returned. Running such a query takes a long time and degrades performance.
- Click Search to run administrative queries on objects in the object store. You can then inspect or delete items in the object store.
Results
In the example query, the search returns a list of libraries that are marked for deletion. Community owners mark a library for deletion by removing a Library widget from a community. Deleting teamspace objects does not remove the corresponding widget from a community. You can delete teamspace objects only after the corresponding Library widget is removed from a community.
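If your version of ACCE provides a SQL view for the search, you can enter an equivalent statement directly instead of building it column by column. A minimal sketch, where ClbTeamspaceState comes from the example above and the selected columns (Id, FolderName, DateCreated, Creator) are standard FileNet folder properties chosen here only for illustration:

SELECT Id, FolderName, DateCreated, Creator FROM Teamspace WHERE ClbTeamspaceState = 2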
Create document types
Define the document types that users can select when they add a file to a library.
Library owners can set document type parameters...
- Default document types
- Whether changing a document type is allowed for a library
If the library owner allows the type to be changed when a user uploads a file to a library, the user chooses the document type from a list that the owner creates. If the library owner does not allow the type to be changed, uploaded documents use the default document type for that library.
See allowChangeDocTypeDefault property.
Owners also define the properties that are added to the standard metadata of the document.
Users can use document types to classify documents according to a defined set of document types. Each document type has required and optional properties. Users can enter properties when they upload a document. These properties can then be used to search for documents. When users search the Files filter in All Connections search, they can filter by the properties that are used in Document Types.
You can create new document types, or modify existing ones by accessing the Object Store and working inside of the Classes folder.
This procedure shows how to select certain properties by right-clicking them.
If your browser does not support this function or if you are having difficulty with your browser, you can also select the item in the tree view. This action opens a tab for that item where you can click the Actions button and perform the action.
- Open the ACCE interface and log in as the FileNet P8 domain administrator.
- Create properties:
- In the Data Design folder, right-click Property Templates and select New Property Template.
- Define the rest of the property template's details.
- To make the property a required field:
- When it is created, open the Property Template.
- Select Properties.
- In the list find Is Value Required, and choose True.
- Click Save.
- Create the document type:
- Click the P8 Domain tab.
- In the tree view, open the object store under Object Stores. The default name of the object store created by CCM is ICObjectStore
- Open the Data Design folder.
- Open the Classes folder.
- Right-click Document and select New Class.
- Modify properties:
- Expand Document.
- Select the class to modify.
- In the tab for the class, select the Property Definitions tab.
- Add and remove properties.
- Modify security settings:
- Expand Document.
- Select the class to modify.
- In the tab for the class, select the Default Instance Security tab.
- Modify the security settings.
- Check the Create Instance permission to allow users to create instances of a particular Document Type.
- Use Groups in Create Instance permission to add and remove users.
The "#AUTHENTICATED-USERS" entry applies to all users that can access the FileNet P8 domain. If anonymous users are enabled, the "#AUTHENTICATED-USERS" entry also applies.
If permissions are configured correctly when you create the FileNet P8 domain and Object Store, the #AUTHENTICATED-USERS entry is added to the new class automatically. This configuration allows users to use the document type without requiring a manual update of the security settings.
Caching in FileNet Collaboration Services means that changes to document types are not immediately available to CCM users. By default, you see document type changes within one hour, or after a restart of the FileNet Collaboration Services (FNCS) application in WebSphere. It is also possible that a document type may be added to the cache after it is created but before you have added properties.
For example, if you create a document type and then add properties later, you may see the document type immediately in CCM but do not see the properties for one hour.
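If you need document type changes to be picked up sooner than the cache interval, one option is to restart the FNCS application from the wsadmin client. A minimal sketch, assuming the application is named FNCS and runs on a single server named server1 on node fncsNode01 (placeholder names; adjust them for your deployment, and restart each cluster member in a clustered deployment):

# look up the ApplicationManager MBean for the server that runs FNCS
appMgr = AdminControl.queryNames("type=ApplicationManager,node=fncsNode01,process=server1,*")
AdminControl.invoke(appMgr, "stopApplication", "FNCS")    # stop the FNCS enterprise application
AdminControl.invoke(appMgr, "startApplication", "FNCS")   # start it again so that the document type cache is rebuilt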
Permissions and access management
Files and folders can be shared within a Library. The Library itself cannot be shared, but you can set access on the Library in the edit mode of the Library widget. The Library owner can decide the permissions to be given to members of a community. Initially, files and folders inherit access settings from their parent.
See Configure Library widget options and defaults
The following user types can use files and folders:
- Readers
- Contributors
- Editors
- Owners
The following list shows the user types and their permissions.
Readers
- View and download files
- View folders and metadata
- Perform social actions (like, follow, comment)
Contributors
- Upload new files
- Create subfolders
- Copy files to folders to which the Contributor has access
Editors
- Edit file content by uploading a new version
- Change item properties (Name, Description, or Document Type)
- Add or remove Tags
Owners
- Share files
- Delete and move files to Trash
- Restore files from Trash
- Move files
Note the following restrictions on users and permissions:
- Contributors
The permissions that are granted to Contributors apply only to folders, so you cannot set Contributors on a file. A user who is a contributor on a folder can read files in that folder by default.
- Owners
Owners have the highest role permissions, and cannot normally be set or modified.
- Created items
The following are added to an item when it is created:
- The item creator
- In a community library, the special group “Community Owners”
- Sharing files and folders
You can share with the following users and groups:
- Individual users
- Normal groups that exist in the directory for Connections
- Special groups
Special groups are handled by Connections and have more dynamic membership than normal groups.
- Owner and member status
Community owners are all users in a Community with Owner status. Community members are all users in a Community with Member status.
- Public
Everyone (public) is all users that have accounts for Connections, and all anonymous users if they are enabled.
- Special groups
Special groups are inclusive of each other and the users in the groups. If a special group has a higher role than a member contained in that group, the special group's role takes precedence.
- Breaking inheritance
When you break inheritance on an item, the Library adds all entries, besides the already present Owners, to the item's access list in FileNet. Connections Libraries do not set access directly in the access control list on a document in FileNet. Instead, Connections Libraries use a Role object that is added to the document. By using Roles instead of FileNet access lists, access is applied to all versions of a document at the same time. The user does not see the use of the Role objects in FileNet; instead, the user interacts with document access through the sharing tab. Resetting an item's inheritance erases the Role objects that are used for access within FileNet and resets all versions to reinherit from their parent.
- Inheritance in Libraries and Linked Libraries
Libraries that are created in Connections by adding a Library widget to a Community, inherit access from that Community. Libraries that are created by manually creating Teamspaces in FileNet or other FileNet applications, do not inherit access from a community. You can reference these Libraries created outside of Connections by using the Linked Library widget.
Libraries and Linked Libraries have different sharing behaviors.
- Library widget
The community controls the membership for community libraries, so this widget has the special groups with “Community” in the name, such as “Community Members” and “Community Owners”, which have special permissions. You can also share only with individual users and groups that are explicitly added as Members of the current community.
- Linked Library widget
This widget can connect to several types of libraries:
- Connecting to another community's library disables sharing in the Linked Library, but allows you to view an item's settings. A link is provided to return to the original Library to set access.
- Connecting to the same community's library enables sharing, and the widget acts like a Library widget.
- Connecting to a library created outside of Connections enables sharing.
- Sharing
- You can share with other users. Users must have access not only on specific items, but on the Library (Teamspace object in FileNet) to view content.
- You can share with anybody in Connections, and they can access content provided they are on the access list for the Teamspace. Because this scenario enables integrating with other applications, consult with the Library creator to ensure that you have the correct access on the Library.
- You can remove public access, or the special group Everyone (public), as they are not required.
- Sharing is only supported on FileNet.
Libraries and the Activity Stream
A library sends events from documents in the library to the Activity Stream.
For a document event to display in a user's Activity Stream, the user must have access to the library or FileNet Teamspace. The user must also have access to the document that generated the event.
Only events visible to the entire community display on the recent updates tab within a Community.
For example, requests to review drafts do not show on the community recent updates because they are only visible to draft reviewers. Instead, these events display on the user's homepage. The recent updates tab within an embedded application only shows events which have been targeted to the user.
For example, if a user is a draft reviewer, they see draft review events in the recent updates tab of the embedded application for that document. However, the user does not see other events unless they were following the community or document when the event occurred.
The following list shows each document event and who sees the event in their Activity Stream.
upload file
- Followers of the Community
- Followers of the person
- Followers of the document
publish draft
- Followers of the Community
- Followers of the person
- Followers of the document
create a draft
- Followers of the Community
- Followers of the person
- Followers of the document
create comment on a draft
- Draft owner
- Reviewers
- Followers of the Community
- Followers of the file
- Followers of the person
update comment on a draft
- Draft owner
- Reviewers
- Followers of the Community
- Followers of the file
- Followers of the person
draft approved and published
- Followers of the Community
- Followers of the person
- Followers of the document
draft cancelled
Draft owner
rejection accepted
Draft owner
review requested
Requested reviewers
review required
Required reviewers
submit a draft for review
Draft owner
individual reviewer approves
- Draft owner
- Reviewers
reviewer vote not needed
Reviewers with canceled tasks
individual reviewer rejects
- Draft owner
- Reviewers
vote changed from reject to approve
- Draft owner
- Reviewers
vote changed from approve to reject
- Draft owner
- Reviewers
comment on a file
- All followers of the Community
- Followers of the person
- Followers of the document
update comment on a file
- All followers of the Community
- Followers of the person
- Followers of the document
like a file
- All followers of the Community
- Followers of the person
- Followers of the document
Configure FileNet Collaboration Services for the Connections Content Manager
Use the IBM FileNet Collaboration Services (FNCS) Configuration Manager to re-configure FNCS to support changes to Connections Content Manager; for instance, when a host or port used to connect to CCM is changed.
Set properties and values in...
FNCS_HOME/configmanager/profiles/fncs-sitePrefs.properties
The following procedure explains how to re-configure FileNet Collaboration Services embedded with a new installation of FileNet from CCM. For an existing FileNet installation, refer to the FileNet Collaboration Services information center.
With CCM and a new FileNet deployment, these configuration options are available in...
CONNECTIONS_HOME/addons/ccm/FNCS/configmanager/profiles
Configure FNCS for CCM
- Run...
CONNECTIONS_HOME/addons/ccm/FNCS/configmanager/configwizard.sh
- Ensure that the following prerequisites have been satisfied and then click Next:
- Content Platform Engine is installed and running.
- Application servers are configured.
- Single sign-on is enabled.
- Installation and configuration sheet for FileNet Collaboration Services is available and populated with values for your environment.
- Enter the Content Platform Engine (CPE) information...
- Determine the port number for the CPE iiop:// URL by opening the WAS admin console managing the server where the CPE application is running and navigating to...
Application servers | server name | Communications | Ports
Make note of the port for BOOTSTRAP_ADDRESS. (A wsadmin sketch at the end of this topic shows an alternative way to look up this port.)
- Enter the name of the object store that the FileNet Collaboration Services application references, that is, the object store that will be used or is already being used for Connections.
- Uncheck...
Test the connection to the Content Platform Engine
- Select None for the Web client application or select the web client application to use to view your documents and supply the appropriate details, and then click Next.
- Enter the Connections HTTP URL and then click Next.
If you have already set up the HTTP server to handle web requests for Connections, then you should enter its URL here.
- Select the version and location of WebSphere Application Server where you want to deploy the FileNet Collaboration Services application, and then click Next.
- Specify WAS profile information that you will use to deploy the FileNet Collaboration Services application and supply the administrator credentials and then click Next.
If you are redeploying the FileNet Collaboration Services application, then choose the same profile that the application is already deployed on.
If your FileNet is deployed by Connections, choose the profile of Deployment Manager.
- Select whether to deploy the FileNet Collaboration Services application on a single application server or in a cluster.
If you are redeploying the application, choose the same option that the application has already been deployed on, and then click Next.
The cluster or single server needs to have been created beforehand.
- Select the cluster name (or server name) where you want to deploy IBM FileNet Collaboration Services application.
Choose the same cluster name (or server name) where the application has already been deployed to if you are redeploying the application. Click Next.
- If you had previously changed the name of the IBM FileNet Collaboration Services application, then enter that same name now. If you have FileNet installed by Connections, the default application name is FNCS (if you update it to fix pack 1, make sure the application's name appears as FNCS with all uppercase letters). Click Next.
- Review the information you have entered to ensure its accuracy and then click Next.
- When the Deployed successfully summary panel displays, click Done to complete the configuration of the FileNet Collaboration Services.
- Open the WAS Dmgr console
- Enable use of authentication data on unprotected URLs:
- Navigate to Security > Global Security > Web security > General Settings .
- Select both...
- Authenticate only when the URI is protected
- Use available authentication data when an unprotected URI is accessed
- Modify security role mapping for the FileNet Collaboration Services application...
Applications | WebSphere Enterprise Applications | FNCS | Click Security role to user/group mapping | Authenticated | Map Special Subjects and Everyone
- Install the authentication filter code:
- Still in WebSphere Administration console navigate to WebSphere Enterprise Applications.
- Select the FileNet Collaboration Services option.
- Click Update.
- For Application update options, select the Replace, add, or delete multiple files option.
- Select local file system if running the browser on the Deployment Manager node and then locate the auth_filter_patch.zip file in...
CONNECTIONS_HOME/ccm/ccm/ccm/auth_filter_patch/auth_filter_patch.zip
If the browser is not running on the Dmgr node, then select remote file system and choose the dmgr file system, locating the auth_filter_patch.zip file in the directory previously stated.
- Click Next and OK to update the application.
- Restart the FileNet Collaboration Services (FNCS) application
Results
A summary panel also supplies suggestions of how to confirm that the service is up and running.
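As an alternative to browsing the console for the BOOTSTRAP_ADDRESS port used in the CPE step of this procedure, you can look it up from the wsadmin client. A minimal sketch, assuming the CPE application runs on server1 on node cpeNode01 (placeholder names):

# find the ServerEntry for the server that runs the Content Platform Engine
for se in AdminConfig.list("ServerEntry", AdminConfig.getid("/Node:cpeNode01/")).splitlines():
    if AdminConfig.showAttribute(se, "serverName") == "server1":
        # walk the named end points and print the BOOTSTRAP_ADDRESS port
        for nep in AdminConfig.list("NamedEndPoint", se).splitlines():
            if AdminConfig.showAttribute(nep, "endPointName") == "BOOTSTRAP_ADDRESS":
                endPoint = AdminConfig.showAttribute(nep, "endPoint")
                print AdminConfig.showAttribute(endPoint, "port")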
Change ConnectionsAdmin to be an LDAP user
For an existing installation of Connections with IBM FileNet, the connectionsAdmin user defined in your FileNet system must be available in the directory configuration of both FileNet and Connections. The easiest way to accomplish this may be to change connectionsAdmin to an LDAP user in a common directory.
There are two ways to resolve this issue. You can either:
- Find a user in the LDAP directory that is accessible by both FileNet and Connections and change your connectionsAdmin to be that user on both FileNet and Connections. You then need to update the DSX-Admin settings in Communities and update the SIB Bus information on Connections.
- Add an existing LDAP user as your connectionsAdmin to ensure that both Connections and FileNet use that LDAP user. This method avoids having to change all the credentials and SIB Bus information. To add an existing user, perform the following steps.
- In the WAS admin console on the FileNet server select Deployment Security > Global Security > Authentication: Java Authentication and Authorization Service > J2C authentication data.
- Change the connectionsAdmin user ID to the LDAP user name you want to use (instead of the WIM account name). Supply the credentials and then click Apply and OK.
- Restart FileNet server (server1).
- In WAS admin console on the Connections deployment select Applications > WebSphere enterprise applications > Communities > Detail Properties: Security role to user/group mapping.
- Select the dsx-admin box and then click Map Users
- Enter the LDAP user's name in the Search string.
- Select this name from the Available box then click the arrow to add it to the Selected box.
- Click OK twice to save the master configuration and then restart Communities.
- Verify that the SSO connection still works.
- Open a browser session to FileNet, authenticate, and then in the same browser window change the URL to Communities. You should be logged in as the same user.
- Open a browser session to: <fileNetHostName>:<fileNetPort>/dm.
The default HTTP port in FileNet is commonly set to 80 or 9080.
- Log in with the connectionsAdmin user you previously added.
- Change the URL to: http://<connectionsHostName>/communities
When the page loads, you should be logged in as the same user that you just logged in as on FileNet. You should not be prompted for credentials.
Change the Bootstrap admin password
This procedure describes how to change the password for the Content Platform Engine system user (also known as the bootstrap administrator, or cpe_bootstrap_admin). The credentials for this account are entered during CPE configuration. Configuration Manager places this user name and its password into the CPE bootstrap file. When CPE starts up, it uses the account and password to authenticate against the user registry defined in the application server.
Here are the characteristics of the cpe_bootstrap_admin account:
- It must reside in CPE's configured LDAP directory server.
- Configuration Manager's Configure Bootstrap Properties task places it in the CPE's bootstrap file. In this location cpe_bootstrap_admin is called the CPE system user.
- Many installations will also enter this account into Configuration Manager's Configure LDAP task as the Directory Service User account (sometimes known as the bind user, or cpe_service_user), the account that CPE's application server uses to bind to the directory server. The Configure LDAP task places the account into the application server's authentication configuration location.
Changing cpe_bootstrap_admin's password in the directory server means that you must at the same time change it in these locations. If you do not, the bootstrap file will not be able to authenticate to the LDAP and CPE will not be able to start. Follow this procedure carefully to avoid this scenario.
This procedure requires access to the CPE location, to the application server console, and to the directory server. Because of the relative complexity of this procedure, unless there is an overriding reason to change the password of this important account, you can consider exempting the CPE system user account from your password change policy if this still meets your security requirements.
For the existing FileNet deployment, follow the FileNet Info Center topic: Change Bootstrap admin password
To change the CPE system user password perform the following steps:
- Back up the Engine-ws.ear file. You can then revert to the last known good EAR file in case changing the password fails. It is located in...
CONNECTIONS_HOME/addons/ccm/ContentEngine/tools/configure/profiles/CCM/ear
For example:
- Windows: C:\(x86)\IBM\Connections\addons\ccm\ContentEngine\tools\configure\profiles\CCM\ear\Engine-ws.ear
- Linux: /opt/IBM/Connections/addons/ccm/ContentEngine/tools/configure/profiles/CCM/ear/Engine-ws.ear
- Start the Configuration Manager:
CONNECTIONS_HOME/addons/ccm/ContentEngine/tools/configure/configmgr.sh
For example:
- Windows: C:\(x86)\IBM\Connections\addons\ccm\ContentEngine\tools\configure\configmgr.exe
- Linux: /opt/IBM/Connections/addons/ccm/ContentEngine/tools/configure/configmgr.sh
- Load the Configuration Manager profile that describes your installation: CCM.
- Click Configure Bootstrap Properties. Do not change anything yet. The Bootstrap user password is the field you will change later in this procedure.
- Leave this window open while performing the following steps.
- Log into your directory server.
- Navigate to the location containing the account for the CPE system user.
- Change its password.
- Save and apply.
- Return to the window containing Configuration Manager:
- In the Configure Bootstrap Properties task, set the Bootstrap Operation property to Modify Existing.
- Confirm that the Bootstrapped EAR file property contains the path to the bootstrap file you need to edit.
- Change the Bootstrap user password. Use Configuration Manager's features to save and run the task.
- Run Configuration Manager's Deploy Application.
- Restart the application server.
Configure Library widget options and defaults
Configure the behavior of community Library widgets by checking out library-config.xml and editing it directly.
Check out the library-config.xml file, edit configuration properties, and then check the file back in.
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- Use the wsadmin client to access and check out the configuration file:
- Access the Connections configuration files: execfile("connectionsConfig.py")
If you are prompted to specify a service to connect to, type 1 to select the first node in the list. Most commands can run on any node.
If the command writes or reads information to or from a file by using a local file path, select the node where the file is stored.
This information is not used by the wsadmin client when you are making configuration changes.
- Check out the library-config.xml configuration file:
LCConfigService.checkOutLibraryConfig("working_directory","cell_name")
Where:
- working_directory is the temporary working directory to which the configuration XML and XSD files are copied and are stored while you change them. Use forward slashes to separate directories in the file path, even if you are using the Microsoft Windows operating system. For AIX and Linux, the directory must grant write permissions or the command does not run successfully.
- cell_name is the name of the WAS cell that hosts the Connections application. This argument is case-sensitive, so type it with care. To get the cell name, use the following wsadmin command:
print AdminControl.getCell()
For example:
- AIX or Linux: LCConfigService.checkOutLibraryConfig("/opt/temp","Cell01")
- Windows: LCConfigService.checkOutLibraryConfig("c:/temp","Cell01")
- Open library-config.xml in an editor.
- Edit any of the following configuration properties:
Table 1. library-config.xml Configuration Properties. Each entry lists the property, its description, its possible values, its default value, and the widgets that support it.
- displayPersonCard
- Specifies whether to display the person card for Enterprise Content Manager users. If Connections and Enterprise Content Manager users do not have matching email addresses, set this property to false. If exposeEmail is turned off, the person card is no longer automatically disabled. Possible values: true/false. Default value: true. Supported by: Library and Linked Library.
- roundTripEdit
- Specifies whether to allow round-trip editing through the connectors. Disable this feature in environments where the connectors are not installed on desktop clients. Round-trip editing is not available on any Library that has draft approvals enabled, regardless of the roundTripEdit setting. Possible values: true/false. Default value: true. Supported by: Library and Linked Library.
- downloadThruProxy
- Specifies whether to download files through the common proxy or directly from the Enterprise Content Manager server. Downloading through the proxy means more traffic through the proxy, but does not require users to reauthenticate to download in environments where SSO is not enabled. Applies only if FileNet Collaboration Services and Connections use different host names; this scenario is uncommon, especially if you are using the new installation option for FileNet in CCM. Possible values: true/false. Default value: true. Supported by: Library and Linked Library.
- openInActionAsLink
- Specifies whether to show the Open in repository link on the document summary page, or as an action button in the toolbar. Adding this action to the toolbar makes it more easily accessible. Add it to the toolbar if you think users frequently use the Enterprise Content Manager interface to perform advanced tasks. Possible values: true/false. Default value: false. Supported by: Library and Linked Library.
- allowCheckForConnectors
- Specifies whether to check for the existence of the connectors on client workstations. This option takes effect only when the roundTripEdit option is enabled (set to true). Disabling this check makes round-trip editing actions available to users whether or not the connectors are installed on their workstations, which might cause unexpected behavior on workstations where the connectors are not installed; disable the check only in environments where all client workstations have the connectors installed. allowCheckForConnectors uses HTTP only to check for the connectors. If you use HTTPS and Microsoft Internet Explorer, the browser prompts you:
Do you want to view only the webpage content that was delivered securely?
To remove these warnings when round-trip editing is required, preinstall the connectors on user workstations and set allowCheckForConnectors to false. Possible values: true/false. Default value: true. Supported by: Library and Linked Library.
- displayViews
- Specifies whether to display the Views menu on the main document list. This menu shows all the Enterprise Content Manager views that can be shown; these views might not be scoped to the library that the user is connected to. For a list of the views available from the Views menu, see Library views. Possible values: true/false. Default value: false. Supported by: Library and Linked Library.
- uploadTimeout
- Specifies the number of seconds to wait before a timeout ends a file upload attempt. Think carefully before editing this property; it is used by both the Linked Library and Media Gallery widgets. Possible values: any integer. Default value: 1200. Supported by: Library, Linked Library, and Media Gallery.
- showLegacyLibraryMessage
- Specifies whether to display a warning message for non-teamspace FileNet Libraries. Applicable to Linked Library only. Possible values: true/false. Default value: true. Supported by: Linked Library.
- useSSO
- Specifies whether all Linked Libraries have SSO configured with FileNet. This forces the Library widget to always use the Connections login page to authenticate the user. Applicable to Linked Library only. Possible values: true/false. Default value: true. Supported by: Linked Library.
- allowChangeDocTypeDefault
- Specifies the default value for whether users can select a non-default document type when working with files in a Library or Linked Library. Possible values: true/false. Default value: false. Supported by: Library and Linked Library.
If round-trip editing through the connectors is not required in your environment, set roundTripEdit and allowCheckForConnectors to false.
- To check in the changed library-config.xml file:
LCConfigService.checkInLibraryConfig("working_directory", "cell_name")
- After you make updates, deploy the changes:
synchAllNodes()
- To exit the wsadmin client, type exit at the prompt.
- Stop and restart all of the Connections application servers.
Check the configuration files back in after you change them. You must check the files in during the same wsadmin session in which they were checked out for the changes to take effect.
See Applying property changes for details.
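Taken together, a complete edit in one wsadmin session looks roughly like the following sketch; the working directory /opt/temp and cell name Cell01 are placeholders from the examples above.
# Sketch only: /opt/temp and Cell01 are placeholder values.
execfile("connectionsConfig.py")
# Copy library-config.xml and its XSD files to the working directory.
LCConfigService.checkOutLibraryConfig("/opt/temp", "Cell01")
# ... edit /opt/temp/library-config.xml in a text editor, then check it back in
# from the same wsadmin session ...
LCConfigService.checkInLibraryConfig("/opt/temp", "Cell01")
# Deploy the change to all nodes, then exit and restart the Connections servers.
synchAllNodes()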
Add Linked Libraries
Enable and configure linked libraries so that Community owners can add them.
Before enabling linked libraries, make sure that you have one of the following Enterprise Content Management systems installed:
- IBM FileNet P8 5.2.0 or later, with IBM FileNet Collaboration Services v2.0
- IBM DB2 Content Manager 8.4.3 with FixPack 1 or later, with IBM Content Manager Services for Lotus Quickr
Enable linked libraries
Enable the Linked Library widget in Connections so that community owners can add linked libraries to their communities.
Linked libraries are installed with Connections, but you must enable them.
- Edit widgets-config.xml to include the Linked Library widget definition:
- Check out widgets-config.xml.
- Uncomment the following XML widget definition following the resource type="community":
<widgetDef defId="CustomLibrary" bundleRefId="lc_clib" url="{webresourcesSvcRef}/web/quickr.lw/widgetDefs/LibraryWidget_QCS_Connections.xml?etag={version}" description="CustomLibrary.description" modes="view edit fullpage" primaryWidget="false" helpLink="{helpSvcRef}/topic/com.ibm.lotus.connections.communities.help/c_com_library_frame.html" iconUrl="{contextRoot}/nav/common/images/iconFiles16.png" uniqueInstance="true" displayLoginRequired="true"> </widgetDef>- Enable communication between Connections and the ECM servers.
Configure IBM DB2 Content Manager
For the Linked Library widget to function in an IBM Content Manager deployment, update the value of sortByNameIdealAttribute in cmpathservice.properties.
This task is required only when you are using Connections with IBM DB2 Content Manager 8.4.3. If IBM DB2 Content Manager 8.4.3 is already installed, you can integrate its content into Connections communities with linked libraries.
Ensure that you have the following software installed:
- IBM DB2 Content Manager 8.4.3, FixPack 1 or later.
- IBM Content Manager Services for Lotus Quickr.
By default, the sortByNameIdealAttribute property in cmpathservice.properties is set to ICM$NAME. To enable support for the Linked Library widget when you are using IBM Content Manager, update the value of the property to clbContent.clbLabel.
Set allowed ECM servers
Set a list of specific Enterprise Content Management (ECM) servers that Linked Library widgets can connect with.
Without a set list of servers, users must type the URL of the ECM server to connect to when they create a Linked Library widget. After setting a list of allowed servers, users can select from a dropdown list of servers.
Create a list of allowed ECM servers
- Check out widgets-config.xml.
- In the CustomLibrary widget definition add the following tags to the <itemSet>:
<item name="allowCustomServers" value="false"/>
<item name="allowedHosts" value=" http://ecm.server.com:9080, http://ecm.server2.com:9080"/>Where allowCustomServers must be set to false and allowedHosts is a comma separated list of Filenet or Content Manager ECM servers.
- Check in widgets-config.xml.
- Restart the Communities application.
- Add each ECM server to the library widget proxy.
Configure single sign-on with ECM servers
Configure single sign-on between Connections and Enterprise Content Management (ECM) servers. With single sign-on enabled, users do not need to log in to Connections and the ECM server separately; logging in to Connections also gives them access to the ECM server.
For an existing FileNet system, ensure that single sign-on (SSO) is configured between your FileNet and Connections servers. WebSphere LTPA SSO is recommended.
See Configure SSO between IBM FileNet and Connections for more information.
Prerequisites for Single Sign-On (SSO)
There are several prerequisites for configuring single sign-on (SSO) between the WebSphere Application Server instances that underlie the Connections and ECM servers.
Before configuring SSO between the Connections and ECM servers, ensure that:
- The two instances of WebSphere Application Server use the same LDAP directory for authentication.
- The two instances of WebSphere Application Server specify the same domain name (for example, .example.com) for all the single sign-on hosts.
To verify the domain name, follow these steps to navigate to the single sign-on settings pages for the Connections and ECM WebSphere Application Server instances:
On each server...
- Open WAS administration console.
- Click Security > Global security.
- Click Web and SIP security.
- Click Single sign-on (SSO).
- Check the value in the Domain name field.
- Application security is enabled.
Application security, including authentication and role-based authorization, is not enforced unless global security is active. Global security is enabled by default during the installation of Connections, so application security is enabled on the Connections server by default. In addition, the two instances of WebSphere Application Server using the same LDAP server for authentication ensures that application security is enabled on the Connections server. You need to perform the following steps only if application security has been disabled for some reason.
On the Connections server and on the ECM server, from the WAS administration console...
Security | Global security | Enable application security
Configure SSO between IBM FileNet and Connections
Configure single sign-on (SSO) between IBM FileNet Collaboration Services and Connections.
Single sign-on between IBM FileNet Collaboration Services and Connections is mandatory for use of the Library widget.
Single sign-on between IBM FileNet Collaboration Services and Connections is not mandatory for use of the Linked Library widget, but is supported and always preferable for a better user experience.
To configure SSO between WebSphere Application Server instances by using a Lightweight Third-Party Authentication (LTPA) mechanism, you generate and export an LTPA key from one server and then import it into the other. In the following instructions, LTPA keys are generated and exported from the Connections server and imported into the IBM FileNet Collaboration Services server, but the direction does not matter.
To generate and export an LTPA key on the Connections server, perform the following steps:
- Open WAS administration console.
- Navigate to Security > Global security > LTPA.
- Type and confirm a password and make a note of it.
- Type a fully qualified key file name.
- Click Export keys.
The LTPA keys are exported to the location typed in Step 4.
- Copy the LTPA key file you have just generated to the IBM FileNet Collaboration Services server and note the location.
To import an LTPA key on the IBM FileNet Collaboration Services server, perform the following steps.
- Open WAS administration console.
- Navigate to Security > Global security > LTPA.
- Enter the password that you specified when you exported the LTPA keys from the Connections server.
- Enter the full path and filename of the key file that you exported.
- Click Import keys.
- Click OK and Save.
Configure SSO between IBM Content Manager and Connections
Configure single sign-on (SSO) between IBM FileNet Collaboration Services and Connections.
Single sign-on between IBM FileNet Collaboration Services and Connections is mandatory for use of the Library widget.
Single sign-on between IBM FileNet Collaboration Services and Connections is not mandatory for use of the Linked Library widget, but is supported and always preferable for a better user experience.
To configure SSO between IBM Content Manager and Connections, perform the following steps:
- Configure the IBM Content Manager Enterprise Edition library server for SSO.
- Configure IBM FileNet Collaboration Services for SSO.
- Configure Connections for SSO.
Configure the IBM Content Manager server for SSO
Configure the IBM Content Manager Enterprise Edition server for single sign-on.
These steps assume you have installed Connections, IBM Content Manager Enterprise Edition, IBM FileNet Collaboration Services, and an LDAP server. They also assume the LDAP server is shared by IBM Content Manager, IBM FileNet Collaboration Services, and Connections.
- Disable the required password setting:
- Start the IBM Content Manager system administration client.
- Click Tools > Manage Database Connection ID > Change Database Shared Connection ID from the menu.
- Clear the Password is required for all users logging on to CM check box.
- Click OK
- Allow trusted logons:
- In the navigation pane, click Library server parameters > Configurations.
- Right-click Library Server Configuration and select Properties.
- Set Max user action to Allow logon without warning and select the Allow trusted logon check box.
- Click OK.
- Set up LDAP user import information:
- Log in to the IBM Content Manager system administration client.
- Click Tools > LDAP Configuration.
- Go to the LDAP tab and select the Enable LDAP User import and authentication check box.
- To configure the LDAP properties, click the Server panel and enter your LDAP server information.
Filter the existing LDAP users who are allowed to log in to IBM FileNet Collaboration Services.
For example, in WebSphere Administration Console...
Secure administration, applications, and infrastructure | Standalone LDAP registry | Advanced LDAP
...if you are using sAMAccountName in your organization as the User ID value, the User filter setting should be set to...
(&(sAMAccountName=%v)(objectcategory=user))
...and User ID map should be...
user:sAMAccountName
- Create privilege set for SSO users:
- Log in to the IBM Content Manager system administration client.
- Expand Authorization and click Privilege Sets.
- Select AllPrivs privilege set.
This privilege set is used as an example. Modify the privilege set information as required.
Do not clear the SystemSuperDomainAdmin check box.
- Right-click and select Copy > Advanced.
Enter a name for this privilege set, for example: SSOPriv
- In the new privilege set, select AllowTrustedLogon and clear the SystemSuperDomainAdmin check box.
This privilege is not required.
- Click OK.
- Add LDAP users:
- Log in to the IBM Content Manager system administration client.
- Expand Authentication.
- Right-click and select Users > New.
- Set Password expiration to Never expires.
- Click LDAP and provide the user name you want to import.
- After the names are returned, highlight the name and click OK.
- Set Maximum privilege set to SSOPriv, the privilege set that you created in Step 4.
- In the Set Default panel, enter the Default item access control list and click OK to create the new SSO user.
- Restart the IBM Content Manager server.
- Install the LDAP client to enable LDAP users to log in:
If the LDAP server is an IBM Tivoli Directory Server (ITDS), install the ITDS client on the same machine as IBM Content Manager.
- During the LDAP client installation, select the Java™ client and C client only.
- Add the following file path to the PATH environment variable: C:\IBM\LDAP\V6.1\bin;C:\IBM\LDAP\V6.1\lib;
- Copy the DLL file from the C:\IBM\db2cmv8\ldap directory to the C:\IBM\db2cmv8\cmgmt\ls\icmnlsdb directory.
- Restart the LDAP server.
- Verify the LDAP setup:
- Install the IBM Content Manager Enterprise Edition Client for Windows.
- Verify whether the LDAP user can log in to IBM Content Manager server using the client.
Configure IBM FileNet Collaboration Services for SSO
Configure IBM FileNet Collaboration Services for single sign-on.
- Log on to the WebSphere Administration client on the server where you deployed IBM FileNet Collaboration Services.
- Click Applications > Enterprise Applications > clb.cm.websvc > Security role to user/group mapping.
- Make sure that the Everyone check box is clear. Select the All authenticated check box, and then click OK.
- Click Security > Secure administration, applications, and infrastructure.
- Check Enable administrative security.
- Check Enable application security.
- Clear the Java 2 security check box.
- Set the Available realm definitions to Standalone LDAP registry.
- Click Configure and enter the same LDAP information that you entered when you created LDAP user information in the IBM Content Manager system administration client.
See Step 3 in the topic Configuring the IBM Content Manager server for SSO.
- Check Reuse connection.
- Check Ignore case for authorization.
- Clear the SSL enabled check box.
- Click Test connection and make sure you can successfully connect to the LDAP server.
- Click Apply and Save to save the changes to the master configuration.
- Restart the WebSphere Application server for the changes to take effect.
Configure Connections for SSO
Configure Connections for single sign-on.
To configure Connections for SSO, see the topic Configuring single sign-on in this documentation.
For example, if the IBM Content Manager server is using a standalone LDAP, follow the steps in Enabling single sign-on for standalone LDAP before performing the steps in this topic.
To complete the SSO configuration between Connections and IBM FileNet Collaboration Services, you must synchronize the LTPA tokens between the two servers.
To synchronize the LTPA tokens between the Connections and IBM Content Manager servers, perform the following steps:
- On the Connections server, open the WAS admin console.
- Navigate to Security > Global security > LTPA.
- Type and confirm a password and make a note of it.
- Type the full path to a file on the application server where you want to store the keys, such as /home/wasadmin/ltpa.keys.
- Click Export keys. WebSphere exports the LTPA keys into the location you specified.
- Click Apply and save the changes.
- Copy the LTPA key file you just generated to the IBM FileNet Collaboration Services server and note the location.
- Open the WAS admin console on the IBM FileNet Collaboration Services server, and repeat Step 2.
- Navigate to the Single sign-on section and enter the password you entered in Step 3.
- Type the full path to the LTPA key file from Step 7 on the IBM FileNet Collaboration Services server.
- Click Import Keys and Save.
- Restart the Connections and IBM FileNet Collaboration Services WebSphere Application Servers for the changes to take effect.
Enable search in linked libraries
You can enable content-based text search in linked libraries that are hosted on FileNet P8. Search is disabled by default.
To edit configuration files, use the wsadmin client.
When searching content in libraries created outside Connections or using the Linked Library widget, IBM FileNet Collaboration Services searches the IBM FileNet P8 server directly. For these libraries, the IBM FileNet P8 administrator must set up the content-based text search server and enable content-based retrieval (CBR) for the document types. If text search is not enabled on the IBM FileNet P8 server, community-scoped searching is available only for libraries created within Connections using the Library widget.
For more information, see IBM FileNet Information Center.
For the Library widget, the Connections Search index is used directly and FileNet search does not need to be enabled.
The widgets-config.xml file contains information about widget definitions, widget attributes, widget location, default widget templates, and page definitions. To enable search for the Linked Library widget, you need to add "search" to the list of modes for the widget definition in widgets-config.xml.
To enable search for linked libraries...
Start the wsadmin client from the following directory of the deployment manager:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter will not execute correctly.
- Start the Communities Jython script interpreter
execfile("communitiesAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Check out widgets-config.xml.
CommunitiesConfigService.checkOutWidgetsConfig("working_directory", "cell_name")
where:
- working_directory is the temporary working directory to which the configuration XML and XSD files are copied.
The files are kept in this working directory while you make changes to them. When specifying the path to a working directory or temporary directory where the checked out files are to be placed, use a forward slash as the path separator, even for Microsoft Windows systems.
- cell_name is the name of the WAS cell hosting the Communities application. This argument is required. It is also case-sensitive, so type it with care.
For example:
CommunitiesConfigService.checkOutWidgetsConfig("C:/tmp2","MyCell01")- Use a text editor, open the widgets_config.xml file from the temporary directory to which you checked it out.
- Look for the CustomLibrary widgetDef element and add "search" to the list of modes specified for the widget:
<widgets> <definitions> ... <widgetDef defId="CustomLibrary" bundleRefId="lc_clib" url="{webresourcesSvcRef}/web/quickr.lw/widgetDefs/LibraryWidget_QCS_Connections.xml?etag={version}" description="CustomLibrary_description" modes="view search edit fullpage" primaryWidget="false" helpLink="{helpSvcRef}/topic/com.ibm.lotus.connections.communities.help/c_com_library_frame.html" iconUrl="{contextRoot}/nav/common/images/LinkedFiles.png" uniqueInstance="true"> ... </definitions>
- Save your changes and then close widgets-config.xml.
- Apply your changes by doing the following:
- Check in the updated widgets-config.xml file
CommunitiesConfigService.checkInWidgetsConfig("working_directory", "cell_name")
For example:
CommunitiesConfigService.checkInWidgetsConfig("C:/tmp2","MyCell01")- To exit the wsadmin client, type exit at the prompt.
- Restart the Communities application using WAS admin console.
Administer Metrics
Administer metrics in Connections by modifying configuration settings, backing up data, and running commands to perform maintenance tasks.
About this task
Connections provides vital metrics about your deployment. Metrics are presented simply, using charts that provide clear business value to users, executives, and administrators. Connections metrics are supported by IBM Cognos Business Intelligence, which you install as part of the Connections deployment. Connections application events, such as reading or creating objects, generate metrics that are stored in a database, which synchronizes with a Cognos PowerCube to store data so it can be accessed along different dimensions. When a Connections user runs a Metrics report, the Cognos server sends the data to the Connections metrics interface.
Authorized users can work with the charts interactively, filtering data by parameters such as geography and time period, and drilling down into data points for more detail. The Connections administrator can customize reports by using a suite of Cognos tools. Refer to the Cognos documentation for detailed information on customizing reports and on administering the PowerCube and the Cognos server.
What are metrics?
The new Metrics application in Connections provides a comprehensive set of quantitative (measures) and qualitative (descriptions) metrics that help measure the business value of Connections in your organization. Connections uses IBM Cognos Business Intelligence to collect and maintain metrics, and to generate reports that users can view directly in Connections. Metrics reports can be presented as tables or charts that you refine by selecting options such as the time period to report on, a particular application to focus on, and how to group users in the results.
Statistics related to the Search application are not collected or reported by the Metrics application.
For information on search statistics, see Viewing and collecting Search metrics.
Connections provides metrics on two levels: global and community. A global metrics report is generated every time the user opens it; however, the report data is updated on a daily basis. Refreshes are typically scheduled during off-peak hours to avoid degrading system performance for users.
Community metrics report on a particular community; for example, the number of people who logged into the Sales community last week. Community metrics are generated on demand and are then cached until a new report is requested. Requests are placed into a queue and are processed in order. After submitting a request to update metrics, the community owner can work in other areas and return when the report is ready for viewing.
The Connections administrator has implicit access to all metrics reports. Community owners can view metrics for their own community, but cannot view global metrics. Additional users who require the information can be authorized to view global metrics; for example, a high-level manager might require access to metrics for business purposes even if he or she does not manage a community.
The Metrics application comprises the following components.
- Event tracker
- The event tracker records user actions in Connections; for example, an event is created every time a Connections user reads a blog entry, creates a "To do" item in an activity, updates a wiki page, or follows a community. These events are then used to calculate various metrics based on their timing and frequency.
- Database
- Event data is stored in database tables and includes information about each event, such as the user who performed an action, the data that was acted upon, the date the event occurred, and the Connections application that was affected. Connections requires two databases for managing metrics data: the Cognos Content Store database contains data needed for managing the Cognos Business Intelligence components that provide the metrics collection and reporting tools, and the Metrics database contains the raw event-related data.
- PowerCube
- Cognos Business Intelligence uses a cube to store metrics information for analysis. A cube is a data structure in which data is stored across multiple categories, or dimensions. The Metrics PowerCube is generated by the Cognos Transformer application. The PowerCube uses OLAP (online analytical processing) techniques for quickly analyzing a measure across multiple dimensions; for example to count user logins across multiple Connections applications during a specified period of time. Pre-aggregating the data for each measure across the available dimensions enables the PowerCube to respond quickly to queries so that reports can be produced in a timely manner.
The PowerCube is based on the metrics model, which is a business-oriented representation of the information from the metrics cube datasource that includes the dimensional metadata and measures. The metrics cube datasource is a Cognos package that defines the queries that retrieve data from the PowerCube.
- Reports
- A report is a summary of the data provided by the PowerCube in response to a particular query. Reports can be scheduled to run automatically at specified times, and are presented to the user as tables, charts, or other types of graphs. When you deploy Connections, a predefined set of reports is immediately available for use. You can additionally create custom reports to suit your organization’s needs.
Reports are designed with three standard filters that categorize data in the displays:
- The applications filter groups data for a specific application scope, either across Connections or within a particular community; for example to view only blogs owned by the Customer Support community.
- The time range filter allows a report to focus on a particular period of time by including only data for one of the following predefined intervals:
- Last 7 days
- Last 4 weeks
- Last quarter
- Last 12 months
- All years
- The user attribute filter groups data about people based on predefined user profile attributes (geography, department, and role).
For information on customizing the user attribute filter for your organization’s reports, see Map Metrics report dimensions to user profile attributes.
- User interface
- Connections provides a user interface for displaying reports. When viewing reports, authorized users can apply filters to collect and categorize data, drill down on data points to see more detail, and save a copy of the report.
The Metrics event-tracking component captures events from Connections, such as viewing a blog entry, replying to a forum topic, uploading a file, or updating a wiki page. Events are stored in the raw Metrics database. A scheduled job runs the Cognos Transformer application to retrieve data from the raw Metrics database and generate the PowerCube. When a user accesses the Metrics user interface in Connections, the Cognos Business Intelligence application generates metrics reports by querying the PowerCube.
Manage the Metrics environment
You can manage the Metrics environment by modifying configuration settings and running administrative commands that determine how Metrics data is collected and reported.
Manage the metrics environment using wsadmin to specify settings in a configuration file or to run administrative commands.
The configuration file contains settings that control when and how Metrics operations are performed. When you make configuration changes, you use scripts to check out the Metrics configuration file, modify settings, and then check the file back in. A server restart is required for your changes to take effect.
Administrative commands perform tasks that manipulate Metrics content, change member access levels and synchronize member IDs, and manage scheduled tasks in Metrics.
Set an initial count for Metrics
After you migrate to Connections, set the initial count of metrics data for the Metrics application.
The task offers two methods of setting the initial count: the first method retrieves global, community, and user metrics in a single process. If your deployment has a large number of communities and users, this method might take a long time to complete. The second method retrieves the same metrics in separate steps. If you use the second method, create files with the IDs of your communities and users:
- Run the following SQL command on the Communities database:
SELECT COMMUNITY_UUID FROM SNCOMM.COMMUNITY
Each line in the output contains one community UUID, in the following format:
00000000-0000-0000-0000-000000000000
11111111-0000-0000-0000-000000000000
- Save the output to a text file.
- Run the following SQL command on the Profiles database:
SELECT PROF_KEY FROM EMPINS.EMPLOYEE
Each line in the output contains one Profiles ID, in the following format:
00000000-0000-0000-0000-000000000000
11111111-0000-0000-0000-000000000000
- Save the output to a text file.
Before you start using the Metrics application, you must establish a starting point for metrics data. This starting point is used by the application as a basis for comparing subsequent changes to the data. You can capture metrics at the global, community, and user levels by running the relevant administrative commands.
You can capture the following metrics data:
Table 2. Global metrics
- Activities: Number of activities
- Blogs: Number of blogs
- Blogs: Number of entries
- Blogs: Number of entry comments
- Bookmarks: Number of bookmarks
- Communities: Number of communities
- Communities: Number of status updates
- Files: Number of files
- Forums: Number of forum topics
- Forums: Number of forums
- Forums: Number of topic replies
- Home page: Number of status updates
- Moderation: Number of rejected items
- Profiles: Number of status updates
- Wikis: Number of wiki pages
- Wikis: Number of wikis
Table 3. User metrics
- Profiles: Number of followers
Table 4. Community metrics
- Activities: Number of activities
- Blogs: Number of entries
- Blogs: Number of entry comments
- Bookmarks: Number of bookmarks
- Communities: Number of members
- Communities: Number of people who are following the community
- Communities: Number of status updates
- Files: Number of files
- Forums: Number of forum topics
- Forums: Number of topic replies
- Ideation blog: Number of graduated ideas
- Ideation blog: Number of ideas
- Moderation: Number of rejected items
- Wikis: Number of wiki pages
To capture data for the Metrics application...
For more information, see the Starting the wsadmin client topic.
- Start the Metrics Jython script interpreter by entering the following command:
execfile("metricsGetInitCount.py")
- Choose one of the following options:
- Global metrics: Get the initial count for global metrics, including the metrics for every community and every user.
Enter the following command:
MetricsAppsMetricsService.fetchInitialBalanceStartFrom(CommunityThreads, ConnectionsThreads, StartDate)
where
- CommunityThreads
- is the number of concurrent threads that the Communities application allocates for this command. The command starts these threads concurrently and sends requests to the Communities application to retrieve the initial count for all communities.
- ConnectionsThreads
- is the number of threads that all Connections applications allocate for this command. The command starts these threads concurrently and sends requests to all Connections applications to process the initial count for all communities. Adjust the number of threads in this parameter to match your server capacity.
- StartDate
- is the date that the Metrics application started to collect data. Input the value in the yyyy-mm-dd format, enclosed in double quotation marks. To configure the Metrics application to automatically calculate the start date, enter None as the value.
If your Connections deployment has a large number of communities and users, this command might take a long time to complete. In that case, select the next option instead.
Examples:
MetricsAppsMetricsService.fetchInitialBalanceStartFrom(5, 20, None)
MetricsAppsMetricsService.fetchInitialBalanceStartFrom(5, 20, "2012-12-01")
- Retrieve the initial count in separate steps for global, community, and user metrics. Select this option when your deployment has a large number of communities and users.
- Enter the following command to retrieve the initial count for global metrics only:
MetricsAppsMetricsService.fetchInitialBalanceForGlobalMetricsStartFrom(StartDate)
where
- StartDate
- is the date that the Metrics application started to collect data. Input the value in the yyyy-mm-dd format, enclosed in double quotation marks. To configure the Metrics application to automatically calculate the start date, enter None as the value.
Examples:
MetricsAppsMetricsService.fetchInitialBalanceForGlobalMetricsStartFrom(None)
MetricsAppsMetricsService.fetchInitialBalanceForGlobalMetricsStartFrom("2012-12-01")
- Enter the following command to retrieve the initial count for Communities metrics:
MetricsAppsMetricsService.fetchInitialBalanceForCommunitiesStartFrom(CommunityThreads, ConnectionsThreads, CommunityUuidFile, StartDate)
where
- CommunityThreads
- is the number of concurrent threads that the Communities application allocates for this command. The command starts these threads concurrently and sends requests to the Communities application to retrieve the initial count for all communities.
- ConnectionsThreads
- is the number of threads that all Connections applications allocate for this command. The command starts these threads concurrently and sends requests to all Connections applications to process the initial count for all communities. Adjust the number of threads in this parameter to match your server capacity.
- CommunityUuidFile
- is the file that contains the UUIDs of all the communities in your deployment. Enter the full path and name of the file, enclosed in double quotation marks.
- StartDate
- is the date that the Metrics application started to collect data. Input the value in the yyyy-mm-dd format, enclosed in double quotation marks. To configure the Metrics application to automatically calculate the start date, enter None as the value.
Examples:
MetricsAppsMetricsService.fetchInitialBalanceForCommunitiesStartFrom(5, 20, "C:\communityUUIDs.txt", None)
MetricsAppsMetricsService.fetchInitialBalanceForCommunitiesStartFrom(5, 20, "C:\communityUUIDs.txt", "2012-10-01")
- Retrieve the initial metrics data for users by using one of the following methods:
- Enter the following command to retrieve the initial metrics data for all users:
MetricsAppsMetricsService.fetchInitialBalanceForAllPersonFollower(1)
If your Connections deployment has a large number of users, this command might take a long time to complete. In that case, skip this step and run the next command instead.
- Enter the following command to retrieve the initial metrics data for users that are specified in the input file:
MetricsAppsMetricsService.fetchInitialBalanceForPersonFollower(1, ProfilesIdFile)
where
- ProfilesIdFile
- is the file that contains the IDs of all the users in your deployment. Enter the full path and name of the file, enclosed in double quotation marks.
Example:
MetricsAppsMetricsService.fetchInitialBalanceForPersonFollower(1, "C:\profileBaseKeys.txt")
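Assuming the ID files prepared earlier, the second (step-by-step) method amounts to a wsadmin session roughly like the following sketch; the thread counts, file paths, and start dates are placeholders.
# Sketch only: thread counts, file paths, and start dates are placeholder values.
execfile("metricsGetInitCount.py")
# Global metrics; None lets Metrics calculate the start date automatically.
MetricsAppsMetricsService.fetchInitialBalanceForGlobalMetricsStartFrom(None)
# Community metrics, read from the community UUID file prepared earlier.
MetricsAppsMetricsService.fetchInitialBalanceForCommunitiesStartFrom(5, 20, "C:/communityUUIDs.txt", None)
# User metrics, read from the Profiles ID file prepared earlier.
MetricsAppsMetricsService.fetchInitialBalanceForPersonFollower(1, "C:/profileBaseKeys.txt")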
Manage the scheduled tasks for Metrics
Use administrative commands to manage scheduled tasks in Metrics.
To run administrative commands, use the wsadmin client.
Metrics uses the WAS scheduling service for performing regular managed tasks.
For more information about how the scheduler works, see Schedule tasks.
To manage a task...
- Start the Metrics Jython script interpreter.
- Access the Metrics configuration file:
execfile("metricsAdmin.py")If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node.
If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Administer the Metrics scheduler service.
MetricsSchedulerService.getTaskDetails(java.lang.String taskName)
Returns information about the scheduled task specified by taskName. Specify one of the following jobs:
- "ReportGenerator"
- "MetricsDBCleanup"
- "DataSynchronization"
The values returned are the server time, the next scheduled run time, the status (SCHEDULED, RUNNING, or SUSPENDED), and the task name. When the task has been paused, the status shows as SUSPENDED instead of SCHEDULED; SUSPENDED means that the task is not scheduled to run.
For example:
MetricsSchedulerService.getTaskDetails("ReportGenerator")returns output similar to the following:
{currentServerTime=Fri Jan 22 14:13:52 EST 2010, nextFireTime= Fri Jan 22 14:21:00 EST 2010, status=SCHEDULED, taskName=ReportGenerator} MetricsSchedulerService.pauseSchedulingTask(java.lang.String taskName)Temporarily pauses the specified task and stops it from running.
When you pause a scheduled task, the task remains in the suspended state even after you stop and restart Metrics or the IBM WebSphere Application Server. You must run the MetricsSchedulerService.resumeSchedulingTask(String taskName) command to get the task running again.
If the task is currently running, it continues to run but is not scheduled to run again. If the task is already suspended, this command has no effect.
For example:
MetricsSchedulerService.pauseSchedulingTask("ReportGenerator")returns output similar to the following:
ReportGenerator paused MetricsSchedulerService.resumeSchedulingTask(java.lang.String taskName)If the task is suspended, puts the task in the scheduled state. If the task is not suspended, this command has no effect.
When a task is resumed, it does not run immediately; it runs at the time when it is next scheduled to run.
For example:
MetricsSchedulerService.resumeSchedulingTask("ReportGenerator")returns output similar to the following:
ReportGenerator resumed
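For example, a sketch of pausing report generation for a maintenance window and then resuming it, using only the commands described above:
execfile("metricsAdmin.py")
# Check the current state and next scheduled run of the task.
MetricsSchedulerService.getTaskDetails("ReportGenerator")
# Stop the task from being scheduled; a run that is already in progress finishes.
MetricsSchedulerService.pauseSchedulingTask("ReportGenerator")
# ... maintenance window ...
# Put the task back on its schedule; it runs at its next scheduled time.
MetricsSchedulerService.resumeSchedulingTask("ReportGenerator")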
Manage Metrics configurations
You can modify configuration settings to control how the Metrics application behaves. You can also use the configuration information to map user profile attributes to report dimensions.
Modify Metrics configuration properties
Change the way that the Metrics application behaves by modifying configuration properties.
To edit configuration files, use the IBM WebSphere Application Server wsadmin client.
Configuration settings control how and when various Metrics operations take place. Configure Metrics by using scripts that are accessed with the wsadmin client. These scripts use the AdminConfig object in the wsadmin client to modify the Metrics configuration file.
Changes to Metrics configuration settings require node synchronization and a restart of the Metrics server before they take effect.
To edit Metrics configuration properties...
- Start the Metrics Jython script interpreter.
- Access the Metrics configuration file:
execfile("metricsAdmin.py")If you are asked to select a server, you can select any server.
- Check out the Metrics configuration files:
MetricsConfigService.checkOutConfig("working_directory", "cell_name")
where
- working_directory is the temporary working directory where the configuration XML and XSD files are copied while you modify them.
AIX, IBM i, and Linux only: The directory must grant write permissions or the command does not run successfully.
- cell_name is the name of the WAS cell that hosts the Connections application. This argument is case-sensitive, so type it with care. To get the cell name, use the following wsadmin command:
print AdminControl.getCell()
For example:
MetricsConfigService.checkOutConfig("/opt/my_temp_dir", "CommCell01")
- To view a list of the valid Metrics configuration settings and their current values, use the MetricsConfigService.showConfig() command
This is sample output from the MetricsConfigService.showConfig() command:
Metrics configuration properties:
cognos.namespace = IBMConnections
cognos.secsPerRequest = 1200
communitiesMetricsDateRange.all.enabled = true
communitiesMetricsDateRange.last12months.enabled = true
communitiesMetricsDateRange.last4weeks.enabled = true
communitiesMetricsDateRange.last7days.enabled = false
communitiesMetricsDateRange.lastquarter.enabled = true
db.dialect = DB2
Only properties that are in the metrics-config.xml file are printed by the MetricsConfigService.showConfig() command. Configurations of custom reports are not listed.
- Modify configuration properties by using the appropriate method. Notes:
- Some Metrics configuration properties can be edited using wsadmin:
MetricsConfigService.updateConfig("property", "value")where property is one of the editable Metrics configuration properties and value is the new value for the property.
For example:
MetricsConfigService.updateConfig("communitiesMetricsDateRange.last7days.enabled", "false")
- Properties that are not listed as being editable with the wsadmin client can be edited directly in the configuration file: open the file in a text editor to modify it.
- Check in the file: MetricsConfigService.checkInConfig()
- Update the value of the version stamp configuration property in LotusConnections-config.xml. This setting forces browsers to pick up the changes.
- Exit the wsadmin client.
- Restart the server to apply your changes.
After you update Metrics properties, you can use the MetricsConfigService.showConfig() command to display the properties and their updated values.
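A minimal end-to-end sketch of such a session, reusing the placeholder directory and cell name from the example above:
execfile("metricsAdmin.py")
# /opt/my_temp_dir and CommCell01 are placeholders; use your own values.
MetricsConfigService.checkOutConfig("/opt/my_temp_dir", "CommCell01")
# List the editable properties and their current values.
MetricsConfigService.showConfig()
# Example change: disable the "last 7 days" date range for community reports.
MetricsConfigService.updateConfig("communitiesMetricsDateRange.last7days.enabled", "false")
MetricsConfigService.checkInConfig()
# Update the version stamp in LotusConnections-config.xml, synchronize nodes,
# and restart the Metrics server for the change to take effect.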
Metrics configuration properties
Configuration properties control how and when various Metrics operations occur and help optimize performance.
You can modify the following configuration properties in the metrics-config.xml file. After making changes to the file, restart the server to implement the changes.
Restriction: Note the following restrictions on the values allowed in the file:
- All configuration properties, except those with multiple values, are required.
- .enabled properties must have boolean values of either true or false.
- Number values must be integers.
- communitiesMetricsDateRanges group
- Five configuration properties that specify the date ranges for which community metrics static reports are generated. Generating reports for all five date ranges can be time-consuming; you can reduce report generation time by disabling one or more date ranges. A disabled date range is not displayed in the community metrics application.
The five date range properties specify the last 7 days, last 4 weeks, last quarter, last 12 months, and all years.
- communitiesMetricsDateRange.last7days.enabled
- communitiesMetricsDateRange.last4weeks.enabled
- communitiesMetricsDateRange.lastquarter.enabled
- communitiesMetricsDateRange.last12months.enabled
- communitiesMetricsDateRange.all.enabled
- scheduledTasks group
- Properties for the three scheduler tasks used by Metrics. Each task has two properties, "enabled" and "interval". "enabled" determines whether the task is run; if the value is false, the task is still registered but does not run. "interval" specifies the task schedule. For more information about configuring the scheduler tasks, see Manage the scheduler.
- scheduledTasks.ReportGenerator.enabled
- scheduledTasks.ReportGenerator.interval
- scheduledTasks.MetricsDBCleanup.enabled
- scheduledTasks.MetricsDBCleanup.interval
- scheduledTasks.DataSynchronization.enabled
- scheduledTasks.DataSynchronization.interval
- db.dialect
- Reflects the current database type, typically specified during installation.
Valid values are DB2, Oracle, and SQL Server.
- UserAttributesMappings group
- Contains three configuration properties that define the mapping relationship between the user attribute filters used in Metrics and user properties in Profiles.
Metrics supports up to three report dimensions that are based on user attributes defined in Profiles. By default, the three dimensions are Geography, Department, and Role. You can redefine them by mapping the preceding attributes to other available attributes in Profiles.
- userAttributesMappings.attribute1
- userAttributesMappings.attribute2
- userAttributesMappings.attribute3
For more information, see Map user profile attributes to report dimensions.
- eventLifetimeInMonths
- Number of months from the date a metrics event is created until it is removed from the database. Metrics events are periodically removed from the database. You can change this value based on your database's capacity.
The value must be greater than 0.
- privacy.displayReportWithUserName
- When this property is set to "false", user names are not displayed in reports.
By default, the value is "true". Valid values are "true" and "false".
- cognos.namespace
- Authentication namespace ID configured during IBM Cognos Business Intelligence installation. The authentication namespace defines a group of properties that allows Cognos to access an LDAP for user authentication.
This value must be the same as what you defined in the Cognos configuration. Otherwise, you might have authentication issues when accessing Metrics.
- cognos.secsPerRequest
- Specifies the estimated time to generate a whole set of reports for a community. This value is used when Metrics calculates the estimated finish time of Community Metrics report generation. By default, the value is 1200, which was measured on a typical two-CPU server. You can change this value based on the capacity of your server.
The value must be in seconds and must be greater than 0.
Map user profile attributes to report dimensions
Edit Metrics configuration files to map various user profile attributes to Metrics report dimensions.
To edit configuration files, use the wsadmin client.
Metrics reports support up to three different dimensions.
The dimensions are based on user profile attributes, such as "com.ibm.snx_profiles.base.title", which specifies the user role. By default, the three attributes are:
- com.ibm.snx_profiles.base.countryCode
- Mapped to the "Geography" dimension in Metrics reports
- com.ibm.snx_profiles.base.orgId
- Mapped to the "Department" dimension in Metrics reports
- com.ibm.snx_profiles.base.title
- Mapped to the "Role" dimension in Metrics reports
To get the full list of Profiles attributes through the administration ATOM API provided by Profiles, refer to Retrieving the Profiles Administration API service document in the IBM Social Business wiki.
The following Profiles attributes contain codes instead of real values:
- com.ibm.snx_profiles.base.deptNumber
- com.ibm.snx_profiles.base.countryCode
- com.ibm.snx_profiles.base.orgId
- com.ibm.snx_profiles.base.employeeTypeCode
- com.ibm.snx_profiles.base.workLocationCode
If any of these attributes are used, Metrics shows real values in the reports.
To map user profile attributes to report dimensions...
- Start the Metrics Jython script interpreter.
- Access the Metrics configuration file:
execfile("metricsAdmin.py")If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node.
- Use the following command to check out the Metrics configuration file metrics-config.xml:
MetricsConfigService.checkOutConfig("working_directory", "cell_name")
- Modify metrics-config.xml directly or use MetricsConfigService.updateConfig() to update the Profiles attribute mappings, as shown in the sketch after this procedure.
- Use the following command to check in the Metrics configuration file metrics-config.xml:
MetricsConfigService.checkInConfig()
- Define names for the customized attributes with customized product strings. Set new values for the following properties in the Metrics properties file com.ibm.connections.metrics.ui.strings.ui_xx.properties (located in metrics.ear/lc.metrics.ui.jar), where xx is the language identifier:
- METRICS.FILTER.DIMENSION.OPTIONS.ATTRIBUTE1
- METRICS.FILTER.DIMENSION.OPTIONS.ATTRIBUTE2
- METRICS.FILTER.DIMENSION.OPTIONS.ATTRIBUTE3
See Customize product strings for more details.
- Save all changes and restart the server on which Metrics is deployed.
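As referenced in the procedure above, a sketch of remapping the third dimension with MetricsConfigService.updateConfig(); the work location attribute is only an example, chosen from the Profiles attributes listed earlier, and the directory and cell name are placeholders.
execfile("metricsAdmin.py")
MetricsConfigService.checkOutConfig("/opt/my_temp_dir", "CommCell01")
# Map the third report dimension (labelled "Role" by default) to a different
# Profiles attribute; the attribute used here is an example only.
MetricsConfigService.updateConfig("userAttributesMappings.attribute3", "com.ibm.snx_profiles.base.workLocationCode")
MetricsConfigService.checkInConfig()
# Then rename the dimension label through METRICS.FILTER.DIMENSION.OPTIONS.ATTRIBUTE3
# in the Metrics properties file and restart the Metrics server.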
Run Metrics administrative commands
Use administrative commands from the wsadmin command line to directly interact with Metrics.
To use administrative commands, use the wsadmin client.
Administrative commands interact with the Metrics application and its resources through scripts. These scripts use the AdminControl object available in the IBM WebSphere Application Server wsadmin tool to interact with the Metrics server. Each script uses managed Java beans (MBeans) to get and set server administration properties. You do not need to check out files before running administrative commands, and you do not need to restart the server for the commands to take effect.
See Metrics administrative commands for a complete list of other administrative commands for the Metrics application.
To run Metrics administrative commands...
- Start the Metrics Jython script interpreter.
- Access the Metrics configuration file: execfile("metricsAdmin.py")
If an error occurs when you run a command, examine the SystemOut.log file to determine the cause of the error.
Metrics administrative commands
You can manage the Metrics services using wsadmin to perform administrative commands that allow you to check files in and out and modify settings. Administrative commands do not require a server restart to take effect.
You must use the wsadmin client to run administrative commands.
MetricsConfigService
MetricsConfigService.checkOutConfig("working_directory", "cell_name")Checks Metrics configuration files out to a temporary directory. Run from the wsadmin command processor.
- working_directory
- Temporary working directory to which the configuration files are copied. The files are kept in this working directory while you make changes to them.
- cell_name
- Name of the WAS cell hosting the application. If you do not know the cell name, type the following command in the wsadmin command processor:
print AdminControl.getCell()
For example:
MetricsConfigService.checkOutConfig("/opt/my_temp_dir", "Cell")
MetricsConfigService.showConfig()
Displays the current configuration settings. Check out the configuration files with MetricsConfigService.checkOutConfig() before running MetricsConfigService.showConfig().
MetricsConfigService.updateConfig("quick_config_property", "new_value")Updates configuration properties.
- quick_config_property
- Property in the metrics-config.xml configuration file expressed as a quick config command.
For example, the quick config value for the following property:
<databaseCleanup> <eventLifetimeInMonths>12</eventLifetimeInMonths> </databaseCleanup>
...is eventLifetimeInMonths.
See Metrics configuration properties for configuration properties and descriptions.
- new_value
- The new value for the property. Property values can be restricted, for example, to either true or false.
For example, to set the scheduledTasks.ReportGenerator.enabled property to false:
MetricsConfigService.updateConfig("scheduledTasks.ReportGenerator.enabled", "false")- MetricsConfigService.checkInConfig()
Checks in Metrics configuration files. Run from the wsadmin command processor.
MetricsMemberService
- MetricsMemberService.syncAllMembersByExtId( {"updateOnEmailLoginMatch": ["true" | "false"] } )
- MetricsMemberService.syncMemberByExtId("currentExternalId"[, {"newExtId" : "id-string" [, "allowExtIdSwap" : ["true" | "false"] ] } ] )
- MetricsMemberService.inactivateMemberByEmail("email")
- MetricsMemberService.inactivateMemberByExtId("externalID")
- MetricsMemberService.getMemberExtIdByEmail("email")
- MetricsMemberService.getMemberExtIdByLogin("login")
- MetricsMemberService.syncBatchMemberExtIdsByEmail("emailFile" [, {"allowInactivate" : ["true" | "false"] } ] )
- MetricsMemberService.syncBatchMemberExtIdsByLogin("loginFile" [, {"allowInactivate" : ["true" | "false"] } ] )
- MetricsMemberService.syncMemberExtIdByEmail("email" [, { "allowInactivate" : ["true" | "false"] } ])
- MetricsMemberService.syncMemberExtIdByLogin("name" [, {"allowInactivate": ["true" | "false"] } ])
See Synchronize user data using administrative commands for details of these commands.
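For example, a sketch of checking and then synchronizing a single user's external ID by email address (the address is a placeholder; see Synchronize user data using administrative commands for parameter details):
execfile("metricsAdmin.py")
# Look up the external ID that Metrics currently stores for the user.
MetricsMemberService.getMemberExtIdByEmail("ajones@example.com")
# Synchronize that user's external ID; the allowInactivate option is optional.
MetricsMemberService.syncMemberExtIdByEmail("ajones@example.com", {"allowInactivate": "false"})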
MetricsUsersService
MetricsUsersService.reloadUsersAttributes()
Synchronizes user information with Profiles immediately.
User synchronization is run periodically by the "DataSynchronization" scheduler task to get detailed information about the users captured in Metrics raw data.
By default, user synchronization runs daily. Use MetricsUsersService.reloadUsersAttributes() if you want to run synchronization ahead of the scheduled time.
MetricsSchedulerService
MetricsSchedulerService.pauseSchedulingTask(string taskName)
Suspends scheduling of a task. This has no effect on currently running tasks. Paused tasks remain paused until you explicitly resume them, even if the server is stopped and restarted.
- taskName
- A string value containing one of the following:
- ReportGenerator
- MetricsDBCleanup
- DataSynchronization
For example: MetricsSchedulerService.pauseSchedulingTask("ReportGenerator")
MetricsSchedulerService.resumeSchedulingTask(string taskName)
Resumes scheduling of a paused task.
- taskName
- A string value containing one of the following:
- ReportGenerator
- MetricsDBCleanup
- DataSynchronization
For example: MetricsSchedulerService.resumeSchedulingTask("ReportGenerator")
MetricsSchedulerService.forceTaskExecution(string taskName, string executeSynchronously)
Runs a task; for a synchronous run, see the sketch after this list.
Property settings in the metrics-config.xml configuration properties file specify whether tasks are enabled to run automatically, and how often. This command allows you to run tasks manually, for example, if you disabled a task but want to run it at specified times.
- taskName
- A string value containing one of the following:
- ReportGenerator
- MetricsDBCleanup
- DataSynchronization
- executeSynchronously
- Takes the string values true or false. Specifying this value is optional, and the default value is false.
If this value is false, the task executes asynchronously: if the taskName is valid, the command returns immediately and execution continues in the background. If this value is true, the command does not return until the task completes.
For example: MetricsSchedulerService.forceTaskExecution("ReportGenerator")
MetricsSchedulerService.getTaskDetails(string taskName)
Displays the status of a task and returns a detailed status message.
- taskName
- A string value containing one of the following:
- ReportGenerator
- MetricsDBCleanup
- DataSynchronization
For example: MetricsSchedulerService.getTaskDetails("ReportGenerator")
Update the Cognos server with fix packs
Apply available fix packs to the Cognos Business Intelligence server to provide important product corrections.
Download the latest fix pack and apply it to the Cognos server. Fix packs are cumulative; when you install the latest fix pack, it includes updates from all previous fix packs.
If the latest available Cognos fix packs were applied during installation, you do not need to apply them again now. When new fix packs are released later, you can follow this procedure to apply them.
- Stop IBM WebSphere Application Server and verify that all Cognos services have stopped before proceeding to the next step.
After stopping the server, wait at least 1 full minute to ensure that all Cognos processes have stopped:
- AIX or Linux: cgsServer.sh and CAM_LPSvr processes
- Windows: cgsLauncher.exe and CAM_LPSvr processes
- Download fix packs for Cognos Business Intelligence from IBM Fix Central.
- Review the fix pack prerequisites and make sure your deployment satisfies all requirements.
- Download the appropriate compressed tar file for your operating system using the links in the "Download Package" section.
Some browsers might change a downloaded file’s type from .tar.gz to a file type not recognized by the operating system. To correct this, change the file type back to .tar.gz after the download is complete. Using Download Director will prevent inadvertent renaming of files at download.
- Expand the downloaded fix pack.
IBM AIX or Linux
- Open a command prompt or terminal window.
- Change to the directory where you have downloaded the fix pack.
- Run the following command to expand the package using GNU Zip (gzip) and GNU Tar (tar):
gunzip -c fix_pack_file_name.tar.gz | tar xvf -
Windows
- Open a command prompt.
- Change to the directory where you have downloaded the fix pack.
- Expand the package using your file compression utility (if you are using WinZip, select the Use folder names option to retain the package’s folder structure).
- Apply the fix pack:
AIX or Linux:
- Change to the directory where you expanded the fix pack.
- Now change to the following subdirectory:
- AIX: /aix64h
- Linux: /linuxi38664h
- zLinux: /zlinux64h
- Run the following command: ./issetup
If you do not use the X Window System, run an unattended installation as explained in Set Up an Unattended Installation Using a File From an Installation on Another Computer in the Cognos information center.
- Follow the instructions in the installation wizard, installing the fix pack in the same location as your existing IBM Cognos components.
This installation location is the path specified for the cognos.biserver.install.path property in the cognos-setup.properties file.
Windows:
- Change to the directory where you expanded the fix pack.
- Now change to the \win64h subdirectory.
- Run the following command: issetup.exe
- Follow the instructions in the installation wizard, installing the fix pack in the same location as your existing IBM Cognos components.
This installation location is the path specified for the cognos.biserver.install.path property in the cognos-setup.properties file.
- Generate a new Cognos BI Server EAR file with the fix pack:
- Locate the cognos-setup-update.sh script in the directory where you expanded the CognosConfig.zip or CognosConfig.tar when you installed Cognos Business Intelligence components as part of the pre-install task.
- Edit the cognos-setup.properties file and verify that it contains the appropriate values for each property.
All passwords were removed from this file the last time that it was used, so you must either add the passwords again or pass them in from the command line when you run the cognos-setup-update.sh script in the next step.
- Run the cognos-setup-update.sh script.
- Use the WAS admin console to apply the new EAR file to the Cognos server:
- Log in to the Dmgr admin console.
- Click Applications > Application Types > WebSphere enterprise applications.
- In the list of applications, select the Cognos application for which you generated the new EAR file, and then click the Update button in the table.
- Browse to the newly built EAR file residing in the Cognos_BI_Server_install_path directory, and click Next.
- Complete the remaining screens by accepting the default values and clicking Next.
- Click Finish to complete the update.
- Update the Cognos server configuration...
- Start the Cognos server. Verify that all processes are started by accessing the Cognos BI dispatch URL. If startup is complete, the Public Folders directory is present.
- Locate the cognos-configure-update.sh script in the directory where you expanded the CognosConfig.zip|tar file when you installed Cognos Business Intelligence components.
- Edit the cognos-setup.properties file and verify that it contains the appropriate values for each property. All passwords were removed from this file the last time that it was used, so you must either add the passwords again, or pass them in from the command line when you run the cognos-configure-update.sh script.
- Run the cognos-configure-update.sh script.
Output from this operation is stored in the /CognosSetup/cognos-configure.log file.
If you encounter an error when running the cognos-configure-update.sh script, correct the error and run the script again before proceeding to the next task.
- Start WebSphere Application Server and Cognos.
Results
For additional information on updating Cognos Business Intelligence with fix packs, see Install Fix Packs in the Cognos information center.
Back up and restore Metrics data
You can back up the PowerCube’s data on a monthly basis, or you can back up the entire Metrics database history. It is easier to back up a single month’s worth of PowerCube data than to back up the entire Metrics database history, but backing up the database allows you to rebuild the entire PowerCube if needed.
Back up and restore sub-PowerCubes
Each month, one sub-PowerCube is generated containing data for the past month. These sub-PowerCubes are stored on the disk as files used for backing up and restoring a month’s worth of data in the PowerCube.
Backing up a sub-PowerCube preserves only one month’s data at a time. Rebuilding an entire PowerCube requires a full restore from the raw Metrics database.
For information on restoring an entire PowerCube, see Restore the entire Metrics database and PowerCube or Restore the Metrics database and PowerCube incrementally.
- To back up a month’s worth of PowerCube data, copy the previous month’s sub-PowerCube file to a different server (a scripted example follows these steps):
Back up the most recent sub-PowerCube that contains data for an entire month; do not back up the current month’s data because it is incomplete.
- On the server hosting the IBM Cognos Transformer component, navigate to the Powercube_save_path/MetricsTrxCube directory, which is where sub-PowerCube files for each month are stored.
The Powercube_save_path is the value of the cognos.cube.path property specified in the cognos-setup.properties file.
- Copy the sub-PowerCube for the previous month to the backup location.
- To restore a PowerCube, copy the backed-up version of the sub-PowerCube back to the original server:
- On the server storing the backup files, navigate to the location of the saved PowerCube file and copy the most recent month’s backup file.
- On the server hosting the IBM Cognos Transformer component, copy the retrieved file to the Powercube_save_path/MetricsTrxCube directory.
- Either wait for the scheduled cube refresh job to run, or run it manually.
After the cube refresh job finishes, the changes are updated to the Cognos server.
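The backup copy step can also be scripted. The following Python sketch is an illustration only; it assumes that the sub-PowerCube files are stored under Powercube_save_path/MetricsTrxCube and that the file for the most recently completed month can be identified by its modification time. The paths are placeholders, and you should verify the selection logic against your actual file layout before relying on it:
import os
import shutil
from datetime import datetime

# Placeholder paths; substitute your Powercube_save_path/MetricsTrxCube directory
# and the backup location (for example, a share mounted from another server).
cube_dir = "/opt/cognos/cubes/MetricsTrxCube"
backup_dir = "/backup/metrics-subcubes"

if not os.path.isdir(backup_dir):
    os.makedirs(backup_dir)

now = datetime.now()
# The current month is incomplete, so copy only files last modified during the previous month.
month_start = datetime(now.year, now.month, 1)
if now.month == 1:
    prev_month_start = datetime(now.year - 1, 12, 1)
else:
    prev_month_start = datetime(now.year, now.month - 1, 1)

for name in os.listdir(cube_dir):
    path = os.path.join(cube_dir, name)
    if not os.path.isfile(path):
        continue
    modified = datetime.fromtimestamp(os.path.getmtime(path))
    if prev_month_start <= modified < month_start:
        shutil.copy2(path, os.path.join(backup_dir, name))
        print("Backed up %s" % name)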
Back up the Metrics database
The Metrics database contains the raw data used to generate the PowerCube. Backing up the Metrics database on a regular basis ensures that you can rebuild the PowerCube in the event of a failure.
The backup schedule for the Metrics database is based on how long the raw metrics data is stored there.
For example, if you specify the metrics-config.xml file’s databaseCleanup.eventLifetimeInMonths value as 6, then raw metrics data older than 6 months is purged, so you must back up the Metrics database at least every 6 months to prevent data loss.
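If you need to change the retention period, the databaseCleanup.eventLifetimeInMonths property can be updated with the same wsadmin pattern shown earlier for Metrics configuration properties, assuming the property is exposed to MetricsConfigService.updateConfig in your release; check the file out and back in as usual. For example:
MetricsConfigService.updateConfig("databaseCleanup.eventLifetimeInMonths", "6")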
To back up the Metrics database...
- Select a backup schedule: back up the Metrics database every month, quarter, or year, depending on the database size and your business needs.
- On the database server, back up the database as explained in the database product documentation.
Restore the entire Metrics database and PowerCube
The Metrics database contains the raw data used to generate the PowerCube. Backing up the Metrics database on a regular basis ensures that you can rebuild the PowerCube in the event of a failure.
If you plan to restore the PowerCube on a staging server before copying it to your production server, begin by installing Cognos Business Intelligence on the staging server. Best practice is to install and configure the staging server just like the production server; at a minimum, the staging server requires the customized version of Cognos Business Intelligence that you installed on the production server and access to the LDAP directory where the Cognos administrator account resides.
When you restore the Metrics database, you must rebuild the PowerCube to make sure it contains all of the same data. You can restore the entire Metrics database in a single operation, and then rebuild the entire PowerCube in another single operation. If the Metrics database is large and your space for restoring the database and rebuilding the PowerCube is limited, you can complete the process incrementally by restoring subsets of data and rebuilding the PowerCube to add each additional set of data.
For more information, see Restore the Metrics database and PowerCube incrementally. To minimize the impact on users, restore the PowerCube to a staging server and then copy the completed PowerCube to the production server where IBM Cognos Transformer is installed.
If the database is large, it can take a long time to restore it and rebuild the PowerCube.
To restore the Metrics database and rebuild the PowerCube...
- Restore the entire Metrics database by following the instructions in the database product documentation.
- On the staging server, copy the Transformer_install_path/metricsmodel directory and its contents from the production server where IBM Cognos Transformer is installed.
This directory contains the saved PowerCube model files that specify how the PowerCube is built; its location is specified by the cognos.transformer.install.path property in the cognos-setup.properties file.
- On the staging server, rebuild the PowerCube from the restored Metrics database:
- Navigate to the Transformer_install_path/metricsmodel directory.
The Transformer_install_path is the value of the cognos.transformer.install.path property specified in the cognos-setup.properties file.
- Run the build-all.sh script.
- Move the rebuilt PowerCube to the production server by copying the following directories and their contents from the staging server to the production server:
- Powercube_save_path
Powercube_save_path is the directory containing saved PowerCube files; its location is specified by the cognos.cube.path property in the cognos-setup.properties file.
- Transformer_install_path/metricsmodel
Transformer_install_path is the value of the cognos.transformer.install.path property specified in the cognos-setup.properties file.
Restore the Metrics database and PowerCube incrementally
If you have limited disk space, you can restore both the Metrics database and the PowerCube in increments.
If you plan to restore the PowerCube on a staging server before you copy it to your production server, begin by installing Cognos Business Intelligence on the staging server. Install and configure the staging server just like the production server. Install the customized version of Cognos Business Intelligence on the staging server. This server must also have access to the LDAP directory where the Cognos administrator account resides.
The Metrics database contains the raw data used to generate the PowerCube. Whenever you restore the Metrics database from the backup, you must rebuild the PowerCube to ensure that it contains the full set of data.
If the Metrics database is large and your space for restoring the database and rebuilding the PowerCube is limited, you can complete the process incrementally by restoring subsets of data. Rebuild the PowerCube to add each additional set of data incrementally. Delete the restored data to prepare the database for the next increment. When the PowerCube is rebuilt, complete a final restore on the Metrics database to replace the data that you deleted between increments.
If you prefer to rebuild the entire PowerCube in a single operation, see Restore the entire Metrics database and PowerCube.
If the database is large, it can take a long time to restore it and rebuild the PowerCube. To minimize the impact on users, restore the PowerCube to a staging server and then copy the completed PowerCube to the production server.
- Prepare the staging server where you plan to rebuild the PowerCube:
- Copy the Transformer_install_path/metricsmodel directory and its contents from the production server where IBM Cognos Transformer is installed.
This directory contains the PowerCube model files that specify how the PowerCube is built. The directory is specified in the cognos.transformer.install.path property in the cognos-setup.properties file.
- Create an incremental version of the script that builds the PowerCube:
- Go to the Transformer_install_path/metricsmodel directory.
- Copy the build-all.sh script to a new file called incremental-build-all.sh in the same directory.
- Edit the incremental-build-all.sh file and delete the following statements to avoid accidentally deleting incremental additions to the PowerCube:
- AIX, IBM i, and Linux:
echo delete all existing cube files...
rm -rf [POWERCUBE_SAVE_PATH]/*
- Windows:
echo delete all existing cube files...
del /F /Q "[POWERCUBE_SAVE_PATH]"
for /D %%i in ("[POWERCUBE_SAVE_PATH]") do RD /S /Q "%%i"
- Save and close the file.
- On the database server, restore the first increment of data to the Metrics database.
For more information, see your database product documentation.
For best results, restore data in full-month increments (multiple months are acceptable provided you restore the complete set of data for each month). Restoring data for full months ensures that you do not duplicate data by restoring it twice.
For example, suppose that you restore data for 1-May-2013 through 12-May-2013 but omit the remainder of the data for that month. In the next increment, you must ensure that you restore only the remaining data for that month (13-May-2013 through 31-May-2013). To restore the entire month in the second increment, you must first delete the partial month that you already restored. Restoring full-month increments avoids this problem.
If the final set of data comprises an incomplete month, there is no risk of duplication.
If you restored a partial month’s data, you can delete those records from the database. You can identify the records by the timestamp column for each table. Table 5 lists the database tables and corresponding timestamp columns where you can delete records.
For example, in the F_TRX_EVENT database table, check the data in the EVENT_TS column and delete the records where the date falls within the time period to delete (a scripted illustration follows this procedure).
Table 5. Tables to reference when you are deleting records for partial months
- F_TRX_EVENT (timestamp column: EVENT_TS)
- F_USER_EVENT_COUNT (timestamp column: UPDATE_TS)
- F_ITEM_EVENT_COUNT (timestamp column: UPDATE_TS)
- On the staging server, rebuild the first increment of restored data in the PowerCube:
- Go to the Transformer_install_path/metricsmodel directory.
- Run the build-all.sh script to rebuild the PowerCube from the first set of restored Metrics data.
If there is already a PowerCube in the directory, the build-all.sh script deletes it before it builds the new one. When you build the PowerCube incrementally, run this version of the script once only so that you do not delete the incremental copies of the PowerCube.
- Complete the following steps for each subsequent increment of data to be restored:
- On the database server, delete the previously restored data from the Metrics database to prevent accidental duplication.
- On the database server, restore the next increment of data to the Metrics database by following the instructions in your database product documentation.
- On the staging server, rebuild the new increment of restored data to the PowerCube by running the incremental-build-all.sh script.
Repeat this process until the PowerCube is rebuilt.
- Move the rebuilt PowerCube to the production server by copying the Powercube_save_path directory and its contents from the staging server:
The directory is specified in the cognos.cube.path property in the cognos-setup.properties file.
- Delete the data from the Metrics database that was previously restored. Deleting this data prevents duplication the next time that you restore the database.
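As an illustration of the partial-month cleanup described earlier, the following Python sketch issues the corresponding DELETE statements through the ibm_db driver against a DB2 Metrics database. The connection string, database name, and date range are placeholders; back up the database and follow your database product documentation before deleting any records:
import ibm_db

# Placeholder connection details for a DB2 Metrics database; substitute your own.
conn = ibm_db.connect(
    "DATABASE=METRICS;HOSTNAME=dbhost.example.com;PORT=50000;"
    "PROTOCOL=TCPIP;UID=dbuser;PWD=password", "", "")

# Tables and timestamp columns from Table 5; the date range matches the
# partial-month example (1-May-2013 through 12-May-2013).
tables = {
    "F_TRX_EVENT": "EVENT_TS",
    "F_USER_EVENT_COUNT": "UPDATE_TS",
    "F_ITEM_EVENT_COUNT": "UPDATE_TS",
}
for table, ts_column in tables.items():
    sql = ("DELETE FROM %s WHERE DATE(%s) BETWEEN '2013-05-01' AND '2013-05-12'"
           % (table, ts_column))
    ibm_db.exec_immediate(conn, sql)

ibm_db.close(conn)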
Administer Mobile
You can use the Connections Mobile app to perform common tasks from a mobile device.
You can install the Connections Mobile app on the following devices:
- Android
- Apple iPhone
- Apple iPad
- Apple iPod Touch
- BlackBerry OS 6
Search for the Connections Mobile app in the Apple App Store, Android Market, or BlackBerry App World.
If you do not want to use the Connections Mobile app, you can use the web browser on your mobile device to access the mobile version of Connections. The mobile version optimizes the applications for use from a mobile device.
When using a browser to access the mobile version, the following applications are available:
- Home page
- Quickly find out what people are working on by checking the Status Updates feed. Customize which messages are displayed in this feed; by default, it shows the status of people in your network and people that you are following. You can change your own status message from here instead of opening the Profiles application to do so.
- Activities
- View activities and to-do items. Prioritize and add content to existing activities. Add new activities or edit, complete, and delete activities that you own.
- Blogs
- Search for and view blogs. Start a blog, and add entries and comments.
- Bookmarks
- Save, organize, and share website addresses as bookmarks. Discover bookmarks that others in your organization, including subject matter experts, saved. Add useful bookmarks created by other people to your own bookmark collection or to an activity, blog, or community. Flag bookmarks with broken links.
- Communities
- View communities, manage the membership of communities you own, and perform the following tasks in your communities:
- Review the recent updates by other members.
- Add a community activity or work with existing community activities.
- Add new entries, and recommend or comment on existing entries in the community blog.
- Add ideas and vote on other people’s ideas in the ideation blog.
- Add and use community bookmarks.
- Download, comment on, recommend, or follow community files or share images and videos that are stored on your device.
- Read and post entries to the community forum or reply to other people's posts.
- Access community feeds.
- View images and videos in the community media gallery (You cannot play videos in the gallery from a BlackBerry device).
- View and comment on the community wiki.
- Files
- Access files and share them with others. Recommend, comment on, or follow other people’s files. View who has downloaded a file or commented on it, see file ratings, and see highly recommended files. Upload photos or videos that are stored on your device.
- Forums
- Access forums. Add topics and replies to forums you belong to. Edit and delete topics and replies that you created.
- Profiles
- Update your status message. Edit your profile; add a picture to it by uploading an image file that is stored on your device. Search for people using the search icon in the menu. You can view a person's contact information, and then contact the person by using the native dialer and email application on the device. You can also view the person’s network and report-to chain, and see what they are working on from the Board and Recent Updates views. Invite the person to join your network by clicking the Invite to My Network link in the profile header.
Wherever a person’s name is displayed in blue on the page, you can click it to view the person’s business card. The business card is based on the person’s profile and provides you with quick links to phone numbers and email addresses. The business card also links to content that a person has contributed to Connections.
- Wikis
- Consult your wikis while on the go and add comments to them.
Supported devices
You can access the mobile version of Connections from the browser on the following devices:
- Android
- Apple iPad
- Apple iPhone
- Apple iPod Touch
- BlackBerry
- Nokia S60
For details about supported device versions, see the detailed system requirements for Connections.
How do I enable the mobile version?
While it is possible to access Connections by using the browser on your mobile device, IBM recommends that you use the Connections native app instead. The native app is available for the following mobile devices:
- Android
- Apple iOS
- BlackBerry
Establish how mobile users will connect to your enterprise remotely and securely to access Connections from their devices.
If your enterprise requires the use of a VPN for mobile device remote access, Lotus Mobile Connect is a recommended option.
If you use signer certificates that are not already trusted by the web browser, install the SSL signer (trust) certificate on the mobile device. Certificates from standard certificate authorities, such as VeriSign, are trusted by most web browsers, so you do not need to perform this step if you are using a certificate from a standard certificate authority. To install the SSL signer certificate on the mobile device, complete the following steps:
- Apple iOS:
Send an email to users with the certificate provided as an attachment. Ask users to open the attachment from the device. When they do, the device recognizes and installs the certificate.
- Devices that run on the Symbian operating system:
Refer to the documentation for the device to find out how to install certificates.
For example, to install a certificate on the Nokia S60 device, go to the Nokia website.
When you install Connections, select the option to install Mobile.
If you already installed Connections, but did not select the Mobile application at the time, you can rerun the installation wizard in Modify mode.
To access the mobile version of Connections...
- Log in to your intranet from the mobile device.
- Go to the following web address:
http://hostname/mobile
...where hostname is the same server host name from which users access Connections in a standard web browser.
To support 3GP videos in the media gallery, add the following mapping to the mime-files-config.xml configuration file:
<mapping mediaType="video" mimeType="video/3gpp">
  <extension>3gp</extension>
</mapping>
You cannot play videos in the media gallery of a community from BlackBerry devices.
Change Mobile configuration property values
Modify the configuration properties in mobile-config.xml to control how users can interact with the Connections Mobile native app.
Use the wsadmin client to check out the file.
To edit Mobile configuration properties...
- Start the wsadmin client.
- Access the Mobile configuration file:
execfile("mobileAdmin.py")
If you are asked to select a server, you can select any server.
- Check out the Mobile configuration file:
MobileConfigService.checkOutConfig("working_directory", "cell_name")
where:
- working_directory is the temporary working directory to which the configuration XML and XSD files are copied. The files are kept in this working directory while you make changes to them.
AIX, IBM i, and Linux only: The directory must grant write permissions or the command will not run successfully.
- cell_name is the name of the WAS cell hosting the Connections application. This argument is required. To determine the cell name, run the following wsadmin command:
print AdminControl.getCell()
For example:
MobileConfigService.checkOutConfig("/opt/my_temp_dir", "CommCell01")
- Create a backup copy of the file.
- Edit the file in a text editor.
- Save your changes and check the file in.
- Synchronize the nodes in your deployment.
- Restart the application server that hosts the Mobile application.
You must check the configuration files back in after making changes, and they must be checked in during the same wsadmin session in which they were checked out.
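For example, a complete edit session might look like the following sketch. The working directory and cell name are illustrative, and the check-in command is assumed to follow the same MobileConfigService naming pattern as the checkout command; verify the exact command for your release:
execfile("mobileAdmin.py")
MobileConfigService.checkOutConfig("/opt/my_temp_dir", "CommCell01")
# Back up and edit /opt/my_temp_dir/mobile-config.xml in a text editor, then check it back in
MobileConfigService.checkInConfig()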
Mobile configuration properties
Configuration properties in mobile-config.xml control how users can interact with the Connections Mobile app.
Modify configuration properties
For information about how to modify a configuration property in mobile-config.xml, see the Change Mobile configuration property values topic.
When you modify a property, ensure that you apply the following guidelines:
- Enabled properties must have a value of either true or false.
- Number values must be integers.
You can modify the following configuration properties:
General properties
- ExposeGeoLocation
- Displays the geographic location of the user in the activity stream.
The default value is true. To hide the geographic location, set the value to false.
- ExposeEmailAddress
- Displays email addresses in Profiles. The default value is true. To prevent email addresses from being displayed in Profiles, set the value to false.
- DefaultApplication
- Application that is displayed when a user logs in.
The default value is Updates.
Other possible values are Activities, Blogs, Bookmarks, Communities, Files, Forums, Profiles, and Wikis.
- AllowCopyandPaste
- Allows the copying and pasting of text throughout the application.
The default value is true. To prevent copying and pasting, set this value to false.
- Updates
- Displays updates in the Home page. The default value is true. To prevent updates from being displayed, set the value to false.
- RememberPassword
- Allows passwords to be saved. The default value is true. To prevent passwords from being saved, set the value to false.
- AllowiTunesSharing (iOS only)
- Allows documents to be shared when you sync your device with iTunes.
The default value is true.
This setting does not apply to the application log files.
- WebClientAccess
- Allows access for users with earlier mobile browsers such as trackball-equipped Nokia Symbian or RIM BlackBerry smartphones. The default value is true. When you disable web access, users who attempt to access the website see a message that directs them to download the Connections native app. The URL points to the marketplace appropriate for the user’s device.
For a richer user experience, disable web client access so that users are directed to the native app instead.
Activities properties
- enabled
- Enable the application by default. To disable the application, set the value to false.
- displayInLauncher
- Allows the application to be displayed in the launcher by default. To hide it, set the value to false.
- PublicActivities
- Allows public activities to be shown by default. To hide public activities, set the value to false.
Not supported in BlackBerry.
Blogs properties
- enabled
- Enable the application by default. To disable the application, set the value to false.
- displayInLauncher
- Allows the application to be displayed in the launcher by default. To hide it, set the value to false.
- PublicBlogs
- Allows public blogs to be shown by default. To hide public blogs, set the value to false.
Not supported in BlackBerry.
Bookmarks properties
- enabled
- Enable the application by default. To disable the application, set the value to false.
- displayInLauncher
- Allows the application to be displayed in the launcher by default. To hide it, set the value to false.
- PublicBookmarks
- Allows public bookmarks to be shown by default. To hide public bookmarks, set the value to false.
Not supported in BlackBerry.
Communities properties
- enabled
- Enable the application by default. To disable the application, set the value to false.
- displayInLauncher
- Allows the application to be displayed in the launcher by default. To hide it, set the value to false.
- PublicCommunities
- Allows public communities to be shown by default. To hide public communities, set the value to false.
Not supported in BlackBerry.
- AllowAddMembers
- Allows people to be added to a community. The default value is true.
Files properties
- enabled
- Enable the application by default. To disable the application, set the value to false.
- displayInLauncher
- Allows the application to be displayed in the launcher by default. To hide it, set the value to false.
- PublicFiles
- Allows public files to be shown by default. To hide public files, set the value of this property to false.
Not supported in BlackBerry.
- ShareWithPublic
- Allows files to be shared with everyone. The default value is true. To hide files from public view, set the value to false.
Not supported in BlackBerry.
- AllowDownloads
- Allows files to be downloaded to a mobile device. The default value is true. To prevent downloads, set the value to false.
- AllowUploads
- Allows files to be uploaded from a mobile device. The default value is true. To prevent uploads, set the value to false.
- AllowEncryption
- Encrypts files that are downloaded to the device. The default value is true. To prevent encryption, set the value to false.
- AllowExport
- Allows files to be exported to specific folders on the device.
The default value is true. To prevent the exporting of files, set the value to false.
Not supported in BlackBerry.
- AllowExportToDeviceGallery
- Allows files to be exported to the device gallery on the device.
The default value is true. To prevent the exporting of files to the device gallery, set the value to false.
Not supported in BlackBerry.
Forums properties
- enabled
- Enable the application by default. To disable the application, set the value to false.
- displayInLauncher
- Allows the application to be displayed in the launcher by default. To hide it, set the value to false.
- PublicForums
- Allows public forums to be shown by default. To hide public forums, set the value to false.
Not supported in BlackBerry.
Profiles properties
- AllowEditProfile (iOS only)
- By default, users can edit their profiles. To prevent the editing of profiles, set the value to false.
- displayInLauncher
- Allows the application to be displayed in the launcher by default. To hide it, set the value to false.
- enabled
- Enable the application by default. To disable the application, set the value to false.
- Upload
- By default, users can upload Profiles pictures. To prevent the uploading of Profiles pictures, set the value to false.
Search properties
- GlobalSearch
- Enables global searching by default. To disable searching across all of Connections, set the value to false. Not supported in BlackBerry.
Security management properties
To enable security management of the Connections native apps, change the default value of the MobileAdmin property.
- MobileAdmin
- Mobile security administration is disabled by default. To enable it, set the value of the MobileAdmin property to true.
- ServiceLocation
- Location of the security management service for Mobile. By default, the ServiceLocation property is null. A null value indicates that the security management service is collocated with the Connections server.
Change the value of this property only if the security management service is deployed on a domain different from Connections.
For example, if Connections is hosted at https://example.com and the security management service is hosted at https://example.org, change the value of the ServiceLocation property to https://example.org.
If you specify a domain for the security management service in this property, you must enable single sign-on between this domain and the Connections domain.
Wikis properties
- enabled
- Enable the application by default. To disable the application, set the value to false.
- displayInLauncher
- Allows the application to be displayed in the launcher by default. To hide it, set the value to false.
- PublicWikis
- Public Wikis are shown by default. To hide public wikis, set the value to false.
Not supported in BlackBerry.
Application customization properties
You can specify customized application labels and the app name in a properties file. Give the file a name such as mobile.properties. The file must be stored under the SHARED_DATA/customization/strings directory.
The following list shows examples of properties that can be specified in the file:
appname.pluraltitle=Connections
activities.singulartitle=Activity
activities.pluraltitle=Activities
blogs.singulartitle=Blog
blogs.pluraltitle=Blogs
bookmarks.singulartitle=Bookmark
bookmarks.pluraltitle=Bookmarks
communities.singulartitle=Community
communities.pluraltitle=Communities
files.singulartitle=File
files.pluraltitle=Files
forums.singulartitle=Forum
forums.pluraltitle=Forums
profiles.singulartitle=Profile
profiles.pluraltitle=Profiles
wikis.singulartitle=Wiki
wikis.pluraltitle=Wikis
The pluraltitle extension is used in the Home page screen where the applications are listed. The singulartitle extension is used elsewhere in the app, such as in error messages.
For multiple locale support, create a properties file for each locale and store it in the same folder as the English-language properties file.
For example, add a mobile_fr.properties file for the French locale.
- enabled
- App customizations are disabled by default. To enable them, set the value to true in the Customizations element.
- CustomizationLocation
- Specifies the name of the customization properties file.
Specify the file name without the .properties extension.
For example, enter mobile, not mobile.properties.
- appname.title
- Represents the customized app name. If this property is specified in the properties file, it overrides the value of the Appname element in mobile-config.xml; otherwise, the Appname value is used.
The following list shows an example of customized properties:
appname.pluraltitle=Connections
activities.singulartitle=Task
activities.pluraltitle=Tasks
blogs.singulartitle=ContentShare
blogs.pluraltitle=ContentShare
bookmarks.singulartitle=URL
bookmarks.pluraltitle=URLs
communities.singulartitle=Teamroom
communities.pluraltitle=Teamrooms
files.singulartitle=Document
files.pluraltitle=Documents
forums.singulartitle=GroupShare
forums.pluraltitle=GroupShares
homepage.singulartitle=Updates
homepage.pluraltitle=Updates
profiles.singulartitle=Blue Page
profiles.pluraltitle=Blue Pages
wikis.singulartitle=Wiki
wikis.pluraltitle=Wikis
Security properties
Not supported in BlackBerry.
- AuthType
- Authentication type. Allowed values are SiteMinder, Form, Basic, and SPNEGO. There is no default value.
- enabled
- The security settings are disabled by default. To enable them, set the value to true.
- InfoPageNegativePathPattern
- Contains a regular expression that can be used to match the URL of a negative response page. The detection of the negative page indicates that the user did not accept the information page.
- InfoPagePathPattern
- Contains a regular expression that can be used to match the URL of the information page. The information page usually states the terms of using the website. Users must agree to the terms before they can proceed.
- InfoPagePositivePathPattern
- Contains a regular expression that can be used to match the URL of a positive response page. The detection of the positive page indicates that the user accepted the information page.
- LoginFormName
- Defines the login form name in the custom authentication form.
- LoginUrlContext
- Defines the login URL context for the custom authentication form.
- LoginErrorUrlContext
- Defines the error URL login context.
- PasswordFieldName
- Password field name in the custom authentication form.
- RejectUntrustedCertificates
- When set to true, the app rejects any untrusted certificates that are presented to it and denies app access. The default value is false.
- UseridFieldName
- Defines the user field name in the custom authentication form.
Extensibility properties
Use these properties to add new applications to the Home page.
- name
- Application name.
- enabled
- Show or hide the application.
- ApplicationIcon
- Location of the images for the platforms and densities. Images must be stored under the SHARED_DATA/customization directory.
(iOS only) You can find the default icons for Regular and Retina at the following location:
node/installedApps/cell/Mobile.ear/mobile.web.war/extensibilityIcons
- DefaultLocation
- The default image for the web client; it is used if an image is not set for any of the density fields.
- ApplicationLabel
- Label for the new application.
- ApplicationURL
- Web address or the native URL of the application. Connections uses this URL to start the new application.
The following excerpt from a sample mobile-config.xml file demonstrates how to specify a new application:
<Applications>
  <Application name="ApplicationName" enabled="true">
    <ApplicationIcon>
      <Android>
        <Hdpi>/images/ibmhdpi.jpg</Hdpi>
        <Mdpi>/images/ibmmdpi.jpg</Mdpi>
        <Ldpi>/images/ibmldpi.jpg</Ldpi>
      </Android>
      <IOS>
        <Reg>reg</Reg>
        <Retina>retina</Retina>
      </IOS>
      <BB>
        <HighDensity></HighDensity>
        <MedDensity></MedDensity>
        <LowDensity></LowDensity>
      </BB>
      <DefaultLocation>/images/ibm50.jpg</DefaultLocation>
    </ApplicationIcon>
    <ApplicationLabel>IBM</ApplicationLabel>
    <ApplicationURL>http://www.ibm.com</ApplicationURL>
  </Application>
</Applications>
Hdpi, Mdpi, and Ldpi correspond to high, medium, and low screen densities on Android. Similarly, HighDensity, MedDensity, and LowDensity correspond to the screen densities on BlackBerry. Reg and Retina are the screen densities on iOS.
Use the following standard image sizes for iOS:
- Reg: 24 x 24 pixels, 72 pixels per inch
- Retina: 48 x 48 pixels, 72 pixels per inch
Configure access with client certificates
Configure the Connections mobile app to allow client certificate authentication.
If your organization uses client certificates to authenticate access to your Connections deployment, you must distribute certificates, along with instructions, to your mobile users.
Importing a client certificate for an account is a one-time process. When a certificate is associated with an account, the Connections mobile app automatically uses the certificate when it receives a client certificate challenge.
Configure client certificates on Android
Client certificates are supported on Android versions 4.0.x and 4.2.x, which have a common credential storage that can be accessed by all applications on the device. The Connections mobile app retrieves certificates from the common credential storage. Your mobile device management (MDM) provider might support pushing client certificates to mobile devices. If certificates are not deployed in this way, you can deploy them manually.
To import a client certificate...
- Append the .ibmmbd extension to the client certificate p12 file so that the Connections Mobile app can open the file.
For example: cert.p12 becomes cert.p12.ibmmbd.
A .p12 file follows the PKCS #12 standard for storing cryptography objects as a single file. Each .p12 file bundles a private key with a corresponding X.509 certificate.
The Connections app uses the common credential storage mechanism, which means that you can also send PKCS #12 certificates to the user as regular .p12 files. When the user taps on the file, Android attempts to import it into the common credential storage.
- Distribute the .ibmmbd file to your mobile users.
Send the file by email or add it to a website that can be accessed from a mobile device.
- Provide the following instructions to your mobile users:
- Transfer the .ibmmbd file to your mobile device.
- From your device, tap on the .ibmmbd file and select Open in Connections. The Connections app prompts the user to enter the password for the certificate.
- Import the certificate. A confirmation message verifies that the certificate was successfully imported.
- Open the Connections mobile app and create an account.
When prompted, select the certificate that you imported and enter the password.
If the user's email app cannot store the .ibmmbd file in the common credential storage, provide these alternative instructions:
- From your email app, save the file to the SD card.
- Open Connections and go to Settings.
- Select Certificates, and then select Import from SD Card. The Connections app scans the SD card for .ibmmbd files.
If a valid file is found, Connections imports the client certificate.
If no certificates are found, an error message is displayed.
When logging in to a server that requires a client certificate, the user is prompted to select a certificate from the common credential storage. If no certificates are found, the user can import one. If there are no valid certificates on the device, the login procedure stops.
Configure client certificates on iOS
Most mobile device management (MDM) products can push client certificates to the iOS device. However, because of iOS security restrictions, the Connections app cannot access these certificates. To work around this restriction, you can import client certificates into the Connections app's keychain.
To import a client certificate on an iOS device...
- Append the .ibmmbd extension to the client certificate p12 file so that the Connections Mobile app can open the file.
For example: cert.p12 becomes cert.p12.ibmmbd.
If you do not append the .ibmmbd extension, iOS installs the .p12 file to the iOS Settings app instead of the Connections app. In that case, the Connections app cannot use the certificate to access the server.
A .p12 file follows the PKCS #12 standard for storing cryptography objects as a single file. Each .p12 file bundles a private key with a corresponding X.509 certificate.
- Distribute the .ibmmbd file to your mobile users.
Send the file by email or add it to a website that can be accessed from a mobile device.
If you distribute the .ibmmbd file from a website, define an application/octet-stream mime type on the web server for the .ibmmbd extension.
If the mime type is not defined, iOS reads the contents of the .ibmmbd file, decides that the file is a certificate, and sends it to the iOS Settings app.
- Provide the following instructions to your mobile users:
- Transfer the .ibmmbd file to your mobile device.
- From your device, tap on the .ibmmbd file and select Open in Connections. The Connections app prompts the user to enter the password for the certificate.
- Import the certificate. A confirmation message verifies that the certificate was successfully imported.
- Open the Connections mobile app and create an account. When prompted, select the certificate that you imported and enter the password.
Configure security for the mobile application
Using the Mobile Administration console, you can control which users and mobile devices can access your Connections servers. In addition to denying access to specific users or specific devices, you can wipe Connections data from devices, restore access to users or devices, and set password policies.
Use this built-in security management capability if you do not already have a security management solution for mobile devices.
Ensure that you created the database for the Mobile application. Verify that the database exists by completing the following steps:
- Log in to WAS admin console on the system that hosts the Deployment Manager.
- Click Resources > JDBC > Data sources.
- Search the page for the mobile data source.
If it is not present, install the MOBILE database.
Register devices
When a device connects to a Connections server, it is automatically registered in the MOBILE database. You can view the inventory of every device and user that has accessed the Connections Mobile application.
If a device connecting to a 4.0 server uses a version of the native app that is earlier than the latest release, the user is prompted to update the app.
Audit history
You can view the security and ID details of each registered device by clicking on the device entry in the console. The audit history also shows who issued requests for changes to access and other configuration settings.
OS capabilities
Some operating systems provide different capabilities for security management of devices:
- Android: All the features provided in the Mobile Administration console are supported.
- BlackBerry: BlackBerry provides its own security management system through the BlackBerry Enterprise Server, so the Mobile Administration console does not provide a password policy for BlackBerry.
- iOS: The Connections Mobile password policy is not supported.
Enable mobile security management
Configure Connections to enable security management for the Mobile application.
To begin configuring security management for the Mobile application, you need to enable the MobileAdmin settings and then map a user to the Mobile administrator role.
The Mobile Admin console is located at https://host:port/mobileAdmin/login
where:
- host is the full host name or IP address of your Connections deployment.
- port is the port number for SSL traffic.
To enable mobile security management...
- Set the value of the MobileAdmin property in mobile-config.xml to true.
If your mobile security management service is deployed on a different domain than Connections, update the ServiceLocation property with the address of the domain that hosts the security management service.
- Log in to WAS admin console on the Deployment Manager.
- Select Applications > Application Types > WebSphere enterprise applications.
- Click the link to the Mobile Administration application.
- Click the Security role to user/group mapping link.
- Select the check box for the admin role and then click Map Users.
- In the Search String box, type the name of the person whom you would like to set as an administrator, and then click Search. If the user name exists in the LDAP directory, it is found and displayed in the Available box.
- Select the name from the Available box and then move it into the Selected column by clicking the move arrow.
- Click OK.
- From the Enterprise Applications > <application> > Security role to user/group mapping page, click OK and then click Save.
- Synchronize and restart all your WebSphere Application Server instances.
- Verify that the mapped user has access by logging into the Mobile Admin console at https://host:port/mobileAdmin/login.
Deny access to a device
Use the Mobile Administration console to control which mobile devices can access your Connections servers. When users log in for the first time, they and their devices automatically have access. However, you can deny access to specific devices. Conversely, you can restore access to devices that you previously blocked.
You would typically deny access to a device if it is lost but you believe that it might be found. If a device is irretrievably lost or stolen, you would deny access and also wipe Connections data from the device.
To deny access to a device...
- Log in to the Mobile Admin console and click Device Security.
- Search for the device to block:
- Select User Name, Device Name, or Device Id from the Filter menu.
- Enter search terms in the Enter the Search Terms box and press Enter or click the Search icon.
- Select the check box for the device to block and click Deny Access.
You can configure multiple devices by selecting the check box for each device to block.
- Confirm your choice.
The view refreshes and the Access column displays Deny in the row for the blocked device. To see the full audit history of the device, click the row to display the Device Details for the device.
Results
The blocked device can no longer log in to Connections. The user receives a message that the device is blocked.
To restore access for a blocked device, select the check box for the device and click Allow Access.
Deny access to a user
Using the Mobile Administration console, you can control which users can access your Connections servers. When users log in for the first time, they and their devices automatically have access. However, you can deny access to specific users. Conversely, you can restore access to users whom you previously blocked.
To deny access to a user...
- Log in to the Mobile Admin console and click Users.
- Enter search terms for a specific user in the Enter the Search Terms box and press Enter or click the Search icon.
- Click the row for the user to open the User Details record and then click Deny Access.
- Confirm your choice.
The view refreshes and the Access allowed column displays Deny in the row for the blocked user.
Results
The blocked user can no longer log in to Connections. When the blocked user tries to connect to the Connections server, a message is displayed to indicate that the user is denied access.
To restore access for a blocked user, click the row for the user and click Allow Access.
Wipe app data from a device
If a device is lost or stolen, you can remove all the Connections data on the device remotely. You might also want to wipe the app data for performance reasons.
When you issue the command to wipe a device, the operation deletes the local Connections databases and files on the device and any cached preference data. There is no warning to the device user. After the Connections data is removed, the Connections app restarts. In addition, the device can no longer log in to Connections.
You cannot wipe the entire device or return it to the default factory state. To issue a remote command that wipes the entire device, consider using a mobile device management product such as IBM Endpoint Manager for Mobile Devices.
On Android devices, the wipe command also removes Connections data from the SD card, if present.
To wipe Connections data from a device...
- Log in to the Mobile Admin console and click Device Security.
- Search for the device to wipe:
- Select User Name, Device Name, or Device Id from the Filter menu.
- Enter search terms in the Enter the Search Terms box and press Enter or click the Search icon.
- Select the check box for the device to wipe and click Wipe Device.
You can configure multiple devices by selecting the check box for each device to wipe.
- Confirm your choice.
The view refreshes and the Wipe Status column displays Requested in the row for the selected device. To see the full audit history of the device, click the row to open the record for the device.
Results
The app data on the device is remotely wiped and the app restarts. The Wipe Status column in the Mobile Admin console displays Succeeded.
You can reset the wipe status of a device: select the device and click Clear Wipe. This command clears the Wipe Status field and the device can log in to the Connections server again.
Set a password policy for Mobile
Set a password policy for accessing the Mobile application.
A password policy is a set of rules that controls how passwords are used and administered for the Mobile application. These rules can ensure that users change their passwords periodically and that their passwords meet your organization’s password complexity requirements. The rules can also restrict the reuse of old passwords and ensure that users are locked out after a specific number of failed login attempts.
When the Mobile application is installed, there is no password policy in effect until you create one.
When a password policy is in use, users of devices that run the Android V3 or higher operating system must complete the following steps when they want to uninstall the Connections app:
- Go to Settings > Security > Device Administrators.
- Clear Connections Security.
- Uninstall the app.
To set or change a password policy...
- Log in to the Mobile Admin console and click Device Settings.
- Click the link to one of the supported operating systems.
For example: Android
- To delete the current policy, click Delete Policy.
You cannot restore a deleted policy; create a new policy if required.
- To disable the current policy, click Disable Policy.
A disabled policy still exists so that you can edit it, if required. When you re-enable a policy, it is applied to devices from that point.
- Modify the policy settings.
Hover the cursor over a data field to display helpful information about the corresponding entry.
- Policy Name
- Enter a name for the policy.
- Password Type
- Type of password that users can choose; for example: numeric, alphanumeric, or complex. Complex passwords are supported in Android V3.0 and higher.
- Minimum Password Length
- Minimum number of characters that a password must contain. The allowed range is 0-64. The default is zero; a value of zero means that there are no restrictions.
- Maximum Number of Failed Attempts
- The number of failed login attempts that are allowed. The maximum value is 16. A value of zero means that there are no restrictions. If a user exceeds the maximum number of login attempts that is specified by this setting, all the data on the device is wiped and the factory settings are applied.
- Auto-Lock (in minutes)
- Number of minutes of inactivity that must elapse before the device is locked. The maximum value is 60. A value of zero means that there are no restrictions.
- Password Expiration Timeout (in days)
- Number of days that can elapse before passwords must be changed. The maximum value is 730. A value of zero means that there are no restrictions. Users receive notification 24 hours before their passwords expire.
- Password History Count
- Number of unique passwords that are required before one can be reused. The range is 0-50. A value of zero means that there are no restrictions.
- Click Save to save your changes. Alternatively, click Refresh to discard your changes.
Set expiry limits for users and devices
Specify when data about users and their devices is purged from your database.
When data about users and their devices reaches a defined age, you can purge it from the Mobile database. You can also purge audit events; that is, records of changes to user or device records.
When a user is purged from the database, all the user’s devices are also purged.
To set or change expiry limits...
- Log in to the Mobile Admin console and click System Settings.
- Select the Enable System Settings check box, if it is not already selected.
- Enter a value for the number of days of inactivity for users in the Inactive User Reap Interval box.
Any users who are inactive for longer than this number of days are purged from the database. The maximum value is 999. A value of zero means that users are never purged from the database.
- Enter a value for the number of days of inactivity for audit events in the Audit Event Purge Interval box.
Any audit events older than this number of days are purged from the database. The maximum value is 1500. A value of zero means that audit events are never purged from the database.
- Click Save to save your changes. Alternatively, click Refresh to discard your changes.
User and device data
Reference information about users of the native app and their mobile devices.
About users and devices
The Mobile Admin console provides various views of the users and devices that are registered to use the Mobile application.
Users
To view user data, log in to the Mobile Admin console and click Users. The console displays the following user data:
- User Name
- Login name of the user.
- User State
- Whether the user is online or offline. If the user has logged in at any time in the previous 24 hours, the state is given as online.
- Access Allowed
- The user’s current access status.
Click on a column to sort the table.
To filter the table, select a value from the Filter menu, enter a search term, and click the Search icon or press Enter. You can use * as a wildcard search term.
Click on a user record to open it. From the user details view, you can deny access to the user. Conversely, if the user is already denied access, you can restore access from this view.
Devices
To view device data, log in to the Mobile Admin console and click Devices. The console displays the following device data:
- User Name
- The user name associated with the device.
- Device Name
- The name of the device.
- Last Access Check
- The date and time when the device most recently logged into Connections.
- Device OS
- The operating system on the device.
- Device OS Level
- The version of the operating system.
- Build Level
- Represents the version of the Connections native app that is connecting to the server.
Click on a column to sort the table.
To filter the table, select a value from the Filter menu, enter a search term, and click the Search icon or press Enter. You can use * as a wildcard search term.
Click on a device record to open it. From the device details view, you can deny access to the device. Conversely, if the device is already denied access, you can restore access from this view. Similarly, you can wipe the device or clear a scheduled wipe.
Mobile reference card for Connections
Common functions
Table 6. Common functions
Tasks:
- Return to the Home screen.
- Search for items in the current application.
- Change the view in the current application.
- Display or hide tags. Tap a tag to search for items with that tag.
- Sort items in the current application.
- Expand or collapse an entry or part of an entry to show or hide its content.
- Take a photo or video with your device:
- Choose a picture
- Choose a video
- Take a picture
- Take a video
Uploading files:
- Name: Give the file a name.
- Tags: Add tags to make the file easy to categorize and find.
- Share with: Make the file private or publicly available.
Notes:
- The camera function is available only with the native app for your device.
- On BlackBerry devices, take a picture or video by selecting the native Choose a picture or Choose a video dialog.
When you work on the applications within a community, the Places icon is displayed. The Places menu lists the applications that are available in the community that you are currently viewing, including the following default applications:
- Activities
- Blog
- Bookmarks
- Feeds
- Files
- Forums
- Members
- Wikis
- Updates
Log out: Tap to log out of Connections.
Settings
Configure settings such as server details and login credentials. Where applicable, work with log files and view your download history.
The Settings screens are available only from native apps. You cannot access these settings from the web application.
Table 7. Settings
Android
To show the Settings screen on your Android device, tap the device menu button. From this screen, you can select the following options:
- Account:
- Server: view or edit the URL of the server where Connections is hosted. Enter the URL in the following format: http://servername/mobile.
- Remember Password: select or clear this check box to save or clear your password.
- My Profile: go to your Profile page.
- Log Out: end your current session and log out of Connections.
- Download history: view a list of all the files that you downloaded.
- Logging:
- Enable logging: Select or clear the check box to enable or disable logging.
- Log size: The default size limit is 2 MB. Enter a new size limit if necessary.
- View logs: You have further options to refresh, clear, and export the log files.
- Send logs: Email the log files to your Connections administrator. Your device might display several applications that can send email. Select an appropriate application and send the files.
- About:
- Follow the link to view the Mobile Reference Card in the Connections wiki.
- Read about this version of Connections.
BlackBerry
To show the Settings screen on your BlackBerry device, tap the device menu button and select Settings. From this screen, you can select the following options:
- My Account:
- Server: view or edit the URL of the server where Connections is hosted.
Enter the URL in the following format: http://servername/mobile.
- Remember Password: select or clear this check box to save or clear your password.
- My Profile: go to your Profile page.
- Log Out: end your current session and log out of Connections.
- About:
- Follow the link to view the Mobile Reference Card in the Connections wiki.
- Read about this version of Connections.
iOS
To show the Settings screen on your iPhone or iPad, tap the Settings icon. From this screen, you can select the following options:
- Account:
- Server: view or edit the URL of the server where Connections is hosted.
Enter the URL in the following format: http://servername/mobile.
- My Profile: go to your Profile page.
- Log Out: end your current session and log out of Connections.
Tap the Information icon to go to the About screen.
About:
- Follow the Help link to view the Mobile Reference Card in the Connections wiki.
- Send Logs: Email the log files to your Connections administrator.
- Reset Logs: Clear all log files.
- Read about this version of Connections.
Home screen
When you connect from your mobile device, the default application is the Home screen from where you can access any Connections application.
Updates
The Updates page shows an aggregated view of the latest status updates and news feeds from your network of contacts.
Table 8. Updates
Update: Update your status.
Status Updates: Check the status of people whom you are following or who are in your network. Your own status updates are also displayed here. Filter status updates on the following types:
- Network and Following
- My Network
- I'm Following
- All Status Updates
- My Status Updates
You can add comments to status updates. When a status update has more than two comments, tap Show x more comments to see all the comments for that status update, where x is the number of additional comments. When you add a comment, the view changes to show that comment thread at the beginning of the list of status updates.
You can delete your own status updates and comments.
News Feed: Read the latest news from people in your network and save interesting stories to your Saved Stories list.
Saved Stories: News stories that you saved earlier.
About: Read about this version of Connections. Tap the Documentation link to visit the product documentation wiki.
Activities
Table 9. Activities
Change view to:
- Updates
- My Activities
- To Do List
View the different parts of an Activity:
- Recent Updates
- Sections
- To Do Items
- Members
- Description
Actions: Within an Activity, perform the following actions:
- Mark Complete
- Edit Activity
- Delete Activity
- Medium Priority
- Normal Priority
- High Priority
The available Priority settings depend on the current Priority of the Activity. If the Activity is currently marked as High Priority, you can select either Normal or Medium Priority from this list.
Search Activities by:
- Public Activities
- My Activities
- My Todos
Add:
- Start an activity
- Add a Section
- Add an Entry
- Add a To Do
Expand or collapse an Activity or part of an Activity to show or hide its content.
Sort Activities:
- By Date
- By Title
- By Priority
Blogs
Table 10. Blogs
View Blogs, updates, notifications, and recommendations:
- Updates
- My Blogs
- My Recommendations
- Latest Blog Entries
- Featured Blog Entries
- Public Blogs
- Notifications Received
- Notifications Sent
Add: If you are browsing the My Blogs page, tap Add to start a new blog. If you are browsing one of your blogs, tap Add to create an entry for that blog.
Actions: Choose a blog action:
- Start a blog
- Edit a blog
Search all blogs, your blogs, or the currently open blog.
Expand a blog entry to:
- Add a comment
- Recommend this entry
- Edit (if you are the entry author)
- Delete (if you are the entry author)
Sort blog entries:
- By Date
- By Title
- Most recommended
- Most Commented
- Most Visits
Bookmarks
Table 11. Bookmarks
Change view to:
- Updates
- My Bookmarks
- My Watchlist
- Public Bookmarks
- Most Bookmarked
- Most Notifications
- Most Active People
- Notifications Received
- Notifications Sent
Expand a Bookmark entry to sort:
- By Date
- By Popularity
Search bookmarks:
- Search public bookmarks
- Search my bookmarks
Add: Create a Bookmark.
Display or hide tags.
Work on My Bookmarks: When your own bookmark is active, you can perform the following actions on it:
- Edit
- Delete
- Notify Other Users
- More Actions (add your bookmark to an Activity, Blog, or Community)
Work on Public Bookmarks: When a public bookmark is active, you can perform the following actions on it:
- Add to My Bookmarks
- Notify Other Users
- More Actions (Add your bookmark to an Activity, Blog, or Community or flag as a broken URL.)
Sort bookmarks:
- By Date
- By Popularity
Communities
Table 12. Communities
Change view to:
- Updates
- My Communities
- Public Communities
When a public community is active, you have the following options:
- Join Community
- Leave
When a community that you belong to is active, you can select from the applications that are available in that community, including the following default applications:
- Updates
- Bookmarks
- Feeds
- Files
- Forums
- Ideation Blog
- Media Gallery (You can view images and videos in the gallery but you cannot add any until you join the community.)
- Members
Ideation Blog options:
- Updates
- My Blogs
- My Recommendations
- Latest Blog Entries
- Featured Blog Entries
- Public Blogs
- Notifications Received
- Notifications Sent
Ideation Blog actions: When an ideation blog is active, you have the following options:
- Add a Comment
- Add Vote
- Edit
- Delete
Search communities:
- Search public communities
- Search my communities
- Search by email
Sort communities:
- By Date
- By Popularity
- By Title
Files
Table 13. Files
Change view to:
- Updates
- My Files
- Files Shared With Me
- Files Shared By Me
- Public Files
- My Folders
- Folders Shared With Me
- Public Folders
Select an action to perform on the active item. The available actions depend on the state of the active item.
- Download
- Share
- Add to Folder
- Follow | Stop Following
- Edit Properties
- Add Comment
- Move to Trash
Sort Files. The Files category determines which of the following filters are available:
- Shared
- Name
- Updated
- Created
- Downloads
- Comments
- Recommendations
Search Files:
- My Files
- Files Shared With Me
- All Files
- Files Belonging To ...
Actions: Perform the following actions on an active file:
- Download
- Share
- Add to Folder
- Add Recommendation
- Follow
Find out more about this file:
- About
- Comments
- Sharing
- Versions
Forums
Table 14. Forums
Change view to:
- Updates
- I'm Following
- I'm a Member
- I'm an Owner
- Public Forums
Profiles
Table 15. Profiles
Change view to:
- Contact Info
- The Board
- Network
- Report-to Chain
- Same Manager
- Recent Updates
- People Managed (only available if the profile you are viewing belongs to a manager)
Update: Update your profile status.
Search profiles:
- Search by name
- Search by email
- Search by keyword
- Search by tag
Advanced search: Tap Advanced to search Profiles by:
- Name
- Title
- Organization
- City
- State
- Country
- Work
- Telephone
To Do List
Table 16. To Do List
Change view to:
- Assigned to me
- Created by me
- Completed To Dos
- Incomplete To Dos
Actions: Perform the following actions on an active To Do:
- Add a Comment
- Add a To Do
- Edit
- Delete
Search To Dos:
- Public Activities
- My Activities
- My Todos
Wikis
Table 17. Wikis
Change view to:
- Updates
- My Wikis
- Public Wikis
When you are reading a page, this menu changes to:
- Page
- Comments
- Attachments
- About
- Children
- Pages Index
Sort Wikis by:
- Name
- Created
- Updated
In the Pages Index, you can sort by Name, Updated, Visits, Recommendations, or Comments.
Search Wikis:
- Search Public Wikis
- Search My Wikis
Administer the News repository
You administer the News repository by using scripts that you access through the wsadmin client. Changes to News configuration settings require node synchronization and a restart of the Home page server before they take effect.
Events are generated by the different Connections applications whenever an activity occurs in the system. Information about these events is stored in the News repository, and the Home page application pulls data from the repository to display only the events that are relevant to a particular user on that user's Home page. You can configure News configuration settings to control how the information that the Home page receives from the News repository is stored and administered.
Access the News configuration file
To make configuration changes to the News component in Connections, you must first access the News configuration file.
To access configuration files, use the wsadmin client.
See Start wsadmin
To change News configuration settings, start the wsadmin client from the following directory of the deployment manager profile:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory, or subsequent commands that you enter do not execute correctly.
- Start the News Jython script interpreter.
- Access the News configuration file:
execfile("newsAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Check out the News cell-level configuration file
NewsCellConfig.checkOutConfig("working_dir", "cellName")
...where...
- working_dir is the temporary directory to which you want to check out the cell-level configuration file. This directory must exist on the server where you are running wsadmin.
AIX, Linux, and IBM i only: The directory must grant write permissions or the command will not run successfully.
- cellName is the name of the cell that the Home page node belongs to. This argument is required and case-sensitive, so type it with care. To get the cell name, enter the following command from the wsadmin prompt:
print AdminControl.getCell()
For example:
NewsCellConfig.checkOutConfig("d:/temp", "NewsCell01")
The command displays this message:
News Cell Level configuration file successfully checked out.
- Navigate to the temporary directory in which you saved news-config.xml, and then open the file in a text editor and update the following parameters as required.
Table 18. News configuration parameters
- databaseCleanup storyLifetimeInDays
- Interval at which news stories are deleted from the News repository.
- dataSynchronization frequencyInHours
- Interval at which networking data is synchronized between the News repository and the Profiles application.
- NewsDataCleanup task
- Defines the interval at which the databaseCleanup task runs. There is also a configuration parameter to enable or disable this task from running. Do not disable this setting. If you disable it, you run the risk of rapidly reaching your file system storage limit as the database increases in size. Disabling this setting can also result in poor data access performance.
The following parameters are also present in news-config.xml, but their default settings must not be changed.
Table 19. Additional parameters
- EmailDigestDelivery
- A batch job that runs each hour to collect and post the news as daily and weekly emails. This setting must not be changed. Depending on their email notification settings, a daily or weekly email digest is posted to Connections users with the most relevant news. The emails are sent instantly when they are posted as notifications in Connections.
- MetricsCollector
- Used to update the statistics and metrics that are displayed for the Home page and News applications. The metrics collector is a batch process that runs every night.
- NewsCheckUpdatedPersons
- Used internally to discover how often a person's status changes in the system, for example, when a user is marked as active or inactive.
- PersonSpreadTranche
- A scheduled task that load balances the users in the existing tranches that are used by the email digest so that users are spread in a uniform way according to their mail domain. You can run this task manually using the NewsEmailDigestService.loadBalanceEmailDigest() command.
For more information, see Reallocating and load balancing users according to mail domain.
- After making changes, check the configuration file back in. You must check it in during the same wsadmin session in which you checked it out for the changes to take effect.
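For orientation, the complete edit cycle described above amounts to the following wsadmin sequence. The working directory and cell name are illustrative values only; the actual edits to news-config.xml are made in a text editor between check-out and check-in:
execfile("newsAdmin.py")
print AdminControl.getCell()                               # returns the cell name, for example NewsCell01
NewsCellConfig.checkOutConfig("/opt/temp", "NewsCell01")   # copies news-config.xml to /opt/temp
Edit /opt/temp/news-config.xml, and then check the file back in as described in the next topic.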
Apply property changes in the News repository
After making changes to News configuration settings, check in the configuration settings and restart the server to apply the changes.
Perform the check-in in the same wsadmin session in which you checked out the files.
- Complete your configuration changes.
- Check in the changed configuration property keys using the following wsadmin client command:
NewsCellConfig.checkInConfig()
- Update the value of the version stamp configuration property in LotusConnections-config.xml to force users' browsers to pick up this change.
- To exit the wsadmin client, type exit at the prompt.
- Use the WAS admin console to stop and restart the server hosting the News application.
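As a minimal sketch, the check-in and version stamp steps look like the following from the wsadmin prompt. The LCConfigService commands loaded from connectionsConfig.py are an assumption here (they are the generic Connections configuration commands and are not described in this topic); adjust them to match your deployment:
NewsCellConfig.checkInConfig()
execfile("connectionsConfig.py")
LCConfigService.checkOutConfig("/opt/temp", AdminControl.getCell())
LCConfigService.updateConfig("versionStamp", "")
LCConfigService.checkInConfig()
exit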
News administrative commands
- NewsActivityStreamService
- NewsCellConfig
- NewsEmailDigestService
- NewsMailinService
- NewsMemberService
- NewsMicrobloggingService
- NewsOAuth2ConsumerService
- NewsScheduler
- NewsWidgetCatalogService
NewsActivityStreamService commands
- NewsActivityStreamService.getApplicationRegistration("applicationId")
Return a list of details about the specified application. This command takes a single argument, which is a string that specifies the application ID.
To retrieve this ID, use the command...
NewsActivityStreamService.listApplicationRegistrations()
Results are returned as key value pairs that are separated by commas.
For example:
wsadmin>NewsActivityStreamService.listApplicationRegistrations()
{wikis=wikis, ecm_files=Libraries, communities=communities, profiles=profiles, activities=activities, 3rd_party_id=A Third Party, homepage=homepage, blogs=blogs, forums=forums, files=files, dogear=dogear}
wsadmin>NewsActivityStreamService.getApplicationRegistration("communities")
{imageUrl=http://example.com:9082/connections/resources/web/com.ibm.lconn.core.styles/images/iconCommunities16.png, summary=null, isEnabled=true, secureUrl=https://example.com:9445/communities, secureImageUrl=https://example.com:9445/connections/resources/web/com.ibm.lconn.core.styles/images/iconCommunities16.png, appId=communities, displayName=communities, url=http://example.com:9082/communities}
- NewsActivityStreamService.listApplicationRegistrations()
Return a list of the applications that are registered with the News service. The applications are separated by commas and formatted...
application ID=Display name or description
- NewsActivityStreamService.registerApplication(appId, displayName, url, secureUrl, imageUrl, secureImageUrl, summary, isEnabled)
Register a new application.
Parameters:
- appId
- The ID of the application to register.
- displayName
- The name of the application to display in the product user interface.
- url
- The web address for the application.
- secureUrl
- The secure web address for the application.
- imageUrl
- The web address for an image to associate with the application in the user interface.
- secureImageUrl
- The secure web address for an image to display for the application.
- summary
- A short description of the application.
- isEnabled
- A Boolean string that specifies whether the registration is enabled or disabled.
You must include the appId and isEnabled parameters, but the remaining parameters are optional. After registering an application, you might also need to restart the Home page application for the application to display in the filter list.
For example:
NewsActivityStreamService.registerApplication("testApp", "Test Application", "http://www.test.com/gadget.xml", "https://www.test.com/gadget.xml", "http://www.test.com/image.jpg", "https://www.test.com/image.jpg", "summary", "true")
- NewsActivityStreamService.removeApplicationRegistration(appId)
Removes the specified application registration.
This command takes a single parameter, which is the ID of the application to remove.
For example, the following command removes the registration of the testApp application.
NewsActivityStreamService.removeApplicationRegistration("testApp")
- NewsActivityStreamService.updateApplicationRegistration(appId, field, value)
Updates a particular field associated with an existing, registered application.
Parameters:
- appId
- The ID of the application to update.
- field
- The registration field whose information you want to update. Set this parameter to one of the following values:
- displayName: The updated name of the application.
- url: The updated web address for the application.
- secureUrl: The updated secure web address for the application.
- imageUrl: The updated web address of the image to associate with the application in the user interface.
- secureImageUrl: The updated secure web address of the image to associate with the application in the user interface.
- summary: An updated short description of the application.
- isEnabled: A Boolean string that specifies whether the registration is enabled or disabled.
- value
- The value of the field that you are updating. When you are updating the displayName field, summary, or one of the URL fields, specify the value as a string. When you are updating the isEnabled field, specify a Boolean string.
For example, the following command disables the registration of the testApp application:
NewsActivityStreamService.updateApplicationRegistration("testApp", "isEnabled", "false")
- NewsActivityStreamService.updateApplicationRegistrationForEmailDigest(appId, isEnabled, defaultFollowFrequency, isLocked)
Updates a registered application so that it is enabled for email digest functionality. When you run this command, the updates occur immediately; you do not need to restart the system.
Enabled applications are included on the Email Preferences page, and email notifications are sent to the intended recipients of activities. Users can select the frequency with which they want to be notified about updates, as with the standard Connections applications. Depending on the user's choice, an email notification is sent either immediately or grouped in a daily or weekly newsletter.
Parameters:
You must run the command every time a parameter update is required.
- appId
- The ID of the application to update.
- isEnabled
- A Boolean string that specifies whether the registration is enabled or disabled for email digest sending.
- defaultFollowFrequency
- A value that specifies the default frequency with which users are notified about application updates. The following values are valid: NONE, INDIVIDUAL, DAILY, and WEEKLY.
- isLocked
- A Boolean string that specifies whether email settings for the application are locked or unlocked. Setting this parameter to true enforces the defaultFollowFrequency parameter for all users and overrides individual user settings for the application. When isLocked is set to true, a lock icon is displayed next to the application name on the Email Preferences page, and the radio buttons for selecting the notification frequency for the application are disabled. When isLocked is set to false, users can specify notification frequency settings for the application, and any settings that were previously specified are restored.
For example:
NewsActivityStreamService.updateApplicationRegistrationForEmailDigest("testApp", "true", "DAILY", "false")
NewsCellConfig commands
- NewsCellConfig.checkOutConfig("working_directory", "cell_name")
Checks out the News repository configuration files.
Parameters:
- <working_directory>
- Temporary working directory to which the configuration files are copied. The files are kept in this working directory while you make changes to them.
- <cell_name>
- Name of the IBM WAS cell that hosts the Connections application. To get the cell name, enter the following command from the wsadmin prompt:
print AdminControl.getCell()
For example:
NewsCellConfig.checkOutConfig("/opt/my_temp_dir", "Cell01")
- NewsCellConfig.checkInConfig()
Checks in the News repository configuration files. Run this command from the wsadmin command processor.
NewsEmailDigestService commands
- NewsEmailDigestService.loadBalanceEmailDigest()
Reallocate and load balance the users in the email address groups used by the email digest according to mail domain. This command does not take any parameters.
The command returns the number of users who have been reallocated to different email address groups for load balancing purposes.
For example:
wsadmin> NewsEmailDigestService.loadBalanceEmailDigest()
1603
- NewsEmailDigestService.refreshDefaultEmailPrefsFromConfig()
Refresh the defaultEmailPreferences section of notification-config.xml, including the source attributes, defaultFollowFrequency, and frequencyLocked settings.
Updates to the default email preferences made in the configuration file are applied without the need for a server restart.
This command does not take any parameters.
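For example, to apply updated defaults from notification-config.xml without restarting the server:
wsadmin> NewsEmailDigestService.refreshDefaultEmailPrefsFromConfig()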
NewsMailinService commands
- NewsMailinService.removeReplyToId("replyto address ID")
Removes a single reply-to ID.
Takes a single parameter, a string, that specifies the reply-to ID to delete.
For example:
NewsMailinService.removeReplyToId("c0c7e9bf-32d9-48a7-933c-74794479ebf3")
- NewsMailinService.removeReplyToIdsForUserEmail("user email")
Removes all the reply-to IDs for the user with the specified email address.
Takes a single parameter, a string, that specifies the email address for the user whose reply-to IDs you want to delete.
For example:
NewsMailinService.removeReplyToIdsForUserEmail("mary_smith@example.com")
- NewsMailinService.removeReplyToIdsForUserExtId("user extId")
Removes all the reply-to IDs for the user with the specified external ID.
Takes a single parameter, a string, that specifies the external ID for the user whose reply-to IDs you want to delete.
For example:
NewsMailinService.removeReplyToIdsForUserExtId("91b3897d-b4f8-4d05-3621-50bcaa22d300")
NewsMemberService commands
- NewsMemberService.getMemberExtIdByEmail("email")
- NewsMemberService.getMemberExtIdByLogin("login")
- NewsMemberService.inactivateMemberByEmail("email")
- NewsMemberService.inactivateMemberByExtId("externalID")
- NewsMemberService.syncAllMembersByExtId( {"updateOnEmailLoginMatch": ["true" | "false"] } )
- NewsMemberService.syncBatchMemberExtIdsByEmail("emailFile" [, {"allowInactivate" : ["true" | "false"] } ] )
- NewsMemberService.syncBatchMemberExtIdsByLogin("loginFile" [, {"allowInactivate" : ["true" | "false"] } ] )
- NewsMemberService.syncMemberByExtId("currentExternalId"[, {"newExtId" : "id-string" [, "allowExtIdSwap" : ["true" | "false"] ] } ] )
- NewsMemberService.syncMemberExtIdByEmail("email" [, { "allowInactivate" : ["true" | "false"] } ])
- NewsMemberService.syncMemberExtIdByLogin("name" [, {"allowInactivate": ["true" | "false"] } ])
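The member synchronization commands follow the same calling conventions as the other News commands. As a sketch, invocations look like the following; the email address, login name, and file path are illustrative values only:
wsadmin>NewsMemberService.getMemberExtIdByEmail("mary_smith@example.com")
wsadmin>NewsMemberService.syncMemberExtIdByLogin("msmith", {"allowInactivate": "false"})
wsadmin>NewsMemberService.syncBatchMemberExtIdsByEmail("/opt/temp/emails.txt", {"allowInactivate": "true"})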
NewsMicrobloggingService commands
- NewsMicrobloggingService.deleteMicroblogs("communityId")
Removes all microblog and associated data for a community from the News repository.
Takes a single parameter, a string, that specifies the ID of the community whose microblog data you want to delete.
For example:
NewsMicrobloggingService.deleteMicroblogs("e952cf0c-a86c-4e26-b1e0-f8bf40a75804")
- NewsMicrobloggingService.exportSyncedResourceInfo(filePath, eventType)
Return an XML synchronization report of the community resources held in the News repository. The report contains information about the current state of microblog data in the community activity stream.
Parameters:
- filePath
- Full directory path in which to store the file that is returned by the command. Include the file name in the file path and use forward slashes.
For example: "C:/temp/activity_output.xml"
- eventType
- Type of synchronization event to report about.
The only supported value for this parameter is community. Specify the value in the singular form and in lowercase.
For example:
NewsMicrobloggingService.exportSyncedResourceInfo("C:/temp/news_output.xml","community")
NewsOAuth2ConsumerService commands
- NewsOAuth2ConsumerService.bindGadget(string widgetId, string serviceName, string clientName, string allowModuleOverride)
- Binds a gadget to a client with the specified service name and client name.
- widgetId
- The id of the widget.
- serviceName
- The name to associate with the gadget. The widgetId and service name must create a unique composite key for the deployment.
- clientName
- The name of the client to associate with this gadget.
- allowModuleOverride
- Value is "true" if the gadget overrides the provider default endpoint urls, else "false".
Example:
wsadmin>NewsOAuth2ConsumerService.bindGadget("aad20aa1-c0fa-48ef-bd05-8abe630c0012", "connections_service", "client123", "false")
- NewsOAuth2ConsumerService.browseClient(string providerName, int pageSize, int pageNumber)
- Return a list of Map objects, one for each OAuth 2.0 client registered with the consumer proxy, ordered by provider name in ascending order. The following information is returned for each map object in the returned list:
- clientId: Identifier issued by the authorization server when registering your client.
- clientSecret: Secret issued by the authorization server when registering your client.
- ctype: Client type; "confidential" and "public" are the supported values per the specification.
- grantType: "code" or "client_credentials", per the specification.
- name: Name of the client.
- providerName: Name of the associated provider that was previously registered.
- redirectUri: Client redirection URI.
- providerName
- An optional filter to only browse clients associated with the specified provider.
- pageSize
- The maximum number of clients to list per page. The default value is 20. This parameter is optional.
- pageNumber
- The number of the page to display.
For example, if you specify in the pageSize parameter that each page will have 50 items, page 1 will contain items 1-50. The default value is 1. This parameter is optional.
Example:
wsadmin>NewsOAuth2ConsumerService.browseClient("provider123", 50, 1)
- NewsOAuth2ConsumerService.browseGadgetBinding(string widgetId, string clientName, int pageSize, int pageNumber)
- Return a list of Map objects, one for each OAuth 2.0 gadget binding registered with the consumer proxy, ordered by service name in ascending order. The following information is returned for each map entry in the returned list:
- clientName: Name of the associated client.
- allowModuleOverride: "true" or "false".
- serviceName: Name of the associated service.
- uri: Gadget URI.
- widgetId
- An optional filter to browse bindings only associated with a specific widget.
- clientName
- An optional filter to browse gadgets only associated with the specified client.
- pageSize
- The maximum number of bindings to list per page. The default value is 20. This parameter is optional.
- pageNumber
- The number of the page to display.
For example, if you specify in the pageSize parameter that each page will have 50 items, page 1 will contain items 1-50. The default value is 1. This parameter is optional.
Example:
wsadmin>NewsOAuth2ConsumerService.browseGadgetBinding("aad20aa1-c0fa-48ef-bd05-8abe630c0012", "client123", 50, 2)
- NewsOAuth2ConsumerService.browseProvider(int pageSize, int pageNumber)
- Return a list of Map objects, one for each OAuth 2.0 provider registered with the consumer proxy, ordered by provider name in ascending order. The following information is returned for each map object in the returned list:
- authHeader: "true" or "false".
- authUrl: Authentication URL endpoint for the provider.
- clientAuth: Client authentication method in use.
- name: Name of the provider.
- tokenUrl: Token URL endpoint for the provider.
- urlParam: "true" or "false".
- pageSize
- The maximum number of providers to list per page. The default value is 20. This parameter is optional.
- pageNumber
- The number of the page to display.
For example, if you specify in the pageSize parameter that each page will have 50 items, page 1 will contain items 1-50. The default value is 1. This parameter is optional.
Example:
wsadmin>NewsOAuth2ConsumerService.browseProvider(50, 1)
- NewsOAuth2ConsumerService.countClient(string providerName)
- Return the total number of OAuth 2.0 clients registered with the consumer proxy.
- providerName
- An optional filter to only count clients associated with the specified provider.
Example:
wsadmin>NewsOAuth2ConsumerService.countClient("provider123")
- NewsOAuth2ConsumerService.countGadgetBinding(string widgetId, string clientName)
- Return the total number of OAuth 2.0 bindings registered with the consumer proxy.
- widgetId
- An optional filter to count only bindings associated with a specific widget.
- clientName
- An optional filter to count only bindings associated with the specified client.
Example:
wsadmin>NewsOAuth2ConsumerService.countGadgetBinding("aad20aa1-c0fa-48ef-bd05-8abe630c0012", "connections_servicex")
- NewsOAuth2ConsumerService.deleteClient(string clientName)
- Delete a client by name if it exists, and has no existing associated gadget bindings that leverage this client.
- clientName
- The name of the client to remove.
Example:
wsadmin>NewsOAuth2ConsumerService.deleteClient("client123")
- NewsOAuth2ConsumerService.findClient(string clientName)
- Return a Map with information about the registered OAuth 2.0 client with the specified name.
- clientName
- The client name.
Example:
wsadmin>NewsOAuth2ConsumerService.findClient("client123")
- NewsOAuth2ConsumerService.findGadgetBindingByUri(string gadgetUri, string serviceName)
- Return a Map with information about the registered OAuth 2.0 gadget bindings with the specified gadgetUri and service name.
- gadgetUri
- The uri for the gadget.
- serviceName
- The name associated with the gadget. A gadgetUri and service name create a unique composite key for a gadget in the deployment.
Example:
wsadmin>NewsOAuth2ConsumerService.findGadgetBindingByUri("http://www.acme.com/mygadget", "connections_service")
- NewsOAuth2ConsumerService.findGadgetBindingByWidgetId(string widgetId, string serviceName)
- Return a Map with information about the registered OAuth 2.0 gadget bindings with the specified widget id and service name.
- widgetId
- The id of the widget.
- serviceName
- The name associated with the gadget. A widgetId and service name create a unique composite key for a gadget in the deployment.
Example:
wsadmin>NewsOAuth2ConsumerService.findGadgetBindingByWidgetId("aad20aa1-c0fa-48ef-bd05-8abe630c0012", "connections_service")
- NewsOAuth2ConsumerService.countProvider()
- Return the total number of OAuth 2.0 providers registered with the consumer proxy. There are no parameters.
Example:
wsadmin>NewsOAuth2ConsumerService.countProvider()
20
- NewsOAuth2ConsumerService.deleteProvider(string providerName)
- Delete a provider by name if it exists, and has no existing associated clients or gadget bindings.
- providerName
- The unique provider name.
Example:
wsadmin>NewsOAuth2ConsumerService.deleteProvider("provider123")
- NewsOAuth2ConsumerService.findProvider(string providerName)
- Return a Map with information about the registered OAuth 2.0 provider with the specified name.
- providerName
- The unique provider name.
Example:
wsadmin>NewsOAuth2ConsumerService.findProvider("provider123")
- NewsOAuth2ConsumerService.purgeAllTokens()
- Purge all tokens persisted in the repository. This operation should be executed if the underlying encryption method has been modified.
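Example (the command takes no parameters):
wsadmin>NewsOAuth2ConsumerService.purgeAllTokens()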
- NewsOAuth2ConsumerService.registerClient(string clientName, string providerName, string ctype, string grantType, string clientId, string clientSecret, string redirectUri)
- Register or update an existing OAuth 2.0 client by name with the associated parameters.
- clientName
- The name to associate with the client that must be unique in a deployment.
- providerName
- The name of the registered provider to associate with this client.
- ctype
- The client type. Supported values are "confidential" or "public".
- grantType
- The authorization grant type. Supported values are "code" or "client_credentials".
- clientId
- The identifier issued by the authorization server when registering your client.
- clientSecret
- The secret issued by the authorization server when registering your client.
- redirectUri
- The client redirection URI.
Example:
wsadmin>NewsOAuth2ConsumerService.registerClient("client123", "provider123", "confidential", "code", "my-client", "my-secret", "https://{opensocial}/gadgets/oauth2callback")
- NewsOAuth2ConsumerService.registerProvider(string providerName, string clientAuth, string authHeader, string urlParam, string authUrl, string tokenUrl)
- Register or update an existing OAuth 2.0 provider by name with the associated parameters.
- providerName
- The unique provider name.
- clientAuth
- The client authentication method for accessing this provider. Supported values out of the box are "standard" and "basic" per the specification.
- authHeader
- Value of "true" if credentials must be encoded in the authorization header, otherwise "false".
- urlParam
- Value of "true" if credentials must be specified as query parameters on the URI, otherwise "false".
- authUrl
- The authentication endpoint for the provider.
- tokenUrl
- The token endpoint for the provider.
Example:
wsadmin>NewsOAuth2ConsumerService.registerProvider("provider123", "standard", "true", "false", "???", "???")
- NewsOAuth2ConsumerService.unbindGadget(string widgetId, string serviceName)
- Delete a gadget binding by widgetId and serviceName.
- widgetId
- The id of the widget.
- serviceName
- The name to associate with the gadget. The widgetId and service name must create a unique composite key for the deployment.
Example:
wsadmin>NewsOAuth2ConsumerService.unbindGadget("aad20aa1-c0fa-48ef-bd05-8abe630c0012", "connections_service")
NewsScheduler commands
- NewsScheduler.getTaskDetails(java.lang.String taskName)
Return information about the scheduled task specified by taskName.
The values returned are server time, next scheduled run time, status (SCHEDULED, RUNNING, SUSPENDED), and task name. When the task has been paused, then the status parameter shows as SUSPENDED instead of SCHEDULED. SUSPENDED means that the task is not scheduled to run.
For example:
NewsScheduler.getTaskDetails("NewsDataCleanup")
The resulting output looks similar to the following:
{taskName=NewsDataCleanup, currentServerTime=Fri Mar 12 14:42:25 GMT 2010, nextFireTime=Fri Mar 12 23:00:00 GMT 2010, status=SCHEDULED}
- NewsScheduler.pauseSchedulingTask(java.lang.String taskName)
Temporarily pauses the specified task and stops it from running.
When you pause a scheduled task, the task remains in the suspended state even after you stop and restart News or WAS. You must run the NewsScheduler.resumeSchedulingTask(String taskName) command to get the task running again.
If the task is currently running, it continues to run but is not scheduled to run again. If the task is already suspended, this command has no effect. When the task is paused successfully, a 1 is returned to the wsadmin client. When the task is not paused successfully, a 0 is returned.
For example:
NewsScheduler.pauseSchedulingTask("NewsDataCleanup")
- NewsScheduler.resumeSchedulingTask(java.lang.String taskName)
If the task is suspended, puts the task in the scheduled state.
If the task is not suspended, this command has no effect.
When a task is resumed, it does not run immediately; it runs at the time when it is next scheduled to run.
For example:
NewsScheduler.resumeSchedulingTask("NewsDataCleanup")
When the task is resumed successfully, a 1 is returned to the wsadmin client. When the task is not resumed successfully, a 0 is returned.
NewsWidgetCatalogService commands
- NewsWidgetCatalogService.addWidget(**widget)
- Add a widget to the widget catalog.
- ** widget indicates that this is a free form set of key=value properties. The keys/values map to the Settings available for widgets table previously described.
- Return the ID of the newly created widget.
The following example creates a sample EE gadget that has 'trusted' access policies. This gadget depends on the Profiles component.
NewsWidgetCatalogService.addWidget(, text="Sample gadget description.", url="http://www.to.my.gadget.com/gadget.xml", categoryName=WidgetCategories.NONE, isGadget=TRUE,appContexts=[WidgetContexts.EMBEDXP], policyFlags=[GadgetPolicyFlags.TRUSTED], prereqs=["profiles"])
- NewsWidgetCatalogService.browseWidgets(enablement = Enablement.ALL, pageSize = PAGE_SIZE_UNBOUNDED, pageNumber = 1)
- Browse the widgets in the widget catalog.
- Uses the parameter for enablement (Refer to Enablement).
- Uses the parameter for pageSize.
- Uses the parameter for pageNumber.
- Return a list of Widget objects.
wsadmin>NewsWidgetCatalogService.browseWidgets(Enablement.ALL, 1, 1)
- NewsWidgetCatalogService.countWidgets(enablement = Enablement.ALL)
- Count the widgets in the widget catalog.
- Uses the parameter for enablement (Refer to Enablement).
- Return a count of the number of widgets in the catalog.
- NewsWidgetCatalogService.disableWidget(widgetId)
- Disable the widget that matches the specified widgetId.
- Return the following output:
CLFRQXXXXI: Widget {0} is now disabled.
- NewsWidgetCatalogService.enableWidget(widgetId)
- Enable the widget that matches the specified widgetId.
- Return the following output:
CLFRQXXXXI: Widget {0} is now enabled.
- NewsWidgetCatalogService.findWidgetById(widgetId)
- Find a widget by id.
- Uses the parameter for widgetId.
- Return the matching widget or null if no matching widget is found.
For example:
wsadmin>NewsWidgetCatalogService.findWidgetById("405a4f26-fa08-4cef-a995-7d90fbe2634f")
- NewsWidgetCatalogService.findWidgetByUrl(widgetUrl)
- Find a widget by Url.
- Uses the parameter for url.
- Return the matching widget or null if no matching widget is found.
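For example, where the URL is a hypothetical address of a gadget descriptor:
wsadmin>NewsWidgetCatalogService.findWidgetByUrl("http://www.example.com/gadget.xml")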
- NewsWidgetCatalogService.listShareGadgets(enablement = Enablement.ALL)
- List out the share gadgets. By design, paging is not supported.
- Uses the parameter for enablement (Refer to Enablement).
- Return the share gadgets.
For example:
wsadmin>NewsWidgetCatalogService.listShareGadgets(Enablement.ALL)
- NewsWidgetCatalogService.ProxyPolicy
- Specify the server proxy policy.
- INTRANET_ACCESS May access intranet sites.
- EXTERNAL_ONLY May access external (non-intranet) sites only.
- CUSTOM Uses rules in the rule manager configuration.
- NewsWidgetCatalogService.removeWidget(widgetId)
- Remove a widget matching the widgetId entered.
For example:
wsadmin>NewsWidgetCatalogService.removeWidget("405a4f26-fa08-4cef-a995-7d90fbe2634f")
- NewsWidgetCatalogService.updateWidget(widgetId, **widget)
- Update an existing widget in the widget catalog.
- Uses the parameter for widgetId.
- ** widget indicates that this is a free form set of key=value properties. The keys/values map to the Settings available for widgets table previously described.
wsadmin>NewsWidgetCatalogService.updateWidget("1bf9ad75-a634-4301-88c6-ce493eb03cc9", , text="test")
- NewsWidgetCatalogService.updateWidgetShareOrder(widgetId, orderAfterWidgetId)
- Place the widget marked in a widgetId after a second widget in widget ordering.
- widgetId
- The id of the widget you wish to move.
- orderAfterWidgetId
- The id of the widget you want to place the gadget after. If this is null, the widget will be placed first in the ordering.
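For example, the following hypothetical call, which reuses two of the widget IDs shown in earlier examples purely as illustrative values, places the first gadget after the second in the share ordering:
wsadmin>NewsWidgetCatalogService.updateWidgetShareOrder("405a4f26-fa08-4cef-a995-7d90fbe2634f", "1bf9ad75-a634-4301-88c6-ce493eb03cc9")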
Synchronize News data with other applications
Edit settings in news-config.xml to define the interval at which data from the Connections applications is synchronized with the News repository.
To edit configuration files, use the wsadmin client.
To ensure that the information received by the News repository is analyzed correctly and kept up to date, you need to synchronize data between the other Connections applications and the News repository. The interval at which data is synchronized is specified using the frequencyInHours setting. By default, the synchronization is set to take place every 24 hours.
To configure the data synchronization task, start the wsadmin client from the following directory of the deployment manager profile:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory, or subsequent commands that you enter do not execute correctly.
- Start the News Jython script interpreter.
- Access the News configuration file:
execfile("newsAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Check out the News cell-level configuration file
NewsCellConfig.checkOutConfig("working_dir", "cellName")
...where...
- working_dir is the temporary directory to which you want to check out the cell-level configuration file. This directory must exist on the server where you are running wsadmin.
AIX, Linux, and IBM i only: The directory must grant write permissions or the command will not run successfully.
- cellName is the name of the cell that the Home page node belongs to. This argument is required and case-sensitive, so type it with care. To get the cell name, enter the following command from the wsadmin prompt:
print AdminControl.getCell()
For example:
NewsCellConfig.checkOutConfig("d:/temp", "NewsCell01")
The command displays this message:
News Cell Level configuration file successfully checked out.
- Open news-config.xml in a text editor.
- Locate the section containing the dataSynchronization task and make the necessary changes.
For example, the following code specifies that data is synchronized between the News repository and the Connections applications every 24 hours. The information is copied over only if it hasn't been copied already in the last 24 hours.
<dataSynchronization> <frequencyInHours>24</frequencyInHours> </dataSynchronization>
- After making changes, check the configuration files back in. You must do so in the same wsadmin session in which you checked them out for the changes to take effect.
Manage scheduled tasks for the News repository
Use administrative commands to manage scheduled tasks for the News repository.
To run administrative commands, use the wsadmin client.
See Start wsadmin for details.
You can use the NewsScheduler commands to pause and resume the scheduled tasks for the News repository, and to retrieve information about tasks. The scheduling information is contained in news-config.xml.
The SystemOut.log file also contains information about whether the scheduler is running and whether any scheduled tasks have started.
The News repository uses the WAS scheduling service for performing regular managed tasks.
For more information about how the scheduler works, see Scheduling tasks.
To manage a scheduled task, start the wsadmin client from the following directory of the deployment manager profile:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory, or subsequent commands that you enter do not execute correctly.
- Start the Jython script interpreter for the News repository.
- Access the News configuration file:
execfile("newsAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Administer the scheduler service for the News repository.
- NewsScheduler.getTaskDetails(java.lang.String taskName)
Return information about the scheduled task specified by taskName.
The values returned are server time, next scheduled run time, status (SCHEDULED, RUNNING, SUSPENDED), and task name. When the task has been paused, then the status parameter shows as SUSPENDED instead of SCHEDULED. SUSPENDED means that the task is not scheduled to run.
For example:
NewsScheduler.getTaskDetails("NewsDataCleanup")
The resulting output looks similar to the following:
{taskName=NewsDataCleanup, currentServerTime=Fri Mar 12 14:42:25 GMT 2010, nextFireTime=Fri Mar 12 23:00:00 GMT 2010, status=SCHEDULED}
- NewsScheduler.pauseSchedulingTask(java.lang.String taskName)
Temporarily pauses the specified task and stops it from running.
When you pause a scheduled task, the task remains in the suspended state even after you stop and restart News or WAS. You must run the NewsScheduler.resumeSchedulingTask(String taskName) command to get the task running again.
If the task is currently running, it continues to run but is not scheduled to run again. If the task is already suspended, this command has no effect. When the task is paused successfully, a 1 is returned to the wsadmin client. When the task is not paused successfully, a 0 is returned.
For example:
NewsScheduler.pauseSchedulingTask("NewsDataCleanup")
- NewsScheduler.resumeSchedulingTask(java.lang.String taskName)
If the task is suspended, puts the task in the scheduled state.
If the task is not suspended, this command has no effect.
When a task is resumed, it does not run immediately; it runs at the time when it is next scheduled to run.
For example:
NewsScheduler.resumeSchedulingTask("NewsDataCleanup")
When the task is resumed successfully, a 1 is returned to the wsadmin client. When the task is not resumed successfully, a 0 is returned.
Configure database clean-up for the News repository
Edit settings in news-config.xml to define the interval at which the different database clean-up tasks run and specify when the IBM WebSphere Application Server scheduler starts the tasks.
To edit configuration files, use the wsadmin client.
The database clean-up tasks defined in news-config.xml ensure that content that is out-of-date is periodically removed from the News repository. You can update the following properties for these tasks:
- enabled
- Enables or disables the task. This property takes a Boolean value, true or false. The value must be formatted in lowercase.
- interval
- Specifies the interval at which the task runs. This property is a string value that must be specified in Cron format.
For more information about the Cron schedule, see Scheduling tasks.
If you disable the database clean-up tasks, you run the risk of rapidly reaching your file system storage limit as the database increases in size. Disabling these tasks can also result in poor data access performance.
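The interval values in this file follow the six-field Cron layout that the default settings themselves use: seconds, minutes, hours, day of month, month, and day of week. For example, the defaults shown later in this topic can be read as follows; the last expression is an illustrative value rather than a shipped default:
0 0 23 ? * *     runs every day at 23:00
0 0 4 ? * SAT    runs every Saturday at 04:00
0 30 2 1 * ?     runs at 02:30 on the first day of every month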
To configure database clean-up tasks, start the wsadmin client from the following directory of the deployment manager profile:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory, or subsequent commands that you enter do not execute correctly.
- Start the News Jython script interpreter.
- Access the News configuration file:
execfile("newsAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Check out the News cell-level configuration file
NewsCellConfig.checkOutConfig("working_dir", "cellName")
...where...
- working_dir is the temporary directory to which you want to check out the cell-level configuration file. This directory must exist on the server where you are running wsadmin.
AIX, Linux, and IBM i only: The directory must grant write permissions or the command will not run successfully.
- cellName is the name of the cell that the Home page node belongs to. This argument is required and case-sensitive, so type it with care. To get the cell name, enter the following command from the wsadmin prompt:
print AdminControl.getCell()
For example:
NewsCellConfig.checkOutConfig("d:/temp", "NewsCell01")
The command displays this message:
News Cell Level configuration file successfully checked out.
- Open news-config.xml in a text editor.
- Locate the section of the file containing the database clean-up tasks and make the necessary changes.
- To update the News data clean-up task:
- Enable or disable the task by locating the following section of code and updating the value of the enabled parameter. By default, the task is enabled and runs every day at 23:00.
<task name="NewsDataCleanup" description="Job to clean up the news" interval="0 0 23 ? * *" startby="" enabled="true" scope="cluster" type="internal" targetName="ScheduledTaskService" mbeanMethodName="" serverName="unsupported" > </task>- Specify the interval at which news stories are deleted from the News repository:
For example, the following code specifies that news stories are deleted from the database when they are more than 30 days old.
<databaseCleanup> ... <storyLifetimeInDays>30</storyLifetimeInDays> </databaseCleanup>
- To update the ReplyToId clean-up task:
- Enable or disable the task by locating the following section of code and updating the value of the enabled parameter. By default, the task is enabled and runs weekly.
<!-- This task runs periodically to purge the system of expired ReplyTo Id records --> <task serverName="unsupported" startby="" mbeanMethodName="" targetName="ScheduledTaskService" type="internal" scope="cluster" enabled="true" interval="0 0 4 ? * SAT" description="Job to cleanup Expired ReplyTo Id records" name="ReplyToIdCleanup" > </task>
- Set the expiry date for the ReplyTo IDs that enable users to reply to notifications about forum posts directly in the forum:
For example, the following code specifies that ReplyTo IDs are deleted from the database when they are 365 days old.
<databaseCleanup> ... <replyToIdLifetimeInDays>365</replyToIdLifetimeInDays> </databaseCleanup>
- To update the ReplyToAttachment clean-up task:
- Enable or disable the task by locating the following section of code and updating the value of the enabled parameter. By default, the task is enabled and runs weekly.
<!-- This task runs periodically to remove any replyTo attachments that were not properly removed from the shared data store --> <task serverName="unsupported" startby="" mbeanMethodName="" targetName="ScheduledTaskService" type="internal" scope="cluster" enabled="true" interval="0 0 4 ? * SUN" description="Job to cleanup Expired ReplyTo Attachment Files" name="ReplyToAttachmentCleanup" > </task>
- Specify the number of days to keep mailed-in reply attachments and folders on the file system before they are deleted:
For example, the following code specifies that any resource stored by the system is to be deleted after 7 days.
<databaseCleanup> ... <!-- The number of days before the system will remove any replyTo attachments that were not properly removed from the shared data store. --> <replyToAttachmentLifetimeInDays>7</replyToAttachmentLifetimeInDays> </databaseCleanup>
- After making changes, check the configuration files back in. You must do so in the same wsadmin session in which you checked them out for the changes to take effect.
Reallocating and load balancing users according to mail domain
Use the NewsEmailDigestService.loadBalanceEmailDigest() command to manually reallocate and load balance users in the different email tranches (or groups of email addresses) used by the email digest.
To run administrative commands, use the wsadmin client.
See Start wsadmin for details.
A scheduled task runs every month to load balance the users in the email address groups used by the email digest. This task ensures that users are spread across the groups in a uniform way according to their mail domain. The task is configured in news-config.xml and looks as follows. Note that the default settings should not be modified.
<task serverName="unsupported" startby="" mbeanMethodName="" targetName="ScheduledTaskService" type="internal" scope="cluster" enabled="true" interval="0 0 22 1 * ?" description="Job to spread users in tranche" name="PersonSpreadTranche" > </task>By default, the task runs on the first day of every month at 10:00 p.m. If you do not want to wait for the next scheduled task, you can run the task manually using the NewsEmailDigestService.loadBalanceEmailDigest() MBean command.
To reallocate and load balance users in the existing email address groups used by the email digest, start the wsadmin client from the following directory of the deployment manager profile:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory, or subsequent commands that you enter do not execute correctly.
- Start the Jython script interpreter for the News repository.
- Access the News configuration file:
execfile("newsAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Enter the following command:
- NewsEmailDigestService.loadBalanceEmailDigest()
Reallocate and load balance the users in the email address groups used by the email digest according to mail domain. This command does not take any parameters.
The command returns the number of users who have been reallocated to different email address groups for load balancing purposes.
For example:
wsadmin> NewsEmailDigestService.loadBalanceEmailDigest()
1603
Purge compromised reply-to IDs
Use the NewsMailinService commands to delete compromised reply-to IDs from the system and ensure that replies are received from secure IDs only. If a particular reply-to ID is being misused, you can delete that ID from the system while keeping the user's other valid IDs active.
To run administrative commands, use the wsadmin client.
See Start wsadmin for details.
In Connections, users can reply to a forum post directly from an email notification about the post.
For example, when a forum topic is updated, a notification is sent out to all the people who are following that topic and those people can reply to the topic by clicking a link in the notification.
The notification has a ReplyToNotification ID and each recipient is issued a ReplyToID. This reply-to ID is included in the reply email address and is used to verify the content coming back into the system when the user replies to the notification. If you suspect that a reply-to ID has been compromised, you can remove the ID from the system using the NewsMailinService commands.
For example, when users leave the organization, you might want to remove all their reply-to IDs so that they cannot update a feature by saving an ID and responding to a forum post.
The ReplyToIdCleanup task also runs weekly to purge the system of any reply-to ID records that are out of date. This task removes any IDs that are older than the interval specified by the replyToIdLifetimeInDays property. The expiry period is set to 365 days by default. The ReplyToIdCleanup task removes any ReplyToNotification IDs that have expired so that it is no longer possible for users to reply to the forum topic from the email notification. All related reply-to IDs are also removed as part of the clean-up task. Note that the task does not perform any security checking for compromised or corrupted IDs.
For information about how to configure the ReplyToIdCleanup task, see Configure database clean-up for the News repository.
Reply-to IDs can vary in format but in general they look similar to the following:
id@connections.example.com
id_mailin@connections.example.com
For example:
c0c7e9bf-32d9-48a7-933c-74794479ebf3_replyto@connections.example.com
You can customize reply-to IDs if you want. For instance, instead of using the ID as a prefix as in the example, you can include it as a suffix. For example:
replyto_c0c7e9bf-32d9-48a7-933c-74794479ebf3@connections.example.com
To remove reply-to IDs from the system....
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- Start the Jython script interpreter for the News repository.
- Access the News configuration file:
execfile("newsAdmin.py")If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Use the following commands:
- NewsMailinService.removeReplyToId("replyto address ID")
Removes a single reply-to ID.
Takes a single parameter, a string, that specifies the reply-to ID to delete.
For example:
NewsMailinService.removeReplyToId("c0c7e9bf-32d9-48a7-933c-74794479ebf3")
- NewsMailinService.removeReplyToIdsForUserExtId("user extId")
Removes all the reply-to IDs for the user with the specified external ID.
Takes a single parameter, a string, that specifies the external ID for the user whose reply-to IDs you want to delete.
For example:
NewsMailinService.removeReplyToIdsForUserExtId("91b3897d-b4f8-4d05-3621-50bcaa22d300")
- NewsMailinService.removeReplyToIdsForUserEmail("user email")
Removes all the reply-to IDs for the user with the specified email address.
Takes a single parameter, a string, that specifies the email address for the user whose reply-to IDs you want to delete.
For example:
NewsMailinService.removeReplyToIdsForUserEmail("mary_smith@example.com")
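Because each command takes a single string, clean-up for several departing users can be batched in one wsadmin session. The following Jython sketch uses hypothetical email addresses and only the NewsMailinService command documented above.
# Hypothetical list of users who have left the organization.
departed = ["mary_smith@example.com", "john_doe@example.com"]
for email in departed:
    # Removes every reply-to ID issued to the user with this email address.
    NewsMailinService.removeReplyToIdsForUserEmail(email)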
Register third-party applications
Use the NewsActivityStreamService commands to register third-party applications. An application in this context is an OpenSocial gadget that is compatible with Connections.
To run administrative commands, use the wsadmin client.
See Start wsadmin for details.
You can extend the scope of the Home page activity stream to include updates from third-party applications using the JSON APIs provided with Connections.
If you want third-party applications to display in the filter list in the product user interface, you must register the applications.
You can still post third-party events to the activity stream if the corresponding application is not registered, but you must register the application if you want it to be available from the filter list.
You can also update applications that have already been registered to enable them for email digest functionality. Enabled applications are included on the Email Preferences page, and email notifications are sent to the intended recipients of activities. Users can select the frequency with which they want to be notified about updates, as with the standard Connections applications. Depending on the user's choice, an email notification is sent either immediately or grouped in a daily or weekly newsletter.
To register third-party applications, update or retrieve existing registration information, or enable a third-party application for email digest functionality....
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- Start the Jython script interpreter for the News repository.
- Access the News configuration file:
execfile("newsAdmin.py")If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Use the following commands as needed:
- NewsActivityStreamService.registerApplication(appId, displayName, url, secureUrl, imageUrl, secureImageUrl, summary, isEnabled)
Register a new application.
Parameters:
- appId. The ID of the application to register.
- displayName. The name of the application to display in the product user interface.
- url. The web address for the application.
- secureUrl. The secure web address for the application.
- imageUrl. The web address for an image to associate with the application in the user interface.
- secureImageUrl. The secure web address for an image to display for the application.
- summary. A short description of the application.
- isEnabled. A Boolean string that specifies whether the registration is enabled or disabled.
You must include the appId and isEnabled parameters, but the remaining parameters are optional. After registering an application, you might also need to restart the Home page application for the application to display in the filter list.
For example:
NewsActivityStreamService.registerApplication("testApp", "Test Application", "http://www.test.com/gadget.xml", "https://www.test.com/gadget.xml", "http://www.test.com/image.jpg", "https://www.test.com/image.jpg", "summary", "true")- NewsActivityStreamService.updateApplicationRegistration(appId, field, value)
Updates a particular field associated with an existing, registered application.
Parameters:
- appId. The ID of the application to update.
- field. The registration field whose information you want to update. Set this parameter to one of the following values:
- displayName. The updated name of the application.
- url. The updated web address for the application.
- secureUrl. The updated secure web address for the application.
- imageUrl. The updated web address of the image to associate with the application in the user interface.
- secureImageUrl. The updated secure web address of the image to associate with the application in the user interface.
- summary. An updated short description of the application.
- isEnabled. A Boolean string that specifies whether the registration is enabled or disabled.
- value. The value of the field that you are updating. When you are updating the displayName field, summary, or one of the URL fields, specify the value as a string. When you are updating the isEnabled field, specify a Boolean string.
For example, the following command disables the registration of the testApp application:
NewsActivityStreamService.updateApplicationRegistration("testApp", "isEnabled", "false")- NewsActivityStreamService.updateApplicationRegistrationForEmailDigest(appId, isEnabled, defaultFollowFrequency, isLocked)
Updates a registered application so that it is enabled for email digest functionality. When you run this command, the updates occur immediately; you do not need to restart the system.
Enabled applications are included on the Email Preferences page, and email notifications are sent to the intended recipients of activities. Users can select the frequency with which they want to be notified about updates, as with the standard Connections applications. Depending on the user's choice, an email notification is sent either immediately or grouped in a daily or weekly newsletter.
Parameters:
You must run the command every time a parameter update is required.
- appId. The ID of the application to update.
- isEnabled. A Boolean string that specifies whether the registration is enabled or disabled for email digest sending.
- defaultFollowFrequency. A value that specifies the default frequency with which application updates are notified. The following values are valid: NONE, INDIVIDUAL, DAILY, and WEEKLY.
- isLocked. A Boolean string that specifies whether email settings for the application are locked or unlocked.
Setting this parameter to true enforces the defaultFollowFrequency parameter for all users, and individual user settings for the application are overridden. When isLocked is set to true, a lock icon displays next to the application name on the Email Preferences page, and the radio buttons for selecting the notification frequency for the application are disabled. When isLocked is set to false, users can specify notification frequency settings for the application, and any settings that were previously specified are restored.
For example:
NewsActivityStreamService.updateApplicationRegistrationForEmailDigest("testApp", "true", "DAILY", "false")
- NewsActivityStreamService.removeApplicationRegistration(appId)
Removes the specified application registration.
This command takes a single parameter, which is the ID of the application to remove.
For example, the following command removes the registration of the testApp application.
NewsActivityStreamService.removeApplicationRegistration("testApp")- NewsActivityStreamService.listApplicationRegistrations()
Return a list of the applications that are registered with the News service. The applications are separated by commas and formatted...
application ID=Display name or description
- NewsActivityStreamService.getApplicationRegistration("applicationId")
Return a list of details about the specified application. This command takes a single argument, which is a string that specifies the application ID. Use the NewsActivityStreamService.listApplicationRegistrations() command to retrieve this ID.
Results are returned as key value pairs that are separated by commas.
For example:
wsadmin>NewsActivityStreamService.listApplicationRegistrations()
{wikis=wikis, ecm_files=Libraries, communities=communities, profiles=profiles, activities=activities, 3rd_party_id=A Third Party, homepage=homepage, blogs=blogs, forums=forums, files=files, dogear=dogear}
wsadmin>NewsActivityStreamService.getApplicationRegistration("communities")
{imageUrl=http://example.com:9082/connections/resources/web/com.ibm.lconn.core.styles/images/iconCommunities16.png, summary=null, isEnabled=true, secureUrl=https://example.com:9445/communities, secureImageUrl=https://example.com:9445/connections/resources/web/com.ibm.lconn.core.styles/images/iconCommunities16.png, appId=communities, displayName=communities, url=http://example.com:9082/communities}
Restart the Home page application to pick up your changes. You might also need to restart the Profiles and Communities applications in order for the filter list to display in those applications.
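The registration and email digest commands are often run together when onboarding a gadget. The following Jython sketch is illustrative only: the application ID, URLs, and summary are hypothetical, and it simply chains the commands documented above. As noted earlier, the Home page application might still need a restart before the gadget appears in the filter list.
# Register a hypothetical gadget so that it can appear in the filter list.
NewsActivityStreamService.registerApplication("sampleApp", "Sample Application", "http://apps.example.com/sample/gadget.xml", "https://apps.example.com/sample/gadget.xml", "http://apps.example.com/sample/icon.png", "https://apps.example.com/sample/icon.png", "A sample third-party gadget", "true")
# Enable the same application for the email digest, defaulting users to the daily newsletter
# without locking their individual settings.
NewsActivityStreamService.updateApplicationRegistrationForEmailDigest("sampleApp", "true", "DAILY", "false")
# Confirm the registration details.
print NewsActivityStreamService.getApplicationRegistration("sampleApp")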
Administer microblogs
You can perform a number of administrative tasks to manage the microblogging feature in Connections.
You can control the size and display of microblog entries in your deployment by editing settings in news-config.xml. In the event of a system crash, you can use administrative commands to synchronize microblog data with the Communities database or remove orphaned community microblog data.
Delete microblog data
The News administrator can delete any status update or comment from the Home page, Profiles, and Communities applications by clicking the X icon next to the status update or comment in the user interface.
To delete microblog data, the administrator must be assigned the admin role for the News application.
For information about how to assign a role, see Assigning people to J2EE roles.
Maximum size for microblogs
Edit settings in news-config.xml to set the maximum size of microblogs in your deployment.
To edit configuration files, use the wsadmin client.
You can control the size of microblog data in your deployment by specifying the maximum number of characters allowed for entries and comments in news-config.xml.
To specify microblog settings....
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- Start the News Jython script interpreter.
- Access the News configuration file:
execfile("newsAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Check out the News cell-level configuration file
NewsCellConfig.checkOutConfig("working_dir", "cellName")
...where...
- working_dir is the temporary directory to which you want to check out the cell-level configuration file. This directory must exist on the server where you are running wsadmin.
AIX, Linux, and IBM i only: The directory must grant write permissions or the command will not run successfully.
- cellName is the name of the cell that the Home page node belongs to. This argument is required. It is also case-sensitive, so type it with care. To get the cell name, run the following command from wsadmin:
print AdminControl.getCell()
For example:
NewsCellConfig.checkOutConfig("d:/temp", "NewsCell01")
The command displays this message:
News Cell Level configuration file successfully checked out.
- Open news-config.xml in a text editor.
- Locate the microblogging settings section of the file and update the following lines as needed:
<microblogEntryMaxChars>1000</microblogEntryMaxChars>
Specifies the maximum number of characters allowed for microblog entries. The default value is 1000.
<microblogCommentMaxChars>1000</microblogCommentMaxChars>
Specifies the maximum number of characters allowed for microblog comments. The default value is 1000.
- Save your changes to news-config.xml.
- After making changes, check the configuration files back in. You must check them in during the same wsadmin session in which you checked them out for the changes to take effect.
Synchronize microblog data with Communities
Use the NewsMicrobloggingService.exportSyncedResourceInfo command to return an XML synchronization report of the community resources held in the News repository. The report contains information about the current state of microblog data in the community activity stream.
This information can help you to synchronize the microblog data with the Communities database after a system crash that involves data loss.
For more information, see Recovering from a database failure.
Delete community microblogs from the News repository
You can use an administrative command to remove orphaned community microblog data as part of the community widget life-cycle disaster recovery scenario.
To run administrative commands, use the wsadmin client.
See Start wsadmin for details.
A microblog is a status update message that is posted to a community activity stream. Microblog updates are displayed in the aggregated list of events in the Recent Updates widget in Communities.
If a community owner has added the Status Updates widget to a community, microblog messages can also be seen in that widget. In addition, microblog messages are displayed when users filter the Home page activity stream to show status updates for a community.
The microblogs that display in the Recent Updates and Status Updates widgets in Communities are stored in the News repository. In the event of a database failure or some other disaster, if the associated community data has been deleted, you might decide that the orphaned microblog data in the News repository should be removed. The NewsMicrobloggingService.deleteMicroblogs command allows you to remove all microblog and associated data for a community from the News repository. Note that there is no support for deleting other types of events that display in the Recent Updates widget.
For more information about removing orphaned data, see Deleting orphaned data.
To delete community microblogs from the News repository....
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- Start the Jython script interpreter for the News repository.
- Access the News configuration file:
execfile("newsAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Use the following command:
- NewsMicrobloggingService.deleteMicroblogs("communityId")
Removes all microblog and associated data for a community from the News repository.
Takes a single parameter, a string, that specifies the ID of the community whose microblog data you want to delete.
For example:
NewsMicrobloggingService.deleteMicroblogs("e952cf0c-a86c-4e26-b1e0-f8bf40a75804")
Activity stream search
The activity stream search service provides an indexing and search infrastructure that is bundled with the News application.
This service provides search capabilities over the activity stream.
The activity stream search service is automatically configured to crawl the activity stream seedlist at regular intervals. By default, the interval is set to 30 seconds. After an initial crawl of the activity stream, subsequent crawls are incremental, and only new events that were generated since the previous crawl are collected. When you install Connections, the crawler is disabled by default.
Crawling and indexing is carried out on one of the servers in the cluster where the News application is deployed. This server is chosen automatically by the WebSphere High Availability (HA) Manager. If News becomes unavailable on this server, a different server that is running News is chosen by WebSphere HA to replace it. For each crawling session, the indexing server creates a delta index in a shared file system and sends a notification to other nodes in the cluster. This delta index is read from shared file system by the other nodes and merged into the main index on the local disk. All the cluster nodes serve search requests by reading from the local index. Configuration and status information for the crawlers is stored in database tables that are available to all the nodes. Delta indexes are stored for 24 hours. If a node is down for more than 24 hours, you need to copy the index manually to that node from another node. In the event that a node is unavailable, the other nodes can still perform search requests with no interruption.
Administrative users can manage the activity stream search service from a user interface that is accessed using a URL. From the Activity Stream Search Administration page, you can enable or disable the crawler, and edit the crawl schedule. You can also clear the current indexed content and perform a full crawl if required. To access the page, you must be assigned the search-admin role. For more information about this role, see the Roles topic.
Administer activity stream search
Update information related to the activity stream search service and manage the collection of activity stream data.
To access the activity stream service administrative user interface, you must be assigned the IBM WebSphere Application Server search-admin role. For more information about this role, see the Roles topic.
You can access options for managing the activity stream search service from the Activity Stream Search Administration page. From this page, you can update the settings for the activity stream source, check the status of the activity stream search service, see when it was last updated, and when the next update is due.
The activity stream source publishes metadata about activity stream entries and collects that metadata in an index. The metadata is collected automatically on a schedule, but you can also collect data manually, delete data from the index, and disable the schedule using the options available from the administration page.
Administer activity stream search by performing the following steps.
- To access the Activity Stream Search Administration page, enter the following URL in your browser and log in using your admin user credentials:
http://server_name/news/web/activityStreamSearchAdmin/activityStreamSearchAdmin.action
There is no link to this page from the Connections user interface; you must access the page using the direct URL.
- To manage the activity stream search service, perform the following tasks:
- To view the number of documents in the index, see the Number of items column.
- To check whether the scheduler is enabled, view the Status column. When the scheduler is enabled, you can see the result of the last crawl; otherwise, the status displays as Disabled. When the scheduler is disabled, periodic crawling does not take place, but the search operation still works on existing indexed content.
- To edit the activity stream source, click Edit Details. Update the following fields as needed, and then click OK:
- Name. The name of the source. The source is the service that you are crawling.
- Server URL. The web address of the local Connections server. The source and its server URL are automatically created when the News application starts up for the first time after the product is installed.
- Seedlist URL. The web address of the seedlist that will be crawled. By default, the URL points to localhost, which means that crawling is done programmatically instead of using HTTP.
- Collect every. The interval at which new data is collected from the activity stream. The default setting is 30 seconds.
- To manage source metadata collection, click More actions and select one of the following options:
- Collect Data. Crawls the activity stream content and collects new data. Select this option when you want to crawl for new data immediately, without waiting for the next scheduled crawl. When the scheduler is disabled, you can still use this option to collect data manually.
- Clear Data. Delete activity stream metadata from the index. Select this option when you want to delete the indexed content and perform a full crawl. This option is useful when you want to investigate unexpected issues but should not be used frequently as it is resource intensive.
- Disable Schedule. Disable the crawler. Selecting this option disables the collection of metadata but it does not delete existing metadata from the index. When you install Connections, the schedule is disabled by default.
Copying the activity stream search index to new nodes
When you add a node to the News cluster to ensure high availability for activity stream search requests, you must copy the activity stream search index to the new node. Before copying the index, ensure that you disable scheduled metadata collections.
To copy the activity stream search index to a new node....
- Access the Activity Stream Search Administration page by entering the following URL in your browser and logging in using your admin user credentials:
http://server_name/news/web/activityStreamSearchAdmin/activityStreamSearchAdmin.action
- Disable source metadata collection by selecting More actions > Disable Schedule. This action stops future collections, but it does not delete existing metadata from the index.
- Copy the activity stream search index folder from an existing node to the new node by following these steps:
- Log in to the WAS admin console and click...
Servers | Clusters | WebSphere application server clusters
- Click cluster_name, where cluster_name is the name of the News cluster.
- In the Additional Properties area, expand Cluster members and then click Details.
- In the table of cluster members, make a note of the nodes that host the cluster members.
- Copy the activity stream search index folder from an existing node to the new node.
The ACTIVITY_STREAM_SEARCH_INDEX_DIR WebSphere Application Server variable defines the path to the activity stream search index. The default value of the variable is /opt/IBM/Connections/DataLocal/news/search/index, and the index itself is located in the ActivityStream subfolder created under that path.
- To reenable source metadata collection, return to the Activity Stream Search Administration page using the URL in step 1, and select More actions > Enable Schedule.
Configure activity stream search index settings
You can update the default settings for the index folder and the shared replication folder for the activity stream search service.
The IBM WebSphere Application Server variables ACTIVITY_STREAM_SEARCH_INDEX_DIR and ACTIVITY_STREAM_SEARCH_REPLICATION_DIR define the location of the activity stream search index folder and the activity stream search replication folder respectively. The search index folder stores the actual index. You need one of these for each server. The shared replication folder stores the changes that have been recently made to the index. You need one shared replication folder for each server cluster.
The default directory path for the ACTIVITY_STREAM_SEARCH_INDEX_DIR and ACTIVITY_STREAM_SEARCH_REPLICATION_DIR variables on the cell scope is set when you install Connections, and this definition is automatically used for every additional node. However, you can update the paths on the cell scope if you want to customize the default settings.
- From the WAS admin console, expand Environment and select WebSphere Variables.
- Select ACTIVITY_STREAM_SEARCH_INDEX_DIR, enter the location of the local activity stream search index folder in the Value field, click Apply, and then click OK.
- Select ACTIVITY_STREAM_SEARCH_REPLICATION_DIR, enter the location of the shared replication folder in the Value field, click Apply, and then click OK. All nodes need access to this shared folder.
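If you prefer to verify these settings from the command line rather than the console, a wsadmin (Jython) sketch such as the following can print the current values. AdminConfig and the VariableSubstitutionEntry type are standard WebSphere Application Server administration objects, but treat this as a convenience check under those assumptions, not a replacement for the console steps above.
# Print the current values of the two activity stream search directory variables,
# along with the configuration ID that shows the scope of each entry.
for entry in AdminConfig.list("VariableSubstitutionEntry").splitlines():
    name = AdminConfig.showAttribute(entry, "symbolicName")
    if name in ("ACTIVITY_STREAM_SEARCH_INDEX_DIR", "ACTIVITY_STREAM_SEARCH_REPLICATION_DIR"):
        print name + " = " + AdminConfig.showAttribute(entry, "value") + " (" + entry + ")"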
Determine which News node is currently carrying out crawling and indexing
You can determine which News node is currently carrying out crawling and indexing in a cluster from the WAS admin console.
Crawling and indexing is carried out on one of the servers in the cluster where the News application is deployed. This server is chosen automatically by the IBM WebSphere Application Server High Availability (HA) Manager. If News becomes unavailable on this server, a different server that is running News is chosen by the HA Manager to replace it.
The News node that carries out crawling and indexing is the News node that is the active member of the WAS core group.
To determine the active News node in a cluster....
- From the Integrated Solutions Console, select Servers > Core Groups > Core group settings > DefaultCoreGroup.
- Click the Runtime tab.
- Click Show servers.
- Click the server that the News application is deployed on. The active member is shown.
- Click the single HA group shown.
- Check the Status column to verify which server is the HA group active member.
Restore the activity stream search index
If the activity stream search index becomes corrupt or is not being refreshed properly, you can delete the existing index data to rebuild the index.
The IBM WebSphere Application Server variables ACTIVITY_STREAM_SEARCH_INDEX_DIR and ACTIVITY_STREAM_SEARCH_REPLICATION_DIR define the location of the activity stream search index directory and replication directory respectively. If there are issues with the existing activity stream index, you can restore it by deleting the contents of these directories. The index is rebuilt when the next scheduled crawl takes place.
In a clustered environment, ACTIVITY_STREAM_SEARCH_REPLICATION_DIR is a shared folder whereas ACTIVITY_STREAM_SEARCH_INDEX_DIR exists on each one of the servers.
- To access the Activity Stream Search Administration page, enter the following URL in your browser and log in using your admin user credentials:
http://server_name/news/web/activityStreamSearchAdmin/activityStreamSearchAdmin.action
There is no link provided from the Connections user interface, you must access the page using the direct URL.
- Disable all the crawlers listed on the page by selecting More actions > Disable Schedule for each crawler.
- Find the value of the ACTIVITY_STREAM_SEARCH_REPLICATION_DIR and ACTIVITY_STREAM_SEARCH_INDEX_DIR WebSphere Application Server variables.
- Log in to the WebSphere Integrated Solutions Console.
- Expand Environment and select WebSphere Variables.
- Look for the ACTIVITY_STREAM_SEARCH_REPLICATION_DIR and ACTIVITY_STREAM_SEARCH_INDEX_DIR variables and make a note of their respective locations.
- Navigate to the location specified by the ACTIVITY_STREAM_SEARCH_REPLICATION_DIR variable and delete the contents of the directory.
- Navigate to the location specified by the ACTIVITY_STREAM_SEARCH_INDEX_DIR variable and delete the contents of the directory.
- Return to the Activity Stream Search Administration page and enable all the crawlers.
Administer Profiles
Profiles provides two types of administrative capabilities: configuration settings and administrative commands. You change configuration settings and execute administrative commands by running scripts from the wsadmin command line.
Jython scripts run from the wsadmin command line are used to configure and administer Profiles. These scripts use the AdminConfig object available in the IBM WebSphere Application Server Admin (wsadmin) client to interact with the configuration repository.
You can update the Profiles environment in two ways:
- Configuration settings
- Modify these settings to control various configurable applications within Profiles. When you make configuration changes, you use scripts to check out the Profiles configuration file, profiles-config.xml, make changes, and then check the file back in. A server restart is required for your changes to take effect.
- Administrative commands
- Use these commands to control various aspects of the Profiles environment. Administrative commands do not require a server restart to take effect.
Run Profiles administrative commands
Scripts are used to administer the Profiles application.
These scripts use the AdminConfig object available in IBM WebSphere Application Server wsadmin client to interact with the Profiles server.
To run administrative commands, use the wsadmin client.
See Start wsadmin for details.
You cannot use administrative tools to add or remove a profile from your LDAP directory system. You must use that directory's native tools to create and delete profiles.
The scripts used to administer Profiles give an administrator (who otherwise would not have access to edit a user's profile data) the ability to edit various fields in a user's profile.
For example, one use of the capability is to remove unwanted or inappropriate content.
Unlike with configuration properties, when you use administrative commands to change server administration properties, you do not have to check out any files nor restart the server. Your changes take effect immediately.
To run administrative commands....
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- Start the Profiles Jython script interpreter.
- Access the Profiles configuration files:
execfile("profilesAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
Profiles administrative commands
Use the commands listed to perform administrative tasks for Profiles. No file checkout or server restart is needed when using these commands. The following sections define the commands used when working with Profiles.
ProfilesConfigService commands
- ProfilesConfigService.checkOutConfig("working_directory", "cell_name")
Checks all Profiles configuration files out to a temporary directory.
Parameters:
- working_directory. Temporary working directory to which the configuration files are copied. The files are kept in this working directory while you make changes to them.
- cell_name. Name of the IBM WAS cell hosting the Connections application. To get the cell name, run the following command from wsadmin:
print AdminControl.getCell()
For example:
- AIX or Linux:
ProfilesConfigService.checkOutConfig("/opt/my_temp_dir", "Cell01")
Windows:
ProfilesConfigService.checkOutConfig("c:/temp","Cell01")
IBM i :ProfilesConfigService.checkOutConfig("/prof/temp","Cell01")
- ProfilesConfigService.showConfig()
Display the current configuration settings. Check out the configuration files with ProfilesConfigService.checkOutConfig before running ProfilesConfigService.showConfig.
- ProfilesConfigService.updateConfig("property", "value")
Updates configuration properties.
Parameters:
- property. One of the configuration properties that can be edited for Profiles.
- value. The new value with which you want to set the specified property. Acceptable values for properties can be restricted, for example, to either true or false.
- ProfilesConfigService.checkInConfig()
Checks in Profiles configuration files. Run from the wsadmin command processor.
- ProfilesConfigService.checkOutPolicyConfig("working_directory", "cell_name")
Checks the profiles-policy.xml and profiles-policy.xsd files out to a temporary directory.
Parameters:
- working_directory. Temporary working directory to which the configuration files are copied. The files are kept in this working directory while you make changes to them.
- cell_name. Name of the WAS cell hosting the Connections application.
To get the cell name, run the following command from wsadmin:
print AdminControl.getCell()
For example:
- AIX or Linux:
ProfilesConfigService.checkOutPolicyConfig("/opt/my_temp_dir", "Cell01")
Windows:
ProfilesConfigService.checkOutPolicyConfig("c:/temp","Cell01")
IBM i:
ProfilesConfigService.checkOutPolicyConfig("/prof/temp","Cell01")
- ProfilesConfigService.checkInPolicyConfig()
- Checks in the Profiles policy configuration files. Run from the wsadmin command processor.
- ProfilesConfigService.findDistinctProfileTypeReferences()
- List profile types present in the Profiles database.
- ProfilesConfigService.findUndefinedProfileTypeReferences()
- List profile types present in the Profiles database that do not appear in profiles-types.xml.
ProfilesService commands
- ProfilesService.deletePhoto(String user_email_addr)
Delete image files associated with a user's email address.
This command can be used only if the user has uploaded a photo to their profile. This command removes the photo.
For example:
ProfilesService.deletePhoto("john_doe@company.com")
- ProfilesService.disableFullReportsToCache()
Disable the full report-to chain cache capability. This command does not take any arguments.
- ProfilesService.enableFullReportsToCache(startDelay, interval, schedTime)
Enable the full report-to chain cache with the specified start delay in minutes, refresh interval in minutes, and scheduled refresh time in HH:MM format.
This cache is used to populate the full report-to chain view available in a user's profile. The cache contains the specified number of top employees in the organizational pyramid; it is not intended to store an entry for each profile. It stores the profiles of those people at the top of the chain who are included in many full report-to chain views.
For example:
ProfilesService.enableFullReportsToCache(5, 15, "23:00")
- ProfilesService.purgeEventLogsByDates(string startDate, string endDate)
Delete event log entries created between the specified start date and end date.
Parameters:
- startDate
- A string that specifies the start date for the period in MM/DD/YYYY format.
- endDate
- A string that specifies the end date for the period in MM/DD/YYYY format.
For example:
ProfilesService.purgeEventLogsByDates("06/21/2009", "06/26/2009")
This command deletes all the event log entries created on or after June 21st, 2009 and before June 26th, 2009 from the EVENTLOG table.
- ProfilesService.purgeEventLogsByEventNameAndDates(eventName, string startDate, string endDate)
Delete event log entries with the specified event name created between given start date and end date.
Parameters:
- eventName
- The type of event to remove from the EVENTLOG table.
The following names are some examples of valid event names:
- profiles.created
- profiles.removed
- profiles.updated
- profiles.person.photo.updated
- profiles.person.audio.updated
- profiles.colleague.created
- profiles.colleague.added
- profiles.connection.rejected
- profiles.person.tagged
- profiles.person.selftagged
- profiles.tag.removed
- profiles.link.added
- profiles.link.removed
- profiles.status.updated
- profiles.wallpost.created
- profiles.wallpost.removed
- profiles.wall.comment.added
For a complete list of valid event names for Profiles, refer to the Events Reference article in the API Documentation wiki.
- startDate
- A string that specifies the start date for the period in MM/DD/YYYY format.
- endDate
- A string that specifies the end date for the period in MM/DD/YYYY format.
For example:
ProfilesService.purgeEventLogsByEventNameAndDates("profiles.colleague.created", "06/21/2009", "06/26/2009")
This command deletes all the profiles.colleague.created event log entries created on or after June 21st, 2009 and before June 26th, 2009 from the EVENTLOG table.
- ProfilesService.reloadFullReportsToCache()
Force a reload of the full report-to chain cache from the Profiles database. This command does not take any arguments. If the full report-to cache is disabled, it cannot be reloaded. This command fails when the cache is disabled.
- ProfilesService.updateDescription(String user_email_addr, String new_content_for_description_field)
Replace the existing description text associated with a user's email address with alternate description text enclosed by double quotes. Description text is information contained on the About Me tab of a user's profile. For example:
ProfilesService.updateDescription("ann_jones@company.com","This is new text that will be entered into the About Me tab for Ann.")
Rich text cannot be entered with this command.
- ProfilesService.updateExperience(String user_email_addr, String new_content_for_experience_field)
Replace the existing experience text associated with a user's email address with alternate text enclosed by double quotes.
Experience is the information contained in the Background area of a user's profile.
For example:
ProfilesService.updateExperience("ann_jones@company.com","This is new text that will be entered into the Background field for Ann.")
Rich text cannot be entered with this command.
Commands for managing user data
- ProfilesService.activateUserByUserId(String user_external_id, updated_properties_list)
- ProfilesService.inactivateUser(String user_email_addr)
- ProfilesService.inactivateUserByUserId(String userID)
- ProfilesService.publishUserData(String user_email_addr)
- ProfilesService.publishUserDataByUserId(String userID)
- ProfilesService.swapUserAccessByUserId("user_to_activate","user_to_inactivate")
- ProfilesService.updateUser(String user_email_addr, updated_properties_list)
- ProfilesService.updateUserByUserId(String userID, updated_properties_list)
See: Managing user data using Profiles administrative commands
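As a simple illustration of the commands in this list, the following Jython sketch inactivates one user and republishes another user's record. The email addresses are hypothetical; see Managing user data using Profiles administrative commands for the exact behavior and parameter details of each command.
# Mark a departed user's profile as inactive.
ProfilesService.inactivateUser("departed_user@example.com")
# Republish another user's record so that other applications pick up the latest Profiles data.
ProfilesService.publishUserData("new_hire@example.com")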
Change Profiles configuration property values
Configuration settings control how and when various Profiles operations take place. You can edit the settings to change the ways that profiles behave.
To edit configuration files, use the wsadmin client.
See Start wsadmin for details.
Configure Profiles using scripts accessed with the wsadmin client. These scripts use the AdminConfig object available in the IBM WebSphere Application Server wsadmin client to interact with the Profiles configuration file. Changes to configuration settings require node synchronization and a restart of the Profiles server before they take effect.
There are no Profiles application administrative tools for adding or removing a user's profile. To add or remove a profile for a person, you must add or remove that person's entry from the corporate LDAP directory system. Use that directory's native tools to create and delete user entries. When you perform standard synchronization tasks on the Profiles database, the profiles are updated. If you add a new user to the LDAP directory, a profile is created for that user. If you remove a user entry from the LDAP directory, that user's profile is removed.
To change Profiles configuration settings...
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- Start the Profiles Jython script interpreter.
- Access the Profiles configuration files:
execfile("profilesAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Check out the Profiles configuration files:
ProfilesConfigService.checkOutConfig("working_directory", "cell_name" where:
- working_directory is the temporary working directory to which the configuration XML and XSD files are copied and are stored while you make changes to them. Use forward slashes (/) to separate directories in the file path, even if you are using the Microsoft Windows operating system.
AIX, Linux, and IBM i only: The directory must grant write permissions or the command does not complete successfully.
- cell_name is the name of the WAS cell hosting the Profiles application. This argument is required. It is also case-sensitive. To determine the cell name, run the following wsadmin command:
print AdminControl.getCell()
For example:
- AIX or Linux:
ProfilesConfigService.checkOutConfig("/opt/prof/temp","Cell01")
Windows:
ProfilesConfigService.checkOutConfig("c:/prof/temp","Cell01")
IBM i:
ProfilesConfigService.checkOutConfig("/temp","Cell01")
- To change a Profiles configuration setting:
ProfilesConfigService.updateConfig("property", "value")
...where...
- property is one of the editable Profiles configuration properties.
- value is the new value with which you want to set that property.
For example, the following code disables the display of organizational information.
ProfilesConfigService.updateConfig("organizationalStructure.enabled","false")
- Repeat the previous step once for each property setting you want to change.
You must check the configuration files back in after making changes, and you must check them in during the same wsadmin session in which they were checked out for the changes to take effect.
See Applying property changes for details.
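Putting the procedure together, a complete session might look like the following Jython sketch. The working directory is a placeholder, and the commands are the ProfilesConfigService commands documented in this section.
execfile("profilesAdmin.py")
# Check out the configuration files to a placeholder working directory.
ProfilesConfigService.checkOutConfig("/opt/prof/temp", AdminControl.getCell())
# Change one or more editable properties.
ProfilesConfigService.updateConfig("organizationalStructure.enabled", "false")
# Check the files back in during the same wsadmin session.
ProfilesConfigService.checkInConfig()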
Profiles configuration properties
Configuration settings control various configurable applications within Profiles. They require a Profiles application or server restart to take effect. You can check out and modify the following configuration settings in the profiles-config.xml file.
Some configuration settings, such as the settings for the board and tagging features, were moved from the profiles-config.xml file to the profiles-policy.xml file.
For more information about the settings that you can configure in the profiles-policy.xml file, see Profile-types.
- activeContentFilter.enabled
Enables or disables active content filtering for text entered into the About me and Background text input fields.
This property takes a Boolean value: true or false.
The value must be formatted in lowercase.
- fullReportsToChainCache.ceouid
The corporate directory user ID of the person who displays at the top of the organizational structure.
- fullReportsToChainCache.enabled
Enables or disables the full reports-to chain cache.
This property takes a Boolean value: true or false. The value must be formatted in lowercase.
- fullReportsToChainCache.refreshInterval
Time in minutes between cache reload operations.
This property takes an integer value.
- fullReportsToChainCache.refreshTime
HH:MM. Determines the time of day, in 24-hour format, at which Profiles performs the first scheduled reloading of the cache.
- fullReportsToChainCache.size
The number of employee entries that should be loaded into the cache.
This property takes an integer value.
- fullReportsToChainCache.startDelay
Time in minutes that Profiles waits after starting before loading the cache for the first time.
This property takes an integer value.
- organizationalStructure.enabled
Indicates if the organizational structure information (report-to chain, people managed, same manager) should display.
This property takes a Boolean value: true or false. The value must be formatted in lowercase.
- nameOrdering.enabled
When this property is set to true, names must be entered as (FirstName LastName) or (LastName, FirstName). By default, it is set to false.
When only a single word is entered, that word is treated as the LastName value during search.
This property takes a Boolean value.
- scheduledTasks.DbCleanupTasks
- Frequency at which the database cleanup task runs.
This task removes event log entries or draft profile updates that are older than the specified number of days.
eventLogTrashRetentionInDays: Specifies the number of days to keep system events in the EMPINST.EVENTLOG table.
draftTrashRetentionInDays: Specifies the number of days to keep draft profile updates.
eventLogMaxBulkPurge: Specifies the maximum number of events to purge in a query.
- scheduledTasks.ProcessLifeCycleEventsTasks
- Frequency at which lifecycle events are published.
This task ensures that lifecycle events are propagated.
platformCommandBatchSize: Specifies the maximum number of events to process in each run.
- scheduledTasks.ProcessTDIEventsTasks
- Frequency at which audit events triggered by a TDI synch are processed.
platformCommandBatchSize: Specifies the maximum number of events to process in each run.
- scheduledTasks.StatsCollectorTask
- Frequency at which Profiles statistics are calculated and written to disk.
filePath: Specifies the directory in which to place the file.
fileName: Specifies the file name.
- scheduledTasks.RefreshSystemObjectsTask
- This task is obsolete.
- search.maxRowsToReturn
Determines the maximum number of rows returned by a search operation.
This property takes an integer value.
- search.pageSize
Determines the number of returned rows to place on a results page.
This property takes an integer value.
Apply property changes in Profiles
After you have edited Profiles configuration properties, check the changed configuration file in, and restart the server to apply your changes.
- Check in the changed configuration property keys using the following wsadmin client command:
ProfilesConfigService.checkInConfig()
- Update the value of the version stamp configuration property in LotusConnections-config.xml to force users' browsers to pick up this change (see the sketch after these steps).
- To exit the wsadmin client, type exit at the prompt.
- Use the IBM WebSphere Application Server Integrated Solutions Console to stop and restart the server hosting the Profiles application.
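The version stamp update in step 2 can also be scripted. The following Jython sketch assumes that the LCConfigService commands provided by connectionsConfig.py are available in your deployment and that passing an empty value regenerates the stamp; confirm both points against your release documentation before using it. The working directory is a placeholder.
execfile("connectionsConfig.py")
# Check out LotusConnections-config.xml to a placeholder working directory.
LCConfigService.checkOutConfig("/opt/lc/temp", AdminControl.getCell())
# An empty value is assumed to regenerate the version stamp; verify for your release.
LCConfigService.updateConfig("versionStamp", "")
LCConfigService.checkInConfig()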
Tivoli Directory Integrator commands
The following IBM Tivoli Directory Integrator commands are available for managing profile data and performing user data synchronization tasks. For related information, see Developing custom Tivoli Directory Integrator assembly lines for Profiles.
Table 20. Tivoli Directory Integrator commands
- clearLock.sh. Deletes the lock file that is generated by the sync_all_dns command.
- delete_or_inactivate_employees.sh. Deletes or inactivates users in the Profiles database.
- dump_photos_to_files.sh. Reads existing photo files from the Profiles database and stores them on disk. Used in conjunction with the load_photos_from_files command.
- dump_pronounce_to_files.sh. Reads existing pronunciation files from the Profiles database and stores them on disk. Used in conjunction with the load_pronounce_from_files command.
- fixup_tdi_adapters.sh. Adds a reference to the profiles property store to your adapter files when you are defining a custom assembly line.
- load_photos_from_files.sh. Reads stored photo files from disk and populates the Profiles database with them. Used in conjunction with the dump_photos_to_files command.
- load_pronounce_from_files.sh. Reads stored pronunciation files from disk and populates the Profiles database with them. Used in conjunction with the dump_pronounce_to_files command.
- process_draft_updates.sh. Synchronizes changes from the Profiles database back to the LDAP directory. The script initializes a daemon process that monitors the Profiles database for updates, formats each update as a DSML request, and then transmits it to a configured DSML server.
- process_ad_changes.sh. Processes changes made to a Microsoft Active Directory LDAP directory and propagates the changes to the corresponding records in the Profiles database repository. For more information, see Synchronize IBM Tivoli Directory Server and Microsoft Active Directory LDAP changes.
- process_tds_changes.sh. Processes changes made to an IBM Tivoli Directory Server LDAP directory and propagates the changes to the corresponding records in the Profiles database repository. For more information, see Synchronize IBM Tivoli Directory Server and Microsoft Active Directory LDAP changes.
- reset_draft_iterator_state.sh. Deletes the value of a database change record number tracked in a persistent field by the process_draft_update command.
- set_draft_iterator_count.sh. Resets the value of a database change record number tracked in a persistent field by the process_draft_update command.
- sync_all_dns.sh. Synchronizes LDAP directory changes with the Profiles database.
- update_employees_from_file.sh. Replaces the globally unique identifiers in the Profiles database with the correct values from the LDAP directory. For more information, see Updating Profiles when changing LDAP directory.
Add supplemental content to Profiles
You can use IBM Tivoli Directory Integrator assembly-line commands to add photo files and pronunciation files for users to the Profiles database.
When xWindows is not installed on your system, the load_pronounce_from_files and load_photos_from_files commands might not work. In this scenario, you must change the default value of the headless_tdi_scripts setting in the profiles_tdi.properties file from false to true...
headless_tdi_scripts=true
Related information is available in the IBM Tivoli Directory Integrator solutions for Connections real-world scenarios wiki article.
Uploading pronunciation files
Profiles users can add a recording of how their name is pronounced to enhance their profile. As administrator, you can use IBM Tivoli Directory Integrator assembly-line commands to populate the profiles database repository with pronunciation files for users.
You can use the dump_pronounce_to_files and load_pronounce_from_files assembly-line commands to populate the profiles database with pronunciation files.
These commands are useful when you are moving the profiles database, allowing you to save the pronunciation information from the existing database on disk, repopulate the new database from the LDAP, and then load the pronunciation files back into the new database.
To populate a new profiles database with pronunciation files....
- Use the dump_pronounce_to_files.bat or dump_pronounce_to_files.sh command to read the existing pronunciation files from the Profiles database and store them on disk:
The following table shows the properties that are used by this command and their default values.
These properties can be found in profiles_tdi.properties.
- dump_pronounce_directory. The directory where the extracted files are stored. The default value is ./dump_pronounce.
- dump_pronounce_file. The list of people whose pronunciation files were collected. The default value is collect_pronounce.in.
- load_pronounce_simple_file. The list of people whose pronunciation files were collected. The default value is collect_pronounce.in. To load only a subset of files from a location, you edit this file.
When dumping multiple pronunciation files, there must be a period separator between each entry. If the separator is omitted, an error is generated when you use the load command to import the files into the profiles database.
- To populate the new database with the pronunciation files that you saved in the previous step, use the load_pronounce_from_files.bat or load_pronounce_from_files.sh command to read the files from disk and populate the profiles database with them.
The table in step 1 shows the properties that relate to this command.
Example
Here is an example of an entry from the collect_pronounce.in file:
file:/C:/install_directory/TDISOL/TDI/./dump_pronounce/pron1197046202619_9.dat uid:FAdams .
The characters following uid: correspond to the PROF_UID in the profiles database.
Note the required period separator between each entry.
For example:
file:/C:/install_directory/TDISOL/TDI/./dump_pronounce/pron1197046202619_9.dat uid:FAdams . file:/C:/install_directory/TDISOL/TDI/./dump_pronounce/pron6198046102314_6.dat uid:TAmado .
Populate Profiles with photos
You can use IBM Tivoli Directory Integrator assembly-line commands to populate the profiles database repository with photo files for users.
You can use the dump_photos_to_files and load_photos_from_files assembly-line commands to populate the profiles database with photo files. These commands are useful when you are moving the profiles database, allowing you to save the photos from the existing database on disk, repopulate the new database from the LDAP, and then load the photo files back into the new database.
To populate a new profiles database with photos....
- Use the dump_photos_to_files.bat or dump_photos_to_files.sh command to read the existing photos from the profiles database and store them on disk:
The following table shows the properties that are used by this command, and their default values. These properties can be found in profiles_tdi.properties.
- dump_photos_directory. The directory where the extracted files are stored. The default value is /dump_photos.
- dump_photos_file. The list of people whose photos were collected. The default value is collect_photos.in.
- load_photos_simple_file. The list of people whose photos were collected. The default value is collect_photos.in. To load only a subset of files from a location, edit this file.
When dumping multiple photo files, there must be a period separator between each entry. If the separator is omitted, an error is generated when you use the load command to import the files into the profiles database.
- To populate the new database with the photo files that you saved in the previous step, use the load_photos_from_files.bat or load_photos_from_files.sh command to read the files from disk and populate the Profiles database with them:
- The table in step 1 shows the properties that relate to this command.
- Although in Connections 2.0 the Profiles application can crop the photo uploaded by a user, the photo size limit in the underlying database is 15 KB. When Profiles is used with IBM Tivoli Access Manager enabled, Tivoli Access Manager can only load files that conform to this size limit.
Example
Here is an example of an entry from the collect_photos.in file:
photo:file:/C:/install_directory/TDISOL/TDI/./dump_photos/img1197046202619_9.dat uid:FAdams .
The characters following uid: correspond to the PROF_UID in the profiles database.
Note the required period separator between each entry.
For example:
photo:file:/C:/install_directory/TDISOL/TDI/./dump_photos/img1197046202619_9.dat uid:FAdams . photo:file:/C:/install_directory/TDISOL/TDI/./dump_photos/img1197146402316_7.dat uid:TAmado .
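Because a missing period separator only surfaces as an error at load time, it can be worth checking the collect file first. The following Jython sketch is a hypothetical helper, not part of the product tooling; it assumes the entry format shown above, where each entry contains one uid: token and is terminated by a standalone period.
# Hypothetical pre-check for collect_photos.in: flag chunks that do not contain exactly one uid: token,
# which usually indicates a missing " ." separator between entries.
f = open("collect_photos.in", "r")
content = f.read()
f.close()
for chunk in content.split(" ."):
    chunk = chunk.strip()
    if chunk and chunk.count("uid:") != 1:
        print "Check the period separators near: " + chunk[:80]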
Developing custom Tivoli Directory Integrator assembly lines for Profiles
You can use IBM Tivoli Directory Integrator (TDI) connectors to develop custom assembly-line scripts when you need to provide a specific type of function.
For example, if you do not want to populate Profiles with photos or pronunciation files using the standard method covered in Adding supplemental content to Profiles, you might want to use the Photo connector or Pronunciation connector as an alternative.
Tivoli Directory Integrator connectors are components that are used to access and update information sources. Connectors allow you to build your assembly lines without having to handle the technical details of working with different data stores, systems, services, or transports. Each type of connector suppresses implementation details and specifically handles the details of data source access.
For more information about programming TDI connectors, see the TDI Reference Guide.
Connections provides the following connectors:
- Profile connector
- Photo connector
- Pronunciation connector
- Codes connector
Related information is available in the IBM Tivoli Directory Integrator solutions for Connections real-world scenarios wiki article.
The only supported methods for writing data or modifying data in the Profiles database are the following:
- Use the supplied Profiles Tivoli Directory Integrator assembly lines
- Use the Profiles ATOM administrative API
- Develop custom assembly lines using the Profiles TDI connectors
Writing directly to the Profiles database, including using Tivoli Directory Integrator database connectors to do so, is not supported and can lead to data loss and application malfunction.
Set up your development environment
Use the Tivoli Directory Integrator Configuration Editor to create custom IBM Tivoli Directory Integrator scripts. Set up the development environment so that the editor accesses a separate set of the Connections Profiles Tivoli Directory Integrator connector source files from the set used by the installed product.
When you install Connections, a set of Tivoli Directory Integrator (TDI) components are installed on your system. These components are used by the population wizard and other TDI tasks, such as the synchronization tasks, to populate and update the Connections user directory. They are stored in a compressed file referred to as the TDI solution directory (tdisol.zip or tdisol.tar).
The solution directory includes a set of connectors, which are standard TDI components used to build your own TDI assembly lines when the assembly lines in the solution directory do not suit your needs. Your custom assembly lines can:
- Use an available plug point, such as an alternate source for profiles data or custom delete processing
- Create a stand-alone program to interact with the profiles, photos, pronunciation, or code sections of profiles through the available connectors.
This section describes how to set up a development environment in which you can write your own assembly lines using the Profiles TDI connectors provided in the Connections installation package.
To set up your development environment....
- Create a directory in which to store the TDI connector source files.
Since you might create multiple iterations of the code you are developing, use a directory naming system that will help you keep track of each iteration.
For example, you could add a subdirectory named version, where version is the version number and date of the copy of the tdisol.zip file that you will extract into the directory. Alternatively, you could name the directory after the assembly line you will be creating, such as custdel if you are working on custom delete processing logic.
For example: C:\TDIProject\20120530 or c:\tdiprojects\40\custdel.
- Extract the files from the TDI solution directory (tdisol.zip or tdisol.tar) into the directory you created in the previous step. You can find the solution directory in the following location:
C:\IBM\IBMConnections\TDISOL
This action adds a tdi subdirectory to the directory path.
For example: C:\TDIProject\20120530\TDI or c:\tdiprojects\40\custdel\tdi.
- When you start the Configuration Editor, specify the location of the Profiles TDI solution directory using the -s command-line option...
tdi_install_dir/ibmditk -s your_TDI_directory
...where...
- tdi_install_dir is the name of the directory where you installed Tivoli Directory Integrator.
- your_TDI_directory is the subdirectory that you created in the previous step.
For example:
C:\IBM\TDI\V7.0\ibmditk -s C:\TDIProject\20120530\TDI
The workspace used for development must reference a TDI solution directory that contains all the Profiles TDI artifacts. It is not sufficient to create a new TDI solution directory or use one that does not contain these artifacts. If you attempt to use a Profiles TDI component, such as one of the connectors, and it does not appear in the connector list, then your workspace and solution are not configured correctly.
- When you start the Tivoli Directory Integrator Configuration Editor, you are asked to specify a workspace.
This is a working directory in which to store files related to your development project. When prompted, specify the same file path that you are using for the connector files, but replace TDI with workspace.
For example: C:\TDIProject\20120530\workspace or c:\tdiprojects\40\custdel\workspace
The editor creates the workspace subdirectory if it does not already exist.
Results
You now have a Tivoli Directory Integrator solution environment that you can use to edit the Profiles Tivoli Directory Integrator connectors. Refer to the connector-specific documentation for details about each connector. Scripts created in this environment can be run from the Configuration Editor in the same way that you run standard Tivoli Directory Integrator assembly lines.
Use a custom source repository connector
By creating custom source repository connectors, you can integrate data from non-LDAP sources when you are populating Profiles with user data.
To integrate a custom source repository, create custom versions of the following two assembly lines and package them as IBM Tivoli Directory Integrator adapters:
- Iterator connector
- This adapter iterates sequentially through all the users in your user directory. It is used by a number of iterative Tivoli Directory Integrator scripts.
For example, it is used to retrieve the full information for initial data population using the collect_dns script and during data synchronization using the sync_all_dns script.
- Lookup connector
- This adapter looks up individual users in your user directory. It is used by the populate_from_dns_file script.
For more information about using adapters, see the Tivoli Directory Integrator product documentation
http://www.ibm.com/developerworks/tivoli/library/t-tdiadapters/index.html.
After packaging your assembly lines, you can use them in your TDI solution by copying the file you published, such as the adapter.xml file, into the packages subdirectory of the Profiles TDI solution directory.
Add a reference to the profiles property store to your adapter files by running the fixup_tdi_adapters.sh or fixup_tdi_adapters.bat command.
This reference is required to use the Profiles Tivoli Directory Integrator adapter. Even if you do not believe that your adapter file requires access to the profiles property store, there is no penalty for adding the reference, so it is strongly advised that you run this command regardless.
Create an iterator connector
Create an iterator connector to perform a sequential read of your entire user source repository.
If you are using a source other than an LDAP, you must provide an iterator that will return each entry to process in sequence.
If you are combining data from multiple sources, you must join all the data that is relevant to the data population mapping for a particular user into the work entry of the iterator assembly line. The only output should be a work entry that contains all the attributes for that user. Joining all the data together in a single step allows you to provide just this hook component and rely on the remainder of the Profiles Tivoli Directory Integrator (TDI) assembly lines to perform the majority of the processing.
Create a source repository iterator connector by completing the following steps:
- To develop your iterator assembly line, connect to your source with an appropriate connector using its iterator mode. Retrieve the data to be used in the process into the work entry.
You must populate the $dn attribute in the work entry. You can populate all the data from the source by mapping all attributes. You can use the mapping functionality to map fields from their names in your source into the fields expected by Profiles; you do not have to perform that mapping in your connector. The $dn attribute is the only required attribute name you must provide at this point in the process.
To help get started with Tivoli Directory Integrator, go to the Learning TDI site. You can also refer to the Tivoli Directory Integrator product documentation for more information.
- Export your iterator solution by completing the following steps:
You can package the iterator connector together with the lookup assembly line, which is best practice although not a required step.
- Shift-click the assembly lines that comprise your iterator solution in the IBM Tivoli Directory Integrator Config Editor.
- Right-click a member of the selected assembly line group and select Publish.
- In the Publish window, enter a name for your solution in the Package ID field.
For example, myIterateAdapter.
- Enter additional information, such as a version number or a help URL for future administrators.
- Assuming you followed the development environment set-up guide outlined in the topic, Setting up your development environment, select the packages directory located in your Connections TDI solution from the File Path menu, and then click Finish.
- To make the adapter file visible to the Profiles TDI solution, restart the Tivoli Directory Integrator Config Editor. If the Tivoli Directory Integrator server is not recycled during testing, it might not detect the existence of the new adapter.xml file. Recycling the Config Editor stops and starts the embedded Tivoli Directory Integrator server.
- Configure the Profiles TDI solution to use your adapter for data iteration by completing the following steps:
- Open profiles_tdi.properties in a text editor.
- Add a property of the following format to the file:
source_repository_iterator_assemblyline={name-of-your-adapter.xml}:/AssemblyLines/{name-of-your-ITERATOR-al}
This property may already be present and commented out in the file. If so, remove the comment character (hash sign) and make the edits.
- Substitute {name-of-your-adapter.xml} with the package ID that you entered in step 2c.
- Substitute {name-of-your-ITERATOR-al} with the name of your iterator assembly line. The line should now look similar to the following:
source_repository_iterator_assemblyline=myIterateAdapter:/AssemblyLines/iterate_over_csv_file
- Save your changes and then close profiles_tdi.properties.
- To test your solution, verify that you can iterate, and ensure that you are selecting the users that you expect, run the ./collect_dns.sh or collect_dns.bat script located in the TDI solution directory. You can then review the resulting collect.dns file to confirm that it contains the results that you expect.
Create a lookup connector
Create a lookup connector to fetch data for a single user, including all the attributes necessary for mapping that user in your source repository.
The lookup assembly line is used in the populate_from_dns_file script to populate users. Using the default mapping for secretary ($secretary_uid) and manager ($manager_uid), the script uses the assembly line to look up the manager and secretary uid values. If you can extract the manager and secretary uid values from the work entry without an additional lookup, it is advisable to do so for performance reasons.
After developing your lookup connector, export it and then restart the Tivoli Directory Integrator configuration editor to make the adapter file visible to the Profiles TDI solution. To test the adapter, configure the Profiles TDI solution to use your adapter for data lookup. You can then test the adapter using the TEST_source_repository_lookup script provided by the TDI solution.
A lookup connector is not usable in every situation.
For example, if your source is a data file, you cannot retrieve the data using the lookup mode of the TDI file connector. However, you can provide an iterator connector and use the TDI assembly lines that do not use the lookup connector, such as sync_all_dns.
Create a source repository lookup connector by completing the following steps:
- Develop your lookup assembly line by performing the following steps.
- Add an attribute map to your assembly line flow section by mapping the following attributes:
- $dn – This attribute should be present in the attribute map with the value from the work entry.
- $lookup_operation – This attribute should be present in the attribute map with the value from the work entry.
- $lookup_status – This attribute should be initialized to return the string error; for example, map it to ret.value = "error";
- Connect to your source with an appropriate connector using its lookup mode. Retrieve the data to be used in the process into the work entry. Use the $dn attribute as the link criteria, or ensure that another value you wish to use is present in the work entry.
- Add the following Javascript entries to the following lookup connector hooks:
- On no Match hook:
work.setAttribute("$lookup_status", "nomatch");
system.skipEntry();
- Default Success hook:
work.setAttribute("$lookup_status", "success");
- Lookup Error hook:
work.setAttribute("$lookup_status", "nomatch");
system.skipEntry();
- Add the following entry to the assembly line On Failure hook:
system.ignoreEntry();
- Optionally, add additional error-checking code and tracing output to confirm the changes made.
To help get started with Tivoli Directory Integrator, go to the Learning TDI site. You can also refer to the Tivoli Directory Integrator product documentation for more information.
- Export your lookup solution by completing the following steps:
You can package the lookup connector together with the iterator assembly line, which is a best practice but not a required step.
- Shift-click the assembly lines that comprise your lookup solution in the IBM Tivoli Directory Integrator Config Editor.
- Right-click a member of the selected assembly line group and select Publish.
- In the Publish window, enter a name for your solution in the Package ID field.
For example, myLookupAdapter.
- Enter additional information, such as a version number or a help URL for future administrators.
- Assuming you followed the development environment procedure outlined in the topic, Setting up your development environment, select the packages directory located in your Connections TDI solution from the File Path menu, and then click Finish.
- To make the adapter file visible to the Profiles TDI solution, restart the Tivoli Directory Integrator Config Editor. If the Tivoli Directory Integrator server is not recycled during testing, it might not detect the existence of the new adapter.xml file. Recycling the Config Editor stops and starts the embedded Tivoli Directory Integrator server.
- Configure the Profiles TDI solution to use your adapter for data lookup by completing the following steps:
- Open profiles_tdi.properties in a text editor.
- Add a property of the following format to the file:
source_repository_lookup_assemblyline={name-of-your-adapter.xml}:/AssemblyLines/{name-of-your-LOOKUP-al}
This property may already be present and commented out in the file. If so, remove the comment character (hash sign) and make the edits.
- Substitute {name-of-your-adapter.xml} with the package ID that you entered in step 2c.
- Substitute {name-of-your-LOOKUP-al} with the name of your lookup assembly line. The line should now look similar to the following:
source_repository_lookup_assemblyline=myLookupAdapter:/AssemblyLines/lookup_from_db
- Save your changes and then close profiles_tdi.properties.
- Test your lookup adapter using the TEST_source_repository_lookup script in the Profiles TDI solution. To use this script:
- Configure the assembly line as described in the previous steps.
- Write the distinguished names of a number of users in a file named collect.dns and place this file at the root of the Profiles TDI solution directory. Separate each distinguished name with a carriage return. (A sample collect.dns file is shown after this procedure.)
- Run the runAl.sh TEST_source_repository_lookup or runAl.bat TEST_source_repository_lookup command. This command iterates over the collect.dns file and attempts to look up each user specified. The data returned by your adapter is written to the ibmdi.log file in the {TDI solution}/logs directory. You can examine this file to confirm that your adapter is returning all of the expected values correctly.
- Repeat this procedure as needed until you are satisfied with the output of your adapter.
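For reference, the collect.dns file used by this test is a plain list of distinguished names, one per line. The following is a minimal sketch that reuses the placeholder names appearing elsewhere in this documentation; substitute distinguished names from your own directory:
cn=Amy Jones3,cn=Users,l=WestfordFVT,st=Massachusetts,c=US,ou=Lotus,o=Software Group,dc=ibm,dc=com
cn=Amy Jones8,cn=Users,l=WestfordFVT,st=Massachusetts,c=US,ou=Lotus,o=Software Group,dc=ibm,dc=com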
Use the ProfileConnector
Use the ProfileConnector to retrieve, create, update, and reset profile entries in the employee, profile extension, and other employee tables in the Profiles database. The connector flattens these tables into a single view of the profile data. The ProfileConnector can also be used to change the user state and change whether a user profile is listed as a manager. The ProfileConnector is the only supported way to perform these operations on a profile using TDI as Connections does not support the use of direct database access.
For information about how to configure your development environment for working with the IBM Tivoli Directory Integrator connectors, and where to place the connectors, see Setting up your development environment.
Database properties are read from profiles_tdi.properties, which must be configured prior to using the connector. The Profiles property store must be part of the configuration (.xml) file where your assembly lines are located. For related information, see Connector modes in the Tivoli Directory Integrator documentation.
The mode setting of the ProfileConnector determines what role the connector carries out in the assembly line. You can use the ProfileConnector in the following modes.
Table 21. ProfileConnector modes
Iterator
Iteratively scans data source entries, reads their attribute values, and delivers each entry to the appropriate AssemblyLine Flow section components.
All attributes that contain data in the profile can be mapped to be returned by the iterator mode. In addition to the list of profile attributes in the map_dbrepos_from_source.properties file, the following attributes can be retrieved in the iterator and lookup modes:
- key - an internal key used to uniquely identify this profile. This key is used to link data between tables in the profiles database. It is not exported from Profiles to other Connections components.
- sys_usrState - the value active or inactive based on the user's state
- lastUpdate - the time when this record was last updated
Lookup
Fetches records from the Employee table in the Profiles database according to specified search criteria.
The following attributes can be used as search criteria:
- key
- uid
- guid
- distinguishedName
You can use sourceUrl in combination with one of these attributes.
For example, this mode is used by the dump photos assembly line.
The ability to use the TDI ProfileConnector lookup based on managerUid was discontinued. To generate a list of direct reports, use the Profiles API as described in the "Searching for a person's direct reports" article in the IBM Social Business Development Wiki at http://www-10.lotus.com/ldd/lcwiki.nsf.
Update
Updates the profile records in the Employee table in the Profiles database.
The following attributes can be used as the search criteria. You can use sourceUrl in combination with one of these attributes:
- key
- uid
- guid
- distinguishedName
In Update mode the following options are available on the connector panel:
- Update user state – The available options are...
- Do not change (default)
- Activate
- Inactivate
If Activate or Inactivate is selected, this state is explicitly set during the update processing of the record. This option is used during the sync_dns_process_add phase of sync_all_dns.
- Mark manager – The available options are...
- Checked – Sets the manager status to Yes or No
- Unchecked (default) – No manager status is specified
- When this option is enabled, the update mode performs the Mark manager processing in addition to any other update operations. It determines whether the profile being updated is referenced as the manager by any other profile in the database. If it is referenced, it is marked with a Y as being a manager. If it is not referenced, it is marked with an N as not being a manager. This option is used by the mark_manager assembly line.
The Update mode of the ProfileConnector is used by the update mode of the SyncDBFromSource internal assembly line, which is called by populate_from_dn_file.
The ProfileConnector also supports the Compute Changes and Skip Lookup checkboxes in the Advanced area. Consider unchecking the Compute Changes option if you want a state change or mark manager operation to be executed whether or not other changes are necessary.
For more information about Compute Changes and Skip Lookup options, see Connector modes in the Tivoli Directory Integrator documentation.
Delete
Deletes records in the Employee table in the Profiles database according to specified search criteria.
The Delete mode of the ProfileConnector is used by the delete mode of the SyncDBFromSource internal assembly line, which is called by sync_all_dns.
The search (link) criteria is the same as the Lookup mode.
addOnly
Adds new records to the Employee table in the Profiles database.
markManager
This mode has been deprecated; use Update mode with the Mark manager option instead.
activate
This mode has been deprecated; use Update mode with the Update user state option instead.
inactivate
This mode has been deprecated; use Update mode with the Update user state option instead.
- To add the connector to an assembly line, open the assembly line, and then click Add Component in the Configuration Editor.
- Select Connectors, and then select ProfileConnector from the Components list.
Enter a name for the connector in the Name field.
- Select a mode from the Mode list, and then click Finish.
Consider referencing the supplied assembly lines for examples of using the ProfileConnector. In addition to these supplied assembly lines, the following topics describe additional example programs that are included as part of the TDI solution and that use the ProfileConnector. Do not modify the existing assembly lines; instead, use the extension points available through the hooks or create your own assembly line.
- Create a connector to synchronize Profiles data using LDIF – This describes how to use a source other than LDAP to synchronize Profiles user data. This sample shows how to use an LDIF text file as the user data source.
- Create a connector to synchronize a subset of Profiles data – This describes how to synchronize an explicit set of Profiles users out of cycle from your scheduled synchronization plan by supplying a list of users to an alternate synchronization utility.
- Use supplied scripts to delete inactive users based on inactivity length – This describes how to use supplied TDI scripts to surface and delete users who have been inactive for a specified length of time.
- Create a connector to customize TDI attribute mapping –
This describes how to use the mapping functionality included with the TDI solution and used by the supplied Profiles tasks.
Create a connector to synchronize Profiles data using LDIF
You can use a source other than LDAP to synchronize Profiles user data. This sample shows how to use an LDIF text file as the user data source.
In this sample, data exported from an LDAP or constructed by other means contains data for the Profiles users in your Connections deployment. If you have user data in a text file, such as LDIF format, you can use this sample connector in conjunction with sync_all_dns to synchronize the source data with Connections.
Using this process, you do the following:
- Configure TDI to use the iterator connector during operations that synchronize the Profiles database from the source.
- Use TDI mapping and extension attribute processing functions to upload data from the LDIF file to the Profiles database. Based on the source content, you will specify the mapping functions as needed.
A sample connector for use with an LDIF file is supplied as samples/ldifSourceConnectorIterator.xml.
The following is sample LDIF file content for a single user:
dn: uid=asingh, cn=users, dc=ibm,dc=com
cn: Allie Singh
givenName: Allie
sn: Singh
employeeNumber: 24251
ou: Office of the CEO
departmentNumber: 10
title: Administrative Assistant to George Bandini
telephoneNumber: 1-301-555-1001
mobile: 1-312-555-0302
pager: 1-773-555-8840
facsimileTelephoneNumber: 1-301-555-1002
uid: asingh
roomNumber: 1-400A
workloc: ID
countryName: USA
mail: asingh@rennovations.com
manager: uid=gbandini, cn=users, dc=ibm,dc=com
In this example, using source data from an LDIF text file, the iterator connector is available but the lookup connector is not. TDI assembly lines that work in an iterative manner read from the LDIF file; however, assembly lines that must look up a particular user do not.
For example, the collect_dns utility and the sync_all_dns utility will work, but the populate_from_dn_file utility will not because it also requires a lookup connector.
Use the following procedure:
- Create or otherwise obtain the LDIF file from your source data repository, for example an LDAP database.
- Move the supplied ldifSourceConnectorIterator.xml connector file from the samples directory to the packages subdirectory.
- Add the following statement to profiles_tdi.properties:
source_employees_file=your_file_name.ldif
- Open the map_dbrepos_from_source.properties file and configure mappings as needed based on the attributes in your source LDIF file. Note the following sample mapping for a single user:
In this example, a guid field was not present in the LDIF file so the guid entry in the following sample is mapped to employeeNumber, which will enable processing.
See Map fields manually and Create an iterator connector for details.
deptNumber=departmentNumber
displayName=cn
distinguishedName=$dn
email=mail
faxNumber=facsimileTelephoneNumber
givenName=given_name
givenNames=given_name
guid=employeeNumber
#managerUid={func_map_to_db_MANAGER_UID}
mobileNumber=mobile
officeName=roomNumber
#secretaryUid={func_map_to_db_SECRETARY_UID}
surname=sn
surnames=sn
telephoneNumber=telephoneNumber
title=title
uid=uid
- Uncomment and change the following statement in profiles_tdi.properties to enable access to the connector:
source_repository_iterator_assemblyline=ldifSourceConnectorIterator:/AssemblyLines/ldifSourceConnectorIterator
- Run a command to process the connector, such as sync_all_dns, to update the corresponding Profiles user data.
Summary output will appear in the console and any errors generated will appear in the log file.
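For example, assuming the standard script naming used by the other utilities in the TDI solution directory, the synchronization run might be started as follows on Linux (use the .bat equivalent on Microsoft Windows):
./sync_all_dns.sh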
Synchronize a subset of Profiles data
You can synchronize an explicit set of Profiles users out of cycle from your scheduled synchronization plan by supplying a list of users to an alternate synchronization command.
While Connections enables you to upload and synchronize Profiles user data from an LDAP or alternate source repository (such as an LDIF file) on a scheduled basis, you can also synchronize data for a small user subset using the alternative process described here.
Typically the sync_all_dns command is used to synchronize the entire Profiles data set on a scheduled basis. If you need to synchronize a set of users, you can use the sync_dns_from_file command to accomplish the task.
For example, you can use this command, and a small user data subset, as a diagnostic tool when troubleshooting the synchronization process, making it easier to analyze trace output with a smaller sample size.
In this example, you will synchronize a list of users using the sync_dns_from_file command. Functionally, this process works as though you had run the sync_all_dns command.
- If the user is found in the Profiles database but not in the source repository (for example, LDAP), the specified delete action occurs.
- If the user is found in the source repository (for example, LDAP) but not in the Profiles database, specified adds occur as they would in a sync_all_dns action.
- If the user is found in the source repository (for example, LDAP) and also in the Profiles database, specified updates occur as they would in a sync_all_dns action.
The sync_dns_from_file command file (sync_dns_from_file.bat or sync_dns_from_file.sh) is in the samples directory. You must copy the file to the main solution directory, which is typically one level above the samples directory, and run it from the main solution directory.
The data input file uses the same formatting as the existing delete_or_inactivate_employees.in file in the samples directory.
Use the following procedure:
- Create your user synchronization source data file, for example sync_users_subset.in, using the following format:
$dn:cn=Amy Jones3,cn=Users,l=WestfordFVT,st=Massachusetts,c=US,ou=Lotus,o=Software Group,dc=ibm,dc=com
uid:Amy Jones3 .
$dn:cn=Amy Jones8,cn=Users,l=WestfordFVT,st=Massachusetts,c=US,ou=Lotus,o=Software Group,dc=ibm,dc=com
uid:Amy Jones8 .
- Save the completed data input file, for example sync_users_subset.in.
- Open profiles_tdi.properties and reference your file name and path in the sync_dns_simple_file property, using the following format:
sync_dns_simple_file=sync_users_subset.in
- Save profiles_tdi.properties.
- Run the sync_dns_from_file.bat or sync_dns_from_file.sh command.
Summary output will appear in the console and any errors generated will appear in the log file.
Use supplied scripts to delete inactive users based on inactivity length
You can use supplied Tivoli Directory Integrator (TDI) scripts to surface and delete users who have been inactive for a specified length of time. You might use this process to ensure that users who are no longer in the organization, according to LDAP, are deleted from the Profiles directory. If your organization plans to reuse UID values, or other unique Profiles fields, you should permanently delete inactive users so that these values can be reassigned to others.
When you inactivate a user, their email field is cleared but other fields such as UID, GUID, and distinguished name are not.
These users also remain listed in components such as Communities, Activities, and Profiles. After a specified length of time you may want to delete these inactive users completely from your other Connections components. You can use the revoke users sample to delete inactive users who meet the length of time criteria. See also: User life cycle details.
After flagging a user as inactive, but prior to revoking or deleting that user, you can retrieve that user and their data. However, after revoking or deleting a user, you cannot retrieve that user or that user’s data.
Typically, the sync_all_dns utility is used to synchronize the Profiles data set on a scheduled basis. When a user leaves the organization and is removed from the LDAP directory, by default the sync_all_dns utility inactivates that user by flagging them as inactive in the Profiles database and propagating this information to the other Connections components.
In this example, you use supplied scripts to delete inactive users who were inactivated more than the specified number of days ago. This gives the organization a transition period, during which the users remain in an inactive state; these users can then be deleted after the transition period. The transition period can be any value, in days. When a user is deleted, their UID and GUID identifiers are made available for reuse.
Use the following procedure:
- Copy the revoke_users.sh or revoke_users.bat, revoke_users.xml, and revoke_users.properties files to the Profiles TDI solution directory from the supplied samples directory.
- Run the revoke_users script with the validate parameter to check that you have installed the fixpacks required to run the revoke_users script with the revoke parameter. Results are sent to the logs/ibmdi.log file.
See the following sample output:
2012-06-19 11:22:03,076 INFO [AssemblyLine.AssemblyLines/validate.1] +++++++++ VALID TDI SOLUTION +++++++++++
- Run the revoke_users script with the summary parameter to preview the users to be deleted before actually deleting them.
This script creates the following two preview files:
- revoke.ldif – lists the inactive users to be deleted from the Profiles database by the revoke_users revoke script. These are the inactive users who have been inactive for as long as or longer than the specified amount of time.
- revoke_skip.ldif – lists the inactive users that will not be deleted from the Profiles database by the revoke_users revoke script. These are the inactive users who have been inactive for less than the specified amount of time.
The logs/ibmdi.log file is updated after every 10K user names processed.
After flagging a user as inactive, but prior to revoking or deleting that user, you can retrieve that user and their data. However, after revoking or deleting a user, you cannot retrieve that user or that user’s data.
- Run the revoke_users script with the revoke parameter to delete the inactive users from the Profiles database.
This script creates the same revoke.ldif and revoke_skip.ldif files as the revoke_users summary script. It then deletes the users listed in the revoke.ldif file from the Profiles database.
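In summary, the three passes described above might be run from the Profiles TDI solution directory as shown in the following sketch, which assumes that the parameter is passed as the first command-line argument to the script (use the .bat equivalents on Microsoft Windows):
./revoke_users.sh validate
./revoke_users.sh summary
./revoke_users.sh revoke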
Use the PhotoConnector
Use the PhotoConnector to retrieve, create, update, and delete photo entries in the Photo table in the Profiles database.
For information about how to configure your development environment for working with the IBM Tivoli Directory Integrator connectors, and where to place the connectors, see Setting up your development environment.
Database properties are read from profiles_tdi.properties, which must be configured prior to using the connector. The Profiles property store must be part of the configuration (.xml) file where your assembly lines are located. For related information, see General Concepts in the Tivoli Directory Integrator documentation.
The PhotoConnector works with photos in the Profiles database.
The mode setting of the connector determines what role the connector carries out in the assembly line. You can use the PhotoConnector in the following modes.
Table 22. PhotoConnector modes
Iterator
Iteratively scans data source entries, reads their attribute values, and delivers each entry to the appropriate AssemblyLine Flow section components.
The following attributes are returned in the work entry in the iterator and lookup modes:
- fileType - specifies the file type, for example image or jpeg
- image - specifies the byte array containing the photo contents
- key - specifies the user’s key value
- updated - specifies when the photo was last modified
The dump_photos_to_files assembly line uses the Iterator mode.
Lookup
Fetches records from the Photo table in the Profiles database according to specified search criteria (the key attribute).
The following attributes are returned in the work entry:
- fileType - specifies the file type, for example image or jpeg
- image - specifies the byte array containing the photo contents
- key - specifies the user’s key value
- updated - specifies when the photo was last modified
The search (link) criteria can be either uid or key.
Update
Updates the photo for the user specified in the search criteria (the key attribute).
The image attribute must include the byte array containing the photo.
The load_photos_from_files assembly line uses the Update mode.
The search (link) criteria can be either uid or key.
Delete
Deletes records in the Photo table in the Profiles database according to specified search criteria.
The search (link) criteria can be either uid or key.
updateToDB
This mode has been deprecated; use the Update mode instead.
- To add the connector to an assembly line, create a new assembly line or open an existing one, and then click Add Component in the Configuration Editor.
- Select Connectors, and then select PhotoConnector from the Components list.
Enter a name for the connector in the Name field.
- Select a mode from the Mode list, and then click Finish.
- Continue with any additional development of your assembly line.
Use the PronunciationConnector
Use the PronunciationConnector to retrieve, create, update, and delete pronunciation entries in the Pronunciation table in the Profiles database.
For information about how to configure your development environment for working with the IBM Tivoli Directory Integrator connectors, and where to place the connectors, see Setting up your development environment.
Database properties are read from profiles_tdi.properties, which must be configured prior to using the connector. The Profiles property store must be part of the configuration (.xml) file where your assembly lines are located. For related information, see General Concepts in the Tivoli Directory Integrator documentation.
The PronunciationConnector writes changes to the Pronunciation table in the Profiles database. The mode setting of the connector determines what role the connector carries out in the assembly line. You can use the PronunciationConnector in the following modes.
Table 23. PronunciationConnector modes
Iterator
Iteratively scans data source entries, reads their attribute values, and delivers each entry to the appropriate AssemblyLine Flow section components. The available attributes are returned in the work entry in both the iterator and lookup modes.
In this mode, the PronunciationConnector connects to the Pronunciation table in the Profiles database, retrieves all the records, and handles them one by one.
Lookup
Fetches records from the Pronunciation table in the Profiles database according to specified search criteria.
The PronunciationConnector only supports searches by uid and key.
The search (link) criteria must be the key attribute.
Update
Updates the pronunciation records in the Pronunciation table in the Profiles database. The connector can update the database using the pronunciation file link, inputting it as an InputStream data type, or using the pronunciation content, inputting it as a byte data type.
The search (link) criteria is the same as the Lookup mode.
Delete
Deletes records in the Pronunciation table in the Profiles database according to specified search criteria.
The PronunciationConnector can only delete pronunciation records that are specified by key.
The search (link) criteria is the same as the Lookup mode.
updateToDB
This mode has been deprecated; use the Update mode instead.
- To add the connector to an assembly line, open the assembly line, and then click Add Component in the Configuration Editor.
- Select Connectors, and then select PronunciationConnector from the Components list.
Enter a name for the connector in the Name field.
- Select a mode from the Mode list, and then click Finish.
- To add the connector to your project's connector library, right-click the Connectors folder in the Configuration Browser, and then select PronunciationConnector from the Components list.
Use the CodesConnector
Use the CodesConnector to retrieve, create, update, and delete code entries in various codes tables in the Profiles database.
For information about how to configure your development environment for working with the IBM Tivoli Directory Integrator connectors, and where to place the connectors, see Setting up your development environment. For additional information, see Supplemental user data for Profiles.
Database properties are read from profiles_tdi.properties, which must be configured prior to using the connector. The Profiles property store must be part of the configuration (.xml) file where your assembly lines are located. For related information, see General Concepts in the Tivoli Directory Integrator documentation.
The Codes Table Name menu option in the CodesConnector contains the choices Country, Department, EmployeeType, Organization, and WorkLocation. The CodesConnector requires that one of these table choices be assigned to the Codes Table Name field option. The table choice specified is used during CodesConnector operations. Each table has a different, but similar, schema. You can determine the schema of a particular table by making connections in the input or output map panels of the connector and then clicking Next to advance to the applicable record.
The CodesConnector works with records in the COUNTRY, DEPARTMENT, EMP_TYPE, ORGANIZATION, and WORKLOC tables in the Profiles database. The mode setting of the connector determines what role the connector carries out in the assembly line. You can use the CodesConnector in the following modes.
Table 24. CodesConnector modes
Iterator
Connects to the codes table in the Profiles database, retrieves all the records, and handles them one by one.
Lookup
Fetches records from the codes table in the Profiles database according to specified search criteria.
The search (link) criteria attribute is determined by the code table name specified in the connector panel.
The following table names and their associated attributes are supported for use as the search criteria:
- COUNTRY – countryCode
- DEPARTMENT – departmentCode
- EMP_TYPE – employeeType
- ORGANIZATION – orgCode
- WORKLOC – workLocationCode
Update
Updates records in the codes table in the Profiles database.
The search (link) criteria is the same as the Lookup mode.
Add
Adds records to the codes table in the Profiles database.
The search (link) criteria is the same as the Lookup mode.
Delete
Deletes records in the codes table in the Profiles database according to specified search criteria.
The search (link) criteria is the same as the Lookup mode.
updateToDB
This mode has been deprecated; use the Update mode instead.
- To add the connector to an assembly line, create a new assembly line or open an existing one, and then click Add Component in the Configuration Editor.
- Select Connectors, and then select CodesConnector from the Components list.
Enter a name for the connector in the Name field.
- Click Next.
- Select the desired table name from the list and click Finish.
- Continue with any additional development of your assembly line.
Manage profile content
You can enable the active content filter to prevent users from embedding malicious content in text input fields in Profiles. You can also use administrative commands to update or remove inappropriate information in fields to which you do not have owner access.
Filter active content in Profiles
Profiles provides a filter that prevents users from creating rich text descriptions with malicious scripts that are executed when other users visit Profiles. You can enable or disable this component.
To edit configuration files, use the wsadmin client.
The active content filter prevents a user from embedding malicious content such as JavaScript in the About me and Background text input fields. You can disable the filter to provide richer options for content in these fields.
Disabling this filter introduces a vulnerability to malicious cross-site scripting (XSS) attacks.
To configure active content filter settings....
- Start the wsadmin client from the following directory:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory, or subsequent commands that you enter do not execute correctly.
- Start the Profiles Jython script interpreter.
- Access the Profiles configuration files:
execfile("profilesAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Check out the Profiles configuration files:
ProfilesConfigService.checkOutConfig("working_directory", "cell_name" where:
- working_directory is the temporary working directory to which the configuration XML and XSD files are copied and are stored while you make changes to them. Use forward slashes (/) to separate directories in the file path, even if you are using the Microsoft Windows operating system.
AIX, Linux, and IBM i only: The directory must grant write permissions or the command does not complete successfully.
- cell_name is the name of the WAS cell hosting the Profiles application. This argument is required and is case-sensitive. To determine the cell name, run the following wsadmin command:
print AdminControl.getCell()
For example:
- AIX or Linux:
ProfilesConfigService.checkOutConfig("/opt/prof/temp","Cell01")
- Windows:
ProfilesConfigService.checkOutConfig("c:/prof/temp","Cell01")
- IBM i:
ProfilesConfigService.checkOutConfig("/temp","Cell01")
- To configure the active content filter for Profiles:
ProfilesConfigService.updateConfig(property, value)
where:
- property is one of the editable Profiles configuration properties.
- value is the new value with which you want to set that property.
The following table displays information regarding the active filter property and the type of data you can enter for it.
Table 25. The active content filter property
activeContentFilter.enabled
Enables and disables filtering for active content of text entered into the About me and Background text input fields.
This property takes a Boolean value: true or false.
The value must be formatted in lowercase.
For example, to disable filtering:
ProfilesConfigService.updateConfig("activeContentFilter.enabled","false")
- After making changes, check the configuration files back in. You must do this during the same wsadmin session in which you checked them out for the changes to take effect.
See Applying property changes in Profiles for information about how to save and apply your changes.
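As a consolidated reference, a complete wsadmin (Jython) session that disables the filter might look like the following sketch. The working directory and cell name are placeholders for your own environment, and the final ProfilesConfigService.checkInConfig() call is assumed to be the check-in counterpart to the checkout command shown above:
execfile("profilesAdmin.py")
ProfilesConfigService.checkOutConfig("/opt/prof/temp", "Cell01")
ProfilesConfigService.updateConfig("activeContentFilter.enabled", "false")
ProfilesConfigService.checkInConfig()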
Remove inappropriate content
Content management commands are used to update inappropriate information stored in the Profiles database, such as text displayed in the About Me and Background fields of the Profiles user interface. These administrative commands can also be used to delete inappropriate photos from the database. No file checkout or server restart is required when using the commands.
To access configuration files, use the wsadmin client.
See Start wsadmin for information about how to start the wsadmin tool.
Profiles provides a number of administrative commands that allow you to remove offensive or unwanted content from the database.
To update or delete content in the Profiles database....
- Start the wsadmin client from the following directory:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory, or subsequent commands that you enter do not execute correctly.
- Start the Profiles Jython script interpreter.
- Access the Profiles configuration files:
execfile("profilesAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Use the following commands to remove or replace inappropriate or unwanted content from the Profiles database. A combined example session is shown after this list.
- ProfilesService.updateExperience(String user_email_addr, String new_content_for_experience_field)
Replace the existing experience text associated with a user's email address with alternate text enclosed by double quotes.
Experience is the information contained in the Background area of a user's profile.
For example:
ProfilesService.updateExperience("ann_jones@company.com","This is new text that will be entered into the Background field for Ann.")
Rich text cannot be entered with this command.
- ProfilesService.updateDescription(String user_email_addr, String new_content_for_description_field)
Replace the existing description text associated with a user's email address with alternate description text enclosed by double quotes.
Description text is information contained on the About Me tab of a user's profile.
For example:
ProfilesService.updateDescription("ann_jones@company.com","This is new text that will be entered into the About Me tab for Ann.")
Rich text cannot be entered with this command.
- ProfilesService.deletePhoto(String user_email_addr)
Delete image files associated with a user's email address.
This command can be used only if the user has uploaded a photo to their profile. This command removes the photo.
For example:
ProfilesService.deletePhoto("john_doe@company.com")
Configure profile features
You can configure certain features in Profiles by modifying policy settings.
By modifying settings in the profiles-policy.xml file, you can enable, disable, and set access control settings for the following Profiles features, according to profile type.
- Recent updates and messages
- Following
- Networking
- Profile photo
- Profile pronunciation file
- Reporting structure
- Tagging
Configure the recent updates feature
Edit settings in the profiles-policy.xml file to configure the recent updates feature.
To edit configuration files, use the IBM WebSphere Application Server wsadmin client.
See Start wsadmin for information about how to start the wsadmin tool.
The recent updates feature allows users to connect with people in their network by posting messages to their profile and commenting on their status messages. As administrator, you can enable or disable the feature for specific profile types, depending on your organization's needs. You can configure access control settings according to profile type. You can also configure visibility of targeted events in the Profiles activity stream.
Profiles directory extensions must be enabled to support this capability. Extensions are enabled by default.
Profiles policy contains two related settings that affect whether a user can post a status update on their Profile page – profile.board and profile.status.update. It is recommended to use identical settings for both of these policies. In case of conflict between the two settings, the more restrictive setting is used.
See Configure the status update feature for related information.
The following steps provide information about the properties that you can set for the recent updates feature, and the access levels and scopes that you can configure.
- Start the wsadmin client from the following directory:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory, or subsequent commands that you enter do not execute correctly.
- Start the Profiles Jython script interpreter.
- Access the Profiles configuration files:
execfile("profilesAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Use the following command to check out the profiles-policy.xml file:
ProfilesConfigService.checkOutPolicyConfig("<working_directory>", "cell_name")
...where...
- working_directory is the temporary working directory to which the configuration XML and XSD files will be copied.
The files are kept in this working directory while you make changes to them.
- cell_name is the name of the IBM WAS cell hosting the Profiles application. This argument is required.
For example:
ProfilesConfigService.checkOutPolicyConfig("/wsadminoutput", "jdoe30Node02Cell")
- Open the profiles-policy.xml file using a text editor, from the temporary directory to which you checked it out.
- Edit the following properties for the recent updates feature as needed.
- profile.board
Enables or disables the Profiles recent updates feature.
Configuring this property does not affect the ability to post status messages.
This property takes a string value. Possible values include:
- true. Enable the recent updates feature for users with the specified profile type. When set to true, message posts display in the user interface.
- false. Disable the recent updates feature for users with the specified profile type. When set to false, message posts do not display in the user interface. The access control level settings are also ignored when the feature is disabled.
- profile.board.write.message
Controls user access to post messages.
Access levels for this property can be defined using one of the following scopes:
- none. No user can post messages to users with the specified profile type.
- self. Users with the specified profile type can view and post messages in their own recent updates area. Administrators can also view and post messages in the recent updates area of users with the specified profile type.
- colleagues_not_self. Only people who belong to the network of the user with the specified profile type, and who have the person role, can view and post messages to the user's recent updates area. Users with the specified profile type cannot post messages to their own recent updates area.
If resourceOwner is specified on the access check, the resource owner constraint must also be met, unless the user has the self role.
- colleagues_and_self. People who belong to the network of the user with the specified profile type, and who have the person role, can view and post messages to the user's recent updates area. Users with the specified profile type can also post messages to their own recent updates area.
If resourceOwner is specified on the access check, the resource owner constraint must also be met, unless the user has the self role.
- person_not_self. Users with the Person role in the News application can post messages to or view the recent updates area of users with the specified profile type. Users with the specified profile type cannot post messages to their own recent updates area.
If resourceOwner is specified on the access check, the resource owner constraint must also be met, unless the user has the self role.
- person_and_self. Users with the Person role in the News application, including self, can post messages to or view the recent updates area of users with the specified profile type. Users with the specified profile type can also post messages to their own recent updates area.
If resourceOwner is specified on the access check, the resource owner constraint must also be met, unless the user has the self role.
- profile.board.write.comment
Controls user access to post comments to the recent updates area.
Access levels for this property can be defined using one of the following scopes:
- none. No one can post comments to the recent updates area of users with the specified profile type.
- self. Users with the specified profile type can view and post comments to their own recent updates area. Administrators can also view and post comments to the recent updates area of users with the specified profile type.
- colleagues_not_self. Only the people who belong to the network of the user with the specified profile type, and who have the person role, can view and post comments to the user's recent updates area. Users with the specified profile type cannot post comments to their own recent updates area.
If resourceOwner is specified on the access check, the resource owner constraint must also be met, unless the user has the self role.
- colleagues_and_self. People who belong to the network of the user with the specified profile type, and who have the person role, can view and post comments to the user's recent updates area. Users with the specified profile type can also post comments to their own recent updates area.
If resourceOwner is specified on the access check, the resource owner constraint must also be met, unless the user has the self role.
- person_not_self. Users with the Person role in the News application can post comments to and view the recent updates area of users with the specified profile type. Users with the specified profile type cannot post comments to their own recent updates area.
If resourceOwner is specified on the access check, the resource owner constraint must also be met, unless the user has the self role.
- person_and_self. Users with the Person role in the News application, including self, can post comments to and view the recent updates area of users with the specified profile type. Users with the specified profile type can also post comments to their own recent updates area.
If resourceOwner is specified on the access check, the resource owner constraint must also be met, unless the user has the self role.
For example:
<feature name="profile.board"> <profileType type="default" enabled="true"> <acl name="profile.board.write.message" scope="colleagues_and_self" /> <acl name="profile.board.write.comment" scope="colleagues_and_self" /> </profileType> <profileType type="contractor" enabled="true"> <acl name="profile.board.write.message" scope="person_and_self" /> <acl name="profile.board.write.comment" scope="colleagues_and_self" /> </profileType> <profileType type="visitor" enabled="false" /> </feature>This code sample enables the recent updates feature for the default profile type, but restricts access to post messages and comments to people in the profile owner's network who have the person and the profile owner. The recent updates feature is also enabled for the contractor profile type, but access to post messages is restricted to users with the person role, including the profile owner. Access to post comments is restricted to the profile owner, and people in the profile owner's network who have the person role. The recent updates feature is disabled for the visitor profile type.
- To restrict or enable display of activity stream events for a given set of users, use the profile.activitystream feature as shown in the following example:
<feature name="profile.activitystream"> <profileType type="default" enabled="true"> <!-- only surface public targetted events in the Profiles Activity Stream of people with the profile type "default" from colleagues --> <acl name="profile.activitystream.targetted.event" scope="colleagues" /> </profileType> </feature>The supported scopes are:
- none. No targeted events are surfaced in the activity stream of people with the given profileType.
- colleagues. Only targeted events from colleagues are surfaced in the activity stream of people with the given profileType.
- self. Only the user's own targeted events are surfaced in the activity stream of people with the given profileType.
- colleagues_and_self. Only the user's own targeted events and targeted events from colleagues are surfaced in the activity stream of people with the given profileType.
The profile policy applies only to new events; it does not affect the display of existing events that are already visible in the activity stream.
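For example, the following sketch (not taken from the product samples above) restricts a contractor profile type so that only the contractor's own targeted events appear in their activity stream, while keeping the colleagues scope for the default type:
<feature name="profile.activitystream">
  <profileType type="default" enabled="true">
    <acl name="profile.activitystream.targetted.event" scope="colleagues" />
  </profileType>
  <profileType type="contractor" enabled="true">
    <!-- assumed variant: surface only the contractor's own targeted events -->
    <acl name="profile.activitystream.targetted.event" scope="self" />
  </profileType>
</feature>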
- Save your changes and check the profiles-policy.xml file back in
ProfilesConfigService.checkInPolicyConfig()
- To exit the wsadmin client, type exit at the prompt.
- Stop and restart the Profiles server.
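The check-out, edit, and check-in steps above are performed in a single wsadmin session. The following is a minimal sketch of such a session on a Linux deployment manager; the installation path, administrator credentials, working directory, and cell name are placeholder assumptions and must match your own deployment:
cd /opt/IBM/WebSphere/AppServer/profiles/Dmgr01/bin        # assumed WAS_HOME path
./wsadmin.sh -lang jython -user wasadmin -password secret  # placeholder credentials
wsadmin> execfile("profilesAdmin.py")
wsadmin> ProfilesConfigService.checkOutPolicyConfig("/wsadminoutput", "jdoe30Node02Cell")
# ...edit /wsadminoutput/profiles-policy.xml in a text editor, then continue in the same session...
wsadmin> ProfilesConfigService.checkInPolicyConfig()
wsadmin> exit
After the check-in, stop and restart the Profiles server for the changes to take effect, as described above.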
Configure the following feature
Edit settings in the profiles-policy.xml file to configure the following feature.
To edit configuration files, use the IBM WebSphere Application Server wsadmin client.
See Start wsadmin for information about how to start the wsadmin tool.
When the following feature is enabled, users can follow people and content that they are interested in to get the latest updates about them. In this release of Connections, the following feature is enabled by default and you cannot disable it. However, you can configure access control settings for the feature according to profile type.
These steps provide information about the properties that you can set and the access levels that you can configure.
- Use the wsadmin client to access the Profiles configuration files.
- Access the Profiles configuration files:
execfile("profilesAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Use the following command to check out the profiles-policy.xml file:
ProfilesConfigService.checkOutPolicyConfig("<working_directory>", "cell_name")
...where...
- working_directory is the temporary working directory to which the configuration XML and XSD files will be copied.
The files are kept in this working directory while you make changes to them.
- cell_name is the name of the IBM WAS cell hosting the Profiles application. This argument is required.
For example:
ProfilesConfigService.checkOutPolicyConfig("/wsadminoutput", "jdoe30Node02Cell")
- Open the profiles-policy.xml file using a text editor, from the temporary directory to which you checked it out.
- Edit the following properties for the following feature as needed.
- profile.following
This property is always enabled in this release so that users are always able to see who they are following and who their followers are. You can use the profile.following.add access scope to control who can follow users of the specified profile type.
- profile.following.add
Controls access to follow users with the specified profile type.
Access levels for this property can be defined using one of the following scopes:
- none. No one can follow users with the specified profile type.
- self. Users with the specified profile type can follow themselves to subscribe to their own updates. Administrators can also follow users with the specified profile type.
- colleagues_not_self. Only people who belong to the network of the user with the specified profile type, and who have the person role, can follow the user. Users with the specified profile type cannot follow themselves.
If resourceOwner is specified on the access check, the resource owner constraint must also be met, unless the user has the self role.
- colleagues_and_self. People who belong to the network of the user with the specified profile type, and who have the person or self role, can follow the user. Users of the specified profile type can also follow themselves to subscribe to their own updates.
If resourceOwner is specified on the access check, the resource owner constraint must also be met, unless the user has the self role.
- person_not_self. Only users with the person J2EE role can follow users with the specified profile type. Users with the specified profile type cannot follow themselves.
If resourceOwner is specified on the access check, the resource owner constraint must also be met, unless the user has the self role.
- person_and_self. Users with the person J2EE role can follow users with the specified profile type. Users of the specified profile type can also follow themselves to subscribe to their own updates.
If resourceOwner is specified on the access check, the resource owner constraint must also be met, unless the user has the self role.
For example:
<feature name="profile.following"> <profileType type="default" enabled="true"> <acl name="profile.following.add" scope="person_not_self" /> </profileType> <profileType type="contractor" enabled="true"> <acl name="profile.following.add" scope="colleagues_not_self" /> </profileType> <profileType type="visitor" enabled="false"> <acl name="profile.following.add" scope="none" /> </profileType> </feature>This code sample allows only users who have the person J2EE role to follow users with the specified profile type.
For users with the contractor profile type, only the people who belong to the user's network can follow users of that profile type. Following is disabled for users with the visitor profile type.
- Save your changes and check the profiles-policy.xml file back in
ProfilesConfigService.checkInPolicyConfig()
- To exit the wsadmin client, type exit at the prompt.
- Stop and restart the Profiles server.
Configure the networking feature
Edit settings in the profiles-policy.xml file to configure the networking feature.
To edit configuration files, use the IBM WebSphere Application Server wsadmin client.
See Start wsadmin for information about how to start the wsadmin tool.
When networking is enabled, users can invite other users to join their network. The networking feature is enabled by default and you cannot disable it. However, you can configure access control settings for the feature according to profile type.
The following steps provide information about the properties for the networking feature, and the access levels and scopes that you can configure.
- Start the wsadmin client from the following directory of the deployment manager:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- Start the Profiles Jython script interpreter.
- Access the Profiles configuration files:
execfile("profilesAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Use the following command to check out the profiles-policy.xml file:
ProfilesConfigService.checkOutPolicyConfig("<working_directory>", "cell_name")
...where...
- working_directory is the temporary working directory to which the configuration XML and XSD files will be copied.
The files are kept in this working directory while you make changes to them.
- cell_name is the name of the IBM WAS cell hosting the Profiles application. This argument is required.
For example:
ProfilesConfigService.checkOutPolicyConfig("/wsadminoutput", "jdoe30Node02Cell")
- Open the profiles-policy.xml file using a text editor, from the temporary directory to which you checked it out.
- Edit the following properties for the networking feature as needed.
- profile.colleague
This property is always set to enabled to ensure that users are always able to see their possible colleagues. You cannot set the property to disabled. However, you can use the profile.colleague.connect access scope to control who can invite the user to be a colleague.
- profile.colleague.connect
Controls user access to invite people to join their network.
Access levels for this property can be defined using one of the following scopes:
- none. No one can invite a user with the specified profile type to join their network. If the user has an existing network of colleagues, it is no longer available.
Note that setting the scope to none does not make a user's network read-only: users can still remove contacts from their network. Keep this in mind if you need to lock the state of a user's network.
- person_not_self. Only users with the person J2EE role can invite users with the specified profile type to join their network. The profile owner cannot invite themselves to join their own network.
If resourceOwner is specified on the access check, the resource owner constraint must also be met, unless the user has the admin role.
For example:
<feature name="profile.colleague"> <profileType type="default" enabled="true"> <acl name="profile.colleague.connect" scope="person_not_self" /> </profileType> <profileType type="contractor" enabled="true"> <acl name="profile.colleague.connect" scope="none" /> </profileType> <profileType type="visitor" enabled="false"> <acl name="profile.colleague.connect" scope="none" /> </profileType> </feature>This code sample enables the networking feature for users with the default profile type, and enables only users with the person J2EE role to invite the profile owner to join their network. Networking is also enabled for the contractor profile type, although no one can invite contractor users to join their network. Networking is disabled for users with the visitor profile type.
- Save your changes and check the profiles-policy.xml file back in
ProfilesConfigService.checkInPolicyConfig()
- To exit the wsadmin client, type exit at the prompt.
- Stop and restart the Profiles server.
Configure the profile photo feature
Edit settings in the profiles-policy.xml file to configure the profile photo feature.
To edit configuration files, use the wsadmin client.
When the profile photo feature is enabled, users can enhance their profile page by adding a picture of themselves. As administrator, you can enable or disable this feature for specific profile types, depending on your organization's needs. You can also configure access control settings for the profile photo feature according to profile type.
The following steps provide information about the properties that you can set for the profile photo feature, and the access levels and scopes that you can configure.
- Start the wsadmin client from the following directory of the deployment manager:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- Start the Profiles Jython script interpreter.
- Access the Profiles configuration files:
execfile("profilesAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Use the following command to check out the profiles-policy.xml file:
ProfilesConfigService.checkOutPolicyConfig("<working_directory>", "cell_name")
...where...
- working_directory is the temporary working directory to which the configuration XML and XSD files will be copied.
The files are kept in this working directory while you make changes to them.
- cell_name is the name of the IBM WAS cell hosting the Profiles application. This argument is required.
For example:
ProfilesConfigService.checkOutPolicyConfig("/wsadminoutput", "jdoe30Node02Cell")
- Open the profiles-policy.xml file using a text editor, from the temporary directory to which you checked it out.
- Edit the following properties for the profile photo feature as needed.
- profile.photo
Enables or disables the profile photo feature.
This property takes a string value. Possible values include:
- true. Enable the photo feature for users with the specified profile type. The user interface displays the user's photo and provides options for editing the photo.
- false. Disable the photo feature for users with the specified profile type. The user interface does not display the user's photo or options for editing the photo. A generic photo image is displayed in place of the user's photo.
- profile.photo.update
Controls user access to update the profile photo.
Access levels for this property can be defined using one of the following scopes:
- none. No one can update the profile photo of users with the specified profile type.
- self. Users with the specified profile type can update their own profile photo.
- profile.photo.view
Controls access to view the photo.
In addition to the scope attribute for this access control, the dissallowNonAdminIfInactive attribute can be used to indicate whether photos for inactive users can be viewed. Administrative users can view photos regardless of the configuration.
In the following photo policy sample, users who have been assigned the reader role can view the photos of active users with the default profile type, but photos for inactive users are viewable only by users who have been assigned the admin role. When a user's photo is not viewable, the default gray photo image is displayed.
<profileType type="default" enabled="true"> <acl name="profile.photo.view" scope="reader" dissallowNonAdminIfInactive="true"/> <acl name="profile.photo.update" scope="self" /> </profileType>The following sample enables the profile photo feature for the default profile type, but restricts access to update profile photos to profile owners and administrators.
For users with the contractor profile type, the profile photo is enabled, but no access is provided to update the profile photo for users of this profile type. The profile photo feature is disabled for users with the visitor profile type, and no one can update the profile photo for users of this profile type.
<feature name="profile.photo"> <profileType type="default" enabled="true"> <acl name="profile.photo.update" scope="self" /> </profileType> <profileType type="contractor" enabled="true"> <acl name="profile.photo.update" scope="none" /> </profileType> <profileType type="visitor" enabled="false"> <acl name="profile.photo.update" scope="none" /> </profileType> </feature>
- Save your changes and check the profiles-policy.xml file back in
ProfilesConfigService.checkInPolicyConfig()
- To exit the wsadmin client, type exit at the prompt.
- Stop and restart the Profiles server.
Configure the pronunciation feature
Edit settings in the profiles-policy.xml file to configure the pronunciation feature.
To edit configuration files, use the IBM WebSphere Application Server wsadmin client.
See Start wsadmin for information about how to start the wsadmin tool.
When the pronunciation feature is enabled, users can upload a recording of their name being pronounced correctly to their profile. You can enable or disable this feature for specific profile types. You can also configure access control settings for the pronunciation feature according to profile type.
The following steps provide information about the properties that you can set for the pronunciation feature, and the access levels and scopes that you can configure.
- Start the wsadmin client from the following directory of the deployment manager:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- Start the Profiles Jython script interpreter.
- Access the Profiles configuration files:
execfile("profilesAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Use the following command to check out the profiles-policy.xml file:
ProfilesConfigService.checkOutPolicyConfig("<working_directory>", "cell_name")
...where...
- working_directory is the temporary working directory to which the configuration XML and XSD files will be copied.
The files are kept in this working directory while you make changes to them.
- cell_name is the name of the IBM WAS cell hosting the Profiles application. This argument is required.
For example:
ProfilesConfigService.checkOutPolicyConfig("/wsadminoutput", "jdoe30Node02Cell")
- Open the profiles-policy.xml file using a text editor, from the temporary directory to which you checked it out.
- Edit the following properties for the pronunciation feature as needed.
- profile.pronunciation
Enables or disables the profile pronunciation feature.
This property takes a string value. Possible values include:
- true. Enable the pronunciation feature for users with the specified profile type. The user interface displays an icon for the user's pronunciation file and provides options for editing the file.
- false. Disable the pronunciation feature for users with the specified profile type. The feature does not display in the user interface.
- profile.pronunciation.update
Controls user access to update the profile pronunciation file.
Access levels for this property can be defined using one of the following scopes:
- none. No one can update the pronunciation file of users with the specified profile type.
- self. Users with the specified profile type can update their own pronunciation file. Administrators can also update the pronunciation file of users with the specified profile type.
For example:
<feature name="profile.pronunciation"> <profileType type="default" enabled="true"> <acl name="profile.pronunciation.update" scope="self" /> </profileType> </feature>This code sample enables the pronunciation feature for users with the default profile type, but restricts the ability to update pronunciation files to profile owners and administrators.
- Save your changes and check the profiles-policy.xml file back in
ProfilesConfigService.checkInPolicyConfig()
- To exit the wsadmin client, type exit at the prompt.
- Stop and restart the Profiles server.
Configure the reporting structure feature
Edit settings in the profiles-policy.xml file to configure the reporting structure feature. You can specify whether a user's manager information is available and whether a manager's direct reports are available.
To edit configuration files, use the wsadmin client.
When the report-to feature is enabled, users can view the position of other users within the organization using the report-to chain information displayed on their profile page. When the people-managed feature is enabled, users can view the direct reports of a particular manager. As administrator, you can enable or disable these reporting structure features for specific profile types.
The following steps provide information about the properties that you can set for the reporting structure feature.
- Start the wsadmin client from the following directory of the deployment manager:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- Start the Profiles Jython script interpreter.
- Access the Profiles configuration files:
execfile("profilesAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Use the following command to check out the profiles-policy.xml file:
ProfilesConfigService.checkOutPolicyConfig("<working_directory>", "cell_name")
...where...
- working_directory is the temporary working directory to which the configuration XML and XSD files will be copied.
The files are kept in this working directory while you make changes to them.
- cell_name is the name of the IBM WAS cell hosting the Profiles application. This argument is required.
For example:
ProfilesConfigService.checkOutPolicyConfig("/wsadminoutput", "jdoe30Node02Cell")
- Open the profiles-policy.xml file using a text editor, from the temporary directory to which you checked it out.
- Edit the following properties for the reporting structure feature.
- profile.reportTo
Enables or disables the display of the user's report-to information on their profile page.
This property takes a string value. Possible values include:
- true. Report-to information is available for the users with this profile type. The user interface displays the report-to information, and the user's service document contains the reporting structure links.
- false. Report-to information is not available for the users with this profile type. The user interface still displays the Report-to Chain widget, but with only the profile owner shown. The report-to information is hidden, as if the profile owner does not have a manager. If you disable this option, consider also disabling the widget for this profile type in widgets-config.xml (see the sketch after these steps).
For more information, see Managing widgets in Profiles.
For example, to enable the display of report-to information for users with the default profile type:
<feature name="profile.reportTo"> <profileType type="default" enabled="true"> </profileType> </feature>- profile.peopleManaged
Enables or disables the display of direct reports for managers with the specified profile type.
This property takes a string value. Possible values include:
- true. People-managed information is available for the users with this profile type, when they are managers. The user interface displays the people-managed information, and the user's service document contains the reporting structure links.
- false. People-managed information is not available for managers with this profile type. The user interface still displays the Report-to Chain widget, but with only the current profile owner shown. The people-managed information is hidden, as if the user does not have any direct reports.
For example, to enable the display of people-managed information for managers with the default profile type:
<feature name="profile.peopleManaged"> <profileType type="default" enabled="true"> </profileType> </feature>
- Save your changes and check the profiles-policy.xml file back in
ProfilesConfigService.checkInPolicyConfig()
- To exit the wsadmin client, type exit at the prompt.
- Stop and restart the Profiles server.
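If you set profile.reportTo or profile.peopleManaged to false, you might also hide the Report-to Chain widget itself in widgets-config.xml, as noted above. The following sketch assumes the default profilesView layout shown in the Manage widgets in Profiles example later in this document; it simply comments out the widget instance for the relevant profile type:
<layout resourceSubType="default">
  <page pageId="profilesView">
    ...
    <!-- Report-to Chain widget hidden for this profile type -->
    <!-- <widgetInstance uiLocation="col3" defIdRef="reportStructure"/> -->
    ...
  </page>
</layout>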
Configure the status update feature
Edit settings in the profiles-policy.xml file to configure the status update feature.
To edit configuration files, use the IBM WebSphere Application Server wsadmin client.
See Start wsadmin for information about how to start the wsadmin tool.
Profiles users can keep people in their network and the wider organization informed about their latest activities by posting status messages. You can control whether users can update their status message by enabling or disabling status updates for specific profile types. You can also configure access control settings for status updates according to profile type. This profile policy is similar to that of the activity stream settings described in Configuring the recent updates feature; the Recent updates and Status updates features are tightly related.
The following steps provide information about the properties you can set for the status update feature, and the access levels you can configure.
- Start the wsadmin client from the following directory of the deployment manager:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- Start the Profiles Jython script interpreter.
- Access the Profiles configuration files:
execfile("profilesAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Use the following command to check out the profiles-policy.xml file:
ProfilesConfigService.checkOutPolicyConfig("<working_directory>", "cell_name")
...where...
- working_directory is the temporary working directory to which the configuration XML and XSD files will be copied.
The files are kept in this working directory while you make changes to them.
- cell_name is the name of the IBM WAS cell hosting the Profiles application. This argument is required.
For example:
ProfilesConfigService.checkOutPolicyConfig("/wsadminoutput", "jdoe30Node02Cell")
- Open the profiles-policy.xml file using a text editor, from the temporary directory to which you checked it out.
- Edit the following properties for the status update feature as needed.
- profile.status
Enables or disables the status update feature.
This property takes a string value. Possible values include:
- true. Enable the status update feature for users with the specified profile type. The user interface for status messages displays.
- false. Disable the status update feature for users with the specified profile type. The user interface for status messages does not display.
The access control level settings are also ignored when this feature is disabled.
- profile.status.update
Controls user access to update status messages.
Access levels for this property can be defined using one of the following scopes:
- none. No one can update the status message of users with the specified profile type.
- self. Users with the specified profile type can update their own status message. Administrators can also update the status message of users with the specified profile type.
For example:
<feature name="profile.status"> <profileType type="default" enabled="true"> <acl name="profile.status.update" scope="self" /> </profileType> </feature>This code sample enables the status update feature for the default profile, but restricts the ability to update status messages to profile owners and administrators.
- Save your changes and check the profiles-policy.xml file back in
ProfilesConfigService.checkInPolicyConfig()
- To exit the wsadmin client, type exit at the prompt.
- Stop and restart the Profiles server.
Configure the tagging feature
Edit settings in the profiles-policy.xml file to configure the tagging feature.
To edit configuration files, use the IBM WebSphere Application Server wsadmin client.
See Start wsadmin for information about how to start the wsadmin tool.
The tagging feature allows users to assign meaningful keywords to their own profile and other users' profiles, making it easier to find people with a particular interest or expertise. You can control whether users can tag themselves and others by enabling or disabling the tagging feature for specific profile types. You can also configure access control settings for this feature according to profile type.
- When tagging is disabled, the tags widget is not automatically hidden in the user interface. To hide the widget, you must delete or comment out the relevant widget entry in the widgets-config.xml file.
For more information, see Managing widgets in Profiles.
- When tagging is disabled for a given user type, the user's existing tags are still searchable; there is no mechanism available for controlling access to the tags from a reading and viewing perspective. If you choose to display the tags widget but you disable tagging for a particular user type, it might cause confusion for users when search results include tags for a particular profile, but the tags do not display in that profile.
To enable or disable tagging by profile type...
- Start the wsadmin client from the following directory of the deployment manager:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- Start the Profiles Jython script interpreter.
- Access the Profiles configuration files:
execfile("profilesAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Use the following command to check out the profiles-policy.xml file:
ProfilesConfigService.checkOutPolicyConfig("<working_directory>", "cell_name")
...where...
- working_directory is the temporary working directory to which the configuration XML and XSD files will be copied.
The files are kept in this working directory while you make changes to them.
- cell_name is the name of the IBM WAS cell hosting the Profiles application. This argument is required.
For example:
ProfilesConfigService.checkOutPolicyConfig("/wsadminoutput", "jdoe30Node02Cell")
- Open the profiles-policy.xml file using a text editor, from the temporary directory to which you checked it out.
- Edit the following properties for the tagging feature as needed.
- profile.tag
Controls user access to add tags to their profile.
Access levels for this property can be defined using one of the following scopes:
- none. No one can tag the profile of users with the specified profile type.
- self. Users with the specified profile type can tag their own profiles. Administrators can also tag the profiles of users with the specified profile type.
- colleagues_not_self. Only people who belong to the network of the user with the specified profile type, and who have the person role, can tag the user's profile. Users with the specified profile type cannot tag their own profiles.
If resourceOwner is specified on the access check, the resource owner constraint must also be met, unless the user has the self role.
- colleagues_and_self. People who belong to the network of the user with the specified profile type, and who have the person role, can tag the user's profile. Users with the specified profile type can tag their own profiles.
If resourceOwner is specified on the access check, the resource owner constraint must also be met, unless the user has the self role.
- person_not_self. Users with the person J2EE role can tag the profile of users with the specified profile type. Users with the specified profile type cannot tag their own profiles.
If resourceOwner is specified on the access check, the resource owner constraint must also be met, unless the user has the self role.
- person_and_self. Users with the person J2EE role can tag the profile of users with the specified profile type. Users with the specified profile type can also tag their own profiles.
If resourceOwner is specified on the access check, the resource owner constraint must also be met, unless the user has the self role.
For example:
<feature name="profile.tag"> <profileType type="default" enabled="true"> <acl name="profile.tag.add" scope="person_and_self" /> </profileType> <profileType type="contractor" enabled="true"> <acl name="profile.tag.add" scope="colleagues_and_self" /> </profileType> <profileType type="visitor" enabled="false"> <acl name="profile.tag.add" scope="none" /> </profileType> </feature>This code sample enables tagging for users with the default profile type. Users with the person J2EE role can tag users of the default profile type, and default users can tag their own profiles. Tagging is also enabled for users with the contractor profile type. People in the profile owner's network who have the person role can add tags to profiles of the contractor type, and contractor users can tag their own profiles. Tagging is disabled for users with the visitor profile type.
- Save your changes and check the profiles-policy.xml file back in
ProfilesConfigService.checkInPolicyConfig()
- To exit the wsadmin client, type exit at the prompt.
- Stop and restart the Profiles server.
Configure widgets in Profiles
To configure existing widgets or to make custom widgets available for use in Profiles, you modify settings in widgets-config.xml.
Check out widgets-config.xml for Profiles
The widgets-config.xml file contains configuration settings for each of the widgets supported by Profiles. To update settings in the file, check the file out and, after making changes, check the file back in during the same wsadmin session as the check-out; otherwise the changes do not take effect.
To edit configuration files, use the wsadmin client.
The widgets-config.xml file defines the widgets available for use in Profiles and Communities. You can edit configuration settings in this file to perform various tasks.
For example, if you want to make custom widgets available, you define the widgets in this file. You also edit settings in this file if you want to configure the Recent Posts widget to only display tabs for the applications included in your deployment.
For more information, see Configure the Recent Posts widget.
To configure settings in widgets-config.xml for Profiles....
- Start the wsadmin client from the following directory of the deployment manager:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- Start the Profiles Jython script interpreter.
- Access the Profiles configuration files:
execfile("profilesAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Use the following command to check out the widget configuration file:
ProfilesConfigService.checkOutWidgetConfig("<working_directory>", "<cell_name>")
...where...
- working_directory is the temporary working directory to which the configuration XML and XSD files will be copied.
The files are kept in this working directory while you make changes to them.
- cell_name is the name of the WAS cell hosting the Profiles application. This argument is required.
For example:
ProfilesConfigService.checkOutWidgetConfig("/wsadminoutput", "jdoe30Node02Cell")
- Navigate to the temporary directory in which you saved widgets-config.xml, and then open the file in a text editor and make the required changes.
- Save your changes and check widgets-config.xml back in
ProfilesConfigService.checkInWidgetConfig()
- To exit the wsadmin client, type exit at the prompt.
- Stop and restart the Profiles server.
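As with profiles-policy.xml, the check-out and check-in of widgets-config.xml must happen in the same wsadmin session. A condensed sketch, with the working directory and cell name as placeholders:
wsadmin> execfile("profilesAdmin.py")
wsadmin> ProfilesConfigService.checkOutWidgetConfig("/wsadminoutput", "jdoe30Node02Cell")
# ...edit /wsadminoutput/widgets-config.xml in a text editor, then continue in the same session...
wsadmin> ProfilesConfigService.checkInWidgetConfig()
wsadmin> exit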
Manage widgets in Profiles
Configure settings in the widget definition file, widgets-config.xml, when you want to modify the widgets that display in the Profiles application.
To edit configuration files, use the wsadmin client.
The widgets-config.xml file contains information about widget definitions, widget attributes, widget location, default widget templates, and page definitions. Each widget has a corresponding <widgetDef> element that contains the attributes for the widget. When you want to edit the widget, enable or disable it, or move it to a different location, you need to update the corresponding <widgetDef> element in widgets-config.xml.
The widgets-config.xml file is stored in the following location:
WAS_HOME\profiles\AppSrv01\config\cells\CELL_NAME\LotusConnections-config\widgets-config.xml
To edit the widgets that display in Profiles, complete the following steps.
- Start the wsadmin client from the following directory of the deployment manager:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- Start the Profiles Jython script interpreter.
- Access the Profiles configuration files:
execfile("profilesAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Use the following command to check out the widget configuration file:
ProfilesConfigService.checkOutWidgetConfig("<working_directory>", "<cell_name>")
...where...
- working_directory is the temporary working directory to which the configuration XML and XSD files will be copied.
The files are kept in this working directory while you make changes to them.
- cell_name is the name of the WAS cell hosting the Profiles application. This argument is required.
For example:
ProfilesConfigService.checkOutWidgetConfig("/wsadminoutput", "jdoe30Node02Cell")
- Navigate to the temporary directory in which you saved the widgets-config.xml file, and then open the file in a text editor.
- Do one of the following:
- To edit a widget's properties, look for the corresponding <widgetDef> element, and then modify the widget attributes and parameters as needed.
For information about the attributes and parameters, see Profiles widget attributes.
- To associate a widget with a different profile type, look for the relevant <widgetDef> element, and then modify the value of the resourceSubType attribute.
For information about the resourceSubType attribute, see Profiles widget attributes.
- To move a widget to a different location, look for the relevant <widgetDef> element, and then modify the value of the pageId and uiLocation attributes.
For information about these attributes, see Profiles widget attributes.
- To disable a widget, look for the relevant <widgetDef> element and either delete the element or comment it out of the code.
- Save your changes and check widgets-config.xml back in
ProfilesConfigService.checkInWidgetConfig()
- To exit the wsadmin client, type exit at the prompt.
- Stop and restart the Profiles server.
Example
<layout resourceSubType="default"> <page pageId="searchView"> <widgetInstance uiLocation="col1" defIdRef="commonTags"/> <widgetInstance uiLocation="col3" defIdRef="sand_DYK"/> <widgetInstance uiLocation="col3" defIdRef="sand_recomItems"/> </page > <page pageId="profilesView"> <widgetInstance uiLocation="col1" defIdRef="socialTags"/> <widgetInstance uiLocation="col1" defIdRef="sand_thingsInCommon"/> <widgetInstance uiLocation="col2" defIdRef="multiWidget"/> <widgetInstance uiLocation="multiWidget" defIdRef="board"/> <widgetInstance uiLocation="multiWidget" defIdRef="contactInfo"/> <widgetInstance uiLocation="multiWidget" defIdRef="backgroundInfo"/> <widgetInstance uiLocation="multiWidget" defIdRef="multiFeedReader"/> <widgetInstance uiLocation="col3" defIdRef="sand_socialPath"/> <widgetInstance uiLocation="col3" defIdRef="reportStructure"/> <widgetInstance uiLocation="col3" defIdRef="friends"/> <widgetInstance uiLocation="col3" defIdRef="linkRoll"/> </page > <page pageId="searchView"> <widgetInstance uiLocation="col1" defIdRef="commonTags"/> </page > <page pageId="networkView"> <widgetInstance uiLocation="col1" defIdRef="sand_DYK"/> </page > <page pageId="editProfileView"> <widgetInstance uiLocation="col1" defIdRef="socialTags"/> <widgetInstance uiLocation="col1" defIdRef="sand_thingsInCommon"/> <widgetInstance uiLocation="col3" defIdRef="sand_socialPath"/> <widgetInstance uiLocation="col3" defIdRef="reportStructure"/> <widgetInstance uiLocation="col3" defIdRef="friends"/> <widgetInstance uiLocation="col3" defIdRef="linkRoll"/> </page > </layout>
Profiles widgets
The following table lists the widgets that are available for the Profiles application. You can edit, enable or disable, and change the location of the following Profiles widgets by configuring settings in widgets-config.xml.
For more information, see Managing widgets in Profiles.
Table 26. Profiles widgets
- Organization Tags widget (commonTags). Displays the tags for the entire organization. This widget is visible on the Directory page.
- Do You Know widget (sand_DYK). Recommends people for users to add to their network. This widget is visible on the Directory page.
- Tags widget (socialTags). Displays the tags associated with a specific profile.
- Things in Common widget (sand_thingsInCommon). Displays a list of the things that a user has in common with another user.
- Tabs widget (multiWidget). Displays the tabs in the center section of the profile view page. By default, this tabbed section displays the Recent Updates, Contact Information, and Background widgets. You can include additional widgets as part of the Tabs widget if required.
- Recent Updates widget (board). Displays the latest updates to the profile owner's status message, and messages or comments that other users have posted. Also displays status messages for actions and posts of users that the profile owner follows and from content the profile owner is a member of, such as communities, blogs, and wikis.
- Contact Information widget (contactInfo). Displays the profile owner's contact information, such as name, job title, company, location, telephone numbers, and email addresses.
- Background widget (backgroundInfo). Displays the profile owner's background information, such as work experience, technical skills, languages spoken, interests, past work experience, and education.
- Who Connects Us widget (sand_socialPath). Displays the social path that links two users in Connections.
- Report-to Chain widget (reportStructure). Displays the profile owner's position in the organization.
- Network widget (friends). Displays a selection of the people in the profile owner's network.
- My Links widget (linkRoll). Displays a list of external links that the profile owner has included as part of their profile.
Configure the Status Updates widget
Configure the Status Updates widget to display multiple feeds for a user profile. The widget can be extended to display additional feeds from Connections applications and external services as required.
To edit configuration files, use the wsadmin client.
The Status Updates widget provides an aggregated summary of a user's recent activity in the different Connections applications. It is visible from a user’s Home page. The widget also displays the latest updates from content that the profile owner is a member of, such as communities and wikis.
The Status Updates widget is automatically configured to provide feeds from all the Connections applications, but you can configure it to display information for only those applications that are included in your deployment.
To configure the Status Updates widget....
- Start the wsadmin client from the following directory of the deployment manager:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- Start the Profiles Jython script interpreter.
- Access the Profiles configuration files:
execfile("profilesAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Use the following command to check out the widget configuration file:
ProfilesConfigService.checkOutWidgetConfig("<working_directory>", "<cell_name>")
...where...
- working_directory is the temporary working directory to which the configuration XML and XSD files will be copied.
The files are kept in this working directory while you make changes to them.
- cell_name is the name of the WAS cell hosting the Profiles application. This argument is required.
For example:
ProfilesConfigService.checkOutWidgetConfig("/wsadminoutput", "jdoe30Node02Cell")
- Open widgets-config.xml in a text editor, and specify the widget attributes using the information in the following tables. You can find the configuration section for this component under config > widgets > definitions > widgetDef > defId = multiFeedReader > configData.
Table 27. Status updates widget attributes
- serviceNameResourceId. The resource string that specifies the name of the given feed that is displayed in the tab.
- serviceNameFeedUrl. The feed URL for the specified Connections application. A standard URL can be used, or a serviceNameSvcRef parameter can be used if the serviceName has been defined in the LotusConnections-config.xml file.
Specify the following URL parameters:
Table 28. Status updates widget URL parameters
- email. A substitution variable for the user email displayed. This is used as a placeholder in the URL; it is replaced at runtime.
- serviceNameSvcRef. A substitution variable for the URL value that is replaced at runtime. This parameter is retrieved from the LotusConnections-config.xml file for the given Connections application.
For example:
<widgetDef defId="multiFeedReader" url="{contextRoot}/widget-catalog/multifeedreader.xml?version={version}"> <itemSet> <item name="numberOfEntriesToDisplay" value="5" /> <item name="communityResourceId" value="communityResourceId"/> <item name="communityFeedUrl" value="{communitiesSvcRef}/service/atom/communities/all?userid={userid}&ps=5"/> <item name="dogearResourceId" value="dogearResourceId"/> <item name="dogearFeedUrl" value="{dogearSvcRef}/atom?userid={userid}&access=any&sort=date&sortOrder=desc&ps=5&showFavIcon=true{appLangParam}"/> <item name="blogsResourceId" value="blogsResourceId"/> <item name="blogsFeedUrl" value="{blogsSvcRef}/roller-ui/feed/{userid}?order=asc&maxresults=5&sortby=0"/> <item name="activitiesResourceId" value="activitiesResourceId"/> <item name="activitiesFeedUrl" value="{activitiesSvcRef}/service/atom2/activities?public=only&userid={userid}&authenticate=no&ps=5"/> <item name="filesResourceId" value="filesResourceId"/> <item name="filesFeedUrl" value="{filesSvcRef}/basic/anonymous/api/userlibrary/{userid}/feed?pagesize=5"/> </itemSet> </widgetDef>- To remove an application feed, comment out or delete the <serviceNameResourceId> and <serviceFeedUrl> attributes.
To comment out the attributes, use the <!-- XML notation to open the comment and --> to close the comment.
In the following example, feeds from the Activities and Files applications are removed from the widget:
<!-- <item name="activitiesResourceId" value="activitiesResourceId"/> <item name="activitiesFeedUrl" value="{activitiesSvcRef}/service/atom2/activities?public=only&userid={userid}&authenticate=no&ps=5"/> <item name="filesResourceId" value="filesResourceId"/> <item name="filesFeedUrl" value="{filesSvcRef}/basic/anonymous/api/userlibrary/{userid}/feed?pagesize=5"/> --> </itemSet> </widgetDef>
- Save your changes and check widgets-config.xml back in
ProfilesConfigService.checkInWidgetConfig()
- To exit the wsadmin client, type exit at the prompt.
- Stop and restart the Profiles server.
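To add a feed rather than remove one, you can follow the same itemSet pattern. The following sketch adds a hypothetical Wikis feed: the item names follow the documented serviceNameResourceId and serviceNameFeedUrl convention, but the feed path is only a placeholder that you must replace with the actual Wikis feed URL for your deployment:
<item name="wikisResourceId" value="wikisResourceId"/>
<!-- REPLACE_WITH_WIKIS_FEED_PATH is a placeholder, not a documented path -->
<item name="wikisFeedUrl" value="{wikisSvcRef}/REPLACE_WITH_WIKIS_FEED_PATH"/>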
Add custom widgets to Profiles
Extend the functionality of the Profiles application by adding custom widgets.
You can use custom widgets to bring additional functionality to Profiles. Custom widgets must use the iWidget specification, which uses technology based on JavaScript, XML, HTML, and CSS. The widget files are stored on an HTTP server.
The widgets can be bundled as EAR applications and deployed on IBM WebSphere Application Server. They can also be hosted in LAMP, .NET, and other environments.
You need to register the widgets developed by iWidget developers to make them available for use in Connections. You do this by configuring the widget attributes defined by the iWidget developer in widgets-config.xml.
Profiles supports the use of mandated widgets only. Mandated widgets can be placed in any of the columns on the My Profile, Directory, and My Network pages. This type of widget exists in every profile and cannot be removed or hidden. Mandated widgets can also exist outside a profile, for example, they can show up in a search results page.
Enable custom widgets for Profiles
To make custom widgets available for use in Profiles, you need to configure the widgets in the widget definition file, widgets-config.xml.
To edit configuration files, use the wsadmin client.
The widgets-config.xml file contains information about widget definitions, widget attributes, widget location, default widget templates, and page definitions. Custom widget attributes are defined by the widget developer but, as administrator, you need to configure the widgets by adding a <widgetDef> element containing the appropriate attributes for each widget in the widget configuration file. The file is stored in the following location:
WAS_HOME\profiles\AppSrv01\config\cells\CELL_NAME\LotusConnections-config\widgets-config.xml
You can integrate a custom widget as part of Connections and you can also integrate the widget as an external application. To integrate the widget inside the Connections application, your widget must provide a full page mode. To integrate the widget as an external application, you must use a navBarLink attribute to register a navigation link along with your widget configuration information.
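For instance, a widget that is registered as an external application link in the navigation bar carries a navBarLink attribute on its <widgetDef> element. The following is a sketch only; the widget name, descriptor path, and external URL are hypothetical placeholders:
<!-- sketch: widget name, descriptor path, and external URL are placeholders -->
<widgetDef defId="sampleExternalApp" primaryWidget="false" modes="fullpage"
    url="{contextRoot}/customWidgets/sampleExternalApp.xml?version={version}"
    navBarLink="https://apps.example.com/sampleExternalApp?userid={userid}"/>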
To configure a custom widget for Profiles....
- Start the wsadmin client from the following directory of the deployment manager:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- Start the Profiles Jython script interpreter.
- Access the Profiles configuration files:
execfile("profilesAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Use the following command to check out the widget configuration file:
ProfilesConfigService.checkOutWidgetConfig("<working_directory>", "<cell_name>")
...where...
- working_directory is the temporary working directory to which the configuration XML and XSD files will be copied.
The files are kept in this working directory while you make changes to them.
- cell_name is the name of the WAS cell hosting the Profiles application. This argument is required.
For example:
ProfilesConfigService.checkOutWidgetConfig("/wsadminoutput", "jdoe30Node02Cell")
- Navigate to the temporary directory in which you saved the widgets-config.xml file, and then open the file in a text editor.
- Define the custom widget by specifying a resource type of profile and adding a <widgetDef> element using the attributes and parameters defined in Profiles widget attributes. When adding custom strings using the resource bundle loader, the values of the category, description, and widgetDef attributes in the <widgetDef> element are used as the resource keys for your custom bundle.
For more information about adding widget strings, see Adding custom strings for widgets and other specified scenarios.
For example:
<config id="widgets"> <resource type="profile"> <widgets> <definitions> <widgetDef defId="HelloWorld" primaryWidget="false" modes="view fullpage edit search" url="{contextRoot}/comm.widgets/helloWorld/HelloWorld.xml?version={version}"/> <!-- XML attribute with substitution variables --> </definitions> <layout resourceSubType="default"> <page pageId="profilesView"> <widgetInstance uiLocation="col3" defIdRef="reportStructure"/> <widgetInstance uiLocation="col3" defIdRef="friends"/> <widgetInstance uiLocation="col1" defIdRef="socialTags"/> <widgetInstance uiLocation="col3" defIdRef="linkRoll"/> <widgetInstance uiLocation="col2" defIdRef="multiFeedReader"/> </page> <page pageId="searchResultView"> <widgetInstance uiLocation="col1" defIdRef="commonTags"/> </page> <page pageId="searchView"> <widgetInstance uiLocation="col1" defIdRef="commonTags"/> </page> </layout> </widgets> </resource> ..... </config>The url, navBarLink, and item or @value XML attributes can be parameterized using substitution variables.
For more information about the substitution variables used, see Profiles widget configuration variables.
- Save your changes and check widgets-config.xml back in
ProfilesConfigService.checkInWidgetConfig()
- To exit the wsadmin client, type exit at the prompt.
- Stop and restart the Profiles server.
If you are adding widgets that are hosted on third-party servers, then you might need to update your proxy configuration.
For more information on configuring the Ajax proxy for Profiles, see Configure the AJAX proxy for a specific application.
Profiles widget configuration variables
The following table lists the configuration variables that can be used for the url, navBarLink, helpLink, and item or @value attributes when configuring a third-party widget for integration with Profiles. The @value attribute refers to the value attribute in the item of the itemSet configuration elements in widgets-config.xml.
Table 29. Profiles widget configuration variables
- resourceId. The profileUuid of the profile in use.
- lastMod. The timestamp for the last time that the profile was updated.
- userid. The Connections user ID of the logged-in user. This variable is returned as undefined if the user is not logged in.
- email. The email address of the logged-in user. This variable is returned as undefined if the user is not logged in.
- version. The versionStamp timestamp in LotusConnections-config.xml. This is updated on customizations and upgrades to ensure that static content URLs are updated.
- lang. The language parameter.
- webresourcesSvcRef. The service reference for the Common application, as defined in LotusConnections-config.xml. For example: http://myserver.com/connections/resources.
- contextRoot. The context root of the Profiles application. For example: /profiles.
- communitiesSvcRef. The service reference for the Communities application, as defined in LotusConnections-config.xml. For example: http://myserver.com/communities.
- profilesSvcRef. The service reference for the Profiles application, as defined in LotusConnections-config.xml. For example: http://myserver.com/profiles.
- dogearSvcRef. The service reference for the Bookmarks application, as defined in LotusConnections-config.xml. For example: http://myserver.com/dogear.
- blogsSvcRef. The service reference for the Blogs application, as defined in LotusConnections-config.xml. For example: http://myserver.com/blogs.
- activitiesSvcRef. The service reference for the Activities application, as defined in LotusConnections-config.xml. For example: http://myserver.com/activities.
- forumsSvcRef. The service reference for the Forums application, as defined in LotusConnections-config.xml. For example: http://myserver.com/forums.
- filesSvcRef. The service reference for the Files application, as defined in LotusConnections-config.xml. For example: http://myserver.com/files.
- wikisSvcRef. The service reference for the Wikis application, as defined in LotusConnections-config.xml. For example: http://myserver.com/wikis.
- opensocialSvcRef. The service reference for the Status Updates application, as defined in LotusConnections-config.xml. For example: http://myserver.com/opensocial.
Profiles widget attributes
The following tables list the widget elements that you can configure when enabling, disabling, editing, or moving widgets in the Profiles application. These elements are configured in widgets-config.xml.
Table 30. The widget definition element
Each attribute is listed with an indication of whether it is required:
- defId (required). The widget name, which must be unique. The defId attribute is also used as a title or a resource bundle key. This attribute takes a string value.
- primaryWidget (optional). Specifies that the widget displays in the center column of the page. The default value is true.
- description (optional). Description of the widget that displays in the widget palette. This attribute uses the custom string framework. For more information about adding widget strings, see Adding custom strings for widgets and other specified scenarios.
- category (optional). The category in which the widget is placed in the widget palette. This attribute uses the custom string framework. For more information about adding widget strings, see Adding custom strings for widgets and other specified scenarios.
- requires (optional). Specifies which Connections applications are required for the widget to function. The XML attribute values must match the serviceReference values in LotusConnections-config.xml.
- url (required). Location of the widget descriptor. This XML attribute can be parameterized with substitution variables. This attribute takes a string value.
- modes (optional). Specifies the modes that are supported by the custom widget. Possible modes include:
  - view. This mode enables the widget to display on the profile overview page.
  - search. This mode integrates the widget into a community's search results page. Each widget displays as a separate tab on the page.
  - fullpage. This mode integrates the widget into the navigation bar. When users click the widget link in the navigation bar, the widget displays in a full page view in the community.
  - edit. This mode enables the Edit menu option in the widget's action menu, allowing community owners to edit the preferences of the widget inline, directly from the community's Overview page. The widget is also integrated into the Edit Community page as a separate tab.
- uniqueInstance (optional). Specifies whether the widget supports multiple instances on the same page. The default value is true.
- resourceOwnerWidget (optional). Specifies whether the widget should be seen only by the profile owner. Set this to true or false as required.
- navBarLink (optional). Specifies the URL to an external application. A link to this URL is added to the navigation bar when the widget is part of the community. The URL can contain substitution variables.
- helpLink (optional). Specifies the URL to an HTML file containing help documentation for the widget. The help opens in a pop-up window. This parameter can be parameterized with substitution variables.
- showInPalette (optional). Specifies if the widget should be displayed in the content palette.
- loginRequired (optional). Specifies that the widget displays only when users are logged in.
- bundleRefId (optional). The resource bundle reference ID that is defined in LotusConnections-config.xml. This ID is used to determine the bundle strings for the widget category, widget description, and widget title. For more information about adding widget strings, see Adding custom strings for widgets and other specified scenarios.
The url, navBarLink, and item or @value XML attributes can be parameterized using substitution variables.
For more information about the substitution variables used, see Profiles widget configuration variables.
Table 31. The layout element
Attribute Description resourceSubType Contains the name of the profile type used to render the widget layout. Profiles allows layout configurations for multiple profile types. For more information, see Adding profile types. This attribute takes a string value.
Table 32. The page element
Attribute Description pageId Contains the page ID for the page that Profiles uses to render the widget layout. This attribute takes a string value. Possible values include:
- profilesView
- searchView
- searchResultView
- networkView
- editProfileView
Table 33. The widget instance element
Attribute Description uiLocation Specifies which column on the page contains the widget. This attribute takes a string value. Possible values include:
- col1
- col2
- col3. Note that this option is not available for the networkView page.
defIdRef Defines the widget definition to which the instance is bound. This attribute takes a string value.
Configure Profiles events
Use configuration settings to control how the events generated by Profiles are handled in your deployment for auditing purposes.
By default, all Tivoli Directory Integrator-related events are ignored for auditing purposes. For a list of the Tivoli Directory Integrator events that are logged, see Tivoli Directory Integrator events.
You can modify configuration settings in the tdi-profiles-config.xml and profiles-config.xml files to specify whether Tivoli Directory Integrator events are stored in the Profiles database and whether they are made available to the News application or third-party audit integration tools.
For example, you might want to continue storing Tivoli Directory Integrator events in the Profiles database but you might not want to publish the events to the event infrastructure.
You can configure settings to control whether regular, end-user events are stored in the Profiles database or published to the event infrastructure.
To configure Profiles events....
- To specify whether to store Tivoli Directory Integrator events in the Profiles database, you need to manually update settings in the tdi-profiles-config.xml file.
- Using a text editor, open the tdi-profiles-config.xml file.
After the Tivoli Directory Integrator Solution files are extracted, the file is located in the following directory:
TDI/conf/LotusConnections-config
- In the <properties> section of the file, update the value of the profiles.events.system.ignore property.
<property name="profiles.events.system.ignore" value="false" />
By default, the property is set to true so that Tivoli Directory Integrator events are not stored in the Profiles database.
- To configure event settings for the Profiles Web application, you must manually update settings in the profiles-config.xml file.
To edit the profiles-config.xml file, use the wsadmin client.
- Check out the profiles-config.xml file by completing steps 1 and 2 of the topic, Changing Profiles configuration property values.
- Using a text editor, open the profiles-config.xml file from the temporary directory to which you checked it out, and then perform the following steps:
- To configure Tivoli Directory Integrator and system events, update the following properties in the <properties> section of the profiles-config.xml file:
- profiles.events.system.publish
Specifies whether to publish Tivoli Directory Integrator events to the Connections event infrastructure. By default, this property is set to true.
For example, if you do not want to publish Tivoli Directory Integrator events to the event infrastructure, set the value of the property to false...
<property name="profiles.events.system.publish" value="false" />
Tivoli Directory Integrator events must be stored in the Profiles database if you want them to be published to the event infrastructure. Therefore, if profiles.events.system.ignore in the tdi-profiles-config.xml file is set to true, the profiles.events.system.publish property has no effect, that is, system events and Tivoli Directory Integrator events will not be published to the event infrastructure.
- profiles.events.ignore
Ignores all the events generated by Profiles. This property is set to false by default.
This property is not explicitly listed in the profiles-config.xml file when you install Connections. To change the default setting so that the events generated by the Profiles application are ignored, you must manually add the property to the <properties> section of the profiles-config.xml file and set its value to true...
<property name="profiles.events.ignore" value="true" />
If you installed the News application and are using it in your deployment, do not set the value of the profiles.events.ignore property to true. If you set it to true, the News application will not receive any events from the Profiles application.
- To configure end-user events, update the following properties in the <properties> section of the profiles-config.xml file:
- profiles.events.user.store
Specifies whether to store regular, end-user create, update, and delete events in the Profiles database. By default, this property is set to true.
This property is not explicitly listed in the profiles-config.xml file when you install Connections. To change the default setting so that end-user create, update, and delete events are not stored in the Profiles database, you must manually add the property to the <properties> section of the profiles-config.xml file and set its value to false...
<property name="profiles.events.user.store" value="false" />
By default, IBM Tivoli Directory Integrator-related events are not stored in the Profiles database.
- profiles.events.user.publish
Specifies whether to publish regular, end-user create, update, and delete events to the event infrastructure. By default, this property is set to true.
This property is not explicitly listed in the profiles-config.xml file when you install Connections. To change the default setting so that create, update, and delete events are not published to the event infrastructure, you must manually add the property to the <properties> section of the profiles-config.xml file and set its value to false...
<property name="profiles.events.user.publish" value="false" />
If you installed the News application and are using it in your deployment, do not set the value of the profiles.events.user.publish property to false. If you set it to false, the News application will not receive any events from the Profiles application.
- Save your changes and then close the profiles-config.xml file.
- After making changes, check the profiles-config.xml file back in; you must do this in the same wsadmin session in which you checked it out for the changes to take effect.
See Applying property changes in Profiles for information about how to apply your changes.
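For reference, the following sketch shows one complete wsadmin pass through this procedure; the working directory and cell name are example values, and the check-in command is the one described in Applying property changes in Profiles.
execfile("profilesAdmin.py")
ProfilesConfigService.checkOutConfig("/opt/prof/temp", "Cell01")
# Edit profiles-config.xml in /opt/prof/temp with a text editor, for example to add:
# <property name="profiles.events.user.publish" value="false" />
ProfilesConfigService.checkInConfig()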
Tivoli Directory Integrator events
By default, all IBM Tivoli Directory Integrator-related events are stored in the Profiles database and are published to the event infrastructure in Connections. The following Tivoli Directory Integrator events are logged by default:
Table 34. Tivoli Directory Integrator events
Event name Description
profiles.person.updated Generated when a user's profile is modified.
profiles.person.added Generated when a new user record is added to Profiles. There is no user interface for this task. This action can only be performed using the API or Tivoli Directory Integrator.
profiles.person.deleted Generated when a user record is deleted from the Profiles database.
profiles.code.created Generated when new profile code is created using a Tivoli Directory Integrator script. The code might be for department, work location, or organization.
profiles.code.updated Generated when profile code is updated using a Tivoli Directory Integrator script. The code might be for department, work location, or organization.
profiles.code.deleted Generated when profile code is deleted using a Tivoli Directory Integrator script. The code might be for department, work location, or organization.
Manage Profiles scheduled tasks
Use the ProfilesScheduledTaskService administrative commands to manage the tasks scheduled for Profiles.
To use administrative commands, use the wsadmin client.
See Start wsadmin for details.
Profiles uses the WAS scheduling service for performing regular managed tasks.
For more information about how the scheduler works, see Scheduling tasks.
Profiles has four managed tasks that are specified in the profiles-config.xml property file. You can use the ProfilesScheduledTaskService commands to pause and resume a Profiles task, and to retrieve information about a task.
The SystemOut.log file also contains information about whether the scheduler is running and whether any scheduled tasks have started.
To administer the tasks performed by WAS scheduler, complete the following steps.
app_server_root\profiles\Dmgr01\bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- Start the Profiles Jython script interpreter.
execfile("profilesAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Administer the Profiles scheduler service.
- ProfilesScheduledTaskService.pauseSchedulingTask(string taskName)
Suspends scheduling of a task. Has no effect on currently running tasks. Returns 1 to indicate that the task has been paused. Paused tasks remain paused until you explicitly resume them, even if the server is stopped and restarted.
Parameters: taskName
Task names have the following string values:
- DbCleanupTask
- ProcessLifeCycleEventsTask
- ProcessTDIEventsTask
- StatsCollectorTask
For example:
ProfilesScheduledTaskService.pauseSchedulingTask("StatsCollectorTask")
- ProfilesScheduledTaskService.resumeSchedulingTask(string taskName)
Resumes scheduling of a paused task. Returns 1 to indicate that the task has been resumed.
Parameters: taskName
Task names have the following string values:
- DbCleanupTask
- ProcessLifeCycleEventsTask
- ProcessTDIEventsTask
- StatsCollectorTask
For example:
ProfilesScheduledTaskService.resumeSchedulingTask("StatsCollectorTask")
- ProfilesScheduledTaskService.forceTaskExecution(string taskName, string executeSynchronously)
Executes a task immediately. Property settings in the profiles-config.xml file specify whether tasks are enabled to run automatically, and how often; this command allows you to run a task manually, for example if you disabled a task but want to run it occasionally. Returns 1 to indicate that the task has been run.
Parameters:
- taskName
Task names have the following string values:
- DbCleanupTask
- ProcessLifeCycleEventsTask
- ProcessTDIEventsTask
- StatsCollectorTask
- executeSynchronously
Takes the string values true or false. Specifying this value is not required; the default is false. If the value is false, the task executes asynchronously: if the task name is valid, the command returns immediately and execution continues in the background. If the value is true, the command does not return until the task completes. The StatsCollectorTask is a local task (run on each node) and is always run asynchronously when triggered from the admin console.
For example:
ProfilesScheduledTaskService.forceTaskExecution("StatsCollectorTask")
- ProfilesScheduledTaskService.getTaskDetails(string taskName)
Displays status of a task and details about configuration parameters.
Parameters: taskName
Task names have the following string values:
- DbCleanupTask
- ProcessLifeCycleEventsTask
- ProcessTDIEventsTask
- StatsCollectorTask
For example:
ProfilesScheduledTaskService.getTaskDetails("StatsCollectorTask")
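As a usage illustration combining only the commands documented above, the following sketch pauses scheduling of the database clean-up task, forces a synchronous run, checks its status, and then resumes scheduling.
ProfilesScheduledTaskService.pauseSchedulingTask("DbCleanupTask")
ProfilesScheduledTaskService.forceTaskExecution("DbCleanupTask", "true")
ProfilesScheduledTaskService.getTaskDetails("DbCleanupTask")
ProfilesScheduledTaskService.resumeSchedulingTask("DbCleanupTask")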
Administer cache
You can modify settings in the profiles-config.xml file to configure the full report-to and object caches for Profiles. Use Profiles administrative commands when you want to enable, disable, or reload the full report-to chain cache.
Profiles can display organizational structure information using report-to cache settings. These settings determine how the cache used to store the full-report-to data is configured. For performance reasons, this cache is used to present report-chain information rather than accessing the corporate directory. If the cache is disabled, the reporting structure information is still available, but it displays more slowly.
Profiles uses the object cache to store auxiliary table information, including department, organization, work location, employee type, and country code display values. You can configure settings to specify when the cache is refreshed, and to define the refresh interval and start delay.
Controlling cache operations
Use Profiles administrative commands to control the operation of the full report-to chain cache without having to stop and start the Profiles server.
To run administrative commands, use the wsadmin client.
See Start wsadmin
Profiles uses an in-memory cache to support the organizational structure view available in every profile – the full report-to chain cache. You can use this procedure to change the behavior of the cache at runtime. However, the changes that you make using this procedure are not permanently stored. You must change the configuration settings in the profiles-config.xml file to change the behavior permanently because the changes you make here will be overwritten by the settings in the configuration file the next time you stop and restart the server.
See Configure the full reports-to cache for details. Use these steps only when you want to change the behavior of the organizational structure cache immediately, without stopping and restarting the server.
If you use the administrative commands to disable the cache, the reporting structure information is still available, but it displays more slowly.
To control cache operations for Profiles....
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- Start the Profiles Jython script interpreter.
- Access the Profiles configuration files:
execfile("profilesAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Use the following commands to control cache operations.
- ProfilesService.enableFullReportsToCache(startDelay, interval, schedTime)
Enable the full report-to chain cache with the specified start delay in minutes, refresh interval in minutes, and scheduled refresh time in HH:MM format.
This cache is used to populate the full report-to chain view available in a user's profile. The cache contains the specified number of top employees in the organizational pyramid; it is not intended to store an entry for each profile. It stores the profiles of those people at the top of the chain who are included in many full report-to chain views.
For example:
ProfilesService.enableFullReportsToCache(5, 15, "23:00")
- ProfilesService.disableFullReportsToCache()
Disable the full report-to chain cache capability. This command does not take any arguments.
- ProfilesService.reloadFullReportsToCache()
Force a reload of the full report-to chain cache from the Profiles database. This command does not take any arguments.
If the full report-to cache is disabled, it cannot be reloaded. This command fails when the cache is disabled.
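For example, the following sketch temporarily disables the cache and later re-enables it with a 5-minute start delay, a 60-minute refresh interval, and a 23:00 scheduled refresh; the timing values are illustrative, and the final reload assumes you do not want to wait for the start delay.
ProfilesService.disableFullReportsToCache()
ProfilesService.enableFullReportsToCache(5, 60, "23:00")
ProfilesService.reloadFullReportsToCache()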
Configure the full reports-to cache
The full reports-to cache is one of the two in-memory caches used by Profiles to support the organizational structure views.
To edit configuration files, use wsadmin client.
The other in-memory cache is the object cache, which caches auxiliary table information, including department, organization, work location, employee type, and country code display values.
The full reports-to cache is used to populate the full reports-to chain view available in a profile. The cache contains the specified number of top employees in the organizational pyramid. It is not intended to store an entry for each profile; it stores the profiles of those people at the top of the chain who are included in many full report-to chain views.
When you use this procedure to change the default behavior of the cache, the change is permanent but requires a server restart. To change the cache behavior in a production environment without disrupting service, see Controlling cache operations. However, any changes that you make to the runtime cache are overwritten by the settings in the configuration file when the server next restarts.
To manage the display of report-chain information using the full reports-to cache....
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- Start the Profiles Jython script interpreter.
- Access the Profiles configuration files:
execfile("profilesAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Check out the Profiles configuration files:
ProfilesConfigService.checkOutConfig("working_directory", "cell_name") where:
- working_directory is the temporary working directory to which the configuration XML and XSD files are copied and are stored while you make changes to them. Use forward slashes (/) to separate directories in the file path, even if you are using the Microsoft Windows operating system.
AIX, Linux, and IBM i only: The directory must grant write permissions or the command does not complete successfully.
- cell_name is the name of the WAS cell hosting the Profiles application. This argument is required and is case-sensitive. To determine the cell name, run the following wsadmin command:
print AdminControl.getCell()
For example:
- AIX or Linux: ProfilesConfigService.checkOutConfig("/opt/prof/temp","Cell01")
- Windows: ProfilesConfigService.checkOutConfig("c:/prof/temp","Cell01")
- IBM i: ProfilesConfigService.checkOutConfig("/temp","Cell01")
- To configure the full reports-to cache:
ProfilesConfigService.updateConfig(property, value) where
- property is one of the editable Profiles configuration properties.
- value is the new value with which you want to set that property.
The following table displays information regarding the properties that you can configure for the full reports-to chain cache, and the type of data that you can enter for them.
Table 35. Full reports-to chain cache properties
Option Description fullReportsToChainCache.enabled Enables or disables the full reports-to cache. This property takes a Boolean value: true or false. The value must be formatted in lowercase.
fullReportsToChainCache.ceouid The corporate directory user ID of the person who will appear at the top of the organizational structure. This property takes a UID value.
fullReportsToChainCache.size The number of employee entries that should be loaded into the cache. This property takes an integer value.
fullReportsToChainCache.refreshTime Determines the time of day in 24-hour time format that Profiles performs the first scheduled reloading of the cache. The property value must be expressed in hours and minutes using this formatting: HH:MM.
fullReportsToChainCache.refreshInterval The time in minutes between cache reload operations. This property takes an integer value.
fullReportsToChainCache.startDelay The time in minutes that Profiles should wait after starting before loading the cache for the first time. This property takes an integer value.
For example, to disable the cache, enter:
ProfilesConfigService.updateConfig("fullReportsToChainCache.enabled","false")
- After making changes, check the configuration files back in; you must do this in the same wsadmin session in which you checked them out for the changes to take effect.
See Applying property changes in Profiles for information about how to save and apply your changes.
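Putting the steps together, a complete session for tuning the full reports-to cache might look like the following sketch; the path, cell name, and property values are examples only, and the check-in command is the one described in Applying property changes in Profiles.
execfile("profilesAdmin.py")
ProfilesConfigService.checkOutConfig("/opt/prof/temp", "Cell01")
ProfilesConfigService.updateConfig("fullReportsToChainCache.enabled", "true")
ProfilesConfigService.updateConfig("fullReportsToChainCache.size", "500")
ProfilesConfigService.updateConfig("fullReportsToChainCache.refreshTime", "23:00")
ProfilesConfigService.updateConfig("fullReportsToChainCache.refreshInterval", "1440")
ProfilesConfigService.checkInConfig()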
Configure the Profiles object cache
You can modify settings in the profiles-config.xml file to specify when the Profiles object cache is refreshed, and to define the refresh interval and start delay.
To edit configuration files, use wsadmin client.
The Profiles object cache is used to cache auxiliary table information, including department, organization, work location, employee type, and country code display values. As a result, there is a delay before changes to these types of data are reflected in the user interface.
By default, the data is refreshed every 15 minutes to ensure that, whenever data is updated, there is a relatively short delay from when the data is changed and when the changes are reflected in the user interface.
To configure the Profiles object cache....
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- Start the Profiles Jython script interpreter.
- Access the Profiles configuration files:
execfile("profilesAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Check out the Profiles configuration files:
ProfilesConfigService.checkOutConfig("working_directory", "cell_name") where:
- working_directory is the temporary working directory to which the configuration XML and XSD files are copied and are stored while you make changes to them. Use forward slashes (/) to separate directories in the file path, even if you are using the Microsoft Windows operating system.
AIX, Linux, and IBM i only: The directory must grant write permissions or the command does not complete successfully.
- cell_name is the name of the WAS cell hosting the Profiles application. This argument is required and is case-sensitive. To determine the cell name, run the following wsadmin command:
print AdminControl.getCell()
For example:
- AIX or Linux: ProfilesConfigService.checkOutConfig("/opt/prof/temp","Cell01")
- Windows: ProfilesConfigService.checkOutConfig("c:/prof/temp","Cell01")
- IBM i: ProfilesConfigService.checkOutConfig("/temp","Cell01")
- Open the profiles-config.xml file in a text editor.
- Look for the <profileObjectCache> element, and then modify the following lines of code as needed:
<profileObjectCache> <refreshTime>22:30</refreshTime> <!-- 24 hour time --> <refreshInterval>15</refreshInterval> <!-- minutes --> <startDelay>10</startDelay> <!-- minutes --> </profileObjectCache>...where...
- <refreshTime> is the scheduled refresh time in HH:MM format.
- <refreshInterval> is the refresh interval in minutes.
- <startDelay> is the specified start delay in minutes.
- Save your changes and close the configuration file.
- After making changes, check the configuration files back in; you must do this in the same wsadmin session in which you checked them out for the changes to take effect.
See Applying property changes in Profiles for information about how to save and apply your changes.
Manage the Profiles search operation
Use Profiles configuration settings to control how the search operation displays search results.
To edit configuration files, use wsadmin client.
To configure Profiles search....
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- Start the Profiles Jython script interpreter.
- Access the Profiles configuration files:
execfile("profilesAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Check out the Profiles configuration files:
ProfilesConfigService.checkOutConfig("working_directory", "cell_name") where:
- working_directory is the temporary working directory to which the configuration XML and XSD files are copied and are stored while you make changes to them. Use forward slashes (/) to separate directories in the file path, even if you are using the Microsoft Windows operating system.
AIX, Linux, and IBM i only: The directory must grant write permissions or the command does not complete successfully.
- cell_name is the name of the WAS cell hosting the Profiles application. This argument is required and is case-sensitive. To determine the cell name, run the following wsadmin command:
print AdminControl.getCell()
For example:
- AIX or Linux: ProfilesConfigService.checkOutConfig("/opt/prof/temp","Cell01")
- Windows: ProfilesConfigService.checkOutConfig("c:/prof/temp","Cell01")
- IBM i: ProfilesConfigService.checkOutConfig("/temp","Cell01")
- To configure the Profiles search operation:
ProfilesConfigService.updateConfig(property, value) where
- property is one of the editable Profiles configuration properties.
- value is the new value with which you want to set that property.
The following table displays information regarding the search properties that you can configure and the type of data that you can enter for them.
Table 36. Profiles search properties
Property Description search.maxRowsToReturn Determines the maximum number of Profiles database rows returned by a name search operation.
This property takes an integer value. The default value is 250. You can increase the number, but do not specify a number larger than 500. Doing so causes search operations to fail entirely. Do not specify 0 unless you want no results to be returned.
The keyword and directory search operations do not have this limit.
search.pageSize Determines the number of returned rows to place on a results page. This property takes an integer value. The default value is 10.
search.firstNameSearchEnabled Determines if search by first name only is enabled. By default, this setting is set to false.
This property takes a Boolean value.
Enabling this setting negatively impacts the performance of the Search by > Name function available in the Profiles user interface.
nameOrdering.enabled When this property is set to true, names must be entered as (FirstName LastName) or (LastName, FirstName). By default, it is set to false.
When only a single word is entered, that word is treated as the LastName value during search.
This property takes a Boolean value.
For example:
ProfilesConfigService.updateConfig("search.pageSize","20")
- To specify the default sorting key to use for displaying search results, edit properties in the profiles-config.xml file manually as follows.
- Open the profiles-config.xml file in a text editor.
- Update the following properties as needed.
- sortNameSearchResultsBy
- Determines what sorting key to use for Profiles name search results.
This property is also applied to the search results that display when a tag is clicked in a profile overview page. It does not affect the results generated by clicking a tag in a directory search.
The valid values for the default attribute are:
- displayName. Lists name search results in order of user display name.
- last_name. Lists name search results in order of user last name.
For example:
<sortNameSearchResultsBy default="displayName" />
- sortIndexSearchResultsBy
- Determines what sorting key to use for Profiles keyword and advanced search results.
The valid values for the default attribute are:
- relevance. Lists keyword and advanced search results in order of relevance.
- displayName. Lists keyword and advanced search results in order of user display name.
- last_name. Lists keyword and advanced search results in order of user last name.
For example:
<sortIndexSearchResultsBy default="relevance" />
- Save your changes and then close the profiles-config.xml file.
- After making changes, check the configuration files back in; you must do this in the same wsadmin session in which you checked them out for the changes to take effect.
See Applying property changes in Profiles for information about how to save and apply your changes.
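For example, a minimal sketch that raises the name search row limit and the results page size and then checks the change back in (the values are illustrative, and the check-in command is the one described in Applying property changes in Profiles):
ProfilesConfigService.updateConfig("search.maxRowsToReturn", "400")
ProfilesConfigService.updateConfig("search.pageSize", "20")
ProfilesConfigService.checkInConfig()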
Monitoring statistics and metrics for Profiles
Use the Profiles statistics capabilities to monitor operations and product usage.
Profiles enables you to monitor statistics in the following ways:
- Collect usage statistics and save them to a specified log file name and location.
- Write usage statistics to a file on demand.
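For the on-demand case, you can trigger the statistics collector task from the wsadmin client; a minimal sketch using the task name documented in Manage Profiles scheduled tasks:
execfile("profilesAdmin.py")
ProfilesScheduledTaskService.forceTaskExecution("StatsCollectorTask")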
Enable and disable automatic generation of Profiles statistics files
You can enable or disable automatic generation of Profiles statistics files. By default, automatic generation is enabled.
To edit configuration files, use wsadmin client.
Usage statistics are stored in memory and saved to the file system during Profiles shutdown. The statistics are saved to the location configured in the statistics.filePath and statistics.fileName properties.
The statistics logged depend on the Profiles capabilities that are enabled.
For example, if you disable the full report-to chain cache, you do not see statistics listed for that capability.
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- Start the Profiles Jython script interpreter.
- Access the Profiles configuration files:
execfile("profilesAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Check out the Profiles configuration files:
ProfilesConfigService.checkOutConfig("working_directory", "cell_name") where:
- working_directory is the temporary working directory to which the configuration XML and XSD files are copied and are stored while you make changes to them. Use forward slashes (/) to separate directories in the file path, even if you are using the Microsoft Windows operating system.
AIX, Linux, and IBM i only: The directory must grant write permissions or the command does not complete successfully.
- cell_name is the name of the WAS cell hosting the Profiles application. This argument is required and is case-sensitive. To determine the cell name, run the following wsadmin command:
print AdminControl.getCell()
For example:
- AIX or Linux: ProfilesConfigService.checkOutConfig("/opt/prof/temp","Cell01")
- Windows: ProfilesConfigService.checkOutConfig("c:/prof/temp","Cell01")
- IBM i: ProfilesConfigService.checkOutConfig("/temp","Cell01")
- In the working_directory that you specified when checking out the configuration files, locate and open the profiles-config.xml file using a text editor. Scroll to the StatsCollectorTask task in the <scheduledTasks> section.
- To disable auto generation of the statistics file, specify enabled="false" in the StatsCollectorTask statement.
- To enable auto generation of the statistics file, specify enabled="true" in the StatsCollectorTask statement.
Example:
<task name="StatsCollectorTask" interval="0 0 1 * * ?" enabled="false" type="internal" scope="local"> <args> <property name="filePath">${PROFILES_STATS_DIR}//LC_NODE_NAME//${WAS_SERVER_NAME}</property> <property name="fileName">profilesStats</property> </args> </task>
- After making changes, check the configuration files back in; you must do this in the same wsadmin session in which you checked them out for the changes to take effect.
See Applying property changes in Profiles for information about how to save and apply your changes.
Configure the vCard export application for Profiles
Configure settings in the profiles-config.xml file to specify the character set encoding options used to export vCards.
To edit configuration files, use wsadmin client.
Profiles users can export vCards from people's profiles and then import the profiles into their email client as contacts. You can configure the profiles-config.xml file to specify the encoding options that are available when exporting vCards from Profiles and determine which options are most appropriate for users.
To configure the vCard export application....
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- Start the Profiles Jython script interpreter.
- Access the Profiles configuration files:
execfile("profilesAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Check out the Profiles configuration files:
ProfilesConfigService.checkOutConfig("working_directory", "cell_name") where:
- working_directory is the temporary working directory to which the configuration XML and XSD files are copied and are stored while you make changes to them. Use forward slashes (/) to separate directories in the file path, even if you are using the Microsoft Windows operating system.
AIX, Linux, and IBM i only: The directory must grant write permissions or the command does not complete successfully.
- cell_name is the name of the WAS cell hosting the Profiles application. This argument is required and is case-sensitive. To determine the cell name, run the following wsadmin command:
print AdminControl.getCell()
For example:
- AIX or Linux: ProfilesConfigService.checkOutConfig("/opt/prof/temp","Cell01")
- Windows: ProfilesConfigService.checkOutConfig("c:/prof/temp","Cell01")
- IBM i: ProfilesConfigService.checkOutConfig("/temp","Cell01")
- Open the Profiles configuration file, profiles-config.xml, using a text editor and locate the following <vcardExport> section:
<vcardExport> <charset name="UTF-8"> <label key="label.vcard.encoding.utf8"/> </charset> <charset name="ISO-8859-1"> <label key="label.vcard.encoding.iso88591"/> </charset> <charset name="Cp943c"> <label key="label.vcard.encoding.cp943c"/> </charset> </vcardExport>
- To provide an export encoding specific to your language, include the following lines of code within the <vcardExport> tags:
<charset name="character_encoding"> <label key="ui_label"/> </charset>...where...
- character_encoding is the name of the character encoding to export.
- ui_label is the label for the character encoding in the user interface.
For example, to add an export setting for Arabic, include the following element:
<vcardExport> ... <charset name="Windows-1256"> <label key="label.vcard.encoding.windows.arabic"/> </charset> </vcardExport>
The following character set encoding options work best:
Table 37. Export character set encodings
Character encoding Description
Windows-1250 Central European languages that use Latin script (Polish, Czech, Slovak, Hungarian, Slovene, Serbian, Croatian, Romanian, and Albanian)
Windows-1251 Cyrillic alphabets
Windows-1252 Western languages
Windows-1253 Greek
Windows-1254 Turkish
Windows-1255 Hebrew
Windows-1256 Arabic
Windows-1257 Baltic languages
Windows-1258 Vietnamese
gb2312 Chinese
gb18030 Chinese
Complete this step for every language for which you require encoding support. There is no limit to the number of character set encodings that you can specify.
- After making changes, check the configuration files back in; you must do this in the same wsadmin session in which you checked them out for the changes to take effect.
See Applying property changes in Profiles for information about how to save and apply your changes.
Make photos cachable in secure environments
If your Profiles deployment is configured to prevent profile data from being accessible to readers, you can opt to make just user photos cachable.
This is an optional task; it is only useful if Profiles is configured to lock profile data. For security reasons, when the reader role is set to something other than everyone, Profiles does not publicly cache photos. As a result, no profile photos are visible to readers. To override this behavior and make photos visible, define a rule in the IBM HTTP Server's configuration file to explicitly set the caching headers of photos.
Define a rule in the IBM HTTP Server to explicitly set the caching headers of profile photos by completing the following steps:
- Using a text editor, open the httpd.conf file, which is the IBM HTTP Server configuration file. By default, the file is stored in the following location:
- AIX: /usr/IBM/HTTPServer/conf
- Linux: /opt/IBM/HTTPServer/conf
- Windows: C:\IBM\HTTPServer\conf
- IBM i: /www/HTTPServer_name/conf
- Add the following block of code to the httpd.conf file:
<LocationMatch /*/profiles/photo.do > <IfModule mod_headers.c> Header set Pragma "" Header set Cache-Control "max-age=21600,s-maxage=21600,public" </IfModule> </LocationMatch>
- Save and close the configuration file.
- Restart the IBM HTTP Server.
Enable the use of pronunciation files in an HTTPS environment
Ensure that Profiles users can save and play pronunciation files in an HTTPS environment by defining a rule in the IBM HTTP server’s configuration file.
This task needs to be performed in an HTTPS environment only.
Profiles users can add a recording of how their name is pronounced to enhance their profile. To ensure that users can save a pronunciation file to their own profile and listen to the recordings of other users, define a rule in the IBM HTTP Server's configuration file to explicitly set the caching headers of pronunciation files.
Define a rule in the IBM HTTP Server by completing the following steps:
- Using a text editor, open the IBM HTTP Server configuration file, httpd.conf. By default, the file is stored in the following location:
- AIX: /usr/IBM/HTTPServer/conf
- Linux: /opt/IBM/HTTPServer/conf
- Windows: C:\IBM\HTTPServer\conf
- IBM i: /www/HTTPServer_name/conf
- Add the following block of code to the httpd.conf file:
<LocationMatch /*/profiles/audio.do> Header set Pragma "" Header set Cache-Control "private, max-age=0, must-revalidate" </LocationMatch>
- Save and close the configuration file.
- Restart the IBM HTTP Server.
Manage the Profiles event log
Use Profiles administrative commands to manage the Profiles event log.
To run administrative commands, use the wsadmin client.
See Start wsadmin
The Profiles EVENTLOG table logs records relating to Profiles user events.
For example, every time a user removes a board post or adds a tag to their profile, an entry is logged in the table. From time to time, you might want to purge older records from the table to control the size of the data. Otherwise, the entries can grow rapidly and impact performance in areas such as seedlist indexing.
By default, records that are more than 30 days old are automatically purged from the event log.
For information about how to modify the setting that controls this interval, see Configure event log clean-up for Profiles.
To manage entries in the event log table for Profiles....
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- Start the Profiles Jython script interpreter.
- Access the Profiles configuration files:
execfile("profilesAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Use the following commands as required.
- ProfilesService.purgeEventLogsByDates(string startDate, string endDate)
Delete event log entries created between the specified start date and end date.
Parameters:
- startDate
- A string that specifies the start date for the period in MM/DD/YYYY format.
- endDate
- A string that specifies the end date for the period in MM/DD/YYYY format.
For example:
ProfilesService.purgeEventLogsByDates("06/21/2009", "06/26/2009")
This command deletes all the event log entries created on or after June 21st, 2009 and before June 26th, 2009 from the EVENTLOG table.
- ProfilesService.purgeEventLogsByEventNameAndDates(string eventName, string startDate, string endDate)
Delete event log entries with the specified event name that were created between the specified start date and end date.
Parameters:
- eventName
- The type of event to remove from the EVENTLOG table.
The following names are some examples of valid event names:
- profiles.created
- profiles.removed
- profiles.updated
- profiles.person.photo.updated
- profiles.person.audio.updated
- profiles.colleague.created
- profiles.colleague.added
- profiles.connection.rejected
- profiles.person.tagged
- profiles.person.selftagged
- profiles.tag.removed
- profiles.link.added
- profiles.link.removed
- profiles.status.updated
- profiles.wallpost.created
- profiles.wallpost.removed
- profiles.wall.comment.added
For a complete list of valid event names for Profiles, refer to the Events Reference article in the API Documentation wiki.
- startDate
- A string that specifies the start date for the period in MM/DD/YYYY format.
- endDate
- A string that specifies the end date for the period in MM/DD/YYYY format.
For example:
ProfilesService.purgeEventLogsByEventNameAndDates("profiles.colleague.created", "06/21/2009", "06/26/2009")
This command deletes all the profiles.colleague.created event log entries created on or after June 21st, 2009 and before June 26th, 2009 from the EVENTLOG table.
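Because the command accepts one event name at a time, purging several event types for the same period can be scripted; the following Jython sketch assumes the profilesAdmin.py interpreter has been started and uses illustrative event names and dates.
for eventName in ["profiles.wallpost.removed", "profiles.tag.removed", "profiles.link.removed"]:
    ProfilesService.purgeEventLogsByEventNameAndDates(eventName, "06/21/2009", "06/26/2009")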
Configure advanced settings in Profiles
Configure advanced settings in Profiles by adding information to the <properties> element in the profiles-config.xml file.
For the most part, you can use the ProfilesConfigService.updateConfig command to update settings in the profiles-config.xml file. However, for some advanced settings, check out the configuration file and make changes to it manually.
For example, if you want to change the display order of the information that displays on the Reporting Structure page or to expose information relating to the following feature, you must use the steps documented in the following topics.
For a full list of the properties that you can update using the ProfilesConfigService.updateConfig command, see Profiles configuration properties.
Exposing information about following
You can add a setting to the profiles-config.xml file to specify whether information relating to the following feature is made public.
To edit configuration files, use the wsadmin client.
See Start wsadmin
When the following feature is enabled, users can follow other users to get the latest updates about them. By default, following information is private in the sense that only the logged-in user can view information about the people that they are following and the people who are following them. You can change this behavior to make following information public by adding a setting to the profiles-config.xml file.
To expose following information....
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- Start the Profiles Jython script interpreter.
- Access the Profiles configuration files:
execfile("profilesAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Check out the Profiles configuration files:
ProfilesConfigService.checkOutConfig("working_directory", "cell_name") where:
- working_directory is the temporary working directory to which the configuration XML and XSD files are copied and are stored while you make changes to them. Use forward slashes (/) to separate directories in the file path, even if you are using the Microsoft Windows operating system.
AIX, Linux, and IBM i only: The directory must grant write permissions or the command does not complete successfully.
- cell_name is the name of the WAS cell hosting the Profiles application. This argument is required and is case-sensitive. To determine the cell name, run the following wsadmin command:
print AdminControl.getCell()
For example:
- AIX or Linux: ProfilesConfigService.checkOutConfig("/opt/prof/temp","Cell01")
- Windows: ProfilesConfigService.checkOutConfig("c:/prof/temp","Cell01")
- IBM i: ProfilesConfigService.checkOutConfig("/temp","Cell01")
- Open the profiles-config.xml file in a text editor.
- Add the following setting to the <properties> section of the file:
<property name="com.ibm.lconn.profiles.config.MakeFollowingInfoPublic" value="true"/>
- Save your changes and close the configuration file.
Change the display order of the Reporting Structure page
You can add a setting to the profiles-config.xml file to change the order in which the full report-to chain is displayed on the Reporting Structure page.
To edit configuration files, use the wsadmin client.
See Start wsadmin
Users can click Full Report-to Chain in a profile to view the profile owner's place in the organization's reporting structure. The default sorting order lists the profile owner first. To reverse this order, edit properties in the profiles-config.xml file as follows.
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- Start the Profiles Jython script interpreter.
- Access the Profiles configuration files:
execfile("profilesAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
- Check out the Profiles configuration files:
ProfilesConfigService.checkOutConfig("working_directory", "cell_name") where:
- working_directory is the temporary working directory to which the configuration XML and XSD files are copied and are stored while you make changes to them. Use forward slashes (/) to separate directories in the file path, even if you are using the Microsoft Windows operating system.
AIX, Linux, and IBM i only: The directory must grant write permissions or the command does not complete successfully.
- cell_name is the name of the WAS cell hosting the Profiles application. This argument is required and is case-sensitive. To determine the cell name, run the following wsadmin command:
print AdminControl.getCell()
For example:
- AIX or Linux: ProfilesConfigService.checkOutConfig("/opt/prof/temp","Cell01")
- Windows: ProfilesConfigService.checkOutConfig("c:/prof/temp","Cell01")
- IBM i: ProfilesConfigService.checkOutConfig("/temp","Cell01")
- Open the profiles-config.xml file in a text editor.
- Add the following setting to the <properties> section:
<property name="com.ibm.lconn.profiles.ui.reportingChain.isBottomUp" value="false"/>
- Save your changes and close the configuration file.
Administer Search
The Search application provides a single point for performing full text and tag searches across all the deployed Connections applications. Search is required by all Connections applications and must be running to prevent unexpected behavior in the other applications.
Connections Search is based on multifaceted search technology and uses related people, related dates, related tags, and source application facets. This information enables users to drill down into specific facets to find the content that they want without having to page through large numbers of results.
During the indexing process, bookmarks created in the Activities, Communities, and Bookmarks applications are indexed into the same document, and the details of the link, such as its tags, are used to supplement the document in the index.
For example, a blog posting that was bookmarked in the Bookmarks application has facets for both Bookmarks and Blogs.
Search results in Connections are based on the following facets.
Table 38. Connections search facets
Facet Description
Date The set of dates associated with the search results. This facet enables users to filter search results first by year, and then by year and month.
Tags The complete set of tags used for the full text result set, including tags associated with bookmarks created in the Bookmarks, Activities, and Communities applications.
Related people The complete set of users associated with the full text result set. This facet includes associations mined from bookmarked content in Activities, Bookmarks, Communities, Files, Forums, and Wikis. Related people also include shared authors on blogs and people who have commented on blogs.
Source component The Connections application from which the results were retrieved. Users can filter results by source using the options at the side of the Search Results page.
These facets are calculated at indexing time for optimum performance at search time.
Access the Search configuration environment
You need to initialize the Search configuration environment to be able to run the SearchCellConfig and SearchService MBean commands.
See Start wsadmin
Two types of command are provided for administering the Search application:
- SearchCellConfig
- An MBean used to check out, update, and check in copies of the Search configuration file, search-config.xml.
This file is used to control many aspects of Search configuration, such as:
- The location of the Search index
- The location of the IBM LanguageWare dictionaries used by Search
- The configuration of the file download and conversion service used by Search when indexing file attachments
The SearchCellConfig MBean also provides the user with a means of checking out and checking in the Search Ajax proxy configuration file, proxy-search-config.xml.
For more information about the syntax of the SearchCellConfig commands and a description of what each command does, see SearchCellConfig commands.
- SearchService
- An MBean used to create, retrieve, update, and delete scheduled task definitions for the following Search operations. It also provides a facility to trigger one of these operations on demand.
- Indexing
- Indexing optimization
Triggering an operation is implemented by scheduling a one-off task that runs within 30 seconds of issuing the corresponding SearchService command.
For more information about the syntax of the SearchService commands and a description of what each command does, see SearchService commands.
To initialize the Search configuration environment....
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment has initialized, run the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
When the command is run successfully, the following message displays:
Search Administration initialized
Apply property changes for Search
After making changes to Search configuration settings, check in the configuration settings and restart the servers to apply the changes.
To perform the following steps, you must first initialize the Search configuration environment.
For more information about how to do this, see Access the Search configuration environment.
- Complete your configuration changes.
- Check in the updated search-config.xml configuration file using the following wsadmin client command:
SearchCellConfig.checkInConfig()
- To exit the wsadmin client, type exit at the prompt.
- Stop the server or servers hosting the Search application, delete the index, and then restart the Search servers. The next time the scheduled task runs, it recreates the index.
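End to end, a Search configuration change looks like the following sketch; the working directory and cell name are examples, and the check-out syntax is assumed to mirror the other Connections check-out commands, so verify it against the SearchCellConfig commands topic.
execfile("searchAdmin.py")
SearchCellConfig.checkOutConfig("/opt/search/temp", "Cell01")
# Edit search-config.xml in the working directory with a text editor
SearchCellConfig.checkInConfig()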
Manage the Search index
The Search application uses a Lucene 3.0.3 index, supplemented by social facet information. The location of the Search index is mapped to an IBM WebSphere Application Server variable, SEARCH_INDEX_DIR. The value of this variable is set to CONNECTIONS_DATA_DIRECTORY/search/index by default.
The index is generated by retrieving all the necessary information from each Connections application on an administrator-defined schedule. Each task defines which applications to crawl and whether to optimize the index at the end of the task.
The following applications can be indexed: Activities, Blogs, Bookmarks, Communities, Files, ECM files, Forums, Profiles, and Wikis. Status updates and community calendar events can also be indexed.
Search uses the WAS scheduling service for creating and updating the Search index.
The scheduling service is based on the Cron calendar, which uses predefined date algorithms to determine when a task should run. While the scheduling service supports the use of a Simple calendar, this is not currently supported for Connections.
For more information about WAS scheduler, see Scheduling tasks.
Connections applications maintain delete and access-control update information for a maximum of 30 days. If indexing is not performed on an index for 30 days, that index is considered to be out-of-date and reindexing is necessary. You must delete and recreate the index to ensure data integrity.
When indexing on a Microsoft Windows 2008 deployment, you might get the following error: java.io.IOException: Access is denied.
This error is caused by an underlying Lucene issue and prevents the index from being updated. To resolve the problem, restart all the machines in the cluster.
The indexing process
The Search index is generated by retrieving information from each of the applications based on a schedule defined by the administrator. Search uses the WAS scheduling service for creating and updating the Search index. The index must be deployed on each node running the Search enterprise application.
Indexing overview
Search indexing happens in several stages:
- Crawling
- Crawling is the process of accessing and reading content from each application in order to create entries for indexing.
During the crawling process, the Search application requests a seedlist from each Connections application.
This seedlist is generated when each application runs queries on the data stored in its database, based on the parameters that the Search application submits in its HTTP request.
The contents of the seedlists are persisted to disk. They are deleted when the next incremental indexing task completes successfully.
- File content extraction
- Search provides a document conversion service to extract the content of the files to be indexed. During the file content extraction stage, the document conversion service downloads files to a temporary folder in the index directory, converts them to plain text, and stores the text in the folder defined by the WAS variable EXTRACTED_FILE_STORE. The extracted text is then indexed.
Connections supports the indexing of file attachment content from the Files and Wikis applications, and IBM FileNet documents.
File content extraction takes place on the schedule defined for the file content extraction task, which runs every 20 minutes by default. File content is not searchable until the file content conversion is complete and the next indexing task has also completed.
- Indexing
- During the indexing phase, the entries in the persisted seedlists are processed into Lucene documents, which are serialized into a database table that acts as an index cache.
When the indexing phase is complete, the seedlists are removed from disk. A resume token marks where the last seedlist request finished so that the Search application can start from this point on the next seedlist request. This resume token enables Search to retrieve only the new data that was added after the last seedlists were generated and crawled.
The crawling and indexing stages for multiple applications take place concurrently in incremental foreground indexing.
For example, if an indexing task that indexes Files, Activities, and Blogs is created, each of these applications is crawled and added to the database cache at the same time. During initial and background indexing, only the crawling stage for multiple applications takes place concurrently.
During incremental foreground indexing, after the crawling and indexing stages are complete, all the nodes are notified that they can build their index. At this point, the index builder on each node begins extracting entries from the database cache and storing them in the index on the local file system.
- Index building
- Index building refers to the deserialization and writing of the Lucene documents into the Search index. This process only occurs during incremental foreground indexing. During index building, the index builder takes entries from the database cache and stores them in an index on the local file system. Each node has its own index builder, so crawling and preparing entries only takes place once in a network deployment, and then the index is created on each node from the information that has already been processed.
During initial and background indexing, the indexing stage and the index building stage are merged, and no database serialization or deserialization occurs.
- Post processing
- After index building (for incremental foreground indexing) or indexing (for initial or background indexing), post-processing work takes place on the new index entries to add additional metadata to the search results. This work includes bookmark rollup and the addition of file content to Files search results.
Bookmark rollup refers to the process of aggregating the information for public bookmarks that point to the same URL.
For example, if 1000 users create a public bookmark for the same URL, when someone searches for that URL, a single bookmark is returned instead of 1000 search results. The bookmark that is returned includes the information for all 1000 bookmarks rolled up into a single search result, so that all of the tags and people associated with each of the individual bookmarks are now associated with the one document.
In addition, if two users bookmark the same internal document, for example, a wiki page, then the wiki page gets rolled up with the bookmark so if the user then searches for the wiki page or the bookmark that they created to the wiki page, only one result is returned in the search results. The tags and people associated with the bookmark and the wiki page are combined into a single document.
Indexing types
The following table explains the differences between the various types of indexing:
Table 39. Types of indexing
- Initial indexing
- Foreground indexing: The initial index is built using the default 15min-search-indexing-task. Alternatively, it can be built by a custom indexing task created by the SearchService.addIndexingTask command or a command that is run once, such as SearchService.indexNow(String applicationNames). This index is used for searching and for further indexing. The database cache is not used.
- Background indexing: An index is built using the SearchService.startBackgroundIndex command. The background indexing command creates a one-off index in a specified location on disk. This index is not used for searching. The database cache is not used.
- Incremental indexing
- Foreground indexing: The index is updated using the default 15min-search-indexing-task. Alternatively, the index can be updated by a custom indexing task created by the SearchService.addIndexingTask command or a command that is run once, such as SearchService.indexNow. This index is used for searching and for further indexing. The database cache is used.
- Background indexing: A background index can be updated using the SearchService.startBackgroundIndex command. This index is not used for searching. The database cache is not used.
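To make the distinction concrete, the following wsadmin sketch contrasts a one-off foreground indexing call with a background indexing call. It is an illustration only: it assumes the Search script interpreter has already been started with execfile("searchAdmin.py"), and the directory paths are placeholders that you would replace with locations in your own deployment.
# Foreground: update the live index for the named applications; results become searchable.
SearchService.indexNow("files, wikis")
# Background: build a separate one-off index on disk; it is not used for searching.
SearchService.startBackgroundIndex("/bg/seedlists", "/bg/extractedText", "/bg/index", "files, wikis")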
Indexing steps
The indexing process involves the following steps:
- Initial and background indexing
- Crawl all pages of the seedlist and persist them to disk.
- Extract the file content and persist it to disk.
- Crawl a seedlist page from disk.
- Index the seedlist entries into Lucene documents.
- Write the documents to the Lucene index.
- Repeat until all the persisted seedlist pages have been crawled.
- Incremental foreground indexing
- The node that has the scheduler lease crawls all the pages of the seedlist and persists them to disk.
- Crawl a seedlist page from disk.
- Index the seedlist entries into Lucene documents.
- Serialize the Lucene documents into the database cache.
- Send a JMS message to all Search nodes to alert them of the completion of the serialization.
- Each node deserializes the Lucene documents into the Lucene index.
Search index directory structure
Each Connections application has its own index directory. The Search index directory is defined by WAS variable SEARCH_INDEX_DIR.
Because of the new features introduced in Connections 4.5, you cannot migrate an index from a previous version of the product. You must build a new index when deploying Connections 4.5 or later. Note also that there are dependencies between the application index directories so that, for example, deleting an application index directory can cause index corruption and lead to other issues.
The application-specific index directories are named index_application_name.
Table 40. Application index directories
- Activities: index_activities
- Blogs: index_blogs
- Bookmarks: index_bookmarks
- Calendar: index_calendar
- Communities: index_communities
- Community Libraries: index_ecm_files
- Files: index_files
- Forums: index_forums
- Profiles: index_profiles
- Social analytics: index_sand
- Status updates: index_status_updates
- Wikis: index_wikis
Each index directory contains doctype, facets, filesTemp, and graph files. After the index is built, each directory also contains an INDEX.READY file and a CRAWLING_VERSION file.
The following directories are also used by the Search application:
- index_backup
- When the SearchService.backupIndexNow() command is run, the index is backed up to the index_backup directory. This directory is contained in the data\local\search folder. The folder location is specified by WAS variable SEARCH_INDEX_BACKUP_DIR.
- staging
- After the initial index is built, it is copied to the staging directory before it is rolled out to all the nodes in the deployment.
This directory is located on the network share. The directory location is specified by WAS variable SEARCH_INDEX_SHARED_COPY_LOCATION. An index in this directory is deleted 30 days after it is created. To disable the automatic roll-out behavior, you must delete the WebSphere variable SEARCH_INDEX_SHARED_COPY_LOCATION. If you delete this variable, also clear the existing contents of SEARCH_INDEX_SHARED_COPY_LOCATION manually.
- persistence
- The persistence directory contains the XML files that are created after an application is crawled. These files are used to build an index for the application. This directory is contained in the data\local\search folder.
The directory location is specified by WAS variable CRAWLER_PAGE_PERSISTENCE_DIR.
- extracted
- The extracted directory holds documents that contain the content extracted from files. This directory is located on the network share. The directory location is specified by WAS variable EXTRACTED_FILE_STORE.
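Because these locations are ordinary WebSphere variables, you can inspect them from the wsadmin client. The following Jython sketch is an illustration only; it assumes you are at a wsadmin prompt connected to the deployment manager and simply prints each Search-related variable with its value and scope.
searchVars = ["SEARCH_INDEX_DIR", "SEARCH_INDEX_BACKUP_DIR", "SEARCH_INDEX_SHARED_COPY_LOCATION", "CRAWLER_PAGE_PERSISTENCE_DIR", "EXTRACTED_FILE_STORE"]
# List every WebSphere variable entry and print the ones that Search uses.
for entry in AdminConfig.list("VariableSubstitutionEntry").splitlines():
    name = AdminConfig.showAttribute(entry, "symbolicName")
    if name in searchVars:
        print name, "=", AdminConfig.showAttribute(entry, "value"), "  scope:", entry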
Create Search indexes
Search indexing is automatically configured for Connections during installation.
To create the initial Search index after installing the product, wait for one of the default indexing tasks to run; the task creates the index automatically. The Search application then copies the index to all the nodes: the index is first saved to a staging folder, and then it is copied from the staging folder to all the secondary nodes in the deployment. You must not stop your deployment until the index has been copied to all the nodes. If the server is stopped during this process, the index will not be successfully rolled out to all the nodes.
To create subsequent indexes, for example, if the index has become corrupt or unusable, delete any existing indexes, and then either wait for the next scheduled indexing task to run or run a one-off indexing task. Alternatively, if the index is still functional, to avoid disrupting users’ access to Search functionality, you can recreate the Search index by running a wsadmin command to create a background index.
Initializing the Search index
When you install Connections, Search indexing is automatically configured. To create the initial Search index, all you need to do is wait for one of the default indexing tasks to run.
When you are installing a non-English language deployment, you must enable the relevant language dictionaries for your deployment before creating the initial Search index. Without multiple dictionary support, for languages other than English, Search only returns results where there is an exact match between the search term and the content term. Enabling multiple dictionaries ensures better quality search results when your user base is multilingual.
For more information, see Configure dictionaries for Search.
Initial index creation occurs when any scheduled indexing task fires and an index does not yet exist. As part of the initial index creation process, the index is automatically rolled out to the secondary nodes in your deployment. Each node running the Search application must have the Search index stored locally on the node's file system. Because there are multiple indexes in a clustered environment, they must all be kept in synchronization with each other.
The Search index directory is defined by the IBM WebSphere Application Server variable SEARCH_INDEX_DIR. You can change the location of the index by editing this variable.
For more information, see Changing the location of the Search index.
After the initial index has been built and optimized, the contents of the index directory are copied to a staging folder. When the newly-built index is successfully posted, JMS messages are broadcast so that each node automatically downloads the index from the staging folder and loads it. The index management tables are populated at the same time. For Search to function properly, the initial index must have completed successfully and it must be deployed to all nodes.
Do not stop your deployment until the index has been copied to all nodes. If the server is stopped during this process, the index will not be successfully rolled out to all nodes. In this event, you need to manually copy the index from the staging location to the other nodes.
You can change the location of the Search index staging folder by editing WAS variable, SEARCH_INDEX_SHARED_COPY_LOCATION.
Recreating the Search index
If your Search index is corrupt and cannot be used, you can recreate it by first deleting any existing indexes, and then either waiting for the next scheduled indexing task to run or running a one-off indexing task.
When you follow the steps described in the following procedure, Search functionality is not available to users. To recreate the index without the need for Search downtime, follow the steps described in the Creating a background index topic instead.
During the indexing update process, documents are first written to a cache table in the HOMEPAGE database and then written to each index across the nodes. When a new index needs to be built, the database cache is skipped, and the crawling and indexing process writes directly to the index directory on the node that is performing the indexing task.
- Stop all the nodes that are running the Search application. If there are existing search indexes on these nodes, delete them by performing the steps described in Deleting the index.
- Start all the Search nodes in the cluster.
- Recreate the index by completing one of the following steps:
- Create a one-off task that indexes all the installed Connections applications in your deployment.
For more information, see Running one-off tasks.
- Wait for the next scheduled indexing task to run.
You can tell that the index is built on the indexing node when the INDEX.READY and CRAWLING_VERSION files are present in the index directory. The Search index directory is defined by the IBM WebSphere Application Server variable SEARCH_INDEX_DIR.
After the index is built, the next phase is index roll-out. During this phase, the files in the index directory are automatically copied to the Search staging folder, which is defined by WAS variable SEARCH_INDEX_SHARED_COPY_LOCATION.
The files in the Search staging folder are then copied to each index folder on the remaining nodes.
Do not stop your deployment until the index has been copied to all nodes. If the server is stopped during this process, the index will not be successfully rolled out to all nodes. In this event, you need to manually copy the index from the staging location to the other nodes.
Create background indexes
By creating a background index, you can remove inconsistencies from your Search index without the need for downtime while the index is rebuilt. Background indexing involves three phases: crawling, file content extraction, and index creation.
You can run a sequence of SearchService admin commands for each phase of background indexing. The SearchService.startBackgroundCrawl command performs a background crawl of the Search seedlists, while the SearchService.startBackgroundFileContentExtraction command acts on up-to-date seedlists and extracts file content outside of the indexing process. The SearchService.startBackgroundIndex command creates a stand-alone index in a location that you specify.
You are not required to run the SearchService.startBackgroundCrawl and SearchService.startBackgroundFileContentExtraction commands before running SearchService.startBackgroundIndex, however, you can do this if you want more control over each phase of the indexing process.
If you run SearchService.startBackgroundIndex without executing SearchService.startBackgroundCrawl or SearchService.startBackgroundFileContentExtraction first, the SearchService.startBackgroundIndex command performs the same work as if you had executed the other commands beforehand.
Similarly, if you run SearchService.startBackgroundFileContentExtraction without executing SearchService.startBackgroundCrawl first, the SearchService.startBackgroundFileContentExtraction command performs the same work as if you had executed the SearchService.startBackgroundCrawl command beforehand.
Performing a background crawl
You can use a SearchService command to perform a background crawl of the Search seedlists without creating a Search index.
See Start wsadmin
The SearchService.startBackgroundCrawl command allows you to crawl the application seedlists and save those seedlists to a specified location. You might want to use this command if you are experiencing issues with crawling and you want to verify that the crawling process is completing successfully.
To perform a background crawl of the Search seedlists, complete the following steps.
- Start the wsadmin client from the WAS_HOME/profiles/Dmgr01/bin directory. You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment has initialized, use the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored. When the command runs successfully, the following message displays: Search Administration initialized
- Enter the following command:
- SearchService.startBackgroundCrawl(String persistenceLocation, String components)
Crawls the seedlists for the specified applications and then saves the seedlists to the specified location. This command does not build an index.
The command takes the following parameters:
- persistenceLocation
- A string that specifies the path to which the seedlists are to be saved.
- components
- A string that specifies the applications whose seedlists are to be crawled. The following values are valid: activities, all_configured, blogs, calendar, communities, dogear, ecm_files, files, forums, profiles, status_updates, and wikis. Use all_configured instead of listing all indexable services when you want to crawl all the applications.
For example:
SearchService.startBackgroundCrawl("/opt/IBM/Connections/backgroundCrawl", "activities, forums, communities, wikis")After completing a background crawl, perform one of the following options:
- Extract file content.
For more information, see Extracting file content.
- Create a background index.
For more information, see Creating a background index.
- Create a foreground index.
For more information, see Recreating the Search index.
To create a foreground index, copy the persisted seedlists from the persistence location that you specified when you ran the startBackgroundCrawl command to the CRAWLER_PAGE_PERSISTENCE_DIR directory on the node that is doing the indexing.
In a multi-node system, you might want to copy the seedlists to the CRAWLER_PAGE_PERSISTENCE_DIR directory on all nodes. Alternatively, you can set the CRAWLER_PAGE_PERSISTENCE_DIR variable to a network location and copy the persisted seedlists from the persistence location you specified to that location.
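Copying the persisted seedlists is a plain file-system operation. The following Python sketch shows one way to do it; the source and destination paths are examples only and must be replaced with the persistence location you used for the background crawl and the CRAWLER_PAGE_PERSISTENCE_DIR value on the target node.
import os, shutil

# Example paths; replace with your background crawl persistence location and the
# CRAWLER_PAGE_PERSISTENCE_DIR value on the node that performs the indexing.
src = "/opt/IBM/Connections/backgroundCrawl"
dst = "/opt/IBM/Connections/data/local/search/persistence"

for name in os.listdir(src):
    srcPath = os.path.join(src, name)
    dstPath = os.path.join(dst, name)
    if os.path.isdir(srcPath):
        shutil.copytree(srcPath, dstPath)   # copy a per-application seedlist folder (fails if the target folder already exists)
    else:
        shutil.copy2(srcPath, dstPath)      # copy an individual persisted seedlist file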
Extracting file content
To speed up the indexing process, you can use a SearchService command that extracts file content in a process that is separate from indexing.
To use SearchService administrative commands, use the wsadmin client.
The SearchService.startBackgroundFileContentExtraction command performs file content extraction outside of the indexing process.
This command iterates over the persisted files seedlists and, for each file, it extracts the file content according to the specified configuration settings. This process is multithreaded, and is the same file content extraction process that occurs when you run the startBackgroundIndex command.
To extract file content outside of the indexing process, complete the following steps.
- Start the wsadmin client from the WAS_HOME/profiles/Dmgr01/bin directory. You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment has initialized, use the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored. When the command runs successfully, the following message displays:
Search Administration initialized
- Use the following command:
- SearchService.startBackgroundFileContentExtraction(persistence dir, components, extracted text dir, thread limit)
Extracts file content for all files referenced in the persisted seedlists in a process that is independent of the indexing task.
Parameters:
- persistence dir
- A string that specifies the location of the persisted files seedlists.
- components
- A string that specifies the application or applications for which you want to extract file content. The following values are valid:
- files. Extracts file content from the Files application.
- wikis. Extracts file content from the Wikis application.
- ecm_files. Extracts file content from community library files stored in Enterprise Content Management (ECM) systems.
- extracted text dir
- A string that specifies the target location for the extracted text. The same directory structure and naming scheme is used for this directory as for the extracted text directory on the deployment: connections shared data/ExtractedText.
For example, ExtractedText/121/31/36cdb7a0-92b2-4cf9-91f3-c4e7e527a5e1.
- thread limit
- The maximum number of seedlist threads.
For example:
SearchService.startBackgroundFileContentExtraction("/bg_index/seedlists", "files", "/bg_index/extractedText", 10)You typically run this command after running a startBackgroundCrawl command to act on up-to-date seedlists. If there are no persisted seedlists available, the behavior is the same as when you run the startBackgroundCrawl command, that is, the seedlists are crawled and persisted first.
- Verify that the target extracted text directory is populated with the extracted files content. Open some of the extracted text files in a text editor. You can expect to see the typical format, for example, some header information followed by the extracted content.
- Copy the extracted file content to the directory specified by the WAS environmental variable EXTRACTED_FILE_STORE. Storing the extracted file content in this directory means that when the Search application next detects a file update during indexing, if the update is a metadata change only, Search can avoid converting the file again unnecessarily.
For more information about the EXTRACTED_FILE_STORE variable, see WAS environment variables.
- Complete the steps outlined in the topic, Creating a background index to create a background index using the extracted file content.
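If you want to script the verification described above, the following Python sketch counts the extracted text files and previews one of them. The path is an example only and should match the extracted text directory that you passed to the command.
import os

extractedDir = "/bg_index/extractedText"   # example: the extracted text dir used above

total = 0
sample = None
for root, dirs, files in os.walk(extractedDir):
    for name in files:
        total = total + 1
        if sample is None:
            sample = os.path.join(root, name)

print "Extracted text files found:", total
if sample is not None:
    f = open(sample)
    print "Preview of", sample
    print f.read(500)    # expect header information followed by extracted content
    f.close()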
Create a background index
Use the SearchService.startBackgroundIndex command to create a background index. Using this command helps you to remove inconsistencies from your Search index without the need for downtime while the index is rebuilt.
To use SearchService administrative commands, use the wsadmin client.
See Start wsadmin
The SearchService.startBackgroundIndex command allows you to create a background index in a specified location. When you use this command, the Search application performs a full crawl of the specified applications and then builds the index at the chosen location.
If an index already exists at the location, the crawl resumes from the resume point stored in the Search index at that location.
A file called INDEX.READY is created in the specified location when the background index is complete.
To create a background index, complete the following steps.
- Start the wsadmin client from the WAS_HOME/profiles/Dmgr01/bin directory. You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment has initialized, use the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored. When the command runs successfully, the following message displays:
Search Administration initialized
- Use the following command:
- SearchService.startBackgroundIndex(String persistenceLocation, String extractedFileContentLocation, String indexLocation, String applications, String jobs)
Creates a background index in the specified location.
This command crawls the seedlists for the specified applications, saves the seedlists to the specified persistence location, extracts the file content, and then builds a Search index for the applications at the specified index location.
You can optionally run social analytics indexing jobs at the end of the background indexing operation. Alternatively, you can run the SearchService.startSandBackgroundIndex command if you want to create a background index for the social analytics service.
For more information, see Creating a background index for the social analytics service.
This command takes the following arguments:
- persistenceLocation
- A string value that specifies the location where you want to save the application seedlists.
- extractedFileContentLocation
- The file content extraction location. Use the same location specified when you previously extracted the file content using the SearchService.startBackgroundFileContentExtraction command or the SearchService.startBackgroundIndex command. Otherwise, specify an empty directory as the location for storing the extracted file content.
- indexLocation
- A string value that specifies the location where you want to create the background index.
- applications
- A string value that specifies the names of the applications to include in the index crawl. The following values are valid: activities, all_configured, blogs, calendar, communities, dogear, ecm_files, files, forums, profiles, status_updates, and wikis. Use all_configured rather than listing all the indexable applications when you want to index all the applications.
To queue up multiple applications for indexing, run a single instance of the SearchService.startBackgroundIndex command with the names of the applications to index listed with a comma separator between them. If you run multiple instances of the command with a single application specified as a parameter, a lock is established when you run the first command so that only the first application specified is indexed successfully.
- jobs
- A string value that specifies the names of the social analytics post-processing indexers that examine, index, and produce new output based on the data in the index. The following values are valid: evidence, graph, manageremployees, tags, taggedby, and communitymembership. Use a comma to separate multiple values. This parameter is optional.
Examples:
SearchService.startBackgroundIndex("/opt/IBM/Connections/data/local/search/backgroundCrawl", "/opt/IBM/Connections/data/local/search/backgroundExtracted", "/opt/IBM/LotusConnections1/data/search/background/backgroundIndex", "activities, blogs, calendar, communities, dogear, files, forums, profiles, wikis, status_updates", "communitymembership, graph")SearchService.startBackgroundIndex("/opt/IBM/Connections/data/local/search/backgroundCrawl", "/opt/IBM/Connections/data/local/search/backgroundExtracted", "/opt/IBM/LotusConnections1/data/search/background/backgroundIndex", "all_configured")
- To start using the new index, complete the steps for restoring an index as described in Restoring the Search index.
The steps that you need to perform vary depending on your deployment type.
- Copy the extracted file content to the directory specified by the WAS environmental variable EXTRACTED_FILE_STORE so that the files do not have to be converted again unnecessarily during indexing.
For more information about the EXTRACTED_FILE_STORE variable, see WAS environment variables.
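If you prefer to control each phase explicitly, you can run the three background commands in sequence against the same working directories, as in the following sketch. The paths are examples only; per the parameter descriptions above, pointing the later commands at the same persistence and extracted text locations lets them build on work already done rather than starting from scratch.
# Run from the wsadmin prompt after execfile("searchAdmin.py"); all paths are examples.
SearchService.startBackgroundCrawl("/bg_index/seedlists", "all_configured")
SearchService.startBackgroundFileContentExtraction("/bg_index/seedlists", "files, wikis, ecm_files", "/bg_index/extractedText", 10)
SearchService.startBackgroundIndex("/bg_index/seedlists", "/bg_index/extractedText", "/bg_index/index", "all_configured")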
Index settings
Indexing is automatically configured in Connections. However, when setting up indexing for your environment, you might need to perform additional configuration tasks.
Important for non-English deployments: Enabling multilingual support for Search is a mandatory post-installation step that needs to be performed before you start your Connections Search server for the first time. Without multiple dictionary support, for languages other than English, Search will only return results where there is an exact match between the search term and content term. Enabling multiple dictionaries ensures better quality search results when your user base is multilingual.
For more information about enabling multilingual support, see Configure dictionaries for Search.
By default, the Connections user interface is displayed in the language identified in the locale settings of the web browser being used. You can configure Connections to allow users to explicitly choose the language in which the product is displayed.
For more information, see Enabling users to set a language preference.
You can also perform optional post-installation configuration tasks relating to indexing, such as configuring J2C authentication for Search or changing the location of the Search index.
Enable indexing resumption
You can add a configuration setting to the search-config.xml file to specify that interrupted or failed indexing tasks are automatically resumed.
To edit configuration files, use the wsadmin client.
The SearchCellConfig.setIndexingResumptionAllowed command allows you to enable the resumption of failed or interrupted indexing tasks that have not yet reached a resume point. When you enable this functionality and an indexing task fails or is interrupted, the task resumes at the start of the previous seedlist page rather than from the previous resume point.
Indexing resumption is disabled by default when you install Connections. When you run the SearchCellConfig.setIndexingResumptionAllowed command, the allowResumption setting, which specifies that interrupted or failed indexing tasks are automatically resumed, is added to the search-config.xml configuration file.
<indexSettings allowResumption="true" location="${SEARCH_INDEX_DIR}" maxIndexerThreads="1"/>You might want to consider enabling indexing resumption after installation because, if there is an interruption during initial indexing, this feature allows indexing to resume from where it left off. Normally, only crawling and file content extraction resume from where they are left off after an interruption. However, the indexing resumption feature has an impact on performance, and there is little benefit to enabling it during incremental indexing as incremental indexing typically executes very quickly.
To enable indexing resumption, complete the following steps.
- Start the wsadmin client from the WAS_HOME/profiles/Dmgr01/bin directory. You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment has initialized, use the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored. When the command runs successfully, the following message displays: Search Administration initialized
- Check out the Search cell-level configuration file, search-config.xml:
SearchCellConfig.checkOutConfig("working_dir", "cellName")
where:
- working_dir is the temporary directory to which you want to check out the cell level configuration file. This directory must exist on the server where you are running the wsadmin client. Use forward slashes to separate directories in the file path, even if you are using the Microsoft Windows operating system.
AIX, Linux, and IBM i only: The directory must grant write permissions or the command will not run successfully.
- cellName is the name of the cell that the Search node belongs to. This argument is required. It is also case-sensitive, so type it with care. To determine the cell name, run the following wsadmin command:
print AdminControl.getCell()
For example:
SearchCellConfig.checkOutConfig("c:/search_temp", "SearchCell01")
- Use the following command:
- SearchCellConfig.setIndexingResumptionAllowed(boolean allowed)
Enables or disables the resumption of interrupted or failed indexing tasks that have not reached a resume point.
This command takes a single argument:
- allowed. A boolean value.
For example, to enable indexing resumption:
SearchCellConfig.setIndexingResumptionAllowed("true")
- Check in the updated search-config.xml configuration file using the following wsadmin client command:
SearchCellConfig.checkInConfig()
- To exit the wsadmin client, type exit at the prompt.
- Stop the server or servers hosting the Search application, delete the index, and then restart the Search servers. The next time the scheduled task runs, it recreates the index.
Configure dictionaries for Search
The Search application provides globalization support by using different dictionaries for different languages. Each dictionary file must be enabled in the Search configuration file before indexing. By default, only the English language dictionary is enabled during installation.
Every language has its own specified dictionary file. Dictionaries that are marked as enabled in the Search configuration file are loaded into memory at server start time when the Search application is started.
For non-English deployments, enabling multilingual support for Search is a mandatory post-installation step that needs to be performed before you start your Connections Search server for the first time. Without multiple dictionary support, for languages other than English, Search will only return results where there is an exact match between the search term and content term. Enabling multiple dictionaries ensures better quality search results when your user base is multilingual.
Enable dictionaries
Use administrative commands to enable the dictionaries to use with Search.
To edit configuration files, use the wsadmin client.
Enabling additional dictionaries adds a performance cost at indexing time and increases the size of the index. Enable additional dictionaries only as needed.
See Search language dictionaries for a list of available language dictionaries.
The Search application provides globalization support by using different dictionary files for different languages. Each dictionary file must be enabled in the Search configuration file before indexing.
The dictionaries that are enabled in the Search configuration file are loaded into memory at server start time when the Search application is started.
To enable dictionaries for use with Search, complete the following steps.
- Start the wsadmin client from the WAS_HOME/profiles/Dmgr01/bin directory. You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment has initialized, use the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored. When the command runs successfully, the following message displays: Search Administration initialized
- Check out the Search cell-level configuration file, search-config.xml:
SearchCellConfig.checkOutConfig("working_dir", "cellName")
where:
- working_dir is the temporary directory to which you want to check out the cell level configuration file. This directory must exist on the server where you are running the wsadmin client. Use forward slashes to separate directories in the file path, even if you are using the Microsoft Windows operating system.
AIX, Linux, and IBM i only: The directory must grant write permissions or the command will not run successfully.
- cellName is the name of the cell that the Search node belongs to. This argument is required. It is also case-sensitive, so type it with care. To determine the cell name, run the following wsadmin command:
print AdminControl.getCell()
For example:
SearchCellConfig.checkOutConfig("c:/search_temp", "SearchCell01")- To add a specified dictionary to the list of configured dictionaries:
- SearchCellConfig.enableDictionary(String languageCode, String dictionaryPath)
Enables support for the specified LanguageWare dictionary.
This command accepts two arguments.
- languageCode. The language code for the dictionary to add. This argument is a string value.
The language code typically comprises two letters conforming to the ISO standard 639-1:2002 that identifies the primary language of the dictionary. However, there are some codes that additionally define a country or variant, in which case these constituent parts are separated by an underscore.
For example, Portuguese has two variants, one for Portugal (pt_PT) and one for Brazil (pt_BR).
When using a code that also specifies a country, ensure that you use an underscore to separate the language code and the country code rather than a hyphen; otherwise an error will be generated.
- dictionaryPath. The path to the directory containing the dictionary file. This argument is a string value.
For example:
SearchCellConfig.enableDictionary("fr","/opt/IBM/Connections/data/shared/search/dictionary")You can also specify the path using a WebSphere environment variable. In the following example, the "${SEARCH_DICTIONARY_DIR}" value is used to point to the shared Search dictionary directory.SearchCellConfig.enableDictionary("fr","${SEARCH_DICTIONARY_DIR}")- Check in the updated search-config.xml configuration file using the following wsadmin client command:
SearchCellConfig.checkInConfig()
- To exit the wsadmin client, type exit at the prompt.
- Stop the server or servers hosting the Search application, delete the index, and then restart the Search servers. The next time the scheduled task runs, it recreates the index.
Listing enabled dictionaries
Use the listDictionaries command to check which dictionaries are currently enabled for use with Search.
To edit configuration files, use the wsadmin client.
To list the dictionaries that are currently enabled, complete the following steps.
- Start the wsadmin client from the WAS_HOME/profiles/Dmgr01/bin directory. You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment has initialized, use the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored. When the command runs successfully, the following message displays: Search Administration initialized
- Check out the Search cell-level configuration file:
SearchCellConfig.checkOutConfig("working_dir", "cellName")
where:
- working_dir is the temporary directory to which you want to check out the cell level configuration file. This directory must exist on the server where you are running the wsadmin client. Use forward slashes to separate directories in the file path, even if you are using the Microsoft Windows operating system.
AIX, Linux, and IBM i only: The directory must grant write permissions or the command will not run successfully.
- cellName is the name of the cell that the Search node belongs to. This argument is required. It is also case-sensitive, so type it with care. If you do not know the cell name, you can determine it by typing the following command while in the wsadmin command processor:
print AdminControl.getCell()
For example:
SearchCellConfig.checkOutConfig("c:/search_temp", "SearchCell01")
- To list the dictionaries currently enabled for use with Search, run the following command:
SearchCellConfig.listDictionaries()
Set the default dictionary
Use administrative commands to set the default dictionary used for Search query strings.
To edit configuration files, use the wsadmin client.
You use the setDefaultDictionary command to set the default dictionary used for Search queries. At indexing time, when content is analyzed, an attempt is made to guess which of the enabled IBM LanguageWare dictionaries should be used when applying the text analysis process. If the attempt is unsuccessful or if the language guessed does not have a corresponding dictionary enabled, the default dictionary is used.
The default dictionary is also used at search time. Language guessing is not used at search time to determine which dictionary is used for text analysis; instead, the language is specified as part of the HTTP request. If there is a problem loading the dictionary corresponding to the specified language, or if there is no corresponding dictionary enabled, then the default dictionary is used.
To specify a default dictionary for use with Search, complete the following steps.
- Start the wsadmin client from the WAS_HOME/profiles/Dmgr01/bin directory. You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment has initialized, use the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored. When the command runs successfully, the following message displays: Search Administration initialized
- Check out the Search cell-level configuration file, search-config.xml:
SearchCellConfig.checkOutConfig("working_dir", "cellName")
where:
- working_dir is the temporary directory to which you want to check out the cell level configuration file. This directory must exist on the server where you are running the wsadmin client. Use forward slashes to separate directories in the file path, even if you are using the Microsoft Windows operating system.
AIX, Linux, and IBM i only: The directory must grant write permissions or the command will not run successfully.
- cellName is the name of the cell that the Search node belongs to. This argument is required. It is also case-sensitive, so type it with care. To determine the cell name, run the following wsadmin command:
print AdminControl.getCell()
For example:
SearchCellConfig.checkOutConfig("c:/search_temp", "SearchCell01")- To set the default dictionary:
- SearchCellConfig.setDefaultDictionary(String languageCode)
Configures the default LanguageWare dictionary used by the Search application. The default dictionary must be one of the enabled dictionaries.
This command takes a single argument:
- languageCode is the language code for the dictionary to set as the default.
This language code typically comprises two letters conforming to the ISO standard 639-1:2002 that identifies the primary language of the dictionary. However, there are some codes that additionally define a country or variant, in which case these constituent parts are separated by an underscore.
For example, Portuguese has two variants, one for Portugal (pt_PT) and one for Brazil (pt_BR). When using a code that also specifies a country, ensure that you use an underscore to separate the language code and the country code rather than a hyphen; otherwise an error will be generated.
A matching dictionary must exist in the list of configured dictionaries for the language that you specify as a parameter.
For example:
SearchCellConfig.setDefaultDictionary("fr")- Check in the updated search-config.xml configuration file using the following wsadmin client command:
SearchCellConfig.checkInConfig()
- To exit the wsadmin client, type exit at the prompt.
- Stop the server or servers hosting the Search application, delete the index, and then restart the Search servers. The next time the scheduled task runs, it recreates the index.
Disable dictionaries
If your organization no longer operates in specific geographies, you can streamline the operation of the Search application by disabling any dictionaries that are no longer needed.
To edit configuration files, use the wsadmin client.
To remove a dictionary from the list of enabled dictionaries, complete the following steps.
- Start the wsadmin client from the WAS_HOME/profiles/Dmgr01/bin directory. You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment has initialized, use the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored. When the command runs successfully, the following message displays: Search Administration initialized
- Check out the Search cell-level configuration file, search-config.xml:
SearchCellConfig.checkOutConfig("working_dir", "cellName")
where:
- working_dir is the temporary directory to which you want to check out the cell level configuration file. This directory must exist on the server where you are running the wsadmin client. Use forward slashes to separate directories in the file path, even if you are using the Microsoft Windows operating system.
AIX, Linux, and IBM i only: The directory must grant write permissions or the command will not run successfully.
- cellName is the name of the cell that the Search node belongs to. This argument is required. It is also case-sensitive, so type it with care. To determine the cell name, run the following wsadmin command:
print AdminControl.getCell()
For example:
SearchCellConfig.checkOutConfig("c:/search_temp", "SearchCell01")- To disable a dictionary:
- SearchCellConfig.disableDictionary(String languageCode)
Disables the specified LanguageWare dictionary.
This command accepts one argument:
- languageCode. The language code for the dictionary to disable. This argument is a string value.
The language code typically comprises two letters conforming to the ISO standard 639-1:2002 that identifies the primary language of the dictionary. However, there are some codes that additionally define a country or variant, in which case these constituent parts are separated by an underscore.
For example, Portuguese has two variants, one for Portugal (pt_PT) and one for Brazil (pt_BR).
When using a code that also specifies a country, ensure that you use an underscore to separate the language code and the country code rather than a hyphen; otherwise an error will be generated.
For example:
SearchCellConfig.disableDictionary("fr")- Check in the updated search-config.xml configuration file using the following wsadmin client command:
SearchCellConfig.checkInConfig()
- To exit the wsadmin client, type exit at the prompt.
- Stop the server or servers hosting the Search application, delete the index, and then restart the Search servers. The next time the scheduled task runs, it recreates the index.
Search language dictionaries
The Search application provides a number of language dictionaries. The following table lists the language dictionaries that are currently provided.
Table 41. Search language dictionaries
- Arabic: ar-XX-Lex-7004.dic (ar)
- Chinese: zh-XX-Lex-7002.dic (zh)
- Czech: cs-CZ-LLex-7002.dic (cs)
- Danish: da-DK-LLex-7001.dic (da)
- Dutch: nl-NL-Reform-LLex-7003.dic (nl)
- English: en-XX-LLex-7011.dic (en)
- Finnish: fi-FI-Lex-5312.dic (fi)
- French: fr-XX-LLex-7003.dic (fr)
- German: de-XX-LLex-7003.dic (de)
- Greek: el-GR-LLex-7000.dic (el)
- Italian: it-IT-LLex-7003.dic (it)
- Japanese: ja-JP-Lex-7003.dic (ja)
- Korean: ko-KR-Lex-7001.dic (ko)
- Norwegian (Bokmal): nb-NO-LLex-7000.dic (nb)
- Polish: pl-PL-LLex-7002.dic (pl)
- Portuguese (Brazilian): pt-BR-LLex-7002.dic (pt_BR)
- Portuguese (Portugal): pt-PT-LLex-7002.dic (pt_PT)
- Russian: ru-RU-LLex-7001.dic (ru)
- Spanish (Spanish): es-ES-LLex-7001.dic (es)
- Swedish: sv-SE-LLex-7000.dic (sv)
In addition, the mul-XX-LangID-5311.dic dictionary is used for language guessing. This dictionary is not configurable.
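For example, to enable several of the dictionaries listed above in one wsadmin session, you might run a sequence like the following sketch. The working directory and cell name are placeholders, and as with the procedures above, you still need to restart the Search servers and rebuild the index for the change to take effect.
SearchCellConfig.checkOutConfig("/tmp/search_temp", "SearchCell01")     # placeholder working dir and cell name
SearchCellConfig.enableDictionary("de", "${SEARCH_DICTIONARY_DIR}")     # German
SearchCellConfig.enableDictionary("fr", "${SEARCH_DICTIONARY_DIR}")     # French
SearchCellConfig.enableDictionary("pt_BR", "${SEARCH_DICTIONARY_DIR}")  # Brazilian Portuguese
SearchCellConfig.checkInConfig()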
Change the location of the Search index
By default, the Search index is stored in the search/index subdirectory of the Connections data directory defined at install time, for example, on Linux, /opt/IBM/Connections/data/local/search/index.
This location can be changed by editing the IBM WebSphere Application Server variable, SEARCH_INDEX_DIR.
Each node running the Search application requires its own dedicated index on the file system. Using a Search index on a network share is not supported. Highly-available network storage, such as a storage area network (SAN), can be used but it must be configured to be locally mounted to each node running the Search application.
Changing the value of the SEARCH_INDEX_DIR variable causes the next indexing task that fires to index all content from the beginning, so that the task creates a clean index. This operation might take some time to complete.
To change the location of the Search index, complete the following steps.
- Launch the WAS admin console.
- Select Environment > WebSphere variables.
- Select the SEARCH_INDEX_DIR environment variable from the list of defined variables. Depending on your deployment choices, there might be more than one SEARCH_INDEX_DIR variable defined. It is recommended that you have consistent locations for the Search index directory across the nodes in your deployment.
- Change all the SEARCH_INDEX_DIR variables by selecting each variable, entering a new location for the variable in the Value field, and then clicking OK.
- Save your changes to the configuration.
- Restart the Search server or servers for your changes to take effect.
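If you prefer to script the change rather than use the console, the following wsadmin Jython sketch shows one possible approach using the generic AdminConfig API; it is an illustration under assumptions, not a documented Connections procedure. It updates every SEARCH_INDEX_DIR variable to the same new location, which must already exist on each node, and you still need to restart the Search servers afterward.
newLocation = "/data/search/index"   # placeholder for the new index location
for entry in AdminConfig.list("VariableSubstitutionEntry").splitlines():
    if AdminConfig.showAttribute(entry, "symbolicName") == "SEARCH_INDEX_DIR":
        AdminConfig.modify(entry, [["value", newLocation]])   # update the variable at this scope
AdminConfig.save()   # persist the change to the master configuration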
Configure J2C authentication for Search
When you install Connections, the installation wizard automatically configures authentication and authorization for each application. Crawling content for indexing occurs over an internal REST API interface, and the credentials used are retrieved from the connectionsAdmin J2C authentication alias that is configured during installation. The user ID from the credentials is also added to the search-admin J2EE role for each application. To further secure the Connections environment, you can override this authentication alias on an application-by-application basis.
Search functionality depends on the ability of Connections to index the data that is stored by each application. The JAAS/J2C alias that is configured at installation time allows automatic authentication for the user account that you assigned to the alias. This user account is also mapped to the Search administrator role (search-admin) for the application that you are configuring.
To override the connectionsAdmin J2C authentication alias for a specific Connections application, you need to define a J2C authentication alias for that application using the IBM WebSphere Application Server Integrated Solutions Console.
For information about how to define a search J2C authentication alias for an application, see Switching to unique administrator IDs for system level communication.
Search and globalization
You can configure globalization settings to enable users to perform accent-insensitive searches, ignore punctuation in search terms, and perform a one-to-two mapping in search terms. Search globalization settings are disabled by default.
For non-English deployments, ensure that you first enable the relevant language dictionary for your geography. This procedure is a mandatory post-installation task. Without multiple dictionary support for languages other than English, Search will only return results where there is an exact match between the search term and content term. Enabling multiple dictionaries ensures better quality search results when your user base is multilingual. By default, only the English language dictionary is enabled during installation.
For more information about enabling multilingual support, see Configure dictionaries for Search.
When your organization spans multiple geographies and multiple languages, you might find it useful to enable the globalization options provided by the Search application. Note that when these options are enabled, Search requires more terms to be indexed, resulting in a larger Search index that contains the extra globalized terms. Because more terms need to be indexed, the indexing task also takes longer to complete.
Search provides the following globalization options:
- Accent-insensitive search
- Allows users to search for equivalent non-accented search terms when using a search term that contains an accent.
For example, the default behavior of the Search application is to index the term ált as a single term. However, when accent-insensitive search is enabled, the term ált is stored in the index as both ált and alt.
- Ignore Punctuation
- Allows users to find search terms that contain punctuation by entering the equivalent terms without the punctuation.
For example, the default behavior of the Search application is to index I.B.M. as a single term. However, when the ignore punctuation setting is enabled, the term I.B.M. is stored in the index as I.B.M. and IBM.
- 1 to 2 matching
- Allows users to search for equivalent search terms in which a single character in the term is matched to its two-character equivalent.
For example, the default behavior of the Search application is to index Müller as a single term. However, when the 1 to 2 matching setting is enabled, the term Müller is stored in the index as Müller and Mueller.
Enabling the globalization options results in a larger index. When you enable these options, you must delete the current index to trigger the creation of a new one.
For more information, see Deleting the index.
Enabling these settings for Search also affects the relevance of the results returned when a search is performed. The default behavior is to return search results for a term based on an exact match of the term. However, if these globalization settings are enabled, more search results are returned to the user.
For example, performing an accent-insensitive search for the term curé might return results for cure and curé. This type of search can lead to less relevant search results being returned to the user. The English translation of the French term cure is treatment, while curé is a priest.
For information about how to configure the globalization properties for Search, see Common configuration properties.
Verify Search
You can perform a number of steps to verify that Search index creation has completed successfully and the Search application is working as expected.
When you install Connections, Search indexing is automatically configured to run according to a default schedule. You can confirm that the initial index creation has completed successfully by checking for the presence of specific files in the index directory. You can also verify that Search is crawling on a regular basis and that incremental indexing is taking place as expected by checking for specific log messages in the SystemOut.log file.
Verify Search index creation
You can confirm that the initial Search index creation has completed successfully by checking for the presence of specific files in the index directory.
To verify that the initial index creation is complete: Check that the INDEX.READY and CRAWLING_VERSION files are present in the index directory.
By default, the Search index is stored in the search/index subdirectory of the Connections data directory defined at install time, for example, on Linux, /opt/IBM/Connections/data/local/search/index.
Initial index creation is complete when both files are present in the index directory. Note that this means that the Search index is fully built; the social analytics index is not yet built.
If your deployment has a single Search node only: No further action is required.
If your deployment has multiple Search nodes: Verify that the Search index was successfully copied to the remaining nodes.
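The check can also be scripted. The following Python sketch assumes the default Linux index location; adjust the path to match your SEARCH_INDEX_DIR value and run it on each Search node.
import os

indexDir = "/opt/IBM/Connections/data/local/search/index"   # value of SEARCH_INDEX_DIR
markers = ["INDEX.READY", "CRAWLING_VERSION"]

missing = []
for marker in markers:
    if not os.path.exists(os.path.join(indexDir, marker)):
        missing.append(marker)

if missing:
    print "Initial index creation is not complete; missing:", ", ".join(missing)
else:
    print "Initial index creation is complete on this node."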
Verify that Search is crawling regularly
Crawling is the process of accessing and reading content from each application to create entries for indexing. You can verify that the Search application is crawling on a regular basis by checking for specific log messages in the SystemOut.log file.
To verify that Search is crawling on a regular basis, open the SystemOut.log file that corresponds to the application server instance on which Search is running and look for the following log messages:
CLFRW0297I: Search is starting to crawl the {0} component
CLFRW0294I: Search has finished crawling the {0} component
where {0} is the name of a Connections application. Crawling refers to the persistence of the seedlists to disk. When Search is crawling as expected, these messages are available for each of the Connections applications that you installed and configured as part of the scheduled crawling task. By default, this task is scheduled to run every 15 minutes and it includes all the Connections applications that you installed.
You should also see the following log messages in the SystemOut.log file:
CLFRW0042I: Connections indexing task {0} fired event TaskNotificationInfo.FIRING
CLFRW0042I: Connections indexing task {0} fired event TaskNotificationInfo.FIRED
CLFRW0042I: Connections indexing task {0} fired event TaskNotificationInfo.SCHEDULED
where {0} is the name of the scheduled task, for example, 15min-search-indexing-task. These informational messages refer to the current status of the scheduled task. The FIRING message is printed before the messages CLFRW0297I and CLFRW0294I. The FIRED and SCHEDULED messages are printed after the messages CLFRW0297I and CLFRW0294I.
Indexing in the CLFRW0588I and CLFRW0576I messages refers to the iteration through the persisted seedlists:
CLFRW0588I: Search is starting to index the {0} component.
CLFRW0576I: Search has finished indexing the {0} component.
For example:
[7/11/12 15:46:00:674 IST] 0000004c IndexingNotif I CLFRW0042I: Connections scheduled task 15min-search-indexing-task fired event TaskNotificationInfo.FIRING
[7/11/12 15:46:00:755 IST] 0000004c WorkScheduler I com.ibm.connections.search.process.WorkScheduler scheduleCrawling CLFRW0297I: Search is starting to crawl the blogs component.
[7/11/12 15:46:00:777 IST] 0000004c WorkScheduler I com.ibm.connections.search.process.WorkScheduler scheduleCrawling CLFRW0297I: Search is starting to crawl the forums component.
[7/11/12 15:46:00:795 IST] 0000004c WorkScheduler I com.ibm.connections.search.process.WorkScheduler scheduleCrawling CLFRW0297I: Search is starting to crawl the wikis component.
[7/11/12 15:46:01:676 IST] 00000052 CrawlingWorkL I com.ibm.connections.search.process.work.CrawlingWorkListener workCompleted CLFRW0294I: Search has finished crawling the blogs component.
[7/11/12 15:46:01:686 IST] 0000004c WorkScheduler I com.ibm.connections.search.process.WorkScheduler scheduleCrawling CLFRW0297I: Search is starting to crawl the communities component.
[7/11/12 15:46:01:728 IST] 0000004c WorkScheduler I com.ibm.connections.search.process.WorkScheduler scheduleIndexing CLFRW0588I: Search is starting to index the blogs component.
[7/11/12 15:46:01:764 IST] 00000058 IndexingWorkL I com.ibm.connections.search.process.work.IndexingWorkListener workCompleted CLFRW0576I: Search has finished indexing the blogs component.
[7/11/12 15:46:02:276 IST] 00000053 CrawlingWorkL I com.ibm.connections.search.process.work.CrawlingWorkListener workCompleted CLFRW0294I: Search has finished crawling the forums component.
[7/11/12 15:46:02:299 IST] 0000004c WorkScheduler I com.ibm.connections.search.process.WorkScheduler scheduleCrawling CLFRW0297I: Search is starting to crawl the files component.
[7/11/12 15:46:02:325 IST] 00000052 CrawlingWorkL I com.ibm.connections.search.process.work.CrawlingWorkListener workCompleted CLFRW0294I: Search has finished crawling the wikis component.
[7/11/12 15:46:02:327 IST] 0000004c WorkScheduler I com.ibm.connections.search.process.WorkScheduler scheduleIndexing CLFRW0588I: Search is starting to index the forums component.
[7/11/12 15:46:02:355 IST] 00000058 IndexingWorkL I com.ibm.connections.search.process.work.IndexingWorkListener workCompleted CLFRW0576I: Search has finished indexing the forums component.
[7/11/12 15:46:02:375 IST] 0000004c WorkScheduler I com.ibm.connections.search.process.WorkScheduler scheduleCrawling CLFRW0297I: Search is starting to crawl the dogear component.
[7/11/12 15:46:02:414 IST] 0000004c WorkScheduler I com.ibm.connections.search.process.WorkScheduler scheduleIndexing CLFRW0588I: Search is starting to index the wikis component.
[7/11/12 15:46:02:435 IST] 00000058 IndexingWorkL I com.ibm.connections.search.process.work.IndexingWorkListener workCompleted CLFRW0576I: Search has finished indexing the wikis component.
[7/11/12 15:46:02:811 IST] 00000053 CrawlingWorkL I com.ibm.connections.search.process.work.CrawlingWorkListener workCompleted CLFRW0294I: Search has finished crawling the communities component.
[7/11/12 15:46:02:824 IST] 0000004c WorkScheduler I com.ibm.connections.search.process.WorkScheduler scheduleCrawling CLFRW0297I: Search is starting to crawl the profiles component.
[7/11/12 15:46:02:835 IST] 00000052 CrawlingWorkL I com.ibm.connections.search.process.work.CrawlingWorkListener workCompleted CLFRW0294I: Search has finished crawling the files component.
[7/11/12 15:46:02:852 IST] 0000004c WorkScheduler I com.ibm.connections.search.process.WorkScheduler scheduleIndexing CLFRW0588I: Search is starting to index the communities component.
[7/11/12 15:46:02:876 IST] 00000058 IndexingWorkL I com.ibm.connections.search.process.work.IndexingWorkListener workCompleted CLFRW0576I: Search has finished indexing the communities component.
[7/11/12 15:46:02:901 IST] 0000004c WorkScheduler I com.ibm.connections.search.process.WorkScheduler scheduleCrawling CLFRW0297I: Search is starting to crawl the activities component.
[7/11/12 15:46:02:914 IST] 0000004c WorkScheduler I com.ibm.connections.search.process.WorkScheduler scheduleIndexing CLFRW0588I: Search is starting to index the files component.
[7/11/12 15:46:02:936 IST] 00000058 IndexingWorkL I com.ibm.connections.search.process.work.IndexingWorkListener workCompleted CLFRW0576I: Search has finished indexing the files component.
[7/11/12 15:46:03:334 IST] 00000053 CrawlingWorkL I com.ibm.connections.search.process.work.CrawlingWorkListener workCompleted CLFRW0294I: Search has finished crawling the dogear component.
[7/11/12 15:46:03:342 IST] 0000004c WorkScheduler I com.ibm.connections.search.process.WorkScheduler scheduleCrawling CLFRW0297I: Search is starting to crawl the status_updates component.
[7/11/12 15:46:03:363 IST] 0000004c WorkScheduler I com.ibm.connections.search.process.WorkScheduler scheduleIndexing CLFRW0588I: Search is starting to index the dogear component.
[7/11/12 15:46:03:375 IST] 00000058 IndexingWorkL I com.ibm.connections.search.process.work.IndexingWorkListener workCompleted CLFRW0576I: Search has finished indexing the dogear component.
[7/11/12 15:46:03:483 IST] 00000052 CrawlingWorkL I com.ibm.connections.search.process.work.CrawlingWorkListener workCompleted CLFRW0294I: Search has finished crawling the profiles component.
[7/11/12 15:46:03:493 IST] 0000004c WorkScheduler I com.ibm.connections.search.process.WorkScheduler scheduleCrawling CLFRW0297I: Search is starting to crawl the calendar component.
[7/11/12 15:46:03:523 IST] 0000004c WorkScheduler I com.ibm.connections.search.process.WorkScheduler scheduleIndexing CLFRW0588I: Search is starting to index the profiles component.
[7/11/12 15:46:03:555 IST] 00000058 IndexingWorkL I com.ibm.connections.search.process.work.IndexingWorkListener workCompleted CLFRW0576I: Search has finished indexing the profiles component.
[7/11/12 15:46:04:444 IST] 00000053 CrawlingWorkL I com.ibm.connections.search.process.work.CrawlingWorkListener workCompleted CLFRW0294I: Search has finished crawling the activities component.
[7/11/12 15:46:04:478 IST] 0000004c WorkScheduler I com.ibm.connections.search.process.WorkScheduler scheduleIndexing CLFRW0588I: Search is starting to index the activities component.
[7/11/12 15:46:04:490 IST] 00000058 IndexingWorkL I com.ibm.connections.search.process.work.IndexingWorkListener workCompleted CLFRW0576I: Search has finished indexing the activities component.
[7/11/12 15:46:04:864 IST] 00000053 CrawlingWorkL I com.ibm.connections.search.process.work.CrawlingWorkListener workCompleted CLFRW0294I: Search has finished crawling the calendar component.
[7/11/12 15:46:04:889 IST] 0000004c WorkScheduler I com.ibm.connections.search.process.WorkScheduler scheduleIndexing CLFRW0588I: Search is starting to index the calendar component.
[7/11/12 15:46:04:901 IST] 00000058 IndexingWorkL I com.ibm.connections.search.process.work.IndexingWorkListener workCompleted CLFRW0576I: Search has finished indexing the calendar component.
[7/11/12 15:46:04:909 IST] 00000052 CrawlingWorkL I com.ibm.connections.search.process.work.CrawlingWorkListener workCompleted CLFRW0294I: Search has finished crawling the status_updates component.
[7/11/12 15:46:04:947 IST] 0000004c WorkScheduler I com.ibm.connections.search.process.WorkScheduler scheduleIndexing CLFRW0588I: Search is starting to index the status_updates component.
[7/11/12 15:46:04:958 IST] 00000058 IndexingWorkL I com.ibm.connections.search.process.work.IndexingWorkListener workCompleted CLFRW0576I: Search has finished indexing the status_updates component.
[7/11/12 15:46:05:009 IST] 0000004c IndexingNotif I CLFRW0042I: Connections scheduled task 15min-search-indexing-task fired event TaskNotificationInfo.FIRED
[7/11/12 15:46:05:014 IST] 0000004c IndexingNotif I CLFRW0042I: Connections scheduled task 15min-search-indexing-task fired event TaskNotificationInfo.SCHEDULED
In a deployment where there is a single Search node: No further action is required.
In a deployment with multiple Search nodes: Note that only one node in the cluster performs the crawling, although all of the nodes build the index incrementally based on that node's crawling. Because the crawling is performed by a single Search node, you see the crawling log messages only on that node in the cluster.
Verify that the index is being built incrementally
You can verify that incremental indexing is taking place as expected by checking for specific log messages in the SystemOut.log file.
To verify that Search is building the index incrementally, open the SystemOut.log file that corresponds to the application server instance on which Search is running and look for the following log messages:
CLFRW0285I: Search is starting to build the index for {0}
CLFRW0282I: Search has finished building the index for {0}
where {0} is the name of a Connections application. When the index is being built as expected, these messages display for each of the Connections applications that you installed and configured as part of the scheduled crawling task. By default, this task is scheduled to run every 15 minutes and includes all the Connections applications that you installed.
For example:
[7/11/12 15:46:01:838 IST] 00000020 IndexBuilderQ I com.ibm.lotus.connections.search.admin.index.impl.IndexBuilderQueue build CLFRW0285I: Search is starting to build the index for blogs.
[7/11/12 15:46:01:948 IST] 00000020 IndexBuilderQ I com.ibm.lotus.connections.search.admin.index.impl.IndexBuilderQueue build CLFRW0282I: Search has finished building the index for blogs.
[7/11/12 15:46:02:382 IST] 00000020 IndexBuilderQ I com.ibm.lotus.connections.search.admin.index.impl.IndexBuilderQueue build CLFRW0285I: Search is starting to build the index for forums.
[7/11/12 15:46:02:511 IST] 00000020 IndexBuilderQ I com.ibm.lotus.connections.search.admin.index.impl.IndexBuilderQueue build CLFRW0282I: Search has finished building the index for forums.
[7/11/12 15:46:02:518 IST] 0000005a IndexBuilderQ I com.ibm.lotus.connections.search.admin.index.impl.IndexBuilderQueue build CLFRW0285I: Search is starting to build the index for wikis.
[7/11/12 15:46:02:622 IST] 0000005a IndexBuilderQ I com.ibm.lotus.connections.search.admin.index.impl.IndexBuilderQueue build CLFRW0282I: Search has finished building the index for wikis.
[7/11/12 15:46:02:899 IST] 00000020 IndexBuilderQ I com.ibm.lotus.connections.search.admin.index.impl.IndexBuilderQueue build CLFRW0285I: Search is starting to build the index for communities.
[7/11/12 15:46:03:022 IST] 00000020 IndexBuilderQ I com.ibm.lotus.connections.search.admin.index.impl.IndexBuilderQueue build CLFRW0282I: Search has finished building the index for communities.
[7/11/12 15:46:03:028 IST] 0000005a IndexBuilderQ I com.ibm.lotus.connections.search.admin.index.impl.IndexBuilderQueue build CLFRW0285I: Search is starting to build the index for files.
[7/11/12 15:46:03:156 IST] 0000005a IndexBuilderQ I com.ibm.lotus.connections.search.admin.index.impl.IndexBuilderQueue build CLFRW0282I: Search has finished building the index for files.
[7/11/12 15:46:03:396 IST] 00000020 IndexBuilderQ I com.ibm.lotus.connections.search.admin.index.impl.IndexBuilderQueue build CLFRW0285I: Search is starting to build the index for dogear.
[7/11/12 15:46:03:578 IST] 00000020 IndexBuilderQ I com.ibm.lotus.connections.search.admin.index.impl.IndexBuilderQueue build CLFRW0282I: Search has finished building the index for dogear.
[7/11/12 15:46:03:590 IST] 0000005a IndexBuilderQ I com.ibm.lotus.connections.search.admin.index.impl.IndexBuilderQueue build CLFRW0285I: Search is starting to build the index for profiles.
[7/11/12 15:46:03:972 IST] 0000005a IndexBuilderQ I com.ibm.lotus.connections.search.admin.index.impl.IndexBuilderQueue build CLFRW0282I: Search has finished building the index for profiles.
[7/11/12 15:46:04:512 IST] 00000020 IndexBuilderQ I com.ibm.lotus.connections.search.admin.index.impl.IndexBuilderQueue build CLFRW0285I: Search is starting to build the index for activities.
[7/11/12 15:46:04:693 IST] 00000020 IndexBuilderQ I com.ibm.lotus.connections.search.admin.index.impl.IndexBuilderQueue build CLFRW0282I: Search has finished building the index for activities.
[7/11/12 15:46:04:922 IST] 0000005a IndexBuilderQ I com.ibm.lotus.connections.search.admin.index.impl.IndexBuilderQueue build CLFRW0285I: Search is starting to build the index for calendar.
[7/11/12 15:46:05:051 IST] 0000005a IndexBuilderQ I com.ibm.lotus.connections.search.admin.index.impl.IndexBuilderQueue build CLFRW0282I: Search has finished building the index for calendar.
[7/11/12 15:46:05:060 IST] 00000020 IndexBuilderQ I com.ibm.lotus.connections.search.admin.index.impl.IndexBuilderQueue build CLFRW0285I: Search is starting to build the index for status_updates.
[7/11/12 15:46:05:225 IST] 00000020 IndexBuilderQ I com.ibm.lotus.connections.search.admin.index.impl.IndexBuilderQueue build CLFRW0282I: Search has finished building the index for status_updates.
[7/11/12 15:46:05:229 IST] 00000020 IndexBuilder I com.ibm.connections.search.process.incremental.IndexBuilder postProcess CLFRW0591I: Search is starting BookmarksPostProcessor post-processing of the index.
[7/11/12 15:46:05:265 IST] 00000020 BookmarkRollu I com.ibm.lotus.connections.search.index.impl.BookmarkRollup logBookmarkRollupProgressMessage CLFRW0871I: Bookmark rollup is in progress. 100 percent complete. Bookmark Rollup has completed rollup for 0 URLs.
[7/11/12 15:46:05:351 IST] 00000020 IndexBuilder I com.ibm.connections.search.process.incremental.IndexBuilder postProcess CLFRW0580I: Search has finished BookmarksPostProcessor post-processing of the index.
[7/11/12 15:46:05:353 IST] 00000020 IndexBuilder I com.ibm.connections.search.process.incremental.IndexBuilder postProcess CLFRW0591I: Search is starting FilesPostProcessor post-processing of the index.
[7/11/12 15:46:05:354 IST] 00000020 FilesPostProc I com.ibm.lotus.connections.search.index.impl.FilesPostProcessor index CLFRW0779I: There were no file documents that needed to be processed.
[7/11/12 15:46:05:355 IST] 00000020 IndexBuilder I com.ibm.connections.search.process.incremental.IndexBuilder postProcess CLFRW0580I: Search has finished FilesPostProcessor post-processing of the index.
[7/11/12 15:46:05:356 IST] 00000020 IndexBuilder I com.ibm.connections.search.process.incremental.IndexBuilder postProcess CLFRW0591I: Search is starting StatusUpdatesPostProcessor post-processing of the index.
[7/11/12 15:46:05:356 IST] 00000020 IndexBuilder I com.ibm.connections.search.process.incremental.IndexBuilder postProcess CLFRW0580I: Search has finished StatusUpdatesPostProcessor post-processing of the index.
After index building for a task has finished, post-processing takes place. Each post processor is marked at the start and end by the CLFRW0591I and CLFRW0580I messages respectively:
CLFRW0591I: Search is starting {0} post-processing of the index.
CLFRW0580I: Search has finished {0} post-processing of the index.
In a deployment where there is a single Search node only: No further action is required.
In a deployment with multiple Search nodes: Check that you can see the log messages listed on all the Search nodes in the cluster. No further action is required.
Verify file content extraction
Verify that the Search application is extracting file content on a regular basis by checking entries in the SystemOut.log file.
During index building, files are extracted to the directory defined by the IBM WebSphere Application Server variable EXTRACTED_FILE_STORE. The files are not currently used after the index is built, although they are left in place for potential use by future features.
To verify that Search is extracting file content on a regular basis, open the SystemOut.log file that corresponds to the application server instance on which Search is running and look for the following log messages:
IndexBuilderQ > com.ibm.lotus.connections.search.admin.index.impl.IndexBuilderQueue startDocumentIndexingService ENTRY
IndexBuilderQ < com.ibm.lotus.connections.search.admin.index.impl.IndexBuilderQueue startDocumentIndexingService RETURN
DocumentIndex I com.ibm.lotus.connections.search.service.files.impl.DocumentIndexingServiceImpl isEnvironmentValid - FILE_CONTENT_CONVERSION: /opt/IBM/LotusConnections1/search/search/search/dcs/oiexport/exporter
DocumentIndex I com.ibm.lotus.connections.search.service.files.impl.DocumentIndexingServiceImpl isEnvironmentValid: true
When a file is being extracted, the following messages are seen in the logs:
SystemOut O {fallbackformat=FI_UTF8, pstylenamesflag=no, preferoitrendering=true, gridrows=5000, graphictype=jpeg, simplestylenames=no,
By default, this task is scheduled to run every 20 minutes and it includes all the files in the Wikis and Files applications. You should also see the following log messages in the SystemOut.log file for the default 20-minute file content indexing task:
IndexingNotif I CLFRW0042I: Connections scheduled task 20min-file-retrieval-task fired event TaskNotificationInfo.FIRING IndexingNotif I CLFRW0042I: Connections scheduled task 20min-file-retrieval-task fired event TaskNotificationInfo.FIRED IndexingNotif I CLFRW0042I: Connections scheduled task 20min-file-retrieval-task fired event TaskNotificationInfo.SCHEDULED
Configure verbose logging
Use SearchCellConfig commands to configure verbose logging for the Search application.
To use SearchCellConfig administrative commands, use the wsadmin client.
Verbose logging enables you to record detailed status information related to crawling and indexing in the SystemOut.log file.
This information can help you to monitor the progress of the Search crawling and indexing operations. Verbose logging is enabled by default.
To configure verbose logging....
- Start the wsadmin client from the following directory: WAS_HOME/profiles/Dmgr01/bin. You must start the client from this directory, or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment has initialized, use the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored. When the command is run successfully, the following message displays:
Search Administration initialized
- Check out the Search cell-level configuration file, search-config.xml, using the following command:
SearchCellConfig.checkOutConfig("working_dir", "cellName")
where:
- working_dir is the temporary directory to which you want to check out the cell level configuration file. This directory must exist on the server where you are running the wsadmin client. Use forward slashes to separate directories in the file path, even if you are using the Microsoft Windows operating system.
AIX, Linux, and IBM i only: The directory must grant write permissions or the command will not run successfully.
- cellName is the name of the cell that the Search node belongs to. This argument is required. It is also case-sensitive, so type it with care. To determine the cell name, run the following wsadmin command:
print AdminControl.getCell()
For example:
SearchCellConfig.checkOutConfig("c:/search_temp", "SearchCell01")
- Use the following commands:
- SearchCellConfig.enableVerboseLogging()
Enables more detailed status reporting during crawling and indexing in the form of more verbose logging to the SystemOut.log file. Verbose logging is automatically enabled when Connections is installed.
This command does not take any parameters.
You can use the following commands to tune the frequency with which status information is logged to the SystemOut.log file during different stages of the crawling and indexing process:
- SearchCellConfig.setVerboseInitialLoggingInterval(int interval)
- SearchCellConfig.setVerboseSeedlistRequestLoggingInterval(int interval)
- SearchCellConfig.setVerboseIncrementalCrawlingLoggingInterval(int interval)
- SearchCellConfig.setVerboseIncrementalBuildingLoggingInterval(int interval)
For more information about each of these commands, refer to the command descriptions that follow.
- SearchCellConfig.disableVerboseLogging()
Disables verbose logging.
This command does not take any parameters.
Verbose logging fills the SystemOut.log file with detailed output that can occupy an increasing amount of disk space, unless you have configured your deployment to retain only a limited number of the most recent log files. A high turnover of logs might be a problem when you are trying to track down the cause of an issue if the log file that you are interested in has been deleted.
For this reason, you might want to disable verbose logging. The performance impact of having verbose logging enabled is negligible.
- SearchCellConfig.setVerboseInitialLoggingInterval(int initialInterval)
Controls the frequency with which initial index creation progress is logged to the SystemOut.log file.
This command takes a single parameter:
- initialInterval
- A positive integer that corresponds to a number of seedlist entries. A seedlist entry is an indexing instruction that specifies an action, such as the creation, deletion, or update of a specified document in the Search index.
For example, if an interval of 500 is specified, then for every 500 entries processed, the number of seedlist entries indexed so far for an application by the current indexing job is logged.
The initialInterval parameter is set to 250 by default.
You can find additional logging information about initial index creation in the SystemOut.log file by searching for occurrences of the CLFRW0581I logging message.
For example:
CLFRW0581I: Search is continuing to build the index for activities: 3500 seedlist entries indexed.
For example:
SearchCellConfig.setVerboseInitialLoggingInterval(500)
- SearchCellConfig.setVerboseSeedlistRequestLoggingInterval(int seedlistRequestInterval)
Controls the frequency with which seedlist crawling progress is logged to the SystemOut.log file.
This command takes a single parameter:
- seedlistRequestInterval
- A positive integer that corresponds to a number of seedlist page requests. A seedlist crawl is a sequence of seedlist page requests, which are HTTP GET operations that fetch seedlist pages. A seedlist page can contain zero or more seedlist entries up to a specified maximum.
For example, if an interval of 1 is specified, then after every seedlist request, the crawling progress of the application being currently crawled is logged. The seedlistRequestInterval parameter is set to 1 by default.
You can find additional logging information about seedlist crawling in the SystemOut.log file by searching for occurrences of the CLFRW0604 logging message.
For example:
CLFRW0604 : Current seedlist state: Finish Date: Thu May 12 10:14:58 IST 2011; Start Date: Thu Jan 01 01:00:00 GMT 1970; Type: 1; Last Modified: Thu Jan 01 01:00:00 GMT 1970; Finished: false; Started: true; ACL Start: 0; Offset: 0;
For example:
SearchCellConfig.setVerboseSeedlistRequestLoggingInterval(1)
- SearchCellConfig.setVerboseIncrementalCrawlingLoggingInterval(int incrementalCrawlingInterval)
Controls the frequency with which seedlist update crawling progress is logged to the SystemOut.log file. An update crawl of an application fetches data that was created, updated, or deleted since the previous crawl of that application began.
This command takes a single parameter:
- incrementalCrawlingInterval
- A positive integer that corresponds to a number of seedlist entries.
For example, if an interval of 100 is specified, then, for every 100 entries that have been crawled, the number of entries that have been crawled for a particular application during the current indexing job is logged. The incrementalCrawlingInterval parameter is set to 100 by default.
You can find additional logging information about incremental crawling in the SystemOut.log file by searching for occurrences of the CLFRW0589I logging message.
For example:
CLFRW0589I: Search is continuing to build the index for profiles: 1,600 seedlist entries indexed.
For example:
SearchCellConfig.setVerboseIncrementalCrawlingLoggingInterval(100)
- SearchCellConfig.setVerboseIncrementalBuildingLoggingInterval(int incrementalBuildingInterval)
Controls the frequency with which update indexing progress is logged to the SystemOut.log file. Update indexing of a Connections application, or set of applications, is an indexing job that updates an index that already has content from all the applications that are to be indexed as part of the current indexing job.
This command takes a single parameter:
- incrementalBuildingInterval
- A positive integer that corresponds to a number of documents.
For example, if an interval of 20 is specified, then for every 20 documents that have been indexed, the number of documents indexed when indexing a particular application during the current indexing job is logged. The incrementalBuildingInterval parameter is set to 100 by default.
You can find additional logging information about update indexing progress in the SystemOut.log file by searching for occurrences of the CLFRW0600I logging message.
For example:
CLFRW0600I: Search is continuing to build the index for blogs: 40 documents indexed.
For example:
SearchCellConfig.setVerboseIncrementalBuildingLoggingInterval(100)
- SearchCellConfig.setVerboseLogging(int initialInterval, int seedlistRequestInterval, int incrementalCrawlingInterval, int incrementalBuildingInterval)
Enables verbose logging with the specified initial interval, seedlist request interval, crawling interval, and incremental building interval.
Running this command has the same net effect as calling the following commands in sequence:
- SearchCellConfig.enableVerboseLogging()
- SearchCellConfig.setVerboseInitialLoggingInterval(initialInterval)
- SearchCellConfig.setVerboseSeedlistRequestLoggingInterval(seedlistRequestInterval)
- SearchCellConfig.setVerboseIncrementalCrawlingLoggingInterval(incrementalCrawlingInterval)
- SearchCellConfig.setVerboseIncrementalBuildingLoggingInterval(incrementalBuildingInterval)
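For example, to enable verbose logging and set all four intervals in a single step using the default values described above (250, 1, 100, and 100), you might use:
SearchCellConfig.setVerboseLogging(250, 1, 100, 100)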
- Check in the updated search-config.xml configuration file using the following wsadmin client command:
SearchCellConfig.checkInConfig()
- To exit the wsadmin client, type exit at the prompt.
- Stop the server or servers hosting the Search application, delete the index, and then restart the Search servers. The next time the scheduled task runs, it recreates the index.
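As an illustration only, the configuration-change part of this procedure for turning verbose logging off might look like the following wsadmin session, reusing the temporary directory and cell name from the earlier example:
execfile("searchAdmin.py")
SearchCellConfig.checkOutConfig("c:/search_temp", "SearchCell01")
SearchCellConfig.disableVerboseLogging()
SearchCellConfig.checkInConfig()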
Verify that index building is not taking place
After disabling or deleting an indexing task, you might want to verify that Search indexing is not taking place.
To verify that index building is not taking place....
- Open the SystemOut.log file that corresponds to the application server instance on which Search is running and look for the following log messages:
CLFRW0234I: All tasks disabled successfully
CLFRW0714I: The following task {0} has been either disabled or deleted. We are skipping the business logic.
where {0} is the name of the task that has been deleted or disabled.
- Verify that the following messages do not display after the indexing tasks have been deleted or disabled:
CLFRW0588I: Search is starting to index the {0} component.
CLFRW0040I: Starting index optimization
CLFRW0483I: SAND indexing has started.
Configure scheduled tasks
The SearchService MBean is used to access a service that provides an administrative interface for adding scheduled task definitions to the Home page database.
To configure scheduled tasks for Search, use the SearchService administration commands to add and delete scheduled task definitions, and to enable or disable indexing task definitions in the Home page database.
You can also use SearchService commands to get a list of the tasks that are scheduled for the Search application, and to list the tasks that are currently running for Search.
Search default scheduled tasks
When you install Connections, a number of tasks are automatically configured for the Search application. The following tasks are scheduled for Search by default:
Table 42. Search scheduled tasks
- 15min-search-indexing-task: This task specifies that all installed Connections applications are to be crawled and indexed every 15 minutes, except for a one-hour period between 01:00 and 02:00. You can update the settings for this default task using the SearchService.addIndexingTask command. For more information, see Adding scheduled tasks for Search.
- 20min-file-retrieval-task: This task sends a JMS message that triggers the downloading of files and the indexing of file content on all Search nodes. The task retrieves ECM files in addition to files from Connections. The task runs every 20 minutes, except for a one-hour period between 01:00 and 02:00. You can update the settings for this default task using the SearchService.addFileContentTask command. For more information, see Adding scheduled tasks for Search.
- nightly-sand-task: This task sends a JMS message that triggers social analytics indexing on all Search nodes. The social analytics indexing task is resource-intensive and consequently should be run at off-peak times. By default, the task runs nightly at 01:00. The social analytics indexers query the index and create utility documents that are used by the social analytics feature to provide recommendations for the Recommendations widgets and to build the graph of connected users that is used by the Do You Know widget and the Who Connects Us widget. You can update the settings for this default task using the SearchService.addSandTask command. For more information, see Adding scheduled tasks for Search.
- nightly-optimize-task: This task sends a JMS message that triggers a Lucene optimize operation of the local indexes on all Search nodes. The task runs nightly at 01:30. You can update the settings for this default task using the SearchService.addOptimizeTask command. For more information, see Adding scheduled tasks for Search.
Add scheduled tasks for Search
Use SearchService administrative commands to add scheduled task definitions for the Search application to the Home page database.
To use SearchService administrative commands, use the wsadmin client.
See Start wsadmin
The SearchService commands are used to access a service that provides an administrative interface for adding scheduled indexing task definitions to the Home page database. The following applications can be indexed: Activities, Blogs, Bookmarks, Communities, ECM files, Files, Forums, Profiles, Wikis, Status Updates, and Community Calendar events.
When defining a scheduled task in the Home page database, you need to specify when the scheduler starts the task. The schedule is defined using Cron format.
For more information about the scheduler, see Scheduling tasks.
It is not possible to specify an end time for an indexing task. All tasks run as long as they need to. The startby interval defines the time period by which a task can fire before it is automatically canceled.
This mechanism ensures that tasks do not queue up for an overly long period before being canceled, and allows for tasks that run for longer than the default indexing schedule, such as initial index creation.
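For example, reading the six fields of a Cron expression as seconds, minutes, hours, day of month, month, and day of week, the schedule "0 0 1 ? * MON-FRI" used in the examples that follow fires a task at 01:00 on weekdays, and the matching startby value "0 10 1 ? * MON-FRI" means that the task is canceled if it has not fired by 01:10.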
To define a scheduled task for the Search application....
- Start the wsadmin client from the following directory: WAS_HOME/profiles/Dmgr01/bin. You must start the client from this directory, or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment has initialized, use the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored. When the command is run successfully, the following message displays:
Search Administration initialized
- Use the following commands to add scheduled task definitions in the Home page database.
- SearchService.addBackupIndexTask(String taskName, String schedule, String startbySchedule)
Defines a new scheduled index backup task.
This command takes the following arguments:
- taskName. The name of the task to be added.
- schedule. The time at which the scheduled task starts. This argument is a string value that must be specified in Cron format.
- startbySchedule. The time given for the task to run before it is automatically canceled. This argument is a string value that must be specified in Cron format.
For example:
SearchService.addBackupIndexTask("WeeklyIndexBackup","0 0 2 ? * SAT","0 10 2 ? * SAT")
When the command runs successfully, 1 is printed to the wsadmin console.
If the command does not run successfully, 0 is printed to the wsadmin console.
- SearchService.addFileContentTask(String taskName, String schedule, String startBy, String applicationNames, failuresOnly)
Creates a scheduled file content retrieval task.
This command takes the following arguments:
- taskName. The name of the scheduled task. This argument is a string value, which must be unique.
- schedule. The time at which the scheduled task starts. This argument is a string value that must be specified in Cron format.
- startBy. The time given to a task to fire before it is automatically canceled. This argument is a string value that must be specified in Cron format.
- applicationNames. The name (or names) of the Connections application to be indexed when the task is triggered. This argument is a string value. To index multiple applications, use a comma-delimited list. The following values are valid:
- ecm_files. Retrieves files from the Enterprise Content Management repository. Only published files are retrieved; draft files are not included.
- files. Retrieves files from the Files application.
- wikis. Retrieves files from the Wikis application.
- failuresOnly. A flag that indicates that only the content of files for which the download and conversion tasks failed should be retrieved.
This argument is a boolean value.
For example:
SearchService.addFileContentTask("mine", "0 0 1 ? * MON-FRI", "0 10 1 ? * MON-FRI", "wikis,files","true")
When the command runs successfully, 1 is printed to the wsadmin console.
If the command does not run successfully, 0 is printed to the wsadmin console.
You can also use the SearchService.addFileContentTask command to replace the task definition for the default 20min-file-retrieval-task. By default, this task runs every 20 minutes, except for a one-hour period between 01:00 and 02:00. To replace the default task settings, first remove the existing task using the SearchService.deleteTask(String taskName) command. Then use the SearchService.addFileContentTask to create a new task with the values that you specify.
For example:
SearchService.deleteTask("20min-file-retrieval-task")
SearchService.addFileContentTask("20min-file-retrieval-task", "0 1/20 0,2-23 * * ?", "0 10/20 0,2-23 * * ?", "all_configured", "false")
- SearchService.addIndexingTask(String taskName, String schedule, String startBy, String applicationNames, Boolean optimizeFlag)
Creates a new scheduled indexing task definition in the Home page database.
This command takes the following arguments:
All arguments are required.
- taskName. The name of the scheduled task. This argument is a string value, which must be unique.
- schedule. The time at which the scheduled task starts. This argument is a string value that must be specified in Cron format.
- startBy. The time given to a task to fire before it is automatically canceled. This argument is a string value that must be specified in Cron format.
This parameter should be used to ensure that indexing tasks are not queued up and running into server busy times. Under normal conditions, the only factors that might cause a task to be delayed are that overlapping or coincident tasks are trying to fire at the same time, or an earlier task is running for a long time.
- applicationNames. The name (or names) of the Connections application to be indexed when the task is triggered. This argument is a string value. To index multiple applications, use a comma-delimited list. The following values are valid: activities, blogs, calendar, communities, dogear, ecm_files, files, forums, profiles, status_updates, and wikis.
- optimizeFlag. A flag that indicates if an optimization step should be performed after indexing. This argument is a boolean value.
The optimization operation is both CPU and I/O intensive. For this reason, the operation should be performed infrequently and, if possible, during off-peak hours.
For more information, refer to the following web page:
Note that when you install Connections, a search optimization task is set up to run every night by default.
See Search default tasks for more information.
For example:
SearchService.addIndexingTask("customDogearAndBlogs", "0 0 1 ? * MON-FRI", "0 10 1 ? * MON-FRI", "dogear,blogs","true")
When the command runs successfully, 1 is printed to the wsadmin console.
If the command does not run successfully, 0 is printed to the wsadmin console.
The refreshTasks() command should be used after this command for the new task definitions to take effect immediately. Otherwise, the changes take place when the Search application is next restarted.
You can also use the SearchService.addIndexingTask command to replace the 15min-search-indexing-task that is automatically configured when you install Connections. By default, all installed Connections applications are crawled and indexed every 15 minutes, except for a one-hour period between 01:00 and 02:00. To replace the default indexing task settings, first remove the existing indexing task using the SearchService.deleteTask(String taskName) command. Then, use the SearchService.addIndexingTask command to create a new indexing task with the values that you specify.
For example:
SearchService.deleteTask("15min-search-indexing-task")
SearchService.addIndexingTask("15min-search-indexing-task", "0 1/15 0,2-23 * * ?", "0 10/15 0,2-23 * * ?", "all_configured", "false")
- SearchService.addOptimizeTask(String taskName, String schedule, String startBy)
Creates a new index optimization scheduled task definition.
This command takes the following arguments:
All arguments are required.
- taskName. The name of the scheduled task. This argument is a string value, which must be unique.
- schedule. The time at which the scheduled task starts. This argument is a string value that must be specified in Cron format.
- startBy. The time given to a task to fire before it is automatically canceled. This argument is a string value that must be specified in Cron format.
This parameter should be used to ensure that indexing tasks are not queued up and running into server busy times. Under normal conditions, the only factors that might cause a task to be delayed are that overlapping or coincident tasks are trying to fire at the same time, or an earlier task is running for a long time.
The optimization operation is both CPU and I/O intensive. For this reason, the operation should be performed infrequently and, if possible, during off-peak hours.
For more information, refer to the following web page:
Note that when you install Connections, a search optimization task is set up to run every night by default.
See Search default tasks for more information.
For example:
SearchService.addOptimizeTask("customOptimize", "0 0 1 ? * MON-FRI", "0 10 1 ? * MON-FRI")
When the command runs successfully, 1 is printed to the wsadmin console.
If the command does not run successfully, 0 is printed to the wsadmin console.
The refreshTasks() command should be used after this command for the new task definitions to take effect immediately. Otherwise, the changes take place when the Search application is next restarted.
You can also use the SearchService.addOptimizeTask command to replace the nightly-optimize-task that is automatically configured when you install Connections. By default, this task runs nightly at 01:30. To replace the default optimize task settings, first remove the existing optimize task using the SearchService.deleteTask command. Then, use the SearchService.addOptimizeTask command to create a new optimize task with the values that you specify.
For example:
SearchService.deleteTask("nightly-optimize-task")
SearchService.addOptimizeTask("nightly-optimize-task", "0 30 1 * * ?", "0 35 1 * * ?")
- SearchService.addSandTask(String taskName, String schedule, String startBy, String jobs)
Creates a new scheduled task definition for the social analytics service in the Home page database.
This command takes the following arguments:
All the arguments are required.
- taskName. The name of the scheduled task. This argument is a string value, which must be unique.
- schedule. The time at which the scheduled task starts. This argument is a string value that must be specified in Cron format.
- startBy. The time given to a task to fire before it is automatically canceled. This argument is a string value that must be specified in Cron format.
This parameter should be used to ensure that scheduled tasks are not queued up and running into server busy times. Under normal conditions, the only factors that might cause a task to be delayed are that overlapping or coincident tasks are trying to fire at the same time, or an earlier task is running for a long time.
- jobs. The name, or names, of the jobs to be run when the task is triggered. This argument is a string value. To index multiple jobs, use a comma-delimited list. The following values are valid: evidence, graph, manageremployees, tags, taggedby, and communitymembership.
For example:
SearchService.addSandTask("customSaNDIndexTask", "0 0 1 ? * MON-FRI", "0 10 1 ? * MON-FRI", "evidence,graph,manageremployees,tags,taggedby,communitymembership")
When the command runs successfully, 1 is printed to the wsadmin console.
If the command does not run successfully, 0 is printed to the wsadmin console.
You can also use the SearchService.addSandTask command to replace the nightly-sand-task that is automatically configured when you install Connections. By default, the task runs nightly at 01:00. To replace the default SAND task settings, first remove the existing task using the SearchService.deleteTask(String taskName) command. Then use the SearchService.addSandTask command to create a new SAND task with the values that you specify.
For example:
SearchService.deleteTask("nightly-sand-task")
SearchService.addSandTask("nightly-sand-task", "0 0 1 * * ?", "0 5 1 * * ?", "evidence,graph,manageremployees,tags,taggedby,communitymembership")
- To refresh the Home page database to include the newly-added tasks, use the following command:
SearchService.refreshTasks()
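For example, a complete sequence that adds the custom indexing task shown earlier and makes it take effect immediately might look like this:
SearchService.addIndexingTask("customDogearAndBlogs", "0 0 1 ? * MON-FRI", "0 10 1 ? * MON-FRI", "dogear,blogs","true")
SearchService.refreshTasks()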
Listing scheduled tasks
Use SearchService administrative commands to list the scheduled tasks defined in the Home page database.
To use SearchService administrative commands, use the wsadmin client.
See Start wsadmin
To list the scheduled tasks defined in the Home page database....
- Start the wsadmin client from the following directory: WAS_HOME/profiles/Dmgr01/bin. You must start the client from this directory, or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment has initialized, use the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored. When the command is run successfully, the following message displays:
Search Administration initialized
- Use the following commands to list tasks defined in the Home page database.
- SearchService.listTasks()
Lists all Search scheduled task definitions (indexing and optimize) defined in the Home page database.
This command does not take any input parameters.
- SearchService.listIndexingTasks()
Lists all scheduled indexing task definitions defined in the Home page database.
This command does not take any input parameters.
- SearchService.listOptimizeTasks()
Lists all scheduled optimize task definitions defined in the Home page database.
This command does not take any input parameters.
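For example, to confirm the exact name of a task before you modify or delete it, you might enter the following command at the wsadmin prompt:
wsadmin>SearchService.listTasks()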
Listing Search tasks that are currently running
You can use a SearchService command to get a list of the tasks that are currently running for the Search application.
To run SearchService commands, use the wsadmin client.
To get a list of the Search tasks that are currently running....
- Start the wsadmin client from the following directory: WAS_HOME/profiles/Dmgr01/bin. You must start the client from this directory, or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment has initialized, use the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored. When the command is run successfully, the following message displays:
Search Administration initialized
- Enter the following command:
- SearchService.listRunningTasks()
Lists all the tasks that are currently running for the Search application. This command does not take any input parameters.
The command returns a list of the tasks that are currently running, and includes the following information for each task:
- Internal task ID
- Task name
- Time that the task started
For example:
wsadmin>SearchService.listRunningTasks()
>>>51 roi-profiles-WedDec0715:23:09GMT2011 Wed Dec 07 15:23:09 GMT 2011
Delete scheduled tasks for Search
Use SearchService administrative commands to delete scheduled task definitions from the Home page database.
To use SearchService administrative commands, use the wsadmin client.
See Start wsadmin
To delete scheduled tasks for Search....
- Start the wsadmin client from the following directory: WAS_HOME/profiles/Dmgr01/bin. You must start the client from this directory, or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment has initialized, use the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored. When the command is run successfully, the following message displays:
Search Administration initialized
- Use the following commands:
- SearchService.deleteAllTasks()
Delete all task definitions from the Home page database.
This command does not take any parameters.
When the command runs successfully, 1 is printed to the wsadmin console. If the command does not run successfully, 0 is printed to the wsadmin console.
- SearchService.deleteTask(String taskName)
Delete the task definition with the specified name from the Home page database.
This command takes a string value, which is the name of the task to be deleted.
For information about how to retrieve the names of the tasks in the Home page database, see Listing scheduled tasks.
For example:
SearchService.deleteTask("profilesIndexingTask")
When the command runs successfully, 1 is printed to the wsadmin console.
If the command does not run successfully, 0 is printed to the wsadmin console.
- To refresh the Home page database and purge it of information related to the deleted task or tasks:
SearchService.refreshTasks()
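For example, a session that removes the task from the earlier example and then purges its information might look like this:
SearchService.deleteTask("profilesIndexingTask")
SearchService.refreshTasks()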
Restore the default scheduled tasks for Search
Use a SearchService administrative command to delete all scheduled tasks from the Home page database and restore the tasks that are configured by default when you first install Connections. You can also use SearchService commands to restore individual default tasks.
To use SearchService administrative commands, use the wsadmin client.
See Start wsadmin
If the effect of resetting the default scheduled tasks for Search is to update the indexing task by adding an application that is not already part of any indexing task and is not currently indexed in the Search index, you must perform initial indexing for that application. In a production environment, first make a backup of your Search index, and then use the startBackgroundIndex command to add the new application to your Search index backup. Replace the current index with the resulting new index before you execute the reset command. If you do not do this, indexing on nodes that do not have a Search index containing resume points for all the applications contained in the task will not proceed and, as a result, all Search indexing will stop.
To restore the default scheduled tasks for Search....
- Start the wsadmin client from the following directory: WAS_HOME/profiles/Dmgr01/bin. You must start the client from this directory, or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment has initialized, use the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored. When the command is run successfully, the following message displays:
Search Administration initialized
- To restore the full set of default tasks:
- SearchService.resetAllTasks()
Deletes all scheduled task definitions from the Home page database and restores the default set of tasks.
For more information about these tasks, see Search default scheduled tasks.
This command does not take any parameters.
When the command runs successfully, 1 is printed to the wsadmin console. If the command does not run successfully, 0 is printed to the wsadmin console.
- To reset individual default tasks, use the following commands with the parameters provided here:
- 15min-search-indexing-task
SearchService.addIndexingTask("15min-search-indexing-task", "0 1/15 0,2-23 * * ?", "0 10/15 0,2-23 * * ?", "all_configured", "false")
- 20min-file-retrieval-task
SearchService.addFileContentTask("20min-file-retrieval-task", "0 1/20 0,2-23 * * ?", "0 10/20 0,2-23 * * ?", "all_configured", "false")
- nightly-optimize-task
SearchService.addOptimizeTask("nightly-optimize-task", "0 30 1 * * ?", "0 35 1 * * ?")
- nightly-sand-task
SearchService.addSandTask("nightly-sand-task", "0 0 1 * * ?", "0 5 1 * * ?", "evidence,graph,manageremployees,tags,taggedby,communitymembership")
For more information about these default scheduled tasks, see Search default scheduled tasks.
- To refresh the Home page database and purge it of information related to the deleted task or tasks:
SearchService.refreshTasks()
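For example, to discard all custom task definitions and return to the installation defaults, you might run:
SearchService.resetAllTasks()
SearchService.refreshTasks()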
Enable and disable scheduled tasks
Use SearchService administrative commands to enable and disable the scheduled tasks defined in the Home page database.
To use SearchService administrative commands, use the wsadmin client.
See Start wsadmin
To enable or disable scheduled tasks in the Home page database....
- Start the wsadmin client from the following directory: WAS_HOME/profiles/Dmgr01/bin. You must start the client from this directory, or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment has initialized, use the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored. When the command is run successfully, the following message displays:
Search Administration initialized
- Use the following commands to disable and re-enable scheduled tasks.
- SearchService.disableAllTasks()
Disables all scheduled tasks for the Search application.
This command does not take any arguments.
When the command runs successfully, 1 is printed to the wsadmin console. If the command does not run successfully, 0 is printed to the wsadmin console.
- SearchService.disableTask(String taskName)
Disable the scheduled task with the specified name.
This command takes a single argument:
- taskName. The name of the task to be disabled. This argument is a string value.
For example:
SearchService.disableTask("mine")
When the command runs successfully, 1 is printed to the wsadmin console.
If the command does not run successfully, 0 is printed to the wsadmin console.
Using this command affects the indexing process as follows:
Results for the application that is currently being indexed are discarded; however, if some applications have already been successfully crawled as part of a scheduled task, those applications are up to date in the index.
- When the command is run before the scheduled task fires, the indexing operation is prevented from starting.
- When the command is run during the indexing operation for an application, the Search application stops indexing.
For example, if a task is fired that is to index Bookmarks, Blogs, and Activities (in that order) and the disable command is called while Blogs is being indexed, when the task is enabled again, Blogs and Activities resume indexing at the same point as the previously-called task. Disabled tasks remain disabled until they are re-enabled.
- SearchService.enableAllTasks()
Re-enables all scheduled tasks for the Search application.
This command does not take any arguments.
When the command runs successfully, 1 is printed to the wsadmin console. If the command does not run successfully, 0 is printed to the wsadmin console.
- SearchService.enableTask(String taskName)
Re-enables the scheduled task with the specified name. This command uses the current schedule.
This command takes a single argument:
- taskName. The name of the task to be enabled. This argument is a string value.
For example:
SearchService.enableTask("mine")
When the command runs successfully, 1 is printed to the wsadmin console.
If the command does not run successfully, 0 is printed to the wsadmin console.
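For example, to pause all Search scheduled tasks before planned maintenance and resume them afterward, you might run the following commands (the second one after the maintenance is complete):
SearchService.disableAllTasks()
SearchService.enableAllTasks()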
Run one-off tasks
The SearchService MBean provides commands that allow you to create an indexing or optimize task that is scheduled to run once and only once, 30 seconds after being called.
To use SearchService administrative commands, use the wsadmin client.
See Start wsadmin
Notes:
- If the time between issuing these commands is less than the polling interval for the Search scheduler, then tasks might not execute in the same order as the order in which the commands were issued.
- You should wait at least the duration of the poll interval after issuing the following commands before issuing another one of the commands:
- indexNow()
- indexNowWithOptimization()
- optimizeNow()
To run one-off Search tasks....
- Start the wsadmin client from the following directory: WAS_HOME/profiles/Dmgr01/bin. You must start the client from this directory, or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment has initialized, use the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored. When the command is run successfully, the following message displays:
Search Administration initialized
- Use the following commands to run one-off indexing tasks.
- SearchService.indexNow(String applicationNames)
Creates a one-off task that indexes the specified applications 30 seconds after being called.
This command takes a single argument:
- applicationNames. The name (or names) of the Connections application to be indexed when the task is triggered. This argument is a string value. To index multiple applications, use a comma-delimited list.
The following values are valid: activities, blogs, calendar, communities, dogear, ecm_files, files, forums, profiles, status_updates, and wikis.
An optimize operation is not run at the end of the indexing operation.
For example:
SearchService.indexNow("dogear, blogs")
- SearchService.indexNowWithOptimization(String applicationNames)
Creates a one-off task that indexes the specified applications 30 seconds after being called, and performs an optimization operation at the end of the indexing operation.
This command takes a single argument:
- applicationNames. The name (or names) of the Connections application to be indexed when the task is triggered. This argument is a string value. To index multiple applications, use a comma-delimited list.
The following values are valid: activities, blogs, calendar, communities, dogear, ecm_files, files, forums, profiles, status_updates, and wikis.
The optimization operation is both CPU and I/O intensive. For this reason, the operation should be performed infrequently and, if possible, during off-peak hours.
For more information, refer to the following web page:
Note that when you install Connections, a search optimization task is set up to run every night by default.
See Search default tasks for more information.
For example:
SearchService.indexNowWithOptimization("dogear, blogs")
- SearchService.optimizeNow()
Creates a one-off task that performs an optimize operation on the search index, 30 seconds after being called.
The optimization operation is both CPU and I/O intensive. For this reason, the operation should be performed infrequently and, if possible, during off-peak hours.
For more information, refer to the following web page:
Note that when you install Connections, a search optimization task is set up to run every night by default.
See Search default tasks for more information.
This command does not accept any input parameters.
This operation should not be called during an indexing operation; if it needs to be run, do it at an off-peak time when the application is not expected to be performing intensive I/O operations on the index.
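For example, to schedule a one-off optimization to run 30 seconds from now during an off-peak window, you might enter:
SearchService.optimizeNow()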
Retrieve file content
Use SearchService commands to perform file content retrieval tasks.
To run SearchService commands, use the wsadmin client.
Depending on the number of files being indexed in your deployment, it can take a long time to retrieve file content. To ensure that all content is retrieved and indexed, you can run the indexNow command to retrieve all content before the document indexing service finishes, or you can run it after the document indexing service has finished.
For example, to manually index files and all file content, you might run the following commands:
wsadmin>SearchService.indexNow("files")
wsadmin>SearchService.getFileContentNow("files")
wsadmin>SearchService.indexNow("files")
The document indexing service can run on multiple nodes, making the download and conversion process faster. When the document indexing task is scheduled, the Search application sends a message to all the nodes to tell them to start the document indexing process locally. Each Search server starts taking files from the cache and downloading and converting them. When a node retrieves a file, it flags the file in the cache as claimed so that other nodes do not try to get content for that file.
To perform file content retrieval tasks....
- Start the wsadmin client from the following directory: WAS_HOME/profiles/Dmgr01/bin. You must start the client from this directory, or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment has initialized, use the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored. When the command is run successfully, the following message displays:
Search Administration initialized
- Use the following commands to perform file content retrieval tasks.
- SearchService.getFileContentNow(String applicationNames)
Launches the file content retrieval task. This command iterates over the file cache, downloading and converting files that do not have any content.
This command takes a string value, which is the name of the application whose content is to be retrieved. The following values are valid:
- ecm_files. Retrieves files from the Enterprise Content Management repository. Only published files are retrieved; draft files are not included.
- files. Retrieves files from the Files application.
- wikis. Retrieves files from the Wikis application.
For example:
SearchService.getFileContentNow("files")
- SearchService.retryContentFailuresNow(String applicationNames)
Retries failed attempts at downloading and converting files for the specified application.
This command takes a string value, which is the name of the application whose content is to be downloaded and converted. The following values are valid:
- files. Retrieves files from the Files application.
- wikis. Retrieves files from the Wikis application.
A file download or conversion task can fail for a number of reasons, for example, hardware or network issues. Failures are flagged in the cache and can be retried.
For example:
SearchService.retryContentFailuresNow("wikis,files")
- SearchService.addFileContentTask(String taskName, String schedule, String startBy, String applicationNames, failuresOnly)
Creates a scheduled file content retrieval task.
This command takes the following arguments:
- taskName. The name of the scheduled task. This argument is a string value, which must be unique.
- schedule. The time at which the scheduled task starts. This argument is a string value that must be specified in Cron format.
- startBy. The time given to a task to fire before it is automatically canceled. This argument is a string value that must be specified in Cron format.
- applicationNames. The name (or names) of the Connections application to be indexed when the task is triggered. This argument is a string value. To index multiple applications, use a comma-delimited list. The following values are valid:
- ecm_files. Retrieves files from the Enterprise Content Management repository. Only published files are retrieved; draft files are not included.
- files. Retrieves files from the Files application.
- wikis. Retrieves files from the Wikis application.
- failuresOnly. A flag that indicates that only the content of files for which the download and conversion tasks failed should be retrieved.
This argument is a boolean value.
For example:
SearchService.addFileContentTask("mine", "0 0 1 ? * MON-FRI", "0 10 1 ? * MON-FRI", "wikis,files", "true")
When the command runs successfully, 1 is printed to the wsadmin console.
If the command does not run successfully, 0 is printed to the wsadmin console.
You can also use the SearchService.addFileContentTask command to replace the task definition for the default 20min-file-retrieval-task. By default, this task runs every 20 minutes, except for a one-hour period between 01:00 and 02:00. To replace the default task settings, first remove the existing task using the SearchService.deleteTask(String taskName) command. Then use the SearchService.addFileContentTask to create a new task with the values that you specify.
For example:
SearchService.deleteTask("20min-file-retrieval-task") SearchService.addFileContentTask("20min-file-retrieval-task", "0 1/20 0,2-23 * * ?", "0 10/20 0,2-23 * * ?", "all_configured", "false")- SearchService.listFileContentTasks()
Lists all the scheduled file content retrieval tasks.
This command does not take any input parameters.
- SearchService.enableTask(String taskName)
Enable the specified task.
This command takes a single argument:
- taskName. The name of the task to be enabled. This argument is a string value.
For example:
SearchService.enableTask("mine")- SearchService.disableTask(String taskName)
Disable the specified task.
This command takes a single argument:
- taskName. The name of the task to be disabled. This argument is a string value.
For example:
SearchService.disableTask("mine")
Purge content from the index
Use the SearchService.deleteFeatureIndex command to purge content for a specific application from the Search index in a single-node environment.
To run administrative commands, use the wsadmin client.
In an environment with multiple nodes, use the SearchService.deleteFeatureIndex command only when you want to delete the index for an application that has been uninstalled. After running this command, the content from the component that has been deleted cannot be reindexed. To delete content for a specific application from the index, use the SearchService.startBackgroundIndex command to rebuild a new index for all applications instead.
For more information about this command, see Creating a background index.
If there is a problem with indexed content from any of the Connections applications, instead of deleting and recreating the entire index, you can use the SearchService.deleteFeatureIndex command to remove and purge all documents for a given application from the index. The command deletes the content from the database that is shared by all the servers in the cluster as well as from the indexes.
When you run the SearchService.deleteFeatureIndex command, the command removes indexed content for the specified application from the node in your deployment. Indexing tasks are automatically disabled at the start of this process and re-enabled when the process is complete, regardless of whether the tasks were disabled initially.
When you remove an application from the Search index, you need to rebuild the indexes for the social analytics service. The social analytics indexes are completely rebuilt every night by default, however, to fully remove an application's index immediately, you must use the SearchService.sandIndexNow command on each of the social analytics indexes.
For more information about this command, see Running one-off social analytics scheduled tasks.
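For example, after uninstalling the Activities application, you might purge its content and then rebuild the social analytics indexes immediately rather than waiting for the nightly task. The following sketch assumes that SearchService.sandIndexNow accepts the individual job names listed under Adding scheduled tasks for the social analytics service; verify the names in your deployment before running it.
# Purge the indexed content for the uninstalled application
SearchService.deleteFeatureIndex("activities")
# Rebuild each social analytics index immediately (assumed job names)
for job in ["evidence", "graph", "manageremployees", "tags", "taggedby", "communitymembership"]:
    SearchService.sandIndexNow(job)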
To purge content for a specific application from the index....
- Start the wsadmin client from the following directory of the deployment manager:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment initializes, run the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
When the command runs successfully, the following message displays:
Search Administration initialized
- Use the following command to remove and purge the content for a specified application from the Search index.
- SearchService.deleteFeatureIndex(String applicationName)
Removes and purges the content for the specified application from the Search index.
Only use this command if you are uninstalling an application from Connections. After you run the command, the content from the application that has been deleted cannot be reindexed.
This command takes a string value, which is the name of the application whose content is to be deleted. The following values are valid: activities, blogs, calendar, communities, dogear, ecm_files, files, forums, profiles, status_updates, and wikis.
For example:
SearchService.deleteFeatureIndex("activities")
When the command runs successfully, 1 is printed to the wsadmin console.
If the command does not run successfully, 0 is printed to the wsadmin console.
Reindexing content
Use the retryIndexing command when you want to reindex content that was not indexed successfully during initial or incremental indexing.
To run administrative commands, use the wsadmin client.
If a failure occurs when you are trying to index content from the Connections applications, you can use the retryIndexing command to try to index that content again. You can tell if a failure has occurred during content indexing when you do not see the expected search results being returned, or when you see incorrect search results being returned.
For example, you might have updated a document but an older version of that document is returned by a search.
To reindex content that failed to be indexed previously, complete the following steps.
- Start the wsadmin client from the following directory of the deployment manager:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment initializes, run the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
When the command runs successfully, the following message displays:
Search Administration initialized
- Use the following command.
- SearchService.retryIndexing(String service, String id)
Attempts to index an item of content that was not indexed successfully during initial or incremental indexing.
retryIndexing does not work for ecm_files.
This command takes two parameters:
- service
- The application from which the content originated.
- id
- The Atom ID of the content.
For information about how to retrieve the Atom ID for the content, refer to the Connections API documentation on the IBM Social Business Development Wiki.
For example:
SearchService.retryIndexing('activities', 'b63cabf8-0533-45cf-9636-d63cd6a6f3ca')
If the command is successful, 1 is printed to the console. If the command fails, 0 is printed to the console.
Delete persisted seedlist data
You can free up disk space by deleting persisted seedlists from your system using the SearchService.flushPersistedCrawlContent command.
See Start wsadmin
Persisted seedlists can take up a large amount of space when your deployment has a lot of content. If you know that a particular set of crawled content is no longer needed, you can free up disk space by using the SearchService.flushPersistedCrawlContent command to delete the persisted data. This command only clears persisted seedlists in the default persistence location. To delete seedlists crawled using the startBackgroundCrawl, startBackgroundFileContentExtraction, or startBackgroundIndex commands, you must delete them manually.
You might also want to use the SearchService.flushPersistedCrawlContent command to remove old data when you are about to recrawl the entire system with the persistence option enabled. Where previously persisted data still exists, you can use the command to purge old data from the system before generating a more up-to-date copy.
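As a minimal sketch of that scenario, you might purge the old persisted data and then trigger a fresh crawl. The second command assumes that SearchService.indexNow accepts the same comma-delimited list of application names as SearchService.indexNowWithOptimization; adjust the list for your deployment.
# Remove previously persisted seedlists (default persistence location only)
SearchService.flushPersistedCrawlContent()
# Start a fresh crawl and index update for the required applications (assumed comma-delimited list)
SearchService.indexNow("activities, blogs, communities, files, wikis")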
To delete persisted seedlists....
- Start the wsadmin client from the following directory of the deployment manager:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment initializes, run the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
When the command runs successfully, the following message displays:
Search Administration initialized
- Run the following command:
- SearchService.flushPersistedCrawlContent()
Deletes current persisted seedlists.
This command only clears persisted seedlists in the default persistence location.
Seedlists crawled using the startBackgroundCrawl, startBackgroundFileContentExtraction, or startBackgroundIndex commands must be deleted manually.
This command does not take any input parameters.
Do not run this command while a crawl is in progress.
When the command runs successfully, 1 is printed to the wsadmin console.
If the command does not run successfully, 0 is printed to the wsadmin console.
Delete the index
Delete the index by deleting the contents of the directory specified by the IBM WebSphere Application Server variable, SEARCH_INDEX_DIR.
From time to time, you might need to delete and rebuild the Search index.
For example, if you change the context root of one of the Connections applications, you then need to rebuild the index by deleting the current index.
The index is automatically rebuilt the next time the indexing task runs.
When you delete the index, you might also want to delete the content of the extracted file store used by the Search index. However, the existing extracted file content can be reused when generating a new index so, if the files were previously indexed successfully, it is generally preferable to keep the extracted content to reduce index recreation time.
To delete the Search index....
- Check the value of the SEARCH_INDEX_DIR WebSphere Application Server variable for the relevant server:
- Launch WAS admin console.
- Expand Environment and select WebSphere variables.
- Click the Show filter function icon.
- Ensure that Name displays in the Filter dropdown menu.
- Enter SEARCH_INDEX_DIR into the Search terms field and click Go. A variable called SEARCH_INDEX_DIR displays in the search results. Take a note of the value of this variable as the index location for the relevant server.
- To delete the contents of the extracted file store, check the value of the EXTRACTED_FILE_STORE WebSphere Application Server variable for the server:
- Repeat steps 1a to 1d from the previous step.
- Enter EXTRACTED_FILE_STORE into the Search terms field and click Go. A variable called EXTRACTED_FILE_STORE displays in the search results. Take a note of the value of this variable as the extracted file content location for the server.
- Shut down the Search server or cluster.
- Delete the contents of the index folder that you noted in step 1e.
- Delete the contents of the extracted file content folder that you noted in step 2b.
- Rebuild the index by following the steps described in Recreating the Search index.
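As an optional alternative to steps 1 and 2, you can look up both variables from the wsadmin client. The following Jython sketch assumes that the variables are defined as standard WebSphere variable substitution entries; it simply prints each matching entry and its value.
# Print the index and extracted-file-store locations defined as WebSphere variables
for entry in AdminConfig.list('VariableSubstitutionEntry').splitlines():
    name = AdminConfig.showAttribute(entry, 'symbolicName')
    if name in ('SEARCH_INDEX_DIR', 'EXTRACTED_FILE_STORE'):
        # The entry string includes the scope, which identifies the server
        print name, '=', AdminConfig.showAttribute(entry, 'value'), entry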
Listing indexing nodes
Use the SearchService.listIndexingNodes command when you need to check the names of the Search indexing nodes in your deployment.
For example, if you want to remove an indexing node from the index management table, you can use this command to verify the name of the node to remove.
To use SearchService administrative commands, use the wsadmin client.
See Start wsadmin
To list the indexing nodes for Search....
- Start the wsadmin client from the following directory of the deployment manager:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment initializes, run the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
When the command runs successfully, the following message displays:
Search Administration initialized
- Run the following command:
- SearchService.listIndexingNodes()
Return a list of the Search indexing nodes in your deployment.
This command does not take any arguments.
When the command runs successfully, the names of the Search indexing nodes are printed to the wsadmin console along with information about each node. The output includes a version timestamp and information that indicates whether the node is an indexing node or a non-indexing node, whether the index on the server is more than 30 days old, and whether the index on the server is synchronized with the latest index in the cluster.
For example:
Indexing Node Id: dubxpcvm084-0Node02:server1, Last Crawl Version: 1,340,285,460,074, Indexer: true, Out of Date: false, Out of Sync: false
Remove a node from the index management table
When you are removing a node from a cluster, use the SearchService.removeIndexingNode wsadmin command to remove the node from the index management table.
You must remove the node from the cluster before using the SearchService.removeIndexingNode command to remove it from the index management table.
For information about how to remove nodes, see Removing nodes from a cluster.
To use the SearchService.removeIndexingNode command, use the wsadmin client.
You can use the removeIndexNode command to remove an entry from the SR_INDEX_MANAGEMENT table that is added or updated by Search servers at start-up time.
To remove a node from the index management table....
- Start the wsadmin client from the following directory of the deployment manager:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment initializes, run the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
When the command runs successfully, the following message displays:
Search Administration initialized
- Use the following command:
- SearchService.removeIndexingNode(String nodeName)
Removes the specified node from the index management table.
This command takes a single argument:
- nodeName. The name of the node to be removed. This argument is a string value that takes the following format:
nodeName:serverName
To retrieve a list of the indexing nodes in your deployment, run the SearchService.listIndexingNodes() command. For more information, see Listing indexing nodes.
For example:
SearchService.removeIndexingNode("Node01:cluster1_server1")
When the command runs successfully, 1 is printed to the wsadmin console.
If the command does not run successfully, 0 is printed to the wsadmin console.
Backup and restore
Create a backup of the Search index and save it to a secure location so that it can be used to restore the index in the event of loss or corruption.
When backing up and restoring data, refer to the product documentation for the database and file system that you are using.
When indexing tasks are scheduled, it can be difficult to find a suitable time for backing up the Search index. Instead of creating an index backup manually, you can use the IBM WebSphere Application Server scheduler framework to create backup tasks that automatically block indexing tasks from starting and create suitable slots in the scheduled list of tasks for creating index backups. By leveraging the existing scheduler framework, you can implement a consistent method for scheduling and administering the Search application.
The scheduler framework guarantees that a single backup task runs in a clustered environment. In the event of a previous backup task being interrupted, the current backup task removes any leftover artifacts from the failed backup task. Notes:
- The backup task overwrites existing backups when configured to do so, but it does not perform any further index-backup management. Administrators must ensure that issues such as disk space are managed independently of the backup process.
For example, you must ensure that redundant or unwanted backups are purged from storage as needed.
- Connections applications maintain delete and access-control update information for a maximum of 30 days. Indexes that are more than 30 days old are not suitable for restoration because they might contain obsolete or orphan content. An index over 30 days old that is restored cannot be made up-to-date by update indexing.
- Backup locations are set using WAS environment variables and must be set to a network file storage location. Because backup tasks can be run on any server in a clustered environment, locally-stored backups might interfere with other backup settings, for example, by causing confusion as to which is the latest backup. The user ID that is used for running WebSphere Application Server must have write and delete permissions to the network share.
- Servers running search.ear that can run backup tasks should have free disk space available that is at least twice the size of the index directory; this space is needed for internal operations on the index when performing backups.
- When restoring a backup, update the management table in the database with the new resume points for the Search crawlers. A backup of the resume points is stored at all times in the Lucene index as Lucene Documents. The notifyRestore administrative task allows you to update the management table with the resume points specified in the Lucene index. All future crawls then start from the specified point. The cache is also purged by the notifyRestore task.
- Valid backups have an IndexBackupInformation.txt file created in the backup location.
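Bringing these points together, a typical setup might configure the backup type and schedule a weekly backup task, and then run notifyRestore after any restore. The working directory, cell name, task name, and Cron strings below are examples only; the individual commands are described in the topics that follow.
# Configure how backups are written (check out search-config.xml first)
SearchCellConfig.checkOutConfig("/tmp/search_temp", "Cell01")
SearchCellConfig.setBackupType("dual")
SearchCellConfig.checkInConfig()
# Schedule a weekly index backup
SearchService.addBackupIndexTask("WeeklyIndexBackup", "0 0 2 ? * SAT", "0 10 2 ? * SAT")
# After restoring a backup, reload crawler resume points and purge the cache
SearchService.notifyRestore("false")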
Configure index backup settings
Use SearchCellConfig commands to define index backup settings in the search-config.xml file.
When using administrative commands, use the wsadmin client.
When backing up the search index, you can specify the type of backup to create by configuring the backupType setting in the search-config.xml file. You can also specify whether to run a shell script or third-party application on completion of the backup task by editing the postBackupExecutable setting. These settings are applied to all backup tasks.
To configure index backup settings...
- Start the wsadmin client from the following directory of the deployment manager:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment initializes, run the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
When the command runs successfully, the following message displays:
Search Administration initialized
- Check out the Search cell-level configuration file, search-config.xml:
SearchCellConfig.checkOutConfig("working_dir", "cellName")
where:
- working_dir is the temporary directory to which you want to check out the cell level configuration file. This directory must exist on the server where you are running the wsadmin client. Use forward slashes to separate directories in the file path, even if you are using the Microsoft Windows operating system.
AIX, Linux, and IBM i only: The directory must grant write permissions or the command will not run successfully.
- cellName is the name of the cell that the Search node belongs to. This argument is required and is case-sensitive, so type it with care. To determine the cell name, run the following wsadmin command:
print AdminControl.getCell()
For example:
SearchCellConfig.checkOutConfig("c:/search_temp", "SearchCell01")- To configure index backup settingss:
- SearchCellConfig.setBackupType(String type)
Specifies the type of backup to create.
This command takes a single argument that specifies the backup type. This can be one of the following:
- new. Creates a new index backup every time.
- dual. Creates dual copies and overwrites the oldest existing backup.
- overwrite. Overwrites the existing index backup.
For example:
SearchCellConfig.setBackupType("new")- SearchCellConfig.setPostBackupScript(String script)
Specifies which shell script or third-party application runs on completion of the backup task.
This command takes a single argument that specifies the name of the shell script or application file.
For example:
SearchCellConfig.setPostBackupScript("backup.sh")To disable the script, run the command again with an empty string as the argument.
For example:
SearchCellConfig.setPostBackupScript("")- Check in the updated search-config.xml configuration file using the following wsadmin client command:
SearchCellConfig.checkInConfig()
- To exit the wsadmin client, type exit at the prompt.
- Stop the server or servers hosting the Search application, delete the index, and then restart the Search servers. The next time the scheduled task runs, it recreates the index.
Back up the Search index
You can back up the Search index using SearchService commands or you can manually copy the index files to a backup location.
Note that the Search index has a dependency on the HOMEPAGE database.
The Search index can be backed up and restored independently of the HOMEPAGE database as long as the HOMEPAGE database remains current. However, if the database is restored, a corresponding Search index backup must be restored with it.
Back up the Search index using wsadmin commands
Use SearchService administrative commands to define scheduled backup tasks for the Search index.
To use SearchService administrative commands, use the wsadmin client.
See Start wsadmin
Backups are written to the location specified in the WAS environment variable, SEARCH_INDEX_BACKUP_DIR. When backing up the index, ensure that multiple nodes are not sharing the same backup location.
To back up the Search index, complete the following tasks.
- Start the wsadmin client from the following directory of the deployment manager:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment initializes, run the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
When the command runs successfully, the following message displays:
Search Administration initialized
- Use the following commands.
- SearchService.addBackupIndexTask(String taskName, String schedule, String startbySchedule)
Defines a new scheduled index backup task.
This command takes the following arguments:
- taskName. The name of the task to be added.
- schedule. The time at which the scheduled task starts. This argument is a string value that must be specified in Cron format.
- startbySchedule. The time given for the task to run before it is automatically canceled. This argument is a string value that must be specified in Cron format.
For example:
SearchService.addBackupIndexTask("WeeklyIndexBackup","0 0 2 ? * SAT","0 10 2 ? * SAT")
When the command runs successfully, 1 is printed to the wsadmin console.
If the command does not run successfully, 0 is printed to the wsadmin console.
- SearchService.backupIndexNow()
Backs up the index to the location specified by the IBM WebSphere Application Server variable, SEARCH_INDEX_BACKUP_DIR. There might be a delay before the backup occurs if there are indexing tasks in progress.
This command does not take any arguments.
After backing up the Search index using wsadmin commands, consider performing a full backup of the HOMEPAGE database. Note that the Search index has a dependency on data in the HOMEPAGE database.
- SearchService.deleteTask(String taskName)
Delete the specified backup task.
This command takes a single argument:
- taskName. The name of the task to be deleted.
For example:
SearchService.deleteTask("NightlyBackupTask")- SearchService.notifyRestore(Boolean isNewIndex)
Brings the database to a consistent state so that crawlers start from the point at which the backup was made.
The notifyRestore command updates index management tables in the HOMEPAGE database so that crawling resume points are reloaded from a restored index, thereby ensuring that all future crawls start from the correct point. The command also purges cached content in the HOMEPAGE database.
The notifyRestore command optionally removes all entries from the HOMEPAGE database table that tracks the status of individual files as part of the content extraction process. This table is used by the Search application when indexing the content of file attachments.
This command takes a single parameter:
- isNewIndex: If set to true, all entries are removed from the database table that is used by the file content extraction process to track the status of individual files.
Set this parameter to true when you are restoring a newly-built index. Set the parameter to false when you are restoring an index backup.
For example:
SearchService.notifyRestore("true")
Back up the Search index manually
The Search index can be backed up and restored at a later date in the event of loss or corruption of data.
To back up the Search index manually....
- Disable any regular indexing tasks that you have configured.
To disable all tasks:
SearchService.disableAllTasks()
- Verify that indexing is not ongoing.
For more information, see Verifying that index building is not taking place.
- Copy the entire index directory and its subdirectories to a secure backup location.
- When backing up the Search index, consider performing a full backup of the HOMEPAGE database. Note that the Search index has a dependency on data in the HOMEPAGE database.
- Re-enable your indexing task or tasks by performing one of the following steps:
- If you had no tasks that were disabled before you completed step 1, then run the SearchService.enableAllTasks() command.
- If you had specific tasks that were disabled before you completed step 1, then use the SearchService.enableTask(String taskname) command to enable those tasks.
For example:
SearchService.enableTask("mine")
The next indexing task to run resumes indexing at the point at which the restored index was last successfully indexed.
Restore the Search index
When you create a backup copy of the Search index, you can use the copy to restore the index in the event of loss or corruption.
The process for restoring the Search index differs depending on the number of nodes in your deployment. In an environment with multiple nodes, you must restore the backup consistently for all the nodes.
Restore a Search index in a single-node environment
In the event of data loss or corruption, you can use a backup copy of the Search index to restore the index. Use the following procedure to restore the Search index in a single node environment.
For information about how to create a backup copy of the Search index, see Backing up the Search index.
Connections applications maintain delete and access-control update information for a maximum of 30 days. Indexes that are more than 30 days old are not considered suitable for restoration because they might contain obsolete or orphan content.
You can also follow the procedure described here when you want to restore the index in an environment with multiple nodes if it is not an issue that all the nodes are unavailable while the index is being restored.
For information about restoring the index in a multi-node environment, see Restoring a Search index in an environment with multiple nodes.
To restore a Search index in a single-node environment, complete the following steps:
- Disable any regular indexing tasks that you have configured.
- To list the indexing tasks:
SearchService.listIndexingTasks()
- To disable all tasks:
SearchService.disableAllTasks()
There is only one indexing task by default.
- To prepare the HOMEPAGE database to successfully load the restored index:
SearchService.notifyRestore(Boolean isNewIndex)
where the isNewIndex parameter specifies whether all entries are removed from the database table that is used by the file content extraction process to track the status of individual files. Set the parameter to false when you are restoring an index backup.
For example:
SearchService.notifyRestore("false")For more information about this command, see Backing up the Search index using wsadmin commands.
- Stop the Search server.
- Delete the contents of the index directory and all its subdirectories from the Connections Search data directory.
- Copy the backup index and all its subdirectories into the Search directory.
- Restart the Search server.
- Re-enable your indexing task or tasks using the SearchService.enableAllTasks() command.
For example:
SearchService.enableAllTasks()
If you do not want to enable all tasks (for example, if some tasks were disabled before you started these steps and you want to keep them disabled), use the SearchService.enableTask(String taskName) command instead to enable one task at a time.
The next indexing task to run resumes indexing at the point at which the restored index was last successfully indexed.
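The wsadmin portion of this procedure can be summarized in the following sketch; the file-system steps (stopping the server, replacing the index directory contents with the backup, and restarting the server) must still be carried out between the two halves.
# Before replacing the index files
SearchService.listIndexingTasks()
SearchService.disableAllTasks()
SearchService.notifyRestore("false")
# ...stop the Search server, restore the index files, restart the server...
# After the Search server is running again
SearchService.enableAllTasks()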
Restore a Search index in an environment with multiple nodes
Complete the following procedure when you want to restore a Search index in a multi-node environment where restarting individual Search nodes is acceptable. Some Search nodes are unavailable during the procedure but other nodes in the cluster are still available to handle incoming requests.
For information about how to create a backup copy of the Search index, see Backing up the Search index.
Connections applications maintain delete and access-control update information for a maximum of 30 days. Indexes that are more than 30 days old are not considered suitable for restoration because they might contain obsolete or orphan content.
When you create a backup copy of the Search index, you can use this copy to restore the index in the event of loss or corruption. You must restore the backup consistently for all the nodes in your deployment.
Follow this procedure to restore a Search index if your environment has multiple Search nodes in the cluster. If your environment has multiple nodes but the Search application is only deployed on one of those nodes, refer to the topic, Restoring a Search index in a single-node environment.
- Disable any regular indexing tasks that you have configured.
- To list the indexing tasks:
SearchService.listIndexingTasks()
- To disable all tasks:
SearchService.disableAllTasks()
There is only one indexing task by default.
- To prepare the HOMEPAGE database to successfully load restored indexes on each node:
SearchService.notifyRestore(Boolean isNewIndex)
where the isNewIndex parameter specifies whether all entries are removed from the database table that is used by the file content extraction process to track the status of individual files. Set the parameter to false when you are restoring an index backup.
For example:
SearchService.notifyRestore("false")For more information about this command, see Backing up the Search index using wsadmin commands.
- Stop the first Search node in your deployment.
- Delete the contents of the index directory and all its subdirectories from the Connections Search data directory.
- Copy the backup index and all its subdirectories into the Search directory.
- Restart the Search node.
- Stop each remaining Search node in the cluster in turn and repeat steps 4-6 for that node.
- Re-enable your indexing task or tasks using the SearchService.enableAllTasks() command.
For example:
SearchService.enableAllTasks()
If you do not want to enable all tasks (for example, if some tasks were disabled before you started these steps and you want to keep them disabled), use the SearchService.enableTask(String taskName) command instead to enable one task at a time.
The next indexing task to run resumes indexing at the point at which the restored index was last successfully indexed.
Restore a Search index without restarting individual nodes
Complete the following procedure when you want to restore a Search index in a multi-node environment where restarting individual Search nodes must be avoided.
For information about how to create a backup copy of the Search index, see Backing up the Search index.
Connections applications maintain delete and access-control update information for a maximum of 30 days. Indexes that are more than 30 days old are not considered suitable for restoration because they might contain obsolete or orphan content.
You can use a backup copy of the Search index to restore the index in the event of loss or corruption. You must restore the backup consistently for all the nodes in your deployment.
To restore the Search index in the event of loss or corruption, complete the following steps:
- Disable any regular indexing tasks that you have configured.
- To list the indexing tasks:
SearchService.listIndexingTasks()
- To disable all tasks:
SearchService.disableAllTasks()
There is only one indexing task by default.
- To prepare the HOMEPAGE database to successfully load restored indexes on each node:
SearchService.notifyRestore(Boolean isNewIndex)
where the isNewIndex parameter specifies whether all entries are removed from the database table that is used by the file content extraction process to track the status of individual files. Set the parameter to false when you are restoring an index backup.
For example:
SearchService.notifyRestore("false")For more information about this command, see Backing up the Search index using wsadmin commands.
- On each Search node, delete the contents of the index directory and all its subdirectories from the Connections Search data directory.
- On each Search node, copy the backup index and all its subdirectories into the Search directory.
- On each Search node, reload the index:
SearchService.reloadIndex()
- Re-enable your indexing task or tasks using the SearchService.enableAllTasks() command.
For example:
SearchService.enableAllTasks()
If you do not want to enable all tasks (for example, if some tasks were disabled before you started these steps and you want to keep them disabled), use the SearchService.enableTask(String taskName) command instead to enable one task at a time.
The next indexing task to run resumes indexing at the point at which the restored index was last successfully indexed.
Configure file attachment indexing settings
Edit settings in the search-config.xml file to configure Search for file attachments.
To edit configuration files, use the wsadmin client.
Search provides a dedicated document conversion service. When a file indexing task is run, the document conversion service downloads files, converts them to plain text, and then indexes the content. During this process, content from different MIME types is indexed. For a list of the MIME types supported by Search, see Supported MIME types.
The behavior of the document conversion service can be altered by modifying various settings, allowing administrators to control the file content indexing process.
Connections supports the indexing of file attachment content from the Files and Wikis applications. Content from file attachments in Activities, Blogs, and Forums is not searched.
When file indexing is enabled, the content of files is not indexed the first time that the indexing task runs. The first run starts the process of retrieving the file content; the actual indexing of that content takes place only when the indexing task runs for the second time.
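For example, to force both passes manually for the Files application instead of waiting for the scheduled tasks, you might run the sequence shown earlier in the Retrieve file content topic: the first indexNow starts content retrieval, getFileContentNow downloads and converts the files, and the second indexNow indexes the extracted content.
SearchService.indexNow("files")
SearchService.getFileContentNow("files")
SearchService.indexNow("files")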
To configure file attachment indexing settings....
- Start the wsadmin client from the following directory of the deployment manager:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment initializes, run the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
When the command runs successfully, the following message displays:
Search Administration initialized
- Check out the Search cell-level configuration file, search-config.xml:
SearchCellConfig.checkOutConfig("working_dir", "cellName")
where:
- working_dir is the temporary directory to which you want to check out the cell level configuration file. This directory must exist on the server where you are running the wsadmin client. Use forward slashes to separate directories in the file path, even if you are using the Microsoft Windows operating system.
AIX, Linux, and IBM i only: The directory must grant write permissions or the command will not run successfully.
- cellName is the name of the cell that the Search node belongs to. This argument is required and is case-sensitive, so type it with care. To determine the cell name, run the following wsadmin command:
print AdminControl.getCell()
For example:
SearchCellConfig.checkOutConfig("c:/search_temp", "SearchCell01")
- Use the following commands to control the file content indexing process.
- SearchCellConfig.enableAttachmentHandling()
Enable the indexing of file attachments in the Files and Wikis applications.
If attachment handling was disabled during the last indexing run, you must rebuild the index after enabling attachment handling; otherwise, this command does not take effect.
This command does not take any input parameters.
- SearchCellConfig.disableAttachmentHandling()
Disable the indexing of file content in the Files, Wikis, and Library (ECM Files) applications.
This command does not take any input parameters.
- SearchCellConfig.setMaximumAttachmentSize(int maxAttachmentSize)
Set the limit on the size of files that can be downloaded for indexing. Files that are greater than the configured maximum attachment size are not downloaded or processed for content indexing. By default, the limit is set to 50 MB, which means that files over 50 MB are not indexed.
Files that are under the specified size are downloaded to a temporary directory located in the index directory, where they go through the text extraction process. The extracted text is then indexed. The temporary directory size available must be greater than the maximum file size allowed for content indexing. You can control the amount of extracted text that is indexed using the setCacheFileSize command.
This command accepts one argument:
- maxAttachmentSize. The maximum file size in bytes of any file attachment eligible for indexing. This is an integer value.
For example:
SearchCellConfig.setMaximumAttachmentSize("52428800")
- SearchCellConfig.setCacheExpiryTime(int numberOfDays)
Set the number of days for which a downloaded file's indexable content is cached in the database. This information is cached for potential reuse at indexing time. If a file is not reused in the number of days specified, its entry in the database cache is deleted.
If the file content has changed, the file is downloaded again and the cache is updated with the revised content.
This command allows you to ensure that the database cache used for indexing files is kept up-to-date.
The expiry time is measured in days. Specify a positive integer greater than zero.
For example:
SearchCellConfig.setCacheExpiryTime("30")- SearchCellConfig.setCacheFileSize(int cacheFileSize)
Specifies the maximum amount of extracted text that can be indexed per file. Before a file is indexed, it is converted to plain text. This command allows you to specify how much of that plain text conversion should be indexed.
The cache file size is set to 200 KB by default, which is a very large amount of plain text.
The cache file size limit applies to the amount of extracted plain text rather than to the size of the original file. Even for a large presentation file, for example, the default setting should be sufficient to allow all of the text in that file to be extracted for indexing.
This command accepts one argument:
- cacheFileSize. The number of bytes of indexable and searchable file content stored in the database cache. Use a positive integer greater than zero.
For example:
SearchCellConfig.setCacheFileSize("200000")- SearchCellConfig.setMaxCacheEntries(int maxCacheEntries)
Set the maximum number of cached file entries allowed in the database cache.
This command takes a single argument:
- maxCacheEntries. The number of cached file entries. This argument must be an integer greater than zero.
For example:
SearchCellConfig.setMaxCacheEntries("1000")- SearchCellConfig.setMaximumConcurrentDownloads(int maxConcurrentDownloads)
Set the maximum number of threads that perform file downloading on a Search server.
This command takes a single argument that specifies the maximum number of threads. The argument must be an integer greater than zero. The default value is 3. The value of the maxConcurrentDownloads argument must not exceed the maximum number of threads set for the DefaultWorkManager Work Manager resources at the Search server scope.
CAUTION: Increasing this value increases the load on the Files server.
For example:
SearchCellConfig.setMaximumConcurrentDownloads("10")
- SearchCellConfig.setMaximumTempDirSize(int maxTempDirSize)
Set the maximum size of a temporary directory used by a Search server for the files conversion process.
This command takes a single argument that specifies the maximum size in bytes. The argument must be an integer greater than zero. The default value is 100 MB.
Files are downloaded to a temporary directory, which is located in the index directory. The temporary directory size available must be greater than the maximum file size allowed for content indexing.
For example:
SearchCellConfig.setMaximumTempDirSize("51200")
- SearchCellConfig.setDownloadThrottle(long downloadThrottle)
Set the duration of a rest period between successive file downloads in a single file-download thread.
This command takes a single argument that specifies the download throttle in milliseconds.
The download throttle is set to 500 by default.
CAUTION: Decreasing this value increases the load on the Files server.
For example:
SearchCellConfig.setDownloadThrottle("500")
- Check in the updated search-config.xml configuration file using the following wsadmin client command:
SearchCellConfig.checkInConfig()
- To exit the wsadmin client, type exit at the prompt.
- Stop the server or servers hosting the Search application, delete the index, and then restart the Search servers. The next time the scheduled task runs, it recreates the index.
Supported MIME types
Search supports the indexing of content from a number of MIME types. When a files indexing task is run, the document conversion service downloads files, converts them to plain text, and then indexes the content. During this process, content from the following MIME types is indexed:
Table 43. MIME types indexed by Search
- application/msword
- application/vnd.ms-excel
- application/vnd.ms-powerpoint
- application/vnd.visio
- application/vnd.ms-project
- application/vnd.openxmlformats-officedocument.spreadsheetml.sheet
- application/vnd.openxmlformats-officedocument.presentationml.presentation
- application/vnd.openxmlformats-officedocument.wordprocessingml.document
- application/pdf
- application/postscript
- application/xhtml+xml
- application/xml
- text/html
- text/htm
- text/plain
- text/richtext
- text/xml
- application/rtf
- application/vnd.oasis.opendocument.text
- application/vnd.oasis.opendocument.spreadsheet
- application/vnd.oasis.opendocument.presentation
- application/vnd.oasis.opendocument.text-master
- application/vnd.lotus-1-2-3
- application/vnd.lotus-wordpro
- application/vnd.lotus-freelance
Configure the number of crawling threads
Edit settings in the search-config.xml file to specify the maximum number of threads used when crawling. Do not specify more crawling threads than the number of applications installed in your deployment.
To edit configuration files, use the wsadmin client.
By default, the maximum number of threads allowed when crawling is 2, however you can change this value by modifying the search-config.xml file. When you change the maximum number of crawling threads, you might also need to adjust the thread settings for the SearchCrawlingWorkManager on each node. The Search application will use whichever setting is lower.
For more information about updating Search work managers, see Updating Search work manager settings.
To update the maximum number of crawling threads that can be used when crawling....
- Start the wsadmin client from the following directory of the deployment manager:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment initializes, run the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
When the command runs successfully, the following message displays:
Search Administration initialized
- Check out the Search cell-level configuration file, search-config.xml:
SearchCellConfig.checkOutConfig("working_dir", "cellName")
where:
- working_dir is the temporary directory to which you want to check out the cell level configuration file. This directory must exist on the server where you are running the wsadmin client. Use forward slashes to separate directories in the file path, even if you are using the Microsoft Windows operating system.
AIX, Linux, and IBM i only: The directory must grant write permissions or the command will not run successfully.
- cellName is the name of the cell that the Search node belongs to. This argument is required and is case-sensitive, so type it with care. To determine the cell name, run the following wsadmin command:
print AdminControl.getCell()
For example:
SearchCellConfig.checkOutConfig("c:/search_temp", "SearchCell01")
- Use the following command:
- SearchCellConfig.setMaxCrawlerThreads(String maxThreadNumber)
Specifies the maximum number of seedlist threads that can be used when crawling. By default, the value is set to 2.
This command takes a single argument that specifies the number of threads allowed.
For example:
SearchCellConfig.setMaxCrawlerThreads("3")- Check in the updated search-config.xml configuration file using the following wsadmin client command:
SearchCellConfig.checkInConfig()
- To exit the wsadmin client, type exit at the prompt.
- Stop the server or servers hosting the Search application, delete the index, and then restart the Search servers. The next time the scheduled task runs, it recreates the index.
Configure the number of indexing threads
Edit settings in the search-config.xml file to specify the maximum number of threads used when indexing.
To edit configuration files, use the wsadmin client.
By default, the maximum number of threads allowed when indexing is 1, however you can change this value by modifying the search-config.xml file. When you change the maximum number of indexing threads, you might also need to adjust the thread settings for the SearchIndexingWorkManager on each node. The Search application will use whichever setting is lower.
For more information about updating Search work managers, see Updating Search work manager settings.
To update the maximum number of threads that can be used when indexing....
- Start the wsadmin client from the following directory of the deployment manager:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment initializes, run the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
When the command runs successfully, the following message displays:
Search Administration initialized
- Check out the Search cell-level configuration file, search-config.xml:
SearchCellConfig.checkOutConfig("working_dir", "cellName")
where:
- working_dir is the temporary directory to which you want to check out the cell level configuration file. This directory must exist on the server where you are running the wsadmin client. Use forward slashes to separate directories in the file path, even if you are using the Microsoft Windows operating system.
AIX, Linux, and IBM i only: The directory must grant write permissions or the command will not run successfully.
- cellName is the name of the cell that the Search node belongs to. This argument is required and is case-sensitive, so type it with care. To determine the cell name, run the following wsadmin command:
print AdminControl.getCell()
For example:
SearchCellConfig.checkOutConfig("c:/search_temp", "SearchCell01")- Enter the following command:
- SearchCellConfig.setMaxIndexerThreads(String maxThreadNumber)
Specifies the maximum number of indexer threads that can be used when indexing. By default, the value is set to 1.
This command takes a single argument that specifies the number of threads allowed.
For example:
SearchCellConfig.setMaxIndexerThreads("3")- Check in the updated search-config.xml configuration file using the following wsadmin client command:
SearchCellConfig.checkInConfig()
- To exit the wsadmin client, type exit at the prompt.
- Stop the server or servers hosting the Search application, delete the index, and then restart the Search servers. The next time the scheduled task runs, it recreates the index.
Manage the Search application
You can perform the following tasks when managing the Search application.
Administer the social analytics service
The social analytics widgets that are available from Communities, Profiles, and the Home page use the Search application as a data provider.
The social analytics framework used by the widgets analyzes the social elements of the Search application to generate an index and map complex relationships between users and content. This mapping information is stored with the Search index, and is leveraged to provide users with recommendations of content that might interest them.
Social analytic relationships
The social analytics service analyzes complex relationships between people, documents, and tags in Connections applications, and uses the results of the analysis to make recommendations to users in the social analytic widgets. These relationships and associations control the type of recommendations that are displayed to users in the widgets.
Two specific concepts are involved in the analysis of social data in Connections: associations and relationships. A social analytic association type refers to the association that a facet has to an indexed document. In the context of social analytic relationship configuration, associations are typically concerned with associations between people and documents. Associations have a weighting associated with them and this weighting is used to compute related facets for a search query.
A facet is an aspect of an indexed document that can be used to classify a document. The types of facet available in Connections Search include date, source, people, and tags. A document can have more than one instance of a facet type.
For example, a document can have many person facets associated with it.
A social analytic relationship refers to the relationship between a document and the association of two people to that document. The relationship takes into account the type of document and also the role that the two people have in relation to that document. The type of relationship changes depending on who is the query person and who is the target person.
For example, the relationship with the internal identifier le1 denotes a first-level employee relationship. Let’s say that John is Peter’s manager. In this instance, John is the target person and Peter is the query person. However, from John’s perspective, this is a first-level manager relationship, which has the internal identifier lm1. Peter is John’s employee, making Peter the target person and John the query person.
The relationships used in the social analytics service typically concern the people-to-people relationships that are evaluated when generating the list of related people included in the search results for a person query. Each of the social analytic widgets provided with Connections uses its own set of relationships, called a relationship set, for recommending content, communities, and people.
Listing social analytics scheduled tasks
You can use a SearchService administrative command to list the tasks that are scheduled for the social analytics service.
To use SearchService administrative commands, use the wsadmin client.
See Start wsadmin
To list the scheduled tasks defined for the social analytics service....
- Start the wsadmin client from the following directory of the deployment manager:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment initializes, run the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored.
When the command runs successfully, the following message displays:
Search Administration initialized
- Use the following command:
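A minimal sketch of this step, assuming the generic SearchService.listTasks() command, which lists the Search tasks defined in the Home page database; social analytics tasks, such as the default nightly-sand-task, appear in the returned list:
SearchService.listTasks()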
Add scheduled tasks for the social analytics service
Use SearchService administrative commands to schedule social analytics tasks in the Home page database. A nightly task is scheduled to run after the optimize task by default. Every time the social analytics scheduled task runs, the index for the social analytics service is recreated.
To use SearchService administrative commands, use the wsadmin client.
See Start wsadmin
The social analytics indexing process includes the following jobs. You can schedule these jobs individually or in a batch.
- evidence
- Builds the evidence index, which links people to results and maps user connections.
- graph
- Builds the graph of connections between users.
- manageremployees
- Provides details of manager relationships so that people's relationships through their management can be identified.
For example, when two people share a second line manager.
- tags
- Generates index documents for each used tag and stores the list of users that have used that tag.
- taggedby
- Creates relationships between the users who have tagged each other's profiles.
- communitymembership
- Creates relationships between the users who are members of the same community.
Communities that have more than 100 members are skipped. These communities will not be recommended to users.
When defining a social analytics scheduled task in the Home page database, you need to specify when the scheduler starts the task. The schedule is defined using a Cron schedule.
For more information about the scheduler, see Scheduling tasks.
It is not possible to specify an end time for a scheduled task. All tasks run as long as they need to. The startBy interval defines the window within which a task must fire; if the task has not fired by the end of that window, it is automatically canceled.
This mechanism ensures that tasks do not queue up for an overly long period before being canceled, and it allows for tasks that run for longer than the default indexing schedule, such as initial index creation.
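As an illustration of the Cron format (assuming the six-field layout of seconds, minutes, hours, day of month, month, and day of week used by the WebSphere scheduler), the schedule and startBy strings used in the example later in this topic break down as follows:
- 0 0 1 ? * MON-FRI: fire at 01:00:00, on any day of the month, in every month, Monday through Friday.
- 0 10 1 ? * MON-FRI: the matching startBy deadline of 01:10:00 on the same days; if the task has not fired by that time, it is canceled.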
To define a social analytics scheduled task....
- Start the wsadmin client from the following directory of the system on which you installed the Deployment Manager:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment has initialized, enter the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored. When the command runs successfully, the following message displays:
Search Administration initialized
- Use the following command:
- SearchService.addSandTask(String taskName, String schedule, String startBy, String jobs)
Creates a new scheduled task definition for the social analytics service in the Home page database.
This command takes the following arguments:
All the arguments are required.
- taskName. The name of the scheduled task. This argument is a string value, which must be unique.
- schedule. The time at which the scheduled task starts. This argument is a string value that must be specified in Cron format.
- startBy. The time given to a task to fire before it is automatically canceled. This argument is a string value that must be specified in Cron format.
Use this parameter to ensure that scheduled tasks do not queue up and run into periods when the server is busy. Under normal conditions, the only factors that might delay a task are overlapping or coincident tasks trying to fire at the same time, or an earlier task that runs for a long time.
- jobs. The name, or names, of the jobs to be run when the task is triggered. This argument is a string value. To index multiple jobs, use a comma-delimited list. The following values are valid: evidence, graph, manageremployees, tags, taggedby, and communitymembership.
For example:
SearchService.addSandTask("customSaNDIndexTask", "0 0 1 ? * MON-FRI", "0 10 1 ? * MON-FRI", "evidence,graph,manageremployees,tags,taggedby,communitymembership")When the command runs successfully, 1 is printed to the wsadmin console.
If the command does not run successfully, 0 is printed to the wsadmin console.
You can also use the SearchService.addSandTask command to replace the nightly-sand-task that is automatically configured when you install Connections. By default, the task runs nightly at 01:00. To replace the default SAND task settings, first remove the existing task using the SearchService.deleteTask(String taskName) command. Then use the SearchService.addSandTask command to create a new SAND task with the values that you specify.
For example:
SearchService.deleteTask("nightly-sand-task") SearchService.addSandTask("nightly-sand-task", "0 0 1 * * ?", "0 5 1 * * ?", "evidence,graph,manageremployees,tags,taggedby,communitymembership")- To refresh the Home page database to include the newly-added tasks:
SearchService.refreshTasks()
Run one-off social analytics scheduled tasks
Use the SearchService.sandIndexNow command to create a one-off scheduled task for the social analytics service. The task is scheduled to run once and only once, 30 seconds after being called.
To use SearchService administrative commands, use the wsadmin client.
See Start wsadmin
The social analytics indexing process includes the following jobs. You can schedule these jobs individually or in a batch.
- evidence
- Builds the evidence index, which links people to results and maps user connections.
- graph
- Builds the graph of connections between users.
- manageremployees
- Provides details of manager relationships so that people's relationships through their management can be identified.
For example, when two people share a second line manager.
- tags
- Generates index documents for each used tag and stores the list of users that have used that tag.
- taggedby
- Creates relationships between the users who have tagged each other's profiles.
- communitymembership
- Creates relationships between the users who are members of the same community.
Communities that have more than 100 members are skipped. These communities will not be recommended to users.
To run a one-off social analytics scheduled task...
- Start the wsadmin client from the following directory of the system on which you installed the Deployment Manager:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment has initialized, enter the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored. When the command runs successfully, the following message displays:
Search Administration initialized
- Use the following command:
- SearchService.sandIndexNow(String jobs)
Creates a one-off social analytics task that indexes the specified services 30 seconds after being called.
This command takes a single argument:
- jobs. The name, or names, of the jobs to be run when the task is triggered. This argument is a string value. To run multiple jobs, use a comma-delimited list. The following values are valid: evidence, graph, manageremployees, tags, taggedby, and communitymembership.
For example:
SearchService.sandIndexNow("evidence,graph,manageremployees,tags,taggedby,communitymembership")
Tuning social analytics indexing
Use a SearchCellConfig command to configure the number of iterations used by the different jobs involved in the social analytics indexing process.
When using administrative commands, use the wsadmin client.
The social analytics indexing process includes a number of jobs. The work for these jobs is divided up based on iterations. To improve performance, you can configure the number of iterations specified for a particular job based on the needs of your deployment.
For example, reducing the number of iterations results in faster performance but is more memory-intensive.
For more information about the social analytics indexing jobs, see Adding scheduled tasks for the social analytics service.
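As a rough illustration of the trade-off: if the graph job has to process data for 40,000 users and is configured with 400 iterations, each iteration handles on the order of 100 users; halving the iterations to 200 roughly doubles the amount of work done per iteration, which shortens the overall run but increases the memory that each iteration consumes. The figures are illustrative only; actual batch sizes depend on your data.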
To tune the social analytics indexing process....
- Start the wsadmin client from the following directory of the system on which you installed the Deployment Manager:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment has initialized, enter the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored. When the command runs successfully, the following message displays:
Search Administration initialized
- Check out the Search cell-level configuration file, search-config.xml:
SearchCellConfig.checkOutConfig("working_dir", "cellName")
where:
- working_dir is the temporary directory to which you want to check out the cell level configuration file. This directory must exist on the server where you are running the wsadmin client. Use forward slashes to separate directories in the file path, even if you are using the Microsoft Windows operating system.
AIX, Linux, and IBM i only: The directory must grant write permissions or the command will not run successfully.
- cellName is the name of the cell that the Search node belongs to. This argument is required. It is also case-sensitive, so type it with care. To determine the cell name, run the following wsadmin command:
print AdminControl.getCell()
For example:
SearchCellConfig.checkOutConfig("c:/search_temp", "SearchCell01")
- Use the following command:
- SearchCellConfig.setSandIndexerTuning(String indexer, Int iterations)
Set the number of iterations used by a specified social analytics job.
This command takes the following arguments:
- indexer. A string that specifies the name of the social analytics indexing job. The following values are valid: evidence, graph, manageremployees, and tags.
- iterations. An integer that specifies the number of iterations for the specified social analytics indexing job.
For example:
SearchCellConfig.setSandIndexerTuning("manageremployees",200) SearchCellConfig.setSandIndexerTuning("graph",400)- Check in the updated search-config.xml configuration file using the following wsadmin client command:
SearchCellConfig.checkInConfig()
- To exit the wsadmin client, type exit at the prompt.
- Stop the server or servers hosting the Search application, delete the index, and then restart the Search servers. The next time the scheduled task runs, it recreates the index.
Create a background index for the social analytics service
Use the SearchService.startBackgroundSandIndex command to perform background indexing for the social analytics service.
To use SearchService administrative commands, use the wsadmin client.
See Start wsadmin
The SearchService.startBackgroundSandIndex command allows you to create a background index for the social analytics service in a specified location. When you run this command, you can specify the social analytics jobs that run as part of the background indexing process.
As an alternative to running this command, you can specify an additional parameter when you are running the SearchService.startBackgroundIndex command so that the social analytic indexers run at the end of the background indexing operation.
For more information, see Creating a background index.
To create a background index for the social analytics service....
- Start the wsadmin client from the following directory of the system on which you installed the Deployment Manager:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment has initialized, enter the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored. When the command runs successfully, the following message displays:
Search Administration initialized
- Use the following command:
- SearchService.startBackgroundSandIndex(String indexLocation, String jobs)
Creates a background index for the social analytics service in the specified location. This command must only be run against an index that already has content indexed from the Connections applications and the ECM service.
This command takes the following arguments:
- indexLocation
- A string value that specifies the location where you want to create the background index.
- jobs
- A string value that specifies the names of the social analytics post-processing indexers that examine, index, and produce new output based on the data in the index. The following values are valid: evidence, graph, manageremployees, tags, taggedby, and communitymembership. Use a comma to separate multiple values.
For example:
SearchService.startBackgroundSandIndex("/bkg2/index/","communitymembership,graph")
Configure global properties for the social analytics service
Use SearchService commands to list, add, update, or delete global properties for the social analytics service.
To use SearchService administrative commands, use the wsadmin client.
See Start wsadmin
You can have greater control over the social analytics service by configuring dynamic, global properties that affect the social analytics API or indexing behavior.
For example, you might want to configure the property that defines the frequency threshold of tags so that you can tune out popular tags from the recommendations provided to users.
To configure global properties for the social analytics service....
- Start the wsadmin client from the following directory of the system on which you installed the Deployment Manager:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment has initialized, enter the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored. When the command runs successfully, the following message displays:
Search Administration initialized
- Use the following commands to configure social analytics properties:
- SearchService.listGlobalSandProperties()
Lists all global properties for the social analytics service.
The properties are returned as a mapping of keys to values.
For example, the following output indicates that the value of the sand.tag.freq.threshold property is 32000.
{sand.tag.freq.threshold=32000}
- SearchService.setGlobalSandIntegerProperty(String propertyName, String integerPropertyValue)
Adds or updates a dynamic global social analytics property that affects the social analytics API or indexing behavior. The changes take place when the next social analytics indexing job starts.
When the property is successfully added or updated, 1 is printed to the wsadmin console. If the property is not successfully added or updated, 0 is printed to the wsadmin console.
If this happens, contact the Search cluster administrator and check the SystemOut.log file for more details.
Currently, support is provided only for the sand.tag.freq.threshold social analytics property. This property takes an integer value.
The property is used by the Recommend API algorithm as follows:
- Get the people and tags to which the user is related.
- If the tag has a frequency in the Search index that is greater than or equal to the value specified for the sand.tag.freq.threshold property, discard it. This action prevents users from getting recommendations based on common tags, that is, tags that have a high frequency.
- Get the documents with which the people and tags gathered in the first query are associated.
- Return the results to the user.
For example:
SearchService.setGlobalSandIntegerProperty("sand.tag.freq.threshold",100)This setting is global and will affect all Connections users. The setting should only be changed by an administrator.
You can consult the SystemOut.log file when social analytics indexing begins to check the frequency distribution of the most popular 100 tags in the system.
For example, in line 1 of the following extract, you can see that the tag brown has ordinal 1718 in the index (an ordinal is a facet identifier) and that it has a frequency of 1, which means that there is only one instance of a document being tagged with the keyword brown in the index.
[5/30/11 15:41:13:544 IST] 00000025 CommonTagsCac I com.ibm.lotus.connections.sand.tags.impl.CommonTagsCache buildCacheUsingTermEnum {1718:brown:1}
[5/30/11 15:41:13:548 IST] 00000025 CommonTagsCac I com.ibm.lotus.connections.sand.tags.impl.CommonTagsCache buildCacheUsingTermEnum {1730:summaries:1}
[5/30/11 15:41:13:551 IST] 00000025 CommonTagsCac I com.ibm.lotus.connections.sand.tags.impl.CommonTagsCache buildCacheUsingTermEnum {1737:public_holiday:1}
[5/30/11 15:41:13:554 IST] 00000025 CommonTagsCac I com.ibm.lotus.connections.sand.tags.impl.CommonTagsCache buildCacheUsingTermEnum {1721:chronicle:1}
[5/30/11 15:41:13:558 IST] 00000025 CommonTagsCac I com.ibm.lotus.connections.sand.tags.impl.CommonTagsCache buildCacheUsingTermEnum {1716:hollis:1}
[5/30/11 15:41:13:561 IST] 00000025 CommonTagsCac I com.ibm.lotus.connections.sand.tags.impl.CommonTagsCache buildCacheUsingTermEnum {1700:inquirer:1}
[5/30/11 15:41:13:565 IST] 00000025 CommonTagsCac I com.ibm.lotus.connections.sand.tags.impl.CommonTagsCache buildCacheUsingTermEnum {1684:gazette:5}
[5/30/11 15:41:13:568 IST] 00000025 CommonTagsCac I com.ibm.lotus.connections.sand.tags.impl.CommonTagsCache buildCacheUsingTermEnum {1679:ibm:7}
[5/30/11 15:41:13:572 IST] 00000025 CommonTagsCac I com.ibm.lotus.connections.sand.tags.impl.CommonTagsCache buildCacheUsingTermEnum Cache:{1679=7, 1684=5, 1700=1, 1716=1, 1718=1, 1721=1, 1730=1, 1737=1}
[5/30/11 15:41:13:633 IST] 00000025 IndexBuilderQ I com.ibm.lotus.connections.search.admin.index.impl.IndexBuilderQueue startSaNDIndexingService CLFRW0483I: SAND indexing has started.
- SearchService.deleteGlobalSandProperty(String propertyName)
Delete the specified global social analytics property.
For example:
SearchService.deleteGlobalSandProperty("sand.tag.freq.threshold")When the property is successfully added or updated, 1 is printed to the wsadmin console. If the property is not successfully added or updated, then you will see 0 printed to the wsadmin console.
If this happens, contact the Search Cluster Administration and check the SystemOut.log file for more details.
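If you want to summarize the tag frequency distribution from the SystemOut.log extract shown above rather than read the raw entries, a small helper script can do so. This is a hypothetical utility, not part of Connections; the file name and parsing assumptions are illustrative only.
# tag_freq.py - hypothetical helper: summarizes CommonTagsCache entries of the
# form {ordinal:tag:frequency} from a SystemOut.log extract to help choose a
# value for the sand.tag.freq.threshold property.
import re
import sys

ENTRY = re.compile(r"\{(\d+):([^:}]+):(\d+)\}")

def tag_frequencies(lines):
    freqs = {}
    for line in lines:
        for ordinal, tag, freq in ENTRY.findall(line):
            freqs[tag] = int(freq)
    return freqs

if __name__ == "__main__":
    log = open(sys.argv[1])
    try:
        freqs = tag_frequencies(log)
    finally:
        log.close()
    # Print tags from most to least frequently used.
    for tag, freq in sorted(freqs.items(), key=lambda item: -item[1]):
        print("%-30s %d" % (tag, freq))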
Excluding specific users from the social analytics service
Use SearchService commands to control whether specific users are included or excluded from the social analytics service. All users are included in the social analytics service by default.
To edit configuration files, use the wsadmin client.
You can use SearchService commands to exclude a specified user from the social analytics service. When a user is excluded from the social analytics service, the service does not build or infer relationships between that user and other users in the organization. The exempted user:
- Is not returned as a related person in search results.
- Is not recommended in the Do You Know widget.
- Is not displayed as a link between two people in the Who Connects Us widget.
When the administrator excludes a user from the social analytics service, the user still receives recommendations from the Recommendations widgets in Communities and the Home page because these recommendations are based on the user's collaboration history with other people in the organization rather than on the user's social network.
The list of users who are included in the social analytics service is processed when the Search application starts up, and the list is only refreshed on completion of social analytics indexing tasks.
To exclude specific users from the social analytic widget service....
- Start the wsadmin client from the following directory of the system on which you installed the Deployment Manager:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment has initialized, enter the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored. When the command runs successfully, the following message displays:
Search Administration initialized
- To exclude a user from the social analytics service, enter one of the following commands.
- SearchService.optOutOfSandByEmail(String email)
Excludes the user with the specified email address from the social analytics service.
This command takes a single argument:
- email. The email address of the user who is to be excluded from the social analytics service. This argument is a string value.
For example:
SearchService.optOutOfSandByEmail("ajones10@example.com")When the command runs successfully, 1 is printed to the wsadmin console.
If the command does not run successfully, 0 is printed to the wsadmin console.
- SearchService.optOutOfSandByExId(String externalId)
Excludes the user with the specified external ID from the social analytics service.
This command takes a single argument:
- externalId. The external ID of the user who is to be excluded from the social analytics service. This argument is a string value.
For example:
SearchService.optOutOfSandByExId("11111-1111-1111-1111")When the command runs successfully, 1 is printed to the wsadmin console.
If the command does not run successfully, 0 is printed to the wsadmin console.
- To re-enable a user for the social analytics service, use one of the following commands.
- SearchService.optIntoSandByEmail(String email)
Includes the user with the specified email address in the social analytics service.
This command takes a single argument:
- email. The email address of the user who is to be included in the social analytics service. This argument is a string value.
For example:
SearchService.optIntoSandByEmail("ajones10@example.com")When the command runs successfully, 1 is printed to the wsadmin console.
If the command does not run successfully, 0 is printed to the wsadmin console.
- SearchService.optIntoSandByExId(String externalId)
Includes the user with the specified external ID in the social analytics service.
This command takes a single argument:
- externalId. The external ID of the user who is to be included in the social analytics service. This argument is a string value.
For example:
SearchService.optIntoSandByExId("11111-1111-1111-1111")When the command runs successfully, 1 is printed to the wsadmin console.
If the command does not run successfully, 0 is printed to the wsadmin console.
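If you need to exclude several users at once, the documented commands can be combined in a short wsadmin loop. A minimal sketch, assuming a hand-maintained Jython list of email addresses (the addresses shown are placeholders):
emails = ["ajones10@example.com", "bsmith2@example.com"]   # placeholder addresses
for email in emails:
    print "Opting out: " + email
    SearchService.optOutOfSandByEmail(email)   # prints 1 on success, 0 on failure
The same pattern works with SearchService.optIntoSandByEmail to re-enable a batch of users.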
Add an additional Search node to a cluster
You can add another Search node to a cluster that already contains a Search node, either for load balancing or to provide a backup node for redundancy.
IBM WebSphere Application Server Network Deployment (Application Server option) must be installed on the new node.
To add a second Search node to a cluster, complete these steps.
- Add a new node to the Deployment Manager cell by doing the following:
- Log in to the new node.
- Change to the bin directory of the local WebSphere Application Server profile:
WAS_HOME/profiles/profile_name/bin
Where profile_name is the name of the applicable WebSphere Application Server profile on this node.
- Run the addNode command to add the node to the Deployment Manager cell:
addNode [DmgrHost] [dmgr_port] [-username uid] [-password pwd] [-localusername localuid] [-localpassword localpwd]
Where:
- DmgrHost is the host name of the Deployment Manager.
- dmgr_port is the SOAP port of the Deployment Manager. The default is 8879.
- uid and pwd are the dmgr administrator user name and password.
- localuid and localpwd are the user name and password for the node's WebSphere Application Server administrator.
- Open the addNode.log file and confirm that the node was successfully added to the Deployment Manager cell. The file is stored in:
WAS_HOME/profiles/profile_name/logs/addNode.log
- Copy the relevant JDBC files from the Deployment Manager node to the new node, placing them in the same location as the JDBC files on the Deployment Manager.
For example, if you copied the db2jcc.jar file from the C:\IBM\SQLLIB directory on the Deployment Manager, you need to copy the same file to the C:\IBM\SQLLIB directory on the new node.
Use the following table to determine which files to copy.
Table 44. JDBC files
Database type           JDBC files
DB2                     db2jcc.jar, db2jcc_license_cu.jar
Oracle                  ojdbc6.jar
Microsoft SQL Server    sqljdbc4.jar
- Ensure that the shared folders that are used for the application content stores in the cluster are accessible from the new node.
- Add the node as an additional member to the cluster.
- Log in to the Deployment Manager Integration Solutions Console.
- Select Servers>Clusters>cluster_name>Cluster members>New.
- Specify the following information about the new cluster member:
- Member name
- The name of the server instance created for the cluster.
The Deployment Manager creates a new server instance with this name.
Each member name in the same cluster must be unique.
- Select node
- The node where the server instance resides.
- Click Add Member to add this member to the cluster member list.
- Click Next to go to the summary page where you can examine detailed information about the cluster member. Click Finish to complete this step or click Previous to modify the settings.
- Click Save to save the configuration.
- Select Servers>Clusters>cluster_name>Cluster members. In the member list, click the new member that you added in the previous step.
- On the detailed configuration page, click Ports to expand the port information of the member. Make a note of the WC_defaulthost and WC_defaulthost_secure port numbers.
For example, the WC_defaulthost port number is typically 9084, while the WC_defaulthost_secure port number is typically 9447.
- Select Environment>Virtual Hosts>default_host>Host Aliases>New. Enter the following information for the host alias for the WC_defaulthost port:
- Host name
- The IP address or DNS host name of the node where the new member resides.
- Port:
- The port number for WC_defaulthost.
For example, 9084.
- Click OK to complete the virtual host configuration, and then click Save to save the configuration.
- Repeat the previous two substeps to add the host alias for the WC_defaulthost_secure port.
- Select System administration>Nodes.
- In the node list page, select all the nodes where the target cluster members reside, and then click Synchronize to perform a synchronization between the nodes.
- Using the WAS admin console, stop the Search application on the new node.
- Stop all the previously existing nodes that are running the Search application.
- Copy the index directory from one of the existing nodes to the new node.
- Restart all the nodes that are running Search.
- Configure IBM HTTP Server to connect to the new node.
For more information, see Configure IBM HTTP Server and Defining IBM HTTP Server.
- Copy Search conversion tools to local nodes and configure the path variables to point to the Search application.
For more information, see Copying Search conversion tools to local nodes.
- Create Search work managers for the newly added node.
For more information, see Creating work managers for Search.
If you experience interoperability failure, you might be running two servers on the same host with the same name. This problem can cause the Search and News applications to fail.
For more information, see application access problems in the WebSphere Application Server information center.
Create work managers for Search
When you add a new node to your deployment after installing Connections, you need to manually create Search work managers for the newly-added node.
When you install Connections, the following work managers are automatically created for Search on each node in your deployment:
- SearchCrawlingWorkManager
- Handles the work involved in crawling the seedlists to persist them to disk.
- SearchDCSWorkManager
- Handles the work for the file content retrieval and conversion task.
- SearchIndexingWorkManager
- Handles the work involved in processing the entries in persisted seedlists into Lucene documents.
If you subsequently add a new node, you need to create these work managers manually.
For more information about creating work managers, see the WebSphere Application Server information center.
To create work managers for Search....
- Open the WAS admin console on the node where you want to create the work managers.
- Select Resources > Asynchronous beans > Work managers.
- Select the node where you want to create the work managers from the All scopes list and then click New.
- Enter one of the following display names in the Name field:
Table 45. Work manager names
Work manager                 Name
SearchIndexingWorkManager    SearchIndexingWorkManager
SearchCrawlingWorkManager    SearchCrawlingWorkManager
SearchDCSWorkManager         SearchDCSWorkManager
- Enter one of the following values in the JNDI Name field:
Table 46. JNDI Names
Work manager                 JNDI Name
SearchIndexingWorkManager    wm/search-indexing
SearchCrawlingWorkManager    wm/search-crawling
SearchDCSWorkManager         wm/search-dcs
- Select all the options under Service names.
- Specify the following values under Thread pool properties:
Table 47. Thread pool property settings
Thread pool property         Value
Number of alarm threads      5
Minimum number of threads    1
Maximum number of threads    10
Thread Priority              5
- Deselect the Growable check box and then click OK to save your configuration.
- Repeat steps 3-8 to create each of the three work managers required on the node.
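If you prefer to script this step instead of using the console, the work managers can in principle be created with wsadmin. The following is an unverified sketch only: the WorkManagerInfo type, attribute names, and the node-scoped WorkManagerProvider lookup are assumptions based on general WebSphere scripting conventions, and the Service names selections from the console procedure are not set here; verify against your own cell before using it.
# Unverified sketch: create the three Search work managers at node scope.
nodeName = "searchNode01"   # placeholder node name
provider = AdminConfig.getid("/Node:" + nodeName + "/WorkManagerProvider:WorkManagerProvider/")
managers = [["SearchIndexingWorkManager", "wm/search-indexing"],
            ["SearchCrawlingWorkManager", "wm/search-crawling"],
            ["SearchDCSWorkManager", "wm/search-dcs"]]
for entry in managers:
    attrs = [["name", entry[0]], ["jndiName", entry[1]],
             ["numAlarmThreads", "5"], ["minThreads", "1"], ["maxThreads", "10"],
             ["threadPriority", "5"], ["isGrowable", "false"]]
    AdminConfig.create("WorkManagerInfo", provider, attrs)
AdminConfig.save()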
Update Search work manager settings
Update the settings for the work managers used by the Search application.
When you install Connections, the following work managers are automatically created for Search on each node in your deployment:
- SearchCrawlingWorkManager
- Handles the work involved in crawling the seedlists to persist them to disk.
- SearchDCSWorkManager
- Handles the work for the file content retrieval and conversion task.
- SearchIndexingWorkManager
- Handles the work involved in processing the entries in persisted seedlists into Lucene documents.
The Search application also uses the DefaultWorkManager at certain times, for example, for background indexing.
You can update the settings for the SearchCrawlingWorkManager, SearchDCSWorkManager, and SearchIndexingWorkManager from the WAS admin console.
For example, when you change the maximum number of crawling threads in the search-config.xml file, you might also need to adjust the thread settings for the SearchCrawlingWorkManager on each node.
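As an illustration, if you increase the maximum number of crawling threads in search-config.xml to 20, you would typically also raise the Maximum number of threads on the SearchCrawlingWorkManager to at least 20 on each Search node; otherwise the work manager's thread pool (10 threads by default, per the thread pool settings listed in the previous topic) caps the number of concurrent crawls regardless of the search-config.xml value. This pairing is illustrative rather than a tuning recommendation.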
To update settings for the Search work managers....
- Open the WebSphere Integrated Solutions Console on the node where you want to update the work manager settings.
- Expand Resources and then select Asynchronous beans > Work managers.
- Select the work manager to update.
- Update the settings as needed.
- Click Apply and then click OK to save the new settings.
Reloading the Search application
After making configuration changes to Search, you can use SearchService commands to reload the Search index and configuration, and avoid the need for restarting the Search application.
To use SearchService administrative commands, use the wsadmin client.
You can use the SearchService commands for reloading Search after running SearchCellConfig commands if it is not feasible to restart the Search application. You might want to use the commands for reloading the index as part of restoring a Search index backup if it is not feasible to stop the Search application.
To reload the Search application....
- Start the wsadmin client from the following directory of the system on which you installed the Deployment Manager:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment has initialized, enter the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored. When the command runs successfully, the following message displays:
Search Administration initialized
- Use the following commands to reload the Search configuration and index:
- SearchService.reloadSearchConfiguration()
Reloads the search-config.xml file for Search on the current node only without a restart of the Search application.
If you are making changes to the configuration of the social analytics service, you still need to restart Search to apply the changes.
When the command runs successfully, 1 is printed to the wsadmin console. If the command does not run successfully, 0 is printed to the wsadmin console.
- SearchService.reloadSearchConfigurationAllNodes()
Reloads the search-config.xml file for Search on all nodes in the cluster without a restart of the Search application.
If you are making changes to the configuration of the social analytics service, you still need to restart Search to apply the changes.
When the command runs successfully, 1 is printed to the wsadmin console. If the command does not run successfully, 0 is printed to the wsadmin console.
- SearchService.reloadIndex()
Reloads the Search index on the current node only without a restart of the Search application.
If you are making changes to the configuration of the social analytics service, you still need to restart Search to apply the changes.
When the command runs successfully, 1 is printed to the wsadmin console. If the command does not run successfully, 0 is printed to the wsadmin console.
- SearchService.reloadIndexAllNodes()
Reloads the Search index on all the nodes in the cluster without a restart of the Search application.
If you are making changes to the configuration of the social analytics service, you still need to restart Search to apply the changes.
When the command runs successfully, 1 is printed to the wsadmin console. If the command does not run successfully, 0 is printed to the wsadmin console.
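As a usage illustration, when a restart is not feasible, the reload typically follows the configuration check-in; the working directory and cell name shown are examples only:
SearchCellConfig.checkOutConfig("/tmp/search_temp", "SearchCell01")
# ...run the SearchCellConfig command for the setting that you are changing...
SearchCellConfig.checkInConfig()
SearchService.reloadSearchConfigurationAllNodes()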
Configure page persistence settings
Edit settings to specify whether the persisted pages in a seedlist persistence directory are deleted after a successful incremental index. You can also update the maximum age for persisted pages.
To edit configuration files, use the wsadmin client.
By default, the pages saved in a seedlist persistence directory are deleted after a successful incremental index. To speed up the indexing process when you have a large data set, you can also configure seedlist persistence settings so that pages over a specified age are not included when building an index or resuming a crawl.
To configure page persistence settings....
- Start the wsadmin client from the following directory of the system on which you installed the Deployment Manager:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment has initialized, enter the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored. When the command runs successfully, the following message displays:
Search Administration initialized
- Check out the Search cell-level configuration file, search-config.xml:
SearchCellConfig.checkOutConfig("working_dir", "cellName")
where:
- working_dir is the temporary directory to which you want to check out the cell level configuration file. This directory must exist on the server where you are running the wsadmin client. Use forward slashes to separate directories in the file path, even if you are using the Microsoft Windows operating system.
AIX, Linux, and IBM i only: The directory must grant write permissions or the command will not run successfully.
- cellName is the name of the cell that the Search node belongs to. This argument is required. It is also case-sensitive, so type it with care. To determine the cell name, run the following wsadmin command:
print AdminControl.getCell()
For example:
SearchCellConfig.checkOutConfig("c:/search_temp", "SearchCell01")
- Use the following commands:
- SearchCellConfig.setDeletePersistedPages(String enabled)
Specifies whether to delete the persisted pages after a successful incremental index. By default, the value is set to true.
This command takes a single argument:
- enabled
- A string that determines whether persisted pages are to be deleted after a successful incremental index. This string represents a boolean, that is, it should be set to true or false.
When this functionality is enabled, persisted pages from the initial index creation are also deleted after a successful incremental index.
For example:
SearchCellConfig.setDeletePersistedPages("false")- SearchCellConfig.setMaxPagePersistenceAge(String maxAgeInHours)
Specifies the maximum age for persisted pages in a seedlist persistence directory. By default, the value is set to 720 hours (30 days).
If the pages are older than the maximum age, they are ignored when building an index or resuming a crawl.
This command takes a single argument:
- maxAgeInHours
- A string representing an integer that specifies the maximum age in hours of the persisted pages.
For example:
SearchCellConfig.setMaxPagePersistenceAge("42")- Check in the updated search-config.xml configuration file using the following wsadmin client command:
SearchCellConfig.checkInConfig()
- To exit the wsadmin client, type exit at the prompt.
- Stop the server or servers hosting the Search application, delete the index, and then restart the Search servers. The next time the scheduled task runs, it recreates the index.
Avoiding unnecessary full search crawls
Use an administrative command to avoid the performance hit of unnecessary full search crawls.
Connections keeps records of deleted files. The seedlistSettings.maximumIncrementalQuerySpanInDays property in the LotusConnections-config.xml configuration file specifies the number of days for which these records are saved before they are deleted. The records can be deleted by the SearchClearDeletionHistory task after the number of days that are specified by the property.
You can avoid performance hits by making sure that deletion records are kept long enough to be read by the incremental search crawler. The incremental search crawler needs these deletion records to update the Search index. If the records are deleted before the incremental crawler reads them, updates will be incomplete. When the updates are incomplete, Connections performs a full crawl instead of an incremental crawl. Full crawls delete the existing Search index and create a new one, which is more time-consuming than incremental crawls. To avoid frequent full crawls, make sure that the value of the seedlistSettings.maximumIncrementalQuerySpanInDays property is higher than the span of days between incremental crawls.
For example, if incremental crawls happen every four days, ensure that the property value is higher than 4. This setting ensures that incremental crawls capture all deletion records. By default the Search crawling task runs every 15 minutes.
Incremental crawls must be left at the default schedule.
To avoid unnecessary full-search crawls....
- Use the wsadmin client to access and check out the Connections configuration files:
- Access the Connections configuration file: execfile("connectionsConfig.py")
If you are prompted to specify a service to connect to, type 1 to select the first node in the list. Most commands can run on any node.
If the command writes or reads information to or from a file by using a local file path, select the node where the file is stored.
This information is not used by the wsadmin client when you are making configuration changes.
- Check out Connections configuration files:
LCConfigService.checkOutConfig("working_directory","cell_name")
where:
- working_directory is the temporary working directory to which configuration files are copied.
The files are kept in this working directory while you edit them. Notes:
- When you specify a path to the working directory on a system that is running Microsoft Windows, use a forward slash for the directory.
For example: "C:/temp".
- AIX, IBM i, and Linux only: The directory must grant write permissions or the command fails.
- cell_name is the name of the WAS cell that hosts the Connections application. To get the cell name, run: print AdminControl.getCell()
This input parameter is case-sensitive.
- To list the current configuration settings and values:
LCConfigService.showConfig()
- Enter the following command:
LCConfigService.updateConfig("seedlistSettings.maximumIncrementalQuerySpanInDays",number_days)Where number_days is a number greater than or equal to 1 and less than or equal to 30.
Check the configuration files back in. You must do this in the same wsadmin session in which you checked them out.
For more information, see the Applying common configuration property changes topic.
Maximum seedlist page size for a service
You can update a property in the Connections configuration file to specify the maximum seedlist page size for a service.
Use the seedlistSettings.maximumPageSize property in the LotusConnections-config.xml configuration file to specify the maximum number of entries to display on a seedlist page.
To specify the maximum number of seedlist entries per page....
- Use the wsadmin client to access and check out the Connections configuration files:
- Access the Connections configuration file: execfile("connectionsConfig.py")
If you are prompted to specify a service to connect to, type 1 to select the first node in the list. Most commands can run on any node.
If the command writes or reads information to or from a file by using a local file path, select the node where the file is stored.
This information is not used by the wsadmin client when you are making configuration changes.
- Check out Connections configuration files:
LCConfigService.checkOutConfig("working_directory","cell_name")
where:
- working_directory is the temporary working directory to which configuration files are copied.
The files are kept in this working directory while you edit them. Notes:
- When you specify a path to the working directory on a system that is running Microsoft Windows, use a forward slash for the directory.
For example: "C:/temp".
- AIX, IBM i, and Linux only: The directory must grant write permissions or the command fails.
- cell_name is the name of the WAS cell that hosts the Connections application. To get the cell name, run: print AdminControl.getCell()
This input parameter is case-sensitive.
- To list the current configuration settings and values:
LCConfigService.showConfig()
- Enter the following command:
LCConfigService.updateConfig("seedlistSettings.maximumPageSize",number_items)Where number_items is a number greater than or equal to 100.
Check the configuration files back in. You must do this in the same wsadmin session in which you checked them out.
For more information, see the Applying common configuration property changes topic.
Add third-party search options to the search control
You can extend the search control in Connections to include options from third-party search engines by configuring settings in LotusConnections-config.xml.
To edit configuration files, use the wsadmin client.
See Start wsadmin
When you configure settings for additional search options in LotusConnections-config.xml, those options are made available to users from the Search drop-down menu, allowing them to search content from the sources described in the configuration file. When a user selects a third-party search engine from the Search menu and enters a query term, the results of the search display on a third-party search results page.
To add a third-party option to the Connections search control....
- Use the wsadmin client to access and check out the Connections configuration files.
- Access the Connections configuration file: execfile("connectionsConfig.py")
If you are prompted to specify a service to connect to, type 1 to select the first node in the list. Most commands can run on any node.
If the command writes or reads information to or from a file by using a local file path, select the node where the file is stored.
This information is not used by the wsadmin client when you are making configuration changes.
- Check out Connections configuration files:
LCConfigService.checkOutConfig("working_directory","cell_name")
where:
- working_directory is the temporary working directory to which configuration files are copied.
The files are kept in this working directory while you edit them. Notes:
- When you specify a path to the working directory on a system that is running Microsoft Windows, use a forward slash for the directory.
For example: "C:/temp".
- AIX, IBM i, and Linux only: The directory must grant write permissions or the command fails.
- cell_name is the name of the WAS cell that hosts the Connections application. To get the cell name, run: print AdminControl.getCell()
This input parameter is case-sensitive.
- Navigate to the temporary working directory specified in the previous step, and then open LotusConnections-config.xml in a text editor.
- Define the additional search option as a child element of the serviceName="search" element by adding a <sloc:searchScope> element that contains the details of the third-party service.
For example:
<sloc:serviceReference bootstrapHost="" bootstrapPort="" clusterName="cluster" enabled="true" serviceName="search" ssl_enabled="true">
    <sloc:href>
        <sloc:hrefPathPrefix>/search</sloc:hrefPathPrefix>
        <sloc:static href="http://myserver.example.com:9081" ssl_href="https://myserver.example.com:9444"/>
        <sloc:interService href="https://myserver:9444"/>
    </sloc:href>
    <!-- Add third Party Search Options here -->
    <sloc:searchScope scopeName="Yahoo" enabled="true" isGlobal="true">
        <sloc:searchApplicationURL>
            <sloc:static href="http://search.yahoo.com/search?q=" ssl_href="http://search.yahoo.com/search?q="/>
        </sloc:searchApplicationURL>
        <sloc:searchScopeIconClass>lconnSprite lconnSprite-iconThirdParty16</sloc:searchScopeIconClass>
    </sloc:searchScope>
    <!-- Third party Search options added-->
</sloc:serviceReference>
where:
- <sloc:searchApplicationURL> defines the URL to the third-party search application. When a user selects the third-party search engine from the Search menu and enters a search term, that search query term is appended to this URL.
Ensure that the URL that you define will use the search query terms that are passed to the URL. Pointing to a base URL, such as www.yahoo.com, does not work. Refer to the external documentation for the third-party search engine to find the correct URL to use.
For example, the correct URL for searching using the Yahoo search engine is...
http://search.yahoo.com/search?q=
- <sloc:searchScopeIconClass> specifies the CSS class for an icon that identifies the third-party search option in the Search drop-down menu. The value of <sloc:searchScopeIconClass> must always be set to lconnSprite lconnSprite-iconThirdParty16.
For the new search engine to display in the search control, ensure that the enabled parameter is set to true for the <sloc:serviceReference> and <sloc:searchScope> elements. The isGlobal parameter for the <sloc:searchScope> element must also be set to true.
- To point to a Search option that is locally available on the same URL as the Connections server, use the <sloc:hrefPathPrefix> tag instead of the <sloc:href> tag.
For example:
<sloc:searchScope scopeName="myPlaces" enabled="true" isGlobal="false"> <sloc:searchApplicationURL> <sloc:hrefPathPrefix>places?scope=myPlaces&query=</sloc:hrefPathPrefix> </sloc:searchApplicationURL> <sloc:searchScopeIconClass>lconnSprite lconnSprite-iconThirdParty16</sloc:searchScopeIconClass> </sloc:searchScope>In this case the isGlobal parameter is set to false because the example is for a local search.
- Save your changes and then close LotusConnections-config.xml.
- After making changes, check the configuration files back in. You must do so in the same wsadmin session in which you checked them out for the changes to take effect.
See Applying common configuration property changes for information about how to apply your changes.
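Following the same pattern as the Yahoo example earlier in this topic, a scope for a different engine needs only its own scopeName and query URL. A sketch (the Bing query URL is an assumption; confirm the correct URL in the engine's own documentation, as noted above):
<sloc:searchScope scopeName="Bing" enabled="true" isGlobal="true">
    <sloc:searchApplicationURL>
        <sloc:static href="https://www.bing.com/search?q=" ssl_href="https://www.bing.com/search?q="/>
    </sloc:searchApplicationURL>
    <sloc:searchScopeIconClass>lconnSprite lconnSprite-iconThirdParty16</sloc:searchScopeIconClass>
</sloc:searchScope>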
Set the timeout for seedlist requests
You can set the default timeout for seedlist requests by creating a WebSphere Application Server environment variable and specifying the required timeout value.
By default, seedlist requests time out after 240 seconds.
This default setting overrides the timeout for server-to-server requests that is defined in LotusConnections-config.xml, which is 60 seconds. You can override the default seedlist request timeout by creating a WebSphere Application Server variable named SEARCH_SEEDLIST_TIMEOUT and setting the required value of the timeout in milliseconds.
- Using an administrator ID, log in to the WAS admin console associated with the profile to which you installed Connections. If you installed the applications to multiple WAS profiles, log in to the console associated with the appropriate profile.
- Expand Environment and click WebSphere variables.
- Select the relevant cell from the Scope drop-down list and click New.
- Enter SEARCH_SEEDLIST_TIMEOUT in the Name field.
- Enter a value in milliseconds in the Value field.
- Enter a description of the variable in the Description field, and then click OK.
- Stop the affected servers and start those servers again to put the variable configuration change into effect. If the change you made affects a node, stop and restart all of the servers on that node. Similarly if the change you made affects a cell, stop and restart all of the servers in that cell.
For a high-availability deployment, stop and start the servers in turn to ensure that the Search application is still available to users.
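For example, the default of 240 seconds corresponds to a value of 240000 milliseconds; to give seedlist requests five minutes before timing out, you would enter 300000 in the Value field.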
Excluding inactive users from search results
By default, when users search for people in Connections, inactive user profiles are excluded from the search results. You can run a command to change your deployment settings so that search results related to inactive users are automatically included in search results.
To edit configuration files, use the wsadmin client.
When you set the user profiles of employees who have left your organization to inactive, by default, those profiles are not listed in search results. Additionally, inactive users do not display in the person type-ahead on the Advanced Search page. End users can still filter search results to display inactive profiles by selecting All People from the Show menu on the Search Results page when the Profiles filter is selected.
If you want inactive profiles to display in search results by default, you can run a SearchCellConfig command to update the value of the includeInactiveUsers property in the search-config.xml file to true. When this property is set to true, the person type ahead on the Advanced Search page includes inactive users.
For more information about the user life cycle in Connections, see Managing users.
To include or exclude inactive users from search results, complete the following steps.
- Start the wsadmin client from the following directory of the system on which you installed the Deployment Manager:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment has initialized, enter the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored. When the command runs successfully, the following message displays:
Search Administration initialized
- Check out the Search cell-level configuration file, search-config.xml:
SearchCellConfig.checkOutConfig("working_dir", "cellName")
where:
- working_dir is the temporary directory to which you want to check out the cell level configuration file. This directory must exist on the server where you are running the wsadmin client. Use forward slashes to separate directories in the file path, even if you are using the Microsoft Windows operating system.
AIX, Linux, and IBM i only: The directory must grant write permissions or the command will not run successfully.
- cellName is the name of the cell that the Search node belongs to. This argument is required. It is also case-sensitive, so type it with care. To determine the cell name, run the following wsadmin command:
print AdminControl.getCell()
For example:
SearchCellConfig.checkOutConfig("c:/search_temp", "SearchCell01")
- Use the following commands:
- SearchCellConfig.includeInactiveProfilesSearchResults()
Specifies that the documents corresponding to inactive user profiles are included in search results. In a default installation of Connections, inactive user profiles are automatically excluded from search results.
- SearchCellConfig.excludeInactiveProfilesSearchResults()
Specifies that the documents corresponding to inactive user profiles are excluded from search results. In a default installation of Connections, inactive user profiles are automatically excluded from search results.
- Check in the updated search-config.xml configuration file using the following wsadmin client command:
SearchCellConfig.checkInConfig()
- To exit the wsadmin client, type exit at the prompt.
Configure post-filtering for community libraries
You can use SearchCellConfig commands to configure post-filtering for community libraries. Post-filtering is enabled by default.
To access configuration files, use the wsadmin client.
Pre-filtering is a process that happens before a search query is issued. It involves collecting all the access control lists (ACLs) for user content that is not public. These ACLs are added to the search query and are used for searching private content in addition to public content. For community library files that are part of a private community, the ACL of the private community is added to the member's search query.
Post-filtering takes place when the search results are returned from the index after a search query is run. After the search results are returned, the Search application must verify with the filtering service that the user who performed the search has the appropriate level of access to see returned documents. If the user is not allowed to see a file, that file is excluded from the search results and other documents are returned to take the place of the excluded file. Post-filtering is only relevant to community libraries.
Post-filtering is not required under the following circumstances:
- You are using Connections Content Manager 4.5 CR 1 or later, and
- You are using features that are only exposed in the Connections user interface and APIs.
Post-filtering must be enabled if you are using FileNet features to restrict access to a greater degree than the restrictions made in the Connections user interface.
To configure post-filtering for community libraries....
- Start the wsadmin client from the following directory of the system on which you installed the Deployment Manager:
WAS_HOME/profiles/Dmgr01/bin
You must start the client from this directory or subsequent commands that you enter do not execute correctly.
- After the wsadmin command environment has initialized, enter the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
If prompted to specify a service to connect to, type 1 to pick the first node in the list. Most commands can run on any node. If the command writes or reads information to or from a file using a local file path, pick the node where the file is stored. When the command runs successfully, the following message displays:
Search Administration initialized
- Check out the Search cell-level configuration file, search-config.xml:
SearchCellConfig.checkOutConfig("working_dir", "cellName")
where:
- working_dir is the temporary directory to which you want to check out the cell level configuration file. This directory must exist on the server where you are running the wsadmin client. Use forward slashes to separate directories in the file path, even if you are using the Microsoft Windows operating system.
AIX, Linux, and IBM i only: The directory must grant write permissions or the command will not run successfully.
- cellName is the name of the cell that the Search node belongs to. This argument is required and case-sensitive, so type it with care. To determine the cell name, run the following wsadmin command:
print AdminControl.getCell()
For example:
SearchCellConfig.checkOutConfig("c:/search_temp", "SearchCell01")
- Use the following commands as needed:
- SearchCellConfig.enableEcmPostFiltering()
Enables post-filtering for community libraries. Post-filtering is enabled by default.
This command does not take any parameters.
- SearchCellConfig.disableEcmPostFiltering()
Disables post-filtering for community libraries. Post-filtering is enabled by default.
This command does not take any parameters.
- SearchCellConfig.setEcmPostFilteringMultiplier(multiplier)
Set the multiplier for post filtering.
When a user requests a certain page size for their search results, the Search application attempts to populate the page with the specified number of results.
For example, if the user requests a page size of 10, the Search application checks more than 10 documents. However, a limit is required to avoid performance issues. A multiplier of 3 specifies that up to 30 documents are loaded to identify 10 documents to which the user has access. In most cases, statistically, this should be enough to fill the page. If the page cannot be fully populated after checking all 30 documents, a page with fewer search results is returned to the user.
If you frequently receive partially filled search result pages in Connections, consider changing this parameter.
This command takes a single parameter:
- Multiplier. A positive integer that specifies how many documents are checked in the attempt to populate the search results page.
For example:
SearchCellConfig.setEcmPostFilteringMultiplier(20)
- SearchCellConfig.setEcmPostFilteringMaxGapSize(maxGapSize)
Set the maximum gap size that is allowed for post-filtering.
If a user uses the pagination controls in the Search user interface, post-filtering calculation is performed when jumping from page 1 of the search results to, for example, page 4. However, you may not want to allow post-filtering calculation when jumping to page 100 for performance reasons. This command specifies the maximum gap that is allowed for post-filtering calculations between the current page and the requested page.
This command takes a single parameter:
- maxGapSize. A positive integer that specifies the maximum gap that is allowed between the current page (for which the accurate index is known) and the requested page for post-filtering calculations.
For example:
SearchCellConfig.setEcmPostFilteringMaxGapSize(250)
- SearchCellConfig.setEcmPostFilteringConnectionTimeOut(connectionTimeOutInMillis)
Set the connection timeout value for post-filtering.
If the timeout occurs, community library documents are removed from the search results. Results for community documents that have no access control are still shown.
This command takes a single parameter:
- connectionTimeOutInMillis. A positive integer that specifies the connection timeout for post-filtering in milliseconds.
For example:
SearchCellConfig.setEcmPostFilteringConnectionTimeOut(1000)
- SearchCellConfig.setEcmPostFilteringSocketDataTimeOut(socketDataTimeOutInMillis)
Set the socket data timeout value for post-filtering.
If the timeout occurs, community library documents are removed from the search results. Results for community documents that have no access control are still shown.
This command takes a single parameter:
- socketDataTimeOutInMillis. A positive integer that specifies the socket data timeout for post-filtering in milliseconds.
For example:
SearchCellConfig.setEcmPostFilteringSocketDataTimeOut(3000)
- SearchCellConfig.setEcmPostFiltering(multiplier,maxGapSize,connectionTimeOutInMillis,socketDataTimeOutInMillis)
Enables post-filtering settings for community libraries with the values that you specify.
Parameters:
- Multiplier. A positive integer that specifies how many documents are checked in the attempt to populate the search results page.
- maxGapSize. A positive integer that specifies the maximum gap that is allowed between the current page (for which the accurate index is known) and the requested page for post-filtering calculations.
- connectionTimeOutInMillis. A positive integer that specifies the connection timeout for post-filtering in milliseconds.
- socketDataTimeOutInMillis. A positive integer that specifies the socket data timeout for post-filtering in milliseconds.
For example:
SearchCellConfig.setEcmPostFiltering(20,100,250,1000)
- Check in the updated search-config.xml configuration file using the following wsadmin client command:
SearchCellConfig.checkInConfig()
- To exit the wsadmin client, type exit at the prompt.
- Stop the affected servers and then start them again to put the configuration changes into effect. If the change you made affects a node, stop and restart all of the servers on that node. Similarly, if the change you made affects a cell, stop and restart all of the servers in that cell.
For a high-availability deployment, stop and start the servers in turn to ensure that the Search application is still available to users.
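As an illustrative sketch only (the values shown are examples, not tuning recommendations, and the working directory and cell name are placeholders), the complete post-filtering adjustment might be scripted in a single wsadmin session:
execfile("searchAdmin.py")
SearchCellConfig.checkOutConfig("/tmp/search_temp", "SearchCell01")
# Multiplier 5, maximum page gap 100, 1000 ms connection timeout, 3000 ms socket timeout
SearchCellConfig.setEcmPostFiltering(5, 100, 1000, 3000)
SearchCellConfig.checkInConfig()
With a multiplier of 5 and a requested page size of 10, up to 50 documents are loaded and checked against the filtering service to find 10 documents that the user is allowed to see.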
Viewing and collecting Search metrics
Enter a URL to view a standard set of metrics related to the Search application. You can also write internal search metrics to a file.
- To access metrics for the Search application, you must be assigned the metrics-reader role. This role is assigned to everyone by default.
For more information about the metrics-reader role, see Roles.
- To use SearchService administrative commands, use the wsadmin client.
See Start wsadmin
You can enter a URL into your browser to display a standard set of indexing and Search metrics. All metrics are recorded from the time that the server is started. To save a record of the Search metrics, use a wsadmin command to write the metrics to a text file.
- To view Search metrics, enter the following URL in your browser:
http://servername.com:port/search/serverStats
where servername.com:port is the appropriate host name and port number for your server.
The following metrics are recorded for Search:
Table 48. Search metrics
- Index size. The size of the index on the specified node.
- Number of indexed items. The number of indexed items per node and per application.
- Average crawling time. The average time in seconds taken to crawl all the applications and each individual application.
- Average index building time. The average time in seconds taken to build the index, per node and per application.
- File conversions per second. The average number of files downloaded and converted per second.
- File content retrieval. The average time in seconds taken to run the document conversion service.
The file content metrics display in the user interface only after each operation has been carried out; you must index and run the document conversion service to view all the metrics.
- To write internal metrics to a file, complete the following steps:
- Open a command window and start the wsadmin command-line tool.
- After the wsadmin command environment has initialized, use the following command to initialize the Search environment and start the Search script interpreter:
execfile("searchAdmin.py")
When asked to select a server, you can select any server.
- Use the following command to write the metrics to a file:
- SearchService.saveMetricsToFile(String filePath)
- Collects internal metrics and writes them to the specified file.
This command takes a single argument:
- filePath
- The full path to a text file in which to store the metric information.
This argument is a string value.
A file is created in the specified directory.
The file name is prefixed with the string "searchMetrics-" and contains a timestamp indicating when the metrics were collected. The file output is printed in the following format:
================================================================
ACTIVITIES
Average entry indexing time: 0.03 seconds
Max entry indexing time: 0.17
Min entry indexing time: 0.01
Entry count: 54
Average seedlist request time: 1.83 seconds
Max seedlist request time: 4.16
Min seedlist request time: 0.1
Seedlist request count: 3
================================================================
PROFILES
Average entry indexing time: 0.07 seconds
Max entry indexing time: 1.48
Min entry indexing time: 0.04
Entry count: 1763
Average seedlist request time: 8.6 seconds
Max seedlist request time: 13.06
Min seedlist request time: 0.14
Seedlist request count: 5
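For reference, the preceding steps can be condensed into a short wsadmin sketch; the output path is a placeholder, and the generated file name uses the searchMetrics- prefix described above:
execfile("searchAdmin.py")
# Write the internal Search metrics to a timestamped file under the given path
SearchService.saveMetricsToFile("/tmp/search_metrics")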
SearchCellConfig commands
The SearchCellConfig commands are used to configure the location of the Search index and the IBM LanguageWare dictionaries used by Search, and to configure the file download and conversion service used when indexing file attachments.
SearchCellConfig commands
Use the following MBean commands to perform administrative tasks for Search. To run the commands, you first need to initialize the Search configuration environment.
For more information about initializing the Search configuration environment, see Accessing the Search configuration environment.
For the SearchCellConfig commands that create, update, or delete configuration data, you must also check out the search-config.xml file using the SearchCellConfig.checkOutConfig() command. After making your edits, check in your changes using the SearchCellConfig.checkInConfig() command. Your changes take effect when the server next restarts. Any of these changes require the indexes to be rebuilt.
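The general pattern for any configuration change is therefore check out, edit, check in, and restart. A minimal sketch, using placeholder values for the working directory and cell name and an arbitrary configuration command:
SearchCellConfig.checkOutConfig("/tmp/search_temp", "Cell01")
# One or more configuration commands, for example:
SearchCellConfig.setMaxCrawlerThreads("3")
SearchCellConfig.checkInConfig()
# Restart the affected Search servers for the change to take effect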
- SearchCellConfig.checkInConfig()
Checks in the Search configuration file. This command must be used after changes are made to the Search configuration file in order for those changes to take effect. As part of this operation, the edited copy of the Search configuration file, search-config.xml, is validated against the XSD schema definition file, search-config.xsd.
The checkInConfig command copies the updated configuration file from the temporary directory to the location of the active copy of these files and it overwrites the existing XML file.
For example:
SearchCellConfig.checkInConfig()
- SearchCellConfig.checkOutConfig(String working_directory, String cell_name)
Checks out a copy of the Search configuration file to a working directory located on the file system. This command must be used before changes are made to the Search configuration file.
This command takes two arguments:
- working_directory. A file path to a temporary working directory to which the configuration XML and XSD files are copied by the checkOutConfig command. This argument is a string value.
- cell_name. The name of the IBM WAS cell hosting the Connections Search application. This argument is a string value.
For example:
SearchCellConfig.checkOutConfig("/temp","Cell01")
- SearchCellConfig.disableAttachmentHandling()
Disable the indexing of file content in the Files, Wikis, and Library (ECM Files) applications.
This command does not take any input parameters.
- SearchCellConfig.disableDictionary(String languageCode)
Disable the specified LanguageWare dictionary.
This command accepts one argument:
- languageCode. The language code for the dictionary to delete. This argument is a string value.
The language code typically comprises two letters conforming to the ISO standard 639-1:2002 that identifies the primary language of the dictionary. However, there are some codes that additionally define a country or variant, in which case these constituent parts are separated by an underscore.
For example, Portuguese has two variants, one for Portugal (pt_PT) and one for Brazil (pt_BR).
When using a code that also specifies a country, ensure that you use an underscore to separate the language code and the country code rather than a hyphen; otherwise an error will be generated.
For example:
SearchCellConfig.disableDictionary("fr")
- SearchCellConfig.disableEcmPostFiltering()
Disables post-filtering for community libraries. Post-filtering is enabled by default.
This command does not take any parameters.
- SearchCellConfig.disableVerboseLogging()
Disables verbose logging.
This command does not take any parameters.
Verbose logging fills the SystemOut.log file with detailed output that can occupy an increasing amount of disk space, unless you have configured your deployment to retain only a limited number of the most recent log files. A high turnover of logs might be a problem when you are trying to track down the cause of an issue if the log file that you are interested in has been deleted.
For this reason, you might want to disable verbose logging. The performance impact of having verbose logging enabled is negligible.
- SearchCellConfig.enableAttachmentHandling()
Enable the indexing of file attachments in the Files and Wikis applications.
If attachment handling was disabled during the last indexing, you must rebuild the index after enabling attachment handling; otherwise, this command does not take effect.
This command does not take any input parameters.
- SearchCellConfig.enableDictionary(String languageCode, String dictionaryPath)
Enables support for the specified LanguageWare dictionary.
This command accepts two arguments.
- languageCode. The language code for the dictionary to add. This argument is a string value.
The language code typically comprises two letters conforming to the ISO standard 639-1:2002 that identifies the primary language of the dictionary. However, there are some codes that additionally define a country or variant, in which case these constituent parts are separated by an underscore.
For example, Portuguese has two variants, one for Portugal (pt_PT) and one for Brazil (pt_BR).
When using a code that also specifies a country, ensure that you use an underscore to separate the language code and the country code rather than a hyphen; otherwise an error will be generated.
- dictionaryPath. The path to the directory containing the dictionary file. This argument is a string value.
For example:
SearchCellConfig.enableDictionary("fr","/opt/IBM/Connections/data/shared/search/dictionary")
You can also specify the path using a WebSphere environment variable. In the following example, the "${SEARCH_DICTIONARY_DIR}" value is used to point to the shared Search dictionary directory.
SearchCellConfig.enableDictionary("fr","${SEARCH_DICTIONARY_DIR}")
- SearchCellConfig.enableEcmPostFiltering()
Enables post-filtering for community libraries. Post-filtering is enabled by default.
This command does not take any parameters.
- SearchCellConfig.enableVerboseLogging()
Enables more detailed status reporting during crawling and indexing in the form of more verbose logging to the SystemOut.log file. Verbose logging is automatically enabled when Connections is installed.
This command does not take any parameters.
You can use the following commands to tune the frequency with which status information is logged to the SystemOut.log file during different stages of the crawling and indexing process:
- SearchCellConfig.setVerboseInitialLoggingInterval(int interval)
- SearchCellConfig.setVerboseSeedlistRequestLoggingInterval(int interval)
- SearchCellConfig.setVerboseIncrementalCrawlingLoggingInterval(int interval)
- SearchCellConfig.setVerboseIncrementalBuildingLoggingInterval(int interval)
For more information about each of these commands, refer to the command descriptions that follow.
- SearchCellConfig.excludeInactiveProfilesSearchResults()
Specifies that the documents corresponding to inactive user profiles are excluded from search results. In a default installation of Connections, inactive user profiles are automatically excluded from search results.
- SearchCellConfig.includeInactiveProfilesSearchResults()
Specifies that the documents corresponding to inactive user profiles are included in search results. In a default installation of Connections, inactive user profiles are automatically excluded from search results.
- SearchCellConfig.listDictionaries()
List the LanguageWare dictionaries that are configured for Search. These dictionaries are used by the Search application to support indexing multilingual content and searching in multiple languages.
This command does not take any input parameters.
- SearchCellConfig.setBackupType(String type)
Specifies the type of backup to create.
This command takes a single argument that specifies the backup type. This can be one of the following:
- new. Creates a new index backup every time.
- dual. Creates dual copies and overwrites the oldest existing backup.
- overwrite. Overwrites the existing index backup.
For example:
SearchCellConfig.setBackupType("new")
- SearchCellConfig.setCacheExpiryTime(int numberOfDays)
Set the number of days for which a downloaded file's indexable content is cached in the database. This information is cached for potential reuse at indexing time. If a file is not reused in the number of days specified, its entry in the database cache is deleted.
If the file content has changed, the file is downloaded again and the cache is updated with the revised content.
This command allows you to ensure that the database cache used for indexing files is kept up-to-date.
The expiry time is measured in days. Specify a positive integer greater than zero.
For example:
SearchCellConfig.setCacheExpiryTime("30")
- SearchCellConfig.setCacheFileSize(int cacheFileSize)
Specifies the maximum amount of extracted text that can be indexed per file. Before a file is indexed, it is converted to plain text. This command allows you to specify how much of that plain text conversion should be indexed.
The cache file size is set to 200 KB by default, which is a very large amount of plain text.
The cache file size limit is applied to the amount of extracted text rather than the size of the original file. If you have a large presentation file, for example, the default setting should be sufficient to allow all of the text in that file to be extracted for indexing.
This command accepts one argument:
- cacheFileSize. The number of bytes of indexable and searchable file content stored in the database cache. Use a positive integer greater than zero.
For example:
SearchCellConfig.setCacheFileSize("200000")
- SearchCellConfig.setDefaultDictionary(String languageCode)
Configures the default LanguageWare dictionary used by the Search application. The default dictionary must be one of the enabled dictionaries.
This command takes a single argument:
- languageCode is the language code for the dictionary to set as the default.
This language code typically comprises two letters conforming to the ISO standard 639-1:2002 that identifies the primary language of the dictionary. However, there are some codes that additionally define a country or variant, in which case these constituent parts are separated by an underscore.
For example, Portuguese has two variants, one for Portugal (pt_PT) and one for Brazil (pt_BR). When using a code that also specifies a country, ensure that you use an underscore to separate the language code and the country code rather than a hyphen; otherwise an error will be generated.
A matching dictionary must exist in the list of configured dictionaries for the language that you specify as a parameter.
For example:
SearchCellConfig.setDefaultDictionary("fr")
- SearchCellConfig.setDeletePersistedPages(String enabled)
Specifies whether to delete the persisted pages after a successful incremental index. By default, the value is set to true.
This command takes a single argument:
- enabled
- A string that determines whether persisted pages are to be deleted after a successful incremental index. This string represents a boolean, that is, it should be set to true or false.
When this functionality is enabled, persisted pages from the initial index creation are also deleted after a successful incremental index.
For example:
SearchCellConfig.setDeletePersistedPages("false")
- SearchCellConfig.setDownloadThrottle(long downloadThrottle)
Set the duration of the rest period between successive file downloads in a single file-download thread.
This command takes a single argument that specifies the download throttle size in milliseconds.
The download throttle is set to 500 by default.
CAUTION: Decreasing this value increases the load on the Files server.
For example:
SearchCellConfig.setDownloadThrottle("500")
- SearchCellConfig.setEcmPostFilteringConnectionTimeOut(connectionTimeOutInMillis)
Set the connection timeout value for post-filtering.
If the timeout occurs, community library documents are removed from the search results. Results for community documents that have no access control are still shown.
This command takes a single parameter:
- connectionTimeOutInMillis. A positive integer that specifies the connection timeout for post-filtering in milliseconds.
For example:
SearchCellConfig.setEcmPostFilteringConnectionTimeOut(1000)
- SearchCellConfig.setEcmPostFilteringMaxGapSize(maxGapSize)
Set the maximum gap size that is allowed for post-filtering.
If a user uses the pagination controls in the Search user interface, post-filtering calculation is performed when jumping from page 1 of the search results to, for example, page 4. However, you may not want to allow post-filtering calculation when jumping to page 100 for performance reasons. This command specifies the maximum gap that is allowed for post-filtering calculations between the current page and the requested page.
This command takes a single parameter:
- maxGapSize. A positive integer that specifies the maximum gap that is allowed between the current page (for which the accurate index is known) and the requested page for post-filtering calculations.
For example:
SearchCellConfig.setEcmPostFilteringMaxGapSize(250)
- SearchCellConfig.setEcmPostFilteringMultiplier(multiplier)
Set the multiplier for post filtering.
When a user requests a certain page size for their search results, the Search application attempts to populate the page with the specified number of results.
For example, if the user requests a page size of 10, the Search application checks more than 10 documents. However, a limit is required to avoid performance issues. A multiplier of 3 specifies that up to 30 documents are loaded to identify 10 documents to which the user has access. In most cases, statistically, this should be enough to fill the page. If the page cannot be fully populated after checking all 30 documents, a page with fewer search results is returned to the user.
If you frequently receive partially filled search result pages in Connections, consider changing this parameter.
This command takes a single parameter:
- Multiplier. A positive integer that specifies how many documents are checked in the attempt to populate the search results page.
For example:
SearchCellConfig.setEcmPostFilteringMultiplier(20)
- SearchCellConfig.setEcmPostFiltering(multiplier,maxGapSize,connectionTimeOutInMillis,socketDataTimeOutInMillis)
Enables post-filtering settings for community libraries with the values that you specify.
Parameters:
- Multiplier. A positive integer that specifies how many documents are checked in the attempt to populate the search results page.
- maxGapSize. A positive integer that specifies the maximum gap that is allowed between the current page (for which the accurate index is known) and the requested page for post-filtering calculations.
- connectionTimeOutInMillis. A positive integer that specifies the connection timeout for post-filtering in milliseconds.
- socketDataTimeOutInMillis. A positive integer that specifies the socket data timeout for post-filtering in milliseconds.
For example:
SearchCellConfig.setEcmPostFiltering(20,100,250,1000)
- SearchCellConfig.setEcmPostFilteringSocketDataTimeOut(socketDataTimeOutInMillis)
Set the socket data timeout value for post-filtering.
If the timeout occurs, community library documents are removed from the search results. Results for community documents that have no access control are still shown.
This command takes a single parameter:
- socketDataTimeOutInMillis. A positive integer that specifies the socket data timeout for post-filtering in milliseconds.
For example:
SearchCellConfig.setEcmPostFilteringSocketDataTimeOut(3000)
- SearchCellConfig.setIndexingResumptionAllowed(boolean allowed)
Enables or disables the resumption of interrupted or failed indexing tasks that have not reached a resume point.
This command takes a single argument:
- allowed. A boolean value.
For example, to enable indexing resumption:
SearchCellConfig.setIndexingResumptionAllowed("true")
- SearchCellConfig.setMaxCacheEntries(int maxCacheEntries)
Set the maximum number of cached file entries allowed in the database cache.
This command takes a single argument:
- maxCacheEntries. The number of cached file entries. This argument must be an integer greater than zero.
For example:
SearchCellConfig.setMaxCacheEntries("1000")
- SearchCellConfig.setMaxCrawlerThreads(String maxThreadNumber)
Specifies the maximum number of seedlist threads that can be used when crawling. By default, the value is set to 2.
This command takes a single argument that specifies the number of threads allowed.
For example:
SearchCellConfig.setMaxCrawlerThreads("3")
- SearchCellConfig.setMaximumAttachmentSize(int maxAttachmentSize)
Set the limit on the size of files that can be downloaded for indexing. Files that are greater than the configured maximum attachment size are not downloaded or processed for content indexing. By default, the limit is set to 50 MB, which means that files over 50 MB are not indexed.
Files that are under the specified size are downloaded to a temporary directory located in the index directory, where they go through the text extraction process. The extracted text is then indexed. The temporary directory size available must be greater than the maximum file size allowed for content indexing. You can control the amount of extracted text that is indexed using the setCacheFileSize command.
This command accepts one argument:
- maxAttachmentSize. The maximum file size in bytes of any file attachment eligible for indexing. This is an integer value.
For example:
SearchCellConfig.setMaximumAttachmentSize("52428800")
- SearchCellConfig.setMaximumConcurrentDownloads(int maxConcurrentDownloads)
Set the maximum number of threads that perform file downloading on a Search server.
This command takes a single argument that specifies the maximum number of threads. The argument must be an integer greater than zero. The default value is 3. The value of the maxConcurrentDownloads argument must not exceed the maximum number of threads set for the DefaultWorkManager Work Manager resources at the Search server scope.
CAUTION: Increasing this value increases the load on the Files server.
For example:
SearchCellConfig.setMaximumConcurrentDownloads("10")
- SearchCellConfig.setMaxIndexerThreads(String maxThreadNumber)
Specifies the maximum number of indexer threads that can be used when indexing. By default, the value is set to 1.
This command takes a single argument that specifies the number of threads allowed.
For example:
SearchCellConfig.setMaxIndexerThreads("3")
- SearchCellConfig.setMaximumTempDirSize(int maxTempDirSize)
Set the maximum size of a temporary directory used by a Search server for the files conversion process.
This command takes a single argument that specifies the maximum size in bytes. The argument must be an integer greater than zero. The default value is 100 MB.
Files are downloaded to a temporary directory, which is located in the index directory. The temporary directory size available must be greater than the maximum file size allowed for content indexing.
For example:
SearchCellConfig.setMaximumTempDirSize("51200")
- SearchCellConfig.setMaxPagePersistenceAge(String maxAgeInHours)
Specifies the maximum age for persisted pages in a seedlist persistence directory. By default, the value is set to 720 hours (30 days).
If the pages are older than the maximum age, they are ignored when building an index or resuming a crawl.
This command takes a single argument:
- maxAgeInHours
- A string representing an integer that specifies the maximum age in hours of the persisted pages.
For example:
SearchCellConfig.setMaxPagePersistenceAge("42")
- SearchCellConfig.setPostBackupScript(String script)
Specifies which shell script or third-party application runs on completion of the backup task.
This command takes a single argument that specifies the name of the shell script or application file.
For example:
SearchCellConfig.setPostBackupScript("backup.sh")
To disable the script, run the command again with an empty string as the argument.
For example:
SearchCellConfig.setPostBackupScript("")
- SearchCellConfig.setSandIndexerTuning(String indexer, int iterations)
Set the number of iterations used by a specified social analytics job.
This command takes the following arguments:
- indexer. A string that specifies the name of the social analytics indexing job. The following values are valid: evidence, graph, manageremployees, and tags.
- iterations. An integer that specifies the number of iterations for the specified social analytics indexing job.
For example:
SearchCellConfig.setSandIndexerTuning("manageremployees",200)
SearchCellConfig.setSandIndexerTuning("graph",400)
- SearchCellConfig.setVerboseIncrementalBuildingLoggingInterval(int incrementalBuildingInterval)
Controls the frequency with which update indexing progress is logged to the SystemOut.log file. Update indexing of a Connections application, or set of applications, is an indexing job that updates an index that already contains content from all of the applications that are to be indexed as part of the current indexing job.
This command takes a single parameter:
- incrementalBuildingInterval
- A positive integer that corresponds to a number of documents.
For example, if an interval of 20 is specified, then for every 20 documents that have been indexed, the number of documents indexed when indexing a particular application during the current indexing job is logged. The incrementalBuildingInterval parameter is set to 100 by default.
You can find additional logging information about update indexing progress in the SystemOut.log file by searching for occurrences of the CLFRW0600I logging message.
For example:
CLFRW0600I: Search is continuing to build the index for blogs: 40 documents indexed.
For example:
SearchCellConfig.setVerboseIncrementalBuildingLoggingInterval(100)
- SearchCellConfig.setVerboseIncrementalCrawlingLoggingInterval(int incrementalCrawlingInterval)
Controls the frequency with which seedlist update crawling progress is logged to the SystemOut.log file. An update crawl of an application fetches data that was created, updated, or deleted since the previous crawl of that application began.
This command takes a single parameter:
- incrementalCrawlingInterval
- A positive integer that corresponds to a number of seedlist entries.
For example, if an interval of 100 is specified, then, for every 100 entries that have been crawled, the number of entries that have been crawled for a particular application during the current indexing job is logged. The incrementalCrawlingInterval parameter is set to 100 by default.
You can find additional logging information about update crawling in the SystemOut.log file by searching for occurrences of the CLFRW0589I logging message.
For example:
CLFRW0589I: Search is continuing to build the index for profiles: 1,600 seedlist entries indexed.
For example:
SearchCellConfig.setVerboseIncrementalCrawlingLoggingInterval(100)
- SearchCellConfig.setVerboseInitialLoggingInterval(int initialInterval)
Controls the frequency with which initial index creation progress is logged to the SystemOut.log file.
This command takes a single parameter:
- initialInterval
- A positive integer that corresponds to a number of seedlist entries. A seedlist entry is an indexing instruction that specifies an action, such as the creation, deletion, or update of a specified document in the Search index.
For example, if an interval of 500 is specified, then for every 500 entries processed, the number of seedlist entries indexed so far for an application by the current indexing job is logged.
The initialInterval parameter is set to 250 by default.
You can find additional logging information about initial index creation in the SystemOut.log file by searching for occurrences of the CLFRW0581I logging message.
For example:
CLFRW0581I: Search is continuing to build the index for activities: 3500 seedlist entries indexed.
For example:
SearchCellConfig.setVerboseInitialLoggingInterval(500)
- SearchCellConfig.setVerboseLogging(int initialInterval, int seedlistRequestInterval, int incrementalCrawlingInterval, int incrementalBuildingInterval)
Enables verbose logging with the specified initial interval, seedlist request interval, crawling interval, and incremental building interval.
Running this command has the same net effect as calling the following commands in sequence (a combined sketch follows this command list):
- SearchCellConfig.enableVerboseLogging()
- SearchCellConfig.setVerboseInitialLoggingInterval(initialInterval)
- SearchCellConfig.setVerboseSeedlistRequestLoggingInterval(seedlistRequestInterval)
- SearchCellConfig.setVerboseIncrementalCrawlingLoggingInterval(incrementalCrawlingInterval)
- SearchCellConfig.setVerboseIncrementalBuildingLoggingInterval(incrementalBuildingInterval)
- SearchCellConfig.setVerboseSeedlistRequestLoggingInterval(int seedlistRequestInterval)
Controls the frequency with which seedlist crawling progress is logged to the SystemOut.log file.
This command takes a single parameter:
- seedlistRequestInterval
- A positive integer that corresponds to a number of seedlist page requests. A seedlist crawl is a sequence of seedlist page requests, which are HTTP GET operations that fetch seedlist pages. A seedlist page can contain zero or more seedlist entries up to a specified maximum.
For example, if an interval of 1 is specified, then after every seedlist request, the crawling progress of the application being currently crawled is logged. The seedlistRequestInterval parameter is set to 1 by default.
You can find additional logging information about seedlist crawling in the SystemOut.log file by searching for occurrences of the CLFRW0604 logging message.
For example:
CLFRW0604: Current seedlist state: Finish Date: Thu May 12 10:14:58 IST 2011; Start Date: Thu Jan 01 01:00:00 GMT 1970; Type: 1; Last Modified: Thu Jan 01 01:00:00 GMT 1970; Finished: false; Started: true; ACL Start: 0; Offset: 0;
For example:
SearchCellConfig.setVerboseSeedlistRequestLoggingInterval(1)
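As noted under SearchCellConfig.setVerboseLogging earlier in this list, the combined command is equivalent to running the individual tuning commands in sequence. The following sketch uses example intervals only and should be run between checkOutConfig and checkInConfig as usual:
# Single call: initial, seedlist-request, incremental-crawl, and incremental-build intervals
SearchCellConfig.setVerboseLogging(500, 1, 100, 100)
# Equivalent sequence of individual commands
SearchCellConfig.enableVerboseLogging()
SearchCellConfig.setVerboseInitialLoggingInterval(500)
SearchCellConfig.setVerboseSeedlistRequestLoggingInterval(1)
SearchCellConfig.setVerboseIncrementalCrawlingLoggingInterval(100)
SearchCellConfig.setVerboseIncrementalBuildingLoggingInterval(100)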
SearchService commands
The SearchService commands are used to create, retrieve, update, and delete scheduled task definitions for the indexing and optimization Search operations.
SearchService commands
Use the following MBean commands to perform administrative tasks for Search. To use the commands, you must first initialize the Search configuration environment.
For more information, see Accessing the Search configuration environment.
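The SearchService commands act on scheduled task definitions in the Home page database rather than on search-config.xml, so the examples in this section do not use the check-out and check-in steps. A minimal sketch of a session, where the task name and Cron schedules are examples only:
execfile("searchAdmin.py")
# List the existing task definitions, then add a weekday indexing task for Wikis only
SearchService.listTasks()
SearchService.addIndexingTask("weekdayWikis", "0 0 1 ? * MON-FRI", "0 10 1 ? * MON-FRI", "wikis", "false")
# Pick up the new definition without restarting the Search application
SearchService.refreshTasks()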
- SearchService.addBackupIndexTask(String taskName, String schedule, String startbySchedule)
Defines a new scheduled index backup task.
This command takes the following arguments:
- taskName. The name of the task to be added.
- schedule. The time at which the scheduled task starts. This argument is a string value that must be specified in Cron format.
- startbySchedule. The time given for the task to run before it is automatically canceled. This argument is a string value that must be specified in Cron format.
For example:
SearchService.addBackupIndexTask("WeeklyIndexBackup","0 0 2 ? * SAT","0 10 2 ? * SAT")
When the command runs successfully, 1 is printed to the wsadmin console.
If the command does not run successfully, 0 is printed to the wsadmin console.
- SearchService.addFileContentTask(String taskName, String schedule, String startBy, String applicationNames, failuresOnly)
Creates a scheduled file content retrieval task.
This command takes the following arguments:
- taskName. The name of the scheduled task. This argument is a string value, which must be unique.
- schedule. The time at which the scheduled task starts. This argument is a string value that must be specified in Cron format.
- startBy. The time given to a task to fire before it is automatically canceled. This argument is a string value that must be specified in Cron format.
- applicationNames. The name (or names) of the Connections application to be indexed when the task is triggered. This argument is a string value. To index multiple applications, use a comma-delimited list. The following values are valid:
- ecm_files. Retrieves files from the Enterprise Content Management repository. Only published files are retrieved; draft files are not included.
- files. Retrieves files from the Files application.
- wikis. Retrieves files from the Wikis application.
- failuresOnly. A flag that indicates that only the content of files for which the download and conversion tasks failed should be retrieved.
This argument is a boolean value.
For example:
SearchService.addFileContentTask("mine", "0 0 1 ? * MON-FRI", "0 10 1 ? * MON-FRI", "wikis,files","true")
When the command runs successfully, 1 is printed to the wsadmin console.
If the command does not run successfully, 0 is printed to the wsadmin console.
You can also use the SearchService.addFileContentTask command to replace the task definition for the default 20min-file-retrieval-task. By default, this task runs every 20 minutes, except for a one-hour period between 01:00 and 02:00. To replace the default task settings, first remove the existing task using the SearchService.deleteTask(String taskName) command. Then use the SearchService.addFileContentTask to create a new task with the values that you specify.
For example:
SearchService.deleteTask("20min-file-retrieval-task")
SearchService.addFileContentTask("20min-file-retrieval-task", "0 1/20 0,2-23 * * ?", "0 10/20 0,2-23 * * ?", "all_configured", "false")
- SearchService.addIndexingTask(String taskName, String schedule, String startBy, String applicationNames, Boolean optimizeFlag)
Creates a new scheduled indexing task definition in the Home page database.
This command takes the following arguments:
- taskName. The name of the scheduled task. This argument is a string value, which must be unique.
- schedule. The time at which the scheduled task starts. This argument is a string value that must be specified in Cron format.
- startBy. The time given to a task to fire before it is automatically canceled. This argument is a string value that must be specified in Cron format. This parameter should be used to ensure that indexing tasks are not queued up and running into server busy times. Under normal conditions, the only factors that might cause a task to be delayed are that overlapping or coincident tasks are trying to fire at the same time, or an earlier task is running for a long time.
- applicationNames. The name (or names) of the Connections application to be indexed when the task is triggered. This argument is a string value. To index multiple applications, use a comma-delimited list. The following values are valid: activities, blogs, calendar, communities, dogear, ecm_files, files, forums, profiles, status_updates, and wikis.
- optimizeFlag. A flag that indicates if an optimization step should be performed after indexing. This argument is a boolean value. The optimization operation is both CPU and I/O intensive. For this reason, the operation should be performed infrequently and, if possible, during off-peak hours. Note that when you install Connections, a search optimization task is set up to run every night by default.
All arguments are required.
For example:
SearchService.addIndexingTask("customDogearAndBlogs", "0 0 1 ? * MON-FRI", "0 10 1 ? * MON-FRI", "dogear,blogs","true")
When the command runs successfully, 1 is printed to the wsadmin console.
If the command does not run successfully, 0 is printed to the wsadmin console.
The refreshTasks() command should be used after this command for the new task definitions to take effect immediately. Otherwise, the changes take place when the Search application is next restarted.
You can also use the SearchService.addIndexingTask command to replace the 15min-search-indexing-task that is automatically configured when you install Connections. By default, all installed Connections applications are crawled and indexed every 15 minutes, except for a one-hour period between 01:00 and 02:00. To replace the default indexing task settings, first remove the existing indexing task using the SearchService.deleteTask(String taskName) command. Then, use the SearchService.addIndexingTask command to create a new indexing task with the values that you specify.
For example:
SearchService.deleteTask("15min-search-indexing-task")
SearchService.addIndexingTask("15min-search-indexing-task", "0 1/15 0,2-23 * * ?", "0 10/15 0,2-23 * * ?", "all_configured", "false")
- SearchService.addOptimizeTask(String taskName, String schedule, String startBy)
Creates a new index optimization scheduled task definition.
This command takes the following arguments:
- taskName. The name of the scheduled task. This argument is a string value, which must be unique.
- schedule. The time at which the scheduled task starts. This argument is a string value that must be specified in Cron format.
- startBy. The time given to a task to fire before it is automatically canceled. This argument is a string value that must be specified in Cron format. This parameter should be used to ensure that indexing tasks are not queued up and running into server busy times. Under normal conditions, the only factors that might cause a task to be delayed are that overlapping or coincident tasks are trying to fire at the same time, or an earlier task is running for a long time.
All arguments are required.
The optimization operation is both CPU and I/O intensive. For this reason, the operation should be performed infrequently and, if possible, during off-peak hours.
Note that when you install Connections, a search optimization task is set up to run every night by default.
See Search default tasks for more information.
For example:
SearchService.addOptimizeTask("customOptimize", "0 0 1 ? * MON-FRI", "0 10 1 ? * MON-FRI")
When the command runs successfully, 1 is printed to the wsadmin console.
If the command does not run successfully, 0 is printed to the wsadmin console.
The refreshTasks() command should be used after this command for the new task definitions to take effect immediately. Otherwise, the changes take place when the Search application is next restarted.
You can also use the SearchService.addOptimizeTask command to replace the nightly-optimize-task that is automatically configured when you install Connections. By default, this task runs nightly at 01:30. To replace the default optimize task settings, first remove the existing optimize task using the SearchService.deleteTask command. Then, use the SearchService.addOptimizeTask command to create a new optimize task with the values that you specify.
For example:
SearchService.deleteTask("nightly-optimize-task")
SearchService.addOptimizeTask("nightly-optimize-task", "0 30 1 * * ?", "0 35 1 * * ?")
- SearchService.addSandTask(String taskName, String schedule, String startBy, String jobs)
Creates a new scheduled task definition for the social analytics service in the Home page database.
This command takes the following arguments:
- taskName. The name of the scheduled task. This argument is a string value, which must be unique.
- schedule. The time at which the scheduled task starts. This argument is a string value that must be specified in Cron format.
- startBy. The time given to a task to fire before it is automatically canceled. This argument is a string value that must be specified in Cron format. This parameter should be used to ensure that scheduled tasks are not queued up and running into server busy times. Under normal conditions, the only factors that might cause a task to be delayed are that overlapping or coincident tasks are trying to fire at the same time, or an earlier task is running for a long time.
- jobs. The name, or names, of the jobs to be run when the task is triggered. This argument is a string value. To index multiple jobs, use a comma-delimited list. The following values are valid: evidence, graph, manageremployees, tags, taggedby, and communitymembership.
All the arguments are required.
For example:
SearchService.addSandTask("customSaNDIndexTask", "0 0 1 ? * MON-FRI", "0 10 1 ? * MON-FRI", "evidence,graph,manageremployees,tags,taggedby,communitymembership")
When the command runs successfully, 1 is printed to the wsadmin console.
If the command does not run successfully, 0 is printed to the wsadmin console.
You can also use the SearchService.addSandTask command to replace the nightly-sand-task that is automatically configured when you install Connections. By default, the task runs nightly at 01:00. To replace the default SAND task settings, first remove the existing task using the SearchService.deleteTask(String taskName) command. Then use the SearchService.addSandTask command to create a new SAND task with the values that you specify.
For example:
SearchService.deleteTask("nightly-sand-task")
SearchService.addSandTask("nightly-sand-task", "0 0 1 * * ?", "0 5 1 * * ?", "evidence,graph,manageremployees,tags,taggedby,communitymembership")
- SearchService.backupIndexNow()
Backs up the index to the location specified by the IBM WebSphere Application Server variable, SEARCH_INDEX_BACKUP_DIR. There might be a delay before the backup occurs if there are indexing tasks in progress.
This command does not take any arguments.
After backing up the Search index using wsadmin commands, consider performing a full backup of the HOMEPAGE database. Note that the Search index has a dependency on data in the HOMEPAGE database.
- SearchService.deleteFeatureIndex(String featureName)
- Removes and purges the content for the specified application from the Search index.
Only use this command if you are uninstalling an application from Connections. After running the command, you cannot reindex the content from the application that has been deleted.
For more information, see Purging content from the index.
This command takes a string value, which is the name of the application whose content is to be deleted. The following values are valid: activities, blogs, calendar, communities, dogear, ecm_files, files, forums, profiles, status_updates, and wikis.
For example:
SearchService.deleteFeatureIndex("activities")
When the command runs successfully, 1 is printed to the wsadmin console.
If the command does not run successfully, 0 is printed to the wsadmin console.
- SearchService.deleteAllTasks()
Delete all task definitions from the Home page database.
This command does not take any parameters.
When the command runs successfully, 1 is printed to the wsadmin console. If the command does not run successfully, 0 is printed to the wsadmin console.
- SearchService.deleteGlobalSandProperty(String propertyName)
Delete the specified global social analytics property.
For example:
SearchService.deleteGlobalSandProperty("sand.tag.freq.threshold")
When the property is successfully deleted, 1 is printed to the wsadmin console. If the property is not successfully deleted, 0 is printed to the wsadmin console.
If this happens, contact the administrator of the Search cluster and check the SystemOut.log file for more details.
- SearchService.deleteTask(String taskName)
Delete the task definition with the specified name from the Home page database.
This command takes a string value, which is the name of the task to be deleted.
For information about how to retrieve the names of the tasks in the Home page database, see Listing scheduled tasks.
For example:
SearchService.deleteTask("profilesIndexingTask")
When the command runs successfully, 1 is printed to the wsadmin console.
If the command does not run successfully, 0 is printed to the wsadmin console.
- SearchService.disableAllTasks()
Disables all scheduled tasks for the Search application.
This command does not take any arguments.
When the command runs successfully, 1 is printed to the wsadmin console. If the command does not run successfully, 0 is printed to the wsadmin console.
- SearchService.disableIndexingOnServer("nodename","servername")
Disables indexing on the specified server. Use the SearchService.enableIndexingOnServer("nodename","servername") command to re-enable indexing later.
CAUTION: Do not use this command unless you are instructed to do so by IBM Support. When you want to disable indexing temporarily, use the commands for disabling scheduled tasks instead.
Parameters:
- nodename. A string value that specifies the name of the node where you want to disable indexing.
- servername. A string value that specifies the name of the server on which you want to disable indexing.
For example:
SearchService.disableIndexingOnServer("Node01","cluster1_server1")
- SearchService.disableTask(String taskName)
Disable the scheduled task with the specified name.
This command takes a single argument:
- taskName. The name of the task to be disabled. This argument is a string value.
For example:
SearchService.disableTask("mine")
When the command runs successfully, 1 is printed to the wsadmin console.
If the command does not run successfully, 0 is printed to the wsadmin console.
Using this command affects the indexing process in the following ways:
Results for the current application that is being indexed are discarded. However, if some applications have already been successfully crawled as part of the scheduled task, those applications remain up-to-date in the index.
- When the command is run before the scheduled task fires, the indexing operation is prevented from starting.
- When the command is run during the indexing operation for an application, the Search application stops indexing.
For example, if a task is fired that is to index Bookmarks, Blogs, and Activities (in that order) and the disable command is called while Blogs is being indexed, when the task is enabled again, Blogs and Activities resume indexing at the same point as the previously-called task. Disabled tasks remain disabled until they are re-enabled.
- SearchService.enableAllTasks()
Re-enables all scheduled tasks for the Search application.
This command does not take any arguments.
When the command runs successfully, 1 is printed to the wsadmin console. If the command does not run successfully, 0 is printed to the wsadmin console.
- SearchService.enableIndexingOnServer("nodename","servername")
Re-enables indexing on the specified server. This command is used in conjunction with the SearchService.disableIndexingOnServer("nodename","servername") command.
Parameters:
- nodename. A string value that specifies the name of the node where you want to re-enable indexing.
- servername. A string value that specifies the name of the server on which you want to re-enable indexing.
For example:
SearchService.enableIndexingOnServer("nodename","servername")
- SearchService.enableTask(String taskName)
Re-enables the scheduled task with the specified name. This command uses the current schedule.
This command takes a single argument:
- taskName. The name of the task to be enabled. This argument is a string value.
For example:
SearchService.enableTask("mine")
When the command runs successfully, 1 is printed to the wsadmin console.
If the command does not run successfully, 0 is printed to the wsadmin console.
- SearchService.flushPersistedCrawlContent()
Deletes current persisted seedlists.
This command only clears persisted seedlists in the default persistence location.
Seedlists crawled using the startBackgroundCrawl, startBackgroundFileContentExtraction, or startBackgroundIndex commands must be deleted manually.
This command does not take any input parameters.
Do not run this command while a crawl is in progress.
When the command runs successfully, 1 is printed to the wsadmin console.
If the command does not run successfully, 0 is printed to the wsadmin console.
- SearchService.getFileContentNow(String applicationNames)
Launches the file content retrieval task. This command iterates over the file cache, downloading and converting files that don't have any content.
This command takes a string value, which is the name of the application whose content is to be retrieved. The following values are valid:
- ecm_files. Retrieves files from the Enterprise Content Management repository. Only published files are retrieved; draft files are not included.
- files. Retrieves files from the Files application.
- wikis. Retrieves files from the Wikis application.
For example:
SearchService.getFileContentNow("files")
- SearchService.indexNow(String applicationNames)
Creates a one-off task that indexes the specified applications 30 seconds after being called.
This command takes a single argument:
- applicationNames. The name (or names) of the Connections application to be indexed when the task is triggered. This argument is a string value. To index multiple applications, use a comma-delimited list.
The following values are valid: activities, blogs, calendar, communities, dogear, ecm_files, files, forums, profiles, status_updates, and wikis.
An optimize operation is not run at the end of the indexing operation.
For example:
SearchService.indexNow("dogear, blogs")
- SearchService.indexNowWithOptimization(String applicationNames)
Creates a one-off task that indexes the specified applications 30 seconds after being called, and performs an optimization operation at the end of the indexing operation.
This command takes a single argument:
- applicationNames. The name (or names) of the Connections application to be indexed when the task is triggered. This argument is a string value. To index multiple applications, use a comma-delimited list.
The following values are valid: activities, blogs, calendar, communities, dogear, ecm_files, files, forums, profiles, status_updates, and wikis.
The optimization operation is both CPU and I/O intensive. For this reason, the operation should be performed infrequently and, if possible, during off-peak hours.
Note that when you install Connections, a search optimization task is set up to run every night by default.
See Search default tasks for more information.
For example:
SearchService.indexNowWithOptimization("dogear, blogs")
- SearchService.listFileContentTasks()
Lists all the scheduled file content retrieval tasks.
This command does not take any input parameters.
- SearchService.listGlobalSandProperties()
Lists all global properties for the social analytics service.
The properties are returned as a mapping of keys to values.
For example, the following output indicates that the value of the sand.tag.freq.threshold property is 32000.
{sand.tag.freq.threshold=32000}
- SearchService.listIndexingNodes()
Return a list of the Search indexing nodes in your deployment.
This command does not take any arguments.
When the command runs successfully, the names of the Search indexing nodes are printed to the wsadmin console along with information about each node. The output includes a version timestamp and information that indicates whether the node is an indexing node or a non-indexing node, whether the index on the server is more than 30 days old, and whether the index on the server is synchronized with the latest index in the cluster.
For example:
Indexing Node Id: dubxpcvm084-0Node02:server1, Last Crawl Version: 1,340,285,460,074, Indexer: true, Out of Date: false, Out of Sync: false
- SearchService.listIndexingTasks()
Lists all scheduled indexing task definitions defined in the Home page database.
This command does not take any input parameters.
- SearchService.listOptimizeTasks()
Lists all scheduled optimize task definitions defined in the Home page database.
This command does not take any input parameters.
- SearchService.listRunningTasks()
Lists all the tasks that are currently running for the Search application. This command does not take any input parameters.
The command returns a list of the tasks that are currently running, and includes the following information for each task:
- Internal task ID
- Task name
- Time that the task started
For example:
wsadmin>SearchService.listRunningTasks()
>>>51 roi-profiles-WedDec0715:23:09GMT2011 Wed Dec 07 15:23:09 GMT 2011
- SearchService.listSandTasks()
Lists all the tasks scheduled for the social analytics service that are defined in the Home page database.
This command does not take any input parameters.
- SearchService.listTasks()
Lists all Search scheduled task definitions (indexing and optimize) defined in the Home page database.
This command does not take any input parameters.
- SearchService.notifyRestore(Boolean isNewIndex)
Brings the database to a consistent state so that crawlers start from the point at which the backup was made.
The notifyRestore command updates index management tables in the HOMEPAGE database so that crawling resume points are reloaded from a restored index, thereby ensuring that all future crawls start from the correct point. The command also purges cached content in the HOMEPAGE database.
The notifyRestore command optionally removes all entries from the HOMEPAGE database table that tracks the status of individual files as part of the content extraction process. This table is used by the Search application when indexing the content of file attachments.
This command takes a single parameter:
- isNewIndex: If set to true, all entries are removed from the database table that is used by the file content extraction process to track the status of individual files.
Set this parameter to true when you are restoring a newly-built index. Set the parameter to false when you are restoring an index backup.
For example:
SearchService.notifyRestore("true")- SearchService.optimizeNow()
Creates a one-off task that performs an optimize operation on the search index, 30 seconds after being called.
The optimization operation is both CPU and I/O intensive. For this reason, the operation should be performed infrequently and, if possible, during off-peak hours.
For more information, refer to the following web page:
Note that when you install Connections, a search optimization task is set up to run every night by default.
See Search default tasks for more information.
This command does not accept any input parameters.
This operation should not be called during an indexing operation; if it needs to be run, do it at an off-peak time when the application is not expected to be performing intensive I/O operations on the index.
- SearchService.optIntoSandByEmail(String email)
Includes the user with the specified email address in the social analytics service.
This command takes a single argument:
- email. The email address of the user who is to be included in the social analytics service. This argument is a string value.
For example:
SearchService.optIntoSandByEmail("ajones10@example.com")
When the command runs successfully, 1 is printed to the wsadmin console.
If the command does not run successfully, 0 is printed to the wsadmin console.
- SearchService.optIntoSandByExId(String externalId)
Includes the user with the specified external ID in the social analytics service.
This command takes a single argument:
- externalId. The external ID of the user who is to be included in the social analytics service. This argument is a string value.
For example:
SearchService.optIntoSandByExId("11111-1111-1111-1111")
When the command runs successfully, 1 is printed to the wsadmin console.
If the command does not run successfully, 0 is printed to the wsadmin console.
- SearchService.optOutOfSandByEmail(String email)
Excludes the user with the specified email address from the social analytics service.
This command takes a single argument:
- email. The email address of the user who is to be excluded from the social analytics service. This argument is a string value.
For example:
SearchService.optOutOfSandByEmail("ajones10@example.com")
When the command runs successfully, 1 is printed to the wsadmin console.
If the command does not run successfully, 0 is printed to the wsadmin console.
- SearchService.optOutOfSandByExId(String externalId)
Excludes the user with the specified external ID from the social analytics service.
This command takes a single argument:
- externalId. The external ID of the user who is to be excluded from the social analytics service. This argument is a string value.
For example:
SearchService.optOutOfSandByExId("11111-1111-1111-1111")
When the command runs successfully, 1 is printed to the wsadmin console.
If the command does not run successfully, 0 is printed to the wsadmin console.
- SearchService.refreshTasks()
Calls the server to read task settings from the Search task definition tables and synchronizes the configured tasks with those persisted in the IBM WebSphere Application Server scheduler tables.
This command should be used after the following commands for the changes to task definitions to take effect immediately. Otherwise, the changes take place when the Search application is next restarted.
- SearchService.addIndexingTask(String taskName, String schedule, String startBy, String applicationNames, Boolean optimizeFlag)
- SearchService.addOptimizeTask(String taskName, String schedule, String startBy)
- SearchService.deleteTask(String taskName)
This command does not accept any input parameters.
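For example, the following sketch deletes a hypothetical task definition and then refreshes the scheduler so that the change takes effect without a restart (the task name is illustrative only):
SearchService.deleteTask("nightlyOptimize")
SearchService.refreshTasks()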
- SearchService.reloadIndex()
Reloads the Search index on the current node only without a restart of the Search application.
If you are making changes to the configuration of the social analytics service, you still need to restart Search to apply the changes.
When the command runs successfully, 1 is printed to the wsadmin console. If the command does not run successfully, 0 is printed to the wsadmin console.
- SearchService.reloadIndexAllNodes()
Reloads the Search index on all the nodes in the cluster without a restart of the Search application.
If you are making changes to the configuration of the social analytics service, you still need to restart Search to apply the changes.
When the command runs successfully, 1 is printed to the wsadmin console. If the command does not run successfully, 0 is printed to the wsadmin console.
- SearchService.reloadSearchConfiguration()
Reloads the search-config.xml file for Search on the current node only without a restart of the Search application.
If you are making changes to the configuration of the social analytics service, you still need to restart Search to apply the changes.
When the command runs successfully, 1 is printed to the wsadmin console. If the command does not run successfully, 0 is printed to the wsadmin console.
- SearchService.reloadSearchConfigurationAllNodes()
Reloads the search-config.xml file for Search on all nodes in the cluster without a restart of the Search application.
If you are making changes to the configuration of the social analytics service, you still need to restart Search to apply the changes.
When the command runs successfully, 1 is printed to the wsadmin console. If the command does not run successfully, 0 is printed to the wsadmin console.
- SearchService.removeIndexingNode(String nodeName)
Removes the specified node from the index management table.
This command takes a single argument:
- nodeName. The name of the node to be removed. This argument is a string value that takes the following format:
nodeName:serverName
To retrieve a list of the indexing nodes in your deployment, run the SearchService.listIndexingNodes() command. For more information, see Listing indexing nodes.
For example:
SearchService.removeIndexingNode("Node01:cluster1_server1")
When the command runs successfully, 1 is printed to the wsadmin console.
If the command does not run successfully, 0 is printed to the wsadmin console.
- SearchService.resetAllTasks()
Deletes all scheduled task definitions from the Home page database and restores the default set of tasks.
For more information about these tasks, see Search default scheduled tasks.
This command does not take any parameters.
When the command runs successfully, 1 is printed to the wsadmin console. If the command does not run successfully, 0 is printed to the wsadmin console.
- SearchService.retryContentFailuresNow(String applicationNames)
Retries failed attempts at downloading and converting files for the specified application.
This command takes a string value, which is the name of the application whose content is to be downloaded and converted. The following values are valid:
- files. Retrieves files from the Files application.
- wikis. Retrieves files from the Wikis application.
A file download or conversion task can fail for a number of reasons, for example, hardware or network issues. Failures are flagged in the cache and can be retried.
For example:
SearchService.retryContentFailuresNow("wikis,files")
- SearchService.retryIndexing(String service, String id)
Attempts to index an item of content that was not indexed successfully during initial or incremental indexing.
retryIndexing does not work for ecm_files.
This command takes two parameters:
- service
- The application from which the content originated.
- id
- The Atom ID of the content.
For information about how to retrieve the Atom ID for the content, refer to the Connections API documentation on the IBM Social Business Development Wiki.
For example:
SearchService.retryIndexing('activities', 'b63cabf8-0533-45cf-9636-d63cd6a6f3ca')
If the command is successful, 1 is printed to the console. If the command fails, 0 is printed to the console.
- SearchService.sandIndexNow(String jobs)
Creates a one-off social analytics task that indexes the specified services 30 seconds after being called.
This command takes a single argument:
- jobs. The name, or names, of the jobs to be run when the task is triggered. This argument is a string value. To run multiple jobs, use a comma-delimited list. The following values are valid: evidence, graph, manageremployees, tags, taggedby, and communitymembership.
For example:
SearchService.sandIndexNow("evidence,graph,manageremployees,tags,taggedby,communitymembership")- SearchService.saveMetricsToFile(String filePath)
- Collects internal metrics and writes them to the specified file.
This command takes a single argument:
- filePath
- The full path to a text file in which to store the metric information.
This argument is a string value.
A file is created in the specified directory.
The file name is prefixed with the string "searchMetrics-" and contains a timestamp indicating when the metrics were collected. The file output is printed in the following format:
================================================================
ACTIVITIES
Average entry indexing time: 0.03 seconds
Max entry indexing time: 0.17
Min entry indexing time: 0.01
Entry count: 54
Average seedlist request time: 1.83 seconds
Max seedlist request time: 4.16
Min seedlist request time: 0.1
Seedlist request count: 3
================================================================
PROFILES
Average entry indexing time: 0.07 seconds
Max entry indexing time: 1.48
Min entry indexing time: 0.04
Entry count: 1763
Average seedlist request time: 8.6 seconds
Max seedlist request time: 13.06
Min seedlist request time: 0.14
Seedlist request count: 5
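For example, to write the metrics to a hypothetical output location:
SearchService.saveMetricsToFile("/opt/IBM/Connections/metrics/searchMetrics.txt")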
- SearchService.setGlobalSandIntegerProperty(String propertyName, String integerPropertyValue)
Adds or updates a dynamic global social analytics property that affects the social analytics API or indexing behavior. The changes take place when the next social analytics indexing job starts.
When the property is successfully added or updated, 1 is printed to the wsadmin console. If the property is not successfully added or updated, 0 is printed to the wsadmin console.
If this happens, contact the Search cluster administrator and check the SystemOut.log file for more details.
Currently, support is provided only for the sand.tag.freq.threshold social analytics property. This property takes an integer value.
The property is used by the Recommend API algorithm as follows:
- Get the people and tags to which the user is related.
- If the tag has a frequency in the Search index that is greater than or equal to the value specified for the sand.tag.freq.threshold property, discard it. This action prevents users from getting recommendations based on common tags, that is, tags that have a high frequency.
- Get the documents with which the people and tags gathered in the first query are associated.
- Return the results to the user.
For example:
SearchService.setGlobalSandIntegerProperty("sand.tag.freq.threshold",100)
This setting is global and will affect all Connections users. The setting should only be changed by an administrator.
You can consult the SystemOut.log file when social analytics indexing begins to check the frequency distribution of the most popular 100 tags in the system.
For example, in line 1 of the following extract, you can see that the tag brown has ordinal 1718 in the index (an ordinal is a facet identifier) and that it has a frequency of 1, which means that there is only one instance of a document being tagged with the keyword brown in the index.
[5/30/11 15:41:13:544 IST] 00000025 CommonTagsCac I com.ibm.lotus.connections.sand.tags.impl.CommonTagsCache buildCacheUsingTermEnum {1718:brown:1}
[5/30/11 15:41:13:548 IST] 00000025 CommonTagsCac I com.ibm.lotus.connections.sand.tags.impl.CommonTagsCache buildCacheUsingTermEnum {1730:summaries:1}
[5/30/11 15:41:13:551 IST] 00000025 CommonTagsCac I com.ibm.lotus.connections.sand.tags.impl.CommonTagsCache buildCacheUsingTermEnum {1737:public_holiday:1}
[5/30/11 15:41:13:554 IST] 00000025 CommonTagsCac I com.ibm.lotus.connections.sand.tags.impl.CommonTagsCache buildCacheUsingTermEnum {1721:chronicle:1}
[5/30/11 15:41:13:558 IST] 00000025 CommonTagsCac I com.ibm.lotus.connections.sand.tags.impl.CommonTagsCache buildCacheUsingTermEnum {1716:hollis:1}
[5/30/11 15:41:13:561 IST] 00000025 CommonTagsCac I com.ibm.lotus.connections.sand.tags.impl.CommonTagsCache buildCacheUsingTermEnum {1700:inquirer:1}
[5/30/11 15:41:13:565 IST] 00000025 CommonTagsCac I com.ibm.lotus.connections.sand.tags.impl.CommonTagsCache buildCacheUsingTermEnum {1684:gazette:5}
[5/30/11 15:41:13:568 IST] 00000025 CommonTagsCac I com.ibm.lotus.connections.sand.tags.impl.CommonTagsCache buildCacheUsingTermEnum {1679:ibm:7}
[5/30/11 15:41:13:572 IST] 00000025 CommonTagsCac I com.ibm.lotus.connections.sand.tags.impl.CommonTagsCache buildCacheUsingTermEnum Cache:{1679=7, 1684=5, 1700=1, 1716=1, 1718=1, 1721=1, 1730=1, 1737=1}
[5/30/11 15:41:13:633 IST] 00000025 IndexBuilderQ I com.ibm.lotus.connections.search.admin.index.impl.IndexBuilderQueue startSaNDIndexingService CLFRW0483I: SAND indexing has started.
- SearchService.startBackgroundCrawl(String persistenceLocation, String components)
Crawls the seedlists for the specified applications and then saves the seedlists to the specified location. This command does not build an index.
The command takes the following parameters:
- persistenceLocation
- A string that specifies the path to which the seedlists are to be saved.
- components
- A string that specifies the applications whose seedlists are to be crawled. The following values are valid: activities, all_configured, blogs, calendar, communities, dogear, ecm_files, files, forums, profiles, status_updates, and wikis. Use all_configured instead of listing all indexable services when you want to crawl all the applications.
For example:
SearchService.startBackgroundCrawl("/opt/IBM/Connections/backgroundCrawl", "activities, forums, communities, wikis")
- SearchService.startBackgroundFileContentExtraction(persistence dir, components, extracted text dir, thread limit)
Extracts file content for all files referenced in the persisted seedlists in a process that is independent of the indexing task.
Parameters:
- persistence dir
- A string that specifies the location of the persisted files seedlists.
- components
- A string that specifies the application or applications for which you want to extract file content. The following values are valid:
- files. Extracts file content from the Files application.
- wikis. Extracts file content from the Wikis application.
- ecm_files. Extracts file content from community library files stored in Enterprise Content Management (ECM) systems.
- extracted text dir
- A string that specifies the target location for the extracted text. The same directory structure and naming scheme is used for this directory as for the ExtractedText directory in the Connections shared data location.
For example, ExtractedText/121/31/36cdb7a0-92b2-4cf9-91f3-c4e7e527a5e1.
- thread limit
- The maximum number of seedlist threads.
For example:
SearchService.startBackgroundFileContentExtraction("/bg_index/seedlists", "files", "/bg_index/extractedText", 10)
You typically run this command after running a startBackgroundCrawl command to act on up-to-date seedlists. If there are no persisted seedlists available, the behavior is the same as when you run the startBackgroundCrawl command, that is, the seedlists are crawled and persisted first.
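For example, a typical background extraction sequence first refreshes the persisted seedlists and then extracts file content from them; the paths and application list here are illustrative only:
SearchService.startBackgroundCrawl("/bg_index/seedlists", "files, wikis")
SearchService.startBackgroundFileContentExtraction("/bg_index/seedlists", "files, wikis", "/bg_index/extractedText", 10)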
- SearchService.startBackgroundIndex(String persistenceLocation, String extractedFileContentLocation, String indexLocation, String applications, String jobs)
Creates a background index in the specified location.
This command crawls the seedlists for the specified applications, saves the seedlists to the specified persistence location, extracts the file content, and then builds a Search index for the applications at the specified index location.
You can optionally run social analytics indexing jobs at the end of the background indexing operation. Alternatively, you can run the SearchService.startBackgroundSandIndex command if you want to create a background index for the social analytics service.
For more information, see Creating a background index for the social analytics service.
This command takes the following arguments:
- persistenceLocation
- A string value that specifies the location where you want to save the application seedlists.
- extractedFileContentLocation
- The file content extraction location. Use the same location specified when you previously extracted the file content using the SearchService.startBackgroundFileContentExtraction command or the SearchService.startBackgroundIndex command. Otherwise, specify an empty directory as the location for storing the extracted file content.
- indexLocation
- A string value that specifies the location where you want to create the background index.
- applications
- A string value that specifies the names of the applications to include in the index crawl. The following values are valid: activities, all_configured, blogs, calendar, communities, dogear, ecm_files, files, forums, profiles, status_updates, and wikis. Use all_configured rather than listing all the indexable applications when you want to index all the applications.
To queue up multiple applications for indexing, run a single instance of the SearchService.startBackgroundIndex command with the names of the applications to index listed with a comma separator between them. If you run multiple instances of the command with a single application specified as a parameter, a lock is established when you run the first command so that only the first application specified is indexed successfully.
- jobs
- A string value that specifies the names of the social analytics post-processing indexers that examine, index, and produce new output based on the data in the index. The following values are valid: evidence, graph, manageremployees, tags, taggedby, and communitymembership. Use a comma to separate multiple values. This parameter is optional.
Examples:
SearchService.startBackgroundIndex("/opt/IBM/Connections/data/local/search/backgroundCrawl", "/opt/IBM/Connections/data/local/search/backgroundExtracted", "/opt/IBM/LotusConnections1/data/search/background/backgroundIndex", "activities, blogs, calendar, communities, dogear, files, forums, profiles, wikis, status_updates", "communitymembership, graph")
SearchService.startBackgroundIndex("/opt/IBM/Connections/data/local/search/backgroundCrawl", "/opt/IBM/Connections/data/local/search/backgroundExtracted", "/opt/IBM/LotusConnections1/data/search/background/backgroundIndex", "all_configured")
- SearchService.startBackgroundSandIndex(String indexLocation, String jobs)
Creates a background index for the social analytics service in the specified location. This command must only be run against an index that already has content indexed from the Connections applications and the ECM service.
This command takes the following arguments:
- indexLocation
- A string value that specifies the location where you want to create the background index.
- jobs
- A string value that specifies the names of the social analytics post-processing indexers that examine, index, and produce new output based on the data in the index. The following values are valid: evidence, graph, manageremployees, tags, taggedby, and communitymembership. Use a comma to separate multiple values.
For example:
SearchService.startBackgroundSandIndex("/bkg2/index/","communitymembership,graph")
Administer Wikis
Configure and administer Wikis using wsadmin to run administrative commands or by editing the configuration file directly.
Use the wsadmin client to run administrative commands to perform tasks that manipulate Wikis content. Changes that you make using wsadmin take effect immediately. Edit the configuration file directly to control how and when various Wikis operations take place. Before changes to the configuration file can take effect, you must synchronize the nodes and restart the application server.
Change Wikis configuration property values
Configuration properties control how and when various Wikis operations take place. You can edit the properties to change the ways that Wikis operates.
To edit configuration files, use the wsadmin client.
In the client, you access scripts that use the AdminConfig object to interact with the configuration file. After changing the configuration properties, you must synchronize nodes and restart the Wikis server before the changes take effect.
To edit Wikis configuration properties...
- Start the Wikis Jython script interpreter.
- Access the Wikis configuration files:
execfile("wikisAdmin.py")If you are asked to select a server, you can select any server.
- Check out the Wikis configuration:
WikisConfigService.checkOutConfig("working_directory", "cell_name")
where:
- working_directory is the temporary working directory to which the configuration XML and XSD files are copied.
The files are kept in this working directory while you make changes to them.
AIX, IBM i, and Linux only: The directory must grant write permissions or the command will not run successfully.
- cell_name is the name of the WAS cell hosting the Connections application.
This argument is required. To determine the cell name, run the following wsadmin command:
print AdminControl.getCell()
For example:
WikisConfigService.checkOutConfig("/opt/my_temp_dir", "CommCell01")
- To view a list of the valid Wikis configuration settings and their current values:
WikisConfigService.showConfig()
Here is some sample output from the WikisConfigService.showConfig() command:
Wikis configuration properties:
security.logout.href = /wikis/ibm_security_logout
activeContentFilter.enabled = true
cache.user.timeout = 43200000
cache.http.publicContentMaxAgeInSecs = 604800
db.dialect = DB2
- Some properties must be edited using the wsadmin client; others can be changed only by editing the configuration XML file directly. To change a Wikis configuration setting, do one of the following:
- To edit a property using the wsadmin client:
WikisConfigService.updateConfig("property", "value")
where property is one of the editable Wikis configuration properties and value is the new value that you want to set for that property.
See Wikis configuration properties for a complete list of editable properties.
For example:
WikisConfigService.updateConfig("file.page.maximumSizeInKb", "512")
- To edit the value of a property in a configuration file directly, go to the temporary directory to which you checked out the file, open the file in a text editor, and then make your changes.
- Repeat the previous step for each single-value property setting that you want to change.
- After updating the Wikis properties with new values, use the WikisConfigService.showConfig() command to display the list of properties and their updated values. These are the values that will be checked in with the WikisConfigService.checkInConfig() command.
- Check in the configuration file.
You must check in the file in the same wsadmin session in which you checked it out.
For more information, see Applying Wikis property changes.
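For reference, a complete editing session might look like the following sketch; the working directory, cell name, and property value are illustrative only:
execfile("wikisAdmin.py")
WikisConfigService.checkOutConfig("/opt/my_temp_dir", "CommCell01")
WikisConfigService.showConfig()
WikisConfigService.updateConfig("file.page.maximumSizeInKb", "512")
WikisConfigService.showConfig()
WikisConfigService.checkInConfig()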
Wikis configuration properties
Configuration properties in the wikis-config.xml file control how and when various Wikis operations occur.
When you modify a property, you must conform to the following guidelines:
- Configuration values, except for properties with multiple values, are always required.
- Enabled properties must have a Boolean value of either true or false.
- Number values must be integers.
After you edit the wikis-config.xml file, apply your changes by restarting the application server that hosts the Wikis application.
You can modify the following configuration properties:
- activeContentFilter.enabled
Specifies the mime types that are scanned and scripts that are removed from wiki pages when they are viewed.
You can disable this filter by changing the value to false, but that might leave your environment open to cross-site scripting attacks unless you took other security precautions.
For more information, see Securing applications from malicious attack.
- api.indent.enabled
Specifies whether API output is indented. Indentation helps with development and debugging.
This value is false by default because indentation affects performance. To enable indentation, change the value to true.
- api.tagFilter
- Specifies the maximum number of tags that are allowed for filtering Wikis by tags. The default value is 3. For performance reasons, a value greater than 10 is not recommended. To change the value, use the following format:
<tagFilter maximumTags="n"/>where n is the maximum number of tags.- cache.http.publicContentMaxAgeInSecs
Specifies the maximum age, in seconds, of the public content cache before it is refreshed. The cache stores static web resources, such as JavaScript™ and images. To show resource changes more quickly, reduce the value of this property.
The value must be greater than or equal to 0.
You can force a resource update by opening LotusConnections-config.xml and setting the versionStamp property to any value.
For example: <versionStamp value="20130929.013427">. The token is included in most URLs and in URL caches. The new URL overrides the old version and refreshes the resources.
For information about editing LotusConnections-config.xml, see Common configuration properties.
Some resources, including some images, do not use version stamps. If you edit these resources frequently, you can reduce the cache.http.publicContentMaxAgeInSecs value to show changes more quickly. Otherwise, update the version stamp when you modify the wikis-config.xml file.
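For example, assuming the property is available as a quick config command in your deployment, you could lower the cache age as follows (the value shown is illustrative only):
WikisConfigService.updateConfig("cache.http.publicContentMaxAgeInSecs", "86400")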
- cache.http.publicFeedMaxAgeInSecs
Specifies the maximum age, in seconds, of the public feed cache before it is refreshed. Public feeds pass information to the Public Wikis view. To avoid performance issues in deployments with more than 200,000 wikis, increase this value. However, to maintain the latest information in the public view, decrease the value.
The value must be greater than or equal to 0.
- cache.user.timeout
Specifies the number of milliseconds that user objects stay in the user information cache. The user information cache stores metadata about users such as names and email addresses. Use this property to control the frequency of requests to the Wikis database for user information.
The value must be greater than or equal to 1.
If the value of user data changes while the update user task or commands for MemberSynch are running, the cache is invalidated.
- db.dialect
Reflects the current database type, typically specified during installation. Accepts the values DB2, Oracle, or SQL Server.
- directory.community.membershipCache.maximumAgeOnLoginInSeconds
Specifies the number of seconds after a user logs in that the community membership cache is refreshed. Only applicable if Communities wikis and Communities integration are enabled.
The value must be greater than or equal to 0.
Refreshing the cache is required so that a user has the same access to content as the community, but refreshing the cache affects performance. A short time interval between refreshes means that the Wikis application is more up-to-date, but performance might be slower. A long interval between refreshes has less effect on performance, but users might not have immediate access to updated content.
If you reduce the value of this property, inform users that logging out and back in is the best way to refresh the cache. You can also increase the value of the directory.community.membershipCache.maximumAgeOnRequestInSeconds property so that frequent requests do not affect performance.
- directory.community.membershipCache.maximumAgeOnRequestInSeconds
Specifies the number of seconds after an application request that the community membership cache is refreshed. Only applicable if Communities wikis and Communities integration are enabled.
The value must be greater than or equal to 0.
Refreshing the cache is required so that a user has the same access to content as the community, but refreshing the cache affects performance. A short time interval between refreshes means that the Wikis application is more up-to-date, but performance might be slower. A long interval between refreshes has less effect on performance, but users might not have immediate access to updated content.
If you reduce the value of this property, inform users that logging out and back in is the best way to refresh the cache. You can also increase the value of the directory.community.membershipCache.maximumAgeOnRequestInSeconds property so that frequent requests do not affect performance.
To improve performance, increase this value to 10 minutes or more.
- directory.group.membershipCache.maximumAgeOnLoginInSeconds
Number of seconds after a user logs in that the group membership cache is refreshed. Only applicable if groups are enabled.
The value must be greater than or equal to 0.
Refreshing the cache is required so that a user has the same access to content as the community, but refreshing the cache affects performance. A short time interval between refreshes means that the Wikis application is more up-to-date, but performance might be slower. A long interval between refreshes has less effect on performance, but users might not have immediate access to updated content.
If you reduce the value of this property, inform users that logging out and back in is the best way to refresh the cache. You can also increase the value of the directory.group.membershipCache.maximumAgeOnRequestInSeconds property so that frequent requests do not affect performance.
- directory.group.membershipCache.maximumAgeOnRequestInSeconds
Number of seconds after an application request that the group membership cache is refreshed. Only applicable if groups are enabled.
The value must be greater than or equal to 0.
Refreshing the cache is required so that a user has the same access to content as the community, but refreshing the cache affects performance. A short time interval between refreshes means that the Wikis application is more up-to-date, but performance might be slower. A long interval between refreshes has less effect on performance, but users might not have immediate access to updated content.
If you reduce the value of this property, inform users that logging out and back in is the best way to refresh the cache. You can also increase the value of the directory.community.membershipCache.maximumAgeOnRequestInSeconds property so that frequent requests do not affect performance.
To improve performance, increase this value to 10 minutes or more.
- directory.typeaheadSearch.maximumResults
Specifies the maximum number of names to display when a user searches for user or group names in a search field. Sets a maximum for both type-ahead results and search results when a user clicks Search.
The value must be greater than or equal to 1.
If the value is large, such as 1000, and there are 1000 or more matches, all names are returned and performance is greatly reduced. If the value is low, such as 10, a user who enters a generic name might not see all matches.
Type-ahead is available for public pages but not for filtered pages that might appear in Owner, Editor, or Reader views.
- download.modIBMLocalRedirect.enabled
Specifies whether IBM HTTP Server, and not WebSphere Application Server, serves downloaded files. In a production environment, use IBM HTTP Server to serve downloaded files.
If this property is set to true, also specify a file path in the download.modIBMLocalRedirect.hrefPathPrefix property.
If this property is set to false, the WebSphere Application Server redirect servlet downloads the files.
For information about configuring IBM HTTP Server to download files, see Configure file downloads through the HTTP server.
- download.modIBMLocalRedirect.hrefPathPrefix
Specifies the full path to the file system directory where Wikis data is stored. The file path must not include a trailing slash.
This property is relevant only if the download.modIBMLocalRedirect.enabled property is true. If that property is set to false, the WebSphere Application Server redirect servlet downloads the files.
For information about configuring IBM HTTP Server to download files, see Configure file downloads through the HTTP server.
- download.stats.logging.enabled
Level of detail to log about page views. If the value is set to false, Wikis logs the number of times a page is viewed. If true, Wikis logs the names of authenticated users who view pages.
Specify true for auditing.
- editor.wikitexttab.enabled
- Specifies whether the Wiki Text tab is enabled. The default value is false. You can enable the Wiki Text tab by changing the value to true.
- file.attachment.maximumSizeInKb
Specifies the maximum size, in KB, that is allowed for file attachments.
The value must be greater than or equal to 1.
Attachments that are larger than this setting fail to upload and return an error to users. To improve performance, use this property to restrict users from uploading large files.
After you change this value, the maximum size limit does not change for users until their browser cache is refreshed. You can force a refresh by running a command to update the product version stamp.
- file.media.maximumSizeInKb
Specifies the maximum size, in KB, that is allowed for media. In Wikis, media are wiki pages.
The value must be greater than or equal to 1.
This property is useful if you want a relatively large quota size for libraries, but you do not want users to attach large files.
After you change this value, the maximum size limit does not change for users until their browser cache is refreshed. You can force a refresh by running a command to update the product version stamp.
- file.page.maximumSizeInKb
Specifies the maximum size, in KB, that is allowed for wiki pages. Since wiki pages are a type of media, this value must be less than or equal to the maximum size set in the file.media.maximumSizeInKb property. Use this property to restrict users from uploading files whose size might affect performance.
The value must be greater than or equal to 1.
Pages that are larger than the value of this property return an error.
After you change this value, the maximum size limit does not change for users until their browser cache is refreshed. You can force a refresh by running a command to update the product version stamp.
- file.restrictions.enabled
Enables or disables the ability to restrict the types of files that users can upload as attachments in Wikis. Accepts the values true or false.
For more information about restricting file types, see Restricting attachment file types in Wikis.
- file.restrictions.mode
Set the mode for file extension restrictions. Accepts the values allow or deny.
If the value is allow, the file extensions in the list are the only ones that users can upload as attachments.
If the value is deny, the extensions are not allowed.
For example:
<file> .... <restrictions enabled="true" mode="allow"> <extensions> <extension>odt</extension> <extension>odp</extension> <extension>ods</extension> </extensions> </restrictions> </file>For more information, see Restricting attachment file types in Wikis.
- file.storage.rootDirectory
Specifies the path to the file system directory where Wikis data is stored. This value can be set during installation and can differ for each node in a cluster. If the directory is specified during installation, this value is populated by WebSphere Application Server. However, you can specify a different directory.
Connections looks for a files and a temp directory in the directory that is specified by this property. If the directories are not present, they are created.
The temp directory stores data while the data is being uploaded or scanned for viruses. The files directory contains binary data files.
For more information, see Backing up data.
- file.versioning.enabled
Specifies whether wiki page versioning is allowed. Specify true or false.
The default value is true.
When this value is set to false, the versioning interface is not displayed and the first version is always current. If you disable this property after multiple versions of a page are created, the latest version becomes the current and only version.
New versions are created only when content changes, not when the title, tags, or other metadata change.
To reduce data storage, specify false.
For more information, see Disabling wiki page versioning.
- wikimacros.enabled
- Specifies whether macros are enabled in Wikis. You can use macros to automate common tasks, such as generating a table of contents in a wiki page.
The default value of this parameter is false. To enable macros, set the value to true. When enabled, macros are available from the Macros menu in the editor toolbar.
- scheduledTasks.DirectoryGroupSynch.args.maximumDataAgeInHours
Specifies the number of hours that group information can remain in the Wikis database before the synchronization task runs. If groups are disabled, the synchronization task does not run.
The value must be greater than or equal to 0.
The synchronization task runs automatically in the background, synchronizing group names in the Wikis database with the user directory. The task queries the user directory with the directory ID and when it finds a match it synchronizes the group name.
The task runs on any group information that is older than the value specified in the scheduledTasks.DirectoryGroupSynch.args.maximumDataAgeInHours property. It runs at a frequency that is specified in the scheduledTasks.DirectoryGroupSynch.interval property. It pauses between groups for the amount of time that is specified in the scheduledTasks.DirectoryGroupSynch.args.pauseInMillis property.
- scheduledTasks.DirectoryGroupSynch.args.pauseInMillis
Specifies the number of milliseconds that the synchronization task must wait before it updates information for the next group. Use this property to add an interval between synchronizing items in the queue. This interval avoids overloading your user directory when the task runs. Does not run if groups are disabled.
The value must be greater than or equal to 0.
The synchronization task runs automatically in the background, synchronizing group names in the Wikis database with the user directory. The task queries the user directory with the directory ID and when it finds a match it synchronizes the group name.
The task runs on any group information that is older than the value specified in the scheduledTasks.DirectoryGroupSynch.args.maximumDataAgeInHours property. It runs at a frequency that is specified in the scheduledTasks.DirectoryGroupSynch.interval property. It pauses between groups for the amount of time that is specified in the scheduledTasks.DirectoryGroupSynch.args.pauseInMillis property.
If the remote user directory can handle many simultaneous queries, you can enter 0 as the value.
- scheduledTasks.DirectoryGroupSynch.enabled
Enables or disables the synchronization task for groups. The default is true.
The synchronization task runs automatically in the background, synchronizing group names in the Wikis database with the user directory. The task queries the user directory with the directory ID and when it finds a match it synchronizes the group name.
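For example, assuming the property is available as a quick config command in your deployment, you could disable the group synchronization task as follows:
WikisConfigService.updateConfig("scheduledTasks.DirectoryGroupSynch.enabled", "false")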
- scheduledTasks.DirectoryGroupSynch.interval
Frequency of the synchronization task.
This property accepts a chronological expression.
For information about formatting an interval attribute, see Scheduling tasks.
The synchronization task runs automatically in the background, synchronizing group names in the Wikis database with the user directory. The task queries the user directory with the directory ID and when it finds a match it synchronizes the group name.
The task runs on any group information that is older than the value specified in the scheduledTasks.DirectoryGroupSynch.args.maximumDataAgeInHours property. It runs at a frequency that is specified in the scheduledTasks.DirectoryGroupSynch.interval property. It pauses between groups for the amount of time that is specified in the scheduledTasks.DirectoryGroupSynch.args.pauseInMillis property.
Adjust this property to speed up or slow down the process of synchronizing group information.
- scheduledTasks.FileActuallyDelete.args.softDeleteMinimumPendingTimeInMins
Specifies the number of minutes that files must be in the pending deletion queue before the delete files task deletes them.
For example, the default value of 720 means that queued files are deleted when they are in the queue for 720 minutes or longer.
The value must be greater than or equal to 0.
A high value allows users to finish downloading large files. It also allows more lenient online backup policies.
For example, for online backups that take fewer than this number of minutes, you do not have to pause the file deletion task.
For more information about pausing the file deletion task during backups, see Backing up data.
- scheduledTasks.FileActuallyDelete.enabled
Enables or disables the delete files task. The default is true.
This task deletes files if they are marked as pending deletion, and they are older than the value specified in the scheduledTasks.FileActuallyDelete.args.softDeleteMinimumPendingTimeInMins property.
- scheduledTasks.FileActuallyDelete.interval
Frequency with which the delete files task runs.
This property accepts a chronological expression.
For information about formatting an interval attribute, see Scheduling tasks.
Wikis are deleted if they are marked as pending deletion, and they are older than the value specified in the scheduledTasks.FileActuallyDelete.args.softDeleteMinimumPendingTimeInMins property.
- scheduledTasks.MetricsDailyCollection.enabled
Specifies whether to collect metrics. The values are true and false. The default value is true.
The collection task runs near midnight in the server time zone so that all of the date-based metrics include data from that day. Metrics entries require only a few KB per day, therefore the performance impact is low.
For information about MetricService commands used to access metrics, see Wikis administrative commands.
- scheduledTasks.MetricsDailyCollection.interval
Frequency with which the daily metrics collection task runs. Only the default value is supported: the task can run only at midnight in the server time zone. Do not edit this property.
- scheduledTasks.TagUpdateFrequency.enabled
Specifies whether to run the tag frequency update task. This task finds the most frequently used tags in public wikis and updates the public wikis tag cloud.
- scheduledTasks.TagUpdateFrequency.interval
Frequency with which the tag frequency update task runs. This task finds the most frequently used tags in public wikis and updates the public wikis tag cloud and the autocomplete lists.
This property accepts a chronological expression.
For information about formatting an interval attribute, see Scheduling tasks.
Updating tag frequency data is resource-intensive, so you might want to adjust this value as your deployment grows. In small deployments, 60 minutes is appropriate. In large deployments, once per day is sufficient.
This property affects public tags only. Tag clouds for individual wikis are updated in real time and are not affected.
- search.seedlist.maximumIncrementalQuerySpanInDays
Specifies the number of days that deletion records are saved before they are eligible to be deleted by the SearchClearDeletionHistory task.
The value must be greater than or equal to 1.
Wikis keeps records of deleted files. These records are eligible to be deleted by the SearchClearDeletionHistory task after the number of days that are specified in this property. The incremental search crawler needs these deletion records to update the search index. If the records are deleted before the incremental crawler reads them, the updates are incomplete. For this reason, Wikis runs a full crawl instead of an incremental crawl. Full crawls delete the existing search index and add a new one. This process takes more time than incremental crawls.
To avoid frequent full crawls, make sure that incremental crawls occur sooner than the time it takes for a deletion record to be created and deleted.
For example, if deletion records are eligible for deletion after five days, specify that incremental crawls occur every four days.
- search.seedlist.maximumPageSize
Specifies the maximum number of items on the seedlist return page. The value must be greater than or equal to 100.
- security.inlineDownload.enabled
Enables the inline display of file attachments. Setting this property is useful when you use the Wikis API to download and display active content, such as Adobe Flash (.swf) files, in your own HTML pages.
By default, Connections passes file attachments to browsers with the Content-Disposition: attachment header.
This specification means that files are displayed as attachments. When users click the attachment, they are prompted to open or download the file. Enabling this property also prevents the embedding of files. To embed files in your own HTML page by using an embed tag, the content disposition must be inline. This property affects active content such as Adobe Flash (.swf), and HTML pages that are referenced within an iFrame.
Configure a property in the wikis-config.xml file to change the content disposition from attachment to inline. Then, set the inline parameter to true in your Wikis API download requests.
See Displaying file attachments inline.
Wikis uses the attachment disposition for security reasons. Specifically, uploaded files can potentially contain malicious code that can exploit the cross-site scripting vulnerabilities of some browsers. If you switch to inline disposition, configure an alternative download domain for greater security.
For more information, see Mitigating a cross site scripting attack.
The allowed values are true and false.
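For example, assuming the property is available as a quick config command in your deployment, you could switch to inline disposition as follows:
WikisConfigService.updateConfig("security.inlineDownload.enabled", "true")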
- security.logout.href
Logout URL for single sign-on solutions that require their own logout page.
If you are configuring Connections to work with Tivoli Access Manager, specify the following value:
/wikis/ibm_security_logout?logoutExitPage=<url>
where <url> is the Tivoli Access Manager junction URL. This URL is usually the host name of the server.
For more information, see Enabling single sign-on for Tivoli Access Manager.
You must use fully qualified domain names in the configuration file. If you use an abbreviated name, secure communication between the servers fails.
Configure MIME types for Wikis
Configure MIME types to file extensions.
To edit configuration files, use the wsadmin client.
You can configure Wikis to assign specific MIME types to files with specific extensions. A MIME type informs operating systems about what applications to use to open the file types, and what applications to display in dialog boxes. MIME types make it easier for users to know at a glance what type of data a file contains. Some applications do not download files that do not have a MIME type that they support.
This configuration procedure applies to files attached to Wikis pages through the web user interface. The configuration is ignored if a third-party application assigns MIME types to extensions by using the API.
For information about assigning MIME types through the API, see the Wikis API topic.
- Start the Wikis Jython script interpreter.
- Access the Wikis configuration files:
execfile("wikisAdmin.py")If you are asked to select a server, you can select any server.
- Check out the Wikis configuration:
WikisConfigService.checkOutConfig("working_directory", "cell_name")
where:
- working_directory is the temporary working directory to which the configuration XML and XSD files are copied.
The files are kept in this working directory while you make changes to them.
AIX, IBM i, and Linux only: The directory must grant write permissions or the command will not run successfully.
- cell_name is the name of the WAS cell hosting the Connections application.
This argument is required. To determine the cell name, run the following wsadmin command:
print AdminControl.getCell()
For example:
WikisConfigService.checkOutConfig("/opt/my_temp_dir", "CommCell01")
- Go to the working directory that you specified when you checked out the configuration and open the mime-wikis-config.xml file. The content of the file is similar to the following excerpt:
<mapping mimeType="..." mediaType="..."> <extension></extension> <extension></extension> .... </mapping>
- In the mimeType attribute, specify a mime type in standard format, for example text/plain. Each value must be unique or an error is returned when you start the system. Go to the Internet Assigned Numbers Authority (IANA) website for a list of MIME types.
The mediaType attribute is not supported in this release.
- In each extension element, specify the extensions to map to the MIME type. Each value must be unique or an error is returned when you start the system.
- Apply the changes.
For more information, see Applying Wikis property changes.
Example
<mapping mimeType="text/plain" mediaType=""> <extension>.txt</extension> .... </mapping>
Apply Wikis property changes
After you edit the Wikis configuration file, check the file in, update the version stamp property, and restart the servers.
For information about the properties that you can edit, see Wikis configuration properties. To apply Wikis property changes...
- Check in the changed configuration property keys by entering the following command in the wsadmin client:
WikisConfigService.checkInConfig()
- Update the value of the version stamp configuration property in LotusConnections-config.xml. Updating this value forces users' browsers to pick up the change.
- To exit the wsadmin client, type exit at the prompt.
- Restart the application servers that host the Wikis application.
Run Wikis administrative commands
Use administrative commands to perform tasks that manipulate Wikis content.
To run administrative commands, use the wsadmin client.
For more information, see Start wsadmin.
Administrative commands interact with the Wikis application and its resources through scripts. These scripts use the AdminControl object available in the IBM WebSphere Application Server wsadmin tool to interact with the Wikis server.
If an error occurs when you are executing the commands, you can examine the SystemOut.log file to determine what went wrong.
To run Wikis administrative commands...
- Start the Wikis Jython script interpreter
execfile("wikisAdmin.py")See Wikis administrative commands for a complete list of administrative commands for the Wikis application.
Wikis administrative commands
Use these commands to run administrative tasks for the Wikis application. You do not have to check out the configuration file or restart the application or server.
The following sections define administrative commands used when you work with Wikis.
- WikisConfigService
- WikisMemberService
- WikisLibraryService
- WikisDataIntegrityService
- WikisPrintService
- WikisScheduler
- WikisPolicyService
- WikisMetricsService
- WikisUtilService
Many commands require an ID as an input parameter, including library IDs, user IDs, policy IDs, and file IDs. You can find an ID by using special commands.
For example, when you run the WikisMemberService.getByEmail(string email) command, where you provide a user's email address as input, the output includes the user's ID. You can also find IDs by using feeds.
For more information, see the Connections API documentation.
WikisConfigService
- WikisConfigService.checkOutConfig("working_directory", "cell_name")
Checks the Wikis configuration file out to a temporary directory. Run from the wsadmin client.
- <working_directory>
- Temporary working directory to which the configuration files are copied. The files are kept in this working directory while you modify them.
- <cell_name>
- Name of WAS cell that hosts the Connections application. If you do not know the cell name, type the following command in the wsadmin client:
print AdminControl.getCell()
For example:
- AIX or Linux:
WikisConfigService.checkOutConfig("/opt/my_temp_dir", "CommCell01")
- IBM i:
WikisConfigService.checkOutConfig("/temp","Cell01")
- Windows:
WikisConfigService.checkOutConfig("c:/temp","Cell01")
- WikisConfigService.showConfig()
- Display the current configuration settings. You must check out the configuration file with WikisConfigService.checkOutConfig() before you can run WikisConfigService.showConfig().
- WikisConfigService.updateConfig("quick_config_property", "new_value")
- Updates configuration properties.
- quick_config_property
Property in the wikis-config.xml configuration file that is expressed as a quick config command.
For example, the quick config value for the following property:
<security> <logout href="/wikis/ibm_security_logout" /> </security>
is:
security.logout.href
For information about configuration properties and descriptions, see Wikis configuration properties.
- new_value
- The new value for the property. Property values can be restricted, for example, to either true or false.
For example, to set the scheduledTasks.MetricsDailyCollection.enabled property to false:
WikisConfigService.updateConfig("scheduledTasks.MetricsDailyCollection.enabled", "false")
- WikisConfigService.checkInConfig()
Checks in Wikis configuration files. Run from the wsadmin client.
WikisMemberService
- WikisMemberService.getById(string id)
Returns information about a user that is specified by a user ID. The command searches the Wikis database only, so it returns only those users who logged in at least once.
Parameters:
- id
- The user ID in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000. The following information is returned:
- id: The user ID in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000.
- name: The user's name in the database as of the date in directoryLastUpdate.
- email: The user's email address.
- isOrphan: Returns true if the user is in the database, but not the directory.
- createDate: The date that the user was added to the database.
- lastVisit: The date of the user's last login.
- directoryLastUpdate: The last time the user's data was synchronized from the directory.
- directoryGroupLastUpdate: The last time this user's group membership was synchronized from the directory.
- communityLastUpdate: The last time this user's Community membership was synchronized.
For example:
WikisMemberService.getById("2d93497d-065a-4022ae25-a4b52598d11a")- WikisMemberService.getByExtId(string externalId)
Returns information about a user that is specified by an external directory ID. The command searches the Wikis database only, so it returns only those users who have logged in at least once.
Parameters:
- externalId
- A string value that matches the user's external directory ID.
This value can be any parameter in the user directory that you configured as the directory ID. The following user information is returned:
- id: The user ID in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000.
- name: The user's name in the database as of the date in directoryLastUpdate.
- email: The user's email address.
- isOrphan: Returns true if the user is in the database, but not the directory.
- createDate: The date that the user was added to the database.
- lastVisit: The date of the user's last login.
- directoryLastUpdate: The last time the user's data was synchronized from the directory.
- directoryGroupLastUpdate: The last time this user's group membership was synchronized from the directory.
- communityLastUpdate: The last time this user's Community membership was synchronized.
For example:
WikisMemberService.getByExtId("2d93497d-065a-4022ae25-a4b52598d11a")- WikisMemberService.getByEmail(string email)
Returns information about a user that is specified by an email address. The command searches the Wikis database only, so it returns only those users who logged in at least once.
Parameters:
- email
- The email address for the user. The following user information is returned:
- id: The user ID in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000.
- name: The user's name in the database as of the date in directoryLastUpdate.
- email: The user's email address.
- isOrphan: Returns true if the user is in the database, but not the directory.
- createDate: The date that the user was added to the database.
- lastVisit: The date of the user's last login.
- directoryLastUpdate: The last time the user's data was synchronized from the directory.
- directoryGroupLastUpdate: The last time this user's group membership was synchronized from the directory.
- communityLastUpdate: The last time this user's Community membership was synchronized.
For example:
WikisMemberService.getByEmail("john_doe@company.com")
- WikisMemberService.syncAllMembersByExtId( {"updateOnEmailLoginMatch": ["true" | "false"] } )
For more information, see Synchronize user data using administrative commands.
- WikisMemberService.syncMemberByExtId("currentExternalId"[, {"newExtId" : "id-string" [, "allowExtIdSwap" : ["true" | "false"] ] } ] )
For more information, see Synchronize user data using administrative commands.
- WikisMemberService.inactivateMemberByEmail("email")
For more information, see Synchronize user data using administrative commands.
- WikisMemberService.inactivateMemberByExtId("externalID")
For more information, see Synchronize user data using administrative commands.
- WikisMemberService.getMemberExtIdByEmail("email")
For more information, see Synchronize user data using administrative commands.
- WikisMemberService.getMemberExtIdByLogin("login")
For more information, see Synchronize user data using administrative commands.
- WikisMemberService.syncBatchMemberExtIdsByEmail("emailFile" [, {"allowInactivate" : ["true" | "false"] } ] )
For more information, see Synchronize user data using administrative commands.
- WikisMemberService.syncBatchMemberExtIdsByLogin("loginFile" [, {"allowInactivate" : ["true" | "false"] } ] )
For more information, see Synchronize user data using administrative commands.
- WikisMemberService.syncMemberExtIdByEmail("email" [, { "allowInactivate" : ["true" | "false"] } ])
For more information, see Synchronize user data using administrative commands.
- WikisMemberService.syncMemberExtIdByLogin("name" [, {"allowInactivate": ["true" | "false"] } ])
For more information, see Synchronize user data using administrative commands.
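Because WikisMemberService.getByEmail returns a Map object (see the WikisPrintService.saveToFile command later in this section for the commands that return Map and List<Map> objects), you can hold the result in a variable and read individual fields from it in the wsadmin Jython interpreter. The following sketch reuses the example email address from above; treat it as a minimal sketch rather than a documented procedure.
execfile("wikisAdmin.py")
# Look up the user record in the Wikis database
member = WikisMemberService.getByEmail("john_doe@company.com")
# The result is a Map of key/value pairs; print the fields of interest
print member.get("id")
print member.get("isOrphan")
print member.get("lastVisit")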
WikisLibraryService
- WikisLibraryService.getById(string libraryId)
Return information about a single library that is specified by an ID. A library comprises the pages, attachments, and other data that make up a wiki. It includes all wiki page versions but does not include metadata such as comments.
Parameters:
- libraryId
- The library ID in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000. The following information is returned:
- id: The library ID in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000.
- ownerUserId: The user ID of the library owner in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000.
- type: The type of library. The only valid value is wiki.
- label: A string of characters that are used to identify the library in a URL.
- title: The library's title.
- summary: A summary of library information.
- size: The total size of the library binary data.
- percentUsed: The percentage of the maximum allowable size that is used, according to the library's policy. Zero if not applicable.
- maximumSize: The maximum size (in bytes) the library's policy allows. Zero for unlimited.
- policyId: The ID of the policy that sets a maximum limit (in bytes) on the library's size, in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000.
- lastUpdate: The last time a significant user-driven update occurred to the metadata.
- createDate: The library's creation date.
- externalInstanceId: The widget ID if the library is owned by a community.
- externalContainerId: The community ID if the library is owned by a community.
- themeName: The theme that the community owner selected in communities. Returned for community libraries only.
- orphan: The value is true if the library owner is no longer active. Returned for personal libraries only.
For example:
WikisLibraryService.getById("2d93497d-065a-4022ae25-a4b52598d11a")
- WikisLibraryService.delete(string libraryId)
Delete the library that is specified by the library ID, including all associated content.
A library is the pages, attachments, and other data that make up a wiki. It includes all wiki page versions, but does not include metadata such as comments.
Parameters:
- libraryID
- The library ID in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000.
For example:
WikisLibraryService.delete("f0d01111-9b21-4dd8-b8be-8825631cb84b")
- WikisLibraryService.deleteBatch(string filePath)
Deletes libraries that are specified in a text file. The file must contain a list with a single library ID per line in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000. You must create the file and save it in a directory local to the server where you are running the wsadmin processor.
A library is the pages, attachments, and other data that make up a wiki. It includes all wiki page versions, but does not include metadata such as comments.
Parameters:
- filePath
- The full path to the text file, as a string.
For example:
WikisLibraryService.deleteBatch("C:/connections/delete_libraries.txt")
- WikisLibraryService.assignPolicy(string libraryId, string policyId)
Assigns a policy to a library. A library is the pages, attachments, and other data that make up a wiki. It includes all wiki page versions, but does not include metadata such as comments. A policy sets a maximum size for a wiki.
No message is printed if the task succeeds.
Parameters:
- libraryId
- The library ID in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000.
- policyId
- The policy ID in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000.
For example:
WikisLibraryService.assignPolicy("f0d01111-9b21-4dd8-b8be-8825631cb84b", "2d93497d-065a-4022ae25-a4b52598d11a")
- WikisLibraryService.assignPolicyBatch(string filePath)
Assigns policies to libraries as specified in a text file.
The file must contain a list of library-policy ID pairs, one pair per line, values separated by a comma.
For example: libraryId, policyId. Extra white space is ignored. You must create this text file and save it in a directory local to the server where you are running the wsadmin processor.
A library is the pages, attachments, and other data that make up a wiki. It includes all wiki page versions, but does not include metadata such as comments.
Parameters:
- filePath
- The full path to the text file, as a string.
For example:
WikisLibraryService.assignPolicyBatch("C:/connections/assign_policies.txt")
- WikisLibraryService.browseWiki(string sortOption, string sortAscending, int pageNumber, int itemsPerPage)
Return a list of all wikis, with information about each wiki. The list includes wikis owned by communities, and wikis whose owners were removed from the user directory.
Parameters:
- sortOption
- A string value that specifies how to sort the list. The default value is title, but you can use lastUpdate, size, createDate, or quotaPercentage.
- sortAscending
- A string value that specifies whether to sort the list in ascending alphabetical order. This value depends on the sortOption value.
If sortOption is title, then this value is true; if sortOption is any other value, then this value is false.
- pageNumber
- The number of the page to display.
For example, if you specify in the itemsPerPage parameter that each page has 50 items, page 1 contains items 1-50.
- itemsPerPage
- The maximum number of wikis to list per page. The default value is 20. The following information is returned:
- id: The library ID in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000.
- ownerUserId: The user ID of the library owner in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000.
- type: The type of library. The only valid value is wiki.
- label: A string of characters that are used to identify the library in a URL.
- title: The library's title.
- summary: A summary of library information.
- size: The total size of the library binary data.
- percentUsed: The percentage of the maximum allowable size that is used, according to the library's policy. Zero if not applicable.
- maximumSize: The maximum size (in bytes) the library's policy allows. Zero for unlimited.
- policyId: The ID of the policy that sets a maximum limit (in bytes) on the library's size, in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000.
- lastUpdate: The last time a significant user-driven update occurred to the metadata.
- createDate: The library's creation date.
- externalInstanceId: The widget ID if the library is owned by a community.
- externalContainerId: The community ID if the library is owned by a community.
- themeName: The theme the community owner selected in communities. Returned for community libraries only.
- orphan: The value is true if the library owner is no longer active. Returned for personal libraries only.
For example:
WikisLibraryService.browseWiki("title", "true", 1, 25)
- WikisLibraryService.getWikiCount()
Return the total number of wikis.
- WikisLibraryService.exportSyncedResourceInfo (string fullpathForOutput, string type)
Return a report of all of the communities that the Wikis application interacted with. After a system crash you can compare the report to the latest metadata in the Communities database to help synchronize and update any missing data.
For more information, see Comparing remote application data with the Communities database.
When you run the command from the deployment manager, the path and file are created on the server that hosts the Wikis application. In clusters where multiple nodes host Wikis, select a server to connect to and run the command on; the path and file are created on that server.
Parameters:
- fullPathforOutput
- The report file name and full path of the report, as a string in quotation marks. The report is an XML file. Use forward slashes ("/") in the path, even on Microsoft Windows computers.
- type
- This value is always the string "community", including the quotation marks. An error is returned if this value is anything other than "community".
For example:
WikisLibraryService.exportSyncedResourceInfo("c:/connections/sync/community_output.xml", "community")
- WikisLibraryService.getByExternalContainerId(string community_id)
- Return information about the community libraries that are owned by the specified community.
Parameters:
- community_id
- The community id in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000.
For example:
WikisLibraryService.getByExternalContainerId("003456bc-078d-e990-0450-x12345678900")
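Because WikisLibraryService.browseWiki returns results one page at a time, you can combine it with WikisLibraryService.getWikiCount to walk through every wiki in the deployment. The following sketch assumes that getWikiCount returns an integer that can be used in Jython arithmetic.
execfile("wikisAdmin.py")
itemsPerPage = 50
total = WikisLibraryService.getWikiCount()
# Compute how many pages are needed to cover all wikis
pages = (total + itemsPerPage - 1) / itemsPerPage
for page in range(1, pages + 1):
    # Each call returns a List<Map> of library information for that page
    print WikisLibraryService.browseWiki("title", "true", page, itemsPerPage)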
WikisDataIntegrityService
- WikisDataIntegrityService.checkFiles(string extraFileDirectory)
Checks the integrity of the binary files in the file system extra files directory against the metadata in the database. The results are output to log files in a specified directory. The database is used as the primary source. During a backup, the file system is assumed to have extra data, if there is any.
The task logs a message for every extra file that is found or for every missing file. Missing files are errors that must be resolved by finding the files or restoring a backup. The application cannot start in a consistent state until you resolve these errors.
This information might be useful before you restore database and file system images to see how closely they match in a test environment.
For more information, see Checking Wikis data integrity.
Parameters:
- extraFileDirectory
A directory path as a string. This path is the location where you want to store files that are not found in the database. If the directory does not exist, the command creates it. If the directory cannot be created, or read or written to, an error is returned.
For example:
WikisDataIntegrityService.checkFiles("C:/wikis_integrity")
WikisPrintService
- WikisPrintService.saveToFile(string object, string filePath, string append)
Prints information that is returned by other commands to a file.
Parameters:
- object
A command with parameters that returns a Map or List<Map> Java object. You can use any of the following commands:
- WikisMemberService.getById (returns a Map)
- WikisMemberService.getByExtId (returns a Map)
- WikisMemberService.getByEmail (returns a Map)
- WikisLibraryService.getById (returns a Map)
- WikisLibraryService.browseWiki (returns a List<Map>)
- WikisPolicyService.getById (returns a Map)
- WikisPolicyService.browse (returns a List<Map>)
- WikisMetricsService.browse (returns a List<Map>)
- filePath
- A path to a file in which to save the object data.
The data is saved in comma-separated value (.csv) format.
- append
- A string whose default value is "true". Change it to "false" to have the command overwrite the existing file instead of appending the data to it.
Example:
WikisPrintService.saveToFile(WikisLibraryService.browseWiki("title","true", 1, 20), "/opt/wsadmin/LibraryMap.txt")
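The append parameter is optional. The following variant of the same export sets append to "false" so that the output file is overwritten on each run; the file path is a placeholder.
WikisPrintService.saveToFile(WikisLibraryService.browseWiki("title", "true", 1, 20), "/opt/wsadmin/LibraryMap.csv", "false")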
WikisScheduler
- WikisScheduler.pauseSchedulingTask(string taskName)
Suspends scheduling of a task. This command has no effect on tasks that are currently running. Returns 1 to indicate that the task is paused. Paused tasks remain paused until you explicitly resume them, even if the server is stopped and restarted.
Parameters:
- taskName
- Any one of these task names, as a string value:
- DirectoryUserSynch
- DirectoryGroupSynch
- FileActuallyDelete
- SearchClearDeletionHistory
- MetricsDailyCollection
- TagUpdateFrequency
For example:
WikisScheduler.pauseSchedulingTask("DirectoryGroupSynch")
- WikisScheduler.resumeSchedulingTask(string taskName)
Resumes scheduling of a paused task. Returns 1 to indicate that the task is resumed.
Parameters:
- taskName
- Any one of these task names, as a string value:
- DirectoryUserSynch
- DirectoryGroupSynch
- FileActuallyDelete
- SearchClearDeletionHistory
- MetricsDailyCollection
- TagUpdateFrequency
For example:
WikisScheduler.resumeSchedulingTask("DirectoryGroupSynch")
- WikisScheduler.forceTaskExecution(string taskName, string executeSynchronously)
Runs a task. Returns 1 to indicate that the task has run.
Property settings in the wikis-config.xml configuration properties file specify whether tasks are enabled to run automatically, and how often. Use this command to run tasks manually, for example if you disabled a task but want to run it occasionally.
Parameters:
- taskName
- Any one of these task names, as a string value:
- DirectoryUserSynch
- DirectoryGroupSynch
- FileActuallyDelete
- SearchClearDeletionHistory
- MetricsDailyCollection
- TagUpdateFrequency
- executeSynchronously
- Takes the string values true or false. Specifying this value is not required; the default is false.
If this value is false, the task runs asynchronously: if the task name is valid, the command returns immediately and execution continues in the background. If this value is true, the command does not return until the task completes.
For example:
WikisScheduler.forceTaskExecution("DirectoryGroupSynch")
- WikisScheduler.getTaskDetails(string taskName)
Displays the status of a task and returns a detailed status message.
Parameters:
- taskName
- Any one of these task names, as a string value:
- DirectoryUserSynch
- DirectoryGroupSynch
- FileActuallyDelete
- SearchClearDeletionHistory
- MetricsDailyCollection
- TagUpdateFrequency
For example:
WikisScheduler.getTaskDetails("DirectoryGroupSynch")
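The scheduler commands are often combined: pause a task before maintenance, run it once manually, and then resume its schedule. The following sketch shows that sequence with one of the documented task names.
execfile("wikisAdmin.py")
# Suspend the scheduled task and confirm its status
WikisScheduler.pauseSchedulingTask("MetricsDailyCollection")
print WikisScheduler.getTaskDetails("MetricsDailyCollection")
# Run the task once, waiting for it to finish before the command returns
WikisScheduler.forceTaskExecution("MetricsDailyCollection", "true")
# Put the task back on its normal schedule
WikisScheduler.resumeSchedulingTask("MetricsDailyCollection")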
WikisPolicyService
- WikisPolicyService.add(string title, long maximumSize)
Creates a policy with a specified title and maximum size. Policies set a maximum size limit on libraries. A library is the pages, attachments, and other data that make up a wiki. It includes all wiki page versions, but does not include metadata such as comments.
When a policy is created, an ID is created for it and returned to you.
The ID is in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000. You must provide policy IDs as parameters when you run other WikisPolicyService commands.
Policies can be applied to libraries by using the WikisLibraryService.assignPolicy or WikisLibraryService.assignPolicyBatch commands.
Parameters:
- title
- The policy title. A required value.
- maximumSize
The maximum size that is allowed, in bytes. Must be zero or greater. A value of zero means the size is unlimited.
Numbers 2 GB or greater are long literals, and you must add an "L" to the end of the number, for example a policy of 2 GB must be 2147483648L.
For example:
WikisPolicyService.add("My Policy", 2147483648L)
- WikisPolicyService.edit(string policyId, string title, long maximumSize)
Edits the title and maximum size of a policy with a specified ID. If the ID is for a default policy, the title is not modified. Policies set a maximum size limit on libraries. A library is the pages, attachments, and other data that make up a wiki. It includes all wiki page versions, but does not include metadata such as comments.
Parameters:
- policyID
- The policy ID in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000.
- title
- The policy title. A required value.
- maximumSize
The maximum size that is allowed, in bytes. Must be zero or greater. A value of zero means the size is unlimited.
Numbers 2 GB or greater are long literals, and you must add an "L" to the end of the number, for example a policy of 2 GB must be 2147483648L.
For example:
WikisPolicyService.edit("2d93497d-065a-4022ae25-a4b52598d11a", "My Policy", 2147483648L)
- WikisPolicyService.getById(string id)
Return information for a single policy that is specified by an ID. Policies set a maximum size limit on libraries.
A library is the pages, attachments, and other data that make up a wiki. It includes all wiki page versions, but does not include metadata such as comments.
Parameters:
- id
- The policy ID in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000. The following information is returned:
- id: the ID
- title: the policy title
- maximumSize: the maximum size (in bytes) the library can be, or 0 for unlimited
For example:
WikisPolicyService.getById("2d93497d-065a-4022ae25-a4b52598d11a")
- WikisPolicyService.browse(string sortOption, string sortAscending, int pageNumber, int itemsPerPage)
Return a list of policies with ID, title, and maximum size information, as described for the WikisPolicyService.getById(id) command. Policies set a maximum size limit on libraries.
A library is the pages, attachments, and other data that make up a wiki. It includes all wiki page versions, but does not include metadata such as comments.
Parameters:
- sortOption
- A string value that specifies how to sort the list. The default value is title, but you can also use maximumSize.
- sortAscending
- A string value that specifies whether the list sorts in ascending alphabetical order. This value depends on sortOption.
If sortOption is title, then this value is true; if sortOption is any other value, then this value is false.
- pageNumber
- The number of the page to return.
For example, if the itemsPerPage value is 40, and pageNumber value is 2, the command returns items 41 - 80 (page 2) instead of 1 to 40 (page 1).
- itemsPerPage
- The maximum number of policies to list per page. The default value is 20.
For example:
WikisPolicyService.browse("title", "true", 1, 25)
- WikisPolicyService.getCount()
Return the number of policies. Policies set a maximum size limit on libraries. A library is the pages, attachments, and other data that make up a wiki. It includes all wiki page versions, but does not include metadata such as comments.
- WikisPolicyService.editDefault(long maximumSize)
Set the maximum size, in bytes, for the personal wiki library default policy. Personal wikis are owned by a person.
Parameters:
- maximumSize
A number that represents the maximum size that is allowed, in bytes, for wikis that the default policy is assigned to.
Numbers 2 GB or greater are long literals, and you must add an "L" to the end of the number, for example a policy of 2 GB must be 2147483648L.
For example:
WikisPolicyService.editDefault(2147483648L)
- WikisPolicyService.editCommunityDefault(long maximumSize)
Set the maximum size, in bytes, for the community wiki library default policy. Community wikis are owned by a community.
Parameters:
- maximumSize
A number that represents the maximum size that is allowed, in bytes, for wikis that the default policy is assigned to.
Numbers 2 GB or greater are long literals, and you must add an "L" to the end of the number, for example a policy of 2 GB must be 2147483648L.
For example:
WikisPolicyService.editCommunityDefault(2147483648L)
- WikisPolicyService.delete(string id)
Delete the policy that is specified by the ID. You cannot delete default policies or policies in use by any libraries.
- id
- The policy ID in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000.
For example:
WikisPolicyService.delete("f0d01111-9b21-4dd8-b8be-8825631cb84b")
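A common sequence is to create a policy and then assign it to a library in the same session. The following sketch uses a hypothetical 5 GB policy (5 x 1024 x 1024 x 1024 = 5368709120 bytes, written as the long literal 5368709120L) and a placeholder library ID, and it assumes that WikisPolicyService.add returns the new policy ID in a form that can be passed directly to WikisLibraryService.assignPolicy.
execfile("wikisAdmin.py")
# Create a 5 GB policy; the new policy ID is returned
policyId = WikisPolicyService.add("5 GB wiki policy", 5368709120L)
# Assign the policy to an existing library (placeholder library ID)
WikisLibraryService.assignPolicy("f0d01111-9b21-4dd8-b8be-8825631cb84b", policyId)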
WikisMetricsService
- WikisMetricsService.browse(string startDate, string endDate, string filePathWithFilterKeys)
Returns metrics about wikis that are stored in the database. The same metrics are provided for each day in a specified period.
If you do not specify dates, then the command uses the first available day with data for startDate, and the last available day with data for endDate.
See the topic Wikis metrics for metrics and their descriptions.
Parameters:
- startDate
- The start date for the period, in YYYY-MM-DD format and enclosed in quotation marks, for example "2009-01-01". This date is included in the returned data; a start date of "2009-01-01" includes metrics from January 1, 2009.
- endDate
- The end date for the period, in YYYY-MM-DD format and enclosed in quotation marks, for example "2009-01-10". This date is included in the returned data; an end date of "2009-01-10" includes metrics from January 10, 2009.
- filePathWithFilterKeys
- The full path to a text file in which each line contains a statistical key. If you specify a file, only metrics that are listed in the file are returned. If you do not specify a file, all data is returned.
For example:
WikisMetricsService.browse("2009-01-01", "2009-01-10", "C:/connections/wikis/metrics.txt")
- WikisMetricsService.saveToFile(string filePath, string startDate, string endDate, string filePathWithFilterKeys, string append)
Returns metrics about wikis and exports them to a local file. The same metrics are provided for each day in a specified period.
If you do not specify dates, then the command uses the first available day with data for startDate, and the last available day with data for endDate.
See the topic Wikis metrics for metrics and their descriptions.
Parameters:
- filePath
- Path to a file in which to export the metrics. Metrics are exported in comma-separated value (CSV) format. If you specify a file name with a .csv extension, it is possible to open it as a spreadsheet.
See Importing statistics and metrics into a spreadsheet.
- startDate
- The start date for the period, in YYYY-MM-DD format and enclosed in quotation marks, for example "2009-01-01". This date is included in the returned data; a start date of "2009-01-01" includes metrics from January 1, 2009.
- endDate
- The end date for the period, in YYYY-MM-DD format and enclosed in quotation marks, for example "2009-01-10". This date is included in the returned data; an end date of "2009-01-10" includes metrics from January 10, 2009.
- filePathWithFilterKeys
- The full path to a text file in which each line contains a metric key. If you specify a file, only metrics that are listed in the file are returned. If you do not specify a file, all data is returned.
For example, if the file lists these three keys, then only these metrics are returned:
wikis.metric.user.count
wikis.metric.user.created.today.count
wikis.metric.user.login.count
- append
- A string whose default value is "true". Change it to "false" to have the command overwrite the existing file instead of appending the data to it.
For example:
WikisMetricsService.saveToFile("C:/connections/wikis/metrics.csv", "2009-01-01", "2009-01-10", "C:/connections/wikis/metric_keys.txt", "false")
- WikisMetricsService.getAvailableRange()
Return a string array in which the first element is the first day for which wiki library metrics data is available and the second element is the last day for which data is available. Typically, the current day's data is not available until 12:01 A.M. the following day.
If metrics collection was disabled or did not run because of an issue, there might be gaps in the available data.
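Because the available range is returned as a two-element array, you can feed it straight back into WikisMetricsService.browse to retrieve everything that is stored. A minimal sketch, which assumes the array can be indexed from Jython and reuses the example filter-key file path from above:
execfile("wikisAdmin.py")
dates = WikisMetricsService.getAvailableRange()
# dates[0] is the first day with data, dates[1] is the last day with data
print WikisMetricsService.browse(dates[0], dates[1], "C:/connections/wikis/metric_keys.txt")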
WikisUtilService
- WikisUtilService.filterListByString(List listOfMaps, string filterKey, string regexstringCriteria)
Returns maps from a specified list that have a specified key that matches a specified regular expression. Use this command to filter List<Map> Java objects that are returned by any of the browse commands, such as WikisLibraryService.browseWiki.
A map is a list of key/value pairs, for example the WikisLibraryService.browseWiki command returns a list of libraries. Each library in the list is a map with a set of keys, and each key is paired with a value. Every library has the same set of keys, but unique values. Values contain information about the library, such as its title and creation date.
Parameters:
- listOfMaps
- A list of maps, for example the result of WikisLibraryService.browseWiki(parameters).
- filterKey
- A key in each map in the list, whose value is compared against the filter criteria.
- regexstringCriteria
- A regular expression that is represented as a string to match against the filterKey value.
For example, "[0-9]+" matches one or more digits in a row.
The command returns the maps from listOfMaps whose filterKey value matches the regexstringCriteria expression.
For example, this command shows only the returned maps whose title values match the expression "Development*":
WikisUtilService.filterListByString(WikisLibraryService.browseWiki("title", "true", 1, 25), "title", "Development*")
- WikisUtilService.filterListByDate(List listOfMaps, string filterKey, expression)
Returns maps from a specified list that have a specified key with a specified date. Use this command to filter List<Map> Java objects that are returned by any of the browse commands, such as WikisLibraryService.browseWiki.
A map is a list of key/value pairs, for example the WikisLibraryService.browseWiki command returns a list of libraries. Each library is a map with a set of keys, and each key is paired with a value. Every library has the same set of keys, but unique values. Values contain information about the library, such as its title and creation date.
Parameters:
- listOfMaps
- A list of maps, for example the result of WikisLibraryService.browseWiki(parameters).
- filterKey
- A key in each map in the list, whose value is compared against the filter criteria.
- expression
- A string of the form <operator> <date>, where <date> is in yyyy-MM-dd format and <operator> is one of the following operators: > >= == <= <
The command returns the maps from listOfMaps whose filterKey value satisfies the expression.
For example, this command shows only the returned maps whose creation date is on or later than January 1, 2010:
WikisUtilService.filterListByDate(WikisLibraryService.browseWiki("title", "true", 1, 25), "createDate", ">=2010-01-01")
- WikisUtilService.filterListByNumber(List listOfMaps, string filterKey, expression)
Returns maps from a specified list that have a specified key with a specified number. Use this command to filter List<Map> Java objects that are returned by any of the browse commands, such as WikisLibraryService.browseWiki.
A map is a list of key/value pairs, for example the WikisLibraryService.browseWiki command returns a list of libraries. Each library is a map with a set of keys, and each key is paired with a value. Every library has the same set of keys, but unique values. Values contain information about the library, such as its title and creation date.
Parameters:
- listOfMaps
- A list of maps, for example the result of WikisLibraryService.browseWiki(parameters).
- filterKey
- A key in each map in the list, whose value is compared against the filter criteria.
- expression
- A string of the form <operator> <int>, where <int> is an integer and <operator> is one of the following operators: > >= == <= <
The command returns the maps from listOfMaps whose filterKey value satisfies the expression.
For example, this command shows only the returned maps whose percentUsed value (which reflects the percent of the library's available space that is used) is 20:
WikisUtilService.filterListByNumber(WikisLibraryService.browseWiki("title", "true", 1, 25), "percentUsed", "==20")
- WikisUtilService.getFileById(string fileID)
Returns the file path location of the wiki page file attachment that is identified by the provided file ID. A path is returned even if the file is not in use.
Use this command to find the location of any file attachment that is stored in the shared file directory. This command can be useful when you want to restore backup versions of data.
For more information, see Back up Files data.
- fileID
- The ID of a file in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000.
For example:
WikisUtilService.getFileById("2d93497d-065a-4022ae25-a4b52598d11a")
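The filter commands can be chained with the other services described earlier in this section. The following sketch exports only the libraries created since the start of 2010 to a CSV file; it assumes that WikisPrintService.saveToFile accepts the already-returned List<Map> object held in a variable, and the output path is a placeholder.
execfile("wikisAdmin.py")
# Filter the browse results by creation date, then write the filtered list to a file
recent = WikisUtilService.filterListByDate(WikisLibraryService.browseWiki("title", "true", 1, 25), "createDate", ">=2010-01-01")
WikisPrintService.saveToFile(recent, "/opt/wsadmin/recent_wikis.csv", "false")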
Back up Wikis data
Back up the data in your wikis.
To run administrative commands, use the wsadmin client.
For more information, see Start wsadmin.
Wikis stores data in a database and on the file system. Metadata is stored in the database and binary files are stored in a data directory on the file system. You specified this directory during the installation of Connections. You can find the path to the directory in the file.storage.rootDirectory element of the wikis-config.xml file. The file.storage.rootDirectory element contains either the path itself or a WebSphere Application Server variable whose value is the path.
This storage architecture means you must maintain consistency between the database and file system during backups.
The simplest way to maintain consistency is to run offline backups: make the application inaccessible, then back up both locations. During an online backup, users can continue to add and delete content, so take the following precautions.
Back up the database before you back up the files on the file system, because the database enforces transactional integrity. If you back up the file system first, files that are added after the file system backup starts but before the database backup completes will be missing from the file system on restoration. Backing up the database first ensures that every file referenced in the database backup is also present in the file system backup.
File data is stored in subdirectories under the storage_root_directory/files/files directory. Each file is stored in a subdirectory whose name is generated from a UUID. Part of the UUID is used to create a directory with a number 0 - 127. Another part of the UUID is used to create a subdirectory with another number 0 - 127. The UUID itself is in that directory.
For example:
storage_root_directory/files/18/113/<file_UUID>
Files are written one time only so that their identities are obvious if a file is missing during a restore.
You must prevent any file-deletion tasks from running during an online backup. When a user deletes a file, the file is removed from the user interface and added to a queue of files to be deleted from the file system. This deletion task runs regularly to delete the first item from the queue. You can increase the time that files can remain in the queue before they are deleted. Increase this time by adjusting the value in the scheduledTasks.FileActuallyDelete.args.softDeleteMinimumPendingTimeInMins property in the wikis-config.xml file. Increasing this time interval can give you enough time to run incremental backups to ensure that your archive is complete.
For information about editing the wikis-config.xml file, see Changing configuration property values.
For information about the scheduledTasks.FileActuallyDelete.args.softDeleteMinimumPendingTimeInMins property, see Wikis configuration properties.
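As a sketch, the pending-delete window can be raised with the quick config command that is documented at the start of this section. The value of 2880 minutes (48 hours) below is only an illustrative assumption, not a recommended setting, and the working directory and cell name are placeholders.
WikisConfigService.checkOutConfig("/opt/my_temp_dir", "CommCell01")
# Keep soft-deleted files in the queue for 48 hours before they are removed (example value)
WikisConfigService.updateConfig("scheduledTasks.FileActuallyDelete.args.softDeleteMinimumPendingTimeInMins", "2880")
WikisConfigService.checkInConfig()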
To back up Wikis data, complete the following steps:
- Start the Wikis Jython script interpreter
execfile("wikisAdmin.py")
- Stop the task that deletes files from the queue with the following command:
WikisScheduler.pauseSchedulingTask("FileActuallyDelete")
- Back up the database according to your database documentation.
- Back up the file system in whatever way makes sense in your environment. For small deployments, you can archive the system.
For large deployments, use a tool like IBM Tivoli Storage Manager.
- Start the task that deletes files from the queue with the following command:
WikisScheduler.resumeSchedulingTask("FileActuallyDelete")
You can run a task that checks for inconsistencies between the database and the file system. Before you restore data, use a test environment to compare database and file system images.
For more information, see Checking Wikis data integrity.
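Put together, the wsadmin portion of an online backup simply brackets the external database and file system backups, as in this sketch; the backup steps themselves happen outside wsadmin.
execfile("wikisAdmin.py")
# 1. Pause the file-deletion task so that no binaries disappear during the backup
WikisScheduler.pauseSchedulingTask("FileActuallyDelete")
# 2. Back up the database, then the file system, with your usual tools (outside wsadmin)
# 3. Resume the deletion task after both backups are complete
WikisScheduler.resumeSchedulingTask("FileActuallyDelete")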
Check Wikis data integrity
The appropriate way to restore backed-up Wikis data is to restore versions of file system data and database data that match.
To edit configuration files, use the wsadmin client.
You can check integrity between database data and file system data by running the WikisDataIntegrityService.checkFiles command.
The WikisDataIntegrityService.checkFiles command moves files that are found on the file system, but not in the database, to a file path location that you specify. You can delete these extra files or back them up. Users cannot download files that are missing from the database.
Before you run the WikisDataIntegrityService.checkFiles command, create the path_to_extra_filesDirectory target folder. This folder is used to save unused files during the integrity check. Create the folder on the device where you run the command.
If you are running the command on Linux, IBM i, or AIX, ensure that you have the appropriate write permissions for the folder.
- Start the Wikis Jython script interpreter
execfile("wikisAdmin.py")
- Check the integrity of data in the database and file system directory with the following command:
- AIX or Linux: WikisDataIntegrityService.checkFiles("/opt/path_to_extra_wikisDirectory")
- IBM i: WikisDataIntegrityService.checkFiles("/path_to_extra_wikisDirectory")
- Windows: WikisDataIntegrityService.checkFiles("C:\path_to_extra_wikisDirectory")
Synchronize Wikis data with Communities data
Use the WikisLibraryService.exportSyncedResourceInfo command to return a report of all of the communities that Wikis interacts with. The information in this report can help you to synchronize Wikis data with the Communities database after a system crash that includes data loss.
For more information, see the Recovering from a database failure topic.
Restrict attachment file types in Wikis
Restrict the types of files that users can upload as attachments in wiki pages.
To edit configuration files, use the wsadmin client.
You can create a list of denied file extensions and prevent users from uploading files with those extensions. Or you can create a list of allowed file extensions and only allow users to upload files with those extensions.
Restricting file types affects users who upload new files or change the extensions of existing files. (Users cannot change an existing file to a denied type.) Existing documents with denied extensions are not affected.
For example, if you deny the .xls extension, users cannot upload .xls files or change existing files to have the .xls extension. But existing .xls files are not affected, and users can still upload new versions of them.
This feature is not intended as a security measure. Files are not analyzed to determine their type; only the file name is read to allow or deny the upload (with an error). The feature simply helps you restrict the types of files that are stored in your environment.
Perform the following steps to restrict file types in Wikis:
- Start the Wikis Jython script interpreter.
- Access the Wikis configuration files:
execfile("wikisAdmin.py")
If you are asked to select a server, you can select any server.
- Check out the Wikis configuration:
WikisConfigService.checkOutConfig("working_directory", "cell_name")
where:
- working_directory is the temporary working directory to which the configuration XML and XSD files are copied.
The files are kept in this working directory while you make changes to them.
AIX, IBM i, and Linux only: The directory must grant write permissions or the command will not run successfully.
- cell_name is the name of the WebSphere Application Server cell that hosts the Connections application.
This argument is required. To determine the cell name, run the following wsadmin command:
print AdminControl.getCell()
For example:
WikisConfigService.checkOutConfig("/opt/my_temp_dir", "CommCell01")
- Open wikis-config.xml.
- In the <restrictions> element in the <file> section, specify the enabled attribute as true.
- In the <restrictions> element in the <file> section, specify the mode attribute as one of the following values:
- A value of allow means that only files with the listed extensions can be uploaded.
- A value of deny means that files with the listed extensions cannot be uploaded; all other extensions are allowed.
- In the <restrictions> element, add an <extensions> element, and within the <extensions> element add one or more <extension> elements, each containing a file extension to allow or deny.
- Check in the configuration file.
You must check in the file in the same wsadmin session in which you checked it out.
For more information, see Applying Wikis property changes.
Example
<file>
  ....
  <restrictions enabled="true" mode="allow">
    <extensions>
      <extension>odt</extension>
      <extension>odp</extension>
      <extension>ods</extension>
    </extensions>
  </restrictions>
</file>
In the previous example, the .odt, .odp, and .ods IBM Lotus Symphony extensions are the only extensions that users can upload. Case is ignored, and you can use values with or without periods.
For example, odt, .odt, and ODT are all valid.
Use an empty <extension> element to allow or deny files without extensions, or with extensions that exceed the platform limit of 16 characters.
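The surrounding wsadmin session for this change follows the same checkout and checkin pattern used throughout this section. A minimal sketch, in which the working directory and cell name are placeholders and the XML edit itself is made in a text editor between the two commands:
execfile("wikisAdmin.py")
WikisConfigService.checkOutConfig("/opt/my_temp_dir", "CommCell01")
# Edit wikis-config.xml in the working directory: enable <restrictions> and add the
# <extensions> list, as shown in the example above
WikisConfigService.checkInConfig()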
Set maximum sizes on media, pages, and attachments
You can set maximum sizes for media, pages, and attachments in the wikis-config.xml properties file.
To edit configuration files, use the wsadmin client.
Many commands require an ID as an input parameter, including library IDs, user IDs, policy IDs, and file IDs. You can find an ID by using special commands.
For example, when you run the WikisMemberService.getByEmail(string email) command, where you provide a user's email address as input, the output includes the user's ID. You can also find IDs by using feeds.
For more information, see the Connections API documentation.
You can set maximum sizes on media, pages, and attachments to control the size of these individual objects.
For more information, see Setting maximum sizes on libraries.
Pages are a type of media, so the maximum setting for pages cannot be larger than the maximum setting for media. Attachments have no relationship to media or pages and can be any maximum size. However, if you allow users to attach large files, make sure that network and browser timeout settings give users enough time to download the files.
- Start the Wikis Jython script interpreter.
- Access the Wikis configuration files:
execfile("wikisAdmin.py")
If you are asked to select a server, you can select any server.
- Check out the Wikis configuration:
WikisConfigService.checkOutConfig("working_directory", "cell_name")
where:
- working_directory is the temporary working directory to which the configuration XML and XSD files are copied.
The files are kept in this working directory while you make changes to them.
AIX, IBM i, and Linux only: The directory must grant write permissions or the command will not run successfully.
- cell_name is the name of the WebSphere Application Server cell that hosts the Connections application.
This argument is required. To determine the cell name, run the following wsadmin command:
print AdminControl.getCell()
For example:
WikisConfigService.checkOutConfig("/opt/my_temp_dir", "CommCell01")
- To view the current configuration settings:
WikisConfigService.showConfig()
- To set a maximum size (in KB) for media, pages, and attachments, use the following commands:
WikisConfigService.updateConfig("file.media.maximumSizeInKb", "<number_of_kilobytes>")
WikisConfigService.updateConfig("file.page.maximumSizeInKb", "<number_of_kilobytes>")
WikisConfigService.updateConfig("file.attachment.maximumSizeInKb", "<number_of_kilobytes>")
For better performance, set the maximum size of attachments to no more than 2 GB. Files that are larger than that are likely to reach browser or server limitations.
The following limits show the default maximum size for different types of files:
- Media: 512 MB
- Pages: 1 MB
- Attachments: 75 MB
- Check in the configuration file.
You must check in the file in the same wsadmin session in which you checked it out.
For more information, see Applying Wikis property changes.
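As a worked example, the quick config values are expressed in KB, so 2 GB is 2 x 1024 x 1024 = 2097152 KB and 512 MB is 512 x 1024 = 524288 KB. The following sketch raises the attachment limit to 2 GB and sets the media limit to 512 MB; the values are illustrative, not recommendations.
WikisConfigService.updateConfig("file.attachment.maximumSizeInKb", "2097152")
WikisConfigService.updateConfig("file.media.maximumSizeInKb", "524288")
WikisConfigService.checkInConfig()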
Set maximum sizes on libraries
Use WikisLibraryService commands to set maximum sizes on libraries by assigning them a policy. A library is the pages, attachments, and other data that make up a wiki. A policy sets a maximum size for a library.
To run administrative commands, use the wsadmin client.
For more information, see Start wsadmin.
- Start the Wikis Jython script interpreter
execfile("wikisAdmin.py")
- Create a policy to specify the library size.
For more information, see Working with Wikis policies.
- Run the following commands to set maximum sizes on libraries:
- WikisLibraryService.assignPolicy(string libraryId, string policyId)
Assigns a policy to a library. A library includes the pages, attachments, and other data that make up a wiki. It includes all wiki page versions but does not include metadata such as comments. A policy sets a maximum size for a wiki.
If the task succeeds, no message is printed.
Parameters:
- libraryId
- The library ID in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000.
- policyId
- The policy ID in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000.
For example:
WikisLibraryService.assignPolicy("f0d01111-9b21-4dd8-b8be-8825631cb84b", "2d93497d-065a-4022ae25-a4b52598d11a")
- WikisLibraryService.assignPolicyBatch(string filePath)
Assigns policies that are specified in a text file. You must create this text file and save it on the server where you are running the wsadmin client. The file must contain a list of library-policy ID pairs, one pair per line, with values separated by a comma.
For example: libraryId, policyId. Extra white space is ignored.
A library is the pages, attachments, and other data that make up a wiki. It includes all wiki page versions, but does not include metadata such as comments.
Parameters:
- filePath
- The full path to the text file, as a string.
For example:
WikisLibraryService.assignPolicyBatch("C:/connections/assign_policies.txt")
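To assign one policy to many libraries at once, the batch command reads a plain text file of ID pairs. A minimal sketch of the file content and the call, reusing the placeholder IDs from the examples above:
# Contents of C:/connections/assign_policies.txt (one "libraryId, policyId" pair per line):
# f0d01111-9b21-4dd8-b8be-8825631cb84b, 2d93497d-065a-4022ae25-a4b52598d11a
execfile("wikisAdmin.py")
WikisLibraryService.assignPolicyBatch("C:/connections/assign_policies.txt")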
Work with Wikis policies
Use the WikisPolicyService commands to add, edit, count, and return information about policies. You apply policies to libraries to set a maximum size on those libraries. A library is a set of files owned by a person or community.
To run administrative commands, use the wsadmin client.
For more information, see Start wsadmin.
Many commands require an ID as an input parameter, including library IDs, user IDs, policy IDs, and file IDs. You can find an ID by using special commands.
For example, when you run the WikisMemberService.getByEmail(string email) command, where you provide a user's email address as input, the output includes the user's ID. You can also find IDs by using feeds.
For more information, see the Connections API documentation.
- Start the Wikis Jython script interpreter
execfile("wikisAdmin.py")
- Run the following commands to work with policies:
- WikisPolicyService.add(string title, long maximumSize)
Creates a policy with a specified title and maximum size. Policies set a maximum size limit on libraries. A library is the pages, attachments, and other data that make up a wiki. It includes all wiki page versions, but does not include metadata such as comments.
When a policy is created, an ID is created for it and returned to you.
The ID is in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000. You must provide policy IDs as parameters when you run other WikisPolicyService commands.
Policies can be applied to libraries by using the WikisLibraryService.assignPolicy or WikisLibraryService.assignPolicyBatch commands.
Parameters:
- title
- The policy title. A required value.
- maximumSize
The maximum size that is allowed, in bytes. Must be zero or greater. A value of zero means the size is unlimited.
Numbers 2 GB or greater are long literals, and you must add an "L" to the end of the number, for example a policy of 2 GB must be 2147483648L.
For example:
WikisPolicyService.add("My Policy", 2147483648L)
- WikisPolicyService.edit(string policyId, string title, long maximumSize)
Edits the title and maximum size of a policy with a specified ID. If the ID is for a default policy, the title is not modified. Policies set a maximum size limit on libraries. A library is the pages, attachments, and other data that make up a wiki. It includes all wiki page versions, but does not include metadata such as comments.
Parameters:
- policyID
- The policy ID in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000.
- title
- The policy title. A required value.
- maximumSize
The maximum size that is allowed, in bytes. Must be zero or greater. A value of zero means the size is unlimited.
Numbers 2 GB or greater are long literals, and you must add an "L" to the end of the number, for example a policy of 2 GB must be 2147483648L.
For example:
WikisPolicyService.edit("2d93497d-065a-4022ae25-a4b52598d11a", "My Policy", 2147483648L)
- WikisPolicyService.getById(string id)
Return information for a single policy that is specified by an ID. Policies set a maximum size limit on libraries.
A library is the pages, attachments, and other data that make up a wiki. It includes all wiki page versions, but does not include metadata such as comments.
Parameters:
- id
- The policy ID in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000. The following information is returned:
- id: the ID
- title: the policy title
- maximumSize: the maximum size (in bytes) the library can be, or 0 for unlimited
For example:
WikisPolicyService.getById("2d93497d-065a-4022ae25-a4b52598d11a")
- WikisPolicyService.browse(string sortOption, string sortAscending, int pageNumber, int itemsPerPage)
Return a list of policies with ID, title, and maximum size information, as described for the WikisPolicyService.getById(id) command. Policies set a maximum size limit on libraries.
A library is the pages, attachments, and other data that make up a wiki. It includes all wiki page versions, but does not include metadata such as comments.
Parameters:
- sortOption
- A string value that specifies how to sort the list. The default value is title, but you can also use maximumSize.
- sortAscending
- A string value that specifies whether the list sorts in ascending alphabetical order. This value depends on sortOption.
If sortOption is title, then this value is true; if sortOption is any other value, then this value is false.
- pageNumber
- The number of the page to return.
For example, if the itemsPerPage value is 40, and pageNumber value is 2, the command returns items 41 - 80 (page 2) instead of 1 to 40 (page 1).
- itemsPerPage
- The maximum number of policies to list per page. The default value is 20.
For example:
WikisPolicyService.browse("title", "true", 1, 25)
- WikisPolicyService.getCount()
Return the number of policies. Policies set a maximum size limit on libraries. A library is the pages, attachments, and other data that make up a wiki. It includes all wiki page versions, but does not include metadata such as comments.
- WikisPolicyService.editDefault(long maximumSize)
Set the maximum size, in bytes, for the personal wiki library default policy. Personal wikis are owned by a person.
Parameters:
- maximumSize
A number that represents the maximum size that is allowed, in bytes, for wikis that the default policy is assigned to.
Numbers 2 GB or greater are long literals, and you must add an "L" to the end of the number, for example a policy of 2 GB must be 2147483648L.
For example:
WikisPolicyService.editDefault(2147483648L)
- WikisPolicyService.editCommunityDefault(long maximumSize)
Set the maximum size, in bytes, for the community wiki library default policy. Community wikis are owned by a community.
Parameters:
- maximumSize
A number that represents the maximum size that is allowed, in bytes, for wikis that the default policy is assigned to.
Numbers 2 GB or greater are long literals, and you must add an "L" to the end of the number, for example a policy of 2 GB must be 2147483648L.
For example:
WikisPolicyService.editCommunityDefault(2147483648L)
- WikisPolicyService.delete(string id)
Delete the policy that is specified by the ID. You cannot delete default policies or policies in use by any libraries.
- id
- The policy ID in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000.
For example:
WikisPolicyService.delete("f0d01111-9b21-4dd8-b8be-8825631cb84b")
Viewing Wikis library information
Use the WikisLibraryService commands to find information about Wikis libraries. A library comprises the pages, attachments, and other data that make up a wiki.
To run administrative commands, use the wsadmin client.
For more information, see Start wsadmin.
Many commands require an ID as an input parameter, including library IDs, user IDs, policy IDs, and file IDs. You can find an ID by using special commands.
For example, when you run the WikisMemberService.getByEmail(string email) command, where you provide a user's email address as input, the output includes the user's ID. You can also find IDs by using feeds.
For more information, see the Connections API documentation.
- Start the Wikis Jython script interpreter
execfile("wikisAdmin.py")
- Run the following commands to return information about libraries:
- WikisLibraryService.getById(string libraryId)
Return information about a single library specified by an ID. A library is the pages, attachments, and other data that make up a wiki. It includes all wiki page versions, but does not include metadata such as comments.
Parameters:
- libraryId
- The library ID in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000. The following information is returned:
- id: The library ID in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000.
- ownerUserId: The user ID of the library owner in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000.
- type: The type of library. The only valid value is wiki.
- label: A string of characters used to identify the library in a URL.
- title: The library's title.
- summary: A summary of library information.
- size: The total size of the library binary data.
- percentUsed: The percentage of the maximum allowable size used, according to the library's policy. Zero if not applicable.
- maximumSize: The maximum size (in bytes) the library's policy allows. Zero for unlimited.
- policyId: The ID of the policy that sets a maximum limit (in bytes) on the library's size, in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000.
- lastUpdate: The last time a significant user-driven update occurred to the metadata.
- createDate: The library's creation date.
- externalInstanceId: The widget ID if the library is owned by a community.
- externalContainerId: The community ID if the library is owned by a community.
- themeName: The theme the community owner has selected in communities. Returned for community libraries only.
- orphan: The value is true if the library owner is no longer active. Returned for personal libraries only.
For example:
WikisLibraryService.getById("2d93497d-065a-4022ae25-a4b52598d11a")
- WikisLibraryService.browseWiki(string sortOption, string sortAscending, int pageNumber, int itemsPerPage)
Return a list of all wikis, with information about each wiki. The list includes wikis owned by communities, and wikis whose owners were removed from the user directory.
Parameters:
- sortOption
- A string value that specifies how to sort the list. The default value is title, but you can use lastUpdate, size, createDate, or quotaPercentage.
- sortAscending
- A string value that specifies whether to sort the list in ascending alphabetical order. This depends on the sortOption value.
If sortOption is title, then this value is true; if sortOption is any other value, then this value is false.
- pageNumber
- The number of the page to display.
For example, if you specify in the itemsPerPage parameter that each page will have 50 items, page 1 will contain items 1-50.
- itemsPerPage
- The maximum number of wikis to list per page. The default value is 20. The following information is returned:
- id: The library ID in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000.
- ownerUserId: The user ID of the library owner in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000.
- type: The type of library. The only valid value is wiki.
- label: A string of characters used to identify the library in a URL.
- title: The library's title.
- summary: A summary of library information.
- size: The total size of the library binary data.
- percentUsed: The percentage of the maximum allowable size used, according to the library's policy. Zero if not applicable.
- maximumSize: The maximum size (in bytes) the library's policy allows. Zero for unlimited.
- policyId: The ID of the policy that sets a maximum limit (in bytes) on the library's size, in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000.
- lastUpdate: The last time a significant user-driven update occurred to the metadata.
- createDate: The library's creation date.
- externalInstanceId: The widget ID if the library is owned by a community.
- externalContainerId: The community ID if the library is owned by a community.
- themeName: The theme the community owner has selected in communities. Returned for community libraries only.
- orphan: The value is true if the library owner is no longer active. Returned for personal libraries only.
For example:
WikisLibraryService.browseWiki("title", "true", 1, 25)- WikisLibraryService.getWikiCount()
Return the total number of wikis.
- WikisLibraryService.exportSyncedResourceInfo(string fullPathForOutput, string type)
Return a report of all of the communities that the Wikis application has interacted with. After a system crash you can compare the report to the latest metadata in the Communities database to help synchronize and update any missing data.
See the topic Comparing remote application data with the Communities database for more information.
Note that in clusters, when you run the command from the deployment manager, the path and file are created on the server running Wikis. In clusters where multiple nodes are running Wikis, you are asked to choose a server to connect to and run the command on, and then the path and file are created on that server.
Parameters:
- fullPathForOutput
- The full path and file name of the report, as a string in quotes. The report is an XML file. Use forward slashes ("/") in the path, even on Microsoft Windows computers.
- type
- This is always the string value "community" (including the quotation marks). An error is returned if this parameter is anything except "community".
For example:
WikisLibraryService.exportSyncedResourceInfo("c:/connections/sync/community_output.xml", "community")
Filter library lists
Use the WikisUtilService commands to filter lists of library maps that are returned by the WikisLibraryService.browseWiki command. You can filter a list of library maps by string value, date value, or number value.
To run administrative commands, use the wsadmin client.
For more information, see Start wsadmin.
Many commands require an ID as an input parameter, including library IDs, user IDs, policy IDs, and file IDs. You can find an ID by using special commands.
For example, when you run the WikisMemberService.getByEmail(string email) command, where you provide a user's email address as input, the output includes the user's ID. You can also find IDs by using feeds.
For more information, see the Connections API documentation.
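For instance, a lookup like the following (the email address is a placeholder for a user in your directory) returns a member map that includes the user's ID:
WikisMemberService.getByEmail("ajones@example.com")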
Commands such as WikisLibraryService.browseWiki return List<Map> Java objects. A List<Map> object is a list of Map Java objects. Maps are lists of key/value pairs.
For example, the WikisLibraryService.browseWiki command returns a list of libraries. Each library in the list is a map with a set of keys, and each key is paired with a value. Every library has the same set of keys, but unique values, such as a title and creation date.
You can filter a list by specifying that it must return maps that have a specific key with a specific string value, date value, or number value.
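Because the returned objects behave as standard Java List and Map objects, you can also inspect them directly at the wsadmin prompt after you start the Wikis Jython script interpreter. The following minimal sketch, which uses only the documented browseWiki parameters and output keys, prints the title and percentUsed value for each library on the first page of results:
libs = WikisLibraryService.browseWiki("title", "true", 1, 25)
i = 0
while i < libs.size():
    lib = libs.get(i)
    # Each map exposes the documented keys, such as title and percentUsed
    print lib.get("title"), lib.get("percentUsed")
    i = i + 1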
- Start the Wikis Jython script interpreter
execfile("wikisAdmin.py")- Run the following commands to filter a list of library maps:
- WikisUtilService.filterListByString(List listOfMaps, string filterKey, string regexstringCriteria)
Returns maps from a specified list that have a specified key that matches a specified regular expression. Use this command to filter List<Map> Java objects that are returned by any of the browse commands, such as WikisLibraryService.browseWiki.
A map is a list of key/value pairs. For example, the WikisLibraryService.browseWiki command returns a list of libraries. Each library in the list is a map with a set of keys, and each key is paired with a value. Every library has the same set of keys, but unique values. Values contain information about the library, such as its title and creation date.
Parameters:
- listOfMaps
- A list of maps, for example the result of WikisLibraryService.browseWiki(parameters).
- filterKey
- A key in each map in the list, whose value is compared against the filter criteria.
- regexstringCriteria
- A regular expression that is represented as a string to match against the filterKey value.
For example, "[0-9]+" to match only >= 1 numbers in a row.
The command returns maps from the listOfMaps whose filterKey is the regexstringCriteria value.
For example, this command shows only the returned maps whose title values match the expression "Development*":
WikisUtilService.filterListByString(WikisLibraryService.browseWiki("title", "true", 1, 25), "title", "Development*")
- WikisUtilService.filterListByDate(List listOfMaps, string filterKey, expression)
Returns maps from a specified list that have a specified key with a specified date. Use this command to filter List<Map> Java objects that are returned by any of the browse commands, such as WikisLibraryService.browseWiki.
A map is a list of key/value pairs. For example, the WikisLibraryService.browseWiki command returns a list of libraries. Each library is a map with a set of keys, and each key is paired with a value. Every library has the same set of keys, but unique values. Values contain information about the library, such as its title and creation date.
Parameters:
- listOfMaps
- A list of maps, for example the result of WikisLibraryService.browseWiki(parameters).
- filterKey
- A key in each map in the list, whose value is compared against the filter criteria.
- expression
- A string of the form <operator> <date>, where <date> is in yyyy-MM-dd format and <operator> is one of the following: > >= == <= <
The command returns the maps from listOfMaps whose filterKey value satisfies the expression.
For example, this command shows only the returned maps whose creation date is on or later than January 1, 2010:
WikisUtilService.filterListByDate(WikisLibraryService.browseWiki("title", "true", 1, 25), "createDate", ">=2010-01-01")
- WikisUtilService.filterListByNumber(List listOfMaps, string filterKey, expression)
Returns maps from a specified list that have a specified key with a specified number. Use this command to filter List<Map> Java objects that are returned by any of the browse commands, such as WikisLibraryService.browseWiki.
A map is a list of key/value pairs. For example, the WikisLibraryService.browseWiki command returns a list of libraries. Each library is a map with a set of keys, and each key is paired with a value. Every library has the same set of keys, but unique values. Values contain information about the library, such as its title and creation date.
Parameters:
- listOfMaps
- A list of maps, for example the result of WikisLibraryService.browseWiki(parameters).
- filterKey
- A key in each map in the list, whose value is compared against the filter criteria.
- expression
- A string of the form <operator> <int>, where <int> is an integer and <operator> is one of the following: > >= == <= <
The command returns the maps from listOfMaps whose filterKey value satisfies the expression.
For example, this command shows only the returned maps whose percentUsed value (the percentage of the library's maximum allowable size that is used) is 20:
WikisUtilService.filterListByNumber(WikisLibraryService.browseWiki("title", "true", 1, 25), "percentUsed", "==20")
- WikisUtilService.getFileById(string fileID)
Return the file path of the wiki page file attachment that is identified by the provided file ID. A path is returned even if the file is not in use.
Use this command to find the location of any file attachment that is stored in the shared file directory. This command can be useful when you want to restore backup versions of data.
For more information, see Back up Files data.
- fileID
- The ID of a file in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000.
For example:
WikisUtilService.getFileById("2d93497d-065a-4022ae25-a4b52598d11a")
Print library information
Use the WikisPrintService.saveToFile command to print information that is returned by other commands.
To run administrative commands, use the wsadmin client.
For more information, see Start wsadmin.
This command prints Map or List<Map> Java objects that are returned by any of the following commands:
- WikisMemberService.getById (returns a Map)
- WikisMemberService.getByExtId (returns a Map)
- WikisMemberService.getByEmail (returns a Map)
- WikisLibraryService.getById (returns a Map)
- WikisLibraryService.browseWiki (returns a List<Map>)
- WikisPolicyService.getById (returns a Map)
- WikisPolicyService.browse (returns a List<Map>)
- WikisMetricService.browse (returns a List<Map>)
A List<Map> object is a list of Map Java objects. Maps are lists of key/value pairs.
For example, the WikisLibraryService.browseWiki command returns a list of library maps. (A library is the pages, attachments, and other data that make up a wiki.) Each map in the list has a set of keys, and each key is paired with a value. Every library has the same set of keys, but unique values. Values are information about the library, such as its title and creation date.
- Start the Wikis Jython script interpreter
execfile("wikisAdmin.py")
- Run the following command to print library information that is returned by other commands:
WikisPrintService.saveToFile(Object object, String filePath, String append)
where object is a Map or List<Map> Java object. You provide the command that returns the object, for example, WikisLibraryService.browseWiki(parameters), which returns a List<Map> object.
The filePath parameter is the full path and file name of the comma-separated value (.csv) file where the data is printed.
The append parameter determines whether to append the returned information to information in an existing file. If true (the default), the returned information is appended; if false, the file is overwritten. An example that sets append explicitly follows the sample output.
For example:
WikisPrintService.saveToFile(WikisLibraryService.browseWiki("title","true",1,20), "/opt/wsadmin/LibraryMap.csv")
In this example, the command runs WikisLibraryService.browseWiki with the specified parameters and then prints the output to the LibraryMap.csv file. Because the append parameter is left at the default value of true, running the same command again appends the new data to the existing LibraryMap.csv.
The following example shows data that are printed to a file:
"id", "createDate", "label", "lastUpdate", "maximumSize", "orphan", "ownerUserId", "percentUsed", "policyId", "size", "summary", "title", "type", "externalInstanceId", "externalContainerId" "ef8ed3e2-22c0-4f20-aa53-bdc6b262abbd", "2009-06-25 12:30:32.797", "5adff8c0-7d67-102c-8452-e2ebc3ec5536", "2009-06-25 12:30:32.797", "524288000", "false", "9ddb97f0-cea5-49fd-9158-06e45b01cd46", "0.0", "00000000-0000-0000-0000-000000000000", "0", "", "Amy Jones1", "personal", "", "" "30676b64-c792-46d1-9c21-bcea1f3350cf", "2009-06-25 16:23:23.354", "5b788f40-7d67-102c-8464-e2ebc3ec5536", "2009-06-25 16:23:23.354", "524288000", "false", "1c00bd59-20c1-48ea-857b-9c998670d715", "8.170700073242188E-5", "00000000-0000-0000-0000-000000000000", "42838", "", "Amy Jones10", "personal", "", "" "547b8f88-0cb9-4f84-95c2-382f235fe251","2009-06-26 10:57:57.384", "5b788f40-7d67-102c-8468-e2ebc3ec5536", "2009-06-26 10:57:57.384", "524288000", "false", "a25fd14a-70d2-4978-b814-9e05f9b56503", "3.90625E-5", "00000000-0000-0000-0000-000000000000", "20480", "", "Amy Jones12", "personal", "", "" "605ff4d6-956a-446f-a393-4995057213c5", "2009-06-26 16:15:58.778", "5ca9bc40-7d67-102c-847c-e2ebc3ec5536", "2009-06-26 16:15:58.778", "524288000", "false", "5a79bd44-e5d1-4b4d-b51f-338eff519636", "3.1642913818359376E-6", "00000000-0000-0000-0000-000000000000", "1659", "", "Amy Jones23", "personal", "", "" "86aa4152-c661-4aae-ac7c-ca04da570715", "2009-06-25 17:29:16.162", "5ddae940-7d67-102c-849e-e2ebc3ec5536", "2009-06-25 17:29:16.162", "524288000", "false", "82252a4e-1355-4518-930c-983d7085ad6e", "2.63214111328125E-7", "00000000-0000-0000-0000-000000000000", "138", "", "Amy Jones40", "personal", "", "" "fb9f97fe-0c41-4276-96f0-f1c91478df68", "2009-06-25 11:52:22.843", "5e737fc0-7d67-102c-84b2-e2ebc3ec5536", "2009-06-25 11:52:22.843", "524288000", "false", "8daf28ad-f51e-4ef3-8990-283c9fd4574a", "0.0", "00000000-0000-0000-0000-000000000000", "0", "", "Amy Jones50", "personal", "", "" "179be703-7e44-45f5-9fa8-eeab1f46e896", "2009-06-25 18:17:38.639", "5f0c1640-7d67-102c-84c6-e2ebc3ec5536", "2009-06-25 18:17:38.639", "524288000", "false", "3a99634c-fd29-4966-ae6f-217472f5439c", "0.0", "00000000-0000-0000-0000-000000000000", "0", "", "Amy Jones60", "personal", "", ""
Disable wiki page versioning
By default, users can see all versions of a wiki page but you can disable versioning by editing the wikis-config.xml configuration file.
To edit configuration files, use the wsadmin client.
Disabling versioning can help control the size of data storage. When you disable versioning before users start using Wikis, only one version of a page is stored and all updates are reflected in that version.
Only pages are versioned. File attachments are not versioned.
You can disable versioning at any time. If there are already multiple versions of a page when you disable versioning, the latest version becomes the active version and all future updates are reflected in that version. The older versions are hidden from the user interface but still exist and take up space in the database. If a user reaches a space quota, you can delete older versions by enabling versioning again. Then ask the user to open the page, click the Versions tab, and delete versions.
You can also run a manual database update to remove all older versions of files. Run a delete statement on the MEDIA_REVISION table and specify a constraint that the IS_CURRENT_REVISION column is set to zero. Specifying that value ensures that a record still exists for the current version.
- Start the Wikis Jython script interpreter.
- Access the Wikis configuration files:
execfile("wikisAdmin.py")If you are asked to select a server, you can select any server.
- Check out the Wikis configuration files:
WikisConfigService.checkOutConfig("working_directory", "cell_name")
where:
- working_directory is the temporary working directory to which the configuration XML and XSD files are copied.
The files are kept in this working directory while you make changes to them.
AIX, IBM i, and Linux only: The directory must grant write permissions or the command will not run successfully.
- cell_name is the name of the WAS cell hosting the Connections application.
This argument is required. To determine the cell name, run the following wsadmin command:
print AdminControl.getCell()
For example:
WikisConfigService.checkOutConfig("/opt/my_temp_dir", "CommCell01")- To view the current configuration settings use the following command:
WikisConfigService.showConfig()
- To set the file.versioning.enabled property to false:
WikisConfigService.updateConfig("file.versioning.enabled", "false")- Check in the configuration file.
You must check in the file in the same wsadmin session in which you checked it out.
For more information, see Applying Wikis property changes.
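Putting the steps together, a complete session might look like the following sketch. The working directory and cell name are placeholders, and the final command assumes the WikisConfigService.checkInConfig() check-in command that is described in Applying Wikis property changes:
execfile("wikisAdmin.py")
WikisConfigService.checkOutConfig("/opt/my_temp_dir", "CommCell01")
WikisConfigService.updateConfig("file.versioning.enabled", "false")
WikisConfigService.checkInConfig()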
Delete wikis from the system
Use the WikisLibraryService delete commands to delete wikis.
To run administrative commands, use the wsadmin client.
Many commands require an ID as an input parameter, including library IDs, user IDs, policy IDs, and file IDs. You can find an ID by using special commands.
For example, when you run the WikisMemberService.getByEmail(string email) command, where you provide a user's email address as input, the output includes the user's ID. You can also find IDs by using feeds.
For more information, see the Connections API documentation.
Use wikis administrative commands to see wiki library information that can help you decide what wikis to delete.
For example, the WikisLibraryService.browseWiki command returns a list of wikis. Use the returned information to see which wikis have not been updated in a long time.
For more information, see Wikis administrative commands.
In the Wikis database context, a library contains the pages, attachments, and other data that make up a wiki. Wikis can be owned by a person or a community. However, you should delete community libraries by following the steps in the topic Deleting orphaned data.
- Start the Wikis Jython script interpreter
execfile("wikisAdmin.py")
- Run one of these commands to delete libraries:
- Run this command to delete a single library:
WikisLibraryService.delete(libraryId)
where libraryId is the library id in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000.
- Run this command to delete multiple libraries:
WikisLibraryService.deleteBatch(filePath)
where filePath is the full path to a text file containing a list with a single library id per line in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000. You must create the file and save it in a directory local to the server where you are running the wsadmin process. A combined example follows.
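For example, the following calls delete a single wiki and then a batch of wikis listed in a text file; the library ID and file path shown are placeholders:
WikisLibraryService.delete("2d93497d-065a-4022-ae25-a4b52598d11a")
WikisLibraryService.deleteBatch("/opt/wsadmin/wikisToDelete.txt")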
Delete draft wiki pages
Delete unsaved changes to a user's wiki pages.
To run administrative commands, use the wsadmin client.
To delete a user's draft pages, you need the user's ID in your LDAP directory.
If a user edits a wiki page without saving the changes, a draft version of the page is stored in the database. Each time the user, or a wiki owner or editor, visits the page, a reminder about the draft is displayed. If the user is no longer available to save or delete the draft version, you can delete it by using the deleteDraftsByOwnerId command. This command is useful when a user leaves your organization.
To delete a user's drafts...
- Start the Wikis Jython script interpreter
execfile("wikisAdmin.py")- To delete draft wiki pages owned by a specific user:
WikisMediaService.deleteDraftsByOwnerId('extId')
where extId is the unique ID of the user.
All draft pages owned by the specified user are deleted.
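For example, a session might first look up the user to confirm the ID and then delete that user's drafts. This is a sketch: the email address and ID shown are placeholders, and it assumes that the ID reported by WikisMemberService.getByEmail is the external ID that deleteDraftsByOwnerId expects:
WikisMemberService.getByEmail("ajones@example.com")
WikisMediaService.deleteDraftsByOwnerId('3a99634c-fd29-4966-ae6f-217472f5439c')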
Find the location of a stored attachment
Use the WikisUtilService.getFileById command to locate a file attachment in a directory.
To run administrative commands, use the wsadmin client.
Many commands require an ID as an input parameter, including library IDs, user IDs, policy IDs, and file IDs. You can find an ID by using special commands.
For example, when you run the WikisMemberService.getByEmail(string email) command, where you provide a user's email address as input, the output includes the user's ID. You can also find IDs by using feeds.
For more information, see the Connections API documentation.
In network deployments, files attached to wiki pages are stored on a shared file system, as described in the topic Install the first node of a cluster.
This command can be useful when restoring backup versions of data.
See the topic Back up Files data for more information.
- Start the Wikis Jython script interpreter
execfile("wikisAdmin.py")- Run the following command to locate a file attachment stored in the file directory:
WikisUtilService.getFileById(fileId)
where fileId is the ID of a file stored in the database. The ID must be a string in the following standard Universally Unique Identifier (UUID) format: 00000000-0000-0000-0000-000000000000.
The command returns the file path as a string, even if the file is not in use.
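For example, the following sketch captures and prints the returned path; the file ID shown is a placeholder:
path = WikisUtilService.getFileById("2d93497d-065a-4022-ae25-a4b52598d11a")
print path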
Displaying file attachments inline
Configure Wikis to display file attachments inline instead of as attachments. This is useful when you download and display active content, such as Adobe Flash (.swf) files, in your own HTML pages. Enable inline display by changing a configuration property in the wikis-config.xml file.
Then change the attachment URLs to use the inline parameter.
To edit configuration files, use the wsadmin client.
By default, the Connections server passes Wikis application file attachments to browsers with the header "Content-Disposition: attachment." This means files display as attachments; when users click the attachment they are prompted to open or download the file. It also prevents embedding files. To embed files in your own HTML page using an <embed> tag, the content disposition must be inline. This affects active content, such as Adobe Flash (.swf), and HTML pages referenced with <iframe>.
Configure a property in wikis-config.xml to change the content disposition from attachment to inline.
Wikis uses the attachment disposition for security reasons. Specifically, uploaded files could potentially contain malicious code that can exploit the cross-site scripting vulnerabilities of some browsers.
If you switch to inline disposition, you should configure downloads to use an alternate domain for greater security.
See Mitigating a cross site scripting attack.
- Start the Wikis Jython script interpreter.
- Access the Wikis configuration files:
execfile("wikisAdmin.py")If you are asked to select a server, you can select any server.
- Check out the Wikis configuration files:
WikisConfigService.checkOutConfig("working_directory", "cell_name")
where:
- working_directory is the temporary working directory to which the configuration XML and XSD files are copied.
The files are kept in this working directory while you make changes to them.
AIX, IBM i, and Linux only: The directory must grant write permissions or the command will not run successfully.
- cell_name is the name of the WAS cell hosting the Connections application.
This argument is required. To determine the cell name, run the following wsadmin command:
print AdminControl.getCell()
For example:
WikisConfigService.checkOutConfig("/opt/my_temp_dir", "CommCell01")
- Change the content disposition to inline:
WikisConfigService.updateConfig("security.inlineDownload.enabled", "true")- Check in the configuration file.
You must check in the file in the same wsadmin session in which you checked it out.
For more information, see Applying Wikis property changes.
- Change your attachment URLs to use the inline parameter.
Search Engine Optimization (SEO) for Wikis
Manage Search Engine Optimization (SEO) so that your public wiki content is available from internet search engines and achieves a higher ranking in searches.
Your public wikis are already optimized for search engines. By default, Connections generates a sitemap of your public wiki pages that you can submit to search engines.
To improve the process of search engine optimization, use HTTP compression for your site. HTTP compression decreases the time that is required to download a webpage to a browser. Most web servers, including IBM HTTP Server, use compression by default, but you can confirm that it is enabled by asking your administrator.
The following topics explain how to submit the sitemap and configure your SEO settings:
Enable or disable SEO
Enable or disable search engine optimization (SEO) of your public wiki content.
By default, Connections automatically runs a scheduled task to generate a sitemap for SEO. The task runs at midnight every Sunday but you can change or disable this schedule if necessary.
The SitemapGenerator task queries the WIKIS database periodically and generates sitemap files. This task consumes memory and database resources, so IBM recommends that you allow the task to run at off-peak times.
To change the scheduled SEO task...
- Open the wikis-config.xml file. The default location of the file is:
- AIX, IBM i, or Linux: connections_root/config/wikis-config.xml
- Windows: connections_root\config\wikis-config.xml
- Find the stanza for the SitemapGenerator task.
- Disable the scheduled task by changing the value of the <task enabled> parameter to false.
- Change the schedule by editing the Interval parameter.
The value is a CRON expression.
For more information about setting CRON values, see the Scheduling tasks topic.
- Save and close the wikis-config.xml file.
- To edit the number of URLs per sitemap or the output path of the sitemap file, open the web.xml file. The default location of the file is:
- AIX, IBM i, or Linux: connections_root/Wikis.ear/wikis.web.war/WEB-INF/web.xml
- Windows: connections_root\Wikis.ear\wikis.web.war\WEB-INF\web.xml
- Change the output path by changing the value of the WIKI_SITEMAP_STORAGE_ROOT parameter.
- Change the number of URLs per sitemap by changing the value of the WIKI_SITEMAP_URLS_PER_SITEMAP parameter.
You can provide multiple sitemap files but each file must have no more than 50,000 URLs and must be no larger than 10 MB.
For more information, go to the Sitemaps XML format webpage.
- Save and close the web.xml file.
Submitting a sitemap for SEO
Submit a sitemap to a search engine provider.
The SitemapGenerator task that is configured in the wikis-config.xml file generates a sitemap used for search engine optimization. To use the sitemap, you must submit the URL of the sitemap to the provider's website, using a format such as the following sample URL:
http://example.com:8080/wikis/sitemap/myserver
This address returns the sitemapindex.xml file that contains all the sitemap URLs that will be accessed by the search engine provider's crawler.
The default location of the sitemap is:
- AIX or Linux: /opt/IBM/Connections/data/wikis/upload/sitemap
- IBM i: /qibm/Proddata/IBM/Connections/data/shared/wikis/upload/sitemap
- Windows: C:\IBM\Connections\data\wikis\upload\sitemap
To submit a sitemap...
- Upload the sitemap to a search engine provider by following the instructions for that provider. For example, submit a sitemap to Google.
- Allow the search engine provider to have anonymous access to the sitemap. Providing this access means that you do not have to manually submit the sitemap again. To allow anonymous access, follow the instructions on the provider's website.
- Repeat these steps for each search engine provider to which you want to submit the sitemap.
Secure SEO
Control access to the sitemap that is used for search engine optimization.
By default, search engine providers can access your sitemap. However, you can disable anonymous access to the sitemap if necessary.
When you disable anonymous access, only users who are mapped to the SEEDLIST_ADMIN role can access the sitemap.
For more information about mapping users to roles, see the Assigning people to J2EE roles topic.
To disable anonymous access to the sitemap...
- Open the web.xml file in a text editor.
The default location of the file is:
- AIX, IBM i, or Linux: connections_root/Wikis.ear/wikis.web.war/WEB-INF/web.xml
- Windows: connections_root\Wikis.ear\wikis.web.war\WEB-INF\web.xml
- In the servlet stanza, change the value of the allow.anonymous.access parameter to false.
- Save and close the file.