PI RDBMSPI 3.21.4.30
Version 3.21.4.x
OSIsoft, LLC
777 Davis St., Suite 250
San Leandro, CA 94577 USA
Tel: (01) 510-297-5800
Fax: (01) 510-357-8136
Web: http://www.osisoft.com
OSIsoft, the OSIsoft logo and logotype, PI Analytics, PI ProcessBook, PI DataLink, ProcessPoint, PI Asset Framework(PI-AF), IT
Monitor, MCN Health Monitor, PI System, PI ActiveView, PI ACE, PI AlarmView, PI BatchView, PI Data Services, PI Manual Logger,
PI ProfileView, PI WebParts, ProTRAQ, RLINK, RtAnalytics, RtBaseline, RtPortal, RtPM, RtReports and RtWebParts are all
trademarks of OSIsoft, LLC. All other trademarks or trade names used herein are the property of their respective owners.
Terminology
To understand this interface manual, you should be familiar with the terminology used in this
document.
Buffering
Buffering refers to an Interface Node’s ability to store temporarily the data that interfaces
collect and to forward these data to the appropriate PI Servers.
N-Way Buffering
If you have PI Servers that are part of a PI Collective, PIBufss supports n-way buffering.
N-way buffering refers to the ability of a buffering application to send the same data to each
of the PI Servers in a PI Collective. (Bufserv also supports n-way buffering to multiple PI
Servers; however, it does not guarantee identical archive records, since point compression
attributes could differ between PI Servers. With this in mind, OSIsoft recommends that
you run PIBufss instead.)
ICU
ICU refers to the PI Interface Configuration Utility. The ICU is the primary application that
you use to configure PI interface programs. You must install the ICU on the same computer
on which an interface runs. A single copy of the ICU manages all of the interfaces on a
particular computer.
You can configure an interface by editing a startup command file. However, OSIsoft
discourages this approach. Instead, OSIsoft strongly recommends that you use the ICU for
interface management tasks.
ICU Control
An ICU Control is a plug-in to the ICU. Whereas the ICU handles functionality common to
all interfaces, an ICU Control implements interface-specific behavior. Most PI interfaces
have an associated ICU Control.
Interface Node
An Interface Node is a computer on which
the PI API and/or PI SDK are installed, and
PI Server programs are not installed.
PI API
The PI API is a library of functions that allow applications to communicate and exchange
data with the PI Server. All PI interfaces use the PI API.
PI Collective
A PI Collective is two or more replicated PI Servers that collect data concurrently.
Collectives are part of the High Availability environment. When the primary PI Server in a
collective becomes unavailable, a secondary collective member node seamlessly continues to
collect and provide data access to your PI clients.
PIHOME
PIHOME refers to the directory that is the common location for PI 32-bit client applications.
A typical PIHOME on a 32-bit operating system is C:\Program Files\PIPC.
A typical PIHOME on a 64-bit operating system is C:\Program Files (x86)\PIPC.
PI 32-bit interfaces reside in a subdirectory of the Interfaces directory under PIHOME.
For example, files for the 32-bit Modbus Ethernet Interface are in
[PIHOME]\Interfaces\ModbusE.
This document uses [PIHOME] as an abbreviation for the complete PIHOME or PIHOME64
directory path. For example, ICU files in [PIHOME]\ICU.
PIHOME64
PIHOME64 is found only on a 64-bit operating system and refers to the directory that is the
common location for PI 64-bit client applications.
A typical PIHOME64 is C:\Program Files\PIPC.
PI 64-bit interfaces reside in a subdirectory of the Interfaces directory under PIHOME64.
For example, files for a 64-bit Modbus Ethernet Interface would be found in
C:\Program Files\PIPC\Interfaces\ModbusE.
This document uses [PIHOME] as an abbreviation for the complete PIHOME or PIHOME64
directory path. For example, ICU files in [PIHOME]\ICU.
PI Message Log
The PI Message Log is the file to which OSIsoft interfaces based on UniInt 4.5.0.x and later
write informational, debug, and error messages. When a PI interface runs, it writes to the
local PI message log. This message file can only be viewed using the PIGetMsg utility. See
the UniInt Interface Message Logging.docx file for more information on how to access these
messages.
PI SDK
The PI SDK is a library of functions that allow applications to communicate and exchange
data with the PI Server. Some PI interfaces, in addition to using the PI API, require the use of
the PI SDK.
PI Server Node
A PI Server Node is a computer on which PI Server programs are installed. The PI Server
runs on the PI Server Node.
PI SMT
PI SMT refers to PI System Management Tools. PI SMT is the program that you use for
configuring PI Servers. A single copy of PI SMT manages multiple PI Servers. PI SMT runs
on either a PI Server Node or a PI Interface Node.
Pipc.log
The pipc.log file is the file to which OSIsoft applications write informational and error
messages. When a PI interface runs, it writes to the pipc.log file. The ICU allows easy
access to the pipc.log.
Point
The PI point is the basic building block for controlling data flow to and from the PI Server.
For a given timestamp, a PI point holds a single value.
A PI point does not necessarily correspond to a “point” on the foreign device. For example, a
single “point” on the foreign device can consist of a set point, a process value, an alarm limit,
and a discrete value. These four pieces of information require four separate PI points.
Service
A Service is a Windows program that runs without user interaction. A Service continues to
run after you have logged off from Windows. It has the ability to start up when the computer
itself starts up.
The ICU allows you to configure a PI interface to run as a Service.
The interface allows bi-directional transfer of data between the PI System and any Relational
Database Management System (RDBMS) that supports Open DataBase Connectivity
(ODBC) drivers. The interface runs on Microsoft Windows operating systems, and is able to
connect to any PI Server node available on the network. This version only supports one
ODBC connection per running copy but multiple interface instances are possible.
SQL statements are generated by the end user, either in the form of ordinary ASCII files, or
are directly defined in the ExtendedDescriptor of a PI Tag. These SQL statements are the
source of data for one or more PI Tags – data input; and, similarly, PI tags can provide values
for RDB – data output.
The interface makes internal use of the PI API and PI SDK in order to keep a standard way of
interfacing from a client node to the PI Server node.
Note: Databases and ODBC drivers not yet tested with the interface may require
additional onsite testing, which will translate to additional charges. Refer to Appendix
G: Interface Test Environment for a list of databases and ODBC drivers that the
interface is known to work with. Even if the customer’s database and/or ODBC driver
is not shown, the interface will likely work with it. Please contact OSIsoft
Technical Support or your local sales representative for additional guidance.
Note: Version 3.x of the RDBMSPI Interface is a major revision (as the version 2.x
was for version 1.x) and many enhancements have been made that did not fit into
the design of the previous version. Refer to Appendix F: For Users of Previous
Interface Versions prior to upgrading an older version of the interface.
The interface runs on Intel computers with Microsoft Windows operating systems and the
Interface Node may be either a PI home or PI API node – see section Configuration Diagram.
This document contains the following topics:
Brief design overview
Installation and operation details
PI Points configuration details (points that will receive data via this interface)
Supported command line parameters
Commented examples
Note: The value of the [PIHOME] variable for the 32-bit interface will depend on whether
the interface is being installed on a 32-bit or a 64-bit operating system.
Note: Throughout this manual there are references to where messages are written
by the interface, namely the PIPC.log. Since interface version 3.20.6.x, the
interface has been built against UniInt version 4.5.5.22, which now writes all its
messages to the local PI Message log.
Please note that any place in this manual where it references PIPC.log should now
refer to the local PI message log. Please see the document UniInt Interface
Message Logging.docx in the %PIHOME%\Interfaces\UniInt directory for more
details on how to access these messages.
Reference Manuals
OSIsoft
PI Server Manuals
PI API and PI SDK Manual
UniInt Interface User Manual
Examples_readme.doc
Vendor
Vendor specific ODBC Driver Manual
Microsoft ODBC Programmer's Reference
Supported Features
Feature                                          Support
Part Number                                      PI-IN-OS-RELDB-NTI
* Platforms                                      32-bit Interface       64-bit Interface
Windows XP
  32-bit OS                                      Yes                    No
  64-bit OS                                      Yes (Emulation Mode)   No
Windows 2003 Server
  32-bit OS                                      Yes                    No
  64-bit OS                                      Yes (Emulation Mode)   No
Windows Vista
  32-bit OS                                      Yes                    No
  64-bit OS                                      Yes (Emulation Mode)   No
Windows 2008
  32-bit OS                                      Yes                    No
Windows 2008 R2
  64-bit OS                                      Yes (Emulation Mode)   No
Windows 7
  32-bit OS                                      Yes                    No
  64-bit OS                                      Yes (Emulation Mode)   No
Windows 8
  32-bit OS                                      Yes                    No
  64-bit OS                                      Yes (Emulation Mode)   No
Windows 2012
  64-bit OS                                      Yes (Emulation Mode)   No
Vendor Software Required on Foreign Device       Yes
Vendor Hardware Required                         No
Additional PI Software Included with Interface   No
Device Point Types                               See note below.
Serial-Based Interface                           No
Platforms
The Interface is designed to run on the above mentioned Microsoft Windows operating
systems and their associated service packs.
Please contact OSIsoft Technical Support for more information.
Support for reading/writing to PI Annotations
In addition to the timestamp, value, and status, the RDBMSPI interface can also write to and
read from PI annotations (see section Data Acquisition Strategies and take a look at the
PI_ANNOTATION keyword).
Uses PI SDK
The PI SDK and the PI API are bundled together and must be installed on each PI Interface
node. This Interface specifically makes PI SDK calls to access the PI Batch Database and
read some PI Point Attributes. Since interface version 3.15, PI SDK is used to write and read
to/from PI Annotations.
Source of Timestamps
The interface can accept timestamps from the RDBMS or it can provide PI Server
synchronized timestamps.
History Recovery
For output tags, the interface goes back in time and uses values stored in the PI Archive for
outputting them through a suitable SQL statement (mostly INSERTs or UPDATEs). See
section RDBMSPI – Output Recovery Modes, for more on this topic.
For input tags, history recovery often depends on the WHERE condition of a SELECT query.
Moreover, since version 3.17, the interface has provided enhanced support for input
history recovery; for a more detailed description, see section RDBMSPI – Input Recovery
Modes.
UniInt-based
UniInt stands for Universal Interface. UniInt is not a separate product or file; it is an
OSIsoft-developed template used by developers and is integrated into many interfaces,
including this interface. The purpose of UniInt is to keep a consistent feature set and behavior
across as many of OSIsoft’s interfaces as possible. It also allows for the very rapid
development of new interfaces. In any UniInt-based interface, the interface uses some of the
UniInt-supplied configuration parameters and some interface-specific parameters. UniInt is
constantly being upgraded with new options and features.
The UniInt Interface User Manual is a supplement to this manual.
SetDeviceStatus
The RDBMSPI Interface 3.15+ is built with UniInt 4.3+, where the new functionality has
been added to support health tags – the health tag with the point attribute
Exdesc = [UI_DEVSTAT] is used to represent the status of the source device.
The following events will be written into the tag:
"0 | Good | " the interface is properly communicating and gets data from/to the
RDBMS system via the given ODBC driver
"3 | 1 device(s) in error | " ODBC data source communication failure
"4 | Intf Shutdown | "the interface was shut down
Refer to the UniInt Interface User Manual.doc file for more information on how to
configure health points.
Failover
Server-Level Failover
The interface supports the FAILOVER_PARTNER keyword in the connection string
when used with the Microsoft SQL Server 2005 (and above) and the Native Client
ODBC driver. In other words, if the interface connects to mirrored
Microsoft SQL Servers and the connection gets broken, the interface will attempt to
reconnect to the second SQL Server.
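For illustration only, a DSN-less connection string of this kind could look as follows; the
driver, server, and database names are placeholders, not values shipped with the interface:
Driver={SQL Server Native Client 10.0};Server=PrimarySrv;Failover_Partner=MirrorSrv;Database=ProcessData;Trusted_Connection=yes;
With such a string, the ODBC driver itself knows the mirror partner, so a reconnect attempt
after a failure can be redirected to the second server.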
UniInt Failover Support
UniInt Phase 2 Failover provides support for cold, warm, or hot failover
configurations. The Phase 2 hot failover results in a no data loss solution for bi-
directional data transfer between the PI Server and the Data Source given a single
point of failure in the system architecture similar to Phase 1. However, in warm and
cold failover configurations, you can expect a small period of data loss during a
single point of failure transition. This failover solution requires that two copies of the
interface be installed on different interface nodes collecting data simultaneously from
a single data source. Phase 2 Failover requires each interface have access to a shared
data file. Failover operation is automatic and operates with no user interaction. Each
interface participating in failover has the ability to monitor and determine liveliness
and failover status. To assist in administering system operations, the ability to
manually trigger failover to a desired interface is also supported by the failover
scheme.
The failover scheme is described in detail in the UniInt Interface User Manual,
which is a supplement to this manual. Details for configuring this Interface to use
failover are described in the UniInt Failover Configuration section of this manual.
This interface supports UniInt Phase 2, cold failover.
Configuration Diagram
The following figures show the basic configuration of the hardware and software
components in a typical scenario used with the RDBMSPI Interface.
Configuration Diagram – PI Home Node with PI Interface Node and RDBMS Node
Note: For both of the configuration options depicted above, the communication
between the RDBMSPI interface and a PI Server is established via the PI API and,
optionally, through the PI SDK. The PI API is used for the actual transfer of time-series
data; the PI SDK is used for replication of the PI Batch Database and for reading from and
writing to PI Annotations. To activate the PI SDK link, the /PISDK=1 start-up parameter
must be defined.
The communication between the RDBMSPI interface and a relational database
happens through the ODBC library. The interface can thus connect to a relational
database that runs either on the interface node or on a remote node. This remote
database node does not have to be a Windows platform.
Chapter 2. Principles of Operation
The two sections that follow, Concept of Data Input from Relational Database to PI and
Concept of Data Output from PI to Relational Database, briefly explain the basics of how
time-series data is transferred from RDB to PI and vice versa (from PI to RDB). A more
detailed description of SQL statements, placeholders, retrieval strategies, hints for individual
RDBs, etc. follows in the section SQL Statements later in this manual.
Note: The sole purpose for supporting several distribution strategies is to minimize
the number of queries the interface executes. As stated in the beginning of this
manual, executing fewer queries that each retrieve data for more PI tags is less
expensive and faster than per-tag query execution.
There are Distributed Control Systems (DCS) that keep only current values in relational
database tables. Through the scan-based, per-tag defined SELECT, the interface can read
such data in a timely (periodical) manner. In fact, RDBMSPI then simply emulates the
behavior of a standard DCS interface - the corresponding SELECT is expected to return
exactly one row, which represents a snapshot of the given value. The obvious disadvantage of
this kind of data retrieval is low performance and accuracy limited to the scan frequency.
Needless to say, relational databases are not designed for this kind of querying on a massive
scale, like hundreds or thousands of queries executed in a loop.
A typical low throughput query can be similar to:
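A minimal sketch of such a single-row statement, with purely illustrative table and column
names, could be:
SELECT timestamp, value, 0 FROM current_values WHERE name = 'TIC101.PV';
Each scan then returns the one row currently held for that instrument, and the interface
stores it as a new PI event.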
A better strategy (compared to the one described in the previous paragraph) is to define lower
scan rates (e.g. 1+ minute) instead of executing the “one-row-delivering” SELECT every
second. In other words, getting the same amount of data in one call is faster and less
expensive than getting it in many calls. This approach, however, assumes that RDB tables get
populated by INSERTs (not UPDATEs); the interface then retrieves “true time-series”, even
if still for one tag. The mandatory timestamp column serves as a book-mark, because the
ultimate goal is to read only the newly arrived rows since the last scan.
An example of a query delivering a per-tag-time-series can be like:
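A hedged sketch of such a statement (table and column names are again illustrative; the
P1=TS placeholder is described in the SQL Placeholders section) could be:
SELECT timestamp, value, 0 FROM measurement_log WHERE name = 'TIC101.PV' AND timestamp > ? ORDER BY timestamp ASC;
with P1=TS defined in the ExtendedDescriptor, so that on each scan only rows newer than
the last retrieved timestamp are fetched.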
As a result, the interface obtains a succession of time-ordered rows representing only those
rows that have been INSERTed since the last scan. This is achieved by asking for rows with
timestamps greater than the value bound to the timestamp placeholder (the question mark).
Moreover, the interface can also do PI exception reporting, because the result-set is ordered.
For more detailed description of the per-tag reads, see section SQL SELECT Statement for
Single PI Tag. Concrete examples are in:
Appendix C: Examples Example 1.1 – single tag query.
Appendix C: Examples Example 1.2 – query data array for a single tag.
Tag Groups
Another way of improving performance (compared to reading time-series for a single tag) is
grouping tags together. For Tag Groups - an RDB table should be structured in a way that
multiple events are stored in the same row in more columns. Hence, the result-set for Tag
Groups must be of the following form:
Timestamp,Value1,Status1,Value2,Status2,…
Timestamp,Value1,Status1,Value2,Status2,…
…
or, in case there is a timestamp column for every value/status pair:
Timestamp1,Value1,Status1,Timestamp2,Value2,Status2,…
Timestamp1,Value1,Status1,Timestamp2,Value2,Status2,…
…
An example of the “Tag Group” SELECT statement can be like:
SELECT Timestamp,Value1,Status1,Value2,Status2,… FROM Table WHERE
Timestamp>? ORDER BY Timestamp ASC;
Obviously, querying such a table is more efficient than having a SELECT statement defined
per a single tag. For detailed description of this distribution strategy, see section SQL
SELECT Statement for Tag Groups. The concrete example: Appendix C: Examples Example
1.3 – three PI points forming a GROUP.
Tag Distribution
Compared to Tag Groups, where grouping happens in the form of multiple ValueN, StatusN
pairs in the SELECT list, the Tag Distribution approach requires “named rows”. That means,
an additional field must be provided in the result-set; the Name column then tells the interface
to which target point a particular row will be distributed. The result set for Tag Distribution
should have the following form:
Timestamp,Name,Value,Status
Timestamp,Name,Value,Status
…
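A hedged example of a statement feeding this strategy (table and column names are
illustrative only):
SELECT timestamp, name, value, status FROM measurement_log WHERE timestamp > ? ORDER BY timestamp ASC;
Here the content of the name column decides to which PI tag each row is distributed; the
exact matching rules (tag name or alias) are described later in this manual.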
RxC Distribution
Some laboratory data in RDB tables have a common structure that looks like:
SAMPLETIME,TANK_NAME,TANK_LEVEL,TANK_LEVEL_STATUS,TEMPERATURE_NAME,T
EMPERATURE_VALUE,TEMPERATURE_STATUS,DENSITY_NAME, DENSITY_VALUE,
DENSITY_STATUS, …
…
To efficiently transform and forward this kind of result-set to PI, the interface implements a
strategy that accepts data being structured as follows:
Timestamp,Name1,Value1,Status1,Name2,Value2,Status2,…
Timestamp,Name1,Value1,Status1,Name2,Value2,Status2,…
…
or, in case there is a timestamp column for every name/value/status set:
Timestamp1,Name1,Value1,Status1,Timestamp2,Name2,Value2,Status2,…
Timestamp1,Name1,Value1,Status1,Timestamp2,Name2,Value2,Status2,…
…
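Building on the laboratory table sketched above, a purely illustrative RxC SELECT (see the
section referenced below for the exact column ordering and aliasing rules) could look like:
SELECT sampletime, tank_name, tank_level, tank_level_status, temperature_name, temperature_value, temperature_status FROM lab_results WHERE sampletime > ? ORDER BY sampletime ASC;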
The RxC approach is actually a combination of the Tag Group and Tag Distribution
retrievals. For a more detailed description of this distribution strategy, see section SQL
SELECT Statement for RxC Distribution and a concrete example can be found in Appendix
C: Examples Example 1.5 – RxC Distribution.
Example 2.1d – insert sinusoid values with (string) annotations into RDB table
(event based).
Use of PI SDK
RDBMSPI features implemented through PI SDK are the following:
Writing to and reading from PI annotations
Next to the timestamp, value and status, RDBMSPI interface can write/read also
to PI annotations (see section Data Acquisition Strategies and see the
PI_ANNOTATION keyword).
Replication of PI Batch Database
See chapter PI Batch Database Output.
All the above mentioned features are optional. However, be aware that when
these features are configured on nodes with buffering, PI SDK buffering must be enabled
in order for the PI SDK calls to be buffered properly; PI SDK buffering ships with PI SDK
version 1.4 and above.
Note: In order to activate the PI SDK link, enable PI SDK through PI ICU,
or manually specify the /PISDK=1 start-up parameter.
CAUTION!
When the RDBMSPI interface runs against High Availability PI Servers, SQL queries
containing the annotation column on nodes that do NOT support PI SDK buffering
will deliver events only to the primary member, NOT to the other collective members.
Use of the PI SDK requires that the PI Known Servers Table contain the name of the
PI Server the interface connects to.
UniInt Failover
This interface supports UniInt failover. Refer to the UniInt Failover Configuration section of
this document for configuring the interface for failover.
Chapter 3. Installation Checklist
If you are familiar with running PI data collection interface programs, this checklist helps you
get the Interface running. If you are not familiar with PI interfaces, return to this section after
reading the rest of the manual in detail.
This checklist summarizes the steps for installing this Interface. You need not perform a
given task if you have already done so as part of the installation of another interface. For
example, you only have to configure one instance of buffering for every Interface Node
regardless of how many interfaces run on that node.
The Data Collection Steps below are required. Interface Diagnostics and Advanced Interface
Features are optional.
8. Build input tags and, if desired, output tags for this interface. Important point
attributes and their purposes are:
Location1 specifies the interface instance ID.
Location2 specifies whether only the first row or all rows of the result-set are taken.
Location3 defines the reading strategy.
Location4 specifies the scan class.
Location5 specifies how the data is sent to PI (snapshot, archive write,..).
ExtendedDescriptor stores the various keywords.
InstrumentTag specifies the name of the file that stores the SQL statement(s).
SourceTag for output points.
9. Configure the interface using the PI ICU utility or by manually editing the startup
command file. It is recommended to use the PI ICU whenever possible.
10. Configure performance points.
11. Configure the I/O Rate tag.
12. It is recommended to test the connection between the interface node and the RDB
using any third-party ODBC-based application, for example the ODBC Tester application
from Microsoft or any other tool that works with ODBC data sources. Verify that the
SQL query(ies) are syntactically correct and that they deliver data from/to the above
mentioned third-party ODBC-based application.
13. Take one (ideally simple) SQL statement that has been tested in the third-party
tool and configure an interface tag that will use it.
14. Set or check the interface node clock.
15. Start the interface interactively and confirm its successful connection to the PI Server
without buffering.
16. Confirm that the interface collects data successfully for one tag. If OK, add more.
17. Stop the interface and configure a buffering application (either Bufserv or PIBufss).
When configuring buffering, use the ICU menu item Tools > Buffering… >
Buffering Settings to change the default value (32768) for the Primary and
Secondary Memory Buffer Size (Bytes) to 2000000. This will optimize the
throughput for buffering and is recommended by OSIsoft.
18. Start the buffering application and the interface. Confirm that the interface works
together with the buffering application by either physically removing the connection
between the Interface Node and the PI Server Node or by stopping the PI Server.
19. Configure the Interface to run as a Windows Service. Confirm that the interface
runs properly as a Service.
20. Restart the interface node and confirm that the interface and the buffering application
restart.
Interface Diagnostics
1. Configure Scan Class Performance points.
2. Install the PI Performance Monitor Interface (Full Version only) on the Interface
Node.
3. Configure Performance Counter points.
4. Configure UniInt Health Monitoring points
5. Configure the I/O Rate point.
6. Install and configure the Interface Status Utility on the PI Server Node.
7. Configure the Interface Status point.
Note: The interface is installed along with the .pdb file (file containing the debug
information). This file can be found in the same directory as the executable file or in
%windir%\Symbols\exe. If you rename the rdbmspi.exe to rdbmspi1.exe,
you also have to create/rename the corresponding .pdb file. That is, rdbmspi.pdb
to rdbmspi1.pdb.
Interface Directories
32-bit Interfaces
The [PIHOME] directory tree is defined by the PIHOME entry in the pipc.ini configuration
file. This pipc.ini file is an ASCII text file, which is located in the %windir% directory.
For 32-bit operating systems, a typical pipc.ini file contains the following lines:
[PIPC]
PIHOME=C:\Program Files\PIPC
For 64-bit operating systems, a typical pipc.ini file contains the following lines:
[PIPC]
PIHOME=C:\Program Files (X86)\PIPC
The above lines define the root of the PIHOME directory on the C: drive. The PIHOME
directory does not need to be on the C: drive. OSIsoft recommends using the paths shown
above as the root PIHOME directory name.
The interface install kit will automatically install the interface to:
PIHOME\Interfaces\RDBMSPI\
PIHOME is defined in the pipc.ini file.
Installing Interface Service with PI Interface Configuration Utility
The PI Interface Configuration Utility provides a user interface for creating, editing, and
deleting the interface service:
Service Configuration
Service name
The Service name box shows the name of the current interface service. This service name is
obtained from the interface executable.
ID
This is the service id used to distinguish multiple instances of the same interface using the
same executable.
Display name
The Display Name text box shows the current Display Name of the interface service. If there
is currently no service for the selected interface, the default Display Name is the service name
with a “PI-” prefix. Users may specify a different Display Name. OSIsoft suggests that the
prefix “PI-” be appended to the beginning of the interface to indicate that the service is part of
the OSIsoft suite of products.
Log on as
The Log on as text box shows the current “Log on as” Windows User Account of the
interface service. If the service is configured to use the Local System account, the Log on as
text box will show “LocalSystem”. Users may specify a different Windows User account for
the service to use.
Password
If a Windows User account is entered in the Log on as text box, then a password must be
provided in the Password text box, unless the account requires no password.
Confirm password
If a password is entered in the Password text box, then it must be confirmed in the Confirm
Password text box.
Dependencies
The Installed services list is a list of the services currently installed on this machine. Services
upon which this interface is dependent should be moved into the Dependencies list using the
Add button. For example, if API Buffering is running, then “bufserv” should be selected
from the list at the right and added to the list on the left. To remove a service from the list of
dependencies, use the Remove button, and the service name will be removed from the
Dependencies list.
When the interface is started (as a service), the services listed in the dependency list will be
verified as running (or an attempt will be made to start them). If the dependent service(s)
cannot be started for any reason, then the interface service will not run.
Note: Please see the PI Log and Windows Event Logger for messages that may
indicate the cause for any service not running as expected.
Add Button
To add a dependency from the list of Installed services, select the dependency name, and
click the Add button.
Remove Button
To remove a selected dependency, highlight the service name in the Dependencies list, and
click the Remove button.
The full name of the service selected in the Installed services list is displayed below the
Installed services list box.
Startup Type
The Startup Type indicates whether the interface service will start automatically or needs to
be started manually on reboot.
If the Auto option is selected, the service will be installed to start automatically
when the machine reboots.
If the Manual option is selected, the interface service will not start on reboot, but
will require someone to manually start the service.
If the Disabled option is selected, the service will not start at all.
Generally, interface services are set to start automatically.
Create
The Create button adds the displayed service with the specified Dependencies and with the
specified Startup Type.
Remove
The Remove button removes the displayed service. If the service is not currently installed, or
if the service is currently running, this button will be grayed out.
Help for installing the interface as a service is available at any time with the command:
RDBMSPI.exe -help
Open a Windows command prompt window and change to the directory where the
RDBMSPI.exe executable is located. Then, consult the following table to determine the
appropriate service installation command.
Windows Service Installation Commands on a PI Interface Node or a PI Server Node with
Bufserv implemented
Manual service:                        RDBMSPI.exe -install -depend "tcpip bufserv"
Automatic service:                     RDBMSPI.exe -install -auto -depend "tcpip bufserv"
*Automatic service with service id:    RDBMSPI.exe -serviceid X -install -auto -depend "tcpip bufserv"
Windows Service Installation Commands on a PI Interface Node or a PI Server Node
without Bufserv implemented
Manual service:                        RDBMSPI.exe -install -depend tcpip
Automatic service:                     RDBMSPI.exe -install -auto -depend tcpip
*Automatic service with service id:    RDBMSPI.exe -serviceid X -install -auto -depend tcpip
*When specifying service id, the user must include an id number. It is suggested that this
number correspond to the interface id (/id) parameter found in the interface .bat file.
Check the Microsoft Windows Services control panel to verify that the service was added
successfully. The services control panel can be used at any time to change the interface from
an automatic service to a manual service or vice versa.
What is Meant by "Running an ODBC Application as Windows
Service"?
Consider the following guidelines carefully before configuring the interface:
The interface MUST be capable of connecting to RDB as a console application before
you attempt to run it as a Windows service.
Including this step is vitally important, because running an application as Windows service
adds another level of complexity that can mask other issues that have nothing to do with the
fact that the application is running as a Windows service. Once it has been verified that the
application can run successfully as a stand-alone application, it can be assumed that any
problems that arise when running the application as Windows service have something to do
with the system’s configuration.
The ODBC driver/client and any necessary database client software MUST be on the
system PATH.
On Windows machines, there is a distinction made between system environment variables
and user environment variables. System environment variables are used whenever the
operating system is in use, no matter whether there is a particular user-id logged in or not.
This is important, because if the ODBC driver/client (and database client software, if needed)
is listed on the PATH environment variable as user environment variables, these values will
only be valid as long as the particular user-id for whom they are set is logged in, and not at
system boot-up.
If you are using an ODBC data source to establish the connection, the data source
MUST be a System DSN.
The reasons for this are similar to the first situation – user DSNs can only be accessed by
someone logged into the machine with a particular user-id, and not at system boot-up. System
DSNs are available at boot-up and by any application running under any account.
To check this, open the ODBC Data Source Administrator and make sure that the data source
in question appears on the list on the "System DSN" tab. If it is not there, create one and add
it to this list, and ensure the application points to it.
The latest version of MDAC MUST be on the interface node.
There has been at least one occasion where a customer was able to resolve his issue running
his application as a service with his database by installing the latest MDAC. As of the
authoring of this document, MDAC 2.8 SP1 is the latest version.
For more information regarding Digital States, refer to the PI Server documentation.
The PointSource is a unique, single or multi-character string that is used to identify the PI
point as a point that belongs to a particular interface. For example, the string Boiler1 may be
used to identify points that belong to the MyInt Interface. To implement this, the
PointSource attribute would be set to Boiler1 for every PI point that is configured for the
MyInt Interface. Then, if /ps=Boiler1 is used on the startup command-line of the MyInt
Interface, the Interface will search the PI Point Database upon startup for every PI point that
is configured with a PointSource of Boiler1. Before an interface loads a point, the
interface usually performs further checks by examining additional PI point attributes to
determine whether a particular point is valid for the interface. For additional information, see
the /ps parameter. If the PI API version being used is prior to 1.6.x or the PI Server version
is prior to 3.4.370.x, the PointSource is limited to a single character unless the SDK is
being used.
Note: Do not use a PointSource character that is already associated with another
interface program. However it is acceptable to use the same point source for multiple
instances of an interface.
The PI point is the basic building block for controlling data flow to and from the PI Server. A
single point is configured for each measurement value that needs to be archived.
Point Attributes
Use the point attributes below to define the PI point configuration for the Interface, including
specifically what data to transfer.
Tag
The Tag attribute (or tagname) is the name for a point. There is a one-to-one correspondence
between the name of a point and the point itself. Because of this relationship, PI
documentation uses the terms “tag” and “point” interchangeably.
Follow these rules for naming PI points:
The name must be unique on the PI Server.
The first character must be alphanumeric, the underscore (_), or the percent sign
(%).
Control characters such as linefeeds or tabs are illegal.
The following characters also are illegal: * ’ ? ; { } [ ] | \ ` ' "
Length
Depending on the version of the PI API and the PI Server, this Interface supports tags whose
length is at most 255 or 1023 characters. The following table indicates the maximum length
of this attribute for all the different combinations of PI API and PI Server versions.
PI API PI Server Maximum Length
1.6.0.2 or higher 3.4.370.x or higher 1023
1.6.0.2 or higher Below 3.4.370.x 255
Below 1.6.0.2 3.4.370.x or higher 255
Below 1.6.0.2 Below 3.4.370.x 255
PointSource
The PointSource attribute contains a unique, single or multi-character string that is used to
identify the PI point as a point that belongs to a particular interface. For additional
information, see the /ps command-line parameter and the PointSource section.
PointType
Typically, device point types do not need to correspond to PI point types. For example,
integer values from a device can be sent to floating point or digital PI tags. Similarly, a
floating-point value from the device can be sent to integer or digital PI tags, although the
values will be truncated.
PointType How It Is Used
Digital Used for points whose value can only be one of several discrete states. These
states are predefined in a particular state set (PI 3.x).
Int16 15-bit unsigned integers (0-32767)
Int32 32-bit signed integers (-2147450880 – 2147483647)
Float16 Scaled floating-point values. The accuracy is one part in 32767
Float32 Single-precision floating point values.
Float64 Double-precision floating point values.
String Stores string data of up to 977 characters.
Timestamp The Timestamp point type for any time/date in the range
01-Jan-1970 to 01-Jan-2038 Universal Time (UTC).
For more information about the individual point types, see PI Server Manual.
Location1
This is the number of the interface instance that collects data for this tag. The interface can
run in multiple instances on one node (PC) and thereby distribute the CPU load. In other
words, Location1 allows further division of points within one Point Source. The Location1
parameter should match the /id (or /in) parameter found in the startup file.
Location2
The second location parameter specifies if all rows of data returned by a SELECT statement
should be written into the PI database, or if just the first one is taken (and the rest skipped).
Note: For Tag Groups, the master tag will define this option for all tags in a group.
It is not possible to read only the first record for one group member and all records
for another one.
For Tag Distribution, the interface ALWAYS takes the whole result-set regardless of
the Location2 setting.
Location2 Data Acquisition Strategy
0 Only the first record is valid
(except for the Tag Distribution Strategy and the RxC Strategy)
1 The interface fetches and sends all data in the result-set to PI
Location3
The third location parameter specifies the Distribution Strategy – how the selected data will
be interpreted and sent to PI.
Location3    Data Acquisition Strategy
0            SQL query populates a Single Tag
>0           Tag Groups (Location3 represents the column number of a multiple field query)
-1           Tag Distribution (Tag name or Tag Alias name must be part of the result set)
-2           RxC Distribution (Multiple tag names or tag alias names must be part of the result set)
Location4
Scan-based Inputs
For interfaces that support scan-based collection of data, Location4 defines the scan class
for the PI point. The scan class determines the frequency at which input points are scanned
for new values. For more information, see the description of the /f parameter in the Startup
Command File section.
Location5
Input Tags
If Location5=1 the interface bypasses the exception reporting (for sending data to PI it then
uses the pisn_putsnapshot() function; see the PI API manual for more about this function
call). Out-of-order data always goes directly to the archive through the function
piar_putarcvaluex(ARCREPLACE).
Location5 Behavior
0 The interface does the exception reporting in the standard
way. Out-of-order data is supported, but existing archive
values cannot be replaced; a -109 error will appear in the PI message log.
1 In-order data – the interface gives up the exception reporting –
each retrieved value is sent to PI.
For out-of-order data – the existing archive values (same
timestamps) will be replaced and the new events will be added
(piar_putarcvaluex(ARCREPLACE)).
For PI3.3+ servers the existing snapshot data (the current
value of a tag) is replaced. For PI2 and PI3.2 (or earlier)
systems the snapshot values cannot be replaced. In this case
the new value is added and the old value remains.
Note: When there are more events in the archive at the same
timestamp, and the piar_putarcvaluex(ARCREPLACE) is used
(out-of-order-data), only one event is overwritten – the first
one!
2 If the data comes in-order – the behavior is the same as with
Location5=1
For out-of-order data – values are always added; that is,
multiple values at the same timestamp can occur
(piar_putarcvaluex(ARCAPPENDX)).
Output Tags
Location5 Behavior
-1 In-order data is processed normally.
Out-of-order data does not trigger the query execution.
0 In-order as well as out-of-order data is processed normally.
Note: No out-of-order data handling in the recovery mode.
See chapter RDBMSPI – Output Recovery Modes (Only
Applicable to Output Points)
1 In-order data is processed normally.
Enhanced out-of-order data management.
Note: special parameters that can be evaluated in the SQL
query are available; see the section Out-Of-Order Recovery.
Note: If the query (for input points) contains the annotation column, the exception
reporting will NOT be applied!
InstrumentTag
Length
Depending on the version of the PI API and the PI Server, this Interface supports an
InstrumentTag attribute whose length is at most 32 or 1023 characters. The following table
indicates the maximum length of this attribute for all the different combinations of PI API
and PI Server versions.
PI API PI Server Maximum Length
1.6.0.2 or higher 3.4.370.x or higher 1023
1.6.0.2 or higher Below 3.4.370.x 32
Below 1.6.0.2 3.4.370.x or higher 32
Below 1.6.0.2 Below 3.4.370.x 32
If the PI Server version is earlier than 3.4.370.x or the PI API version is earlier than 1.6.0.2,
and you want to use a maximum InstrumentTag length of 1023, you need to enable the PI
SDK. See Appendix B for information.
The InstrumentTag attribute holds the name of the file containing the SQL statement(s). The
file location is defined by the /SQL=directory_path start-up parameter.
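As an illustration only (the directory, file name, and table below are hypothetical), an
interface started with /SQL=D:\PIPC\Interfaces\RDBMSPI\SQL\ and a tag whose
InstrumentTag is ti101.sql reads its statement from
D:\PIPC\Interfaces\RDBMSPI\SQL\ti101.sql, which could contain for example:
SELECT timestamp, value, 0 FROM measurement_log WHERE timestamp > ? ORDER BY timestamp ASC;
with the corresponding P1=TS placeholder defined in the ExtendedDescriptor of the same tag.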
Note: The referenced file is only evaluated when the pertinent tag gets executed
for the first time, and then, after each point attribute change event. If the SQL
statement(s) needs to be changed (during the interface operation, without the
interface restart), OSIsoft recommends editing any of the PI point attributes – this
action forces the interface to re-evaluate the tag in terms of closing the opened SQL
statement(s) and re-evaluating the new statement(s) again.
ExDesc (ExtendedDescriptor)
Length
Depending on the version of the PI API and the PI Server, this Interface supports an ExDesc
attribute whose length is at most 80 or 1023 characters. The following table indicates the
maximum length of this attribute for all the different combinations of PI API and PI Server
versions.
PI API PI Server Maximum Length
1.6.0.2 or higher 3.4.370.x or higher 1023
1.6.0.2 or higher Below 3.4.370.x 80
Below 1.6.0.2 3.4.370.x or higher 80
Below 1.6.0.2 Below 3.4.370.x 80
If the PI Server version is earlier than 3.4.370.x or the PI API version is earlier than 1.6.0.2,
and you want to use a maximum ExDesc length of 1023, you need to enable the PI SDK. See
Appendix B for information.
The following tables summarize all the RDBMSPI specific definitions that can be specified in
ExtendedDescriptor.
Recognized Keywords in the ExtendedDescriptor
Batch Database Related Keywords in the ExtendedDescriptor
Note: The keyword evaluation is case sensitive. That is, the aforementioned
keywords have to be in capital letters!
Performance Points
For UniInt-based interfaces, the ExtendedDescriptor is checked for the string
“PERFORMANCE_POINT”. If this character string is found, UniInt treats this point as a
Performance Point. See the section called Performance Counters Points.
Trigger-based Inputs
For trigger-based input points, a separate trigger point must be configured. An input point is
associated with a trigger point by entering a case-insensitive string in the
ExtendedDescriptor PI point attribute of the input point of the form:
keyword=trigger_tag_name
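For illustration, assuming the event keyword commonly used by UniInt-based interfaces
(verify the exact keyword against the UniInt Interface User Manual) and a hypothetical
trigger tag name:
event=sinusoid
The input point then executes its SQL statement whenever a new event arrives in the
snapshot of the sinusoid tag.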
Scan
By default, the Scan attribute has a value of 1, which means that scanning is turned on for the
point. Setting the scan attribute to 0 turns scanning off. If the scan attribute is 0 when the
Interface starts, a message is written to the pipc.log and the tag is not loaded by the
Interface. There is one exception to the previous statement.
If any PI point is removed from the Interface while the Interface is running (including setting
the scan attribute to 0), SCAN OFF will be written to the PI point regardless of the value of
the Scan attribute. Two examples of actions that would remove a PI point from an interface
are to change the point source or set the scan attribute to 0. If an interface specific attribute is
changed that causes the tag to be rejected by the Interface, SCAN OFF will be written to the
PI point.
Shutdown
The Shutdown attribute is 1 (true) by default. The default behavior of the PI Shutdown
subsystem is to write the SHUTDOWN digital state to all PI points when PI is started. The
timestamp that is used for the SHUTDOWN events is retrieved from a file that is updated by the
Snapshot Subsystem. The timestamp is usually updated every 15 minutes, which means that
the timestamp for the SHUTDOWN events will be accurate to within 15 minutes in the event of
a power failure. For additional information on shutdown events, refer to PI Server manuals.
Note: The SHUTDOWN events that are written by the PI Shutdown subsystem are
independent of the SHUTDOWN events that are written by the Interface when
the /stopstat=Shutdown command-line parameter is specified.
SHUTDOWN events can be disabled from being written to PI when PI is restarted by setting the
Shutdown attribute to 0 for each point. Alternatively, the default behavior of the PI Shutdown
Subsystem can be changed to write SHUTDOWN events only for PI points that have their
Shutdown attribute set to 0. To change the default behavior, edit the
\PI\dat\Shutdown.dat file, as discussed in PI Server manuals.
Source Tag
Output points control the flow of data from the PI Data Archive to any outside destination,
such as a PLC or a third-party database. UniInt-based interfaces (including RDBMSPI)
use an indirect method for outputting values. That is, there are always two points involved
– the SourceTag and the output tag. The output tag is actually an intermediary through
which the SourceTag's snapshot is sent out. The rule is that whenever a value of the
SourceTag changes, the interface outputs the value and, consequently, the output tag
receives a copy of this event.
That means that outputs are normally not scheduled via scan classes (executed periodically).
Nevertheless, outputting data to RDB on a periodic basis is possible, because the interface
does not mandate that the SQL statements for input points be SELECTs. Input points can
execute INSERTs, UPDATEs, DELETEs, that is, SQL statements that send values to RDB (see
section Output from PI for examples).
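A hedged sketch of such a periodically executed statement (table and column names are
illustrative): an input point assigned to a scan class could carry
INSERT INTO rdb_copy (pi_time, pi_value) VALUES (?, ?);
with P1=TS P2=VL in its ExtendedDescriptor, so that, under the placeholder semantics
described in section SQL Placeholders, the tag's timestamp and value are written to the
RDB table on every scan.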
For outputs triggered by the SourceTag, the trigger tag (SourceTag) can be associated with
any point source, including the point source of the interface it works with (referenced through
the /ps start-up parameter). Also, the point type of the trigger tag does not need to be the
same as the point type of the output tag. The default data type transformation is implemented.
As mentioned in previous paragraphs, an output is triggered when a new value is sent to the
snapshot of a SourceTag. If no error is indicated (during the interface's output operation)
then this value is finally copied to the output point. If the output operation is unsuccessful
(e.g. any ODBC run time error occurred), then an appropriate digital state (Bad Output) is
written to the output point.
Note: In case of an ODBC call failure, the output tag will receive the status Bad
Output.
Unused Attributes
The interface does not use the following tag attributes.
Conversion factor
Filter code
Square root code
Total code
UserInt1, UserInt2
UserReal1, UserReal2
Chapter 8. SQL Statements
As outlined in the previous sections, SQL statements can be defined in ASCII files, or can be
specified directly within the ExtendedDescriptor of a PI tag. Both options are
equivalent. ASCII files are located in the directory pointed to by the
/SQL=directory_path keyword (found among the interface start-up parameters). Names
of these files are arbitrary (the recommended form is filename.SQL) and the ASCII SQL
file is bound to a given point via the InstrumentTag attribute. In case the InstrumentTag
field is empty, the interface looks for a SQL statement definition in the
ExtendedDescriptor (searching for the keyword /SQL). If no statement definition is
found, the point is accepted, but marked inactive. Such a tag would only receive data via Tag
Distribution or Tag Group strategies. Example: SQL statement definition in
ExtendedDescriptor:
/SQL="SELECT Timestamp,Value,0 FROM Table WHERE Timestamp > ? ORDER
BY Timestamp;" P1=TS
Note: Both the ASCII file and the ExtendedDescriptor definition can contain a
sequence of SQL commands separated by semicolons ';'. When the interface works
in the ODBC AUTOCOMMIT mode (the default setting), each SQL statement gets
committed immediately after the execution. Transactions can be enforced by the
/TRANSACT keyword in the ExtendedDescriptor of a given tag; see section
Explicit Transactions for more details.
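A minimal sketch of such a sequence (table names are illustrative) placed in one SQL file or
ExtendedDescriptor definition:
DELETE FROM staging_values WHERE read_flag = 1; SELECT timestamp, value, 0 FROM staging_values WHERE timestamp > ? ORDER BY timestamp ASC;
The statements are executed in the order listed; the question marks are bound through the
Pn placeholder definitions described in section SQL Placeholders.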
Prepared Execution
Once SQL statement(s) have been accepted by the interface (during the interface startup, or
after a point creation/edit), the corresponding ODBC statement handles are internally
allocated and prepared. The prepared statements are then executed whenever the related tag
gets scanned/triggered. This setup is most efficient when statements are executed repeatedly
with only different parameter values supplied. On the other hand, some ODBC drivers are
limited on the number of concurrently prepared ODBC statements (see the section Database
Specifics); therefore, the interface also allows for the direct execution mode as described in
the next paragraph.
Note: Prepared execution is the default behavior. It was the only option in previous
versions of this interface (prior to version 3.0.6).
Direct Execution
The interface will use the direct ODBC Execution (will call the SQLExecDirect() function)
when the start-up parameter /EXECDIRECT is specified. In this mode, the interface allocates,
binds, executes and frees the ODBC statement(s) each time the given tag is examined. Direct
execution has the advantage of not running into the concurrently prepared statement
limitation known for some ODBC drivers. Another situation where direct execution is
useful is with complex stored procedures, because direct execution allows dynamic binding
and effectively examining the different result-sets these stored procedures can generate.
Note: It is highly recommended to test a new query for the interface with third-party
tools, for instance MS Query or ODBC Tester.
SQL Placeholders
The concept of placeholders allows for passing runtime values onto places marked by
question marks '?' within SQL queries. Question mark placeholders can be used in many
situations, for example in the WHERE clause of the SELECT or UPDATE statements, in an
argument list of a stored procedure etc. Placeholders are defined in the tag's
ExtendedDescriptor attribute. The assignment of a placeholder definition to a given
question mark is sequential. This means that the first placeholder definition (P1=…) in the
ExtendedDescriptor refers to the first question mark found in the SQL statement, the second
definition (P2=…) to the second question mark, and so on. The individual Pn definitions are
separated by whitespace characters. The syntax and a short description of the supported
placeholder definitions are shown in the table below (the table is divided into several
sections that correspond to the given placeholder types: PI Snapshot and Archive
placeholders, PI Point Database placeholders, and PI Batch Database placeholders).
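To make the sequential assignment concrete, here is a hedged sketch of an output statement
with three placeholders (the table and columns are illustrative; TS, VL, and SS_I are assumed
to be the snapshot timestamp, value, and integer-status placeholders summarized in the tables
below):
INSERT INTO pi_values (pi_time, pi_value, pi_status) VALUES (?, ?, ?);
ExtendedDescriptor: P1=TS P2=VL P3=SS_I
The first question mark receives the timestamp, the second the value, and the third the
integer status of the triggering event.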
Timestamp, Value, status and Annotation Placeholders Definitions
Placeholder Keywords for Extended Descriptor    Meaning / Substitution in SQL Query    Remark
PI Point Database Placeholders
Pn=AT.TAG             Tag name of the current tag
Pn=AT.POINTTYPE       Point type of the current tag                        Max. 1 character
Pn=AT.POINTSOURCE     Point source of the current tag                      Max. 1023 characters
Pn=AT.LOCATION1       Location1 of the current tag
Pn=AT.LOCATION2       Location2 of the current tag
Pn=AT.LOCATION3       Location3 of the current tag
Pn=AT.LOCATION4       Location4 of the current tag
Pn=AT.LOCATION5       Location5 of the current tag
Pn=AT.SQUAREROOT      Square root of the current tag
Pn=AT.SCAN            Scan flag of the current tag
Pn=AT.EXCDEV          Exception deviation of the current tag
Pn=AT.EXCMIN          Exception minimum time of the current tag
Pn=AT.EXCMAX          Exception maximum time of the current tag
Pn=AT.ARCHIVING       Archiving flag of the current tag
Pn=AT.COMPRESSING     Compression flag of the current tag
Pn=AT.FILTERCODE      Filter code of the current tag
Pn=AT.RES             Resolution code of the current tag                   PI2
Pn=AT.COMPDEV         Compression deviation of the current tag
Pn=AT.COMPMIN         Compression minimum time of the current tag
Pn=AT.COMPMAX         Compression maximum of the current tag
Pn=AT.TOTALCODE       Total code of the current tag
Pn=AT.CONVERS         Conversion factor of the current tag
Pn=AT.CREATIONDATE    Creation date of the current tag
Pn=AT.CHANGEDATE      Change date of the current tag
Pn=AT.CREATOR         Creator of the current tag.                          Max. 8 characters
                      REM: A string containing a number. The number is
                      associated with the PI user name internally on the
                      PI Server.
Pn=AT.CHANGER         Changer of the current tag.                          Max. 8 characters
                      REM: See also AT.CREATOR
Pn=AT.RECORDTYPE      Record type of the current tag
Pn=AT.POINTNUMBER     Point ID of the current tag
Pn=AT.DISPLAYDIGITS   Display digits after decimal point of the current tag
Pn=AT.SOURCETAG       Source tag of the current tag                        Max. 1023 characters
Pn=AT.INSTRUMENTTAG   Instrument tag of the current tag                    Max. 1023 characters
PI Batch Database Placeholders
Usable only beginning with PI Server 3.3 and PI SDK 1.1+
Pn=BA.UNIT             Batch unit                                             Max. 255 characters
Pn=BA.PRID             Batch product identification                           Max. 255 characters
Pn=BA.START            Batch start time
Miscellaneous
Pn="any-string"        Double quoted string                                   Max. 1023 characters
Note: For valid events, SS_C will be populated with the string “O.K.”
'tagname'/VL('*',mode) Placeholder
Placeholder and PI Data Type                                RDB Data Type
Snapshot Placeholders
AT.NEWVALUE, AT.OLDVALUE, "any_string"
AT.DIGSTARTCODE, AT.DIGNUMBER,                              SQL_INTEGER
AT.LOCATION1, AT.LOCATION2, AT.LOCATION3,                   If error SQL_FLOAT
AT.LOCATION4, AT.LOCATION5, AT.SQUAREROOT,                  If error SQL_DOUBLE
AT.SCAN, AT.EXCMIN, AT.EXCMAX, AT.ARCHIVING,
AT.COMPRESSING, AT.FILTERCODE, AT.RES,
AT.COMPMIN, AT.COMPMAX, AT.TOTALCODE,
AT.RECORDTYPE, AT.POINTNUMBER,
AT.DISPLAYDIGITS, AT.USERINT1, AT.USERINT2
AT.TYPICALVALUE, AT.ZERO, AT.SPAN,                          SQL_REAL
AT.EXCDEV, AT.COMPDEV, AT.CONVERS,                          If error SQL_FLOAT
AT.USERREAL1, AT.USERREAL2
PI Batch Database Placeholders
BA.ID, BA.BAID, BA.UNIT, BA.PRODID, BA_GUID,                SQL_VARCHAR
BA_PRODID, BA_RECID, UB_BAID, UB_GUID,
UB_MODID, UB_MODGUID, UB_PRODID,
UB_PROCID, SB_ID, SB_GUID, SB_HEADID
BA.START, BA.END, UB.START, UB.END, SB.START,               SQL_TIMESTAMP
SB.END
Timestamp Format
Even though the timestamp data type implementation is not consistent among various RDB
vendors, the ODBC specification nicely hides these inconsistencies. For an ODBC client, the
timestamp data type is always unified (the ODBC data type marker for a timestamp column is
SQL_TIMESTAMP). Thanks to this unification, the generic ODBC clients can easily work
with many data sources without worrying about the data type implementation details.
The RDBMSPI interface recognizes two places where a timestamp data type can appear,
depending on which kind of query it executes:
Input timestamps - those used in the SELECT's column lists, which are, along
with the value and status, sent to PI
Timestamps used in query parameters (accessed through placeholders).
This chapter briefly describes both of them:
Besides the SQL_TIMESTAMP data type, the interface also accepts numeric timestamps
(double or integer columns whose values represent the number of seconds since 01-Jan-1970 UTC). One of the
advantages/reasons why the numeric timestamps are implemented is that the double/integer
timestamps can go beyond the millisecond precision (while the ODBC SQL_TIMESTAMP
can only store milliseconds). An example of a SELECT with a numeric timestamp can look
like this:
SELECT timestamp-as-number AS PI_TIMESTAMP, value AS PI_VALUE, 0 AS
PI_STATUS FROM table WHERE …;
The interface automatically detects that the timestamp-as-number column is not of
SQL_TIMESTAMP data type and transforms the number to the PI timestamp accordingly.
Note: The timestamp-as-number can only be used in the aliased mode (see
chapter Data Acquisition Strategies – Option 2: Arbitrary Position of Fields in a
SELECT Statement – Aliases). That is, the numeric column needs to be aliased
using the PI_TIMESTAMP keyword.
CAUTION! The numeric timestamps can also only be used in the SELECT lists
and not as placeholders. The following query will therefore NOT be accepted:
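An illustrative sketch of such a rejected query follows; the table and column names are assumptions, and the TS placeholder is bound as a timestamp rather than as a number:
SELECT value AS PI_VALUE, 0 AS PI_STATUS FROM table WHERE timestamp_as_number > ?; P1=TS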
The numeric-timestamp conversion only handles numbers that represent whole seconds since 01-Jan-1970.
That is, the millisecond part is truncated in the conversion!
Timestamp Placeholders – Input Points
LET     Timestamp when the previous tag execution finished. Queries can take some
        time to execute, and LET thus differs from LST.
        When more statements are defined (that is, a batch of SQL statements is
        executed), LET is the time when the last statement finished execution.
        That also means that LET is different for each query.
        Note: LET is not updated if a query fails. In multi-statement query files, LET is
        updated until the first query fails (no further queries in the batch are executed).
ANN_TS  PI annotation in the form of a DateTime. If the tag's snapshot does not have any
        annotation, the value is undefined (NULL).
For output points (points that have the SourceTag attribute populated), the interface
interprets the placeholders as follows:
Timestamp Placeholders – Output Points
You can stop the interface for a while, let the data “be buffered” in RDB tables,
and the first query execution after the interface starts will get all the rows since the
last one retrieved; that is, since the snapshot of the given point.
Internal Interface Snapshot
For input tags, the TS is taken from the Internal Interface Snapshot. See
the table above for more details on this term.
SELECT Statements without Timestamp Column
The interface uses the query execution time for input points when the RDB table
does not have a timestamp column available. If the interface runs on a PI API
node, the employed execution time is synchronized with the PI Server.
An example of such a timestamp-less query is sketched below. Another alternative is to use a
timestamp provided by the RDB: either use the ODBC function {Fn NOW()} or use the
appropriate (database-specific) built-in function, such as Oracle's sysdate.
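The following sketches are illustrative only; the table and column names are assumptions:
SELECT Value, 0 FROM Table1;
SELECT {Fn NOW()}, Value, 0 FROM Table1;
SELECT sysdate, Value, 0 FROM Table1;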
NULL Columns
As NULLs can come in any column of the SELECT list, the interface applies the following
rules before it forwards such a row to PI:
If the timestamp column is NULL, the execution time is used.
If the status column is NULL and the value column is NOT NULL, the value is
valid.
When both the value and the status are NULL (or just the value is NULL), the
No Data digital state is used to indicate that the expected value is
absent.
For Tag Groups and RxC strategies, the /IGNORE_NULLS start-up parameter
allows ignoring values that are NULL.
For further details see section Evaluation of STATUS Field – Data Input.
Note: Location2=0 should be used very rarely. In most cases this parameter
must be set to one.
Option 2: Arbitrary Position of Fields in a SELECT Statement – Aliases
If the RDB supports aliasing, the interface recognizes keywords that map the
columns to the concepts of timestamp, value, status and/or annotation. By naming (aliasing)
the columns, there is no need to keep the fixed column positions explained in the
previous section. The corresponding keywords are the following:
PI_TIMESTAMP, PI_VALUE, PI_STATUS, PI_ANNOTATION
Using the predefined keywords, the following query:
SELECT Timestamp AS PI_TIMESTAMP, Value AS PI_VALUE, Status AS
PI_STATUS, Annotation AS PI_ANNOTATION FROM…
is equivalent to:
SELECT Value AS PI_VALUE, Status AS PI_STATUS, Timestamp AS
PI_TIMESTAMP, Annotation AS PI_ANNOTATION FROM …
Note: When the columns in the SELECT list are aliased, the status column is not
mandatory. Therefore, a valid SELECT can have the following form:
SELECT Value AS PI_VALUE FROM Table …;
Note: When the distributor tag is event based, Location4 of the target tags must
be zero.
Note: String comparison of data in the tag name column against the real PI tag
names is case INSENSITIVE, while searching against the ALIASes is case
SENSITIVE.
Because the distributor tag is timestamped with the current time, be aware that you cannot use
the TS placeholder in the same way as in queries providing data to single tags. Remember
that when scanning a table periodically, the goal is to retrieve just those rows
that have been added to the table since the last scan. As a work-around, the following
scenarios (for the distribution strategy) can be considered:
1) Use/create an additional column in the queried table that is UPDATEd after each
scan. That is, the next statement (after the SELECT) has to be an UPDATE that
marks each row as already sent to PI. The WHERE condition of the SELECT query
then filters out the marked-as-read rows (a minimal sketch follows after this list).
See this example available in Appendix B: Examples
Example 3.4c – Tag Distribution with Auxiliary Column – rowRead
2) A variation of the above is to create an additional table in RDB consisting of two
columns – TagName and Time. The interface has to UPDATE this table after each
scan with the most recent times of those TagNames that have just been sent to PI. This
table thus remembers the most recent timestamps (snapshots) of the involved tag names.
The actual SELECT then has to be a JOIN between the real data table and this
additional “snapshot table”. In other words, the join delivers only those rows (from the data
table) whose time column is newer than what is recorded in this helper table.
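A minimal sketch of scenario 1, assuming a data table named Table1 with a Name column for the tag names and an auxiliary rowRead marker column (all names are illustrative):
SELECT Timestamp AS PI_TIMESTAMP, Name AS PI_TAGNAME, Value AS PI_VALUE, 0 AS PI_STATUS FROM Table1 WHERE rowRead = 0;
UPDATE Table1 SET rowRead = 1 WHERE rowRead = 0;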
/ALIAS
Since names in RDB do not have to correspond exactly to PI tag names, the optional keyword
/ALIAS (in the ExtendedDescriptor) is supported. This allows mapping of PI points to
rows retrieved from the relational database where there is no direct match between the PI tag
name and a value obtained from a table. Please note that this switch makes the comparison
case SENSITIVE.
See this example available in Appendix B: Examples
Example 3.4b – Tag Distribution, Search According to Tag's ALIAS Name
Note: In case /ALIAS (in the ExtendedDescriptor) is set, the target points
do not have to be in the same scan class as the Distributor tag. This is different from
the scenario when no /ALIAS is defined; then the target tags are expected to share
the same scan class. See also the /DOPS description for an option to distribute events
outside the specified PointSource.
Note: If the TagName column in RDB has a fixed length (the CHAR(n) data type),
the interface tries to automatically strip the trailing and leading spaces for the
comparison. Another way can be to convert the TagName column via the
CONVERT() scalar function or CAST it to SQL_VARCHAR. SELECT Timestamp,
{Fn CONVERT(PI_TagName, SQL_VARCHAR)},…
Option 2: Arbitrary Position of Fields in SELECT Statement – Aliases
The interface then recognizes the column meaning by the following known keywords:
PI_TIMESTAMP, PI_TAGNAME, PI_VALUE, PI_STATUS, PI_ANNOTATION:
SELECT Timestamp AS PI_TIMESTAMP, Name AS PI_TAGNAME …
Note: The @rows_dropped variable only works in the Tag Distribution strategy.
That is, it is not implemented for the RxC Distribution (see below).
In case there is just one timestamp for all the entries in a row, the keyword
PI_TIMESTAMP can be used (Example 3.6b – RxC Distribution Using
PI_TIMESTAMP Keyword)
Location3 = -2
Note: The /EVENT=TagName keyword should be separated from the next keyword
definition by the comma ',' like: /EVENT=sinusoid, /SQL="SELECT …;"
Note: If no timestamp field is provided in the query, the retrieved data will be
stored in PI using the event timestamp rather than the query execution time.
As of RDBMSPI 3.11, conditions can be placed on trigger events. Event conditions are
specified in the ExtendedDescriptor as follows:
60
/EVENT= 'tagname' condition
The trigger tag name must be in single quotes. For example:
/EVENT= 'Sinusoid' Anychange
will trigger on any event coming from tag 'Sinusoid' as long as the new event differs from
the previous one. The initial event is read from the snapshot. For a complete list of available
keywords see the ExDesc definition.
Note: Any requirement that goes beyond that needs more than one tag.
Previous sections of this manual demonstrate that the interface requires both value and status
in the SELECT field list. The following paragraphs will explain how these two fields make it
into various PI point types.
Input Field SQL Data Type PI Point Type
Note: The full conversion of all possible data types supported in SQL to PI data
types goes beyond the ability of this interface. To allow additional conversions, use
the ODBC CONVERT() function described below or use the ANSI CAST().
Note: More information about the CONVERT() function can be gained from the
ODBC.CHM file, which comes with the MSDN Library or from the documentation of a
certain ODBC driver.
The ANSI CAST() function has similar functionality to CONVERT(). As CAST() is not
ODBC-specific, RDBs that implement it accept the following syntax:
SELECT Timestamp, CAST(Value AS Varchar(64)), Status FROM…
Note: More information about the CAST() function can be found in any SQL
reference, for example, Microsoft SQL Server Books OnLine.
The interface translates the status column into the PI value-status concept as described in the
table below. For a string field, the verification is more complex, and in order to extend the
flexibility of the interface, two areas in the PI System Digital Set table can be defined.
The first area defines the success range and the second one the bad range. These ranges are
referenced via the following interface start-up parameters: /SUCC1, /SUCC2, /BAD1,
/BAD2; see chapter Startup Command File for their full description.
Status Field Interpretation
Note: String comparisons in /SUCC and /BAD ranges are case INSENSITIVE!
Note: For a Digital PI tag any other numeric status but zero means Bad Input.
Note: There can be multiple statements per tag, but there can only be one
SELECT in such a batch.
Note: The interface only allows statements containing one of the following SQL
keywords: SELECT, INSERT, UPDATE, DELETE, {CALL} ; any proprietary language
construction (T-SQL, PL/SQL,…) is NOT guaranteed to work. For example, the MS
SQL Server's T-SQL is allowed with the MS SQL ODBC driver, but similar
construction fails when used with an Oracle's ODBC.
The following example will work with MS SQL; nevertheless, other ODBCs can
complain:
if(?<>0)
SELECT Timestamp,Value,0 FROM Table1
else
SELECT Value,0 FROM Table1; P1=SS_I
The preferred way is to use stored procedures for any kind of code flow control.
Explicit Transactions
Transaction control is configurable on a per-tag basis by specifying the /TRANSACT keyword
in the ExtendedDescriptor. The interface then switches off the default AUTOCOMMIT
mode and explicitly starts a transaction. After the execution, the transaction is either
COMMITted or ROLLed BACK (in case of a run-time error).
For multi-statement queries, the batch gets interrupted after the first runtime error and
ROLLed BACK.
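For illustration, an ExtendedDescriptor requesting explicit transaction handling might look as follows; the table and column names are assumptions:
/TRANSACT /SQL="INSERT INTO Table1 (SampleTime, SampleValue) VALUES (?, ?);" P1=TS P2=VL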
Stored Procedures
As already stated in the above paragraphs, the interface offers the possibility of executing
stored procedures. Stored procedure calls can use placeholders (input parameters) in their
argument lists and they behave the same way as standard queries do. The syntax for the
procedure invocation conforms to the rules of SQL extensions defined by ODBC:
{CALL procedure-name[([parameter][,[parameter]]…)]}
A procedure can have zero or more input parameters; output parameters are not
supported. Stored procedures are therefore mainly used for executing more complex
actions that cannot be expressed in the limited SQL syntax the interface supports.
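As an illustrative sketch (the procedure name is hypothetical), a call with two input parameters filled from placeholders could look like:
{CALL sp_store_value(?, ?)}; P1=TS P2=VL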
Note: Some RDBMSs like MS SQL Server or IBM DB2 7.01 allow for having the
SELECT statement inside a procedure body. The execution of such a procedure
then returns the standard result-set, as if it were generated via a simple SELECT. A
stored procedure can thus be used to read data out of the relational database into
PI. For information on how to construct a stored procedure on Oracle so that it
behaves similarly (in terms of returning a result-set) as stored procedures on MS
SQL Server or DB2, refer to section Oracle 7.0; Oracle 8.x, 9i, 10g, 11g; Oracle
RDB.
Output from PI
General Considerations
Output points control the flow of data from the PI Server to any destination that is external to
the PI Server, such as a PLC or a third-party database. For example, to write a value to a
register in a PLC, use an output point. Each interface has its own rules for determining
whether a given point is an input point or an output point. Among OSIsoft interfaces, there is
no de facto PI point attribute that distinguishes a point as an input point or an output point.
Outputs are triggered event based for UniInt-based interfaces; that is, outputs are not
scheduled to occur on a periodic basis.
The above paragraph discussed outputs from PI in general. For the RDBMSPI interface, there are
two mechanisms for executing an output query:
Through exceptions generated by the SourceTag
Periodically executing statements like INSERT, UPDATE, DELETE or
{CALL} used with input points
The first two examples referenced below INSERT a record into an RDB table event based
and scan based, respectively; the third example UPDATEs an existing record in a given table (event
based).
See these examples available in Appendix B:
Example 2.1a – insert sinusoid values into table (event based)
Example 2.1b – insert sinusoid values into table (scan based)
Example 3.10 – Event Based Output
Note: The output point itself is populated with a copy of the SourceTag data
provided the output operation was successful. Otherwise, the output tag will receive a
digital state of Bad Output.
DIGITAL Tags
Digital output tag values are mapped only to RDB string types. This means that the
corresponding field data type in the table must be a string; otherwise an explicit conversion is
required: CAST(value_exp AS data_type). The following table shows the assignment of the
value placeholders (VL, SS_I, SS_C) for a Digital tag:
Digital Output Tags Can only be Output to RDB Strings
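An illustrative sketch of an event-based output query for a Digital source tag, writing the state string into a character column; the table and column names are assumptions, and VL is assumed to carry the digital state string as described above:
INSERT INTO DigitalLog (EventTime, StateText) VALUES (?, ?); P1=TS P2=VL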
Global Variables
Chapter 9. Recording PI Point Database Changes
The interface can record changes made to the PI Point Database. The concept is similar to the
regular output point handling. The difference is that the managing tag is not triggered by a
snapshot event, but by a point attribute modification.
Note: The interface stores the number of executed queries into the managing
tag. In short, nothing is stored when a point was edited but no real attribute
change was made.
Note: By default, the interface checks for attribute changes every 2 minutes. It can
therefore happen that when an attribute is changed twice within 2 minutes, ending
with its original value, the interface will NOT record this change. Since RDBMSPI
3.11, the two-minute interval can be changed by specifying the start-up parameter
/UPDATEINTERVAL.
Chapter 10. PI Batch Database Output
The PI Batch Database can be replicated to RDB tables periodically. That is,
the RDBMSPI interface executes the parameterized SQL statements (INSERTs) so that a new
row is added to an RDB table only when a new batch/unit-batch/sub-batch is closed (that is,
it has a non-zero end time) in the PI Batch Database.
The batch/unit-batch/sub-batch managing tags (tags defining the INSERT statements) are
recognized by the presence of any of the PI Batch Database related placeholders (see section
SQL Placeholders ). The managing tags are configured as standard input tags (Location4
defines the scan frequency, etc.) and an occurrence of the 'BA.*' placeholder definition (in the
ExtendedDescriptor) marks these tags “batch replicators”; see the examples referenced in
the following paragraphs for more details.
Example 5.1 – Batch Export (not requiring Module Database) demonstrates the syntax
needed for the replication of the PI Batch Database (old batches), using the managing point
defining a parameterized INSERT. The interface periodically asks for new batches since the
previous scan and only the closed batches (as mentioned in the previous paragraph - batches
with non-zero end time) are INSERTed into the RDB table.
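An illustrative sketch of such a parameterized INSERT follows; the table and column names are assumptions:
INSERT INTO BatchTable (BatchID, StartTime, EndTime) VALUES (?, ?, ?); P1=BA.ID P2=BA.START P3=BA.END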
Note: The input point carrying the INSERT statement receives the number of
successfully INSERTed batches after each scan. It is therefore advisable to define
this point as numeric.
Note: For the PI Batch Database with Module Database output (new batches),
the PI SDK connection needs to be activated. Enable PI SDK through PI ICU, or set
the /PISDK=1 start-up parameter manually.
PI SDK object model divides the PI Batch Database into several object collections.
The simplified model is shown in the following figure:
(A more detailed description of each object can be found in the PI SDK Manual.)
The RDBMSPI Interface currently replicates these objects from the three main collections
found in the PI Batch Database. These collections are:
PIBatchDB stores PIBatch objects
PIUnitBatches stores PIUnitBatch objects
PISubBatches stores PISubBatch objects
Each aforementioned object has a different set of properties. Moreover, it can reference its
parent object (object from the superior collection) via the GUID (Global Unique Identifier) –
16 byte unique number. This GUID can be used as a key in RDB tables to relate
the PIUnitBatch records to their parent PIBatch(es) and PISubBatches to their parent
PIUnitBatch(es). The structure of an RDB table should therefore reflect the available
properties on a given object. The following tables enumerate the properties of the batch,
unitbatch and subbatch objects, their data types and the corresponding placeholders:
PI Batch Object
Property        Data Type                              Placeholder
Recipe          Character string up to 1024 bytes      BA.RECID
Unique ID       Character string, 16 bytes             BA.GUID
Start Time      Timestamp                              BA.START
End Time        Timestamp                              BA.END
PIUnitBatch Object
PISubBatch Object
Note: Both batches and unit-batches must be closed. This means, they must have
the non-empty 'end time' property. The interface will not store the open batches or
unit-batches. Exceptions to this rule are sub-batches. Sub-batches are sent to an
RDB table when their parent unit-batch closes.
Three tables are required for the data extracted from the PI Batch database.
Example of RDB Tables Needed for PI Batch Database Replication (new batches)
Table Structure for PIBatch Objects:
BA_START (SQL_TIMESTAMP)
BA_END (SQL_TIMESTAMP)
BA_ID (SQL_VARCHAR)
BA_PRODUCT (SQL_VARCHAR)
BA_RECIPE (SQL_VARCHAR)
BA_GUID (SQL_CHAR[37])
Table Structure for PIUnitBatch Objects:
UB_START (SQL_TIMESTAMP)
UB_END (SQL_TIMESTAMP)
UB_ID (SQL_VARCHAR)
UB_PRODUCT (SQL_VARCHAR)
UB_PROCEDURE (SQL_VARCHAR)
BA_GUID (SQL_CHAR[37])
UB_MODULE (SQL_VARCHAR)
UB_GUID (SQL_CHAR[37])
Table Structure for PISubBatch Objects:
SB_START (SQL_TIMESTAMP)
SB_END (SQL_TIMESTAMP)
SB_ID (SQL_VARCHAR)
SB_HEAD (SQL_VARCHAR)
UB_GUID (SQL_CHAR[37])
SB_GUID (SQL_CHAR[37])
The GUID columns form the keys that relate these three tables: BA_GUID links the
PIUnitBatch records to their parent PIBatch, and UB_GUID links the PISubBatch records to
their parent PIUnitBatch. Sub-batches can form their own tree structure, allowing a sub-batch
object to contain a collection of further sub-batches. To express this hierarchy in one table,
the interface constructs the sub-batch name so that it contains the sub-batches positioned
above it, divided by backslashes '\' (an analogy with the file and directory structure). In our
case the SB_ID column will contain items like:
…
PIUnitBatch_01\SB_01\SB_101
PIUnitBatch_01\SB_01\SB_102
…
PIUnitBatch_01\SB_01\SB_10n
…
Because sub-batches have different properties than their parent objects (unit-batches), an
independent INSERT is needed. Moreover, the unit-batch managing tag needs to know the
sub-batch’s managing tag name. A special keyword /SB_TAG='subbatch_managing_tag'
must therefore be defined in the ExtendedDescriptor of the unit-batch’s managing tag.
Refer to these examples that replicate all batches, unit-batches plus their sub-batches over the
period of the last 10 days:
Example 5.2a – Batch Export (Module Database required)
Example 5.2b – UnitBatch Export (Module Database required)
Example 5.2c – SubBatch Export (Module Database required)
Chapter 11. RDBMSPI – Input Recovery Modes
The primary task of the RDBMSPI interface is on-line copying of data from relational
databases to the PI archive. For this, users specify SQL queries (mostly SELECTs) and the
task of the interface is to deliver the newly stored rows to PI tags. History
(input) recovery, on the other hand, means copying larger amounts of data (from classic RDBs or other
historians) to PI. This task is usually not periodic; that is, it is a one-time action only.
The interface must thus address different issues; mainly, it must divide the time interval for which
the data needs to be copied into smaller, configurable chunks. There are many reasons for this,
above all avoiding high memory consumption, improving performance and increasing the
robustness of the recovery process. The following paragraphs describe the settings
which the interface (since version 3.17) supports.
In the simplest possible scenario, the history recovery is actually covered by the most
common query customers have:
SELECT Timestamp,Value,0 FROM Table WHERE Timestamp > ? ORDER BY
Timestamp ASC; P1=TS
Provided the amount of data in RDB between the snapshot and the current time is of
reasonable size, the query above simply fills in the missing events in PI archive during the
first query execution. The interface will then continue executing the SELECT (in on-line
mode) and the query will return only the newly inserted rows.
As stated at the beginning of this section, in case the amount of data in RDB is big, it is
desirable to divide the time interval into chunks in order to avoid potential high resource
utilization (CPU, memory, etc.) on the interface node as well as on the RDB side. For this,
the interface offers two switches: /RECOVERY_TIME and the new start-up parameter
/RECOVERY_STEP. Both parameters accept various input formats.
Their definitions and short description can be found in the following table:
Input History Recovery startup switches and their definitions
Note: Valid start and end time definition syntax used in the /RECOVERY_TIME
keyword are strings, which represent:
- absolute times containing some fields of DD-MMM-YY hh:mm:ss
- relative times in +|- n d|h|m|s
- names of the PI tags
In addition, an absolute time can be specified with a word (TODAY, YESTERDAY,
SUNDAY, MONDAY, …), an asterisk for the current time, or a combination of
one of the word absolute times and a relative time. See the Data Archive Manual for
more information on the time string format.
See also the description of /RECOVERY_TIME and /RECOVERY_STEP in section
Command-Line Parameters.
A suitable SQL statement (for the input history recovery) must be of the following pattern:
SELECT Timestamp, Value, 0 FROM Table
WHERE Timestamp > ? AND Timestamp <= ?
ORDER BY Timestamp ASC;
P1=TS P2=TE
That is, a query that allows binding the start and end times of the recovery steps is expected.
That does not mean the query must be exactly as stated above. In fact, it can be any query
that delivers suitable result sets, but it must contain at least two timestamp placeholders
defined by TS and TE. The query above actually resembles the most often used type of
SQL statement, which delivers an ordered time series since the last scan.
Provided that /RECOVERY_TIME and /RECOVERY_STEP are specified, the interface will
automatically populate the placeholders with the appropriate times and will incrementally
process the historical data. When the end time is reached, the interface process will exit.
Exiting only occurs when the /RECOVERY_TIME definition also contains an end time.
Configuration Example for Input History Recovery
Interface start-up file:
RDBMSPI.exe /PS=RDBMSPI /F=10 /DSN=SQLServer /lb ... /RECOVERY_TIME="01-Jan-05,*" /RECOVERY_STEP=10d /RECOVERY=TS
SQL Query (using the distributor strategy):
SELECT Timestamp, Name, Value, 0 FROM Table WHERE Timestamp > ? AND
Timestamp <= ? ORDER BY Timestamp ASC; P1=TS P2=TE
Explanation:
After the interface starts, all input points’ queries will be executed on the interval from
01-Jan-2005 until the current time. The recovery step will be 10 days. That is, the placeholders will
be populated as follows:
1. Step: TS=01-Jan-2005 00:00:00 TE=10-Jan-2005 00:00:00
2. Step: TS=10-Jan-2005 00:00:00 TE=20-Jan-2005 00:00:00
3. …
When the current time is reached, the interface process will exit. The interface specific log
will contain the following printout:
[INFO]: Input recovery on the interval
<01-Sep-2009 00:00:00.000 , 22-Oct-2009 10:27:46.000>
with step 864000 sec started.
[DEB-1]: Point – Recovery_Distributor : SQL statement(s) : SELECT
DateTime AS PI_TIMESTAMP, 'Recovery_Target_1' AS PI_NAME, value AS
PI_VALUE, 0 AS PI_STATUS FROM History WHERE DateTime > ? AND
DateTime <= ? ORDER BY DateTime;
[INFO]: Processing the input recovery interval
<01-Sep-2009 00:00:00.000 , 11-Sep-2009 00:00:00.000>.
[INFO]: Processing the input recovery interval
<11-Sep-2009 00:00:00.000 , 21-Sep-2009 00:00:00.000>.
…
[INFO]: Processing the input recovery interval
<21-Oct-2009 00:00:00.000 , 22-Oct-2009 10:27:46.000>.
Thu Oct 22 10:28:02 2009 [INFO]: Input recovery completed.
Thu Oct 22 10:28:02 2009 [INFO]: Interface exiting.
Recovery TS
This recovery mode is specified by the /RECOVERY=TS start-up parameter. Whether the
recovery handles out-of-order data or not depends on the Location5 attribute of an output
tag:
Location5=0
Recovery starts at the snapshot timestamp of the output tag (or at the recovery start-time if that is
later) and only in-order data will be recovered.
Location5=1
Recovery begins at the recovery start-time (specified by the /RECOVERY_TIME start-up
parameter) and out-of-order events can be covered as well.
The /OOO_OPTION then determines how the out-of-order events are handled.
Note: During the recovery, the snapshot placeholders are populated with historical
(archive) values. In case a placeholder is defined as Pn='tagname'/VL, the
interpolated archive values are taken during the recovery.
Out-Of-Order Recovery
For output points that have Location5=1, the interface compares the source with the output
tag values and detects the archive events that were added, replaced or deleted. This
comparison is done immediately after the interface starts, provided the comparison time-
window has been specified; e.g. /RECOVERY_TIME='*-10d'.
The following two figures depict the situation before and after the out-of-order recovery.
(Figure: two values were added to the source tag while the interface was stopped; /RECOVERY_TIME = *-1d.)
(Figure: OutputTag (blue) synchronized with SourceTag (green); the source tag values appear in the output tag after recovery.)
The Out-Of-Order recovery can be further parameterized through another start-up parameter
/OOO_OPTION. This parameter defines a combination of three keywords: append,
replace, and remove.
Keywords are separated by commas: /OOO_OPTION="append,replace".
Depending on these keywords, the interface only takes those actions for which the
corresponding options are set. With the setting shown above, even if some SourceTag events
were deleted, the interface will not synchronize the deletions with the output tag (in terms of
deleting the corresponding output tag entries).
The comparison results are signaled to the user via the following (Boolean) variables:
@source_appended, @source_replaced, and @source_removed, so that they can be
used in an 'IF' construct that the interface is able to parse.
For example:
IF @source_appended INSERT INTO table (…);
IF @source_replaced UPDATE table SET column1 = ? …;
IF @source_removed DELETE table WHERE column1 <= ?;
Usually new SourceTag events come in in-order so that only the @source_appended
variable is set to true (the others remain false).
The table above describes the recovery-relevant settings that are valid only when the interface
starts (off-line mode). During normal operation (on-line mode), the interface handles the
out-of-order events as described below:
Location5=1 also supports out-of-order recovery in the on-line mode; when out-of-order
SourceTag events are detected, either @source_appended or
@source_replaced is set to true (depending on whether the SourceTag event was added or
replaced).
Note: A new event that has the same timestamp as the current snapshot is
considered an out-of-order event too!
Note: If a SourceTag value is edited but remains the same, the
@source_replaced variable stays False.
Recovery SHUTDOWN
Shutdown recovery is the same as 'TS', if the output tag's snapshot value is either Shutdown
or I/O Timeout. If the output tag snapshot does not contain these digital states, NO recovery
takes place.
Input Recovery
When the recovery time-window definition contains both the start and the end times
(separated by a comma), for instance /RECOVERY_TIME="01-Jan-08,01-Jan-09", all
input points will be processed for the defined time interval and then the interface will end (the
interface process will exit).
Output Recovery
During recovery, the interface retrieves and reprocesses the compressed data from the PI
Archive (as opposed to executing the output points' events coming from the event queue
during the interface's normal operation). When the recovery time-window contains both
the start and the end times (separated by a comma), for example
/RECOVERY_TIME="*-1d,*", all output points will be processed for the defined time
interval and then the interface stops (exits).
In the Pure Replication Mode one can schedule the interface execution via the Windows
scheduling service (AT) and let the PI Archive (compressed) data replicate in a batch manner.
Note: Due to the different nature of the two recovery modes, it is not recommended to
run input and output recovery with one interface instance!
For exact specification of all recovery related parameters, see section Startup Command File.
Note: The interface tries to re-create the ODBC link every minute. This time
interval is hardcoded and cannot be changed.
Note: Since version 3.12, for the output tags, the placeholder values are retained
and the query that discovered the broken ODBC link is executed again when the
connection to the RDB is re-established.
Note: During the re-connection attempts (1-minute intervals), the interface does NOT
empty the update-event queue (for output tags). Some events can thus be lost due
to queue overflow. Should such a situation happen, there is currently NO
automatic recovery action taken. Only a manual solution is possible: set up the
corresponding /OOO_OPTION recovery parameters and re-process the period
when the interface was disconnected from the RDBMS by restarting the interface.
See section RDBMSPI – Output Recovery Modes (Only Applicable to Output
Points). See the PI Server Manual for details on how the event queue size can be
increased.
When the ODBC link is broken, and the PI System remains available, the interface normally
writes the I/O Timeout digital state to all input points. This can be avoided by setting the
interface start-up parameter /NO_INPUT_ERROR.
PI Connection Loss
During a PI API or PI SDK connection loss, neither the snapshot placeholders (TS, VL,
SS_I, …) nor the attribute placeholders (AT.xxx) can be refreshed. Corresponding error
messages are sent to the interface log-file and the interface enters a loop where it tries to re-
connect to PI at one-minute intervals. The PI Server availability check is made before each
scan class is processed.
Note: In case the interface runs as a console application (and the
/user_pi= and/or /pass_pi= startup parameters are specified), the login dialog pops
up waiting for the user to re-enter the authentication information.
Chapter 14. Result Variables
Send Data to PI
The interface sets the following variables according to the result of the write-to-PI action:
@write_success and @write_failure.
A failure sets @write_success to false and @write_failure to true, and vice versa.
Both variables are accessible to users, as the example below indicates:
SELECT Timestamp, Value,0 FROM Table WHERE Timestamp > ? ORDER BY
Timestamp;
IF @write_success DELETE FROM Table WHERE Timestamp <= ?;
That means the rows in the first table can be safely deleted, because they have already been
copied to PI.
Note: The @write_success variable is only true if ALL SELECTed rows were
successfully sent to the corresponding PI tags. Data that have no corresponding PI
tag (e.g. in the tag distribution strategy, a row that references a nonexistent
tag and thus cannot be sent to PI) do not count as failures. To achieve
this, consider the @rows_dropped variable in section SQL SELECT Statement for
Tag Distribution.
Note: The implemented IF does NOT support the ELSE part and only covers one
statement after the variable.
@query_success and @query_failure
The @query_success is set to true (and @query_failure to false), when the previous
query was successfully executed and data fetched. The variables can be checked by an 'IF'
construct; for example:
SELECT Timestamp, Value,0 FROM Table WHERE Timestamp > ? ORDER BY
Timestamp;
IF @query_failure INSERT INTO Table2 (Timestamp,Tag, error_message)
VALUES (?,?,'Query failed');
Chapter 15. RDBMSPI – Redundancy Considerations
Note: The interface supports the server level failover only when configured with the
Microsoft Native Client ODBC driver against the mirrored SQL Server 2005 or later!
See the corresponding ODBC driver description for more.
From the interface perspective, the only requirement is to specify the Mirror server name in
the DSN configuration, as shown in the following figure:
In case the ODBC link gets disconnected, the reconnection attempt will be redirected to the
second (mirrored) SQL Server.
Command-line parameters can begin with a slash ‘/’ or with a hyphen ‘-‘. For example, the
/ps=M
or
-ps=M
command-line parameters are equivalent.
For Windows, command file names have a .bat extension. The Windows continuation
character (^) allows for the use of multiple lines for the startup command. The maximum
length of each line is 1024 characters (1 kilobyte). The number of parameters is unlimited,
and the maximum length of each parameter is 1024 characters.
The PI Interface Configuration Utility (PI ICU) provides a tool for configuring the Interface
startup command file.
The PI Interface Configuration Utility provides a graphical user interface for configuring PI
interfaces. If the Interface is configured by the PI ICU, the batch file of the Interface
(rdbmspi.bat) will be maintained by the PI ICU and all configuration changes will be kept
in that file and the module database. The procedure below describes the necessary steps for
using PI ICU to configure the RDBMSPI Interface.
From the PI ICU menu, select Interface, then New Windows Interface Instance from EXE…,
and then Browse to the rdbmspi.exe executable file. Then, enter values for Host PI
System, Point Source and Interface ID#. A window such as the following results:
The text entered in “Interface name as displayed in the ICU (optional)” will have PI- prepended
to it, and this will be the display name in the services menu.
Click on Add.
The following display should appear:
Note that in this example the Host PI System is MKELLYD630W7. To configure the
interface to communicate with a remote PI Server, select ‘Interface => Connections…’ item
from PI ICU menu and select the default server. If the remote node is not present in the list of
servers, it can be added.
Once the interface is added to PI ICU, near the top of the main PI ICU screen, the Interface
Type should be rdbodbc. If not, use the drop-down box to change the Interface Type to be
rdbodbc.
Click on Apply to enable the PI ICU to manage this copy of the RDBMSPI Interface.
Startup Command File
The next step is to make selections in the interface-specific tab (i.e. “RDBODBC”) that allow
the user to enter values for the startup parameters that are particular to the RDBMSPI
Interface.
Since the RDBMSPI Interface is a UniInt-based interface, in some cases the user will need to
make appropriate selections in the UniInt page. This page allows the user to access UniInt
features through the PI ICU and to make changes to the behavior of the interface.
To set up the interface as a Windows Service, use the Service page. This page allows
configuration of the interface to run as a service as well as starting and stopping of the
interface. The interface can also be run interactively from the PI ICU. To do that, select
the Interface menu item and then Start Interactive.
For more detailed information on how to use the above-mentioned and other PI ICU pages
and selections, please refer to the PI Interface Configuration Utility User Manual. The next
section describes the selections that are available from the RDBODBC page. Once selections
have been made on the PI ICU GUI, press the Apply button in order for PI ICU to make these
changes to the interface’s startup file.
Since the startup file of the RDBMSPI Interface is maintained automatically by the PI ICU,
use the RDBODBC page to configure the startup parameters and do not make changes in the
file manually. The following is the description of interface configuration parameters used in
the PI ICU Control and corresponding manual parameters.
The PI Interface for Relational Database (RDBMS via ODBC) - ICU Control has four tabs. A
yellow text box indicates that an invalid value has been entered, or that a required value has
not been entered.
Startup Parameters
File Locations
DSN Settings
DSN:
Data Source Name (/DSN=<DSN name>, Required)
Username:
Username for access to RDB (/USER_ODBC=<username>, Required)
Password:
Password for access to RDB. Once this has been entered and saved, the password will be
written to an encrypted password file found in the directory pointed to by the
/Output=<path> command-line parameter. During the save operation this field will be
changed from asterisks to the string “* Encrypted *” to indicate that a valid encrypted
password file has been saved. The Reset button can be used to delete the encrypted
password file and allow a new password to be entered. (/PASS_ODBC=<password>,
Optional)
Start Code:
Enter the starting location of the success range in the system digital state table. (/SUCC1=#, Optional)
End Code:
Enter the ending location of the success range in the system digital state table. (/SUCC2=#, Optional)
Start Code:
Enter the starting location of the bad range in the system digital state table. (/BAD1=#, Optional)
End Code:
Enter the ending location of the bad range in the system digital state table. (/BAD2=#, Optional)
Recovery Parameters
Recovery Mode:
Select the output recovery mode; possible options are No Recovery and TimeStamp. If
TimeStamp is selected, then select the type of processing: Input or Output. (/RECOVERY=c
where c = TS (Timestamp) or NO_REC (No Recovery); Default=NO_REC, Optional)
Input Processing
Output Processing
Optional Parameters
Laboratory Caching
LaBoratory. Events are written directly to the PI Archive in bulk.
The event rate is then significantly faster compared to the event-by-event sending, which
occurs when no /lb is present. The archive mode is ARCREPLACE. (/lb, Optional)
No Input Errors
Suppresses writing the BAD_INPUT, IO_TIMEOUT digital states when a runtime error
occurs. (/NO_INPUT_ERROR, Optional)
Ignore Nulls
(/Ignore_Nulls, Optional)
Scan class:
Select a scan class to assign to a rate tag.
Debug Parameters
Debug Level
The interface prints additional information into the interface-specific log file, depending on
the debug level used. The amount of log information increases with the debug number as
specified in the table below (see the /DEB=# description).
Additional Parameters
This section is provided for any additional parameters that the current ICU Control does not
support.
Note: The UniInt Interface User Manual includes details about other command-line
parameters, which may be useful.
Command-line Parameters
Parameter Description
/BAD1=#
Default: 0
Optional
The /BAD1 parameter is used as an index pointing to the beginning of the range (in the system digital state table) that contains Bad Input status strings. Strings coming as statuses from RDB are compared with this range. The following example indicates the rule that is implemented:
Example:
SELECT timestamp, value, 'N/A' FROM table …
In case the interface finds a match for the 'N/A' string in the PI system digital set table (in the range defined through /BAD1 and /BAD2), the event is archived as 'N/A'; that is, as the digital state selected from RDB.
See section Evaluation of STATUS Field – Data Input.
/BAD2=#
Default: 0
Optional
The /BAD2 parameter is used as an index pointing to the end of the range (in the system digital state table) that contains Bad Input status strings.
/DEB=#
Default: 1
Optional
The interface prints additional information into the interface-specific log file, depending on the debug level used. The amount of log information increases with the debug number as follows:
Debug Level    Output
0              No debug output.
1 (Default)    Additional information about the interface operation – PI and ODBC connection related info, defined SQL queries, information about actions taken during the ODBC link re-creation, output points recovery, etc.
2              Not implemented.
3              Prints out the original data (raw values received by ODBC fetch calls per tag and scan). This helps to trace a data type conversion or other inconsistencies.
4              Prints out the actual values just before sending them to PI.
5              Prints out relevant subroutine markers the program runs through. Note: Only for onsite test purposes! Potentially huge printout!
Debug Level Granularity
The message in the log file is prefixed with the [DEB-n] marker, where n reflects the set debug level.
/DOPS
Default: for DISTRIBUTOR and RxC strategies the interface does NOT store events outside the specified point source.
Optional
Allow Distribute Outside Point Source. If this start-up parameter is set, the interface will distribute events to tags outside the specified point source (based on the TagName or Alias). Otherwise, rows with Tag Names / Aliases pointing outside the point source will be skipped.
Note: this start-up parameter applies to Tag Distribution and RxC Distribution (combination of Group and Distribution) strategies only.
/DSN=dsn_name
Required
Data Source Name created via the ODBC Administrator utility (found in the Windows Control Panel). This interface only supports Machine data-sources and preferably System data-sources!
%systemroot%\syswow64\odbcad32.exe
/ERC=#
Default: (not specified)
Optional
Consecutive Errors to Reconnect. The /ERC parameter defines the number (#) of identical consecutive errors that cause the interface to close all existing ODBC statements and attempt to re-create the whole ODBC link.
/f=SS.##
or
/f=SS.##,SS.##
or
/f=HH:MM:SS.##
or
/f=HH:MM:SS.##,hh:mm:ss.##
Required for reading scan-based inputs
The /f parameter defines the time period between scans in terms of hours (HH), minutes (MM), seconds (SS) and sub-seconds (##). The scans can be scheduled to occur at discrete moments in time with an optional time offset specified in terms of hours (hh), minutes (mm), seconds (ss) and sub-seconds (##). If HH and MM are omitted, then the time period that is specified is assumed to be in seconds.
Each instance of the /f parameter on the command-line defines a scan class for the interface. There is no limit to the number of scan classes that can be defined. The first occurrence of the /f parameter on the command-line defines the first scan class of the interface; the second occurrence defines the second scan class, and so on. PI Points are associated with a particular scan class via the Location4 PI Point attribute. For example, all PI Points that have Location4 set to 1 will receive input values at the frequency defined by the first scan class. Similarly, all points that have Location4 set to 2 will receive input values at the frequency specified by the second scan class, and so on.
Two scan classes are defined in the following example:
/f=00:01:00,00:00:05 /f=00:00:07
or, equivalently:
/f=60,5 /f=7
The first scan class has a scanning frequency of 1 minute with an offset of 5 seconds, and the second scan class has a scanning frequency of 7 seconds. When an offset is specified, the scans occur at discrete moments in time according to the formula:
scan times = (reference time) + n(frequency) + offset
where n is an integer and the reference time is midnight on the day that the interface was started. In the above example, frequency is 60 seconds and offset is 5 seconds for the first scan class. This means that if the interface was started at 05:06:06, the first scan would be at 05:07:05, the second scan would be at 05:08:05, and so on. Since no offset is specified for the second scan class, the absolute scan times are undefined.
The definition of a scan class does not guarantee that the associated points will be scanned at the given frequency. If the interface is under a large load, then some scans may occur late or be skipped entirely. See the section “Performance Summaries” in the UniInt Interface User Manual.doc for more information on skipped or missed scans.
Sub-second Scan Classes
Sub-second scan classes can be defined on the command-line, such as
/f=0.5 /f=00:00:00.1
where the scanning frequency associated with the first scan class is
0.5 seconds and the scanning frequency associated with the second
scan class is 0.1 of a second.
Similarly, sub-second scan classes with sub-second offsets can be
defined, such as
/f=0.5,0.2 /f=1,0
Wall Clock Scheduling
Scan classes that strictly adhere to wall clock scheduling are now
possible. This feature is available for interfaces that run on Windows
and/or UNIX. Previously, wall clock scheduling was possible, but not
across daylight saving time. For example,
/f=24:00:00,08:00:00 corresponds to 1 scan a day starting at
8 AM. However, after a Daylight Saving Time change, the scan would
occur either at 7 AM or 9 AM, depending upon the direction of the time
shift. To schedule a scan once a day at 8 AM (even across daylight
saving time), use /f=24:00:00,00:08:00,L. The ,L at the
end of the scan class tells UniInt to use the new wall clock scheduling
algorithm.
/Failover_Timeout=#
Default: None
Optional
This parameter is used to set a maximum timeout in seconds before the interface will fail over. In other words, the interface will not fail over if a query takes less time than the specified timeout.
/Global=FilePath
Default: no global variables file
Optional
The /Global parameter is used to specify the full path to the file that contains definitions of the global variables.
/host=host:port
Required
The /host parameter is used to specify the PI Home node. Host is the IP address of the PI Server node or the domain name of the PI Server node. Port is the port number for TCP/IP communication. The port is always 5450. It is recommended to explicitly define the host and port on the command-line with the /host parameter. Nevertheless, if either the host or port is not specified, the interface will attempt to use defaults.
Examples:
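For instance (the host name and IP address below are placeholders):
/host=MyPIServer:5450
/host=192.168.1.1:5450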
/id=x
/in=x (included for backwards compatibility with older versions of the interface)
Highly Recommended
The /id parameter is used to specify the interface identifier. The interface identifier is a string that is no longer than 9 characters in length. UniInt concatenates this string to the header that is used to identify error messages as belonging to a particular interface. See Appendix A Error and Informational Messages for more information.
UniInt always uses the /id parameter in the fashion described above. This interface also uses the /id parameter to identify a particular interface copy number that corresponds to an integer value that is assigned to Location1. For this interface, use only numeric characters in the identifier. For example,
/id=1
/Ignore_Nulls
Default: for GROUP and RxC strategies the interface writes NO_DATA in case the value column is NULL.
Optional
The /Ignore_Nulls start-up parameter causes the interface not to write the No Data system digital state for tags populated through the Tag Groups and RxC Distribution (combination of Group and Distribution) strategies. (The mandated result-set format for the two above referenced strategies does not allow excluding the NULLs in the WHERE clause.)
/LB
Optional
LaBoratory. Events are written directly to the PI Archive in bulk. The event rate is then significantly faster compared to the event-by-event sending, which occurs when no /LB is present. The archive mode is ARCREPLACE.
/MaxLog=#
Default: indefinite
Optional
Maximum number of log files in the circular buffer. The interface starts overwriting the oldest log files when MAXLOG has been reached. When not specified, the log files will be indexed indefinitely.
/MaxLogSize=#
Default: 20
Optional
Maximum size of the log file in MB. If this parameter is not specified, the default MAXLOGSIZE is 20 MB.
/No_Input_Error
Default: writes BAD_INPUT, IO_TIMEOUT in case of any runtime error
Optional
The /No_Input_Error parameter suppresses writing IO_TIMEOUT and BAD_INPUT for input tags when any runtime error occurs or the ODBC connection is lost.
Example:
SELECT timestamp,value,0 FROM table WHERE timestamp > ? ORDER BY timestamp; P1=TS
The ? will be updated (during run-time) with the latest timestamp retrieved. Now, if the interface runs into a communication problem, it will normally write I/O Timeout and use the current time to timestamp it. The latest timestamp will thus become the current time, which is potentially a problem, because the next query will miss all values between the last retrieved timestamp and the I/O Timeout timestamp! The /No_Input_Error parameter avoids this.
/OOO_Option="append,replace,remove"
Default: /ooo_option="append"
Optional
For output tags (which have Location5=1), this option specifies what kind of out-of-order output-point events will trigger the SQL query execution. In addition, the option will set a variable that can be evaluated in the query file (see section Out-Of-Order Recovery for the description of the related @* variables).
Example:
/OOO_Option="append,replace"
means only additions and modifications of the SourceTag's values cause the defined SQL query(ies) to be executed.
The order of the keywords (append, replace, remove) is arbitrary; each can appear only once and the user can specify any of these.
Note: The remove option only has an effect during the interface start-up. Value deletions will not be detected when the interface is in on-line mode.
/Pass_ODBC=password_odbc
Default: empty string
Optional
The /Pass_ODBC parameter is used to specify the password for the ODBC connection. The password entered is case sensitive! If this parameter is omitted, the standard ODBC connection dialog prompts the user for the name and password. The password has to be entered only once. On all future startups the interface will take the password from the encrypted file.
Since interface version 3.16.0, this encrypted file has the same name as the interface executable concatenated with the point source and the ID, and the file extension is PWD. The file is stored in the same directory as the interface-specific output file.
Example of the relevant start-up parameters:
rdbmspi.exe … /id=2 /ps=SQL … /Output=c:\pipc\interfaces\rdbmspi\logs\rdbmspi.log …
The encrypted password is stored in:
c:\pipc\interfaces\rdbmspi\logs\rdbmspi_SQL_2.PWD
/Pass_PI=password_pi
Default: empty string
Optional
Obsolete
The /Pass_PI parameter is used to specify the password for the piadmin account (default), or for the account set by the /user_pi parameter. The password entered is case sensitive. If the interface is started in console mode, the log-on prompt will request the password. The password is consequently stored in encrypted form; the file is named after the interface executable and the file extension is PI_PWD. It is stored in the same directory as the output log-file. The password has to be entered only once. In the course of all future startups, the interface will read the password from this encrypted file.
Example:
rdbmspi.exe … /id=2 … /Output=c:\pipc\interfaces\rdbmspi\log\rdbmspi.log …
The encrypted password is stored in:
c:\pipc\interfaces\rdbmspi\log\rdbmspi.PI_PWD
/perf=#
Default: 8 hours
Optional
The /perf parameter specifies the interval between output of performance summary information in hours. If zero is specified, no performance summaries will be done. This printout is directed to pipc.log.
UniInt monitors interface performance by keeping track of the number
of scans that are hit, missed, and/or skipped for scan-based input
points. Scans that occur on time are considered hit. If a scan occurs
more than 1 second after its scheduled time, the scan is considered
missed. If a scan occurs 1 scan period or more after its scheduled
time, then 1 or more scans are considered skipped. Say that a
particular scan class has a period of 2 seconds. If a scan for this class
occurs 1.1 seconds after its scheduled time, then 1 scan has been
missed. However, no scans have been skipped because the next
scan still has the opportunity to occur at its scheduled time, which
happens to be 0.9 seconds after the last scan in this case. For scans
that have periods of 1 second or less, the above definition of a missed
scan does not make sense. In these cases, scans are considered
either hit or skipped. Since every skipped scan is also considered to
be a missed scan, the scan performance summary should indicate the
same percentage of skipped and missed scans for scan classes with
periods of 1 second or less.
By default, UniInt prints out a performance summary to the message
log every 8 hours if the hit ratio (hit ratio = hits / (hits + misses)) drops
below 0.95. The performance summary shows the percentage of
scans that are missed and skipped for every scan class. The
frequency at which performance summaries are printed out can be
adjusted using the /perf command-line parameter.
For interfaces that use unsolicited input points, performance
summaries should be inactivated by setting /perf=0 because
performance summaries are meaningless for unsolicited inputs.
/PISDK=#
Optional
The /pisdk parameter can be used to enable or disable the PI SDK
in some situations. Use /pisdk=1 to enable the PI SDK. Use
/pisdk=0 to disable the PI SDK. If a particular interface requires the
PI SDK, then the PI SDK will always be enabled and the /pisdk
parameter will be ignored.
If the interface is running on an interface node with the PI API version
1.6.x or greater and the version of the PI Server is 3.4.370.x or
greater, the interface will ignore the /pisdk parameter and the SDK
will not be used to retrieve point attributes.
/RBO
Default: no comparison with archive values
Optional
The Read Before Overwrite (/RBO) parameter tells the interface to
check upfront if a new event already exists in the archive. The
interface does a value comparison, and if at a given timestamp it finds
the SAME value, it will NOT send it to PI. This setting applies only to
those input points which have Location5=1 (see section Input Tags).
This parameter is useful, for instance, for customers using audit logs,
because re-writing the same values can make the audit logs grow too
fast, or in cases when the interface is configured in redundant
scenarios (queries against the same tables), etc.
Note: A tag edit of an output tag will also trigger recovery, but
for this tag only.
/Recovery_Time="*-8 h"
or
/Recovery_Time=*-1d
or
/Recovery_Time=*-1h,*
or
/Recovery_Time="01-Jan-05 15:00:00, 31-Jan-05 15:00:00"
Default: no recovery
Optional
Output recovery:
In conjunction with the recovery parameter (/Recovery),
the /Recovery_Time parameter determines the oldest timestamp
for retrieving data from the archive. The time syntax is in PI time
format. (See the Data Archive Manual for more information on the PI
time string format.)
Input recovery:
The /Recovery_Time parameter supports the syntax listed in the table
in chapter RDBMSPI – Input Recovery Modes.
Note: For both modes, that is, for input as well as output
recovery, when the /Recovery_Time definition contains start
as well as end times, the interface will process the specified
interval and then it will exit.
/stopstat=digstate
or
/stopstat
/stopstat alone is equivalent to /stopstat="Intf Shut"
Default: no digital state written at shutdown
Optional
If /stopstat=digstate is present on the command line, then the
digital state, digstate, will be written to each PI Point when the
interface is stopped. For a PI 3 Server, digstate must be in the
system digital state table. UniInt will use the first occurrence of
digstate found in the table.
If the /stopstat parameter is present on the startup command
line, then the digital state "Intf Shut" will be written to each PI
Point when the interface is stopped.
If neither /stopstat nor /stopstat=digstate is specified on
the command line, then no digital states will be written when the
interface is shut down.
Note: The /stopstat parameter is disabled if the interface is
running in a UniInt failover configuration as defined in the UniInt
Failover Configuration section of this manual. Therefore, the
digital state, digstate, will not be written to each PI Point
when the interface is stopped. This prevents the digital state
being written to PI Points while a redundant system is also
writing data to the same PI Points. The /stopstat parameter
is disabled even if there is only one interface active in the
failover configuration.
Examples:
/stopstat=shutdown
/stopstat="Intf Shut"
The entire digstate value should be enclosed within double quotes
when there is a space in digstate.
/SUCC1=#
Default: 0
Optional
The /SUCC1 parameter points to the beginning of the range in the
system digital state table that contains the 'OK value area' strings.
/SUCC2=#
Default: 0
Optional
The /SUCC2 parameter points to the end of the range in the system
digital state table that contains the 'OK value area' strings.
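Example (the positions 201 and 210 are hypothetical; use the range that actually
holds the 'OK value area' strings in your system digital state table):
. . . /SUCC1=201 /SUCC2=210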
/TF=tagname
Optional
The /TF parameter specifies the query rate tag per scan class; the tag
stores the number of successfully executed queries in a scan.
Each scan class can get its own query rate tag. The order in the
startup line correlates the tag name to the related scan class (the
same way as the /f=hh:mm:ss /f=hh:mm:ss parameters do).
After each scan, the number of successfully executed queries will be
stored into the related /TF=tagname tag.
Example: Two scan frequencies and corresponding two query rate
tags:
. . . /f=00:00:03 /f=00:00:05 /TF=tagname1
/TF=tagname2
Scan class 1 will service the query rate tag tagname1 and scan class
2 will service the tag tagname2. The tags pointed to by /TF have
to be of the same PointSource (/ps=) and Location4 must
correspond to the scan class the given 'TF' tag measures.
/UFO_ID=#
Required for UniInt Interface Level Failover Phase 1 or 2
Failover ID. This value must be different from the Failover ID of the
other interface in the failover pair. It can be any positive, non-zero
integer.
/UFO_Interval=#
Optional
Default: 1000
Failover Update Interval.
Specifies the heartbeat Update Interval in milliseconds and must be
the same on both interface computers.
This is the rate at which UniInt updates the Failover Heartbeat tags as
well as how often UniInt checks on the status of the other copy of the
interface.
Valid values are 50-20000.
/UFO_OtherID=#
Required for UniInt Interface Level Failover Phase 1 or 2
Other Failover ID. This value must be equal to the Failover ID
configured for the other interface in the failover pair.
/UFO_Sync=path/[filename]
Required for UniInt Interface Level Failover Phase 2 synchronization
Any valid pathname / any valid filename; the default filename is
generated as executablename_pointsource_interfaceID.dat
The Failover File Synchronization Filepath and Optional Filename
specify the path to the shared file used for failover synchronization and
an optional filename used to specify a user-defined filename in lieu of
the default filename.
The path to the shared file directory can be a fully qualified machine
name and directory, a mapped drive letter, or a local path if the shared
file is on one of the interface nodes. The path must be terminated by a
slash ( / ) or backslash ( \ ) character. If no terminating slash is
found in the /UFO_Sync parameter, the interface interprets the final
character string as an optional filename.
The optional filename can be any valid filename. If the file does not
exist, the first interface to start attempts to create the file.
Note: If using the optional filename, do not supply a
terminating slash or backslash character.
If there are any spaces in the path or filename, the entire path
and filename must be enclosed in quotes.
The service that the interface runs against must specify a valid
logon user account under the “Log On” tab for the service
properties.
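For illustration, using the file server path that appears in the failover figures later in
this section (the server name FileSvr and the filename Intf_PS_1.dat are examples
only), a path ending in a backslash lets the interface generate the default filename,
while a full path without a trailing backslash names the synchronization file explicitly:
/UFO_Sync=\\FileSvr\UFO\
/UFO_Sync=\\FileSvr\UFO\Intf_PS_1.dat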
/UFO_Type=type
Required for UniInt Interface Level Failover Phase 2
The Failover Type indicates which type of failover configuration the
interface will run. The valid types for failover are HOT, WARM, and
COLD configurations.
If an interface does not support the requested type of failover, the
interface will shut down and log an error to the pipc.log file stating
that the requested failover type is not supported.
/updateinterval=#
Default: 120 seconds
Optional
Adjusts the minimum interval (in seconds) at which the interface checks
for point updates.
The default interval is 120 seconds, the minimum interval is 1 second,
and the maximum interval is 300 seconds.
Example:
. . . /updateinterval=60
/User_ODBC=username_odbc
Optional
The /User_ODBC parameter specifies the username for the ODBC
connection.
Databases like MS Access or dBase may not always have usernames
set up. In this case a dummy username must be used, e.g.
/User_ODBC=dummy.
/User_PI=username_pi
Default: piadmin
Optional
Obsolete!
The /User_PI parameter specifies the PI username. PI interfaces
usually log in as piadmin and rely on an entry in the PI trust table to
get the piadmin credentials. This switch is maintained for legacy
reasons; the suggested approach today (with PI Servers 3.3+) is
to always specify a PI trust.
Note: Since RDBMSPI version 3.11.0.0, when this
parameter is NOT present, the interface does not explicitly log
in and relies on entries in the PI trust table.
/WD=#
Default: 10
Optional
In conjunction with the /LB parameter; Write Delay (in milliseconds)
between two bulk writes to the PI archive. The default is 10 ms. Used to
tune the load on the PI Archive and the network. See also the /LB
and /WS=# parameters.
/WS=#
Default: 10240
Optional
In conjunction with the /LB parameter; Write Size. Maximum number
of values written in one (bulk) call to the PI Archive; the default is
10240 events per bulk call.
This parameter can be used to tune (throttle) the load on the PI
Archive.
With RDBMSPI in history recovery scenarios, it is possible to load
huge amounts of data in a short time; for example, when loading data
from tables spanning years, /WS and /WD can be used to
throttle the load.
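For example (the numbers below are illustrative only), assuming bulk writing has been
enabled with the /LB parameter (see its description), a history recovery run could be
throttled by reducing the write size and increasing the delay between bulk calls:
. . . /WS=5000 /WD=100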
Sample RDBMSPI.bat File
The following is an example file:
REM===========================================================================
REM
REM RDBMSPI.BAT
REM
REM Sample startup file for the Relational Database (RDBMS via ODBC) Interface
REM
REM ===========================================================================
REM
REM OSIsoft recommends using PI ICU to modify startup files.
REM
REM Sample command line
REM
RDBMSPI.exe ^
/ps=RDBMSPI ^
/id=1 ^
/DSN=Oracle8 ^
/User_ODBC=system ^
/Pass_ODBC= ^
/host=XXXXXX:5450 ^
/f=00:00:05 ^
/f=00:00:10 ^
/f=00:00:15 ^
/Output="C:\Program Files\PIPC\Interfaces\RDBMSPI\Log\RDBMSPI.out" ^
/SQL="C:\Program Files\PIPC\Interfaces\RDBMSPI\SQL\" ^
/DEB=1 ^
/PISDK=1 ^
/Recovery=TS ^
/Recovery_Time=*-5m
REM
REM End of RDBMSPI.bat
Introduction
To minimize data loss during a single point of failure within a system, UniInt provides two
failover schemas: (1) synchronization through the data source and (2) synchronization
through a shared file. Synchronization through the data source is Phase 1, and
synchronization through a shared file is Phase 2.
Phase 1 UniInt Failover uses the data source itself to synchronize failover operations and
provides a hot failover, no data loss solution when a single point of failure occurs. For this
option, the data source must be able to communicate with and provide data for two interfaces
simultaneously. Additionally, the failover configuration requires the interface to support
outputs.
Phase 2 UniInt Failover uses a shared file to synchronize failover operations and provides for
hot, warm, or cold failover. The Phase 2 hot failover configuration provides a no data loss
solution for a single point of failure similar to Phase 1. However, in warm and cold failover
configurations, you can expect a small period of data loss during a single point of failure
transition.
You can also configure the UniInt interface level failover to send data to a High Availability
(HA) PI Server collective. The collective provides redundant PI Servers to allow for the
uninterrupted collection and presentation of PI time series data. In an HA configuration,
PI Servers can be taken down for maintenance or repair. The HA PI Server collective is
described in the PI Server Reference Guide.
When configured for UniInt failover, the interface routes all PI data through a state machine.
The state machine determines whether to queue data or send it directly to PI depending on the
current state of the interface. When the interface is in the active state, data sent through the
interface gets routed directly to PI. In the backup state, data from the interface gets queued
for a short period. Queued data in the backup interface ensures a no-data loss failover under
normal circumstances for Phase 1 and for the hot failover configuration of Phase 2. The same
algorithm of queuing events while in backup is used for output data.
Quick Overview
The Quick Overview below may be used to configure this Interface for failover. The failover
configuration requires the two copies of the interface participating in failover be installed on
different nodes. Users should verify non-failover interface operation as discussed in the
Installation Checklist section of this manual prior to configuring the interface for failover
operations. If you are not familiar with UniInt failover configuration, return to this section
after reading the rest of the UniInt Failover Configuration section in detail. If a failure occurs
at any step below, correct the error and start again at the beginning of step 6 Test in the table
below. For the discussion below, the first copy of the interface configured and tested will be
considered the primary interface and the second copy of the interface configured will be the
backup interface.
Configuration
One Data Source
Two Interfaces
Prerequisites
Interface 1 is the Primary interface for collection of PI data from the data source.
Interface 2 is the Backup interface for collection of PI data from the data source.
You must set up a shared file if using Phase 2 failover.
Phase 2: The shared file must store data for five failover tags:
(1) Active ID.
(2) Heartbeat 1.
(3) Heartbeat 2.
(4) Device Status 1.
(5) Device Status 2.
Each interface must be configured with two required failover command line
parameters: (1) its FailoverID number (/UFO_ID); (2) the FailoverID number of
its Backup interface (/UFO_OtherID). You must also specify the name of the
PI Server host for exceptions and PI tag updates.
All other configuration parameters for the two interfaces must be identical.
Synchronization through a Shared File (Phase 2)
Figure 1: Synchronization through a shared file (Phase 2). Two interface nodes
(IF-Node1 and IF-Node2) read the same data registers from one data source
(DCS/PLC/Data Server) on the process network and synchronize through the shared file
\\FileSvr\UFO\Intf_PS_1.dat hosted on a file server (FileSvr) on the business network.
In the figure, IF-Node1 runs PI-Interface.exe with /host=PrimaryPI /UFO_ID=1
/UFO_OtherID=2 /UFO_Type=HOT /UFO_Sync=\\FileSvr\UFO\Intf_PS_1.dat, and
IF-Node2 runs PI-Interface.exe with /host=SecondaryPI /UFO_ID=2 /UFO_OtherID=1
/UFO_Type=HOT /UFO_Sync=\\FileSvr\UFO\Intf_PS_1.dat.
Figure 1 depicts a typical network setup, including the path to the synchronization file
located on a File Server (FileSvr). Other configurations may be supported; this figure is
used only as an example for the following discussion.
For a more detailed explanation of this synchronization method, see Detailed Explanation of
Synchronization through a Shared File (Phase 2)
Configuring Synchronization through a Shared File (Phase 2)
Step Description
1. Verify non-failover interface operation as described in the Installation Checklist section of
this manual
2. Configure the Shared File
Choose a location for the shared file. The file can reside on one of the interface nodes or
on a separate node from the Interfaces; however OSIsoft strongly recommends that you
put the file on a Windows Server platform that has the “File Server” role configured.
Set up a file share and make sure to assign the permissions so that both Primary and
Backup interfaces have read/write access to the file.
3. Configure the interface parameters
Use the Failover section of the Interface Configuration Utility (ICU) to enable failover and
create two parameters for each interface: (1) a Failover ID number for the interface; and
(2) the Failover ID number for its backup interface.
The Failover ID for each interface must be unique and each interface must know the
Failover ID of its backup interface.
If the interface can perform using either Phase 1 or Phase 2, pick the Phase 2 radio button
in the ICU.
Select the synchronization File Path and File to use for Failover.
Select the type of failover required (Cold, Warm, Hot). The choice depends on what types
of failover the interface supports.
Ensure that the user name assigned in the “Log on as:” parameter in the Service section
of the ICU is a user that has read/write access to the folder where the shared file will
reside.
All other command line parameters for the primary and secondary interfaces must be
identical.
If you use a PI Collective, you must point the primary and secondary interfaces to different
members of the collective by setting the SDK Member under the PI Host Information
section of the ICU.
[Option] Set the update rate for the heartbeat point if you need a value other than the
default of 5000 milliseconds.
4. Configure the PI tags
Configure five PI tags for the interface: the Active ID, Heartbeat 1, Heartbeat 2, Device
Status 1 and Device Status 2. You can also configure two state tags for monitoring the
status of the interfaces.
Do not confuse the failover Device status tags with the UniInt Health Device Status tags.
The information in the two tags is similar, but the failover device status tags are integer
values and the health device status tags are string values.
Tag (node)                   ExDesc                digitalset
ActiveID                     [UFO2_ACTIVEID]
IF1_Heartbeat (IF-Node1)     [UFO2_HEARTBEAT:#]
IF2_Heartbeat (IF-Node2)     [UFO2_HEARTBEAT:#]
IF1_DeviceStatus (IF-Node1)  [UFO2_DEVICESTAT:#]
IF2_DeviceStatus (IF-Node2)  [UFO2_DEVICESTAT:#]
IF1_State (IF-Node1)         [UFO2_STATE:#]        IF_State
IF2_State (IF-Node2)         [UFO2_STATE:#]        IF_State
UniInt does not examine the remaining attributes of these tags, but the pointsource
and location1 must match.
5. Test the configuration.
After configuring the shared file and the interface and PI tags, the interface should be
ready to run.
See Troubleshooting UniInt Failover for help resolving Failover issues.
1. Start the primary interface interactively without buffering.
2. Verify a successful interface start by reviewing the pipc.log file. The log
file will contain messages that indicate the failover state of the interface. A
successful start with only a single interface copy running will be indicated by
an informational message stating “UniInt failover: Interface in
the “Primary” state and actively sending data to PI.
Backup interface not available.” If the interface has failed to start,
an error message will appear in the log file. For details relating to
informational and error messages, refer to the Messages section below.
3. Verify data on the PI Server using available PI tools.
The Active ID control tag on the PI Server must be set to the value of
the running copy of the interface as defined by the /UFO_ID startup
command-line parameter.
The Heartbeat control tag on the PI Server must be changing values at
a rate specified by the /UFO_Interval startup command-line
parameter.
4. Stop the primary interface.
5. Start the backup interface interactively without buffering. Notice that this copy
will become the primary because the other copy is stopped.
6. Repeat steps 2, 3, and 4.
7. Stop the backup interface.
8. Start buffering.
9. Start the primary interface interactively.
10. Once the primary interface has successfully started and is collecting data,
start the backup interface interactively.
11. Verify that both copies of the interface are running in a failover configuration.
Review the pipc.log file for the copy of the interface that was started
first. The log file will contain messages that indicate the failover state of
the interface. The state of this interface must have changed as
indicated with an informational message stating “UniInt failover:
Interface in the “Primary” state and actively sending
data to PI. Backup interface available.” If the interface
has not changed to this state, browse the log file for error messages.
For details relating to informational and error messages, refer to the
Messages section below.
Review the pipc.log file for the copy of the interface that was started
last. The log file will contain messages that indicate the failover state of
the interface. A successful start of the interface will be indicated by an
informational message stating “UniInt failover: Interface in
the “Backup” state.” If the interface has failed to start, an error
message will appear in the log file. For details relating to informational
and error messages, refer to the Messages section below.
12. Verify data on the PI Server using available PI tools.
The Active ID control tag on the PI Server must be set to the value of
the running copy of the interface that was started first as defined by the
/UFO_ID startup command-line parameter.
The Heartbeat control tags for both copies of the interface on the PI
Server must be changing values at a rate specified by the
/UFO_Interval startup command-line parameter or the scan class
which the points have been built against.
13. Test Failover by stopping the primary interface.
14. Verify the backup interface has assumed the role of primary by searching the
pipc.log file for a message indicating the backup interface has changed to
the “UniInt failover: Interface in the “Primary” state and
actively sending data to PI. Backup interface not
available.” The backup interface is now considered primary and the
previous primary interface is now backup.
15. Verify no loss of data in PI. There may be an overlap of data due to the
queuing of data. However, there must be no data loss.
16. Start the backup interface. Once the primary interface detects a backup
interface, the primary interface will now change state indicating “UniInt
failover: Interface in the “Primary” state and actively
sending data to PI. Backup interface available.” in the
pipc.log file.
17. Verify the backup interface starts and assumes the role of backup. A
successful start of the backup interface will be indicated by an informational
message stating “UniInt failover: Interface in the “Backup”
state.” Since this is the initial state of the interface, the informational
message will be near the beginning of the start sequence of the pipc.log
file.
18. Test failover with different failure scenarios (e.g. loss of PI connection for a
single interface copy). UniInt failover guarantees no data loss with a single
point of failure. Verify no data loss by checking the data in PI and on the
data source.
19. Stop both copies of the interface, start buffering, start each interface as a
service.
20. Verify data as stated above.
21. To designate a specific interface as primary, set the Active ID point on the
Data Source Server to the Failover ID of the desired primary interface, as
defined by its /UFO_ID startup command-line parameter.
Start-Up Parameters
The following table lists the start-up parameters used by UniInt Failover Phase 2. All of the
parameters are required except the /UFO_Interval startup parameter. See the table below
for further explanation.
Parameter / Required or Optional / Description / Value and Default

/UFO_ID=#
Required
Failover ID for IF-Node1. This value must be different from the
failover ID of IF-Node2.
Value: any positive, non-zero integer / 1
Required
Failover ID for IF-Node2. This value must be different from the
failover ID of IF-Node1.
Value: any positive, non-zero integer / 2

/UFO_OtherID=#
Required
Other Failover ID for IF-Node1. The value must be equal to the
Failover ID configured for the interface on IF-Node2.
Value: same value as the Failover ID for IF-Node2 / 2
Required
Other Failover ID for IF-Node2. The value must be equal to the
Failover ID configured for the interface on IF-Node1.
Value: same value as the Failover ID for IF-Node1 / 1

/UFO_Sync=path/[filename]
Required for Phase 2 synchronization
The Failover File Synchronization Filepath and Optional Filename
specify the path to the shared file used for failover synchronization
and an optional filename used to specify a user-defined filename in
lieu of the default filename.
The path to the shared file directory can be a fully qualified machine
name and directory, a mapped drive letter, or a local path if the
shared file is on one of the interface nodes. The path must be
terminated by a slash ( / ) or backslash ( \ ) character. If no
terminating slash is found in the /UFO_Sync parameter, the interface
interprets the final character string as an optional filename.
The optional filename can be any valid filename. If the file does not
exist, the first interface to start attempts to create the file.
Note: If using the optional filename, do not supply a terminating
slash or backslash character.
If there are any spaces in the path or filename, the entire path and
filename must be enclosed in quotes.
Note: If you use backslash path separators and enclose the path in
double quotes, the final backslash must be a double backslash (\\).
Otherwise the closing double quote becomes part of the parameter
instead of a parameter separator.
Each node in the failover configuration must specify the same path
and filename and must have read, write, and file creation rights to
the shared directory specified by the path parameter.
The service that the interface runs against must specify a valid logon
user account under the “Log On” tab for the service properties.
Value: any valid pathname / any valid filename; the default filename
is generated as executablename_pointsource_interfaceID.dat

/UFO_Type=type
Required
The Failover Type indicates which type of failover configuration the
interface will run. The valid types for failover are HOT, WARM, and
COLD configurations.
If an interface does not support the requested type of failover, the
interface will shut down and log an error to the pipc.log file stating
that the requested failover type is not supported.
Value: COLD|WARM|HOT / COLD

/UFO_Interval=#
Optional
Failover Update Interval.
Specifies the heartbeat Update Interval in milliseconds and must
be the same on both interface computers.
This is the rate at which UniInt updates the Failover Heartbeat
tags as well as how often UniInt checks on the status of the other
copy of the interface.
Value: 50 – 20000 / 1000
The following table describes the points that are required to manage failover. In Phase 2
Failover, these points are located in a data file shared by the Primary and Backup interfaces.
OSIsoft recommends that you locate the shared file on a dedicated server that has no other
role in data collection. This avoids potential resource contention and processing degradation
if your system monitors a large number of data points at a high frequency.
Point / Description / Value and Default

ActiveID
Monitored by the interfaces to determine which interface is currently
sending data to PI. ActiveID must be initialized so that when the
interfaces read it for the first time, it is not an error.
ActiveID can also be used to force failover. For example, if the
current Primary is IF-Node1 and ActiveID is 1, you can manually
change ActiveID to 2. This causes the interface at IF-Node2 to
transition to the primary role and the interface at IF-Node1 to
transition to the backup role.
Value: from 0 to the highest Interface Failover ID number / None.
Updated by the redundant interfaces; can be changed manually to
initiate a manual failover.

Heartbeat 1
Updated periodically by the interface on IF-Node1. The interface on
IF-Node2 monitors this value to determine if the interface on
IF-Node1 has become unresponsive.
Value: ranges between 0 and 31 / None. Updated by the interface
on IF-Node1.

Heartbeat 2
Updated periodically by the interface on IF-Node2. The interface on
IF-Node1 monitors this value to determine if the interface on
IF-Node2 has become unresponsive.
Value: ranges between 0 and 31 / None. Updated by the interface
on IF-Node2.
PI Tags
The following tables list the required UniInt Failover Control PI tags, the values they will
receive, and descriptions.
The following table describes the ExtendedDescriptor for the above PI tags in more
detail.
PI Tag ExDesc / Required or Optional / Description / Value

[UFO2_ACTIVEID]
Required
Active ID tag.
The ExDesc must start with the case-sensitive string: [UFO2_ACTIVEID].
The pointsource must match the interfaces’ point source.
Location1 must match the ID for the interfaces.
Location5 is the COLD failover retry interval in minutes. This can be
used to specify how long before an interface retries to connect to the
device in a COLD failover configuration. (See the description of the
COLD failover retry interval for a detailed explanation.)
Value: 0 – highest Interface Failover ID. Updated by the redundant
interfaces.

[UFO2_HEARTBEAT:#] (IF-Node1)
Required
Heartbeat 1 tag.
The ExDesc must start with the case-sensitive string:
[UFO2_HEARTBEAT:#]. The number following the colon (:) must be
the Failover ID for the interface running on IF-Node1.
The pointsource must match the interfaces’ point source.
Location1 must match the ID for the interfaces.
Value: 0 – 31 / None. Updated by the interface on IF-Node1.

[UFO2_HEARTBEAT:#] (IF-Node2)
Required
Heartbeat 2 tag.
The ExDesc must start with the case-sensitive string:
[UFO2_HEARTBEAT:#]. The number following the colon (:) must be
the Failover ID for the interface running on IF-Node2.
The pointsource must match the interfaces’ point source.
Location1 must match the ID for the interfaces.
Value: 0 – 31 / None. Updated by the interface on IF-Node2.

[UFO2_DEVICESTAT:#] (IF-Node1)
Required
Device Status 1 tag.
The ExDesc must start with the case-sensitive string:
[UFO2_DEVICESTAT:#]. The value following the colon (:) must be
the Failover ID for the interface running on IF-Node1.
The pointsource must match the interfaces’ point source.
Location1 must match the ID for the interfaces.
A lower value is a better status and the interface with the lower
status will attempt to become the primary interface.
The failover 1 device status tag is very similar to the UniInt Health
Device Status tag except that the data written to this tag are integer
values. A value of 0 is good and a value of 99 is OFF. Any value
between these two extremes may result in a failover. The interface
client code updates these values when the health device status tag is
updated.
Value: 0 – 99 / None. Updated by the interface on IF-Node1.

[UFO2_DEVICESTAT:#] (IF-Node2)
Required
Device Status 2 tag.
The ExDesc must start with the case-sensitive string:
[UFO2_DEVICESTAT:#]. The number following the colon (:) must be
the Failover ID for the interface running on IF-Node2.
The pointsource must match the interfaces’ point source.
Location1 must match the ID for the interfaces.
A lower value is a better status and the interface with the lower
status will attempt to become the primary interface.
Value: 0 – 99 / None. Updated by the interface on IF-Node2.
Detailed Explanation of Synchronization through a Shared File
(Phase 2)
In a shared file failover configuration, there is no direct failover control information passed
between the data source and the interface. This failover scheme uses five PI tags to control
failover operation, and all failover communication between primary and backup interfaces
passes through a shared data file.
Once the interface is configured and running, the ability to read or write to the PI tags is not
required for the proper operation of failover. This solution does not require a connection to
the PI Server after initial startup because the control point data are set and monitored in the
shared file. However, the PI tag values are sent to the PI Server so that you can monitor them
with standard OSIsoft client tools.
You can force manual failover by changing the ActiveID on the data source to the backup
failover ID.
Figure: Steady-state operation of Phase 2 failover (same network layout as Figure 1).
IF-Node1 (/host=PrimaryPI /UFO_ID=1 /UFO_OtherID=2 /UFO_Type=HOT) and IF-Node2
(/host=SecondaryPI /UFO_ID=2 /UFO_OtherID=1 /UFO_Type=HOT) both collect from the
data source and synchronize through the shared file
/UFO_Sync=\\FileSvr\UFO\Intf_PS_1.dat on the file server FileSvr.
The figure above shows a typical network setup in the normal or steady state. The solid
magenta lines show the data path from the interface nodes to the shared file used for failover
synchronization. The shared file can be located anywhere in the network as long as both
interface nodes can read, write, and create the necessary file on the shared file machine.
OSIsoft strongly recommends that you put the file on a dedicated file server that has no other
role in the collection of data.
The major difference between synchronizing the interfaces through the data source (Phase 1)
and synchronizing the interfaces through the shared file (Phase 2) is where the control data is
located. When synchronizing through the data source, the control data is acquired directly
from the data source. We assume that if the primary interface cannot read the failover control
points, then it cannot read any other data. There is no need for a backup communications path
between the control data and the interface.
When synchronizing through a shared file, however, we cannot assume that loss of control
information from the shared file implies that the primary interface is down. We must account
for the possible loss of the path to the shared file itself and provide an alternate control path
to determine the status of the primary interface. For this reason, if the shared file is
unreachable for any reason, the interfaces use the PI Server as an alternate path to pass
control data.
When the backup interface does not receive updates from the shared file, it cannot tell
definitively why the primary is not updating the file, whether the path to the shared file is
down, whether the path to the data source is down, or whether the interface itself is having
problems. To resolve this uncertainty, the backup interface uses the path to the PI Server to
determine the status of the primary interface. If the primary interface is still communicating
with the PI Server, then failover to the backup is not required. However, if the primary
interface is not posting data to the PI Server, then the backup must initiate failover operations.
The primary interface also monitors the connection with the shared file to maintain the
integrity of the failover configuration. If the primary interface can read and write to the
shared file with no errors but the backup control information is not changing, then the backup
is experiencing some error condition. To determine exactly where the problem exists, the
primary interface uses the path to PI to establish the status of the backup interface. For
example, if the backup interface controls indicate that it has been shut down, it may have been
restarted and is now experiencing errors reading and writing to the shared file. Both primary
and backup interfaces must always check their status through PI to determine if one or the
other is not updating the shared file and why.
Steady state operation is considered the normal operating condition. In this state, the primary
interface is actively collecting data and sending its data to PI. The primary interface is also
updating its heartbeat value; monitoring the heartbeat value for the backup interface,
checking the active ID value, and checking the device status for the backup interface every
failover update interval on the shared file. Likewise, the backup interface is updating its
heartbeat value; monitoring the heartbeat value for the primary interface, checking the active
ID value, and checking the device status for the primary interface every failover update
interval on the shared file. As long as the heartbeat value for the primary interface indicates
that it is operating properly, the ActiveID has not changed, and the device status on the
primary interface is good, the backup interface will continue in this mode of operation.
An interface configured for hot failover will have the backup interface actively collecting and
queuing data but not sending that data to PI. An interface for warm failover in the backup role
is not actively collecting data from the data source even though it may be configured with PI
tags and may even have a good connection to the data source. An interface configured for
cold failover in the backup role is not connected to the data source and upon initial startup
will not have configured PI tags.
The interaction between the interface and the shared file is fundamental to failover. The
discussion that follows only refers to the data written to the shared file. However, every value
written to the shared file is echoed to the tags on the PI Server. Updating of the tags on the
PI Server is assumed to take place unless communication with the PI Server is interrupted.
The updates to the PI Server will be buffered by bufserv or BufSS in this case.
In a hot failover configuration, each interface participating in the failover solution will queue
three failover intervals worth of data to prevent any data loss. When a failover occurs, there
may be a period of overlapping data for up to 3 intervals. The exact amount of overlap is
determined by the timing and the cause of the failover and may be different every time. Using
the default update interval of 5 seconds will result in overlapping data between 0 and 15
seconds. The no data loss claim for hot failover is based on a single point of failure. If both
interfaces have trouble collecting data for the same period of time, data will be lost during
that time.
As mentioned above, each interface has its own heartbeat value. In normal operation, the
Heartbeat value on the shared file is incremented by UniInt from 1 – 15 and then wraps
around to a value of 1 again. UniInt increments the heartbeat value on the shared file every
failover update interval. The default failover update interval is 5 seconds. UniInt also reads
the heartbeat value for the other interface copy participating in failover every failover update
interval. If the connection to the PI Server is lost, the value of the heartbeat will be
incremented from 17 – 31 and then wrap around to a value of 17 again. Once the connection
to the PI Server is restored, the heartbeat values will revert back to the 1 – 15 range. During a
normal shutdown process, the heartbeat value will be set to zero.
During steady state, the ActiveID will equal the value of the failover ID of the primary
interface. This value is set by UniInt when the interface enters the primary state and is not
updated again by the primary interface until it shuts down gracefully. During shutdown, the
primary interface will set the ActiveID to zero before shutting down. The backup interface
has the ability to assume control as primary even if the current primary is not experiencing
problems. This can be accomplished by setting the ActiveID tag on the PI Server to the
ActiveID of the desired interface copy.
As previously mentioned, in a hot failover configuration the backup interface actively collects
data but does not send its data to PI. To eliminate any data loss during a failover, the backup
interface queues data in memory for three failover update intervals. The data in the queue is
continuously updated to contain the most recent data. Data older than three update intervals is
discarded if the primary interface is in a good status as determined by the backup. If the
backup interface transitions to the primary, it will have data in its queue to send to PI. This
queued data is sent to PI using the same function calls that would have been used had the
interface been in a primary state when the function call was received from UniInt. If UniInt
receives data without a timestamp, the primary copy uses the current PI time to timestamp
data sent to PI. Likewise, the backup copy timestamps data it receives without a timestamp
with the current PI time before queuing its data. This preserves the accuracy of the
timestamps.
Note: With the exception of the /UFO_ID and /UFO_OtherID startup command-
line parameters, the UniInt failover scheme requires that both copies of the interface
have identical startup command files. This requirement causes the PI ICU to
produce a message when creating the second copy of the interface stating that the
“PS/ID combo already in use by the interface” as shown in Figure 2 below. Ignore
this message and click the Add button.
Figure 2: PI ICU configuration screen shows that the “PS/ID combo is already in use by
the interface.” The user must ignore the yellow boxes, which indicate errors, and click the
Add button to configure the interface for failover.
Configuring the UniInt Failover Startup Parameters with PI ICU
There are three interface startup parameters that control UniInt failover: /UFO_ID,
/UFO_OtherID, and /UFO_Interval. The UFO stands for UniInt Failover. The /UFO_ID
and /UFO_OtherID parameters are required for the interface to operate in a failover
configuration, but the /UFO_Interval is optional. Each of these parameters is described in
detail in the Configuring UniInt Failover through a Shared File (Phase 2) section and in
Start-Up Parameters.
Figure 3: The figure above illustrates the PI ICU failover configuration screen showing
the UniInt failover startup parameters (Phase 2). This copy of the interface defines its
Failover ID as 2 (/UFO_ID=2) and the other interfaces Failover ID as 1
(/UFO_OtherID=1). The other failover interface copy must define its Failover ID as 1
(/UFO_ID=1) and the other interface Failover ID as 2 (/UFO_OtherID=2) in its ICU
failover configuration screen. It also defines the location and name of the
synchronization file as well as the type of failover as COLD.
To use the UniInt Failover page to create the UFO_State digital state set, right-click on any of
the failover tags in the tag list and then select “Create UFO_State Digital Set on Server
XXXXXX…”, where XXXXXX is the PI Server on which the points will be or are created.
This choice will be grayed out if the UFO_State digital state set has already been created on
the XXXXXX PI Server.
Optionally, “Export UFO_State Digital Set (.csv)” can be selected to create a comma-
separated file to be imported via the System Management Tools (SMT3) (version 3.0.0.7 or
higher), or use the UniInt_Failover_DigitalSet_UFO_State.csv file included in the
installation kit.
The procedure below outlines the steps necessary to create a digital set on a PI Server using
the “Import from File” function found in the SMT3 application. The procedure assumes the
user has a basic understanding of the SMT3 application.
1. Open the SMT3 application.
2. Select the appropriate PI Server from the PI Servers window. If the desired server is
not listed, add it using the PI Connection Manager. A view of the SMT application is
shown in Figure 4 below.
3. From the System Management Plug-Ins window, select Points then Digital States. A
list of available digital state sets will be displayed in the main window for the
selected PI Server. Refer to Figure 4 below.
4. In the main window, right click on the desired server and select the “Import from
File” option. Refer to Figure 4 below.
Figure 4: PI SMT application configured to import a digital state set file. The PI Servers
window shows the “localhost” PI Server selected along with the System Management
Plug-Ins window showing the Digital States Plug-In as being selected. The digital state
set file can now be imported by selecting the Import from File option for the localhost.
5. Navigate to and select the UniInt_Failover_DigitalSet_UFO_State.csv file
for import using the Browse icon on the display. Select the desired Overwrite
Options. Click on the OK button. Refer to Figure 5 below.
Figure 5: PI SMT application Import Digital Set(s) window. This view shows the
UniInt_Failover_DigitalSet_UFO_State.csv file as being selected for import.
Select the desired Overwrite Options by choosing the appropriate radio button.
Figure 6: The PI SMT application showing the UFO_State digital set created on the
“localhost” PI Server.
Creating the UniInt Failover Control and Failover State Tags (Phase 2)
The ICU can be used to create the UniInt Failover Control and State Tags.
To use the ICU Failover page to create these tags, simply right-click any of the failover tags in
the tag list and select the “Create all points (UFO Phase 2)” menu item.
If this menu choice is grayed out, it is because the UFO_State digital state set has not been
created on the Server yet. There is a menu choice “Create UFO_State Digital Set on Server
xxxxxxx…” which can be used to create that digital state set. Once this has been done,
the “Create all points (UFO Phase 2)” choice should be available.
Once the failover control and failover state tags have been created the Failover page of the
ICU should look similar to the illustration below.
Although ODBC is the de facto standard for accessing data stored in relational databases,
there are ODBC driver implementation differences. Also, the underlying relational databases
differ in functionality, supported data types, SQL syntax and so on. The following section
describes some of the interface-relevant limits and/or differences; however, users must be
aware that this list is by no means complete.
There is a limitation on the number of statements that can be opened concurrently, and on
some Oracle versions this limitation amounts to just 100 concurrently allocated statements.
Since the interface normally uses one SQL statement per tag, no more than the specified
number of tags could thus be serviced (per one RDBMSPI instance). Although it is possible
to increase this limit via the keyword OPEN_CURSORS configured in the file INIT.ORA
(located on the server side of the ORACLE database), this change isn't easily applicable
because it has a global influence.
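For illustration only (the value 1000 is arbitrary, and the exact procedure depends on the
Oracle version and on how the instance is administered), the limit is typically raised either
through the parameter file or, on releases that support it, at run time:
OPEN_CURSORS = 1000 (entry in INIT.ORA)
ALTER SYSTEM SET open_cursors = 1000 SCOPE=BOTH; (run-time alternative when an SPFILE is used)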
One way around this limit is to group tags together (see Data Acquisition Strategies), or run
multiple instances of the interface (different Location1), because this limit is per connection.
The other approach is to use the interface option /EXECDIRECT that does not use the
prepared execution at all. The direct execution (/EXECDIRECT start-up parameter) is the
preferred solution.
Note: The described problem also occurs when too many cursors are open from
stored procedures. All cursors open within a stored procedure thus have to be
properly closed.
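The following is only a sketch of that pattern (the procedure name is made up and the table
pi_test1 is borrowed from the REF CURSOR example below): any cursor opened inside a
stored procedure should be closed before the procedure returns, so that it does not count
against OPEN_CURSORS.
CREATE OR REPLACE PROCEDURE close_cursor_example IS
    CURSOR c IS SELECT pi_time, pi_value FROM pi_test1;
    v_time  pi_test1.pi_time%TYPE;
    v_value pi_test1.pi_value%TYPE;
BEGIN
    OPEN c;
    LOOP
        FETCH c INTO v_time, v_value;
        EXIT WHEN c%NOTFOUND;
        NULL; -- process the row here
    END LOOP;
    CLOSE c; -- release the cursor explicitly
END close_cursor_example;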
TOP 10
If it is required to limit the number of returned rows (e.g. to reduce the CPU load), the SQL
query can be formulated with a number representing the maximum rows that will be
returned. This option is database specific and Oracle's implementation is as follows:
Oracle RDB
SELECT timestamp,value,status FROM Table LIMIT TO 10 ROWS;
SELECT timestamp,value,status FROM Table LIMIT TO 10 ROWS WHERE
timestamp > ? ORDER BY timestamp;
Oracle 8.0 (NT) and above
Similar to the example for Oracle RDB, the statement to select a maximum of just 10 records
looks as follows:
SELECT timestamp,value,status FROM Table WHERE ROWNUM<11;
To have an Oracle stored procedure deliver a result-set to the interface, it is necessary to
construct two Oracle objects – a PACKAGE and the actual STORED PROCEDURE:
1. Package:
CREATE OR REPLACE PACKAGE myTestPackage IS
TYPE gen_cursor IS REF CURSOR;
END myTestPackage;
2. Stored procedure (that takes for example the date argument as the input parameter):
CREATE OR REPLACE PROCEDURE myTestProc
(cur OUT myTestPackage.gen_cursor, ts IN date)
IS res myTestPackage.gen_cursor;
BEGIN
OPEN res FOR SELECT pi_time,pi_value,0 FROM pi_test1 WHERE
pi_time > ts;
cur := res;
END myTestProc;
This stored procedure can then be executed like:
{CALL myTestProc(?)}; P1=TS
And it delivers a result-set; the same as if the SELECT statement were executed directly.
Note: The above example works only with Oracle's ODBC drivers. It has been
tested with Oracle9i.
dBase III, dBase IV
dBase does not have any native timestamp data type. If sending PI timestamps to dBase, the
interface and the ODBC driver will automatically convert the timestamp placeholder from the
SQL_TIMESTAMP into SQL_VARCHAR (the dBase target column therefore has to be
TEXT(20)).
The other direction, RDB->PI, is not that simple. It is not possible to read a
timestamp from a TEXT field because the required ODBC function CONVERT does not
support the SQL_VARCHAR to SQL_TIMESTAMP conversion. However, a
workaround is possible:
Use the dBase database as a linked table from within MS Access. Now the MS Access ODBC
driver is available, which implements a function called CDATE(). The following query works
for string columns e.g. TEXT(20) in dBase with the format "DD-MMM-YY hh:mm:ss":
SELECT CDATE(Timestamp), Value, Status FROM Table WHERE
CDATE(Timestamp) > ?; P1=TS
ODBC drivers used:
Microsoft dBase Driver 4.00.4403.02
Microsoft Access Driver 4.00.4403.02
Login
dBase works without Username and Password. In order to get access from the interface a
dummy username and password must be used in the startup line.
/user_odbc=dummy /pass_odbc=dummy
Multi-User Access
The Microsoft dBase ODBC driver seems to lock the dBase tables. That means no other
application can access the table at the same time.
There are no known workarounds, other than the Microsoft Access linked table.
Microsoft Access
Login
Access can also be configured without Username and Password. In order to get access from
the interface a dummy username and password have to be used in the startup line.
/user_odbc=dummy /pass_odbc=dummy
Microsoft SQL Server 6.5, 7.0, 2000, 2005, 2008
DATETIME Data Type
Only the DATETIME data type represents the date and time implementation. The slightly
misleading name TIMESTAMP, another data type supported by MS SQL Server, is a
database-wide unique number that cannot be bound to the interface time-related
placeholders (TS, ST,…).
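As a minimal sketch (the table and column names below are illustrative only, not
prescribed by the interface), a SQL Server target table would therefore declare its time
column as DATETIME:
CREATE TABLE PI_Data
(
    PI_Time   DATETIME,     -- bound to the interface time-related placeholders (TS, ST, …)
    PI_Value  REAL,
    PI_Status INT,
    PI_Tag    VARCHAR(80)
);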
TOP 10
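The Oracle examples earlier in this chapter limit the row count with LIMIT TO and
ROWNUM; the SQL Server equivalent uses TOP. A minimal sketch, reusing the column
and table names from the SET NOCOUNT ON example below:
SELECT TOP 10 Timestamp, Value, 0 FROM Table2 WHERE Timestamp > ? ORDER BY Timestamp; P1=TS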
SET NOCOUNT ON
If the stored procedure on MS SQL Server contains more complex T-SQL code, e.g. a
combination of INSERT and SELECT statements, the SET NOCOUNT ON setting is
preferable. Statements like INSERT, UPDATE, DELETE, {CALL} then do NOT return the
number of affected rows (as the default result-set) which, in combination with the result set
from a SELECT statement can cause the following errors:
"[S][24000]: [Microsoft][ODBC SQL Server Driver]Invalid cursor
state"
or
" [S][HY000]: [Microsoft][ODBC SQL Server Driver]Connection is busy
with results for another hstmt "
The following code shows how to avoid the above error:
CREATE PROCEDURE sp_RDBMSPI1
(
@name varchar(30), -- tag name
@TS datetime -- timestamp
)
AS
SET NOCOUNT ON
INSERT Table1 VALUES(@name,@TS)
SELECT Timestamp,Value,0 FROM Table2 WHERE Tagname = @name
and Timestamp > @TS
CA Ingres II
The ODBC driver which comes with the Ingres II Software Development Kit does not work
with this interface. This is because the ODBC driver expects statements to be
re-prepared before each execution (even though the ODBC driver reports SQL_CB_CLOSE
when checking SQL_CURSOR_COMMIT_BEHAVIOR). That means the ODBC driver is
inconsistent with the ODBC specification.
Other ODBC drivers for Ingres II may still work. Alternatively it is possible to set the
/EXECDIRECT start-up switch.
IBM DB2 (NT)
Statement Limitation
There is a limitation on the number of statements that can be open concurrently (prepared
ODBC execution) for version 7.1. The limitation only allows 100 concurrently prepared
ODBC statements. It is nevertheless possible to increase this value via the corresponding DB2
database parameter (applheapsz, via the DB2 Control Center: Configure (right-click the
particular database instance), Performance, Application heap size).
ODBC drivers used:
IBM DB2 (NT) 07.01.0000
ODBC Driver 06.01.0000
Note: The corresponding ODBC Error message describing the situation is as follows:
[S][57011]: [IBM][CLI Driver][DB2/NT] SQL0954C Not enough storage is available in
the application heap to process the statement. SQLSTATE=57011
See the above discussion of the same topic for the Oracle database.
Informix (NT)
ODBC drivers used:
Informix 07.31.0000 TC5 (NT) 02.80.0008 2.20 TC1
Error while ODBC Re-Connection
An access violation in the Informix ODBC driver DLL was experienced when the Informix
RDB was shut down during the interface operation.
Paradox
ODBC drivers used:
Paradox, 5.x ODBC Driver 4.00.5303.01
BDE (Borland Database Engine) 5.0
Chapter 20. Interface Node Clock
Make sure that the time and time zone settings on the computer are correct. To confirm, run
the Date/Time applet located in the Windows Control Panel. If the locale where the Interface
Node resides observes Daylight Saving Time, check the “Automatically adjust clock for
daylight saving changes” box.
In addition, make sure that the TZ environment variable is not defined. All of the currently
defined environment variables can be viewed by opening a Command Prompt window and
typing set. That is,
C:> set
Confirm that TZ is not in the resulting list. If it is, run the System applet of the Control
Panel, click the “Environment Variables” button under the Advanced Tab, and remove TZ
from the list of environment variables. For more information see section Time Zone and
Daylight Saving.
Note: The RDB timestamps are thus sent to PI with the Time Zone/DST settings of
the interface node!
OSIsoft suggests setting the same (Time Zone/DST) settings on the interface node AS THEY
ARE on the RDB machine. For example, many RDB systems run with DST off; in that
case, set DST off also for the interface node and let the PI API take care of the
timestamp conversion between the interface node and the PI Server.
The other scenario assumes the RDB timestamps are UTC timestamps; that is, the interface
considers them independent of the local operating system settings. This mode is activated by
the /UTC startup switch; see section Command-Line Parameters for more details.
Note: The RDBMSPI Interface uses the extended PI API functions, which do the
time zone/DST adjustment automatically. PI API version 1.3.8 or above is therefore
required.
Chapter 21. Security
Windows
The PI Firewall Database and the PI Proxy Database must be configured so that the interface
is allowed to write data to the PI Server. See “Modifying the Firewall Database” and
“Modifying the Proxy Database” in the PI Server manuals.
Note that the Trust Database, which is maintained by the Base Subsystem, replaces the Proxy
Database used prior to PI version 3.3. The Trust Database maintains all the functionality of
the proxy mechanism while being more secure. See “Trust Login Security” in the chapter
“Managing Security” of the PI Server System Management Guide.
If the interface cannot write data to the PI Server because it has insufficient privileges,
a -10401 error will be reported in the pipc.log file. If the interface cannot send data to a
PI 2 Server, it writes a -999 error. See the section Appendix A: Error and Informational
Messages for additional information on error messaging.
PI Server v3.2
For PI Server v3.2, the following example demonstrates how to edit the PI Proxy table:
C:\PI\adm> piconfig
@table pi_gen,piproxy
@mode create
@istr host,proxyaccount
piapimachine,piadmin
@quit
In place of piapimachine, put the name of the PI Interface node as it is seen by PI Server.
Chapter 22. Starting / Stopping the Interface
This section describes starting and stopping the Interface once it has been installed as a
service. See the UniInt Interface User Manual to run the Interface interactively.
To start the interface service with PI ICU, use the button on the PI ICU toolbar.
A message will inform the user of the status of the interface service. Even if the message
indicates that the service has started successfully, double check through the Services control
panel applet. Services may terminate immediately after startup for a variety of reasons, and
one typical reason is that the service is not able to find the command-line parameters in the
associated .bat file. Verify that the root name of the .bat file and the .exe file are the
same, and that the .bat file and the .exe file are in the same directory. Further
troubleshooting of services might require consulting the pipc.log file, Windows Event
Viewer, or other sources of log messages. See the section Appendix A: Error and
Informational Messages for additional information.
To stop the interface service with PI ICU, use the button on the PI ICU toolbar.
Buffering refers to an Interface Node’s ability to temporarily store the data that interfaces
collect and to forward these data to the appropriate PI Servers. OSIsoft strongly recommends
that you enable buffering on your Interface Nodes. Otherwise, if the Interface Node stops
communicating with the PI Server, you lose the data that your interfaces collect.
The PI SDK installation kit installs two buffering applications: the PI Buffer Subsystem
(PIBufss) and the PI API Buffer Server (Bufserv). PIBufss and Bufserv are mutually
exclusive; that is, on a particular computer, you can run only one of them at any given time.
If you have PI Servers that are part of a PI Collective, PIBufss supports n-way buffering. N-
way buffering refers to the ability of a buffering application to send the same data to each of
the PI Servers in a PI Collective. (Bufserv also supports n-way buffering, but OSIsoft
recommends that you run PIBufss instead.)
Note: Combining the RDBMSPI interface with buffering can present a couple of
issues. Buffering is, in general, a very useful concept, especially for interfaces that
scan "classic" DCS systems. Such interfaces, however, mostly just keep sending
current data to PI and do not need to read anything back from the PI Server. The
RDBMSPI interface, on the other hand, needs to refresh its placeholders before each
query execution. Because buffering supports only one-way communication (from the
interface to PI), queries with placeholders will not be executed during periods when
the PI Server is not accessible, whereas queries without placeholders will run fine.
Moreover, queries that contain the annotation column (that is, queries that need
PI SDK support) bypass buffering entirely.
Whether buffering should or should not be used therefore depends on the individual
installation and data retrieval scenarios.
When buffering is enabled, it affects the entire Interface Node. That is, you do not have a
scenario whereby the buffering application buffers data for one interface running on an
Interface Node but not for another interface running on the same Interface Node.
To use a process name longer than 4 characters for a trust application name, add
LONGAPPNAME=1 to the PIClient.ini file.
To select PIBufss as the buffering application, choose Enable buffering with PI Buffer
Subsystem.
To select Bufserv as the buffering application, choose Enable buffering with API Buffer
Server.
If a warning message such as the following appears, click Yes.
Buffering Settings
There are a number of settings that affect the operation of PIBufss and Bufserv. The
Buffering Settings section allows you to set these parameters. If you do not enter values for
these parameters, PIBufss and Bufserv use default values.
PIBufss
For PIBufss, the paragraphs below describe the settings that may require user intervention.
Please contact OSIsoft Technical Support for assistance in further optimizing these and all
remaining settings.
Primary and Secondary Memory Buffer Size (Bytes)
This is a key parameter for buffering performance. The sum of these two memory buffer sizes
must be large enough to accommodate the data that an interface collects during a single scan.
A typical event with a Float32 point type requires about 25 bytes. If an interface writes data
to 5,000 points, it can potentially send 125,000 bytes (25 * 5000) of data in one scan. As a
result, the size of each memory buffer should be 62,500 bytes.
The default value of these memory buffers is 32,768 bytes. OSIsoft recommends increasing
both memory buffer sizes to the maximum of 2,000,000 bytes for the best buffering
performance.
For optimal performance and reliability, OSIsoft recommends that you place the PIBufss
event queue files on a different drive/controller from the system drive and the drive with the
Windows paging file. (By default, these two drives are the same.)
Bufserv
For Bufserv, the paragraphs below describe the settings that may require user intervention.
Please contact OSIsoft Technical Support for assistance in further optimizing these and all
remaining settings.
Send rate (milliseconds)
Send rate is the time in milliseconds that Bufserv waits between sending up to the Maximum
transfer objects (described below) to the PI Server. The default value is 100. The valid range
is 0 to 2,000,000.
Buffered Servers
The Buffered Servers section allows you to define the PI Servers or PI Collective to which the
buffering application writes data.
PIBufss
PIBufss buffers data only to a single PI Server or a PI Collective. Select the PI Server or the
PI Collective from the Buffering to collective/server drop down list box.
The following screen shows that PIBufss is configured to write data to a standalone PI Server
named starlight. Notice that the Replicate data to all collective member nodes check box
is disabled because this PI Server is not part of a collective. (PIBufss automatically detects
whether a PI Server is part of a collective.)
The following screen shows that PIBufss is configured to write data to a PI Collective named
admiral. By default, PIBufss replicates data to all collective members. That is, it provides n-
way buffering.
You can override this option by not checking the Replicate data to all collective member
nodes check box. Then, uncheck (or check) the PI Server collective members as desired.
Bufserv
Bufserv buffers data to a standalone PI Server, or to multiple standalone PI Servers. (If you
want to buffer to multiple PI Servers that are part of a PI Collective, you should use PIBufss.)
If the PI Server to which you want Bufserv to buffer data is not in the Server list, enter its
name in the Add a server box and click the Add Server button. This PI Server name must be
identical to the API Hostname entry:
The following screen shows that Bufserv is configured to write to a standalone PI Server
named etamp390. You use this configuration when all the interfaces on the Interface Node
write data to etamp390.
The following screen shows that Bufserv is configured to write to two standalone PI Servers,
one named etamp390 and the other one named starlight. You use this configuration
when some of the interfaces on the Interface Node write data to etamp390 and some write to
starlight.
API Buffer Server Service
Use the API Buffer Server Service page to configure Bufserv as a Service. This page also
allows you to start and stop the Bufserv Service.
Bufserv version 1.6 and later does not require the logon rights of the local administrator
account. It is sufficient to use the LocalSystem account instead. Although the screen below
shows asterisks for the LocalSystem password, this account does not have a password.
The Interface Point Configuration chapter provides information on building PI points for
collecting data from the device. This chapter describes the configuration of points related to
interface diagnostics.
Note: The procedure for configuring interface diagnostics is not specific to this
Interface. Thus, for simplicity, the instructions and screenshots that follow refer to an
interface named ModbusE.
Some of the points that follow refer to a “performance summary interval”. This interval is 8
hours by default. You can change this parameter via the Scan performance summary box in
the UniInt – Debug parameter category pane:
You configure one Scan Class Performance Point for each Scan Class in this Interface. From
the ICU, select this Interface from the Interface drop-down list and click UniInt-Performance
Points in the parameter category pane:
Right click the row for a particular Scan Class # to bring up the context menu:
You need not restart the Interface for it to write values to the Scan Class Performance Points.
To see the current values (snapshots) of the Scan Class Performance Points, right click and
select Refresh Snapshots.
Delete
To delete a Performance Point, right-click the line belonging to the tag to be deleted, and
select Delete.
Correct / Correct All
If the Status of a point is marked Incorrect, the point configuration can be corrected
automatically by the ICU: right-click the line belonging to the tag to be corrected and select
Correct. To correct all points, click the Correct All menu item. A point is marked Incorrect
when the ICU detects that it is not defined with the attribute values listed below.
The Performance Points are created with the following PI attribute values:
Attribute Details
Tag Tag name that appears in the list box
Point Source Point Source for tags for this interface, as specified on the first tab
Compressing Off
Excmax 0
Descriptor Interface name + “ Scan Class # Performance Point”
Rename
Right-click the line belonging to the tag and select “Rename” to rename the Performance
Point.
Column descriptions
Status
The Status column in the Performance Points table indicates whether the Performance Point
exists for the scan class in column 2.
Created – Indicates that the Performance Point does exist
Not Created – Indicates that the Performance Point does not exist
Deleted – Indicates that a Performance Point existed, but was just deleted by the user
Scan Class #
The Scan Class column indicates which scan class the Performance Point in the Tagname
column belongs to. There will be one scan class in the Scan Class column for each scan class
listed in the Scan Classes combo box on the UniInt Parameters tab.
Tagname
The Tagname column holds the Performance Point tag name.
PS
This is the point source used for these performance points and the interface.
Location1
This is the value used by the interface for the /ID=# point attribute.
Exdesc
This attribute tells the interface that these are performance points. Its value corresponds to
the /ID=# command-line parameter when multiple copies of the same interface are running
on the Interface Node.
Snapshot
The Snapshot column holds the snapshot value of each Performance Point that exists in PI.
The Snapshot column is updated when the Performance Points/Counters tab is clicked, and
when the interface is first loaded. You may have to scroll to the right to see the snapshots.
After installing the PI Performance Monitor Interface as a service, select this Interface
instance from the Interface drop-down list, then click Performance Counters in the parameter
categories pane, and right click on the row containing the Performance Counters Point you
wish to create. This will bring up the context menu:
Click Create to create the Performance Counters Point for that particular row. Click Create
All to create all the Performance Counters Points listed which have a status of Not Created.
To see the current values (snapshots) of the created Performance Counters Points, right click
on any row and select Refresh Snapshots.
Note: The PI Performance Monitor Interface – and not this Interface – is responsible
for updating the values for the Performance Counters Points in PI. So, make sure
that the PI Performance Monitor Interface is running correctly.
Performance Counters
In the following lists of Performance Counters the naming convention used will be:
“PerformanceCounterName” (.PerformanceCountersPoint Suffix)
The tag name the ICU creates for each Performance Counter point is based on the setting
found under Tools > Options > Naming Conventions > Performance Counter Points.
The default is "sy.perf.[machine].[if service]" followed by the Performance Counter
Point suffix.
The ICU uses a naming convention such that a tag containing "(Scan Class 1)" (for
example, "sy.perf.etamp390.E1(Scan Class 1).sched_scans_this_interval")
refers to Scan Class 1, "(Scan Class 2)" refers to Scan Class 2, and so on. The tag containing
"(_Total)" refers to the sum of all Scan Classes.
“Points removed from the interface” (.pts_removed_from_interface)
The .pts_removed_from_interface Performance Counters Point indicates the number of points
that have been removed from the Interface configuration. A point can be removed from the
interface when one of the tag properties for the interface is updated and the point is no longer
a part of the interface configuration. For example, changing the point source, location 1, or
scan property can cause the tag to no longer be a part of the interface configuration.
Interface Health Monitoring Points
Interface Health Monitoring Points provide information about the health of this Interface. To
use the ICU to configure these points, select this Interface from the Interface drop-down list
and click Health Points from the parameter category pane:
Right click the row for a particular Health Point to display the context menu:
Click Create to create the Health Point for that particular row. Click Create All to create all
the Health Points.
To see the current values (snapshots) of the Health Points, right click and select Refresh
Snapshots.
For some of the Health Points described subsequently, the Interface updates their values at
each performance summary interval (typically, 8 hours).
[UI_HEARTBEAT]
The [UI_HEARTBEAT] Health Point indicates whether the Interface is currently running.
The value of this point is an integer that increments continuously from 1 to 15. After reaching
15, the value resets to 1.
The fastest scan class frequency determines the frequency at which the Interface updates this
point:
Fastest Scan Frequency                 Update frequency
Less than 1 second                     1 second
Between 1 and 60 seconds, inclusive    Scan frequency
More than 60 seconds                   60 seconds
If the value of the [UI_HEARTBEAT] Health Point is not changing, then this Interface is in
an unresponsive state.
[UI_DEVSTAT]
The RDBMSPI Interface is built with UniInt 4.3+, where the new functionality has been
added to support health tags – the health tag with the point attribute
Exdesc = [UI_DEVSTAT] is used to represent the status of the source device.
The following events will be written into the tag:
"0 | Good | " the interface is properly communicating and gets data from/to the
RDBMS system via the given ODBC driver
"3 | 1 device(s) in error | " ODBC data source communication failure
"4 | Intf Shutdown | " the interface was shut down
Please refer to the UniInt Interface User Manual.doc file for more information on how to
configure health points.
[UI_SCINFO]
The [UI_SCINFO] Health Point provides scan class information. The value of this point is a
string that indicates
the number of scan classes;
the update frequency of the [UI_HEARTBEAT] Health Point; and
the scan class frequencies
An example value for the [UI_SCINFO] Health Point is:
3 | 5 | 5 | 60 | 120
In this example, the Interface has 3 scan classes, the [UI_HEARTBEAT] update frequency is
5 seconds, and the scan class frequencies are 5, 60 and 120 seconds.
The Interface updates the value of this point at startup and at each performance summary
interval.
[UI_IORATE]
The [UI_IORATE] Health Point indicates the sum of
1. the number of scan-based input values the Interface collects before it performs
exception reporting; and
2. the number of event-based input values the Interface collects before it performs
exception reporting; and
3. the number of values that the Interface writes to output tags that have a SourceTag.
The Interface updates this point at the same frequency as the [UI_HEARTBEAT] point. The
value of this [UI_IORATE] Health Point may be zero. A stale timestamp for this point
indicates that this Interface has stopped collecting data.
[UI_MSGCOUNT]
The [UI_MSGCOUNT] Health Point tracks the number of messages that the Interface has
written to the pipc.log file since start-up. In general, a large number for this point indicates
that the Interface is encountering problems. You should investigate the cause of these
problems by looking in pipc.log.
The Interface updates the value of this point every 60 seconds. While the Interface is running,
the value of this point never decreases.
[UI_POINTCOUNT]
The [UI_POINTCOUNT] Health Point counts the number of PI tags loaded by the interface. This
count includes all input, output and triggered input tags. This count does NOT include any
Interface Health tags or performance points.
The interface updates the value of this point at startup, on change and at shutdown.
[UI_OUTPUTRATE]
After performing an output to the device, this Interface writes the output value to the output
tag if the tag has a SourceTag. The [UI_OUTPUTRATE] Health Point tracks the number of
these values. If there are no output tags for this Interface, it writes the System Digital State No
Result to this Health Point.
The Interface updates this point at the same frequency as the [UI_HEARTBEAT] point’s.
The Interface resets the value of this point to zero at each performance summary interval.
[UI_OUTPUTBVRATE]
The [UI_OUTPUTBVRATE] Health Point tracks the number of System Digital State values
that the Interface writes to output tags that have a SourceTag. If there are no output tags for
this Interface, it writes the System Digital State No Result to this Health Point.
The Interface updates this point at the same frequency as the [UI_HEARTBEAT] point’s.
The Interface resets the value of this point to zero at each performance summary interval.
[UI_TRIGGERRATE]
The [UI_TRIGGERRATE] Health Point tracks the number of values that the Interface writes
to event-based input tags. If there are no event-based input tags for this Interface, it writes the
System Digital State No Result to this Health Point.
The Interface updates this point at the same frequency as the [UI_HEARTBEAT] point’s.
The Interface resets the value of this point to zero at each performance summary interval.
[UI_TRIGGERBVRATE]
The [UI_TRIGGERBVRATE] Health Point tracks the number of System Digital State values
that the Interface writes to event-based input tags. If there are no event-based input tags for
this Interface, it writes the System Digital State No Result to this Health Point.
The Interface updates this point at the same frequency as the [UI_HEARTBEAT] point’s.
The Interface resets the value of this point to zero at each performance summary interval.
[UI_SCIORATE]
You can create a [UI_SCIORATE] Health Point for each Scan Class in this Interface. The
ICU uses a tag naming convention such that the suffix “.sc1” (for example,
sy.st.etamp390.E1.Scan Class IO Rate.sc1) refers to Scan Class 1, “.sc2” refers to
Scan Class 2, and so on.
A particular Scan Class’s [UI_SCIORATE] point indicates the number of values that the
Interface has collected. If the current value of this point is between zero and the
corresponding [UI_SCPOINTCOUNT] point, inclusive, then the Interface executed the scan
successfully. If a [UI_SCIORATE] point stops updating, then this condition indicates that an
error has occurred and the tags for the scan class are no longer receiving new data.
The Interface updates the value of a [UI_SCIORATE] point after the completion of the
associated scan.
Although the ICU allows you to create the point with the suffix “.sc0”, this point is not
applicable to this Interface.
[UI_SCBVRATE]
You can create a [UI_SCBVRATE] Health Point for each Scan Class in this Interface. The
ICU uses a tag naming convention such that the suffix “.sc1” (for example,
sy.st.etamp390.E1.Scan Class Bad Value Rate.sc1) refers to Scan Class 1,
“.sc2” refers to Scan Class 2, and so on.
A particular Scan Class’s [UI_SCBVRATE] point indicates the number of System Digital State
values that the Interface has collected.
The Interface updates the value of a [UI_SCBVRATE] point after the completion of the
associated scan.
Although the ICU allows you to create the point with the suffix “.sc0”, this point is not
applicable to this Interface.
[UI_SCSCANCOUNT]
You can create a [UI_SCSCANCOUNT] Health Point for each Scan Class in this Interface.
The ICU uses a tag naming convention such that the suffix “.sc1” (for example,
sy.st.etamp390.E1.Scan Class Scan Count.sc1) refers to Scan Class 1, “.sc2”
refers to Scan Class 2, and so on.
A particular Scan Class’s [UI_SCSCANCOUNT] point tracks the number of scans that the
Interface has performed.
The Interface updates the value of this point at the completion of the associated scan. The
Interface resets the value to zero at each performance summary interval.
Although there is no “Scan Class 0”, the ICU allows you to create the point with the suffix
“.sc0”. This point indicates the total number of scans the Interface has performed for all of its
Scan Classes.
[UI_SCSKIPPED]
You can create a [UI_SCSKIPPED] Health Point for each Scan Class in this Interface. The
ICU uses a tag naming convention such that the suffix “.sc1” (for example,
sy.st.etamp390.E1.Scan Class Scans Skipped.sc1) refers to Scan Class 1, “.sc2”
refers to Scan Class 2, and so on.
A particular Scan Class’s [UI_SCSKIPPED] point tracks the number of scans that the
Interface was not able to perform before the scan time elapsed and before the Interface
performed the next scheduled scan.
The Interface updates the value of this point each time it skips a scan. The value represents
the total number of skipped scans since the previous performance summary interval. The
Interface resets the value of this point to zero at each performance summary interval.
Although there is no “Scan Class 0”, the ICU allows you to create the point with the suffix
“.sc0”. This point monitors the total skipped scans for all of the Interface’s Scan Classes.
[UI_SCPOINTCOUNT]
You can create a [UI_SCPOINTCOUNT] Health Point for each Scan Class in this Interface.
The ICU uses a tag naming convention such that the suffix “.sc1” (for example,
sy.st.etamp390.E1.Scan Class Point Count.sc1) refers to Scan Class 1, “.sc2”
refers to Scan Class 2, and so on.
This Health Point monitors the number of tags in a Scan Class.
The Interface updates a [UI_SCPOINTCOUNT] Health Point when it performs the associated
scan.
Although the ICU allows you to create the point with the suffix “.sc0”, this point is not
applicable to this Interface.
[UI_SCINSCANTIME]
You can create a [UI_SCINSCANTIME] Health Point for each Scan Class in this Interface.
The ICU uses a tag naming convention such that the suffix “.sc1” (for example,
sy.st.etamp390.E1.Scan Class Scan Time.sc1) refers to Scan Class 1, “.sc2”
refers to Scan Class 2, and so on.
A particular Scan Class’s [UI_SCINSCANTIME] point represents the amount of time (in
milliseconds) the Interface takes to read data from the device, fill in the values for the tags,
and send the values to the PI Server.
The Interface updates the value of this point at the completion of the associated scan.
[UI_SCINDEVSCANTIME]
You can create a [UI_SCINDEVSCANTIME] Health Point for each Scan Class in this
Interface. The ICU uses a tag naming convention such that the suffix “.sc1” (for example,
sy.st.etamp390.E1.Scan Class Device Scan Time.sc1) refers to Scan Class 1,
“.sc2” refers to Scan Class 2, and so on.
A particular Scan Class’s [UI_SCINDEVSCANTIME] point represents the amount of time
(in milliseconds) the Interface takes to read data from the device and fill in the values for the
tags.
The value of a [UI_SCINDEVSCANTIME] point is a fraction of the corresponding
[UI_SCINSCANTIME] point value. You can use these numbers to determine the percentage
of time the Interface spends communicating with the device compared with the percentage of
time communicating with the PI Server.
If the [UI_SCSKIPPED] value is increasing, the [UI_SCINDEVSCANTIME] points along
with the [UI_SCINSCANTIME] points can help identify where the delay is occurring:
whether the reason is communication with the device, communication with the PI Server, or
elsewhere.
The Interface updates the value of this point at the completion of the associated scan.
As the preceding picture shows, the ICU suggests an Event Counter number and a Tagname
for the I/O Rate Point. Click the Save button to save the settings and create the I/O Rate point.
Click the Apply button to apply the changes to this copy of the Interface.
You need to restart the Interface in order for it to write a value to the newly created I/O Rate
point. Restart the Interface by clicking the Restart button:
(The reason you need to restart the Interface is that the PointSource attribute of an I/O Rate
point is Lab.)
To confirm that the Interface recognizes the I/O Rate Point, look in the pipc.log for a
message such as:
PI-ModBus 1> IORATE: tag sy.io.etamp390.ModbusE1 configured.
To see the I/O Rate point’s current value (snapshot), click the Refresh snapshot button:
Event Counter
The Event Counter correlates a tag specified in the iorates.dat file with this copy of the
interface. The command-line equivalent is /ec=x, where x is the same number that is
assigned to a tag name in the iorates.dat file.
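For illustration, if the I/O Rate tag were assigned event counter 2, the corresponding iorates.dat line and startup parameter might look as follows; the tag name is hypothetical and simply follows the naming pattern used in this chapter:
sy.io.etamp390.RDBMSPI1, 2
and, in the interface startup file:
/ec=2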
Tagname
The tag name listed under the Tagname column is the name of the I/O Rate tag.
Tag Status
The Tag Status column indicates whether the I/O Rate tag exists in PI. The possible states
are:
Created – This status indicates that the tag exists in PI
Not Created – This status indicates that the tag does not yet exist in PI
Deleted – This status indicates that the tag has just been deleted
Unknown – This status indicates that the PI ICU is not able to access the PI
Server
In File
The In File column indicates whether the I/O Rate tag listed in the tag name and the event
counter is in the IORates.dat file. The possible states are:
Yes – This status indicates that the tag name and event counter are in the
IORates.dat file
No – This status indicates that the tag name and event counter are not in the
IORates.dat file
Snapshot
The Snapshot column holds the snapshot value of the I/O Rate tag, if the I/O Rate tag exists
in PI. The Snapshot column is updated when the IORates/Status Tags tab is clicked, and
when the Interface is first loaded.
Create
Create the suggested I/O Rate tag with the tag name indicated in the Tagname column.
Delete
Delete the I/O Rate tag listed in the Tagname column.
Rename
Allow the user to specify a new name for the I/O Rate tag listed in the Tagname column.
Add to File
Add the tag to the IORates.dat file with the event counter listed in the Event Counter Column.
Search
Allow the user to search the PI Server for a previously defined I/O Rate tag.
An unchanging timestamp for the Watchdog Tag indicates that the monitored interface is not
writing data.
Please see the PI Interface Status Utility documentation for complete information on using
the ISU. The PI Interface Status Utility runs only on a PI Server Node.
If you have used the ICU to configure the PI Interface Status Utility on the PI Server Node,
the ICU allows you to create the appropriate ISU point. Select this Interface from the
Interface drop-down list and click Interface Status in the parameter category pane. Right
click on the ISU tag definition window to bring up the context menu:
Note: The PI Interface Status Utility – and not this Interface – is responsible for
updating the ISU tag. So, make sure that the PI Interface Status Utility is running
correctly.
Appendix A. Error and Informational Messages
The string RDBMSPI'ID' is prefixed to error messages written to the message log. RDBMSPI is
a non-configurable identifier; ID is a configurable identifier, no longer than 9 characters,
that is specified using the /id parameter on the startup command line.
General information messages are written to the pipc.log file; in addition, all PI API and
buffering related errors are also directed there. The location of the pipc.log file is
determined by the PIHOME entry in the pipc.ini file. The pipc.ini file should always
be in the WinNT directory. For example, if the PIHOME entry is
C:\PIPC
then the pipc.log file will be located in the c:\PIPC\dat directory.
Messages are written to PIHOME\dat\pipc.log at the following times.
When the Interface starts, many informational messages are written to the log.
These include the version of the interface, the version of UniInt, the
command-line parameters used, and the number of points.
As the Interface loads points, messages are sent to the log if there are any
problems with the configuration of the points.
If the /db switch is used on the command line, various debug messages are written to the
log file. /db is a UniInt start-up switch; see the relevant UniInt documentation for more
about it. With this interface, however, it is recommended to use the /deb parameter
instead, as illustrated below.
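For illustration only, enabling a low debug level could look like the following fragment of the startup command line. The executable name and the other parameters are the generic ones assumed earlier in this manual, and the meaning of the individual /deb levels is described in the Startup Command File chapter:
RDBMSPI.exe /ps=S /id=1 /deb=1 ...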
Note: For PI API version 1.3 and greater, a process called pilogsrv can be installed
to run as a service. After the pipc.log file exceeds a user-defined maximum size,
the pilogsrv process renames the pipc.log file to pipcxxxx.log , where xxxx
ranges from 0000 to the maximum number of allowed log files. Both the maximum
file size and the maximum number of allowed log files are configured in the
pipc.ini file. Configuration of the pilogsrv process is discussed in detail in the PI
API Installation Instructions manual.
Note: Errors related to tag values are also reported by giving the tag a Bad Input
or Bad Output state. This happens if the status of an RDBMS value is BAD or the
output operation failed. Points can also get a status of I/O Timeout if the interface
detects connection problems.
Messages
Informational
Errors (Phase 1 & 2)
Errors (Phase 2)
Appendix B. PI SDK Options
To access the PI SDK settings for this Interface, select this Interface from the Interface drop-
down list and click UniInt – PI SDK in the parameter category pane.
Disable PI SDK
Select Disable PI SDK to tell the Interface not to use the PI SDK. If you want to run the
Interface in Disconnected Startup mode, you must choose this option.
The command line equivalent for this option is /pisdk=0.
Enable PI SDK
Select Enable PI SDK to tell the Interface to use the PI SDK. Choose this option if the PI
Server version is earlier than 3.4.370.x or the PI API is earlier than 1.6.0.2, and you want to
use extended lengths for the Tag, Descriptor, ExDesc, InstrumentTag, or
PointSource point attributes. The maximum lengths for these attributes are:
However, if you want to run the Interface in Disconnected Startup mode, you must not
choose this option.
The command line equivalent for this option is /pisdk=1.
Note: Location2 is set to zero. This setting makes sure the interface takes just
one row from the SELECTed result-set. See Location2 for more details.
SQL Statement
(defined in file PI_STRING1.SQL)
SELECT PI_TIMESTAMP, PI_VALUE, 0 FROM T1_2 WHERE PI_TIMESTAMP > ?
ORDER BY PI_TIMESTAMP ASC;
Relevant PI Point Attributes
Extended Descriptor   Location1   Location2   Location3   Location4   Location5
P1=TS                 1           1           0           1           0
Instrumenttag     Point Type   Point Source
PI_STRING1.SQL    String       S
RDBMS Table Design
Table T1_2
PI_TIMESTAMP               PI_VALUE
Datetime (MS SQL Server)   Varchar(1000) (MS SQL Server)
Date/Time (MS Access)      Text(255) (MS Access)
Example 1.3 – three PI points forming a GROUP
SQL Statement
(defined in file PI_INT_GROUP1.SQL)
SELECT PI_TIMESTAMP, PI_VALUE1, 0 ,PI_VALUE2, 0, PI_VALUE3, 0 FROM T1_3 WHERE
PI_TIMESTAMP > ? ORDER BY PI_TIMESTAMP ASC;
Relevant PI Point Attributes
Extended Descriptor (Master Tag):  P1=TS
Location1 (All points):            1
Location2 (All points):            1
Location3:                         Target_Point1: 2, Target_Point2: 4, Target_Point3: 6
Location4 (All points):            1
Location5 (All points):            0
Instrumenttag (All Points)   Point Type (All Points)   Point Source
PI_INT_GROUP1.SQL            Int32                     S
RDBMS Table Design
Table T1_3
PI_TIMESTAMP               PI_VALUEn
Datetime (MS SQL Server)   Smallint (MS SQL Server)
Date/Time (MS Access)      Number (Whole Number) (MS Access)
Note: See also section: Detailed Description of Information the Distributor Tags
Store.
Example 1.5 – RxC Distribution
SQL Statement
(defined in file PI_REAL_DISTR1.SQL)
SELECT sampletime AS PI_TIMESTAMP1, 'Tag1' AS PI_TAGNAME1, [level] AS PI_VALUE1,
sampletime AS PI_TIMESTAMP2, 'Tag2' AS PI_TAGNAME2, temperature AS PI_VALUE2,
temperature_status AS PI_STATUS2, sampletime AS PI_TIMESTAMP3,'Tag3' AS
PI_TAGNAME3, density AS PI_VALUE3, density_status AS PI_STATUS3 FROM T1_5 WHERE
sampletime > ? AND tank = 'Tank1'
Relevant PI Point Attributes
Extended Descriptor (RxC Distributor):  P1=TS
Location1 (All points):                 1
Location2 (All points):                 0
Location3:                              -2 ('RxC Distributor'), 0 ('Target points')
Location4 (All points):                 1
Location5 (All points):                 0
Instrumenttag (Distributor)   Point Type (All points)   Point Source (All Points)
PI_REAL_DISTR_RxC.SQL         Float32                   S
RDBMS Table Design
Table T1_5
SAMPLETIME                 LEVEL, TEMPERATURE, DENSITY              LEVEL_STATUS, TEMPERATURE_STATUS, DENSITY_STATUS   TANK
Datetime (MS SQL Server)   Real (MS SQL Server)                     Varchar(12) (MS SQL Server)                        Varchar(80) (MS SQL Server)
Date/Time (MS Access)      Number (Single Precision) (MS Access)    Text(12) (MS Access)                               Text(80) (MS Access)
Note: See also section: Detailed Description of Information the Distributor Tags
Store.
P1=TS          1   1   0   1   1
PI_ANNO1.SQL   Float32   S
RDBMS Table Design
Table T1_6
TIME                       VALUE                                    ANNOTATION
Datetime (MS SQL Server)   Real (MS SQL Server)                     Varchar(255) (MS SQL Server)
Date/Time (MS Access)      Number (Single Precision) (MS Access)    Text(50) (MS Access)
Example 2.1a – insert sinusoid values into table (event based)
SQL Statement
(defined in file PI_SINUSOID_OUT.SQL)
INSERT INTO T2_1a (PI_TIMESTAMP1, PI_VALUE, PI_STATUS) VALUES (?,?,?);
Relevant PI Point Attributes
Extended Descriptor    Location1   Location2   Location3   Location4   Location5
P1=TS P2=VL P3=SS_I    1           0           0           0           0
Instrumenttag          Point Type   Source Tag   Point Source
PI_SINUSOID_OUT.SQL    Float32      SINUSOID     S
RDBMS Table Design
Table T2_1a
PI_TIMESTAMPn              PI_VALUE                       PI_STATUS
Datetime (MS SQL Server)   Real (MS SQL Server)           Smallint (MS SQL Server)
Date/Time (MS Access)      Single Precision (MS Access)   Whole Number (MS Access)
Example 2.1c – insert 2 different sinusoid values into table (event
based)
SQL Statement
(defined in file PI_SIN_VALUES_OUT.SQL)
INSERT INTO T2_1c (PI_TAGNAME1, PI_TIMESTAMP1, PI_VALUE1, PI_STATUS1,
PI_TAGNAME2, PI_VALUE2, PI_STATUS2) VALUES (?,?,?,?,?,?,?);
Relevant PI Point Attributes
Extended Descriptor                  Location1   Location2   Location3   Location4   Location5
/EXD=…path…\pi_sin_values_out.plh    1           0           0           0           0
Content of the above-stated file:
P1=AT.TAG
P2=TS
P3=VL
P4=SS_I
P5='SINUSOIDU'/AT.TAG
P6='SINUSOIDU'/VL
P7='SINUSOIDU'/SS_I
Instrumenttag           Point Type   Source Tag   Point Source
PI_SIN_VALUES_OUT.SQL   Float16      SINUSOID     S
RDBMS Table Design
Table T2_1c
PI_TIMESTAMPn              PI_VALUEn                      PI_STATUSn                 PI_TAGNAMEn
Datetime (MS SQL Server)   Real (MS SQL Server)           Smallint (MS SQL Server)   Varchar(80) (MS SQL Server)
Date/Time (MS Access)      Single Precision (MS Access)   Whole Number (MS Access)   Text(80) (MS Access)
Note: The /EXD= keyword is used when the overall length of the placeholder definitions is
greater than 1024 bytes. Normally, the placeholder definitions can be stated directly in the
Extended Descriptor.
P1=TS      1   0   0   0   0
P2=VL
P3=ANN_C
Instrumenttag   Point Type   Source Tag   Point Source
Example 3.1 – Field Name Aliases
SQL Statement
(defined in file PI_STRING2.SQL)
SELECT VALIDITY AS PI_STATUS, SCAN_TIME AS PI_TIMESTAMP, VOLUME AS PI_VALUE
FROM T3_1 WHERE KEY_VALUE = ?;
Relevant PI Point Attributes
Extended Descriptor   Location1   Location2   Location3   Location4   Location5
P1="Key_1234"         1           0           0           1           0
Instrumenttag     Point Type   Point Source
PI_STRING2.SQL    String       S
RDBMS Table Design
Table T3_1
SCAN_TIME                  VOLUME                          VALIDITY                   KEY_VALUE
Datetime (MS SQL Server)   Varchar(1000) (MS SQL Server)   Smallint (MS SQL Server)   Varchar(50) (MS SQL Server)
Date/Time (MS Access)      Text(255) (MS Access)           Whole Number (MS Access)   Text(50) (MS Access)
Example 3.3 – Tag Group, Arbitrary Column Position – Aliases
SQL Statement
(file PI_GR2.SQL)
SELECT PI_TIMESTAMP, PI_VALUE1, PI_VALUE2, PI_STATUS1=0, PI_STATUS2=0 FROM
T3_3 WHERE PI_TIMESTAMP > ? ORDER BY PI_TIMESTAMP ASC;
or
SELECT PI_TIMESTAMP, VALUE1 AS PI_VALUE1, VALUE2 AS PI_VALUE2, 0 AS
PI_STATUS1, 0 AS PI_STATUS2 FROM T3_3 WHERE PI_TIMESTAMP > ? ORDER BY
PI_TIMESTAMP ASC;
Relevant PI Point Attributes
Tag             Instrumenttag   Extended Descriptor   Location1   Location2   Location3   Location4
Target_Point1   PI_GR2.SQL      P1=TS                 1           1           1           1
Target_Point2   PI_GR2.SQL                            1           1           2           1
RDBMS Table Data
Table T3_3
PI_TIMESTAMP           PI_VALUE1   PI_VALUE2
20-Oct-2000 08:10:00   1.123       4.567
20-Oct-2000 08:10:10   2.124       5.568
20-Oct-2000 08:10:20   3.125       6.569
20-Oct-2000 08:10:30   4.126       7.570
Example 3.4b – Tag Distribution, Search According to Tag's ALIAS
Name
SQL Statement
(file PI_DIST2.SQL)
SELECT TIME, PI_ALIAS, VALUE,0 FROM T3_4b WHERE TIME > ?;
Relevant PI Point Attributes
Tag    Instrumenttag   Extended Descriptor   Location1   Location3   Location4
Tag1   PI_DIST2.SQL    P1=TS                 1           -1          1
Tag2                   /ALIAS=Valve1         1                       1
Tag3                   /ALIAS=Valve2         1                       1
Tag4                   /ALIAS=Valve3         1                       1
RDBMS Table Data
Table T3_4b
Time                   PI_Alias   Value
20-Oct-2000 08:10:00   Valve1     "Open"
20-Oct-2000 08:10:00   Valve2     "Closed"
20-Oct-2000 08:10:00   Valve3     "N/A"
Example 3.4d – Tag Distribution with Auxiliary Table Keeping Latest
Snapshot
SQL Statement
(file PI_DIST4.SQL)
SELECT T3_4data.time, T3_4data.tag, T3_4data.value, 0 AS status FROM T3_4data INNER
JOIN T3_4snapshot ON T3_4data.tag=T3_4snapshot.tag WHERE T3_4data.time >
T3_4snapshot.time;
UPDATE T3_4snapshot SET time=(SELECT MaxTimeTag.maxTime FROM
(SELECT DISTINCT (SELECT MAX(time) FROM T3_4data WHERE tag=TdataTmp.tag) As
MaxTime, tag FROM T3_4data TdataTmp) MaxTimeTag
INNER JOIN T3_4snapshot TsnapshotTmp ON MaxTimeTag.tag=TsnapshotTmp.tag WHERE
T3_4snapshot.tag=MaxTimeTag.tag)
Relevant PI Point Attributes
Tag    Instrumenttag   Extended Descriptor   Location1   Location3   Location4
Tag1   PI_DIST4.SQL                          1           -1          1
Tag2                                         1                       1
RDBMS Table Design
Table T3_4data
tag                            time                       value                  status
Varchar(255) (MS SQL Server)   DateTime (MS SQL Server)   Real (MS SQL Server)   Integer (MS SQL Server)
Table T3_4snapshot
tag                            time
Varchar(255) (MS SQL Server)   DateTime (MS SQL Server)
Explanation:
The T3_4snapshot table has to contain a list of all 'Target Points' and, at the very beginning,
also the initial timestamps (the time column in T3_4snapshot cannot be NULL). The first
statement (the SELECT) thus delivers all the rows from T3_4data whose time is greater
than the time column of T3_4snapshot. The UPDATE statement then retrieves the most
recent timestamps – MAX(time) – from T3_4data and updates T3_4snapshot. During the
next scan, the JOIN makes sure only the new entries from T3_4data are SELECTed.
Explanation:
The time window is created with the MS SQL Server function GETDATE() (which returns the
current time); subtracting (1./24.) means one hour. The interface therefore has to have the
/RBO start-up parameter specified to avoid duplicates in the PI Archive. A sketch of such a
statement is shown below.
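The SQL statement this explanation refers to is not reproduced on this page. A minimal sketch of such a one-hour time window, reusing (as an assumption) the T3_4data table and column names from the preceding example, could look like this:
SELECT time, tag, value, 0 AS status FROM T3_4data
WHERE time > GETDATE() - (1./24.);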
Example 3.5 – Tag Distribution with Aliases in Column Names
SQL Statement
(file PI_DIST3.SQL)
SELECT NAME AS PI_TAGNAME, VALUE AS PI_VALUE , STATUS AS PI_STATUS,
DATE_TIME AS PI_TIMESTAMP FROM T3_5 WHERE NAME LIKE ?;
Relevant PI Point Attributes
Extended Descriptor (Distributor):     P1="Key_123%"
Extended Descriptor (Target points):   /ALIAS='value retrieved from NAME column'
Location1 (All points):                1
Location2:                             Not evaluated
Location3 (All points):                -1
Location4 (All points):                1
Location5 (All points):                0
Instrumenttag (Distributor)   Point Type   Point Source
PI_DIST3.SQL                  Float32      S
RDBMS Table Design
Table T3_5
DATE_TIME                  NAME                       VALUE                   STATUS
Datetime (MS SQL Server)   Char(80) (MS SQL Server)   Real (MS SQL Server)    Real (MS SQL Server)
Date/Time (MS Access)      Text(80) (MS Access)       Text(255) (MS Access)   Text(12) (MS Access)
Example 3.7 – Event Based Input
SQL Statement
(file PI_EVENT.SQL)
SELECT PI_TIMESTAMP, PI_VALUE, PI_STATUS FROM T3_7;
Relevant PI Point Attributes
Extended Descriptor   Location1   Location2   Location3   Location4       Location5
/EVENT=sinusoid       1           0           0           Not evaluated   0
InstrumentTag    Point Type   Point Source
PI_EVENT.SQL     String       S
RDBMS Table Design
Table T3_7
PI_TIMESTAMP               PI_VALUE                        PI_STATUS
Datetime (MS SQL Server)   Varchar(1000) (MS SQL Server)   Smallint (MS SQL Server)
Date/Time (MS Access)      Text(255) (MS Access)           Byte (MS Access)
P1=TS      1   0   0   0   0
P2=VL
P3=SS_I
P4=TS
InstrumentTag   Point Type   Source Tag   Point Source
PI_MULTI.SQL    Float32      SINUSOID     S
RDBMS Table Design
Table T3_8
PI_TIMESTAMP               PI_VALUE                            PI_STATUS
Datetime (MS SQL Server)   SmallInt (MS SQL Server)            Smallint (MS SQL Server)
Date/Time (MS Access)      Number (Whole Number) (MS Access)   Number (Single Precision) (MS Access)
Example 3.9 – Stored Procedure Call
SQL Statement
{CALL SP_T3_9(?,?)};
Stored procedure definition
CREATE PROCEDURE SP_T3_9 @Start_Time DateTime, @End_Time DateTime AS
SELECT PI_TIMESTAMP,PI_VALUE,PI_STATUS FROM T3_9 WHERE PI_TIMESTAMP
BETWEEN @Start_Time AND @End_Time
P1=TS P2=VL   1   0   0   0   0
P3=SS_I
InstrumentTag   Point Type   Source Tag   Point Source
PI_EVOUT1.SQL   Float16      SINUSOID     S
RDBMS Table Design
Table T3_10
PI_TIMESTAMP               PI_VALUE               PI_STATUS
Datetime (MS SQL Server)   Real (MS SQL Server)   Smallint (MS SQL Server)
Date/Time (MS Access)      Byte (MS Access)       Number (Whole Number) (MS Access)
Example 3.11 – Output Triggered by 'Sinusoid', Values Taken from
'TagDig'
SQL Statement
(file PI_EVOUT2.SQL)
UPDATE T3_11 SET PI_TIMESTAMP=?, PI_VALUE=?, PI_STATUS_I=?,
PI_STATUS_STR=?;
Relevant PI Point Attributes
Extended Descriptor   Location1   Location2   Location3   Location4   Location5
P1='TagDig'/TS        1           0           0           0           0
P2='TagDig'/VL
P3='TagDig'/SS_I
P4='TagDig'/SS_C
InstrumentTag   Point Type   Source Tag   Point Source
PI_EVOUT2.SQL   Float16      SINUSOID     S
RDBMS Table Design
Table T3_11
PI_TIMESTAMP               PI_VALUE                   PI_STATUS_I                             PI_STATUS_STR
Datetime (MS SQL Server)   Char(12) (MS SQL Server)   Smallint (MS SQL Server)                Varchar(20) (MS SQL Server)
Date/Time (MS Access)      Text(12) (MS Access)       Number (Single Precision) (MS Access)   Text(12) (MS Access)
P1=G1 P2=G4   1   0   0   1   0
P3=G5 P4=G6
InstrumentTag   Point Type   Point Source
PI_G1.SQL       Int16        S
RDBMS Table Design
Table T3_12
PI_TIMESTAMP               PI_TAGNAME                 PI_VALUE                                PI_STATUS
Datetime (MS SQL Server)   Char(50) (MS SQL Server)   Real (MS SQL Server)                    Char(12) (MS SQL Server)
Date/Time (MS Access)      Text(50) (MS Access)       Number (Single Precision) (MS Access)   Text(12) (MS Access)
Content of the global variables file
G1='sinusoid'/TS G2="any_string1" G3="any_string2" G4='sinusoid'/AT.TAG G5='sinusoid'/VL
G6='sinusoid'/SS_C …
Example 4.1 – PI Point Database Changes – Short Form Configuration
SQL Statement
(file PI_TAGCHG1.SQL)
INSERT INTO T4_1 (TAG_NAME, ATTRIBUTE_NAME, CHANGE_DATETIME, CHANGER,
NEW_VALUE, OLD_VALUE) VALUES (?, ?, ?, ?, ?, ?);
Relevant PI Point Attributes
Extended Descriptor   Location1   Location2   Location3   Location4                Location5
P1=AT.TAG             1           0           0           -1                       0
P2=AT.ATTRIBUTE                                           (Marks the tag as the
P3=AT.CHANGEDATE                                          managing point for
P4=AT.CHANGER                                             point changes)
P5=AT.NEWVALUE
P6=AT.OLDVALUE
InstrumentTag    Point Type   Point Source
PI_TAGCHG1.SQL   Int32        S
RDBMS Table Design
Table T4_1
TAG_NAME                      ATTRIBUTE_NAME                CHANGE_DATETIME            CHANGER
Varchar(80) (MS SQL Server)   Varchar(80) (MS SQL Server)   Datetime (MS SQL Server)   Varchar(80) (MS SQL Server)
Text(80) (MS Access)          Text(80) (MS Access)          Date/Time (MS Access)      Text(80) (MS Access)
NEW_VALUE                     OLD_VALUE
Varchar(80) (MS SQL Server)   Varchar(80) (MS SQL Server)
Text(80) (MS Access)          Text(80) (MS Access)
Example 5.1 – Batch Export (not requiring Module Database)
SQL Statement
(file PI_BA1.SQL)
INSERT INTO T5_1 (BA_ID,BA_UNITID,BA_PRODUCT,BA_START,BA_END) VALUES (?,?,?,?,?);
Relevant PI Point Attributes
Extended Descriptor   Location1   Location2   Location3   Location4   Location5
P1=BA.BAID            1           0           0           1           0
P2=BA.UNIT
P3=BA.PRID
P4=BA.START
P5=BA.END
Point Type   InstrumentTag   Point Source
Float32      PI_BA1.SQL      S
RDBMS Table Design
Table T5_1
BA_ID, BA_UNITID, BA_PRODUCT    BA_START, BA_END
Varchar(1024) (MS SQL Server)   Datetime (MS SQL Server)
Text(255) (MS Access)           Date/Time (MS Access)
/BA.START="*-10d"   1   0   0   1   0
P1=BA.START
P2=BA.END
P3=BA.ID
P4=BA.PRODID
P5=BA.RECID
P6=BA.GUID
Point Type   InstrumentTag   Point Source
Float32      PI_BA2a.SQL     S
RDBMS Table Design
Table T5_2a
BA_ID, BA_PRODUCT, BA_RECIPE, BA_GUID   BA_START, BA_END
Varchar(1024) (MS SQL Server)           Datetime (MS SQL Server)
Text(255) (MS Access)                   Date/Time (MS Access)
Example 5.2b – UnitBatch Export (Module Database required)
SQL Statement
(file PI_BA2b.SQL)
INSERT INTO T5_2b (UB_START,UB_END, UB_ID,
UB_PRODUCT,UB_PROCEDURE,BA_GUID,UB_GUID) VALUES (?,?,?,?,?,?,?);
Relevant PI Point Attributes
Extended Descriptor   Location1   Location3   Location4   Location5
/UB.START="*-10d"     1           0           1           0
/SB_TAG="SBTag"
P1=UB.START
P2=UB.END
P3=UB.ID
P4=UB.PRODID
P5=UB.PROCID
P6=BA.GUID
P7=UB.GUID
Point Type   InstrumentTag   Point Source
Float32      PI_BA2b.SQL     S
RDBMS Table Design
Table T5_2b
UB_ID, UB_PRODUCT, UB_PROCEDURE, UB_GUID, BA_GUID   UB_START, UB_END
Varchar(1024) (MS SQL Server)                       Datetime (MS SQL Server)
Text(255) (MS Access)                               Date/Time (MS Access)
Example 6.1 – Last One Hour of 'Sinusoid'
SQL Statement
(file PI_IU1.SQL)
UPDATE PI_INSERT_UPDATE_1ROW SET PI_TSTAMP=?, PI_VALUE=?, PI_STATUS=?;
UPDATE PI_INSERT_UPDATE RIGHT JOIN PI_INSERT_UPDATE_1ROW ON {Fn
MINUTE(PI_INSERT_UPDATE_1ROW.PI_TSTAMP)}={Fn
MINUTE(PI_INSERT_UPDATE.PI_TSTAMP)}
SET PI_INSERT_UPDATE.PI_TSTAMP = PI_INSERT_UPDATE_1ROW.PI_TSTAMP,
PI_INSERT_UPDATE.PI_VALUE = PI_INSERT_UPDATE_1ROW.PI_VALUE,
PI_INSERT_UPDATE.PI_STATUS = PI_INSERT_UPDATE_1ROW.PI_STATUS;
Relevant PI Point Attributes
Extended Descriptor   Location1   Location2   Location3   Location4   Location5
P1=TS                 1           0           0           0           0
P2=VL
P3=SS_I
InstrumentTag   Point Type   Source Tag   Point Source
PI_IU1.SQL      Float16      SINUSOID     S
RDBMS Table Design
Tables PI_INSERT_UPDATE_1ROW and PI_INSERT_UPDATE
PI_TSTAMP (PK)          PI_VALUE                                PI_STATUS
Date/Time (MS Access)   Number (Single Precision) (MS Access)   Number (Whole Number) (MS Access)
ORDER BY TIMESTAMP
Timestamp/value pairs must arrive ordered by timestamp. Otherwise, the interface cannot
perform exception reporting and the PI Server cannot do compression. Including an
ORDER BY clause on the timestamp column in the SELECT statement is therefore
recommended.
Consider using the /NO_INPUT_ERROR start-up parameter when the relational database is
shut down periodically, for instance, for maintenance purposes.
No Data (Input)
The status column is mandatory when "not aliased" column names are used.
The timestamp column must be of the data type SQL_TIMESTAMP.
The timestamp placeholders are internally bound to the type SQL_TIMESTAMP;
there is therefore no need to care about the timestamp format.
If the query is specified directly in the Extended Descriptor, the query string
must be preceded by the /SQL= keyword. The statement must be in double quotes
and ended with a semicolon (see the sketch after this list).
Distribution target tags must (mostly) be in the same scan class as the distributor
tag.
/ALIAS comparison is case sensitive.
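For illustration, an Extended Descriptor that carries the query directly might look like the following; the statement itself simply reuses the T1_2 table from an earlier example, and the point of the sketch is only the /SQL= syntax (double quotes, trailing semicolon) together with the P1=TS placeholder definition:
/SQL="SELECT PI_TIMESTAMP, PI_VALUE, 0 FROM T1_2 WHERE PI_TIMESTAMP > ?;" P1=TS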
Version 3.0 of the RDBMSPI Interface is a major revision (as version 2.0 was for version
1.x), and many enhancements have been made that did not fit into the design of the previous
version. Be aware that version 3.x of the RDBMSPI interface:
Is not available for Windows NT.
Requires the PI SDK for some tasks.
No longer supports the /sr parameter that set the Sign-Up-For-Updates scan period;
this parameter has been removed.
Note: Since 3.11.0.0, there is the /UPDATEINTERVAL parameter that allows for
setting the sign-up-for-update rate.
The /skip_time switch has been removed. See the /perf start-up parameter
description in the Startup Command File chapter.
The following minor changes may affect compatibility with a previous
configuration:
If not already installed, install the current release of the PI SDK
(which also includes the latest PI API).
CAUTION! Since RDBMSPI version 3.14 (and UniInt 4.1.2), the interface
does NOT explicitly log in to PI anymore. Users always have to configure the trust entry
for this interface (in the trust table on the PI Server). Delete the *.PI_PWD file (if there
is one in the directory where the /output= parameter points) and remove the
/user_pi= and /pass_pi= from the interface start-up file.
CAUTION! RDBMSPI version 3.18.1 changed the implementation of the
/recovery_time start-up parameter when combined with the /utc start-up parameter. If
/utc is set, the specified recovery time is NOT transformed to UTC and is
interpreted as local time.
Now proceed with running the set-up program as described in section Interface Installation.
Perform all configuration steps and, optionally, use existing configuration files from the
backup.
Tested RDBMSs
RDBMS                                    ODBC Driver
Oracle (NT)                              Oracle ODBC Driver
  8.0.5 (Oracle 8)                       (http://www.oracle.com/technology/software/tech/windows/odbc/index.html)
  9.0.1 (Oracle 9i)                        8.0.5.0.0.0, 8.01.73.00, 9.00.11.00, 9.00.15.00, 11.01.00.06
  10.1 (Oracle 10g)                      DataDirect (www.datadirect-technologies.com)
  11.1 (Oracle 11g)                        4.10.00.4
Microsoft SQL Server                     (http://msdn.microsoft.com/data, see the latest MDAC)
  7.00 (SQL Server 7.0)                    03.70.0820, 2000.80.194.00, 2000.81.9031.14,
  8.00 (SQL Server 2000)                   2005.90.1399.00, 6.01.7601, 2009.100.2500.00,
  9.00 (SQL Server 2005)                   2011.110.2100.60
  10.00 (SQL Server 2008, 2008R2)
  11.00 (SQL Server 2012)
DB2 (NT platform)                        06.01.0000, 07.01.0000
Informix (NT platform)                   02.80.0008, 2.20 TC1, 07.31.0000 TC5
Ingres II (NT platform)                  3.50.00.11 (Some tests FAILED!)
  Advantage Ingres Version 2.6
Sybase (NT platform)                     3.50.00.10
  12 ASE
Microsoft Access
  2000                                   4.00.5303.01
  2002                                   4.00.6200.00
  2003                                   6.01.7601.17632
  2007
  2010
MySQL Server (NT platform)               MySQL ODBC 5.1 driver (5.01.04.00)
  5.0.67
Appendix G. Technical Support and Resources
You can read complete information about technical support options, and access all of the
following resources at the OSIsoft Technical Support Web site:
http://techsupport.osisoft.com
You can contact OSIsoft Technical Support 24 hours a day. Use the numbers in the table
below to find the most appropriate number for your area. Dialing any of these numbers will
route your call into our global support queue to be answered by engineers stationed around
the world.
Office Location Access Number Local Language Options
San Leandro, CA, USA 1 510 297 5828 English
Philadelphia, PA, USA 1 215 606 0705 English
Johnson City, TN, USA 1 423 610 3800 English
Montreal, QC, Canada 1 514 493 0663 English, French
Sao Paulo, Brazil 55 11 3053 5040 English, Portuguese
Frankfurt, Germany 49 6047 989 333 English, German
Manama, Bahrain 973 1758 4429 English, Arabic
Singapore 65 6391 1811 English, Mandarin
86 021 2327 8686 Mandarin
Perth, WA, Australia 61 8 9282 9220 English
Support may be provided in languages other than English in certain centers (listed above)
based on availability of attendants. If you select a local language option, we will make best
efforts to connect you with an available Technical Support Engineer (TSE) with that language
skill. If no local language TSE is available to assist you, you will be routed to the first
available attendant.
If all available TSEs are busy assisting other customers when you call, you will be prompted
to remain on the line to wait for the next available TSE or else leave a voicemail message. If
you choose to leave a message, you will not lose your place in the queue. Your voicemail
will be treated as a regular phone call and will be directed to the first TSE who becomes
available.
If you are calling about an ongoing case, be sure to reference your case number when you call
so we can connect you to the engineer currently assigned to your case. If that engineer is not
available, another engineer will attempt to assist you.
Search Support
From the OSIsoft Technical Support Web site, click Search Support.
Quickly and easily search the OSIsoft Technical Support Web site’s Support Solutions,
Documentation, and Support Bulletins using the advanced MS SharePoint search engine.
[email protected]
When contacting OSIsoft Technical Support by email, it is helpful to send the following
information:
Description of issue: Short description of issue, symptoms, informational or error
messages, history of issue
Log files: See the product documentation for information on obtaining logs
pertinent to the situation.
From the OSIsoft Technical Support Web site, click Contact us > My Support > My Calls.
Using OSIsoft’s Online Technical Support, you can:
Enter a new call directly into OSIsoft’s database (monitored 24 hours a day)
View or edit existing OSIsoft calls that you entered
View any of the calls entered by your organization or site, if enabled
See your licensed software and dates of your Service Reliance Program
agreements
Remote Access
From the OSIsoft Technical Support Web site, click Contact Us > Remote Support Options.
OSIsoft Support Engineers may remotely access your server in order to provide hands-on
troubleshooting and assistance. See the Remote Access page for details on the various
methods you can use.
On-site Service
From the OSIsoft Technical Support Web site, click Contact Us > On-site Field Service Visit.
OSIsoft provides on-site service for a fee. Visit our On-site Field Service Visit page for more
information.
Knowledge Center
From the OSIsoft Technical Support Web site, click Knowledge Center.
The Knowledge Center provides a searchable library of documentation and technical data, as
well as a special collection of resources for system managers. For these options, click
Knowledge Center on the Technical Support Web site.
The Search feature allows you to search Support Solutions, Bulletins, Support
Pages, Known Issues, Enhancements, and Documentation (including user
manuals, release notes, and white papers).
System Manager Resources include tools and instructions that help you manage:
Archive sizing, backup scripts, daily health checks, daylight savings time
configuration, PI Server security, PI System sizing and configuration, PI trusts
for Interface Nodes, and more.
Upgrades
From the OSIsoft Technical Support Web site, click Contact Us > Obtaining Upgrades.
You are eligible to download or order any available version of a product for which you have
an active Service Reliance Program (SRP), formerly known as Tech Support Agreement
(TSA). To verify or change your SRP status, contact your Sales Representative or Technical
Support (http://techsupport.osisoft.com/) for assistance.
The OSIsoft Virtual Campus (vCampus) Web site offers a community-oriented program that
focuses on PI System development and integration. The Web site's annual online
subscriptions provide customers with software downloads, resources that include a personal
development PI System, online library, technical webinars, online training, and community-
oriented features such as blogs and discussion forums.
OSIsoft vCampus is intended to facilitate and encourage communication around PI
programming and integration between OSIsoft partners, customers and employees. See the
OSIsoft vCampus Web site for more information.
Appendix H. Revision History
Date Author Comments
Final.
15-May-2009 Mfreitag Version 3.16.1.4
Applied the new Interface Skeleton (3.0.9)
24-Jun-2009 Mkelly Version 3.16.1.4, Revision A; Fixed headers,
footers, section breaks. Fixed miscellaneous
formatting problems. Added clarification for /id
and /in to indicate /in is for backwards
compatibility with older versions of the interface.
/id is the preferred command line parameter to
use.
04-Nov-2009 Mfreitag Version 3.17.0.8
16-Jun-2010 Mfreitag Version 3.18.1.10 added description for
/ignore_nulls and /dops new start-up
parameters; removed the Connected/No Data
device status.
11-Jan-2011 Sbranscomb Version 3.18.1.0, Revision A; Updated to
Skeleton Version 3.0.31
03-Feb-2011 Mkelly Version 3.19.1.x, Updated ICU Control section of
the manual and added new command line
parameter /Failover_Timeout=#.
12-Feb-2011 Mfreitag Version 3.19.1.x, Revision A. Removed the CPPI
references, added the /Failover_Timeout=#
description.
19-Jul-2011 MKelly Version 3.19.1.x – 3.19.2.x; Updated the version
number for a rebuild with new UniInt 4.5.2.0.
03-Apr-2012 MFreitag Version 3.20.6.x, added a note about the
necessity to use the 32-bit ODBC Administrator
on 64-bit platforms.
05-Jun-2012 MFreitag Version 3.20.6.x, Rev. A. Reformulated several
paragraphs in chapter 2 and in chapters 8 through 17.
19-Feb-2013 MFreitag Version 3.21.4.x. Corrected support for Win7.
19-Feb-2013 TWilliams Updated title and ICU Control name to use new
marketing name format
28-Feb-2013 MFreitag Added support for Windows 8 and Windows
2012