
OLEDB

Connect to OLEDB data sources through a remote OIBus Agent. This connector is particularly useful for systems such as the OSIsoft PI System™ and other OLEDB-compatible applications.

Compatibility Notes
  • Requires OIBus Agent installation on target machine
  • Supports all OLEDB-compatible data sources
  • Optimized for OSIsoft PI System™ connections

Specific Settings

| Setting | Description | Default Value | Notes |
|---------|-------------|---------------|-------|
| Remote agent URL | Agent endpoint (e.g., `http://host:2224`) | - | Required for remote connections |
| Connection timeout | Maximum time (ms) to establish a connection | 15000 | Recommended: 5000-30000 ms |
| Retry interval | Time (ms) to wait before retrying the connection | 10000 | |
| Request timeout | Maximum execution time (ms) for individual queries | 15000 | Adjust for large historian queries |
| Connection string | OLEDB connection string in provider-specific format | - | See examples below |
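
The exact connection string depends on the OLEDB provider installed on the agent machine. The following lines are illustrative sketches only — the provider names, hosts, and credentials shown here are example values, not guaranteed to match your installation:

```
Provider=PIOLEDB;Data Source=MyPIServer;Integrated Security=SSPI;
Provider=SQLOLEDB;Data Source=myhost;Initial Catalog=mydb;User ID=myuser;Password=mypassword;
```

Consult your provider's documentation for the authoritative list of supported keywords.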

Throttling Settings

You can optimize data requests using throttling configurations. These settings help manage large data intervals and network conditions effectively.

Key Throttling Parameters

| Setting | Description | Data Type | Example Value |
|---------|-------------|-----------|---------------|
| Max read interval | Maximum duration (in seconds) for each data request interval. Larger intervals are automatically divided into smaller chunks not exceeding this value. | Seconds | 3600 |
| Read delay | Delay (in milliseconds) between consecutive data requests. Helps prevent server overload. | Milliseconds | 1000 |
| Overlap | Time overlap (in milliseconds) between consecutive queries to ensure data continuity. This value is subtracted from the @StartTime of the next query. | Milliseconds | 60000 |

How Throttling Works

  1. Interval Division:
  • Large time ranges are automatically split into smaller intervals
  • Each sub-interval does not exceed the Max read interval duration
  • Example: A 24-hour request with Max read interval = 3600 (1 hour) will be split into 24 separate 1-hour requests
  2. Request Timing:
  • The Read delay introduces a pause between consecutive requests
  • Helps manage server load and prevents rate limiting
  • Particularly useful during network instability or when querying large datasets
  3. Data Continuity:
  • The Overlap setting ensures no gaps in your data
  • Creates a time buffer between consecutive queries
  • Example: With Overlap = 60000 (1 minute), each query will include 1 minute of overlapping data with the previous query

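The steps above can be sketched in a few lines of Python. This is a minimal illustration of the described logic, not OIBus's actual implementation; the function name `plan_requests` is hypothetical:

```python
from datetime import datetime, timedelta

def plan_requests(start, end, max_read_interval_s, overlap_ms):
    """Split [start, end] into sub-intervals no longer than
    max_read_interval_s seconds, after widening the whole range
    by the overlap (the overlap applies once, to the full range)."""
    start = start - timedelta(milliseconds=overlap_ms)
    chunks = []
    cursor = start
    while cursor < end:
        chunk_end = min(cursor + timedelta(seconds=max_read_interval_s), end)
        chunks.append((cursor, chunk_end))
        cursor = chunk_end
    return chunks

# A 24-hour request with Max read interval = 3600 s and no overlap
# is split into 24 one-hour sub-requests.
chunks = plan_requests(datetime(2024, 1, 1), datetime(2024, 1, 2), 3600, 0)
print(len(chunks))  # 24
```

With `overlap_ms=60000`, the first chunk would instead begin one minute before the requested start, matching the behavior described below.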
Overlap Functionality

The overlap setting modifies the @StartTime of each query by subtracting the overlap value from the calculated start time. This adjustment applies to the entire query range, not to individual sub-intervals when large requests are split into smaller chunks.

This functionality is particularly useful when:

  • New data is ingested after previous queries have executed
  • You need to account for potential ingestion delays in the source system

Example: With overlap=60000 (1 minute), a query for the interval [10:00-11:00] will actually request [9:59-11:00], ensuring capture of any late-arriving data from the previous time period.

| Scenario | Max read interval | Read delay | Overlap |
|----------|-------------------|------------|---------|
| Stable network, small datasets | 3600 (1 hour) | 500 | 0 (no overlap) |
| Unstable network | 1800 (30 min) | 2000 | 0 (no overlap) |
| Large historical retrievals | 7200 (2 hours) | 1000 | 0 (no overlap) |
| Real-time with occasional gaps | 900 (15 min) | 200 | 15000 (15 sec) |

Item Settings

Each item in the South connector can be individually configured with specific query parameters and datetime handling. Multiple queries to the same database can be executed sequentially within a single connector, with results consolidated into output files for North connectors.

Query Configuration

The query field accepts standard SQL syntax with support for internal variables that enhance data retrieval resilience and performance optimization.

Query Variables

| Variable | Description | Behavior |
|----------|-------------|----------|
| @StartTime | Initialized to the first execution time, then updated to the most recent timestamp from the reference field | Automatically advances based on retrieved data |
| @EndTime | Set to the current time (now()) or to the sub-interval end when queries are split | Defines the upper bound for data retrieval |

Example Query:

```sql
SELECT device_id, value, reading_time
FROM sensor_data
WHERE reading_time > @StartTime
  AND reading_time < @EndTime
ORDER BY reading_time
```

Query Splitting

For handling large datasets, queries can be automatically divided into smaller time-based chunks. See Splitting Large Queries for configuration details.

Datetime Field Configuration

| Setting | Description | Applies To | Example Value |
|---------|-------------|------------|---------------|
| Field name | Name of the datetime field in your SELECT statement | All types | timestamp, reading_time |
| Reference field | Designates which field determines the @StartTime for subsequent queries | All types | reading_time |
| Type | Data type of the datetime field in the database | All fields | String, Date, DateTime |
| Timezone | Timezone of the stored datetime (required for string/date types) | String/Date/DateTime/SmallDateTime | UTC, Europe/Paris |
| Format | Format pattern for string-based datetimes | String type only | yyyy-MM-dd HH:mm:ss |
| Locale | Locale for format elements (e.g., month names) | String type only | en_US, fr_FR |

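
The interplay of Format and Timezone for a String-type field can be illustrated with a short Python sketch. This is an assumption-laden analogy, not OIBus code: a stored string is parsed with the configured format, localized to the configured timezone, then converted to UTC.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Example values: stored string, format yyyy-MM-dd HH:mm:ss (Python
# equivalent: %Y-%m-%d %H:%M:%S), timezone Europe/Paris.
raw = "2024-06-01 14:30:00"
naive = datetime.strptime(raw, "%Y-%m-%d %H:%M:%S")
local = naive.replace(tzinfo=ZoneInfo("Europe/Paris"))  # apply the configured timezone
utc = local.astimezone(timezone.utc)                    # convert to UTC
print(utc.isoformat())  # 2024-06-01T12:30:00+00:00 (Paris is UTC+2 in June)
```
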
Type Consistency Requirement

When using datetime conversions in queries:

Problematic approach (compares the raw string field to a datetime variable, which may cause unexpected behavior):

```sql
SELECT value, CONVERT(datetime, string_field) AS timestamp
FROM table
WHERE string_field > @StartTime
```

Recommended approach (maintains type consistency):

```sql
SELECT value, CONVERT(datetime, string_field) AS timestamp
FROM table
WHERE CONVERT(datetime, string_field) > @StartTime
```

Data Flow Process

  1. OIBus executes each configured query sequentially
  2. Results are consolidated into structured output
  3. The maximum instant is read from the reference datetime field, converted to UTC, and stored for use as the next @StartTime
  4. Final output is formatted according to serialization settings
  5. Processed data is sent to configured North connectors
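
Step 3 above can be sketched as follows. This is an illustrative sketch, not OIBus source code; the row layout and field names (`reading_time` as the reference field) are example assumptions:

```python
from datetime import datetime, timezone

# After a query returns rows, @StartTime advances to the newest value of
# the reference datetime field (the "max instant"), so the next execution
# only fetches newer data.
rows = [
    {"device_id": "d1", "value": 21.5,
     "reading_time": datetime(2024, 1, 1, 10, 15, tzinfo=timezone.utc)},
    {"device_id": "d2", "value": 19.0,
     "reading_time": datetime(2024, 1, 1, 10, 45, tzinfo=timezone.utc)},
]
max_instant = max(row["reading_time"] for row in rows)  # reference field: reading_time
next_start_time = max_instant  # stored in UTC as the next @StartTime
print(next_start_time.isoformat())  # 2024-01-01T10:45:00+00:00
```
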
Best Practices
  1. Choose reference fields with consistent, increasing values
  2. For string datetimes, ensure format matches between database storage and query variables
  3. Add indexes on datetime fields for better performance
  4. Use query splitting for large time ranges

CSV Serialization

OIBus provides flexible options for serializing retrieved data into CSV format with customizable output settings.

File Configuration

| Setting | Description | Example Value |
|---------|-------------|---------------|
| Filename | Name pattern for output files | data_@ConnectorName_@CurrentDate.csv |
| Delimiter | Character used to separate values in the CSV file | , or ; |
| Compression | Enable gzip compression for output files | Enabled/Disabled |

Filename Variables

The following variables can be used in filename patterns:

  • @ConnectorName: Automatically replaced with the connector's name
  • @CurrentDate: Inserts current timestamp in fixed yyyy_MM_dd_HH_mm_ss_SSS format
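
The variable substitution can be sketched in Python. The helper `render_filename` is hypothetical and only mimics the described behavior; it is not OIBus's implementation:

```python
from datetime import datetime, timezone

def render_filename(pattern: str, connector_name: str, now: datetime) -> str:
    # @CurrentDate always uses the fixed yyyy_MM_dd_HH_mm_ss_SSS format,
    # independent of the output datetime format setting.
    stamp = now.strftime("%Y_%m_%d_%H_%M_%S_") + f"{now.microsecond // 1000:03d}"
    return (pattern
            .replace("@ConnectorName", connector_name)
            .replace("@CurrentDate", stamp))

now = datetime(2024, 6, 1, 14, 30, 5, 123000, tzinfo=timezone.utc)
print(render_filename("data_@ConnectorName_@CurrentDate.csv", "oledb-south", now))
# data_oledb-south_2024_06_01_14_30_05_123.csv
```
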

Temporal Data Handling

| Setting | Description | Example Value |
|---------|-------------|---------------|
| Output datetime format | Format pattern for datetime fields in the CSV (does not affect @CurrentDate in filenames) | yyyy-MM-dd HH:mm:ss |
| Output timezone | Timezone used for datetime values in the CSV | UTC or Europe/Paris |

Important Notes
  1. The @CurrentDate variable in filenames uses a fixed format (yyyy_MM_dd_HH_mm_ss_SSS) regardless of the datetime format setting
  2. Only datetime fields specified in your configuration will be formatted according to these settings
  3. Timezone conversion only applies to datetime values in the CSV content, not to the filename timestamp
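
Notes 2 and 3 can be illustrated with a short sketch. This is an analogy under stated assumptions (a single datetime field `reading_time`, delimiter `;`), not OIBus's serializer:

```python
import csv
import io
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Datetime values inside the CSV are converted to the output timezone and
# formatted with the output datetime format; other fields pass through.
out_tz = ZoneInfo("Europe/Paris")
out_fmt = "%Y-%m-%d %H:%M:%S"  # Python equivalent of yyyy-MM-dd HH:mm:ss
rows = [{"reading_time": datetime(2024, 1, 1, 10, 45, tzinfo=timezone.utc),
         "value": 21.5}]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["reading_time", "value"], delimiter=";")
writer.writeheader()
for row in rows:
    row["reading_time"] = row["reading_time"].astimezone(out_tz).strftime(out_fmt)
    writer.writerow(row)
print(buf.getvalue())  # 10:45 UTC is written as 11:45 Paris time (UTC+1 in January)
```
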