Oracle Database™
Connect to and query Oracle Database™ using OIBus with support for both thin and thick connection modes.
Version Compatibility
| Oracle DB Version | Thin Mode | Thick Mode | Notes |
|---|---|---|---|
| 19c+ | ✅ | ✅ | Recommended |
| 12.2 (12c R2) | ✅ | ✅ | Minimum for thin mode |
| 12.1 (12c R1) | ⚠️ | ✅ | Thin mode not officially supported |
| 11.2 (11g R2) | ❌ | ✅ | Minimum for thick mode |
| 10g or older | ❌ | ❌ | Not compatible |
Specific Settings
| Setting | Description | Default Value | Notes |
|---|---|---|---|
| Thick mode | Uses Oracle Client Library for database access | Disabled | Requires Instant Client installation |
| Oracle Client Library | Path to Oracle Instant Client (e.g., C:\oracle\instantclient_23_5) | - | Required for thick mode |
| Host | Address of the Oracle server | - | |
| Port | Server port | 1521 | Standard Oracle port |
| Connection timeout | Maximum time (ms) to establish connection | 15000 | Recommended: 5000-30000ms |
| Database | Name/SID of the target database | - | |
| Username | Authentication username | - | |
| Password | Authentication password | - | |
Connection Modes
| Mode | Description | Requirements | Benefits |
|---|---|---|---|
| Thin | Pure JavaScript implementation | None | Easy setup, no native binaries |
| Thick | Uses Oracle Client Library | Instant Client 18.5+ | Full feature support |
Oracle Instant Client
- Required only for thick mode
- Minimum version: 18.5.0.0.0
- Supports Oracle Database 11.2+
- Download: Oracle Instant Client
Best Practices
- Use a read-only database user
- Prefer thin mode when possible for simpler deployment
Throttling Settings
You can optimize data requests using throttling configurations. These settings help manage large data intervals and network conditions effectively.
Key Throttling Parameters
| Setting | Description | Data Type | Example Value |
|---|---|---|---|
| Max read interval | Maximum duration (in seconds) for each data request interval. Larger intervals are automatically divided into smaller chunks not exceeding this value. | Seconds | 3600 |
| Read delay | Delay (in milliseconds) between consecutive data requests. Helps prevent server overload. | Milliseconds | 1000 |
| Overlap | Time overlap (in milliseconds) between consecutive queries to ensure data continuity. This value is subtracted from the @StartTime of the next query. | Milliseconds | 60000 |
How Throttling Works
- Interval Division:
  - Large time ranges are automatically split into smaller intervals
  - Each sub-interval does not exceed the `Max read interval` duration
  - Example: A 24-hour request with `Max read interval = 3600` (1 hour) will be split into 24 separate 1-hour requests
- Request Timing:
  - The `Read delay` introduces a pause between consecutive requests
  - Helps manage server load and prevents rate limiting
  - Particularly useful during network instability or when querying large datasets
- Data Continuity:
  - The `Overlap` setting ensures no gaps in your data
  - Creates a time buffer between consecutive queries
  - Example: With `Overlap = 60000` (1 minute), each query will include 1 minute of overlapping data with the previous query
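The interval-division behaviour can be sketched as follows. This is a simulation of the algorithm as documented, not OIBus's actual implementation; the function name and signature are illustrative:

```python
from datetime import datetime, timedelta, timezone

def split_interval(start: datetime, end: datetime, max_read_interval_s: int):
    """Split [start, end] into chunks no longer than max_read_interval_s seconds.
    OIBus would wait `Read delay` milliseconds between executing each chunk."""
    chunks = []
    chunk_start = start
    step = timedelta(seconds=max_read_interval_s)
    while chunk_start < end:
        chunk_end = min(chunk_start + step, end)
        chunks.append((chunk_start, chunk_end))
        chunk_start = chunk_end
    return chunks

# A 24-hour request with Max read interval = 3600 (1 hour) -> 24 sub-requests
start = datetime(2024, 1, 1, tzinfo=timezone.utc)
chunks = split_interval(start, start + timedelta(hours=24), 3600)
print(len(chunks))  # 24
```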
The overlap setting modifies the `@StartTime` of each query by subtracting the overlap value from the calculated start time. This adjustment applies to the entire query range, not to individual sub-intervals when large requests are split into smaller chunks.
This functionality is particularly useful when:
- New data is ingested after previous queries have executed
- You need to account for potential ingestion delays in the source system
Example:
With overlap=60000 (1 minute), a query for the interval [10:00-11:00] will actually request [9:59-11:00], ensuring capture of any late-arriving data from the previous time period.
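The `@StartTime` adjustment can be expressed as a one-line computation (a minimal sketch of the documented behaviour; the function name is illustrative):

```python
from datetime import datetime, timedelta, timezone

def apply_overlap(start_time: datetime, overlap_ms: int) -> datetime:
    """Subtract the overlap from the calculated @StartTime of the next query."""
    return start_time - timedelta(milliseconds=overlap_ms)

# With overlap = 60000 (1 minute), [10:00-11:00] is actually requested as [09:59-11:00]
start = datetime(2024, 1, 1, 10, 0, tzinfo=timezone.utc)
adjusted = apply_overlap(start, 60000)
print(adjusted.strftime("%H:%M"))  # 09:59
```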
Recommended Configurations
| Scenario | Max read interval | Read delay | Overlap |
|---|---|---|---|
| Stable network, small datasets | 3600 (1 hour) | 500 | 0 (no overlap) |
| Unstable network | 1800 (30 min) | 2000 | 0 (no overlap) |
| Large historical retrievals | 7200 (2 hours) | 1000 | 0 (no overlap) |
| Real-time with occasional gaps | 900 (15 min) | 200 | 15000 (15 sec) |
Item Settings
Each item in the South connector can be individually configured with specific query parameters and datetime handling. Multiple queries to the same database can be executed sequentially within a single connector, with results consolidated into output files for North connectors.
Query Configuration
The query field accepts standard SQL syntax with support for internal variables that enhance data retrieval resilience and performance optimization.
Query Variables
| Variable | Description | Behavior |
|---|---|---|
| `@StartTime` | Initialized to the first execution time, then updated to the most recent timestamp from the reference field | Automatically advances based on retrieved data |
| `@EndTime` | Set to the current time (`now()`) or the sub-interval end when queries are split | Defines the upper bound for data retrieval |
Example Query:
```sql
SELECT device_id, value, reading_time
FROM sensor_data
WHERE reading_time > @StartTime
  AND reading_time < @EndTime
ORDER BY reading_time
```
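Conceptually, OIBus substitutes the two variables before executing the query. A minimal sketch of that substitution, assuming Oracle-style bind placeholders (`:start_time` and `:end_time` are hypothetical names, not OIBus internals):

```python
query = """SELECT device_id, value, reading_time
FROM sensor_data
WHERE reading_time > @StartTime
  AND reading_time < @EndTime
ORDER BY reading_time"""

def bind_variables(sql: str) -> str:
    """Replace OIBus query variables with Oracle-style bind placeholders."""
    return sql.replace("@StartTime", ":start_time").replace("@EndTime", ":end_time")

bound = bind_variables(query)
print(":start_time" in bound and "@StartTime" not in bound)  # True
```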
For handling large datasets, queries can be automatically divided into smaller time-based chunks. See Splitting Large Queries for configuration details.
Datetime Field Configuration
| Setting | Description | Applies To | Example Value |
|---|---|---|---|
| Field name | Name of the datetime field in your SELECT statement | All types | timestamp, reading_time |
| Reference field | Designates which field determines the @StartTime for subsequent queries | All types | reading_time |
| Type | Data type of the datetime field in the database | All types | String, Date, DateTime |
| Timezone | Timezone of the stored datetime (required for string/date types) | String/Date/DateTime/SmallDateTime | UTC, Europe/Paris |
| Format | Format pattern for string-based datetimes | String type only | yyyy-MM-dd HH:mm:ss |
| Locale | Locale for format elements (e.g., month names) | String type only | en_US, fr_FR |
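A string-type datetime must be parsed with its format and then interpreted in the configured timezone. The sketch below assumes the Luxon-style pattern `yyyy-MM-dd HH:mm:ss` maps to Python's `%Y-%m-%d %H:%M:%S`; locale handling is omitted, and the function name is illustrative:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def parse_string_datetime(raw: str, fmt: str, tz: str) -> datetime:
    """Parse a string datetime stored in the given timezone and return it as UTC."""
    naive = datetime.strptime(raw, fmt)
    return naive.replace(tzinfo=ZoneInfo(tz)).astimezone(timezone.utc)

# 09:30 stored as Europe/Paris local time is 08:30 UTC in January (CET, UTC+1)
parsed = parse_string_datetime("2024-01-15 09:30:00", "%Y-%m-%d %H:%M:%S", "Europe/Paris")
print(parsed.isoformat())  # 2024-01-15T08:30:00+00:00
```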
When using datetime conversions in queries:
Problematic approach (may cause unexpected behavior):
```sql
SELECT value, TO_DATE(string_field, 'YYYY-MM-DD HH24:MI:SS') AS ts
FROM my_table
WHERE string_field > @StartTime
```
Recommended approach (maintains type consistency):
```sql
SELECT value, TO_DATE(string_field, 'YYYY-MM-DD HH24:MI:SS') AS ts
FROM my_table
WHERE TO_DATE(string_field, 'YYYY-MM-DD HH24:MI:SS') > @StartTime
```
Data Flow Process
- OIBus executes each configured query sequentially
- Results are consolidated into structured output
- The Max Instant is retrieved from the reference datetime field, converted to UTC, and stored to be used as the next `@StartTime`
- Final output is formatted according to serialization settings
- Processed data is sent to configured North connectors
Best Practices
- Choose reference fields with consistent, increasing values
- For string datetimes, ensure the format matches between database storage and query variables
- Add indexes on datetime fields for better performance
- Use query splitting for large time ranges
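The Max Instant step of the data flow can be sketched as follows (a simulation of the documented behaviour; the function name and row layout are illustrative, not OIBus internals):

```python
from datetime import datetime, timezone

def next_start_time(rows: list[dict], reference_field: str) -> str:
    """Return the maximum value of the reference datetime field, converted to
    UTC ISO format, to be used as @StartTime for the next query."""
    max_instant = max(row[reference_field] for row in rows)
    return max_instant.astimezone(timezone.utc).isoformat()

rows = [
    {"reading_time": datetime(2024, 1, 1, 10, 15, tzinfo=timezone.utc), "value": 1.0},
    {"reading_time": datetime(2024, 1, 1, 10, 45, tzinfo=timezone.utc), "value": 2.0},
]
print(next_start_time(rows, "reading_time"))  # 2024-01-01T10:45:00+00:00
```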
CSV Serialization
OIBus provides flexible options for serializing retrieved data into CSV format with customizable output settings.
File Configuration
| Setting | Description | Example Value |
|---|---|---|
| Filename | Name pattern for output files. | data_@ConnectorName_@CurrentDate.csv |
| Delimiter | Character used to separate values in the CSV file | , or ; |
| Compression | Enable gzip compression for output files | Enabled/Disabled |
The following variables can be used in filename patterns:
- `@ConnectorName`: automatically replaced with the connector's name
- `@CurrentDate`: inserts the current timestamp in the fixed `yyyy_MM_dd_HH_mm_ss_SSS` format
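Filename pattern expansion can be sketched as a simple string substitution (a simulation of the documented behaviour, not OIBus's implementation; the function name is illustrative):

```python
from datetime import datetime, timezone

def render_filename(pattern: str, connector_name: str, now: datetime) -> str:
    """Expand @ConnectorName and @CurrentDate in a filename pattern.
    @CurrentDate always uses the fixed yyyy_MM_dd_HH_mm_ss_SSS format."""
    stamp = now.strftime("%Y_%m_%d_%H_%M_%S_") + f"{now.microsecond // 1000:03d}"
    return pattern.replace("@ConnectorName", connector_name).replace("@CurrentDate", stamp)

now = datetime(2024, 1, 15, 9, 30, 0, 123000, tzinfo=timezone.utc)
print(render_filename("data_@ConnectorName_@CurrentDate.csv", "oracle-south", now))
# data_oracle-south_2024_01_15_09_30_00_123.csv
```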
Temporal Data Handling
| Setting | Description | Example Value |
|---|---|---|
| Output datetime format | Format pattern for datetime fields in the CSV (does not affect @CurrentDate in filenames) | yyyy-MM-dd HH:mm:ss |
| Output timezone | Timezone used for datetime values in the CSV | UTC or Europe/Paris |
- The `@CurrentDate` variable in filenames uses a fixed format (`yyyy_MM_dd_HH_mm_ss_SSS`) regardless of the datetime format setting
- Only datetime fields specified in your configuration will be formatted according to these settings
- Timezone conversion only applies to datetime values in the CSV content, not to the filename timestamp
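The output format and timezone settings combine as sketched below, assuming the Luxon-style pattern `yyyy-MM-dd HH:mm:ss` maps to Python's `%Y-%m-%d %H:%M:%S` (the function name is illustrative):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def format_csv_datetime(value_utc: datetime, fmt: str, tz: str) -> str:
    """Format a UTC datetime for the CSV body using the configured output timezone."""
    return value_utc.astimezone(ZoneInfo(tz)).strftime(fmt)

# 08:30 UTC rendered in Europe/Paris (CET, UTC+1 in January) becomes 09:30
value = datetime(2024, 1, 15, 8, 30, 0, tzinfo=timezone.utc)
print(format_csv_datetime(value, "%Y-%m-%d %H:%M:%S", "Europe/Paris"))  # 2024-01-15 09:30:00
```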