# Microsoft SQL Server™
Connect to and query Microsoft SQL Server™ (MSSQL) databases with OIBus.
- Supports SQL Server 2012 and later
- Requires TCP/IP protocol to be enabled on SQL Server
## Specific Settings
| Setting | Description | Default Value | Notes |
|---|---|---|---|
| Host | Address of the Microsoft SQL Server instance | - | IP address or hostname |
| Port | SQL Server port number | 1433 | Standard MSSQL port |
| Connection timeout | Maximum time (ms) to establish connection | 5000 | Recommended: 5000-30000ms |
| Database | Name of the database to connect to | - | Case-sensitive |
| Use encryption | Enable data encryption between OIBus and database | Disabled | May impact server performance |
| Trust server certificate | Accept self-signed or expired certificates | Disabled | Use with caution in production |
| Username | Authentication username | - | |
| Password | Authentication password | - | |
| Domain | Active Directory domain (optional) | - | Required for domain authentication |
| Request timeout | Maximum execution time (ms) for individual SQL queries | - | Recommended: 15000-60000ms |
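As an illustration of how these settings combine, the sketch below maps them onto an ODBC-style connection string. The helper name `build_mssql_conn_str` is hypothetical; OIBus manages the connection internally, so this only shows the mapping, not the connector's actual code:

```python
def build_mssql_conn_str(host, port, database, username, password,
                         use_encryption=False, trust_server_certificate=False):
    """Illustrative mapping of the settings above onto an ODBC connection
    string (usable with e.g. pyodbc); not the connector's internals."""
    return ";".join([
        "DRIVER={ODBC Driver 17 for SQL Server}",
        f"SERVER={host},{port}",   # Host and Port settings
        f"DATABASE={database}",    # Database (case-sensitive)
        f"UID={username}",
        f"PWD={password}",
        f"Encrypt={'yes' if use_encryption else 'no'}",
        f"TrustServerCertificate={'yes' if trust_server_certificate else 'no'}",
    ])

# With pyodbc, the connection timeout would be passed separately, in seconds:
# conn = pyodbc.connect(build_mssql_conn_str(...), timeout=5)
```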
## Security Best Practices
- User Permissions: Always use a read-only database user
- Certificate Validation: Only trust server certificates in development/test environments
- Encryption: Enable for sensitive data, untrusted networks, or compliance requirements
## Performance Considerations
Query Optimization:
- Use indexed columns in WHERE clauses
- Limit result sets (avoid `SELECT *`)
- Prefer parameterized queries
Resource Management:
- Encryption adds CPU overhead
- Increase timeouts for long-running queries
- Implement pagination for large result sets
- Reuse connections where possible
## Throttling Settings
You can optimize data requests using throttling configurations. These settings help manage large data intervals and network conditions effectively.
### Key Throttling Parameters
| Setting | Description | Data Type | Example Value |
|---|---|---|---|
| Max read interval | Maximum duration (in seconds) for each data request interval. Larger intervals are automatically divided into smaller chunks not exceeding this value. | Seconds | 3600 |
| Read delay | Delay (in milliseconds) between consecutive data requests. Helps prevent server overload. | Milliseconds | 1000 |
| Overlap | Time overlap (in milliseconds) between consecutive queries to ensure data continuity. This value is subtracted from the @StartTime of the next query. | Milliseconds | 60000 |
### How Throttling Works
- Interval Division:
  - Large time ranges are automatically split into smaller intervals
  - Each sub-interval does not exceed the `Max read interval` duration
  - Example: A 24-hour request with `Max read interval = 3600` (1 hour) will be split into 24 separate 1-hour requests
- Request Timing:
  - The `Read delay` introduces a pause between consecutive requests
  - Helps manage server load and prevents rate limiting
  - Particularly useful during network instability or when querying large datasets
- Data Continuity:
  - The `Overlap` setting ensures no gaps in your data
  - Creates a time buffer between consecutive queries
  - Example: With `Overlap = 60000` (1 minute), each query will include 1 minute of overlapping data with the previous query
The overlap setting modifies the @StartTime of each query by subtracting the overlap value from the calculated start time. This adjustment applies to the entire query range, not to individual sub-intervals when large requests are split into smaller chunks.
This functionality is particularly useful when:
- New data is ingested after previous queries have executed
- You need to account for potential ingestion delays in the source system
Example:
With overlap=60000 (1 minute), a query for the interval [10:00-11:00] will actually request [9:59-11:00], ensuring capture of any late-arriving data from the previous time period.
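The splitting and overlap arithmetic described above can be sketched as follows. This is a minimal illustration with a hypothetical `plan_requests` helper, not OIBus's actual implementation; the `Read delay` would simply be a pause inserted between executing the returned requests:

```python
from datetime import datetime, timedelta

def plan_requests(start, end, max_read_interval_s, overlap_ms):
    """Split [start, end] into sub-intervals per the throttling rules.

    The overlap is subtracted once from the overall @StartTime (not from
    each sub-interval), then the range is cut into chunks no longer than
    max_read_interval_s.
    """
    start = start - timedelta(milliseconds=overlap_ms)
    chunks, cursor = [], start
    while cursor < end:
        chunk_end = min(cursor + timedelta(seconds=max_read_interval_s), end)
        chunks.append((cursor, chunk_end))
        cursor = chunk_end
    return chunks
```

For example, a 24-hour range with `Max read interval = 3600` yields 24 one-hour requests, and the [10:00-11:00] query with a 60000 ms overlap starts at 9:59.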
### Recommended Configurations
| Scenario | Max read interval | Read delay | Overlap |
|---|---|---|---|
| Stable network, small datasets | 3600 (1 hour) | 500 | 0 (no overlap) |
| Unstable network | 1800 (30 min) | 2000 | 0 (no overlap) |
| Large historical retrievals | 7200 (2 hours) | 1000 | 0 (no overlap) |
| Real-time with occasional gaps | 900 (15 min) | 200 | 15000 (15 sec) |
## Item Settings
Each item in the South connector can be individually configured with specific query parameters and datetime handling. Multiple queries to the same database can be executed sequentially within a single connector, with results consolidated into output files for North connectors.
### Query Configuration
The query field accepts standard SQL syntax with support for internal variables that enhance data retrieval resilience and performance optimization.
#### Query Variables
| Variable | Description | Behavior |
|---|---|---|
| `@StartTime` | Initialized to the first execution time, then updated to the most recent timestamp from the reference field | Automatically advances based on retrieved data |
| `@EndTime` | Set to the current time (now()) or to the sub-interval end when queries are split | Defines the upper bound for data retrieval |
Example Query:

```sql
SELECT device_id, value, reading_time
FROM sensor_data
WHERE reading_time > @StartTime
  AND reading_time < @EndTime
ORDER BY reading_time
```
For handling large datasets, queries can be automatically divided into smaller time-based chunks. See Splitting Large Queries for configuration details.
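One common way to execute such a query safely is to turn the `@StartTime` and `@EndTime` placeholders into driver-level parameters. The sketch below does this with a hypothetical `bind_query_variables` helper; it illustrates the idea only and is not how OIBus resolves the variables internally:

```python
import re

def bind_query_variables(query, start_time, end_time):
    """Replace the @StartTime/@EndTime placeholders with ODBC-style '?'
    markers, returning (sql, params) for a parameterized execute call."""
    params = []

    def repl(match):
        params.append(start_time if match.group(0) == "@StartTime" else end_time)
        return "?"

    sql = re.sub(r"@StartTime|@EndTime", repl, query)
    return sql, params

# Usage with a DB-API cursor would then be: cursor.execute(sql, params)
```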
### Datetime Field Configuration
| Setting | Description | Applies To | Example Value |
|---|---|---|---|
| Field name | Name of the datetime field in your SELECT statement | All types | timestamp, reading_time |
| Reference field | Designates which field determines the @StartTime for subsequent queries | All types | reading_time |
| Type | Data type of the datetime field in the database | All types | String, Date, DateTime |
| Timezone | Timezone of the stored datetime (required for string/date types) | String/Date/DateTime/SmallDateTime | UTC, Europe/Paris |
| Format | Format pattern for string-based datetimes | String type only | yyyy-MM-dd HH:mm:ss |
| Locale | Locale for format elements (e.g., month names) | String type only | en_US, fr_FR |
When using datetime conversions in queries:
Problematic approach (may cause unexpected behavior):

```sql
SELECT value, CONVERT(datetime, string_field) AS timestamp
FROM table
WHERE string_field > @StartTime
```

Recommended approach (maintains type consistency):

```sql
SELECT value, CONVERT(datetime, string_field) AS timestamp
FROM table
WHERE CONVERT(datetime, string_field) > @StartTime
```
### Data Flow Process
- OIBus executes each configured query sequentially
- Results are consolidated into structured output
- The Max Instant is retrieved from the reference datetime field, converted to UTC, and stored to be used as the next `@StartTime`
- Final output is formatted according to serialization settings
- Processed data is sent to configured North connectors
Best practices:
- Choose reference fields with consistent, increasing values
- For string datetimes, ensure the format matches between database storage and query variables
- Add indexes on datetime fields for better performance
- Use query splitting for large time ranges
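The Max Instant step above can be illustrated with a small sketch (hypothetical `next_start_time` helper, assuming rows come back as dictionaries; not OIBus's actual code):

```python
def next_start_time(rows, reference_field, previous_start):
    """Return the next @StartTime: the greatest value of the reference
    datetime field among the retrieved rows (the 'Max Instant'), or the
    previous @StartTime when the query returned no rows."""
    instants = [row[reference_field] for row in rows]
    return max(instants) if instants else previous_start
```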
## CSV Serialization
OIBus provides flexible options for serializing retrieved data into CSV format with customizable output settings.
### File Configuration
| Setting | Description | Example Value |
|---|---|---|
| Filename | Name pattern for output files. | data_@ConnectorName_@CurrentDate.csv |
| Delimiter | Character used to separate values in the CSV file | , or ; |
| Compression | Enable gzip compression for output files | Enabled/Disabled |
The following variables can be used in filename patterns:
- `@ConnectorName`: Automatically replaced with the connector's name
- `@CurrentDate`: Inserts the current timestamp in the fixed `yyyy_MM_dd_HH_mm_ss_SSS` format
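A minimal sketch of this substitution (hypothetical `expand_filename` helper; the fixed `@CurrentDate` format is the one stated above):

```python
from datetime import datetime, timezone

def expand_filename(pattern, connector_name, now=None):
    """Expand @ConnectorName and @CurrentDate in a filename pattern.
    @CurrentDate always uses the fixed yyyy_MM_dd_HH_mm_ss_SSS format."""
    now = now or datetime.now(timezone.utc)
    # strftime has no millisecond directive, so derive it from microseconds.
    stamp = now.strftime("%Y_%m_%d_%H_%M_%S_") + f"{now.microsecond // 1000:03d}"
    return (pattern.replace("@ConnectorName", connector_name)
                   .replace("@CurrentDate", stamp))
```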
### Temporal Data Handling
| Setting | Description | Example Value |
|---|---|---|
| Output datetime format | Format pattern for datetime fields in the CSV (does not affect @CurrentDate in filenames) | yyyy-MM-dd HH:mm:ss |
| Output timezone | Timezone used for datetime values in the CSV | UTC or Europe/Paris |
- The `@CurrentDate` variable in filenames uses a fixed format (`yyyy_MM_dd_HH_mm_ss_SSS`) regardless of the datetime format setting
- Only datetime fields specified in your configuration will be formatted according to these settings
- Timezone conversion only applies to datetime values in the CSV content, not to the filename timestamp
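The datetime formatting and timezone conversion described above can be sketched as follows (hypothetical `format_csv_datetime` helper using Python's `zoneinfo`; it assumes stored values are timezone-aware UTC datetimes):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def format_csv_datetime(value_utc, output_tz, fmt="%Y-%m-%d %H:%M:%S"):
    """Convert a UTC datetime to the configured output timezone and
    render it with the configured output datetime format."""
    return value_utc.astimezone(ZoneInfo(output_tz)).strftime(fmt)

# e.g. output_tz="Europe/Paris" shifts the value by the local UTC offset
```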