Qorus Integration Engine®
4.0.3.p2_git
provides access to SqlUtil functionality from system datasources

Classes

- QorusSystemSqlutilService: the main sqlutil service class
The sqlutil service supports several data streams, allowing data to be efficiently selected, inserted, updated, deleted, or "upserted" (i.e. merged) in a remote system, so that large volumes of data can be processed in a remote database with minimal intermediate state in Qorus.
In client code, data streams exposed by the sqlutil service are accessed through a QorusSystemRestHelper object as returned by a call to OMQ::get_remote_rest_connection() as in the following example:
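The following is a minimal sketch of acquiring such a connection object; "node-a" is a placeholder connection name:

```qore
%new-style
%strict-args
%require-types

%requires QorusClientBase

# initialize the Qorus client library
QorusClient::init2();

# get a REST connection to the remote Qorus instance "node-a";
# "node-a" must be configured as a Qorus to Qorus connection
QorusSystemRestHelper qrest = get_remote_rest_connection("node-a");
```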
In the above example, "node-a" is assumed to be a remote connection configured in Qorus to Qorus Connections.
The data streams in the following table are provided.
SqlUtil Streams
Stream | Dir | HTTP Method | Args | Description |
select | out | GET | datasource=name table=name timeout=ms [ block=integer ] [ column_format=bool ] | Options: "block" gives the number of rows to send in each chunk; "column_format" gives the format of the encoded data: if true, each chunk is a hash of lists, otherwise a list of hashes. All other arguments are passed to AbstractTable::getRowIterator(). Requires DATASOURCE-CONTROL or SQLUTIL-READ permissions to access |
insert | in | POST | datasource=name table=name timeout=ms | Streamed data (as hashes representing rows or lists of hashes representing blocks of rows) are inserted into the target table in an atomic transaction. For DBI drivers supporting bulk DML, efficient bulk DML queries are executed with one round trip to the server for each block of row data sent to the stream. Requires DATASOURCE-CONTROL or SQLUTIL-WRITE permissions to access |
update | in | POST | datasource=name table=name timeout=ms | Streamed data (hashes with "set" and "cond" keys, each with lists of hashes giving the "set" and "cond" parameters, or lists of such hashes) are used to update data in the target table in a single atomic transaction. All values sent to this stream must have the same hash keys in the same order. For DBI drivers supporting bulk DML, efficient bulk DML queries are executed with one round trip to the server for each block of row data sent to the stream. Requires DATASOURCE-CONTROL or SQLUTIL-WRITE permissions to access |
delete | in | POST | datasource=name table=name timeout=ms | Streamed data (hashes with "cond" keys, or lists of such hashes) are used to delete data from the target table in a single atomic transaction. Requires DATASOURCE-CONTROL or SQLUTIL-WRITE permissions to access |
upsert | in | POST | datasource=name table=name timeout=ms [ upsert_strategy= integer] [ delete_others=boolean ] [ omit_update=list ] | Streamed data (as hashes representing rows or lists of hashes representing blocks of rows) are merged or "upserted" into the target table in an atomic transaction; see Upserting or Merging Data for more information. For DBI drivers supporting bulk DML, efficient bulk DML queries are executed with one round trip to the server for each block of row data sent to the stream. Requires DATASOURCE-CONTROL or SQLUTIL-WRITE permissions to access |
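As an illustration, rows can be streamed into the "insert" stream with a high-level client class such as OMQ::DbRemoteSend; this is a hedged sketch only: the datasource "omquser" and table "table1" are placeholder names, and the constructor argument order shown is an assumption:

```qore
%new-style
%requires QorusClientBase

QorusClient::init2();

QorusSystemRestHelper qrest = get_remote_rest_connection("node-a");

# stream rows into remote table "table1" in datasource "omquser"
# (names and constructor argument order are assumptions for illustration)
DbRemoteSend stream(qrest, "insert", "omquser", "table1");
stream.append({"id": 1, "name": "one"});
stream.append({"id": 2, "name": "two"});
# commit the remote transaction when done
stream.commit();
```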
The following are pseudo-streams in that no stream data is accepted or returned (and therefore these "streams" can be accessed with standard REST commands), but they are accessed otherwise like normal streams; see examples below the following table and see Transaction Management with the sqlutil Service for more information.
SqlUtil Transaction-Management Streams
Stream | Dir | HTTP Method | Args | Description |
beginTransaction | n/a | POST | datasource=name timeout=ms | In order for this call to work, the following HTTP header must be sent: "Qorus-Connection: Persistent". While a persistent remote transaction is in progress, even "normal" sqlutil methods called over the same HTTP connection with REST calls will be made in the same transaction, meaning that these must be explicitly committed with a call to the "commit" pseudo-stream. Requires DATASOURCE-CONTROL or SQLUTIL-WRITE permissions to access |
commit | n/a | POST | timeout=ms | Commits an existing transaction; can only be executed in a persistent connection while a transaction is in progress. Requires DATASOURCE-CONTROL or SQLUTIL-WRITE permissions to access |
rollback | n/a | POST | timeout=ms | Rolls an existing transaction back; can only be executed in a persistent connection while a transaction is in progress. Requires DATASOURCE-CONTROL or SQLUTIL-WRITE permissions to access |
Please see the following high-level classes for easy-to-use APIs based on the sqlutil service:

- OMQ::DbRemoteReceive
- OMQ::DbRemoteSend
The following are low-level examples of how to use sqlutil streams in Qorus client code; in these examples, "node-a" is a remote connection defined in Qorus to Qorus Connections. Note that the "commit" call does not require a "datasource" parameter because a persistent connection was made in the preceding call to the "beginTransaction" stream.

Before initiating remote transaction management, it's critical to call Qore::HTTPClient::setPersistent() on the QorusSystemRestHelper object to ensure that any break in the HTTP session is caught and an exception is thrown; otherwise the client object would silently reconnect and the transaction state would be lost on the remote end.
After disabling automatic reconnections for this HTTP session, transaction management is initiated by calling the "beginTransaction" stream with a "datasource" argument in a "POST" REST call, including one of the following headers: "Qorus-Connection: Persistent" (start a new transaction unconditionally; if any is currently in progress, it will be rolled back and logged in the sqlutil service log) or "Qorus-Connection: Continue-Persistent" (start a new transaction if none is in progress, otherwise continue the existing transaction).
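The sequence above can be sketched as follows; the datasource name "omquser" is a placeholder, and the exact REST paths and the way headers are passed through QorusSystemRestHelper are assumptions for illustration:

```qore
%new-style
%requires QorusClientBase

QorusClient::init2();

QorusSystemRestHelper qrest = get_remote_rest_connection("node-a");

# critical: disable silent reconnection so that a broken HTTP session
# raises an exception instead of losing remote transaction state
qrest.setPersistent();

# start a remote transaction unconditionally in datasource "omquser"
# (path and header-passing shown are assumptions for illustration)
qrest.post("services/sqlutil/beginTransaction",
           {"datasource": "omquser", "timeout": 60000},
           NOTHING, {"Qorus-Connection": "Persistent"});

# ... perform streamed or normal sqlutil operations here; they all
# take part in the same remote transaction ...

# commit the remote transaction; no "datasource" argument is needed
# because the persistent connection already identifies it
qrest.post("services/sqlutil/commit", {"timeout": 60000});
```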
Note that the following helper classes start or continue a remote transaction when the "transaction" option is set in the constructor:

- OMQ::DbRemoteReceive
- OMQ::DbRemoteSend

Additionally, the OMQ::AbstractParallelStream::beginTransaction() static method will connect to the remote server, set a persistent connection, and start or continue a remote transaction as described above. This is the easiest way to start a remote transaction when not using one of the DB data streaming classes listed above.
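For example (a hedged sketch; the argument list of OMQ::AbstractParallelStream::beginTransaction() shown here is an assumption, and "omquser" is a placeholder datasource name):

```qore
%new-style
%requires QorusClientBase

QorusClient::init2();

QorusSystemRestHelper qrest = get_remote_rest_connection("node-a");

# connects, sets a persistent connection, and starts or continues a
# remote transaction in the given datasource (argument list assumed)
AbstractParallelStream::beginTransaction(qrest, "omquser");
```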
Once a remote transaction is in progress, even calls to non-stream service methods on the same datasource over the same HTTP connection will take part in the transaction; that is, no implicit commits are made for non-stream methods called on that datasource in that HTTP connection as long as the transaction is in progress.