Qore BulkSqlUtil Module Reference 1.3
BulkSqlUtil::AbstractBulkOperation Class Reference (abstract)

base class for bulk DML operations More...

#include <AbstractBulkOperation.qc.dox.h>

Inheritance diagram for BulkSqlUtil::AbstractBulkOperation:

Public Member Methods

nothing commit ()
 flushes any queued data and commits the transaction
 
 constructor (string name, SqlUtil::AbstractTable target, *hash opts)
 creates the object from the supplied arguments
 
 constructor (string name, SqlUtil::Table target, *hash opts)
 creates the object from the supplied arguments
 
 destructor ()
 throws an exception if there is data pending in the internal row data cache; make sure to call flush() or discard() before destroying the object
 
 discard ()
 discards any buffered batched data; this method should be called before destroying the object if an error occurs
 
 flush ()
 flushes any remaining batched data to the database; this method should always be called before committing the transaction or destroying the object
 
Qore::SQL::AbstractDatasource getDatasource ()
 returns the AbstractDatasource object associated with this object
 
int getRowCount ()
 returns the affected row count
 
SqlUtil::AbstractTable getTable ()
 returns the underlying SqlUtil::AbstractTable object
 
string getTableName ()
 returns the table name
 
 queueData (hash data)
 queues row data in the block buffer; the block buffer is flushed to the DB if the buffer size reaches the limit defined by the block_size option; does not commit the transaction
 
 queueData (list l)
 queues row data in the block buffer; the block buffer is flushed to the DB if the buffer size reaches the limit defined by the block_size option; does not commit the transaction
 
nothing rollback ()
 discards any queued data and rolls back the transaction
 
int size ()
 returns the current size of the cache as a number of rows
 

Public Attributes

const OptionDefaults = ...
 default option values
 
const OptionKeys = ...
 option keys for this object
 

Private Member Methods

abstract flushImpl ()
 flushes queued data to the database
 
 flushIntern ()
 flushes queued data to the database
 
 init (*hash opts)
 common constructor initialization
 
 setupInitialRow (hash row)
 sets up the block buffer given the initial template row for inserting
 
 setupInitialRowColumns (hash row)
 sets up the block buffer given the initial template hash of lists for inserting
 

Private Attributes

softint block_size
 bulk operation block size
 
hash cval
 "constant" row values; must be equal in all calls to queueData
 
list cval_keys
 "constant" row value keys
 
hash hbuf
 buffer for bulk operations
 
*code info_log
 an optional info logging callback; must accept a sprintf()-style format specifier and optional arguments
 
string opname
 operation name
 
list ret_args = ()
 list of "returning" columns
 
int row_count = 0
 row count
 
SqlUtil::AbstractTable table
 the target table object
 

Detailed Description

base class for bulk DML operations

This is an abstract base class for bulk DML operations; this class provides the majority of the API support for bulk DML operations for the concrete child classes that inherit it.

Submitting Data
To use this class's API, queue data in the form of a hash (a single row or a set of rows) or a list of rows by calling the queueData() method.

The queueData() method queues data to be written to the database; the queue is flush()ed automatically when block_size rows have been queued.
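As an illustrative sketch (assuming a concrete subclass such as BulkSqlUtil::BulkInsertOperation and a previously acquired SqlUtil::AbstractTable object named table; the row data here is invented), data can be queued either row by row or as a set:

```qore
%new-style
%requires SqlUtil
%requires BulkSqlUtil

# "table" is assumed to be an SqlUtil::AbstractTable object for the target table
BulkInsertOperation op(table, ("block_size": 500));

# queue a single row as a hash of column name -> value
op.queueData(("id": 1, "name": "first"));

# queue several rows at once as a hash of lists (one list per column)
op.queueData(("id": (2, 3), "name": ("second", "third")));

# flush any remaining queued rows before committing the transaction
op.flush();
```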
Flushing and Discarding Data
Each call to flush() (whether implicit or explicit) will cause a single call to be made to the dataserver; all queued rows are sent in a single bulk DML call, which allows for efficient processing of large amounts of data.

A call to flush() must be made before committing the transaction to ensure that any remaining rows in the internal queue have been written to the database. Because the destructor() will throw an exception if any data is left in the internal queue when the object is destroyed, a call to discard() must be made prior to the destruction of the object in case of errors.
# single commit and rollback
on_success ds.commit();
on_error ds.rollback();
{
    # each operation needs to be flushed or discarded individually
    on_success {
        op1.flush();
        op2.flush();
    }
    on_error {
        op1.discard();
        op2.discard();
    }

    # data is queued and flushed automatically when the buffer is full
    map op1.queueData($1), data1.iterator();
    map op2.queueData($1), data2.iterator();
}
Note
  • Each bulk DML object must be manually flush()ed before committing or manually discard()ed before rolling back to ensure that all data is managed properly in the same transaction and to ensure that no exception is thrown in the destructor(). See the example above for more information.
  • If the underlying driver does not support bulk operations, then such support is emulated with single SQL operations; in such cases performance will be reduced. Call SqlUtil::AbstractTable::hasArrayBind() to check at runtime if the driver supports bulk SQL operations.
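The runtime check mentioned above could be performed as follows (a sketch; the table object and the warning text are illustrative):

```qore
# warn when bulk DML will be emulated with single-row SQL operations
if (!table.hasArrayBind())
    printf("warning: driver %y does not support array binding; bulk SQL operations will be emulated\n",
           table.getDatasource().getDriverName());
```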

Member Function Documentation

◆ constructor() [1/2]

BulkSqlUtil::AbstractBulkOperation::constructor(string name, SqlUtil::AbstractTable target, *hash opts)

creates the object from the supplied arguments

Parameters
name - the name of the operation
target - the target table object
opts - an optional hash of options for the object as follows:
  • "block_size": the number of rows executed at once (default: 1000)
  • "info_log": an optional info logging callback; must accept a string format specifier and sprintf()-style arguments
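For example, a concrete subclass might be constructed with these options as follows (a sketch; BulkInsertOperation, the table object, and the logging closure are assumptions, not part of this class's API):

```qore
# an info logging closure matching the "info_log" option's expected signature
code log_info = sub (string fmt) { vprintf(fmt + "\n", argv); };

BulkInsertOperation op(table, (
    "block_size": 200,   # flush to the DB every 200 queued rows
    "info_log": log_info,
));
```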

◆ constructor() [2/2]

BulkSqlUtil::AbstractBulkOperation::constructor(string name, SqlUtil::Table target, *hash opts)

creates the object from the supplied arguments

Parameters
name - the name of the operation
target - the target table object
opts - an optional hash of options for the object as follows:
  • "block_size": the number of rows executed at once (default: 1000)
  • "info_log": an optional info logging callback; must accept a string format specifier and sprintf()-style arguments

◆ destructor()

BulkSqlUtil::AbstractBulkOperation::destructor ( )

throws an exception if there is data pending in the internal row data cache; make sure to call flush() or discard() before destroying the object

Exceptions
BLOCK-ERROR - there is unflushed data in the internal row data cache; make sure to call flush() or discard() before destroying the object

◆ discard()

BulkSqlUtil::AbstractBulkOperation::discard ( )

discards any buffered batched data; this method should be called before destroying the object if an error occurs

Example:
# single commit and rollback
on_success ds.commit();
on_error ds.rollback();
{
    # each operation needs to be flushed or discarded individually
    on_success {
        op1.flush();
        op2.flush();
    }
    on_error {
        op1.discard();
        op2.discard();
    }

    # data is queued and flushed automatically when the buffer is full
    map op1.queueData($1), data1.iterator();
    map op2.queueData($1), data2.iterator();
}
Note
  • make sure to call flush() before committing the transaction or discard() before rolling back the transaction or destroying the object when using this method
  • flush() or discard() needs to be executed individually for each bulk operation object used in the block whereas the DB transaction needs to be committed or rolled back once per datasource
◆ flush()

BulkSqlUtil::AbstractBulkOperation::flush ( )

flushes any remaining batched data to the database; this method should always be called before committing the transaction or destroying the object

Example:
# single commit and rollback
on_success ds.commit();
on_error ds.rollback();
{
    # each operation needs to be flushed or discarded individually
    on_success {
        op1.flush();
        op2.flush();
    }
    on_error {
        op1.discard();
        op2.discard();
    }

    # data is queued and flushed automatically when the buffer is full
    map op1.queueData($1), data1.iterator();
    map op2.queueData($1), data2.iterator();
}
Note
  • make sure to call flush() before committing the transaction or discard() before rolling back the transaction or destroying the object when using this method
  • flush() or discard() needs to be executed individually for each bulk operation object used in the block whereas the DB transaction needs to be committed or rolled back once per datasource
◆ queueData() [1/2]

BulkSqlUtil::AbstractBulkOperation::queueData ( hash  data)

queues row data in the block buffer; the block buffer is flushed to the DB if the buffer size reaches the limit defined by the block_size option; does not commit the transaction

Example:
# single commit and rollback
on_success ds.commit();
on_error ds.rollback();
{
    # each operation needs to be flushed or discarded individually
    on_success {
        op1.flush();
        op2.flush();
    }
    on_error {
        op1.discard();
        op2.discard();
    }

    # data is queued and flushed automatically when the buffer is full
    map op1.queueData($1), data1.iterator();
    map op2.queueData($1), data2.iterator();
}
Parameters
data - the input record or record set (in case a hash of lists is passed); each hash represents a row (keys are column names and values are column values); when inserting, SQL Insert Operator Functions can also be used. If at least one hash value is a list, then any non-hash (indicating an insert operator hash) and non-list values are assumed to be constant values for every row; future calls of this method (and overloaded variants) will therefore ignore any values given for such keys and use the values given in the first call.
Note
  • the first row passed is taken as a template row; every other row must always have the same keys in the same order, otherwise the results are unpredictable
  • if any SQL Insert Operator Functions are used, then they are assumed to be identical in every row
  • make sure to call flush() before committing the transaction or discard() before rolling back the transaction or destroying the object when using this method
  • flush() or discard() needs to be executed individually for each bulk operation object used in the block whereas the DB transaction needs to be committed or rolled back once per datasource
◆ queueData() [2/2]

BulkSqlUtil::AbstractBulkOperation::queueData ( list  l)

queues row data in the block buffer; the block buffer is flushed to the DB if the buffer size reaches the limit defined by the block_size option; does not commit the transaction

Example:
# single commit and rollback
on_success ds.commit();
on_error ds.rollback();
{
    # each operation needs to be flushed or discarded individually
    on_success {
        op1.flush();
        op2.flush();
    }
    on_error {
        op1.discard();
        op2.discard();
    }

    # data is queued and flushed automatically when the buffer is full
    map op1.queueData($1), data1.iterator();
    map op2.queueData($1), data2.iterator();
}
Parameters
l - a list of hashes representing the input row data; each hash represents a row (keys are column names and values are column values); when inserting, SQL Insert Operator Functions can also be used
Note
  • the first row passed is taken as a template row; every other row must always have the same keys in the same order, otherwise the results are unpredictable
  • if any SQL Insert Operator Functions are used, then they are assumed to be identical in every row
  • make sure to call flush() before committing the transaction or discard() before rolling back the transaction or destroying the object when using this method
  • flush() or discard() needs to be executed individually for each bulk operation object used in the block whereas the DB transaction needs to be committed or rolled back once per datasource
◆ size()

int BulkSqlUtil::AbstractBulkOperation::size ( )

returns the current size of the cache as a number of rows

Returns
the current size of the cache as a number of rows
Since
BulkSqlUtil 1.2