Qore TableMapper Module Reference  1.3
TableMapper::InboundTableMapper Class Reference

provides an inbound data mapper to a Table target More...

Inheritance diagram for TableMapper::InboundTableMapper:

Public Member Methods

nothing commit ()
 flushes any queued data and commits the transaction
 
 constructor (SqlUtil::AbstractTable target, hash mapv, *hash opts)
 builds the object based on a hash providing field mappings, data constraints, and optionally custom mapping logic More...
 
 destructor ()
 throws an exception if there is data pending in the block cache More...
 
 discard ()
 discards any buffered batched data; this method should be called after using the batch APIs (queueData()) and an error occurs More...
 
*hash flush ()
 flushes any remaining batched data to the database; this method should always be called before committing the transaction or destroying the object More...
 
Qore::SQL::AbstractDatasource getDatasource ()
 returns the AbstractDatasource object associated with this object
 
*list getReturning ()
 returns a list argument for the SqlUtil "returning" option, if applicable
 
SqlUtil::AbstractTable getTable ()
 returns the underlying SqlUtil::AbstractTable object
 
string getTableName ()
 returns the table name
 
hash insertRow (hash< auto > rec)
 inserts or upserts a row into the target table based on a mapped input record; does not commit the transaction More...
 
deprecated hash insertRowNoCommit (hash rec)
 an alias for insertRow(); deprecated, do not use
 
TableMapper::InboundTableMapperIterator iterator (Qore::AbstractIterator i)
 returns an iterator for the current object More...
 
 logOutput (hash h)
 ignores logging from Mapper since sequence values may also need to be logged; output is logged manually in insertRow()
 
hash optionKeys ()
 returns a hash of valid constructor option keys for this class (can be overridden in subclasses) More...
 
*hash queueData (hash< auto > rec, *hash< auto > crec)
 inserts/upserts a row (or a set of rows, in case a hash of lists is passed) into the block buffer based on a mapped input record; the block buffer is flushed to the DB if the buffer size reaches the limit defined by the "insert_block" option; does not commit the transaction More...
 
*hash queueData (Qore::AbstractIterator iter, *hash crec)
 inserts/upserts a set of rows (from an iterator that returns hashes as values where each hash value represents an input record) into the block buffer based on a mapped input record; the block buffer is flushed to the DB if the buffer size reaches the limit defined by the "insert_block" option; does not commit the transaction More...
 
*hash queueData (list l, *hash crec)
 inserts/upserts a set of rows (list of hashes representing input records) into the block buffer based on a mapped input record; the block buffer is flushed to the DB if the buffer size reaches the limit defined by the "insert_block" option; does not commit the transaction More...
 
nothing rollback ()
 discards any queued data and rolls back the transaction
 
 setRowCode (*code rowc)
 sets a closure or call reference that will be called when data has been sent to the database and all output data is available; must accept a hash argument representing the data written to the database, including any output arguments; this code is reset once the transaction is committed More...
 
hash< string, bool > validKeys ()
 returns a hash of valid field keys for this class (can be overridden in subclasses) More...
 
hash< string, bool > validTypes ()
 returns a hash of valid field types for this class (can be overridden in subclasses) More...
 

Static Public Member Methods

static nothing addBatchToBatch (reference< hash > hb, hash batch)
 
static deprecated nothing addBatchToBatch (reference< hash > hb, reference x1, hash batch, *reference x2)
 adds a batch (hash of lists) to another batch (in-place) More...
 
static hash getOutputRecord (*string mname, AbstractTable table, *hash output)
 returns a description of the output record based on the AbstractTable target
 

Public Attributes

const OptionDefaults
 default option values
 
const OptionKeys
 option keys for this object
 

Private Member Methods

 checkMapField (string k, reference< auto > fh)
 perform per-field pre-processing on the passed map in the constructor More...
 
 error (string fmt)
 prepends the datasource description to the error string and calls Mapper::error()
 
 error2 (string ex, string fmt)
 prepends the datasource description to the error description and calls Mapper::error2()
 
*hash flushIntern (bool force_flush)
 
*int getRecListSize (hash rec)
 
 init (hash mapv, *hash opts)
 common constructor initialization
 
bool isMapperConstant ()
 
 mapFieldType (string key, hash m, reference< auto > v, hash rec)
 performs type handling
 
*hash queueDataIntern (hash rec)
 inserts a row into the block buffer based on a mapped input record; does not commit the transaction More...
 
hash record2Batch (hash h)
 

Private Attributes

SqlUtil::AbstractDatabase db
 the target Database object in case sequence values need to be acquired
 
bool has_returning
 if the AbstractTable object supports the "returning" clause
 
hash hbuf
 buffer for bulk DML
 
int insert_block
 bulk DML block size (also valid for upserts despite the name)
 
list out_args = ()
 extra arguments for sequence output binds
 
list ret_args = ()
 "returning" arguments for sequences
 
Qore::SQL::AbstractSQLStatement stmt
 statement for inserts/upserts
 
SqlUtil::AbstractTable table
 the target table object
 
bool unstable_input = False
 "unstable input" option for non-optimized inserts/upserts (~33% performance reduction in insert/upsert speed)
 

Detailed Description

provides an inbound data mapper to a Table target

Member Function Documentation

◆ addBatchToBatch() [1/2]

static nothing TableMapper::InboundTableMapper::addBatchToBatch ( reference< hash >  hb,
hash  batch 
)
static

adds a batch (hash of lists) to another batch (in-place)

Parameters
hb - reference to a batch that will be enriched
batch - the batch to be added to 'hb'
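
For illustration, a minimal sketch of merging one batch into another in-place (the column names and values here are hypothetical):

```qore
# two batches (hashes of lists) with the same columns (hypothetical data)
hash batch = ("id": (1, 2), "name": ("a", "b"));
hash more = ("id": (3,), "name": ("c",));
# merge 'more' into 'batch' in-place
TableMapper::InboundTableMapper::addBatchToBatch(\batch, more);
# 'batch' should now hold all three rows: ("id": (1, 2, 3), "name": ("a", "b", "c"))
```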

◆ addBatchToBatch() [2/2]

static deprecated nothing TableMapper::InboundTableMapper::addBatchToBatch ( reference< hash >  hb,
reference  x1,
hash  batch,
*reference  x2 
)
static

adds a batch (hash of lists) to another batch (in-place)

Deprecated:
use addBatchToBatch(reference<hash>, hash) instead

◆ checkMapField()

TableMapper::InboundTableMapper::checkMapField ( string  k,
reference< auto >  fh 
)
private

perform per-field pre-processing on the passed map in the constructor

Parameters
k - the field name
fh - a reference to the field's value in the map

◆ constructor()

TableMapper::InboundTableMapper::constructor ( SqlUtil::AbstractTable  target,
hash  mapv,
*hash  opts 
)

builds the object based on a hash providing field mappings, data constraints, and optionally custom mapping logic

The target table is also scanned using SqlUtil, and column definitions are used to update the target record specification; if any columns have NOT NULL constraints and no default value, mapping, or constant value, a MAP-ERROR exception is thrown

Example:
const DbMapper = (
"id": ("sequence": "seq_inventory_example"),
"store_code": "StoreCode",
"product_code": "ProductCode",
"product_desc": "ProductDescription",
"ordered": "Ordered",
"available": "Available",
"in_transit": "InTransit",
"status": ("constant": "01"),
"total": int sub (any x, hash rec) { return rec.Available.toInt() + rec.Ordered.toInt() + rec.InTransit.toInt(); },
);
InboundTableMapper mapper(table, DbMapper);
Parameters
target - the target table object
mapv - a hash providing field mappings; each hash key is the name of the output field; each value is either True (meaning no translations are done; the data is copied 1:1) or a hash describing the mapping; see TableMapper Specification Format for detailed documentation for this option
opts - an optional hash of options for the mapper; see Mapper Options for a description of valid mapper options plus the following options specific to this object:
  • "unstable_input": set this option to True (default False) if the input passed to the mapper is unstable, meaning that different hash keys or a different hash key order can be passed as input data in each call to insertRow(); if this option is set, then insert speed will be reduced by about 33%; when this option is not set, an optimized insert/upsert approach is used which allows for better performance
  • "insert_block": for DB drivers supporting bulk DML (for use with the queueData(), flush(), and discard() methods), the number of rows inserted/upserted at once (default: 1000); only used when "unstable_input" is False and bulk inserts are supported by the table object; see InboundTableMapper Bulk Insert API for more information; note that this option is also applied when upserting, despite its name
  • "rowcode": a per-row closure or call reference for batch inserts/upserts; it must take a single hash argument and will be called for every row after a bulk insert/upsert; the hash argument representing the row inserted/upserted will also contain any output values for inserts if applicable (such as sequence values inserted from "sequence" field options for the given column)
  • "upsert": if True then data will be upserted instead of inserted (default is to insert)
  • "upsert_strategy": see Upsert Strategy Codes for possible values for the upsert strategy; if this option is present, then "upsert" is also assumed to be True; if not present but "upsert" is True, then SqlUtil::AbstractTable::UpsertAuto is assumed
Exceptions
MAP-ERROR - the map hash has a logical error (ex: "trunc" key given without "maxlen", invalid map key); insert-only options used with the "upsert" option
TABLE-ERROR - the table includes a column using an unknown native data type
See also
setRowCode()
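
For reference, a hedged sketch of enabling upsert mode via the "upsert" and "upsert_strategy" options described above (the "table" object and "DbMapper" map are assumed to exist as in the example above; SqlUtil::AbstractTable::UpsertAuto is the strategy named in the option description):

```qore
# build an upserting mapper instead of an inserting one
InboundTableMapper upsert_mapper(table, DbMapper,
    ("upsert": True,
     "upsert_strategy": SqlUtil::AbstractTable::UpsertAuto));
```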

◆ destructor()

TableMapper::InboundTableMapper::destructor ( )

throws an exception if there is data pending in the block cache

Exceptions
BLOCK-ERROR - there is unflushed data in the block cache; make sure to call flush() or discard() before destroying the object

◆ discard()

TableMapper::InboundTableMapper::discard ( )

discards any buffered batched data; this method should be called after using the batch APIs (queueData()) and an error occurs

Example:
on_success table_mapper.commit();
on_error table_mapper.rollback();
{
on_success table_mapper.flush();
on_error table_mapper.discard();
map table_mapper.queueData($1), data.iterator();
}
Note
  • flush() or discard() needs to be executed for each mapper used in the block when using multiple mappers whereas the DB transaction needs to be committed or rolled back once per datasource
  • also clears any row closure or call reference set for batch operations
  • if an error occurs flushing data, the count is reset by calling Mapper::resetCount()

◆ flush()

*hash TableMapper::InboundTableMapper::flush ( )

flushes any remaining batched data to the database; this method should always be called before committing the transaction or destroying the object

Example:
on_success table_mapper.commit();
on_error table_mapper.rollback();
{
on_success table_mapper.flush();
on_error table_mapper.discard();
map table_mapper.queueData($1), data.iterator();
}
Returns
if batch data was inserted then a hash (columns) of lists (row data) of all data inserted and potentially returned (in case of sequences) from the database server is returned; if constant mappings are used with batch data, then they are returned as single values assigned to the hash keys
Note
  • flush() or discard() needs to be executed for each mapper used in the block when using multiple mappers whereas the DB transaction needs to be committed or rolled back once per datasource
  • also clears any row closure or call reference set for batch operations
  • if an error occurs flushing data, the count is reset by calling Mapper::resetCount()

◆ flushIntern()

*hash TableMapper::InboundTableMapper::flushIntern ( bool  force_flush)
private

flushes queued data to the database

Parameters
force_flush - if True, flushes even when there is not enough data to fill a block, e.g. on an explicit flush() call
Returns
Qore::NOTHING when nothing was flushed; otherwise a batch (hash of lists) with the data that was flushed to the DB, updated with the 'ret_args' from the DB.

◆ getRecListSize()

*int TableMapper::InboundTableMapper::getRecListSize ( hash  rec)
private

For a hash possibly containing lists (bulk data), returns the size of the first list found within the hash; if no list is found, returns NOTHING.

Parameters
rec - a hash representing (possibly) bulk data
Returns
NOTHING if the hash represents a single input record, otherwise the number of records represented (may also be 0)

◆ insertRow()

hash TableMapper::InboundTableMapper::insertRow ( hash< auto >  rec)

inserts or upserts a row into the target table based on a mapped input record; does not commit the transaction

Parameters
rec - the input record
Returns
a hash of the row values inserted/upserted (row name: value); note that any sequence values inserted are also returned here
Note
on mappers with "insert_block" > 1 (i.e. when the underlying DB supports bulk operations), do not mix single-record insertions (insertRow()) and bulk operations (queueData()) in one transaction; mixing them can lead to DB type mismatches and corresponding exceptions depending on the record types (e.g. "ORA-01722: invalid number" on Oracle)
Exceptions
MISSING-INPUT - a field marked mandatory is missing
STRING-TOO-LONG - a field value exceeds the maximum length and the 'trunc' key is not set
INVALID-NUMBER - the field is marked as numeric but the input value contains non-numeric data
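
A minimal usage sketch (the mapper, input record keys, and transaction handling here are illustrative, following the constructor example's map):

```qore
# "table_mapper" is assumed to be an InboundTableMapper built as in the constructor example
on_success table_mapper.commit();
on_error table_mapper.rollback();
hash row = table_mapper.insertRow(("StoreCode": "S1", "ProductCode": "P1", "ProductDescription": "widget"));
# "row" holds the column values inserted, including any sequence values (e.g. "id")
```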

◆ isMapperConstant()

bool TableMapper::InboundTableMapper::isMapperConstant ( )
private

returns True when the mapper is entirely "constant", i.e. it always provides the same output regardless of the input

◆ iterator()

TableMapper::InboundTableMapperIterator TableMapper::InboundTableMapper::iterator ( Qore::AbstractIterator  i)

returns an iterator for the current object

Parameters
i - input iterator; AbstractIterator::getValue() must return a hash
Since
TableMapper 1.1.1
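
A hedged sketch of using the returned iterator (the input data is hypothetical; each iteration is assumed to yield the mapped form of one input row):

```qore
# "data" is assumed to be a list of input hashes
TableMapper::InboundTableMapperIterator i = table_mapper.iterator(data.iterator());
map printf("mapped row: %y\n", $1), i;
```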

◆ optionKeys()

hash TableMapper::InboundTableMapper::optionKeys ( )

returns a hash of valid constructor option keys for this class (can be overridden in subclasses)

Returns
 a hash of valid constructor option keys for this class (can be overridden in subclasses)

◆ queueData() [1/3]

*hash TableMapper::InboundTableMapper::queueData ( hash< auto >  rec,
*hash< auto >  crec 
)

inserts/upserts a row (or a set of rows, in case a hash of lists is passed) into the block buffer based on a mapped input record; the block buffer is flushed to the DB if the buffer size reaches the limit defined by the "insert_block" option; does not commit the transaction

Example:
# Example 1:
{
on_success table_mapper.flush();
on_error table_mapper.discard();
while (*hash h = stmt.fetchColumns(1000)) {
table_mapper.queueData(h);
}
}
# Example 2:
{
const Map1 = (
"num": ("a"),
"str": ("b"),
);
Table table(someDataSource, "some_table");
InboundTableMapper table_mapper(table, Map1, ("insert_block" : 3));
on_error table_mapper.discard();
list data = (("a" : 1, "b" : "bar"), ("a" : 2, "b" : "foo"));
list mapped_data = (map table_mapper.queueData($1), data.iterator()) ?? ();
# mapped_data = () - no insertion done yet since insert_block = 3
list flushed_data = table_mapper.flush() ?? ();
mapped_data += flushed_data;
# mapped_data = ("num" : (1,2), "str" : ("bar","foo"));
# the table is updated too
}

Data is only inserted/upserted if the block buffer size reaches the limit defined by the "insert_block" option, in which case this method returns all the data inserted/upserted. In case the mapped data is only inserted into the cache, no value is returned.

Parameters
rec - the input record, or a record set in case a hash of lists is passed
crec - an optional simple hash of data to be added to each input row before mapping
Returns
if batch data was inserted then a hash (columns) of lists (row data) of all data inserted and potentially returned (in case of sequences) from the database server is returned; if constant mappings are used with batch data, then they are returned as single values assigned to the hash keys; any data returned is always in batch form (a hash of lists)
Note
  • make sure to call flush() before committing the transaction or discard() before rolling back the transaction or destroying the object when using this method
  • flush() or discard() needs to be executed for each mapper used in the block when using multiple mappers whereas the DB transaction needs to be committed or rolled back once per datasource
  • this method and batched inserts/upserts in general cannot be used when the "unstable_input" option is given in the constructor
  • if the "insert_block" option is set to 1, then this method simply calls insertRow(); however please note that in this case the return value is a hash of single value lists corresponding to a batch data insert
  • if an error occurs flushing data, the count is reset by calling Mapper::resetCount()
  • passing a hash of lists in 'rec' provides very high performance with SQL drivers that support bulk DML
  • in case a hash of empty lists is passed, Qore::NOTHING is returned
  • 'crec' does not affect the number of output rows; in particular, if 'rec' is a batch with N rows of a column C and 'crec = ("C" : "mystring")', then the output is as if there were N rows with C = "mystring" on the input
  • on mappers with "insert_block" > 1 (i.e. when the underlying DB supports bulk operations), do not mix single-record insertions (insertRow()) and bulk operations (queueData()) in one transaction; mixing them can lead to DB type mismatches and corresponding exceptions depending on the record types (e.g. "ORA-01722: invalid number" on Oracle)
Exceptions
MAPPER-BATCH-ERROR - this exception is thrown if this method is called when the "unstable_input" option was given in the constructor
MISSING-INPUT - a field marked mandatory is missing
STRING-TOO-LONG - a field value exceeds the maximum length and the 'trunc' key is not set
INVALID-NUMBER - the field is marked as numeric but the input value contains non-numeric data

◆ queueData() [2/3]

*hash TableMapper::InboundTableMapper::queueData ( Qore::AbstractIterator  iter,
*hash  crec 
)

inserts/upserts a set of rows (from an iterator that returns hashes as values where each hash value represents an input record) into the block buffer based on a mapped input record; the block buffer is flushed to the DB if the buffer size reaches the limit defined by the "insert_block" option; does not commit the transaction

Example:
on_success table_mapper.commit();
on_error table_mapper.rollback();
{
on_success table_mapper.flush();
on_error table_mapper.discard();
table_mapper.queueData(data.iterator());
}

Data is only inserted/upserted if the block buffer size reaches the limit defined by the "insert_block" option, in which case this method returns all the data inserted/upserted. In case the mapped data is only inserted into the cache, no value is returned.

Parameters
iter - iterator over the record set (list of hashes)
crec - an optional simple hash of data to be added to each input row before mapping
Returns
if batch data was inserted then a hash (columns) of lists (row data) of all data inserted and potentially returned (in case of sequences) from the database server is returned; if constant mappings are used with batch data, then they are returned as single values assigned to the hash keys
Note
  • make sure to call flush() before committing the transaction or discard() before rolling back the transaction or destroying the object when using this method
  • flush() or discard() needs to be executed for each mapper used in the block when using multiple mappers whereas the DB transaction needs to be committed or rolled back once per datasource
  • this method and batched inserts/upserts in general cannot be used when the "unstable_input" option is given in the constructor
  • if the "insert_block" option is set to 1, then this method simply calls insertRow() to insert the data, nevertheless it still returns bulk output
  • if an error occurs flushing data, the count is reset by calling Mapper::resetCount()
  • on mappers with "insert_block" > 1 (i.e. when the underlying DB supports bulk operations), do not mix single-record insertions (insertRow()) and bulk operations (queueData()) in one transaction; mixing them can lead to DB type mismatches and corresponding exceptions depending on the record types (e.g. "ORA-01722: invalid number" on Oracle)
Exceptions
MAPPER-BATCH-ERROR - this exception is thrown if this method is called when the "unstable_input" option was given in the constructor
MISSING-INPUT - a field marked mandatory is missing
STRING-TOO-LONG - a field value exceeds the maximum length and the 'trunc' key is not set
INVALID-NUMBER - the field is marked as numeric but the input value contains non-numeric data

◆ queueData() [3/3]

*hash TableMapper::InboundTableMapper::queueData ( list  l,
*hash  crec 
)

inserts/upserts a set of rows (list of hashes representing input records) into the block buffer based on a mapped input record; the block buffer is flushed to the DB if the buffer size reaches the limit defined by the "insert_block" option; does not commit the transaction

Example:
on_success table_mapper.commit();
on_error table_mapper.rollback();
{
on_success table_mapper.flush();
on_error table_mapper.discard();
table_mapper.queueData(data.iterator());
}

Data is only inserted/upserted if the block buffer size reaches the limit defined by the "insert_block" option, in which case this method returns all the data inserted/upserted. In case the mapped data is only inserted into the cache, no value is returned.

Parameters
l - a list of hashes representing the input records
crec - an optional simple hash of data to be added to each row
Returns
if batch data was inserted/upserted then a hash (columns) of lists (row data) of all data inserted/upserted and potentially returned (in case of sequences) from the database server is returned
Note
  • make sure to call flush() before committing the transaction or discard() before rolling back the transaction or destroying the object when using this method
  • flush() or discard() needs to be executed for each mapper used in the block when using multiple mappers whereas the DB transaction needs to be committed or rolled back once per datasource
  • this method and batched inserts/upserts in general cannot be used when the "unstable_input" option is given in the constructor
  • if the "insert_block" option is set to 1, then this method simply calls insertRow()
  • if an error occurs flushing data, the count is reset by calling Mapper::resetCount()
Exceptions
MAPPER-BATCH-ERROR - this exception is thrown if this method is called when the "unstable_input" option was given in the constructor
MISSING-INPUT - a field marked mandatory is missing
STRING-TOO-LONG - a field value exceeds the maximum length and the 'trunc' key is not set
INVALID-NUMBER - the field is marked as numeric but the input value contains non-numeric data

◆ queueDataIntern()

*hash TableMapper::InboundTableMapper::queueDataIntern ( hash  rec)
private

inserts a row into the block buffer based on a mapped input record; does not commit the transaction

Data is only inserted/upserted if the block buffer size reaches the limit defined by the "insert_block" option, in which case this method returns all the data inserted/upserted. In case the mapped data is only inserted/upserted into the cache, no value is returned.

Parameters
rec - a hash representing a single input record
Returns
if batch data was inserted (flushed) then a hash (columns) of lists (row data) of all data inserted and potentially returned (in case of sequences) from the database server is returned; if constant mappings are used with batch data, then they are returned as single values assigned to the hash keys; if nothing was flushed to the database in the call, Qore::NOTHING is returned
Note
this method does not process hashes of lists (bulk format); it always expects a single input record, although it returns a batch
Exceptions
MISSING-INPUT - a field marked mandatory is missing
STRING-TOO-LONG - a field value exceeds the maximum length and the 'trunc' key is not set
INVALID-NUMBER - the field is marked as numeric but the input value contains non-numeric data

◆ record2Batch()

hash TableMapper::InboundTableMapper::record2Batch ( hash  h)
private

transforms a single record into a batch, i.e. all non-constant elements are transformed into single-element lists

◆ setRowCode()

TableMapper::InboundTableMapper::setRowCode ( *code  rowc)

sets a closure or call reference that will be called when data has been sent to the database and all output data is available; it must accept a hash argument representing the data written to the database, including any output arguments. This code is reset once the transaction is committed.

Example:
code rowcode = sub (hash row) {
# process row data
};
table_mapper.setRowCode(rowcode);
Parameters
rowc - a closure or call reference that will be called when data has been sent to the database and all output data is available; it must accept a hash argument representing the data written to the database, including any output arguments
Note
the per-row closure or call reference can also be set by using the "rowcode" option in the constructor()

◆ validKeys()

hash<string, bool> TableMapper::InboundTableMapper::validKeys ( )

returns a hash of valid field keys for this class (can be overridden in subclasses)

Returns
 a hash of valid field keys for this class (can be overridden in subclasses)

◆ validTypes()

hash<string, bool> TableMapper::InboundTableMapper::validTypes ( )

returns a hash of valid field types for this class (can be overridden in subclasses)

Returns
 a hash of valid field types for this class (can be overridden in subclasses)