Qorus User Development Guidelines and Best Practices

Designing Shared Interface Components | Parse Options in Qorus User Code | Connections | Author Labels | Do Not Use Deprecated APIs | Use UserAPI | Workflows | Bulk DML Processing And Data Streaming | Service Resources | Interface Testing | Unit Testing

Designing Shared Interface Components

Rules for Shared Library Code Objects

Motivation: Code Quality, Eliminate Regressions

Shared code is any library object (function, class, or constant) that is used by more than one interface (workflows, services, jobs, or mappers).

When code is shared in multiple interfaces, there is a chance that any change to the shared code could have unintended consequences for other interfaces not meant to be affected by the change. The more complex the solution, the greater the chance that unintended regression errors could be caused when modifying the shared code.

Shared code is only acceptable if the following criteria are met:

  • The shared code is completely new
  • The shared code is part of interfacing fundamental infrastructure where there is a clear advantage to sharing the code
  • The shared code will only be used by interfaces currently part of the change or new development and therefore all affected code will be tested together

When changing existing shared code, particularly when the change or new development should affect only some of the interfaces that use the shared code, the shared code should be copied and the copy modified instead, eliminating the chance of unintended regressions.

Note: Shared code should not be directly referenced as a step function attribute; instead, the library function object should be referenced in the workflow, and the new step function should call the library function. This ensures that workflow recoverability compatibility is not affected when changes are made to workflows that share this library object (in addition, the above rules must be applied to any new development); see the next guideline.

Shared Class Use in Workflows

Parse Options in Qorus User Code

Motivation: Code Maintainability, Performance

The following parse options should be used in Qorus code:

%new-style
%require-types
%strict-args
%enable-all-warnings
  • %new-style: allows for a more compact syntax and therefore less typing; note that the old2new script in the qore source repository https://github.com/qorelanguage/qore/tree/develop/examples directory can be used to convert code from old-style to new-style
  • %require-types: makes code more maintainable; old-style typeless code is much harder to understand and maintain. Additionally, code with type declarations executes faster and more efficiently due to optimizations the runtime engine can make when types are restricted in advance. Furthermore, more programming errors can be caught at parse time, which allows for faster development.
  • %strict-args: prevents "noop" function variants from being used and causes errors with argument passing to be raised instead of being silently ignored, which can hide errors
  • %enable-all-warnings: allows more errors to be caught at parse time, allowing for faster development at a higher quality

No warnings should appear when loading Qorus user code.
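As a sketch, every Qorus user code file would then begin with this header (the function below is purely illustrative):

```qore
# recommended parse options for all Qorus user code
%new-style
%require-types
%strict-args
%enable-all-warnings

# with %require-types, all parameter and return types must be declared
string sub make_greeting(string name) {
    return sprintf("Hello, %s!", name);
}
```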

Connections

Motivation: Operational Control and Transparency

Connections should be used for all external connections; a connection-specific module should be developed for connections that don't have an existing connection class.

Why? Because connections allow for configuration transparency (connection configuration and status are displayed in the UI) as well as automatic dependency tracking and monitoring, which allows Qorus to proactively alert operations to connection problems (or even missing filesystems, which are also implemented as connections). Therefore this applies equally to filesystem connections (ex: file://appl/data/mseplftp, for filesystem polling or sending; using a filesystem as an interface) as well as to more standard network-based connection types (ex: sftp://partnerp@sftp.example.com). Note that filesystem connections are monitored not only for their presence (mount status) but also for when they exceed pre-defined usage thresholds; see alert-fs-full for more information.

See the following link for pre-defined connection object types: http://www.qoretechnologies.com/manual/qorus/latest/qorus/connmon.html#userconntypes

example ( sftp-partner.qconn.yaml ):

# This is a generated file, don't edit!
type: connection
name: sftp-partner
desc: Partner SFTP Polling / Delivery connection
url: sftp://partnerp@sftp.example.com
options:
  keyfile: $OMQ_DIR/user/sftp-config/id_rsa-partner

See the following for more information on defining user connections in Qorus: http://www.qoretechnologies.com/manual/qorus/latest/qorus/definingconnections.html

To define new connection types, implement a user module defining the connection type as a concrete implementation of the AbstractConnection class, and add the user module's name to the connection-modules option. Each connection type is associated with one unique scheme; schemes and connection factories are exported from the module using a special function in the connection module (public AbstractIterator sub get_schemes() {}); this is documented with the connection-modules system option.
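A minimal sketch of such a connection module follows; the module name, scheme, and the hash format returned by get_schemes() are illustrative assumptions, so check the AbstractConnection API and the connection-modules documentation for the Qorus version in use:

```qore
# MyAppConnection.qm: hypothetical user connection module for a "myapp://" scheme
%new-style
%require-types
%strict-args
%enable-all-warnings

module MyAppConnection {
    version = "1.0";
    desc = "connection type for the hypothetical myapp:// scheme";
    author = "Example Author";
}

public class MyAppConnection inherits AbstractConnection {
    # constructor and the abstract methods (ex: getImpl()) must be
    # implemented here according to the AbstractConnection API
}

# export the scheme -> connection factory mapping to Qorus; the exact
# hash layout expected here is documented with the connection-modules option
public AbstractIterator sub get_schemes() {
    return new HashIterator(("myapp": ("cls": Class::forName("MyAppConnection"))));
}
```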

Author Labels

Motivation: Code Maintainability

All code should contain author labels or an author attribute. When a new author is added, add the name to the front if taking over ownership of the object; otherwise, if just performing minor changes (for example, while the primary developer is on vacation), add the name to the end.
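For example (the file name and author names are illustrative), an author attribute in an interface definition might look like this, with the current owner listed first:

```
# my-service.qsd.yaml (illustrative)
author: New Owner, Original Author
```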

Do Not Use Deprecated APIs

Motivation: Maintainability

correct

hash<auto> h = WorkflowApi::getDynamicData();

incorrect

hash h = wf_get_dynamic_data();

Use UserApi::getSqlTable() instead of Table::constructor()

Motivation: Performance

UserApi::getSqlTable() returns an AbstractTable object from the table cache and therefore provides higher performance and more efficient memory usage: AbstractTable (and therefore Table) objects are large objects that are expensive to create, requiring a series of complex queries against the underlying database's data dictionary.

By using the table cache, these objects are created once on demand and then can be returned quickly on request to clients requiring the use of the table object.

correct

AbstractTable my_table = UserApi::getSqlTable("my-datasource", "my_table_name");

incorrect

Table my_table(UserApi::getDatasourcePool("my-datasource"), "my_table_name");

Workflows

Autostart

Motivation: Operational Control

All new workflows should include an autostart value in their definition to ensure that the workflow is started automatically when installed. Operations can change the value if necessary; note that any operational change is then taken as the "master value" for the workflow's autostart setting; subsequently oloading the workflow will not change a value edited by operations.
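As a sketch (the workflow name and metadata are hypothetical), the autostart value sits alongside the other attributes in the workflow definition:

```
# my-workflow.qwf.yaml (illustrative)
type: workflow
name: MY-WORKFLOW
desc: example workflow started automatically on installation
version: "1.0"
autostart: 1
```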

Workflow OneTimeInit Function

Motivation: Code Maintainability

The Qorus onetimeinit function (workflow initialization) should initialize any objects that have a large initialization cost and set them in workflow execution instance data.

System Preferences/Options Usage and Access

Motivation: Code Maintainability

The following variables / parameters should be set in the onetimeinit function :

  • Objects that are expensive to initialize and therefore would adversely affect performance if initialized for every step or every order

Normally, workflow runtime options should not be set in the onetimeinit function, because any change to these parameters would then require a workflow reload to take effect.

System preference values that define workflow runtime options should be acquired at runtime when needed, providing the following advantages:

  • Much better readability, variables are in local scope
  • Workflows/services/jobs do not need reload to put the new value in effect
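The difference can be sketched as follows; the "retry-limit" configuration item is a hypothetical example:

```qore
# incorrect: caching the value at initialization means any change
# to the configuration requires a workflow reload to take effect
sub onetimeinit() {
    WorkflowApi::updateInstanceData(("retry_limit": UserApi::getConfigItemValue("retry-limit")));
}

# correct: read the value at runtime when needed; changes take
# effect immediately, and the variable is in local scope
sub my_step() {
    int retry_limit = UserApi::getConfigItemValue("retry-limit");
    # ... use retry_limit here ...
}
```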

Bulk DML Processing and Data Streaming

Use SQLStatement::fetchColumns() to Select Data

Motivation: Performance

This allows the row block size to be used to fetch all the required rows in one round trip (for example, in the Oracle driver), which greatly improves performance. Additionally, when streaming data with DbRemoteSend (which uses column format by default), the data format used does not need any translation to the format used in the serialized messages.
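A sketch of the pattern, assuming a hypothetical datasource and table (the block size of 1000 is illustrative, and the loop-termination check may need adjusting for the driver in use):

```qore
SQLStatement stmt(UserApi::getDatasourcePool("my-datasource"));
on_exit stmt.close();
stmt.prepare("select id, status from my_table");
while (True) {
    # fetch up to 1000 rows per network round trip, in column format
    hash<auto> block = stmt.fetchColumns(1000);
    if (!block.firstValue())
        break;
    # process the block: each key is a column name holding a list of values
}
```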

Disconnect Instead of Calling Rollback in Case of Stream Errors

Motivation: Clear Error-Handling

In case of a network error, it will be impossible to roll back anyway. Disconnecting without an explicit commit causes a rollback on the remote sqlutil service side in any case. Calling disconnect instead of rollback avoids extraneous error messages in local log files.

QorusSystemRestHelper rest = UserApi::getRemoteRestConnection(UserApi::getConfigItemValue("connection-name"));
DbRemoteSend stream(rest, "omquser", "remote_table_name", "insert");
on_error stream.disconnect();
on_success stream.commit();
...

Note: DbRemoteSend::rollback() and DbRemoteReceive::rollback() disconnect by default in newer versions of Qorus in any case.

Use BulkSqlUtil When Possible

Motivation: Performance

When inserting or upserting data, use the BulkInsertOperation and BulkUpsertOperation classes from the BulkSqlUtil module to perform bulk inserts or upserts; these classes use the bulk DML APIs to reach maximum performance.
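A sketch of a bulk insert, assuming a hypothetical datasource, table, and a `rows` list of row hashes; check the BulkSqlUtil module documentation for the exact constructor options in the version in use (transaction commit/rollback handling is omitted here):

```qore
%requires BulkSqlUtil

AbstractTable table = UserApi::getSqlTable("my-datasource", "my_table_name");
BulkInsertOperation insert_op(table);
# flush any remaining queued rows on success; discard them on error
on_success insert_op.flush();
on_error insert_op.discard();
# queue rows one at a time; bulk DML is executed as internal blocks fill
map insert_op.queueData($1), rows.iterator();
```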

Service Resources

Motivation: Code Maintainability

Services should use service file resources for file data; this particularly applies to services serving HTML pages (i.e. UI extensions, etc.) but also applies to services providing SOAP services (ex: WSDL and optionally XSD files).

Interface Testing

Motivation: Code Correctness

  • Tests for each interface should be written using the QUnit-based QorusInterfaceTest module
  • Test scripts should have the same name as the object being tested
  • Ensure you have good test coverage:
    • "happy day" case (when everything works properly) must be implemented
    • error-handling (negative) tests must also be implemented
  • Test scripts should be designed to run on any valid environment; in particular:
    • Configuration-dependent values in the interface should be derived in the same way in the test
      • No hardcoded paths - use file / directory connection objects (or generally the same logic used to derive the configuration information as the interface uses)
      • No hardcoded datasource names or other configuration information (same as above - use the same logic in the test as in the interface)
    • Review all queries for index usage - when tests are run on the customer's systems, DBs may be very large and a query requiring a full table scan that runs fast on a small development database may take a very long time on a large shared dev or test system
  • Test scripts should be executable and (as with all other executable Qore scripts) should use the following hash-bang line: #!/usr/bin/env qore; also parse options must be used as per Parse Options in Qorus User Code
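As a sketch, a workflow test script built on the QorusInterfaceTest module might look like the following; the class and workflow names are hypothetical, and the exact base-class constructor signature should be checked against the module documentation for the Qorus version in use:

```qore
#!/usr/bin/env qore
%new-style
%require-types
%strict-args
%enable-all-warnings

%requires QorusInterfaceTest

%exec-class MyWorkflowTest

class MyWorkflowTest inherits QorusWorkflowTest {
    constructor() : QorusWorkflowTest("MY-WORKFLOW", "1.0", \ARGV) {
        addTestCase("happy day", \happyDayTest());
        addTestCase("negative", \negativeTest());
        set_return_value(main());
    }

    happyDayTest() {
        # create a test order with valid data and assert that it
        # reaches the COMPLETE status
    }

    negativeTest() {
        # create a test order with invalid data and assert that the
        # expected workflow error is raised
    }
}
```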

Unit Testing

  • You should write unit tests while developing any code
  • Unit tests can be written using the QMock module
  • Test scripts should have the same name as the object being tested, ending with .unit.qtest
  • Ensure you have good test coverage:
    • "happy day" case (when everything works properly) must be implemented
    • error-handling (negative) tests must also be implemented
  • Unit tests must NOT be dependent on any environment, datasource, filesystem, etc.; use the QMock module to mock all necessary APIs
  • Test scripts should be executable and (as with all other executable Qore scripts) should use the following hash-bang line: #!/usr/bin/env qore; also parse options must be used as per Parse Options in Qorus User Code
