1. Introduction
This page gives example criteria for code reviews to ensure high-quality results with Qorus Integration Engine.
2. Architecture
2.1. Naming Convention
Ensure that all objects correspond to the documented naming conventions.
2.2. Failsafe Interface Implementation
This is a critical item; Qorus Integration Engine facilitates the development and operation of fault-tolerant business integration processes, and failsafe design is critical to the long-term value of IT solutions developed on the platform. Failure to implement a failsafe design will directly lead to higher operational costs through the need for expensive manual interventions by operations staff, as well as to lower process quality.
For workflows, this means that every step that cannot be repeated must have a validation function. Additionally, no more than one non-repeatable action can occur in a single workflow step. See Designing and Implementing Workflows for more information about failsafe workflow design.
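For illustration, the following is a minimal sketch of a non-repeatable step with a corresponding validation function; the "payment-api" REST connection, the static data keys, and the step wiring in the workflow definition are assumptions, and the exact step API can differ between Qorus versions:
# primary step logic: performs exactly one non-repeatable action
createInvoice() {
    hash<auto> sd = WorkflowApi::getStaticData();
    # "payment-api" is a hypothetical REST user connection
    RestClient rest = UserApi::getUserConnection("payment-api");
    rest.post("/invoices", {"invoice_no": sd.invoice_no, "amount": sd.amount});
}

# validation logic: called when Qorus cannot tell whether the primary action
# already ran (e.g. after a crash or network timeout); it checks the target
# system and returns the appropriate step status constant
string validateCreateInvoice() {
    hash<auto> sd = WorkflowApi::getStaticData();
    RestClient rest = UserApi::getUserConnection("payment-api");
    try {
        # if the invoice can be retrieved, the primary action already completed
        rest.get("/invoices/" + sd.invoice_no);
        return OMQ::StatComplete;
    } catch (hash<ExceptionInfo> ex) {
        # not found: the action did not complete, so the step should be retried
        return OMQ::StatRetry;
    }
}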
Services can also be implemented in a failsafe manner; see ServiceApi::saveStateData() for more information.
Jobs can also be implemented in a failsafe manner; see JobApi::saveStateData() for more information.
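For example, here is a minimal sketch of failsafe state handling in a job's run logic, assuming a hypothetical "billing" datasource and "inbox" table; the same pattern applies to services with ServiceApi::saveStateData():
# restore any previously saved state (NOTHING on the first run)
*hash<auto> state = JobApi::getStateData();
int last_id = state.last_id ?? 0;

# select only records that have not been processed by a previous run
AbstractDatasource ds = UserApi::getDatasourcePool("billing");
*list<auto> rows = ds.selectRows("select id, payload from inbox where id > %v order by id", last_id);

foreach hash<auto> row in (rows) {
    UserApi::logInfo("processing record %d", row.id);
    # ... process the record here ...

    # persist progress after each record, so that a restart after a crash
    # resumes here instead of reprocessing data that was already handled
    JobApi::saveStateData({"last_id": row.id});
}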
2.3. Interface Components Match Expected Systems
When dealing with a complex integrated solution, ensure that the components are configured for the correct systems.
2.4. Shared Objects Defined Correctly
The proper use of shared code objects is important for avoiding regression errors when the shared code is updated in the future. See Designing Shared Interface Components for details.
2.5. Use of Mappers and Value Maps
Mappers provide powerful data transformation constructs as system configuration. By using a Qorus system mapper, interfaces take advantage of configuration over coding which improves the transparency of the code and reduces long-term maintenance costs.
Value Maps are also an example of configuration over coding, and in the same way provide a high level of transparency, reduced maintenance costs, and increased business flexibility regarding interface changes.
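For example, a sketch of using a system mapper from interface code (the mapper name "order-to-erp" and the input record are assumptions):
# retrieve the named mapper defined in Qorus system configuration
Mapper mapper = UserApi::getMapper("order-to-erp");

# transform one input record according to the mapper's configured field mappings
hash<auto> erp_record = mapper.mapData({
    "order_no": "SO-1001",
    "customer": "ACME",
    "total": 150.0
});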
2.6. Connection Objects
Interfaces should always use Connections because Qorus connections allow for configuration and operational transparency as well as automatic external dependency tracking; see User Development Guidelines and Best Practices: Connections for more information.
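For example, a sketch using a hypothetical REST user connection named "crm":
# acquire the connection object managed by Qorus; for a REST connection this is
# a configured RestClient, and the external dependency is tracked automatically
RestClient crm = UserApi::getUserConnection("crm");

# the URL and credentials come from the connection definition rather than
# being hard-coded in the interface
hash<auto> resp = crm.get("/api/customers/1001");
UserApi::logInfo("customer data: %y", resp.body);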
2.7. Schema Modules
Database objects delivered with Qorus should always be delivered as Schema Modules; schema modules allow for easy installation, maintenance, and management of database objects in user schemas in Datasources managed by Qorus.
See Schema Management for details.
2.8. Service Resources
When a service requires configuration files or files supporting HTTP or SOAP service functionality implemented by the service, the service should use Service File Resources to manage those resources.
See User Development Guidelines and Best Practices: Service Resources for more information.
2.9. Transaction Management
Any database access needs to be reviewed for correct / atomic transaction management; generally only one transaction in each database should be performed in a workflow step, for example.
A typical error is to define a method such as the following:
doDb(AbstractDatasource ds) {
    on_error ds.rollback();
    on_success ds.commit();
    ds.exec(....);
}
If this method is then inadvertently called from code that is already in a transaction, it erroneously commits that transaction prematurely, which can lead to inconsistent data in the database if errors occur later in the step's execution.
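A safer pattern is to keep transaction management at a single level owned by the caller and to keep lower-level DB helpers free of commits and rollbacks, as in the following sketch (the "orders" table and method names are assumptions):
# lower-level helper: performs DML only, with no transaction management
updateOrderStatus(AbstractDatasource ds, softint order_id, string status) {
    ds.exec("update orders set status = %v where order_id = %v", status, order_id);
}

# the owner of the transaction commits or rolls back exactly once
processOrder(AbstractDatasource ds, softint order_id) {
    on_error ds.rollback();
    on_success ds.commit();

    updateOrderStatus(ds, order_id, "PROCESSING");
    # ... further DML in the same atomic transaction ...
    updateOrderStatus(ds, order_id, "DONE");
}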
3. Code
3.1. Logging and Debug Logging
Ensure that logging is appropriate for production, particularly that appropriate log levels are used so that production use will not be subject to debug logging.
Very detailed technical or data logging should use UserApi::logDebug() or higher as in the following example:
UserApi::logDebug("input data: %N", data);
3.2. Error Handling
Ensure that all errors are properly handled; in Qorus steps, for example, it's normally desirable to allow an exception to be raised and to use the exception's error string as the error code.
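For example, a sketch of step logic where the raised exception's error string ("INVALID-ORDER") becomes the workflow error code; the check itself is an assumption:
checkOrder() {
    hash<auto> sd = WorkflowApi::getStaticData();
    # raising this exception sets the step to ERROR with the error code
    # "INVALID-ORDER" and the formatted string as the error description
    if (!sd.order_no)
        throw "INVALID-ORDER", sprintf("order data is missing an order number: %y", sd);
}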
3.3. No Use of Deprecated APIs
Self-explanatory; see User Development Guidelines and Best Practices: Do Not Use Deprecated APIs.
3.4. Parse Options
Ensure that standard parse options are used; see User Development Guidelines and Best Practices: Parse Options in Qorus User Code for details.
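For example, user code files typically begin with directives such as:
# standard strict parse options for Qorus user code
%new-style
%strict-args
%require-types
%enable-all-warnings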
3.5. Object Versioning / Workflow Recoverability
Ensure that any workflow structure or step changes that cause a compatibility break are made together with a corresponding update to the workflow's version number, so that a new workflowid is issued.
See Workflow Upgrades, Bug Fixes, and Recovery Compatibility for more information.
3.6. Workflow and Service Autostart
Workflows should normally have at least an autostart value of 1 to ensure that the system always keeps the workflow running as long as its external dependencies are available.
The autostart attribute of workflows is considered an "operational" attribute; Qorus operational teams "own" this attribute. If it is changed through an API call, then subsequent oloads of the same workflow will not cause the value to change; the value set through the API takes precedence.
3.7. System Property Usage
Ensure that system properties used for interface configuration are used consistently. Additionally, system properties should be acquired on demand with prop_get(), so that changes to system properties take effect immediately in the affected interfaces.
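For example (the property domain and key names are assumptions):
# read the property on demand each time it is needed, so that operational
# changes take effect immediately without reloading the interface
softint batch_size = prop_get("my-interface", "batch-size") ?? 100;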
3.8. Workflow Order Keys
The proper usage of Workflow Order Keys is important for technical and business operations.
Order keys should be non-detail values that can be used to quickly locate Qorus interfaces that have processed order data. When an interface processes large amounts of data, order keys should be high-level information and not detail-level values. For example, if an interface processes orders with thousands of detail lines, the order number should be used as a workflow key, but line identifiers should not.
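For example, when creating a workflow order, the order number can be set as a searchable order key while the detail lines stay in the static data only; this sketch assumes the UserApi::createOrder() API and hypothetical key names, and the exact signature can vary by Qorus version:
UserApi::createOrder("SALES-ORDER", NOTHING, {
    "staticdata": {
        "order_no": "SO-1001",
        # detail lines (possibly thousands) belong in static data, not in order keys
        "lines": ({"line": 1, "sku": "A-100"}, {"line": 2, "sku": "B-200"})
    },
    # the high-level business key is searchable in the UI and via the API
    "orderkeys": {"order_no": "SO-1001"}
});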
3.9. No Dynamic SQL
Dynamic SQL can cause problems with DB servers' statement caches and should therefore be avoided; it can be eliminated by using bind by value and SqlUtil for SQL DML operations.
Eliminating dynamic SQL is most important with Oracle; see the Oracle Bind by Value Howto.
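For example, the same update written as dynamic SQL and with bind by value, assuming an AbstractDatasource ds, an SqlUtil table object orders_table, and local variables status and order_id:
# dynamic SQL (avoid): the value is concatenated into the statement text, so
# every distinct value produces a new statement in the server's statement cache
ds.exec(sprintf("update orders set status = '%s' where order_id = %d", status, order_id));

# bind by value (preferred): the statement text is constant and values are bound
ds.exec("update orders set status = %v where order_id = %v", status, order_id);

# with SqlUtil the DML is generated with bound values automatically
orders_table.update({"status": status}, {"order_id": order_id});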
3.10. SQL Scalability / Indexing
Any SQL that will later be used on large DBs should be verified for scalability with large data volumes, in particular proper (but not excessive) indexing should be used.
The typical scenario is that interfaces are tested and work perfectly well on small tables with only a little data, and then after some time in production start showing very poor performance due to the lack of appropriate indexing. It's less expensive to verify correct indexing during development and testing than to have to react to problems in production.
4. Maintainability
4.1. Library Object Documentation
Library objects should be very well defined and documented, because they will be used by multiple interfaces (and presumably multiple interface authors).
4.2. Code Comments
Code should be appropriately commented to enable future support and maintenance from developers who are not necessarily the original authors.
4.3. Formatting
Code formatting should correspond to any established code formatting guidelines.
4.4. Author Labels
Author labels are important to trace development back to the original authors; while authors can also be traced through source and revision management systems, author labels are visible directly in the system UI.
See User Development Guidelines and Best Practices: Author Labels for more information.
5. Release and Documentation
5.1. Release check
Every release should be verified to ensure that it installs cleanly and contains all the required code and configuration.
5.2. Interface Tests
Writing QUnit and QorusInterfaceTest-based tests is important during development, test, and for long-term maintenance.
See User Development Guidelines and Best Practices: Interface Testing for more information.
5.3. Documentation
Appropriate documentation for interfaces is a must for operational handover. Interface documentation should cover:
- The functionality of the interface; execution steps
- Dependencies on other interfaces
- Interface options
- Workflow order keys
6. Performance and Memory
6.1. Bulk DML and Data Streaming
Bulk DML is the approach used by Qore and Qorus to process large amounts of SQL data in one server round trip. By default, Qore and Qorus use a block size of 1000 rows, meaning that data for 1000 SQL rows is sent to the server in each SQL network command, which results in very efficient network communication with the database server and therefore much higher performance than the classic "1 row at a time" DML approach.
Furthermore, bulk DML when combined with the Data Stream protocol (particularly with the sqlutil service and the DbRemoteReceive and DbRemoteSend classes) allows for very high volumes of database data to be transferred even over international network links in a very efficient manner.
See Bulk DML Processing and Data Streaming for details.
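As a hedged sketch of bulk inserts using the Qore BulkSqlUtil module (the "omquser" datasource, the "new_orders" table, and the row data are assumptions; see the linked documentation for the streaming classes and the exact APIs):
%requires SqlUtil
%requires BulkSqlUtil

AbstractDatasource ds = UserApi::getDatasourcePool("omquser");
SqlUtil::AbstractTable table = (new Table(ds, "new_orders")).getTable();

list<auto> rows = (
    {"order_no": "SO-1001", "status": "NEW"},
    {"order_no": "SO-1002", "status": "NEW"}
);

BulkInsertOperation insert_op(table);
on_success { insert_op.flush(); ds.commit(); }
on_error { insert_op.discard(); ds.rollback(); }

# rows are buffered and sent to the server in blocks (1000 rows by default)
map insert_op.queueData($1), rows;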
6.2. SAX-Style Parsing / Iteration
Large amounts of data should be processed "SAX-style" (i.e. piecewise) instead of "DOM-style" (all at once), since parsing a very large file in memory all at once would be prohibitively expensive in terms of memory.
Examples of classes allowing for "SAX-style" data manipulation include Qore's SQLStatement, FileLineIterator, and XmlReader classes, as well as the iterator classes provided by the CsvUtil and FixedLengthUtil modules.
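For example, a sketch of piecewise processing of a large file with FileLineIterator:
# iterate a potentially huge file one line at a time; memory use stays constant
# regardless of the file size ("/tmp/big-input.csv" is an example path)
FileLineIterator it("/tmp/big-input.csv");
while (it.next()) {
    string line = it.getValue();
    # ... process the single line here, then let it go out of scope ...
}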