Migrating from the old to the new storage system usually requires four distinct steps: migrating the models, the robots, the data, and any SQL scripts interacting with the data tables.
Change the model type from DatabaseOutputObject (Legacy) to OutputObject and remove the 6 hidden fields (robotId, robotRunId, refindKey, firstExtractionDate, latestExtractionDate, extractedInLatestRun). Consider which fields should be part of the ObjectKey.
The following steps must be completed to migrate your robots.
Replace Return Object with Store in Database, and configure it appropriately.
Replace Refind Object with Find in Database. Since Refind Object updates the object and stops execution if the object was found, the structure of the robot has to change slightly.
Here is a classic scenario using Refind Object. The object id is extracted (the only attribute used for the refindKey in this example), then the price is extracted (not part of the refindKey), and then the object is refound. If the object exists, the price is updated and the rest of the branch is skipped; if the object doesn't exist, the details page is loaded, the details are extracted, and the object is returned.
This should be converted to:
First we extract the id (an ObjectKey attribute), then we use Find in Database to check whether the object already exists. If it exists, the current record is loaded into our output object; we then extract the price on top of the previous value and store the object again. If the object was not found, Find in Database generates an error, and "Try Next Alternative" error handling sends execution into the lower branch, where we extract the price, load and extract the details, and then store the object.
The new approach may require an extra Extract Price step, but it is much easier to see what happens, and there is no hidden Refind Object magic.
If your robot has any Query Database or Execute SQL steps which query the storage tables, these may need to be updated as well.
If you have small amounts of data, or you collect all the data every time your robots run, the easiest approach is simply to delete (drop) the existing tables and create new tables from the updated model objects.
If you have large amounts of data, or need the historic data from previous runs, you have to convert your existing data tables into the new table layout. It is possible to use tools from your database vendor to add, remove, and rename fields, but this is not recommended. The best solution is to use a robot to load the old data and store it in the new format.
To do this, create a table from your updated output object; you will have to rename the original data table unless you have given the new version of the object a different name. Then create a robot which loads the existing data using the Query Database step action and stores the objects using Store in Database.
If you require the data history, such as the first and last extracted dates, you can extract these from the old row and use the Execute SQL step to update the object after it has been stored.
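The migration robot described above can be sketched in plain SQL. The following sketch uses Python's sqlite3 module so the statements can be run anywhere; every table and column name is illustrative (your real schema, and the names of the new timestamp columns, will differ), and the legacy hidden-field names come from the model description earlier in this section:

```python
import sqlite3

# Illustrative schemas only - substitute the names from your own tables.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE products_old (
    id TEXT, price REAL,
    robotId INTEGER, robotRunId INTEGER, refindKey TEXT,
    firstExtractionDate TEXT, latestExtractionDate TEXT,
    extractedInLatestRun INTEGER
);
-- New layout: ObjectKey plus the data fields. The history columns are
-- assumed to be called FirstExtracted/LastExtracted for this sketch.
CREATE TABLE products (
    ObjectKey TEXT PRIMARY KEY, id TEXT, price REAL,
    FirstExtracted TEXT, LastExtracted TEXT
);
""")
con.execute("INSERT INTO products_old VALUES "
            "('A1', 9.95, 7, 42, 'k-7-A1', '2010-01-01', '2010-06-01', 1)")

# Step 1: what the migration robot does - load each old row
# (Query Database) and store it in the new layout (Store in Database).
for id_, price, first, last in con.execute(
        "SELECT id, price, firstExtractionDate, latestExtractionDate "
        "FROM products_old"):
    con.execute("INSERT INTO products (ObjectKey, id, price) VALUES (?, ?, ?)",
                (id_, id_, price))
    # Step 2: what the Execute SQL step does - restore the history
    # columns, which storing the object would otherwise set to 'now'.
    con.execute("UPDATE products SET FirstExtracted = ?, LastExtracted = ? "
                "WHERE ObjectKey = ?", (first, last, id_))

print(con.execute(
    "SELECT ObjectKey, price, FirstExtracted FROM products").fetchall())
```

In the real migration the loop body is performed by the robot's steps rather than by a script; the sketch only shows the SQL shape of the load, store, and history-restoring update.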
If you have exposed the refindKey outside of your Kapow environment, you probably need to track which ObjectKey maps to each existing refindKey, and then provide the outside world with a mapping table that allows existing refindKeys to be converted into the new ObjectKeys. This is important because you cannot always generate an ObjectKey identical to the previous refindKey, since the robotId was often part of the refindKey calculation.
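Such a mapping table can be very simple. The sketch below (again via sqlite3, with a hypothetical table name and values) shows the shape: one row per migrated object, populated during the migration, which external consumers query to resolve a legacy key:

```python
import sqlite3

# Hypothetical mapping table for translating legacy refindKeys into the
# new ObjectKeys. The table name and key values are illustrative.
con = sqlite3.connect(":memory:")
con.execute("""
CREATE TABLE key_mapping (
    refindKey TEXT PRIMARY KEY,
    ObjectKey TEXT NOT NULL
)""")

# Populated while migrating: each old row contributes its refindKey and
# the ObjectKey computed for the corresponding new row.
con.execute("INSERT INTO key_mapping VALUES ('k-7-A1', 'A1')")

# An external system resolves a legacy key like this:
row = con.execute("SELECT ObjectKey FROM key_mapping WHERE refindKey = ?",
                  ('k-7-A1',)).fetchone()
print(row[0])
```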
It is not possible to give a generic rule for how to migrate external SQL scripts, since the possibilities and complexity of such scripts are virtually endless. Instead, we will try to give some guidelines.
Moving production data out of the harvest database
It is not a good idea to run user queries against the harvest tables (the tables the robots collect into); instead, you should maintain a separate set of tables for the production data. These tables will contain the refindKey/ObjectKey, but not necessarily any of the other hidden fields. The production tables should usually be updated each time a robot has executed successfully.
Previously, the rows to move were often identified by using the robotRunId column to find the most recent run, but this column no longer exists. Instead, there is now a column named LastUpdated, which contains a timestamp of when the row was last updated. All you need to do is find the rows which have been updated since you last moved data out of the harvest tables. If you have multiple robots harvesting data into the same table, you now need to use the robotName, not the robotId, to distinguish data from different robots.
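The selection described above can be sketched as a single query. In this sqlite3 sketch, the LastUpdated and robotName columns come from the text; the table name, the other columns, and the sample data are assumptions for illustration:

```python
import sqlite3

# Illustrative harvest table - only LastUpdated and robotName are taken
# from the documented layout; everything else is made up for the sketch.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE harvest (
    ObjectKey TEXT, robotName TEXT, price REAL, LastUpdated TEXT)""")
con.executemany("INSERT INTO harvest VALUES (?, ?, ?, ?)", [
    ('A1', 'PriceRobot', 9.95,  '2024-05-01 10:00:00'),
    ('B2', 'PriceRobot', 19.50, '2024-05-02 09:30:00'),
    ('C3', 'OtherRobot', 5.00,  '2024-05-02 09:45:00'),
])

# Timestamp recorded when data was last moved to the production tables.
last_sync = '2024-05-01 12:00:00'

# Rows changed by one specific robot since the last sync: filter on
# robotName (not robotId, which no longer exists) and LastUpdated.
rows = con.execute(
    "SELECT ObjectKey, price FROM harvest "
    "WHERE robotName = ? AND LastUpdated > ?",
    ('PriceRobot', last_sync)).fetchall()
print(rows)
```

After moving the selected rows into the production tables, store the current timestamp as the new `last_sync` value for the next run.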
Scripts against RoboManager tables
The new database log offers superior information compared to the old RoboManager logs. Some of it (such as the logging of input objects) is for internal use, but the rest is viewable in the log view of the Management Console. If you have scripts containing queries against the RobotRun or Message tables in RoboManager, these will need to be rewritten to match the new logging layout.
Most significantly, the robotId and robotRunId have been replaced with RobotName and ExecutionId. The advent of the ExecutionId allows you to trace a specific run or message back to an execution created through the API, something that was not previously possible. The new ROBOT_RUN table also contains a number of extra statistics, such as queue time, execution time, and total runtime.
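As a rough illustration of how such a rewritten log query might look: the ROBOT_RUN table name and the RobotName/ExecutionId columns are taken from the text above, but the statistics column names and sample values in this sqlite3 sketch are assumptions and must be checked against your actual log schema:

```python
import sqlite3

# Sketch of the new log layout. Only ROBOT_RUN, RobotName and ExecutionId
# are documented above; the remaining column names are illustrative.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE ROBOT_RUN (
    RobotName TEXT, ExecutionId TEXT,
    QueueTime INTEGER, ExecutionTime INTEGER, TotalTime INTEGER)""")
con.execute("INSERT INTO ROBOT_RUN VALUES "
            "('PriceRobot', 'exec-123', 500, 12000, 12500)")

# Old scripts selected on robotId/robotRunId; the equivalent lookup now
# keys on RobotName and the ExecutionId returned when starting a run
# through the API.
run = con.execute(
    "SELECT ExecutionTime FROM ROBOT_RUN "
    "WHERE RobotName = ? AND ExecutionId = ?",
    ('PriceRobot', 'exec-123')).fetchone()
print(run[0])
```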