CDPCompactDatastore
Introduction
Datastore that stores data in a local database. Packs data into keyframes for faster access. Optimized for storing numeric values changing with different intervals.
Features
- High-performance logging to a local file. No need to set up an external database server.
- Very little configuration is needed and all of it can be done through CDP Studio.
- Good integration with CDP Studio Analyze mode and the Database Graph Widget. See Viewing Logged Data for more information.
- Compact data format that does not repeat unchanged values.
- The data format is optimized for plotting large amounts of numeric data while keeping peak values visible. For that, special keyframes are periodically created that contain the minimum and maximum of every logged value in a given period.
- The KeepHistory functionality can be used to create a second low-resolution database to keep a long-term log.
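The min/max keyframe idea can be sketched as follows. This is a simplified illustration in plain C++, not the actual CDPCompactDatastore storage format; the Downsample function and the bucket size are hypothetical:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical sketch: reduce a high-rate signal to one {min, max} pair
// per fixed-size bucket so that short peaks remain visible when a long
// time range is plotted at low resolution.
struct MinMax { double min; double max; };

std::vector<MinMax> Downsample(const std::vector<double>& samples, std::size_t bucketSize)
{
    std::vector<MinMax> result;
    for (std::size_t i = 0; i < samples.size(); i += bucketSize)
    {
        const std::size_t end = std::min(i + bucketSize, samples.size());
        MinMax mm{samples[i], samples[i]};
        for (std::size_t j = i + 1; j < end; ++j)
        {
            mm.min = std::min(mm.min, samples[j]);
            mm.max = std::max(mm.max, samples[j]);
        }
        result.push_back(mm);
    }
    return result;
}
```

A plotting client can then draw the min and max of each bucket instead of every raw sample, so a brief spike is never averaged away.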
Limitations
- Uses a custom data format that cannot be processed by external tools unless first converted to a more suitable format. For example, the CDP Studio Analyze mode can open the CDPCompactDatastore and export the data to CSV format. See Viewing Logged Data for more info.
- While string values are supported, it is not suitable for logging large amounts of text data. A single row containing all changed values can contain up to 64 kB of data. If the row is larger, an alarm will be set and the row will be skipped.
Configuring the Datastore
Property | Description |
---|---|
DBName | Name of the main database file (or full path). Can be a simple file name (e.g. 'Log.db' - file is placed next to the application executable), a relative path (e.g. '../Databases/Log.db') or an absolute path (e.g. 'C:\Databases\log.db'). |
DaysOfHistory | Entries in the database older than specified days are continuously deleted. Use 0.0 for no limit. |
SizeLimitGB | When the limit is reached, the logger will start deleting older entries. Use 0.0 for no limit. |
Note: While the DBName property specifies only one file name, the data is actually split into multiple files, and more files are created as more items are added to the LoggedValues table of CDP Logger. If DBName is 'Log.db', files like 'Log0.db', 'Log1.db' etc. might be created.
KeepHistory
Use this to create a second database with low-resolution data. When data is compacted, min and max values are kept, so you can still see signal peaks. This is useful for saving disk space. For example, CDP Logger may log with 100 Hz and keep the log for 1 day and KeepHistory may keep the log with 1 Hz resolution for a month.
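To see why this saves disk space, here is a rough row-count calculation for the example above. The RowCount helper is hypothetical and assumes one logged row per cycle; actual database size also depends on how many values change per row:

```cpp
#include <cstdint>

// Hypothetical helper: number of logged rows produced at a given rate
// over a given period, assuming one row per logging cycle.
std::int64_t RowCount(double hz, double days)
{
    return static_cast<std::int64_t>(hz * days * 24 * 60 * 60);
}
```

Under these assumptions, 100 Hz for 1 day gives 8 640 000 rows, while 1 Hz for 30 days gives only 2 592 000 rows, so the month-long KeepHistory database is smaller than the one-day main database.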
Note: Only one row should be added under the KeepHistory table in Configure mode. Otherwise, only the first one is used and the others are ignored.
Property | Description |
---|---|
Name | Friendly name |
LogResolutionHz | Log resolution in the second database for long-term logging |
DaysOfHistory | Entries in the database older than specified days are continuously deleted. Use 0.0 for no limit. |
SizeLimitGB | When the limit is reached, the logger will start deleting older entries. Use 0.0 for no limit. |
If the KeepHistory element is added and its configuration is invalid, or extracting data to the second database fails for another reason, a CDPAlarm called ExtractorAlarm is set.
Advanced Configuration
When the Hide Internal Items filter is disabled, one additional property of CDPCompactDatastore becomes visible. It is normally not necessary to change it.
Property | Description |
---|---|
RowsPerKeyframe | While normally only changed values are logged, after this many rows a keyframe is written that repeats unchanged values (needed for faster reading). Use 20 when all values change every cycle and writing to an HDD. Increase when logged values change rarely or when using a faster SSD. Note that the change is only applied after the database is recreated. |
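The interplay between delta rows and keyframes can be illustrated with a simplified sketch. This is not the actual CDPCompactDatastore file format; the DeltaLog class and its members are hypothetical:

```cpp
#include <cstddef>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch: log only changed values, but after every
// rowsPerKeyframe rows emit a keyframe that repeats all current values,
// so a reader can start from the nearest keyframe instead of replaying
// the whole log from the beginning.
struct Row
{
    bool keyframe;
    std::map<std::string, double> values; // Full set for keyframes, changed values otherwise.
};

class DeltaLog
{
public:
    explicit DeltaLog(std::size_t rowsPerKeyframe) : rowsPerKeyframe_(rowsPerKeyframe) {}

    void Append(const std::map<std::string, double>& current)
    {
        Row row;
        row.keyframe = rows_.empty() || rowsSinceKeyframe_ >= rowsPerKeyframe_;
        if (row.keyframe)
        {
            row.values = current;
            rowsSinceKeyframe_ = 0;
        }
        else
        {
            for (const auto& kv : current)
                if (state_[kv.first] != kv.second)
                    row.values[kv.first] = kv.second; // Store only the delta.
        }
        state_ = current;
        ++rowsSinceKeyframe_;
        rows_.push_back(std::move(row));
    }

    const std::vector<Row>& Rows() const { return rows_; }

private:
    std::size_t rowsPerKeyframe_;
    std::size_t rowsSinceKeyframe_ = 0;
    std::map<std::string, double> state_;
    std::vector<Row> rows_;
};
```

A larger rowsPerKeyframe means fewer repeated values (smaller file) but more delta rows to replay before an arbitrary read position, which matches the HDD/SSD guidance above.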
Viewing Logged Data
The preferred way to view logged data is to access it through CDP Logger's built-in server (see Viewing Data). This way, data from both the main database and the second low-resolution KeepHistory database is combined.
The second option is to open the local database file directly using Load Data functionality in Analyze mode. See Historic Data for more info.
The third option is to open the local database file (specified by DBName property) using Database Graph Widget. Note that the KeepHistory functionality creates a second database that must be opened separately.
Reading the Datastore from Code
For more advanced use cases, a C++ API is provided for reading CDPCompactDatastore data from code. The classes are defined in the LogManager namespace. The following is a short example that converts the logged data into CSV format.
First, create a library as described in How to Create a Library manual. Next, add a threaded component:
- Right-click on the library.
- Select Add New... option in the context menu.
- Select CDP from the left column.
- Select Threaded Component Model from the middle column.
- Click Choose...
- Name the component LogConverter. Then click Next and Finish.
Then a CDP2SQL dependency must be added:
- In Code mode, right-click on your project and select Add Library....
- In the wizard select CDP Library and click Next.
- Select CDP2SQL. Then click Next and Finish.
Next, open the LogConverter.cpp file in Code mode. Add some includes and edit the void LogConverter::Main() method:
#include <CDPSystem/Application/Application.h>
#include <LogManager/LogManagerFactory.h>
#include <LogManager/LogReader.h>
#include <OSAPI/Timer/CDPTime.h>
#include <fstream>

using namespace LogManager;

...

// Main() is executed in a background thread. Will not interfere with the CDP framework.
void LogConverter::Main()
{
  ConnectionInfo conInfo;
  conInfo.dbName = "/home/user/log.db";
  std::unique_ptr<LogReader> reader(LogManagerFactory().CreateReader(conInfo));
  auto span = reader->ReadXAxisSpan(); // Reads begin and end timestamp of log.
  auto nodes = reader->GetNodeInfo();  // Gets a list of all logged nodes.
  auto rows = reader->Read(ToIDList(nodes), // Nodes to include in the result.
                           0,               // Number of rows in the result. 0 means no limit.
                           span.first,      // Start of the range (timestamp).
                           span.second);    // End of the range (timestamp).

  // Create the header of the CSV file.
  std::ofstream file("/home/user/log.csv", std::ofstream::out);
  file << "Timestamp";
  for (const auto& node : nodes)
    file << ',' << node.name;
  file << '\n';

  // Write all rows into the CSV file.
  for (const auto& row : rows)
  {
    file << CDPTime::ConvertGivenDoubleToTimeString(row.xAxis, false, CDP_ISO_8601_With_Ms);
    for (const auto& loggedValue : row.values)
      file << ',' << loggedValue.lastValue;
    file << '\n';
  }

  // All done. Close the application.
  RunInComponentThread([&] { Application::GetInstance()->Suspend(); });
}
To test the component:
- Build the library.
- Create a system project.
- In Configure mode, add the created LogConverter component from the Resource tree into your application.
- Run the system. The application will stop automatically once logged data has been converted to CSV format.
Finally, open the created CSV file to verify conversion was successful.
The classes used in this example are documented in the LogManager namespace.
Note: This example code reads the entire database into memory and then writes it into CSV. For large databases it is recommended to split the conversion into smaller batches to prevent the PC from running out of memory.
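One way to batch the conversion is to split the span returned by ReadXAxisSpan() into fixed-length windows and issue one Read() call per window, so only one batch is in memory at a time. The SplitSpan helper and the window length below are illustrative, not part of the LogManager API:

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Hypothetical helper: split the time range [begin, end) into windows of
// windowSeconds each (the last window may be shorter). Each window's
// begin/end would then be passed as the start/end timestamps of a
// separate reader->Read(...) call.
std::vector<std::pair<double, double>> SplitSpan(double begin, double end, double windowSeconds)
{
    std::vector<std::pair<double, double>> windows;
    for (double t = begin; t < end; t += windowSeconds)
        windows.emplace_back(t, std::min(t + windowSeconds, end));
    return windows;
}
```

After processing each window, the rows for that window can be written to the CSV file and discarded before the next Read() call, keeping peak memory usage roughly proportional to one window instead of the whole log.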
Creating a Backup
If CDP Logger is not running, making a backup simply means copying all the database files created by CDP Logger. If DBName is 'Log.db', make sure to also copy 'Log0.db', 'Log1.db' etc.
However, if the logger is running, then simply copying the files might corrupt the database. If stopping the logger is not an option, a tool must be used for the backup. By default, a dbextractor tool is deployed with the CDP Logger which can be used for this purpose. It is located in the tools folder.
./tools/dbextractor "Log.db" --destination BackupLog.db
Note: In addition to taking a full backup, the dbextractor tool can limit the time range of the extraction, include only some of the logged nodes, and write the data in a lower resolution to conserve disk space. See the DBExtractor Tool manual for more details.
The backup database 'BackupLog.db' can then be opened by Analyze mode. See Historic Data for more info.