Datastore that stores data into a local database. Packs data into keyframes for faster access. Optimized for storing values that change at different intervals.

Configuring the Datastore

DBName: Name of the main database file (or full path). Can be a simple file name (e.g. 'Log.db' - the file is placed next to the application executable), a relative path (e.g. '../Databases/Log.db') or an absolute path (e.g. 'C:\Databases\log.db').
DaysOfHistory: Entries in the database older than the specified number of days are continuously deleted. Use 0.0 for no limit.
SizeLimitGB: When the limit is reached, the logger will start deleting older entries. Use 0.0 for no limit.

Note: While only one file name is specified by the DBName property, the data is actually split into multiple files, and more files are created as more items are added to the LoggedValues table of CDP Logger. If DBName is 'Log.db', files like 'Log0.db', 'Log1.db' etc. may be created.


KeepHistory

Use this to create a second database with low resolution data. When data is compacted, min and max values are kept, so you can still see signal peaks. This is useful for saving disk space: for example, CDP Logger may log at 100 Hz and keep the log for 1 day, while KeepHistory keeps a log with 1 Hz resolution for a month.

Note: Only one row should be added under the KeepHistory table in Configure mode. Otherwise the first one is used and the others are ignored.

Name: Friendly name.
LogResolutionHz: Log resolution in the second database for long term logging.
DaysOfHistory: Entries in the database older than the specified number of days are continuously deleted. Use 0.0 for no limit.
SizeLimitGB: When the limit is reached, the logger will start deleting older entries. Use 0.0 for no limit.

If a KeepHistory element is added and the configuration is invalid, or extracting data to the second database fails for another reason, a CDPAlarm called ExtractorAlarm is set.

Viewing Logged Data

The preferred way to view logged data is to access it through the CDP Logger built-in server (see Viewing Data). This way the data from both the main database and the second low resolution KeepHistory database are combined.

The second option is to open the local database file directly using Load Data functionality in Analyze mode. See Historic Data for more info.

The third option is to open the local database file (specified by the DBName property) using the Database Graph Widget. Note that the KeepHistory functionality creates a second database that must be opened separately.

Reading the Datastore from Code

For more advanced use cases, a C++ API is provided for reading CDPCompactDatastore data from code. The classes are defined in the LogManager namespace. The following short example converts the logged data into CSV format.

First create a library as described in How to Create a Library manual. Next add a threaded component:

  • Right click on the library.
  • Select Add new... option in the context menu.
  • Select CDP from the left column.
  • Select Threaded Component Model from the middle column.
  • Click Choose...
  • Name the component LogConverter. Then click Next and Finish.

Then a CDP2SQL dependency must be added:

  • In Code mode, right click on your project and select Add Library....
  • In the wizard select CDP Library and click Next.
  • Select CDP2SQL. Then click Next and Finish.

Next open LogConverter.cpp file in Code mode. Add some includes and edit the void LogConverter::Main() method:

#include <CDPSystem/Application/Application.h>
#include <LogManager/LogManagerFactory.h>
#include <LogManager/LogReader.h>
#include <OSAPI/Timer/CDPTime.h>
#include <fstream>
#include <memory>

using namespace LogManager;


// Main() is executed in a background thread. It will not interfere with the CDP framework.
void LogConverter::Main()
{
  ConnectionInfo conInfo;
  conInfo.dbName = "/home/user/log.db";
  std::unique_ptr<LogReader> reader(LogManagerFactory().CreateReader(conInfo));

  auto span = reader->ReadXAxisSpan(); // Reads begin and end timestamp of log.
  auto nodes = reader->GetNodeInfo();  // Gets a list of all logged nodes.
  auto rows = reader->Read(ToIDList(nodes),  // Nodes to include in the result.
                           0,                // Number of rows in the result. 0 means no limit.
                           span.first,       // Start of the range (timestamp).
                           span.second);     // End of the range (timestamp).

  // Create the header of the CSV file.
  std::ofstream file("/home/user/log.csv", std::ofstream::out);
  file << "Timestamp";
  for (const auto& node : nodes)
    file << ',' << node.name; // Assumes the node info struct exposes the logged node name as 'name'.
  file << '\n';

  // Write all rows into the CSV file.
  for (const auto& row : rows)
  {
    file << CDPTime::ConvertGivenDoubleToTimeString(row.xAxis, false, CDP_ISO_8601_With_Ms);
    for (const auto& loggedValue : row.values)
      file << ',' << loggedValue.lastValue;
    file << '\n';
  }

  // All done. Close the application. (The exact shutdown call may vary between
  // CDP versions; suspending the application is assumed here.)
  RunInComponentThread([&] { Application()->Suspend(); });
}

To test the component:

  • Build the library.
  • Create a system project.
  • In Configure mode, add the created LogConverter component from Resource tree into your application.
  • Run the system. The application will stop automatically once logged data has been converted to CSV format.

Finally open the created CSV file to verify conversion was successful.

The classes used in this example are documented in the LogManager namespace.

Note: This example code reads the entire database into memory and then writes it into CSV. For large databases it is recommended to split the conversion into smaller batches to prevent the PC from running out of memory.

Creating a Backup

If CDP Logger is not running, making a backup simply means copying all the database files created by CDP Logger. If DBName is 'Log.db', make sure to also copy 'Log0.db', 'Log1.db' etc.

However, if the logger is running, simple copying might corrupt the database. If stopping the logger is not an option, a backup tool must be used. The CDPCompactDatastore internally uses multiple SQLite3 database files, which can be backed up with the SQLite3 command line tool, available from the SQLite website (https://www.sqlite.org/download.html):

sqlite3 Log.db ".backup 'BackupLog.db'"
sqlite3 Log0.db ".backup 'BackupLog0.db'"

Make sure to execute the command for every database file created by CDP Logger (except the temporary files, which have a -shm or -wal ending). The backup database 'BackupLog.db' can then be opened in Analyze mode. See Historic Data for more info.
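
The per-file commands can be wrapped in a small shell loop. This is a convenience sketch that assumes the database files sit in the current directory and sqlite3 is on the PATH:

```shell
# Back up every CDP Logger database file in the current directory.
# The Log*.db pattern naturally skips SQLite temporary files, which
# end in -shm or -wal rather than .db.
for db in Log*.db; do
  [ -f "$db" ] || continue  # nothing to back up
  sqlite3 "$db" ".backup 'Backup$db'"
done
```

Because the `.backup` command uses SQLite's online backup mechanism, each copy is consistent even while the logger keeps writing.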

© CDP Technologies AS - All rights reserved