Data Flows are one of the key components of the Accumine platform. They are responsible for converting raw data into meaningful information. With the exception of custom integrations and historical modifications of data, all data in the Accumine platform passes through data flows.
Consider a scenario where a SensorBot is installed in a machine panel. The SensorBot captures a single input, "exhaustplc", which reads one value when the machine is not running and another value when it is running.
Input (exhaustplc): Signal from SensorBot that has a voltage of 0 when the machine is not running and has a voltage of 24 when the machine is running.
Transform (Map): When Input is 24V, create a value of TRUE and when Input is 0V, create a value of FALSE.
Output (Accumine Database): an In-Cycle state starts when the Transform is TRUE and ends when it is FALSE. Conversely, a Downtime state starts when the Transform is FALSE and ends when it is TRUE.
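The three steps above can be sketched in code. This is a minimal illustration, not the actual platform implementation: the function and class names (`map_transform`, `StateWriter`) are invented for this example, and the real output step writes records to the Accumine database rather than returning strings.

```python
from dataclasses import dataclass
from typing import Optional

def map_transform(voltage: float) -> bool:
    """Transform (Map) step: 24 V -> TRUE (running), 0 V -> FALSE (not running)."""
    return voltage >= 24.0

@dataclass
class StateWriter:
    """Output step: starts a new In-Cycle or Downtime state on each transition."""
    current_state: Optional[str] = None

    def process(self, running: bool) -> Optional[str]:
        new_state = "In-Cycle" if running else "Downtime"
        if new_state != self.current_state:
            # Transition: the previous state ends and a new state record starts
            self.current_state = new_state
            return new_state
        return None  # no transition; the current state record continues

# Feed a stream of raw voltage readings from "exhaustplc" through the flow
writer = StateWriter()
readings = [0, 0, 24, 24, 0]
events = [writer.process(map_transform(v)) for v in readings]
# events -> ["Downtime", None, "In-Cycle", None, "Downtime"]
```

Note that only transitions produce new state records; repeated identical readings extend the current state rather than creating duplicates.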
Assets typically have 1-3 data flows that convert various raw data inputs into database records. Most of the time (over 80%), data flows are copied from templates or from other flows in the customer's account. Flows are created manually when more complex logic is needed (e.g., multiple inputs together determine the machine status).
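As a sketch of the "more complex logic" case, a manually built flow might combine several inputs into one status. The input names and thresholds below are hypothetical, chosen only to illustrate the idea:

```python
def machine_running(exhaust_voltage: float, spindle_rpm: float) -> bool:
    """Hypothetical multi-input rule: the machine counts as running only
    when the panel signal is high AND the spindle is actually turning."""
    return exhaust_voltage >= 24.0 and spindle_rpm > 100.0

# Panel energized but spindle idle -> not considered running
machine_running(24.0, 0.0)    # False
# Both inputs indicate activity -> running
machine_running(24.0, 1500.0) # True
```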
Data flows run on a single server hosted inside a private VPC in AWS. Each customer gets a dedicated data flows instance (a Docker container) that continually processes data without any user intervention.