Rayven.io handles data across two storage layers: Cassandra, for high-speed real-time workflow data, and MySQL, for structured user-defined tables. This article explains how data flows into each system, how they interact, and how to use them effectively for real-time automation and structured business logic.
Overview
Rayven supports two types of data storage:
- Cassandra Database – Optimized for time-series, high-volume data generated by workflows.
- MySQL Tables (Primary & Secondary) – Designed for persistent, structured records like configurations, reference data, and business logs.
Together, they enable fast data processing and scalable application modeling.
1. How Data Flows Into the System
A. Workflow Data → Cassandra
Any data generated or transformed in a workflow is automatically stored in Cassandra. Examples include:
- Sensor or API readings
- Calculated values (e.g., KPIs)
- Alerts or rule-based triggers
Data in Cassandra is used for:
- Real-time dashboards and visualizations
- Input to AI models
- Event-driven workflow automation
Note: This data does not appear in MySQL tables unless a workflow explicitly writes to a table using the Table Import Node.
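To make the distinction concrete, here is a minimal sketch in plain Python. The record shape and the `write_to_table` helper are hypothetical stand-ins (the platform itself is configured visually); the point is that a workflow reading lives only in Cassandra unless a Table Import step explicitly persists it:

```python
from datetime import datetime, timezone

# A typical workflow reading: this shape is stored automatically in Cassandra.
reading = {
    "device_id": "pump-07",
    "metric": "flow_rate",
    "value": 42.5,
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

def write_to_table(table, row):
    """Hypothetical stand-in for the Table Import Node: persist one row to MySQL."""
    print(f"Persisting to table '{table}': {row}")

# Nothing reaches a MySQL table unless the workflow explicitly writes it:
write_to_table("flow_log", {"device": reading["device_id"], "flow": reading["value"]})
```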
B. How Data Flows Into MySQL Tables
Rayven's MySQL tables can be populated through multiple channels:
1. Manual Uploads
Method: Upload structured data files (CSV, Excel, JSON) via the table interface in the platform.
Use Case: One-time or occasional uploads such as user lists, configuration thresholds, or static reference data.
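Before uploading, it is worth checking that a file's header matches the table's expected columns. A small sketch (the column names are hypothetical examples, not platform requirements):

```python
import csv
import io

REQUIRED = {"user_id", "name", "threshold"}  # hypothetical expected columns

def validate_csv(text):
    """Reject an upload early if its header is missing any required column."""
    reader = csv.DictReader(io.StringIO(text))
    missing = REQUIRED - set(reader.fieldnames or [])
    if missing:
        raise ValueError(f"missing columns: {sorted(missing)}")
    return list(reader)

rows = validate_csv("user_id,name,threshold\n1,Ada,0.8\n2,Grace,0.9\n")
```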
2. Workflow → Table Import Node
Method: Insert or update table rows by using a Table Import Node within a workflow.
Use Case: Automatically log events, calculated values, summaries, or structured business outcomes.
This is the only way to persist workflow-generated data into MySQL tables.
3. File-Based Imports (via Connector or FTP)
Method: Drop a file into a monitored location (e.g., SFTP) to trigger a workflow that parses and writes data into a table.
Use Case: Scheduled or automated imports such as shift rosters, maintenance logs, or production schedules.
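The parsing step that such a triggered workflow performs can be sketched as follows. This simulates a file landing in a monitored folder and being read into row dictionaries; the filename and columns are illustrative only:

```python
import csv
import os
import tempfile

# Simulate a file dropped into a monitored landing directory (e.g. via SFTP).
with tempfile.TemporaryDirectory() as inbox:
    path = os.path.join(inbox, "shift_roster.csv")
    with open(path, "w", newline="") as f:
        f.write("shift,operator\nmorning,Kim\nnight,Lee\n")

    # The triggered workflow would parse each row for the table write step:
    with open(path, newline="") as f:
        rows = [dict(r) for r in csv.DictReader(f)]
```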
4. External API → Workflow → Table
Method: Use API Input Nodes to receive data from external systems, then process and write to MySQL tables via Table Import Node.
Use Case: Sync data from third-party CRMs, ERPs, ticketing platforms, or cloud apps.
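Between receiving an external payload and writing it to a table, the workflow typically remaps field names onto the table's columns. A hedged sketch with a made-up ticketing payload and column map:

```python
import json

# Example payload from a hypothetical third-party ticketing system.
incoming = json.loads('{"ticket_id": 101, "status": "open", "assignee": "jo"}')

# Map external field names onto the table's column names before the import step.
COLUMN_MAP = {"ticket_id": "external_id", "status": "state", "assignee": "owner"}
row = {COLUMN_MAP[k]: v for k, v in incoming.items() if k in COLUMN_MAP}
```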
5. UI Forms (User Input → Table)
Method: Bind dashboard forms to workflows that submit user-entered data into tables.
Use Case: Capture service requests, project updates, or issue reports submitted via web apps.
2. How Data Flows Out of Tables
Once data is stored in MySQL, it can be consumed by other components of the platform:
A. Dashboards and Interfaces
Tables can drive:
- Real-time tables and charts
- Filters, dropdowns, and search controls
- Form components and summary cards
Ideal for presenting structured lists like assets, users, locations, or categories.
B. Workflows (Read, Lookup, Filter)
Workflows can read from tables to:
- Look up associated data (e.g., find a device’s site or category)
- Retrieve configuration values
- Apply conditional logic using reference data
This allows blending real-time data from Cassandra with persistent logic from MySQL.
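The blending pattern is a simple lookup-and-merge. A minimal sketch, assuming a device-to-site reference table and a live reading (both invented for illustration):

```python
# Reference data as it might be read from a MySQL table (device -> site).
device_sites = {"pump-07": "Plant A", "fan-12": "Plant B"}

# A live reading as it arrives from Cassandra-backed workflow data.
reading = {"device_id": "pump-07", "value": 42.5}

# Enrich the real-time event with the persistent reference lookup.
enriched = {**reading, "site": device_sites.get(reading["device_id"], "unknown")}
```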
C. External Outputs
Tables can be the source of exported data via:
- Scheduled workflows
- API outputs or FTP uploads
- Email-based reports or attachments
Use Case: Export daily summaries, transaction logs, or reports to external systems.
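The export step usually renders table rows into a portable format. A sketch (with invented rows) of producing CSV text ready for an FTP upload or email attachment:

```python
import csv
import io

# Example rows as read from a summary table (illustrative data only).
rows = [
    {"date": "2024-05-01", "total": 120},
    {"date": "2024-05-02", "total": 134},
]

# Render the rows as CSV text for an outbound file or attachment.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["date", "total"])
writer.writeheader()
writer.writerows(rows)
export = buf.getvalue()
```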
3. Real-Time vs Batch Updates
| Mode | Stored In | Use Case Examples |
| --- | --- | --- |
| Real-Time | Cassandra | Streaming data, AI input, event triggers |
| Batch | MySQL Tables | Configs, logs, schedules, business records |
- Cassandra is automatically updated by workflow activity.
- MySQL Tables are updated through uploads, workflows, or APIs.
4. Retention and Storage Management
Cassandra Retention
- Managed at the system level
- Suitable for high-frequency, short-term data
- Retention settings are not user-configurable
Contact support to adjust or extend retention policies.
MySQL Table Retention
- Fully user-controlled
- Use workflows to purge or archive old records
- Export and compress data regularly to keep tables lean
Best practice is to store essential data only and manage history through scheduled processes.
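The purge/archive logic such a scheduled process applies can be sketched as a simple cutoff filter (the 90-day window and row shape are assumptions, not platform defaults):

```python
from datetime import date, timedelta

# Example table rows with a creation date (illustrative data only).
rows = [
    {"id": 1, "created": date(2023, 1, 5)},
    {"id": 2, "created": date.today()},
]

# Keep the last 90 days; everything older is archived (or deleted).
cutoff = date.today() - timedelta(days=90)
archived = [r for r in rows if r["created"] < cutoff]
kept = [r for r in rows if r["created"] >= cutoff]
```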
Summary Table
| Data Source | Stored In | Accessed By | Use Cases |
| --- | --- | --- | --- |
| Workflow Output | Cassandra | Dashboards, Alerts, AI | Real-time metrics, automation triggers |
| File Upload (UI/FTP) | MySQL Tables | Interfaces, Forms | Reference data, shift logs, thresholds |
| Workflow → Table Node | MySQL Tables | Workflows, Reports | Business summaries, status updates |
| API Input → Table | MySQL Tables | Workflows, Integrations | External syncs, logging incoming data |
| Form Input | MySQL Tables | Dashboards, Forms | Manual submissions, internal records |
Best Practices
- Use Cassandra for transient, real-time data from workflows.
- Use MySQL tables for persistent, structured records.
- Always use Table Import Nodes to push data into tables from workflows.
- Keep tables optimized by archiving or purging old records.
- Blend data sources in workflows using lookup and filter nodes.
Q&A
Q: How can I write workflow data into a MySQL table?
A: Use a Table Import Node in the workflow. This is the only way to persist workflow-generated data into structured tables.
Q: Can dashboards use both Cassandra and MySQL data?
A: Yes. Dashboards can show live metrics (Cassandra) alongside structured data lists and filters (MySQL).
Q: What are the main ways to populate MySQL tables?
A: Through manual file uploads, workflow import nodes, API-based integrations, file drops (e.g., via FTP), or form submissions.
Q: How do I keep table size under control?
A: Use cleanup workflows to remove or export old rows. This keeps dashboards and interfaces responsive.
Q: Can I combine Cassandra and MySQL data in one workflow?
A: Yes. Workflows can read live data from Cassandra and enrich it with lookups from MySQL tables using Find Record or Join nodes.