Rayven.io provides a fully integrated Generative AI Node powered by a LLaMA-based language model running entirely on Rayven-managed or customer-hosted infrastructure.
This allows you to use large language models (LLMs) securely within Rayven workflows and applications—without relying on third-party providers like OpenAI.
This node enables dynamic AI-driven outputs across use cases, while ensuring data privacy, control, and compliance with data sovereignty requirements.
Key Capabilities
1. On-Platform LLM Execution
Rayven’s Generative AI runs on Rayven-managed servers, or it can be deployed in your private cloud or on-premises environment. No API calls to external services are required; a general sketch of what self-hosted inference looks like follows the list below.
- No third-party data exposure
- Compliant with privacy and sovereignty standards
- Suitable for internal, regulated, or sensitive applications
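For context, self-hosted inference of this kind generally means the model weights are loaded and queried on infrastructure you control. The sketch below is illustrative only, using the open-source llama-cpp-python library against a locally stored model file; it is not Rayven’s internal implementation, and the model path and parameters are placeholders.

```python
# Illustrative only: self-hosted LLaMA inference with llama-cpp-python.
# This is NOT Rayven's internal implementation; the model path and
# parameters below are placeholders for a locally stored model file.
from llama_cpp import Llama

# Load the model entirely from local disk -- no external API calls.
llm = Llama(model_path="/models/llama-3-8b-instruct.Q4_K_M.gguf")

response = llm(
    "Summarize this shift's performance: production at 94% of target, two equipment resets.",
    max_tokens=128,
    temperature=0.2,
)

print(response["choices"][0]["text"])
```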
2. Prompt-Based AI Logic
You can create and manage prompts within the Generative AI Node. Prompts can include inputs from the sources below (a templating sketch follows the list):
- Real-time workflow values
- Table records
- Uploaded files (PDFs, CSVs, plain text)
- External data ingested through connectors
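To make these input sources concrete, the sketch below shows the general idea of prompt templating: placeholders in a prompt are filled with live workflow values and a table record before the model is called. The field names and structure are hypothetical illustrations, not Rayven’s actual node configuration.

```python
# Hypothetical illustration of prompt templating: placeholders are filled
# with live workflow values and a table record before the model runs.
# Field names below are examples, not Rayven's actual configuration schema.

prompt_template = (
    "Summarize this shift's performance.\n"
    "Units produced: {units_produced} (target: {target_units})\n"
    "Downtime events: {downtime_events}\n"
    "Last maintenance record: {last_maintenance}"
)

workflow_values = {"units_produced": 940, "target_units": 1000, "downtime_events": 2}
table_record = {"last_maintenance": "2024-05-12: filter replaced on Pump 3"}

prompt = prompt_template.format(**workflow_values, **table_record)
print(prompt)  # This rendered text is what the Generative AI Node would receive.
```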
3. Workflow Integration
The AI Node is a native component in Rayven’s low-code workflow builder. Use it to do the following (a conceptual sketch follows the list):
- Process inputs dynamically
- Generate AI-based responses
- Route outputs to logic, dashboards, alerts, or storage
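Conceptually, the node sits inline in the flow: it receives the current inputs, produces text, and that text is then routed onward like any other workflow value. The sketch below is a hypothetical stand-in for that flow, with `generate` representing the model call; it is not Rayven’s actual node API.

```python
# Hypothetical stand-in for the node's role in a workflow: take inputs,
# generate text, then route the result onward. `generate` represents the
# model call; none of this is Rayven's actual node API.
def ai_step(inputs: dict, generate) -> dict:
    prompt = f"Summarize this shift's performance: {inputs}"
    summary = generate(prompt)

    # Route the output like any other workflow value.
    return {
        "dashboard_text": summary,  # e.g. shown in an HTML node
        "alert": summary if inputs.get("critical_faults", 0) else None,  # e.g. email/SMS
    }
```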
4. Private Cloud Deployment Option
Rayven supports dedicated deployment of the LLaMA AI engine to your private infrastructure:
- Full ownership of the model
- Complete control over hosting, access, and usage
- Suitable for enterprises with strict IT governance
Common Use Cases
Automated Summaries in Workflows
Use the AI Node to summarize daily operations, shift activities, or status logs:
Prompt: “Summarize this shift’s performance: [insert workflow variables]”
Output: “Production met 94% of target. Minor delays due to equipment resets. No critical faults reported.”
Document-Based Analysis
Upload an internal policy, SOP, or log file and generate a concise summary or response:
Prompt: “Summarize the main procedures described in this PDF.”
Output: “The document outlines emergency response steps, focusing on containment, isolation, and reporting.”
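Outside the node, the equivalent of this file-based flow is simply extracting the document text and folding it into the prompt. The sketch below uses the open-source pypdf library purely as an illustration; the file name is a placeholder, and this is not how Rayven processes uploads internally.

```python
# Illustrative only: extract text from an uploaded PDF and build a
# summarization prompt from it. "policy.pdf" is a placeholder file name;
# this is not Rayven's internal upload handling.
from pypdf import PdfReader

reader = PdfReader("policy.pdf")
document_text = "\n".join(page.extract_text() or "" for page in reader.pages)

prompt = (
    "Summarize the main procedures described in the following document:\n\n"
    + document_text[:4000]  # keep the prompt within the model's context window
)
```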
Proposal or Email Generation
Use workflow or table data to automatically create proposals, quotes, or email content:
Prompt: “Create a maintenance proposal based on the following usage metrics: [insert data]”
Output: “We recommend replacing Filter A every 2 weeks and scheduling monthly inspections for Pump 3.”
App Interface Personalization
Insert AI-generated text into dashboards or forms:
- HTML nodes display human-like recommendations
- Table columns show predicted next actions
- Widgets update with live, LLM-generated insights
Workflow Output Destinations
Generative AI outputs can be sent to any of the following destinations (a webhook sketch follows the list):
- Tables: Write structured results alongside existing data
- HTML Nodes: Display free-text insights or summaries
- Email/SMS Nodes: Send personalized, AI-generated content
- APIs or Webhooks: Forward results to external systems
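As an illustration of the API/webhook path, forwarding a generated result to an external system usually amounts to an HTTP POST with the text in the payload. The endpoint URL and payload fields below are hypothetical placeholders, not a Rayven-defined schema.

```python
# Hypothetical sketch of forwarding an AI-generated result to an external
# system via webhook. The URL and payload fields are placeholders.
import requests

ai_output = "Production met 94% of target. Minor delays due to equipment resets."

requests.post(
    "https://example.com/hooks/shift-summary",  # placeholder endpoint
    json={"source": "rayven-generative-ai-node", "summary": ai_output},
    timeout=10,
)
```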
Benefits
- No External Dependencies: Everything runs inside your secured Rayven or private infrastructure
- Prompt Flexibility: Tailor prompts using any input data source
- Real-Time Responses: AI logic runs inline with workflow execution
- Scalable Integration: Use in any workflow, app, or output node
- Compliance-Ready: Designed for organizations with data governance or sovereignty requirements
Q&A
Q: Is Rayven’s Generative AI connected to OpenAI or external models?
A: No. Rayven uses a self-hosted LLaMA-based model that runs entirely on Rayven or customer infrastructure. No data leaves your environment.
Q: Can I use workflow and table data inside prompts?
A: Yes. You can dynamically insert real-time workflow values, historical records, or user input directly into the prompt to generate customized outputs.
Q: Can the AI Node read files?
A: Yes. You can upload documents (PDF, CSV, text) into the workflow, then reference their contents in prompts for summarization or analysis.
Q: Where can the AI output be used?
A: You can send outputs to dashboards, HTML widgets, tables, alerts (email/SMS), or use them to inform decisions within the workflow itself.
Q: Can I host the LLM privately?
A: Yes. Rayven offers a private cloud or on-premise deployment of the LLaMA model, giving you full control over data, hosting, and access.
Q: Does this require coding?
A: No. The Generative AI Node is used within Rayven’s no-code workflow builder. Prompts and outputs are configured visually.
Q: Is the output updated in real time?
A: Yes. Each time the workflow runs, the prompt is executed using the latest available data, ensuring outputs are always current.
Q: Can I use multiple AI nodes in one workflow?
A: Yes. You can use multiple Generative AI nodes for different prompts or use cases—e.g., one for summaries, another for recommendations.