Data Layer: Connect Everything
Universal data connections to POS systems, databases, cloud warehouses, APIs, Google Drive, and files. Automatic schema discovery and unified data warehouse.
Data Layer Overview
The Data Layer is the foundation of Craveva AI. It connects all your enterprise data sources into one unified system, enabling AI agents to understand and query your data seamlessly.
Supported Data Sources
POS Systems
- Qashier
- Eats365
- Raptor
- Micros
- Toast
- Lightspeed
- StoreHub
- Square
Databases (12 Total)
- PostgreSQL
- MySQL
- MongoDB
- SQL Server
- Oracle
- DuckDB
- BigQuery
- Snowflake
- Redshift
- Athena
- ClickHouse
- Trino
Cloud Warehouses
- BigQuery
- Snowflake
- Redshift
- Athena
- ClickHouse
- Trino
APIs
- REST APIs
- GraphQL
- Webhooks
Google Workspace
Beta V2.0 (Feb 14, 2026)
- Google Drive
- Google Docs
- Google Sheets
Files
- CSV
- Excel
- JSON
- Word
- Parquet
Deployment Modes
Offline-Only
Process local files without internet connection
- Upload files directly
- Process data locally
- Generate reports offline
- No external API calls
Best for: Sensitive data, air-gapped environments
Online-Only
Real-time data from connected sources
- Real-time synchronization
- Live query execution
- Automatic updates
- Multi-source aggregation
Best for: Real-time operations, live dashboards
Hybrid
Combine historical files with live data
- Merge offline and online data
- Unified query interface
- Data enrichment
- Comprehensive analysis
Best for: Trend analysis, historical comparisons
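The three modes above can be sketched as a simple branch over a mode flag. This is an illustrative stand-in (the function name and return shape are invented, not part of any published Craveva API); the `'online+offline'` value mirrors the `data-mode` attribute used by the deployment snippet later in this page.

```javascript
// Hypothetical sketch: how a client might branch on the deployment mode.
// Function name and return shape are illustrative only.
function resolveSources(mode) {
  switch (mode) {
    case 'offline':        // local files only, no external API calls
      return { local: true, remote: false };
    case 'online':         // live connected sources only
      return { local: false, remote: true };
    case 'online+offline': // hybrid: merge historical files with live data
      return { local: true, remote: true };
    default:
      throw new Error(`Unknown deployment mode: ${mode}`);
  }
}
```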
Automatic Schema Discovery
Our AI automatically discovers your data structure without manual configuration:
Table/Collection Analysis
Automatically detects all tables, collections, and their structures
Column/Field Type Detection
Identifies data types, constraints, and relationships automatically
Constraint Detection
Detects primary keys, foreign keys, and indexes
Data Sampling
Analyzes sample data to understand patterns and business logic
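To make the data-sampling step concrete, here is a toy type-detection pass over sampled rows. It uses a deliberately simple heuristic (numbers, ISO-format dates, everything else text) and invented column names; the actual discovery engine also detects constraints and relationships, which this sketch omits.

```javascript
// Toy column-type detection from sampled rows (illustrative heuristic only).
const ISO_DATE = /^\d{4}-\d{2}-\d{2}/;

function inferColumnTypes(rows) {
  const types = {};
  for (const col of Object.keys(rows[0] ?? {})) {
    // Sample the column's non-null values across all rows.
    const sample = rows.map((r) => r[col]).filter((v) => v !== null);
    if (sample.every((v) => typeof v === 'number')) {
      types[col] = 'numeric';
    } else if (sample.every((v) => typeof v === 'string' && ISO_DATE.test(v))) {
      types[col] = 'timestamp';
    } else {
      types[col] = 'text';
    }
  }
  return types;
}
```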
RAG Implementation (For Files)
Offline file uploads (PDF, Word, Excel, CSV, JSON) are processed using RAG (Retrieval-Augmented Generation) with MongoDB Atlas Vector Search. Beta V2.0 (Feb 14, 2026) will migrate to PostgreSQL pgvector for 70-80% faster vector searches.
Parse
Extract text from files
Chunk
Split into semantic chunks
Embed
Generate vector embeddings
Store
Store in vector database
This enables semantic search over uploaded documents for chat-based queries, separate from the SQL-based semantic layer used for databases.
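The Parse → Chunk → Embed → Store steps above can be sketched end to end. The chunker and embedder here are toy stand-ins (fixed-size chunks and a character-code histogram); the real pipeline uses semantic chunking, an embedding model, and a vector database rather than an in-memory array.

```javascript
// Minimal sketch of the Parse -> Chunk -> Embed -> Store pipeline.
// All helpers are toy stand-ins for illustration only.
function chunkText(text, size = 200) {
  // Fixed-size chunking; the real system splits on semantic boundaries.
  const chunks = [];
  for (let i = 0; i < text.length; i += size) {
    chunks.push(text.slice(i, i + size));
  }
  return chunks;
}

function embed(chunk, dims = 8) {
  // Toy "embedding": character-code histogram folded into `dims` buckets.
  const vec = new Array(dims).fill(0);
  for (let i = 0; i < chunk.length; i++) {
    vec[chunk.charCodeAt(i) % dims] += 1;
  }
  return vec;
}

function ingest(text, store = []) {
  // Store each chunk alongside its vector (stand-in for a vector database).
  for (const chunk of chunkText(text)) {
    store.push({ chunk, vector: embed(chunk) });
  }
  return store;
}
```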
Semantic Layer (For Databases)
Our custom Craveva AI Semantic Layer converts natural language to SQL:
- Self-hosted semantic layer (no external dependencies)
- MDL (Modeling Definition Language) stored in MongoDB
- LLM-powered SQL generation using Craveva LLM Router
- Automatic schema analysis to create MDL definitions
- Multi-tenant support with full tenant isolation
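To illustrate the shape of the natural-language-to-SQL mapping, here is a toy rule-based translator over a tiny MDL-style model. The table, column, and measure names are invented; the actual semantic layer generates SQL with an LLM (via the Craveva LLM Router) driven by MDL definitions, not keyword rules.

```javascript
// Toy NL-to-SQL translator over an MDL-style model (illustration only;
// table/column names are invented, and the real layer is LLM-driven).
const mdl = {
  sales: { table: 'pos_sales', measure: 'SUM(total)', dateColumn: 'sold_at' },
};

function toSql(question) {
  if (/sales/i.test(question)) {
    const m = mdl.sales;
    let sql = `SELECT ${m.measure} FROM ${m.table}`;
    if (/today/i.test(question)) {
      sql += ` WHERE ${m.dateColumn} = CURRENT_DATE`;
    }
    return sql;
  }
  throw new Error('No matching model in MDL');
}
```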
Copy-Paste JavaScript Deployment
Deploy the AI Data Warehouse anywhere with a simple copy-paste JavaScript snippet:
<script>
  (function() {
    const cravevaScript = document.createElement('script');
    cravevaScript.src = 'https://cdn.craveva.ai/v1/data-warehouse.js';
    cravevaScript.setAttribute('data-api-key', 'YOUR_API_KEY');
    cravevaScript.setAttribute('data-mode', 'online+offline');
    cravevaScript.setAttribute('data-company-id', 'YOUR_COMPANY_ID');
    document.head.appendChild(cravevaScript);
  })();
</script>