A feature-complete, high-performance RDBMS with native AI/ML support, time-travel queries, streaming statistics, built-in GUI, and enterprise security.
[!IMPORTANT] Mission: Deliver a modern database that seamlessly integrates traditional RDBMS capabilities with cutting-edge features like vector search, temporal queries, adaptive compression, and comprehensive security - all in a single, cohesive system.
LatticeDB is a Next-Gen RDBMS built from scratch in modern C++17, combining traditional database capabilities with cutting-edge features:
`FOR SYSTEM_TIME AS OF TX n` syntax for querying historical data. These features are designed to work seamlessly together in a single, unified engine, unlike many incumbents that require extensions, plugins, or external services to achieve similar functionality.
LatticeDB focuses on capabilities that major RDBMSs generally don't provide natively, all together, out of the box:
| Capability | LatticeDB | PostgreSQL | MySQL | SQL Server |
|---|---|---|---|---|
| Full SQL with JOINs, GROUP BY, Aggregates | ✅ Fully Implemented | ✅ Built-in | ✅ Built-in | ✅ Built-in |
| Time Travel Queries (FOR SYSTEM_TIME) | ✅ Fully Implemented | Via extensions | Limited | Temporal tables |
| Native Vector Search (AI/ML) | ✅ Fully Implemented | pgvector extension | Via plugins | Limited |
| Row-Level Security | ✅ Fully Implemented | ✅ Built-in | Via views | ✅ Built-in |
| Column Encryption | ✅ Fully Implemented | Via extensions | TDE only | TDE + Always Encrypted |
| Adaptive Compression | ✅ Fully Implemented | Basic support | Basic | Advanced |
| Stream Processing | ✅ Fully Implemented | Via extensions | Via binlog | CDC |
| ACID with MVCC | ✅ Fully Implemented | ✅ Built-in | ✅ InnoDB | ✅ Built-in |
| Write-Ahead Logging | ✅ Fully Implemented | ✅ Built-in | ✅ Built-in | ✅ Built-in |
| B+ Tree Indexes | ✅ Fully Implemented | ✅ Built-in | ✅ Built-in | ✅ Built-in |
| Comprehensive Audit Logging | ✅ Fully Implemented | Via extensions | Limited | ✅ Built-in |
| Custom Merge Policies (CRDTs) | ✅ Fully Implemented | Third-party | Third-party | Third-party |
| Vector Search with Multiple Algorithms | ✅ Fully Implemented | pgvector extension | Via plugins | Limited |
| Built-in Web GUI | ✅ Fully Implemented | Third-party | Third-party | Third-party |
[!IMPORTANT] Notes: These comparisons refer to native, unified features in a single engine. Many incumbents can achieve parts of this via extensions, editions, or external services, but not as a cohesive, first-class design as in LatticeDB.
The architecture is modular, with clear separation of concerns:
flowchart TD
subgraph "Client Layer"
GUI[Web GUI]
CLI[CLI Client]
SDK[SDKs]
end
subgraph "Query Engine"
PARSE[SQL Parser]
OPT[Query Optimizer]
EXEC[Executor]
end
subgraph "Storage Layer"
MVCC[MVCC Controller]
ULS[Universal Log Store]
IDX[B+ Tree & Vector Indexes]
end
subgraph "CRDT Engine"
LWW[LWW Register]
GSET[G-Set]
COUNTER[PN-Counter]
end
subgraph "Security & Privacy"
RLS[Row-Level Security]
DP[Differential Privacy]
AUDIT[Audit Logger]
end
GUI --> PARSE
CLI --> PARSE
SDK --> PARSE
PARSE --> OPT
OPT --> EXEC
EXEC --> MVCC
MVCC --> ULS
ULS --> IDX
EXEC --> RLS
EXEC --> DP
EXEC --> AUDIT
ULS --> LWW
ULS --> GSET
ULS --> COUNTER
This requires you to have the prerequisites installed. Then:
git clone https://github.com/hoangsonww/LatticeDB-NextGen-DBMS.git
cd LatticeDB-NextGen-DBMS
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make -j$(nproc)
./latticedb # launches CLI interface
You do not need to install any dependencies locally. Just run, in a terminal inside the container:
cmake -S . -B build-container -DCMAKE_BUILD_TYPE=Release -G Ninja
cmake --build build-container --parallel
./build-container/latticedb --version # will show version info
./build-container/latticedb # launches CLI interface
When successful, you should see the following:
Feel free to try out commands such as `help`, `\d` (list all tables), `\stats` (show DB stats), and more!
-- Create tables with various data types
CREATE TABLE users (
id INTEGER PRIMARY KEY,
name VARCHAR(100) NOT NULL,
email VARCHAR(255) UNIQUE,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE orders (
id INTEGER PRIMARY KEY,
user_id INTEGER,
amount DOUBLE,
status VARCHAR(20)
);
-- Insert data
INSERT INTO users (id, name, email) VALUES
(1, 'Alice Johnson', 'alice@example.com'),
(2, 'Bob Smith', 'bob@example.com');
INSERT INTO orders VALUES
(1, 1, 99.99, 'completed'),
(2, 1, 149.50, 'pending'),
(3, 2, 75.00, 'completed');
-- JOIN queries (fully working)
SELECT u.name, COUNT(o.id) as order_count, SUM(o.amount) as total
FROM users u
INNER JOIN orders o ON u.id = o.user_id
GROUP BY u.id, u.name;
-- Aggregates (all working)
SELECT status,
COUNT(*) as count,
SUM(amount) as total,
AVG(amount) as average,
MIN(amount) as minimum,
MAX(amount) as maximum
FROM orders
GROUP BY status;
-- Time travel query
SELECT * FROM orders FOR SYSTEM_TIME AS OF TX 5;
-- Vector search (if table has vector column)
CREATE TABLE embeddings (
id INTEGER PRIMARY KEY,
content TEXT,
vector VECTOR(768)
);
-- Transaction example
BEGIN;
UPDATE orders SET status = 'shipped' WHERE id = 2;
INSERT INTO orders VALUES (4, 2, 199.99, 'pending');
COMMIT;
EXIT;
[!NOTE] The REPL demonstrates the core LatticeDB concepts end-to-end in a single process. For distributed mode, use the coordinator + shard binaries (see `/cmd`).
The CLI provides an interactive shell to run SQL commands:
./latticedb
Type `help` for a list of commands.
[!TIP] A built `./latticedb` binary already exists in both `build` and `build-container`; run it directly to see available options, so you don't have to rebuild every time.
LatticeDB includes a modern web-based GUI with powerful features:
# 1) Build and start the HTTP server:
cmake -S . -B build && cmake --build build -j
./build/latticedb_server
# 2) In a new terminal, start the GUI:
cd gui
npm install
npm run dev
# Open http://localhost:5173
Ensure your server is running before using the GUI.
The GUI can also run standalone with sample data, in case you don't want to start the server:
cd gui
npm install
npm run dev
# Toggle "Mock Mode" in the UI to use sample data
How the GUI looks…
[!IMPORTANT] Preview the GUI without building from source by visiting the live demo at https://latticedb.vercel.app/, hosted on Vercel.
To create a distributable package of the built binaries and necessary files:
cd build # or build-container if using DevContainer
cmake -E tar cf ../latticedb-build.zip --format=zip .
This creates `latticedb-build.zip` in the parent directory, containing all compiled binaries and resources needed to run LatticeDB on another compatible system. You can then unzip it and run the server or CLI directly.
[!TIP] Instead of manually zipping the DBMS, you can visit Releases to download pre-packaged binaries/ZIPs for your platform.
Merge policies include `lww`, `sum_bounded(min,max)`, `gset`, and custom WASM resolvers.
stateDiagram-v2
[*] --> LocalWrite: Write Operation
LocalWrite --> CRDTDelta: Generate Delta
CRDTDelta --> VectorClock: Update Clock
VectorClock --> Broadcast: Ship to Peers
Broadcast --> MergeResolve: Apply Policy
MergeResolve --> LWW: Last-Writer-Wins
MergeResolve --> Union: Set Union
MergeResolve --> Custom: Custom Resolver
LWW --> Apply: Persist
Union --> Apply: Persist
Custom --> Apply: Persist
Apply --> [*]
Example
-- Add a custom WASM resolver for notes (pseudo)
CREATE MERGE RESOLVER rev_note LANGUAGE wasm
AS 'wasm://org.example.merges/resolve_rev_note@1.0';
ALTER TABLE tickets
ALTER COLUMN note SET MERGE USING rev_note;
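As a rough illustration of these policies (a Python sketch of the merge semantics only, not LatticeDB's C++ engine code), last-writer-wins registers, grow-only sets, and PN-counters can all be merged deterministically on any replica:

```python
# Illustrative CRDT merge semantics (sketch, not LatticeDB's implementation).

def merge_lww(a, b):
    """LWW register: keep the write with the larger (timestamp, node) pair."""
    return a if (a["ts"], a["node"]) >= (b["ts"], b["node"]) else b

def merge_gset(a, b):
    """G-Set: grow-only set; merge is plain union, elements never disappear."""
    return a | b

def merge_pn_counter(p1, n1, p2, n2):
    """PN-Counter: element-wise max of per-node increment (P) and decrement (N)
    maps; the counter's value is sum(P) - sum(N)."""
    p = {k: max(p1.get(k, 0), p2.get(k, 0)) for k in p1.keys() | p2.keys()}
    n = {k: max(n1.get(k, 0), n2.get(k, 0)) for k in n1.keys() | n2.keys()}
    return sum(p.values()) - sum(n.values())
```

All three merges are commutative, associative, and idempotent, which is what lets replicas converge regardless of delivery order.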
Every row version carries two time dimensions: `tx_from`/`tx_to` (transaction time) and `valid_from`/`valid_to` (valid time).
sequenceDiagram
participant Q as Query
participant P as Parser
participant T as Temporal Rewriter
participant I as Temporal Index
participant E as Executor
Q->>P: FOR SYSTEM_TIME AS OF timestamp
P->>T: Extract temporal predicates
T->>I: CSN range lookup
I-->>E: Pruned row versions
E-->>Q: Snapshot-consistent results
Example
-- Snapshot by transaction time
SELECT * FROM orders FOR SYSTEM_TIME AS OF '2025-08-10T13:37:00Z' WHERE id=42;
-- Correct valid time retroactively
UPDATE orders VALID PERIOD ['2025-07-01','2025-07-31') SET status='canceled' WHERE id=42;
-- Why did it change?
SELECT lineage_explain(orders, 42, '2025-08-10T13:37:00Z');
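The transaction-time lookup in the diagram amounts to an interval filter over row versions; the sketch below uses the `tx_from`/`tx_to` convention, while the real engine prunes versions via a CSN-indexed temporal lookup rather than a scan:

```python
# Illustrative "AS OF" selection over bitemporal row versions.

INF = float("inf")

def as_of(versions, t):
    """Row versions whose [tx_from, tx_to) interval contains transaction time t."""
    return [v for v in versions if v["tx_from"] <= t < v.get("tx_to", INF)]

# A row created at tx 1 and superseded at tx 5:
history = [
    {"id": 42, "status": "open",     "tx_from": 1, "tx_to": 5},
    {"id": 42, "status": "canceled", "tx_from": 5},  # current version
]
```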
flowchart TD
U[User/Service] --> AUTH[Authentication]
AUTH --> CTX[Security Context]
CTX --> POL[Policy Engine]
POL --> RLS[Row-Level Security]
POL --> CLS[Column-Level Security]
POL --> DP[Differential Privacy]
RLS --> PLAN[Query Plan]
CLS --> PLAN
DP --> BUDGET[Epsilon Budget]
BUDGET --> PLAN
PLAN --> EXEC[Executor]
EXEC --> AUDIT[Audit Logger]
Example
CREATE POLICY ssn_mask
ON people AS COLUMN (ssn)
USING MASK WITH (expr => 'CASE WHEN has_role(''auditor'') THEN ssn ELSE sha2(ssn) END');
CREATE POLICY dp_count_sales
ON sales AS DP USING (epsilon => 0.5, sensitivity => 1);
SET DP_EPSILON = 0.5;
SELECT DP_COUNT(*) FROM sales WHERE region='NA';
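Behind a `DP_COUNT`-style aggregate sits the standard Laplace mechanism: a count has sensitivity 1, so adding noise drawn from Laplace(sensitivity / epsilon) yields an epsilon-differentially-private answer. A hedged Python sketch (not the engine's implementation):

```python
import math
import random

def laplace_noise(scale, rng=random):
    """Sample Laplace(0, scale) via the inverse CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(max(1e-12, 1.0 - 2.0 * abs(u)))

def dp_count(rows, predicate, epsilon, sensitivity=1.0):
    """Epsilon-DP count: true count plus Laplace(sensitivity / epsilon) noise."""
    true_count = sum(1 for r in rows if predicate(r))
    return true_count + laplace_noise(sensitivity / epsilon)
```

Smaller epsilon means more noise and stronger privacy, which is why the engine tracks a per-session epsilon budget.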
`VECTOR<D>` columns and ANN indexes (HNSW/IVF plugins).
CREATE TABLE items(
id UUID PRIMARY KEY,
title TEXT,
embedding VECTOR<768> INDEX HNSW (M=32, ef_search=64)
);
SELECT o.id, i.title
FROM orders o
JOIN ANN items ON distance(o.query_vec, items.embedding) < 0.25
WHERE o.status = 'open';
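The `JOIN ANN ... distance(...) < 0.25` predicate above can be emulated with a brute-force scan; a real HNSW/IVF index avoids comparing against every vector, but the distance math is the same. A sketch using cosine distance (an assumption for illustration; the engine may default to another metric):

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

def ann_filter(query_vec, items, threshold=0.25):
    """Ids of items whose embedding is within `threshold` of query_vec."""
    return [it["id"] for it in items
            if cosine_distance(query_vec, it["embedding"]) < threshold]
```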
flowchart LR
subgraph Sources
CDC[Table CDC]
KAFKA[Kafka Stream]
end
subgraph Processing
DECODE[Decoder]
WINDOW[Window Assignment]
AGG[Incremental Aggregation]
end
subgraph Output
STATE[State Store]
MV[Materialized View]
end
CDC --> DECODE
KAFKA --> DECODE
DECODE --> WINDOW
WINDOW --> AGG
AGG --> STATE
STATE --> MV
Example
CREATE MATERIALIZED VIEW revenue_daily
WITH (refresh='continuous', watermark = INTERVAL '1 minute')
AS
SELECT DATE_TRUNC('day', ts) d, SUM(amount) amt
FROM STREAM OF payments
GROUP BY d;
CALL mv.backfill('revenue_daily', source => 'payments_archive', from => '2025-01-01');
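Conceptually, a continuous materialized view like `revenue_daily` maintains per-window running aggregates and uses the watermark to decide when a window is complete. A minimal Python sketch with daily tumbling windows (illustrative only; `RevenueDaily` and its fields are hypothetical names, not engine API):

```python
from collections import defaultdict

def day_of(ts):
    """Start of the daily tumbling window containing ts (seconds)."""
    return ts - ts % 86400

class RevenueDaily:
    def __init__(self, watermark=60):
        self.watermark = watermark          # allowed lateness, in seconds
        self.windows = defaultdict(float)   # window start -> running SUM(amount)
        self.max_ts = 0
    def on_event(self, ts, amount):
        self.max_ts = max(self.max_ts, ts)
        if ts >= self.max_ts - self.watermark:  # drop events past the watermark
            self.windows[day_of(ts)] += amount
    def finalized(self):
        """Windows that end at or before the watermark are complete."""
        wm = self.max_ts - self.watermark
        return {d: v for d, v in self.windows.items() if d + 86400 <= wm}
```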
LatticeDB uses a Unified Log-Structured Storage (ULS):
flowchart LR
subgraph "Storage Engine"
WAL[Write-Ahead Log]
MEM[MemTable]
L0[L0 SSTables]
L1[L1 SSTables]
L2[L2 SSTables]
end
subgraph "Indexes"
BT[B+ Tree]
VEC[Vector HNSW]
BLOOM[Bloom Filter]
end
WAL --> MEM
MEM --> L0
L0 --> L1
L1 --> L2
L0 --> BT
L0 --> VEC
L0 --> BLOOM
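The Bloom filter in the index layer lets reads skip SSTables that cannot contain a key: false positives are possible, false negatives are not. A minimal sketch (the `num_bits`/`num_hashes` parameters are illustrative, not LatticeDB's defaults):

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k salted SHA-256 hashes over an m-bit bitmap."""
    def __init__(self, num_bits=1024, num_hashes=3):
        self.m, self.k = num_bits, num_hashes
        self.bits = 0
    def _positions(self, key):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m
    def add(self, key):
        for p in self._positions(key):
            self.bits |= 1 << p
    def might_contain(self, key):
        # True may be a false positive; False is definitive.
        return all((self.bits >> p) & 1 for p in self._positions(key))
```

On a point read, the engine consults each SSTable's filter first and only touches disk when `might_contain` returns True.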
Transactions & Consistency
sequenceDiagram
participant C as Client
participant P as Parser
participant O as Optimizer
participant E as Executor
participant M as MVCC
participant S as Storage
participant A as Audit
C->>P: SQL Query
P->>O: AST + Policies
O->>E: Physical Plan
E->>M: Begin TX (CSN)
M->>S: Read/Write at CSN
S->>S: Apply CRDT Merge
S-->>M: Result Set
M-->>E: Versioned Data
E->>A: Log Access
E-->>C: COMMIT (txid)
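The CSN-based flow above can be sketched with a toy version store: each commit takes the next CSN, writes append versions stamped with it, and a reader at snapshot CSN `s` sees the latest version with `csn_min <= s`. Illustrative Python, not engine code:

```python
import itertools

class Mvcc:
    """Toy MVCC store: append-only versions keyed by commit sequence number."""
    def __init__(self):
        self.csn = itertools.count(1)
        self.versions = {}   # key -> list of (csn_min, value)
    def commit(self, writes):
        """Atomically apply {key: value} writes; return the commit CSN."""
        c = next(self.csn)
        for k, v in writes.items():
            self.versions.setdefault(k, []).append((c, v))
        return c
    def read(self, key, snapshot_csn):
        """Latest version visible at the given snapshot, or None."""
        visible = [(c, v) for c, v in self.versions.get(key, []) if c <= snapshot_csn]
        return max(visible)[1] if visible else None
```

Because old versions are retained, this same structure also serves `FOR SYSTEM_TIME AS OF TX n` reads.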
- `MERGE` policies in column definitions (`MERGE lww`, `MERGE sum_bounded(a,b)`, `MERGE gset`).
- `FOR SYSTEM_TIME AS OF` for time travel; `VALID PERIOD [from,to)` for valid-time updates.
- `DP_COUNT(*)` and other DP aggregates (with `SET DP_EPSILON`).
- `VECTOR<D>` with `DISTANCE(vec, [..])` predicates.
- `STREAM OF` sources in `CREATE MATERIALIZED VIEW`.
[!NOTE] LatticeSQL is a strict superset of a familiar ANSI subset, with new temporal, merge, vector, DP, and streaming constructs.
flowchart TD
IN[Incoming Queries] --> CLASS[Classifier]
CLASS -->|OLTP| RG1[Low-Latency]
CLASS -->|Analytics| RG2[Throughput]
CLASS -->|Vector| RG3[Cache-Heavy]
CLASS -->|Streaming| RG4[Deadline]
RG1 --> ADM[Admission Controller]
RG2 --> ADM
RG3 --> ADM
RG4 --> ADM
ADM --> RUN[Executors]
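A simplistic take on the classifier stage (purely illustrative heuristics; the real controller would also weigh plan cost, deadlines, and admission limits):

```python
def classify(sql):
    """Route a query to a resource group using crude textual heuristics."""
    s = sql.upper()
    if "STREAM OF" in s:
        return "deadline"       # streaming
    if "JOIN ANN" in s or "DISTANCE(" in s:
        return "cache-heavy"    # vector search
    if "GROUP BY" in s or "SUM(" in s:
        return "throughput"     # analytics
    return "low-latency"        # OLTP point queries
```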
Our comprehensive test suite includes:
Run tests with:
cd tests
./run_all.sh
Preliminary benchmarks show competitive performance on OLTP workloads compared to PostgreSQL and MySQL, with room for optimization in OLAP and vector search scenarios.
Visit the benchmarks file for details.
Run the native suite from the repository root:
cmake -S . -B build -DCMAKE_BUILD_TYPE=Release # configure (skip if build/ already exists)
cmake --build build --target latticedb_bench --config Release
./build/latticedb_bench
Add `-v` to show per-iteration progress and `--vector` to include the optional vector-search benchmarks. Each run prints a summary table in the terminal and writes a `benchmark_results.csv` artifact in the project root.
Wanna run LatticeDB in production? No problem! LatticeDB provides production-ready deployment configurations for major cloud providers and HashiCorp stack, enabling you to deploy your database with enterprise-grade reliability, security, and scalability.
All LatticeDB deployments include integrated Prometheus and Grafana monitoring out of the box:
# Access monitoring dashboards
# AWS: kubectl port-forward svc/grafana 3000:3000
# Azure: az containerapp browse --name grafana
# GCP: gcloud run services proxy grafana --port=3000
# HashiCorp: consul connect proxy -service grafana -upstream grafana:3000
Ready-to-use monitoring for production workloads with zero configuration required!
[!TIP] See MONITORING.md for detailed setup, customization, and alerting instructions.
| Platform | Service | Features | Best For |
|---|---|---|---|
| AWS | ECS Fargate + ALB | Auto-scaling, EFS storage, CloudWatch | Enterprise workloads |
| Azure | Container Apps | Serverless, Auto-scaling, Azure Files | Modern cloud-native apps |
| GCP | Cloud Run + Cloud SQL | Serverless, Global load balancer | Global applications |
| HashiCorp | Nomad + Consul + Vault | Service mesh, Secret management | On-premises/hybrid |
Choose your preferred platform and deploy with a single command:
# AWS - Deploy to ECS Fargate with auto-scaling
cd aws && ./deploy.sh
# Azure - Deploy to Container Apps
cd azure && ./deploy.sh
# GCP - Deploy to Cloud Run (serverless)
export GCP_PROJECT=your-project-id
cd gcp && ./deploy.sh
# HashiCorp Stack - Deploy to Nomad with service mesh
cd hashicorp && ./deploy.sh
Each deployment provides:
cd aws
# Customize your deployment
cp terraform.tfvars.example terraform.tfvars
# Edit terraform.tfvars with your AWS region, domain, etc.
# Deploy with monitoring and auto-scaling
./deploy.sh
# Your LatticeDB is now running at:
# https://your-domain.com (or load balancer DNS)
Automated deployments with:
[!TIP] For complete deployment instructions, troubleshooting, and advanced configurations, see DEPLOYMENT.md.
In addition to local builds and Travis/Jenkins pipelines, we use GitHub Actions for CI/CD:
The workflow automatically builds and tests the code on every push and pull request, ensuring code quality and stability.
We love contributions! Ways to help: pick up issues labeled `good-first-issue` or `help-wanted`.

[!NOTE] Please run `clang-format` with `cmake --build build-container --target format` before submitting PRs! Replace `build-container` with your build directory if different.
Unless stated otherwise in the repository, LatticeDB is released under the MIT License. See `LICENSE` for details.
erDiagram
TABLE ||--o{ ROW_VERSION : has
POLICY ||--o{ POLICY_BINDING : applies_to
ROW_VERSION }o--|| LINEAGE_EVENT : derived_from
USER ||--o{ QUERY_SESSION : initiates
QUERY_SESSION ||--o{ AUDIT_LOG : writes
TABLE ||--o{ INDEX : has
TABLE ||--o{ MATERIALIZED_VIEW : source
TABLE {
uuid table_id PK
string name
json schema_versions
json merge_policy
string crdt_type
interval retention_period
}
ROW_VERSION {
uuid row_id
uuid table_id FK
bigint csn_min
bigint csn_max
timestamptz valid_from
timestamptz valid_to
jsonB data
jsonB provenance
vector_clock merge_clock
}
POLICY {
uuid policy_id PK
string name
string type "RLS|CLS|DP|MASK"
json spec
float epsilon_budget
}
INDEX {
uuid index_id PK
uuid table_id FK
string type "BTREE|HNSW|BITMAP|BLOOM"
json config
}
MATERIALIZED_VIEW {
uuid view_id PK
string name
string refresh_type
interval refresh_interval
}
[!IMPORTANT] LatticeDB natively combines CRDT mergeability, bitemporal & lineage, policy-as-data with differential privacy, streaming MVs, vector search, and WASM extensibility into the core engine, so you can build offline-tolerant, audited, privacy-preserving, real-time apps without stitching together sidecars, plugins, and external services.
Thank you for exploring LatticeDB! We're excited about the future of databases and would love to hear your feedback and contributions.