# Brunix Assistance Engine
The Brunix Assistance Engine is a high-performance, gRPC-powered AI orchestration service. It serves as the core intelligence layer for the Brunix ecosystem, integrating advanced RAG (Retrieval-Augmented Generation) capabilities with real-time observability.
This project is a strategic joint development:
- 101OBEX Corp: Infrastructure, System Architecture, and the proprietary AVAP Technology stack.
- MrHouston: Advanced LLM Fine-tuning, Model Training, and Prompt Engineering.
## System Architecture
The following diagram illustrates the interaction between the AVAP technology, the trained intelligence, and the infrastructure components:
```mermaid
graph TD
    subgraph Local_Dev [Laptop Ivar/Rafael]
        BE[Brunix Assistance Engine]
        KT[Kubectl Tunnel]
    end
    subgraph Vultr_K8s_Cluster [Production - Vultr Cloud]
        EDB[(Elasticsearch Vector DB - HDD)]
        PG[(Postgres - System Data)]
        LF[Langfuse UI]
    end
    BE -- localhost:9200/5432 --> KT
    KT -- Secure Tunnel --> EDB
    KT -- Secure Tunnel --> PG
    BE -- Public IP --> LF
```
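The `Kubectl Tunnel` in the diagram maps the remote databases onto `localhost:9200` and `localhost:5432`. A standard way to establish such a tunnel is `kubectl port-forward`; the service names below are illustrative assumptions, not taken from the actual cluster manifests:

```shell
# Forward Elasticsearch (vector DB) to localhost:9200 (service name assumed)
kubectl port-forward svc/brunix-vector-db 9200:9200 &

# Forward PostgreSQL (system data) to localhost:5432 (service name assumed)
kubectl port-forward svc/brunix-postgres 5432:5432 &
```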
## Technology Stack
- Logic Layer: LangChain & LangGraph (Python 3.11).
- Communication: gRPC (High-performance, low-latency RPC framework).
- Vector Database: Elasticsearch 8.12 (For semantic search and AVAP data retrieval).
- Observability: Langfuse (End-to-end tracing, latency monitoring, and cost management).
- Infrastructure: Dockerized environment with PostgreSQL 15 persistence.
## Getting Started

### Prerequisites
- Docker & Docker Compose
- OpenAI API Key (or configured local provider)
### Installation & Deployment

1. Clone the repository:

   ```bash
   git clone git@github.com:BRUNIX-AI/assistance-engine.git
   cd assistance-engine
   ```

2. Configure environment variables by creating a `.env` file in the root directory:

   ```env
   OPENAI_API_KEY=your_key_here
   LANGFUSE_PUBLIC_KEY=pk-lf-...
   LANGFUSE_SECRET_KEY=sk-lf-...
   LANGFUSE_HOST=http://langfuse:3000
   ```

3. Launch the stack:

   ```bash
   docker-compose up -d --build
   ```
The engine will be listening for gRPC requests on port 50052.
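A quick way to confirm the engine is accepting connections is a plain TCP probe. This is a minimal sketch using only the Python standard library; `is_port_open` is a hypothetical helper, not part of this repo:

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # After `docker-compose up`, the gRPC server should accept TCP connections here.
    print("engine reachable:", is_port_open("localhost", 50052))
```

Note this only checks that the port is open; a full health check would issue an actual gRPC call.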
## Component Overview
| Service | Container Name | Description | Role |
|---|---|---|---|
| Engine | `brunix-assistance-engine` | The AVAP-powered brain. | 101OBEX Corp |
| Vector DB | `brunix-vector-db` | Elasticsearch instance (Knowledge Base). | Training Support |
| Observability | `brunix-observability` | Langfuse UI (Tracing & Costs). | System Quality |
| System DB | `brunix-postgres` | Internal storage for Langfuse. | Infrastructure |
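The services in this table map onto a `docker-compose.yaml` roughly like the fragment below. This is a hedged sketch: the service keys, image tags, and port mappings are assumptions inferred from the stack description, not copied from the repository's actual file.

```yaml
services:
  engine:
    build: .
    container_name: brunix-assistance-engine
    ports:
      - "50052:50052"          # gRPC endpoint
    env_file: .env
    networks: [avap-network]

  vector-db:
    image: elasticsearch:8.12.0
    container_name: brunix-vector-db
    networks: [avap-network]

  observability:
    image: langfuse/langfuse:latest
    container_name: brunix-observability
    networks: [avap-network]

  postgres:
    image: postgres:15
    container_name: brunix-postgres
    networks: [avap-network]

networks:
  avap-network:
    driver: bridge
```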
## Partnership & Contributions
This repository is private and represents the intellectual property of 101OBEX Corp and MrHouston.
- Architecture & AVAP: Managed by 101OBEX Engineering.
- Model Training & Intelligence: Managed by MrHouston Data Science Team.
## Open Source & Intellectual Property
The Brunix Assistance Engine is built on a hybrid architecture that balances the flexibility of open-source tools with the security of proprietary intelligence:
- Open Source Frameworks: Utilizes LangChain and LangGraph (MIT License) for orchestration, and gRPC for high-performance communication.
- Infrastructure: Deploys via Docker using PostgreSQL and Elasticsearch (Elastic License 2.0).
- Proprietary Logic: The AVAP Technology (101OBEX Corp) and the specific Model Training/Prompts (MrHouston) are protected intellectual property.
- LLM Provider: Currently configured for OpenAI (Proprietary SaaS). The modular design allows for future integration with locally-hosted Open Source models (e.g., Llama 3, Mistral) to ensure 100% data sovereignty if required.
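The modular provider design described above can be sketched as a small interface: the orchestration layer depends only on a protocol, so OpenAI can later be swapped for a locally hosted model without touching call sites. Names like `ChatProvider` and `EchoLocalProvider` are illustrative, not the engine's actual classes:

```python
from typing import Protocol

class ChatProvider(Protocol):
    """Minimal interface any LLM backend must satisfy."""
    def complete(self, prompt: str) -> str: ...

class OpenAIProvider:
    """Would wrap the OpenAI SDK in the real engine (stubbed here)."""
    def __init__(self, api_key: str) -> None:
        self.api_key = api_key
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("call the OpenAI API here")

class EchoLocalProvider:
    """Stand-in for a locally hosted model (e.g. Llama 3 behind an HTTP server)."""
    def complete(self, prompt: str) -> str:
        return f"[local] {prompt}"

def answer(provider: ChatProvider, question: str) -> str:
    # The orchestration layer depends only on the interface, so swapping
    # providers for data-sovereignty reasons requires no changes here.
    return provider.complete(question)
```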
## Security & Privacy
The system is designed with a "Security-First" approach to protect corporate intelligence:
- Data in Transit: Communication between the Engine and external clients is handled via gRPC, supporting TLS/SSL encryption to ensure that data remains private and tamper-proof.
- Internal Networking: All database interactions (Elasticsearch, PostgreSQL) occur within a private Docker bridge network (`avap-network`), isolated from the public internet.
- Observability Governance: Langfuse provides a full audit trail of every LLM interaction, allowing for real-time monitoring of data leakage or unexpected model behavior.
- Enterprise Secret Management: While local development uses `.env` files, the architecture is production-ready for Kubernetes. In production environments, sensitive credentials (API keys, database passwords) are managed via Kubernetes Secrets or HashiCorp Vault, ensuring that no sensitive data is stored within the container images or source control.
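The `.env`-versus-Kubernetes-Secrets split can be handled with a single lookup helper, since Kubernetes Secrets are typically injected either as environment variables or as files under a volume mount. This is a minimal sketch: `/run/secrets` is an assumed mount path and `load_secret` a hypothetical helper, not code from this repo:

```python
import os
from pathlib import Path

def load_secret(name: str, secrets_dir: str = "/run/secrets") -> str:
    """Resolve a credential from the environment first (local .env or an
    injected env var), then from a mounted secret file (K8s Secret volume)."""
    value = os.environ.get(name)
    if value:
        return value
    secret_file = Path(secrets_dir) / name
    if secret_file.is_file():
        return secret_file.read_text().strip()
    raise KeyError(f"secret {name!r} not found in env or {secrets_dir}")
```

Either injection path works unchanged in the application code, so nothing sensitive needs to be baked into images.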
```mermaid
graph LR
    subgraph Public_Internet
        Client[External Client]
    end
    subgraph Encrypted_Tunnel [TLS/SSL]
        gRPC[gRPC Protocol]
    end
    subgraph K8s_Cluster [Production Environment]
        Engine[Brunix Engine]
        Sec{{"Kubernetes Secrets"}}
        DB[(Databases)]
    end
    Client --> gRPC
    gRPC --> Engine
    Sec -.->|Injected as Env| Engine
    Engine <--> DB
```