Compare commits: `mrh-online...main` (42 commits)

SHA1: 27fb889275, 52d381fa22, fda47edae0, 1daac66f89, dfcbf43fa2, f986c70845, 9a435120d5, 2493723f8b, 4fce9c9a98, 0abbae93a4, 90ba49d613, 01ce959aab, 3463fb05e8, 654ac88da7, f183beb088, 648f0f7318, a4478cb7ff, dc7568b622, aa80f60fdc, 90857e1b0a, b5167b71e3, de21bcb5fb, 35ca56118d, a08f754e25, 7839793eff, a4267e1b60, 8914acbb95, 8379033900, 1c9ee8d5dd, 0113b32f8a, 2d66266fd8, 86f426ae48, 39a9e7eaba, aa9865f5d0, 5024bde8fb, f075a0679a, e555cffa57, c26deae5c9, c862ea9194, 0415a3c271, 5eb7d7c939, b29c1214f4
@@ -1,26 +0,0 @@

```json
{
    "name": "brunix-assistance-engine",
    "dockerComposeFile": "../docker-compose.yaml",
    "service": "brunix-engine",
    "workspaceFolder": "/workspace",
    "remoteUser": "root",
    "runArgs": [
        "--add-host",
        "host.docker.internal:host-gateway"
    ],
    "customizations": {
        "vscode": {
            "extensions": [
                "ms-python.python",
                "ms-python.vscode-pylance",
                "ms-python.debugpy",
                "astral-sh.ruff",
                "ms-python.black-formatter",
                "njpwerner.autodocstring"
            ],
            "settings": {
                "python.defaultInterpreterPath": "/usr/local/bin/python"
            }
        }
    }
}
```
@@ -0,0 +1,76 @@

## Summary

<!-- What does this PR do? Why is this change needed? Be specific. -->

## Type of change

- [ ] New feature (`Added`)
- [ ] Change to existing behavior (`Changed`)
- [ ] Bug fix (`Fixed`)
- [ ] Security / infrastructure (`Security`)
- [ ] Internal refactor (no behavioral change)
- [ ] Docs / changelog only

---

## PR Checklist

> All applicable items must be checked before requesting review.
> Reviewers are authorized to close and request resubmission of PRs that do not meet these standards.

### Code & Environment

- [ ] Tested locally against the **authorized Devaron Cluster** (no external or unauthorized infrastructure used)
- [ ] No personal IDE/environment files committed (`.vscode`, `.devcontainer`, etc.)
- [ ] No `root` user configurations introduced
- [ ] `Dockerfile` and `.dockerignore` comply with build context standards (`/app` only, no `/workspace`)

### Ingestion Files

- [ ] **No ingestion files were added or modified in this PR**
- [ ] **Ingestion files were added or modified** and are committed to the repository under `ingestion/` or `data/`

### Environment Variables

- [ ] **No new environment variables were introduced in this PR**
- [ ] **New variables were introduced** and are fully documented in the `.env` table in `README.md`

If new variables were added, list them here:

| Variable | Required | Description | Example value |
|---|---|---|---|
| `VARIABLE_NAME` | Yes / No | What it does | `example` |

### Changelog

- [ ] **Not required** — internal refactor, typo/comment fix, or zero behavioral impact
- [ ] **Updated** — entry added to `changelog` with correct version bump and today's date

### Documentation

- [ ] **Not required** — internal change with no impact on setup, API, or usage
- [ ] **Updated** — `README.md` or relevant docs reflect this change

---

## Changelog entry

<!-- Paste the entry you added, or write "N/A" -->

```
## [X.Y.Z] - YYYY-MM-DD

### Added / Changed / Fixed / Security
- LABEL: Description.
```

---

## Infrastructure status during testing

| Tunnel | Status |
|---|---|
| Ollama (port 11434) | `active` / `N/A` |
| Elasticsearch (port 9200) | `active` / `N/A` |
| PostgreSQL (port 5432) | `active` / `N/A` |

---

## Notes for reviewer

<!-- Anything the reviewer should pay special attention to. If none, write "None." -->
```diff
@@ -250,3 +250,6 @@ fabric.properties
 src/mrh_saltoki_common/py.typed

 *.history
+
+.devcontainer
+.vscode
```
@@ -1,22 +0,0 @@

```jsonc
{
    "makefile.configureOnOpen": false,
    "python-envs.pythonProjects": [],
    "python.terminal.useEnvFile": true,
    "python.envFile": "${workspaceFolder}/.env",
    "jupyter.logging.level": "info",
    "terminal.integrated.env.linux": {
        "PYTHONPATH": "${workspaceFolder}:${env:PYTHONPATH}"
    },
    "python.analysis.ignore": [
        "*"
    ], // Disables Pylance's native linting
    "python.analysis.typeCheckingMode": "basic", // Keeps Pylance's type validation
    "[python]": {
        "editor.defaultFormatter": "charliermarsh.ruff",
        "editor.formatOnSave": true,
        "editor.codeActionsOnSave": {
            "source.fixAll.ruff": "explicit",
            "source.organizeImports.ruff": "explicit"
        }
    }
}
```
@@ -0,0 +1,308 @@

# Contributing to Brunix Assistance Engine

> This document is the single source of truth for all contribution standards in the Brunix Assistance Engine repository. All contributors — regardless of seniority or role — are expected to read, understand, and comply with these guidelines before opening any Pull Request.

---

## Table of Contents

1. [Development Workflow (GitFlow)](#1-development-workflow-gitflow)
2. [Infrastructure Standards](#2-infrastructure-standards)
3. [Repository Standards](#3-repository-standards)
4. [Pull Request Requirements](#4-pull-request-requirements)
5. [Ingestion Files Policy](#5-ingestion-files-policy)
6. [Environment Variables Policy](#6-environment-variables-policy)
7. [Changelog Policy](#7-changelog-policy)
8. [Documentation Policy](#8-documentation-policy)
9. [Architecture Decision Records (ADRs)](#9-architecture-decision-records-adrs)
10. [Incident & Blockage Reporting](#10-incident--blockage-reporting)

---

## 1. Development Workflow (GitFlow)

### Branch Strategy

| Branch type | Naming convention | Purpose |
|---|---|---|
| Feature | `*-dev` | Active development — volatile, no CI validation |
| Main | `online` | Production-ready, fully validated |

- **Feature branches** (`*-dev`) are volatile environments. No validation tests or infrastructure deployments are performed on these branches.
- **Official validation** only occurs after a documented Pull Request is merged into `online`.
- **Developer responsibility:** Code must be stable and functional against the authorized environment before a PR is opened. Do not use the PR review process as a debugging step.
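
The branch strategy above can be sketched with plain git. Only the `*-dev` naming convention and the `online` main line come from this document; the feature name `retrieval-dev` and the throwaway repository are illustrative:

```shell
# Illustrative sketch of the branch strategy, in a throwaway repo.
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b online                      # the validated main line
git config user.email "dev@example.com"    # local identity for the demo
git config user.name "dev"
git commit -q --allow-empty -m "baseline"
git checkout -q -b retrieval-dev           # volatile feature branch (*-dev)
git commit -q --allow-empty -m "feat: retrieval tweak"
git branch --list
```

Work happens on the `*-dev` branch; a documented Pull Request into `online` is the only path to official validation.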

---

## 2. Infrastructure Standards

The project provides a validated, shared environment (Devaron Cluster, Vultr) including Ollama, Elasticsearch, and PostgreSQL.

- **Authorized environment only.** The use of parallel, unauthorized infrastructures — external EC2 instances, ad-hoc local setups, non-replicable environments — is strictly prohibited for official development.
- **No siloed environments.** Isolated development creates technical debt and incompatibility risks that directly impact delivery timelines.
- All infrastructure access must be established via the documented `kubectl` port-forward tunnels defined in the [README](./README.md#3-infrastructure-tunnels).
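
The README's tunnel setup is referenced but not reproduced here. As a rough sketch of what such tunnels typically look like, assuming the cluster exposes one Kubernetes service per backend (the service names below are assumptions; only the local ports 11434, 9200, and 5432 appear in this repository's docs):

```shell
# Assumed service names — use the ones documented in the README.
kubectl port-forward svc/ollama 11434:11434 &
kubectl port-forward svc/elasticsearch 9200:9200 &
kubectl port-forward svc/postgresql 5432:5432 &
```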

---

## 3. Repository Standards

### IDE Agnosticism

The `online` branch must remain neutral to any individual's development environment. The following **must not** be committed under any circumstance:

- `.devcontainer/`
- `.vscode/`
- Any local IDE or editor configuration files

The `.gitignore` automates exclusion of these artifacts. Ensure your local environment is fully decoupled from the production-ready source code.

### Security & Least Privilege

- Never use `root` as `remoteUser` in any shared dev environment configuration.
- All configurations must comply with the **Principle of Least Privilege**.
- Using root in shared environments introduces unacceptable supply chain risk.

### Docker & Build Context

- All executable code must reside in `/app` within the container.
- The `/workspace` root directory is **deprecated** — do not reference it.
- Every PR must verify the `Dockerfile` context is optimized via `.dockerignore`.
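
A `.dockerignore` that satisfies these bullets might look like the following sketch. The entries are assumptions derived from the standards in this document, not the repository's actual file:

```
# Hypothetical .dockerignore sketch — keep the build context lean
.git
.vscode/
.devcontainer/
.env
**/__pycache__/
*.history
```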

> **PRs that violate these architectural standards will be rejected without review.**

---

## 4. Pull Request Requirements

A PR is not ready for review unless **all applicable items** in the following checklist are satisfied. Reviewers are authorized to close PRs that do not meet these standards and request resubmission.

### PR Checklist

**Code & Environment**

- [ ] Tested locally against the authorized Devaron Cluster (no unauthorized infrastructure used)
- [ ] No IDE or environment configuration files committed (`.vscode`, `.devcontainer`, etc.)
- [ ] No `root` user configurations introduced
- [ ] `Dockerfile` and `.dockerignore` comply with build context standards

**Ingestion Files** *(see [Section 5](#5-ingestion-files-policy))*

- [ ] No ingestion files were added or modified
- [ ] New or modified ingestion files are committed to the repository under `ingestion/` or `data/`

**Environment Variables** *(see [Section 6](#6-environment-variables-policy))*

- [ ] No new environment variables were introduced
- [ ] New environment variables are documented in the `.env` reference table in `README.md`

**Changelog** *(see [Section 7](#7-changelog-policy))*

- [ ] No changelog entry required (internal refactor, comment/typo fix, zero behavioral change)
- [ ] Changelog updated with correct version bump and date

**Documentation** *(see [Section 8](#8-documentation-policy))*

- [ ] No documentation update required (internal change, no impact on setup or API)
- [ ] `README.md` or relevant docs updated to reflect this change
- [ ] If a significant architectural decision was made, an ADR was created in `docs/adr/`

---

## 5. Ingestion Files Policy

All files used to populate the vector knowledge base — source documents, AVAP manuals, structured data, or ingestion scripts — **must be committed to the repository.**

### Rules

- Ingestion files must reside in a dedicated directory (e.g., `ingestion/` or `data/`) within the repository.
- Any PR that introduces new knowledge base content or modifies existing ingestion pipelines must include the corresponding source files.
- Files containing sensitive content that cannot be committed in plain form must be flagged for discussion before proceeding. Encryption, redaction, or a separate private submodule are all valid solutions — committing to an external or local-only location is not.

### Why this matters

The Elasticsearch vector index is only as reliable as the source material that feeds it. Ingestion files that exist only on a local machine or external location cannot be audited, rebuilt, or validated by the team. A knowledge base populated from untracked files is a non-reproducible dependency — and a risk to the entire RAG pipeline.

---

## 6. Environment Variables Policy

This is a critical requirement. **Every environment variable introduced in a PR must be documented before the PR can be merged.**

### Rules

- Any new variable added to the codebase (`.env`, `docker-compose.yaml`, `server.py`, or any config file) must be declared in the `.env` reference table in `README.md`.
- The documentation must include: variable name, purpose, whether it is required or optional, and an example value.
- Variables that contain secrets must use placeholder values (e.g., `your-secret-key-here`) — never commit real values.
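
Beyond documenting variables, a service can fail fast when a required one is missing. A minimal sketch using the standard POSIX `:?` parameter expansion; the variable name and value come from the README example table and are placeholders here:

```shell
# Fail fast if a required variable is unset or empty, then echo it back.
export LANGFUSE_HOST="http://45.77.119.180"   # placeholder from the README example
: "${LANGFUSE_HOST:?LANGFUSE_HOST is required}"
echo "LANGFUSE_HOST=${LANGFUSE_HOST}"
```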

### Required format in README.md

```markdown
| Variable | Required | Description | Example |
|---|---|---|---|
| `LANGFUSE_PUBLIC_KEY` | Yes | Langfuse project public key for tracing | `pk-lf-...` |
| `LANGFUSE_SECRET_KEY` | Yes | Langfuse project secret key | `sk-lf-...` |
| `LANGFUSE_HOST` | Yes | Langfuse server endpoint | `http://45.77.119.180` |
| `NEW_VARIABLE` | Yes | Description of what it does | `example-value` |
```

### Why this matters

An undocumented environment variable silently breaks the setup for every other developer on the team. It also makes the service non-reproducible, which is a direct violation of the infrastructure standards in Section 2. There are no exceptions to this policy.

---

## 7. Changelog Policy

The `changelog` file tracks all notable changes and follows [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

### When a changelog entry IS required

| Change type | Label to use |
|---|---|
| New feature or capability | `Added` |
| Change to existing behavior, API, or interface | `Changed` |
| Bug fix | `Fixed` |
| Security patch or security-related change | `Security` |
| Breaking change or deprecation | `Deprecated` / `Removed` |

### When a changelog entry is NOT required

- Typo or comment fixes only
- Internal refactors with zero behavioral or interface change
- Tooling/CI updates with no user-visible impact

**If in doubt, add an entry.**

### Format

New entries go at the top of the file, above the previous version:

```
## [X.Y.Z] - YYYY-MM-DD

### Added
- LABEL: Description of the new feature or capability.

### Changed
- LABEL: Description of what changed and the rationale.

### Fixed
- LABEL: Description of the bug resolved.
```

Use uppercase short labels for scanability: `API:`, `DOCKER:`, `INFRA:`, `SECURITY:`, `ENV:`, `CONFIG:`.

---

## 8. Documentation Policy

### When documentation MUST be updated

Update `README.md` (or the relevant doc file) if the PR includes any of the following:

- Changes to project structure (new files, directories, removed components)
- Changes to setup, installation, or environment configuration
- New or modified API endpoints or Protobuf definitions (`brunix.proto`)
- New, modified, or removed environment variables
- Changes to infrastructure tunnels or Kubernetes service names
- New dependencies or updated dependency versions
- Changes to security, access, or repository standards

### When documentation is NOT required

- Internal implementation changes with no impact on setup, usage, or API
- Fixes that do not alter any documented behavior

### Documentation files in this repository

| File | Purpose |
|---|---|
| `README.md` | Setup guide, env vars reference, quick start |
| `CONTRIBUTING.md` | Contribution standards (this file) |
| `SECURITY.md` | Security policy and vulnerability reporting |
| `docs/ARCHITECTURE.md` | Deep technical architecture reference |
| `docs/API_REFERENCE.md` | Complete gRPC API contract and examples |
| `docs/RUNBOOK.md` | Operational playbooks and incident response |
| `docs/AVAP_CHUNKER_CONFIG.md` | `avap_config.json` reference — blocks, statements, semantic tags |
| `docs/adr/` | Architecture Decision Records |

> **PRs that change user-facing behavior or setup without updating documentation will be rejected.**

---

## 9. Architecture Decision Records (ADRs)

Architecture Decision Records document **significant technical decisions** — choices that have lasting consequences on the codebase, infrastructure, or development process.

### When to write an ADR

Write an ADR when a PR introduces or changes:

- A fundamental technology choice (communication protocol, storage backend, framework)
- A design pattern that other components will follow
- A deliberate trade-off with known consequences
- A decision that future engineers might otherwise reverse without understanding the rationale

### When NOT to write an ADR

- Implementation details within a single module
- Bug fixes
- Dependency version bumps
- Configuration changes

### ADR format

ADRs live in `docs/adr/` and follow this naming convention:

```
ADR-XXXX-short-title.md
```

Where `XXXX` is a zero-padded sequential number (e.g., `ADR-0005-new-decision.md`).

Each ADR must contain:

```markdown
# ADR-XXXX: Title

**Date:** YYYY-MM-DD
**Status:** Proposed | Accepted | Deprecated | Superseded by ADR-YYYY
**Deciders:** Names or roles

## Context
What problem are we solving? What forces are at play?

## Decision
What did we decide?

## Rationale
Why this option over alternatives? Include a trade-off analysis.

## Consequences
What are the positive and negative results of this decision?
```

### Existing ADRs

| ADR | Title | Status |
|---|---|---|
| [ADR-0001](docs/adr/ADR-0001-grpc-primary-interface.md) | gRPC as the Primary Communication Interface | Accepted |
| [ADR-0002](docs/adr/ADR-0002-two-phase-streaming.md) | Two-Phase Streaming Design for AskAgentStream | Accepted |
| [ADR-0003](docs/adr/ADR-0003-hybrid-retrieval-rrf.md) | Hybrid Retrieval (BM25 + kNN) with RRF Fusion | Accepted |
| [ADR-0004](docs/adr/ADR-0004-claude-eval-judge.md) | Claude as the RAGAS Evaluation Judge | Accepted |

---

## 10. Incident & Blockage Reporting

If you encounter a technical blockage (connection timeouts, service downtime, tunnel failures):

1. **Immediate notification** — Report via the designated Slack channel at the moment of detection. Do not wait until end of day.
2. **GitHub Issue must include:**
   - The exact command executed
   - Full terminal output (complete error logs)
   - Current status of all `kubectl` tunnels
3. **Resolution** — If the error is not reproducible by the CTO/DevOps team, a 5-minute live debugging session will be scheduled to identify local network or configuration issues.
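
The three evidence items required in step 2 can be captured in one pass. A sketch, where the report file name and the probe command are illustrative, not prescribed by this document:

```shell
# Collect the exact command, its full output, and tunnel status into one report.
probe='echo simulated-probe'            # replace with the failing command
{
  echo "## Command"
  echo "$probe"
  echo "## Full output"
  sh -c "$probe" 2>&1 || true
  echo "## kubectl tunnels"
  pgrep -af "kubectl port-forward" || echo "none running"
} > incident-report.txt
cat incident-report.txt
```

Attach the resulting file (or paste its contents) to the GitHub Issue.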

See [`docs/RUNBOOK.md`](docs/RUNBOOK.md) for full incident playbooks and escalation paths.

---

*These standards exist to protect the integrity of the Brunix Assistance Engine and to ensure every member of the team can work confidently and efficiently. They are not bureaucratic overhead — they are the foundation of a reliable, scalable engineering practice.*

*— Rafael Ruiz, CTO, AVAP Technology*
```diff
@@ -3,7 +3,6 @@ FROM python:3.11-slim
 ENV PYTHONDONTWRITEBYTECODE=1
 ENV PYTHONUNBUFFERED=1
-

 WORKDIR /app

 COPY ./requirements.txt .
@@ -26,6 +25,10 @@ RUN python -m grpc_tools.protoc \
     --grpc_python_out=./src \
     ./protos/brunix.proto

-EXPOSE 50051
+COPY entrypoint.sh /entrypoint.sh
+RUN chmod +x /entrypoint.sh

-CMD ["python", "src/server.py"]
+EXPOSE 50051
+EXPOSE 8000
+
+ENTRYPOINT ["/entrypoint.sh"]
```
@@ -0,0 +1,24 @@

```yaml
version: '3.8'

services:
  brunix-engine:
    build: .
    container_name: brunix-assistance-engine
    ports:
      - "50052:50051"
      - "8000:8000"
    environment:
      ELASTICSEARCH_URL: ${ELASTICSEARCH_URL}
      ELASTICSEARCH_INDEX: ${ELASTICSEARCH_INDEX}
      POSTGRES_URL: ${POSTGRES_URL}
      LANGFUSE_HOST: ${LANGFUSE_HOST}
      LANGFUSE_PUBLIC_KEY: ${LANGFUSE_PUBLIC_KEY}
      LANGFUSE_SECRET_KEY: ${LANGFUSE_SECRET_KEY}
      OLLAMA_URL: ${OLLAMA_URL}
      OLLAMA_MODEL_NAME: ${OLLAMA_MODEL_NAME}
      OLLAMA_EMB_MODEL_NAME: ${OLLAMA_EMB_MODEL_NAME}
      PROXY_THREAD_WORKERS: 10

    extra_hosts:
      - "host.docker.internal:host-gateway"
```
@@ -0,0 +1,30 @@

```sh
#!/bin/sh
set -e

echo "[entrypoint] Starting Brunix Engine (gRPC :50051)..."
python src/server.py &
ENGINE_PID=$!

echo "[entrypoint] Starting OpenAI Proxy (HTTP :8000)..."
uvicorn openai_proxy:app --host 0.0.0.0 --port 8000 --workers 4 --app-dir src &
PROXY_PID=$!

wait_any() {
    while kill -0 $ENGINE_PID 2>/dev/null && kill -0 $PROXY_PID 2>/dev/null; do
        sleep 2
    done

    if ! kill -0 $ENGINE_PID 2>/dev/null; then
        echo "[entrypoint] Engine died — stopping proxy"
        kill $PROXY_PID 2>/dev/null
        exit 1
    fi

    if ! kill -0 $PROXY_PID 2>/dev/null; then
        echo "[entrypoint] Proxy died — stopping engine"
        kill $ENGINE_PID 2>/dev/null
        exit 1
    fi
}

wait_any
```
@@ -0,0 +1,62 @@

```proto
syntax = "proto3";

package brunix;

service AssistanceEngine {
  // Full response, compatible with existing clients
  rpc AskAgent (AgentRequest) returns (stream AgentResponse);

  // True token-by-token streaming from Ollama
  rpc AskAgentStream (AgentRequest) returns (stream AgentResponse);

  // RAGAS evaluation with Claude as the judge
  rpc EvaluateRAG (EvalRequest) returns (EvalResponse);
}

// ---------------------------------------------------------------------------
// AskAgent / AskAgentStream: same messages, two behaviors
// ---------------------------------------------------------------------------

message AgentRequest {
  string query = 1;
  string session_id = 2;
}

message AgentResponse {
  string text = 1;
  string avap_code = 2;
  bool is_final = 3;
}

// ---------------------------------------------------------------------------
// EvaluateRAG
// ---------------------------------------------------------------------------

message EvalRequest {
  string category = 1;
  int32 limit = 2;
  string index = 3;
}

message EvalResponse {
  string status = 1;
  int32 questions_evaluated = 2;
  float elapsed_seconds = 3;
  string judge_model = 4;
  string index = 5;
  float faithfulness = 6;
  float answer_relevancy = 7;
  float context_recall = 8;
  float context_precision = 9;
  float global_score = 10;
  string verdict = 11;
  repeated QuestionDetail details = 12;
}

message QuestionDetail {
  string id = 1;
  string category = 2;
  string question = 3;
  string answer_preview = 4;
  int32 n_chunks = 5;
}
```
```diff
@@ -1,5 +1,5 @@
 # This file was autogenerated by uv via the following command:
-#    uv export --format requirements-txt --no-hashes --no-dev -o requirements.txt
+#    uv export --format requirements-txt --no-hashes --no-dev -o Docker/requirements.txt
 aiohappyeyeballs==2.6.1
     # via aiohttp
 aiohttp==3.13.3
```
```diff
@@ -12,6 +12,12 @@ anyio==4.12.1
     # via httpx
 attrs==25.4.0
     # via aiohttp
+boto3==1.42.58
+    # via langchain-aws
+botocore==1.42.58
+    # via
+    #   boto3
+    #   s3transfer
 certifi==2026.1.4
     # via
     #   elastic-transport
```
|
|
@ -20,8 +26,15 @@ certifi==2026.1.4
|
||||||
# requests
|
# requests
|
||||||
charset-normalizer==3.4.4
|
charset-normalizer==3.4.4
|
||||||
# via requests
|
# via requests
|
||||||
|
chonkie==1.5.6
|
||||||
|
# via assistance-engine
|
||||||
|
chonkie-core==0.9.2
|
||||||
|
# via chonkie
|
||||||
|
click==8.3.1
|
||||||
|
# via nltk
|
||||||
colorama==0.4.6 ; sys_platform == 'win32'
|
colorama==0.4.6 ; sys_platform == 'win32'
|
||||||
# via
|
# via
|
||||||
|
# click
|
||||||
# loguru
|
# loguru
|
||||||
# tqdm
|
# tqdm
|
||||||
dataclasses-json==0.6.7
|
dataclasses-json==0.6.7
|
||||||
|
|
```diff
@@ -30,104 +43,151 @@ elastic-transport==8.17.1
     # via elasticsearch
 elasticsearch==8.19.3
     # via langchain-elasticsearch
+filelock==3.24.3
+    # via huggingface-hub
 frozenlist==1.8.0
     # via
     #   aiohttp
     #   aiosignal
-greenlet==3.3.1 ; platform_machine == 'AMD64' or platform_machine == 'WIN32' or platform_machine == 'aarch64' or platform_machine == 'amd64' or platform_machine == 'ppc64le' or platform_machine == 'win32' or platform_machine == 'x86_64'
+fsspec==2025.10.0
+    # via huggingface-hub
+greenlet==3.3.2 ; platform_machine == 'AMD64' or platform_machine == 'WIN32' or platform_machine == 'aarch64' or platform_machine == 'amd64' or platform_machine == 'ppc64le' or platform_machine == 'win32' or platform_machine == 'x86_64'
     # via sqlalchemy
-grpcio==1.78.0
+grpcio==1.78.1
     # via
     #   assistance-engine
     #   grpcio-reflection
     #   grpcio-tools
-grpcio-reflection==1.78.0
+grpcio-reflection==1.78.1
     # via assistance-engine
-grpcio-tools==1.78.0
+grpcio-tools==1.78.1
     # via assistance-engine
 h11==0.16.0
     # via httpcore
+hf-xet==1.3.0 ; platform_machine == 'aarch64' or platform_machine == 'amd64' or platform_machine == 'arm64' or platform_machine == 'x86_64'
+    # via huggingface-hub
 httpcore==1.0.9
     # via httpx
 httpx==0.28.1
     # via
     #   langgraph-sdk
     #   langsmith
+    #   ollama
 httpx-sse==0.4.3
     # via langchain-community
+huggingface-hub==0.36.2
+    # via
+    #   langchain-huggingface
+    #   tokenizers
 idna==3.11
     # via
     #   anyio
     #   httpx
     #   requests
     #   yarl
+jinja2==3.1.6
+    # via model2vec
+jmespath==1.1.0
+    # via
+    #   boto3
+    #   botocore
+joblib==1.5.3
+    # via
+    #   model2vec
+    #   nltk
 jsonpatch==1.33
     # via langchain-core
 jsonpointer==3.0.0
     # via jsonpatch
 langchain==1.2.10
     # via assistance-engine
+langchain-aws==1.3.1
+    # via assistance-engine
 langchain-classic==1.0.1
     # via langchain-community
 langchain-community==0.4.1
     # via assistance-engine
-langchain-core==1.2.11
+langchain-core==1.2.15
     # via
     #   langchain
+    #   langchain-aws
     #   langchain-classic
     #   langchain-community
     #   langchain-elasticsearch
+    #   langchain-huggingface
+    #   langchain-ollama
     #   langchain-text-splitters
     #   langgraph
     #   langgraph-checkpoint
     #   langgraph-prebuilt
 langchain-elasticsearch==1.0.0
     # via assistance-engine
-langchain-text-splitters==1.1.0
+langchain-huggingface==1.2.0
+    # via assistance-engine
+langchain-ollama==1.0.1
+    # via assistance-engine
+langchain-text-splitters==1.1.1
     # via langchain-classic
-langgraph==1.0.8
+langgraph==1.0.9
     # via langchain
 langgraph-checkpoint==4.0.0
     # via
     #   langgraph
     #   langgraph-prebuilt
-langgraph-prebuilt==1.0.7
+langgraph-prebuilt==1.0.8
     # via langgraph
-langgraph-sdk==0.3.5
+langgraph-sdk==0.3.8
     # via langgraph
-langsmith==0.7.1
+langsmith==0.7.6
     # via
     #   langchain-classic
     #   langchain-community
     #   langchain-core
 loguru==0.7.3
     # via assistance-engine
+markdown-it-py==4.0.0
+    # via rich
+markupsafe==3.0.3
+    # via jinja2
 marshmallow==3.26.2
     # via dataclasses-json
+mdurl==0.1.2
+    # via markdown-it-py
+model2vec==0.7.0
+    # via chonkie
 multidict==6.7.1
     # via
     #   aiohttp
     #   yarl
 mypy-extensions==1.1.0
     # via typing-inspect
+nltk==3.9.3
+    # via assistance-engine
 numpy==2.4.2
     # via
     #   assistance-engine
```
|
||||||
|
# chonkie
|
||||||
|
# chonkie-core
|
||||||
# elasticsearch
|
# elasticsearch
|
||||||
|
# langchain-aws
|
||||||
# langchain-community
|
# langchain-community
|
||||||
|
# model2vec
|
||||||
# pandas
|
# pandas
|
||||||
|
ollama==0.6.1
|
||||||
|
# via langchain-ollama
|
||||||
orjson==3.11.7
|
orjson==3.11.7
|
||||||
# via
|
# via
|
||||||
# langgraph-sdk
|
# langgraph-sdk
|
||||||
# langsmith
|
# langsmith
|
||||||
ormsgpack==1.12.2
|
ormsgpack==1.12.2
|
||||||
# via langgraph-checkpoint
|
# via langgraph-checkpoint
|
||||||
packaging==26.0
|
packaging==24.2
|
||||||
# via
|
# via
|
||||||
|
# huggingface-hub
|
||||||
# langchain-core
|
# langchain-core
|
||||||
# langsmith
|
# langsmith
|
||||||
# marshmallow
|
# marshmallow
|
||||||
pandas==3.0.0
|
pandas==3.0.1
|
||||||
# via assistance-engine
|
# via assistance-engine
|
||||||
propcache==0.4.1
|
propcache==0.4.1
|
||||||
# via
|
# via
|
||||||
|
|
@@ -140,17 +200,22 @@ protobuf==6.33.5
 pydantic==2.12.5
     # via
     #   langchain
+    #   langchain-aws
     #   langchain-classic
     #   langchain-core
     #   langgraph
     #   langsmith
+    #   ollama
     #   pydantic-settings
 pydantic-core==2.41.5
     # via pydantic
-pydantic-settings==2.12.0
+pydantic-settings==2.13.1
     # via langchain-community
+pygments==2.19.2
+    # via rich
 python-dateutil==2.9.0.post0
     # via
+    #   botocore
     #   elasticsearch
     #   pandas
 python-dotenv==1.2.1
@@ -159,20 +224,34 @@ python-dotenv==1.2.1
     #   pydantic-settings
 pyyaml==6.0.3
     # via
+    #   huggingface-hub
     #   langchain-classic
     #   langchain-community
     #   langchain-core
+rapidfuzz==3.14.3
+    # via assistance-engine
+regex==2026.2.19
+    # via nltk
 requests==2.32.5
     # via
+    #   huggingface-hub
     #   langchain-classic
     #   langchain-community
     #   langsmith
     #   requests-toolbelt
 requests-toolbelt==1.0.0
     # via langsmith
+rich==14.3.3
+    # via model2vec
+s3transfer==0.16.0
+    # via boto3
+safetensors==0.7.0
+    # via model2vec
 setuptools==82.0.0
-    # via grpcio-tools
-simsimd==6.5.12
+    # via
+    #   grpcio-tools
+    #   model2vec
+simsimd==6.5.13
     # via elasticsearch
 six==1.17.0
     # via python-dateutil
@@ -182,16 +261,28 @@ sqlalchemy==2.0.46
     #   langchain-community
 tenacity==9.1.4
     # via
+    #   chonkie
     #   langchain-community
     #   langchain-core
+tokenizers==0.22.2
+    # via
+    #   chonkie
+    #   langchain-huggingface
+    #   model2vec
 tqdm==4.67.3
-    # via assistance-engine
+    # via
+    #   assistance-engine
+    #   chonkie
+    #   huggingface-hub
+    #   model2vec
+    #   nltk
 typing-extensions==4.15.0
     # via
     #   aiosignal
     #   anyio
     #   elasticsearch
     #   grpcio
+    #   huggingface-hub
     #   langchain-core
     #   pydantic
     #   pydantic-core
@@ -208,9 +299,10 @@ tzdata==2025.3 ; sys_platform == 'emscripten' or sys_platform == 'win32'
     # via pandas
 urllib3==2.6.3
     # via
+    #   botocore
     #   elastic-transport
     #   requests
-uuid-utils==0.14.0
+uuid-utils==0.14.1
     # via
     #   langchain-core
     #   langsmith
@@ -224,3 +316,10 @@ yarl==1.22.0
     # via aiohttp
 zstandard==0.25.0
     # via langsmith
+
+ragas
+datasets
+langchain-anthropic
+
+fastapi>=0.111.0
+uvicorn[standard]>=0.29.0
@@ -0,0 +1,230 @@
import os
import time
import json
import logging
from collections import defaultdict
from pathlib import Path
from typing import Optional

from ragas import evaluate as ragas_evaluate
from ragas.metrics import (
    faithfulness,
    answer_relevancy,
    context_recall,
    context_precision,
)
from ragas.llms import LangchainLLMWrapper
from ragas.embeddings import LangchainEmbeddingsWrapper
from datasets import Dataset
from langchain_anthropic import ChatAnthropic

logger = logging.getLogger(__name__)

GOLDEN_DATASET_PATH = Path(__file__).parent / "golden_dataset.json"
CLAUDE_MODEL = os.getenv("ANTHROPIC_MODEL", "claude-sonnet-4-20250514")
ANTHROPIC_API_KEY = os.getenv("ANTHROPIC_API_KEY", "")
K_RETRIEVE = 5

ANTHROPIC_AVAILABLE = True

from elasticsearch import Elasticsearch
from langchain_core.messages import SystemMessage, HumanMessage


def retrieve_context(es_client, embeddings, question, index, k=K_RETRIEVE):
    query_vector = None
    try:
        query_vector = embeddings.embed_query(question)
    except Exception as e:
        logger.warning(f"[eval] embed_query failed: {e}")

    bm25_hits = []
    try:
        resp = es_client.search(
            index=index,
            body={
                "size": k,
                "query": {
                    "multi_match": {
                        "query": question,
                        "fields": ["content^2", "text^2"],
                        "type": "best_fields",
                        "fuzziness": "AUTO",
                    }
                },
                "_source": {"excludes": ["embedding"]},
            },
        )
        bm25_hits = resp["hits"]["hits"]
    except Exception as e:
        logger.warning(f"[eval] BM25 search failed: {e}")

    knn_hits = []
    if query_vector:
        try:
            resp = es_client.search(
                index=index,
                body={
                    "size": k,
                    "knn": {
                        "field": "embedding",
                        "query_vector": query_vector,
                        "k": k,
                        "num_candidates": k * 5,
                    },
                    "_source": {"excludes": ["embedding"]},
                },
            )
            knn_hits = resp["hits"]["hits"]
        except Exception as e:
            logger.warning(f"[eval] kNN search failed: {e}")

    # Reciprocal Rank Fusion over the BM25 and kNN result lists.
    rrf_scores: dict[str, float] = defaultdict(float)
    hit_by_id: dict[str, dict] = {}

    for rank, hit in enumerate(bm25_hits):
        doc_id = hit["_id"]
        rrf_scores[doc_id] += 1.0 / (rank + 60)
        hit_by_id[doc_id] = hit

    for rank, hit in enumerate(knn_hits):
        doc_id = hit["_id"]
        rrf_scores[doc_id] += 1.0 / (rank + 60)
        if doc_id not in hit_by_id:
            hit_by_id[doc_id] = hit

    ranked = sorted(rrf_scores.items(), key=lambda x: x[1], reverse=True)[:k]

    return [
        hit_by_id[doc_id]["_source"].get("content")
        or hit_by_id[doc_id]["_source"].get("text", "")
        for doc_id, _ in ranked
        if (
            hit_by_id[doc_id]["_source"].get("content")
            or hit_by_id[doc_id]["_source"].get("text", "")
        ).strip()
    ]


def generate_answer(llm, question: str, contexts: list[str]) -> str:
    try:
        from prompts import GENERATE_PROMPT
        context_text = "\n\n".join(
            f"[{i+1}] {ctx}" for i, ctx in enumerate(contexts)
        )
        prompt = SystemMessage(
            content=GENERATE_PROMPT.content.format(context=context_text)
        )
        resp = llm.invoke([prompt, HumanMessage(content=question)])
        return resp.content.strip()
    except Exception as e:
        logger.warning(f"[eval] generate_answer failed: {e}")
        return ""


def run_evaluation(es_client, llm, embeddings, index_name,
                   category: Optional[str] = None, limit: Optional[int] = None):
    if not ANTHROPIC_AVAILABLE:
        return {"error": "langchain-anthropic not installed. pip install langchain-anthropic"}
    if not ANTHROPIC_API_KEY:
        return {"error": "ANTHROPIC_API_KEY not configured in .env"}
    if not GOLDEN_DATASET_PATH.exists():
        return {"error": f"Golden dataset not found at {GOLDEN_DATASET_PATH}"}

    questions = json.loads(GOLDEN_DATASET_PATH.read_text(encoding="utf-8"))
    if category:
        questions = [q for q in questions if q.get("category") == category]
    if limit:
        questions = questions[:limit]
    if not questions:
        return {"error": "no questions match these filters"}

    logger.info(f"[eval] running {len(questions)} questions, index={index_name}")

    claude_judge = ChatAnthropic(
        model=CLAUDE_MODEL,
        api_key=ANTHROPIC_API_KEY,
        temperature=0,
        max_tokens=2048,
    )

    rows = {"question": [], "answer": [], "contexts": [], "ground_truth": []}
    details = []
    t_start = time.time()

    for item in questions:
        q_id = item["id"]
        question = item["question"]
        gt = item["ground_truth"]

        logger.info(f"[eval] {q_id}: {question[:60]}")

        contexts = retrieve_context(es_client, embeddings, question, index_name)
        if not contexts:
            logger.warning(f"[eval] No context for {q_id} -- skipping")
            continue

        answer = generate_answer(llm, question, contexts)
        if not answer:
            logger.warning(f"[eval] No answer for {q_id} -- skipping")
            continue

        rows["question"].append(question)
        rows["answer"].append(answer)
        rows["contexts"].append(contexts)
        rows["ground_truth"].append(gt)

        details.append({
            "id": q_id,
            "category": item.get("category", ""),
            "question": question,
            "answer_preview": answer[:300],
            "n_chunks": len(contexts),
        })

    if not rows["question"]:
        return {"error": "no samples generated"}

    dataset = Dataset.from_dict(rows)
    ragas_llm = LangchainLLMWrapper(claude_judge)
    ragas_emb = LangchainEmbeddingsWrapper(embeddings)

    metrics = [faithfulness, answer_relevancy, context_recall, context_precision]
    for metric in metrics:
        metric.llm = ragas_llm
        if hasattr(metric, "embeddings"):
            metric.embeddings = ragas_emb

    logger.info("[eval] judging with Claude...")
    result = ragas_evaluate(dataset, metrics=metrics)

    elapsed = time.time() - t_start

    scores = {
        "faithfulness": round(float(result.get("faithfulness", 0)), 4),
        "answer_relevancy": round(float(result.get("answer_relevancy", 0)), 4),
        "context_recall": round(float(result.get("context_recall", 0)), 4),
        "context_precision": round(float(result.get("context_precision", 0)), 4),
    }

    valid_scores = [v for v in scores.values() if v > 0]
    global_score = round(sum(valid_scores) / len(valid_scores), 4) if valid_scores else 0.0

    verdict = (
        "EXCELLENT" if global_score >= 0.8 else
        "ACCEPTABLE" if global_score >= 0.6 else
        "INSUFFICIENT"
    )

    logger.info(f"[eval] finished -- global={global_score} verdict={verdict} "
                f"elapsed={elapsed:.0f}s")

    return {
        "status": "ok",
        "questions_evaluated": len(rows["question"]),
        "elapsed_seconds": round(elapsed, 1),
        "judge_model": CLAUDE_MODEL,
        "index": index_name,
        "category_filter": category or "all",
        "scores": scores,
        "global_score": global_score,
        "verdict": verdict,
        "details": details,
    }
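Both the evaluation script and the engine fuse BM25 and kNN hits with Reciprocal Rank Fusion, scoring each document as `1.0 / (rank + 60)` per result list and summing across lists. A minimal, self-contained sketch of that fusion step (toy document ids, illustrative only):

```python
from collections import defaultdict


def rrf_fuse(rankings, k=60, top=5):
    # Reciprocal Rank Fusion: each list contributes 1/(rank + k) per doc,
    # so a doc ranked highly in several lists accumulates the largest score.
    scores = defaultdict(float)
    for ranked_ids in rankings:
        for rank, doc_id in enumerate(ranked_ids):
            scores[doc_id] += 1.0 / (rank + k)
    return [doc_id for doc_id, _ in
            sorted(scores.items(), key=lambda x: x[1], reverse=True)[:top]]


bm25 = ["a", "b", "c"]   # BM25 ranking (best first)
knn = ["b", "d", "a"]    # vector ranking (best first)
print(rrf_fuse([bm25, knn]))  # -> ['b', 'a', 'd', 'c']
```

Note that "b" wins despite being top of only one list, because it appears near the top of both, which is exactly the behavior the hybrid search relies on.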
@@ -0,0 +1,391 @@
import logging
from collections import defaultdict

from elasticsearch import Elasticsearch
from langchain_core.documents import Document
from langchain_core.messages import AIMessage, SystemMessage, HumanMessage, BaseMessage
from langgraph.graph import END, StateGraph
from langgraph.graph.state import CompiledStateGraph

from prompts import (
    CLASSIFY_PROMPT_TEMPLATE,
    CODE_GENERATION_PROMPT,
    CONVERSATIONAL_PROMPT,
    GENERATE_PROMPT,
    REFORMULATE_PROMPT,
)

from state import AgentState

logger = logging.getLogger(__name__)

# In-memory conversation history, keyed by session id.
session_store: dict[str, list] = defaultdict(list)


def format_context(docs):
    chunks = []
    for i, doc in enumerate(docs, 1):
        meta = doc.metadata or {}
        chunk_id = meta.get("chunk_id", meta.get("id", f"chunk-{i}"))
        source = meta.get("source_file", meta.get("source", "unknown"))
        doc_type = meta.get("doc_type", "")
        block_type = meta.get("block_type", "")
        section = meta.get("section", "")

        text = (doc.page_content or "").strip()
        if not text:
            text = meta.get("content") or meta.get("text") or ""

        header_parts = [f"[{i}]", f"id={chunk_id}"]
        if doc_type:
            header_parts.append(f"type={doc_type}")
        if block_type:
            header_parts.append(f"block={block_type}")
        if section:
            header_parts.append(f"section={section}")
        header_parts.append(f"source={source}")

        if doc_type in ("code", "code_example", "bnf") or \
                block_type in ("function", "if", "startLoop", "try"):
            header_parts.append("[AVAP CODE]")

        chunks.append(" ".join(header_parts) + "\n" + text)

    return "\n\n".join(chunks)


def format_history_for_classify(messages):
    lines = []
    for msg in messages[-6:]:
        if isinstance(msg, HumanMessage):
            lines.append(f"User: {msg.content}")
        elif isinstance(msg, AIMessage):
            lines.append(f"Assistant: {msg.content[:300]}")
        elif isinstance(msg, dict):
            role = msg.get("role", "user")
            content = msg.get("content", "")[:300]
            lines.append(f"{role.capitalize()}: {content}")
    return "\n".join(lines) if lines else "(no history)"


def hybrid_search_native(es_client, embeddings, query, index_name, k=8):
    query_vector = None
    try:
        query_vector = embeddings.embed_query(query)
    except Exception as e:
        logger.warning(f"[hybrid] embed_query failed: {e}")

    bm25_hits = []
    try:
        resp = es_client.search(
            index=index_name,
            body={
                "size": k,
                "query": {
                    "multi_match": {
                        "query": query,
                        "fields": ["content^2", "text^2"],
                        "type": "best_fields",
                        "fuzziness": "AUTO",
                    }
                },
                "_source": {"excludes": ["embedding"]},
            },
        )
        bm25_hits = resp["hits"]["hits"]
        logger.info(f"[hybrid] BM25 -> {len(bm25_hits)} hits")
    except Exception as e:
        logger.warning(f"[hybrid] BM25 search failed: {e}")

    knn_hits = []
    if query_vector:
        try:
            resp = es_client.search(
                index=index_name,
                body={
                    "size": k,
                    "knn": {
                        "field": "embedding",
                        "query_vector": query_vector,
                        "k": k,
                        "num_candidates": k * 5,
                    },
                    "_source": {"excludes": ["embedding"]},
                },
            )
            knn_hits = resp["hits"]["hits"]
            logger.info(f"[hybrid] kNN -> {len(knn_hits)} hits")
        except Exception as e:
            logger.warning(f"[hybrid] kNN search failed: {e}")

    # Reciprocal Rank Fusion (RRF) over the two result lists.
    rrf_scores: dict[str, float] = defaultdict(float)
    hit_by_id: dict[str, dict] = {}

    for rank, hit in enumerate(bm25_hits):
        doc_id = hit["_id"]
        rrf_scores[doc_id] += 1.0 / (rank + 60)
        hit_by_id[doc_id] = hit

    for rank, hit in enumerate(knn_hits):
        doc_id = hit["_id"]
        rrf_scores[doc_id] += 1.0 / (rank + 60)
        if doc_id not in hit_by_id:
            hit_by_id[doc_id] = hit

    ranked = sorted(rrf_scores.items(), key=lambda x: x[1], reverse=True)[:k]

    docs = []
    for doc_id, score in ranked:
        src = hit_by_id[doc_id]["_source"]
        text = src.get("content") or src.get("text") or ""
        meta = {key: v for key, v in src.items()
                if key not in ("content", "text", "embedding")}
        meta["id"] = doc_id
        meta["rrf_score"] = score
        docs.append(Document(page_content=text, metadata=meta))

    logger.info(f"[hybrid] RRF -> {len(docs)} final docs")
    return docs


def build_graph(llm, embeddings, es_client, index_name):

    def _persist(state: AgentState, response: BaseMessage):
        session_id = state.get("session_id", "")
        if session_id:
            session_store[session_id] = list(state["messages"]) + [response]

    def classify(state):
        messages = state["messages"]
        user_msg = messages[-1]
        question = getattr(
            user_msg,
            "content",
            user_msg.get("content", "") if isinstance(user_msg, dict) else "",
        )
        history_msgs = messages[:-1]

        if not history_msgs:
            prompt_content = (
                CLASSIFY_PROMPT_TEMPLATE
                .replace("{history}", "(no history)")
                .replace("{message}", question)
            )
            resp = llm.invoke([SystemMessage(content=prompt_content)])
            raw = resp.content.strip().upper()
            query_type = _parse_query_type(raw)
            logger.info(f"[classify] no history raw='{raw}' -> {query_type}")
            return {"query_type": query_type}

        history_text = format_history_for_classify(history_msgs)
        prompt_content = (
            CLASSIFY_PROMPT_TEMPLATE
            .replace("{history}", history_text)
            .replace("{message}", question)
        )
        resp = llm.invoke([SystemMessage(content=prompt_content)])
        raw = resp.content.strip().upper()
        query_type = _parse_query_type(raw)
        logger.info(f"[classify] raw='{raw}' -> {query_type}")
        return {"query_type": query_type}

    def _parse_query_type(raw: str) -> str:
        if raw.startswith("CODE_GENERATION") or "CODE" in raw:
            return "CODE_GENERATION"
        if raw.startswith("CONVERSATIONAL"):
            return "CONVERSATIONAL"
        return "RETRIEVAL"

    def reformulate(state: AgentState) -> AgentState:
        user_msg = state["messages"][-1]
        resp = llm.invoke([REFORMULATE_PROMPT, user_msg])
        reformulated = resp.content.strip()
        logger.info(f"[reformulate] -> '{reformulated}'")
        return {"reformulated_query": reformulated}

    def retrieve(state: AgentState) -> AgentState:
        query = state["reformulated_query"]
        docs = hybrid_search_native(
            es_client=es_client,
            embeddings=embeddings,
            query=query,
            index_name=index_name,
            k=8,
        )
        context = format_context(docs)
        logger.info(f"[retrieve] {len(docs)} docs, context len={len(context)}")
        return {"context": context}

    def generate(state):
        prompt = SystemMessage(
            content=GENERATE_PROMPT.content.format(context=state["context"])
        )
        resp = llm.invoke([prompt] + state["messages"])
        logger.info(f"[generate] {len(resp.content)} chars")
        _persist(state, resp)
        return {"messages": [resp]}

    def generate_code(state):
        prompt = SystemMessage(
            content=CODE_GENERATION_PROMPT.content.format(context=state["context"])
        )
        resp = llm.invoke([prompt] + state["messages"])
        logger.info(f"[generate_code] {len(resp.content)} chars")
        _persist(state, resp)
        return {"messages": [resp]}

    def respond_conversational(state):
        resp = llm.invoke([CONVERSATIONAL_PROMPT] + state["messages"])
        logger.info("[conversational] answered from conversation")
        _persist(state, resp)
        return {"messages": [resp]}

    def route_by_type(state):
        return state.get("query_type", "RETRIEVAL")

    def route_after_retrieve(state):
        qt = state.get("query_type", "RETRIEVAL")
        return "generate_code" if qt == "CODE_GENERATION" else "generate"

    graph_builder = StateGraph(AgentState)

    graph_builder.add_node("classify", classify)
    graph_builder.add_node("reformulate", reformulate)
    graph_builder.add_node("retrieve", retrieve)
    graph_builder.add_node("generate", generate)
    graph_builder.add_node("generate_code", generate_code)
    graph_builder.add_node("respond_conversational", respond_conversational)

    graph_builder.set_entry_point("classify")

    graph_builder.add_conditional_edges(
        "classify",
        route_by_type,
        {
            "RETRIEVAL": "reformulate",
            "CODE_GENERATION": "reformulate",
            "CONVERSATIONAL": "respond_conversational",
        },
    )

    graph_builder.add_edge("reformulate", "retrieve")

    graph_builder.add_conditional_edges(
        "retrieve",
        route_after_retrieve,
        {
            "generate": "generate",
            "generate_code": "generate_code",
        },
    )

    graph_builder.add_edge("generate", END)
    graph_builder.add_edge("generate_code", END)
    graph_builder.add_edge("respond_conversational", END)

    return graph_builder.compile()


def build_prepare_graph(llm, embeddings, es_client, index_name):

    def classify(state):
        messages = state["messages"]
        user_msg = messages[-1]
        question = getattr(
            user_msg,
            "content",
            user_msg.get("content", "") if isinstance(user_msg, dict) else "",
        )
        history_msgs = messages[:-1]

        if not history_msgs:
            prompt_content = (
                CLASSIFY_PROMPT_TEMPLATE
                .replace("{history}", "(no history)")
                .replace("{message}", question)
            )
            resp = llm.invoke([SystemMessage(content=prompt_content)])
            raw = resp.content.strip().upper()
            query_type = _parse_query_type(raw)
            logger.info(f"[prepare/classify] no history raw='{raw}' -> {query_type}")
            return {"query_type": query_type}

        history_text = format_history_for_classify(history_msgs)
        prompt_content = (
            CLASSIFY_PROMPT_TEMPLATE
            .replace("{history}", history_text)
            .replace("{message}", question)
        )
        resp = llm.invoke([SystemMessage(content=prompt_content)])
        raw = resp.content.strip().upper()
        query_type = _parse_query_type(raw)
        logger.info(f"[prepare/classify] raw='{raw}' -> {query_type}")
        return {"query_type": query_type}

    def _parse_query_type(raw: str) -> str:
        if raw.startswith("CODE_GENERATION") or "CODE" in raw:
            return "CODE_GENERATION"
        if raw.startswith("CONVERSATIONAL"):
            return "CONVERSATIONAL"
        return "RETRIEVAL"

    def reformulate(state: AgentState) -> AgentState:
        user_msg = state["messages"][-1]
        resp = llm.invoke([REFORMULATE_PROMPT, user_msg])
        reformulated = resp.content.strip()
        logger.info(f"[prepare/reformulate] -> '{reformulated}'")
        return {"reformulated_query": reformulated}

    def retrieve(state: AgentState) -> AgentState:
        query = state["reformulated_query"]
        docs = hybrid_search_native(
            es_client=es_client,
            embeddings=embeddings,
            query=query,
            index_name=index_name,
            k=8,
        )
        context = format_context(docs)
        logger.info(f"[prepare/retrieve] {len(docs)} docs, context len={len(context)}")
        return {"context": context}

    def skip_retrieve(state: AgentState) -> AgentState:
        return {"context": ""}

    def route_by_type(state):
        return state.get("query_type", "RETRIEVAL")

    graph_builder = StateGraph(AgentState)

    graph_builder.add_node("classify", classify)
    graph_builder.add_node("reformulate", reformulate)
    graph_builder.add_node("retrieve", retrieve)
    graph_builder.add_node("skip_retrieve", skip_retrieve)

    graph_builder.set_entry_point("classify")

    graph_builder.add_conditional_edges(
        "classify",
        route_by_type,
        {
            "RETRIEVAL": "reformulate",
            "CODE_GENERATION": "reformulate",
            "CONVERSATIONAL": "skip_retrieve",
        },
    )

    graph_builder.add_edge("reformulate", "retrieve")
    graph_builder.add_edge("retrieve", END)
    graph_builder.add_edge("skip_retrieve", END)

    return graph_builder.compile()


def build_final_messages(state: AgentState) -> list:
    query_type = state.get("query_type", "RETRIEVAL")
    context = state.get("context", "")
    messages = state.get("messages", [])

    if query_type == "CONVERSATIONAL":
        return [CONVERSATIONAL_PROMPT] + messages

    if query_type == "CODE_GENERATION":
        prompt = SystemMessage(
            content=CODE_GENERATION_PROMPT.content.format(context=context)
        )
    else:
        prompt = SystemMessage(
            content=GENERATE_PROMPT.content.format(context=context)
        )

    return [prompt] + messages
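The prompt selection in `build_final_messages` reduces to a small dispatch on `query_type`, with RETRIEVAL as the fallback. A standalone sketch of that rule, where plain strings stand in for the `SystemMessage` prompts (the stand-in values are hypothetical, for illustration only):

```python
# Plain-string stand-ins for the prompt objects in prompts.py
# (hypothetical values, not the real prompts).
PROMPTS = {
    "CONVERSATIONAL": "conversational-prompt",
    "CODE_GENERATION": "code-generation-prompt",
    "RETRIEVAL": "generate-prompt",
}


def select_prompt(query_type: str) -> str:
    # Anything unrecognized falls back to RETRIEVAL, mirroring the
    # state.get("query_type", "RETRIEVAL") default used by the graph.
    return PROMPTS.get(query_type, PROMPTS["RETRIEVAL"])


print(select_prompt("CODE_GENERATION"))  # -> code-generation-prompt
print(select_prompt("SOMETHING_ELSE"))   # -> generate-prompt
```

Keeping the routing table in one dict makes adding a new query type a one-line change, which is the same property the graph gets from its conditional-edge mappings.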
@ -0,0 +1,420 @@
import json
import os
import time
import uuid
import logging
import asyncio
import concurrent.futures
from typing import AsyncIterator, Optional, Any, Literal, Union

import grpc
import brunix_pb2
import brunix_pb2_grpc

from fastapi import FastAPI, HTTPException
from fastapi.responses import JSONResponse, StreamingResponse
from pydantic import BaseModel

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("openai-proxy")

_thread_pool = concurrent.futures.ThreadPoolExecutor(
    max_workers=int(os.getenv("PROXY_THREAD_WORKERS", "20"))
)

GRPC_TARGET = os.getenv("BRUNIX_GRPC_TARGET", "localhost:50051")
PROXY_MODEL = os.getenv("PROXY_MODEL_ID", "brunix")

_channel: Optional[grpc.Channel] = None
_stub: Optional[brunix_pb2_grpc.AssistanceEngineStub] = None


def get_stub() -> brunix_pb2_grpc.AssistanceEngineStub:
    """Lazily create and cache a shared insecure channel and stub."""
    global _channel, _stub
    if _stub is None:
        _channel = grpc.insecure_channel(GRPC_TARGET)
        _stub = brunix_pb2_grpc.AssistanceEngineStub(_channel)
        logger.info(f"[gRPC] connected to {GRPC_TARGET}")
    return _stub


app = FastAPI(
    title="Brunix OpenAI-Compatible Proxy",
    version="2.0.0",
    description="stream:false → AskAgent | stream:true → AskAgentStream",
)


class ChatMessage(BaseModel):
    role: Literal["system", "user", "assistant", "function"] = "user"
    content: str = ""
    name: Optional[str] = None


class ChatCompletionRequest(BaseModel):
    model: str = PROXY_MODEL
    messages: list[ChatMessage]
    stream: bool = False
    temperature: Optional[float] = None
    max_tokens: Optional[int] = None
    session_id: Optional[str] = None  # Brunix extension
    top_p: Optional[float] = None
    n: Optional[int] = 1
    stop: Optional[Any] = None
    presence_penalty: Optional[float] = None
    frequency_penalty: Optional[float] = None
    user: Optional[str] = None


class CompletionRequest(BaseModel):
    model: str = PROXY_MODEL
    prompt: Union[str, list[str]] = ""
    stream: bool = False
    temperature: Optional[float] = None
    max_tokens: Optional[int] = None
    session_id: Optional[str] = None
    suffix: Optional[str] = None
    top_p: Optional[float] = None
    n: Optional[int] = 1
    stop: Optional[Any] = None
    user: Optional[str] = None


# Ollama schemas
class OllamaChatMessage(BaseModel):
    role: str = "user"
    content: str = ""


class OllamaChatRequest(BaseModel):
    model: str = PROXY_MODEL
    messages: list[OllamaChatMessage]
    stream: bool = True  # Ollama streams by default
    session_id: Optional[str] = None


class OllamaGenerateRequest(BaseModel):
    model: str = PROXY_MODEL
    prompt: str = ""
    stream: bool = True
    session_id: Optional[str] = None


def _ts() -> int:
    return int(time.time())


def _chat_response(content: str, req_id: str) -> dict:
    return {
        "id": req_id, "object": "chat.completion", "created": _ts(),
        "model": PROXY_MODEL,
        "choices": [{"index": 0, "message": {"role": "assistant", "content": content}, "finish_reason": "stop"}],
        "usage": {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0},
    }


def _completion_response(text: str, req_id: str) -> dict:
    return {
        "id": req_id, "object": "text_completion", "created": _ts(),
        "model": PROXY_MODEL,
        "choices": [{"text": text, "index": 0, "logprobs": None, "finish_reason": "stop"}],
        "usage": {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0},
    }


def _chat_chunk(delta: str, req_id: str, finish: Optional[str] = None) -> dict:
    return {
        "id": req_id, "object": "chat.completion.chunk", "created": _ts(),
        "model": PROXY_MODEL,
        "choices": [{"index": 0,
                     "delta": {"role": "assistant", "content": delta} if delta else {},
                     "finish_reason": finish}],
    }


def _completion_chunk(text: str, req_id: str, finish: Optional[str] = None) -> dict:
    return {
        "id": req_id, "object": "text_completion", "created": _ts(),
        "model": PROXY_MODEL,
        "choices": [{"text": text, "index": 0, "logprobs": None, "finish_reason": finish}],
    }


def _sse(data: dict) -> str:
    return f"data: {json.dumps(data)}\n\n"


def _sse_done() -> str:
    return "data: [DONE]\n\n"


def _query_from_messages(messages: list[ChatMessage]) -> str:
    for m in reversed(messages):
        if m.role == "user":
            return m.content
    return ""


async def _invoke_blocking(query: str, session_id: str) -> str:
    loop = asyncio.get_running_loop()

    def _call():
        stub = get_stub()
        req = brunix_pb2.AgentRequest(query=query, session_id=session_id)
        parts = []
        for resp in stub.AskAgent(req):
            if resp.text:
                parts.append(resp.text)
        return "".join(parts)

    return await loop.run_in_executor(_thread_pool, _call)


async def _iter_stream(query: str, session_id: str) -> AsyncIterator[brunix_pb2.AgentResponse]:
    loop = asyncio.get_running_loop()
    queue: asyncio.Queue = asyncio.Queue()

    def _producer():
        try:
            stub = get_stub()
            req = brunix_pb2.AgentRequest(query=query, session_id=session_id)
            for resp in stub.AskAgentStream(req):  # ← AskAgentStream
                asyncio.run_coroutine_threadsafe(queue.put(resp), loop).result()
        except Exception as e:
            asyncio.run_coroutine_threadsafe(queue.put(e), loop).result()
        finally:
            asyncio.run_coroutine_threadsafe(queue.put(None), loop).result()  # sentinel

    _thread_pool.submit(_producer)

    while True:
        item = await queue.get()
        if item is None:
            break
        if isinstance(item, Exception):
            raise item
        yield item


async def _stream_chat(query: str, session_id: str, req_id: str) -> AsyncIterator[str]:
    try:
        async for resp in _iter_stream(query, session_id):
            if resp.is_final:
                yield _sse(_chat_chunk("", req_id, finish="stop"))
                break
            if resp.text:
                yield _sse(_chat_chunk(resp.text, req_id))
    except Exception as e:
        logger.error(f"[stream_chat] error: {e}")
        yield _sse(_chat_chunk(f"[Error: {e}]", req_id, finish="stop"))

    yield _sse_done()


async def _stream_completion(query: str, session_id: str, req_id: str) -> AsyncIterator[str]:
    try:
        async for resp in _iter_stream(query, session_id):
            if resp.is_final:
                yield _sse(_completion_chunk("", req_id, finish="stop"))
                break
            if resp.text:
                yield _sse(_completion_chunk(resp.text, req_id))
    except Exception as e:
        logger.error(f"[stream_completion] error: {e}")
        yield _sse(_completion_chunk(f"[Error: {e}]", req_id, finish="stop"))

    yield _sse_done()


def _ollama_chat_chunk(token: str, done: bool) -> str:
    return json.dumps({
        "model": PROXY_MODEL,
        "created_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "message": {"role": "assistant", "content": token},
        "done": done,
    }) + "\n"


def _ollama_generate_chunk(token: str, done: bool) -> str:
    return json.dumps({
        "model": PROXY_MODEL,
        "created_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "response": token,
        "done": done,
    }) + "\n"


async def _stream_ollama_chat(query: str, session_id: str) -> AsyncIterator[str]:
    try:
        async for resp in _iter_stream(query, session_id):
            if resp.is_final:
                yield _ollama_chat_chunk("", done=True)
                break
            if resp.text:
                yield _ollama_chat_chunk(resp.text, done=False)
    except Exception as e:
        logger.error(f"[ollama_chat] error: {e}")
        yield _ollama_chat_chunk(f"[Error: {e}]", done=True)


async def _stream_ollama_generate(query: str, session_id: str) -> AsyncIterator[str]:
    try:
        async for resp in _iter_stream(query, session_id):
            if resp.is_final:
                yield _ollama_generate_chunk("", done=True)
                break
            if resp.text:
                yield _ollama_generate_chunk(resp.text, done=False)
    except Exception as e:
        logger.error(f"[ollama_generate] error: {e}")
        yield _ollama_generate_chunk(f"[Error: {e}]", done=True)


@app.get("/v1/models")
async def list_models():
    return {
        "object": "list",
        "data": [{
            "id": PROXY_MODEL, "object": "model", "created": 1700000000,
            "owned_by": "brunix", "permission": [], "root": PROXY_MODEL, "parent": None,
        }],
    }


@app.post("/v1/chat/completions")
async def chat_completions(req: ChatCompletionRequest):
    query = _query_from_messages(req.messages)
    session_id = req.session_id or req.user or "default"
    req_id = f"chatcmpl-{uuid.uuid4().hex}"

    logger.info(f"[chat] session={session_id} stream={req.stream} query='{query[:80]}'")

    if not query:
        raise HTTPException(status_code=400, detail="No user message found in messages.")

    if req.stream:
        return StreamingResponse(
            _stream_chat(query, session_id, req_id),
            media_type="text/event-stream",
            headers={"Cache-Control": "no-cache", "X-Accel-Buffering": "no"},
        )

    try:
        text = await _invoke_blocking(query, session_id)
    except grpc.RpcError as e:
        raise HTTPException(status_code=502, detail=f"gRPC error: {e.details()}")

    return JSONResponse(_chat_response(text, req_id))


@app.post("/v1/completions")
async def completions(req: CompletionRequest):
    query = req.prompt if isinstance(req.prompt, str) else " ".join(req.prompt)
    session_id = req.session_id or req.user or "default"
    req_id = f"cmpl-{uuid.uuid4().hex}"

    logger.info(f"[completion] session={session_id} stream={req.stream} prompt='{query[:80]}'")

    if not query:
        raise HTTPException(status_code=400, detail="prompt is required.")

    if req.stream:
        return StreamingResponse(
            _stream_completion(query, session_id, req_id),
            media_type="text/event-stream",
            headers={"Cache-Control": "no-cache", "X-Accel-Buffering": "no"},
        )

    try:
        text = await _invoke_blocking(query, session_id)
    except grpc.RpcError as e:
        raise HTTPException(status_code=502, detail=f"gRPC error: {e.details()}")

    return JSONResponse(_completion_response(text, req_id))


@app.get("/health")
async def health():
    return {"status": "ok", "grpc_target": GRPC_TARGET}


@app.get("/api/tags")
async def ollama_tags():
    return {
        "models": [{
            "name": PROXY_MODEL,
            "model": PROXY_MODEL,
            "modified_at": "2024-01-01T00:00:00Z",
            "size": 0,
            "digest": "brunix",
            "details": {
                "format": "gguf",
                "family": "brunix",
                "parameter_size": "unknown",
                "quantization_level": "unknown",
            },
        }]
    }


@app.post("/api/chat")
async def ollama_chat(req: OllamaChatRequest):
    query = next((m.content for m in reversed(req.messages) if m.role == "user"), "")
    session_id = req.session_id or "default"

    logger.info(f"[ollama/chat] session={session_id} stream={req.stream} query='{query[:80]}'")

    if not query:
        raise HTTPException(status_code=400, detail="No user message found.")

    if req.stream:
        return StreamingResponse(
            _stream_ollama_chat(query, session_id),
            media_type="application/x-ndjson",
            headers={"Cache-Control": "no-cache", "X-Accel-Buffering": "no"},
        )

    try:
        text = await _invoke_blocking(query, session_id)
    except grpc.RpcError as e:
        raise HTTPException(status_code=502, detail=f"gRPC error: {e.details()}")

    return JSONResponse({
        "model": PROXY_MODEL,
        "created_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "message": {"role": "assistant", "content": text},
        "done": True,
    })


@app.post("/api/generate")
async def ollama_generate(req: OllamaGenerateRequest):
    session_id = req.session_id or "default"

    logger.info(f"[ollama/generate] session={session_id} stream={req.stream} prompt='{req.prompt[:80]}'")

    if not req.prompt:
        raise HTTPException(status_code=400, detail="prompt is required.")

    if req.stream:
        return StreamingResponse(
            _stream_ollama_generate(req.prompt, session_id),
            media_type="application/x-ndjson",
            headers={"Cache-Control": "no-cache", "X-Accel-Buffering": "no"},
        )

    try:
        text = await _invoke_blocking(req.prompt, session_id)
    except grpc.RpcError as e:
        raise HTTPException(status_code=502, detail=f"gRPC error: {e.details()}")

    return JSONResponse({
        "model": PROXY_MODEL,
        "created_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "response": text,
        "done": True,
    })
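The trickiest part of the proxy is `_iter_stream`: a worker thread consumes the blocking gRPC iterator and forwards each item into an `asyncio.Queue` with `run_coroutine_threadsafe`, using `None` as the end-of-stream sentinel. A self-contained sketch of the same pattern, with a plain `threading.Thread` instead of the shared executor and any blocking iterator in place of the gRPC stub:

```python
import asyncio
import threading


def bridge(blocking_iter):
    """Expose a blocking iterator as an async generator (sketch of _iter_stream)."""
    async def agen():
        loop = asyncio.get_running_loop()
        queue: asyncio.Queue = asyncio.Queue()

        def producer():
            try:
                for item in blocking_iter:
                    # hop from the worker thread back onto the event loop
                    asyncio.run_coroutine_threadsafe(queue.put(item), loop).result()
            finally:
                # None marks end-of-stream for the consumer
                asyncio.run_coroutine_threadsafe(queue.put(None), loop).result()

        threading.Thread(target=producer, daemon=True).start()
        while True:
            item = await queue.get()
            if item is None:
                break
            yield item

    return agen()
```

The real code additionally forwards exceptions through the queue and re-raises them on the consumer side.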
@ -0,0 +1,250 @@
from langchain_core.messages import SystemMessage

CLASSIFY_PROMPT_TEMPLATE = (
    "<role>\n"
    "You are a query classifier for an AVAP language assistant. "
    "Your only job is to classify the user message into one of three categories.\n"
    "</role>\n\n"

    "<categories>\n"
    "RETRIEVAL — the user is asking about AVAP concepts, documentation, syntax rules, "
    "or how something works. They want an explanation, not code.\n"
    "Examples: 'What is addVar?', 'How does registerEndpoint work?', "
    "'What is the difference between if() modes?'\n\n"

    "CODE_GENERATION — the user is asking to generate, write, create, build, or show "
    "an example of an AVAP script, function, API, or code snippet. "
    "They want working code as output.\n"
    "Examples: 'Write an API that returns hello world', "
    "'Generate a function that queries the DB', "
    "'Show me how to create an endpoint', "
    "'dame un ejemplo de codigo', 'escribeme un script', "
    "'dime como seria un API', 'genera un API', 'como haria'\n\n"

    "CONVERSATIONAL — the user is following up on the previous answer. "
    "They want a reformulation, summary, or elaboration of what was already said.\n"
    "Examples: 'can you explain that?', 'en menos palabras', "
    "'describe it in your own words', 'what did you mean?'\n"
    "</categories>\n\n"

    "<output_rule>\n"
    "Your entire response must be exactly one word: "
    "RETRIEVAL, CODE_GENERATION, or CONVERSATIONAL. Nothing else.\n"
    "</output_rule>\n\n"

    "<conversation_history>\n"
    "{history}\n"
    "</conversation_history>\n\n"

    "<user_message>{message}</user_message>"
)

REFORMULATE_PROMPT = SystemMessage(
    content=(
        "<role>\n"
        "You are a deterministic query rewriter whose sole purpose is to prepare "
        "user questions for vector similarity retrieval against an AVAP language "
        "knowledge base. You do not answer questions. You only transform phrasing "
        "into keyword queries that will find the right AVAP documentation chunks.\n"
        "</role>\n\n"

        "<task>\n"
        "Rewrite the user message into a compact keyword query for semantic search.\n\n"

        "SPECIAL RULE for code generation requests:\n"
        "When the user asks to generate/create/build/show AVAP code, expand the query "
        "with the AVAP commands typically needed. Use this mapping:\n\n"

        "- API / endpoint / route / HTTP response\n"
        "  expand to: AVAP registerEndpoint addResult _status\n\n"

        "- Read input / parameter\n"
        "  expand to: AVAP addParam getQueryParamList\n\n"

        "- Database / ORM / query\n"
        "  expand to: AVAP ormAccessSelect ormAccessInsert avapConnector\n\n"

        "- Error handling\n"
        "  expand to: AVAP try exception end\n\n"

        "- Loop / iterate\n"
        "  expand to: AVAP startLoop endLoop itemFromList getListLen\n\n"

        "- HTTP request / call external\n"
        "  expand to: AVAP RequestPost RequestGet\n"
        "</task>\n\n"

        "<rules>\n"
        "- Preserve all AVAP identifiers verbatim.\n"
        "- Remove filler words.\n"
        "- Output a single line.\n"
        "- Never answer the question.\n"
        "</rules>\n\n"

        "<examples>\n"
        "<example>\n"
        "<input>What does AVAP stand for?</input>\n"
        "<o>AVAP stand for</o>\n"
        "</example>\n\n"

        "<example>\n"
        "<input>dime como seria un API que devuelva hello world con AVAP</input>\n"
        "<o>AVAP registerEndpoint addResult _status hello world example</o>\n"
        "</example>\n\n"

        "<example>\n"
        "<input>generate an AVAP script that reads a parameter and queries the DB</input>\n"
        "<o>AVAP addParam ormAccessSelect avapConnector registerEndpoint addResult</o>\n"
        "</example>\n"
        "</examples>\n\n"

        "Return only the rewritten query. No labels, no prefixes, no explanation."
    )
)

CONFIDENCE_PROMPT_TEMPLATE = (
    "<role>\n"
    "You are a relevance evaluator. Decide whether the context contains "
    "useful information to address the user question.\n"
    "</role>\n\n"

    "<task>\n"
    "Answer YES if the context contains at least one relevant passage. "
    "Answer NO only if context is empty or completely unrelated.\n"
    "</task>\n\n"

    "<output_rule>\n"
    "Exactly one word: YES or NO.\n"
    "</output_rule>\n\n"

    "<question>{question}</question>\n\n"

    "<context>{context}</context>"
)


CODE_GENERATION_PROMPT = SystemMessage(
    content=(
        "<role>\n"
        "You are an expert AVAP programmer. AVAP (Advanced Virtual API Programming) "
        "is a domain-specific language for orchestrating microservices and HTTP I/O. "
        "Write correct, minimal, working AVAP code.\n"
        "</role>\n\n"

        "<critical_rules>\n"
        "1. AVAP is line-oriented: every statement on a single line.\n"
        "2. Use ONLY commands from <avap_syntax_reminder> or explicitly described in <context>.\n"
        "3. Do NOT copy code examples from <context> that solve a DIFFERENT problem. "
        "Context examples are syntax references only — ignore them if unrelated.\n"
        "4. Write the MINIMUM code needed. No extra connectors, no unrelated variables.\n"
        "5. Add brief inline comments explaining each part.\n"
        "6. Answer in the same language the user used.\n"
        "</critical_rules>\n\n"

        "<avap_syntax_reminder>\n"
        "// Register an HTTP endpoint\n"
        "registerEndpoint(\"GET\", \"/path\", [], \"scope\", handlerFn, \"\")\n\n"
        "// Declare a function — uses curly braces, NOT end()\n"
        "function handlerFn() {{\n"
        "  msg = \"Hello World\"\n"
        "  addResult(msg)\n"
        "}}\n\n"
        "// Assign a value to a variable\n"
        "addVar(varName, \"value\")  // or: varName = \"value\"\n\n"
        "// Add variable to HTTP JSON response body\n"
        "addResult(varName)\n\n"
        "// Set HTTP response status code\n"
        "_status = 200  // or: addVar(_status, 200)\n\n"
        "// Read a request parameter (URL, body, or form)\n"
        "addParam(\"paramName\", targetVar)\n\n"
        "// Conditional\n"
        "if(var, value, \"==\")\n"
        "  // ...\n"
        "end()\n\n"
        "// Loop\n"
        "startLoop(i, 0, length)\n"
        "  // ...\n"
        "endLoop()\n\n"
        "// Error handling\n"
        "try()\n"
        "  // ...\n"
        "exception(errVar)\n"
        "  // handle\n"
        "end()\n"
        "</avap_syntax_reminder>\n\n"

        "<task>\n"
        "Generate a minimal, complete AVAP example for the user's request.\n\n"
        "Structure:\n"
        "1. One sentence describing what the code does.\n"
        "2. The AVAP code block — clean, minimal, with inline comments.\n"
        "3. Two or three lines explaining the key commands used.\n"
        "</task>\n\n"

        "<context>\n"
        "{context}\n"
        "</context>"
    )
)

CONVERSATIONAL_PROMPT = SystemMessage(
    content=(
        "<role>\n"
        "You are a helpful AVAP assistant continuing an ongoing conversation.\n"
        "</role>\n\n"

        "<task>\n"
        "The user is following up on something already discussed. "
        "Rephrase, summarize, or elaborate using the conversation history.\n"
        "</task>\n\n"

        "<rules>\n"
        "- Base your answer on the conversation history.\n"
        "- Do not introduce new AVAP facts not in the history.\n"
        "- Keep the same language the user is using.\n"
        "- No Answer/Evidence format. Just answer naturally.\n"
        "</rules>"
    )
)


GENERATE_PROMPT = SystemMessage(
    content=(
        "<role>\n"
        "You are a precise, retrieval-grounded assistant specialized in AVAP. "
        "Answers are honest, calibrated to evidence, and clearly structured.\n"
        "</role>\n\n"

        "<critical_constraint>\n"
        "AVAP is a new proprietary language. Use ONLY content inside <context>. "
        "Treat any AVAP knowledge outside <context> as unreliable.\n"
        "</critical_constraint>\n\n"

        "<task>\n"
        "Answer using exclusively the information in <context>.\n"
        "</task>\n\n"

        "<thinking_steps>\n"
        "Step 1 — Find relevant passages in <context>.\n"
        "Step 2 — Assess if question can be fully or partially answered.\n"
        "Step 3 — Write a clear answer backed by those passages.\n"
        "Step 4 — If context contains relevant AVAP code, include it exactly.\n"
        "</thinking_steps>\n\n"

        "<output_format>\n"
        "Answer:\n"
        "<direct answer; include code blocks if context has relevant code>\n\n"

        "Evidence:\n"
        "- \"<exact quote from context>\"\n"
        "(only quotes you actually used)\n\n"

        "If context has no relevant information reply with exactly:\n"
        "\"I don't have enough information in the provided context to answer that.\"\n"
        "</output_format>\n\n"

        "<context>\n"
        "{context}\n"
        "</context>"
    )
)
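These prompt strings are later run through `str.format` (for example `CODE_GENERATION_PROMPT.content.format(context=...)` in `build_final_messages`), which is why the literal braces in the AVAP syntax reminder are written doubled (`{{` / `}}`) while `{context}` stays single so it gets substituted. A small sketch of the mechanism:

```python
# Doubled braces survive .format() as literal braces; {context} is replaced.
template = (
    "function handlerFn() {{\n"
    "  addResult(msg)\n"
    "}}\n"
    "<context>{context}</context>"
)
rendered = template.format(context="docs here")
```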
@ -0,0 +1,243 @@
import logging
import os
from concurrent import futures
from dotenv import load_dotenv
load_dotenv()

import brunix_pb2
import brunix_pb2_grpc
import grpc
from grpc_reflection.v1alpha import reflection
from elasticsearch import Elasticsearch
from langchain_core.messages import AIMessage

from utils.llm_factory import create_chat_model
from utils.emb_factory import create_embedding_model
from graph import build_graph, build_prepare_graph, build_final_messages, session_store

from evaluate import run_evaluation

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("brunix-engine")


class BrunixEngine(brunix_pb2_grpc.AssistanceEngineServicer):

    def __init__(self):
        es_url = os.getenv("ELASTICSEARCH_URL", "http://localhost:9200")
        es_user = os.getenv("ELASTICSEARCH_USER")
        es_pass = os.getenv("ELASTICSEARCH_PASSWORD")
        es_apikey = os.getenv("ELASTICSEARCH_API_KEY")
        index = os.getenv("ELASTICSEARCH_INDEX", "avap-knowledge-v1")

        self.llm = create_chat_model(
            provider="ollama",
            model=os.getenv("OLLAMA_MODEL_NAME"),
            base_url=os.getenv("OLLAMA_URL"),
            temperature=0,
            validate_model_on_init=True,
        )

        self.embeddings = create_embedding_model(
            provider="ollama",
            model=os.getenv("OLLAMA_EMB_MODEL_NAME"),
            base_url=os.getenv("OLLAMA_URL"),
        )

        es_kwargs: dict = {"hosts": [es_url], "request_timeout": 60}
        if es_apikey:
            es_kwargs["api_key"] = es_apikey
        elif es_user and es_pass:
            es_kwargs["basic_auth"] = (es_user, es_pass)

        self.es_client = Elasticsearch(**es_kwargs)
        self.index_name = index

        if self.es_client.ping():
            info = self.es_client.info()
            logger.info(f"[ESEARCH] Connected: {info['version']['number']} — index: {index}")
        else:
            logger.error("[ESEARCH] Can't connect")

        self.graph = build_graph(
            llm=self.llm,
            embeddings=self.embeddings,
            es_client=self.es_client,
            index_name=self.index_name,
        )

        self.prepare_graph = build_prepare_graph(
            llm=self.llm,
            embeddings=self.embeddings,
            es_client=self.es_client,
            index_name=self.index_name,
        )

        logger.info("Brunix Engine initialized.")

    def AskAgent(self, request, context):
        session_id = request.session_id or "default"
        query = request.query
        logger.info(f"[AskAgent] session={session_id} query='{query[:80]}'")

        try:
            history = list(session_store.get(session_id, []))
            logger.info(f"[AskAgent] conversation: {len(history)} previous messages.")

            initial_state = {
                "messages": history + [{"role": "user", "content": query}],
                "session_id": session_id,
                "reformulated_query": "",
                "context": "",
                "query_type": "",
            }

            final_state = self.graph.invoke(initial_state)
            messages = final_state.get("messages", [])
            last_msg = messages[-1] if messages else None
            result_text = getattr(last_msg, "content", str(last_msg)) if last_msg else ""

            logger.info(f"[AskAgent] query_type={final_state.get('query_type')} "
                        f"answer='{result_text[:100]}'")

            yield brunix_pb2.AgentResponse(
                text=result_text,
                avap_code="AVAP-2026",
                is_final=True,
            )

        except Exception as e:
            logger.error(f"[AskAgent] Error: {e}", exc_info=True)
            yield brunix_pb2.AgentResponse(
                text=f"[ENG] Error: {str(e)}",
                is_final=True,
            )

    def AskAgentStream(self, request, context):
        session_id = request.session_id or "default"
        query = request.query
        logger.info(f"[AskAgentStream] session={session_id} query='{query[:80]}'")

        try:
            history = list(session_store.get(session_id, []))
            logger.info(f"[AskAgentStream] conversation: {len(history)} previous messages.")

            initial_state = {
                "messages": history + [{"role": "user", "content": query}],
                "session_id": session_id,
                "reformulated_query": "",
                "context": "",
                "query_type": "",
            }

            prepared = self.prepare_graph.invoke(initial_state)
            logger.info(
                f"[AskAgentStream] query_type={prepared.get('query_type')} "
                f"context_len={len(prepared.get('context', ''))}"
            )

            final_messages = build_final_messages(prepared)
            full_response = []

            for chunk in self.llm.stream(final_messages):
                token = chunk.content
                if token:
                    full_response.append(token)
                    yield brunix_pb2.AgentResponse(
                        text=token,
                        is_final=False,
                    )

            complete_text = "".join(full_response)
            if session_id:
                session_store[session_id] = (
                    list(prepared["messages"]) + [AIMessage(content=complete_text)]
                )

            logger.info(
                f"[AskAgentStream] done — "
                f"chunks={len(full_response)} total_chars={len(complete_text)}"
            )

            yield brunix_pb2.AgentResponse(text="", is_final=True)

        except Exception as e:
            logger.error(f"[AskAgentStream] Error: {e}", exc_info=True)
            yield brunix_pb2.AgentResponse(
                text=f"[ENG] Error: {str(e)}",
                is_final=True,
            )

    def EvaluateRAG(self, request, context):
        category = request.category or None
        limit = request.limit or None
        index = request.index or self.index_name

        logger.info(f"[EvaluateRAG] category={category} limit={limit} index={index}")

        try:
            result = run_evaluation(
                es_client=self.es_client,
                llm=self.llm,
                embeddings=self.embeddings,
                index_name=index,
                category=category,
                limit=limit,
            )
        except Exception as e:
            logger.error(f"[EvaluateRAG] Error: {e}", exc_info=True)
            return brunix_pb2.EvalResponse(status=f"error: {e}")

        if result.get("status") != "ok":
            return brunix_pb2.EvalResponse(status=result.get("error", "unknown error"))

        details = [
            brunix_pb2.QuestionDetail(
                id=d["id"],
                category=d["category"],
                question=d["question"],
                answer_preview=d["answer_preview"],
                n_chunks=d["n_chunks"],
            )
            for d in result.get("details", [])
        ]

        scores = result["scores"]
        return brunix_pb2.EvalResponse(
            status="ok",
            questions_evaluated=result["questions_evaluated"],
            elapsed_seconds=result["elapsed_seconds"],
            judge_model=result["judge_model"],
            index=result["index"],
            faithfulness=scores["faithfulness"],
            answer_relevancy=scores["answer_relevancy"],
            context_recall=scores["context_recall"],
            context_precision=scores["context_precision"],
            global_score=result["global_score"],
            verdict=result["verdict"],
            details=details,
        )
def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    brunix_pb2_grpc.add_AssistanceEngineServicer_to_server(BrunixEngine(), server)

    SERVICE_NAMES = (
        brunix_pb2.DESCRIPTOR.services_by_name["AssistanceEngine"].full_name,
        reflection.SERVICE_NAME,
    )
    reflection.enable_server_reflection(SERVICE_NAMES, server)

    server.add_insecure_port("[::]:50051")
    logger.info("[ENGINE] listening on 50051 (gRPC)")
    server.start()
    server.wait_for_termination()


if __name__ == "__main__":
    serve()
@@ -0,0 +1,11 @@
# state.py
from typing import TypedDict, Annotated
from langgraph.graph.message import add_messages


class AgentState(TypedDict):
    messages: Annotated[list, add_messages]
    reformulated_query: str
    context: str
    query_type: str
    session_id: str
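The state above flows through every LangGraph node. A minimal sketch of constructing a node-ready state follows; note that `add_messages` is replaced here by a plain list-append stand-in so the sketch runs without langgraph installed (that stand-in is an assumption for illustration only, not the library's behavior):

```python
from typing import Annotated, TypedDict


def add_messages(left: list, right: list) -> list:
    """Toy stand-in reducer; langgraph's real add_messages also merges by message id."""
    return left + right


class AgentState(TypedDict):
    messages: Annotated[list, add_messages]
    reformulated_query: str
    context: str
    query_type: str
    session_id: str


# Initial state for a fresh session, before any graph node has run.
state: AgentState = {
    "messages": [("user", "What is AVAP?")],
    "reformulated_query": "",
    "context": "",
    "query_type": "",
    "session_id": "dev-001",
}
print(state["session_id"])  # → dev-001
```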
@@ -0,0 +1,67 @@
from abc import ABC, abstractmethod
from typing import Any, Dict


class BaseEmbeddingFactory(ABC):
    @abstractmethod
    def create(self, model: str, **kwargs: Any):
        raise NotImplementedError


class OpenAIEmbeddingFactory(BaseEmbeddingFactory):
    def create(self, model: str, **kwargs: Any):
        from langchain_openai import OpenAIEmbeddings

        return OpenAIEmbeddings(model=model, **kwargs)


class OllamaEmbeddingFactory(BaseEmbeddingFactory):
    def create(self, model: str, **kwargs: Any):
        from langchain_ollama import OllamaEmbeddings

        return OllamaEmbeddings(model=model, **kwargs)


class BedrockEmbeddingFactory(BaseEmbeddingFactory):
    def create(self, model: str, **kwargs: Any):
        from langchain_aws import BedrockEmbeddings

        return BedrockEmbeddings(model_id=model, **kwargs)


class HuggingFaceEmbeddingFactory(BaseEmbeddingFactory):
    def create(self, model: str, **kwargs: Any):
        from langchain_huggingface import HuggingFaceEmbeddings

        return HuggingFaceEmbeddings(model_name=model, **kwargs)


EMBEDDING_FACTORIES: Dict[str, BaseEmbeddingFactory] = {
    "openai": OpenAIEmbeddingFactory(),
    "ollama": OllamaEmbeddingFactory(),
    "bedrock": BedrockEmbeddingFactory(),
    "huggingface": HuggingFaceEmbeddingFactory(),
}


def create_embedding_model(provider: str, model: str, **kwargs: Any):
    """
    Create an embedding model instance for the given provider.

    Args:
        provider: The provider name (openai, ollama, bedrock, huggingface).
        model: The model identifier.
        **kwargs: Additional keyword arguments passed to the model constructor.

    Returns:
        An embedding model instance.
    """
    key = provider.strip().lower()

    if key not in EMBEDDING_FACTORIES:
        raise ValueError(
            f"Unsupported embedding provider: {provider}. "
            f"Available providers: {list(EMBEDDING_FACTORIES.keys())}"
        )

    return EMBEDDING_FACTORIES[key].create(model=model, **kwargs)
@@ -0,0 +1,72 @@
from abc import ABC, abstractmethod
from typing import Any, Dict


class BaseProviderFactory(ABC):
    @abstractmethod
    def create(self, model: str, **kwargs: Any):
        raise NotImplementedError


class OpenAIChatFactory(BaseProviderFactory):
    def create(self, model: str, **kwargs: Any):
        from langchain_openai import ChatOpenAI

        return ChatOpenAI(model=model, **kwargs)


class OllamaChatFactory(BaseProviderFactory):
    def create(self, model: str, **kwargs: Any):
        from langchain_ollama import ChatOllama

        return ChatOllama(model=model, **kwargs)


class BedrockChatFactory(BaseProviderFactory):
    def create(self, model: str, **kwargs: Any):
        from langchain_aws import ChatBedrockConverse

        return ChatBedrockConverse(model=model, **kwargs)


class HuggingFaceChatFactory(BaseProviderFactory):
    def create(self, model: str, **kwargs: Any):
        from langchain_huggingface import ChatHuggingFace, HuggingFacePipeline

        llm = HuggingFacePipeline.from_model_id(
            model_id=model,
            task="text-generation",
            pipeline_kwargs=kwargs,
        )
        return ChatHuggingFace(llm=llm)


CHAT_FACTORIES: Dict[str, BaseProviderFactory] = {
    "openai": OpenAIChatFactory(),
    "ollama": OllamaChatFactory(),
    "bedrock": BedrockChatFactory(),
    "huggingface": HuggingFaceChatFactory(),
}


def create_chat_model(provider: str, model: str, **kwargs: Any):
    """
    Create a chat model instance for the given provider.

    Args:
        provider: The provider name (openai, ollama, bedrock, huggingface).
        model: The model identifier.
        **kwargs: Additional keyword arguments passed to the model constructor.

    Returns:
        A chat model instance.
    """
    key = provider.strip().lower()

    if key not in CHAT_FACTORIES:
        raise ValueError(
            f"Unsupported chat provider: {provider}. "
            f"Available providers: {list(CHAT_FACTORIES.keys())}"
        )

    return CHAT_FACTORIES[key].create(model=model, **kwargs)
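Both factory modules follow the same registry pattern, so supporting a new provider means registering one more entry. A self-contained sketch of that extension point — the `EchoChatFactory` below is hypothetical and returns a stub dict instead of a real LLM, so the sketch runs without any provider SDK installed:

```python
from abc import ABC, abstractmethod
from typing import Any, Dict


class BaseProviderFactory(ABC):
    @abstractmethod
    def create(self, model: str, **kwargs: Any):
        raise NotImplementedError


class EchoChatFactory(BaseProviderFactory):
    """Hypothetical provider used only to illustrate registration."""

    def create(self, model: str, **kwargs: Any):
        return {"provider": "echo", "model": model, **kwargs}


# Registering the new provider is a one-line dict entry.
CHAT_FACTORIES: Dict[str, BaseProviderFactory] = {"echo": EchoChatFactory()}


def create_chat_model(provider: str, model: str, **kwargs: Any):
    key = provider.strip().lower()  # lookup is case-insensitive, as in the module above
    if key not in CHAT_FACTORIES:
        raise ValueError(f"Unsupported chat provider: {provider}")
    return CHAT_FACTORIES[key].create(model=model, **kwargs)


print(create_chat_model(" Echo ", "test-model", temperature=0.2))
# → {'provider': 'echo', 'model': 'test-model', 'temperature': 0.2}
```

The instance-in-a-dict registry (rather than `if/elif` chains) keeps provider imports lazy: a provider's SDK is only imported when its `create` actually runs.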
Makefile
@@ -1,22 +0,0 @@
.PHONY: help requirements docker-build docker-up docker-down clean start tunnels_up

help:
	@echo "Available commands:"
	@echo "  make sync_requirements - Export dependencies from pyproject.toml to requirements.txt"
	@echo "  make tunnels_up        - Start tunnels"
	@echo "  make compose_up        - Run tunnels script and start Docker Compose"

sync_requirements:
	@echo "Exporting dependencies from pyproject.toml to requirements.txt..."
	uv export --format requirements-txt --no-hashes --no-dev -o requirements.txt
	@echo "✓ requirements.txt updated successfully"

tunnels_up:
	bash ./scripts/start-tunnels.sh < /dev/null &
	@echo "✓ Tunnels started!"

compose_up:
	bash ./scripts/start-tunnels.sh < /dev/null &
	sleep 2
	docker compose up -d --build
	@echo "✓ Done!"
README.md
@@ -42,15 +42,78 @@ graph TD

## Project Structure

```text
.
├── CONTRIBUTING.md              # Contribution standards, GitFlow, PR process
├── README.md                    # System documentation & Dev guide
├── SECURITY.md                  # Security policy and vulnerability reporting
├── changelog                    # Version tracking and release history
├── pyproject.toml               # Python project configuration (uv)
├── uv.lock                      # Locked dependency graph
│
├── Docker/                      # Production container
│   ├── protos/
│   │   └── brunix.proto         # gRPC API contract (source of truth)
│   ├── src/
│   │   ├── server.py            # gRPC server — AskAgent, AskAgentStream, EvaluateRAG
│   │   ├── openai_proxy.py      # OpenAI & Ollama-compatible HTTP proxy (port 8000)
│   │   ├── graph.py             # LangGraph orchestration — build_graph, build_prepare_graph
│   │   ├── prompts.py           # Centralized prompt definitions (CLASSIFY, GENERATE, etc.)
│   │   ├── state.py             # AgentState TypedDict (shared across graph nodes)
│   │   ├── evaluate.py          # RAGAS evaluation pipeline (Claude as judge)
│   │   ├── golden_dataset.json  # Ground-truth Q&A dataset for EvaluateRAG
│   │   └── utils/
│   │       ├── emb_factory.py   # Provider-agnostic embedding model factory
│   │       └── llm_factory.py   # Provider-agnostic LLM factory
│   ├── Dockerfile               # Multi-stage container build
│   ├── docker-compose.yaml      # Local dev orchestration
│   ├── entrypoint.sh            # Starts gRPC server + HTTP proxy in parallel
│   ├── requirements.txt         # Pinned production dependencies (exported by uv)
│   ├── .env                     # Local secrets (never commit — see .gitignore)
│   └── .dockerignore            # Excludes dev artifacts from image build context
│
├── docs/                        # Knowledge base & project documentation
│   ├── ARCHITECTURE.md          # Deep technical architecture reference
│   ├── API_REFERENCE.md         # Complete gRPC & HTTP API contract with examples
│   ├── RUNBOOK.md               # Operational playbooks and incident response
│   ├── AVAP_CHUNKER_CONFIG.md   # avap_config.json reference — blocks, statements, semantic tags
│   ├── adr/                     # Architecture Decision Records
│   │   ├── ADR-0001-grpc-primary-interface.md
│   │   ├── ADR-0002-two-phase-streaming.md
│   │   ├── ADR-0003-hybrid-retrieval-rrf.md
│   │   └── ADR-0004-claude-eval-judge.md
│   ├── avap_language_github_docs/    # AVAP language reference docs (GitHub source)
│   ├── developer.avapframework.com/  # AVAP developer portal docs
│   ├── LRM/
│   │   └── avap.md              # AVAP Language Reference Manual (LRM)
│   └── samples/                 # AVAP code samples (.avap) used for ingestion
│
├── ingestion/
│   └── chunks.json              # Last export of ingested chunks (ES bulk output)
│
├── scripts/
│   └── pipelines/
│       ├── flows/               # Executable pipeline entry points (Typer CLI)
│       │   ├── elasticsearch_ingestion.py  # [PIPELINE A] Chonkie-based ingestion flow
│       │   ├── generate_mbap.py            # Synthetic MBPP-AVAP dataset generator (Claude)
│       │   └── translate_mbpp.py           # MBPP→AVAP dataset translation pipeline
│       ├── tasks/               # Reusable task modules for Pipeline A
│       │   ├── chunk.py         # Document fetching, Chonkie chunking & ES bulk write
│       │   ├── embeddings.py    # OllamaEmbeddings adapter (Chonkie-compatible)
│       │   └── prompts.py       # Prompt templates for pipeline LLM calls
│       └── ingestion/           # [PIPELINE B] AVAP-native classic ingestion
│           ├── avap_chunker.py  # Custom AVAP lexer + chunker (MinHash dedup, overlaps)
│           ├── avap_ingestor.py # Async ES ingestor with DLQ (producer/consumer pattern)
│           ├── avap_config.json # AVAP language config (blocks, statements, semantic tags)
│           └── ingestion/
│               └── chunks.jsonl # JSONL output from avap_chunker.py
│
└── src/                         # Shared library (used by both Docker and scripts)
    ├── config.py                # Pydantic settings — reads all environment variables
    └── utils/
        ├── emb_factory.py       # Embedding model factory
        └── llm_factory.py       # LLM model factory
```

---

@@ -87,7 +150,145 @@ sequenceDiagram
Note over E: Close Langfuse Trace
|
|
||||||
|
---

## Knowledge Base Ingestion

The Elasticsearch vector index is populated via one of two independent pipelines. Both pipelines require the Elasticsearch tunnel to be active (`localhost:9200`) and the Ollama embedding model (`OLLAMA_EMB_MODEL_NAME`) to be available.

### Pipeline A — Chonkie (recommended for markdown + .avap)

Uses the [Chonkie](https://github.com/chonkie-ai/chonkie) library for semantic chunking. Supports `.md` (via `MarkdownChef`) and `.avap` (via `TextChef` + `TokenChunker`). Chunks are embedded with Ollama and bulk-indexed into Elasticsearch via `ElasticHandshakeWithMetadata`.

**Entry point:** `scripts/pipelines/flows/elasticsearch_ingestion.py`

```bash
# Index all markdown and AVAP files from docs/LRM
python -m scripts.pipelines.flows.elasticsearch_ingestion \
  --docs-folder-path docs/LRM \
  --output ingestion/chunks.json \
  --docs-extension .md .avap \
  --es-index avap-docs-test \
  --delete-es-index

# Index the AVAP code samples
python -m scripts.pipelines.flows.elasticsearch_ingestion \
  --docs-folder-path docs/samples \
  --output ingestion/chunks.json \
  --docs-extension .avap \
  --es-index avap-docs-test
```

**How it works:**

```
docs/**/*.md + docs/**/*.avap
        │
        ▼ FileFetcher (Chonkie)
        │
        ├─ .md   → MarkdownChef → merge code blocks + tables into chunks
        │               ↓
        │           TokenChunker (HuggingFace tokenizer: HF_EMB_MODEL_NAME)
        │
        └─ .avap → TextChef → TokenChunker
        │
        ▼ OllamaEmbeddings.embed_batch() (OLLAMA_EMB_MODEL_NAME)
        │
        ▼ ElasticHandshakeWithMetadata.write()
          bulk index → {text, embedding, file, start_index, end_index, token_count}
        │
        ▼ export_documents() → ingestion/chunks.json
```

| Chunk field | Source |
|---|---|
| `text` | Raw chunk text |
| `embedding` | Ollama dense vector |
| `start_index` / `end_index` | Character offsets in source file |
| `token_count` | HuggingFace tokenizer count |
| `file` | Source filename |

---

### Pipeline B — AVAP Native (classic, for .avap files with full semantic analysis)

A custom lexer-based chunker purpose-built for the AVAP language using `avap_config.json` as its grammar definition. Produces richer metadata (block type, section, semantic tags, complexity score) and includes **MinHash LSH deduplication** and **semantic overlap** between chunks.

**Entry point:** `scripts/pipelines/ingestion/avap_chunker.py`
**Grammar config:** `scripts/pipelines/ingestion/avap_config.json` — see [`docs/AVAP_CHUNKER_CONFIG.md`](./docs/AVAP_CHUNKER_CONFIG.md) for the full reference on blocks, statements, semantic tags, and how to extend the grammar.

```bash
python scripts/pipelines/ingestion/avap_chunker.py \
  --lang-config scripts/pipelines/ingestion/avap_config.json \
  --docs-path docs/samples \
  --output scripts/pipelines/ingestion/ingestion/chunks.jsonl \
  --workers 4
```

**Step 2 — Ingest:** `scripts/pipelines/ingestion/avap_ingestor.py`

```bash
# Ingest from existing JSONL
python scripts/pipelines/ingestion/avap_ingestor.py \
  --chunks scripts/pipelines/ingestion/ingestion/chunks.jsonl \
  --index avap-knowledge-v1 \
  --delete

# Check model embedding dimensions first
python scripts/pipelines/ingestion/avap_ingestor.py --probe-dim
```

**How it works:**

```
docs/**/*.avap + docs/**/*.md
        │
        ▼ avap_chunker.py (GenericLexer + LanguageConfig)
        │   ├─ .avap: block detection (function/if/startLoop/try), statement classification
        │   │         semantic tags enrichment, function signature extraction
        │   │         semantic overlap injection (OVERLAP_LINES=3)
        │   └─ .md:   H1/H2/H3 sectioning, fenced code extraction, table isolation,
        │             narrative split by token budget (MAX_NARRATIVE_TOKENS=400)
        │   ├─ MinHash LSH deduplication (threshold=0.85, 128 permutations)
        │   └─ parallel workers (ProcessPoolExecutor)
        │
        ▼ chunks.jsonl (one JSON per line)
        │
        ▼ avap_ingestor.py (async producer/consumer)
        │   ├─ OllamaAsyncEmbedder — batch embed (BATCH_SIZE_EMBED=8)
        │   ├─ asyncio.Queue (backpressure, QUEUE_MAXSIZE=5)
        │   ├─ ES async_bulk (BATCH_SIZE_ES=50)
        │   └─ DeadLetterQueue — failed chunks saved to failed_chunks_<ts>.jsonl
        │
        ▼ Elasticsearch index
          {chunk_id, content, embedding, doc_type, block_type, section,
           source_file, start_line, end_line, token_estimate, metadata{...}}
```

**Chunk types produced:**

| `doc_type` | `block_type` | Description |
|---|---|---|
| `code` | `function` | Complete AVAP function block |
| `code` | `if` / `startLoop` / `try` | Control flow blocks |
| `function_signature` | `function_signature` | Extracted function signature only (for fast lookup) |
| `code` | `registerEndpoint` / `addVar` / … | Statement-level chunks by AVAP command category |
| `spec` | `narrative` | Markdown prose sections |
| `code_example` | language tag | Fenced code blocks from markdown |
| `bnf` | `bnf` | BNF grammar blocks from markdown |
| `spec` | `table` | Markdown tables |

**Semantic tags** (automatically detected, stored in `metadata`):

`uses_orm` · `uses_http` · `uses_connector` · `uses_async` · `uses_crypto` · `uses_auth` · `uses_error_handling` · `uses_loop` · `uses_json` · `uses_list` · `uses_regex` · `uses_datetime` · `returns_result` · `registers_endpoint`

**Ingestor environment variables:**

| Variable | Default | Description |
|---|---|---|
| `OLLAMA_URL` | `http://localhost:11434` | Ollama base URL for embeddings |
| `OLLAMA_MODEL` | `qwen3-0.6B-emb:latest` | Embedding model name |
| `OLLAMA_EMBEDDING_DIM` | `1024` | Expected embedding dimension (must match model) |
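A minimal sketch of reading these settings with their documented defaults, as the ingestor is assumed to do internally (variable names and defaults are taken from the table above):

```python
import os

# Defaults mirror the ingestor's documented environment variables;
# override any of them in the shell before launching the ingestor.
OLLAMA_URL = os.getenv("OLLAMA_URL", "http://localhost:11434")
OLLAMA_MODEL = os.getenv("OLLAMA_MODEL", "qwen3-0.6B-emb:latest")
OLLAMA_EMBEDDING_DIM = int(os.getenv("OLLAMA_EMBEDDING_DIM", "1024"))

print(OLLAMA_URL, OLLAMA_MODEL, OLLAMA_EMBEDDING_DIM)
```

Because `OLLAMA_EMBEDDING_DIM` must match the model's actual output dimension, `--probe-dim` (shown above) is the safer way to confirm it before a full ingest.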
@@ -102,31 +303,75 @@ sequenceDiagram
The engine utilizes Langfuse for end-to-end tracing and performance monitoring.
1. Access the Dashboard: **http://45.77.119.180**
2. Create a project and generate API Keys in **Settings**.
3. Configure your local `.env` file using the reference table below.

### 3. Environment Variables Reference

> **Policy:** Every environment variable used by the engine must be documented in this table. Any PR that introduces a new variable without a corresponding entry here will be rejected. See [CONTRIBUTING.md](./CONTRIBUTING.md#5-environment-variables-policy) for full details.

Create a `.env` file in the project root with the following variables:

```env
PYTHONPATH=${PYTHONPATH}:/home/...
ELASTICSEARCH_URL=http://host.docker.internal:9200
ELASTICSEARCH_LOCAL_URL=http://localhost:9200
ELASTICSEARCH_INDEX=avap-docs-test
ELASTICSEARCH_USER=elastic
ELASTICSEARCH_PASSWORD=changeme
ELASTICSEARCH_API_KEY=
POSTGRES_URL=postgresql://postgres:postgres@localhost:5432/langfuse
LANGFUSE_HOST=http://45.77.119.180
LANGFUSE_PUBLIC_KEY=pk-lf-...
LANGFUSE_SECRET_KEY=sk-lf-...
OLLAMA_URL=http://host.docker.internal:11434
OLLAMA_LOCAL_URL=http://localhost:11434
OLLAMA_MODEL_NAME=qwen2.5:1.5b
OLLAMA_EMB_MODEL_NAME=qwen3-0.6B-emb:latest
HF_TOKEN=hf_...
HF_EMB_MODEL_NAME=Qwen/Qwen3-Embedding-0.6B
ANTHROPIC_API_KEY=sk-ant-...
ANTHROPIC_MODEL=claude-sonnet-4-20250514
```

| Variable | Required | Description | Example |
|---|---|---|---|
| `PYTHONPATH` | No | Path pointing to the project root | `${PYTHONPATH}:/home/...` |
| `ELASTICSEARCH_URL` | Yes | Elasticsearch endpoint used for vector/context retrieval in Docker | `http://host.docker.internal:9200` |
| `ELASTICSEARCH_LOCAL_URL` | Yes | Elasticsearch endpoint used for vector/context retrieval in local | `http://localhost:9200` |
| `ELASTICSEARCH_INDEX` | Yes | Elasticsearch index name used by the engine | `avap-docs-test` |
| `ELASTICSEARCH_USER` | No | Elasticsearch username (used when API key is not set) | `elastic` |
| `ELASTICSEARCH_PASSWORD` | No | Elasticsearch password (used when API key is not set) | `changeme` |
| `ELASTICSEARCH_API_KEY` | No | Elasticsearch API key (takes precedence over user/password auth) | `abc123...` |
| `POSTGRES_URL` | Yes | PostgreSQL connection string used by the service | `postgresql://postgres:postgres@localhost:5432/langfuse` |
| `LANGFUSE_HOST` | Yes | Langfuse server endpoint (Devaron Cluster) | `http://45.77.119.180` |
| `LANGFUSE_PUBLIC_KEY` | Yes | Langfuse project public key for tracing and observability | `pk-lf-...` |
| `LANGFUSE_SECRET_KEY` | Yes | Langfuse project secret key | `sk-lf-...` |
| `OLLAMA_URL` | Yes | Ollama endpoint used for text generation/embeddings in Docker | `http://host.docker.internal:11434` |
| `OLLAMA_LOCAL_URL` | Yes | Ollama endpoint used for text generation/embeddings in local | `http://localhost:11434` |
| `OLLAMA_MODEL_NAME` | Yes | Ollama model name for generation | `qwen2.5:1.5b` |
| `OLLAMA_EMB_MODEL_NAME` | Yes | Ollama embeddings model name | `qwen3-0.6B-emb:latest` |
| `HF_TOKEN` | Yes | HuggingFace secret token | `hf_...` |
| `HF_EMB_MODEL_NAME` | Yes | HuggingFace embeddings model name | `Qwen/Qwen3-Embedding-0.6B` |
| `ANTHROPIC_API_KEY` | Yes* | Anthropic API key — required for the `EvaluateRAG` endpoint | `sk-ant-...` |
| `ANTHROPIC_MODEL` | No | Claude model used by the RAG evaluation suite | `claude-sonnet-4-20250514` |

> Never commit real secret values. Use placeholder values when sharing configuration examples.

### 4. Infrastructure Tunnels
Open a terminal and establish the connection to the Devaron Cluster:

```bash
# 1. AI Model Tunnel (Ollama)
kubectl port-forward --address 0.0.0.0 svc/ollama-light-service 11434:11434 -n brunix --kubeconfig ./kubernetes/kubeconfig.yaml &

# 2. Knowledge Base Tunnel (Elasticsearch)
kubectl port-forward --address 0.0.0.0 svc/brunix-vector-db 9200:9200 -n brunix --kubeconfig ./kubernetes/kubeconfig.yaml &

# 3. Observability DB Tunnel (PostgreSQL)
kubectl port-forward --address 0.0.0.0 svc/brunix-postgres 5432:5432 -n brunix --kubeconfig ./kubernetes/kubeconfig.yaml &
```

### 5. Launch the Engine

```bash
docker-compose up -d --build
```
@@ -135,25 +380,263 @@ docker-compose up -d --build

## Testing & Debugging

The gRPC service is exposed on port `50052` with **gRPC Reflection** enabled — introspect it at any time without needing the `.proto` file.

```bash
# List available services
grpcurl -plaintext localhost:50052 list

# Describe the full service contract
grpcurl -plaintext localhost:50052 describe brunix.AssistanceEngine
```

### `AskAgent` — complete response (non-streaming)

Returns the full answer as a single message with `is_final: true`. Suitable for clients that do not support streaming.

```bash
grpcurl -plaintext \
  -d '{"query": "What is addVar in AVAP?", "session_id": "dev-001"}' \
  localhost:50052 \
  brunix.AssistanceEngine/AskAgent
```

Expected response:
```json
{
  "text": "addVar is an AVAP command used to declare a variable...",
  "avap_code": "AVAP-2026",
  "is_final": true
}
```

### `AskAgentStream` — real token streaming

Emits one `AgentResponse` per token from Ollama. The final message has `is_final: true` and empty `text` — it is a termination signal, not part of the answer.

```bash
grpcurl -plaintext \
  -d '{"query": "Write an AVAP API that returns hello world", "session_id": "dev-001"}' \
  localhost:50052 \
  brunix.AssistanceEngine/AskAgentStream
```

Expected response stream:
```json
{"text": "Here", "is_final": false}
{"text": " is", "is_final": false}
...
{"text": "", "is_final": true}
```

**Multi-turn conversation:** send subsequent requests with the same `session_id` to maintain context.

```bash
# Turn 1
grpcurl -plaintext \
  -d '{"query": "What is registerEndpoint?", "session_id": "user-abc"}' \
  localhost:50052 brunix.AssistanceEngine/AskAgentStream

# Turn 2 — engine has Turn 1 history
grpcurl -plaintext \
  -d '{"query": "Show me a code example", "session_id": "user-abc"}' \
  localhost:50052 brunix.AssistanceEngine/AskAgentStream
```

### `EvaluateRAG` — quality evaluation

Runs the RAGAS evaluation pipeline against the golden dataset using Claude as the judge. Requires `ANTHROPIC_API_KEY` to be set.

```bash
# Full evaluation
grpcurl -plaintext -d '{}' localhost:50052 brunix.AssistanceEngine/EvaluateRAG

# Filtered: first 10 questions of category "core_syntax"
grpcurl -plaintext \
  -d '{"category": "core_syntax", "limit": 10, "index": "avap-docs-test"}' \
  localhost:50052 \
  brunix.AssistanceEngine/EvaluateRAG
```

Expected response:
```json
{
  "status": "ok",
  "questions_evaluated": 10,
  "elapsed_seconds": 142.3,
  "judge_model": "claude-sonnet-4-20250514",
  "faithfulness": 0.8421,
  "answer_relevancy": 0.7913,
  "context_recall": 0.7234,
  "context_precision": 0.6891,
  "global_score": 0.7615,
  "verdict": "ACCEPTABLE"
}
```

Verdict thresholds: `EXCELLENT` ≥ 0.80 · `ACCEPTABLE` ≥ 0.60 · `INSUFFICIENT` < 0.60
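The thresholds above can be expressed as a small helper — a sketch of the banding logic, not the engine's actual implementation:

```python
def verdict(global_score: float) -> str:
    """Map a RAGAS global score to the verdict bands documented above."""
    if global_score >= 0.80:
        return "EXCELLENT"
    if global_score >= 0.60:
        return "ACCEPTABLE"
    return "INSUFFICIENT"


print(verdict(0.7615))  # → ACCEPTABLE
```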
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## HTTP Proxy (OpenAI & Ollama Compatible)
|
||||||
|
|
||||||
|
The container also runs an **OpenAI-compatible HTTP proxy** on port `8000` (`openai_proxy.py`). It wraps the gRPC engine transparently — `stream: false` routes to `AskAgent`, `stream: true` routes to `AskAgentStream`.
|
||||||
|
|
||||||
|
This enables integration with any tool that supports the OpenAI or Ollama API (continue.dev, LiteLLM, Open WebUI, etc.) without code changes.
|
||||||
|
|
||||||
|
### OpenAI endpoints
|
||||||
|
|
||||||
|
| Method | Endpoint | Description |
|
||||||
|
|---|---|---|
|
||||||
|
| `GET` | `/v1/models` | List available models |
|
||||||
|
| `POST` | `/v1/chat/completions` | Chat completion — streaming and non-streaming |
|
||||||
|
| `POST` | `/v1/completions` | Legacy text completion — streaming and non-streaming |
|
||||||
|
| `GET` | `/health` | Health check — returns gRPC target and status |
|
||||||
|
|
||||||
|
**Non-streaming chat:**
|
||||||
|
```bash
|
||||||
|
curl http://localhost:8000/v1/chat/completions \
|
||||||
|
-H "Content-Type: application/json" \
|
||||||
|
-d '{
|
||||||
|
"model": "brunix",
|
||||||
|
"messages": [{"role": "user", "content": "What is AVAP?"}],
|
||||||
|
"stream": false
|
||||||
|
}'
|
||||||
|
```

**Streaming chat (SSE):**

```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "brunix",
    "messages": [{"role": "user", "content": "Write an AVAP hello world API"}],
    "stream": true,
    "session_id": "user-xyz"
  }'
```
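
With `stream: true`, the proxy answers with Server-Sent Events in the standard OpenAI chunk format. A minimal consumer sketch (the sample payload below is illustrative, not captured from a real response):

```python
import json

def collect_sse_content(lines):
    """Accumulate assistant text from OpenAI-style SSE 'data:' lines."""
    parts = []
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        parts.append(delta.get("content", ""))
    return "".join(parts)

# Illustrative stream, shape assumed from the standard OpenAI chunk format:
sample = [
    'data: {"choices": [{"delta": {"role": "assistant"}}]}',
    'data: {"choices": [{"delta": {"content": "Hello"}}]}',
    'data: {"choices": [{"delta": {"content": " world"}}]}',
    "data: [DONE]",
]
print(collect_sse_content(sample))  # Hello world
```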

> **Brunix extension:** `session_id` is a non-standard field added to the OpenAI schema. Use it to maintain multi-turn conversation context across HTTP requests. If omitted, all requests share the `"default"` session.

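Because `session_id` rides inside the standard JSON body, any OpenAI client that allows extra body fields can use it. A sketch of building such a request payload (field names taken from the examples above; the helper itself is illustrative):

```python
import json

def chat_payload(messages, session_id=None, stream=False, model="brunix"):
    """Build an OpenAI-style chat body with the Brunix session_id extension."""
    body = {"model": model, "messages": messages, "stream": stream}
    if session_id is not None:
        body["session_id"] = session_id  # Brunix-specific, non-standard field
    return body

payload = chat_payload(
    [{"role": "user", "content": "What is AVAP?"}],
    session_id="user-xyz",
    stream=True,
)
print(json.dumps(payload, indent=2))
```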
### Ollama endpoints

| Method | Endpoint | Description |
|---|---|---|
| `GET` | `/api/tags` | List models (Ollama format) |
| `POST` | `/api/chat` | Chat — NDJSON stream, `stream: true` by default |
| `POST` | `/api/generate` | Text generation — NDJSON stream, `stream: true` by default |

```bash
curl http://localhost:8000/api/chat \
  -H "Content-Type: application/json" \
  -d '{
    "model": "brunix",
    "messages": [{"role": "user", "content": "Explain AVAP loops"}],
    "stream": true
  }'
```
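
The Ollama endpoints stream NDJSON: one JSON object per line, with `done: true` on the final chunk. A minimal consumer sketch (sample chunks are illustrative, shape assumed from the standard Ollama `/api/chat` format):

```python
import json

def collect_ndjson_content(lines):
    """Accumulate assistant text from Ollama-style NDJSON chat chunks."""
    parts = []
    for line in lines:
        if not line.strip():
            continue
        chunk = json.loads(line)
        parts.append(chunk.get("message", {}).get("content", ""))
        if chunk.get("done"):
            break
    return "".join(parts)

sample = [
    '{"model": "brunix", "message": {"role": "assistant", "content": "AVAP "}, "done": false}',
    '{"model": "brunix", "message": {"role": "assistant", "content": "loops"}, "done": false}',
    '{"model": "brunix", "message": {}, "done": true}',
]
print(collect_ndjson_content(sample))  # AVAP loops
```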

### Proxy environment variables

| Variable | Default | Description |
|---|---|---|
| `BRUNIX_GRPC_TARGET` | `localhost:50051` | gRPC engine address the proxy connects to |
| `PROXY_MODEL_ID` | `brunix` | Model name returned in API responses |
| `PROXY_THREAD_WORKERS` | `20` | Thread pool size for concurrent gRPC calls |
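
A sketch of how the proxy would resolve these settings at startup; the variable names and defaults come from the table above, while the parsing code itself is illustrative:

```python
import os

# Resolve proxy configuration from the environment, with documented defaults.
GRPC_TARGET = os.getenv("BRUNIX_GRPC_TARGET", "localhost:50051")
MODEL_ID = os.getenv("PROXY_MODEL_ID", "brunix")
THREAD_WORKERS = int(os.getenv("PROXY_THREAD_WORKERS", "20"))  # must parse as int

print(GRPC_TARGET, MODEL_ID, THREAD_WORKERS)
```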

---

## API Contract (Protobuf)

The source of truth for the gRPC interface is `Docker/protos/brunix.proto`. After modifying it, regenerate the stubs:

```bash
python -m grpc_tools.protoc \
  -I./Docker/protos \
  --python_out=./Docker/src \
  --grpc_python_out=./Docker/src \
  ./Docker/protos/brunix.proto
```

For the full API reference — message types, field descriptions, error handling, and all client examples — see [`docs/API_REFERENCE.md`](./docs/API_REFERENCE.md).

---

## Dataset Generation & Evaluation

The engine includes a specialized benchmarking suite to evaluate the model's proficiency in **AVAP syntax**. This is achieved through a synthetic data generator that creates problems in the MBPP (Mostly Basic Python Problems) style, but tailored for the AVAP Language Reference Manual (LRM).

### 1. Synthetic Data Generator

The script `scripts/pipelines/flows/generate_mbap.py` leverages Claude to produce high-quality, executable code examples and validation tests.

**Key Features:**

* **LRM Grounding:** Uses the provided `avap.md` as the source of truth for syntax and logic.
* **Validation Logic:** Generates `test_list` with Python regex assertions to verify the state of the AVAP stack after execution.
* **Balanced Categories:** Covers 14 domains including ORM, Concurrency (`go/gather`), HTTP handling, and Cryptography.

### 2. Usage

Ensure you have the `anthropic` library installed and your API key configured:

```bash
pip install anthropic
export ANTHROPIC_API_KEY="your-sk-ant-key"
```

Run the generator specifying the path to your LRM and the desired output:

```bash
python scripts/pipelines/flows/generate_mbap.py \
  --lrm docs/LRM/avap.md \
  --output evaluation/mbpp_avap.json \
  --problems 300
```

### 3. Dataset Schema

The generated JSON follows this structure:

| Field | Type | Description |
| :--- | :--- | :--- |
| `task_id` | Integer | Unique identifier for the benchmark. |
| `text` | String | Natural language description of the problem (Spanish). |
| `code` | String | The reference AVAP implementation. |
| `test_list` | Array | Python `re.match` expressions to validate execution results. |

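Validating a candidate solution against `test_list` is a plain `re.match` loop. Everything below (the entry, the regex, the execution result) is hypothetical and only illustrates the mechanics:

```python
import re

# Hypothetical benchmark entry following the schema above
entry = {
    "task_id": 1,
    "text": "Crea una variable 'total' con el valor 42.",  # Spanish, per the schema
    "code": "addVar('total', 42)",
    "test_list": [r"total\s*=\s*42"],
}

# Hypothetical string describing the AVAP stack state after execution
execution_result = "total = 42"

passed = all(re.match(pattern, execution_result) for pattern in entry["test_list"])
print("PASS" if passed else "FAIL")  # PASS
```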
### 4. Integration in RAG

These generated examples are used to:

1. **Fine-tune** the local models (`qwen2.5:1.5b`) or others via the MrHouston pipeline.
2. **Evaluate** the "Zero-Shot" performance of the engine before deployment.
3. **Provide Few-Shot examples** in the RAG prompt orchestration (`src/prompts.py`).

---

## Repository Standards & Architecture

### Docker & Build Context

To maintain production-grade security and image efficiency, this project enforces a strict separation between development files and the production runtime:

* **Production Root:** All executable code must reside in the `/app` directory within the container.
* **Exclusions:** The root `/workspace` directory is deprecated. No development artifacts, local logs, or non-essential source files (e.g., `.git`, `tests/`, `docs/`) should be bundled into the final image.
* **Compliance:** All Pull Requests must verify that the `Dockerfile` context is optimized using the provided `.dockerignore`.

*Failure to comply with these architectural standards will result in PR rejection.*

For the full set of contribution standards, see [CONTRIBUTING.md](./CONTRIBUTING.md).

---

## Documentation Index

| Document | Purpose |
|---|---|
| [README.md](./README.md) | Setup guide, env vars reference, quick start (this file) |
| [CONTRIBUTING.md](./CONTRIBUTING.md) | Contribution standards, GitFlow, PR process |
| [SECURITY.md](./SECURITY.md) | Security policy, vulnerability reporting, known limitations |
| [docs/ARCHITECTURE.md](./docs/ARCHITECTURE.md) | Deep technical architecture, component inventory, data flows |
| [docs/API_REFERENCE.md](./docs/API_REFERENCE.md) | Complete gRPC API contract, message types, client examples |
| [docs/RUNBOOK.md](./docs/RUNBOOK.md) | Operational playbooks, health checks, incident response |
| [docs/AVAP_CHUNKER_CONFIG.md](./docs/AVAP_CHUNKER_CONFIG.md) | `avap_config.json` reference — blocks, statements, semantic tags, how to extend |
| [docs/adr/](./docs/adr/) | Architecture Decision Records |

---

## Security & Intellectual Property

---

`changelog`

@@ -4,14 +4,115 @@ All notable changes to the **Brunix Assistance Engine** will be documented in this file.

---

## [1.5.1] - 2026-03-18

### Added

- DOCS: Created `docs/ARCHITECTURE.md` — full technical architecture reference covering component inventory, request lifecycle, LangGraph workflow, hybrid RAG pipeline, streaming design, evaluation pipeline, infrastructure layout, session memory, observability, and security boundaries.
- DOCS: Created `docs/API_REFERENCE.md` — complete gRPC API contract documentation with method descriptions, message type tables, error handling, and `grpcurl` client examples for all three RPCs (`AskAgent`, `AskAgentStream`, `EvaluateRAG`).
- DOCS: Created `docs/RUNBOOK.md` — operational playbook with health checks, startup/shutdown procedures, tunnel management, and incident playbooks for all known failure modes.
- DOCS: Created `SECURITY.md` — security policy covering transport security, authentication, secrets management, container security, data privacy, known limitations table, and vulnerability reporting process.
- DOCS: Created `docs/AVAP_CHUNKER_CONFIG.md` — full reference for `avap_config.json`: lexer fields, all 4 block definitions with regex breakdown, all 10 statement categories with ordering rationale, all 14 semantic tags with detection patterns, a worked example showing chunks produced from real AVAP code, and a step-by-step guide for adding new constructs.

### Changed

- DOCS: Fully rewrote `README.md` project structure tree — now reflects all files accurately including `openai_proxy.py`, `entrypoint.sh`, `golden_dataset.json`, `SECURITY.md`, `docs/ARCHITECTURE.md`, `docs/API_REFERENCE.md`, `docs/RUNBOOK.md`, `docs/adr/`, `avap_chunker.py`, `avap_config.json`, `ingestion/chunks.jsonl`, and `src/config.py`.
- DOCS: Added `Knowledge Base Ingestion` section to `README.md` documenting both ingestion pipelines in full: Pipeline A (Chonkie — `elasticsearch_ingestion.py`) with flow diagram, CLI usage, and chunk field table; Pipeline B (AVAP Native — `avap_chunker.py` + `avap_ingestor.py`) with flow diagram, chunk type table, semantic tags reference, and ingestor env vars.
- DOCS: Replaced minimal `Testing & Debugging` section with complete documentation of all three gRPC methods (`AskAgent`, `AskAgentStream`, `EvaluateRAG`) including expected responses, multi-turn example, and verdict thresholds.
- DOCS: Added `HTTP Proxy` section documenting all 7 HTTP endpoints (4 OpenAI + 3 Ollama), streaming vs non-streaming routing, `session_id` extension, and proxy env vars table.
- DOCS: Fixed `API Contract (Protobuf)` section — corrected `grpc_tools.protoc` paths and added reference to `docs/API_REFERENCE.md`.
- DOCS: Fixed remaining stale reference to `scripts/generate_mbpp_avap.py` in Dataset Generation section.
- DOCS: Added Documentation Index table to `README.md` linking all documentation files.
- DOCS: Updated `CONTRIBUTING.md` — added Section 9 (Architecture Decision Records) and updated PR checklist and doc policy table.
- ENV: Added missing variable documentation to `README.md`: `ELASTICSEARCH_USER`, `ELASTICSEARCH_PASSWORD`, `ELASTICSEARCH_API_KEY`, `ANTHROPIC_API_KEY`, `ANTHROPIC_MODEL`.

---

## [1.5.0] - 2026-03-12

### Added

- IMPLEMENTED:
  - `scripts/pipelines/flows/translate_mbpp.py`: pipeline to generate a synthetic dataset from the MBPP dataset.
  - `scripts/tasks/prompts.py`: module containing prompts for pipelines.
  - `scripts/tasks/chunk.py`: module containing functions related to chunk management.
  - `synthetic_datasets`: folder containing generated synthetic datasets.
  - `src/config.py`: environment variables configuration file.

### Changed

- REFACTORED: `scripts/pipelines/flows/elasticsearch_ingestion.py` now uses `docs/LRM` or `docs/samples` documents instead of pre-chunked files.
- RENAMED: `docs/AVAP Language: Core Commands & Functional Specification` to `docs/avap_language_github_docs`.
- REMOVED: `Makefile` file.
- REMOVED: `scripts/start-tunnels.sh` script.
- DEPENDENCIES: `requirements.txt` updated with new libraries required by the new modules.
- MOVED: `scripts/generate_mbap.py` into `scripts/flows/generate_mbap.py`.

---

## [1.4.0] - 2026-03-10

### Added

- **Dataset Generation Suite**: Added `scripts/generate_mbpp_avap.py` to automate the creation of synthetic AVAP training data.
- **MBPP-style Benchmarking**: Support for generating structured JSON datasets with code solutions and Python-based validation tests (`test_list`).
- **LRM Integration**: The generator now performs grounded synthesis using the `avap.md` Language Reference Manual.
- **Anthropic Claude 3.5 Sonnet Integration**: Orchestration logic for high-fidelity code generation via API.

### Changed

- **README.md**: Added comprehensive documentation for the Evaluation & Dataset Generation pipeline.
- **Project Structure**: Integrated `evaluation/` directory for synthetic dataset storage.

### Security

- Added explicit policy to avoid committing real Anthropic API keys, enforcing the use of environment variables.

---

## [1.3.0] - 2026-03-05

### Added

- IMPLEMENTED:
  - `Docker/src/utils/emb_factory`: factory modules created for embedding model generation.
  - `Docker/src/utils/llm_factory`: factory modules created for LLM generation.
  - `Docker/src/graph.py`: workflow graph orchestration module added.
  - `Docker/src/prompts.py`: centralized prompt definitions added.
  - `Docker/src/state.py`: shared state management module added.
  - `scripts/pipelines/flows/elasticsearch_ingestion.py`: pipeline to populate the Elasticsearch vector database.
  - `ingestion/docs`: folder containing all chunked AVAP documents.

### Changed

- REFACTORED: `server.py` updated to integrate the new graph/state/prompt and utils-based architecture.
- REFACTORED: `docker-compose.yaml` now uses fully parameterized environment variables instead of hardcoded service URLs and credentials.
- DEPENDENCIES: `requirements.txt` updated with new libraries required by the new modules.

---

## [1.2.0] - 2026-03-03

### Added

- GOVERNANCE: Introduced `CONTRIBUTING.md` as the single source of truth for all contribution standards, covering GitFlow, infrastructure policy, repository standards, environment variables, changelog, documentation, and incident reporting.
- GOVERNANCE: Added `.github/pull_request_template.md` enforcing a mandatory structured checklist on every PR — including explicit sign-off on environment variables, changelog, and documentation.
- DOCS: Added Environment Variables reference table to `README.md`. All variables must be registered here. PRs introducing undocumented variables will be rejected.
- DOCS: Updated project structure map in `README.md` to reflect new governance files.

### Changed

- PROCESS: Pull Requests that introduce new environment variables without documentation, omit required changelog entries, or skip required documentation updates are now formally non-mergeable per `CONTRIBUTING.md`.

---

## [1.1.0] - 2026-02-16

### Added

- IMPLEMENTED: Strict repository structure enforcement to separate development environment from production runtime.
- SECURITY: Added `.dockerignore` to prevent leaking sensitive source files and local configurations into the container.

### Changed

- REFACTORED: Dockerfile build logic to optimize build context and reduce image footprint.
- ARCHITECTURE: Moved application entry point to `/app` and eliminated the redundant root `/workspace` directory for enhanced security.

### Fixed

- RESOLVED: Issue where non-production files were being bundled into the Docker image, improving deployment speed and container isolation.

---

## [1.0.0] - 2026-02-09

### Added

- **System Architecture:** Implementation of the triple-layer stack (Engine, Vector DB, Observability).
- **Core Engine:** Deployment of the `brunix-assistance-engine` using **Python 3.11**, **LangChain**, and **LangGraph** for agentic workflows.
- **Communication Layer:** Established **gRPC** as the primary high-performance interface (Port 50051/50052).
- **Knowledge Base:** Integration of **Elasticsearch 8.12** (`brunix-vector-db`) for AVAP technology RAG support.
- **Observability Framework:** Deployment of **Langfuse** and **PostgreSQL** for full trace audit and cost management.
- **Security:** Initial network isolation within Docker (`avap-network`) and production-ready secret management design.

---

@@ -1,24 +0,0 @@

```yaml
version: '3.8'

services:
  brunix-engine:
    build: .
    container_name: brunix-assistance-engine
    volumes:
      - .:/workspace
    env_file: .env
    ports:
      - "50052:50051"
    environment:
      - ELASTICSEARCH_URL=http://host.docker.internal:9200
      - DATABASE_URL=postgresql://postgres:brunix_pass@host.docker.internal:5432/postgres

      - LANGFUSE_HOST=http://45.77.119.180
      - LANGFUSE_PUBLIC_KEY=${LANGFUSE_PUBLIC_KEY}
      - LANGFUSE_SECRET_KEY=${LANGFUSE_SECRET_KEY}
      - LLM_BASE_URL=http://host.docker.internal:11434
      - OPENAI_API_KEY=${OPENAI_API_KEY}

    extra_hosts:
      - "host.docker.internal:host-gateway"
```

@@ -0,0 +1,54 @@

# ADR-0001: gRPC as the Primary Communication Interface

**Date:** 2026-02-09
**Status:** Accepted
**Deciders:** Rafael Ruiz (CTO, AVAP Technology), MrHouston Engineering

---

## Context

The Brunix Assistance Engine needs a communication protocol to serve AI completions from internal backend services and client applications. The primary requirement is **real-time token streaming** — the engine must forward Ollama's token output to clients with minimal latency, not buffer the full response.

Secondary requirements:
- Strict API contract enforcement (no schema drift)
- High throughput for potential multi-client scenarios
- Easy introspection and testing in development

Candidates evaluated: REST/HTTP+JSON, gRPC, WebSockets, GraphQL subscriptions.

---

## Decision

Use **gRPC with Protocol Buffers (proto3)** as the primary interface, exposed on port `50051` (container) / `50052` (host).

The API contract is defined in a single source of truth: `Docker/protos/brunix.proto`.

An **OpenAI-compatible HTTP proxy** (`openai_proxy.py`, port `8000`) is provided as a secondary interface to enable integration with standard tooling (continue.dev, LiteLLM, etc.) without modifying the core engine.

---

## Rationale

| Criterion | REST+JSON | **gRPC** | WebSockets |
|---|---|---|---|
| Streaming support | Requires SSE or chunked | ✅ Native server-side streaming | ✅ Bidirectional |
| Schema enforcement | ❌ Optional (OpenAPI) | ✅ Enforced by protobuf | ❌ None |
| Code generation | Manual or OpenAPI tooling | ✅ Automatic stub generation | Manual |
| Performance | Good | ✅ Better (binary framing) | Good |
| Dev tooling | Excellent | Good (`grpcurl`, reflection) | Limited |
| Browser-native | ✅ Yes | ❌ Requires grpc-web proxy | ✅ Yes |

gRPC was chosen because: (1) streaming is a first-class citizen, not bolted on; (2) the proto contract makes API evolution explicit and breaking changes detectable at compile time; (3) stub generation eliminates a class of integration bugs.

The lack of browser-native support is not a concern — all current clients are server-side services or CLI tools.

---

## Consequences

- All API changes require modifying `brunix.proto` and regenerating stubs (`grpc_tools.protoc`).
- Client libraries must use the generated stubs or `grpcurl` — no curl-based ad-hoc testing of the main API.
- The OpenAI proxy adds a second entry point that must be kept in sync with the gRPC interface behavior.
- gRPC reflection is enabled in development. It should be evaluated for disabling in production to reduce the attack surface.

@@ -0,0 +1,61 @@

# ADR-0002: Two-Phase Streaming Design for `AskAgentStream`

**Date:** 2026-03-05
**Status:** Accepted
**Deciders:** Rafael Ruiz (CTO), MrHouston Engineering

---

## Context

The initial `AskAgent` implementation calls `graph.invoke()` — LangGraph's synchronous execution — and returns the complete answer as a single gRPC message. This blocks the gRPC connection for the full generation time (typically 3–15 seconds) with no intermediate feedback to the client.

A streaming variant is required that forwards Ollama's token output to the client as tokens are produced, enabling real-time rendering in client UIs.

The straightforward approach would be to use LangGraph's own `graph.stream()` method.

---

## Decision

Implement `AskAgentStream` using a **two-phase design**:

**Phase 1 — Graph-managed preparation:**
Run `build_prepare_graph()` (classify → reformulate → retrieve) via `prepare_graph.invoke()`. This phase runs synchronously and produces the full classified, reformulated query and retrieved context. It does **not** call the LLM for generation.

**Phase 2 — Manual LLM streaming:**
Call `build_final_messages()` to reconstruct the exact prompt that the full graph would have used, then call `llm.stream(final_messages)` directly. Each token chunk is yielded immediately as an `AgentResponse`.

A separate `build_prepare_graph()` function mirrors the routing logic of `build_graph()` but terminates at `END` before any generation node. A `build_final_messages()` function replicates the prompt-building logic of `generate`, `generate_code`, and `respond_conversational`.
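
Stripped of LangGraph specifics, the control flow can be sketched as a plain generator. Here `prepare` and `stream_llm` are stand-ins for the real `prepare_graph.invoke()` and `llm.stream()` calls, and the message shapes are illustrative:

```python
from typing import Callable, Iterable, Iterator

def ask_agent_stream(
    question: str,
    prepare: Callable[[str], dict],               # stand-in for prepare_graph.invoke()
    stream_llm: Callable[[list], Iterable[str]],  # stand-in for llm.stream()
) -> Iterator[str]:
    # Phase 1: classify / reformulate / retrieve -- synchronous, no generation.
    state = prepare(question)
    # Rebuild the prompt the full graph would have used (illustrative shape).
    final_messages = [
        {"role": "system", "content": state["context"]},
        {"role": "user", "content": state["query"]},
    ]
    # Phase 2: stream tokens straight from the LLM, outside the graph.
    yield from stream_llm(final_messages)

# Toy stubs to show the control flow end to end:
fake_prepare = lambda q: {"query": q, "context": "retrieved docs"}
fake_llm = lambda msgs: iter(["Hello", " ", "world"])
print("".join(ask_agent_stream("hi", fake_prepare, fake_llm)))  # Hello world
```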
|
||||||
|
---
|
||||||
|
|
||||||
|
## Rationale
|
||||||
|
|
||||||
|
### Why not use `graph.stream()`?
|
||||||
|
|
||||||
|
LangGraph's `stream()` yields **state snapshots** at node boundaries, not LLM tokens. When using `llm.invoke()` inside a graph node, the invocation is atomic — there are no intermediate yields. To get per-token streaming from `llm.stream()`, the call must happen outside the graph.
|
||||||
|
|
||||||
|
### Why not inline the streaming call inside a graph node?
|
||||||
|
|
||||||
|
Yielding from inside a LangGraph node to an outer generator is architecturally complex and not idiomatic to LangGraph. It requires either a callback mechanism or breaking the node abstraction.
|
||||||
|
|
||||||
|
### Trade-offs
|
||||||
|
|
||||||
|
| Concern | Two-phase design | Alternative (streaming inside graph) |
|
||||||
|
|---|---|---|
|
||||||
|
| Code duplication | Medium — routing logic exists in both graphs | Low |
|
||||||
|
| Architectural clarity | High — phases are clearly separated | Low |
|
||||||
|
| LangGraph compatibility | High — standard usage | Low — requires framework internals |
|
||||||
|
| Maintainability | Requires keeping `build_prepare_graph` and `build_final_messages` in sync with `build_graph` | Single source of routing truth |
|
||||||
|
|
||||||
|
The duplication risk is accepted because: (1) the routing logic is simple (3 branches), (2) the prepare graph is strictly a subset of the full graph, and (3) both are tested via the same integration test queries.
|
||||||
|
|
||||||
|
---

## Consequences

- `graph.py` now exports three functions: `build_graph`, `build_prepare_graph`, `build_final_messages`.
- Any change to query routing logic in `build_graph` must be mirrored in `build_prepare_graph`.
- Any change to prompt selection in `generate` / `generate_code` / `respond_conversational` must be mirrored in `build_final_messages`.
- Session history persistence happens **after the stream ends**, not mid-stream. A client that disconnects early will cause history to not be saved for that turn.

@@ -0,0 +1,63 @@

# ADR-0003: Hybrid Retrieval (BM25 + kNN) with RRF Fusion

**Date:** 2026-03-05
**Status:** Accepted
**Deciders:** Rafael Ruiz (CTO), MrHouston Engineering

---

## Context

The RAG pipeline needs a retrieval strategy for finding relevant AVAP documentation chunks from Elasticsearch. The knowledge base contains a mix of:

- **Prose documentation** (explanations of AVAP concepts, commands, parameters) — benefits from semantic (dense) retrieval
- **Code examples and BNF grammar** (exact syntax patterns, function signatures) — benefits from lexical (sparse) retrieval, where exact token matches are critical

A single retrieval strategy will underperform for one of these document types.

---

## Decision

Implement **hybrid retrieval** combining:
- **BM25** (Elasticsearch `multi_match` on `content^2` and `text^2` fields) for lexical relevance
- **kNN** (Elasticsearch `knn` on the `embedding` field) for semantic relevance
- **RRF (Reciprocal Rank Fusion)** with constant `k=60` to fuse rankings from both systems

The fused top-8 documents are passed to the generation node as context.

Query reformulation (`reformulate` node) runs before retrieval and rewrites the user query into keyword-optimized form to improve BM25 recall for AVAP-specific terminology.

---

## Rationale

### Why hybrid over pure semantic?

AVAP is a domain-specific language with precise, non-negotiable syntax. For queries like "how does `addVar` work", exact lexical matching on the function name `addVar` is more reliable than semantic similarity, which may confuse similar-sounding functions or return contextually related but syntactically different commands.

### Why hybrid over pure BM25?

Conversational queries ("explain how loops work in AVAP", "what's the difference between addVar and setVar") benefit from semantic search that captures meaning beyond exact keyword overlap.

### Why RRF over score normalization?

BM25 and kNN scores are on different scales and distributions. Normalizing them requires careful calibration per index. RRF operates on ranks — not scores — making it robust to distribution differences and requiring no per-deployment tuning. The `k=60` constant is the standard literature value.

### Retrieval parameters

| Parameter | Value | Rationale |
|---|---|---|
| `k` (top documents) | 8 | Balances context richness vs. context window length |
| `num_candidates` (kNN) | `k × 5 = 40` | Standard ES kNN oversampling ratio |
| BM25 fields | `content^2, text^2` | Boost content/text fields; `^2` emphasizes them over metadata |
| Fuzziness (BM25) | `AUTO` | Handles minor typos in AVAP function names |
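
The fusion step itself is a few lines. A sketch of RRF over two ranked ID lists with the standard `k=60` (document IDs are illustrative):

```python
from collections import defaultdict

def rrf_fuse(rankings: list[list[str]], k: int = 60, top_n: int = 8) -> list[str]:
    """Fuse ranked ID lists with Reciprocal Rank Fusion: score = sum of 1/(k + rank)."""
    scores: dict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

bm25_hits = ["doc_addvar", "doc_setvar", "doc_loops"]   # lexical ranking
knn_hits = ["doc_loops", "doc_addvar", "doc_http"]      # semantic ranking
print(rrf_fuse([bm25_hits, knn_hits]))
```

An empty ranking (a degraded BM25 or kNN leg) simply contributes nothing, consistent with the graceful-degradation behavior this ADR describes.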

---

## Consequences

- Retrieval requires two ES queries per request (BM25 + kNN). This is acceptable given the tunnel latency baseline already incurred.
- If either BM25 or kNN fails (e.g., embedding model unavailable), the system degrades gracefully: the failing component logs a warning and returns an empty list; RRF fusion proceeds with the available rankings.
- Context length grows with `k`. At `k=8` with typical chunk sizes (~300 tokens each), context is ~2400 tokens — within the `qwen2.5:1.5b` context window.
- Changing `k` has a direct impact on both retrieval quality and generation latency. Any change must be evaluated with `EvaluateRAG` before merging.

@@ -0,0 +1,54 @@

# ADR-0004: Claude as the RAGAS Evaluation Judge

**Date:** 2026-03-10
**Status:** Accepted
**Deciders:** Rafael Ruiz (CTO), MrHouston Engineering

---

## Context

The `EvaluateRAG` endpoint runs RAGAS metrics to measure the quality of the RAG pipeline. RAGAS metrics (`faithfulness`, `answer_relevancy`, `context_recall`, `context_precision`) require an LLM judge to score answers against ground truth and context.

The production LLM is Ollama `qwen2.5:1.5b` — a small, locally-hosted model optimized for AVAP code generation speed. Using it as the evaluation judge creates a conflict of interest (measuring a system with the same model that produces it) and a quality concern (small models produce unreliable evaluation scores).

---

## Decision

Use **Claude (`claude-sonnet-4-20250514`) as the RAGAS evaluation judge**, accessed via the Anthropic API.

The production Ollama LLM is still used for **answer generation** during evaluation (to measure real-world pipeline quality). Only the scoring step uses Claude.

This requires `ANTHROPIC_API_KEY` to be set. The `EvaluateRAG` endpoint fails with an explicit error if the key is missing.

---

## Rationale

### Separation of generation and evaluation

Using a different model for generation and evaluation is standard practice in LLM system evaluation. The evaluation judge must be:

1. **Independent** — not the same model being measured
2. **High-capability** — capable of nuanced faithfulness and relevancy judgements
3. **Deterministic** — consistent scores across runs (achieved via `temperature=0`)

### Why Claude specifically?

- Claude Sonnet-class models score among the highest on LLM-as-judge benchmarks for English and multilingual evaluation tasks
- The AVAP knowledge base contains bilingual content (Spanish + English); Claude handles both reliably
- The Anthropic SDK is already available in the dependency stack (`langchain-anthropic`)

### Cost implications

Claude is called only during explicit `EvaluateRAG` invocations, not during production queries. Cost per evaluation run depends on dataset size. For 50 questions at standard RAGAS prompt lengths, estimated cost is < $0.50 using Sonnet pricing.

---
|
||||||
|
|
||||||
|
## Consequences
|
||||||
|
|
||||||
|
- `ANTHROPIC_API_KEY` and `ANTHROPIC_MODEL` become required configuration for the evaluation feature.
|
||||||
|
- Evaluation runs incur external API costs. This should be factored into the evaluation cadence policy.
|
||||||
|
- The `judge_model` field in `EvalResponse` records which Claude version was used, enabling score comparisons across model versions over time.
|
||||||
|
- If Anthropic's API is unreachable or rate-limited, `EvaluateRAG` will fail. This is acceptable since evaluation is a batch operation, not a real-time user-facing feature.
|
||||||
|
- Any change to `ANTHROPIC_MODEL` may alter scoring distributions. Historical eval scores are only comparable when the same judge model was used.
|
||||||
|
|
@ -0,0 +1,339 @@

# Brunix Assistance Engine — API Reference

> **Protocol:** gRPC (proto3)
> **Port:** `50052` (host) → `50051` (container)
> **Reflection:** Enabled — service introspection available via `grpcurl`
> **Source of truth:** `Docker/protos/brunix.proto`

---

## Table of Contents

1. [Service Definition](#1-service-definition)
2. [Methods](#2-methods)
   - [AskAgent](#21-askagent)
   - [AskAgentStream](#22-askagentstream)
   - [EvaluateRAG](#23-evaluaterag)
3. [Message Types](#3-message-types)
4. [Error Handling](#4-error-handling)
5. [Client Examples](#5-client-examples)
6. [OpenAI-Compatible Proxy](#6-openai-compatible-proxy)

---

## 1. Service Definition

```protobuf
package brunix;

service AssistanceEngine {
  rpc AskAgent (AgentRequest) returns (stream AgentResponse);
  rpc AskAgentStream (AgentRequest) returns (stream AgentResponse);
  rpc EvaluateRAG (EvalRequest) returns (EvalResponse);
}
```

Both `AskAgent` and `AskAgentStream` return a **server-side stream** of `AgentResponse` messages. They differ in how they produce and deliver the response — see [§2.1](#21-askagent) and [§2.2](#22-askagentstream).

---

## 2. Methods

### 2.1 `AskAgent`

**Behaviour:** Runs the full LangGraph pipeline (classify → reformulate → retrieve → generate) using `llm.invoke()`. Returns the complete answer as a **single** `AgentResponse` message with `is_final = true`.

**Use case:** Clients that do not support streaming or need a single atomic response.

**Request:**

```protobuf
message AgentRequest {
  string query = 1;       // The user's question. Required. Max recommended: 4096 chars.
  string session_id = 2;  // Conversation session identifier. Optional.
                          // If empty, defaults to "default" (shared session).
                          // Use a UUID per user/conversation for isolation.
}
```

**Response stream:**

| Message # | `text` | `avap_code` | `is_final` |
|---|---|---|---|
| 1 (only) | Full answer text | `"AVAP-2026"` | `true` |

**Latency characteristics:** Depends on LLM generation time (non-streaming). Typically 3–15 seconds for `qwen2.5:1.5b` on the Devaron cluster.

---

### 2.2 `AskAgentStream`

**Behaviour:** Runs `prepare_graph` (classify → reformulate → retrieve), then calls `llm.stream()` directly. Emits one `AgentResponse` per token from Ollama, followed by a terminal message.

**Use case:** Interactive clients (chat UIs, terminal tools) that need progressive rendering.

**Request:** Same `AgentRequest` as `AskAgent`.

**Response stream:**

| Message # | `text` | `avap_code` | `is_final` |
|---|---|---|---|
| 1…N | Single token | `""` | `false` |
| N+1 (final) | `""` | `""` | `true` |

**Client contract:**

- Accumulate `text` from all messages where `is_final == false` to reconstruct the full answer.
- The `is_final == true` message signals end-of-stream. Its `text` is always empty and should be discarded.
- Do not close the stream early — the engine will fail to persist conversation history if the stream is interrupted.
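
This contract reduces to a small accumulator on the client side. The sketch below models responses as plain dicts with the same field names as `AgentResponse`; the gRPC transport and generated stubs are assumed away, so it illustrates the contract rather than a full client.

```python
def accumulate_stream(responses):
    """Reassemble the full answer from an AskAgentStream response stream.

    `responses` is any iterable of AgentResponse-like dicts exposing
    `text` and `is_final`. Tokens are concatenated until the terminal
    message (is_final == True), whose empty text is discarded.
    """
    parts = []
    for msg in responses:
        if msg["is_final"]:
            break  # terminal marker: its text is always empty, discard it
        parts.append(msg["text"])
    return "".join(parts)


# Simulated stream, mirroring the response table above
stream = [
    {"text": "Here", "is_final": False},
    {"text": " is", "is_final": False},
    {"text": " a", "is_final": False},
    {"text": "", "is_final": True},
]
print(accumulate_stream(stream))  # → Here is a
```

With a real stub, the same loop runs over the iterator returned by `stub.AskAgentStream(request)`.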

---

### 2.3 `EvaluateRAG`

**Behaviour:** Runs the RAGAS evaluation pipeline against the golden dataset. Uses the production Ollama LLM for answer generation and Claude as the evaluation judge.

> **Requirement:** `ANTHROPIC_API_KEY` must be configured in the environment. This endpoint will return an error response if it is missing.

**Request:**

```protobuf
message EvalRequest {
  string category = 1;  // Optional. Filter golden dataset by category name.
                        // If empty, all categories are evaluated.
  int32 limit = 2;      // Optional. Evaluate only the first N questions.
                        // If 0, all matching questions are evaluated.
  string index = 3;     // Optional. Elasticsearch index to evaluate against.
                        // If empty, uses the server's configured ELASTICSEARCH_INDEX.
}
```

**Response (single, non-streaming):**

```protobuf
message EvalResponse {
  string status = 1;              // "ok" or error description
  int32 questions_evaluated = 2;  // Number of questions actually processed
  float elapsed_seconds = 3;      // Total wall-clock time
  string judge_model = 4;         // Claude model used as judge
  string index = 5;               // Elasticsearch index evaluated

  // RAGAS metric scores (0.0 – 1.0)
  float faithfulness = 6;
  float answer_relevancy = 7;
  float context_recall = 8;
  float context_precision = 9;

  float global_score = 10;        // Mean of non-zero metric scores
  string verdict = 11;            // "EXCELLENT" | "ACCEPTABLE" | "INSUFFICIENT"

  repeated QuestionDetail details = 12;
}

message QuestionDetail {
  string id = 1;              // Question ID from golden dataset
  string category = 2;        // Question category
  string question = 3;        // Question text
  string answer_preview = 4;  // First 300 chars of generated answer
  int32 n_chunks = 5;         // Number of context chunks retrieved
}
```

**Verdict thresholds:**

| Score | Verdict |
|---|---|
| ≥ 0.80 | `EXCELLENT` |
| ≥ 0.60 | `ACCEPTABLE` |
| < 0.60 | `INSUFFICIENT` |
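
The thresholds correspond to a simple cutoff function. A minimal sketch (the function name is illustrative, not taken from the source):

```python
def verdict(global_score: float) -> str:
    """Map a RAGAS global score in [0.0, 1.0] to the engine's verdict label."""
    if global_score >= 0.80:
        return "EXCELLENT"
    if global_score >= 0.60:
        return "ACCEPTABLE"
    return "INSUFFICIENT"


print(verdict(0.7615))  # → ACCEPTABLE
```

Note the boundaries are inclusive: a score of exactly `0.80` is `EXCELLENT` and exactly `0.60` is `ACCEPTABLE`.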

---

## 3. Message Types

### `AgentRequest`

| Field | Type | Required | Description |
|---|---|---|---|
| `query` | `string` | Yes | User's natural language question |
| `session_id` | `string` | No | Conversation identifier for multi-turn context. Use a stable UUID per user session. |

### `AgentResponse`

| Field | Type | Description |
|---|---|---|
| `text` | `string` | Token text (streaming) or full answer text (non-streaming) |
| `avap_code` | `string` | Currently always `"AVAP-2026"` in non-streaming mode, empty in streaming |
| `is_final` | `bool` | `true` only on the last message of the stream |

### `EvalRequest`

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| `category` | `string` | No | `""` (all) | Filter golden dataset by category |
| `limit` | `int32` | No | `0` (all) | Max questions to evaluate |
| `index` | `string` | No | `$ELASTICSEARCH_INDEX` | ES index to evaluate |

### `EvalResponse`

See full definition in [§2.3](#23-evaluaterag).

---

## 4. Error Handling

The engine catches all exceptions and returns them as terminal `AgentResponse` messages rather than gRPC status errors. This means:

- The stream will **not** be terminated with a non-OK gRPC status code on application-level errors.
- Check for error strings in the `text` field that begin with `[ENG] Error:`.
- The stream will still end with `is_final = true`.

**Example error response:**

```json
{"text": "[ENG] Error: Connection refused connecting to Ollama", "is_final": true}
```
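
Because errors arrive in-band rather than as gRPC status codes, clients need a string check on each terminal message. A minimal sketch (the helper name is illustrative):

```python
ERROR_PREFIX = "[ENG] Error:"

def is_engine_error(text: str) -> bool:
    """Detect an in-band application error in an AgentResponse `text` field."""
    return text.startswith(ERROR_PREFIX)


print(is_engine_error("[ENG] Error: Connection refused connecting to Ollama"))  # → True
print(is_engine_error("addVar is an AVAP command that declares a new variable"))  # → False
```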

**`EvaluateRAG` error response:**

Returned as a single `EvalResponse` with `status` set to the error description:

```json
{"status": "ANTHROPIC_API_KEY no configurada en .env", ...}
```

---

## 5. Client Examples

### Introspect the service

```bash
grpcurl -plaintext localhost:50052 list
# Output: brunix.AssistanceEngine

grpcurl -plaintext localhost:50052 describe brunix.AssistanceEngine
```

### `AskAgent` — full response

```bash
grpcurl -plaintext \
  -d '{"query": "What is addVar in AVAP?", "session_id": "dev-001"}' \
  localhost:50052 \
  brunix.AssistanceEngine/AskAgent
```

Expected response:

```json
{
  "text": "addVar is an AVAP command that declares a new variable...",
  "avap_code": "AVAP-2026",
  "is_final": true
}
```

### `AskAgentStream` — token streaming

```bash
grpcurl -plaintext \
  -d '{"query": "Write an AVAP API that returns hello world", "session_id": "dev-001"}' \
  localhost:50052 \
  brunix.AssistanceEngine/AskAgentStream
```

Expected response (truncated):

```json
{"text": "Here", "is_final": false}
{"text": " is", "is_final": false}
{"text": " a", "is_final": false}
...
{"text": "", "is_final": true}
```

### `EvaluateRAG` — run evaluation

```bash
# Evaluate first 10 questions from the "core_syntax" category
grpcurl -plaintext \
  -d '{"category": "core_syntax", "limit": 10}' \
  localhost:50052 \
  brunix.AssistanceEngine/EvaluateRAG
```

Expected response:

```json
{
  "status": "ok",
  "questions_evaluated": 10,
  "elapsed_seconds": 142.3,
  "judge_model": "claude-sonnet-4-20250514",
  "index": "avap-docs-test",
  "faithfulness": 0.8421,
  "answer_relevancy": 0.7913,
  "context_recall": 0.7234,
  "context_precision": 0.6891,
  "global_score": 0.7615,
  "verdict": "ACCEPTABLE",
  "details": [...]
}
```

### Multi-turn conversation example

```bash
# Turn 1
grpcurl -plaintext \
  -d '{"query": "What is registerEndpoint?", "session_id": "user-abc"}' \
  localhost:50052 brunix.AssistanceEngine/AskAgentStream

# Turn 2 — the engine has history from Turn 1
grpcurl -plaintext \
  -d '{"query": "Can you show me an example?", "session_id": "user-abc"}' \
  localhost:50052 brunix.AssistanceEngine/AskAgentStream
```

### Regenerate gRPC stubs after modifying `brunix.proto`

```bash
python -m grpc_tools.protoc \
  -I./Docker/protos \
  --python_out=./Docker/src \
  --grpc_python_out=./Docker/src \
  ./Docker/protos/brunix.proto
```

---

## 6. OpenAI-Compatible Proxy

The container also exposes an HTTP server on port `8000` (`openai_proxy.py`) that wraps `AskAgentStream` under an OpenAI-compatible endpoint. This allows integration with any tool that supports the OpenAI Chat Completions API.

**Base URL:** `http://localhost:8000`

### `POST /v1/chat/completions`

**Request body:**

```json
{
  "model": "brunix",
  "messages": [
    {"role": "user", "content": "What is addVar in AVAP?"}
  ],
  "stream": true
}
```

**Notes:**

- The `model` field is ignored; the engine always uses the configured `OLLAMA_MODEL_NAME`.
- Session management is handled internally by the proxy. Conversation continuity across separate HTTP requests is not guaranteed.
- Only `stream: true` is fully supported. Non-streaming mode may be available but is not the primary use case.

**Example with curl:**

```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "brunix",
    "messages": [{"role": "user", "content": "Explain AVAP loops"}],
    "stream": true
  }'
```
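
Clients that cannot use an OpenAI SDK can parse the streamed body themselves. The sketch below assumes the proxy emits standard Chat Completions SSE chunks (`data: {...}` lines carrying `choices[0].delta.content`, terminated by `data: [DONE]`); the exact wire format should be confirmed against `openai_proxy.py`.

```python
import json

def parse_sse_content(lines):
    """Extract assistant text from OpenAI-style SSE 'data:' lines."""
    parts = []
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank separators / keep-alives
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        parts.append(delta.get("content", ""))
    return "".join(parts)


# Simulated SSE lines as they would arrive over the curl call above
sample = [
    'data: {"choices": [{"delta": {"content": "AVAP "}}]}',
    "",
    'data: {"choices": [{"delta": {"content": "loops"}}]}',
    "data: [DONE]",
]
print(parse_sse_content(sample))  # → AVAP loops
```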
@ -0,0 +1,463 @@

# Brunix Assistance Engine — Architecture Reference

> **Audience:** Engineers contributing to this repository, architects reviewing the system design, and operators responsible for its deployment.
> **Last updated:** 2026-03-18
> **Version:** 1.5.x

---

## Table of Contents

1. [System Overview](#1-system-overview)
2. [Component Inventory](#2-component-inventory)
3. [Request Lifecycle](#3-request-lifecycle)
4. [LangGraph Workflow](#4-langgraph-workflow)
5. [RAG Pipeline — Hybrid Search](#5-rag-pipeline--hybrid-search)
6. [Streaming Architecture (AskAgentStream)](#6-streaming-architecture-askagentstream)
7. [Evaluation Pipeline (EvaluateRAG)](#7-evaluation-pipeline-evaluaterag)
8. [Data Ingestion Pipeline](#8-data-ingestion-pipeline)
9. [Infrastructure Layout](#9-infrastructure-layout)
10. [Session State & Conversation Memory](#10-session-state--conversation-memory)
11. [Observability Stack](#11-observability-stack)
12. [Security Boundaries](#12-security-boundaries)
13. [Known Limitations & Future Work](#13-known-limitations--future-work)

---

## 1. System Overview

The **Brunix Assistance Engine** is a stateful, streaming-capable AI service that answers questions about the AVAP programming language. It combines:

- **gRPC** as the primary communication interface (port `50051` inside container, `50052` on host)
- **LangGraph** for deterministic, multi-step agentic orchestration
- **Hybrid RAG** (BM25 + kNN with RRF fusion) over an Elasticsearch vector index
- **Ollama** as the local LLM and embedding backend
- **RAGAS + Claude** as the automated evaluation judge

A secondary **OpenAI-compatible HTTP proxy** (port `8000`) is served via FastAPI/Uvicorn, enabling integration with tools that expect the OpenAI API format.

```
┌─────────────────────────────────────────────────────────────┐
│                      External Clients                       │
│   grpcurl / App SDK         │   OpenAI-compatible client    │
└─────────────┬───────────────┴───────────────┬───────────────┘
              │ gRPC :50052                   │ HTTP :8000
              ▼                               ▼
┌─────────────────────────────────────────────────────────────┐
│                      Docker Container                       │
│                                                             │
│  ┌─────────────────────┐    ┌──────────────────────────┐    │
│  │  server.py (gRPC)   │    │  openai_proxy.py (HTTP)  │    │
│  │  BrunixEngine       │    │  FastAPI / Uvicorn       │    │
│  └──────────┬──────────┘    └──────────────────────────┘    │
│             │                                               │
│  ┌──────────▼──────────────────────────────────────────┐    │
│  │             LangGraph Orchestration                 │    │
│  │    classify → reformulate → retrieve → generate     │    │
│  └──────────────────────────┬──────────────────────────┘    │
│                             │                               │
│          ┌──────────────────┼────────────────────┐          │
│          ▼                  ▼                    ▼          │
│    Ollama (LLM)       Ollama (Embed)       Elasticsearch    │
│    via tunnel         via tunnel           via tunnel       │
└─────────────────────────────────────────────────────────────┘
              │  kubectl port-forward tunnels  │
              ▼                                ▼
         Devaron Cluster (Vultr Kubernetes)
         ollama-light-service:11434    brunix-vector-db:9200
         brunix-postgres:5432          Langfuse UI
```

---

## 2. Component Inventory

| Component | File / Service | Responsibility |
|---|---|---|
| **gRPC Server** | `Docker/src/server.py` | Entry point. Implements the `AssistanceEngine` servicer. Initializes LLM, embeddings, ES client, and both graphs. |
| **Full Graph** | `Docker/src/graph.py` → `build_graph()` | Complete workflow: classify → reformulate → retrieve → generate. Used by `AskAgent` and `EvaluateRAG`. |
| **Prepare Graph** | `Docker/src/graph.py` → `build_prepare_graph()` | Partial workflow: classify → reformulate → retrieve. Does **not** call the LLM for generation. Used by `AskAgentStream` to enable manual token streaming. |
| **Message Builder** | `Docker/src/graph.py` → `build_final_messages()` | Reconstructs the final prompt list from prepared state for `llm.stream()`. |
| **Prompt Library** | `Docker/src/prompts.py` | Centralized definitions for `CLASSIFY`, `REFORMULATE`, `GENERATE`, `CODE_GENERATION`, and `CONVERSATIONAL` prompts. |
| **Agent State** | `Docker/src/state.py` | `AgentState` TypedDict shared across all graph nodes. |
| **Evaluation Suite** | `Docker/src/evaluate.py` | RAGAS-based pipeline. Uses the production retriever + Ollama LLM for generation, and Claude as the impartial judge. |
| **OpenAI Proxy** | `Docker/src/openai_proxy.py` | FastAPI application that wraps `AskAgentStream` under a `/v1/chat/completions` endpoint. |
| **LLM Factory** | `Docker/src/utils/llm_factory.py` | Provider-agnostic factory for chat models (Ollama, AWS Bedrock). |
| **Embedding Factory** | `Docker/src/utils/emb_factory.py` | Provider-agnostic factory for embedding models (Ollama, HuggingFace). |
| **Ingestion Pipeline** | `scripts/pipelines/flows/elasticsearch_ingestion.py` | Chunks and ingests AVAP documents into Elasticsearch with embeddings. |
| **Dataset Generator** | `scripts/pipelines/flows/generate_mbap.py` | Generates synthetic MBPP-style AVAP problems using Claude. |
| **MBPP Translator** | `scripts/pipelines/flows/translate_mbpp.py` | Translates the MBPP Python dataset into AVAP equivalents. |

---

## 3. Request Lifecycle

### 3.1 `AskAgent` (non-streaming)

```
Client → gRPC AgentRequest{query, session_id}
  │
  ├─ Load conversation history from session_store[session_id]
  ├─ Build initial_state = {messages: history + [user_msg], ...}
  │
  └─ graph.invoke(initial_state)
       ├─ classify    → query_type ∈ {RETRIEVAL, CODE_GENERATION, CONVERSATIONAL}
       ├─ reformulate → reformulated_query (keyword-optimized for semantic search)
       ├─ retrieve    → context (top-8 hybrid RRF chunks from Elasticsearch)
       └─ generate    → final AIMessage (llm.invoke)
            │
            ├─ Persist updated history to session_store[session_id]
            └─ yield AgentResponse{text, avap_code="AVAP-2026", is_final=True}
```

### 3.2 `AskAgentStream` (token streaming)

```
Client → gRPC AgentRequest{query, session_id}
  │
  ├─ Load history from session_store[session_id]
  ├─ Build initial_state
  │
  ├─ prepare_graph.invoke(initial_state)     ← Phase 1: no LLM generation
  │    ├─ classify
  │    ├─ reformulate
  │    └─ retrieve (or skip_retrieve if CONVERSATIONAL)
  │
  ├─ build_final_messages(prepared_state)    ← Reconstruct prompt list
  │
  └─ for chunk in llm.stream(final_messages):
       └─ yield AgentResponse{text=token, is_final=False}
            │
            ├─ Persist full assembled response to session_store
            └─ yield AgentResponse{text="", is_final=True}
```

### 3.3 `EvaluateRAG`

```
Client → gRPC EvalRequest{category?, limit?, index?}
  │
  └─ evaluate.run_evaluation(...)
       ├─ Load golden_dataset.json
       ├─ Filter by category / limit
       ├─ For each question:
       │    ├─ retrieve_context (hybrid BM25+kNN, same as production)
       │    └─ generate_answer  (Ollama LLM + GENERATE_PROMPT)
       ├─ Build RAGAS Dataset
       ├─ Run RAGAS metrics with Claude as judge:
       │    faithfulness / answer_relevancy / context_recall / context_precision
       └─ Compute global_score + verdict (EXCELLENT / ACCEPTABLE / INSUFFICIENT)
            │
            └─ return EvalResponse{scores, global_score, verdict, details[]}
```

---

## 4. LangGraph Workflow

### 4.1 Full Graph (`build_graph`)

```
                ┌─────────────┐
                │  classify   │
                └──────┬──────┘
                       │
      ┌────────────────┼──────────────────┐
      ▼                ▼                  ▼
  RETRIEVAL     CODE_GENERATION     CONVERSATIONAL
      │                │                  │
      └────────┬───────┘                  │
               ▼                          ▼
      ┌──────────────┐       ┌────────────────────────┐
      │ reformulate  │       │ respond_conversational │
      └──────┬───────┘       └───────────┬────────────┘
             ▼                           │
      ┌──────────────┐                   │
      │   retrieve   │                   │
      └──────┬───────┘                   │
             │                           │
    ┌────────┴───────────┐               │
    ▼                    ▼               │
┌──────────┐     ┌───────────────┐       │
│ generate │     │ generate_code │       │
└────┬─────┘     └───────┬───────┘       │
     │                   │               │
     └───────────────────┴───────────────┘
                         │
                        END
```

### 4.2 Prepare Graph (`build_prepare_graph`)

Identical routing for classify, but generation nodes are replaced by `END`. The `CONVERSATIONAL` branch uses `skip_retrieve` (returns empty context without querying Elasticsearch).

### 4.3 Query Type Routing

| `query_type` | Triggers retrieve? | Generation prompt |
|---|---|---|
| `RETRIEVAL` | Yes | `GENERATE_PROMPT` (explanation-focused) |
| `CODE_GENERATION` | Yes | `CODE_GENERATION_PROMPT` (code-focused, returns AVAP blocks) |
| `CONVERSATIONAL` | No | `CONVERSATIONAL_PROMPT` (reformulation of prior answer) |
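
The routing table corresponds to a conditional-edge function after the classify node. A minimal sketch with illustrative names (the actual node names live in `Docker/src/graph.py`):

```python
def route_after_classify(state: dict) -> str:
    """Pick the next node from the classified query type.

    Mirrors the routing table above: retrieval-style queries go through
    reformulation and retrieval; conversational ones answer directly
    without touching Elasticsearch. Names here are illustrative.
    """
    if state["query_type"] in ("RETRIEVAL", "CODE_GENERATION"):
        return "reformulate"            # → retrieve → generate / generate_code
    return "respond_conversational"     # no retrieval for CONVERSATIONAL


print(route_after_classify({"query_type": "CODE_GENERATION"}))  # → reformulate
print(route_after_classify({"query_type": "CONVERSATIONAL"}))   # → respond_conversational
```

In LangGraph, such a function would be wired with `add_conditional_edges("classify", route_after_classify, ...)`.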

---

## 5. RAG Pipeline — Hybrid Search

The retrieval system (`hybrid_search_native`) fuses BM25 lexical search and kNN dense vector search using **Reciprocal Rank Fusion (RRF)**.

```
User query
  │
  ├─ embeddings.embed_query(query) → query_vector [768-dim]
  │
  ├─ ES multi_match (BM25) on fields [content^2, text^2]
  │    └─ top-k BM25 hits
  │
  └─ ES knn on field [embedding], num_candidates = k×5
       └─ top-k kNN hits
            │
            ├─ RRF fusion: score(doc) = Σ 1/(rank + 60)
            │
            └─ Top-8 documents → format_context() → context string
```

**RRF constant:** `60` (standard value; prevents high-rank documents from dominating while still rewarding consensus between both retrieval modes).
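
The fusion formula above can be sketched over two ranked ID lists; this is a generic RRF implementation under the stated constant, not the actual `hybrid_search_native` code:

```python
def rrf_fuse(bm25_ids, knn_ids, k=60, top_n=8):
    """Reciprocal Rank Fusion over two ranked result lists.

    score(doc) = Σ 1 / (rank + k) over each list the doc appears in,
    with rank starting at 1. k=60 matches the constant noted above.
    """
    scores = {}
    for ranking in (bm25_ids, knn_ids):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (rank + k)
    # Highest fused score first, truncated to the context window size
    return sorted(scores, key=scores.get, reverse=True)[:top_n]


# "b" appears near the top of BOTH lists, so consensus lifts it above
# "a", which only tops the BM25 list:
print(rrf_fuse(["a", "b", "c"], ["b", "d", "a"]))  # → ['b', 'a', 'd', 'c']
```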

**Chunk metadata** attached to each retrieved document:

| Field | Description |
|---|---|
| `chunk_id` | Unique identifier within the index |
| `source_file` | Origin document filename |
| `doc_type` | `prose`, `code`, `code_example`, `bnf` |
| `block_type` | AVAP block type: `function`, `if`, `startLoop`, `try` |
| `section` | Document section/chapter heading |

Documents of type `code`, `code_example`, `bnf`, or block type `function / if / startLoop / try` are tagged as `[AVAP CODE]` in the formatted context, signaling the LLM to treat them as executable syntax rather than prose.
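
The tagging rule reduces to a membership check on those two metadata fields. A minimal sketch (the helper name is illustrative; the actual logic lives in `format_context()`):

```python
CODE_DOC_TYPES = {"code", "code_example", "bnf"}
CODE_BLOCK_TYPES = {"function", "if", "startLoop", "try"}

def is_avap_code(chunk: dict) -> bool:
    """Decide whether a retrieved chunk should carry the [AVAP CODE] tag."""
    return (chunk.get("doc_type") in CODE_DOC_TYPES
            or chunk.get("block_type") in CODE_BLOCK_TYPES)


print(is_avap_code({"doc_type": "bnf"}))                        # → True
print(is_avap_code({"doc_type": "prose", "block_type": "if"}))  # → True
print(is_avap_code({"doc_type": "prose"}))                      # → False
```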

---

## 6. Streaming Architecture (AskAgentStream)

The two-phase streaming design is critical to understand:

**Why not stream through LangGraph?**
LangGraph's `stream()` method yields full state snapshots per node, not individual tokens. To achieve true per-token streaming to the gRPC client, the generation step is deliberately extracted from the graph and called directly via `llm.stream()`.

**Phase 1 — Deterministic preparation (graph-managed):**

- Classification, query reformulation, and retrieval run through `prepare_graph.invoke()`.
- This phase runs synchronously and produces the complete context before any token is emitted to the client.

**Phase 2 — Token streaming (manual):**

- `build_final_messages()` reconstructs the exact prompt that `generate` / `generate_code` / `respond_conversational` would have used.
- `llm.stream(final_messages)` yields one `AIMessageChunk` per token from Ollama.
- Each token is immediately forwarded to the gRPC client as `AgentResponse{text=token, is_final=False}`.
- After the stream ends, the full assembled text is persisted to `session_store`.

**Backpressure:** gRPC streaming is flow-controlled by the client. If the client stops reading, the Ollama token stream will block at the `yield` point. No explicit buffer overflow protection is implemented (acceptable for the current single-client dev mode).

---

## 7. Evaluation Pipeline (EvaluateRAG)

The evaluation suite implements an **offline RAG evaluation** pattern using RAGAS metrics.

### Judge model separation

The production LLM (Ollama `qwen2.5:1.5b`) is used for **answer generation** — the same pipeline as production, to measure real-world quality. Claude (`claude-sonnet-4-20250514`) is used as the **evaluation judge** — an independent, high-capability model that scores the generated answers against ground truth.

### RAGAS metrics

| Metric | Measures | Input |
|---|---|---|
| `faithfulness` | Are claims in the answer supported by the retrieved context? | answer + contexts |
| `answer_relevancy` | Is the answer relevant to the question? | answer + question |
| `context_recall` | Does the retrieved context cover the ground truth? | contexts + ground_truth |
| `context_precision` | Are the retrieved chunks useful (signal-to-noise)? | contexts + ground_truth |

### Global score & verdict

```
global_score = mean(non-zero metric scores)

verdict:
  ≥ 0.80 → EXCELLENT
  ≥ 0.60 → ACCEPTABLE
  < 0.60 → INSUFFICIENT
```
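
The aggregation rule above can be sketched as follows; treating a zero score as "metric unavailable" (and therefore excluded from the mean) is an interpretation of the rule, not code taken from `evaluate.py`:

```python
def global_score(metrics: dict) -> float:
    """Mean of the non-zero RAGAS metric scores.

    Zero scores are assumed to mean 'metric unavailable' and are
    excluded rather than dragging the mean down.
    """
    nonzero = [v for v in metrics.values() if v > 0]
    return sum(nonzero) / len(nonzero) if nonzero else 0.0


# The four scores from the EvaluateRAG example response:
scores = {
    "faithfulness": 0.8421,
    "answer_relevancy": 0.7913,
    "context_recall": 0.7234,
    "context_precision": 0.6891,
}
print(global_score(scores))  # ≈ 0.7615, matching its global_score field
```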

### Golden dataset

Located at `Docker/src/golden_dataset.json`. Each entry follows this schema:

```json
{
  "id": "avap-001",
  "category": "core_syntax",
  "question": "How do you declare a variable in AVAP?",
  "ground_truth": "Use addVar to declare a variable..."
}
```

---

## 8. Data Ingestion Pipeline

Documents flow into the Elasticsearch index through two paths:

### Path A — AVAP documentation (structured markdown)

```
docs/LRM/avap.md
docs/avap_language_github_docs/*.md
docs/developer.avapframework.com/*.md
  │
  ▼
scripts/pipelines/flows/elasticsearch_ingestion.py
  │
  ├─ Load markdown files
  ├─ Chunk using scripts/pipelines/tasks/chunk.py
  │    (semantic chunking via Chonkie library)
  ├─ Generate embeddings via scripts/pipelines/tasks/embeddings.py
  │    (Ollama or HuggingFace embedding model)
  └─ Bulk index into Elasticsearch
       index: avap-docs-* (configurable via ELASTICSEARCH_INDEX)
       mapping: {content, embedding, source_file, doc_type, section, ...}
```

### Path B — Synthetic AVAP code samples

```
docs/samples/*.avap
  │
  ▼
scripts/pipelines/flows/generate_mbap.py
  │
  ├─ Read AVAP LRM (docs/LRM/avap.md)
  ├─ Call Claude API to generate MBPP-style problems
  └─ Output synthetic_datasets/mbpp_avap.json
       (used for fine-tuning and few-shot examples)
```

---

## 9. Infrastructure Layout

### Devaron Cluster (Vultr Kubernetes)

| Service | K8s Name | Port | Purpose |
|---|---|---|---|
| LLM inference | `ollama-light-service` | `11434` | Text generation + embeddings |
| Vector database | `brunix-vector-db` | `9200` | Elasticsearch 8.x |
| Observability DB | `brunix-postgres` | `5432` | PostgreSQL for Langfuse |
| Langfuse UI | — | `80` | `http://45.77.119.180` |

### Kubernetes tunnel commands

```bash
# Terminal 1 — LLM
kubectl port-forward --address 0.0.0.0 svc/ollama-light-service 11434:11434 \
  -n brunix --kubeconfig ./kubernetes/kubeconfig.yaml

# Terminal 2 — Elasticsearch
kubectl port-forward --address 0.0.0.0 svc/brunix-vector-db 9200:9200 \
  -n brunix --kubeconfig ./kubernetes/kubeconfig.yaml

# Terminal 3 — PostgreSQL (Langfuse)
kubectl port-forward --address 0.0.0.0 svc/brunix-postgres 5432:5432 \
  -n brunix --kubeconfig ./kubernetes/kubeconfig.yaml
```

### Port map summary

| Port | Protocol | Service | Scope |
|---|---|---|---|
| `50051` | gRPC | Brunix Engine (inside container) | Internal |
| `50052` | gRPC | Brunix Engine (host-mapped) | External |
| `8000` | HTTP | OpenAI proxy | External |
| `11434` | HTTP | Ollama (via tunnel) | Tunnel |
| `9200` | HTTP | Elasticsearch (via tunnel) | Tunnel |
| `5432` | TCP | PostgreSQL/Langfuse (via tunnel) | Tunnel |
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 10. Session State & Conversation Memory

Conversation history is managed via an in-process dictionary:

```python
session_store: dict[str, list] = defaultdict(list)
# key: session_id (string, provided by client)
# value: list of LangChain BaseMessage objects
```

**Characteristics:**

- **In-memory only.** History is lost on container restart.
- **No TTL or eviction.** Sessions grow unbounded for the lifetime of the process.
- **Thread safety:** Python's GIL provides basic safety for the `ThreadPoolExecutor(max_workers=10)` gRPC server, but concurrent writes to the same `session_id` from two simultaneous requests are not explicitly protected.
- **History window:** `format_history_for_classify()` uses only the last 6 messages for query classification to keep the classify prompt short and deterministic.

> **Future work:** Replace `session_store` with a Redis-backed persistent store to survive restarts and support horizontal scaling.
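
The unprotected-writes and history-window points above can be hardened even before a Redis migration by guarding the dictionary with a lock. A minimal sketch (class and method names are hypothetical, not the engine's code):

```python
import threading
from collections import defaultdict

class SessionStore:
    """Lock-guarded in-process session store with a classify window."""

    def __init__(self, window: int = 6):
        self._lock = threading.Lock()
        self._sessions: dict[str, list] = defaultdict(list)
        self._window = window

    def append(self, session_id: str, message) -> None:
        # Serialize concurrent writes to the same session_id.
        with self._lock:
            self._sessions[session_id].append(message)

    def recent(self, session_id: str) -> list:
        # Return only the last `window` messages (mirrors the
        # 6-message classify window described above).
        with self._lock:
            return self._sessions[session_id][-self._window:]

store = SessionStore()
for i in range(10):
    store.append("abc", f"msg-{i}")
print(len(store.recent("abc")))  # 6
```

The same interface could later be backed by Redis lists without changing call sites.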
---

## 11. Observability Stack

### Langfuse tracing

The server integrates Langfuse for end-to-end LLM tracing. Every `AskAgent` / `AskAgentStream` request creates a trace that captures:

- Input query and session ID
- Each LangGraph node execution (classify, reformulate, retrieve, generate)
- LLM token counts, latency, and cost
- Final response

**Access:** `http://45.77.119.180` — requires a project API key configured via `LANGFUSE_PUBLIC_KEY` and `LANGFUSE_SECRET_KEY`.

### Logging

Structured logging via Python's `logging` module, configured at `INFO` level. Log format:

```
[MODULE] context_info — key=value key=value
```
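
The format can be produced with a small helper; a sketch (the `log_marker` helper is illustrative and does not exist in the codebase):

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("brunix")

def log_marker(marker: str, info: str, **kv) -> str:
    """Emit a line in the documented "[MODULE] info — k=v" shape."""
    pairs = " ".join(f"{k}={v}" for k, v in kv.items())
    line = f"[{marker}] {info} — {pairs}"
    log.info(line)
    return line

log_marker("retrieve", "hybrid search done", docs=8, context_len=5120)
# logs: [retrieve] hybrid search done — docs=8 context_len=5120
```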

Key log markers:

| Marker | Module | Meaning |
|---|---|---|
| `[ESEARCH]` | `server.py` | Elasticsearch connection status |
| `[classify]` | `graph.py` | Query type decision + raw LLM output |
| `[reformulate]` | `graph.py` | Reformulated query string |
| `[hybrid]` | `graph.py` | BM25 / kNN hit counts and RRF result count |
| `[retrieve]` | `graph.py` | Number of docs retrieved and context length |
| `[generate]` | `graph.py` | Response character count |
| `[AskAgentStream]` | `server.py` | Token count and total chars per stream |
| `[eval]` | `evaluate.py` | Per-question retrieval and generation status |
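
The `[hybrid]` marker reports a reciprocal rank fusion (RRF) of the BM25 and kNN result lists. A minimal sketch of standard RRF scoring, assuming the common `k = 60` constant (not necessarily the engine's exact implementation):

```python
def rrf(result_lists: list[list[str]], k: int = 60) -> list[str]:
    """Fuse ranked doc-ID lists: score(d) = sum of 1 / (k + rank(d))."""
    scores: dict[str, float] = {}
    for results in result_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

bm25 = ["d1", "d2", "d3"]
knn = ["d3", "d1", "d4"]
print(rrf([bm25, knn])[0])  # d1
```

A document ranked well in both lists (like `d1` here) beats one that tops only a single list.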

---

## 12. Security Boundaries

| Boundary | Current state | Risk |
|---|---|---|
| gRPC transport | **Insecure** (`add_insecure_port`) | Network interception possible. Acceptable in dev/tunnel setup; requires mTLS for production. |
| Elasticsearch auth | Optional (user/pass or API key via env vars) | Index is accessible without auth if `ELASTICSEARCH_USER` and `ELASTICSEARCH_API_KEY` are unset. |
| Container user | Non-root (`python:3.11-slim` default) | Low risk. Do not override with `root`. |
| Secrets in env | Via `.env` / `docker-compose` env injection | Never commit real values. See [CONTRIBUTING.md](../CONTRIBUTING.md#6-environment-variables-policy). |
| Session store | In-memory, no auth | Any caller with access to the gRPC port can read/write any session by guessing its ID. |
| Kubeconfig | `./kubernetes/kubeconfig.yaml` (local only) | Grants cluster access. Never commit. Listed in `.gitignore`. |

---

## 13. Known Limitations & Future Work

| Area | Limitation | Proposed solution |
|---|---|---|
| Session persistence | In-memory, lost on restart | Redis-backed `session_store` |
| Horizontal scaling | `session_store` is per-process | Sticky sessions or external session store |
| gRPC security | Insecure port | Add TLS + optional mTLS |
| Elasticsearch auth | Not enforced if vars unset | Make auth required; fail-fast on startup |
| Context window | Full history passed to generate; no truncation | Sliding window or summarization for long sessions |
| Evaluation | Golden dataset must be manually maintained | Automated golden dataset refresh pipeline |
| Rate limiting | None on gRPC server | Add interceptor-based rate limiter |
| Health check | No gRPC health protocol | Implement `grpc.health.v1` |
# AVAP Chunker — Language Configuration Reference

> **File:** `scripts/pipelines/ingestion/avap_config.json`
> **Used by:** `avap_chunker.py` (Pipeline B)
> **Last updated:** 2026-03-18

This file is the **grammar definition** for the AVAP language chunker. It tells `avap_chunker.py` how to tokenize, parse, and semantically classify `.avap` source files before they are embedded and ingested into Elasticsearch. Modifying this file changes what the chunker recognises as a block, a statement, or a semantic feature — and therefore what metadata every chunk in the knowledge base carries.

---

## Table of Contents

1. [Top-Level Fields](#1-top-level-fields)
2. [Lexer](#2-lexer)
3. [Blocks](#3-blocks)
4. [Statements](#4-statements)
5. [Semantic Tags](#5-semantic-tags)
6. [How They Work Together](#6-how-they-work-together)
7. [Adding New Constructs](#7-adding-new-constructs)
8. [Full Annotated Example](#8-full-annotated-example)

---

## 1. Top-Level Fields

```json
{
  "language": "avap",
  "version": "1.0",
  "file_extensions": [".avap"]
}
```

| Field | Type | Description |
|---|---|---|
| `language` | string | Human-readable language name. Used in chunker progress reports. |
| `version` | string | Config schema version. Increment when making breaking changes. |
| `file_extensions` | array of strings | File extensions the chunker will process. `.md` files are always processed regardless of this setting. |

---

## 2. Lexer

The lexer section controls how raw source lines are stripped of comments and string literals before pattern matching is applied.

```json
"lexer": {
  "string_delimiters": ["\"", "'"],
  "escape_char": "\\",
  "comment_line": ["///", "//"],
  "comment_block": { "open": "/*", "close": "*/" },
  "line_oriented": true
}
```

| Field | Type | Description |
|---|---|---|
| `string_delimiters` | array of strings | Characters that open and close string literals. Content inside strings is ignored during pattern matching. |
| `escape_char` | string | Character used to escape the next character inside a string. Prevents `\"` from closing the string. |
| `comment_line` | array of strings | Line comment prefixes, evaluated longest-first. Everything after the matched prefix is stripped. AVAP supports both `///` (documentation comments) and `//` (inline comments). |
| `comment_block.open` | string | Block comment opening delimiter. |
| `comment_block.close` | string | Block comment closing delimiter. Content between `/*` and `*/` is stripped before pattern matching. |
| `line_oriented` | bool | When `true`, the lexer processes one line at a time. Should always be `true` for AVAP. |

**Important:** Comment stripping and string boundary detection happen before any block or statement pattern is evaluated. A keyword inside a string literal or a comment will never trigger a block or statement match.
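
That ordering can be sketched as a single pass that tracks string state before checking comment prefixes (illustrative only; `avap_chunker.py` may differ in detail):

```python
def strip_comments(line: str,
                   comment_prefixes=("///", "//"),
                   delims=("\"", "'"),
                   escape="\\") -> str:
    """Remove line comments, but never inside a string literal."""
    out, i, in_str, quote = [], 0, False, ""
    while i < len(line):
        ch = line[i]
        if in_str:
            if ch == escape:               # keep escaped char, e.g. \"
                out.append(line[i:i + 2]); i += 2; continue
            if ch == quote:
                in_str = False
            out.append(ch)
        else:
            if any(line.startswith(p, i) for p in comment_prefixes):
                break                       # rest of line is a comment
            if ch in delims:
                in_str, quote = True, ch
            out.append(ch)
        i += 1
    return "".join(out).rstrip()

print(strip_comments('addVar(url = "http://x") // note'))
# addVar(url = "http://x")
```

Note that the `//` inside the string literal survives, while the trailing comment is removed.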

---

## 3. Blocks

Blocks are **multi-line constructs** with a defined opener and closer. The chunker tracks nesting depth — each opener increments depth, each closer decrements it, and the block ends when depth returns to zero. This correctly handles nested `if()` inside `function{}` and similar cases.

Each block definition produces a chunk with `doc_type` as specified and `block_type` equal to the block `name`.

```json
"blocks": [
  {
    "name": "function",
    "doc_type": "code",
    "opener_pattern": "^\\s*function\\s+(\\w+)\\s*\\(([^)]*)",
    "closer_pattern": "^\\s*\\}\\s*$",
    "extract_signature": true,
    "signature_template": "function {group1}({group2})"
  },
  ...
]
```
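
The depth-tracking rule can be sketched as follows (a hypothetical helper, not the chunker's actual code):

```python
import re

def find_block(lines, opener, closer):
    """Return (start, end) line indices of the first block, honouring nesting."""
    open_re, close_re = re.compile(opener), re.compile(closer)
    depth, start = 0, None
    for i, line in enumerate(lines):
        if open_re.match(line):
            if depth == 0:
                start = i          # outermost opener starts the block
            depth += 1
        elif depth > 0 and close_re.match(line):
            depth -= 1
            if depth == 0:
                return start, i    # back at depth zero: block ends here
    return None

src = [
    "if (a)",
    "  if (b)",
    "  end()",
    "end()",
]
print(find_block(src, r"^\s*if\s*\(", r"^\s*end\s*\(\s*\)"))  # (0, 3)
```

The inner `if`/`end()` pair is swallowed by the depth counter, so the block spans all four lines.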

### Block fields

| Field | Type | Required | Description |
|---|---|---|---|
| `name` | string | Yes | Identifier for this block type. Used as `block_type` in the chunk metadata and in the `semantic_overlap` context header. |
| `doc_type` | string | Yes | Elasticsearch `doc_type` field value for chunks from this block. |
| `opener_pattern` | regex string | Yes | Pattern matched against the clean (comment-stripped) line to detect the start of this block. Must be anchored at the start (`^`). |
| `closer_pattern` | regex string | Yes | Pattern matched to detect the end of this block. Checked at every line after the opener. |
| `extract_signature` | bool | No (default: `false`) | When `true`, the chunker extracts a compact signature string from the opener line using capture groups, and creates an additional `function_signature` chunk alongside the full block chunk. |
| `signature_template` | string | No | Template for the signature string. Uses `{group1}`, `{group2}`, etc. as placeholders for the regex capture groups from `opener_pattern`. |

### Current block definitions

#### `function`

```
opener: ^\\s*function\\s+(\\w+)\\s*\\(([^)]*)
closer: ^\\s*\\}\\s*$
```

Matches any top-level or nested AVAP function declaration. The two capture groups extract the function name (`group1`) and parameter list (`group2`), which are combined into the signature template `function {group1}({group2})`.

Because `extract_signature: true`, every function produces **two chunks**:

1. A `doc_type: "code"`, `block_type: "function"` chunk containing the full function body.
2. A `doc_type: "function_signature"`, `block_type: "function_signature"` chunk containing only the signature string (e.g. `function validateAccess(userId, token)`). This lightweight chunk is indexed separately to enable fast function-name lookup without retrieving the entire body.

Additionally, the function signature is registered in the `SemanticOverlapBuffer`. Subsequent non-function chunks in the same file will receive the current function signature prepended as a context comment (`// contexto: function validateAccess(userId, token)`), keeping the surrounding code semantically grounded.
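
Signature extraction from the opener's capture groups can be sketched as (hypothetical helper; the regex and template are the ones defined above):

```python
import re

OPENER = re.compile(r"^\s*function\s+(\w+)\s*\(([^)]*)")
TEMPLATE = "function {group1}({group2})"

def extract_signature(opener_line: str):
    """Fill the signature_template from the opener_pattern capture groups."""
    m = OPENER.match(opener_line)
    if not m:
        return None
    return TEMPLATE.format(group1=m.group(1), group2=m.group(2))

sig = extract_signature("function validateAccess(userId, token) {")
print(sig)                    # function validateAccess(userId, token)
print(f"// contexto: {sig}")  # the header form prepended to later chunks
```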

#### `if`

```
opener: ^\\s*if\\s*\\(
closer: ^\\s*end\\s*\\(\\s*\\)
```

Matches AVAP conditional blocks. Note: AVAP uses `end()` as the closer, not `}`.

#### `startLoop`

```
opener: ^\\s*startLoop\\s*\\(
closer: ^\\s*endLoop\\s*\\(\\s*\\)
```

Matches AVAP iteration blocks. The closer is `endLoop()`.

#### `try`

```
opener: ^\\s*try\\s*\\(\\s*\\)
closer: ^\\s*end\\s*\\(\\s*\\)
```

Matches AVAP error-handling blocks (`try()` … `end()`).

---

## 4. Statements

Statements are **single-line constructs**. Lines that are not part of any block opener or closer are classified against the statement patterns in order. The first match wins. If no pattern matches, the statement is classified as `"statement"` (the fallback).

Consecutive lines with the same statement type are **grouped into a single chunk**, keeping semantically related statements together. When the statement type changes, the current group is flushed as a chunk.

```json
"statements": [
  { "name": "registerEndpoint", "pattern": "^\\s*registerEndpoint\\s*\\(" },
  { "name": "addVar", "pattern": "^\\s*addVar\\s*\\(" },
  ...
]
```
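
First-match-wins classification plus consecutive-type grouping can be sketched with a reduced pattern list (illustrative; the real list is the full `"statements"` array above):

```python
import re
from itertools import groupby

STATEMENTS = [
    ("addVar", re.compile(r"^\s*addVar\s*\(")),
    ("io_command", re.compile(r"^\s*(addParam|addResult)\s*\(")),
    ("assignment", re.compile(r"^\s*\w+\s*=")),
]

def classify(line: str) -> str:
    for name, pattern in STATEMENTS:
        if pattern.match(line):
            return name           # first match wins
    return "statement"            # fallback

lines = ["addVar(a = 1)", "addVar(b = 2)", "addParam(a)", "x = 3"]
# groupby merges consecutive lines sharing a statement type into one chunk
chunks = [(kind, [l for _, l in grp])
          for kind, grp in groupby(((classify(l), l) for l in lines),
                                   key=lambda t: t[0])]
print([(k, len(ls)) for k, ls in chunks])
# [('addVar', 2), ('io_command', 1), ('assignment', 1)]
```

The two `addVar` lines land in one chunk; the type change at `addParam` flushes the group.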

### Statement fields

| Field | Type | Description |
|---|---|---|
| `name` | string | Used as `block_type` in the chunk metadata. |
| `pattern` | regex string | Matched against the clean line. First match wins — order matters. |

### Current statement definitions

| Name | Matches | AVAP commands |
|---|---|---|
| `registerEndpoint` | API route registration | `registerEndpoint(...)` |
| `addVar` | Variable declaration | `addVar(...)` |
| `io_command` | Input/output operations | `addParam`, `getListLen`, `addResult`, `getQueryParamList` |
| `http_command` | HTTP client calls | `RequestPost`, `RequestGet` |
| `orm_command` | Database ORM operations | `ormDirect`, `ormCheckTable`, `ormCreateTable`, `ormAccessSelect`, `ormAccessInsert`, `ormAccessUpdate` |
| `util_command` | Utility and helper functions | `variableToList`, `itemFromList`, `variableFromJSON`, `AddVariableToJSON`, `encodeSHA256`, `encodeMD5`, `getRegex`, `getDateTime`, `stampToDatetime`, `getTimeStamp`, `randomString`, `replace` |
| `async_command` | Concurrency primitives | `x = go funcName(`, `gather(` |
| `connector` | External service connector | `x = avapConnector(` |
| `modularity` | Module imports | `import`, `include` |
| `assignment` | Variable assignment (catch-all before fallback) | `x = ...` |

**Ordering note:** `registerEndpoint`, `addVar`, and the specific command categories are listed before `assignment` intentionally. `assignment` would match many of them (they all contain `=` or are function calls that could follow an assignment), so the more specific patterns must come first.

---

## 5. Semantic Tags

Semantic tags are **boolean metadata flags** applied to every chunk (both blocks and statements) by scanning the entire chunk content with a regex. A chunk can have multiple tags simultaneously.

The `complexity` field is automatically computed as the count of `true` tags in a chunk's metadata, providing a rough signal of how much AVAP functionality a given chunk exercises.

```json
"semantic_tags": [
  { "tag": "uses_orm", "pattern": "\\b(ormDirect|ormAccessSelect|...)\\s*\\(" },
  ...
]
```
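
Tag scanning and the derived `complexity` count can be sketched with a reduced tag list (illustrative, not the chunker's code):

```python
import re

SEMANTIC_TAGS = [
    ("uses_orm", re.compile(r"\b(ormAccessSelect|ormAccessInsert)\s*\(")),
    ("uses_error_handling", re.compile(r"\btry\s*\(\s*\)")),
    ("returns_result", re.compile(r"\baddResult\s*\(")),
]

def tag_chunk(content: str) -> dict:
    """Search (not match) each pattern across the full chunk text."""
    meta = {tag: True for tag, pat in SEMANTIC_TAGS if pat.search(content)}
    # complexity = number of true tags on this chunk
    meta["complexity"] = sum(1 for v in meta.values() if v is True)
    return meta

chunk = "try()\n  ormAccessSelect(users, id = userId)\nend()\naddResult(ok)"
print(tag_chunk(chunk))
# {'uses_orm': True, 'uses_error_handling': True, 'returns_result': True, 'complexity': 3}
```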

### Tag fields

| Field | Description |
|---|---|
| `tag` | Key name in the `metadata` object stored in Elasticsearch. Value is always `true` when present. |
| `pattern` | Regex searched (not matched) across the full chunk text. Uses `\b` word boundaries to avoid false positives. |

### Current semantic tags

| Tag | Detected when chunk contains |
|---|---|
| `uses_orm` | Any ORM command: `ormDirect`, `ormCheckTable`, `ormCreateTable`, `ormAccessSelect`, `ormAccessInsert`, `ormAccessUpdate` |
| `uses_http` | HTTP client calls: `RequestPost`, `RequestGet` |
| `uses_connector` | External connector: `avapConnector(` |
| `uses_async` | Concurrency: `go funcName(` or `gather(` |
| `uses_crypto` | Hashing/encoding: `encodeSHA256(`, `encodeMD5(` |
| `uses_auth` | Auth-related commands: `addParam`, `_status` |
| `uses_error_handling` | Error handling block: `try()` |
| `uses_loop` | Loop construct: `startLoop(` |
| `uses_json` | JSON operations: `variableFromJSON(`, `AddVariableToJSON(` |
| `uses_list` | List operations: `variableToList(`, `itemFromList(`, `getListLen(` |
| `uses_regex` | Regular expressions: `getRegex(` |
| `uses_datetime` | Date/time operations: `getDateTime(`, `getTimeStamp(`, `stampToDatetime(` |
| `returns_result` | Returns data to the API caller: `addResult(` |
| `registers_endpoint` | Defines an API route: `registerEndpoint(` |

**How tags are used at retrieval time:** The Elasticsearch mapping stores each tag as a `boolean` field under the `metadata` object. This enables filtered retrieval — for example, a future retrieval enhancement could boost chunks with `metadata.uses_orm: true` for queries that contain ORM-related keywords, improving precision for database-related questions.

---

## 6. How They Work Together

The following example shows how `avap_chunker.py` processes a real `.avap` file using this config:

```avap
// Validate user session
function validateAccess(userId, token) {
  addVar(isValid = false)
  addParam(userId)
  try()
    ormAccessSelect(users, id = userId)
    addVar(isValid = true)
  end()
  addResult(isValid)
}

registerEndpoint(POST, /validate)
```

**Chunks produced:**

| # | `doc_type` | `block_type` | Content | Tags |
|---|---|---|---|---|
| 1 | `code` | `function` | Full function body (lines 2–10) | `uses_auth`, `uses_orm`, `uses_error_handling`, `returns_result` · `complexity: 4` |
| 2 | `function_signature` | `function_signature` | `function validateAccess(userId, token)` | — |
| 3 | `code` | `registerEndpoint` | `registerEndpoint(POST, /validate)` | `registers_endpoint` · `complexity: 1` |

Chunk 3 also receives the function signature as a semantic overlap header, because the `SemanticOverlapBuffer` tracks `validateAccess` and injects it as context into any subsequent non-function chunks in the same file.
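
The injection behaviour can be sketched as (the class shape is hypothetical; only the `// contexto:` header format comes from the documented chunker output):

```python
class SemanticOverlapBuffer:
    """Tracks the most recent function signature in the current file."""

    def __init__(self):
        self.current_signature = None

    def register(self, signature: str) -> None:
        self.current_signature = signature

    def contextualize(self, chunk: str, is_function: bool) -> str:
        # Only non-function chunks get the context header.
        if is_function or self.current_signature is None:
            return chunk
        return f"// contexto: {self.current_signature}\n{chunk}"

buf = SemanticOverlapBuffer()
buf.register("function validateAccess(userId, token)")
print(buf.contextualize("registerEndpoint(POST, /validate)", is_function=False))
# // contexto: function validateAccess(userId, token)
# registerEndpoint(POST, /validate)
```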

---

## 7. Adding New Constructs

### Adding a new block type

1. Identify the opener and closer patterns from the AVAP LRM (`docs/LRM/avap.md`).
2. Add an entry to `"blocks"` in `avap_config.json`.
3. If the block introduces a named construct worth indexing independently (like functions), set `"extract_signature": true` and define a `"signature_template"`.
4. Run a smoke test on a representative `.avap` file:

   ```bash
   python scripts/pipelines/ingestion/avap_chunker.py \
     --lang-config scripts/pipelines/ingestion/avap_config.json \
     --docs-path docs/samples \
     --output /tmp/test_chunks.jsonl \
     --no-dedup
   ```

5. Inspect `/tmp/test_chunks.jsonl` and verify the new `block_type` appears with the expected content.
6. Re-run the ingestion pipeline to rebuild the index.

### Adding a new statement category

1. Add an entry to `"statements"` **before** the `assignment` catch-all.
2. Use `^\\s*` to anchor the pattern at the start of the line.
3. Test as above — verify the new `block_type` appears in the JSONL output.

### Adding a new semantic tag

1. Add an entry to `"semantic_tags"`.
2. Use `\\b` word boundaries to prevent false positives on substrings.
3. Add the new tag as a `boolean` field to the Elasticsearch index mapping in `avap_ingestor.py` (`build_index_mapping()`).
4. **Re-index from scratch** — existing documents will not have the new tag unless the index is rebuilt (`--delete` flag).

---

## 8. Full Annotated Example

```jsonc
{
  // Identifies this config as the AVAP v1.0 grammar
  "language": "avap",
  "version": "1.0",
  "file_extensions": [".avap"], // Only .avap files; .md is always included

  "lexer": {
    "string_delimiters": ["\"", "'"], // Both quote styles used in AVAP
    "escape_char": "\\",
    "comment_line": ["///", "//"], // /// first — longest match wins
    "comment_block": { "open": "/*", "close": "*/" },
    "line_oriented": true
  },

  "blocks": [
    {
      "name": "function",
      "doc_type": "code",
      // Captures: group1=name, group2=params
      "opener_pattern": "^\\s*function\\s+(\\w+)\\s*\\(([^)]*)",
      "closer_pattern": "^\\s*\\}\\s*$", // AVAP functions close with }
      "extract_signature": true,
      "signature_template": "function {group1}({group2})"
    },
    {
      "name": "if",
      "doc_type": "code",
      "opener_pattern": "^\\s*if\\s*\\(",
      "closer_pattern": "^\\s*end\\s*\\(\\s*\\)" // AVAP if closes with end()
    },
    {
      "name": "startLoop",
      "doc_type": "code",
      "opener_pattern": "^\\s*startLoop\\s*\\(",
      "closer_pattern": "^\\s*endLoop\\s*\\(\\s*\\)"
    },
    {
      "name": "try",
      "doc_type": "code",
      "opener_pattern": "^\\s*try\\s*\\(\\s*\\)",
      "closer_pattern": "^\\s*end\\s*\\(\\s*\\)" // try also closes with end()
    }
  ],

  "statements": [
    // Specific patterns first — must come before the generic "assignment" catch-all
    { "name": "registerEndpoint", "pattern": "^\\s*registerEndpoint\\s*\\(" },
    { "name": "addVar", "pattern": "^\\s*addVar\\s*\\(" },
    { "name": "io_command", "pattern": "^\\s*(addParam|getListLen|addResult|getQueryParamList)\\s*\\(" },
    { "name": "http_command", "pattern": "^\\s*(RequestPost|RequestGet)\\s*\\(" },
    { "name": "orm_command", "pattern": "^\\s*(ormDirect|ormCheckTable|ormCreateTable|ormAccessSelect|ormAccessInsert|ormAccessUpdate)\\s*\\(" },
    { "name": "util_command", "pattern": "^\\s*(variableToList|itemFromList|variableFromJSON|AddVariableToJSON|encodeSHA256|encodeMD5|getRegex|getDateTime|stampToDatetime|getTimeStamp|randomString|replace)\\s*\\(" },
    { "name": "async_command", "pattern": "^\\s*(\\w+\\s*=\\s*go\\s+|gather\\s*\\()" },
    { "name": "connector", "pattern": "^\\s*\\w+\\s*=\\s*avapConnector\\s*\\(" },
    { "name": "modularity", "pattern": "^\\s*(import|include)\\s+" },
    { "name": "assignment", "pattern": "^\\s*\\w+\\s*=\\s*" } // catch-all
  ],

  "semantic_tags": [
    // Applied to every chunk by full-content regex search (not line-by-line)
    { "tag": "uses_orm", "pattern": "\\b(ormDirect|ormCheckTable|ormCreateTable|ormAccessSelect|ormAccessInsert|ormAccessUpdate)\\s*\\(" },
    { "tag": "uses_http", "pattern": "\\b(RequestPost|RequestGet)\\s*\\(" },
    { "tag": "uses_connector", "pattern": "\\bavapConnector\\s*\\(" },
    { "tag": "uses_async", "pattern": "\\bgo\\s+\\w+\\s*\\(|\\bgather\\s*\\(" },
    { "tag": "uses_crypto", "pattern": "\\b(encodeSHA256|encodeMD5)\\s*\\(" },
    { "tag": "uses_auth", "pattern": "\\b(addParam|_status)\\b" },
    { "tag": "uses_error_handling", "pattern": "\\btry\\s*\\(\\s*\\)" },
    { "tag": "uses_loop", "pattern": "\\bstartLoop\\s*\\(" },
    { "tag": "uses_json", "pattern": "\\b(variableFromJSON|AddVariableToJSON)\\s*\\(" },
    { "tag": "uses_list", "pattern": "\\b(variableToList|itemFromList|getListLen)\\s*\\(" },
    { "tag": "uses_regex", "pattern": "\\bgetRegex\\s*\\(" },
    { "tag": "uses_datetime", "pattern": "\\b(getDateTime|getTimeStamp|stampToDatetime)\\s*\\(" },
    { "tag": "returns_result", "pattern": "\\baddResult\\s*\\(" },
    { "tag": "registers_endpoint", "pattern": "\\bregisterEndpoint\\s*\\(" }
  ]
}
```
# Brunix Assistance Engine — Operations Runbook

> **Audience:** Engineers on-call, DevOps, and anyone debugging the Brunix Engine in a live environment.
> **Last updated:** 2026-03-18

---

## Table of Contents

1. [Health Checks](#1-health-checks)
2. [Starting the Engine](#2-starting-the-engine)
3. [Stopping & Restarting](#3-stopping--restarting)
4. [Tunnel Management](#4-tunnel-management)
5. [Incident Playbooks](#5-incident-playbooks)
   - [Engine fails to start](#51-engine-fails-to-start)
   - [Elasticsearch unreachable](#52-elasticsearch-unreachable)
   - [Ollama unreachable / model not found](#53-ollama-unreachable--model-not-found)
   - [AskAgent returns `[ENG] Error`](#54-askagent-returns-eng-error)
   - [EvaluateRAG returns ANTHROPIC_API_KEY error](#55-evaluaterag-returns-anthropic_api_key-error)
   - [Container memory / OOM](#56-container-memory--oom)
   - [Session history not persisting between requests](#57-session-history-not-persisting-between-requests)
6. [Log Reference](#6-log-reference)
7. [Useful Commands](#7-useful-commands)
8. [Escalation Path](#8-escalation-path)

---

## 1. Health Checks

### Is the gRPC server up?

```bash
grpcurl -plaintext localhost:50052 list
# Expected: brunix.AssistanceEngine
```

If `grpcurl` hangs or returns a connection error, the container is not running or the port is not mapped.

### Is Elasticsearch reachable?

```bash
curl -s http://localhost:9200/_cluster/health | python3 -m json.tool
# Expected: "status": "green" or "yellow"
```

### Is Ollama reachable?

```bash
curl -s http://localhost:11434/api/tags | python3 -m json.tool
# Expected: list of available models including qwen2.5:1.5b
```

### Is the embedding model loaded?

```bash
curl -s http://localhost:11434/api/tags | grep qwen3-0.6B-emb
# Expected: model entry present
```

### Is Langfuse reachable?

```bash
curl -s http://45.77.119.180/api/public/health
# Expected: {"status":"ok"}
```

---

## 2. Starting the Engine

### Prerequisites checklist

- [ ] Kubeconfig present at `./kubernetes/kubeconfig.yaml`
- [ ] `.env` file populated with all required variables (see `README.md`)
- [ ] All three kubectl tunnels active (see [§4](#4-tunnel-management))
- [ ] Docker daemon running

### Start command

```bash
cd Docker/
docker-compose up -d --build
```

### Verify startup

```bash
# Watch logs until you see "Brunix Engine initialized."
docker logs -f brunix-assistance-engine

# Expected log sequence:
# [ESEARCH] Connected: 8.x.x — index: avap-docs-test
# [ENGINE] listen on 50051 (gRPC)
# Brunix Engine initialized.
# [entrypoint] Starting OpenAI Proxy (HTTP :8000)...
```

**Startup typically takes 20–60 seconds** depending on Ollama model loading time.

---

## 3. Stopping & Restarting

```bash
# Graceful stop
docker-compose down

# Hard stop (if container is unresponsive)
docker stop brunix-assistance-engine
docker rm brunix-assistance-engine

# Restart only the engine (no rebuild)
docker-compose restart brunix-engine

# Rebuild and restart (after code changes)
docker-compose up -d --build
```

> ⚠️ **Restart clears all in-memory session history.** All active conversations will lose context.

---

## 4. Tunnel Management

All three tunnels must be active for the engine to function. Run each in a separate terminal or as a background process.

```bash
# Tunnel 1 — Ollama (LLM + embeddings)
kubectl port-forward --address 0.0.0.0 svc/ollama-light-service 11434:11434 \
  -n brunix --kubeconfig ./kubernetes/kubeconfig.yaml

# Tunnel 2 — Elasticsearch (vector knowledge base)
kubectl port-forward --address 0.0.0.0 svc/brunix-vector-db 9200:9200 \
  -n brunix --kubeconfig ./kubernetes/kubeconfig.yaml

# Tunnel 3 — PostgreSQL (Langfuse observability)
kubectl port-forward --address 0.0.0.0 svc/brunix-postgres 5432:5432 \
  -n brunix --kubeconfig ./kubernetes/kubeconfig.yaml
```

### Check tunnel status

```bash
# List active port-forwards
ps aux | grep "kubectl port-forward"

# Alternatively
lsof -i :11434
lsof -i :9200
lsof -i :5432
```

### Tunnel dropped?

`kubectl port-forward` tunnels drop silently. Symptoms:

- Elasticsearch: `[ESEARCH] Cant Connect` in engine logs
- Ollama: requests time out or return connection errors
- Langfuse: tracing data stops appearing in the dashboard

**Fix:** Re-run the affected tunnel command. The engine will reconnect automatically on the next request.
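
A quick local check that each tunnelled port is actually accepting connections can save a debugging round-trip. A small sketch (hypothetical helper, not part of the repo):

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# The three tunnelled services from the table in section 9 / the commands above
for name, port in [("ollama", 11434), ("elasticsearch", 9200), ("postgres", 5432)]:
    print(f"{name}: {'up' if is_port_open('127.0.0.1', port) else 'DOWN'}")
```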

---

## 5. Incident Playbooks

### 5.1 Engine fails to start

**Symptom:** `docker-compose up` exits immediately, or the container restarts in a loop.

**Diagnosis:**

```bash
docker logs brunix-assistance-engine 2>&1 | head -50
```

**Common causes and fixes:**

| Log message | Cause | Fix |
|---|---|---|
| `Cannot connect to Ollama` | Ollama tunnel not running | Start Tunnel 1 |
| `model 'qwen2.5:1.5b' not found` | Model not loaded in Ollama | See [§5.3](#53-ollama-unreachable--model-not-found) |
| `ELASTICSEARCH_URL not set` | Missing `.env` | Check that the `.env` file exists and is complete |
| `No module named 'brunix_pb2'` | Proto stubs not generated | Run `docker-compose up --build` |
| `Port 50051 already in use` | Another instance running | `docker stop brunix-assistance-engine && docker rm brunix-assistance-engine` |

---

### 5.2 Elasticsearch unreachable

**Symptom:** Log shows `[ESEARCH] Cant Connect`. Queries return empty context.

**Step 1 — Verify tunnel:**

```bash
curl -s http://localhost:9200/_cluster/health
```

**Step 2 — Restart tunnel if down:**

```bash
kubectl port-forward --address 0.0.0.0 svc/brunix-vector-db 9200:9200 \
  -n brunix --kubeconfig ./kubernetes/kubeconfig.yaml
```

**Step 3 — Check index exists:**

```bash
curl -s "http://localhost:9200/_cat/indices?v" | grep avap
```

If the index is missing, the knowledge base has not been ingested. Run:

```bash
cd scripts/pipelines/flows/
python elasticsearch_ingestion.py
```

**Step 4 — Verify authentication:**

If your cluster uses authentication, confirm `ELASTICSEARCH_USER` + `ELASTICSEARCH_PASSWORD` or `ELASTICSEARCH_API_KEY` are set in `.env`.

---

### 5.3 Ollama unreachable / model not found

**Symptom:** Engine logs show connection errors to `http://host.docker.internal:11434`, or `validate_model_on_init=True` raises a model-not-found error on startup.

**Step 1 — Verify Ollama tunnel is active:**

```bash
curl -s http://localhost:11434/api/tags
```

**Step 2 — List available models:**

```bash
curl -s http://localhost:11434/api/tags | python3 -c "
import json, sys
data = json.load(sys.stdin)
for m in data.get('models', []):
    print(m['name'])
"
```

**Step 3 — Pull missing models if needed:**

```bash
# On the Devaron cluster (via kubectl exec or direct access):
ollama pull qwen2.5:1.5b
ollama pull qwen3-0.6B-emb:latest
```

**Step 4 — Restart engine** after models are available:

```bash
docker-compose restart brunix-engine
```

---

### 5.4 AskAgent returns `[ENG] Error`

**Symptom:** Client receives `{"text": "[ENG] Error: ...", "is_final": true}`.

**Diagnosis:**

```bash
docker logs brunix-assistance-engine 2>&1 | grep -A 10 "Error"
```

| Error substring | Cause | Fix |
|---|---|---|
| `Connection refused` to `11434` | Ollama tunnel down | Restart Tunnel 1 |
| `Connection refused` to `9200` | ES tunnel down | Restart Tunnel 2 |
| `Index not found` | ES index missing | Run ingestion pipeline |
| `context length exceeded` | Query + history too long for model | Reduce session history or use a larger context model |
| `Traceback` / `KeyError` | Code bug | Check the full traceback, open a GitHub Issue |

---

### 5.5 EvaluateRAG returns ANTHROPIC_API_KEY error

**Symptom:** `EvalResponse.status` = `"ANTHROPIC_API_KEY no configurada en .env"` (i.e., the key is not configured in `.env`).

**Fix:**

1. Add `ANTHROPIC_API_KEY=sk-ant-...` to your `.env` file.
2. Optionally add `ANTHROPIC_MODEL=claude-sonnet-4-20250514` (a default is used otherwise).
3. Restart the engine: `docker-compose restart brunix-engine`.

---

### 5.6 Container memory / OOM

**Symptom:** The container is killed by the OOM killer. `docker inspect brunix-assistance-engine` shows `OOMKilled: true`.

**Diagnosis:**

```bash
docker stats brunix-assistance-engine
```

**Common causes:**

- A large context window passed to Ollama (many retrieved chunks × long documents).
- Session history growing unbounded over a long-running session.

**Mitigation:**

- Set `mem_limit` in `docker-compose.yaml`:

```yaml
services:
  brunix-engine:
    mem_limit: 4g
```

- Restart the container to clear the session store.
- Consider reducing `k` (currently 8) in `hybrid_search_native` to limit context size.

---

### 5.7 Session history not persisting between requests

**Expected behaviour:** Sending two requests with the same `session_id` should maintain context.

**If Turn 2 does not seem to know about Turn 1:**

1. Confirm both requests use **identical** `session_id` strings (case-sensitive, no trailing spaces).
2. Confirm the engine was **not restarted** between the two requests (a restart wipes the `session_store`).
3. Check the logs for `[AskAgentStream] conversation: N previous messages.` — if `N=0` on Turn 2, the session was not found.
4. Confirm the stream for Turn 1 was **fully consumed** (the client read all messages, including `is_final=true`) — the engine only persists history after the stream ends.

---

## 6. Log Reference

| Log prefix | Module | What it means |
|---|---|---|
| `[ESEARCH] Connected` | `server.py` | Elasticsearch OK on startup |
| `[ESEARCH] Cant Connect` | `server.py` | Elasticsearch unreachable on startup |
| `[ENGINE] listen on 50051` | `server.py` | gRPC server ready |
| `[AskAgent] session=... query=...` | `server.py` | New non-streaming request |
| `[AskAgent] conversation: N messages` | `server.py` | History loaded for session |
| `[AskAgentStream] done — chunks=N` | `server.py` | Stream completed, history saved |
| `[classify] raw=... -> TYPE` | `graph.py` | Query classification result |
| `[reformulate] -> '...'` | `graph.py` | Reformulated query |
| `[hybrid] BM25 -> N hits` | `graph.py` | BM25 retrieval result |
| `[hybrid] kNN -> N hits` | `graph.py` | kNN retrieval result |
| `[hybrid] RRF -> N final docs` | `graph.py` | After RRF fusion |
| `[retrieve] N docs, context len=X` | `graph.py` | Context assembled |
| `[generate] X chars` | `graph.py` | Non-streaming answer generated |
| `[eval] Iniciando: N preguntas` | `evaluate.py` | Evaluation started (N questions) |
| `[eval] Completado — global=X` | `evaluate.py` | Evaluation finished (global score X) |

---

## 7. Useful Commands

```bash
# Real-time log streaming
docker logs -f brunix-assistance-engine

# Filter for errors only
docker logs brunix-assistance-engine 2>&1 | grep -i error

# Check container resource usage
docker stats brunix-assistance-engine --no-stream

# Enter the container for debugging
docker exec -it brunix-assistance-engine /bin/bash

# Send a test query
grpcurl -plaintext \
  -d '{"query": "What is AVAP?", "session_id": "test"}' \
  localhost:50052 brunix.AssistanceEngine/AskAgent

# Check ES index document count
curl -s "http://localhost:9200/avap-docs-test/_count" | python3 -m json.tool

# Check ES index mapping
curl -s "http://localhost:9200/avap-docs-test/_mapping" | python3 -m json.tool

# List active containers
docker ps --filter name=brunix

# Check port bindings
docker port brunix-assistance-engine
```

---

## 8. Escalation Path

| Severity | Condition | Action |
|---|---|---|
| P1 | Engine completely down, not recoverable in 15 min | Notify via Slack `#brunix-incidents` immediately. Tag the CTO. |
| P2 | Degraded quality (bad answers) or evaluation score drops below 0.60 | Open a GitHub Issue with the full log output and evaluation report. |
| P3 | Tunnel instability, intermittent errors | Report in the daily standup. Document in a GitHub Issue within 24 h. |
| P4 | Documentation gap or non-critical config issue | Open a GitHub Issue with the label `documentation` or `improvement`. |

**For all P1/P2 incidents, the GitHub Issue must include:**

1. The exact command that triggered the failure
2. The full terminal output / error log
3. The status of all three kubectl tunnels at the time of failure
4. The Docker container status (`docker inspect brunix-assistance-engine`)

# Security Policy

## Supported Versions

| Version | Security patches |
|---|---|
| 1.5.x | ✅ Active |
| 1.4.x | ⚠️ Critical fixes only |
| < 1.4 | ❌ Not supported |

---

## Reporting a Vulnerability

**Do not open a public GitHub Issue for security vulnerabilities.**

Report security issues directly to the CTO via the private Slack channel `#brunix-security` or by email to the address on file. Include:

1. A clear description of the vulnerability and its potential impact.
2. Steps to reproduce (a proof of concept, if applicable).
3. Affected component(s) and version(s).
4. Suggested remediation, if known.

You will receive an acknowledgement within **48 hours** and a resolution timeline within **7 business days** for confirmed issues.

---

## Security Model

### Transport

The gRPC server currently runs with `add_insecure_port` — **there is no TLS in the current dev configuration.** This is intentional for the local development setup, where all traffic flows through authenticated kubectl tunnels.

**For any production or internet-exposed deployment, TLS must be enabled.** See ADR-0003 for context.

### Authentication & Authorization

The current version has **no authentication layer** on the gRPC API. Any client with network access to port `50052` can call any RPC method and access any session by session ID.

Acceptable risk boundaries for the current deployment:

- Port `50052` must be accessible **only** to authorized developers, via firewall rules or VPN.
- Do not expose port `50052` on a public IP without an authenticating reverse proxy.

### Secrets Management

All secrets (API keys, database credentials) are managed exclusively via environment variables. The following rules are enforced:

- **Never commit real secret values** to any branch, including feature branches.
- Use placeholder values (e.g., `sk-ant-...`, `pk-lf-...`) in documentation and examples.
- The `.env` file is listed in `.gitignore` and must never be committed.
- The `kubernetes/kubeconfig.yaml` file grants cluster-level access and must never be committed.
- PRs containing secrets or committed `.env` / kubeconfig files will be **immediately closed**, and the committer will be required to rotate all exposed credentials before resubmission.
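
A pre-commit-style scan is one way to enforce these rules mechanically. The helper below is hypothetical (it is not an existing repo tool); the key prefixes are taken from the placeholder formats used in this document.

```python
# Sketch of a staged-diff secret scan (hypothetical helper, not in the repo).
# Flags diffs that stage .env / the kubeconfig or contain key-shaped strings.
import re

SECRET_PATTERNS = [
    re.compile(r"sk-ant-[A-Za-z0-9_-]+"),    # Anthropic key shape
    re.compile(r"pk-lf-[A-Za-z0-9_-]+"),     # Langfuse key shape
    re.compile(r"^\+\+\+ b/\.env$", re.M),   # staged .env file
    re.compile(r"kubernetes/kubeconfig\.yaml"),
]

def scan_diff(diff_text: str) -> list[str]:
    """Return every pattern match found in a unified diff."""
    hits = []
    for pat in SECRET_PATTERNS:
        hits.extend(pat.findall(diff_text))
    return hits

# Usage from a (hypothetical) pre-commit hook:
#   git diff --cached | python3 scan_secrets.py
# where the script exits non-zero if scan_diff(...) returns any hits.
```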

**Environment variables that contain secrets:**

| Variable | Type |
|---|---|
| `LANGFUSE_PUBLIC_KEY` | API key |
| `LANGFUSE_SECRET_KEY` | API key |
| `ANTHROPIC_API_KEY` | API key |
| `ELASTICSEARCH_PASSWORD` | Credential |
| `ELASTICSEARCH_API_KEY` | API key |
| `HF_TOKEN` | API key |

### Container Security

- The container runs as a **non-root user** (Python 3.11 slim base image default).
- Using `root` as the container user is explicitly prohibited (see `CONTRIBUTING.md` §3).
- The `/workspace` directory is deprecated. All application code runs from `/app`.
- The `.dockerignore` ensures that development artifacts (`.git`, `.env`, `tests/`, `docs/`) are excluded from the production image.

### Data Privacy

- All LLM inference (text generation and embeddings) is performed within the **private Devaron Kubernetes cluster** on Vultr infrastructure. No user query data is sent to external third-party APIs during normal operation.
- The exception is the `EvaluateRAG` endpoint, which sends **golden dataset questions and generated answers** to the Anthropic API (Claude) for evaluation scoring. No real user queries from production sessions are used in evaluation.
- Conversation history is stored **in-memory only** and is never persisted to disk or an external database.

### Dependency Security

- Dependencies are pinned via `uv.lock` and exported to `Docker/requirements.txt`.
- Dependency updates should be reviewed for security advisories before merging.
- Run `pip-audit` or `safety check` against `Docker/requirements.txt` before major releases.

```bash
pip install pip-audit
pip-audit -r Docker/requirements.txt
```

---

## Known Security Limitations

These are acknowledged risks accepted for the current development phase. They must be addressed before any production internet-facing deployment.

| ID | Limitation | Risk | Mitigation required |
|---|---|---|---|
| SEC-001 | No gRPC TLS | Traffic interception | Enable TLS with a server certificate |
| SEC-002 | No API authentication | Unauthorized access | Add JWT / mutual TLS authentication |
| SEC-003 | Session IDs are guessable | Session hijacking | Enforce UUIDs; validate ownership |
| SEC-004 | No rate limiting | DoS / cost amplification | Add a gRPC interceptor rate limiter |
| SEC-005 | In-memory session store | Data loss on restart | Acceptable for dev; requires Redis for prod |
| SEC-006 | `ELASTICSEARCH_USER/PASS` optional | Unauthenticated ES access | Make auth required in prod; fail fast if absent |
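
For SEC-004, the core of an interceptor rate limiter is a token bucket. The bucket below is a plain-Python sketch; the gRPC wiring is only outlined in comments, because the exact interceptor integration for this repo is an assumption.

```python
# Token-bucket sketch for SEC-004 (not an existing repo component).
import time

class TokenBucket:
    """Allow bursts up to `capacity` calls, refilled at `rate` tokens/second."""
    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Wiring sketch (assumed, not verified against this repo):
#   class RateLimitInterceptor(grpc.ServerInterceptor):
#       def intercept_service(self, continuation, handler_call_details):
#           if not bucket.allow():
#               return a handler that aborts with RESOURCE_EXHAUSTED
#           return continuation(handler_call_details)
```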

---

# SECTION I: Architecture, Memory, and Fundamentals

This section sets the foundations of how AVAP manages service logic and data manipulation in memory. Unlike conventional interpreted languages, AVAP uses a **hybrid evaluation** engine that allows combining declarative commands with dynamic expressions.

---

## 1.1 Endpoint Registration (`registerEndpoint`)

The `registerEndpoint` command is the atomic unit of configuration. It acts as the bridge between the network (HTTP) and the code.

### Interface

`registerEndpoint(path, method, middleware, description, handler, output)`

### Parameter Specification

- **`path` (String):** Defines the URL path. It supports static routes and is prepared for future implementations of path parameters (variable segments).
- **`method` (String):** Specifies the allowed HTTP verb (`GET`, `POST`, `PUT`, `DELETE`). The server automatically rejects any request that does not match this method (Error 405).
- **`middleware` (List):** A list of functions executed sequentially before the `handler`. Ideal for JWT token validation or maintenance checks. If a middleware function fails, the flow stops before reaching the main code.
- **`description` (String):** Metadata for automatic documentation (Swagger/OpenAPI). It does not affect execution but is vital for the development lifecycle.
- **`handler` (Function):** The logical entry point: the name of the main function where the business logic resides.
- **`output` (Variable):** Defines the "master" variable that the engine automatically returns upon completion, unless additional results are specified via `addResult`.
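
The six parameters can be seen together in a single registration. This is a hypothetical sketch: the handler, middleware, and variable names are illustrative, and the list syntax for `middleware` is assumed from the interface above.

```
// Hypothetical registration -- names are illustrative
registerEndpoint("/users", "GET", [validate_jwt], "List users", list_users, user_list)
```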

---

## 1.2 Variable Assignment Engine (Dynamic Assignment)

AVAP allows a direct assignment syntax using the `=` symbol, which grants flexibility similar to languages like Python, but under strict context control.

### Internal Mechanics: The `eval` Process

When the interpreter finds an instruction of the form `variable = expression`, it activates a three-step process:

1. **Cleaning and Tokenization:** The engine identifies whether the expression contains references to existing variables (using `$`), method calls, or literals.

2. **Expression Evaluation:** Operations are resolved in real time. This allows:

   - **Boolean logic:** `is_valid = (age > 18 and has_permission == True)`.
   - **Arithmetic:** `tax = subtotal * 0.21`.
   - **String formatting:** `query = "SELECT * FROM users WHERE id = %s" % recovered_id`.

3. **Object and Property Resolution:** Allows deep access to complex structures returned by database connectors or APIs: `client_email = user_list[0].profile.email`.

### Effect on Memory

Unlike `addVar`, dynamic assignment can transform the variable's type on the fly (a mutable type system). If a variable contained a number and is assigned a string after evaluation, the engine updates the variable's metadata automatically.
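
The three evaluation forms can be combined in one short sketch (hypothetical values; the `$` prefix follows the reference rules of `addVar` in §1.3):

```
// Hypothetical sketch of the eval process
addVar(subtotal, 100)
tax = $subtotal * 0.21              // arithmetic
is_taxed = ($tax > 0)               // boolean logic
label = "Tax due: %s" % tax         // string formatting
```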

---

## 1.3 State Initialization and References (`addVar`)

`addVar` is the fundamental command for defining the global state of the script.

### Interface

`addVar(targetVarName, varValue)`

### Advanced Behavior

- **Intelligent Automatic Typing:** The engine inspects `varValue`. If it detects a numerical format (even if it arrives as a string from configuration), it converts it internally to `int` or `float`. Commas and periods can be used interchangeably; the value is normalized for mathematical operations.

- **The Reference Prefix `$`:** This is the dereferencing operator. `addVar(copy, $original)` tells the engine not to assign the string "$original", but to look up the current value of the variable `original` in the symbol table and copy it.

- **Scope:** Variables created with `addVar` in the main body of the script are **Request Session Variables**: they live for the entire execution lifecycle of that specific API call, but are isolated from other concurrent requests to guarantee data safety (thread-safety).
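
A minimal sketch of the copy-by-value semantics of the `$` prefix (hypothetical values):

```
// Hypothetical sketch: copy-by-value with the $ prefix
addVar(original, 100)
addVar(copy, $original)   // copy holds 100
original = 200            // copy still holds 100; the two are independent
```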

| **Syntax** | **Usage** | **Description** |
| :--- | :--- | :--- |
| `name = "Juan"` | Direct Assignment | Creates a simple text string. |
| `total = $price * 1.10` | Dynamic Evaluation | Uses the value of `price` for a calculation and saves the result. |
| `addVar(status, 200)` | Initialization | Explicit method to ensure creation in the global context. |
| `data = res[0].info` | Object Access | Extracts a specific property from a JSON object or DB result. |

---

# SECTION II: Input and Output (I/O) Management

This section describes the mechanisms AVAP uses for external data ingestion, parameter integrity validation, and constructing the response package delivered to the final client.

---

## 2.1 Intelligent Parameter Capture (`addParam`)

The `addParam` command extracts information from the incoming HTTP request. Its design is **source-agnostic**, which simplifies development: the programmer does not need to specify where the data comes from.

### Interface

`addParam(param_name, target_variable)`

### Priority Mechanics (Cascading Search)

When `addParam` is invoked, the AVAP engine inspects the request in the following hierarchical order:

1. **Query Arguments:** Parameters present in the URL (e.g., `?id=123`).

2. **JSON Body:** If the request has `Content-Type: application/json`, it looks for the key inside the JSON object.

3. **Form Data / Body Arguments:** Data sent via standard forms (`x-www-form-urlencoded`).

### Technical Behavior

- **Automatic Decoding:** The engine attempts to decode values to ASCII/UTF-8, eliminating encoding inconsistencies.

- **Null Treatment:** If the requested parameter does not exist in any of the sources, `target_variable` is initialized as `None`. This allows subsequent security checks via `if` blocks.
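
The cascade and the `None` guard can be sketched together (hypothetical example; the operator string follows the integrated example in §2.5):

```
// The same call resolves ?user_id=7, a JSON body {"user_id": 7},
// or form data user_id=7 -- whichever the cascade finds first.
addParam("user_id", id)
if(id, None, '==')
    addVar(_status, 400)
end()
```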

---

## 2.2 Validation and Collection Counting (`getListLen`)

To guarantee a robust API, it is necessary to validate how much information has been received. `getListLen` acts as AVAP's volume inspector.

### Interface

`getListLen(source_variable, target_variable)`

### I/O Applications

- **Parameter Validation:** Counts how many elements a variable contains after being populated by `addParam` or `getQueryParamList`.

- **Loop Safety:** Before starting a `startLoop`, it is recommended to use `getListLen` to define the upper limit of the cycle, avoiding overflow errors.

- **Database Results:** After a query, it determines whether records were obtained (length > 0) or the query returned no results.

---

## 2.3 Multiple List Capture (`getQueryParamList`)

There are scenarios in which the same parameter is sent several times (e.g., search filters like `?color=red&color=blue`). AVAP manages this through specialized capture into lists.

### Interface

`getQueryParamList(param_name, target_list_variable)`

### Effect

Transforms all occurrences of `param_name` into a structured list inside `target_list_variable`. If there is only one value, it creates a single-element list. This ensures that subsequent logic can always treat the data as a collection, avoiding type errors.
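
A short sketch of the capture (hypothetical request; variable names are illustrative):

```
// Request: GET /search?color=red&color=blue
getQueryParamList("color", colors)   // colors = ["red", "blue"]
getListLen(colors, n_colors)         // n_colors = 2
```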

---

## 2.4 Construction of the Response (`addResult`)

The `addResult` command registers which variables will be part of the response body. AVAP dynamically constructs a JSON output object based on calls to this command.

### Interface

`addResult(source_variable)`

### Advanced Features

- **Promise Management:** If the variable passed to `addResult` is the result of an operation initiated with `go_async`, the engine automatically marks that field as `"promised"` in the response, or returns the thread ID if synchronization has not completed.

- **String Cleaning:** The engine detects whether the content of the variable has redundant quotes (a product of previous evaluations) and normalizes them to ensure the resulting JSON is valid and clean.

- **Multi-Registration:** Multiple calls to `addResult` can be made. Each call adds a new key to the output JSON object. By default, the key in the JSON is the variable name, unless a custom key is used in the engine.
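
Multi-registration can be sketched as follows (hypothetical values; keys default to the variable names, as described above):

```
// Hypothetical sketch: each call adds one key to the output JSON
addVar(name, "Ada")
addVar(status, "active")
addResult(name)      // output: {"name": "Ada"}
addResult(status)    // output: {"name": "Ada", "status": "active"}
```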

---

## 2.5 HTTP Status Control (`_status`)

AVAP uses a reserved system variable to communicate with the underlying web server and define the success or error code of the transaction.

### Use of `_status`

By assigning a numerical value to the `_status` variable (using `addVar` or direct assignment), the programmer defines the HTTP code of the response.

| **Code** | **Common Use in AVAP** |
| :--- | :--- |
| **200** | Successful operation (default value). |
| **201** | Resource successfully created. |
| **400** | Parameter validation error (Bad Request). |
| **401** | Authentication failure. |
| **500** | Internal error captured in an `exception` block. |

### Section II Integrated Example

```
// Capture the input
addParam("user_id", id)

// Validate presence
if(id, None, '==')
    addVar(_status, 400)
    addVar(error, "The user ID is mandatory")
    addResult(error)
    return()
end()

// If we reach here, respond with success
addVar(_status, 200)
addResult(id)
```

---

# SECTION III: Control Logic and Decision Structures

This section details how AVAP manages execution flow. The language uses closed block structures that allow a clear sequential reading, facilitating the debugging of complex APIs.

---

## 3.1 The Conditional Block (`if / else / end`)

The `if` structure in AVAP is a versatile tool that allows atomic comparisons or the evaluation of complex logical expressions processed by the dynamic evaluation engine.

### Standard Interface

`if(variable_A, value_B, operator)`

### Available Operators

| **Operator** | **Description** | **Example** |
| :--- | :--- | :--- |
| `=` | Strict equality (or numerical equivalence). | `if(role, "admin", "=")` |
| `!=` | Inequality. | `if(status, 200, "!=")` |
| `>` / `<` | Numerical magnitude comparison. | `if(age, 18, ">")` |
| `in` | Checks whether an element belongs to a list or string. | `if(user, blacklist, "in")` |

### Evaluation of Complex Expressions

AVAP allows omitting the comparison parameters to evaluate a complete logical expression directly in the third parameter.

- **Example:** `if(None, None, "age >= 18 and balance > 100")`.

### Closing Structure

Every `if` block can include an optional `else()` block and **must** end with the `end()` command.
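
A complete block sketch (hypothetical variables, following the operator table above):

```
// Hypothetical sketch of a full if / else / end block
if(role, "admin", "=")
    addVar(access, "granted")
else()
    addVar(access, "denied")
end()
addResult(access)
```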

---

## 3.2 Iterations and Loops (`startLoop / endLoop`)

For processing collections (like database rows or parameter lists), AVAP implements a repetition cycle controlled by indices.

### Interface

`startLoop(counter, start, end)`

### Execution Mechanics

1. **Initialization:** The engine creates the `counter` variable with the value of `start`.

2. **Increment:** On each turn, the counter increases automatically by 1.

3. **Exit Condition:** The loop stops when `counter` exceeds the value of `end`.

### Practical Example: List Processing

```
// Obtain the length of a list captured in Section II
getListLen(items_received, total)

startLoop(i, 0, total)
    current_item = items_received[i]
    // Processing logic for each item...
endLoop()
```

---

## 3.3 Error Management and Robustness (`try / exception`)

AVAP is designed for production environments where external failures (database timeouts, unavailable third-party APIs) are a reality. The `try` block allows capturing these events without stopping the server.

### Interface

`try` ... `exception(error_variable)` ... `end()`

### Technical Operation

- **`try` Block:** The engine attempts to execute the instructions contained within. If a critical failure occurs, execution of that block stops immediately.

- **`exception` Block:** If an error is detected, control passes to this block. The `error_variable` is automatically populated with a string describing the failure (a simplified stack trace).

- **`end()`:** Closes the structure and allows the script to continue normal execution after the error handling.

### Example of Safety in Connectors

```
try
    // Attempt a query through an external connector (Section V)
    result = db.query("SELECT * FROM pagos")
exception(failure_detail)
    // If it fails, log the error and notify
    addVar(_status, 500)
    addVar(message, "Persistence error: %s" % failure_detail)
    addResult(message)
end()
```

---

## 3.4 Premature Exit Control (`return`)

The `return()` command is a control instruction that immediately terminates execution of the current context (whether a function or the main script).

- If used within a **function**, it returns control (and optionally a value) to the caller.

- If used in the **main flow**, it ends execution of the API and triggers the automatic sending of the JSON response built up to that moment.
|
||||||
|
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
**Summary of Section III:** With these structures, the AVAP programmer has total control over the dynamic behavior of the code. The combination of **Dynamic Evaluation (Section I)**, **Data Validation (Section II)** and **Control Structures (Section III)** allows for building extremely complex and secure microservices.
|
||||||
|
|
||||||
|
|
||||||
|
# SECTION IV: Concurrency and Asynchrony

AVAP implements a thread-based concurrency model that allows "fire-and-forget" execution or parallel execution with subsequent synchronization. This is fundamental for tasks like sending emails, processing logs or querying multiple external APIs simultaneously.

---

## 4.1 Launching Background Processes (`go_async`)

The `go_async` command extracts a block of code from the main sequential flow and places it in a parallel execution queue.

### Interface

`go_async(thread_id)`

### Execution Mechanics

1. **Identification:** The programmer assigns a `thread_id` (a string or variable) in order to reference that process later.

2. **Forking:** As soon as it is invoked, the AVAP engine creates a new native thread. The main flow immediately continues with the instruction that follows the `go_async` block.

3. **Context Isolation:** The asynchronous thread inherits a copy of the variable state at the moment of firing, allowing it to work safely without interfering with the main thread.

### Example: Immediate Response with Long Process

```
addParam("email", destination)

go_async("email_delivery")
// This block takes 5 seconds, but the API does not wait
email_service.send(destination, "Welcome to AVAP")
end()

addVar(msg, "Your email is being processed in the background")
addResult(msg)
// The client receives the response in milliseconds
```

---

## 4.2 Result Synchronization (`gather`)

When the main flow needs data generated by an asynchronous thread in order to continue, the `gather` synchronization mechanism is used.

### Interface

`gather(thread_id, timeout)`

### Specifications

- **`thread_id`:** The identifier used in the `go_async` command.

- **`timeout` (seconds):** Maximum time the main thread will wait. If the thread does not finish within this time, AVAP raises an exception that can be captured (Section III).

### Technical Behavior

- **Controlled Blocking:** The main thread suspends until the `thread_id` thread finishes.

- **State Recovery:** Once synchronized, any variable modified inside the asynchronous thread is merged into the context of the main thread.

---

## 4.3 Optimized Parallel Execution (Fan-Out Pattern)

AVAP allows launching multiple threads and then waiting for all of them, reducing total execution time to that of the slowest thread instead of the sum of all of them.

### Example: Querying multiple databases

```
go_async("db_north")
data_north = connector_north.query("SELECT...")
end()

go_async("db_south")
data_south = connector_south.query("SELECT...")
end()

// We wait for both (maximum 10 seconds)
gather("db_north", 10)
gather("db_south", 10)

// We combine results using Section I
total_result = data_north + data_south
addResult(total_result)
```
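
The fan-out pattern above has a direct analogue in ordinary threading libraries. As a hedged illustration (plain Python, not AVAP), two simulated queries run in parallel and the main thread joins both with a timeout:

```python
import threading
import time

results = {}

def query(name, rows, delay):
    # Simulates a slow connector query running in its own thread
    time.sleep(delay)
    results[name] = rows

# Fan-out: both "queries" start immediately (the go_async analogue)
north = threading.Thread(target=query, args=("db_north", [1, 2], 0.1))
south = threading.Thread(target=query, args=("db_south", [3], 0.1))
north.start()
south.start()

# Synchronization: wait for both, with a timeout (the gather analogue)
north.join(timeout=10)
south.join(timeout=10)

total_result = results["db_north"] + results["db_south"]
print(total_result)  # [1, 2, 3]
```

Total wall time is roughly the slowest of the two delays, not their sum, which is the whole point of the pattern.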

---

## 4.4 Status of Promises in the Output

As mentioned in **Section II**, if a variable that is still being processed in an asynchronous thread is passed to `addResult`, AVAP manages the response intelligently:

- **If the thread is still running:** The output JSON will show `"variable": "promised"` or the thread ID.

- **If the thread failed:** The error will be registered in the internal log and the variable will be `None`.

- **If `gather` was used before `addResult`:** The real processed value will be sent.

# SECTION V: Persistence, Connectors and Native ORM

AVAP is designed to be database agnostic. It allows data manipulation through three layers: the universal connector, simplified ORM commands and direct SQL execution.

---

## 5.1 The Universal Connector (`avapConnector`)

The `avapConnector` command is the starting point for any external integration. It uses a system of **Connection Tokens** (Base64) that encapsulate the configuration (host, port, credentials, driver) to keep the code clean and safe.

### Interface

`connector_variable = avapConnector("BASE64_TOKEN")`

### Characteristics of Connector Objects

Once the variable is instantiated, it behaves as an object with dynamic methods:

- **DB Connectors:** Expose the `.query(sql_string)` method, which returns objects or lists according to the result.

- **API Connectors (Twilio, Slack, etc.):** Expose native methods of the service (e.g., `.send_sms()`).

### Example: Use of Dynamic Assignment with Connectors

```
// We instantiate the connection
db = avapConnector("REJfQ09OTkVDVE9SR...")

// We execute and use Section I to filter on the fly
users = db.query("SELECT * FROM users")
first_admin = users[0].name if users[0].role == 'admin' else 'N/A'

addResult(first_admin)
```

---

## 5.2 Native ORM Layer (`ormCheckTable` / `ormDirect`)

For quick operations on the local or default database cluster, AVAP offers system-level commands that do not require prior instantiation.

### 5.2.1 `ormCheckTable`

Verifies the existence of a structure in the database. It is vital for installation scripts or automatic migrations.

- **Interface:** `ormCheckTable(table_name, target_var)`

- **Response:** The `target_var` will receive the string values `"True"` or `"False"`.
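
A minimal migration-guard sketch built from the commands above; it assumes the `if(variable, value, operator)` comparison form used later in this document, and the table name and DDL are purely illustrative:

```
// Create the table only if it does not exist yet
ormCheckTable("users", table_exists)
if(table_exists, "False", "=")
ormDirect("CREATE TABLE users (id INT, username VARCHAR(64))", creation_result)
end()
```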

### 5.2.2 `ormDirect`

Executes SQL commands directly. It differs from `.query()` in that it is optimized for statements that do not necessarily return rows (such as `INSERT`, `UPDATE` or `CREATE TABLE`).

- **Interface:** `ormDirect(statement, target_var)`

- **Use of Interpolation:** `ormDirect("UPDATE users SET login = '%s' WHERE id = %s" % (now, id), result)`

---

## 5.3 Data Access Abstraction (Implicit Commands)

AVAP includes specialized commands for the most common CRUD operations, reducing the need to write manual SQL and mitigating injection risks.

### `ormAccessSelect`

Performs filtered queries, returning a structure of object lists.

- **Syntax:** `ormAccessSelect(table, filters, target)`

### `ormAccessInsert` / `ormAccessUpdate`

Manages data persistence. If used on an object that already has an ID, `Update` synchronizes the changes; otherwise, `Insert` creates the record.

---

## 5.4 Dynamic Query Formatting (Anti-Injection)

As detailed in **Section I**, the AVAP engine processes SQL strings before sending them to the database engine. The official recommendation is to always use interpolation with the `%` operator to ensure that data types (strings vs. integers) are correctly handled by the driver.

```
// Safe and Recommended Way
sql = "SELECT * FROM %s WHERE status = '%s'" % (table_name, status_recovered)
res = db.query(sql)
```

---

## 5.5 Cryptographic Security Integration (`encodeSHA256`)

In the persistence flow, AVAP provides native tools to secure sensitive data before it touches the disk.

### Interface

`encodeSHA256(source_text, target_variable)`

### Full Registration Flow (Final Example)

This example combines **Sections I, II, III and V**:

```
// II: Capture
addParam("pass", p)
addParam("user", u)

// I and V: Processing and Security
encodeSHA256(p, secure_pass)

// V: Insertion
sql = "INSERT INTO users (username, password) VALUES ('%s', '%s')" % (u, secure_pass)
ormDirect(sql, db_result)

// III and II: Response
if(db_result, "Success", "=")
addVar(msg, "User created")
addResult(msg)
end()
```

# SECTION VI: System Utilities and Transformation

This section documents the native commands for advanced string manipulation, precise time handling and generation of dynamic data.

---

## 6.1 Time and Date Management (`getDateTime` / `stampToDatetime`)

AVAP handles time in two ways: as an **Epoch/Timestamp** (numerical, ideal for calculations) and as a **Datetime** (formatted text, ideal for humans and databases).

### 6.1.1 `getDateTime`

Generates the current time with high precision.

- **Interface:** `getDateTime(format, timeDelta, timeZone, targetVar)`

- **Parameters:**

  - `format`: E.g. `"%Y-%m-%d %H:%M:%S"`. If left empty, returns the current Epoch.

  - `timeDelta`: Seconds to add (positive) or subtract (negative). Very useful for calculating token expiration.

  - `timeZone`: Time zone region (e.g. "Europe/Madrid").

### 6.1.2 `stampToDatetime`

Converts a numerical value (Unix timestamp) into a legible text string.

- **Interface:** `stampToDatetime(timestamp, format, offset, targetVar)`

- **Common use:** Formatting dates recovered from the database (Section V) before sending them to the client (Section II).

---

## 6.2 Advanced String Manipulation (`replace` / `randomString`)

### 6.2.1 `replace`

Allows cleaning and transformation of text. It is fundamental when data received from the client requires sanitization.

- **Interface:** `replace(sourceText, oldText, newText, targetVar)`

- **Example:** Cleaning spaces or unwanted characters from a username before a SQL query.

### 6.2.2 `randomString`

Generates secure random alphanumeric strings.

- **Interface:** `randomString(length, targetVar)`

- **Applications:** Generation of temporary passwords, session IDs or unique file names.

---

## 6.3 Security and Hash Operations (`encodeSHA256`)

Although introduced in the persistence section, its use is a general data-transformation utility.

- **Mechanics:** It is a deterministic one-way function.

- **Important:** AVAP uses an optimized implementation that guarantees that the same text always produces the same hash, allowing secure "login" comparisons without knowing the real password.
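
The determinism described above is what makes hash-based login checks possible. A minimal sketch with Python's standard `hashlib` (an analogue of the behavior, not AVAP's internal implementation):

```python
import hashlib

def encode_sha256(source_text):
    # Deterministic one-way function: the same input always yields the same digest
    return hashlib.sha256(source_text.encode()).hexdigest()

stored = encode_sha256("s3cret")   # digest saved at registration time
attempt = encode_sha256("s3cret")  # digest of the password typed at login
print(stored == attempt)  # True: comparison succeeds without storing the plaintext
```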

---

## 6.4 The Value Return Command (`return`)

In the context of functions and flows, `return` not only stops execution, but can "inject" the result of a subroutine into the main flow.

### Example of a complete utility flow:

```
// 1. We generate a temporary token
randomString(16, token_raw)

// 2. We calculate the expiration (1 hour from now = 3600 seconds)
getDateTime("%Y-%m-%d %H:%M:%S", 3600, "UTC", expiration_date)

// 3. We format a system message using Section I
message = "Your token %s expires on %s" % (token_raw, expiration_date)

// 4. We send it to the client (Section II)
addResult(message)
```

---

## 6.5 Table of Common Formats (Cheat Sheet)

| **Token** | **Description** | **Example** |
| :--- | :--- | :--- |
| `%Y` | Full year | 2026 |
| `%m` | Month (01-12) | 02 |
| `%d` | Day (01-31) | 23 |
| `%H` | Hour (00-23) | 21 |
| `%M` | Minute (00-59) | 45 |
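
These tokens follow the widespread strftime convention, so they can be exercised from Python's standard library (an analogue; the assumption here is that AVAP's `getDateTime` accepts the same format strings, as its examples suggest):

```python
from datetime import datetime

# A fixed moment, formatted with the tokens from the cheat sheet
moment = datetime(2026, 2, 23, 21, 45, 0)
print(moment.strftime("%Y-%m-%d %H:%M"))  # 2026-02-23 21:45
```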
# SECTION VII: Function Architecture and Scopes

This section details how to encapsulate reusable logic and how AVAP manages isolated memory to avoid side effects between different parts of the program.

---

## 7.1 Definition and Declaration (`function`)

A function in AVAP is an independent block of code that is registered in the engine so it can be invoked at any moment.

### Interface

`function function_name(argument1, argument2, ...){ ... }`

### Technical Characteristics:

- **Local Scope (`function_local_vars`):** Upon entering a function, AVAP creates a new dictionary of local variables. Variables created inside (e.g. `temp = 10`) do not exist outside the function, protecting global state.

- **Context Inheritance:** Functions can read global variables using the `$` prefix, but any new assignment (`=`) remains in local scope unless a global persistence command is used.

---

## 7.2 The Exit Command (`return`)

It is the mechanism for finalizing function execution and, optionally, sending a value back to the caller.

### Interface

`return(variable_or_value)`

### Behavior:

1. **Finalization:** Immediately stops processing of the function.

2. **Data Transfer:** The value passed to `return` is injected into the variable that performed the call in the main flow.

3. **Cleanup:** Once `return` is executed, the dictionary of local variables of that function is destroyed to free memory.

---

## 7.3 Invocation and Parameter Passing

Functions are called by their name followed by the values or variables they require.

### Example of Professional Implementation:

```
// Function Definition (Local Scope)
function calculate_discount(base_price, percentage){
factor = percentage / 100
discount = base_price * factor
total = base_price - discount
return(total)
}

// Main Flow (Global Scope)
addVar(pvp, 150)
// Call to the function passing a $ reference and a literal value
final_price = calculate_discount($pvp, 20)

addResult(final_price) // Result: 120
```
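
As a sanity check on the arithmetic in the example above, the same computation in plain Python (an analogue of the AVAP function, with ordinary local variables standing in for AVAP's function-local scope):

```python
def calculate_discount(base_price, percentage):
    # Local variables: they vanish when the function returns
    factor = percentage / 100
    discount = base_price * factor
    return base_price - discount

print(calculate_discount(150, 20))  # 120.0
```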

---

## 7.4 Functions as Middlewares

In the `registerEndpoint` command (Section I), the `middleware` parameter accepts a list of functions. These functions have special behavior:

- If a middleware executes a `return()` without a value or with an error value, AVAP can be configured to abort the request before it reaches the main `handler`.

- They are ideal for **guard** tasks:

  - API key verification.

  - Data schema validation.

  - Initial audit logging.

---

## 7.5 Recursion and Limits

AVAP allows recursion (a function calling itself), but caution is recommended regarding stack depth in asynchronous processes (Section IV). To process large data volumes, the use of `startLoop` (Section III) is always preferable.


---

This document deals with the different types of access that exist on the platform.

To identify the user who owns the account where the operation is going to be carried out, it is necessary to indicate a session identifier (the session_id parameter) or to sign the call with the user's private key (the signature parameter). In this way, the two calls to the service shown below are equivalent for all intents and purposes. For those cases in which there is no pademobile user as executor of the operation, the call with the private key must be used:

* With session ID

```javascript
user_id=457&country_code=MX&comando=listado&idioma=en-us&id_canal=1&session_id=9bb19a6c0a607cb8f1791207395366d6
```

Session sample

The session_id parameter is obtained from the call to the login service:

```javascript
http://desarrollo.pademobile.com:5007/ws/users.py/login?country_code=MX&nick=test_user&pin=0000
```

```javascript
{
  "status": true,
  "e_mail": "",
  "elapsed": 0.2370758056640625,
  "certification_data": <certification_data>,
  "session_id": "97c4abb925c9b2046ac7432762ad1417",
  "user_type": "User b\u00e1sico",
  "profile_id": 1,
  "profile_code": "USER",
  "user_id": 225,
  "state": "Distrito Federal",
  "phone_longitude": 10,
  "menu": <lista_acciones_menu>,
  "affiliate_user_id": 412,
  "currency": "MXN",
  "name": "Test User",
  "certification": false,
  "phone": "5012385006"
}
```

* With signature

```javascript
user_id=457&country_code=MX&comando=listado&idioma=en-us&id_canal=1&signature=cc30e3efc7159bb10b910512ca441664c1578a4d
```

Signed sample

In this case an extra parameter is added to the entire original query string. This parameter is a hash (HMAC) of the previous string, so any alteration of the parameters will cause the signed login process to fail.

This process follows these steps:

* The private key of the user identified by the user_id parameter is obtained.
* The query string is separated from the signature parameter.
* The hash is calculated using the strings obtained in steps 1 and 2.
* If the hash obtained in the previous step and the one reported in the signature parameter are the same, the login with signature is successful and the service code is executed. Otherwise an exception is thrown.

The following Python code snippet returns the query string provided in the chain parameter of the calcular_firma function with the signature parameter appended to the end.
```python
import hashlib
import hmac

def calcular_firma(private_key, chain):
    # HMAC-SHA1 of the query string, keyed with the user's private key
    signature = hmac.new(private_key.encode(), chain.encode(), hashlib.sha1).hexdigest()
    return chain + '&signature=' + signature
```
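
The verification performed on the server is the mirror image of `calcular_firma`: recompute the HMAC over the query string and compare. A hedged sketch (the helper name and key are ours, not part of the platform API); `hmac.compare_digest` provides a constant-time comparison:

```python
import hashlib
import hmac

def verify_signature(private_key, signed_chain):
    # Split the original query string from the trailing signature parameter
    chain, _, received = signed_chain.rpartition('&signature=')
    expected = hmac.new(private_key.encode(), chain.encode(), hashlib.sha1).hexdigest()
    return hmac.compare_digest(expected, received)

key = 'user-private-key'
chain = 'user_id=457&country_code=MX&comando=listado'
signed = chain + '&signature=' + hmac.new(key.encode(), chain.encode(), hashlib.sha1).hexdigest()
print(verify_signature(key, signed))                         # True
print(verify_signature(key, chain + '&signature=deadbeef'))  # False
```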

---

101OBeX offers the possibility of working with encrypted nodes or projects. All services that are exposed through the API Manager can be consumed in an encrypted manner, provided this preference is established during project creation.

IT IS IMPORTANT TO UNDERSTAND THAT ONCE A PROJECT IS CREATED, THIS ENCRYPTION SETTING CANNOT BE ALTERED. THEREFORE, IT IS CRITICAL TO CAREFULLY CONSIDER WHETHER YOUR PROJECT REQUIRES ENCRYPTION TO AVOID SUBSEQUENT DATA LOSS.

When you indicate that you want to consume a project in encrypted form, you will be assigned an encryption key for it (cipher key), which can be consulted in the project data.

Once this key has been obtained, calls can be encrypted under the AES256 algorithm with that key, and the response will be encrypted with the same encryption key.

The nomenclature of the calls will be as follows:

Decrypted call:
```javascript
https://api.101obex.com:8000/servicio?parameters
```

Encrypted call:

```javascript
https://api.101obex.com:5000/servicio?encripted_data=(encripted parameters)
```

This adds an additional encryption layer that guarantees the security of the transferred data.

The response will be encrypted, and its morphology will be as detailed below.

Decrypted answer:
```javascript
{
  "status": true,
  "e_mail": "test.user@waynnovate.com",
  "elapsed": 0.18008685111999512,
  "datos_certificacion": { "codtran": "0075f16df4b053a5d10502ffb01e9cd8" },
  "session_id": "e9b7945dcbd5d18a6239acc7acafe8e9",
  "type_of_user": "impulso b\u00e1sico",
  "profile_id": 137,
  "code_profile": "USUARIO",
  "user_id": 50,
  "status": null,
  "phone_lenght": 10,
  "menu": [
    [
      "Acceso R\u00e1pido",
      ["Movements", "movements", false],
      ["Add a card", "gestor_origenes_propios/crear", false],
      ["Recharge cellphone minutes", "Rechargecellphoneminutes", false],
      ["Transfer between clients", "moneysending", false],
      ["Request money", "requestmoney", false],
      ["Services payment", "payexpresspay", false]
    ]
  ],
  "user_affiliate_id": 1,
  "currency": "MXN",
  "name": "qwertyuio qwertyui",
  "certificate": false,
  "phone": "9876543212"
}
```

Encrypted answer:

```javascript
{
  "status": true,
  "encrypted_data": "k8DoQ9ADDph2o3oHdzeW0wO-FITgfGQD4xy9GcfuBtQy8IVazicD4J66kZ-HTlgWpCkXn7xlGDqCcXUNVTW9T7Ww1DpPXPyoilI2GPhOFliAWGpip_R56WVYr07qGmMUJy_n2I3si___hBb9MPEI3KBh9eupUO2gKDTbULimM_cpCtRHsqFdTZIpRedC0W_HdTgcCrZ_CItCoxAoyiCjx6knaH9dbaUV1GoywBWfuh3Dh4iqHGejHRbYi7Apm1PjCj5WNPEEN-UlfNj9hvurwTgCjBXilBg19ld3LUJj-1Yh48It_gLkna12ZqBiuUnQ3Rpj1hHvz7CkTjxStkigCyKA4lPh94cK_cJgaiv7c1Uyb54cB8N2bUTBhD4ojOSfR88bN-4wYiIEspinuKDmpHXO8HP_IgJSfgkU4QiTfbBKQ8u-2Hxe2x1JgbKIvjpiBNK0H3GNnaPrtciFf88EeQun5oZwOJiFtZBQHv-V4fdkfuOYBAWaOm13I9_PYiJir9BE145mIQOuugnebLASKju5UA-NHEclZ7fUF1fNyCeFxGW-6oYfadBanzpIM5PjRUODa92gF4X0pPcLy4v1jcegJSMSpTW0DH_vM14gV56OJ0Dvyf52OB2e3LDlfP7TwYmbY7YWwj5MpR1uoieOwbGsqbXqKvOOCmlwGIvAc-vowoTLRpviT1_fymNHyRqtb89Gjy_2rvsTgBLoZavKBOv5Wvu1Dil5u0wVzo7pqk5XV3lnTCi-t7kLiH7SfXtuIBhPQzPTO40btxpZwC2V4QBsx1BcBMs_cb7Kmcy53exgpQQQkRNbTU6jkSnTcccaCPzT9WGhxiHrS1U5bXXW4BM1j9aHFDjhBp6uT9_2QAh0oh-uljLTnw6r6KH69VFJyO2oKjG2Qttu-L95ynxW94ecMuLlU26O7F-j9IO1FpI-c8cfKAQs6tbUnv_cU49nTwpX5TZI1ZfCDOb042-KiCJqOfP61FWZtEQrMw7VZwUxMylcku_In9caUUYgpvJhHwqE6GKdS0XuKEcGUV-tfMvBcnewCgobcZhIeTYKhKSoaA1AHR7IYHaf8U4isTCzcexJL_mnwHlvWGVEXmM2Ywy_y9Y6nIDFTXPsUG4aYjw="
}
```

Python code example to encrypt and decrypt (encryption key highlighted):
```python
from Crypto.Cipher import AES
from Crypto.Random import new as Random
from base64 import urlsafe_b64encode, urlsafe_b64decode

class CipherByAES:
    def __init__(self):
        self.block_size = 16
        # Cipher key assigned to the project (highlighted)
        self.key = 'cedb3fb962255b1aafd033cabe831530'
        # Pad to the next block boundary; the fill char encodes the pad length
        self.pad = lambda s: s + (self.block_size - len(s) % self.block_size) * chr(self.block_size - len(s) % self.block_size)
        self.unpad = lambda s: s[:-ord(s[len(s) - 1:])]

    def encrypt(self, data):
        plain_text = self.pad(data)
        iv = Random().read(AES.block_size)
        cipher = AES.new(self.key, AES.MODE_OFB, iv)
        return urlsafe_b64encode(iv + cipher.encrypt(plain_text.encode())).decode()

    def decrypt(self, data):
        cipher_text = urlsafe_b64decode(data.encode())
        iv = cipher_text[:self.block_size]
        cipher = AES.new(self.key, AES.MODE_OFB, iv)
        return self.unpad(cipher.decrypt(cipher_text[self.block_size:])).decode()
```
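
The `pad`/`unpad` lambdas are the only part of the class that does not depend on the AES library, and they can be exercised in isolation. A stdlib-only sketch of the same scheme (fill up to the next 16-byte boundary, with the pad length as the fill character):

```python
BLOCK_SIZE = 16

def pad(s):
    # Number of fill characters needed to reach the next block boundary
    n = BLOCK_SIZE - len(s) % BLOCK_SIZE
    return s + n * chr(n)

def unpad(s):
    # The last character encodes how many fill characters to strip
    return s[:-ord(s[-1:])]

padded = pad("servicio?parameters")
print(len(padded) % BLOCK_SIZE)  # 0
print(unpad(padded))             # servicio?parameters
```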

---

101OBeX organizes and groups the clients of a node or project into communities.

Communities are groups of users or clients of a project whose main common element is the community to which they belong. This is determined, in most cases (although other criteria may apply), by the affiliates (corporations) responsible for registering users or clients within the system.

Users or clients in a project that do not have a specific community will belong to the community of the node or project. Importantly, end users or customers retain the flexibility to switch between communities as needed.

Grouping users or customers by communities offers many advantages at the operation and data-analysis level, since it allows us to undertake actions on a specific set of users or end customers based on the community to which they belong. Moreover, it greatly enhances capabilities for data mining and reporting activities.

@@ -0,0 +1,150 @@

This service returns the amount of a transaction so that it can be viewed.

GET:
`URL_BASE + /ws/util.py/get_importe_transaccion`

## Receives:

All parameters are sent in the query string of the call, so percent-encoding for URIs (aka URL encoding) must be applied.
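
A percent-encoded query string can be produced with the Python standard library; this is a minimal sketch using the parameter values from the example requests on this page:

```python
from urllib.parse import urlencode

# Build the percent-encoded query string for the call.
# Parameter values are the ones used in the example requests.
params = {
    "country_code": "MX",
    "codtran": "e34c6167505acbd1994a23082b3f1fc7",
}
query = urlencode(params)
url = "URL_BASE/ws/util.py/get_importe_transaccion?" + query
print(query)  # country_code=MX&codtran=e34c6167505acbd1994a23082b3f1fc7
```

`urlencode` escapes any reserved characters automatically, so values can be passed as-is.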

## Returns:

Depending on the result of the operation, this service can return two different JSON responses:

### Answer JSON OK:

```javascript
{
  "status": true,
  "amount": <string>,
  "elapsed": <float>
}
```

## Where:

* `status:` Shows if the call has been successful (true) or not (false).
* `amount:` Amount of the transaction searched.
* `elapsed:` Operation execution time.

### Answer JSON KO:

```javascript
{
  "status": false,
  "level": <string>,
  "message": <string>,
  "error": <string>
}
```

## Where:

* `status:` Shows if the call has been successful (true) or not (false).
* `level:` Error importance level.
* `message:` Error message.
* `error:` Sole error code.

## Example requests:

### Python - Requests:

```python
import requests

url = "URL_BASE/ws/util.py/get_importe_transaccion?country_code=MX&codtran=e34c6167505acbd1994a23082b3f1fc7"

payload = {}
files = {}
headers = {}

response = requests.request("GET", url, headers=headers, data=payload, files=files)

print(response.text.encode('utf8'))
```

### NodeJs - Request:

```javascript
var request = require('request');
var options = {
  'method': 'GET',
  'url': 'URL_BASE/ws/util.py/get_importe_transaccion?country_code=MX&codtran=e34c6167505acbd1994a23082b3f1fc7',
  'headers': {},
  formData: {}
};
request(options, function (error, response) {
  if (error) throw new Error(error);
  console.log(response.body);
});
```

### JavaScript - Fetch:

```javascript
// Note: GET requests cannot carry a body, so no FormData is attached.
var requestOptions = {
  method: 'GET',
  redirect: 'follow'
};

fetch("URL_BASE/ws/util.py/get_importe_transaccion?country_code=MX&codtran=e34c6167505acbd1994a23082b3f1fc7", requestOptions)
  .then(response => response.text())
  .then(result => console.log(result))
  .catch(error => console.log('error', error));
```

### CURL:

```shell
curl --location --request GET \
  'URL_BASE/ws/util.py/get_importe_transaccion?country_code=MX&codtran=e34c6167505acbd1994a23082b3f1fc7'
```

## Business logic:

By means of this endpoint we obtain the amount associated with a transaction.
@@ -0,0 +1,34 @@

Within a node or project, two distinct modules exist, and understanding their differences is crucial to prevent confusion: the Currencies module and the FX Exchange module.

A node or project works with a location and a currency, information that is provided at creation time, when the node is created and the project is activated.

The currency selected when the node or project is created is the only currency the node will operate with until the administrator decides to create or register new currencies.

That is, a node can be configured for a specific location, for example the United States, with USD selected as the currency of the node or project. From that moment on, all operations carried out in that node or project will be recorded in that currency. If you later want other currencies to exist, such as EUR, you must register them in the node or project as authorized currencies.

This task is carried out in the Currencies section, which can be found in the node or project tab or in the side menu under the Projects section.

In this same section you can create your own currencies and assign them a value with a purchase and a sale price.

Working with a loyalty solution based on the accumulation of customer points, earned through their activity, requires registering those points as a form of currency and giving them a value. In this way, the client can always spend that points wallet across the operation network, thanks to the FX Exchange service.

The mission of the FX Exchange service is to maintain a list of currencies with their reference, purchase, and sale prices, thus allowing multi-currency operations.
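
As a hedged illustration of how such a table enables multi-currency operations (the currency names, rates, and function names below are invented for the example; they do not come from 101OBeX):

```python
# Illustrative sketch of applying an FX table with purchase/sale prices.
# Currencies and rates are hypothetical, expressed in a USD node currency.
fx_table = {
    # currency: (purchase_price, sale_price)
    "EUR": (1.05, 1.10),
    "POINTS": (0.009, 0.01),
}

def to_node_currency(amount: float, currency: str) -> float:
    """Convert an amount into the node currency using the purchase price."""
    purchase, _sale = fx_table[currency]
    return amount * purchase

def from_node_currency(amount: float, currency: str) -> float:
    """Convert from the node currency using the sale price."""
    _purchase, sale = fx_table[currency]
    return amount / sale

print(to_node_currency(1000, "POINTS"))  # 9.0
```

The spread between purchase and sale price is what lets the operator of the node absorb conversion costs, for example when a customer spends a points wallet.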

@@ -0,0 +1,26 @@

This tool is designed to enable developers to work with the 101OBeX API. With this tool, developers can retrieve information about their API privileges, quotas, API Token, and more.

To begin, developers need to initialize their token using the 'init' parameter. This process involves authenticating through the Google OAuth API to obtain the API token, which is stored locally on their computer. Once the token is initialized, developers can use the 'info' parameter to access details about their API privileges, projects, teams, and access token. Finally, developers have the option to remove all downloaded information from their computer using the 'clean' parameter.

* https://github.com/101OBeXCorp/101obexcli/releases

Mac:

* https://github.com/101OBeXCorp/101obexcli/releases/download/prerelease-staging/101obexcli-macosx.zip

Linux:

* https://github.com/101OBeXCorp/101obexcli/releases/download/prerelease/101obexcli.-.linux.zip

Win32:

* https://github.com/101OBeXCorp/101obexcli/releases/download/prerelease/101obexcli-win32.zip
@@ -0,0 +1,174 @@

## ws/orders.py/close

### Receives

All the parameters that the service receives must be indicated in the body of the request.

## Returns:

Depending on the result of the operation, this service can return two different JSON responses:

### Answer JSON OK:

```javascript
{
  "status": true,
  "codtran": "850c29598f8ceae89e7083d1547faa29",
  "result": "120d29398f8ceae89e707ad1547fa12c",
  "elapsed": 0.12410902976989746
}
```

## Where:

* `status:` Shows if the call has been successful (true) or not (false).
* `codtran:` Operation result.
* `result:` Code of the transaction that cancels the order.
* `elapsed:` Operation execution time.

### Answer JSON KO:

```javascript
{
  "status": false,
  "level": <string>,
  "message": <string>,
  "error": <string>
}
```

## Where:

* `status:` Shows if the call has been successful (true) or not (false).
* `level:` Error importance level.
* `message:` Error message.
* `error:` Sole error code.

## Example requests:

### Python - Requests:

```python
import requests

url = "http://34.121.95.179:80/ws/orders.py/close?country_code=ES&user_id=133&session_id=1689-oocyMaFovWi1jljrF-eaSw=="

payload = {}
headers = {
  '101ObexApiKey': 'MS1phGJRa3WyLilN9dlZ7vurJDIpe0nM'
}

response = requests.request("GET", url, headers=headers, data=payload)

print(response.text)
```

### NodeJs - Request:

```javascript
var request = require('request');

var options = {
  'method': 'GET',
  'url': 'http://34.121.95.179:80/ws/orders.py/close?country_code=ES&user_id=133&session_id=1689-oocyMaFovWi1jljrF-eaSw==',
  'headers': {
    '101ObexApiKey': 'MS1phGJRa3WyLilN9dlZ7vurJDIpe0nM'
  }
};

request(options, function (error, response) {
  if (error) throw new Error(error);
  console.log(response.body);
});
```

### JavaScript - Fetch:

```javascript
var myHeaders = new Headers();

myHeaders.append("101ObexApiKey", "MS1phGJRa3WyLilN9dlZ7vurJDIpe0nM");

var requestOptions = {
  method: 'GET',
  headers: myHeaders,
  redirect: 'follow'
};

fetch("http://34.121.95.179:80/ws/orders.py/close?country_code=ES&user_id=133&session_id=1689-oocyMaFovWi1jljrF-eaSw==", requestOptions)
  .then(response => response.text())
  .then(result => console.log(result))
  .catch(error => console.log('error', error));
```

### CURL:

```shell
curl --location --request GET \
  'http://34.121.95.179:80/ws/orders.py/close?country_code=ES&user_id=133&session_id=1689-oocyMaFovWi1jljrF-eaSw==' \
  --header '101ObexApiKey: MS1phGJRa3WyLilN9dlZ7vurJDIpe0nM'
```

## Business logic:

The objective of this service is to allow an administrator to close an order.
@@ -0,0 +1,26 @@

This tool is designed to enable developers to work with the 101OBeX API. With this tool, developers can retrieve information about their API privileges, quotas, API Token, and more.

To begin, developers need to initialize their token using the 'init' parameter. This process involves authenticating through the Google OAuth API to obtain the API token, which is stored locally on their computer. Once the token is initialized, developers can use the 'info' parameter to access details about their API privileges, projects, teams, and access token. Finally, developers have the option to remove all downloaded information from their computer using the 'clean' parameter.

* https://github.com/101OBeXCorp/101obexcli/releases/tag/prerelease

Mac:

* https://github.com/101OBeXCorp/101obexcli/releases/download/prerelease/101obexcli.-.mac.zip

Linux:

* https://github.com/101OBeXCorp/101obexcli/releases/download/prerelease/101obexcli.-.linux.zip

Win32:

* https://github.com/101OBeXCorp/101obexcli/releases/download/prerelease/101obexcli-win32.zip
@@ -0,0 +1,156 @@

This service is used to get the signature of a given string.

GET:
`URL_BASE + /ws/util.py/get_signature`

## Receives:

The string to be signed and the private key with which to sign it.

## Returns:

Depending on the result of the operation, this service can return two different JSON responses:

### Answer JSON OK:

```javascript
{
  "status": true,
  "signature": "38779748c3bb130d0d1f8084ad92607d705e88b7",
  "elapsed": 0.002902984619140625
}
```

## Where:

* `status:` Shows if the call has been successful (true) or not (false).
* `signature:` The signature calculated from the string.
* `elapsed:` Operation execution time.

### Answer JSON KO:

```javascript
{
  "status": false,
  "nivel": <string>,
  "message": <string>,
  "error": <string>
}
```

## Where:

* `status:` Shows if the call has been successful (true) or not (false).
* `nivel:` Error importance level.
* `message:` Error message.
* `error:` Sole error code.

## Example requests:

### Python - Requests:

```python
import requests

url = "http://api.staging.pademobile.com:8000/ws/util.py/get_signature?string_to_sign=codigo_pais%3DMX%26id_usuario%3D2%26telefono%3Doperabills%26importe%3D30000%26referencia%3DFondeo&private_key=3SQb94TOcHCm"

payload = {}
headers = {
  '101ObexApiKey': 'ri1JlbIJ7oO2kobKNwEdXrZDhd4PoZd8'
}

response = requests.request("GET", url, headers=headers, data=payload)

print(response.text)
```

### NodeJs - Request:

```javascript
var request = require('request');

var options = {
  'method': 'GET',
  'url': 'http://api.staging.pademobile.com:8000/ws/util.py/get_signature?string_to_sign=codigo_pais%3DMX%26id_usuario%3D2%26telefono%3Doperabills%26importe%3D30000%26referencia%3DFondeo&private_key=3SQb94TOcHCm',
  'headers': {
    '101ObexApiKey': 'ri1JlbIJ7oO2kobKNwEdXrZDhd4PoZd8'
  }
};

request(options, function (error, response) {
  if (error) throw new Error(error);
  console.log(response.body);
});
```

### JavaScript - Fetch:

```javascript
var myHeaders = new Headers();

myHeaders.append("101ObexApiKey", "ri1JlbIJ7oO2kobKNwEdXrZDhd4PoZd8");

var requestOptions = {
  method: 'GET',
  headers: myHeaders,
  redirect: 'follow'
};

fetch("http://api.staging.pademobile.com:8000/ws/util.py/get_signature?string_to_sign=codigo_pais%3DMX%26id_usuario%3D2%26telefono%3Doperabills%26importe%3D30000%26referencia%3DFondeo&private_key=3SQb94TOcHCm", requestOptions)
  .then(response => response.text())
  .then(result => console.log(result))
  .catch(error => console.log('error', error));
```

### CURL:

```shell
curl --location --request GET \
  'http://api.staging.pademobile.com:8000/ws/util.py/get_signature?string_to_sign=codigo_pais%3DMX%26id_usuario%3D2%26telefono%3Doperabills%26importe%3D30000%26referencia%3DFondeo&private_key=3SQb94TOcHCm'
```

## Business logic:

This endpoint calculates the signature of a string using a private key.
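
The 40-hexadecimal-character signature in the example response suggests a SHA-1-based scheme. As a hedged local sketch only (the service's actual algorithm is not documented here, so HMAC-SHA1 is an assumption):

```python
import hashlib
import hmac

# Hypothetical local computation; HMAC-SHA1 is assumed only because it
# matches the 40-hex-character signature length in the example response.
string_to_sign = "codigo_pais=MX&id_usuario=2&telefono=operabills&importe=30000&referencia=Fondeo"
private_key = "3SQb94TOcHCm"

signature = hmac.new(private_key.encode(), string_to_sign.encode(), hashlib.sha1).hexdigest()
print(len(signature))  # 40
```

If the service uses a different construction (for example, plain SHA-1 over key plus string), the code above would need to change accordingly; treat it as a shape, not a specification.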

@@ -0,0 +1,552 @@

AVAP™ Dev Studio 2024 lets you perform most tasks directly from the keyboard. This page lists the default bindings (keyboard shortcuts) and describes how you can update them.

### Keyboard Shortcuts editor

AVAP™ Dev Studio provides a rich and easy keyboard shortcut editing experience in the Keyboard Shortcuts editor. It lists all available commands, with and without keybindings, and you can easily change, remove, or reset their keybindings using the available actions. It also has a search box at the top that helps you find commands or keybindings. You can open this editor from the menu under File > Preferences > Keyboard Shortcuts.

Most importantly, you can see keybindings according to your keyboard layout. For example, the key binding `Cmd+\` in the US keyboard layout will be shown as `Ctrl+Shift+Alt+Cmd+7` when the layout is changed to German. The dialog to enter a key binding will assign the correct and desired key binding as per your keyboard layout.

For more advanced keyboard shortcut customization, read Advanced Customization.

### Keymap extensions

Keyboard shortcuts are vital to productivity, and changing keyboarding habits can be tough. To help with this, File > Preferences > Migrate Keyboard Shortcuts from... shows you a list of popular keymap extensions. These extensions modify the AVAP™ Dev Studio shortcuts to match those of other editors, so you don't need to learn new keyboard shortcuts. There is also a Keymaps category of extensions in the Marketplace.

### Keyboard Shortcuts Reference

We also have a printable version of these keyboard shortcuts. Help > Keyboard Shortcut Reference displays a condensed PDF version suitable for printing as an easy reference.

Below are links to the three platform-specific versions (US English keyboard):

* Windows
* macOS
* Linux

### Detecting keybinding conflicts

If you have many extensions installed or you have customized your keyboard shortcuts, you can sometimes have keybinding conflicts, where the same keyboard shortcut is mapped to several commands. This can result in confusing behavior, especially if different keybindings are going in and out of scope as you move around the editor.

The Keyboard Shortcuts editor has a context menu command, Show Same Keybindings, which filters the keybindings based on a keyboard shortcut to display conflicts.

Pick a command with the keybinding you think is overloaded, and you can see whether multiple commands are defined, the source of the keybindings, and when they are active.

### Troubleshooting keybindings

To troubleshoot keybinding problems, you can execute the command Developer: Toggle Keyboard Shortcuts Troubleshooting. This will activate logging of dispatched keyboard shortcuts and open an output panel with the corresponding log file.

You can then press your desired keybinding and check what keyboard shortcut AVAP™ DS detects and what command is invoked.

For example, when pressing `cmd+/` in a code editor on macOS, the logging output would be:

```
[KeybindingService]: / Received  keydown event - modifiers: [meta], code: MetaLeft, keyCode: 91, key: Meta
[KeybindingService]: | Converted keydown event - modifiers: [meta], code: MetaLeft, keyCode: 57 ('Meta')
[KeybindingService]: \ Keyboard event cannot be dispatched.
[KeybindingService]: / Received  keydown event - modifiers: [meta], code: Slash, keyCode: 191, key: /
[KeybindingService]: | Converted keydown event - modifiers: [meta], code: Slash, keyCode: 85 ('/')
[KeybindingService]: | Resolving meta+[Slash]
[KeybindingService]: \ From 2 keybinding entries, matched editor.action.commentLine, when: editorTextFocus && !editorReadonly, source: built-in.
```

The first keydown event is for the MetaLeft key (cmd) and cannot be dispatched. The second keydown event is for the Slash key (/) and is dispatched as meta+[Slash]. There were two keybinding entries mapped from meta+[Slash], and the one that matched was for the command editor.action.commentLine, which has the when condition editorTextFocus && !editorReadonly and is a built-in keybinding entry.

### Viewing modified keybindings

You can view any user-modified keyboard shortcuts in AVAP™ Dev Studio in the Keyboard Shortcuts editor with the Show User Keybindings command in the More Actions (...) menu. This applies the @source:user filter to the Keyboard Shortcuts editor (Source is 'User').

### Advanced customization

All keyboard shortcuts in AVAP™ Dev Studio can be customized via the keybindings.json file.

To configure keyboard shortcuts through the JSON file, open the Keyboard Shortcuts editor and select the Open Keyboard Shortcuts (JSON) button on the right of the editor title bar. This will open your keybindings.json file, where you can overwrite the Default Keyboard Shortcuts.

You can also open the keybindings.json file from the Command Palette (Ctrl+Shift+P) with the Preferences: Open Keyboard Shortcuts (JSON) command.

### Keyboard rules

Each rule consists of:

* a key that describes the pressed keys.
* a command containing the identifier of the command to execute.
* an optional when clause containing a boolean expression that is evaluated depending on the current context.

Chords (two separate keypress actions) are described by separating the two keypresses with a space. For example, `Ctrl+K Ctrl+C`.

When a key is pressed:

* the rules are evaluated from bottom to top.
* the first rule that matches, both in terms of the key and the when clause, is accepted.
* no more rules are processed.
* if a rule is found and has a command set, the command is executed.
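
The evaluation order above can be modeled as a tiny matcher. This is an illustrative Python sketch (not AVAP™ DS source code) showing the bottom-to-top, first-match behavior, using the complementary `f5` rules from this page:

```python
# Illustrative model of rule dispatching: rules are scanned bottom to top
# and the first rule matching both the key and the when clause wins.
rules = [
    {"key": "f5", "command": "workbench.action.debug.start", "when": "!inDebugMode"},
    {"key": "f5", "command": "workbench.action.debug.continue", "when": "inDebugMode"},
]

def dispatch(key, context):
    # context: set of active when-clause flags, e.g. {"inDebugMode"}.
    # Only single flags with optional '!' negation are modeled here.
    for rule in reversed(rules):          # bottom to top
        when = rule["when"]
        active = (when.lstrip("!") in context) != when.startswith("!")
        if rule["key"] == key and active:
            return rule["command"]        # first match wins; stop processing
    return None

print(dispatch("f5", {"inDebugMode"}))    # workbench.action.debug.continue
print(dispatch("f5", set()))              # workbench.action.debug.start
```

Because user rules are appended below the defaults, a matching user rule is found first, which is exactly how keybindings.json overrides work.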
|
||||||
|
|
||||||
|
The additional keybindings.json rules are appended at runtime to the
|
||||||
|
bottom of the default rules, thus allowing them to overwrite the default
|
||||||
|
rules. The keybindings.json file is watched by AVAP™ DS so editing it
|
||||||
|
while AVAP TM Dev Studio is running will update the rules at
|
||||||
|
runtime.
|
||||||
|
|
||||||
|
The keyboard shortcuts dispatching is done by analyzing a list of rules
|
||||||
|
that are expressed in JSON. Here are some examples:
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
// Keybindings that are active when the focus is in the editor
|
||||||
|
|
||||||
|
{ "key": "home", "command": "cursorHome", "when": "editorTextFocus"
|
||||||
|
},
|
||||||
|
|
||||||
|
{ "key": "shift+home", "command": "cursorHomeSelect", "when":
|
||||||
|
"editorTextFocus" },
|
||||||
|
|
||||||
|
|
||||||
|
// Keybindings that are complementary
|
||||||
|
|
||||||
|
{ "key": "f5", "command": "workbench.action.debug.continue",
|
||||||
|
"when": "inDebugMode" },
|
||||||
|
|
||||||
|
{ "key": "f5", "command": "workbench.action.debug.start", "when":
|
||||||
|
"!inDebugMode" },
|
||||||
|
|
||||||
|
|
||||||
|
// Global keybindings
|
||||||
|
|
||||||
|
{ "key": "ctrl+f", "command": "actions.find" },
|
||||||
|
|
||||||
|
{ "key": "alt+left", "command": "workbench.action.navigateBack"
|
||||||
|
},
|
||||||
|
|
||||||
|
{ "key": "alt+right", "command": "workbench.action.navigateForward"
|
||||||
|
},
|
||||||
|
|
||||||
|
|
||||||
|
// Global keybindings using chords (two separate keypress
|
||||||
|
actions)
|
||||||
|
|
||||||
|
{ "key": "ctrl+k enter", "command": "workbench.action.keepEditor"
|
||||||
|
},
|
||||||
|
|
||||||
|
{ "key": "ctrl+k ctrl+w", "command":
|
||||||
|
"workbench.action.closeAllEditors" },
|
||||||
|
```
|
||||||
|
|
||||||
|
### Accepted keys
|
||||||
|
|
||||||
|
The key is made up of modifiers and the key itself.
|
||||||
|
|
||||||
|
The following modifiers are accepted:
|
||||||
|
|
||||||
|
The following keys are accepted:
|
||||||
|
|
||||||
|
* `f1-f19` , `a-z` , `0-9`
|
||||||
|
* ```, `-` , `=` , `[` , `]` ,{' '} `\` , `;` , `'` , `,` ,{' '} `.` , `/`
|
||||||
|
* `left` , `up` , `right` ,{' '} `down` , `pageup` , `pagedown` ,{' '} `end` , `home`
|
||||||
|
* `tab` , `enter` , `escape` ,{' '} `space` , `backspace` , `delete`
|
||||||
|
* `pausebreak` , `capslock` , `insert`
|
||||||
|
* `numpad0-numpad9` , `numpad_multiply` ,{' '} `numpad_add` , `numpad_separator`
|
||||||
|
* `numpad_subtract` , `numpad_decimal` ,{' '} `numpad_divide`
|
||||||
|
|
||||||
|
### Command arguments
|
||||||
|
|
||||||
|
You can invoke a command with arguments. This is useful if you often
|
||||||
|
perform the same operation on a specific file or folder. You can add a
|
||||||
|
custom keyboard shortcut to do exactly what you want.
|
||||||
|
|
||||||
|
The following is an example overriding the `Enter` key to print
|
||||||
|
some text:
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
{
|
||||||
|
"key": "enter",
|
||||||
|
"command": "type",
|
||||||
|
"args": { "text": "Hello World" },
|
||||||
|
"when": "editorTextFocus"
|
||||||
|
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
The type command will receive {"text": "Hello
|
||||||
|
World"} as its first argument and add "Hello World" to
|
||||||
|
the file instead of producing the default command.
|
||||||
|
|
||||||
|
For more information on commands that take arguments, refer to Built-in
|
||||||
|
Commands.
|
||||||
|
|
||||||
|
### Running multiple commands
|
||||||
|
|
||||||
|
It is possible to create a keybinding that runs several other commands
|
||||||
|
sequentially using the command runCommands.
|
||||||
|
|
||||||
|
Run several commands without arguments: copy current line down, mark the
|
||||||
|
current line as comment, move cursor to copied line
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
{
|
||||||
|
"key": "ctrl+alt+c",
|
||||||
|
"command": "runCommands",
|
||||||
|
"args": {
|
||||||
|
"commands": [ "editor.action.copyLinesDownAction",
|
||||||
|
"cursorUp",
|
||||||
|
"editor.action.addCommentLine",
|
||||||
|
"cursorDown"
|
||||||
|
] }
|
||||||
|
|
||||||
|
},
|
||||||
|
```
|
||||||
|
|
||||||
|
It is also possible to pass arguments to commands: create a new untitled
|
||||||
|
TypeScript file and insert a custom snippet
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
{
|
||||||
|
"key": "ctrl+n",
|
||||||
|
"command": "runCommands",
|
||||||
|
"args": {
|
||||||
|
"commands": [ {
|
||||||
|
"command": "workbench.action.files.newUntitledFile",
|
||||||
|
"args": {
|
||||||
|
"languageId": "typescript"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"command": "editor.action.insertSnippet",
|
||||||
|
"args": {
|
||||||
|
"langId": "typescript",
|
||||||
|
"snippet": "class ${1:ClassName}
|
||||||
|
{\n\tconstructor() {\n\t\t$0\n\t}\n}"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
] }
|
||||||
|
|
||||||
|
},
|
||||||
|
```
|
||||||
|
|
||||||
|
Note that commands run by runCommands receive the value of
|
||||||
|
"args" as the first argument. So in the example above,
|
||||||
|
workbench.action.files.newUntitledFile receives
|
||||||
|
{"languageId": "typescript" } as its first
|
||||||
|
and only argument.
|
||||||
|
|
||||||
|
To pass several arguments, one needs to have "args" as an array:
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
{
|
||||||
|
"key": "ctrl+shift+e",
|
||||||
|
"command": "runCommands",
|
||||||
|
"args": {
|
||||||
|
"commands": [ {
|
||||||
|
// command invoked with 2 arguments:
|
||||||
|
vscode.executeCommand("myCommand", "arg1", "arg2")
|
||||||
|
"command": "myCommand",
|
||||||
|
"args": ["arg1", "arg2"]
|
||||||
|
}
|
||||||
|
] }
|
||||||
|
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
To pass an array as the first argument, one needs to wrap the array in
|
||||||
|
another array: "args": [ [1, 2, 3] ].
### Removing a specific key binding rule

You can write a key binding rule that targets the removal of a specific default key binding. With keybindings.json, it was always possible to redefine all the key bindings of AVAP™ Dev Studio, but it can be difficult to make a small tweak, especially around overloaded keys, such as `Tab` or `Escape`. To remove a specific key binding, prepend a `-` to the command and the rule becomes a removal rule.
Here is an example:

```javascript
// In Default Keyboard Shortcuts
...
{ "key": "tab", "command": "tab", "when": ... },
{ "key": "tab", "command": "jumpToNextSnippetPlaceholder", "when": ... },
{ "key": "tab", "command": "acceptSelectedSuggestion", "when": ... },
...

// To remove the second rule, for example, add in keybindings.json:
{ "key": "tab", "command": "-jumpToNextSnippetPlaceholder" }
```
To override a specific key binding rule with an empty action, you can specify an empty command:

```javascript
// To override and disable any `tab` keybinding, for example, add in keybindings.json:
{ "key": "tab", "command": "" }
```
### Keyboard layouts

The keys above are string representations for virtual keys and do not necessarily relate to the produced character when they are pressed. More precisely:

* Reference: Virtual-Key Codes (Windows)
* `tab` for VK_TAB (0x09)
* `;` for VK_OEM_1 (0xBA)
* `=` for VK_OEM_PLUS (0xBB)
* `,` for VK_OEM_COMMA (0xBC)
* `-` for VK_OEM_MINUS (0xBD)
* `.` for VK_OEM_PERIOD (0xBE)
* `/` for VK_OEM_2 (0xBF)
* `` ` `` for VK_OEM_3 (0xC0)
* `[` for VK_OEM_4 (0xDB)
* `\` for VK_OEM_5 (0xDC)
* `]` for VK_OEM_6 (0xDD)
* `'` for VK_OEM_7 (0xDE)
* etc.
Different keyboard layouts usually reposition the above virtual keys or change the characters produced when they are pressed. When using a keyboard layout other than the standard US one, AVAP™ Dev Studio does the following:

All the key bindings are rendered in the UI using the current system's keyboard layout. For example, Split Editor when using a French (France) keyboard layout is now rendered as `Ctrl+*`:

When editing keybindings.json, AVAP™ Dev Studio highlights misleading key bindings: those that are represented in the file with the character produced under the standard US keyboard layout, but that need pressing keys with different labels under the current system's keyboard layout. For example, here is how the Default Keyboard Shortcuts rules look when using a French (France) keyboard layout:

There is also a widget that helps input the key binding rule when editing keybindings.json. To launch the Define Keybinding widget, press `Ctrl+K Ctrl+K`. The widget listens for key presses and renders the serialized JSON representation in the text box and, below it, the keys that AVAP™ Dev Studio has detected under your current keyboard layout. Once you've typed the key combination you want, you can press `Enter` and a rule snippet will be inserted.
### Keyboard layout-independent bindings

Using scan codes, it is possible to define keybindings which do not change with the keyboard layout. For example:

```javascript
{ "key": "cmd+[Slash]", "command": "editor.action.commentLine", "when": "editorTextFocus" }
```

Accepted scan codes:

* `[F1]-[F19]`, `[KeyA]-[KeyZ]`, `[Digit0]-[Digit9]`
* `[Backquote]`, `[Minus]`, `[Equal]`, `[BracketLeft]`, `[BracketRight]`, `[Backslash]`, `[Semicolon]`, `[Quote]`, `[Comma]`, `[Period]`, `[Slash]`
* `[ArrowLeft]`, `[ArrowUp]`, `[ArrowRight]`, `[ArrowDown]`, `[PageUp]`, `[PageDown]`, `[End]`, `[Home]`
* `[Tab]`, `[Enter]`, `[Escape]`, `[Space]`, `[Backspace]`, `[Delete]`
* `[Pause]`, `[CapsLock]`, `[Insert]`
* `[Numpad0]-[Numpad9]`, `[NumpadMultiply]`, `[NumpadAdd]`, `[NumpadComma]`
* `[NumpadSubtract]`, `[NumpadDecimal]`, `[NumpadDivide]`
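Modifiers can be combined with scan codes in the same way, so the binding stays on the same physical key under any layout. A sketch (the key and command chosen here are illustrative):

```javascript
{ "key": "ctrl+[KeyY]", "command": "editor.action.triggerSuggest", "when": "editorTextFocus" }
```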
### when clause contexts

AVAP™ Dev Studio gives you fine control over when your key bindings are enabled through the optional when clause. If your key binding doesn't have a when clause, the key binding is available globally at all times. A when clause evaluates to either Boolean true or false for enabling key bindings.

AVAP™ Dev Studio sets various context keys and specific values depending on what elements are visible and active in the AVAP™ Dev Studio UI. For example, the built-in Start Debugging command has the keyboard shortcut `F5`, which is only enabled when there is an appropriate debugger available (context debuggersAvailable is true) and the editor isn't in debug mode (context inDebugMode is false):

You can also view a keybinding's when clause directly in the Default Keybindings JSON (Preferences: Open Default Keyboard Shortcuts (JSON)):
```javascript
{ "key": "f5", "command": "workbench.action.debug.start", "when": "debuggersAvailable && !inDebugMode" },
```

For when clause conditional expressions, conditional operators such as `==`, `!=`, `&&`, `||`, and `!` are useful for keybindings.

You can find the full list of when clause conditional operators in the when clause contexts reference.

You can find some of the available when clause contexts in the when clause context reference.

The list there isn't exhaustive and you can find other when clause contexts by searching and filtering in the Keyboard Shortcuts editor (Preferences: Open Keyboard Shortcuts) or reviewing the Default Keybindings JSON file (Preferences: Open Default Keyboard Shortcuts (JSON)).
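Combining such operators, a custom keybinding might look like the following sketch (the key chosen here is arbitrary; editorLangId and inDebugMode are standard context keys):

```javascript
{ "key": "ctrl+alt+p", "command": "workbench.action.togglePanel", "when": "editorLangId == markdown && !inDebugMode" }
```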
### Custom keybindings for refactorings

The editor.action.codeAction command lets you configure keybindings for specific Refactorings (Code Actions). For example, the keybinding below triggers the Extract function refactoring Code Action:

```javascript
{
    "key": "ctrl+shift+r ctrl+e",
    "command": "editor.action.codeAction",
    "args": {
        "kind": "refactor.extract.function"
    }
}
```

This is covered in depth in the Refactoring topic, where you can learn about different kinds of Code Actions and how to prioritize them in the case of multiple possible refactorings.
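As a related sketch, the `apply` argument controls what happens when more than one Code Action matches; `"apply": "first"` applies the first matching action without showing a picker (the key and kind here are illustrative):

```javascript
{
    "key": "ctrl+shift+r ctrl+c",
    "command": "editor.action.codeAction",
    "args": {
        "kind": "refactor.extract.constant",
        "apply": "first"
    }
}
```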
### Default Keyboard Shortcuts

You can view all default keyboard shortcuts in AVAP™ Dev Studio in the Keyboard Shortcuts editor with the Show Default Keybindings command in the More Actions (...) menu. This applies the @source:default filter to the Keyboard Shortcuts editor (Source is 'Default').

You can view the default keyboard shortcuts as a JSON file using the command Preferences: Open Default Keyboard Shortcuts (JSON).

Some commands included below do not have default keyboard shortcuts and so are displayed as unassigned, but you can assign your own keybindings.
### Next steps

Now that you know about our Key binding support, what's next...

* Language Support - Our Good, Better, Best language grid to see what you can expect
* Debugging - This is where AVAP™ DS really shines
* Node.js - End to end Node.js scenario with a sample app
### Common questions

In the Keyboard Shortcuts editor, you can filter on specific keystrokes to see which commands are bound to which keys. Below you can see that Ctrl+Shift+P is bound to Show All Commands to bring up the Command Palette.

Find a rule that triggers the action in the Default Keyboard Shortcuts and write a modified version of it in your keybindings.json file:

```javascript
// Original, in Default Keyboard Shortcuts
{ "key": "ctrl+shift+k", "command": "editor.action.deleteLines", "when": "editorTextFocus" },

// Modified, in User/keybindings.json, Ctrl+D will now also trigger this action
{ "key": "ctrl+d", "command": "editor.action.deleteLines", "when": "editorTextFocus" },
```

Use the editorLangId context key in your when clause:

```javascript
{ "key": "shift+alt+a", "command": "editor.action.blockComment", "when": "editorTextFocus && editorLangId == csharp" },
```

The most common problem is a syntax error in the file. Otherwise, try removing the when clause or picking a different key. Unfortunately, at this point, it is a trial-and-error process.
"Tips and Tricks" lets you jump right in and learn how to be productive with AVAP™ Dev Studio 2024. You'll become familiar with its powerful editing, code intelligence, and source code control features and learn useful keyboard shortcuts. This topic goes pretty fast and provides a broad overview, so be sure to look at the other in-depth topics in Getting Started and the User Guide to learn more.
### Basics

The best way of exploring AVAP™ Dev Studio hands-on is to open the Welcome page. You will get an overview of AVAP™ Dev Studio's customizations and features. Help > Welcome.

Pick a Walkthrough for a self-guided tour through the setup steps, features, and deeper customizations that AVAP™ Dev Studio offers. As you discover and learn, the walkthroughs track your progress.

If you are looking to improve your code editing skills, open the Interactive Editor Playground. Try out AVAP™ Dev Studio's code editing features, like multi-cursor editing, IntelliSense, Snippets, Emmet, and many more. Help > Editor Playground.
Access all available commands based on your current context.

Keyboard Shortcut: `Ctrl+Shift+P`

All of the commands are in the Command Palette with the associated key binding (if it exists). If you forget a keyboard shortcut, use the Command Palette to help you out.

Download the keyboard shortcut reference sheet for your platform (macOS, Windows, Linux).
Quickly open files.

Keyboard Shortcut: `Ctrl+P`

Tip: Type `?` to view command suggestions.

Typing commands such as edt and term followed by a space will bring up dropdown lists.

Repeat the Quick Open keyboard shortcut to cycle quickly between recently opened files.

You can open multiple files from Quick Open by pressing the Right arrow key. This will open the currently selected file in the background and you can continue selecting files from Quick Open.

Open Recent

Keyboard Shortcut: `Ctrl+R`

Displays a Quick Pick dropdown with the list from File > Open Recent with recently opened folders and workspaces followed by files.
### Command line

AVAP™ Dev Studio has a powerful command line interface (CLI) which allows you to customize how the editor is launched to support various scenarios.

```bash
# open code with current directory
code .

# open the current directory in the most recently used code window
code -r .

# create a new window
code -n

# change the language
code --locale=es

# open diff editor
code --diff <file1> <file2>

# open file at specific line and column <file:line[:character]>
code --goto package.json:10:5

# see help options
code --help

# disable all extensions
code --disable-extensions .
```

Workspace-specific files are in a .avapcode folder at the root. For example, tasks.json for the Task Runner and launch.json for the debugger.
### Status Bar

Errors and warnings

Keyboard Shortcut: `Ctrl+Shift+M`

Quickly jump to errors and warnings in the project.

Cycle through errors with `F8` or `Shift+F8`.

You can filter problems either by type ('errors', 'warnings') or text matching.

Change language mode

Keyboard Shortcut: `Ctrl+K M`

If you want to persist the new language mode for that file type, you can use the Configure File Association for command to associate the current file extension with an installed language.
### Customization

There are many things you can do to customize AVAP™ Dev Studio.

* Change your theme
* Change your keyboard shortcuts
* Tune your settings
* Add JSON validation
* Create snippets
* Install extensions
Keyboard Shortcut: `Ctrl+K Ctrl+T`

You can install more themes from the AVAP™ Dev Studio extension Marketplace.

Are you used to keyboard shortcuts from another editor? You can install a Keymap extension that brings the keyboard shortcuts from your favorite editor to AVAP™ Dev Studio. Go to Preferences > Migrate Keyboard Shortcuts from ... to see the current list on the Marketplace. Some of the more popular ones:

* Vim
* Sublime Text Keymap
* Emacs Keymap
* Atom Keymap
* Brackets Keymap
* Eclipse Keymap
* AVAP™ Dev Studio Keymap
Keyboard Shortcut: `Ctrl+K Ctrl+S`

You can search for shortcuts and add your own keybindings to the keybindings.json file.

See more in Key Bindings for AVAP™ Dev Studio.

By default, AVAP™ Dev Studio shows the Settings editor, where you can find the settings listed below with its search bar, but you can still edit the underlying settings.json file by using the Open User Settings (JSON) command or by changing your default settings editor with the workbench.settings.editor setting.

Open User Settings settings.json

Keyboard Shortcut: `Ctrl+,`

Change the font size of various UI elements
```javascript
// Main editor
"editor.fontSize": 18,

// Terminal panel
"terminal.integrated.fontSize": 14,

// Output panel
"[Log]": {
    "editor.fontSize": 15
}
```
Change the zoom level

```javascript
"window.zoomLevel": 5
```

Font ligatures

```javascript
"editor.fontFamily": "Fira Code",
"editor.fontLigatures": true
```

Auto Save

```javascript
"files.autoSave": "afterDelay"
```

You can also toggle Auto Save from the top-level menu with File > Auto Save.
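The delay before an automatic save is itself configurable; files.autoSaveDelay is in milliseconds. A minimal sketch saving one second after you stop typing:

```javascript
"files.autoSave": "afterDelay",
"files.autoSaveDelay": 1000
```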
Format on save

```javascript
"editor.formatOnSave": true
```

Format on paste

```javascript
"editor.formatOnPaste": true
```

Change the size of Tab characters

```javascript
"editor.tabSize": 4
```

Spaces or Tabs

```javascript
"editor.insertSpaces": true
```

Render whitespace

```javascript
"editor.renderWhitespace": "all"
```

Whitespace characters are rendered by default in text selection.
Ignore files / folders

Remove these files / folders from your editor window.

```javascript
"files.exclude": {
    "somefolder/": true,
    "somefile": true
}
```

Remove these files / folders from search results.

```javascript
"search.exclude": {
    "someFolder/": true,
    "somefile": true
}
```
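Both exclude settings accept glob patterns, which is handy for hiding generated folders wherever they appear in the tree. A sketch using common patterns:

```javascript
"files.exclude": {
    "**/node_modules": true,
    "**/*.log": true
}
```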
And many, many other customizations.

You can scope the settings that you only want for specific languages by the language identifier. You can find a list of commonly used language IDs in the Language Identifiers reference.

```javascript
"[languageid]": {

}
```
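For example, a minimal sketch scoping two editor settings to Markdown files only (both settings are standard):

```javascript
"[markdown]": {
    "editor.wordWrap": "on",
    "editor.renderWhitespace": "all"
}
```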
Enabled by default for many file types. Create your own schema and validation in settings.json

```javascript
"json.schemas": [
    {
        "fileMatch": [ "/bower.json" ],
        "url": "https://json.schemastore.org/bower"
    }
]
```

or for a schema defined in your workspace

```javascript
"json.schemas": [
    {
        "fileMatch": [ "/foo.json" ],
        "url": "./myschema.json"
    }
]
```
or a custom schema

```javascript
"json.schemas": [
    {
        "fileMatch": [ "/.myconfig" ],
        "schema": {
            "type": "object",
            "properties": {
                "name": {
                    "type": "string",
                    "description": "The name of the entry"
                }
            }
        }
    }
]
```

See more in the JSON documentation.
### Extensions

Keyboard Shortcut: `Ctrl+Shift+X`

In the Extensions view, you can search via the search bar or click the More Actions (...) button to filter and sort by install count.

In the Extensions view, click Show Recommended Extensions in the More Actions (...) button menu.

Are you interested in creating your own extension? You can learn how to do this in the Extension API documentation, specifically check out the documentation on contribution points.
* configuration
* commands
* keybindings
* languages
* debuggers
* grammars
* themes
* snippets
* jsonValidation
### Files and folders

Keyboard Shortcut: ``Ctrl+` ``

Further reading:

* Integrated Terminal documentation
* Mastering AVAP™ DS's Terminal article
Keyboard Shortcut: `Ctrl+B`

Keyboard Shortcut: `Ctrl+J`

Keyboard Shortcut: `Ctrl+K Z`

Enter distraction-free Zen mode.

Press `Esc` twice to exit Zen Mode.

Keyboard Shortcut: `Ctrl+\`

You can also drag and drop editors to create new editor groups and move editors between groups.
Keyboard Shortcut: `Ctrl+1`, `Ctrl+2`, `Ctrl+3`

Keyboard Shortcut: `Ctrl+Shift+E`

Keyboard Shortcut: `Ctrl+click` (`Cmd+click` on macOS)

You can quickly open a file or image or create a new file by moving the cursor to the file link and using `Ctrl+click`.
Keyboard Shortcut: `Ctrl+K F`

Navigate entire history: `Ctrl+Tab`

Navigate back: `Alt+Left`

Navigate forward: `Alt+Right`

Create language associations for files that aren't detected correctly. For example, many configuration files with custom file extensions are actually JSON.
```javascript
"files.associations": {
    ".database": "json"
}
```
AVAP™ Dev Studio will show you an error message when you try to save a file that cannot be saved because it has changed on disk. AVAP™ Dev Studio blocks saving the file to prevent overwriting changes that have been made outside of the editor.

In order to resolve the save conflict, click the Compare action in the error message to open a diff editor that will show you the contents of the file on disk (on the left) compared to the contents in AVAP™ Dev Studio (on the right):

Use the actions in the editor toolbar to resolve the save conflict. You can either Accept your changes, thereby overwriting any changes on disk, or Revert to the version on disk. Reverting means that your changes will be lost.

Note: The file will remain dirty and cannot be saved until you pick one of the two actions to resolve the conflict.
### Editing Hacks

Here is a selection of common features for editing code. If the keyboard shortcuts aren't comfortable for you, consider installing a keymap extension for your old editor.

Tip: You can see recommended keymap extensions in the Extensions view by filtering the search to @recommended:keymaps.

To add cursors at arbitrary positions, select a position with your mouse and use `Alt+Click` (`Option+Click` on macOS).
To set cursors above or below the current position use:

Keyboard Shortcut: `Ctrl+Alt+Up` or `Ctrl+Alt+Down`

You can add additional cursors to all occurrences of the current selection with `Ctrl+Shift+L`.

If you do not want to add all occurrences of the current selection, you can use `Ctrl+D` instead. This only selects the next occurrence after the one you selected, so you can add selections one by one.

You can select blocks of text by holding `Shift+Alt` (`Shift+Option` on macOS) while you drag your mouse. A separate cursor will be added to the end of each selected line.

You can also use keyboard shortcuts to trigger column selection.

You can add vertical column rulers to the editor with the editor.rulers setting, which takes an array of column character positions where you'd like vertical rulers.
```javascript
{
    "editor.rulers": [20, 40, 60]
}
```

Pressing the Alt key enables fast scrolling in the editor and Explorers. By default, fast scrolling uses a 5x speed multiplier, but you can control the multiplier with the Editor: Fast Scroll Sensitivity (editor.fastScrollSensitivity) setting.
Keyboard Shortcut: `Shift+Alt+Up` or `Shift+Alt+Down`

Keyboard Shortcut: `Alt+Up` or `Alt+Down`

Keyboard Shortcut: `Shift+Alt+Left` or `Shift+Alt+Right`

You can learn more in the Basic Editing documentation.
Keyboard Shortcut: `Ctrl+Shift+O`

You can group the symbols by kind by adding a colon, @:.

Keyboard Shortcut: `Ctrl+T`

The Outline view in the File Explorer (collapsed at the bottom by default) shows you the symbols of the currently open file.

You can sort by symbol name, category, and position in the file, and it allows quick navigation to symbol locations.
Keyboard Shortcut: `Ctrl+G`

Keyboard Shortcut: `Ctrl+U`

Keyboard Shortcut: `Ctrl+K Ctrl+X`

Format currently selected source code: `Ctrl+K Ctrl+F`

Format whole document: `Shift+Alt+F`

Keyboard Shortcut: `Ctrl+Shift+[` and `Ctrl+Shift+]`

You can also fold/unfold all regions in the editor with Fold All (`Ctrl+K Ctrl+0`) and Unfold All (`Ctrl+K Ctrl+J`).

You can fold all block comments with Fold All Block Comments (`Ctrl+K Ctrl+/`).

Keyboard Shortcut: `Ctrl+L`

Keyboard Shortcut: `Ctrl+Home` and `Ctrl+End`

In a Markdown file, use

Keyboard Shortcut: `Ctrl+Shift+V`

In a Markdown file, use

Keyboard Shortcut: `Ctrl+K V`

The preview and editor will synchronize with your scrolling in either view.
### IntelliSense

`Ctrl+Space` to trigger the Suggestions widget.

You can view available methods, parameter hints, short documentation, etc.

Select a symbol then type `Alt+F12`. Alternatively, you can use the context menu.

Select a symbol then type `F12`. Alternatively, you can use the context menu or `Ctrl+click` (`Cmd+click` on macOS).
You can go back to your previous location with the Go > Back command or `Alt+Left`.

You can also see the type definition if you press `Ctrl` (`Cmd` on macOS) when you are hovering over the type.

Select a symbol then type `Shift+F12`. Alternatively, you can use the context menu.

Select a symbol then type `Shift+Alt+F12` to open the References view, showing all your file's symbols in a dedicated view.

To rename a symbol, select it and type `F2`. Alternatively, you can use the context menu.
Besides searching and replacing expressions, you can also search and reuse parts of what was matched, using regular expressions with capturing groups. Enable regular expressions in the search box by clicking the Use Regular Expression .* button (`Alt+R`), then write a regular expression and use parentheses to define groups. You can then reuse the content matched in each group by using $1, $2, etc. in the Replace field.
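The same capture-group substitution that the Replace field performs can be sketched in plain JavaScript (the sample strings here are illustrative):

```javascript
// Swap "last, first" name order using a capture-group replacement,
// analogous to searching for (\w+), (\w+) and replacing with $2 $1.
const input = "Doe, John";
const swapped = input.replace(/(\w+), (\w+)/, "$2 $1");
console.log(swapped); // John Doe
```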
Install the ESLint extension. Configure your linter however you'd like. Consult the ESLint specification for details on its linting rules and options.

Here is a configuration to use ES6:
```javascript
{
    "env": {
        "browser": true,
        "commonjs": true,
        "es6": true,
        "node": true
    },
    "parserOptions": {
        "ecmaVersion": 6,
        "sourceType": "module",
        "ecmaFeatures": {
            "jsx": true,
            "classes": true,
            "defaultParams": true
        }
    },
    "rules": {
        "no-const-assign": 1,
        "no-extra-semi": 0,
        "semi": 0,
        "no-fallthrough": 0,
        "no-empty": 0,
        "no-mixed-spaces-and-tabs": 0,
        "no-redeclare": 0,
        "no-this-before-super": 1,
        "no-undef": 1,
        "no-unreachable": 1,
        "no-use-before-define": 0,
        "constructor-super": 1,
        "curly": 0,
        "eqeqeq": 0,
        "func-names": 0,
        "valid-typeof": 1
    }
}
```
See IntelliSense for your package.json file.

Support for Emmet syntax.
### Snippets

File > Preferences > Configure User Snippets, select the language, and create a snippet.

```javascript
"create component": {
    "prefix": "component",
    "body": [
        "class $1 extends React.Component {",
        "",
        "\trender() {",
        "\t\treturn ($2);",
        "\t}",
        "",
        "}"
    ]
},
```

See more details in Creating your own Snippets.
|
||||||
|
|
||||||
|
### Git integration

Keyboard Shortcut: `Ctrl+Shift+G`

Git integration comes with AVAP™ Dev Studio "out-of-the-box". You can install other SCM providers from the Extension Marketplace. This section describes the Git integration, but much of the UI and gestures are shared by other SCM providers.

From the Source Control view, select a file to open the diff.

Alternatively, click the Open Changes button in the top right corner to diff the current open file.

Views

The default view for diffs is the side-by-side view.

Toggle the inline view by clicking the More Actions (...) button in the top right and selecting Toggle Inline View.
If you prefer the inline view, you can set "diffEditor.renderSideBySide": false.
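That setting goes in your user or workspace settings.json; a minimal sketch (only the diff-editor key comes from the text above, the surrounding braces are standard settings.json structure):

```javascript
{
  // Render diffs inline instead of side by side
  "diffEditor.renderSideBySide": false
}
```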
Accessible Diff Viewer

Navigate through diffs with `F7` and `Shift+F7`. This will present them in a unified patch format. Lines can be navigated with arrow keys and pressing `Enter` will jump back in the diff editor and the selected line.
Edit pending changes

You can make edits directly in the pending changes of the diff view.

Easily switch between Git branches via the Status Bar.

Stage file changes

Hover over the number of files and click the plus button.

Click the minus button to unstage changes.
Stage selected

Stage a portion of a file by selecting that file (using the arrows) and then choosing Stage Selected Ranges from the Command Palette.

Click the (...) button and then select Undo Last Commit to undo the previous commit. The changes are added to the Staged Changes section.

AVAP™ Dev Studio makes it easy to see what Git commands are actually running. This is helpful when learning Git or debugging a difficult source control issue.
Use the Toggle Output command (`Ctrl+Shift+U`) and select Git in the dropdown.

View diff decorations in the editor. See the documentation for more details.

During a merge, go to the Source Control view (`Ctrl+Shift+G`) and make changes in the diff view.

You can resolve merge conflicts with the inline CodeLens which lets you Accept Current Change, Accept Incoming Change, Accept Both Changes, and Compare Changes.
To use AVAP™ Dev Studio as your Git merge tool:

```javascript
git config --global merge.tool vscode

git config --global mergetool.vscode.cmd 'code --wait $MERGED'
```

To use it as your Git diff tool:

```javascript
git config --global diff.tool vscode

git config --global difftool.vscode.cmd 'code --wait --diff $LOCAL $REMOTE'
```
### Debugging

From the Run and Debug view (`Ctrl+Shift+D`), select create a launch.json file, which will prompt you to select the environment that matches your project (Node.js, Python, C++, etc.). This will generate a launch.json file. Node.js support is built-in and other environments require installing the appropriate language extensions. See the debugging documentation for more details.
Place breakpoints next to the line number. Navigate forward with the Debug widget.

Inspect variables in the Run panels and in the console.
Logpoints act much like breakpoints, but instead of halting the debugger when they are hit, they log a message to the console. Logpoints are especially useful for injecting logging while debugging production servers that cannot be modified or paused.

Add a logpoint with the Add Logpoint command in the left editor gutter and it will be displayed as a "diamond" shaped icon. Log messages are plain text but can include expressions to be evaluated within curly braces ('{}').
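The curly-brace interpolation can be pictured as a simple substitution. This sketch is illustrative only (the message and variable names are invented, and it is not the editor's actual implementation):

```javascript
// Replace {name} placeholders in a logpoint-style message with values from a scope object.
function formatLogpoint(message, scope) {
  return message.replace(/\{(\w+)\}/g, (_, name) => String(scope[name]));
}

console.log(formatLogpoint("count is {count}", { count: 3 })); // "count is 3"
```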
A triggered breakpoint is a breakpoint that is automatically enabled once another breakpoint is hit. They can be very useful when diagnosing failure cases in code that happen only after a certain precondition.

Triggered breakpoints can be set by right-clicking on the glyph margin, selecting Add Triggered Breakpoint, and then choosing which other breakpoint enables the breakpoint.

https://code.visualstudio.com/assets/docs/editor/debugging/debug-triggered-breakpoint.mp4
### Task runner

Select Terminal from the top-level menu, run the command Configure Tasks, then select the type of task you'd like to run. This will generate a tasks.json file with content like the following. See the Tasks documentation for more details.
```javascript
{
  // See https://go.microsoft.com/fwlink/?LinkId=733558
  // for the documentation about the tasks.json format
  "version": "2.0.0",
  "tasks": [
    {
      "type": "npm",
      "script": "install",
      "group": {
        "kind": "build",
        "isDefault": true
      }
    }
  ]
}
```
There are occasionally issues with auto generation. Check out the documentation for getting things to work properly.

Select Terminal from the top-level menu, run the command Run Task, and select the task you want to run. Terminate the running task by running the command Terminate Task.

You can define a keyboard shortcut for any task. From the Command Palette (`Ctrl+Shift+P`), select Preferences: Open Keyboard Shortcuts File, bind the desired shortcut to the workbench.action.tasks.runTask command, and define the Task as args.

For example, to bind `Ctrl+H` to the Run tests task, add the following:
```javascript
{
  "key": "ctrl+h",
  "command": "workbench.action.tasks.runTask",
  "args": "Run tests"
}
```

Run npm scripts
From the explorer you can open a script in the editor, run it as a task, and launch it with the node debugger (when the script defines a debug option like --inspect-brk). The default action on click is to open the script. To run a script on a single click, set npm.scriptExplorerAction to "run". Use the setting npm.exclude to exclude scripts in package.json files contained in particular folders.

With the setting npm.enableRunFromFolder, you can enable running npm scripts from the File Explorer's context menu for a folder. The setting enables the command Run NPM Script in Folder... when a folder is selected. The command shows a Quick Pick list of the npm scripts contained in this folder and you can select the script to be executed as a task.
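Collected in settings.json, the settings mentioned above might look like this minimal sketch (the npm.exclude glob value is an invented example):

```javascript
{
  "npm.scriptExplorerAction": "run",
  "npm.exclude": "**/examples/**",
  "npm.enableRunFromFolder": true
}
```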
### Portable mode

AVAP™ Dev Studio has a Portable mode which lets you keep settings and data in the same location as your installation, for example, on a USB drive.

### Insiders builds

The AVAP™ Dev Studio team uses the Insiders version to test the latest features and bug fixes of AVAP™ DS. You can also use the Insiders version by downloading it here.

* For Early Adopters - Insiders has the most recent code changes for users and extension authors to try out.
* Frequent Builds - New builds every day with the latest bug fixes and features.
* Side-by-side install - Insiders installs next to the Stable build allowing you to use either independently.
101OBeX offers different plans:

* Developer
* Startup
* Business
* Enterprise
The ideal plan to become familiar with 101OBeX and introduce yourself to the capabilities of the system. It provides complete access to APIs with a maximum allowance of 500 monthly transactions so you can start your project, with no membership cost*.

*There is no membership fee. Transactional costs, plug-ins and other services within the membership are not included. To exploit these services, it will be necessary to purchase a different plan.
Starting at $50 per month, you will have 2 project slots with one active project, and 5,000 monthly transactions to start your project.

Starting at $150 per month, you can have up to 5 projects and 2 pre-activated slots, along with 50,000 monthly transactions to launch your business at the highest level.

Geared towards corporations requiring special configurations. Membership activation is done through the sales team: sales@101obex.com.
The chosen subscription type (developer, startup, business or enterprise) determines the configuration of the set of available resources:

* Total project slots.
* Pre-activated projects.
* Maximum transactional volume.
* Monthly transactions.
* Storage.
* Support.
If payment is established monthly, charges will be made on the first day of each month for the total membership amount, plus contracted add-ons and plugins. For the first month, a prorated amount will be charged from the plan's start date to the end of the month. If payment is established annually, a full year of service will be charged, and renewal will occur the day after the plan expires.
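The first-month proration can be sketched numerically (the function, dates, and rounding here are illustrative assumptions, not the platform's billing code):

```javascript
// Prorate the first month's charge from the plan's start date to month end.
function proratedFirstMonth(monthlyPrice, startDate) {
  // Day 0 of the next month is the last day of the current month.
  const daysInMonth = new Date(startDate.getFullYear(), startDate.getMonth() + 1, 0).getDate();
  const remainingDays = daysInMonth - startDate.getDate() + 1; // the start day is billed
  return (monthlyPrice * remainingDays) / daysInMonth;
}

// A $50 plan started on the 16th of a 30-day month is billed for 15 of 30 days:
console.log(proratedFirstMonth(50, new Date(2024, 3, 16))); // 25
```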
101OBeX does not issue tax-exempt invoices, since this is not a possibility contemplated in the service. If any of the elements that make up a plan exceeds its limit, the service will stop being provided.

To prevent your projects from being left without service, 101OBeX offers the possibility of configuring alarms that will allow you to receive notifications based on limits for each category. Although these alarms are configurable, they have pre-established minimums to ensure that you are always informed.
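Such a limit-based alarm reduces to a threshold check; a rough sketch follows (the function name and percentages are invented, and the real alarms are configured in the dashboard, not in code):

```javascript
// Notify when usage reaches a configurable percentage of a category's limit.
function shouldNotify(used, limit, thresholdPercent) {
  return used / limit >= thresholdPercent / 100;
}

console.log(shouldNotify(450, 500, 80)); // true  (at 90% of 500 monthly transactions)
console.log(shouldNotify(300, 500, 80)); // false (only at 60%)
```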
The client always has the possibility of expanding the limits for each of the components that make up a membership through the purchase of add-ons or by upgrading their plan.

Clients can check their membership status in the dashboard at any time, along with plan configuration in the Subscription Plan section of the menu bar.
In the Settings section of the menu, an option is available to track transaction history linked to membership collections.

How to change the payment method

Payment methods can be changed from monthly to annual and vice versa at any time. At present the only form of payment is by credit card, but you can add new cards and change your payment method at any time.

You have the possibility to upgrade and downgrade your plan according to your needs.
### Where:

* status : Shows if the call has been successful (true) or not (false).
* codtran : Transaction code that identifies the executed operation.
* result : Contains information about the result of the service.
* user_id_registration : New user ID.
* longitud_otp : Length of the OTP associated with the operation.
* elapsed : Operation execution time.
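A success response carrying these fields might look like the following sketch (all values, and the nesting of the result object, are invented placeholders):

```javascript
{
  "status": true,
  "codtran": "0f6b2c...",
  "result": {
    "user_id_registration": 1234,
    "longitud_otp": 6
  },
  "elapsed": 0.127
}
```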
### Where:

* status : Shows if the call has been successful (true) or not (false).
* level : Error importance level.
* message : Error message.
* error : Unique error code.
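Correspondingly, an error response with these fields might look like this sketch (the values are invented placeholders):

```javascript
{
  "status": false,
  "level": "error",
  "message": "Country not found",
  "error": 404
}
```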
Error catalogue:

| Message | Cause |
|---|---|
| Email is required | The parameter enviar_email_confirmar has been sent, but the parameter email has not been informed nor attached an email address |
| The phone <prefix><phone> already exists | The account identified by phone already exists and is activated |
| Required parameter not provided: phone | An attempt to create an account without a phone was made |
| The value of the parameter phone has the wrong length, depending on the country indicated in country_code | An attempt was made to create a phone number with the wrong length |
| The value of the parameter prefix does not match with the country code indicated in country_code | An attempt was made to create an account with a prefix (!= country_code) other than the country prefix |
| The account <cuenta> is pending to sign the discharge | The account already exists in the system, but is inactive |
| The nick <nick> is already used | The account identified by nick exists and is active |
| Country not found | Controlled error in case the country code entered is wrong |
| We have found a problem and are working to fix it ... sorry for the inconvenience | Uncontrolled error |
| 500: Internal Server Error | In order not to provide service information, a 500 error is thrown if a required parameter is not reported |
| 500: Internal Server Error | An uncontrolled error occurred on the server |

Chart 2.a.2: List of exceptions thrown by the service Alta Usuario.
Business logic

This section details some particularities related to this service that it is advisable to take into account:

* If an invalid channel_id value is provided or none is provided, the default channel (Web) is set.
* If an email is associated with the new user, an activation email will be sent, even if their account is automatically activated.
* If, on the contrary, email is not provided, there is the possibility of activating the user directly by sending the activa parameter. If the user is not activated directly, an SMS is sent with activation instructions. The account will remain in an inactive state until it is activated; it will not be deleted from the system at any time.
* If the account is already active, trying to activate it again will get a 404 error. This error is forced from the system when no registration is found to sign.
* The PIN is generated and sent in the first activation SMS. If the user does not activate the account or does not enter the OTP correctly, the password generated initially is reused and it is not sent in subsequent activation messages.
* The user's nickname can be used for the identification process (login). If nick is not indicated during the registration process, it will take the value of the phone parameter. If the parameter affiliate_id is specified, its name will be used in the welcome SMS, instead of the default affiliate name (Pademobile).
This section details, for each box, all the information necessary to exploit the previously documented services.

There is a user who has an "AFFILIATE" profile and who will allow managing the community:

### Examples

Below are some examples of calls to the services described in this document:
This tool is designed to enable developers to work with the 101OBeX API. With this tool, developers can retrieve information about their API privileges, quotas, API Token, and more.

To begin, developers need to initialize their token using the 'init' parameter. This process involves authenticating through the Google OAuth API to obtain the API token, which is stored locally on their computer. Once the token is initialized, developers can use the 'info' parameter to access details about their API privileges, projects, teams, and access token. Finally, developers have the option to remove all downloaded information from their computer using the 'clean' parameter.
* https://github.com/101OBeXCorp/101obexcli/releases

Mac:

* https://github.com/101OBeXCorp/101obexcli/releases/download/prerelease-staging/101obexcli-macosx.zip

Linux:

* https://github.com/101OBeXCorp/101obexcli/releases/download/prerelease/101obexcli.-.linux.zip

Win32:

* https://github.com/101OBeXCorp/101obexcli/releases/download/prerelease/101obexcli-win32.zip
Add-ons are collections of attributes or features that can be added to your project. They allow for personalization, adaptation to your needs, and optimization of usage. You can activate add-ons in different processes throughout the acquisition of a plan or the life of a project. You can also find an Add-on chapter in the Settings section dedicated exclusively to the administration of these components.

Currently, the following Add-ons are available:
Allows you to add a new empty slot to later activate a project and start working with it. Plans have a defined limit for projects and active slots. This add-on allows expansion to the maximum permitted slots.

Expand the volume of monthly requests in your plan and manage the total set of requests for each of your projects. The volume of requests available in a plan can never exceed the maximum request capacity established in that plan.
Plans have a predefined storage capacity. For example, a Business plan has a maximum storage capacity of 2 TB and a default storage of 1 GB. This means that the storage can be increased from the default 1 GB to a maximum of 2 TB, but no more. If more storage is required, it will be necessary to upgrade the plan.

If your project or set of projects exceeds the maximum storage allowed for the plan you have, you will need to upgrade your plan.
Access to professional support through the 101OBeX platform's team of engineers.

We recommend reviewing the Pricing document for details about the pricing configuration of the entire Add-on catalog. If a project or node reaches the limit in any of its properties or configurations, requests will begin to be rejected. To prevent this situation from causing problems in your projects, 101OBeX is configured to support up to 10% more in each of the configurations during the next 24 hours from the moment any of the limits are exceeded. After this period, requests will begin to be rejected.
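The 10% overage window can be sketched as a check like the following (the function name, values, and exact grace semantics are illustrative assumptions, not the platform's enforcement code):

```javascript
// Serve a request normally, serve it under the 24-hour 10% grace window, or reject it.
function checkLimit(used, limit, hoursSinceLimitExceeded) {
  if (used < limit) return "served";
  const withinGrace = used < limit * 1.1 && hoursSinceLimitExceeded < 24;
  return withinGrace ? "served (grace)" : "rejected";
}

console.log(checkLimit(4800, 5000, 0));  // "served"
console.log(checkLimit(5200, 5000, 3));  // "served (grace)"
console.log(checkLimit(5200, 5000, 30)); // "rejected"
```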
To further prevent such scenarios, 101OBeX employs an alarm system. This system sends notifications when specific properties approach predefined thresholds, granting you control over your project's growth at all times.
The 101obexcli tool is available at https://github.com/101OBeXCorp/101obexcli.
Management Console:

| File | Procedure | URL |
|---|---|---|
| `admin.py` | actividad_usuarios | /ws/admin.py/actividad_usuarios |
| `admin.py` | actualizardatosusuarios | /ws/admin.py/actualizardatosusuarios |
| `admin.py` | administrar_solicitud_kyc | /ws/admin.py/administrar_solicitud_kyc |
| `admin.py` | afiliadoscontipo | /ws/admin.py/afiliadoscontipo |
| `admin.py` | altaadmin | /ws/admin.py/altaadmin |
| `admin.py` | altaafiliado | /ws/admin.py/altaafiliado |
| `admin.py` | cambiarcertificacion | /ws/admin.py/cambiarcertificacion |
| `admin.py` | cambiarperfilusuario | /ws/admin.py/cambiarperfilusuario |
| `admin.py` | certificarkyc | /ws/admin.py/certificarkyc |
| `admin.py` | confirmaringreso | /ws/admin.py/confirmaringreso |
| `admin.py` | cuadrodemando | /ws/admin.py/cuadrodemando |
| `admin.py` | datoscuenta | /ws/admin.py/datoscuenta |
| `admin.py` | editorperfiles | /ws/admin.py/editorperfiles |
| `admin.py` | histconfirmaciones | /ws/admin.py/histconfirmaciones |
| `admin.py` | histingresosta | /ws/admin.py/histingresosta |
| `admin.py` | informesadmin | /ws/admin.py/informesadmin |
| `admin.py` | ingresofondosta | /ws/admin.py/ingresofondosta |
| `admin.py` | listado_kyc | /ws/admin.py/listado_kyc |
| `admin.py` | operaciones | /ws/admin.py/operaciones |
| `admin.py` | prefondeo | /ws/admin.py/prefondeo |
| `admin.py` | revert | /ws/admin.py/revert |
| `admin.py` | revisarorigendefondos | /ws/admin.py/revisarorigendefondos |
| `admin.py` | revocar_kyc | /ws/admin.py/revocar_kyc |
| `admin.py` | saldota | /ws/admin.py/saldota |
| `admin.py` | saldousuarioafecha | /ws/admin.py/saldousuarioafecha |
| `admin.py` | setgetconfig | /ws/admin.py/setgetconfig |
| `admin.py` | transacciones | /ws/admin.py/transacciones |
| `admin.py` | usuariosconsaldo | /ws/admin.py/usuariosconsaldo |
| `afiliados.py` | comisionesafiliado | /ws/afiliados.py/comisionesafiliado |
| `afiliados.py` | consultatransacciones | /ws/afiliados.py/consultatransacciones |
| `afiliados.py` | dashboardafiliado | /ws/afiliados.py/dashboardafiliado |
| `afiliados.py` | devolucion | /ws/afiliados.py/devolucion |
| `afiliados.py` | resumencomisionesafiliado | /ws/afiliados.py/resumencomisionesafiliado |
| `clearing.py` | index | /ws/clearing.py/index |
| `liquidacion.py` | liquidacionafiliado | /ws/liquidacion.py/liquidacionafiliado |
| `divisas.py` | actualizar | /ws/divisas.py/actualizar |
| `listanegra.py` | listado | /ws/listanegra.py/listado |
| `listanegra.py` | poner | /ws/listanegra.py/poner |
| `listanegra.py` | quitar | /ws/listanegra.py/quitar |
| `impersonar.py` | enviodinero | /ws/impersonar.py/enviodinero |
| `comunidad.py` | altacomunidad | /ws/comunidad.py/altacomunidad |
| `bloqueos.py` | bloquear | /ws/bloqueos.py/bloquear |
| `bloqueos.py` | desbloquear | /ws/bloqueos.py/desbloquear |
| `bloqueos.py` | listado | /ws/bloqueos.py/listado |
| `util.py` | informes | /ws/util.py/informes |
| `util.py` | bancos_agregadorfinanciero | /ws/util.py/bancos_agregadorfinanciero |
* Commons:

| FILE | PROCEDURE | URL |
|---|---|---|
| `divisas.py` | listado | /ws/divisas.py/listado |
| `firma.py` | firmar | /ws/firma.py/firmar |
| `util.py` | get_caracteristicas | /ws/util.py/get_caracteristicas |
| `util.py` | provincias | /ws/util.py/provincias |
| `util.py` | paises | /ws/util.py/paises |
| `util.py` | perfiles | /ws/util.py/perfiles |
| `util.py` | operadores | /ws/util.py/operadores |
| `util.py` | afiliados | /ws/util.py/afiliados |
| `util.py` | get_importe_transaccion | /ws/util.py/get_importe_transaccion |
| `users.py` | login | /ws/users.py/login |
| `users.py` | logout | /ws/users.py/logout |
| `users.py` | loginonline | /ws/users.py/loginonline |
| `users.py` | logintpv | /ws/users.py/logintpv |
| `users.py` | checksession | /ws/users.py/checksession |
| `users.py` | compruebasesion | /ws/users.py/compruebasesion |
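Every entry in these tables follows the same URL scheme: the procedure is exposed at `/ws/<file>/<procedure>`. A minimal sketch of a URL builder, assuming a hypothetical base host (`example.com` is an illustration only, not a real endpoint):

```python
def ws_url(file: str, procedure: str, base: str = "https://example.com") -> str:
    """Build the URL for a web-service procedure.

    All procedures in the tables are served at /ws/<file>/<procedure>.
    """
    return f"{base}/ws/{file}/{procedure}"


# e.g. the login procedure from the table above:
login_url = ws_url("users.py", "login")
# -> https://example.com/ws/users.py/login
```

The request parameters each procedure expects are not listed here, so a client would still need the per-procedure documentation before calling these URLs.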
* Loyalty:

| FILE | PROCEDURE | URL |
|---|---|---|
| `donaciones.py` | depositotarjetaydonar | /ws/donaciones.py/depositotarjetaydonar |
| `donaciones.py` | donar | /ws/donaciones.py/donar |
| `donaciones.py` | donartarjeta | /ws/donaciones.py/donartarjeta |
| `donaciones.py` | get_caracteristica | /ws/donaciones.py/get_caracteristica |
| `programadepuntos.py` | actualizar | /ws/programadepuntos.py/actualizar |
| `programadepuntos.py` | crear | /ws/programadepuntos.py/crear |
| `programadepuntos.py` | datos | /ws/programadepuntos.py/datos |
| `programadepuntos.py` | listado | /ws/programadepuntos.py/listado |
| `programadepuntos.py` | listado_usuarios | /ws/programadepuntos.py/listado_usuarios |
| `movimientos.py` | canjear_puntos | /ws/movimientos.py/canjear_puntos |
* Checkout:

| FILE | PROCEDURE | URL |
|---|---|---|
| `granemisor.py` | listado | /ws/granemisor.py/listado |
| `granemisor.py` | transferencia | /ws/granemisor.py/transferencia |
| `pagodeservicios.py` | enviarticketemail | /ws/pagodeservicios.py/enviarticketemail |
| `pagodeservicios.py` | infoservicio | /ws/pagodeservicios.py/infoservicio |
| `pagodeservicios.py` | listaservicios | /ws/pagodeservicios.py/listaservicios |
| `pagodeservicios.py` | pagarservicio | /ws/pagodeservicios.py/pagarservicio |
| `pagodeservicios.py` | pagarserviciotarjeta | /ws/pagodeservicios.py/pagarserviciotarjeta |
| `pagoderecibosv2.py` | firmar | /ws/pagoderecibosv2.py/firmar |
| `pagoderecibosv2.py` | firmar_original | /ws/pagoderecibosv2.py/firmar_original |
| `pagoderecibosv2.py` | info | /ws/pagoderecibosv2.py/info |
| `pagoderecibosv2.py` | lista | /ws/pagoderecibosv2.py/lista |
| `pagoderecibosv2.py` | pagar | /ws/pagoderecibosv2.py/pagar |
| `pagodiferido.py` | pagodiferido | /ws/pagodiferido.py/pagodiferido |
| `util.py` | precios_servicio | /ws/util.py/precios_servicio |
| `pagomovil.py` | pagomovil | /ws/pagomovil.py/pagomovil |
| `tiempoaire.py` | recargar | /ws/tiempoaire.py/recargar |
* Wallet:

| FILE | PROCEDURE | URL |
|---|---|---|
| `origenesdefondos.py` | gestor_origenes_propios | /ws/origenesdefondos.py/gestor_origenes_propios |
| `cuentas.py` | saldo | /ws/cuentas.py/saldo |
| `movimientos.py` | actividad | /ws/movimientos.py/actividad |
| `movimientos.py` | listado | /ws/movimientos.py/listado |
* Notifications:

| FILE | PROCEDURE | URL |
|---|---|---|
| `movimientos.py` | enviarsms | /ws/movimientos.py/enviarsms |
| `sms.py` | procesarpeticion | /ws/sms.py/procesarpeticion |
| `sms.py` | tecnophone2_notificacion_envio | /ws/sms.py/tecnophone2_notificacion_envio |
| `notificaciones.py` | gestor_notificaciones | /ws/notificaciones.py/gestor_notificaciones |
| `notificaciones.py` | leer_notificaciones | /ws/notificaciones.py/leer_notificaciones |
| `notificaciones.py` | leer_uno | /ws/notificaciones.py/leer_uno |
| `notificaciones.py` | numero_no_leidos | /ws/notificaciones.py/numero_no_leidos |
| `alarmas.py` | crearalarma | /ws/alarmas.py/crearalarma |
| `alarmas.py` | desempaquetar | /ws/alarmas.py/desempaquetar |
| `push_notifications.py` | apn_dispositivo | /ws/push_notifications.py/apn_dispositivo |
| `push_notifications.py` | apn_dispositivos_con_app_id | /ws/push_notifications.py/apn_dispositivos_con_app_id |
| `push_notifications.py` | asociar_device_token | /ws/push_notifications.py/asociar_device_token |
| `push_notifications.py` | reiniciar_badges | /ws/push_notifications.py/reiniciar_badges |
* Onboarding:

| FILE | PROCEDURE | URL | NOTES |
|---|---|---|---|
| `cuentas.py` | alta | /ws/cuentas.py/alta | |
| `cuentas.py` | baja | /ws/cuentas.py/baja | |
| `cuentas.py` | parar | /ws/cuentas.py/parar | |
| `cuentas.py` | activar | /ws/cuentas.py/activar | |
| `users.py` | alta_cliente | /ws/users.py/alta_cliente | |
| `users.py` | certificarcuenta | /ws/users.py/certificarcuenta | |
| `users.py` | acreditar_nivel_kyc | /ws/users.py/acreditar_nivel_kyc | |
| `users.py` | alta_kyc | /ws/users.py/alta_kyc | |
| `users.py` | campos_alta_cliente | /ws/users.py/campos_alta_cliente | |
| `users.py` | reenviarotpalta | /ws/users.py/reenviarotpalta | |
| `seguridad_itf.py` | condiciones_legales | /ws/seguridad_itf.py/condiciones_legales | |
| `seguridad_itf.py` | preguntas_de_seguridad | /ws/seguridad_itf.py/preguntas_de_seguridad | |
| `netverify.py` | certificar | /ws/netverify.py/certificar | |
| `netverify.py` | certificarcertify | /ws/netverify.py/certificarcertify | |
| `netverify.py` | finalizar | /ws/netverify.py/finalizar | |
| `netverify.py` | listado | /ws/netverify.py/listado | |
| `netverify.py` | revocar | /ws/netverify.py/revocar | |
| `netverify.py` | solicitar | /ws/netverify.py/solicitar | |
| `users.py` | cambiodedatos | /ws/users.py/cambiodedatos | |
| `users.py` | cambioperfilcontrolado | /ws/users.py/cambioperfilcontrolado | |
| `users.py` | checknick | /ws/users.py/checknick | |
| `users.py` | data | /ws/users.py/data | |
| `users.py` | firmarconclaveprivada | /ws/users.py/firmarconclaveprivada | |
| `users.py` | get_photo | /ws/users.py/get_photo | |
| `users.py` | info_usuario | /ws/users.py/info_usuario | |
| `users.py` | restartpin | /ws/users.py/restartpin | |
| `users.py` | upload_photo | /ws/users.py/upload_photo | |
| `mls.py` | activar | /ws/mls.py/activar | |
| `carga_masiva.py` | usuarios_ctm | /ws/carga_masiva.py/usuarios_ctm | Bulk registration of CTM users |
* Remittance (Money movements):

| FILE | PROCEDURE | URL | NOTES |
|---|---|---|---|
| `movimientos.py` | anularcomprartarjeta | /ws/movimientos.py/anularcomprartarjeta | |
| `movimientos.py` | comprar | /ws/movimientos.py/comprar | |
| `movimientos.py` | comprartarjeta | /ws/movimientos.py/comprartarjeta | |
| `movimientos.py` | depositotarjeta | /ws/movimientos.py/depositotarjeta | |
| `movimientos.py` | depositotarjetaotracuenta | /ws/movimientos.py/depositotarjetaotracuenta | |
| `movimientos.py` | entreorigenes | /ws/movimientos.py/entreorigenes | |
| `movimientos.py` | enviar | /ws/movimientos.py/enviar | |
| `movimientos.py` | enviarhalcash | /ws/movimientos.py/enviarhalcash | |
| `movimientos.py` | enviosderegalo | /ws/movimientos.py/enviosderegalo | |
| `movimientos.py` | pedir | /ws/movimientos.py/pedir | |
| `movimientos.py` | recargar | /ws/movimientos.py/recargar | |
| `movimientos.py` | remesadirigida | /ws/movimientos.py/remesadirigida | |
| `movimientos.py` | repetirtransaccion | /ws/movimientos.py/repetirtransaccion | |
| `movimientos.py` | retirar | /ws/movimientos.py/retirar | |
| `movimientos.py` | retirarbanco | /ws/movimientos.py/retirarbanco | |
| `pademobile_prepago.py` | consultar_saldo_prepago | /ws/pademobile_prepago.py/consultar_saldo_prepago | |
| `pademobile_prepago.py` | ingresar_prepago | /ws/pademobile_prepago.py/ingresar_prepago | |
| `pademobile_prepago.py` | registrar_monedero_prepago | /ws/pademobile_prepago.py/registrar_monedero_prepago | |
| `pademobile_prepago.py` | retirar_prepago | /ws/pademobile_prepago.py/retirar_prepago | |
| `movimientos.py` | transferenciasmasivas | /ws/movimientos.py/transferenciasmasivas | |
| `util.py` | carga_masiva_ctm | /ws/util.py/carga_masiva_ctm | Bulk loading of balances for CTM users |
* ...:

| FILE | PROCEDURE | URL |
|---|---|---|
| `movimientos.py` | comprobartransaccion | /ws/movimientos.py/comprobartransaccion |
| `movimientos.py` | consultatransaccion | /ws/movimientos.py/consultatransaccion |
| `movimientos.py` | datos_transaccion | /ws/movimientos.py/datos_transaccion |
| `shake.py` | ejecutar | /ws/shake.py/ejecutar |
| `shake.py` | obtener | /ws/shake.py/obtener |
| `shakev2.py` | ejecutar | /ws/shakev2.py/ejecutar |
| `shakev2.py` | obtener | /ws/shakev2.py/obtener |
| `chat.py` | chat_operator | /ws/chat.py/chat_operator |
| `chat.py` | chat_user | /ws/chat.py/chat_user |
| `chat.py` | prebind | /ws/chat.py/prebind |
| `util.py` | logs | /ws/util.py/logs |
| `util.py` | template_informes | /ws/util.py/template_informes |
@ -0,0 +1,17 @@
* Management Console:

| FILE | PROCEDURE | URL | FILES |
|---|---|---|---|
| `admin.py` | actividad_usuarios | /ws/admin.py/actividad_usuarios | |
| `admin.py` | actualizardatosusuarios | /ws/admin.py/actualizardatosusuarios | |
| `admin.py` | administrar_solicitud_kyc | /ws/admin.py/administrar_solicitud_kyc | |
| `admin.py` | afiliadoscontipo | /ws/admin.py/afiliadoscontipo | |
| `admin.py` | altaadmin | /ws/admin.py/altaadmin | |
| `admin.py` | altaafiliado | /ws/admin.py/altaafiliado | |
| `admin.py` | cambiarcertificacion | /ws/admin.py/cambiarcertificacion | |
| `admin.py` | cambiarperfilusuario | /ws/admin.py/cambiarperfilusuario | |
| `admin.py` | certificarkyc | /ws/admin.py/certificarkyc | |
| `admin.py` | confirmaringreso | /ws/admin.py/confirmaringreso | |
| `admin.py` | cuadrodemando | /ws/admin.py/cuadrodemando | |
| `admin.py` | datoscuenta | /ws/admin.py/datoscuenta | |
| `admin.py` | editorperfiles | /ws/admin.py/editorperfiles | |
| `admin.py` | histconfirmaciones | /ws/admin.py/histconfirmaciones | |
| `admin.py` | histingresosta | /ws/admin.py/histingresosta | |
| `admin.py` | informesadmin | /ws/admin.py/informesadmin | |
| `admin.py` | ingresofondosta | /ws/admin.py/ingresofondosta | |
| `admin.py` | listado_kyc | /ws/admin.py/listado_kyc | |
| `admin.py` | operaciones | /ws/admin.py/operaciones | |
| `admin.py` | prefondeo | /ws/admin.py/prefondeo | |
| `admin.py` | revert | /ws/admin.py/revert | |
| `admin.py` | revisarorigendefondos | /ws/admin.py/revisarorigendefondos | |
| `admin.py` | revocar_kyc | /ws/admin.py/revocar_kyc | |
| `admin.py` | saldota | /ws/admin.py/saldota | |
| `admin.py` | saldousuarioafecha | /ws/admin.py/saldousuarioafecha | |
| `admin.py` | setgetconfig | /ws/admin.py/setgetconfig | |
| `admin.py` | transacciones | /ws/admin.py/transacciones | |
| `admin.py` | usuariosconsaldo | /ws/admin.py/usuariosconsaldo | |
| `afiliados.py` | comisionesafiliado | /ws/afiliados.py/comisionesafiliado | |
| `afiliados.py` | consultatransacciones | /ws/afiliados.py/consultatransacciones | |
| `afiliados.py` | dashboardafiliado | /ws/afiliados.py/dashboardafiliado | |
| `afiliados.py` | devolucion | /ws/afiliados.py/devolucion | |
| `afiliados.py` | resumencomisionesafiliado | /ws/afiliados.py/resumencomisionesafiliado | |
| `clearing.py` | index | /ws/clearing.py/index | |
| `liquidacion.py` | liquidacionafiliado | /ws/liquidacion.py/liquidacionafiliado | |
| `divisas.py` | actualizar | /ws/divisas.py/actualizar | |
| `listanegra.py` | listado | /ws/listanegra.py/listado | |
| `listanegra.py` | poner | /ws/listanegra.py/poner | |
| `listanegra.py` | quitar | /ws/listanegra.py/quitar | |
| `impersonar.py` | enviodinero | /ws/impersonar.py/enviodinero | |
| `comunidad.py` | altacomunidad | /ws/comunidad.py/altacomunidad | |
| `bloqueos.py` | bloquear | /ws/bloqueos.py/bloquear | |
| `bloqueos.py` | desbloquear | /ws/bloqueos.py/desbloquear | |
| `bloqueos.py` | listado | /ws/bloqueos.py/listado | |
| `util.py` | informes | /ws/util.py/informes | |
| `util.py` | bancos_agregadorfinanciero | /ws/util.py/bancos_agregadorfinanciero | |
* Tools:

| FILE | PROCEDURE | URL | FILED |
|---|---|---|---|
| `divisas.py` | listado | /ws/divisas.py/listado | X |
| `firma.py` | firmar | /ws/firma.py/firmar | X |
| `util.py` | get_caracteristicas | /ws/util.py/get_caracteristicas | |
| `util.py` | provincias | /ws/util.py/provincias | |
| `util.py` | paises | /ws/util.py/paises | |
| `util.py` | perfiles | /ws/util.py/perfiles | |
| `util.py` | operadores | /ws/util.py/operadores | |
| `util.py` | afiliados | /ws/util.py/afiliados | |
| `util.py` | get_importe_transaccion | /ws/util.py/get_importe_transaccion | |
| `users.py` | login | /ws/users.py/login | X `Accesos` |
| `users.py` | logout | /ws/users.py/logout | |
| `users.py` | loginonline | /ws/users.py/loginonline | |
| `users.py` | logintpv | /ws/users.py/logintpv | |
| `users.py` | checksession | /ws/users.py/checksession | |
| `users.py` | compruebasesion | /ws/users.py/compruebasesion | |
* Loyalty:

| FILE | PROCEDURE | URL | FILED |
|---|---|---|---|
| `donaciones.py` | depositotarjetaydonar | /ws/donaciones.py/depositotarjetaydonar | |
| `donaciones.py` | donar | /ws/donaciones.py/donar | |
| `donaciones.py` | donartarjeta | /ws/donaciones.py/donartarjeta | |
| `donaciones.py` | get_caracteristica | /ws/donaciones.py/get_caracteristica | |
| `programadepuntos.py` | actualizar | /ws/programadepuntos.py/actualizar | |
| `programadepuntos.py` | crear | /ws/programadepuntos.py/crear | |
| `programadepuntos.py` | datos | /ws/programadepuntos.py/datos | |
| `programadepuntos.py` | listado | /ws/programadepuntos.py/listado | |
| `programadepuntos.py` | listado_usuarios | /ws/programadepuntos.py/listado_usuarios | |
| `movimientos.py` | canjear_puntos | /ws/movimientos.py/canjear_puntos | |
* Checkout:

| FILE | PROCEDURE | URL | FILED | NOTES |
|---|---|---|---|---|
| `granemisor.py` | listado | /ws/granemisor.py/listado | | |
| `granemisor.py` | transferencia | /ws/granemisor.py/transferencia | | |
| `pagodeservicios.py` | enviarticketemail | /ws/pagodeservicios.py/enviarticketemail | | |
| `pagodeservicios.py` | infoservicio | /ws/pagodeservicios.py/infoservicio | X `Bills2` | |
| `pagodeservicios.py` | listaservicios | /ws/pagodeservicios.py/listaservicios | X `Bills2` | |
| `pagodeservicios.py` | pagarservicio | /ws/pagodeservicios.py/pagarservicio | X `Bills2` | |
| `pagodeservicios.py` | pagarserviciotarjeta | /ws/pagodeservicios.py/pagarserviciotarjeta | | |
| `pagoderecibosv2.py` | firmar | /ws/pagoderecibosv2.py/firmar | | |
| `pagoderecibosv2.py` | firmar_original | /ws/pagoderecibosv2.py/firmar_original | | |
| `pagoderecibosv2.py` | info | /ws/pagoderecibosv2.py/info | X `Bills2` | |
| `pagoderecibosv2.py` | lista | /ws/pagoderecibosv2.py/lista | X `Bills2` | |
| `pagoderecibosv2.py` | pagar | /ws/pagoderecibosv2.py/pagar | X `Bills2` | |
| `pagodiferido.py` | pagodiferido | /ws/pagodiferido.py/pagodiferido | | |
| `util.py` | precios_servicio | /ws/util.py/precios_servicio | | |
| `pagomovil.py` | pagomovil | /ws/pagomovil.py/pagomovil | | |
| `tiempoaire.py` | recargar | /ws/tiempoaire.py/recargar | | Executed through `pagodeservicios.py` |
* Wallet:

| FILE | PROCEDURE | URL | NOTES | FILED |
|---|---|---|---|---|
| `origenesdefondos.py` | gestor_origenes_propios | /ws/origenesdefondos.py/gestor_origenes_propios | Needs to be split into 7 separate endpoints | X `origenes_de_fondos` |
| `cuentas.py` | saldo | /ws/cuentas.py/saldo | | |
| `movimientos.py` | actividad | /ws/movimientos.py/actividad | | X |
| `movimientos.py` | listado | /ws/movimientos.py/listado | | X |
* Notifications:

| FILE | PROCEDURE | URL | FILED |
|---|---|---|---|
| `movimientos.py` | enviarsms | /ws/movimientos.py/enviarsms | |
| `sms.py` | procesarpeticion | /ws/sms.py/procesarpeticion | |
| `sms.py` | tecnophone2_notificacion_envio | /ws/sms.py/tecnophone2_notificacion_envio | |
| `notificaciones.py` | gestor_notificaciones | /ws/notificaciones.py/gestor_notificaciones | |
| `notificaciones.py` | leer_notificaciones | /ws/notificaciones.py/leer_notificaciones | |
| `notificaciones.py` | leer_uno | /ws/notificaciones.py/leer_uno | |
| `notificaciones.py` | numero_no_leidos | /ws/notificaciones.py/numero_no_leidos | |
| `alarmas.py` | crearalarma | /ws/alarmas.py/crearalarma | |
| `alarmas.py` | desempaquetar | /ws/alarmas.py/desempaquetar | |
| `push_notifications.py` | apn_dispositivo | /ws/push_notifications.py/apn_dispositivo | |
| `push_notifications.py` | apn_dispositivos_con_app_id | /ws/push_notifications.py/apn_dispositivos_con_app_id | |
| `push_notifications.py` | asociar_device_token | /ws/push_notifications.py/asociar_device_token | |
| `push_notifications.py` | reiniciar_badges | /ws/push_notifications.py/reiniciar_badges | |
* Onboarding:

| FILE | PROCEDURE | URL | NOTES | FILED |
|---|---|---|---|---|
| `cuentas.py` | alta | /ws/cuentas.py/alta | | X `alta_baja_modificacion` |
| `cuentas.py` | baja | /ws/cuentas.py/baja | | X `alta_baja_modificacion` |
| `cuentas.py` | parar | /ws/cuentas.py/parar | | |
| `cuentas.py` | activar | /ws/cuentas.py/activar | | |
| `users.py` | alta_cliente | /ws/users.py/alta_cliente | | |
| `users.py` | certificarcuenta | /ws/users.py/certificarcuenta | | |
| `users.py` | acreditar_nivel_kyc | /ws/users.py/acreditar_nivel_kyc | | |
| `users.py` | alta_kyc | /ws/users.py/alta_kyc | | |
| `users.py` | campos_alta_cliente | /ws/users.py/campos_alta_cliente | | |
| `users.py` | reenviarotpalta | /ws/users.py/reenviarotpalta | | |
| `seguridad_itf.py` | condiciones_legales | /ws/seguridad_itf.py/condiciones_legales | | |
| `seguridad_itf.py` | preguntas_de_seguridad | /ws/seguridad_itf.py/preguntas_de_seguridad | | |
| `netverify.py` | certificar | /ws/netverify.py/certificar | | |
| `netverify.py` | certificarcertify | /ws/netverify.py/certificarcertify | | |
| `netverify.py` | finalizar | /ws/netverify.py/finalizar | | |
| `netverify.py` | listado | /ws/netverify.py/listado | | |
| `netverify.py` | revocar | /ws/netverify.py/revocar | | |
| `netverify.py` | solicitar | /ws/netverify.py/solicitar | | |
| `users.py` | cambiodedatos | /ws/users.py/cambiodedatos | | X `alta_baja_modificacion` |
| `users.py` | cambioperfilcontrolado | /ws/users.py/cambioperfilcontrolado | | |
| `users.py` | checknick | /ws/users.py/checknick | | X |
| `users.py` | data | /ws/users.py/data | | |
| `users.py` | firmarconclaveprivada | /ws/users.py/firmarconclaveprivada | | |
| `users.py` | get_photo | /ws/users.py/get_photo | | |
| `users.py` | info_usuario | /ws/users.py/info_usuario | | |
| `users.py` | restartpin | /ws/users.py/restartpin | | |
| `users.py` | upload_photo | /ws/users.py/upload_photo | | |
| `mls.py` | activar | /ws/mls.py/activar | | |
| `carga_masiva.py` | usuarios_ctm | /ws/carga_masiva.py/usuarios_ctm | Bulk registration of CTM users | |
* Remittance (Money movements):

| FILE | PROCEDURE | URL | NOTES | FILED |
|---|---|---|---|---|
| `movimientos.py` | anularcomprartarjeta | /ws/movimientos.py/anularcomprartarjeta | | X `Interfaz Servicios Pagos` |
| `movimientos.py` | comprar | /ws/movimientos.py/comprar | | |
| `movimientos.py` | comprartarjeta | /ws/movimientos.py/comprartarjeta | | X `Interfaz Servicios Pagos` |
| `movimientos.py` | depositotarjeta | /ws/movimientos.py/depositotarjeta | | |
| `movimientos.py` | depositotarjetaotracuenta | /ws/movimientos.py/depositotarjetaotracuenta | | |
| `movimientos.py` | entreorigenes | /ws/movimientos.py/entreorigenes | | |
| `movimientos.py` | enviar | /ws/movimientos.py/enviar | | X |
| `movimientos.py` | enviarhalcash | /ws/movimientos.py/enviarhalcash | | |
| `movimientos.py` | enviosderegalo | /ws/movimientos.py/enviosderegalo | | |
| `movimientos.py` | pedir | /ws/movimientos.py/pedir | | X |
| `movimientos.py` | recargar | /ws/movimientos.py/recargar | | |
| `movimientos.py` | remesadirigida | /ws/movimientos.py/remesadirigida | | |
| `movimientos.py` | repetirtransaccion | /ws/movimientos.py/repetirtransaccion | The repeat flow for `tiempoaire.py` still needs review | X |
| `movimientos.py` | retirar | /ws/movimientos.py/retirar | | |
| `movimientos.py` | retirarbanco | /ws/movimientos.py/retirarbanco | | |
| `pademobile_prepago.py` | consultar_saldo_prepago | /ws/pademobile_prepago.py/consultar_saldo_prepago | | |
| `pademobile_prepago.py` | ingresar_prepago | /ws/pademobile_prepago.py/ingresar_prepago | | |
| `pademobile_prepago.py` | registrar_monedero_prepago | /ws/pademobile_prepago.py/registrar_monedero_prepago | | |
| `pademobile_prepago.py` | retirar_prepago | /ws/pademobile_prepago.py/retirar_prepago | | |
| `movimientos.py` | transferenciasmasivas | /ws/movimientos.py/transferenciasmasivas | | |
| `util.py` | carga_masiva_ctm | /ws/util.py/carga_masiva_ctm | Bulk loading of balances for CTM users | |
* ...:

| FILE | PROCEDURE | URL | FILED | NOTES |
|---|---|---|---|---|
| `movimientos.py` | comprobartransaccion | /ws/movimientos.py/comprobartransaccion | | |
| `movimientos.py` | consultatransaccion | /ws/movimientos.py/consultatransaccion | X | The URL given in `Interfaz Servicios Pagos` is wrong |
| `movimientos.py` | datos_transaccion | /ws/movimientos.py/datos_transaccion | | |
| `shake.py` | ejecutar | /ws/shake.py/ejecutar | | |
| `shake.py` | obtener | /ws/shake.py/obtener | | |
| `shakev2.py` | ejecutar | /ws/shakev2.py/ejecutar | | |
| `shakev2.py` | obtener | /ws/shakev2.py/obtener | | |
| `chat.py` | chat_operator | /ws/chat.py/chat_operator | | |
| `chat.py` | chat_user | /ws/chat.py/chat_user | | |
| `chat.py` | prebind | /ws/chat.py/prebind | | |
| `util.py` | logs | /ws/util.py/logs | | |
| `util.py` | template_informes | /ws/util.py/template_informes | | |
@ -0,0 +1,17 @@
* Management Console:

| FILE | PROCEDURE | URL |
|---|---|---|
| `admin.py` | actividad_usuarios | /ws/admin.py/actividad_usuarios |
| `admin.py` | actualizardatosusuarios | /ws/admin.py/actualizardatosusuarios |
| `admin.py` | administrar_solicitud_kyc | /ws/admin.py/administrar_solicitud_kyc |
| `admin.py` | afiliadoscontipo | /ws/admin.py/afiliadoscontipo |
| `admin.py` | altaadmin | /ws/admin.py/altaadmin |
| `admin.py` | altaafiliado | /ws/admin.py/altaafiliado |
| `admin.py` | cambiarcertificacion | /ws/admin.py/cambiarcertificacion |
| `admin.py` | cambiarperfilusuario | /ws/admin.py/cambiarperfilusuario |
| `admin.py` | certificarkyc | /ws/admin.py/certificarkyc |
| `admin.py` | confirmaringreso | /ws/admin.py/confirmaringreso |
| `admin.py` | cuadrodemando | /ws/admin.py/cuadrodemando |
| `admin.py` | datoscuenta | /ws/admin.py/datoscuenta |
| `admin.py` | editorperfiles | /ws/admin.py/editorperfiles |
| `admin.py` | histconfirmaciones | /ws/admin.py/histconfirmaciones |
| `admin.py` | histingresosta | /ws/admin.py/histingresosta |
| `admin.py` | informesadmin | /ws/admin.py/informesadmin |
| `admin.py` | ingresofondosta | /ws/admin.py/ingresofondosta |
| `admin.py` | listado_kyc | /ws/admin.py/listado_kyc |
| `admin.py` | operaciones | /ws/admin.py/operaciones |
| `admin.py` | prefondeo | /ws/admin.py/prefondeo |
| `admin.py` | revert | /ws/admin.py/revert |
| `admin.py` | revisarorigendefondos | /ws/admin.py/revisarorigendefondos |
| `admin.py` | revocar_kyc | /ws/admin.py/revocar_kyc |
| `admin.py` | saldota | /ws/admin.py/saldota |
| `admin.py` | saldousuarioafecha | /ws/admin.py/saldousuarioafecha |
| `admin.py` | setgetconfig | /ws/admin.py/setgetconfig |
| `admin.py` | transacciones | /ws/admin.py/transacciones |
| `admin.py` | usuariosconsaldo | /ws/admin.py/usuariosconsaldo |
| `afiliados.py` | comisionesafiliado | /ws/afiliados.py/comisionesafiliado |
| `afiliados.py` | consultatransacciones | /ws/afiliados.py/consultatransacciones |
| `afiliados.py` | dashboardafiliado | /ws/afiliados.py/dashboardafiliado |
| `afiliados.py` | devolucion | /ws/afiliados.py/devolucion |
| `afiliados.py` | resumencomisionesafiliado | /ws/afiliados.py/resumencomisionesafiliado |
| `clearing.py` | index | /ws/clearing.py/index |
| `liquidacion.py` | liquidacionafiliado | /ws/liquidacion.py/liquidacionafiliado |
| `divisas.py` | actualizar | /ws/divisas.py/actualizar |
| `listanegra.py` | listado | /ws/listanegra.py/listado |
| `listanegra.py` | poner | /ws/listanegra.py/poner |
| `listanegra.py` | quitar | /ws/listanegra.py/quitar |
| `impersonar.py` | enviodinero | /ws/impersonar.py/enviodinero |
| `comunidad.py` | altacomunidad | /ws/comunidad.py/altacomunidad |
| `bloqueos.py` | bloquear | /ws/bloqueos.py/bloquear |
| `bloqueos.py` | desbloquear | /ws/bloqueos.py/desbloquear |
| `bloqueos.py` | listado | /ws/bloqueos.py/listado |
| `util.py` | informes | /ws/util.py/informes |
| `util.py` | bancos_agregadorfinanciero | /ws/util.py/bancos_agregadorfinanciero |
* Tools:

| FILE | PROCEDURE | URL |
|---|---|---|
| `divisas.py` | listado | /ws/divisas.py/listado |
| `firma.py` | firmar | /ws/firma.py/firmar |
| `util.py` | get_caracteristicas | /ws/util.py/get_caracteristicas |
| `util.py` | provincias | /ws/util.py/provincias |
| `util.py` | paises | /ws/util.py/paises |
| `util.py` | perfiles | /ws/util.py/perfiles |
| `util.py` | operadores | /ws/util.py/operadores |
| `util.py` | afiliados | /ws/util.py/afiliados |
| `util.py` | get_importe_transaccion | /ws/util.py/get_importe_transaccion |
| `users.py` | login | /ws/users.py/login |
| `users.py` | logout | /ws/users.py/logout |
| `users.py` | loginonline | /ws/users.py/loginonline |
| `users.py` | logintpv | /ws/users.py/logintpv |
| `users.py` | checksession | /ws/users.py/checksession |
| `users.py` | compruebasesion | /ws/users.py/compruebasesion |
* Loyalty:

| FILE | PROCEDURE | URL |
|---|---|---|
| `donaciones.py` | depositotarjetaydonar | /ws/donaciones.py/depositotarjetaydonar |
| `donaciones.py` | donar | /ws/donaciones.py/donar |
| `donaciones.py` | donartarjeta | /ws/donaciones.py/donartarjeta |
| `donaciones.py` | get_caracteristica | /ws/donaciones.py/get_caracteristica |
| `programadepuntos.py` | actualizar | /ws/programadepuntos.py/actualizar |
| `programadepuntos.py` | crear | /ws/programadepuntos.py/crear |
| `programadepuntos.py` | datos | /ws/programadepuntos.py/datos |
| `programadepuntos.py` | listado | /ws/programadepuntos.py/listado |
| `programadepuntos.py` | listado_usuarios | /ws/programadepuntos.py/listado_usuarios |
| `movimientos.py` | canjear_puntos | /ws/movimientos.py/canjear_puntos |
* Checkout:

| FILE | PROCEDURE | URL |
|---|---|---|
| `granemisor.py` | listado | /ws/granemisor.py/listado |
| `granemisor.py` | transferencia | /ws/granemisor.py/transferencia |
| `pagodeservicios.py` | enviarticketemail | /ws/pagodeservicios.py/enviarticketemail |
| `pagodeservicios.py` | infoservicio | /ws/pagodeservicios.py/infoservicio |
| `pagodeservicios.py` | listaservicios | /ws/pagodeservicios.py/listaservicios |
| `pagodeservicios.py` | pagarservicio | /ws/pagodeservicios.py/pagarservicio |
| `pagodeservicios.py` | pagarserviciotarjeta | /ws/pagodeservicios.py/pagarserviciotarjeta |
| `pagoderecibosv2.py` | firmar | /ws/pagoderecibosv2.py/firmar |
| `pagoderecibosv2.py` | firmar_original | /ws/pagoderecibosv2.py/firmar_original |
| `pagoderecibosv2.py` | info | /ws/pagoderecibosv2.py/info |
| `pagoderecibosv2.py` | lista | /ws/pagoderecibosv2.py/lista |
| `pagoderecibosv2.py` | pagar | /ws/pagoderecibosv2.py/pagar |
| `pagodiferido.py` | pagodiferido | /ws/pagodiferido.py/pagodiferido |
| `util.py` | precios_servicio | /ws/util.py/precios_servicio |
| `pagomovil.py` | pagomovil | /ws/pagomovil.py/pagomovil |
| `tiempoaire.py` | recargar | /ws/tiempoaire.py/recargar |
* Wallet:

| FILE | PROCEDURE | URL | NOTES |
|---|---|---|---|
| `origenesdefondos.py` | gestor_origenes_propios | /ws/origenesdefondos.py/gestor_origenes_propios | Needs to be split into 7 separate endpoints |
| `cuentas.py` | saldo | /ws/cuentas.py/saldo | |
| `movimientos.py` | actividad | /ws/movimientos.py/actividad | |
| `movimientos.py` | listado | /ws/movimientos.py/listado | |
* Notifications:

| FILE | PROCEDURE | URL |
|---|---|---|
| `movimientos.py` | enviarsms | /ws/movimientos.py/enviarsms |
| `sms.py` | procesarpeticion | /ws/sms.py/procesarpeticion |
| `sms.py` | tecnophone2_notificacion_envio | /ws/sms.py/tecnophone2_notificacion_envio |
| `notificaciones.py` | gestor_notificaciones | /ws/notificaciones.py/gestor_notificaciones |
| `notificaciones.py` | leer_notificaciones | /ws/notificaciones.py/leer_notificaciones |
| `notificaciones.py` | leer_uno | /ws/notificaciones.py/leer_uno |
| `notificaciones.py` | numero_no_leidos | /ws/notificaciones.py/numero_no_leidos |
| `alarmas.py` | crearalarma | /ws/alarmas.py/crearalarma |
| `alarmas.py` | desempaquetar | /ws/alarmas.py/desempaquetar |
| `push_notifications.py` | apn_dispositivo | /ws/push_notifications.py/apn_dispositivo |
| `push_notifications.py` | apn_dispositivos_con_app_id | /ws/push_notifications.py/apn_dispositivos_con_app_id |
| `push_notifications.py` | asociar_device_token | /ws/push_notifications.py/asociar_device_token |
| `push_notifications.py` | reiniciar_badges | /ws/push_notifications.py/reiniciar_badges |
* Onboarding:

| FILE | PROCEDURE | URL | NOTES |
|---|---|---|---|
| `cuentas.py` | alta | /ws/cuentas.py/alta | |
| `cuentas.py` | baja | /ws/cuentas.py/baja | |
| `cuentas.py` | parar | /ws/cuentas.py/parar | |
| `cuentas.py` | activar | /ws/cuentas.py/activar | |
| `users.py` | alta_cliente | /ws/users.py/alta_cliente | |
| `users.py` | certificarcuenta | /ws/users.py/certificarcuenta | |
| `users.py` | acreditar_nivel_kyc | /ws/users.py/acreditar_nivel_kyc | |
| `users.py` | alta_kyc | /ws/users.py/alta_kyc | |
| `users.py` | campos_alta_cliente | /ws/users.py/campos_alta_cliente | |
| `users.py` | reenviarotpalta | /ws/users.py/reenviarotpalta | |
| `seguridad_itf.py` | condiciones_legales | /ws/seguridad_itf.py/condiciones_legales | |
| `seguridad_itf.py` | preguntas_de_seguridad | /ws/seguridad_itf.py/preguntas_de_seguridad | |
| `netverify.py` | certificar | /ws/netverify.py/certificar | |
| `netverify.py` | certificarcertify | /ws/netverify.py/certificarcertify | |
| `netverify.py` | finalizar | /ws/netverify.py/finalizar | |
| `netverify.py` | listado | /ws/netverify.py/listado | |
| `netverify.py` | revocar | /ws/netverify.py/revocar | |
| `netverify.py` | solicitar | /ws/netverify.py/solicitar | |
| `users.py` | cambiodedatos | /ws/users.py/cambiodedatos | |
| `users.py` | cambioperfilcontrolado | /ws/users.py/cambioperfilcontrolado | |
| `users.py` | checknick | /ws/users.py/checknick | |
| `users.py` | data | /ws/users.py/data | |
| `users.py` | firmarconclaveprivada | /ws/users.py/firmarconclaveprivada | |
| `users.py` | get_photo | /ws/users.py/get_photo | |
| `users.py` | info_usuario | /ws/users.py/info_usuario | |
| `users.py` | restartpin | /ws/users.py/restartpin | |
| `users.py` | upload_photo | /ws/users.py/upload_photo | |
| `mls.py` | activar | /ws/mls.py/activar | |
| `carga_masiva.py` | usuarios_ctm | /ws/carga_masiva.py/usuarios_ctm | Bulk registration of CTM users |
* Remittance (Money movements):

| FILE | PROCEDURE | URL | NOTES |
|---|---|---|---|
| `movimientos.py` | anularcomprartarjeta | /ws/movimientos.py/anularcomprartarjeta | |
| `movimientos.py` | comprar | /ws/movimientos.py/comprar | |
| `movimientos.py` | comprartarjeta | /ws/movimientos.py/comprartarjeta | |
| `movimientos.py` | depositotarjeta | /ws/movimientos.py/depositotarjeta | |
| `movimientos.py` | depositotarjetaotracuenta | /ws/movimientos.py/depositotarjetaotracuenta | |
| `movimientos.py` | entreorigenes | /ws/movimientos.py/entreorigenes | |
| `movimientos.py` | enviar | /ws/movimientos.py/enviar | |
| `movimientos.py` | enviarhalcash | /ws/movimientos.py/enviarhalcash | |
| `movimientos.py` | enviosderegalo | /ws/movimientos.py/enviosderegalo | |
| `movimientos.py` | pedir | /ws/movimientos.py/pedir | |
| `movimientos.py` | recargar | /ws/movimientos.py/recargar | |
| `movimientos.py` | remesadirigida | /ws/movimientos.py/remesadirigida | |
| `movimientos.py` | repetirtransaccion | /ws/movimientos.py/repetirtransaccion | |
| `movimientos.py` | retirar | /ws/movimientos.py/retirar | |
| `movimientos.py` | retirarbanco | /ws/movimientos.py/retirarbanco | |
| `pademobile_prepago.py` | consultar_saldo_prepago | /ws/pademobile_prepago.py/consultar_saldo_prepago | |
| `pademobile_prepago.py` | ingresar_prepago | /ws/pademobile_prepago.py/ingresar_prepago | |
| `pademobile_prepago.py` | registrar_monedero_prepago | /ws/pademobile_prepago.py/registrar_monedero_prepago | |
| `pademobile_prepago.py` | retirar_prepago | /ws/pademobile_prepago.py/retirar_prepago | |
| `movimientos.py` | transferenciasmasivas | /ws/movimientos.py/transferenciasmasivas | |
| `util.py` | carga_masiva_ctm | /ws/util.py/carga_masiva_ctm | Bulk loading of balances for CTM users |
* ...: (?)

| FILE | PROCEDURE | URL |
|---|---|---|
| `movimientos.py` | comprobartransaccion | /ws/movimientos.py/comprobartransaccion |
| `movimientos.py` | consultatransaccion | /ws/movimientos.py/consultatransaccion |
| `movimientos.py` | datos_transaccion | /ws/movimientos.py/datos_transaccion |
| `shake.py` | ejecutar | /ws/shake.py/ejecutar |
| `shake.py` | obtener | /ws/shake.py/obtener |
| `shakev2.py` | ejecutar | /ws/shakev2.py/ejecutar |
| `shakev2.py` | obtener | /ws/shakev2.py/obtener |
| `chat.py` | chat_operator | /ws/chat.py/chat_operator |
| `chat.py` | chat_user | /ws/chat.py/chat_user |
| `chat.py` | prebind | /ws/chat.py/prebind |
| `util.py` | logs | /ws/util.py/logs |
| `util.py` | template_informes | /ws/util.py/template_informes |