MINORIA TECHNOLOGY
The MINORIA architecture enables SLMs to deliver safe, empathetic, and clinically meaningful support for PTSD evaluation and triage, supplementing human care while minimizing the risks of bias and misinterpretation.
Core Components
- SLM Architecture
- PTSD-Specific Modules
- Ethical AI Guardrails

SLM Architecture: Core Technology
1. Training on Anonymized Clinical Interviews and Trauma Narratives
- The models are trained on large datasets of anonymized clinical interviews and synthetic trauma narratives, such as the TIDE dataset, which contains 10,000 dialogues across 500 diverse PTSD client personas [1].
- These datasets are reviewed by clinical psychologists to ensure trauma sensitivity and realism, embedding key PTSD language markers (e.g., avoidance, dissociation, hypervigilance) into the training data [3]; a data-preparation sketch is shown below.
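The following is a minimal, illustrative sketch in Python of how reviewer-approved dialogues could be assembled into fine-tuning examples. The JSONL file name and field names (persona_id, turns, markers, clinician_approved) are assumptions for illustration, not the published TIDE schema.

```python
# Sketch: assembling fine-tuning examples from anonymized dialogue exports.
# Field names below are hypothetical, not the actual TIDE schema.
import json
from dataclasses import dataclass
from typing import List

# PTSD language markers that clinical reviewers tag in the narratives.
PTSD_MARKERS = {"avoidance", "dissociation", "hypervigilance"}

@dataclass
class TrainingExample:
    persona_id: str
    text: str            # concatenated client turns from one dialogue
    markers: List[str]   # reviewer-tagged PTSD language markers

def load_examples(path: str) -> List[TrainingExample]:
    """Read anonymized dialogues and keep only reviewer-approved records."""
    examples = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if not record.get("clinician_approved", False):
                continue  # skip dialogues not yet reviewed for trauma sensitivity
            client_turns = [t["text"] for t in record["turns"] if t["speaker"] == "client"]
            markers = [m for m in record.get("markers", []) if m in PTSD_MARKERS]
            examples.append(TrainingExample(record["persona_id"], " ".join(client_turns), markers))
    return examples

if __name__ == "__main__":
    examples = load_examples("tide_dialogues.jsonl")  # hypothetical export file
    print(f"{len(examples)} approved dialogues loaded for fine-tuning")
```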
2. Adaptive Learning with Clinician Feedback
- The architecture integrates adaptive learning loops: after deployment, clinicians review model outputs and provide feedback, which is used to refine the model's predictions in subsequent updates [1]; a feedback-capture sketch is shown below.
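A minimal sketch of how clinician reviews might be captured and folded into the next update cycle. The record fields, file names, and accept/correct review format are assumptions for illustration, not MINORIA's actual feedback pipeline.

```python
# Sketch: capture clinician feedback on model outputs and turn corrected
# cases into supervision pairs for the next fine-tuning round.
import json
import time
from typing import List, Optional, Tuple

FEEDBACK_LOG = "clinician_feedback.jsonl"  # hypothetical log file

def record_feedback(case_id: str, input_text: str, model_output: str,
                    verdict: str, corrected_output: Optional[str] = None) -> None:
    """Append one clinician review (verdict: 'accepted' or 'corrected')."""
    entry = {
        "case_id": case_id,
        "timestamp": time.time(),
        "input_text": input_text,
        "model_output": model_output,
        "verdict": verdict,
        "corrected_output": corrected_output,  # supervision signal when corrected
    }
    with open(FEEDBACK_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

def build_update_set(path: str = FEEDBACK_LOG) -> List[Tuple[str, str]]:
    """Collect (input, corrected output) pairs for the next model update."""
    pairs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            r = json.loads(line)
            if r["verdict"] == "corrected" and r["corrected_output"]:
                pairs.append((r["input_text"], r["corrected_output"]))
    return pairs
```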

PTSD-Specific Modules
A. Symptom Detection
- SLMs are fine-tuned to identify DSM-5 PTSD criteria within patient narratives, including symptoms such as flashbacks, avoidance, and hyperarousal [6].
- This is achieved with transformer-based NLP models (e.g., BERT, RoBERTa) and domain-adapted language models such as LLaMA, which are particularly effective at recognizing nuanced symptom language in clinical transcripts [5]; a classification sketch is shown below.
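As an illustration, the sketch below frames symptom detection as multi-label classification over DSM-5 symptom clusters using a general-purpose RoBERTa checkpoint from Hugging Face. MINORIA's fine-tuned weights and label set are not public, so the checkpoint, labels, and threshold here are placeholders.

```python
# Sketch: multi-label DSM-5 symptom-cluster detection with a transformer.
# Uses an untrained classification head on roberta-base, so the outputs
# are not clinically meaningful without fine-tuning on labeled narratives.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

SYMPTOM_LABELS = ["intrusion/flashbacks", "avoidance", "negative_mood", "hyperarousal"]

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base",
    num_labels=len(SYMPTOM_LABELS),
    problem_type="multi_label_classification",  # independent sigmoid per label
)

def detect_symptoms(narrative: str, threshold: float = 0.5) -> dict:
    """Score one patient narrative against each symptom cluster."""
    inputs = tokenizer(narrative, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        probs = torch.sigmoid(model(**inputs).logits).squeeze(0)
    return {label: float(p) for label, p in zip(SYMPTOM_LABELS, probs) if p >= threshold}

# Example call (placeholder threshold; a deployed system would calibrate it
# against clinician-labeled transcripts):
print(detect_symptoms("I keep reliving the crash and avoid driving on the highway."))
```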
B. Risk Stratification

Ethical AI Guardrails
Transparent Decision Logs
- Every model decision and output is logged in a transparent, auditable format, allowing clinicians to review and verify the reasoning behind each prediction [3].
- This transparency is essential for clinical accountability and regulatory compliance, and it supports trust in AI-assisted care [3]; a logging sketch is shown below.
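A minimal sketch of what an auditable decision-log entry might look like. The field names, file format, and hashing choice are illustrative assumptions, not MINORIA's actual log schema.

```python
# Sketch: append-only, auditable decision log. The input is stored as a
# hash rather than raw text so the log itself carries no patient narrative.
import hashlib
import json
import time

def log_decision(narrative: str, prediction: dict, model_version: str,
                 rationale: str, log_path: str = "decision_log.jsonl") -> None:
    """Append one model decision with enough context for clinician audit."""
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model_version": model_version,
        "input_sha256": hashlib.sha256(narrative.encode("utf-8")).hexdigest(),
        "prediction": prediction,      # e.g. symptom scores or triage tier
        "rationale": rationale,        # explanation surfaced to the reviewer
        "reviewed_by": None,           # filled in when a clinician signs off
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```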
Bias Mitigation
- To prevent misdiagnosis, especially in minority populations, the SLMs undergo adversarial training and are evaluated against demographic-specific scenarios [1].
- The training data and model outputs are systematically audited to detect and correct biases, ensuring equitable performance across age, gender, and cultural groups [7]; a subgroup-audit sketch is shown below.
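The sketch below illustrates one way such an audit could be run: comparing detection recall across demographic subgroups on a labeled evaluation set. The evaluation file, field names, and choice of recall as the audited metric are assumptions for illustration.

```python
# Sketch: per-subgroup recall audit on a labeled evaluation set.
# A large gap between the best- and worst-served group flags a bias
# to investigate and correct before the next model release.
import json
from collections import defaultdict

def audit_by_group(eval_path: str, group_key: str = "age_group") -> dict:
    """Recall of PTSD-positive cases, broken down by a demographic field."""
    hits, totals = defaultdict(int), defaultdict(int)
    with open(eval_path, encoding="utf-8") as f:
        for line in f:
            r = json.loads(line)
            if not r["label_positive"]:
                continue  # recall is computed over true-positive cases only
            group = r[group_key]
            totals[group] += 1
            hits[group] += int(r["model_positive"])
    return {g: hits[g] / totals[g] for g in totals}

if __name__ == "__main__":
    # Hypothetical evaluation export; repeat for gender and cultural groups.
    print(audit_by_group("demographic_eval_set.jsonl", group_key="age_group"))
```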
