Transformer models unlocked extraordinary results despite heavy inefficiencies. Their success is a brute‑force achievement, yet the architecture itself creates unsustainable energy and processing demands.

No human rereads an 800‑page book to interpret the last word. We carry a compressed, evolving understanding. Native Transformers cannot do this. We are building an evolved AI architecture ecosystem. (It all started while trying to build a genre-busting video game.)

Advancing the State of AI

Narrative Intelligence


Find the thread...

The platform clusters evolving storylines using neurosymbolic reasoning to trace actor networks and predict narrative events. Unlike pattern-matching systems, our approach builds causal chains from domain-specific knowledge graphs, delivering explainable forecasts of influence operations and narrative trajectories.
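As a simplified illustration of the causal-chain idea, the sketch below enumerates bounded paths through a small directed knowledge graph using the open-source networkx library. The entities, relation labels, and hop limit are invented for illustration and are not the platform's actual schema.

```python
# Hypothetical sketch: tracing explainable causal chains through a
# domain knowledge graph. Entity and relation names are illustrative.
import networkx as nx

# Directed graph of actors, narratives, and events with labelled relations.
kg = nx.DiGraph()
kg.add_edge("actor:A", "narrative:N1", relation="amplifies")
kg.add_edge("actor:B", "narrative:N1", relation="originates")
kg.add_edge("narrative:N1", "event:E1", relation="precedes")

def causal_chains(graph, source, target, max_hops=4):
    """Enumerate bounded paths from source to target and return them
    as human-readable chains, one labelled relation per hop."""
    for path in nx.all_simple_paths(graph, source, target, cutoff=max_hops):
        yield [(u, graph.edges[u, v]["relation"], v)
               for u, v in zip(path, path[1:])]

for chain in causal_chains(kg, "actor:A", "event:E1"):
    hops = " -> ".join(f"{u} [{rel}]" for u, rel, _ in chain)
    print(f"{hops} -> {chain[-1][2]}")
```

Each yielded chain doubles as the explanation for the forecast it supports, which is the property that distinguishes this style of reasoning from opaque pattern matching.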

Efficient by Design


Post-transformer architecture...

TL;DR - a non-transformer-based architecture has several benefits

Quadratic self-attention. Ouch. O(n²) scaling is a fundamental limitation of transformers that creates hard walls for long contexts (a back-of-envelope cost sketch follows this list). Add to this a number of other issues:

  • Modern GPU inference is often memory-bound rather than compute-bound, especially for large models, so memory bandwidth rather than raw compute becomes the bottleneck.

  • The KV cache represents significant memory overhead that scales with sequence length and batch size, often exceeding the model parameters themselves in memory consumption for long contexts.

  • Computational cost is uniform regardless of token complexity. MoE models emerged partly to address this inefficiency through conditional computation, but they remain autoregressive at inference: tokens are generated one at a time, and each generation step requires attention over the entire context. MoE is effectively a "patch" on one component of a fundamentally flawed architecture.

  • Positional encodings are a kind of "hack": an additive bolt-on to compensate for the permutation-invariant nature of attention, rather than an intrinsic architectural feature.
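To make the scaling concrete, here is a back-of-envelope sketch of the two headline costs: attention FLOPs and KV-cache memory. The model dimensions (32 layers, d_model 4096, 32 KV heads of dim 128, fp16) are illustrative, roughly 7B-class, and not a claim about any specific system.

```python
# Back-of-envelope arithmetic for the costs listed above. Dimensions
# are illustrative, not a description of any particular model.

def attention_flops(seq_len, n_layers=32, d_model=4096):
    # Two matmuls per layer (QK^T scores, then weighted sum with V),
    # each ~2 * seq_len^2 * d_model FLOPs: the O(n^2) term.
    return n_layers * 2 * (2 * seq_len**2 * d_model)

def kv_cache_bytes(seq_len, batch=1, n_layers=32, n_kv_heads=32,
                   head_dim=128, bytes_per_elem=2):  # fp16
    # Keys and values cached for every layer, head, and position.
    return 2 * batch * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

for n in (4_096, 32_768, 131_072):
    print(f"{n:>7} tokens: "
          f"{attention_flops(n) / 1e12:8.1f} TFLOPs attention, "
          f"{kv_cache_bytes(n) / 2**30:6.1f} GiB KV cache")
```

At 131k tokens the cache alone reaches 64 GiB under these assumptions, several times the ~14 GiB a 7B-parameter fp16 model needs for its weights, which is the "exceeding the model parameters themselves" point above.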


Our research predates the 2017 publication of the transformer architecture, so we were already viewing the challenge of cognitive AI through a different lens, one we believe will significantly reduce operational and energy costs while improving performance.

Disinformation Detection


Expose sophisticated influence operations...

Traditional content moderation systems rely on reactive flagging of individual posts, leaving organizations vulnerable to coordinated campaigns that operate across platforms. Our neurosymbolic approach provides comprehensive influence operation detection that understands not just what is being said, but how coordinated actors work together to shape narratives.

Advanced Campaign Coordination Analysis:

  • Cross-platform network mapping: Trace influence networks across social media platforms, forums, and messaging apps by analyzing behavioural patterns, timing correlations, and content propagation paths

  • Multi-vector coordination signals: Identify synchronized hashtag campaigns, coordinated link sharing, and distributed narratives across seemingly unrelated accounts (a minimal sketch of one such signal follows this list)
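A minimal sketch of the coordinated link-sharing signal: distinct accounts pushing the same URL inside a narrow time window. The field names, window size, and threshold are assumptions, not the production detector.

```python
# Illustrative coordination signal: >= min_accounts distinct accounts
# sharing one URL within a single time window. Thresholds are invented.
from collections import defaultdict
from datetime import datetime, timedelta

posts = [
    {"account": "a1", "url": "http://example.com/x", "ts": datetime(2024, 1, 1, 12, 0)},
    {"account": "a2", "url": "http://example.com/x", "ts": datetime(2024, 1, 1, 12, 2)},
    {"account": "a3", "url": "http://example.com/x", "ts": datetime(2024, 1, 1, 12, 3)},
    {"account": "a4", "url": "http://example.com/y", "ts": datetime(2024, 1, 2, 9, 0)},
]

def coordinated_bursts(posts, window=timedelta(minutes=10), min_accounts=3):
    """Flag URLs shared by >= min_accounts distinct accounts inside one window."""
    by_url = defaultdict(list)
    for p in posts:
        by_url[p["url"]].append(p)
    for url, group in by_url.items():
        group.sort(key=lambda p: p["ts"])
        for first in group:
            burst = [p for p in group
                     if first["ts"] <= p["ts"] <= first["ts"] + window]
            accounts = {p["account"] for p in burst}
            if len(accounts) >= min_accounts:
                yield url, sorted(accounts), first["ts"]
                break  # report at most one burst per URL in this sketch

for url, accounts, start in coordinated_bursts(posts):
    print(f"burst on {url}: {accounts} starting {start}")
```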

Synthetic Media & AI-Generated Content Detection:

  • Deepfake and manipulated media identification: Detect AI-generated images, videos, and audio using advanced forensic analysis of compression artifacts, inconsistencies, and generation signatures

  • AI-written text detection: Identify bot-generated content, synthetic personas, and AI-assisted amplification through linguistic pattern analysis and stylometric fingerprinting (a toy feature extractor follows this list)
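The toy extractor below computes a few classic stylometric features (type-token ratio, unigram entropy, average sentence length) of the kind such fingerprinting might build on; a real detector combines many more signals, so treat this purely as a shape illustration.

```python
# Toy stylometric features. Low lexical variance is a crude proxy for
# template-driven or bot-generated text; purely illustrative.
import math
import re
from collections import Counter

def stylometric_features(text):
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    counts = Counter(words)
    total = len(words)
    ttr = len(counts) / total if total else 0.0
    entropy = -sum((c / total) * math.log2(c / total)
                   for c in counts.values()) if total else 0.0
    avg_sentence_len = total / len(sentences) if sentences else 0.0
    return {"type_token_ratio": ttr, "unigram_entropy": entropy,
            "avg_sentence_len": avg_sentence_len}

print(stylometric_features("The same phrase. The same phrase. The same phrase."))
```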

Attribution Confidence & Evidence Chain:

  • Probabilistic attribution scoring: Each detected influence operation receives confidence scores based on behavioural evidence, technical indicators, and pattern strength (a minimal scoring sketch follows this list)

  • Explainable AI reasoning: Full audit trails show exactly which signals triggered alerts, enabling analysts to validate findings and present court-ready evidence
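One plausible shape for probabilistic attribution scoring is combining independent evidence signals as log-likelihood ratios, which also yields the per-signal audit trail described above. The signal names, weights, and prior are invented for illustration.

```python
# Combine evidence as log10 likelihood ratios over a prior. Signal
# names and weights are assumptions, not the product's calibration.
import math

EVIDENCE_WEIGHTS = {  # log10 P(signal | coordinated) / P(signal | organic)
    "synchronized_posting": 1.2,
    "shared_infrastructure": 1.8,
    "reused_personas": 0.9,
    "stylometric_match": 0.6,
}

def attribution_confidence(observed_signals, prior_odds=0.01):
    """Return (posterior probability of a coordinated operation, audit trail)."""
    log_odds = math.log10(prior_odds)
    trail = []
    for sig in observed_signals:
        weight = EVIDENCE_WEIGHTS[sig]
        log_odds += weight
        trail.append((sig, weight))  # which signal moved the score, and by how much
    odds = 10 ** log_odds
    return odds / (1 + odds), trail

score, trail = attribution_confidence(["synchronized_posting", "shared_infrastructure"])
print(f"confidence={score:.2f}", trail)
```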

Enterprise Integration & Intelligence Sharing:

  • Native STIX/TAXII compatibility: Seamlessly integrates with existing threat intelligence platforms, custom feeds or in-house/on-premise data stores (a minimal STIX sketch follows)

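As a sketch of what native STIX output could look like, the snippet below emits a finding as a STIX 2.1 Indicator with the open-source stix2 reference library, so it can ride existing TAXII feeds. The pattern and labels are illustrative, not the product's output format.

```python
# Hedged sketch: packaging a finding as STIX 2.1 for TAXII exchange.
# The indicator contents below are invented for illustration.
from stix2 import Indicator, Bundle

indicator = Indicator(
    name="Coordinated amplification network (illustrative)",
    description="Accounts with shared infrastructure and synchronized posting times.",
    pattern="[url:value = 'http://example.com/x']",
    pattern_type="stix",
    labels=["influence-operation"],
)

# Bundle for handoff to a TAXII server or threat-intel platform.
print(Bundle(objects=[indicator]).serialize(pretty=True))
```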

Agentic Assistance


Autonomous agents with human oversight...

Deploy AI agents that autonomously discover, investigate, and correlate influence operations across data sources while providing complete audit trails of their decision-making process. Unlike black-box AI systems, every agent action includes symbolic reasoning chains that security analysts or investigators can validate, override, or learn from.


Continuous multi-source threat hunting with full reasoning transparency. Agents show exactly why they flagged specific patterns, which knowledge graph paths triggered alerts, and how they weighted conflicting evidence.
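A hypothetical shape for such an auditable decision record, with field names invented to show what "full reasoning transparency" could carry rather than to document the product schema:

```python
# Assumed structure of an agent decision with a replayable reasoning
# chain and a human-override hook. Requires Python 3.10+.
from dataclasses import dataclass, field

@dataclass
class ReasoningStep:
    rule: str            # symbolic rule or knowledge-graph path applied
    evidence: list[str]  # source items the step relied on
    weight: float        # contribution to the final decision

@dataclass
class AgentDecision:
    alert_id: str
    conclusion: str
    steps: list[ReasoningStep] = field(default_factory=list)
    analyst_override: str | None = None  # human-in-the-loop hook

decision = AgentDecision(
    alert_id="alrt-0001",
    conclusion="probable coordinated amplification",
    steps=[
        ReasoningStep("path: actor -> shared_host -> actor", ["feed:infra/42"], 1.8),
        ReasoningStep("rule: synchronized_posting(<10m)", ["posts:123", "posts:124"], 1.2),
    ],
)
# An analyst can replay the chain step by step, or set analyst_override.
for s in decision.steps:
    print(f"{s.weight:+.1f} {s.rule} <- {s.evidence}")
```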


This enables 24/7 threat detection: analyst-level reasoning quality at machine speed, while maintaining human oversight and building institutional knowledge.

Ethical AI


Transparency and accountability built-in...

Every threat detection decision includes full symbolic reasoning traces showing exactly which knowledge graph paths, evidence weights, and logical inferences led to each conclusion.

Real-time explainability:

  • Analysts can drill down into any alert to see the complete chain of reasoning, challenge individual inference steps, and understand how the system weighted conflicting evidence sources (a minimal re-scoring sketch follows this list).

  • This enables rapid government procurement approval and operational deployment in high-stakes environments where decision accountability is legally required.
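As a sketch of step-level challenge, the snippet below drops a disputed inference and recomputes the score from the remaining evidence weights, continuing the illustrative log-odds scheme from the attribution sketch above.

```python
# Drop a disputed step and re-score from the rest. Weights and the
# prior continue the invented log-odds scheme; not the real engine.
def rescore(steps, prior_log_odds=-2.0, disputed=()):
    log_odds = prior_log_odds + sum(w for name, w in steps if name not in disputed)
    odds = 10 ** log_odds
    return odds / (1 + odds)

steps = [("shared_infrastructure", 1.8), ("synchronized_posting", 1.2)]
original = rescore(steps)
challenged = rescore(steps, disputed={"synchronized_posting"})
print(f"original {original:.2f} -> challenged {challenged:.2f}")
```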


Privacy and data protection:

  • Only collecting/processing data necessary for threat detection, with different access tiers based on customer authorization level and legal authority (a minimal tier-check sketch follows this list)

  • Supporting authorized government and international law enforcement access to non-public data sources while maintaining strict audit trails and jurisdictional compliance

  • GDPR-compliant processing for commercial customers, with enhanced capabilities for qualified law enforcement and intelligence agencies operating under appropriate legal frameworks (court orders, national security authorities, international treaties)

  • Ensuring data processing location and retention policies align with customer legal authorities and cross-border data sharing agreements
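A minimal sketch of the tiered-access gate described in the first bullet; the tier names, dataset labels, and rules are assumptions showing the shape of the control, not actual policy.

```python
# Illustrative policy gate for tiered data access with a built-in
# audit trail. Tiers, datasets, and rules are invented.
from enum import IntEnum

class Tier(IntEnum):
    COMMERCIAL = 1        # GDPR-scoped, public-source data only
    LAW_ENFORCEMENT = 2   # court order on file
    INTELLIGENCE = 3      # national-security authority

DATASET_MIN_TIER = {
    "public_social": Tier.COMMERCIAL,
    "darkweb_scrapes": Tier.LAW_ENFORCEMENT,
    "classified_feed": Tier.INTELLIGENCE,
}

def authorize(customer_tier, dataset, audit_log):
    allowed = customer_tier >= DATASET_MIN_TIER[dataset]
    # Every decision, allow or deny, lands in the audit trail.
    audit_log.append((customer_tier.name, dataset, "ALLOW" if allowed else "DENY"))
    return allowed

audit = []
authorize(Tier.COMMERCIAL, "darkweb_scrapes", audit)        # denied
authorize(Tier.LAW_ENFORCEMENT, "darkweb_scrapes", audit)   # allowed
print(audit)
```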


Human-in-the-loop requirements:

  • Ensuring critical decisions always have human oversight

Data Ingest


Zero-trust gateway with data normalization...

Securely ingest and normalize intelligence from any source, such as social media APIs, dark web scrapers, classified government feeds, sensor networks or financial data. Data streams maintain strict security boundaries and data sovereignty.


Universal semantic normalization: Automatically converts disparate data formats (JSON, XML, STIX, custom feeds) into standardized knowledge graph entities as needed, while preserving source attribution and classification levels.
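A sketch of what per-format adapters onto a common entity schema might look like, with provenance carried alongside each entity; the formats handled and field names are illustrative.

```python
# Illustrative normalization: per-format adapters map raw records onto
# one entity dict while keeping source attribution and classification.
import json
import xml.etree.ElementTree as ET

def normalize(record, fmt, source, classification="UNCLASSIFIED"):
    """Map a raw record from a named format onto a common entity dict."""
    if fmt == "json":
        data = json.loads(record)
        entity = {"type": data.get("type", "unknown"), "value": data.get("value")}
    elif fmt == "xml":
        root = ET.fromstring(record)
        entity = {"type": root.tag, "value": root.text}
    else:
        raise ValueError(f"no adapter for format {fmt!r}")
    # Provenance travels with the entity into the knowledge graph.
    entity.update({"source": source, "classification": classification})
    return entity

print(normalize('{"type": "domain", "value": "example.com"}', "json", "feed:osint-1"))
print(normalize("<ip>203.0.113.7</ip>", "xml", "feed:sensor-9", "RESTRICTED"))
```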


Eliminate intelligence silos without compromising operational security: Analysts work with unified data views while maintaining compartmentalization and need-to-know access controls.


Secure deployment: Air-gapped processing nodes, sovereign cloud deployment, and region-locked data residency ensure compliance with national security requirements and international data protection laws.