OrionAI · inference you control

OrionAIChat.Com

A refined chat front-end for your OpenAI-compatible stack—built for people who already own the hardware, the keys, and the curiosity.

What Orion is

Orion is your workspace for AI — a fast, clean interface built around the models and providers you already use. Whether that’s Ollama locally, private endpoints, or cloud models, Orion keeps everything in one place and under your control.

Switch models easily, stream responses in real time, tune inference settings, and focus on your workflow instead of terminals or APIs. Orion speaks the protocols your stack already understands while keeping the experience simple, responsive, and centered on your prompts and ideas.
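Because Orion speaks the OpenAI-compatible protocol, any client can talk to the same stack. A minimal sketch of that wire format — the request body and the streamed server-sent-event chunks — assuming a plain `/v1/chat/completions` endpoint (the model ID matches this page; the helper names are illustrative, not Orion's code):

```python
import json

# Build an OpenAI-compatible chat request body. Any compatible
# server (Ollama, vLLM, a private endpoint) accepts this shape.
def chat_request(model: str, prompt: str, stream: bool = True) -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,
    }

# Streaming replies arrive as server-sent events; each "data:" line
# carries a JSON chunk whose choices[0].delta.content is the next token.
def delta_text(sse_line: str) -> str:
    payload = sse_line.removeprefix("data: ").strip()
    if payload == "[DONE]":
        return ""
    chunk = json.loads(payload)
    return chunk["choices"][0]["delta"].get("content", "")

body = chat_request("vrilsoft-gen-ai", "Hello, Orion")
sample = 'data: {"choices":[{"delta":{"content":"Hi"}}]}'
print(delta_text(sample))  # → Hi
```

POSTing `body` to your endpoint's `/chat/completions` route and feeding each event line through `delta_text` is all a streaming client needs.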

What you can do

  • Open Talk — natural chat with streaming replies and a thread that feels like a product, not a playground.
  • Pick Vrilsoft Gen or Code — two curated modes on Talk for general work versus code-heavy questions.
  • Use tools, quietly — when the model asks for web_search or get_weather, Orion runs them server-side and hands clean results back—no API roulette in the browser.
  • Stay oriented — this page reflects what your deployment considers live right now, so you are never guessing which endpoint answered last.
  • API access for developers — a free API key lets developers build on OrionAI. Just sign in with your VrilOne account and visit VrilsoftApi.Com.
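The server-side tool flow above can be sketched as a small dispatcher: the model emits an OpenAI-style tool call, the server runs the named tool, and a clean `tool` message goes back into the thread. The handler bodies here are stubs for illustration only; Orion's real `web_search` and `get_weather` implementations live in its backend:

```python
import json

# Stub handlers standing in for Orion's real server-side tools.
def get_weather(city: str) -> dict:
    return {"city": city, "forecast": "unknown (stub)"}

def web_search(query: str) -> dict:
    return {"query": query, "results": []}

TOOLS = {"get_weather": get_weather, "web_search": web_search}

# Run one OpenAI-style tool call server-side and wrap the result
# as the "tool" message the model expects on the next turn.
def run_tool_call(tool_call: dict) -> dict:
    fn = tool_call["function"]
    result = TOOLS[fn["name"]](**json.loads(fn["arguments"]))
    return {
        "role": "tool",
        "tool_call_id": tool_call["id"],
        "content": json.dumps(result),
    }

call = {"id": "call_1",
        "function": {"name": "get_weather",
                     "arguments": '{"city": "Oslo"}'}}
print(run_tool_call(call)["role"])  # → tool
```

Keeping this loop on the server is what avoids "API roulette in the browser": keys and tool traffic never leave your deployment.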

Talk models

These are the models surfaced in Talk—the model IDs your inference layer must expose (for example, as Ollama tags).

Vrilsoft Gen

Default

General-purpose answers, explanations, drafting, and everyday reasoning—the mode Talk opens on.

vrilsoft-gen-ai

Vrilsoft Code

Optional

Toggle in Talk when you want sharper focus on programming, APIs, refactors, and technical nitty-gritty.

vrilsoft-code-ai

Live inference snapshot

  Active model: vrilsoft-gen-ai
  Inference endpoint: https://orionaichat.com:11434/v1
  Configured catalog: vrilsoft-gen-ai, vrilsoft-code-ai:latest, vrilsoft-gen-ai:latest

Display names reflect Orion’s public model lineup where aliases apply; routing details remain under your Orion inference configuration (database). The optional Inference:TalkEndpointDisplay setting replaces loopback URLs on this card for visitors, and Inference:ApiKey can override the outbound auth key from appsettings. Changing defaults requires the Admin role where policy permits.
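In an ASP.NET-style appsettings file, those two keys would sit under an Inference section. A hedged sketch only — the values are placeholders, and your deployment's actual section layout may differ:

```json
{
  "Inference": {
    "TalkEndpointDisplay": "https://orionaichat.com:11434/v1",
    "ApiKey": "your-api-key-here"
  }
}
```

Values set here can still be superseded by the database-backed inference configuration, per the note above.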

Open Talk