Our Tech Stack

A brief summary of our tech stack. We'll add more detail if anyone is interested - join our Discord and give us a shout.

Product

Frontend

SvelteKit : truly the greatest developer experience. It also produces incredibly optimized code.

Tailwind : love it or hate it, you gotta love it

Flowbite : one of the better component libraries for Svelte, though we may swap it out for Shadcn or our own design library

Mobile : for mobile deployment we use Ionic Capacitor because it's the only true write-once, run-anywhere solution.

Backend

Python : we chose Python for the developer experience and for its data science and LLM ecosystem

FastAPI : the best API framework currently available in Python

Beanie : an incredible developer experience for an ODM w/ native FastAPI support
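
To show how these fit together, here's a minimal sketch of a Beanie document served through a FastAPI route. The `Company` model, database name, connection string, and route are hypothetical, not Finn's actual schema.

```python
from contextlib import asynccontextmanager

from beanie import Document, init_beanie
from fastapi import FastAPI
from motor.motor_asyncio import AsyncIOMotorClient


class Company(Document):  # hypothetical entity, not Finn's actual schema
    name: str
    ticker: str


@asynccontextmanager
async def lifespan(app: FastAPI):
    # Beanie is initialised once at startup with a Motor client and the document models.
    client = AsyncIOMotorClient("mongodb://localhost:27017")  # assumed connection string
    await init_beanie(database=client["finn"], document_models=[Company])
    yield


app = FastAPI(lifespan=lifespan)


@app.get("/companies/{ticker}")
async def get_company(ticker: str) -> Company | None:
    # Beanie Documents are Pydantic models, so FastAPI can serialise them directly.
    return await Company.find_one(Company.ticker == ticker)
```

The nice part is that one Pydantic-style class doubles as the MongoDB schema and the API response model.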

Models

GPT-4o : our primary model at the moment is GPT-4o
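
As a rough sketch (the prompts are made up and this isn't Finn's actual pipeline), calling GPT-4o through the OpenAI Python SDK looks like this:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are Finn, a helpful assistant."},  # hypothetical prompt
        {"role": "user", "content": "Summarise this week's spending."},       # hypothetical prompt
    ],
)
print(response.choices[0].message.content)
```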

Fine-tuned model : we're working on fine-tuning our own model for greater performance and speed

An added benefit of fine-tuning our own model will be security: an essential part of going B2B will be letting customers run their own LLMs, ideally on hardware that's as cheap as possible.

Database

Redis : for user sessions
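
A minimal sketch of token-based session storage with redis-py. The key scheme, TTL, and connection settings are assumptions, not Finn's actual setup.

```python
import json
import secrets

import redis

# Assumed connection settings.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)


def create_session(user_id: str, ttl_seconds: int = 3600) -> str:
    """Store a session payload under a random token that expires automatically."""
    token = secrets.token_urlsafe(32)
    r.setex(f"session:{token}", ttl_seconds, json.dumps({"user_id": user_id}))
    return token


def get_session(token: str) -> dict | None:
    """Return the session payload, or None if the token is unknown or expired."""
    raw = r.get(f"session:{token}")
    return json.loads(raw) if raw else None
```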

MongoDB : we store most entities in MongoDB and leverage its vector search for semantic search
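
For the semantic-search part, here's a hedged sketch using MongoDB Atlas Vector Search via pymongo. The database, collection, index name, and embedding field are all hypothetical.

```python
from pymongo import MongoClient

# Hypothetical connection string, database, and collection.
client = MongoClient("mongodb+srv://user:pass@cluster.example.mongodb.net")
collection = client["finn"]["documents"]


def semantic_search(query_vector: list[float], limit: int = 5) -> list[dict]:
    """Run an Atlas Vector Search over pre-computed embeddings (requires an Atlas vector index)."""
    pipeline = [
        {
            "$vectorSearch": {
                "index": "embedding_index",   # assumed index name
                "path": "embedding",          # assumed field holding the vector
                "queryVector": query_vector,
                "numCandidates": 100,
                "limit": limit,
            }
        },
        {"$project": {"text": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]
    return list(collection.aggregate(pipeline))
```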

Google Cloud Storage : for blob storage
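
Blob uploads are a few lines with the google-cloud-storage client; the bucket and object names below are hypothetical, and credentials are assumed to come from the environment.

```python
from google.cloud import storage

# Credentials come from GOOGLE_APPLICATION_CREDENTIALS or workload identity.
client = storage.Client()
bucket = client.bucket("finn-uploads")         # hypothetical bucket
blob = bucket.blob("reports/2024-q1.pdf")      # hypothetical object key
blob.upload_from_filename("local/2024-q1.pdf")
```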

Deployment

Finn is deployed on a mixture of cloud and on-prem infrastructure to get the best of both:

1. Cloud: maximum availability for services that need to be fast and always reachable

2. On-prem: compute-heavy, non-time-sensitive jobs where running our own hardware saves money

Testing

Pytest : for backend testing
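
A minimal sketch of how a FastAPI route can be exercised with pytest and FastAPI's TestClient; the import path and route are hypothetical.

```python
from fastapi.testclient import TestClient

from app.main import app  # hypothetical import path for the FastAPI app

client = TestClient(app)


def test_health_endpoint():
    # Hypothetical route; substitute a real one from the API.
    response = client.get("/health")
    assert response.status_code == 200
```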

Playwright : for e2e testing
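
E2e tests for a SvelteKit app are often written in TypeScript, but to keep the examples in one language here is a sketch using Playwright's Python sync API. The URL and assertion are assumptions, not Finn's actual test suite.

```python
from playwright.sync_api import expect, sync_playwright


def run_smoke_test() -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("http://localhost:5173")  # assumed local SvelteKit dev server
        # Assert that some heading renders, i.e. the app booted at all.
        expect(page.get_by_role("heading").first).to_be_visible()
        browser.close()


if __name__ == "__main__":
    run_smoke_test()
```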

Monitoring

Langfuse : an open-source LLM tracing system; really good for monitoring costs, debugging errors, and optimizing prompts
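
The lightest way to get traces is Langfuse's drop-in OpenAI wrapper from its Python SDK, sketched below. It assumes LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, and LANGFUSE_HOST are set in the environment, and the prompt is made up.

```python
# Drop-in replacement for the OpenAI client: calls made through this module are
# traced automatically (model, tokens, latency, cost) and show up in Langfuse.
from langfuse.openai import openai

completion = openai.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],  # hypothetical prompt
)
print(completion.choices[0].message.content)
```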

LGTM stack : Loki for logs, Grafana for dashboards and visualization, Tempo for traces, and Mimir for metrics
