Sheaf
vLLM solved inference for text LLMs — but the same serving gap remains open for every other class of foundation model: time series, tabular, molecular, and more. Sheaf fills it: a unified serving layer with standard batching contracts across model classes.
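As a rough sketch of what a "standard batching contract across model classes" could look like — the interface and names here are hypothetical illustrations, not Sheaf's actual API:

```typescript
// Hypothetical batching contract: any model class plugs in by stating
// its batch limit and how to collate and run a batch. Names are
// illustrative, not Sheaf's real interface.
interface BatchContract<Req, Res> {
  maxBatchSize: number;
  collate(requests: Req[]): Req[]; // merge requests into one batch
  run(batch: Req[]): Res[];        // one forward pass per batch
}

// Toy "time series" model: forecasts each series as its sum.
const toyForecaster: BatchContract<number[], number> = {
  maxBatchSize: 8,
  collate: (reqs) => reqs,
  run: (batch) => batch.map((series) => series.reduce((a, b) => a + b, 0)),
};

// Generic serving loop: drains a queue in contract-sized batches,
// without knowing anything about the model class behind the contract.
function serve<Req, Res>(c: BatchContract<Req, Res>, queue: Req[]): Res[] {
  const out: Res[] = [];
  while (queue.length > 0) {
    const batch = c.collate(queue.splice(0, c.maxBatchSize));
    out.push(...c.run(batch));
  }
  return out;
}
```

The point of the shape: the serving loop is written once, and each model class only supplies the contract.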
Things I've built — ML systems, startups, OSS tools, AI agents.
8 projects · 7 active
Keyword search misses posts that use different words for the same idea. The fix: embed all posts with nomic-embed-text-v1.5 on Modal at build time, then run cosine similarity in the browser at query time — no search index, no database.
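The query-time half can be sketched in a few lines — here with hard-coded toy 3-d vectors standing in for the real embeddings that nomic-embed-text-v1.5 produces at build time:

```typescript
// Posts ship with precomputed embedding vectors; search is plain
// cosine similarity in the browser — no index, no database.
type Post = { title: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank posts by similarity to an already-embedded query vector.
function search(queryVec: number[], posts: Post[]): Post[] {
  return [...posts].sort(
    (p, q) => cosine(queryVec, q.embedding) - cosine(queryVec, p.embedding)
  );
}
```

Since the post vectors are fixed at build time, the only per-query cost is one pass of dot products over a small array — fast enough that no server round-trip is needed.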
Paste any two snippets and see how a real embedding model (nomic-embed-text-v1.5 via Modal) and Claude's judgment compare — they often disagree, and the gap is where the interesting explanation lives.
Text-based trademark search misses visually conflicting marks — so I built one that doesn't. Uses CLIP and DINOv2 embeddings with pgvector to surface conflicts by appearance, not just name.
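The pgvector side of such a search reduces to a nearest-neighbor query over stored embeddings. A sketch, with hypothetical table and column names (`<=>` is pgvector's cosine-distance operator):

```typescript
// Build the conflict query: rank registered marks by cosine distance
// to a query image's CLIP/DINOv2 embedding. Table/column names are
// illustrative, not the project's actual schema.
function conflictQuery(table: string, limit: number): string {
  return (
    `SELECT mark_id, name, embedding <=> $1 AS distance ` +
    `FROM ${table} ORDER BY embedding <=> $1 LIMIT ${limit}`
  );
}

// Format a JS vector as a pgvector literal for the $1 parameter.
function toVectorLiteral(v: number[]): string {
  return `[${v.join(",")}]`;
}
```

Ordering by the distance operator lets pgvector use an approximate index (IVFFlat or HNSW) instead of scanning every mark.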
Most people interact with blockchains through abstractions. I wanted to understand Ethereum from the ground up — so I wrote an ERC-20 token in Solidity from scratch, no frameworks, deployed to mainnet, and documented it publicly.
Most /now pages go stale within weeks. This one stays current automatically — a nightly GitHub Actions cron triggers a Netlify rebuild that pulls live GitHub activity and Goodreads data at build time, no database needed.
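The trigger fits in a tiny workflow — this is a hedged sketch, assuming the Netlify build hook URL is stored as a repo secret (the secret name is hypothetical):

```yaml
# Nightly cron that POSTs to a Netlify build hook, so the /now page
# rebuilds with fresh GitHub and Goodreads data every night.
name: nightly-rebuild
on:
  schedule:
    - cron: "0 3 * * *"   # 03:00 UTC, daily
jobs:
  trigger:
    runs-on: ubuntu-latest
    steps:
      - name: Trigger Netlify build hook
        run: curl -s -X POST "${{ secrets.NETLIFY_BUILD_HOOK_URL }}"
```

All the data fetching happens inside the Netlify build itself, so the workflow's only job is to kick it off.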
I wanted visibility into what people actually read without handing data to a third party. Built a Netlify serverless function that captures page views into Supabase, with a Chart.js dashboard behind basic auth.
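The capture path can be sketched as follows — names and the table are hypothetical, and the request is only built here (not sent) so its shape is easy to inspect; Supabase exposes inserts over its PostgREST-based REST API:

```typescript
// What a pageview row might carry (illustrative fields).
type Pageview = { path: string; referrer: string; ts: string };

// Build the Supabase REST insert request. The table name "pageviews"
// and env-var names are assumptions, not the project's actual schema.
function buildInsert(supabaseUrl: string, key: string, row: Pageview) {
  return {
    url: `${supabaseUrl}/rest/v1/pageviews`,
    init: {
      method: "POST",
      headers: {
        apikey: key,
        Authorization: `Bearer ${key}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(row),
    },
  };
}

// Inside the Netlify function this would be roughly:
//   const { url, init } = buildInsert(process.env.SUPABASE_URL!, key, row);
//   await fetch(url, init);
```

Keeping the request construction pure makes the function trivial to test without a live Supabase instance.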
A home for writing and thinking in public — built to own the stack end to end. Astro + Netlify with self-hosted analytics, a living /now page, and no dependencies on third-party platforms.