Show HN: N0x – LLM inference, agents, RAG, Python exec in browser, no back end https://ift.tt/pcYOLZC
Built this because I was tired of every AI tool shipping my data to someone else's server. n0x runs the full stack inside a single browser tab: LLM inference via WebGPU, autonomous ReAct agents, RAG over your own docs, and sandboxed Python execution via Pyodide. No account, no keys, no backend. Models download once and are cached permanently in IndexedDB. The biggest challenges were context-window budgeting for the agent loop and making the WASM vector search non-blocking. Happy to talk architecture.

GitHub: https://ift.tt/aPhqTlo | Live demo: https://n0x-three.vercel.app

https://n0xth.vercel.app/

March 17, 2026 at 10:44PM
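The context-window budgeting mentioned above could be sketched roughly like this (hypothetical code, not from the n0x repo; the `budgetMessages` helper and the ~4-characters-per-token estimate are assumptions — a real agent loop would count tokens with the model's own tokenizer):

```javascript
// Rough token estimate: ~4 characters per token (assumption for the sketch).
const approxTokens = (text) => Math.ceil(text.length / 4);

// Keep the system prompt plus as many of the most recent messages as fit
// the budget; older turns are dropped first.
function budgetMessages(messages, maxTokens) {
  const [system, ...rest] = messages;
  let used = approxTokens(system.content);
  const kept = [];
  // Walk newest-to-oldest so the most recent turns survive trimming.
  for (let i = rest.length - 1; i >= 0; i--) {
    const cost = approxTokens(rest[i].content);
    if (used + cost > maxTokens) break;
    used += cost;
    kept.unshift(rest[i]);
  }
  return [system, ...kept];
}
```

Each agent iteration would run the trimmed history through this before calling the in-browser model, so tool outputs from early turns fall away once the budget is tight.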
Reviewed by Technology World News on March 18, 2026