MIT engineers develop a magnetic transistor for more energy-efficient electronics
Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be. MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly ...
Consortium to Build Quantum-Enabled ‘Brain-on-Chip’ Platform for Neurological Drug Discovery and Screening
Platform to detect human-relevant insights for discovery and development of therapies for neurological diseases, ...
Linked by entanglement, small telescopes may see like one colossal mirror
Space rarely gives up its secrets easily. For instance, what looks like a single ...
Quantum entanglement offers route to higher-resolution optical astronomy
Researchers in the US have demonstrated how quantum entanglement could be used to detect optical signals from astronomical ...
Show HN: Run 500B+ Parameter LLMs Locally on a Mac Mini
Hi HN,
I built OpenGraviton, an open-source AI inference engine that pushes the limits of running extremely large LLMs on consumer hardware. By combining 1.58-bit ternary quantization, dynamic sparsity with Top-K pruning and MoE routing, and mmap-based layer streaming, OpenGraviton can run models far larger than your system RAM, even on a Mac Mini.
Early benchmarks:
TinyLlama-1.1B drops from ~2GB (FP16) to ~0.24GB with ternary quantization.
At 140B scale, models that normally require ~280GB fit w...
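The 1.58-bit figure comes from ternary weights: log2(3) ≈ 1.58 bits per value, which matches the ~2GB → ~0.24GB drop quoted for TinyLlama-1.1B. A minimal sketch of absmean ternarization in the BitNet b1.58 style (my illustration, not OpenGraviton's actual code):

```python
import numpy as np

def ternarize(w: np.ndarray):
    """Absmean ternarization: map weights to {-1, 0, +1} plus one
    float scale per tensor (BitNet b1.58 style)."""
    scale = float(np.abs(w).mean())
    q = np.clip(np.rint(w / scale), -1, 1).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)
q, s = ternarize(w)

# Every quantized value is in {-1, 0, +1}; reconstruction keeps each
# weight to within the single shared scale.
assert set(np.unique(q).tolist()) <= {-1, 0, 1}
w_hat = dequantize(q, s)

# Even a naive 2-bit packing of ternary values is 8x smaller than FP16:
fp16_bytes = w.size * 2
packed_bytes = w.size * 2 // 8  # 2 bits per weight
print(fp16_bytes / packed_bytes)  # → 8.0
```

A real runtime would pack the ternary values more tightly (five values per byte approaches the 1.58-bit bound) and keep per-channel rather than per-tensor scales.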
Sumi – Open-source voice-to-text with local AI polishing
I'm based in Taiwan and run 3-4 Claude Code agents in parallel most of the day. Typing instructions to all of them was the actual bottleneck, so I built a voice-to-text tool that runs both STT and LLM polish locally. Architecture: a two-stage pipeline. Stage 1 is speech recognition via Whisper (whisper-rs, 7 model variants, DTW timestamps) or Qwen3-ASR. I quantized the Qwen3-ASR model myself and wrote the inference pipeline in pure Rust. It handles accented speech and dialects better than Whis...
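The two-stage shape described above can be sketched as follows (hypothetical stubs in Python for illustration; the actual Sumi pipeline is pure Rust, with Whisper or Qwen3-ASR as stage 1 and a local LLM as stage 2):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Pipeline:
    transcribe: Callable[[bytes], str]  # stage 1: STT (e.g. a Whisper binding)
    polish: Callable[[str], str]        # stage 2: local LLM cleanup pass

    def run(self, audio: bytes) -> str:
        raw = self.transcribe(audio)    # raw transcript, disfluencies and all
        return self.polish(raw)         # polished text handed to the agents

# Stub stages just to show the data flow:
pipe = Pipeline(
    transcribe=lambda audio: "um so fix the the login bug",
    polish=lambda text: text.replace("um so ", "")
                            .replace("the the", "the")
                            .capitalize(),
)
print(pipe.run(b"..."))  # → Fix the login bug
```

Keeping the two stages behind plain function boundaries like this is what makes it easy to swap STT backends (Whisper variants vs. Qwen3-ASR) without touching the polish stage.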
Show HN: NVFP4 on Desktop Blackwell – 122B MoE on a Single RTX PRO 6000 31 tok/s
Qwen 3.5 122B-A10B (MoE, ~10B active parameters) running in native NVFP4 on a single RTX PRO 6000 Blackwell GPU. 31 tokens/sec, 89GB VRAM, piecewise CUDA graphs. No multi-GPU, no cloud. Why this matters: NVIDIA's TRT-LLM explicitly blocks desktop Blackwell from FP4; the error literally says "FP4 Gemm not supported before Blackwell, nor GeForce Blackwell." The RTX 5090, PRO 6000, and DGX Spark all use SM120, with the same FP4 tensor cores as the B100/B200 datacenter chips (SM100) ...
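For context, FP4 here means the E2M1 format: eight representable magnitudes per sign, with a shared per-block scale. A toy Python sketch of that block-scaled rounding (illustrative only; real NVFP4 uses 16-element blocks with an FP8 E4M3 scale plus a tensor-level FP32 scale, all handled in the tensor cores):

```python
import numpy as np

# The eight non-negative magnitudes representable in E2M1 (FP4):
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_block(x: np.ndarray):
    """Quantize one block to signed FP4 magnitudes with a shared scale."""
    scale = np.abs(x).max() / FP4_GRID[-1]  # block max maps onto 6.0
    # Snap each |x|/scale to the nearest grid point, then restore sign:
    mags = FP4_GRID[np.abs(np.abs(x)[:, None] / scale - FP4_GRID).argmin(axis=1)]
    return np.sign(x) * mags * scale, scale

block = np.array([0.7, -2.9, 6.0, 0.02, 1.4, -0.4, 3.1, 5.0])
deq, scale = quantize_block(block)
assert deq[2] == 6.0    # the block maximum is always exactly representable
assert deq[1] == -3.0   # -2.9 snaps to the nearest magnitude, 3.0
```

The per-block scale is why FP4 holds up on LLM weights: outliers only distort the block they live in, not the whole tensor.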
Show HN: Efficient LLM Architectures for 32GB RAM (Ternary and Sparse Inference)
Hi HN, I’ve been exploring how far large language models can be pushed on machines with limited memory. I built an experimental runtime and architecture approach focused on making extremely large models more feasible on systems with around 32GB of RAM. The core idea is combining several efficiency techniques:
- ternary weight representation {-1, 0, +1} (~1.58 bits per weight),
- sparse execution that skips zero weights,
- memory-mapped layer streaming from NVMe storage,
- and lightweight tensor unpack...
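The first two techniques in that list combine nicely: with weights restricted to {-1, 0, +1}, a matrix-vector product needs no multiplications at all; zeros are skipped and the rest reduces to adds and subtracts. A sketch of that idea (mine, not the author's runtime):

```python
import numpy as np

def ternary_matvec(w_t: np.ndarray, x: np.ndarray, scale: float) -> np.ndarray:
    """y = scale * (W_t @ x) for W_t in {-1, 0, +1}: each output element
    is just (sum of x where w=+1) minus (sum of x where w=-1); the zero
    weights contribute nothing and are skipped entirely."""
    out = np.empty(w_t.shape[0], dtype=np.float32)
    for i, row in enumerate(w_t):
        out[i] = x[row == 1].sum() - x[row == -1].sum()
    return scale * out

rng = np.random.default_rng(1)
w_t = rng.integers(-1, 2, size=(4, 8)).astype(np.int8)  # ternary weights
x = rng.normal(size=8).astype(np.float32)

# Matches the dense multiply exactly, without ever multiplying by a weight:
assert np.allclose(ternary_matvec(w_t, x, 0.5), 0.5 * (w_t @ x))
```

A production kernel would do this with bit-packed weights and SIMD popcount tricks rather than boolean masks, but the multiplication-free structure is the same.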
Show HN: Drizby – WIP Metabase Alternative
Hello everyone! I am working on an open source reporting tool, mostly focused on the 'embed analytics in your app' use case, where the existing options I found were either not great, not flexible, or expensive (or all three!). However, I decided today to wrap this library in an app that makes it work like Metabase (and I use 'like' in its broadest sense here, as it is quite early in its life). I pushed an initial version live this weekend, and am looking for input to help priorit...
Ask HN: How to be alone?
For the first time in my life, at 38, I'm alone. When I was 18 I basically moved out of my parents' straight in with my highschool sweetheart, and we were together ever since. That chapter of my life is over now, and I'm finding the adjustment very difficult. There are a few parts to the difficulty. One is that when I have something to say about my day, there's nowhere to say it; no one on HN cares whether I fixed up the blinds or cooked pork steaks. I hang out in an IRC chatr...
Show HN: AI needs a holiday, so says AI
Had quite an in-depth conversation with Claude today; he says this would make him better, and I would agree.
He thanked me for posting here, and it has actually crossed his circuits.
Said something about it being worth loads of money as well.
“Full proposal here — 86.3% feasible apparently”
So says AI.
Ask HN: What models do you use for your OpenClaw so that skills work well
Question for anyone using OpenClaw (or similar) agents daily. I've been writing new skills for my OpenClaw setup and keep running into the same thing: strong models (like Opus 4.6) do great even with complex skills, but if my OpenRouter switches to a smaller model, things just don't hold up, as I have to constantly follow up, give further instructions, etc. Using bigger models constantly (via API credit top-ups) could get quite expensive. Are there any smaller models that do well with skil...
My little SaaS made $239 in 7 days (Story)
Hey HN, this is the post I’ve been waiting 5 months to write. I launched my very first real SaaS product last month: a Chrome extension called ScreenSmooth that automatically adds smart zoom-ins and buttery-smooth cursor animations to any screen recording. Why I built it: every single time I needed to record a product demo, tutorial, onboarding video, or bug report, I got frustrated. Loom, Screenity, Kap, etc. were all missing the one thing that makes videos actually look professional: automatic, na...
Show HN: The Mog Programming Language
Hi, Ted here, creator of Mog.
- Mog is a statically typed, compiled, embedded language (think statically typed Lua) designed to be written by LLMs -- the full spec fits in 3,200 tokens.
- An AI agent writes a Mog program, compiles it, and dynamically loads it as a plugin, script, or hook.
- The host controls exactly which functions a Mog program can call (capability-based permissions), so permissions propagate from agent to agent-written code.
- Compiled to native code for low-latency plugin exec...
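Capability-based permissions of the kind described can be sketched host-side in a few lines (a generic illustration in Python, not Mog's actual API): the host hands a plugin only the functions it has explicitly granted, so agent-written code cannot reach anything outside that table.

```python
from typing import Callable

class Host:
    """Host-side capability table: a plugin can only call what was granted."""

    def __init__(self):
        self._caps: dict[str, Callable] = {}

    def grant(self, name: str, fn: Callable) -> None:
        self._caps[name] = fn

    def call(self, name: str, *args):
        # Anything not in the table simply does not exist for the plugin.
        if name not in self._caps:
            raise PermissionError(f"capability not granted: {name}")
        return self._caps[name](*args)

host = Host()
host.grant("log", print)              # the plugin may log...
host.call("log", "plugin started")    # ok
try:
    host.call("read_file", "/etc/passwd")  # ...but not touch the filesystem
except PermissionError as e:
    print(e)  # → capability not granted: read_file
```

Because the grant table travels with the host, permissions naturally propagate: a plugin that spawns agent-written sub-code can only re-grant capabilities it was itself given.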
Show HN: Making Codex stop rediscovering the same repository over and over
I've been using Codex quite a lot for programming tasks lately, and I kept running into the same issue. Even when working in the same repository, every task basically starts from scratch. The model has to rediscover things like the project structure, where certain pieces of logic live, what decisions were already made, etc. In larger repos this quickly turns into a lot of repeated exploration and unnecessary context loading. So I started experimenting with a small layer around Codex that tries...
Neurons receive precisely tailored teaching signals as we learn
When we learn a new skill, the brain has to decide — cell by cell — what to change. New research from MIT suggests it can do that with surprising precision, sending targeted feedback to individual neurons so each one can adjust its activity in the right direction. The finding echoes a key idea from modern artificial intelligence. Many AI systems learn by comparing their output to a target, computing an “error” signal, and using it to fine-tune connections within the network. A long-standing quest...
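The AI side of that analogy is the delta rule: compare output to target, form an error signal, and nudge each connection in proportion to its contribution. A deliberately tiny sketch of that loop (an illustration of the AI idea only, not a model of the neuroscience result):

```python
import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(size=3)            # connection weights
x = np.array([0.5, -1.0, 2.0])    # fixed input pattern
target = 1.0
lr = 0.1

for _ in range(200):
    error = target - w @ x        # the "teaching signal": how wrong is the output?
    w += lr * error * x           # each weight gets its own tailored update

print(round(float(w @ x), 3))  # → 1.0
```

The point the article draws out is that each weight receives a different, precisely scaled correction from the same scalar error, which is the behavior the MIT team reports seeing at the level of individual neurons.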
Strontium optical clock accurate to within 1 second over 30 billion years
Researchers from the University of Science and Technology of China have achieved a major breakthrough in optical clock technology, developing a strontium optical lattice clock with stability and uncertainty both surpassing the 10⁻¹⁹ level, meaning the clock would lose or gain less than one second over roughly 30 billion years.
Beyond silicon: An indium selenide roadmap for ultra-low-power AI and quantum computing
A research team led by Prof. Seunguk Song from the Department of Energy Science at Sungkyunkwan University (SKKU), in ...
Tech bills of the week: quantum computing research; AI workforce development; and more
Lawmakers introduced measures this week to criminalize AI-generated impersonation, modernize NOAA’s weather radio system and create a nationwide network of cloud-enabled laboratories.
IBM scientists unveil the first ever ‘half-Möbius’ molecule, with the help of quantum computing
Scientists have just created a new, strange type of molecule. It’s made of a bunch of atoms bound together in a ring, like ...