Monday morning started with one of those emails that makes you sit up a little straighter. I'd been granted access to Claude Code. Finally, the long wait was over. It felt like Christmas — the kind of Christmas where you don’t unwrap socks, but potential superpowers. I signed up immediately, dropped in my card details like it was Monopoly money, and waited to see what this thing could do.
Here's what Anthropic say:
"Claude Code is an agentic coding tool that lives in your terminal, understands your codebase, and helps you code faster through natural language commands. By integrating directly with your development environment, Claude Code streamlines your workflow without requiring additional servers or complex setup."
Basically, it's a terminal with an LLM built in, plus functions that let the model perform actions in that terminal.
I already use Claude, and more recently Gemini, in Cursor almost daily. They're great coding companions — fast, informed, always polite — but this was something else. Claude Code hinted at more autonomy, more "do the thing" energy. So I decided to throw it into the deep end. I’d have it build something I’ve always wanted but never had time for: a voice assistant I could talk to while coding. A real Tony Stark–style AI pal. Not a gimmick. Something useful.
And yes, it had to be better than Siri.
I’ve been an Apple fan since forever, but Siri is... well, Siri is embarrassing at this point. Apple’s “local-first” AI stance is philosophically solid — privacy-respecting, long-term sustainable — but it’s slowed them to a crawl. They’re competing with models backed by 100x the compute. And yet, even within those constraints, they should be doing much better. Some of the smallest open-source LLMs already outperform Siri by a wide margin. It’s honestly baffling. I wanted to see if I could prove that — if I could build a smarter assistant on a ten-year-old PC, using only local tools and Claude Code.
So I dug out an old Xeon-powered tower, wiped Windows from its soul, and installed Ubuntu. It had 16GB of RAM and a creaky NVIDIA GTX 970 with 4GB of VRAM. Perfect. I installed Claude Code, Cursor, Chrome, Git. In the end product everything would run locally. No cloud. No cheating.
Claude's first instinct was to use Python. Understandable, but I have a complicated relationship with Python (I think it's pure cancer). It's great for prototyping, terrible for production and scale. I asked it to switch to C++. That also made for a better test: everyone will be using Python, and I wanted to stretch its legs a bit. I also directed it to use Ollama to run Gemma models locally, Whisper.cpp for audio transcription, and a lightweight TTS tool for speech. With that, we were off.
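To make that architecture concrete, here's a minimal sketch of the loop such an assistant runs. This is my own illustration, not the code Claude wrote: it assumes Ollama is serving a Gemma model on its default port (11434), whisper.cpp is built as `./main` with a base English model, `arecord` and `espeak` are installed, and libcurl plus nlohmann/json are available. The file names and model paths are placeholders.

```cpp
// Minimal voice-assistant loop: record -> transcribe -> think -> speak.
// Build (assumption): g++ assistant.cpp -lcurl -o assistant
#include <cstdlib>
#include <fstream>
#include <sstream>
#include <string>
#include <curl/curl.h>
#include <nlohmann/json.hpp>   // assumed available as a single-header dependency

// libcurl write callback: append the HTTP response body to a std::string.
static size_t write_cb(char* ptr, size_t size, size_t nmemb, void* out) {
    static_cast<std::string*>(out)->append(ptr, size * nmemb);
    return size * nmemb;
}

// Ask the local Ollama server for a completion from a Gemma model.
std::string ask_llm(const std::string& prompt) {
    nlohmann::json req = {{"model", "gemma:2b"}, {"prompt", prompt}, {"stream", false}};
    std::string payload = req.dump(), body;
    CURL* curl = curl_easy_init();
    curl_slist* hdrs = curl_slist_append(nullptr, "Content-Type: application/json");
    curl_easy_setopt(curl, CURLOPT_URL, "http://localhost:11434/api/generate");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, payload.c_str());
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
    curl_easy_perform(curl);
    curl_slist_free_all(hdrs);
    curl_easy_cleanup(curl);
    return nlohmann::json::parse(body)["response"].get<std::string>();
}

int main() {
    // 1. Record five seconds of 16 kHz mono audio (the format whisper.cpp expects).
    std::system("arecord -q -d 5 -f S16_LE -r 16000 -c 1 input.wav");

    // 2. Transcribe with the whisper.cpp CLI; -otxt writes the text to input.wav.txt.
    std::system("./main -m models/ggml-base.en.bin -f input.wav -otxt -nt");
    std::stringstream heard;
    heard << std::ifstream("input.wav.txt").rdbuf();

    // 3. Hand the transcript to the local LLM.
    std::string reply = ask_llm(heard.str());

    // 4. Speak the reply (no shell escaping here; fine for a sketch, not production).
    std::system(("espeak \"" + reply + "\"").c_str());
}
```

The real project loops continuously rather than running once, but the skeleton is the same: capture, transcribe, generate, speak.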
Claude Code didn’t just write some C++. It built an entire working structure: source files, install scripts, a README (which I hadn’t asked for), and a full build process. I figured it would break — and it did — but the structure was so coherent that I decided to let it fix its own mistake. I pasted in the compiler error, said nothing else, and it just... fixed it. It built. I ran the binary. It created some temp files and exited. Not ideal. So I gave it more feedback. After a few more rounds of simple prompts and pasted logs, something magical happened.
I was speaking to my computer. Not via an API. Not through the cloud. On a local machine, fully offline, running open-source models, with Claude Code orchestrating the whole thing.
The experience of working with it was strange at first. I’d fall into a sort of glazed-over state, letting it take over, just nudging it here and there when it got stuck. But as I settled into the rhythm, I realised something: it works best when you give it a process. So I built one. Define the task. Build. Test. Self-review. Fix. Commit. Document. Each new feature followed the same loop, and Claude handled it with surprising grace. Even when asked to commit to Git, it did so with clean messages and no drama.
Yes, it had weak spots. Claude sometimes lost track of the high-level goal, got distracted by unnecessary optimisations, or looped into weird folder structures. It's like it was viewing the world through a toilet roll tube: powerful, but with a narrow field of vision. Testing was another weak point. Unit tests were fine, but it struggled with high-level integration logic. When I tried to reduce latency by streaming audio to Whisper and then to the LLM, it needed constant guidance. But to be fair, that's hard, even for humans.
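For the curious, here's a toy sketch of the streaming pattern I was nudging it toward; again my own illustration, not code from the repo, with stand-in producer and consumer threads. In the real thing the consumer would call into Whisper and forward partial transcripts to the LLM.

```cpp
// Toy streaming pipeline: a mic thread pushes fixed-size audio chunks into a
// queue while a worker consumes them immediately, instead of waiting for the
// whole utterance before transcription starts.
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class ChunkQueue {
    std::queue<std::vector<float>> q_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
public:
    void push(std::vector<float> chunk) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(chunk)); }
        cv_.notify_one();
    }
    // Returns false once the producer has finished and the queue is drained.
    bool pop(std::vector<float>& out) {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&] { return !q_.empty() || done_; });
        if (q_.empty()) return false;
        out = std::move(q_.front());
        q_.pop();
        return true;
    }
    void finish() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_all();
    }
};

int main() {
    ChunkQueue queue;

    // Producer: stands in for the microphone callback, emitting ~0.5 s chunks.
    std::thread mic([&] {
        for (int i = 0; i < 6; ++i) {
            queue.push(std::vector<float>(8000, 0.0f)); // 8000 samples = 0.5 s at 16 kHz
            std::this_thread::sleep_for(std::chrono::milliseconds(500));
        }
        queue.finish();
    });

    // Consumer: stands in for whisper.cpp; each chunk would be transcribed here
    // and the partial text forwarded to the LLM as soon as it's ready.
    std::vector<float> chunk;
    while (queue.pop(chunk)) {
        std::cout << "transcribing " << chunk.size() << " samples...\n";
    }

    mic.join();
}
```

Getting chunk sizes, overlap, and end-of-speech detection right is the genuinely hard part; that's where Claude needed the hand-holding.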
And yet, the amount of working code it wrote — and the quality of the project it assembled — was staggering. Naming, structure, dependencies, onboarding scripts… it handled the full stack of engineering support, not just the code. That’s where it really shined. I wasn’t just getting a chatbot that wrote functions; I was getting an engineer who documented, scaffolded, packaged, and explained its work.
And the system it built! Even now, I still find myself asking questions just to hear the responses. Places to visit. The meaning of quantum physics. What it thinks of humans. You know, the usual stuff. Talking to an LLM hits different to just writing to it, even with a '90s-style Stephen Hawking speech synthesizer (I actually think that makes it better somehow).
Looking back, what started as a throwaway experiment turned into something more profound. I don’t think every project needs to scale beyond what Claude Code can do right now. For small tools and prototypes, it’s already transformative. And for bigger systems, it’s a phenomenal way to get started — fast.
Of course, there’s still fear in all of this. I’m a developer. This is what I do. And watching something do it faster, cleaner, and with more patience? That’s intimidating. But it’s also exhilarating. The same way compilers changed how we thought about machine code, LLMs are going to change how we write software. There’ll be a transition period where we still use human-readable code. But eventually? I think we’ll move to something else entirely — a new kind of machine language, running on new kinds of machines, built by systems that we won’t fully understand.
And maybe that’s okay.
Because what we gain is the ability to build things we never had time, money, or skill to do before. Things that once lived in our imagination can now live on a junk PC in a dusty corner of the room, speaking back to you like it always belonged there.
The age of magic is near. And if you ask me — it’s already begun.
Here's me in my office vibing with my new pal:
https://x.com/Depthperpixel/status/1909217404925751436
Take a look at the code yourself:
https://github.com/LeeMatthewHiggins/voice_assistant