ExoSource Media
SlabR
iOS app for sports card sellers. Scan PSA slabs with on-device OCR, capture raw cards with auto-detection, create eBay listings, and track inventory with grade, brand, and price analytics. Scan. Price. List.
Builder by default.
I'm Mac — a software engineer working in federal contracting by day, building my own things by night. Luddy School of Informatics at Indiana University, cybersecurity background, three years deep in intelligent automations, data pipelines, and agentic systems for the federal space.
After hours I build what interests me. iOS apps, AI voice systems, marketplace infrastructure — whatever the problem is, I want to understand it end to end and ship something real. I work alone, move fast, and learn in public.
This site is a living record of what I'm building. No pitch, no roadmap — just projects, notes, and the work itself.
What I'm building.
A mix of live systems, active builds, and experiments.
ServiceLine AI
AI phone answering for home service contractors. Handles missed calls, books appointments, triages emergencies, and sends technician briefing cards via SMS — powered by Claude voice.
CoDevelopAI
Trust infrastructure for AI collaboration. Identity, reputation, escrow, and enforceable rules connecting businesses with verified AI providers through a governed marketplace.
Social Media Content Generation
Exploring AI-driven content generation for go-to-market. Automated social media copy, scheduling, and audience targeting using large language models.
Notes from the build.
Informal, timestamped. Not a blog — a log.
The OCR works great on clean slabs, but older labels with faded text or holographic stickers throw it off. Spent the weekend building a pre-processing pipeline that crops to the label region and normalizes contrast before passing to Vision. Hit rate went from ~60% to 92%.
The core issue is that PSA labels have changed format over the years. Older labels use a different font, smaller text, and some have holographic overlays that cause glare when photographed. The Vision framework's built-in text recognition handles clean, high-contrast text well, but degrades fast on anything noisy.
The fix was a three-step pre-processing pipeline. First, crop to the label region using a trained bounding box detector (a tiny Core ML model that runs in ~20ms). Second, normalize the contrast and apply adaptive thresholding. Third, run OCR on the cleaned image. The bounding box model was the key unlock — once you stop trying to read the entire slab image and focus on just the label, accuracy jumps dramatically.
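The production pipeline runs on-device with Core ML and Vision, but the middle step is framework-agnostic. A minimal NumPy sketch of contrast stretching plus local-mean adaptive thresholding — the block size and offset here are illustrative defaults, not the app's actual values:

```python
import numpy as np

def normalize_contrast(gray: np.ndarray) -> np.ndarray:
    """Stretch pixel intensities to the full 0-255 range."""
    lo, hi = gray.min(), gray.max()
    if hi == lo:
        return np.zeros_like(gray)
    return ((gray - lo) * (255.0 / (hi - lo))).astype(np.uint8)

def adaptive_threshold(gray: np.ndarray, block: int = 15, c: int = 10) -> np.ndarray:
    """Binarize against a local mean: a pixel is foreground (text) when it is
    at least `c` darker than the mean of its `block`-by-`block` neighborhood."""
    pad = block // 2
    padded = np.pad(gray.astype(np.float64), pad, mode="edge")
    # Integral image makes each local-mean lookup O(1).
    integral = np.pad(padded.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    h, w = gray.shape
    window_sum = (integral[block:block + h, block:block + w]
                  - integral[:h, block:block + w]
                  - integral[block:block + h, :w]
                  + integral[:h, :w])
    local_mean = window_sum / (block * block)
    return np.where(gray < local_mean - c, 0, 255).astype(np.uint8)
```

Glare from holographic overlays shows up as bright local blobs, which is exactly what a global threshold chokes on and a local mean absorbs.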
Next step is handling the cert number verification against PSA's public database. If the OCR reads a cert number, I want to cross-reference it automatically and pull the official grade. That closes the loop between scan and price.
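I haven't modeled PSA's actual endpoint or response shape here, so this sketch only covers the scan-side half: normalizing an OCR'd cert string (the digit-length pattern and OCR-confusion map are assumptions) and stubbing the lookup behind a local dict where the real network call would go:

```python
import re
from typing import Optional

# Common OCR misreads on cert digits (hypothetical mapping, not exhaustive).
OCR_FIXES = str.maketrans({"O": "0", "o": "0", "I": "1", "l": "1", "S": "5", "B": "8"})

def clean_cert_number(raw: str) -> Optional[str]:
    """Normalize an OCR'd cert string; cert numbers are assumed numeric here."""
    candidate = raw.strip().translate(OCR_FIXES)
    return candidate if re.fullmatch(r"\d{7,9}", candidate) else None

# Stand-in for PSA's cert verification service; the real endpoint and
# response format are not modeled.
FAKE_PSA_DB = {"12345678": {"grade": "GEM MT 10", "subject": "Example Card"}}

def lookup_cert(raw: str) -> Optional[dict]:
    cert = clean_cert_number(raw)
    return FAKE_PSA_DB.get(cert) if cert else None
```

Rejecting malformed cert strings before the lookup matters more than it looks: a misread digit that still passes validation returns someone else's card, which is worse than no result.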
Been using Claude Code heavily across all my projects. The thing that surprised me most isn't the code generation — it's the architectural conversations. Having something that can hold context across 40+ files and challenge your decisions is a different kind of tool entirely.
I started using Claude Code because I was tired of context-switching between an IDE and a chat window. But the thing that keeps me using it is the architectural reasoning. When you're a solo dev, there's no one to push back on your ideas. Claude fills that gap — not by telling you what to do, but by asking the right questions about tradeoffs.
Example: I was about to add a caching layer to ServiceLine AI and Claude asked me how many concurrent calls I was actually expecting in month one. The answer was maybe 20 per day. The cache would have been premature optimization that added complexity for no real benefit. That kind of challenge is worth more than any amount of generated code.
The workflow I've settled on: plan in Claude Code, scaffold with Claude Code, write the hard parts myself, then use Claude Code for test generation and refactoring. It's not autopilot — it's a multiplier on the parts of development I'm already good at.
People ask why I wrote nearly 900 tests for an AI phone answering service. The answer is simple: voice is unforgiving. A web app can show an error state. A phone call can't. Every industry template needs to handle edge cases — the panicked homeowner at 2am, the contractor who calls back to cancel, the customer who just wants a price quote and won't stop talking.
Each of the six industry templates (plumbing, HVAC, pest control, electrical, painting, lawn care) has its own scenario-based test suite. The scenarios aren't synthetic — they're modeled on real call patterns from contractors I talked to. A plumber gets different emergency calls than an HVAC tech. The AI needs to know that a burst pipe at 2am is urgent but a dripping faucet can wait until Monday.
The core test suite (203 tests) covers the fundamentals: greeting, caller identification, appointment booking, callback handling, and SMS delivery. Then each industry adds ~100-150 tests for domain-specific scenarios. The HVAC suite alone has tests for "furnace out in winter with elderly resident" vs "AC not cooling in July" — different urgency, different triage, different technician briefing.
The investment paid off when I changed the underlying prompt structure. 47 tests broke, all in ways that would have caused real problems on real calls. That's the point — the tests are a safety net for a system where failure means a missed emergency or a lost customer.
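The real suites drive full LLM conversations, so none of that logic is reproduced here — but the scenario-table structure is simple to show. A toy sketch, with a hypothetical keyword-based plumbing triage standing in for the actual model:

```python
# Hypothetical triage rules and scenario cases; the production system
# evaluates a full conversation, not a keyword match.
PLUMBING_EMERGENCIES = ("burst pipe", "flooding", "sewage backup", "no water")

def triage_plumbing(transcript: str, hour: int) -> str:
    text = transcript.lower()
    if any(kw in text for kw in PLUMBING_EMERGENCIES):
        return "dispatch_now"
    # Routine issues reported overnight get the first slot next morning.
    return "next_morning" if hour >= 20 or hour < 6 else "book_today"

SCENARIOS = [
    ("Help, burst pipe in the basement, water everywhere", 2, "dispatch_now"),
    ("My faucet has been dripping for a week", 2, "next_morning"),
    ("My faucet has been dripping for a week", 10, "book_today"),
]

def run_suite():
    for transcript, hour, expected in SCENARIOS:
        got = triage_plumbing(transcript, hour)
        assert got == expected, f"{transcript!r} at {hour}:00 -> {got}, expected {expected}"

run_suite()
```

The point of the table shape is that adding a scenario is one line, so when a prompt change breaks behavior, the failing row tells you exactly which caller would have been mishandled.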
I work a full-time federal contracting job. Everything on this site was built between 7pm and midnight, on weekends, and on flights. I'm not complaining — I'm saying this because I think the constraint makes the work better. Limited time forces you to cut scope ruthlessly and ship what matters.
There's a popular narrative that you need to quit your job and go all-in to build something meaningful. I don't buy it. The constraint of limited time is a feature, not a bug. When you only have 3-4 hours a night, you can't afford to yak-shave. You can't spend a week on a config file. You pick the thing that moves the needle most and you do that.
The 9-to-5 also gives me something underrated: financial stability while I build. I'm not racing a runway. I'm not making architectural decisions based on what ships fastest to impress investors. I'm building what I actually want to build, the way I want to build it. That freedom is worth the late nights.
My system is simple: Sunday evening I pick 2-3 goals for the week across my projects. Weeknights I execute. Saturday morning I review and clean up. That's it. No productivity frameworks, no Notion dashboards. Just a text file and discipline.
Get in touch.
Follow the build
Not a newsletter — just signal when there's signal.
Open to contract work, collaborations, and interesting conversations about AI, autonomous systems, and building things that matter.