BLEKit
Two iOS security apps sharing a common BLE scanning engine. Built for security professionals and people who need to know if they're being followed.
Bluetooth Low Energy is everywhere — AirTags, fitness trackers, smart locks, industrial sensors. Yet most mobile security tools treat Bluetooth as an afterthought. There is no serious BLE audit tool for iOS, and no counter-surveillance tool that answers the question "am I being followed?" for non-technical users.
These are two different markets with two different user profiles. A security auditor doing a site sweep needs information density, GATT enumeration, and structured reports. Someone fleeing a domestic violence situation needs a clear answer and captured evidence — not a technical readout.
But under the hood, both problems need the same engine: scan the BLE environment, identify devices, fingerprint them across MAC address rotations, and score what you find. The architecture decision was to build one engine and two apps.
Monorepo architecture: two app targets sharing a local Swift package. Changes to the shared layer are immediately available to both apps with no versioning ceremony.
Two apps, one engine
After the initial architecture sketch showed both scan modes sharing a base layer, I split the project into two separate apps with a shared Swift package. Security auditors and domestic violence survivors are completely different markets — one app with two modes would fight over navigation, theming, and App Store positioning.
The product decision drove the code architecture. A monorepo with a shared package is cleaner than one app with two personalities. Both apps get the same engine improvements automatically, but each owns its UX, its theme, and its App Store listing.
Tradeoff: more build complexity vs. cleaner product identity and independent App Store listings
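As a sketch, the shared-package layout could be expressed in a manifest like the following — the package, product, and target names here are illustrative, not the project's actual identifiers:

```swift
// swift-tools-version:5.9
// Hypothetical Package.swift for the shared engine package.
import PackageDescription

let package = Package(
    name: "BLEKitCore",                    // illustrative name for the shared engine
    platforms: [.iOS(.v16)],
    products: [
        // Both app targets depend on this single library product,
        // so engine improvements reach both apps with no versioning step.
        .library(name: "BLEKitCore", targets: ["BLEKitCore"])
    ],
    targets: [
        .target(name: "BLEKitCore"),       // scanning, fingerprinting, scoring
        .testTarget(name: "BLEKitCoreTests", dependencies: ["BLEKitCore"])
    ]
)
```

Each app target then declares the local package as a dependency in Xcode, which is what makes shared-layer changes immediately visible to both apps.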
Thread-safe engine, value-type UI snapshots
Apple's App Store review runs thread sanitizer diagnostics. Off-main-thread @Published mutations cause intermittent crashes that surface during review but not during development. I chose lock-guarded class internals with value-type snapshot publishing to eliminate this entire failure category before writing a single line of UI code.
DeviceIdentity is a lock-guarded class — it is not an ObservableObject. ViewModels subscribe to engine-layer Combine publishers, read properties on the main thread, and produce @Published value-type snapshots. Views bind exclusively to these snapshots.
Tradeoff: more boilerplate in ViewModels vs. zero risk of thread-sanitizer rejection at review
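The pattern described above can be sketched as follows — a minimal illustration, assuming hypothetical field names (`name`, `rssi`) that stand in for whatever the real engine tracks:

```swift
import Foundation

// Value-type snapshot the UI binds to — safe to hand across threads.
struct DeviceSnapshot: Equatable {
    let id: UUID
    let name: String
    let rssi: Int
}

// Lock-guarded engine-side identity; deliberately NOT an ObservableObject.
// Mutations happen under the lock; only value-type snapshots ever escape.
final class DeviceIdentity {
    let id = UUID()
    private let lock = NSLock()
    private var name: String
    private var rssi: Int

    init(name: String, rssi: Int) {
        self.name = name
        self.rssi = rssi
    }

    // Safe to call from any scanning thread.
    func update(rssi: Int) {
        lock.lock(); defer { lock.unlock() }
        self.rssi = rssi
    }

    // Snapshot is taken under the lock; the value escapes, the class never does.
    func snapshot() -> DeviceSnapshot {
        lock.lock(); defer { lock.unlock() }
        return DeviceSnapshot(id: id, name: name, rssi: rssi)
    }
}
```

A ViewModel would then hop to the main thread before assigning the snapshot to any `@Published` array, which is exactly the step that keeps thread-sanitizer diagnostics quiet.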
Ephemeral data by default
Both apps wipe scan data at session end. No persistent database of discovered devices. The only data that survives a session is the user's whitelist and explicitly captured evidence snippets. A tool that audits others' BLE devices should not create a surveillance record of its own.
Tradeoff: no session history or trend analysis vs. unassailable privacy posture
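A minimal sketch of what that session lifecycle implies — type and method names here are assumptions, not the shipping code:

```swift
import Foundation

// Hypothetical evidence record — illustrative only.
struct Evidence {
    let text: String
    let capturedAt: Date
}

final class ScanSession {
    private(set) var discovered: [UUID: String] = [:]  // in-memory only, never persisted
    private(set) var capturedEvidence: [Evidence] = [] // survives only if explicitly captured
    var whitelist: Set<UUID> = []                      // the one persistent user setting

    func record(device id: UUID, name: String) {
        discovered[id] = name
    }

    func capture(_ text: String) {
        capturedEvidence.append(Evidence(text: text, capturedAt: Date()))
    }

    // Ending the session wipes everything except the whitelist
    // and explicitly captured evidence.
    func end() -> (whitelist: Set<UUID>, evidence: [Evidence]) {
        defer {
            discovered.removeAll()
            capturedEvidence.removeAll()
        }
        return (whitelist: whitelist, evidence: capturedEvidence)
    }
}
```

The design point is that the scan record never touches a persistence layer at all; only the `end()` return value is eligible for storage.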
Feasibility-first scoping
The original concept was a full network security toolkit for iOS. The first thing I did was assess what iOS actually allows — and it blocks raw sockets, packet capture, and nmap-style scanning. Instead of forcing a compromised version, I identified BLE auditing as an underserved niche where iOS gives full access via CoreBluetooth. The pivot happened before any code was written.
Tradeoff: narrower scope vs. building something that can actually ship and differentiate
Five-dimension threat scoring
Overwatch scores non-whitelisted devices across five dimensions to determine whether they represent a genuine tracking threat. The weighting was designed to minimize false positives in crowded environments while catching the specific behavioural signatures of deliberate following.
Distance traveled. How far the user has traveled while the device stayed in range. Highest weight — distance is the strongest indicator.
Duration in range. How long the device has been continuously in range. Duration alone can be innocent — combined with distance, it's telling.
Signal behaviour. Is the signal defying expected decay for a stationary device? A consistent RSSI while the user moves suggests the device is moving too.
Known tracker match. Does the device match a known tracker profile — AirTag, Tile, SmartTag? Known tracker signatures are weighted immediately.
Reappearance pattern. Has this device appeared, disappeared, and reappeared? This is the clearest behavioural signature of deliberate following — someone circling back or waiting ahead.
All scores are modulated by a crowd density factor (0.5 to 1.0). A device scoring 0.7 in a stadium gets reduced to 0.35 — likely just another phone in a crowd. The same score on a quiet residential street stays at 0.7. This single adjustment eliminates most false positives without sacrificing sensitivity in the scenarios that matter.
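The scoring shape can be sketched like this — the specific weights below are illustrative guesses (only "distance weighted highest", the 0.5–1.0 crowd factor, and the 0.7 → 0.35 stadium example come from the description above):

```swift
// Illustrative five-dimension threat score. Weights are assumptions,
// not Overwatch's actual values; they only need to sum to 1.0.
struct ThreatInputs {
    var distanceTraveled: Double  // 0...1, distance covered while device stayed in range
    var duration: Double          // 0...1, normalized continuous time in range
    var signalAnomaly: Double     // 0...1, RSSI defying expected decay while user moves
    var knownTracker: Double      // 0 or 1, matches AirTag/Tile/SmartTag profile
    var reappearance: Double      // 0...1, disappear/reappear pattern strength
}

func threatScore(_ x: ThreatInputs, crowdDensityFactor: Double) -> Double {
    // Distance gets the highest weight, per the design; the rest are guesses.
    let raw = 0.30 * x.distanceTraveled
            + 0.15 * x.duration
            + 0.15 * x.signalAnomaly
            + 0.15 * x.knownTracker
            + 0.25 * x.reappearance
    // Crowd density modulates the whole score: 0.5 in a stadium,
    // 1.0 on a quiet residential street.
    let factor = min(max(crowdDensityFactor, 0.5), 1.0)
    return raw * factor
}
```

With every dimension at 0.7, the raw score is 0.7; a stadium-level crowd factor of 0.5 halves it to 0.35, while a quiet street leaves it untouched — matching the example in the text.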
Pre-build audit: 13 issues caught
Before attempting compilation, I ran a systematic review of every file for correctness — individually and as an integrated system. The audit checked imports, type references, published property bindings, navigation paths, and ID relationships across the engine-to-UI boundary. Thirteen issues were identified and fixed in the same pass.
The 13-issue pre-build audit caught real bugs but also revealed that some architectural decisions made early in the engine design — before the UI layer existed — didn't account for the UI's needs cleanly. The dual evaluation sets for audit findings, for example, were a patch for a problem that wouldn't have existed if the engine had been designed with the UI's filtering requirements in mind from the start.
In a future project, I'd build a minimal UI stub earlier in the process — even before the engine is complete — to force interface questions to the surface before the engine API solidifies. The cost of a throwaway prototype is lower than the cost of retrofitting an API that's already been built against.