THE CLAIM
Every major technological leap does two things at once: it opens fresh tactical ground for criminals and it arms states and corporations with unprecedented tools of surveillance and control. The same systems that let teenagers remotely wipe a stranger’s phone in 2012 now let police reconstruct a suspect’s life from cloud backups. Cryptocurrencies that enable borderless fraud and ransomware also give investigators transparent ledgers of financial activity. Consumer DNA kits that expose family secrets also let detectives unmask serial killers decades later.
This is not a temporary imbalance that law or design will “fix.” It is a structural feature of digitized life. As more of identity, value, and movement pass through software, crime and policing converge on the same infrastructure. The result is a durable tension: making systems safe enough to withstand modern attackers almost inevitably makes them instrumentally useful for monitoring, profiling, and coercion. The central question is no longer whether technology threatens privacy; it is which actors gain leverage over whom, and under what constraints.
THE EVIDENCE
The 2012 account-takeover incident described in the editor’s letter captures an early, almost quaint, version of this pattern. A couple of teenagers, not sophisticated nation-state operators, used publicly available data and social engineering to talk their way into an Amazon account. There, they harvested the last four digits of a real credit card number, leveraged that fragment to convince Apple they were the rightful owner, and cascaded into control of iCloud, Gmail, and Twitter. No exploits. No zero-days. Just procedural gaps in the way large platforms authenticated identity.
That episode was a preview of today’s landscape. The basic ingredients have now scaled to billions of people: widely available personal data, loosely verified customer service workflows, and deeply interconnected accounts. Modern account-takeover rings combine stolen credential dumps, phishing kits, SIM-swapping, and deepfake audio to defeat weak verification schemes. Where the 2012 attackers guessed security answers and read out partial card numbers, their 2026 counterparts can clone a victim’s voice, spin up convincing fake documents, and route the proceeds through tumblers on a public blockchain.
Yet the same digital exhaust that powers these crimes also powers their investigation. The moment those 2012 hackers signed in, they left IP addresses, device fingerprints, and transaction trails. Even if the victim chose not to press charges, platform logs and service provider records could, in principle, reconstruct the attack. At larger scales, this is how contemporary fraud and cybercrime cases are routinely built: correlating login metadata, cell-site location data, exchange records, and chat logs from messaging platforms. Every “frictionless” service quietly stockpiles evidence.
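In code terms, that reconstruction is mostly a join across logs. Here is a minimal Python sketch, assuming hypothetical CSV exports with invented field names (timestamp, ip, device_fingerprint, action) rather than any provider’s real schema:

    import csv
    from collections import defaultdict
    from datetime import datetime

    def load_events(path, source):
        # Hypothetical CSV columns: timestamp (ISO 8601), ip,
        # device_fingerprint, action. Tag each row with its source platform.
        events = []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                row["source"] = source
                row["timestamp"] = datetime.fromisoformat(row["timestamp"])
                events.append(row)
        return events

    def correlate(event_sets):
        # Group events from different providers by shared identifiers
        # (IP address or device fingerprint), then order each group in time.
        groups = defaultdict(list)
        for events in event_sets:
            for e in events:
                for key in (e.get("ip"), e.get("device_fingerprint")):
                    if key:
                        groups[key].append(e)
        return {k: sorted(v, key=lambda e: e["timestamp"])
                for k, v in groups.items()}

    # timelines = correlate([load_events("retailer_logins.csv", "retailer"),
    #                        load_events("cloud_logins.csv", "cloud")])

The specific fields do not matter; what matters is that every provider in the chain holds such a table, and the tables key on the same identifiers.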
DNA genealogy is the clearest recent illustration of this dual use. Consumer genetics services and public genealogy databases were marketed to help people discover their ancestry or find lost relatives. Investigators realized they could upload crime-scene DNA profiles, search for partial matches, and then work outward through family trees. This technique famously led to the arrest of the Golden State Killer decades after his crimes, inaugurating a new era of “genetic surveillance.” A tool built for personal curiosity became a permanent extension of forensic power, with implications for millions of people who never consented but share DNA with those who did.
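The first analytical step is simple enough to sketch: map the amount of autosomal DNA a crime-scene profile shares with a database participant onto plausible degrees of relatedness, then build family trees outward from the closest matches. The Python below uses rough, illustrative shared-centimorgan ranges, not the thresholds any service or lab actually applies:

    # Rough, illustrative ranges of autosomal DNA shared with a relative,
    # in centimorgans (cM). Ballpark figures for illustration only.
    RELATIONSHIP_RANGES_CM = [
        ("parent/child",                              3300, 3720),
        ("full sibling",                              2200, 3400),
        ("half sibling / grandparent / aunt / uncle", 1300, 2300),
        ("first cousin",                               500, 1400),
        ("second cousin",                               45,  520),
        ("third cousin",                                 0,  230),
    ]

    def candidate_relationships(shared_cm):
        # Return every relationship whose typical shared-cM range
        # covers the observed amount of shared DNA.
        return [name for name, low, high in RELATIONSHIP_RANGES_CM
                if low <= shared_cm <= high]

    print(candidate_relationships(850))   # -> ['first cousin']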

Cryptocurrency follows the same pattern. Bitcoin and its successors dramatically lowered the barrier to moving money across borders without banks, enabling new forms of extortion and fraud at scale: ransomware payments, rug pulls, darknet markets. But those same “pseudonymous” ledgers are public and permanent. Blockchain analytics firms now help law enforcement trace flows of stolen funds, cluster addresses by behavior, and deanonymize participants using on‑ and off‑chain linkages. Criminals gained a censorship-resistant payment rail; investigators gained a globally visible map of their transactions.
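One commonly described building block of that analysis is the common-input-ownership heuristic: addresses spent together as inputs to the same transaction are presumed to belong to one wallet. Here is a minimal Python sketch over a simplified transaction format; real analytics pipelines stack many more heuristics and off-chain data on top:

    class UnionFind:
        # Merge addresses into clusters of presumed common ownership.
        def __init__(self):
            self.parent = {}

        def find(self, x):
            self.parent.setdefault(x, x)
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]  # path compression
                x = self.parent[x]
            return x

        def union(self, a, b):
            self.parent[self.find(a)] = self.find(b)

    def cluster_addresses(transactions):
        # transactions: iterable of dicts like {"inputs": [...], "outputs": [...]}.
        # Addresses spent together as inputs are merged into one cluster.
        uf = UnionFind()
        for tx in transactions:
            inputs = tx["inputs"]
            for addr in inputs[1:]:
                uf.union(inputs[0], addr)
        clusters = {}
        for tx in transactions:
            for addr in tx["inputs"]:
                clusters.setdefault(uf.find(addr), set()).add(addr)
        return list(clusters.values())

    # txs = [{"inputs": ["A", "B"], "outputs": ["X"]},
    #        {"inputs": ["B", "C"], "outputs": ["Y"]}]
    # cluster_addresses(txs)  -> [{"A", "B", "C"}]

Once addresses are clustered, a single identified withdrawal at a regulated exchange can name the owner of the whole cluster.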
Autonomous and semi-autonomous systems offer similar trade-offs in the physical world. Off-the-shelf autopilots and drone kits make it easier to transport contraband, scout targets, or disrupt infrastructure without directly exposing a human operator. At the same time, fleets of connected vehicles generate high-resolution telemetry—location, speed, sensor data—feeding into cloud platforms that can reconstruct movements minute by minute. A self-driving delivery van can be hijacked; but once recovered, its logs can describe exactly where it went, when, and under whose software control.
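A sketch of what that reconstruction can look like, assuming a hypothetical telemetry export with invented field names (timestamp, lat, lon, speed_mps): the vehicle’s stops alone describe where it went and for how long.

    from datetime import datetime

    def reconstruct_stops(telemetry, min_stop_seconds=120, speed_threshold=0.5):
        # telemetry: list of dicts with hypothetical fields
        # timestamp (ISO 8601 string), lat, lon, speed_mps.
        # Returns (start, end, lat, lon) for every period the vehicle
        # sat effectively still for at least min_stop_seconds.
        records = sorted(telemetry, key=lambda r: r["timestamp"])
        stops, current = [], None

        def flush():
            if current and (current["end"] - current["start"]).total_seconds() >= min_stop_seconds:
                stops.append((current["start"], current["end"],
                              current["lat"], current["lon"]))

        for r in records:
            t = datetime.fromisoformat(r["timestamp"])
            if r["speed_mps"] <= speed_threshold:
                if current is None:
                    current = {"start": t, "lat": r["lat"], "lon": r["lon"]}
                current["end"] = t
            else:
                flush()
                current = None
        flush()
        return stops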
Overlay all of this with pervasive surveillance infrastructure—CCTV with automated license plate readers, social media platforms that retain years of private messages, mobile phones constantly negotiating with towers and Wi‑Fi access points—and the pattern is unavoidable. New technologies expand the space of things that can be done without immediate detection. They also expand the quantity and granularity of data that can be weaponized after the fact, whether by police, intelligence services, corporations, or, in some regimes, outright authoritarian apparatuses.
THE STRONGEST OBJECTION
The strongest pushback is that this is a familiar story of short-lived imbalance, not a structural law. In this view, technologies initially empower bad actors because norms, regulations, and defenses take time to catch up. But over the long run, liberal democracies have repeatedly domesticated disruptive tools without permanently sacrificing civil liberties. Telegraphs, telephones, and the early internet all spawned waves of fraud and abuse. Legal frameworks, technical standards, and cultural expectations eventually converged on workable compromises.
On this account, contemporary worries about crypto-enabled crime, DNA databases, or AI-boosted social engineering are iterations of the same cycle. Yes, cryptocurrencies facilitated ransomware spikes, but governments responded with stricter exchange regulations and sanctions, and the worst excesses ebbed. Yes, genetic genealogy raises privacy questions, but court rulings, warrant requirements, and internal policy guidelines can limit misuse. Law is slow by design, the argument goes, but that deliberate pace is exactly what reins in overreach and preserves rights. The presence of dual-use risks does not prove an inevitable slide into surveillance; it simply describes the management challenge.
This objection also emphasizes agency. Technologies do not unilaterally “force” trade-offs. Legislatures, regulators, courts, and publics can insist on strong encryption, data minimization, and bright-line limits on state access. End-to-end encrypted messaging, for instance, demonstrably makes it harder for police and intelligence agencies to monitor conversations, yet it has been widely deployed and politically defended. That example seems to cut directly against the claim that every new layer of digital infrastructure inevitably deepens surveillance capacity.
WHY THE CLAIM HOLDS
The objection is correct about cycles of adaptation but wrong about the baseline. What accumulates over time is not just better regulation; it is more infrastructure. Each generation of technology embeds itself as a new substrate for both crime and policing, and almost none of it is rolled back. Phone metadata did not replace physical surveillance; it layered on top of it. Location histories from smartphones did not replace call records; they enriched them. Genetic databases did not replace fingerprints; they expanded biometric reach to relatives who never touched a crime scene.
This ratchet effect is why the 2012 account takeover looks so tame in retrospect. The attackers exploited gaps in customer service scripts and the weak coupling between Amazon, Apple, and Google identity systems. Those gaps were partially closed with measures like two-factor authentication and stricter verification. But in closing them, platforms also normalized more intensive data collection and continuous identity checks: device binding, behavioral profiling, risk scoring of logins, and mandated phone-number anchors. The attack surface shifted; the amount of data generated about each user grew.
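A toy version of the risk scoring this implies is sketched below; the signals and weights are invented for illustration, and production systems use far richer features and learned models. Note that every input to the score is itself a piece of collected data about the user.

    def login_risk_score(attempt, profile):
        # attempt: dict with device_id, country, hour_utc, used_2fa.
        # profile: the account's known_devices, usual_countries, typical_hours.
        # Returns a score in [0, 1]; higher means a riskier sign-in.
        score = 0.0
        if attempt["device_id"] not in profile["known_devices"]:
            score += 0.4   # unrecognized device
        if attempt["country"] not in profile["usual_countries"]:
            score += 0.3   # unfamiliar location
        if attempt["hour_utc"] not in profile["typical_hours"]:
            score += 0.1   # unusual time of day
        if not attempt["used_2fa"]:
            score += 0.2   # no second factor presented
        return min(score, 1.0)

    # A platform might step up verification above some threshold, e.g.
    # demand a code sent to the account's anchored phone number whenever
    # login_risk_score(attempt, profile) > 0.5.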
End-to-end encryption shows the same pattern. It is true that strong encryption limits direct content surveillance. But it has not reduced the overall observability of digital life. Instead, it has pushed law enforcement toward “metadata is data” strategies—exploiting contact graphs, timing patterns, cloud backups, and device-level access. Simultaneously, many encrypted apps require phone numbers, access to contact lists, and integration with operating systems that themselves remain heavily instrumented. Users gained message secrecy; the system gained richer peripheral data.
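“Metadata is data” is easy to make concrete: even with content fully encrypted, a table of (sender, recipient, timestamp) records yields a contact graph. A small Python sketch, with illustrative field names:

    from collections import Counter, defaultdict

    def build_contact_graph(metadata):
        # metadata: iterable of (sender, recipient, timestamp) tuples.
        # No message content is needed to see who talks to whom, and how often.
        pair_counts = Counter()
        contacts = defaultdict(set)
        for sender, recipient, _ts in metadata:
            pair_counts[(sender, recipient)] += 1
            contacts[sender].add(recipient)
            contacts[recipient].add(sender)
        return pair_counts, contacts

    def most_connected(contacts, n=5):
        # Accounts with the largest number of distinct contacts.
        return sorted(contacts, key=lambda a: len(contacts[a]), reverse=True)[:n]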
The structural reason is simple: digital systems run on logs. Security, reliability, personalization, and monetization all depend on collecting, storing, and analyzing vast streams of granular events. Any capability that makes platforms more resilient to crime—better anomaly detection, more accurate fraud scoring, stronger authentication—rests on this instrumentation. Those same logs, once created, are irresistible to investigators, intelligence agencies, and, in less constrained contexts, political operatives. Legal process and internal policy can narrow who accesses which data and when, but they rarely erase the data itself.
So the dual use is not an accident of immaturity; it is baked into the economic and technical logic of networked computing. Technologies that meaningfully reduce attack surfaces without generating new data—like offline cash, anonymous transit tickets, or air-gapped machines—are precisely the ones being displaced in favor of trackable, updatable, remotely manageable alternatives. That is why innovations like cryptocurrencies, autonomous systems, and consumer genomics predictably arrive as both criminal opportunity and surveillance upgrade.
THE IMPLICATION
If every major technology is structurally dual-use in this way, then the central societal question is not how to “balance” innovation with privacy, as if those were separable dials. The question is how much leverage any actor—state, corporation, or criminal network—is allowed to accumulate over others through control of shared infrastructure. The reality of crypto-fueled scams, autopilot-enabled mischief, and AI-powered social engineering is inseparable from the reality of permanent, high-resolution records in the hands of police, platforms, and data brokers.
That means arguments about security and civil liberties cannot be resolved by gesturing at future fixes or better norms. The trade-offs are already concretely embedded in protocols, device architectures, legal retention mandates, and commercial incentives. As digital systems continue to absorb identity, money, mobility, and biology, crime and its prosecution will operate on the same rails. Any claim to strengthen one without fortifying the other should be treated with skepticism. The infrastructure does not care who is using it.