They Already Know Everything — The Cybersecurity and Privacy Crisis Nobody Wants to Admit Is Happening to You

The story of how your digital life became the world’s most valuable and least protected resource — and why the people who should be fixing it are profiting from it instead


Maria didn’t know she’d been hacked until her mortgage was denied.

She’d applied for a refinance on the house she’d owned for eleven years — good credit history, stable income, clean record. The bank’s system flagged her immediately. Not because of anything in her financial history. Because her Social Security number showed up in a database of compromised identities linked to three fraudulent credit applications filed eighteen months earlier in two states she’d never visited, for a car she’d never driven and a credit card she’d never held.

The breach that exposed her data had happened four years before that. A mid-sized healthcare company she’d used once — a single urgent care visit in 2019 — had been hit by ransomware. Her records, along with 2.3 million others, had been exfiltrated and sold on a dark web marketplace. By the time anyone notified her, the data had changed hands four times and been incorporated into bundled identity packages that criminals used to construct synthetic identities and commit fraud at scale.

It took Maria nineteen months to fully clear her credit. Nineteen months of phone calls, affidavits, credit freezes, dispute letters, and the particular exhausting humiliation of having to prove to institutions that you are who you say you are. She estimates she spent over two hundred hours on it. The company whose breach exposed her data paid a regulatory fine that amounted to less than forty cents per affected individual. No executive faced criminal charges. The company’s stock dipped for two weeks and recovered.

This story is not unusual. A version of it happens to millions of people every year. What is unusual is that we have collectively decided to treat it as a background condition of modern life — an unfortunate but inevitable feature of the digital world — rather than what it actually is: a systemic failure of accountability, regulation, and corporate incentive that is causing measurable harm to real people at enormous scale, while the entities responsible face consequences so mild they function as operating costs rather than deterrents.

I want to tell you what’s actually happening in cybersecurity and privacy. Not the version that gets discussed in corporate press releases and congressional hearings. The real version. The one that security researchers, privacy lawyers, dark web analysts, and breach investigators see every day and can rarely say out loud without risking their careers or their company relationships.

Some of this is going to make you angry. It should.


Chapter One: The Breach Economy

There is a functioning, sophisticated, and remarkably stable economy built entirely around stolen personal data. It has suppliers, distributors, wholesalers, retailers, and end consumers. It has quality control, customer service, and product differentiation. It has its own pricing conventions, its own reputation systems, and its own innovation cycle.

And it is enormous.

The estimates of the global cybercrime economy vary depending on methodology, but credible assessments from security researchers and law enforcement consistently place it in the trillions of dollars annually — larger than the GDP of most countries. This is not a cottage industry of lone hackers in dark rooms. It is a distributed, professionalized criminal ecosystem with organizational structures that would be recognizable to any business school graduate.

Here is how the supply chain works, in terms that are more concrete than most coverage provides.

The initial breach — the penetration of a corporate system and exfiltration of personal data — is often performed by specialized groups that do nothing else. They are not the ones who ultimately use the data fraudulently. They steal it and sell it wholesale to data brokers operating in criminal marketplaces. These marketplaces, hosted on the dark web, function like commodity exchanges. Stolen credit card numbers, Social Security numbers, health records, login credentials, passport scans, and combinations thereof are listed with prices that vary by data freshness, completeness, and the verified financial profile of the individuals involved.

A fresh, verified credit card number with associated billing information sells for between five and twenty dollars depending on the card’s credit limit and country of origin. A complete identity package — Social Security number, date of birth, address history, credit profile, and supporting documents — sells for between fifty and two hundred dollars. Healthcare records, which contain more personally identifying information than almost any other data type, command premium prices. The criminal market understands data value better than most corporate privacy policies suggest their owners do.

The buyers are varied. Some are individual fraudsters running small-scale operations — opening credit cards, filing fraudulent tax returns, making purchases before the fraud is detected. But the more sophisticated buyers are organized groups running industrial-scale operations. They use automation to test stolen credentials against hundreds of websites simultaneously. They use machine learning to identify the highest-value targets in a dataset. They have customer service operations that help buyers verify data quality. They have dispute resolution processes.

The companies whose systems are breached bear the direct costs of response — the forensic investigation, the legal fees, the regulatory fines, the credit monitoring services offered to affected individuals. But the individuals whose data is stolen bear costs that are diffuse, delayed, and often never attributed to the breach at all — the denied loan, the fraudulent account that damages credit, the hours spent disputing transactions, the emotional toll of identity theft. Because these costs fall on millions of individuals rather than a single corporate entity, they are rarely aggregated and rarely drive accountability.

This asymmetry — concentrated profits, diffuse harm — is the structural reason the breach economy persists and grows despite decades of corporate security investment and regulatory attention.


Chapter Two: The Surveillance Business You Opted Into Without Knowing

Let me tell you about a company you’ve probably never heard of that knows more about you than your closest friends.

It knows where you sleep. It knows where you work. It knows which route you take to get there and how long it takes you at different times of day. It knows which stores you visit, how long you spend in each, and roughly how much you spend. It knows which political events you’ve attended, which religious institutions, which medical facilities. It knows who you spend time with — not their names necessarily, but the consistent presence of other devices near yours, which in aggregate builds a remarkably complete picture of your social network. It knows your sleep schedule, your exercise habits, your travel patterns.

It knows all of this because it buys location data from the apps on your phone. The weather app you gave location permission to. The flashlight app. The free game. The coupon app. The navigation app you use occasionally. Each of these apps, and hundreds like them, quietly collects your precise GPS coordinates at regular intervals and sells that data to brokers who aggregate, enrich, and resell it to anyone willing to pay.

The companies doing this are not operating in shadows. They are legitimate businesses with offices in major cities, venture capital funding, and clients that include insurance companies, hedge funds, retailers, real estate developers, law enforcement agencies, and political campaigns. The legal basis for all of this, in most jurisdictions, is the privacy policy you technically agreed to when you downloaded the app — a document averaging over three thousand words that approximately zero percent of users read.

The location data industry is just one branch of a much larger commercial surveillance ecosystem that has grown up largely unnoticed alongside the consumer internet. Your browsing history is tracked across virtually every website you visit through cookies, browser fingerprinting, login tracking, and pixel tags — invisible single-pixel images embedded in websites and emails that report back when loaded. This data is aggregated into behavioral profiles bought and sold in real-time advertising auctions that happen in the milliseconds between when you click a link and when the page loads.
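
To make the pixel mechanism concrete, here is a minimal sketch of the server side of a tracking pixel, written in Python purely for illustration; the endpoint, parameter name, and recipient identifier are invented, not any real vendor's code. The page or email embeds a one-pixel image whose URL carries a per-recipient ID, and simply loading that image tells the tracker who opened what, when, from which IP address, and on which device.

```python
# Minimal illustrative sketch: the server side of an email/web tracking pixel.
# The sender embeds <img src="https://tracker.example/px.gif?id=r-48291"> in a message;
# when the image is fetched, the request itself is the tracking event.
import io
from datetime import datetime, timezone
from flask import Flask, request, send_file

app = Flask(__name__)

# Bytes of a transparent 1x1 GIF: the smallest valid image a browser or mail client will load.
PIXEL = bytes.fromhex(
    "47 49 46 38 39 61 01 00 01 00 80 00 00 00 00 00"
    "ff ff ff 21 f9 04 01 00 00 00 00 2c 00 00 00 00"
    "01 00 01 00 00 02 02 44 01 00 3b"
)

@app.route("/px.gif")
def pixel():
    event = {
        "recipient_id": request.args.get("id"),           # identifier baked into the URL
        "opened_at": datetime.now(timezone.utc).isoformat(),
        "ip": request.remote_addr,                         # approximate location via geo-IP
        "user_agent": request.headers.get("User-Agent"),   # device and mail client
    }
    print(event)  # a real tracker would write this to an analytics pipeline
    return send_file(io.BytesIO(PIXEL), mimetype="image/gif")

if __name__ == "__main__":
    app.run()
```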

Your purchase history — not just online purchases but physical retail purchases made with loyalty cards — is sold by retailers to data brokers and used to build financial profiles deployed by insurers, lenders, and employers. Your health data, if generated by fitness apps, period tracking apps, sleep trackers, mental health apps, or symptom checkers, generally falls outside HIPAA’s scope, which covers healthcare providers and insurers rather than consumer apps, and is largely unprotected. The companies holding it can sell it, share it, or lose it in a breach with consequences far less severe than a hospital breach would face.

Your car, if it’s a modern connected vehicle, is almost certainly generating and transmitting data about your driving behavior, location, and in some cases the content of conversations that happen inside it. Several major automakers have been documented selling this data to insurance companies without explicit owner consent.

The sum of all of this is a portrait of your life — your habits, your health, your relationships, your finances, your beliefs, your vulnerabilities — of a detail and accuracy that would have required a dedicated surveillance team just twenty years ago. That portrait exists. It is being bought and sold. You almost certainly do not know who has it, what they’re doing with it, or what happens to it if the company holding it is breached, acquired, or decides to monetize it differently.


Chapter Three: The Security Theater We’ve All Been Performing

Here is something that security professionals have known for years and almost never say publicly in a way that reaches ordinary people: a significant portion of corporate cybersecurity spending is not primarily about making systems more secure. It is about making companies look like they tried, in the event of a breach, so that the regulatory and legal consequences are manageable.

This is not a fringe critique. It is the quiet consensus of the security community, expressed in private conversations, in closed conference sessions, in the bitter humor of security engineers who have watched their careful recommendations overridden for business reasons and their warnings ignored until the inevitable breach.

The dynamic works like this. Companies face a genuine cybersecurity problem — their systems are targets, the threats are real, and defending against all of them is genuinely difficult. They also face a regulatory and reputational problem — if breached, they need to demonstrate they took reasonable steps to prevent it. The tension between these two problems is supposed to resolve in favor of actually improving security. In practice, it often resolves in favor of security that is visible, documentable, and defensible in a regulatory proceeding — which is not always the same as security that is effective.

The result is vast spending on compliance frameworks, audit processes, penetration testing reports, and security certifications that generate documentation but don’t necessarily improve the underlying security posture. Companies earn SOC 2 attestations and ISO 27001 certifications while running configurations that any competent attacker can exploit. The certifications demonstrate that processes exist. They do not demonstrate that those processes are effective, or that the organization has the engineering culture and operational rigor to actually defend against sophisticated adversaries.

Meanwhile, the things that actually make systems more secure — rigorous software engineering practices, aggressive patching schedules, minimal data collection and retention, genuine investment in security team authority and organizational culture — are harder, slower, and don’t generate the kind of paper trail that protects executives in a regulatory proceeding.

The regulatory fines that companies pay after major breaches are, almost uniformly, small enough relative to company revenue to be absorbed without meaningful business impact. The settlement Equifax reached with the FTC and other regulators over the breach that exposed the personal data of 147 million Americans was $575 million, against a company with annual revenue of roughly $3.5 billion. The math says: the fine is not a deterrent. It’s an operating cost.


Chapter Four: The People Who Are Supposed to Protect You

In the early days of the internet, there was a genuine culture of security research built on the principle that finding and exposing vulnerabilities made systems safer for everyone. Researchers who found bugs and disclosed them — first privately to the affected company, then publicly if the company failed to fix them — performed a genuine public service. The model, imperfect as it was, worked.

That model has been under pressure from two directions that rarely get discussed together.

The first pressure comes from government intelligence agencies. The United States, like many other governments, stockpiles software vulnerabilities — deliberately keeping them secret, rather than disclosing them to the affected vendors, so that they can be used for offensive intelligence operations. The logic is understandable: a vulnerability in a foreign government’s system is valuable. Disclosing it means the enemy patches it. So you keep it secret and exploit it.

The problem is that the same vulnerabilities that exist in foreign systems also exist in domestic systems — in American companies, American hospitals, American infrastructure, American individuals’ devices. The vulnerability doesn’t care about national borders. And when a stockpiled government vulnerability is stolen — as happened dramatically in 2017 when NSA hacking tools were leaked and subsequently used in the WannaCry and NotPetya attacks that caused tens of billions of dollars in damage worldwide — the result is catastrophic harm to the very systems the government is supposed to protect.

The second pressure comes from the private market for offensive capability. A robust market for undisclosed software vulnerabilities — zero-days — has developed, with prices for critical vulnerabilities in major platforms reaching into the millions of dollars. The buyers include government intelligence agencies, law enforcement agencies, and private companies that sell surveillance tools to governments.

The NSO Group built and sold a surveillance tool called Pegasus that exploited zero-day vulnerabilities in iOS and Android to silently compromise the phones of targets worldwide. NSO’s clients used Pegasus against journalists, human rights activists, political dissidents, lawyers, and heads of state. The tool was implicated in the surveillance of associates of murdered journalist Jamal Khashoggi.

The existence of this market — in which the vulnerabilities in the devices carried by every person reading this article are bought, sold, and weaponized — is the direct consequence of a regulatory vacuum in which the development and sale of offensive cyber capabilities is essentially uncontrolled, and in which financial incentives point toward exploitation rather than disclosure.


Chapter Five: What Your Data Does After You Stop Thinking About It

Here is a scenario that is not hypothetical. It is constructed from documented practices, and every element has been confirmed in reporting or regulatory proceedings.

You download a mental health app. The app’s privacy policy states that it may share “anonymized and aggregated” data with partners for research and analytics. You use the app to track your mood, record journal entries about your anxiety, and note the days when you’re struggling.

The “anonymized” data the app shares is not anonymous in any meaningful sense. It includes a persistent device identifier linked to your advertising profile, your approximate location derived from your phone’s sensors, and behavioral patterns that — combined with the advertising profile — allow reasonably confident inference of your identity. The analytics partner aggregates this data with data from other apps and sells insights to pharmaceutical companies, insurance actuaries, and employers who use workplace wellness programs.
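
Why the “anonymized” label fails is worth making concrete. The sketch below, written in Python with invented field names and fabricated coordinates, shows the kind of trivial analysis a buyer can run on a broker’s location feed: a persistent device identifier plus a few nights of GPS fixes is usually enough to recover a home address, and home and workplace together are close to uniquely identifying.

```python
# Minimal sketch (invented field names, fabricated data) of re-identifying an
# "anonymized" location feed: each record carries a persistent device/ad ID,
# and the most common night-time location for a device is, in practice, where
# its owner sleeps.
from collections import Counter, defaultdict

def likely_home(pings, night_hours=range(0, 6)):
    """pings: iterable of (device_id, hour_of_day, lat, lon) records."""
    night_fixes = defaultdict(Counter)
    for device_id, hour, lat, lon in pings:
        if hour in night_hours:
            # Round to roughly 100 m so repeated fixes at the same building fall in one cell.
            night_fixes[device_id][(round(lat, 3), round(lon, 3))] += 1
    return {device: cells.most_common(1)[0][0] for device, cells in night_fixes.items()}

# Three nights of fabricated pings from one device; the daytime fix is ignored.
sample = [
    ("ad-id-7f3c", 1, 40.7411, -73.9897),
    ("ad-id-7f3c", 2, 40.7412, -73.9899),
    ("ad-id-7f3c", 3, 40.7410, -73.9896),
    ("ad-id-7f3c", 14, 40.7580, -73.9855),
]
print(likely_home(sample))  # {'ad-id-7f3c': (40.741, -73.99)}
```

Academic studies of mobility traces have repeatedly found that as few as four spatio-temporal points are enough to uniquely identify the overwhelming majority of people in a dataset.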

Your mental health data — or rather, the behavioral and psychological profile derived from it — informs a pharmaceutical company’s marketing targeting, an insurance company’s actuarial models, and potentially an employer’s assessment of workforce risk. You have no knowledge of any of this. You have no way to access, correct, or delete the data once it leaves the app. The terms you agreed to were technically disclosed but practically impossible to understand.

This is not science fiction. The data broker industry has well-documented categories for sensitive inferences — financial distress, health conditions, political affiliation, religious belief, sexual orientation — derived from behavioral data and sold to clients whose use cases are not disclosed. The scenarios where this data causes concrete harm are real and documented: location data sold to advocacy groups and used to target people who visited sensitive healthcare facilities; mental health data used by insurers to identify high-risk populations; employment screening that incorporates data broker profiles in ways that discriminate against protected classes without the applicants’ knowledge.

And then there’s the breach scenario. All of this sensitive, inference-rich data — the mental health patterns, the location history, the financial behavior, the health signals — sits in databases at data broker companies that are not regulated like healthcare companies and whose security postures are, in many cases, considerably worse. When a data broker is breached — and they are breached, regularly, with far less media coverage than equivalent healthcare or financial breaches — the data that leaks is not just names and Social Security numbers. It is a comprehensive psychological and behavioral portrait of millions of people, with all the sensitive inferences intact.


Chapter Six: The Countries That Are Winning the Privacy War

Not everyone has given up.

The European Union’s General Data Protection Regulation represents the most comprehensive attempt by any major jurisdiction to actually constrain the surveillance economy rather than merely regulate its most visible abuses. The GDPR is imperfect — its enforcement has been inconsistent, and Ireland’s data protection authority, the lead regulator for most major U.S. tech companies’ European operations, has been accused of regulatory capture. But its principles — data minimization, purpose limitation, explicit consent, the right to be forgotten, mandatory breach notification, and fines calibrated as a percentage of global revenue — represent a fundamentally different approach.

The fines under GDPR are not operating costs. A fine of four percent of global annual revenue for a company like Google or Meta is a number that changes behavior. The threat of it has already changed data practices at major tech companies — not as much as advocates would like, but measurably.
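
To put that four percent in concrete terms: for a hypothetical company with global annual revenue of $280 billion, roughly the scale of the largest platform companies, the maximum exposure for a single violation is 0.04 × $280 billion, or about $11.2 billion. That is not a number any board can book as a routine operating cost.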

The United States remains an outlier among wealthy democracies in its lack of comprehensive federal privacy legislation. The sectoral approach — HIPAA for healthcare, FERPA for education, FCRA for credit reporting — leaves enormous gaps that the data broker and advertising technology industries exploit systematically. The American legislative process has produced privacy bills that have died in committee or been gutted by industry lobbying for over a decade.

This is not an accident. The industries that profit from the current regime are among the most effective lobbying forces in Washington. The argument that consumers benefit from data-driven advertising deserves particular scrutiny, because it’s the one most frequently deployed and least frequently examined. The implicit deal — you get free services in exchange for your data — sounds reasonable until you examine what you’re actually giving and getting. You’re giving a comprehensive, permanent, and largely irrevocable portrait of your behavior, health, psychology, and relationships. You’re getting targeted advertising that research consistently shows consumers find more intrusive than valuable, and free versions of services whose paid alternatives would cost most users less than twenty dollars a month. That’s not a good deal. It’s a deal maintained by friction and inertia rather than genuine preference.


Chapter Seven: The Dark Corners Nobody Is Covering

There are several dimensions of the cybersecurity and privacy crisis that receive almost no mainstream coverage, despite being significant and growing.

The cyber insurance market is in serious trouble. Premiums have increased by hundreds of percent over the past four years as ransomware claims have exploded. Major insurers have pulled back from the market or narrowed coverage dramatically. The policies that remain are written with exclusions so broad — including exclusions for attacks attributed to nation-state actors, which covers exactly the most sophisticated ransomware — that businesses that thought they had transferred their cyber risk to an insurance company are discovering the transfer was more limited than advertised.

The cybersecurity posture of connected medical devices is, in many cases, genuinely alarming. Insulin pumps, pacemakers, infusion pumps, and hospital monitoring systems run outdated software, cannot be easily patched, often use hard-coded credentials, and are connected to hospital networks in ways that create pathways into clinical systems. Security researchers have demonstrated the ability to remotely manipulate these devices in controlled settings. The installed base of insecure legacy devices represents a patient safety problem that has not been adequately addressed.

The industrial control systems that manage power grids, water treatment plants, natural gas pipelines, and manufacturing facilities are, in many cases, decades old and designed for an era of physical isolation. The gradual connection of these systems to corporate IT networks — for monitoring, efficiency, and remote management — has created attack surfaces that nation-state adversaries have systematically explored and, in some documented cases, pre-positioned within. The gap between the severity of this risk and public awareness of it is striking, and deliberately maintained.

Finally, phishing has been transformed by AI in ways that security awareness training hasn’t caught up to. Phishing emails written by AI are grammatically flawless, contextually appropriate, and personalized using data scraped from social media. Voice cloning technology allows attackers to impersonate executives and family members in phone calls with frightening authenticity. The social engineering attacks that training was designed to teach employees to recognize look very different now than when the training was designed.


Chapter Eight: What I Actually Think — And It’s Not Comfortable

I’ve spent this article walking you through documented realities. Now I want to tell you what I believe, clearly and without hedging.

The current state of cybersecurity and privacy is not a technical problem with a technical solution. It is a political problem — a problem of power, accountability, and the corruption of regulatory systems by the industries they’re supposed to regulate — and it will not be solved by better security software, more aware users, or voluntary industry self-regulation.

The companies holding your data have demonstrated, repeatedly and systematically, that they will not prioritize your privacy and security over their business interests unless compelled to do so by regulation with teeth. The evidence for this is not ambiguous. It is the documented history of the past twenty years.

The American regulatory framework for privacy and cybersecurity is structurally inadequate and has been kept that way deliberately by industries with enormous financial stakes in the status quo. The bipartisan failure to pass comprehensive federal privacy legislation is not an accident or an oversight. It is the successful operation of a lobbying machine that has made protecting the surveillance economy a higher priority than protecting the people whose data funds it.

The security theater dynamic will persist as long as the consequences of a breach are calibrated to demonstrated effort rather than actual harm. The individuals whose data is exposed in breaches, whose identities are stolen, whose sensitive information is sold without consent, deserve accountability that the current system does not provide. Forty cents per person for a healthcare breach exposing millions of patients’ most sensitive information is not accountability. It’s absolution.

And I believe — this is the most uncomfortable part — that the technical community has been complicit in ways it rarely acknowledges. The engineers who built the surveillance infrastructure powering the data broker industry, the advertising technology stack, the connected devices with inadequate security — they were not villains. They were doing their jobs, building what they were asked to build, in organizations with incentives that didn’t reward asking uncomfortable questions about what their work would be used for. But the aggregate result of all those individual decisions is a surveillance infrastructure of extraordinary scope and power that has been turned against the people it was ostensibly designed to serve.

None of this means the situation is hopeless. The GDPR demonstrated that regulation can change industry behavior at scale. Growing consumer awareness of privacy issues creates political pressure that didn’t exist fifteen years ago. The security research community, despite the legal and financial pressures it faces, continues to expose vulnerabilities and hold companies accountable in ways that courts and regulators often don’t.

But improvement requires first being honest about what is actually happening — naming it clearly, without the softening language of “challenges” and “complexities” that allows the status quo to persist by making it seem more ambiguous than it is.

What is happening is that you are being surveilled, your data is being sold, the systems holding it are inadequately secured, the companies responsible face consequences that don’t change their behavior, and the regulatory systems that should protect you have been compromised by the industries they regulate.

That’s not a challenge. That’s not a complexity. That’s a failure. A specific, attributable, addressable failure of corporate accountability and political will.

Maria is still paying the price for it. So are millions of people like her. The question is how much longer we decide that’s acceptable.


The security researchers, privacy lawyers, and policy advocates working on these issues deserve more attention than they get. If this article made you angry, find out who is working on these problems in your jurisdiction and consider whether they deserve your support.
