FTC.gov
Nearly one-third of U.S. workers describe themselves as “free agents,” and talk of the “gig economy” has grown so loud that it’s an issue in the presidential campaign. As more people look online for work that supplements — or replaces — their old-fashioned paycheck and helps pay down debts, it seems time for a reminder about popular work-at-home programs that might not be what they seem. When you’re looking for a “gig,” you want to know how to spot questionable “opportunities” – and you certainly don’t want to rope friends and family into them.

In an economic recovery that’s still dogged by fits and starts, it’s inevitable that workers will look to new sources of income. It’s hard to quantify the gig economy, but Kelly Services, a recruiting company, tried recently. It defined free agents as independent contractors, freelance business owners, temp workers, moonlighters and employees with multiple sources of revenue. By that definition, 31% of U.S. adults surveyed called themselves free agents. Meanwhile, Republican presidential candidate Jeb Bush extolled the virtues of the gig economy recently by taking an Uber ride and Democratic presidential candidate Hillary Clinton warned in a speech that gig workers might not have the legal protections they need.

(This story first appeared on Credit.com. Read it there.)

“This on-demand, or so-called ‘gig,’ economy is creating exciting economies and unleashing innovation,” she said. “But it is also raising hard questions about workplace protections and what a good job will look like in the future.”

Among the protections workers need is security that when they interview for an online, work-at-home job, the position is real. A recent survey by FlexJobs, a job-search site, found that 17% of adults said they’d been scammed at least once while looking online for a job — meaning they’d been roped into criminal activity or were never paid for work they did.

Recruiting friends for sales jobs — and earning commission from your recruits’ sales — is among the more popular work-at-home gigs to be found online. The jobs can range from perfectly legitimate, to legal but a bit shady, to outright fraud. If you are entertaining work in this arena, it’s important to know how to spot trouble.

The terms you’ll see most associated with this type of work are “multi-level marketing,” which is legal, and “pyramid scheme,” which is illegal. It might seem easy to distinguish between these two, but that’s not always the case.

In short, a legal multi-level marketing (MLM) firm must have a legitimate product, and derive real profits from that product. If a firm derives most of its revenue from recruits — using new recruits’ money to pay earlier participants, for example — that’s a pyramid scheme. Common sense goes a long way here. If you’re going to be selling something virtual that has no intrinsic value — or talking friends into doing so — you are on thin legal and economic ground. Pyramid schemes always collapse, as the pool of new recruits eventually dries up.
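The revenue-source test above can be reduced to a few lines. Here's a minimal sketch in Python; the function name and all dollar figures are invented for illustration, and real regulators weigh many more factors:

```python
def looks_like_pyramid(product_revenue, recruit_revenue, threshold=0.5):
    """Flag a program when most revenue comes from recruiting fees
    rather than real product sales (a simplified version of the test)."""
    total = product_revenue + recruit_revenue
    if total == 0:
        return True  # no revenue at all is its own red flag
    return recruit_revenue / total > threshold

# A firm earning $80k from products and $20k from sign-up fees:
print(looks_like_pyramid(80_000, 20_000))  # False: mostly product revenue
# A firm earning $10k from products and $90k from recruit buy-ins:
print(looks_like_pyramid(10_000, 90_000))  # True: mostly recruit money
```

The 50% threshold is arbitrary; the point is the ratio, not the cutoff.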

Naturally, firms that run pyramid schemes will often try to convince recruits that they are, in fact, a legal multi-level marketing company. And since the firm’s very existence depends on doing so, many sales pitches you hear will be very aggressive and persuasive. So never rush into any arrangement with such a firm, particularly if there’s a large cost to buy into the program.

The SEC has a handy checklist of signs that you might be looking at a pyramid scheme:

No genuine product or service. MLM programs involve selling a genuine product or service to people who are not in the program. Exercise caution if there is no underlying product or service being sold to others, or if what is being sold is speculative or appears inappropriately priced.

Promises of high returns in a short time period. Be leery of pitches for exponential returns and “get rich quick” claims. High returns and fast cash in an MLM program may suggest that commissions are being paid out of money from new recruits rather than revenue generated by product sales.

Easy money or passive income. Be wary if you are offered compensation in exchange for little work such as making payments, recruiting others and placing advertisements.

No demonstrated revenue from retail sales. Ask to see documents, such as financial statements audited by a certified public accountant (CPA), showing that the MLM company generates revenue from selling its products or services to people outside the program.

Buy-in required. The goal of an MLM program is to sell products. Be careful if you are required to pay a buy-in to participate in the program, even if the buy-in is a nominal one-time or recurring fee (e.g., $10 or $10/month).

Complex commission structure. Be concerned unless commissions are based on products or services that you or your recruits sell to people outside the program. If you do not understand how you will be compensated, be cautious.

Emphasis on recruiting. If a program primarily focuses on recruiting others to join the program for a fee, it is likely a pyramid scheme. Be skeptical if you will receive more compensation for recruiting others than for product sales.

Even if a firm you are considering passes the pyramid test, that doesn’t mean you should get involved. “Legal” doesn’t necessarily mean it’s a good idea. Some perfectly legal MLMs may be a bad idea for you. For starters, these jobs in the end are sales jobs, and in many cases, something akin to door-to-door sales. You’ll make money by doing the hard job of convincing people to buy a product. You might make additional money by convincing friends and family to also sell that product, but you might also lose friends and family that way. And, in the end, you’ll be relying on their sales abilities, too.

Even with an MLM, it’s important that you believe you can earn decent money for bills, like a mortgage or car payment, simply from selling the product yourself. If the real money that has your attention comes only from recruitment — your “downline” in industry lingo — you are likely to be disappointed. The field of friends you can recruit shrinks very fast.

Most of all, if you wouldn’t buy the product yourself, then you probably shouldn’t try selling it.


Scroll down for a clickable version of this map.


If you want to know why home prices are out of whack in America, look at our lousy schools.

One oft-overlooked element of home pricing is the scarcity of good public schools in America.  Overlooked by everyone except young families, of course, who know quite well how much good “free” schools cost.

In plenty of American towns and cities, parents feel the public schools aren’t good enough for their kids, so they spend anywhere from $5,000 to $25,000 annually — per child — on a private school. Quick math will tell you that a home in a good school district is worth its weight in gold. Econ 101 tells you that home prices in districts with good schools soar as parents try to fight their way in.

Here’s another big reason American families are restless. They must navigate a terribly unfair Catch-22 — either they stretch their budgets to afford a home near good schools; or buy a more affordable home and pay for private school.

Now I have some data to prove it, courtesy of the folks at RealtyTrac. The firm analyzed school test scores for nearly 27,000 elementary schools in more than 7,200 U.S. zip codes, along with home price affordability in those same zip codes. (Good schools were derived from Department of Education data, and ranked at least one-third higher than average on test scores.) The result: surprise! In two-thirds of zip codes that had at least one “good” school, average wage earners would be forced to spend more than one-third of their income to buy a median-priced home. Or as I like to say, average people with average incomes can’t afford average homes, a sign of a broken market. Meanwhile, the median sales price of homes in areas with good schools was DOUBLE the price of homes not near good schools ($411,573 vs. $210,662).
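To make the one-third rule concrete, here is a rough sketch of the affordability screen in Python. The two median prices come from the numbers above; the $60,000 wage, 4% rate and 30-year term are my own assumptions, and the payment math ignores taxes, insurance and down payments:

```python
def is_affordable(median_price, annual_wage, rate=0.04, years=30, cap=1/3):
    """True when the mortgage payment on a median-priced home takes no
    more than `cap` of gross monthly income (simplified: no taxes,
    insurance or down payment)."""
    r = rate / 12            # monthly interest rate
    n = years * 12           # number of payments
    # standard fixed-rate amortization formula
    payment = median_price * r / (1 - (1 + r) ** -n)
    return payment <= (annual_wage / 12) * cap

# The article's two median prices, with an invented $60,000 wage:
print(is_affordable(210_662, 60_000))  # True:  ~$1,014/mo vs. a $1,667 cap
print(is_affordable(411_573, 60_000))  # False: ~$1,981/mo busts the cap
```

Change the wage and you can see how quickly the good-school premium prices out an average earner.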

Below, you’ll see a chart of 10 zip codes where home prices are low relative to wages, but there are good schools nearby. Below that is a clickable, national map that you can use to explore the data yourself.  If for some reason you have trouble loading the map, click here.

[Chart: affordable zip codes with good schools]

*Includes zips with at least one good school (defined as having a 2014 test score at least one-third higher than the state average) and with affordable homes (where buying a median priced home requires one-third or less of the average wage-earner’s income). Only most affordable zip code from each state is listed. States with no good schools in affordable zip codes or insufficient home price data were not included. Sources: RealtyTrac, BLS, State Depts. of Education.

Like what you are reading? Support me by signing up for my email newsletter or clicking on an advertisement and patronizing a sponsor.


Chart derived from data provided by Berkeley Terner Center for Housing Innovation.


Life in up-and-coming cities like Raleigh, Jacksonville and Memphis might not be quite as inexpensive as you’ve heard. At least not if you are a renter.  It’s another reason you might feel Restless.

Recently, Credit.com examined markets in America with the highest (and lowest) rates of housing-poor residents. Generally, analysts describe residents as housing poor if they spend more than 30% of their income on housing. We generated top 10 and bottom 10 city lists of housing-poor residents using data from the U.S. Census Bureau via the Berkeley Terner Center for Housing Innovation. Cities at the top of the list — where the most residents are housing poor — were no surprise: Los Angeles, New York, Miami. Cities at the bottom of the list were a bit predictable, too. They included Raleigh, N.C.; Columbus, Ohio; and Buffalo, N.Y. This is a familiar story: America’s “second-tier” cities are where the affordable housing is.

Scratch the surface, however, and things aren’t quite so clear.

(This story first appeared on Credit.com. Read it there.)

The census data also includes information on how many renters vs. homeowners are housing poor. When you isolate renters, you find that many of these affordable cities aren’t that affordable. If half a city’s renters are housing poor, it’s hard to call that place inexpensive.

In every city studied, renters were more likely than homeowners to be housing poor. That makes sense. A certain portion of homeowners have paid off their mortgages. And people deep into their mortgage terms, thanks to inflation, see their payments essentially shrink over time, so they are far less likely to spend a big chunk of their paycheck on housing.

But the discrepancy between housing-poor renters and buyers makes for interesting reading. In some places, renters are more than twice as likely to be housing poor; in other places, the rate is far lower. We’re calling this gap between housing-poor renters and owners the “rental penalty.” If you live in places with a big rental penalty, odds that your financial life is under serious stress are far higher when renting.

When we ranked cities this way, there were several surprises. Rochester, N.Y., had the highest rental penalty. There, only 22% of owners are housing poor, while 54% of renters are. Also in the top five are Jacksonville, Fla., and Raleigh — both considered inexpensive escapes for big-expensive-city dwellers. In Raleigh, 19.8% of owners are housing poor, but 48.1% of renters are.

Pittsburgh appears to be the most affordable place to live in the simple list from the other day, and it survives this rental penalty analysis, landing roughly in the middle of the pack. So do Columbus, Oklahoma City and Louisville. But Buffalo and Memphis, Tenn., don’t. There, renters are more than twice as likely to be housing poor.

Generalizations about any housing market are fraught with peril, as all housing is intensely local, so it’s important to note that the “rental penalty” that shows up in the data could be caused by many factors.

“I suspect there are multiple reasons for why the difference in being ‘cost-burdened’ between renters and owners would be higher in some local markets,” said Jed Kolko, who called attention to the data in his report on highlights from the recently-released 2014 Census data for the Berkeley Terner Center. “One reason is that markets with long-term residents would have fewer cost-burdened households because they bought their home long ago and might not have [a] mortgage any longer. Another is that in markets where housing generally is very expensive, renting is more common for middle- and higher-income households, who may look less different than owners relative to other cities.”

On the other hand, some cities that showed up as expensive for renters didn’t surprise Daren Blomquist, vice president of RealtyTrac. In those markets, investors are scooping up homes and renting them, he said, putting price pressure on both first-time homebuyers and renters alike.

“Raleigh and Jacksonville stand out as hot spots for investors,” he said. “These are markets that on face value are fairly affordable, but you have these situations where institutional investors are creating more demand … they are willing and able to pay more than first-time homebuyers, who are not able to compete, so they stay as renters. Then rental costs go up.”

In a normal market, as rental prices rise, more people would make the leap to owning, putting the markets back into balance. But that’s not happening in some areas because outside investors are altering that delicate balance.

What Can Consumers Do?

The rental penalty data suggests two lessons for consumers. First, moving to Buffalo or Raleigh is not cheap for everyone. A New York City dweller trading apartment rent for a mortgage payment will probably feel like either place is Shangri-La, but someone who moves there to rent might end up feeling just as housing poor as they did in the big city they left.

Second, if you are already renting in a market with a big rental penalty, it’s probably a good idea to redouble your efforts to buy. (You may want to check your credit before you apply for a mortgage, though, since a good credit score will qualify you for better terms and conditions. You can get your free credit report summary on Credit.com to see where your credit stands.) For example: Blomquist and RealtyTrac regularly generate “rent vs. buy” data, and in Jacksonville, residents spend 36% of their income on rent, while a new home can be purchased with a mortgage payment that is only 21% of income.

“There may be some hesitancy on the part of a person moving there. They don’t know for sure it’s a long-term move,” he said. But the combination of relatively high rental prices and a strong job market can make such markets a smart buy. “Even if they are only going to be there for five years.”
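The 30% threshold and the Jacksonville shares above can be checked with a few lines of Python; the $4,000 monthly income is invented for illustration:

```python
def housing_poor(monthly_cost, monthly_income):
    """'Housing poor' = spending more than 30% of income on housing."""
    return monthly_cost / monthly_income > 0.30

income = 4_000              # hypothetical gross monthly income
rent = 0.36 * income        # Jacksonville renter share cited above
mortgage = 0.21 * income    # mortgage share on a newly purchased home

print(housing_poor(rent, income))      # True: the renter is housing poor
print(housing_poor(mortgage, income))  # False: the buyer is not
```

The income cancels out of the comparison, which is the point: in a market like this, the same paycheck lands on opposite sides of the housing-poor line depending on whether you rent or buy.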



Read all the Cynja series


Click image above to watch video.


I talked to NBC’s Pete Williams yesterday about the encryption debate that has raged since the Paris terror attacks. The debate got additional legs when New York District Attorney Cyrus Vance released a white paper Wednesday arguing that Apple and Google should be forced to include backdoors in their smartphones. His argument: If cops can get wiretap orders, why can’t they ask Silicon Valley firms to access phones loaded with critical evidence?

As I said to Pete, we are struggling with a war of imprecise metaphors that cloud the discussion, and it’s really hard to reduce the discussion to a sound bite.  After all, the urge to stop terror attacks is a pretty powerful sound bite.

Fortunately, many in the technology community have stepped forward with well-reasoned arguments explaining why forcing U.S. companies to break their own encryption would make us less safe, not more. If you really care about the discussion, read this paper from July authored by a set of technology experts called “Keys Under Doormats.”

If that’s a bit detailed for you, I wrote a straightforward piece earlier this week outlining the arguments. But in essence, what I say in the video above is pretty simple. If someone can break the encryption, anyone can. If Apple can be forced to unlock phones by the Justice Department, Apple can be forced to unlock phones by the Chinese government, too. And there’s a high likelihood that criminals will eventually figure out how to use this backdoor. Encryption with a hole isn’t encryption any longer. Meanwhile, criminals will just use rogue encryption products. They’ll actually gain the upper hand.

It’s important to note that, after an initial flurry of stories blaming encryption for this attack, all evidence so far points to the Paris criminals using old-fashioned plain text and face-to-face conversations to communicate. (See this Techdirt.com piece.) Government backdoors wouldn’t have stopped this attack. Better policing might have. French authorities have lamented that they already have too much information to deal with and not enough analysts. Inevitably, what we discover after each attack is that it could have been prevented, that the information needed to do so was already at hand — but it was missed.

Everyone wants technology to be a silver bullet and solve all our problems.  Sadly, it won’t. It’s not magic.  It’s often snake oil.

Here’s a thought experiment: Imagine if terrorist attacks led to outraged cries for more human resources to protect us.




Hey parents! You won’t believe the contracts your kids have been roped into.

Like a fine-print virus spreading quickly around the globe, teenagers are suddenly being shrink-wrapped into contracts of dubious enforceability all around the web. The situation highlights a conundrum for companies targeting the 13-17 crowd: how to set rules with minors who generally can’t actually consent to contract terms, and almost certainly don’t get their parents’ permission to do so.

Snapchat changed its terms of service recently, attracting a lot of attention. While most of it was focused on the company giving itself virtual ownership over content posted on the service, something else in the terms caught my eye.

“By using the Services, you state that: You can form a binding contract with Snapchat—meaning that if you’re between 13 and 17, your parent or legal guardian has reviewed and agreed to these Terms.”

Well, really it caught privacy lawyer Joel Winston’s eye. He called it to my attention.

Let me take a guess and estimate that of Snapchat’s roughly 100 million users, most of them minors, perhaps 43 or so have shown those terms to their agreeable parents.  In other words, if your kid uses Snapchat, he or she has almost certainly lied about you to the company, all in the name of forming a contract – of sorts.

Winston had a different problem with the language.

“A minor cannot declare herself competent to sign a binding contract that would otherwise require consent from an adult,” he said.  There are some exceptions to that, which we’ll get to.  But the headline point remains.  Generally speaking, contracts with minors aren’t really contracts.

So what’s this language doing in Snapchat’s terms of service? It’s not just Snapchat. That very language appears in lots of kid-focused services, like Skout (a flirting tool), THQ (a game site), and even PETAkids.com (an animal rights site). Similar terms appear across the Web.

Snapchat certainly is a leader in the 13-17 space, however.  I asked the firm to comment about its terms.  It declined.

When I ran Snapchat’s terms by Ira Rheingold, executive director at the National Association of Consumer Advocates, he was aghast.

“Why did they do this, to frighten people into not suing them?” he said, rhetorically. “I cannot imagine any court would find this binding. No lawyer worth his salt would think this is going to stick…a youngster cannot consent.”

Maybe…and maybe not. Last year, a California court actually did rule that, in some circumstances, terms of service are enforceable against minors. That case involved Facebook’s use of member photos in “Sponsored Stories.” Facebook’s terms at the time provided for what amounted to a publicity rights release, and the plaintiffs in the case argued that release was unenforceable. A judge sided with Facebook.

To put a fine point on it, minors can agree to certain kinds of contract terms (those that allow them to work, for example), but such contracts have a unique status and can be voided at any time by the minor. Because the plaintiffs in the case continued to use Facebook, they had not voided their contract, and therefore Facebook was protected by the agreement.

“This is a big win for all online services, not only Facebook,” wrote Eric Goldman in a blog post about the case.

The situation highlights the unique problem of dealing with children over 13 but under 18, Goldman said to me.

“Snapchat may have legally enforceable contracts with minors. Contracts with minors are usually ‘voidable,’ meaning that the minor can tear up the contract whenever he/she wants. However, until the minor disaffirms, the contract is valid. And in the case of social networking services, the courts have indicated that minors can disaffirm the contract only by terminating their accounts, meaning that the contract remains legally binding for the entire period of time the minor has the account,” he said. “As a contracts scholar, I can understand the formalist logic behind this conclusion, but it conflicts with the conceptual principle that minors aren’t well-positioned to protect their own interests in contract negotiations.”

On the other hand, the solution might be worse than the problem itself.

“The counter-story is that most online services don’t have any reliable way to determine the age of their users, and an adhesion contract that works unpredictably on only some classes of users isn’t really useful. And I don’t think anyone would favor web-wide ‘age-gating’ as the solution to that problem,” he said.

Of course, the problem isn’t just the existence of a contract, but what the terms of that contract might be, and whether a minor is capable of understanding and consenting to its terms.  Winston is concerned with what comes after the “parental promise” section in Snapchat’s contract: a binding arbitration agreement and class action waiver. (That’s the kind of waiver the Consumer Financial Protection Bureau is about to ban.)

“All claims and disputes arising out of, relating to, or in connection with the Terms or the use the Services that cannot be resolved informally or in small claims court will be resolved by binding arbitration,” the terms say. “ALL CLAIMS AND DISPUTES WITHIN THE SCOPE OF THIS ARBITRATION AGREEMENT MUST BE ARBITRATED OR LITIGATED ON AN INDIVIDUAL BASIS AND NOT ON A CLASS BASIS.” (Snapchat’s CAPS, not mine)

As Winston sees it, not only is Snapchat requiring minors to agree to a contract, it’s requiring them to surrender their rights to have their day in court.

“I would certainly be very interested to read any legal ruling that enables a 13 year old to agree that she will ‘waive any constitutional and statutory rights to go to court and have a trial in front of a judge or jury,’ “ he said, echoing the terms.  “I am not currently aware of any case law that enforces a mandatory binding arbitration clause against an adult parent based on the purported ‘consent’ of her minor child.”

Were those terms to survive a court challenge, and if Snapchat tried something like Sponsored Stories, Snapchat’s minor users would have waived their rights to join a class action against the firm.

In the end, you might be wondering why parents – or kids – would want to argue with Snapchat anyway. Winston leaps at the chance to answer that.

“The Snapchat TOS contract is relevant because the company is actively collecting personal data from millions of children. That includes device phonebook, camera and photos, user location information (from) GPS, wireless networks, cell towers, Wi-Fi access points, and other sensors, such as gyroscopes, accelerometers, and compasses,” he said. “It’s also relevant because Snapchat is sharing user data from millions of children with third-parties and business partners for the purpose of advertising and monetization.”

I’m not one to give parents more homework, and I hesitate to advise you to try to read all the terms of service agreements to every app on your child’s phone.  But it might be a good learning moment to ask your kids what they’ve told tech companies about you — and find out what you’ve agreed to.


Expect to see many, many more news stories like this.


It’s natural to look for a scapegoat after something terrible happens, like this: If only we could read encrypted communications, perhaps the Paris terrorist attacks could have been stopped.  It’s natural, but it’s wrong.  Read every story you see about Paris carefully and look for evidence that encryption played a role.

There’s a reason The Patriot Act was passed only a few weeks after 9-11, and it wasn’t because Congress was finally able to act quickly and efficiently on something.  The speed came because many elements of the Patriot Act had already been written, and forces with an agenda were sitting in wait for a disaster so they could push that agenda.  That is wrong.

So here we are now, once again faced with political opportunism after an unthinkable human tragedy, and we must remain strong in the face of it.  There is no simple answer to terrorism, and we should all know this by now.  And so there must be no simple discussion about the use of encryption in the Western world.  The debate requires a bit of thoughtful analysis, and we owe it to everyone who ever died for a free society to have this debate thoughtfully.

The basics are this: Only recently, computing power has become inexpensive enough that ordinary citizens can scramble messages so effectively that even governments with near-infinite resources cannot crack them. Such secret-keeping powers scare government officials, and for good reason.  They can, theoretically, allow criminals and terrorists to communicate with a cloak of invisibility.  Not surprisingly, several government officials have called for a method that would allow law enforcement to crack these codes.  There are many schemes for this, but they all boil down to something akin to creating a master key that would be generated by encryption-making firms and given to government officials, who would use the key only after a judge granted permission.  This is sometimes referred to as creating “backdoors” for law enforcement.
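To see why a single escrowed master key worries technologists, here is a toy model in Python. The XOR “cipher” is deliberately not real cryptography, and the function names are my own; the point is only that every message carries a second copy readable by whoever holds the one escrow key:

```python
import secrets

def xor_bytes(data, key):
    # Toy cipher: XOR against a repeating key. Illustration only --
    # never use this for real secrecy.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

ESCROW_KEY = secrets.token_bytes(16)   # the single master key

def encrypt_with_backdoor(message, user_key):
    # Each message ships with a second copy sealed to the escrow key.
    return {
        "for_recipient": xor_bytes(message, user_key),
        "for_escrow": xor_bytes(message, ESCROW_KEY),
    }

msg = b"meet at noon"
ciphertexts = encrypt_with_backdoor(msg, secrets.token_bytes(16))

# Whoever obtains ESCROW_KEY -- court order, leak, or hack -- reads
# every user's traffic, not just one suspect's:
print(xor_bytes(ciphertexts["for_escrow"], ESCROW_KEY))  # b'meet at noon'
```

Note the asymmetry: compromising one user’s key exposes one user, but compromising the escrow key exposes everyone, forever.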

Governments can already listen in on telephone conversations after obtaining the proper court order.  What’s the difference with a master encryption key?

Sadly, it’s not so simple.

For starters, U.S. firms that sell products using encryption would create backdoors, if forced by law.  But products created outside the U.S.?  They’d create backdoors only if their governments required it.  You see where I’m going. There will be no global master key law that all corporations adhere to.  By now I’m sure you’ve realized that such laws would only work to the extent that they are obeyed.  Plenty of companies would create rogue encryption products, now that the market for them would explode.  And of course, terrorists are hard at work creating their own encryption schemes.

There’s also the problem of existing products, created before such a law. These have no backdoors and could still be used. You might think of this as the genie out of the bottle problem, which is real. It’s very,  very hard to undo a technological advance.

Meanwhile, creation of backdoors would make us all less safe. Would you trust governments to store and protect such a master key? Managing defense of such a universal secret-killer is the stuff of movie plots. No, the master key would most likely get out, or the backdoor would be hacked. That would mean illegal actors would still have encryption that worked, but the rest of us would not. We would be fighting with one hand behind our backs.

In the end, it’s a familiar argument: disabling encryption would only stop people from using it legally. Criminals and terrorists would still use it illegally.

Is there some creative technological solution that might help law enforcement find terrorists without destroying the entire concept of encryption? Perhaps, and I’d be all ears. I haven’t heard it yet.

Only a few weeks after 9-11, a software engineer who told me he was working for the FBI contacted me and told me he was helping create a piece of software called Magic Lantern.  It was a type of computer virus, a Trojan horse keylogger, that could be remotely installed on a target’s computer and steal passphrases used to open up encrypted documents.  The programmer was uncomfortable with the work and wanted to expose it. I wrote the story for msnbc.com, and after denying the existence of Magic Lantern for a while, the FBI ultimately conceded using this strategy.  While we could debate the merits of Magic Lantern, at least it constituted a targeted investigation — something far, far removed from rendering all encryption ineffective.

For a far more detailed examination of these issues, you should read Kim Zetter at Wired, as I always do. Then make up your own mind.

Don’t let a politician or a law enforcement official with an agenda make it for you. Most of all, don’t allow someone who capitalizes on tragedy a mere hours after the first blood is spilled — an act so crass it disqualifies any argument such a person makes — to influence your thinking.


The FBI is developing software capable of inserting a computer virus onto a suspect’s machine and obtaining encryption keys, a source familiar with the project told MSNBC.com. The software, known as “Magic Lantern,” enables agents to read data that had been scrambled, a tactic often employed by criminals to hide information and evade law enforcement. The best snooping technology that the FBI currently uses, the controversial software called Carnivore, has been useless against suspects clever enough to encrypt their files.

Magic Lantern installs so-called “keylogging” software on a suspect’s machine that is capable of capturing keystrokes typed on a computer. By tracking exactly what a suspect types, critical encryption key information can be gathered, and then transmitted back to the FBI, according to the source, who requested anonymity.

The virus can be sent to the suspect via e-mail — perhaps sent for the FBI by a trusted friend or relative. The FBI can also use common vulnerabilities to break into a suspect’s computer and insert Magic Lantern, the source said.

Magic Lantern is one of a series of enhancements currently being developed for the FBI’s Carnivore project, the source said, under the umbrella project name of Cyber Knight.

Mentioned in unclassified documents
The FBI released a series of unclassified documents relating to Carnivore last year in response to a Freedom of Information Act request filed by the Electronic Privacy Information Center. The documentation was heavily redacted — most information was blacked out. They included a document describing the “Enhanced Carnivore Project Plan,” which was almost completely redacted. According to the anonymous source, redacted portions of that memo mention Cyber Knight, which he described as a database that sorts and matches data gathered using various Carnivore-like methods from e-mail, chat rooms, instant messages and Internet phone calls. It also matches the files with the necessary encryption keys.

MSNBC.com repeatedly contacted the FBI to discuss this story. However, after three business days the FBI was still requesting more time before commenting. MSNBC.com has filed a Freedom of Information Act request with the bureau.

Word of the FBI’s new software comes on the heels of a major victory for the use of Carnivore. The USA Patriot Act, passed last month, made it a little easier for the bureau to deploy the software. Now agents can install it simply by obtaining an order from a U.S. or state attorney general — without going to a judge. After-the-fact judicial oversight is still required.

FBI has already stolen keys
If Magic Lantern is in fact used to steal encryption keys, it would not be the first time the FBI has employed such a tactic. Just last month, in an affidavit filed by Deputy Assistant Director Randall Murch in U.S. District Court, the bureau admitted using keylogging software to steal encryption keys in a recent high-profile mob case. Nicodemo Scarfo was arrested last year for loan sharking and running a gambling racket. During their investigation, Murch wrote in his affidavit, FBI agents broke into Scarfo’s New Jersey office and installed encryption-key-stealing software on the suspect’s machine. The key was later used to decrypt critical evidence in the case.

Magic Lantern would take the method used in Scarfo one step further, allowing agents to “break in” to a suspect’s office and install keylogging software remotely. But in both cases, the software works the same way.

It watches for a suspect to start a popular encryption program called Pretty Good Privacy. It then logs the passphrase used to start the program, essentially giving agents access to the keys needed to decrypt files.

Strong encryption keys are effectively unbreakable by brute force, but the keys themselves are protected only by the passphrase used to start the Pretty Good Privacy program, similar to a password used to log on to a network. If agents can capture that passphrase as the computer’s owner types it, they can obtain the suspect’s encryption key — similar to obtaining the key to a lock box that contains a piece of paper with the combination to a safe.
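The lock-box analogy can be made concrete with a short, purely illustrative sketch. The names, the salt, and the XOR “wrapping” scheme below are all invented for demonstration — this is not PGP’s actual key-storage format — but the relationship is the same: the key space is hopeless to search, while the passphrase unlocks it directly:

```python
import hashlib

secret_key = bytes(range(32))  # a 256-bit key: brute-forcing 2**256 values is hopeless

def wrap(key, passphrase):
    # Derive a pad from the passphrase and XOR it over the key ("lock the box").
    pad = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), b"demo-salt", 100_000)
    return bytes(a ^ b for a, b in zip(key, pad))

def unwrap(wrapped, passphrase):
    return wrap(wrapped, passphrase)  # XOR with the same pad unlocks it

stored = wrap(secret_key, "my secret passphrase")

# Anyone who observes the passphrase recovers the key immediately...
assert unwrap(stored, "my secret passphrase") == secret_key
# ...while a wrong guess yields garbage, not the key.
assert unwrap(stored, "wrong guess") != secret_key
```

In other words, the mathematically strong key is only as safe as the human-typed passphrase standing in front of it — which is exactly what keylogging targets.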

Breaking new ground
David Sobel, attorney for the Electronic Privacy Information Center and outspoken critic of Carnivore, did not outright reject the notion of a Magic-Lantern-style project, but raised several cautions.

“This is breaking new ground for law enforcement, to be planting viruses on target computers,” Sobel said. “It raises a new set of issues that neither Congress nor the courts have ever dealt with.”

Stealing encryption keys could be touchy ground for federal investigators, who have always fretted openly about encryption’s ability to help criminals and terrorists hide their work. During the Clinton administration, the FBI found itself on the losing side of a lengthy public debate about the federal government’s ability to circumvent encryption tools. The most recently rejected proposal involved so-called key escrow, under which all encryption keys would have been stored by the government for emergency recall.

Levels playing field with criminals
A spokesperson for Rep. Dick Armey (R-Texas) said the congressman thought Magic Lantern, as described to him by MSNBC.com, was considerably more palatable than key escrow.

“Citizens should have the ability to keep their files and e-mails safe from bureaucratic prying eyes. But this would only be usable against a limited set of people. It’s not as troubling as saying the government should have all the keys,” said the Armey spokesperson. He also said Magic Lantern didn’t raise the same Fourth Amendment concerns regarding search and seizure as Carnivore, because Magic Lantern apparently targets one suspect at a time. Armey, an outspoken Carnivore critic, has complained about the potential for the FBI’s Internet sniffing software to capture too much data as packets fly by headed for a suspect — known in the legal world as an “overly broad” search.

Sobel was concerned that the keylogging software itself could result in overly broad searches, since it would be possible to observe every keystroke entered by a suspect, even if a court order specified a search only for encryption keys. Developers in the Scarfo case went to some trouble to limit the data stored by the keylogging software installed on Scarfo’s computer, shutting the system on and off in an attempt to comply with the court order, according to Murch’s affidavit. But given the confusion surrounding keylogging and encryption, and the mystery surrounding projects like Carnivore, Sobel said he’s worried about the bureau’s use of software that hasn’t been clearly explained to the public or the Congress.

“It is a matter of what protections are in place. At this point, the best documented case is Scarfo, and that raises concerns,” he said. “The federal magistrate who approved the technology in Scarfo had no understanding of what this thing was. I hope there can be meaningful oversight (for Magic Lantern).”





Click to read the complaint (PDF)


From the file of, “Thank God someone finally did something about this,” an alleged tech support scam has been shut down by the Federal Trade Commission and two state attorneys general.

By now, you are well versed in the scam, which I “fell” for more than a year ago: One way or another, a victim ends up on the phone with a telemarketer, who points out some suspicious-looking files on the victim’s computer and then sells a “service” to fix the problem for hundreds of dollars. The problems are just normal Windows operations, of course, and the service is bogus. (I blame Microsoft, in part, for this, as Windows freaks people out on its own.)

On Friday, the FTC and officials from Pennsylvania and Connecticut announced they had convinced a judge to shut down one such operation that had tricked consumers into collectively paying $17 million.

“We’re pleased the court shut down these scammers, who defrauded consumers out of millions of dollars by preying on their lack of technical expertise,” said Jessica Rich, director of the FTC’s Bureau of Consumer Protection. “Our goal is now to get money back for the victims in this case, and keep the defendants out of the scam tech support business.”

The company involved in the FTC complaint went by several names: Click4Support, LLC; iSourceUSA LLC; and UBERTECHSUPPORT. In this case, consumers were drawn to the service via online ads and pop-ups.

“Consumers who responded to the phony ads were routed to a call center operated by the defendants, where telemarketers would frequently misrepresent that they were ‘a Microsoft agent,’ ‘Google support,’ or ‘work with AT&T,’ among other affiliation claims,” the FTC said. “The telemarketers would then convince consumers to give them remote access to their computers, navigate to harmless portions of the computer, such as the Windows Event Viewer, and mislead consumers into thinking their computer was infected with viruses and malware.”

Naturally, Windows Event Viewer is full of routine error messages, but that’s enough to convince unfamiliar consumers that something is wrong.  Then, the big sell comes. A telemarketer pitches a monthly service — with a price tag that might ultimately rise to thousands of dollars.

“The purported services include, among other things, correcting error and warning messages, installing security software, cleaning up the computer of adware, malware, and spyware, performing a ‘tune up’ or ‘optimization’ of the computer, restarting Microsoft services and reinstalling drivers, creating a backup of the computer, and promising to provide continuous monitoring of the computers and round-the-clock support,” the FTC said.

The complaint in the case alleges that the defendants violated the FTC Act, the Telemarketing Sales Rule, the Connecticut Unfair Trade Practices Act, and the Pennsylvania Unfair Trade Practices and Consumer Protection Law.

It’s important to note that the style of attack is a bit different from the one I fell for, so while this operation has been shut down for now, others will almost certainly continue. After all, $17 million is a lot of money; as long as people keep falling for a scam, scammers will try it.  So don’t let down your guard.  If someone you don’t know offers you technical support on your PC, just don’t click.