Category Archives: InfoSec

Data Privacy Day and Practical Online Security

Today is Data Privacy Day, where we bow our heads and give thanks to the benevolent corporations that so closely guard all of our data. Without these titans of industry, data breaches would be routine and your private accounts could be accessed by nefarious hackers wearing ski masks.

don’t let this guy win

But just in case you don’t feel these companies always have your best interests in mind, there are a few simple things you can do to protect yourself online. Obviously this is not a comprehensive list and will not protect you against all adversaries, but you’ve gotta start somewhere.


Put a passcode on your phone, seriously. If you’ve followed the “debate” over the use of encryption in iPhones and Android devices, you know that certain groups (like the FBI and local law enforcement agencies) are very upset that encryption is now the default on modern devices. Encryption means that your data should be inaccessible to anyone who is not you, but it does you no good unless you enable it with a passcode. If you don’t want to use a passcode, don’t bother reading the rest of this post: anyone with physical access to your phone can get at the data inside.

Additionally, if you feel like you could be in a situation where you could be physically coerced into unlocking your phone, turn off the fingerprint or face-recognition unlocking features. Don’t reveal your password to anyone without something that’s been signed by a judge.

Also, pick a GOOD passcode. Don’t pick 1234, 0000, 2580, or your birthdate. And just like a password, don’t tell it to ANYONE. Not your lover, not your boss, not your pastor. Also, wipe your phone’s screen regularly because I can probably guess your passcode based on the Dorito cheese your greasy fingers leave behind.

Password Manager

Speaking of passwords, you should never reuse them! If you use the same password on your Google, Facebook, and Amazon accounts, anyone who guesses that single password has access to all those accounts.

I recommend using a password manager to keep track of all these things. The way a password manager works is that you remember one master password, which is used to unlock an encrypted database of the passwords you use on other sites.
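Under the hood, a manager stretches that master password into an encryption key using a deliberately slow key-derivation function. The function name below is mine and the details are simplified (real managers layer authenticated encryption and more elaborate key schedules on top), but the core idea looks something like this in Python:

```python
import hashlib
import os

def derive_vault_key(master_password: str, salt: bytes,
                     iterations: int = 600_000) -> bytes:
    # PBKDF2 runs the hash hundreds of thousands of times, so an
    # attacker who steals the encrypted vault pays a steep cost for
    # every master-password guess.
    return hashlib.pbkdf2_hmac(
        "sha256", master_password.encode("utf-8"), salt, iterations
    )

# The random salt is stored alongside the vault; only the master
# password stays in your head.
salt = os.urandom(16)
key = derive_vault_key("correct horse battery staple", salt)
assert len(key) == 32  # a 256-bit key for the vault cipher
```

The derived key, not the master password itself, is what unlocks the encrypted database, which is why losing the master password makes the vault unrecoverable by design.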

I personally use 1Password, which costs money (though I think there’s a free trial); LastPass is a free alternative. Both can generate new secure passwords for you when you sign up for a new site, but all you need to remember is the master password. Both options have browser extensions and mobile apps, which reduce the hassle of using passwords more securely.
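The “generate a secure password” feature is the simple part: pull characters from a cryptographically secure random source. A minimal sketch (my own illustration, not either product’s actual code):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    # secrets draws from the OS's cryptographic RNG; the ordinary
    # random module is NOT suitable for this.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a fresh 20-character password every run
```

Since you never have to type these generated passwords yourself, there is no reason to make them short or memorable.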

Install Signal

If you have a smartphone, this is a necessity. It’s currently the most secure text-messaging app on the market, and it’s free. Messages between you and other Signal users will be encrypted, so even an adversary using IMSI-catchers (aka Stingrays; when they’re in planes they’re sometimes called Dirtboxes) won’t be able to view them.

Of course, using Signal does not make a conversation secure unless the other person has it installed too; the app indicates whether the other party does. You can also use the app to make secure phone calls with other Signal users.

Apple’s iMessage also provides fairly good security, in that it encrypts your conversations, but it only works for conversations between iPhone users.

Enable Two-Factor Authentication on Everything

This is probably the most “cumbersome” step but will also provide the greatest security against attempts to access your accounts. It’s called two-factor authentication (sometimes multi-factor authentication) and the basic idea is that it should take more than just a username and a password to log in to an account. Since a username and password are things you know, we want to require something else to prove your identity. Typically this is something you have (like a smartphone) or something you are (like a fingerprint).

By enabling two-factor authentication, the next time some masked hacker guesses your username and password for a website, the site will send a verification code to an app on your phone or as a text message to you. Without that code, they won’t be able to log in and see all your secret messages and cat pictures! However, you’ll need to go through some configuration steps to enable this. I recommend starting by enabling two-factor authentication on your Google account first.

If you are able, I suggest installing Google Authenticator (or Authy) on your phone rather than getting verification codes via text message. Not all services support two-factor authentication, and some only send codes as text messages rather than working with Google Authenticator. Here is a handy chart of sites that support it – I recommend enabling it on all that you can, particularly Facebook (where it’s called “Login Approvals”) and Twitter.
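Authenticator apps implement TOTP (RFC 6238): your phone and the service share a secret key, and each independently derives a short-lived code from that key and the current time. As a rough sketch (not something you should roll yourself for production), the whole algorithm fits in a few lines of Python’s standard library:

```python
import hmac
import struct

def totp(secret: bytes, timestamp: int, digits: int = 6, step: int = 30) -> str:
    # The moving factor is the number of 30-second steps since the epoch,
    # so both sides agree on the code without talking to each other.
    counter = struct.pack(">Q", timestamp // step)
    mac = hmac.new(secret, counter, "sha1").digest()
    # Dynamic truncation: the low nibble of the last byte picks an offset.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238's published test vector: this secret at timestamp 59
# yields "94287082" when eight digits are requested.
print(totp(b"12345678901234567890", 59, digits=8))
```

Google Authenticator is essentially this computation, with the shared secret delivered via QR code when you enroll; no network connection is needed to produce codes, which is why it keeps working in airplane mode.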

I’m sure I forgot something, so feel free to ask questions or drop knowledge in the comments. I’m available to give presentations and assist with security at a discounted rate (if I like you), or at my usual hourly rate (if I have no idea who you are). Stay safe out there!

CISA is a terrible cybersecurity law

In what has become an annual tradition, Congress has renewed their efforts to pass some type of cybersecurity legislation. For the past four years, privacy advocates and security experts have consistently opposed these bills due to inadequate protections of American civil liberties, and this year’s offering, the Cybersecurity Information Sharing Act (CISA), is no exception.

CISA greatly expands the scope of government surveillance at the expense of American civil liberties. The bill would allow private companies to share any data they’ve created and collected with the government, which could then use it for its own purposes.

Data sharing can be useful, of course. To combat cyberthreats, private companies already share data with each other, and refer to this type of sharing as “threat intelligence.” Threat intelligence isn’t perfect, but helps companies identify dangers online in order to mitigate risks and secure their networks.

But this bill goes much further than that. CISA makes all information-sharing easier between the private sector and the government, not just for information relating to threats. For example, the federal government could use data collected from Google or Facebook during a criminal investigation. This violates the principle of due process, which suggests that courts should have oversight into how government agencies conduct investigations.

In this sense, CISA provides a clear way for the government to get around warrant requirements.

In exchange for providing this information, the bill grants legal immunity to private companies who break the law or who have poor network security. Thanks to this provision, it’s no surprise that industry groups like the Chamber of Commerce and the Financial Services Roundtable have been lobbying for this bill. CISA would also create a new exemption to Freedom of Information laws, preventing Americans from discovering what data about them is being shared with the government.

This immunity means that the government will be unable to prosecute companies who do not adequately protect their customers’ data. This is likely to lead to fewer resources being dedicated to cybersecurity threats, as the threat of a fine or lawsuit is reduced.

The growing volume of data that private companies gather on Americans makes this legislation more problematic. Google knows the contents of your email, as well as your search history, videos you’ve watched, and even where you’ve been. Facebook knows who your friends are, what type of articles you like, and whose profile you’re most likely to click on. To grant the government access to this information with no oversight on how it is used is not only unconstitutional, but also morally objectionable.

CISA advocates claim that there are adequate privacy protections to “scrub” personal data before it reaches the FBI or NSA. But included in the bill are loopholes which allow for unfettered access to this personal data at the discretion of these same government agencies.

If Congress is serious about addressing the evolving threats posed by criminals online, there are a number of proactive steps it should take. First, the Computer Fraud and Abuse Act of 1986 is in need of an overhaul. It’s ridiculous that our primary law against computer crimes was written when the chief threat to the United States was the Soviet Union. As currently written, the law prevents security researchers from doing their jobs, such as building tools that help mitigate threats before the bad guys exploit them.

Second, Congress needs to get serious about the threat posed by the ‘Internet of Things.’ We know that Volkswagen intentionally evaded emissions testing by writing a few extra lines of computer code. We need to know that our self-driving cars, voting machines, and medical devices are working properly and securely, and we cannot know that without being able to audit the code that powers them. We shouldn’t wait until a criminal takes control of these devices to begin properly securing our infrastructure.

We need legislation that addresses current and future threats. There are few, if any, cybersecurity experts that believe this bill will improve overall security. Nothing in the bill would have prevented major data breaches like what occurred at the Office of Personnel Management, which exposed the personal details of millions of innocent Americans, some at the highest levels of government. To the contrary, this bill would put even more data on the same insecure government servers that have already been exploited by criminals.


I was hoping to have an edited version of the above published somewhere, but with the vote likely to happen tomorrow, there isn’t enough time. That said, below are some accompanying notes for those who want to dig a bit deeper.

The first glaring hole in this bill is the lack of cybersecurity professionals who support it. I actually scoured the Internet to find someone respected within the industry who thought this was a good bill, and was unable to find a single one. On most other security-related issues, such as the potential regulation of 0day markets, there are a few different camps that security experts fall into. There is no such pro-CISA camp.

While I often side with the EFF on Internet-related issues, even experts I usually disagree with politically are opposed to this. This letter in opposition to CISA features many respected information security experts (including Bruce Schneier), and Brian Krebs has also commented on why the bill is misguided.

So when experts are opposed to such a bill, who exactly is supporting it? As I mentioned above, the Chamber of Commerce and Financial Services Roundtable are two of the industry groups that support it, and the reasoning is obvious. Companies and banks that have poor information security practices become immune to cybersecurity-related lawsuits, provided they share their data with the government.

This incentive also makes data-sharing less than the “voluntary” proposition that advocates claim. CISA creates a perverse incentive for companies to discount network security when doing a cost-benefit analysis. If this bill passes, there are two ways to reduce the risk of a cybersecurity-related lawsuit: secure your network OR share your data with the government. While some companies like Facebook and Google will never share *all* their data with the government, they would be foolish not to share *just enough* data to keep themselves immune from lawsuits.

While the backing of the financial industry is often enough to pass legislation, it also has a powerful ally in the intelligence community. Here’s some good reading on the intelligence community’s potentially changed role if CISA passes.

But to me, the key reason I dislike this bill is deception. I don’t like that this is called a “cybersecurity” bill. It’s a surveillance bill. Snowden’s revelations have shifted the political landscape to largely oppose state surveillance, which makes it amazing that a bill which hands over large amounts of data to the state is close to passage.

As I briefly mentioned at the outset of my initial piece, some of this has to do with issue fatigue. Watching this bill approach eventual passage (I consider it the successor to CISPA, first introduced in 2011), I am much more pessimistic about the future of American politics. The voices of industry professionals and civil liberties groups will never be as loud and sustained as those of industry groups whose clients all stand to benefit.

But the other reason I hate this bill is that it confuses real security with a false sense of security. The classic politician’s syllogism applies:

“The situation is bleak, something must be done.”

“This is something, therefore this must be done!”

The Internet of Things presents an entirely new, and more immediate, problem. We’re living in a world where new devices are not only running more code than ever, but are also reliant on internet connections in new ways. Why does my thermostat need to be connected to the internet in order to keep my house’s temperature steady? Dick Cheney’s doctor disabled the wireless functionality of his patient’s pacemaker due to the threat posed by hackers, so why do the rest of American citizens accept such a risk?

They don’t; they’re just unaware of the reality of the threat. These threats will only increase as we push towards “modernization” without any thought for the consequences. I’ll write a bit more on the problems with the security of the Internet of Things in the coming months on my blog.

And finally, I’ve linked to her blog multiple times in this post, but there was another good post over at emptywheel which sums up why this is a bad bill.

VPN Security Issue Can Reveal True IP

I use a Virtual Private Network (VPN) on a regular basis.  There are many reasons to do so.  It helps keep my true IP address concealed; all my internet traffic appears encrypted to the ISP.   If I need to use Wi-Fi at a coffee shop, I can do so without fear that the owner of the access point could be snooping on me.  Some internet content is also geographically restricted, and my VPN provides me a choice of where I want my internet traffic to originate from.

As it turns out, a wee bit of JavaScript magic will convince a web browser to reveal the originating IP.  While I’m connected to my VPN (through their provided applet, but this also works with other connection methods), here is what Google reports as my IP address:

my IP address

When I visit a site that is using some STUN JavaScript:


Yes, that 50.*.*.* IP address is mine.  As noted by that demo above, the request will not show up in dev consoles, and privacy-related browser extensions will not block it either (aside from NoScript, which blocks all JavaScript).  You can read more about this security problem.
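For the curious, the leak works because WebRTC quietly speaks the STUN protocol: the browser fires a small UDP request at a STUN server, which echoes back the public IP and port it saw, lightly obfuscated by XORing with a fixed magic cookie. Here is an illustrative Python sketch of those packet mechanics; the layout follows RFC 5389, but the function names are mine and a real client does considerably more:

```python
import os
import socket
import struct

MAGIC_COOKIE = 0x2112A442  # fixed constant from RFC 5389

def build_binding_request() -> bytes:
    # STUN header: type=Binding Request (0x0001), length=0, magic
    # cookie, and a random 96-bit transaction ID.
    return struct.pack(">HHI12s", 0x0001, 0, MAGIC_COOKIE, os.urandom(12))

def parse_xor_mapped_address(response: bytes) -> tuple:
    # Walk the attributes after the 20-byte header looking for
    # XOR-MAPPED-ADDRESS (0x0020), which carries your public IP and
    # port XORed with the magic cookie.
    pos = 20
    while pos + 4 <= len(response):
        attr_type, attr_len = struct.unpack(">HH", response[pos:pos + 4])
        value = response[pos + 4:pos + 4 + attr_len]
        if attr_type == 0x0020:
            port = struct.unpack(">H", value[2:4])[0] ^ (MAGIC_COOKIE >> 16)
            addr = struct.unpack(">I", value[4:8])[0] ^ MAGIC_COOKIE
            return socket.inet_ntoa(struct.pack(">I", addr)), port
        pos += 4 + attr_len + (-attr_len % 4)  # attributes pad to 4 bytes
    raise ValueError("no XOR-MAPPED-ADDRESS in response")
```

The XOR step is only an encoding detail, not encryption; any script on the page can decode it, which is exactly how the demo above recovers your real IP even though your browsing flows through the VPN.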

But there is good news.  This problem does not affect any web browsers in OS X.  It appears to only impact Windows machines, and only the Firefox and Chrome browsers.  Of course, we want all browsers to be secure, so how to fix this?

If you’re on Windows and using Firefox, type “about:config” in the address bar, and set “media.peerconnection.enabled” to False.

If you’re on Windows and using Chrome, type “chrome://flags/” in the address bar and check “Disable WebRTC device enumeration.”

The superior way to fix this is to force all traffic to go through your VPN, but my skills with Windows Firewall are a bit lacking.  If you control your own physical firewall, you probably already have a good idea of how to force traffic to go over port 1194 (OpenVPN) during VPN sessions.  Properly implemented, that should also plug this data leak.

I advise anyone who cares about privacy who is using Windows to take the above steps to fix the problem.  There are lots of people out there who want to track you so they can spy on you and sell you things.  Why make it easy for them?

ThreatPost also has more on this.