Category Archives: Data Security

Data Privacy Day and Practical Online Security

Today is Data Privacy Day, where we bow our heads and give thanks to the benevolent corporations that so closely guard all of our data. Without these titans of industry, data breaches would be routine and your private accounts could be accessed by nefarious hackers wearing ski masks.

don’t let this guy win

But just in case you don’t feel these companies always have your best interests in mind, there are a few simple things you can do to protect yourself online. Obviously this is not a comprehensive list and will not protect you against all adversaries, but you’ve gotta start somewhere.

Passcodes

Put a passcode on your phone, seriously. If you’ve followed the “debate” over the use of encryption in iPhones and Android devices, you know that certain groups (like the FBI and local law enforcement) are very upset that encryption is now the default on modern devices. Encryption means the data should be inaccessible to anyone who is not you, but it does you no good unless you enable it by setting a passcode. If you don’t want to use a passcode, don’t bother reading the rest of this post: anyone with physical access to your phone can get at the data inside.

Additionally, if you feel you could ever be physically coerced into unlocking your phone, turn off the fingerprint and face-recognition unlocking features. Don’t reveal your passcode to anyone without a warrant signed by a judge.

Also, pick a GOOD passcode. Don’t pick 1234, 0000, 2580, or your birthdate. And just like a password, don’t tell it to ANYONE. Not your lover, not your boss, not your pastor. Also, wipe your phone’s screen regularly because I can probably guess your passcode based on the Dorito cheese your greasy fingers leave behind.

Password Manager

Speaking of passwords, you should never reuse them! If you use the same password on your Google, Facebook, and Amazon accounts, anyone who guesses that single password has access to all those accounts.

I recommend using a password manager to keep track of all these things. The way a password manager works is that you remember one master password, which is used to unlock an encrypted database of the passwords you use on other sites.
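If you’re curious how that works under the hood, here’s a minimal Python sketch, assuming PBKDF2 key stretching plus symmetric encryption (real managers use more elaborate schemes; the cryptography package used here is a third-party library):

```python
import base64
import os
from hashlib import pbkdf2_hmac

from cryptography.fernet import Fernet  # third-party: pip install cryptography

# Illustrative sketch only -- not how any particular manager implements it.
master_password = b"correct horse battery staple"
salt = os.urandom(16)  # stored next to the vault; it isn't secret

# Stretch the one password you remember into an encryption key...
key = base64.urlsafe_b64encode(
    pbkdf2_hmac("sha256", master_password, salt, 600_000)
)
vault = Fernet(key)

# ...so every site password can live on disk only in encrypted form.
token = vault.encrypt(b"amazon.com: dV9$qL2x-unique-random-password")
print(vault.decrypt(token))  # readable only with the master password
```

The encrypted vault and the salt can sit on disk (or sync between devices) safely, because without the master password the key can’t be re-derived.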

I personally use 1Password, which costs money (though I think there’s a free trial); LastPass is a free alternative. Both can generate new secure passwords for you when you sign up for a new site, and all you need to remember is the master password. Both have browser extensions and mobile apps, which reduce the hassle of starting to use passwords more securely.

Install Signal

If you have a smartphone, this is a necessity. It’s currently the most secure text-messaging app on the market, and it’s free. Messages between you and other Signal users will be encrypted, so even an adversary using IMSI-catchers (aka Stingrays; when they’re in planes they’re sometimes called Dirtboxes) won’t be able to view them.

Of course, using Signal does not mean you’re completely secure if the other person does not have it installed – the app indicates whether the other party has it. You can also use the app to make secure phone calls with other Signal users.

Apple’s iMessage also provides fairly good security, in that it encrypts your conversations, but only works for conversations between iPhone users.

Enable Two-Factor Authentication on Everything

This is probably the most “cumbersome” step but will also provide the greatest security against attempts to access your accounts. It’s called two-factor authentication (sometimes multi-factor authentication) and the basic idea is that it should take more than just a username and a password to log in to an account. Since a username and password are things you know, we want to require something else to prove your identity. Typically this is something you have (like a smartphone) or something you are (like a fingerprint).

By enabling two-factor authentication, the next time some masked hacker guesses your username and password for a website, the site will send a verification code to an app on your phone or as a text message. Without that code, they won’t be able to log in and see all your secret messages and cat pictures! You’ll need to go through some configuration steps to enable this; I recommend starting with your Google account.

If you are able, I suggest installing Google Authenticator (or Authy) on your phone rather than getting verification codes via text message. Not all services support two-factor authentication, and some only offer codes sent by text message rather than through an authenticator app. Here is a handy chart of sites that support it – I recommend enabling it on all that you can, particularly Facebook (where it’s called “Login Approvals”) and Twitter.
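If you’re wondering what apps like Google Authenticator actually do, here’s a minimal Python sketch of the underlying TOTP algorithm (RFC 6238); the secret below is a made-up example, standing in for the value a site shows you (usually as a QR code) at setup:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    # Hash a counter derived from the current 30-second window...
    counter = struct.pack(">Q", int(time.time()) // step)
    digest = hmac.new(key, counter, "sha1").digest()
    # ...then truncate the result down to a short decimal code.
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # the code your phone would show right now
```

Your phone and the server each derive the same code from the shared secret and the current time, so typing the code proves you have the phone without the secret ever crossing the network.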

I’m sure I forgot something, so feel free to ask questions or drop knowledge in the comments. I’m available to give presentations and assist with security at a discounted rate (if I like you), or at my usual hourly rate (if I have no idea who you are). Stay safe out there!

CISA is a terrible cybersecurity law

In what has become an annual tradition, Congress has renewed their efforts to pass some type of cybersecurity legislation. For the past four years, privacy advocates and security experts have consistently opposed these bills due to inadequate protections of American civil liberties, and this year’s offering, the Cybersecurity Information Sharing Act (CISA), is no exception.

CISA greatly expands the scope of government surveillance at the expense of American civil liberties. The bill would allow private companies to share any data they’ve created and collected with the government, who could then use it for their own purposes.

Data sharing can be useful, of course. To combat cyberthreats, private companies already share data with each other, referring to this type of sharing as “threat intelligence.” Threat intelligence isn’t perfect, but it helps companies identify dangers online so they can mitigate risks and secure their networks.

But this bill goes much further than that. CISA makes all information-sharing easier between the private sector and the government, not just for information relating to threats. For example, the federal government could use data collected from Google or Facebook during a criminal investigation. This violates the principle of due process, which suggests that courts should have oversight into how government agencies conduct investigations.

In this sense, CISA provides a clear way for the government to get around warrant requirements.

In exchange for providing this information, the bill grants legal immunity to private companies who break the law or who have poor network security. Thanks to this provision, it’s no surprise that industry groups like the Chamber of Commerce and the Financial Services Roundtable have been lobbying for this bill. CISA would also create a new exemption to Freedom of Information laws, preventing Americans from discovering what data about them is being shared with the government.

This immunity means the government will be unable to hold companies accountable for failing to protect their customers’ data. That is likely to lead to fewer resources being dedicated to security, since the threat of a fine or lawsuit is reduced.

The growing volume of data that private companies gather on Americans makes this legislation more problematic. Google knows the contents of your email, as well as your search history, videos you’ve watched, and even where you’ve been. Facebook knows who your friends are, what type of articles you like, and whose profile you’re most likely to click on. To grant the government access to this information with no oversight on how it is used is not only unconstitutional, but also morally objectionable.

CISA advocates claim that there are adequate privacy protections to “scrub” personal data before it reaches the FBI or NSA. But included in the bill are loopholes which allow for unfettered access to this personal data at the discretion of these same government agencies.

If Congress is serious about addressing the evolving threats posed by criminals online, there are a number of proactive steps it could take. The Computer Fraud and Abuse Act of 1986 is in need of an overhaul. It’s ridiculous that our primary computer-crime law was written when the chief threat to the United States was the Soviet Union. As currently written, the law prevents security researchers from doing their jobs, such as building tools that help mitigate threats before the bad guys exploit them.

Second, Congress needs to get serious about the threat posed by the ‘Internet of Things’. We know that Volkswagen intentionally evaded emissions testing with a few extra lines of computer code. We need to know that our self-driving cars, voting machines, and medical devices are working properly and securely, and we cannot know that without being able to audit the code that powers them. We shouldn’t wait until a criminal takes control of these devices to begin properly securing our infrastructure.

We need legislation that addresses current and future threats. There are few, if any, cybersecurity experts that believe this bill will improve overall security. Nothing in the bill would have prevented major data breaches like what occurred at the Office of Personnel Management, which exposed the personal details of millions of innocent Americans, some at the highest levels of government. To the contrary, this bill would put even more data on the same insecure government servers that have already been exploited by criminals.

PostScript

I was hoping to have an edited version of the above published somewhere, but with the vote likely to happen tomorrow, there isn’t enough time. That said, below are some accompanying notes for those who want to dig a bit deeper.

The first glaring hole is the lack of cybersecurity professionals who support this bill. I scoured the Internet for someone respected within the industry who thought this was a good bill, and was unable to find a single one. On most other security-related issues, such as the potential regulation of 0day markets, there are a few different camps that security experts fall into. There is no pro-CISA camp.

While I often side with the EFF on Internet-related issues, even experts I usually disagree with politically are opposed to this. This letter in opposition to CISA features many respected information security experts (including Bruce Schneier), and Brian Krebs has also commented on why the bill is misguided.

So when experts are opposed to such a bill, who exactly is supporting it? As I mentioned above, the Chamber of Commerce and Financial Services Roundtable are two of the industry groups that support it, and the reasoning is obvious. Companies and banks that have poor information security practices become immune to cybersecurity-related lawsuits, provided they share their data with the government.

This incentive also makes data-sharing less than the “voluntary” proposition that advocates claim. Rather than encouraging companies to secure their networks, CISA creates a perverse incentive to discount network security in any cost-benefit analysis. If this bill passes, there are two ways to reduce the risk of a cybersecurity-related lawsuit: secure your network OR share your data with the government. While some companies like Facebook and Google will never share *all* their data with the government, they would be foolish not to share *just enough* data to keep themselves immune from lawsuits.

While the backing of the financial industry is often enough to pass legislation, CISA’s supporters have a powerful ally in the intelligence community. Here’s some good reading on the intelligence community’s potentially changed role if CISA passes.

But to me, the key reason I dislike this bill is deception. I don’t like that this is called a “cybersecurity” bill. It’s a surveillance bill. Snowden’s revelations have shifted the political landscape to largely oppose state surveillance, which makes it amazing that a bill which hands over large amounts of data to the state is close to passage.

As I briefly mentioned at the outset of my initial piece, some of this has to do with issue fatigue. Watching this bill move toward what seems like inevitable passage (I consider it the successor to CISPA, first introduced in 2011), I am much more pessimistic about the future of American politics. The voices of industry professionals and civil liberties groups will never be as loud and sustained as those of industry groups whose clients all stand to benefit.

But the other reason I hate this bill is that it confuses real security with a false sense of security. The classic politician’s syllogism applies:

“The situation is bleak, something must be done.”

“This is something, therefore this must be done!”

The Internet of Things presents an entirely new, and more immediate, problem. We’re living in a world where new devices are not only running more code than ever, but are also reliant on internet connections in new ways. Why does my thermostat need to be connected to the internet in order to keep my house’s temperature steady? Dick Cheney’s doctor disabled the WiFi on his patient’s pacemaker due to the threat posed by hackers, so why should the rest of us accept such a risk?

They don’t; they’re just unaware of the reality of the threat. These threats will only increase as we push towards “modernization” without any thought for the consequences. I’ll write more on the security problems of the Internet of Things in the coming months on my blog.

And finally, I’ve linked to her blog multiple times in this post, but there was another good post over at emptywheel which sums up why this is a bad bill.

VPN Security Issue Can Reveal True IP

I use a Virtual Private Network (VPN) on a regular basis.  There are many reasons to do so.  It keeps my true IP address concealed, and my ISP sees only encrypted traffic.  If I need to use Wi-Fi at a coffee shop, I can do so without fear that the owner of the access point is snooping on me.  Some internet content is also geographically restricted, and my VPN lets me choose where my internet traffic appears to originate from.

As it turns out, a wee bit of Javascript magic will convince a web browser to reveal the originating IP.  While I’m connected to my VPN (through their provided applet, but this also works with other connection methods), here is what Google reports as my IP address:

[screenshot: Google reporting the VPN’s IP address]

When I visit a site that is using some STUN Javascript:

[screenshot: the STUN demo revealing my true IP address]

Yes, that 50.*.*.* IP address is mine.  As noted by that demo above, the request will not show up in dev consoles and privacy-related browser extensions will not block it either (aside from NoScript, which blocks all Javascript).  You can read more about this security problem.

But there is good news.  This problem does not affect any web browsers in OS X.  It appears to only impact Windows machines, and only the Firefox and Chrome browsers.  Of course, we want all browsers to be secure, so how to fix this?

If you’re on Windows and using Firefox, type “about:config” in the address bar, and set “media.peerconnection.enabled” to False.

If you’re on Windows and using Chrome, type “chrome://flags/” in the address bar and check “Disable WebRTC device enumeration.”

The superior way to fix this is to force all traffic to go through your VPN, but my skills with Windows Firewall are a bit lacking.  If you control your own firewall, you probably already have a good idea of how to restrict outbound traffic to port 1194 (OpenVPN) during VPN sessions.  Properly implemented, that should also plug this data leak.

I advise anyone who cares about privacy who is using Windows to take the above steps to fix the problem.  There are lots of people out there who want to track you so they can spy on you and sell you things.  Why make it easy for them?

ThreatPost also has more on this.

Heartbleed and the Computer Fraud and Abuse Act

As the Heartbleed story broke last week, a number of individuals and security vendors released tools designed to test for the vulnerability.  One very popular tool was written and hosted by Filippo Valsorda.  Many systems administrators took advantage of this free tool in order to test the security of their own systems.

Tools that test for vulnerabilities make the internet more secure.  Consumers feel safer knowing their bank or email provider is not leaking sensitive information.  Conversely, websites that do not immediately patch their systems put their customers’ data at risk, and assessment tools allow this to be known.  A publicly-available assessment tool lets anybody test whether the sites they rely on are properly protecting data.
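To make the mechanics concrete, here’s a hedged Python sketch of the malformed heartbeat request at the core of a Heartbleed probe (field layout per RFC 6520); it only constructs bytes, connects to nothing, and is not a working exploit:

```python
import struct

def heartbeat_record(claimed_len: int, payload: bytes) -> bytes:
    # Heartbeat message: type 1 (request), 2-byte payload length, payload.
    message = struct.pack(">BH", 1, claimed_len) + payload
    # TLS record header: content type 24 (heartbeat), version, record length.
    return struct.pack(">BHH", 24, 0x0302, len(message)) + message

# An honest request declares its true payload length:
honest = heartbeat_record(4, b"ping")

# The Heartbleed probe lies: it claims a 16 KB payload but sends none.
# A vulnerable server echoes back ~16 KB of adjacent process memory;
# a patched server simply ignores the malformed request.
probe = heartbeat_record(0x4000, b"")
```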

But releasing these assessment tools to the public is problematic from a legal perspective.  Using a security assessment tool to test any site you don’t control is a violation of the Computer Fraud and Abuse Act (CFAA).

The CFAA amended 18 USC § 1030 to define crimes of computer misuse.  Multiple clauses of this law could be violated by scanning a website for vulnerabilities without prior authorization.  The Heartbleed bug allows an attacker to receive information located in a server’s memory just by asking for it, so the way to assess whether a particular server is secure is to ask for extra information and see whether the server provides it.  Subsection (a)(2)(C) of 18 USC § 1030 deals specifically with unauthorized access to information:

(a) Whoever —

(2) intentionally accesses a computer without authorization or exceeds authorized access, and thereby obtains—

(C) information from any protected computer;

What exactly is a “protected computer”?  The CFAA defines the term in subsection (e)(2):

(e) As used in this section–

(2) the term “protected computer” means a computer–

(A) exclusively for the use of a financial institution or the United States Government, or, in the case of a computer not exclusively for such use, used by or for a financial institution or the United States Government and the conduct constituting the offense affects that use by or for the financial institution or the Government; or

(B) which is used in or affecting interstate or foreign commerce or communication, including a computer located outside the United States that is used in a manner that affects interstate or foreign commerce or communication of the United States;

(The growth of the internet, unforeseen when the CFAA was introduced in 1986, essentially means that a “protected computer” as defined above covers every internet-connected computer, as they are used in “interstate or foreign commerce or communication”.)

Of course, being charged under one section of the CFAA does not preclude being charged under additional sections.  Subsections (a)(5)(B) and (C) cover potential damage caused by access from those who:

(B) intentionally accesses a protected computer without authorization, and as a result of such conduct, recklessly causes damage;

(C) intentionally accesses a protected computer without authorization, and as a result of such conduct, causes damage and loss

Section (b) of 18 USC § 1030 makes conspiring or attempting to commit unauthorized access a crime.  Use of a vulnerability assessment tool could be considered tantamount to “casing the joint” before actually committing the crime:

(b) Whoever conspires to commit or attempts to commit an offense under subsection (a) of this section shall be punished as provided in subsection (c) of this section.

The provisions of the CFAA were intended to fight crime, but they’ve made criminals out of every internet user who is concerned about security.  Criminalizing security research makes us all less safe – after all, how can anyone know whom to trust without basic knowledge of security practices?

The inability of prosecutors to uniformly enforce this outdated law also creates a system of selective enforcement.  Since it’s impossible to punish everyone, a federal prosecutor can choose who they would like to charge under this law.  The technology community is painfully aware of what happens when overzealous prosecutors take the CFAA too far.

Part of this problem is symptomatic of a larger issue.  As it stands, it is currently impossible to count the number of federal crimes that could be committed:

“There is no one in the United States over the age of 18 who cannot be indicted for some federal crime,” said John Baker, a retired Louisiana State University law professor who has also tried counting the number of new federal crimes created in recent years. “That is not an exaggeration.”

There have been recent efforts to reform the CFAA.  A bill introduced by Zoe Lofgren would eliminate penalties for Terms of Service violations, such as using a friend’s Netflix account or joining a class-action lawsuit against Steam or Sony.  While these reforms are a step in the right direction, they do not go far enough to de-criminalize responsible online behavior.

Additional resources:
Prosecuting Computer Crimes Handbook
A Practitioner’s Guide to the CFAA
Cybercrimes & Misdemeanors

Recap: MN Civil Law Committee Hearing on Surveillance and Privacy

Here’s my edited dump of notes from today’s meeting (apologies if any of it is misattributed or incorrect):

Today the State of MN Civil Law Committee convened to hear testimony regarding state and local government use of surveillance technologies.  At issue was how these technologies impact an individual’s right to privacy, and what legislative steps can be taken to allow law enforcement’s use of these technologies while protecting constitutional rights.

The first person to testify was the ACLU’s Catherine Crump.  She prefaced her comments by mentioning that while many privacy issues have surfaced due to the NSA, problems can also arise at the state and local level.  The ACLU is not opposed to surveillance technologies, but recognizes that oversight is required to prevent powerful technologies from being abused.

(While I would prefer that modern technologies not be used to surveil in the first place, this is a perfectly sane position to take.  Being “opposed” to technology is a pretty difficult proposition, since it’s the actual use of technology that can be problematic – it would be like opposing streaming video technology because you watched a bad movie.)

Crump’s testimony focused on four areas: GPS tracking of vehicles, cell phone location tracking, automated license plate readers, and surveillance drones.  Crump noted that extended surveillance often leads to the discovery of very private information about an individual, and that 28 days of GPS surveillance was considered a “search” by the Supreme Court.  Previously, searches like this were limited by the cost of the technology, but the plummeting cost of GPS hardware requires the state to impose legal restraints on this type of use instead.

Crump also touched on some topics related to cell phone tracking.  The first is that all carriers store historical location data for at least a year, and that carriers are willing to share this data with law enforcement.  This historical data is often much more sensitive than current location, since it can be used to identify patterns of activity.

Current cell phone location data is obviously very useful in the event of an immediate threat or a crime in progress, and I do not believe anyone is opposed to police using this data.  Law enforcement can also work with carriers to receive what’s called a “tower dump,” which consists of a list of cell phones that have recently connected to a particular cell tower.  Both these uses of technology require oversight into how frequently these tools are used, who they are used against, and how they are deployed.

In closing, Crump stated that legislation which adds oversight to the use of technology needs to address where future technology is headed.  For example, surveillance drones will likely soon become a part of our landscape, so it’s important to come up with legislation regarding acceptable drone use before they become widely deployed.

Next, Commissioner of Public Safety Ramona Dohman answered a few questions from the committee.  Most interesting to me was that Kingfish/Stingray devices (cell phone exploitation devices) have been deployed in Minnesota since 2005 – almost 10 years!  Other interesting points made by Dohman (or her assistants – my notes are terrible) were that data collected by Kingfish is not kept, though it could be – the claim is that this data would not be very useful.  Also, in response to a question: the identities of the specific officers who access data are not available to the public.

Next up was Minneapolis PD Chief Janee Harteau, who stated that MPD does not have any cell phone exploitation technology and has no plans to obtain it.  When MPD has such a need, they get a warrant and make a request to the BCA, which handles the technology side.  When asked why MPD does not contact the Hennepin County Sheriff’s Office (which also has Kingfish), she could not give an answer – personally, I get the feeling that MPD and the Hennepin County Sheriff’s Office don’t always see eye-to-eye.  Harteau also stated that MPD does not own any drones and has no plans to purchase any.  Harteau was also questioned over her department’s policy of keeping license plate reader (LPR) data for 90 days (a time period I consider somewhat reasonable).

St Paul Police Chief Tom Smith was a little more vocal about the benefits of consumer location technology, noting that OnStar could find him if he were in an accident in northern MN, and touting some of the features of Apple’s iOS 7.  He noted that St Paul does not use Triggerfish or Kingfish, and that like Minneapolis, when they need to use that technology, they get a warrant and contact the BCA.  Smith also stated that he and Harteau are both members of the International Association of Chiefs of Police, and that the organization might be able to help draft some model legislation.

After some additional testimony from Olmsted County Sheriff Dave Mueller and MN Sheriff’s Association Executive Director Jim Franklin, things got a little more interesting.  Don Gemberling of the Minnesota Coalition on Government Information raised the possibility of a privacy and civil liberties board in Minnesota (after keenly pointing out that at one point, George Orwell himself was a cop).  He also cited Justice Brandeis’ dissent in the Olmstead case (“if the government becomes a lawbreaker, it breeds contempt for law”) as one reason this board might need to be established, and said that it’s not only the bad guys you have to worry about, but also the good guys who lose control.

Rich Neumeister gave some additional comments, stating that law enforcement has been trending toward greater secrecy, and that this trend has been going on a long time.  He noted that even the LPR data took four years to become public knowledge, and that police in the late 80s used handheld scanners to try to listen in on calls made from cordless phones.  He has also been unable to obtain even the names of the companies the BCA has contracts with.

Last, Deputy Secretary of State Beth Fraser spoke.  She talked briefly about the Safe at Home program, which helps shield victims of domestic abuse from their abusers.  She voiced her concern about what happens when an abuser is a member of the law enforcement community, and said she would like a way to delete data that does not have a legitimate use.

Overall, the meeting was about what I expected.  Not a whole lot was accomplished today, but I am grateful to Rep. John Lesch for keeping important privacy issues at the forefront of discussion.  As always, feel free to contact me via email or leave a note in the comments.

Nice Ride and user privacy – crossing the line

I’m a really big fan of Nice Ride, the bike-sharing program we have here in the Twin Cities. It’s a great way to encourage cycling (especially for beginners) and exploration of the cities – there are so many little wonderful things you miss when you’re in a car or riding the bus. That’s why I was disappointed when Nice Ride disclosed rider data to the public without removing a field which can be used to individually identify riders.

Privacy has been in the Minnesota news recently, after it was discovered that the Minneapolis police department was scanning license plates and using that information to compile a database of driver activity (such as where and when a car was spotted). The mere existence of such a database is disturbing, but is unfortunately not news to those of us who follow the advancing deployment of technology. What was truly disturbing was that this data was semi-public – anyone could request the locations where a particular license plate was observed, and the police would provide that data. Since this story broke, efforts have been made to reduce the overall scale of the database, and to monitor and restrict public access to it.

Nice Ride, on the other hand, apparently has no qualms about publishing their entire database, complete with a unique subscriber ID. This unique subscriber ID allows anyone with a copy of the database to track an individual user’s activity throughout the Nice Ride system. This is useful information for Nice Ride employees who are using this data to figure out how individual riders are using the bikes, allowing Nice Ride to better serve their customers. But releasing this data to the public means that a subscriber ID can be easily linked with an actual person, exposing an individual’s entire ride history. There are many conclusions one can draw about individual Nice Ride users by manipulating this data (and combining it with other data), so let’s take a look!

I’d like to start by describing the easiest ways to correlate a subscriber ID with an actual user, but I don’t really have the heart to publish a thorough methodology – that’s one of the things I’m deeply opposed to, and it’s my main grievance with the irresponsible publication of this data. I did not personally use Nice Ride this year, so I don’t even have a subscriber ID in the system. But if you’re a user of social media, can you remember tweeting or updating your Facebook status when you rode a Nice Ride? Remember someone else who did? Know of any ways you can find that info again, along with the date and time it was published? Well, that’s one way to start. (Again, I apologize for not writing more on this, but I’m trying not to go too in-depth. Simple observation is the other obvious way – you saw that cute girl get on a Nice Ride at a certain date/place/time, and while you don’t have her name, Nice Ride has now told you everywhere she has ridden a shared bike.)

Once you match a single person to a subscriber ID, the floodgates are open. You get every single individual ride’s start time/date, as well as location, and the same for the destination (time, date, location). It’s also trivial to glance at any person’s data and see if any other user has checked out a bike from the same location within the same timeframe, potentially gaining the subscriber ID of a known acquaintance, spouse, etc.

Or, to take an example from the Minneapolis Bike Love forum:

Let’s say I take a bike out every morning near my house and ride it to work. My ex-wife knows I do this. She uses this information to figure out my subscriber ID because I am the only one who daily takes that bike from there and rides to the location near my work. Using my ID she looks at my other activity. She sees that I am riding places in the middle of the day. She sees that I am riding places when I told her I was out of town. She sees that I am riding around when I told her I was too sick to take the kids. She sees that I am riding to a place where I spent Saturday night and ride away the next morning. I just do not want her knowing that shit and I did not pay NiceRide to tell her.

The bottom line is that publishing this data is irresponsible and potentially dangerous. Bike-share programs in other cities also publish the exact same data (in addition to cool charts), but without the subscriber ID. I support the great things that Nice Ride does in order to make biking more accessible to beginners and those who prefer to avoid the hassle of bike maintenance. But they seriously need to remove just one field before publishing their data.

Update as of 12/8/2012:

Of course there’s one more thing that I neglected to mention in the above post. If you go to Nice Ride’s sign-up page, you’re presented with the user agreement at the bottom. About 2/3 of the way through that document, the section on “Confidential Information” (which is the only aspect of the user agreement related to privacy, as far as I can tell) refers the user to the Privacy Policy on the website.

Now, most modern websites have some sort of Privacy Policy which governs data that is submitted or stored via the website, so that’s kind of sloppy – obviously subscriber ID, check-in times, station locations, etc. are not submitted via the website. And ignoring that oversight, most of the Privacy Policy is relatively standard boilerplate, even the section that reads:

We may share aggregated demographic information (data that cannot identify any individual person) with our partners and sponsors.

The data they have published is not aggregated (and can potentially be used to identify individuals), and they are not providing it strictly to partners and sponsors, but to the public. There are good reasons for this (so other data nerds can make maps and track behavior). Even if Nice Ride removed the subscriber ID, they would still not be in technical compliance with their policy (because of the aggregation claim), but they would eliminate the possibility of identifying individual users, which is all I really care about.

And finally, Nice Ride published a similar dataset in 2011, but included Date of Birth, Gender, and ZIP Code – making it very easy to identify people. It doesn’t appear that they did much about this oversight (other than properly redacting this data in 2012), as Minneapolis Mayor RT Rybak’s subscriber ID appears to be in use in both the 2011 and 2012 data sets (though either he stopped using Nice Ride in May 2012, or was assigned a new subscriber ID – this doesn’t surprise me considering he’s an avid cyclist and probably prefers his own bike). It would have been a smart idea to re-assign subscriber IDs after that inadvertent disclosure.

And if you’re wondering, I did email the Director of IT for Nice Ride prior to publishing this, and he was unconcerned about the privacy implications of publishing the data. I didn’t tell him specifically about the privacy policy violations mentioned in this update, because I thought of that angle after he stopped replying to my email. The EFF sent me a form letter telling me to contact my local bar association, and a reporter from the Star Tribune couldn’t come up with an angle which was appealing enough to readers.

If anyone has any ideas on how to get this resolved (either updating their policy to state that they will share ride data about users, or to stop publishing the subscriber ID field), please let me know and share the link to this post. Thanks!

Email and the Petraeus Affair

To be honest, I haven’t been following the Petraeus affair saga with a whole lot of interest. Sure, it’s interesting to some, but I would rather not have to separate the wheat from the chaff in the reporting. I simply don’t trust many news outlets to get the details right, so I’d rather not get wrapped up in the nitty-gritty.

But I saw an interesting question on twitter – how exactly DOES the FBI go about reading people’s email? And, by extension – how do *I* go about reading others’ email? Well, the cold reality is that I’m not really interested in reading your email. I sometimes have to do it (as part of my job) and believe me, it’s boring, and I think most people who work in IT feel the same way.

The first thing to remember is that if the FBI wants to read any email of yours that is beyond six months old, it’s easy! A federal prosecutor needs to approve a subpoena, and that’s it. No, I did not substitute “prosecutor” for “judge” – it’s really a federal prosecutor. It’s kinda like having your own prescription pad and writing out what you want, without the hassle of going to the doctor!

Second, if you’re accessing your email from behind a corporate firewall, you may already be subject to monitoring! At many large organizations, all traffic is filtered through a web proxy – these are often used for filtering content (like blocking Facebook at work), but can also be leveraged to perform Man-in-the-Middle attacks on sites you visit, including your personal email or banking sites.

See, normally when you go to your webmail or banking site and enter your credentials, you’re “safe” because the certificate presented by the site is signed by an authority on your browser’s list of trusted certificate issuers. While this system is inherently insecure for many reasons (Google arbitrarily chooses whom to trust if you’re using Google Chrome, for example), it can easily be manipulated by corporate IT departments simply by adding their own certificate to your browser’s trusted list. This enables anyone holding the corresponding private key who is sitting between you and Gmail (for example) to decrypt information travelling between your computer and the email server.
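Here’s a minimal Python sketch of that trust decision; “corporate-root.pem” is a hypothetical file name standing in for a corporate CA certificate:

```python
import socket
import ssl

# A minimal sketch of the trust decision your browser makes on every
# HTTPS connection.
ctx = ssl.create_default_context()  # trusts the system's built-in CA list
# A corporate IT department enables interception by adding one more root:
# ctx.load_verify_locations("corporate-root.pem")  # hypothetical file

with socket.create_connection(("mail.google.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="mail.google.com") as tls:
        # If a proxy in the middle re-signs Gmail's certificate with that
        # corporate root, this handshake still succeeds -- and the proxy
        # can read everything passing through it.
        print(tls.getpeercert()["issuer"])
```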

Well, I was going to write more, but I’m kinda busy today. Suffice it to say: only check email on a device you control, whose entry point to the internet is a gateway that you trust. But there’s not much you can do about a subpoena (short of running your own mail server)…