I was recently asked this question…
I’m working on a project right now where my team wants to replace usernames and passwords with biometric authentication. I have expressed multiple concerns about the security of such a system, but the idea has now come up that we could use a system with at least two factors of biometric authentication, such as facial and voice recognition. While such a system is definitely better than a single form of biometric authentication, I still believe it is less secure than using passwords. And even if it were not, I believe it is concerning from a privacy standpoint and makes our database a prime target for hackers.
To which I replied… When evaluating any authentication solution you should consider the FAR, FRR, and CER.
FAR = False Acceptance Rate, or how often someone who is not an authorized user is granted access.
FRR = False Rejection Rate, or how often an authorized user is rejected.
CER = Crossover Error Rate, the error rate at the point where the FAR and FRR meet.
You want your FAR and FRR to both be very low. If your FAR were 1 in every 100 authentication attempts, meaning that once in every 100 attempts an unauthorized person was granted access, that would be 1%. Is that acceptable given the number of people using the system?
You should try to account in your design for a FAR event (an unauthorized user gaining access) and have some other protection in place; that leads to a multi-factor authentication (MFA) scheme.
FRR is what will truly frustrate your authorized users, because they will be turned away and unable to access the system. That drives up the cost of operating the system, since an additional person will have to be standing by to grant the rejected but authorized user access.
The CER, or crossover error rate, is the error rate at the point where the FAR and FRR are equal, and it gives you a single number for comparing systems: a lower CER means the system makes fewer errors of both kinds. If you want to make sure that unauthorized users DO NOT have access and that your authorized users are not being turned away, you want the system with the lowest CER.
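The relationship between the acceptance threshold, FAR, FRR, and the crossover point can be sketched in a few lines of Python. The match scores below are made-up illustrative numbers, not output from any real biometric system:

```python
# Sketch: estimating FAR, FRR, and the crossover point (CER) from match scores.
# The score lists below are invented for illustration.

def far_frr(impostor_scores, genuine_scores, threshold):
    """Return (FAR, FRR) for a given acceptance threshold.

    A sample is accepted when its match score >= threshold, so:
      FAR = fraction of impostor attempts wrongly accepted
      FRR = fraction of genuine attempts wrongly rejected
    """
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

def crossover(impostor_scores, genuine_scores, thresholds):
    """Find the threshold where FAR and FRR are closest; the error
    rate at that point approximates the CER."""
    def gap(t):
        far, frr = far_frr(impostor_scores, genuine_scores, t)
        return abs(far - frr)
    best = min(thresholds, key=gap)
    return best, far_frr(impostor_scores, genuine_scores, best)

impostors = [0.10, 0.20, 0.30, 0.40, 0.55]   # unauthorized users' match scores
genuines  = [0.45, 0.60, 0.70, 0.80, 0.90]   # authorized users' match scores
t, (far, frr) = crossover(impostors, genuines, [i / 100 for i in range(101)])
print(f"threshold={t:.2f} FAR={far:.2f} FRR={frr:.2f}")
```

Raising the threshold pushes FAR down and FRR up; the sweep above simply walks the threshold range and reports where the two error rates balance.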
The Traffic Light Protocol (TLP) takes something that most people know and applies it to a new problem: in this case, the simple concept of roadway traffic lights applied to information sharing. As defined by FIRST (the Forum of Incident Response and Security Teams), an organization formed by cyber first responders, the Traffic Light Protocol is “a set of designations used to ensure that sensitive information is shared with the appropriate audience”.
According to the TLP, when information is shared between two parties (a source and a recipient), the traffic light colors tell the recipient what the source expects regarding how the information may be used and shared further.
The key to understanding TLP is its simplicity. Traffic lights or signals are something used and seen by drivers and passengers on roadways around the world.
It’s important that each person in an organization who handles information understands and applies TLP consistently; the protocol only succeeds when everyone uses it the same way, all the time.
While most roadway traffic signals have either two or three lights, the protocol defines four classifications.
TLP:Red – Information is classified RED when the sharing party intends that it not be disclosed further; its use should be restricted to the participants in the exchange only. I tell people that when information classified TLP:Red is shared with you, that information should stay with you.
TLP:Amber – Information classified AMBER is intended for limited disclosure, meaning you should only share it with people in your organization. If you work in a company’s Information Security department and receive information classified TLP:Amber, you can share it with others in that department. Some organizations stretch this to mean anywhere within the company; specific company policies and procedures should clarify this.
TLP:Green – Information classified GREEN is also limited disclosure, but the limit is the community: people in your organization and in other organizations with whom you regularly work. As with TLP:Amber, your organization’s policies and procedures should define the community.
TLP:White – Information classified TLP:White “carries minimal or no foreseeable risk of misuse” and can be shared broadly. It’s important to note that TLP:White information is still subject to other organizational classifications (such as Secret, Top Secret, or NOFORN), and copyrights should still be observed.
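As a quick illustration of the four classifications, here is a small Python sketch that encodes the sharing boundaries in order of restrictiveness. The audience strings are my own paraphrase of the FIRST definitions, and the `may_share` helper is a hypothetical policy check, not part of any official tooling:

```python
# Sketch of TLP's four sharing boundaries as a simple lookup.
# Audience descriptions paraphrase the FIRST definitions.

TLP_AUDIENCE = {
    "RED":   "participants in the exchange only -- do not share further",
    "AMBER": "your own organization (often just your department)",
    "GREEN": "your community: your organization and trusted partners",
    "WHITE": "unlimited -- disclosure carries minimal risk of misuse",
}

# Ordered from most to least restrictive, so we can answer questions like
# "may a GREEN-reach channel carry AMBER information?" (it may not).
ORDER = ["RED", "AMBER", "GREEN", "WHITE"]

def may_share(info_level: str, channel_level: str) -> bool:
    """True if information at info_level may travel over a channel whose
    audience reach is equal to or narrower than the information allows."""
    return ORDER.index(channel_level) <= ORDER.index(info_level)

print(may_share("AMBER", "RED"))    # narrower audience: allowed
print(may_share("AMBER", "GREEN"))  # wider audience: not allowed
```

The one design point worth noting: the colors form a strict ordering, so a consistent implementation only ever needs to compare positions in that ordering.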
The Internet is changing yet again. One of my predictions for 2018 is that everyone will witness a migration from corporate or private data centers to the ‘Cloud’, or Internet hosted data centers. There have been tremendous advances made in both securing the Cloud and sharing with the broader technical community how to secure the Cloud.
Some important reading material about Cloud security includes:
Amazon’s Shared Responsibility Security Model,
Azure’s Security Center, and
Google’s Application Layer Transport Security.
I recently came across two very good articles about USB forensics.
The Hitchhiker’s Guide to USB Forensics was published at the Cyberforensicator site by Oleg Skulkin and Igor Mikhaylov. It is a very well thought out and written description of how to determine, through operating system analysis, what files have been copied to a USB device. They used a Windows 10 virtual machine and the AXIOM forensics tool to conduct a basic analysis, locating evidence about what files had been copied or moved.
I was looking for references on how to investigate just the USB drive itself. I found the SANS Computer Forensic Guide to profiling USB thumbdrives on Win7, Vista, and XP, a blog post by Rob Lee dated September 2009. This was more in line with what I was looking for: the case where one has found a USB device and wants to start treating it as evidence. Rob wrote about the differences between analyzing USB thumb drives and drive enclosures. There was much good info in both posts.
In the past week I completed the work for the first MOOC (Massive Open Online Course) that I’ve ever taken. The course was Surveillance Law, which I completed via Coursera. Let me start by saying that this course was fantastic. The presenter, Jonathan Mayer of Stanford, did a great job delivering a series of short lectures that introduced and discussed US surveillance laws from technical and legal perspectives. The readings were great in that Mayer and the course team chose great materials and also advised participants when to read closely and when to skim. The lectures and materials covered topics and news from just weeks and months before, so the overall course was tremendously relevant and informative.
The discussion forums in a MOOC can be pretty daunting. There were many, many people participating. I read a number of messages and threads that I felt were off topic and became less interested in participating there. I regret that now, as I later learned that a number of regional, online (Google Hangouts?), and over-the-phone study groups formed. I would have liked to participate in one of those. The constant “we’re screwed” and “the government is watching us” attitudes, and the off-topic back and forth in some (many) of the discussions, had turned me off. I realize now they turned me off too soon.
Among what I thought were the highlights of the course:
– How to Read a Legal Opinion: A Guide for New Law Students, by Orin Kerr, was a fantastic read. Thank you.
– Liberty and Security in a Changing World, the Report and Recommendations of The President’s Review Group on Intelligence and Communications Technologies. I had seen and read this document before, but reading it again in context with the lectures I got so much more out of it.
– Jonathan’s great red t-shirt
– An archive of all of the course lectures appears on YouTube!
I would highly recommend this course to anyone interested in criminal justice or surveillance law. I’d also highly recommend Jonathan Mayer as a course instructor.
BBC News is reporting today that Google has updated its search engine algorithm to give a higher rank to websites that use HTTPS. The web news site Gigaom explains further that the algorithm identifies web sites that use HTTPS / TLS and treats it as a lightweight ranking signal that impacts less than 1% of global queries.
After reading Thomas’ article on re-evaluating the safety of Mac OS/X last week, I finally managed to bring most of my Apple equipment up to date. I checked all the network devices and updated most of those. My Windows machines are all current. I do have that one Mac that won’t run a current OS; I read a great tips article over at Naked Security about bringing it as close to protected as possible.
Five years ago I made a decision to move from PasswordSafe to AgileBits 1Password. As someone in the security field I’ve always tried to practice what I preach: using different passwords for different sites and cycling passwords (changing them every N months) is important. I looked at a number of different password management solutions and enthusiastically moved to 1Password, as it offered everything I was looking for. Early on I used a local database, but after becoming comfortable with the product I moved to using a shared database stored at DropBox.
One of the other password managers I looked at was PasswordBox. PasswordBox offers an application that includes the capability to sync passwords back to ‘cloud’ storage at the developer’s site and is available for Mac, Windows, and mobile platforms. When I first looked at it, my concern with PasswordBox was that there was no way of knowing how my passwords would be secured, given the application’s storage model (i.e., stored where?). With 1Password, storage is either local or at DropBox. The 1Password folks have been called out on encryption (Cult of Mac 2012, Lifehacker 2013, TraxArmstrong 2013) numerous times over the past years. I followed that controversy and think the AgileBits team handled it well, so I have no reservations recommending 1Password with DropBox.
Using any password manager one of the harder problems seems to be keeping the browser plug-in alive. As Firefox has marched through release after release I’ve had to update the plug-in and recently had to uninstall / reinstall the plugin after the 1Password major version change. That’s just one browser. I try to keep 1Password running in Firefox, Safari, and Chrome.
Something I have been looking for as a feature of a password manager is some way to do password escrow: that is, a means to pass on the information in my password manager should something happen to me. The simple way of doing this is to give someone I trust the password to my password manager. The downside is that the act of giving out that password creates a potentially huge point of friction. You have to ask yourself, ‘Will the person I gave that password to do the right thing at the right time?’ Giving someone the password also equates to giving them the keys to everything: you lose the capability to purge information you don’t want to pass on. One way around that is the capability offered by an application such as Legacy Locker.
Legacy Locker and other apps like it (Perpetu) offer a service that passes on usernames and passwords that you select to a person or people you designate in the event that you pass away or become permanently incapacitated. All of these offer some form of credential or service escrow capability. They solve a very difficult problem faced by virtually all Internet-based service providers: how to allow someone other than the user who agreed to the terms of service and opened the account into that account.
My advice regarding password managers is that more people should use them. They are an important tool for maintaining one’s individual security on the Internet. To be truly useful across multiple devices, a password manager needs a common storage point, and Internet Cloud-based storage works. The key to using Cloud-based storage while keeping your passwords secure is making sure the manager supports strong encryption.
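To make “strong encryption” a little more concrete, here is a minimal Python sketch of the kind of key derivation a vault should use before anything is synced to the cloud. The function name and parameters are illustrative, not those of any real password manager; the point is that the vault key comes from a deliberately slow key-derivation function, so a stolen database file still forces an attacker through an expensive guessing loop:

```python
# Sketch: deriving a vault encryption key from a master password with
# PBKDF2-HMAC-SHA256 (Python standard library). Parameters are illustrative.
import hashlib
import hmac
import os

def derive_vault_key(master_password: str, salt: bytes,
                     rounds: int = 200_000) -> bytes:
    """Derive a 32-byte key; the high round count slows down guessing."""
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, rounds)

salt = os.urandom(16)                      # random salt, stored with the vault
key = derive_vault_key("correct horse battery staple", salt)

# Verification re-derives the key and compares in constant time.
print(hmac.compare_digest(key, derive_vault_key("correct horse battery staple", salt)))
print(len(key))  # 32 bytes, suitable for AES-256
```

The salt is not secret and lives alongside the vault; what protects a cloud-synced database is the round count and the strength of the master password itself.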
I read an excellent article by Nate Anderson in Ars Technica, “How the FBI found Miss Teen USA’s webcam spy”, about how they broke the recent Miss Teen USA ‘sextortion’ case. It got me thinking about how many of my friends and colleagues become temporary IT support personnel at the end of the year, trying to help their parents and loved ones through their various computer problems. While remote access tools are a tremendous help in solving these issues without having to travel to someone’s home, they do pose a risk. Even my wife’s favorite support tool, TeamViewer, has been targeted. By design these tools sit and listen for an incoming connection. If you do use these tools, make sure that you are using a non-trivial password or pass-phrase. Make sure the tool doesn’t load at start-up and requires that someone find and execute the program before a remote connection can be created. If possible, move the link to the utility out of the normal applications folder and into a sub-folder so that it is that much harder to ‘accidentally’ launch.
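On the “non-trivial pass-phrase” point, a multi-word random passphrase is easy to generate correctly. Here is a minimal sketch using Python’s `secrets` module (cryptographically strong randomness); the ten-word list is a stand-in for illustration, where a real list such as a Diceware list has thousands of entries:

```python
# Sketch: generating a random multi-word passphrase with the secrets
# module. WORDS is a tiny illustrative stand-in for a real wordlist.
import secrets

WORDS = ["copper", "lantern", "orbit", "maple", "quartz",
         "ribbon", "salmon", "tundra", "velvet", "willow"]

def passphrase(n_words: int = 4) -> str:
    """Join n independently chosen random words. With a full 7776-word
    Diceware list, four words give roughly 51 bits of entropy."""
    return "-".join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())
```

The important detail is using `secrets.choice` rather than `random.choice`: the `random` module is not suitable for security-sensitive selection.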
Trying to secure the Internet and all its users, content, and services is a difficult job. The Internet is a global resource that supports many different cultures and languages. The purposes of the various web sites that appear on the Internet vary from commercial sites selling products and services to informational sites about more topics than most people need or care to know about. There are a myriad of operating systems and applications used to produce and access those sites. As if Advanced Persistent Threats (APT) were not bad (or scary) enough, there is now a new term used to describe the attacks that security personnel are trying to defend all these operating systems and applications against. Welcome, Targeted Persistent Attacks (TPA)!
The first place I came across TPA was over at Tech Republic. In an interview with the Research Vice President at NSS Labs, they report:
“The truth of the matter is that an APT is sometimes made up of known exploits/vulnerabilities that are not that Advanced; so the term APT doesn’t define the action correctly. TPA highlights that the actor is going after a specific target such as company X or an entire industry sector like financial services, and will be persistent in attacking the target”
Uhh? So the reason we need a new category of product is because some malware writer slacked off and didn’t use the latest, most advanced exploit or vulnerability and instead used something that Microsoft already addressed a couple of Tuesdays ago?
To be fair, this blog post, which also appeared at NSS Labs, makes a better case for the new term (TPA, that is). What NSS Labs seems to be talking about here is threat or breach detection. Of course, there is also a TPA-focused Breach Detection Systems (BDS) product buyers’ guide.