I read all the time. I admit that I read less now that I found and use Audible (the Amazon audiobook service). While Audible is great, the books I chose to read (or re-read) this summer are probably not available there. I recently re-read the Blue Team Field Manual (BTFM) and read the Red Team Field Manual (RTFM) and the Operator Handbook for the first time. All are fantastic, and I recommend that anyone in any cyber role, from student to practitioner, consider keeping copies of all three near their desk.
About these books: they are nothing like popular science fiction. These are books you read sitting next to a computer (or computers) running Windows and/or Linux. They are literally handbooks and manuals; technical references that list instructions for how to accomplish various tasks and gather information.
The Blue Team Field Manual version 1.2 by Alan White and Ben Clark
I believe I purchased the BTFM at least two years ago. When I went looking for it I couldn’t find it, so I bought another copy at Amazon. For $14.95 you can’t go wrong. The BTFM was written by Alan White and Ben Clark, and version 1.2 is copyrighted 2017. My recent copy from Amazon says that it was printed in June 2020 (so you know it’s a hot item, or other people like me are losing their copies).
The BTFM is based on the NIST Cybersecurity Framework. The National Institute of Standards and Technology (NIST) Framework consists of standards, guidelines, and best practices to guide organizations seeking to manage cybersecurity risk. It’s a how-to guide for structuring how an organization defends its digital presence.
The reason the BTFM is so great is that it’s structured around the NIST Framework. The sections of that Framework are:
- Identify,
- Protect,
- Detect,
- Respond, and
- Recover.
The BTFM walks through the tools and concepts applicable to each section, such as describing Linux and Windows defenses in the Protect chapter and the steps to perform live triage of Linux and Windows systems in the Respond chapter. I call these descriptions, but they really are not. Each chapter has a list of commands that the reader can execute on a computer and then review. If you were unfamiliar with a command, you’ll likely use Google afterward and find out more about it. Along the way you’ll run the command, see the output, and hopefully create a memory, so that if you are ever looking for a way to copy the application logs from a Windows computer, you’ll recall there is a wmic command to list all log files and a wevtutil command to copy the individual files.
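That last step can be sketched in a short script. This is only a sketch: the log name and destination path below are placeholder assumptions, and `wmic`/`wevtutil` exist only on Windows, so the script builds the command lines and attempts to run them only on that platform.

```python
import subprocess
import sys

def build_list_cmd():
    # wmic nteventlog get ... lists the event log files registered on the system
    return ["wmic", "nteventlog", "get", "path,filename"]

def build_export_cmd(log_name, dest_path):
    # wevtutil epl <log> <file> exports a live Windows event log to an .evtx file
    return ["wevtutil", "epl", log_name, dest_path]

if __name__ == "__main__" and sys.platform == "win32":
    # Only run the real commands on a Windows host.
    subprocess.run(build_list_cmd(), check=True)
    subprocess.run(build_export_cmd("Application", r"C:\triage\Application.evtx"),
                   check=True)
```

Running the commands by hand first, as the BTFM encourages, is still the best way to build that memory; the script just shows how the two steps fit together.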
The BTFM is only 132 pages long, including the index and two scratch-pad pages. That said, it is not a fast read. If you take a copy and sit by a dual-boot computer, it will take days to go over the commands. I suggest trying to knock out a chapter per evening. Beyond the five chapters I pointed out earlier, there is a chapter 0 (zero) that lists key documents per the NIST Framework. There is a chapter titled Tips and Tricks that has various OS cheats and descriptions of tools; this really complements the material in chapters one through five. The last two sections contain various Incident Management Checklists (good questions that should be included in an investigation) and Security Incident Information about the (open source) VERIS schema.
I was recently asked this question…
I’m working on a project right now where my team wants to substitute passwords and usernames for biometric authentication. I have expressed my multiple concerns for the security of such a system, but the idea has now come up that we could use a system with at least 2 factors of biometric authentication, such as facial and voice recognition. While such a system is definitely better than one form of biometric authentication only, I still believe it is more insecure than using passwords. And even if it were not, I believe it is concerning from a privacy standpoint and makes our database a prime target for hackers.
To which I replied… When evaluating any authentication solution you should consider the FAR, FRR, and CER.
FAR = False Acceptance Rate, or when someone who is not an authorized user is granted access.
FRR = False Reject Rate, or when an authorized user is rejected.
CER = Crossover Error Rate, which is the error rate at the point where the FAR and FRR meet.
You want your FAR and FRR to both be very low. If your FAR were 1 in every 100 unique authentications, meaning that one time in every 100 an unauthorized person was granted access, that would be 1%. Is that acceptable given the number of people using the system?
You should try to account in your design for a FAR event (an unauthorized user gaining access) and have some other protection in place; that leads to an MFA (multi-factor authentication) scheme.
FRR is what will truly frustrate your authorized users, because they will be turned away and unable to access the system. That drives up the cost of operating the system, since some additional person will have to be standing by to grant the rejected but authorized person access.
The CER, or crossover rate, is a single number for comparing systems: it is the error rate at the threshold where the FAR and FRR are equal. The more accurate the system, the lower both error rates are at that crossover point. If you want to make sure that unauthorized users do NOT have access and that your authorized users are not being turned away, you want to minimize your CER.
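The relationship between the three rates can be illustrated with a few lines of code. The match-score lists below are made-up numbers for illustration only, not data from any real biometric system:

```python
# Hypothetical match scores: higher means the system is more confident
# the person matches the enrolled user. These values are invented.
impostor_scores = [0.10, 0.20, 0.25, 0.30, 0.45, 0.50, 0.55, 0.60, 0.70, 0.80]
genuine_scores  = [0.40, 0.55, 0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90, 0.95]

def far(threshold):
    # False Acceptance Rate: fraction of impostors scoring at/above the threshold
    return sum(s >= threshold for s in impostor_scores) / len(impostor_scores)

def frr(threshold):
    # False Reject Rate: fraction of genuine users scoring below the threshold
    return sum(s < threshold for s in genuine_scores) / len(genuine_scores)

# Sweep thresholds; the CER sits (approximately) where FAR and FRR cross.
thresholds = [t / 100 for t in range(0, 101)]
cer_threshold = min(thresholds, key=lambda t: abs(far(t) - frr(t)))

print(f"FAR at 0.5: {far(0.5):.0%}, FRR at 0.5: {frr(0.5):.0%}")
print(f"Approximate crossover threshold: {cer_threshold}")
```

Raising the threshold pushes FAR down and FRR up, which is exactly the trade-off described above; the crossover point is where the two curves meet.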
These are the names and links to profiles of the Board of Trustees for the Internet Society (as of this date in November 2019). According to their dot-org web page, the Internet Society’s vision is that “the Internet is for everyone”. These people are the Trustees of the Society at the time the organization arranged to sell the rights to the .org registry for an undisclosed sum to a private equity company called Ethos Capital.
Why is this important? In my opinion, if the Internet is truly for everyone, there should be a means for everyone to share their thoughts there. The dot-org registry, in my mind, has always been the domain where organizations, both for-profit and not-for-profit, could acquire a domain and have an opportunity to spread their views.
I’m disappointed that the Internet Society has chosen to sell the rights to the domain, which include setting prices and completing sales transactions for all dot-org domains. I believe this has a chilling effect on there actually being an Internet for everyone.
Update: I’m not alone in this opinion.
Another update: I’m not always a fan of these organizations but apparently they too think this sale is a bad idea.
Yet another update: It’s starting to look like this sale won’t happen but I’m waiting to hear about potential judicial challenges.
The Traffic Light Protocol (TLP) takes something that most people know and applies it to a new problem: the simple concept of roadway traffic lights, applied to information sharing. As defined by FIRST, an organization formed by cyber first responders, the Traffic Light Protocol is “a set of designations used to ensure that sensitive information is shared with the appropriate audience”.
According to the TLP, when information is shared between two parties (a source and a recipient), the traffic light colors tell the recipient what the source expects regarding how the information will be used.
The key to understanding TLP is its simplicity. Traffic lights or signals are something used and seen by drivers and passengers on roadways around the world.
It’s important that each person in an organization handling information understands and uses TLP the same way, all the time. TLP is successfully implemented in an organization when everyone uses the protocol to process information consistently.
While most roadway traffic signals have either two or three lights, the protocol defines four designations.
TLP:Red – Information is classified as RED when the party sharing it intends that it not be disclosed further. Use of this information should be restricted to the participants only. I tell people that when information classified as TLP:Red is shared with you, that information should stay with you.
TLP:Amber – Information classified as AMBER is intended for limited disclosure. That means you should only share this information with people in your organization. If you work in a company’s Information Security department, when you receive information classified as TLP:Amber you can share it with others in that department. Some organizations stretch this interpretation to mean anyone within the company. Specific company policies and procedures should clarify this.
TLP:Green – Information classified as GREEN is also limited disclosure; however, disclosure should be limited to the community: people in your organization and in other organizations with whom you regularly work. As with TLP:Amber, your organization’s policies and procedures should define the community.
TLP:White – Information classified as TLP:White “carries minimal or no foreseeable risk of misuse” and can be shared broadly. It’s important to note that information classified as TLP:White is still subject to other organizational information classifications (such as Secret, Top Secret, or NoForn), and copyrights should still be observed.
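The four designations boil down to a widening set of audiences, which makes them easy to express as a simple lookup. This is a sketch of what a sharing check might look like in code; the audience tiers here are illustrative assumptions, and your organization’s policies define the real boundaries:

```python
# Audience scopes, ordered from narrowest to broadest. These tier names
# are invented for illustration; real policy defines the actual groups.
SCOPES = ["recipient_only", "organization", "community", "public"]

# Widest audience each TLP designation permits, per the definitions above.
TLP_MAX_SCOPE = {
    "TLP:RED": "recipient_only",
    "TLP:AMBER": "organization",
    "TLP:GREEN": "community",
    "TLP:WHITE": "public",
}

def may_share(tlp_label, audience):
    """True if information carrying tlp_label may reach this audience."""
    allowed = SCOPES.index(TLP_MAX_SCOPE[tlp_label])
    return SCOPES.index(audience) <= allowed

print(may_share("TLP:AMBER", "organization"))  # within your org: allowed
print(may_share("TLP:RED", "organization"))    # beyond the recipient: not allowed
```

The point of the protocol is that everyone applies the same check the same way; the code just makes the “widest permitted audience” idea explicit.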
OK. Once you’ve downloaded Ubuntu, the next decision will be where to install it. My suggestion is to go virtual. I run Linux on my corporate laptop and on my personal iMac, both using VMware Fusion. As of this writing I am running v8.5, and the current version for Mac is v10. The other route is to install VMware ESXi on a server and create a Linux virtual machine. That’s a great way to learn not only about Linux but also about virtualization.
One of the modern corporate technology problems I used to deal with almost every working day was the screen-saver settings on my corporate laptop. The corporate security team has done an amazing job of locking my PC down and making it safe. The downside is that they control the screen-saver settings, so after 5 minutes my PC displays the screen saver and requires a password to make it go away. If I’m presenting or delivering a demo, this is not good.
Then I purchased and installed a WiebeTech Mouse Jiggler (www.cru-inc.com/mj3). This tiny USB device looks like another mouse to my PC. More importantly, it behaves like a mouse that is always moving; not moving so much that you see the pointer move on the display, but enough that the screen saver doesn’t start.
Now I can plug in this very small USB device during preparation for a presentation or demo and not have to concern myself about the screen saver.
The Jiggler has other uses. If you are a security or forensics professional you should probably have one of these in your pocket at all times. If you are asked to examine a computer plugging the Jiggler in will make sure that the screen saver (and potentially a password challenge) doesn’t happen there.
The Internet is changing yet again. One of my predictions for 2018 is that everyone will witness a migration from corporate or private data centers to the ‘Cloud’, or Internet hosted data centers. There have been tremendous advances made in both securing the Cloud and sharing with the broader technical community how to secure the Cloud.
Some important reading material about Cloud security includes:
Amazon’s Shared Responsibility Security Model,
Azure’s Security Center, and
Google’s Application Layer Transport Security.
I just finished the course ‘International Cyber Conflicts’ at Coursera. The course was developed and led by professors Sanjay Goel and Kevin Williams from the State University of New York at Albany. This was a five week course that consisted of recorded presentations with inline questions; discussion forums; and end of week quizzes.
The presentations and readings for this course were good. Several of the readings referred to Cybersecurity and Cyberwar by Singer and Friedman, which I first obtained through my local library on an inter-library loan. After a second reading I enjoyed the book enough that I purchased my own copy via Amazon.
I would say the only downside to this course, like others I have viewed, is that the discussion forums were not really that good. The forums merged comments from previous offerings of the course (from about a year ago). I can appreciate why the instructors did this, in an attempt to seed the discussions and get more people contributing, but I didn’t think it worked. And as with many Coursera offerings, some people just don’t understand, or don’t seek, to contribute to the discussions. Tighter moderation might help there.
I enjoyed the course and would recommend it to anyone interested in cyber security. The course is free unless you request a completion certificate.
I recently came across two very good articles about USB forensics.
The Hitchhiker’s Guide to USB Forensics was published at the Cyberforensicator site by Oleg Skulkin and Igor Mikhaylov. It is a very well-thought-out and well-written description of how to determine, through operating system analysis, what files have been copied to a USB device. They used a Windows 10 virtual machine and the Oxygen Forensics AXIOM tool to conduct a basic analysis, locating evidence about what files have been copied or moved.
I was looking for references on how to investigate just the USB drive itself. I found the SANS Computer Forensic Guide to profiling USB thumb drives on Win7, Vista, and XP, a blog post by Rob Lee dated September 2009. This was more in line with what I was looking for, given that I had found the USB device and wanted to start treating it as evidence. Rob also wrote about the differences between analyzing USB thumb drives and drive enclosures. There was much good info in both posts.
Today the New York Times Opinion pages offered an editorial titled ‘Combating the Real Threat to Election integrity’. Authored by the Times Editorial Board, the essay has two faults. One is that it continues to pile on the story that the Russian Government is responsible for cyber attacks aimed at the United States electoral system. The second, perhaps more glaring, fault is its suggestion that the United States Federal government should somehow pay for securing the voting infrastructure to be used in future elections.
The Times Editorial Board’s attribution of various cyber attacks to the Russian government is just plain wrong. While various unnamed sources in the United States Intelligence Agencies (we are told by the US media) have evidence that the Russian Government is responsible for these attacks, I’ve not read any report from any credible Internet security specialist that can provide hard evidence to back these claims.
Suggesting that the US Federal government pay to secure the electoral system is wrong. A vote is the right of every US citizen. Sadly, slightly better than half choose to exercise this right. However, the US voting system is structured so that citizens vote where they live, and local election officials should bear the responsibility of securing elections. Providing them with financing courtesy of the Federal government is a mistake, in that those local officials would not be accountable to the source of the funds. Local election officials and local governments should bear the cost of securing the vote and be accountable to local citizens.
What is needed from the US Federal government are standards regarding cyber security and the electoral process that local election officials can both understand and implement.