Saturday, September 29, 2012

UltraEdit macro to strip out specific values

I absolutely LOVE UltraEdit...since the first day I used it. One of the many things that make this tool so valuable and almost irreplaceable is the ability to create your own macros, and the power those macros have in processing almost any type of file.

I have been working on a project that has a LOT of little things I have to add all over the place. Because of this, I wanted to create a file with each variable on its own line. The problem is that the variables are of varying length, and the easiest place they exist is in a ton of if/else statements. So I created a macro that, after the if/else block is copied into a new UE document, creates that second file list. To use it, you need:

1. a document that you want to strip the individual variable names from.
   - This document needs to be named and saved.
2. a constant start and stop delimiter for each variable.
   - In my case, a dollar sign ($) starts my PHP variable name and a ' =' directly follows it. However, with the power of regexes, you could have a set of acceptable start and stop delimiters, but I have no intention of covering that tonight.
3. a second document in UE that will hold the stripped-out variable.
   - This document needs to be named, saved, and directly to the right of the original in the tab order.

To create a new macro in UE, without using the record function, simply follow the menu clicks here:
Macro->Edit Macro->New Macro
Then give it a name (and a hotkey if you'd like). Once you have done this and clicked OK, you should see the name of your macro in a dropdown bar at the top of the screen, and the left-side window should be empty.

There are a lot of options in the right-side window that you can select and add to the left-side window. However, for the purposes of this post, simply copying in the lines below (modifying the search type and expression for your own needs) should work just fine for anyone.

UE Macro - StripVars
    ColumnModeOn
    PerlReOn
    Find RegExp "\$posPl[a-zA-Z]*\ ="
    Copy
    NextDocument
    Key END
    Paste
    PreviousDocument

Some final comments:
- The ColumnModeOn is optional. It just happened to be on when I set this one up.
- PerlReOn: This indicates that the regular expression search on the next line will be using a Perl RegEx. There are other options that can be used instead of Perl RegExs.
- The RegExp I am using here should be self-explanatory...but: what I am looking for are all words (only one each time this macro is run) that start with $posPl, have any number of alpha chars, and end with a space followed by an equals sign. I happen to know that the variables I want are the only words that will match this pattern, and that this pattern will only match the variables I want.
- The reason to have both documents already saved, with the new doc to the right of the original, is for the macro commands NextDocument and PreviousDocument.

I will leave the figuring out of this macro, and additions such as saving after each stripping of the word and pasting into the new file, to anyone who wants to spend some free time learning more of this AWESOME tool!!!
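As an aside, the same extraction doesn't have to live in the editor at all. Here is a minimal Python sketch of the same idea; the sample input and the function name are made up for illustration, and only the $posPl pattern comes from the macro above:

```python
import re

# Match PHP variables that start with $posPl, have any number of alpha
# characters, and sit directly before a " =" (mirrors the UE RegExp).
PATTERN = re.compile(r"\$posPl[a-zA-Z]*(?= =)")

def strip_vars(source_text):
    """Return each matching variable name on its own line."""
    return "\n".join(PATTERN.findall(source_text))

php_block = "if ($flag) { $posPlHeight = 10; } else { $posPlWidth = 20; }"
print(strip_vars(php_block))  # $posPlHeight and $posPlWidth, one per line
```

The lookahead `(?= =)` keeps the " =" out of the captured name, which saves the Key END cleanup step the macro needs.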

Sunday, September 9, 2012

My thoughts on a new SSL protocol

The prevalence of encrypted traffic on the internet and on network LANs/WANs indicates that more and more people, companies, and even governments are trying to be more proactive in the protection of data. This also makes it harder at times for network security analysts to tell the good traffic from the "bad" traffic. If analysts have the authorization and the tools to do so, they can peek at the traffic and see what's going in and out.

However, I don't know that the majority of traffic analysts are authorized or have the tools to tear down these sessions in order to peek into them. There are obviously concerns that span everything from legal questions to real-time session teardowns that affect the actual session itself. Another concern is how much an organization can spend on appliances that provide the mechanism to "peek" into the SSL traffic. I would assume that some serious cost-based analysis would have to happen for most organizations.

So, I wonder about the possibility of a new SSL protocol, or something similar. I haven't done any research to see what options have been explored by those smarter than I, so there is a strong possibility that what I am thinking about has already been tried and found wanting. But, here's what I am thinking:

An SSL protocol that utilizes a "secondary transmission" protocol. I am basically proposing that, in addition to the normal session initiation, traffic, and teardown, an additional packet be sent that contains the name of the authenticated user and the resource(s) they are accessing/downloading/uploading/modifying, as well as the sequence number of the previous SSL packet, a time-of-conversation value, and a flag to indicate whether an anomaly occurred (such as the username changing mid-conversation).

The transmission of this secondary packet should be based upon some pseudo-random algorithm that makes it harder for a malicious actor to spoof. Additionally, a number of these packets should be sent, with the total count based upon an algorithm using information specific to the SSL session itself.

As for the destination of this secondary packet, I haven't decided on what I think is best. I am leaning towards a UDP-based protocol that sends either to the other host participating in the SSL conversation or to the local host's logging facility (such as syslog on port 514 on a *nix machine).
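To make the proposal more concrete, here is a rough Python sketch of what assembling and sending one of these secondary "audit" packets could look like. To be clear: the field names, the JSON encoding, the helper names, and the destination port are all my own assumptions for illustration; no such protocol exists.

```python
import json
import socket
import time

def build_audit_packet(username, resources, prev_seq, anomaly=False):
    """Assemble the proposed secondary payload: authenticated user,
    resources touched, the previous SSL packet's sequence number,
    a time-of-conversation value, and an anomaly flag."""
    return json.dumps({
        "user": username,
        "resources": resources,
        "prev_seq": prev_seq,
        "conversation_time": time.time(),
        "anomaly": anomaly,
    }).encode()

def send_audit_packet(payload, host="127.0.0.1", port=514):
    # UDP, as discussed: either to the SSL peer or to local syslog (514).
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(payload, (host, port))

packet = build_audit_packet("alice", ["/reports/q3.pdf"], prev_seq=1042)
```

A real design would also need the pseudo-random send schedule and some integrity protection on the payload itself, otherwise the audit packet is as spoofable as anything else.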

Like I stated earlier, this thought may have already been analyzed and found wanting, but I wonder if there isn't something more in general that we can do with SSL to allow it to maintain the privacy it affords (the legal concern), the protection it facilitates, and the durability it needs WHILE supporting a mechanism for auditing its validity.

I would be very interested, after I finish some other stuff in my project queue, in attempting to put this into practice. But first, I will definitely spend some time researching what, if anything, has already been attempted.

Passed the GREM exam

I was going to post this almost two months ago...when I had actually sat and passed the GREM exam. However, I am kind of glad I didn't, as I heard an interesting opinion the other day at lunch.

I don't want to incorrectly attribute a quote that I don't fully remember. That said, I do remember the "gist" of the comment and subsequent conversation: the SANS FOR610 class should be renamed to the "90% malware behavior analysis and 10% reversing malware course." Quite a mouthful, but not necessarily incorrect. I have spent a good deal of time thinking about this statement and the subsequent conversation, which also led me to think about my certifications (current and possible future ones).

Before anyone jumps on me (although I don't think either of the two people who actually read this thing will get upset about this) - I believe the above statement about a new name for SANS FOR610 is relatively accurate. However, I don't believe that this is a negative thing, which is how I believe the person making the statement meant it.

The reason I believe the statement is accurate is that, having attended a FOR610 session that Lenny Zeltser instructed, it is how the course was taught, at least in my own estimation. Furthermore, while I don't want to put words in Zeltser's mouth, I do seem to recall that he favored behavioral analysis prior to doing any actual reversing. This happens to be an approach that I agree with completely. Which is why, although I agree with the "renaming" statement, I believe the course teaches an appropriate and effective approach to the reverse engineering of malware. The statement maker had also said that using an old, free version of IDA and OllyDbg was not actually reverse engineering. This is where I started to disagree with this person's assessment.

I should note that the individual making these statements is probably at least 500 times more experienced and intelligent than I. In point of fact, I could probably work for this person for quite a while and still not learn all that he seems to know. However, I think sometimes people take these types of statements, when made by seasoned professionals such as this person, and automatically accept them without question. Too often I have seen people brand new to the field of network security take these types of statements as gospel truth and run with them. This is only a personal side concern of mine, in that I wish the community of practice did more to spread the knowledge. I think that the blogs and twitter feeds and articles are all great, but I can't help but wonder if there's not more that can be done to help out in our own local communities. But that is not really the point of my thoughts here.

Back to my point here...what I think about the value and name of the GREM certification that is supported by the SANS FOR610 course.

Personally, I feel that the course, at least as it is taught by Zeltser, is rather effective and technically relevant. Of the certifications I have, I think that the GREM is probably one of the hardest to pass, and it definitely requires that a person either have some experience in reversing malware or a photographic memory. But I will get back to the certification topic in a minute. First, I wanted to give my impression and opinion of the FOR610 class.

My Opinion of SANS' FOR610
I can't speak highly enough of the efforts that Zeltser has obviously put into the development and upkeep of the course. Furthermore, while I agree with the "90% behavioral analysis and only 10% reverse engineering" statement, I don't think it's a derogatory thought at all. Quite the opposite: I think it's an accurate statement...even if my recollection of what Zeltser said about this very thought is incorrect. Why do I think this? It's simple really: it's the best method to reverse engineer malware. Quite frankly, I think it is the best method to reverse engineer most software.

If analysts were to just start off by throwing the executable into a debugger or decompiler, they might, might, be able to tear down the potentially bad executable and identify what it is doing (or going to do). They might even be able to identify its different branching options given whatever environmental conditions the executable checks for. However, how many analysts out there are experienced enough to even look at the portions of the code that aren't actually stepped through by the debugger/decompiler?

Having been around some others who have claimed to be "reversers", it became obvious that the same thing affected their ability to effectively reverse engineer the software: the shortest path, or, if you like, the path of least resistance. The assumption it is almost considered criminal NOT to make is that the code is going to execute the same way with little variance. In this case, the analyst runs the risk of missing logical branches that execute only when certain tests are met (or not met, even). While an if/else branch may be easy for an analyst to recognize, and a jnz instruction for what it is, I believe the possibility abounds for complacent and/or inexperienced analysts to miss crucial operations of the code when they don't first spend some time examining the behavior of the potentially malicious executable. Thus, I adhere to a belief that the initial effort to reverse any software, not just malware, must start with a behavioral analysis. While the 90% metric may not apply to every effort, I think that to be effective AND reliable, an analyst should split the time between the two sides of the effort closer to 50/50.

I do realize that the reverse engineering "purists" are probably all shaking their heads at this point. And if all analysts were able to take the time to examine each and every line of the executable in question, then I would be shaking my head with them. However, and I think most in the security domain would agree, there is a time factor in reversing malware. Now, while this time factor exists with probably all other types of software reverse engineering, I believe it to be a more crucial factor when one is attempting to reverse malware. Because time is a factor in identifying the actions of malware, I believe that the initial behavioral analysis complements the need to meet a time demand and get potentially vital information out to the community of practice.

I don't know if I can, or should, dig into this topic any more. The bottom line is that I agree with the idea that the FOR610 class leans more heavily towards behavioral analysis than actual reversing of the code. However, I believe that it must be this way...much like I believe that reverse engineers of malware should follow the same approximate breakdown in their efforts.

As I initially mentioned at the start of this post, the passing of the GREM exam as well as the above referenced conversation really got me thinking again about my own current certifications, which ones I wanted to bother keeping, and which ones I still have a personal desire to complete. Coincidentally, I have also had a few conversations with people aspiring to get into the network security realm and/or software engineering.

My own thoughts, and these conversations, have really caused me to change my thinking a little bit. While I have always agreed that a resume and accompanying certs are helpful on job applications but that experience trumps all...I have started to wonder about the possibility of having too many certs. Or, at least the possibility that 'listing' too many certs can be detrimental to a person's job efforts. I don't know what I ultimately think about this...but I am starting to lean towards keeping whatever certs I have to myself and only providing the list of them when necessary. That doesn't mean I am done getting new certs, though...there are still a few I want to get, and I am 95% sure that I want to go for the GSE in the next year.

PHP warnings about headers and the session_start function

If you read the voluminous amount of information available on the internet, you will learn that HTTP headers must be sent before any other output, including blank lines. PHP began enforcing this with version 4.3, and it can cause some headaches if you are dealing with sessions and/or header redirects.

I have been working on a PHP site for someone and have actually had to deal with this more than once. The indicator that something is wrong is a 'Warning' message that will be displayed in the browser that looks similar to:

Warning: Cannot modify header information - headers already sent by (output started at /mysite/htdocs/utils/config.php:37) in /mysite/htdocs/controllers/processLogin.php on line 27

For the purposes of this post, the content of these files does not matter so much. However, to be clear about what I do have in my code:

- in config.php, which I use solely for back end DB connections, I have at the top of my file:

- in processLogin.php, I have:
   if ($count == 1) {
        $_SESSION['username'] = $myusername;
        header("location: ../index.php?p=welcome");
    } else {
        header("location: ../index.php?p=autherror");
    }
where the first 'header()' statement is the line 27 referenced in the warning message above.

Having the header() function after a session_start() call CAN cause the same warning in certain conditions. The bottom line is that, at the start of each file where it is needed, you should have the session_start() call before anything else, PHP or HTML. However, THIS was NOT my problem.

If you noticed, the 'Warning' line states that the output started at line 37 of config.php. Interestingly enough, my file only had 32 lines of code, including the opening and closing PHP tags, the closing tag being the 32nd line. What I didn't realize was that I had extra blank lines after the closing tag, although I had immediately noticed that the line being reported was a higher number than the number of lines in that file.

So, as you may have guessed, the problem was the blank lines. Even though I had the session_start() calls in the correct locations, the blank lines were being processed in between the session_start() and the header() calls, thus violating the rule that the HTTP headers MUST be the first thing sent for the page.
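Hunting for stray blank lines like these by eye is tedious, so here is a small, hypothetical Python helper that flags PHP source with output trailing the final closing tag. The function name and sample strings are my own invention; the one-forgiven-newline check reflects PHP's behavior of swallowing a single newline directly after ?>.

```python
def has_trailing_output(php_source):
    """Return True if output-producing text follows the final '?>' tag.
    PHP forgives exactly one newline immediately after the closing tag;
    anything beyond that (more blank lines, spaces) is sent as output
    before header() or session_start() can do their job."""
    tag_pos = php_source.rfind("?>")
    if tag_pos == -1:
        return False  # no closing tag at all, so nothing can trail it
    return php_source[tag_pos + 2:] not in ("", "\n")

# A file ending cleanly vs. one with blank lines after the tag:
clean = "<?php $db = 'ok'; ?>\n"
dirty = "<?php $db = 'ok'; ?>\n\n\n"
print(has_trailing_output(clean), has_trailing_output(dirty))  # prints: False True
```

Pointing a loop like this at a directory of .php files would have found my config.php problem in seconds.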

Wednesday, September 5, 2012

SNORT Rules Cheat Sheet

I like...make that LOVE...cheat sheets and easy-to-use Quick Reference Guides. I like them so much, I make my own when time permits. So, I thought I would share one here. I got tired of hunting for wherever I had last stored the latest SNORT manual on my computer. Although I like to think I am organized, I do know that I am forgetful, so it is not uncommon for me to forget where I have placed things. My solution was, like I said already, to create a cheat sheet of SNORT rule options.

This cheat sheet is basically a version 1 document...only slightly past the draft stage. :-) However, it is a fairly good listing and explanation of the different options (as taken straight from the manual), and the base format, of SNORT rules. I welcome any comments, complaints, or suggestions.

The links below are for both the PDF and PPTX versions of the cheat sheet.

Snort Rules Cheat Sheet (PDF Format)      Snort Rules Cheat Sheet (PPTX Format)

Now that I am not trudging through schoolwork until 3 a.m., I can finally get back to working on a desktop app that I started for creating/validating Snort rules. Last time I worked on it, I was about 80% done with the app. However, I had also started thinking about either changing it or forking it to create a tool for describing a good number of packet standards (I forget all the protocols I had already addressed). I started thinking of it as more of a reference and learning tool for anyone who wants to get started with intrusion detection, or even those who just want a quick desktop app reference guide.

One really, REALLY, cool thing I did already start to add was an event listener for mouse-clicks on the different sections (down to the bit level) of the packet format displayed. This event listener would create at least one (depending on a few variables) set of statements for tools such as tcpdump (live capture or from file) or tshark (for stripping out from a pcap file those packets an analyst may want to look at in more detail).
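As a sketch of what that event listener might emit, here is a hypothetical Python function mapping a clicked packet field to capture commands. The field names, the filter table, and the command templates are my own illustrative assumptions, not the app's actual code:

```python
# Map of clickable packet-format fields to BPF capture-filter expressions.
# These particular fields and filters are illustrative assumptions.
FIELD_FILTERS = {
    "tcp.syn": "tcp[tcpflags] & tcp-syn != 0",
    "ip.ttl": "ip[8] = 64",
    "udp.dstport": "udp dst port 53",
}

def commands_for_field(field, pcap_file=None):
    """Return tcpdump statements for a clicked field: a live-capture
    command, plus a read-from-file variant when a pcap is given."""
    bpf = FIELD_FILTERS[field]
    cmds = [f"tcpdump -n '{bpf}'"]
    if pcap_file:
        cmds.append(f"tcpdump -n -r {pcap_file} '{bpf}'")
    return cmds

# Clicking the SYN flag with a loaded capture file would yield both variants:
for cmd in commands_for_field("tcp.syn", pcap_file="session.pcap"):
    print(cmd)
```

The same table could carry tshark display-filter equivalents per field, which is roughly the "depending on a few variables" part.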

I would be greatly interested in any feedback on this idea. I do have a website I am finishing first but I hope to have a release version of this tool, at least as a teaching/reference tool for packets, in the next 2-3 weeks. Not sure if I am going to keep the Snort rule testing part in this as Snort already has this functionality.

Monday, September 3, 2012

Ubuntu 12.04 and VMWare Player

In the last few months I have been swamped...COMPLETELY swamped...with not only my own projects and things around the house but also projects for other people. Unfortunately, part of my time constraint issues have stemmed from one of my test machines being infected, not one but TWO hard drives failing, and a network connection at home (provided by Comcast) that generally sucks every time it rains.

Some of the testing I have been working on over the last few months has been a little more frustrating than running across a read-only partition that I posted about earlier. While I absolutely love Ubuntu AND SecurityOnion, I have been rather annoyed as of late with trying to set up a distributed setup of Ubuntu 12.04 (32-bit) virtual machines, in order to do some testing of some newer SO features.

It is first important to note that the installation of SecurityOnion is EASY and FAST and is NOT the cause of my heartache with the testing I am attempting to do. In fact, I am ONLY installing the OS right now and will then move from a base install of the OS to installing the latest/greatest SO version.

The problem is Ubuntu 12.04 itself. I am using a downloaded ISO and VMware Player and am now into hour 3 of the install process!!! Unfortunately, I do not have enough time to troubleshoot the issue, but I suspect it is nothing more than a slow connection...although I am letting VMware's Easy Installer run, which may also be the culprit. The ISO I am using is coming from a network location, and it "appears" as if the installer is trying to ALSO retrieve packages from the repo. This is NOT what I want it to do, so I have 'skipped' the retrieval of a LOT of files and was very interested to see the end result. However, the VM is now HUNG at the Easy Installer's installation of VMware Tools.

Bottom line: things that are "designed" to help us or make things faster/automated STILL run a high risk of slowing down progress. Some irony though in this process is that the actual SO image (Live CD) is designed with ease/speed of install in mind...and it DOES work!

I have a new plan now:
- perform OS install and NOT use the Easy Installer (for a total of three VMs)
- establish host only network for said VMs
- stress test VMs to identify any performance issues NOT related to IDS tools
- install latest SO version from repo (on all three VMs)
    - one to serve as the collection point for data from the other two

In parallel to the above, I am also going to set up three VMs using the latest SO ISO file (again, one manager and two sensor VMs) and begin testing from there.

FreeBSD Read-Only filesystem...and changing to read-write

I have had some seriously annoying issues involving time, especially in the last three months. The worst part has been the "little" things that pop up that have caused delays in all of the projects I have undertaken during this time. That said, this is the first of a few quick posts that I wanted to make.

Anyway, the primary point of this post is to record the command used to make a FreeBSD system read AND writable when it is read-only. I have been recently testing a FreeBSD system and was very annoyed to find that the system installs as read-only. It really didn't take long to find the right command from the manpage...but putting it here makes it even easier to find later...and maybe helps someone else.

So, if you want to make ALL partitions on a FreeBSD system writable:

#> mount -u -a -o rw

-u   means update, i.e., change the options of an already-mounted file system
-a   means all, which in this case means all partitions listed in fstab
-o   means options, where you can see I used 'rw'...intuitively this should be understood as "read, write" privileges.

Because I have used -a, I do NOT need to list any specific partition (nor its options). The -u also implies that I want the command to update the listed (or fstab-defined) mount points. I have NOT tried this command on any flavor of Linux; be aware that -u is a BSD option, and on Linux the usual equivalent is 'mount -o remount,rw'...but mount itself is certainly NOT a "new" *nix command. :-)

Edited (5 Sep 12): One thing that I didn't think about was the persistence of the read-write property. The above command will make your read-only partitions writable, but the next reboot or the next mount command will use the options in the fstab file. To make this change permanent (usually*), you would need to edit:
#> vi /etc/fstab

The format of each line of this file is:

device   mountPoint  fileSystemType options   dump   pass#

There are whole chapters of books written about how to use the mount command and fstab files, so I have no intention of rehashing all of that here. The important part here is that the options column be changed from ro (or just 'r') to the desired rw.
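For example, a root-partition entry before and after the change might look like this (the device name is a made-up FreeBSD-style example):

```
# before: mounted read-only
/dev/ada0s1a    /    ufs    ro    1    1
# after: mounted read-write
/dev/ada0s1a    /    ufs    rw    1    1
```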

I mentioned above that the change to read-write would usually persist upon modification of the /etc/fstab file. However, this is NOT always the case...but I don't have time to write anything more about that. Maybe a future blog post or another update to this one.