Saturday, November 14, 2009

My Vista OGRE, Part II

A little more today on my trials and tribulations of using OGRE. :-)

To better explain a couple of things from my previous post:
The OGRE App Wizard works well with VS 2008. Creating a new C++ project shows the option to create a new OGRE application; after you assign a name, the "next" button opens the app wizard. The defaults are probably suitable for most people, unless you need options such as CEGUI support. The one thing to be careful about is copying files/folders from the OgreSDK location to your project as needed. One specific issue is with the OgreMain.lib and OgreMain_d.lib files: if a project is created in a directory other than the OgreSDK directory, these files will not be found automatically and the app will crash. They can be made available by copying the 'OgreSDK\lib' folder to the project directory, by opening the executable from the 'OgreSDK\bin\debug' folder, or by making the appropriate changes in VS to point to the right path.
Secondly, I didn't elaborate originally on setting the environment variable for OGRE (in Vista), so I will do that now:
- As an administrator, this can be done through the GUI: System Control Panel, Advanced Settings, Environment Variables
- As a regular user, from the command line, enter: setx OGRE_HOME c:\OgreSDK
Lastly, OGRE is not fond of spaces in file/folder names, so use them with caution.

So, now onto where I am in this journey:
After searching around Google's 3D Warehouse for the "perfect" city to use, I ultimately decided to create my own. Had I fully envisioned the amount of time this would take, I would not have been so picky. Creating my own street layout took two attempts. I took a more Agile approach and just "dove right in." Had I read a little first, I would have found out about connecting pieces (by endpoint, in my case), rotating pieces, and filling in areas. The second attempt, with about three hours of work, produced a street map where X = [-1300, 1300], Y = 0, and Z = [-1120, 1100], with 40 intersections of different types (90-degree intersections, 3-ways, and 4-ways). I was really impressed with myself when I finished this, exported it, and used it in my OGRE application test.

However, what I didn't initially account for was that, in order to script the movement of vehicles, I needed to know where each car would be when it entered an intersection and where it would be when it left (depending on the direction of the turn), including yaw and turn translation. Thus began a LOT of trial and error identifying each position. It didn't help that the road models I used, including the straight sections of road, were slightly skewed. I know that there is a way in the OGRE API to use the mouse to identify a point on a mesh. However, I dove into my trial-and-error method thinking it would be a lot faster than it was (12 hours to identify entry and exit positions for 22 of the 40 intersections).
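
In hindsight, much of that trial and error reduces to a little geometry: given an intersection's entry point and exit point, the yaw the car needs is just the angle of the direction vector between them. A minimal sketch (illustrative Python rather than OGRE C++; the coordinates are invented, not from my street map):

```python
import math

def yaw_degrees(entry, exit_pos):
    """Yaw (rotation about the vertical axis) needed to face from
    entry toward exit_pos. Points are (x, z) ground-plane pairs,
    matching the street map's X/Z layout with Y = 0."""
    dx = exit_pos[0] - entry[0]
    dz = exit_pos[1] - entry[1]
    # atan2 handles every quadrant, so turn direction needs no special cases
    return math.degrees(math.atan2(dx, dz))

# Hypothetical intersection: approach heading +Z, leave heading +X (a right turn)
print(yaw_degrees((0, -40), (0, 0)))  # 0.0   (facing straight down +Z)
print(yaw_degrees((0, 0), (40, 0)))   # 90.0  (facing +X after the turn)
```

In OGRE itself the result would feed a scene node yaw call instead of a print.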

After deciding that I had enough intersection information for multiple paths, I decided it was time to test the movement of my camaro from/through intersections. I used OGRE Intermediate Tutorial #1 as a guide. If my camaro had had animation defined, then the tutorial as written would have been perfect. But I didn't have animation defined, so out of luck there! What I did was use this same tutorial and make some minor changes:
- deleted the "Knot" entities and Robot entity
- Added my city.mesh
- Added my camaro.mesh
- commented out (for now) the mAnimationState lines from inside the if/else's that are inside the MoveDemoListener::frameStarted function
- changed the return to 'true'.
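
Stripped of the animation calls, the tutorial's frameStarted() logic boils down to: pick the next waypoint, compute a direction and distance, then move a little each frame until the waypoint is reached. A language-neutral sketch of that loop (illustrative Python; the names loosely mirror the tutorial's mDirection/mDistance/nextLocation, while the real OGRE code uses Vector3 and the scene node API):

```python
import math

class WalkController:
    """Moves an object along a list of (x, y, z) waypoints at a fixed
    speed, one small step per frame -- the same idea as the tutorial's
    frameStarted() once the animation lines are commented out."""

    def __init__(self, position, waypoints, speed=35.0):
        self.pos = list(position)
        self.todo = list(waypoints)
        self.speed = speed
        self.direction = None   # unit vector toward the current target
        self.distance = 0.0     # distance left to the current target

    def _next_location(self):
        """Pop the next waypoint and aim at it; False when none remain."""
        if not self.todo:
            return False
        target = self.todo.pop(0)
        delta = [t - p for t, p in zip(target, self.pos)]
        self.distance = math.sqrt(sum(c * c for c in delta))
        self.direction = [c / self.distance for c in delta]
        return True

    def frame_started(self, dt):
        """Advance one frame (dt seconds); returns True to keep rendering."""
        if self.direction is None:
            self._next_location()            # idle: try to pick a target
            return True
        move = self.speed * dt
        if move >= self.distance:
            move = self.distance             # don't overshoot the waypoint
        self.pos = [p + d * move for p, d in zip(self.pos, self.direction)]
        self.distance -= move
        if self.distance <= 0.0:
            self.direction = None            # arrived; re-aim next frame
        return True
```

Feeding frame_started() a dt in a loop walks the position from waypoint to waypoint; in OGRE the dt would come from evt.timeSinceLastFrame.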

All of this worked well, except my car is facing sideways as it drives :-), which I will work on later tonight.

Sunday, November 8, 2009

My Vista OGRE

The classes I am taking right now for my Masters are definitely difficult. In one class, we have a group project to create a real-time version of YouTube. That one has been fun (after moving quickly away from ffmpeg to the Red5 framework) and educational.

My other class is an individual project to create a 3D driving simulator with a focus not only on the data structures, but on a "collision-avoidance" algorithm. I should add that, like in real life, the requirements have changed slightly each week. However, the project has so far been a fun challenge. The biggest change is that we are to create it from the ground up, whereas initially we were going to be given the underlying model and were to just focus on the avoidance of collisions. This change brings me to the point of this post: OGRE and installing it on Vista.

In order to meet the requirements of this project without digging into my wallet, I decided to use the following:
- OGRE 1.6.4 (for Visual C++.NET 2008) Prebuilt SDK:
- Google Sketchup 7 (free version):
- Ogremeshexporter:
- DirectX or OpenGL (personal preference)
- OGRECommandLineTools (needed for OgreXMLConverter, which allows for conversions to the mesh format):

The reason I went with the prebuilt instead of building OGRE from source was more a time concern than anything, although I will say that as of a month ago, the source as downloaded from sourceforge was buggy.

The installation of the three products was as easy as double-clicking the installer for each one in the order above, setting/verifying that the OGRE_HOME environment variable is set to "c:\OgreSDK", and restarting (not required, but I preferred to). To verify that OGRE was installed correctly, the following folder structure should exist:
----\lib (should have "OgreMain.lib" and "OgreMain_d.lib")
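
That verification step can be automated with a few lines (illustrative Python; the required file names are the ones above, but the function itself is hypothetical, not part of the SDK):

```python
import os

# Files the verification step above looks for, relative to the SDK root
REQUIRED = [os.path.join("lib", "OgreMain.lib"),
            os.path.join("lib", "OgreMain_d.lib")]

def missing_sdk_files(sdk_root):
    """Return any required SDK files absent under sdk_root."""
    return [rel for rel in REQUIRED
            if not os.path.isfile(os.path.join(sdk_root, rel))]

# Typical call, using the environment variable set during the install:
# missing_sdk_files(os.environ.get("OGRE_HOME", r"c:\OgreSDK"))
```

An empty result means the lib files are where the App Wizard projects expect them.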

Outside of the OGRE examples, Google Sketchup can be used to create a new mesh for OGRE. Sketchup can import directly from 3D Warehouse. Once a model has been imported, it can be exported to OGRE mesh format (tools->Export to Ogre Mesh). The Sketchup exporter has its own directory ([root]\SketchupExporter) in which the exported .mesh, .material, etc. files are saved by default.

I intend to keep the OGRE thread going for a while as I work through it, so I am not going to write directly about creating OGRE projects; it can be a pain in the rear, and there is some excellent information on the OGRE Wiki covering it. The last thing I want to add is that there are AppWizards for automatically creating a new VC++ project, and they can be found here:
A lot of the links on that page go back to sourceforge, but apparently (I didn't use the wizard) these tools are good for each IDE for which they were written.

Friday, October 9, 2009

Is Comcast Really this dense?

Anyone who has been surfing the internet and received the annoying anti-virus pop-up knows what a pain (not to mention danger) these can be to your system. For those who have never seen one: a window opens that looks surprisingly like your "My Computer" window, some smaller pop-up opens, and you are informed that your system is probably/maybe/definitely infected with any number of viruses.
The most prevalent host for these malicious, fake, and generally money-scamming anti-virus pop-ups is Velcom. However, Comcast customers in the Denver area may have even more cause for concern.
As reported by Sunbelt Blog on 8 October, Comcast will begin presenting pop-ups to users when their computer is identified as being part of a botnet. The people of Denver now get the pleasure of questioning whether a pop-up anti-virus window is real or not, in ADDITION to wondering whether the Comcast pop-up warning is real or not.
Personally, I think this has to be one of the stupidest (not to mention most invasive) ways for an ISP to warn its customers, and I would seriously question the team that came up with this "bright idea."

Fake/Malicious/Money Mule anti-virus: using pop-ups
Comcast: using pop-ups.

Friday, September 11, 2009

But my profile is private

I don't want to bash too much on Facebook (see previous post), but there is another concern out there that I wanted to publicize: Your private Facebook profile may not be private!

As tested by Super-Phil (a guy I work with), a private profile on Facebook is really only private (which does not mean it cannot be hacked anyway) if you do NOT join any groups. What does this mean?

For a situational example, let's say that you are bored and trolling Facebook for ex-girlfriends. Suddenly you find one and you're excited to make contact...only to be deflated when you click on your ex's profile and are told that some or all content is visible only to your ex's Facebook friends. Now isn't that a bummer! However, there is a way around this, and a caution for those who wish to remain private: it applies if your ex is part of a group on Facebook.

What do you do? Join the group! After joining the group, as Super-Phil tested this past weekend, you can see any other member's FULL profile. I leave it up to the reader to decide whether this is good or bad. I cannot test this from where I am right now, but I have faith in Super-Phil, as he is a Facebook and vulnerability guru.

Something else that Super-Phil noted: be wary of sites such as NING dot com (apparently they are currently being sued), because they scrape Facebook profiles and put your information out there for even more people to see...even if your Facebook profile is private, if you are a member/customer of sites such as NING.

What's Old is New...

There are hundreds of products that promise to "rejuvenate" our older population, remove wrinkles, or just plain make you "feel younger." These are items that attempt to turn the "old" into "new." Most of these products, I think, are junk and do nothing but cost money.

However, there is a much larger problem with old becoming new. For those unaware: old malware continually resurfaces in an attempt to trick people into bad situations, and these old-turned-new products do more than cost money. A recent example is the reappearance of the Koobface virus on Facebook.

The Koobface virus has been around for a while, and yet it continues to be used. Facebook has reports about it from last year, and yet it is still rearing its ugly head. Specifically, I have seen it three times in the last week:
1) A friend of mine posted a warning to my wall that a Facebook email had been sent from her account linking to a video, that she didn't send it, and that she knew it was malicious.
2) I received an email from the same friend that contained a different video link. However, from some of the text in the message, I knew it was fake/spoofed.
3) A posting appeared on my wall yesterday, to a third video, again from the same friend's account.

Having faith in my setup at home, I decided I would follow the link on the wall posting. Sure enough, a "new" facebook page opened. This new page had a video player in the middle of it, with a message window telling me that I needed to Update my Flash Player Plugin. About 2 seconds later, a new window opened with nothing more than an obfuscated string of about 20 characters. It was then that Norton kicked off the big warning. I made note of the URL in the new window, clicked "view info" in my Norton warning, and then closed out the bad browser window.
For giggles, I clicked the movie link on my facebook page again. The exact same sequence of events happened, as expected, with one BIG difference: the URL in the new window had a different top-level address. The initial URL started with 67.X.X.X; the second time I followed this malicious link, the URL began with 74.X.X.X. I didn't bother with a third time.

From what I have read on other blogs and sites, had I clicked the "upgrade flash plugin" option on the first pop-up (fake Facebook page), and clicked OK to the download, I would have invited trouble into my electron world.

Additionally, the second, almost-blank window that pops up with an obfuscated string is actually attempting to auto-download the Koobface virus as well. For more information on Koobface, check out: (September 10, 2009 posting)

I should also note that this worm is infecting (or has infected) more than just Facebook: MySpace, Twitter, some blogs, and other social networking sites. The last link above provides some information on how to get rid of this "bad boy" should you become infected.

Friday, August 14, 2009

Unpack the Junk instead of Opening it

Today I learned a new and awesome trick for unpacking javascript that is found in packet captures. I have pasted the method below from the original site, along with the link to the author's posting. In short, this is an invaluable tool and makes me love Firefox even more!

Update: This technique can also be used to deobfuscate Yahoo Counters.

[Copied text]
Without any intro – crap that I usually write explaining why I had to write this post, I’m going for the subject. You(general junta or web developers or scared security guys) might see some eval packed javascript which phishing idiots ask you to copy paste on your URL bar and hit enter key.
Unpacking JS is a PITA was an answer that my brain use to give whenever I think about it. Just now, I found a very easy method to convert it into readable Javascript without any extra tool (IE boys, run away) Its very simple in FF or Opera.
FF guys, all you need to do is …
Copy the eval packed JS. something like —- eval(function(p,a,c,k,e,d){e=function(c) …………………. }
Open Error Console on your firefox
Paste the packed JS in Code input tab
Add eval = alert; at the beginning of the code
Hit Evaluate
You will get the proper javascript for the packed javascript. Copy paste it into any code prettifier. It will become perfectly readable. Opera folks, follow this. Packed JS is a huge asset for Phishing as who would have expected that packed JS in this code will make you join around 26 communities and send some stupid message to all your friends without your knowledge as soon as you copy paste some JS code on your URL bar and hit enter.
[End copied text from:]

Monday, August 3, 2009

Setting up Apache Tomcat on CentOS5

This past weekend I decided that I didn't like the performance of my current CentOS5 setup. With that in mind, I set out to re-install and begin, again, to configure from scratch. My whole goal with this server is to eventually have running: a web server, an email server for the family, and a local domain for the home network (as opposed to the current workgroup settings).

The re-install of CentOS 5.3 was again a breeze, although I didn't get into too many security settings. The thought behind that is that I want to make sure it will work for my needs, and then I will tighten it down before publishing any content to the world.

The complicated step was the installation and testing of Apache Tomcat 6.0.20. With that in mind, a short 'how-to' (based upon the link below) follows:

To get started:

1) Files needed:
- These should be saved/moved to: /root
- jre-6u14-linux-i586.bin
- jdk-6u14-linux-i586.bin
- These should be saved/moved to: /usr/share
- apache-ant-1.7.1-bin.tar.gz
- apache-tomcat-6.0.20.tar.gz
2) Directories needed:
- /usr/java
3) Notes:
- If some of the commands below aren't found by your bash shell, use /sbin/[servicename]


Install Java (JDK and JRE):

1) move to the java folder:
# cd /usr/java
2) Install JRE and JDK:
# sh /root/jre-6u14-linux-i586.bin
# sh /root/jdk-6u14-linux-i586.bin
- Verify installation. There should be a jre and jdk file in the /usr/java folder

Install ant and Apache

1) move to share folder:
# cd /usr/share
- Install ant first:
# tar -xzf apache-ant-1.7.1-bin.tar.gz
- install apache tomcat
# tar -xzf apache-tomcat-6.0.20.tar.gz

Enable Ant linkage

# ln -s /usr/share/apache-ant-1.7.1/bin/ant /usr/bin

Configure environment variable:
- move to folder with
#cd /usr/share/apache-tomcat-6.0.20/bin
- open in your favorite editor (I used vi)
- add as a second line:
- JAVA_HOME=/usr/java/jdk1.6.0_14

Test config

# cd /usr/share/apache-tomcat-6.0.20/bin
# ./

Check for error log
# less /usr/share/apache-tomcat-6.0.20/logs/catalina.out

Run the startup file (I may have to edit file location...doing this from memory)
#cd /usr/share/apache-tomcat-6.0.20/bin
# ./

A startup script can be found at the link below. This script can be used to make Tomcat start automatically at system startup. I did test this script on my original install, but opted not to use it this time (remember, I reloaded CentOS to try to clear up performance issues). I should note that the link below uses older versions of Java (update 10) and of Apache Tomcat (6.0.18).

My end result is that my service works as it should, I set up a DynDNS account to test it, and I am now ready to re-build my website and move my domain.


Monday, July 27, 2009

Moving On

As mentioned in a previous post, last Thursday, I did something I have never done before: I turned in my two week notice!
Basically, through word of mouth and two good interviews, I was offered a position that is closer to home, pays a lot more, and doesn't have the headaches of my current position. Anyway, that is enough complaining. I actually HAVE a lot to be grateful for, and to look forward to:
1) My son starts Kindergarten in two weeks. He will be the only kid running around a Georgia school with Michigan Wolverine and Detroit Tiger shirts on! I LOVE IT!
2) My wife, after years of not being able to due to Army life, is finally in college and starts her first full semester in a couple of weeks. She deserves it!
3) My new job sounds incredible, is a place I have wanted to work for AWHILE, and, it will allow me to focus on what I like: intrusion detection and prevention, and I think some code inspections! I am definitely excited. This job is closer to home, more money, and, so far anyway, better people!

Drop it and Run?

I have never been able to just quit something without a reason. Nor have I been able to walk away from unsolved problems without at least making sure I have said my piece. I mention this because I turned in my two-week notice last Thursday, and yet here I am, still trying to solve problems for my current (for the next 8 working days) employer.

Anyway, I will save the complaining for another time but there are two things really standing out to me right now:
1) We (the cyber security section) asked for access to our network scanner over a month ago, for the second time. Finally, after the GM said we were to be given access so that we had oversight and could do our own scans, we were informed that we had access. The problem came with the fact that our normal user accounts were used, through LDAP, to grant this access. This does not allow for oversight, or privileged scanning. I KNOW that the system admins knew this is how they set it up, ignoring the requirement and the GM. However, that is all I am going to say on that for now.
2) Symantec Endpoint Protection 11: Holy Crud! What should be a very simple install has become a pain, for me and the second Symantec tech support guy (who has been A LOT better than the first).
We (my boss and I) started out with getting the pre-req's installed: java, ASP, IIS. We verified all permissions and that the IIS setup was correct. Then we installed SEP11.
The problem we had was that the clients would NEVER talk to the server. So again, at the request of the Symantec tech guy, we double- and triple-checked the settings for IIS, the communications file (Sylink.xml), the network connectivity, etc. Nada! After the third day, the tech rep asked us to uninstall IIS (no other app was using it) and the SEP manager. This we did, in addition to removing the Symantec client from the server box as well. We went so far as to verify that there was no leftover Symantec data ANYWHERE on the system.
Unfortunately, after we did all of this, we (this time with the web admin, to cover our rear ends) set out to re-install IIS and verify its installation. However, it would NEVER install correctly. After hours of digging around in the few error messages we had, I found that this traced back to file permissions on the %windir%\Registration folder (an old issue, MS05-051). So I did the obvious thing and started to manually ensure that the permissions were correct.
However, even before I could make one change, the system froze! I was able to log off and attempt to log on, only to have the system freeze completely! What happened next was BAD!
I waited, and I waited, but the system would just not finish loading. So I did what I think to be the obvious, and only, option: a hard shutdown of the server. This is a Dell PowerEdge 1950, and I already had questions about how the sysadmins set it up...but that's a WHOLE different blog. In any case, what happened is that the server will NOT, at all, power back up! I have never had a server with basically nothing on it (it wasn't even in production yet, really) fail to start after a hard shutdown!
To be continued?....

Monday, July 13, 2009

SSLF/FDCC compliance - The Fun Never Ends

My two biggest focal areas regarding security are: intrusion analysis and secure software programming. After testing two different, but related, security products (SEP 11 and Policy Orchestrator) and then performing the subsequent installations of each, I have come to the conclusion that:
Software installers should automatically verify known registry/GPO settings as necessary, or should at least return the top [five] possible failure mechanisms. I know I am not the first to suggest this requirement. I am also aware that there are some concerns with allowing an installer TOO much access to high-value information on the system. However, I believe that there is a compromise to be had somewhere in the middle.

Of the two products, McAfee's ePolicy Orchestrator (ePO) is used, at least in this instance, for compliance monitoring. It does have some other excellent functionality, and the program is good in general. However, I am not making a sales pitch.

The biggest issue with ePO is that the following settings MUST be made:
- Log on as a service: the accounts listed here must include the account used by ePO. This is somewhat of an obvious requirement. However, an overzealous SysAdmin can cause some heartache for the one installing ePO (as was the case here), as this requirement isn't noticed until half-way through the process.
- NtfsDisable8dot3NameCreation: In general terms, allowing 8.3 names presents a low risk to the system and can slow down high-use systems. However, it should be noted that "turning on" this setting is an FDCC requirement. The problem with this setting being enabled is that Apache, used by ePO, will not install correctly; or, if it is enabled later, Apache will not load properly (roughly 100 core load error messages will display on the initial GUI under the log-in section).

The other product, the one that inspired this rant, is Symantec's SEP 11. While this product appears to be an improvement in many ways over version 10, there are still some kinks to work out. However, I believe some of these "kinks" could be mitigated, or at least avoided, if some functionality were tested by the installer, specifically LDAP authentication. Buried in a Symantec forum was the following fact:
- LDAP Server Signing Requirements must not be set to "required." If this setting is "required," the SEP Manager will NEVER authenticate to the LDAP server. Troubleshooting this issue was easier with Wireshark captures than with searching Symantec (or maybe I just felt like being an uber-geek that day). What I decided to do was start capturing in Wireshark, with a simple filter set on my source IP address. Once I did that and attempted a couple of authentications, it was obvious that the LDAP server was immediately denying the authentication attempt: it issued a RST packet right after the Hello packet. With this setting changed, the SEPM could easily import from the LDAP server.
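
That RST-right-after-Hello symptom is easy to reproduce in miniature, which makes it easier to recognize in a capture. A self-contained sketch (illustrative Python, not Symantec or LDAP code; setting SO_LINGER with a zero timeout is a standard way to force an abortive RST close):

```python
import socket
import struct
import threading

def rst_server(ready, port_box):
    """Accept one connection, then abort it with a TCP RST instead of a
    polite FIN -- standing in for an LDAP server that drops the bind."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port_box.append(srv.getsockname()[1])
    ready.set()
    conn, _ = srv.accept()
    # SO_LINGER with a zero timeout turns close() into an abortive RST
    conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                    struct.pack("ii", 1, 0))
    conn.close()
    srv.close()

def probe():
    """Connect, say hello, and report how the far end hung up."""
    ready, port_box = threading.Event(), []
    t = threading.Thread(target=rst_server, args=(ready, port_box))
    t.start()
    ready.wait()
    cli = socket.create_connection(("127.0.0.1", port_box[0]))
    try:
        cli.sendall(b"hello")                  # the client's Hello
        data = cli.recv(1024)
        result = "graceful close" if not data else "got data"
    except (ConnectionResetError, BrokenPipeError):
        result = "reset"                       # shows up as a RST in Wireshark
    finally:
        cli.close()
        t.join()
    return result

print(probe())
```

The probe should report "reset": the same immediate-abort pattern the SEPM client was seeing from the signing-required LDAP server.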

So my conclusions are these:
1) Although documented in both cases above, some of the documentation was "bulky" or time-consuming to locate. Better organization of "known issues" in release documentation would be helpful.
2) If the installer called a method to verify these settings, or attempted to "use" these settings during install, and then returned the appropriate error message(s), troubleshooting would be less time-consuming (read: more cost effective).
3) It is still better to look at what the program is actually trying to do on the network than to sift through pages of help forums. (I should add that, although I was not surprised, tech support at one of these companies was not aware of the issue or the fix.) I guess I would rather spend time looking at the traffic, reading the packets, and using that to decipher what is happening on the network.
4) The SSLF rearing its ugly head, again, validates once more that careful testing and understanding of the settings is definitely required.

Monday, June 22, 2009

GCIA exam Passed

Last Friday, I sat the GCIA exam, and passed (*crowd cheers wildly*). I offer the observations below for anyone running across this post and looking for wisdom on this exam:

1) Either you know it or you don't! Have confidence in your answers so that you aren't second-guessing yourself. As a perfectionist, and being competitive, I looked up EVERY answer for the first 100 questions. Out of this bunch, there were probably four that I really needed to look up. I missed 5 questions in the first 80, but was then rushing to complete the last 70 in little more than an hour...needless to say, the bulk of my wrong answers came at that point!!!

2) Manage your time wisely. Four hours goes by quickly if you do what I did in #1 above. I ended up not answering three questions due to time expiration!

3) Mark your books well...and study before the test, not at the test site. This goes along with numbers 1 and 2.

4) If you are a perfectionist, limit your options for "open book." Create notes on your weak areas and bring only those notes and corresponding (well-marked) books to the test table. If you are like me, then too many options to verify answers is only going to bog you down. (See #1, first sentence).

5) If the software is discussed in the book, or in the class, USE it, TEST it, LIVE it, LEARN it! This was a big help to me, as the questions regarding specific tools were the easiest and the answers were right in the front of the cranial housing.

6) Have a good mentor. The mentor I had, [name withheld to protect his reputation :-)] did an excellent job with presenting the material and then took a lot of the topics a "step-further."

SecurityOnion Unleashed...Get Yours Now!

If you want to be an Intrusion Analyst of any caliber, you must have the best tools available. These tools start with intrusion detection, and the best place to download a comprehensive, and free, intrusion detection distro is at the link below. Doug has put a lot of time and energy into this distro and has included tools for testing, configuring, and installing a top-of-the-line IDS on your system.

Doug's Blog posting for this distro better explains what it is, what it does, and why you MUST have this distro:

Tuesday, June 2, 2009

Fedora 10 versus CentOS 5.3

My initial interest in starting a Blog was to record my attempts at setting up my home server to host my family website, possibly a mail service for family, and for home networking.

Previously, I had a DELL laptop with Vista Home Premium installed (AMD TK-53 processor, 4 GB RAM, 260Gb HDD). This laptop had given me a headache for a year, and DELL tech support is a joke!

After giving up on DELL tech support, I decided to slap some *nix flavor on the box and see if it would be more stable. I chose Fedora 10. This went extremely smoothly, and the laptop has been working great, and stable, ever since. My only outstanding issue is that I need to get the wireless working (Broadcom 43XX). I was going to use NDISwrapper to do so, but then found out that it does not allow for promiscuous mode (which does me no good when I *need* to sniff traffic at Starbucks LOL).

Anyway, with Fedora 10 working great on my laptop, I decided it was time to move my server from (Windows OS name withheld to avoid jeers) to Fedora 10. Boy, was this a pain! The initial install would ONLY run in text mode, which was not a problem. The problem was that it would never boot into the GUI. Now, while I tend to prefer the command line, I still wanted the GUI available, and the fact that "init 5" only caused the box to hang really caused me concern.

What I found out was that Fedora 10 has an issue with SCSI drives. There is a 'mkinitrd' work-around for this issue, but at this point, I decided to try something else. Enter CentOS 5.3!

The first thing I noticed about CentOS 5.3 was that the installation was a breeze, although I didn't do too much customizing. The second was that the issue with SCSI drives was not present...i.e., I could boot into the GUI. The only reason I wanted this (and it may be bad form, but I really don't care) was so that I could perform any updates and maintenance on our webpage through an IDE directly on the server. Although I have been using a different box for development, in my busy life it will be quicker to use the server directly for updates and maintenance.

The only issue I have had with CentOS 5.3, so far, was getting my HP printer drivers installed. What I ended up doing was getting the HPLIP (HP Linux Imaging and Printing; see link below) driver pack. Although this has the option of using an auto-installer, I opted for manual. There were some dependencies that I had to 'yum search' for, but the install was relatively quick and easy.
After the installation, I checked to make sure that the right services were running and then I *tried* to print....AAGGHH! Something wasn't working. I restarted the box and the printer worked perfectly.

It was at this point that I realized the importance of prior planning. Why would I want my web server to host my home network printer? I didn't! So, all that work for nothing; I moved the printer to another box. As I cannot see a reason to print from the server, I will not be configuring the box to use a network printer.

As soon as I decide how I want to design our family's web page, I will be moving it to the server and will be using some service, probably DynDNS, to resolve it. I plan on getting a lot deeper into SAMBA in the next week or so, but I still have some other things to test elsewhere in my world, such as an Ubuntu distro on my AMD box (mentioned above).

HPLIP CentOS install help:

Setting up SNORT

As a side project, I have been setting up SNORT on a small network. The nice thing about this setup is also the annoying thing: it will never touch any other network. This is nice in the sense that traffic seen by the sensor will [hopefully] only originate on the network and/or its interfaces. However, the annoying thing is that 99% of the packets SNORT ever views are going to be normal traffic for the network. This means that the chances of anything of interest being in the logs are slim to none!

The initial setup of SNORT was actually done by someone else. I had been testing an earlier version, 2.4, successfully before we moved this to the production environment. What I didn't know was that version 2.8.4 was the one installed. There were some minor differences in the snort.conf file, but other than that, there was nothing specific to worry about for our environment.

When 2.8.4 was installed, however, there were some issues that were unknown at the time (I had to leave town directly after the install). Apparently, when the installation happened and the tar files were decompressed, the operator put them into the wrong location. With that in mind, the sequence of events went like this:

1) SNORT installed initially
2) SNORT configured to run as a service (so the installer thought) :-)
c:\Snort\bin>snort.exe /SERVICE /INSTALL -c "c:\snort\etc\snort.conf" -l "c:\snort\log" -A full -i2 -deX
3) SNORT set to start "Automatically"
- Here is where the problem/bug(?) was
At this point the installer verified that SNORT was capturing packets (with no attempt to trigger alerts). However, what the installer failed to realize at this point was that the log file being created was snort.log.XXXXXXXXXX, as opposed to the name we had set in snort.conf.
4) Returned from trip and started verifying that SNORT was running properly, or in this case, NOT running properly.
5) Investigated why SNORT was not logging packets to the right file, why it was logging all packets, and why NO alerts had been triggered over a ten day period. (With a full rule set and some of the required user actions, something should have triggered)
- After talking with SNORT genius Doug Burks, I stopped the SNORT service
- I restarted SNORT (in IDS mode) manually from the command line using:
>snort.exe -c "c:\snort\etc\snort.conf" -l "c:\snort\log" -A full -deX
- At this point, I realized I had a problem. Multiple errors with the snort.conf file reported.
- These errors all resolved to issues with preprocessor items.
- After some investigating, I realized that the structure of c:\snort was severely messed up! This is why I was getting multiple errors at run time...the files were not where they were supposed to be.
- In the interest of time, I removed SNORT completely and re-installed, paying attention to the folder structure from the tarballs.
- I then restarted SNORT manually using the same command as above.
- I then used HPING2 to craft some packets and sent them on their merry way.
- I also used the Windows NMAP front-end, Zenmap, to run some noisy scans against some boxes.
- At this point, things were golden, SNORT was properly acting in NIDS mode, and all was right in the world.
6) I reconfigured SNORT to run as a service, using the service-install command above (step 2).
7) Took a much needed nap.

Lessons Learned:
1) Do not let someone who has never used or configured SNORT install it for you, unless you are present to help! (I should say that I didn't have a choice here)
2) When installing SNORT as a service, if the snort.conf cannot be loaded at runtime, SNORT will default to packet logger mode.
- ALWAYS test your deployment manually before running as a service!
- Look at the logs in \snort\log
- If they are named "snort.log.XXXXXXXXXX", then you are probably running in logger mode (a good way to know is to assign a name to the log file in snort.conf that will be obvious to you when it is working correctly)
3) Pay close attention to the path variables in the snort.conf file...they are your friend for configuring SNORT.
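Lesson 2's check can be scripted: Snort's default packet-logger output is named snort.log.&lt;epoch&gt;, so seeing that pattern instead of the name set in snort.conf suggests the config was never loaded. A minimal sketch (the log directory path is whatever your deployment uses):

```python
import re
from pathlib import Path

# Files named snort.log.<epoch timestamp> are Snort's default packet-logger
# output; their presence (instead of the name set in snort.conf) suggests
# Snort fell back to logger mode because the config was not loaded.
DEFAULT_LOG = re.compile(r"^snort\.log\.\d+$")

def looks_like_logger_mode(log_dir):
    """Return True if any file in log_dir matches the default logger name."""
    return any(DEFAULT_LOG.match(p.name) for p in Path(log_dir).iterdir())
```

Run against c:\snort\log (or wherever -l points) after starting the service; a True result means it is time to test the config manually from the command line.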

Tuesday, May 19, 2009

Its the little things

After I finally figured out what was breaking the client/server communication in GuardianEdge, I ran across another issue: "Not enough server storage is available to process this command." This error message popped up every time I attempted to access a share on the encrypted drive. After some quick research, I determined this to be caused by the IRPStackSize registry setting. After some trial and error with the size and the machine, I determined that setting this DWORD to 20 (decimal) on the domain controller was the correct fix. Apparently, this setting is either changed or removed by some versions of Norton AV.
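For reference, the fix above can be captured in a .reg file. The LanmanServer\Parameters key is the documented home of IRPStackSize, and 0x14 hex is 20 decimal:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters]
"IRPStackSize"=dword:00000014
```

The Server service (LanmanServer) has to be restarted, or the box rebooted, for the new value to take effect.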
Although annoying, it was a fairly easy fix. Now it is time to turn my attention to McAfee's Policy Auditor and to getting back into the *nix world.
At home I am currently playing around with the SecurityOnion LiveCD from Doug Burks, Fedora 10 as a client, Fedora 10 as a Server, and CentOS 5. These should keep me busy for awhile.
If you haven't checked it out yet, you should look at Doug's blog. Doug is a packet guru, and the SecurityOnion LiveCD is an excellent tool for intrusion analysis/detection.

Playing Catch up in Vegas

There is nothing like a week-long conference in Vegas to slow you down...at least from a work standpoint.
I lived 4 hours from Vegas for 5 years, and never went! I was looking forward to this trip as both a chance to network and a chance to finally see Vegas. We stayed at Las Vegas Lakes and had a good time at the conference. The only sight-seeing we did was to visit the Hoover Dam and to go down to the Strip one night that week. We were going to go back on the last night, but just weren't in the mood after spending quite a few hours at the Dam, taking the Dam tour, buying the Dam souvenirs, acting like Dam fools...you get the point. :)
Walking the Strip was a little overrated, but the Crazy Horse show at the MGM was great! Other than that, it was almost entirely conference sessions and homework for me. Maybe next year I will take the wife.

As a short recap to SSLF testing:
I finally finished testing the SSLF baseline against one product, GuardianEdge Hard Drive. After using different methods to test the baseline (all at once, individually, and in groups), I determined that the culprit was the "Log on as a service" right. The irony is that I spent this time testing because a System Admin was 110% certain that this setting was correct in their production environment, but would not allow me to double-check. In any event, it is fixed! Finally!

Thursday, April 30, 2009

Back to the SSLF

Since I now have a working test environment that includes both real and virtual machines, I will be starting back on the SSLF testing (Fedora 10 work this weekend...hopefully).
However (and this is more planning for tomorrow), this time:
- Different user logged into each machine.
- Each box in a separate OU
- This allows me both redundancy, and a way to verify any problems.
- This also allows me to test different apps on different clients at the same time
- Settings will be (painfully) applied sequentially.

I should also note that I created two backup copies of each virtual machine. I got into too much of a hurry in my previous attempts. :(

Passwords in plaintext?

It has been a week since I could post on here, and the reasons range from mundane to just downright stupid.

I really didn't have anything to post earlier in the week. I had decided to change my test bed set-up. Instead of 1 DC, 1 MS, and 1 WS, all on an XP host, I converted another real box into a server. In this way, I can better duplicate network traffic for capture and for testing functionality. I did originally think that I was OK with just using a virtual network on one box. However, I started to notice "little things" in the VMware Server environment. The biggest one was that joining the domain took not only creating the computer in AD, but also creating the Host (A) record in DNS, along with restarting (so far) a minimum of three times. I have never had that happen before; I believe it to be a hardware issue on the boxes I am using, and have moved past it.
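Given how many restarts those domain joins took, a quick pre-flight check that the DC's Host (A) record actually resolves from the client can rule out DNS before blaming the join itself. A minimal sketch (the DC hostname is whatever your environment uses):

```python
import socket

def can_resolve(hostname):
    """Return True if this client can resolve hostname to an IP address."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        # Name did not resolve; fix the A record (or the client's DNS
        # server setting) before attempting the domain join.
        return False
```

If this returns False for the DC's name, the join will fail no matter how many times the box is restarted.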

The really stupid thing that cost me a whole day: I was indirectly involved with a friend of mine who had found out that an admin had placed a very interesting plaintext file on a desktop. Apparently, the file had an overly obvious name and contained the admin account name and password for a vital application (this account also happened to be a domain account, and was part of the Enterprise, Domain, and Schema admin groups...don't ask).

In any event, if anyone ever reads this and wonders what the set-up is now:
1 DC, 2 MS (virtual), 3 WS (2 virtual, 1 host).

Thursday, April 23, 2009

Virtual(ly annoying) domains

The testing that I have been doing of the SSLF baseline for XP workstations has been a fun challenge.
-Creating a new DC in a virtual environment
It seems as though even in a virtual environment, Microsoft still has to be a pain to work with at times. I know that this is a surprise. On Tuesday, I had successfully crashed my virtual DC (with some strange mutations visible in the RSOP prior to the crash), and even after promoting my member server to a DC, I was still not able to re-join my test workstation to the domain. Although it should have been a simple matter of moving the workstation into a workgroup and then back to the domain, it wasn't! The steps I had to take to re-create my virtual network were:
1) Create another virtual machine and install Server 2K3.
2) Remove all roles from original member server
3) Remove member server from domain (workstation already moved back to workgroup)
4) Run 'DCPROMO' on the new server, setting up AD and DNS (a new subnet range had to be used)
5) Move member server into new domain (Step 4 was done twice, with a new domain name used the second time. While I tried to keep the original domain name, this was unsuccessful. The MS and WS could ping, but not join, the domain.)
6) Establish roles on MS
7) Move workstation to new domain

It has been some time since I have done any major network admin. However, because I had to do some strange additional steps, I wonder if:
a) VMWare maintains a permanent routing table for bridged virtual networks?
b) even without doing any transfers, using images, shouldn't I have been able to just add at least the workstation to the new domain through the progression: old domain->workstation->new domain?

In any event, it was right back to the SSLF adventure after this point.

Tuesday, April 21, 2009

Side Project - SSLF Baseline

Has anyone ever really tested how the SSLF baseline for Windows workstations affects different software products and comm pipes used on the network? I have had so many experiences with the SSLF "breaking" this or that client/server application, and yet the documentation available is minimal. Anyone can find what each setting means and does. The problem is that most commercial software is not well documented at the lower layers.
So one of my side projects is to test the SSLF baseline in a virtual environment and to see how each setting affects whatever product I am using at that time. I think that this is going to turn into a long-term project as there is a large number of security applications that I want to test against this baseline.

U of Michigan here I come!

Last Friday I received my official acceptance letter for the Masters in Software Engineering at the best university in the country - University of Michigan!
People have asked me how that relates to me focusing on network security and my answer is always the same - At least 90% of the security issues found on networks can be attributed to poor programming techniques!

First Post (Last Post??)

Life has been crazy the last couple of weeks. In addition to death, sick kids, robbery, and school, I decided to take my Dell Inspiron 1721 and install Fedora 10. I have always preferred *nix flavors to Windows, but this will be my first time ever installing and running one at home. I just hope that the Dell will work better with Fedora than it did with Vista!