Monday, January 18, 2010

Putting off the easy things

Last May I had the opportunity to go to the DoE Cybersecurity conference at Las Vegas Lakes. While I was there, I spent some time talking with the McAfee sales rep about ePolicy Orchestrator. It was also during this time that he convinced me to drop my card in a bowl for a chance to win a new firewall. As it turned out, I won one of the two they were giving away, and received said wonderful piece of hardware around the end of July '09.

When I received this device, I was already happy with the Belkin router I was using at home and had not done enough with my server for it to really matter. In fact, with the exception of my wife's laptop and my daughter's desktop, I have probably re-loaded all of my other computers at least three times since then. So, I really didn't see any justification for spending the time to move from the Belkin router to the SnapGear SG565. What a mistake!

Installing the SG565 was relatively simple and quick. The only issue I had was that my cable modem required a hard restart. However, that was such a small thing compared to the immediate benefits! Unfortunately, I am fairly certain that McAfee has discontinued this line since buying out Secure Computing.

Why am I so happy with the SG565 (especially when I haven't even finished some of the finer set-up issues)? It really boils down to a list of features, and how easy they have been to set up (or appear to be, in the case of those I need to finish).
The features:
- 2 USB ports that I am now using for a shared printer and shared storage (negates some of the headaches of a mixed-OS home network)
- SNORT built-in
- ClamAV built-in
- What appears to be an excellent interface for firewall rules
- 3G support (through a 3G wireless USB key)
- Stronger (but not too strong) wireless signal

In the short time I have been running the SG565, I have seen a definite improvement in network speed, as well as in wireless connection stability. With the Belkin, connections constantly had to be refreshed due to a weaker signal (over 30 feet, almost true line of sight). Further, adding printers (to the Windows machines, so far) seemed easier than when I had the printer shared off of one of my desktops.

All in all, I am extremely happy that I FINALLY got around to setting this up. There are a few items on my to-do list relating to this, though: moving the server to a DMZ, setting up SNORT, better F/W rules, etc. I am just glad to have such an awesome device...especially since I didn't have to pay for it at all! :-)

I think I am going to start with the SNORT config. I am going to get another storage device first and set up a new db. I also want to look into which version of SNORT this is, whether it is upgradable, etc. The device appears to be able to accept syslog, other IDS, and other firewall inputs, so I might end up not using the SNORT on the SG565, and instead use the SG565 to aggregate everything and write it down to the db. The firewall does provide a tcpdump feature to capture packets, and on-the-fly configs can be made to capture all traffic from a specific IP, MAC, rule, etc. Looks like I could really turn this into a huge project...time permitting, of course.
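As a first stab at the aggregation idea, here is a rough sketch of pulling the source address out of a Snort "fast"-style alert line before inserting it into a database. The line format shown is a typical example of that style, not necessarily the SG565's exact output, and the function name is my own.

```cpp
#include <cassert>
#include <string>

// Extract the source IP from a Snort fast-alert style line, which ends
// in "... {PROTO} SRC:PORT -> DST:PORT". Returns "" if the line does
// not match that shape.
std::string extractSrcIp(const std::string& alertLine) {
    std::size_t arrow = alertLine.find(" -> ");
    if (arrow == std::string::npos) return "";
    std::size_t start = alertLine.rfind(' ', arrow - 1);
    if (start == std::string::npos) return "";
    std::string src = alertLine.substr(start + 1, arrow - start - 1);
    std::size_t colon = src.rfind(':');           // strip the port, if any
    return colon == std::string::npos ? src : src.substr(0, colon);
}
```

From there, each extracted address could become one row in the new db.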

1 project down, 987 more for the year!

Saturday, January 16, 2010

Moving Forward

The winter semester of my MSc started this past Wednesday, and today I actually have some time to look at the posted class videos. I am a distance-learning student through the University of Michigan - Dearborn. The two classes I am currently taking are Database Systems and Web Services. Interestingly enough, these classes both relate to this crazy idea I have: a global repository of malware information (source, binaries, hashes, IPs, etc.). So instead of listening completely to the current video, I am giving it 40% of my attention and spending the rest of my concentration on this post. :-)

Some might ask: but isn't a global repository already available through existing sites? Not at the level of detail I prefer, or think the security community could use. There are numerous organizations that focus solely on network security, or on attacking that security, such as EmergingThreats, the Metasploit project, SANS, etc. There are even numerous research groups (academia and business) that have their own internal data stores of the vectors used. Then there are the legal and procedural requirements of governments and businesses that require the storage of data about attacks/compromises.

As an example: I was recently asked by another network security individual about some JavaScript that was extremely obfuscated. I was intrigued, worked on it for a bit, and then remembered that I know some people who have repositories of obfuscated JavaScript. So I contacted them in hopes that they had already seen this particular vector. The result was that I didn't get help, because the repository these individuals had was/is controlled by an anti-virus vendor, and the vendor was unwilling to share research results.

It would have been much more promising for the security industry if this other tech could have shared the code he found directly with those who controlled that particular repository. Maybe my buddy could've had an answer much quicker than he eventually did. However, and much more to the point, a globally available repository would have alleviated a lot of the headache for my buddy. It would relieve that headache for a lot of people.

What do I propose? I propose that a massive undertaking begin, where everyone pitches in, but I get all the credit. :-)

Seriously though: why not take ALL the data each time your employer or home network is attacked, convert it to an XML file, and upload the data for everyone else to see (minus internal IP/connection data, user names, etc.)? The logistics are harder than this sounds to most people, but it could be done. I can already hear some screaming, "But then even more 'would-be' attackers (read: fat, pimply-faced scriptkiddies) would be able to attack us." I answer with the belief that if we (the network security industry as a whole!) were to immediately publish this data for all to see, we would reduce the attack landscape currently available to the bad guys. Signatures can be written, rules enabled, and the good guys score. The immediate downside is that the bad guys would have to (and they WOULD) get smarter. However, if we had 10 times as many examples of a particular exploit, wouldn't signature AND behavior-based detection be much more feasible, and effective? Of course, I could just be ranting and raving because it's a Saturday and college football is over! :-(
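To make the idea concrete, here is a toy sketch of serializing one attack record to XML while redacting internal (RFC 1918) addresses before upload. The field names and XML elements are invented for illustration; no such standard schema exists yet.

```cpp
#include <cassert>
#include <string>

// True for the common RFC 1918 internal prefixes (only a rough check;
// 172.16.0.0/12 really spans 172.16.x through 172.31.x).
bool isInternal(const std::string& ip) {
    return ip.rfind("10.", 0) == 0 || ip.rfind("192.168.", 0) == 0 ||
           ip.rfind("172.16.", 0) == 0;
}

// Serialize one attack record, replacing internal addresses with a
// placeholder so the shared data leaks no internal topology.
std::string toXml(const std::string& srcIp, const std::string& signature,
                  const std::string& payloadHash) {
    std::string src = isInternal(srcIp) ? "REDACTED" : srcIp;
    return "<attack><src>" + src + "</src><sig>" + signature +
           "</sig><hash>" + payloadHash + "</hash></attack>";
}
```

A real version would need a proper XML library and escaping, but the sanitize-then-publish flow is the point.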

I am getting off of the soap-box in favor of doing some work, listening to one of my profs, and seeing if the Tigers have made any more moves in preparation for Spring Training. :-)

Sunday, January 10, 2010

My Vista OGRE, Part III

It has been awhile since I have had time to update my blog. Life has a funny way of getting in the way of best laid plans. :-)

In any event, I figured I would update (finalize, for now) my comments on using OGRE. The real bottom line on this project is that the drastically changing requirements, coupled with only ~10 weeks to do this (in addition to work, my other graduate class, family, Michigan Football, and sleep), really did not leave enough time to do it properly. I guess I should add that trying to just "dive in" to this was definitely the wrong approach. With this in mind, my summary here may be updated as I "remember" more of what I did to finish this project. It might be pertinent to add that I did finish the class with an "A," although I don't know my project grade. However, with the weight it carried, it had to have been at least a "B." :-)

If anyone is actually reading this blog and is interested, I did produce a write-up on the project, design decisions, and thoughts on how to make it better. Plus, I have the source code still. Can't ever seem to delete source code. :-)

In the submitted prototype, I ended up using two different data structures (a standard linked list and a standard deque) and a couple of algorithms that run during each frame update.

Entry, Exit, and Curve Vectors were all stored in a standard linked list (as IntersectionNodes). While I believe there are other methods to account for these coordinates, and for the actions that need to happen at each one in a scripted world of traffic, I decided that a linked list would be the easiest to maintain and use for these vectors.

The IntersectionNodes themselves stored not only the Vector3 types, but also Ints and Chars to indicate:
- The implicit direction of travel that a car must be facing.
- for example, Intersection 1 is a four-way stop. An entry Vector3 facing North would indicate a stopping point for the car facing North, and a corresponding exit Vector3 with the car facing South.
- The number of the intersection.
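The structure above could be sketched like this. The field and function names are illustrative (the original class layout isn't shown in the post), and a plain struct stands in for Ogre::Vector3:

```cpp
#include <cassert>
#include <list>

// Minimal stand-in for Ogre::Vector3 (only what this sketch needs).
struct Vector3 { double x, y, z; };

// One node in the linked list of intersection coordinates: the
// coordinate itself plus the direction of travel, the intersection
// number, and whether this is an Entry, eXit, or Curve point.
struct IntersectionNode {
    Vector3 point;
    char    heading;       // implicit direction of travel: 'N','S','E','W'
    int     intersection;  // which intersection this node belongs to
    char    kind;          // 'E' = entry, 'X' = exit, 'C' = curve
};

// Look up the exit node for a given intersection and heading, e.g. the
// exit a car should pass through after stopping at a four-way stop.
const IntersectionNode* findExit(const std::list<IntersectionNode>& nodes,
                                 int intersection, char heading) {
    for (const auto& n : nodes)
        if (n.kind == 'X' && n.intersection == intersection &&
            n.heading == heading)
            return &n;
    return nullptr;
}
```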

The deque was used for scripting individual car routes (I have 5 vehicles that are a mix of cars, trucks, and a school bus). Initially, it was a simple matter of adding each entry and exit point to the route deques. During each frame update, the current position (Vector3) is compared to the list of Entry Vectors, which indicates a stopping point for the vehicle (curves were separated from the Entry and Exit Vector lists), and an algorithm for slowing the car was used (below). The hard part of the deques (made hard by my own inability to just do some research first), and something I think I mentioned already, was calculating the turns through intersections and curves. The algorithms I attempted based on some math (sine, cosine, etc.) never "quite" worked right. I ended up manually calculating a minimum number of points per turn and adjusting the Vector3's x and z values. I added no fewer than seven Vector3's between each Entry and Exit point on the deques.
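The per-frame consumption of a route deque could look something like this sketch. The names and the "reached" tolerance are my own choices, not the prototype's actual code, and a plain struct again stands in for Ogre::Vector3:

```cpp
#include <cassert>
#include <deque>

// Minimal stand-in for Ogre::Vector3.
struct Vector3 { double x, y, z; };

// A car's scripted route: Entry point, the seven hand-placed turn
// points, then the Exit point, all queued in driving order.
struct Car {
    Vector3 position;
    std::deque<Vector3> route;
};

// "Close enough" test on the ground plane (within 0.5 units).
bool reached(const Vector3& a, const Vector3& b) {
    double dx = a.x - b.x, dz = a.z - b.z;
    return dx * dx + dz * dz <= 0.25;
}

// Per frame: pop any waypoints the car has already reached and return
// the next one to steer toward (nullptr when the route is finished).
const Vector3* currentTarget(Car& car) {
    while (!car.route.empty() && reached(car.position, car.route.front()))
        car.route.pop_front();
    return car.route.empty() ? nullptr : &car.route.front();
}
```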

The stopping and restarting of the vehicles was fairly simple to transform into an algorithm to use during each frame update (I did not separate this out into another class, as I think it would have added more coupling than necessary). Basically, what I did at each frame update was take the percent change in distance to the stopping point (<=30 from the Entry Vector3) and slow the car by the same delta. To prevent any negative values in direction (which make the car go backwards), I tested for any distance <= 0.5. If the distance was <= 0.5, the algorithm automatically set the wait timer to 30, the distance to 0.0, and the current position equal to the destination (the Entry Vector). This basically "snapped" a car to the stopping point and initiated the stop wait timer.
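The braking-and-snap logic above can be sketched as a per-frame update. The constants (30-unit braking zone, 0.5-unit snap, 30-frame wait) mirror the description; everything else, including the names and the straight-line movement, is a simplification of mine:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Minimal stand-in for Ogre::Vector3.
struct Vector3 { double x, y, z; };

double distance(const Vector3& a, const Vector3& b) {
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

struct CarState {
    Vector3 position;
    double  baseSpeed;   // cruising speed, units per frame
    int     waitTimer;   // frames left to wait at a stop sign
};

void updateTowardStop(CarState& car, const Vector3& stopPoint) {
    if (car.waitTimer > 0) { --car.waitTimer; return; }  // still waiting

    double dist = distance(car.position, stopPoint);
    if (dist <= 0.5) {
        // "Snap" to the stopping point and start the stop-sign wait,
        // avoiding any negative step that would move the car backwards.
        car.position  = stopPoint;
        car.waitTimer = 30;
        return;
    }

    double speed = car.baseSpeed;
    if (dist <= 30.0)
        speed = car.baseSpeed * (dist / 30.0);  // brake proportionally

    // Move straight toward the stop point (real steering elided).
    double step = std::min(speed, dist);
    car.position.x += (stopPoint.x - car.position.x) / dist * step;
    car.position.z += (stopPoint.z - car.position.z) / dist * step;
}
```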

The camera views, and toggling between them, were a little difficult, and time did not allow me to finish this aspect. The current prototype has one (movable) camera that overlooks the "city" I created.

Future Plans/Increments:
As time permits, I would like to do more with this project, if only for myself. There is a physics engine for OGRE that I have seen used for car movements, and I would like to incorporate it. I also plan to revisit not only the multiple (toggling) cameras, but also a more dynamic way to script the movements of the cars that are not being "driven" by the user.

OGRE provides support for peripherals such as joysticks, and I may be interested enough in the future to play with this for the driving simulator. For now, though, "driving" the car will actually just be moving the camera view.

I also want to look at how storing Vector3's for a turn compares to a more dynamic approach of using a turning algorithm/function. Specifically: storing and popping pre-computed turn points versus a method that calculates the Vector3 of a set number of points on a turn at runtime. It is the memory used versus the processor time used that I am curious about.
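The compute-at-runtime side of that comparison might look like the sketch below: the i-th of n points on a 90-degree turn, calculated on demand instead of stored on the deque. It assumes the turn is a circular arc about a known corner point, which my hand-placed points were not exactly, and the names are mine:

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-in for Ogre::Vector3.
struct Vector3 { double x, y, z; };

// Compute the i-th of n evenly spaced points on a circular turn
// (angles in radians) about `center` on the ground plane, trading a
// little trig per frame for not storing the points at all.
Vector3 turnPoint(const Vector3& center, double radius,
                  double startAngle, double endAngle, int i, int n) {
    double t = static_cast<double>(i) / (n - 1);          // 0..1 along the turn
    double a = startAngle + t * (endAngle - startAngle);  // interpolated angle
    return { center.x + radius * std::cos(a), 0.0,
             center.z + radius * std::sin(a) };
}
```

Each stored waypoint costs three doubles on the deque; this costs one sine and one cosine per frame instead, which is the trade-off I want to measure.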

The bottom line is that I really want to find some extra time to create the next increment of this prototype. I realized too late that there are some definite improvements I could implement to make this simulator more realistic, more efficient, and more fun. The OGRE API, and the additional physics engine, definitely provide all the means to transform this prototype into what I want it to be. My middle daughter is only 8 years old, so I have at least seven years to perfect this program. :-)