Tuesday, December 23, 2014

Kali (Debian) error when updating/upgrading

It has been a good bit since I've had time to write anything here, and one of the things keeping me busy is my attempt to prepare for the OSCP. What an incredibly fun course, albeit a little frustrating at times. One of the things that has really been driving me nuts, with more than just the OSCP work, is time wasted on oddball problems.

My most recent pain in the rear came tonight when I finally decided it was time to figure out why my msfupdate wasn't actually performing an update. Each time I ran it over the last month, I noted the failure and kept pushing on with whatever target I was going after. Tonight, much to my chagrin, has been all but wasted on WAITING for 115 updates to download and be applied to my Kali VM. This was after I found a solution to the error I was getting when I attempted to run apt-get upgrade -f:

dpkg: error: parsing file '/var/lib/dpkg/available' near line 14392 package 'libpurple-bin':

which returned an error code of (2). After some looking and searching around, I learned the following:
  • libpurple-bin is part (all?) of Pidgin, the instant messenger program...which I do not use on that particular Kali VM
  • the actual line, 14392, of /var/lib/dpkg/available ended up containing garbage
  • in the same directory, unknown to me until I looked, was available-old, which had the same date/time stamp as 'available' but a smaller size
So, with some quick copy commands, I made the following changes:
  • mv /var/lib/dpkg/available /var/lib/dpkg/available_ERR_Line_14392.orig
  • cp /var/lib/dpkg/available-old /var/lib/dpkg/available
I then was able to re-run apt-get upgrade -f, and after approximately 30 more minutes, it was time to reboot the VM and see if it all worked, AND if my MSF was newer than September 2014. Success!

In trying to find the problem, I noticed that there were a LOT of posts with related errors dealing with /var/lib/dpkg/status, or in a few cases, the entire /var/lib/dpkg directory. Before doing what one forum poster did, which was accidentally removing the entire directory, I'd suggest the following for anyone who faces this annoying error:
  • first, check to see if the system already has a backup of whatever dpkg file is giving you pains
  • if there isn't a backup of the last working file you need, you can try to fix the issue by copying said file from either the latest installation media or from another VM of the same flavor
After getting a "good" copy of the file in place, I'd suggest at a minimum running these few commands:
  • sudo dpkg --configure -a
  • sudo apt-get clean
  • sudo apt-get update
  • sudo apt-get upgrade -f
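For anyone who wants to script the rename-and-restore step, here is a minimal sketch of what I did above. The function name and the _ERR.orig suffix are my own choices, and the dpkg/apt commands at the end must still be run as root:

```shell
#!/bin/sh
# Restore a damaged dpkg database file from the "-old" backup that dpkg
# keeps alongside it, preserving the damaged copy for later inspection.
restore_dpkg_file() {
    dir="$1"
    name="$2"
    # bail out if dpkg did not leave a backup behind
    [ -f "$dir/$name-old" ] || { echo "no backup for $name" >&2; return 1; }
    # keep the broken file in case it needs a closer look later
    mv "$dir/$name" "$dir/${name}_ERR.orig"
    cp "$dir/$name-old" "$dir/$name"
}

# Usage (as root), followed by the cleanup commands from the list above:
# restore_dpkg_file /var/lib/dpkg available
# dpkg --configure -a && apt-get clean && apt-get update && apt-get upgrade -f
```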

Hope this helps the next nerd to have this issue! Now, back to PHP reverse shells! :-)

Thursday, September 25, 2014

Awesome Bash Vulnerability

I know that this has been posted by a number of people including Doug Burks from SecurityOnion. Just thought I'd share the links. The vulnerability itself...nice...from an attack perspective.
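For reference, the bug in question was the September 2014 bash vulnerability (CVE-2014-6271, "Shellshock"), and the widely circulated one-liner below checks whether a given bash build is affected. A patched bash prints only "this is a test"; a vulnerable one also prints "vulnerable":

```shell
# Classic Shellshock (CVE-2014-6271) check: a vulnerable bash executes the
# command smuggled in after the exported function definition.
env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'
```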


Tuesday, June 17, 2014

Augusta's HORRIBLE drivers and Network Insiders

I have been having this thought circling my overly-active mind for about a week now.

I was driving home from the VA hospital last week, taking one of my normal, short routes home. One particular stretch of that road has three traffic lights within [about] a half-mile total. These lights are, naturally!!!, timed so as to make you stop at each of the three. However, there is one exception to the stopping rule: the "I don't care/I'm more important than anyone else on the road/It's my right" attitude. At least, that is what I personally believe makes up [at least] a large portion of these drivers' attitude about red lights and what red lights mean.

To put it simply, as I pulled up to stop at the first of the three traffic lights, there was a fellow going the opposite direction who just had to make a left turn, despite the light having already changed from yellow to red. Yup, this ignorant/selfish/lazy/stupid person ran the red light. However, that's not even the worst part about the HORRIBLE drivers around this town.

Granted, Augusta, GA certainly has [much] bigger problems than schmucks running red lights. However, the interesting (and absolutely FRUSTRATING) fact here in Augusta is that at least ~90% of the red lights get run...this is my perception at least. This poses an additional need to be extra-vigilant, always watching ALL of the other lanes at an intersection and waiting at least five or ten seconds before entering the intersection (if you're the first car in your lane).

    A couple of quick points on the problem:
  • The driver running the light isn't really doing anything out of the ordinary, based upon the "normal" behavior of drivers in this area
  • But they are still breaking the law, trying to sneak by the rules and sneak by any cops (if the cops would even pull them over)

    A couple of quick points on possible fixes:
  • Have more cops (and make them write the tickets)
  • Have the cops learn what to watch for, especially in terms of that one 1987 Oldsmobile Cutlass that starts gunning it while still being 100' from the signal

By now, my three (if I have even that many) followers may be wondering: What in the world do the poor driving habits of people in Augusta, GA have to do with anything network related? The simple answer is: "Nothing."...directly, that is. But certain things stuck out the other day that reminded me of a common network problem, so let me see if I can tie these two things together.

Can we think of all of the cars on the road in the city of Augusta, GA as representing user/machine activities on a network? For instance, one car could represent a user transferring a file while another car could represent standard AD replication between multiple DCs. If you can think along these lines, then I believe it will be easy for you to make the same leap that I did.

When I was thinking about all of these cars in the turn lane, I came to the conclusion that they could be placed into two distinct categories, despite any number of small, and large, differences in either the vehicle(s) or the driver(s). These two categories are: safe drivers and unsafe drivers. Now, think about what they are doing: they are making a turn, the same turn, roughly going the same direction (although the tires of each car may not follow the exact arc of another).

Now, and my brain did this auto-magically so I might be wrong, but it's not a far leap to translate the drivers of each car into "Users." As the drivers are all driving, the "Users" are all..."using"...a network, a resource, a device, etc. This is the point in my thinking where I came to draw a correlation from the drivers making the turn to one of the biggest problems we face in network security: The Insider! So who is the insider in my analogy, and can this do ANYTHING for me, or others, in terms of finding that insider?

Who is the insider in my turning-cars analogy? It's easy....it's the baby in the backseat! ...just kidding. The insider is represented in this analogy by the idiot jerk, or even JERKS, who run the red light instead of waiting for the next green light. Those that made the green, and even yellow, lights are the users performing "normal," authorized activity on the network. Maybe the driver who forces the yellow, with the light changing while they are still in their turn, could be lumped in with the jerks who just run the red light. Your call there, because I honestly don't know that it matters to my overall thinking here.

So I have told you all about the poor driving in Augusta, GA and about how I easily correlated that into network activity, especially the inside threat. Now what? Here is where my head started to hurt the other day. I wonder if an algorithm could be defined to identify those who are GOING to run the red light, before they run it...or as soon as they put more pressure on the gas pedal? If this could be done then I believe an algorithm can be created to better identify an insider threat...maybe not as early as when they are just "thinking about it" or testing their access, but at least as early as the start of that first file transfer or copying action. Thinking about this has led me to some other thoughts regarding protection from the insider (and detection?).

Budgeting for your company's security may be a difficult or even non-existent task due to financial constraints/availability. However, I wonder about some of the bigger equations and books that throw around terms like Risk, Mitigation, Return On Investment, etc. Do we not already have at least 50% of the needed functionality to start working towards identifying and/or stopping insider threats/breaches? Do we not already have at least 50% of the tools needed to turn those terms into action...to put our brains where our mouths are, so to speak?

Some questions/thoughts:
  • Can you baseline your network in terms of:
    • The average time window that each user is logged in?
      • If so, then why not block logons at all times outside of that window? Sure, you may have someone stay later than usual on rare occasions, but in that instance they could conceivably call the help desk for a short logon-time extension.
    • The average number of bytes each user sends [somewhere]?
      • Then users could be grouped and rules utilized for abnormal data sends.
      • For example, you baseline and find that ten of your users send less than 10MB of data via SMTP between 9:00a and 12:00p. A rule firing at 10.01MB of data transferred, or maybe even using a calculated tolerance (say, 10%?), would alert those monitoring said rules.
  • Do we really have any excuses left not to start making better use of access controls?
    • Windows Group Policies and ACL granularity have improved a lot in the last couple of years
      • Would it be that hard to create a security group for only those allowed to access a particular file
        • AND apply time limits
        • If it's an Office doc, Active Directory Rights Management Services provides even more help here
    • *NIX systems support finer grained access controls
  • Implementing a two-person rule
    • I envision something akin to the User Access Control prompt needing the credentials of an administrative user
      • As a two-person rule, the credentials would have to be those of someone else who is already authorized to access the material
  • Is user training really effective against the insider?
    • NO! Absolutely NOT!
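To make the byte-count idea above concrete, here is a toy sketch of such a rule. The function, user names, and figures are all hypothetical illustrations of the baseline-plus-tolerance concept, not drawn from any real monitoring product:

```shell
#!/bin/sh
# Toy baseline rule: alert when a user's send total exceeds the measured
# baseline plus a 10% tolerance. All names and numbers are illustrative.
check_transfer() {
    user="$1"; sent_mb="$2"; baseline_mb="$3"
    # allow baseline + 10% before alerting
    limit=$(awk -v b="$baseline_mb" 'BEGIN { printf "%.2f", b * 1.10 }')
    over=$(awk -v s="$sent_mb" -v l="$limit" \
        'BEGIN { if (s + 0 > l + 0) print 1; else print 0 }')
    if [ "$over" -eq 1 ]; then
        echo "ALERT: $user sent ${sent_mb}MB (limit ${limit}MB)"
    fi
}

check_transfer alice 10.01 10   # within the 10% tolerance: stays quiet
check_transfer bob   12.50 10   # 12.50MB > 11.00MB: fires an alert
```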
I cannot be the first one to have any of these thoughts or to present any of these questions. I wouldn't be surprised if these thoughts and questions have been floating around the security-nerd cubicles for the past 20 years. I wonder, though: are we, people in general, getting more and more complacent about what we do and what we see our office mates do?

Saturday, May 31, 2014

Military Leave Scam (Facebook/Phishing)

[edit: I am removing the photographs on the outside chance that they are not really the pictures of the piece of trash who is pulling this scam].

I don't normally "dime" people out on non-life threatening things, but this has really got me ticked off as it's someone who messed with MY family!
An older scam appears to be resurfacing....the Soldier that needs help going on leave to be with you. A cousin of mine was hit with this and luckily she started to question everything he was saying and the pics he sent.

The scam is, in some version, a long-distance relationship with a supposed Officer in [pick your branch]. He really wants to take leave to come and spend time with you, but he is on a "secret/underground/important" mission and can't get free (time off) unless he has a replacement. So, he needs you to send an email to some General/Colonel stating that you REALLY need him to allow your boyfriend to come home on leave for some emergency. Plus, he also needs money: after you write his leadership, you'll get a "bill" of charges for his leave to come see you.

The guy that tried to scam my cousin is: https://www.facebook.com/eric.mcdaniel.9615?fref=ts. He claims to be a USMA graduate and a CAPT in the 23rd Artillery Division (which isn't real). He says he is stationed at Red River Army Depot (but lives in California???) in this non-existent unit, sent my cousin a crack-head DoD ID card (looks more Chuck-E-Cheese), and claims to be from Poland, Virginia, and/or England. Even worse, he claims to be a hero/veteran. The pictures he sent her looked like UK/Aus/NZ uniforms, although even those pics were wrong!

Also, his "official" Army email is an AOL address, which is NOT how the US Military does things, and his unit/CO email address is u.sarmytransitdepartment@usa.com, again, NOT a valid military address, at least to my knowledge.

One of the last correspondences (see below) that she got from him/his "unit" detailed the following:
"He is a Captain, 23rd Artillery Special Ops located in TX based out of Red River Army Depot.  Leave to be his house or my house.  Was told he just needed a darn good reason from family.  Says has almost 20 years in.  Army saying they need pay for flight and replacement officer.  Underground mission was mentioned once."

 This guy is a SCHMUCK!!! ...to put it nicely.
Since there is already a lot of information on the web about this type of scam, I am not going to go into much detail. However, I would like to post some pics of the emails and pictures that my cousin received from this loser!
Emails from "his Unit Leadership"



[Edit: Picture Removed]

[Edit: Picture Removed]

[Edit: Picture Removed]

[Edit: Picture Removed]

Of all the pictures he sent my cousin, one is my absolute favorite. He tried so hard to convince my cousin that this format was an alternate DEERs card!!! What a TOOL!!!!!!

[Update: More info]

Just wanted to add here that, after I read the records of my cousin's email and IM exchanges with this uber-tool, I am even more convinced that he needs, at least, a very large and long-lasting blanket party. In multiple messages he claimed to have "lost more men" during the previous night's "dangerous mission" outside of RRAD. It was obviously just part of his scam, trying to get my cousin to worry about him and to speed up the "leave payment" that his "unit" required. Jeez, I wouldn't mind whoopin' this boy one or two times!!!!

Monday, May 5, 2014

Annual Simulator Training

I watched an interesting episode of Nova with the family the other night. It detailed issues with cruise ships and their sinkings, comparing disasters such as the recent Italian ship whose Captain hit a BIG rock, the Titanic, and the Oceanos. A large part of the comparison dealt with the ships and their Captains. However, part of the episode dealt with crew training.

In the aviation world, at least in the US, pilots are apparently required to take annual simulator training. Maritime crews and Captains are not. Neither are Network Security professionals. Wait. What's this about annual simulator training for nerds?

It's simple really...at least in my head; I admit that I may be foggy after having a Cinco de Mayo dinner with the family at Chili's. But I think it makes sense, at least some sort of annual event for all types of network security folks, not just DoD exercises or SANS training courses....or even the rush to submit CPEs for your CISSP at the end of the cycle. I may be putting the majority of us nerds into the same container as I am personally, but here are my points:

1. I am frequently moving from one project to the next. Although some may have solutions in the same domain (pun intended!) as others, there is no one-size-fits-all, at least I haven't found it yet. In the last 12 months, for example, I have had to work out solutions using: Flash, PHP, Perl, Python, bash, batch, PowerShell, new exploits and old exploits, etc, etc. Have you not had to do the same? I'm certainly not complaining, although it can be frustrating in job interviews if you haven't touched Perl in 12 months and you get a specific question only to have the answer stuck on the tip of your tongue. :-(

2. What's old is new and what's new is old. While base methodologies and languages haven't changed a whole lot over the last decade, it's safe to say that solutions using said methodologies and languages, or some combination thereof, have certainly changed. Do you use the same type of (or even the exact same) script for some task you've been doing over the last decade? I'd venture the answer is no...or at least I think it should be. For example, what I used to like to do in Perl or batch/bash scripts, I now like to do in PowerShell (and Perl and batch/bash). Some GUI tools even catch my fancy every once in a while. :-)

3. Who's the bad guy? He's not the same one he was a decade ago. Probably not even the same one he was a year ago (or it could be a she, to be fair!). Does the adversary (be it he, she, or them) use the same tactics, techniques, and procedures...tools and methodologies as a year ago? Sure, beaconing will always be beaconing...but even this has changed over the years in terms of ports, protocols, services, encryption, data, etc, etc, etc.

4. Certification providers and requirers (not sure that's a real word) have moved towards demands for recertification, annual training requirements, or a combination thereof. That's all fine and dandy and usually not that complex to satisfy. But is it REALLY satisfying? If you took a SANS course last year just to learn something new and/or satisfy some CPE/CMU/C?? requirement, could you sit down today and perform even half of the tasks you learned? No...I don't think most people can, unless that training was already a part of their job function or they found a way to incorporate it as such! Fact is, our training and our knowledge, in any field really, is perishable. If you don't use it, you WILL lose it, and that's the cold hard fact of the matter.

I have at least 15 different and specific skills/languages/tools listed on my resume. I can even talk to all of them to some extent. However, there are a number of them for which I need the manpage to refresh myself, because of the perishability of the skills (and sometimes because of the ever-so-slight differences between some scripting languages). I mention this not as a focal point of this post, but just as a way to maybe bring the point home a little bit more: we computer security nerds live in a world that is as horizontal as it is vertical, and our tools are as perishable as mayonnaise on the back porch on a hot summer day.

This brings me to the thought of "Annual Simulator Training" for computer security nerds. If we had, regardless of industry or threat, or better yet, tailored to industry and threat, an annual training on a simulator, wouldn't this in fact allow us time to re-learn, re-master, and re-visit the tools we don't get to use often enough, or in ways that are challenging enough? I am sure that one of the two people who read this blog has already thought, "Doesn't the DoD or USCYBER already do an annual training/simulation event?" The answer would be yes. However, the number of players in the "big" simulation is not a realistic sampling of the network security nerds inside the DoD/USCYBER complex itself. Not to mention, it's training only for this finite number of participants.

There are options available now in terms of simulation environments for network protection, detection, response, analysis, etc. But they are targeted to specific customers (read: If you have enough cash or credit, you can have a simulation network that works 50% of the time...if you're lucky!!!). Furthermore, as with the Captains and crews of marine vessels, there is no specific requirement for annual simulation training in our field. True, there are the requirements of CPEs/CMUs and the opportunities that some individuals get based solely on their employment location and/or provider. However, I would argue that the private sector is as important as the government and public sectors in terms of how we are equipped to protect and defend. And if the level of importance is the same across the board, at least to all US interests (private, public, and governmental), then the training exercises and environments should be extended to support annual simulation training for all of us network defenders and penetration testers! Furthermore, if these training capabilities were extended, this would allow for federal support/mandate of an annual simulation-training environment.

Now, before anyone shoots me for thinking that I want the federal government to be "all up in our business," let me say this: I don't want them in our business, I want them to better support our business. I think in doing so, we all benefit regardless of sector. Furthermore, it might help stop some of the hare-brained, half-cocked, knee-jerk reactions that I see in terms of policy shifts and requirements development (why should baselines shift every time some CISO or some Colonel gets wind of some "new" threat?).

OK. I'm going to stop here and take a giant leap off of my soap box! Hope you enjoyed this episode (psychotic episode???) and stay tuned for a word from our sponsors. :-)

Monday, February 10, 2014

Zend Framework 2 configured on Ubuntu 12.04

Setting Up the Zend Framework 2 Skeleton App on Ubuntu 12.04

I recall a class I took a few years ago during my first semester in the software engineering Master's program at the University of Michigan. This particular class stays with me because the instructor had a "guest speaker" who was on the development team for the Zend Framework. At the time I chose to use a different framework (and language), so I filed the Zend Framework away for a rainy day. I think that day has come...but in a good way.

Recently I had been working on a PHP-based web application. I would like to say that it is finished...but unfortunately I need to do a few more housekeeping things with it and write a few more pages. This project got me thinking, first a year ago and then again three months ago, that I would like to migrate it to a purer MVC-based application than what it is right now. There were multiple catastrophic failures while I was writing and backing up code, along with the "customer" in this case changing requirements multiple times. So while I started initially with an MVC pattern, the current state of the application is much more ad hoc than I had wanted. Which is what got me thinking again about the Zend Framework, and what led indirectly to this blog entry.

I figured that it shouldn't be too hard to install a well-known framework on my Ubuntu VM or my craptastic Windows 8 host...right? Turns out that it was a little more complex than I anticipated. Between Windows 8 and Ubuntu 12.04 I would say that the required workload is about the same. Since I prefer Ubuntu, I want to focus on that, especially since I built this VM specifically for development projects. So without any more fluff or garbage from me...

I found an install guide on the zendframework.com website. While the guide was good, and offered multiple options, it certainly wasn't exact...or maybe the right word is "complete."
The first option in the guide is to use composer. However, that route trips you up unless you know to add the right flags to work around the minimum-stability requirements, so I am going to skip composer for this install and instead clone the git repository of the framework and go from there. I do have some screenshots as well as some video (I am going to make my first attempt at a YouTube instructional video for this effort) that I will load/link here in a future update of this post.

The steps that worked for me (using a base, updated install of Ubuntu 12.04 iso):

1. Clone the Skeleton Application repository from github:
sudo git clone https://github.com/zendframework/ZendSkeletonApplication.git /var/www/zf2-skeletonApp

2. cd into new directory: cd /var/www/zf2-skeletonApp

3. Now here is where we'll use the composer inside the framework to install:
sudo php composer.phar self-update
sudo php composer.phar install

When the above two commands have run to completion, which can take a few minutes, you will most likely get a block of messages like the below, describing additional functionality you can add to your Zend Framework install.

Loading composer repositories with package information
Installing dependencies (including require-dev)
  - Installing zendframework/zendframework (2.2.5)
    Downloading: 100%

zendframework/zendframework suggests installing ext-intl (ext/intl for i18n features (included in default builds of PHP))
zendframework/zendframework suggests installing doctrine/annotations (Doctrine Annotations >=1.0 for annotation features)
zendframework/zendframework suggests installing ircmaxell/random-lib (Fallback random byte generator for Zend\Math\Rand if OpenSSL/Mcrypt extensions are unavailable)
zendframework/zendframework suggests installing ocramius/proxy-manager (ProxyManager to handle lazy initialization of services)
zendframework/zendframework suggests installing zendframework/zendpdf (ZendPdf for creating PDF representations of barcodes)
zendframework/zendframework suggests installing zendframework/zendservice-recaptcha (ZendService\ReCaptcha for rendering ReCaptchas in Zend\Captcha and/or Zend\Form)
Writing lock file
Generating autoload files

4. Set up the required VirtualHost - Depending on your setup, you will need to modify one or more of the below:
    - /etc/httpd/httpd.conf (non-Debian layouts)
    - /etc/httpd/extra/httpd-vhosts.conf
    - /etc/apache2/apache2.conf (Debian/Ubuntu)

5. On Ubuntu (and other Debian-based systems that package apache under /etc/apache2), the virtual host setup can be a little tricky. To start with, create a file in /etc/apache2/sites-available/ named after your virtual host's hostname. So, since I have called my virtual host zf2-skeletonApp, I created:
/etc/apache2/sites-available/zf2-skeletonApp.localhost with the following content:

<VirtualHost *:80>
    ServerName zf2-skeletonApp.localhost
    DocumentRoot /var/www/zf2-skeletonApp/public
    SetEnv APPLICATION_ENV "development"
    <Directory /var/www/zf2-skeletonApp/public>
        DirectoryIndex index.php
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>

Take note of ServerName, DocumentRoot, and Directory. ServerName is the NamedVirtualHost value that apache2 will look for when the site is requested. DocumentRoot and Directory both contain the full paths to the 'public' directory of the zf2-skeletonApp. This is where the Zend Framework will serve the default index.php.

6. So that we don't forget to do it later, let's make sure that the site is enabled and that the Rewrite mod is enabled.

   - to enable the site, use the following command:
sudo a2ensite zf2-skeletonApp.localhost
(Note the full name of the site is used in the command.) Another item worth noting: if you already have a file of the same name at /etc/apache2/sites-enabled, the a2ensite command will fail. The reason is that a2ensite works by creating a symlink to the available site at the sites-enabled/ location.

- You could run the command to enable the Rewrite mod without checking to see if it's already enabled. However, you should spend a minute and drive around your apache2 file structure a little bit, and what better chance than this one. :-) Enabled modules show up as links in /etc/apache2/mods-enabled/:
ls -la /etc/apache2/mods-enabled
Now, to enable the Rewrite mod, use the following command:
sudo a2enmod rewrite
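As a small aside (my own helper, not part of the Zend guide): on the Debian layout a module is enabled when its .load file is linked under mods-enabled, so the check can be scripted before calling a2enmod:

```shell
#!/bin/sh
# Report whether an apache2 module is enabled under the Debian layout.
# APACHE_DIR is overridable so the check can be pointed elsewhere for testing.
APACHE_DIR=${APACHE_DIR:-/etc/apache2}

mod_enabled() {
    # enabled modules have their .load file linked into mods-enabled/
    [ -e "$APACHE_DIR/mods-enabled/$1.load" ]
}

# Usage:
# mod_enabled rewrite || sudo a2enmod rewrite
```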

7. Now it's time to update /etc/hosts so that your hostname will resolve (the loopback address is perfectly fine for this):
127.0.0.1    zf2-skeletonApp.localhost zf2-skeletonApp
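If you script your setup, a guard like the following (my own sketch, not from the guide) keeps the hosts entry from being appended twice on re-runs:

```shell
#!/bin/sh
# Append a hosts entry only if that exact line is not already present,
# so repeated runs of the setup stay idempotent.
add_hosts_entry() {
    hosts_file="$1"
    entry="$2"
    # -x matches the whole line, -F treats the entry as a fixed string
    grep -qxF -- "$entry" "$hosts_file" || echo "$entry" >> "$hosts_file"
}

# Usage (as root):
# add_hosts_entry /etc/hosts "127.0.0.1 zf2-skeletonApp.localhost zf2-skeletonApp"
```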

8. One final step: restart your web server:
sudo /etc/init.d/apache2 restart

9. Check your work by attempting to open the zf2-skeletonApp.localhost site in your browser:
firefox http://zf2-skeletonApp.localhost &

10. If everything was successful, you should be looking at the welcome page (served from zf2-skeletonApp/public by index.php). - If you get the default apache2 install page, or some other page, then you missed a step.

I'll soon be updating this with some screenshots and hopefully a video of the full process.