I found a new project that I want to work on. Actually, I was invited to work on yet another web-based application project that I think will be completely awesome. But that is not the project I am referencing for this post. Nope...I want to play around with password cracking.
I know that WEP is not the strongest option; these days it is the easiest to crack, especially with methods such as the PTW approach (Pychkine, Tews, Weinmann) or the older Korek attack (either one can be easily run through aircrack-ng). However, after an interesting time at a relative's (one of my Michigan cousins named Henry....one of three) over Thanksgiving week, I wonder more and more about the issues surrounding the creation, storage, and transmission of solid passwords.
A password should be hard to crack, easy to memorize, and should work on the system it's established for (think earlier LANMAN stuff). I think everyone agrees on this. However, there have been some divergent schools of thought for some time now on what a "solid" or "strong" password consists of in general terms. Everyone with access to something on the Internet in the last decade has to have noticed the change in complexity requirements. It is not uncommon anymore for a password to have requirements such as:
- length (12+ chars)
- at least one from each group:
---- uppercase letters
---- lowercase letters
---- numerals
---- special characters (#, $, %, ^, &, for example)
- can't match any of the last X passwords used
- and so on.
On top of making a password hard to guess...and hopefully hard to crack (although I believe that ALL passwords, given the time and processing power, can be cracked), the method of how a password is encrypted and transmitted is constantly being revised and tested. Some of these include client-side encryption (OR hashing, with or without a salt), server-side handling (BAD idea if you ask me), WEP, WPA2 with PEAP, etc.
Getting back to my point, I recently stayed at Henry's (my relative) house. I was on vacation and had zero desire to do anything with a computer. That was until he made a comment about his wireless router password using WEP. He didn't challenge me, but I thought it would be fun to capture some traffic and see if aircrack-ng, or even Cain & Abel, could crack his password. So, I deferred getting the password from him and proceeded to break out my AirPcap tool and just "collect" some IVs. I figured that with 128-bit encryption set on his six-year-old router, it wouldn't take long to gather enough IVs and crack the password. After all, my wife's cocky aunt had refused to give me her WPA2-protected password and I proceeded to gain that one (and two of her neighbors' accidentally) within about 20 minutes. (I should add, with some cockiness, that she hasn't tried to talk trash to me again...LOL). Since the WPA2 crack was SOOOOO quick, and since my unique IV collection count kept climbing rather quickly, I was certain that I would have his password in no time.
I was wrong. Henry's password proved to be VERY difficult for aircrack, using either the Korek or PTW attacks. I decided to query Henry about his setup....I wasn't trying to force anything, just trying to crack gracefully. After talking with Henry, I verified that: 1) he was not broadcasting his ESSID (Cain picked that up for me really quickly though), 2) he was only using WEP, and 3) NO special characters were used. One question I neglected to ask him was the length of his password. It was LONG...30 chars.
I decided that I would collect at least 50,000 unique IVs. With a 128-bit key, PTW should be OK with 40,000+ IVs. I let this collection run for a while (I didn't actually clock it) and then piped it to the cracker.
The aircrack-ng GUI provides a reasonably good interface. I used a variety of settings to attempt both the Korek attack and the PTW attack, even though I was seriously short of IVs for Korek's. I tried with and without using a known ESSID and/or BSSID, and after seeing the password, I tried adding 1, 2, 3, 4, and 5 of the first decrypted characters. No joy!!! I couldn't believe it. I chalk some of this failure up to my trying to be gentle. However, and here is where my new side-project is coming from, it REALLY got me thinking about the value of character-set complexity versus a just plain long password of numbers and letters. An interesting side note to this was that Henry's password contained ONLY characters used in hex (A-F and 0-9), had at least two dictionary words at the start of the password, and used some pretty common keystroke patterns.
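As a rough back-of-the-envelope illustration of that length-versus-complexity trade-off (the character counts below are my own assumptions for comparison, not Henry's actual setup), bc can do the keyspace math:

echo "94^10" | bc     # 10 characters drawn from ~94 printable ASCII characters
echo "36^30" | bc     # 30 characters drawn from just a-z and 0-9

The second number is enormously larger, which is exactly why a long, "simple" password can hold up so well.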
So now I want to spend some quality time with different crackers, password creation methods, encryption and hashing algorithms, etc., and just run some tests on cracking WEP versus WPA2 using passwords that contain only alphanumerics and NOT special characters. I think I will also compare passwords that are collections of dictionary words versus complex passwords. This just sounds like a LOT of fun to me.
I know that these types of evaluations have been done numerous times...but it sounds like fun AND it sounds like a realistic way to get more familiar with the tools and get reacquainted with some of the protocols. In any event, I am rambling, which means I need to call it a day. Tomorrow's a busy day of more compliance work and a doctor's appointment...yay. Plus, I guess it's time to start getting some rest if I am going to initiate this project and a few others I mentioned in an earlier post.
Tuesday, December 13, 2011
Wednesday, November 30, 2011
Updating My Certs - GPEN
Sat for and passed the GPEN last week. It was a great class with John Strand in Baltimore. The test was definitely different from the practice tests, but not so bad. Just because I am bored right now, I have updated my cert logo picture. :-)
Monday, November 21, 2011
Making my work day EASIER via a quick script
When I get to work, I have a group of things on the Windows box (that I am stuck using) that I both have to and want to have open on my desktops:
Outlook, for that now-annoying enterprise email
Browser, with multiple tabs to highly used sites
Some tools I use on a daily basis
File Browser
etc.
Bottom line is that I LOVE scripts. Well, I just plain love programming, and ANY kind of code is fun code to me. I believe that anyone working in IT more than a day at least KNOWS the value of scripting some tasks. I take it a little further and will try to script anything and everything I can. Whether it's OS apps I need or new Macros in UltraEdit, I REALLY want to make things as easy and streamlined as possible.
There are multiple solutions to using startup scripts. In GPOs you can assign scripts to the user(s) or to the actual box, at both start-up or shutdown. You can use a scheduled task to do something after a login executes or at particular times (like open your timecard application at lunchtime, for instance).
Another way....my preferred way, is to write a simple bat file that will do what I want when I log onto the box, and copy it to (on Windows 7 Professional):
C:\Users\myusername\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup
Once I have a working bat file in this location, EVERY time I log in, the bat file runs and my world is at peace.
An example is below (really quick, since I have a cert exam in 6 hours and it might be a good idea to get some sleep). I am doing this from memory, so I am not 100% certain on the "start" syntax...but I know it's close:
mybat.bat
@echo off
start "MyEmail" /d "PATHTOOUTLOOK" outlook.exe
start "MyPages" /d "PATHTOFIREFOX" firefox.exe http://www.google.com http://www.espn.com
exit 0
In my example script, I am using Firefox as opposed to IE. This is more than just personal preference. As of the last time I checked, IE8 and IE9 did not support a way to open multiple tabs in one browser window from the command line. I will recheck this and edit the entry if I find, and successfully test, evidence contrary to what I initially read about IE8/9.
Tuesday, October 11, 2011
SANS560 at SANS Baltimore 2011
Just a quick one here. Today was day 2 of SANS Baltimore 2011, and I am even more impressed with the presentations we had today in SANS560 than with those on day 1. John Strand is our instructor, something a co-worker and I intentionally attempted to schedule, and it's been well worth it so far. It's not every day I get time to play with nmap, nessus, scapy, hping2, and tcpdump (well...tcpdump is pretty much every day for me), but we spent some actual FUN time on those today. At least it was fun for me. There did appear to be some who struggled with the exercises due to a lack of familiarity. However, it seems as though everyone is enjoying it.
My employer paid for part of this training, but a chunk of change still had to/has to come from me. Had the class been boring or non-informative, I think I would be a little ticked off. However, even with having some experience pen-testing and having gone through other pen test training, I am so far thinking that I have gotten over 1000% ROI and that this has been one of the better classes so far...or it at least rivals the SANS507 I took earlier this year from David Hoelzer.
One of the nice things about most of today just being review...I could rather quickly run through the examples and work on installing both BackTrack 5r1 AND the newest release of Doug Burks' SecurityOnion (which there really is no excuse for anyone NOT to have by now). I am just having too much nerdy fun this week!
Labels: amap, BackTrack 5r1, David Hoelzer, Doug Burks, enum, GPEN, hping2, John Strand, Nessus, netcat, nmap, penetration testing, SANS Baltimore 2011, SANS560, scapy, SecurityOnion, tcpdump
Wednesday, August 31, 2011
Robocopy Config file?
If you run Robocopy with an RCJ file (Robocopy Job) just once, then the file is just that: a job file. However, if you plan to use the same settings over and over again, then consider this a configuration file that is easily modifiable and copyable to reuse on other directories.
I personally have a directory structure set up like:
Backups
|--BackUpJobs
| |--*.RCJ files
|--BackUpLogs
| |--*.log files
|--*.bat files (for single jobs)
|--RunAllBUs.bat (to execute all jobs)
The log files should be self-explanatory. Here I just want to run through the bat and RCJ files.
For my individual bat files, I will use something like this for all single jobs:
@echo off
cd c:\users\myusername\desktop\BackUp
robocopy /JOB:BackUpJobs\[WhatItIs]BUJOB.RCJ
pause
Where [WhatItIs] is something indicating what directory I am backing up. For example, if I was backing up the CIS577 directory, the path would be: BackUpJobs\CIS577BUJOB.RCJ
As with any bat script, the path is relative to where the script is executing from.
Now for the fun part, the RCJ "config" files. These files, if you know the syntax for robocopy, can be modified in short order to create a full backup system with a variety of operations. For instance, one job can do a /MOV, which will delete everything from the source after it's copied to the destination, while another job just makes a copy of the directory and all subdirectories (/E), copying new(er) files to the destination.
CIS577BUJOB.RCJ
::
:: Robocopy Job
::C:\USERS\MYUSERNAME\DESKTOP\BACKUP\BACKUPJOBS\CIS577BUJOB.RCJ
::
:: Created by myusername on Sun Apr 10 2011 at 20:46:13
:: Modified by hand. 15 May at 1210 am.
::
:: Source Directory :
::
/SD:C:\Users\myusername\Desktop\CIS577\ :: Source Directory.
::
:: Destination Directory :
::
/DD:\\werdenshare\GoFlex Home Personal\DaveSchoolMain\UofM_Dearborn\CIS577\
:: Destination Directory.
::
:: Include These Files :
::
/IF :: Include Files matching these names
:: *.* :: Include all names (currently - Command Line may override)
::
:: Exclude These Directories :
::
/XD :: eXclude Directories matching these names
:: :: eXclude no names (currently - Command Line may override)
::
:: Exclude These Files :
::
/XF :: eXclude Files matching these names
:: :: eXclude no names (currently - Command Line may override)
::
:: Copy options :
::
/S ::Copy Subdirs but not empty ones
/E ::Copy Subdirs including empty ones
/COPY:DAT :: what to COPY (default is /COPY:DAT).
::
:: Retry Options :
::
/R:1000000
:: number of Retries on failed copies: default 1 million.
/W:30
:: Wait time between retries: default is 30 seconds.
::
:: Logging Options :
::
/LOG+:C:\Users\myusername\Desktop\BackUp\BackUpLogs\CIS577BULog.log
:: output status to LOG file (append to existing log).
The RCJ file does nothing more than pass the parameters on the command line that you would be using if you didn't use the job file. So without this file, your robocopy job using the above would be:
$>robocopy C:\Users\myusername\Desktop\CIS577\ "\\werdenshare\GoFlex Home Personal\DaveSchoolMain\UofM_Dearborn\CIS577\"
/S /E /LOG+:C:\Users\myusername\Desktop\BackUp\BackUpLogs\CIS577BULog.log
A couple things to notice: the source and destination directories ONLY need to be wrapped in quotation marks IF either one has spaces AND is passed on the command line. In the RCJ file, no quotation marks are needed. Also, if you look at the command line example parameters, you will see [source] [destination] /S /E /LOG+ and not the other options such as /XD from the file. This is because when a job you created is saved to an RCJ file, all defaults are written to the file unless you have passed a parameter to override their usage completely.
The really easy part that I like is that I can copy this file out, adjust the source and destination at a minimum, and then save the file as another robocopy job file. The extra bit of ease here, if you know where the options go, is that you can easily add any option changes to the files as you create them or as your needs change. For example, in the Copy options section, I can add /MOV to the list of uncommented parameters and this will do what you'd expect, as I mentioned before (although the folder/subfolder structure will remain intact).
This is probably enough from me on Robocopy this year. :-) Now I am working on a Perl script to take an exported list of IE and Firefox bookmarks and create an XML file from them. Other than the easy answer of just wanting a quicker way to access good, frequently used references, the format I am going with (as created by my buddy James) will allow me to add usernames, masked passwords (if I am feeling crazy), and/or password hints. Additionally, I am going to take it a step further with another display field for things such as Frequent Flyer program numbers and POC info. Really this is an academic exercise to create something I want...I get a little tired of scrolling through a TON of bookmarks on a LOT of different computers. By doing this, I can keep it updated and portable....basically a poor man's way to sync some favorites between computers.
Tuesday, August 9, 2011
Checking in...and ISSA Meeting awesomeness!
So I didn't get the time I thought I would to do one "oneliner each day" for July. I've been really busy with work trips, finishing up a grueling semester with the world's worst professor, and just trying to take a breather for a day or two. That said, I really don't have any free time right now with the kids and the wife starting school. However, I did make time to go to the Quarterly Greater Augusta ISSA meeting. That was a GREAT decision.
Not only was it great to see friends and catch up a little...it was really awesome to listen to John Strand (of pauldotcom.com fame and http://www.john-strand.com/) and Matt Jonkman (Emerging Threats, Suricata). Anyone who was aware of this meeting and just arbitrarily chose not to go...shame on you, because it was VERY good!
Strand's presentation was really kick-butt. He talked more about a change in culture, what's effective (and not effective) and things he thought were appropriate for moving forward. The real examples he laid out, especially regarding SSL issues, were pretty awesome! The dude was really rocking his presentation and I think we all learned something while having some really good laughs!
Jonkman focused primarily on Suricata. It started out more like a sales pitch and, in fairness, it probably primarily was just that. However, the information he passed on ended up being pretty interesting to me, especially about some of the upcoming releases for Suricata. I have had some exchanges with Jonkman in the past and he's always struck me as pretty smart, which he again appeared to be tonight.
Thursday, July 21, 2011
netstat oneliner: list the ports that are listening
A really quick one tonight. It will be nice to actually have some time soon to expound more on these things as this semester winds down (and maybe only one more to finish the Masters!)
Sometimes I want to know what ports are listening on a server. I can use this information to help troubleshoot a non-working inbound connection, or I can use it to make sure that specific ports are NOT listening. Run the below as root or using sudo:
netstat -an | grep -i listen
or
netstat -an | grep "LISTEN"
This command, like every Unix command I can think of tonight, can be piped to other commands, such as awk, in order to clean up/format the output.
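For example, one quick (and admittedly rough) way to boil that output down to just the listening TCP port numbers; the field positions below are from the Linux netstat output and may vary on other platforms:

netstat -an | awk '$6 == "LISTEN" { print $4 }' | awk -F: '{ print $NF }' | sort -nu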
Tuesday, July 19, 2011
ps oneliner: search for a specific running process
Have you ever wanted to verify/search for a running process? Or, have you ever wanted to see if you had multiple counts of the same process, possibly indicating orphans or hung processes?
It's rather easy! As an example, assuming you are logged in as root or su'd, and looking for snort:
ps -ef | grep -v grep | grep snort
The "grep -v grep" command will invert the selection for lines matching "grep"....so it will print ONLY lines that do not contain "grep". Why is this important? Well...it's not. However, the grep command is a process itself so if you have one running snort process, you will get two lines returned:
- the line containing the actual results for the real snort process
- the line containing the grep action(s)
So, assuming that you have only snort running, and running only once, this command would return one line, showing the snort process information (including startup arguments...yay!)
But what if you need to count how many processes? Add the -c switch to the final pipe to grep:
ps -ef | grep -v grep | grep -c snort
This will return an integer value of the number of processes containing snort in the return of the ps command.
It's important to note that this DOES count/show EVERY line of the ps output that contains "snort". This could, if you were running other programs that integrated with snort parts, such as Barnyard, count/show more than one line.
There is a lot more fun that could be had with this. For example, you could search for more than one process, use awk to strip their PIDs, and then find the difference of the two counts....a quick way to see if one of a group of automatically started programs might have hung after the other one(s) restarted.
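A rough sketch of that idea (the second process name, barnyard2, is just my assumption for the example):

SNORT=$(ps -ef | grep -v grep | grep -c snort)
BARNYARD=$(ps -ef | grep -v grep | grep -c barnyard2)
echo "snort: $SNORT  barnyard2: $BARNYARD  difference: $((SNORT - BARNYARD))"
ps -ef | grep -v grep | grep snort | awk '{ print $2 }'    # just the snort PIDs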
Don't forget these helpful notes too:
grep -i ...case-insensitive
|| ...logical OR operator...look for this OR that
&& ...logical AND operator....must match BOTH
--cat filename1 | grep something1 | grep something2 ...is inherently a logical AND operation
Labels: -ef, grep, grep -c, grep -i, oneliner scripts, ps -ef, ps oneliner
Sunday, July 17, 2011
grep oneliner: get the line you want and its neighbors
Grep is great for printing out a line (or multiple lines) that match a given value. However, I have found it sometimes helpful to search large files, especially log files, and get the line I want plus a few before and after.
If I want to find errors in the /var/log/messages file and I know the line will contain the word "ERROR", I can use the below to get all the lines matching (case-sensitive in this example) as well as 3 before and 3 after.
grep ERROR -B 3 -A 3 /var/log/messages
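If you prefer a single switch, grep's -C option gives the same symmetric context (an equivalent shortcut I use, not part of the original command):

grep -C 3 ERROR /var/log/messages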
Friday, July 15, 2011
Book Review Pending for ACM: Spyware and Adware
In a few weeks, I think, I am going to have my third book review published in an ACM journal. Yay! While I would prefer to have time to actually do research and write something a little more substantial than a review, I do find the reviews to be a fun and enjoyable experience, as well as a learning one.
This most recent review for SIGACT was actually for a relatively smaller book (less than 200 pages). The book itself is called Spyware and Adware by John Aycock. I am going to withhold any in depth comments, but I will say that this is a book that could be useful for one of the largest ranges of people I can think of for a technical book. It's also part of a bigger series by Springer.
alias oneliner: make yum installs a little faster
I should preface this with: I KNOW that alias is a oneliner command by its very nature. :-) But sometimes it's just fun to pass on even the little commands. dw
Ever get tired of entering:
yum install WhatIWant
then having to enter y or n to confirm. Or worse, being reminded by the system that you need to be root and then having to:
sudo yum install WhatIWant
An easy thing to do in the bash shell is to use an alias. If you want permanent aliases, you can easily create these as well by creating the ~/.bash_aliases file, which is read at shell startup (on most distros the default ~/.bashrc sources it if it exists). The file should have one alias command per line, written exactly the same way you would enter the below on the command line:
alias myyumy='sudo yum -y install'
After running this command, I am now able to enter the below to install something and have the YES option assumed. The two side notes here are:
1) You must be in the sudoers file to execute this alias
2) If you do not have NOPASSWD set in the sudoers file then you WILL have to enter your password prior to the yum process starting.
I have met the two conditions above and run the alias command. Now I can run:
myyumy WhatIWant
and WhatIWant should install without any further interaction on my part (not accounting for any possible dependency hells that is).
A note on the naming of my alias:
- I like to use 'my' at the start of aliases as a matter of personal preference....because I made it. :-)
- The 'yum' in the middle should be easy to grasp: it's a representation of the root command, in this case yum. If it was a command like system-config-network then I would use 'snc'
- The 'y' at the end is the parameter(s) I am including in the alias. Shell syntax can also be used inside the alias value (though not in the alias name itself). So if I wanted to run system-config-network through an alias and in the background, I would create the alias like:
alias mysnc='system-config-network &'
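Putting that together, a minimal ~/.bash_aliases might look like the below (just a sketch reusing the aliases from this post; remember that many distros only source this file from ~/.bashrc if it exists):

# ~/.bash_aliases -- one alias per line
alias myyumy='sudo yum -y install'
alias mysnc='system-config-network &'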
Thursday, July 14, 2011
sed oneliner: append a new line of text to a file
Missed a few days this week, but I think that it's okay to blame the homework and my birthday. I already posted one sed oneliner dealing with replacing text. This one appends a new line after any line that matches a sed script expression:
#!/bin/sh
sed '/FINDME/ a\
The new line we are adding' fileToEdit.conf
The -i switch can be added to make this edit occur "in-place" (homework for the interested reader).
The new line is added after EVERY line matching the expression, in this case FINDME. I might get around to adding a part two to this, where you can append after only a single specific line, regardless of multiple matches. One way to do this would be with the ";" operator. However, I am getting back to my review assignment for school. Maybe tomorrow I will do this, or some Perl (yeah!)
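For the impatient, one GNU-sed-specific way to restrict the append to only the first matching line (a sketch; I have not tested it against other sed implementations) is to wrap the append in a 0,/FINDME/ address range:

sed '0,/FINDME/{/FINDME/ a\
The new line we are adding
}' fileToEdit.conf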
Monday, July 11, 2011
tar oneliner: backup to a network location
tar is a pretty straightforward and handy tool that anyone administering anything on a *nix box should learn. If I don't have a typo, the below one-liner will create a system backup, excluding the named directories, and send it via SSH to a remote server, where the .tar file will be written. Errors are redirected ( 2> ) to a log file in /var/log/backups (assuming you have this directory and it has the appropriate permissions).
One last note: if you don't run this as root, you won't get a complete (if any) archive created.
Command (any line break below is only formatting here; this command can be entered on one line):
tar cvpjf - --exclude='/dev/*' --exclude='/sys/*' --exclude='/tmp/*' / 2> /var/log/backups/`date +%d%m%Y`_Backup.log | ssh yourserver "cat > /home/backups/`date +%d%m%Y`_Backup.tar"
c - create backup tar
v - list files being tarred
p - maintain file perms
j - use bzip2 (slower but deeper compression) / can use z instead which is gzip
f - write the archive to the named file; here the "-" argument sends it to stdout so it can be piped to ssh
g - could be added to this string of commands (with a snapshot file) in order to create incremental backups
--exclude= exclude a path. With the trailing *, tar skips the directory's contents but still records the directory itself as an empty folder, keeping the tree structure intact on restore.
ssh - should be self-explanatory
To schedule this, you can use at or create a new cron entry such as:
10 0 * * 1,3,5 /usr/bin/backup
where /usr/bin/backup is a script containing the above tar command, and the entry runs it at 12:10 am on Monday, Wednesday, and Friday (days 1, 3, and 5 of the week).
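For reference, a minimal sketch of what that /usr/bin/backup wrapper could look like (the host name and paths are just the placeholders from the example above; computing the date stamp once keeps the log and tar names in sync):

#!/bin/sh
STAMP=`date +%d%m%Y`
tar cvpjf - --exclude='/dev/*' --exclude='/sys/*' --exclude='/tmp/*' / 2> /var/log/backups/${STAMP}_Backup.log | ssh yourserver "cat > /home/backups/${STAMP}_Backup.tar"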
Sunday, July 10, 2011
ngrep oneliner: look for a domain name in DNS traffic
ngrep is a pretty useful tool for just about any network security work. It is NOT the same as tcpdump, in case anyone was wondering. I may be a little off in my explanation tonight, but ngrep does something so much better than tcpdump: it searches packet payloads for regexes.
So, to search for a hostname, as a whole word, in DNS traffic in an already captured traffic file:
ngrep -w 'somehost' -I /stored/mypcaps.pcap port 53
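The same search works against live traffic as well; a quick sketch, assuming the interface of interest is eth0:

ngrep -w 'somehost' -d eth0 port 53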
Saturday, July 9, 2011
mtr oneliner: better than tracert sometimes
Another really quick one since I have two research papers to start.
A good tool for testing network link(s) is mtr. Check out the man page on your favorite linux machine or on the net.
mtr google.com
or, to use only IPv4 and skip DNS resolution on each hop:
mtr -4 --no-dns google.com
or, if you want to do the same thing but see how fast you can get into trouble at work or home:
mtr -4 --no-dns playboy.com
Friday, July 8, 2011
netstat oneliner: what processes are associated with what ports
Ever wanted to know what ports are open and what process is using these ports? Run the below as root and you should have your answer.
netstat -tlnp
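If you also want UDP listeners in the same view (my addition to the original oneliner; UDP sockets have no LISTEN state, but -u lists them alongside the TCP ones), add the -u switch:

netstat -tulnp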
Thursday, July 7, 2011
awk oneliner - remove all extra whitespace from file
Remove all extra whitespace from each line of a file. Basically, it trims both ends and collapses everything between fields down to a single space:
awk '{ $1=$1; print }'
Yup...another very hard one to write. :-) But useful in formatting. This could be combined with another awk to replace each single space between fields with a delimiter of your choice.
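A quick sketch of that combination (the comma delimiter and the somefile.txt name are arbitrary choices of mine):

awk '{ $1=$1; print }' somefile.txt | awk -v OFS=',' '{ $1=$1; print }'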
Wednesday, July 6, 2011
passwd oneliner - Locked user account listing
A quick one to get a list of user accounts that are locked out:
passwd -S -a | awk '/LK/{print $1}'
Pretty straightforward but must be run as root.
Sunday, July 3, 2011
grep oneliner: search for a value recursively
Ever forget exactly which file you had placed some particular code in and really want to find it quickly? Here's a grep oneliner to do just that. In this example, I am looking for "#define MAX_VALUE" in a directory containing many source files and sub-directories.
grep -R --include "*.c" "#define MAX_VALUE" .
Note: the "." indicates the current directory in Linux. If your files are in a different tree, just replace the "." with that tree's root location. For example, if you wanted to look /usr/bin/local/:
grep -R --include "*.c" "#define MAX_VALUE" /usr/bin/local
Saturday, July 2, 2011
for oneliner - make backups in a directory
Create a backup copy of all files with a specific extension:
for f in *.c; do cp "$f" "$f.backup"; done
This will find all "c" files in the current directory you are in and then make a copy of them, appending ".backup" to the end of the original filename.
Simple, and maybe overused....yet so nice to use sometimes when pushing around a lot of files.
Friday, July 1, 2011
sed oneliner - replace text in file
One-liner for use inside of a script where a line, or part of a line, of a file needs to be changed:
Suppose that we are passing the name of a file to edit to this script as the first parameter
OLDVARIABLE="the old variable"
NEWVARIABLE="the new variable"
...
sed -i "s/$OLDVARIABLE/$NEWVARIABLE/" $1
This will do an in-place (-i) edit of the file passed to the sed script (the first parameter, $1). The sed script itself will do a substitution (s) of the first instance of OLDVARIABLE on each line with the value of NEWVARIABLE. If you want to replace every instance on a line, add (g) to the end of the sed script: /g"
The double quotes around the sed script are not a typo...they are there because I am using variables instead of literal text or a regex.
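One related gotcha worth noting (my addition, not part of the original oneliner): if the variables might themselves contain a /, sed will let you pick a different delimiter for the s command, for example:

sed -i "s|$OLDVARIABLE|$NEWVARIABLE|" "$1"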
And that's my oneliner for today...snuck it right in before midnight.
July Gifts
My birthday is this month and I realized that not only am I getting a little older but it's been some time since I really posted on here. So, amidst a book review I am working on, family, school, an IEEE standard, and hopefully fishing, I have decided to try something new....but not really unique.
For the month of July, I want to try to post some useful oneliner script or small code block that I find helpful. I think I am going to start with some sed or awk love....but there are probably 8 billion of those on the net. I will think about it while I dream of packets, beer, babes, and the beach. LOL
Updated my certs --- GSNA and MCTS
This last week I reviewed and sat the GSNA and the MS70-643 (which I was certain that I would fail). I ended up falling asleep in the GSNA and passing with 86%. Next time it'll be above 90% though. I was surprised at the 70-643. I took it cold a few months ago and bombed it...I think my score was just under 500. That was my first ever MS test experience and for some reason I thought the CISSP experience was more pleasant. In any event, I earned an 890 on that exam.
I had initially bought the vouchers (a 2-pack) for my previous job. However, I don't really need any MS certs now for where I am (back in packet analysis heaven!!!!!). I do have another voucher out of the pack I bought, so I'll have to use that one. After that, maybe the full MCITP.
But what I really want to look into is the SANS GSE or Cyber Guardian programs, the GREM, GCIH, and GCFA. Those sound like fun. Some Linux ones would be cool too.
I realize that I like the certs. Not for my resume or my wall...but for my own sense of accomplishment. I mean let's face it, only the person sitting the exam knows for sure how much knowledge they had and how much brain-dump help that they had in passing. I like to learn new things, and expanding my skillset and then proving to ME that I learned something....I like that!!!
I even made a picture:
Sunday, April 10, 2011
Robocopy on Windows 7
I have never claimed to be an expert on, well, anything. However, I do like to try to learn something new every day and I usually stick to the "nerdy" stuff. I recently decided that I wanted to improve the way I backed up important data at home. At work, we script it and tar it and set the archive bits and get the emails...that always seemed like overkill to me. That is, until I accidentally ruined two (YES, 2) removable HDDs in one night, including a one-week-old 1TB Seagate drive that I had bought on sale...bummer!
I am not at an end state yet in my search for the best backup solution for the home network. One thing I have been playing with is Robocopy...and oh what fun it has been.
My setup:
- A new (non-dropped on the floor and ruined) 1TB GoFlex network storage drive.
- Many computers...but testing from the one with Windows 7 Professional.
Source:
c:\users\myusername\Desktop\CIS577
Destination:
\\GOFLEX_HOME\GoFlex Home Personal\Dave_School\CIS577
Goal:
To back up school, family, and other documents on an automatic and easy basis...not to mention reliable. I should mention here that the Seagate software for the GoFlex comes with a backup solution that is fairly easy to use and customize. (Secretly, I just wanted an excuse to again play with Robocopy...remind myself of its functions and limitations).
Command (From ELEVATED Command Prompt):
$>robocopy c:\users\myusername\Desktop\CIS577 "\\GOFLEX_HOME\GoFlex Home Personal\Dave_School\CIS577" /LOG:BackUpLogs\PicsBUlog /SAVE:BackUpJobs\PicsJob /B /V /E
- The /B runs the copy in Backup mode; the /E ensures directory recursion, including empty subdirectories.
- The /LOG option points to a folder in the current working directory and the name of the log file for this particular backup job
- The /SAVE option points to a folder in the current working directory and the name of the job (RCJ) file for this particular backup job
- The /V, like almost any other command line program....Verbosity...YEAH! :-)
If I want to run this job as a service or just in the background, I can add the /MON option (/MON:#) with a number representing the number of changes made to the source that will automatically trigger the backup job again. Careful though...if you add this from a normal command prompt...you may be waiting AWHILE for anything to happen if you are not actively changing the source location.
So Robocopy has been fun to play with today. I created jobs to backup all of our pictures from our recent trip to Gatlinburg and it is running better than copying through the GUI....yeah!
Review of User Interface Prototyping Articles
This semester I am taking a User Interface Design course (CIS577). As part of the course work, we have been required to review articles that have been published. The below review is from two articles published in 1992 and 1994, although I believe they make points that are relevant today.
Ref: Neilson, J. Finding Usability Problems Through Heuristic Evaluation. In Proc. CHI '92, ACM Press (1992), 373-380.
Ref: Rettig, M. Prototyping for Tiny Fingers. In Communications of the ACM 37, 4 (April 1994), 21-27
Review of Finding Usability Problems Through Heuristic Evaluation and Prototyping for Tiny Fingers.
The two articles reviewed are both over fifteen years in age. However, the underlying points made by the authors of each article are as useful today as they were at the time of their writing. Finding Usability Problems Through Heuristic Evaluation (Finding) was written by Jacob Neilson of Bellcore for the 1992 Association of Computing Machinery's (ACM) Computer-Human Interaction Conference. Prototyping for Tiny Fingers (Prototyping) was also produced for the ACM, specifically written by Marc Rettig for the April 1994 Communications of the ACM. Both articles focus on [the authors'] recommended techniques for the evaluation and utilization of user interface (UI) prototyping. Two links exist between these two articles: (1) less blatant is the link found in the discussions of paper prototypes compared to [fully] running system prototypes, and (2) Rettig makes use of, and reference to, Neilson's Finding. As Rettig presents the more straightforward suggestions, as compared to the higher-level Finding, it is more germane to discuss Rettig's work first.
In Prototyping, Rettig thoroughly discusses his belief in the value of Lo-Fidelity (Lo-Fi) prototypes. Lo-Fi prototypes, as explained by Rettig, are UI prototypes that are constructed of paper and manipulated through an individual "playing the computer." To present his support of Lo-Fi prototypes, Rettig first takes the natural path of explaining what he defines Hi-Fidelity (Hi-Fi) prototypes to consist of: fully functioning prototypes created through the use of modeling tools and/or high-level programming languages. Through his definition of Hi-Fi prototypes, Rettig presents a concise set of problems/risks that are inherent in this method of prototyping: the length of build/change time; reviewers tending to focus on "fit and finish" issues such as color choices and fonts; developers' resistance to change; the setting of unrealistic expectations; and the fact that one bug can bring the project to a halt.
Once Rettig presents the issues that he believes are typical (and cost-inducing) of Hi-Fi prototyping, Rettig explains how his organization had come to use the Lo-Fi method and the benefits that they (he) had identified in its usage. His introduction to Lo-Fi is not important. However, the benefits of its use that he has articulated are well worth some discussion.
The primary benefit, according to Rettig, of Lo-Fi prototypes is that of cost, in terms of both time and money. Rettig presents a reasonable and efficient procedure, as well as [his] recommended materials, that allows a development team to construct prototypes supporting both effective end-user evaluations and low-cost changes. This procedure is one that allows for a component-based paper prototype to be quickly created, have parts duplicated where necessary, and have the user evaluation results created/documented in a manner that can successfully drive the necessary documentation and changes.
Rettig does an excellent job in explaining the benefits of Lo-Fi prototyping as well as the established set of procedures that he and his coworkers followed. It should be noted that Rettig also makes the points of: (1) If you already have a working Hi-Fi prototype then it should not be scrapped for a Lo-Fi prototype as it would not be cost effective, and (2) Hi-Fi prototypes have their place in UI design but every developer should at least attempt to utilize a Lo-Fi methodology in order to compare for themselves the possible benefits that may be gained from its usage. These benefits can be traced to heuristic evaluations of prototype reviews and empirical evidence garnered from these evaluations.
One of the sources for Rettig’s belief in the benefits of Lo-Fi prototypes is Nielsen’s Finding, in which Nielsen examines the use of heuristic evaluations during prototype review processes. Nielsen presents an enumeration of three primary types of reviewers, as well as an articulation of when heuristic evaluations did, and did not, prove to be efficient.
The effectiveness of Nielsen’s discussion of heuristic-based evaluations can be found in the three primary groups of reviewers that Nielsen used: novices, regular specialists, and double specialists. The regular specialist (a general usability expert) and the double specialist (a usability expert who also specialized in the field of focus) were often expensive to employ and not always available. For that reason, Nielsen also identified and used the third group of reviewers: novices (those with no usability evaluation experience).
The interface Nielsen chose to test and present his evaluation with was a telephone banking system, similar to many of the same types in use today. For the purposes of his evaluation, Nielsen gave each evaluator a list of tasks to perform using the system. This set of tasks and the given interface allowed Nielsen to categorize the results.
The results produced by Nielsen’s evaluation fell into two primary groups: major problems and minor problems. In addition to this grouping, focal areas were identified, as were the benefits of paper or running-system prototypes in a given evaluation step.
Nielsen’s conclusions were predominantly as expected: (1) usability specialists were able to use heuristic evaluation to identify more problem areas than novices could, and (2) usability specialists with specific domain expertise were the best suited to apply heuristic methods in interface prototype evaluations. Of interest is that Nielsen’s recommendation of 3-5 evaluators on one project appears to be the basis of Rettig’s belief that no more than four team members should be used (and in differing capacities).
In reading both Prototyping and Finding, it is apparent that there is no single method that best allows for a complete evaluation of a user interface. Both Hi-Fi and Lo-Fi prototypes have their requisite places, as do the differing heuristic areas presented by Nielsen. This was as true 19 years ago as it is for today’s software developer. A developer, or even a program manager, would be well advised to learn multiple methods and to implement the one that best addresses the project being evaluated.
Ref: Nielsen, J. Finding Usability Problems Through Heuristic Evaluation. In Proc. CHI '92, ACM Press (1992), 373-380.
Ref: Rettig, M. Prototyping for Tiny Fingers. Communications of the ACM 37, 4 (April 1994), 21-27.
Review of CHI 2006 Article on Tabletop Displays
This semester I am taking a User Interface Design course (CIS577). As part of the coursework, we have been required to review published articles. The review below is of an article published in 2006.
Ref: Tang, et al. Collaborative Coupling over Tabletop Displays. In Proc. CHI '06, ACM Press (2006), 1181-1190.
Review of Collaborative Coupling over Tabletop Displays
Collaborative Coupling over Tabletop Displays is an article written by five researchers (Tang et al.) from the Universities of British Columbia and Calgary. The article focuses on the group’s research into designs for collaborative tabletop interfaces and presents the methodologies and observations of two different studies. Additionally, the implications of implementing at least one method, as well as the group’s overall conclusions, are presented.
Tang et al. initially presented the confusion that is generally inherent in the study of collaborative efforts. The referenced studies focused on group activities using both traditional (non-interactive) and interactive desktops. During this explanation of some of the difficulties in studying collaboration, some important key words and phrases were defined:
- mixed-focus collaboration – the frequent bi-directional transition between individual and shared tasks
- coupling – as used in this article, coupling refers to collaborative coupling style
- three viewing technologies
o lenses – show information in spatially localized areas
o filters – show information globally
o ShadowBoxes – allow spatially localized areas to be displaced
Before delving into the details of Study 1, the authors present some additional important information in three primary sections: Collaborative Coupling, Background, and Overview of Observational Studies.
In their discussion of Collaborative Coupling, the authors reiterate the important point that the efforts of a group cannot be easily divided into only the two categories of “independent” or “shared.” They further explain that collaborative coupling refers to the “manner in which collaborators are involved and occupied with each other’s work.” Coupling, as used by the authors, refers both to a level of workspace awareness and to a “desire to work closely or independently of one another.”
The Background and Overview sections provide a full discussion of the issues facing current research into the design of collaborative tabletops. In these sections, additional important terms such as coordination, interference, and territories are defined and discussed. While all three of these terms are relevant to this study, the definition and use of interference seems to play the most direct role in the studies and results. Interference is used by the authors to describe any user, system, or environmental action (or attempted action) that disrupts another user’s work. For example, interference can occur when two individuals attempt to manipulate the same object. Likewise, interference can be the execution of a command that re-positions multiple objects, forcing users to re-learn the location of each object before they can use it.
The discussion of Study 1 indicates that it focused on learning/identifying how groups and individuals coordinate themselves when presented with a “spatially fixed visualization.” The authors indicated that more than one of their hypotheses had been disproven. Specifically of note was the disproving of their expectation that individual members of the group would naturally favor individual efforts over group collaboration. Empirically, the authors identified that participants’ efforts were visibly independent for only 24% of the total time. This revelation appears to be the actual driving factor behind the authors conducting a second study; it is unclear whether Study 2 would have been deemed necessary had more of their hypotheses been proven.
During Study 1, it was noted that the individuals not only preferred to work together, but that they also preferred the “group-type” visualization tools (global filters). The task assigned to each group was to develop a travel route (under specific constraints) through a fictitious city displayed on the tabletop. The individuals tended to move and work together naturally, which was contrary to the authors’ hypothesis.
Study 2 was conducted under established conditions that were based upon the outcomes of Study 1: explicit individual tasks and roles, a redesigned lens widget, conflicting data layers, the removal of the ShadowBox, and the implementation of multiple sub-problems. The other differences between Studies 1 and 2 were a slightly different set of test subjects and the use in Study 2 of a custom, fully connected graph in place of the fictitious city map of Study 1. Whereas Study 1 revealed how groups coordinate over spatially fixed visualizations, Study 2 revealed six distinct styles of coupling: (SPSA) – Same Problem Same Area, (VE) – View Engaged, (SPDA) – Same Problem Different Area, (V) – View, (D) – Disengaged, and (DP) – Different Problems.
Study 2 appears to be a logical extension of Study 1, and in fact could be relabeled as Study 1 Part 2. The introduction of more guidance and stipulations in Study 2 did allow for the validation of the results from Study 1 as well as Study 2’s own results and observations. In conducting these studies and reviewing the results, the authors drew some relatively helpful conclusions regarding the methodology used in tabletop interface design.
However, it is evident that if this research is accepted as the sole authority, then there is no clear single approach that can be utilized for the design methodologies of interactive tabletops. The authors state that a “flexible set of tools allowing fluid transitions between views is required to fully support the dynamics of mixed-focus collaboration”.
Tuesday, January 4, 2011
Creating a local RPM site for Isolated Red Hat Servers - Part II
In my previous post, I discussed one option for downloading, but NOT installing, required and marked packages for a Red Hat server. In Part I, I included a script that creates a directory and downloads the rpm files specifically to it. I had tested that script on an RHEL 5.5 machine.
This post is a continuation of Part I and assumes that the reader has, in at least some fashion, downloaded the rpm files needed to patch a system that can't (or doesn't) touch the internet. This offline system is assumed to be mirrored by the system we used to download the patches. So....
Using the script from Part I, and today's date, we should have a folder /vpmt/updates/2011_01_04/ that contains all of the currently needed rpm files. What are we supposed to do with these files, other than just stare at them?
I am so glad you asked, and I hope that you are prepared for a long, LONG, drawn out answer. Please understand that there is a LOT of work required in updating an offline system from patches downloaded to a mirrored online system. So...
1. Copy the files to a disk
2. Copy the files from the disk to the offline system
3. Open a terminal window and navigate to the folder where you copied the rpm files in step 2
4. Execute (as root, or with su) chmod 755 *.rpm
5. Execute (as root, or with su) rpm -Uvh *.rpm from the directory where the files were copied
...and those five steps, ALL five long and tedious steps, are all that should be required to install the patches that you downloaded on the first system. Now, I know what you are thinking: "What about all the dependencies that are bound to be present?" This is where the way rpm works, and its options, comes into play.
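Purely as an illustration, and assuming the rpm files ended up in /root/updates/2011_01_04 on the offline box (the exact path is yours to choose), the whole sequence might look like the lines below. The --test switch is a standard rpm option that performs a dry run, reporting any unresolved dependencies without installing anything:
cd /root/updates/2011_01_04
chmod 755 *.rpm
# dry run: list any dependency problems, install nothing
rpm -Uvh --test *.rpm
# real run: rpm works out the install order among the packages in this directory
rpm -Uvh *.rpm
Because every rpm in the directory is handed to rpm in a single transaction, dependencies satisfied by other files in that same directory should be resolved automatically; only dependencies on packages that are NOT in the directory should cause a failure.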
More to follow about RPM.
Monday, January 3, 2011
Creating a local RPM site for Isolated Red Hat Servers - Part I
Some of us work, or have had to work, with multiple networks. Some of the organizations I have had the pleasure of working in have had networks that were completely isolated from any network access but still needed to have their patching level maintained.
One reason for this type of set-up is an isolated network that still has the requirement of being regularly updated. In set-ups such as this, it is often helpful to have a mirrored system that can in fact pull updates when required.
We can try to just run the yum command in a terminal. However, if you just run yum update against the default yum.conf file, the system will only ask [y/N] whether you want the updates to be applied. Answering N does NOT download the update files, unfortunately. So what is the answer? A script, like the one below, should provide the first part of a viable solution. The script below requires that the yum-downloadonly package be installed.
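If the plugin is not already present on the online (mirrored) machine, it can normally be installed first; yum-downloadonly is the package name on RHEL 5 (the name differs on later releases):
yum install yum-downloadonly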
Script to download but not install updates for all packages marked for update
#!/bin/bash
#########################################
# FileName: Get_Yum_Updates.sh #
# Comments: Tested on RHEL5.5 x86_x64 #
#########################################
#create a unique name for folder and file names
RunDate=`date +%y_%m_%d`
#Set up filenames unique to day that script is run
InitUpdateCheckFile="Yum_Check_Update_Init_20${RunDate}.txt"
FinalUpdateCheckFile="Yum_Check_Update_Final_20${RunDate}.txt"
DiffBriefFile="Yum_Diff_Brief_20${RunDate}.txt"
DiffFullFile="Yum_Diff_Full_20${RunDate}.txt"
#create new directory for rpm files
DownloadDir="/vpmt/updates/20${RunDate}"
mkdir ${DownloadDir}
#cd to new directory for diff purposes
cd ${DownloadDir}
#before downloading anything, let's see what's needed
#and pipe this list to our first check update file
yum check-update > ${InitUpdateCheckFile}
#now, let's run the yum command that we REALLY want
yum --downloaddir="${DownloadDir}" --downloadonly update
#for one final check that nothing was installed, let's
#run another check-update and now pipe it to the final
#check-update listing file
yum check-update > ${FinalUpdateCheckFile}
#because it can be helpful for further CM actions or
#for when files need to be updated on multiple machines
#here is a setup for the diff options
#first, let's diff, with the brief switch, the two check-update files
diff --brief ${InitUpdateCheckFile} ${FinalUpdateCheckFile} > ${DiffBriefFile}
#here we could test the size of the DiffBrief file and end the script
#if the size is 0, meaning that there were no changes (as a result
#of our yum command above) in which packages need to be updated.
#However, just as an example of a default diff operation:
diff ${InitUpdateCheckFile} ${FinalUpdateCheckFile} > ${DiffFullFile}
#end
exit 0
###################################################
The above script should be fairly easy to understand, but a few other notes should be made. The first is the DownloadDir variable: this can be set to whatever directory you want.../vpmt/updates/ just happens to be where I would put the rpm files downloaded by the yum operation.
Secondly, the actual yum command CAN take other parameters. One of the more notable ones is --skip-broken, which tells the yum operation to skip packages with dependency issues/problems. Another convenient switch is the -y option, which is equivalent to the assume-yes configuration option. If you are only downloading, and are not worried about the exact size of the total download, this option is nice because it allows you to walk away and the script will finish without your help.
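For illustration, the download line from the script above could be rewritten with both of those switches added (same DownloadDir variable as in the script):
yum -y --skip-broken --downloadonly --downloaddir="${DownloadDir}" update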
One final note on the above script: the comment about testing the size of the diff --brief file is NOT the only test we could add to make this script more robust. We could also test for things such as the directory already existing, specific permissions on the directory, and even (with a few more lines) whether any previous download folders created by the script already contain any of the patches we need. The bottom line is that this script is not an "end all" solution for getting AND tracking your patches. I merely offer it as a starting point.
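As one example of that kind of hardening, the directory-exists check mentioned above might look something like the sketch below, dropped in just before the mkdir line (it reuses the script's DownloadDir variable):
if [ -d "${DownloadDir}" ]; then
    #re-use the existing folder rather than failing on mkdir
    echo "${DownloadDir} already exists; re-using it"
else
    mkdir -p "${DownloadDir}" || exit 1
fi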
Once this script has been executed successfully, the following should exist:
1) a folder: /vpmt/updates/20[the date]
- for example: /vpmt/updates/2011_01_03/
- This directory should contain:
I. All downloaded RPMs from the yum operation
II. Metacache data from the yum operation
III. The txt files we created from running the check-update switch and the diff operations.
What we do, and what we can do, with this folder and its contents I will leave for a different day (but I will say that the focus will be on creating a DVD of the RPMs and using the localinstall option of yum). I will also say that we can now use the check-update files created here to monitor CM operations. These files can be explicitly massaged with another script to create an action-item list of all packages that were updated from this newly created folder.
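Purely as a sketch of that kind of massaging (it assumes yum check-update's usual three-column output of package.arch, version, and repo, and a file name matching what the script would create today):
# keep only the three-column package lines and print the package name/arch
awk 'NF==3 {print $1}' Yum_Check_Update_Init_2011_01_03.txt > Action_Items_2011_01_03.txt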
I noticed that it has been about three months since I have posted ANYTHING to my blog. With any luck, and purely for my own memory if nothing else, I can keep it more updated from now on. I do intend to add a second part to this post (and maybe some updates to this one) when I get a little more time this week. I would also like to play around with kickstart files in the near future.
One final note: I hope that everyone's New Year and Christmas days were safe and enjoyable! --dw