Sunday, April 10, 2011

Robocopy on Windows 7

I have never claimed to be an expert on, well, anything. However, I do like to try to learn something new every day, and I usually stick to the "nerdy" stuff. I recently decided that I wanted to improve the way I backed up important data at home. At work, we script it and tar it and set the archive bits and get the emails...that always seemed like overkill to me. That is, until I accidentally ruined two (YES, 2) removable HDDs in one night, including a one-week-old 1TB Seagate drive that I had bought on sale...bummer!

I am not at an end state yet in my search for the best backup solution for the home network. One thing I have been playing with is Robocopy...and oh, what fun it has been.

My setup:
- A new (non-dropped on the floor and ruined) 1TB GoFlex network storage drive.
- Many computers...but testing from the one with Windows 7 Professional.


Destination:
\\GOFLEX_HOME\GoFlex Home Personal\Dave_School\CIS577

The goal: to back up school, family, and other documents on an automatic and easy basis...not to mention reliably. I should mention here that the Seagate software for the GoFlex comes with a backup solution that is fairly easy to use and customize. (Secretly, I just wanted an excuse to play with Robocopy again...to remind myself of its functions and limitations.)

Command (From ELEVATED Command Prompt):
$>robocopy c:\users\myusername\Desktop\CIS577 "\\GOFLEX_HOME\GoFlex Home Personal\Dave_School\CIS577" /LOG:BackUpLogs\PicsBUlog /SAVE:BackUpJobs\PicsJob /B /V /E

Note that the destination path is wrapped in quotes because it contains spaces...without the quotes, Robocopy treats each piece of the path as a separate argument.

- The /E copies subdirectories, including empty ones, to ensure the directory recursion. It is not implied by /B...that switch runs the copy in backup mode, using backup privileges to grab files that normal access rights might block (and it is the reason for the elevated prompt).

- The /LOG option points to a folder in the current working directory and the name of the logfile for this particular backup job

- The /SAVE option points to a folder in the current working directory and the name of the job file (Robocopy adds an .RCJ extension) that stores the switches for this particular backup job

- The /V, like in almost any other command line program, means verbose output....Verbosity...YEAH! :-)
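
One nice thing about the /SAVE option: once the job file exists, I can re-run the whole backup later with the /JOB option instead of retyping every switch. (A sketch, assuming the job saved above ended up in BackUpJobs\PicsJob.RCJ):

Command (From ELEVATED Command Prompt):
$>robocopy /JOB:BackUpJobs\PicsJob

I believe any additional switches typed after /JOB get merged with the saved ones, so a single run can be tweaked without editing the job file.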

If I want to run this job as a service or just in the background, I can add the /MON option (/MON:#), where the number represents the number of changes made to the source that will automatically trigger the backup job again. Careful though...if you add this from a normal command prompt, you may be waiting AWHILE for anything to happen if you are not actively changing the source location.
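
For example, to leave Robocopy running and have it re-copy after every single change to the source, something like this should work (a sketch, using the same source and destination as above):

Command (From ELEVATED Command Prompt):
$>robocopy c:\users\myusername\Desktop\CIS577 "\\GOFLEX_HOME\GoFlex Home Personal\Dave_School\CIS577" /B /E /MON:1

With /MON:1, Robocopy stays resident and kicks off another pass once it has seen at least one change (and, if I remember right, at least a minute has passed...the /MOT option controls that minimum time between passes).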

So Robocopy has been fun to play with today. I created jobs to back up all of our pictures from our recent trip to Gatlinburg, and it is running better than copying through the GUI....yeah!

Review of User Interface Prototyping Articles

This semester I am taking a User Interface Design course (CIS577). As part of the coursework, we have been required to review articles that have been published. The review below covers two articles, published in 1992 and 1994, although I believe they make points that are still relevant today.

Ref: Nielsen, J. Finding Usability Problems Through Heuristic Evaluation. In Proc. CHI '92, ACM Press (1992), 373-380.

Ref: Rettig, M. Prototyping for Tiny Fingers. In Communications of the ACM 37, 4 (April 1994), 21-27.

Review of Finding Usability Problems Through Heuristic Evaluation and Prototyping for Tiny Fingers.

The two articles reviewed are both over fifteen years old. However, the underlying points made by the authors of each are as useful today as they were at the time of writing. Finding Usability Problems Through Heuristic Evaluation (Finding) was written by Jakob Nielsen of Bellcore for the 1992 Association for Computing Machinery (ACM) conference on Computer-Human Interaction (CHI). Prototyping for Tiny Fingers (Prototyping) was also produced for the ACM, written by Marc Rettig for the April 1994 issue of Communications of the ACM. Both articles focus on [the author's] recommended techniques for the evaluation and utilization of user interface (UI) prototyping. Two links exist between the articles: (1) a less obvious one, found in their discussions of paper prototypes as compared to [full] running system prototypes, and (2) an explicit one, in that Rettig makes use of, and reference to, Nielsen's Finding. As Rettig presents the more straightforward suggestions, compared to the higher-level treatment in Finding, it is more germane to discuss Rettig's work first.

In Prototyping, Rettig thoroughly discusses his belief in the value of Lo-Fidelity (Lo-Fi) prototypes. Lo-Fi prototypes, as explained by Rettig, are UI prototypes that are constructed of paper and manipulated by an individual "playing the computer." To present his support of Lo-Fi prototypes, Rettig first takes the natural path of explaining what Hi-Fidelity (Hi-Fi) prototypes consist of: fully functioning prototypes created through the use of modeling tools and/or high-level programming languages. In defining Hi-Fi prototypes, Rettig presents a concise set of problems/risks inherent in this method of prototyping:
- Lengthy build/change times
- Reviewers tend to focus on "fit and finish" issues such as color choices and fonts
- Developers' resistance to change
- The setting of unrealistic expectations
- One bug can bring the project to a halt

Once Rettig has presented the issues that he believes are typical (and cost-inducing) of Hi-Fi prototyping, he explains how his organization came to use the Lo-Fi method and the benefits that he and his team identified in its usage. His introduction to Lo-Fi is not important; however, the benefits of its use that he articulates are well worth some discussion.

The primary benefit of Lo-Fi prototypes, according to Rettig, is cost, in terms of both time and money. Rettig presents a reasonable and efficient procedure, as well as [his] recommended materials, that allow a development team to construct prototypes supporting both effective end-user evaluations and low-cost changes. The procedure allows a component-based paper prototype to be quickly created, to have parts duplicated where necessary, and to have the user evaluation results documented in a manner that can successfully drive the necessary documentation and changes.

Rettig does an excellent job of explaining the benefits of Lo-Fi prototyping, as well as the established set of procedures that he and his coworkers followed. It should be noted that Rettig also makes two points: (1) if you already have a working Hi-Fi prototype, it should not be scrapped for a Lo-Fi prototype, as that would not be cost effective, and (2) Hi-Fi prototypes have their place in UI design, but every developer should at least attempt a Lo-Fi methodology in order to compare for themselves the possible benefits to be gained from its usage. These benefits can be traced to heuristic evaluations of prototype reviews and the empirical evidence garnered from those evaluations.

One of the sources for Rettig's belief in the benefits of Lo-Fi prototypes is Nielsen's Finding, in which Nielsen examines the use of heuristic evaluations during prototype review processes. Nielsen presents an enumeration of three primary types of reviewers, as well as an articulation of when heuristic evaluations did, and did not, prove to be efficient.

The effectiveness of Nielsen's discussion of heuristic-based evaluations can be found in the three primary groups of reviewers that he identified and used: Novice, Regular, and Double. The Regular evaluators (general usability experts) and the Double evaluators (usability experts who also specialized in the field of focus) were often expensive to employ and not always available. Due to this fact, Nielsen also used the third group: Novices (those with no usability evaluation experience).

The interface chosen by Nielsen to test and present his evaluation was a telephone banking system, similar to many of the same types in use today. For the purposes of his evaluation, Nielsen gave each evaluator a list of tasks to be performed using the system. This set of tasks and the given interface allowed Nielsen to categorize the results.

The results produced by Nielsen's evaluation fell into two primary groups: Major Problems and Minor Problems. In addition to this grouping, focal areas were identified, as were the benefits of paper versus system prototypes in a given evaluation step.

Nielsen's conclusions were predominantly expected: (1) usability specialists were able to use heuristic evaluations to identify more problem areas than novices could, and (2) usability specialists with specific expertise were the best suited to apply heuristic methods in interface prototype evaluations. Of particular note is that Nielsen's recommendation of 3-5 evaluators on one project appears to be the basis of Rettig's belief that no more than four team members should be used (and in differing capacities).

In reading both Prototyping and Finding, it is apparent that there is no single method that best allows for a complete evaluation of a user interface. Both Hi-Fi and Lo-Fi have their requisite places, as do the differing heuristic areas presented by Nielsen. This was as true 19 years ago as it is for today's software developer. A developer, or even a program manager, would be well advised to learn multiple methods and to implement the one that best addresses the project being evaluated.

Review of CHI 2006 Article on Tabletop Displays

This semester I am taking a User Interface Design course (CIS577). As part of the coursework, we have been required to review articles that have been published. The review below is of an article published in 2006.

Ref: Tang, et al. Collaborative Coupling over Tabletop Displays. In Proc. CHI '06, ACM Press (2006), 1181-1190.

Review of Collaborative Coupling over Tabletop Displays

Collaborative Coupling over Tabletop Displays is an article written by five researchers (Tang et al.) from the Universities of British Columbia and Calgary. The article focuses on the group's research into designs for collaborative tabletop interfaces and presents the methodologies and observations of two different studies. Additionally, the implications of implementing at least one method, as well as the group's overall conclusions, are presented.

Tang et al. initially present the confusion that is generally inherent in the study of collaborative efforts. The referenced efforts of study focused on group activities using both traditional (non-interactive) and interactive desktops. During this explanation of some of the difficulties in studying collaboration, some important key words and phrases are defined:
- mixed-focus collaboration – the frequent bi-directional transition between individual and shared tasks
- coupling – as used in this article, shorthand for "collaborative coupling style"
- three viewing technologies:
  - lenses – show information in spatially localized areas
  - filters – show information globally
  - shadowboxes – allow spatially localized areas to be displaced

Before delving into the details of Study 1, the authors present some additional important information in three primary sections: Collaborative Coupling, Background, and Overview of Observational Studies.

In their discussion of Collaborative Coupling, the authors reiterate the important point that the efforts of a group cannot be easily divided into only the two categories of "independent" or "shared." They further explain that collaborative coupling refers to the "manner in which collaborators are involved and occupied with each other's work." Coupling, as used by the authors, refers both to a level of workspace awareness and to a "desire to work closely or independently of one another."

The Background and Overview sections provide a full discussion of the issues facing current research involving the design of collaborative tabletops. In these sections, additional important terms such as coordination, interference, and territories are defined and discussed. While all three of these terms are relevant to this study, the definition and use of interference seems to play the most direct role in the studies and results. Interference is used by the authors to describe any action, or attempted action, by a user, the system, or the environment that disrupts the work of others. For example, interference can occur when two individuals attempt to manipulate the same object. Likewise, interference can be the execution of a command that re-positions multiple objects, thus introducing the need for users to re-learn the location of each object prior to its use.

The discussion of Study 1 indicates that it was focused on learning/identifying how groups and individuals coordinate themselves when presented with a "spatially fixed visualization." The authors indicated that more than one of their hypotheses had been disproven. Specifically of note was the disproving of their expectation that individual members of the group would naturally favor individual efforts as opposed to group collaboration. Empirically, the authors identified that participant efforts were visibly independent for only 24% of the total time. This revelation appears to be the actual driving factor behind the authors conducting a second study; the authors do not make clear whether Study 2 would have been deemed necessary had more of their hypotheses been proven.

During Study 1, it was noted that the individuals not only preferred to work together, but that they also preferred the "group-type" visualization tools (global filters). Each group was assigned the task of developing a route (under specific constraints) of travel through a fictitious city as displayed on a tabletop. The individuals tended to move and work together naturally, which was contrary to the authors' hypothesis.

Study 2 was conducted under established conditions that were based upon the outcomes of Study 1: explicit individual tasks and roles, a redesigned lens widget, conflicting data layers, the removal of the ShadowBox, and the implementation of multiple sub-problems. The other differences between Studies 1 and 2 are a slightly different set of test subjects, and the use in Study 2 of a custom, fully connected, man-made graph as opposed to the fictitious city map of Study 1. Whereas Study 1 revealed the coordination of groups over spatially fixed visualizations, Study 2 revealed six distinct styles of coupling: Same Problem Same Area (SPSA), View Engaged (VE), Same Problem Different Area (SPDA), View (V), Disengaged (D), and Different Problems (DP).

Study 2 appears to be a logical extension of Study 1, and in fact could be relabeled Study 1, Part 2. The introduction of more guidance and stipulations in Study 2 allowed for the validation of the results of Study 1, as well as of Study 2's own results and observations. In conducting these studies and reviewing the results, the authors drew some relatively helpful conclusions regarding the methodologies used in tabletop interface design.

However, it is evident that if this research is accepted as the sole authority, then there is no clear single approach that can be utilized for the design methodologies of interactive tabletops. The authors state that a "flexible set of tools allowing fluid transitions between views is required to fully support the dynamics of mixed-focus collaboration."