Below is a link to Bill Frezza’s very interesting article, “The Skeptical Outsider,” for BIO-IT World, in which he addresses key issues in the pharmaceutical industry. We at Indigo are seeing firsthand how automation can improve life science labs, and we are at the forefront of providing those solutions.
Music has always been one of my greatest passions. Some of my earliest and best memories include driving through town in my Dad’s old white pick-up truck, pumping Chuck Berry through the speakers. I started playing the trumpet in 5th grade and picked up my first guitar shortly after. Something about songs always made sense to me. I liked the way songs had a flow, how people had songs they loved and songs they hated, how a song could make you feel comfortable and at home. When it came time for me to go to college, I elected to attend a small liberal arts school in Nashville, TN to study the music industry. After a year, I realized I wasn’t really enjoying my time studying music and I wanted something different. I loved creating and consuming music, but learning the ins and outs of record labels and radio stations wasn’t fulfilling my desire to craft something of my own. Shortly thereafter, I discovered Interaction Design and haven’t looked back since.
Interaction Design is the study of how people use and work with objects. My particular area of study focuses specifically on the user interfaces of computer software and the way users typically interact with those interfaces. This might include conducting usability testing, interviewing software users about their likes and dislikes regarding a specific product, or researching how other applications present their interfaces. The end goal of this extensive process is to build a user interface that is intuitive, functional, and productive. I have grown to love the process of interface creation for many reasons. The work is challenging; predicting what a user will expect from a piece of software based on the interface is an exciting exploration into the user’s mind. Balancing client requests, browser limitations, and stylistic requirements keeps the experience consistently new and exciting. Most importantly for me, though, creating an interface is a process surprisingly similar to the one I might use when writing a song.
The components that make up a good interface are actually quite similar to the components in a good song. An interface can produce effects and emotions in a person much like a song can. Earlier I said I loved how music could flow and affect people; the same behavior can be linked to interfaces as well. “I like the way interfaces have a flow, how people have interfaces they love and interfaces they hate, how an interface can make you feel comfortable and at home.” When I discovered this similarity, I began to understand why I was so interested in the process of creating interfaces. It was a musical thing, and to create great interfaces I needed to compose them like a song. Interfaces need to avoid dissonance between the various elements. They need to be catchy and they need to get stuck in your head. When you use a good interface for the first time, you will come back to it again and again to enjoy how well it works. Interfaces have genres, different base-level layouts that can be built upon to create something that is entirely your own. At very nearly every level of the process, there are parallels to music and music production.
As my education and career in Interaction Design continues, I am learning more and more about how to work within the world of interfaces. It is a fascinating process and I am consistently encountering new challenges and design problems to solve. In the software industry, it is fairly common for developers to see themselves as craftsmen instead of programmers. Building software from blueprint to finished project is a task relatively akin to something an architect might tackle. I like this analogy because it provides a bit more substance to the work developers typically do. I think Interaction Designers are craftsmen too, building products for people to use. However, I think there is a better word for the work a designer does. I think we are composers. Even though I’m now working with computers instead of instruments and pixels instead of melodies, in the end the goal hasn’t changed. Music or Interface, all a composer can hope to do is positively affect someone with his work by creating something that has the potential to make a listener or user’s life a little bit better. If that can be accomplished, the composition is a success.
Indigo is my second internship experience since college. I feel like I am a lot happier working at Indigo, not only because I have a better IDE and debugger, but also because of Behavior Driven Development.
I first heard of Test Driven Development (TDD) in class, and I thought it was a very cool idea because we, as developers, are responsible for writing tests for our own code. Tests make us care about our code a lot more because not only does the code need to run, it also needs to run right. Behavior Driven Development (BDD) takes this idea to a whole new level.
I still remember last year, when I was trying to add a small feature to my company’s product, having to wipe almost all of my code clean and restart after a week of effort because the feature owner told me the functionality was not at all what he was expecting. It was a bad feeling. From then on, whenever I was unsure whether I was doing the right thing, I had to find the feature owner and talk to him just to make sure I was still headed in the right direction. A lot of time was spent going back and forth with stakeholders.
Coming into Indigo, I didn’t know what to expect, and I was still wary after last year’s internship. I was being very cautious, making sure I was doing the right thing from start to finish. The first big project I worked on at Indigo was adding SFTP support to the file transfer mechanism in one of our C# products. It was a big project, and I was totally confused reading the code. I did not know where to start, and it took me a long time to figure out how to add the functionality without breaking the original code. I was looking at the code very cautiously, treating it like an antique; I felt like it would break the moment I touched it.
I started asking for help, and I was told to read the feature files. These little poems, starting with Given, When, and Then, completely changed my view of developing software.
A feature file is a file written in natural language, describing the behavior of a feature in a step-by-step process. At the beginning of the file it states the stakeholder and the business value of the feature. Right after that, it has sets of scenarios defining each step of how the feature will work, including data flow and human/system interactions. The definitions are very easy for both the stakeholder and the developer to read and understand. Most importantly, the feature file is executable: the developer maps each step onto its equivalent action in code, so the feature’s steps are runnable and testable.
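As a rough illustration, here is what such a feature file can look like; the feature, steps, and file names below are hypothetical and not taken from the actual product:

Feature: Transfer result files to a remote server
  As a laboratory manager
  I want result files delivered to our archive server
  So that completed runs are stored off the instrument

  Scenario: Upload a result file over SFTP
    Given a result file named "batch_001.xml" exists in the outbox
    And an SFTP destination is configured for the archive server
    When the file transfer job runs
    Then "batch_001.xml" is present on the archive server
    And the file is removed from the outbox

Each Given/When/Then step is bound to a small piece of code behind the scenes, which is what makes the scenario runnable as a test.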
After I read the feature files, I suddenly realized that all I had to accomplish was to make sure the file operations defined in the feature file worked the same way when I switched to the SFTP protocol. I immediately wrote a new feature file with almost the same steps, then started adding code around the behavioral tests so I could pass them. After a couple of days, I finally got a big green bar on my test console in Visual Studio, indicating that my feature tests passed.
It was a great feeling, not just because the tests proved my code works, but also because all the care and effort put into the code paid off. I think that is the most rewarding feeling for any developer, and I am truly enjoying both my internship at Indigo and BDD.
I’ve been programming professionally (within certain definitions of professionalism) for twenty years. As you might imagine, I’ve seen technologies, techniques, and buzzwords come and go. It often seems that the one standard in our industry is that great claims are made without the necessary great proof, if the product is even delivered. It’s enough to make one cynical.
Now, many things are great improvements on what exists, what Newton called “standing on the shoulders of giants”. Most people in the industry are honest, passionate professionals interested in new tools and techniques; that’s why we went into this business in the first place. Still, one naturally casts a somewhat cynical eye towards anyone and anything making great claims about new technologies and techniques.
Last year I started at Indigo Biosystems. Hearing that I must use test-driven development was fine with me. I hadn’t done it yet, but I’ve always been a stickler for testing; supporting “Other People’s Software” will do that. But really, how good could it be? By the end of my first week I was hooked. I changed customer-facing code for our core product, writing only what was needed, and was 100% confident that I hadn’t broken anything. How? Because all specs for all the previous enhancements were in our tests, and if all tests passed, the code still ran to spec. I hadn’t broken anything.
Where had this been all my life? More importantly, where had this been when I was updating the 911 system, writing pharma-medical databases for EMTs, and writing software for combat vehicles? It had struck me very quickly that test-driven development is a tool, technique, and philosophy that removes the fear, uncertainty, and doubt about writing critical software.
Anyone who’s been in this business a while has had to support Other People’s Software. Many of us have written what became Other People’s Software, even the best and brightest of us. And what image does Other People’s Software conjure up? Something that is … rickety. You get that sense of “it’s working, don’t touch it, let’s hope nothing changes, oh my gosh, we need a change, I think nothing’s broken, brace yourself …” This is because code, like all other building material, rots. It rots because it doesn’t change and adapt to a changing environment, and it doesn’t change because people are afraid to change it. But with test-driven development, the fear and thus the rot goes away.
Still, there is a caveat. While I was confident that I hadn’t broken anything, that confidence held only within certain limits. Fortunately, those limits are mostly of your own making. It’s always possible to write a bad test, or to not write one in the first place. However, that’s just bad programming, and it can be fixed. Your tests are only as good as your specs and your professionalism; these can be controlled and improved, because test-driven development will expose any holes in the processes and standards used in development.
Perl creator Larry Wall once said the three great virtues of a programmer are laziness, impatience, and hubris. Test-driven development feeds these virtues: laziness, because you only write code to pass the tests; impatience, because you can start writing code right away and find out immediately if you broke anything; and hubris, because you know your code works to spec and you can refactor at any time in complete confidence.
More than this, I’ve found that test-driven development forces me to think more about my code: what are good tests, what should we prevent, what are the inevitable corner cases that cause so much support work, is there a cleaner way to do this … I’ve found my code to be much cleaner and to need less refactoring on the first pass. This means that the full lifecycle for developing code is quicker and simpler; and therefore (management loves this) cheaper.
My first project involved code written in Ruby on Rails, but I’ve also used it in C#, bash, and C++. Granted, using test-driven development in more established languages feels really odd at first, mostly because you have more habits to break. On the other hand, bad habits should be broken. How often have you pushed shell or C code to production without full confidence in your code? Test-driven development is a great confidence boost, because the confidence comes from empirical evidence. You know it performs to spec, you’ve got a suite of tests to back up the spec, you know your refactorings and rewrites broke nothing, and anyone can jump in to support it if you’re not available.
Imagine that: supporting Other People’s Software that you know 1) works, 2) has been fully documented, 3) can be easily and quickly refactored, and 4) can be changed with confidence.
As I’ve told coworkers and interviewees, why on earth would you do it any other way?
The scientific industry has struggled for quite some time with the concept of good design. The general thought process in that field seems to be something along the lines of “If I can see the data, why does the interface need to look well-designed?” This sentiment is indicative of the attitude with which many commonly used scientific programs were developed. Combining decades-old interfaces with a complete disregard for interaction design, it’s understandable that many of these software products are difficult and frustrating to use. Fortunately, I have been presented with the opportunity to take a fledgling scientific software product and consider how it might be made better through the use of good design practices. ASCENT from Indigo Biosystems is a mass spectrometry data review web application aimed at increasing the efficiency of data analysis in toxicology labs. During my summer internship, my goal is to redesign the software from top to bottom, hopefully resulting in a product that is more intuitive and enjoyable to use.
My approach is to progress through the software in the same manner as a typical user, so the sensible place to start was with the login page. For reference, the current login page looks like this:
The login page is incredibly important, and it provides the user with their first glimpse into what the rest of the web app might contain. It gives a first impression that will likely impact the user’s overall opinion of the software. Therefore, the page should be indicative of something familiar and comforting. In many of the client sites where this software will be used, employees use secure ID badges to gain access to the laboratories. Therefore, I elected to utilize a skeuomorphic approach (design based on real life objects and actions) and design the login page around the concept of these badges. I thought this would be effective because the purpose, gaining access to a secure location, is the same in both the physical world and the digital application. This parallel between the digital and analog worlds leads the user into the application in a familiar manner, creating a satisfying atmosphere in which they can use the product.
After deciding to make use of the ID Badge concept, I used Adobe Illustrator to create an initial mockup of what the digital version might look like. That concept looked like this:
The final design wound up looking quite similar to this initial concept, but there were a few issues worth mentioning. First, I wound up changing the color scheme quite drastically for the final design. Indigo’s logo is the rainbow pinwheel seen in the above image. The wide array of colors made adhering to an aesthetically pleasing color palette quite difficult. As a result, I elected to modify the logo such that it is a single color, as seen in the final design. This allowed the introduction of the blue background, which led into the utilization of a blue-centric color scheme for the entire web application. I will talk about color scheme and branding more extensively in Part Two of this blog series.
Since the ID Badge was designed with rounded corners, I wanted to use a similar design for both the text fields and the submit button. However, creating text fields with rounded corners is a bit trickier than one might think. After some quick Googling, I decided the best method to accomplish this would be to create a transparent PNG representing the shape and size of the field I wanted. I then set the PNG as the background of the field using CSS so the text would appear on top of the image. I also used a CSS pseudo-class on the input to disable the highlight effect on the text field, preventing a blue rectangle from appearing when the field is selected.
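A minimal sketch of that approach (the class name and image path here are illustrative, not taken from the actual stylesheet) looks something like this:

.badge-field {
  /* the transparent PNG supplies the rounded outline of the field */
  background: url("images/field-rounded.png") no-repeat;
  border: none;
  padding: 6px 12px;
  width: 220px;
  height: 28px;
}

/* suppress the default focus highlight so no blue rectangle appears */
.badge-field:focus {
  outline: none;
}

On current browsers, border-radius would round the corners without an image, but the PNG approach described above produces the same visual result.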
Initially, the “sign in” button had a rollover effect such that it would turn a darker shade of gray upon mouse hover. Indigo wanted this design to function on the iPad as well and I discovered the rollover effect was causing issues with the way the screen rendered on mobile devices. To resolve this issue, I removed the effect but forced the cursor to use the pointer icon when hovering over the button. This created a platform-agnostic way of indicating the location of the clickable area to a mouse user while remaining convenient and easy to read for mobile users.
I took all of these factors into account to arrive at the final design as displayed below.
I’m excited about this change because I feel it gives the login page significantly more character, in addition to looking much more interesting and unique. It was a fun page to work on because it allowed a huge amount of creativity and influence; I could basically take the page in whatever direction I wanted. I look forward to continuing my work on ASCENT and writing blogs to bring others through the process with me. Until next time!
Also, as a fun Easter Egg, try scanning the barcode at the bottom of the ID Badge.
I had never worked in a corporate environment before I came to Indigo as an intern this summer. I’ve been in school continuously since I was in kindergarten and my jobs have been academic in nature. My undergraduate and master’s degrees are in computer science and I’m working on my PhD in the same. One would hope that I know what I’m doing, programming-wise, after all of that education.
Know what I may, the coding practices (or the lack thereof) that I experienced in my various academic environments did not prepare me for those at Indigo. Here, the code should be readable without comments and easy to come back to months or years later. Test-driven development makes code generation slower, but also makes it harder for bugs to hide. Clean code practices (usually!) make code easier to understand at a glance. Pairing on code makes the writing go faster, catches more bugs and typos, and brings people up to speed on a given project. There is strict version control, and lots of people look at changes and test them, making sure nothing breaks. Maintainability and testing are king.
My experience of code in the academy is very different. Academia is all about ideas. No one in research cares about the second person who figured something out and an academic can’t publish the second paper about a topic without contributing a novel addition or change. In the academic research environment, code tends to be written as quickly as possible and hacked until it works because people don’t want to be scooped. As soon as code is reasonably efficient, it’s done. One should correctly implement the algorithm and optimize it as much as possible (saving further optimizations for additional publications). One should pull down or generate data as swiftly as possible; people need to finish code so they can analyze the results before that conference deadline in June, et cetera. Publication deadlines serve as an academic equivalent to product release deadlines, but the focus isn’t on code, it’s on writing and results. The same rigor is not applied to the code. Thus maintainability is at best an afterthought. Speed, efficiency, and correctness reign in academia, pushing ease of use to the sidelines.
Code written this way is mostly okay in academic research because academic code is written for a friendlier environment. Usually research code is written for a single person (the author) or the author’s research group. It’s also relatively short-lived compared to corporate code products. If there are outside users, they are knowledgeable and competent academics. Maintainability and usability are not as important in the academic context. Turning single-use code into a picture of maintainable and usable beauty won’t improve a paper’s chances of acceptance at a conference. The advantage to writing code this way is that idea generation, the real meat of research, goes faster. There are wild, wonderful, careful, small, and all sorts of other ideas in the academy, and there are more cropping up every day. Spending a lot of time on specific implementations could retard the progress of ideas.
However, there are benefits to taking some inspiration from corporate code.
The weekend before I came to Indigo, I had to document some of my research code for my adviser. I had written this code quickly for data analysis, working against serial deadlines and adding features as I needed them. Documenting it was an excruciating and time-consuming process because my code was so awful. My variable names were uninformative, my functions were often more than 100 lines long, my loops and conditionals were so nested that it was hard to keep the whole path in my head at once, and the whole thing was very brittle. It did exactly what I wanted it to do, but I had to remember what it was I wanted when I wrote it! Unfortunately, reloading the context of all of that ugly code into my head was nigh impossible, even only a few weeks or months after the fact.
Part of the reason my code looked like it did was because I did not expect anyone else to be using it, only myself. Part of it was because I wanted to write it quickly. That weekend, I sorely regretted that code. I’ll probably regret it in the future when I have to change or use it again.
What I’ve learned at Indigo after only a few weeks on the job will change how I code, even if I currently find it frustrating and difficult. I want my future code to be readable and understandable at a glance, for my own sake and anyone else’s with whom I collaborate. I want to be able to change it easily and quickly verify it still does what I want it to. I don’t want to repeat that weekend.
I won’t use every practice as used at Indigo, but many are applicable. Pairing the academic idea factory with some aspects of rigorous corporate code practices could make academic work more accessible. Perhaps it could make it easier to port ideas to solutions. Wouldn’t it be amazing if cutting-edge research led more directly and quickly to cutting-edge solutions?
In my personal experience, this is true most of the time. I have come across research projects, particularly in HCI or other human-centered areas, that aim towards general users. Even then, I haven’t come across research code that is not either 1. a prototype, such as a phone application to remind seniors to take medication tested for usability and published in CHI, or 2. aimed towards academia (though not necessarily only academics) or not explicitly meant to be touched by other people, such as the Chez Scheme compiler.
In my personal experience, academic prototypes are based on very interesting ideas and could lead to great solutions, but for the most part collect dust after the prototyping stage.
Which isn’t to say this is always the case! My experience is incomplete!
As most laboratory personnel know, errors within the laboratory can be harmful. In clinical diagnostics, technicians are trained to understand that the results produced by laboratories are used to treat patients and that errors can be fatal to these patients. Similar consequences can be found in other industries.
With that understanding, laboratories invest in quality, sometimes without quite understanding the cost of the actions performed to achieve (supposedly) a certain level of quality. Many of the quality procedures in the laboratory are associated with satisfying the minimum regulatory requirements deemed to yield the quality necessary to be licensed.
But does your laboratory really know the true cost of quality, and is the level of investment in achieving that quality a conscious investment or simply the default of the process adopted? Knowing these costs is important when implementing quality improvement processes, including automation. Let’s review some of the elements of quality costs to help you quantify them in your laboratory.
The cost of quality is any cost that would otherwise not have been spent if quality were perfect. The idea of perfect exists in theory, but in practice it is difficult if not impossible to obtain. Therefore, there will be a cost associated with poor quality. Significant chunks of this cost remain hidden because accounting systems are not designed to identify them, and thus these costs are buried in routine operational costs. Getting a handle on these costs helps identify areas of opportunity for improvement within a laboratory leading to a reduction of the cost of errors.
Client Complaint Costs (External Failure Costs): When your client (doctor, scientist, regulator) calls to question the validity of a laboratory result, a set of actions is triggered in response. Among these are researching how the result was obtained, the state of the sample at testing, and other possible mishandling within the testing and reporting process.
The costs of client complaints are varied; some are tangible:
- The effort researching how the error occurred
- Correcting the error, including retesting, if possible
- Regulatory fines when applicable
- Lawsuit expenses when applicable
Some are intangible:
- Potential loss of revenue from the complaining client’s account
- Damage to the reputation of the laboratory and loss of other business
- Expense of repairing quality image
Quality Control Costs (Internal Failure Costs): This is the expense most laboratories associate with the cost of quality. Again, the costs are both tangible and intangible.
Among the tangible costs are:
- The effort and material incurred in QC and standard samples
- The effort required to review results – many laboratory processes require two people (sometimes three) to review the results so as to catch any erroneous results
- The effort and expense of rerunning the tests if found in error by the review process
Among the intangible costs are:
- Decreased service level (turn-around-time) that leads to loss of revenue
- Decreased instrument capacity through the inability of the review process to keep up with the instrument output
Inspection Costs: These are the costs associated with monitoring compliance with regulatory requirements or the expected level of quality. These costs include:
- Effort and material to calibrate equipment
- Effort to monitor quality of laboratory processes, including proficiency sample processing
- Effort of internal audits in preparation for external audits, including audits from regulatory agencies
Prevention Costs: These are the costs invested to prevent errors from occurring. Among these costs are the following:
- Effort to develop and maintain a quality system, including documentation of standard operating procedures
- Enrollment in quality surveys and other comparable quality prevention measures
- Efforts spent in method development to ensure the production testing process produces quality results at a higher testing volume (it’s scalable)
- Development of quality rules to ensure results comply with expected quality
- Technician training
So, using these four elements, can you estimate the cost of quality in your laboratory? Is it higher or lower than you expected? Can you find areas that would have a significant impact on quality and reduce cost?
In my previous blog (The Human Side of Automation), I talked about the effects of automation on personnel. Another component of the story I shared in that blog was that many of the savings achieved were the result of understanding and calculating the savings on the cost of quality. I remember from that project that the accounting department was not prepared to calculate the cost of quality on its own; it required extensive input from people in the laboratory who knew the processes.
I hope you find this helpful, especially if you are considering the implementation of a quality improvement step into your process and want to justify it to your management.
I invite you to share your insights on the cost of quality in your laboratory or share a story of how some quality improvement decision produced significant savings.
As a fledgling HCI Professional / Interface Designer, using a new piece of software is always an interesting and informative experience. Since beginning my college career, I’ve become significantly more enthralled by the idea of an interface working to better the overall usability of a program. Because of this, from my perspective, programs are effectively split in half: one half exploring how the software functions as a whole, the other involving the interface and how logical its design is. This two-part evaluation system really allows for a user to get a feel for how satisfying or frustrating a piece of software will be to use. With this knowledge, a user can then determine how to best proceed in terms of using that program. This could involve diving into a manual to learn more about usage or abandoning the application altogether and attempting to find a better solution.
A few weeks ago I was tasked with converting a rather daunting stack of data-flow diagrams into a digital format. There are a couple of choices for programs that would be best for this type of work (such as Visio on the PC and OmniGraffle on the Mac), but Indigo needed a solution that would have cross platform functionality to prevent issues with editing files down the road. With this requirement in mind, some quick research pointed in the direction of Inkscape (http://inkscape.org/), a free, open-source SVG editor with some pretty robust tools for making data-flow diagrams. The program appeared to be pretty well put together and suited for the type of work I would be doing. After quickly becoming acquainted with the tools, I started drawing. A few weeks and 116 diagrams later, I think it’s fairly safe to say I’m at least moderately experienced with the software. My overall feeling is that the software is, in most respects, very well suited to creating data flow diagrams. After using the program so extensively, I compiled a brief overview of the program’s pros and cons along with some useful tips about the application to assist you before you begin to diagram using Inkscape.
Pros:
- Inkscape does a fantastic job of dealing with lines and curves. The Bezier Curve tool is easy to use, and modifying curves and lines is very intuitive.
- Because it is a vector-based program, Inkscape can perform smoothing on hand-drawn lines using the Freehand Line tool. This allowed me to create the Gaussian peaks used in some of the diagrams.
- Similar to Photoshop, Inkscape has rulers at the top and side of the working window. The user can click and drag on these rulers to create guides, used to help align elements within the document.
- The user is provided with an extensive collection of line strokes / elements that can be used in the creation of flowcharts. (Dotted / Dashed / Arrows / etc.)
- Preset rotation amounts make it easier to align text with slanted lines. (More on this in the tips section)
Cons:
- The biggest downfall of Inkscape is an interface in which certain actions can only be performed via key commands that aren’t very intuitive. For example, there is no clear way to rotate objects using the interface. Instead, a seemingly random key command (explored in the tips section) is used to handle rotation.
- Initially, the way the program handles formatting was confusing. Text alignment didn’t seem to remain consistent. There are some tips regarding this in the section below.
- There is no horizontal scroll bar that affects the view of your workspace. Instead, the horizontal scroll along the bottom pans left and right on the color selector located above it.
- It appears that occasionally the program will render lines at different thicknesses, even if they have the same weight applied to them.
Tips and Tricks:
- To scroll horizontally in the workspace, hold down the shift key and use the scroll wheel on the mouse. This was the only way I found I was able to scroll left and right.
- To rotate a line / object / text box, hold down the alt (Windows) or option (Mac) key and use the open and close bracket keys ( [ and ] ) to rotate left and right. This is a precision rotation used to line up objects with other objects.
- For a controlled rotation, objects can be rotated fixed amounts by using certain keys in conjunction with the open and close bracket keys. Use the control key to rotate objects by 90 degrees and the Windows / Command key to rotate objects by 15 degrees.
- When using the Bezier Curve tool to create a straight line, the control key can be held down to force the created line to apply to certain angles. These fixed angles correspond with the Windows / Command key rotation function mentioned above. This allows you to quickly align text and straight lines using the fixed 15 degree increment values.
- The control key can also be used to create shapes that adhere to predefined ratios. This allows for easy creation of perfect circles / squares.
- Regarding formatting, the program seems to use the following method to determine defaults. It is rather confusing, so I will explain by using an example.
- Let’s say you create a text field and type something into the program. After typing, you decide to center the text within the field.
- If you now create another text field, the program will revert to left-justified, the default setting.
- However, if you create a text field and set the formatting to be centered before inputting any data into the field, Inkscape will now apply centered as the new default for all future text fields. Subsequent text fields will be centered upon creation.
- In general, Inkscape tends to set formatting defaults based on changes applied to empty fields. This can take a while to get used to and can become frustrating.
- Inkscape includes the option to save your files as PDFs. However, if you are working with a large number of files and wish to maintain the .svg files for later editing as well, I would suggest waiting until all your files are created and then using a program to batch convert from .svg to .pdf.
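One possible way to handle that batch conversion (a minimal sketch, not from the original post; it uses Inkscape’s own command-line export, whose flag name differs between Inkscape versions) is a small shell loop:

# Convert every SVG in the current directory to a PDF with the same base name.
# Older Inkscape releases use --export-pdf; Inkscape 1.x uses -o/--export-filename instead.
for f in *.svg; do
    inkscape "$f" --export-pdf="${f%.svg}.pdf"
done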
This is by no means a comprehensive guide to Inkscape, merely a short collection of thoughts and tips I collected while I was working with the program. Hopefully you found some of this information useful! Thanks!
As technology continues to improve, the opportunities for automation are increasing, sometimes in a dramatic fashion. Many companies and managers view these as opportunities, while others view automation as a harbinger of difficult decisions, including decisions to eliminate positions.
As managers, when thinking about technology, most agree on two things:
- Automation technology is progressing fast. This progress simplifies not only blue-collar but also white-collar jobs.
- The availability of these technologies will narrow the gap with one’s competitors, so choosing not to adopt automation is not an option.
So, what makes managers hesitant about these automation decisions?
Probably the most important reason for adoption delays is fear of change. I contend that an important component of that fear is the unpleasant thought of eliminating people’s jobs with automation. What plan can be implemented to minimize such a bad side effect and embrace progress?
My first job out of college placed me on a project that would replace a batch system with an online system, eliminating the need for keypunch operators. The company’s business analysts made a strong case for the competitive advantage of reducing turnaround time, as well as the added benefit of cost reduction.
Working with the laboratory supervisors, I understood the benefits, but also noticed their concerns about changes the technology would bring. One such concern was what to do with the surplus workers.
I brought this observation to the project steering committee, which included a wise old CFO. He had been instrumental in laying out the advantages the planned automation would bring to the company:
- It would decrease direct costs for sending out results and
- It would decrease turn-around time allowing clients to receive laboratory results sooner
In dealing with the concerns of the supervisors, he explained a plan to deal with these personnel reductions. The plan included the following:
- Establish a hiring freeze in the affected departments;
- Examine backlogged projects that could use the affected staff and project their re-allocation; and
- Identify current openings within the laboratory and re-allocate the affected staff to those areas.
The CFO also advised the executive team on the use of the savings in the direct costs and re-allocation of those savings to three areas:
- Investment in marketing and sales to drive more business by using the newly gained competitive advantage;
- Investment in other infrastructure projects to further increase capacity; and
- Investment in R&D to bring more tests to market.
In the next meeting with the laboratory supervisors, the CFO laid out the plan, which was enthusiastically embraced. The project gained speed and was completed very successfully.
In essence, what the CFO did was transform a stressful change into an opportunity. He engaged the lab management and gave them an opportunity to benefit from the change. He also clarified the importance of using the savings to pursue business growth to the executive team.
Over the years, I have been fortunate to participate in many high growth companies. Inevitably, at various points, these companies needed automation to sustain their growth. The guiding principles derived from the advice of that wise CFO have served me well. I hope these same principles assist your innovative decision making. I invite you to share any other ideas from your own experience.
Raul Zavaleta, CEO
Indigo delivers high performance data analytics for diagnostic laboratory operations. Most of the heavy lifting is done using the Condor High Throughput Computing system from the Condor Team at the University of Wisconsin. Condor is a very powerful piece of software that simplifies parallel computing for “embarrassingly parallel” problems. Automated instrument data analysis can usually be pipelined into such an architecture with excellent performance results.
We use Condor both on Amazon EC2 with Ubuntu and on customer systems (mostly VMware). There are several good starting images for Condor on Amazon EC2, and getting Condor running on most Linux distributions is simple. We do most of our development on Macs, so I wanted a simple Condor master that could accept flocking nodes (if I needed more compute power) for developing and testing.
Here’s how I set my own system up:
Use the /etc/launchd.conf file to set the path to the Condor executables and to set the required CONDOR_CONFIG environment variable. This file might not exist on your machine, so create it if you don’t have one already.
setenv PATH /export/condor/bin:/export/condor/sbin:$PATH
setenv CONDOR_CONFIG /export/condor/etc/condor_config
When you reboot your machine, the global path will be properly set.
I run Condor as a normal user (me) with the condor_master command, so I want all the Condor binaries on my execution path. Also, I want Condor to use my specific condor_config and condor_config.local, so I set the environment variable CONDOR_CONFIG to the main configuration file, which in turn points to condor_config.local.
I expanded the installation tarball (condor-7.5.6-x86_macos_10.4-stripped.tar.gz from the Condor Download Center) into a directory; I used /export/condor. You can put this wherever you like, but this directory will be entered into the configuration script, so just make sure the directory is consistent throughout. Later, I will show that I put all the configuration and logging directories in the same place. You don’t have to make all these changes, but I wanted everything in a nice, neat place.
Next, I put the condor_config and condor_config.local files in the /export/condor/etc directory.
You can use the vanilla condor_config that comes with the distribution, but you have to change the following sections. First, around line 56 (you can leave CONDOR_HOST = $(FULL_HOSTNAME) alone):
##--------------------------------------------------------------------
## Pathnames:
##--------------------------------------------------------------------
## Where have you installed the bin, sbin and lib condor directories?
RELEASE_DIR = /export/condor

## Where is the local condor directory for each host?
## This is where the local config file(s), logs and
## spool/execute directories are located
LOCAL_DIR = /export/condor/etc

## Where is the machine-specific local config file for each host?
LOCAL_CONFIG_FILE = /export/condor/etc/condor_config.local
Just put in the location for your installation here.
I also changed my execution settings to something like Condor’s “TESTINGMODE”. That just means that I don’t want jobs suspended, killed, or stopped if I use my computer:
# When should we only consider SUSPEND instead of PREEMPT?
WANT_SUSPEND = False

# When should we preempt gracefully instead of hard-killing?
WANT_VACATE = False

## When is this machine willing to start a job?
START = True

## When to suspend a job?
SUSPEND = False

## When to resume a suspended job?
CONTINUE = True

## When to nicely stop a job?
## (as opposed to killing it instantaneously)
PREEMPT = False

## When to instantaneously kill a preempting job
## (e.g. if a job is in the pre-empting stage for too long)
KILL = False

PERIODIC_CHECKPOINT = False
PREEMPTION_REQUIREMENTS = False
PREEMPTION_RANK = 0
CLAIM_WORKLIFE = 1200
I also found that it was easier to just create the following directories:
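(The original directory list was not preserved here; judging from the LOCK, RUN, LOG, SPOOL, and EXECUTE paths in the configuration below, and assuming the /export/condor/etc LOCAL_DIR used above, the commands would be along these lines:)

# create the local working directories Condor expects under LOCAL_DIR
mkdir -p /export/condor/etc/lock
mkdir -p /export/condor/etc/run
mkdir -p /export/condor/etc/log
mkdir -p /export/condor/etc/spool
mkdir -p /export/condor/etc/execute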
and change the following in “Part 2”
LOCK = $(LOCAL_DIR)/lock
and “Part 4” of the configuration file:
######################################################################
## Daemon-wide settings:
######################################################################

## Pathnames
RUN     = $(LOCAL_DIR)/run
LOG     = $(LOCAL_DIR)/log
SPOOL   = $(LOCAL_DIR)/spool
EXECUTE = $(LOCAL_DIR)/execute
Also if you don’t want the system to run a benchmark (I turned this off), you can comment out the RunBenchmark line lower down in Part 4:
## When a machine is unclaimed, when should it run benchmarks?
## LastBenchmark is initialized to 0, so this expression says as soon
## as we're unclaimed, run the benchmarks.  Thereafter, if we're
## unclaimed and it's been at least 4 hours since we ran the last
## benchmarks, run them again.  The startd keeps a weighted average
## of the benchmark results to provide more accurate values.
## Note, if you don't want any benchmarks run at all, either comment
## RunBenchmarks out, or set it to "False".
#BenchmarkTimer = (time() - LastBenchmark)
#RunBenchmarks : (LastBenchmark == 0 ) || ($(BenchmarkTimer) >= (4 * $(HOUR)))
#RunBenchmarks : False
Next, I don’t want to create a special “condor” user on my laptop, I just want to run condor as “me”. To do this, I need my user and group id: From the command prompt run the id command:
uid=501(“me”) gid=20(staff) groups=20(staff),…
I then use the following condor_config.local:
CONDOR_IDS = 501.20
START_MASTER = True
START_DAEMONS = True
START = TRUE
FLOCK_FROM = *
HOSTALLOW_READ = *
HOSTALLOW_WRITE = *
CONDOR_HOST = $(FULL_HOSTNAME)
DAEMON_LIST = MASTER, SCHEDD, STARTD, NEGOTIATOR, COLLECTOR
TRUST_UID_DOMAIN = TRUE
I use condor_config.local to override anything I didn’t fix (or forgot to fix) in the condor_config file. The CONDOR_IDS setting allows me to run condor_master as my own user, and the remaining settings allow other nodes to flock to my machine as a master.
If everything went right, you should be able to run the condor_master command followed by the condor_status command. For my MacBook Pro I get:
Name       OpSys  Arch    State      Activity  LoadAv  Mem   ActvtyTime

slot1@rkj  OSX    X86_64  Unclaimed  Idle      0.310   2048  0+00:28:31
slot2@rkj  OSX    X86_64  Unclaimed  Idle      0.000   2048  0+00:28:32
slot3@rkj  OSX    X86_64  Unclaimed  Idle      0.000   2048  0+00:28:33
slot4@rkj  OSX    X86_64  Unclaimed  Idle      0.000   2048  0+00:28:34

                    Total  Owner  Claimed  Unclaimed  Matched  Preempting  Backfill
        X86_64/OSX      4      0        0          4        0           0         0
             Total      4      0        0          4        0           0         0
Your display will vary by memory, cores, and how busy your machine is at the moment.
Your system should now be ready for condor_submit…
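As a quick smoke test (a minimal sketch, not from the original post; the file names and the echoed text are placeholders), a vanilla-universe submit description file could look something like this:

# hello.sub -- minimal vanilla-universe test job
universe   = vanilla
executable = /bin/echo
arguments  = "hello from condor"
output     = hello.out
error      = hello.err
log        = hello.log
queue

Run condor_submit hello.sub, check progress with condor_q, and the echoed text should show up in hello.out when the job completes.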