Automation Is Awesome—Except When It’s Not

I am a fan of automation from way back. Growing up, my dad sold factory automation systems, and dinnertime conversation regularly included stories about robots and automated assembly lines. What kid doesn't like robots? It's probably fate that I ended up working for Quantum, which has been the market share leader in open systems tape automation for as long as I can remember.

But automation has its limits. Have you ever tried using an Amazon subscription to buy cat food, paper towels, or anything else? It doesn’t work very well. Not because of Amazon, but because humans (and cats) aren’t machines. Sometimes we use lots of paper towels, sometimes we go on vacation and use none. Cats are fickle—one month they love beef, the next month chicken. Schedule-based automation fails because the knowledge about when to reorder can’t be calculated. It’s a variable, and it lives in my head where Amazon can’t (yet) get to it.

The solution is to have just enough automation at the right point in the process. Now when I need paper towels, I just tell Alexa to order some, and she does the rest. I trigger the action, but Alexa automates the process. Believe it or not, data management faces the same challenge, and the same solution applies.

Everyone wants to save money on storage, and data archiving is often the best way to do that. Most archiving software uses simple policies, such as “if a file hasn’t been accessed in six months, copy it to the archive.” This works great for things like user home directories and departmental shares, where files aren’t related to one another and are often not accessed much after their initial creation.
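That kind of access-age policy is simple enough to sketch in a few lines. The sketch below is a generic illustration of the "archive anything untouched for six months" rule, not any particular product's implementation; the six-month cutoff and the copy-preserving-relative-paths layout are assumptions:

```python
import shutil
import time
from pathlib import Path

# Assumed cutoff: roughly six months, expressed in seconds.
ARCHIVE_AGE_SECONDS = 180 * 86400

def find_archive_candidates(root: Path, max_age_seconds: int = ARCHIVE_AGE_SECONDS):
    """Return files under root whose last access time is older than the cutoff."""
    cutoff = time.time() - max_age_seconds
    return [p for p in root.rglob("*") if p.is_file() and p.stat().st_atime < cutoff]

def archive_files(files, source_root: Path, archive_root: Path):
    """Copy each candidate into the archive, preserving its relative path."""
    for f in files:
        dest = archive_root / f.relative_to(source_root)
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(f, dest)  # copy2 keeps timestamps and metadata
```

As the post notes, this works fine for home directories and departmental shares, where each file can be judged on its own access history.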

But in many organizations—often in industries that create and consume massive amounts of data—this simple approach doesn’t work. There are two reasons for this. First, there are groups of files that are related to one another and must be handled together as part of a “project”. If you randomly archive just a few of these files, applications and automated business processes break or experience unacceptable slowdowns. Second, the knowledge about when a project can be archived doesn’t reside in the file system. It’s in someone’s head, or sometimes an external database.

The solution for these project-based environments is to automate the identification of files as components of a project and automate the archive process, but provide manual control over when archiving occurs. Quantum now offers a tool that does exactly this. It's called ClarityNow, and it enables data owners to see and manage their unstructured data on a project basis, in business terms. When a project is wrapped up, a simple right-click by the data owner (not IT) sends all of the associated files to the archive at once.
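The general pattern here, automated grouping plus a manual trigger, can be sketched briefly. This is an illustration of the concept only, not ClarityNow's implementation, and treating each top-level folder as one project is an assumed convention; real environments might use sidecar metadata or an external database instead:

```python
import shutil
from pathlib import Path

def group_by_project(root: Path) -> dict[str, list[Path]]:
    """Group files by their top-level folder, treating each folder as one project.
    (How a project is identified is site-specific; folders are an assumption here.)"""
    projects: dict[str, list[Path]] = {}
    for path in root.rglob("*"):
        if path.is_file():
            rel = path.relative_to(root)
            if len(rel.parts) > 1:  # skip loose files that belong to no project
                projects.setdefault(rel.parts[0], []).append(path)
    return projects

def archive_project(name: str, source_root: Path, archive_root: Path) -> None:
    """Archive every file in the named project at once. The manual trigger is
    a human calling this function; everything after that is automated."""
    for f in group_by_project(source_root).get(name, []):
        dest = archive_root / f.relative_to(source_root)
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(f), dest)
```

The point of the design is that no file leaves the file system on a schedule: the whole project moves together, and only when its owner says it is done.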

I'd love to tell you more, but I'd rather show you. Contact Quantum for a free ClarityNow data assessment or demo, and a chat about how we can help you better manage your unstructured data and control your storage costs.
