Recently, I climbed on stage to moderate a panel discussion on “infrastructure vs. cloud” at the Technology in Government conference in Canberra, Australia. My panelists ranged from first-line government IT managers to heavy hitters like Barbara Cohn, the first chief data officer of New York state.
I’m a fan of automation from way back. Growing up, my dad sold factory automation systems, and dinnertime conversation regularly included stories about robots and automated assembly lines. What kid doesn’t like robots? It’s probably fate that I ended up working for Quantum, which has been the market share leader in open systems tape automation for as long as I can remember.
Collection and analysis of large data sets is perennially hot. Remember data warehouses? “Big Data” is just the latest buzzword for this trend. Admit it – it’s an alluring vision: just save enough data, apply the right tools, and insight (and money) will rain from the clouds. Though frequently clothed in breathless hype, there is a kernel of truth here. You can find insight in rivers of data if you have the right tools. Organizations across a range of industries are successfully capturing and analyzing oceans of machine- and sensor-generated data with Splunk.
As my colleague Terry Grulke pointed out earlier, there is a lot of funny math used by deduplication vendors to try to convince you that their systems can go fast. With our DXi systems, we don’t have to hire Cirque du Soleil to generate our performance numbers. We can keep it simple because DXi systems are just really, really fast – natively. That’s what I’m going to talk about here: “native” performance, meaning the capability of the DXi system itself versus some manufactured “logical” number like the ones Terry wrote about. Apparently, our high performance is confusing to some of our competitors.