Big Data Made Easy


Almost every article about big data today touches on the notion that the country suffers from a crucial shortage of data scientists, architects, and analysts. A McKinsey & Co. survey in 2011 pointed out that many organizations lack both the skilled personnel needed to mine big data for insights and the structures and incentives required to use big data to make informed decisions and act on them. That is why we are currently seeing companies look outside their organizations to specialty development shops that take their data and deliver the analytics they deem most needed to stay nimble in their marketplace. It is also why I began working with companies that not only take the customer’s requirements on what they need, but also dream up ways data analysis can be more effective, ways that may be outside the customer’s own capacity to imagine.

A company’s job is to build or sell its product, not to massage its data to figure out the best way to pick, pack, and ship items. It falls to companies with extensive knowledge of the data side of things, who know where the low-hanging fruit resides, to help businesses stay streamlined and aggressive in this marketplace. It is my belief that data is gold; that is why I am working with SMU and their Digital Advisory Committee to help refine their course curriculum and help shape the new batch of business students looking to become data scientists.

What seems to be missing from many discussions around “Big Data” is a dialogue about how to steer around the bottleneck of data experts and make big data directly accessible to business leaders. We have come a long way in developing better ways to gather and use analytics: analytics to stock stores, show warehouse inventory levels, measure sales and ad success, plan where to place products to sell them better in a store environment, and track user behavior. This allows vendors within a large chain to see how their products are doing in that chain at a national level. This use of data makes the companies that employ it more responsive to customer demands and leaner in keeping warehouse and store stock in check. This is a new era in the software industry, and we are working to make it more popular and less scary for companies that hold their data dear. Many retail giants fear letting this data be mined by manufacturers or distributors, but we have found that by allowing access to portions of the data, manufacturers and distributors (vendors) can be more competitive and aggressive in pursuing larger retailers’ demands and offering more competitive pricing.

To accomplish this goal, it’s helpful to understand the role of software development companies in data and cloud storage solutions. Big data is currently a melting pot of distributed data architectures and tools in a highly technical environment. Companies whose bailiwick this is serve as the gatekeepers and mediators between these systems and the people who run the businesses – the “data experts”.

While difficult to generalize, there are three main schools of thought around “Big Data”: data architecture, machine learning, and analytics. While these data relationships are important, the fact is that not every company actually needs a highly specialized data team like Google or Facebook. The solution lies in creating fit-to-purpose products and solutions that abstract away as much of the technical complexity as possible, so that the power of big data can be put into the hands of business users, Manufacturers and Distributors.

Let’s dig a little deeper into these three schools of thought, using a retail store as a backdrop.


The future of big data applications lies in making all of the “microinteractions” users perform in the system seem seamless and, more to the point, error-proof. According to Dan Saffer, a “microinteraction” is “a product use case boiled down to a single moment focused on a single task.” All of these micro activities, taken as a whole, can be made simple by looking at the bigger result: the report generated, the use of that report, and ultimately the business value.

Some of these “microinteractions” are what make the user interface more successful; they trickle up into the broader user experience, producing a holistic application that is intuitive and streamlines the processes involved. This is where I take the greatest gratification in making a product. Feedback that an application is not only easier to use but also makes sense and feels intuitive is better praise than any design award I could ever receive. Praise from users is invaluable, and when you have thousands of users, the handful who speak up are indicative of a few thousand whose lives you have made easier in the process.

I look at server logs, user analytics, and usage patterns to discern how to improve an application, but the people who speak up and ask for things are the ones who make my job easier on the whole. Users who voice their need for features, fewer clicks, and different ways of doing things often appreciate it when those features are enabled and bring value to the application as a whole. So speak up on user design and report usage, and let us know what we can do better; I for one am always ready to listen. I know big data can be used to make a company millions, even save it millions, and I want every company to be successful using interfaces I have helped build. For me it’s not about how pretty it is, but about streamlining the processes and making the application easier for users to understand and use.


Data Architecture

The key to reducing complexity is to limit scope. Nearly every business is interested in capturing user behavior – engagements, purchases, transactions, and in-store behavior – and almost every one of them has a way of tracking purchases, be it in-store reward cards, credit cards, or data exported directly from store sales (receipts).

Limiting scope to this basic functionality would allow us to create templates for the standard data inputs, making both data capture and connecting the pipes much simpler using APIs. We’d also need to find meaningful ways to package the different data touch points, with reports tailored to each purpose. It comes down to the 80/20 rule: 80 percent of big data use cases (which is all most businesses need) can be achieved with 20 percent of the effort and technology.
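As a rough sketch of what such a template might look like, here is a minimal purchase-event schema with basic validation before the event enters a pipeline. The field names, channels, and values are illustrative assumptions, not any real vendor’s schema:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical template for a standard "purchase" data input.
@dataclass
class PurchaseEvent:
    store_id: str
    sku: str
    quantity: int
    unit_price: float      # in dollars
    channel: str           # "in_store", "online", or "reward_card"
    timestamp: str         # ISO 8601

def validate(event: PurchaseEvent) -> bool:
    """Basic template validation before the event enters the pipeline."""
    if event.quantity <= 0 or event.unit_price < 0:
        return False
    if event.channel not in {"in_store", "online", "reward_card"}:
        return False
    try:
        datetime.fromisoformat(event.timestamp)
    except ValueError:
        return False
    return True

event = PurchaseEvent("store-042", "SKU-1001", 2, 19.99, "in_store",
                      "2014-05-01T14:30:00")
print(validate(event))  # True for a well-formed event
```

Because every system that feeds the pipeline agrees on one template like this, connecting the pipes becomes a matter of mapping each source (reward cards, receipts, online carts) onto the same few fields.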


Machine Learning

Surely we need machine learning, right? Well, if you have very customized needs, perhaps. But most of the standard challenges that call for big data, like recommendation engines and personalization systems, can be abstracted away. For example, a large part of a data modeling company’s job is crafting “features”: meaningful combinations of input data that make machine learning effective. As much as we’d like to think all we have to do is plug data into the machine and hit “go,” the reality is that people need to help the machine by giving it useful ways of looking at the world, and companies that understand the target business industry are looking for that abstracted data. The theory behind machine learning is that, as users click through and use applications, the computer can analyze their behavior and make computational “guesses.” MIT, Stanford, and other institutions are implementing coursework so students can learn more about these “thinking machines” and improve their rate of success. Google has implemented a Prediction API, using a RESTful interface, to build smarter applications. Machine learning still has a long way to go toward understanding industry data and the uses big data application users are extrapolating from it; we still need to tell it what to look for, and that is a long road ahead of us.
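To make the idea of “crafting features” concrete, here is a tiny illustrative example that turns raw purchase rows into per-customer features a recommendation model could consume. The column names and sample data are assumptions for the sketch, not a production pipeline:

```python
# Illustrative "feature crafting": combining raw purchase rows into
# per-customer features (total spend, visit count, average basket size).
from collections import defaultdict

purchases = [
    {"customer": "c1", "category": "grocery",     "amount": 42.50},
    {"customer": "c1", "category": "electronics", "amount": 199.00},
    {"customer": "c2", "category": "grocery",     "amount": 12.00},
]

def build_features(rows):
    totals = defaultdict(float)
    counts = defaultdict(int)
    for row in rows:
        key = row["customer"]
        totals[key] += row["amount"]
        counts[key] += 1
    # Each derived value is a "feature": a meaningful combination of
    # raw inputs that gives the model a useful way to see the world.
    return {
        c: {"total_spend": totals[c],
            "visits": counts[c],
            "avg_basket": totals[c] / counts[c]}
        for c in totals
    }

features = build_features(purchases)
print(features["c1"]["avg_basket"])  # 120.75
```

The point of the sketch is that none of these features come from the algorithm; a person decided that average basket size matters, which is exactly the human judgment the paragraph above describes.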

On a per-industry basis, feature creation could eventually be made into templates. Every retailer has a notion of buy flow and user segmentation, for example. What if everyday users could directly encode their ideas and representations of their reports into the system, bypassing data analysts and IT management as middlemen and translators? Would the reports users generate offer better insight into what the company needs than anything an analyst or manager could propose as a set of requirements?

So while industry experts can look at the tools businesses have used in the past, analytics companies have found specific instances of users manipulating reports to get entirely different data than was initially conceived. The answer is to let users create their own reports, save their own filters, and manipulate the data in a million different ways. Once users have the tools to build their own reports, getting away from “canned” reports and no longer pulling data into an Excel spreadsheet to extrapolate information in different ways, the floodgates open to improved business strategies: strategies we hear our users devising by working “around the constraints” of existing paradigms. Giving users autonomy and the ability to share new reports with their teams and across the business can improve the bottom line in ways we have never conceived before. Who needs a “learning machine” when the users themselves are refining and finding ways to build better, more targeted reports?
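A minimal sketch of that idea, with hypothetical field names and sample data: users define and save their own filters, and any saved filter can then be run as a report and shared with a team, with no analyst in the middle:

```python
# Sketch of user-defined, saveable report filters replacing canned reports.
sales = [
    {"region": "south", "vendor": "acme",   "units": 120},
    {"region": "north", "vendor": "acme",   "units": 80},
    {"region": "south", "vendor": "globex", "units": 45},
]

# A user saves a filter once; anyone on the team can rerun it later.
saved_filters = {
    "acme_in_south": {"region": "south", "vendor": "acme"},
}

def run_report(rows, filter_name):
    """Return the rows matching every criterion in the saved filter."""
    criteria = saved_filters[filter_name]
    return [r for r in rows
            if all(r.get(k) == v for k, v in criteria.items())]

report = run_report(sales, "acme_in_south")
print(sum(r["units"] for r in report))  # 120
```

Because the filter is just data, saving a new one is the user “encoding their idea into the system” rather than filing a request for a new canned report.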


It’s never easy to gather the most valuable insights from data. There are ways to provide industry-specific perspectives that allow business experts to experiment, much like a data analyst would. This seems to be the easiest problem to solve, as evidenced by analytics products from companies that combine custom-coded applications with analytics applications on the market to provide keen insight into how to do business on a grand scale.

By making products that are less constrained and more accessible to business users, manufacturers, and distributors, we let users be the experts and give them greater control than ever before. There is always room for a friendly interface that gives users more control over the data they see and work with. Analytics platforms also take into account how users interact with the data and learn from the results the analytics deliver. This is the critical feedback loop, and business experts want to give users the ability to make modifications within that loop.

Once big data works directly for the users interacting with it, and we combine that with learning from what those users are doing in the system, we may enter a new age of big data in which we learn from each other. Maybe then, big data will actually solve more problems than it creates.


About Faith Warren

Faith Warren is currently a User Experience Group Manager working directly with Fortune 100 companies to improve application delivery to the users of the “Big Data” she mentions in this blog. She is an avid user experience advocate and enjoys making new user interfaces that are intuitive and help whatever business or process she is working on “make more money.” “That’s my goal in life, and I figure it is every other company’s greatest ambition too.”