Making big data useful to everyday users
We need to simplify the data management process to optimise data-driven insights. This is especially relevant for the socio-economic development of emerging economies, where much of the population may not yet be tech-ready
Big data management has attracted enormous attention in recent decades from different streams of knowledge and practice, owing to its broad cross-disciplinary scope and cross-functional implications.
The practical need to analyse data in support of interconnected knowledge streams such as information technology (IT), policy and decision-making, strategic management, marketing, and sustainable growth is well recognised as a way to strengthen management capacity in diverse socio-economic settings.
However, we need to simplify the data management process to optimise data-driven insights. This will be especially valuable for non-expert stakeholders (the everyday IT users).
Experts are focusing on simplifying big data management (data capture, storage, visualisation, and the inflow and outflow of data) so that on-time, targeted decisions can be made from the right datasets for the right target audience. Achieving this in practice remains a strategic challenge.
For example: which context-based data should we collect and store for our different organisational contexts? How long should it be stored? What sequence and duration of data inflow and outflow are needed to visualise and analyse the right dataset? How should big data be tracked for pattern mining to explore stakeholder behaviour? Answering these questions, so as to neutralise prospective variance and factual error in data analytics, is a crucial challenge.
The purpose here is to avoid management myopia and assist non-IT-expert managers in making the right decisions based on the right datasets to attract the right stakeholder groups.
However, there are diverse challenges in big data management that may adversely impact future decision-making in both the government and private sectors. Breaches of stakeholder privacy are well known, such as the harvesting of data from social media users as well as their friends.
Another issue is the proper management of data streams and data mining. A spokesperson from the supermarket chain Tesco, for example, described how the company struggled to offer a flawless online customer experience because its system misread data coming from the multiple data streams of online customer orders.
In practice, there are further examples of data mismanagement across multiple data streams, caused by the absence of a streamlined algorithm for data inflow and outflow. Such an algorithm is needed to track, explore, visualise, and analyse the right data from multiple data sources, so that non-expert data users can make the right decision at the right time and target the right stakeholders.
As a result, in the real-life business world, the absence of a contextual and scalable big data management structure often leads to myopic decisions.
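The idea of streamlined data inflow can be illustrated with a minimal sketch. The names below (`TaggedRecord`, `StreamTracker`, the "website" and "mobile_app" order streams) are purely hypothetical, not any vendor's actual system; the point is simply that tagging every incoming record with its source stream and arrival time keeps data from multiple streams traceable, so the right records can be selected later.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TaggedRecord:
    source: str        # which data stream the record came from
    payload: dict      # the raw record itself
    received_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

class StreamTracker:
    """Collects records from multiple streams and answers simple
    provenance questions (where did this data come from, and when?)."""

    def __init__(self):
        self.records: list[TaggedRecord] = []

    def ingest(self, source: str, payload: dict) -> None:
        """Record an inflow, tagged with its source and arrival time."""
        self.records.append(TaggedRecord(source, payload))

    def from_source(self, source: str) -> list[dict]:
        """Return only the payloads that arrived via one stream."""
        return [r.payload for r in self.records if r.source == source]

# Hypothetical example: two order streams feeding one tracker.
tracker = StreamTracker()
tracker.ingest("website", {"order_id": 1, "item": "milk"})
tracker.ingest("mobile_app", {"order_id": 2, "item": "bread"})
tracker.ingest("website", {"order_id": 3, "item": "eggs"})

print(len(tracker.from_source("website")))   # 2 website orders
```

Even this toy version shows why provenance matters: once every record carries its source tag, mixing up orders from different streams becomes much harder.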
Similar to many other sectors, healthcare is also affected by improper data management approaches. Experts have described the key big data challenges in contemporary healthcare as data inconsistency and instability, data quality (volume, variety, velocity, and veracity), the limitations of observational data, data validation, and data analysis perspectives. These challenges can lead health professionals to derive inaccurate and untimely insights from the enormous flow of patient data.
To simplify the data management process, especially for non-expert users, the following steps can be taken:
- Establishing common business terminology for uncomplicated data mining patterns allows large datasets to be easily understood by diverse groups of internal and external stakeholders with varying backgrounds. This is crucial when datasets and their underlying meaning are shared and analysed by cross-functional teams comprising non-IT specialists.
- Simplifying data inflow and outflow management is essential for understanding the origin and utilisation of data across different management levels, from front-desk customer service representatives to top management and vice versa.
- Large and dynamic sets of volatile raw data require effective and interactive data exploration techniques accessible to non-experts. These techniques should be based on stable and uncomplicated methods of information abstraction, sampling, and insight extraction, enabling users to address current problems readily.
- Big data analysis software should enhance user comprehension by offering customisation or individualisation capabilities for various non-expert user-defined exploration scenarios. This ensures users access the "right data, at the right time, in the right context" based on their unique preferences and diverse practical business and management needs.
- Ideally, big data analysis software should empower diverse data users (e.g., customers, investors, customer service attendants, and other key stakeholders) to gain accurate insights and value from the ongoing big data flow as quickly as possible, minimising reliance on IT experts.
- These data mining patterns should be scalable and offer customised insights based on common business or management terms, minimising the need for IT expertise and ultimately allowing the wider society to benefit from big data. This is particularly critical for the socio-economic development of emerging economies, where the population may not be as tech-ready as in developed nations.
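Two of the steps above, common business terminology and sampling-based insight extraction, can be sketched together in a few lines. The glossary entries, the `lookup` and `sample_insight` helpers, and the orders dataset are all illustrative assumptions, not a real product's interface; the sketch only shows the principle that a non-IT user can query data in plain business terms while a small random sample stands in for the full dataset.

```python
import random

# Hypothetical glossary: plain business terms mapped to technical
# field names, so non-IT users can work in their own vocabulary.
GLOSSARY = {
    "customer": "cust_id",
    "spend": "txn_amount_gbp",
    "visit date": "event_ts",
}

def lookup(term: str) -> str:
    """Translate a business term into the underlying dataset field."""
    return GLOSSARY[term.lower()]

def sample_insight(dataset: list[dict], term: str, k: int = 10) -> float:
    """A simple sampling-based abstraction: estimate the average of a
    field from a small random sample instead of scanning everything."""
    field_name = lookup(term)
    sample = random.sample(dataset, min(k, len(dataset)))
    return sum(row[field_name] for row in sample) / len(sample)

# Illustrative dataset using the technical field names above.
orders = [{"cust_id": i, "txn_amount_gbp": 10.0} for i in range(100)]

print(lookup("Spend"))                  # txn_amount_gbp
print(sample_insight(orders, "spend"))  # 10.0 (all values equal here)
```

In a real system the glossary would be agreed by the cross-functional teams mentioned above, so that "spend" means the same thing to a marketer, an accountant, and a data engineer.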
Dr Riad Shams is an Assistant Professor and Head of the programme at the Newcastle Business School, Northumbria University, UK.
Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the opinions and views of The Business Standard.