Log data is like a stream of non-stop “tweets” coming from nearly every IT asset in an organization's infrastructure. By mining this information and managing it proactively – instead of ignoring it until something goes wrong – organizations can mitigate risk, ensure service availability and improve operational efficiency.
Log data provides an immutable “fingerprint” of user and system activity. It can reveal something as simple as a failed logon, or – where activity deviates significantly from established baselines – it may indicate a runaway application or an actual security breach.
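The baseline-deviation idea can be sketched in a few lines. This is a minimal illustration, not a production detector: the hourly counts of failed logons and the three-standard-deviation threshold are assumptions chosen to show the technique.

```python
import statistics

# Hypothetical hourly counts of failed logons gathered during a
# baselining window; in practice these would be parsed from auth logs.
baseline_counts = [3, 5, 2, 4, 6, 3, 5, 4]
current_count = 42  # count for the hour under review

mean = statistics.mean(baseline_counts)
stdev = statistics.stdev(baseline_counts)

# Flag the hour if it exceeds the baseline mean by more than three
# standard deviations -- a simple, common anomaly-detection rule of thumb.
is_anomalous = current_count > mean + 3 * stdev
print(is_anomalous)
```

A real deployment would recompute the baseline on a rolling window and tune the threshold per log source, but the core check is exactly this comparison.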
Unfortunately, many businesses struggle to use log data wisely. Its reach extends to issues such as access to privileged or protected information, the inherent complexities of enforcing bring-your-own-device (BYOD) policies, and the detection of well-known vulnerabilities.
Beyond addressing security and compliance, log data helps organizations improve IT productivity by reducing downtime and its associated costs. Log data can also help optimize resources, boost service-level performance, identify and troubleshoot problems, improve configuration and change management, support accurate capacity planning and refine business analysis.
Ideally, an enterprise-class log management methodology should be able to collect, centralize and consume log data effectively in a distributed Big Data environment.
According to Gartner, a midsized enterprise creates 20,000 messages per second of operational data in its activity logs – more than 150 GB of operational data in an eight-hour day. When log data is blended in real time with loyalty, supply chain, marketing, social and clickstream information, the potential for rich analytics is enormous. But organizations that do not manage their log data properly will never know what they're missing…until it's too late.
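A quick back-of-the-envelope check shows how the volume figure adds up. The average message size below is an assumption (the source does not state one); roughly 280 bytes per message puts an eight-hour day comfortably past the 150 GB mark.

```python
# Sanity-check the cited figure: 20,000 messages/second over an
# eight-hour day, at an ASSUMED average message size of 280 bytes.
messages_per_second = 20_000
seconds_per_day = 8 * 3600
avg_message_bytes = 280  # assumption for illustration only

total_messages = messages_per_second * seconds_per_day
total_gb = total_messages * avg_message_bytes / 1e9

print(f"{total_messages:,} messages, ~{total_gb:.0f} GB")
# → 576,000,000 messages, ~161 GB
```

At that rate, even modest changes in average message size shift the daily total by tens of gigabytes, which is why capacity planning for log storage matters.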