Node.js server monitoring, part 2

Last time we discussed two fundamental principles for monitoring any object:

1. The monitor should collect as much relevant information as possible, enough to accurately evaluate the health state of the object.
2. The monitor should have little to no effect on the activity of the object.

These two principles usually work against each other, but with Node.js they can be reconciled quite nicely, because Node.js is event-driven rather than thread-based. The event model lets many listeners be registered for a single event and processed almost independently. To avoid even a small impact on the production server, the monitor was split into two parts: a JavaScript module-plugin that listens to all server events and accumulates the necessary information, and a Linux shell script that periodically queries the plugin over REST to collect, process, and send the information to the Monitis main server.
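To make the "listen, don't interfere" idea concrete, here is a minimal sketch (not the actual plugin code; attachMonitor is a hypothetical name) of observing a server purely through event listeners, timing each request from its arrival to the 'finish' event of its response:

// Minimal sketch: observe an existing HTTP(S) server without touching its handlers.
function attachMonitor(server) {
  server.on('request', function (req, res) {
    var start = Date.now();
    res.on('finish', function () {
      var elapsed = Date.now() - start;
      // accumulate elapsed time, req.method, res.statusCode, etc. here
      console.log(req.method, req.url, res.statusCode, elapsed + ' ms');
    });
  });
}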

Normally, you only need to add a couple of lines to your existing Node.js server code to activate the monitor-plugin:

var monitor = require('monitor'); // insert the monitor module-plugin

var server = … // the definition of the current Node.js server

monitor.Monitor(server); // add the server to the monitor
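Put together, a minimal working setup could look like this (the request handler below is a placeholder invented for illustration; the module name and Monitor() call come from the snippet above):

var http = require('http');
var monitor = require('monitor'); // the monitor module-plugin

// A placeholder application server; in practice, use your existing one.
var server = http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello\n');
});

monitor.Monitor(server); // add the server to the monitor
server.listen(81); // the test server described below listens on port 81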

Now the monitor will begin collecting and measuring data. The monitor-plugin has a simple embedded HTTP server that returns the accumulated data on request; in the current implementation, a request must match the following pattern (an assembled example follows the list):

* 10010 – the listen port of the monitor-plugin (configurable)
* ‘node_monitor’ – the pathname keyword
* ‘action=getdata’ – the command for fetching the collected data
* ‘access_code’ – a specially generated access code that changes for every session
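Assembled, a data request in the current implementation looks roughly like this (the access code shown is a placeholder; a real one is generated per session):

http://127.0.0.1:10010/node_monitor?action=getdata&access_code=<session-code>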

Please note that the monitor-plugin server (in the current implementation) listens on localhost only. This, together with the per-session security access code, provides adequate security for monitoring. More detailed information, along with the implementation code, can be found in the GitHub repository.
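The real collector is a Linux shell script, but purely for illustration the same REST request can be sketched in Node.js (this assumes the plugin is listening locally on port 10010 as described above; the access code is a placeholder):

var http = require('http');

// Illustrative poll of the monitor-plugin's embedded HTTP server; the real
// Monitis collector also obtains a fresh access code for every session.
http.get('http://127.0.0.1:10010/node_monitor?action=getdata&access_code=SECRET', function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    console.log('collected monitoring data:', body);
  });
});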

Server monitoring metrics

There is a largely standard set of metrics that can be used to monitor the underlying health of any server; a small Node.js sketch after the list shows how several of them can be read.

* CPU Usage describes the level of utilisation of the system CPU(s) and is usually broken down into the following states.
o IO Wait – indicates the proportion of CPU cycles spent waiting for IO (disk or network) events. If you experience large IO Wait proportions, it can indicate that your disks are causing a performance bottleneck.
o System – indicates the proportion of CPU cycles spent performing kernel-level processing. Generally only a small proportion of CPU cycles is spent on system tasks, so spikes here can indicate a problem.
o User – indicates the proportion of CPU cycles spent performing user-initiated processing. This is where the bulk of your CPU cycles should be consumed; it includes activities such as web serving, application execution, and every other process not owned by the kernel.
o Idle – indicates the spare CPU capacity you have – all the cycles where the CPU is, quite literally, doing nothing.
* Load Average is a metric that indicates the level of load a server is under at a given point in time. It is usually reported as the average number of processes running or waiting to run, over 1-, 5-, and 15-minute windows.
* RAM usage on the server is usually broken down into the following parts.
o Free – the amount of unallocated RAM available. Linux systems tend to keep this low, using spare RAM for buffers and caches and releasing it only when another process requests it.
o Inactive – RAM in use for buffers and page caching that hasn’t been used recently, so it will be reclaimed first when a running process needs memory.
o Active – RAM that has been used recently and will not be reclaimed unless there is insufficient Inactive RAM to claim from. On Linux systems this is generally the figure to keep an eye on: sudden, rapid increases signal a memory-hungry process that will soon cause VM swapping.
* Server uptime is a metric showing the elapsed time since the last reboot. A sudden drop in the uptime line indicates that the server was rebooted.
* Throughput – the amount of data traffic passing through the server’s network interface is fundamentally important. It is usually broken down into inbound and outbound throughput and reported as an average over some period, in kbit/s or kbyte/s.
* Server Response time is defined as the duration from receiving a request to sending the response. It should never exceed a reasonable timeout (which depends on the complexity of processing) and should otherwise be as low as possible. Usually the average and peak response times are evaluated over some time period.
* The count of successfully processed requests is evaluated as the percentage of requests that received a 2xx (success) status code during the observation period. This value should be as close to 100% as possible.
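As an aside, several of these host-level metrics can be read directly from Node.js via the built-in os module (an illustration only, not necessarily how the plugin gathers them):

var os = require('os');

// Load average over 1, 5 and 15 minutes (reported as [0, 0, 0] on Windows).
console.log('load average:', os.loadavg());

// RAM: free vs. total, in bytes.
console.log('free RAM:', os.freemem(), 'of', os.totalmem(), 'bytes');

// Seconds since the last reboot.
console.log('uptime:', os.uptime(), 'seconds');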

We used a subset of these metrics and added some statistics specific to our task (e.g. client platform and detailed response-code breakdowns).

Test results

The results below were obtained from a Node.js server equipped with the monitor described above, running on Debian 6 x64. The server listens on HTTP (port 81) and HTTPS (port 443) and is under light load.

By double-clicking on a line you can view a specific part of the monitoring data.

The data can be shown in graphical view too.

In conclusion, the monitoring system successfully tracked these metrics and found the Node.js server to be in a healthy state.



More Stories By Hovhannes Avoyan

Hovhannes Avoyan is the CEO of PicsArt, Inc.
