This article describes the Linux out-of-memory (OOM) killer and how to find out why it killed a particular process. It also provides methods for configuring the OOM killer to better suit the needs of many different environments.

When a server that's supporting a database or an application server goes down, it's often a race to get critical services back up and running, especially if it is an important production system. When attempting to determine the root cause after the initial triage, it's often a mystery as to why the application or database suddenly stopped functioning. In certain situations, the root cause of the issue can be traced to the system running low on memory and killing an important process in order to remain operational.

The Linux kernel allocates memory upon the demand of the applications running on the system. Because many applications allocate their memory up front and often don't utilize the memory allocated, the kernel was designed with the ability to over-commit memory to make memory usage more efficient. This over-commit model allows the kernel to allocate more memory than it actually has physically available. If a process actually utilizes the memory it was allocated, the kernel then provides these resources to the application. When too many applications start utilizing the memory they were allocated, the over-commit model sometimes becomes problematic and the kernel must start killing processes in order to stay operational. The mechanism the kernel uses to recover memory on the system is referred to as the out-of-memory killer, or OOM killer for short.

When troubleshooting an issue where an application has been killed by the OOM killer, there are several clues that might shed light on how and why the process was killed. In the following example, we are going to take a look at our syslog to see whether we can locate the source of our problem. A search along these lines should turn up a message similar to the one shown:

~]# grep -i kill /var/log/messages*
host kernel: Out of Memory: Killed process 2592 (oracle).

The oracle process was killed by the OOM killer because of an out-of-memory condition. The capital K in Killed indicates that the process was killed with a -9 signal, and this is usually a good sign that the OOM killer might be the culprit.

We can also examine the state of low and high memory with the free command (for example, free -lm, which reports both in megabytes). The same data can be obtained by examining /proc/meminfo and looking specifically at the high and low values:

~]# egrep 'High|Low' /proc/meminfo

However, with this method, we don't get swap information in the output, and the output is in kilobytes.

Low memory is memory to which the kernel has direct physical access. High memory is memory to which the kernel does not have a direct physical address and, thus, it must be mapped via a virtual address. On older 32-bit systems, you will see both low memory and high memory due to the way that memory is mapped to a virtual address. On 64-bit platforms, virtual address space is not needed, and all system memory will be shown as low memory.

While looking at /proc/meminfo and using the free command are useful for knowing "right now" what our memory usage is, there are occasions when we want to look at memory usage over a longer duration. The vmstat command is quite useful for this. In the example in Listing 1, we use the vmstat command to look at our resources every 45 seconds, 10 times. The -S switch selects the display unit, and the M argument shows the output in megabytes to make it easier to read:

~]# vmstat -SM 45 10

Listing 1 (sampled output not reproduced). As you can see from the output, something is consuming our free memory, but we are not yet swapping in this example.
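One caveat on the syslog search shown earlier: if the log files have already rotated, the evidence may no longer be in /var/log/messages. The kernel ring buffer is an alternative place to look. This dmesg invocation is an addition to the original example, not part of it:

~]# dmesg | grep -i "out of memory"

Any matching lines here come straight from the kernel, so they carry the same Killed process messages discussed above.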
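When we need a record that outlives a single vmstat run, a small shell loop can log snapshots over hours or days. The following is a minimal sketch, not one of the original article's listings; the 45-second interval mirrors the vmstat example above, and the log path is an arbitrary choice:

#!/bin/bash
# Minimal sketch: append a timestamped low/high memory snapshot at a fixed
# interval. LOG and INTERVAL are arbitrary values chosen for this example.
LOG=/tmp/memwatch.log
INTERVAL=45

while true; do
    date >> "$LOG"
    free -lm >> "$LOG"    # same low/high breakdown used earlier, in megabytes
    echo "---" >> "$LOG"
    sleep "$INTERVAL"
done

Started in the background before the problem window, this gives the longer-duration view that a one-off free or cat of /proc/meminfo cannot.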
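Turning from diagnosis to configuration: the OOM killer chooses its victim by score, and each process exposes a writable knob at /proc/<pid>/oom_score_adj that ranges from -1000 (never kill) to 1000 (kill first). Here is a hedged sketch of protecting a critical process, reusing the oracle process and PID 2592 from the illustrative log message above; both are stand-ins for whatever matters in your environment:

# Find the PID of the process to protect.
~]# pidof -s oracle
2592
# Exempt it from the OOM killer entirely.
~]# echo -1000 > /proc/2592/oom_score_adj
# Confirm the setting took effect.
~]# cat /proc/2592/oom_score_adj
-1000

A value between the extremes is often a better choice than -1000: a strongly negative value makes the process an unlikely victim while still leaving the kernel a way out if nothing else can be reclaimed. Note that the setting applies per process and is lost when the process restarts.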
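Because the OOM killer exists as a backstop for the over-commit model described earlier, the over-commit policy itself is the other natural configuration point. It is controlled by the standard vm.overcommit_memory sysctl: 0 is the default heuristic over-commit, 1 over-commits without checks, and 2 enforces strict accounting limited by vm.overcommit_ratio. A brief sketch:

# Inspect the current policy.
~]# sysctl vm.overcommit_memory
vm.overcommit_memory = 0
# Switch to strict accounting: commit at most swap plus 80% of RAM.
~]# sysctl -w vm.overcommit_memory=2
~]# sysctl -w vm.overcommit_ratio=80

Add the same settings to /etc/sysctl.conf to persist them across reboots. Under strict accounting, an allocation that would exceed the commit limit fails immediately with ENOMEM instead of succeeding now and being reclaimed by the OOM killer later. Whether that trade-off suits a given environment depends on how its applications handle failed allocations, so it is worth testing before rolling it out to a production system.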