Kebin23
Participant

High memory consumption SMS R81.20

Hello Check Point community, today we are experiencing high memory consumption on our SMS R81.20 with HF Take 24.
CPU usage was at 3%, but memory was at 91% and in the alert range.

I checked swap usage and observed that 3 GB was in use and that it kept growing over time. Free memory was also at 0, which I consider expected behavior: once there is no free memory left, the system has to swap.
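For reference, the swap and free-memory figures above can be read from expert mode with commands along these lines (a minimal sketch; exact column names depend on the procps build shipped with Gaia):

# free -mt                                                   (RAM and swap totals, in MiB)
# swapon -s                                                  (per-device swap usage)
# grep -E 'MemFree|MemAvailable|SwapFree' /proc/meminfo      (raw kernel counters)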

[screenshot: MEMORY_HIGH.png]

What I could not determine was what was consuming most of the memory. The top command only showed Java at around 22% of memory at certain times, nothing that looked abnormal.

Because swap usage kept increasing over time, I decided to restart the processes (cpstop; cpstart). After that, swap usage dropped to 0, free memory rose to 2 GB and available memory to 9 GB. Restarting the processes resolved the problem, and at the graph level the SMS is no longer in the alert range.

[screenshot: MEMORY_NORMAL.png]

[screenshot: SMS_NORMAL.png]

What could have caused my free memory to fall to 0 and, as a consequence, my swap usage to grow continuously?

Regards

12 Replies
PhoneBoy
Admin

Sounds like a memory leak somewhere.
Have you opened a TAC case? https://help.checkpoint.com 

the_rock
Legend

Can you please send the output of the ps auxw command?
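For example, something along these lines sorts the listing so the largest resident-set (RSS) consumers come first; head is only there to trim the output:

# ps auxw --sort=-rss | head -25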

Andy

Timothy_Hall
Champion

Also run top and hit SHIFT-M to sort by memory consumption, and post a screenshot of that.
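If an interactive screenshot is awkward to capture, a batch-mode equivalent along these lines should give the same memory-sorted view as text, assuming the top build on Gaia supports the -o sort option:

# top -b -n 1 -o %MEM | head -25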

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
Igor_Demchenko
Participant

Hi, checkmates!

Timothy, I have a similar problem (R81.20 Jumbo Hotfix Take 26).

Here are the screenshots
[screenshots: top.JPG, free.JPG]

Srdjan_B
Collaborator

We have a customer with a similar issue. After upgrading to R81.20, CPD/CPM was intermittently crashing during publish. TAC recommended increasing the heap size to 4096M, which solved that issue. The customer then noticed higher memory utilization, so they increased memory from 16 GB to 24 GB (the SMS runs in a VMware environment), thinking the server needed more memory because of the heap-size increase, but it does not seem to help. Based on the attached graph, a further increase is not likely to solve the issue. JHF T41 did not help either (the attached graph starts with the reboot after installing JHF T41).
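One quick sanity check (a generic JVM check, not a Check Point-specific tool) is to read the -Xmx flag directly off the running CPM Java process, to confirm the 4096M heap recommended by TAC is actually in effect:

# ps auxww | grep '[D]_CPM=TRUE' | grep -o '\-Xmx[^ ]*'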

We will open a TAC case, but is there anything else worth checking before we do?

Thank you

 

# cpstat -f memory os

Total Virtual Memory (Bytes):  58795761664
Active Virtual Memory (Bytes): 27898380288
Total Real Memory (Bytes):     25040007168
Active Real Memory (Bytes):    22943072256
Free Real Memory (Bytes):      2096934912
Memory Swaps/Sec:              -
Memory To Disk Transfers/Sec:  -

# ps auxw --sort -rss 
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
admin    10567 13.9 53.1 18265700 13005324 ?   SLsl  2023 5392:34 /opt/CPshrd-R81.20/jre_64/bin/java -D_CPM=TRUE -Xaot:forceaot -Xmx
cp_post+ 25786  0.5  6.0 1602188 1480268 ?     Ss    2023 134:43 postgres: postgres cpm 127.0.0.1(40008) idle
cp_post+  8782  0.3  6.0 1590248 1469480 ?     Ss    2023 126:09 postgres: postgres cpm 127.0.0.1(60316) idle
cp_post+  8783  0.4  5.9 1597800 1466828 ?     Ss    2023 135:54 postgres: postgres cpm 127.0.0.1(60318) idle
cp_post+ 10260  0.2  5.9 1584892 1465284 ?     Ss    2023  57:09 postgres: postgres cpm 127.0.0.1(49440) idle
cp_post+ 19845  0.3  5.9 1586396 1455176 ?     Ss    2023 117:05 postgres: postgres cpm 127.0.0.1(58862) idle
cp_post+ 28041  0.3  5.9 1583676 1454740 ?     Ss    2023 113:08 postgres: postgres cpm 127.0.0.1(47330) idle
cp_post+ 19573  0.2  5.9 1605012 1453936 ?     Ss    2023  80:58 postgres: postgres cpm 127.0.0.1(53042) idle
cp_post+  8781  0.2  5.9 1583884 1446388 ?     Ss    2023  92:15 postgres: postgres cpm 127.0.0.1(60314) idle
cp_post+ 20109  0.3  5.9 1598596 1446300 ?     Ss    2023 110:54 postgres: postgres cpm 127.0.0.1(58896) idle
cp_post+  7764  0.4  5.8 1565576 1436892 ?     Ss    2023  74:23 postgres: postgres cpm 127.0.0.1(32912) idle
cp_post+ 19575  0.2  5.7 1561816 1417860 ?     Ss    2023  74:41 postgres: postgres cpm 127.0.0.1(53044) idle
cp_post+  9186  0.3  5.7 1553268 1407412 ?     Ss    2023 105:27 postgres: postgres cpm 127.0.0.1(48460) idle
cp_post+  7070  0.5  5.7 1573360 1394376 ?     Ss   Jan03  58:20 postgres: postgres cpm 127.0.0.1(42518) idle
cp_post+ 31184  0.0  5.6 1541940 1379936 ?     Ss   Jan03   8:01 postgres: postgres cpm 127.0.0.1(52130) idle
cp_post+  2079  0.3  5.5 1524432 1367784 ?     Ss   Jan03  34:46 postgres: postgres cpm 127.0.0.1(33558) idle
admin    10571  5.0  5.3 14335324 1297716 ?    SNLsl 2023 1951:06 /opt/CPshrd-R81.20/jre_64/bin/java -D_solr=TRUE -Xdump:directory=/
cp_post+ 26415  0.2  5.2 1549284 1281552 ?     Ss   Jan04  18:33 postgres: postgres cpm 127.0.0.1(41962) idle
cp_post+ 19730  0.0  5.1 1409488 1253848 ?     Ss    2023   2:53 postgres: checkpointer   
cp_post+ 16497  0.4  4.8 1548548 1193536 ?     Ss   Jan05  29:33 postgres: postgres cpm 127.0.0.1(42110) idle
cp_post+ 11266  0.7  4.6 1543012 1128160 ?     Ss   Jan09  10:10 postgres: postgres cpm 127.0.0.1(47380) idle
cp_post+  5812  0.4  4.2 1517948 1042840 ?     Ds   Jan09   6:07 postgres: postgres cpm 127.0.0.1(48328) SELECT
cp_post+ 19745  0.0  2.8 1414488 689864 ?      Ss    2023  10:49 postgres: postgres monitoring 127.0.0.1(58852) idle
cp_post+ 19744  0.0  2.6 1414360 653396 ?      Ss    2023  11:07 postgres: postgres monitoring 127.0.0.1(58850) idle
admin    10636  0.1  2.5 4780848 633972 ?      SNLsl 2023  64:46 /opt/CPshrd-R81.20/jre_64/bin/java -D_RFL=TRUE -Xdump:directory=/va
admin    10741  0.2  1.9 4778624 471468 ?      SLsl  2023  93:23 /opt/CPshrd-R81.20/jre_64/bin/java -D_smartview=TRUE -Xdump:directo
admin     9688  0.4  1.3 3688688 325560 ?      SLl   2023 171:47 /opt/CPshrd-R81.20/jre_64/bin/java -D_vSEC=TRUE -Xdump:directory=/v
admin     9430  0.8  1.3 919124 322920 ?       Ssl   2023 345:19 fwm
admin     9427  0.3  0.9 639284 220084 ?       Dsl   2023 133:08 fwd -n
admin    10973  1.2  0.7 551604 172788 ?       SNsl  2023 494:06 /opt/CPrt-R81.20/log_indexer/log_indexer -workingDir /opt/CPrt-R81.

 

 

Valerio5286
Explorer

Hello Check Point community, I have the same problem on R81.20. Fifteen days ago I ran cpstop to stop the services, which solved the problem for the moment, but after two weeks the problem appeared again; the memory increase was gradual. I am now going to perform the same procedure because memory is at 97%.

[screenshot: Valerio5286_0-1714070849648.png]

I ran the following commands before and after applying cpstop:

Before:

[screenshots: top before.png, vmstat -s before.png, free -mt before.png]

 

After:

[screenshots: Performance.png, Top after.png, Free after.png, vmstat after.png]
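For easier comparison than screenshots, the same before/after data can also be captured as plain text, e.g. with something like this (the file paths are only examples):

# free -mt  >  /var/log/mem_before.txt
# vmstat -s >> /var/log/mem_before.txt
# top -b -n 1 | head -30 >> /var/log/mem_before.txt

Repeating the same three commands into a mem_after.txt once cpstop; cpstart has completed makes the two states easy to diff or attach.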

 

cpstop solves the problem temporarily. I want to find the root cause of the problem; do you have any ideas?
the_rock
Legend

You need more physical RAM, for sure. I worked with a client who had this problem constantly; the case went to TAC escalation and they suggested upgrading memory to 64 GB, because no matter what debugs and steps were run, the issue never went away. Once the RAM was upgraded, the situation got much better.

Andy

Valerio5286
Explorer

Thanks for your comment.
What happens is that when cpstop is applied, memory usage drops to approximately 36%. This makes me think the server has enough memory and that the issue could be a memory leak, so I have opened a case with TAC to try to identify the root cause of the problem.
If I receive relevant information I will share it with the community.
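To document the gradual growth for the TAC case, a simple background loop along these lines can log overall memory usage over time (a rough sketch; the log path and the 5-minute interval are arbitrary choices):

# nohup sh -c 'while true; do date; free -m | grep -E "^(Mem|Swap)"; sleep 300; done' >> /var/log/mem_trend.log 2>&1 &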

the_rock
Legend

Doing cpstop or rebooting is not even a good workaround in this case; it simply "relieves" the load temporarily, and the issue will come back in no time.

Andy

Lesley
Advisor

I would recommend a reboot rather than cpstop;cpstart if you want to clear memory.

Second, check whether memory-related issues have been fixed in a Jumbo Hotfix take later than the one you have installed.

https://sc1.checkpoint.com/documents/Jumbo_HFA/R81.20/R81.20/Take_54.htm?tocpath=_____6
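To confirm which take is actually installed before comparing against that list, something along these lines should work; cpinfo -y all lists the installed hotfixes per product (the exact output format varies between versions):

# cpinfo -y all | grep -i jumbo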

-------
If you like this post please give a thumbs up(kudo)! 🙂
PhoneBoy
Admin

Absent other symptoms (especially swap usage), this is normal/expected behavior.
The value you need to look at is NOT "free" (memory that has never been allocated to anything) but "buff/cache": memory that is currently in use but can easily be reclaimed for other purposes.
Which means you're at about 90% memory utilization, which isn't unusual for a management server.
That plus the fact that Swap is barely being used suggests your system is not exceeding physical memory usage.
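A quick way to see this on the box itself: the "available" column of free already accounts for reclaimable buff/cache, so it is the number to watch rather than "free". A small sketch, assuming the newer free output that includes an available column:

# free -m
# free -m | awk '/^Mem:/ {printf "actually available: %d of %d MiB (%.0f%%)\n", $7, $2, $7*100/$2}'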

Having said that, it never hurts to have more memory on your management server. 

PhoneBoy
Admin

To see if you have a memory leak: https://support.checkpoint.com/results/sk/sk35496
If so, engage with TAC.
However, I 100% agree with the suggestion for more RAM.
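As generic supporting evidence (independent of the SK above), the resident set size of a suspected process can be sampled over time; a hypothetical sketch using the CPM Java process, identified by its -D_CPM=TRUE argument, as the target:

# CPM_PID=$(pgrep -of 'D_CPM=TRUE')
# while true; do echo "$(date) rss=$(ps -o rss= -p $CPM_PID) KiB"; sleep 600; done >> /var/log/cpm_rss.log &

If the logged RSS climbs steadily without ever leveling off, that is strong evidence of a leak to hand to TAC.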
