With geant4.8.1p01, a memory leak occurs when I use QGSP in examples/novice/N02; it does not occur with LHEP or with the original N02 physics list. I checked the memory usage with "top" and found that it keeps increasing when QGSP is used. I'm sorry I couldn't find out where the problem arises. Could you please fix it?

My environment is as follows:

Scientific Linux SL release 4.3 (Beryllium)
Linux 2.6.9-34.0.2.ELhugemem #1 SMP
gcc version 3.4.5
CLHEP 1.9.2.2

I modified only GNUmakefile, exampleN02.cc, and vis.mac, as follows.

1. GNUmakefile

I added the following lines:

############################################
EXTRALIBS += -L$(G4LIB)/plists/$(G4SYSTEM)
G4LISTS_BASE = $(G4INSTALL)/physics_lists
EXTRALIBS += -lPackaging
EXTRALIBS += -lQGSP
EXTRALIBS += -lLHEP
#############################################

2. exampleN02.cc

I added the following two #include lines and replaced ExN02PhysicsList with QGSP (or LHEP). Finally, I commented out all the lines inside the "#ifdef G4VIS_USE" blocks:

###############################################################
#include "QGSP.hh"
#include "LHEP.hh"

//runManager->SetUserInitialization(new ExN02PhysicsList);
runManager->SetUserInitialization(new QGSP);
//runManager->SetUserInitialization(new LHEP);

//#ifdef G4VIS_USE
//#include "G4VisExecutive.hh"
//#endif

//#ifdef G4VIS_USE
//  // Visualization, if you choose to have it!
//  G4VisManager* visManager = new G4VisExecutive;
//  visManager->Initialize();
//#endif

//#ifdef G4VIS_USE
//  delete visManager;
//#endif
#################################################################

3. vis.mac

Only the following line remains:

##################################################################
/run/beamOn 1000000
##################################################################
The problem occurs with LHEP as well; I'm sorry for the wrong information. The memory usage as a function of the number of events processed is shown at http://www-he.scphys.kyoto-u.ac.jp/member/nanjo/mem.gif Is this expected behavior?
There is no memory leak in exampleN02, as far as we can verify with any memory-leak tool or with Valgrind on Linux. The increase in memory you notice is due first to initialisation and then to the progressive growth of the allocator's buffers for tracks, trajectories, and other information associated with the events. The memory usage reaches a plateau at some point, as your plot also confirms; occasionally it may grow further when an event is bigger in size. The memory consumed is NOT returned to the free store until a run is concluded, and no leaks related to the event loop are observed at the end of the job.