Over the years I have looked at a lot of method server logs to try to determine the root cause of why the method server stopped. The number one cause, by far, is a single long-running operation overwhelming the available method server memory. What that operation is varies, but in general it is an operation that needs to 'touch' a lot of data: a search, a bill of materials operation, a customized operation, a standard operation against a huge data set, something stuck in a loop; you name it, I've probably seen it. The way the memory becomes overwhelmed also varies, and these problems, which are often bugs, fall into two types. In the first, a series of SQL statements each returns a 'bunch of data' that cumulatively overwhelms the memory. In the second, a single SQL statement returns hundreds of thousands of results to the method server, which overwhelms the memory on its own. QueryLimits are designed to prevent the second type, but the first type is tougher to deal with because traditional tools generally don't handle the method server stopping. When a series of statements cumulatively consumes the memory, we often enable SQL logging in $WT_HOME\codebase\WEB-INF\log4jMethodServer.properties with settings like:
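As a rough sketch, the relevant entry raises the logging level on Windchill's persistence package (`wt.pom`) in the log4j properties file; the exact logger names and levels can vary by release, so treat this as illustrative and confirm against PTC's documentation for your version:

```properties
# Illustrative only: logger names/levels may differ by Windchill release.
# Raising the persistence-layer (wt.pom) logger captures the SQL statements
# issued by the method server in the MethodServer log.
log4j.logger.wt.pom=ALL
```

Because every SQL statement is written to the log, this makes it possible to see which sequence of queries was running, and on whose behalf, in the period leading up to the memory exhaustion.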
These settings will generate a very large MethodServer log file, so use them with caution.
Further details on why method servers crash can be found in the knowledge base article found here (authentication required).
The Garbage Collection Baiting (GCBaiter) setting might help in some of these cases: it tries to terminate the largest transaction before an OutOfMemoryError occurs. It's disabled by default out of the box, but it is enabled by default when the Windchill Configuration Assistant is run unassisted. Customers who are interested should check the documentation for more information.