The message size directly influences the performance of an interface processed in SAP PI. The size of a PI message depends on two elements: the PI header and the actual payload. The payload can vary greatly between interfaces or over time, for example with larger messages during year-end closing. For small messages of only a few kB, the PI message header can cause a major overhead and decrease the overall throughput of the interface; the larger the message payload, the smaller the relative overhead caused by the PI message header. On the other hand, large messages require a lot of memory, and heavy memory usage on the ABAP stack or excessive garbage collection activity on the Java stack will also reduce the overall system performance.
Large Message Queue on PI ABAP:
For interfaces using the ABAP Integration Server, the large message queue filters can be used to restrict the parallelization of mapping calls from the ABAP queues. To do so, set the parameter EO_MSG_SIZE_LIMIT of category TUNING to e.g. 5,000 (KB) so that all messages larger than 5 MB are directed to dedicated XBTL* or XBTM* queues. The value of the parameter depends on the number of large messages and the acceptable delay that might be caused by a backlog in the large message queue. To reduce the backlog, the number of large message queues can also be configured via the parameter EO_MSG_SIZE_LIMIT_PARALLEL of category TUNING. The default value is 1, so that all messages larger than the defined threshold are processed in one single queue. Naturally, the parallelization should not be set higher than 2 or 3 to avoid overloading the Java memory with parallel large message requests.
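For illustration only, the corresponding entries could look as follows in the Integration Engine configuration (typically maintained via transaction SXMB_ADM, Specific Configuration). The values are examples and have to be adapted to the actual message sizes and load:

Category   Parameter                    Current Value
TUNING     EO_MSG_SIZE_LIMIT            5000   (threshold in KB, i.e. 5 MB)
TUNING     EO_MSG_SIZE_LIMIT_PARALLEL   2      (number of large message queues)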
Large message queues on PI Adapter Engine:
The Java-based Adapter Engine handles large messages in the Messaging System. Contrary to the Integration Engine, it is not only the size of a single large message that determines the parallelization. Instead, the sum of the sizes of the large messages across all adapters on a given Java server node is limited to avoid overloading the Java heap. This is based on so-called permits, which define a threshold for the message size. Each message larger than the permit threshold is considered a large message. The number of permits can be configured as well to determine the degree of parallelization. By default the permit size is 10 MB and 10 permits are available, which means that large messages are processed in parallel as long as 100 MB is not exceeded.
To discuss this in more detail, consider an example using the default values. Let us assume we have six messages waiting to be processed (status "To Be Delivered") on one server node: message A has 5 MB, message B 10 MB, message C 50 MB, message D 150 MB, message E 50 MB and message F 40 MB. Message A is not considered large, since its size is smaller than the permit size, and can be processed immediately. Message B requires 1 permit and message C requires 5; since enough permits are available, processing starts (status DLNG). Message D would require more than the 10 available permits. Since the permits are currently not available it cannot be scheduled; if blacklisting is enabled the message is put to error status (NDLV) because it exceeds the maximum number of defined permits, and it would have to be restarted manually. Message E requires 5 permits and can also not be scheduled yet. But since 4 permits are still left, message F (requiring 4) is put to DLNG. Due to their smaller size, message B and message F finish first, releasing 5 permits. This is sufficient to schedule message E, which requires 5 permits. Only after messages E and C have finished (and assuming message D was not blacklisted, or has been restarted) can message D be scheduled, consuming all available permits.
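The first scheduling pass of this example can be replayed with a few lines of Java. This is a minimal sketch, not the actual Messaging System implementation: the permit formula (integer division of the message size by the permit size) and all class and variable names are assumptions made for this illustration.

import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of permit-based scheduling with the default permit size
// (10 MB) and default number of permits (10). Illustration only.
public class PermitSketch {

    static final int PERMIT_SIZE_MB = 10;  // default permit size
    static final int MAX_PERMITS    = 10;  // default number of permits

    // Assumed formula for this sketch: one permit per full permit-size chunk,
    // so a 5 MB message needs no permit and a 50 MB message needs 5.
    static int permitsRequired(int sizeMb) {
        return sizeMb / PERMIT_SIZE_MB;
    }

    public static void main(String[] args) {
        Map<String, Integer> sizesMb = new LinkedHashMap<>();
        sizesMb.put("A", 5);
        sizesMb.put("B", 10);
        sizesMb.put("C", 50);
        sizesMb.put("D", 150);
        sizesMb.put("E", 50);
        sizesMb.put("F", 40);

        int available = MAX_PERMITS;
        for (Map.Entry<String, Integer> m : sizesMb.entrySet()) {
            int needed = permitsRequired(m.getValue());
            String status;
            if (needed == 0) {
                status = "not large, processed immediately";
            } else if (needed > MAX_PERMITS) {
                status = "needs " + needed + " permits -> NDLV if blacklisting is enabled";
            } else if (needed <= available) {
                available -= needed;
                status = "DLNG, consuming " + needed + " permit(s)";
            } else {
                status = "waiting (needs " + needed + ", only " + available + " free)";
            }
            System.out.println("Message " + m.getKey() + " (" + m.getValue() + " MB): " + status);
        }
    }
}

Running the sketch prints the initial status of each message (A processed immediately, B, C and F in DLNG, D rejected if blacklisting is enabled, E waiting), matching the sequence described above.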
The example above shows the potential delay a large message can face while waiting for permits. The assumption, however, is that large messages are not time critical, so an additional delay is less critical than a potential overload of the system. The large message queue handling is based on the Messaging System queues. This means that restricting the parallelization is only possible after the initial persistence of the message in the Messaging System queues. Per default this is only done after the Receiver Determination. Therefore, if you have a very high parallel load of incoming large requests, this feature will not help. Instead, you would have to restrict the size of incoming requests on the sender channel (e.g. the file size limit in the file adapter or the icm/HTTP/max_request_size_KB limit in the ICM for incoming HTTP requests). If you have a very complex extended receiver determination or complex content-based routing, it might be useful to configure staging in the first processing step of the Messaging System (BI=3) as described in Logging / Staging on the AAE (PI 7.3 and higher).
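As an illustration of the ICM limit mentioned above: icm/HTTP/max_request_size_KB is an instance profile parameter whose value is given in KB, so an entry like the following would reject incoming HTTP requests larger than roughly 50 MB. The value 51200 is an example only and has to be chosen according to the expected payloads:

icm/HTTP/max_request_size_KB = 51200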
The number of permits consumed can be monitored in PIMON --> Monitoring --> Adapter Engine Status. The number of threads shown there corresponds to the number of consumed permits.
General Hardware Bottleneck:
During all tuning actions discussed, please keep in mind that the limit of all activities is set by the underlying CPU and memory capacity. The physical server and its hardware have to provide resources for three PI runtimes: the Adapter Engine, the Integration Engine, and the Business Process Engine. Tuning one of the engines for high throughput leaves fewer resources for the remaining engines. Thus, the hardware capacity has to be monitored closely.
Note: For more information, please refer to SAP Note 1727870 – Handling of large messages in the Messaging system and SAP Note 894509 – PI Performance Check.