On 19.12.2019 20:52, Robert Haas wrote:
> On Thu, Dec 19, 2019 at 10:59 AM Tom Lane <[email protected]> wrote:
>> Bruce Momjian <[email protected]> writes:
>>> Good question.  I am in favor of allowing a larger value if no one
>>> objects.  I don't think adding the min/max is helpful.
>>
>> The original poster.
And probably anyone else who debugs stuck queries from yet another crazy 
ORM. Yes, one could use log_min_duration_statement, but being able to 
get the full query text directly from pg_stat_activity, without 
eyeballing the logs, is nice. Also, IIRC, log_min_duration_statement 
applies only to completed statements.
> I think there are pretty obvious performance and memory-consumption
> penalties to very large track_activity_query_size values.  Who exactly
> are we really helping if we let them set it to huge values?
>
> (wanders away wondering if we have suitable integer-overflow checks
> in relevant code paths...)
The value of pgstat_track_activity_query_size is in bytes, so setting it 
to any value below INT_MAX seems to be safe from that perspective. 
However, since it gets multiplied by NumBackendStatSlots, any reasonable 
value should stay far below INT_MAX (~2 GB).
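For a rough sense of the numbers, here is a minimal back-of-the-envelope 
sketch (not the actual shared-memory sizing code; the slot count below is 
a made-up figure standing in for NumBackendStatSlots):

    #include <stdio.h>
    #include <limits.h>

    /* Made-up slot count, standing in for NumBackendStatSlots
     * (roughly max_connections plus auxiliary processes). */
    #define FAKE_NUM_BACKEND_STAT_SLOTS 520

    int
    main(void)
    {
        /* a deliberately oversized 1 MB per backend for the query text */
        long long query_size = 1024 * 1024;
        long long activity_buffer = query_size * FAKE_NUM_BACKEND_STAT_SLOTS;

        printf("activity buffer: %lld bytes (%.0f MB), INT_MAX: %d\n",
               activity_buffer,
               activity_buffer / (1024.0 * 1024.0),
               INT_MAX);
        return 0;
    }

Even with such an oversized per-backend value and hundreds of slots the 
buffer stays in the hundreds of megabytes, so the practical limit is 
memory consumption rather than integer overflow.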
Honestly, it does not look to me like something badly needed, but still: 
we already have hundreds of GUCs and it is easy for a user to build a 
sub-optimal configuration anyway, so does this overprotection make sense?
Regards
-- 
Alexey Kondratov
Postgres Professional https://www.postgrespro.com
Russian Postgres Company