You could easily instruct the ORM to add a comment in front of the SQL query so the database can profile a complete HTTP request and show you all the bad things your ORM is doing.
You don't need to go down this path; modern(-ish) APM systems like Datadog will let you trace SQL commands in the context of the request being made. You can group by endpoint, group by SQL statement (merging queries that differ only in their IDs), or slice and filter the data however you need. The DB is not in a position to give you all the stats you need; instead the DB should support e.g. exporting plan info to further enrich the whole-request metrics that are already being collected.
I've used Datadog, but if you need per-query tracing from the database logs, I think the only way is to embed a trace ID in a SQL comment (or in a no-op column) like the grandparent showed.
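For concreteness, the technique amounts to prepending the current trace ID to each statement before it's sent to the database. A minimal sketch in Python (the `with_trace_comment` helper and comment format are hypothetical, not any particular library's API; tools like sqlcommenter do a production-grade version of this):

```python
# Hypothetical helper: prefix a query with a trace-ID comment so the
# statement can be correlated with the application-side trace when it
# shows up in the database's slow-query or error logs.

def with_trace_comment(sql: str, trace_id: str) -> str:
    """Return the query with a leading /* trace_id=... */ comment."""
    return f"/* trace_id={trace_id} */ {sql}"

# Example: the comment rides along with the statement into the DB logs.
query = with_trace_comment("SELECT * FROM users WHERE id = %s", "9a7cd2a6")
# query == "/* trace_id=9a7cd2a6 */ SELECT * FROM users WHERE id = %s"
```

In practice you'd hook this into the driver or ORM (e.g. a SQLAlchemy `before_cursor_execute` listener) rather than calling it by hand, so every query gets tagged automatically.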
Datadog is tracing the request from the application side.
The gap with that style of application-level tracing is that the database logs give no indication of where a query came from, hence the need for embedding a comment in the query with the trace ID.
I would love a better mechanism than SQL comments for distributed tracing all the way to the database. Some things it would help with:
- Explain plans for slow queries, via auto_explain in Postgres. You could get really fancy here and convert the nodes of an explain plan (Aggregate, ModifyTable, Seq Scan) into proper distributed trace spans. Hard to tell, but it looks like GCP might offer that.
- Various errors logs associated with queries that require more detail than a SQL state code. For example, lock timeouts.
- Debugging, especially root cause analysis when you're trying to figure out why something broke. The trace IDs help build up context.
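For the debugging case, the correlation step is just pulling the embedded trace ID back out of the database log line so it can be joined with the application-side trace. A rough sketch (the Postgres log line shown is an assumed format, and the `/* trace_id=... */` comment style is the hypothetical one from above):

```python
import re

# Match a /* trace_id=... */ comment embedded in a logged statement.
TRACE_RE = re.compile(r"/\*\s*trace_id=([0-9a-fA-F]+)\s*\*/")

def trace_id_from_log_line(line: str):
    """Return the embedded trace ID from a DB log line, or None."""
    m = TRACE_RE.search(line)
    return m.group(1) if m else None

# Example: a slow-query entry from auto_explain / log_min_duration_statement.
line = 'LOG:  duration: 5042.1 ms  statement: /* trace_id=9a7cd2a6 */ SELECT * FROM users'
trace_id_from_log_line(line)  # "9a7cd2a6"
```

With that in hand, a log pipeline can attach the DB-side explain plan or lock-timeout detail to the same trace the APM already collected for the request.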
Something like: /* dbxperience:request=9a7cd2a6 */ SELECT ....
I have not tested it, but this is something Google Cloud SQL recently promoted: https://cloud.google.com/blog/products/databases/get-ahead-o...