
You could easily instruct the ORM to add a comment in front of the SQL query so the database can profile a complete HTTP request and show you all the bad things your ORM is doing.

Something like: /* dbxperience:request=9a7cd2a6 */ SELECT ....

I have not tested it, but this is something google cloud sql recently promoted: https://cloud.google.com/blog/products/databases/get-ahead-o...
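A minimal sketch of that tagging idea in Python (the `tag_query` helper and the `dbxperience` key are hypothetical, following the example above):

```python
import uuid

def tag_query(sql: str, request_id: str) -> str:
    # Prefix the statement with a request-scoped comment so the ID
    # survives into the database's query logs and stats views.
    return f"/* dbxperience:request={request_id} */ {sql}"

# One ID per HTTP request; every query issued while handling it gets tagged.
request_id = uuid.uuid4().hex[:8]
tagged = tag_query("SELECT * FROM users WHERE id = 1", request_id)
# e.g. "/* dbxperience:request=9a7cd2a6 */ SELECT * FROM users WHERE id = 1"
```

In practice you would hook this into the ORM's statement-execution layer rather than call it by hand, so every generated query is tagged automatically.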



You don’t need to go down this path; modern(-ish) APM systems like Datadog will let you trace SQL commands in the context of the request being made. You can group by endpoint, or group by SQL (merging IDs that differ), or chop and filter the data however you need. The DB is not in a position to give you all the stats you need; instead the DB should support e.g. exporting plan info to further enrich the whole-request metrics that are already being collected.

I use this in prod; it’s great. Expensive though.

https://www.datadoghq.com/blog/mysql-monitoring-with-datadog...


I've used Datadog but if you need per-query tracing from the database logs, I think the only way is to embed a trace ID in a SQL comment (or in a no-op column) like the grandparent showed.

Datadog is tracing the request from the application akin to something like

    openTrace("Query Name", queryStr)
    db.Query(queryStr)
    closeTrace()

The gap with that style of application-level tracing is that the database logs give no indication of where a query came from, hence the need for embedding a comment in the query with the trace ID.

I would love for a better mechanism than SQL comments for distributed tracing all the way to the database.
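To make the gap concrete, here's a sketch of comment-based propagation using SQLite as a stand-in database (the `traceparent` key is an assumption; real deployments would inject a W3C trace-context value, which is roughly what tools like sqlcommenter do):

```python
import sqlite3
import uuid

def traced_query(conn, sql, trace_id):
    # The comment is a no-op for the database, but it shows up in query
    # logs, letting you join DB-side events back to the application trace.
    return conn.execute(f"/* traceparent={trace_id} */ {sql}").fetchall()

conn = sqlite3.connect(":memory:")
trace_id = uuid.uuid4().hex
rows = traced_query(conn, "SELECT 1 + 1", trace_id)
# rows == [(2,)]
```

The downside the parent comments allude to: every distinct trace ID makes the query text unique, which can defeat statement caching and query-fingerprint grouping unless the tooling strips comments first.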


I guess what I'm getting at is questioning why you need to look at the database logs (beyond what your APM system can pull from the DB's metrics).

What are you getting from the DB logs that you can't get elsewhere?


Sure, off the top of my head:

- Explain plans for slow queries, via auto_explain in Postgres. You could get really fancy here and convert the nodes of an explain plan (Aggregate, ModifyTable, Seq Scan) into proper distributed traces. Hard to tell, but it looks like GCP might offer that.

- Various errors logs associated with queries that require more detail than a SQL state code. For example, lock timeouts.

- Debugging, especially root cause analysis when you're trying to figure out why something broke. The trace IDs help build up context.


"Easily" — not sure that is always easy to do. It is a clever approach, to be sure, but to the author's point there could probably be better UX.



