OpenTelemetry tracing

Tracing provides a detailed log of the activity that Prisma Client carries out, at an operation level, including the time taken to execute each query. It helps you analyze your application's performance and identify bottlenecks. Tracing is fully compliant with OpenTelemetry, so you can use it as part of your end-to-end application tracing system.

Tracing gives you a highly detailed, operation-level insight into your Prisma ORM project. If you want aggregated numerical reporting, such as query counts, connection counts, and total query execution times, see Metrics.

About tracing

When you enable tracing, Prisma Client outputs the following:

  • One trace for each operation (e.g. findMany) that Prisma Client makes.
  • In each trace, one or more spans. Each span represents the length of time that one stage of the operation takes, such as serialization or a database query. Spans are represented in a tree structure, where child spans indicate that execution is happening within a larger parent span.

The number and type of spans in a trace depends on the type of operation the trace covers, but an example is as follows:

Example trace

You can send tracing output to the console, or analyze it in any OpenTelemetry-compatible tracing system, such as Jaeger. On this page, we give an example of how to send tracing output to Jaeger, which you can run locally.

Trace output

For each trace, Prisma Client outputs a series of spans. The number and type of these spans depends on the Prisma Client operation. A typical Prisma trace has the following spans:

  • prisma:client:operation: Represents the entire Prisma Client operation, from Prisma Client to the database and back. It contains details such as the model and method called by Prisma Client. Depending on the Prisma operation, it contains one or more of the following spans:
    • prisma:client:connect: Represents how long it takes for Prisma Client to connect to the database.
    • prisma:client:serialize: Represents how long it takes to validate and transform a Prisma Client operation into a query for the query engine.
    • prisma:engine: Represents how long a query takes in the query engine.
      • prisma:engine:connection: Represents how long it takes for Prisma Client to get a database connection.
      • prisma:engine:db_query: Represents the database query that was executed against the database. It includes the query in the tags, and how long the query took to run.
      • prisma:engine:serialize: Represents how long it takes to transform a database query result into a Prisma Client result.

For example, given the following Prisma Client code:

prisma.user.findMany({
  where: {
    email: email,
  },
  include: {
    posts: true,
  },
})

The trace is structured as follows:

  • prisma:client:operation
    • prisma:client:serialize
    • prisma:engine
      • prisma:engine:connection
      • prisma:engine:db_query: details of the first SQL query or command...
      • prisma:engine:db_query: ...details of the next SQL query or command...
      • prisma:engine:serialize

Considerations and prerequisites

If your application sends a large number of spans to a collector, this can have a significant performance impact. For information on how to minimize this impact, see Reduce performance impact below.

To use tracing, you must do the following:

  1. Install the appropriate dependencies.
  2. Enable the tracing feature flag in your Prisma schema file.
  3. Install OpenTelemetry packages.

Get started with tracing in Prisma ORM

This section explains how to install and register tracing in your application.

Step 1: Install up-to-date Prisma ORM dependencies

Use version 4.2.0 or later of the prisma, @prisma/client, and @prisma/instrumentation npm packages.

npm install prisma@latest --save-dev
npm install @prisma/client@latest --save
npm install @prisma/instrumentation@latest --save

Step 2: Enable the feature flag in your Prisma schema file

In the generator block of your schema.prisma file, enable the tracing feature flag:

generator client {
  provider        = "prisma-client-js"
  previewFeatures = ["tracing"]
}

Step 3: Install OpenTelemetry packages

Finally, install the appropriate OpenTelemetry packages, as follows:

npm install @opentelemetry/semantic-conventions @opentelemetry/exporter-trace-otlp-http @opentelemetry/instrumentation @opentelemetry/sdk-trace-base @opentelemetry/sdk-trace-node @opentelemetry/resources

Register tracing in your application

The following code provides a minimal tracing configuration. You need to customize this configuration for your specific application.

setup.ts
// Imports
import { SemanticResourceAttributes } from '@opentelemetry/semantic-conventions'
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http'
import { registerInstrumentations } from '@opentelemetry/instrumentation'
import { SimpleSpanProcessor } from '@opentelemetry/sdk-trace-base'
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node'
import { PrismaInstrumentation } from '@prisma/instrumentation'
import { Resource } from '@opentelemetry/resources'

// Configure the trace provider
const provider = new NodeTracerProvider({
  resource: new Resource({
    [SemanticResourceAttributes.SERVICE_NAME]: 'example application',
  }),
})

// Configure how spans are processed and exported. In this case we're sending spans
// as we receive them to an OTLP-compatible collector (e.g. Jaeger).
provider.addSpanProcessor(new SimpleSpanProcessor(new OTLPTraceExporter()))

// Register your auto-instrumentors
registerInstrumentations({
  tracerProvider: provider,
  instrumentations: [new PrismaInstrumentation()],
})

// Register the provider globally
provider.register()

OpenTelemetry is highly configurable. You can customize the resource attributes, which components get instrumented, how spans are processed, and where spans are sent.

You can find a complete example that includes metrics in .

Tracing how-tos

Visualize traces with Jaeger

Jaeger is a free and open source OpenTelemetry collector and dashboard that you can use to visualize your traces.

The following screenshot shows an example trace visualization:

Jaeger UI

To run Jaeger locally, use the following command:

docker run --rm --name jaeger -d -e COLLECTOR_OTLP_ENABLED=true -p 16686:16686 -p 4318:4318 jaegertracing/all-in-one:latest

You'll now find the tracing dashboard available at http://localhost:16686/. When you use your application with tracing enabled, you'll start to see traces in this dashboard.
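By default, the OTLPTraceExporter from @opentelemetry/exporter-trace-otlp-http sends spans to http://localhost:4318/v1/traces, which matches the OTLP port exposed by the docker command above. If your Jaeger instance runs elsewhere, you can point the exporter at it explicitly. A minimal sketch, assuming the provider object from the setup shown earlier:

import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http'
import { SimpleSpanProcessor } from '@opentelemetry/sdk-trace-base'

// Point the exporter at Jaeger's OTLP HTTP endpoint. The explicit url is only
// needed if Jaeger is not running on the default localhost:4318.
const jaegerExporter = new OTLPTraceExporter({
  url: 'http://localhost:4318/v1/traces',
})

// `provider` is the NodeTracerProvider configured in the earlier setup example.
provider.addSpanProcessor(new SimpleSpanProcessor(jaegerExporter))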

Send tracing output to the console

The following example sends tracing output to the console with ConsoleSpanExporter from @opentelemetry/sdk-trace-base.

// Imports
import { SemanticResourceAttributes } from '@opentelemetry/semantic-conventions'
import { registerInstrumentations } from '@opentelemetry/instrumentation'
import {
  BasicTracerProvider,
  ConsoleSpanExporter,
  SimpleSpanProcessor,
} from '@opentelemetry/sdk-trace-base'
import { AsyncHooksContextManager } from '@opentelemetry/context-async-hooks'
import * as api from '@opentelemetry/api'
import { PrismaInstrumentation } from '@prisma/instrumentation'
import { Resource } from '@opentelemetry/resources'

// Export the tracing setup
export function otelSetup() {
  const contextManager = new AsyncHooksContextManager().enable()
  api.context.setGlobalContextManager(contextManager)

  // Configure the console exporter
  const consoleExporter = new ConsoleSpanExporter()

  // Configure the trace provider
  const provider = new BasicTracerProvider({
    resource: new Resource({
      [SemanticResourceAttributes.SERVICE_NAME]: 'test-tracing-service',
      [SemanticResourceAttributes.SERVICE_VERSION]: '1.0.0',
    }),
  })

  // Configure how spans are processed and exported. In this case we're sending spans
  // as we receive them to the console
  provider.addSpanProcessor(new SimpleSpanProcessor(consoleExporter))

  // Register your auto-instrumentors
  registerInstrumentations({
    tracerProvider: provider,
    instrumentations: [new PrismaInstrumentation()],
  })

  // Register the provider
  provider.register()
}

Trace Prisma Client middleware

By default, tracing does not output spans for Prisma Client middleware. To include your middleware in your traces, set middleware to true in your registerInstrumentations statement, as follows:

registerInstrumentations({
  instrumentations: [new PrismaInstrumentation({ middleware: true })],
})

This will add the following span type to your traces:

  • prisma:client:middleware: Represents how long the operation spent in your middleware.
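
For this span to measure anything, your application needs to register at least one middleware with Prisma Client's $use API. A minimal sketch; the timing log here is purely illustrative:

import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

// With `middleware: true` set above, the time spent in this function is
// reported as a prisma:client:middleware span.
prisma.$use(async (params, next) => {
  const start = Date.now()
  const result = await next(params)
  console.log(`${params.model ?? 'raw'}.${params.action} took ${Date.now() - start}ms`)
  return result
})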

Trace interactive transactions

When you perform an interactive transaction, you'll see the following spans in addition to the standard spans:

  • prisma:client:transaction: A root span that wraps the prisma spans.
    • prisma:engine:itx_runner: Represents how long an interactive transaction takes in the query engine.
    • prisma:engine:itx_query_builder: Represents the time it takes to build an interactive transaction.

As an example, take the following Prisma schema:

schema.prisma
generator client {
  provider        = "prisma-client-js"
  previewFeatures = ["tracing", "interactiveTransactions"]
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

model User {
  id    Int    @id @default(autoincrement())
  email String @unique
}

model Audit {
  id     Int    @id
  table  String
  action String
}

Given the following interactive transaction:

await prisma.$transaction(async (tx) => {
  const user = await tx.user.create({
    data: {
      email: email,
    },
  })
  await tx.audit.create({
    data: {
      table: 'user',
      action: 'create',
      id: user.id,
    },
  })
  return user
})

The trace is structured as follows:

  • prisma:client:transaction
  • prisma:client:connect
  • prisma:engine:itx_runner
    • prisma:engine:connection
    • prisma:engine:db_query
    • prisma:engine:itx_query_builder
      • prisma:engine:db_query
      • prisma:engine:db_query
      • prisma:engine:serialize
    • prisma:engine:itx_query_builder
      • prisma:engine:db_query
      • prisma:engine:db_query
      • prisma:engine:serialize
  • prisma:client:operation
    • prisma:client:serialize
  • prisma:client:operation
    • prisma:client:serialize

Add more instrumentation

A nice benefit of OpenTelemetry is the ability to add more instrumentation with only minimal changes to your application code.

For example, to add HTTP and Express tracing, add the following instrumentations to your OpenTelemetry configuration. These instrumentations add spans for the full request-response lifecycle, showing you how long your HTTP requests take.

// Imports
import { ExpressInstrumentation } from '@opentelemetry/instrumentation-express'
import { HttpInstrumentation } from '@opentelemetry/instrumentation-http'

// Register your auto-instrumentors
registerInstrumentations({
  tracerProvider: provider,
  instrumentations: [
    new HttpInstrumentation(),
    new ExpressInstrumentation(),
    new PrismaInstrumentation(),
  ],
})

For a full list of available instrumentation, take a look at the OpenTelemetry registry.

Customize resource attributes

You can adjust how your application's traces are grouped by changing the resource attributes to be more specific to your application:

const provider = new NodeTracerProvider({
  resource: new Resource({
    [SemanticResourceAttributes.SERVICE_NAME]: 'weblog',
    [SemanticResourceAttributes.SERVICE_VERSION]: '1.0.0',
  }),
})

There is an ongoing effort to standardize common resource attributes. Whenever possible, it's a good idea to follow the standard attribute names defined by this effort.

Reduce performance impact

If your application sends a large number of spans to a collector, this can have a significant performance impact. You can use the following approaches to reduce this impact:

Send traces in batches using the BatchSpanProcessor

In a production environment, you can use OpenTelemetry's BatchSpanProcessor to send the spans to a collector in batches rather than one at a time. However, during development and testing, you might not want to send spans in batches. In this situation, you might prefer to use the SimpleSpanProcessor.

You can configure your tracing configuration to use the appropriate span processor, depending on the environment, as follows:

import {
  SimpleSpanProcessor,
  BatchSpanProcessor,
} from '@opentelemetry/sdk-trace-base'

// otlpTraceExporter is the OTLP exporter configured earlier in your setup
if (process.env.NODE_ENV === 'production') {
  provider.addSpanProcessor(new BatchSpanProcessor(otlpTraceExporter))
} else {
  provider.addSpanProcessor(new SimpleSpanProcessor(otlpTraceExporter))
}
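
If you need to tune how aggressively spans are batched, the BatchSpanProcessor accepts an optional configuration object as its second argument. A sketch; the values below are illustrative rather than recommendations:

provider.addSpanProcessor(
  new BatchSpanProcessor(otlpTraceExporter, {
    maxQueueSize: 2048, // spans buffered in memory before new spans are dropped
    maxExportBatchSize: 512, // spans sent per export request
    scheduledDelayMillis: 5000, // how often buffered spans are flushed
  })
)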

Send fewer spans to the collector with sampling

Another way to reduce the performance impact is to use sampling to send fewer spans to the collector. This reduces the collection cost of tracing but still gives a good representation of what is happening in your application.

An example implementation looks like this:

import { SemanticResourceAttributes } from '@opentelemetry/semantic-conventions'
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node'
import { TraceIdRatioBasedSampler } from '@opentelemetry/core'
import { Resource } from '@opentelemetry/resources'

const provider = new NodeTracerProvider({
  sampler: new TraceIdRatioBasedSampler(0.1),
  resource: new Resource({
    // we can define some metadata about the trace resource
    [SemanticResourceAttributes.SERVICE_NAME]: 'test-tracing-service',
    [SemanticResourceAttributes.SERVICE_VERSION]: '1.0.0',
  }),
})
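
If your service also receives trace context from upstream services, you may want the sampling decision to follow the incoming parent span rather than being re-rolled for each trace. A sketch using ParentBasedSampler from the same package; the 10% ratio is just an example:

import { ParentBasedSampler, TraceIdRatioBasedSampler } from '@opentelemetry/core'
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node'

// Sample 10% of traces that start in this service, but honor the sampling
// decision carried by any incoming parent trace context.
const provider = new NodeTracerProvider({
  sampler: new ParentBasedSampler({
    root: new TraceIdRatioBasedSampler(0.1),
  }),
})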

Troubleshoot tracing

My traces aren't showing up

The order in which you set up tracing matters. In your application, ensure that you register tracing and instrumentation before you import any instrumented dependencies. For example:

import { registerTracing } from './tracing'

registerTracing({
  name: 'tracing-example',
  version: '0.0.1',
})

// You must import any dependencies after you register tracing.
import { PrismaClient } from '@prisma/client'
import async from 'express-async-handler'
import express from 'express'
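
The registerTracing helper above is not part of Prisma ORM or OpenTelemetry; it stands in for a module of your own that wraps the setup shown earlier in a function. A minimal sketch of what such a hypothetical ./tracing module could look like:

// tracing.ts (hypothetical helper module)
import { SemanticResourceAttributes } from '@opentelemetry/semantic-conventions'
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http'
import { registerInstrumentations } from '@opentelemetry/instrumentation'
import { SimpleSpanProcessor } from '@opentelemetry/sdk-trace-base'
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node'
import { PrismaInstrumentation } from '@prisma/instrumentation'
import { Resource } from '@opentelemetry/resources'

export function registerTracing(options: { name: string; version: string }) {
  // Use the service name and version passed by the caller as resource attributes
  const provider = new NodeTracerProvider({
    resource: new Resource({
      [SemanticResourceAttributes.SERVICE_NAME]: options.name,
      [SemanticResourceAttributes.SERVICE_VERSION]: options.version,
    }),
  })

  // Send spans to an OTLP-compatible collector as they are received
  provider.addSpanProcessor(new SimpleSpanProcessor(new OTLPTraceExporter()))

  // Register the Prisma auto-instrumentation
  registerInstrumentations({
    tracerProvider: provider,
    instrumentations: [new PrismaInstrumentation()],
  })

  // Register the provider globally
  provider.register()
}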

Child traces start before parent traces

We're still investigating this issue.