The OpenTelemetry community offers the Collector in a separate Lambda layer from the instrumentation layers to give users maximum flexibility. This is different from the current AWS Distribution of OpenTelemetry (ADOT) implementation, which bundles instrumentation and the Collector together.
Once you’ve instrumented your application, add the Collector Lambda layer to collect and submit your data to your chosen backend. Find the most recent Collector layer release and use its ARN, changing the <region> tag to the region your Lambda function is in.
Note: Lambda layers are a regionalized resource, meaning that they can only be used in the Region in which they are published. Make sure to use the layer in the same region as your Lambda functions. The community publishes layers in all available regions.
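As an illustration, you could attach the layer with the AWS CLI; the layer ARN below is a placeholder, so substitute the exact ARN listed in the release notes for your region and architecture:

# Attach the Collector Lambda layer to an existing function (function name and ARN are placeholders)
# Note: --layers replaces the function's current layer list, so include any layers already attached
aws lambda update-function-configuration --function-name my-function \
  --layers arn:aws:lambda:<region>:<account-id>:layer:<collector-layer-name>:<version>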
The configuration of the OTel Collector Lambda layer follows the OpenTelemetry standard.
By default, the OTel Collector Lambda layer uses the config.yaml.
In the Lambda environment variable settings, create a new variable that holds your authorization token.
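For example, with the AWS CLI (the variable name and value are placeholders for whatever your backend expects):

# Store the backend authorization token as a Lambda environment variable
# Note: --environment replaces all existing variables, so include any you already rely on
aws lambda update-function-configuration --function-name my-function \
  --environment "Variables={BACKEND_AUTH_TOKEN=<your-token>}"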
In your config.yaml file, add your preferred exporter(s) if they are not already present, and configure them using the environment variables you set for your access tokens in the previous step.
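A minimal sketch, assuming an OTLP/HTTP backend and the BACKEND_AUTH_TOKEN variable from the previous step (the endpoint and header name are placeholders, and the ${env:...} syntax relies on the Collector's environment variable expansion):

exporters:
  otlphttp:
    # Hypothetical backend endpoint; replace with your vendor's OTLP endpoint
    endpoint: https://otlp.example.com
    headers:
      # Reads the token from the Lambda environment variable set in the previous step
      Authorization: ${env:BACKEND_AUTH_TOKEN}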
If no exporter is configured with such an environment variable, the default configuration only emits data using the debug exporter. Here is the default configuration:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: '0.0.0.0:4317'
      http:
        endpoint: '0.0.0.0:4318'

exporters:
  # NOTE: Prior to v0.86.0 use `logging` instead of `debug`.
  debug:
    verbosity: detailed

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
    metrics:
      receivers: [otlp]
      exporters: [debug]
  telemetry:
    metrics:
      address: localhost:8888
Publish a new version of your Lambda to enable the changes you made.
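For example, with the AWS CLI (the function name is a placeholder):

# Publish a new version so the layer and configuration changes take effect
aws lambda publish-version --function-name my-function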
The list of components available for custom configuration can be found here. To enable debugging, set the log level to debug in the configuration file, as shown in the example below.
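A minimal sketch of turning on debug logging, assuming the layer honors the Collector's standard service::telemetry settings:

service:
  telemetry:
    logs:
      level: debug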
The OTel Lambda layers support the following types of confmap providers: file, env, yaml, http, https, and s3. To customize the OTel Collector configuration using a different confmap provider, refer to the Amazon Distribution of OpenTelemetry Confmap providers document for more information.
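To illustrate the URI forms these providers accept (bucket, host, and variable names are placeholders; the s3 form matches the example further below):

# file provider: a path inside the function package
OPENTELEMETRY_COLLECTOR_CONFIG_FILE=/var/task/collector.yaml
# env provider: the variable named after the colon holds the full YAML configuration
OPENTELEMETRY_COLLECTOR_CONFIG_FILE=env:MY_COLLECTOR_CONFIG
# https provider: a configuration fetched over HTTPS
OPENTELEMETRY_COLLECTOR_CONFIG_FILE=https://config.example.com/collector.yaml
# s3 provider: a configuration object stored in S3
OPENTELEMETRY_COLLECTOR_CONFIG_FILE=s3://<bucket_name>.s3.<region>.amazonaws.com/collector_config.yaml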
Here is a sample collector.yaml configuration file in the root directory:
# collector.yaml in the root directory
# Set an environment variable 'OPENTELEMETRY_COLLECTOR_CONFIG_FILE' to '/var/task/collector.yaml'

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 'localhost:4317'
      http:
        endpoint: 'localhost:4318'

exporters:
  # NOTE: Prior to v0.86.0 use `logging` instead of `debug`.
  debug:
  awsxray:

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [awsxray]
    metrics:
      receivers: [otlp]
      exporters: [debug]
  telemetry:
    metrics:
      address: localhost:8888
Once your Collector configuration is set through a confmap provider, create an environment variable named OPENTELEMETRY_COLLECTOR_CONFIG_FILE on your Lambda function and set its value to the configuration path for that confmap provider. For example, if you are using the file confmap provider, set its value to /var/task/<path>/<to>/<filename>. This tells the extension where to find the Collector configuration.
You can set this via the Lambda console, or via the AWS CLI.
aws lambda update-function-configuration --function-name Function --environment Variables={OPENTELEMETRY_COLLECTOR_CONFIG_FILE=/var/task/collector.yaml}
You can also configure environment variables via a CloudFormation template:
Function:
  Type: AWS::Serverless::Function
  Properties:
    ...
    Environment:
      Variables:
        OPENTELEMETRY_COLLECTOR_CONFIG_FILE: /var/task/collector.yaml
Loading configuration from S3 requires that the IAM role attached to your function includes read access to the relevant bucket; see the policy sketch after the template below.
Function:
  Type: AWS::Serverless::Function
  Properties:
    ...
    Environment:
      Variables:
        OPENTELEMETRY_COLLECTOR_CONFIG_FILE: s3://<bucket_name>.s3.<region>.amazonaws.com/collector_config.yaml
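One way to grant that read access, assuming you are deploying with AWS SAM, is the S3ReadPolicy policy template (the bucket name is the same placeholder as above):

Function:
  Type: AWS::Serverless::Function
  Properties:
    ...
    Policies:
      # Grants read-only access (s3:GetObject and related actions) on the configuration bucket
      - S3ReadPolicy:
          BucketName: <bucket_name>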