OTel is a bear though. I think the biggest advantage it gives you is the ability to move across tracing providers
Nowadays they're contributing more to the project directly and have built some support to embed the collector into their DD agent. Other vendors (splunk, dynatrace, new relic, grafana, honeycomb, sumo logic, etc.) contribute to the project a bunch and typically recommend using OTel to start instead of some custom stuff from before.
It's a nice dream. At Google Cloud Next last year, the vendors kind of came in two buckets: Datadog, and everyone trying to replace Datadog's outrageous bills.
I mean, I understand why they did that, but it really removes one of the most compelling parts of Otel. We ended up doing the hard work of using the standard Otel libraries. I had to contribute a PR or two to get it all to work with our services, but I'm glad that's the route we went, because now we can switch vendors if needed (which is likely in the not too distant future in our case).
I used to really like Datadog for being a one-stop observability shop and even though the experience of integrating with it is still quite simple, I think product and pricing wise they've jumped the shark.
I'm much happier these days using a collection of small time services and self-hosting other things, and the only part of that which isn't joyful is the boilerplate and not really understanding when and why you should, say, use gRPC over HTTP, and stuff like that.
Heaven help you if it's a contrib collector bugfix
It's extremely easy to self-host, either on a dev machine, a VPS, or in any Docker-based PaaS.
You should try Otel-native observability platforms like SigNoz, Honeycomb, etc.; your life will be much simpler
Disclaimer: I am one of the maintainers at SigNoz
If anything I think the backends were kinda slow to adopt.
Edit: I wonder why suggesting JVM instrumentation that is much more polished than the OTel and Lightbend agents gets me downvoted?
OpenTelemetry is universal. As long as you can send the right network packets to one of a number of ingesting programs, you can have pretty dashboards and a lot of insights, regardless of the programming language of the program that originated the metric / trace / log.
For our team, it is very simple:
* we use a library to send traces, and traces only[0]. They bring the most value for observing applications and can contain all the data the other types can contain. Basically hash-maps vs strings and floats.
* we use manual instrumentation as opposed to automatic - we are deliberate in what we observe and have a great understanding of what emits the spans. We have naming conventions that match our code organization (rough sketch below, after the footnote).
* we use two different backends - an affordable 3rd party service and an all-in-one Jaeger install (just run 1 executable or docker container) that doesn't save the spans on disk for local development. The second is mostly for peace of mind for team members, so they know they are not going to flood the third party service.
[0] We have a previous setup to monitor infrastructure, and in our case we don't see a lot of value in ingesting all the infrastructure logs and metrics. I think it is early days for OTEL metrics and logs, but the vendors don't tell you this.
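For a rough idea of what that looks like in practice, here's a hedged sketch with the Go SDK (package, span, and attribute names are made up to illustrate the naming-convention point; it assumes a tracer provider is already configured elsewhere):

```go
package billing

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
)

// Tracer named after the package path, matching our code organization.
var tracer = otel.Tracer("acme/billing")

func ChargeCustomer(ctx context.Context, customerID string, cents int64) error {
	// Span name mirrors <package>.<function> so it maps straight back to code.
	ctx, span := tracer.Start(ctx, "billing.ChargeCustomer")
	defer span.End()

	// Attributes are the "hash-map" part: anything we'd otherwise log or count.
	span.SetAttributes(
		attribute.String("customer.id", customerID),
		attribute.Int64("charge.amount_cents", cents),
	)

	return chargeViaGateway(ctx, customerID, cents)
}

func chargeViaGateway(ctx context.Context, customerID string, cents int64) error {
	// Child span: nested under ChargeCustomer because the ctx is passed along.
	_, span := tracer.Start(ctx, "billing.chargeViaGateway")
	defer span.End()
	// ... call the payment gateway ...
	return nil
}
```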
Otel may be okay for a green field project but turning this thing on in a production service that already had telemetry felt like replacing a tire on a moving vehicle.
> felt like replacing a tire on a moving vehicle.
Some people do this as a joke / dare. I mean literally replacing a car tire on a moving vehicle.
You Saudi drift up onto one side, and have people climb out of the side in the air, and then swap the tire while the car is driving on two wheels.
It's pretty insane stuff: https://youtu.be/Str7m8xV7W8?si=KkjBh6OvFoD0HGoh
My whole career I’ve been watching people on greenfield projects looking down on devs on already successful products for not using some tool they’ve discovered, missing the fact that their tool only functions if you build your whole product around the exact mental model of the tool (green field).
Wisdom is learning to watch for people obviously working on brownfield projects espousing a tool. Like moving from VMs to Docker. Ansible to Kubernetes (maybe not the best example). They can have a faster adoption cycle and more staying power.
https://www.scylladb.com/2020/05/28/sas-institute-changing-a...
Seems like moving to OTel might even be a bit more complex for some brownfield folks.
I'm still looking for an endpoint just to send simple one-off metrics to from parts of infrastructure that's not scrapable.
Top Google result, for me, for 'send metrics to otel' is https://opentelemetry.io/docs/specs/otel/metrics/. If I go through the Language APIs & SDKs, it's a whole bunch more useless junk: https://opentelemetry.io/docs/languages/js/
Compare to the InfluxDB "send data" getting started https://docs.influxdata.com/influxdb/cloud/api-guide/client-... which gives you exactly it in a few lines.
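For comparison's sake, here's roughly the shortest "send one metric" path I know of with the Go SDK. This is a hedged sketch: it assumes something OTLP-capable (a collector or a vendor's OTLP endpoint) is listening on localhost:4318, and the meter/counter names are placeholders.

```go
package main

import (
	"context"

	"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
)

func main() {
	ctx := context.Background()

	// OTLP over HTTP; defaults to localhost:4318 unless endpoint options are set.
	exp, err := otlpmetrichttp.New(ctx, otlpmetrichttp.WithInsecure())
	if err != nil {
		panic(err)
	}

	provider := sdkmetric.NewMeterProvider(
		sdkmetric.WithReader(sdkmetric.NewPeriodicReader(exp)),
	)
	defer provider.Shutdown(ctx) // Shutdown flushes and exports whatever was recorded

	counter, err := provider.Meter("one-off-job").Int64Counter("job.items_processed")
	if err != nil {
		panic(err)
	}
	counter.Add(ctx, 42)
}
```

It's more ceremony than InfluxDB's getting-started snippet, but once the provider boilerplate exists, recording further metrics is one line each.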
The InfluxData docs you're linking to are similar to Observability vendor docs, which do indeed amount to "here's the endpoint, plug it in here, add this API key, tada".
But OpenTelemetry isn't an observability vendor. You can send to an OpenTelemetry Collector (and the act of sending is simple), but you also need to stand that thing up and run it yourself. There's a lot of good reasons to do that, but if you don't need to run infrastructure right now then it's a lot simpler to just send directly to a backend.
Would it be more helpful if the docs on OTel spelled this out more clearly?
I understand the role that all the different parts of OTel play in the ecosystem vs InfluxDB, but if you pay attention to that documentation page, it starts off with the easiest thing (here's how you manually send one metric), and then ramps up the capabilities and functionality from there. OTel docs slam you straight into "here's a complete observability stack for logs, metrics, and traces for your whole k8s deployment".
However, since OTel is not a backend, there's no pluggable endpoint + API key you can just start sending to. Since you were comparing the relative difficulties of sending data to a backend, that's why I responded in kind.
I do agree that it's more complicated, there's no argument there. And the docs have a very long way to go to highlight easier ways to do things and ramp up in complexity. There's also a lot more to document since OTel is for a wider audience of people, many of whom have different priorities. A group not talked about much in this thread is ops folks who are more concerned with getting a base level of instrumentation across a fleet of services, normalizing that data centrally, pulling in from external sources, and making sure all the right keys for common fields are named the right way. OTel has robust tools for (and must document) these use cases as well. And since most of us who work on it do so in spare time, or a part-time capacity at work, it's difficult to cover it all.
https://jeremymorrell.dev/blog/minimal-js-tracing/
"It might help to go over a non-exhaustive list of things the offical SDK handles that our little learning library doesn’t:
- Buffer and batch outgoing telemetry data in a more efficient format. Don’t send one-span-per-http request in production. Your vendor will want to have words."
- Gracefully handle errors, wrap this library around your core functionality at your own peril"
You can solve them of course, if you can
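On the batching point quoted above: with the official Go SDK that's one processor choice when wiring up the tracer provider. A hedged sketch (exporter construction elided; the option values are just examples, not recommendations):

```go
package telemetry

import (
	"time"

	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

// newTracerProvider wraps the exporter in a batch span processor so spans are
// buffered and sent in chunks rather than one request per span.
func newTracerProvider(exp sdktrace.SpanExporter) *sdktrace.TracerProvider {
	return sdktrace.NewTracerProvider(
		sdktrace.WithBatcher(exp,
			sdktrace.WithMaxExportBatchSize(512),     // spans per outgoing batch
			sdktrace.WithBatchTimeout(5*time.Second), // flush at least this often
		),
	)
}
```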
First time it takes 5 minutes to set up locally, from then on you just run the command in a separate terminal tab (or Docker container, they have an image too).
... if (and only if) all the libraries you use also stick to that subset, yea. That is overwhelmingly not true in my experience. And the article shows a nice concrete example of why.
For green-field projects which use nothing but otel and no non-otel frameworks, yea. I can believe it's nice. But I definitely do not live in that world yet.
And in NodeJS, about four times the CPU usage of StatsD. We ended up doing our own aggregation to tamp this down and to reduce tag proliferation (StatsD is fine having multiple processes reporting the same tags, OTEL clobbers). At peak load we had 1 CPU running at 60-80% utilization. Until something changes we couldn’t vertically scale. Other factors on that project mean that’s now unlikely to happen but it grates.
OTEL is actively hostile to any language that uses one process per core. What a joke.
Just go with Prometheus. It’s not like there are other contenders out there.
(And some Go tbf.)
But I’d rather do that three more times before I want to see OpenTelemetry again.
Also Prometheus is getting OTEL interop.
Prometheus ecosystem is very interoperable, by the way.
Meanwhile functionality to retain and recruit new customers sat in the backlog.
Edit to add: also regarding the perf issues I saw: do you really want to pay for an extra server or half a server in your cluster just in case some day comes? These decisions were much fuzzier when you ordered hardware once every two years and just had to live with the capacity you got.
How would you build the "holy grail" map that shows a trace of every sub-component in a transaction broken down by start/stop time, etc.? For instance: show the load balancer seeing a request, the request getting handled by middlewares, then going on to some kind of handler/controller, and the sub-queries inside of that like database calls or cache calls. I don't think that is possible with Prometheus?
Correct. Prometheus is just metrics.
The main argument for oTel is that instead of one proprietary vendor SDK or importing prometheus and jaeger and whatever you want to use for logging, just import oTel and all that will be done with a common / open data format.
I still believe in that dream but it's clear that the whole project needs some time/resources to mature a bit more.
If anybody remembers the Terraform/ToFu drama, it's been really wild to see how much support everybody pledged for ToFu but all the traditional observability providers have just kinda tolerated oTel :/
Otel is an attempt to package such arithmetic.
Web apps have added so many layers of syntax sugar and semantic wank, we've lost sight that it's all just the same old math operations relative to different math objects. Sets are not triangles, but both are tested, quantified, and compared with the same old mathematical ops we learn by middle school.
OTel does feel a little bit heavy, unless you're already used to e.g. New Relic, Dynatrace, etc., where you have to run an agent process and instrument your code to some extent; it's never going to be free to audit every function call! This is why (a) you sample down and don't keep every trace, and (b) unless your company is extremely flush with cash you probably don't run tracing in every environment. If you can get away with it just in a staging or perf test env, you can reap most of the benefit without the production impact and cost.
Sorry for knowing how computers actually work (EE grad not a CS grad). I know that can frustrate CS grads who think their preferred OS and favorite programming language is how a computer works. You’re describing how contemporary SWEs view their day job.
Edit: teleMETRY …what’s in a name? Oh right …meaning.
Speaking of meaning, the best I can make of your point is that you're using a much broader definition of "metrics" than the rest of this conversation, and in particular broader than Prometheus (remember context? very important for "meaning"!) supports. That or you really just don't know what a "trace" is (in this context).
Any guesses as to etymology?
They're related, but people have a very specific idea and concept of what each is, and you haven't actually provided a good argument for why we should throw out these distinctions just because they somewhat resemble each other if you ignore a few details.
You may be thinking of metrics in the sense of counters and gauges, but that's not the data model that OpenTelemetry (and before they, Zipkin, Jaeger, and OpenCensus) uses for traces.
The data model for tracing is to emit events that provide a span ID and an optional parent span ID. The event collector can piece these together into a tree after the fact, which will work as long as the parent structure is maintained.
Prometheus is absolutely not suitable for this.
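To make that concrete, here's a hedged Go sketch (no exporter wired up, function names invented): the parent/child link rides along in the context, and each exported span carries its trace ID, span ID, and parent span ID so the backend can rebuild the tree later.

```go
package main

import (
	"context"

	"go.opentelemetry.io/otel"
)

var tracer = otel.Tracer("example")

func handleRequest(ctx context.Context) {
	// Root span: fresh trace ID and span ID, no parent span ID.
	ctx, root := tracer.Start(ctx, "handleRequest")
	defer root.End()

	queryDatabase(ctx)
}

func queryDatabase(ctx context.Context) {
	// Child span: same trace ID; its parent span ID is handleRequest's span ID.
	_, child := tracer.Start(ctx, "queryDatabase")
	defer child.End()
}

func main() {
	// No SDK/exporter is configured here, so these spans go nowhere; the point
	// is only the shape of the data model.
	handleRequest(context.Background())
}
```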
Quibbling about the word "telemetry" doesn't really help here. OpenTelemetry supports three completely different subsets of functionality: metrics (counters, gauges, histograms), traces (span events in a tree structure), and logging (structured log events). They each have completely different client interfaces.
Metrics in OTEL is about three years old and it’s garbage for something that’s been in development for three years.
[1] https://grafana.com/docs/tempo/latest/getting-started/instru...
Grafana, the same company that develops and sells Tempo created a horizontally scalable version of Prometheus called Mimir.
OpenTelemetry is an ecosystem, not just 1 app. It's protocols, libraries, specs, a Collector (which acts as a clearinghouse for metrics+traces+logs data). It's bigger than just Tempo. The intention of OTel seems to be to decouple the protocol from the app by having adapters for all of the pieces.
Syslog is kinda a pain, but it's an hour of work and log aggregation is set up. Is the difference the pain of doing simple things with elastic compute and kubernetes?
In my experience, it's often folks who have experience setting up metrics or log collection with something smaller (e.g., StatsD), and sometimes for purposes with less scope, who have the most frustration with OTel. All the concepts are different, carry different names, have different configs, have different quirks, etc. There's often an expectation that things will be largely the same as before and that they can carry over the cursed knowledge they had from the other toolset.
My guess is that Prometheus cannot do distributed tracing, while OpenTelemetry can. Is that what you meant?
Some companies (ie: Datadog) are contributing to the tooling but I think most companies would rather spend dev time on their own platforms than something that anybody (competitor) can use.
My teammate said that at a previous job he wanted to add OpenTelemetry tracing to some C++ code he was working on. He took one look at the reference implementation for C++ OpenTelemetry and decided instead to write his own tracing library that sends gRPC to the OpenTelemetry collector.
It's also worth noting that, at least last time I checked, the reference implementations per programming language are less like reference implementations of some specification, and more like "this is the code you use to do OpenTelemetry in this language."
Also open-source & self-hostable.
Glitchtip is the Sentry compatible open source (MIT) one https://gitlab.com/glitchtip/glitchtip-backend/-/blob/v4.2.2... with the extra advantage that it doesn't require like 12 containers to deploy (e.g. https://github.com/getsentry/self-hosted/blob/24.12.1/docker... )
Then you can have something that sums, and removes the attribute.
With statsd/delta, if you lose a signal then all the data gets skewed; with cumulative reporting, you only lose precision.
edit... forgot to say - my use case is "push based" metrics as these are coming from "batch" tools, not long running processes that can be scraped.
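A toy illustration of that trade-off, no libraries involved, just the arithmetic (assume the second report is dropped in transit in both cases):

```go
package main

import "fmt"

func main() {
	// The process counted 5, then 3, then 4 events between reports.
	deltas := []int{5, 3, 4}      // delta/statsd style: send the increments
	cumulative := []int{5, 8, 12} // cumulative style: send running totals

	// Drop the second report (index 1) from both streams.
	deltaTotal := deltas[0] + deltas[2] // 9: the lost +3 is gone forever, the total is skewed
	latestTotal := cumulative[2]        // 12: still correct, we only lost one intermediate point

	fmt.Println("total reconstructed from deltas:", deltaTotal)
	fmt.Println("total from latest cumulative:", latestTotal)
}
```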
Apache Skywalking might be worth a look in some circumstances. It doesn't eat too many resources, is fairly straightforward to set up and run, and is admittedly somewhat jank (not the most polished UI or docs), but it works okay: https://skywalking.apache.org/
Also I quite liked that a minimal setup is indeed pretty minimal: a web UI, a server instance and a DB that you already know https://skywalking.apache.org/docs/main/latest/en/setup/back...
In some ways, it's a lot like Zabbix in the monitoring space - neither will necessarily impress anyone, but both have a nice amount of utility.
I tried doing a simple otel setup in .NET and after a few hours of trying to grok the documentation of the vendor my org has chosen, hopped into a discord run by a colleague that has part of their business model around 'pay for the good otel on the OSS product' and immediately stated that whatever it cost, it was worth the money.
I'd rather build another reliable event/pubsub library without prior experience than try to implement OTEL.
Otel is a design by committee garbage pile of half baked ideas.
Turtles all the way down.
If you already have reloadable configuration infrastructure, or plan to add it in the future, this is just spreading out your configuration capture. No thank you (and by “no thank you” I mean fuck right off).
If you want to improve your bus number for production triage, you have to make it so anyone (senior) can first identify and then reproduce the configuration and dependencies of the production system locally without interrupting any of the point people to do so. If you cannot see you cannot help.
Just because you’re one of k people who usually discover the problem quickly doesn’t mean you’ll always do it quickly. You have bad days. You have PTO. People release things or flip feature toggles that escape your notice. If you stop to entertain other people’s queries or theories you are guaranteed to be in for a long triage window, and a potential SLA violation. But if you never accept other perspectives then your blind spots can also make for SLA violations.
Let people putter on their own and they can help with the Pareto distributions. Encourage them to do so and you can build your bus number.
Feels like a "leaky abstraction" (or "leaky framework") issue. If we wanted to put everything under one umbrella, then well, an SQL database can also do all these things at the same time! Doesn't mean it should.
But I still dislike OTel every time I have to deal with it.
OTel doesn't define any limits on the # of spans in a trace (nor the # of attributes on a span!) but it will be bound by the limits of whatever backend you use. In the case of the one I work for, we do limit the total size of a span to be 1MB or less with 64KB per attribute before truncation. Other backends have different limitations. This is the first I've heard of such a small limitation on the total number of spans in a trace though. Traces are just (basically) collections of structured logs with in-built correlation IDs. I can't imagine why you'd limit them like this.
The other problem I noticed looking at the wire protocol was that the data for the parent trace doesn’t seem to get sent until the trace closes. That seems like a bookkeeping nightmare to me. There should be a start of trace packet and an update at the end. I shouldn’t have finished spans showing up before the parent trace has been registered. And that’s what it looked like in the dumps my OPs people sent me to debug.
For example the author of the software instruments it with OTel -- either language interface or wire protocol -- and the operator of the software uses the backend of choice.
Otherwise, you have a combinatorial matrix of supported options.
(Naturally, this problem is moot if the author and operator are the same.)
Where are the three existing, successful solutions it is trying to abstract over?
It doesn’t know what it is because it’s violating the Rule of Three.
Am I detecting sarcasm or did I just bring my own?
APM products in general.
How to send StatsD data to Datadog: https://docs.datadoghq.com/developers/dogstatsd/?tab=hostage...
Places like datadog and posthog are selling you their ability to ingest your existing data. I call bullshit. It’s a problem looking for a solution. It’s an excuse for engineers to build moats around a moderately difficult problem by making it inscrutable.
It's the exact opposite.
I can export traces (or metrics or logs) to whatever backend I want, and change easily.
If you look up how to send traces to any popular vendor, the options are either a) use our proprietary format and proprietary agents and SDKs, or b) use otel
Iirc metrics in OTEL are very similar to Prometheus. Haven't looked at logging, but realistically logging becomes an afterthought with a good tracing setup.
For the backend?
Datadog, New Relic, Grafana, Sentry, Azure Monitor, Splunk, Dynatrace, Honeycomb
https://opentelemetry.io/docs/specs/semconv/general/attribut...
https://opentelemetry.io/docs/specs/semconv/hardware/common/
https://opentelemetry.io/docs/specs/semconv/system/container...
https://opentelemetry.io/docs/specs/semconv/system/k8s-metri...
https://opentelemetry.io/docs/specs/semconv/http/http-metric...
https://opentelemetry.io/docs/specs/semconv/cloud-providers/...
Things, especially crosscutting concerns, you want to use in production should have stopped experiencing basic growing pains like this long before you touch them. It’s not baked yet. Come back in a year. Or two.
Tracing is very mature, with metric and logging implementations stable for a number of popular languages [1].
the "experimental" status was renamed "development"
[0] https://opentelemetry.io/docs/specs/otel/versioning-and-stab...
[1] https://opentelemetry.io/docs/languages/#status-and-releases
That doesn’t really change things now does it. It’s still a bunch of people sitting around saying “MMMM” loudly while eating half-raw cookies.
Ours was old so based on domain <dry heaving sounds>, but by the time I left the project there were just a few places left where anyone touched raw domains directly and you could switch to AsyncLocalStorage in a reasonable amount of time.
The simplest thing that could work is to pass the original request or response context everywhere but that… has its own struggles. It’s hell on your function signatures (so I sympathize with my predecessors not doing that but goddamn) and you really don’t want an entire sequence diagram being able to fire the response. That’s equivalent to having a function with 100 return statements in it.
tl;dr
while there are certainly many areas to improve for the project, some reasons why it could seem complicated:
Extensibility by Design: Flexibility in defining meters and signals ensures diverse use cases are supported.
It's still a relatively new technology (~3 years old), growing pains are expected. OpenTelemetry is still the most advanced open standard handling all three signals together.
Creating an HTTP endpoint that publishes metrics in a Prometheus-scrape-able format? Easy! Some boolean/float key-value pairs with appropriate annotations (basically: is this a counter or a gauge?), and done! And that led (and leads!) to some very usable Grafana dashboards-created-by-actual-users and therefore much joy.
Then, I read up on how to do things The Proper Way, and was initially very much discouraged, but decided to ignore All that Noise due to the existing solutions working so well. No complaints so far!
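For reference, the "easy" version described above is roughly this much Go with the official Prometheus client (the port, handler path, and metric names are just examples):

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// A counter and a gauge: the "is this a counter or a gauge?" annotation is
// simply the metric type you pick here.
var (
	jobsProcessed = promauto.NewCounter(prometheus.CounterOpts{
		Name: "jobs_processed_total",
		Help: "Total number of jobs processed.",
	})
	queueDepth = promauto.NewGauge(prometheus.GaugeOpts{
		Name: "queue_depth",
		Help: "Current number of jobs waiting.",
	})
)

func main() {
	jobsProcessed.Inc()
	queueDepth.Set(3)

	// Expose everything on /metrics for Prometheus to scrape.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":2112", nil))
}
```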
Traces work, and I have the spanmetrics exporter set up, and I can actually see the spanmetrics in prometheus if I query directly, but they won't show up in the jaeger "monitor" tab, no matter what I do.
I spent 3 days on this before my boss is like "why don't we just manually instrument and send everything to the SQL server and create a grafana dashboard from that" and agh I don't want to do that either.
Any advice? It's literally the simplest usecase but I can't get it to work. Should I just add grafana to the pile?
SigNoz does look interesting, I may give this a shot, thank you. I'm a bit concerned about it conflicting with other things going on in our docker-compose but it doesn't look too bad..
Yes, but only if everything in your stack is supported by their auto instrumentation. Take `aiohttp` for example. The latest version is 3.11.X and ... their auto instrumentation claims to support `3.X` [0] but results vary depending on how new your `aiohttp` is versus the auto instrumentation.
It's _magical_ when it all just works, but that ends up being a pretty narrow needle to thread!
[0]: https://github.com/open-telemetry/opentelemetry-python-contr...
Semver should never be treated as anything more than some tired programmer's shrug and prayer that nobody else notices the breakages they didn't notice themselves. Pin strict dependencies instead of loose ones, and upgrade only after integration testing.
There are only two kinds of updates, ones that intend to break something and ones that don't intend to break something, and neither one guarantees that the intent matches the outcome.
That's precisely my point, but you said it better :).
I have had _mixed_ results getting auto instrumentation working reliably with packages that are - technically - supported.
I've been pushing the use of Datadog for years, but their pricing is out of control for anyone between a mid-size company and a large enterprise. So as years passed and OpenTelemetry APIs and SDKs stabilized, it became our standard for application observability.
To be honest the documentation could be better overall and the onboarding docs differ per programming language, which is not ideal.
My current team is on a NodeJS/Typescript stack and we’ve created a set of packages and an example Grafana stack to get started with OpenTelemetry real quick. Maybe it’s useful to anyone here: https://github.com/zonneplan/open-telemetry-js
Wait... so, the problem is that everyone makes it super easy, and so this product solves that by being complicated? ;P
Also, per the hackiness, it tends to have visible perf impact. I know with the Dynatrace agent we had 0-1ms metrics pop up to 5-10ms (this service had a lot of traffic so it added up), and I'm pretty sure on the .NET side there are issues around general performance of OTEL. I also know some of the work/'fun' colleagues have had to endure to make OTEL performant for their libs, in spite of the fact it was a message-passing framework where that should be fairly simple...
Not a fan of datadog vs just good metric collection. OTOH while I see the value of OTEL vs what I prefer to do... in theory.
My biggest problem with all of the APM vendors: once you have kernel hooks via your magical agent, all sorts of fun things come up that developers can't explain.
My favorite example: At another shop we eventually adopted Dynatrace. Thankfully our app already had enough built-in metrics that a lead SRE considered it a 'model' for how to do instrumentation... I say that because, as soon as Dynatrace agents got installed on the app hosts, we started having various 'heisenbugs' requiring node restarts as well as a directly measured drop in performance. [0]
Ironically, the metrics saved us from grief, yet nobody had an idea how to fix it. ;_;
[0] - Curiously, the 'worst' one was MSSQL failovers on update somehow polluting our ADO.NET connection pools in a bad way...
Our containers regularly fail due to vague LD_PRELOAD errors. Nobody has invested the time to figure out what the issue is because it usually goes away after restarting; the issue is intermittent and non-blocking, yet constant.
It's miserable.
Docs are an absolute monstrosity that rival Bazel's for utility, but are far less complete. Implementations are extremely widely varied in support for basics. Getting X to work with OTel often requires exactly what they did here: reverse-engineering X to figure out where it does something slightly abnormal... which is normal, almost every library does something similar, because it's so hard to push custom data through these systems in a type-safe way, and many decent systems want type safety and will spend a lot of effort to get it.
It feels kinda like OAuth 2 tbh. Lots of promise, obvious desirable goals, but completely failing at everything involving consistent and standardized implementation.
there's some gold here, but most of it is over in the consultant/vendor space today, I fear.
Recently, the .NET team launched .NET Aspire and it’s awesome. Super easy to visualize everything in one place in my local development stack and it acts as an orchestrator as code.
Then when we deploy to k8s we just point the OTEL endpoint at the DataDog Agent and everything just works.
We just avoid the DataDog custom trace libraries and SDK and stick with OTEL.
Now it’s a really nice development experience.
https://learn.microsoft.com/en-us/dotnet/aspire/fundamentals...
This project is really nice for that https://github.com/grafana/docker-otel-lgtm
Takes 5 minutes to set it up locally on your dev machine the first time, from then on you can just have a separate terminal tab where you simply run `/path/to/openobserve` and that's it. They also offer a Docker image for local and remote running as well, if you don't want to have the grand complexity of a single statically-linked binary. :P
It's an all-in-one fully compliant OpenTelemetry backend with pretty graphs. I love it for my projects, hasn't failed me in any detectable way yet.
On my own projects, I send the metrics I care about out through the logs and have another project I run collect and aggregate them from the logs. Probably “wrong” but it works and it's easy to set up.
I haven't decided exactly what to blame for this. In some ways, it's necessary to have vague, inconsistent terminology to cover various use cases. And, to be fair some of the UIs predate OTel.
Loki works great.
It's FOSS, and you can point it to any OTel-compatible endpoint. Plus the client that the Pydantic team made is 10 times better and simpler than the official otel lib.
Samuel Colvin has a cool intervew where he explains how he got there: https://www.bitecode.dev/p/samuel-colvin-on-logfire-mixing-p...
I can see the value in smaller software -- I fought for it many times, in fact -- but you will have to do better when making a case for your program. Just giving one semi-informed dismissive take reads like a beer-infused dismissal.
Once in a while I try to spin up OTel like they say. Every single time it sucks. I'll keep trying, though. New Relic's pricing is so brutal that I hold out hope. Unfortunately, NR's product really is that good...
And reading this it seems a lot of people agree. Hope that can be fixed at some point. Tracing should be simple.
See for example this project: https://github.com/jmorrell/minimal-nodejs-otel-tracer
I think it is more of a POC, but it shows that all this complexity is not needed IMO.
In Go, currently it is a deliberate choice to be both very granular and specific, so that end-users can ultimately depend on only the exact packages they need and have nothing standing between them and any customization of the SDK they need to do for their organizations.
There's some ways to isolate this kind of setup, which we document like so: https://opentelemetry.io/docs/languages/go/getting-started/#...
Stuff that into an otel.go file and then the rest of your code is usually pretty okay. From there your application code usually looks like this:
https://gist.github.com/cartermp/f37b6702109bbd7401be8a1cab8...
The main thing people sometimes struggle with at this point is threading context through all their calls if they haven't done that yet. It's annoying, but unfortunately that's a limitation of the Go language. Most of the other languages (Java, .NET, Ruby, etc.) keep context implicitly available for you at all times because the languages provide that affordance.
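Concretely, "threading context through" just means every function in the call chain accepts and forwards a ctx. A minimal sketch (handler and helper names are invented; it assumes a tracer provider has already been set up in the otel.go file mentioned above):

```go
package main

import (
	"context"
	"log"
	"net/http"

	"go.opentelemetry.io/otel"
)

var tracer = otel.Tracer("my-service")

func handler(w http.ResponseWriter, r *http.Request) {
	// The incoming request's context carries the active span (if any);
	// every downstream call must receive this ctx or the trace chain breaks.
	ctx, span := tracer.Start(r.Context(), "GET /orders")
	defer span.End()

	loadOrders(ctx)
}

func loadOrders(ctx context.Context) {
	// Because ctx was passed through, this span nests under the handler's span.
	_, span := tracer.Start(ctx, "loadOrders")
	defer span.End()
	// ... query the database with ctx so its spans nest here too ...
}

func main() {
	http.HandleFunc("/orders", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```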