InfluxData Community Forums - Latest topics
https://community.influxdata.com/latest
Wed, 22 Apr 2026 20:59:57 +0000

Support `accept_partial` writes via `/api/v3/write_lp` endpoint for partial-failure resilience Systems

Problem

When writing a batch of line protocol records where even a single record has a schema conflict (e.g., integer value for a column previously defined as float), the SDK throws InfluxDBApiHttpException and the entire batch is treated as failed. There is no way for the caller to know that some records in the batch were actually written successfully by the server.

Server Capability Exists But SDK Cannot Leverage It!

Reference: https://docs.influxdata.com/influxdb3/enterprise/write-data/http-api/v3-write-lp/

The InfluxDB v3 /api/v3/write_lp endpoint supports the accept_partial query parameter (default: true). When enabled, the server writes all valid lines and returns HTTP 400 with structured details identifying only the rejected lines:

{
  "error": "partial write of line protocol occurred",
  "data": [
    {
      "error_message": "invalid column type for column 'temp'...",
      "line_number": 2,
      "original_line": "..."
    }
  ]
}

Gaps in SDK

Gap 1: /api/v3/write_lp is only accessible when noSync=true

In InfluxDBClientImpl.writeData(), the endpoint selection is:

  • noSync=true → /api/v3/write_lp

  • noSync=false (default) → /api/v2/write

There is no way to use the v3 write endpoint without also opting into noSync. The /api/v2/write endpoint does not support accept_partial.

Gap 2: HTTP 400 always throws, even on partial success

In RestClient.request(), any response with status code outside 200–299 throws InfluxDBApiHttpException.

When accept_partial=true and the server returns HTTP 400, valid lines may have been written, but the SDK reports the entire operation as failed. The caller has no programmatic way to determine which lines succeeded and which failed.

Desired Outcome

SDK consumers need the ability to:

  1. Use the /api/v3/write_lp endpoint with accept_partial support
  2. Programmatically distinguish between total failure and partial failure
  3. Identify which specific lines in a batch failed (line number, error message, original line)
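
Until the SDK exposes this natively, a caller could in principle catch the 400 response and parse the structured body shown above. The sketch below is purely illustrative (the PartialWriteDemo class, its LineError record, and the regex-based extraction are all hypothetical, not SDK API; a real implementation would use a proper JSON parser):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: extract per-line errors from the accept_partial
// HTTP 400 body shown above. Not part of the SDK.
public class PartialWriteDemo {
    record LineError(int lineNumber, String message) {}

    static List<LineError> parse(String body) {
        // Naive regex extraction for illustration; use a JSON parser in practice.
        List<LineError> errors = new ArrayList<>();
        Pattern p = Pattern.compile(
            "\"error_message\"\\s*:\\s*\"([^\"]*)\"[^}]*\"line_number\"\\s*:\\s*(\\d+)");
        Matcher m = p.matcher(body);
        while (m.find()) {
            errors.add(new LineError(Integer.parseInt(m.group(2)), m.group(1)));
        }
        return errors;
    }

    public static void main(String[] args) {
        String body = "{\"error\": \"partial write of line protocol occurred\", "
            + "\"data\": [{\"error_message\": \"invalid column type for column 'temp'\", "
            + "\"line_number\": 2, \"original_line\": \"...\"}]}";
        for (LineError e : parse(body)) {
            System.out.println("line " + e.lineNumber() + ": " + e.message());
        }
        // prints: line 2: invalid column type for column 'temp'
    }
}
```

With a result shape like this, a caller could retry or log only the rejected lines instead of discarding the whole batch.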

1 post - 1 participant

Read full topic

https://community.influxdata.com/t/support-accept-partial-writes-via-api-v3-write-lp-endpoint-for-partial-failure-resilience/58431 Wed, 22 Apr 2026 20:59:57 +0000
CNC machine shop data collection project InfluxDB 2

My team and I are currently working on a data collection project within our machine shop. We will have 35 machines all sending data once they’re all online. We’re all very new to a project like this, so we’re looking to see if anyone has any guidance for how we should go about some aspects of it.

This project is also tied to us rolling out a new MES system through SIEMENS, so there are some layers to it. Currently, this is how we are doing things:

CNC machines (lathes, horizontal machining centers, vertical machining centers, lasers, press brakes) generate data through MTConnect, OPC-UA, or FANUC FOCAS2 protocols.

Data is sent to SIEMENS BFC (brown field connectivity) device to interpret and organize the data coming out of the machine.

Data is sent to InfluxDB (v2), which stores it in a database.

Calculations and interpretations for things like cycle time, part count, overrides, etc. are done in Influx.

Influx data is picked up by another software called Factory Thread in order to bring in data from the SIEMENS MES and tie them together.

Data is then stored in SQL for documentation, reporting, etc.

Like I said previously, there are a lot of layers to it all, so it can get confusing at times. The biggest thing that I’m questioning is when we should actually be doing the data calculations and interpretation. Is it better to do it with the raw data in the BFC using middleware, or is our approach the better way to do it? Also, if anyone has any insight into a project like this, things they found helpful or things to watch out for, we would appreciate that information.

Thank you!

1 post - 1 participant

Read full topic

https://community.influxdata.com/t/cnc-machine-shop-data-collection-project/58430 Tue, 21 Apr 2026 13:22:25 +0000
InfluxDB v3 Java client fails to initialize inside Apache Flink with gRPC/Arrow Flight URI errors InfluxDB 3

I am trying to use the InfluxDB v3 Java client inside an Apache Flink streaming job (custom RichSinkFunction).

A simple standalone Java test works correctly (client initialization + writePoint succeeds), but the same client initialization fails when executed inside a Flink task.

Observed Behavior

Inside Flink, the client fails during initialization in open():

java.lang.IllegalArgumentException: java.net.URISyntaxException: Expected scheme-specific part at index 9: grpc+tcp:
    at org.apache.arrow.flight.Location.forGrpcInsecure(Location.java:122)
    at com.influxdb.v3.client.internal.FlightSqlClient.createLocation(FlightSqlClient.java:207)
    at com.influxdb.v3.client.internal.FlightSqlClient.createFlightClient(FlightSqlClient.java:148)
    at com.influxdb.v3.client.internal.FlightSqlClient.<init>(FlightSqlClient.java:102)
    at com.influxdb.v3.client.internal.InfluxDBClientImpl.<init>(InfluxDBClientImpl.java:116)
    at com.influxdb.v3.client.InfluxDBClient.getInstance(InfluxDBClient.java:519)

If I pass the host without scheme:

host:8181

I get the error above.
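
Incidentally, that exact message can be reproduced with plain java.net.URI: a string consisting of only a scheme and a colon (grpc+tcp:) is rejected with "Expected scheme-specific part at index 9", which suggests the Flight Location ends up being built with an empty host/port in this environment (a hypothesis based on the message, not a confirmed diagnosis):

```java
import java.net.URI;
import java.net.URISyntaxException;

// Reproduces the URI parse failure from the stack trace using only the JDK.
public class UriRepro {
    public static void main(String[] args) {
        try {
            new URI("grpc+tcp:"); // scheme with nothing after the colon
        } catch (URISyntaxException e) {
            // prints: Expected scheme-specific part at index 9: grpc+tcp:
            System.out.println(e.getMessage());
        }
    }
}
```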

If I pass the host with scheme:

https://host:8181

I get a different error:

java.lang.IllegalArgumentException: Address types of NameResolver 'unix' not supported by transport

Expected Behavior

The InfluxDB v3 Java client should initialize correctly inside a Flink operator (RichSinkFunction.open()), just like it does in a standalone Java application.


Minimal Reproduction

public class InfluxSink extends RichSinkFunction<SensorEvent> {

    private transient InfluxDBClient client;

    @Override
    public void open(Configuration parameters) throws Exception {

        ClientConfig config = new ClientConfig.Builder()
                .host("https://host:8181")
                .token("TOKEN".toCharArray())
                .database("DB")
                .build();

        this.client = InfluxDBClient.getInstance(config);
    }

    @Override
    public void invoke(SensorEvent value, Context context) {
        // no-op
    }
}

Executed within a Flink job:

filteredStream.addSink(new InfluxSink());

Environment

  • Apache Flink: (e.g. 1.18.1)

  • Java: 17 (also tested with newer versions)

  • InfluxDB v3 Java client: 1.80 (mvn)

  • OS: macOS / Linux (tested on both)

  • Deployment: local + VM


Additional Notes

  • The same configuration works outside Flink in a plain main() method.

  • The error originates from Arrow Flight (org.apache.arrow.flight.Location).

  • This may be related to classloader isolation, URI parsing, or gRPC/Flight initialization inside Flink runtime.


Questions

  1. Is the InfluxDB v3 Java client officially supported inside Flink jobs?

  2. What is the correct format for .host() when used with Flight/gRPC?

  3. Are there known issues with Arrow Flight + Flink classloader?

  4. Is additional configuration required for gRPC name resolution in this context?


Workarounds I tried

  • Using host with and without scheme

  • Removing any custom NameResolverRegistry configuration

  • Running with parallelism = 1

  • Testing outside Flink (works)


Any guidance would be greatly appreciated.

3 posts - 3 participants

Read full topic

https://community.influxdata.com/t/influxdb-v3-java-client-fails-to-initialize-inside-apache-flink-with-grpc-arrow-flight-uri-errors/58425 Thu, 16 Apr 2026 12:43:10 +0000
http://localhost:8086 these is not working InfluxDB 2

2 posts - 2 participants

Read full topic

https://community.influxdata.com/t/http-localhost-8086-these-is-not-working/58421 Fri, 10 Apr 2026 14:39:48 +0000
OOM on InfluxDB or InfluxDB stopped working InfluxDB 1

Our InfluxDB constantly crashes, with the pod’s memory request currently set at 32GB and the limit at 64GB. It stopped crashing after the request was set to 32GB, but it then stopped working until restarted. Memory hit 10GB after a week and it stopped working. I removed all customization in the data section of the configMap so the defaults could take effect. We have tried many parameters that didn’t resolve the issue. We have had more than 10 incidents in the last 30-40 days and we need a permanent solution now. Version number is 1.8.0.
Databases: 40+ databases with 364-day retention
Shards: excessive shard counts due to 30-day shard durations × 364-day retention
Memory usage: default cache settings
Replication: replicationN = 1 for all databases (single pod)
Writes per hour: 845 requests/hour and 40-45k per day

We have another environment on v1.8.3 with a similar configuration, but running standalone as a service with much higher writes (500k/hour), that has been working fine with no issues.

Any insight to help determine the root cause or isolate the issue would be appreciated.
I have used Google and AI to troubleshoot and gather details, with no success.

3 posts - 3 participants

Read full topic

https://community.influxdata.com/t/oom-on-influxdb-or-influxdb-stopped-working/58410 Tue, 07 Apr 2026 20:56:22 +0000
Telegraf nftables plugin - unmarshall error in set Telegraf

I’m unable to use the nftables plugin in Telegraf; I get this error message:

Error in plugin: parsing command output failed: unable to parse set: json: cannot unmarshal string into Go struct field namedSet.elem of type nftables.elem

I have this set defined:

set AllowedIPs {
  type ipv4_addr
  elements = { 10.47.55.250, 10.47.56.232 }
}

the nft --json list table firewall output:

    {
      "set": {
        "family": "inet",
        "name": "AllowedIPs",
        "table": "firewall",
        "type": "ipv4_addr",
        "handle": 18,
        "flags": [
          "interval"
        ],
        "elem": [
          "10.47.55.250",
          "10.47.56.232"
        ]
      }
    }

This issue looks to be because the plugin expects the elements to look more like:

"elem": [{ "val": "10.47.55.250" }]

Any ideas?

5 posts - 2 participants

Read full topic

https://community.influxdata.com/t/telegraf-nftables-plugin-unmarshall-error-in-set/58409 Tue, 07 Apr 2026 20:42:13 +0000
Disk_used_percent is missing for iscsi disks Telegraf

I can’t get Telegraf to include disk_used_percent for iSCSI disks. The metrics for the mount point “/” are properly delivered, but the metrics for “/var/lib/docker/volumes” and “/usr/local/backup” are missing. The Docker image used is the most recent available. The filesystem used on the disks is ext4. Any ideas what I forgot?

compose.yml

  telegraf:
    container_name: grafana-telegraf
    image: telegraf:alpine
    user: telegraf:989
    pid: host
    network_mode: host
    restart: unless-stopped
    depends_on:
      - prometheus
    volumes:
      - ./telegraf.conf:/etc/telegraf/telegraf.conf:ro
#      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /var/run:/var/run:ro
      - /proc/net:/hostfs/proc/net:ro
      - /dev:/dev:ro

telegraf.conf


[agent]
  interval = "15s"
  round_interval = true
  metric_batch_size = 1000
  metric_buffer_limit = 10000
  collection_jitter = "0s"
  flush_interval = "15s"
  flush_jitter = "0s"
  precision = "0s"


[[inputs.cpu]]
  percpu = true
  totalcpu = true
  collect_cpu_time = false
  report_active = false
  core_tags = false


[[inputs.mem]]


[[inputs.swap]]


[[inputs.system]]
  fieldexclude = ["uptime_format"]

[[inputs.net]]
  interfaces = ["enp*"]


[[inputs.disk]]
  mount_points = ["/","/var/lib/docker/volumes","/usr/local/backup"]

[[inputs.diskio]]
  skip_serial_number = true


[[inputs.docker]]
  endpoint = "unix:///var/run/docker.sock"
  gather_services = false
  source_tag = false
  container_name_include = []
  container_name_exclude = []
  storage_objects = ["container", "image", "volume"]
  timeout = "5s"
  perdevice_include = ["cpu"]
  total_include = ["cpu", "blkio", "network"]
  docker_label_include = ["com_docker_compose_project", "container_name"]
  docker_label_exclude = ["container_id", "container_image", "container_status", "container_version", "server_version", "host"]
  namedrop = ["container_id", "container_image", "container_status", "container_version", "server_version", "host"]
  fieldexclude = ["container_id","health_status"]


[[outputs.http]]
  url = "http://10.4.0.3:9090/api/v1/write"
  data_format = "prometheusremotewrite"
  use_batch_format = true

  [outputs.http.headers]
    Content-Type = "application/x-protobuf"
    Content-Encoding = "snappy"
    X-Prometheus-Remote-Write-Version = "0.1.0"

Greetings!

3 posts - 2 participants

Read full topic

https://community.influxdata.com/t/disk-used-percent-is-missing-for-iscsi-disks/58403 Fri, 03 Apr 2026 10:21:39 +0000
Unable to Delete a measurement in InfluxDB3-core InfluxDB 3

I am trying to delete a measurement I created, but after deleting it there seems to be a new measurement with the name: <measurement_name>-<time_of_deletion>

Not sure what is happening. I tried refreshing and restarting the core as well.
Is this some bug or am I missing something?

4 posts - 2 participants

Read full topic

https://community.influxdata.com/t/unable-to-delete-a-measurement-in-influxdb3-core/58402 Thu, 02 Apr 2026 08:02:55 +0000
Release Announcement: InfluxDB OSS 1.12.3 and InfluxDB Enterprise 1.12.3 InfluxDB 1

New InfluxDB OSS v1 and InfluxDB Enterprise v1 releases are now available:

These releases include significant performance improvements related to compaction and disk I/O, especially for high-cardinality workloads. If using any version of InfluxDB OSS v1 or InfluxDB Enterprise v1, we strongly recommend upgrading to 1.12.3.

1 post - 1 participant

Read full topic

https://community.influxdata.com/t/release-announcement-influxdb-oss-1-12-3-and-influxdb-enterprise-1-12-3/58401 Wed, 01 Apr 2026 15:11:48 +0000
influxDB3 writePoints InfluxDBApiHttpException InfluxDB 2

Sometimes when I use writePoints (Java) to write a list of points, I get an InfluxDBApiHttpException. The exception occurs in the middle of the list, let’s say at point number 50 of 100. My question is: did points 1-49 get written to the database, or do they all fail together? I’m working on determining this myself, but I thought someone here might have a quick answer.

We do not have NoSync turned on, so I believe under the hood it is using the v2 RequestClient.

3 posts - 2 participants

Read full topic

https://community.influxdata.com/t/influxdb3-writepoints-influxdbapihttpexception/58400 Tue, 31 Mar 2026 14:11:31 +0000
Mosquitto server statistics Telegraf

Hi,

I simply want to present Mosquitto server statistics in my InfluxDB/Grafana:

how many events over time are processed, and how much they load the CPU and memory of this virtual server.

I found different hints on how each topic (e.g. MQTT-enabled sensor data) can be monitored, so that Telegraf subscribes to different topics - BUT that is not what I mean!

I simply want a general overview of the Debian virtual server instance and its task as a broker overall. I hope I have described it correctly?

Is there a telegraf.conf as a simple example of how this can be addressed?

Thanks so far.

3 posts - 2 participants

Read full topic

https://community.influxdata.com/t/mosquitto-server-statistics/58399 Tue, 31 Mar 2026 10:33:13 +0000
[Announcement] Telegraf Enterprise now in public beta Systems

Get early access to Telegraf Controller and provide feedback to help shape the future of Telegraf Enterprise. Telegraf Enterprise is for organizations running Telegraf at scale and comprises two key components:

  • Telegraf Controller: A control plane (UI + API) that centralizes Telegraf configuration management and agent health visibility.
  • Telegraf Enterprise Support: Official support for Telegraf Controller and Telegraf plugins.

The Telegraf Enterprise beta primarily focuses on testing and gathering feedback on Telegraf Controller. If you’re interested and willing to test it out, we’d love to hear your thoughts!

Join the Telegraf Enterprise beta

2 posts - 1 participant

Read full topic

https://community.influxdata.com/t/announcement-telegraf-enterprise-now-in-public-beta/58393 Thu, 26 Mar 2026 14:38:33 +0000
InfluxDB could not see tags and fields InfluxDB 2

Hi guys, first of all I am really new to InfluxDB and Node-RED. I would like to send information in JSON format, and I used the InfluxDB node to make the connection, but I ran into a problem when I send this structure:

[
  {
    "measurement": "production_v3",
    "fields": {
      "temp": 5.5,
      "light": 678,
      "humidity": 51
    },
    "tags": {
      "location": "garden"
    },
    "timestamp": 5454257254
  }
]

Once I go back to InfluxDB, I cannot see the fields and tags correctly.

Node-RED seems to be working fine, so I’m not sure what I did incorrectly with this setup.

2 posts - 2 participants

Read full topic

https://community.influxdata.com/t/influxdb-could-not-see-tags-and-fields/58392 Thu, 26 Mar 2026 03:02:51 +0000
InfluxDB 1.8 → 2.x/3.x: recommended migration strategy for Docker-based HA setup? Systems

Hi,

I’m looking for advice on migrating from InfluxDB 1.8.10 to a newer version (2.x or 3.x) in a Home Assistant–managed environment.

Current setup

  • Platform: Home Assistant Supervised (Debian)

  • Home Assistant Core: 2026.3.3

  • Supervisor: 2026.03.2

  • InfluxDB: 1.8.10

  • InfluxDB add-on: 5.0.2

In this setup, InfluxDB runs as a Supervisor-managed Docker container (add-on), and Home Assistant writes time-series data into it (used by Grafana for dashboards).

Goal

Upgrade to a newer InfluxDB version without losing historical data, which is critical for long-term analysis.

Questions

  1. Is there an official or recommended migration path from InfluxDB 1.8 → 2.x or 3.x in this kind of containerized setup?

  2. Would you recommend:

    • in-place upgrade (if even possible), or

    • running a parallel instance and migrating data?

  3. What are the best tools/methods to migrate data:

    • influxd upgrade

    • export/import (line protocol)

    • replication/bridge approaches?

  4. Any known pitfalls when migrating from InfluxQL → Flux/SQL, especially for Grafana dashboards?

  5. Has anyone done this specifically in a Home Assistant / Docker-based environment?

Any real-world experiences or recommended approaches would be highly appreciated.

Thanks in advance!

2 posts - 2 participants

Read full topic

https://community.influxdata.com/t/influxdb-1-8-2-x-3-x-recommended-migration-strategy-for-docker-based-ha-setup/58390 Wed, 25 Mar 2026 17:17:51 +0000
Notice: Changes to Linux memory usage tracking with the Telegraf `mem` input plugin Telegraf

Starting in Telegraf v1.36.0, the used_percent field reported by the mem input plugin on Linux increased by roughly 6-20% for the same memory state. This was caused by an upstream change in the gopsutil dependency (v4.25.8), which changed the Used memory calculation from Total - Free - Buffers - Cached to Total - Available (using the kernel’s MemAvailable from /proc/meminfo).

The new formula is more accurate as the old one assumed all cached and buffered memory was immediately reclaimable, which is not always the case. Dashboards or alerts based on used_percent thresholds may need adjustment. The raw fields (free, buffered, cached, available, total) are unaffected and can be used to compute either definition in queries.
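
To make the shift concrete, here is a small sketch applying both definitions to the same set of /proc/meminfo values. The numbers are made up for illustration; only the two formulas come from the notice above:

```java
// Sketch: old vs. new "used" definitions applied to the same
// illustrative /proc/meminfo values (in kB). Numbers are made up.
public class MemUsedPercent {
    public static void main(String[] args) {
        long total = 16_000_000, free = 500_000, buffers = 300_000,
             cached = 6_000_000, available = 5_000_000;

        // Pre-v1.36.0 (gopsutil < v4.25.8): assumes all buffers/cache are reclaimable.
        double oldUsedPct = 100.0 * (total - free - buffers - cached) / total;
        // v1.36.0+: uses the kernel's MemAvailable estimate.
        double newUsedPct = 100.0 * (total - available) / total;

        System.out.printf("old used_percent: %.2f%%%n", oldUsedPct); // 57.50%
        System.out.printf("new used_percent: %.2f%%%n", newUsedPct); // 68.75%
    }
}
```

With these illustrative numbers the new definition reports about 11 percentage points more, in line with the 6-20% shift described above, because MemAvailable counts some cached memory as not immediately reclaimable.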

Telegraf Release Notes

2 posts - 1 participant

Read full topic

https://community.influxdata.com/t/notice-changes-to-linux-memory-usage-tracking-with-the-telegraf-mem-input-plugin/58387 Tue, 24 Mar 2026 16:31:00 +0000
How to parse multiple values from a single string field? InfluxDB 2

Home Assistant provides day-ahead electricity prices as attributes of the entity “current electricity price”, so in InfluxDB those values all appear together in the field “prices_str” as a string:

(InfluxQL query shown for compactness:)

SELECT first("prices_str") FROM "€/kWh" WHERE ("entity_id"::tag = 'current_electricity_market_price') AND $timeFilter GROUP BY time(1d) fill(null)

The output is (newlines introduced by me):

Time: 23/03/2026, 01:00:00 

First:
[
{'from': datetime.datetime(2026, 3, 22, 23, 0, tzinfo=tzutc()), 'till': datetime.datetime(2026, 3, 23, 0, 0, tzinfo=tzutc()), 'price': 0.12251},
{'from': datetime.datetime(2026, 3, 23, 0, 0, tzinfo=tzutc()), 'till': datetime.datetime(2026, 3, 23, 1, 0, tzinfo=tzutc()), 'price': 0.12084},
{'from': datetime.datetime(2026, 3, 23, 1, 0, tzinfo=tzutc()), 'till': datetime.datetime(2026, 3, 23, 2, 0, tzinfo=tzutc()), 'price': 0.12097},
{'from': datetime.datetime(2026, 3, 23, 2, 0, tzinfo=tzutc()), 'till': datetime.datetime(2026, 3, 23, 3, 0, tzinfo=tzutc()), 'price': 0.11966},
{'from': datetime.datetime(2026, 3, 23, 3, 0, tzinfo=tzutc()), 'till': datetime.datetime(2026, 3, 23, 4, 0, tzinfo=tzutc()), 'price': 0.12543},
{'from': datetime.datetime(2026, 3, 23, 4, 0, tzinfo=tzutc()), 'till': datetime.datetime(2026, 3, 23, 5, 0, tzinfo=tzutc()), 'price': 0.14252},
{'from': datetime.datetime(2026, 3, 23, 5, 0, tzinfo=tzutc()), 'till': datetime.datetime(2026, 3, 23, 6, 0, tzinfo=tzutc()), 'price': 0.17483},
{'from': datetime.datetime(2026, 3, 23, 6, 0, tzinfo=tzutc()), 'till': datetime.datetime(2026, 3, 23, 7, 0, tzinfo=tzutc()), 'price': 0.18474},
{'from': datetime.datetime(2026, 3, 23, 7, 0, tzinfo=tzutc()), 'till': datetime.datetime(2026, 3, 23, 8, 0, tzinfo=tzutc()), 'price': 0.14784},
{'from': datetime.datetime(2026, 3, 23, 8, 0, tzinfo=tzutc()), 'till': datetime.datetime(2026, 3, 23, 9, 0, tzinfo=tzutc()), 'price': 0.10681},
{'from': datetime.datetime(2026, 3, 23, 9, 0, tzinfo=tzutc()), 'till': datetime.datetime(2026, 3, 23, 10, 0, tzinfo=tzutc()), 'price': 0.07596},
{'from': datetime.datetime(2026, 3, 23, 10, 0, tzinfo=tzutc()), 'till': datetime.datetime(2026, 3, 23, 11, 0, tzinfo=tzutc()), 'price': 0.0452},
{'from': datetime.datetime(2026, 3, 23, 11, 0, tzinfo=tzutc()), 'till': datetime.datetime(2026, 3, 23, 12, 0, tzinfo=tzutc()), 'price': 0.04671},
{'from': datetime.datetime(2026, 3, 23, 12, 0, tzinfo=tzutc()), 'till': datetime.datetime(2026, 3, 23, 13, 0, tzinfo=tzutc()), 'price': 0.06637},
{'from': datetime.datetime(2026, 3, 23, 13, 0, tzinfo=tzutc()), 'till': datetime.datetime(2026, 3, 23, 14, 0, tzinfo=tzutc()), 'price': 0.08541},
{'from': datetime.datetime(2026, 3, 23, 14, 0, tzinfo=tzutc()), 'till': datetime.datetime(2026, 3, 23, 15, 0, tzinfo=tzutc()), 'price': 0.102},
{'from': datetime.datetime(2026, 3, 23, 15, 0, tzinfo=tzutc()), 'till': datetime.datetime(2026, 3, 23, 16, 0, tzinfo=tzutc()), 'price': 0.12346},
{'from': datetime.datetime(2026, 3, 23, 16, 0, tzinfo=tzutc()), 'till': datetime.datetime(2026, 3, 23, 17, 0, tzinfo=tzutc()), 'price': 0.1729},
{'from': datetime.datetime(2026, 3, 23, 17, 0, tzinfo=tzutc()), 'till': datetime.datetime(2026, 3, 23, 18, 0, tzinfo=tzutc()), 'price': 0.2495},
{'from': datetime.datetime(2026, 3, 23, 18, 0, tzinfo=tzutc()), 'till': datetime.datetime(2026, 3, 23, 19, 0, tzinfo=tzutc()), 'price': 0.2389},
{'from': datetime.datetime(2026, 3, 23, 19, 0, tzinfo=tzutc()), 'till': datetime.datetime(2026, 3, 23, 20, 0, tzinfo=tzutc()), 'price': 0.1995},
{'from': datetime.datetime(2026, 3, 23, 20, 0, tzinfo=tzutc()), 'till': datetime.datetime(2026, 3, 23, 21, 0, tzinfo=tzutc()), 'price': 0.16809},
{'from': datetime.datetime(2026, 3, 23, 21, 0, tzinfo=tzutc()), 'till': datetime.datetime(2026, 3, 23, 22, 0, tzinfo=tzutc()), 'price': 0.15219},
{'from': datetime.datetime(2026, 3, 23, 22, 0, tzinfo=tzutc()), 'till': datetime.datetime(2026, 3, 23, 23, 0, tzinfo=tzutc()), 'price': 0.13706}
]

As you can see the time fields start at 23:00 of the previous day, since my timezone is +1.

I want to produce an output which can be plotted by Grafana (for example), so that in the morning I can see the hourly energy prices. The query will never be run for days in the past, since it only makes sense as a forecast.

Can it be done in Flux?

4 posts - 2 participants

Read full topic

https://community.influxdata.com/t/how-to-parse-multiple-values-from-a-single-string-field/58383 Tue, 24 Mar 2026 08:44:20 +0000
[Telegraf] Final aggregator plugin sending data but not visible Systems

Hello guys,

First of all thanks for this whole suite of software, it’s amazing.
Here is my setup :

Client MQTT → Broker Mosquitto ← Telegraf → InfluxDB 2 ← Grafana

Telegraf version : 1.37.2
InfluxDB version : v2.8.0

Here is my telegraf.conf content :

[agent]
  debug = true
  hostname = "telegraf"

[[inputs.mqtt_consumer]]
  name_override = "mqtt_binary_switch"
  servers = ["tcp://mosquitto:1883"]
  topics = [
        "monitoring/+/binary_sensor/+/state",
        "monitoring/+/switch/+/state"
  ]
  data_format = "value"
  data_type = "string"
  [[inputs.mqtt_consumer.topic_parsing]]
    topic = "monitoring/+/+/+/+"
    tags = "_/device/_/field/_"

[[processors.enum]]
  namepass = ["mqtt_binary_switch"]
  [[processors.enum.mapping]]
    fields = [ "value" ]
    [processors.enum.mapping.value_mappings]
      ON = 1
      OFF = 0
      "on" = 1
      "off" = 0
      true = 1
      false = 0

[[aggregators.final]]
  namepass = ["mqtt_binary_switch"]
  period = "3600s"
  drop_original = false
  keep_original_field_names = true
  output_strategy = "periodic"

My problem: I’m using an ESP32 with ESPHome to send periodic data via the MQTT component to an MQTT broker. Binary sensors (those configured above) are updated whenever they change. Most of them don’t change regularly (heat pump presets, for example).

Let’s say one of those switches went from OFF to ON 10 hours ago. If I only display the last 6 hours, I won’t see any value: Telegraf pulled the retained values when it logged in to the broker, but since the state hasn’t changed since then, no newer points exist in the display window.

I came across the final aggregator, and while it has some downsides, I thought it could help by sending the last data point to InfluxDB to fill the gap for shorter display time ranges.

Logs show that the data is sent :

2026-03-24T05:59:26Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2026-03-24T05:59:36Z D! [outputs.influxdb_v2] Wrote batch of 77 metrics in 8.672816ms
2026-03-24T05:59:36Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2026-03-24T05:59:46Z D! [outputs.influxdb_v2] Wrote batch of 109 metrics in 8.175897ms
2026-03-24T05:59:46Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2026-03-24T05:59:56Z D! [outputs.influxdb_v2] Wrote batch of 82 metrics in 8.852907ms
2026-03-24T05:59:56Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2026-03-24T06:00:00Z D! [aggregators.final] Updated aggregation range [2026-03-24 06:00:00 +0000 UTC, 2026-03-24 07:00:00 +0000 UTC]
2026-03-24T06:00:06Z D! [outputs.influxdb_v2] Wrote batch of 131 metrics in 105.086984ms
2026-03-24T06:00:06Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2026-03-24T06:00:16Z D! [outputs.influxdb_v2] Wrote batch of 104 metrics in 52.062887ms

Yet, when I display a time range that does not include Telegraf’s first connection to the broker, no data is returned (the red frames in the screenshot mark ranges that should return data):

Any help would be much appreciated.

4 posts - 3 participants

Read full topic

https://community.influxdata.com/t/telegraf-final-aggregator-plugin-sending-data-but-not-visible/58382 Tue, 24 Mar 2026 06:40:45 +0000
Incremental backup using rsync or lsyncd InfluxDB 2

Hello everyone,

I desperately need to ensure high availability for my current setup.

  • I have two servers in two different locations.
  • InfluxDB version: v2.7 OSS on bare metal

I tried to use a cronjob which uses influx backup/restore, but the problem is that it takes 10 hours to run since it performs a full backup.

My question is:

> Will it be possible to run rsync or lsyncd directly on the data at ~/.influxdbv2/engine (or any other value of engine-path) instead of running influx backup ?

I want to implement my own incremental backup using something like this to copy only the data from the last 24 hours:

# Copy only files whose status changed within the last 24 hours
# (-ctime -1; note that -ctime -0 matches nothing).
find ~/.influxdbv2/engine \
    -type f \
    -ctime -1 \
    -exec rsync -a {} USER@REMOTE:~/.influxdbv2/engine/ \;

Thanks !

6 posts - 3 participants

Read full topic

https://community.influxdata.com/t/incremental-backup-using-rsync-or-lsyncd/58379 Sun, 22 Mar 2026 08:52:06 +0000
New folder migration of an InfluxDB3 Enterprise DB InfluxDB 3

Under Windows 10, I need to change the folder of an InfluxDB3 Enterprise database. Is there any best practice for doing this?

I tried simply copying the whole data and subdirectories in the new folder and changing the directory in the configuration, but I ran into these problems:

  1. A large number of files were not recognized because of permission changes. I tried to update all permissions at once, but it did not work. Since the database contains around 70 GB of data and hundreds of thousands of files, I also tried some scripts, but after hours of scanning they were still far from finishing.
  2. The license is not recognized because it seems to be linked to the directory. I requested a new license for home use several times, but it was never sent to the email address provided. I tried at least five times.

Do you have any suggestions?

Federico

2 posts - 2 participants

Read full topic

https://community.influxdata.com/t/new-folder-migration-of-an-influxdb3-enterprise-db/58369 Tue, 17 Mar 2026 13:06:29 +0000
Telegraf is not automatically starting after reboot Systems

Hello,

I have installed InfluxDB and Telegraf in a Proxmox LXC container (system: Debian 13). I use the http input to collect data from my API.

I generated a new API token (under InfluxDB Load Data), and I start Telegraf in the console with the command: telegraf --config http://xxx.xxx.xxx.xxx:8086/api/v2/telegrafs/1069591a04a65000

In the Data Explorer I can see the data is written to my bucket within InfluxDB.

After I reboot the LXC container, Telegraf does not start automatically.

The command "journalctl --no-pager -u telegraf" shows the following messages:

Mar 17 08:03:11 lxc-INFLUXDB systemd[1]: Starting telegraf.service - Telegraf…
Mar 17 08:03:12 lxc-INFLUXDB (telegraf)[93]: telegraf.service: Referenced but unset environment variable evaluates to an empty string: TELEGRAF_OPTS
Mar 17 08:03:14 lxc-INFLUXDB telegraf[93]: 2026-03-17T08:03:14Z W! Strict environment variable handling is the new default starting with v1.38.0! If your configuration does not work with strict handling please explicitly add the --non-strict-env-handling flag to switch to the previous behavior!
Mar 17 08:03:14 lxc-INFLUXDB telegraf[93]: 2026-03-17T08:03:14Z I! Loading config: /etc/telegraf/telegraf.conf
Mar 17 08:03:14 lxc-INFLUXDB telegraf[93]: 2026-03-17T08:03:14Z E! [telegraf] Error running agent: no outputs found, probably invalid config file provided
Mar 17 08:03:14 lxc-INFLUXDB systemd[1]: telegraf.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 08:03:14 lxc-INFLUXDB systemd[1]: telegraf.service: Failed with result 'exit-code'.
Mar 17 08:03:14 lxc-INFLUXDB systemd[1]: Failed to start telegraf.service - Telegraf.
Mar 17 08:03:14 lxc-INFLUXDB systemd[1]: telegraf.service: Consumed 315ms CPU time, 163.7M memory peak.
Mar 17 08:03:14 lxc-INFLUXDB systemd[1]: telegraf.service: Scheduled restart job, restart counter is at 1.

I don’t know how to solve the problem.

Please can you help.
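For reference, the systemd unit knows nothing about the remote config URL used on the command line; it starts Telegraf against /etc/telegraf/telegraf.conf, which apparently contains no [[outputs]] section, hence the "no outputs found" error. A minimal sketch of a systemd drop-in that points the service at the same remote config (assuming the stock Debian unit at /usr/bin/telegraf; the URL is the one from the manual start):

```ini
# /etc/systemd/system/telegraf.service.d/remote-config.conf
# Create with: sudo systemctl edit telegraf
[Service]
# Clear the packaged ExecStart, then start with the remote config URL.
ExecStart=
ExecStart=/usr/bin/telegraf --config http://xxx.xxx.xxx.xxx:8086/api/v2/telegrafs/1069591a04a65000
```

After `sudo systemctl daemon-reload && sudo systemctl restart telegraf`, the service should load the same config that worked interactively. Note that fetching a config from the /api/v2/telegrafs endpoint also requires the API token to be visible to the service (e.g. an Environment=INFLUX_TOKEN=... line in the drop-in), since the manually started process may have inherited it from the shell.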

2 posts - 2 participants

Read full topic

]]>
https://community.influxdata.com/t/telegraf-is-not-automatically-starting-after-reboot/58368 Tue, 17 Mar 2026 09:30:40 +0000 No No No community.influxdata.com-topic-58368 Telegraf is not automatically starting after reboot
Synology - Docker - influxdb 2.0 - lost operator token? Systems Hi, how is it possible to get the original operator token back? I need it to have access via ioBroker Backitup. A new all-access token does not help.

Thx.

4 posts - 3 participants

Read full topic

]]>
https://community.influxdata.com/t/synology-docker-influxdb-2-0-lost-operator-token/58367 Mon, 16 Mar 2026 10:13:08 +0000 No No No community.influxdata.com-topic-58367 Synology - Docker - influxdb 2.0 - lost operator token?
InfluxDB 1.8 – All data disappeared after changing retention policies InfluxDB 1 Hi there,

I’m trying to understand what happened with my InfluxDB instance because all data prior to Feb 4 disappeared, and I suspect it may be related to changes I made to the retention policies.

My version:

InfluxDB shell version: 1.8.10

System context

Earlier that day some system maintenance was done:

475 2026-02-04 07:54:54 : apt update
478 2026-02-04 07:55:26 : apt upgrade
483 2026-02-04 08:01:18 : apt autoremove

InfluxDB logs

Later in the logs I can see this error related to disk space:

Feb 04 09:30:00 monitor influxd-systemd-start.sh[521]: ts=2026-02-04T08:30:00.005840Z lvl=info msg="Executing query" log_id=0zVLdLXl000 service=query query="SELECT mean(used_percent) AS used_percent FROM telegraf_intern.autogen.disk WHERE (host =~ /^proxmox$/ AND fstype = 'ext4') AND time >= now() - 6h AND time <= now() - 30s GROUP BY time(1m), path"

Feb 04 09:30:00 monitor influxd-systemd-start.sh[521]: ts=2026-02-04T08:30:00.306211Z lvl=info msg="Error writing snapshot" log_id=0zVLdLXl000 engine=tsm1 error="error opening new segment file for wal (1): write /var/lib/influxdb/wal/telegraf/autogen/1709/_00024.wal: no space left on device"

Feb 04 09:30:01 monitor influxd-systemd-start.sh[521]: ts=2026-02-04T08:30:01.306270Z lvl=info msg="Error writing snapshot" log_id=0zVLdLXl000 engine=tsm1 error="error opening new segment file for wal (1): write /var/lib/influxdb/wal/telegraf/autogen/1709/_00024.wal: no space left on device"

So at that moment the disk was full.

Investigation commands

Later in the timeline, I found this in the history:

470 2026-02-04 08:18:09 : ncdu /

This particular one was earlier, during maintenance, but I don’t think anyone deleted data using the delete (‘d’) key in the ncdu interface.

460 2026-02-04 10:47:11 : influx
461 2026-02-04 10:47:37 : df -h
462 2026-02-04 10:52:58 : du -sh /var/lib/influxdb/data/*
463 2026-02-04 10:54:14 : du -sh /var/lib/influxdb/data/telegraf/* | sort -h
464 2026-02-04 10:54:14 : du -sh /var/lib/influxdb/data/telegraf_intern/* | sort -h
465 2026-02-04 10:54:39 : du -sh /var/lib/influxdb/data/telegraf/autogen/* | sort -h
466 2026-02-04 10:54:39 : du -sh /var/lib/influxdb/data/telegraf_intern/autogen/* | sort -h

Retention policy changes

“Scrolling up” in the InfluxDB shell history, these commands were executed (in this order). Unfortunately I cannot tell when they ran or how much time passed between them; I cannot place either the day or the time of these commands relative to the rest of the logs.

USE telegraf;
CREATE RETENTION POLICY "18months" ON "telegraf" DURATION 78w REPLICATION 1 DEFAULT;

USE telegraf_intern;
CREATE RETENTION POLICY "18months" ON "telegraf_intern" DURATION 78w REPLICATION 1 DEFAULT;

USE icinga2;
CREATE RETENTION POLICY "18months" ON "icinga2" DURATION 78w REPLICATION 1 DEFAULT;

SHOW RETENTION POLICIES ON telegraf;
SHOW RETENTION POLICIES ON telegraf_intern;
SHOW RETENTION POLICIES ON icinga2;

DROP RETENTION POLICY "autogen" ON telegraf;
DROP RETENTION POLICY "autogen" ON telegraf_intern;
DROP RETENTION POLICY "dades_18_mesos" ON telegraf_intern;
DROP RETENTION POLICY "autogen" ON icinga2;

What I was trying to do

I noticed there were several retention policies:

  • autogen

  • dades_18_mesos

  • others

InfluxDB seemed to be storing more than 18 months of data, so I assumed the policies might be overlapping.

My reasoning was:

  • autogen has infinite retention

  • dades_18_mesos had 18 months for example

So I thought that since autogen was infinite, it effectively included the 18-month data anyway.

Because of that, I:

  1. Created new 18-month retention policies

  2. Set them as DEFAULT

  3. Then dropped the autogen policies

I assumed that data within the new 18-month policies would remain, and only the rest would be deleted.

Problem

All data prior to Feb 4 disappeared :sob:

Question

Did dropping the autogen retention policy delete all the data that was stored under it?

Even though I created the new 18months retention policy before dropping autogen, could that still have caused the loss of historical data?

Or could the disk-full situation have played a role in this? Or even the upgrade?

Any clarification about how InfluxDB handles retention policy deletion and data migration (or the lack of it) would be very helpful.
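For context on the question itself: in InfluxDB 1.x, DROP RETENTION POLICY immediately removes all shards (and therefore all data) stored under that policy, and creating a new default policy does not migrate existing data into it; each retention policy is its own storage bucket, and the new default only receives writes made after the switch. The non-destructive route for this goal would have been to alter the existing policy in place rather than replace it, e.g. (sketch for one database):

```sql
-- Keep the data where it is and just shorten the duration of the
-- policy that already holds it:
ALTER RETENTION POLICY "autogen" ON "telegraf" DURATION 78w DEFAULT
```

This matches the symptom described: the historical data lived under autogen and was deleted with it, while the new 18months policy held only the writes received after it became the default.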

5 posts - 2 participants

Read full topic

]]>
https://community.influxdata.com/t/influxdb-1-8-all-data-disappeared-after-changing-retention-policies/58366 Tue, 10 Mar 2026 09:40:09 +0000 No No No community.influxdata.com-topic-58366 InfluxDB 1.8 – All data disappeared after changing retention policies
Inspiration help: Monitoring SQL server and sending the data to Azure monitor Telegraf Hi all,

I am new to the InfluxData/Telegraf world, and what I’m trying to do is use a Telegraf Docker setup to monitor a SQL Server installation running on a VM—either in Azure or Azure Stack/Local.

I have a running Docker container on Rocky Linux.

The Docker container is running, it connects to one of our SQL Servers, and it is successfully sending metrics to Azure Monitor.

Some of the data is useful when it ends up as metrics, but most of it doesn’t make much sense in Azure Metrics.

It might be due to my limited experience with Telegraf, but what I’d like to achieve is a setup where I can create a dashboard showing the state and performance of the SQL Server, as well as the state and performance of the individual databases.
We build dashboards in SquaredUp, which allows us to query almost anything in Azure, so it doesn’t matter too much whether the data ends up in Metrics or Log Analytics—but personally, I would prefer Log Analytics.

My Telegraf config file currently looks like this:
(Of course, I have anonymized it.)

[agent]
interval = "10s" # Collect every 10s; Azure Monitor output will aggregate into 1m buckets
round_interval = true
flush_interval = "15s"
flush_jitter = "0s"
metric_batch_size = 1000
metric_buffer_limit = 20000
precision = "1s"
omit_hostname = false
debug = true
quiet = false

#========================================================================
# INPUTS
#========================================================================

[[inputs.sqlserver]]
servers = ["Server=1.2.3.4;User Id=telegraf;Password=pw;app name=telegraf;"]
database_type = "SQLServer"
query_timeout = "30s"

#========================================================================
# OUTPUTS
#========================================================================

[[outputs.azure_monitor]]
timeout = "20s"
region = "westeurope"
resource_id = "/subscriptions/xxx/resourceGroups/rg-xxxx-vm-p-we/providers/Microsoft.Compute/virtualMachines/xxx"
namespace_prefix = "Telegraf/"
strings_as_dimensions = false

If I change strings_as_dimensions to true, I get a lot of errors about the length of the input.

So—does anyone have suggestions for how I can make the data more useful in Azure Monitor and SquaredUp?
Or any good ideas for an alternative approach to reach the same goal?

Regards

Jan Dam

3 posts - 2 participants

Read full topic

]]>
https://community.influxdata.com/t/inspiration-help-monitoring-sql-server-and-sending-the-data-to-azure-monitor/58352 Fri, 06 Mar 2026 08:50:42 +0000 No No No community.influxdata.com-topic-58352 Inspiration help: Monitoring SQL server and sending the data to Azure monitor
Influx - INSERT value not working InfluxDB 1 Hello

How can I insert a value into my Influx 1.12 DB? I have 1.x-c9a9af2d63 (I think that’s 1.12) on a Raspberry Pi 3 B+ and I am collecting values from my gasmeter into the db “gaszaehler” with field “zaehlerstand”:

> use gaszaehler 
Using database gaszaehler
> select * from zaehlerstand order by time desc limit 1
name: zaehlerstand
time                kubikmeter
----                ----------
1772604767445944143 67674.66

So now I tried to insert my own data into this DB (as in the doc example) with:

> insert zaehlerstand, kubikmeter=67674.661
ERR: {"error":"unable to parse 'zaehlerstand, kubikmeter=67674.661': missing tag key"}

But neither this works:

> insert zaehlerstand, kubikmeter=67674.661,time=1772604767445944144
ERR: {"error":"unable to parse 'zaehlerstand, kubikmeter=67674.661,time=1772604767445944144': missing tag key"}

How is it done?
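(For reference, the "missing tag key" error comes from the comma: in line protocol, a comma directly after the measurement name announces a tag set. Fields are separated from the measurement by a space, and an explicit timestamp goes at the end, also space-separated rather than as a field. A sketch of the corrected statements:)

```
INSERT zaehlerstand kubikmeter=67674.661
INSERT zaehlerstand kubikmeter=67674.661 1772604767445944144
```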

Thanks, frank

4 posts - 2 participants

Read full topic

]]>
https://community.influxdata.com/t/influx-insert-value-not-working/58347 Wed, 04 Mar 2026 07:34:41 +0000 No No No community.influxdata.com-topic-58347 Influx - INSERT value not working
C# .NET v3 Client - Get all fields Client SDKs I’m currently working with the v3 C# API and wonder what the best way is to get all fields/measurements of a point.

The only solution I have found so far is to use GetFieldNames and query each one manually:

var result = _influxClient.QueryPoints(query: query, namedParameters: new Dictionary<string, object>
{
    { "escapedStart", start.ToString("yyyy-MM-dd'T'HH:mm:ss'Z'") },
    { "escapedEnd", end.ToString("yyyy-MM-dd'T'HH:mm:ss'Z'") },
    { "escapedStationId", stationId.ToString() }
}, queryType: QueryType.InfluxQL);

await foreach (var item in result)
{
    var point = item.AsPoint();
    var fieldnames = point.GetFieldNames();
    foreach (var fieldname in fieldnames)
    {
        var field = point.GetDoubleField(fieldname);
    }
}

2 posts - 2 participants

Read full topic

]]>
https://community.influxdata.com/t/c-net-v3-client-get-all-fields/58345 Tue, 03 Mar 2026 22:47:51 +0000 No No No community.influxdata.com-topic-58345 C# .NET v3 Client - Get all fields
Telegraf Enterprise: Overview and Q&A (Webinar) Telegraf Interested in what’s coming in the world of Telegraf and metric collection? Join me Tuesday, March 10, 2026 for an introduction to Telegraf Enterprise, a soon-to-be-available offering designed to help teams manage Telegraf deployments at scale. Register for the Telegraf Enterprise: Overview and Q&A webinar.


2 posts - 1 participant

Read full topic

]]>
https://community.influxdata.com/t/telegraf-enterprise-overview-and-q-a-webinar/58344 Tue, 03 Mar 2026 16:27:35 +0000 No No No community.influxdata.com-topic-58344 Telegraf Enterprise: Overview and Q&A (Webinar)
Problem with mapping SQL timestamp with InfluxDB time in Telegraf Telegraf Hi,

I am trying to migrate data from MSSQL to InfluxDB for testing purposes.

I have set up the entire Telegraf config but still have a problem with mapping the "time_event" field from MSSQL to the timestamp in InfluxDB. The format from MSSQL is datetime2(7). I tried converting it into different formats, without success. It still shows time_event as a field, and the timestamp is created by InfluxDB.
I seem to have the same problem with creating a tag: the "machine_id" field, defined as a tag, still shows up in the test logs as a field.

Below is my config for the SQL input. Most likely it is something basic that I am missing, but I will be super grateful for support.

[[inputs.sql.query]]
query = "SELECT time_event, machine_id, recipe, machine_state, machine_mode, machine_speed, machine_speed_t, good_count, reject_count FROM dbo.oee1 WHERE time_event > DATEADD(second, -120, GETUTCDATE())"
measurement = "production_oee"
time_column = "time_event"
tag_columns_include = ["machine_id"]
field_columns_int = ["machine_state", "machine_mode", "machine_speed_t", "good_count", "reject_count"]
field_columns_float = ["machine_speed"]
field_columns_string = ["recipe"]

[[processors.converter]]
[processors.converter.tags]
string = ["machine_id"]

[[processors.dedup]]
dedup_interval = "360s"

Logs from testing Telegraf; machine_id as well as time_event are still among the fields:
production_oee,host=xxx good_count=7499786i,machine_id="1021",machine_mode=1i,machine_speed=0,machine_speed_t=24i,machine_state=9i,recipe="xx",reject_count=120217i,time_event=1772537862974086100i 1772537863000000000

3 posts - 3 participants

Read full topic

]]>
https://community.influxdata.com/t/problem-with-mapping-sql-timestamp-with-influxdb-time-in-telegraf/58342 Tue, 03 Mar 2026 11:40:26 +0000 No No No community.influxdata.com-topic-58342 Problem with mapping SQL timestamp with InfluxDB time in Telegraf
Facing problem with importing csv data , in influxdb3 oss InfluxDB 3 I am attempting to write data to InfluxDB 3.0 using an annotated CSV file and the influxdb3 CLI tool. Despite following the documentation for annotated CSVs, I consistently receive a 400 Bad Request error. Most existing troubleshooting guides I’ve found appear to be for InfluxDB 2.0 and don’t seem to apply here.
The command I am using to insert:
influxdb3 write --token MY_TOKEN --file docData.csv --database newTest
#datatype measurement,tag,double,dateTime:RFC3339
m,host,usedPercent,time
mem,host1,64.23,2020-01-01T00:00:00Z
mem,host2,72.01,2020-01-01T00:00:00Z
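For context, unlike the InfluxDB 2.x tooling, the influxdb3 CLI's write command expects line protocol rather than annotated CSV, which would explain the 400 Bad Request. As a sketch, the rows above can be converted to line protocol with a few lines of Python (column names taken from the CSV above):

```python
import csv
import io
from datetime import datetime

# Annotated-CSV rows from the post; the #datatype header is dropped
# because line protocol encodes types inline instead.
CSV_DATA = """m,host,usedPercent,time
mem,host1,64.23,2020-01-01T00:00:00Z
mem,host2,72.01,2020-01-01T00:00:00Z
"""

def to_line_protocol(csv_text: str) -> list[str]:
    """Rewrite each row as '<measurement>,<tags> <fields> <ns_timestamp>'."""
    lines = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        ts = datetime.fromisoformat(row["time"].replace("Z", "+00:00"))
        ns = int(ts.timestamp()) * 10**9  # RFC3339 -> nanosecond epoch
        lines.append(f"{row['m']},host={row['host']} usedPercent={row['usedPercent']} {ns}")
    return lines

for line in to_line_protocol(CSV_DATA):
    print(line)
# mem,host=host1 usedPercent=64.23 1577836800000000000
# mem,host=host2 usedPercent=72.01 1577836800000000000
```

The resulting lines, saved to a .lp file, should then be accepted by the same command used above (influxdb3 write --token MY_TOKEN --file docData.lp --database newTest).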

2 posts - 2 participants

Read full topic

]]>
https://community.influxdata.com/t/facing-problem-with-importing-csv-data-in-influxdb3-oss/58336 Thu, 26 Feb 2026 14:40:38 +0000 No No No community.influxdata.com-topic-58336 Facing problem with importing csv data , in influxdb3 oss
Trigger input on startup as well as long interval Telegraf I have written an inputs.exec plugin to fetch the kernel version of the server running Telegraf. This works fine, and with 'interval = "2h"' it doesn’t clog my buckets.

After a kernel update I want to get the new version in my dashboard immediately. So I looked for a startup = true parameter, which doesn’t exist…

I saw ticket Feature Request: Allow collecting and sending metrics immediately on start · Issue #7293 · influxdata/telegraf · GitHub and it seems that the discussion stopped in 2021…

Is there any way to solve this dilemma without sending the data every 10 sec?
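One workaround sketch: Telegraf has a --once flag that performs a single collect/flush cycle and then exits, so a systemd drop-in (assuming a stock telegraf.service unit and the default config path) can push one round of metrics at boot before the normal 2h schedule takes over:

```ini
# /etc/systemd/system/telegraf.service.d/once-at-boot.conf
# (hypothetical drop-in; create via: sudo systemctl edit telegraf)
[Service]
# Run one collection + flush synchronously before the main agent starts;
# afterwards the regular interval = "2h" schedule applies.
ExecStartPre=/usr/bin/telegraf --config /etc/telegraf/telegraf.conf --once
```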

thanks Philipp

3 posts - 2 participants

Read full topic

]]>
https://community.influxdata.com/t/trigger-input-on-startup-as-well-as-long-interval/58335 Thu, 26 Feb 2026 10:26:53 +0000 No No No community.influxdata.com-topic-58335 Trigger input on startup as well as long interval
Loss of old measurements Systems Hello,

I’m new to working with InfluxDB3. A few months ago, I set up a database with the following characteristics:

OS: Windows (Binary execution)

Version: InfluxDB 3 Core

Storage: Local Object Store with more or less 9350 snapshot files

I had no problems with the database. But recently, the oldest measurements have been disappearing over time, without any retention policy defined. I’ve noticed that by deleting the most recent .info.json files from the snapshot folder, I can see older files that had disappeared, at the cost of no longer seeing the newer ones. I’ve been looking for information about this and haven’t found anything. Are there any limitations? Perhaps something is misconfigured?

Thanks in advance.

6 posts - 2 participants

Read full topic

]]>
https://community.influxdata.com/t/loss-of-old-measurements/58329 Mon, 23 Feb 2026 15:11:21 +0000 No No No community.influxdata.com-topic-58329 Loss of old measurements