
[Bug] [worker-node] docker-compose deploy: worker memory grows without limit until the container is killed #9157


Open

arvin-o-o opened this issue Apr 11, 2025 · 0 comments

Search before asking

  • I had searched in the issues and found no similar issues.

What happened

  • docker stats (see the sampling sketch after the monitoring output below)
CONTAINER ID   NAME                 CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O         PIDS
23a2384f8b2f   seatunnel_worker_2   107.98%   1.496GiB / 14.96GiB   10.00%    5.04GB / 2.08GB   0B / 1.24MB       1293
ee741d1c66d6   seatunnel_worker_1   130.42%   1.543GiB / 14.96GiB   10.31%    5.48GB / 2.28GB   0B / 1.26MB       1578
3c0ddb016c22   seatunnel_master     49.89%    1.558GiB / 14.96GiB   10.41%    2.63GB / 6.74GB   0B / 73.2MB       1857
3373aee11b9d   starrocks_be         98.01%    1.279GiB / 14.96GiB   8.55%     3.29GB / 6.96GB   654MB / 5.28GB    680
1c76c7613b73   starrocks_fe         0.66%     2.098GiB / 14.96GiB   14.03%    1.17GB / 3.19GB   1.59GB / 1.94GB   314

  • /hazelcast/rest/maps/system-monitoring-information
[
    {
        "isMaster": "true",
        "host": "172.16.0.2",
        "port": "5801",
        "processors": "4",
        "physical.memory.total": "15.0G",
        "physical.memory.free": "834.1M",
        "swap.space.total": "0",
        "swap.space.free": "0",
        "heap.memory.used": "774.4M",
        "heap.memory.free": "249.6M",
        "heap.memory.total": "1024.0M",
        "heap.memory.max": "1024.0M",
        "heap.memory.used/total": "75.62%",
        "heap.memory.used/max": "75.62%",
        "minor.gc.count": "137",
        "minor.gc.time": "9390ms",
        "major.gc.count": "0",
        "major.gc.time": "0ms",
        "load.process": "15.09%",
        "load.system": "89.38%",
        "load.systemAverage": "9.86",
        "thread.count": "1884",
        "thread.peakCount": "1909",
        "cluster.timeDiff": "0",
        "event.q.size": "0",
        "executor.q.async.size": "0",
        "executor.q.client.size": "0",
        "executor.q.client.query.size": "0",
        "executor.q.client.blocking.size": "0",
        "executor.q.query.size": "0",
        "executor.q.scheduled.size": "0",
        "executor.q.io.size": "0",
        "executor.q.system.size": "0",
        "executor.q.operations.size": "1455",
        "executor.q.priorityOperation.size": "0",
        "operations.completed.count": "304933",
        "executor.q.mapLoad.size": "0",
        "executor.q.mapLoadAllKeys.size": "0",
        "executor.q.cluster.size": "0",
        "executor.q.response.size": "7",
        "operations.running.count": "50",
        "operations.pending.invocations.percentage": "0.00%",
        "operations.pending.invocations.count": "0",
        "proxy.count": "10",
        "clientEndpoint.count": "0",
        "connection.active.count": "49",
        "client.connection.count": "0",
        "connection.count": "2"
    },
    {
        "isMaster": "false",
        "host": "172.16.0.4",
        "port": "5801",
        "processors": "4",
        "physical.memory.total": "15.0G",
        "physical.memory.free": "832.7M",
        "swap.space.total": "0",
        "swap.space.free": "0",
        "heap.memory.used": "860.4M",
        "heap.memory.free": "163.6M",
        "heap.memory.total": "1024.0M",
        "heap.memory.max": "1024.0M",
        "heap.memory.used/total": "84.01%",
        "heap.memory.used/max": "84.01%",
        "minor.gc.count": "1514",
        "minor.gc.time": "60200ms",
        "major.gc.count": "172",
        "major.gc.time": "402257ms",
        "load.process": "32.83%",
        "load.system": "99.09%",
        "load.systemAverage": "9.86",
        "thread.count": "1134",
        "thread.peakCount": "1134",
        "cluster.timeDiff": "-196",
        "event.q.size": "0",
        "executor.q.async.size": "0",
        "executor.q.client.size": "0",
        "executor.q.client.query.size": "0",
        "executor.q.client.blocking.size": "0",
        "executor.q.query.size": "0",
        "executor.q.scheduled.size": "0",
        "executor.q.io.size": "0",
        "executor.q.system.size": "0",
        "executor.q.operations.size": "2",
        "executor.q.priorityOperation.size": "0",
        "operations.completed.count": "17566",
        "executor.q.mapLoad.size": "0",
        "executor.q.mapLoadAllKeys.size": "0",
        "executor.q.cluster.size": "0",
        "executor.q.response.size": "0",
        "operations.running.count": "6",
        "operations.pending.invocations.percentage": "0.00%",
        "operations.pending.invocations.count": "0",
        "proxy.count": "10",
        "clientEndpoint.count": "0",
        "connection.active.count": "28",
        "client.connection.count": "0",
        "connection.count": "2"
    },
    {
        "isMaster": "false",
        "host": "172.16.0.3",
        "port": "5801",
        "processors": "4",
        "physical.memory.total": "15.0G",
        "physical.memory.free": "832.6M",
        "swap.space.total": "0",
        "swap.space.free": "0",
        "heap.memory.used": "829.5M",
        "heap.memory.free": "194.5M",
        "heap.memory.total": "1024.0M",
        "heap.memory.max": "1024.0M",
        "heap.memory.used/total": "81.00%",
        "heap.memory.used/max": "81.00%",
        "minor.gc.count": "1547",
        "minor.gc.time": "65612ms",
        "major.gc.count": "176",
        "major.gc.time": "404527ms",
        "load.process": "37.06%",
        "load.system": "98.83%",
        "load.systemAverage": "10.67",
        "thread.count": "1324",
        "thread.peakCount": "1324",
        "cluster.timeDiff": "-261",
        "event.q.size": "0",
        "executor.q.async.size": "0",
        "executor.q.client.size": "0",
        "executor.q.client.query.size": "0",
        "executor.q.client.blocking.size": "0",
        "executor.q.query.size": "0",
        "executor.q.scheduled.size": "0",
        "executor.q.io.size": "0",
        "executor.q.system.size": "0",
        "executor.q.operations.size": "18",
        "executor.q.priorityOperation.size": "0",
        "operations.completed.count": "18339",
        "executor.q.mapLoad.size": "0",
        "executor.q.mapLoadAllKeys.size": "0",
        "executor.q.cluster.size": "0",
        "executor.q.response.size": "1",
        "operations.running.count": "20",
        "operations.pending.invocations.percentage": "0.00%",
        "operations.pending.invocations.count": "0",
        "proxy.count": "10",
        "clientEndpoint.count": "0",
        "connection.active.count": "17",
        "client.connection.count": "0",
        "connection.count": "2"
    }
]
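
Both worker containers already sit near 1.5 GiB RSS, which roughly matches the 1 GiB heap plus 256 MB metaspace plus 256 MB direct memory configured in jvm_worker_options below, even before thread stacks are counted (-Xss512k with roughly 1300-1600 threads), and the usage keeps climbing until the container is killed. The sampling sketch below is one way to capture the trend over time; it only assumes the container names from the docker-compose file in this report.

#!/bin/sh
# Append a per-container memory sample to a log every 60 s so the growth
# of the worker containers can be charted later. (Sketch only; container
# names are taken from the docker-compose file below.)
while true; do
  date >> mem_trend.log
  docker stats --no-stream \
    --format '{{.Name}}\t{{.MemUsage}}\t{{.MemPerc}}' \
    seatunnel_master seatunnel_worker_1 seatunnel_worker_2 >> mem_trend.log
  sleep 60
done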

SeaTunnel Version

2.3.8

SeaTunnel Config

- docker-compose.yaml

networks:
  arvin_qa:
    external: true
    name: arvin_qa
services:
  seatunnel_master:
    container_name: seatunnel_master
    entrypoint: "/bin/sh -c \" export HZ_NETWORK_PUBLICADDRESS=172.16.0.2; /opt/seatunnel/bin/seatunnel-cluster.sh\
      \ -r master \"    \n"
    environment:
      ST_DOCKER_MEMBER_LIST: 172.16.0.2,172.16.0.3,172.16.0.4
    image: apache/seatunnel:2.3.8
    networks:
      arvin_qa:
        ipv4_address: 172.16.0.2
    ports:
    - published: 5801
      target: 5801
    volumes:
    - /home/arvin/seatunnel/config:/opt/seatunnel/config:rw
  seatunnel_worker1:
    container_name: seatunnel_worker_1
    depends_on:
      seatunnel_master:
        condition: service_started
    entrypoint: "/bin/sh -c \" /opt/seatunnel/bin/seatunnel-cluster.sh -r worker \"\
      \ \n"
    environment:
      ST_DOCKER_MEMBER_LIST: 172.16.0.2,172.16.0.3,172.16.0.4
    image: apache/seatunnel:2.3.8
    networks:
      arvin_qa:
        ipv4_address: 172.16.0.3
    volumes:
    - /home/arvin/seatunnel/worker1_config/jvm_worker_options:/opt/seatunnel/config/jvm_worker_options:rw
    - /home/arvin/seatunnel/worker1_config/seatunnel.yaml:/opt/seatunnel/config/seatunnel.yaml:rw
  seatunnel_worker2:
    container_name: seatunnel_worker_2
    depends_on:
      seatunnel_master:
        condition: service_started
    entrypoint: "/bin/sh -c \" /opt/seatunnel/bin/seatunnel-cluster.sh -r worker \"\
      \ \n"
    environment:
      ST_DOCKER_MEMBER_LIST: 172.16.0.2,172.16.0.3,172.16.0.4
    image: apache/seatunnel:2.3.8
    networks:
      arvin_qa:
        ipv4_address: 172.16.0.4
    volumes:
    - /home/arvin/seatunnel/worker2_config/jvm_worker_options:/opt/seatunnel/config/jvm_worker_options:rw
    - /home/arvin/seatunnel/worker2_config/seatunnel.yaml:/opt/seatunnel/config/seatunnel.yaml:rw

- jvm_master_options

# JVM Heap
-Xms1g
-Xmx1g

# JVM Dump
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/tmp/seatunnel/dump/zeta-server

# Metaspace
-XX:MaxMetaspaceSize=512M

# G1GC
-XX:+UseG1GC





- jvm_worker_options (identical for seatunnel_worker_1 and seatunnel_worker_2)

# JVM Heap
-Xms1g
-Xmx1g
-Xss512k

# JVM Dump
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/tmp/seatunnel/dump/zeta-server

# Metaspace
-XX:MaxMetaspaceSize=256M

# G1GC
-XX:+UseG1GC

-XX:NativeMemoryTracking=summary
-XX:MaxDirectMemorySize=256M

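
Since -XX:NativeMemoryTracking=summary is already enabled on the workers, the off-heap growth can in principle be broken down from inside the container. A minimal sketch, assuming the JDK's jcmd tool is present in the apache/seatunnel:2.3.8 image and that the worker's main class name contains "seatunnel" (adjust the grep otherwise):

#!/bin/sh
# Print the NMT breakdown (heap, thread stacks, metaspace, internal, ...)
# for the worker JVM. Running jcmd with no arguments lists the JVMs
# inside the container; the image and class-name assumptions above may
# need adjusting.
PID=$(docker exec seatunnel_worker_1 jcmd | grep -i seatunnel | awk '{print $1; exit}')
docker exec seatunnel_worker_1 jcmd "$PID" VM.native_memory summary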

- seatunnel.yaml

seatunnel:
  engine:
    classloader-cache-mode: true
    history-job-expire-minutes: 1440
    backup-count: 1
    queue-type: blockingqueue
    print-execution-info-interval: 60
    print-job-metrics-info-interval: 60
    slot-service:
      dynamic-slot: true
    checkpoint:
      interval: 10000
      timeout: 60000
      storage:
        type: hdfs
        max-retained: 3
        plugin-config:
          namespace: /tmp/seatunnel/checkpoint_snapshot
          storage.type: hdfs
          fs.defaultFS: file:///tmp/ # Ensure that the directory has write permission
    telemetry:
      metric:
        enabled: false

- all other settings: defaults

Running Command

docker-compose up -d

Error Exception

Worker node is killed.
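
A small sketch of how the kill can be checked for a memory cause after a worker dies (note that Docker only sets OOMKilled when a container memory limit is hit; a host-level OOM kill shows up in the kernel log instead):

#!/bin/sh
# Check whether the dead worker was OOM-killed and what exit code it returned,
# then look for OOM-killer entries in the host kernel log.
docker inspect -f 'OOMKilled={{.State.OOMKilled}} ExitCode={{.State.ExitCode}}' seatunnel_worker_1
dmesg | grep -iE 'out of memory|oom-killer' | tail -n 20

Exit code 137 together with an oom-killer entry in dmesg would point at memory pressure rather than a JVM OutOfMemoryError.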

Zeta or Flink or Spark Version

No response

Java or Scala Version

jdk8

Screenshots

No response

Are you willing to submit PR?

  • Yes I am willing to submit a PR!

Code of Conduct

@arvin-o-o arvin-o-o added the bug label Apr 11, 2025