Upgrading from v4.7.7 to v4.8.0 crashes (mainnet on lite full node) #6328

Closed
joeky888 opened this issue May 13, 2025 · 13 comments
joeky888 commented May 13, 2025

I have been hosting a java-tron lite full node for a while, and this is the first time I have no clue what to do, because I don't see any errors. The Docker image tronprotocol/java-tron:GreatVoyage-v4.7.7 runs fine on my server, but upgrading to v4.8.0 crashes. There is no error log.

Again, the following config works in v4.7.7.

To reproduce, download the latest lite node backup and re-deploy like this:

Docker Compose file

services:
  tron_litenode:
    image: tronprotocol/java-tron:GreatVoyage-v4.8.0
    container_name: tron_node
    restart: always
    ulimits:
      nproc: 4096
      nofile:
        soft: 4096
        hard: 4096
    command: -c "/java-tron/mainnet.conf" --log-config "/java-tron/logback.xml" -d "/java-tron/data" -w
    healthcheck:
      test:
        [
          "CMD",
          "curl",
          "-fs",
          "http://localhost:8090/wallet/getnodeinfo",
        ]
      interval: 10s
      timeout: 5s
      start_period: 120s
      retries: 10
    environment:
      - DEFAULT_JVM_OPTS="-Xmx22300m -Xms10g -XX:+UseConcMarkSweepGC"
    volumes:
      - /home/ubuntu/trondata:/java-tron/data
      - ./conf/logback.xml:/java-tron/logback.xml:ro
      - ./conf/mainnet.conf:/java-tron/mainnet.conf:ro
    logging:
      driver: "json-file"
      options:
          max-size: "1m"
    ports:
    #   - "8080:8080" # HTTP
    #   - "8090:8090" # HTTP
    #   - "18888:18888" # HTTP
      - "50051:50051" # Full node

./conf/logback.xml

<?xml version="1.0" encoding="UTF-8"?>

<configuration>

  <!-- Be sure to flush latest logs on exit -->
  <shutdownHook class="ch.qos.logback.core.hook.DelayingShutdownHook"/>

  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} %-5level [%t] [%c{1}]\(%F:%L\) %m%n</pattern>
    </encoder>
    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
      <level>INFO</level>
    </filter>
  </appender>

  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>

  <logger level="INFO" name="app"/>
  <logger level="INFO" name="net"/>
  <logger level="INFO" name="backup"/>
  <logger level="INFO" name="discover"/>
  <logger level="INFO" name="crypto"/>
  <logger level="INFO" name="utils"/>
  <logger level="INFO" name="actuator"/>
  <logger level="INFO" name="API"/>
  <logger level="INFO" name="witness"/>
  <logger level="INFO" name="DB"/>
  <logger level="INFO" name="capsule"/>
  <logger level="INFO" name="VM"/>

</configuration>

./conf/mainnet.conf

storage {
  # Directory for storing persistent data
  db.engine = "LEVELDB",
  db.sync = false,
  db.directory = "database",
  index.directory = "index",
  transHistory.switch = "off",
  # You can customize these 14 databases' configs:

  # account, account-index, asset-issue, block, block-index,
  # block_KDB, peers, properties, recent-block, trans,
  # utxo, votes, witness, witness_schedule.

  # Otherwise, db configs will remain default and data will be stored in
  # the path of "output-directory" or the one set by "-d" ("--output-directory").
  # settings that can improve leveldb performance .... start
  # note: this will increase the number of process fds; check your ulimit if a 'too many open files' error occurs
  # see https://github.com/tronprotocol/tips/blob/master/tip-343.md for detail
  # if you find block sync performance is low, you can try these settings
  #default = {
  #  maxOpenFiles = 100
  #}
  #defaultM = {
  #  maxOpenFiles = 500
  #}
  #defaultL = {
  #  maxOpenFiles = 1000
  #}
  # settings that can improve leveldb performance .... end
  # Attention: name is a required field that must be set !!!
  properties = [
    //    {
    //      name = "account",
    //      path = "storage_directory_test",
    //      createIfMissing = true,
    //      paranoidChecks = true,
    //      verifyChecksums = true,
    //      compressionType = 1,        // compressed with snappy
    //      blockSize = 4096,           // 4  KB =         4 * 1024 B
    //      writeBufferSize = 10485760, // 10 MB = 10 * 1024 * 1024 B
    //      cacheSize = 10485760,       // 10 MB = 10 * 1024 * 1024 B
    //      maxOpenFiles = 100
    //    },
    //    {
    //      name = "account-index",
    //      path = "storage_directory_test",
    //      createIfMissing = true,
    //      paranoidChecks = true,
    //      verifyChecksums = true,
    //      compressionType = 1,        // compressed with snappy
    //      blockSize = 4096,           // 4  KB =         4 * 1024 B
    //      writeBufferSize = 10485760, // 10 MB = 10 * 1024 * 1024 B
    //      cacheSize = 10485760,       // 10 MB = 10 * 1024 * 1024 B
    //      maxOpenFiles = 100
    //    },
  ]

  needToUpdateAsset = true

  //dbSettings is needed when using rocksdb as the storage implementation (db.engine="ROCKSDB").
  //we strongly recommend not modifying it unless you clearly understand what every item means.
  dbSettings = {
    levelNumber = 7
    //compactThreads = 32
    blocksize = 64  // n * KB
    maxBytesForLevelBase = 256  // n * MB
    maxBytesForLevelMultiplier = 10
    level0FileNumCompactionTrigger = 4
    targetFileSizeBase = 256  // n * MB
    targetFileSizeMultiplier = 1
  }

  //backup settings when using rocksdb as the storage implementation (db.engine="ROCKSDB").
  //if you want to use the backup plugin, please make sure db.engine is set to "ROCKSDB" above.
  backup = {
    enable = false  // indicate whether enable the backup plugin
    propPath = "prop.properties" // record which bak directory is valid
    bak1path = "bak1/database" // you must set two backup directories to prevent application halt unexpected(e.g. kill -9).
    bak2path = "bak2/database"
    frequency = 10000   // indicate backup db once every 10000 blocks processed.
  }

  balance.history.lookup = false

  # checkpoint.version = 2
  # checkpoint.sync = true

  # the estimated number of block transactions (default 1000, min 100, max 10000).
  # so the total number of cached transactions is 65536 * txCache.estimatedTransactions
  # txCache.estimatedTransactions = 1000

  # data root setting, for check data, currently, only reward-vi is used.

  # merkleRoot = {
  # reward-vi = 9debcb9924055500aaae98cdee10501c5c39d4daa75800a996f4bdda73dbccd8 // main-net, Sha256Hash, hexString
  # }
}

node.discovery = {
  enable = true
  persist = true
}

# custom stop condition
#node.shutdown = {
#  BlockTime  = "54 59 08 * * ?" # if block header time in persistent db matched.
#  BlockHeight = 33350800 # if block header height in persistent db matched.
#  BlockCount = 12 # block sync count after node start.
#}

node.backup {
  # udp listen port, each member should have the same configuration
  port = 10001

  # my priority, each member should use different priority
  priority = 8

  # time interval to send keepAlive message, each member should have the same configuration
  keepAliveInterval = 3000

  # peer's ip list, can't contain mine
  members = [
    # "ip",
    # "ip"
  ]
}

crypto {
  engine = "eckey"
}

# prometheus metrics start
node.metrics = {
  prometheus{
    enable=true
    port="9527"
  }
}
# prometheus metrics end

node {
  # trust node for solidity node
  # trustNode = "ip:port"
  trustNode = "127.0.0.1:50051"

  # expose extension api to public or not
  walletExtensionApi = true

  listen.port = 18888

  connection.timeout = 2

  fetchBlock.timeout = 200

  tcpNettyWorkThreadNum = 0

  udpNettyWorkThreadNum = 1

  # Number of validate sign thread, default availableProcessors / 2
  # validateSignThreadNum = 16

  maxConnections = 30

  minConnections = 8

  minActiveConnections = 3

  maxConnectionsWithSameIp = 2

  maxHttpConnectNumber = 50

  minParticipationRate = 15

  isOpenFullTcpDisconnect = false

  p2p {
    version = 11111 # 11111: Mainnet; 20180622: Nile testnet
  }

  active = [
    # Active establish connection in any case
    # Sample entries:
    # "ip:port",
    # "ip:port"
  ]

  passive = [
    # Passive accept connection in any case
    # Sample entries:
    # "ip:port",
    # "ip:port"
  ]

  fastForward = [
    "100.26.245.209:18888",
    "15.188.6.125:18888"
  ]

  http {
    fullNodeEnable = true
    fullNodePort = 8090
    solidityEnable = true
    solidityPort = 8091
  }

  rpc {
    port = 50051
    #solidityPort = 50061
    # Number of gRPC thread, default availableProcessors / 2
    # thread = 16

    # The maximum number of concurrent calls permitted for each incoming connection
    # maxConcurrentCallsPerConnection =

    # The HTTP/2 flow control window, default 1MB
    # flowControlWindow =

    # Connection being idle for longer than which will be gracefully terminated
    maxConnectionIdleInMillis = 60000

    # Connection lasting longer than which will be gracefully terminated
    # maxConnectionAgeInMillis =

    # The maximum message size allowed to be received on the server, default 4MB
    # maxMessageSize =

    # The maximum size of header list allowed to be received, default 8192
    # maxHeaderListSize =

    # Transactions can only be broadcast if the number of effective connections is reached.
    minEffectiveConnection = 1

    # The switch of the reflection service, effective for all gRPC services
    # reflectionService = true
  }

  # number of solidity threads in the FullNode.
  # If accessing the solidity rpc and http interfaces times out, you could increase the number of threads.
  # The default value is the number of cpu cores of the machine.
  #solidity.threads = 8

  # Limits the maximum percentage (default 75%) of producing block interval
  # to provide sufficient time to perform other operations e.g. broadcast block
  # blockProducedTimeOut = 75

  # Limits the maximum number (default 700) of transaction from network layer
  # netMaxTrxPerSecond = 700

  # Whether to enable the node detection function, default false
  # nodeDetectEnable = false

  # use your ipv6 address for node discovery and tcp connection, default false
  # enableIpv6 = false

  # if your node's highest block num is lower than all your peers', try to acquire a new connection. default false
  # effectiveCheckEnable = false

  # Dynamic loading configuration function, disabled by default
  # dynamicConfig = {
    # enable = false
    # Configuration file change check interval, default is 600 seconds
    # checkInterval = 600
  # }

  dns {
    # dns urls to get nodes, url format tree://{pubkey}@{domain}, default empty
    treeUrls = [
      #"tree://AKMQMNAJJBL73LXWPXDI4I5ZWWIZ4AWO34DWQ636QOBBXNFXH3LQS@main.trondisco.net", //offical dns tree
    ]

    # enable or disable dns publish, default false
    # publish = false

    # dns domain to publish nodes, required if publish is true
    # dnsDomain = "nodes1.example.org"

    # dns private key used to publish, required if publish is true, hex string of length 64
    # dnsPrivate = "b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291"

    # known dns urls to publish if publish is true, url format tree://{pubkey}@{domain}, default empty
    # knownUrls = [
    #"tree://APFGGTFOBVE2ZNAB3CSMNNX6RRK3ODIRLP2AA5U4YFAA6MSYZUYTQ@nodes2.example.org",
    # ]

    # staticNodes = [
    # static nodes to published on dns
    # Sample entries:
    # "ip:port",
    # "ip:port"
    # ]

    # merge several nodes into a leaf of tree, should be 1~5
    # maxMergeSize = 5

    # only when the nodes' change percentage is bigger than the threshold do we update data on dns
    # changeThreshold = 0.1

    # dns server to publish, required if publish is true, only aws or aliyun is supported
    # serverType = "aws"

    # access key id of aws or aliyun api, required if publish is true, string
    # accessKeyId = "your-key-id"

    # access key secret of aws or aliyun api, required if publish is true, string
    # accessKeySecret = "your-key-secret"

    # if publish is true and serverType is aliyun, it's the endpoint of the aliyun dns server, string
    # aliyunDnsEndpoint = "alidns.aliyuncs.com"

    # if publish is true and serverType is aws, it's region of aws api, such as "eu-south-1", string
    # awsRegion = "us-east-1"

    # if publish is true and server-type is aws, it's host zone id of aws's domain, string
    # awsHostZoneId = "your-host-zone-id"
  }

  # open the history query APIs(http&GRPC) when node is a lite fullNode,
  # like {getBlockByNum, getBlockByID, getTransactionByID...}.
  # default: false.
  # note: the above APIs may return null even if blocks and transactions actually are on the blockchain
  # when opened on a lite fullnode. only open it if the consequences are clearly understood
  openHistoryQueryWhenLiteFN = false # Set to true if need more apis for lite full node

  jsonrpc {
    # Note: If you turn on jsonrpc and run it for a while and then turn it off, you will not
    # be able to get the data from eth_getLogs for that period of time.

    # httpFullNodeEnable = true
    # httpFullNodePort = 8545
    # httpSolidityEnable = true
    # httpSolidityPort = 8555
    # httpPBFTEnable = true
    # httpPBFTPort = 8565
  }

  # Disabled api list, it will work for http, rpc and pbft, both fullnode and soliditynode,
  # but not jsonrpc.
  # Sample: The setting is case insensitive, GetNowBlock2 is equal to getnowblock2
  #
  # disabledApi = [
  #   "getaccount",
  #   "getnowblock2"
  # ]
}

## rate limiter config
rate.limiter = {
  # Every api can be given a specific rate limit strategy. Three strategies are supported: GlobalPreemptibleAdapter, IPQPSRateLimiterAdapter, QpsRateLimiterAdapter.
  # GlobalPreemptibleAdapter: permit is the number of preemptible resources; every client must acquire one resource
  #       before making the request, and the resource is released automatically after the response is received. permit should be an Integer.
  # QpsRateLimiterAdapter: qps is the average number of requests per second supported by the server; it can be a Double or an Integer.
  # IPQPSRateLimiterAdapter: similar to the QpsRateLimiterAdapter, but per IP; qps can be a Double or an Integer.
  # If not set, the "default strategy" is used. The "default strategy" is based on QpsRateLimiterAdapter, with qps set to 10000.
  #
  # Sample entries:
  #
  http = [
    #  {
    #    component = "GetNowBlockServlet",
    #    strategy = "GlobalPreemptibleAdapter",
    #    paramString = "permit=1"
    #  },

    #  {
    #    component = "GetAccountServlet",
    #    strategy = "IPQPSRateLimiterAdapter",
    #    paramString = "qps=1"
    #  },

    #  {
    #    component = "ListWitnessesServlet",
    #    strategy = "QpsRateLimiterAdapter",
    #    paramString = "qps=1"
    #  }
  ],

  rpc = [
    #  {
    #    component = "protocol.Wallet/GetBlockByLatestNum2",
    #    strategy = "GlobalPreemptibleAdapter",
    #    paramString = "permit=1"
    #  },

    #  {
    #    component = "protocol.Wallet/GetAccount",
    #    strategy = "IPQPSRateLimiterAdapter",
    #    paramString = "qps=1"
    #  },

    #  {
    #    component = "protocol.Wallet/ListWitnesses",
    #    strategy = "QpsRateLimiterAdapter",
    #    paramString = "qps=1"
    #  },
  ]

  # global qps, default 50000
  # global.qps = 50000
  # IP-based global qps, default 10000
  # global.ip.qps = 10000
}



seed.node = {
  # List of the seed nodes
  # Seed nodes are stable full nodes
  # example:
  # ip.list = [
  #   "ip:port",
  #   "ip:port"
  # ]
  ip.list = [
    "3.225.171.164:18888",
    "52.53.189.99:18888",
    "18.196.99.16:18888",
    "34.253.187.192:18888",
    "18.133.82.227:18888",
    "35.180.51.163:18888",
    "54.252.224.209:18888",
    "18.231.27.82:18888",
    "52.15.93.92:18888",
    "34.220.77.106:18888",
    "15.207.144.3:18888",
    "13.124.62.58:18888",
    "54.151.226.240:18888",
    "35.174.93.198:18888",
    "18.210.241.149:18888",
    "54.177.115.127:18888",
    "54.254.131.82:18888",
    "18.167.171.167:18888",
    "54.167.11.177:18888",
    "35.74.7.196:18888",
    "52.196.244.176:18888",
    "54.248.129.19:18888",
    "43.198.142.160:18888",
    "3.0.214.7:18888",
    "54.153.59.116:18888",
    "54.153.94.160:18888",
    "54.82.161.39:18888",
    "54.179.207.68:18888",
    "18.142.82.44:18888",
    "18.163.230.203:18888",
    # "[2a05:d014:1f2f:2600:1b15:921:d60b:4c60]:18888", // use this if support ipv6
    # "[2600:1f18:7260:f400:8947:ebf3:78a0:282b]:18888", // use this if support ipv6
  ]
}

genesis.block = {
  # Reserve balance
  assets = [
    {
      accountName = "Zion"
      accountType = "AssetIssue"
      address = "TLLM21wteSPs4hKjbxgmH1L6poyMjeTbHm"
      balance = "99000000000000000"
    },
    {
      accountName = "Sun"
      accountType = "AssetIssue"
      address = "TXmVpin5vq5gdZsciyyjdZgKRUju4st1wM"
      balance = "0"
    },
    {
      accountName = "Blackhole"
      accountType = "AssetIssue"
      address = "TLsV52sRDL79HXGGm9yzwKibb6BeruhUzy"
      balance = "-9223372036854775808"
    }
  ]

  witnesses = [
    {
      address: THKJYuUmMKKARNf7s2VT51g5uPY6KEqnat,
      url = "http://gr1.com/",
      voteCount = 100000026
    },
    {
      address: TVDmPWGYxgi5DNeW8hXrzrhY8Y6zgxPNg4,
      url = "http://gr2.com/",
      voteCount = 100000025
    },
    {
      address: TWKZN1JJPFydd5rMgMCV5aZTSiwmoksSZv,
      url = "http://gr3.com/",
      voteCount = 100000024
    },
    {
      address: TDarXEG2rAD57oa7JTK785Yb2Et32UzY32,
      url = "http://gr4.com/",
      voteCount = 100000023
    },
    {
      address: TAmFfS4Tmm8yKeoqZN8x51ASwdQBdnVizt,
      url = "http://gr5.com/",
      voteCount = 100000022
    },
    {
      address: TK6V5Pw2UWQWpySnZyCDZaAvu1y48oRgXN,
      url = "http://gr6.com/",
      voteCount = 100000021
    },
    {
      address: TGqFJPFiEqdZx52ZR4QcKHz4Zr3QXA24VL,
      url = "http://gr7.com/",
      voteCount = 100000020
    },
    {
      address: TC1ZCj9Ne3j5v3TLx5ZCDLD55MU9g3XqQW,
      url = "http://gr8.com/",
      voteCount = 100000019
    },
    {
      address: TWm3id3mrQ42guf7c4oVpYExyTYnEGy3JL,
      url = "http://gr9.com/",
      voteCount = 100000018
    },
    {
      address: TCvwc3FV3ssq2rD82rMmjhT4PVXYTsFcKV,
      url = "http://gr10.com/",
      voteCount = 100000017
    },
    {
      address: TFuC2Qge4GxA2U9abKxk1pw3YZvGM5XRir,
      url = "http://gr11.com/",
      voteCount = 100000016
    },
    {
      address: TNGoca1VHC6Y5Jd2B1VFpFEhizVk92Rz85,
      url = "http://gr12.com/",
      voteCount = 100000015
    },
    {
      address: TLCjmH6SqGK8twZ9XrBDWpBbfyvEXihhNS,
      url = "http://gr13.com/",
      voteCount = 100000014
    },
    {
      address: TEEzguTtCihbRPfjf1CvW8Euxz1kKuvtR9,
      url = "http://gr14.com/",
      voteCount = 100000013
    },
    {
      address: TZHvwiw9cehbMxrtTbmAexm9oPo4eFFvLS,
      url = "http://gr15.com/",
      voteCount = 100000012
    },
    {
      address: TGK6iAKgBmHeQyp5hn3imB71EDnFPkXiPR,
      url = "http://gr16.com/",
      voteCount = 100000011
    },
    {
      address: TLaqfGrxZ3dykAFps7M2B4gETTX1yixPgN,
      url = "http://gr17.com/",
      voteCount = 100000010
    },
    {
      address: TX3ZceVew6yLC5hWTXnjrUFtiFfUDGKGty,
      url = "http://gr18.com/",
      voteCount = 100000009
    },
    {
      address: TYednHaV9zXpnPchSywVpnseQxY9Pxw4do,
      url = "http://gr19.com/",
      voteCount = 100000008
    },
    {
      address: TCf5cqLffPccEY7hcsabiFnMfdipfyryvr,
      url = "http://gr20.com/",
      voteCount = 100000007
    },
    {
      address: TAa14iLEKPAetX49mzaxZmH6saRxcX7dT5,
      url = "http://gr21.com/",
      voteCount = 100000006
    },
    {
      address: TBYsHxDmFaRmfCF3jZNmgeJE8sDnTNKHbz,
      url = "http://gr22.com/",
      voteCount = 100000005
    },
    {
      address: TEVAq8dmSQyTYK7uP1ZnZpa6MBVR83GsV6,
      url = "http://gr23.com/",
      voteCount = 100000004
    },
    {
      address: TRKJzrZxN34YyB8aBqqPDt7g4fv6sieemz,
      url = "http://gr24.com/",
      voteCount = 100000003
    },
    {
      address: TRMP6SKeFUt5NtMLzJv8kdpYuHRnEGjGfe,
      url = "http://gr25.com/",
      voteCount = 100000002
    },
    {
      address: TDbNE1VajxjpgM5p7FyGNDASt3UVoFbiD3,
      url = "http://gr26.com/",
      voteCount = 100000001
    },
    {
      address: TLTDZBcPoJ8tZ6TTEeEqEvwYFk2wgotSfD,
      url = "http://gr27.com/",
      voteCount = 100000000
    }
  ]

  timestamp = "0" #2017-8-26 12:00:00

  parentHash = "0xe58f33f9baf9305dc6f82b9f1934ea8f0ade2defb951258d50167028c780351f"
}

// Optional. The default is empty.
// It is used when the witness account has set the witnessPermission.
// When it is not empty, the localWitnessAccountAddress represents the address of the witness account,
// and localwitness is configured with the private key of the witnessPermissionAddress in the witness account.
// When it is empty, localwitness is configured with the private key of the witness account.

//localWitnessAccountAddress =

localwitness = [
]

#localwitnesskeystore = [
#  "localwitnesskeystore.json"
#]

block = {
  needSyncCheck = true
  maintenanceTimeInterval = 21600000
  proposalExpireTime = 259200000 // 3 day: 259200000(ms)
}

# Transaction reference block, default is "solid", configure to "head" may incur TaPos error
# trx.reference.block = "solid" // head;solid;

# This property sets the number of milliseconds after creation at which a transaction expires; the default value is 60000.
# trx.expiration.timeInMilliseconds = 60000

vm = {
  supportConstant = true
  maxEnergyLimitForConstant = 100000000
  minTimeRatio = 0.0
  maxTimeRatio = 50.0
  saveInternalTx = false

  # Indicates whether the node stores featured internal transactions, such as freeze, vote and so on
  # saveFeaturedInternalTx = false

  # In rare cases, transactions that will be within the specified maximum execution time (default 10(ms)) are re-executed and packaged
  # longRunningTime = 10

  # Indicates whether the node support estimate energy API.
  # estimateEnergy = false

  # Indicates the max retry time for executing transaction in estimating energy.
  # estimateEnergyMaxRetry = 3
}

committee = {
  allowCreationOfContracts = 0  //Mainnet:0 (reset by committee),test:1
  allowAdaptiveEnergy = 0  //Mainnet:0 (reset by committee),test:1
}

event.subscribe = {
    version = 1 // 1 means v2.0 , 0 means v1.0 Event Service Framework
    native = {
      useNativeQueue = true // if true, use native message queue, else use event plugin.
      bindport = 5555 // bind port
      sendqueuelength = 1000 //max length of send queue
    }

    path = "" // absolute path of plugin
    server = "" // target server address to receive event triggers
    // dbname|username|password; if you want to create indexes for collections when the collections
    // do not exist, you can add a version and set it to 2, as dbname|username|password|version
    // if you use version 2 and a collection does not exist, its index will be created automatically;
    // if you use version 2 and a collection exists, the index will not be created; you must create it manually;
    dbconfig = ""
    contractParse = true
    topics = [
        {
          triggerName = "block" // block trigger, the value can't be modified
          enable = false
          topic = "block" // plugin topic, the value could be modified
          solidified = false // if set true, just need solidified block, default is false
        },
        {
          triggerName = "transaction"
          enable = false
          topic = "transaction"
          solidified = false
          ethCompatible = false // if set true, add transactionIndex, cumulativeEnergyUsed, preCumulativeLogCount, logList, energyUnitPrice, default is false
        },
        {
          triggerName = "contractevent"
          enable = false
          topic = "contractevent"
        },
        {
          triggerName = "contractlog"
          enable = false
          topic = "contractlog"
          redundancy = false // if set true, contractevent will also be regarded as contractlog
        },
        {
          triggerName = "solidity" // solidity block trigger(just include solidity block number and timestamp), the value can't be modified
          enable = true            // the default value is true
          topic = "solidity"
        },
        {
          triggerName = "solidityevent"
          enable = false
          topic = "solidityevent"
        },
        {
          triggerName = "soliditylog"
          enable = false
          topic = "soliditylog"
          redundancy = false // if set true, solidityevent will also be regarded as soliditylog
        }
    ]

    filter = {
       fromblock = "" // the value could be "", "earliest" or a specified block number as the beginning of the queried range
       toblock = "" // the value could be "", "latest" or a specified block number as end of the queried range
       contractAddress = [
           "" // contract address you want to subscribe, if it's set to "", you will receive contract logs/events with any contract address.
       ]

       contractTopic = [
           "" // contract topic you want to subscribe, if it's set to "", you will receive contract logs/events with any contract topic.
       ]
    }
}
joeky888 changed the title from "Upgrade from v4.7.7 to v4.8.0 crashes (mainnet on lite full node)" to "Upgrading from v4.7.7 to v4.8.0 crashes (mainnet on lite full node)" on May 13, 2025

abc-x-t commented May 14, 2025

Hi, please try removing the -w flag from the docker compose command.
In 4.8.0, if the node is a witness but no private key is configured, the node exits at the initialization stage.
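
For reference, the command line in the compose file above would then look like this (a minimal sketch with only the trailing -w witness flag dropped; keep -w only if the node really should act as a witness, in which case localwitness in mainnet.conf must contain a private key):

services:
  tron_litenode:
    image: tronprotocol/java-tron:GreatVoyage-v4.8.0
    # same command as before, minus the -w (witness) flag
    command: -c "/java-tron/mainnet.conf" --log-config "/java-tron/logback.xml" -d "/java-tron/data"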

@Sunny6889

@abc-x-t Would it be better if we could print a clear exit reason?


abc-x-t commented May 14, 2025

@abc-x-t Would it be better if we could print a clear exit reason?

Yes, a friendly message is needed to help users understand what happened.

@joeky888
Author

Hi, please try removing the -w flag from the docker compose command.

I can confirm this is working. 👍

@halibobo1205
Contributor

@joeky888 Can you find the following error log: "This is a witness node, but localWitnesses is null"?

@joeky888
Author

@halibobo1205 Nope.

@317787106
Contributor

@joeky888 Have you set the logging directory for Docker? You can refer to this example.
Alternatively, you can run a full node using the Java JAR method. Note that error logs may sometimes appear in nohup.out.
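
For instance, one way to make the file log visible on the host is to add an extra volume for the container's log directory (a sketch, assuming the official image keeps its working directory at /java-tron and writes file logs to logs/tron.log there, as the tail command further down suggests):

    volumes:
      - /home/ubuntu/trondata:/java-tron/data
      - ./conf/logback.xml:/java-tron/logback.xml:ro
      - ./conf/mainnet.conf:/java-tron/mainnet.conf:ro
      # hypothetical extra mount so tron.log can be read from the host
      - ./logs:/java-tron/logs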

@joeky888
Author

Oh! Yes, I saw the error in the container's tron.log.

tron.log:

03:42:07.001 INFO  [main] [app](FullNode.java:25) Full node running.
03:42:07.586 WARN  [main] [app](LocalWitnesses.java:104) PrivateKey is null.
03:42:07.862 ERROR [main] [Exit](ExitManager.java:49) Shutting down with code: WITNESS_INIT(1).
org.tron.core.exception.TronError: This is a witness node, but localWitnesses is null
	at org.tron.core.config.args.Args.setParam(Args.java:456)
	at org.tron.core.config.args.Args.setParam(Args.java:379)
	at org.tron.program.FullNode.main(FullNode.java:26)
03:42:07.867 ERROR [main] [Exit](ExitManager.java:26) Uncaught exception
org.tron.core.exception.TronError: This is a witness node, but localWitnesses is null
	at org.tron.core.config.args.Args.setParam(Args.java:456)
	at org.tron.core.config.args.Args.setParam(Args.java:379)
	at org.tron.program.FullNode.main(FullNode.java:26)
03:42:09.286 INFO  [main] [app](FullNode.java:25) Full node running.
03:42:09.863 WARN  [main] [app](LocalWitnesses.java:104) PrivateKey is null.
03:42:10.149 ERROR [main] [Exit](ExitManager.java:49) Shutting down with code: WITNESS_INIT(1).
org.tron.core.exception.TronError: This is a witness node, but localWitnesses is null
	at org.tron.core.config.args.Args.setParam(Args.java:456)
	at org.tron.core.config.args.Args.setParam(Args.java:379)
	at org.tron.program.FullNode.main(FullNode.java:26)
03:42:10.153 ERROR [main] [Exit](ExitManager.java:26) Uncaught exception
org.tron.core.exception.TronError: This is a witness node, but localWitnesses is null
	at org.tron.core.config.args.Args.setParam(Args.java:456)
	at org.tron.core.config.args.Args.setParam(Args.java:379)
	at org.tron.program.FullNode.main(FullNode.java:26)
[... the same WITNESS_INIT error and stack trace repeat on every container restart ...]

The weird part is that I have a ./conf/logback.xml which indicates the logger should log to stdout only, right? Is there a problem with my logback.xml configuration? Sorry, I'm not a Java guy.

@joeky888
Author

How can I log levels info, warn and error to stdout? I have tweaked the logback config several times but so far no luck.

@317787106
Contributor

@joeky888 The log is too large; printing it to stdout is not recommended.

@joeky888
Author

The log is too large;

I would set the log level to warn or error to prevent large logs.
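
For what it's worth, a minimal logback sketch along those lines (assuming it is passed via --log-config as in the compose file above): it keeps only the console appender and raises both the root level and the threshold filter to WARN, so only warnings and errors reach stdout.

<?xml version="1.0" encoding="UTF-8"?>
<configuration>

  <!-- flush remaining log events on shutdown -->
  <shutdownHook class="ch.qos.logback.core.hook.DelayingShutdownHook"/>

  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} %-5level [%t] [%c{1}]\(%F:%L\) %m%n</pattern>
    </encoder>
    <!-- drop anything below WARN at the appender -->
    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
      <level>WARN</level>
    </filter>
  </appender>

  <!-- root at WARN so INFO noise never reaches the appender -->
  <root level="WARN">
    <appender-ref ref="STDOUT"/>
  </root>

</configuration>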


Sunny6889 commented May 16, 2025

@joeky888 In case you need the logs printed to your console in real time, try this:
docker exec -it tron_node tail -f ./logs/tron.log
where tron_node is your docker container name.

@GordonLtron

Are there any more issues to solve? If not, I suggest closing this issue.
