Cluster down-checkpoint-all

Vendor: checkpoint

OS: all

Description:
Indeni will alert if a cluster is down or any of the members are inoperable.

Remediation Steps:
Review the cause for one or more members being down or inoperable.
Review other alerts for a cause for the cluster failure.

chkp-cphaprob_state_monitoring-vsx

name: chkp-cphaprob_state_monitoring-vsx
description: Run "cphaprob state" for cluster status monitoring on VSX
type: monitoring
monitoring_interval: 1 minute
requires:
    vendor: checkpoint
    high-availability: 'true'
    vsx: 'true'
    clusterxl: 'true'
    role-firewall: 'true'
    asg:
        neq: 'true'
comments:
    cluster-mode:
        why: |
            To check the cluster mode of the cluster in each VS context.
        how: |
            By using the Check Point cluster command "cphaprob state" in each VS context.
        can-with-snmp: false
        can-with-syslog: false

    cluster-member-active:
        why: |
            To check which cluster member is active in each VS context.
        how: |
            By using the Check Point cluster command "cphaprob state" in each VS context.
        can-with-snmp: false
        can-with-syslog: false

    cluster-member-active-live-config:
        why: |
            To collect the live configuration of the active cluster member for each VS context.
        how: |
            By using the Check Point cluster command "cphaprob state" in each VS context.
        can-with-snmp: false
        can-with-syslog: false

    cluster-member-states:
        why: |
            To collect the cluster member states for each VS context.
        how: |
            By using the Check Point cluster command "cphaprob state" in each VS context.
        can-with-snmp: false
        can-with-syslog: false

    cluster-state:
        why: |
            To know the overall state of the cluster, taking all members into account, as one down node could mean
            loss of redundancy. The check is done per VS context.
        how: |
            By using the Check Point built-in "cphaprob state" command, we retrieve the status of the cluster members.
            From this we determine if the cluster is in a healthy state.
        can-with-snmp: true
        can-with-syslog: false

    cluster-state-live-config:
        why: |
            To collect the live cluster state configuration per VS context.
        how: |
            By using the Check Point cluster command "cphaprob state" in each VS context.
        can-with-snmp: false
        can-with-syslog: false
steps:
-   run:
        type: SSH
        file: cphaprob-state-vsx.remote.1.bash
    parse:
        type: AWK
        file: cphaprob-state-vsx.parser.1.awk
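
The remote script and AWK parser themselves are not reproduced here. As a rough illustration of what the collection step does, below is a minimal sketch in bash, assuming expert mode on a VSX member where "vsx stat -l" (to enumerate virtual systems) and "vsenv" (to switch VS context) are available; the actual cphaprob-state-vsx.remote.1.bash may be structured differently.

#!/bin/bash
# Illustrative sketch only -- not the shipped cphaprob-state-vsx.remote.1.bash.
# Enumerate the VS IDs, switch into each VS context, and capture cluster state.
for vsid in $(vsx stat -l | awk '/^VSID:/ {print $2}'); do
    vsenv "$vsid" > /dev/null        # enter the VS context
    echo "=== VSID $vsid ==="        # tag output so the parser can split it per VS
    cphaprob state                   # cluster mode, member list, per-member states
done

The parser then reduces each per-VS block of "cphaprob state" output (the cluster mode line plus one row per member with its assigned load and state, e.g. Active, Standby or Down) into the cluster-mode, cluster-member-* and cluster-state metrics described above.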

cross_vendor_cluster_down_novsx

// Deprecation warning: Scala template-based rules are deprecated. Please use YAML format rules instead.

package com.indeni.server.rules.library.templatebased.crossvendor

import com.indeni.ruleengine.expressions.conditions.{Equals => RuleEquals, Not => RuleNot, Or => RuleOr}
import com.indeni.server.common.data.conditions.{Equals => DataEquals, Not => DataNot}
import com.indeni.server.rules.RuleContext
import com.indeni.server.rules.library.templates.StateDownTemplateRule
import com.indeni.server.rules.RemediationStepCondition

/**
  * Alerts when a non-VSX cluster is down or any of its members are inoperable.
  */
case class cross_vendor_cluster_down_novsx() extends StateDownTemplateRule(
  ruleName = "cross_vendor_cluster_down_novsx",
  ruleFriendlyName = "Clustered Devices (Non-VS): Cluster down",
  ruleDescription = "Indeni will alert if a cluster is down or any of the members are inoperable.",
  metricName = "cluster-state",
  applicableMetricTag = "name",
  metaCondition = !DataEquals("vsx", "true"),
  historyLength = 2,
  alertItemsHeader = "Clustering Elements Affected",
  alertDescription = "One or more clustering elements in this device are down. This alert was added per the request of <a target=\"_blank\" href=\"http://il.linkedin.com/pub/gal-vitenberg/83/484/103\">Gal Vitenberg</a>.",
  baseRemediationText = "Review the cause for one or more members being down or inoperable.")(
  RemediationStepCondition.VENDOR_CP -> "Review other alerts for a cause for the cluster failure.",
  RemediationStepCondition.VENDOR_PANOS -> "Log into the device over SSH and run \"less mp-log ha-agent.log\" for more information.",
  RemediationStepCondition.VENDOR_CISCO ->
    """|
      |1. Verify the communication between the FHRP peers. A random, momentary loss of data communication between the peers is the most common problem that results in continuous FHRP state changes (ACT <-> STB), unless this error message occurs during the initial installation.
      |2. Check the CPU utilization by using the "show processes cpu" NX-OS command. FHRP state changes are often due to high CPU utilization.
      |3. Common causes of FHRP packet loss between the peers to investigate are physical-layer problems and excessive network traffic, whether caused by spanning-tree issues or by individual VLANs.
      |
      |In the case of a vPC problem, validate the following:
      |1. Check that STP bridge assurance is not enabled on the vPC links. Bridge assurance should only be enabled on the vPC peer link.
      |2. Compare the vPC domain IDs of the two switches and ensure that they match. Execute "show vpc brief" and compare the output, which should match across the vPC peer switches.
      |3. Verify that both the source and destination IP addresses used for the peer-keepalive messages are reachable from the VRF associated with the vPC peer-keepalive link.
      |Then, execute the "sh vpc peer-keepalive" NX-OS command and review the output from both switches.
      |4. Verify that the peer-keepalive link is up. Otherwise, the vPC peer link will not come up.
      |5. Review the vPC peer link configuration, execute the "sh vpc brief" NX-OS command and review the output. Besides, verify that the vPC peer link is configured as a Layer 2 port channel trunk that allows only vPC VLANs.
      |6. Ensure that type 1 consistency parameters match. If they do not match, the vPC is suspended. Type 2 parameters do not have to match on both Nexus switches for the vPC to be operational. Execute the "sh vpc consistency-parameters" command and review the output.
      |7. Verify that the vPC number assigned to the port channel connecting to the downstream device is identical on both vPC peer devices.
      |8. If you manually configured the system priority, verify that you assigned the same priority value on both vPC peer devices.
      |9. Verify that the primary vPC is the primary STP root and the secondary vPC is the secondary STP root.
      |10. Review the logs for relevant findings.
      |11. For more information, please review the following vPC troubleshooting guide:
      |https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5000/sw/troubleshooting/guide/N5K_Troubleshooting_Guide/n5K_ts_vpc.html""".stripMargin
)
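
As a quick illustration of vPC checks 2-4 in the Cisco remediation above, a healthy vPC typically reports output along these lines; this is a representative sketch only, and the exact fields and wording vary by NX-OS release.

switch# show vpc brief
vPC domain id                     : 10
Peer status                       : peer adjacency formed ok
vPC keep-alive status             : peer is alive
Configuration consistency status  : success

switch# show vpc peer-keepalive
vPC keep-alive status             : peer is alive

switch# show vpc consistency-parameters global
(lists the type 1 and type 2 parameters and their values on both peers)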