Unwanted HBase regionserver #94

@krisskross

I noticed that the HBase region server installation gets messed up when doing a manual amb-shell installation from a JSON gist.

source ambari-functions
amb-start-cluster 3
amb-shell

blueprint add --url https://gist.githubusercontent.com/krisskross/901ed8223c1ed1db80e3/raw/869327be9ad15e6a9f099a7591323244cd245357/ambari-hdp2.3
cluster build --blueprint hdp-2.3
cluster assign --hostGroup master --host amb1.service.consul
cluster assign --hostGroup slave_1 --host amb2.service.consul
cluster create

host list
amb1.service.consul [ALERT] 172.17.0.79 centos6:x86_64
amb2.service.consul [ALERT] 172.17.0.80 centos6:x86_64
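For reference, a quick way to see which hosts the Ambari server has actually registered is to hit its REST API. The amb-server.service.consul address and the default admin/admin credentials are assumptions on my part; adjust them to your setup:

$ curl -s -u admin:admin http://amb-server.service.consul:8080/api/v1/hosts

With the blueprint and host group assignments above I would only expect amb1.service.consul and amb2.service.consul to show up, not any *.node.dc1.consul names.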

First, one extra container (12311dd1655c) gets created; is it actually needed?

$ sudo docker ps

CONTAINER ID        IMAGE                         COMMAND                CREATED             STATUS              PORTS                                                              NAMES
12311dd1655c        sequenceiq/ambari:2.1.2-v1    "/bin/sh -c /tmp/amb   42 minutes ago      Up 42 minutes       8080/tcp                                                           loving_stallman     
ff51cc267878        sequenceiq/ambari:2.1.2-v1    "/start-agent"         43 minutes ago      Up 43 minutes       8080/tcp                                                           amb2                
b50f6b429c61        sequenceiq/ambari:2.1.2-v1    "/start-agent"         43 minutes ago      Up 43 minutes       8080/tcp                                                           amb1                
78b6c91713ae        sequenceiq/ambari:2.1.2-v1    "/start-server"        43 minutes ago      Up 43 minutes       8080/tcp                                                           amb-server          
1169a087ce4a        sequenceiq/consul:v0.5.0-v6   "/bin/start -server    43 minutes ago      Up 43 minutes       53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8301-8302/udp, 8500/tcp   amb-consul          
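To see what that extra container is actually running (the COMMAND column above is truncated), plain Docker commands against the container ID should be enough, e.g.:

$ sudo docker inspect --format '{{.Config.Cmd}}' 12311dd1655c
$ sudo docker logs --tail 20 12311dd1655c

That would at least make it clear what the container was started for and whether it can be removed.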

The HBase installation ends up with two region servers on the slave instead of one. This leads to regions stuck in transition and leaves the servers generally unstable.

amb2.node.dc1.consul
amb2.service.consul

I noticed that "node.dc1" comes from the start-agent and start-server scripts, but I'm not sure they are to blame. Anyway, the amb2.node.dc1.consul region server must go.
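In case it helps with debugging: both names appear to point at the same slave container through the Consul DNS interface. A minimal check, using dig against the amb-consul container from the docker ps output above (the inspect format string is just the standard Docker way to get its IP):

CONSUL_IP=$(sudo docker inspect --format '{{.NetworkSettings.IPAddress}}' amb-consul)
dig +short @$CONSUL_IP amb2.service.consul
dig +short @$CONSUL_IP amb2.node.dc1.consul

If my reading of the Consul naming scheme is right, both lookups should return 172.17.0.80, i.e. the same amb2 container gets registered as a region server twice under two different hostnames.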
