Deploying GlusterFS DaemonSet on Kubernetes
I'm new to GlusterFS and am trying to deploy it as a DaemonSet in a new K8S cluster.
My K8S cluster is set up on bare metal and all the host machines are Debian 9 based.
I'm using the GlusterFS DaemonSet from the official Kubernetes Incubator repo, which is here. The image it uses is based on CentOS.
When I deploy the DaemonSet, all the Pods stay in the Pending state. When I describe the Pods, I see livenessProbe/readinessProbe failures with the following errors.
[glusterfspod-6h85 /]# systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Sat 2018-11-10 19:41:53 UTC; 2min 2s ago
Process: 68 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=1/FAILURE)
Nov 10 19:41:53 kubernetes-agent-4 systemd[1]: Starting GlusterFS, a clustered file-system server...
Nov 10 19:41:53 kubernetes-agent-4 systemd[1]: glusterd.service: control process exited, code=exited status=1
Nov 10 19:41:53 kubernetes-agent-4 systemd[1]: Failed to start GlusterFS, a clustered file-system server.
Nov 10 19:41:53 kubernetes-agent-4 systemd[1]: Unit glusterd.service entered failed state.
Nov 10 19:41:53 kubernetes-agent-4 systemd[1]: glusterd.service failed.
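(For reference, these checks are run via kubectl exec with commands along these lines; the pod name is the one shown in the prompt above and the namespace is an assumption:)
kubectl -n default exec -it glusterfspod-6h85 -- systemctl status glusterd
kubectl -n default exec -it glusterfspod-6h85 -- journalctl -u glusterd --no-pager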
Then I exec into the Pods and check the logs, which show:
-- Unit sshd.service has begun starting up.
Nov 10 19:35:24 kubernetes-agent-4 sshd[93]: error: Bind to port 2222 on 0.0.0.0 failed: Address already in use.
Nov 10 19:35:24 kubernetes-agent-4 sshd[93]: error: Bind to port 2222 on :: failed: Address already in use.
Nov 10 19:35:24 kubernetes-agent-4 sshd[93]: fatal: Cannot bind any address.
Nov 10 19:35:24 kubernetes-agent-4 systemd[1]: sshd.service: main process exited, code=exited, status=255/n/a
Nov 10 19:35:24 kubernetes-agent-4 systemd[1]: Failed to start OpenSSH server daemon.
And
[2018-11-10 19:34:42.330154] I [MSGID: 106479] [glusterd.c:1481:init] 0-management: Using /var/lib/glusterd as working directory
[2018-11-10 19:34:42.330165] I [MSGID: 106479] [glusterd.c:1486:init] 0-management: Using /var/run/gluster as pid file working directory
[2018-11-10 19:34:42.333893] E [socket.c:802:__socket_server_bind] 0-socket.management: binding to failed: Address already in use
[2018-11-10 19:34:42.333911] E [socket.c:805:__socket_server_bind] 0-socket.management: Port is already in use
[2018-11-10 19:34:42.333926] W [rpcsvc.c:1788:rpcsvc_create_listener] 0-rpc-service: listening on transport failed
[2018-11-10 19:34:42.333938] E [MSGID: 106244] [glusterd.c:1757:init] 0-management: creation of listener failed
[2018-11-10 19:34:42.333949] E [MSGID: 101019] [xlator.c:720:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
[2018-11-10 19:34:42.333965] E [MSGID: 101066] [graph.c:367:glusterfs_graph_init] 0-management: initializing translator failed
[2018-11-10 19:34:42.333974] E [MSGID: 101176] [graph.c:738:glusterfs_graph_activate] 0-graph: init failed
[2018-11-10 19:34:42.334371] W [glusterfsd.c:1514:cleanup_and_exit] (-->/usr/sbin/glusterd(glusterfs_volumes_init+0xfd) [0x55adc15817dd] -->/usr/sbin/glusterd(glusterfs_process_volfp+0x163) [0x55adc1581683] -->/usr/sbin/glusterd(cleanup_and_exit+0x6b) [0x55adc1580b8b] ) 0-: received signum (-1), shutting down
And
[2018-11-10 19:34:03.299298] I [MSGID: 100030] [glusterfsd.c:2691:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 5.0 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO)
[2018-11-10 19:34:03.330091] I [MSGID: 106478] [glusterd.c:1435:init] 0-management: Maximum allowed open file descriptors set to 65536
[2018-11-10 19:34:03.330125] I [MSGID: 106479] [glusterd.c:1491:init] 0-management: Using /var/lib/glusterd as working directory
[2018-11-10 19:34:03.330135] I [MSGID: 106479] [glusterd.c:1497:init] 0-management: Using /var/run/gluster as pid file working directory
[2018-11-10 19:34:03.334414] W [MSGID: 103071] [rdma.c:4475:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed [No such device]
[2018-11-10 19:34:03.334435] W [MSGID: 103055] [rdma.c:4774:init] 0-rdma.management: Failed to initialize IB Device
[2018-11-10 19:34:03.334444] W [rpc-transport.c:339:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
[2018-11-10 19:34:03.334537] W [rpcsvc.c:1789:rpcsvc_create_listener] 0-rpc-service: cannot create listener, initing the transport failed
[2018-11-10 19:34:03.334549] E [MSGID: 106244] [glusterd.c:1798:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport
[2018-11-10 19:34:05.496746] E [MSGID: 101032] [store.c:447:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/glusterd.info. [No such file or directory]
[2018-11-10 19:34:05.496843] E [MSGID: 101032] [store.c:447:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/glusterd.info. [No such file or directory]
[2018-11-10 19:34:05.496846] I [MSGID: 106514] [glusterd-store.c:2304:glusterd_restore_op_version] 0-management: Detected new install. Setting op-version to maximum : 50000
[2018-11-10 19:34:05.513644] I [MSGID: 106194] [glusterd-store.c:3983:glusterd_store_retrieve_missed_snaps_list] 0-management: No missed snaps list.
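(Side note on the missing-file errors: since /var/lib/glusterd is a hostPath mount, a missing glusterd.info just means that directory on the node is still empty, which is why the "Detected new install" line above appears. A quick check on the node, assuming the manifest maps the usual paths one-to-one:)
ls -la /var/lib/glusterd /etc/glusterfs /var/log/glusterfs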
Is there something I have missed? The volumes section of the DaemonSet manifest mounts volumes from hostPath (roughly as sketched below). Should I deploy glusterfs-server on my host machines as well? Or is this a CentOS / Debian mismatch issue?
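For context, the relevant part of the DaemonSet manifest looks roughly like this. It is a trimmed, from-memory sketch rather than a verbatim copy of the incubator manifest, so treat the image tag, the paths, and the (much longer) real list of mounts and ports as assumptions:
spec:
  template:
    spec:
      hostNetwork: true                      # glusterd, sshd etc. bind directly on the node's ports
      containers:
      - name: glusterfs
        image: gluster/gluster-centos:latest # CentOS-based image referenced by the repo
        securityContext:
          privileged: true
        volumeMounts:
        - name: glusterfs-etc
          mountPath: /etc/glusterfs
        - name: glusterfs-lib
          mountPath: /var/lib/glusterd
        - name: glusterfs-logs
          mountPath: /var/log/glusterfs
      volumes:
      - name: glusterfs-etc
        hostPath:
          path: /etc/glusterfs
      - name: glusterfs-lib
        hostPath:
          path: /var/lib/glusterd
      - name: glusterfs-logs
        hostPath:
          path: /var/log/glusterfs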
kubernetes glusterfs

asked Nov 10 at 19:50 by Jason Stanley
With hostNetwork: true, and a massive list of exposed ports, it's no wonder you're seeing [2018-11-10 19:34:42.333893] E [socket.c:802:__socket_server_bind] 0-socket.management: binding to failed: Address already in use. Do you already have something listening on any one of those ports on your Nodes?
– Matthew L Daniel, Nov 12 at 5:00
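(One quick way to verify that, directly on the node; the port list here is the usual GlusterFS/sshd set and is an assumption:)
sudo ss -tlnp | grep -E ':(2222|24007|24008|49152)\b'
systemctl is-active glusterd sshd    # is a host-level glusterd or sshd already running?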
No. Port 2222 is not being used by the node or by any other Pod. It's also complaining about missing files. Any idea why that might be?
– Jason Stanley, Nov 12 at 15:01
No idea about the missing files, but you'll have to excuse my skepticism about the availability of those ports when the logs contain two separate messages showing a port-binding failure. Pragmatically speaking, you might be happier trying to get it to run on one Node, with just docker by hand, and then scaling it out wider once you have more confidence in the process.
– Matthew L Daniel, Nov 12 at 17:32
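(A single-node smoke test along those lines might look like this; the image name, mounts and flags mirror what the DaemonSet does and are assumptions rather than a verified recipe:)
docker run -d --name gluster-test \
  --net=host --privileged \
  -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
  -v /etc/glusterfs:/etc/glusterfs \
  -v /var/lib/glusterd:/var/lib/glusterd \
  -v /var/log/glusterfs:/var/log/glusterfs \
  gluster/gluster-centos:latest
# then watch the same service the probes check
docker exec gluster-test systemctl status glusterd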
Thanks. I have a feeling that working with this Dockerfile is a bit too much, since there is a lot of CentOS-specific setup going on. I'm in the process of writing my own Docker image based on Debian now. Hope it works.
– Jason Stanley, Nov 13 at 1:46
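(For anyone going the same route, a minimal Debian-based starting point could look like the sketch below. It only installs the server packages and runs glusterd in the foreground instead of under systemd, which is an assumption about how you want to run it and would also mean replacing the systemctl-based probes; it does not replicate everything the CentOS image sets up:)
FROM debian:9
RUN apt-get update && \
    apt-get install -y --no-install-recommends glusterfs-server && \
    rm -rf /var/lib/apt/lists/*
# state, config and logs are expected to be hostPath-mounted here by the DaemonSet
VOLUME ["/etc/glusterfs", "/var/lib/glusterd", "/var/log/glusterfs"]
EXPOSE 24007 24008
# run glusterd in the foreground (-N) so the container tracks the daemon directly
CMD ["/usr/sbin/glusterd", "-N", "--log-level", "INFO"]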