Connecting to an AWS database from GKE



























I am having trouble connecting a Google Cloud Platform Kubernetes (GKE) pod to an external MySQL database running on AWS.



Here's my deployment file (some sensitive parts replaced by ***):



apiVersion: apps/v1
kind: Deployment
metadata:
  name: watches-v1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: watches-v1
  template:
    metadata:
      labels:
        app: watches-v1
    spec:
      containers:
      - name: watches-v1
        image: silasberger/watches:1.0
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
        env:
        - name: MYSQL_HOST
          value: "***.eu-west-1.rds.amazonaws.com"
        - name: MYSQL_DB
          value: "***"
        - name: MYSQL_USER
          value: "***"
        - name: MYSQL_PASS
          value: "***"
        - name: API_USER
          value: "***"
        - name: API_PASS
          value: "***"
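(Side note, unrelated to the connection problem: the credentials are plain env values here. They could be moved into a Kubernetes Secret with something like the following sketch, where watches-db is just an example name and the literal values are placeholders; the plain values would then be dropped from the manifest.)

# Create a Secret holding the DB credentials (placeholder values)
kubectl create secret generic watches-db \
  --from-literal=MYSQL_USER='***' \
  --from-literal=MYSQL_PASS='***'

# Point the existing deployment's env vars at that Secret
kubectl set env deployment/watches-v1 --from=secret/watches-db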


This is the Dockerfile that I build and push to Docker Hub as watches:1.0:



FROM node:8

WORKDIR /usr/src/app

COPY package*.json ./

RUN npm install

COPY . .

EXPOSE 3000

ENV MICROSERVICE="watches"
ENV WATCHES_API_VERSION="1"

CMD [ "npm", "start" ]


The following things work:




  • Connecting to the AWS MySQL instance from a local shell, using the mysql command

  • Running the Docker image in a local container (roughly as sketched after this list): no errors, everything works as expected
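For reference, the local container test is roughly the following, with the same env vars the deployment sets (values redacted):

# Run the image locally with the deployment's env vars (redacted placeholders)
docker run --rm -p 3000:3000 \
  -e MYSQL_HOST='***.eu-west-1.rds.amazonaws.com' \
  -e MYSQL_DB='***' -e MYSQL_USER='***' -e MYSQL_PASS='***' \
  -e API_USER='***' -e API_PASS='***' \
  silasberger/watches:1.0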


However, as soon as I apply the deployment in my Kubernetes cluster, the pods aren't able to connect to the AWS DB. The application starts and I can access the Swagger page, but when I run kubectl logs <pod-name>, I always get this error:



Unable to connect to the database: { SequelizeConnectionError: connect ETIMEDOUT
at Utils.Promise.tap.then.catch.err (/usr/src/app/node_modules/sequelize/lib/dialects/mysql/connection-manager.js:149:19)
at tryCatcher (/usr/src/app/node_modules/bluebird/js/release/util.js:16:23)
at Promise._settlePromiseFromHandler (/usr/src/app/node_modules/bluebird/js/release/promise.js:512:31)
at Promise._settlePromise (/usr/src/app/node_modules/bluebird/js/release/promise.js:569:18)
at Promise._settlePromise0 (/usr/src/app/node_modules/bluebird/js/release/promise.js:614:10)
at Promise._settlePromises (/usr/src/app/node_modules/bluebird/js/release/promise.js:690:18)
at _drainQueueStep (/usr/src/app/node_modules/bluebird/js/release/async.js:138:12)
at _drainQueue (/usr/src/app/node_modules/bluebird/js/release/async.js:131:9)
at Async._drainQueues (/usr/src/app/node_modules/bluebird/js/release/async.js:147:5)
at Immediate.Async.drainQueues (/usr/src/app/node_modules/bluebird/js/release/async.js:17:14)
at runCallback (timers.js:810:20)
at tryOnImmediate (timers.js:768:5)
at processImmediate [as _immediateCallback] (timers.js:745:5)
name: 'SequelizeConnectionError',
parent:
{ Error: connect ETIMEDOUT
at Connection._handleTimeoutError (/usr/src/app/node_modules/mysql2/lib/connection.js:192:13)
at ontimeout (timers.js:498:11)
at tryOnTimeout (timers.js:323:5)
at Timer.listOnTimeout (timers.js:290:5)
errorno: 'ETIMEDOUT',
code: 'ETIMEDOUT',
syscall: 'connect',
fatal: true },
original:
{ Error: connect ETIMEDOUT
at Connection._handleTimeoutError (/usr/src/app/node_modules/mysql2/lib/connection.js:192:13)
at ontimeout (timers.js:498:11)
at tryOnTimeout (timers.js:323:5)
at Timer.listOnTimeout (timers.js:290:5)
errorno: 'ETIMEDOUT',
code: 'ETIMEDOUT',
syscall: 'connect',
fatal: true } }


It picks up the correct host, DB name and credentials (as shown by an earlier part of the log, not included here), but it apparently can't connect. As you can see, the application is written in Node.js and uses Sequelize.
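To take the application out of the equation, the connection can also be tested from a throwaway pod in the same cluster, with something like the following (host and user are placeholders):

# One-off pod with a MySQL client, connecting straight to the RDS endpoint
kubectl run mysql-test --rm -it --image=mysql:5.7 --restart=Never -- \
  mysql -h '***.eu-west-1.rds.amazonaws.com' -u '***' -p

If this times out as well, the problem is in the network path or a firewall rather than in Sequelize or the application.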



All the research I have done so far points to a firewall issue, so I created the following VPC firewall rule on Google Cloud Platform for the project:



$ gcloud compute firewall-rules describe allow-mysql-outbound
allowed:
- IPProtocol: all
creationTimestamp: '2018-11-14T02:51:20.808-08:00'
description: Allow all inbound connections
destinationRanges:
- 0.0.0.0/0
direction: EGRESS
disabled: false
id: '7178441953737326791'
kind: compute#firewall
name: allow-mysql-outbound
network: https://www.googleapis.com/compute/v1/projects/adept-vine-222109/global/networks/default
priority: 1000
selfLink: https://www.googleapis.com/compute/v1/projects/adept-vine-222109/global/firewalls/allow-mysql-outbound


Since this didn't change anything, I also tried adding the same rule again, with direction INGRESS, but that didn't work either (as I expected).
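From what I understand, a GCP VPC already has an implied rule that allows all egress, so a rule like the one above shouldn't even be necessary for outbound MySQL traffic. For completeness, an explicit egress rule of this kind can be created with something like:

# Explicit egress rule for MySQL traffic (usually redundant, GCP allows egress by default)
gcloud compute firewall-rules create allow-mysql-outbound \
  --direction=EGRESS --action=ALLOW \
  --rules=tcp:3306 \
  --destination-ranges=0.0.0.0/0 \
  --network=default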



I am totally new to the Google Cloud Platform and to Kubernetes, so maybe this is just a dumb mistake, but I'm really out of ideas on how to get it to work.










mysql amazon-web-services kubernetes google-cloud-platform sequelize.js

asked Nov 14 '18 at 13:29, edited Nov 14 '18 at 13:50 – Silas Berger
  • What about on the AWS side? Does the RDS service allow connections from anywhere? – Jacob Tomlinson, Nov 14 '18 at 13:55

  • Yes, I believe it should (as mentioned, I'm new to it). The "Public Accessibility" checkbox is checked in the instance's network & security settings. I can connect to it using a terminal, and I didn't whitelist my IP or anything. From the same machine, local Docker containers can access it as well. – Silas Berger, Nov 14 '18 at 14:08

  • I think I got it! It looks like you were right about the AWS side. I'll check if it really works and post my answer as soon as I can. Thanks for the tip! – Silas Berger, Nov 14 '18 at 14:10
















1 Answer
































As it turns out, the problem was on the AWS side. Thanks to Jacob Tomlinson for the suggestion.

While Public Accessibility was enabled for the AWS MySQL instance, it apparently still didn't allow access from all sources. I'm not sure why it worked from my local machine, but anyway.



I was able to solve it by adding a security group in AWS that allows inbound traffic on all ports and protocols from the source 0.0.0.0/0, and associating that security group with my MySQL instance (go to the instance, click Modify, open the Network & Security settings, choose the newly created group, and save the changes). I will still need to tighten this rule from a security perspective, but at least it all works now.
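For anyone tightening this up later, the same kind of rule can be added from the AWS CLI restricted to the MySQL port and a known source range; the group ID and CIDR below are placeholders:

# Allow MySQL (3306) only, and only from a specific source range (placeholder values)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 3306 \
  --cidr 203.0.113.0/24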






answered Nov 14 '18 at 14:31 – Silas Berger
  • I would really recommend against these settings - you are pretty much asking for your database to get hacked. It's best to limit traffic to specific IP ranges, and if you need to connect remotely I would use an SSH tunnel. – doublesharp, Nov 14 '18 at 16:18

  • Agreed. While this has solved your problem, you could well cause yourself a lot of pain here. One solution would be to set up a VPN connection between the GKE cluster and your AWS VPC. cloud.google.com/solutions/… – Jacob Tomlinson, Nov 15 '18 at 9:11












