Multi-stage Dockerfile leads to running out of space
As my code (a Node.js application) changes more often than its (npm) dependencies do, I've tried to build something like a cache in my CI.

I'm using a multi-stage Dockerfile. In it I run npm install twice: once with only the production dependencies (which are set aside) and once with all of them. The production set is later copied into the final image, which keeps it much smaller. Great.

The build also gets super fast if no dependency has changed.

However, over time the disk fills up, so I have to run docker prune ... to get the space back. But when I do this, the cache is gone.

So if I run a prune after each pipeline in my CI, I lose the 'cache functionality' of the multi-stage Dockerfile.
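For reference, the cleanup step in the pipeline looks roughly like this (a sketch; the exact prune command and flags vary by setup):

```shell
# Reclaim disk space after a CI run (sketch).
# `docker system prune` removes stopped containers, dangling images and the
# build cache -- which is exactly the cache that makes incremental builds fast.
docker system prune --force
```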



### 1. Build
FROM node:10.13 AS build
WORKDIR /home/node/app

COPY ./package*.json ./
COPY ./.babelrc ./

RUN npm set progress=false \
 && npm config set depth 0 \
 && npm install --only=production --silent \
 && cp -R node_modules prod_node_modules
RUN npm install --silent

COPY ./src ./src
RUN ./node_modules/.bin/babel ./src/ -d ./dist/ --copy-files

### 2. Run
FROM node:10.13-alpine
RUN apk --no-cache add --virtual builds-deps \
    build-base \
    python
WORKDIR /home/node/app

COPY --from=build /home/node/app/prod_node_modules ./node_modules
COPY --from=build /home/node/app/dist .

EXPOSE 3000
ENV NODE_ENV production
CMD ["node", "app.js"]
  • What is wrong with that question, as it is downvoted and there is a close request. It doesn't help, if there is no comment on that...
    – user3142695
    Nov 11 at 9:11
node.js docker npm dockerfile docker-multi-stage-build
asked Nov 11 at 9:00 by user3142695
edited Nov 11 at 10:54 by Engineer Dollery
1 Answer
accepted










If your CI system lets you have multiple docker build steps, you could split this into two Dockerfiles.



# Dockerfile.dependencies
# docker build -f Dockerfile.dependencies -t me/dependencies .
FROM node:10.13
...
RUN npm install


# Dockerfile
# docker build -t me/application .
FROM me/dependencies:latest AS build
COPY ./src ./src
RUN ./node_modules/.bin/babel ./src/ -d ./dist/ --copy-files

FROM node:10.13-alpine
...
CMD ["node", "app.js"]


If you do this, then you can delete unused images after each build:



docker image prune


The most recent build of the dependencies image will keep its tag, so it won't be "dangling" and won't be removed by the prune. On each build the tag moves from the previous image to the new one (if anything changed), so this sequence cleans up the superseded builds. It will also delete the intermediate "build" stage images, though as you note, anything that triggers a build has probably changed the src tree, so forcing a rebuild there is reasonable.



In this specific circumstance, just using the latest tag is appropriate. If the final built images have some more unique tag (based on a version number or timestamp, say) and they're stacking up then you might need to do some more creative filtering of that image list to clean them up.
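Put together, the CI sequence might look like this (a sketch, using the me/dependencies and me/application names from the Dockerfiles above):

```shell
# Build (or refresh) the dependencies image; its layers are reused as long as
# package*.json is unchanged, and the image keeps its me/dependencies:latest tag.
docker build -f Dockerfile.dependencies -t me/dependencies .

# Build the application image on top of the dependencies image.
docker build -t me/application .

# Remove dangling (untagged) images, i.e. superseded builds. Tagged images --
# including me/dependencies:latest -- survive, so the dependency cache is kept.
docker image prune -f
```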



  • What is the difference between docker images -f dangling=true -q | xargs docker rmi and docker images prune?
    – user3142695
    Nov 11 at 12:14










  • Nothing. But docker images -q | xargs docker rmi worked back in the ancient days (also when multiple Dockerfiles was the only way to do "multi-stage" image builds) and it's what I've internalized. docker image prune would work just fine here.
    – David Maze
    Nov 11 at 12:23










  • I updated the answer to use docker image prune as shorter and easier.
    – David Maze
    Nov 11 at 12:24
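As discussed in the comments, the two commands are effectively equivalent (a sketch; both require a running Docker daemon):

```shell
# Old-style: list the IDs of dangling (untagged) images and remove them.
# `xargs -r` skips `docker rmi` entirely when the list is empty.
docker images -f dangling=true -q | xargs -r docker rmi

# Modern equivalent: one built-in command; it prompts for confirmation
# unless -f/--force is given.
docker image prune -f
```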
answered Nov 11 at 11:42 by David Maze
edited Nov 11 at 12:23