How much total virtual cores are required to process 100GB of data in spark
For example, if I choose nodes with 16 vcores each and 10 worker nodes, then (16 − 1) × 10 = 150 vcores in total, reserving one core per node for the Hadoop daemons.
Is it safe to say that 150 vcores are required to process 100 GB of data, or is there some calculation I should make before choosing the number of vcores?
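For context, the core count is usually driven by parallelism rather than by data size alone: 100 GB at Spark's default 128 MB partition size yields about 800 tasks, which any number of cores can work through in successive waves. A minimal sketch of that arithmetic (the node counts and sizes here are illustrative assumptions, not requirements):

```python
import math

def cluster_vcores(nodes, vcores_per_node, reserved_per_node=1):
    """Usable vcores after reserving cores for OS/Hadoop daemons."""
    return nodes * (vcores_per_node - reserved_per_node)

def num_partitions(data_bytes, partition_bytes=128 * 1024 * 1024):
    """Spark's default HDFS block/split size is 128 MB."""
    return math.ceil(data_bytes / partition_bytes)

usable = cluster_vcores(nodes=10, vcores_per_node=16)  # 150 usable vcores
tasks = num_partitions(100 * 1024**3)                  # 800 tasks for 100 GB
waves = math.ceil(tasks / usable)                      # tasks complete in ~6 waves
print(usable, tasks, waves)
```

The point of the sketch: fewer cores do not make the job impossible, only slower, because tasks queue up in waves; 150 vcores is one reasonable sizing, not a requirement.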
apache-spark
asked Nov 11 at 0:13
Ram
1 Answer
Number of cores in each node:
A rule of thumb is one core per task. If the tasks are not very heavy, you can allocate 0.75 core per task.
So if a machine has 16 cores, you can run at most 16 + (0.25 × 16) = 20 tasks; the extra 0.25 × 16 is added on the assumption that each task uses only 0.75 of a core.
This link may help you further:
https://data-flair.training/forums/topic/hadoop-cluster-hardware-planning-and-provisioning/
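The rule of thumb above can be written as a small calculation. Note that the 0.75-core figure is this answer's assumption for lightweight tasks, not a fixed Spark setting:

```python
def max_tasks(cores, core_per_task=0.75):
    """Additive approximation from the answer: with light tasks using
    0.75 of a core each, a 16-core node runs 16 + (0.25 * 16) = 20 tasks."""
    return int(cores + (1 - core_per_task) * cores)

print(max_tasks(16))  # 20
```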
answered Nov 11 at 10:50
swapnil shashank