sudo apt autoremove
If the above does not remove the old kernels, first check which kernel is currently in use:
uname -r
This prints the kernel version currently in use. We can now remove the earlier kernel versions left behind by upgrades.
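A rough sketch of the cleanup, assuming a Debian/Ubuntu system; the version in the purge command is a placeholder, so replace it with one of the old versions from the list and never the running one:

```shell
# Show the running kernel release (never remove this one)
uname -r
# List every installed kernel image package (Debian/Ubuntu)
dpkg --list 'linux-image*' | grep '^ii' || true
# Purge a specific old version (placeholder version; pick one from the list above)
# sudo apt purge linux-image-5.4.0-80-generic
```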
OS: CentOS
Prerequisite: Java 7 or 8
sudo vi /etc/profile
export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk
export JRE_HOME=/usr/lib/jvm/jre
Download Kafka from any mirror; the release used here is kafka_2.11-0.10.1.1.tgz
Untar: tar -xvf kafka_2.11-0.10.1.1.tgz
sudo mv kafka_2.11-0.10.1.1 /opt
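After editing /etc/profile, the variables can be set and checked in the current shell; the paths below are the ones assumed above, so adjust them if your JDK lives elsewhere:

```shell
# Same exports as in /etc/profile (paths assumed from the steps above)
export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk
export JRE_HOME=/usr/lib/jvm/jre
echo "$JAVA_HOME"
# java -version   # uncomment to confirm the JDK resolves (should report 1.8.x)
```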
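Once the extracted directory is in /opt, the broker can be started from there. This is a sketch assuming the install location from the steps above; ZooKeeper must be running before the Kafka broker, and both scripts ship with the distribution:

```shell
KAFKA_HOME=/opt/kafka_2.11-0.10.1.1   # assumed install location from the steps above
echo "$KAFKA_HOME"
# From $KAFKA_HOME, start ZooKeeper first, then the broker:
# bin/zookeeper-server-start.sh config/zookeeper.properties &
# bin/kafka-server-start.sh config/server.properties &
```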
In the distributed world, while writing map-reduce jobs, there are many situations where the input data is not evenly partitionable. All the data is picked up by multiple mappers, but it all gets mapped to the same key. Once the mappers finish, if we end up with one key carrying a huge list of values, we burden the reducers; by that I mean we burden every step after the map phase until the data reaches the reducer nodes.
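One common mitigation is mapper-side pre-aggregation: each mapper emits a small partial result instead of shipping every value under one key. A rough command-line analogy, using the assumed input 1..100 split across two "mappers" that each emit a (sum, count) pair for a single "reducer" to merge:

```shell
# Two "mapper" partitions each emit a partial (sum, count);
# the "reducer" only merges two small partials instead of 100 raw values.
{ seq 1 50   | awk '{s += $1; c++} END {print s, c}'
  seq 51 100 | awk '{s += $1; c++} END {print s, c}'
} | awk '{s += $1; c += $2} END {print s / c}'
# prints 50.5
```

The reducer's input size now grows with the number of mappers, not with the number of records, which is exactly what relieves the post-map shuffle and the reducer nodes.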
Let's work through an example to explore this situation and see how it can be resolved.
Challenge: find the average of the natural numbers.
Bird's-eye view: