First of all, switch to the hdfs user:
sudo su hdfs
hdfs fsck /
# or
hdfs fsck hdfs://hdfsHost:port/
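Note that fsck only reports on the health of the filesystem; it does not repair anything on its own. If the summary flags corrupt or missing blocks, the standard options below give per-file detail (run against the root path here purely as an illustration):

hdfs fsck / -files -blocks -locations

This lists every file checked, the blocks that make up each file, and the datanodes holding each block.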
import scala.collection.mutable.Stack

case class Tower(name: String, disks: Stack[Int])
case class TohState(src: String, tgt: String, tmp: String)
case class TohStep(move: String, before: TohState, after: TohState)

object LazyToh {
  val srcName = "SRC"
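The listing above is cut off inside LazyToh. As a rough sketch of the idea it builds toward (this is a reconstruction, not the article's actual code; LazyTohSketch and moves are hypothetical names, and it uses Scala 2.13's LazyList, which would be Stream in older Scala), the moves of a Tower of Hanoi can be produced as a lazy sequence, so that no step is materialized before a consumer demands it:

object LazyTohSketch {
  // Hypothetical sketch: lazily emit the moves for n disks.
  // Classic recursion: move n-1 disks out of the way, move the
  // largest disk, then move the n-1 disks back on top of it;
  // #::: keeps the right-hand tail deferred until it is consumed.
  def moves(n: Int, src: String, tgt: String, tmp: String): LazyList[String] =
    if (n == 0) LazyList.empty
    else moves(n - 1, src, tmp, tgt) #:::
      LazyList(s"move disk $n from $src to $tgt") #:::
      moves(n - 1, tmp, tgt, src)
}

For example, LazyTohSketch.moves(10, "SRC", "TGT", "TMP").take(3).foreach(println) prints the first three moves without forcing the rest of the 2^10 - 1 total.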
import org.joda.time.DateTime
import scala.annotation.tailrec
import scala.util.Random

object RANStream {
  val randomAlphaNumIterator = Random.alphanumeric.iterator

  @tailrec
  def getRandomString(length: Int, acc: String = ""): String = {
    // Reconstructed body (the original listing is truncated here):
    if (length <= 0) acc
    else getRandomString(length - 1, acc + randomAlphaNumIterator.next())
  }
}
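Assuming that reconstruction, a quick check looks like this (the output is random; the value shown is only illustrative):

println(RANStream.getRandomString(8)) // e.g. "x9QzT3aB"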
Let's assume that you have a cluster with the name awesome_cluster. On a fresh Ambari cluster, we need to follow these steps to create the HDFS view.
Well... Ambari tries to impersonate the currently logged-in user via the superuser, and thus the simplest (but not the best) thing is to allow the superuser to impersonate all users. Similarly, we also need to tell Ambari the name of the host from which the superuser can connect (the simplest is to allow all hosts).
On the Ambari dashboard, go to Services > HDFS > Configs > Advanced > Custom core-site and add the following new parameters:
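The property names below follow Hadoop's standard proxyuser pattern; root is an assumption for the account the Ambari server runs as, so substitute your own superuser if it differs:

hadoop.proxyuser.root.groups=*
hadoop.proxyuser.root.hosts=*

Save the configuration and restart the services Ambari marks as stale for the change to take effect.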
- Set httpOnly (and secure to true if running over SSL) when setting cookies.
- Use csrf for preventing Cross-Site Request Forgery: http://expressjs.com/api.html#csrf
- Avoid bodyParser() and only use multipart explicitly. To avoid multipart's vulnerability to 'temp file' bloat, use the defer property and pipe() the multipart upload stream to the intended destination.