Environment: Ubuntu 14.04, Hadoop 2.6

After I run start-all.sh and then jps, DataNode is not listed in the terminal:

>jps
9529 ResourceManager
9652 NodeManager
9060 NameNode
10108 Jps
9384 SecondaryNameNode

Following this answer: Datanode process not running in Hadoop

I tried its top solution:

  • bin/stop-all.sh (or stop-dfs.sh and stop-yarn.sh in the 2.x series)
  • rm -Rf /app/tmp/hadoop-your-username/*
  • bin/hadoop namenode -format (or hdfs in the 2.x series; see the sketch right after this list)
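Put together for Hadoop 2.x, the sequence would presumably look like this (the temp path and username placeholder are the ones from the quoted answer; adjust them to your setup):

# run from $HADOOP_HOME; 2.x equivalents of the three steps above
sbin/stop-dfs.sh && sbin/stop-yarn.sh
rm -rf /app/tmp/hadoop-your-username/*   # wipe the HDFS temp/storage dir
bin/hdfs namenode -format                # 2.x replacement for "hadoop namenode -format"
sbin/start-dfs.sh && sbin/start-yarn.sh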

However, now I get this:

>jps
20369 ResourceManager
26032 Jps
20204 SecondaryNameNode
20710 NodeManager

As you can see, now even the NameNode is missing. Please help.

DataNode logs: https://gist.github.com/fifiteen82726/b561bbd9cdcb9bf36032

NameNode logs: https://gist.github.com/fifiteen82726/02dcf095b5a23c1570b0

mapred-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>mapreduce.framework.name</name>
 <value>yarn</value>
</property>

</configuration>

UPDATE

coda@ubuntu:/usr/local/hadoop/sbin$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
15/04/30 01:07:25 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
coda@localhost's password: 
localhost: chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.4’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.5’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.3’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.4’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.2’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.3’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.1’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.2’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.1’: Permission denied
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out: Permission denied
localhost: ulimit -a for user coda
localhost: core file size          (blocks, -c) 0
localhost: data seg size           (kbytes, -d) unlimited
localhost: scheduling priority             (-e) 0
localhost: file size               (blocks, -f) unlimited
localhost: pending signals                 (-i) 3877
localhost: max locked memory       (kbytes, -l) 64
localhost: max memory size         (kbytes, -m) unlimited
localhost: open files                      (-n) 1024
localhost: pipe size            (512 bytes, -p) 8
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out: Permission denied
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out: Permission denied
coda@localhost's password: 
localhost: chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.4’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.5’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.3’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.4’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.2’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.3’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.1’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.2’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.1’: Permission denied
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out: Permission denied
localhost: ulimit -a for user coda
localhost: core file size          (blocks, -c) 0
localhost: data seg size           (kbytes, -d) unlimited
localhost: scheduling priority             (-e) 0
localhost: file size               (blocks, -f) unlimited
localhost: pending signals                 (-i) 3877
localhost: max locked memory       (kbytes, -l) 64
localhost: max memory size         (kbytes, -m) unlimited
localhost: open files                      (-n) 1024
localhost: pipe size            (512 bytes, -p) 8
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out: Permission denied
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out: Permission denied
Starting secondary namenodes [0.0.0.0]
coda@0.0.0.0's password: 
0.0.0.0: chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
0.0.0.0: secondarynamenode running as process 20204. Stop it first.
15/04/30 01:07:51 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
resourcemanager running as process 20369. Stop it first.
coda@localhost's password: 
localhost: chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
localhost: nodemanager running as process 20710. Stop it first.
coda@ubuntu:/usr/local/hadoop/sbin$ jps
20369 ResourceManager
2934 Jps
20204 SecondaryNameNode
20710 NodeManager
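The repeated "Operation not permitted" / "Permission denied" lines mean the user starting the daemons (coda, per the log file names) cannot write to /usr/local/hadoop/logs. One plausible fix, if you keep running the daemons as that user, is to take ownership of the logs directory:

sudo chown -R coda:coda /usr/local/hadoop/logs
sudo chmod -R 755 /usr/local/hadoop/logs

(The first answer below instead creates a dedicated hadoop user and chowns the whole install to it, which is what the second update reflects.)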

UPDATE

hadoop@ubuntu:/usr/local/hadoop/sbin$ $HADOOP_HOME ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
15/05/03 09:32:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
hadoop@localhost's password: 
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-namenode-ubuntu.out
hadoop@localhost's password: 
localhost: datanode running as process 28584. Stop it first.
Starting secondary namenodes [0.0.0.0]
hadoop@0.0.0.0's password: 
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-secondarynamenode-ubuntu.out
15/05/03 09:32:47 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-resourcemanager-ubuntu.out
hadoop@localhost's password: 
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-nodemanager-ubuntu.out
hadoop@ubuntu:/usr/local/hadoop/sbin$ jps
6842 Jps
28584 DataNode
Could you please update the NameNode logs?
Post your DataNode logs.
Actually, you shouldn't format the NameNode more than once; the cluster is now unstable because of that.
Sorry for asking a silly question, but how do I find the NameNode and DataNode logs?
You can find the Hadoop logs in the $HADOOP_HOME/logs folder.
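For example, with the install location from this question (the .log files hold the full daemon output; the .out files seen in the transcripts above only capture startup messages):

ls $HADOOP_HOME/logs
tail -n 50 $HADOOP_HOME/logs/hadoop-coda-datanode-ubuntu.log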

Original author: Coda Chang | 2015-04-28

5 Answers

  1. (5 votes)

    FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
    java.io.IOException: All directories in dfs.datanode.data.dir are invalid: "/usr/local/hadoop_store/hdfs/datanode/"

    This error may be due to bad permissions on the /usr/local/hadoop_store/hdfs/datanode/ folder.
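
    (For context, that path is whatever dfs.datanode.data.dir points to in hdfs-site.xml. In a typical single-node setup the relevant part of that file looks roughly like this sketch; the exact paths are assumed from the error messages:)

    <!-- hdfs-site.xml storage locations (sketch; paths assumed from the errors above) -->
    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
      <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop_store/hdfs/namenode</value>
      </property>
      <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop_store/hdfs/datanode</value>
      </property>
    </configuration>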

    FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
    org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /usr/local/hadoop_store/hdfs/namenode is in an inconsistent state: storage directory does not exist or is not accessible.

    This error may be because the /usr/local/hadoop_store/hdfs/namenode folder has bad permissions or does not exist. To fix this, follow one of these options:

    OPTION I:

    If you do not have the folder /usr/local/hadoop_store/hdfs, create it and give it permissions as follows:

    sudo mkdir /usr/local/hadoop_store/hdfs
    sudo chown -R hadoopuser:hadoopgroup /usr/local/hadoop_store/hdfs
    sudo chmod -R 755 /usr/local/hadoop_store/hdfs
    

    Change hadoopuser and hadoopgroup to your Hadoop username and group name, respectively. Now try to start the Hadoop processes. If the problem still persists, try option II.

    OPTION II:

    Delete the contents of the /usr/local/hadoop_store/hdfs folder:

    sudo rm -r /usr/local/hadoop_store/hdfs/*
    

    Change the folder permissions:

    sudo chmod -R 755 /usr/local/hadoop_store/hdfs
    

    Now start the Hadoop processes. It should work.

    NOTE: Post the fresh logs if the error persists.

    UPDATE:

    In case you have not created the hadoop user and group, do the following:

    sudo addgroup hadoop
    sudo adduser --ingroup hadoop hadoop
    

    Now change the ownership of /usr/local/hadoop and /usr/local/hadoop_store:

    sudo chown -R hadoop:hadoop /usr/local/hadoop
    sudo chown -R hadoop:hadoop /usr/local/hadoop_store
    

    Switch to the hadoop user:

    su - hadoop
    

    Enter your hadoop user password. Your terminal prompt should now look like:

    hadoop@ubuntu:$

    Now type:

    $HADOOP_HOME/sbin/start-all.sh

    or

    sh /usr/local/hadoop/sbin/start-all.sh

    Thanks, I'll try it.
    Ahh, sorry. I don't remember ever setting a Hadoop username or group name. Where can I look up the names?
    What does ls -l /usr/local output?
    What does whoami output in the terminal?
    Did you do these steps while installing Hadoop: sudo addgroup hadoopgroupname and sudo adduser --ingroup hadoopgroupname hadoopusername? The hadoopgroupname and hadoopusername you gave during installation will be your Hadoop group name and username, respectively.
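
    (One way to run those checks together; getent is my addition here, one of several ways to list groups:)

    whoami                           # which user am I?
    ls -l /usr/local | grep hadoop   # who owns the hadoop directories?
    getent group | grep hadoop       # does a hadoop group exist?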

    Original author: Rajesh N

  2. (2 votes)

    I faced a similar problem: jps was not showing the DataNode.

    Removing the contents of the hdfs folder and changing the folder permissions worked for me.

    sudo rm -r /usr/local/hadoop_store/hdfs/*
    sudo chmod -R 755 /usr/local/hadoop_store/hdfs
    hadoop namenode -format
    start-all.sh
    jps
    
    At first I tried it this way, but it didn't work for me.

    Original author: Gitanjali Pathania

  3. (0 votes)

    One thing to remember when setting up the permissions:
    ssh-keygen -t rsa -P ""
    The above command should be entered on the NameNode only.
    The generated public key should then be added to all the data nodes:
    ssh-copy-id -i ~/.ssh/id_rsa.pub
    Then run the ssh command.
    That sets up the permissions, so no password is required when starting dfs afterwards.
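
    (Spelled out for a single-node setup like the one in this question, where every password prompt is for localhost; the empty passphrase and the localhost target are assumptions:)

    # generate a key pair with an empty passphrase
    ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
    # authorize it for localhost (on a real cluster: for each datanode host)
    ssh-copy-id -i ~/.ssh/id_rsa.pub localhost
    # verify: this should now log in without a password prompt
    ssh localhost exit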

    Original author: Sanjay Das

  4. (0 votes)

    Faced the same problem: the NameNode service was not showing in the jps output.
    Solution: it is due to a permissions problem on the /usr/local/hadoop_store/hdfs directory.
    Just change the permissions, format the NameNode, and restart Hadoop:

    $ sudo chmod -R 755 /usr/local/hadoop_store/hdfs

    $ hadoop namenode -format

    $ start-all.sh

    $ jps

    Original author: PANDURANG BHADANGE

  5. (0 votes)

    The solution is to first stop your NameNode. Then go to /usr/local/hadoop and format it:

    bin/hdfs namenode -format

    Then delete and recreate the hdfs and tmp directories in your home:

    rm -rf ~/tmp ~/hdfs
    mkdir ~/tmp
    mkdir ~/hdfs
    chmod 750 ~/hdfs

    Go to the Hadoop directory and start Hadoop:

    sbin/start-dfs.sh

    The DataNode will now show up.

    Original author: Amar Desai
