FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask

Big data learning monk 2022-02-13 07:16:47 Views: 462


When inserting data into a Hive table, the statement cannot run directly and reports an error.

Cause of the error:
The first cause: the NameNode does not have enough memory.
Reason: the JVM does not have enough free memory left to run the job.
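If the NameNode JVM heap really is the bottleneck, it can be enlarged in hadoop-env.sh. A minimal sketch for Hadoop 3.x (the 2048 MB value is an illustrative assumption, not a figure from this article):

```
# in $HADOOP_HOME/etc/hadoop/hadoop-env.sh
# raise the maximum heap used by Hadoop daemons (in MB); illustrative value
export HADOOP_HEAPSIZE_MAX=2048
# or target the NameNode specifically through its JVM options
export HDFS_NAMENODE_OPTS="-Xmx2048m"
```

Restart the NameNode after changing these values for them to take effect.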

Error message:

0: jdbc:hive2://hadoop101:10000> insert into table student values(1002,"zss");
INFO : Compiling command(queryId=root_20210909172055_08738d9c-4ded-4067-a2ac-6a64572ad49b): insert into table student values(1002,"zss")
INFO : Concurrency mode is disabled, not creating a lock manager
INFO : Semantic Analysis Completed (retrial = false)
INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:col1, type:int, comment:null), FieldSchema(name:col2, type:string, comment:null)], properties:null)
INFO : Completed compiling command(queryId=root_20210909172055_08738d9c-4ded-4067-a2ac-6a64572ad49b); Time taken: 0.425 seconds
INFO : Concurrency mode is disabled, not creating a lock manager
INFO : Executing command(queryId=root_20210909172055_08738d9c-4ded-4067-a2ac-6a64572ad49b): insert into table student values(1002,"zss")
WARN : Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
INFO : Query ID = root_20210909172055_08738d9c-4ded-4067-a2ac-6a64572ad49b
INFO : Total jobs = 3
INFO : Launching Job 1 out of 3
INFO : Starting task [Stage-1:MAPRED] in serial mode
INFO : Number of reduce tasks determined at compile time: 1
INFO : In order to change the average load for a reducer (in bytes):
INFO : set hive.exec.reducers.bytes.per.reducer=<number>
INFO : In order to limit the maximum number of reducers:
INFO : set hive.exec.reducers.max=<number>
INFO : In order to set a constant number of reducers:
INFO : set mapreduce.job.reduces=<number>
INFO : number of splits:1
INFO : Submitting tokens for job: job_1631178391888_0002
INFO : Executing with tokens: []
INFO : The url to track the job: http://hadoop101:8088/proxy/application_1631178391888_0002/
INFO : Starting Job = job_1631178391888_0002, Tracking URL = http://hadoop101:8088/proxy/application_1631178391888_0002/
INFO : Kill Command = /opt/bagdata/hadoop-3.1.3/bin/mapred job -kill job_1631178391888_0002
INFO : Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
INFO : 2021-09-09 17:21:13,094 Stage-1 map = 0%, reduce = 0%
INFO : 2021-09-09 17:21:53,093 Stage-1 map = 100%, reduce = 100%
ERROR : Ended Job = job_1631178391888_0002 with errors
ERROR : FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
INFO : MapReduce Jobs Launched:
INFO : Stage-Stage-1: Map: 1 Reduce: 1 HDFS Read: 0 HDFS Write: 0 FAIL
INFO : Total MapReduce CPU Time Spent: 0 msec
INFO : Completed executing command(queryId=root_20210909172055_08738d9c-4ded-4067-a2ac-6a64572ad49b); Time taken: 59.838 seconds
INFO : Concurrency mode is disabled, not creating a lock manager
Error: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask (state=08S01,code=2)

Solution 1:

set hive.exec.mode.local.auto=true;
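This setting lets Hive run small queries in local mode instead of submitting a MapReduce job to YARN, which sidesteps the memory problem for small inserts. A minimal sketch of the related session settings (the thresholds shown are Hive's documented defaults, listed here only for context):

```
-- enable automatic local-mode execution for small jobs
set hive.exec.mode.local.auto=true;
-- local mode is used only when the input is below these thresholds (defaults shown)
set hive.exec.mode.local.auto.inputbytes.max=134217728;
set hive.exec.mode.local.auto.input.files.max=4;
```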

Cause of the error:
The second cause: insufficient YARN resources.
Reason:
The error is caused by the way YARN calculates virtual memory. In the example above, the program requests 1 GB of memory; YARN multiplies this value by a ratio (2.1 by default) to obtain the allowed virtual memory. When the virtual memory required by the program exceeds this calculated value, the error above is reported. Raising the ratio solves the problem. The relevant parameter is yarn.nodemanager.vmem-pmem-ratio in yarn-site.xml.
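The ratio itself can also be raised directly. A hedged sketch of the corresponding yarn-site.xml entry (the value 4 is an illustrative choice, not a recommendation from this article):

```
<property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>4</value>
    <description>default value is 2.1</description>
</property>
```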

Solution 2:
Adjust the value in Hadoop's configuration file yarn-site.xml:

<property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>2048</value>
    <description>default value is 1024</description>
</property>

Increase yarn.scheduler.minimum-allocation-mb from the default 1024 to 2048; after this change, the problem above was resolved immediately.
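To see why raising the minimum allocation helps, the virtual-memory check described above can be sketched as a small calculation (a simplified model of YARN's check, not its actual code):

```python
def vmem_limit_mb(allocated_mb, ratio=2.1):
    """Virtual-memory ceiling YARN enforces on a container: the allocated
    physical memory multiplied by yarn.nodemanager.vmem-pmem-ratio."""
    return allocated_mb * ratio

# With the default 1024 MB minimum allocation, the ceiling is about 2150 MB.
print(vmem_limit_mb(1024))
# After raising yarn.scheduler.minimum-allocation-mb to 2048, the ceiling
# roughly doubles to about 4300 MB, giving the job the headroom it needs.
print(vmem_limit_mb(2048))
```

A container is killed only when its virtual memory usage exceeds this ceiling, so either raising the ratio (Solution 2's parameter discussion) or raising the physical allocation moves the ceiling out of the way.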

copyright: author [Big data learning monk]. Please include the original link when reprinting, thank you. https://en.javamana.com/2022/02/202202130716446147.html