Apache Spark - Using PyCUDA with PySpark - nvcc not found
My environment: I'm using Hortonworks HDP 2.4 with Spark 1.6.1 on a small AWS EC2 cluster of four g2.2xlarge instances running Ubuntu 14.04. Each instance has CUDA 7.5, Anaconda Python 3.5, and PyCUDA 2016.1.1.
In /etc/bash.bashrc I've set:
CUDA_HOME=/usr/local/cuda
CUDA_ROOT=/usr/local/cuda
PATH=$PATH:/usr/local/cuda/bin
On all 4 machines I can access nvcc from the command line as the ubuntu user, the root user, and the yarn user.
My problem: I have a Python/PyCUDA project that I've adapted to run on Spark. It runs great on the local Spark installation on my Mac, but when I run it on AWS I get:
FileNotFoundError: [Errno 2] No such file or directory: 'nvcc'
Since it runs on the Mac in local mode, my guess is that it's a configuration issue with CUDA/PyCUDA in the worker processes, but I'm stumped as to what it could be.
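For what it's worth, below is the kind of minimal check that can show what the YARN worker processes actually see. This is a diagnostic sketch of my own, not part of the original job; it assumes an existing SparkContext named sc (e.g. from a pyspark shell on the cluster) and reports each executor's PATH and whether it can locate nvcc:

import os
import shutil

def check_nvcc(_):
    # Report this worker's PATH and where (if anywhere) nvcc resolves to.
    yield (os.environ.get('PATH'), shutil.which('nvcc'))

# One tuple per partition; run enough partitions to hit every node.
print(sc.parallelize(range(8), 8).mapPartitions(check_nvcc).collect())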
Any ideas?
Edit: below is the stack trace from one of the failing jobs:
16/11/10 22:34:54 INFO ExecutorAllocationManager: Requesting 13 new executors because tasks are backlogged (new desired total will be 17)
16/11/10 22:34:57 INFO TaskSetManager: Starting task 16.0 in stage 2.0 (TID 34, ip-172-31-26-35.ec2.internal, partition 16,RACK_LOCAL, 2148 bytes)
16/11/10 22:34:57 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on ip-172-31-26-35.ec2.internal:54657 (size: 32.2 KB, free: 511.1 MB)
16/11/10 22:35:03 WARN TaskSetManager: Lost task 0.0 in stage 2.0 (TID 18, ip-172-31-26-35.ec2.internal): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/lib/python3.5/site-packages/pytools/prefork.py", line 46, in call_capture_output
    popen = Popen(cmdline, cwd=cwd, stdin=PIPE, stdout=PIPE, stderr=PIPE)
  File "/home/ubuntu/anaconda3/lib/python3.5/subprocess.py", line 947, in __init__
    restore_signals, start_new_session)
  File "/home/ubuntu/anaconda3/lib/python3.5/subprocess.py", line 1551, in _execute_child
    raise child_exception_type(errno_num, err_msg)
FileNotFoundError: [Errno 2] No such file or directory: 'nvcc'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/hadoop/yarn/local/usercache/ubuntu/appcache/application_1478814770538_0004/container_e40_1478814770538_0004_01_000009/pyspark.zip/pyspark/worker.py", line 111, in main
    process()
  File "/hadoop/yarn/local/usercache/ubuntu/appcache/application_1478814770538_0004/container_e40_1478814770538_0004_01_000009/pyspark.zip/pyspark/worker.py", line 106, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/usr/hdp/2.4.2.0-258/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 2346, in pipeline_func
  File "/usr/hdp/2.4.2.0-258/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 2346, in pipeline_func
  File "/usr/hdp/2.4.2.0-258/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 317, in func
  File "/home/ubuntu/pycuda-euler/src/cli_spark_gpu.py", line 36, in <lambda>
    hail_mary = data.mapPartitions(lambda x: ec.assemble2(k, buffer=x, readlength=datalength, readcount=datacount)).saveAsTextFile('hdfs://172.31.26.32/genome/sra_output')
  File "./eulercuda.zip/eulercuda/eulercuda.py", line 499, in assemble2
    lmerlength, evlist, eelist, levedgelist, entedgelist, readcount)
  File "./eulercuda.zip/eulercuda/eulercuda.py", line 238, in constructdebruijngraph
    lmercount, h_kmerkeys, h_kmervalues, kmercount, numreads)
  File "./eulercuda.zip/eulercuda/eulercuda.py", line 121, in readlmerskmerscuda
    d_lmers = enc.encode_lmer_device(buffer, partitionreadcount, d_lmers, readlength, lmerlength)
  File "./eulercuda.zip/eulercuda/pyencode.py", line 78, in encode_lmer_device
    """)
  File "/home/ubuntu/anaconda3/lib/python3.5/site-packages/pycuda/compiler.py", line 265, in __init__
    arch, code, cache_dir, include_dirs)
  File "/home/ubuntu/anaconda3/lib/python3.5/site-packages/pycuda/compiler.py", line 255, in compile
    return compile_plain(source, options, keep, nvcc, cache_dir, target)
  File "/home/ubuntu/anaconda3/lib/python3.5/site-packages/pycuda/compiler.py", line 78, in compile_plain
    checksum.update(preprocess_source(source, options, nvcc).encode("utf-8"))
  File "/home/ubuntu/anaconda3/lib/python3.5/site-packages/pycuda/compiler.py", line 50, in preprocess_source
    result, stdout, stderr = call_capture_output(cmdline, error_on_nonzero=False)
  File "/home/ubuntu/anaconda3/lib/python3.5/site-packages/pytools/prefork.py", line 197, in call_capture_output
    return forker[0].call_capture_output(cmdline, cwd, error_on_nonzero)
  File "/home/ubuntu/anaconda3/lib/python3.5/site-packages/pytools/prefork.py", line 54, in call_capture_output
    % ( " ".join(cmdline), e))
pytools.prefork.ExecError: error invoking 'nvcc --preprocess -arch sm_30 -I/home/ubuntu/anaconda3/lib/python3.5/site-packages/pycuda/cuda /tmp/tmpkpqwoaxf.cu --compiler-options -P': [Errno 2] No such file or directory: 'nvcc'

    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
    at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
To close the loop on this, I worked my way through the problem.
Note: I know this is neither a good nor a permanent answer for most people. In my case I'm running proof-of-concept code for my dissertation, and as soon as I have final results I'm decommissioning the servers, so I doubt this answer is suitable or appropriate for most users.
I ended up hardcoding the full path to nvcc into compile_plain() in PyCUDA's compiler.py file.
Partial listing:
def compile_plain(source, options, keep, nvcc, cache_dir, target="cubin"):
    from os.path import join

    assert target in ["cubin", "ptx", "fatbin"]

    # Hardcoded workaround: always resolve nvcc from the CUDA install directory,
    # since the worker processes don't have /usr/local/cuda/bin on their PATH.
    nvcc = '/usr/local/cuda/bin/' + nvcc

    if cache_dir:
        checksum = _new_md5()
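The root cause, as far as I can tell, is that YARN launches executors in non-interactive containers that never source /etc/bash.bashrc, so the PATH set there never reaches the worker processes. A less invasive alternative (a sketch I haven't tested on this cluster) would be to push the environment to the executors through Spark's own spark.executorEnv.* configuration instead of patching PyCUDA; the PATH value shown here is an assumption and should include whatever else your containers need:

from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName('pycuda-euler')
        # spark.executorEnv.* sets environment variables inside each executor,
        # so PyCUDA's nvcc subprocess can be found without touching compiler.py.
        .set('spark.executorEnv.PATH', '/usr/local/cuda/bin:/usr/bin:/bin')
        .set('spark.executorEnv.CUDA_HOME', '/usr/local/cuda'))
sc = SparkContext(conf=conf)

The same keys can be passed on the command line via spark-submit --conf.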
Hopefully this points someone else in the proper direction.