Illegal instruction (core dumped) after running import tensorflow

Question:

I created a fresh virtual environment: virtualenv -p python2 test_venv/
And installed tensorflow: pip install --upgrade --no-cache-dir tensorflow

import tensorflow gives me Illegal instruction (core dumped)

Please help me understand what’s going on and how I can fix it. Thank you.

CPU information:

-cpu
          description: CPU
          product: Intel(R) Core(TM) i3 CPU       M 330  @ 2.13GHz
          bus info: cpu@0
          version: CPU Version
          capabilities: x86-64 fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt lahf_lm tpr_shadow vnmi flexpriority ept vpid dtherm arat cpufreq

Stacktrace obtained with gdb:

#0  0x00007fffe5793880 in std::pair<std::__detail::_Node_iterator<std::pair<tensorflow::StringPiece const, std::function<bool (tensorflow::Variant*)> >, false, true>, bool> std::_Hashtable<tensorflow::StringPiece, std::pair<tensorflow::StringPiece const, std::function<bool (tensorflow::Variant*)> >, std::allocator<std::pair<tensorflow::StringPiece const, std::function<bool (tensorflow::Variant*)> > >, std::__detail::_Select1st, std::equal_to<tensorflow::StringPiece>, tensorflow::StringPieceHasher, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<true, false, true> >::_M_emplace<std::pair<tensorflow::StringPiece, std::function<bool (tensorflow::Variant*)> > >(std::integral_constant<bool, true>, std::pair<tensorflow::StringPiece, std::function<bool (tensorflow::Variant*)> >&&) ()
   from /media/gerry/hdd_1/ws_hdd/test_venv/local/lib/python2.7/site-packages/tensorflow/python/../libtensorflow_framework.so
#1  0x00007fffe5795735 in tensorflow::UnaryVariantOpRegistry::RegisterDecodeFn(std::string const&, std::function<bool (tensorflow::Variant*)> const&) () from /media/gerry/hdd_1/ws_hdd/test_venv/local/lib/python2.7/site-packages/tensorflow/python/../libtensorflow_framework.so
#2  0x00007fffe5770a7c in tensorflow::variant_op_registry_fn_registration::UnaryVariantDecodeRegistration<tensorflow::Tensor>::UnaryVariantDecodeRegistration(std::string const&) ()
   from /media/gerry/hdd_1/ws_hdd/test_venv/local/lib/python2.7/site-packages/tensorflow/python/../libtensorflow_framework.so
#3  0x00007fffe56ea165 in _GLOBAL__sub_I_tensor.cc ()
   from /media/gerry/hdd_1/ws_hdd/test_venv/local/lib/python2.7/site-packages/tensorflow/python/../libtensorflow_framework.so
#4  0x00007ffff7de76ba in call_init (l=<optimized out>, argc=argc@entry=2, argv=argv@entry=0x7fffffffd5c8, env=env@entry=0xa7b4d0)
    at dl-init.c:72
#5  0x00007ffff7de77cb in call_init (env=0xa7b4d0, argv=0x7fffffffd5c8, argc=2, l=<optimized out>) at dl-init.c:30
#6  _dl_init (main_map=main_map@entry=0xa11920, argc=2, argv=0x7fffffffd5c8, env=0xa7b4d0) at dl-init.c:120
#7  0x00007ffff7dec8e2 in dl_open_worker (a=a@entry=0x7fffffffb5c0) at dl-open.c:575
#8  0x00007ffff7de7564 in _dl_catch_error (objname=objname@entry=0x7fffffffb5b0, errstring=errstring@entry=0x7fffffffb5b8, 
    mallocedp=mallocedp@entry=0x7fffffffb5af, operate=operate@entry=0x7ffff7dec4d0 <dl_open_worker>, args=args@entry=0x7fffffffb5c0)
    at dl-error.c:187
#9  0x00007ffff7debda9 in _dl_open (
    file=0x7fffea7cbc34 "/media/gerry/hdd_1/ws_hdd/test_venv/local/lib/python2.7/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so", mode=-2147483646, caller_dlopen=0x51ad19 <_PyImport_GetDynLoadFunc+233>, nsid=-2, argc=<optimized out>, argv=<optimized out>, env=0xa7b4d0)
    at dl-open.c:660
#10 0x00007ffff75ecf09 in dlopen_doit (a=a@entry=0x7fffffffb7f0) at dlopen.c:66
#11 0x00007ffff7de7564 in _dl_catch_error (objname=0x9b1870, errstring=0x9b1878, mallocedp=0x9b1868, operate=0x7ffff75eceb0 <dlopen_doit>, 
    args=0x7fffffffb7f0) at dl-error.c:187
#12 0x00007ffff75ed571 in _dlerror_run (operate=operate@entry=0x7ffff75eceb0 <dlopen_doit>, args=args@entry=0x7fffffffb7f0) at dlerror.c:163
#13 0x00007ffff75ecfa1 in __dlopen (file=<optimized out>, mode=<optimized out>) at dlopen.c:87
#14 0x000000000051ad19 in _PyImport_GetDynLoadFunc ()
#15 0x000000000051a8e4 in _PyImport_LoadDynamicModule ()
#16 0x00000000005b7b1b in ?? ()
#17 0x00000000004bc3fa in PyEval_EvalFrameEx ()
#18 0x00000000004c136f in PyEval_EvalFrameEx ()
#19 0x00000000004b9ab6 in PyEval_EvalCodeEx ()
#20 0x00000000004b97a6 in PyEval_EvalCode ()
#21 0x00000000004b96df in PyImport_ExecCodeModuleEx ()
#22 0x00000000004b2b06 in ?? ()
#23 0x00000000004a4ae1 in ?? ()
Asked By: Gerry


Answers:

I would use an older version. It looks like your CPU does not support AVX instructions.

Quoting from their release page:

Breaking Changes
Prebuilt binaries are now built against CUDA 9.0 and cuDNN 7.
Prebuilt binaries will use AVX instructions. This may break TF on older CPUs.

You have at least two options:

  1. Use tensorflow 1.5 or older (see the example below)

  2. Build from source

Regarding your concern about differences: you will miss out on some newer features, but most basic features and the documentation are not that different.
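A quick way to confirm the diagnosis and pin a pre-AVX release (a sketch only; 1.5 is simply the last release series built without AVX, so any 1.5.x build should do):

# Check whether the CPU advertises AVX; empty output means no AVX support.
grep -o 'avx[2]*' /proc/cpuinfo | sort -u

# Install the last release series built without AVX instructions.
pip install --upgrade "tensorflow==1.5.*"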

Answered By: Dinesh

Unfortunately, 1.6 has given many people the same error. I received it after installing 1.7 on a machine with an old Core2 CPU. I’ve settled on 1.5, as I can’t fit the big graphics card into the machine with the more up-to-date processor!

Answered By: mjflory

There is an issue about this on GitHub, which unfortunately seems to have gotten little interest from the tensorflow team.

There are a few community builds around the web that might work, depending on your situation.

Answered By: Laurent S

As explained in the accepted answer, this issue can be fixed either by installing an older version of TensorFlow (v1.5) or by building from source. Between the two, building from source is arguably the preferred route despite the additional effort, given that the resulting binary contains the most up-to-date components of TensorFlow.

This article explains how to build TensorFlow from source and optimize it for an older CPU. The key is to detect the CPU flags that are supported and enable them as optimization flags when configuring the build.

The following command detects common CPU optimization flags:

$ grep flags -m1 /proc/cpuinfo | cut -d ":" -f 2 | tr '[:upper:]' '[:lower:]' | { read FLAGS; OPT="-march=native"; for flag in $FLAGS; do case "$flag" in "sse4_1" | "sse4_2" | "ssse3" | "fma" | "cx16" | "popcnt" | "avx" | "avx2") OPT+=" -m$flag";; esac; done; MODOPT=${OPT//_/.}; echo "$MODOPT"; }

If -mavx and/or -mavx2 do not appear in the output, AVX support is missing, and the source build should be done with only the other optimization flags displayed.
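For example, if the command prints -march=native -msse4.1 -msse4.2 -mssse3 (i.e. no -mavx), those flags can be passed to the source build roughly like this (a sketch only; the exact invocation depends on your TensorFlow version and the answers given to ./configure):

# Build the pip package with only the flags the CPU actually supports.
bazel build --config=opt \
    --copt=-march=native --copt=-msse4.1 --copt=-msse4.2 --copt=-mssse3 \
    //tensorflow/tools/pip_package:build_pip_package

# Produce the wheel and install it into the virtual environment.
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-*.whl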

A related article discusses the common root cause of this issue in more detail and is provided as an additional reference.

Answered By: mikaelfs

I had a similar issue, and it turned out to be because my slightly older CPU does not work well with TensorFlow 1.6+ (https://www.tensorflow.org/install/source):

Note: Starting with TensorFlow 1.6, binaries use AVX instructions which may not run on older CPUs.

So, as mentioned before, you can either install TensorFlow 1.5 or, if you still want the latest version of TF, install it with conda instead (both solutions worked for me).

For conda installation:

conda create -n tensorflow
conda install tensorflow-gpu -n tensorflow

https://github.com/tensorflow/tensorflow/issues/17411

Answered By: Ehab AlBadawy

The following steps worked for me, inside a conda virtual env (after removing the existing tensorflow):

Step 1: install keras-applications using pip

Step 2: install tensorflow (no need to downgrade)
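A minimal sketch of those steps (assuming keras-applications is the intended PyPI package name):

pip uninstall tensorflow          # remove the existing installation
pip install keras-applications    # step 1
pip install tensorflow            # step 2, no downgrade needed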

Answered By: elphi

I would use Docker to downgrade TF to a previous version. You can find the available tags on Docker Hub.

For example:

docker run --gpus all -it tensorflow/tensorflow:2.2.1-gpu bash
Answered By: David Bacelj

This might not be directly related to TensorFlow, Keras, or PyTorch, sorry about that.

But it happened to me on L4T (NVIDIA Jetson AGX Xavier) while I was installing the latest versions of NumPy, pandas, and protobuf: the same error was raised, and the console also warned me about a pandas dependency, python-dateutil==2.8.1, for reasons I still don’t understand (I’d appreciate it if someone could explain).

To work my way back down that rabbit hole, I tried these steps:

pip3 uninstall numpy 
pip3 uninstall pandas
pip3 uninstall protobuf 
pip3 uninstall python-dateutil 

Then reinstall them with specific versions:

pip3 install numpy==1.13.3
pip3 install pandas==0.22.0
pip3 install protobuf==3.0.0

It now works well with TensorFlow 1.5.0 and PyTorch 1.6/1.7.

Answered By: Emre

I had a similar issue going from tensorflow==2.3.1 to tensorflow==2.4.0.
Prebuilt binaries do not sit well with the current CPU chip shortage, which makes upgrading hardware difficult for a lot of people.

I might need to build tensorflow myself to be able to use the latest features from tensorflow_probability (which depends on tf 2.4.0).

Edit 2: From https://github.com/tensorflow/tensorflow/releases/tag/v2.4.1:

This release removes the AVX2 requirement from TF 2.4.0.

It looks like I was not the only one having difficulties with AVX2 support.
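If you are on 2.4.0, moving to that patched release should therefore avoid the AVX2-related crash (note that the prebuilt binaries still require plain AVX), e.g.:

pip install --upgrade tensorflow==2.4.1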

Answered By: Tobbey