In all my previous posts (like this one) you can see VASmalltalk running on Raspberry Pi, on Rock64 and even on the Nvidia Jetson TX2:
You can also see previous posts where I show how to use TensorFlow from Smalltalk to recognize objects in images.
Last week, at ESUG 2019, I demoed a VASmalltalk and TensorFlow project on an Nvidia Jetson Nano provided by Instantiations.
In this post, I will show you how to get started with the Jetson Nano, how to run VASmalltalk and finally how to use the TensorFlow wrapper to take advantage of the 128 GPU cores.
What you need before starting
Below is the whole list of supplies I gathered:
- NVIDIA Jetson Nano Developer Kit
- MicroSD card: I got a Samsung 128GB U3
- Power supply: you can choose either a limited USB power supply or a much more powerful DC switching power supply. I got the latter.
- Case (optional): I like metal cases so I got one with Power & Reset Control Switch.
- Fan (optional): I could only find one fan that would fit on the Nano but it was very expensive.
- Wireless Module (optional): the board does not come with built-in WiFi or Bluetooth so I decided to buy this module.
Assembling the Nano and related hardware
For this step, I first followed this short guide but eventually moved to this one, which is super detailed. I won’t repeat everything written there; instead, I will add my own notes below.
I started by formatting the SD card. For this, I always use the “SD Card Formatter” program. Downloading the operating system image and flashing the SD card was easy… But the first downside is that for the first boot you NEED an external monitor, keyboard and mouse. No way to do it headless :( After the first boot you can indeed enable SSH and VNC, but not before that.
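For reference, the download-and-flash step from a Linux machine looks roughly like this (a sketch: Etcher works just as well, and both the device name /dev/sdX and the .img file name inside the zip are assumptions, so double-check with lsblk before writing):

# Unpack the JetPack SD card image and write it to the card
unzip jetson-nano-sd-r32.2-2019-07-16.zip
# WARNING: make sure /dev/sdX really is your SD card before running dd
sudo dd if=jetson-nano-sd-r32.2.img of=/dev/sdX bs=4M status=progress conv=fsync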
The next step was to install the WiFi and Bluetooth module. It was not a walk in the park, but not that difficult either. You need to disassemble the Nano a bit, connect some wires, etc.:
One bad surprise is that the board is configured by default to boot from a USB power supply. In my case, I ordered a DC supply instead, as it’s better if you want to draw maximum power. But to tell the Nano whether to use USB or DC, you must change the jumper on J48. And guess what? The development kit (100 USD) does NOT even include a single jumper. So there you are with your Nano and your DC power supply, and you are stuck: you can’t boot it. Seriously? (BTW, before you ask: no, I didn’t have a female-to-female cable with me that day to use as a workaround for the jumper.)
The other complicated part was assembling the case and the fan. For that, I needed to carefully watch this video a few times. Once it was built, it felt really solid and nice. BTW, the case did come with a jumper for J48, which was really nice since it meant I could use the DC power supply.
The fan itself was also complicated. The Noctua NF-A4x20 5V PWM I bought wouldn’t fit easily: the NA-AV3 silicone anti-vibration mounts would not go through the holes of the Nano, and the fan screws provided with the case were too short. So I had to buy extra screws that were long enough.
When I was ready to try the fan, I powered it up and nothing happened. I thought I had done something wrong, and I had to re-open the case a few times… a painful process. I had almost given up when I found some help on the internet. Believe it or not, you must run a console command in order to start the fan: sudo jetson_clocks. After that, it started working.
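Note that jetson_clocks does not persist across reboots. If you want the fan to spin up automatically at boot, one option is a small systemd unit like the sketch below (the unit name is mine; it assumes the default /usr/bin/jetson_clocks location):

# /etc/systemd/system/jetson-clocks.service
[Unit]
Description=Run jetson_clocks at boot
After=multi-user.target

[Service]
Type=oneshot
ExecStart=/usr/bin/jetson_clocks

[Install]
WantedBy=multi-user.target

Then enable it once with sudo systemctl enable jetson-clocks.service.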
Setting it to run headless
While on most boards and operating systems this is easy, on the Nano it is a challenging part. The SSH part is easy: you almost don’t need to do anything in particular. But for VNC… OMG… I followed all the recommendations provided in this guide. In my case, I could never get xrdp working: when I tried to connect from my Mac, it simply crashed…
As for VNC, after all the workarounds/corrections mentioned there, I was able to connect, but the resolution was terrible (640×480). I spent quite some time googling until I found a workaround mentioned here. Basically, I ran sudo vim /etc/X11/xorg.conf and added these lines:
Section "Screen"
Identifier "Default Screen"
Monitor "Configured Monitor"
Device "Default Device"
SubSection "Display"
Depth 24
Virtual 1280 800
EndSubSection
EndSection
In other words, I needed to change the size of the virtual display that is used when no monitor is connected (by default it was 640×480).
After rebooting, I was finally able to get a decent resolution with VNC.
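For completeness, the vino-side recommendations from that guide essentially boil down to a couple of gsettings calls (a sketch using the standard GNOME Vino schema keys; the guide lists a few more steps):

# Accept VNC connections without a confirmation prompt, and disable Vino's
# encryption (it is incompatible with most VNC clients)
gsettings set org.gnome.Vino prompt-enabled false
gsettings set org.gnome.Vino require-encryption false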
Installing VASmalltalk dependencies
This part was easy and I basically followed the bash script of a previous post:
# Install VA dependencies for running headful and the VA Environments tool
sudo apt-get install --assume-yes --no-install-recommends \
libc6 \
locales \
xterm \
libxm4 \
xfonts-base \
xfonts-75dpi \
xfonts-100dpi
# Only necessary if we are using OpenSSL from Smalltalk
sudo apt-get install --assume-yes --no-install-recommends \
libssl-dev
# Generate locales
sudo su
echo en_US.ISO-8859-1 ISO-8859-1 >> /etc/locale.gen
echo en_US.ISO-8859-15 ISO-8859-15 >> /etc/locale.gen
locale-gen
exit
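As an optional sanity check, the freshly generated locales should now show up in the list reported by the system:

locale -a | grep -i en_us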
Installing TensorFlow and VASmalltalk wrapper
The first thing you must do is either build TensorFlow from scratch for the Nvidia Jetson Nano with CUDA support, or get a pre-built binary from somewhere. I went with the latter, using the following bash script:
mkdir tensorflow
cd tensorflow
wget https://dl.photoprism.org/tensorflow/nvidia-jetson/libtensorflow-nvidia-jetson-nano-1.14.0.tar.gz
tar xvzf libtensorflow-nvidia-jetson-nano-1.14.0.tar.gz
cd lib
ln -s libtensorflow_framework.so libtensorflow_framework.so.1
The symbolic link is a workaround: in the shared libraries I downloaded, libtensorflow.so depends on libtensorflow_framework.so.1, but the library that shipped was named libtensorflow_framework.so, so I made a symlink.
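To double-check that the dependency now resolves, you can ask the dynamic linker from within the lib directory (an optional sanity check):

LD_LIBRARY_PATH=. ldd libtensorflow.so | grep tensorflow_framework

If this still prints “not found”, the symlink (or the LD_LIBRARY_PATH setup shown below) needs another look.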
To install VASmalltalk and the TensorFlow wrapper, I followed the instructions from the GitHub repository. The only detail is that the ARM64 VM will ship with the upcoming 9.2 ECAP 3… so until the release is public, send me a private message and I will get it to you.
For the .ini file I added:
TENSORFLOW_LIB=/home/mpeck/Instantiations/tensorflow/lib/libtensorflow.so
The last bit is that TensorFlow needs help so that libtensorflow can find libtensorflow_framework. What I did was export LD_LIBRARY_PATH before starting the VASmalltalk image. Another possibility is moving the shared libraries to /usr/lib or /usr/local/lib. It’s up to you.
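If you go the /usr/local/lib route, remember to refresh the linker cache afterwards, roughly like this (the source path is the lib directory from the download script above):

sudo cp /home/mpeck/Instantiations/tensorflow/lib/libtensorflow*.so* /usr/local/lib/
sudo ldconfig

In my case, I went with LD_LIBRARY_PATH: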
cd ~/Instantiations/VastEcap3_b437_7b7fc914f16f_linux/raspberryPi64/
export LD_LIBRARY_PATH=/home/mariano/Instantiations/libtensorflow-nvidia-jetson-nano-1.14.0.2/lib:$LD_LIBRARY_PATH
./abt64.sh
And all tests were green:
Confirming we are using GPU
By default, if a GPU is present (and the shared library was compiled with GPU support), TensorFlow will prefer the GPU over the CPU. From Smalltalk we can confirm this by checking the available TFDevice instances, inspecting the result of:

(TFSession on: TFGraph create) devices

You can then run a simple test like TensorFlowCAPITest >> testAddControlInput and watch the log printed to the xterm. You should see that a GPU device is being used:
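Independently of Smalltalk, you can also watch the GPU from a second terminal with Nvidia’s tegrastats tool while the tests run; the GR3D_FREQ field reflects GPU utilization:

sudo tegrastats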
Using 128 GPU cores, TensorFlow and VASmalltalk to detect Kölsch beers with #esug19 pictures
OK. So we have TensorFlow running, all our tests passing, and we are sure we are using the GPU. The obvious next step is to run a real-world demo.
In a previous post you saw some examples of object detection. During the ESUG 2019 conference I wanted to show that demo, but instead of recognizing random objects in random images, I showed how to detect “beers” (Kölsch! We were in Cologne, Germany!) in the real pictures people uploaded to Twitter under the #esug19 hashtag.
The code for that was fairly easy:
ObjectDetectionZoo new
    imageFiles: OrderedCollection new;
    addImageFile: '/home/mariano/Instantiations/tensorflow/esug2019/beer1.png';
    graphFile: '/home/mariano/Instantiations/tensorflow/frozen_inference_graph-faster_resnet50.pb';
    labelsFile: 'examples/objectDetectionZoo/mscoco_label_map.pbtxt';
    prepareImageInput;
    prepareSession;
    predict;
    openPictureWithBoundingBoxesAndLabel
And here are the results:
Conclusions
The Nvidia Jetson Nano does offer good hardware at a reasonable price. However, it’s much harder to set up than most of the boards out there. It’s the first board that has taken me sooooo much time to get fully working.
But the worst part, in my opinion, is the state of Linux4Tegra: crashes everywhere, and it’s almost impossible to set up something as simple as VNC. I would really like to see a better/newer OS for the Nano.
Once all the painful setup is done, it works well and provides nice GPU capabilities. We now have everything in place to start experimenting with it.
PS: Thanks to Maxi Tabacman and Gera Richarte for reviewing this post!
Top comments (8)
Hi Mariano, cool blog! Apologies for the VNC thing - we made some updates in the latest JetPack 4.2.1 (L4T R32.2). If you check the README-vnc.txt file in the L4T-README drive on the Nano (the drive that pops up when you connect your Nano to a host computer over micro USB), it includes simplified instructions to bring up VNC.
For convenience, here is also a link to README-vnc.txt on Gist: gist.github.com/dusty-nv/0329cd330...
Hope that helps!
Hi Dustin,
Thanks for the reply. Very much appreciated. And sorry if it seemed harsh from my side, but I was trying to be honest.
Now, the image I burned is jetson-nano-sd-r32.2-2019-07-16.zip. Isn't that the same one you are talking about? In other words, I think I am using the latest one.
As for the README-vnc.txt: yes, I also saw it when connected, and it did help. However, there are 2 things for which I haven't seen any solution:
- the vino resolution (the default 640x480 sucks), hence my workaround described in the post.
- the compiz crash when you go to Settings -> Display via VNC.
Are these fixed anywhere? I would be happy to open an issue if you tell me where. Thanks in advance,
Thanks for pointing out those two issues Mariano. We are looking into the Settings -> Display tool and will update the next version of the README-vnc.txt document with instructions to change the resolution. Appreciate the feedback!
Excellent! Thanks a lot Dustin for taking my feedback into account. Very much appreciated! IMHO the Nano is a great product... it just needs the OS polished a bit.
Best,
I'll admit I just skimmed the article, but is there any specific reason you connected via VNC (as opposed to SSH, possibly with X11 forwarding)?
Hi,
For this particular case, I prefer VNC over X11 forwarding. The reasons?
I may have other reasons, but those are the ones off the top of my head right now. I think that while X11 forwarding is cool, when doing IoT, edge computing, SBCs, etc., having VNC is a must-have.
Best,
Those are some really good reasons.
I'm coming from a "plain" sysadmin background, so nothing I work with even has an X server running (mostly containers anyway), hence my wondering.
On the command line I usually stick to asciinema (over screenshotting my terminal), but seeing the whole OS around it, to make clear which machine something has to be run on, is a good point that I should definitely think of more often myself.
So it seems we agree. The right tool for each problem :)
Thanks for sharing asciinema, I wasn't aware of it.