If you buy the DEV KIT from Waveshare, a JetPack 4.6 OS has been pre-flashed to the eMMC of the Jetson Nano and the SDMMC3 controller (for the SD card) is enabled. If you need to boot from the SD card, please refer to the manual to modify the boot path.
If you have special requirements for the factory image version, please contact the customer service of the Waveshare shop to confirm.
The JETSON NANO DEV KIT made by Waveshare is designed around the Jetson Nano and Jetson Xavier NX AI computers. It provides almost the same I/Os, size, and thickness as the Jetson Nano Developer Kit (B01), which makes upgrading the core module more convenient. With the computing power of the core module, it is suitable for fields such as image classification, object detection, segmentation, and speech processing, and can be used in all sorts of AI projects.
Compared with the conventional kit, the JETSON-NANO-LITE-DEV-KIT simplifies the interfaces of the carrier board: the USB 3.0 ports are reduced from the original 4 to 1, two USB 2.0 ports are used instead, and the number of CSI ports is also reduced. In addition, the carrier board of the Lite version adds power and reset buttons. The carrier board of the Lite version is compatible with the official Jetson Nano 2GB Developer Kit in terms of appearance and interfaces, and is suitable for users who do not need additional interface resources. The core board of the Lite version also uses the 4GB version of the Jetson Nano module.
GPU | NVIDIA Maxwell™ architecture with 128 NVIDIA CUDA® cores, 0.5 TFLOPS (FP16)
---|---
CPU | Quad-core ARM® Cortex®-A57 MPCore processor
Memory | 4 GB 64-bit LPDDR4, 1600 MHz, 25.6 GB/s
Storage | 16 GB eMMC 5.1 flash storage
Video Encode | 250 MP/s; 1x 4K @ 30 (HEVC)
Video Decode | 500 MP/s; 1x 4K @ 60 (HEVC)
Camera | 12 lanes (3x4 or 4x2) MIPI CSI-2 D-PHY 1.1 (18 Gbps)
Connectivity | 10/100/1000BASE-T Ethernet; Wi-Fi requires an external chip
Display | eDP 1.4; DSI (1 x 2); 2 simultaneous
UPHY | 1x 1/2/4-lane PCIe, 1x USB 3.0, 3x USB 2.0
IO | 3x UART, 2x SPI, 2x I2S, 4x I2C, multiple GPIO headers
https://developer.nvidia.com/zh-cn/embedded/jetpack
You need to register an NVIDIA account; after logging in, you can download it. If you don't know how to register, you can refer to NVIDIA-acess.
sudo dpkg -i sdkmanager_1.6.1-8175_amd64.deb (change the filename according to the version you downloaded).
sudo apt --fix-broken install
The following Jetpack download takes Jetpack 4.6.2 as an example. For the resource package download method of other Jetpack versions, please refer to the Jetpack download method in the FAQ.
sudo mkdir sources_nano
cd sources_nano
https://developer.nvidia.com/embedded/l4t/r32_release_v7.2/t210/jetson-210_linux_r32.7.2_aarch64.tbz2
https://developer.nvidia.com/embedded/l4t/r32_release_v7.2/t210/tegra_linux_sample-root-filesystem_r32.7.2_aarch64.tbz2
Move the Jetpack packages to the folder and extract them (in practice, use the Tab key to auto-complete the file names).
sudo mv ~/Downloads/Jetson-210_Linux_R32.7.2_aarch64.tbz2 ~/sources_nano/
sudo mv ~/Downloads/Tegra_Linux_Sample-Root-Filesystem_R32.7.2_aarch64.tbz2 ~/sources_nano/
sudo tar -xjf Jetson-210_Linux_R32.7.2_aarch64.tbz2
cd Linux_for_Tegra/rootfs/
sudo tar -xjf ../../Tegra_Linux_Sample-Root-Filesystem_R32.7.2_aarch64.tbz2
cd ../
sudo ./apply_binaries.sh (if an error occurs, follow the prompts and re-run the command)
cd ~/sources_nano/Linux_for_Tegra
sudo ./flash.sh jetson-nano-emmc mmcblk0p1
ls /dev/sd*
sudo mkfs.ext4 /dev/sda
Only /dev/sda remains, as shown below:
sudo vi /boot/extlinux/extlinux.conf
Find the line APPEND ${cbootargs} quiet root=/dev/mmcblk0p1 rw rootwait rootfstype=ext4 console=ttyS0,115200n8 console=tty0 and change mmcblk0p1 to sda.
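For reference, after the edit the APPEND line should look like the following (only the root= argument changes):
APPEND ${cbootargs} quiet root=/dev/sda rw rootwait rootfstype=ext4 console=ttyS0,115200n8 console=tty0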
sudo mount /dev/sda /mnt
sudo cp -ax / /mnt
sudo umount /mnt/
sudo reboot
Note: The operation will format the TF card.
sudo apt-get install device-tree-compiler
cd ~/nvidia/nvidia_sdk/JetPack_4.6_Linux_JETSON_NANO_TARGETS/Linux_for_Tegra/kernel/dtb    #You can modify the path for different Jetpacks
dtc -I dtb -O dts -o tegra210-p3448-0002-p3449-0000-b00.dts tegra210-p3448-0002-p3449-0000-b00.dtb
If you are using a resource pack, use the following command:
cd sources_nano/Linux_for_Tegra/kernel/dtb
dtc -I dtb -O dts -o tegra210-p3448-0002-p3449-0000-b00.dts tegra210-p3448-0002-p3449-0000-b00.dtb
sudo vim tegra210-p3448-0002-p3449-0000-b00.dts
cd-gpios = <0x5b 0xc2 0x0>;
sd-uhs-sdr104;
sd-uhs-sdr50;
sd-uhs-sdr25;
sd-uhs-sdr12;
no-mmc;
uhs-mask = <0xc>;
dtc -I dts -O dtb -o tegra210-p3448-0002-p3449-0000-b00.dtb tegra210-p3448-0002-p3449-0000-b00.dts
cd ~/nvidia/nvidia_sdk/JetPack_4.6_Linux_JETSON_NANO_TARGETS/Linux_for_Tegra
sudo ./flash.sh jetson-nano-emmc mmcblk0p1
If you are using a resource pack, use the following command:
cd sources_nano/Linux_for_Tegra
sudo ./flash.sh jetson-nano-emmc mmcblk0p1
sudo ls /dev/mmcblk*
sudo mkfs.ext4 /dev/mmcblk1
If the following message appears, the card already contains a file system.
Unmount the SD card first:
sudo umount /media/ (press the Tab key here to auto-complete the mount path).
Format the SD card again using the format command.
After successful formatting, enter:
sudo ls /dev/mmcblk*
There is only mmcblk1, as shown below.
sudo vi /boot/extlinux/extlinux.conf
Find the line APPEND ${cbootargs} quiet root=/dev/mmcblk0p1 rw rootwait rootfstype=ext4 console=ttyS0,115200n8 console=tty0, change mmcblk0p1 to mmcblk1, and save.
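Likewise, the modified line should read:
APPEND ${cbootargs} quiet root=/dev/mmcblk1 rw rootwait rootfstype=ext4 console=ttyS0,115200n8 console=tty0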
sudo mount /dev/mmcblk1 /mnt
sudo cp -ax / /mnt
sudo umount /mnt/
sudo reboot
sudo apt-get install device-tree-compiler
cd ~/nvidia/nvidia_sdk/JetPack_4.6_Linux_JETSON_NANO_TARGETS/Linux_for_Tegra/kernel/dtb
dtc -I dtb -O dts -o tegra210-p3448-0002-p3449-0000-b00.dts tegra210-p3448-0002-p3449-0000-b00.dtb
sudo vim tegra210-p3448-0002-p3449-0000-b00.dts
cd-gpios = <0x5b 0xc2 0x0>;
sd-uhs-sdr104;
sd-uhs-sdr50;
sd-uhs-sdr25;
sd-uhs-sdr12;
no-mmc;
uhs-mask = <0xc>;
dtc -I dts -O dtb -o tegra210-p3448-0002-p3449-0000-b00.dtb tegra210-p3448-0002-p3449-0000-b00.dts
cd ~/nvidia/nvidia_sdk/JetPack_4.6_Linux_JETSON_NANO_TARGETS/Linux_for_Tegra
sudo ./flash.sh jetson-nano-emmc mmcblk0p1
sudo ls /dev/mmcblk*
sudo vi /boot/extlinux/extlinux.conf
Note: If the SD card is 64 GB, after entering the system, open a terminal and enter df -h to check the disk size. If the space size is not normal, please refer to the image expansion section in the FAQ.
Username: waveshare
Password: waveshare
sudo dpkg -i nomachine_7.10.1_1_arm64.deb
1. Open NoMachine and then enter the IP of Jetson Nano in the "Search" bar. For example, "192.168.15.100".
2. Click "Connect to new hos 192.168.15.100", and then enter the username and password of Jetson Nano. Click "login".
3. After loading, there is an interface for software introduction, and we just need to click "OK".
4. By now, we can log in to Jetson Nano successfully.
1. Configure VNC Server:
gsettings set org.gnome.Vino require-encryption false
gsettings set org.gnome.Vino prompt-enabled false
gsettings set org.gnome.Vino authentication-methods "['vnc']"
gsettings set org.gnome.Vino lock-screen-on-disconnect false
gsettings set org.gnome.Vino vnc-password $(echo -n "mypassword"|base64)
2. Set the VNC server to start automatically at boot by creating an autostart file under the .config path.
mkdir -p .config/autostart
sudo vim ~/.config/autostart/vino-server.desktop
Add the following content:
[Desktop Entry]
Type=Application
Name=Vino VNC server
Exec=/usr/lib/vino/vino-server
NoDisplay=true
3. Check what manager you are currently using:
cat /etc/X11/default-display-manager
4. Edit the file:
sudo vim /etc/gdm3/custom.conf
5. Remove the comments on the following three lines, and modify the AutomaticLogin line to your own username.
WaylandEnable=false
AutomaticLoginEnable = true
AutomaticLogin = waveshare
6. Reboot Jetson Nano:
sudo reboot
1. Open VNC Viewer, enter the IP address of Jetson Nano and press Enter to confirm. For example:
192.168.15.102
2. Enter the VNC login password set earlier and click "Ok":
3. At this point, you have successfully logged in to Jetson Nano.
The previous steps install only the basic system. Other JetPack SDK components, such as CUDA, need to be installed after the system starts normally. Here are the steps to install the SDK.
When using the SDK Manager to install the SDK, you do not need to put the Nano into recovery mode, that is, you do not need to short the recovery pins.
Users without an Ubuntu host computer can choose to install the components directly on the Jetson Nano with the following commands.
sudo apt update
sudo apt install nvidia-jetpack
sudo su          #Switch to the super user
su waveshare     #Switch back to a common user (waveshare)
ls
ls -a    #Show all files and directories (hidden files starting with . are also listed)
ls -l    #In addition to the file name, list the file type, permissions, owner, file size and other details
ls -lh   #List file sizes in a human-readable format, e.g. 4K
ls --help
who | User Type | Description
---|---|---
u | user | file owner
g | group | the file owner's group
o | others | all other users
a | all | all users, equivalent to ugo
Operator | Description
---|---
+ | add the permission for the specified user type
- | remove the permission for the specified user type
= | set the permissions of the specified user type exactly to the given value, i.e. reset all permissions of that user type
Mode | Name | Description
---|---|---
r | read | set read permission
w | write | set write permission
x | execute | set execute permission
X | special execute | set execute permission only if the file is a directory, or if some user type already has execute permission
s | setuid/setgid | when the file is executed, set the file's setuid or setgid permission according to the user type specified by the who parameter
t | sticky bit | set the sticky bit; only the superuser can set this bit, and only the file owner u can use it
chmod a+r file     #Give all users read permission
chmod a-x file     #Remove execute permission from all users
chmod a+rw file    #Give all users read and write permission
chmod +rwx file    #Add read, write and execute permission (like a+rwx, but bits set in the umask are not affected)
chmod u=rw,go= file     #Give the owner read and write permission, and clear all permissions for group and others
chmod -R u+r,go-r waveshare     #Recursively add read permission for the owner and remove read permission from group and others for the waveshare directory
# | Permissions | rwx | Binary
---|---|---|---
7 | read + write + execute | rwx | 111
6 | read + write | rw- | 110
5 | read + execute | r-x | 101
4 | read only | r-- | 100
3 | write + execute | -wx | 011
2 | write only | -w- | 010
1 | execute only | --x | 001
0 | none | --- | 000
sudo chmod 664 file
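For example, 664 in the command above corresponds to rw-rw-r--: read + write (6) for the owner and the group, and read only (4) for others.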
touch file.txt
sudo mkdir waveshare
sudo mkdir -p waveshare/test
cd ..                  #Return to the previous directory
cd /home/waveshare     #Enter the /home/waveshare directory
cd                     #Return to the user's home directory
sudo cp -r test/ newtest
sudo mv file1 /home/waveshare
sudo rm test.txt
sudo rm homework
sudo rm -r homework
sudo reboot
sudo shutdown -h now
sudo shutdown -h 10
sudo shutdown -r now
head test.py -n 5
df -h
tar -cvzf waveshare.tar.gz *
tar -xvzf waveshare.tar.gz
sudo apt install nano
ifconfig
ifconfig eth0
ifconfig wlan0
1. Log in to Jetson Nano, and modify the host file, the command is as follows:
sudo vim /etc/hosts
2. Modify the hostname file: replace the existing name (jp46 here) with the new name, such as waveshare, then press ZZ to save and exit:
sudo vim /etc/hostname
3. After the modification is completed, restart the Jetson Nano:
sudo reboot
4. We can also check the IP address with the following command:
hostname -I
1. First remove the default Vi editor:
sudo apt-get remove vim-common
2. Then reinstall Vim:
sudo apt-get install vim
3. For convenience, add the following three lines at the end of the /etc/vim/vimrc file:
set nu          " display line numbers
syntax on       " enable syntax highlighting
set tabstop=4   " a tab equals four spaces
vim filename   //Open the filename file
:w     //Save the file
:q     //Quit the editor; if the file has been modified, use the following command
:q!    //Quit the editor without saving
:wq    //Save the file and quit the editor
:wq!   //Force save the file and quit the editor
ZZ     //Save the file and quit the editor
ZQ     //Quit the editor without saving
a  //Add text to the right of the current cursor position
i  //Add text to the left of the current cursor position
A  //Add text at the end of the current line
I  //Add text at the beginning of the current line (at the first non-blank character)
O  //Create a new line above the current line
o  //Create a new line below the current line
R  //Replace (overwrite) the text at and after the current cursor position
J  //Join the line where the cursor is located with the next line (still in command mode)
x    //Delete the current character
nx   //Delete n characters starting from the cursor
dd   //Delete the current line
ndd  //Delete n lines downward, including the current line
u    //Undo the previous operation
U    //Undo all operations on the current line
yy   //Copy the current line to the buffer
nyy  //Copy n lines downward from the current line to the buffer
yw   //Copy the characters from the cursor to the end of the word
nyw  //Copy n words starting from the cursor
y^   //Copy from the cursor to the beginning of the line
y$   //Copy from the cursor to the end of the line
p    //Paste the contents of the buffer after the cursor
P    //Paste the contents of the buffer before the cursor
This tutorial takes a Windows system to remotely connect to a Linux server as an example. There are multiple ways to upload local files to the server.
scp [options] username@hostname_or_IP:target_file_path local_storage_path
scp waveshare@192.168.10.80:file .
Where "." represents the current path.
scp file waveshare@192.168.10.80:
scp -r waveshare@192.168.10.80:/home/pi/file .
scp -r file waveshare@192.168.10.80:
Note: The above waveshare needs to be changed to the username of your system, and the IP address to the actual IP address of Jetson Nano.
File sharing is possible using the Samba service. The Jetson Nano file system can be accessed in the Windows Network Neighborhood, which is very convenient.
1. First install Samba, enter into the terminal:
sudo apt-get update
sudo apt-get install samba -y
2. Create a shared folder sambashare in the /home/waveshare directory.
mkdir sambashare
3. After the installation is complete, modify the configuration file /etc/samba/smb.conf:
sudo nano /etc/samba/smb.conf
Pull to the end of the file and add the following statement to the end of the file.
[sambashare]
comment = Samba on JetsonNano
path = /home/waveshare/sambashare
read only = no
browsable = yes
Note: the waveshare here needs to be changed to your system username, and path is the shared folder path you want to share.
4. Restart the Samba service.
sudo service smbd restart
5. Set shared folder password:
sudo smbpasswd -a waveshare
Note: The username here must be an existing system username, otherwise the command will fail.
You will be asked to set a Samba password here. It is recommended to use your system password directly, which is more convenient to remember.
6. After the setup is complete, on your computer, open the file manager.
\\192.168.10.80\sambashare
7. Enter the login name and password set in step 5 earlier.
8. To verify, create a new test folder in Windows; you should then see the test folder in the sambashare directory on the Jetson Nano.
View the first connected camera screen:
nvgstcapture-1.0
View the picture of the second camera connected:
nvgstcapture-1.0 --sensor-id=1
Fan speed adjustment. Note that a 4-wire fan is required for speed control.
sudo sh -c 'echo 255 > /sys/devices/pwm-fan/target_pwm'   #255 is the maximum speed, 0 stops the fan; change the value to change the speed
cat /sys/class/thermal/thermal_zone0/temp                 #Get the CPU temperature; you can control the fan intelligently through a program
#The system comes with its own temperature control, so manual control is usually not required
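As a rough illustration of program-controlled fan speed, the following is a minimal Python sketch (run with sudo python3) that maps the CPU temperature read from the sysfs node above onto a PWM value; the temperature thresholds are arbitrary examples, not recommended settings.
import time

TEMP_PATH = "/sys/class/thermal/thermal_zone0/temp"   # CPU temperature in millidegrees Celsius
PWM_PATH = "/sys/devices/pwm-fan/target_pwm"          # fan PWM value, 0 (stop) to 255 (full speed)

def read_temp_c():
    with open(TEMP_PATH) as f:
        return int(f.read().strip()) / 1000.0

def set_fan(pwm):
    with open(PWM_PATH, "w") as f:
        f.write(str(int(pwm)))

while True:
    temp = read_temp_c()
    if temp > 60:          # example thresholds; adjust to your own needs
        set_fan(255)
    elif temp > 45:
        set_fan(128)
    else:
        set_fan(0)
    time.sleep(5)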
1. Scan WIFI.
sudo nmcli dev wifi
2. Connect to the WIFI network ("wifi_name" and "wifi_password" need to be replaced with the SSID and password of your actual WiFi.)
sudo nmcli dev wifi connect "wifi_name" password "wifi_password"
3. If "successfully" is displayed, the wireless network is successfully connected, and the motherboard will automatically connect to the WiFi you specified next time it is powered on.
1. Python 3.6 is installed by default on the Jetson Nano, so pip can be installed directly.
sudo apt update
sudo apt-get install python3-pip python3-dev
2. After the installation is complete, we check the PIP version.
pip3 -V
3. The default installed pip is version 9.0.1; upgrade it to the latest version.
python3 -m pip install --upgrade pip
4. After the upgrade is successful, check the pip version information again; you may find some problems.
pip3 -V
5. Fix them with the following commands:
python3 -m pip install --upgrade --force-reinstall pip
sudo reboot
6. Install important packages for machine learning.
sudo apt-get install python3-numpy
sudo apt-get install python3-scipy
sudo apt-get install python3-pandas
sudo apt-get install python3-matplotlib
sudo apt-get install python3-sklearn
1. Check the CUDA version; if "command not found" appears, you need to configure the environment.
nvcc -V
cat /usr/local/cuda/version.txt
Note: If you use the "cat" command, you can not check the version here. Please enter the "/usr/local/" directory to see if there is a CUDA directory.
If you do not install CUDA by referring to the Uninstalled CUDA section below, configure the environment after the installation is complete.
2. Set environment variables:
sudo vim ~/.bashrc
Add at the end of the file:
export PATH=/usr/local/cuda-10.2/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
export CUDA_HOME=$CUDA_HOME:/usr/local/cuda-10.2
3. Update environment variables.
source ~/.bashrc
4. Check the CUDA version again.
nvcc -V
1. Open the SDK Manager on the Ubuntu 18.04 computer, skip to step 2, and download CUDA; after the download is complete, find the CUDA installation package.
cd ~/Downloads/nvidia/sdkm_downloads
sudo dpkg -i cuda-repo-l4t-10-2-local_10.2.460-1_arm64.deb
sudo apt-key add /var/cuda-repo-10-2-local/7fa2af80.pub
sudo apt update
sudo apt install cuda-toolkit-10-2
1. Install the needed package:
sudo apt-get install libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev zip libjpeg8-dev liblapack-dev libblas-dev gfortran
sudo pip3 install -U pip testresources setuptools==49.6.0
2. Install the Python dependencies:
sudo pip3 install -U --no-deps numpy==1.19.4 future==0.18.2 mock==3.0.5 keras_preprocessing==1.1.2 keras_applications==1.0.8 gast==0.4.0 protobuf pybind11 cython pkgconfig packaging
sudo env H5PY_SETUP_REQUIRES=0 pip3 install -U h5py==3.1.0
3. Install Tensorflow (online installation often fails, you can refer to step 4 for offline installation).
sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v46 tensorflow
4. Finally, offline installation is recommended: first log in to NVIDIA's official website and download the TensorFlow installation package (taking "jetpack4.6 TensorFlow2.5.0 nv21.08" as an example; the Firefox browser is recommended for the download), then install the wheel on the Jetson Nano as shown below.
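Copy the downloaded wheel to the Jetson Nano and install it with pip3. The filename below is only a placeholder for the JetPack 4.6 / TensorFlow 2.5.0 / nv21.08 build; use the exact name of the file you actually downloaded:
sudo pip3 install tensorflow-2.5.0+nv21.08-cp36-cp36m-linux_aarch64.whl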
5. After the installation is complete, check whether the installation is successful, enter into the terminal:
python3
import tensorflow as tf
6. View the version information:
tf.__version__
1. Login and download Pytorch. Here, we take Pytorch v1.9.0 as an example:
2. Download the dependency libraries.
sudo apt-get install libjpeg-dev zlib1g-dev libpython3-dev libavcodec-dev libavformat-dev libswscale-dev libopenblas-base libopenmpi-dev
3. Install Pytorch
sudo pip3 install torch-1.9.0-cp36-cp36m-linux_aarch64.whl
4. Verify whether Pytorch has been installed successfully.
python3
import torch
x = torch.rand(5, 3)
print(x)
5. View the version information:
import torch
print(torch.__version__)
1. The Torchvision version must match the PyTorch version. Since we installed PyTorch 1.9.0 earlier, install Torchvision v0.10.0.
2. Download and install torchvision:
git clone --branch v0.10.0 https://github.com/pytorch/vision torchvision
cd torchvision
export BUILD_VERSION=0.10.0
sudo python3 setup.py install
3. Verify whether Torchvision is installed successfully.
python3
import torchvision
4. If an error occurs, the Pillow version may be too high; uninstall and reinstall it.
sudo pip3 uninstall pillow
sudo pip3 install pillow
5. View the version information.
import torchvision
print(torchvision.__version__)
1. First, download darknet from GitHub:
git clone https://github.com/AlexeyAB/darknet.git
2. After downloading, you need to modify Makefile:
cd darknet
sudo vim Makefile
Change the 0 to 1 in the first four lines:
GPU=1
CUDNN=1
CUDNN_HALF=1
OPENCV=1
3. The cuda version and path should also be changed to our actual version and path, otherwise the compilation will fail:
Change NVCC=nvcc to NVCC=/usr/local/cuda-10.2/bin/nvcc
4. After the modification is completed, compile and enter into the terminal:
sudo make
1. Test
./darknet detector test cfg/coco.data cfg/yolov4.cfg yolov4.weights data/dog.jpg
2. If you run test mode without specifying an image, it will ask you to enter the picture path after execution:
./darknet detector test ./cfg/coco.data ./cfg/yolov4.cfg ./yolov4.weights
1. Yolov4-tiny video detection (there is no video file in the data downloaded from github, and the user needs to upload the video file to be detected to the data folder).
./darknet detector demo cfg/coco.data cfg/yolov4-tiny.cfg yolov4-tiny.weights data/xxx.mp4
The frame rate is about 14 fps.
1. Check the device number of the USB camera:
ls /dev/video*
./darknet detector demo cfg/coco.data cfg/yolov4-tiny.cfg yolov4-tiny.weights /dev/video0
./darknet detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights /dev/video0
1. Install cmake.
sudo apt-get update
sudo apt-get install git cmake libpython3-dev python3-numpy
2. Get the jetson-inference open source project.
git clone https://github.com/dusty-nv/jetson-inference
cd jetson-inference
git submodule update --init
3. Create a new folder, compile:
sudo mkdir build
cd build
sudo cmake ../
When the model download and PyTorch installation dialogs appear, choose to skip them (Quit and skip).
4. Download the model, then place it in the "jetson-inference/data/networks" directory, and unzip it.
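For example (SSD-Mobilenet-v2.tar.gz is only a placeholder name; use the archive you actually downloaded):
cd jetson-inference/data/networks
tar -xzvf SSD-Mobilenet-v2.tar.gz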
5. Copy the files to the Jetson Nano with a USB disk or via MobaXterm file transfer.
cd jetson-inference/build
sudo make
sudo make install
6. Install v4l camera driver, input the following command in the terminal:
sudo apt-get install v4l-utils
v4l2-ctl --list-formats-ext
cd ~/jetson-inference/build/aarch64/bin/
./detectnet-camera
With TensorRT acceleration, the frame rate is about 24 fps.
./detectnet-camera --network=facenet      # Run the face detection network
./detectnet-camera --network=multiped     # Run the multi-class pedestrian/baggage detector
./detectnet-camera --network=pednet       # Run the original single-class pedestrian detector
./detectnet-camera --network=coco-bottle  # Detect bottles/soda cans with the camera
./detectnet-camera --network=coco-dog     # Detect dogs with the camera
./detectnet-camera --network=facenet                           # Use FaceNet, default MIPI CSI camera (1280 x 720)
./detectnet-camera --camera=/dev/video1 --network=facenet      # Use FaceNet, V4L2 camera /dev/video1 (1280 x 720)
./detectnet-camera --width=640 --height=480 --network=facenet  # Use FaceNet, default MIPI CSI camera (640 x 480)
The Jetson TX1, TX2, AGX Xavier, and Nano development boards include a 40-pin GPIO header, which is similar to the 40-pin header in the Raspberry Pi.
1. To import the Jetson.GPIO module, please use:
import Jetson.GPIO as GPIO
2. Pin number:
GPIO.setmode(GPIO.BOARD)       # board pin numbers of the 40-pin header
GPIO.setmode(GPIO.BCM)         # Broadcom SoC numbering (Raspberry Pi compatible)
GPIO.setmode(GPIO.CVM)         # CVM connector signal names
GPIO.setmode(GPIO.TEGRA_SOC)   # Tegra SoC signal names
mode = GPIO.getmode()
The mode must be GPIO.BOARD, GPIO.BCM, GPIO.CVM, GPIO.TEGRA_SOC or None.
3. If GPIO detects that a pin has been set to a non-default value, you will see a warning message. You can disable warnings with:
GPIO.setwarnings(False)
4. Set the channel:
# set up a channel as an input (channel is based on the pin numbering mode discussed above)
GPIO.setup(channel, GPIO.IN)
GPIO.setup(channel, GPIO.OUT)
GPIO.setup(channel, GPIO.OUT, initial=GPIO.HIGH)
# add as many channels as needed; you can also use a tuple: (18, 12, 13)
channels = [18, 12, 13]
GPIO.setup(channels, GPIO.OUT)
5. input
GPIO.input(channel)
This returns GPIO.LOW or GPIO.HIGH.
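For example, a minimal polling sketch (pin 18 in BOARD mode is just an illustrative choice):
import Jetson.GPIO as GPIO
import time

BTN_Pin = 18                  # example input pin (BOARD numbering)
GPIO.setmode(GPIO.BOARD)
GPIO.setup(BTN_Pin, GPIO.IN)

try:
    while True:
        if GPIO.input(BTN_Pin) == GPIO.HIGH:
            print("Pin 18 is HIGH")
        else:
            print("Pin 18 is LOW")
        time.sleep(0.5)
finally:
    GPIO.cleanup()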
6. output: To set the value of a pin configured as an output, please use:
GPIO.output(channel, state)
where state can be GPIO.LOW or GPIO.HIGH.
channels = [18, 12, 13]              # or use a tuple
GPIO.output(channels, GPIO.HIGH)     # or GPIO.LOW
# set the first channel to LOW and the rest to HIGH
GPIO.output(channels, (GPIO.LOW, GPIO.HIGH, GPIO.HIGH))
7. clean up
At the end of the program, it's a good idea to clean up the channel so that all pins are set to their default state. To clean up all used channels, please call:
GPIO.cleanup()
GPIO.cleanup(chan1)            # clean up only chan1
GPIO.cleanup([chan1, chan2])   # clean up only chan1 and chan2
GPIO.cleanup((chan1, chan2))   # same operation as the previous statement
8. Jetson Board Information and Library Versions:
GPIO.JETSON_INFO
This provides a Python dictionary with the following keys: P1_REVISION, RAM, REVISION, TYPE, MANUFACTURER, and PROCESSOR. All values in the dictionary are strings except P1_REVISION, which is an integer.
GPIO.VERSION
This provides a string in X.Y.Z version format.
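A quick way to print both (a small illustrative snippet):
import Jetson.GPIO as GPIO

print(GPIO.JETSON_INFO)   # board information dictionary
print(GPIO.VERSION)       # library version string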
9. Interrupt
In addition to polling, the library provides three additional methods to monitor input events:
GPIO.wait_for_edge(channel, GPIO.RISING)
# timeout is in milliseconds
GPIO.wait_for_edge(channel, GPIO.RISING, timeout=500)
The function returns the channel on which the edge was detected, or None if a timeout occurred.
# set rising edge detection on the channel
GPIO.add_event_detect(channel, GPIO.RISING)
run_other_code()
if GPIO.event_detected(channel):
    do_something()
As before, you can detect events for GPIO.RISING, GPIO.FALLING or GPIO.BOTH.
# define callback function
def callback_fn(channel):
    print("Callback called from channel %s" % channel)

# add rising edge detection
GPIO.add_event_detect(channel, GPIO.RISING, callback=callback_fn)
def callback_one(channel):
    print("First Callback")

def callback_two(channel):
    print("Second Callback")

GPIO.add_event_detect(channel, GPIO.RISING)
GPIO.add_event_callback(channel, callback_one)
GPIO.add_event_callback(channel, callback_two)
In this case, the two callbacks run sequentially rather than simultaneously, because only one thread runs all the callback functions.
# bouncetime is set in milliseconds
GPIO.add_event_detect(channel, GPIO.RISING, callback=callback_fn, bouncetime=200)
If edge detection is no longer needed, it can be removed as follows:
GPIO.remove_event_detect(channel)
10. Check GPIO channel functionality
This function allows you to check the functionality of the provided GPIO channels:
GPIO.gpio_function(channel)
The function returns GPIO.IN or GPIO.OUT.
import Jetson.GPIO as GPIO
import time

LED_Pin = 11
GPIO.setwarnings(False)
GPIO.setmode(GPIO.BOARD)
GPIO.setup(LED_Pin, GPIO.OUT)

while True:
    GPIO.output(LED_Pin, GPIO.HIGH)
    time.sleep(0.5)
    GPIO.output(LED_Pin, GPIO.LOW)
    time.sleep(0.5)
git clone https://github.com/NVIDIA/jetson-gpio
sudo mv ~/jetson-gpio /opt/nvidia/
cd /opt/nvidia/jetson-gpio
sudo python3 setup.py install
sudo groupadd -f -r gpio
sudo usermod -a -G gpio user_name
Note: user_name is the username you use, say waveshare.
sudo cp /opt/nvidia/jetson-gpio/lib/python/Jetson/GPIO/99-gpio.rules /etc/udev/rules.d/
sudo udevadm control --reload-rules && sudo udevadm trigger
cd /opt/nvidia/jetson-gpio/samples/
sudo python3 simple_input.py
1. First install the I2C tools; enter in the terminal:
sudo apt-get update
sudo apt-get install -y i2c-tools
sudo apt-get install -y python3-smbus
2. Check the installation and input in the terminal:
apt-cache policy i2c-tools
If the output is as follows, the installation is successful:
i2c-tools:
  Installed: 4.0-2
  Candidate: 4.0-2
  Version table:
 *** 4.0-2 500
        500 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 Packages
        100 /var/lib/dpkg/status
sudo i2cdetect -y -r -a 0
sudo i2cdump -y 0 0x68
sudo i2cset -y 0 0x68 0x90 0x55
Parameter | Meaning
---|---
0 | the I2C bus number
0x68 | the address of the I2C device
0x90 | the register address
0x55 | the data written to the register
sudo i2cget -y 0 0x68 0x90
Parameter | Meaning
---|---
0 | the I2C bus number
0x68 | the address of the I2C device
0x90 | the register address
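Since python3-smbus was installed above, the same register access can also be done from Python. A minimal sketch, assuming the same example bus (0), device address (0x68) and register (0x90) as the i2cset/i2cget commands above:
import smbus

bus = smbus.SMBus(0)      # open I2C bus 0

ADDR = 0x68               # example I2C device address
REG = 0x90                # example register address

bus.write_byte_data(ADDR, REG, 0x55)    # write 0x55 to the register (like i2cset)
value = bus.read_byte_data(ADDR, REG)   # read the register back (like i2cget)
print(hex(value))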