ME530707 2016

530.707 Robot Systems Programming Course Home Page

http://dscl.lcsr.jhu.edu/ME530707_2016

530.707 Spring 2016 Class Photo - Click here for higher resolution image.

Article and Photos of Spring 2016 Independent Class Projects

Course Description

This course introduces students to the open-source software tools available today for building complex experimental and fieldable robotic systems. The course is organized into four parts, each building on the previous in increasing complexity and specificity: tools and frameworks supporting robotics research; robotics-specific software frameworks; integration of complete robotic systems; and an independent project of the student's own design using small mobile robots or other robots in the lab. Students will need to provide a computer or a virtual machine (e.g. VirtualBox, with at least a few GB of memory and a few tens of GB of disk space) running Ubuntu 14.04 LTS Trusty Tahr (http://releases.ubuntu.com/14.04 or one of its variants such as Xubuntu 14.04 LTS) and ROS Indigo Igloo (http://wiki.ros.org/indigo) - note that these specific versions of Linux and ROS are required! Students should have an understanding of intermediate programming in C/C++ (including data structures and dynamic memory allocation), familiarity with Linux programming, familiarity with software version control systems (e.g. subversion, mercurial, git), and linear algebra. Recommended Course Background: EN.530.646 Robot Devices, Kinematics, Dynamics, and Control and EN.600.636 Algorithms for Sensor Based Robotics.

The course is open to undergraduates (4 credits) with the permission of the instructor.

Instructors

Faculty

Professor Louis L. Whitcomb
Department of Mechanical Engineering
G.W.C. Whiting School of Engineering
The Johns Hopkins University
office: 223C Latrobe Hall
Phone: 410-516-6724
email: [email]
Office Hours: During regular problem sessions, during class, or by appointment. Senior Administrative Coordinator: Ms. Deana Santoni [email]

Teaching Assistants

  • Mr. Abhinav Kunapareddy [email]
  • Mr. Farshid Alambeigi [email]

Class Schedule

  • Tuesday 4:30PM-6:00PM - Room 107 Malone Hall
  • Thursday 4:30PM-6:00PM - Room 107 Malone Hall

Office Hours in the Wyman 140 Lab

We are available to answer individual questions during regular office hours in the Wyman 140 lab. You are welcome to bring your computer to Wyman 140 to work on the assignments during office hours, or to use the desktop computers provided in this lab.

  • Monday 3-4PM - Mr. Farshid Alambeigi
  • Wednesday 3-4PM - Prof. Louis Whitcomb
  • Friday 12-1PM: - Mr. Abhinav Kunapareddy

Textbooks

Although there is no required text for the course, if you are new to ROS we recommend that you get and read one or more of the following two introductory ROS texts:

Electronic books available through the Milton Eisenhower Library are accessible from on-campus IP addresses. If you are off-campus, you will need to VPN into the campus network to access these electronic books.

Prerequisites

This course will benefit you the most if you are already familiar with the following subjects:

  • Kinematics & Dynamics
  • Linear Control Theory
  • Basic Machine Vision
  • Basic Probability Theory and Random Processes
  • Some Digital Signal Processing

This course requires that you are already familiar with:

  • The Linux Operating System
  • Imperative Programming Languages
  • Markup Languages
  • Data Structures
  • Linear Algebra

This course will require you to:

  • Use the following programming languages:
    • C++
    • bash
    • Python
  • Use the following markup languages:
    • YAML
    • HTML
    • XML
    • JSON
  • Use the following software tools:
    • Version control systems
      • subversion (svn)
      • git
      • mercurial (hg)
    • gdb
    • valgrind
    • CMake, GNU Make

Wyman 140 Lab

We are setting up the six computers in the Wyman 140 Lab as dual-boot machines with Windows 8 and 64-bit Xubuntu Linux 14.04 LTS with ROS Indigo (aka ROS Indigo Igloo) installed, along with five TurtleBot 2 mobile robots. By Jan 28 or so we hope to have everything set up so that you can boot the computers into Linux and log in with your JHED ID and password.

  • If you have problems logging in for the First Time on a Workstation: There is a BUG in likewise that sometimes crops up: The first time you log in to a workstation using likewise, the graphical login may fail. Workaround: for your very first (and only the first) login with likewise on a machine:
      • 1. Type CTRL-ALT-F1 to get an ASCII console.
      • 2. Log in with your JHED ID and password. Your home directory on the machine will be named by your JHED ID; in my case it is: /home/lwhitco1
      • 3. Log out.
      • 4. Type CTRL-ALT-F7 to get the X-windows login screen.
      • 5. Log in normally using the graphical login screen with your JHED ID and password.
      • 6. For all future logins on this machine, you can log in normally using the graphical login screen with your JHED ID and password.

Wyman 140 Lab Etiquette:

    • Your account has sudo privileges so BE CAREFUL WHEN USING sudo! The only sudo command you should ever use is "sudo apt-get install ros-indigo-packagename" where "packagename" is the name of a ros package.
    • Do not edit or modify any ROS files under /opt/ros. If you want to modify a standard ROS package, then download the INDIGO version of the source code for the package into your catkin workspace, and modify your local copy. ROS will automatically detect your local copy and use it instead of the system version in /opt/ros.
    • The only version of the operating system we will support is Xubuntu 14.04 64-bit.
    • DO NOT upgrade the operating system to more recent releases such as 15.10.
    • The only version of ROS we will support is ROS Indigo
    • DO NOT install other versions of ROS other than ROS Indigo.
    • Leave the lab spotless.
    • Do not "lock" a workstation with a screen-saver. Locked computers will be powered off.
    • No Backup!: The files on these computers are NOT BACKED UP. Any files that you leave in your account on these computers can disappear at any time. Use https://git.lcsr.jhu.edu to save your projects during and after every work session.
    • When you have finished a work session, log off the computer.
    • If you encounter a problem: Notify Prof. Whitcomb and the TAs if you have any problems with the lab or its equipment. Put "530.707" in the subject line of any email regarding this course.

Turtlebot Computers: Lenovo Thinkpad Yoga 260

The Intel 8260 WiFi/Bluetooth chipset on this computer is not supported under 14.04, so we have installed a backport driver as described here. Every time you update the kernel on these computers (e.g. if sudo apt-get upgrade or sudo apt-get dist-upgrade downloads a new Linux kernel), you need to recompile and reinstall the backport driver for the Intel 8260 WiFi chipset with the following commands from within the "turtlebot" account:

  • cd ~/Downloads/backports-20150923
  • make clean
  • make defconfig-iwlwifi
  • make
  • sudo make install

and then reboot the computer with

  • sudo reboot now

The 8260 Bluetooth functionality is not supported under Ubuntu 14.04, so we have purchased Panda Wireless Bluetooth 4.0 USB adapters for teams that need Bluetooth. To enable these devices in the Ubuntu sound settings, we execute the following commands from within the "turtlebot" account to install the "blueman" Ubuntu Bluetooth manager (thanks to this page):

  • sudo add-apt-repository ppa:blueman/ppa
  • sudo apt-get update
  • sudo apt-get dist-upgrade

and then reboot the computer with

  • sudo reboot now

Wyman 140 Ethernet Network

Workstations 1-6 are set up on a local turtlebot network. The turtlebot netbook computers and workstations 1-6 are set to static IP addresses via a static DHCP lease. Here is the list of computers and IP addresses you should use when working with the turtlebots. If we need additional workstations for working with the turtlebots, we can add them to the turtlebot network.

Computer IP Address Location Status
turtlebot-01 192.168.1.111 turtlebot 1 OK
turtlebot-02 192.168.1.112 turtlebot 2 Not yet configured or tested, do not use for now.
turtlebot-03 192.168.1.113 turtlebot 3 OK
turtlebot-04 192.168.1.114 turtlebot 4 OK
turtlebot-05 192.168.1.115 turtlebot 5 OK
turtlebot-06 192.168.1.116 spare, for general purpose class project use OK
turtlebot-07 192.168.1.117 spare, for general purpose class project use OK
workstation-1 191.168.1.121 station 1 OK
workstation-2 191.168.1.122 station 2 OK
workstation-3 191.168.1.123 station 3 OK
workstation-4 191.168.1.124 station 4 OK
workstation-5 191.168.1.125 station 5 OK
workstation-6 191.168.1.126 station 6 OK

Wyman 140 Turtlebot WIFI network

The SSID for the turtlebot WiFi network is turtlebot-2.4GHZ. See the TA or instructor for the WiFi password for the turtlebot WiFi network.

Wyman 140 Lab Access

  • Wyman Park Building: Show your J-Card at the front desk during business hours. After hours, you should be able to enter the building front door with your J-Card. If this does not work for any reason, please call the JHU Campus Police at their non-emergency phone number 410-616-4600 to ask them to come and let you into the building. Let us know if you have any problems.
  • Wyman 140 Room Access: Use your J-Card to swipe in.
  • Wyman 140 Lab Hours of Availability and NON-AVAILABILITY: You can use the lab any time EXCEPT when another class has a weekly lab session scheduled. Here are the times the lab is NOT available to you:
    • 530.343 DADS Labs
      • Mondays 6-9PM
      • Tuesdays 3-6PM
      • Thursdays 2:30-5:30PM
      • Fridays 1:30-4:30PM
    • 530.682 Haptic Applications Labs
      • Mondays 4-5:45PM
      • Wednesdays 6-7:45PM

You are not permitted to use Wyman 140 during the above times when other classes have lab sessions scheduled.

Syllabus

Week 1: Jan 26 & 28: Course Overview and ROS Basics

NOTE: in this course we will exclusively use Ubuntu 14.04 LTS (or an equivalent release such as Xubuntu 14.04 LTS) and the stable ROS Indigo Igloo release, we will NOT be using the more recent ROS Jade release.

  • Topics
    • Course Overview
      • Course Description
      • Prerequisites
      • Assignments
      • Class Project
    • Background of Robot Software Frameworks and The Open-Source Robotics Community
    • Development Tools, Managing Code, Environments, Installing Software, Building Code
    • The ROS core systems: Packaging, Buildsystems, Messaging, CLI & GUI tools, creating packages, nodes, topics, services, and parameters.
    • Writing a Simple Publisher and Subscriber (C++)
    • Examining the Simple Publisher and Subscriber
    • Writing a Simple Service and Client (C++)
    • Examining the Simple Service and Client
  • Reading
  • Assignments to do this week
    • Install Ubuntu 14.04 LTS (or an equivalent release such as Xubuntu 14.04 LTS).
    • Install ROS Indigo Igloo
    • Complete these Tutorials
      • Installing and Configuring Your ROS Environment. NOTE: in this course we will exclusively use Ubuntu 14.04 LTS (or an equivalent release such as Xubuntu 14.04 LTS) and the stable ROS Indigo Igloo release, we will NOT be using the more recent ROS Jade release.
      • Navigating the ROS Filesystem
      • Creating a ROS Package
      • Building a ROS Package
      • Understanding ROS Nodes
      • Understanding ROS Topics
      • Understanding ROS Services and Parameters
      • Using rqt_console and roslaunch
      • Using rosed to edit files in ROS
      • Creating a ROS msg and a ROS srv
      • Writing a publisher and subscriber in C++
      • Writing a Simple Publisher and Subscriber (C++)
      • Examining the Simple Publisher and Subscriber
      • Writing a service and client in C++
      • Examining the Simple Service and Client
  • Homework #1: Assignments to hand in this week. DUE Monday Feb 1, 2016.
    • Write and test a publisher node that publishes a TOPIC and a subscriber node that subscribes to this TOPIC (a minimal sketch follows this list).
    • Write and test a server node that provides a SERVICE and a client node that calls this SERVICE.
    • Hand in your code project "beginner_tutorials" on https://git.lcsr.jhu.edu
      • Login to our GIT server at https://git.lcsr.jhu.edu using your JHED ID and Password, and create and add your ssh key.
      • If this is the first time you are using this GIT server, email lcsr-it@jhu.edu to ask Anton Deguet to increase your quota on git.lcsr.jhu.edu so that you can push your assignments to the server. Be sure to include your JHED ID with your request.
      • Create a project called "beginner_tutorials" on https://git.lcsr.jhu.edu,
      • Add the TAs and the instructor as members of the project
      • Initialize your project "beginner_tutorials" as a git repository
      • Add the files to the repo
      • Commit them to the repo
      • Add the remote repository
      • Push your files
      • Push the repo to the server
      • Email the TAs and instructor when done, with "530.707 Assignment 1" in the subject line.
      • See us with questions.
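
For reference, here is a minimal sketch of the publisher and subscriber pair in C++ (roscpp). It is only a sketch: the node names, the topic name "chatter", and the std_msgs/String message type are illustrative assumptions, not requirements of the assignment; follow the tutorials above for the full package setup (CMakeLists.txt and package.xml entries).

// talker.cpp - minimal publisher sketch (names are illustrative, not required by the assignment)
#include <ros/ros.h>
#include <std_msgs/String.h>

int main(int argc, char **argv)
{
  ros::init(argc, argv, "talker");
  ros::NodeHandle n;
  ros::Publisher pub = n.advertise<std_msgs::String>("chatter", 10);
  ros::Rate rate(10);  // publish at 10 Hz
  while (ros::ok())
  {
    std_msgs::String msg;
    msg.data = "hello";
    pub.publish(msg);
    rate.sleep();
  }
  return 0;
}

// listener.cpp - minimal subscriber sketch
#include <ros/ros.h>
#include <std_msgs/String.h>

void chatterCallback(const std_msgs::String::ConstPtr& msg)
{
  ROS_INFO("I heard: [%s]", msg->data.c_str());
}

int main(int argc, char **argv)
{
  ros::init(argc, argv, "listener");
  ros::NodeHandle n;
  ros::Subscriber sub = n.subscribe("chatter", 10, chatterCallback);
  ros::spin();  // process incoming messages until shutdown
  return 0;
}

Build both as executables with add_executable() and target_link_libraries(... ${catkin_LIBRARIES}) in your CMakeLists.txt, then run them with rosrun while roscore is running.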

Week 2: Feb 2 & 4: Roslaunch, Nodes, tf, Parameters, and Rosbag

Week 3: Feb 9 & 11: Joysticks, Eigen, Rviz, and ROS Node development in C++

In your package.xml file, include

 <build_depend>cmake_modules</build_depend>

and

<run_depend>cmake_modules</run_depend>

Near the top of your CMakeLists.txt file include

find_package(cmake_modules REQUIRED)
find_package(Eigen REQUIRED)

and

include_directories( include
  ${Eigen_INCLUDE_DIRS}
  ${catkin_INCLUDE_DIRS}
  )

In your C++ source file include the Eigen header files - the latter one provides the matrix exponential:

#include <eigen3/Eigen/Core>
#include <eigen3/unsupported/Eigen/MatrixFunctions>
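
As a quick sanity check that the headers are found, here is a minimal, hedged example that uses the matrix exponential from the unsupported MatrixFunctions module; the particular skew-symmetric matrix is just an illustration.

#include <iostream>
#include <eigen3/Eigen/Core>
#include <eigen3/unsupported/Eigen/MatrixFunctions>

int main()
{
  // Skew-symmetric matrix for a rotation of 1 radian about the z-axis
  Eigen::Matrix3d omega_hat;
  omega_hat <<  0.0, -1.0, 0.0,
                1.0,  0.0, 0.0,
                0.0,  0.0, 0.0;

  // exp() is provided by unsupported/Eigen/MatrixFunctions
  Eigen::Matrix3d R = omega_hat.exp();
  std::cout << "exp(omega_hat) =\n" << R << std::endl;
  return 0;
}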


  • Homework #3: Assignments to hand in this week. DUE 4:30 PM Tuesday Feb 16, 2016.
    • Joystick Assignment #1 of 4: Install your joystick and test it (nothing to hand in for this assignment)
      • Plug in and test your USB joystick
        • List the USB devices on your computer with the "lsusb" command, both with the joystick's USB cable connected and with it disconnected.
        • See that the device /dev/input/js0 appears when your joystick is connected, and that this device vanishes when the joystick is disconnected
        • Use the command "jstest /dev/input/js0" to test your joystick. This utility gives text output of the joystick data.
        • Alternatively, test the joystick with the graphical widget "jstest-gtk".
          • Install this utility with the command "sudo apt-get install jstest-gtk"
          • Run this utility with the command "jstest-gtk".
    • Joystick Assignment #2 of 4: Tutorial: Configuring and Using a Linux-Supported Joystick with ROS (nothing to hand in for this assignment)
      • Notes on this tutorial for most Ubuntu 14.04 installations:
        • The default joystick is /dev/input/js0 (where "0" is the numeral zero, not the letter O).
        • The permissions for /dev/input/js0 are already OK, i.e. you do NOT need to change the permissions for /dev/input/js0 with the command "sudo chmod a+rw /dev/input/js0".
        • The ROS joy_node automatically looks for the device /dev/input/js0. You do NOT need to set the parameter with the command "rosparam set joy_node/dev /dev/input/js0".
      • Run "roscore" in one terminal, then run "rosrun joy joy_node" and look at the topic /joy
      • Be sure to use the commands "rosnode list", "rostopic list", and "rostopic echo /joy" to explore the /joy topic messages.
    • Joystick Assignment #3 of 4 (to hand in): I'd like you to do a tutorial entitled Tutorial: Writing a Teleoperation Node for a Linux-Supported Joystick for Turtlesim, but this tutorial is outdated --- it tells you to create an older style package that uses the "rosbuild" build system with the command "roscreate-pkg learning_joy roscpp turtlesim joy". In this class we have been using the "catkin" build system, however, so the command to create a catkin package is "catkin_create_pkg learning_joy roscpp turtlesim joy". Please refer to the earlier tutorials that you did to recall how to create and build catkin packages: Creating a ROS Package and Building a ROS Package. Moreover, this tutorial works with an older version of turtlesim. So:
      • Create a catkin ROS package called "learning_joy"
      • Use the C++ file turtle_teleop_joy.cpp available here.
      • Use the launch file turtle_joy.launch available here.
      • Explore this example, looking at the ROS nodes and topics from the command line.
      • Be sure to use the commands "rosnode list", "rostopic list", "rostopic echo", "rostopic type", and "rostopic hz" to explore the /joy and /turtle1/cmd_vel topics.
      • Run rqt_graph to see the nodes and topics graphically.
    • Joystick Assignment #4 of 4 (to hand in):
      • Modify your program turtle_teleop_joy.cpp so that it fully sets all 6 elements of the Twist message based upon joystick input. You can choose your own mapping. Here is the mapping that I chose:
        • X vel proportional to left joystick fore and aft
        • Y vel proportional to left joystick left and right
        • Z vel proportional to Y and A buttons
        • rotation vel about X proportional to X and B buttons
        • rotation vel about Y proportional to right joystick fore and aft
        • rotation vel about Z proportional to right joystick left and right
      • Write a new node named "teleop_3d.cpp" and an associated launch file named "teleop_3d.launch" in the "learning_joy" package. The goal is for the teleop_3d node to subscribe to the /turtle1/cmd_vel Twist topic and publish a tf frame named teleop_3d that you can fly around with the joystick. The teleop_3d.cpp node should implement a simple first-order numerical simulation in which the transform teleop_3d begins at identity and is integrated from the Twist topic's linear and angular velocity (a minimal sketch follows this list). The launch file should launch a joy node, the revised turtle_teleop_joy node, and the new teleop_3d node. Your program teleop_3d.cpp should
        • Subscribe to the Twist topic /turtle1/cmd_vel
        • Numerically integrate the transform every time a Twist message is published to create a moving transform frame.
        • Publish the moving transform frame as a tf frame named "teleop_3d"
      • Run rqt_graph to see the nodes and topics graphically.
      • Use the command "rosrun tf tf_echo world teleop_3d" to display numerically the relation between the world frame and the teleop_3d frame.
      • Run rviz and display the frame moving around in 3D.
      • Be sure to use the commands "rosnode list", "rostopic list", "rostopic echo", "rostopic type", and "rostopic hz" to explore the /joy, /turtle1/cmd_vel, and /teleop_3d topics.
  • Homework #3: Assignments to hand in this week.
    • Package with the Assignment #3 Teleoperation Node for a Linux-Supported Joystick for Turtlesim --- you can use and improve upon the sample code provided above.
    • Package with the Assignment #4 Teleoperation Node that publishes a tf frame named "teleop_3d" that can be flown around in 3D with the joystick and visualized in rviz.
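
Here is a minimal sketch of what the teleop_3d integration could look like. It is an illustrative outline under assumptions (tf and geometry_msgs dependencies, Euler integration of the body-frame Twist), not the required implementation; the frame names follow the assignment above.

// teleop_3d.cpp - first-order integration of a Twist into a tf frame (illustrative sketch)
#include <ros/ros.h>
#include <geometry_msgs/Twist.h>
#include <tf/transform_broadcaster.h>

tf::Transform g_pose(tf::Quaternion(0, 0, 0, 1), tf::Vector3(0, 0, 0));  // start at identity
ros::Time g_last_time;

void twistCallback(const geometry_msgs::Twist::ConstPtr& msg)
{
  ros::Time now = ros::Time::now();
  double dt = (now - g_last_time).toSec();
  g_last_time = now;
  if (dt <= 0.0 || dt > 1.0) return;  // skip the first message and large gaps

  // Euler-integrate the body-frame linear velocity into the world-frame position...
  tf::Vector3 v(msg->linear.x, msg->linear.y, msg->linear.z);
  g_pose.setOrigin(g_pose.getOrigin() + g_pose.getBasis() * v * dt);

  // ...and the body-frame angular velocity as a small incremental rotation
  tf::Quaternion dq;
  dq.setRPY(msg->angular.x * dt, msg->angular.y * dt, msg->angular.z * dt);
  g_pose.setRotation((g_pose.getRotation() * dq).normalized());

  static tf::TransformBroadcaster br;
  br.sendTransform(tf::StampedTransform(g_pose, now, "world", "teleop_3d"));
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "teleop_3d");
  ros::NodeHandle n;
  g_last_time = ros::Time::now();
  ros::Subscriber sub = n.subscribe("/turtle1/cmd_vel", 10, twistCallback);
  ros::spin();
  return 0;
}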

Week 4: Feb 16 & 18: URDF and Robot State Publisher

  • Topics
    • Unified Robot Description Format (URDF)
    • Robot State Publisher
  • Reading
  • Notes from Class: In class we downloaded the urdf_tutorial package into my catkin workspace. Here are a few notes on the steps taken to do this. In this example we download a local copy of the urdf tutorial into ~/catkin_ws/src/urdf_tutorial. You can edit this local copy. The system copy is located at /opt/ros/indigo/share/urdf_tutorial, but you cannot edit those files without sudo privileges. It is better to edit your own local copy than to muck with the system copy. This is an example of a workspace overlay, where we create a package in a local workspace that ROS will use in preference to the default system package of the same name. Linux commands are shown in bold font. Comments are in italic font.
    • cd ~/catkin_ws/src (cd to ~/catkin_ws/src)
    • git clone https://github.com/ros/urdf_tutorial.git (clone the git tutorial from github. Note that this creates the directory ~/catkin_ws/src/urdf_tutorial and associated subdirectories.)
    • cd ~/catkin_ws (cd to ~/catkin_ws)
    • rm -r devel build (remove the catkin_ws/devel and catkin_ws/build directory trees, which deletes ~/catkin_ws/devel/setup.bash)
    • unset CMAKE_PREFIX_PATH (delete the bash environment variable "CMAKE_PREFIX_PATH" that was set when I sourced an old workspace setup file such as the command "source ~/catkin_ws/devel/setup.bash" in my ~/.bashrc file.)
    • source /opt/ros/indigo/setup.bash (Setup default Indigo ROS bash shell environment variables)
    • echo $ROS_PACKAGE_PATH (Look at the ROS_PACKAGE_PATH environment variable that was set by the previous command. It should be a string like this: /opt/ros/indigo/share:/opt/ros/indigo/stacks )
    • catkin_make (Make everything in my workspace from scratch, including generating a new ~/catkin_ws/devel/setup.bash)
    • source devel/setup.bash (Source the newly created file ~/catkin_ws/devel/setup.bash to add this new workspace to the ROS bash environment variables, in particular it will add the present workspace to the ROS_PACKAGE_PATH environment variable)
    • echo $ROS_PACKAGE_PATH (Look again at the ROS_PACKAGE_PATH environment variable that was set by the previous command. It should NOW be a string with your catkin workspace listed at the first element, followed by the standard package path like this: /home/llw/catkin_ws/src:/opt/ros/indigo/share:/opt/ros/indigo/stacks )
    • rospack profile (This command forces rospack to rebuild the cache of the ROS package path that is used by roscd. The cache is the text file ~/.ros/rospack_cache.)
    • roscd urdf_tutorial (Now roscd will take me to my local copy of the urdf tutorial in ~/catkin_ws/src/urdf_tutorial instead of taking me to the system copy located at /opt/ros/indigo/share/urdf_tutorial )
    • roslaunch urdf_tutorial display.launch model:=urdf/01-myfirst.urdf (Now I can run the tutorial exercise and edit the local URDF files in ~/catkin_ws/src/urdf_tutorial/urdf)
    • Note that the later tutorial urdf files such as urdf/05-visual.urdf refer to PR2 gripper model mesh files. If you get error messages from RVIZ like "[ERROR] [1393792989.872632824]: Could not load resource [package://pr2_description/meshes/gripper_v0/l_finger.dae]: Unable to open file "package://pr2_description/meshes/gripper_v0/l_finger.dae"", then you need to install the PR2 mesh files. You can install the PR2 model files with the command sudo apt-get install ros-indigo-pr2-common
  • Assignments to do this week:
    • 1. Learning URDF Tutorials
      • 1. Create your own urdf file
      • 2. Parse a urdf file
      • 3. Using the robot state publisher on your own robot
      • 4. Start using the KDL parser (You can skip this tutorial for now if you like, it is not required for this week's assignment.)
      • 5. Using urdf with robot_state_publisher
    • 2. Learning URDF Step by Step
      • 1. Building a Visual Robot Model with URDF from Scratch
      • 2. Building a Movable Robot Model with URDF
      • 3. Adding Physical and Collision Properties to a URDF Model
      • 4. Using Xacro to Clean Up a URDF File
    • Homework #4: Assignment to hand in this week - DUE 4:30PM Tuesday Feb 23, 2016: Develop a ROS package named my_robot_urdf for an original robot of your own with at least 4 joints and a moving base.
      • Your package should consist of at least the following:
        • An URDF file named urdf/my_robot.urdf describing the robot links and joints
        • A C++ node named src/robot_joint_state_publisher.cpp
          • Your node should publish the following:
            • sensor_msgs/JointState messages for this robot on the /joint_state_publisher topic.
            • A tf transform named "robot_base" that specifies the moving position of the robot in the world.
          • Your program robot_joint_state_publisher.cpp should move the robot around some, and also move the robot joints (a minimal sketch follows this list). You can do this by programming the motion, or you can use the joystick or other device as input, or both.
        • An RVIZ initialization file called "my_robot.rviz" that displays your robot and its various frames.
        • A launch file named launch/my_robot.launch that
          • Sets the parameter "robot_description" to resolve to your URDF file.
          • Launches robot_joint_state_publisher node.
          • Launches a standard state_publisher node from the robot_state_publisher package.
          • Launches RVIZ specifying the correct rviz initialization file.
      • Push the finished package my_robot_urdf to https://git.lcsr.jhu.edu and share it with the course instructors.
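
Here is a minimal, hedged sketch of the kind of node the assignment asks for. The four joint names and the circular base motion are placeholder assumptions for illustration only; use the joint names from your own URDF, and note that the topic name below simply follows the assignment text above.

// robot_joint_state_publisher.cpp - illustrative sketch, not a complete solution
#include <ros/ros.h>
#include <sensor_msgs/JointState.h>
#include <tf/transform_broadcaster.h>
#include <cmath>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "robot_joint_state_publisher");
  ros::NodeHandle n;
  ros::Publisher joint_pub = n.advertise<sensor_msgs::JointState>("joint_state_publisher", 10);
  tf::TransformBroadcaster br;
  ros::Rate rate(30);
  double t = 0.0;

  while (ros::ok())
  {
    ros::Time now = ros::Time::now();

    // Publish sinusoidal motion for four placeholder joints
    sensor_msgs::JointState js;
    js.header.stamp = now;
    js.name.push_back("joint1");  js.position.push_back(std::sin(t));
    js.name.push_back("joint2");  js.position.push_back(std::cos(t));
    js.name.push_back("joint3");  js.position.push_back(0.5 * std::sin(2.0 * t));
    js.name.push_back("joint4");  js.position.push_back(0.5 * std::cos(2.0 * t));
    joint_pub.publish(js);

    // Drive the base around a circle in the world frame and broadcast it as "robot_base"
    tf::Quaternion q;
    q.setRPY(0.0, 0.0, t);
    tf::Transform base(q, tf::Vector3(std::cos(t), std::sin(t), 0.0));
    br.sendTransform(tf::StampedTransform(base, now, "world", "robot_base"));

    t += 1.0 / 30.0;
    rate.sleep();
  }
  return 0;
}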

Week 5: Feb 23 & 25: Gazebo Intro, SDF, worlds, and ROS IDEs

  • Topics
    • Simulating robots, their environments, and robot-environment interaction with Gazebo
    • Gazebo ROS integration.
    • NOTE: You do not need to install Gazebo. Your full ROS Indigo desktop installation will have installed Gazebo V2.2.6, so DO NOT follow the installation instructions on gazebosim.org. If for some reason the ROS gazebo packages are not installed, install them with sudo apt-get install ros-indigo-gazebo-ros ros-indigo-gazebo-ros-pkgs ros-indigo-gazebo-ros-control
    • NOTE: Gazebo is CPU-intensive, and may not run very well in virtual machines. A lab full of powerful desktop PCs is available for you to use in Room 140 of the Wyman Park Building.
  • Reading
  • Assignments to do this week
    • Gazebo Version 2.2 Tutorials
      • Get Started
        • Skip "Install". Do not install Gazebo, it was installed when your installed the full ROS Indigo desktop.
        • Quick Start: How to run Gazebo with a simple environment.
        • Components: This page provides an overview of the basic components in a Gazebo simulation.
        • Architecture: Overview of Gazebo's code structure.
        • Screen Shots
      • Build a Robot
        • Model Structure and Requirements: How Gazebo finds and loads models, and requirements for creating a model.
        • How to contribute a model (you can skip this one for now).
        • Make a model: Describes how to make models.
        • Make a mobile robot: How to make a model for a 2-wheeled mobile robot.
        • Using the GUI: How to use the graphical interface.
        • Import Meshes
        • Attach Meshes
        • Adding a Sensor to the Robot
        • Make a simple gripper
        • Attach the gripper to the robot.
      • Build a World
        • Building a world
        • Modifying a world
        • Skip "Digital Elevation Models", "Population of models", and "Building Editor" for now --- you can return to them at a later date when and if you need them.
      • Friction: How to set friction properties. Be sure to experiment with the more exhaustive friction example linked at the very end of this tutorial. This is the example that I showed in class with sliding blocks. Modify the gravity, friction, and initial position/orientation of the objects to observe different dynamics.
      • Connect to ROS: ROS integration
        • Skip this: Which combination of ROS/Gazebo versions to use. You can skip this as you will use the default version 2.2.x that comes with ROS Indigo --- see the Gazebo ROS installation notes above.
        • Installing gazebo_ros_pkgs
        • Using roslaunch: Using roslaunch to start Gazebo, world files and URDF models
        • URDF in Gazebo: Using a URDF in Gazebo
    • Homework #5: Assignments to hand in this week - DUE 4:30PM Tuesday March 1, 2016:
      Screen shot of rviz displaying a simple two-wheeled robot. Click thumbnail for higher resolution image.
      • Create a new Gazebo ROS package named gazebo_ros_my_mobile_robot_project with the standard directory structure specified in the Creating your own Gazebo ROS Package tutorial and exemplified in the URDF in Gazebo RRBot package that you downloaded and used in this tutorial. Your project should have at least the following subdirectories:
        • gazebo_ros_my_mobile_robot_project/urdf
        • gazebo_ros_my_mobile_robot_project/launch
        • gazebo_ros_my_mobile_robot_project/worlds
      • Create both an URDF file and a SDF file for a simple new mobile robot of your own original creation, named my_mobile_robot.urdf and my_mobile_robot.sdf.
        Screen shot of Gazebo displaying a simple two-wheeled robot at the gas station. Click thumbnail for higher resolution image.
        • You can create the URDF and SDF files directly by editing them separately (the hard way!), or create a URDF file with extra fields for SDF parameters (the easy way!) and then automatically generate the SDF file my_mobile_robot.sdf from the URDF file my_mobile_robot.urdf with gzsdf, as described in URDF in Gazebo, with the command gzsdf print my_mobile_robot.urdf > my_mobile_robot.sdf.
        • Using the XACRO macro package will save a lot of time and effort in the end.
        • Your new robot should have at least two wheels (or legs) to enable its motion.
        • Your new robot should be statically stable at rest in a normal gravitational field when on a flat surface.
        • It should have nice colors.
        • Create a launch file named my_mobile_robot_rviz.launch which launches a joint state publisher, a robot state publisher, and rviz set to display your robot --- recall that the launch file from the urdf_tutorial example did just this.
      • Create a gazebo world file named my_mobile_robot.world which provides at least a horizontal plane, gravity, and your new robot. In Gazebo you should be able to cause the robot to move on the plane by applying torques to the wheel (or leg) joints.
      • Create a launch file named my_mobile_robot_gazebo.launch which launches your robot world with your robot in it --- recall that you did this in the assigned tutorial section on roslaunch with gazebo.
      • Push the finished package gazebo_ros_my_mobile_robot_project as a new repo on https://git.lcsr.jhu.edu and share it with the course instructors.

Week 6: Mar 1 & 3: Gazebo Physical Simulation and ROS Integration

  • Topics
    • Simulating robots, their environments, and robot-environment interaction with Gazebo
    • Gazebo ROS integration.
    • Gazebo Intermediate Concepts
  • Reading
  • Assignments to do this week
    • Gazebo Version 2.2 Tutorials
      Rviz screen shot showing the RRBot camera and laser-scanner sensor data. Click for higher resolution image.
      • Connect to ROS: ROS Integration
        • Gazebo plugins in ROS
          • Note: As mentioned in class, if you are running a PC or VirtualBox without a GPU graphics adapter then gzserver will crash when you run the launch file with an error message beginning something like [gazebo-1] process has died [pid 3207, exit code 139, cmd /opt/ros/indigo/lib/gazebo_ros/gzserver... - so you will need to modify the rrbot.gazebo sdf file to use the non-GPU Hokuyo laser plugin as follows:
            • Replace <sensor type="gpu_ray" name="head_hokuyo_sensor"> with <sensor type="ray" name="head_hokuyo_sensor">
            • Replace <plugin name="gazebo_ros_head_hokuyo_controller" filename="libgazebo_ros_gpu_laser.so"> with <plugin name="gazebo_ros_head_hokuyo_controller" filename="libgazebo_ros_laser.so">
          • Note: To visualize the laser in Gazebo as shown in the Gazebo figure on the right of this web page and in the tutorial here, you will also need to set the visualize property of the hokuyo laser plugin to true (i.e. "<visualize>true</visualize>").
            Gazebo screen shot showing the RRBot camera and laser-scanner sensor data. Click for higher resolution image.
        • ROS control: Before you do this tutorial, be sure that you have the Indigo ros-control packages installed with the command "sudo apt-get install ros-indigo-ros-control ros-indigo-ros-controllers". If these packages are not installed on your system you will get error messages like [ERROR] [WallTime: 1395611537.772305] [8.025000] Failed to load joint_state_controller and [ERROR] [WallTime: 1395611538.776561] [9.025000] Failed to load joint1_position_controller.
        • ROS communication with Gazebo
          • Note that there is an error in the Set Model State Example of this tutorial --- the first "rosservice call..." command example specifies an orientation on the coke can that causes it to fall over ("orientation: {x: 0, y: 0.491983115673, z: 0, w: 0.870604813099 } }"). Your command should instead specify the correct upright orientation ("orientation: {x: 0, y: 0.0, z: 0, w: 1.0 }"). A hedged C++ version of this service call appears after this week's assignment list.
        • ROS Plugins
          • Note: There are typos in both commands given in Create a ROS Package section of this tutorial.
          • Note: If you previously downloaded the gazebo_ros_demos tutorial package from this tutorial then you do not need to create a new custom package named "gazebo_tutorials" for this tutorial, since the "gazebo_tutorials" package is already present in the directory ~/catkin_ws/src/gazebo_ros_demos/custom_plugin_tutorial .
        • Advanced ROS integration
    • Homework #6: Assignments to hand in this week
    • Expand upon a package that you developed for last week's assignment (or create a new robot package if you prefer). Use standard ROS plug-ins (or write your own if you prefer) for control and sensing of a robot that you developed URDF and/or SDF files for last week. Use at least one physical plug-in (ROS control of the joints, differential drive, etc.) and one sensor plug-in (e.g. camera, laser, projector, video).
    • Push your package to the git server and share with the instructor and TAs.
    • Demonstrate your robot and its operating plugins to one of the TAs during the regular office hours.
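
As a companion to the Set Model State note in the tutorial list above, here is a hedged C++ sketch of calling the /gazebo/set_model_state service. The model name "coke_can" is an assumption about the tutorial world, and the package must depend on gazebo_msgs.

// set_model_state_example.cpp - hedged example of resetting a Gazebo model pose from C++
#include <ros/ros.h>
#include <gazebo_msgs/SetModelState.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "set_model_state_example");
  ros::NodeHandle n;
  ros::ServiceClient client =
      n.serviceClient<gazebo_msgs::SetModelState>("/gazebo/set_model_state");

  gazebo_msgs::SetModelState srv;
  srv.request.model_state.model_name = "coke_can";    // assumed model name from the tutorial world
  srv.request.model_state.pose.position.x = 0.0;
  srv.request.model_state.pose.position.y = 0.0;
  srv.request.model_state.pose.position.z = 0.0;
  srv.request.model_state.pose.orientation.w = 1.0;   // upright: identity orientation
  srv.request.model_state.reference_frame = "world";

  if (client.call(srv))
    ROS_INFO("set_model_state: %s", srv.response.status_message.c_str());
  else
    ROS_ERROR("Failed to call /gazebo/set_model_state");
  return 0;
}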

Week 7: Mar 8 & 10 Turtlebot Simulation in Gazebo, SLAM Navigation

  • Reading
  • Assignments to do this week
    • Install the turtlebot packages and the turtlebot simulator Gazebo and Rviz packages.
      • Install the turtlebot packages as described in the Turtlebot Installation Tutorial. Do the "Ubuntu Package Installation" with the command sudo apt-get install ros-indigo-turtlebot ros-indigo-turtlebot-apps ros-indigo-turtlebot-rviz-launchers ros-indigo-turtlebot-simulator ros-indigo-turtlesim ros-indigo-turtlebot-teleop ros-indigo-kobuki-ftdi
        • Note: You DO NOT need to do a source installation
      • Note that you do not need to run the command rosrun kobuki_ftdi create_udev_rules because your notebook computer will not be physically connected to the kobuki base with a USB cable. The netbook computers that are physically on the turtlebots are connected to the kobuki base with a USB cable. Your computer will communicate with the turtlebot's on-board netbook via WiFi.
    • Read about the Turtlebot simulator package for ROS Indigo.
    • Do the Turtlebot Simulator Tutorials for Gazebo (do not do the "Stage" simulator tutorials).
      • Turtlebot bringup guide for ROS Indigo
      • Explore a Gazebo world with a simulated turtlebot
        • To command the turtlebot motion from the keyboard, this node allows you to use the arrow-keys (up, down, left, right): roslaunch kobuki_keyop safe_keyop.launch
        • Install the additional turtlebot simulator and
        • Be sure to
          • List and examine the nodes, topics, and services
          • Echo some of the published topics
          • Run rqt_graph to visualize the topic data paths
      • Make a SLAM map and navigate with it.
        • Be sure to
          • List and examine the nodes, topics, and services
          • Echo some of the published topics
          • Run rqt_graph to visualize the topic data paths
          • Save your map to files with your name as part of the filename - for example I might use the command rosrun map_server map_saver -f map_by_louis to generate the map occupancy grid data file map_by_louis.pgm and its associated meta-data file map_by_louis.yaml.
            • Examine the contents of the .pgm map file with an image viewer such as gimp.
            • Examine the contents of the .yaml metadata map file with a text editor or with more.
          • Note: To run the AMCL mapping demo, you need to specify the full path name of the .yaml map file, e.g. roslaunch turtlebot_gazebo amcl_demo.launch map_file:=/home/llw/my_first_map.yaml
  • Homework #7: Assignments to hand in this week
    • Email the two map files (.pgm and .yaml) that you generated to the instructor and the TAs with the subject line "530.707 HW# 7 by your_firstname your_lastname".
    • Demonstrate to the TAs or instructor using your map to navigate the playground world with amcl particle filter navigation (you did this in the last part of the SLAM navigation tutorial).

Spring Break: Mar 15 & 17

  • Spring Break

Week 8: Mar 22 & 24: Using and Programming Turtlebot in the Lab

  • Topics
  • Reading
  • Assignments to do this week
    • Read and understand the Spring 2014 530.707 Robot Systems Programming Turtlebot_Inventory_and_Network
    • When you start using a turtlebot:
      • Verify that the netbook is turned on and charged.
      • Verify that the kobuki base is turned on and charged.
    • Before driving a turtlebot, disconnect all power from the netbook and base.
    • When you finish using a turtlebot
      • Plug in the netbook to charge.
      • Plug in the kobuki base to charge.
      • Log out of the workstation and netbook
    • Turtlebot Tutorials
    • 1. Installation
      • 1. Installation
        • Wyman 157 Workstations and Turtlebot Netbooks: The necessary turtlebot ROS packages are already installed on Wyman 157 Workstations 9-14 and all Turtlebot Netbooks.
        • Installing Turtlebot Packages on Your Personal Notebook: If you want to do turtlebot development on your notebook PC, you will need to install turtlebot packages on your notebook. Most of the packages have already been installed on the turtlebots and the desktop workstations. The command is sudo apt-get install ros-indigo-turtlebot ros-indigo-turtlebot-apps ros-indigo-turtlebot-viz ros-indigo-turtlebot-simulator ros-indigo-kobuki-ftdi
        • Note that the command rosrun kobuki_ftdi create_udev_rules has ALREADY BEEN RUN to create the device /dev/kobuki for the turtlebot USB connection on the Turtlebot netbooks. You do not need to run this command again on the netbooks. You do not need to run this command on the desktop development stations or on your own notebook PC.
      • 2. Post-Installation Setup of the Turtlebot Netbook READ THIS TUTORIAL, BUT DO NOT DO IT; WE ALREADY DID IT FOR EACH TURTLEBOT.
        • Network time protocol service "chrony" is already installed on the turtlebot netbooks and the desktop development PCs.
      • 3. Workstation Installation
        • Don't forget to install the turtlebot packages onto your own laptop (termed "Workstation" in the tutorial) as described in this tutorial. These packages are already installed on the lab desktop PCs.
      • 4. Network Configuration
        • Follow these steps to configure the ROS environment variables so that your development machine ROS nodes can talk to ROS nodes on the turtlebot netbook.
        • You will need to configure your personal account's .bashrc to set ROS environment variables on your development machine (termed "Workstation" in the tutorial) as described in this tutorial.
        • You will need to configure your personal account's .bashrc to set ROS environment variables on the turtlebot netbook as described in this tutorial.
        • IF YOU USE A DIFFERENT WORKSTATION OR TURTLEBOT, YOU NEED TO CHANGE THESE ENVIRONMENT VARIABLES BOTH ON YOUR ACCOUNT ON YOUR WORKSTATION AND ON YOUR ACCOUNT ON THE TURTLEBOT NETBOOK.
    • 2. Getting Started
      • 2.1 Bringup
        • 1. TurtleBot Bringup How to start the TurtleBot software.
        • 2. TurtleBot Care and Feeding This tutorial explains how to charge and maintain your TurtleBot.
        • 3. 3D Visualisation: Visualising 3D and camera data from the Kinect/Asus.
          • Note: There is an error in this tutorial as noted here. The command roslaunch turtlebot_bringup 3dsensor.launch fails to publish sensor topic data (e.g. no depth maps, and the scan data is all NAN). Use the command roslaunch turtlebot_bringup 3dsensor.launch depth_registration:=false to get sensor topic data.
      • 2.2 Teleoperation
        • 1. Keyboard Teleop Keyboard teleoperation of a turtlebot
          • We recommend using this keyboard teleop node: roslaunch kobuki_keyop safe_keyop.launch rather than the default turtlebot keyop node.
          • NOTE: Sometimes the kobuki_keyop safe_keyop.launch is wonky if you do not run a persistent roscore before launching anything.
          • Run the keyboard teleop while 3D sensor and image information are being displayed in RVIZ.
          • Drive the turtlebot around.
          • Try not to crash or to push against immobile objects (the turtlebot wheel motors could overheat and burn out).
    • 3. Advanced Usage
  • Homework #8: Assignments to hand in this week
    • Demonstrate to the TAs or instructor keyboard teleop of the turtlebot with 3D sensor data being displayed in RVIZ.
    • Demonstrate to the TAs or instructor using your map to navigate the Wyman 157 lab with amcl particle filter navigation using the map that you created.

Week 9: Mar 29 & 31: Independent Project: Project Proposal

Week 10: Apr 5 & 7: Independent Project: Project Implementation and Testing

  • Topics
    • Additional topics TBD
  • Reading
    • TBD
  • Assignments to do this week
    • Week 2 of project development.
    • Use class time and office hours
  • Homework #10: Assignments to hand in this week
    • Project Progress report #1 (email to instructor and TAs)

Week 11: Apr 12 & 14: Independent Project: Project Implementation and Testing

  • Topics
    • Additional topics TBD
  • Reading
    • TBD
  • Assignments to do this week
    • Week 3 of project development.
    • Use class time and office hours
  • Homework #11: Assignments to hand in this week
    • Project Progress report #2 (email to instructor and TAs)

Week 12: Apr 19 & 21: Independent Project: Project Implementation and Testing

  • Topics
    • Additional topics TBD
  • Reading
    • TBD
  • Assignments to do this week
    • Week 4 of project development.
    • Use class time and office hours
  • Homework #12: Assignments to hand in this week
    • Project Progress report #3 (email to instructor and TAs)

Week 13: Apr 26 & 28: Independent Project: Project Implementation and Testing

  • Topics
    • Additional topics TBD
  • Reading
    • TBD
  • Assignments to do this week
    • Week 5 of project development.
    • Use class time and office hours
  • Homework #13: Assignments to hand in this week
    • Project Progress report #4 (email to instructor and TAs)

May 11: Independent Project Demonstrations, 2-5PM Wednesday May 11, 2016

  • Schedule For Class Project Demonstrations 2-5PM Wednesday May 11, 2016 (exact times may vary)
    • 2:00PM Demo Location: Krieger 70 Hydro Lab - Robots that swim and fly, Oh My!
      • 2:00PM Preliminary Control and Navigation ROS Package for OpenROV 2.8 by Shahriar Sefati and Laughlin Barker
      • 2:20PM Quadrotor Autonomy by Rodolfo Finocchi and Azwad Sabik
      • 2:40PM Virtual reality Control of Drone by Zach Sabin and Dave Morra
      • 3:00PM walk to Hackerman Hall
    • 3:10PM Demo Location: in or near Hackerman Hall - Robotic scale-model self-driving cars!
      • 3:10PM Class Photo (you can bring your robot if you like)
      • 3:20PM Control of RC Model Car Inside and Outside of Hackerman Hall by Ryan Howarth and Rachel Hegeman
      • 3:40PM Robotic Security Guard by Greg Langer, Stefan Reichenstein, and Ted Staley
      • 4:00PM walk to 140 Wyman Park Building
    • 4:10PM Demo Location: 140 Wyman Park Building - Mobile robots navigating and interacting with people in the lab!
    • 4:10PM Turtlebot following human and picking up the ball by Sipu Ruan and Zhe Kang
    • 4:40PM TurtleBot Autonomous Map Building and Object Delivery Project by Hao Yuan and Bo Lei
  • Final report is due by end of exam period. Here is the outline:
    • Project report (PDF): Email a PDF report of your project to the Instructor and TAs. Be sure to put 530.707 in the email subject line. Your project report should include
      • Project Title, Author(s) (i.e. team members), Date
      • Description of proposed project goals
      • Software
        • List and description of the new ros packages you implemented to accomplish your project. Include git repository URL for each package.
        • List of the major existing ROS packages you utilized
      • Hardware and infrastructure
        • List and description of any new or additional hardware and infrastructure you needed to implement your project.
        • List of existing hardware and infrastructure you needed to implement your project.
      • Sample Data Products (if any) from the project
      • Link to brief online video of your project in action (desired but not required).
      • Project "Lessons Learned"
      • Suggestions for future projects
      • References - list of any references cited in your text
    • Project video (desired but not required).
    • Add section on your project to the new Projects section of course web page. Content is cut-and-paste from your project report.

2016 Independent Class Projects

Here is a link to some photos and videos of the final 2015 class project demonstrations on May 7, 2015


Virtual Reality Control of Drone, by David Morra and Zach Sabin

  • Link to brief online video of your project in action.
  • Project Goals

The main goal of this project is to create a ROS system that allows a Parrot AR Drone to be controlled by a pilot wearing an Oculus Rift and a Myo Armband. The video from the drone is streamed into the Oculus Rift, creating an immersive environment for the operator of the drone. In addition to this tele-operated mode, we also planned to implement some form of waypoint navigation that would allow the drone to follow a pre-planned path.

  • Software
    • Software package we created

We created 1 package for this project:

  • oculus_drone: Package for operating Parrot AR.Drone 2.0 with Oculus Rift and Myo Gesture Control Armband.
    • Nodes:
      • main.cpp (ardrone_joystick_node): Provide flight and velocity controls for the AR.Drone 2.0 using the Oculus Rift and Myo Armband as input devices.
      • main_joy.cpp: Allow operator to fly the AR.Drone using only a game controller (for simplicity and debugging).
    • Launch files:
      • ardrone.launch: Launch the fully operational mode of our project. Control the AR.Drone using the Oculus Rift and Myo, view the video feed from the AR.Drone on the Oculus display, and initiate our autonomous task of returning to the initial takeoff site and landing.
      • ardrone_joy_only.launch: Launch the corresponding nodes to allow the operator to fly the drone from a first-person video perspective using a video game controller as the only input source.
      • simulate.launch: Same as "ardrone.launch" but in a simulated Gazebo environment.
      • simulate_joy_only.launch: Same as "ardrone_joy_only.launch" but in a simulated Gazebo environment.


    • Existing software packages
      • Oculus: ROS driver for Oculus Rift DK1
      • ros_myo: ROS driver for Myo Gesture Control Armband.
      • "tum_ardrone": ROS package for low level drone control, odometry, and autopilot.
      • "Joy": ROS driver for simple video game controller.
      • "relay": ROS node to broadcast image from one topic to another.
  • Hardware and Infrastructure
    • Hardware provided:
    • Parrot AR.Drone 2.0: Low-cost quadcopter with forward and downward facing video cameras.
    • Oculus Rift Developer Kit 1: Virtual Reality headset with stereo display and head pose estimation.
    • Myo Gesture Control Armband: Bluetooth armband with sensors for gross and fine arm motion.
  • Project "Lessons Learned"
    • Keeping track of state of robot/node is important and not trivial
      • For example when we were executing flip we queued up other flip commands in the robot’s command queue and it would then execute them continuously. To avoid this we had to track the state of the robot (currently flipping) and the state of the pilot’s head to avoid sending repeat flip commands
    • Good to have safe kill features / dead man switch
      • There were a couple times before we had implemented this into our software where we would lose connection with the drone and it would then fly erratically around the net.
    • Isolate hardware and software debugging (e.g. via simulation)
      • This proved useful to ensure that we were publishing commands to the correct topics with the correct signs and orientations. It also allowed us to code at home as opposed to having to be in the lab with the drone whenever we worked.
    • Unit testing features before adding to package
      • Testing every feature by itself before adding it to the code made our debugging much easier since we didn’t have to test the entire package every time we added a feature.
    • Integration testing to make sure that feature doesn’t interfere with other parts of package
      • However just because a particular piece works doesn’t mean that it will integrate successfully into the project. Several times we wrote code that worked individually but didn’t work or broke something else when we added it to the package as a whole.
    • Keep the case on the drone and avoid gaps in the net
      • We were lucky that our drone didn’t break but there were a few times where the outside net was not correctly positioned relative to the bottom net and the drone fell through the gap.
    • Net can make it harder to actually test features although it is vital for safety
      • We certainly were glad to have the net, especially when the drone went haywire. However, it did make testing more difficult, as we were operating in a much more confined environment.
    • Pay attention to firmware
      • Especially with open source projects that might not be under active development.
      • We had to be careful with the firmware version of all of our hardware to ensure it worked with the ROS nodes that other people had written for them.
  • Suggestions for future projects
    • Incorporate a true stereo vision camera into the AR.Drone.
    • Use the Oculus Rift for Augmented Reality quadcopter applications.
  • Possible future work
    • Improve stability and comfortability of controls.
    • Decrease video lag on Oculus Rift display.

Quadrotor Autonomy by Rodolfo Finocchi and Azwad Sabik

  • Project Goals

The main goal of this project is to develop a ROS package that facilitates the autonomous flight of a Parrot AR Drone 2.0. AR-tag tracking using image data from the Parrot's bottom camera, along with altimetry data from the drone's ultrasonic sensor, helps the drone localize itself in a previously-mapped 3D environment. Additionally, the software provides functionality for the autonomous search and tracking of a platform marked with a target AR tag, as well as autonomous landing on said platform.

  • Software
    • Software package we created: We created 1 package for this project:
      • "hydro_landings": Package for autonomous operation of Parrot AR Drone 2.0. Includes nodes for localization, target search and tracking, and landing.
    • Nodes:
      • drone_pose_estimator.py: ROS node that calculates a pose estimate of the AR Drone in the world frame using the relative positions of the 12 AR tags in the drone's environment. Collects image data from the downward-facing camera to calculate the transformation. When no tags are in the drone's field of view, on-board sensors are used for odometry based on the previous pose estimate. An exponential smoothing low-pass filter is used to filter out the noisy signal and create a smooth trajectory. The tuning of this filter consists of finding the parameter values that best reduce noise while keeping lag at a minimum. This node publishes the filtered pose estimate as a geometry_msgs::Pose and target_in_sight as a Boolean.
      • drone_search.py: ROS node that performs the search for a predetermined target AR tag. The node constantly updates the position goals of the drone, commanding it to follow an elliptical trajectory until the target tag is in sight. At that time, the target position in the world frame becomes the goal. Once the target is found, this node begins the landing procedure. This node publishes the goal position as a geometry_msgs::Pose.
      • velocity_cmd.cpp: ROS node that calculates the drone velocity based on the distance between the estimated pose and the goal location. A proportional-derivative (PD) controller is used for closed-loop position control. Once the error is below a pre-selected threshold, the goal is marked as reached and drone_search.py can create a new one. This node publishes the velocity command as a geometry_msgs::Twist and whether or not the goal has been reached as a Boolean.
      • visualize.py: Defines a Graphical User Interface (GUI) that provides pose estimate visualization. The top left displays a model of the flight-cage area with the 12 AR tags displayed. The current position and pose estimates, as well as the position goal are shown. The top right shows data from the drone's video stream. Individual estimates for the x,y,z and yaw parameters of the drone are plotted in the bottom half.
    • Launch files:
      • quadrotor_control.launch: Sets up the environment for the Parrot AR Drone by defining cage and AR tag sizes, coordinate frame and camera parameters, and launching the joy_node, ardrone_teleop, ar_track_alvar, and drone_pose_estimator nodes. Depending on the value of the "simulation" parameter (true is the default), the nodes will drive either a model quadrotor in a simulated Gazebo environment (if true) or the physical drone (if false). In the first case, quadrotor_simulation.launch is called, while in the second, ardrone.launch from the ardrone_autonomy package is called. Both are defined below.
      • quadrotor_simulation.launch: Launched when quadrotor_control is launched with its "simulation" parameter set to true. Creates cage world in Gazebo and spawns a URDF model of the quadrotor.
      • quadrotor_autonomy.launch: Calls velocity_cmd.cpp node to continuously update command velocity based on error between pose estimate and goal position. Also calls drone_search.py node to continuously redefine the current goal.
    • Existing software packages
      • "AR Drone Autonomy package": this ROS package is based on the official AR-Drone SDK and provides relatively low-level commands for both control and sensor-reading with the Parrot. Notable control commands include both translation and rotation about the three principal axes, and notable sensor readings include: elevation as determined by the downward-facing ultrasonic rangefinder; IMU measurements; and video input from both front-facing and downward-facing cameras. Noted commands had been determined functional to varying degrees of precision both on physical Parrot and in simulation, which we were able to confirm in practice.
      • "TUM Simulator package": this ROS package allows for simulation of the Parrot AR Drone in Gazebo and is designed for compatibility with the ardrone_autonomy package. It is functional in Indigo, despite not being optimized for it.
      • "AR Track Alvar package": this ROS package allows us to track the location of AR tacks in the Parrot's field of view to help it determine its 3D position in space and provide it with controller feedback in target search and tracking. We use the downward facing camera in the Parrot's physical flight environment to detect any of 12 AR tags placed adhered to the floor in a 3x4 pattern.
      • Joy package: ROS driver for a generic Linux joystick.
      • "AR Drone Joystick package": Works together with Joy package to convert joystick readings into AR Drone velocity commands. Used for testing, debugging, and emergency control.
  • Hardware and Infrastructure
    • Hardware we created:
      • AR tags: 13 AR tags were used in this project - 12 organized in a predefined pattern inside the drone cage for drone localization and one to act as a drone-landing target.
    • Hardware provided:
      • Parrot AR Drone 2.0: Low-cost quadrotor with forward and downward facing video cameras, ultrasound altimeter, gyroscope, accelerometer, magnetometer, and other sensors useful in localization and odometry.
      • Drone Cage in Hydro Lab: Netting set up by the professor, teammates, and the other drone team to define a safe flying zone.
  • Project "Lessons Learned"
    • Simulations are useful but dangerous
      • They are good for debugging and confirming expected robot behavior.
      • They can also be misleading and difficult to replicate in the real world. Additional time should be left to debug the real-world implementation.
    • Good idea to have safe kill features
      • Drone can act erratically if it loses its pose estimate, leading to potentially dangerous behavior. A kill switch is useful to avoid any injury to operator or damage to device.
    • Drones are very fragile
      • The Parrot AR Drone 2.0 is prone to breaking in the event of a crash or unstable landing.
      • It is a good idea to have a replacement device or spare parts on hand to deal with these situations.
    • Camera quality can greatly affect results of computer vision algorithms.
      • The lower-resolution downward-facing camera sometimes lost track of the AR tags, making it difficult for the drone to localize itself.
      • Larger AR tags could have helped to resolve this issue.
    • Code documentation is critical when working in a team and developing separately
      • Integrating code from two or more sources can be difficult and time-consuming.
      • The process is made much simpler when all team members appropriately comment and document their code, explaining the parameters and outputs of all functions, among other aspects.
  • Suggestions for future projects
    • The autonomous search, tracking, and landing behaviors of the AR Parrot Drone 2.0 implemented in this project can contribute to a number of useful high level tasks, including those of delivery and search and rescue. Some possible projects that can bring the system closer to these overarching goals are:
      • Autonomous tracking of and landing on moving platform
      • Autonomous avoidance of obstacles encountered along the tracking and landing flight path.
      • Autonomous retrieval of an object from a marked target location
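
The PD position controller in velocity_cmd.cpp is summarized above; as a rough, hedged illustration of the idea (not the project's actual code, which is written in C++), a minimal Python/rospy sketch of a PD loop that turns a position error into a geometry_msgs/Twist and a goal-reached flag might look like this. The topic names, gains, and tolerance are assumptions.

#!/usr/bin/env python
# Minimal PD position-to-velocity sketch (illustrative only, not the project's code).
# Assumed topics: pose_error (geometry_msgs/Point, error to the goal in meters),
# cmd_vel (geometry_msgs/Twist), goal_reached (std_msgs/Bool).
import rospy
from geometry_msgs.msg import Point, Twist
from std_msgs.msg import Bool

KP, KD = 0.5, 0.1      # assumed proportional and derivative gains
TOLERANCE = 0.10       # assumed goal tolerance [m]

class PDVelocityCmd(object):
    def __init__(self):
        self.prev_err = None
        self.prev_time = None
        self.cmd_pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)
        self.reached_pub = rospy.Publisher('goal_reached', Bool, queue_size=1)
        rospy.Subscriber('pose_error', Point, self.error_cb)

    def error_cb(self, err):
        now = rospy.get_time()
        cmd = Twist()
        # Proportional term on the x/y/z position error.
        cmd.linear.x = KP * err.x
        cmd.linear.y = KP * err.y
        cmd.linear.z = KP * err.z
        # Derivative term from a finite difference of the error.
        if self.prev_err is not None and now > self.prev_time:
            dt = now - self.prev_time
            cmd.linear.x += KD * (err.x - self.prev_err.x) / dt
            cmd.linear.y += KD * (err.y - self.prev_err.y) / dt
            cmd.linear.z += KD * (err.z - self.prev_err.z) / dt
        self.prev_err, self.prev_time = err, now
        self.cmd_pub.publish(cmd)
        # Report whether the goal is reached so a new goal can be generated.
        dist = (err.x ** 2 + err.y ** 2 + err.z ** 2) ** 0.5
        self.reached_pub.publish(Bool(data=dist < TOLERANCE))

if __name__ == '__main__':
    rospy.init_node('pd_velocity_cmd_sketch')
    PDVelocityCmd()
    rospy.spin()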

TurtleBot Autonomous Map Building and Object Delivery, by Hao Yuan and Bo Lei

  • Project Goals

The main goal of this project is to use a Kinect-equipped TurtleBot to autonomously explore the lab room and generate a 2D floor map, and then to deliver objects from one person to another. After generating the map autonomously, the robot navigates to one of the teammates, guided by an AR tag. The person waiting at the first position loads the delivery object onto the TurtleBot. The robot then navigates to a rough area in the other half of the lab room using the map it previously generated. Next, the robot moves toward the other person (this step can also be triggered by voice control), guided by a purple color tag that the robot tracks. The person waiting there picks up the delivery object from the TurtleBot, which completes the object delivery. We added a joystick to select the robot's working mode, so the whole process can be controlled from the joystick. The joystick instructions are shown below:

(Figure: joystick instructions, top view)
  • Software
    • Software package we created

In our project, the TurtleBot performs several tasks, and all of the nodes and launch files are collected in one package called “turtlebot_project” (https://git.lcsr.jhu.edu/lbo2/final-project). The package contains several launch files, parameter files, and nodes:

    • Nodes:
      • ARtracker.cpp: This node subscribes to the “/ar_pose_marker” topic and publishes to the "/mobile_base/commands/velocity" topic. Based on the relative position of the AR tag, it plans a path and publishes velocity messages for the TurtleBot.
      • colorTracker.cpp: This node subscribes to the “/blobs_3d” topic and publishes to "/mobile_base/commands/velocity". Based on the relative position of the color blob, it plans a path and publishes velocity messages for the TurtleBot.
      • mapsaver.cpp: This node subscribes to the “/joy” topic and runs a bash command to save the map when the X button on the joystick is pressed (a minimal sketch of this joystick-triggered pattern appears at the end of this project's writeup).
      • startArTracking.cpp: This node subscribes to the “/joy” topic and runs a bash command to start tracking the AR tag when the START button on the joystick is pressed.
      • startColorTracking.cpp: This node subscribes to the “/joy” topic and runs a bash command to start tracking the color when the LB button on the joystick is pressed.
      • viewNavigation.cpp: This node subscribes to the “/joy” topic and runs a bash command to view TurtleBot navigation in RViz when the BACK button on the joystick is pressed.
      • startVoice.cpp: This node subscribes to the “/joy” topic and runs a bash command to start the voice control nodes when the right stick on the joystick is pressed.
      • Selector.py: This node subscribes to the “/joy” topic and runs the corresponding bash commands to: 1. stop AR tag tracking when the A button is pressed; 2. stop color tracking when the RB button is pressed; 3. kill all the nodes to stop the system when the middle button is pressed in an emergency.


    • Launch files:
      • Startup.launch: This launch file first launches minimal.launch in the turtlebot_bringup package to start the nodes necessary for running the TurtleBot. It then launches 3dsensor.launch in the turtlebot_bringup package to get the depth image from the Kinect. This launch file is run on the TurtleBot.
      • AutoMapping.launch: This launch file starts autonomous mapping, using the operator node in the nav2d_operator package for autonomous obstacle avoidance and the navigator node in the nav2d_navigator package for path planning, together with gmapping.launch for SLAM. This launch file is run on the workstation.
      • local.launch: This launch file launches the nodes we wrote for the specific tasks. This launch file is run on the workstation.
      • trackPass.launch: This launch file loads the map the robot generated and launches the nodes in the turtlebot_navigation package, the ar_track_alvar node in the ar_track_alvar package to recognize the AR tag, and nodes in the cmvision_3d package to recognize the specific color we set. This launch file is run on the TurtleBot.
    • Existing software packages
      • Cmvision package: Used for fast color blob detection; usually combined with the cmvision_3d package for 3D color tracking.
      • Joy package: Publishes the /joy topic based on the controller inputs, including the state of each of the joystick's buttons and axes.
      • Pocketsphinx package: This package includes a voice dictionary, a speech recognizer, and a Python-based interface for mapping recognized words to corresponding movements.
      • AR_Track_Alvar package: This package is a ROS wrapper for Alvar, an AR tag tracking library. The published topic, ar_pose_marker, includes a list of the poses of the observed AR tags with respect to the output frame.
      • Cmvision_3d package: Cmvision_3d uses the topic produced by cmvision to publish the position of each color blob relative to its camera frame, and tf frames for each color.
      • Navigation_2D package: This package includes obstacle avoidance, a simple path planner, and a graph-based SLAM (Simultaneous Localization and Mapping) node that allows robots to generate a map of a planar environment.
      • Gmapping: The gmapping package provides a ROS node (slam_gmapping) that performs laser-based SLAM.
  • Hardware and Infrastructure
    • Hardware we created:
      • AR tag
      • Color tag
    • Hardware provided:
      • TurtleBot with Kinect
      • Joystick
  • Sample Data
    • Autonomously Generated Lab Map and the Cost Map During Mapping
(Figures: lab map and cost map, top view)
    • AR Tag Recognition and Color Recognition
(Figures: AR tag recognition and color recognition)
  • Demo Video
  • Project "Lessons Learned"
    • We ran the ar_track_alvar package on the workstation at first, but the depth image contains a lot of data; to process it on the workstation, the image first has to be transferred from the TurtleBot, so the AR tag message update rate was very slow. We solved this problem by running the ar_track_alvar package on the TurtleBot laptop: the laptop does the AR tag calculation first, so less data is sent to the workstation and the update rate improves.
    • The SLAM node in the navigation_2d package doesn't work for the TurtleBot. We used the gmapping SLAM node combined with the path-planning node in the navigation_2d package to complete the autonomous mapping feature.
    • The accuracy of the color tracking is badly affected by the environment. The lab contains many colors, and a student may wear clothing similar to the tracking color. Therefore, always check the colors of the surroundings first.
    • When the TurtleBot's speed is low, it can get stuck on uneven or sloping floor. When the robot does AR or color tracking, it slows down as it approaches the target; if the floor is uneven there, the TurtleBot doesn't have enough speed to overcome it.
    • Voice control doesn't work well when the user speaks far from the microphone. We solved this problem by using a Bluetooth headset.
  • Suggestions for future projects
    • If you are considering either AR tag tracking or color tracking, we suggest AR tag tracking, as it is more reliable and less affected by the environment.
    • The SLAM node in the navigation_2d package doesn't work for the TurtleBot. We suggest using the gmapping node together with the path-planning node in the navigation_2d package for autonomous mapping.
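
Several of the nodes listed above (mapsaver, startArTracking, startColorTracking, viewNavigation, startVoice, Selector) share one pattern: subscribe to the /joy topic and trigger a command when a particular button is pressed. The following Python/rospy sketch illustrates that pattern for the map-saving case, with a simple debounce so the command fires only on the press transition; the button index and the exact command are assumptions, not the project's values.

#!/usr/bin/env python
# Sketch of the joystick-triggered pattern used by mapsaver, startArTracking, etc.
# The button index and the command being run are assumptions for illustration.
import subprocess
import rospy
from sensor_msgs.msg import Joy

SAVE_BUTTON = 2            # assumed index of the X button on this joystick
last_pressed = [False]     # remember the previous button state for debouncing

def joy_cb(msg):
    pressed = msg.buttons[SAVE_BUTTON] == 1
    # Act only on the transition from released to pressed.
    if pressed and not last_pressed[0]:
        subprocess.call(['rosrun', 'map_server', 'map_saver', '-f', '/tmp/lab_map'])
    last_pressed[0] = pressed

if __name__ == '__main__':
    rospy.init_node('joy_mapsaver_sketch')
    rospy.Subscriber('joy', Joy, joy_cb)
    rospy.spin()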


Turtlebot Human Following and Object Picking Up, by Sipu Ruan and Zhe Kang

  • Project Goals

The main goal of this project is to have the TurtleBot follow a person, then find and pick up an object and give it to the person. For following and tracking, color tags are used: the TurtleBot detects and tracks a tag that is held by a person or stuck on the object, and moves to the target using a proportional speed control method. For picking, a 2-degree-of-freedom robot manipulator was designed and 3D-printed; its motions are controlled by an Arduino. Additional functions are also implemented: the robot can receive commands by voice or from a joystick, and it can play sounds and “speak” while carrying out its missions.

Figure 1 Project Facilities
  • Software
    • Software package we created
 We created 1 package for this project:
  • robot_motion: This package contains 3 nodes for sending command signals and controlling robot speed, and 5 launch files to launch all the nodes we need.
    • Nodes:
      • robot_motion_controller.cpp: This node acts as a central controller. It subscribes to command topics from the joystick and voice recognition, and to a topic indicating that the robot has reached the target; it publishes command topics to the follower, the Arduino program, and the sound_play nodes. The node classifies voice/joy commands into the following situations: “follow”, letting the robot follow a person (who holds a color tag); “get”, letting the robot find the object to be picked up; “pick”, letting the robot pick up the object after reaching the target; “release”, letting the robot manipulator release the object; and “stop”, forcing the robot to stop. It publishes the corresponding topics to the different nodes to fulfill these tasks.
      • robot_follower.cpp: This node subscribes to commands from the controller and to blob information from the cmvision_3d package. Each time it receives blob information, it calculates the velocity of the mobile base using a proportional speed control method and publishes the command velocity to the robot base (see the proportional-follower sketch at the end of this project's writeup). It also publishes a topic to the controller indicating that the robot has reached the target.
      • joy_controller.cpp: This node subscribes to the /joy topic and publishes velocity to the robot base using Axes[0] and Axes[1], and commands to the controller using Buttons[0], [1], [2], and [3]. This function is intended as a backup for voice control and as an emergency stop: if the “pocketsphinx” package does not recognize the words we say, we can use the joystick to control the robot's motions.
    • Launch files:
      • project_bringup.launch: launches turtlebot bringup, 3dsensor, cmvision_3d and pocketsphinx launch files together.
      • robot_motion.launch: launches robot_motion_controller and follower nodes.
      • arduino.launch: launches serial_node in the rosserial_python package and sets the port parameters.
      • sound_play.launch: launches the sound_play_node and test_sound_play nodes.
      • joy_controller.launch: launches the joy_controller node.
    • Existing software packages
      • PocketSphinx package: This package is used for voice recognition on the TurtleBot. We developed new .lm and .dic files to improve word-recognition accuracy: unrelated words are removed, and similar-sounding words are avoided for better detection.
      • Cmvision package
      • Cmvision_3d package: These two cmvision packages are used for fast color blob detection. The cmvision package is not currently supported on ROS Indigo, but with some modifications to its “package.xml” file it can be built from source (the WxWidgets library also needs to be installed). The cmvision_3d package publishes the 3D coordinates of the detected object. We modified the color text file, which contains the RGB values and thresholds.
      • Sound_play package: Plays sounds on the TurtleBot to confirm the commands it receives. This package can play .ogg or .wav files and “speak” specific words we assign. We rewrote the test.cpp file to play sounds and phrases for our use, e.g. “robot ready”, “start following”, “finding the object”, “target reached”, “picking finished”, “releasing the object”.
      • Rosserial_python package: The ROS interface with the Arduino. We wrote an Arduino program, “turtlebot_arm_servo.ino”, to control the servos. The manipulator motions are divided into 4 parts: “set up”, “pick”, “release”, and “reset”.
      • Joy package: ROS driver for a generic Linux joystick.
  • Hardware and Infrastructure
    • Hardware we created:
      • Bluetooth adapter: On Ubuntu 14.04, Bluetooth doesn't work well because the kernel does not fully support it at this time, so we installed an external Bluetooth adapter.
      • Bluetooth headset: Used for voice control of the TurtleBot.
      • 2-DOF robot manipulator: The robot arm was designed in Pro/E and 3D printed; a gripper was purchased.
    • Hardware provided:
      • 2 servos: move the robot arm and gripper.
      • Arduino Mega board: microcontroller.
Figure 2 CAD Models for robot arm and mounting base
Figure 3 3D-printing details
Figure 4 Hardware Installation
  • Project "Lessons Learned"
    • Before designing a part for manufacturing in design software (Pro/E, CAD, etc.), it is very important to measure the relevant dimensions in the real world instead of simply trusting TurtleBot data found online. Wrong designs resulting from inaccurate data can lead to financial loss and implementation difficulties in later stages.
    • The “cmvision” package is not officially supported in Indigo, so “sudo apt-get install” is not an option. Keep searching for and trying workarounds (e.g. building from source) in order to use such packages.
    • Source devel/setup.bash every time you make changes to the launch (and cpp) files.
    • The “depth” and “camera” topics contain a lot of data, so they transmit very slowly over WiFi. As a result, we launch the “cmvision” package on the TurtleBot instead of on the workstation for faster data transmission.
    • The pocketsphinx node publishes the recognized voice-control string on the /recognizer/output topic. We just need to subscribe to this topic in our own node.
    • In the color definition text file, the format is important! (Do not leave blank lines.)
    • The grasp action for our TurtleBot requires a high-accuracy distance measurement between the object and the Kinect. We could hardly make this accurate by attaching the color tag to the bottle itself, so we attached the color tag to a static desk and placed the object underneath it.
  • Suggestions for future projects
    • The Kinect is not designed to measure distance accurately, so think carefully before using it in a distance-critical task, or be sure to apply thresholds to the distance measurement.
    • Run through the whole process of your project in your mind before starting. Predict the problems you may encounter; many of them may seem small but can be hard to solve given your design.
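
As a rough sketch of the proportional speed control used by robot_follower.cpp, the Python/rospy node below drives the TurtleBot base toward a tracked target point. The input topic and message type (a geometry_msgs/PointStamped in the camera's optical frame), the gains, and the standoff distance are assumptions for illustration; the project's node instead consumes the blob messages produced by cmvision_3d.

#!/usr/bin/env python
# Proportional follower sketch: drive the base toward a tracked target point.
# Input topic/type, gains, and standoff distance are assumptions for illustration.
import rospy
from geometry_msgs.msg import PointStamped, Twist

K_LIN, K_ANG = 0.4, 1.2    # assumed proportional gains
STANDOFF = 0.8             # assumed desired distance to keep from the target [m]

def target_cb(pt):
    cmd = Twist()
    # In the camera's optical frame, z is the distance ahead and x is the lateral offset.
    cmd.linear.x = K_LIN * (pt.point.z - STANDOFF)   # drive up to the standoff distance
    cmd.angular.z = -K_ANG * pt.point.x              # turn to center the target
    cmd_pub.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('proportional_follower_sketch')
    cmd_pub = rospy.Publisher('mobile_base/commands/velocity', Twist, queue_size=1)
    rospy.Subscriber('target_point', PointStamped, target_cb)
    rospy.spin()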

Preliminary control and navigation ROS package for the OpenROV 2.8 underwater robot vehicle, by Shahriar Sefati and Laughlin Barker

  • Project Goals

The main goal of this project was to implement a teleoperation and navigation system for the OpenROV using the Robot Operating System (ROS). The teleoperation portion of the project built upon a ROS/NodeJS bridge proof of concept first demonstrated by two UCSD undergraduates in 2014. The teleop package developed in this project enables control of the OpenROV thrusters, lights, lasers, and camera tilt. It also publishes OpenROV telemetry, such as light level and battery status, as ROS topics. For the navigation aspect of the vehicle, we used an existing Visual Odometry (VO) algorithm called ORB-SLAM that has been released within the ROS community. Visual Odometry is distinct from Simultaneous Localization and Mapping in that the map can be sparse, providing only enough information for accurate localization, and not necessarily persistent.

  • Software
    • Software package we created

We created 2 packages for this project:

  • teleop: Package to control the following aspects of the OpenROV:

Thrusters, lights, lasers, and camera tilt. It also publishes the following OpenROV telemetry as ROS topics: light level, laser status, BBB CPU load, battery voltage, battery current, OpenROV cape current, and camera servo tilt.

  • frame_translate: To allow dynamic placement of a reference frame, as well as to compute the relative scale of ORB-SLAM’s odometry, a package entitled frame_translate was written which provides two functions:
    • Placement of an additional TF frame at a location of the user's choosing (currently configured to place a frame at the ROV’s position upon user command).
    • Scale calculation. By moving the ROV a known distance underwater (preliminary tests conducted by piloting the ROV from weighted ends of a section of rope), the relative scale of ORB-SLAM’s VO estimate can be computed, and used to produce a corrected odometry estimate.
    • Nodes:
      • teleop.cpp: This node subscribes to joystick messages via the /joy topic and publishes to the appropriate ROS topics. Button and axis mappings are adjustable via the ROS parameter server. Forward lights, laser, camera tilt, forward and backward thrust, and depth control commands were dynamically mapped to buttons on a hand controller.
      • translate_world.cpp: The task of this node is to place a /tf frame at the pose of the camera at the instant that it is called. This is helpful to set a custom world frame in any desired pose within the environment, later to be used as a known reference.
      • get_scale.cpp: This node takes a fixed distance between two known points in the environment (in meters) as an argument and outputs the scale of the ORB-SLAM map relative to reality. The translate_world node is used to generate two fixed /tf frames at the endpoints of a known-length object, and get_scale computes the map scale from this information (see the scale-computation sketch at the end of this project's writeup).


    • Launch files:
      • rosbridge.launch: Includes the rosbridge_websocket.launch from the rosbridge_server package.
      • teleop.launch: Starts the joy node and teleop node.
      • webcam.launch: Starts the gscam node to publish the camera image data and info topics.
    • Existing software packages
      • gscam: A ROS camera driver that uses gstreamer to connect to devices such as webcams.
      • camera1394: ROS driver for devices supporting the IEEE 1394 Digital Camera (IIDC) protocol.
      • rosbridge_server: Part of the rosbridge_suite packages, providing a WebSocket transport layer.
      • joy: ROS driver for a generic Linux joystick.
      • ORB-SLAM: ORB-SLAM is a versatile and accurate Monocular SLAM solution able to compute in real-time the camera trajectory and a sparse 3D reconstruction of the scene in a wide variety of environments, ranging from small hand-held sequences to a car driven around several city blocks.
Figure 1: ORB-SLAM in action. Features are being tracked as green spots in the lower left picture. The map is being constructed with the location of the OpenROV on the right picture. Position of the ROV in the environment (water tank) can be observed on the lower right, where the front camera view is depicted on the upper left picture.
  • Hardware and Infrastructure
    • Hardware we created:
      • 3D printed tilted bracket: A small bracket was designed, 3D printed, and mounted within the OpenROV enclosure to hold the additional Point Grey camera at a 20 degree angle from the vertical plane.
    • Hardware provided:
      • OpenROV v2.8: OpenROV is a small open-source underwater Remotely Operated Vehicle (ROV). The vehicle has a 100 m depth capability, an (approximate) 2 hour runtime on rechargeable Li-ion batteries, and serves telemetry and high-definition video over a thin two-wire tether.
      • Firefly MV FMVU-03MTM Point Grey camera: A global-shutter camera with 60 fps and a resolution of 752x480, chosen as the extra camera.
  • Link to a brief online video of the project in action.
    • Project Demo: [1]
  • Project "Lessons Learned"
    • Monocular SLAM (using a single camera) does not provide absolute depth information, so the constructed map is only a scaled version of the actual environment. Some SLAM packages work around this problem by having the user perform a particular known camera initialization (e.g. moving the camera 10 cm parallel to the plane of interest).
    • The ROS camera_calibration node performs better with a checkerboard that has fewer squares but larger ones. This enables nearly constant real-time tracking of the features on the board.
  • Suggestions for future projects
    • It is good to know in advance what type of camera and workspace each Visual Odometry package is designed for. This saves time in deciding which package is most compatible with your hardware (camera type, i.e. global shutter vs. rolling shutter) and working environment.
  • Possible future work
    • ORB-SLAM Visual Odometry using the default OpenROV camera.
    • Studying and comparison of other Visual Odometry packages.
    • Object distance detection package using the laser beams.
    • Heading/Depth control of the ROV using ROS.
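
The scale-recovery idea behind get_scale.cpp can be sketched in a few lines of Python using tf: compare a known real-world distance with the distance between two frames placed in ORB-SLAM's unscaled map. The frame names, timeouts, and argument handling below are assumptions for illustration; multiplying ORB-SLAM's translation estimates by the printed factor gives the corrected, metric odometry estimate described above.

#!/usr/bin/env python
# Sketch of the get_scale idea: compare a known real-world distance with the
# distance between two frames placed in ORB-SLAM's (unscaled) map.
# The frame names, timeouts, and argument handling are assumptions.
import math
import sys
import rospy
import tf

if __name__ == '__main__':
    rospy.init_node('get_scale_sketch')
    known_dist = float(rospy.myargv(argv=sys.argv)[1])   # true distance between the endpoints [m]
    listener = tf.TransformListener()
    listener.waitForTransform('world', 'endpoint_a', rospy.Time(0), rospy.Duration(5.0))
    listener.waitForTransform('world', 'endpoint_b', rospy.Time(0), rospy.Duration(5.0))
    (pa, _) = listener.lookupTransform('world', 'endpoint_a', rospy.Time(0))
    (pb, _) = listener.lookupTransform('world', 'endpoint_b', rospy.Time(0))
    slam_dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(pa, pb)))
    # Multiplying ORB-SLAM translations by this factor yields a metric estimate.
    rospy.loginfo('scale factor = known / SLAM = %.3f', known_dist / slam_dist)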

Autonomous Motion Self Driving Car, by Greg Langer, Stefan Reichenstin and Ted Staley

  • Project Goals

The main goal of this project is to autonomously drive around in a known map.

  • Software
    • Software package we created

We worked with 1 main package for this project:

  • rampage_rosclass: Built off of code written by Subhransu Mishra. rampage_rosclass takes user input and autonomous driving commands to teleoperate or autonomously operate our vehicle.
    • Nodes:
      • joyful_rampage.cpp: This node subscribes to a Joy topic for controller input, as well as a Twist message for autonomous navigation. The Twist message contains a command velocity that provides a forward velocity and an angular velocity; the latter is converted to a steering command using the bicycle-model relation ω = (V / L) tan(φ), where ω is the angular velocity, V is the linear velocity, L is the wheelbase (distance between axles), and φ is the steering angle (see the conversion sketch at the end of this project's writeup).
    • Launch files:
      • autodope.launch: Launches sensors, AMCL, Navigation stack with base local planner (designed with holonomic robots in mind).
      • autodope_teb.launch: Launches the same as autodope, but with the teb local planner (designed with Ackermann-steering robots in mind; after some additional tuning it works much better than it did on demo day).
      • dopemobile_sensors.launch: Launches all the hardware sensors.
      • rampage_amcl_diff.launch: Launches a map server, the AMCL localizer, and the transform chain from og_org to odom to base_link.
      • map_and_statictf.launch: Launches the static TF transform from world to og_org.
      • Dopemobile_hector_super.launch: Launches sensors and Hector SLAM.
      • Dopemobile_hector_rosbag.launch: Launches just the car without sensors and with Hector SLAM.
      • Dopemobile_gmapping_super.launch: Launches sensors and gmapping.
      • Dopemobile_gmapping_rosbag.launch: Launches just the car without sensors and with gmapping.
    • Existing software packages
      • Rosserial: For interaction between Arduino Serial and ROS.
      • Navigation Stack: A system for autonomous navigation using AMCL and robot specific parameters.
      • TEB Local Planner: A local planner designed to work for car-like robots.
      • Joy: ROS driver for a generic Linux joystick.
      • Hector Mapping: Scan-matching SLAM that uses only laser scans (no wheel odometry required).
      • gmapping: SLAM using wheel odometry and laser scans.
      • AMCL: Probabilistic pose estimation using a particle filter, odometry, and laser scans.
  • Hardware and Infrastructure
    • Hardware we created:
      • Structure for computer, sensors, chips, batteries, and other robot components: Built from 80/20-like Makerbeam aluminum extrusions and laser-cut acrylic. It holds the majority of the components and is attached directly to the chassis of the RC car.
    • Hardware provided:
      • Redcat Rampage RC Car: The preassembled remote control car we built the robot on top of.
      • 3D Printed Encoder Mount: Mount provided to hold the encoder.
      • Hokuyo LiDAR: Powerful daylight-capable LiDAR sensor with a range of up to 30 m.
      • Microsoft Kinect: 3D depth sensor - not used yet.
      • GPS (x2): One standalone GPS, one integrated with the APM board. Neither used yet.
      • IMU: Integrated with the APM board. Used to measure rotation of the vehicle.
      • Encoder: Mounted to the vehicle drivetrain. Used to estimate distance travelled.
      • Magnetometer: Integrated with the APM GPS. Not used yet.
  • Link to brief online video of project in action -- Driving in Gilman Hall
    • Project Demo, Driving: [2]
    • Project Demo, Crashing: [3]
    • Robot 3D View: [4]
  • Project "Lessons Learned"
    • ROS packages still need careful tuning, and just because the software is already written doesn’t mean it will work out of the box.
    • Just because components work well in daylight doesn't guarantee that the outdoor area you choose to use them in will be ideal for your purposes.
    • Buy spares of key components (boards, batteries)
    • Safety switches / dead man switches are really important.
    • There is a reason why some sensors are more expensive than others.
    • Do not design any structural parts as an unsupported cantilever. It will break.
    • Always ask if something has been done before trying to do it yourself.
    • Integrating code from multiple sources is time consuming.
  • Suggestions for future projects
    • Implement a new local planner that is either specific to our car, or a little more generic for Ackermann-steering vehicles, so as to allow the car to navigate through any path it can fit through.
    • Once reliable motion is achieved, many more options exist for possible directions to take.
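
As a hedged sketch of the Twist-to-steering conversion described for joyful_rampage.cpp, the Python/rospy node below inverts the bicycle-model relation ω = (V / L) tan(φ) to obtain φ = atan(ωL / V). The wheelbase value and the output topics and message types are assumptions for illustration; the actual node feeds the vehicle's own command pipeline rather than std_msgs/Float64 topics.

#!/usr/bin/env python
# Sketch of the Twist-to-steering conversion described for joyful_rampage.cpp.
# Bicycle model: omega = (V / L) * tan(phi), so phi = atan(omega * L / V).
# Wheelbase value and output topics/types are assumptions for illustration.
import math
import rospy
from geometry_msgs.msg import Twist
from std_msgs.msg import Float64

WHEELBASE = 0.42   # assumed axle-to-axle distance L [m]

def cmd_vel_cb(cmd):
    v = cmd.linear.x
    omega = cmd.angular.z
    # Avoid dividing by zero when the car is commanded to stop.
    phi = 0.0 if abs(v) < 1e-3 else math.atan(omega * WHEELBASE / v)
    steering_pub.publish(Float64(data=phi))
    speed_pub.publish(Float64(data=v))

if __name__ == '__main__':
    rospy.init_node('twist_to_ackermann_sketch')
    steering_pub = rospy.Publisher('steering_angle', Float64, queue_size=1)
    speed_pub = rospy.Publisher('speed', Float64, queue_size=1)
    rospy.Subscriber('cmd_vel', Twist, cmd_vel_cb)
    rospy.spin()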

Autonomous Control of RC Car Around Hackerman Hall, by Rachel Hegeman and Ryan Howarth

  • Project Goals

The goal of this project is to build a miniature car using an RC car drive system capable of autonomous navigation of known maps of areas around campus. In addition to that, the goal is to create a web interface on the Hopkins network to handle requests for the car to go to different areas on campus, potentially as an intra-campus pick-up and delivery system. To accomplish this, the car will use sensors such as a 2-D lidar, an inertial measurement unit, and an encoder on its drive shaft. The software on the car’s computer will both interface with and make use of existing ROS packages.

  • Software
    • Software package we created. We created 1 package for this project:
      • rampage_planning: The rampage_planning package contains the node we created to translate the geometry_msgs::Twist message published by move_base on the /cmd_vel topic into the rampage_logger::UavCmds message to send to the APM board connected to the motor controller. It also contains the launch files and parameter files for our version of the ROS navigation stack.
      • Nodes:
        • cmd_vel_to_uav_cmds: Node that translates the geometry_msgs::Twist messages on the cmd_vel topic published by the teb_local_planner into rampage_msgs::UavCmds messages on the uav_cmd topic, to communicate steering and velocity commands to rosserial_python.
      • Launch files:
        • rampage_logger/rampage_all_hardware.launch: Located in the rampage_logger package, this launch file launches both rosserial_python nodes, the Hokuyo urg_node, the cmd_vel_to_uav_cmds node, and anything else necessary for the hardware to begin publishing sensor messages through ROS.
        • rampage_planning/move_base.launch: Launches the ROS navigation stack with the teb_local planner for sending navigation goals to the car.
        • rampage_logger/map_and_statictf.launch: Launches all static transform publishers as well as the map server that publishes the specified map.
        • rampage_logger/rampage_amcl_diff.launch: Launches amcl in order to perform localization for the car.
    • Existing software packages
      • rampage_logger: The rampage_logger package contains the launch files we used to launch rosserial for both the APM board responsible for logging IMU data and the APM board responsible for logging the encoder data and sending motor commands. The package also contains maps of Hackerman Hall that Subhransu made himself and that we used for initial localization and navigation testing.
      • rampage_firmware: The rampage_firmware package contains the ros compatible firmware that Subhransu created for both of the APM boards. We used the firmware from this package for our boards as well in order to transmit our sensor data to rosserial_python on the computer.
      • rampage_estimator_and_controller: The rampage_estimator and controller package contains the source for the rampage_commander library that Subhransu created to send velocity and steering commands to the APM firmware, which we made use of in our own code to translate velocity and steering commands coming from the ROS navigation stack.
      • rampage_msgs: This package contains the message type definitions of the UavCmds, the message type sent to rosserial_python to be translated into motor commands on the APM connected to the motor controller and electronic speed controller (ESC); ImuSimple, the messages published by rosserial_python from the APM connected to the IMU; WheelOdometry, the message type published by rosserial_python from the APM connected to the encoders; and PwmCmds, the message type sent from the firmware of the APM connected to the motor controller to communicate with the controller’s firmware.
      • gmapping: The gmapping package was used to map various locations. It takes lidar and odometry data and produces a 2D map. Maps of smaller places with well-defined features that the lidar can identify work much better; therefore, mapping outdoors did not work as well as indoors.
      • amcl: Amcl was used to localize the robot in a specific map. It is an Adaptive Monte Carlo probabilistic localization algorithm that uses a particle filter to give a best estimate of position.
      • move_base: Referred to as the ROS navigation stack, move_base relies on modular packages for cost analysis and planning to send velocity commands to a robot according to the navigation goal that it receives. By default, move_base uses Navfn as its global planner and base_local_planner as its local planner, neither of which are designed to plan for the constraints put on a nonholonomic vehicle.
      • navfn: Navfn is a ROS package that implements a global planner for the ROS navigation stack. A global planner refers to a costmap to calculate the path of least resistance to reach whatever navigation goal it is given. It sends this path to the local planner, which commands the vehicle's actual movement. Navfn uses Dijkstra's algorithm to calculate the path.
      • teb_local_planner: The teb_local_planner is a local planner built to interface with the ROS navigation stack that is designed to handle the constraints experienced by nonholonomic vehicles. The local planner receives a global path from the global planner in the navigation stack and overlays it with the dynamic obstacle information from the local costmap to calculate the path of least resistance that follows the global plan as closely as possible.
      • costmap_2d: We used the costmap_2d package for both our global and local costmap calculations. The costmaps were fed to the planners so they could calculate the global and local paths of least resistance.
      • rosserial_python: Rosserial was used to communicate with the two Arduino microcontrollers. This package abstracts many of the complexities of serial communication.
      • rqt_reconfigure: We used rqt_reconfigure to adjust the parameters of both our local and global planner to try to optimize the car’s ability to accomplish navigation goals within a small space.
  • Hardware and Infrastructure
    • Redcat Racing Rampage XB-E: We used this ⅕ scale RC car as the chassis for our robot. It has a very powerful electric motor and significant suspension.
    • Hokuyo UTM-30LX Laser Rangefinder: Laser rangefinder used for localization in this project. It has a 270 degree scan range and a sampling rate of 50hz. It can be used indoors and outdoors and has a maximum range of 30m.
    • Andoer APM 2.6: Microcontroller used for interfacing with low-level hardware. It has an Atmel ATmega2560 chip as well as a pressure sensor and an IMU. We used two of these boards in this project: one to interface with the radio receiver, car ESC, car servo, and encoder buffer board, and the other to collect IMU and GPS data.
    • Gigabyte Brix GB-BSi7H-6500 Computer: A small form factor computer that was mounted on the car. It has a dual core i7 with onboard graphics and a SSD.
    • S18V20F12, D24V60F5, DCDC-USB-200: We used three voltage regulators on the car. The S18V20F12 is a step-up/down converter with a 12 V output used to power the LIDAR. The D24V60F5 is a 5 V step-down converter used to power the onboard Ethernet switch and router. The DCDC-USB-200 is a programmable step-up converter used to power the computer at 19 V.
    • AMT103 Encoder: Encoder used to gather odometry information about the car. It has 2048 counts per revolution and was mounted on the motor output shaft using a couple 3D printed parts.
  • Project Videos:
    • Car navigating in a straight line: [5]
    • Car navigating a turn: [6]
  • Project "Lessons Learned"
    • One thing we could have done better was committing to our git repository often, whenever discrete components of the project were finished and working. There were a few times when we had changed so many files that we didn't know how to get back to a previous well-functioning state, which was a problem when performance got worse. We also could have given our parameter files version numbers and been liberal with our use of dynamic_reconfigure to dump our current parameters into a file (see the parameter-dump sketch at the end of this project's writeup). That way we could have tracked our parameter adjustments better.
    • Additionally, we wasted some time because we didn't set up static IP addresses early on for the computers we used to ssh into the car computer. When we began sending navigation commands to the vehicle, the IP address of one of our computers changed, so we could still see all the information about the nodes running on the car computer but couldn't send any messages, including navigation commands. This resulted in an hour or two of unnecessary debugging.
    • In setting up our local planner, we were not very careful about setting any of the various configuration parameters. When tuning the parameters, it is important to test them one by one and record the effects they have on the robot.
  • Suggestions for future projects
    • Adapt a local and global planner for Ackermann-drive commands (a planning algorithm for nonholonomic vehicles) to the ROS navigation stack, and implement robust dynamic obstacle avoidance for nonholonomic vehicles.
    • Accomplish autonomous mapping of campus or other areas with the car.
  • Possible future work
    • Write/obtain a global planner for nonholonomic vehicles
    • Fix stability of steering commands
    • Improve map making in outside areas by adding in the gps
    • Extend battery cables or make batteries more accessible to reduce wear on connectors and on hands
    • Make the battery connections better so that there isn't as great a risk of shorting
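
One of the lessons above suggests being liberal about dumping the planner's current dynamic_reconfigure parameters to files so that tuning sessions can be tracked. A minimal Python sketch of that idea follows; the server name (/move_base/TebLocalPlannerROS) is an assumption, and any running dynamic_reconfigure server could be substituted. Committing these dumps alongside code changes makes it much easier to return to a known-good parameter set.

#!/usr/bin/env python
# Sketch of dumping a node's current dynamic_reconfigure parameters to a
# timestamped YAML file so planner-tuning sessions can be tracked in git.
# The server name below is an assumption; any dynamic_reconfigure server works.
import time
import yaml
import rospy
from dynamic_reconfigure.client import Client

if __name__ == '__main__':
    rospy.init_node('param_dump_sketch', anonymous=True)
    client = Client('/move_base/TebLocalPlannerROS', timeout=5.0)   # assumed server name
    config = client.get_configuration(timeout=5.0)
    # Drop the nested 'groups' bookkeeping and keep only the plain parameters.
    params = {k: v for k, v in config.items() if k != 'groups'}
    fname = 'planner_params_%s.yaml' % time.strftime('%Y%m%d_%H%M%S')
    with open(fname, 'w') as f:
        yaml.safe_dump(params, f, default_flow_style=False)
    rospy.loginfo('Dumped %d parameters to %s', len(params), fname)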

Ethics

Students are encouraged to work in groups to learn, brainstorm, and collaborate in learning how to solve problems.

Problem Sets and Lab Assignments Final Writeups: Your final writeups for pre-lab exercises and lab assignments must be done independently without reference to any notes from group sessions, the work of others, or other sources such as the internet.

While working on your final writeups for assignments, you may refer to your own class notes, your own laboratory notes, and the text.

Disclosure of Outside Sources: If you use outside sources other than your class notes and your text to solve problems in the pre-lab and lab assignments (i.e. if you have used sources such as your roommate, study partner, the Internet, another textbook, a file from your office-mate's files) then you must disclose the outside source and what you took from the source in your writeup.

In this course, we adopt the ethical guidelines articulated by Professor Lester Su for M.E. 530.101 Freshman experiences in mechanical engineering I, which are quoted with permission as follows:

Cheating is wrong. Cheating hurts our community by undermining academic integrity, creating mistrust, and fostering unfair competition. The university will punish cheaters with failure on an assignment, failure in a course, permanent transcript notation, suspension, and/or expulsion.

Offenses may be reported to medical, law or other professional or graduate schools when a cheater applies. Violations can include cheating on exams, plagiarism, reuse of assignments without permission, improper use of the Internet and electronic devices, unauthorized collaboration, alteration of graded assignments, forgery and falsification, lying, facilitating academic dishonesty, and unfair competition. Ignorance of these rules is not an excuse.

On every exam, you will sign the following pledge: "I agree to complete this exam without unauthorized assistance from any person, materials or device. [Signed and dated]"

For more information, see the guide on "Academic Ethics for Undergraduates" and the Ethics Board web site (http://ethics.jhu.edu).

I do want to make clear that I'm aware that the vast majority of students are honest, and the last thing I want to do is discourage students from working together. After all, working together on assignments is one of the most effective ways to learn, both through learning from and explaining things to others. The ethics rules are in place to ensure that the playing field is level for all students. The following examples will hopefully help explain the distinction between what constitutes acceptable cooperation and what is not allowable.

Student 1: Yo, I dunno how to do problem 2 on the homework, can you clue me in? 

Student 2: Well, to be brief, I simply applied the **** principle
that is thoroughly explained in  Chapter **** in the course text.

Student 1: Dude, thanks! (Goes off to work on problem.)

- This scenario describes an acceptable interaction. 
There is nothing wrong with pointing someone in the right direction.


Student Y: The homework is due in fifteen minutes and I haven't 
done number 5 yet! Help me!

Student Z: Sure, but I don't have time to explain it to you, so
here. Don't just copy it, though.
(Hands over completed assignment.)

Student Y: I owe you one, man. (Goes off to copy number 5.)

 - This scenario is a textbook ethics violation on the part of 
 both students. Student Y's offense is obvious; student Z is 
 guilty by virtue of facilitating plagiarism, even though he/she 
 is unaware of what student Y actually did.


Joe Student: Geez, I am so swamped, I can't possibly write up the 
lab report and do the lab data calculations before it's all due.

Jane student: Well, since we were lab partners and collected all 
the data together...maybe you could just use my Excel spreadsheet
with the calculations, as long as you did the write-up yourself....

Joe Student: Yeah, that's a great idea!

- That is not a great idea. By turning in a lab report with Jane's
spreadsheet included, Joe is submitting something that isn't his 
own work.


Study group member I: All right, since there's three of us and
there's six problems on the homework, let's each do two. I'll 
do one and two and give you copies when I'm done.

Study group member II: Good idea, that'll save us a lot of work.
I'll take three and five.

Study group member III: Then I guess I'll do four and six. Are you
guys sure this is OK? Seems fishy to me.

Study group member I: What's the problem? It's not like we're
copying the entire assignment. Two problems each is still a lot 
of work.

- This is clearly wrong. Copying is copying even if it's only 
part of an assignment.


Mike (just before class): Hey, can you help me? I lost my
calculator, so I've got all the problems worked out but I 
couldn't get the numerical answers. What is the answer for 
problem 1?

Ike: Let's see (flips through assignment)... I got 2.16542.
 
Mike: (Writing) Two point one six five four two...what about 
number 2?

Ike: For that one... I got 16.0.

Mike: (Writing) Sixteen point oh...great, got it, thanks. 
Helping out a friend totally rules!

- Helping out a friend this way does not rule, totally or 
partially. As minor as this offense seems, Mike is still 
submitting Ike's work as his own when Mike gets the numerical 
answer and copies it in this way.

