The Cooperative Robotic Watercraft (CRW) project focuses on the challenge of developing techniques that let a single operator control large robot teams in the wild. Small, autonomous watercraft are well suited to applications such as flood mitigation and response, environmental sampling, and many others. Relative to other types of vehicles, watercraft are inexpensive, simple, robust, and reliable.
Using an Android phone for navigation, communication, and camera imagery, together with off-the-shelf RC boat components, we built a low-cost fleet of both fan-propelled airboats for shallow water bodies and dual-propeller differential-drive boats for high currents.
To capture team plans and the plan-context-specific operator interactions they require, a new language was developed based on Petri Nets. These Situational Awareness and Mixed Initiative (SAMI) Petri Nets, or SPNs, are the subject of my PhD dissertation.
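As a rough illustration of the underlying formalism, the sketch below is a generic Petri-net fragment in Java with made-up place and transition names; it is not the actual SAMI codebase. Tokens in places mark the active plan state, and a transition fires only when all of its input places hold tokens, which is how an SPN can gate an autonomous action on operator input.

```java
import java.util.*;

/** Minimal Petri-net-style plan sketch (hypothetical; not the SAMI codebase). */
class Place {
    final String name;
    int tokens;                       // number of tokens currently in this place
    Place(String name) { this.name = name; }
}

class Transition {
    final String name;
    final List<Place> inputs = new ArrayList<>();
    final List<Place> outputs = new ArrayList<>();
    Transition(String name) { this.name = name; }

    boolean isEnabled() {             // every input place must hold a token
        return inputs.stream().allMatch(p -> p.tokens > 0);
    }

    void fire() {                     // consume input tokens, produce output tokens
        if (!isEnabled()) throw new IllegalStateException(name + " not enabled");
        inputs.forEach(p -> p.tokens--);
        outputs.forEach(p -> p.tokens++);
    }
}

public class DeployPlanSketch {
    public static void main(String[] args) {
        // Places model plan states; tokens mark which states are active.
        Place idle = new Place("BoatIdle");
        Place operatorApproved = new Place("OperatorApproved");
        Place deployed = new Place("BoatDeployed");
        idle.tokens = 1;

        // The transition fires only when the boat is idle AND the operator approved,
        // mirroring how a plan can wait on operator interaction.
        Transition deploy = new Transition("DeployBoat");
        deploy.inputs.add(idle);
        deploy.inputs.add(operatorApproved);
        deploy.outputs.add(deployed);

        operatorApproved.tokens = 1;  // simulate the operator approving the deployment
        if (deploy.isEnabled()) deploy.fire();
        System.out.println("Deployed boats: " + deployed.tokens);
    }
}
```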
Here you can see a large team of boats and an operator GUI driven entirely by an SPN. This plan is used to incrementally deploy boats while checking for hardware failures at our test site at Katara Beach in Doha, Qatar.
The CRW project was developed at Carnegie Mellon University and Carnegie Mellon University Qatar. Source code, including a simulation environment, is available here. JAR packaged executables are available for Windows, OSX, and Linux here.
Publications:
A. Farinelli, M. M. Raeissi, N. Marchi, N. Brooks, and P. Scerri. Interacting with team oriented plans in multi-robot systems. Autonomous Agents and Multi-Agent Systems, 31(2):332–361, 2017.
A. Farinelli, N. Marchi, M. Raeissi, N. Brooks, and P. Scerri. A Mechanism for Smoothly Handling Human Interrupts in Team Oriented Plans. In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, 2015.
N. Brooks, E. de Visser, T. Chabuk, E. Freedy, and P. Scerri. An approach to team programming with markups for operator interaction. In Proceedings of the 12th International Conference on Autonomous Agents and Multiagent Systems, 2013.
The goal of the TARDEC CANINE competition was to develop a robotic platform capable of autonomously searching, identifying, and retrieving objects specified by an operator.
The general structure of each challenge was as follows (a rough sketch of the robot-side phases appears after the list):
Interact with the robot to communicate the challenge constraints
Show the robot the object it needs to retrieve
Throw or place the object far from the robot
The robot searches for and locates the object, avoiding objects with similar color or shape properties as well as human obstacles
The robot captures the object
The robot returns the object to a pre-specified location or operator’s current position
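For illustration only, here is a minimal sketch of those robot-side phases as a Java state machine. The states and trigger conditions are hypothetical and are not taken from our actual Scotty software.

```java
/** Hypothetical sketch of the challenge flow from the robot's point of view. */
public class CanineTaskSketch {
    enum Phase { LEARN_OBJECT, SEARCH, CAPTURE, RETURN, DONE }

    private Phase phase = Phase.LEARN_OBJECT;

    /** Advance the phase when the corresponding condition is met. */
    void step(boolean objectModelLearned, boolean objectLocated,
              boolean objectCaptured, boolean atDropoff) {
        switch (phase) {
            case LEARN_OBJECT:
                if (objectModelLearned) phase = Phase.SEARCH;   // object was shown to the robot
                break;
            case SEARCH:
                if (objectLocated) phase = Phase.CAPTURE;       // decoys and humans ruled out
                break;
            case CAPTURE:
                if (objectCaptured) phase = Phase.RETURN;
                break;
            case RETURN:
                if (atDropoff) phase = Phase.DONE;              // pre-specified location or operator
                break;
            case DONE:
                break;
        }
    }

    Phase currentPhase() { return phase; }
}
```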
The video below shows our robot, Scotty, retrieving one of the lighter, bouncier objects.
The final competition was held at Fort Benning in Georgia. Unfortunately, the computers onboard the robot suffered a catastrophic failure, presumably due to hot storage conditions in the lockdown area, and we were unable to compete.
As a Systems Engineering course project, teams of 3 CMU engineering graduate students designed exhibits for the Pittsburgh Children’s Museum. My group designed a “Digital Graffiti” wall, which tracks plastic ball collisions with a white wall using a Microsoft Kinect and simulates the resulting paint splatter on the wall using a projector and speakers.
The wall “resets” every 60 seconds, saving the images to our website so kids can retrieve their art when they get home. Check the website for more details and, in a few weeks, source code! In the meantime, to install the drivers we used in this project, check this post.
Below is a short video of the exhibit at the Children’s Museum of Pittsburgh.
As you can see, the exhibit has two operating modes: a paint phase and a cleanup phase. During the paint phase, kids would throw balls at the wall and watch and listen to the results. During the cleanup phase, kids would (sometimes) collect the balls to refill the bins while music played; for this project, we used audio created by the artist Waterflame. Some kids continued throwing balls against the wall during this phase, and others (especially the toddlers) would begin dancing. The exhibit was very popular and had nearly zero idle time during the 3 days it was on display at the museum. When school field trip groups came through, the system had as many as 16 simultaneous users yelling, screaming, throwing balls, blocking throws, rolling around, and causing general (but expected) chaos.
A unique shape-color-number pattern was generated at the end of each phase, which kids could use to retrieve their image from the project website.
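As a toy illustration of that retrieval mechanism, each saved image just needs a randomly generated, human-readable code. The shape and color names below are placeholders, not the exhibit's actual vocabulary.

```java
import java.security.SecureRandom;

/** Sketch of generating a shape-color-number retrieval code (names are illustrative). */
public class RetrievalCodeSketch {
    private static final String[] SHAPES = {"circle", "square", "triangle", "star"};
    private static final String[] COLORS = {"red", "blue", "green", "yellow"};
    private static final SecureRandom RNG = new SecureRandom();

    /** Returns something like "star-green-42", shown on the wall and stored with the image. */
    public static String newCode() {
        String shape = SHAPES[RNG.nextInt(SHAPES.length)];
        String color = COLORS[RNG.nextInt(COLORS.length)];
        int number = RNG.nextInt(100);
        return shape + "-" + color + "-" + number;
    }

    public static void main(String[] args) {
        System.out.println(newCode());
    }
}
```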
The main bottlenecks in the system right now are:
The 30 FPS poll rate of the Kinect – balls thrown too fast pass through entirely between frames and do not register (see the impact-detection sketch after this list). An absorbent material attached to the wall would help mitigate this problem.
Timing issues between the color and depth cameras – balls thrown at extreme angles show a location mismatch between the color and depth images when using the factory calibration transformations. I think this is caused by a small delay between when the two cameras are polled.
The high reflectivity of the plastic balls, which complicates proper color identification. Foam balls were investigated but did not weigh enough to throw properly. Colorspace shifting, further investigation into foam ball vendors, and methods of adding weight are possible solutions to this problem.
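To make the frame-rate bottleneck concrete, here is a rough sketch of frame-based impact detection: anything that appears within a thin band in front of the wall in a depth frame is treated as a hit. The resolution, wall distance, and band width below are illustrative assumptions, not the exhibit's actual values.

```java
/**
 * Rough sketch of detecting ball impacts from Kinect depth frames.
 * Assumes a pre-measured wall depth (in millimeters) and a raw depth array
 * from the driver; all constants are illustrative only.
 */
public class ImpactDetectorSketch {
    static final int WIDTH = 640, HEIGHT = 480;
    static final int WALL_DEPTH_MM = 2500;      // assumed distance from Kinect to wall
    static final int IMPACT_BAND_MM = 60;       // a ball this close to the wall counts as a hit

    /** Returns pixel coordinates where something sits within the impact band in front of the wall. */
    public static java.util.List<int[]> findImpacts(short[] depthMM) {
        java.util.List<int[]> hits = new java.util.ArrayList<>();
        for (int y = 0; y < HEIGHT; y++) {
            for (int x = 0; x < WIDTH; x++) {
                int d = depthMM[y * WIDTH + x];
                if (d > 0 && d < WALL_DEPTH_MM && WALL_DEPTH_MM - d < IMPACT_BAND_MM) {
                    hits.add(new int[]{x, y});   // splatter would be rendered at these pixels
                }
            }
        }
        return hits;
    }
}
```

At 30 FPS, a fast ball can cross the band entirely between two consecutive frames, which is exactly the missed-registration problem noted above.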
In other news, we were approached by a company designing exhibits for the Science Museum of Beersheba. They were interested in setting up the exhibit, and we were able to get it installed and configured remotely. Their integration looks much slicker!
This was a great project and I’m glad to finally have proper Kinect drivers set up! For more information about the project, including setting up your own Digital Graffiti wall, check out the Digital Graffiti Homepage.
When I got my Kinect back in December, after several failed attempts with the ROS cturtle and OpenNI drivers I gave up and began using the Windows CLNUI drivers and the libfreenect Ubuntu drivers. Recently, with the help of a very good friend, I have finally migrated from the CLNUI Kinect drivers to the OpenNI Windows drivers, which provide easy access to the internal calibration matrix between the RGB and depth cameras. This zip file includes a README in the base directory covering the basics of getting it set up, along with the necessary files and an example cpp file. This is basically the same process as listed at www.codeproject.com, except that as of Monday the stable versions of each requisite library would not play nicely with each other (yeah, nothing new there!), so these are much older drivers (sadly, no audio drivers). If you have a set of more up-to-date drivers that work, please let me know!
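For the curious, the registration that calibration matrix enables boils down to back-projecting each depth pixel into 3D, shifting it into the color camera's frame, and re-projecting it. The sketch below shows that math with made-up intrinsic and extrinsic values; it is not the Kinect's factory calibration and does not use the OpenNI API.

```java
/**
 * Rough sketch of depth-to-RGB registration using a pinhole camera model.
 * All intrinsics and extrinsics below are placeholder values for illustration.
 */
public class DepthToRgbSketch {
    // Assumed focal lengths and principal points for both cameras.
    static final double FX_D = 580, FY_D = 580, CX_D = 320, CY_D = 240;  // depth camera
    static final double FX_C = 525, FY_C = 525, CX_C = 320, CY_C = 240;  // color camera
    // Assumed extrinsic offset between the cameras (~2.5 cm baseline along x).
    static final double TX = 0.025, TY = 0.0, TZ = 0.0;

    /** Maps a depth pixel (u, v) with depth z (meters) to color image coordinates. */
    public static double[] depthPixelToColorPixel(int u, int v, double z) {
        // Back-project to a 3D point in the depth camera frame.
        double x = (u - CX_D) * z / FX_D;
        double y = (v - CY_D) * z / FY_D;
        // Apply the (assumed pure-translation) extrinsic transform.
        double xc = x + TX, yc = y + TY, zc = z + TZ;
        // Project into the color camera.
        double uc = FX_C * xc / zc + CX_C;
        double vc = FY_C * yc / zc + CY_C;
        return new double[]{uc, vc};
    }
}
```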
A few interesting things I found:
Installing in a place other than the default directory may mess it up (or I’m just missing a library import setting somewhere).
Windows 7 has some power saving features which may cause the drivers to fail within minutes of usage on some laptops. I have a Lenovo x200 and my RGB feed would quickly desynchronize/jitter and cause the drivers to crash immediately afterwards.
To fix that, I had to do the following:
Plug in the Kinect
Open Device Manager
Go through the USB Root Hubs and find the one for the Kinect (labelled as “Generic USB Hub (3 ports)”)
In the USB Root Hub’s Power Management tab, uncheck “Allow computer to turn off this device to save power”
I’ll post why I got the drivers working in a few days. Enjoy!
The Simple UAV Environment (SUAVE) investigated strategies using World-In-Miniature (WIM) models to maintain situational awareness and control teams of up to 22 UAVs, comparing the WIM to traditional strategies that display each individual image feed.
Here, a 3D terrain mesh has been painted with a low-resolution, outdated satellite image. A SUAVE team of simulated UAVs flies through the environment collecting camera imagery, and the live imagery is used to update the image “painted” onto the terrain mesh in real time. UAV flight paths are rendered as white lines, and each UAV’s current location is represented by a sphere. The user can then fly through the model to view terrain from specific locations and angles, looking for areas of interest.
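Conceptually, the “painting” amounts to copying each incoming camera frame into the patch of terrain texture it covers. The sketch below assumes a nadir-pointing camera and a flat world-to-texture mapping, which is a simplification of what SUAVE actually does.

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

/**
 * Simplified sketch of painting live UAV imagery onto a terrain texture.
 * Assumes a nadir-pointing camera and a flat mapping from world coordinates
 * to texture pixels; the real projection used in SUAVE is not shown here.
 */
public class TerrainPainter {
    private final BufferedImage terrainTexture;   // starts as the old satellite image
    private final double metersPerPixel;          // world-to-texture scale (assumed)

    public TerrainPainter(BufferedImage baseSatelliteImage, double metersPerPixel) {
        this.terrainTexture = baseSatelliteImage;
        this.metersPerPixel = metersPerPixel;
    }

    /** Overwrite the patch of terrain texture covered by one camera frame. */
    public void paintFrame(BufferedImage cameraFrame,
                           double uavEastMeters, double uavNorthMeters,
                           double footprintWidthMeters, double footprintHeightMeters) {
        int w = (int) (footprintWidthMeters / metersPerPixel);
        int h = (int) (footprintHeightMeters / metersPerPixel);
        int x = (int) (uavEastMeters / metersPerPixel) - w / 2;   // center the footprint
        int y = (int) (uavNorthMeters / metersPerPixel) - h / 2;

        Graphics2D g = terrainTexture.createGraphics();
        g.drawImage(cameraFrame, x, y, w, h, null);  // scale the frame into its footprint
        g.dispose();
        // The renderer would then re-upload terrainTexture to the mesh material.
    }

    public BufferedImage getTexture() { return terrainTexture; }
}
```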
“Two Villages” is a Java-based social decision game. It investigates cross-cultural collaboration and cultural golden values, supports two players over the internet, and includes extensive user input tracking. Two players in different rooms would each control the wildfire-fighting capabilities of one of the two neighboring villages. The game was designed to explore trust and reciprocation between the two players as wildfires threatened the two villages with varying degrees of severity. Below is the introductory video played to familiarize participants with the game.
The codebase was written in Java and runs on Windows, Linux and OSX. Two Villages was developed at Carnegie Mellon University.
Publications
Z. Semnani-Azad, K. Sycara and M. Lewis, “Dynamics of helping behavior and cooperation across culture,” Collaboration Technologies and Systems (CTS), 2012 International Conference on, Denver, CO, USA, 2012, pp. 525-530.
The Multi-robot Control System (MrCS) is used in conjunction with the Unified System for Automation and Robot Simulation (USARSim), which can simulate high-fidelity urban search and rescue scenarios. This year, we added a new data management system to the operator interface: the image queue. The image queue allows the operator to asynchronously view imagery from the team using a priority queue. The priority score of an individual image combines factors such as the amount of previously unseen territory it captures and a vision-based estimate of the probability that a victim is present. This allows rapid operator coverage of very large areas by selecting a small subset of the image database that captures the entirety of the world.
When victims are detected, a sub-queue is created, displaying images taken from different angles near the indicated position. This allows quick and accurate refinement of the victim’s location and condition via triangulation. If sufficient high-quality imagery is unavailable, a task to return to the area and take more imagery can be added to the system, which is then allocated to a robot using the MrCS distributed task allocation.
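A minimal sketch of how such an image queue could be scored and ordered is shown below; the equal weighting of the two factors is an assumption for illustration, not the scoring actually used in MrCS.

```java
import java.util.PriorityQueue;

/** Sketch of an image queue; the scoring weights here are illustrative, not MrCS's. */
public class ImageQueueSketch {
    static class ScoredImage implements Comparable<ScoredImage> {
        final String imageId;
        final double unseenAreaFraction;   // fraction of the image showing new territory
        final double victimProbability;    // vision-based estimate that a victim is present

        ScoredImage(String imageId, double unseenAreaFraction, double victimProbability) {
            this.imageId = imageId;
            this.unseenAreaFraction = unseenAreaFraction;
            this.victimProbability = victimProbability;
        }

        double priority() {                // assumed linear combination of the two factors
            return 0.5 * unseenAreaFraction + 0.5 * victimProbability;
        }

        @Override
        public int compareTo(ScoredImage other) {   // highest priority first
            return Double.compare(other.priority(), this.priority());
        }
    }

    public static void main(String[] args) {
        PriorityQueue<ScoredImage> queue = new PriorityQueue<>();
        queue.add(new ScoredImage("img-001", 0.8, 0.10));   // lots of new territory
        queue.add(new ScoredImage("img-002", 0.3, 0.90));   // likely victim
        queue.add(new ScoredImage("img-003", 0.1, 0.05));   // mostly redundant

        // The operator views the most informative images first.
        while (!queue.isEmpty()) {
            System.out.println(queue.poll().imageId);
        }
    }
}
```

Running this prints img-002, then img-001, then img-003: the likely-victim image and the high-coverage image come ahead of the mostly redundant one.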
MrCS was developed at Carnegie Mellon University and University of Pittsburgh.
Publications
S. Chien, M. Lewis, S. Mehrotra, N. Brooks, and K. Sycara. Scheduling operator attention for multi-robot control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pages 473–479. 2012.
N. Brooks, P. Scerri, K. Sycara, H. Wang, S. Chien, and M. Lewis. Asynchronous control with ATR for large robot teams. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, volume 55, pages 444–448. 2011.
S. Okamoto, N. Brooks, S. Owens, K. Sycara, and P. Scerri. Allocating spatially distributed tasks in large, dynamic robot teams. In Proceedings of the 10th International Conference on Autonomous Agents and Multiagent Systems, pages 1245–1246, 2011.
H. Wang, A. Kolling, N. Brooks, M. Lewis, and K. Sycara. Synchronous vs. asynchronous control for large robot teams. Virtual and Mixed Reality-Systems and Applications, pages 415–424, 2011.
H. Wang, A. Kolling, N. Brooks, S. Owens, S. Abedin, P. Scerri, P. Lee, S. Chien, M. Lewis, and K. Sycara. Scalable target detection for large robot teams. In Proceedings of the 6th international conference on Human-robot interaction, pages 363–370. 2011.