The Orocos Toolchain v2.x merges the RTT, OCL and the other tools you need to build Orocos applications.
We are gradually migrating the RTT/OCL wiki pages to the Toolchain Wiki. All wiki pages under RTT/OCL are considered to cover the RTT/OCL 1.x versions; what you find below applies only to the 2.x releases.
This is extremely easily done with the orocreate-pkg script, see Getting started.
You can use packages in two ways:
User provided files:
   Package directory: .../packagename/manifest.xml, Makefile, CMakeLists.txt, ...
   Sources:           .../packagename/src/*.cpp
   Headers:           .../packagename/include/packagename/*.hpp

Build results:
   Built component libraries for 'packagename': .../packagename/lib/orocos/gnulinux/*.so|dll|...
   Built plugin libraries for 'packagename':    .../packagename/lib/orocos/gnulinux/plugins/*.so|dll|...
   Type libraries for 'packagename':            .../packagename/lib/orocos/gnulinux/types/*.so|dll|...
   Build information for 'packagename':         .../packagename/packagename-gnulinux.pc
To allow multi-target builds, the libraries are put in the lib/orocos/targetname/ directory, so that a library built for a different target is never loaded. In the example above, the target name is gnulinux.
When you use the UseOrocos.cmake macros (Orocos Toolchain 2.3.0 or later), linking with dependees will be done automatically for you.
You may add a link instruction using the classical CMake syntax:
orocos_component( mycomponent ComponentSource.cpp )
target_link_libraries( mycomponent ${YOUR_LIBRARY} )
The component and plugin loaders of RTT will search your ROS_PACKAGE_PATH, and its target subdirectory for components and plugins.
You can then import the package in the deployer application by using:
import("packagename")
Install dir (the prefix):              /opt/orocos
Headers:                               /opt/orocos/include/orocos/gnulinux/packagename/*.hpp
Component libraries for 'packagename': /opt/orocos/lib/orocos/gnulinux/packagename/*.so|dll|...
Plugin libraries for 'packagename':    /opt/orocos/lib/orocos/gnulinux/packagename/plugins/*.so|dll|...
Type libraries for 'packagename':      /opt/orocos/lib/orocos/gnulinux/packagename/types/*.so|dll|...
Build information for 'packagename':   /opt/orocos/lib/pkgconfig/packagename-gnulinux.pc
To allow multi-target installs, packages are installed under orocos/targetname/packagename (for example orocos/xenomai/ocl), so that a library built for a different target is never loaded. In the example above, the target name is gnulinux.
You may add a link instruction using the classical CMake syntax:
orocos_component( mycomponent ComponentSource.cpp )
target_link_libraries( mycomponent -lfoobar )
RTT_COMPONENT_PATH=/opt/orocos/lib/orocos
export RTT_COMPONENT_PATH
The component and plugin loaders of RTT will search this directory, and its target subdirectory for components and plugins. So there is no need to encode the target name in the RTT_COMPONENT_PATH (but you may do so if it is required for some case).
You can then import the package in the deployer application by using:
import("packagename")
The toolchain is a set of libraries and programs that you must compile on your computer in order to build Orocos applications. In case you are on a Linux system, you can use the bootstrap.sh script, which does this for you.
After installation, these libraries are available:
These programs are available:
Orocos component libraries live in packages. You need to understand the concept of packages in Orocos to be able to create and use components. See Component Packages for more.
Your primary reading material for creating components is the Orocos Components Manual. A component is compiled into a shared library (.so or .dll).
Use the orocreate-pkg script to create a new package that contains a ready-to-compile Orocos component, which you can extend or play with. See Using orocreate-pkg for all details. (Script available from Toolchain version 2.1.1 on).
Alternatively, the oroGen tool allows you to create components with a minimum knowledge of the RTT API.
The DeploymentComponent loads XML files or scripts and dynamically creates, configures and starts components in a single process. See the Orocos Deployment Manual
The TaskBrowser is our primary interface with a running application. See the Orocos TaskBrowser Manual
$ cd ~/orocos
$ orocreate-pkg myrobot component
Using templates at /home/kaltan/src/git/orocos-toolchain/ocl/scripts/pkg/templates...
Package myrobot created in directory /home/kaltan/src/git/orocos-toolchain/myproject/myrobot
$ cd myrobot
$ ls
CMakeLists.txt  Makefile  manifest.xml  src

# Standard build (installs in the same directory as the Orocos Toolchain):
$ mkdir build ; cd build
$ cmake .. -DCMAKE_INSTALL_PREFIX=orocos
$ make install

# OR: ROS build:
$ make
You can modify the .cpp/.hpp files and the CMakeLists.txt file to adapt them to your needs. See orocreate-pkg --help for other options which allow you to generate other files.
You may modify any of the generated files, except those in the typekit directory. That directory is generated during the build and is under the control of the Orocos typegen tool from the orogen package.
After the 'make install' step, make sure that your RTT_COMPONENT_PATH includes the installation directory (or that you used -DCMAKE_INSTALL_PREFIX=orocos) and then start the deployer for your platform:
$ deployer-gnulinux
   Switched to : Deployer
  This console reader allows you to browse and manipulate TaskContexts.
  You can type in an operation, expression, create or change variables.
  (type 'help' for instructions and 'ls' for context info)
    TAB completion and HISTORY is available ('bash' like)

Deployer [S]> import("myrobot")
 = true
Deployer [S]> displayComponentTypes
I can create the following component types:
   Myrobot
   OCL::ConsoleReporting
   OCL::FileReporting
   OCL::HMIConsoleOutput
   OCL::HelloWorld
   OCL::TcpReporting
   OCL::TimerComponent
 = (void)
Deployer [S]> loadComponent("TheRobot","Myrobot")
Myrobot constructed !
 = true
Deployer [S]> cd TheRobot
   Switched to : TheRobot
TheRobot [S]> ls

 Listing TaskContext TheRobot[S] :

 Configuration Properties: (none)

 Provided Interface:
  Attributes   : (none)
  Operations   : activate cleanup configure error getPeriod inFatalError inRunTimeError isActive isConfigured isRunning setPeriod start stop trigger update

 Data Flow Ports: (none)

 Services: (none)

 Requires Operations : (none)
 Requests Services   : (none)

 Peers        : (none)
You now need to consult the Component Builder's Manual for instructions on how to use and extend your Orocos component. All relevant documentation is available on the Toolchain Reference Manuals page.
The generated package contains a manifest.xml file. If ROS_ROOT is set, the CMakeLists.txt file calls rosbuild_init() and sets LIBRARY_OUTPUT_PATH to packagename/lib/orocos, so that the ROS tools can find the libraries and the package itself. The ROS integration is mediated by the UseOrocos-RTT.cmake file, which is included at the top of the generated CMakeLists.txt and is installed as part of the RTT. The Makefile is rosmake-compatible.
The OCL deployer knows about ROS packages and can import Orocos components (and their dependencies) from them once your ROS_PACKAGE_PATH has been correctly set.
Extracted from the instructions on http://www.ros.org/wiki/groovy/Installation/OSX/MacPorts/Repository
echo 'export PATH=/opt/local/bin:/opt/local/sbin:$PATH' >> ~/.bash_profile
echo 'export LIBRARY_PATH=/opt/local/lib:$LIBRARY_PATH' >> ~/.bash_profile
cd ~
git clone https://github.com/smits/ros-macports.git
sudo sh -c 'echo file:///Users/user/ros-macports >> /opt/local/etc/macports/sources.conf'
sudo port sync
sudo port install python27
sudo port select --set python python27
sudo port install boost libxslt lua51 ncurses pkgconfig readline netcdf netcdf-cxx omniORB p5-xml-xpath ros-hydro-catkin py27-sip ros-hydro-cmake_modules eigen3 dyncall ruby20
sudo port select --set nosetests nosetests27
sudo port select --set ruby ruby20
ruby --version
which gem
sudo gem install facets nokogiri
lua -v
sudo port uninstall lua
git clone https://github.com/gccxml/gccxml
cd gccxml
mkdir build
cd build
cmake .. -DCMAKE_INSTALL_PREFIX=/opt/local
make
sudo make install
mkdir -p ~/orocos_ws/src
cd ~/orocos_ws/src
sudo port install py27-wstool
wstool init .
curl https://gist.githubusercontent.com/smits/9950798/raw | wstool merge -
wstool update
cd orocos_toolchain
git submodule foreach git checkout toolchain-2.8
cd ~/orocos_ws
source /opt/local/setup.bash
sudo /opt/local/env.sh catkin_make_isolated --install-space /opt/orocos --install --cmake-args \
  -DENABLE_CORBA=TRUE -DCORBA_IMPLEMENTATION=OMNIORB \
  -DRUBY_INCLUDE_DIR=/opt/local/include/ruby-2.0.0 \
  -DRUBY_CONFIG_INCLUDE_DIR=/opt/local/include/ruby-2.0.0/x86_64-darwin13 \
  -DRUBY_LIBRARY=/opt/local/lib/libruby2.0.dylib \
  -DCMAKE_PREFIX_PATH="$CMAKE_PREFIX_PATH;/opt/local"
source /opt/orocos/setup.bash
echo 'export GCCXML_COMPILER=g++-mp-4.3' >> ~/.bash_profile
These exercises are hosted on Github .
You need to have the Component Builder's Manual (see Toolchain Reference Manuals) at hand to complete these exercises.
Also take a look at the Toolchain Reference Manuals for in-depth explanations of the deployment XML format and the different transports (CORBA, MQueue).
You'll need to have the Scripting Chapter of the Component Builder's Manual at hand for clarifications on syntax and execution semantics.
path("/opt/orocos/lib/orocos") // Path to where components are located [1] import("myproject") // imports a specific project in the path [2] import("ocl") // imports ocl from the path require("print") // loads the 'print' service globally. [3] loadComponent("HMI1","OCL::HMIComponent") // create a new HMI component [4] loadComponent("Controller1","MyProjectController") // create a new controller loadComponent("Test1","TaskContext") // creates an empty test component
You can test this code by doing:
deployer-gnulinux -s startup.ops
deployer-gnulinux
...
Deployer [S]> help runScript

 runScript( string const& File ) : bool
   Runs a script.
   File : An Orocos program script.

Deployer [S]> runScript("startup.ops")
The first line of startup.ops ([1]) extends the standard search path for components. Every component library found directly in a path is discovered by this statement, but the paths are not searched recursively. To load components from subdirectories of a path directory, use the import statement ([2]). In our example, the import statements look for the myproject and ocl directories in the component path; all libraries and plugins found in those directories are loaded as well.
After importing, we can create components using loadComponent ([4]). The first argument is the name of the component instance, the second argument is the class type of the component. When these lines are executed, 3 new components have been created: HMI1, Controller1 and Test1.
Finally, the line require("print") loads the printing service globally such that your script can use the 'print.ln("text")' function. See help print in the TaskBrowser after you typed require("print").
Now extend the script to include the lines below. They create connection policy objects and connect ports between components.
// See the Doxygen API documentation of RTT for the fields of this struct:
var ConnPolicy cp_1

// set the fields of cp_1 to an application-specific value:
cp_1.type = BUFFER          // Use BUFFER or DATA
cp_1.size = 10              // size of the buffer
cp_1.lock_policy = LOCKED   // Use LOCKED, LOCK_FREE or UNSYNC
// other fields exist too...

// Start connecting ports:
connect("HMI1.positions","Controller1.positions", cp_1)
cp_1 = ConnPolicy()         // reset to defaults (DATA, LOCK_FREE)
connect("HMI1.commands","Controller1.commands", cp_1)
// etc...
Connecting data ports is done using ConnPolicy structs that describe the properties of the connection to be formed. You may re-use the ConnPolicy variable, or create new ones for each connection you form. The Component Builder's Manual has more details on how the ConnPolicy struct influences how connections are configured.
Finally, we configure and start our components:
if ( HMI1.configure() == false )
   print.ln("HMI1 configuration failed!")
else {
   if ( Controller1.configure() == false )
      print.ln("Controller1 configuration failed!")
   else {
      HMI1.start()
      Controller1.start()
   }
}
StateMachine SetupShutdown {

   var bool do_cleanup = false, could_config = false;

   initial state setup {
      entry {
         // Configure components
         could_config = HMI1.configure() && Controller1.configure();
         if (could_config) {
            HMI1.start();
            Controller1.start();
         }
      }
      transitions {
         if do_cleanup then select shutdown;
         if could_config == false then select failure;
      }
   }

   state failure {
      entry {
         print.ln("Failed to configure a component!")
      }
   }

   final state shutdown {
      entry {
         // Cleanup B group
         HMI1.stop() ; Controller1.stop();
         HMI1.cleanup() ; Controller1.cleanup();
      }
   }
}

RootMachine SetupShutdown deployApp;
deployApp.activate()
deployApp.start()
State machines are explained in detail in the Scripting Chapter of the Component Builder's Manual.
In this exercise we connect an output port of one component with an input port of another component, where both components are distributed using the CORBA deployer application, deployer-corba.
This is your first XML file, for component A. We state that it runs as a Server and that it registers its name with the Naming Service. (See also Using CORBA and the CORBA transport reference manual for setting up naming services.)
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "cpf.dtd">
<properties>
  <struct name="ComponentA" type="HMI">
    <simple name="Server" type="boolean"><value>1</value></simple>
    <simple name="UseNamingService" type="boolean"><value>1</value></simple>
  </struct>
</properties>
Save this in component-a.xml and start it with: deployer-corba -s component-a.xml
This is your second XML file, for component B. It has one port, cartesianPosition_desi, which we add to a connection named cartesianPosition_desi_conn. Next, we declare a 'proxy' to Component A created above and do the same for its port, adding it to the connection named cartesianPosition_desi_conn.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "cpf.dtd">
<properties>
  <struct name="ComponentB" type="Controller">
    <struct name="Ports" type="PropertyBag">
      <simple name="cartesianPosition_desi" type="string">
        <value>cartesianPosition_desi_conn</value></simple>
    </struct>
  </struct>

  <!-- ComponentA is looked up using the 'CORBA' naming service -->
  <struct name="ComponentA" type="CORBA">
    <!-- We add ports of A to the connection -->
    <struct name="Ports" type="PropertyBag">
      <simple name="cartesianPosition" type="string">
        <value>cartesianPosition_desi_conn</value></simple>
    </struct>
  </struct>
</properties>
Save this file as component-b.xml and start it with deployer-corba -s component-b.xml
When component-b.xml is started, the port connections will be created. When ComponentA exits and re-starts, ComponentB will not notice this, and you'll need to restart the component-b xml file as well. Use a streaming based protocol (ROS, POSIX MQueue) in case you want to be more robust against such situations.
You can also form the connections in a third xml file, and make both components servers like this:
Starting ComponentA:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "cpf.dtd">
<properties>
  <struct name="ComponentA" type="HMI">
    <simple name="Server" type="boolean"><value>1</value></simple>
    <simple name="UseNamingService" type="boolean"><value>1</value></simple>
  </struct>
</properties>
Save this in component-a.xml and start it with: cdeployer -s component-a.xml
Starting ComponentB:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "cpf.dtd">
<properties>
  <struct name="ComponentB" type="Controller">
    <simple name="Server" type="boolean"><value>1</value></simple>
    <simple name="UseNamingService" type="boolean"><value>1</value></simple>
  </struct>
</properties>
Save this in component-b.xml and start it with: cdeployer -s component-b.xml
Creating two proxies, and the connection:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "cpf.dtd">
<properties>
  <!-- ComponentA is looked up using the 'CORBA' naming service -->
  <struct name="ComponentA" type="CORBA">
    <!-- We add ports of A to the connection -->
    <struct name="Ports" type="PropertyBag">
      <simple name="cartesianPosition" type="string">
        <value>cartesianPosition_desi_conn</value></simple>
    </struct>
  </struct>

  <!-- ComponentB is looked up using the 'CORBA' naming service -->
  <struct name="ComponentB" type="CORBA">
    <!-- We add ports of B to the connection -->
    <struct name="Ports" type="PropertyBag">
      <simple name="cartesianPosition_desi" type="string">
        <value>cartesianPosition_desi_conn</value></simple>
    </struct>
  </struct>
</properties>
Save this in connect-components.xml and start it with: deployer-corba -s connect-components.xml
See deployer and CORBA related Toolchain Reference Manuals.
These instructions are meant for the Orocos Toolchain version 2.4.0 or later.
mkdir ~/training
export ROS_PACKAGE_PATH=~/training:$ROS_PACKAGE_PATH
sudo apt-get install python-setuptools
sudo easy_install -U rosinstall
rosinstall ~/training orocos_exercises.rosinstall /opt/ros/electric
source ~/training/setup.bash
rosdep install youbot_common
rosdep install rFSM
rosmake youbot_common rtt_dot_service rttlua_completion
useOrocos(){
    source $HOME/training/setup.bash;
    source $HOME/training/setup.sh;
    source /opt/ros/electric/stacks/orocos_toolchain/env.sh;
    setLUA;
}

setLUA(){
    if [ "x$LUA_PATH" == "x" ]; then LUA_PATH=";;"; fi
    if [ "x$LUA_CPATH" == "x" ]; then LUA_CPATH=";;"; fi
    export LUA_PATH="$LUA_PATH;`rospack find rFSM`/?.lua"
    export LUA_PATH="$LUA_PATH;`rospack find ocl`/lua/modules/?.lua"
    export LUA_PATH="$LUA_PATH;`rospack find rttlua_completion`/?.lua"
    export LUA_CPATH="$LUA_CPATH;`rospack find rttlua_completion`/?.so"
    export PATH="$PATH:`rosstack find orocos_toolchain`/install/bin"
}

useOrocos
roscd hello-1-task-execution
make
rosrun ocl deployer-gnulinux -s start.ops
var double a
a=1.1
var float64[] b(2)
b[0]=4.4
find_package(OROCOS-RTT REQUIRED rtt-marshalling)
# Defines: ${OROCOS-RTT_RTT-MARSHALLING_LIBRARY} and ${OROCOS-RTT_RTT-MARSHALLING_FOUND}
pre-2.3.2: You may only call find_package(OROCOS-RTT ... ) once. Next calls to this macro will return immediately, so you need to specify all plugins up-front. RTT versions from 2.3.2 on don't have this limitation.
After find_package has found the RTT and its plugins, you must explicitly use the created CMake variables for them to take effect. This typically looks like:
# Link all targets AFTER THIS LINE with the 'rtt-scripting' COMPONENT:
if ( OROCOS-RTT_RTT-SCRIPTING_FOUND )
  link_libraries( ${OROCOS-RTT_RTT-SCRIPTING_LIBRARY} )
else( OROCOS-RTT_RTT-SCRIPTING_FOUND )
  message(SEND_ERROR "'rtt-scripting' not found !")
endif( OROCOS-RTT_RTT-SCRIPTING_FOUND )

# now define your components, libraries etc...
# ...

# Preferred way to link instead of the above method:
target_link_libraries( mycomponent ${OROCOS-RTT_RTT-SCRIPTING_LIBRARY})
Or for linking with the standard provided CORBA transport:
# Link all targets AFTER THIS LINE with the CORBA transport (detected by default!):
if ( OROCOS-RTT_CORBA_FOUND )
  link_libraries( ${OROCOS-RTT_CORBA_LIBRARIES} )
else( OROCOS-RTT_CORBA_FOUND )
  message(SEND_ERROR "'CORBA' transport not found !")
endif( OROCOS-RTT_CORBA_FOUND )

# now define your components, libraries etc...
# ...

# Preferred way to link instead of the above method:
target_link_libraries( mycomponent ${OROCOS-RTT_CORBA_LIBRARIES})
Orocos has a system which lets you specify which packages you want to use for including headers and linking with their libraries. Orocos will always get these flags from a pkg-config .pc file, so in order to use this system, check that the package you want to depend on provides such a .pc file.
If the package or library you want to use has a .pc file, you can directly use this macro:
# The CORBA transport provides a .pc file 'orocos-rtt-corba-<target>.pc':
orocos_use_package( orocos-rtt-corba )

# Link with the OCL Deployment component:
orocos_use_package( ocl-deployment )

# now define your components, libraries etc...
This macro has a similar effect to putting the dependency in your manifest.xml file: it sets the include paths and links your libraries, provided OROCOS_NO_AUTO_LINKING is not defined in CMake (the default). Some packages (like OCL) define multiple .pc files; in that case you can put the ocl dependency in the manifest.xml file and use orocos_use_package() to select a specific ocl .pc file.
If the argument to orocos_use_package() is a real package, it is advised to also put the dependency in the manifest.xml file, so that the build system can use that information for dependency tracking. If it is a library that is part of a package (here, CORBA is a sub-library of the 'rtt' package), put rtt as a dependency in the manifest.xml file and pull in orocos-rtt-corba with the orocos_use_package macro, as shown above.
##################################################################################
#
# CMake package configuration file for the OROCOS-RTT package.
# This script imports targets and sets up the variables needed to use the package.
# In case this file is installed in a nonstandard location, its location can be
# specified using the OROCOS-RTT_DIR cache
# entry.
#
# find_package COMPONENTS represent OROCOS-RTT plugins such as scripting,
# marshalling or corba-transport.
# The default search path for them is:
#  /path/to/OROCOS-RTTinstallation/lib/orocos/plugins
#  /path/to/OROCOS-RTTinstallation/lib/orocos/types
#
# For this script to find user-defined OROCOS-RTT plugins, the RTT_COMPONENT_PATH
# environment variable should be appropriately set. E.g., if the plugin is located
# at /path/to/plugins/libfoo-plugin.so, then add /path/to to RTT_COMPONENT_PATH
#
# This script sets the following variables:
#  OROCOS-RTT_FOUND: Boolean that indicates if OROCOS-RTT was found
#  OROCOS-RTT_INCLUDE_DIRS: Paths to the necessary header files
#  OROCOS-RTT_LIBRARIES: Libraries to link against to use OROCOS-RTT
#  OROCOS-RTT_DEFINITIONS: Definitions to use when compiling code that uses OROCOS-RTT
#
#  OROCOS-RTT_PATH: Path of the RTT installation directory (its CMAKE_INSTALL_PREFIX).
#  OROCOS-RTT_COMPONENT_PATH: The component path of the installation
#                             <prefix>/lib/orocos + RTT_COMPONENT_PATH
#  OROCOS-RTT_PLUGIN_PATH: OROCOS-RTT_PLUGINS_PATH + OROCOS-RTT_TYPES_PATH
#  OROCOS-RTT_PLUGINS_PATH: The plugins path of the installation
#                           <prefix>/lib/orocos/plugins + RTT_COMPONENT_PATH * /plugins
#  OROCOS-RTT_TYPES_PATH: The types path of the installation
#                         <prefix>/lib/orocos/types + RTT_COMPONENT_PATH * /types
#
#  OROCOS-RTT_CORBA_FOUND: Defined if corba transport support is available
#  OROCOS-RTT_CORBA_LIBRARIES: Libraries to link against to use the corba transport
#
#  OROCOS-RTT_MQUEUE_FOUND: Defined if mqueue transport support is available
#  OROCOS-RTT_MQUEUE_LIBRARIES: Libraries to link against to use the mqueue transport
#
#  OROCOS-RTT_VERSION: Package version
#  OROCOS-RTT_VERSION_MAJOR: Package major version
#  OROCOS-RTT_VERSION_MINOR: Package minor version
#  OROCOS-RTT_VERSION_PATCH: Package patch version
#
#  OROCOS-RTT_USE_FILE_PATH: Path to package use file, so it can be included like so
#                            include(${OROCOS-RTT_USE_FILE_PATH}/UseOROCOS-RTT.cmake)
#  OROCOS-RTT_USE_FILE: Allows you to write: include( ${OROCOS-RTT_USE_FILE} )
#
# This script additionally sets variables for each requested
# find_package COMPONENTS (OROCOS-RTT plugins).
# For example, for the rtt-scripting plugin this would be:
#  OROCOS-RTT_RTT-SCRIPTING_FOUND: Boolean that indicates if the component was found
#  OROCOS-RTT_RTT-SCRIPTING_LIBRARY: Libraries to link against to use this component
#                                    (Notice singular _LIBRARY suffix !)
#
# Note for advanced users: Apart from the OROCOS-RTT_*_LIBRARIES variables,
# non-COMPONENTS targets can be accessed by their imported name, e.g.,
# target_link_libraries(bar @IMPORTED_TARGET_PREFIX@orocos-rtt-gnulinux_dynamic).
# This of course requires knowing the name of the desired target, which is why using
# the OROCOS-RTT_*_LIBRARIES variables is recommended.
#
# Example usage:
#  find_package(OROCOS-RTT 2.0.5 EXACT REQUIRED rtt-scripting foo)
#  # Defines OROCOS-RTT_RTT-SCRIPTING_*
#  find_package(OROCOS-RTT QUIET COMPONENTS rtt-transport-mqueue foo)
#  # Defines OROCOS-RTT_RTT-TRANSPORT-MQUEUE_*
#
##################################################################################
orocreate-pkg example
You may remove most of the code/statements that you don't use. Only the most common CMake macros are left uncommented, which tells you which ones you will almost certainly need when building a component:
#
# The find_package macro for Orocos-RTT works best with
# cmake >= 2.6.3
#
cmake_minimum_required(VERSION 2.6.3)

#
# This creates a standard cmake project. You may extend this file with
# any cmake macro you see fit.
#
project(example)

# Set the CMAKE_PREFIX_PATH in case you're not using Orocos through ROS
# for helping these find commands find RTT.
find_package(OROCOS-RTT REQUIRED ${RTT_HINTS})

# Defines the orocos_* cmake macros. See that file for additional
# documentation.
include(${OROCOS-RTT_USE_FILE_PATH}/UseOROCOS-RTT.cmake)

#
# Components, types and plugins.
#
# The CMake 'target' names are identical to the first argument of the
# macros below, except for orocos_typegen_headers, where the target is fully
# controlled by generated code of 'typegen'.
#

# Creates a component library libexample-<target>.so
# and installs in the directory lib/orocos/example/
#
orocos_component(example example-component.hpp example-component.cpp) # ...you may add multiple source files
#
# You may add multiple orocos_component statements.

#
# Building a typekit (recommended):
#
# Creates a typekit library libexample-types-<target>.so
# and installs in the directory lib/orocos/example/types/
#
#orocos_typegen_headers(example-types.hpp) # ...you may add multiple header files
#
# You may only have *ONE* orocos_typegen_headers statement !

#
# Building a normal library (optional):
#
# Creates a library libsupport-<target>.so and installs it in
# lib/
#
#orocos_library(support support.cpp) # ...you may add multiple source files
#
# You may add multiple orocos_library statements.

#
# Building a Plugin or Service (optional):
#
# Creates a plugin library libexample-service-<target>.so or libexample-plugin-<target>.so
# and installs in the directory lib/orocos/example/plugins/
#
# Be aware that a plugin may only have the loadRTTPlugin() function once defined in a .cpp file.
# This function is defined by the plugin and service CPP macros.
#
#orocos_service(example-service example-service.cpp) # ...only one service per library !
#orocos_plugin(example-plugin example-plugin.cpp) # ...only one plugin function per library !
#
# You may add multiple orocos_plugin/orocos_service statements.

#
# Additional headers (not in typekit):
#
# Installs in the include/orocos/example/ directory
#
orocos_install_headers( example-component.hpp ) # ...you may add multiple header files
#
# You may add multiple orocos_install_headers statements.

#
# Generates and installs our package. Must be the last statement such
# that it can pick up all above settings.
#
orocos_generate_package()
This page documents both basic and advanced use of the RTT Lua bindings by example. More formal API documentation is available here.
As of Orocos Toolchain 2.6, the deployment component launched by rttlua has been renamed from deployer to Deployer. This removes the differences between the classical deployer and rttlua and facilitates portable deployment scripts. This page has been updated to use the new, uppercase name. If you are using an Orocos Toolchain version prior to 2.6, use the lowercase name "deployer" instead.
Lua is a simple, small and efficient scripting language. The Lua RTT bindings provide access to most of the RTT API from the Lua language. Use-cases are:
To this end RTT-Lua consists of three parts: the rttlua shell (REPL), the OCL::LuaComponent and the Lua service (plugin).
Most information here is valid for all three approaches; if not, this is mentioned explicitly. The listings are shown as interactively entered into the rttlua REPL (read-eval-print loop), but could just as well be stored in a script file.
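For instance, a few of the interactive statements shown on this page could be stored in a small script (a hypothetical example with a made-up file name) and be passed to rttlua on the command line:

-- hello.lua: the same statements as typed into the REPL, stored in a file
-- run it with:  rttlua-gnulinux hello.lua
require("rttlib")
tc = rtt.getTC()
print(tc:getName())   -- prints the name of the TaskContext that rttlua creates for us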
Currently RTT-Lua lives in OCL. It is enabled by default, but will only be built if the Lua 5.1 dependency (Debian: liblua5.1-0-dev, liblua5.1-0, lua5.1) is found.
CMake options:
BUILD_LUA_RTT: enable this to build the rttlua shell, the Lua component, and the Lua plugin.
BUILD_LUA_RTT_DYNAMIC_MODULES: (EXPERIMENTAL) build RTT and the deployer as pure Lua plugins. Not recommended unless you know what you are doing.
BUILD_LUA_TESTCOMP: build a simple test component that is used for testing the bindings. Not required for normal operation.

rttlib.lua is a Lua module which is not strictly necessary, but it is highly recommended to load it, as it adds various syntactic shortcuts and pretty printing (many examples on this page will not work without it!). The easiest way to load it is to set up the LUA_PATH variable:
export LUA_PATH=";;$HOME/src/git/orocos/ocl/lua/modules/?.lua"
If you are an orocos_toolchain_ros user and do not want to hardcode the path like this, you can source the following script in your .bashrc:
#!/bin/bash
RTTLUA_MODULES=`rospack find ocl`/lua/modules/?.lua
if [ "x$LUA_PATH" == "x" ]; then
   LUA_PATH=";"
fi
export LUA_PATH="$LUA_PATH;$RTTLUA_MODULES"
$ ./rttlua-gnulinux
OROCOS RTTLua 1.0-beta3 / Lua 5.1.4 (gnulinux)
>
or for orocos_toolchain_ros users:
$ rosrun ocl rttlua-gnulinux
OROCOS RTTLua 1.0-beta3 / Lua 5.1.4 (gnulinux)
>
Now we have a Lua REPL that is enhanced with RTT-specific functionality. In the following, RTT-Lua code is indicated by a ">" prompt, while shell commands are shown with the typical "$".

Before doing anything it is recommended to load rttlib. Like any Lua module, this can be done with the require statement. For example:
$ ./rttlua-gnulinux
OROCOS RTTLua 1.0-beta3 / Lua 5.1.4 (gnulinux)
> require("rttlib")
>
As it is annoying to have to type this each time, the loading can be automated by putting it in the ~/.rttlua dot file. This (Lua) file is executed on startup of rttlua:
require("rttlib") rttlib.color=true
The (optional) last line enables colors.
rttlib.stat(): print information about component instances and their state.

> rttlib.stat()
Name                State               isActive  Period
lua                 PreOperational      true      0
Deployer            Stopped             true      0

rttlib.info(): print information about available components, types and services.

> rttlib.info()
services:   marshalling scripting print LuaTLSF Lua os
typekits:   rtt-corba-types rtt-mqueue-transport rtt-types OCLTypekit
types:      ConnPolicy FlowStatus PropertyBag SendHandle SendStatus TaskContext array bool bools char double float int ints rt_string string strings uint void
comp types: OCL::ConsoleReporting OCL::FileReporting OCL::HMIConsoleOutput OCL::HelloWorld OCL::LuaComponent OCL::LuaTLSFComponent OCL::TcpReporting ...
First, get hold of the current TaskContext:
> tc = rtt.getTC()
The code above calls the getTC() function, which returns the current TaskContext and stores it in the variable 'tc'. To show the interface, just write =tc. In the REPL the equal sign is a shortcut for 'return', which in turn causes the variable to be printed. (This works for displaying any variable.)
> =tc
TaskContext: lua
   state: PreOperational
   isActive: true
   getPeriod: 0
   peers: Deployer
   ports:
   properties:
     lua_string (string) =  // string of lua code to be executed during configureHook
     lua_file (string) =  // file with lua program to be executed during configuration
   operations:
     bool exec_file(string const& filename) // load (and run) the given lua script
     bool exec_str(string const& lua-string) // evaluate the given string in the lua environment
Since rttlua beta5, the above does not print the standard TaskContext operations anymore. To print these, use tc:show().
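The "=" shortcut works for plain Lua variables too, for example:

> x = 42
> =x
42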
(Yes, you really want this)
Get it here. Checkout the README for the (simple) compilation and setup.
rttlua does not offer persistent history like the taskbrowser does. If you want it, you can use rlwrap to wrap rttlua as follows:
alias rttlua='rlwrap -a -r -H ~/.rttlua-history rttlua-gnulinux'
If you run 'rttlua' it should have persistent history.
Most modern editors provide basic syntax highlighting for Lua code.
The following shows the basic API; see the section Automatically creating and cleaning up component interfaces for a more convenient way to add/remove ports and properties.
> pin = rtt.InputPort("string")
> pout = rtt.OutputPort("string")
> =pin
 [in, string, unconn, local] //
> =pout
 [out, string, unconn, local] //
Both In- and OutputPorts optionally take a second string argument (name) and third argument (description).
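For example (an illustrative sketch; the port names and descriptions are made up):

> pin2  = rtt.InputPort("string", "inport2", "incoming string data")
> pout2 = rtt.OutputPort("string", "outport2", "outgoing string data")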
> tc:addPort(pin)
> tc:addPort(pout, "outport1", "string outport that contains latest X")
> =tc   -- print tc interface to confirm it is there.
To connect two ports, they don't have to be added to the TaskContext:
> =pin:connect(pout)
true
> return pin
 [in, string, conn, local] //
> return pout
 [out, string, conn, local] //
>
The rttlua-* REPL automatically creates a deployment component that is a peer of the lua taskcontext:
> tc = rtt.getTC()
> depl = tc:getPeer("Deployer")
> cp=rtt.Variable("ConnPolicy")
> =cp
{data_size=0,type="DATA",name_id="",init=false,pull=false,transport=0,lock_policy="LOCK_FREE",size=0}
> depl:connect("compA.port1","compB.port2", cp)
> rttlib.info()
services:   marshalling, scripting, print, os, Lua
typekits:   rtt-types, rtt-mqueue-transport, OCLTypekit
types:      ConnPolicy, FlowStatus, PropertyBag, SendHandle, SendStatus, TaskContext, array, bool, bools, char, double, float, int, ints, rt_string, string, strings, uint, void
comp types: OCL::ConsoleReporting, OCL::FileReporting, OCL::HMIConsoleOutput, OCL::HelloWorld, OCL::LuaComponent, OCL::TcpReporting, OCL::TimerComponent, OCL::logging::Appender, OCL::logging::FileAppender, OCL::logging::LoggingService, OCL::logging::OstreamAppender, TaskContext
> cp = rtt.Variable("ConnPolicy")
> =cp
{data_size=0,type="DATA",name_id="",init=false,pull=false,transport="default",lock_policy="LOCK_FREE",size=0}
> cp.data_size = 4711
> print(cp.data_size)
4711
Printing the available constants:
> =rtt.globals
{SendNotReady=SendNotReady,LOCK_FREE=2,NewData=NewData,SendFailure=SendFailure,SendSuccess=SendSuccess,NoData=NoData,UNSYNC=0,LOCKED=1,OldData=OldData,BUFFER=1,DATA=0}
>
Accessing constants - just index!
> =rtt.globals.LOCK_FREE
2
It is cumbersome to initialize complex types with many subfields:
> tc = rtt.getTC()
> depl = tc:getPeer("Deployer")
> depl:import("kdl_typekit")
> t=rtt.Variable("KDL.Frame")
> =t
{M={Z_y=0,Y_y=1,X_y=0,Y_z=0,Z_z=1,Y_x=0,Z_x=0,X_x=1,X_z=0},p={Y=0,X=0,Z=0}}
> t.M.X_x=3
> t.M.Y_x=2
> t.M.Z_x=2.3
...
To avoid this, use the fromtab() method:
> t:fromtab({M={Z_y=1,Y_y=2,X_y=3,Y_z=4,Z_z=5,Y_x=6,Z_x=7,X_x=8,X_z=9},p={Y=3,X=3,Z=3}})
or even shorter using the table-call syntax of Lua,
> t:fromtab{M={Z_y=1,Y_y=2,X_y=3,Y_z=4,Z_z=5,Y_x=6,Z_x=7,X_x=8,X_z=9},p={Y=3,X=3,Z=3}}
When you create an RTT array type, its initial length will be zero. You must set the length of the array before you can assign elements to it (starting from toolchain-2.5, fromtab will do this automatically):
> ref=rtt.Variable("array") > ref:resize(3) > ref:fromtab{1,1,10} > print(ref) -- prints {1,1,10} ...
> p1=rtt.Property("double", "p-gain", "Proportional controller gain")
(Note: the second and third argument (name and description) are optional and can also be set when adding the property to a TaskContext)
> tc=rtt.getTC()
> tc:addProperty(p1)
> =tc   -- check it is there...
> tc=rtt.getTC()
> pgain = tc:getProperty("pgain")
> =pgain   -- will print it
> p1:set(3.14)
> =p1   -- a property can be printed!
p-gain (double) = 3.14 // Proportional controller gain
In particular, the following will not work:
> p1=3.14
Lua works with references! This would assign the variable p1 a plain numeric value of 3.14, and the reference to the Property would be lost.
> print("the value of " .. p1:info().name .. " is: " .. p1:get()) the value of p-gain is: 3.14
Assume a property of type KDL::Frame. Similarly to Variables, the subfields can be accessed using the dot syntax:
> d = tc:getPeer("Deployer") > d:import('kdl_typekit') > f=rtt.Property('KDL.Frame') > =f (KDL.Frame) = {M={Z_y=0,Y_y=1,X_y=0,Y_z=0,Z_z=1,Y_x=0,Z_x=0,X_x=1,X_z=0},p={Y=0,X=0,Z=0}} // > f.M.Y_y=3 > =f.M.Y_y 3 > f.p.Y=1 > =f (KDL.Frame) = {M={Z_y=0,Y_y=3,X_y=0,Y_z=0,Z_z=1,Y_x=0,Z_x=0,X_x=1,X_z=0},p={Y=1,X=0,Z=0}} // >
Like Variables, Properties feature a fromtab method to initialize a Property from values in a Lua table. See the section RTT Types and Typekits - Convenient initialization of multi-field types for details.
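For instance, continuing with the KDL.Frame property f from above (an illustrative sketch; the values are made up):

> f:fromtab{M={X_x=1,Y_y=1,Z_z=1,X_y=0,X_z=0,Y_x=0,Y_z=0,Z_x=0,Z_y=0}, p={X=0.1,Y=0.2,Z=0.3}}
> =f   -- prints the updated frame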
As properties are not automatically garbage collected, property memory must be managed manually:
> tc:removeProperty("p-gain") > =tc -- p-gain is gone now > p1:delete() -- delete property and free memory > =p1 -- p1 is 'dead' now. userdata: 0x186f8c8
Synchronous calling of operations from Lua:
> d = tc:getPeer("Deployer") > =d:getPeriod() 0
> d = tc:getPeer("Deployer") > op = d:getOperation("getPeriod") > =op -- can be printed! double getPeriod() // Get the configured execution period. -1.0: no thread ... > =op() -- call it 0
"Sending" Operations permits to asynchronously request an operation to be executed and collect the results at a later point in time.
> d = tc:getPeer("Deployer") > op = d:getOperation("getPeriod") > handle=op:send() -- calling it > =handle:collect() SendSuccess 0
Note:
- collect() returns multiple values: first a SendStatus string ('SendSuccess', 'SendFailure'), followed by zero or more output arguments of the operation.
- collect() blocks until the operation has been executed; collectIfDone() returns immediately (but possibly with 'SendNotReady').
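A small sketch of the non-blocking variant (the printed result is only what one would typically see; depending on timing it may still be 'SendNotReady'):

> handle = op:send()
> =handle:collectIfDone()   -- returns immediately, possibly before the operation ran
SendSuccess
0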
Can I add an Operation to a Lua component? Answer: No.
Workaround: define a new TaskContext that inherits from LuaComponent and add the Operation there. Implement the necessary glue between C++ and Lua by hand (not hard, but some manual work required).
Answer: No (but potentially it would be easy to add. Ask on the ML).
For example, to load the marshalling service in a component and then to use it to write a property (cpf) file:
> tc=rtt.getTC()
> depl=tc:getPeer("Deployer")
> depl:loadService("lua", "marshalling")   -- load the marshalling service in the lua component
true
> =tc:provides("marshalling"):writeProperties("props.cpf")
true
A second (and slightly faster) option is to get the Operation before calling it:
> -- get the writeProperties operation ...
> writeProps=tc:provides("marshalling"):getOperation("writeProperties")
> =writeProps("props.cpf")   -- and call it to write the properties to a file.
true
> depl:loadService("lua", "marshalling") -- load the marshalling service > depl:loadService("lua", "scripting") -- load the scripting service > print(tc:provides()) Service: lua Subservices: marshalling, scripting Operations: activate, cleanup, configure, error, exec_file, exec_str, getPeriod, inFatalError, inRunTimeError, isActive, isConfigured, isRunning, setPeriod, start, stop, trigger, update Ports: Service: marshalling Subservices: Operations: loadProperties, readProperties, readProperty, storeProperties, updateFile, updateProperties, writeProperties, writeProperty Ports: Service: scripting Subservices: Operations: activateStateMachine, deactivateStateMachine, eval, execute, getProgramLine, getProgramList, getProgramStatus, getProgramStatusStr, getProgramText, getStateMachineLine, getStateMachineList, getStateMachineState, getStateMachineStatus, getStateMachineStatusStr, getStateMachineText, hasProgram, hasStateMachine, inProgramError, inStateMachineError, inStateMachineState, isProgramPaused, isProgramRunning, isStateMachineActive, isStateMachinePaused, isStateMachineRunning, loadProgramText, loadPrograms, loadStateMachineText, loadStateMachines, pauseProgram, pauseStateMachine, requestStateMachineState, resetStateMachine, runScript, startProgram, startStateMachine, stepProgram, stopProgram, stopStateMachine, unloadProgram, unloadStateMachine Ports: >
The RTT Global Service is useful for loading services into your application that don't belong to a specific component. Your C++ code accesses this object by calling
RTT::internal::GlobalService::Instance();
The GlobalService object can be accessed in Lua using a call to:
gs = rtt.provides()
And allows you to load additional services into the global service:
gs:require("os") -- or: rtt.provides():require("os")
Which you can access later-on again using the rtt table:
rtt.provides("os"):argc() -- returns the number of arguments of this application rtt.provides("os"):argv() -- returns a string array of arguments of this application
-- create a periodic activity for producer: period=1, priority=0,
-- schedtype=ORO_SCHED_RT
depl:setActivity("producer", 1, 0, rtt.globals.ORO_SCHED_RT)
-- create a non-periodic activity for producer: period=0, priority=0,
-- schedtype=ORO_SCHED_RT
depl:setActivity("producer", 0, 0, rtt.globals.ORO_SCHED_RT)
depl:setMasterSlaveActivity("name_of_master_component", "name_of_slave_component")
(see also the example in section How to write a RTT-Lua component)
-- deploy_app.lua
require("rttlib")

tc = rtt.getTC()
depl = tc:getPeer("Deployer")

-- import components, requires correctly setup RTT_COMPONENT_PATH
depl:import("ocl")
-- depl:import("componentX")

-- import components, requires correctly setup ROS_PACKAGE_PATH (>=Orocos 2.7)
depl:import("rtt_ros")
rtt.provides("ros"):import("my_ros_pkg")

-- create component 'hello'
depl:loadComponent("hello", "OCL::HelloWorld")

-- get reference to new peer
hello = depl:getPeer("hello")

-- create buffered connection of size 64
cp = rtt.Variable('ConnPolicy')
cp.type=1   -- type buffered
cp.size=64  -- buffer size

depl:connect("hello.the_results", "hello.the_buffer_port", cp)

rtt.logl('Info', "Deployment complete!")
run it:
$ rttlua-gnulinux -i deploy-app.lua
or using orocos_toolchain_ros
$ rosrun ocl rttlua-gnulinux -i deploy-app.lua
Note: the -i option makes rttlua enter interactive mode (the REPL) after executing the script. Without it, rttlua would exit after finishing the script, which in this case is probably not what you want.
A Lua component is created by loading a Lua script that implements zero or more TaskContext hooks into an OCL::LuaComponent. The following RTT hooks are currently supported:
bool configureHook()
bool activateHook()
bool startHook()
void updateHook()
void stopHook()
void cleanupHook()
void errorHook()
All hooks are optional, but if implemented they must return the correct return value (unless the hook returns void, of course). It is also important to declare them as global (by not adding the local keyword); otherwise they would be garbage collected and never called.
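A minimal sketch of what that means in practice (hook bodies elided):

-- found and called by the LuaComponent: defined as a global function
function updateHook()
   -- ...
end

-- NOT called: a local function is invisible to the component and may be
-- garbage collected once the script chunk finishes
local function stopHook()
   -- ...
end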
The following code implements a simple consumer component with an event-triggered input port:
require("rttlib") tc=rtt.getTC(); -- The Lua component starts its life in PreOperational, so -- configureHook can be used to set stuff up. function configureHook() inport = rtt.InputPort("string", "inport") -- global variable! tc:addEventPort(inport) cnt = 0 return true end -- all hooks are optional! --function startHook() return true end function updateHook() local fs, data = inport:read() rtt.log("data received: " .. tostring(data) .. ", flowstatus: " .. fs) end -- Ports and properties are the only elements which are not -- automatically cleaned up. This means this must be done manually for -- long living components: function cleanupHook() tc:removePort("inport") inport:delete() end
A matching producer component is shown below:
require "rttlib" tc=rtt.getTC(); function configureHook() outport = rtt.OutputPort("string", "outport") -- global variable! tc:addPort(outport) cnt = 0 return true end function updateHook() outport:write("message number " .. cnt) cnt = cnt + 1 end function cleanupHook() tc:removePort("outport") outport:delete() end
A deployment script to deploy these two components:
require "rttlib" rtt.setLogLevel("Warning") tc=rtt.getTC() depl = tc:getPeer("Deployer") -- create LuaComponents depl:loadComponent("producer", "OCL::LuaComponent") depl:loadComponent("consumer", "OCL::LuaComponent") --... and get references to them producer = depl:getPeer("producer") consumer = depl:getPeer("consumer") -- load the Lua hooks producer:exec_file("producer.lua") consumer:exec_file("consumer.lua") -- configure the components (so ports are created) producer:configure() consumer:configure() -- connect ports depl:connect("producer.outport", "consumer.inport", rtt.Variable('ConnPolicy')) -- create activity for producer: period=1, priority=0, -- schedtype=ORO_SCHED_OTHER (1). depl:setActivity("producer", 1, 0, rtt.globals.ORO_SCHED_RT) -- raise loglevel rtt.setLogLevel("Debug") -- start components consumer:start() producer:start() -- uncomment to print interface printing (for debugging) -- print(consumer) -- print(producer) -- sleep for 5 seconds os.execute("sleep 5") -- lower loglevel again rtt.setLogLevel("Warning") producer:stop() consumer:stop()
(available from toolchain-2.5)
The function rttlib.create_if can (re-)generate a component interface from a specification, as shown below. Conversely, rttlib.tc_cleanup will remove and destroy all ports and properties again.
-- stupid example:
iface_spec = {
   ports={
      { name='inp', datatype='int', type='in+event', desc="incoming event port" },
      { name='msg', datatype='string', type='in', desc="incoming non-event messages" },
      { name='outp', datatype='int', type='out', desc="outgoing data port" },
   },

   properties={
      { name='inc', datatype='int', desc="this value is added to the incoming data each step" }
   }
}

-- this creates the interface
iface=rttlib.create_if(iface_spec)

function configureHook()
   -- it is safe to run this twice, existing ports
   -- will be ignored. Thus, running cleanup() and configure()
   -- will reconstruct the interface again.
   iface=rttlib.create_if(iface_spec)
   inc = iface.props.inc:get()
   return true
end

function startHook()
   -- ports/props can be indexed as follows:
   iface.ports.outp:write(1)
   return true
end

function updateHook()
   local fs, val
   fs, val = iface.ports.inp:read()
   if fs=='NewData' then iface.ports.outp:write(val+inc) end
end

function cleanupHook()
   -- remove all ports and properties
   rttlib.tc_cleanup()
end
In contrast to Components (which typically contain standalone functionality), Services are useful for extending the functionality of existing Components. The LuaService permits executing arbitrary Lua programs in the context of a Component.
The following dummy example loads the LuaService into a HelloWorld component and then runs a script that modifies a property:
require "rttlib" tc=rtt.getTC() d = tc:getPeer("Deployer") -- create a HelloWorld component d:loadComponent("hello", "OCL::HelloWorld") hello = d:getPeer("hello") -- load Lua service into the HelloWorld Component d:loadService("hello", "Lua") -- Execute the following Lua script (defined a multiline string) in -- the service. This dummy examples simply modifies the Property. For -- large programs it might be better tostore the program in a separate -- file and use the exec_file operation instead. proggie = [[ require("rttlib") tc=rtt.getTC() -- this is the Hello Component prop = tc:getProperty("the_property") prop:set("hullo from the lua service!") ]] prop = hello:getProperty("the_property") -- get hello.the_property print("the_property before service call:", prop) hello:provides("Lua"):exec_str(proggie) -- execute program in the service print("the_property after service call: ", prop)
More useful than running a script just once is being able to execute a function synchronously with the updateHook of the host component. This can be achieved by registering an ExecutionEngine hook (much easier than it sounds!).
The following Lua service code implements a simple monitor that tracks the currently active (TaskContext) state of the component in whose context it is running. When the state changes the new state is written to a port "tc_state", which is added to the context TC.
This code could be useful for a supervision statemachine that can then easily react to this state change by means of an event triggered port.
require "rttlib" tc=rtt.getTC() d = tc:getPeer("Deployer") -- create a HelloWorld component d:loadComponent("hello", "OCL::HelloWorld") hello = d:getPeer("hello") -- load Lua service into the HelloWorld Component d:loadService("hello", "Lua") mon_state = [[ -- service-eehook.lua require("rttlib") tc=rtt.getTC() -- this is the Hello Component last_state = "not-running" out = rtt.OutputPort("string") tc:addPort(out, "tc_state", "currently active state of TaskContext") function check_state() local cur_state = tc:getState() if cur_state ~= last_state then out:write(cur_state) last_state = cur_state end return true -- returning false will disable EEHook end -- register check_state function to be called periodically and -- enable it. Important: variables like eehook below or the -- function check_state which shall not be garbage-collected -- after the first run must be declared global (by not declaring -- them local with the local keyword) eehook=rtt.EEHook('check_state') eehook:enable() ]] -- execute the mon_state program hello:provides("Lua"):exec_str(mon_state)
Note: the -i option causes rttlua to go to interactive mode after executing the script (and not exiting afterwards).
$ rttlua-gnulinux -i service-eehook.lua
> rttlib.portstats(hello)
the_results (string) =
the_buffer_port (string) = NoData
tc_state (string) = Running
> hello:error()
> rttlib.portstats(hello)
the_results (string) =
the_buffer_port (string) = NoData
tc_state (string) = RunTimeError
>
It is often useful to validate a deployed system at runtime, however you want to avoid cluttering individual components with non-functional validation code. Here's what to do (Please also see this post on orocos-users, which inspired the following)
Use-case: check for unconnected input ports
1. Write a function to validate a single component
The following function accepts a TaskContext as an argument and checks whether it has unconnected input ports. If so, it prints an error.
function check_inport_conn(tc)
   local portnames = tc:getPortNames()
   local ret = true
   for _,pn in ipairs(portnames) do
      local p = tc:getPort(pn)
      local info = p:info()
      if info.porttype == 'in' and info.connected == false then
         rtt.logl('Error', "InputPort " .. tc:getName() .. "." .. info.name .. " is unconnected!")
         ret = false
      end
   end
   return ret
end
2. After deployment, execute the validation function on all components:
This can be done using the mappeers function.
rttlib.mappeers(check_inport_conn, depl)
The mappeers function is a special variant of map which calls the function given as the first argument on all peers reachable from a TaskContext (given as the second argument). We pass the Deployer here, which typically knows all components.
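For instance, a quick way to list every component reachable from the Deployer (an illustrative one-liner):

rttlib.mappeers(function(tc) print(tc:getName()) end, depl)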
Here's a dummy deployment example to illustrate:
require "rttlib" tc=rtt.getTC() depl=tc:getPeer("Deployer") -- define or import check_inport_conn function here -- dummy deployment, ports are left unconnected. depl:loadComponent("hello1", "OCL::HelloWorld") depl:loadComponent("hello2", "OCL::HelloWorld") rttlib.mappeers(check_inport_conn, depl)
Executing it will print:
0.155 [ ERROR  ][/home/mk/bin//rttlua-gnulinux::main()] InputPort hello1.the_buffer_port is unconnected!
0.155 [ ERROR  ][/home/mk/bin//rttlua-gnulinux::main()] InputPort hello2.the_buffer_port is unconnected!
rFSM is a fast, lightweight Statechart implementation in pure Lua. Using RTT-Lua, rFSM Statecharts can conveniently be used with RTT. The rFSM sources can be found here.
Answer: typically a Component is preferred when the functionality stands on its own and should run in its own activity; a Service is preferred when you want to extend an existing Component and run in its context. There will, undoubtedly, be exceptions!
Summary: create an OCL::LuaComponent. In configureHook, load and initialize the fsm; in updateHook, call rfsm.run(fsm).
(see the rFSM docs for general information)
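A minimal sketch of this pattern (assuming the rFSM model lives in a file called fsm.lua); the full, recommended version with event ports and logging follows below:

require "rfsm"

local fsm

function configureHook()
   fsm = rfsm.init(rfsm.load("fsm.lua"))
   return true
end

function updateHook()
   rfsm.run(fsm)
end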
The source code for this example can be found here.
It is a best practice to split the initialization (setting up the functions, peers or ports used by the fsm) and the fsm model itself into two files. This way the fsm model is kept as platform-independent, and hence reusable, as possible.
The following initialization file is executed in the newly created LuaComponent to prepare the environment for the state machine, which is loaded and initialized in configureHook.
launch_fsm.lua
require "rttlib" require "rfsm" require "rfsm_rtt" require "rfsmpp" local tc=rtt.getTC(); local fsm local fqn_out, events_in function configureHook() -- load state machine fsm = rfsm.init(rfsm.load("fsm.lua")) -- enable state entry and exit dbg output fsm.dbg=rfsmpp.gen_dbgcolor("rfsm-rtt-example", { STATE_ENTER=true, STATE_EXIT=true}, false) -- redirect rFSM output to rtt log fsm.info=function(...) rtt.logl('Info', table.concat({...}, ' ')) end fsm.warn=function(...) rtt.logl('Warning', table.concat({...}, ' ')) end fsm.err=function(...) rtt.logl('Error', table.concat({...}, ' ')) end -- the following creates a string input port, adds it as a event -- driven port to the Taskcontext. The third line generates a -- getevents function which returns all data on the current port as -- events. This function is called by the rFSM core to check for -- new events. events_in = rtt.InputPort("string") tc:addEventPort(events_in, "events", "rFSM event input port") fsm.getevents = rfsm_rtt.gen_read_str_events(events_in) -- optional: create a string port to which the currently active -- state of the FSM will be written. gen_write_fqn generates a -- function suitable to be added to the rFSM step hook to do this. fqn_out = rtt.OutputPort("string") tc:addPort(fqn_out, "rFSM_cur_fqn", "current active rFSM state") rfsm.post_step_hook_add(fsm, rfsm_rtt.gen_write_fqn(fqn_out)) return true end function updateHook() rfsm.run(fsm) end function cleanupHook() -- cleanup the created ports. rttlib.tc_cleanup() end
A dummy statemachine stored in the fsm.lua file:
return rfsm.state {
   ping = rfsm.state {
      entry=function() print("in ping entry") end,
   },

   pong = rfsm.state {
      entry=function() print("in pong entry") end,
   },

   rfsm.trans {src="initial", tgt="ping" },
   rfsm.trans {src="ping", tgt="pong", events={"e_pong"}},
   rfsm.trans {src="pong", tgt="ping", events={"e_ping"}},
}
Option A: Running the rFSM example with a Lua deployment script
deploy.lua
-- alternate lua deploy script
require "rttlib"

tc=rtt.getTC()
d=tc:getPeer("Deployer")

d:import("ocl")
d:loadComponent("Supervisor", "OCL::LuaComponent")
sup = d:getPeer("Supervisor")

sup:exec_file("launch_fsm.lua")
sup:configure()

cmd = rttlib.port_clone_conn(sup:getPort("events"))
Run it. cmd is an inverse (output) port which is connected to the incoming (from POV of the fsm) 'events' port of the fsm, so by writing to it we can send events:
$ rosrun ocl rttlua-gnulinux -i deploy.lua
OROCOS RTTLua 1.0-beta3 / Lua 5.1.4 (gnulinux)
INFO: created undeclared connector root.initial
> sup:start()
> in ping entry
> cmd:write("e_pong")
> in pong entry
> cmd:write("e_ping")
> in ping entry
> cmd:write("e_pong")
> in pong entry
Option B: Running the rFSM example with an Orocos deployment script
deploy.ops
import("ocl") loadComponent("Supervisor", "OCL::LuaComponent") Supervisor.exec_file("launch_fsm.lua") Supervisor.configure
After starting the supervisor we 'leave' it, so we can write to the 'events' ports:
$ rosrun ocl deployer-gnulinux -s deploy.ops
INFO: created undeclared connector root.initial
   Switched to : Deployer
  This console reader allows you to browse and manipulate TaskContexts.
  You can type in an operation, expression, create or change variables.
  (type 'help' for instructions and 'ls' for context info)
    TAB completion and HISTORY is available ('bash' like)

Deployer [S]> cd Supervisor
   TaskBrowser connects to all data ports of Supervisor
   Switched to : Supervisor
Supervisor [S]> start
 = true

Supervisor [R]> in ping entry
Supervisor [R]> leave
Watching Supervisor [R]> events.write ("e_pong")
 = (void)

Watching Supervisor [R]> in pong entry
Watching Supervisor [R]> events.write ("e_ping")
 = (void)

Watching Supervisor [R]> in ping entry
Watching Supervisor [R]>
This is basically the same as executing a function periodically in a service (see the Service example above). There is a convenience function, service_launch_rfsm in rfsm_rtt.lua, to make this easier.
The steps are:
require "rfsm_rtt" -- get reference to exec_str operation fsmfile = "fsm.lua" execstr_op = comp:provides("Lua"):getOperation("exec_str") rfsm_rtt.service_launch_rfsm(fsmfile, execstr_op, true)
The last line means the following: launch the fsm in <fsmfile> in the service identified by execstr_op; the third argument true means: create an execution engine hook so that rfsm.step is called at the component frequency. (See the generated rfsm_rtt API docs.)
Generally speaking, the most effective way of creating a new FSM from a parent one is populating the original simple states by overriding them with composite states. In this context, the parent FSM provides “empty” boxes to be filled with application-specific code.
In the following example, “daughter_fsm.lua” loads “mother_fsm.lua” and overrides a state, two transitions and a function. “daughter_fsm.lua” is launched by a Lua Orocos component named “fsm_launcher.lua”. Deployment is done by “deploy.ops”. Instructions on how to run the example follow.
mother_fsm.lua
-- mother_fsm.lua is a basic fsm with 2 simple states
return rfsm.state {
   StateA = rfsm.state {
      entry=function() print("in state A") end,
   },

   StateB = rfsm.state {
      entry=function() print("in state B") end,
   },

   -- consistent transition naming makes overriding easier
   rfsm.trans {src="initial", tgt="StateA" },
   tr_A_B = rfsm.trans {src="StateA", tgt="StateB", events={"e_mother_A_to_B"}},
   tr_B_A = rfsm.trans {src="StateB", tgt="StateA", events={"e_mother_B_to_A"}},
}
daughter_fsm.lua
-- daughter_fsm.lua loads mother_fsm.lua
-- implementing extra states, transitions and functions
-- by adding and overriding the original ones.
require "utils"
require "rttros"

-- local variables to avoid verbose function calling
local state, trans, conn = rfsm.state, rfsm.trans, rfsm.conn

-- path to the fsm to load
local base_fsm_file = "mother_fsm.lua"

-- load the original fsm to override
local fsm_model=rfsm.load(base_fsm_file)

-- set colored outputs indicating the current state
dbg = rfsmpp.gen_dbgcolor( {STATE_ENTER=true}, false)

-- Overriding StateA
-- In "mother_fsm.lua" StateA is a simple state;
-- here we make it a composite state
fsm_model.StateA = rfsm.state {
   StateA1 = rfsm.state {
      entry=function() print("in State A1") end,
   },

   StateA2 = rfsm.state {
      entry=function() print("in State A2") end,
   },

   rfsm.transition {src="initial", tgt="StateA1"},
   tr_A1_A2 = rfsm.transition {src="StateA1", tgt="StateA2", events={"e_move_to_A2"}},
   tr_A2_A1 = rfsm.transition {src="StateA2", tgt="StateA1", events={"e_move_to_A1"}},
}

-- Overriding single transitions.
-- Note: the keys must match the ones used in mother_fsm.lua (tr_A_B, tr_B_A),
-- otherwise the transitions are added instead of overridden.
fsm_model.tr_A_B = rfsm.trans {src="StateA", tgt="StateB", events={"e_daughter_A_to_B"}}
fsm_model.tr_B_A = rfsm.trans {src="StateB", tgt="StateA", events={"e_daughter_B_to_A"}}

-- Overriding a specific function
fsm_model.StateB.entry = function() print("I am in State B in the daughter FSM") end

return fsm_model
fsm_launcher.lua
require "rttlib" require "rfsm" require "rfsm_rtt" require "rfsmpp" local tc=rtt.getTC(); local fsm local fqn_out, events_in function configureHook() -- load state machine fsm = rfsm.init(rfsm.load("daughter_fsm.lua")) -- enable state entry and exit dbg output fsm.dbg=rfsmpp.gen_dbgcolor("FSM loading example", { STATE_ENTER=true, STATE_EXIT=true}, false) -- redirect rFSM output to rtt log fsm.info=function(...) rtt.logl('Info', table.concat({...}, ' ')) end fsm.warn=function(...) rtt.logl('Warning', table.concat({...}, ' ')) end fsm.err=function(...) rtt.logl('Error', table.concat({...}, ' ')) end -- the following creates a string input port, adds it as a event -- driven port to the Taskcontext. The third line generates a -- getevents function which returns all data on the current port as -- events. This function is called by the rFSM core to check for -- new events. events_in = rtt.InputPort("string") tc:addEventPort(events_in, "events", "rFSM event input port") fsm.getevents = rfsm_rtt.gen_read_str_events(events_in) -- optional: create a string port to which the currently active -- state of the FSM will be written. gen_write_fqn generates a -- function suitable to be added to the rFSM step hook to do this. fqn_out = rtt.OutputPort("string") tc:addPort(fqn_out, "rFSM_cur_fqn", "current active rFSM state") rfsm.post_step_hook_add(fsm, rfsm_rtt.gen_write_fqn(fqn_out)) return true end function updateHook() rfsm.run(fsm) end function cleanupHook() -- cleanup the created ports. rttlib.tc_cleanup() end
deploy.ops
import("ocl") loadComponent("Supervisor", "OCL::LuaComponent") Supervisor.exec_file("fsm_launcher.lua") Supervisor.configure Supervisor.start
To test this example, run the Deployer:
rosrun ocl deployer-gnulinux -lerror -s deploy.ops
Then:
Deployer [S]> cd Supervisor
   TaskBrowser connects to all data ports of Supervisor
Switched to : Supervisor
Supervisor [R]> leave
Watching Supervisor [R]> events.write ("e_move_to_A2")
FSM loading example: STATE_EXIT root.StateA.StateA1
in State A2
FSM loading example: STATE_ENTER root.StateA.StateA2
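To also exercise the transitions overridden in daughter_fsm.lua, the daughter's events (not the mother's) must be written. The following is only a sketch based on the FSM definition above, not captured output; the exact debug lines printed depend on the dbg settings:

Watching Supervisor [R]> events.write ("e_daughter_A_to_B")
I am in State B in the daughter FSM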
A Coordinator often needs to interact with many or all other components in its vicinity. To avoid having to write peer1 = depl:getPeer("peer1") all over, you can use the following function to generate a table of peers which are reachable from a certain component (commonly the deployer):
peertab = rttlib.mappeers(function (tc) return tc end, depl)
Assuming the Deployer has two peers, "robot" and "controller", they can be accessed as follows:
print(peertab.robot) -- or peertab.controller:configure()
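For instance, a minimal sketch (assuming the peertab generated above) that loops over all reachable peers and configures them:

-- iterate over the peer table generated by rttlib.mappeers above and
-- configure every reachable peer (a sketch; error handling omitted)
for name, peer in pairs(peertab) do
   print("configuring " .. name)
   peer:configure()
end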
> cp=rtt.Variable("ConnPolicy") > cp.transport=3 -- 3 is ROS > cp.name_id="/l_cart_twist/command" -- topic name > depl:stream("CompX.portY", cp)
or with a sweet one-liner (thanks to Ruben!):
> depl:stream("CompX.portY", rtt.provides("ros"):topic("/l_cart_twist/command"))
This is sometimes useful for loading scripts etc. that are located in different packages.
The rttros.lua file collects some basic but useful stuff for interacting with ROS. This one is "borrowed" from the excellent roslua:
> require "rttros" > =rttros.find_rospack("geometry_msgs") /home/mk/src/ros/unstable/common_msgs/geometry_msgs >
Lua has to work with two type systems: its own and the RTT type system. To make this as smooth as possible, the basic RTT types are automatically converted to their corresponding Lua types as shown in the table below:
| RTT    | Lua     |
|--------|---------|
| bool   | boolean |
| float  | number  |
| double | number  |
| uint   | number  |
| int    | number  |
| char   | string  |
| string | string  |
| void   | nil     |
This conversion is done in both directions: basic values read from ports or basic return values of operations are converted to Lua; vice versa, if an operation is called with basic Lua values, these will automatically be converted to the corresponding RTT types.
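For illustration, a small sketch in an rttlua session showing the conversion in both directions (the ports are created standalone here, just as in the examples above):

-- writing: a plain Lua number is converted to an RTT double automatically
p = rtt.OutputPort("double")
p:write(1.5)

-- reading: an InputPort of RTT type 'string' yields a plain Lua string
-- (together with the flow status) once data arrives
ip = rtt.InputPort("string")
fs, val = ip:read()   -- fs: flow status, val: Lua string (or nil while no data)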
In short: write a function which accepts a Lua table representation of your data type and returns either a table or a string. Assign it to rttlib.var_pp.mytype, where mytype is the value returned by the var:getType() method. That's all!
Quick example: the ConnPolicy type (this is just an example; it has already been done for this type).
The out-of-the-box printing of a ConnPolicy will look as follows:
./rttlua-gnulinux
Orocos RTTLua 1.0-beta3 (gnulinux)
> return rtt.Variable("ConnPolicy")
{data_size=0,type=0,name_id="",init=false,pull=false,transport=0,lock_policy=2,size=0}
This is not too bad, but we would like to display the string representation of the C++ enums type and lock_policy. So we must write a function that returns a table...
function ConnPolicy2tab(cp)
   if cp.type == 0 then cp.type = "DATA"
   elseif cp.type == 1 then cp.type = "BUFFER"
   else cp.type = tostring(cp.type) .. " (invalid!)" end

   if cp.lock_policy == 0 then cp.lock_policy = "UNSYNC"
   elseif cp.lock_policy == 1 then cp.lock_policy = "LOCKED"
   elseif cp.lock_policy == 2 then cp.lock_policy = "LOCK_FREE"
   else cp.lock_policy = tostring(cp.lock_policy) .. " (invalid!)" end

   return cp
end
and add it to the rttlib.var_pp table of Variable formatters as follows:
rttlib.var_pp.ConnPolicy = ConnPolicy2tab
Now printing a ConnPolicy again calls our function and prints the desired fields:
> return rtt.Variable("ConnPolicy")
{data_size=0,type="DATA",name_id="",init=false,pull=false,transport=0,lock_policy="LOCK_FREE",size=0}
>
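As mentioned above, a formatter may also return a string instead of a table. A minimal sketch of such a variant (the compact one-line format is just an illustration):

-- alternative formatter that returns a string: the ConnPolicy is then
-- printed as a single compact line instead of a table
function ConnPolicy2str(cp)
   return string.format("ConnPolicy(type=%s, lock_policy=%s, transport=%s)",
                        tostring(cp.type), tostring(cp.lock_policy), tostring(cp.transport))
end

rttlib.var_pp.ConnPolicy = ConnPolicy2str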
If you are used to managing your application with the classic OCL TaskBrowser, or if you want your application to be connected via CORBA, you may use Lua only for deployment and continue to use your former deployer. To do so, you have to load the Lua service into your favorite deployer (deployer, cdeployer, deployer-corba, ...) and then call your deployment script.
Example: launch your preferred deployer:
cdeployer -s loadLua.ops
with loadLua.ops:
//load the lua service
loadService("Deployer","Lua")
//execute your deployment file
Lua.exec_file("yourLuaDeploymentFile.lua")
and with yourLuaDeploymentFile.lua containing the kind of code described in this Cookbook, such as the one in the paragraph "How to write a deployment script".
$ <fsm_install_dir>/tools/rfsm-viz -f <your_fsm_file>.lua
options:
See here: https://gist.github.com/3957702 (thanks to Ruben).
Answer: everything besides Ports and Properties. So if you have Lua components/services which are deleted and recreated, it is advisable to clean up properly. This means:
portX:delete()
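For older toolchains, a minimal sketch of doing this manually in cleanupHook, using the port variables created in the launch_fsm.lua example above (any Properties you created are treated the same way):

function cleanupHook()
   -- ports (and properties) are not garbage collected, so delete them explicitly
   events_in:delete()
   fqn_out:delete()
end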
Update for toolchain-2.5: the utility function rttlib.tc_cleanup() will do this for you.
Please ask questions related to RTT Lua on the orocos-users mailing list.
Lua-specific links
The RTT Lua bindings are licensed under the same license as the OROCOS RTT.
The Orocos 1.x releases are still maintained but no longer recommended for new applications.
Look here for information on
This page explains how to install the Orocos Toolchain from the public repositories using a script. ROS-users might want to take a look at the orocos_toolchain stack and the rtt_ros_integration stack.
ruby --version
sh bootstrap.sh
This installs the toolchain-2.6 branch (latest fixes, stable). Summarized:
cd $HOME
mkdir orocos
cd orocos
mkdir orocos-toolchain
cd orocos-toolchain
wget -O bootstrap-2.6.sh http://gitorious.org/orocos-toolchain/build/raw/toolchain-2.6:bootstrap.sh
sh bootstrap-2.6.sh
source env.sh
Tweaking build and install options can be done by modifying autoproj/config.yml. You must read the README and the Autoproj Manual in order to understand how to configure autoproj. See also the very short introduction on Using Autoproj.
When the script finishes, try some Orocos toolchain commands (installed by default in 'install/bin'):
typegen
deployer-gnulinux
ctaskbrowser
After some time, you can get updates by going into the root folder and doing:
# Updates to latest fixes of release branch:
autoproj update
# Builds the toolchain:
autoproj build
You might have to reload the env.sh script after that as well. Simply open a new console. See also Using Autoproj.
Download the archive from the toolchain homepage. Unpack it, it will create an orocos-toolchain-<version> directory. Next do:
cd $HOME
mkdir orocos
cd orocos
tar -xjvf /path/to/orocos-toolchain-<version>.tar.bz2
cd orocos-toolchain-<version>
./bootstrap_toolchain
source ./env.sh
autoproj build
Take a look at the Getting Started page for the most important documents.
Important changes
Online API resources
Cheat sheets
All manuals
autoproj update
autoproj build
autoproj switch-config branch=toolchain-2.3
autoproj update
autoproj build
You may replace branch=toolchain-2.3 with any branch name, going forward or backward in releases. We have: master, stable, toolchain-2....
If you'd like to reconfigure some of the package options, you can do so by writing
autoproj update --reconfigure
autoproj build
Warning: this will erase your current configuration (i.e. the CMake configuration) in case you had modified it manually!
A comprehensive autoproj manual can be found here.