Simulator evolution

Current state

The simulator is currently a rather unique tool for a robot team. It can be used to validate the integration of several components without needing the final hardware. It runs the actual code which will run on the robot, via a scheduler, and displays the result on a GUI.

Still, it lacks two major features:

  • the test aspect: things like collisions cannot be verified exactly,
  • the automated aspect: test cases can only be simple and must be verified by a human.

What's next?

So the objective of the rework is to make these two features possible.

These two features will be enabled through a major rework of the existing code and the implementation of several new items, such as a test case framework or test helpers (for example, the ability to script the position of obstacles).

The test cases will be implemented in Python for practical reasons (the rest of the code is also in Python). The implementation will be eased by the use of reusable components and a modular architecture (the modular aspect is already in the project DNA, with things like Mex). This work must be usable next year with a minimal amount of work.

Things to implement

Milestones:

  • differentiation of the simulation and graphical layers,
  • abstraction of the playground and obstacles (and implementation of this year's board),
  • abstraction of the robot (and implementation of this year's robot),
  • collision detection,
  • addition of a test framework to enable test cases,
  • implementation of an xUnit-like framework (maybe via extension of pyunit for reusability and standardization; see the sketch below).
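
As a rough idea of what such a test case could look like, here is a minimal sketch based on pyunit (the standard unittest module); the Simulation class and its methods are hypothetical stand-ins for the future bootstrap and test helpers:

  import unittest

  class Simulation:
      """Hypothetical stand-in for the future bootstrap object."""
      def __init__(self, robot_name):
          self.obstacles = []
          self.collisions = []
      def add_obstacle(self, x, y, radius):
          # Planned test helper: script the position of an obstacle.
          self.obstacles.append((x, y, radius))
      def run(self, seconds):
          # The real object would step the scheduler for this long.
          pass

  class TestAvoidObstacle(unittest.TestCase):
      def setUp(self):
          self.sim = Simulation('robot_name')
      def test_no_collision(self):
          self.sim.add_obstacle(1500, 1050, 100)
          self.sim.run(seconds=10)
          # Check automatically what a human had to verify before.
          self.assertEqual(self.sim.collisions, [])

  if __name__ == '__main__':
      unittest.main()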

Deadline: end of week 2 of February (around the 15th).

Architecture idea

The architecture is organised in three layers.

The first one is the IO (input/output) layer. Its role is to communicate with simulated programs or a real robot. For example, an IO layer object can use the mex library to retrieve information about the actuator outputs, or to set a sensor input for a simulated program. Another one can use the serial port (real or simulated) to get the simulated program's view of the world (its computed position, for example).

The second one is the Model layer. Its role is to simulate external elements. This includes robot mechanical systems, the environment, playground elements... For example, a Model layer object can simulate the handling of playground elements inside the robot, depending on the actuator movements. Another one could simulate a distance sensor by looking up elements crossing its beam on the playground, computing the distance to intersecting elements, and returning this to the IO layer, with added noise.
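
As an illustration, the distance sensor simulation could boil down to a beam/circle intersection test; this is only a sketch which assumes round obstacles and uses made-up names:

  import math
  import random

  class RoundObstacle:
      """Hypothetical playground element, reduced to a circle."""
      def __init__(self, x, y, radius):
          self.x, self.y, self.radius = x, y, radius

  def measure_distance(ox, oy, angle, obstacles, max_range, noise=5.0):
      """Distance from (ox, oy) along angle to the nearest obstacle
      crossing the beam, or max_range if the beam hits nothing."""
      dx, dy = math.cos(angle), math.sin(angle)
      best = max_range
      for o in obstacles:
          # Project the obstacle center on the beam axis.
          t = (o.x - ox) * dx + (o.y - oy) * dy
          if t < 0:
              continue  # Obstacle behind the sensor.
          # Squared distance from the center to the beam axis.
          d2 = (o.x - ox - t * dx) ** 2 + (o.y - oy - t * dy) ** 2
          if d2 > o.radius ** 2:
              continue  # The beam misses this obstacle.
          hit = t - math.sqrt(o.radius ** 2 - d2)
          if 0 <= hit < best:
              best = hit
      if best < max_range:
          # Add noise to mimic a real sensor, as described above.
          best = max(0.0, best + random.gauss(0, noise))
      return best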

The last one is the View layer. Its role is to present results to the user of the simulated system. This could be raw text output or graphical interface output; it could also analyse results to check whether a condition is verified.

All those layers communicate using the subject/observer pattern. Objects of the Model layer can be observers of objects in the IO layer. This means that if a value is changed in an IO layer object, it will notify the Model layer object. Once notified, the Model layer object can query information from the IO layer object. Communication in the other direction is done directly. If a Model layer object does a computation at a fixed interval, it does not need to be an observer: it can just request data at computation time.

Objects in the View layer can get information from the Model layer or directly from the IO layer.
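
A minimal sketch of the notification mechanism between the IO and Model layers, with made-up class names:

  class Observable:
      """Base for IO layer objects whose value can change."""
      def __init__(self):
          self.observers = []
          self.value = None
      def register(self, observer):
          self.observers.append(observer)
      def set(self, value):
          self.value = value
          for observer in self.observers:
              observer.notified(self)

  class PositionModel:
      """Model layer object observing an IO layer position object."""
      def __init__(self, io_position):
          self.io_position = io_position
          io_position.register(self)
          self.position = None
      def notified(self, observable):
          # Once notified, query the IO layer object for the new value.
          self.position = self.io_position.value

  io_position = Observable()
  model = PositionModel(io_position)
  io_position.set((1500, 1050, 0))  # Notifies the Model layer object.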

Instantiation

Application startup is quite complicated, as it implies the creation of many objects.

To ease construction, while still allowing a large panel of configurations, there are several levels of bootstrapping classes.

IO layer

Each simulated program (io and asserv) comes with two classes:

  • a Proto class, which can be used to communicate with a real robot using the serial port, or with a simulated one.
  • a Mex class, which can be used to communicate with a simulated robot using the mex library.

It also comes with an init module containing default parameters to be passed to the Proto constructor (host and target dicts). TODO: this should be changed as it depends on the robot to be instantiated.

Those two classes are part of the IO layer; they provide objects to communicate with the running program (simulated or not).

Here is a sample hierarchy (free syntax):

asserv/
  mex.py
    class Mex:
      motor[]                # to access motor shaft position, speed, pwm...
      position               # this is the simulated position
  proto.py
    class Proto:
      integrated_position    # this is the position computed by the program
      pwm                    # this is motor input

Using those classes, construction is easier, but there is still work for the IO layer depending on the robot to instantiate. Therefore, for each robot, there is a HostIO class in a robots.robot_name package which:

  • starts simulated programs,
  • connects them to the different Proto classes (one robot may use several of each type),
  • instantiates the different Mex classes,
  • aggregates every component object as an attribute of the HostIO class for easy access,
  • sends robot-specific parameters.

Now every sensor and actuator is available as an IO layer object in the HostIO object. For example, there can be a motor[0] which communicates with the first motor, without needing to know which simulated program handles it. It could also be called lift_motor, for example.
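
A sketch of such a class, with a stand-in for the Mex object of the sample hierarchy (program start-up and parameter sending are elided):

  class AsservMex:
      """Stand-in for the asserv Mex class of the sample hierarchy."""
      def __init__(self):
          self.motor = [object(), object()]
          self.position = None

  class HostIO:
      """IO layer bootstrap for a hypothetical robot."""
      def __init__(self):
          # The real class would start the simulated programs and
          # instantiate the Proto and Mex classes here.
          self.asserv = AsservMex()
          # Aggregate component objects as attributes for easy access,
          # hiding which simulated program handles them.
          self.motor = self.asserv.motor
          self.lift_motor = self.asserv.motor[0]
          self.position = self.asserv.position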

Model layer

Model layer objects are less dependent on the implementation. Therefore, there is a set of available objects ready to be reused.

For each robot, there is a Model class in a robots.robot_name package (sketched below) which:

  • takes a HostIO object as a constructor parameter,
  • instantiates Model layer objects and connects them to the right IO layer objects,
  • gives access to all those instantiated objects as attributes of the Model class.
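
A sketch of such a class, reusing the hypothetical names above:

  class LiftModel:
      """Hypothetical reusable Model layer object."""
      def __init__(self, motor):
          self.motor = motor

  class Model:
      """Model layer bootstrap for a hypothetical robot."""
      def __init__(self, host_io):
          # Connect reusable Model layer objects to the right IO layer
          # objects and expose them as attributes.
          self.lift = LiftModel(host_io.lift_motor)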

View layer

This one is really similar to the Model layer class instantiation. For each robot, there is a View class which takes the HostIO and Model objects as constructor parameters.
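
For example, a raw text output View object could be sketched as follows (made-up names again):

  class TextView:
      """View layer object presenting results as raw text."""
      def __init__(self, host_io, model):
          self.host_io = host_io
          self.model = model
      def update(self):
          # Could also check a condition here instead of printing.
          print('position: %s' % (self.model.position,))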

Controller

The controller classes control the flow of execution in the Python script.

TODO: there are differences depending on whether there is a GUI or not, as the GUI will want to control the execution flow. When there is no GUI, we should provide a controller which behaves like a GUI so that both executions look similar.
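
One possible shape for such a controller, assuming a hypothetical scheduler object with a date attribute and a step method:

  class NoGUIController:
      """Controller behaving like a GUI main loop, without a GUI."""
      def __init__(self, scheduler):
          self.scheduler = scheduler
      def run(self, duration):
          # A GUI would do the same from its timer/idle callbacks.
          while self.scheduler.date < duration:
              self.scheduler.step()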

TODO: more to come

  • how to detect collision,
  • how to simulate a distance sensor (detect collisions of a defined kind of object with a segment),
  • how are playground objects described,
  • how to make tests.