# Robot motion planning based on Voronoi

**Introduction:**

- Voronoi
- Why Voronoi
- Motion Planning

**Steps:**

1) Defining an obstacle

- Black and white problem.
- Color gradient.

2) Detecting all obstacles' edges

- Gradient Magnitude.

3) Discretizing all continuous edges into points

- Start & end nodes problem.
- How?!
- Boundary points problem.

4) Voronoi Part

- Saving and reading from files.

5) Deleting non-safe paths (or edges)

6) Dijkstra

- Start & end nodes problem.

7) Delete all other edges

8) Decimation

- Robot Commands.

**Introduction:**

**Voronoi:** From Wikipedia: in mathematics, a Voronoi diagram is a way of dividing space into a number of regions. A set of points (called seeds, sites, or generators) is specified beforehand, and for each seed there is a corresponding region consisting of all points closer to that seed than to any other (as shown in this figure).

**Why:** Our goal is to use this diagram to plan the safest route for a robot to go from one location to another. As shown in the figure, the Voronoi edges are as far as possible from all obstacles (points).

**Motion Planning:** This part of the project is about converting an image with obstacles into input for a Voronoi diagram, and then calculating the safest shortest path for the robot.

1) **Defining the obstacles:** After taking the input from the user (an image containing obstacles), we want to define the obstacles the robot should avoid. (Note that we only need to know the obstacles; we don't search for possible paths, because everything else is considered passable.)

- The first problem is the color of the obstacle; in the first phase we assumed a black-and-white image, where black means an obstacle.
- Then we decide whether each pixel is black or not using the **color distance (Euclidean distance)**: if the distance between a pixel's color and black is <= 200, the pixel is considered an obstacle (200 is the default value; it's changeable by the user).
- After that we generalized the obstacle color to be changeable by the user (controlled by the same metric, color distance).
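The thresholding above can be sketched in C++ (the function names `colorDistance` and `isObstacle` are illustrative, not taken from the project's source):

```cpp
#include <cmath>

struct Rgb { double r, g, b; };

// Euclidean distance between two colors in RGB space.
double colorDistance(const Rgb& a, const Rgb& b) {
    return std::sqrt((a.r - b.r) * (a.r - b.r) +
                     (a.g - b.g) * (a.g - b.g) +
                     (a.b - b.b) * (a.b - b.b));
}

// A pixel is treated as an obstacle when its distance to the
// obstacle color is within the threshold (200 by default).
bool isObstacle(const Rgb& pixel, const Rgb& obstacleColor,
                double threshold = 200.0) {
    return colorDistance(pixel, obstacleColor) <= threshold;
}
```

For example, dark gray (50, 50, 50) is about 86.6 away from black and so counts as an obstacle, while white is about 441.7 away and does not.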

2) **Detecting obstacles' edges:** Now that we know the obstacles, we need to convert them to points in order to construct the Voronoi diagram. Our approach is to convert the obstacles into edges and then convert those edges into points that represent the shape of the obstacles.

- We used the **gradient magnitude** to convert obstacles to edges.
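A minimal sketch of the idea, using central differences on a binary obstacle mask (the post doesn't specify the exact gradient operator, so this is an assumption):

```cpp
#include <cmath>
#include <vector>

// Gradient magnitude of a binary obstacle mask via central differences.
// Pixels with a non-zero gradient lie on an obstacle boundary.
std::vector<std::vector<double>>
gradientMagnitude(const std::vector<std::vector<int>>& mask) {
    int h = mask.size(), w = mask[0].size();
    std::vector<std::vector<double>> mag(h, std::vector<double>(w, 0.0));
    for (int y = 1; y + 1 < h; ++y)
        for (int x = 1; x + 1 < w; ++x) {
            double gx = mask[y][x + 1] - mask[y][x - 1];
            double gy = mask[y + 1][x] - mask[y - 1][x];
            mag[y][x] = std::sqrt(gx * gx + gy * gy);
        }
    return mag;
}
```

Thresholding the result at zero leaves exactly the edge pixels of each obstacle.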

3) **Discretizing all edges into points:** Here we discretize every continuous edge into a set of points.

- Before discussing this problem, there's an important note: the start and end positions of the robot should also be represented as points; the reason for this will become clear in the next steps.
- The problem in this step is that the distance between any two points must be less than the robot's size, in order to prevent the algorithm from accepting a path that goes through an obstacle.
- **How:** Our approach is to have a 2D matrix representing the edges from step 2; we iterate over each point and erase any point within a distance <= robot size of it (look at the figure).
- **Boundary points:** The borders of the image are also considered obstacles, so we discretized them with the same technique used for the other points.
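The pruning pass described above might look like the following greedy sketch (names are illustrative; in practice the threshold would be chosen slightly below the robot size so the remaining spacing still blocks the robot):

```cpp
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Thin out edge points: walk the list once and drop every point that
// lies within minDist of a point we have already decided to keep.
std::vector<Pt> discretize(const std::vector<Pt>& edgePoints, double minDist) {
    std::vector<Pt> kept;
    for (const Pt& p : edgePoints) {
        bool tooClose = false;
        for (const Pt& q : kept) {
            double dx = p.x - q.x, dy = p.y - q.y;
            if (std::sqrt(dx * dx + dy * dy) <= minDist) { tooClose = true; break; }
        }
        if (!tooClose) kept.push_back(p);
    }
    return kept;
}
```

On ten collinear points one unit apart with a pruning distance of 3, this keeps the points at x = 0, 4, and 8.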

4) **Voronoi Part:** Now the input is organized and ready for constructing the Voronoi diagram.

- The Voronoi diagram is computed in C++ (to be more efficient).
- We save the size of the image and the points in a file, then execute a C++ exe file, which reads the input and saves the output in another file.
- After that, the main program (C#) reads the output and deletes the input, output, and exe files.

5) **Deleting non-safe paths (or edges):** After receiving the output from the C++ Voronoi step, there are, as shown in the figure, some edges that the robot cannot use.

- This is done by iterating over each point and deleting all edges whose distance to the point is <= robot size.
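This check reduces to a point-to-segment distance test; a sketch (names are illustrative):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Shortest distance from point p to segment ab.
double pointSegmentDist(Pt p, Pt a, Pt b) {
    double vx = b.x - a.x, vy = b.y - a.y;
    double wx = p.x - a.x, wy = p.y - a.y;
    double len2 = vx * vx + vy * vy;
    // Clamp the projection of p onto ab to the segment's extent.
    double t = len2 > 0 ? std::max(0.0, std::min(1.0, (wx * vx + wy * vy) / len2))
                        : 0.0;
    double dx = p.x - (a.x + t * vx), dy = p.y - (a.y + t * vy);
    return std::sqrt(dx * dx + dy * dy);
}

// A Voronoi edge ab is unsafe when any obstacle point
// lies within robotSize of it.
bool isUnsafe(Pt a, Pt b, const std::vector<Pt>& obstacles, double robotSize) {
    for (const Pt& o : obstacles)
        if (pointSegmentDist(o, a, b) <= robotSize) return true;
    return false;
}
```

For instance, an obstacle point one unit away from an edge makes that edge unsafe for any robot size of at least one unit.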

6) **Dijkstra:** Now everything is ready to compute the shortest (**safest**) path. In this stage we run Dijkstra's algorithm over all safe edges to find the shortest path from the start node to the end node.

- Note: we connected the start and end points to all points of their enclosing convex polygon.
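This stage is standard Dijkstra over the safe edges; a sketch with a weighted adjacency list (the project's actual graph representation may differ):

```cpp
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Dijkstra over the safe Voronoi edges. adj[u] holds (neighbor, weight)
// pairs; returns the shortest distance from start to every node.
std::vector<double>
dijkstra(const std::vector<std::vector<std::pair<int, double>>>& adj, int start) {
    const double INF = std::numeric_limits<double>::infinity();
    std::vector<double> dist(adj.size(), INF);
    using State = std::pair<double, int>;  // (distance, node)
    std::priority_queue<State, std::vector<State>, std::greater<State>> pq;
    dist[start] = 0;
    pq.push({0, start});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue;  // stale queue entry
        for (auto [v, w] : adj[u])
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});
            }
    }
    return dist;
}
```

The path itself can be recovered by also recording each node's predecessor when its distance is improved.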

7) **Delete all other edges:** After calculating the final path, we discard all other unused edges.

8) **Decimation** (optional step)**:** Finally, we downsample the final path in order to send it as commands to the robot (to reduce the number of commands).

- Each command describes a distance and an orientation that the robot should take.
- We fit a linear segment every 5 units along the x-axis (5 is the default value; it's changeable by the user).
- Details are shown in the example.
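A sketch of turning the decimated path into relative (turn, move) commands, assuming the robot starts facing along the +x axis (the project's actual command format isn't specified):

```cpp
#include <cmath>
#include <vector>

struct Pt { double x, y; };
struct Command { double turnDeg; double distance; };  // rotate, then move

// Convert a decimated path into relative (orientation, distance) commands.
// Assumption: the robot's initial heading is along the +x axis.
std::vector<Command> toCommands(const std::vector<Pt>& path) {
    const double kPi = std::acos(-1.0);
    std::vector<Command> cmds;
    double heading = 0.0;  // radians
    for (std::size_t i = 1; i < path.size(); ++i) {
        double dx = path[i].x - path[i - 1].x;
        double dy = path[i].y - path[i - 1].y;
        double target = std::atan2(dy, dx);
        double turn = target - heading;
        // Normalize the turn to (-pi, pi] so the robot rotates the short way.
        while (turn > kPi) turn -= 2 * kPi;
        while (turn <= -kPi) turn += 2 * kPi;
        cmds.push_back({turn * 180.0 / kPi,
                        std::sqrt(dx * dx + dy * dy)});
        heading = target;
    }
    return cmds;
}
```

An L-shaped path (0,0) → (5,0) → (5,5) would produce two commands: move 5 units straight ahead, then turn 90 degrees and move 5 more.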

Source code can be found here: https://github.com/TarekVito/RobotPlanning

For the executable version, click here.

Great topic (Y)

Can a robot use this to detect its path, or is it just theoretical?

Well, regarding the last part of the post (robot commands), I won't claim that these commands are good enough to be used by a real robot, because there are some unsolved problems beyond movement distance and orientation:

1) The relation (or ratio) between the actual map and the user's image is not computed.

2) Regarding the robot's input type: I assume in this project that the robot can measure the distance and orientation of its own movement.