Software_Architecture

My initial goal is to build a remotely controllable robot that can roam around the house and interact with my pets through voice or video (even though I don't have any pets for the time being). I want to be able to see the environment through the robot's camera while controlling it. The next phase would be to change and improve the control mechanism: controlling it via voice commands, maybe using brain waves, and finally using AI and image processing to make it at least partially autonomous. Eventually I would add other controllable components to the house environment and let them interact with each other. Although this might seem like a very high-level description, I am going to try to extract the actual requirements (the architecturally significant requirements) from it.

First, there is the obvious need for acceptable mobility on a presumably slightly uneven floor, which is more or less addressed by the physical structure of the cub described in the previous posts (forget about stairs, etc. for now). There is a camera installed on the cub as the infrastructure for streaming video, and a Raspberry Pi 4 serving as the cub's main computer.

As can be clearly seen, implementing these requirements involves a relatively wide spectrum of tools, technologies and levels of abstraction. We need to talk to servos, we need to talk over USART, and different parts need to communicate over some medium. At some point we are also going to need image processing, voice recognition, AI, etc. Implementing an application that handles such a wide range of requirements in a monolithic manner is clearly impractical. To me, these properties were strong indicators that I should adopt the SOA (service-oriented architecture) style in general, and microservices in particular, to implement the entire system. In SOA, all of the functions of the final application are modularized and exposed as separate services, each one hiding its implementation details, which lets us easily use different technologies, tools and languages. In the microservice pattern, an application is split into a collection of loosely coupled services, each responsible for a fairly uniform task, all communicating over lightweight protocols (smart endpoints and dumb pipes, such as message queues).

In order to control the cub remotely (before making it autonomous) we obviously need some form of wireless communication technology (a wide range of choices, from devices like the nRF24, XBee and LoRa up to Wi-Fi, GPRS and mobile telecommunications technologies like 3G and 4G). Having worked with almost all of them, I didn't even bother considering the low-level end of this spectrum. Aside from very limited range and bandwidth, they are very low level and cause very tight coupling in the code (tight coupling is somehow the inherent nature of code that interacts with hardware), which makes it very difficult to write, read, maintain and scale (let alone test!). So I am going to use a portable 4G modem to connect the Raspberry Pi to the internet as the main medium for sending/receiving data (initially I will use Wi-Fi and LAN, but the idea is the same).

To create a stream from the camera installed on the cub, send it over the network, and play it on the other side, I am going to use VLC. It makes it easy to stream from a device and to play the resulting stream.

I have chosen AMQP as the lightweight messaging protocol for communication between services, and I will use RabbitMQ, which is an implementation of AMQP. Clearly MQTT and ZeroMQ, which are quite light and usually more suitable for embedded use, are also viable options, but I chose AMQP because I have more experience with it.
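To make this concrete, a message sent to one of these queues can simply be a small JSON payload; AMQP itself only transports opaque bytes, so the field names below (`left`, `right`) are my own hypothetical schema, not anything fixed by the protocol or by RabbitMQ:

```python
import json

def make_motor_command(left: int, right: int) -> bytes:
    """Build the body of a message destined for the motor_ctrl queue.

    The payload schema (field names, meaning of the values) is a
    hypothetical example; a real service would pass these bytes to the
    broker as the message body.
    """
    payload = {"left": left, "right": right}  # e.g. signed speed per motor
    return json.dumps(payload).encode("utf-8")

body = make_motor_command(100, -100)
print(body)  # b'{"left": 100, "right": -100}'
```

Keeping the payload as plain JSON keeps the pipes dumb: any service in any language can produce or consume it without sharing code.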

Aside from the two VLC instances and the RabbitMQ instance (running on a server, or created using CloudAMQP), I have decided to divide the main functionality into three main services running as separate processes, or even as separate containers (I will use Docker containers):

- The first, running on the cub's Raspberry Pi 4, communicates with the message broker on one side and the motor controller on the other through the motor_ctrl queue, and also sends a periodic connection-indicator signal to the motor controller (to stop the motors when the connection is lost).
- The second, also running on the cub's Raspberry Pi 4, sits between the message broker and the servo controller, listening on the servo_ctrl queue (both of these services also check the serial connection periodically and stop the service when it is lost).
- The third, running on a separate system (used to control the cub), gathers controller input (for example from a joystick, and later from a speech-recognition tool), turns it into events, and lets different observers subscribe to those events, interpret them in different ways, and send them to the different queues through an exchange.
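The event/observer mechanism of the controller-side service can be sketched in plain Python. The class and method names below are my own invention, and the in-memory `sent` list stands in for the AMQP exchange; a real observer would publish to RabbitMQ instead:

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal observer registry: callbacks subscribe to named events."""

    def __init__(self) -> None:
        self._observers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, event: str, observer: Callable) -> None:
        self._observers[event].append(observer)

    def emit(self, event: str, data) -> None:
        # Each subscribed observer interprets the event its own way.
        for observer in self._observers[event]:
            observer(data)

# Stand-in for publishing through the exchange; a real observer would
# translate joystick axes into a motor command and publish it to the
# motor_ctrl queue via the broker.
sent = []
bus = EventBus()
bus.subscribe("joystick", lambda axes: sent.append(("motor_ctrl", axes)))
bus.emit("joystick", {"x": 0.5, "y": -1.0})
print(sent)  # [('motor_ctrl', {'x': 0.5, 'y': -1.0})]
```

The point of the pattern is that adding a new interpretation of the same input (say, a speech-recognition observer later) is just one more `subscribe` call; the input-gathering side never changes.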

We can add more actuators/sensors using the same procedure: a service interacting with them and their queue on the cub side, and another service on the controller side communicating with them through a separate queue.
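The shape of such a cub-side service might look like the following sketch. Everything here is a stand-in: the in-memory `queue.Queue` replaces an AMQP queue (e.g. servo_ctrl), `write_serial` replaces the USART link to the microcontroller, and the command format is hypothetical — the point is just the loop structure with its periodic connection indicator:

```python
import queue

def run_service(commands: "queue.Queue[bytes]", write_serial,
                poll_timeout: float = 0.5) -> int:
    """Forward queued commands to the controller over the serial link.

    When no command arrives within poll_timeout, send a heartbeat so
    the microcontroller can detect a lost connection. Returns the
    number of commands forwarded.
    """
    forwarded = 0
    while True:
        try:
            cmd = commands.get(timeout=poll_timeout)
        except queue.Empty:
            write_serial(b"PING\n")  # periodic connection indicator
            continue
        if cmd is None:  # sentinel used here to end the demo cleanly
            break
        write_serial(cmd)
        forwarded += 1
    return forwarded

wire = []  # captures what would go out over USART
q: "queue.Queue[bytes]" = queue.Queue()
q.put(b"S1:90\n")  # hypothetical "servo 1 to 90 degrees" command
q.put(None)
print(run_service(q, wire.append))  # 1
print(wire)  # [b'S1:90\n']
```

A new actuator only needs its own queue name and command format; the service loop itself stays the same.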

I have tried to describe the overall architecture of this system in the following diagram. It is not meant to be a standard 4+1 model or C4 model diagram, but it is somewhat close to the container diagram of the C4 model:


Overall architecture of the system; red dotted boundaries represent separate services, blue dashed lines represent possible system borders.
 

In the following posts, I will describe each of these services, and more specifically the three main services and their code, in more detail.


Arash Ardeshiri

July 25 2021




