Software: Video streaming service
This is the last part of the first phase. With this part completed, we can control the cub while seeing through its camera (eventually perhaps several cameras) connected to the cub's Raspberry Pi. At first I was going to use VLC to stream the camera video, but aside from an annoying 3-5 second delay, it gave me no easy access to individual frames for further processing, so I decided to write a simple service for it using Python, Flask, and OpenCV.
[Image: The webcam used on the cub]
First I wrote a simple web server running on the cub's Raspberry Pi that waits for requests from the client side; when it receives a GET request, it captures a frame and responds with it. Then I added some metadata to the returned data, representing the frame's dimensions and channels, so that the client side can decode the frame automatically. Soon I realized I needed to be able to change both the dimensions and the channels of the frames from the client side, so I added metadata for the requested dimensions and channels to the request as well.

Since changing the camera's hardware settings is a time-consuming process, and different clients might need different settings and therefore change them constantly and rapidly (and also to keep the service stateless), I decided not to change the camera settings at all: the server uses the camera's defaults and only resizes/converts each frame based on the request before sending it. I also added a timestamp to the request, which is copied into the response, so the response time can be measured continuously. I also tried encoding the outgoing frames as JPEG to reduce their size, and decoupling the frame-capture loop from the sending loop using threads, but both put a lot of pressure on the cub's limited resources, so I reverted them.
Here is the code for the server part which we run on the cub:
camera_server
This server simply listens for requests; when it receives one, it captures a frame, adapts it based on the request, and sends it back along with the corresponding dimension and timing information.
The client part:
The client simply forms a request based on the current settings and sends it, reshapes the received frame using the metadata in the response, and shows the result. It also calculates and displays the time difference between each request and its corresponding response.
The output video of the front camera:
With this part done, the first phase is over. Now we can control the cub while seeing through its front camera.
The next phase is to use these raw frames and process them, and also to change the control scheme, probably automating it to some extent. Till then.
Arash Ardeshiri
September 1 2021
