As adoption has increased, drones have become well-known for their aerial imaging capabilities. Their photography offers meaningful, data-driven insight for industries with growing demand for automation and technology adoption.
Agriculture is ripe to benefit from drone imaging. By flying a UAV over fields, operators can capture detailed photographs of the crops and later stitch them together into a single map image, much like Google or Bing Maps. More sophisticated solutions designed for this use can determine crop health and even produce a fertilization report, which can be uploaded to direct fertilization machines.
Challenge: How to Avoid a Processing Slow-Down
While incredibly useful, these image sets, many gigabytes in size, can take 6-8 hours to render on a modern workstation. Vendors who provide image processing services need a system that can handle multiple processing tasks at once.
In cases where users upload images to generate an aerial map and report, a single processing task can monopolize all the resources on the server. Azure message queuing can help offload the main application server by pushing processing out to dedicated servers.
Azure Message Queuing: Architect to Offload Processing
Using Azure Virtual Machines and Azure Message Queues to offload intensive processing tasks from your primary Application Servers to dedicated Processing Servers helps businesses scale both horizontally and vertically, while avoiding the time that would otherwise be spent downloading images to each node.
The proposed solution, built using Azure’s services, will use Azure message queuing to schedule pending tasks along with table storage for centralized logging and virtual machines to complete the processing.
Users’ images will be uploaded to an Azure File Storage share, which is mounted on the Azure Virtual Machines that do the processing. This saves download time: the Processing Server has access to all of a user’s files in the system without first pulling many gigabytes over the network.
Azure Message Queuing Benefits
Azure provides rich APIs for managing its services, from command-line tools to client libraries for various platforms, allowing developers to manage the virtual machines directly.
A dedicated Management Application can be added to the system to monitor load. If all Processing Nodes are busy and many Pending Messages are waiting, the Management Application can spawn new Processing Nodes in seconds. Later, redundant nodes can be shut down or even deleted, reducing operational costs and scaling the system as needed.
Pending Messages can be assigned a priority, or a separate priority queue can be created for them. Processing Nodes then work on prioritized tasks first. If you offer different pricing plans within your application, you can give priority to tasks from higher-end accounts.
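The priority-queue variant can be sketched as polling an ordered list of queues and taking the first message found. The queue names and the helper below are illustrative, not part of the original project; the only assumption is a client with the receive_messages method of azure.storage.queue.QueueClient.

```python
# Sketch of priority polling: check queues in priority order and take the
# first available message. Queue names ("priority-pending", "pending") are
# illustrative examples.

def dequeue_by_priority(queues):
    """Poll queues in the given order; return (queue, message) for the
    first message found, or None when all queues are empty.

    Each queue must expose receive_messages(max_messages=...), as
    azure.storage.queue.QueueClient does."""
    for queue in queues:
        for message in queue.receive_messages(max_messages=1):
            return queue, message
    return None

# With the real SDK this would be wired up roughly like:
#   from azure.storage.queue import QueueClient
#   high = QueueClient.from_connection_string(conn_str, "priority-pending")
#   normal = QueueClient.from_connection_string(conn_str, "pending")
#   result = dequeue_by_priority([high, normal])
```

Because the helper only depends on the receive_messages method, the same Processing Node code works whether you run one queue or several.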
In this implementation, the Virtual Machine doing the processing is hosted in Azure, which means we can boost its power in a matter of seconds, reducing the time needed for a processing task. For drone imagery, boosting RAM and CPU can cut processing from 6 hours to 4; adding a GPU can shrink it to 2 hours.
Multiple Processing Servers take processing tasks from the queues. Start with one server and add more as needed. The design allows adding and removing Processing Nodes without downtime or reconfiguring other parts of the system; developers simply point the new Node at the Message Queues.
The Azure Queues work with text messages and provide cross-platform APIs, which means developers are not bound to a specific platform. Additionally, you can change the systems on both sides of the message queues without disrupting the other side. This gives you great flexibility and a loosely coupled system.
How to Build a Demo Application
For the processing servers, we’ll use a demo application hosted inside a Docker container. I used Docker because the original project incorporated an open-source library called OpenDroneMap, which also provides processing inside a Docker container.
System Components for Azure Message Queuing
- Message Queues
  - Pending Queue — Holds new tasks that users have submitted for processing
  - Processing Queue — Tasks that are being processed are moved to this queue
  - Finished Queue — Holds finished tasks with status information
  - Progress Queue (optional) — Processing nodes can use this queue to push progress updates
- Processing Server/Cluster
- Shared Storage
- Table Storage (for logging)
1. A user initiates a new processing task, and a Pending Message is enqueued to the Pending Queue. The UI then starts polling the Processing/Finished Queues for status updates.
2. When a Processing Server is available, it dequeues the next Pending Message from the Pending Queue and enqueues a Processing Message.
3. While working, a Processing Server can enqueue Progress Messages to the Progress Queue. This step is optional. The Processing Queue can also be used for this purpose.
4. When the Processing Server is done, it enqueues a Finished Message to the Finished Queue. The Processing Server shuts down, and the UI picks up the finished task.
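The messages moving through the steps above can be sketched as small JSON payloads. The exact field names ("id", "share", "status") are illustrative assumptions; the article only requires that the Finished Message carries the original JSON plus a "success" property.

```python
import json

def pending_message(task_id, share=None):
    """Step 1: the UI enqueues this to the Pending Queue."""
    task = {"id": task_id}
    if share is not None:
        task["share"] = share
    return json.dumps(task)

def processing_message(pending_json):
    """Step 2: a Processing Server dequeues a Pending Message and
    enqueues this to the Processing Queue. The "status" field is an
    illustrative addition."""
    task = json.loads(pending_json)
    task["status"] = "processing"
    return json.dumps(task)

def finished_message(pending_json, success):
    """Step 4: the original JSON with a "success" property added."""
    task = json.loads(pending_json)
    task["success"] = success
    return json.dumps(task)
```

Keeping the payloads as plain JSON text is what makes the queues cross-platform: any language that can parse JSON can sit on either side.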
Setting Up the Message Queue, Shared Storage, Table Storage and Processing Server
A Storage Account in Azure provides File, Queue and Table services. The File service allows file shares to be mounted on virtual machines running any OS. This means the application server can share the same File Share with the processing virtual machines.
The Azure Storage Account also provides Blob storage. Unlike File services, Blob storage cannot be mounted to virtual machines. Azure Table Storage will be used for centralized logging.
We are going to use the Storage Account’s Message Queue for the implementation, because it comes bundled with the other services, but Azure Service Bus is also an option.
Sidenote: You can sign up for free Azure monthly credit through the Visual Studio Dev Essentials program: https://www.visualstudio.com/dev-essentials/.
1. Create your Storage Account.
The Storage Account will provide the File and Queue services.
2. Create the message queues and logs table using Azure Storage Explorer — http://storageexplorer.com/.
3. Create a separate Resource Group for the virtual processing machines.
4. Create a new virtual machine, which will be used for processing. Remember we are using a Docker host, and the processing logic will be inside a custom container.
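The queues and logs table from step 2 can also be created programmatically instead of through Storage Explorer. A sketch using the azure-storage-queue and azure-data-tables packages; the queue and table names are the ones used in this article, and the connection string comes from the Storage Account created in step 1.

```python
# Queue and table names used throughout this article.
QUEUES = ["pending", "processing", "finished", "progress"]
LOG_TABLE = "logs"

def provision(conn_str):
    """Create the message queues and the logs table if they don't exist.
    Requires the azure-storage-queue and azure-data-tables packages."""
    # Imported here so the sketch reads without the SDKs installed.
    from azure.storage.queue import QueueServiceClient
    from azure.data.tables import TableServiceClient

    queues = QueueServiceClient.from_connection_string(conn_str)
    existing = {q.name for q in queues.list_queues()}
    for name in QUEUES:
        if name not in existing:
            queues.create_queue(name)

    tables = TableServiceClient.from_connection_string(conn_str)
    tables.create_table_if_not_exists(LOG_TABLE)
```

Running this once at deployment time keeps the environment reproducible, which matters when the Management Application spins up fresh Processing Nodes.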
Setup details and sample code can be found here:
Here is the final Azure setup:
Client Code Overview
Our demo client is a Python application running in a Docker container. Azure provides cross-platform client libraries, so it is quite easy to communicate with its services. Additionally, the Queue service that comes with Azure Storage can be queried over plain HTTP.
The container is based on the latest Ubuntu distribution and depends on the following packages:
- cifs-utils: Azure File shares are mounted using the CIFS (SMB) protocol, so this additional tooling must be installed on the client to mount the share.
- python-pip and the azure package: the azure package for Python brings in all of the Azure client libraries.
When the container is started, the client script will start polling the Pending Queue and dequeue any new JSON messages.
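A single polling pass can be sketched as follows; the function name and loop structure are illustrative, and the only assumption is a queue client exposing the receive_messages/delete_message methods of azure.storage.queue.QueueClient.

```python
import json

def drain_once(pending_queue, handle):
    """One polling pass over the Pending Queue: decode each available
    JSON message, hand it to `handle`, then delete it from the queue.
    Returns the number of messages handled; the real client repeats
    this in a loop with a short sleep between passes."""
    handled = 0
    for msg in pending_queue.receive_messages():
        handle(json.loads(msg.content))
        pending_queue.delete_message(msg)
        handled += 1
    return handled
```

Deleting the message only after `handle` returns means a crashed client leaves the message in the queue, where it becomes visible again for another node to pick up.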
When a Pending Message is dequeued, a corresponding Message will be enqueued in the Processing Queue. If the dequeued message has a “share” property, the client will treat its value as a share name and start the mount procedure. Mounting is done via the standard mount command with some additional parameters:
mount -t cifs <source_url> <destination_folder> -o vers=3.0,user=<user>,password=<password>
Now the Azure storage will be mounted as a folder in the operating system. This means if the processing involves user files, there will be no overhead by downloading files from the share to the Processing Server.
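The mount invocation can be assembled inside the client script. A sketch, where the //<account>.file.core.windows.net/<share> source URL is the standard Azure Files endpoint; for Azure Files the CIFS user is the storage account name and the password is the storage account key.

```python
def mount_command(account, share, mount_point, password):
    """Build the CIFS mount command for an Azure File share.
    `account` is the storage account name, which also serves as the
    CIFS user; `password` is the storage account key."""
    source = "//{0}.file.core.windows.net/{1}".format(account, share)
    options = "vers=3.0,user={0},password={1}".format(account, password)
    return ["mount", "-t", "cifs", source, mount_point, "-o", options]

# The client would run this with subprocess (root privileges required):
#   import subprocess
#   subprocess.check_call(mount_command("mystorage", "test", "/mnt/test", key))
```

Building the command as a list rather than a string avoids shell quoting issues with keys that contain special characters.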
As part of the demo, the client script will list all files that are on the share.
Once processing is complete, any mounted shares are unmounted, and a Finished message will be enqueued with a “success” property added to the original JSON.
All significant operations are logged to the logs table in the Azure Table Storage we’ve set up.
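A log entry for Table Storage is just an entity with PartitionKey and RowKey properties. A sketch of how the client could build one; using the task id as PartitionKey and a UTC timestamp as RowKey is an illustrative scheme, not the one from the original project.

```python
import datetime

def log_entity(task_id, operation, detail=""):
    """Build an entity for the logs table. PartitionKey and RowKey are
    required by Table Storage; grouping by task id makes it cheap to
    query all log lines for one processing task."""
    return {
        "PartitionKey": task_id,
        "RowKey": datetime.datetime.utcnow().strftime("%Y%m%d%H%M%S%f"),
        "Operation": operation,
        "Detail": detail,
    }

# With the azure-data-tables package the entity would be written with:
#   from azure.data.tables import TableClient
#   logs = TableClient.from_connection_string(conn_str, "logs")
#   logs.create_entity(log_entity("task-1", "mount", "share=test"))
```

Because every Processing Node writes to the same table, this gives the centralized logging mentioned earlier without any extra infrastructure.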
The architectural approach with Message Queues in Azure can easily be demonstrated with Azure Storage Explorer, even if you haven’t developed a user-facing application yet. We can manually add messages to the Pending Queue and watch how the processing clients react.
For the sake of the demonstration, I’ve added a share named test with a text file test.txt.
The next step would be to add a new message to the Pending Queue (make sure it’s Base64-encoded, as the client expects).
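If you prefer to produce the encoded message programmatically, a minimal sketch follows; the JSON fields are the illustrative "id"/"share" shape used by this demo.

```python
import base64
import json

def encode_pending_message(task):
    """Base64-encode a JSON task so it matches what the client expects
    when decoding messages that were added by hand in Storage Explorer."""
    raw = json.dumps(task).encode("utf-8")
    return base64.b64encode(raw).decode("ascii")

# Example: encode a task pointing at the "test" share.
#   encode_pending_message({"id": "1", "share": "test"})
```

The same string can be pasted directly into the Add Message dialog in Storage Explorer.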
With the demo client started, keep an eye on the processing and finished queues and the client’s progress.
You can check the logs in the Azure table storage.
Here is the output of the processing client.