{"id":25657,"date":"2025-10-10T16:36:12","date_gmt":"2025-10-10T14:36:12","guid":{"rendered":"https:\/\/sixphere.com\/blog\/from-pixels-to-action-how-we-developed-the-brain-of-an-autonomous-rover-with-embedded-computer-vision\/"},"modified":"2025-10-10T16:36:12","modified_gmt":"2025-10-10T14:36:12","slug":"from-pixels-to-action-how-we-developed-the-brain-of-an-autonomous-rover-with-embedded-computer-vision","status":"publish","type":"post","link":"https:\/\/sixphere.com\/en\/blog\/from-pixels-to-action-how-we-developed-the-brain-of-an-autonomous-rover-with-embedded-computer-vision\/","title":{"rendered":"From pixels to action: how we developed the brain of an autonomous rover with embedded computer vision"},"content":{"rendered":"","protected":false},"excerpt":{"rendered":"<p>Our team faced the challenge of developing the complete brain for an autonomous rover, a system that required visual perception and real-time decision-making on low-power hardware. This case study is a technical deep dive into our process: from the implementation of the Mask R-CNN segmentation model for accurate object identification, to the acceleration of performance on a Raspberry Pi using a Coral TPU accelerator. Discover how we overcame the challenges of model optimization and quantization in TensorFlow to deliver a robust and efficient Edge AI solution for intelligent automation.  
<\/p>\n","protected":false},"author":6,"featured_media":25654,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_et_pb_use_builder":"on","_et_pb_old_content":"","_et_gb_content_width":"","content-type":"","footnotes":""},"categories":[197,195,1,185],"tags":[202,244,196],"class_list":["post-25657","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-automation-iiot","category-ia-machine-learning-en","category-sin-categoria","category-technology-innovation","tag-digital-transformation","tag-edge-iot-en","tag-machine-learning-en"],"acf":{"cuerpo":"<span style=\"font-weight: 400;\">In the world of industrial automation, the true challenge is not always scale, but intelligent precision. Many industries face repetitive tasks that require not only movement, but also perception and real-time decision-making. Recently, our team was hired to address a challenge of this nature: developing the control software and vision system for an autonomous rover designed to operate in a highly specialized agricultural environment.  <\/span>\n\n<span style=\"font-weight: 400;\">The objective was to create a system that could navigate through rows of cultivation trays, visually identify hundreds of individual compartments, and perform a specific action on each one based on its state. This post is a deep dive into our technical process, from the selection of the AI models to the optimization of the hardware for efficient operation at the edge (Edge AI). <\/span>\n<h3><b>The problem: robotic precision at the micro level<\/b><\/h3>\n<span style=\"font-weight: 400;\">The client presented us with a clear scenario: a rover had to move autonomously along an installation with thousands of cultivation units arranged in a grid. 
Our responsibility was to develop the rover's brain, which had to be capable of: <\/span>\n<ol>\n \t<li style=\"font-weight: 400;\" aria-level=\"1\">Accurately identifying each individual compartment in the camera's field of view.<\/li>\n \t<li style=\"font-weight: 400;\" aria-level=\"1\">Visually analyzing the content of each compartment to make a decision.<\/li>\n \t<li style=\"font-weight: 400;\" aria-level=\"1\">Sending commands to the robot's actuators to perform a specific task, in this case, dispensing a product.<\/li>\n \t<li style=\"font-weight: 400;\" aria-level=\"1\">Doing all of this in real-time, while the rover was moving, and with low-power hardware to maximize autonomy.<\/li>\n<\/ol>\n<span style=\"font-weight: 400;\">This was a classic computer vision and robotics problem, where speed and efficiency on an embedded device were just as important as the accuracy of the AI model.<\/span>\n<h3><b>Our Solution Architecture: A Modular Approach on Raspberry Pi<\/b><\/h3>\nWe decided to base our solution on a Raspberry Pi<span style=\"font-weight: 400;\">, a versatile and cost-effective platform for prototyping and deployment. However, to equip it with the necessary intelligence, we designed a modular software and hardware pipeline focused on vision. <\/span>\n\n<span style=\"font-weight: 400;\">The workflow we proposed was as follows: the rover's camera captures an image of a section of the tray; our software identifies each compartment; extracts each one as a sub-image; a second model classifies the state of each compartment; and finally, orders are sent to the rover's motors.<\/span>\n<h4><b>Phase 1: visual perception \u2014 precise segmentation with Mask R-CNN<\/b><\/h4>\n<span style=\"font-weight: 400;\">The first and most critical step was teaching the rover to \"see\" and delimit the objects of interest. It wasn't enough to know that a compartment was there; we needed its exact contours to isolate its visual content from the rest of the image. 
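

As a rough illustration of what that isolation step means in code (a minimal sketch, not our production pipeline; the frame and mask below are synthetic stand-ins for a camera image and a model's output):

```python
import numpy as np

# Illustrative inputs: an RGB frame and a boolean mask for one compartment,
# shaped like the per-instance masks a segmentation model would produce.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[1:3, 1:3] = 200                      # the "compartment" pixels
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True

# Zero out everything outside the mask, then crop to the mask's bounding box,
# so only the compartment's own pixels reach the downstream analysis.
isolated = np.where(mask[..., None], frame, 0)
ys, xs = np.nonzero(mask)
crop = isolated[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```

With the pixel mask, background and tray structure are removed before any classification happens, which is exactly why a box alone was not enough for us.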
<\/span>\n\nFor this task, we chose Mask R-CNN (Region-based Convolutional Neural Network)<span style=\"font-weight: 400;\">. Unlike other object detectors that only draw a rectangle (<\/span>bounding box), Mask R-CNN offered us two key outputs:\n<ul>\n \t<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Bounding box:<\/b><span style=\"font-weight: 400;\"> A box that framed each detected compartment.<\/span><\/li>\n \t<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Segmentation mask:<\/b><span style=\"font-weight: 400;\"> A pixel-level mask that delineated the exact shape of the compartment. This was the key to our strategy, as it allowed us to completely ignore the background and the tray structure, focusing the analysis only on the area of interest. <\/span><\/li>\n<\/ul>\n<span style=\"font-weight: 400;\">We trained this model using a dataset of images that our team carefully annotated. The result was a robust vision system capable of identifying and isolating each compartment with high fidelity, even under variations in lighting and perspective. <\/span>\n<h4><b>Phase 2: Preparing the ground for classification \u2014 anchors and data extraction<\/b><\/h4>\n<span style=\"font-weight: 400;\">With the compartments already segmented, the next logical step was to prepare this data for the classification model. To optimize this process, we worked on the <\/span><b>anchor calculation<\/b><span style=\"font-weight: 400;\">.<\/span>\n\n<span style=\"font-weight: 400;\">The <\/span><i><span style=\"font-weight: 400;\">anchors<\/span><\/i><span style=\"font-weight: 400;\"> are reference boxes that detection models use to predict the location and size of objects. By adjusting and optimizing these <\/span><i><span style=\"font-weight: 400;\">anchors<\/span><\/i><span style=\"font-weight: 400;\"> to the specific dimensions of the compartments, we improved the efficiency and accuracy of the detection. 
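

As a sketch of how such tuning can be approached (the box dimensions here are invented, and this tiny k-means stands in for whatever clustering one would actually use over the annotated dataset):

```python
import numpy as np

# Hypothetical (width, height) pairs, in pixels, from annotated compartments.
boxes = np.array([[32, 30], [34, 33], [31, 29],
                  [64, 60], [66, 63], [63, 61]], dtype=float)

# Cluster the box sizes with a minimal k-means (k=2) so each cluster centre
# becomes a representative anchor size for the detector.
centers = boxes[[0, 3]].copy()                      # naive initialisation
for _ in range(10):
    d = np.linalg.norm(boxes[:, None] - centers[None], axis=2)
    labels = d.argmin(axis=1)
    centers = np.array([boxes[labels == k].mean(axis=0) for k in range(2)])

anchor_scales = np.sqrt(centers.prod(axis=1))       # geometric size per cluster
anchor_ratios = centers[:, 1] / centers[:, 0]       # height / width per cluster
```

Deriving scales and ratios from the data itself, rather than keeping a detector's generic defaults, is what lets the anchors hug the compartments' real dimensions.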
This refinement allowed us to create a highly efficient data extraction pipeline: <\/span>\n<ol>\n \t<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">The rover captures an image.<\/span><\/li>\n \t<li style=\"font-weight: 400;\" aria-level=\"1\">Mask R-CNN, with its optimized anchors, generates the masks and bounding boxes in milliseconds.<\/li>\n \t<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Our software uses these coordinates to crop and normalize an individual image for each compartment.<\/span><\/li>\n \t<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">These cropped images become the standardized input for the next module of the AI system.<\/span><\/li>\n<\/ol>\n<span style=\"font-weight: 400;\">This modular design ensured that, even though the classification model was still under development, the foundation of visual perception was already solid and ready for integration.<\/span>\n<h4><b>Phase 3: The hardware challenge \u2014 implementing Edge AI with the Coral TPU accelerator<\/b><\/h4>\n<span style=\"font-weight: 400;\">Running a model like Mask R-CNN on a Raspberry Pi in real-time is, to be direct, infeasible if relying solely on its CPU. To overcome this computational barrier, we integrated a <\/span><b>Coral TPU accelerator<\/b><span style=\"font-weight: 400;\">.<\/span>\n\n<span style=\"font-weight: 400;\">This small but powerful chip, designed by Google, is optimized for running machine learning model inferences. By connecting it to the Raspberry Pi, we transformed our platform: <\/span>\n<ul>\n \t<li style=\"font-weight: 400;\" aria-level=\"1\"><b>High-speed inference:<\/b><span style=\"font-weight: 400;\"> The processing time per image was drastically reduced, going from seconds to mere milliseconds. This allowed the rover to operate smoothly and without pauses, meeting the real-time requirement. 
<\/span><\/li>\n \t<li style=\"font-weight: 400;\" aria-level=\"1\">Low power consumption: The TPU performs these calculations with much higher energy efficiency than a CPU, a decisive factor in maximizing the rover's battery life during long work shifts.<\/li>\n<\/ul>\n<span style=\"font-weight: 400;\">The integration of the Coral TPU was a fundamental pillar of our design, demonstrating our ability to deploy advanced AI solutions in resource-constrained environments (Edge AI).<\/span>\n<h4><b>Phase 4: The final optimization \u2014 training and quantization with TensorFlow<\/b><\/h4>\n<span style=\"font-weight: 400;\">Hardware alone is not the complete solution. To squeeze the maximum performance out of the Coral TPU, model optimization is essential. Our workflow for this was based on   <\/span><b>TensorFlow<\/b><span style=\"font-weight: 400;\">:<\/span>\n<ol>\n \t<li style=\"font-weight: 400;\" aria-level=\"1\">Cloud training: First, we trained our Mask R-CNN model on high-performance computing platforms to achieve the maximum possible accuracy.<\/li>\n \t<li style=\"font-weight: 400;\" aria-level=\"1\">Quantization: Once validated, we applied a technique called post-training quantization<span style=\"font-weight: 400;\">. This process converts the model weights from 32-bit floating-point numbers to 8-bit integers. 
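The arithmetic behind that conversion can be shown in a few lines; this is the generic affine int8 mapping that post-training quantization applies (the weight values below are invented for illustration, not taken from our model):

```python
import numpy as np

# Invented float32 "weights" standing in for one layer of the model.
w = np.array([-1.2, -0.4, 0.0, 0.7, 1.5], dtype=np.float32)

# Affine quantization: map the float range onto int8 via a scale and zero point.
scale = (w.max() - w.min()) / 255.0
zero_point = np.round(-128 - w.min() / scale).astype(np.int8)
q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)

# Dequantizing recovers a close approximation of the original weights;
# the rounding error per weight is at most half of `scale`.
w_restored = (q.astype(np.float32) - zero_point) * scale
```

Each float is stored as a single int8 plus a shared scale and zero point, which is where the roughly fourfold size reduction comes from. 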
The benefits are enormous for an embedded system: <\/span>\n<ul>\n \t<li style=\"font-weight: 400;\" aria-level=\"2\">Up to 4 times smaller model: Facilitates storage and loading on the Raspberry Pi.<\/li>\n \t<li style=\"font-weight: 400;\" aria-level=\"2\">Much faster inference: The Coral TPU is specifically designed to execute 8-bit integer operations at breakneck speed.<\/li>\n \t<li style=\"font-weight: 400;\" aria-level=\"2\">Lower memory and power consumption: A lighter and more efficient model reduces the overall system load.<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<span style=\"font-weight: 400;\">This quantization process was the final touch that allowed us to deploy a state-of-the-art vision model on modest hardware, without sacrificing the operational speed required for the project.<\/span>\n<h3><b>Conclusion: A model for intelligent automation<\/b><\/h3>\n<span style=\"font-weight: 400;\">This project is a testament to how the integration of cutting-edge software and hardware can solve complex industrial automation problems. By combining an advanced segmentation model like Mask R-CNN with the power of hardware acceleration from the Coral TPU and optimization techniques like quantization, we developed a robust, fast, and efficient robotic brain. <\/span>\n\n<span style=\"font-weight: 400;\">The approach we followed \u2014precise perception, data extraction, and optimization for edge hardware\u2014 is a versatile blueprint that our team can apply to a wide range of challenges, from quality control on production lines to automated logistics. We demonstrated that artificial intelligence does not have to live in the cloud; we can bring it to the field, where the action happens, creating autonomous and truly intelligent solutions. 
<\/span>","clasificacion":"Normal"},"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Computer vision for rovers: our success story with Raspberry Pi and Edge AI<\/title>\n<meta name=\"description\" content=\"Discover how we developed a computer vision system for autonomous rovers. We explain our process with Mask R-CNN, Raspberry Pi, and Coral TPU for intelligent, real-time automation.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/sixphere.com\/en\/blog\/from-pixels-to-action-how-we-developed-the-brain-of-an-autonomous-rover-with-embedded-computer-vision\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Computer vision for rovers: our success story with Raspberry Pi and Edge AI\" \/>\n<meta property=\"og:description\" content=\"Discover how we developed a computer vision system for autonomous rovers. 
We explain our process with Mask R-CNN, Raspberry Pi, and Coral TPU for intelligent, real-time automation.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/sixphere.com\/en\/blog\/from-pixels-to-action-how-we-developed-the-brain-of-an-autonomous-rover-with-embedded-computer-vision\/\" \/>\n<meta property=\"og:site_name\" content=\"Innovaci\u00f3n que conecta tu empresa - Sixphere Technologies\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/6phere\/\" \/>\n<meta property=\"article:published_time\" content=\"2025-10-10T14:36:12+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/sixphere.com\/wp-content\/uploads\/2025\/10\/rover-e1760106930778.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1024\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Jes\u00fas Mar\u00eda Jurado\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@6phere\" \/>\n<meta name=\"twitter:site\" content=\"@6phere\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Jes\u00fas Mar\u00eda Jurado\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":[\"Article\",\"BlogPosting\"],\"@id\":\"https:\\\/\\\/sixphere.com\\\/en\\\/blog\\\/from-pixels-to-action-how-we-developed-the-brain-of-an-autonomous-rover-with-embedded-computer-vision\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/sixphere.com\\\/en\\\/blog\\\/from-pixels-to-action-how-we-developed-the-brain-of-an-autonomous-rover-with-embedded-computer-vision\\\/\"},\"author\":{\"name\":\"Jes\u00fas Mar\u00eda Jurado\",\"@id\":\"https:\\\/\\\/sixphere.com\\\/en\\\/#\\\/schema\\\/person\\\/7b742a9b0be62b4b0cc93de5339c3332\"},\"headline\":\"From pixels to action: how we developed the 
brain of an autonomous rover with embedded computer vision\",\"datePublished\":\"2025-10-10T14:36:12+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/sixphere.com\\\/en\\\/blog\\\/from-pixels-to-action-how-we-developed-the-brain-of-an-autonomous-rover-with-embedded-computer-vision\\\/\"},\"wordCount\":17,\"publisher\":{\"@id\":\"https:\\\/\\\/sixphere.com\\\/en\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/sixphere.com\\\/en\\\/blog\\\/from-pixels-to-action-how-we-developed-the-brain-of-an-autonomous-rover-with-embedded-computer-vision\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/sixphere.com\\\/wp-content\\\/uploads\\\/2025\\\/10\\\/rover-e1760106930778.png\",\"keywords\":[\"Digital transformation\",\"Edge IoT\",\"Machine Learning\"],\"articleSection\":[\"Automation &amp; IIoT\",\"IA &amp; Machine Learning\",\"Sin categor\u00eda\",\"Technology &amp; Innovation\"],\"inLanguage\":\"en-US\"},{\"@type\":[\"WebPage\",\"ItemPage\"],\"@id\":\"https:\\\/\\\/sixphere.com\\\/en\\\/blog\\\/from-pixels-to-action-how-we-developed-the-brain-of-an-autonomous-rover-with-embedded-computer-vision\\\/\",\"url\":\"https:\\\/\\\/sixphere.com\\\/en\\\/blog\\\/from-pixels-to-action-how-we-developed-the-brain-of-an-autonomous-rover-with-embedded-computer-vision\\\/\",\"name\":\"Computer vision for rovers: our success story with Raspberry Pi and Edge 
AI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/sixphere.com\\\/en\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/sixphere.com\\\/en\\\/blog\\\/from-pixels-to-action-how-we-developed-the-brain-of-an-autonomous-rover-with-embedded-computer-vision\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/sixphere.com\\\/en\\\/blog\\\/from-pixels-to-action-how-we-developed-the-brain-of-an-autonomous-rover-with-embedded-computer-vision\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/sixphere.com\\\/wp-content\\\/uploads\\\/2025\\\/10\\\/rover-e1760106930778.png\",\"datePublished\":\"2025-10-10T14:36:12+00:00\",\"description\":\"Discover how we developed a computer vision system for autonomous rovers. We explain our process with Mask R-CNN, Raspberry Pi, and Coral TPU for intelligent, real-time automation.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/sixphere.com\\\/en\\\/blog\\\/from-pixels-to-action-how-we-developed-the-brain-of-an-autonomous-rover-with-embedded-computer-vision\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/sixphere.com\\\/en\\\/blog\\\/from-pixels-to-action-how-we-developed-the-brain-of-an-autonomous-rover-with-embedded-computer-vision\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/sixphere.com\\\/en\\\/blog\\\/from-pixels-to-action-how-we-developed-the-brain-of-an-autonomous-rover-with-embedded-computer-vision\\\/#primaryimage\",\"url\":\"https:\\\/\\\/sixphere.com\\\/wp-content\\\/uploads\\\/2025\\\/10\\\/rover-e1760106930778.png\",\"contentUrl\":\"https:\\\/\\\/sixphere.com\\\/wp-content\\\/uploads\\\/2025\\\/10\\\/rover-e1760106930778.png\",\"width\":1024,\"height\":1024,\"caption\":\"Rovert\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/sixphere.com\\\/en\\\/blog\\\/from-pixels-to-action-how-we-developed-the-brain-of-an-autonomous-rover-with-embedded-computer-vision\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"Li
stItem\",\"position\":1,\"name\":\"Portada\",\"item\":\"https:\\\/\\\/sixphere.com\\\/en\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"From pixels to action: how we developed the brain of an autonomous rover with embedded computer vision\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/sixphere.com\\\/en\\\/#website\",\"url\":\"https:\\\/\\\/sixphere.com\\\/en\\\/\",\"name\":\"Sixphere Technologies\",\"description\":\"Construimos tecnolog\u00eda de innovaci\u00f3n que conecta tu empresa de forma eficiente, \u00fatil y escalable con IA, datos, software e integraci\u00f3n.\",\"publisher\":{\"@id\":\"https:\\\/\\\/sixphere.com\\\/en\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/sixphere.com\\\/en\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/sixphere.com\\\/en\\\/#organization\",\"name\":\"Sixphere Technologies\",\"url\":\"https:\\\/\\\/sixphere.com\\\/en\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/sixphere.com\\\/en\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/sixphere.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/sixphere-isopo-main.jpg\",\"contentUrl\":\"https:\\\/\\\/sixphere.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/sixphere-isopo-main.jpg\",\"width\":750,\"height\":600,\"caption\":\"Sixphere Technologies\"},\"image\":{\"@id\":\"https:\\\/\\\/sixphere.com\\\/en\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/6phere\\\/\",\"https:\\\/\\\/x.com\\\/6phere\",\"https:\\\/\\\/es.linkedin.com\\\/company\\\/sixphere\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/sixphere.com\\\/en\\\/#\\\/schema\\\/person\\\/7b742a9b0be62b4b0cc93de5339c3332\",\"name\":\"Jes\u00fas Mar\u00eda 
Jurado\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/7610df99e02a00b49296911caccfd5a6f862ccec254921169505e8c6ac91eaf4?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/7610df99e02a00b49296911caccfd5a6f862ccec254921169505e8c6ac91eaf4?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/7610df99e02a00b49296911caccfd5a6f862ccec254921169505e8c6ac91eaf4?s=96&d=mm&r=g\",\"caption\":\"Jes\u00fas Mar\u00eda Jurado\"},\"url\":\"https:\\\/\\\/sixphere.com\\\/en\\\/blog\\\/author\\\/jesus-jurado\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Computer vision for rovers: our success story with Raspberry Pi and Edge AI","description":"Discover how we developed a computer vision system for autonomous rovers. We explain our process with Mask R-CNN, Raspberry Pi, and Coral TPU for intelligent, real-time automation.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/sixphere.com\/en\/blog\/from-pixels-to-action-how-we-developed-the-brain-of-an-autonomous-rover-with-embedded-computer-vision\/","og_locale":"en_US","og_type":"article","og_title":"Computer vision for rovers: our success story with Raspberry Pi and Edge AI","og_description":"Discover how we developed a computer vision system for autonomous rovers. 
We explain our process with Mask R-CNN, Raspberry Pi, and Coral TPU for intelligent, real-time automation.","og_url":"https:\/\/sixphere.com\/en\/blog\/from-pixels-to-action-how-we-developed-the-brain-of-an-autonomous-rover-with-embedded-computer-vision\/","og_site_name":"Innovaci\u00f3n que conecta tu empresa - Sixphere Technologies","article_publisher":"https:\/\/www.facebook.com\/6phere\/","article_published_time":"2025-10-10T14:36:12+00:00","og_image":[{"width":1024,"height":1024,"url":"https:\/\/sixphere.com\/wp-content\/uploads\/2025\/10\/rover-e1760106930778.png","type":"image\/png"}],"author":"Jes\u00fas Mar\u00eda Jurado","twitter_card":"summary_large_image","twitter_creator":"@6phere","twitter_site":"@6phere","twitter_misc":{"Written by":"Jes\u00fas Mar\u00eda Jurado"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":["Article","BlogPosting"],"@id":"https:\/\/sixphere.com\/en\/blog\/from-pixels-to-action-how-we-developed-the-brain-of-an-autonomous-rover-with-embedded-computer-vision\/#article","isPartOf":{"@id":"https:\/\/sixphere.com\/en\/blog\/from-pixels-to-action-how-we-developed-the-brain-of-an-autonomous-rover-with-embedded-computer-vision\/"},"author":{"name":"Jes\u00fas Mar\u00eda Jurado","@id":"https:\/\/sixphere.com\/en\/#\/schema\/person\/7b742a9b0be62b4b0cc93de5339c3332"},"headline":"From pixels to action: how we developed the brain of an autonomous rover with embedded computer 
vision","datePublished":"2025-10-10T14:36:12+00:00","mainEntityOfPage":{"@id":"https:\/\/sixphere.com\/en\/blog\/from-pixels-to-action-how-we-developed-the-brain-of-an-autonomous-rover-with-embedded-computer-vision\/"},"wordCount":17,"publisher":{"@id":"https:\/\/sixphere.com\/en\/#organization"},"image":{"@id":"https:\/\/sixphere.com\/en\/blog\/from-pixels-to-action-how-we-developed-the-brain-of-an-autonomous-rover-with-embedded-computer-vision\/#primaryimage"},"thumbnailUrl":"https:\/\/sixphere.com\/wp-content\/uploads\/2025\/10\/rover-e1760106930778.png","keywords":["Digital transformation","Edge IoT","Machine Learning"],"articleSection":["Automation &amp; IIoT","IA &amp; Machine Learning","Sin categor\u00eda","Technology &amp; Innovation"],"inLanguage":"en-US"},{"@type":["WebPage","ItemPage"],"@id":"https:\/\/sixphere.com\/en\/blog\/from-pixels-to-action-how-we-developed-the-brain-of-an-autonomous-rover-with-embedded-computer-vision\/","url":"https:\/\/sixphere.com\/en\/blog\/from-pixels-to-action-how-we-developed-the-brain-of-an-autonomous-rover-with-embedded-computer-vision\/","name":"Computer vision for rovers: our success story with Raspberry Pi and Edge AI","isPartOf":{"@id":"https:\/\/sixphere.com\/en\/#website"},"primaryImageOfPage":{"@id":"https:\/\/sixphere.com\/en\/blog\/from-pixels-to-action-how-we-developed-the-brain-of-an-autonomous-rover-with-embedded-computer-vision\/#primaryimage"},"image":{"@id":"https:\/\/sixphere.com\/en\/blog\/from-pixels-to-action-how-we-developed-the-brain-of-an-autonomous-rover-with-embedded-computer-vision\/#primaryimage"},"thumbnailUrl":"https:\/\/sixphere.com\/wp-content\/uploads\/2025\/10\/rover-e1760106930778.png","datePublished":"2025-10-10T14:36:12+00:00","description":"Discover how we developed a computer vision system for autonomous rovers. 
We explain our process with Mask R-CNN, Raspberry Pi, and Coral TPU for intelligent, real-time automation.","breadcrumb":{"@id":"https:\/\/sixphere.com\/en\/blog\/from-pixels-to-action-how-we-developed-the-brain-of-an-autonomous-rover-with-embedded-computer-vision\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/sixphere.com\/en\/blog\/from-pixels-to-action-how-we-developed-the-brain-of-an-autonomous-rover-with-embedded-computer-vision\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/sixphere.com\/en\/blog\/from-pixels-to-action-how-we-developed-the-brain-of-an-autonomous-rover-with-embedded-computer-vision\/#primaryimage","url":"https:\/\/sixphere.com\/wp-content\/uploads\/2025\/10\/rover-e1760106930778.png","contentUrl":"https:\/\/sixphere.com\/wp-content\/uploads\/2025\/10\/rover-e1760106930778.png","width":1024,"height":1024,"caption":"Rovert"},{"@type":"BreadcrumbList","@id":"https:\/\/sixphere.com\/en\/blog\/from-pixels-to-action-how-we-developed-the-brain-of-an-autonomous-rover-with-embedded-computer-vision\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Portada","item":"https:\/\/sixphere.com\/en\/"},{"@type":"ListItem","position":2,"name":"From pixels to action: how we developed the brain of an autonomous rover with embedded computer vision"}]},{"@type":"WebSite","@id":"https:\/\/sixphere.com\/en\/#website","url":"https:\/\/sixphere.com\/en\/","name":"Sixphere Technologies","description":"Construimos tecnolog\u00eda de innovaci\u00f3n que conecta tu empresa de forma eficiente, \u00fatil y escalable con IA, datos, software e 
integraci\u00f3n.","publisher":{"@id":"https:\/\/sixphere.com\/en\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/sixphere.com\/en\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/sixphere.com\/en\/#organization","name":"Sixphere Technologies","url":"https:\/\/sixphere.com\/en\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/sixphere.com\/en\/#\/schema\/logo\/image\/","url":"https:\/\/sixphere.com\/wp-content\/uploads\/2023\/12\/sixphere-isopo-main.jpg","contentUrl":"https:\/\/sixphere.com\/wp-content\/uploads\/2023\/12\/sixphere-isopo-main.jpg","width":750,"height":600,"caption":"Sixphere Technologies"},"image":{"@id":"https:\/\/sixphere.com\/en\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/6phere\/","https:\/\/x.com\/6phere","https:\/\/es.linkedin.com\/company\/sixphere"]},{"@type":"Person","@id":"https:\/\/sixphere.com\/en\/#\/schema\/person\/7b742a9b0be62b4b0cc93de5339c3332","name":"Jes\u00fas Mar\u00eda Jurado","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/7610df99e02a00b49296911caccfd5a6f862ccec254921169505e8c6ac91eaf4?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/7610df99e02a00b49296911caccfd5a6f862ccec254921169505e8c6ac91eaf4?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/7610df99e02a00b49296911caccfd5a6f862ccec254921169505e8c6ac91eaf4?s=96&d=mm&r=g","caption":"Jes\u00fas Mar\u00eda 
Jurado"},"url":"https:\/\/sixphere.com\/en\/blog\/author\/jesus-jurado\/"}]}},"_links":{"self":[{"href":"https:\/\/sixphere.com\/en\/wp-json\/wp\/v2\/posts\/25657","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/sixphere.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/sixphere.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/sixphere.com\/en\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/sixphere.com\/en\/wp-json\/wp\/v2\/comments?post=25657"}],"version-history":[{"count":0,"href":"https:\/\/sixphere.com\/en\/wp-json\/wp\/v2\/posts\/25657\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/sixphere.com\/en\/wp-json\/wp\/v2\/media\/25654"}],"wp:attachment":[{"href":"https:\/\/sixphere.com\/en\/wp-json\/wp\/v2\/media?parent=25657"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/sixphere.com\/en\/wp-json\/wp\/v2\/categories?post=25657"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/sixphere.com\/en\/wp-json\/wp\/v2\/tags?post=25657"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}