Case Study

Robotic Depalletizing with AI-Driven Vision

Improving efficiencies and overcoming the challenges of mixed pallet stacks for one distribution warehouse

Project Overview

This global food and beverage distributor saw an opportunity to reduce labor and improve efficiencies by revamping their order fulfillment process. For optimal results, the customer needed depalletizers capable of unstacking mixed pallets.

The existing order fulfillment process handled one order at a time, which required a large number of people to move around the facility picking items. The resulting heavy fork-truck traffic was not only inefficient but also dangerous.

To reduce redundant trips down long aisles, wasted time, and the potential for accidents, the new fulfillment process combined several orders. This allowed pickers to be dedicated to a particular area of the warehouse, collect items onto a pallet efficiently, and bring them to a central point. There, an automatic depalletizer would unload the pallets onto a conveyor, where a barcode reader would identify which order each item belonged to and route it into the appropriate lane for completion of the individual order. Each lane would be outfitted with a robotic palletizer to automate the build of the pallet for each individual retail outlet location.
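The routing step described above can be sketched as a simple two-stage lookup: barcode to order, then order to lane. This is a hypothetical illustration; the barcode values, order IDs, and lane numbers are invented for the example and do not come from the case study.

```python
# Hypothetical sketch of barcode-based lane routing: each scanned item is
# mapped to the order it belongs to, then to the lane building that order.

def route_item(barcode: str,
               order_lookup: dict[str, str],
               lane_for_order: dict[str, int]) -> int:
    """Return the conveyor lane for a scanned item."""
    order_id = order_lookup[barcode]   # which order does this item belong to?
    return lane_for_order[order_id]    # which lane is palletizing that order?

# Example: three items from two combined orders (illustrative data)
order_lookup = {"0001": "ORD-A", "0002": "ORD-B", "0003": "ORD-A"}
lane_for_order = {"ORD-A": 1, "ORD-B": 2}

print(route_item("0002", order_lookup, lane_for_order))  # 2
```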

The depalletizer proved to be the biggest challenge: it had to identify a wide range of products, spanning hundreds of different case sizes, bag types, and other product groupings, and reliably pick from an often unstable pallet. New SKUs were also routinely introduced to the warehouse, and the depalletizer needed to accommodate them.

The customer had investigated many proposals that included sophisticated vision systems, but every time a product could not be identified (because of overhanging bags, shrink-wrapped products, or new items, for example), the operation would stop, requiring manual intervention by a nearby operator and a restart of the system. With such significant workflow interruptions, the return on investment would not be enough to warrant automating the process.

Pearson’s solution overcame these challenges. In partnership with vision company PlusOne Robotics, Pearson uses a 3D camera system to identify three-dimensional geometric surfaces, edges, and corners, as well as the size, shape, height, and location of products in real time to find an optimal pick point. Integrated machine learning provides additional information when vision data alone is insufficient. If neither 3D vision nor AI data allows the system to identify a pick point, an automatic request for remote human intervention is generated.

This supervisory system, called Yonder, enables responses in under six seconds through its remote connectivity. A remote robot manager, who can oversee a large number of robots simultaneously, manually selects an item to be picked and allows the system to continue running with minimal wait time. All of these manual responses feed the machine-learning algorithms, improving the system’s performance over time. The customer also took advantage of PlusOne’s service offering of a remote robot supervisor, maximizing the efficient use of resources and securing 24/7 support.
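The escalation logic described above, try 3D vision first, fall back to machine learning, and only then request remote human help, can be sketched as a tiered decision function. This is a minimal illustration of the pattern, not Pearson's or PlusOne's actual implementation; the function names and the tuple representation of a pick point are assumptions made for the example.

```python
# Hypothetical sketch of the tiered pick-point decision: each tier is only
# consulted when the cheaper, faster tier before it fails to find a pick.

from typing import Callable, Optional, Tuple

PickPoint = Tuple[float, float, float]  # x, y, z in the robot's frame (assumed)

def find_pick_point(
    vision: Callable[[], Optional[PickPoint]],      # tier 1: 3D geometric analysis
    ml_model: Callable[[], Optional[PickPoint]],    # tier 2: machine-learning inference
    request_human: Callable[[], PickPoint],         # tier 3: remote supervisor
) -> PickPoint:
    """Return a pick point, escalating only when the previous tier fails."""
    point = vision()
    if point is None:
        point = ml_model()
    if point is None:
        point = request_human()  # always answers, keeping the line running
    return point
```

A key property of this design is that the robot never stalls: the remote supervisor is a guaranteed final tier, so an unidentified product produces a short wait rather than a full stop, and each human answer can be logged as a training example for the learning tier.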