A World Language for Machines

The VDMA, in its drive to create a world language for factory automation, has teamed up with the OPC Foundation and its OPC UA technology, and is now developing market- and industry-specific standards for the various domains in machine building, such as Robotics, Machine Vision, Machine Tools and many more.

Wouldn’t it be great if machines could communicate directly with each other? This idea is at the core of the Industry 4.0 movement to create the smart factory of the future. Achieving ‘interoperability’ is the new core competence that must distinguish not only our future products in the connected world of the Industrial IoT, but even more the people and organizations behind them.

The VDMA, Europe’s biggest industrial association, uniting more than 3,200 member companies from the fields of mechanical engineering and machine building, has decided to drive the development of interoperability forward. In simple words: to create nothing less than a world language for factory automation. To do this, it has teamed up with the OPC Foundation and its OPC UA technology and is now developing market- and industry-specific standards for the many domains in machine building, such as Robotics, Machine Vision, Machine Tools and many more. These domain-specific standards are called ‘Companion Specifications’ and define the specifics needed within each domain. This article focuses on one such specification: the OPC UA Machine Vision Companion Specification.

OPC UA for Machine Vision

The acronym OPC UA stands for Open Platform Communications Unified Architecture. OPC UA is a vendor- and platform-independent machine-to-machine communication technology, which is officially recommended for implementing Industry 4.0 by the (German) Industry 4.0 initiative and other international consortia, such as the Industrial Internet Consortium (IIC). The specification of OPC UA can be divided into two areas: the base specification and the companion specifications. The base specification describes ‘how’ data can be transferred in an OPC UA manner, and the companion specifications describe ‘what’ information and data are transferred. The OPC Foundation is responsible for the development of the base specification. Sector-specific companion specifications are developed within working groups, usually organized by trade associations such as the VDMA, a key player in the Industry 4.0 initiative.

OPC Machine Vision Companion Specification

In January 2016, the VDMA Machine Vision Board decided to develop an OPC UA Companion Specification for Machine Vision. The work is being done within a joint working group, which by definition consists of the OPC Foundation and a host organization, in this case the VDMA. To achieve broader reach, we decided to bring this important standardization work into our global standardization initiative called G3. This G3 cooperation consists of five machine vision associations from around the world and strengthens our joint work, as shown in Figure 1. The joint working group is coordinated by the VDMA Machine Vision Initiative.

A generic information model

The OPC UA Companion Specification for Machine Vision (in short, OPC Machine Vision) provides a generic information model for all vision systems, from simple vision sensors to complex inspection systems. Put simply, it defines the essence of any vision system, which does not necessarily have to be a ‘machine’ vision system. OPC Machine Vision is the OPC UA Companion Specification for vision systems accepted and officially supported by the OPC Foundation.

Part 1

Part 1 describes an abstraction of the generic vision system, i.e. a so-called ‘digital twin’ representation of the system. It handles the management of recipes, configurations and results in a standardized way, whereas the contents stay vendor-specific and are treated as black boxes (Figure 2). It allows a vision system to be controlled in a generalized way, abstracting the necessary behavior via a state machine concept (Figure 3). A test implementation has already been successfully completed and was presented to a large audience of automotive engineering experts at a major event of the automotive industry in Germany dedicated to OPC UA. A hardware demonstrator is now being developed and will soon be showcased at major trade shows in Germany.
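The combination of black-box content handling and a generalized state machine can be sketched in a few lines of Python. Note that the state names, method names and overall structure below are illustrative assumptions for this article, not the types actually defined in OPC Machine Vision Part 1.

```python
from enum import Enum, auto


class VisionState(Enum):
    """Illustrative top-level states; the real state machine is
    defined (with different names and transitions) in the spec."""
    PREOPERATIONAL = auto()
    INITIALIZED = auto()
    OPERATIONAL = auto()
    ERROR = auto()


class VisionSystem:
    """Toy 'digital twin' facade: recipe and result contents stay
    opaque vendor-specific blobs; only their handling is generic."""

    def __init__(self) -> None:
        self.state = VisionState.PREOPERATIONAL
        self.recipes: dict[str, bytes] = {}   # recipe_id -> opaque content
        self.results: list[bytes] = []        # opaque result blobs

    def add_recipe(self, recipe_id: str, content: bytes) -> None:
        # The content is a black box; only the identifier is standardized.
        self.recipes[recipe_id] = content

    def initialize(self, recipe_id: str) -> None:
        if recipe_id not in self.recipes:
            self.state = VisionState.ERROR
            raise KeyError(recipe_id)
        self.state = VisionState.INITIALIZED

    def start(self) -> None:
        # Transitions are only allowed from defined predecessor states.
        if self.state is not VisionState.INITIALIZED:
            raise RuntimeError("prepare a recipe first")
        self.state = VisionState.OPERATIONAL

    def report_result(self, blob: bytes) -> None:
        self.results.append(blob)
```

The point of the sketch is that a client never needs to understand the recipe or result contents to drive the system through its states; that separation is what makes the model generic across vendors.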

In future parts, the generic basic information model will shift to a more specific ‘skill-based’ information model. For this purpose, the proprietary input and output data black boxes will be broken down and substituted with standardized information structures and semantics. This follows the idea of implementing information model structures derived from OPC UA DI (Device Integration, Part 100), as OPC Robotics already did in Part 1 of its companion specification. It thus supports cross-domain interoperability: machine vision systems can talk to robots and vice versa (and, at a later stage, to all kinds of devices).
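To illustrate the direction of this shift, replacing an opaque blob with typed, semantically defined fields might look like the following. The structures, field names and units here are invented for illustration; the actual skill-based semantics are still being defined in future parts of the specification.

```python
from dataclasses import dataclass, field


@dataclass
class MeasurementResult:
    """Hypothetical standardized result entry: explicit name, value
    and unit replace a vendor-specific binary blob."""
    feature_name: str
    value: float
    unit: str            # e.g. "mm"; explicit semantics, not convention
    in_tolerance: bool


@dataclass
class InspectionResult:
    """Hypothetical standardized container for one inspection run."""
    recipe_id: str
    overall_pass: bool
    measurements: list[MeasurementResult] = field(default_factory=list)


# Any consumer (a robot controller, an MES) can now interpret the
# result without vendor-specific knowledge:
result = InspectionResult(
    recipe_id="recipe-42",
    overall_pass=True,
    measurements=[MeasurementResult("hole_diameter", 4.98, "mm", True)],
)
```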


OPC Robotics Companion Specification

The OPC UA Companion Specification for Robotics (in short, OPC Robotics) provides a generic information model for robotic systems, independent of their characteristics or build-up. OPC Robotics is the OPC UA Companion Specification for robotic systems accepted and officially supported by the OPC Foundation.

OPC Robotics Part 1 provides the rules for a standardized description of robotic systems to enable asset management and condition monitoring. It enables users to access industrial robots, independent of location, brand or type of kinematics, and to read out important data (condition monitoring).

The OPC Robotics information model, especially the identification data, is derived from OPC UA DI (Device Integration, Part 100), which allows generic access to and use of the data without specific knowledge of OPC Robotics.
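Manufacturer, Model and SerialNumber are among the identification properties that OPC UA DI defines for devices; the plain-Python mirror below (with invented vendor and model strings) is only a sketch of how such homogeneous identification data lets a generic client list heterogeneous robots without robot-specific knowledge.

```python
from dataclasses import dataclass


@dataclass
class DeviceIdentification:
    """Sketch of DI-style identification data; Manufacturer, Model
    and SerialNumber mirror properties defined in OPC UA DI, while
    the concrete values below are purely illustrative."""
    manufacturer: str
    model: str
    serial_number: str
    software_revision: str = ""

# Two hypothetical robots from different vendors, described the same way:
robots = [
    DeviceIdentification("VendorA", "Artic-6", "SN-001"),
    DeviceIdentification("VendorB", "Scara-4", "SN-002"),
]

# A generic HMI can render a homogeneous device list without knowing
# anything vendor-specific:
summary = [f"{r.manufacturer} {r.model} ({r.serial_number})" for r in robots]
```

This is essentially the property that the Automatica demonstrator described below exploited: because all participating robots exposed the same identification and condition monitoring structure, one cloud HMI could serve them all.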

Nine vendors with different kinds of robots implemented the information model to provide data for one HMI in the cloud and presented this at the Automatica trade fair 2018 in Munich. A simple touch on the touchscreen interface gave users access to the condition monitoring data of the nine robots, all from different vendors and located in geographically different places. In a nutshell, the demonstrator made information about the different robots available in a homogenized way, quite literally ‘at your fingertips’.

Based on Part 1, the information model will be extended in further parts to offer more information and features based on OPC UA. The plan for future parts includes messaging and alarming as well as remote control on several levels, such as loading programs, starting or stopping the robot, or influencing the program flow for different product or process variants (as in OPC Machine Vision, Part 1).

Our Goal

The goal is not only to complement or replace existing interfaces between systems (e.g. a vision system and a robot) and their adjacent process environments with OPC UA, but also to create previously non-existent horizontal and vertical integration capabilities to communicate relevant data to other authorized process participants, right up to the enterprise IT level. OPC Machine Vision and OPC Robotics can be phased in gradually alongside coexisting interfaces. The benefits are a shorter time to market through simplified integration, generic applicability and scalability, and improved customer perception thanks to defined, consistent semantics. A specific example: both Companion Specifications enable their real-world counterparts (vision system or robot) to communicate seamlessly with the whole factory and beyond.

This raises the question: “Has image processing thus finally arrived in automation technology and become an integral part of it?” The answer is: “Definitely YES!” The previous standards (keyword: automation pyramid) left little room for the rich information that image processing systems deliver; this has now fundamentally changed.

In these standards, image processing systems were considered to be ‘glorified light barriers’ that delivered a result of 0 or 1 (good or bad). Any additionally generated information, in the sense of interfaces to, e.g., an MES, was effectively handled outside existing standards. Now industrial machine vision can play to its full strength with its well-defined information model, and can do so on the basis of a suitable, standardized interface: OPC UA.

