
AI & robotics at work: Innovations driving productivity

NEWS
Wed 06 Aug 2025

Artificial intelligence and robotics are rapidly reshaping European workplaces, driving transformative gains in productivity, safety, and operational efficiency. According to Eurostat’s 2025 data, 13.5% of EU enterprises with 10 or more employees had adopted AI technologies in 2024—up from 8.0% in 2023—with higher adoption rates in large firms (around 41%) (European Commission). Meanwhile, robotics is scaling fast: in 2022, almost 72,000 industrial robots were installed across the EU—a 6% increase year-over-year—and the industrial robotics market is projected to more than double to USD 9.87 billion by 2030. These trends underscore a continent-wide push toward automation, smart manufacturing, and AI-enhanced processes, driven by competitive pressures and supported by the EU’s digital strategy and innovation infrastructure.

The webinar “AI & Robotics at Work: Innovations Driving Productivity”, held on June 26, offered a timely look into this transformation. Participants heard from researchers involved in two EU-funded projects focused on the practical application of AI and robotics in complex domains:

  • Miguel Ceriani presented outcomes of the HACID project (Hybrid Human Artificial Collective Intelligence in Open-Ended Domains), highlighting how AI systems can collaborate with human experts to improve decision-making in areas like climate services and medical diagnostics. His presentation focused on the development of a flexible knowledge graph that integrates human and machine-generated insights to support collective problem-solving.
  • Maria Chiara Leva, Ammar Abbas and Doaa Almhaithawi discussed findings from the CISC project (Collaborative Intelligence for Safety-Critical Systems), which investigates how AI can be used to enhance safety and performance in high-risk environments. Their work emphasized the importance of human-AI collaboration in which intelligent systems assist workers without replacing their critical judgment.

Knowledge graph for climate services in the HACID Project

The presentation offered a detailed look into the development of a knowledge graph for climate services within the EU-funded HACID project—“Hybrid Human-Artificial Collective Intelligence in Open-Ended Domains.” The climate services use case centers on a composite knowledge graph that interlinks key concepts, datasets, and simulation outputs. It separates:

  • Domain knowledge: General, reusable knowledge about climate modeling
  • Case-specific knowledge: Details tailored to individual simulations or scenarios

This layered structure supports interoperability and context-aware reasoning.

Key components:

The graph models semantic relationships among:

  • Climate models: Formal simulation systems, described by features like supported processes and configuration parameters; linked to their developer institutions
  • Simulations: Executions of these models under specific scenarios, generating long-term climate projections
  • Datasets: Metadata about simulation outputs—covering variables (e.g., temperature, precipitation), resolution, coverage, and dataset structure

These elements are cross-linked to enable rich queries across models, simulations, and resulting data products.
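To illustrate the kind of cross-linked query this structure supports, the sketch below builds a toy RDF graph with rdflib and asks which datasets trace back to a given climate model through its simulations. The class and property names (ClimateModel, executesModel, producedBy, hasVariable) are placeholders for illustration only, not the actual HACID vocabulary.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Placeholder namespace; the real HACID ontology defines its own terms.
EX = Namespace("http://example.org/climate#")

g = Graph()
g.bind("ex", EX)

# One climate model, one simulation that executes it, one output dataset.
g.add((EX.ModelA, RDF.type, EX.ClimateModel))
g.add((EX.Sim1, RDF.type, EX.Simulation))
g.add((EX.Sim1, EX.executesModel, EX.ModelA))
g.add((EX.Dataset1, RDF.type, EX.Dataset))
g.add((EX.Dataset1, EX.producedBy, EX.Sim1))
g.add((EX.Dataset1, EX.hasVariable, Literal("near-surface air temperature")))

# Cross-link query: which datasets, with which variables, trace back to ModelA?
query = """
PREFIX ex: <http://example.org/climate#>
SELECT ?dataset ?variable WHERE {
    ?sim ex:executesModel ex:ModelA .
    ?dataset ex:producedBy ?sim ;
             ex:hasVariable ?variable .
}
"""
for row in g.query(query):
    print(row.dataset, row.variable)
```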

Methodology and technologies

Built with semantic web standards (RDF, OWL, SPARQL), the graph’s ontology was co-designed with experts from the UK Met Office, ensuring scientific accuracy. Existing datasets were mapped using standards-compliant, declarative methods. The ontology is open-access and supports external reuse and integration.
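The flavour of that declarative mapping step can be sketched roughly as follows: a small mapping table states which field of an existing metadata record becomes which RDF predicate, and a generic routine applies it. The field names and vocabulary below are hypothetical; real mappings of this kind are usually written in dedicated languages such as R2RML or RML rather than in Python.

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/climate#")

# Hypothetical metadata record describing one simulation output dataset.
record = {"id": "Dataset1", "variable": "precipitation", "resolution_km": 25}

# Declarative part: which record field maps to which predicate.
MAPPING = {
    "variable": EX.hasVariable,
    "resolution_km": EX.hasResolutionKm,
}

def map_record(record: dict, graph: Graph) -> None:
    """Apply the mapping table to one record, emitting RDF triples."""
    subject = EX[record["id"]]
    for field_name, predicate in MAPPING.items():
        graph.add((subject, predicate, Literal(record[field_name])))

g = Graph()
g.bind("ex", EX)
map_record(record, g)
print(g.serialize(format="turtle"))
```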

Scale and coverage

The current graph includes:

  • ~150,000 datasets
  • ~2,000 simulations
  • <100 climate models
  • ~700 climate variables
  • ~3 million RDF triples

While the knowledge graph does not store the massive numerical outputs of climate models, it provides rich metadata to help users discover and navigate available resources for further analysis or integration.

Integrating Human-in-the-Loop AI in Safety-Critical Systems: Insights from the CISC and Robomate Projects

This presentation explored the integration of AI and collaborative intelligence into safety-critical domains, drawing from interim findings in two major EU-funded research efforts: CISC (Collaborative Intelligence for Safety-Critical Systems) and Robomate. The work highlights how human-machine teaming can enhance safety, performance, and resilience in industries like manufacturing and robotics—moving beyond full automation toward systems that support, rather than replace, human operators.

At the heart of the research were three real-world Living Labs, each co-designed with industrial partners to reflect authentic operational challenges.

Living Lab One explored the integration of collaborative robotics into human work environments through a series of targeted studies aimed at optimizing human-system interaction. In one core study, researchers examined how operators interacted with a teleoperated robotic system during a cutting task. User performance was evaluated based on cut quality, task duration, and collision rates, while physiological measures such as EEG signals and eye movements, together with questionnaire data, were used to assess mental workload. The primary objective was to design an intuitive interface that could improve user performance while minimizing cognitive strain.

A second stream of research focused on naturalistic learning for cobots, investigating the mutual expectations between humans and collaborative robots. This included assessing how humans perceive robot feedback and how robots could adapt to human behavior using physiological indicators such as pupillometry alongside subjective ratings. The insights contributed to the development of a new generation of cobots that are safer, more accessible, and better suited to SME environments, further developed under the Robomate project. This initiative emphasized programming by demonstration, enabling non-experts to teach robots new tasks without writing code.
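Programming by demonstration can be pictured, in very simplified form, as recording the poses an operator physically guides the robot arm through and replaying them later. The sketch below shows that record-and-replay idea with made-up joint-angle values and no real robot driver behind it; it is an illustration of the concept, not the Robomate implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Demonstration:
    """A named sequence of joint configurations captured while an operator
    physically guides the robot arm (kinesthetic teaching)."""
    name: str
    waypoints: List[List[float]] = field(default_factory=list)

    def record(self, joint_angles: List[float]) -> None:
        # A real cobot would sample the arm's encoders while the operator
        # moves it by hand; here we simply store the values provided.
        self.waypoints.append(list(joint_angles))

    def replay(self) -> None:
        # A real controller would interpolate and send these targets to the
        # arm; printing stands in for that motion command.
        for step, angles in enumerate(self.waypoints, start=1):
            print(f"{self.name} step {step}: move joints to {angles}")

# Hypothetical demonstration of a simple pick-and-place task.
demo = Demonstration("pick_and_place")
demo.record([0.0, -1.2, 1.0, 0.3])   # above the part
demo.record([0.1, -0.9, 1.1, 0.3])   # grasp pose
demo.record([0.5, -1.2, 1.0, 0.3])   # above the bin
demo.replay()
```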

The final component simulated a collaborative assembly station to measure the physical and cognitive demands placed on human workers. Using tools such as ergonomic assessments, job rotation planning, and task analysis, researchers designed an adaptive cobot workstation featuring adjustable lighting, ergonomic layouts, and physiological monitoring systems. Together, these studies demonstrated how thoughtfully designed human-robot systems can improve both productivity and worker well-being.

Living Lab Two focused on enhancing human performance in the automotive manufacturing sector, particularly in the area of quality control. The core objective was to reduce human workload and improve defect detection during the vehicle painting process through the integration of artificial intelligence. Researchers developed a low-cost AI solution using latent space representations, a method known for its flexibility and transferability across domains. This approach enabled the classification of known defects and the detection of anomalies that could then be flagged for human verification, thus combining automation with human oversight.
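A minimal sketch of that idea, assuming a reconstruction-based latent representation, is shown below: samples are compressed into a low-dimensional latent space learned from defect-free examples, and anything whose reconstruction error exceeds a threshold derived from those examples is flagged for human verification. The data, architecture (PCA stands in for the learned encoder), and threshold rule are illustrative, not the project's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: flattened image patches of defect-free painted surfaces.
normal = rng.normal(0.0, 1.0, size=(500, 64))

# Learn a low-dimensional latent space (here via PCA on normal samples only).
mean = normal.mean(axis=0)
_, _, components = np.linalg.svd(normal - mean, full_matrices=False)
latent_basis = components[:8]          # 8 latent dimensions

def reconstruction_error(x: np.ndarray) -> np.ndarray:
    """Project samples into the latent space and measure what is lost."""
    centered = x - mean
    latent = centered @ latent_basis.T
    reconstructed = latent @ latent_basis
    return np.linalg.norm(centered - reconstructed, axis=1)

# Threshold chosen from the defect-free data (e.g., its 99th percentile).
threshold = np.percentile(reconstruction_error(normal), 99)

# New samples: anything above the threshold is routed to a human inspector.
candidates = rng.normal(0.0, 1.3, size=(10, 64))
for i, err in enumerate(reconstruction_error(candidates)):
    status = "flag for human verification" if err > threshold else "pass"
    print(f"sample {i}: error={err:.2f} -> {status}")
```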

The team implemented a four-phase framework designed to minimize operator involvement by automatically filtering and labeling familiar defects while dynamically adapting to new ones. This ensured a balance between efficiency and accuracy, keeping human operators in the loop where necessary. A SWOT analysis of the solution highlighted its strengths, such as scalability, adaptability, and alignment with Industry 4.0 goals, but also noted challenges, including limited training data, implementation costs, and the need for manual validation in the early stages. Despite these hurdles, the research demonstrated strong potential for full automation and broader applicability in industrial quality assurance.
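The gist of such a human-in-the-loop filter, assuming a classifier that reports a confidence score, could look like the following: confidently recognised defects are labelled automatically, uncertain cases are queued for the operator, and operator-confirmed labels are kept so the model can adapt to new defect types. The thresholds, field names, and dummy classifier are made up for illustration; they do not reproduce the project's four-phase framework.

```python
from typing import Callable, List, Tuple

# Classifier type: returns (predicted_label, confidence) for a sample.
Classifier = Callable[[dict], Tuple[str, float]]

def triage(samples: List[dict], classify: Classifier, threshold: float = 0.9):
    """Auto-label confident predictions; queue uncertain ones for a human."""
    auto_labelled, review_queue = [], []
    for sample in samples:
        label, confidence = classify(sample)
        if confidence >= threshold:
            auto_labelled.append((sample, label))    # no operator needed
        else:
            review_queue.append(sample)              # human verification
    return auto_labelled, review_queue

def incorporate_feedback(training_set: list, reviewed: List[Tuple[dict, str]]):
    """Operator-confirmed labels extend the training set so the model can
    adapt to previously unseen defect types over time."""
    training_set.extend(reviewed)
    # A real pipeline would retrain or fine-tune the classifier here.

# Hypothetical usage with a dummy classifier.
dummy = lambda s: ("scratch", 0.95) if s.get("known") else ("unknown", 0.4)
auto, queue = triage([{"known": True}, {"known": False}], dummy)
print(len(auto), "auto-labelled;", len(queue), "sent for human review")
```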

Living Lab Three explored the integration of collaborative intelligence within control room operations, focusing on how AI can assist operators in managing alarm overload, a persistent problem in which operators face far more alarms than industry safety standards recommend. The team developed a decision-support interface aimed at filtering, prioritizing, and explaining alarms using AI tools like influence diagrams and reinforcement learning. The system was tested in a high-fidelity simulator replicating an ammonia production plant, with scenarios of varying complexity to evaluate the effect of AI-supported interfaces versus traditional methods.
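The project's decision support reportedly builds on influence diagrams and reinforcement learning; as a much simpler stand-in, the sketch below scores incoming alarms by severity and recency, demotes chattering duplicates, and attaches a short explanation for the operator. The alarm fields, scoring weights, and tags are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Alarm:
    tag: str          # plant item that raised the alarm
    severity: int     # 1 (low) .. 5 (high), hypothetical scale
    age_s: float      # seconds since the alarm was raised
    repeats: int      # how often the same tag has chattered recently

def priority(alarm: Alarm) -> float:
    """Higher score = shown earlier. Weights are illustrative only."""
    recency = 1.0 / (1.0 + alarm.age_s / 60.0)   # newer alarms matter more
    chatter_penalty = 0.5 if alarm.repeats > 3 else 1.0
    return alarm.severity * recency * chatter_penalty

def explain(alarm: Alarm) -> str:
    """Short rationale shown alongside the ranked alarm."""
    reasons = [f"severity {alarm.severity}"]
    if alarm.repeats > 3:
        reasons.append("demoted: repeating/chattering alarm")
    return ", ".join(reasons)

alarms = [
    Alarm("TI-101 high temperature", severity=4, age_s=10, repeats=0),
    Alarm("LI-204 low level", severity=2, age_s=5, repeats=6),
    Alarm("PI-310 high pressure", severity=5, age_s=120, repeats=0),
]

for alarm in sorted(alarms, key=priority, reverse=True):
    print(f"{priority(alarm):.2f}  {alarm.tag}  ({explain(alarm)})")
```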

Four experimental conditions were studied, ranging from no support to full AI assistance with electronic procedures. Results showed that in low-complexity scenarios, AI support improved operator performance. However, in high-complexity scenarios, performance actually declined when using AI support compared to some non-AI conditions. The team identified that reduced situational awareness—known as “out-of-the-loop” syndrome—was a key factor, indicating that while AI can enhance efficiency, it can also unintentionally cause over-reliance or detachment in complex situations. The findings underscored the importance of designing AI systems that maintain operator engagement and trust, suggesting future directions focused on dynamic function allocation, trust calibration, and operator training to enable truly effective human-AI collaboration in safety-critical environments.

Overall, the webinar “AI & Robotics at Work: Innovations Driving Productivity” offered valuable insights into how cutting-edge technologies are transforming human-machine collaboration across various industries. Through real-world examples, the discussion highlighted both the potential and the challenges of implementing AI-driven solutions—from improving defect detection in manufacturing to optimizing control room decision-making and managing alarm overload. The presentations emphasized the importance of trust, explainability, and human-centered design in the deployment of AI and robotics. As these technologies continue to evolve, the webinar made it clear that their success depends not just on innovation, but on thoughtful integration with human work processes, ensuring that productivity gains do not come at the cost of human oversight, safety, or engagement.