Date of Award

Fall 2025

Document Type

Open Access Dissertation

Department

Mechanical Engineering

First Advisor

Ramy Harik

Abstract

Manufacturers face two opposing challenges: the escalating demand for customized products and the pressure to reduce lead times. Current manufacturing equipment, although reliable, operates at the limits of its technology and lacks adaptability to dynamic environments. This trade-off has been described as the optimal degree of automation, a threshold beyond which further automation incurs more cost than benefit. Smart manufacturing has introduced adaptive, data-driven systems, but many deployments still lack the contextual adaptability and autonomous decision-making required to handle unplanned disruptions. This underscores the need for intelligent, autonomous systems capable of real-time adaptation without costly reconfiguration, minimizing human intervention during production disruptions. Such systems transcend adaptive automation by addressing not only operational tasks (the "what") but also strategic execution and optimization (the "how" and "why"), thereby fulfilling the industry's growing need for systems that are not merely reactive but proactive, and not just efficient but intelligent in balancing efficiency, safety, and adaptability.

Yet the broader adoption of autonomous manufacturing systems remains hampered by conceptual ambiguity and practical constraints. The term "autonomy" is often conflated with advanced automation or isolated AI functions, obscuring which subsystems truly act independently. Moreover, stringent safety and reliability requirements, limited trust in AI-driven decisions, complex human–machine interactions, and data scarcity present formidable operational barriers that impede progress.

To address these challenges, this dissertation introduces three key contributions. First, it establishes a structured taxonomy of manufacturing system autonomy, synthesized from literature and validated in industry practice.
The taxonomy delineates a continuum from traditional automated systems to adaptive, intelligent, smart, autonomous, and cognitive manufacturing systems, defined by progressively increasing capabilities, intelligence, decision-making, and levels of human–machine interaction. This framework clarifies distinctions between conventional automation, advanced AI-driven functionality, and genuine autonomy, providing a socio-technical foundation to assess system maturity and guide development toward fully autonomous production. Second, to demonstrate practical implementation, a Computer-Aided Assembly Planning (CAAP) framework is developed, integrated with computer vision (CV) to dynamically generate and adapt assembly sequences in response to real-time environmental perception and component variability. The system employs multiple CV algorithms on RGB-D sensor data: it segments objects to create masks, converts them into depth-augmented point clouds, and registers these to CAD models via an Iterative Closest Point algorithm, yielding precise 6-DoF pose estimates for parts. Using these perceptual inputs, the CAAP system extracts explicit and inferred assembly constraints from the CAD data and constructs a directed acyclic graph (DAG) representing precedence relationships. Assembly plans are derived through a constraint-driven search, with each step's kinematic feasibility rigorously evaluated through trajectory simulation and volumetric collision detection with path planning. In cases where 3D CAD models are unavailable, a fallback mechanism uses optical character recognition and image segmentation on 2D CAD drawings to identify parts, infer feasible sequences, and simulate assembly operations in 2D environments. Third, a closed-loop, diversity-aware active learning framework is introduced for data-efficient training of computer vision models. 
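The precedence-graph sequencing idea described above can be illustrated with a minimal sketch. All part names below are hypothetical, and the feasibility hook is a placeholder for the trajectory-simulation and collision checks the dissertation describes; this is an illustration of the DAG-based search pattern, not the CAAP implementation itself.

```python
from collections import defaultdict, deque

def assembly_order(parts, precedence):
    """Derive one valid assembly sequence from precedence constraints.

    parts: iterable of part identifiers.
    precedence: list of (before, after) pairs forming a directed acyclic graph.
    In the full system each candidate step would additionally be validated by
    trajectory simulation and volumetric collision detection before acceptance.
    """
    indegree = {p: 0 for p in parts}
    successors = defaultdict(list)
    for before, after in precedence:
        successors[before].append(after)
        indegree[after] += 1

    # Parts with no unmet prerequisites are immediately assemblable.
    ready = deque(p for p in parts if indegree[p] == 0)
    order = []
    while ready:
        part = ready.popleft()
        order.append(part)
        for nxt in successors[part]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)

    if len(order) != len(indegree):
        raise ValueError("precedence constraints contain a cycle")
    return order

# Hypothetical four-part assembly: base precedes bracket and shaft; shaft precedes cap.
seq = assembly_order(
    ["base", "bracket", "shaft", "cap"],
    [("base", "bracket"), ("base", "shaft"), ("shaft", "cap")],
)
```

The sketch uses Kahn's algorithm, which makes each precedence relation explicit as an in-degree count; any ordering it emits satisfies every constraint in the DAG.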
This approach leverages self-supervised SimCLR feature embeddings to cluster unlabeled images and iteratively select the most informative and diverse samples for labeling, thereby minimizing annotation effort while delivering production-level accuracy and scalability. To further reduce reliance on real data, a complementary method generates high-fidelity synthetic industrial images, mitigating training bottlenecks and ensuring robust, scalable performance without disrupting operations. Collectively, these contributions bridge key gaps in smart manufacturing by clarifying the concept of autonomy and advancing the development of practical, adaptive, and cognitive production systems.
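The diversity-aware selection step can be sketched with a greedy farthest-point heuristic over feature embeddings. This is a simplified stand-in for the clustering-based strategy described above, and the toy 2-D vectors are hypothetical; real SimCLR embeddings would be high-dimensional.

```python
def select_diverse(embeddings, k):
    """Greedily pick k mutually distant samples from a pool of embeddings.

    embeddings: list of feature vectors (e.g., self-supervised image features).
    Returns the indices of the selected samples. Each pick maximizes the
    minimum distance to everything already selected, so the chosen set
    spreads across the feature space rather than clustering in one region.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    selected = [0]  # seed with the first sample
    while len(selected) < k:
        best = max(
            (i for i in range(len(embeddings)) if i not in selected),
            key=lambda i: min(dist(embeddings[i], embeddings[j]) for j in selected),
        )
        selected.append(best)
    return selected

# Toy embeddings: two tight clusters; a diverse pick of 2 should span both.
feats = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
picked = select_diverse(feats, 2)
```

In an active-learning loop, the selected indices would be sent for annotation, the model retrained, and the pool re-embedded before the next round.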

Rights

© 2025, Ibrahim Yousif
