DR 5.5: Combining Basic Cross-Modal Concepts into Novel Concepts

May 1, 2012 · Danijel Skočaj, Alen Vrečko, Barry Ridge, Peter Uršič, Aleš Leonardis, Sergio Roa, Geert-Jan Kruijff, Miroslav Janíček
Last updated on May 1, 2012

