Tarek El-Gaaly

Ph.D. Candidate, Rutgers University

Short Bio: I am a computer science Ph.D. student at Rutgers University. I am a member of the Computer Vision Group at the Center for Biomedical Imaging and Modelling (CBIM) under Professor Dimitri Metaxas. My advisor is Professor Ahmed Elgammal. My research focuses on Computer Vision, Machine Learning and Robotics. More information can be found in my resume.



Selected Projects

Object Recognition using Convolutional Neural Networks:
We analyze the layers of CNNs to see how they transform object-view manifolds. More coming soon.

Amr Bakry*, Mohamed Elhoseiny*, Tarek El-Gaaly* and Ahmed Elgammal, Digging Deep into the Layers of CNNs: In Search of How CNNs Achieve View Invariance (*equal contribution) [pdf]
3D Object Recognition:
We built a Bayesian hierarchical grouping model for perceptual object-part decomposition based on medial-axis representations of parts.

This work was published in the Association for the Advancement of Artificial Intelligence (AAAI) 2015:

Tarek El-Gaaly, Vicky Froyen, Ahmed Elgammal, Jacob Feldman and Manish Singh, A Bayesian Approach to Perceptual 3D Object-Part Decomposition using Skeleton-based Representations, Accepted AAAI 2015 (~26% acceptance rate) [pdf] [bibtex]


Joint Object Categorization and Pose Estimation:
In this work we build a framework based on object-view manifold analysis to perform simultaneous object categorization, instance recognition and pose estimation. Multiple images of an object are known to lie on a low-dimensional intrinsic view-manifold. The premise of this work is that the feature space deforms this conceptual view-manifold: a unit circle in the case of table-top objects rotating on a turntable and captured by a camera at a fixed height, or a sphere in the more general case. The deformation is captured by a homeomorphic mapping from the input feature space to points on the conceptual view-manifold. I also built a near real-time system based on this work (see video on left).
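The mapping idea can be sketched with ordinary kernel ridge regression: regress from feature vectors onto points (cos a, sin a) on the conceptual unit circle, then read the pose of a new feature off the angle of its mapped point. The snippet below is an illustrative toy under those assumptions (synthetic 3-D features, RBF kernel, made-up function names), not the formulation used in the papers.

```python
import numpy as np

def rbf_kernel(X, C, gamma=1.0):
    # Pairwise Gaussian affinities between rows of X and centers C
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_circle_map(features, angles, gamma=1.0, reg=1e-6):
    # Regularized kernel regression from feature space onto the unit
    # circle (cos a, sin a): a toy stand-in for the homeomorphic
    # view-manifold mapping described above.
    K = rbf_kernel(features, features, gamma)
    targets = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return np.linalg.solve(K + reg * np.eye(len(K)), targets)

def estimate_pose(x, features, W, gamma=1.0):
    # Map a new feature vector onto the circle and read off its angle
    cs = (rbf_kernel(x[None, :], features, gamma) @ W)[0]
    return np.arctan2(cs[1], cs[0]) % (2 * np.pi)

# Synthetic "features": a smooth deformation of the view circle
angles = np.linspace(0, 2 * np.pi, 36, endpoint=False)
features = np.stack([np.cos(angles), np.sin(angles), np.sin(2 * angles)], axis=1)
W = fit_circle_map(features, angles)
est = estimate_pose(features[5], features, W)   # ~ angles[5]
```

Regressing onto (cos a, sin a) rather than onto the raw angle keeps the circular pose variable continuous across the 0/2&#960; wraparound.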

This work was published in:
  • Haopeng Zhang, Tarek El-Gaaly, Ahmed Elgammal, Zhiguo Jiang Factorization of View-Object Manifolds for Joint Object Recognition and Pose Estimation, Elsevier - Computer Vision and Image Understanding (CVIU) 2015 [pdf]
  • Haopeng Zhang, Tarek El-Gaaly, Ahmed Elgammal, Zhiguo Jiang Joint Object and Pose Recognition using Homeomorphic Manifold Analysis, AAAI 2013 (~29% acceptance rate) [pdf] [bibtex]
  • Object Localization:
    Object localization and perceptual grouping of local features in images using visual and spatial affinity in a transductive semi-supervised learning framework.

    This work was published in:
    Tarek El-Gaaly, Marwan Torki and Ahmed Elgammal, Spatial-Visual Label Propagation for Local Feature Classification, ICPR 2014 [pdf] [bibtex]
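The propagation step can be sketched in the style of graph-based label spreading: build an affinity graph over local features (here a single toy Gaussian affinity standing in for the combined visual and spatial one), normalize it, and iterate so the few labeled features pull the unlabeled ones toward their classes. The graph, parameters and function names below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def propagate_labels(affinity, labels, alpha=0.9, iters=100):
    # Transductive label propagation: unlabeled features (label -1)
    # inherit labels from their neighbors on the affinity graph.
    n = len(labels)
    classes = sorted(set(labels) - {-1})
    Y = np.zeros((n, len(classes)))
    for i, l in enumerate(labels):
        if l != -1:
            Y[i, classes.index(l)] = 1.0
    # Symmetric normalization S = D^-1/2 W D^-1/2
    d = affinity.sum(1)
    S = affinity / np.sqrt(np.outer(d, d))
    F = Y.copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y
    return np.array(classes)[F.argmax(1)]

# Toy graph: two spatial clusters, one labeled seed in each
pts = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]], float)
aff = np.exp(-((pts[:, None] - pts[None]) ** 2).sum(-1))
pred = propagate_labels(aff, [0, -1, -1, 1, -1, -1])
```

Because the two clusters are far apart, the affinity between them is effectively zero and each seed label spreads only within its own cluster.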
    NASA Centennial Challenge 2013 - Sample Return Challenge:
    In this project I collaborated with Worcester Polytechnic Institute (WPI) to build a rover for the NASA Centennial Challenge. Our robot AERO (Autonomous Exploration Rover) can be seen to the left on the starting platform with the team in the background. The website documenting the building of the robot can be seen here: Blog
    Aerial Vehicle Localization using Semantic Geometric Hashing:
    In this work we used semantic features, such as buildings, in aerial views to localize within satellite maps. We built an algorithm that performs geometric hashing on these semantic features, a form of large-scale global localization across urban terrain.

    This work was published in:
    Turgay Senlet, Tarek El-Gaaly, Ahmed Elgammal, Hierarchical Semantic Hashing: Visual Localization from Buildings on Maps, ICPR 2014 [pdf] [supplementary material] [bibtex]
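Classical geometric hashing, which this work builds on over semantic (building) features, can be sketched as follows: every ordered pair of feature points defines a similarity-invariant basis, the remaining points are quantized in that basis and stored in a hash table, and at query time a scene votes for the model whose table entries it hits most often. The point sets, bin size and function names below are invented for illustration.

```python
import numpy as np
from collections import defaultdict

def to_basis(p, a, b):
    # Coordinates of p in the frame defined by ordered pair (a, b);
    # invariant to translation, rotation and uniform scale.
    u = b - a
    v = np.array([-u[1], u[0]])            # perpendicular to u
    return np.linalg.solve(np.stack([u, v], axis=1), p - a)

def build_table(models, q=0.25):
    # Hash quantized basis-frame coordinates of every point under
    # every ordered basis pair, tagged with (model, basis).
    table = defaultdict(list)
    for name, pts in models.items():
        for i in range(len(pts)):
            for j in range(len(pts)):
                if i == j:
                    continue
                for p in pts:
                    key = tuple(np.round(to_basis(p, pts[i], pts[j]) / q))
                    table[key].append((name, i, j))
    return table

def match(table, scene, q=0.25):
    # Probe the table with every scene basis pair and tally votes
    votes = defaultdict(int)
    for i in range(len(scene)):
        for j in range(len(scene)):
            if i == j:
                continue
            for p in scene:
                key = tuple(np.round(to_basis(p, scene[i], scene[j]) / q))
                for name, _, _ in table.get(key, []):
                    votes[name] += 1
    return max(votes, key=votes.get)

# Two toy "building layouts"; the scene is a rotated, scaled,
# translated copy of layout A
A = np.array([[0, 0], [2, 0], [2, 1], [0, 1.5]])
B = np.array([[0, 0], [1, 0], [0, 3], [2, 2]])
table = build_table({"A": A, "B": B})
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
scene = (A @ R.T) * 1.5 + np.array([10, -4])
best = match(table, scene)
```

Because the basis-frame coordinates are similarity-invariant, the transformed scene produces the same hash keys as model A and out-votes B.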
    Micro-Aerial Vehicles for 3D Computer Vision:
    Experiments in using micro-aerial vehicles for 3D computer vision. More videos can be found here: Aerial Vehicle research
    Autonomous Airboat Obstacle Avoidance:
    Using monocular vision from an Android smartphone camera, I built an autonomous obstacle avoidance system based on optical flow, flow-trajectory clustering and reflection detection. A live demo is shown on the left. The website for this project is: (CMU Cooperative Robotic Watercraft). More videos on Robotics.net.

    This work was published in:
    Tarek El-Gaaly, Christopher Tomaszewski, Abhinav Valada et al., Visual Obstacle Avoidance for Autonomous Watercraft using Smartphones, Autonomous Robots and Multirobot Systems (ARMS), Workshop in AAMAS 2013 [pdf]
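The avoidance logic can be caricatured in a few lines: threshold the optical-flow field (nearby obstacles produce large apparent motion from a moving boat), group the fast-moving pixels into connected clusters, and steer toward the side with less obstacle mass. This is a deliberately simplified stand-in for the published pipeline; the grid, threshold and steering rule are invented for illustration.

```python
import numpy as np

def cluster_flow(flow, mag_thresh=2.0):
    # Group flow vectors whose magnitude exceeds the background
    # threshold into 4-connected components (flood fill).
    h, w = flow.shape[:2]
    fg = np.linalg.norm(flow, axis=2) > mag_thresh
    labels = -np.ones((h, w), int)
    next_label = 0
    for y in range(h):
        for x in range(w):
            if fg[y, x] and labels[y, x] == -1:
                stack = [(y, x)]
                labels[y, x] = next_label
                while stack:
                    cy, cx = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and fg[ny, nx] and labels[ny, nx] == -1:
                            labels[ny, nx] = next_label
                            stack.append((ny, nx))
                next_label += 1
    return labels, next_label

def steer_away(labels, n_clusters, width):
    # Steer toward the side with less obstacle mass
    if n_clusters == 0:
        return "straight"
    cols = np.where(labels >= 0)[1]
    return "right" if cols.mean() < width / 2 else "left"

# Toy flow field: a fast-moving blob (nearby obstacle) on the left
flow = np.zeros((10, 10, 2))
flow[3:7, 1:4] = (4.0, 0.0)
labels, n = cluster_flow(flow)
cmd = steer_away(labels, n, width=10)
```

In the real system the flow field would come from a tracker such as pyramidal Lucas-Kanade rather than being synthesized.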

    RGBD Table-top Object Pose Recognition:
    The red annotations are the ground-truth pose angles (i.e. azimuth/yaw) of the tabletop objects (from the University of Washington RGB-D dataset). Blue annotations show the pose estimated from visual local features alone. Green annotations show the final recognized pose using both visual and depth information.

    This work was published in:
    Tarek El-Gaaly, Marwan Torki, Ahmed Elgammal, Maneesh Singh, RGBD Object Pose Recognition using Local-Global Multi-Kernel Regression, International Conference on Pattern Recognition, ICPR 2012 [pdf] [bibtex]
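The fusion idea can be sketched as kernel ridge regression with a convex combination of two Gaussian kernels, one on visual features and one on depth features, regressing onto (cos a, sin a) so the circular pose variable wraps correctly. The weights, kernels and synthetic embeddings below are illustrative assumptions, not the paper's local-global formulation.

```python
import numpy as np

def gauss_kernel(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_pose(Xv, Xd, angles, w=0.5, reg=1e-6):
    # Kernel ridge regression onto (cos, sin) of the pose angle,
    # mixing a visual kernel and a depth kernel with weight w
    K = w * gauss_kernel(Xv, Xv) + (1 - w) * gauss_kernel(Xd, Xd)
    T = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return np.linalg.solve(K + reg * np.eye(len(K)), T)

def predict_pose(xv, xd, Xv, Xd, M, w=0.5):
    k = w * gauss_kernel(xv[None, :], Xv) + (1 - w) * gauss_kernel(xd[None, :], Xd)
    c, s = (k @ M)[0]
    return np.arctan2(s, c) % (2 * np.pi)

# Toy data: two different embeddings of the same pose circle,
# standing in for visual and depth feature spaces
angles = np.linspace(0, 2 * np.pi, 24, endpoint=False)
Xv = np.stack([np.cos(angles), np.sin(angles)], axis=1)
Xd = np.stack([np.cos(angles + 0.3), np.sin(angles + 0.3)], axis=1)
M = fit_pose(Xv, Xd, angles)
est = predict_pose(Xv[7], Xd[7], Xv, Xd, M)   # ~ angles[7]
```

Adding kernels is a simple form of multi-kernel fusion: each modality contributes its own similarity structure while the regression stays a single linear solve.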
    Autonomous indoor robot navigation:
    In this work I built an algorithm to perform autonomous indoor navigation using an Xbox Kinect sensor on a Pioneer robot (P3DX).

    MSc in Computer Science Thesis: In my master's thesis I researched a new method of measuring atmospheric scattering from sequences of images. The goal was to correlate these measurements with particulate matter (PM10). A byproduct of our approach is image/scene dehazing, as seen in the bottom image. The first figure shows a sequence of images of a hazy scene. The second figure shows the scene resulting from two state-of-the-art dehazing methods and our dehazing algorithm (rightmost). Our dehazing method recovers the hue of the scene and also returns a natural-looking sky without any extra processing.

    This work was published in the International Conference on Computer Vision Theory and Applications (VISAPP) 2010:

    Tarek El-Gaaly and Joshua Gluckman, Measuring Atmospheric Scattering from Digital Image Sequences, VISAPP 2010 [pdf] [bibtex]


    For a full description of my MSc thesis, refer to the thesis document: [pdf]
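The image-formation model underlying this line of work is the standard atmospheric-scattering (Koschmieder) equation I = J*t + A*(1 - t), with transmission t = exp(-beta * d) for scattering coefficient beta and depth d; once the airlight A and transmission are estimated (in the thesis, from an image sequence), dehazing amounts to inverting this model. The snippet below is a generic round trip on synthetic data, not the thesis' estimation procedure.

```python
import numpy as np

def dehaze(I, A, t, t_min=0.1):
    # Invert I = J*t + A*(1 - t) to recover scene radiance J,
    # clamping the transmission to avoid amplifying noise in
    # regions of dense haze.
    t = np.maximum(t, t_min)
    return (I - A) / t[..., None] + A

# Toy round trip: synthesize a hazy image, then invert the model
J = np.random.default_rng(0).uniform(0, 1, (4, 4, 3))  # clear scene
depth = np.linspace(0.5, 2.5, 16).reshape(4, 4)        # scene depth
beta = 0.8                                             # scattering coefficient
t = np.exp(-beta * depth)                              # transmission
A = 0.9                                                # airlight
I = J * t[..., None] + A * (1 - t[..., None])          # hazy image
J_rec = dehaze(I, A, t)
```

With exact A and t the inversion recovers J perfectly; in practice both must be estimated, which is where dehazing methods differ.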



    Funds/Grants

    • Collaboration and partial funding from Siemens Corporate Research
    • Rutgers faculty seed funding - for multi-UAV vision research




    Teaching

    About Me


    I am a Computer Science Ph.D. student at Rutgers University. I did my undergraduate and master's degrees in computer science at the American University in Cairo.

    Research Interests

    Computer Vision
    Machine Learning
    Robotics
    AI
    Space Exploration
    Swarm Intelligence

    Contact

    tgaaly at cs.rutgers.edu