Khaled Elfakharany
Professional · Completed

Adam.ai - AI Meeting Assistant Platform

Adam.ai
Aug 2016 - Mar 2017
Team of 5
Overview

An innovative AI meeting assistant platform delivered across three form factors: a physical robot that sits on meeting tables, a mobile iOS app for portable use, and CCTV camera integration for transforming any room into a smart meeting space. The system used computer vision for participant recognition and natural language processing for automated meeting note generation and task assignment.

Problem Solved

Meeting participants spent significant time on administrative tasks like note-taking and task tracking, reducing focus on actual discussion content

My Role: R&D Engineer
  • Developed computer vision systems using OpenCV for person recognition (see the sketch after this list)
  • Implemented natural language processing for meeting transcription and understanding
  • Built iOS mobile application for portable meeting assistant functionality
  • Designed and manufactured the first physical 3D body for Adam robot using sliced MDF technique
  • Integrated hardware sensors with software AI systems
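
For illustration, a minimal sketch of how the OpenCV-based person recognition could be structured, using a Haar cascade for face detection and the LBPH recognizer from opencv-contrib-python; the model file, label map, and distance threshold are hypothetical placeholders rather than the original system.

```python
# Sketch: detect faces in a video frame and match them against known participants.
# Assumes opencv-contrib-python (for cv2.face) and a pre-trained LBPH model.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("participants_lbph.yml")           # hypothetical trained model
LABELS = {0: "Alice", 1: "Bob"}                     # illustrative label map

def identify_participants(frame):
    """Return names of recognized participants in a single video frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    recognized = []
    for (x, y, w, h) in faces:
        label, distance = recognizer.predict(gray[y:y + h, x:x + w])
        if distance < 70:                           # assumed match threshold
            recognized.append(LABELS.get(label, "unknown"))
    return recognized
```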
Key Outcomes
  • Developed physical robot form factor with sensors, camera, and microphone for automated meeting capture
  • Implemented person recognition system using computer vision enabling automatic participant identification
  • Created automated task assignment system based on meeting content analysis
  • Built iOS mobile application providing portable meeting assistant functionality
  • Designed CCTV integration module to transform standard rooms into smart meeting spaces
  • Manufactured first physical 3D body for Adam robot using innovative sliced MDF construction technique
  • Presented completed robot prototype to Yoxel CEO during company visit

Scale

  • Three form factors delivered: robot, mobile app, CCTV integration

Technology Stack

Primary Technologies
Python, C++, OpenCV
Secondary Technologies
Swift, iOS SDK, Natural Language Processing
Infrastructure
Cloud services, Embedded systems
Tools
Xcode, Git, CAD software
Challenges & Solutions
Technical

Challenge

Integrating computer vision person recognition with real-time meeting audio processing required handling multiple data streams simultaneously

Solution

Developed pipeline architecture that processed video frames for face detection while separately analyzing audio streams for speaker identification and content extraction

Impact

Achieved reliable multi-modal meeting capture combining visual and audio understanding
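
A minimal sketch of how such a two-stream pipeline could be wired up, assuming separate video and audio workers feeding timestamped result queues; the sources and analysis callbacks (get_frame, detect_faces, get_chunk, transcribe) are hypothetical stand-ins, not the original Adam.ai code.

```python
# Sketch: run face detection and audio analysis as independent pipeline stages,
# then merge their timestamped outputs downstream.
import queue
import threading
import time

video_results = queue.Queue()
audio_results = queue.Queue()

def video_worker(get_frame, detect_faces):
    """Continuously run face detection on incoming frames."""
    while True:
        frame = get_frame()                 # hypothetical frame source
        if frame is None:
            break
        video_results.put((time.time(), detect_faces(frame)))

def audio_worker(get_chunk, transcribe):
    """Continuously analyze incoming audio chunks."""
    while True:
        chunk = get_chunk()                 # hypothetical audio source
        if chunk is None:
            break
        audio_results.put((time.time(), transcribe(chunk)))

def run_pipeline(get_frame, detect_faces, get_chunk, transcribe):
    """Start both stages; downstream code can join results by timestamp."""
    threads = [
        threading.Thread(target=video_worker, args=(get_frame, detect_faces)),
        threading.Thread(target=audio_worker, args=(get_chunk, transcribe)),
    ]
    for t in threads:
        t.start()
    return threads
```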

Hardware

Challenge

Creating a physical robot form factor that was professional enough for corporate meeting rooms while housing all necessary sensors and processing hardware

Solution

Designed an innovative sliced-MDF construction in CAD software, assembling 2D sheets into a 3D puzzle-like structure that provided an attractive housing for the hardware components

Impact

Produced first physical embodiment of Adam robot that was presentable to executive stakeholders

Domain

Challenge

Natural language understanding had to accurately extract action items and task assignments from unstructured meeting conversations

Solution

Implemented NLP patterns for detecting commitment language, action verbs, and assignment structures to automatically identify and attribute tasks to participants

Impact

Enabled automated post-meeting task lists without manual note-taking
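
A minimal sketch of this kind of rule-based commitment detection, assuming utterances are already transcribed and attributed to speakers; the patterns, data shapes, and fall-back-to-speaker heuristic are illustrative, not the original pipeline.

```python
# Sketch: flag utterances containing commitment language and attribute the
# resulting task to a participant. Patterns are illustrative examples only.
import re

COMMITMENT_PATTERNS = [
    re.compile(r"\bI(?:'| wi)ll\s+(?P<task>.+)", re.IGNORECASE),           # "I'll send the report"
    re.compile(r"\b(?P<owner>\w+)\s+will\s+(?P<task>.+)", re.IGNORECASE),  # "Sara will book the room"
]

def extract_action_items(utterances):
    """utterances: list of (speaker, text) pairs from the meeting transcript."""
    tasks = []
    for speaker, text in utterances:
        for pattern in COMMITMENT_PATTERNS:
            match = pattern.search(text)
            if match:
                groups = match.groupdict()
                # Default owner to the speaker when no explicit assignee is named.
                owner = groups.get("owner") or speaker
                tasks.append({"owner": owner, "task": groups["task"].strip()})
                break
    return tasks

# Example:
# extract_action_items([("Khaled", "I'll draft the agenda by Friday")])
# -> [{"owner": "Khaled", "task": "draft the agenda by Friday"}]
```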

The Story

Situation

Adam.ai was developing a cutting-edge AI meeting assistant platform, but needed to bring the technology from research into tangible products that could demonstrate value to enterprise customers. The challenge was creating three distinct form factors that each served different use cases while sharing the same core AI capabilities.

Task

As an R&D engineer, Khaled was responsible for developing the computer vision and NLP systems while also designing the physical hardware embodiment of the robot assistant.

Action

Khaled built the person recognition system using OpenCV that could identify meeting participants from video feeds. He implemented the natural language processing pipeline that extracted meeting notes and task assignments from audio streams. For the physical robot, he designed the 3D body using SolidWorks, creating an innovative sliced MDF construction technique where 2D sheets assembled into a 3D puzzle structure housing the sensors and camera. He also developed the iOS mobile app providing the same functionality in a portable format. The technical work required constant iteration between hardware constraints and software capabilities.

Result

The project delivered three working form factors of the Adam meeting assistant. The physical robot prototype was impressive enough to present to Yoxel's CEO during a company visit. This early AI project established foundational skills in computer vision, NLP, and hardware-software integration that would influence Khaled's approach to complex technical problems throughout his career.

What I Learned

Technical

  • Computer vision for person recognition using OpenCV
  • Natural language processing for meeting understanding
  • Hardware-software integration for AI robotics
  • Multi-platform development (physical robot, mobile, CCTV)
  • Embedded systems development with sensor integration

Soft Skills

  • Working in early-stage AI R&D environment
  • Cross-functional collaboration between hardware and software teams
  • Presenting technical prototypes to executive leadership

Key Insights

  • 💡 AI assistants work best when they operate transparently in the background rather than requiring explicit interaction
  • 💡 Physical form factor significantly impacts user acceptance of AI systems in professional settings
  • 💡 Transition from academic research to industry application requires focusing on practical reliability over theoretical perfection