Bachelor of Engineering, Alexandria University
- Specialized in Computer and Communication Engineering.
- Graduated with a CGPA of 3.31/4.0.
- Graduation Project: Research on Adversarial Attacks on Deep Learning Models.
Software Engineer
I am a software engineer focused on the intersection of Machine Learning, Security, and Low-Level Systems. I build where complexity meets consequence, driven by the belief that software is not just a career but a tool for meaningful impact.

Beyond the technical stack, I consider myself a lifelong student. I spend a significant amount of my time researching and learning, not just within Computer Science but across any field that sparks my curiosity. I believe the most innovative solutions come from looking outside your own bubble and applying diverse perspectives to technical challenges.

I'm not here to just go through the motions of a career. I want to leave a lasting mark through the things I build and the knowledge I share. This site is my way of documenting that journey: a place where I ship my projects, share my research, and pass on what I've learned. I am always looking for meaningful work and collaborations that move the needle.
﷽ ﴿ وَقُلِ ٱعۡمَلُواْ فَسَيَرَى ٱللَّهُ عَمَلَكُمۡ وَرَسُولُهُۥ وَٱلۡمُؤۡمِنُونَۖ وَسَتُرَدُّونَ إِلَىٰ عَٰلِمِ ٱلۡغَيۡبِ وَٱلشَّهَٰدَةِ فَيُنَبِّئُكُم بِمَا كُنتُمۡ تَعۡمَلُونَ ﴾ [التوبة: 105]
"And say: Act, for Allah will see your deeds, and so will His Messenger and the believers; and you will be returned to the Knower of the unseen and the seen, and He will inform you of what you used to do." [At-Tawbah 9:105]
Comprehensive deep learning course covering neural networks, CNNs, RNNs, and practical applications. Taught by Andrew Ng from deeplearning.ai.
Data Engineer at Ejada Systems
- Contributed to a full-stack name-matching engine for enterprise applications using React, Spring Boot, and Apache Solr, deployed against PostgreSQL, Oracle, and SQL Server.
- Contributed to a real-time streaming platform built on Apache Flink, with integrations including Kafka, HDFS, GoRules, and JDBC.
- Designed and implemented a versatile MVEL-based rules engine in Java, enabling the execution of complex business logic and custom functions.
- Handled deployments, server management, disaster recovery, and client-side support across multiple environments and clients.
- Worked across the full stack and directly with big data tools and rules engines to deliver reliable, scalable systems.
Deep Learning R&D Intern at Alexandria University
Worked on a Natural Language Processing (NLP) project to detect the programming language of a given file using Transformers (fine-tuned a pretrained transformer for the task).
Software Engineer Intern at One Health Network
Worked with .NET Core and PHP to develop a LIMS and a multi-vendor medical marketplace application, along with task-specific work in JavaScript, MS SQL, and MariaDB.
In this project, we explored the role of Adversarial Machine Learning in Natural Language Processing, focusing on how carefully crafted inputs can expose vulnerabilities in ML models. Our goal was twofold: to evaluate model robustness and to contribute toward building more secure NLP systems. We implemented and tested both character-level attacks—such as flips, spacing tricks, and inner-letter shuffles—and word-level attacks, including transformer-based strategies like BERT Attack. What made our work unique was its focus on the Arabic language, which remains underexplored in this domain. Arabic’s complex morphology and script presented unique challenges, making it an ideal testbed for adversarial techniques. Our findings help push forward the understanding of how NLP models can be better secured across diverse languages.
Note-ish is a fast, privacy-first productivity app built in Python for the terminal. It runs entirely offline and helps users manage tasks, habits, notes, journals, and calendar events from one simple interface. Designed for minimalism and speed, it’s ideal for developers who prefer working in the command line without sacrificing functionality or privacy.
A comparative NLP study that benchmarks five architectures on the task of mapping Arabic definitions to their correct words: given a description, predict the term. Built on a merged dataset of ~97,000 entries from multiple Arabic lexical sources, with a custom CAMeL Tools preprocessing pipeline handling diacritics, orthographic normalization, and lemmatization.

The project progresses through TF-IDF (18.2% Top-1), FastText + FAISS semantic search (15.0%), six fine-tuned Arabic Transformer models including CamelBERT and MARBERT (27.6% Top-1 after contrastive NT-Xent training), and finally generative LLMs with Retrieval-Augmented Generation using ChromaDB and multilingual-e5-base embeddings. Qwen3.5-4B with RAG achieved 39.8% Top-1 under morphological evaluation with no fine-tuning, running entirely on local Apple Silicon.

Evaluation goes beyond standard accuracy: a custom morphological normalization layer accounts for Arabic inflection, and additional metrics track output coverage, repetition rate, language consistency, and per-domain performance. Each approach is analyzed for why it succeeds or fails on Arabic specifically, with findings including why TF-IDF outperforms static embeddings on short glosses, how contrastive fine-tuning roughly doubles Transformer accuracy, and why generative models fundamentally change the OOV problem.
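To make the TF-IDF baseline concrete, here is a minimal Python sketch of definition-to-word retrieval. The toy English glosses and the plain scikit-learn pipeline are stand-ins for the project's actual ~97k-entry Arabic dataset and CAMeL Tools preprocessing:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

# Toy gloss -> word pairs standing in for the merged Arabic lexicon
# (entries here are illustrative only).
glosses = [
    "a domesticated animal that barks",
    "a large body of salt water covering most of the earth",
    "an instrument for telling time worn on the wrist",
]
words = ["dog", "ocean", "watch"]

# Index every gloss as a TF-IDF vector.
vectorizer = TfidfVectorizer()
gloss_matrix = vectorizer.fit_transform(glosses)

def predict(definition: str, top_k: int = 1) -> list[str]:
    """Rank candidate words by cosine similarity (via linear_kernel on
    L2-normalized TF-IDF vectors) between the query and each stored gloss."""
    query = vectorizer.transform([definition])
    scores = linear_kernel(query, gloss_matrix)[0]
    ranked = scores.argsort()[::-1][:top_k]
    return [words[i] for i in ranked]
```

The sketch also hints at why lexical overlap methods do reasonably well on short glosses: a query that reuses the gloss's surface vocabulary scores highly even without any semantic model, which is harder for static embeddings to match on very short texts.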
Given an Arabic definition or description, predict the word it describes.