Hey, I'm Gallil Maimon

I am a Computer Science PhD student at the SLP lab at the Hebrew University of Jerusalem, under the supervision of Dr. Yossi Adi. I'm also a self-proclaimed beer geek and homebrewer. My research focuses on representation learning, specifically for audio and language processing.


Publications

An image illustrating speaking style conversion vs. traditional VC.

Speaking Style Conversion With Discrete Self-Supervised Units

EMNLP 2023 (Findings)
Gallil Maimon, Yossi Adi

Voice Conversion (VC) is the task of making a spoken utterance by one speaker sound as if uttered by a different speaker, while keeping other aspects, such as content, unchanged. Current VC methods focus primarily on spectral features like timbre, while ignoring the unique speaking style of people, which often impacts prosody. In this study, we introduce a method for converting not only the timbre, but also prosodic information (i.e., rhythm and pitch changes) to those of the target speaker. The proposed approach is based on a pretrained, self-supervised model for encoding speech into discrete units, which makes it simple, effective, and easy to optimise. We consider the many-to-many setting with no paired data. We introduce a suite of quantitative and qualitative evaluation metrics for this setup, and empirically demonstrate that the proposed approach is significantly superior to the evaluated baselines. Code and samples can be found on the project page.
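To give a feel for the unit-based recipe the abstract refers to, below is a minimal sketch of encoding speech into discrete self-supervised units with a pretrained HuBERT model plus k-means quantisation. It illustrates the general approach rather than our exact pipeline; the layer index, cluster count, and file names are placeholder assumptions.

import torch
import torchaudio
from sklearn.cluster import KMeans

bundle = torchaudio.pipelines.HUBERT_BASE          # pretrained self-supervised encoder
hubert = bundle.get_model().eval()

def hubert_features(wav_path):
    wav, sr = torchaudio.load(wav_path)
    wav = torchaudio.functional.resample(wav, sr, bundle.sample_rate)
    with torch.inference_mode():
        layers, _ = hubert.extract_features(wav)   # features from every transformer layer
    return layers[6].squeeze(0)                    # (frames, dim); layer 6 is an assumption

# Fit k-means over features pooled from a few utterances, then quantise new speech.
feats = torch.cat([hubert_features(p) for p in ["a.wav", "b.wav"]])  # hypothetical files
kmeans = KMeans(n_clusters=100, n_init=10).fit(feats.numpy())
units = kmeans.predict(hubert_features("utterance.wav").numpy())

# Collapsing consecutive repeats strips duration (rhythm) information, which a
# conversion model can then re-predict in the target speaker's speaking style.
dedup = [int(u) for i, u in enumerate(units) if i == 0 or u != units[i - 1]]

Working on such discrete unit sequences, rather than raw spectrograms, is what lets rhythm and pitch be modelled and converted separately from content.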

An image illustrating the Silent Killer attack.

Silent Killer: A Stealthy, Clean-Label, Black-Box Backdoor Attack

arXiv preprint
Tzvi Lederer, Gallil Maimon, Lior Rokach

Backdoor poisoning attacks pose a well-known risk to neural networks. However, most studies have focused on lenient threat models. We introduce Silent Killer, a novel attack that operates in clean-label, black-box settings, uses a stealthy poison and trigger, and outperforms existing methods. We investigate the use of universal adversarial perturbations as triggers in clean-label attacks, following the success of such approaches under poison-label settings. We analyze the success of a naive adaptation and find that gradient alignment when crafting the poison is required to ensure high success rates. We conduct thorough experiments on MNIST, CIFAR-10, and a reduced version of ImageNet, and achieve state-of-the-art results.
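As a rough illustration of the gradient-alignment step mentioned above, the sketch below scores a candidate poison by how well the gradient it induces during ordinary victim training aligns with the gradient of the attacker's objective (trigger-stamped inputs classified as the target class). This is a schematic of the general gradient-matching idea with assumed names, not our released implementation.

import torch
import torch.nn.functional as F

def gradient_alignment_loss(model, poisoned_x, clean_y, triggered_x, target_y):
    params = [p for p in model.parameters() if p.requires_grad]
    # Gradient the victim would actually compute on the clean-label poison batch.
    g_poison = torch.autograd.grad(
        F.cross_entropy(model(poisoned_x), clean_y), params, create_graph=True)
    # Gradient of the attacker's goal: triggered inputs -> the target class.
    g_target = torch.autograd.grad(
        F.cross_entropy(model(triggered_x), target_y), params)
    # 1 - mean cosine similarity; minimising this aligns the two directions,
    # so training on the poison quietly serves the attacker's objective.
    sims = [F.cosine_similarity(gp.flatten(), gt.flatten(), dim=0)
            for gp, gt in zip(g_poison, g_target)]
    return 1.0 - torch.stack(sims).mean()

Minimising this loss with respect to the poison perturbation, on a surrogate model since the setting is black-box, yields poisons that plant the backdoor without any label tampering.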

An image illustrating Universal Adversarial Policies.

A Universal Adversarial Policy for Text Classifiers

Neural Networks 2022
Gallil Maimon, Lior Rokach

We introduce and formally define a new adversarial setup against text classifiers, named universal adversarial policies. Under this setup, one learns a single perturbation policy which, given a text and a classifier, selects the optimal perturbations (which words to replace) in order to reach an adversarial text. It is universal in the sense that one policy must generalise to many unseen texts. We introduce LUNATC, which learns such a policy with reinforcement learning and successfully generalises to unseen texts from as few as 500 training texts.
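To make the setup concrete, here is a schematic sketch of a single attack episode framed as the kind of MDP such a policy is trained in: the state is the current text, an action picks a word to substitute, and the reward is the drop in the classifier's confidence in the true label. The classifier, policy, and synonym_for callables are illustrative placeholders, not LUNATC's actual components.

def attack_episode(text, true_label, classifier, policy, synonym_for, max_steps=15):
    """classifier(text) -> (predicted_label, confidence_in_true_label)."""
    words = text.split()
    prev_conf = classifier(text)[1]
    reward = 0.0
    for _ in range(max_steps):
        i = policy(words)                      # learned choice: which word to replace
        words[i] = synonym_for(words[i])       # e.g. a nearest neighbour in embedding space
        pred, conf = classifier(" ".join(words))
        reward += prev_conf - conf             # reward the drop in true-label confidence
        prev_conf = conf
        if pred != true_label:                 # adversarial text reached
            return " ".join(words), reward
    return None, reward                        # attack failed within the step budget

Because the policy is rewarded across many different texts rather than optimised per input, it must learn substitution strategies that transfer, which is exactly what makes it universal.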