Selected papers

An up-to-date list of all publications can be found on my Google Scholar profile.


2021

Compressing deep neural networks via layer fusion

James O'Neill, Greg Ver Steeg, Aram Galstyan

Asian Conference on Machine Learning (ACML)

TLDR: This paper proposes a dynamic weight sharing technique that learns to tie weights during retraining (compression phase).

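As a rough illustration of the weight-tying idea, here is a minimal sketch (not the paper's algorithm): adjacent layers whose weights have become nearly parallel during retraining are fused by sharing a single parameter tensor. The similarity measure and threshold are illustrative assumptions.

```python
import torch
import torch.nn as nn

def fuse_similar_layers(layers: nn.ModuleList, threshold: float = 0.9) -> None:
    """Tie the weights of adjacent linear layers whose flattened weights
    have cosine similarity above `threshold`."""
    for prev, curr in zip(layers, layers[1:]):
        sim = torch.cosine_similarity(prev.weight.flatten(),
                                      curr.weight.flatten(), dim=0)
        if sim > threshold:
            # Both layers now share one weight tensor (illustrative fusion).
            curr.weight = prev.weight

layers = nn.ModuleList(nn.Linear(64, 64) for _ in range(4))
fuse_similar_layers(layers, threshold=0.5)
```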

Siamese capsule networks

James O'Neill

arXiv preprint arXiv:1805.07242

TLDR: This paper extends capsule networks to a siamese architecture for metric learning tasks.

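The siamese setup itself can be sketched with a contrastive margin loss; here a plain MLP stands in for the capsule encoder, so this is illustrative only, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in encoder; the paper uses capsule layers here.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(),
                        nn.Linear(128, 32))

def contrastive_loss(x1, x2, same: torch.Tensor, margin: float = 1.0):
    """same = 1 for matching pairs, 0 for non-matching pairs."""
    d = F.pairwise_distance(encoder(x1), encoder(x2))
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

x1, x2 = torch.randn(8, 1, 28, 28), torch.randn(8, 1, 28, 28)
labels = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(x1, x2, labels)
```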

Deep Neural Compression Via Concurrent Pruning and Self-Distillation

James O'Neill, Sourav Dutta, Haytham Assem

arXiv preprint arXiv:2109.15014

TLDR: This paper combines pruning with self-distillation, using a cross-correlation-based knowledge distillation objective that naturally fits magnitude-based pruning.

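A hedged sketch of the two ingredients: magnitude pruning via PyTorch's built-in utility, and a Barlow-Twins-style cross-correlation loss between teacher and student representations standing in for the paper's objective (the normalisation and weighting here are assumptions).

```python
import torch
import torch.nn.utils.prune as prune

layer = torch.nn.Linear(128, 128)
prune.l1_unstructured(layer, name="weight", amount=0.5)  # magnitude pruning

def xcorr_distill_loss(h_student, h_teacher, off_diag_weight: float = 5e-3):
    """Push the batch cross-correlation of student and teacher hidden
    representations toward the identity matrix."""
    n, _ = h_student.shape
    s = (h_student - h_student.mean(0)) / (h_student.std(0) + 1e-8)
    t = (h_teacher - h_teacher.mean(0)) / (h_teacher.std(0) + 1e-8)
    c = (s.T @ t) / n                                # (d, d) cross-correlation
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + off_diag_weight * off_diag

h_in = torch.randn(32, 128)
loss = xcorr_distill_loss(layer(h_in), torch.randn(32, 128))
```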

I Wish I Would Have Loved This One, But I Didn't – A Multilingual Dataset for Counterfactual Detection in Product Reviews

James O'Neill, Polina Rozenshtein, Ryuichi Kiryo, Motoko Kubota and Danushka Bollegala

Empirical Methods in Natural Language Processing (EMNLP)

TLDR: This paper introduces a multilingual dataset for counterfactual detection in product reviews.


Semantically-Conditioned Negative Samples for Efficient Contrastive Learning

James O'Neill, Danushka Bollegala

Asian Conference on Machine Learning (ACML)

TLDR: This paper proposes conditioning the choice of negative samples on semantic similarity to make contrastive learning more efficient.

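One plausible reading of the idea, sketched below under the assumption that "semantically-conditioned" means selecting in-batch negatives by embedding similarity (not necessarily the paper's exact procedure):

```python
import torch
import torch.nn.functional as F

def hardest_negatives(emb: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """emb: (n, d) embeddings; labels: (n,) class ids.
    Return, for each anchor, the index of the most semantically similar
    sample from a *different* class (a hard negative)."""
    sim = F.cosine_similarity(emb.unsqueeze(1), emb.unsqueeze(0), dim=-1)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    sim = sim.masked_fill(same, float("-inf"))  # exclude self and positives
    return sim.argmax(dim=1)

emb = F.normalize(torch.randn(16, 64), dim=1)
labels = torch.randint(0, 4, (16,))
neg_idx = hardest_negatives(emb, labels)
```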

2020

An Overview of Neural Network Compression

James O'Neill

arXiv preprint arXiv:2006.03669

TLDR: This paper provides a thorough overview of weight sharing, pruning, tensor decomposition, knowledge distillation and quantization.


2018

Meta-embedding as auxiliary task regularization

James O'Neill, Danushka Bollegala

European Conference on Artificial Intelligence

TLDR: We propose supervised meta-embedding that learns to reconstruct an ensemble of static word embeddings while learning on a downstream task.

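A minimal sketch of the auxiliary-task setup: the model's hidden representation serves a downstream classifier while also being trained to reconstruct several source embedding sets. The source names, decoder architecture, and loss weighting below are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d_hidden, n_classes = 256, 5
sources = {"glove": 300, "fasttext": 300}        # hypothetical source dims
decoders = nn.ModuleDict({k: nn.Linear(d_hidden, d)
                          for k, d in sources.items()})
classifier = nn.Linear(d_hidden, n_classes)

def joint_loss(h, y, source_embs, aux_weight: float = 0.1):
    """h: (n, d_hidden) hidden states; y: (n,) labels;
    source_embs: dict of (n, d_src) target embeddings per source."""
    task = F.cross_entropy(classifier(h), y)
    recon = sum(F.mse_loss(decoders[k](h), source_embs[k]) for k in sources)
    return task + aux_weight * recon

h = torch.randn(8, d_hidden)
y = torch.randint(0, n_classes, (8,))
targets = {k: torch.randn(8, d) for k, d in sources.items()}
loss = joint_loss(h, y, targets)
```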

Learning to Evaluate Neural Language Models

James O'Neill, Danushka Bollegala

Pacific Association for Computational Linguistics (PACLING)

TLDR: We propose pretrained textual similarity models to evaluate neural language models.


Transfer Reward Learning for Policy Gradient-Based Text Generation

James O'Neill, Danushka Bollegala

arXiv preprint arXiv:1909.03622

TLDR: We propose pretrained textual similarity models to issue rewards based on the semantic similarity of generated and ground truth sequences for an actor-critic sequence predictor.

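The reward itself can be sketched as cosine similarity in a sentence-embedding space; the embeddings are assumed to come from some pretrained similarity model, not necessarily the one used in the paper.

```python
import torch
import torch.nn.functional as F

def similarity_reward(generated_emb: torch.Tensor,
                      reference_emb: torch.Tensor) -> torch.Tensor:
    """Reward each sampled sequence by its cosine similarity to the
    ground-truth sequence in the similarity model's embedding space."""
    return F.cosine_similarity(generated_emb, reference_emb, dim=-1)

# In an actor-critic loop this dense reward would stand in for a sparse
# task metric when updating the policy (the sequence generator).
gen = torch.randn(4, 512)   # embeddings of sampled sequences
ref = torch.randn(4, 512)   # embeddings of ground-truth sequences
rewards = similarity_reward(gen, ref)
```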

k-Neighbor Based Curriculum Sampling for Sequence Prediction

James O'Neill, Danushka Bollegala

arXiv preprint arXiv:2101.09313

TLDR: We propose Nearest-Neighbor Replacement Sampling, a technique to mitigate exposure bias by replacing ground truth tokens with semantically similar tokens during training.

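A minimal sketch of the replacement step, assuming a fixed embedding table and a flat replacement probability (the paper's curriculum schedules this probability over training):

```python
import torch
import torch.nn.functional as F

def nn_replace(tokens: torch.Tensor, emb: torch.Tensor,
               p: float = 0.2, k: int = 5) -> torch.Tensor:
    """tokens: (n,) token ids; emb: (V, d) embedding table.
    Each token is replaced, with probability p, by one of its k most
    similar tokens in embedding space."""
    unit = F.normalize(emb, dim=1)
    sim = unit @ unit.T                              # (V, V) cosine similarity
    sim.fill_diagonal_(float("-inf"))                # never pick the token itself
    topk = sim[tokens].topk(k, dim=1).indices        # (n, k) neighbour ids
    pick = topk[torch.arange(len(tokens)),
                torch.randint(0, k, (len(tokens),))]
    mask = torch.rand(len(tokens)) < p
    return torch.where(mask, pick, tokens)

vocab = torch.randn(1000, 64)                        # toy embedding table
sentence = torch.randint(0, 1000, (12,))
noisy = nn_replace(sentence, vocab)
```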