Research

Welcome to our research hub, where we explore groundbreaking advancements in generative AI, Transformer architectures, and dataset optimization.

Papers

These projects represent our commitment to advancing the theoretical and practical foundations of AI to create more capable, intuitive, and user-aligned systems. Explore our research to learn more!

  • Fine-Tuning Next-Scale Visual Autoregressive Models with Group Relative Policy Optimization

    Authors: Matteo Gallici, Haitz Sáez de Ocáriz Borde
    Fine-tuning next-scale visual autoregressive models with reinforcement learning enhances image quality, style control, and generalization beyond pre-trained distributions.
  • Beyond Parallelism: Synergistic Computational Graph Effects in Multi-Head Attention

    Authors: Haitz Sáez de Ocáriz Borde
    Multi-head attention can induce synergistic interactions within the computational graph that transcend the efficiency gains attributable solely to parallelization.
  • Re-LAION-Caption 19M for Improved Text Alignment Fine-tuning

    Authors: Nicholas Merchant*, Haitz Sáez de Ocáriz Borde*, Andrei Cristian Popescu, Carlos Garcia Jurado Suarez
    Generative models struggle with prompt adherence due to noisy training data, but refining datasets with structured captions improves text-image alignment and reduces reliance on prompt engineering.