Violeta Menéndez González
As a member of the Surrey Reproducibility Society, I organised the Monthly Mini Hacks (MMH), a series of workshops that bridge open science and coding.

Apply here: https://pretalx.fosdem.org/fosdem-2026/cfp
Once logged in, select “submit a proposal” and choose the “Open Research” track.
If you have any issues with Pretalx, do not despair: contact us at open-research-devroom-manager@fosdem.org.
Follow us on Mastodon (@FosdemResearch@fosstodon.org) for updates and announcements.
About the Devroom
The Open Research devroom addresses FLOSS developers in a broad community concerned with research production and curation: scientists, engineers, journalists, archivists, curators, activists.
Violeta Menéndez González (2022). Poster: SaiNet: Stereo aware inpainting behind objects with generative networks. figshare.
DOI: 10.6084/m9.figshare.21701630
Poster presented at AI4CC at CVPR 2022 and BMVA Symposium 2022, for the paper "SaiNet: Stereo aware inpainting behind objects with generative networks" (https://doi.org/10.48550/arXiv.2205.07014).
Submissions
Must include:
- Title
- Abstract
- Description
- Talk licence: FOSDEM is an open-source software conference; please specify which OSI-approved licence your proposal uses.
- Speaker name, contact, biography and availability
Can include:
- Submission notes: indicate here whether you would like to give a lightning talk or a lecture.
The devroom covers research and investigation in a broad sense: scientific research, investigative journalism, data journalism, OSINT, as well as research and investigations undertaken by NGOs, civil society, community and activist groups, etc.

ZeST-NeRF is a new approach that can produce temporal NeRFs for new scenes without retraining; adding temporal information, however, introduces an extra layer of complexity.
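To illustrate the general idea of temporal aggregation for zero-shot temporal NeRFs, below is a minimal sketch of a generalisable radiance field that pools image features sampled from several reference frames, so that novel views of an unseen dynamic scene can be queried without per-scene retraining. This is not the ZeST-NeRF architecture; the module, feature shapes and aggregation scheme are illustrative assumptions.

```python
# Hypothetical sketch of temporal feature aggregation for a generalisable
# (zero-shot) radiance field. NOT the ZeST-NeRF implementation; names and
# shapes are illustrative assumptions only.
import torch
import torch.nn as nn


class TemporalAggregationField(nn.Module):
    def __init__(self, feat_dim: int = 32, hidden: int = 64):
        super().__init__()
        # Per-frame features are reduced to mean/variance statistics so the
        # model is independent of how many reference frames are provided.
        self.mlp = nn.Sequential(
            nn.Linear(2 * feat_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)   # volume density sigma
        self.colour_head = nn.Linear(hidden, 3)    # RGB radiance

    def forward(self, xyz: torch.Tensor, frame_feats: torch.Tensor):
        # xyz:         (N, 3)     3D sample points along camera rays
        # frame_feats: (N, T, C)  image features projected from T
        #                         time-adjacent reference frames
        mean = frame_feats.mean(dim=1)
        var = frame_feats.var(dim=1, unbiased=False)
        h = self.mlp(torch.cat([xyz, mean, var], dim=-1))
        sigma = torch.relu(self.density_head(h))
        rgb = torch.sigmoid(self.colour_head(h))
        return sigma, rgb


# Toy usage: 1024 sample points, features from 4 reference frames.
field = TemporalAggregationField()
sigma, rgb = field(torch.rand(1024, 3), torch.rand(1024, 4, 32))
```

Because the aggregation uses permutation-invariant statistics rather than per-scene weights, the same trained model can, in principle, be queried on a new scene without retraining.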
We seek talks about:
- New releases of open source software. E.g., how to hold an algorithm accountable to social scientists; how to foster better reproducibility and interoperability thanks to FLOSS; or how to cope with the biases of a chart for a data journalist.
- Contribute to the debate about bridging tech culture with research and investigative environments (data journalism, investigative journalism, activism and academia), including tips and best practices for navigating tensions, as well as the open source movement's contribution to the sustainability of research and investigations through organisational hosting, project funding, support and maintenance, etc.
- Share your experience about building open source devices or communities across a variety of research and investigative contexts.
We welcome talks from various research and investigative contexts: research labs, libraries, newsrooms, museums, hackerspaces, maker labs, community and activist groups.
FOSDEM is aimed at developers and anyone interested in the free and open-source software movement. All FOSDEM talks are published under a Creative Commons CC-BY licence on the FOSDEM video recordings archive.

Sparse novel view synthesis is a view synthesis problem where the number of reference views is limited and the baseline between the target and reference views is significant. Advances in network architecture and loss regularisation are unable to satisfactorily remove the resulting artifacts. To this end we unify radiance field models with adversarial learning and perceptual losses. The resulting system provides up to 60% improvement in perceptual accuracy compared to current state-of-the-art radiance field models on this problem.
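As a rough sketch of what unifying reconstruction with adversarial and perceptual losses can look like in practice (not the exact objective from the paper; the loss weights, VGG slice and placeholder discriminator below are assumptions):

```python
# Illustrative sketch of combining reconstruction, perceptual and adversarial
# losses for a view-synthesis generator. Not the paper's formulation; the
# weights and the discriminator are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16


class PerceptualLoss(nn.Module):
    def __init__(self):
        super().__init__()
        # In practice the VGG would be loaded with pretrained weights
        # (e.g. weights="IMAGENET1K_V1") and kept frozen.
        self.features = vgg16().features[:16].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)

    def forward(self, pred, target):
        return F.l1_loss(self.features(pred), self.features(target))


def generator_loss(pred, target, discriminator, perceptual,
                   w_rec=1.0, w_perc=0.1, w_adv=0.01):
    rec = F.l1_loss(pred, target)                  # pixel reconstruction
    perc = perceptual(pred, target)                # VGG feature distance
    adv = F.softplus(-discriminator(pred)).mean()  # non-saturating GAN term
    return w_rec * rec + w_perc * perc + w_adv * adv


# Toy usage with a placeholder patch discriminator.
disc = nn.Sequential(nn.Conv2d(3, 8, 4, 2, 1), nn.LeakyReLU(0.2),
                     nn.Conv2d(8, 1, 4, 2, 1))
perc = PerceptualLoss()
pred, target = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
loss = generator_loss(pred, target, disc, perc)
loss.backward()
```

The non-saturating adversarial term and the VGG-feature distance are standard choices here; any GAN loss or perceptual backbone could be swapped in.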
Code: https://github.com/violetamenendez/svs-sparse-novel-view
Violeta Menéndez González, Andrew Gilbert, Graeme Phillipson, Stephen Jolly, Simon Hadfield (2022). SaiNet: Stereo aware inpainting behind objects with generative networks. arXiv.org (Cornell University Library).
In this work, we present an end-to-end network for stereo-consistent image inpainting with the objective of inpainting large missing regions behind objects, focusing on hallucinating plausible scene contents within such regions. Our evaluation shows competitive results compared to previous state-of-the-art techniques.
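For illustration only, a generic masked-inpainting forward pass looks like the sketch below: the network receives the masked image plus the hole mask and hallucinates content only inside the hole, while known pixels are kept. This is not SaiNet's architecture; a stereo-aware model would additionally condition on information warped from the second view, which is omitted here.

```python
# Minimal masked-inpainting sketch: the network sees the masked image and the
# mask and predicts content only for the missing region. An assumed, generic
# formulation, not SaiNet's actual network.
import torch
import torch.nn as nn


class TinyInpainter(nn.Module):
    def __init__(self):
        super().__init__()
        # Input: 3 RGB channels + 1 binary mask channel (1 = missing pixel).
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, image, mask):
        masked = image * (1 - mask)                    # remove region behind the object
        pred = self.net(torch.cat([masked, mask], 1))  # hallucinate a full image
        # Composite: keep known pixels, fill only the hole.
        return pred * mask + image * (1 - mask)


inpainter = TinyInpainter()
image = torch.rand(1, 3, 128, 128)
mask = (torch.rand(1, 1, 128, 128) > 0.8).float()
completed = inpainter(image, mask)  # same shape as image
```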
I gave a talk about our work at the MMH at FOSDEM 2024. I was also lucky to be able to attend the 34th British Machine Vision Conference in Aberdeen and to publish my paper at the 1st Workshop in Video Understanding and its Applications.
I had the opportunity to give a talk about my paper "ZeST-NeRF: Using temporal aggregation for Zero-Shot Temporal NeRFs".
The British Machine Vision Conference (BMVC) is the British Machine Vision Association’s (BMVA) annual conference on machine vision, image processing, and pattern recognition.
My research focused on Generating Virtual Camera Views Using Generative Networks, and I am experienced in deep learning inpainting techniques, generative adversarial networks, novel view synthesis approaches, NeRF, Gaussian Splatting, and more.