Alonso Marco

Postdoctoral Research Fellow in Machine Learning and Robotics

University of California, Berkeley

I am a postdoctoral research fellow at the Hybrid Systems Lab, University of California, Berkeley, working with Prof. Claire J. Tomlin.

I pursued my PhD in Computer Science at Prof. Stefan Schaal’s robotics lab at the Max Planck Institute for Intelligent Systems in Tübingen, Germany, and at the University of Southern California. I was advised by Prof. Sebastian Trimpe and co-advised by Prof. Philipp Hennig. During my PhD, I collaborated with Prof. Jeannette Bohg, Prof. Angela P. Schoellig, and Prof. Andreas Krause.

In 2019, I was a visiting researcher at the Computational and Biological Learning Lab at the University of Cambridge, UK, working with Prof. José Miguel Hernández-Lobato on Bayesian optimization. I was also a PhD intern at Meta AI (formerly FAIR) in California, working with Prof. Roberto Calandra on model-based reinforcement learning.

I am broadly interested in preventing unsafe behavior in autonomous systems that navigate through real-world unstructured environments. Specifically, I study out-of-distribution (OoD) run-time monitors that trigger a backup policy when the perceived environment lies beyond the system’s generalization capabilities.
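As a minimal, self-contained illustration of the run-time monitor idea (a toy sketch, not the method from my publications): a probabilistic dynamics model yields a Gaussian predictive distribution over the next state, the monitor scores how surprising the actual observation is, and a hypothetical threshold decides when to switch to the backup policy. All names and numbers here are made up for the example.

```python
import math

def ood_score(pred_mean, pred_var, obs):
    """Negative log-likelihood of the observed next state under the model's
    Gaussian predictive distribution (diagonal covariance, per dimension)."""
    return 0.5 * sum(
        math.log(2.0 * math.pi * v) + (o - m) ** 2 / v
        for m, v, o in zip(pred_mean, pred_var, obs)
    )

def select_policy(score, threshold, nominal, backup):
    """Switch to the backup policy whenever the monitor fires."""
    return backup if score > threshold else nominal

# Toy 2-D example: the model predicts N(0, 1) in each state dimension.
mean, var = [0.0, 0.0], [1.0, 1.0]
near = [0.1, -0.2]   # consistent with the model -> keep nominal policy
far = [5.0, 6.0]     # surprising observation  -> trigger backup policy

print(select_policy(ood_score(mean, var, near), 5.0, "nominal", "backup"))
print(select_policy(ood_score(mean, var, far), 5.0, "nominal", "backup"))
```

The threshold would in practice be calibrated on in-distribution validation rollouts rather than fixed by hand.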


I also explore data-efficiency in model-based reinforcement learning by informing the probabilistic dynamics model with existing expert knowledge (e.g., physics models, high-fidelity simulators). In particular, I study Gaussian process state-space models and Bayesian networks.
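To convey the informed-prior idea in its simplest form, here is a stripped-down, one-dimensional sketch: ordinary GP regression (not a full state-space model) whose prior mean is a crude physics model, so the GP only has to learn the residual between data and physics. The physics model, the "true" dynamics, and all numbers are invented for this example.

```python
import numpy as np

def rbf(a, b, lengthscale=0.5):
    """Squared-exponential kernel between two 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_predict(x_train, y_train, x_test, prior_mean, noise=1e-4):
    """GP posterior mean with a physics model as the prior mean function:
    the GP corrects the physics prediction using the observed residuals."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    residual = y_train - prior_mean(x_train)
    return prior_mean(x_test) + rbf(x_test, x_train) @ np.linalg.solve(K, residual)

# Hypothetical 1-D dynamics: true map = physics prior + small unmodeled effect.
physics = lambda x: -0.5 * x                       # first-principles prior
true_map = lambda x: -0.5 * x + 0.1 * np.sin(3 * x)

x_tr = np.linspace(-2.0, 2.0, 15)
y_tr = true_map(x_tr)
informed = gp_predict(x_tr, y_tr, np.array([0.5]), physics)[0]
```

Because the prior already explains most of the signal, far fewer data points are needed than if the GP had to learn the full map from a zero mean, which is the data-efficiency argument in a nutshell.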

At a high level, I am highly passionate about scaling theoretically sound ideas to real systems. During my academic journey, I acquired hands-on expertise with quadrupeds, hexapods, bipeds, and robot manipulators that navigate and interact in the real world.

Contact: amarco [at] berkeley [dot] edu


Interests
  • Robot Learning
  • Model-based Reinforcement Learning
  • Variational Inference and Sampling Methods
  • Gaussian Processes and Bayesian Neural Networks
  • Bayesian Optimization
Education
  • PhD in Robotics and Machine Learning, 2020

    Max Planck Institute for Intelligent Systems and University of Tübingen, Germany

  • MSc in Artificial Intelligence, 2015

Polytechnic University of Catalonia, Spain

News

[Jul 2023]

I presented our current progress on “Out of Distribution Detection via Simulation-Informed Gaussian Process State Space Models” at Prof. Jeannette Bohg’s group, the Interactive Perception and Robot Learning Lab at Stanford.

[Jul 2023]

Our paper “Out of Distribution Detection via Domain-Informed Gaussian Process State Space Models” has been accepted to CDC, held at Marina Bay Sands, in Singapore.

[Jun 2023]

I presented a poster at the Safe Aviation Autonomy Annual Meeting, under the NASA University Leadership Initiative (ULI), held at Stanford.

[Jun 2023]

I presented my work on “Online out-of-distribution detection via simulation-informed deep Gaussian process state-space models” at the DARPA Assured Neuro Symbolic Learning and Reasoning (ANSR) campus visit, held at UC Berkeley. Slides.

[May 2023]

I am part of the best paper award committee for the 5th Learning for Dynamics and Control Conference (L4DC), held at the University of Pennsylvania.

[Mar 2023]

We’ve submitted our paper “Out of Distribution Detection via Domain-Informed Gaussian Process State Space Models” to CDC, currently under review!

[Jul 2022]

Our paper on Koopman-based Lyapunov functions has been accepted at the 61st IEEE Conference on Decision and Control (CDC) in Mexico!

[Jun 2022]

I presented a poster at the Safe Aviation Autonomy NASA ULI annual meeting, held at Stanford.

[May 2022]

I am serving as part of the best paper award committee for the 4th Learning for Dynamics and Control Conference (L4DC), held at Stanford.

[Nov 2021]

I was invited as a guest lecturer at UC San Diego to talk about Bayesian optimization. This talk is part of a seminar series organized by Prof. Sylvia Herbert.

Older

[Sep 2021]

I have moved to Berkeley! I’ve joined the Hybrid Systems Lab as a postdoc, working with Prof. Claire Tomlin on model-based RL and kernel methods.

[Apr 2021]

I have been awarded the Rafael del Pino Excellence Fellowship, granted to Spanish researchers with an outstanding academic record (1% acceptance rate).

[Feb 2021]

I have been invited to give a talk (remotely) at the Learning for Dynamics and Control seminar at UC Berkeley, jointly organized by the groups of Prof. Koushil Sreenath, Prof. Ben Recht, and Prof. Francesco Borrelli.

[Jul 2020]

I have defended my PhD at the University of Tübingen, Germany! My thesis entitled “Bayesian Optimization in Robot Learning: Automatic Controller Tuning and Sample-Efficient Methods” can be found here.

[Jul 2020]

I have been invited to present (remotely) my PhD thesis at UC Berkeley, at Prof. Claire Tomlin’s group.

[Dec 2019]

I presented the work I did during my internship at Facebook Artificial Intelligence Research (FAIR).

[Sep 2019]

I have moved to California for an internship at Facebook Artificial Intelligence Research (FAIR), working on model-based RL with Roberto Calandra.

[Apr 2019]

I have presented my ongoing work with Prof. José Miguel Hernández-Lobato at the Computational and Biological Learning Lab, University of Cambridge, UK.

[Mar 2019]

I have presented a poster at the Div-f Conference, at the University of Cambridge, UK.

[Mar 2019]

I have moved to Cambridge, UK, for a research stay at the Computational and Biological Learning Lab, working with Prof. José Miguel Hernández-Lobato.

[Jan 2019]

Our journal paper “Data-efficient Auto-tuning with Bayesian Optimization: An Industrial Control Study” has been published!

Publications

Up to date with my Google Scholar profile

(2023). Out of Distribution Detection via Domain-Informed Gaussian Process State Space Models. IEEE 62nd Conference on Decision and Control (CDC).

Cite

(2022). Koopman-based Neural Lyapunov functions for general attractors. IEEE 61st Conference on Decision and Control (CDC).

PDF Cite

(2021). GoSafe: Globally optimal safe robot learning. IEEE International Conference on Robotics and Automation (ICRA).

PDF Cite Video

(2021). Robot learning with crash constraints. IEEE Robotics and Automation Letters (RA-L).

PDF Cite Code Video Talk

(2020). Excursion search for constrained Bayesian optimization under a limited budget of failures. arXiv preprint arXiv:2005.07443.

PDF Cite

(2019). Classified regression for Bayesian optimization: Robot learning with unknown penalties. arXiv preprint arXiv:1907.10383.

PDF Cite

(2019). Data-efficient autotuning with Bayesian optimization: An industrial control study. IEEE Transactions on Control Systems Technology.

PDF Cite

(2018). Gait learning for soft microrobots controlled by light fields. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

PDF Cite

(2017). Optimizing long-term predictions for model-based policy search. Conference on Robot Learning (CoRL).

PDF Cite

(2017). Model-based policy search for automatic tuning of multivariate PID controllers. IEEE International Conference on Robotics and Automation (ICRA).

PDF Cite

(2017). On the design of LQR kernels for efficient controller learning. IEEE 56th Annual Conference on Decision and Control (CDC).

PDF Cite Slides Talk

(2017). Virtual vs. Real: Trading off simulations and physical experiments in reinforcement learning with Bayesian optimization. IEEE International Conference on Robotics and Automation (ICRA).

PDF Cite

(2016). Automatic LQR tuning based on Gaussian process global optimization. IEEE international conference on robotics and automation (ICRA).

PDF Cite Code Slides Video

(2015). Automatic LQR tuning based on Gaussian process optimization: Early experimental results. Second Machine Learning in Planning and Control of Robot Motion Workshop at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

PDF Cite

(2015). Gaussian process optimization for self-tuning control. M.Sc. Thesis, Polytechnic University of Catalonia.

PDF Cite

Projects

Controller Learning using Bayesian Optimization
Manually tuning controller parameters for real robotic tasks is part of the control design process, yet it is tedious and time-consuming. In this project, we explore ways to automate the tuning process using Bayesian optimization and to increase data efficiency while preserving safety.
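A toy, self-contained sketch of this idea (not the algorithms from my papers): a GP surrogate with a lower-confidence-bound acquisition tunes a single hypothetical controller gain, where `rollout_cost` stands in for one expensive experiment on the real robot. The cost function, grid, and hyperparameters are all invented for illustration.

```python
import numpy as np

def rbf(a, b, lengthscale=0.3):
    """Squared-exponential kernel between two 1-D input arrays."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / lengthscale) ** 2)

def gp_fit(x_tr, y_tr, x_te, noise=1e-4):
    """Posterior mean and variance of a zero-mean GP surrogate."""
    K = rbf(x_tr, x_tr) + noise * np.eye(len(x_tr))
    ks = rbf(x_te, x_tr)
    mean = ks @ np.linalg.solve(K, y_tr)
    var = 1.0 - np.sum(ks * np.linalg.solve(K, ks.T).T, axis=1)
    return mean, np.maximum(var, 1e-9)

def rollout_cost(gain):
    """Stand-in for one expensive rollout on the real robot: tracking cost
    of a controller with a single tunable gain (optimum at 0.6)."""
    return (gain - 0.6) ** 2

candidates = np.linspace(0.0, 1.0, 101)   # controller gains to consider
x = np.array([0.0, 1.0])                  # two initial experiments
y = rollout_cost(x)
for _ in range(15):
    mean, var = gp_fit(x, y, candidates)
    lcb = mean - 2.0 * np.sqrt(var)       # lower confidence bound (minimize)
    x_next = candidates[np.argmin(lcb)]   # most promising gain to try next
    x, y = np.append(x, x_next), np.append(y, rollout_cost(x_next))

best_gain = x[np.argmin(y)]
```

Each loop iteration corresponds to one physical experiment, so the surrogate-guided search visits only a handful of gains instead of sweeping the whole grid.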
Increasing Data Efficiency in model-based RL via Informed Probabilistic Priors
Dynamical systems represented with probabilistic models are hindered by the model bias induced by the prior, which reduces data efficiency. In this project, we explore how to construct physics-informed priors that increase data efficiency and mitigate the sim2real gap.
Out of Distribution Detection using probabilistic dynamical models
In order for robots to be safely deployed in the real world, it is desirable to prevent them from being overconfident in situations far from the training data. We explore this idea using Gaussian process state space models.
Robot Learning from Failures
Although failures must be avoided at all costs in safety-critical robot applications, there is a wide range of scenarios in which failing is undesired but not catastrophic. Because failures are informative about what should be avoided, we treat them as an additional source of learning and develop optimization algorithms that allow failures only when the information gain is worth the cost.