Until the end of August, I am at Google Seattle for a summer internship, working mainly with Brendan McMahan. I will be working on some of the algorithmic challenges in Federated Optimization/Learning --- decentralized and highly unbalanced datasets.
As of today, I am visiting the Future of Humanity Institute in Oxford, trying to better understand where the ideas regarding AI safety stand at the moment and how I might be able to contribute. I highly recommend reading Superintelligence, if you haven't yet.
I am excited to be attending Optimization Without Borders in beautiful Les Houches under Mont Blanc. The event is dedicated to the 60th birthday of Yurii Nesterov and his lifelong contributions to the field.
Thanks to everybody at NIPS for the amazing atmosphere, inspiring discussions and many new ideas. I presented Stop Wasting My Gradients: Practical SVRG, which I co-authored with Reza Babanezhad, Mohamed Osama Ahmed, Alim Virani, Mark Schmidt and Scott Sallinen from the University of British Columbia. I also presented a poster on Federated Optimization: Distributed Optimization Beyond the Datacenter at the Optimization in Machine Learning Workshop.
Even before that (Nov 23), I briefly visited the Future of Humanity Institute in Oxford and discussed some of the long-term dangers arising from technological development. For those of you who haven't noticed it yet, I highly recommend Nick Bostrom's book Superintelligence. The book is free of bold, unjustified predictions; rather, it summarizes the open problems and discusses current ideas.
New preliminary paper out! Federated Optimization: Distributed Optimization Beyond the Datacenter, which is based on my summer work at Google with Brendan McMahan. We introduce an important new setting in which data are distributed across a very large number of computers, each having access to only a few data points. This is primarily motivated by the setting where users keep their data on their devices, but the goal is still to train a high-quality global model.
Not many people showed up in Halloween costumes at the INFORMS Annual Meeting in Philadelphia. Never mind; I am feeling overwhelmed by the 80 parallel sessions going on. I am giving a talk on Federated Optimization on Monday. Slides are here; the paper is coming soon (available upon request).
I gave a talk at a seminar at Comenius University in Bratislava, speaking about S2GD and the work on Federated Optimization I did during the summer at Google, the next step in large-scale distributed optimization. Preliminary paper coming out soon...
I am giving a talk today at the ISMP conference in Pittsburgh. So far a very inspiring event, in a surprisingly pleasant city.
Until the end of August, I am at Google for a summer internship, working mainly with Brendan McMahan. Hope there's lots of cool stuff to learn!
At home in Edinburgh, we are organizing a three-day workshop - Optimization & Big Data 2015 - focused on large-scale optimization, with keynote speaker Arkadi Nemirovski. I am giving an invited talk on Thursday; my slides are available here.
Mini-Batch Semi-Stochastic Gradient Descent in the Proximal Setting, joint work with Jie Liu, Martin Takáč and Peter Richtárik. This is the full-length version of the following short paper, which was presented at the NIPS Optimization workshop.
The Edinburgh SIAM Student Chapter, on whose committee I serve, organized its Annual Conference, with speakers including Yves Wiaux (Heriot-Watt University), Marta Faias (New University of Lisbon), Michel Destrade (NUI Galway), Klaus Mosegaard (University of Copenhagen), Samuel Cohen (Oxford) and Julien Dambrine (University of Poitiers).
The aims of the Chapter are to promote interdisciplinary and multi-institutional collaboration, increase student engagement, and raise awareness of the application of mathematical techniques to real-world problems in science and industry.
As of today, I am attending the Machine Learning Summer School in Sydney. It seems to be full of interesting talks and people.
I am attending a very interesting multidisciplinary conference, the International Biomedical and Astronomical Signal Processing (BASP) Frontiers workshop, in beautiful Villars-sur-Ollon. The main areas are Cosmology, Medical Imaging and Signal Processing. I am giving a talk on Wednesday (slides) about Semi-Stochastic Gradient Descent.
UPDATE: I received the Best Contribution Award in the area of Signal Processing. Overall, the conference had an amazing atmosphere, and I managed to talk to many interesting people outside of my field. Highly recommended to anyone!
The year finished with a very interesting conference, Foundations of Computational Mathematics in Montevideo. I presented a poster on Mini-batch semi-stochastic gradient descent in the proximal setting (see poster here).
We also managed to release Semi-Stochastic Coordinate Descent, joint work with Zheng Qu and Peter Richtárik. This is the full-length version of this brief paper, presented at the 2014 NIPS workshop on Optimization in Machine Learning.
I am attending NIPS (Neural Information Processing Systems) in Montreal. Obviously, it's really cold here.
I am presenting two posters at the Optimization for Machine Learning Workshop. The posters are on Mini-batch semi-stochastic gradient descent in the proximal setting (see poster here) and Semi-Stochastic Coordinate Descent (see poster here).
For the following three weeks, I am visiting Martin Takáč at Lehigh University. The optimization group here is strong, so I look forward to a hopefully fruitful time. For now, I can say it's surprisingly cold here these days.
And here are some refined slides.
This week, I am on a research visit at the Data Analytics Lab led by Thomas Hofmann at ETH Zurich. First impression is amazing, Zurich seems like a great place to live and work.
Slides from my talk are here.
This week we released two short papers. The first one, S2CD: Semi-stochastic coordinate descent, coauthored with Zheng Qu and Peter Richtárik, extends the S2GD algorithm to the coordinate descent setting.
The other one, mS2GD: Mini-batch semi-stochastic gradient descent in the proximal setting, joint work with Jie Liu, Martin Takáč and Peter Richtárik, discusses an extension of the S2GD algorithm to the mini-batch setting, which admits a twofold speedup: one from better gradient estimates, the other from simple parallelism.
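For the curious, the core mini-batch variance-reduced step can be sketched in a few lines of Python. This is a toy illustration only (a synthetic least-squares problem, no proximal term, and made-up sizes, step size and batch size), not the implementation from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy least-squares problem to illustrate the mini-batch step.
n, d, b = 500, 8, 16          # b = mini-batch size (illustrative value)
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

def full_grad(w):
    return X.T @ (X @ w - y) / n

w = np.zeros(d)
lr = 0.05
for epoch in range(10):
    snapshot, mu = w.copy(), full_grad(w)   # outer loop: full gradient
    for _ in range(100):
        idx = rng.choice(n, size=b, replace=False)
        # Mini-batch variance-reduced estimate; the b per-example
        # gradients are independent, so they parallelize trivially.
        g = X[idx].T @ (X[idx] @ w - y[idx]) / b
        g_snap = X[idx].T @ (X[idx] @ snapshot - y[idx]) / b
        w -= lr * (g - g_snap + mu)

print("distance to w_true:", float(np.linalg.norm(w - w_true)))
```

The two speedups show up directly: averaging b stochastic gradients gives a better estimate than a single one, and the b per-example gradients can be computed in parallel.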
I am staying in the beautiful Smolenice Castle in Slovakia, attending the International Conference on Mathematical Methods in Economy and Industry, a conference whose tradition dates back to 1973.
This week I am attending the 4th IMA Conference on Numerical Linear Algebra and Optimisation. I co-organized a minisymposium on First-order methods and big data optimization (together with Zheng Qu and Peter Richtárik), in which I am also giving a talk.
An efficient implementation of Semi-Stochastic Gradient Descent for logistic regression is available in the MLOSS repository. The code works with MATLAB and is implemented in C++.
On Monday I gave an invited talk at the SIAM Annual Meeting in Chicago on the S2GD algorithm (slides). Within the same minisymposium, a very interesting talk was delivered by S. V. N. Vishwanathan on NOMAD (slides).
As a representative of Edinburgh SIAM Student Chapter, I attended a breakfast with other representatives and SIAM executives and got some ideas on how to improve our chapter.
From my observations, the whole conference feels much more like a networking event for anyone involved with SIAM activities than other conferences do.
I was awarded the 2014 Google Europe Doctoral Fellowship in Optimization Algorithms! The news was announced today on the Google Research Blog.
The University of Edinburgh was particularly successful this year, bagging two awards (each university can nominate up to two students). Out of the 15 Europe Fellowships, 4 were awarded to universities in the UK: the other 2 in Cambridge. The rest went to students in Switzerland (4), Germany (3), Israel (2), Austria (1) and Poland (1).
This is what Google says about these Fellowships:
Nurturing and maintaining strong relations with the academic community is a top priority at Google. Today, we're announcing the 2014 Google PhD Fellowship recipients. These students, recognized for their incredible creativity, knowledge and skills, represent some of the most outstanding graduate researchers in computer science across the globe. We're excited to support them, and we extend our warmest congratulations.
I would like to thank everyone for their support!
This week, King's College is hosting the London Optimization Workshop. The first day saw, among others, a very interesting talk by Panos Parpas on A Multilevel Proximal Algorithm for Large Scale Composite Convex Optimization.
It has been 400 years since the Scot John Napier published his Mirifici Logarithmorum Canonis Descriptio. For this work, he is recognised as the inventor of logarithms. On the occasion of this rare anniversary, I attended a workshop that follows on from the last celebration, held in 1914!
I helped organise today's Edinburgh SIAM Student Chapter Conference 2014. Apart from minor technical difficulties, all went well, and graduate students had the opportunity to look into different areas of research in applied mathematics.
In February, you can meet me in Cambridge around Feb 7, attending the LMS meeting on Sparse Regularisation for Inverse Problems, or in London Feb 17-19 at Big Data: Challenges and Applications.
...to my new personal web page. I hope it's red enough.
We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are distributed (unevenly) over an extremely large number of nodes, but the goal remains to train a high-quality centralized model. We refer to this setting as Federated Optimization. In this setting, communication efficiency is of utmost importance.
A motivating example for federated optimization arises when we keep the training data locally on users' mobile devices rather than logging it to a data center for training. Instead, the mobile devices are used as nodes performing computation on their local data in order to update a global model. We suppose that we have an extremely large number of devices in our network, each of which has only a tiny fraction of the total data available; in particular, we expect the number of data points available locally to be much smaller than the number of devices. Additionally, since different users generate data with different patterns, we assume that no device has a representative sample of the overall distribution.
We show that existing algorithms are not suitable for this setting, and propose a new algorithm which shows encouraging experimental results. This work also sets a path for future research needed in the context of federated optimization.
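To make the setting concrete, here is a minimal Python sketch of the local-computation-plus-aggregation pattern that federated optimization is built around. This is a generic local-update/averaging scheme for illustration only (toy least-squares data, invented node counts, step size and round counts), not the algorithm proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: many nodes, each holding only a few non-i.i.d. points.
d, num_nodes = 5, 200
w_true = rng.normal(size=d)
nodes = []
for _ in range(num_nodes):
    n_k = rng.integers(1, 4)                          # 1-3 local data points
    X = rng.normal(loc=rng.normal(), size=(n_k, d))   # node-specific distribution
    y = X @ w_true + 0.01 * rng.normal(size=n_k)
    nodes.append((X, y))

def local_update(w, X, y, lr=0.01, steps=5):
    """A few local gradient steps on the node's least-squares loss."""
    w = w.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

w = np.zeros(d)
for _ in range(50):                    # each round costs one communication
    updates, weights = [], []
    for X, y in nodes:
        updates.append(local_update(w, X, y))
        weights.append(len(y))
    # Server aggregates: data-weighted average of the local models.
    w = np.average(updates, axis=0, weights=weights)

print("distance to w_true:", float(np.linalg.norm(w - w_true)))
```

The sketch shows why communication efficiency dominates: the per-node computation is trivial, but every round of progress requires contacting every device.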
We present and analyze several strategies for improving the performance of stochastic variance-reduced gradient (SVRG) methods. We first show that the convergence rate of these methods can be preserved under a decreasing sequence of errors in the control variate, and use this to derive variants of SVRG that use growing-batch strategies to reduce the number of gradient calculations required in the early iterations. We further (i) show how to exploit support vectors to reduce the number of gradient computations in the later iterations, (ii) prove that the commonly-used regularized SVRG iteration is justified and improves the convergence rate, (iii) consider alternate mini-batch selection strategies, and (iv) consider the generalization error of the method.
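The growing-batch idea from the first part of the abstract can be sketched as follows. This is a simplified illustration on a synthetic least-squares problem (the sizes, batch schedule and step size are invented for the demo), not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic problem: f(w) = (1/n) * sum_i 0.5 * (x_i . w - y_i)^2
n, d = 1000, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

def grad_i(w, i):
    """Gradient of the i-th example's loss."""
    return X[i] * (X[i] @ w - y[i])

def svrg_growing_batch(lr=0.05, epochs=8, m=400, batch0=64, growth=2.0):
    """SVRG whose control variate uses a growing sample instead of the
    full gradient, so early epochs need far fewer gradient evaluations."""
    w = np.zeros(d)
    batch = batch0
    for _ in range(epochs):
        snapshot = w.copy()
        # Approximate "full" gradient on a growing batch (the key trick).
        sample = rng.choice(n, size=min(int(batch), n), replace=False)
        mu = np.mean([grad_i(snapshot, i) for i in sample], axis=0)
        for _ in range(m):
            i = rng.integers(n)
            w -= lr * (grad_i(w, i) - grad_i(snapshot, i) + mu)
        batch *= growth            # grow toward the full data set
    return w

w = svrg_growing_batch()
print("distance to w_true:", float(np.linalg.norm(w - w_true)))
```

The point of the schedule is that the error in the control variate mu shrinks as the batch grows, mirroring the decreasing-error condition under which the paper shows the convergence rate is preserved.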
School of Mathematics
Faculty of Mathematics, Physics and Informatics
in the area of Signal Processing
Highly competitive PhD scholarship awarded to several students across the University
The purpose of the competition was to develop a one-shot-learning gesture recognizer for the Microsoft Kinect. We were awarded a travel grant to present our solution at the International Conference on Pattern Recognition 2012, and were invited to submit a paper to the Journal of Machine Learning Research. The paper has been accepted for publication.
Olomouc, Czech Republic
Trojsten is a Slovak NGO working with the most talented high school students in the fields of mathematics, physics and informatics. Trojsten organizes three correspondence competitions and week-long camps for the best contestants. Apart from being an organizer of the mathematical part, I was also in charge of finance and administration of the entire organization, with approximately 70 volunteers involved.
Vocablr is a language-teaching platform designed to make learning new vocabulary fun. I was responsible for most of the non-design work. We were also part of a Slovak startup accelerator program.
Coordination of sub-national rounds.
Journal of Universal Rejection
Neural Information Processing Systems
Journal of Machine Learning Research