
Jakub Konečný

Researcher

I am a PhD student at the University of Edinburgh, supervised by Peter Richtárik.

As of June 2014, I am a recipient of the Google Europe Doctoral Fellowship in Optimization Algorithms.

I am developing large-scale optimization algorithms for machine learning.


New paper out Nov 25, 2016

We have released a new paper, Randomized Distributed Mean Estimation: Accuracy vs Communication, joint with Peter Richtárik. We consider the problem of distributed computation of the arithmetic average of vectors under constraints on communication, and propose a flexible family of randomized algorithms exploring the trade-off between expected communication cost and estimation error.

 

Federated Learning Oct 19, 2016

We have recently released two new papers in collaboration with Google, focusing on Federated Learning: a setting where we need to learn from massively decentralized data, such as when the training data reside on the phones of the users who generated them.

The first work, Federated Optimization: Distributed Machine Learning for On-Device Intelligence, describes the setting in detail and proposes algorithms with a particular focus on the convex setting. In Federated Learning: Strategies for Improving Communication Efficiency, we address the communication efficiency of systems for Federated Learning. In the best case, we were able to train a deep model while communicating fewer bits over the network than the size of the original data.

 

Learning, Privacy, Mobile Data Sep 13, 2016

I am attending the Learning, Privacy, and Mobile Data workshop in Seattle, organized by the Google team I interned with. The goal is to better understand the interactions between the fields of machine learning, optimization, privacy and cryptography, to facilitate the move to mobile computing devices and fundamentally change the way machine learning is deployed.

 

IMA Birmingham Sep 7, 2016

Together with colleagues from our research group, we organized two minisymposia at the 5th IMA Conference on Numerical Linear Algebra and Optimization in Birmingham.

I spoke about our recent work, AIDE: Fast and Communication Efficient Distributed Optimization.

 

New paper out [AIDE] Aug 29, 2016

In collaboration with Sashank Reddi, Barnabás Póczos and Alex Smola from Carnegie Mellon University, and Peter Richtárik, we propose and analyse a new algorithm, AIDE: Fast and Communication Efficient Distributed Optimization; in short, an accelerated inexact DANE algorithm.

Earlier this summer, I presented a poster on AIDE at the International Conference on Machine Learning 2016 in New York.

 

PhD Fellowship Summit Aug 23, 2016

I am speaking at the Google PhD Fellowship Summit in Mountain View about my experience of merging research efforts in academia and at Google. I also met many passionate young researchers who enjoy the same support from Google during their studies, a very motivating experience.

 

Google Summer Internship May 16, 2016

Until the end of August, I am at Google Seattle for a summer internship, working mainly with Brendan McMahan. I will be working on some of the algorithmic challenges in Federated Optimization/Learning: decentralized and highly unbalanced datasets.

 

Future of Humanity Institute Mar 14, 2016

As of today, I am visiting the Future of Humanity Institute in Oxford, trying to better understand where the ideas regarding AI safety stand at the moment and how I might be able to contribute. I highly recommend reading Superintelligence, if you haven't yet.

 

Optimization Without Borders Feb 7, 2016

I am excited to be attending Optimization Without Borders in beautiful Les Houches beneath Mont Blanc. The event is dedicated to the 60th birthday of Yurii Nesterov and his lifelong contributions to the field.

 

Late updates Jan 11, 2016

Thanks to everybody at NIPS for the amazing atmosphere, inspiring discussions and many new ideas. I presented Stop Wasting My Gradients: Practical SVRG, which I co-authored with Reza Babanezhad, Mohamed Osama Ahmed, Alim Virani, Mark Schmidt and Scott Sallinen from the University of British Columbia. I also presented a poster on Federated Optimization: Distributed Optimization Beyond the Datacenter at the Optimization for Machine Learning workshop.

Before that (Nov 25-27), I spoke at one of the scoping workshops of The Alan Turing Institute, which is currently being set up, on the topic of Distributed Machine Learning and Optimization.

Even before that (Nov 23), I briefly visited the Future of Humanity Institute in Oxford and discussed some of the long-term dangers arising from technological development. For those of you who haven't noticed it yet, I highly recommend Nick Bostrom's book Superintelligence. The book is free of bold, unjustified predictions, and instead summarizes the open problems and discusses current ideas.

 

Federated Optimization Nov 14, 2015

New preliminary paper out! Federated Optimization: Distributed Optimization Beyond the Datacenter, which is based on my summer work at Google with Brendan McMahan. We introduce an important new setting, in which data are distributed across a very large number of computers, each having access to only a few data points. This is primarily motivated by the setting where users keep their data on their own devices, but the goal is still to train a high-quality global model.

 

Practical SVRG Nov 10, 2015

New paper out! Stop Wasting My Gradients: Practical SVRG, which I co-authored with Mark Schmidt and his group, has been accepted to the NIPS conference. I am looking forward to meeting you all there!

 

INFORMS Annual Meeting Nov 1, 2015

Not many people showed up in Halloween costumes at the INFORMS Annual Meeting in Philadelphia. Never mind; I am feeling overwhelmed by the 80 parallel sessions going on. I am giving a talk on Federated Optimization on Monday. Slides are here; the paper will be available soon (and is available upon request).

 

Alma Mater Oct 21, 2015

I gave a talk at a seminar at Comenius University in Bratislava, speaking about S2GD and the work on Federated Optimization I did over the summer at Google, the next step in large-scale distributed optimization. A preliminary paper is coming out soon...

 

ISMP 2015 Jul 17, 2015

I am giving a talk today at the ISMP conference in Pittsburgh. So far a very inspiring event, in a surprisingly pleasant city.

 

Google Summer Internship May 26, 2015

Until the end of August, I am at Google for a summer internship, working mainly with Brendan McMahan. I hope there is lots of cool stuff to learn!

 

Optimization & Big Data 2015 May 6, 2015

At home in Edinburgh, we are organizing a three-day workshop, Optimization & Big Data 2015, focused on large-scale optimization, with keynote speaker Arkadi Nemirovski. I am giving an invited talk on Thursday; my slides are available here.

 

Full paper - mS2GD Apr 20, 2015

Mini-Batch Semi-Stochastic Gradient Descent in the Proximal Setting, joint work with Jie Liu, Martin Takáč and Peter Richtárik, is out. This is the full-length version of the short paper presented at the NIPS Optimization workshop.

 

Edinburgh SIAM Student Chapter Conference Apr 13, 2015

The Edinburgh SIAM Student Chapter, on whose committee I serve, organized its Annual Conference, with speakers including Yves Wiaux (Heriot-Watt University), Marta Faias (New University of Lisbon), Michel Destrade (NUI Galway), Klaus Mosegaard (University of Copenhagen), Samuel Cohen (Oxford) and Julien Dambrine (University of Poitiers).

The aims of the Chapter are to promote interdisciplinary and multi-institutional collaboration, increase student engagement, and raise awareness of the application of mathematical techniques to real-world problems in science and industry.

 

SUTD Singapore Feb 27, 2015

Until Mar 13, I am at the Singapore University of Technology and Design, visiting Ngai-Man Cheung and Selin Damla Ahipasaoglu.

 

MLSS 2015 Sydney Feb 16, 2015

As of today, I am attending the Machine Learning Summer School in Sydney. The program and the people both seem very interesting.

 

BASP Jan 26, 2015

I am attending the International Biomedical and Astronomical Signal Processing (BASP) Frontiers workshop, a very interesting multidisciplinary conference in beautiful Villars-sur-Ollon. The main areas are cosmology, medical imaging and signal processing. I am giving a talk on Wednesday (slides) about Semi-Stochastic Gradient Descent.

UPDATE: I received the Best Contribution Award in the area of Signal Processing. Overall, the conference had an amazing atmosphere, and I managed to talk to many interesting people outside of my field. I highly recommend it to anyone!

 

Late updates Jan 7, 2015

The year finished with a very interesting conference, Foundations of Computational Mathematics in Montevideo. I presented a poster on Mini-batch semi-stochastic gradient descent in the proximal setting (see poster here).

We also managed to release Semi-Stochastic Coordinate Descent, joint with Zheng Qu and Peter Richtárik. This is the full-length version of the brief paper presented at the 2014 NIPS workshop on Optimization for Machine Learning.

 

NIPS 2014 Dec 8, 2014

I am attending NIPS (Neural Information Processing Systems) in Montreal. Obviously, it's really cold here.

I am presenting two posters at the Optimization for Machine Learning workshop. The posters are on Mini-batch semi-stochastic gradient descent in the proximal setting (see poster here) and Semi-Stochastic Coordinate Descent (see poster here).

 

Research visit - Lehigh University Nov 17, 2014

For the following three weeks, I am visiting Martin Takáč at Lehigh University. The optimization group here is strong, so I look forward to a hopefully fruitful time. For now, I can say it's surprisingly cold here these days.

And here are some refined slides.

 

Research visit - ETH Zurich Nov 3, 2014

This week, I am on a research visit at the Data Analytics Lab led by Thomas Hofmann at ETH Zurich. My first impression is amazing; Zurich seems like a great place to live and work.

Slides from my talk are here.

 

An interesting article Oct 23, 2014

Here is an interesting interview with Michael Jordan about what the big thing about Big Data really is, and more...

 

Two short papers out Oct 19, 2014

This week we released two short papers. The first one, S2CD: Semi-stochastic coordinate descent, coauthored with Zheng Qu and Peter Richtárik, extends the S2GD algorithm to the coordinate setting.

The other one, mS2GD: Mini-batch semi-stochastic gradient descent in the proximal setting, joint with Jie Liu, Martin Takáč and Peter Richtárik, discusses an extension of the S2GD algorithm to the mini-batch setting, which admits a twofold speedup: one from better gradient estimates, and the other from simple parallelism. A rough sketch of the loop structure follows below.
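To make the loop structure concrete, here is a minimal Python sketch of an mS2GD-style method on a toy L1-regularized least-squares problem. The data, parameter choices and helper names are illustrative assumptions, not the setup from the paper.

import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 20
X, y = rng.normal(size=(n, d)), rng.normal(size=n)
lam, h, m, b = 0.01, 0.01, 100, 10  # L1 weight, stepsize, inner-loop cap, mini-batch size

def batch_grad(w, idx):
    # Gradient of the least-squares loss on a mini-batch of points.
    return X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)

def prox_l1(w, t):
    # Proximal operator of t*||.||_1 (soft thresholding).
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

w = np.zeros(d)
for epoch in range(20):                      # outer loop
    full_grad = X.T @ (X @ w - y) / n        # one full pass at the snapshot point
    snapshot, x = w.copy(), w.copy()
    for _ in range(rng.integers(1, m + 1)):  # inner loop of random length
        idx = rng.choice(n, size=b, replace=False)
        # Variance-reduced estimate: the two mini-batch terms cancel in
        # expectation, so the estimate is unbiased with shrinking variance.
        v = batch_grad(x, idx) - batch_grad(snapshot, idx) + full_grad
        x = prox_l1(x - h * v, h * lam)      # proximal mini-batch step
    w = x

The mini-batch enters in two places: it reduces the variance of the gradient estimate, and the per-batch work parallelizes trivially, which is the twofold speedup mentioned above.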

 

New paper out Sep 30, 2014

Simple complexity analysis of direct search, joint with Peter Richtárik.


 

MMEI 2014 (in a castle!) Sep 9, 2014

Staying in the beautiful Smolenice Castle in Slovakia, attending the International Conference on Mathematical Methods in Economy and Industry, a conference whose tradition dates back to 1973.

 

4th IMA NLAO Sep 3, 2014

This week I am attending the 4th IMA Conference on Numerical Linear Algebra and Optimisation. I co-organized a minisymposium on First order methods and big data optimization (together with Zheng Qu and Peter Richtárik), in which I am also giving a talk.

 

S2GD code available Jul 9, 2014

An efficient implementation of Semi-Stochastic Gradient Descent for logistic regression is available in the MLOSS repository. The code works with MATLAB and is implemented in C++.

 

SIAM Annual Meeting Jul 8, 2014

On Monday I gave an invited talk at the SIAM Annual Meeting in Chicago on the S2GD algorithm (slides). Within the same minisymposium, a very interesting talk was delivered by S. V. N. Vishwanathan on NOMAD (slides).

As a representative of the Edinburgh SIAM Student Chapter, I attended a breakfast with other representatives and SIAM executives, and got some ideas on how to improve our chapter.

Compared to other conferences, the whole event feels much more like a networking gathering for anyone involved with SIAM activities.

 

Google PhD Fellowship Jun 18, 2014

I was awarded the 2014 Google Europe Doctoral Fellowship in Optimization Algorithms! The news was announced today on the Google Research Blog.

The University of Edinburgh was particularly successful this year, bagging two awards (each university can nominate up to two students). Out of the 15 Europe Fellowships, 4 were awarded to universities in the UK: the 2 in Edinburgh and 2 others in Cambridge. The rest went to students in Switzerland (4), Germany (3), Israel (2), Austria (1) and Poland (1).

This is what Google says about these Fellowships:
Nurturing and maintaining strong relations with the academic community is a top priority at Google. Today, we're announcing the 2014 Google PhD Fellowship recipients. These students, recognized for their incredible creativity, knowledge and skills, represent some of the most outstanding graduate researchers in computer science across the globe. We're excited to support them, and we extend our warmest congratulations.

I would like to thank everyone for their support!

 

London Optimization Workshop Jun 9, 2014

This week, King's College is hosting the London Optimization Workshop. The first day saw, among others, a very interesting talk by Panos Parpas on A Multilevel Proximal Algorithm for Large Scale Composite Convex Optimization.

 

SIAM OP14 May 18, 2014

I am off to San Diego to attend the SIAM Conference on Optimization. I am giving a talk about S2GD (slides). The weather is obviously good :)

 

Funding awarded Apr 25, 2014

Thanks to support from SIAM and the School Research & Development Fund, I am going to attend the 2014 SIAM Annual Meeting in Chicago, where I will represent the Edinburgh SIAM Student Chapter.

 

Logarithm anniversary Apr 2, 2014

It has been 400 years since the Scot John Napier published his Mirifici Logarithmorum Canonis Descriptio, the work for which he is recognised as the inventor of logarithms. On the occasion of this rare anniversary, I attended a workshop that follows on from the last celebration, held in 1914!

 

SIAM Student Conference Mar 14, 2014

I helped organise today's Edinburgh SIAM Student Chapter Conference 2014. Apart from minor technical difficulties, all went well, and graduate students had the opportunity to take a look into different areas of research in applied mathematics.

 

Meet me Jan 25, 2014

In February, you can meet me in Cambridge around Feb 7, attending the LMS meeting on Sparse Regularisation for Inverse Problems, or in London, Feb 17-19, at Big Data: Challenges and Applications.

 

Welcome... Jan 23, 2014

...to my new personal web page. I hope it's red enough.

Recent abstracts

Randomized Distributed Mean Estimation: Accuracy vs Communication

We consider the problem of estimating the arithmetic average of a finite collection of real vectors stored in a distributed fashion across several compute nodes subject to a communication budget constraint. Our analysis does not rely on any statistical assumptions about the source of the vectors. This problem arises as a subproblem in many applications, including reduce-all operations within algorithms for distributed and federated optimization and learning. We propose a flexible family of randomized algorithms exploring the trade-off between expected communication cost and estimation error. Our family contains the full-communication and zero-error method on one extreme, and an ε-bit communication and O(1/(εn)) error method on the opposite extreme. In the special case where we communicate, in expectation, a single bit per coordinate of each vector, we improve upon existing results by obtaining O(r/n) error, where r is the number of bits used to represent a floating point value.
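As an illustration of this trade-off, here is a minimal Python sketch of an unbiased one-bit-per-coordinate quantization scheme in the general spirit of the family above; it is a toy under stated assumptions (numpy, synthetic Gaussian vectors), not the exact protocol analyzed in the paper.

import numpy as np

def encode_one_bit(x, rng):
    # One bit per coordinate, plus two shared scalars (the coordinate range).
    lo, hi = x.min(), x.max()
    p = np.zeros_like(x) if hi == lo else (x - lo) / (hi - lo)
    return rng.random(x.shape) < p, lo, hi   # round up with probability p

def decode(bits, lo, hi):
    # Unbiased: E[decoded coordinate] = p*hi + (1-p)*lo = original coordinate.
    return np.where(bits, hi, lo)

rng = np.random.default_rng(0)
vectors = [rng.normal(size=1000) for _ in range(100)]  # one vector per node
decoded = [decode(*encode_one_bit(v, rng)) for v in vectors]
error = np.linalg.norm(np.mean(decoded, axis=0) - np.mean(vectors, axis=0))
print(error)  # the estimation error shrinks as the number of nodes grows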

 
Federated Optimization: Distributed Machine Learning for On-Device Intelligence

We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are unevenly distributed over an extremely large number of nodes. The goal is to train a high-quality centralized model. We refer to this setting as Federated Optimization. In this setting, communication efficiency is of the utmost importance and minimizing the number of rounds of communication is the principal goal. A motivating example arises when we keep the training data locally on users' mobile devices instead of logging it to a data center for training. In federated optimization, the devices are used as compute nodes performing computation on their local data in order to update a global model. We suppose that we have an extremely large number of devices in the network --- as many as the number of users of a given service, each of which has only a tiny fraction of the total data available. In particular, we expect the number of data points available locally to be much smaller than the number of devices. Additionally, since different users generate data with different patterns, it is reasonable to assume that no device has a representative sample of the overall distribution. We show that existing algorithms are not suitable for this setting, and propose a new algorithm which shows encouraging experimental results for sparse convex problems. This work also sets a path for future research needed in the context of federated optimization.
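For intuition, the following Python sketch shows the basic round structure of the setting on a toy least-squares problem: many nodes, each with only a handful of local data points, compute updates on-device and a server averages them. The local solver and all names are illustrative assumptions; this is a naive baseline, not the algorithm proposed in the paper.

import numpy as np

rng = np.random.default_rng(0)
d, num_nodes = 10, 1000
# Each node holds only a few data points, far fewer than the number of nodes.
nodes = [(rng.normal(size=(3, d)), rng.normal(size=3)) for _ in range(num_nodes)]

def local_update(w, X, y, lr=0.1, steps=5):
    # A few gradient steps on the node's own data, executed on-device.
    for _ in range(steps):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

w = np.zeros(d)
for _ in range(30):                                       # one communication round
    updates = [local_update(w, X, y) for X, y in nodes]   # local computation
    w = np.mean(updates, axis=0)                          # server aggregation

Since every round costs a full broadcast and upload, the goal is to make as much progress per round as possible, which is why rounds of communication, not local flops, are the quantity minimized above.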

 
Federated Learning: Strategies for Improving Communication Efficiency

Federated Learning is a machine learning setting where the goal is to train a high-quality centralized model with training data distributed over a large number of clients each with unreliable and relatively slow network connections. We consider learning algorithms for this setting where on each round, each client independently computes an update to the current model based on its local data, and communicates this update to a central server, where the client-side updates are aggregated to compute a new global model. The typical clients in this setting are mobile phones, and communication efficiency is of utmost importance. In this paper, we propose two ways to reduce the uplink communication costs. The proposed methods are evaluated on the application of training a deep neural network to perform image classification. Our best approach reduces the upload communication required to train a reasonable model by two orders of magnitude.
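As a toy illustration of cutting uplink cost, here is a minimal Python sketch of one generic idea, random sparsification with unbiased rescaling; the names and parameters are assumptions for the example, and the concrete schemes evaluated in the paper are not reproduced here.

import numpy as np

def sparsify(update, keep_fraction, rng):
    # Upload a random subset of coordinates, rescaled so that
    # E[compressed] == update (unbiased compression).
    mask = rng.random(update.shape) < keep_fraction
    return np.where(mask, update / keep_fraction, 0.0)

rng = np.random.default_rng(0)
client_updates = [rng.normal(size=10_000) for _ in range(500)]
compressed = [sparsify(u, 0.01, rng) for u in client_updates]  # ~1% uploaded
error = np.linalg.norm(np.mean(compressed, axis=0) - np.mean(client_updates, axis=0))
print(error)  # averaging over many clients cancels most of the sampling noise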

 
AIDE: Fast and Communication Efficient Distributed Optimization

In this paper, we present two new communication-efficient methods for distributed minimization of an average of functions. The first algorithm is an inexact variant of the DANE algorithm that allows any local algorithm to return an approximate solution to a local subproblem. We show that such a strategy does not affect the theoretical guarantees of DANE significantly. In fact, our approach can be viewed as a robustification strategy, since the method is substantially better behaved than DANE on data partitions arising in practice. It is well known that the DANE algorithm does not match the communication complexity lower bounds. To bridge this gap, we propose an accelerated variant of the first method, called AIDE, that not only matches the communication lower bounds but can also be implemented using a purely first-order oracle. Our empirical results show that AIDE is superior to other communication efficient algorithms in settings that naturally arise in machine learning applications.
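The inexact-DANE component is straightforward to sketch. Below is a minimal Python toy for distributed least squares: each round communicates the global gradient, every node approximately solves its regularized local subproblem with a few gradient steps, and the solutions are averaged. The problem data, step sizes and helper names are illustrative assumptions, and AIDE's acceleration step is omitted.

import numpy as np

rng = np.random.default_rng(0)
d, n_nodes = 5, 8
data = [(rng.normal(size=(20, d)), rng.normal(size=20)) for _ in range(n_nodes)]

def grad(w, X, y):
    return X.T @ (X @ w - y) / len(y)

def inexact_dane_round(w_t, mu=1.0, inner_steps=20, lr=0.05):
    # First communication: average local gradients into the global gradient.
    g_global = np.mean([grad(w_t, X, y) for X, y in data], axis=0)
    solutions = []
    for X, y in data:
        g_local_t = grad(w_t, X, y)
        w = w_t.copy()
        for _ in range(inner_steps):  # inexact local solve; any local solver works
            # Gradient of f_i(w) - (grad f_i(w_t) - grad f(w_t))' w + (mu/2)||w - w_t||^2
            w -= lr * (grad(w, X, y) - g_local_t + g_global + mu * (w - w_t))
        solutions.append(w)
    # Second communication: average the approximate local solutions.
    return np.mean(solutions, axis=0)

w = np.zeros(d)
for _ in range(30):
    w = inexact_dane_round(w)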

Education

Achievements & Awards

  • Google Europe Doctoral Fellowship 2014-2017
    Google


  • Best Contribution Award 2015
    Biological and Astronomical Signal Processing workshop

    in the area of Signal Processing

  • Principal’s Career Development Scholarship 2013
    University of Edinburgh

A highly competitive PhD scholarship awarded to several students across the University.

  • ChaLearn Gesture Challenge, 2nd place 2012
    ChaLearn

The purpose of the competition was to develop a one-shot-learning gesture recognizer for Microsoft Kinect. We were awarded a travel grant to present our solution at the International Conference on Pattern Recognition 2012, and were invited to submit a paper to the Journal of Machine Learning Research. The paper has been accepted for publication.

  • International Mathematical Olympiad 2010
    Honourable mention

    Astana, Kazakhstan

  • Olympiad in Informatics, national round 2010
    11th place
  • Middle European Mathematical Olympiad 2008
    Honourable mention

    Olomouc, Czech Republic

Download My Resume


Fields of Academic Interest

  • Convex Programming 100%
  • Machine Learning 95%
  • Differential Privacy 75%
  • Rand. Numerical Algebra 45%
  • Computer Vision 35%
  • Something new 75%

Programming Skills

  • MATLAB 90%
  • Python 70%
  • TensorFlow 65%
  • C++/C# 35%
  • LaTeX 85%

Art

  • Painting 2%
  • Singing 0%
  • Playing guitar 17%

Publications

Contact Info

  • University of Edinburgh
    James Clerk Maxwell Building, 5406
    Peter Guthrie Tait Road
    Edinburgh
    EH9 3FD
  • J.Konecny@sms.ed.ac.uk
  • prachetit

