In this episode I spend some time with Ivan Beschastnikh, Associate Professor of Computer Science at UBC. Ivan is co-author of Erlay, a proposed bandwidth-efficient transaction relay for Bitcoin, and of Biscotti, a distributed machine learning system. For comprehensive show notes, complete guest bios, and links mentioned in the episode, read on below or go to advancetechmedia.org and click the episode title.

Written by Alexandra Moxin

In this episode I spend some time with Ivan Beschastnikh, Associate Professor of Computer Science at the University of British Columbia and co-author of the Erlay paper, which proposes a bandwidth-efficient transaction relay for Bitcoin. Erlay improves the bandwidth efficiency of Bitcoin nodes by changing the way transactions are disseminated across the network.
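To make the dissemination idea concrete, here is a minimal sketch in Python, assuming a toy model in which peers compare raw transaction-ID sets. This is not the Erlay protocol itself: Erlay reconciles compact set sketches (via the minisketch library) and keeps limited flooding between well-connected nodes, so treat this only as an illustration of why reconciliation saves bandwidth relative to flooding announcements.

```python
# Toy contrast between flooding-style announcements and set reconciliation,
# the idea Erlay builds on. Names and structure are illustrative only.

def flood_announcements(my_txids, num_peers):
    """Pre-Erlay style: announce every transaction ID to every peer.
    Announcement traffic grows with len(my_txids) * num_peers."""
    return len(my_txids) * num_peers

def reconcile(my_txids, peer_txids):
    """Reconciliation-style idea: periodically exchange only the *difference*
    between two peers' sets, which is usually far smaller than either set."""
    to_send = my_txids - peer_txids      # txids the peer is missing
    to_request = peer_txids - my_txids   # txids we are missing
    return to_send, to_request

if __name__ == "__main__":
    mine = {f"tx{i}" for i in range(1000)}
    theirs = (mine - {"tx7", "tx42"}) | {"tx_new"}
    print("flooding announcements sent:", flood_announcements(mine, 8))
    to_send, to_request = reconcile(mine, theirs)
    print("reconciliation: send", len(to_send), "txids, request", len(to_request))
```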

Erlay was co-authored by Gleb Naumenko, formerly of UBC Computer Science and currently in residency at Chaincode Labs; Pieter Wuille, Bitcoin Core developer and co-founder of Blockstream; Gregory Maxwell, Bitcoin Core developer, co-founder and Chief Technology Officer of Blockstream; and Alexandra (Sasha) Fedorova of UBC Electrical and Computer Engineering (ECE).

We also discuss Biscotti: A Ledger for Private and Secure Peer-to-Peer Machine Learning, a distributed machine learning system under development. Biscotti moves away from the centralized model of storing training data and explores the boundaries of federated learning.
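As a rough illustration of the federated learning setup Biscotti starts from, here is a minimal Python sketch of federated averaging on synthetic data. The actual Biscotti system layers a peer-to-peer ledger, privacy-preserving noising, and secure aggregation on top of this kind of training loop; the function names below are illustrative and not taken from the project.

```python
# Minimal federated-averaging sketch: clients train locally on private data
# and only model updates are aggregated. Biscotti replaces the central
# aggregator with a peer-to-peer ledger (not shown here).
import numpy as np

def local_sgd_step(weights, X, y, lr=0.1):
    """One stochastic gradient descent step on a client's private data
    (simple linear regression for illustration)."""
    preds = X @ weights
    grad = X.T @ (preds - y) / len(y)
    return weights - lr * grad

def federated_round(global_weights, client_data):
    """Each client trains locally; only the resulting updates are averaged."""
    updates = [local_sgd_step(global_weights.copy(), X, y) for X, y in client_data]
    return np.mean(updates, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    clients = []
    for _ in range(5):
        X = rng.normal(size=(50, 2))
        y = X @ true_w + rng.normal(scale=0.1, size=50)
        clients.append((X, y))
    w = np.zeros(2)
    for _ in range(200):
        w = federated_round(w, clients)
    print("learned weights:", w)  # approaches [2.0, -1.0]
```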

Closing out the episode, Ivan mentions his interest in network devices such as programmable switches, the construction of distributed systems, and program synthesis. He asks listeners for their thoughts on applications and incentives for distributed peer-to-peer machine learning systems. You can reach Ivan via his website.

Enjoy!

Show Links

Ivan Beschastnikh on LinkedIn
Ivan Beschastnikh on Twitter
Bandwidth-Efficient Transaction Relay for Bitcoin (the Erlay paper)
Biscotti: A Ledger for Private and Secure Peer-to-Peer Machine Learning
Ivan’s research statement (for tenure review) from 2018 that summarizes his work at UBC from 2013-2018
Modeling Systems from Logs of their Behavior, Ivan Beschastnikh, 2013
Private and secure distributed ML, FoolsGold and Biscotti (BlockML) projects, 2018
Program analysis for distributed systems, Dinv, Dara, and PGo projects, 2018
Scalable Constraint-based Virtual Data Center Allocation, talk at UC Louvain, August 2016
Mining temporal and data-temporal specifications, Texada and Quarry projects, August 2016
Helping developers make sense of distributed systems, Colloquium at Sonoma U., October 2015
Making sense of distributed systems, UBC CS FLS talk, January 2014
Modeling systems from logs of their behavior, Microsoft Research Redmond, July 2013
UBC Computer Science
Blockstream
Chaincode Labs
Federated Learning: Collaborative Machine Learning without Centralized Training Data, by Brendan McMahan and Daniel Ramage via Google AI Blog
Stochastic Gradient Descent explained, Stanford EDU Deep Learning Tutorial, contributors: Andrew Ng, Jiquan Ngiam, Chuan Yu Foo, Yifan Mai, Caroline Suen, Adam Coates, Andrew Maas, Awni Hannun, Brody Huval, Tao Wang, Sameep Tandon
Barefoot Networks
Intel to Acquire Barefoot Networks, Accelerating Delivery of Ethernet-Based Fabrics by Navin Shenoy via Intel Newsroom
Program Synthesis, Wikipedia