David Zuckerman, University of Texas at Austin – Randomness

How do you have a breakthrough in randomness?

David Zuckerman, professor in the department of computer science at the University of Texas at Austin, details randomness and how algorithms and computers can make sense of it.

David Zuckerman holds an Endowed Professorship in the Computer Science Department at the University of Texas at Austin. He received an A.B. in Mathematics from Harvard University in 1987 and a Ph.D. in Computer Science from U.C. Berkeley in 1991. He was a postdoctoral fellow at MIT from 1991-1993, and at Hebrew University in the Fall of 1993. He has been with the University of Texas since then, visiting U.C. Berkeley from 1999-2000, Harvard University from 2004-2005, and the Institute for Advanced Study from 2011-2012.

His research focuses primarily on pseudorandomness and the role of randomness in computing. He is best known for his work on randomness extractors and their applications. His other research interests include coding theory, distributed computing, cryptography, inapproximability, and other areas of complexity theory. His honors include a Simons Investigator Award, a Best Paper Award at STOC 2016, selection as an ACM Fellow, a Guggenheim Fellowship, a Packard Fellowship for Science and Engineering, a Sloan Research Fellowship, and an NSF Young Investigator Award.

Randomness


Randomness is amazingly useful in many areas.  For example, the gold standard for medical research involves randomized controlled trials.

My research focuses on randomness and its role in computing, particularly in algorithms, which are like recipes for computing something. Many randomized algorithms are faster than the best known algorithms that don't use randomness. Scientists use random numbers to simulate complex systems such as the climate. Computer security is impossible without random numbers.
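To make this concrete, one classic example of a randomized speedup is Freivalds' algorithm: checking whether a matrix product is correct by multiplying by a random vector, which is far cheaper than recomputing the product. The Python sketch below is only illustrative, with function names chosen for this page; it is not part of our research.

```python
import numpy as np

def freivalds_verify(A, B, C, trials=20, seed=None):
    """Randomized check that A @ B == C (Freivalds' algorithm).

    Each trial costs only a few matrix-vector products, about O(n^2) work,
    instead of the roughly O(n^3) needed to recompute A @ B. If A @ B != C,
    a single trial misses the difference with probability at most 1/2, so
    'trials' repetitions drive the error probability below 2**(-trials).
    """
    rng = np.random.default_rng(seed)
    n = C.shape[1]
    for _ in range(trials):
        r = rng.integers(0, 2, size=(n, 1))    # random 0/1 column vector
        if not np.array_equal(A @ (B @ r), C @ r):
            return False                       # definitely not equal
    return True                                # equal with high probability

# Toy usage: a correct product passes, a corrupted one is caught.
rng = np.random.default_rng(0)
A = rng.integers(0, 10, (300, 300))
B = rng.integers(0, 10, (300, 300))
C = A @ B
print(freivalds_verify(A, B, C))   # True
C[0, 0] += 1                       # corrupt one entry
print(freivalds_verify(A, B, C))   # False, with overwhelming probability
```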

On the other hand, generating random numbers seems difficult for a computer. This is because a computer appears to be the opposite of random, executing one instruction after another in a predetermined way. For that reason, computers use ad hoc sources of randomness, such as the timing intervals between keystrokes. However, such ad hoc sources are really only a little bit random. It is therefore essential to improve the quality of the random source. Unfortunately, this is impossible with just one general low-quality random source.
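For the idealized case of independent but biased coin flips, von Neumann's classic trick already improves quality, and the sketch below illustrates it. Real ad hoc sources are not guaranteed to be independent, which is why purifying a single general low-quality source is so much harder.

```python
import random

def biased_bits(n, p=0.9):
    """Toy low-quality source: independent bits, each equal to 1 with probability p."""
    return [1 if random.random() < p else 0 for _ in range(n)]

def von_neumann_extract(bits):
    """Debias independent coin flips (von Neumann's trick).

    Look at non-overlapping pairs: output the first bit when the pair is
    (0,1) or (1,0), and discard (0,0) and (1,1). The two kept outcomes each
    occur with probability p*(1-p), so the output is perfectly unbiased --
    but only because the input bits are independent. Keystroke timings and
    other ad hoc sources need not be independent, so this trick does not
    solve the general problem.
    """
    return [a for a, b in zip(bits[0::2], bits[1::2]) if a != b]

raw = biased_bits(100_000, p=0.9)
clean = von_neumann_extract(raw)
print(len(clean), sum(clean) / len(clean))   # roughly 9,000 bits, fraction of ones near 0.5
```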

Recently, my former PhD student Eshan Chattopadhyay and I solved a longstanding open problem by introducing an algorithm that combines two low-quality random sources into one high-quality random source. Previous methods needed at least one of the two input sources to be of moderately high quality. Our algorithm, called a two-source extractor, produces high-quality randomness starting from sources of exponentially lower quality than previous methods could handle.
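To illustrate the idea of combining two independent sources, here is a sketch of a much older and simpler two-source extractor, the inner product modulo 2. It is not our construction; it only works when the two sources already contain quite a lot of randomness between them, whereas our extractor handles exponentially less.

```python
import random

def inner_product_extract(x_bits, y_bits):
    """One-bit two-source extractor: the inner product of the two sources mod 2.

    This classic construction outputs a nearly unbiased bit whenever the two
    sources are independent and together contain enough randomness (roughly
    more than half of their combined length, measured as min-entropy). It is
    far weaker than the 2016 two-source extractor, but it shows the basic idea
    of combining two weak sources into output that is close to uniform.
    """
    assert len(x_bits) == len(y_bits)
    bit = 0
    for x, y in zip(x_bits, y_bits):
        bit ^= x & y
    return bit

# Toy usage with two independent 16-bit samples standing in for weak sources.
x = [random.randint(0, 1) for _ in range(16)]
y = [random.randint(0, 1) for _ in range(16)]
print(inner_product_extract(x, y))   # 0 or 1, close to unbiased
```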

Theoreticians consider our two-source extractor a breakthrough. It also yields a major advance on an important problem in a field of mathematics called Ramsey theory, which seeks to find structure even in random-looking objects.

It will take time before our extractor becomes practical, but several researchers have already built on and improved our ideas.
