Azadeh Farzan is a Professor at the University of Toronto interested in software verification, programming languages, formal methods, and program synthesis, all with an emphasis on concurrency-related issues. We talk about her interest in concurrency research, what makes a good problem, and what the programming languages community could think more about.
JY: Tell us about yourself. How did you get here?
AF: I was born in Iran. I remained there until the end of my undergraduate studies, which I did in computer engineering at Sharif. Back then computer science didn't exist in Iran. I was lucky enough to go to an elite junior high and high school, where I received a very good education in all the basic sciences.
After finishing my undergraduate degree in Iran, I went to the University of Illinois for a PhD. My advisor was José Meseguer, who is not really a POPL person; he's more of a logician and generally works in formal methods. A good part of my thesis was related to logic, but a good part of it was also related to concurrency. Ed Clarke ended up being an external on my thesis committee, and I went to do a postdoc with him at CMU. I was halfway, six months, into my postdoc when I applied to two Canadian schools, with the main goal of leaving the US. It was right after the 2007 crash, and hardly any school was hiring, but I got the position at the University of Toronto and have been here since.
My transition from a supervisor who was in logic to a postdoctoral supervisor who was in model checking to becoming more and more programming languages flavored was mostly internally driven.
JY: How did you become interested in logic?
AF: Computer science and logic went hand in hand. I started programming relatively early, especially for back then. It was around 11, I think. That was mostly due to my parents' influence, because they were both computer engineers. Then, at the same time, in the first year that I was at this nice school, they had a class on propositional logic for us as part of an after-school program. They taught us inference rules, quantifiers, and things like that. I found it extremely fascinating, so much so that I would recommend that everyone be taught propositional logic at 11.
I had always had an interest in programming languages in general. When I went to Illinois originally, I was assigned to a supervisor who was in compilers. When I realized that compilers was a little too systems-y, too heuristics-based for my taste, I switched to my eventual supervisor. At the time, there weren't really any core programming languages people left at Illinois. There used to be more, and there are more people now, but at the time I entered, compilers or logic and formal methods were really the only choices for me. I chose logic and formal methods.
JY: How would you characterize your subarea of POPL?
AF: When I was a student, we used to call my area algorithmic verification, and now it's both algorithmic verification and synthesis. There is a big emphasis on algorithms and getting things done and a lot of it goes hand in hand with automation. For the purpose of automation, there has to be an algorithm that does the task.
JY: How did you get started working in concurrency?
AF: I see the main quest of programming languages research as providing ideas that can help with programmer productivity, with reliability (correctness and/or security) of software, and with other primary goals of software, for instance reusability, resilience, and so on. In a concurrent setting, it's very, very difficult for a programmer to achieve these things all at once because of the combinatorial blowup in the cases that have to be considered. There are just too many for a human mind to really fathom. Programmers often try to use simple rules so as not to have to explicitly consider the many cases. These simple rules have to be taught to you specifically, or you have to figure them out by experience. You have to be conservative. Don't be inventive, don't do anything out of the ordinary, or you are bound to make a mistake. Oftentimes, you will not even know what your mistake is. Sophisticated modern hardware has only made this problem worse. Even programmers who knew the simple rules before are not aware of the ramifications of the combination of memory hierarchy and multicore processors, and of how to write programs that perform well and are correct on these architectures. It has always felt like a very natural domain in which to help programmers with programming, in the most general sense of the word “help.”
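A minimal C sketch of the kind of blowup she describes (the example is illustrative, not one from the interview): two unsynchronized threads each increment a shared counter, and because each increment is really a load/add/store sequence, the many possible interleavings can silently lose updates.

```c
/* Illustrative sketch: two threads increment a shared counter N times
 * each, with no synchronization. Each `counter = counter + 1` is a
 * load-add-store sequence, so interleavings can lose updates and the
 * final count is typically less than 2*N. Compile with -pthread. */
#include <pthread.h>
#include <stdio.h>

#define N 1000000

long counter = 0;  /* shared, intentionally unprotected */

void *increment(void *arg) {
    (void)arg;
    for (long i = 0; i < N; i++) {
        counter = counter + 1;  /* racy read-modify-write */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2*N, but the data race usually makes it smaller. */
    printf("counter = %ld (expected %d)\n", counter, 2 * N);
    return 0;
}
```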
JY: Tell us about your research in compositionality for concurrency.
AF: There are these seminal papers that talk about compositional program proofs for concurrent programs. They are very elegant. The notion of compositionality that was introduced back then has withstood the test of time for about 40 years. Compositionality in the classic sense means that the proof should be guided by the program structure. In particular, the program structure reflects how the programmer decided to divide up the computation and put it into different threads, and the reasoning has to follow exactly that structure. We started this line of work based on the subset of programs that don't have proofs that are naturally compositional in this classic sense. We just wanted to say, “Okay. Maybe it doesn't have to be like that. If it can, that's really nice and it will lead to simple and elegant proofs, but what if it's not like that? Can we think about a different way of decomposing the proof argument?”
Consider, for example, how people who are not in programming languages would argue that an algorithm, in an algorithms textbook, is correct. They would structure the argument as, “Here is a common scenario and here are all the degenerate cases of the algorithm. Here are the arguments for why the algorithm is correct in the common case and in all the degenerate cases. We have covered all those scenarios, and therefore we've given a proof of correctness for this algorithm.” The idea is that this kind of decomposition into the different scenarios of what could happen in a system is an alternative way of decomposing a proof argument, one that could be helpful alongside the classic method.
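A hypothetical sketch of that style of argument (my example, not Farzan's): a lock-free counter whose correctness reasoning is organized by runtime scenarios, the common case where the compare-and-swap succeeds and the degenerate case where another thread interferes, rather than by the structure of the threads.

```c
/* Hypothetical illustration of scenario-based reasoning: the proof
 * sketch in the comments is split by what can happen at runtime,
 * not by the program's thread structure. Requires C11 atomics. */
#include <stdatomic.h>

atomic_long counter;

void lockfree_increment(void) {
    long old = atomic_load(&counter);
    /* Common scenario: no other thread writes between the load and the
     * compare-and-swap, the CAS succeeds, and exactly one increment lands.
     * Degenerate scenario: another thread interfered, so the CAS fails,
     * refreshes `old` with the current value, and we retry. In either
     * scenario, no update is ever lost. */
    while (!atomic_compare_exchange_weak(&counter, &old, old + 1)) {
        /* retry with the refreshed value of `old` */
    }
}
```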
A good deal of this work was in the thesis of my former PhD student Zachary Kincaid, which received an honorable mention for SIGPLAN's John C. Reynolds Doctoral Dissertation Award.
JY: How can your work help programmers working in industry?
AF: It depends on what we mean by industry. If we mean a broad class, which would include every programmer out there who writes code for money, then I would say that it is not useful to the majority of them. If we focus on the small class of programmers who write serious and critical code, by which I mean, for example, the person who designs the spinlocks for Linux, then our work will be very helpful to them. My work is essentially targeted at automated proofs of programs that are small but tricky to reason about.
JY: Something I really like about your work is how there's a very strong theoretical foundation, but it's also potentially very useful. I was wondering how you choose the problems to work on.
AF: My method is rather personal. I have a first-level test that goes like this: if someone came to me and said that some other person had just solved this problem and there's a talk on it, would I want to immediately drop everything and go to the talk? The problem being an important and interesting one is naturally an aspect of this.
The second part has to do with the solution space. I would say that this part is half indulgent: I should really enjoy working in that solution space. By that, I mean the sort of subproblems that need to be solved on the path to addressing the original problem. They should be fun to do. I should not be able to go to bed at night because I am working on one of them and I just can't give up. Naturally, things don't always pan out like that, but by aiming for that, at least about thirty percent of my work ends up being pure joy for me.
JY: Are there other major directions or focus areas you think our community should take in the next few years?
AF: There are some really old problems that still deserve a lot of attention but are being somewhat abandoned by younger people, maybe partially because they are not trendy, but also partially because it's far more difficult to publish in an established area than in a new one. One thing that I wish the community would focus on more is some of these old problems, for example invariant generation. Whenever someone makes progress in these fundamental areas, it helps us all, whether you're working on something new and trendy or something old. I wish we could encourage work on these core problems, so that our younger community members, with all their energy and creativity, would focus some of their attention on them, as opposed to the invention of new problems. That said, it is really nice to see the new directions as well.
Related to this point, publishing in a new problem space is often (but not always) easier because standards for what really counts as a contribution are not yet established there; in contrast, things very easily get labeled “incremental” in an established problem area. I really love how broad POPL is, but that same breadth also hinders efforts by the community to come together, identify the top problems in PL, and define standards of (scientific ways of) measuring progress towards these common goals. I wish we could put more effort into this as a community, even if it is a challenge, considering how broad POPL is.