Matthew Might, University of Utah

Matt Might
Associate Professor
Presidential Scholar

My primary research area is static analysis of higher-order programs.

My broader interests include language design, compiler implementation, security, program optimization, parallelism and program verification.

I run the U Combinator software systems research group.

I am available as an expert witness on subjects within my expertise. With respect to my reports, I am willing to be deposed and to testify.

My son Bertrand was the first patient ever discovered with a rare disorder known as N-Glycanase deficiency. I wrote an essay about the process of scientific discovery, and the aftermath has been covered by an article in The New Yorker and in Der Spiegel. Learn more at NGLY1.org.

Teaching

Spring 2015: Compilers.
Spring 2014: Scripting Languages.
Fall 2013: Advanced Compilers.
Spring 2013: Compilers.
Spring 2012: Scripting Languages.
Spring 2011: Compilers.
Fall 2009: Advanced topics in compilation.
Spring 2009: Programming language analysis.
Spring 2009: Static analysis seminar.

Blog

blog.might.net is really just a collection of short articles.

Here are the 7 most recent:

HOWTO: Get tenure
Counting hash collisions with the birthday paradox
Parsing BibTeX into S-Expressions, JSON, XML and BibTeX
Low-level web programming in Racket
Rare disease match-making via the internet
Desugaring regular operations in context-free grammars
Meeting notes: Small thoughts on large cohorts

Advertisements

High Performance Linux

There is a special type of DDoS attack, application-level DDoS, which is quite hard to defend against. The analysis logic that filters this type of attack must operate at the HTTP message level, so in most cases it is implemented as custom modules for application-layer (nowadays usually user-space) HTTP accelerators, and Nginx is by far the most widespread platform for such solutions. However, common HTTP servers and reverse proxies were not designed for DDoS mitigation; they are simply the wrong tools for the job. One reason is that they are too slow to cope with massive traffic.
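
As a concrete illustration of the filtering logic the excerpt describes, here is a minimal sketch in Python (not an Nginx module, and not taken from the quoted post) of per-client request counting over a sliding window; the window length and threshold are made-up values for the example.

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 10   # sliding window length (illustrative)
    MAX_REQUESTS = 100    # requests allowed per client IP within the window (illustrative)

    _recent = defaultdict(deque)   # client IP -> timestamps of its recent requests

    def allow_request(ip, now=None):
        """Return True to pass the request upstream, False to drop it."""
        now = time.monotonic() if now is None else now
        timestamps = _recent[ip]
        # Discard timestamps that have slid out of the window.
        while timestamps and now - timestamps[0] > WINDOW_SECONDS:
            timestamps.popleft()
        if len(timestamps) >= MAX_REQUESTS:
            return False   # this client looks like a flood; filter the request
        timestamps.append(now)
        return True

A real module would hook this kind of check into the request-processing path of the accelerator, which is exactly why the excerpt argues it has to run at the HTTP message level.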

Dice News: Speed Test: Comparing Intel C++, GNU C++, and LLVM Clang Compilers

Conclusion: It’s interesting that the code built with the g++ compiler performed the best in most cases, although the clang compiler proved to be the fastest in terms of compilation time. But I wasn’t able to test much regarding parallel processing with clang, since its Cilk Plus extension isn’t quite ready and the Threading Building Blocks team hasn’t ported to it yet.

Phoronix: A Linux Compiler Deathmatch: GCC, LLVM, DragonEgg, Open64, Etc…

Open64 had an incredibly strong finish when looking at the performance of its resulting C-Ray binary. Open64’s C-Ray binary was over 40% faster than the GCC and LLVM-GCC / DragonEgg releases tested. Open64 also produced a blazing-fast binary for Himeno that was 93% faster than the second fastest compiler, LLVM-GCC 4.2.1, and 2.6 times faster than GCC 4.5.1.

ccache

ccache is a compiler cache. It speeds up recompilation by caching previous compilations and detecting when the same compilation is being done again. Supported languages are C, C++, Objective-C and Objective-C++.
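
As an illustration of the idea, here is a toy sketch in Python of how a compiler cache can detect a repeated compilation; it is not ccache’s actual implementation, and the cache directory and function name are invented for the example. The key observation is that hashing the compiler, its flags, and the preprocessed source identifies a compilation well enough to reuse its object file.

    import hashlib, os, shutil, subprocess

    CACHE_DIR = os.path.expanduser("~/.toy-ccache")   # hypothetical cache location

    def cached_compile(compiler, flags, source, output):
        os.makedirs(CACHE_DIR, exist_ok=True)
        # Preprocess first, so the key reflects headers and macros, not just the file text.
        preprocessed = subprocess.run([compiler, "-E", source] + flags,
                                      capture_output=True, check=True).stdout
        key = hashlib.sha256()
        key.update(compiler.encode())
        key.update(" ".join(flags).encode())
        key.update(preprocessed)
        cached = os.path.join(CACHE_DIR, key.hexdigest() + ".o")
        if os.path.exists(cached):
            shutil.copyfile(cached, output)            # cache hit: skip recompilation
        else:
            subprocess.run([compiler, "-c", source, "-o", output] + flags, check=True)
            shutil.copyfile(output, cached)            # cache miss: compile and store

    # Example (assumed invocation): cached_compile("gcc", ["-O2"], "foo.c", "foo.o")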

LEG/Engineering/OPTIM/Assembly

I scanned a large number of free software packages in two Linux distributions, looking for places where architecture-specific assembly code might be used, so that I could identify likely packages where porting and/or optimisation would be necessary or worthwhile for ARMv8-based and (to a lesser extent) ARMv7-based server platforms. Thankfully, it became clear that most of the software in common usage in Linux distributions does not rely on assembly code. In the places (1435 packages) where assembly code is used, I have analysed it, categorised it by purpose and then applied rough prioritisation by software area.

Methods

I worked through scans of packages in Ubuntu and Fedora looking for x86/ARM assembly. Christian Reis and Matthias Klose provided a list of target packages from scanning the Ubuntu archive; Jon Masters and Al Stone gave me a similar list from the Fedora archive. Each of these lists was generated using different locally-written tools. This might seem strange, but I considered the potential differences useful. More variance in input methods would hopefully make it less likely that something would be missed; after all, the two distributions overlap substantially in terms of the packages included. Given the lists from both sources, I merged them as well as possible, trying to pick up on places where the same package might have different names across the distros. Then I worked through the long list of source packages that was generated, performing the following 4 steps in each case:

1. Download and unpack the source.
2. Look for all likely-looking assembly files within the source (*.[sS], *.asm, *.ASM, etc.).
3. Look for inline assembly contained in other source files (*.c, *.C, *.h, *.H, *.cpp, etc.).
4. (By far the longest step) In the cases with actual assembly, try to work out the purpose of the assembly code and whether or not the assembly code is used.

I did not spend any time specifically looking for the use of intrinsics for SIMD operations (MMX, SSE, NEON, etc.), but I have remarked on it where I saw it in passing.
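
As a rough illustration of steps 2 and 3 above, here is a sketch in Python of scanning a source tree for standalone assembly files and for inline assembly; it is an assumed reimplementation for the example, not the locally-written tools mentioned in the excerpt.

    import fnmatch, os, re

    ASM_PATTERNS = ["*.s", "*.S", "*.asm", "*.ASM"]
    SRC_PATTERNS = ["*.c", "*.C", "*.h", "*.H", "*.cpp", "*.cc", "*.cxx", "*.hpp"]
    INLINE_ASM = re.compile(r'\b__asm__\b|\basm\s*(volatile\s*)?\(')

    def scan_package(root):
        """Return (standalone assembly files, sources containing inline assembly) under root."""
        asm_files, inline_files = [], []
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                if any(fnmatch.fnmatch(name, p) for p in ASM_PATTERNS):
                    asm_files.append(path)          # step 2: likely-looking assembly file
                elif any(fnmatch.fnmatch(name, p) for p in SRC_PATTERNS):
                    try:
                        with open(path, errors="ignore") as f:
                            if INLINE_ASM.search(f.read()):
                                inline_files.append(path)   # step 3: inline assembly in C/C++ source
                    except OSError:
                        pass
        return asm_files, inline_files

Step 4, working out what each piece of assembly is for and whether it is actually built, is the manual part that no such script can do for you.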