web.dev: Review performance and get detailed guidance on how to improve it.

What is web.dev?
web.dev is the ultimate resource for developers of all backgrounds to learn, create, and solve on the web. It’s meant to not only educate developers, but help them apply what they’ve learned to any site they work on, be it personal or business.

web.dev was born of a belief that if we make high quality experiences easy to build, it will enable more meaningful engagement on the web—for users and developers alike. Simply put, we realized the only way the web gets better is if we help the people building it succeed.

And the web can be better.

ucmerced.edu: Mayya Tokman

A new class of exponential propagation iterative methods of Runge-Kutta type (EPIRK)
M. Tokman, Journal of Computational Physics, 230 (2011) 8762-8778.

New adaptive exponential propagation iterative Runge-Kutta-type (EPIRK) methods
M. Tokman, P. Tranquilli, and J. Loffeld. Submitted, 2011.

Comparative performance of exponential, implicit, and explicit integrators for stiff systems of ODEs
J. Loffeld and M. Tokman. Submitted, 2010.

Efficient design of exponential-Krylov integrators for large scale computing
M. Tokman and J. Loffeld. Proceedings of the 10th International Conference on Computational Science, Procedia Computer Science, 1(1), pp. 229-237, (2010).

Computational aspects of mucus propulsion by ciliated epithelium
R. Chatelin, P. Poncet and M. Tokman, Proceedings of the 2nd European Conference on Microfluidics, Toulouse 2010.

Automated assessment of short free-text responses in computer science using latent semantic analysis
R. Klein, A. Kyrilov and M. Tokman, ITiCSE '11: Proceedings of the 16th Annual Joint Conference on Innovation and Technology in Computer Science Education, 158-162, Darmstadt (2011).

Efficient integration of large stiff systems of ODEs with exponential propagation iterative (EPI) methods
M. Tokman, Journal of Computational Physics 213 (2006) 748–776.

Three-dimensional Model of the Structure and Evolution of Coronal Mass Ejections
M. Tokman, P. Bellan, Astrophysical Journal, 567(2), pp. 1202, 2002.

Investigations into the Relationship between Spheromak, Solar and Astrophysical Plasmas
P.M. Bellan, S.C. Hsu, J.F. Hansen, M. Tokman, S.E. Pracko, C.A. Romero-Talamas, Proceedings on 19th International Atomic Energy Agency Fusion Energy Conference, Lyon, 2002.

codecapsule.com: Robin Hood hashing — backward shift deletion

3. Experimental protocol

In order to test the effect of backward shift deletion on performance, I am going to use the same test cases that I used in my previous article about Robin Hood hashing [1]. The only difference is that since I observed that there was no difference between the tables of size 10k and 100k, this time I am only plotting the results for tables of size 10k. For more details regarding the test cases, take a look at the “Experiment protocol” section in my previous article [1].

4. Results

The results are presented in the four figures below, each figure showing the same statistic across the different test cases:
Figure 2 is the mean DIB
Figure 3 is the median DIB
Figure 4 is the 95th percentile of DIB
Figure 5 is the variance of DIB
Each of these figures holds sub-figures, one for each test case:
(a) is the “batch” test case, with LFM=0.8 and LFR=0.1
(b) is the “batch” test case, with LFM=0.8 and LFR=0.8
(c) is the “ripple” test case, with LFM=0.8 and LFR=0.1
(d) is the “loading” test case
For the graphs of Robin Hood hashing with tombstones, the mean DIBs are adjusted by shifting the minimum DIB down to zero, in order to make a fair comparison with the other algorithms. Indeed, because the implementation of Robin Hood hashing with tombstones considers only probes between the minimum and maximum DIBs, the probing of an item never starts at DIB 0 but at the minimum DIB. The graphs for Robin Hood hashing with backward shift deletion and for basic linear probing are not shifted down.

5. Discussion

In the results presented above, it is clear that Robin Hood hashing with backward shift deletion outperforms both basic linear probing and Robin Hood hashing with tombstones. In addition, the mean DIB and the variance of the DIB remain constant even after a large number of insertion and deletion operations, which is consistent with the results presented in the thesis of Celis [4].
The most striking results are in Figure 4, where the 95th percentile of DIB for Robin Hood hashing with backward shift deletion remains at a value of around 7, which shows that even in the worst cases, the number of probes needed to find an entry will be very small.

6. Conclusion

The algorithm I had implemented in my first version of Robin Hood hashing using tombstones [1] was the one described by the pseudo-code of the original thesis [4]. Yet, I was unable to reproduce the results presented in that same thesis. The reason is that those results described the theoretical performance based on the mathematical model, and while the math was right, the given pseudo-code did not implement it correctly. Thanks to Paul Khuong and his suggestion of using a backward shift on deletion, the practical results now match the theoretical ones.
An interesting follow-up would be to measure how costly the backward shift is for the CPU. With these new results, my feeling is that Robin Hood hashing with backward shift deletion is definitely an interesting algorithm, and given its very linear memory access pattern, it would be worth investigating further for an on-disk key-value store.
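To make the mechanics concrete, here is a minimal, illustrative sketch of Robin Hood hashing with backward shift deletion in Python. This is my own toy version, not the implementation benchmarked above: fixed capacity, no resizing, and linear probing throughout.

```python
# Toy Robin Hood hash table with backward shift deletion (illustrative only;
# fixed capacity, no resizing, table assumed never completely full).

class RobinHoodTable:
    def __init__(self, capacity=16):
        self.capacity = capacity
        self.slots = [None] * capacity  # each slot: (key, value) or None

    def _dib(self, key, slot_index):
        # Distance to Initial Bucket: how far the entry sits from its home slot.
        home = hash(key) % self.capacity
        return (slot_index - home) % self.capacity

    def put(self, key, value):
        idx = hash(key) % self.capacity
        dib = 0
        entry = (key, value)
        while True:
            slot = self.slots[idx]
            if slot is None:
                self.slots[idx] = entry
                return
            if slot[0] == entry[0]:
                self.slots[idx] = entry  # update existing key
                return
            # Robin Hood: if the resident entry is "richer" (smaller DIB),
            # steal its slot and carry the displaced entry forward.
            resident_dib = self._dib(slot[0], idx)
            if resident_dib < dib:
                self.slots[idx], entry = entry, slot
                dib = resident_dib
            idx = (idx + 1) % self.capacity
            dib += 1

    def get(self, key):
        idx = hash(key) % self.capacity
        dib = 0
        while True:
            slot = self.slots[idx]
            # Stop early: past this point the key cannot be in the table.
            if slot is None or self._dib(slot[0], idx) < dib:
                raise KeyError(key)
            if slot[0] == key:
                return slot[1]
            idx = (idx + 1) % self.capacity
            dib += 1

    def delete(self, key):
        idx = hash(key) % self.capacity
        dib = 0
        while True:
            slot = self.slots[idx]
            if slot is None or self._dib(slot[0], idx) < dib:
                raise KeyError(key)
            if slot[0] == key:
                break
            idx = (idx + 1) % self.capacity
            dib += 1
        # Backward shift: pull the following entries back one slot until we
        # hit an empty slot or an entry already at its home position (DIB 0).
        nxt = (idx + 1) % self.capacity
        while (self.slots[nxt] is not None
               and self._dib(self.slots[nxt][0], nxt) > 0):
            self.slots[idx] = self.slots[nxt]
            idx, nxt = nxt, (nxt + 1) % self.capacity
        self.slots[idx] = None
```

The backward shift in `delete()` restores the table to the exact state it would have had if the deleted key had never been inserted, which is why the DIB statistics stay flat across deletions instead of degrading the way tombstones do.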

Stop Misquoting Donald Knuth!

I am tired of slow load times. The ratio of bytes loaded to load time should be very close to the I/O throughput of the machine. If it is not, somebody is wasting my time. I am tired of programs not stopping immediately, the instant I click the little X, because somebody is traversing a large reference graph and doing lots of itty-bitty deletes. I am tired of seeing progress bars and splash screens.

As a developer, I am tired of my IDE slowing to a crawl when I try to compile multiple projects at a time.

As a citizen of planet Earth, I am tired of all the electricity that gets wasted by organizations who throw hardware at software problems, when a more efficient implementation might allow them to consume much, much less, and spend less money powering it all.

PDQ: Pretty Damn Quick

PDQ (Pretty Damn Quick) is a software tool associated with the books Analyzing Computer System Performance with Perl::PDQ (Springer 2005, 2011) and The Practical Performance Analyst (McGraw-Hill 1998, iUniverse.com Press 2000). The PDQ software may be downloaded freely from this web site whether or not you own a copy of the book. PDQ uses queue-theoretic paradigms to represent all kinds of computer systems. Computer system resources (whether hardware or software) are represented by queues (more formally, a queueing network, not to be confused with a data network, which could itself be modeled as a PDQ queueing network), and the queueing model is solved “analytically” (meaning via a combination of algorithmic and numerical procedures). Queues are invoked in PDQ by making calls to the appropriate library functions (listed below). Once the queueing model is expressed in PDQ, it can be solved almost instantaneously by calling the PDQ_Solve() function. This in turn generates a report of all the c
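As a flavor of the kind of closed-form result such an analytic solver produces (this is plain Python for illustration, not the PDQ API), consider the simplest open queueing model, a single M/M/1 queue:

```python
# Closed-form metrics for an open M/M/1 queue (illustration of an
# "analytic" queueing solution, not PDQ itself).

def mm1_metrics(arrival_rate, service_time):
    """Return (utilization, mean residence time, mean number in system)
    for an M/M/1 queue; arrival_rate in req/s, service_time in s."""
    rho = arrival_rate * service_time       # utilization, must be < 1
    if rho >= 1:
        raise ValueError("queue is unstable: utilization >= 1")
    residence = service_time / (1 - rho)    # mean time in system
    number = rho / (1 - rho)                # mean number in system
    return rho, residence, number
```

For example, at half utilization (0.5 requests/s against a 1 s service time) the mean residence time is already double the service time, which is exactly the kind of non-intuitive amplification that makes analytic models worth building before throwing hardware at a problem.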

facebook.com: BigPipe: Pipelining web pages for high performance

BigPipe is a fundamental redesign of the dynamic web page serving system. The general idea is to decompose web pages into small chunks called pagelets, and pipeline them through several execution stages inside web servers and browsers. This is similar to the pipelining performed by most modern microprocessors: multiple instructions are pipelined through different execution units of the processor to achieve the best performance. Although BigPipe is a fundamental redesign of the existing web serving process, it does not require changing existing web browsers or servers; it is implemented entirely in PHP and JavaScript.
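As a rough sketch of the idea (a hypothetical toy, not Facebook's PHP implementation), a server can flush the page skeleton with empty pagelet placeholders first, then stream one small script per pagelet that fills its placeholder in as the content becomes ready:

```python
# Toy BigPipe-style pagelet pipelining: yield HTML chunks so the client
# can render the skeleton while pagelets are still being produced.
import json

def render_page(pagelets):
    """pagelets maps a placeholder id to a function producing its HTML."""
    # 1. Flush the skeleton immediately: empty placeholder divs let the
    #    browser start laying out the page and fetching static resources.
    placeholders = "".join(f'<div id="{pid}"></div>' for pid in pagelets)
    yield f"<html><body>{placeholders}"
    # 2. As each pagelet finishes rendering (in parallel on a real server),
    #    stream an inline script that injects it into its placeholder.
    for pid, render in pagelets.items():
        payload = json.dumps(render())
        yield (f'<script>document.getElementById("{pid}")'
               f".innerHTML = {payload};</script>")
    yield "</body></html>"

# Usage: stream each chunk to the client as soon as it is produced.
chunks = list(render_page({
    "navbar": lambda: "<b>nav</b>",
    "newsfeed": lambda: "latest stories",
}))
```

The pipelining win comes from step 1 completing long before step 2: browser rendering and server-side pagelet generation overlap instead of running strictly one after the other.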

dreamhost.com: Web Server Performance Comparison – DreamHost

Remember, Apache supports a larger toolbox of things it can do immediately and is probably the most compatible across all web software out there today… and most websites really don’t get so many concurrent hits as to gain large performance/memory benefits from Lighttpd or nginx. But hey, it never hurts (too much) to swap your web servers around and see what works best for you!

Lockless Inc. Low level software to optimize performance

The Lockless Memory Allocator is downloadable under the GPL 3.0 License. You can thus use the allocator in other open-source programs. However, if you wish to use it in closed-source proprietary software, contact us about other options. Lockless MPI Released: Version 1.2 of the Lockless MPI has just been released. It is optimized for modern 64-bit multicore systems, and supports programs running on Linux. There are bindings for C, C++ and FORTRAN. It supports version 1.3 of the MPI spec, with a few small parts of version 2.0.