Every time you analyze your project in full (the Check Solution command), the analyzer creates a small XML file containing the message counts in the %AppData%\Roaming\PVS-Studio\Statistics folder. In the PVS-Studio menu, you will find the Analysis Statistics command, which opens the statistics viewing settings dialog. In this dialog, you can select the desired time interval, rule sets, and message severity levels. Then click the “Show in Excel” button to view the resulting chart in Microsoft Excel, if it is installed on your computer.
Do you see the rise in the number of messages in this chart? It happened because the developers stopped fixing bugs in new code after they had achieved zero warnings for the old code. Fortunately, the project manager noticed it in time and gave his team a talking-to, after which they eliminated all the bugs again and kept the count at zero from then on.
Git-annex works by storing the contents of tracked files in a separate location. What is stored in the repository is a symlink to the key under that separate location. To share large binary files within a team, for example, the tracked files need to be stored in a different backend. At the time of writing (23rd of July 2015), the following backends were available: S3 (Amazon S3 and other compatible services), Amazon Glacier, bup, ddar, gcrypt, directory, rsync, webdav, tahoe, web, bittorrent, and xmpp. Storing content in a remote of your own devising via hooks is also supported.
Git-annex uses separate commands for checking out and committing files, which makes its learning curve a bit steeper than that of alternatives relying on filters. Git-annex is written in Haskell, and the majority of it is licensed under the GPL, version 3 or higher. Because git-annex uses symlinks, Windows users are forced to use a special direct mode that makes usage less intuitive.
The latest version of git-annex at the time of writing is 5.20150710, released on the 10th of July 2015, and the earliest article I found on their website was dated 2010. Both facts suggest that the project is quite mature.
Git Large File Storage (Git LFS)
The core Git LFS idea is that instead of writing large blobs to a Git repository, only a pointer file is written. The blobs themselves are written to a separate server using the Git LFS HTTP API. The API endpoint can be configured per remote, which allows multiple Git LFS servers to be used. Git LFS requires a specific server implementation to communicate with; an open-source reference server implementation is available, as well as at least one other server implementation. Storage can be offloaded by the Git LFS server to cloud services such as S3, or to pretty much anything else if you implement the server yourself.
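The pointer file itself is tiny, plain text with three fields. A minimal sketch of generating one (the field layout follows the published Git LFS pointer-file spec; the function name is my own):

```python
import hashlib

def lfs_pointer(blob: bytes) -> str:
    """Build a Git LFS pointer file for a blob: a version line,
    the blob's SHA-256 object id, and its size in bytes."""
    oid = hashlib.sha256(blob).hexdigest()
    return (
        "version https://git-lfs.github.com/spec/v1\n"
        f"oid sha256:{oid}\n"
        f"size {len(blob)}\n"
    )
```

This short text file is what actually gets committed; the blob it points to lives on the LFS server, keyed by the `oid`.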
Git LFS uses a filter-based approach, meaning that you only need to specify the tracked files with one command and it handles the rest invisibly. The upside of this approach is ease of use; however, there is currently a performance penalty because of how Git works internally. Git LFS is licensed under the MIT license, is written in Go, and binaries are available for Mac, FreeBSD, Linux, and Windows. The version of Git LFS is 0.5.2 at the time of writing, which suggests it is still at quite an early stage, although there were already 36 contributors to the project. As the version number is still below 1, changes to the APIs, for example, can be expected.
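The filter hand-off lives in .gitattributes; the single tracking command writes an entry like the following (the `*.psd` pattern is just an example):

```
# written by: git lfs track "*.psd"
*.psd filter=lfs diff=lfs merge=lfs -text
```

From then on, Git runs the LFS clean/smudge filters for matching files automatically on checkout and commit.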
git-bigfiles – Git for big files
The goals of git-bigfiles are pretty noble: making life bearable for people using Git on projects hosting very large files, and merging back as many changes as possible into upstream Git once they are of acceptable quality. Git-bigfiles is a fork of Git; however, the project appears to have been dead for some time. Git-bigfiles is developed using the same technology stack as Git and is licensed under the GNU General Public License version 2 (some parts of it are under different licenses, compatible with the GPLv2).
Git-fat works in a similar manner to Git LFS. Large files can be tracked using filters in the .gitattributes file. The large files are stored in any remote that can be reached through rsync. Git-fat is licensed under the BSD 2-Clause license. Git-fat is developed in Python, which creates more dependencies for Windows users to install; however, the installation itself is straightforward with pip. At the time of writing, git-fat has 13 contributors, and the latest commit was made on the 25th of March 2015.
Licensed under the MIT license and supporting a similar workflow to the above-mentioned alternatives git-lfs and git-fat, git-media is probably the oldest of the solutions available. Git-media uses the same filter approach and supports Amazon S3, a local filesystem path, SCP, Atmos, and WebDAV as backends for storing large files. Git-media is written in Ruby, which makes installation on Windows not so straightforward. The project has 9 contributors on GitHub, but the latest activity was nearly a year ago at the time of writing.
Git-bigstore was initially implemented as an alternative to git-media. It works similarly to the others above, storing a filter property in .gitattributes for certain types of files. It supports Amazon S3, Google Cloud Storage, and Rackspace Cloud accounts as backends for storing binary files. Git-bigstore claims to improve stability when collaborating between multiple people. Git-bigstore is licensed under the Apache 2.0 license. As git-bigstore does not use symlinks, it should be more compatible with Windows. Git-bigstore is written in Python and requires Python 2.7+, which means Windows users might need an extra step during installation. The latest commit to the project's GitHub repository at the time of writing was made on April 20th, 2015, and there is one contributor to the project.
Git-sym is the newest player in the field, offering an alternative to how large files are stored and linked in git-lfs, git-annex, git-fat, and git-media. Instead of calculating checksums of the tracked large files, git-sym relies on URIs. As opposed to its rivals, which also store a checksum, git-sym stores only the symlinks in the Git repository. The benefits of git-sym are thus performance as well as the ability to symlink whole directories. Because of this, its main drawback is that it does not guarantee data integrity. Git-sym is operated through separate commands. Git-sym also requires Ruby, which makes it more tedious to install on Windows. The project has one contributor according to its project home page.
We’re all familiar with the 3 basic categories of authentication.
1) Knowledge factors (passwords, PINs)
2) Possession factors (a software/hardware token – Yubikey/Google Authenticator/SecureID)
3) Inherence factors (fingerprint, heartbeat, iris/retina scanning)
While the vast majority of sites use knowledge factors, a growing number are turning to multi-factor solutions in an effort to bolster security, albeit to the detriment of the user experience.
Cue continuous authentication / behavioral biometrics… the process of identifying a user based on the subtle nuances in their voice, typing patterns, facial features and location.
How does it work?
As opposed to traditional authentication, which is only interested in what you type, behavioral biometric systems also collect and profile how you type. By actively monitoring your typing, the system is able to build a profile of you.
In order to achieve this, the system monitors how long each key is depressed (dwell time), how long passes between key presses (gap time), how long it takes to type a known string, and hundreds of other metrics.
With enough supporting data, it’s entirely possible to identify you based purely on how you type.
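A sketch of how the two core metrics above could be extracted from raw key events. The event format and the function name are assumptions for illustration, not any particular vendor's API:

```python
def typing_profile(events):
    """Compute dwell and gap times (in ms) from a stream of
    (key, action, timestamp_ms) events, where action is "down" or "up"."""
    downs = {}       # key -> timestamp of its key-down
    dwells, gaps = [], []
    last_down = None
    for key, action, t in events:
        if action == "down":
            if last_down is not None:
                gaps.append(t - last_down)     # gap: down-to-down interval
            last_down = t
            downs[key] = t
        else:  # "up"
            dwells.append(t - downs.pop(key))  # dwell: how long the key was held
    return dwells, gaps
```

Fed enough of these samples, a profiler only needs the statistical distribution of dwells and gaps to start distinguishing one typist from another.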
Back in 2011, Professor Christophe Rosenberger at ENSICAEN announced that it was possible to determine a user's gender after just a few keystrokes.
Over the last 4 years, many companies have researched & invested heavily in leveraging this technology.
Meet BehavioSec, a Swedish company which shot to fame after recent coverage on BBC News and in the Wall Street Journal, CNBC, Wired, and Forbes, to name a few.
After a brief training period, their technology is able to identify a user with astonishing accuracy.
Over the next few days, I researched the underlying technology and explored ways to nullify such profiling. You can read Per’s analysis of this technology here.
Although many implementations claim to use hundreds of metrics, it became clear that only a few were weighted heavily enough to really matter.
1) Dwell time – How long each key is depressed.
2) Gap time – How long between each key press.
If we can skew these statistics enough, it’d be almost impossible to profile and/or identify a user.
Meet KeyboardPrivacy, a proof-of-concept Google Chrome extension which interferes with the periodicity of everything you enter into a website.
Once installed, you can continue to use the web exactly as you do now. When you enter anything on your keyboard, KeyboardPrivacy will artificially alter the rate at which your entry reaches the document object model (DOM).
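The underlying idea, adding random noise to inter-key timing so the dwell/gap statistics no longer match any stored profile, can be sketched in a few lines. The real extension operates inside the browser's event pipeline; this Python sketch (function name and parameters are my own) only illustrates the timing transformation:

```python
import random

def jitter_delays(gaps_ms, spread_ms=60, seed=None):
    """Obscure a typing rhythm by adding a random delay to each
    inter-key gap, skewing the statistics a keystroke profiler relies on."""
    rng = random.Random(seed)
    return [g + rng.uniform(0, spread_ms) for g in gaps_ms]
```

Because the added delay is drawn fresh for every keystroke, even the same user typing the same string produces a different timing signature each time.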
W. M. Wonham received the B.Engrg. degree in engineering physics from McGill University, Montreal, Quebec, Canada in 1956, and the Ph.D. degree in control engineering from the University of Cambridge, Cambridge, England, in 1961.
From 1961 to 1969 he was associated with the Control and Information Systems Laboratory at Purdue University, the Research Institute for Advanced Studies (RIAS) of the Martin Marietta Co., the Division of Applied Mathematics at Brown University, and (as a National Academy of Sciences Research Fellow) with the Office of Control Theory and Application of NASA’s Electronics Research Center. In 1970 he joined the Systems Control Group of the Department of Electrical Engineering at the University of Toronto, Canada. In addition he has held visiting academic appointments with the Department of Electrical Engineering at MIT, the Department of Systems Science and Mathematics at Washington University, the Department of Mathematics of the University of Bremen, the Mathematics Institute of the Academia Sinica (Beijing), the Indian Institute of Technology (Kanpur), and the Universidade Federal de Santa Catarina (Florianopolis). His research interests have lain in the areas of stochastic control and filtering, the geometric theory of multivariable control, and more recently in discrete event systems from the viewpoint of formal logic and language. He has authored or coauthored about seventy-five research papers as well as the book Linear Multivariable Control: A Geometric Approach.
Dr. Wonham is a Fellow of the IEEE and of the Royal Society of Canada. In 1987 he was the recipient of the IEEE Control Systems Science and Engineering Award, and in 1990 he was Brouwer medallist of the Netherlands Mathematical Society. For the period 1992-96 he held the J. Roy Cockburn Chair in Engineering and Applied Science at the University of Toronto.
See the International Space Station! As the third brightest object in the sky, the space station is easy to see if you know when to look up.
NASA’s Spot The Station service gives you a list of upcoming sighting opportunities for thousands of locations worldwide, and lets you sign up to receive notices of opportunities by email or on your cell phone. The space station looks like a fast-moving plane in the sky, but it is dozens of times higher than any airplane and traveling thousands of miles per hour faster. It is bright enough that it can even be seen from the middle of a city! To learn more about the space station, its international crew, and how they live and work in space, please visit the space station mission pages.
Recent results on optimal extrapolations of first-order stationary iterations have shown that they are necessarily divergent in a wide class of problems. This paper examines a second-order iterative process that is guaranteed to converge, in particular when applied to the solution of an arbitrary equation system. A general convergence theory for semi-iterative techniques is established at the same time.
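To illustrate what a second-order (two-step) iteration looks like, here is a sketch of a generic momentum-style scheme applied to a small linear system. The scheme and parameter values are illustrative only, not the specific method or analysis of the paper:

```python
def second_order_iteration(A, b, alpha, beta, iters=200):
    """Two-step (second-order) stationary iteration
        x_{k+1} = x_k + alpha*(b - A x_k) + beta*(x_k - x_{k-1});
    the beta term uses the previous iterate, which a first-order
    scheme x_{k+1} = x_k + alpha*(b - A x_k) lacks."""
    n = len(b)
    x_prev = [0.0] * n
    x = [0.0] * n
    for _ in range(iters):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        x_next = [x[i] + alpha * (b[i] - Ax[i]) + beta * (x[i] - x_prev[i])
                  for i in range(n)]
        x_prev, x = x, x_next
    return x
```

For a symmetric positive definite system, suitable alpha and beta make this converge faster than the corresponding one-step scheme, which is the general flavor of improvement second-order methods offer.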
Andrew Hughes Hallett, University Professor of Economics and Public Policy at George Mason University; Professor of Economics at the University of St Andrews; and member of the Council of Economic Advisers to the Scottish Government since 2007.
Andrew Hughes Hallett is Professor of Economics and Public Policy in the School of Public Policy at George Mason University and in the School of Economics at St Andrews University in Scotland. Previously he was Professor of Economics at Vanderbilt University and at the University of Strathclyde in Scotland. He is a graduate of the University of Warwick (UK) and the London School of Economics, and holds a doctorate from Oxford University. He has been Visiting Professor and Fulbright Fellow at Princeton University, Bundesbank Professor at the Free University of Berlin, and has held visiting positions at the Universities of Warwick, Frankfurt, Rome, Paris X, Cardiff, Copenhagen and the Kennedy School at Harvard.
Beyond the academic world, he has acted as a consultant to the World Bank, the IMF, the Peterson Institute for International Economics, the UN, the OECD, the European Commission, the European Central Bank, various governments, and a number of central banks.
Date of Birth: 01 November 1947
Nationality: British (US Green Card)
Econometric Methods, Numerical Analysis
179. “Alternative Techniques for Solving Systems of Nonlinear Equations”, Journal of Computational and Applied Mathematics, 1982, Vol. 8, pp. 35-48.
180. “Simple and Optimal Extrapolations in a First Order Iteration”, International Journal of Computer Mathematics, 1984, Vol. 15, pp. 309-318.
181. “Second Order Iterations with Guaranteed Convergence”, Journal of Computational and Applied Mathematics, 1984, Vol. 10, pp. 285-291.
182. “Multiparameter Extrapolation and Deflation Methods for Solving Equation Systems”, International Journal of Mathematics and Mathematical Sciences, 1984, Vol. 7, pp. 793-802.
183. “Efficient Solution Techniques for Dynamic Nonlinear Rational Expectations Models”, Journal of Economic Dynamics and Control, 1986, pp. 139-145 (with P. Fisher and S. Holly).
188. “The Convergence Characteristics of Iterative Methods for Model Solution”, Oxford Bulletin of Economics and Statistics, 1987, 49, 231-244 (with P. Fisher).