Especially in urban areas, two locations may be quite close geographically but difficult to travel between. I wondered whether one could create a map where, instead of physical distances, points are arranged according to some sort of travel-time between them. This would be useful for many purposes.
Unfortunately, such a mapping is mathematically impossible in general (for topological reasons). But so is a true flat map of the Earth, hence the need for the Mercator and other projections. The first step in constructing a useful visualization is to define an appropriate travel-time metric. Navigation systems routinely compute point-to-point travel times, but they are not bound by the need to maintain a consistent set of travel times between all points. That is our challenge: to construct a travel-time metric.
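As a first step, here is a minimal sketch of one way to build such a metric (the sample times are invented; real values would come from a routing service): symmetrize the raw directed times, then enforce the triangle inequality by taking shortest paths.

```python
import itertools

def travel_time_metric(T):
    """Turn a raw matrix of directed travel times into a true metric:
    symmetrize, then enforce the triangle inequality (Floyd-Warshall)."""
    n = len(T)
    # A metric requires d(i, j) == d(j, i), so average the two directions.
    d = [[(T[i][j] + T[j][i]) / 2.0 for j in range(n)] for i in range(n)]
    # Shortest paths repair any triangle-inequality violations.
    for k, i, j in itertools.product(range(n), repeat=3):
        if d[i][k] + d[k][j] < d[i][j]:
            d[i][j] = d[i][k] + d[k][j]
    return d

# Hypothetical minutes between three urban locations; note the direct
# trip from A to C takes longer than going via B.
T = [[0, 10, 45],
     [12, 0, 8],
     [40, 9, 0]]
d = travel_time_metric(T)
# d[0][2] becomes 11 + 8.5 = 19.5, down from the direct 42.5
```

With a consistent metric in hand, one can then attempt to arrange the points (for example via multidimensional scaling) and quantify the unavoidable distortion.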
Both information theory and statistical mechanics make rather cavalier use of a simple continuous version of the discrete entropy. Treatments often gloss over a number of subtleties in the definition of such a quantity, and this can lead to confusion. A proper continuous version of the discrete entropy is not easy to construct and may not exist. The differential entropy commonly bandied about actually is a discrete entropy in disguise, and possesses an implicit coarse-graining scale.
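A quick numerical illustration of that hidden scale (a sketch using the analytic standard-normal density, nothing specific to the article): the entropy of the coarse-grained distribution tracks the differential entropy minus the log of the bin width, so it grows by log 2 every time the bins are halved.

```python
import math

def binned_entropy(pdf, lo, hi, width):
    """Discrete entropy (nats) of a density coarse-grained into bins."""
    h = 0.0
    x = lo
    while x < hi:
        p = pdf(x + width / 2) * width   # probability mass of one bin
        if p > 0:
            h -= p * math.log(p)
        x += width
    return h

gauss = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
h_diff = 0.5 * math.log(2 * math.pi * math.e)   # ~1.4189 nats

h1 = binned_entropy(gauss, -8, 8, 0.01)
h2 = binned_entropy(gauss, -8, 8, 0.005)
# h1 is close to h_diff - log(0.01); halving the bins adds about log 2
```

The divergence as the bin width goes to zero is precisely why a naive continuous limit of the discrete entropy fails.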
In this article, I review discrete entropy and probability densities, carefully analyze the continuous limit and issues encountered, and touch on several possible approaches. An enumeration of various axiomatic formulations also is provided. The piece is pedagogical and does not contain original research, though I offer a couple of my own thoughts on possible means of generalizing entropy.
While an acquaintance with probability and entropy is assumed, the discussion is fairly self-contained and should be accessible to a broad audience.
In another post, I discussed the mathematical calculation of optical parameters for a configuration of stacked lenses and camera components. As is evident from the example worked out there, the procedure is somewhat tedious. Instead, it is better to spend twice the time writing a program to do it. Fortunately I already did this and offer it to you, gentle reader, to use and criticize. I expect no less than one rabid rant about some aspect that doesn’t pedantically conform to the IEEE standard. This is working code (and has been checked over and tested to some extent). I use it. However, it is not commercial grade and was not designed with either efficiency or robustness in mind. It is quick and dirty – but graciously so.
Think of this as a mine-shaft. You enter at your own risk and by grace of the owner. And if you fall, there won’t be non-stop human interest coverage on 20 TV channels as rescue workers try to extract you. That’s because you’re not a telegenic little kid and this is a metaphor. Rather, you will end up covered in numeric slime of dubious origin. But I still won’t care.
All this said, I do appreciate constructive criticism and suggestions. Please let me know about any bugs. I don’t plan to extensively maintain this program, but I will issue fixes for significant bugs.
The program I provide is a command-line unix (including MacOS) utility. It should be quite portable, as no funky libraries are involved. The program can analyze a single user-specified configuration or scan over all possible configurations from an inventory file. In the latter case, it can either restrict itself to configurations achievable with the adapters on hand or consider every configuration regardless of adapters. It also may apply a filter to limit the output to “interesting” cases such as very high magnification, very wide angle, or high telephoto.
The number of configurations can be quite large, particularly when many components are available, there are no constraints, and we account for the large number of focal/zoom choices for each given stack. For this reason, it is best to constrain scans to a few components in an inventory (by commenting out the components you don’t need). For example, if one has both 10mm and 25mm extension tubes, first try with only one; if the results look promising, restrict the inventory to the components involved and uncomment the 25mm as well.
Either through the summary option or the use of a script to select out desirable configurations, the output may be analyzed and used for practical decisions. For example, if a 10x macro lens is needed and light isn’t an issue, then a 1.4X telextender followed by a 200mm zoom followed by a reversed 28mm will do the trick. It will have a high f-stop, but if those components are already owned and we don’t need a low f-stop, it may be a far more cost-effective option than a dedicated ultra-macro lens (there aren’t any at 10X, but a 5X one is available).
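For the curious, a back-of-the-envelope check of that 10x figure, using the common rule of thumb that a reversed lens mounted in front of a longer lens magnifies by roughly the ratio of their focal lengths (the companion theory piece does this properly with matrices):

```python
# Rule-of-thumb magnification of the stack described above.
tc = 1.4           # 1.4X telextender
f_main = 200.0     # mm, the zoom at its long end
f_reversed = 28.0  # mm, the reversed prime
magnification = tc * f_main / f_reversed
# roughly 10x, as claimed
```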
For simple viewing of the results, I recommend the use of my “tless” utility. This isn’t a shameless plug. I wrote tless for myself, and I use it extensively.
I like to play around with various configurations of camera lenses. This partly is because I prefer to save money by using existing lenses where possible, and partly because I have a neurological condition (no doubt with some fancy name in the DSM-IV) that compels me to try to figure things out. I spent 5 years at an institute because of this problem and eventually got dumped on the street with nothing but a PhD in my pocket. So let this be a warning: keep your problem secret and don’t seek help.
A typical DSLR (or SLR) owner has a variety of lenses. Stacking these in various ways can achieve interesting effects, simulate expensive lenses (which may internally be similar to such a stack), or obtain very high magnifications. Using 3 or 4 lenses, a telextender, a closeup lens, and maybe some extension rings (along with whatever inexpensive adapter rings are needed), a wide variety of combinations can be constructed. In another entry, I’ll offer a companion piece of freeware that enumerates the possible configurations and computes their optical properties.
In the present piece, I examine the theory behind the determination of those properties for any particular setup. Given a set of components (possibly reversed) and some readily available information about them and the camera, we deduce appropriate optical matrices, construct an effective matrix for the system, and extract the overall optical properties – such as focal length, nearest object distance, and maximum magnification. We account for focal play and zoom ranges as needed.
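To make that pipeline concrete, here is a minimal sketch using thin-lens ray-transfer matrices (the article itself deals with real, thick camera lenses; the 2x2 matrices and the numbers below are the textbook idealization, purely illustrative):

```python
def thin_lens(f):
    """2x2 ray-transfer matrix of a thin lens of focal length f (mm)."""
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

def gap(d):
    """Free-space propagation over a distance d (mm)."""
    return [[1.0, d], [0.0, 1.0]]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def stack(*elems):
    """Effective matrix of elements listed front (object side) to rear:
    light hits the first element first, so it multiplies on the right."""
    m = [[1.0, 0.0], [0.0, 1.0]]
    for e in elems:
        m = mul(e, m)
    return m

# Two thin 100 mm lenses separated by 50 mm:
m = stack(thin_lens(100.0), gap(50.0), thin_lens(100.0))
f_eff = -1.0 / m[1][0]   # effective focal length from the C element
# 1/f = 1/f1 + 1/f2 - d/(f1*f2), so f_eff = 1000/15, about 66.7 mm
```

The same `stack` pattern extends to any number of elements; for real camera lenses only the construction of each element’s matrix changes.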
The exposition is self-contained, although this is not a course on optics and I simply list basic results. Rather, I focus on the application of matrix optics to real camera lenses. I also include a detailed example of a calculation.
As far as I am aware, this is the only treatment of its kind. Many articles discuss matrix methods or the practical aspects of reversing lenses for macro photography. However, I have yet to come across a discussion of how to deduce the optical matrix for a real camera lens from readily available information about it, or how to translate the resulting system matrix back into practical photographic properties.
After reading the piece, you may wonder whether it is worth the effort to perform such a calculation. Wouldn’t it be easier to simply try the configurations? To modify the common adage, a month on the computer can often save an hour in the lab. The short answer is yes and no. No, I’m not an economist. Why do you ask?
If you have a specific configuration in mind, then trying it is easier. However, if you have a set of components and want to determine which of the hundreds of possible configurations are candidates for a given use (just because the calculation works, doesn’t mean the optical quality is decent), or which additional components one could buy to make best use of each dollar, or which adapter rings are needed, or what end of the focal ranges to use, then the calculation is helpful. Do I recommend doing it by hand? No. I even used a perl script to generate the results for the example. As mentioned, a freeware program to accomplish this task in a more robust manner will be forthcoming. Think of the present piece as the technical manual for it.
While exploring theoretical physics and computer science, I commonly encounter large sets whose cardinalities are of interest. Rather than endlessly recalculate these as needed, I would prefer to have a single reference which consolidates all of the salient results. To my knowledge such a work does not exist, so I decided to create it. Consider it a missing chapter on cardinality from Abramowitz and Stegun.
There are many excellent works on the rigorous development of cardinal theory, the more intricate aspects of the continuum hypothesis, and various axiomatic formulations of set theory. Rather than emphasize these, the present work attempts to summarize practical results in cardinal arithmetic as well as list the cardinalities of many common sets. No attempt at rigor or a systematic development is made. Instead, sufficient background is provided for a reader with a basic knowledge of sets to quickly find results they require. Proof sketches offer the salient aspects of derivations without the distraction of formal rigor. Where I perceive that pitfalls or confusion may arise (or where I encountered them myself), I have attempted clarification.
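To give the flavor of the results consolidated, here are a few standard (choice-assuming) facts of cardinal arithmetic, for infinite cardinals $\kappa$ and $\lambda$:

```latex
\begin{align*}
\kappa + \lambda &= \kappa \cdot \lambda = \max(\kappa, \lambda),\\
2^{\aleph_0} &= \mathfrak{c} = |\mathbb{R}|,\\
|\mathbb{N}| = |\mathbb{Z}| &= |\mathbb{Q}| = \aleph_0,\\
|\mathbb{R}| = |\mathbb{R}^n| &= |C(\mathbb{R},\mathbb{R})| = \mathfrak{c}.
\end{align*}
```

(The last equality holds because a continuous function is determined by its values on the rationals.)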
In addition, I included a discussion of infinite bases and integration from the standpoint of cardinality. These are topics that are of interest to me. Hopefully, others will find their mention useful as well.
If you detect any errors in my exposition, wish to offer suggestions for improvement, or know of any omitted references or proofs, I would be grateful for your comments.
Over the years, I’ve found delimited text files to be an easy way to store or output small amounts of data. Unlike SQL databases, XML, or a variety of other formats, they are human readable. Many of my applications and scripts generate these text tables, as do countless other applications. Often there is a header row and a couple of columns that would best be kept fixed while scrolling. One way to view such files is to pull them into a spreadsheet, parse them, and then split the screen. This is slow and clumsy, and updates are inconvenient to process.

Instead, I wanted an application like the unix utility ‘less’ but with an awareness of table columns. The main requirements were that it be lightweight (i.e. keep minimal content in memory and start quickly), parse a variety of text file formats, provide easy synchronized scrolling of columns and rows, and allow horizontal motion by columns. Strangely, no such utility existed. Even Emacs and vi don’t provide an easy solution.

So I wrote my own unix terminal application. I tried to keep the key mappings as true to “less” (and hence vi) as possible. The code is based on ncurses and fairly portable. The project is hosted on Google Code and is open source.
Have you ever wondered what really is meant by a “deciding vote” on the Supreme Court or a “swing State” in a presidential election? These terms are bandied about by the media, but their meaning isn’t obvious. After all, every vote is equal, isn’t it? I decided to explore this question back in 2004 during the election year media bombardment. What started as a simple inquiry quickly grew into a substantial project.

The result was an article on the subject, which I feel codifies the desired understanding. The paper contains a rigorous mathematical framework for block voting systems (such as the electoral college), a definition of “influence”, and a statistical analysis of the majority of elections through 2004.

The work is original, but not necessarily novel. Most if not all has probably been accomplished in the existing literature on voting theory. This said, it may be of interest to a technical individual interested in the subject. It is self-contained, complete, and written from the standpoint of a non-expert in the field. For those who wish to go further, my definition of “influence” is related to the concept of “voting power” in the literature (though I am unaware of any analogue to my statistical definition).
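For a taste of the formal side, here is a sketch of the classic Banzhaf power index from the voting-power literature, a relative of (though not identical to) the paper’s “influence”: a voter’s power is their share of all “swing” positions, coalition memberships where their defection flips the outcome.

```python
from itertools import combinations

def banzhaf(weights, quota):
    """Relative Banzhaf power of each voter in a weighted voting game."""
    n = len(weights)
    swings = [0] * n
    for r in range(n + 1):
        for coalition in combinations(range(n), r):
            total = sum(weights[i] for i in coalition)
            for i in coalition:
                # i swings if removing them drops the coalition below quota.
                if total >= quota and total - weights[i] < quota:
                    swings[i] += 1
    s = sum(swings)
    return [x / s for x in swings]

# A toy 3-state "electoral college" with weights 4, 2, 1 and quota 4:
power = banzhaf([4, 2, 1], 4)
# The weight-4 state swings every winning coalition; the others never do.
```

In this toy example the large state holds all the power despite controlling only 4 of 7 votes, which is exactly the sort of non-obvious effect behind terms like “swing State”.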
“Once upon a time there was a physicist. He was productive and happy and dwelt in a land filled with improbably proportioned and overly cheerful forest creatures. Then a great famine of funding occurred and the dark forces of string theory took power and he was cast forth into the wild as a heretic. There he fought megalomaniacs and bureaucracies and had many grand adventures that appear strangely inconsistent on close inspection. The hero that emerged has the substance of legend.”
But back to me. I experienced a similar situation as a young physicist, but in modern English and without the hero bit. However, once upon a time I DID write physics papers. This is their story…
My research was in an area called Renormalization Group theory (for those familiar with the subject, that’s the “momentum-space” RG of Quantum Field Theory, rather than the position-space version commonly employed in Statistical Mechanics – although the two are closely related).
In simple terms, one could describe the state of modern physics (then and now) as centering around two major theories: the Standard Model of particle physics, which describes the microscopic behavior of the electromagnetic, weak, and strong forces, and General Relativity, which describes the large scale behavior of gravity. These theories explain all applicable evidence to date, and no prediction they make has been excluded by observation (though almost all our effort has focused on a particular class of experiment, so this may not be as impressive as it seems). In this sense, they are complete and correct. However, they are unsatisfactory.

Their shortcomings are embodied in two of the major problems of modern physics (then and now): the origin of the Standard Model and a unification of Quantum Field Theory with General Relativity (Quantum Field Theory itself is the unification of Quantum Mechanics with Special Relativity). My focus was on the former problem.

The Standard Model is not philosophically satisfying. Besides the Higgs particle, which is a critical component but has yet to be discovered, there is a deeper issue. The Standard Model involves a large number of empirical inputs (about 21, depending on how you count them), such as the masses of leptons and quarks, various coupling constants, and so on. It also involves a specific non-trivial set of gauge groups, and doesn’t really unify the strong force and electro-weak force (which is a proper unification of the electromagnetic and weak forces). Instead, they’re just kind of slapped together. In this sense, it’s too arbitrary. We’d like to derive the entire thing from simple assumptions about the universe and maybe one energy scale. There have been various attempts at this. Our approach was to look for a “fixed point”.
By studying which theories remain consistent as we include higher and higher energies, we hoped to narrow the field from really really big to less really really big, where less really really big is 1. My thesis and papers were a first shot at this, using a simple version of Quantum Field Theory called scalar field theory (which, coincidentally, is useful in its own right, as the Higgs particle is a scalar particle). We came up with some interesting results before the aforementioned cataclysms led to my exile into finance.
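To convey the flavor of the fixed-point idea without the machinery of the papers: in the textbook one-loop approximation, the $\phi^4$ coupling $\lambda$ of scalar field theory runs with the energy scale $\mu$ according to

```latex
\mu \frac{d\lambda}{d\mu} \;=\; \beta(\lambda) \;=\; \frac{3\lambda^2}{16\pi^2} + O(\lambda^3),
```

and a fixed point is a coupling $\lambda^*$ with $\beta(\lambda^*) = 0$, at which the theory stops changing as the scale is raised. At one loop the only fixed point here is the trivial one, $\lambda^* = 0$; the interesting question is what survives beyond such approximations.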
Unfortunately, because of the vagaries of copyright law, I’m not allowed to include my actual papers. But I can include links. The papers were published in Physical Review D and Physical Review Letters. When you choose to build upon this Earth-shattering work, be sure to cite those. They also appeared on the LANL preprint server, which provides free access to their contents. Finally, my thesis itself is available. Anyone can view it, but only MIT community members can download or print it. Naturally, signed editions are worth well into 12 digits. So print and sign one right away.