2012-10-24

find or rule out a periodic universe (via structure)

Questions from Kilian Walsh (NYU) today reminded me of an old, abandoned idea: Look for evidence of a periodic universe (topological non-triviality) in the large-scale structure of galaxies. Papers by Starkman (CWRU) and collaborators (one of several examples is here) claim to rule out most interesting topologies using the CMB alone. I don't doubt these papers, but (a) they effectively make very strong predictions for the large-scale structure, and (b) if the CMB (or topology) theory is messed up, maybe the constraints are over-interpreted.

The idea would be to take pairs of finite patches of the observed large-scale structure and look to see if there are shifts, rotations, and linear amplifications (to account for growth and bias evolution) that make their long-wavelength (low-pass filtered) density fields match. Density field tracers include the LRGs, the Lyman-alpha forest, and quasars. You need to use (relatively) high-redshift tracers if you want to test conceivably relevant topologies.
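A minimal sketch of the matching step, in Python with numpy; everything here (function names, the cyclic-shift search, the synthetic patches in the usage below) is my invention, not part of any real pipeline. Low-pass filter two density patches, standardize both fields so a relative linear amplification drops out, and search over shifts with an FFT cross-correlation; rotations are left out to keep it short.

```python
import numpy as np

def low_pass(field, sigma_pix):
    """Gaussian low-pass filter of a cubic 3D density field, in Fourier space."""
    k = np.fft.fftfreq(field.shape[0])
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    window = np.exp(-0.5 * (kx**2 + ky**2 + kz**2) * (2.0 * np.pi * sigma_pix) ** 2)
    return np.fft.ifftn(np.fft.fftn(field) * window).real

def best_shift_correlation(patch_a, patch_b, sigma_pix=2.0):
    """Low-pass both patches, standardize them (so an overall linear
    amplification and mean offset drop out), and cross-correlate over
    all cyclic shifts. Returns (peak normalized correlation, shift)."""
    a = low_pass(patch_a, sigma_pix)
    b = low_pass(patch_b, sigma_pix)
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    # FFT-based cross-correlation over every integer shift at once
    xcorr = np.fft.ifftn(np.fft.fftn(a) * np.conj(np.fft.fftn(b))).real / a.size
    shift = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    return xcorr.max(), shift
```

A real search would also loop over rotations and over all pairs of patches drawn from the survey volume; this toy only handles translations and a single amplitude.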

Presumably all results would be negative; that's fine. But one nice side effect would be to find structures (for example, clusters of galaxies) residing in very similar environments, and by similar I mean in terms of full three-dimensional structure, not just mean density on some scale. That could be useful for testing the non-linear growth of structure.

2012-10-21

find LRG-LRG double redshifts

Vivi Tsalmantza and I have found many double redshifts in the SDSS spectroscopy (a few examples are published here, but we have many others) by modeling quasars and galaxies with a data-driven model and then fitting new data with a mixture of two things at different redshifts. We have found that finding such things is straightforward. We have also found that, among all galaxies, luminous red galaxies are the easiest to model (that's no breakthrough; it has been known for a long time).

Put these two ideas together and what have you got? An incredibly simple way to find double redshifts of massive galaxies in spectroscopy. And the objects you find would be interesting: Rarely have double redshifts been found without emission lines (LRG spectra are almost purely stellar, with no nebular lines), and because the LRGs sometimes host radio sources you might even get a Hubble-constant-measuring golden lens. For someone who knows what a spectrum is, this project is one week of coding and three weeks of CPU crushing. For someone who doesn't, it is a great learning project. If you get started, email me, because I would love to facilitate this one! I will happily provide consultation and CPU time.
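For concreteness, here is a toy Python version of the fitting step. The "template" below is a made-up stand-in for a real data-driven LRG model (the feature wavelengths and shapes are invented), and the brute-force grid over redshift pairs is the crude version of what a production run would do:

```python
import numpy as np

def template(lam_rest):
    """Toy 'LRG' template: a smooth continuum with two absorption dips.
    A stand-in for a data-driven model; the numbers are made up."""
    cont = 1.0 + 0.0001 * (lam_rest - 4000.0)
    dips = (0.3 * np.exp(-0.5 * ((lam_rest - 3950.0) / 8.0) ** 2)
            + 0.2 * np.exp(-0.5 * ((lam_rest - 4300.0) / 8.0) ** 2))
    return cont - dips

def fit_two_redshifts(lam_obs, flux, z_grid):
    """Model the observed flux as a linear mixture of the template at two
    redshifts; brute-force the (z1, z2) grid, solving for the two
    amplitudes by linear least squares at each grid point."""
    best = (np.inf, None, None)
    for i, z1 in enumerate(z_grid):
        for z2 in z_grid[i + 1:]:
            A = np.column_stack([template(lam_obs / (1.0 + z1)),
                                 template(lam_obs / (1.0 + z2))])
            amps, *_ = np.linalg.lstsq(A, flux, rcond=None)
            resid = flux - A @ amps
            chi2 = resid @ resid
            if chi2 < best[0]:
                best = (chi2, z1, z2)
    return best  # (chi-squared, z1, z2) with z1 < z2
```

A real run would use the actual data-driven model, inverse-variance weights, and a finer redshift grid, and would compare the two-redshift chi-squared against the single-redshift fit before declaring a candidate.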

2012-10-10

find or rule out ram pressure stripping in galaxy clusters

We know a lot about the scalar properties of galaxies as a function of clustocentric distance: Galaxies near cluster centers tend to be redder and older and more massive and more dense than galaxies far from cluster centers. We also know a lot about the tensor properties of galaxies as a function of clustocentric distance: Background galaxies tend to be tangentially sheared and galaxies in or near the cluster have some fairly well-studied but extremely weak alignment effects. What about vector properties?

Way back in the day, star NYU undergrad Alex Quintero (now at Scripps doing oceanography, I think) and I looked at the morphologies of galaxies as a function of clustocentric position, with the hopes of finding offsets between blue and red light (say) in the direction of the cluster center. These are generically predicted if ram-pressure stripping or any other pressure effects are acting in the cluster or infall-region environments. We developed some incredibly sensitive tests, found nothing, and failed to publish (yes I know, I know).
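The basic measurement is simple enough to sketch. Here is a hedged Python version, where the images and positions are synthetic and the two bands are stand-ins for whatever blue and red imaging you actually have (this is not the test Quintero and I developed, just the obvious centroid version):

```python
import numpy as np

def centroid(image):
    """Flux-weighted centroid (row, col) of an image, in pixels."""
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    total = image.sum()
    return np.array([(yy * image).sum() / total, (xx * image).sum() / total])

def radial_offset(blue, red, galaxy_pos, cluster_pos):
    """Blue-minus-red centroid offset, projected onto the unit vector
    pointing from the galaxy toward the cluster center (in pixels).
    Positive means the blue light leads toward the cluster; stacking this
    over many cluster members is the sensitive version of the test."""
    d = centroid(blue) - centroid(red)
    toward = np.asarray(cluster_pos, float) - np.asarray(galaxy_pos, float)
    toward /= np.linalg.norm(toward)
    return d @ toward
```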

This is worth finishing and publishing, and I would be happy to share all our secrets. It would also be worth doing some theory or simulations or interrogating some existing simulations to see more precisely what is expected. I think you can probably rule out ram-pressure stripping as a generic influence on cluster members, although maybe the simulations would say you don't expect a thing. By the way, offsets between 21-cm and optical are even more interesting, because they are seen in some cases, and are more directly relevant to the question. However, it is a bit harder to assemble the unbiased data you need to perform a sensitive experiment.

2012-10-09

cosmology with finite-range gravity

Although the Nobel Prize last year went for the accelerated expansion of the Universe, in fact acceleration is not a many-sigma result. What is a many-sigma result is that the expansion is not decelerating by as much as it should be given the mass density. This raises the question: Could gravity be weaker than expected on cosmological scales? Models with, say, an exponential cutoff of the gravitational force law at long distances are theoretically ugly (they are like massive-graviton theories and usually associated with various pathologies) but as empirical objects they are nice: A model with an exponentially suppressed force law at large distance is predictive and simple.

The idea is to compute the detailed expansion history and linear growth factor (for structure formation) for a homogeneous and isotropic universe and compare to existing data. By how much is this ruled out relative to a cosmological-constant model? The answer may be a lot but if it is only by a few sigma, then I think it would be an interesting straw-man. For one, it has the same number of free parameters (one length scale instead of one cosmological constant). For two, it would sharpen up the empirical basis for acceleration. For three, it would exercise an idea I would like to promote: Let's choose models on the joint basis of theoretical reasonableness and computability, not theoretical reasonableness alone! If we had spent the history of physics with theoretical niceness as our top priority, we would never have got the Bohr atom or quantum mechanics!
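As a toy version of the growth-factor half of this calculation, one can integrate the linear growth equation in ln a with a scale-dependent effective Newton constant. The suppression G_eff/G = (kL)^2 / (1 + (kL)^2) is the Fourier-space form of a Yukawa cutoff at length L, the background is taken to be Einstein-de Sitter, and the whole thing is my simplification for illustration, not the real calculation the entry calls for:

```python
import numpy as np

def growth_factor(k_times_L, n_steps=4000):
    """Linear growth factor today for a mode with dimensionless scale
    x = k * L, integrating  d2(delta) + 0.5 d(delta) = 1.5 mu delta
    in ln a on an Einstein-de Sitter background, where
    mu = G_eff/G = x^2 / (1 + x^2) is a toy Yukawa suppression.
    Normalized so that unsuppressed gravity (mu = 1) gives delta = a."""
    mu = k_times_L ** 2 / (1.0 + k_times_L ** 2)
    lna = np.linspace(np.log(1e-3), 0.0, n_steps)
    h = lna[1] - lna[0]
    # state y = (delta, d delta / d ln a), started on the EdS growing mode
    y = np.array([1e-3, 1e-3])

    def rhs(y):
        return np.array([y[1], 1.5 * mu * y[0] - 0.5 * y[1]])

    for _ in range(n_steps - 1):  # classic fixed-step RK4
        k1 = rhs(y)
        k2 = rhs(y + 0.5 * h * k1)
        k3 = rhs(y + 0.5 * h * k2)
        k4 = rhs(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y[0]
```

Modes much smaller than the cutoff scale (k L >> 1) grow as usual, while modes larger than the cutoff barely grow at all; the full project would do this with the real expansion history and compare against growth and distance data.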

One amusing note is that if gravity does cut off at large scales, then in the very distant future, the Universe will evolve into an inhomogeneous fractal. Fractal-like inhomogeneity is something I have argued against for the present-day Universe.

2012-10-06

cosmological simulation as deconvolution

After a talk by Matias Zaldarriaga (IAS) about making simulations faster, I had the following possibly stupid idea: It is possible to speed up simulations of cosmological structure formation by simulating not the full growth of structure, but just the departures away from a linear or quadratic approximation to that growth. As structure grows, smooth initial conditions condense into very high-resolution and informative structure. First observation: That growth looks like some kind of deconvolution. Second: The better you can approximate it with fast tools, the faster you can simulate (in principle) the departures or errors in the approximation. So let's fire up some machine learning!

The idea is to take the initial conditions, the result of linear perturbation theory, the result of second-order perturbation theory, and a full-up simulation, and try to infer each thing from the others (with some flexible model, like a huge, sparse linear model, or some mixture of linear models or somesuch). Train it up and see if we can beat other kinds of approximations in speed or accuracy. Then see if we can use it as a basis for speeding up full-precision simulations. Warning: If you don't do this carefully, you might end up learning something about gravitational collapse in the Universe! My advice, if you want to get started, is to ask Zaldarriaga for the inputs and outputs he used, because he is sitting on the ideal training sets for this, and may be willing to share.
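A cartoon of the machine-learning step, in Python: ridge-regress the residual between a "full" field and its linear-theory approximation onto cheap per-voxel features. Here everything is synthetic; the "simulation" in the usage below is a fake quadratic nonlinearity standing in for real training pairs like Zaldarriaga's, and the feature set is whatever I could fit on three lines:

```python
import numpy as np

def features(delta_lin):
    """Per-cell features from the 'linear theory' field: the field itself,
    its square, and a crude neighborhood mean (stand-ins for real inputs)."""
    smooth = (np.roll(delta_lin, 1) + delta_lin + np.roll(delta_lin, -1)) / 3.0
    return np.column_stack([delta_lin, delta_lin ** 2, smooth])

def train_residual_model(delta_lin, delta_full, alpha=1e-3):
    """Ridge regression of the residual (full field minus linear theory)
    onto the features; returns the coefficient vector."""
    X = features(delta_lin)
    y = delta_full - delta_lin
    XtX = X.T @ X + alpha * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ y)

def emulate(delta_lin, coeffs):
    """Fast approximation: linear theory plus the learned correction."""
    return delta_lin + features(delta_lin) @ coeffs
```

The real version would swap in second-order perturbation theory as the baseline, three-dimensional fields, and a far richer (sparse, mixture-of-linear) model, but the train-on-residuals structure is the same.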

2012-10-03

compare EM to other optimization algorithms

For many problems, the computer scientists tell us to use expectation maximization. For example, in fitting a distribution with a mixture of Gaussians, EM is the bee's knees, apparently. This surprises me, because the EM optimization is so slow and predictable; I am guessing that a more aggressive optimization might beat it. Of course, a more aggressive optimization might not be protected by the same guarantees as EM (which is super-stable, even in high dimensions). It would be a service to humanity to investigate this and report places where EM can be beaten. Of course this may all have been done; I would ask my local experts before embarking.
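For reference, plain EM for a two-component 1D Gaussian mixture fits in a few lines of Python; the comparison project would time something like this against, say, a quasi-Newton optimization of the same log-likelihood (all choices here, like the spread-out initialization and the log-likelihood stopping rule, are just one reasonable setup):

```python
import numpy as np

def em_two_gaussians(x, n_iter=200, tol=1e-8):
    """Plain EM for a two-component 1D Gaussian mixture.
    Returns (weights, means, stds, iterations used)."""
    w = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])  # crude spread-out initialization
    sig = np.array([x.std(), x.std()])
    prev_ll = -np.inf
    for it in range(1, n_iter + 1):
        # E step: per-point responsibilities under the current parameters
        pdf = (w / (sig * np.sqrt(2.0 * np.pi)) *
               np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2))
        ll = np.log(pdf.sum(axis=1)).sum()
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M step: responsibility-weighted parameter updates
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
        if ll - prev_ll < tol:  # monotone log-likelihood: EM's guarantee
            break
        prev_ll = ll
    return w, mu, sig, it
```

Counting iterations (and wall-clock time) of this against a more aggressive optimizer, across separations and dimensions, is exactly the experiment the entry proposes.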