
Friday, September 23, 2011

Blogging Update

I've been a very bad blogger recently. Basically this is because I've been spending all my time working on a project with people at Berkeley, so there hasn't been as much need to post, since I was mainly using this blog to communicate with Alexia about our project. Alexia, if you are reading this, I haven't given up on our project, and in fact much of the work I am doing now will be useful for us. However, considering that I want to graduate this year, I've been pushing forward on this new project, which is working and close to being publishable.

My collaborators at Berkeley have asked me to keep quiet about the new project's results until we publish, so I haven't been able to post plots etc. for it. However, I do find it helpful for organization and motivation to write on this blog, so I think I'll start writing on it again, if nothing else than to say what I did that day.

I've been working a lot on error analysis for my correlation functions. I have bootstrap/jackknife analysis in place and have been doing various fits to the correlation function to try to figure out the errors. It's strange: depending on which fitting method I use, I get wildly different errors... sometimes it seems like I am getting a statistically significant result (3+ sigma) and other times not.
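To give a flavor of the bootstrap piece, here is a minimal sketch (not my actual pipeline; r, xi_regions, and the simple power-law fit are stand-ins) of resampling per-region correlation functions and quoting a bootstrap error on a fitted amplitude:

import numpy as np

def bootstrap_amplitude(r, xi_regions, nboot=1000, seed=0):
    # xi_regions: one xi(r) measurement per jackknife region (nregions x nbins)
    rng = np.random.RandomState(seed)
    nregions = xi_regions.shape[0]
    amps = np.empty(nboot)
    for i in range(nboot):
        pick = rng.randint(0, nregions, nregions)   # resample regions with replacement
        xi_mean = xi_regions[pick].mean(axis=0)     # xi of this bootstrap realization
        good = xi_mean > 0
        slope, intercept = np.polyfit(np.log(r[good]), np.log(xi_mean[good]), 1)
        amps[i] = np.exp(intercept)                 # power-law amplitude for this realization
    return amps.mean(), amps.std()                  # bootstrap mean and 1-sigma error

The significance question is then just whether the fitted amplitude is several times its bootstrap scatter.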

I've also been much better at writing self-contained code that is easily re-runnable. I used to drive Alexia crazy with all the "cutting and pasting" I was doing. Well, no more. Now I am writing object-oriented programs with lots of functions, etc. It's actually really nice.

I've been learning a lot more about Python, using some cool fitting and statistics tools. Python is such a powerful language.
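For example, scipy's curve_fit hands back a covariance matrix that can be turned into parameter errors. A toy sketch (the power-law model and the made-up numbers are placeholders, not our data):

import numpy as np
from scipy.optimize import curve_fit

def powerlaw(r, r0, gamma):
    # standard power-law model for the correlation function
    return (r / r0) ** (-gamma)

# placeholder measurements: separations, xi, and errors on xi
r = np.array([2., 4., 8., 16., 32.])
xi = np.array([12.1, 3.5, 1.0, 0.29, 0.08])
xierr = 0.2 * xi

popt, pcov = curve_fit(powerlaw, r, xi, p0=[5.0, 1.8], sigma=xierr)
perr = np.sqrt(np.diag(pcov))   # 1-sigma errors on (r0, gamma)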

Tuesday, May 24, 2011

Reconstruction (Log Binning) Results

In a previous post I discussed trying different binnings to get the reconstruction to overlap more with the actual correlation functions.

Here is the binning I am using. The binning is log in theta (for 2d correlation function) and rlos (for 3d correlation function), but the binning in redshift is such that each bin has the same number of galaxies.

Here are the arrays:

rbins = array([ 4.592407, 136.86079 , 170.595585, 194.675063, 211.888514,
227.822058, 242.439338, 254.738113, 267.180894, 277.596667,
287.557931, 297.031654, 305.563449, 314.144417, 322.061375,
329.842934, 337.956897, 345.179652, 352.14308 , 359.026505,
365.954462, 372.052363, 378.261215, 384.175881, 389.837465,
395.409799, 400.562692, 405.823926, 411.006328, 416.036745,
420.853707, 425.608283, 430.068758, 434.598696, 438.968533,
443.201263, 447.527471, 451.469393, 455.440266, 459.405314,
463.13316 , 466.657868, 470.378975, 474.188462, 478.033728,
481.934477, 485.61191 , 489.252768, 492.81945 , 496.462536,
499.999607])

theta = array([ 1.12201850e-03, 1.41253750e-03, 1.77827940e-03,
2.23872110e-03, 2.81838290e-03, 3.54813390e-03,
4.46683590e-03, 5.62341330e-03, 7.07945780e-03,
8.91250940e-03, 1.12201845e-02, 1.41253754e-02,
1.77827941e-02, 2.23872114e-02, 2.81838293e-02,
3.54813389e-02, 4.46683592e-02, 5.62341325e-02,
7.07945784e-02, 8.91250938e-02, 1.12201845e-01,
1.41253754e-01, 1.77827941e-01, 2.23872114e-01,
2.81838293e-01, 3.54813389e-01, 4.46683592e-01,
5.62341325e-01, 7.07945784e-01, 8.91250938e-01,
1.12201845e+00, 1.41253754e+00, 1.77827941e+00,
2.23872114e+00, 2.81838293e+00, 3.54813389e+00,
4.46683592e+00, 5.62341325e+00, 7.07945784e+00,
8.91250938e+00])

rlos = array([ 0.52962686, 0.59425111, 0.66676072, 0.74811783,
0.83940201, 0.94182454, 1.05674452, 1.18568685,
1.33036253, 1.49269131, 1.6748272 , 1.87918702,
2.10848252, 2.36575629, 2.65442222, 2.97831072,
3.34171959, 3.74947105, 4.20697571, 4.72030438,
5.29626863, 5.94251114, 6.66760716, 7.48117828,
8.39402009, 9.41824545, 10.5674452 , 11.85686853,
13.3036253 , 14.92691309, 16.74827196, 18.79187021,
21.08482517, 23.65756295, 26.54422221, 29.78310718,
33.41719588, 37.49471047, 42.06975708, 47.20304381])
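For the record, bins like these can be generated with something along the lines of the following sketch (the endpoints are read off the arrays above; comoving_dist, standing in for the galaxies' comoving distances, is a placeholder):

import numpy as np

# log-spaced bins in theta (degrees) and r_los (Mpc/h)
theta = np.logspace(np.log10(1.122e-3), np.log10(8.9125), 40)
rlos = np.logspace(np.log10(0.5296), np.log10(47.203), 40)

# redshift bins with the same number of galaxies per bin: take evenly
# spaced entries of the sorted comoving distances as the bin edges
comoving_dist = np.random.uniform(0.0, 500.0, 100000)   # stand-in data
sorted_d = np.sort(comoving_dist)
edge_idx = np.linspace(0, len(sorted_d) - 1, 51).astype(int)
rbins = sorted_d[edge_idx]   # 51 edges -> 50 equal-count bins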

Here are plots of the 2D and 3D correlation functions:



I've found it hard to find a phibin spacing that doesn't go below the limits of the correlation function. I get the following error for a lot of binnings:

sximat=crl.getximat(th,rlos,phibins,rs,xispec,xierr)
0.000505647568659 is less than minimum r_ss 0.000529627
refusing to extrapolate

Perhaps I should allow it to extrapolate to smaller angles....
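If I do go that route, one simple option (a sketch, not the actual crl.getximat internals) is to interpolate xi in log-log space and extend it below the minimum tabulated separation as a power law set by the first two tabulated points:

import numpy as np

def xi_with_extrapolation(r, r_tab, xi_tab):
    # assumes xi_tab > 0 at the small-separation end
    logr_tab, logxi_tab = np.log(r_tab), np.log(xi_tab)
    slope = (logxi_tab[1] - logxi_tab[0]) / (logr_tab[1] - logr_tab[0])
    r = np.atleast_1d(r).astype(float)
    logxi = np.interp(np.log(r), logr_tab, logxi_tab)   # ordinary log-log interpolation
    low = r < r_tab[0]                                   # below the tabulated minimum...
    logxi[low] = logxi_tab[0] + slope * (np.log(r[low]) - logr_tab[0])   # ...extrapolate
    return np.exp(logxi)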

But the following phibins work:

phibins = array([ 0.01 , 0.03578947, 0.06157895, 0.08736842, 0.11315789,
0.13894737, 0.16473684, 0.19052632, 0.21631579, 0.24210526,
0.26789474, 0.29368421, 0.31947368, 0.34526316, 0.37105263,
0.39684211, 0.42263158, 0.44842105, 0.47421053, 0.5 ])

Here is what the xi and sximat plots look like; as you can see, they overlap more now:

But the reconstruction isn't so great:


If I change the binning (here with 50 bins):

phibins = array([ 0.00459241, 0.13686079, 0.17059558, 0.19467506, 0.21188851,
0.22782206, 0.24243934, 0.25473811, 0.26718089, 0.27759667,
0.28755793, 0.29703165, 0.30556345, 0.31414442, 0.32206137,
0.32984293, 0.3379569 , 0.34517965, 0.35214308, 0.3590265 ,
0.36595446, 0.37205236, 0.37826121, 0.38417588, 0.38983746,
0.3954098 , 0.40056269, 0.40582393, 0.41100633, 0.41603674,
0.42085371, 0.42560828, 0.43006876, 0.4345987 , 0.43896853,
0.44320126, 0.44752747, 0.45146939, 0.45544027, 0.45940531,
0.46313316, 0.46665787, 0.47037898, 0.47418846, 0.47803373,
0.48193448, 0.48561191, 0.48925277, 0.49281945, 0.49646254,
0.49999961])

Another issue is that now that I have so many theta, rlos, and r bins, crl.getCoeff takes a long time to run.


I suppose this looks better. At this point I need to either try a different binning or rewrite crl.getximat to allow extrapolation to smaller angles/separations so that I can try different reconstructions. Opinions, Alexia?

Tuesday, May 10, 2011

xmpl_rv, ximat and sximat

Alexia suggested that I look at xmpl_rv (ximat) and sximat to make sure that the interpolated xi matrix lands in the same place as the xi matrices from the data.

The results are surprising. First of all, the plot of ximat vs. sximat is basically a flat line when plotted in log space:

The above plot shows sximat and ximat for 3 different binnings (39, 40, and 41), and the curves basically all fall on top of each other. This is from the best "working" reconstruction mock data, described in an earlier post.

But they don't fall on top of the xi correlation functions from the data:


I thought maybe this was a conversion issue, but I checked and I think they are in the same units. To even get the cximat to fall on top of the xi correlation functions, you need to divide by 10^4:

Alexia, am I doing this right? Does this seem weird to you?

Different Reconstruction

Here is the reconstruction I ran last night with a different photometric distribution function; below are the photometric and spectroscopic distributions:

I used the same number of bins and angles as in yesterday's post. Here are the correlation functions:


And here are the reconstructions with several different binnings. None of them match as well as yesterday's post, but both 40 and 39 are close-ish:

I talked to Alexia this morning; she suggested that I do the binning sensitivity check and also look at how xmpl_rv (in getximat) compares to sximat. I'm doing that now. More posts to come shortly.

Here's the code to make the following (also in logs/110509log.py):
import numpy as N
from pylab import *
from matplotlib import *
import crlold as crl
from correlationData import *
import galsphere as gals
import datetime as dt

workingDir = 'run201159_123'

runConstantsFile = './'+workingDir+'/runConstants.dat'

mockLRGfile,photofile,specfile,corr2dfile,corr3dfile,photo2dfile,photoCDfile,spec2dfile,spec3dfile,\
argumentFile,runConstantsFile,wpsMatrixFile,xiMatrixFile,oversample,corrBins,mincorr,maxcorr,boxside,\
xicorrmin,xicorrmax,nxicorr,rbinmin,rbinmax,nrbins,psubsize,ssubsize=readRunConstantsFileAll(runConstantsFile)

rbinspace = int((rbinmax-rbinmin)/(nrbins))
rbins = makeRbins(rbinmin,rbinmax,rbinspace)


#read in masks
minRaP,maxRaP,minDecP,maxDecP,minCDP,maxCDP,minRaS,maxRaS,minDecS,maxDecS,minCDS,maxCDS=readmasks(workingDir)


#Import the photometric redshift distribution
photoCD = readPhotoCDfile(photoCDfile)


#Create a matrix to input the correlation function data
# Write wps Matrix to file
#The number of redshift bins you want to skip for the
#reconstruction (this removes noisy bins from calculation)
skip = 0
makeWPSmatrix2(workingDir,rbins,corrBins, wpsMatrixFile, skip)

#Create a matrix to input the correlation function data
# Write Xi Matrix to file
makeXimatrix2(workingDir,rbins,nxicorr,xiMatrixFile, skip)

#Plot wps correlation functions
numtoplot = int(nrbins-skip) #number of correlation functions you want to display on plot
plotWPS(skip, numtoplot, workingDir, rbins)
numtoplot = 10
plotWPSScreen(skip, numtoplot, workingDir, rbins)

#Plot xi correlation functions
plotXi(skip, numtoplot, workingDir, rbins)
plotXiScreen(skip, numtoplot, workingDir, rbins)


#from commandsAlexia import *
xissfile = xiMatrixFile
wpsfile = wpsMatrixFile
nphibins = size(rbins)

# Read in the data
#spectroscopic autocorrelation function
X=N.loadtxt(xissfile,comments='#')
X=X.T
rs=X[0,:]/1000. #want box units (Gpc, not Mpc)
xispec=X[1:,:] #xi in nspecbins


#2D cross correlation between spec and photo samples
th,rlos,wps=N.loadtxt(wpsfile,comments='#',unpack=True)
th=N.unique(th)
rlos=N.unique(rlos)
#rlos=getcomovingD(rlos) #adjusts from redshift to Mpc/h
rlos=rlos/1000 #want box units (Gpc, not Mpc)

#covariance matrix of wps measurements, wps vector arranged
covar = N.identity(wps.shape[0],float)
xierr = N.identity(xispec.shape[1],float)

# compute the xi matrix using the spec sample,
npbins=39#the number of bins you want reconstructed
reconstructmax = 0.5 #in Gpc
reconstructionmin = 0.0 #in Gpc
phibsize = (reconstructmax - reconstructionmin)/npbins

phibins = linspace(start=reconstructionmin,stop=reconstructmax,num=npbins,endpoint=True)

#phibins =(linspace(2,npbins,num=npbins,endpoint=False)+0.5)*(1./npbins)
sximat=crl.getximat(th,rlos,phibins,rs,xispec,xierr)

# invert to recover phi
phircvd=crl.getCoeff(wps,covar,sximat)
for i in range(len(phircvd.x)):
    if (phircvd.x[i] < 0):
        phircvd.x[i] = 0

print 'the recovered selection function is not normalized.'

outphi4 = phircvd.x
phibins4 = phibins

start = 1
end = npbins - 1

#junkbins = linspace(start=0,stop=0.5,num=100,endpoint=True)
junkbins = phibins
junk = histogram(photoCD/1000., bins=junkbins)
junkdata = junk[0]
plot(junkbins[start:end], junkdata[start:end], 'r.-', label = 'photometric data')
plot((phibins4[start:end]+1.0*phibsize), outphi4[start:end]*sum(junkdata[start:end])/sum(outphi4[start:end]), 'g-', label='reconstruction %i'%npbins)

xlabel('comoving distance (Gpc/h)')
ylabel('number of objects per bin')
title('Redshift Distribution')
from pylab import *
legend(loc=2)


Monday, March 21, 2011

Comparing Correlation Functions

I've been trying to figure out why my 2D correlation functions look weird. I've put out a request to several Berkeley folks to compare my correlation function results on a data set with theirs, but while I wait for them to respond, I'm going to compare to some results from the literature.

For the 2D angular correlation function of SDSS LRG galaxies, I have the following plots from Connolly et al. (2001):



And from Ross et al. (2007):


I changed the angular bins of my correlation function to match theirs so I can compare. Here is my 2D correlation function on BOSS data:

As you can see, the slope of my correlation function is much shallower than in the literature.

The 3D correlation functions are a better match:
From Ross et al. (2007):

And here is mine:

At least the slopes seem to be comparable.

I am currently running the 3D correlation function over a larger los range to try to compare to Kazin et al.:

This will take a bit because I am correlating out to such large scales (180 Mpc/h).

Another thing I noticed that I wanted to talk to Alexia about is the actual computation of the correlation function in the code. Right now it is the following:

xi = (dd - dr - rd + rr)/rr

where

dd = # of data-data pairs in bin
dr = (# of data-random pairs in bin) / (Random Oversampling)
rd = (# of random-data pairs in bin) / (Random Oversampling)
rr = (# of random-random pairs in bin) / (Random Oversampling)

I am wondering if the rr should be divided by (Random Oversampling)^2, because there are that many more points in the RR set. I haven't changed this code from when I first got it from Alexia, so I assume it is correct. But I was looking at the Landy & Szalay paper, and it seemed like they were using factors of n(n-1)/2, which is approximately n^2/2, while we are using factors of n.
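For comparison, here is the textbook Landy & Szalay combination with each pair count normalized by its total number of pairs, which is where the n(n-1)/2 factors come in (a sketch, not the code I got from Alexia):

def landy_szalay(dd, dr, rr, nd, nr):
    # dd, dr, rr: raw pair counts in a bin; nd, nr: numbers of data and random points
    dd_norm = dd / (nd * (nd - 1) / 2.0)   # ~ nd^2/2 distinct data-data pairs
    dr_norm = dr / (float(nd) * nr)        # nd*nr data-random pairs
    rr_norm = rr / (nr * (nr - 1) / 2.0)   # ~ nr^2/2 distinct random-random pairs
    return (dd_norm - 2.0 * dr_norm + rr_norm) / rr_norm

Writing it this way, if the random catalog is (oversampling) times bigger than the data, the dr piece effectively picks up one factor of the oversampling and the rr piece picks up two, which seems to back up the (Random Oversampling)^2 suspicion.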

Thoughts?

3/21 5:15 Update: Here is my plot of the 3D correlation function over the large los range. It looks very similar to the Kazin plot above; you can even see the BAO signal at ~100 Mpc/h:

Monday, February 28, 2011

BOSS Mask Problem?

Below is an email I sent to Martin White today after Shirley, Nic and David couldn't figure out what I was doing wrong with my masks/randoms:

~~~~~~~~
Martin,
Back in December you generated a polygon mask for me for the BOSS data. I've been using this mask + BOSS data for my reconstruction project with Alexia and have gotten puzzling results when I compute correlation functions on the data.

This prompted me to look more closely at the way I was applying the mask to make sure I wasn't making a stupid mistake.

The BOSS data inside the mask (with weights greater than 0) looks very patchy up close, and both Ross and Schlegel think it shouldn't look like this.

I was wondering if you could look at the attached plots and see if you think the mask is being applied correctly.

MaskedBOSSDataAll.png shows the entire footprint. The blue points are galaxies in the spAll file that were outside the mask or have a weight = 0. The red points are galaxies in the spAll file that were inside the mask and have a weight GT 0.


MaskedBOSSDataClose.png shows a zoom into the region (35 < Dec < 55, 110 < RA < 130). The color scheme is the same as above. This shows the strange "patchy" nature of the masked data even in regions which seem to have been observed. I tried to use the same ra/dec range as Fig 3 from your paper (http://arxiv.org/PS_cache/arxiv/pdf/1010/1010.4915v2.pdf) for easy comparison.


One difference between my data and that in your plot is that the weights (completeness) of the red objects in my plots range from [0.75 to 1.0], whereas in your paper's plot they seem to range from [0 to 1.0]. Perhaps this is how you determined what is in the mask, if it had a spectroscopic completeness greater than 0.75?

If you have any insight into what I am doing wrong, that would be really helpful. Below is the IDL code I am using to apply the masks and generate the attached plots.

Thank you,
Jessica

;Here is the code I used to apply the mask:
;Read in BOSS data
bossgalaxies = '/clusterfs/riemann/raid001/jessica/boss/spAll-v5_4_14.fits'
spall = mrdfits(bossgalaxies, 1)

;Trim to be a galaxy
isgal = where(spall.specprimary EQ 1 $
AND (spall.boss_target1 AND 2L^0+2L^1+2L^2+2L^3+2L^7) NE 0 $
AND strmatch(spall.class,'GALAXY*') $
AND spall.zwarning EQ 0)

;Select galaxies
galaxies = spall[isgal]
ra = galaxies.ra
dec = galaxies.dec
z = galaxies.z
spall = 0

bossmask = "/clusterfs/riemann/raid001/jessica/boss/bossX.002.ply"
read_mangle_polygons, bossmask, polygons, id
results = is_in_window(ra=ra,dec=dec,polygons,in_polygon=polynum)
inBossMask = where(results GT 0)
polys = polynum[inBossMask]
bossWeight = polygons[polys].weight
;Cut the Ra/Dec/z such that it is in the Shirley Mask
sra = ra[inBossMask]
sdec = dec[inBossMask]
sz = z[inBossMask]

;Here is the code I used to make the attached plots
;Plot Data that is inside the mask
window,xsize=700,ysize=600
xobject = sra
yobject = sdec
xtit = 'RA'
ytit = 'Dec'
mtit = 'SDSS BOSS Data w/ and w/o Mask'
plot, xobject, yobject, psym=3, symsize=2, xrange = [130,110], yrange=[35,55], XTITLE = xtit, YTITLE = ytit, TITLE = mtit, charsize = 2, charthick = 1, thick = 2, xthick=2, ythick=2
oplot, ra, dec, ps=3, color=fsc_color('blue')
thisweight = where(bossWeight GT 0)
oplot, xobject[thisWeight], yobject[thisWeight], ps=3, color=fsc_color('red')


plot, xobject, yobject, psym=3, symsize=2, xrange = [0,360], yrange=[0,60], XTITLE = xtit, YTITLE = ytit, TITLE = mtit, charsize = 2, charthick = 1, thick = 2, xthick=2, ythick=2
oplot, ra, dec, ps=3, color=fsc_color('blue')
thisweight = where(bossWeight GT 0)
oplot, xobject[thisWeight], yobject[thisWeight], ps=3, color=fsc_color('red')

Thursday, February 24, 2011

Figuring out Mask Problems

I re-made the random catalogs without removing the duplicates. David thought that perhaps this was causing problems with the randoms not matching the catalogs or the correlation functions not working properly.

Below are a bunch of histograms of the distributions of the data (white) and randoms (green) for the two methods I have tried for making randoms: mine and Shirley's. You'll notice that the spectroscopic randoms mismatch the spectroscopic data much more than the photometric data/randoms do (for both methods). This makes me think there is perhaps a problem with the mask I am using for the spectroscopic set. Martin White made this mask for me. I've asked for a meeting with David this afternoon to look at this in more detail.

My randoms



Shirley's Randoms



I've tried zooming in on regions in the data where the mismatch is greatest, for instance (dec > 40, 110 < RA < 130). I've posted these on the blog too.

It does look like there are regions where there are data but no randoms, which would suggest a problem with the mask. (Note that the regular spacing of the randoms is because of how Shirley generates randoms, by putting an object at the center of each grid cell.) I plotted these because with the regular spacing it is easier to see that regions have been missed. For instance, at RA ~129.4, Dec ~49.4, there are several data points (red) but no randoms (blue).








Friday, December 3, 2010

BOSS Galaxies

Spent today getting the BOSS Galaxies in the correct form to use as my spectroscopic data set. Here's what I did:

1) Downloaded the latest spAll file to ~/boss/ from:
/clusterfs/riemann/raid006/bosswork/groups/boss/spectro/redux/spAll-v5_4_14.fits

2) In IDL, trim the data to be galaxies in stripe 82:
bossgalaxies = '/home/jessica/boss/spAll-v5_4_14.fits'
spall = mrdfits(bossgalaxies, 1)

stripe82gal = where(((spall.ra LE 60.0) OR (spall.ra GE 300.)) $
and ((spall.dec GE -1.25) and (spall.dec LE 1.25)) $
AND spall.specprimary EQ 1 $
AND (spall.boss_target1 AND 2L^0+2L^1+2L^2+2L^3+2L^7) NE 0 $
AND strmatch(spall.class,'GALAXY*') $
AND spall.zwarning EQ 0)

3) Write these galaxies out to a column-delimited file that python can read:
thisfile = '/home/jessica/boss/BOSSstripe82RaDec.dat'
writecol, thisfile, spall[stripe82gal].ra, spall[stripe82gal].dec, spall[stripe82gal].z

4) Do the same to Shirley Ho's photometric LRG file:
shirleydata = "/home/shirleyho/research/SDSS_PK/powerspec_dir/DATA/ra_dec_z_run_rerun_camcol.z_gt_0.01"
readcol, shirleydata, RA, DEC, z, RUN, rerun, camcol, x, x, x, x, x, x, x, x, format='(F,F,F,D,D,D,D,D,F,F,F,F,F,F)'
datasize = n_elements(ra)

;Data just within stripe 82
stripe82all = where(((ra LE 60.0) OR (ra GT 305.)) $
and ((dec GE -1.25) and (dec LE 1.25)))

; Write out ra, dec, redshift of stripe 82 Shirley objects
thisfile = '/home/jessica/boss/ShirlyDataRaDec.dat'
writecol, thisfile, ra[stripe82all], dec[stripe82all], z[stripe82all]

If I want to use the whole BOSS Galaxy footprint I have a mask here from Martin White:
/home/jessica/boss/bossX.002.ply

You can read this file like so:

infile = "./bossX.002.ply"
read_mangle_polygons, infile, polygons, id

There are more functions in idlutils for playing with the mask.

Now I am trying to do the reconstruction on these two data sets, using BOSS as the spectroscopic set and Shirley's LRG galaxies as the photometric set.

Here are some plots of the two sets of data:








The log file to make these data files and plots is here:
../logs/101207log.pro
../logs/101207log.py

Thursday, December 2, 2010

At Los Alamos

I am visiting Alexia at Los Alamos National Lab for the next three weeks. Los Alamos is beautiful, but COLD.

I'm putting the final touches on the data-correlation function setup.

I noticed that in the 3D auto-correlation, I am producing two different random catalogs and using the following calculation:

(dd - dr1 - r2d + r1r2)/(r1r2)

This might be more computation than necessary if it is OK for r1 = r2... but I forget whether they need to be different... perhaps they do need to be different in order for this to work. It seems like this is something I would have thought about before.

I talked to Alexia about this and she thinks it shouldn't make a difference either way. So I am going to set r1 = r2 and see what happens....

Old Autocorrelation Function results (without randoms fed in)
# r (Mpc/h) cross xi (r)
1.0009750000 113.0999999998
3.0009250000 4.3312368973
5.0008750000 0.1890557940
7.0008250000 0.3768587361
9.0007750000 0.4359240581
11.0007250000 0.1349752029
13.0006750000 0.1273595140
15.0006250000 -0.1351841518
17.0005750000 0.2378351539
19.0005250000 0.2994336370
21.0004750000 -0.0479553597
23.0004250000 -0.0998846930
25.0003750000 -0.1438266200
27.0003250000 -0.3070417433
29.0002750000 -0.3108741254
31.0002250000 -0.1177777522
33.0001750000 -0.0633376228
35.0001250000 -0.2604527820
37.0000750000 -0.2517495550
39.0000250000 -0.0304629134


New Autocorrelation Function results (with randoms fed in)
I think the small differences are rounding errors.
# Reached checkpoint 4
# r (Mpc/h) cross xi (r)
1.0009750000 113.0999999998
3.0009250000 4.3294918806
5.0008750000 0.1890963726
7.0008250000 0.3766263941
9.0007750000 0.4359240581
11.0007250000 0.1349752029
13.0006750000 0.1273650408
15.0006250000 -0.1355281973
17.0005750000 0.2380827199
19.0005250000 0.2995001354
21.0004750000 -0.0479749624
23.0004250000 -0.0998864926
25.0003750000 -0.1437953465
27.0003250000 -0.3070974753
29.0002750000 -0.3107744869
31.0002250000 -0.1177689518
33.0001750000 -0.0632545877
35.0001250000 -0.2605975485
37.0000750000 -0.2517029624
39.0000250000 -0.0304363094

New Autocorrelation Function with r1 = r2 (with randoms fed in)
There are pretty big differences between these two, but this could just be because my data set is small. I should check on a larger set.
# Reached checkpoint 4
# r (Mpc/h) cross xi (r)
1.0009750000 120.9999999998
3.0009250000 3.9579487179
5.0008750000 0.1234024722
7.0008250000 0.3826803206
9.0007750000 0.4731333382
11.0007250000 0.0871628790
13.0006750000 0.0808080808
15.0006250000 -0.1168749344
17.0005750000 0.2334021835
19.0005250000 0.3148748159
21.0004750000 -0.0339674194
23.0004250000 -0.0762365727
25.0003750000 -0.1365716955
27.0003250000 -0.2876355988
29.0002750000 -0.3209356312
31.0002250000 -0.1164599148
33.0001750000 -0.0517181534
35.0001250000 -0.2402264337
37.0000750000 -0.2732306595
39.0000250000 -0.0168706759

Thursday, November 11, 2010

Modifying 3D Correlation Function

I'm changing the 3D correlation function code to accept data and randoms (similar to what I just did for the 2D correlation function code).

This is proving to be a little tricky because in the current version I do some tricks to make sure the periodic boundary conditions don't add power incorrectly, and I also rescale the data because the correlation function expects it to be in a 1x1x1 box.

After talking with Alexia, I've decided to do these conversions outside of the correlation function in python, such that the code expects the data to be in a 1x1x1 box.
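Roughly what I have in mind for the python-side conversion (a minimal sketch; the 500 Mpc/h box side and the array layout are assumptions):

import numpy as np

def to_unit_box(xyz, boxside):
    # rescale comoving coordinates (Mpc/h) into the 1x1x1 box the
    # correlation function code expects
    xyz = np.asarray(xyz, dtype=float)
    xyz = xyz - xyz.min(axis=0)        # shift so the data start at the origin
    return xyz / float(boxside)        # scale by the box side length

# example with stand-in positions in a 500 Mpc/h box
positions = np.random.uniform(0.0, 500.0, size=(1000, 3))
unit_positions = to_unit_box(positions, 500.0)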

Results to come soon.....

Monday, October 18, 2010

Next steps...

I talked to Shirley and Alexia, and they both agree that the procedure outlined in my last post makes sense. Shirley told me that the weights in her file don't mean anything, so I should just treat them as a step function. I need to talk to David to make sure he agrees with all this.

I'm thinking I should postpone my trip to Los Alamos until December. I'd like to have the data in better shape before I get there, and the flights are cheaper.

Steps:
  1. Talk to David and make sure he agrees with these steps. Also talk about funding for trip.
  2. Generate randoms as laid out in last post.
  3. Book tickets for December.
  4. Keep looking for temp housing in Los Alamos.

Monday, September 27, 2010

Reconstruction on LRGs

I talked to Alexia today and we decided that I should get the BOSS data together and run correlation functions on it, rather than continuing to debug the reconstruction binning. I'm a little frustrated with that problem, and it would be nice to work on something new. I know that both Shirley Ho and Eyal Kazin have LRG catalogs, so I am contacting both of them to see if they are available for me to use.

Also planning to visit Alexia in late October/ early November. Trying to find cheap housing in Los Alamos, if anyone knows of anything out there...

Friday, September 24, 2010

Bad Blogger

To the 3 people who actually read my blog (Alexia, Mom, and Dad), I know I have been a really bad blogger lately. I've been writing a paper, and didn't have much to post about that... and then I was in Paris for an SDSS Collaboration meeting.

Anyway, the Fall 2010 semester has started (how did that happen?!?) and I plan to get back to regular posting. I also am hoping to work exclusively on the reconstruction project now that the likelihood paper is (almost) done.

Apologies again. Here are some plots from the paper for your entertainment (the colors look a little less weird in black on white, but I inverted the images to keep with the blog color scheme):