Friday, December 4, 2009

Likelihood Optimization

I've been playing with various parameters in the likelihood method, trying to find the most efficient cuts. The three things I have been varying are:

added errors (what to use as the adderr input to likelihood_compute; a rough sketch of the scan is below this list)
everything - variability (what happens when we remove the variable objects from the L_everything file)
QSO + variability (what happens when we add the variable objects to the L_QSO likelihoods)
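
For reference, the added-error scan itself is just a loop over scalings of the baseline adderr vector. Here is a minimal Python sketch of that loop; run_trial is a stand-in for whatever wraps likelihood_compute and returns the targeting efficiency, not a real function in the code:

    import numpy as np

    # Baseline added errors for the five flux bands (the adderr input).
    BASE_ADDERR = np.array([0.014, 0.01, 0.01, 0.01, 0.014])

    def scan_adderr(run_trial, scales=(0.0, 1.0, 5.0, 7.0)):
        """Run the targeting for several adderr scalings.

        run_trial(adderr) is a placeholder for the real pipeline call; it
        should run the likelihood targeting with the given added errors and
        return the quasar fraction of the resulting targets."""
        return {s: run_trial(s * BASE_ADDERR) for s in scales}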

Here are my findings (these are on the co-added fluxes; the next step is to re-run with single-epoch fluxes). In each case the number quoted is the fraction of targeted objects that are quasars, based on targeting 40 objects per square degree:

Normal Errors (adderr = [0.014, 0.01, 0.01, 0.01, 0.014])
No variability: 0.649914
Variability everything: 0.656196
Variability everything + QSO: 0.651057

5X Errors (adderr = 5*[0.014, 0.01, 0.01, 0.01, 0.014])
No variability: 0.641348
Variability everything: 0.645346
Variability everything + QSO: 0.603655

7X Errors (adderr = 7*[0.014, 0.01, 0.01, 0.01, 0.014])
No variability: 0.627641
Variability everything: 0.624786
Variability everything + QSO: 0.572244

No Errors (adderr = 0.0*[0.014, 0.01, 0.01, 0.01, 0.014])
No variability: 0.644203
Variability everything: 0.627070
Variability everything + QSO: 0.619075
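
To make that metric concrete, here is a minimal sketch of how an efficiency number like this can be computed, assuming the targets are simply the objects with the highest likelihood ratio down to a density of 40 per square degree (the function and array names here are mine, not from the pipeline):

    import numpy as np

    def targeting_efficiency(l_ratio, is_qso, area_deg2, density=40.0):
        """Rank objects by likelihood ratio, keep the top density*area of
        them as targets, and return the fraction that are known quasars.

        l_ratio : array of likelihood ratios, one per object
        is_qso  : boolean array, True where the object is a confirmed quasar
        """
        n_targets = int(round(density * area_deg2))
        order = np.argsort(l_ratio)[::-1]   # highest likelihood ratio first
        selected = order[:n_targets]
        return float(np.mean(is_qso[selected]))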

It looks like the errors we were already running with (the normal adderr values) give the best numbers, and that including variability in L_everything helps, while adding variability to L_QSO does not.

I am going to play more with the definitions of variable everything and variable QSO to see if I can get these to work better. I also want to try replacing the fixed cut at L_ratio = 0.01 with a cut that changes as a function of L_QSO (it seems we might be able to pick up a few more objects if the L_ratio cut decreases as L_QSO gets large).
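
As a starting point for that sliding cut, something like the following could stand in for the fixed L_ratio > 0.01 threshold; the functional form and all of the numbers (floor, pivot) are placeholders I would still need to tune, not anything decided:

    import numpy as np

    def select_targets(l_ratio, l_qso, base_cut=0.01, floor=0.001, pivot=1e-6):
        """Keep objects whose likelihood ratio exceeds a threshold that is
        base_cut for small L_QSO and relaxes toward `floor` as L_QSO grows.
        base_cut, floor, and pivot are all placeholder values to be tuned."""
        # Threshold scale factor: 1 below the pivot, falling like pivot/L_QSO
        # above it, never dropping below floor/base_cut.
        scale = np.clip(pivot / np.maximum(l_qso, 1e-300), floor / base_cut, 1.0)
        return l_ratio > base_cut * scale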
